WorldWideScience

Sample records for two-stage sampling design

  1. Precision and cost considerations for two-stage sampling in a panelized forest inventory design.

    Science.gov (United States)

    Westfall, James A; Lister, Andrew J; Scott, Charles T

    2016-01-01

    Due to the relatively high cost of measuring sample plots in forest inventories, considerable attention is given to sampling and plot designs during the forest inventory planning phase. A two-stage design can be efficient from a field work perspective as spatially proximate plots are grouped into work zones. A comparison between subsampling with units of unequal size (SUUS) and a simple random sample (SRS) design in a panelized framework assessed the statistical and economic implications of using the SUUS design for a case study in the Northeastern USA. The sampling errors for estimates of forest land area and biomass were approximately 1.5-2.2 times larger with SUUS prior to completion of the inventory cycle. Considerable sampling error reductions were realized by using the zones within a post-stratified sampling paradigm; however, post-stratification of plots in the SRS design always provided smaller sampling errors in comparison. Cost differences between the two designs indicated the SUUS design could reduce the field work expense by 2-7 %. The results also suggest the SUUS design may provide substantial economic advantage for tropical forest inventories, where remote areas, poor access, and lower wages are typically encountered.

  2. The role of the upper sample size limit in two-stage bioequivalence designs.

    Science.gov (United States)

    Karalis, Vangelis

    2013-11-01

Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD in which sample size re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, starting numbers of subjects, and GMR. The use of UL leads to no inflation of type I error. As UL values increase, the probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD, which are more pronounced for highly variable drugs.
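
A minimal Monte Carlo sketch of this kind of study (an illustration, not the author's exact algorithm): a paired-design TOST at each stage, with the total sample size re-estimated from the observed stage-I variability and GMR and capped at an assumed upper limit UL. The n1, sigma, and UL values are made up; normal quantiles replace t quantiles for brevity, and the pooled stage-II analysis makes no type I error adjustment.

```python
import math
import random
from statistics import NormalDist, mean, stdev

Z = NormalDist().inv_cdf

def be_shown(diffs, alpha=0.05):
    # Two one-sided tests via a (1 - 2*alpha) CI on the mean log-ratio;
    # normal quantile used instead of t for brevity.
    n = len(diffs)
    m, se = mean(diffs), stdev(diffs) / math.sqrt(n)
    z = Z(1 - alpha)
    return math.log(0.8) < m - z * se and m + z * se < math.log(1.25)

def two_stage_trial(rng, true_gmr, sigma, n1, ul, target_power=0.8, alpha=0.05):
    draw = lambda n: [rng.gauss(math.log(true_gmr), sigma) for _ in range(n)]
    stage1 = draw(n1)
    if be_shown(stage1, alpha):
        return True
    # Re-estimate the total n from the observed variability AND the observed
    # GMR (the design feature discussed above), capped at the upper limit UL.
    m, s = abs(mean(stage1)), stdev(stage1)
    delta = math.log(1.25) - m
    if delta <= 0:
        return False
    n_total = math.ceil(((Z(1 - alpha) + Z(target_power)) * s / delta) ** 2)
    n_total = min(max(n_total, n1 + 2), ul)
    data = stage1 + draw(n_total - n1)
    return be_shown(data, alpha)  # naive pooling; no alpha adjustment here

rng = random.Random(1)
# Type I error: simulate with the true GMR at the 1.25 boundary.
trials = 2000
rate = sum(two_stage_trial(rng, 1.25, 0.3, 12, ul=60) for _ in range(trials)) / trials
```

Running this at the BE boundary estimates the realized type I error of the naive pooled analysis, which is the quantity the UL is designed to keep in check.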

  3. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km²-sized river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design was applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from the density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points.
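
The stratify-then-select logic described above can be sketched as follows. This is a toy version under stated assumptions: square grid cells stand in for real polygons, a bounding box stands in for true polygon geometry, and tertile classes of the two terrain attributes crossed with a geology code stand in for the paper's 30 strata.

```python
import random
from collections import defaultdict
from statistics import quantiles

def quantile_class(value, cutpoints):
    # Index of the quantile class the value falls into.
    return sum(value > c for c in cutpoints)

def stratified_two_stage(cells, rng, n_primary=6, pts_per_primary=1, min_area=1.0):
    """Stage 1: randomly pick up to n_primary polygons (toy 'cells' here)
    per stratum, excluding those below min_area; stage 2: random points
    inside each selected polygon's bounding box."""
    twi_cuts = quantiles([c["twi"] for c in cells], n=3)
    rad_cuts = quantiles([c["rad"] for c in cells], n=3)
    strata = defaultdict(list)
    for c in cells:
        if c["area"] < min_area:  # exclusion rule, as in the survey design
            continue
        key = (quantile_class(c["twi"], twi_cuts),
               quantile_class(c["rad"], rad_cuts), c["geo"])
        strata[key].append(c)
    points = []
    for key in sorted(strata):
        for c in rng.sample(strata[key], min(n_primary, len(strata[key]))):
            x0, y0, x1, y1 = c["bbox"]
            for _ in range(pts_per_primary):
                points.append((rng.uniform(x0, x1), rng.uniform(y0, y1), key))
    return points

rng = random.Random(7)
# Synthetic cells: random terrain attributes, geology class, area, bounding box.
cells = [{"twi": rng.random(), "rad": rng.random(), "geo": rng.choice("ABCD"),
          "area": rng.uniform(0.5, 5.0),
          "bbox": (i % 10, i // 10, i % 10 + 1, i // 10 + 1)}
         for i in range(100)]
sample_points = stratified_two_stage(cells, rng)
```

A real implementation would replace the bounding-box draw with point-in-polygon rejection sampling and add the road/building buffer mask.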

  4. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which treats the treatment effect as a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, adjusting the sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine.
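
A schematic two-arm version of the predictive idea (an illustration only, not Whitehead et al.'s exact criterion): flat Beta(1, 1) priors, a made-up 0.95 posterior-probability threshold for a "conclusive" result, and a grid search for the smallest total per-arm sample size whose predictive probability of conclusiveness reaches an assurance level.

```python
import random
from statistics import mean

def post_prob_superiority(xe, ne, xc, nc, rng, draws=2000):
    # P(p_E > p_C | data) under independent Beta(1, 1) priors, by simulation.
    return mean(rng.betavariate(1 + xe, 1 + ne - xe) >
                rng.betavariate(1 + xc, 1 + nc - xc) for _ in range(draws))

def reestimate_n(xe, ne, xc, nc, rng, n_grid, crit=0.95, assurance=0.8, sims=200):
    """Smallest total per-arm sample size in n_grid whose predictive
    probability of a conclusive final analysis reaches `assurance`."""
    for n_tot in n_grid:
        hits = 0
        for _ in range(sims):
            pe = rng.betavariate(1 + xe, 1 + ne - xe)  # posterior draw
            pc = rng.betavariate(1 + xc, 1 + nc - xc)
            # Posterior-predictive completion of both arms to n_tot subjects.
            fxe = xe + sum(rng.random() < pe for _ in range(n_tot - ne))
            fxc = xc + sum(rng.random() < pc for _ in range(n_tot - nc))
            if post_prob_superiority(fxe, n_tot, fxc, n_tot, rng, draws=300) > crit:
                hits += 1
        if hits / sims >= assurance:
            return n_tot
    return n_grid[-1]

rng = random.Random(11)
# Hypothetical interim data: 14/20 responders on experimental vs 8/20 on control.
n_final = reestimate_n(14, 20, 8, 20, rng, n_grid=[40, 60, 80])
```

The same loop, run separately on covariate-defined subgroups, gives the subgroup-wise re-estimation flavour described in the abstract.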

  5. Application of composite estimation in studies of animal population production with two-stage repeated sample designs.

    Science.gov (United States)

    Farver, T B; Holt, D; Lehenbauer, T; Greenley, W M

    1997-05-01

This paper reports results from two example data sets of a two-stage sampling design in which sampling (in panels) both farms and animals within selected farms increases the efficiency of parameter estimation from measurements recorded over time. With such a design, not only are farms replaced from time to time, but the animals subsampled within retained farms are also subject to replacement. Three general categories of parameters estimated for the population (the set of animals belonging to the universe of farms of interest) were (1) the total at each measurement occasion; (2) the difference between means or totals on successive measurement occasions; and (3) the total over a sequence of successive measurement periods. Whereas several responses at the farm level were highly correlated over time (ρ1), the corresponding animal responses were less correlated over time (ρ2), leading to only moderate gains in relative efficiency. Intraclass correlation values were too low in most cases to counteract the overall negative impact of ρ2. In general, sizeable gains in relative efficiency were observed for estimating change, confirming a previous result which showed this to be true provided that ρ1 was high (irrespective of ρ2).

  6. Two-stage sampling for acceptance testing

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, C.L.; Bryan, M.F.

    1992-09-01

Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum, but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly, but instead is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.
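
The inflation being described can be demonstrated with a small simulation, a sketch under assumed settings (normal data, n1 = n2 = 10, normal quantiles instead of t for brevity): with the true mean exactly at the allowed maximum, the naive "enlarge and recompute at the same nominal level" procedure passes more often than the nominal 5%.

```python
import math
import random
from statistics import NormalDist, mean, stdev

def naive_two_stage_pass(rng, mu, sigma, limit, n1, n2, conf=0.95):
    # Stage 1: upper confidence bound on the mean (normal quantile for brevity).
    z = NormalDist().inv_cdf(conf)
    x = [rng.gauss(mu, sigma) for _ in range(n1)]
    ucb = mean(x) + z * stdev(x) / math.sqrt(len(x))
    if ucb <= limit:
        return True
    if mean(x) > limit:  # sample mean above the allowed maximum: fail outright
        return False
    # The widespread (statistically naive) practice: enlarge the sample and
    # recompute the bound at the same nominal confidence level.
    x += [rng.gauss(mu, sigma) for _ in range(n2)]
    ucb = mean(x) + z * stdev(x) / math.sqrt(len(x))
    return ucb <= limit

rng = random.Random(3)
# True mean exactly at the allowed maximum, so any pass is a type I error.
trials = 4000
rate = sum(naive_two_stage_pass(rng, 0.0, 1.0, 0.0, 10, 10)
           for _ in range(trials)) / trials
```

The realized `rate` exceeds 0.05, which is why the note recommends running the procedure at a corrected nominal α between the desired error rate and half that value.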

  8. Mixed effect regression analysis for a cluster-based two-stage outcome-auxiliary-dependent sampling design with a continuous outcome.

    Science.gov (United States)

    Xu, Wangli; Zhou, Haibo

    2012-09-01

Two-stage designs are a well-known cost-effective way of conducting biomedical studies when the exposure variable is expensive or difficult to measure. Recent research developments have further allowed one or both stages of the two-stage design to be outcome dependent on a continuous outcome variable. This outcome-dependent sampling feature enables further efficiency gains in parameter estimation and overall cost reduction of the study (e.g. Wang, X. and Zhou, H., 2010. Design and inference for cancer biomarker study with an outcome and auxiliary-dependent subsampling. Biometrics 66, 502-511; Zhou, H., Song, R., Wu, Y. and Qin, J., 2011. Statistical inference for a two-stage outcome-dependent sampling design with a continuous outcome. Biometrics 67, 194-202). In this paper, we develop a semiparametric mixed effect regression model for data from a two-stage design in which the second-stage data are sampled with an outcome-auxiliary-dependent sampling (OADS) scheme. Our method allows the cluster- or center-effects of the study subjects to be accounted for. We propose an estimated likelihood function to estimate the regression parameters. A simulation study indicates that greater study efficiency gains can be achieved under the proposed two-stage OADS design with center-effects when compared with other alternative sampling schemes. We illustrate the proposed method by analyzing a dataset from the Collaborative Perinatal Project.

  9. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
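
A toy version of this kind of design comparison can be set up in a few lines. The sketch below assumes a Neyman-Scott-style clustered population on a grid, and uses grid rows as a stand-in for spatially proximate primary units (the actual study's designs and cost model are richer); it then compares the CV of the density estimator under simple random quadrat sampling versus two-stage sampling with the same total quadrat count.

```python
import random
from statistics import mean, stdev

def clustered_grid(rng, size=40, n_parents=15, mean_offspring=20, spread=1.5):
    # Toy Neyman-Scott-style clustered population: parents with nearby offspring.
    grid = [[0] * size for _ in range(size)]
    for _ in range(n_parents):
        px, py = rng.uniform(0, size), rng.uniform(0, size)
        for _ in range(rng.randint(0, 2 * mean_offspring)):
            x = int(px + rng.gauss(0, spread)) % size
            y = int(py + rng.gauss(0, spread)) % size
            grid[x][y] += 1
    return grid

def srs_cv(grid, rng, n_quadrats=50, reps=500):
    # CV of the mean-count-per-quadrat estimator under simple random sampling.
    size = len(grid)
    cells = [(i, j) for i in range(size) for j in range(size)]
    ests = [mean(grid[i][j] for i, j in rng.sample(cells, n_quadrats))
            for _ in range(reps)]
    return stdev(ests) / mean(ests)

def two_stage_cv(grid, rng, n_primary=5, n_secondary=10, reps=500):
    # Stage 1: sample rows (spatially proximate quadrats, cheap to travel);
    # stage 2: sample quadrats within each selected row.
    size = len(grid)
    ests = []
    for _ in range(reps):
        vals = [grid[r][j]
                for r in rng.sample(range(size), n_primary)
                for j in rng.sample(range(size), n_secondary)]
        ests.append(mean(vals))
    return stdev(ests) / mean(ests)

grid = clustered_grid(random.Random(5))
cv_srs = srs_cv(grid, random.Random(6))
cv_two = two_stage_cv(grid, random.Random(6))
```

For a fixed quadrat count the two-stage CV is typically no better, which mirrors the abstract's point: its advantage only shows once travel distance between units enters the cost comparison.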

  10. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    Science.gov (United States)

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence.
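
Under 100% specificity and conditional independence, the moment identities give a simple closed form. The sketch below assumes both tests are run on the same sample (the paper's second test is run on a stratified subsample, which changes the bookkeeping but not the identities), with made-up parameter values for the demonstration:

```python
import random
from statistics import stdev

def mom_prevalence(n, n1, n2, n12):
    """Closed-form method-of-moments estimates assuming both tests are
    100% specific and conditionally independent given infection status:
      P(T1+) = se1*pi,  P(T2+) = se2*pi,  P(T1+ and T2+) = se1*se2*pi.
    n1, n2: positives on each test; n12: positives on both (all out of n)."""
    p1, p2, p12 = n1 / n, n2 / n, n12 / n
    return p1 * p2 / p12, p12 / p2, p12 / p1  # prevalence, se1, se2

def parametric_bootstrap_se(n, pi, se1, se2, rng, reps=200):
    # Regenerate both tests' results under the fitted model and re-estimate,
    # as in the paper's parametric bootstrap.
    ests = []
    for _ in range(reps):
        n1 = n2 = n12 = 0
        for _ in range(n):
            if rng.random() < pi:
                a, b = rng.random() < se1, rng.random() < se2
                n1 += a
                n2 += b
                n12 += (a and b)
        ests.append(mom_prevalence(n, n1, n2, n12)[0])
    return stdev(ests)

# Exact expected counts under assumed pi = 0.2, se1 = 0.9, se2 = 0.7, n = 5000:
pi_hat, se1_hat, se2_hat = mom_prevalence(5000, 900, 700, 630)
se_boot = parametric_bootstrap_se(5000, pi_hat, se1_hat, se2_hat, random.Random(2))
```

Feeding the estimators their own expected counts recovers the assumed prevalence and sensitivities exactly, which is a quick sanity check on the moment equations.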

  11. Two-stage designs for cross-over bioequivalence trials.

    Science.gov (United States)

    Kieser, Meinhard; Rauch, Geraldine

    2015-07-20

The topic of applying two-stage designs in the field of bioequivalence studies has recently gained attention in the literature and in regulatory guidelines. While there exists some methodological research on the application of group sequential designs in bioequivalence studies, implementation of adaptive approaches has focused up to now on superiority and non-inferiority trials. In particular, no comparison of the features and performance characteristics of these designs has been performed; therefore, the question of which design to employ in this setting remains open. In this paper, we discuss and compare 'classical' group sequential designs and three types of adaptive designs that offer the option of mid-course sample size recalculation. A comprehensive simulation study demonstrates that group sequential designs can be identified which show power characteristics similar to those of the adaptive designs but require a lower average sample size. The methods are illustrated with a real bioequivalence study example.

  12. On Two-stage Seamless Adaptive Design in Clinical Trials

    Directory of Open Access Journals (Sweden)

    Shein-Chung Chow

    2008-12-01

In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular because of its efficiency and flexibility in modifying trial and/or statistical procedures of ongoing clinical trials. One of the most commonly considered adaptive designs is probably a two-stage seamless adaptive trial design that combines two separate studies into one single study. In many cases, the study endpoints considered in a two-stage seamless adaptive design may be similar but different (e.g. a biomarker versus a regular clinical endpoint, or the same study endpoint with different treatment durations). In this case, it is important to determine how the data collected from both stages should be combined for the final analysis. It is also of interest to know how the sample size calculation/allocation should be done for achieving the study objectives originally set for the two stages (separate studies). In this article, formulas for sample size calculation/allocation are derived for cases in which the study endpoints are continuous, discrete (e.g. binary responses), or time-to-event data, assuming that there is a well-established relationship between the study endpoints at different stages, and that the study objectives at different stages are the same. In cases in which the study objectives at different stages are different (e.g. dose finding at the first stage and efficacy confirmation at the second stage) and when there is a shift in patient population caused by protocol amendments, the derived test statistics and formulas for sample size calculation and allocation are modified accordingly to control the overall type I error at the prespecified level.

  13. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

A two-stage, highly loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.
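
The quoted SI equivalents can be checked directly from the exact conversion factors:

```python
# Exact conversion factors (by definition of the pound and the foot/inch).
LB_TO_KG = 0.45359237
FT_TO_M = 0.3048
IN_TO_M = 0.0254

flow = 42 * LB_TO_KG / FT_TO_M ** 2  # lbm/sec/sq ft -> kg/sec/sq m
tip_speed = 1450 * FT_TO_M           # ft/sec -> m/sec
tip_diameter = 31 * IN_TO_M          # inches -> metres
```

All three agree with the values given in the abstract (≈205 kg/sec/sq m, 441.96 m/sec, ≈0.787 m).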

  14. Two-Stage Sampling Procedures for Comparing Means When Population Distributions Are Non-Normal.

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen

    Two-stage sampling procedures for comparing two population means when variances are heterogeneous have been developed by D. G. Chapman (1950) and B. K. Ghosh (1975). Both procedures assume sampling from populations that are normally distributed. The present study reports on the effect that sampling from non-normal distributions has on Type I error…

  15. Two Stage Fully Differential Sample and Hold Circuit Using .18µm Technology

    Directory of Open Access Journals (Sweden)

    Dharmendra Dongardiye

    2014-05-01

This paper presents a well-established fully differential sample-and-hold circuit, implemented in 180 nm CMOS technology. In this two-stage approach, the first stage provides very high gain and the second stage provides a large voltage swing. The design uses an improved fully differential two-stage operational amplifier with 76.7 dB gain, providing a 149 MHz unity-gain bandwidth, a 78-degree phase margin, and a differential peak-to-peak output swing of more than 2.4 V. The sample-and-hold circuit also meets the SNR specifications.

  16. Two-stage re-estimation adaptive design: a simulation study

    Directory of Open Access Journals (Sweden)

    Francesca Galli

    2013-10-01

Background: adaptive clinical trial design has been proposed as a promising new approach to improve the drug discovery process. Among the many options available, adaptive sample size re-estimation is of great interest, mainly because of its ability to avoid a large 'up-front' commitment of resources. In this simulation study, we investigate the statistical properties of two-stage sample size re-estimation designs in terms of type I error control, study power and sample size, in comparison with the fixed-sample study. Methods: we simulated a balanced two-arm trial aimed at comparing two means of normally distributed data, using the inverse normal method to combine the results of each stage, and considering scenarios jointly defined by the following factors: the sample size re-estimation method, the information fraction, the type of group sequential boundaries and the use of futility stopping. Calculations were performed using the statistical software SAS™ (version 9.2). Results: under the null hypothesis, every type of adaptive design considered maintained the pre-specified type I error rate, but futility stopping was required to avoid an unwanted increase in sample size. When deviating from the null hypothesis, the gain in power usually achieved with the adaptive design and its performance in terms of sample size were influenced by the specific design options considered. Conclusions: we show that adaptive designs incorporating futility stopping, a sufficiently high information fraction (50-70%) and the conditional power method for sample size re-estimation have good statistical properties, which include a gain in power when trial results are less favourable than anticipated.
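
The inverse normal combination used in the Methods is short enough to state in code. A minimal sketch: stage-wise one-sided p-values are combined with pre-fixed weights derived from the information fraction, which is what preserves the type I error under data-driven sample size changes.

```python
from statistics import NormalDist

N = NormalDist()

def inverse_normal_p(p1, p2, frac):
    """Combine stage-wise one-sided p-values with pre-fixed weights
    w1 = sqrt(frac), w2 = sqrt(1 - frac), so that w1^2 + w2^2 = 1."""
    w1, w2 = frac ** 0.5, (1 - frac) ** 0.5
    z = w1 * N.inv_cdf(1 - p1) + w2 * N.inv_cdf(1 - p2)
    return 1 - N.cdf(z)

# Illustrative values: stage p-values 0.04 and 0.10, information fraction 0.5.
p_combined = inverse_normal_p(0.04, 0.10, frac=0.5)
```

Because the weights are fixed in advance, the combined statistic is standard normal under the null regardless of how the stage-2 sample size was re-estimated.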

  17. A two-stage method to determine optimal product sampling considering dynamic potential market.

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.
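
The external/internal coefficients referred to here are those of Bass-type diffusion. A toy discrete-time Bass-style sketch (not the paper's model, which adds a dynamic potential market and repeat purchase) illustrates the mechanism by which free samples act as seeded initial adopters:

```python
def bass_cumulative(m, p, q, periods, seeded=0):
    """Discrete-time Bass-style diffusion: p is the external (innovation)
    coefficient, q the internal (imitation) coefficient, m the potential
    market; `seeded` free samples are treated as initial adopters."""
    adopters = seeded
    path = []
    for _ in range(periods):
        adopters += (p + q * adopters / m) * (m - adopters)
        path.append(adopters)
    return path

# Illustrative coefficients; the seeded run starts with 50 free samples.
no_seed = bass_cumulative(1000, 0.03, 0.38, 30)
with_seed = bass_cumulative(1000, 0.03, 0.38, 30, seeded=50)
```

Seeding shifts the whole adoption path upward, and the optimization problem in the paper is choosing the seeding (sampling) level that best trades this acceleration against the cost of the samples.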

  18. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique two-stage driver design which allows a more compact, lower-cost free piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.

  19. On Simon's two-stage design for single-arm phase IIA cancer clinical trials under beta-binomial distribution.

    Science.gov (United States)

    Liu, Junfeng; Lin, Yong; Shih, Weichung Joe

    2010-05-10

Simon's two-stage design (Control. Clin. Trials 1989; 10:1-10) has been broadly applied to single-arm phase IIA cancer clinical trials in order to minimize either the expected or the maximum sample size under the null hypothesis of drug inefficacy, i.e. when the pre-specified amount of improvement in response rate (RR) is not expected to be observed. This paper studies a realistic scenario where the standard and experimental treatment RRs follow two continuous distributions (e.g. beta distributions) rather than taking two single values. The binomial probabilities in Simon's design are replaced by prior predictive beta-binomial probabilities, which are ratios of two beta functions, and the domain-restricted RRs involve incomplete beta functions used to obtain the null hypothesis acceptance probability. We illustrate that the beta-binomial mixture model based two-stage design retains certain desirable properties for hypothesis testing purposes. However, numerical results show that such designs may not exist under certain hypothesis and error rate (type I and II) setups within a maximum sample size of approximately 130. Furthermore, we give theoretical conditions for asymptotic two-stage design non-existence (as the sample size goes to infinity) in order to improve the efficiency of the design search and to avoid needless searching.
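
The "ratio of two beta functions" form of the prior predictive probability is easy to compute with log-gamma arithmetic. A minimal sketch (the stage-1 rule values below are hypothetical, not from the paper):

```python
import math

def log_beta(a, b):
    # log B(a, b) = log Gamma(a) + log Gamma(b) - log Gamma(a + b)
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def beta_binom_pmf(k, n, a, b):
    # Prior predictive probability of k responses out of n when the response
    # rate follows Beta(a, b): a ratio of two beta functions, as in the text.
    return math.comb(n, k) * math.exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def early_stop_prob(n1, r1, a, b):
    # Chance of stopping after stage 1 (r1 or fewer responses) when the true
    # response rate is Beta(a, b)-distributed rather than a single value.
    return sum(beta_binom_pmf(k, n1, a, b) for k in range(r1 + 1))

# Hypothetical stage-1 rule: stop if <= 2 responses in 10, RR ~ Beta(2, 8).
pet = early_stop_prob(n1=10, r1=2, a=2, b=8)
```

Searching over (n1, r1, n, r) with these mixture probabilities in place of binomial ones is what can fail to produce a feasible design under some error-rate setups, as the abstract notes.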

  20. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.
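
Two of the metrics named above have one-line definitions worth making explicit. For a 2D line-focus system, the thermodynamic limit of concentration is 1/sin(θ) for acceptance half-angle θ, and the concentration-acceptance product (CAP) C·sin(θ) is therefore at most 1; a sketch:

```python
import math

def c_max_line_focus(acceptance_half_angle_deg):
    # 2D (line-focus) thermodynamic limit of concentration: 1 / sin(theta).
    return 1.0 / math.sin(math.radians(acceptance_half_angle_deg))

def cap(geometric_concentration, acceptance_half_angle_deg):
    # Concentration-acceptance product: C * sin(theta), at most 1 in 2D.
    return geometric_concentration * math.sin(
        math.radians(acceptance_half_angle_deg))

# Solar disc half-angle of roughly 0.27 degrees:
c_limit = c_max_line_focus(0.27)
```

Designs like those compared in the paper are judged partly by how close their CAP comes to this bound while staying compact.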

  1. Interval estimation of binomial proportion in clinical trials with a two-stage design.

    Science.gov (United States)

    Tsai, Wei-Yann; Chi, Yunchan; Chen, Chia-Min

    2008-01-15

    Generally, a two-stage design is employed in Phase II clinical trials to avoid giving patients an ineffective drug. If the number of patients with significant improvement, which is a binomial response, is greater than a pre-specified value at the first stage, then another binomial response at the second stage is also observed. This paper considers interval estimation of the response probability when the second stage is allowed to continue. Two asymptotic interval estimators, Wald and score, as well as two exact interval estimators, Clopper-Pearson and Sterne, are constructed according to the two binomial responses from this two-stage design, where the binomial response at the first stage follows a truncated binomial distribution. The mean actual coverage probability and expected interval width are employed to evaluate the performance of these interval estimators. According to the comparison results, the score interval is recommended for both Simon's optimal and minimax designs.
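
For reference, the recommended score interval in its ordinary single-sample (Wilson) form is shown below; the paper's version additionally accounts for the truncated binomial distribution of the stage-1 response count, which this sketch ignores.

```python
import math
from statistics import NormalDist

def wilson_interval(x, n, conf=0.95):
    # Ordinary Wilson score interval for x successes in n trials.
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# Illustrative pooled two-stage result: 10 responders among 50 patients.
lo, hi = wilson_interval(10, 50)
```

The two-stage correction matters because, conditional on continuing, the stage-1 count can only have exceeded the cutoff, so the pooled proportion is biased upward relative to the ordinary binomial model.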

  2. Thermal design of two-stage evaporative cooler based on thermal comfort criterion

    Science.gov (United States)

    Gilani, Neda; Poshtiri, Amin Haghighi

    2017-04-01

    Performance of two-stage evaporative coolers at various outdoor air conditions was numerically studied, and its geometric and physical characteristics were obtained based on thermal comfort criteria. For this purpose, a mathematical model was developed based on conservation equations of mass, momentum and energy to determine heat and mass transfer characteristics of the system. The results showed that two-stage indirect/direct cooler can provide the thermal comfort condition when outdoor air temperature and relative humidity are located in the range of 34-54 °C and 10-60 %, respectively. Moreover, as relative humidity of the ambient air rises, two-stage evaporative cooler with the smaller direct and larger indirect cooler will be needed. In building with high cooling demand, thermal comfort may be achieved at a greater air change per hour number, and thus an expensive two-stage evaporative cooler with a higher electricity consumption would be required. Finally, a design guideline was proposed to determine the size of required plate heat exchangers at various operating conditions.

  4. A covariate adjusted two-stage allocation design for binary responses in randomized clinical trials.

    Science.gov (United States)

    Bandyopadhyay, Uttam; Biswas, Atanu; Bhattacharya, Rahul

    2007-10-30

In the present work, we develop a two-stage allocation rule for binary responses using the log-odds ratio within the Bayesian framework, allowing the current allocation to depend on the covariate value of the current subject. We study, both numerically and theoretically, several exact and limiting properties of this design. The applicability of the proposed methodology is illustrated using a data set. We compare this rule with some of the existing rules by computing various performance measures.

  5. Synthetic control charts with two-stage sampling for monitoring bivariate processes

    Directory of Open Access Journals (Sweden)

    Antonio F. B. Costa

    2007-04-01

    Full Text Available In this article, we consider the synthetic control chart with two-stage sampling (SyTS chart) to control bivariate processes. During the first stage, one item of the sample is inspected and two correlated quality characteristics (x, y) are measured. If the Hotelling statistic T1² for this individual observation of (x, y) is lower than a specified value UCL1, the sampling is interrupted. Otherwise, the sampling goes on to the second stage, where the remaining items are inspected and the Hotelling statistic T2² for the sample means of (x, y) is computed. When the statistic T2² is larger than a specified value UCL2, the sample is classified as nonconforming. According to the synthetic control chart procedure, the signal is based on the number of conforming samples between two neighboring nonconforming samples. The proposed chart detects process disturbances faster than the bivariate charts with variable sample size, and from a practical viewpoint it is more convenient to administer.
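
The two-stage inspection and synthetic signalling rule described above can be sketched as follows. This is an illustrative reconstruction, assuming a known in-control mean at the origin and a known covariance matrix; the control limits UCL1, UCL2 and the synthetic limit L are hypothetical inputs, not the paper's optimized values.

```python
import numpy as np

def classify_sample(sample, sigma_inv, ucl1, ucl2):
    """Two-stage inspection of one bivariate sample (rows = items).

    Stage 1: Hotelling T^2 of the first item alone; stop if below UCL1.
    Stage 2: otherwise, T^2 of the mean of the remaining items.
    Returns True if the sample is classified as nonconforming.
    """
    x1 = sample[0]
    t1 = x1 @ sigma_inv @ x1
    if t1 < ucl1:
        return False                      # sampling interrupted: conforming
    rest = sample[1:]
    xbar = rest.mean(axis=0)
    t2 = len(rest) * (xbar @ sigma_inv @ xbar)
    return bool(t2 > ucl2)

def synthetic_signal(flags, L):
    """Synthetic rule: signal when the number of samples between two
    nonconforming samples is at most L; returns the signalling index."""
    last_nc = None
    for i, nc in enumerate(flags):
        if nc:
            if last_nc is not None and i - last_nc <= L:
                return i
            last_nc = i
    return None
```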

  6. Structural requirements and basic design concepts for a two-stage winged launcher system (Saenger)

    Science.gov (United States)

    Kuczera, H.; Keller, K.; Kunz, R.

    1988-10-01

    An evaluation is made of materials and structures technologies deemed capable of increasing the mass fraction-to-orbit of the Saenger two-stage launcher system while adequately addressing thermal-control and cryogenic fuel storage insulation problems. Except in its leading edges, nose cone, and airbreathing propulsion system air intakes, Ti alloy-based materials will be the basis of the airframe primary structure. Lightweight metallic thermal-protection measures will be employed. Attention is given to the design of the large lower stage element of Saenger.

  7. Design and Characterization of two stage High-Speed CMOS Operational Amplifier

    Directory of Open Access Journals (Sweden)

    Rahul Chaudhari

    2014-03-01

    Full Text Available This paper describes the design of a two-stage CMOS operational amplifier and analyzes the effect of various aspect ratios on its characteristics. The op-amp operates from a 1.8 V power supply in tsmc 0.18 μm CMOS technology. Trade-off curves are computed between characteristics such as gain, phase margin, GBW, ICMR, CMRR and slew rate. The op-amp is designed to exhibit a unity-gain frequency of 14 MHz and a gain of 59.98 dB with a 61.235° phase margin. The design has been carried out in Mentor Graphics tools; simulation results are verified using ModelSim Eldo and Design Architect IC. The task of CMOS op-amp design optimization is investigated in this work: when analyzed as a search problem, it translates into a multi-objective optimization in which various op-amp specifications have to be taken into account, i.e., gain, GBW (gain-bandwidth product), phase margin and others. This work focused on the optimization of the various aspect ratios, each giving different parameter values; the results are compared against the standard op-amp characteristics in graphs and tables. Simulation results agree with theoretical predictions. Simulations confirm that the settling time can be further improved by increasing GBW; a settling time of 19 ns is achieved. It is demonstrated that as W/L increases, GBW increases and the settling time is reduced.
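
The kind of hand analysis behind such a design can be reproduced with the standard small-signal estimates for a two-stage Miller-compensated op-amp, sketched below. The numeric values are illustrative assumptions chosen to land near the 14 MHz unity-gain frequency quoted above, not the paper's actual device sizes.

```python
import math

def miller_opamp_poles(gm1, gm2, cc, cl):
    """First-pass frequency estimates for a two-stage Miller-compensated
    CMOS op-amp (standard textbook hand-analysis formulas).
    gm1: input-pair transconductance, gm2: second-stage transconductance,
    cc: Miller compensation capacitor, cl: load capacitor."""
    gbw = gm1 / (2 * math.pi * cc)        # unity-gain bandwidth (GBW)
    p2 = gm2 / (2 * math.pi * cl)         # non-dominant (output) pole
    z_rhp = gm2 / (2 * math.pi * cc)      # right-half-plane zero
    return gbw, p2, z_rhp

# Illustrative values: gm1 = 176 uS with cc = 2 pF gives a GBW near 14 MHz
gbw, p2, z = miller_opamp_poles(176e-6, 1e-3, 2e-12, 5e-12)
```

For an adequate phase margin, the non-dominant pole and the RHP zero must sit well above the GBW, which is why increasing W/L (and hence gm1) to raise GBW must be balanced against the compensation network.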

  8. Two-stage collaborative global optimization design model of the CHPG microgrid

    Science.gov (United States)

    Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng

    2017-06-01

    With the continuous development of technology and falling investment costs, the proportion of renewable energy in the power grid is growing because of its clean, environmentally friendly characteristics; this may require larger-capacity energy storage devices, increasing cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper to minimize cost by using virtual storage, without extending the existing storage system. P2G technology is used as virtual multi-energy storage in the CHPG, coordinating the operation of the electric energy network and the natural gas network at the same time. Demand response is another kind of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, while achieving a lower-cost operation scheme. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.

  9. Design and Analysis of a Split Deswirl Vane in a Two-Stage Refrigeration Centrifugal Compressor

    Directory of Open Access Journals (Sweden)

    Jeng-Min Huang

    2014-09-01

    Full Text Available This study numerically investigated the influence of using the second row of a double-row deswirl vane as the inlet guide vane of the second stage on the performance of the first stage in a two-stage refrigeration centrifugal compressor. The working fluid was R134a, and the turbulence model was the Spalart-Allmaras model. The parameters discussed included the cutting position of the deswirl vane, the staggered angle of the two rows of vanes, and the rotation angle of the second row. The results showed that a staggered angle of 7.5° performed better than 15° or 22.5°. When the staggered angle was 7.5°, cutting at 1/3 and 1/2 of the original deswirl vane length performed only slightly differently from the original vane, but clearly better than cutting at 2/3. When the staggered angle was 15°, the cutting position influenced performance only slightly. At a low flow rate prone to surge, when the second row, staggered at 7.5° and cut at half the vane length, was rotated 10°, the efficiency was reduced by only about 0.6%, and 10% of the swirl remained as preswirl for the second stage, which is generally better than the other designs.

  10. Exact alpha-error determination for two-stage sampling strategies to substantiate freedom from disease.

    Science.gov (United States)

    Kopacka, I; Hofrichter, J; Fuchs, K

    2013-05-01

    Sampling strategies to substantiate freedom from disease are important when it comes to the trade of animals and animal products. When considering imperfect tests and finite populations, however, sample size calculation can be a challenging task. The generalized hypergeometric formula developed by Cameron and Baldock (1998a) offers a framework that can elegantly be extended to multi-stage sampling strategies, which are widely used to account for disease clustering at herd level. The achieved alpha-error of such surveys, however, typically depends on the realization of the sample and can differ from the pre-calculated value. In this paper, we introduce a new formula to evaluate the exact alpha-error induced by a specific sample. We further give a numerically viable approximation formula and analyze its properties using a data example of Brucella melitensis in the Austrian sheep population.
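
The flavour of the exact calculation can be sketched by conditioning on the number of infected animals that end up in the sample (hypergeometric) and then requiring every sampled animal to test negative. This is an illustrative single-stage sketch of the generalized-hypergeometric reasoning, not the two-stage formula derived in the paper.

```python
from math import comb

def prob_all_negative(N, d, n, se, sp=1.0):
    """Probability that a sample of n animals from a population of N
    containing d infected animals yields zero test-positives, for test
    sensitivity `se` and specificity `sp`. This is the alpha-error of
    declaring freedom from disease when the true prevalence is d/N
    (single-stage sketch in the spirit of Cameron & Baldock)."""
    total = 0.0
    for y in range(0, min(n, d) + 1):
        # y infected animals drawn into the sample (hypergeometric)
        p_y = comb(d, y) * comb(N - d, n - y) / comb(N, n)
        # all y infected test negative, all n - y healthy test negative
        total += p_y * (1 - se) ** y * sp ** (n - y)
    return total
```

With a perfect test (se = sp = 1) this collapses to the classical hypergeometric probability of drawing no infected animals at all; an imperfect test inflates the alpha-error, which drives the larger sample sizes the abstract alludes to.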

  11. Sampling strategies for estimating forest cover from remote sensing-based two-stage inventories

    Institute of Scientific and Technical Information of China (English)

    Piermaria Corona; Lorenzo Fattorini; Maria Chiara Pagliarella

    2015-01-01

    Background: Remote sensing-based inventories are essential for estimating forest cover in tropical and subtropical countries, where ground inventories cannot be performed periodically at a large scale owing to high costs and forest inaccessibility (e.g. REDD projects), and are mandatory for constructing historical records that can be used as forest cover baselines. Given the conditions of such inventories, the survey area is partitioned into a grid of imagery segments of pre-fixed size, where the proportion of forest cover can be measured within segments using a combination of unsupervised (automated or semi-automated) classification of satellite imagery and manual (i.e. visual on-screen) enhancements. Because visual on-screen operations are time-expensive procedures, manual classification can be performed only for a sample of imagery segments selected at a first stage, while forest cover within each selected segment is estimated at a second stage from a sample of pixels selected within the segment. Because forest cover data arising from unsupervised satellite imagery classification may be freely available (e.g. Landsat imagery) over the entire survey area (wall-to-wall data) and are likely to be good proxies of manually classified cover data (sample data), they can be adopted as suitable auxiliary information. Methods: The question is how to choose the sample areas where manual classification is carried out. We investigated the efficiency of one-per-stratum stratified sampling for selecting the segments and pixels where manual classification is carried out, and the efficiency of the difference estimator for exploiting the auxiliary information at the estimation level. The performance of this strategy is compared with simple random sampling without replacement. Results: Our results were obtained theoretically from three artificial populations constructed from the Landsat classification (forest/non-forest) available at pixel level for a study area located in central Italy
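
The difference estimator mentioned above has a simple form: start from the mean of the wall-to-wall auxiliary classification and correct it by the average discrepancy observed on the manually classified sample. A minimal sketch follows (the standard textbook form of the estimator, not the paper's stratified variance machinery):

```python
import numpy as np

def difference_estimator(x_wall_to_wall_mean, x_sample, y_sample):
    """Difference estimator of mean forest cover.

    x_wall_to_wall_mean : mean of the freely available unsupervised
                          classification (auxiliary variable x), known
                          over the entire survey area.
    x_sample, y_sample  : auxiliary and manually classified cover values
                          on the sampled segments.
    """
    x_sample = np.asarray(x_sample, dtype=float)
    y_sample = np.asarray(y_sample, dtype=float)
    return x_wall_to_wall_mean + np.mean(y_sample - x_sample)
```

When the auxiliary classification is a good proxy (y close to x plus noise), the corrections are small and nearly constant, so the estimator has a much smaller variance than the sample mean of y alone.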

  12. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  13. The Design, Construction and Operation of a 75 kW Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Birk; Ahrenfeldt, Jesper; Jensen, Torben Kvist

    2003-01-01

    The Two-Stage Gasifier was operated for several weeks (465 hours) and of these 190 hours continuously. The gasifier is operated automatically unattended day and night, and only small adjustments of the feeding rate were necessary once or twice a day. The operation was successful, and the output a...... of the reactor had to be constructed in some other material....

  14. Some design aspects of a two-stage rail-to-rail CMOS op amp

    NARCIS (Netherlands)

    Gierkink, S.L.J.; Holzmann, Peter J.; Wiegerink, R.J.; Wassenaar, R.F.

    1999-01-01

    A two-stage low-voltage CMOS op amp with rail-to-rail input and output voltage ranges is presented. The circuit uses complementary differential input pairs to achieve the rail-to-rail common-mode input voltage range. The differential pairs operate in strong inversion, and the constant transconductance…

  15. A two-stage strategy to accommodate general patterns of confounding in the design of observational studies.

    Science.gov (United States)

    Haneuse, Sebastien; Schildcrout, Jonathan; Gillen, Daniel

    2012-04-01

    Accommodating general patterns of confounding in sample size/power calculations for observational studies is extremely challenging, both technically and scientifically. While employing previously implemented sample size/power tools is appealing, they typically ignore important aspects of the design/data structure. In this paper, we show that sample size/power calculations that ignore confounding can be much more unreliable than is conventionally thought; using real data from the US state of North Carolina, naive calculations yield sample size estimates that are half those obtained when confounding is appropriately acknowledged. Unfortunately, eliciting realistic design parameters for confounding mechanisms is difficult. To overcome this, we propose a novel two-stage strategy for observational study design that can accommodate arbitrary patterns of confounding. At the first stage, researchers establish bounds for power that facilitate the decision of whether or not to initiate the study. At the second stage, internal pilot data are used to estimate key scientific inputs that can be used to obtain realistic sample size/power. Our results indicate that the strategy is effective at replicating gold standard calculations based on knowing the true confounding mechanism. Finally, we show that consideration of the nature of confounding is a crucial aspect of the elicitation process; depending on whether the confounder is positively or negatively associated with the exposure of interest and outcome, naive power calculations can either under or overestimate the required sample size. Throughout, simulation is advocated as the only general means to obtain realistic estimates of statistical power; we describe, and provide in an R package, a simple algorithm for estimating power for a case-control study.
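
The paper's simulation-based view of power can be illustrated with a generic sketch: simulate data in which a confounder drives both exposure and outcome, apply a deliberately naive crude test, and count rejections. This is not the authors' R package; the logistic data model, effect sizes, and the crude Wald test on the log odds ratio are all illustrative assumptions.

```python
import numpy as np

def simulated_power(n, beta_e, beta_c=0.0, gamma=0.0, reps=400, z_crit=1.96, seed=1):
    """Estimate power for detecting an exposure effect on a binary outcome
    by Monte Carlo simulation (generic sketch).

    beta_e : exposure -> outcome effect (log-odds)
    beta_c : confounder -> outcome association (log-odds)
    gamma  : confounder -> exposure association (log-odds)

    Uses a crude (unadjusted) Wald test, so when gamma and beta_c are
    nonzero the reported 'power' targets the confounded association,
    illustrating why naive calculations can mislead.
    """
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        c = rng.binomial(1, 0.5, n)                            # binary confounder
        e = rng.binomial(1, 1 / (1 + np.exp(0.5 - gamma * c)))  # exposure
        y = rng.binomial(1, 1 / (1 + np.exp(1.0 - beta_e * e - beta_c * c)))
        a = np.sum((e == 1) & (y == 1)) + 0.5                  # continuity correction
        b = np.sum((e == 1) & (y == 0)) + 0.5
        cc = np.sum((e == 0) & (y == 1)) + 0.5
        d = np.sum((e == 0) & (y == 0)) + 0.5
        z = np.log(a * d / (b * cc)) / np.sqrt(1/a + 1/b + 1/cc + 1/d)
        hits += abs(z) > z_crit
    return hits / reps
```

Comparing runs with gamma = beta_c = 0 against runs with strong confounding makes visible the under- or overestimation of required sample size that the abstract describes.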

  16. Experiences from the full-scale implementation of a new two-stage vertical flow constructed wetland design.

    Science.gov (United States)

    Langergraber, Guenter; Pressl, Alexander; Haberl, Raimund

    2014-01-01

    This paper describes the results of the first full-scale implementation of a two-stage vertical flow constructed wetland (CW) system developed to increase nitrogen removal. The full-scale system was constructed for the Bärenkogelhaus, which is located in Styria at the top of a mountain, 1,168 m above sea level. The Bärenkogelhaus has a restaurant with 70 seats, 16 rooms for overnight guests and is a popular site for day visits, especially during weekends and public holidays. The CW treatment system was designed for a hydraulic load of 2,500 L.d(-1) with a specific surface area requirement of 2.7 m(2) per person equivalent (PE). It was built in fall 2009 and started operation in April 2010 when the restaurant was re-opened. Samples were taken between July 2010 and June 2013 and were analysed in the laboratory of the Institute of Sanitary Engineering at BOKU University using standard methods. During 2010 the restaurant at Bärenkogelhaus was open 5 days a week whereas from 2011 the Bärenkogelhaus was open only on demand for events. This resulted in decreased organic loads of the system in the later period. In general, the measured effluent concentrations were low and the removal efficiencies high. During the whole period the ammonia nitrogen effluent concentration was below 1 mg/L even at effluent water temperatures below 3 °C. Investigations during high-load periods, i.e. events like weddings and festivals at weekends, with more than 100 visitors, showed a very robust treatment performance of the two-stage CW system. Effluent concentrations of chemical oxygen demand and NH4-N were not affected by these events with high hydraulic loads.

  17. Two-stage sample-to-answer system based on nucleic acid amplification approach for detection of malaria parasites.

    Science.gov (United States)

    Liu, Qing; Nam, Jeonghun; Kim, Sangho; Lim, Chwee Teck; Park, Mi Kyoung; Shin, Yong

    2016-08-15

    Rapid, early, and accurate diagnosis of malaria is essential for effective disease management and surveillance, and can reduce the morbidity and mortality associated with the disease. Although significant advances have been achieved in malaria diagnosis, existing technologies are still far from ideal: they are time consuming, complex and poorly sensitive, and they require separate assays for sample processing and detection. Therefore, the development of a fast and sensitive method that can integrate sample processing with detection of malarial infection is desirable. Here, we report a two-stage sample-to-answer system based on a nucleic acid amplification approach for detection of malaria parasites. It combines the dimethyl adipimidate (DMA)/thin film sample processing (DTS) technique as a first stage and the Mach-Zehnder interferometer-isothermal solid-phase DNA amplification (MZI-IDA) sensing technique as a second stage. The system can extract DNA from malaria parasites using the DTS technique in a closed system, not only reducing sample loss and contamination but also facilitating multiplexed malarial DNA detection using the fast and accurate MZI-IDA technique. We demonstrated that this system can deliver results within 60 min (including sample processing, amplification and detection) with high sensitivity, making it well suited for the detection of malaria in low-resource settings.

  18. Sample Design.

    Science.gov (United States)

    Ross, Kenneth N.

    1987-01-01

    This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, how a sample is chosen is stressed. Size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of a careful design. (Author/JAZ)

  19. On the design of a low-voltage two-stage OTA using bulk-driven and positive feedback techniques

    Science.gov (United States)

    Khameh, Hassan; Shamsi, Hossein

    2012-09-01

    This article presents the design and simulation of a fully differential two-stage operational transconductance amplifier (OTA) in a 0.18 µm CMOS process with a 0.9 V supply voltage. For this purpose, both the bulk-driven and positive feedback techniques are employed. These techniques increase the DC gain by about 18.5 dB without consuming more power and changing the unity-gain bandwidth and phase margin of the OTA.

  20. Two-Stage Experimental Design for Dose–Response Modeling in Toxicology Studies

    OpenAIRE

    Wang, Kai; Yang, Feng; Porter, Dale W; Wu, Nianqiang

    2013-01-01

    The efficient design of experiments (i.e., selection of experimental doses and allocation of animals) is important to establishing dose–response relationships in toxicology studies. The proposed procedure for design of experiments is distinct from those in the literature because it is able to adequately accommodate the special features of the dose–response data, which include non-normality, variance heterogeneity, possibly nonlinearity of the dose–response curve, and data scarcity. The design...

  1. OPTIMIZING THE PRECISION OF TOXICITY THRESHOLD ESTIMATION USING A TWO-STAGE EXPERIMENTAL DESIGN

    Science.gov (United States)

    An important consideration for risk assessment is the existence of a threshold, i.e., the highest toxicant dose where the response is not distinguishable from background. We have developed methodology for finding an experimental design that optimizes the precision of threshold mo...

  2. Design of a Scalable Modular Production System for a Two-stage Food Service Franchise System

    Directory of Open Access Journals (Sweden)

    Matt

    2012-11-01

    industrial case. Information was collected through multiple site visits, workshops and semi‐structured interviews with the company’s key staff of the project, as well as examination of relevant company documentations. By means of a scenario for the Central European market, the model was reviewed in terms of its development potential and finally approved for implementation. However, research through case survey requires further empirical investigation to fully establish this approach as a valid and reliable design tool.

  3. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Full Text Available Abstract Background Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, so mortality must often be estimated using retrospective population-based surveys. Results We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
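
The two sampling stages can be sketched generically: probability-proportional-to-size (PPS) selection of grid cells from gridded population counts, followed by simple random selection of candidate household locations within each selected cell. The functions below are an illustrative reconstruction of that logic, not the study's actual GIS/imagery workflow.

```python
import random

def pps_select_clusters(pop_counts, n_clusters, seed=7):
    """Stage 1: systematic PPS sampling of grid cells (clusters).

    pop_counts: mapping of cell id -> gridded population count.
    Cells whose count exceeds the sampling step are always selected
    (possibly more than once), as in standard systematic PPS."""
    rng = random.Random(seed)
    total = sum(pop_counts.values())
    step = total / n_clusters
    start = rng.uniform(0, step)
    targets = [start + i * step for i in range(n_clusters)]
    chosen, cum, ti = [], 0.0, 0
    for cell, count in pop_counts.items():
        cum += count
        while ti < n_clusters and targets[ti] < cum:
            chosen.append(cell)
            ti += 1
    return chosen

def select_households(grid_points, n_households, seed=7):
    """Stage 2: simple random sample of candidate dwelling locations
    (e.g. points of a sampling grid laid over imagery of the cell)."""
    return random.Random(seed).sample(grid_points, n_households)
```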

  4. A Two-Stage Layered Mixture Experiment Design for a Nuclear Waste Glass Application-Part 2

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, Scott K.; Piepel, Gregory F.; Gan, Hao; Kot, Wing; Pegg, Ian L.

    2003-12-01

    Part 1 (Cooley and Piepel, 2003a) describes the first stage of a two-stage experimental design to support property-composition modeling for high-level waste (HLW) glass to be produced at the Hanford Site in Washington state. Each stage used a layered design having an outer layer, an inner layer, a center point, and some replicates. However, the design variables and constraints defining the layers of the experimental glass composition region (EGCR) were defined differently for the second stage than for the first. The first-stage initial design involved 15 components, all treated as mixture variables. The second-stage augmentation design involved 19 components, with 14 treated as mixture variables and 5 treated as non-mixture variables. For each second-stage layer, vertices were generated and optimal design software was used to select alternative subsets of vertices for the design and calculate design optimality measures. A model containing 29 partial quadratic mixture terms plus 5 linear terms for the non-mixture variables was the basis for the optimal design calculations. Predicted property values were plotted for the alternative subsets of second-stage vertices and the first-stage design points. Based on the optimality measures and the predicted property distributions, a "best" subset of vertices was selected for each layer of the second stage to augment the first-stage design.

  5. A Two-Stage Layered Mixture Experiment Design for a Nuclear Waste Glass Application-Part 1

    Energy Technology Data Exchange (ETDEWEB)

    Cooley, Scott K.; Piepel, Gregory F.; Gan, Hao; Kot, Wing; Pegg, Ian L.

    2003-12-01

    A layered experimental design involving mixture variables was generated to support developing property-composition models for high-level waste (HLW) glasses. The design was generated in two stages, each having unique characteristics. Each stage used a layered design having an outer layer, an inner layer, a center point, and some replicates. The layers were defined by single- and multi-variable constraints. The first stage involved 15 glass components treated as mixture variables. For each layer, vertices were generated and optimal design software was used to select alternative subsets of vertices and calculate design optimality measures. Two partial quadratic mixture models, containing 25 terms for the outer layer and 30 terms for the inner layer, were the basis for the optimal design calculations. Distributions of predicted glass property values were plotted and evaluated for the alternative subsets of vertices. Based on the optimality measures and the predicted property distributions, a "best" subset of vertices was selected for each layer to form a layered design for the first stage. The design for the second stage was selected to augment the first-stage design. The discussion of the second-stage design begins in this Part 1 and is continued in Part 2 (Cooley and Piepel, 2003b).

  6. A two-stage algorithm for designing phase I cancer clinical trials for two new molecular entities.

    Science.gov (United States)

    Su, Zheng

    2010-01-01

    The continual reassessment method (CRM) and subsequent developments of the Bayesian approach provide important tools for the design of Phase I cancer clinical trials for a new molecular entity. In recent years the idea of developing a treatment composed of two molecular entities has been proposed. For example, for some tumor types there may be two signaling pathways, both of which need to be blocked simultaneously using two molecules to achieve therapeutic benefit. A two-stage Bayesian and likelihood based algorithm is introduced herein for designing Phase I cancer clinical trials for two new molecular entities. It starts with a modified CRM approach in the first stage and makes use of the accumulated data from the first stage to provide likelihood estimates of model parameters for use in the second stage. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  7. Two-Stage Design Method for Enhanced Inductive Energy Transmission with Q-Constrained Planar Square Loops.

    Directory of Open Access Journals (Sweden)

    Akaa Agbaeze Eteng

    Full Text Available Q-factor constraints are usually imposed on conductor loops employed as proximity range High Frequency Radio Frequency Identification (HF-RFID reader antennas to ensure adequate data bandwidth. However, pairing such low Q-factor loops in inductive energy transmission links restricts the link transmission performance. The contribution of this paper is to assess the improvement that is reached with a two-stage design method, concerning the transmission performance of a planar square loop relative to an initial design, without compromise to a Q-factor constraint. The first stage of the synthesis flow is analytical in approach, and determines the number and spacing of turns by which coupling between similar paired square loops can be enhanced with low deviation from the Q-factor limit presented by an initial design. The second stage applies full-wave electromagnetic simulations to determine more appropriate turn spacing and widths to match the Q-factor constraint, and achieve improved coupling relative to the initial design. Evaluating the design method in a test scenario yielded a more than 5% increase in link transmission efficiency, as well as an improvement in the link fractional bandwidth by more than 3%, without violating the loop Q-factor limit. These transmission performance enhancements are indicative of a potential for modifying proximity HF-RFID reader antennas for efficient inductive energy transfer and data telemetry links.

  8. A Two-Stage Taguchi Design Example: Image Quality Promotion in Miniature Camera/Cell-Phone Lens

    Directory of Open Access Journals (Sweden)

    Luke K. Wang

    2012-07-01

    Full Text Available A simple, practical manufacturing process, integrating the manufacturing capability-oriented design (MCOD) philosophy and Taguchi’s method, is presented to tackle high-resolution miniature camera/cell-phone lens issues at the manufacturing phase. Optical software is used to create an analytical simulation model that captures the quality effects of lens thickness, eccentricity, surface profile, and the air gap between lenses; a single quality characteristic expressed in terms of the modulation transfer function (MTF) is defined. The optimal combination of process parameters in the experimental scenario is obtained using Taguchi’s method, and the results are judged and analyzed via the signal-to-noise ratio (S/N) and the analysis of variance (ANOVA). The key idea of the two-stage design is to use the optical software to conduct a sensitivity analysis of the MTF first; to construct next an analytical model that depends on the actual process parameters at the manufacturing stage; and finally to substitute the outputs of the analytical model back into the optical software to verify the design criterion and make modifications. By minimizing both the theoretical errors at the design stage and the complexity of the manufacturing process, we are able to seek the most economical solution while attaining an optimal or suboptimal combination of process parameters, or control factors, for the lens manufacturing problem.
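
The S/N indices used to judge the experiments are standard Taguchi quantities; for a larger-the-better response such as MTF they take the form sketched below (textbook formulas, not tied to the paper's specific factor levels).

```python
import math

def sn_larger_the_better(values):
    """Taguchi S/N ratio for a larger-the-better response (e.g. MTF):
    S/N = -10 * log10( mean(1 / y_i^2) ), in dB."""
    return -10 * math.log10(sum(1 / y ** 2 for y in values) / len(values))

def sn_nominal_the_best(values):
    """Nominal-the-best form: S/N = 10 * log10( ybar^2 / s^2 ), in dB."""
    n = len(values)
    ybar = sum(values) / n
    s2 = sum((y - ybar) ** 2 for y in values) / (n - 1)
    return 10 * math.log10(ybar ** 2 / s2)
```

In a Taguchi analysis, the factor-level combination maximizing the chosen S/N ratio is selected, and ANOVA then apportions the observed S/N variation among the control factors.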

  9. Accounting for uncertainty in the historical response rate of the standard treatment in single-arm two-stage designs based on Bayesian power functions.

    Science.gov (United States)

    Matano, Francesca; Sambucini, Valeria

    2016-11-01

    In phase II single-arm studies, the response rate of the experimental treatment is typically compared with a fixed target value that should ideally represent the true response rate for the standard of care therapy. Generally, this target value is estimated through previous data, but the inherent variability in the historical response rate is not taken into account. In this paper, we present a Bayesian procedure to construct single-arm two-stage designs that allows to incorporate uncertainty in the response rate of the standard treatment. In both stages, the sample size determination criterion is based on the concepts of conditional and predictive Bayesian power functions. Different kinds of prior distributions, which play different roles in the designs, are introduced, and some guidelines for their elicitation are described. Finally, some numerical results about the performance of the designs are provided and a real data example is illustrated. Copyright © 2016 John Wiley & Sons, Ltd.
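
The core idea of treating the historical response rate as uncertain, rather than as a fixed target, can be sketched with a Monte Carlo comparison of two Beta posteriors. This is a generic illustration in the spirit of the paper, not its conditional/predictive power criteria; all prior parameters and counts below are hypothetical.

```python
import numpy as np

def prob_superiority(x_e, n_e, x_s, n_s,
                     a_e=1, b_e=1, a_s=1, b_s=1,
                     draws=200_000, seed=3):
    """Posterior probability that the experimental response rate exceeds
    the standard one, with Beta priors on both rates.

    Modelling the historical control data as x_s responses out of n_s
    patients propagates the uncertainty in the target value instead of
    fixing it at a point estimate."""
    rng = np.random.default_rng(seed)
    p_e = rng.beta(a_e + x_e, b_e + n_e - x_e, draws)  # experimental arm posterior
    p_s = rng.beta(a_s + x_s, b_s + n_s - x_s, draws)  # historical control posterior
    return float(np.mean(p_e > p_s))
```

With a small historical sample, the control posterior is wide and the probability of superiority is pulled toward 0.5, which is precisely the effect of acknowledging uncertainty in the target response rate.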

  10. Advanced LIGO Two-Stage Twelve-Axis Vibration Isolation and Positioning Platform. Part 1: Design and Production Overview

    CERN Document Server

    Matichard, Fabrice; Mason, Kenneth; Mittleman, Richard; Abbott, Benjamin; Abbott, Samuel; Allwine, Eric; Barnum, Samuel; Birch, Jeremy; Biscans, Sebastien; Clark, Daniel; Coyne, Dennis; DeBra, Dan; DeRosa, Ryan; Foley, Stephany; Fritschel, Peter; Giaime, Joseph A; Gray, Corey; Grabeel, Gregory; Hanson, Joe; Hillard, Michael; Kissel, Jeffrey; Kucharczyk, Christopher; Le Roux, Adrien; Lhuillier, Vincent; Macinnis, Myron; OReilly, Brian; Ottaway, David; Paris, Hugo; Puma, Michael; Radkins, Hugh; Ramet, Celine; Robinson, Mitchell; Ruet, Laurent; Sareen, Pradeep; Shoemaker, David; Stein, Andy; Thomas, Jeremy; Vargas, Michael; Warner, Jimmy

    2014-01-01

    New generations of gravitational wave detectors require unprecedented levels of vibration isolation. This paper presents the final design of the vibration isolation and positioning platform used in Advanced LIGO to support the interferometer's core optics. This five-ton, two-and-a-half-meter-wide system operates in ultra-high vacuum. It features two stages of isolation mounted in series. The stages are imbricated to reduce the overall height. Each stage provides isolation in all directions of translation and rotation. The system is instrumented with a unique combination of low-noise relative and inertial sensors. The active control provides isolation from 0.1 Hz to 30 Hz, bringing the platform motion down to 10^(-11) m/Hz^(0.5) at 1 Hz. Active and passive isolation combine to bring the platform motion below 10^(-12) m/Hz^(0.5) at 10 Hz. The passive isolation lowers the motion below 10^(-13) m/Hz^(0.5) at 100 Hz. The paper describes how the platform has been engineered not only to meet the isolation requirements, but a...

  11. Design and Characterization of Thin Stainless Steel Burst Disks for Increasing Two-Stage Light Gas Launcher Efficiency

    Science.gov (United States)

    Tylka, Jonathan M.; Johnson, Kenneth L.; Henderson, Donald; Rodriguez, Karen

    2012-01-01

    Laser-etched 300 series Stainless Steel Burst Disks (SSBD) ranging between 0.178 mm (0.007-in.) and 0.508 mm (0.020-in.) thick were designed for use in a 17-caliber two-stage light gas launcher. First, a disk manufacturing method was selected using a combination of wire electrical discharge machining (EDM) to form the blank disks and laser etching to define the petaling fracture pattern. Second, a replaceable insert was designed to go between the SSBD and the barrel. This insert reduced the stress concentration between the SSBD and the barrel, provided a place for the petals of the SSBD to open, and protected the rifling on the inside of the barrel. Thereafter, a design of experiments was implemented to test and characterize the burst characteristics of SSBDs. Extensive hydrostatic burst testing of the SSBDs was performed to complete the design of experiments study with one hundred and seven burst tests. The experiment simultaneously tested the effects of the following: two SSBD material states (full hard, annealed); five SSBD thicknesses, 0.178, 0.254, 0.305, 0.381, and 0.508 mm (0.007, 0.010, 0.012, 0.015, 0.020-in.); two grain directions; the number of times the laser etch pattern was repeated (varying between 5-200 times); two heat sink configurations (with and without heat sink); and two barrel configurations (with and without insert). These tests resulted in the quantification of the relationship between SSBD thickness, laser etch parameters, and desired burst pressure. Of the factors investigated, only thickness and number of laser etches were needed to develop a mathematical relationship predicting the hydrostatic burst pressure of disks using the same barrel configuration. The fracture surfaces of two representative SSBD bursts were then investigated with a scanning electron microscope, one burst hydrostatically in a fixture and another dynamically in the launcher. The fracture analysis verified that both burst conditions resulted in a ductile overload failure.

  12. The Efficiency Level in the Estimation of the Nigerian Population: A Comparison of One-Stage and Two-Stage Sampling Technique (A Case Study of the 2006 Census of Nigerians)

    Directory of Open Access Journals (Sweden)

    T.J. Akingbade

    2014-09-01

    This research work compares the one-stage sampling technique (simple random sampling) and the two-stage sampling technique for estimating the population total of Nigerians using the 2006 census result. A sample of twenty (20) states was selected out of a population of thirty-six (36) states at the primary sampling unit (PSU) level, and one-third of each state selected at the PSU was sampled at the secondary sampling unit (SSU) level and analyzed. The result shows that, with the same sample size at the PSU, the one-stage sampling technique (simple random sampling) is more efficient than the two-stage sampling technique and is hence recommended.
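
    The efficiency ranking reported here can be reproduced in miniature: with the same number of primary units, subsampling within each selected unit adds a second-stage variance component. A small simulation sketch on a synthetic population (all names and numbers below are illustrative, not the census data):

    ```python
    import random

    random.seed(7)

    # Toy population: 36 "states", each containing 12 units.
    N, M = 36, 12
    pop = [[random.uniform(50, 150) for _ in range(M)] for _ in range(N)]
    true_total = sum(sum(state) for state in pop)

    def one_stage_estimate(m):
        # SRS of m states; every unit in a sampled state is enumerated.
        sample = random.sample(pop, m)
        return (N / m) * sum(sum(state) for state in sample)

    def two_stage_estimate(m, m2):
        # SRS of m states, then SRS of m2 of the M units within each.
        sample = random.sample(pop, m)
        return (N / m) * sum((M / m2) * sum(random.sample(state, m2))
                             for state in sample)

    def variance(xs):
        mu = sum(xs) / len(xs)
        return sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)

    reps = 20000
    one = [one_stage_estimate(20) for _ in range(reps)]
    two = [two_stage_estimate(20, 4) for _ in range(reps)]

    # Both estimators are unbiased, but subsampling adds a second-stage
    # variance component, so the one-stage estimator is more efficient.
    print(variance(one) < variance(two))  # → True
    ```
    
    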

  13. Design, Modelling and Simulation of Two-Phase Two-Stage Electronic System with Orthogonal Output for Supplying of Two-Phase ASM

    Directory of Open Access Journals (Sweden)

    Michal Prazenica

    2011-01-01

    This paper deals with two-stage two-phase electronic systems with orthogonal output voltages and currents (DC/AC/AC). The design of a two-stage DC/AC/AC high-frequency converter with two-phase orthogonal output using single-phase matrix converters is also introduced. Their output voltages are strongly nonharmonic, so they must be pulse-modulated to obtain the requested nearly sinusoidal currents with low total harmonic distortion. Simulation results of the matrix converter for both steady and transient states with IM motors are given in the paper, along with experimental verification under R-L load. The simulation results confirm a very good time-waveform of the phase current, and the system seems suitable for low-cost applications in the automotive/aerospace industries and applications with high-frequency voltage sources.

  14. Two-Stage Control Design of a Buck Converter/DC Motor System without Velocity Measurements via a Σ−Δ-Modulator

    OpenAIRE

    R. Silva-Ortigoza; García-Sánchez, J. R.; J. M. Alba-Martínez; Hernández-Guzmán, V. M.; Marcelino-Aranda, M.; Taud, H.; Bautista-Quintero, R.

    2013-01-01

    This paper presents a two-stage control design for the “Buck power converter/DC motor” system, which makes it possible to perform the sensorless angular velocity trajectory tracking task. The differential flatness property of the DC-motor model is exploited in order to propose a first-stage controller, which is designed to achieve the desired angular velocity trajectory. This controller provides the voltage profiles that must be tracked by the Buck converter. Then, a second-stage controller is meant to ...

  15. Investigation of two-stage air-cooled turbine suitable for flight at Mach number of 2.5 II : blade design

    Science.gov (United States)

    Miser, James W; Stewart, Warner L

    1957-01-01

    A blade design study is presented for a two-stage air-cooled turbine suitable for flight at a Mach number of 2.5, for which velocity diagrams have been previously obtained. The detailed procedure used in the design of the blades is given. In addition, the design blade shapes, surface velocity distributions, inner and outer wall contours, and other design data are presented. Of all the blade rows, the first-stage rotor has the highest solidity, with a value of 2.289 at the mean section. The second-stage stator also has a high mean-section solidity of 1.927, mainly because of its high inlet whirl. The second-stage rotor has the highest value of the suction-surface diffusion parameter, 0.151. All other blade rows have values of this parameter under 0.100.

  16. Crystal Structures and Inhibition Kinetics Reveal a Two-Stage Catalytic Mechanism with Drug Design Implications for Rhomboid Proteolysis.

    Science.gov (United States)

    Cho, Sangwoo; Dickey, Seth W; Urban, Siniša

    2016-02-04

    Intramembrane proteases signal by releasing proteins from the membrane, but despite their importance, their enzymatic mechanisms remain obscure. We probed rhomboid proteases with reversible, mechanism-based inhibitors that allow precise kinetic analysis and faithfully mimic the transition state structurally. Unexpectedly, inhibition by peptide aldehydes is non-competitive, revealing that in the Michaelis complex, substrate does not contact the catalytic center. Structural analysis in a membrane revealed that all extracellular loops of rhomboid make stabilizing interactions with substrate, but mainly through backbone interactions, explaining rhomboid's broad sequence selectivity. At the catalytic site, the tetrahedral intermediate lies covalently attached to the catalytic serine alone, with the oxyanion stabilized by unusual tripartite interactions with the side chains of H150, N154, and the backbone of S201. We also visualized unexpected substrate-enzyme interactions at the non-essential P2/P3 residues. These "extra" interactions foster potent rhomboid inhibition in living cells, thereby opening avenues for rational design of selective rhomboid inhibitors. Copyright © 2016 Elsevier Inc. All rights reserved.
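
    The non-competitive inhibition the authors observe has a standard kinetic signature: apparent Vmax falls by a factor (1 + [I]/Ki) while Km is unchanged, because the inhibitor does not compete for the substrate site. A sketch of that textbook rate law with illustrative parameters, not the paper's measurements:

    ```python
    def rate(S, Vmax, Km, I=0.0, Ki=float("inf")):
        # Non-competitive inhibition: the apparent Vmax is scaled down by
        # (1 + I/Ki) while Km is unchanged, since the inhibitor binds
        # away from the substrate-binding site.
        return (Vmax / (1.0 + I / Ki)) * S / (Km + S)

    # With I = Ki, the maximal velocity is halved, but the substrate
    # concentration giving half-maximal velocity (Km) stays the same:
    v_sat_free = rate(1e6, Vmax=100.0, Km=5.0)
    v_sat_inhib = rate(1e6, Vmax=100.0, Km=5.0, I=2.0, Ki=2.0)
    print(round(v_sat_free), round(v_sat_inhib))  # → 100 50
    ```

    Competitive inhibition would instead raise the apparent Km and leave Vmax untouched, which is how the kinetic pattern discriminates the two mechanisms.
    
    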

  17. Evaluating Public Health Interventions: 3. The Two-Stage Design for Confounding Bias Reduction-Having Your Cake and Eating It Two.

    Science.gov (United States)

    Spiegelman, Donna; Rivera-Rodriguez, Claudia L; Haneuse, Sebastien

    2016-07-01

    In public health evaluations, confounding bias in the estimate of the intervention effect will typically threaten the validity of the findings. It is a common misperception that the only way to avoid this bias is to measure detailed, high-quality data on potential confounders for every intervention participant, but this strategy is often infeasible. Rather than ignoring confounding altogether, the two-stage design and analysis, in which detailed, high-quality confounding data are obtained for only a small subsample, can be considered. We describe the two-stage design and analysis approach, and illustrate its use in the evaluation of an intervention conducted in Dar es Salaam, Tanzania, of an enhanced community health worker program to improve antenatal care uptake.

  18. Two-Stage Control Design of a Buck Converter/DC Motor System without Velocity Measurements via a Σ−Δ-Modulator

    Directory of Open Access Journals (Sweden)

    R. Silva-Ortigoza

    2013-01-01

    differential flatness property of the DC-motor model is exploited in order to propose a first-stage controller, which is designed to achieve the desired angular velocity trajectory. This controller provides the voltage profiles that must be tracked by the Buck converter. A second-stage controller is then meant to assure the aforementioned tracking. This controller is based on the flatness property of the Buck power converter model, which provides the input voltage to the DC motor. Because the proposed two-stage controller uses the average model of the system, a Σ−Δ-modulator is employed as a practical and effective implementation. Finally, in order to verify the control performance of this approach, numerical simulations are included.

  19. Studying feasibility and effects of a two-stage nursing staff training in residential geriatric care using a 30 month mixed-methods design [ISRCTN24344776

    Directory of Open Access Journals (Sweden)

    Hantikainen Virpi

    2011-05-01

    Abstract Background Transfer techniques and lifting weights often cause back pain and disorders for nurses in geriatric care. The Kinaesthetics care conception claims to be an alternative, yielding benefits for nurses as well as for clients. Starting a multi-step research program on the effects of Kinaesthetics, we assess the feasibility of a two-stage nursing staff training and a pre-post research design. Using quantitative and qualitative success criteria, we address mobilisation from the bed to a chair and back, walking with aid, and positioning in bed, on the staff level as well as on the resident level. In addition, effect estimates should help to decide on and to prepare a controlled trial. Methods/Design Standard basic and advanced Kinaesthetics courses (each comprising four subsequent days and an additional counselling day during the following four months) are offered to n = 36 out of 60 nurses in a residential geriatric care home, who are in charge of 76 residents. N = 22 residents needing movement support are participating in this study. On the staff level, measurements include focus group discussions, questionnaires, physical strain self-assessment (Borg scale), video recordings, and external observation of patient assistance skills using a specialised instrument (SOPMAS). Questionnaires used on the resident level include safety, comfort, pain, and level of own participation during mobilisation. A functional mobility profile is assessed using a specialised test procedure (MOTPA). Measurements will take place at baseline (T0), after basic training (T1), and after the advanced course (T2). Follow-up focus groups will be offered at T1 and 10 months later (T3). Discussion Ten criteria for feasibility success are established before the trial, assigned to resources (missing data), processes (drop-out of nurses and residents) and science (minimum effects criteria). This will help to make a rational decision on entering the next stage of the research

  20. THE LIBERATION OF ARSENOSUGARS FROM MATRIX COMPONENTS IN DIFFICULT TO EXTRACT SEAFOOD SAMPLES UTILIZING TMAOH/ACETIC ACID SEQUENTIALLY IN A TWO-STAGE EXTRACTION PROCESS

    Science.gov (United States)

    Sample extraction is one of the most important steps in arsenic speciation analysis of solid dietary samples. One of the problem areas in this analysis is the partial extraction of arsenicals from seafood samples. The partial extraction allows the toxicity of the extracted arse...

  1. Simulation, design and proof-of-concept of a two-stage continuous hydrothermal flow synthesis reactor for synthesis of functionalized nano-sized inorganic composite materials

    DEFF Research Database (Denmark)

    Zielke, Philipp; Xu, Yu; Simonsen, Søren Bredmose

    2016-01-01

    Computational fluid dynamics simulations were employed to evaluate several mixer geometries for a novel two-stage continuous hydrothermal flow synthesis reactor. The addition of a second stage holds the promise of allowing the synthesis of functionalized nano-materials as for example core...

  2. LACIE sampling design

    Science.gov (United States)

    Feiveson, A. H.; Chhikara, R. S.; Hallum, C. R. (Principal Investigator)

    1979-01-01

    The sampling design in LACIE consisted of two major components, one for wheat acreage estimation and one for wheat yield prediction. The acreage design was basically a classical survey for which the sampling unit was a 5- by 6-nautical mile segment; however, there were complications caused by measurement errors and loss of data. Yield was predicted by sampling meteorological data from weather stations within a region and then using those data as input to previously fitted regression equations. Wheat production was not estimated directly, but was computed by multiplying yield and acreage estimates. The allocation of samples to countries is discussed as well as the allocation and selection of segments in strata/substrata.
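
    Since production was computed as the product of independently estimated acreage and yield, its sampling variance can be approximated by the first-order delta method, Var(AY) ≈ Y²Var(A) + A²Var(Y). A sketch with hypothetical numbers, not LACIE's actual figures:

    ```python
    def production_estimate(acreage, var_acreage, yld, var_yield):
        """Delta-method (first-order) variance for the product of two
        independent estimates: Var(AY) ~ Y^2 Var(A) + A^2 Var(Y)."""
        production = acreage * yld
        var_production = yld ** 2 * var_acreage + acreage ** 2 * var_yield
        return production, var_production

    # Hypothetical numbers: 10 million acres (SE 0.4M) at 30 bu/acre (SE 1.2).
    prod, var_prod = production_estimate(10e6, 0.4e6 ** 2, 30.0, 1.2 ** 2)
    print(prod, var_prod ** 0.5)  # production and its approximate standard error
    ```
    
    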

  3. A Two Stage Classification Approach for Handwritten Devanagari Characters

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Malik, Latesh

    2010-01-01

    The paper presents a two-stage classification approach for handwritten Devanagari characters. The first stage uses structural properties such as the shirorekha and the spine of a character, and the second stage exploits some intersection features of characters, which are fed to a feedforward neural network. A simple histogram-based method does not work for finding the shirorekha or the vertical bar (spine) in handwritten Devanagari characters, so we designed a differential-distance-based technique to find a near-straight line for the shirorekha and spine. This approach has been tested on 50,000 samples, achieving an 89.12% success rate.
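
    The two-stage idea, a cheap structural test that narrows the candidate classes followed by a finer classifier, can be sketched on toy data. The threshold rule and nearest-centroid stage below merely stand in for the paper's shirorekha/spine detection and neural network:

    ```python
    CENTROIDS = {
        "A": (1.0, 1.0), "B": (1.0, 4.0),   # classes whose structural cue is present
        "C": (4.0, 1.0), "D": (4.0, 4.0),   # classes whose cue is absent
    }
    HAS_CUE = {"A", "B"}

    def stage1(x):
        # Stage 1: a cheap structural test (here, a threshold on feature 0)
        # that only narrows the set of candidate classes.
        return x[0] < 2.5

    def stage2(x, candidates):
        # Stage 2: nearest-centroid disambiguation among the survivors.
        return min(candidates,
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(x, CENTROIDS[c])))

    def classify(x):
        candidates = HAS_CUE if stage1(x) else set(CENTROIDS) - HAS_CUE
        return stage2(x, candidates)

    print(classify((0.9, 3.8)), classify((4.2, 0.8)))  # → B C
    ```

    The benefit is the same as in the paper: the expensive second stage only ever discriminates among a reduced candidate set.
    
    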

  4. Saliency Based Tracking Method for Abrupt Motions via Two-stage Sampling

    Institute of Scientific and Technical Information of China (English)

    江晓莲; 李翠华; 李雄宗

    2014-01-01

    In this paper, a saliency-based tracking method via two-stage sampling is proposed for abrupt motions. Firstly, visual saliency is introduced as prior knowledge into the Wang-Landau Monte Carlo (WLMC)-based tracking algorithm. By dividing the spatial space into disjoint sub-regions and assigning each sub-region a saliency value, prior knowledge of the promising regions is obtained; the saliency values of the sub-regions are then integrated into the Markov chain Monte Carlo (MCMC) acceptance mechanism to guide effective state sampling. Secondly, considering that abrupt-motion sequences contain both abrupt and smooth motions, a two-stage sampling model is built into the algorithm. In the first stage, the model detects the motion type of the target. According to the result of the first stage, the model chooses either the saliency-based WLMC method to track abrupt motions or the double-chain MCMC method to track smooth motions of the target in the second stage. The algorithm efficiently addresses tracking of abrupt motions while smooth motions are also accurately tracked. Experimental results demonstrate that this approach outperforms state-of-the-art algorithms on abrupt-motion sequences and public benchmark sequences in terms of accuracy and robustness.

  5. The construction of two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1988-01-01

    Although two-stage testing is not the most efficient form of adaptive testing, it has some advantages. In this paper, linear programming models are given for the construction of two-stage tests. In these models, practical constraints with respect to, among other things, test composition, administrat

  6. Off-design Characteristics of IGCC System Based on Two-stage Coal-slurry Gasification Technology

    Institute of Scientific and Technical Information of China (English)

    刘耀鑫; 吴少华; 李振中; 王阳

    2012-01-01

    The integrated gasification combined cycle (IGCC) system is often operated at off-design conditions. In order to learn the off-design characteristics of IGCC, the software ThermoFlex was used to establish a model of a 200 MW IGCC system based on the two-stage coal-slurry gasification technology. The effects of gas turbine load, air separation unit integration coefficient (Xas), atmospheric temperature, and atmospheric pressure on the performance of the IGCC system were investigated. The results show that gross and net electric efficiency first increases and then decreases as the atmospheric temperature increases or the gas turbine load decreases. The gross electric efficiency decreases when the air separation unit integration coefficient increases. Atmospheric pressure has little effect on system efficiency. The application of two-stage coal-slurry gasification technology is effective in improving IGCC system performance under the above running conditions. The results provide a reference for the design and operation of IGCC power plants.

  7. Gas loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, Lee; Bartram, Brian; Dattelbaum, Dana; Lang, John; Morris, John

    2015-06-01

    A novel gas loading system was designed for the specific application of remotely loading high-purity gases into targets for gas-gun driven plate impact experiments. The high-purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel, with wetted seals made from Kalrez and Teflon. Preliminary testing was completed to ensure a proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown. LA-UR-15-20521
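
    The sensitivity to ambient temperature noted above follows from the ideal gas law at constant volume: in a sealed ~1.38 cc cell, P/T is constant, so even a 1 K drift shifts the fill pressure measurably. A quick sketch with a hypothetical fill pressure:

    ```python
    def pressure_after_drift(p_initial_kpa, t_initial_k, t_final_k):
        # Isochoric ideal gas: P/T is constant, so P2 = P1 * (T2 / T1).
        return p_initial_kpa * t_final_k / t_initial_k

    # A 1 K drift from room temperature on a hypothetical 500 kPa fill:
    p2 = pressure_after_drift(500.0, 293.15, 294.15)
    print(round(p2 - 500.0, 2))  # → 1.71 (kPa rise from a single kelvin)
    ```

    This is why continuous pressure monitoring up to the moment of firing matters for such small cells.
    
    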

  8. Two-Stage Modelling Of Random Phenomena

    Science.gov (United States)

    Barańska, Anna

    2015-12-01

    The main objective of this publication was to present a two-stage algorithm for modelling random phenomena, based on multidimensional function modelling, using the examples of modelling the real estate market for the purpose of real estate valuation and estimating model parameters of the vertical displacements of foundations. The first stage of the presented algorithm includes the selection of a suitable form of the function model. In classical algorithms based on function modelling, the prediction of the dependent variable is the value obtained directly from the model. The better the model reflects the relationship between the independent variables and their effect on the dependent variable, the more reliable the model value is. In this paper, an algorithm has been proposed in which the value obtained from the model is adjusted by a random correction determined from the residuals of the model for those cases which, in a separate analysis, were considered most similar to the object for which we want to model the dependent variable. The effect of applying the developed quantitative procedures for calculating the corrections, and of the qualitative methods for assessing similarity, on the final outcome of the prediction and its accuracy was examined by statistical methods, mainly using appropriate parametric tests of significance. The presented algorithm is designed to bring the modelled value of the dependent variable closer to its value in reality and, at the same time, to have it "smoothed out" by a well-fitted modelling function.
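
    The two-stage prediction described here, a model value adjusted by a residual-based correction drawn from the most similar cases, can be sketched as follows. The linear model and distance-based similarity are simplifying stand-ins for the multidimensional modelling and qualitative similarity assessment in the paper:

    ```python
    def fit_line(xs, ys):
        # Ordinary least squares for y = b0 + b1 * x.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
              / sum((x - mx) ** 2 for x in xs))
        return my - b1 * mx, b1

    def two_stage_predict(x_new, xs, ys, k=3):
        # Stage 1: the fitted model supplies a first-pass prediction.
        b0, b1 = fit_line(xs, ys)
        residuals = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
        # Stage 2: correct it with the mean residual of the k most
        # similar training cases (similarity = feature distance here).
        nearest = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x_new))[:k]
        return b0 + b1 * x_new + sum(residuals[i] for i in nearest) / k

    # Data with a local effect a straight line cannot capture:
    xs = list(range(10))
    ys = [2.0 * x + (8.0 if x >= 7 else 0.0) for x in xs]
    b0, b1 = fit_line(xs, ys)
    raw = b0 + b1 * 8
    corrected = two_stage_predict(8, xs, ys, k=3)
    print(abs(corrected - 24.0) < abs(raw - 24.0))  # → True
    ```

    The correction borrows information from cases most like the query, which is exactly what the global function fit "smooths away".
    
    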

  9. A two-stage rank test using density estimation

    NARCIS (Netherlands)

    Albers, Willem/Wim

    1995-01-01

    For the one-sample problem, a two-stage rank test is derived which realizes a required power against a given local alternative, for all sufficiently smooth underlying distributions. This is achieved using asymptotic expansions resulting in a precision of order m^(-1), where m is the size of the first

  10. Dynamic Modelling of the Two-stage Gasification Process

    DEFF Research Database (Denmark)

    Gøbel, Benny; Henriksen, Ulrik B.; Houbak, Niels

    1999-01-01

    A two-stage gasification pilot plant was designed and built as a co-operative project between the Technical University of Denmark and the company REKA. A dynamic, mathematical model of the two-stage pilot plant was developed to serve as a tool for optimising the process and the operating conditions of the gasification plant. The model consists of modules corresponding to the different elements in the plant. The modules are coupled together through mass and heat conservation. Results from the model are compared with experimental data obtained during steady and unsteady operation of the pilot plant. A good...

  11. [Saarland Growth Study: sampling design].

    Science.gov (United States)

    Danker-Hopfe, H; Zabransky, S

    2000-01-01

    The use of reference data to evaluate the physical development of children and adolescents is part of the daily routine in the paediatric outpatient clinic. The construction of such reference data is based on the collection of extensive survey data. There are different kinds of reference data: cross-sectional references, which are based on data collected from a large representative cross-sectional sample of the population; longitudinal references, which are based on follow-up surveys of usually smaller samples of individuals from birth to maturity; and mixed-longitudinal references, which are a combination of longitudinal and cross-sectional reference data. The advantages and disadvantages of the different methods of data collection and the resulting reference data are discussed. The Saarland Growth Study was conducted for several reasons: growth processes are subject to secular changes, there are no specific reference data for children and adolescents from this part of the country, and the growth charts in use in paediatric practice are possibly no longer appropriate. The Saarland Growth Study therefore served two purposes: (a) to create current regional reference data, and (b) to create a database for future studies on secular trends in the growth processes of children and adolescents from Saarland. The present contribution focuses on general remarks on the sampling design of (cross-sectional) growth surveys and its inferences for the design of the present study.

  12. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate.
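
    The sample size inflation for clustering can be illustrated with the usual design effect deff = 1 + (m - 1)ρ, where m is the cluster size and ρ the intracluster correlation. The paper's nonparametric procedure is more general, so treat this as a textbook approximation only:

    ```python
    from math import ceil

    def lqas_cluster_sample_size(n_srs, cluster_size, icc):
        """Inflate an SRS-based LQAS sample size by the design effect
        deff = 1 + (m - 1) * icc for clusters of size m, then convert the
        result into a number of clusters. This is the textbook
        approximation, not the paper's exact nonparametric procedure."""
        deff = 1.0 + (cluster_size - 1) * icc
        n_total = ceil(n_srs * deff)
        return n_total, ceil(n_total / cluster_size)

    # An SRS design needing 95 children, surveyed in villages of ~10 with
    # a hypothetical intracluster correlation of 0.1:
    print(lqas_cluster_sample_size(95, 10, 0.1))  # → (181, 19)
    ```

    With ρ = 0 the clustered design collapses back to the SRS sample size, which is a useful sanity check.
    
    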

  13. Techniques for multivariate sample design

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, M.A.

    1990-04-01

    In this report we consider sampling methods applicable to the multi-product Annual Fuel Oil and Kerosene Sales Report (Form EIA-821) Survey. For years prior to 1989, the purpose of the survey was to produce state-level estimates of total sales volumes for each of five target variables: residential No. 2 distillate, other retail No. 2 distillate, wholesale No. 2 distillate, retail residual, and wholesale residual. For the year 1989, the other retail No. 2 distillate and wholesale No. 2 distillate variables were replaced by a new variable defined to be the maximum of the two. The strata for this variable were crossed with the strata for the residential No. 2 distillate variable, resulting in a single stratified No. 2 distillate variable. Estimation for 1989 focused on the single No. 2 distillate variable and the two residual variables. Sampling accuracy requirements for each product were specified in terms of the coefficients of variation (CVs) for the various estimates based on data taken from recent surveys. The target population for the Form EIA-821 survey includes companies that deliver or sell fuel oil or kerosene to end-users. The Petroleum Product Sales Identification Survey (Form EIA-863) data base and numerous state and commercial lists provide the basis of the sampling frame, which is updated as new data become available. In addition, company/state-level volumes for distillates fuel oil, residual fuel oil, and motor gasoline are added to aid the design and selection process. 30 refs., 50 figs., 10 tabs.
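
    When per-product CV targets drive a stratified design like this, sample allocation across strata is typically done proportionally to N_h·S_h (Neyman allocation). A sketch with hypothetical strata, not the Form EIA-821 strata themselves:

    ```python
    def neyman_allocation(n, stratum_sizes, stratum_sds):
        """Allocate a total sample of n across strata proportionally to
        N_h * S_h (Neyman allocation), which minimizes the variance of the
        estimated total for a fixed overall sample size."""
        weights = [N_h * S_h for N_h, S_h in zip(stratum_sizes, stratum_sds)]
        total = sum(weights)
        return [round(n * w / total) for w in weights]

    # Hypothetical strata: a few large-volume sellers vary far more than
    # many small ones, so they receive a disproportionate share of the sample.
    alloc = neyman_allocation(200, stratum_sizes=[50, 300, 1000],
                              stratum_sds=[400.0, 100.0, 20.0])
    print(alloc)  # → [57, 86, 57]
    ```

    Note how the smallest stratum receives as many units as the largest because its within-stratum variability is twenty times higher.
    
    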

  14. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Based on more than 15 years of research and practical experience, the Technical University of Denmark (DTU) and COWI Consulting Engineers and Planners AS present the two-stage gasification process, a concept for high-efficiency gasification of biomass producing negligible amounts of tars. In the two-stage gasification concept, the pyrolysis and the gasification processes are physically separated. The volatiles from the pyrolysis are partially oxidized, and the hot gases are used as gasification medium to gasify the char. Hot gases from the gasifier and a combustion unit can be used for drying, and a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed bed gasifier. This design is well proven during more than 1000 hours of testing with various...

  15. Two Stage Gear Tooth Dynamics Program

    Science.gov (United States)

    1989-08-01

    conditions and associated iteration procedure become more complex. This is due to both the increased number of components and to the time for a...solved for each stage in the two-stage solution. There are (3 + number of planets) degrees of freedom for each stage plus two degrees of freedom...should be devised. It should be noted that this is not a minor task. In general, each stage plus an input or output shaft will have 2 times (4 + number

  16. PROBABILITY SAMPLING DESIGNS FOR VETERINARY EPIDEMIOLOGY

    OpenAIRE

    Xhelil Koleci; Coryn, Chris L.S.; Kristin A. Hobson; Rruzhdi Keci

    2011-01-01

    The objective of sampling is to estimate population parameters, such as incidence or prevalence, from information contained in a sample. In this paper, the authors describe sources of error in sampling; basic probability sampling designs, including simple random sampling, stratified sampling, systematic sampling, and cluster sampling; estimating a population size if unknown; and factors influencing sample size determination for epidemiological studies in veterinary medicine.

  17. Two-stage replication of previous genome-wide association studies of AS3MT-CNNM2-NT5C2 gene cluster region in a large schizophrenia case-control sample from Han Chinese population.

    Science.gov (United States)

    Guan, Fanglin; Zhang, Tianxiao; Li, Lu; Fu, Dongke; Lin, Huali; Chen, Gang; Chen, Teng

    2016-10-01

    Schizophrenia is a devastating psychiatric condition with high heritability. Replicating the specific genetic variants that increase susceptibility to schizophrenia in different populations is critical to better understanding schizophrenia. CNNM2 and NT5C2 are genes recently identified as susceptibility genes for schizophrenia in Europeans, but the exact mechanism by which these genes confer risk for schizophrenia remains unknown. In this study, we examined the potential genetic susceptibility to schizophrenia of a three-gene cluster region, AS3MT-CNNM2-NT5C2. We implemented a two-stage strategy to conduct association analyses of the targeted regions with schizophrenia. A total of 8218 individuals were recruited, and 45 pre-selected single nucleotide polymorphisms (SNPs) were genotyped. Both single-marker and haplotype-based analyses were conducted, in addition to imputation analysis to increase the coverage of our genetic markers. Two SNPs, rs11191419 (OR=1.24, P=7.28×10(-5)) and rs11191514 (OR=1.24, P=0.0003), with significant independent effects were identified. These results were supported by the data from both the discovery and validation stages. Further haplotype and imputation analyses also validated these results, and bioinformatics analyses indicated that CALHM1, which is located approximately 630 kb away from CNNM2, might be a susceptibility gene for schizophrenia. Our results provide further support that AS3MT, CNNM2 and CALHM1 are involved in the etiology and pathogenesis of schizophrenia, suggesting these genes are potential targets of interest for the improvement of disease management and the development of novel pharmacological strategies.

  18. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in situ. A specially designed self-compacting grout is then introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense and homogeneous, and in general has improved engineering properties and durability. This paper presents the results of a study of the effect of silica fume (SF) and superplasticizer (SP) admixtures on the compressive and tensile strength of TSC using various combinations of water-to-cement ratio (w/c) and cement-to-sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples (150 mm × 300 mm) of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, while superplasticizer was added at a dosage of 2% of cement weight. Results indicated that both tensile and compressive strength of TSC can be statistically derived as functions of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, that an increase in water/cement ratio leads to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of silica fume and superplasticizer caused a significant increase in strength relative to control mixes.
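    The abstract reports that strength can be statistically derived as a function of w/c and c/s. A minimal least-squares sketch on synthetic data illustrates the idea; the functional form (an inverse w/c term) and all coefficients below are assumptions for illustration only, not values from the paper.

```python
import numpy as np

# Synthetic illustration: fit strength as a linear function of 1/(w/c) and c/s,
# echoing the inverse water/cement relationship. Coefficients are NOT from the paper.
rng = np.random.default_rng(7)
wc = np.repeat([0.45, 0.55, 0.85], 3)              # water-to-cement ratios tested
cs = np.tile([0.5, 1.0, 1.5], 3)                   # cement-to-sand ratios tested
strength = 5.0 + 12.0 / wc + 4.0 * cs + rng.normal(0, 1.0, wc.size)

# Ordinary least squares: strength = a + b/(w/c) + c*(c/s)
A = np.column_stack([np.ones_like(wc), 1.0 / wc, cs])
coef, *_ = np.linalg.lstsq(A, strength, rcond=None)
pred = A @ coef
r = np.corrcoef(pred, strength)[0, 1]              # correlation of fit vs data
print(coef.round(2), round(r, 3))
```

    With real test data in place of the synthetic values, the same fit yields the correlation coefficients the paper reports.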

  19. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds...

  20. Two Stage Sibling Cycle Compressor/Expander.

    Science.gov (United States)

    1994-02-01

    PL-TR-94-1051. This final report was prepared by Mitchell/Stirling Machines/Systems, Inc., Berkeley, CA, under contract. Cited therein: L. Bauwens and M.P. Mitchell, "Regenerator Analysis: Validation of the MS*2 Stirling Cycle Code," Proc. XVIIIth International..., vol. 5, p. 424.

  1. Recursive algorithm for the two-stage EFOP estimation method

    Institute of Scientific and Technical Information of China (English)

    LUO GuiMing; HUANG Jian

    2008-01-01

    A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method was proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain estimation and frequency-domain estimation. It has improved anti-disturbance performance and can precisely identify models from fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is an iterative method that requires too much computation to work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Simulation examples are included to demonstrate the validity of the new method.
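    The record describes the recursive variant only at a high level. As a generic illustration of online recursive parameter updating (ordinary recursive least squares, not the EFOP method itself), a minimal sketch on a simulated ARX system:

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update.
    theta: current parameter estimate (n,); P: covariance-like matrix (n, n)
    phi: regressor vector for the new sample (n,); y: new output (scalar)
    lam: forgetting factor (1.0 = ordinary RLS)."""
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)        # gain vector
    err = y - phi @ theta                 # prediction error on the new sample
    theta = theta + k * err
    P = (P - np.outer(k, Pphi)) / lam
    return theta, P

# Identify y[t] = a*y[t-1] + b*u[t-1] from simulated data (a=0.8, b=0.5)
rng = np.random.default_rng(0)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t-1] + 0.5 * u[t-1] + 0.01 * rng.normal()

theta = np.zeros(2)
P = np.eye(2) * 1000.0                    # large initial uncertainty
for t in range(1, 500):
    phi = np.array([y[t-1], u[t-1]])
    theta, P = rls_update(theta, P, phi, y[t])

print(theta)                              # converges toward [0.8, 0.5]
```

    Each update costs O(n²) rather than refitting the whole batch, which is the point of a recursive formulation for online use.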

  2. Classification in two-stage screening.

    Science.gov (United States)

    Longford, Nicholas T

    2015-11-10

    Decision theory is applied to the problem of setting thresholds in medical screening when it is organised in two stages. The first stage involves a less expensive procedure that can be applied on a mass scale and classifies an individual as a negative or a likely positive. In the second stage, the likely positives are subjected to another test that classifies them as (definite) positives or negatives. The second-stage test is more accurate, but also more expensive and more involved, so there are incentives to restrict its application. Robustness of the method with respect to the parameters, some of which have to be set by elicitation, is assessed by sensitivity analysis.
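    The decision-theoretic setup can be sketched with a Monte Carlo toy model. The score distributions, prevalence, and all cost figures below are assumptions for illustration (not the paper's elicited values); the two thresholds are then chosen by grid search over expected cost:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
prev = 0.05                                 # assumed disease prevalence
disease = rng.random(N) < prev

# Hypothetical test scores: diseased individuals score higher on both tests;
# the second-stage test separates the groups better (more accurate).
s1 = rng.normal(loc=np.where(disease, 1.5, 0.0), scale=1.0)   # cheap mass test
s2 = rng.normal(loc=np.where(disease, 2.5, 0.0), scale=1.0)   # accurate test

cost_test1, cost_test2 = 1.0, 25.0          # per-test costs (assumed)
cost_fn, cost_fp = 500.0, 50.0              # misclassification losses (assumed)

def expected_cost(t1, t2):
    likely_pos = s1 > t1                    # stage 1: negative vs likely positive
    final_pos = likely_pos & (s2 > t2)      # stage 2 applied only to likely positives
    cost = cost_test1 * N                   # everyone takes the cheap test
    cost += cost_test2 * likely_pos.sum()   # only likely positives take test 2
    cost += cost_fn * (disease & ~final_pos).sum()
    cost += cost_fp * (~disease & final_pos).sum()
    return cost / N

# Grid search over the two thresholds
grid = np.linspace(-1, 3, 41)
best = min(((expected_cost(a, b), a, b) for a in grid for b in grid))
print(best)                                  # (minimal expected cost, t1, t2)
```

    Varying the assumed costs and prevalence in this sketch is exactly the kind of sensitivity analysis the paper uses to assess robustness.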

  3. Two stage gear tooth dynamics program

    Science.gov (United States)

    Boyd, Linda S.

    1989-01-01

    The epicyclic gear dynamics program was expanded to add the option of evaluating the tooth pair dynamics for two epicyclic gear stages with peripheral components. This was a practical extension to the program, as multiple gear stages are often used for speed reduction, space, weight, and/or auxiliary units. The option was developed for either stage to be a basic planetary, star, single external-external mesh, or single external-internal mesh. The two-stage system allows for modeling of the peripherals with an input mass and shaft, an output mass and shaft, and a connecting shaft. Execution of the initial test case indicated an instability in the solution, with the tooth pair loads growing to excessive magnitudes. A procedure to trace the instability is recommended, as well as a method of reducing the program's computation time by reducing the number of boundary condition iterations.

  4. Measuring the Learning from Two-Stage Collaborative Group Exams

    CERN Document Server

    Ives, Joss

    2014-01-01

    A two-stage collaborative exam is one in which students first complete the exam individually, and then complete the same or similar exam in collaborative groups immediately afterward. To quantify the learning effect from the group component of these two-stage exams in an introductory Physics course, a randomized crossover design was used where each student participated in both the treatment and control groups. For each of the two two-stage collaborative group midterm exams, questions were designed to form matched near-transfer pairs with questions on an end-of-term diagnostic which was used as a learning test. For learning test questions paired with questions from the first midterm, which took place six to seven weeks before the learning test, an analysis using a mixed-effects logistic regression found no significant differences in learning-test performance between the control and treatment group. For learning test questions paired with questions from the second midterm, which took place one to two weeks prio...

  5. Sampling designs dependent on sample parameters of auxiliary variables

    CERN Document Server

    Wywiał, Janusz L

    2015-01-01

    The book offers a valuable resource for students and statisticians whose work involves survey sampling. Estimation of population parameters in finite and fixed populations assisted by auxiliary variables is considered. New sampling designs dependent on moments or quantiles of auxiliary variables are presented against the background of the classical methods. Accuracies of the estimators based on the original sampling designs are compared with classical estimation procedures. Specific conditional sampling designs are applied to problems of small area estimation as well as to estimation of quantiles of variables under study.

  6. Guided transect sampling - a new design combining prior information and field surveying

    Science.gov (United States)

    Anna Ringvall; Goran Stahl; Tomas Lamas

    2000-01-01

    Guided transect sampling is a two-stage sampling design in which prior information is used to guide the field survey in the second stage. In the first stage, broad strips are randomly selected and divided into grid-cells. For each cell a covariate value is estimated from remote sensing data, for example. The covariate is the basis for subsampling of a transect through...

  7. Research and design of a two-stage defensive system in network security

    Institute of Scientific and Technical Information of China (English)

    杨玉新

    2012-01-01

    A single traditional defense product is insufficient to build a secure network defense system. Drawing on several advanced network security solutions, this paper proposes a two-stage defense architecture: a network-based intrusion prevention system (NIPS) running under Linux serves as the first stage, and a host security agent serves as the second stage. Both stages maintain close communication with an administration center. The host security agent not only runs as software on each host of the internal network, but can also be installed on remote users' laptops and on hosted servers.

  8. Two-Stage Eagle Strategy with Differential Evolution

    CERN Document Server

    Yang, Xin-She

    2012-01-01

    Efficiency of an optimization process is largely determined by the search algorithm and its fundamental characteristics. In a given optimization, a single type of algorithm is used in most applications. In this paper, we investigate the Eagle Strategy recently developed for global optimization, which uses a two-stage strategy combining two different algorithms to improve the overall search efficiency. We discuss this strategy with differential evolution and then evaluate their performance by solving real-world optimization problems such as pressure vessel and speed reducer design. Results suggest that we can reduce the computing effort by a factor of up to 10 in many applications.
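    A minimal sketch of the two-stage idea: a coarse global exploration stage followed by local intensification with differential evolution. The objective function, population sizes, and stage-1 sampling scheme are simplifications for illustration (the original strategy uses Lévy flights for exploration):

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):                         # stand-in objective (not from the paper)
    return float(np.sum(x ** 2))

def eagle_strategy(f, dim=5, bounds=(-5.0, 5.0), n_explore=200,
                   pop=20, gens=100, F=0.8, CR=0.9):
    lo, hi = bounds
    # Stage 1: coarse global exploration with uniform random sampling
    # (the original Eagle Strategy uses Levy flights here)
    X = lo + (hi - lo) * rng.random((n_explore, dim))
    best = X[np.argmin([f(x) for x in X])]
    # Stage 2: local intensification with differential evolution around best point
    P = best + rng.normal(scale=0.5, size=(pop, dim))
    fit = np.array([f(x) for x in P])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            mutant = a + F * (b - c)                  # DE/rand/1 mutation
            cross = rng.random(dim) < CR              # binomial crossover
            trial = np.where(cross, mutant, P[i])
            ft = f(trial)
            if ft < fit[i]:                           # greedy selection
                P[i], fit[i] = trial, ft
    return P[np.argmin(fit)], fit.min()

x_best, f_best = eagle_strategy(sphere)
print(f_best)
```

    The division of labor is the point: cheap exploration locates a promising basin, and the more expensive local search is spent only there.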

  9. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  10. Planetary Sample Caching System Design Options

    Science.gov (United States)

    Collins, Curtis; Younse, Paulo; Backes, Paul

    2009-01-01

    Potential Mars Sample Return missions would aspire to collect small core and regolith samples using a rover with a sample acquisition tool and sample caching system. Samples would need to be stored in individual sealed tubes in a canister that could be transferred to a Mars ascent vehicle and returned to Earth. A sample handling, encapsulation and containerization system (SHEC) has been developed as part of an integrated system for acquiring and storing core samples for application to future potential MSR and other potential sample return missions. Requirements and design options for the SHEC system were studied and a recommended design concept developed. Two families of solutions were explored: 1) transfer of a raw sample from the tool to the SHEC subsystem, and 2) transfer of a tube containing the sample to the SHEC subsystem. The recommended design utilizes sample tool bit changeout as the mechanism for transferring tubes to, and samples in tubes from, the tool. The SHEC subsystem design, called the Bit Changeout Caching (BiCC) design, is intended for operations on a MER-class rover.

  11. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.
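    The two-stage least squares estimator analyzed in this note can be illustrated on simulated data with a deliberately endogenous regressor. All numbers below are assumed for illustration; the sketch shows the plain estimator, not the robust variant the note studies:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 5000

# Simulated endogeneity: the unobservable u drives both the regressor x and outcome y
z = rng.normal(size=n)                        # instrument, independent of u
u = rng.normal(size=n)
x = 0.9 * z + u + rng.normal(size=n)          # endogenous regressor
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # true coefficient on x is 2.0

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_ols = ols(X, y)                          # biased by the x-u correlation
x_hat = Z @ ols(Z, x)                         # stage 1: project x on the instrument
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)   # stage 2

print(beta_ols[1], beta_2sls[1])              # OLS biased upward; 2SLS near 2.0
```

    The note's concern is what happens to such estimators under contamination: a single aberrant observation can propagate through both stages, which is what the influence function of the two-stage estimator quantifies.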

  12. The Study on Mental Health at Work: Design and sampling.

    Science.gov (United States)

    Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit

    2017-08-01

    The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. First, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Second, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used to compare the population, the gross sample and the respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.
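    The two-stage cluster procedure described above (select municipalities, then draw addresses within them) can be sketched with simulated frame data. The municipality sizes and the probability-proportional-to-size selection at stage 1 are placeholders, not the actual S-MGA frame:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical frame: 12,227 municipalities with assumed employee counts (sizes)
n_muni = 12_227
sizes = rng.integers(50, 50_000, size=n_muni)

# Stage 1: select 206 municipalities with probability proportional to size
p = sizes / sizes.sum()
stage1 = rng.choice(n_muni, size=206, replace=False, p=p)

# Stage 2: draw a roughly equal share of the 13,590 addresses in each selected PSU
addresses_per_psu = 13_590 // 206            # about 65 addresses each
sample = {m: rng.choice(sizes[m], size=min(addresses_per_psu, sizes[m]),
                        replace=False)       # address IDs are just indices here
          for m in stage1}

total_addresses = sum(len(v) for v in sample.values())
print(len(sample), total_addresses)
```

    Clustering interviews inside 206 municipalities keeps field costs down at the price of some design effect, which is why the paper compares the respondents against the gross sample and the population.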

  13. A Two-Stage Waste Gasification Reactor for Mars In-Situ Resource Utilization Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design, build, and test a two-stage waste processing reactor for space applications. Our proposed technology converts waste from space missions into...

  14. Accuracy assessment with complex sampling designs

    Science.gov (United States)

    Raymond L. Czaplewski

    2010-01-01

    A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...

  15. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  16. Mobile Variable Depth Sampling System Design Study

    Energy Technology Data Exchange (ETDEWEB)

    BOGER, R.M.

    2000-08-25

    A design study is presented for a mobile, variable depth sampling system (MVDSS) that will support the treatment and immobilization of Hanford LAW and HLW. The sampler can be deployed in a 4-inch tank riser and has a design that is based on requirements identified in the Level 2 Specification (latest revision). The waste feed sequence for the MVDSS is based on Phase 1, Case 3S6 waste feed sequence. Technical information is also presented that supports the design study.

  17. DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [ORNL; Rice, C Keith [ORNL; Abdelaziz, Omar [ORNL; Shrestha, Som S [ORNL

    2015-01-01

    This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high stage compressor was modelled using a compressor map, and the low stage compressor was experimentally studied using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.


  19. Missing observations in multiyear rotation sampling designs

    Science.gov (United States)

    Gbur, E. E.; Sielken, R. L., Jr. (Principal Investigator)

    1982-01-01

    Because multiyear estimation of at-harvest stratum crop proportions is more efficient than single-year estimation, the behavior of multiyear estimators in the presence of missing acquisitions was studied. Only the (worst) case when a segment proportion cannot be estimated for the entire year is considered. The effect of these missing segments on the variance of the at-harvest stratum crop proportion estimator is considered when missing segments are not replaced, and when missing segments are replaced by segments not sampled in previous years. The principal recommendations are to replace missing segments according to some specified strategy, and to use a sequential procedure for selecting a sampling design; i.e., choose an optimal two-year design and then, based on the observed two-year design after segment losses have been taken into account, choose the best possible three-year design having the observed two-year parent design.

  20. Efficient adaptive designs with mid-course sample size adjustment in clinical trials

    CERN Document Server

    Bartroff, Jay

    2011-01-01

    Adaptive designs have been proposed for clinical trials in which the nuisance parameters or alternative of interest are unknown or likely to be misspecified before the trial. Whereas most previous works on adaptive designs and mid-course sample size re-estimation have focused on two-stage or group sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. Not only does this approach maintain the prescribed type I error probability, but it also provides a simple but asymptotically efficient sequential test whose finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to the optimal sequential design, determined by dynamic programming, in the simplified normal mean case with known variance and prespecified alternative, and superior to the existing two-stage designs and also to adaptive group sequential designs when the al...
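    A minimal sketch of mid-course sample size re-estimation in a two-stage normal-means setting: an internal pilot estimates the nuisance variance, and the total sample size is recomputed from it. The effect size, significance level, power, and pilot size are assumptions for illustration; this is not the paper's three-stage exponential-family procedure:

```python
import numpy as np
from math import ceil

rng = np.random.default_rng(4)

z_alpha = 1.96        # two-sided 5% level
z_beta = 0.84         # 80% power
delta = 0.5           # clinically relevant difference (assumed)

# Stage 1: internal pilot with n1 subjects per arm
n1 = 30
sigma_true = 1.2      # unknown to the trialist; used only to simulate data
x1 = rng.normal(0.0, sigma_true, n1)
y1 = rng.normal(delta, sigma_true, n1)
s_pooled = np.sqrt((x1.var(ddof=1) + y1.var(ddof=1)) / 2)

# Re-estimate the per-arm sample size from the interim variance estimate
n_total = ceil(2 * ((z_alpha + z_beta) * s_pooled / delta) ** 2)
n2 = max(n_total - n1, 0)          # additional subjects needed in stage 2
print(n_total, n2)
```

    Because the recomputed size depends on interim data, naive versions of this scheme can inflate the type I error, which is the issue the adaptive-design literature (and this paper's asymptotically efficient test) addresses.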

  1. Sample design effects in landscape genetics

    Science.gov (United States)

    Oyler-McCance, Sara J.; Fedy, Bradley C.; Landguth, Erin L.

    2012-01-01

    An important research gap in landscape genetics is the impact of different field sampling designs on the ability to detect the effects of landscape pattern on gene flow. We evaluated how five different sampling regimes (random, linear, systematic, cluster, and single study site) affected the probability of correctly identifying the generating landscape process of population structure. Sampling regimes were chosen to represent a suite of designs common in field studies. We used genetic data generated from a spatially explicit, individual-based program and simulated gene flow in a continuous population across a landscape with gradual spatial changes in resistance to movement. Additionally, we evaluated the sampling regimes using realistic and obtainable numbers of loci (10 and 20), numbers of alleles per locus (5 and 10), numbers of individuals sampled (10-300), and generational time after the landscape was introduced (20 and 400). For a simulated continuously distributed species, we found that random, linear, and systematic sampling regimes performed well with high sample sizes (>200), levels of polymorphism (10 alleles per locus), and number of molecular markers (20). The cluster and single study site sampling regimes were not able to correctly identify the generating process under any conditions and thus are not advisable strategies for scenarios similar to our simulations. Our research emphasizes the importance of sampling data at ecologically appropriate spatial and temporal scales and suggests careful consideration for sampling near landscape components that are likely to most influence the genetic structure of the species. In addition, simulating sampling designs a priori could help guide field data collection efforts.
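    The compared sampling regimes can be illustrated by generating candidate point layouts on a square landscape. All coordinates, the landscape size, and the cluster spread are arbitrary placeholders; the single-study-site regime (one small area) is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(5)
n, side = 100, 1000.0          # sample size and landscape edge length (assumed units)

def random_design():
    # n points uniformly over the landscape
    return rng.random((n, 2)) * side

def linear_design():
    # n points along a single transect across the landscape
    t = np.linspace(0, side, n)
    return np.column_stack([t, np.full(n, side / 2)])

def systematic_design():
    # regular 10 x 10 grid with a half-cell margin
    g = np.linspace(side / 20, side - side / 20, 10)
    xx, yy = np.meshgrid(g, g)
    return np.column_stack([xx.ravel(), yy.ravel()])

def cluster_design(n_clusters=5):
    # points scattered tightly around a few random cluster centers
    centers = rng.random((n_clusters, 2)) * side
    pts = centers[rng.integers(0, n_clusters, n)] + rng.normal(0, 25.0, (n, 2))
    return np.clip(pts, 0, side)

for name, d in [("random", random_design()), ("linear", linear_design()),
                ("systematic", systematic_design()), ("cluster", cluster_design())]:
    print(name, d.shape)
```

    Feeding layouts like these into a spatially explicit gene-flow simulator is the a priori design evaluation the abstract recommends.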

  2. Design of Serial TFT-LCD Display Terminal Based on Two-Stage Driving

    Institute of Scientific and Technical Information of China (English)

    石建国; 邓春健

    2011-01-01

    A universal serial TFT-LCD display terminal using the ATmega128L as a second-stage driver is designed and implemented. SPI serial Flash is employed to store multiple sets of ASCII and GB2312 character matrix data and background images. High-level display commands from an external master controller are received through RS232 or RS485 standard interfaces and trigger the requested display operations. Applications in several projects show that the terminal reduces the display-control burden on the master controller and can serve as a universal information output device for many kinds of microcontroller-based systems.

  3. Fast Moving Sampling Designs in Temporal Networks

    CERN Document Server

    Thompson, Steven K

    2015-01-01

    In a study related to this one, I set up a temporal network simulation environment for evaluating network intervention strategies. A network intervention strategy consists of a sampling design to select nodes in the network. An intervention is applied to nodes in the sample for the purpose of changing the wider network in some desired way. The network intervention strategies can represent natural agents such as viruses that spread in the network, programs to prevent or reduce the virus spread, and the agency of individual nodes, such as people, in forming and dissolving the links that create, maintain or change the network. The present paper examines idealized versions of the sampling designs used in that study. The purpose is to better understand the natural and human network designs in real situations and to provide a simple inference of design-based properties that in turn measure properties of the time-changing network. The designs use link tracing and sometimes other probabilistic procedures to add units ...

  4. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    In this study, a comparison has been made of different sampling designs using the HIES data of the North West Frontier Province (NWFP) for 2001-02 and 1998-99, collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been assessed using the bootstrap and the jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as first-stage primary sampling units (PSUs), selected with probability proportional to size. Secondary sampling units (SSUs), i.e. households, are selected by systematic sampling with a random start. HIES used a single study variable. We compared the HIES technique with several other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). The jackknife and bootstrap were used for variance replication. Simple random sampling with sample sizes of 462 to 561 gave moderate variances by both jackknife and bootstrap. Systematic sampling gave moderate variance with a sample size of 467. With the jackknife under systematic sampling, the variance of the regression estimator exceeded that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator exceeded that of the regression estimator. The most efficient design turned out to be ranked set sampling: with jackknife and bootstrap, it gives the minimum variance even at the smallest sample size (467). Two-phase sampling performed poorly. The multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
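    The ratio and regression estimators with jackknife variance replication can be sketched on simulated survey data. The population values, sample size, and the income/household-size relationship below are illustrative assumptions, not the HIES figures, and the sketch uses simple random sampling rather than the stratified designs compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated population: household size x (auxiliary) and income y (study variable)
N = 5000
x_pop = rng.integers(1, 10, N).astype(float)
y_pop = 800 * x_pop + rng.normal(0, 500, N)
X_bar = x_pop.mean()                    # population mean of auxiliary, assumed known

# Simple random sample of 500 households
idx = rng.choice(N, size=500, replace=False)
x, y = x_pop[idx], y_pop[idx]

def ratio_est(y, x):
    # classical ratio estimator of the population mean of y
    return (y.mean() / x.mean()) * X_bar

def regression_est(y, x):
    # linear regression estimator using the known auxiliary mean
    b = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
    return y.mean() + b * (X_bar - x.mean())

def jackknife_var(est, y, x):
    # delete-one jackknife variance of an estimator
    n = len(y)
    reps = np.array([est(np.delete(y, i), np.delete(x, i)) for i in range(n)])
    return (n - 1) / n * np.sum((reps - reps.mean()) ** 2)

for name, est in [("ratio", ratio_est), ("regression", regression_est)]:
    print(name, round(est(y, x), 1), round(jackknife_var(est, y, x), 1))
```

    Swapping the delete-one replicates for bootstrap resamples gives the other variance-replication method the study compares.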

  5. National Longitudinal Study of the High School Class of 1972. Sample Design Efficiency Study: Effects of Stratification, Clustering, and Unequal Weighting on the Variances of NLS Statistics.

    Science.gov (United States)

    National Center for Education Statistics (DHEW), Washington, DC.

    A complex two-stage sample selection process was used in designing the National Longitudinal Study of the High School Class of 1972. The first-stage sampling frame used in the selection of schools was stratified by the following seven variables: public vs. private control, geographic region, grade 12 enrollment, proximity to institutions of higher…

  6. Treatment of cadmium dust with two-stage leaching process

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The treatment of cadmium dust with a two-stage leaching process was investigated to replace the existing sulphation roast-leaching processes. The process parameters in the first-stage leaching were basically similar to the neutral leaching in zinc hydrometallurgy. The effects of process parameters in the second-stage leaching on the extraction of zinc and cadmium were mainly studied. The experimental results indicated that zinc and cadmium could be efficiently recovered from the cadmium dust by the two-stage leaching process. The extraction percentages of zinc and cadmium in two-stage leaching reached 95% and 88% respectively under the optimum conditions. The total extraction percentage of Zn and Cd reached 94%.

  7. Designing an enhanced groundwater sample collection system

    Energy Technology Data Exchange (ETDEWEB)

    Schalla, R.

    1994-10-01

    As part of an ongoing technical support mission to achieve excellence and efficiency in environmental restoration activities at the Laboratory for Energy and Health-Related Research (LEHR), Pacific Northwest Laboratory (PNL) provided guidance on the design and construction of monitoring wells and identified the most suitable type of groundwater sampling pump and accessories for monitoring wells. The goal was to utilize a monitoring well design that would allow for hydrologic testing and reduce turbidity to minimize the impact of sampling. The sampling results of the newly designed monitoring wells were clearly superior to those of the previously installed monitoring wells. The new wells exhibited reduced turbidity, in addition to improved access for instrumentation and hydrologic testing. The variable-frequency submersible pump was selected as the best choice for obtaining groundwater samples. The literature references are listed at the end of this report. Despite some initial difficulties, the actual performance of the variable-frequency submersible pump and its accessories was effective in reducing sampling time and labor costs, and its ease of use was preferred over the previously used bladder pumps. The surface seal system, called the Dedicator, proved to be a useful accessory that prevents surface contamination while providing easy access for water-level measurements and for connecting the pump. Cost savings resulted from the use of the pre-production pumps (beta units) donated by the manufacturer for the demonstration. However, larger savings resulted from shortened field time due to the ease of using the submersible pumps and the surface seal access system. Proper deployment of the monitoring wells also resulted in cost savings and ensured representative samples.

  8. High magnetostriction parameters for low-temperature sintered cobalt ferrite obtained by two-stage sintering

    Energy Technology Data Exchange (ETDEWEB)

    Khaja Mohaideen, K.; Joy, P.A., E-mail: pa.joy@ncl.res.in

    2014-12-15

    From studies of the magnetostriction characteristics of two-stage sintered polycrystalline CoFe₂O₄ made from nanocrystalline powders, it is found that two-stage sintering at low temperatures is very effective for enhancing the density and for attaining a higher magnetostriction coefficient. The magnetostriction coefficient and strain derivative are further enhanced by magnetic field annealing, and a relatively larger enhancement in the magnetostriction parameters is obtained for the samples sintered at lower temperatures after magnetic annealing, despite the fact that samples sintered at higher temperatures show larger magnetostriction coefficients before annealing. A high magnetostriction coefficient of ∼380 ppm is obtained after field annealing for the sample sintered at 1100 °C, below a magnetic field of 400 kA/m, which is the highest value so far reported at low magnetic fields for sintered polycrystalline cobalt ferrite. - Highlights: • Effect of two-stage sintering on the magnetostriction characteristics of CoFe₂O₄ is studied. • Two-stage sintering is very effective for enhancing the density and the magnetostriction parameters. • Higher magnetostriction for samples sintered at low temperatures and after magnetic field annealing. • Highest reported magnetostriction of 380 ppm at low fields after two-stage, low-temperature sintering.

  9. A gas-loading system for LANL two-stage gas guns

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Lloyd Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bartram, Brian Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dattelbaum, Dana Mcgraw [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lang, John Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Morris, John Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.

  10. A gas-loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, L. L.; Bartram, B. D.; Dattelbaum, D. M.; Lang, J. M.; Morris, J. S.

    2017-01-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.

  11. LOGISTICS SCHEDULING: ANALYSIS OF TWO-STAGE PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yung-Chia CHANG; Chung-Yee LEE

    2003-01-01

    This paper studies the coordination effects between stages for scheduling problems where decision-making is a two-stage process. Two stages are considered as one system. The system can be a supply chain that links two stages, one stage representing a manufacturer and the other a distributor. It can also represent a single manufacturer, with each stage representing a different department responsible for a part of operations. A problem that jointly considers both stages in order to achieve ideal overall system performance is defined as a system problem. In practice, at times, it might not be feasible for the two stages to make coordinated decisions due to (i) the lack of channels that allow decision makers at the two stages to cooperate, and/or (ii) the optimal solution to the system problem being too difficult (or costly) to achieve. Two practical approaches are applied to solve a variant of two-stage logistic scheduling problems. The Forward Approach is defined as a solution procedure by which the first stage of the system problem is solved first, followed by the second stage. Similarly, the Backward Approach is defined as a solution procedure by which the second stage of the system problem is solved prior to solving the first stage. In each approach, the two stages are solved sequentially and the solution generated is treated as a heuristic solution with respect to the corresponding system problem. When decision makers at the two stages make decisions locally without considering consequences for the entire system, ineffectiveness may result, even when each stage optimally solves its own problem. The trade-off between the time complexity and the solution quality is the main concern. This paper provides the worst-case performance analysis for each approach.
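
The trade-off the paper analyzes can be made concrete on the simplest such system, a two-machine flow shop. The sketch below is illustrative and not from the paper: the Forward Approach sequences jobs from stage-1 information alone (shortest processing time first on machine 1), while Johnson's rule gives the jointly optimal sequence for this special case, so the gap between the two makespans is the price of uncoordinated decisions.

```python
# Illustrative sketch (hypothetical data): a two-machine flow shop as a
# minimal two-stage system.

def makespan(seq, a, b):
    """Completion time of the last job; a/b are stage-1/stage-2 times."""
    t1 = t2 = 0
    for j in seq:
        t1 += a[j]               # machine 1 finishes job j
        t2 = max(t2, t1) + b[j]  # machine 2 starts once both are ready
    return t2

def forward_approach(a, b):
    # Stage 1 decides alone: shortest processing time first on machine 1.
    return sorted(range(len(a)), key=lambda j: a[j])

def johnson(a, b):
    # Jointly optimal "system" sequence for the two-machine flow shop.
    left = sorted((j for j in range(len(a)) if a[j] <= b[j]), key=lambda j: a[j])
    right = sorted((j for j in range(len(a)) if a[j] > b[j]), key=lambda j: -b[j])
    return left + right

a = [3, 5, 1, 6]  # stage-1 (manufacturer) processing times
b = [4, 2, 7, 3]  # stage-2 (distributor) processing times
seq_f, seq_j = forward_approach(a, b), johnson(a, b)
print(makespan(seq_f, a, b), makespan(seq_j, a, b))  # prints: 18 17
```

Here the stage-by-stage heuristic is one time unit worse than the system optimum; the paper's contribution is bounding such gaps in the worst case.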

  12. Residential Two-Stage Gas Furnaces - Do They Save Energy?

    Energy Technology Data Exchange (ETDEWEB)

    Lekov, Alex; Franco, Victor; Lutz, James

    2006-05-12

    Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) proposed an update to its test procedure that corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure, and to compare test results with results from field tests. Overall, the DOE test procedure shows a reduction in the total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in the total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.

  13. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    JIANG JianCheng; LI JianTao

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would be if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, its implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  14. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would be if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, its implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  15. The CSS and The Two-Staged Methods for Parameter Estimation in SARFIMA Models

    Directory of Open Access Journals (Sweden)

    Erol Egrioglu

    2011-01-01

    Full Text Available Seasonal Autoregressive Fractionally Integrated Moving Average (SARFIMA) models are used in the analysis of seasonal time series with long memory. Two methods, the conditional sum of squares (CSS) method and the two-staged method introduced by Hosking (1984), are proposed to estimate the parameters of SARFIMA models. However, no simulation study comparing them has been conducted in the literature, so it is not known how these methods behave under different parameter settings and sample sizes in SARFIMA models. The aim of this study is to show the behavior of these methods by a simulation study. Based on the simulation results, the advantages and disadvantages of both methods under different parameter settings and sample sizes are discussed by comparing the root mean square error (RMSE) obtained by the CSS and two-staged methods. The comparison shows that the CSS method produces better results than the two-staged method.
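
The CSS idea itself is compact enough to sketch. The toy below applies it to a plain AR(1) rather than a SARFIMA model (a simplification made purely for brevity): the estimate is the parameter value that minimizes the conditional sum of squared one-step-ahead residuals.

```python
import random

def css(phi, x):
    # Conditional sum of squares: residuals e_t = x_t - phi * x_{t-1}.
    return sum((x[t] - phi * x[t - 1]) ** 2 for t in range(1, len(x)))

def fit_ar1_css(x):
    # A coarse grid search keeps the sketch dependency-free.
    grid = [i / 200 for i in range(-199, 200)]
    return min(grid, key=lambda phi: css(phi, x))

random.seed(0)
true_phi, x = 0.6, [0.0]
for _ in range(2000):
    x.append(true_phi * x[-1] + random.gauss(0, 1))

phi_hat = fit_ar1_css(x)
print(phi_hat)  # close to the true value 0.6
```

For SARFIMA the residuals require truncated fractional-differencing filters, but the objective being minimized has exactly this form.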

  16. Sample Design and Cohort Selection in the Hispanic Community Health Study/Study of Latinos

    Science.gov (United States)

    LaVange, Lisa M.; Kalsbeek, William; Sorlie, Paul D.; Avilés-Santa, Larissa M.; Kaplan, Robert C.; Barnhart, Janice; Liu, Kiang; Giachello, Aida; Lee, David J.; Ryan, John; Criqui, Michael H.; Elder, John P.

    2010-01-01

    PURPOSE The Hispanic Community Health Study (HCHS)/Study of Latinos (SOL) is a multi-center, community based cohort study of Hispanic/Latino adults in the United States. A diverse participant sample is required that is both representative of the target population and likely to remain engaged throughout follow-up. The choice of sample design, its rationale, and benefits and challenges of design decisions are described in this paper. METHODS The study design calls for recruitment and follow-up of a cohort of 16,000 Hispanics/Latinos aged 18-74 years, with 62.5% (10,000) over 44 years of age and adequate subgroup sample sizes to support inference by Hispanic/Latino background. Participants are recruited in community areas surrounding four field centers in the Bronx, Chicago, Miami, and San Diego. A two-stage area probability sample of households is selected with stratification and over-sampling incorporated at each stage to provide a broadly diverse sample, offer efficiencies in field operations, and ensure that the target age distribution is obtained. CONCLUSIONS Embedding probability sampling within this traditional, multi-site cohort study design enables competing research objectives to be met. However, the use of probability sampling requires developing solutions to some unique challenges in both sample selection and recruitment, as described here. PMID:20609344
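
The two-stage selection described above can be sketched in miniature; the strata, selection rates, and counts below are illustrative stand-ins, not the HCHS/SOL design parameters.

```python
import random

random.seed(42)

# Stage-1 frame: (block_group_id, stratum); the "high" stratum is
# over-sampled relative to the "low" stratum.
frame = [(i, "high" if i % 3 == 0 else "low") for i in range(300)]
rate = {"high": 0.30, "low": 0.10}  # illustrative sampling fractions

stage1 = [bg for bg, s in frame if random.random() < rate[s]]

# Stage 2: a fixed number of households drawn within each selected
# block group (each has 50 hypothetical households here).
households = {bg: [f"bg{bg}-hh{h}" for h in range(50)] for bg in stage1}
stage2 = {bg: random.sample(households[bg], 8) for bg in stage1}

n_households = sum(len(v) for v in stage2.values())
print(len(stage1), n_households)
```

Design weights would be the inverse of the product of the two stages' selection probabilities; they are omitted from this sketch.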

  17. Stratified sampling design based on data mining.

    Science.gov (United States)

    Kim, Yeonkook J; Oh, Yoonhwan; Park, Sunghoon; Cho, Sungzoon; Park, Hayoung

    2013-09-01

    The aim was to explore classification rules, derived with data mining methodologies, to be used in defining strata in stratified sampling of healthcare providers with improved sampling efficiency. We performed k-means clustering to group providers with similar characteristics and then constructed decision trees on the cluster labels to generate stratification rules. We assessed the variance explained by the stratification proposed in this study and by conventional stratification to evaluate the performance of the sampling design. We constructed a study database from health insurance claims data and providers' profile data made available to this study by the Health Insurance Review and Assessment Service of South Korea, and population data from Statistics Korea. From our database, we used the data for single specialty clinics or hospitals in two specialties, general surgery and ophthalmology, for the year 2011. Data mining resulted in five strata in general surgery with two stratification variables, the number of inpatients per specialist and the population density of the provider location, and five strata in ophthalmology with two stratification variables, the number of inpatients per specialist and the number of beds. The percentages of variance in annual changes in the productivity of specialists explained by the stratification in general surgery and ophthalmology were 22% and 8%, respectively, whereas conventional stratification by the type of provider location and number of beds explained 2% and 0.2% of variance, respectively. This study demonstrated that data mining methods can be used in designing efficient stratified sampling with variables readily available to the insurer and government; it offers an alternative to the existing stratification method that is widely used in healthcare provider surveys in South Korea.
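
The cluster-then-extract-rules pipeline can be sketched on synthetic providers. Everything below is illustrative: a hand-rolled k-means and a one-split stump stand in for the study's k-means and decision trees.

```python
import random

random.seed(1)
# Synthetic providers: (inpatients per specialist, number of beds),
# drawn as two well-separated groups.
data = [(random.gauss(5, 1), random.gauss(20, 4)) for _ in range(60)] + \
       [(random.gauss(15, 1), random.gauss(80, 4)) for _ in range(60)]

def kmeans(points, iters=20):
    # Deterministic init: first and last point (one from each group).
    centers = [points[0], points[-1]]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min((0, 1), key=lambda c: sum((p - q) ** 2
                      for p, q in zip(pt, centers[c]))) for pt in points]
        for c in (0, 1):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return labels

labels = kmeans(data)

def best_stump(points, labels, feature):
    # Single threshold on one feature that best reproduces the labels.
    best_t, best_err = None, len(points) + 1
    for t in sorted(pt[feature] for pt in points):
        pred = [int(pt[feature] > t) for pt in points]
        mism = sum(p != l for p, l in zip(pred, labels))
        err = min(mism, len(points) - mism)  # label polarity is arbitrary
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

threshold, errors = best_stump(data, labels, feature=0)
print(errors)  # 0: one split on inpatients/specialist recovers the clusters
```

The extracted threshold plays the role of a stratification rule: any provider can be assigned to a stratum from readily available variables, without re-running the clustering.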

  18. PERFORMANCE STUDY OF A TWO STAGE SOLAR ADSORPTION REFRIGERATION SYSTEM

    Directory of Open Access Journals (Sweden)

    BAIJU. V

    2011-07-01

    Full Text Available The present study deals with the performance of a two-stage solar adsorption refrigeration system with an activated carbon-methanol pair, investigated experimentally. Such a system was fabricated and tested under the conditions of the National Institute of Technology Calicut, Kerala, India. The system consists of a parabolic solar concentrator, two water tanks, two adsorbent beds, a condenser, an expansion device, an evaporator and an accumulator. In this particular system the second water tank acts as a sensible heat storage device so that the system can be used during night time also. The system has been designed for heating 50 litres of water from 25 °C to 90 °C as well as cooling 10 litres of water from 30 °C to 10 °C within one hour. The performance parameters such as specific cooling power (SCP), coefficient of performance (COP), solar COP and exergetic efficiency are studied. The dependency of the exergetic efficiency and cycle COP on the driving heat source temperature is also studied. The optimum heat source temperature for this system is determined as 72.4 °C. The results show that the system has better performance during night time as compared to the day time. The system has a mean cycle COP of 0.196 during day time and 0.335 for night time. The mean SCP values during day time and night time are 47.83 and 68.2, respectively. The experimental results also demonstrate that the refrigerator has a cooling capacity of 47 to 78 W during day time and 57.6 to 104.4 W during night time.

  19. STARS A Two Stage High Gain Harmonic Generation FEL Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    M. Abo-Bakr; W. Anders; J. Bahrdt; P. Budz; K.B. Buerkmann-Gehrlein; O. Dressler; H.A. Duerr; V. Duerr; W. Eberhardt; S. Eisebitt; J. Feikes; R. Follath; A. Gaupp; R. Goergen; K. Goldammer; S.C. Hessler; K. Holldack; E. Jaeschke; Thorsten Kamps; S. Klauke; J. Knobloch; O. Kugeler; B.C. Kuske; P. Kuske; A. Meseck; R. Mitzner; R. Mueller; M. Neeb; A. Neumann; K. Ott; D. Pfluckhahn; T. Quast; M. Scheer; Th. Schroeter; M. Schuster; F. Senf; G. Wuestefeld; D. Kramer; Frank Marhauser

    2007-08-01

    BESSY is proposing a demonstration facility, called STARS, for a two-stage high-gain harmonic generation free electron laser (HGHG FEL). STARS is planned for lasing in the wavelength range 40 to 70 nm, requiring a beam energy of 325 MeV. The facility consists of a normal conducting gun, three superconducting TESLA-type acceleration modules modified for CW operation, a single stage bunch compressor and finally a two-stage HGHG cascaded FEL. This paper describes the facility layout and the rationale behind the operating parameters.

  20. Development of a linear compressor for two-stage pulse tube cryocoolers

    Institute of Scientific and Technical Information of China (English)

    Peng-da YAN; Wei-li GAO; Guo-bang CHEN

    2009-01-01

    A valveless linear compressor was built to drive a self-made two-stage pulse tube cryocooler. With a designed maximum swept volume of 60 cm³, the compressor can provide the cryocooler with a pressure-volume (PV) power of 400 W. Preliminary measurements of the compressor indicated that both an efficiency of 35%-55% and a pressure ratio of 1.3-1.4 could be obtained. The two-stage pulse tube cryocooler driven by this compressor achieved a lowest temperature of 14.2 K.

  1. Development of a heavy-duty diesel engine with two-stage turbocharging

    NARCIS (Netherlands)

    Sturm, L.; Kruithof, J.

    2001-01-01

    A mean value model was developed using the MATRIXx/SystemBuild simulation tool for designing real-time control algorithms for the two-stage engine. All desired characteristics are achieved, apart from a lower A/F ratio at lower engine speeds, and the turbocharger matches the calculations. The CANbus is used to

  2. Two-stage, dilute sulfuric acid hydrolysis of wood : an investigation of fundamentals

    Science.gov (United States)

    John F. Harris; Andrew J. Baker; Anthony H. Conner; Thomas W. Jeffries; James L. Minor; Roger C. Pettersen; Ralph W. Scott; Edward L Springer; Theodore H. Wegner; John I. Zerbe

    1985-01-01

    This paper presents a fundamental analysis of the processing steps in the production of methanol from southern red oak (Quercus falcata Michx.) by two-stage dilute sulfuric acid hydrolysis. Data for hemicellulose and cellulose hydrolysis are correlated using models. This information is used to develop and evaluate a process design.

  3. A Two-Stage Exercise on the Binomial Distribution Using Minitab.

    Science.gov (United States)

    Shibli, M. Abdullah

    1990-01-01

    Describes a two-stage experiment that was designed to explain binomial distribution to undergraduate statistics students. A manual coin flipping exercise is explained as the first stage; a computerized simulation using MINITAB software is presented as stage two; and output from the MINITAB exercises is included. (two references) (LRW)
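
The exercise translates directly to any scripting language. A minimal Python analogue (illustrative, not the article's MINITAB worksheet): stage one is a single run of coin flips, stage two replicates the experiment many times and checks the empirical mean count of heads against the binomial expectation n*p.

```python
import random

random.seed(7)
n, p, runs = 10, 0.5, 10_000

def heads(n, p):
    # One experiment: count heads in n coin flips with success probability p.
    return sum(random.random() < p for _ in range(n))

one_run = heads(n, p)                        # stage 1: a single experiment
counts = [heads(n, p) for _ in range(runs)]  # stage 2: simulated replication
mean_heads = sum(counts) / runs
print(one_run, mean_heads)  # the mean settles near n * p = 5
```

A histogram of `counts` reproduces the bell-shaped binomial distribution the manual stage can only hint at.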

  4. Analysis of performance and optimum configuration of two-stage semiconductor thermoelectric module

    Institute of Scientific and Technical Information of China (English)

    Li Kai-Zhen; Liang Rui-Sheng; Wei Zheng-Jun

    2008-01-01

    In this paper, theoretical analysis and simulation calculations were conducted for a basic two-stage semiconductor thermoelectric module, which contains one thermocouple in the second stage and several thermocouples in the first stage. The study focused on the configuration of the two-stage semiconductor thermoelectric cooler, especially investigating the influences of some parameters, such as the current I1 of the first stage, the area A1 of every thermocouple and the number n of thermocouples in the first stage, on the cooling performance of the module. The analysis indicates that changing the current I1 of the first stage, the area A1 of the thermocouples and the number n of thermocouples in the first stage can improve the cooling performance of the module. These results can be used to optimize the configuration of the two-stage semiconductor thermoelectric module and provide guidance for the design and application of thermoelectric coolers.

  5. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of the risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of the fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and its effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in the risky security increases as the risk-free return decreases and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  6. Efficient Two-Stage Group Testing Algorithms for DNA Screening

    CERN Document Server

    Huber, Michael

    2011-01-01

    Group testing algorithms are very useful tools for DNA library screening. Building on recent work by Levenshtein (2003) and Tonchev (2008), we construct in this paper new infinite classes of combinatorial structures, whose existence is essential for attaining the minimum number of individual tests at the second stage of a two-stage disjunctive testing algorithm.
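
For contrast with the combinatorial constructions of the paper, the simplest two-stage procedure is Dorfman-style pooling, sketched below with made-up numbers: stage 1 tests disjoint pools, and stage 2 individually retests every member of each positive pool.

```python
import random

random.seed(3)
N, prevalence, pool_size = 1000, 0.01, 10
positive = {i for i in range(N) if random.random() < prevalence}

tests, found = 0, set()
for start in range(0, N, pool_size):
    pool = range(start, start + pool_size)
    tests += 1                            # stage 1: one pooled test
    if any(i in positive for i in pool):
        for i in pool:                    # stage 2: individual retests
            tests += 1
            if i in positive:
                found.add(i)

print(tests, len(found))  # far fewer than N = 1000 tests, nothing missed
```

The paper's combinatorial structures instead bound the number of individual tests needed at the second stage of a disjunctive algorithm, rather than relying on disjoint pools.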

  7. FREE GRAFT TWO-STAGE URETHROPLASTY FOR HYPOSPADIAS REPAIR

    Institute of Scientific and Technical Information of China (English)

    Zhong-jin Yue; Ling-jun Zuo; Jia-ji Wang; Gan-ping Zhong; Jian-ming Duan; Zhi-ping Wang; Da-shan Qin

    2005-01-01

    Objective To evaluate the effectiveness of free graft transplantation two-stage urethroplasty for hypospadias repair. Methods Fifty-eight cases with different types of hypospadias, including 10 subcoronal, 36 penile shaft, 9 scrotal, and 3 perineal, were treated with free full-thickness skin graft or (and) buccal mucosal graft transplantation two-stage urethroplasty. Of the 58 cases, 45 were new cases and 13 had a history of previous failed surgeries. The operative procedure included two stages: the first stage is to correct the penile curvature (chordee), prepare the transplant bed, harvest and prepare the full-thickness skin graft or buccal mucosal graft, and perform graft transplantation. The second stage is to complete urethroplasty and glanuloplasty. Results After the first-stage operation, 56 of 58 cases (96.6%) were successful with grafts healing well; the other 2 foreskin grafts became gangrenous. After the second-stage operation on 56 cases, 5 cases failed with the newly formed urethras opening due to infection, 8 cases had fistulas, and 43 (76.8%) healed well. Conclusions Free graft transplantation two-stage urethroplasty for hypospadias repair is an effective treatment with broad indications, a comparatively high success rate, fewer complications, and good cosmetic results, and is suitable for repair of various types of hypospadias.

  8. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  9. The construction of customized two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1990-01-01

    In this paper mixed integer linear programming models for customizing two-stage tests are given. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. It is not difficult to modify the models to make them use

  10. A two-stage subsurface vertical flow constructed wetland for high-rate nitrogen removal.

    Science.gov (United States)

    Langergraber, Guenter; Leroch, Klaus; Pressl, Alexander; Rohrhofer, Roland; Haberl, Raimund

    2008-01-01

    By using a two-stage constructed wetland (CW) system operated with an organic load of 40 g COD/(m²·d) (2 m² per person equivalent), average nitrogen removal efficiencies of about 50% and average nitrogen elimination rates of 980 g N/(m²·yr) could be achieved. Two vertical flow beds with intermittent loading have been operated in series. The first stage uses sand with a grain size of 2-3.2 mm for the main layer and has a drainage layer that is impounded; the second stage uses sand with a grain size of 0.06-4 mm and a drainage layer with free drainage. The high nitrogen removal can be achieved without recirculation, so it is possible to operate the two-stage CW system without energy input. The paper shows performance data for the two-stage CW system regarding removal of organic matter and nitrogen over the two-year operating period of the system. Additionally, its efficiency is compared with that of a single-stage vertical flow CW system designed and operated according to the Austrian design standards with 4 m² per person equivalent. The comparison shows that a higher effluent quality could be reached with the two-stage system although it is operated with double the organic load, or half the specific surface area requirement, respectively. Another advantage is that the specific investment costs of the two-stage CW system amount to 1,200 EUR per person (without mechanical pre-treatment), only about 60% of the specific investment costs of the single-stage CW system. IWA Publishing 2008.

  11. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    Science.gov (United States)

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
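
For the fixed parallel-group benchmark, the per-arm sample size follows the standard normal-approximation formula n = 2 (z_{1-α/2} + z_{1-β})² (σ/δ)². The sketch below uses illustrative numbers, not the study's scenarios.

```python
from math import ceil, erf, sqrt

def z(p):
    # Inverse standard-normal CDF by bisection on Phi(x) = p.
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + erf(mid / sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    # Two-sided test comparing two means, equal allocation per arm.
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2)

print(n_per_arm(delta=0.5, sigma=1.0))  # prints the classic 63 per arm
```

Group sequential and adaptive selection designs trade this fixed n for data-dependent stopping and enrichment rules, which is what the comparison above quantifies.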

  12. Health care planning and education via gaming-simulation: a two-stage experiment.

    Science.gov (United States)

    Gagnon, J H; Greenblat, C S

    1977-01-01

    A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.

  13. An intracooling system for a novel two-stage sliding-vane air compressor

    Science.gov (United States)

    Murgia, Stefano; Valenti, Gianluca; Costanzo, Ida; Colletta, Daniele; Contaldi, Giulio

    2017-08-01

    Lube-oil injection is used in positive-displacement compressors and, among them, in sliding-vane machines to guarantee the correct lubrication of the moving parts and to act as a seal preventing air leakage. Furthermore, lube-oil injection allows the lubricant to be exploited as a thermal ballast with a great thermal capacity to minimize the temperature increase during compression. This study presents the design of a two-stage sliding-vane rotary compressor in which the air cooling is operated by high-pressure cold oil injection into a connection duct between the two stages. The heat exchange between the atomized oil jet and the air results in a decrease of the air temperature before the second stage, improving the overall system efficiency. This cooling system is named here intracooling, as opposed to intercooling. The oil injection is realized via pressure-swirl nozzles, both within the compressors and inside the intracooling duct. The design of the two-stage sliding-vane compressor is accomplished by way of a lumped-parameter model. The model predicts an input power reduction as large as 10% for intercooled and intracooled two-stage compressors, the latter being slightly better, with respect to a conventional single-stage compressor for compressed air applications. An experimental campaign was conducted on a first prototype that comprises the low-pressure compressor and the intracooling duct, indicating that a significant temperature reduction is achieved in the duct.
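
The thermodynamic rationale for cooling between stages can be checked with an idealized calculation: adiabatic ideal-gas compression work for one stage versus two stages with the gas cooled back to the inlet temperature in between. The values are generic air properties, not the prototype's operating point, and the ~10% figure quoted above comes from the authors' lumped-parameter model, not from this sketch.

```python
R, gamma, T1 = 287.0, 1.4, 293.15  # J/(kg K), heat-capacity ratio, inlet K
cp_term = gamma / (gamma - 1) * R * T1

def work(ratio):
    # Specific adiabatic compression work for one stage, J/kg.
    return cp_term * (ratio ** ((gamma - 1) / gamma) - 1)

r_total = 8.0
w_single = work(r_total)
w_two = 2 * work(r_total ** 0.5)  # equal ratio per stage, cooled in between
saving = 1 - w_two / w_single
print(round(saving, 3))  # ~0.147: roughly 15% less input work
```

Splitting the pressure ratio equally between the stages is itself the optimal choice in this idealized setting, which is why two-stage machines with interstage cooling are the norm at higher pressure ratios.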

  14. Square Kilometre Array station configuration using two-stage beamforming

    CERN Document Server

    Jiwani, Aziz; Razavi-Ghods, Nima; Hall, Peter J; Padhi, Shantanu; de Vaate, Jan Geralt bij

    2012-01-01

    The lowest frequency band (70 - 450 MHz) of the Square Kilometre Array will consist of sparse aperture arrays grouped into geographically-localised patches, or stations. Signals from thousands of antennas in each station will be beamformed to produce station beams which form the inputs for the central correlator. Two-stage beamforming within stations can reduce SKA-low signal processing load and costs, but has not been previously explored for the irregular station layouts now favoured in radio astronomy arrays. This paper illustrates the effects of two-stage beamforming on sidelobes and effective area, for two representative station layouts (regular and irregular gridded tile on an irregular station). The performance is compared with a single-stage, irregular station. The inner sidelobe levels do not change significantly between layouts, but the more distant sidelobes are affected by the tile layouts; regular tile creates diffuse, but regular, grating lobes. With very sparse arrays, the station effective area...
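
The hierarchical idea behind two-stage beamforming can be sketched for a narrowband array: antennas grouped into tiles are phased toward the look direction first, then the tile outputs are phased again at the tile centres. The geometry, frequency, and tile layout below are illustrative assumptions, not SKA-low parameters.

```python
import numpy as np

# Sketch of two-stage narrowband beamforming. With phase-only weights the
# tile weight (antenna offset within the tile) and the station weight (tile
# centre) factorise exactly, so the response toward the look direction
# matches single-stage beamforming over all antennas.

c, freq = 3.0e8, 100e6                      # assumed observing frequency
k = 2 * np.pi * freq / c

rng = np.random.default_rng(1)
tile_centres = np.array([[0.0, 0.0], [12.0, 0.0], [0.0, 12.0], [12.0, 12.0]])
offsets = rng.uniform(-3, 3, size=(16, 2))  # irregular layout, reused per tile
positions = (tile_centres[:, None, :] + offsets[None, :, :]).reshape(-1, 2)

def response(weights, xy, direction):
    """Array response to a unit plane wave from direction cosines (l, m)."""
    arrivals = np.exp(1j * k * (xy @ direction))
    return weights.conj() @ arrivals

look = np.array([0.08, 0.03])               # direction cosines of the target

# Single-stage: one conjugate weight per antenna
w_single = np.exp(1j * k * (positions @ look))

# Two-stage: tile weight (offsets) times station weight (centres)
w_tile = np.exp(1j * k * (offsets @ look))      # shared by all tiles
w_station = np.exp(1j * k * (tile_centres @ look))
w_two = (w_station[:, None] * w_tile[None, :]).reshape(-1)

r1 = response(w_single, positions, look)
r2 = response(w_two, positions, look)
print(abs(r1), abs(r2))     # both equal the antenna count (64) on boresight
```

The signal-processing saving comes from sharing the stage-1 weights across tiles; the sidelobe differences studied in the paper arise away from the look direction, where the tiled factorization no longer matches an ideal per-antenna weighting.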

  15. Two stage sorption type cryogenic refrigerator including heat regeneration system

    Science.gov (United States)

    Jones, Jack A.; Wen, Liang-Chi; Bard, Steven

    1989-01-01

    A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber providing a high pressure flow of a first refrigerant for the lower stage refrigeration cycle within a second pressurization chamber providing a high pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

  16. Two-stage approach to full Chinese parsing

    Institute of Scientific and Technical Information of China (English)

    Cao Hailong; Zhao Tiejun; Yang Muyun; Li Sheng

    2005-01-01

    Natural language parsing is a task of great importance and extreme difficulty. In this paper, we present a full Chinese parsing system based on a two-stage approach. Rather than identifying all phrases with a uniform model, we utilize a divide-and-conquer strategy. We propose an effective and fast method based on a Markov model to identify the base phrases. Then we make the first attempt to extend one of the best English parsing models, the head-driven model, to recognize Chinese complex phrases. Our two-stage approach is superior to the uniform approach in two aspects. First, it creates synergy between the Markov model and the head-driven model. Second, it reduces the complexity of full Chinese parsing and makes the parsing system space- and time-efficient. Evaluated in PARSEVAL measures on the open test set, the parsing system performs at 87.53% precision and 87.95% recall.
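
A Markov-model base-phrase identifier of the kind described can be sketched as a Viterbi decoder over BIO chunk tags (B = begins a base phrase, I = inside one, O = outside). The tiny vocabulary and all probabilities below are invented for illustration; the paper's model is estimated from treebank statistics.

```python
import math

# Toy Viterbi decoder for Markov-model base-phrase chunking with BIO tags.
# Transition, start, and emission tables are assumed values, not trained.

TAGS = ["B", "I", "O"]
START = {"B": 0.5, "I": 0.0, "O": 0.5}
TRANS = {"B": {"B": 0.1, "I": 0.5, "O": 0.4},
         "I": {"B": 0.2, "I": 0.4, "O": 0.4},
         "O": {"B": 0.5, "I": 0.0, "O": 0.5}}
EMIT = {"B": {"shanghai": 0.8},                 # phrase-initial words (assumed)
        "I": {"stock": 0.5, "exchange": 0.5},   # phrase-internal words
        "O": {"opened": 0.8}}                   # words outside base phrases

def viterbi(words):
    """Return the most probable BIO tag sequence for `words`."""
    eps = 1e-12                                 # floor for unseen events
    score = {t: math.log(START[t] + eps) + math.log(EMIT[t].get(words[0], eps))
             for t in TAGS}
    back = []
    for w in words[1:]:
        new, ptr = {}, {}
        for t in TAGS:
            prev = max(TAGS, key=lambda p: score[p] + math.log(TRANS[p][t] + eps))
            new[t] = (score[prev] + math.log(TRANS[prev][t] + eps)
                      + math.log(EMIT[t].get(w, eps)))
            ptr[t] = prev
        score = new
        back.append(ptr)
    tag = max(score, key=score.get)
    path = [tag]
    for ptr in reversed(back):                  # follow back-pointers
        path.append(ptr[path[-1]])
    return path[::-1]

print(viterbi(["shanghai", "stock", "exchange", "opened"]))
```

The decoded B/I spans mark the base phrases that the second (head-driven) stage would then combine into complex phrases.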

  17. Income and Poverty across SMSAs: A Two-Stage Analysis

    OpenAIRE

    1993-01-01

    Two popular explanations of urban poverty are the "welfare-disincentive" and "urban-deindustrialization" theories. Using cross-sectional Census data, we develop a two-stage model to predict an SMSA's median family income and poverty rate. The model allows the city's welfare level and industrial structure to affect its median family income and poverty rate directly. It also allows welfare and industrial structure to affect income and poverty indirectly, through their effects on family structure...
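
The direct-plus-indirect structure described here is a two-stage estimation: a first regression predicts the intermediate variable, and a second regression uses those fitted values alongside the direct regressors. The following sketch runs on synthetic data and is not the authors' SMSA specification; it assumes, for identification, that welfare acts on poverty only through family structure.

```python
import numpy as np

# Generic two-stage (2SLS-style) estimation sketch on synthetic data.
# Stage 1: family structure on the policy variables (indirect channel).
# Stage 2: poverty on predicted family structure plus the direct regressor.

rng = np.random.default_rng(42)
n = 500
welfare = rng.normal(size=n)                   # policy variable (synthetic)
industry = rng.normal(size=n)                  # industrial structure (synthetic)
family = 0.6 * welfare - 0.3 * industry + rng.normal(scale=0.5, size=n)
poverty = 0.8 * family - 0.4 * industry + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: predict the mediator from the exogenous variables
X1 = np.column_stack([np.ones(n), welfare, industry])
family_hat = X1 @ ols(X1, family)

# Stage 2: outcome on the stage-1 fitted values and the direct regressor
X2 = np.column_stack([np.ones(n), family_hat, industry])
beta = ols(X2, poverty)
print(beta)    # roughly [0, 0.8, -0.4] on this synthetic draw
```

Using the fitted values rather than observed family structure in stage 2 is what separates the indirect (mediated) effect from the direct one.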

  18. A Two-stage Polynomial Method for Spectrum Emissivity Modeling

    OpenAIRE

    Qiu, Qirong; Liu, Shi; Teng, Jing; Yan, Yong

    2015-01-01

    Spectral emissivity is a key in the temperature measurement by radiation methods, but not easy to determine in a combustion environment, due to the interrelated influence of temperature and wave length of the radiation. In multi-wavelength radiation thermometry, knowing the spectral emissivity of the material is a prerequisite. However in many circumstances such a property is a complex function of temperature and wavelength and reliable models are yet to be sought. In this study, a two stages...

  19. Forty-five-degree two-stage venous cannula: advantages over standard two-stage venous cannulation.

    Science.gov (United States)

    Lawrence, D R; Desai, J B

    1997-01-01

    We present a 45-degree two-stage venous cannula that confers advantage to the surgeon using cardiopulmonary bypass. This cannula exits the mediastinum under the transverse bar of the sternal retractor, leaving the rostral end of the sternal incision free of apparatus. It allows for lifting of the heart with minimal effect on venous return and does not interfere with the radially laid out sutures of an aortic valve replacement using an interrupted suture technique.

  20. Innovative two-stage anaerobic process for effective codigestion of cheese whey and cattle manure.

    Science.gov (United States)

    Bertin, Lorenzo; Grilli, Selene; Spagni, Alessandro; Fava, Fabio

    2013-01-01

    The valorisation of agroindustrial waste through anaerobic digestion represents a significant opportunity for refuse treatment and renewable energy production. This study aimed to improve the codigestion of cheese whey (CW) and cattle manure (CM) by an innovative two-stage process, based on concentric acidogenic and methanogenic phases, designed for enhancing performance and reducing footprint. The optimum CW to CM ratio was evaluated under batch conditions. Thereafter, codigestion was implemented under continuous-flow conditions comparing one- and two-stage processes. The results demonstrated that the addition of CM in codigestion with CW greatly improved the anaerobic process. The highest methane yield was obtained co-treating the two substrates at equal ratio by using the innovative two-stage process. The proposed system reached the maximum value of 258 mL(CH4) g(VS)(-1), which was more than twice the value obtained by the one-stage process and 10% higher than the value obtained by the two-stage one.

  1. Manual for the Sampling Design Tool for ArcGIS

    OpenAIRE

    Buja, Ken; Menza, Charles

    2008-01-01

    The Biogeography Branch’s Sampling Design Tool for ArcGIS provides a means to effectively develop sampling strategies in a geographic information system (GIS) environment. The tool was produced as part of an iterative process of sampling design development, whereby existing data informs new design decisions. The objective of this process, and hence a product of this tool, is an optimal sampling design which can be used to achieve accurate, high-precision estimates of population metrics at a m...

  2. Analysing designed experiments in distance sampling

    Science.gov (United States)

    Stephen T. Buckland; Robin E. Russell; Brett G. Dickson; Victoria A. Saab; Donal N. Gorman; William M. Block

    2009-01-01

    Distance sampling is a survey technique for estimating the abundance or density of wild animal populations. Detection probabilities of animals inherently differ by species, age class, habitats, or sex. By incorporating the change in an observer's ability to detect a particular class of animals as a function of distance, distance sampling leads to density estimates...

  3. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa.

    Science.gov (United States)

    Yadavalli, Rajasri; Heggers, Goutham Rao Venkata Naga

    2013-12-19

    Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern, besides deteriorating their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be employed to treat the high organic content, offering numerous benefits along with the wastewater treatment. A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Whilst NH4+-N was completely removed, a 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in zebrafish, used as a model organism, at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer on rice seeds, and a 30% increase in the length of root and shoot was observed after the addition of the biomass to the rice plants. We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD, besides nutrients like nitrates and phosphates. The treatment also allows the treated wastewater to be discharged safely into receiving water bodies, since it is non-toxic to aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants, owing to the nitrogen-fixing ability of the green alga, and offers great potential as a biofertilizer.

  4. Two-stage series array SQUID amplifier for space applications

    Science.gov (United States)

    Tuttle, J. G.; DiPirro, M. J.; Shirron, P. J.; Welty, R. P.; Radparvar, M.

    We present test results for a two-stage integrated SQUID amplifier which uses a series array of d.c. SQUIDs to amplify the signal from a single input SQUID. The device was developed by Welty and Martinis at NIST, and recent versions have been manufactured by HYPRES, Inc. Shielding and filtering techniques were employed during the testing to minimize the external noise. Energy resolution of 300 ħ was demonstrated using a d.c. excitation at frequencies above 1 kHz, and better than 500 ħ resolution was typical down to 300 Hz.

  5. Two-Stage Aggregate Formation via Streams in Myxobacteria

    Science.gov (United States)

    Alber, Mark; Kiskowski, Maria; Jiang, Yi

    2005-03-01

    In response to adverse conditions, myxobacteria form aggregates which develop into fruiting bodies. We model myxobacteria aggregation with a lattice cell model based entirely on short range (non-chemotactic) cell-cell interactions. Local rules result in a two-stage process of aggregation mediated by transient streams. Aggregates resemble those observed in experiment and are stable against even very large perturbations. Noise in individual cell behavior increases the effects of streams and results in larger, more stable aggregates. Phys. Rev. Lett. 93: 068301 (2004).

  6. Straw Gasification in a Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Additive-prepared straw pellets were gasified in the 100 kW two-stage gasifier at The Department of Mechanical Engineering of the Technical University of Denmark (DTU). The fixed bed temperature range was 800-1000°C. In order to avoid bed sintering, as observed earlier with straw gasification...... residues were examined after the test. No agglomeration or sintering was observed in the ash residues. The tar content was measured both by solid phase amino adsorption (SPA) method and cold trapping (Petersen method). Both showed low tar contents (~42 mg/Nm3 without gas cleaning). The particle content...

  7. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  8. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  9. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
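
The two stages can be sketched on toy data: stage 1 ranks class-slot sequences by total inter-departure separation, and stage 2 fills the best sequence's slots with specific flights. The separation matrix and flight list are invented, and a greedy assignment stands in here for the paper's Branch & Bound integer-program stage.

```python
from itertools import permutations

# Toy sketch of the two-stage runway planning idea (all data assumed).

# Minimum separation (s) between consecutive departures, by weight class
SEP = {("H", "H"): 90, ("H", "M"): 120, ("M", "H"): 60, ("M", "M"): 60}

def sequence_makespan(seq):
    """Total separation time implied by a class-slot sequence."""
    return sum(SEP[pair] for pair in zip(seq, seq[1:]))

def stage1_rank(classes):
    """Stage 1: rank distinct class orderings, best (shortest) first."""
    return sorted(set(permutations(classes)), key=sequence_makespan)

def stage2_assign(best_seq, flights):
    """Stage 2: greedily place the earliest-ready flight of each class."""
    pool = sorted(flights, key=lambda f: f[2])   # (id, class, ready_time)
    schedule = []
    for slot_class in best_seq:
        pick = next(f for f in pool if f[1] == slot_class)
        pool.remove(pick)
        schedule.append(pick[0])
    return schedule

flights = [("AA1", "H", 0), ("BB2", "M", 30), ("CC3", "M", 10), ("DD4", "H", 50)]
best = stage1_rank([f[1] for f in flights])[0]
print(best, stage2_assign(best, flights))
```

Separating the cheap combinatorial ranking (over classes) from the expensive assignment (over individual flights) is what keeps the search tractable in the full algorithm.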

  10. Two-Stage Heuristic Algorithm for Aircraft Recovery Problem

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2017-01-01

    Full Text Available This study focuses on the aircraft recovery problem (ARP. In real-life operations, disruptions always cause schedule failures and make airlines suffer from great loss. Therefore, the main objective of the aircraft recovery problem is to minimize the total recovery cost and solve the problem within reasonable runtimes. An aircraft recovery model (ARM is proposed herein to formulate the ARP and use feasible line of flights as the basic variables in the model. We define the feasible line of flights (LOFs as a sequence of flights flown by an aircraft within one day. The number of LOFs exponentially grows with the number of flights. Hence, a two-stage heuristic is proposed to reduce the problem scale. The algorithm integrates a heuristic scoring procedure with an aggregated aircraft recovery model (AARM to preselect LOFs. The approach is tested on five real-life test scenarios. The computational results show that the proposed model provides a good formulation of the problem and can be solved within reasonable runtimes with the proposed methodology. The two-stage heuristic significantly reduces the number of LOFs after each stage and finally reduces the number of variables and constraints in the aircraft recovery model.

  11. Sampling Designs in Qualitative Research: Making the Sampling Process More Public

    Science.gov (United States)

    Onwuegbuzie, Anthony J.; Leech, Nancy L.

    2007-01-01

    The purpose of this paper is to provide a typology of sampling designs for qualitative researchers. We introduce the following sampling strategies: (a) parallel sampling designs, which represent a body of sampling strategies that facilitate credible comparisons of two or more different subgroups that are extracted from the same levels of study;…

  12. [Variance estimation considering multistage sampling design in multistage complex sample analysis].

    Science.gov (United States)

    Li, Yichong; Zhao, Yinjun; Wang, Limin; Zhang, Mei; Zhou, Maigeng

    2016-03-01

    Multistage sampling is a frequently used method in random sampling surveys in public health. Clustering or dependence between observations often exists in samples generated by multistage sampling, often called complex samples. Sampling error may be underestimated and the probability of type I error may be increased if the multistage sample design is not taken into consideration in the analysis. As the variance (error) estimator for a complex sample is often complicated, statistical software usually adopts the ultimate cluster variance estimate (UCVE) to approximate the estimation, which simply assumes that the sample comes from one-stage sampling. However, with an increased sampling fraction of primary sampling units, the contribution from subsequent sampling stages is no longer trivial, and the ultimate cluster variance estimate may, therefore, lead to invalid variance estimation. This paper summarizes a method of variance estimation that takes the multistage sampling design into consideration. Its performance is compared with that of the UCVE by simulating random sampling under different sampling schemes using real-world data. Simulation showed that as the primary sampling unit (PSU) sampling fraction increased, the UCVE tended to generate increasingly biased estimates, whereas accurate estimates were obtained by using the method that considers the multistage sampling design.
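
The UCVE discussed above uses only the first-stage (PSU-level) weighted totals, as if the design were single-stage with replacement. A minimal sketch for an estimated total, on invented data, is:

```python
import numpy as np

# Ultimate cluster variance estimate (UCVE) for an estimated total:
# within each stratum, only the variation between weighted PSU totals
# enters; later sampling stages are ignored. Data below are invented.

def ucve_total(psu_totals_by_stratum):
    """psu_totals_by_stratum: list of 1-D arrays of weighted PSU totals."""
    var = 0.0
    for z in psu_totals_by_stratum:
        z = np.asarray(z, float)
        n_h = len(z)                              # PSUs sampled in stratum h
        var += n_h / (n_h - 1.0) * ((z - z.mean()) ** 2).sum()
    return var

strata = [np.array([120.0, 150.0, 90.0]),         # stratum 1: 3 sampled PSUs
          np.array([200.0, 260.0])]               # stratum 2: 2 sampled PSUs
print(ucve_total(strata))
```

Because later stages never appear in the formula, the estimator is only adequate when the PSU sampling fraction is small, which is exactly the limitation the paper's simulations demonstrate.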

  13. Optimisation of sampling windows design for population pharmacokinetic experiments.

    Science.gov (United States)

    Ogungbenro, Kayode; Aarons, Leon

    2008-08-01

    This paper describes an approach for optimising sampling windows for population pharmacokinetic experiments. Sampling windows designs are more practical in late phase drug development where patients are enrolled in many centres and in out-patient clinic settings. Collection of samples under the uncontrolled environment at these centres at fixed times may be problematic and can result in uninformative data. Population pharmacokinetic sampling windows design provides an opportunity to control when samples are collected by allowing some flexibility and yet provide satisfactory parameter estimation. This approach uses information obtained from previous experiments about the model and parameter estimates to optimise sampling windows for population pharmacokinetic experiments within a space of admissible sampling windows sequences. The optimisation is based on a continuous design and in addition to sampling windows the structure of the population design in terms of the proportion of subjects in elementary designs, number of elementary designs in the population design and number of sampling windows per elementary design is also optimised. The results obtained showed that optimal sampling windows designs obtained using this approach are very efficient for estimating population PK parameters and provide greater flexibility in terms of when samples are collected. The results obtained also showed that the generalized equivalence theorem holds for this approach.

  14. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have...... gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through a part-based HOG...... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the HOG part-based approach, tracking based on a specific optimized feature, and porting to a real prototype.

  15. Laparoscopic management of a two-staged gall bladder torsion

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    Gall bladder torsion (GBT) is a relatively uncommon entity and rarely diagnosed preoperatively. A constant factor in all occurrences of GBT is a freely mobile gall bladder due to congenital or acquired anomalies. GBT is commonly observed in elderly white females. We report a 77-year-old Caucasian lady who was originally diagnosed with gall bladder perforation but was eventually found to have a two-staged torsion of the gall bladder with twisting of the Riedel's lobe (part of a tongue-like projection of liver segment 4A). This, together, has not been reported in the literature, to the best of our knowledge. We performed laparoscopic cholecystectomy and she had an uneventful postoperative period. GBT may create a diagnostic dilemma in the context of acute cholecystitis. Timely diagnosis and intervention are necessary, with extra care while operating as the anatomy is generally distorted. The fundus-first approach can be useful due to the altered anatomy in the region of Calot's triangle. Laparoscopic cholecystectomy has the benefit of early recovery.

  16. Lightweight Concrete Produced Using a Two-Stage Casting Process

    Directory of Open Access Journals (Sweden)

    Jin Young Yoon

    2015-03-01

    Full Text Available The type of lightweight aggregate and its volume fraction in a mix determine the density of lightweight concrete. Minimizing the density obviously requires a higher volume fraction, but this usually causes aggregate segregation in a conventional mixing process. This paper proposes a two-stage casting process to produce lightweight concrete. The process involves placing lightweight aggregates in a frame and then filling the remaining interstitial voids with cementitious grout. The casting process yields the lowest possible density of lightweight concrete, which consequently has low compressive strength. Irregularly shaped aggregates compensate for this weakness in strength, while round-shaped aggregates provide a strength of 20 MPa. Therefore, the proposed casting process can be applied to manufacturing non-structural elements and structural composites requiring a very low density and a strength of at most 20 MPa.

  17. TWO-STAGE OCCLUDED OBJECT RECOGNITION METHOD FOR MICROASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    WANG Huaming; ZHU Jianying

    2007-01-01

    A two-stage object recognition algorithm robust to occlusion is presented for microassembly. Coarse localization determines whether the template is present in the image and approximately where it is; fine localization gives its accurate position. In coarse localization, a local feature, which is invariant to translation, rotation and occlusion, is used to form signatures. By comparing the signature of the template with that of the image, an approximate transformation parameter from template to image is obtained, which is used as the initial parameter value for fine localization. An objective function of the transformation parameter is constructed in fine localization and minimized to achieve sub-pixel localization accuracy. The occluded pixels are not taken into account in the objective function, so the localization accuracy is not influenced by occlusion.
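
The key idea of excluding occluded pixels from the objective can be sketched in a translation-only setting (the paper additionally estimates rotation and refines to sub-pixel accuracy; the image, template, and occlusion mask below are synthetic assumptions):

```python
import numpy as np

# Occlusion-tolerant template localisation sketch: the sum of squared
# differences is taken only over pixels not flagged as occluded, so a
# covered region cannot bias the estimated position.

rng = np.random.default_rng(3)
image = rng.uniform(size=(60, 60))
template = image[20:36, 25:41].copy()       # true position: (20, 25)
image[28:36, 25:33] = 0.0                   # occlude part of the target

mask = np.ones_like(template, dtype=bool)
mask[8:, :8] = False                        # template pixels known occluded

def masked_ssd(img, tpl, top, left, m):
    """SSD between template and image window, over unmasked pixels only."""
    win = img[top:top + tpl.shape[0], left:left + tpl.shape[1]]
    return ((win - tpl)[m] ** 2).sum()

h, w = template.shape
scores = {(r, c): masked_ssd(image, template, r, c, mask)
          for r in range(image.shape[0] - h + 1)
          for c in range(image.shape[1] - w + 1)}
best = min(scores, key=scores.get)
print(best)      # recovers (20, 25) despite the occlusion
```

In the full algorithm this exhaustive grid search is replaced by minimising the objective from the coarse-stage initial estimate, which is what makes sub-pixel refinement cheap.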

  18. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is limited to temperatures between its triple point and its critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, so the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigeration applications, as are some experimental results of the adsorption-compression double-stage cycle powered with solar collectors. The adsorption system is applied as the high-temperature cycle. The low-temperature cycle is the compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low-temperature cycle and for the whole system.

  19. Description of sampling designs using a comprehensive data structure

    Science.gov (United States)

    John C. Byrne; Albert R. Stage

    1988-01-01

    Maintaining permanent plot data with different sampling designs over long periods within an organization, as well as sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is based on the definition of subpopulations of trees sampled, the rules...

  20. Two Stage Assessment of Thermal Hazard in An Underground Mine

    Science.gov (United States)

    Drenda, Jan; Sułkowski, Józef; Pach, Grzegorz; Różański, Zenon; Wrona, Paweł

    2016-06-01

    The results of research into the application of selected thermal indices of men's work and climate indices in a two-stage assessment of climatic work conditions in underground mines are presented in this article. The difference between these two kinds of indices was pointed out during the project entitled "The recruiting requirements for miners working in hot underground mine environments". The project was coordinated by the Institute of Mining Technologies at Silesian University of Technology and was part of the Polish strategic project "Improvement of safety in mines", financed by the National Centre of Research and Development. Climate indices are based only on physical parameters of the air and their measurements. Thermal indices include additional factors strictly connected with the work, e.g. the thermal resistance of clothing, the kind of work, etc. Special emphasis has been put on the following indices: the substitute Silesian temperature (TS), which is considered a climate index, and the thermal discomfort index (δ), which belongs to the thermal indices group. The possibility of a two-stage application of these indices has been taken into consideration (preliminary and detailed estimation). Examples prove that applying the thermal index (detailed estimation) makes it possible to avoid additional technical solutions that would otherwise be necessary, according to the climate index, to reduce the thermal hazard in particular workplaces. The threshold limit value for TS has been set based on these results: it was shown that below TS = 24°C it is not necessary to perform the detailed estimation.

  1. S-band gain-flattened EDFA with two-stage double-pass configuration

    Science.gov (United States)

    Fu, Hai-Wei; Xu, Shi-Chao; Qiao, Xue-Guang; Jia, Zhen-An; Liu, Ying-Gang; Zhou, Hong

    2011-11-01

    A gain-flattened S-band erbium-doped fiber amplifier (EDFA) using standard erbium-doped fiber (EDF) is proposed and experimentally demonstrated. The proposed amplifier, with a two-stage double-pass configuration, employs two C-band suppressing filters to obtain optical gain in the S-band. The amplifier provides a maximum signal gain of 41.6 dB at 1524 nm with a corresponding noise figure of 3.8 dB. Furthermore, with a well-designed short-pass filter as a gain-flattening filter (GFF), we are able to develop an S-band EDFA with a flattened gain of more than 20 dB over 1504-1524 nm. In the experiment, the two-stage double-pass amplifier configuration improves the gain and noise-figure performance compared with a single-stage double-pass S-band EDFA configuration.

  2. Performance of a highly loaded two stage axial-flow fan

    Science.gov (United States)

    Ruggeri, R. S.; Benser, W. A.

    1974-01-01

    A two-stage axial-flow fan with a tip speed of 1450 ft/sec (442 m/sec) and an overall pressure ratio of 2.8 was designed, built, and tested. At design speed and pressure ratio, the measured flow matched the design value of 184.2 lbm/sec (83.55kg/sec). The adiabatic efficiency at the design operating point was 85.7 percent. The stall margin at design speed was 10 percent. A first-bending-mode flutter of the second-stage rotor blades was encountered near stall at speeds between 77 and 93 percent of design, and also at high pressure ratios at speeds above 105 percent of design. A 5 deg closed reset of the first-stage stator eliminated second-stage flutter for all but a narrow speed range near 90 percent of design.

  3. Design and performance analysis of solar-powered air-cooled two-stage ejector cooling systems with natural refrigerants for middle- and low-temperature applications

    Institute of Scientific and Technical Information of China (English)

    卢苇; 陈洪杰; 杨林; 曹聪

    2012-01-01

    According to the temperature requirements of middle- and low-temperature air conditioning, solar-powered air-cooled two-stage ejector cooling systems with a rated cooling capacity of 10 kW were designed using water, ammonia, R290 and R600a as working fluids, and their off-design performance was analysed. At a given cooling capacity and indoor temperature, the water system is the most material-saving, followed by the ammonia and R290 systems with roughly equivalent consumption, while the R600a system consumes the most material. All four systems show relatively good off-design performance, and their cooling capacities are nearly the same when the influences of ambient temperature and solar irradiance are considered together. The water system presents a higher COP than the other systems, an advantage that is more pronounced under weak solar irradiance; the remaining systems rank, from high to low COP, as ammonia, R290 and R600a. In regions with relatively weak solar irradiance, a solar-powered ejector refrigeration system with water as the working fluid is more suitable.

  4. Design and evaluation of two-stage multiplex real-time PCR method for detecting O157:H7 and non-O157 STEC strains from beef samples

    Science.gov (United States)

    Introduction: E. coli O157:H7 was first recognized as a human pathogen in 1982 and until recently was the only E. coli strain mandated for testing by the USDA. In June 2012, the USDA declared six additional Shiga-toxin producing E. coli serogroups (O26, O45, O103, O111, O121, and O145) as adulterant...

  5. Development of Two-Stage Stirling Cooler for ASTRO-F

    Science.gov (United States)

    Narasaki, K.; Tsunematsu, S.; Ootsuka, K.; Kyoya, M.; Matsumoto, T.; Murakami, H.; Nakagawa, T.

    2004-06-01

    A two-stage small Stirling cooler has been developed and tested for the infrared astronomical satellite ASTRO-F that is planned to be launched by Japanese M-V rocket in 2005. ASTRO-F has a hybrid cryogenic system that is a combination of superfluid liquid helium (HeII) and two-stage Stirling coolers. The mechanical cooler has a two-stage displacer driven by a linear motor in a cold head and a new linear-ball-bearing system for the piston-supporting structure in a compressor. The linear-ball-bearing supporting system achieves the piston clearance seal, the long piston-stroke operation and the low frequency operation. The typical cooling power is 200 mW at 20 K and the total input power to the compressor and the cold head is below 90 W without driver electronics. The engineering, the prototype and the flight models of the cooler have been fabricated and evaluated to verify the capability for ASTRO-F. This paper describes the design of the cooler and the results from verification tests including cooler performance test, thermal vacuum test, vibration test and lifetime test.

  6. A two-stage Stirling-type pulse tube cryocooler with a cold inertance tube

    Science.gov (United States)

    Gan, Z. H.; Fan, B. Y.; Wu, Y. Z.; Qiu, L. M.; Zhang, X. J.; Chen, G. B.

    2010-06-01

    A thermally coupled two-stage Stirling-type pulse tube cryocooler (PTC) with inertance tubes as phase shifters has been designed, manufactured and tested. In order to obtain a larger phase shift at the low acoustic power of about 2.0 W, a cold inertance tube and a cold reservoir for the second stage, precooled by the cold end of the first stage, were introduced into the system. The transmission line model was used to calculate the phase shift produced by the cold inertance tube. The effects of regenerator material, geometry and charging pressure on the performance of the second stage were investigated using the well-known regenerator model REGEN. Experiments on the two-stage PTC were carried out with an emphasis on the performance of the second stage. A lowest cooling temperature of 23.7 K, and 0.50 W of cooling power at 33.9 K, were obtained with an input electric power of 150.0 W and an operating frequency of 40 Hz.

  7. Industrial demonstration plant for the gasification of herb residue by fluidized bed two-stage process.

    Science.gov (United States)

    Zeng, Xi; Shao, Ruyi; Wang, Fang; Dong, Pengwei; Yu, Jian; Xu, Guangwen

    2016-04-01

    A fluidized bed two-stage gasification process, consisting of a fluidized-bed (FB) pyrolyzer and a transport fluidized bed (TFB) gasifier, has been proposed to gasify biomass for fuel gas production with low tar content. On the basis of our previous fundamental study, an autothermal two-stage gasifier was designed and built to gasify a Chinese herb residue with a treating capacity of 600 kg/h. Data from the stable operational stage of the industrial demonstration plant showed that, with the pyrolyzer and gasifier held at about 700 °C and 850 °C respectively, the heating value of the fuel gas reached 1200 kcal/Nm(3) and the tar content in the produced fuel gas was about 0.4 g/Nm(3). The results from this pilot industrial demonstration plant fully verified the feasibility and technical features of the proposed FB two-stage gasification process.

  8. Noncausal two-stage image filtration at presence of observations with anomalous errors

    Directory of Open Access Journals (Sweden)

    S. V. Vishnevyy

    2013-04-01

    Full Text Available Introduction. For the filtration of images containing regions with anomalous errors, it is necessary to develop adaptive algorithms that detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. An adaptive algorithm for noncausal two-stage filtration is developed. In the first stage, an adaptive one-dimensional causal filtration algorithm processes the rows and columns of the image independently. In the second stage, the obtained data are combined and a posteriori estimates are calculated. Results of experimental investigations. The developed adaptive algorithm is investigated on a model sample by means of statistical modeling on a PC. The image is modeled as a realization of a Gaussian-Markov random field and corrupted with uncorrelated Gaussian noise; regions with anomalous errors are corrupted with uncorrelated Gaussian noise of higher power than the normal noise on the rest of the image. Conclusions. The adaptive algorithm for noncausal two-stage filtration is analyzed, the accuracy of the computed estimates is characterized, and the first and second stages of the algorithm are compared. The adaptive algorithm is also compared with the known uniform two-stage image filtration algorithm: the uniform algorithm does not suppress the anomalous noise, whereas the adaptive algorithm shows good results.
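
    To make the two-stage idea concrete, here is a minimal Python sketch: stage one runs a simple first-order causal filter independently along rows and columns, and stage two fuses the two estimates with equal weights. This is only an illustration under those assumptions; the adaptive per-region parameter selection and the proper a posteriori weighting described in the abstract are deliberately omitted.

```python
import numpy as np

def causal_ema(x, alpha=0.5):
    """Simple causal first-order filter along the last axis (an illustrative
    stand-in for an adaptive causal estimator)."""
    y = np.empty_like(x, dtype=float)
    y[..., 0] = x[..., 0]
    for k in range(1, x.shape[-1]):
        y[..., k] = alpha * x[..., k] + (1 - alpha) * y[..., k - 1]
    return y

def two_stage_noncausal(img, alpha=0.5):
    # Stage 1: independent causal passes along rows and columns.
    rows = causal_ema(img, alpha)        # left-to-right along each row
    cols = causal_ema(img.T, alpha).T    # top-to-bottom along each column
    # Stage 2: combine the two first-stage estimates (equal weights here;
    # the paper derives proper weights from the error covariances).
    return 0.5 * (rows + cols)
```

Even this crude fusion reduces the mean squared error of a noisy image; the adaptive version would additionally switch filter parameters inside anomalous-noise regions.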

  9. Characterization of component interactions in two-stage axial turbine

    Directory of Open Access Journals (Sweden)

    Adel Ghenaiet

    2016-08-01

    Full Text Available This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when simulating the turbine stages simultaneously or separately. By considering the multi-blade per row and the scaling technique, the computational fluid dynamics (CFD) produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space–time periodicity described by a double Fourier decomposition. The fast Fourier transform (FFT) analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, where each lobed structure of the pressure wave corresponds to the vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect dominates almost the entire blade span, while downstream of the blades it dominates from hub to mid span. Near the shroud the prevailing effect is instead linked to the blade tip flow structure.
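
    The spectral analysis described above amounts to locating blade-passing harmonics and their multiples in a pressure trace. A hedged Python illustration on a synthetic signal (the blade count, shaft speed and amplitudes are invented, not taken from this turbine):

```python
import numpy as np

# Synthetic pressure trace: a rotor with 60 blades at 100 rev/s gives a
# blade-passing frequency (BPF) of 6 kHz, plus its first multiple at 12 kHz.
fs, N = 100_000, 10_000            # sample rate [Hz], number of samples
t = np.arange(N) / fs
bpf = 60 * 100
p = 1.0 * np.sin(2 * np.pi * bpf * t) + 0.3 * np.sin(2 * np.pi * 2 * bpf * t)

spec = np.abs(np.fft.rfft(p)) / N  # one-sided amplitude spectrum
freqs = np.fft.rfftfreq(N, 1 / fs)
peak = freqs[np.argmax(spec)]      # the strongest line sits at the BPF
```

In a real measurement the same procedure recovers the principal harmonic at the vane/blade-passing frequency and its multiples from the recorded interface pressures.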

  10. A continuous two stage solar coal gasification system

    Science.gov (United States)

    Mathur, V. K.; Breault, R. W.; Lakshmanan, S.; Manasse, F. K.; Venkataramanan, V.

    The characteristics of a two-stage fluidized-bed hybrid coal gasification system to produce syngas from coal, lignite, and peat are described. Heat for devolatilization at 823 K is supplied by recirculating gas heated by a solar receiver/coal heater. A second-stage gasifier maintained at 1227 K serves to crack remaining tar and light oil to yield a product free from tar and other condensables, and sulfur can be removed by hot clean-up processes. CO is minimized because the coal is not burned with oxygen, and the product gas contains 50% H2. The bench scale reactors consist of a stage I unit 0.1 m in diameter which is fed coal particles 200 microns in size. The stage II reactor has an inner diameter of 0.36 m and serves to gasify the char from stage I. A solar power source of 10 kWt is required for the bench model, and will be obtained from a central receiver with quartz or heat pipe configurations for heat transfer.

  11. Characterization of component interactions in two-stage axial turbine

    Institute of Scientific and Technical Information of China (English)

    Adel Ghenaiet; Kaddour Touil

    2016-01-01

    This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when simulating the turbine stages simultaneously or separately. By considering the multi-blade per row and the scaling technique, the Computational fluid dynamics (CFD) produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space–time periodicity described by a double Fourier decomposition. The Fast Fourier Transform FFT analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, and each lobed structure of pressure wave corresponds to the number of vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect is seen to dominate the quasi totality of blade span, while downstream the blades this effect seems to dominate from hub to mid span. Near the shroud the prevailing effect is rather linked to the blade tip flow structure.

  12. Two-stage kinetics of municipal solid waste inoculation composting processes

    Institute of Scientific and Technical Information of China (English)

    XI Bei-dou1; HUANG Guo-he; QIN Xiao-sheng; LIU Hong-liang

    2004-01-01

    In order to understand the key mechanisms of the composting processes, the municipal solid waste (MSW) composting processes were divided into two stages, and the characteristics of typical experimental scenarios were analyzed from the viewpoint of microbial kinetics. Through experimentation with an advanced composting reactor under controlled composting conditions, several equations were worked out to simulate the degradation rate of the substrate. The equations showed that the degradation rate was controlled by the concentration of microbes in the first stage. The substrate degradation rates of the inoculation Runs A, B and C and the Control composting system were 13.61 g/(kg·h), 13.08 g/(kg·h), 15.671 g/(kg·h), and 10.5 g/(kg·h), respectively; the value for Run C is around 1.5 times that of the Control system. The decomposition rate of the second stage is controlled by the concentration of substrate. Although the organic matter decomposition rates were similar for all runs, inoculation reduced the values of the half-velocity coefficient and made the composting stabilize more efficiently. In particular, for Run C the decomposition rate is high in the first stage and low in the second stage. The results indicated that the inoculation was efficient for the composting processes.
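
    The two-stage picture (stage one limited by microbial concentration, stage two by substrate concentration via a half-velocity coefficient) can be sketched with generic rate laws. The parameter values below are placeholders of ours, not the fitted constants from these experiments:

```python
def degradation_rate(substrate, microbes, stage, k=0.013, mu_max=12.0, Ks=40.0):
    """Illustrative two-stage rate law for MSW composting.

    Stage 1: rate proportional to microbial concentration.
    Stage 2: Monod-type dependence on substrate, with half-velocity
    coefficient Ks. All parameters are placeholder values.
    """
    if stage == 1:
        return k * microbes                        # grows with biomass
    return mu_max * substrate / (Ks + substrate)   # saturates at mu_max
```

In this picture, the reported effect of inoculation would appear as a smaller half-velocity coefficient Ks, i.e. the stage-two rate stays near its maximum down to lower substrate levels.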

  13. From Continuous-Time Design to Sampled-Data Design of Nonlinear Observers

    OpenAIRE

    Karafyllis, Iasson; Kravaris, Costas

    2008-01-01

    In this work, a sampled-data nonlinear observer is designed using a continuous-time design coupled with an inter-sample output predictor. The proposed sampled-data observer is a hybrid system. It is shown that under certain conditions, the robustness properties of the continuous-time design are inherited by the sampled-data design, as long as the sampling period is not too large. The approach is applied to linear systems and to triangular globally Lipschitz systems.
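
    A minimal numerical sketch of such a hybrid scheme, for a toy linear system of our own choosing (a double integrator with a hand-picked gain): between sampling instants the observer is driven by a predicted output, and the prediction is reset to the true measurement at each sample. This is a simplified illustration of the idea, not the paper's construction.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator x' = A x
C = np.array([1.0, 0.0])                 # sampled output y = C x
L = np.array([3.0, 2.0])                 # places observer poles at -1, -2

dt, Ts, T = 0.001, 0.1, 10.0             # Euler step, sampling period, horizon
steps_per_sample = round(Ts / dt)
x = np.array([1.0, 0.0])                 # true state (constant for this A, x0)
xh = np.zeros(2)                         # observer state
w = 0.0                                  # inter-sample output prediction

for k in range(int(T / dt)):
    if k % steps_per_sample == 0:
        w = C @ x                        # sampling instant: reset predictor
    innov = w - C @ xh
    # the predictor copies the estimated output dynamics between samples
    w += dt * (C @ (A @ xh + L * innov))
    xh += dt * (A @ xh + L * innov)      # continuous-time Luenberger design
    x += dt * (A @ x)                    # plant
```

With this sampling period the estimate converges to the true state, consistent with the paper's point that the continuous-time design's properties survive sampling when the period is not too large.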

  14. System design description for sampling fuel in K basins

    Energy Technology Data Exchange (ETDEWEB)

    Ritter, G.A., Westinghouse Hanford

    1996-09-17

    This System Design Description provides: (1) statements of the Spent Nuclear Fuel Project's needs for sampling of fuel in the K East and K West Basins, (2) the sampling equipment functions and requirements, (3) a general work plan and the design logic followed to develop the equipment, and (4) a summary description of the design for the sampling equipment. This report summarizes the integrated application of both the subject equipment and the canister sludge sampling system in the characterization campaigns at K Basins.

  15. A Frequency Domain Design Method For Sampled-Data Compensators

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Jannerup, Ole Erik

    1990-01-01

    A new approach to the design of a sampled-data compensator in the frequency domain is investigated. The starting point is a continuous-time compensator for the continuous-time system which satisfy specific design criteria. The new design method will graphically show how the discrete...

  16. Generalized Yule-walker and two-stage identification algorithms for dual-rate systems

    Institute of Scientific and Technical Information of China (English)

    Feng DING

    2006-01-01

    In this paper, two approaches are developed for directly identifying single-rate models of dual-rate stochastic systems in which the input updating frequency is an integer multiple of the output sampling frequency. The first is the generalized Yule-Walker algorithm and the second is a two-stage algorithm based on the correlation technique. The basic idea is to directly identify the parameters of underlying single-rate models instead of the lifted models of dual-rate systems from the dual-rate input-output data, assuming that the measurement data are stationary and ergodic. An example is given.
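
    The classical single-rate Yule-Walker equations that the first algorithm generalizes can be sketched as follows; the AR(1) example and its parameters are our own, not from the paper:

```python
import numpy as np

def yule_walker(x, order):
    """Estimate AR coefficients from sample autocovariances
    (the classical Yule-Walker equations)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

# AR(1) example: x[t] = 0.8 x[t-1] + e[t], stationary and ergodic as assumed
rng = np.random.default_rng(1)
n = 20_000
e = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + e[t]
a = yule_walker(x, 1)   # a[0] should be close to 0.8
```

The dual-rate setting replaces these single-rate autocovariances with correlations computed from the fast input and slow output sequences, but the estimation principle is the same.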

  17. FORMATION OF HIGHLY RESISTANT CARBIDE AND BORIDE COATINGS BY A TWO-STAGE DEPOSITION METHOD

    Directory of Open Access Journals (Sweden)

    W. I. Sawich

    2011-01-01

    Full Text Available A study was made of the aspects of forming highly resistant coatings in the surface zone of tool steels and solid carbide inserts by a two-stage method. At the first stage of the method, pure Ta or Nb coatings were electrodeposited on samples of tool steel and solid carbide insert in a molten salt bath containing Ta and Nb fluorides. At the second stage, the electrodeposited Ta (Nb) coating was subjected to carburizing or boriding to form carbide (TaC, NbC) or boride (TaB, NbB) cladding layers.

  18. SQL/JavaScript Hybrid Worms As Two-stage Quines

    CERN Document Server

    Orlicki, José I

    2009-01-01

    Delving into present trends and anticipating future malware trends, a hybrid, SQL on the server-side, JavaScript on the client-side, self-replicating worm based on two-stage quines was designed and implemented on an ad-hoc scenario instantiating a very common software pattern. The proof of concept code combines techniques seen in the wild, in the form of SQL injections leading to cross-site scripting JavaScript inclusion, and seen in the laboratory, in the form of SQL quines propagated via RFIDs, resulting in a hybrid code injection. General features of hybrid worms are also discussed.

  19. Performance of the SITP 35K two-stage Stirling cryocooler

    Science.gov (United States)

    Liu, Dongyu; Li, Ao; Li, Shanshan; Wu, Yinong

    2010-04-01

    This paper presents the design, development, optimization experiments and performance of the SITP two-stage Stirling cryocooler. The geometry of the cooler, especially the diameter and length of the regenerator, was analyzed. Operating parameters were optimized experimentally to maximize the second-stage cooling performance. In the tests the cooler was operated at various drive frequencies, phase shifts between displacer and piston, and fill pressures. The experimental results indicate that the cryocooler achieves a high efficiency, with a performance of 0.85 W at 35 K for a compressor input power of 56 W, at a phase shift of 65°, an operating frequency of 40 Hz and a fill pressure of 1 MPa.

  20. Space Station Freedom carbon dioxide removal assembly two-stage rotary sliding vane pump

    Science.gov (United States)

    Matteau, Dennis

    1992-07-01

    The design and development of a positive displacement pump selected to operate as an essential part of the carbon dioxide removal assembly (CDRA) are described. An oilless two-stage rotary sliding vane pump was selected as the optimum concept to meet the CDRA application requirements. This positive displacement pump is characterized by low weight and small envelope per unit flow, ability to pump saturated gases and moderate amount of liquid, small clearance volumes, and low vibration. It is easily modified to accommodate several stages on a single shaft optimizing space and weight, which makes the concept ideal for a range of demanding space applications.

  1. Probability sampling design in ethnobotanical surveys of medicinal plants

    Directory of Open Access Journals (Sweden)

    Mariano Martinez Espinosa

    2012-12-01

    Full Text Available Non-probability sampling design can be used in ethnobotanical surveys of medicinal plants. However, this method does not allow statistical inferences to be made from the data generated. The aim of this paper is to present a probability sampling design that is applicable in ethnobotanical studies of medicinal plants. The sampling design employed in the research titled "Ethnobotanical knowledge of medicinal plants used by traditional communities of Nossa Senhora Aparecida do Chumbo district (NSACD, Poconé, Mato Grosso, Brazil" was used as a case study. Probability sampling methods (simple random and stratified sampling were used in this study. In order to determine the sample size, the following data were considered: population size (N of 1179 families; confidence coefficient, 95%; sample error (d, 0.05; and a proportion (p, 0.5. The application of this sampling method resulted in a sample size (n of at least 290 families in the district. The present study concludes that probability sampling methods necessarily have to be employed in ethnobotanical studies of medicinal plants, particularly where statistical inferences have to be made using data obtained. This can be achieved by applying different existing probability sampling methods, or better still, a combination of such methods.
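
    The reported sample size can be reproduced with Cochran's formula for a proportion plus the finite-population correction (the function name and the z value are ours; z = 1.96 corresponds to the 95% confidence coefficient):

```python
import math

def finite_population_sample_size(N, d=0.05, p=0.5, z=1.96):
    """Sample size for estimating a proportion in a finite population:
    Cochran's formula with the finite-population correction."""
    n0 = z**2 * p * (1 - p) / d**2              # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))   # corrected for population N
```

With N = 1179 families, d = 0.05 and p = 0.5 this yields n = 290, matching the minimum sample size quoted above; for a very large N it reverts to the familiar n ≈ 385.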

  2. Right Axillary Sweating After Left Thoracoscopic Sympathectomy in Two-Stage Surgery

    Directory of Open Access Journals (Sweden)

    Berkant Ozpolat

    2013-06-01

    Full Text Available One-stage bilateral or two-stage unilateral video-assisted thoracoscopic sympathectomy can be performed in the treatment of primary focal hyperhidrosis. Here we present a case of compensatory sweating on the contralateral side after a two-stage operation.

  3. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  4. Design of a gravity corer for near shore sediment sampling

    Digital Repository Service at National Institute of Oceanography (India)

    Bhat, S.T.; Sonawane, A; Nayak, B

    For the purpose of geotechnical investigation a gravity corer has been designed and fabricated to obtain undisturbed sediment core samples from near shore waters. The corer was successfully operated at 75 stations up to a water depth of 30 m. Simplicity...

  5. Design of perfect reconstruction rational sampling filter banks

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The design of rational sampling filter banks based on a recombination structure can be formulated as a problem with two objective functions to be optimized. A new hybrid optimization method for designing perfect-reconstruction rational sampling filter banks is presented, which can be used to solve a class of problems with two objective functions. The method converges well at moderate computational cost. Satisfactory results free of aliasing in the analysis and synthesis filters can be obtained by the proposed method.

  6. Using Errors to Teach through a Two-Staged, Structured Review: Peer-Reviewed Quizzes and "What's Wrong with Me?"

    Science.gov (United States)

    Coppola, Brian P.; Pontrello, Jason K.

    2014-01-01

    Using errors as a method of learning has been made explicit through a two-staged peer review and discussion. During organic chemistry discussion sessions, quizzes are followed by a structured peer review designed to help students identify and discuss student errors. After the face-to-face discussion, a second stage of review involves analyzing and…

  7. Two-stage dental implants inserted in a one-stage procedure : a prospective comparative clinical study

    NARCIS (Netherlands)

    Heijdenrijk, Kees

    2002-01-01

    The results of this study indicate that dental implants designed for a submerged implantation procedure can be used in a single-stage procedure and may be as predictable as one-stage implants. Although one-stage implant systems and two-stage.

  9. Two staged incentive contract focused on efficiency and innovation matching in critical chain project management

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2014-09-01

    Full Text Available Purpose: The purpose of this paper is to define the relative optimal incentive contract to effectively encourage employees to improve work efficiency while actively implementing innovative behavior. Design/methodology/approach: This paper analyzes a two-staged incentive contract coordinating efficiency and innovation in Critical Chain Project Management using learning real options, based on principal-agent theory. A situational experiment is used to analyze the validity of the basic model. Findings: The two-staged incentive scheme is well suited to employees creating and implementing learning real options, encouraging them to engage efficiently in the innovation process in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include the individual characteristics of uncertainty perception, which might affect the consistency of external validity. The basic model and the experiment design need improvement. Practical implications: Project managers should pay closer attention to early innovation behavior and to monitoring feedback of competition time in the implementation of Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using principal-agent theory, to encourage the completion of CCPM methods as well as imitative free-riding on the creative ideas of other members of the team.

  10. Two-Stage Exams Improve Student Learning in an Introductory Geology Course: Logistics, Attendance, and Grades

    Science.gov (United States)

    Knierim, Katherine; Turner, Henry; Davis, Ralph K.

    2015-01-01

    Two-stage exams--where students complete part one of an exam closed book and independently and part two is completed open book and independently (two-stage independent, or TS-I) or collaboratively (two-stage collaborative, or TS-C)--provide a means to include collaborative learning in summative assessments. Collaborative learning has been shown to…

  11. ANL small-sample calorimeter system design and operation

    Energy Technology Data Exchange (ETDEWEB)

    Roche, C.T.; Perry, R.B.; Lewis, R.N.; Jung, E.A.; Haumann, J.R.

    1978-07-01

    The Small-Sample Calorimetric System is a portable instrument designed to measure the thermal power produced by radioactive decay of plutonium-containing fuels. The small-sample calorimeter is capable of measuring samples producing power up to 32 milliwatts at a rate of one sample every 20 min. The instrument is contained in two packages: a data-acquisition module consisting of a microprocessor with an 8K-byte nonvolatile memory, and a measurement module consisting of the calorimeter and a sample preheater. The total weight of the system is 18 kg.

  12. Experimental and Sampling Design for the INL-2 Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F.; Amidan, Brett G.; Matzke, Brett D.

    2009-02-16

    This report describes the experimental and sampling design developed to assess sampling approaches and methods for detecting contamination in a building and clearing the building for use after decontamination. An Idaho National Laboratory (INL) building will be contaminated with BG (Bacillus globigii, renamed Bacillus atrophaeus), a simulant for Bacillus anthracis (BA). The contamination, sampling, decontamination, and re-sampling will occur per the experimental and sampling design. This INL-2 Sample Collection Operational Test is being planned by the Validated Sampling Plan Working Group (VSPWG). The primary objectives are: 1) Evaluate judgmental and probabilistic sampling for characterization as well as probabilistic and combined (judgment and probabilistic) sampling approaches for clearance, 2) Conduct these evaluations for gradient contamination (from low or moderate down to absent or undetectable) for different initial concentrations of the contaminant, 3) Explore judgment composite sampling approaches to reduce sample numbers, 4) Collect baseline data to serve as an indication of the actual levels of contamination in the tests. A combined judgmental and random (CJR) approach uses Bayesian methodology to combine judgmental and probabilistic samples to make clearance statements of the form "X% confidence that at least Y% of an area does not contain detectable contamination” (X%/Y% clearance statements). The INL-2 experimental design has five test events, which 1) vary the floor of the INL building on which the contaminant will be released, 2) provide for varying the amount of contaminant released to obtain desired concentration gradients, and 3) investigate overt as well as covert release of contaminants. Desirable contaminant gradients would have moderate to low concentrations of contaminant in rooms near the release point, with concentrations down to zero in other rooms. Such gradients would provide a range of contamination levels to challenge the sampling
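
    The X%/Y% clearance statement can be illustrated with a deliberately simplified Bayesian calculation: assume a uniform prior on the contaminated fraction of the area and n random samples that all come back negative. This ignores the judgmental component of the report's CJR machinery and is only a back-of-the-envelope sketch:

```python
import math

def clearance_samples(X, Y):
    """Number of all-negative random samples needed to state
    'X% confidence that at least Y% of the area is uncontaminated'.

    With a uniform prior on the contaminated fraction theta, the posterior
    after n negative samples is Beta(1, n+1), so
    P(theta <= 1 - Y) = 1 - Y**(n+1); solve 1 - Y**(n+1) >= X for n.
    """
    return math.ceil(math.log(1 - X) / math.log(Y) - 1)
```

For a 95%/99% statement this simplified calculation already requires 298 all-negative samples, which is one reason composite sampling approaches that reduce sample numbers are worth exploring.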

  13. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  14. Spatial Sampling Design for Estimating Regional GPP With Spatial Heterogeneities

    NARCIS (Netherlands)

    Wang, J.H.; Ge, Y.; Heuvelink, G.B.M.; Zhou, C.H.

    2014-01-01

    The estimation of regional gross primary production (GPP) is a crucial issue in carbon cycle studies. One commonly used way to estimate the characteristics of GPP is to infer the total amount of GPP by collecting field samples. In this process, the spatial sampling design will affect the error

  16. DESIGN SAMPLING AND REPLICATION ASSIGNMENT UNDER FIXED COMPUTING BUDGET

    Institute of Scientific and Technical Information of China (English)

    Loo Hay LEE; Ek Peng CHEW

    2005-01-01

    For many real world problems, when the design space is huge and unstructured, and time consuming simulation is needed to estimate the performance measure, it is important to decide how many designs to sample and how long to run each design alternative given that we have only a fixed amount of computing time. In this paper, we present a simulation study on how the distribution of the performance measures and the distribution of the estimation errors/noises will affect the decision. From the analysis, it is observed that when the underlying distribution of the noise is bounded and there is a high chance of getting the smallest noise, then the decision should be to sample as many designs as possible; but if the noise is unbounded, then it is important to reduce the noise level first by assigning more replications to each design. On the other hand, if the distribution of the performance measure indicates a high chance of getting good designs, the suggestion is also to reduce the noise level; otherwise, we need to sample more designs so as to increase the chances of getting good designs. For the special case when the distributions of both the performance measures and the noise are normal, we are able to estimate the number of designs to sample and the number of replications to run in order to obtain the best performance.
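
    The trade-off studied here can be reproduced with a small Monte Carlo sketch. All distributions, budget numbers and parameter values below are invented for illustration (Gaussian performance measures and noise, smaller values better), not the paper's experimental setup:

```python
import numpy as np

def best_design_found(budget, n_designs, noise_sd, rng):
    """Sample n_designs alternatives, spend budget // n_designs replications
    on each, and return the TRUE value of the design whose sample mean
    looks best (minimization)."""
    reps = max(budget // n_designs, 1)
    true_vals = rng.normal(size=n_designs)            # performance measures
    observed = true_vals + rng.normal(0.0, noise_sd,
                                      (n_designs, reps)).mean(axis=1)
    return true_vals[np.argmin(observed)]

def average_best(budget, n_designs, noise_sd, trials=3000, seed=0):
    """Average true value of the selected design over many trials."""
    rng = np.random.default_rng(seed)
    return float(np.mean([best_design_found(budget, n_designs, noise_sd, rng)
                          for _ in range(trials)]))
```

With small noise the budget is best spent sampling many designs; with large noise, reducing the noise level by running more replications of fewer designs finds better designs on average, mirroring the observation above.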

  17. Alternative sampling designs and estimators for annual surveys

    Science.gov (United States)

    Paul C. Van Deusen

    2000-01-01

    Annual forest inventory systems in the United States have generally converged on sampling designs that: (1) measure equal proportions of the total number of plots each year; and (2) call for the plots to be systematically dispersed. However, there will inevitably be a need to deviate from the basic design to respond to special requests, natural disasters, and budgetary...

  18. Implications of sampling design and sample size for national carbon accounting systems.

    Science.gov (United States)

    Köhl, Michael; Lister, Andrew; Scott, Charles T; Baldauf, Thomas; Plugge, Daniel

    2011-11-08

    Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests, the information is generally obtained by sample based surveys. Most operational sampling approaches utilize a combination of earth-observation data and in-situ field assessments as data sources. We compared the cost-efficiency of four different sampling design alternatives (simple random sampling, regression estimators, stratified sampling, 2-phase sampling with regression estimators) that have been proposed in the scope of REDD. Three of the design alternatives provide for a combination of in-situ and earth-observation data. Under different settings of remote sensing coverage, cost per field plot, cost of remote sensing imagery, correlation between attributes quantified in remote sensing and field data, as well as population variability, the percent standard error per total survey cost was calculated. The cost-efficiency of forest carbon stock assessments is driven by the sampling design chosen. Our results indicate that the cost of remote sensing imagery is decisive for the cost-efficiency of a sampling design. The variability of the sample population impairs cost-efficiency, but does not reverse the pattern of cost-efficiency of the individual design alternatives. Our results clearly indicate that it is important to consider cost-efficiency in the development of forest carbon stock assessments and the selection of remote sensing techniques. The development of MRV systems for REDD needs to be based on a sound optimization process that compares different data sources and sampling designs with respect to their cost-efficiency. This helps to reduce the uncertainties related to the quantification of carbon stocks and to increase the financial benefits from adopting a REDD regime.
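
    The decisive role of imagery cost can be sketched with textbook variance formulas for simple random sampling versus a regression estimator that exploits remote-sensing data; all numbers below are invented for illustration:

```python
def survey_cost(se_target, sd, plot_cost, design="srs", rho=0.0, rs_cost=0.0):
    """Total cost of hitting a target standard error of an estimated mean,
    for simple random sampling ('srs') or a regression estimator that uses
    auxiliary remote-sensing data correlated with the field attribute.

    Textbook variances: Var(srs) = sd^2 / n,
                        Var(regression) ~ sd^2 (1 - rho^2) / n.
    """
    unit_var = sd**2 * (1.0 - rho**2) if design == "regression" else sd**2
    n_plots = unit_var / se_target**2        # field plots needed
    return n_plots * plot_cost + rs_cost     # imagery is a fixed add-on cost
```

With correlation rho = 0.8 the regression estimator needs only (1 − rho²) = 36% of the field plots, so it wins when imagery is cheap and loses when imagery costs more than the plots it saves, which is exactly the pattern reported above.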

  19. Two-stage high frequency pulse tube cooler for refrigeration at 25 K

    CERN Document Server

    Dietrich, M

    2009-01-01

    A two-stage Stirling-type U-shape pulse tube cryocooler driven by a 10 kW-class linear compressor was designed, built and tested. A special feature of the cold head is the absence of a heat exchanger at the cold end of the first stage, since the intended application requires no cooling power at an intermediate temperature. Simulations were done using the Sage software to find optimum operating conditions and cold head geometry. Flow-impedance matching was required to connect the compressor, designed for 60 Hz operation, to the 40 Hz cold head. A cooling power of 12.9 W at 25 K with an electrical input power of 4.6 kW has been achieved to date. The lowest temperature reached is 13.7 K.

  20. Alignment and characterization of the two-stage time delay compensating XUV monochromator

    CERN Document Server

    Eckstein, Martin; Kubin, Markus; Yang, Chung-Hsin; Frassetto, Fabio; Poletto, Luca; Vrakking, Marc J J; Kornilov, Oleg

    2016-01-01

    We present the design, implementation and alignment procedure for a two-stage time delay compensating monochromator. The setup spectrally filters the radiation of a high-order harmonic generation source, providing wavelength-selected XUV pulses with a bandwidth of 300 to 600~meV in the photon energy range of 3 to 50~eV. XUV pulses as short as $12\pm3$~fs are demonstrated. Transmission of the 400~nm (3.1~eV) light facilitates precise alignment of the monochromator. This alignment strategy, together with the stable mechanical design of the motorized beamline components, enables us to automatically scan the XUV photon energy in pump-probe experiments that require XUV beam pointing stability. The performance of the beamline is demonstrated by the generation of IR-assisted sidebands in XUV photoionization of argon atoms.

  1. A Remote Liquid Target Loading System for a Two-Stage Gas Gun

    Science.gov (United States)

    Gibson, L. L.; Bartram, B.; Dattelbaum, D. M.; Sheffield, S. A.; Stahl, D. B.

    2009-12-01

    A Remote Liquid Loading System (RLLS) was designed and tested for loading high-hazard liquid materials into instrumented target cells for gas gun-driven plate impact experiments. These high-hazard liquids tend to react with confining materials in a short period of time, degrading target assemblies and potentially building up pressure through the evolution of gas in the reactions. Therefore, the ability to load a gas gun target immediately prior to gun firing provides the most stable and reliable target fielding approach. We present the design and evaluation of an RLLS built for the LANL two-stage gas gun. The system has been used successfully to interrogate the shock initiation behavior of ~98 wt% hydrogen peroxide (H2O2) solutions, using embedded electromagnetic gauges for measurement of shock wave profiles in situ.

  2. Smart laser scanning sampling head design for image acquisition applications.

    Science.gov (United States)

    Amin, M Junaid; Riza, Nabeel A

    2013-07-10

    A smart laser scanning sampling head design is presented using an electronically controlled variable-focal-length lens to achieve the smallest possible sampling laser spot at target plane distances of up to 8 m. A proof-of-concept experiment is conducted using a 10 mW red 633 nm laser coupled with beam conditioning optics that include an electromagnetically actuated deformable-membrane liquid lens, demonstrating sampling laser spot radii under 1 mm over a target range of 20-800 cm. Applications for the proposed sampling head are diverse and include laser machining and component inspection.
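
    As a rough plausibility check on the reported sub-millimetre spots, paraxial Gaussian-beam optics gives the focused waist when the lens focal length is set to the target distance. The 5 mm input beam radius below is an assumed value, not a parameter from the paper.

```python
import math

def focused_waist(f, w_in, lam=633e-9):
    # waist radius when a collimated Gaussian beam of radius w_in is
    # focused by a thin lens of focal length f (paraxial approximation)
    return lam * f / (math.pi * w_in)

# sweep hypothetical target distances, placing the waist at the target
for dist in (0.2, 2.0, 8.0):
    print(dist, focused_waist(dist, w_in=5e-3))
```

    Even at the 8 m extreme of the stated range, the predicted waist stays below 1 mm for this assumed beam size, consistent with the abstract's claim.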

  3. Aerobic and two-stage anaerobic-aerobic sludge digestion with pure oxygen and air aeration.

    Science.gov (United States)

    Zupancic, Gregor D; Ros, Milenko

    2008-01-01

    The degradability of excess activated sludge from a wastewater treatment plant was studied. The objective was to establish the degree of degradation using either air or pure oxygen at different temperatures. Sludge treated with pure oxygen was degraded at temperatures from 22 degrees C to 50 degrees C, while samples treated with air were degraded between 32 degrees C and 65 degrees C. Using air, sludge is efficiently degraded at 37 degrees C and at 50-55 degrees C. With oxygen, sludge was most effectively degraded at 38 degrees C or at 25-30 degrees C. Two-stage anaerobic-aerobic processes were also studied. The first, anaerobic stage was always operated at 5 days HRT, and the second stage involved aeration with pure oxygen at an HRT between 5 and 10 days. Under these conditions, there is 53.5% VSS removal and 55.4% COD degradation at 15 days HRT (5 days anaerobic, 10 days aerobic). Sludge digested with pure oxygen at 25 degrees C in a batch reactor converted 48% of sludge total Kjeldahl nitrogen to nitrate. Adding an aerobic stage with pure oxygen aeration to the anaerobic digestion enhances ammonium nitrogen removal. In a two-stage anaerobic-aerobic sludge digestion process with 8 days HRT in the aerobic stage, the removal of ammonium nitrogen was 85%.

  4. A new design of groundwater sampling device and its application

    Institute of Scientific and Technical Information of China (English)

    Yih-Jin Tsai; Ming-Ching T.Kuo

    2005-01-01

    Compounds in the atmosphere can contaminate groundwater samples. An inexpensive and simple method for collecting groundwater samples was developed to prevent such contamination when the background concentration of contaminants is high. The new groundwater sampling device consists of a glass sampling bottle with a Teflon-lined valve at each end. A cleaned and dried sampling bottle was connected to a low-flow-rate peristaltic pump with Teflon tubing and filled with water, leaving no headspace volume in the bottle. The sample bottle was then packed in a PVC bag to prevent the target compound from infiltrating into the water sample through the valves. In this study, groundwater was sampled at six wells using both the conventional method and the improved method. Analysis of trichlorofluoromethane (CFC-11) concentrations at these six wells indicates that all groundwater samples obtained by the conventional sampling method were contaminated by CFC-11 from the atmosphere, whereas the improved method largely eliminated the problems of contamination, preservation and quantitative analysis of natural water.

  5. A two-stage series diode for intense large-area moderate pulsed X rays production

    Science.gov (United States)

    Lai, Dingguo; Qiu, Mengtong; Xu, Qifu; Su, Zhaofeng; Li, Mo; Ren, Shuqing; Huang, Zhongliang

    2017-01-01

    This paper presents a method of producing moderate pulsed X rays with a series diode, which can be driven by a high-voltage pulse to generate intense, large-area, uniform sub-100-keV X rays. A two-stage series diode was designed for the Flash-II accelerator and experimentally investigated. A compact support system for the floating converter/cathode was invented; the extra cathode floats electrically and mechanically, with three support pins withdrawn several milliseconds before the diode electrical pulse. A double-ring cathode was developed to improve the surface electric field and emission stability. The cathode radii and diode separation gap were optimized to enhance the uniformity of the X rays and the coincidence of the two diode voltages, based on simulation and theoretical calculation. The experimental results show that the two-stage series diode can work stably at 700 kV and 300 kA; the average energy of the X rays is 86 keV, and the dose is about 296 rad(Si) over a 615 cm2 area with 2:1 uniformity at 5 cm from the last converter. Compared with the single diode, the average X-ray energy decreases from 132 keV to 88 keV, and the proportion of sub-100-keV photons increases from 39% to 69%.

  6. A separate two-stage pulse tube cooler working at liquid helium temperature

    Institute of Scientific and Technical Information of China (English)

    QIU Limin; HE Yonglin; GAN Zhihua; WAN Laihong; CHEN Guobang

    2005-01-01

    A novel 4 K separate two-stage pulse tube cooler (PTC) was designed and tested. The cooler consists of two separate pulse tube coolers, in which the cold end of the first-stage regenerator is thermally connected with the middle part of the second regenerator. Compared to the traditional coupled multi-stage pulse tube cooler, mutual interference between stages can be largely eliminated. The lowest refrigeration temperature obtained at the first-stage pulse tube was 13.8 K, a new record for a single-stage PTC. With a driving mode of two compressors and two rotary valves, the separate two-stage PTC reached a refrigeration temperature of 2.5 K at the second stage. Cooling capacities of 508 mW at 4.2 K and 15 W at 37.5 K were achieved simultaneously. A driving mode with one compressor and one rotary valve has been proposed to further simplify the structure of the separate-type PTC.

  7. Simultaneous bile duct and portal venous branch ligation in two-stage hepatectomy

    Institute of Scientific and Technical Information of China (English)

    Hiroya Iida; Chiaki Yasui; Tsukasa Aihara; Shinichi Ikuta; Hidenori Yoshie; Naoki Yamanaka

    2011-01-01

    Hepatectomy is an effective surgical treatment for multiple bilobar liver metastases from colon cancer; however, one of the primary obstacles to completing surgical resection for these cases is an insufficient volume of the future remnant liver, which may cause postoperative liver failure. To induce atrophy of the unilateral lobe and hypertrophy of the future remnant liver, procedures to occlude the portal vein have been conventionally used prior to major hepatectomy. We report a case of a 50-year-old woman in whom two-stage hepatectomy was performed in combination with intraoperative ligation of the portal vein and the bile duct of the right hepatic lobe. This procedure was designed to promote the atrophic effect on the right hepatic lobe more effectively than the conventional technique, and to the best of our knowledge, it was used for the first time in the present case. Despite successful induction of liver volume shift as well as the following procedure, the patient died of subsequent liver failure after developing recurrent tumors. We discuss the first case in which simultaneous ligation of the portal vein and the biliary system was successfully applied as part of the first step of two-stage hepatectomy.

  8. Development and optimization of a two-stage gasifier for heat and power production

    Science.gov (United States)

    Kosov, V. V.; Zaichenko, V. M.

    2016-11-01

    The major methods of biomass thermal conversion are combustion in excess oxygen, gasification in reduced oxygen, and pyrolysis in the absence of oxygen; their end products are heat, gas, and liquid and solid fuels. From the point of view of energy production, none of these methods alone can be considered optimal. A two-stage thermal conversion of biomass, with pyrolysis as the first stage and cracking of the pyrolysis products as the second stage, can be considered the optimal method for energy production: it yields synthesis gas consisting of hydrogen and carbon monoxide and containing no liquid or solid particles. On the basis of this two-stage cracking technology, an experimental power plant with an electric power of up to 50 kW was designed. The power plant consists of a thermal conversion module and a gas-engine power generator adapted for operation on syngas. The purposes of this work were to determine the optimal operating temperature of the thermal conversion module and the optimal mass ratio of processed biomass to charcoal in the cracking chamber of the thermal conversion module. Experiments on cracking the pyrolysis products at various temperatures show that the optimum cracking temperature is 1000 °C. From measurements of the gas volume produced at different mass ratios of charcoal to processed wood biomass, it follows that the maximum gas volume is obtained at a mass ratio in the range 0.5-0.6.

  9. Numerical simulation of municipal solid waste combustion in a novel two-stage reciprocating incinerator.

    Science.gov (United States)

    Huai, X L; Xu, W L; Qu, Z Y; Li, Z G; Zhang, F P; Xiang, G M; Zhu, S Y; Chen, G

    2008-01-01

    A mathematical model was presented in this paper for the combustion of municipal solid waste in a novel two-stage reciprocating grate furnace. Numerical simulations were performed to predict the temperature, flow and species distributions in the furnace, with practical operational conditions taken into account. The calculated results agree well with the test data, and the burning behavior of municipal solid waste in the novel two-stage reciprocating incinerator is demonstrated well. The thickness of the waste bed, the initial moisture content, the excess air coefficient and the secondary air are the major factors that influence the combustion process. If the initial moisture content of the waste is high, both the heat value of the waste and the temperature inside the incinerator are low, and less oxygen is necessary for combustion. The air supply rate and the primary air distribution along the grate should be adjusted according to the initial moisture content of the waste. A reasonable bed thickness and an adequate excess air coefficient can maintain a higher temperature, promote the burnout of combustibles, and consequently reduce the emission of dioxin pollutants. When the total air supply is constant, reducing primary air and introducing secondary air properly can enhance turbulence and mixing, prolong the residence time of flue gas, and promote the complete combustion of combustibles. This study provides an important reference for optimizing the design and operation of municipal solid waste furnaces.

  10. Configuration Consideration for Expander in Transcritical Carbon Dioxide Two-Stage Compression Cycle

    Institute of Scientific and Technical Information of China (English)

    MA Yitai; YANG Junlan; GUAN Haiqing; LI Minxia

    2005-01-01

    To investigate expander configuration in the transcritical carbon dioxide two-stage compression cycle, the best place in the cycle to reinvest the recovered expansion work must be identified so as to improve system efficiency. The expander and the compressor are connected to the same shaft and integrated into one unit, with the latter driven by the former, so that transfer and leakage losses can be greatly decreased. In such systems the expander can be coupled either with the first-stage compressor (the DCDL cycle) or with the second-stage compressor (the DCDH cycle), and the two configurations perform differently. By setting up theoretical models of both expander configurations in the transcritical carbon dioxide two-stage compression cycle, the first and second laws of thermodynamics are used to analyze the coefficient of performance, exergy efficiency, inter-stage pressure, discharge temperature and exergy losses of each component for the two cycles. The model results show that the DCDH cycle performs better than the DCDL cycle. The analysis provides a theoretical basis for practical design and operation.

  11. Two-stage coordination multi-radio multi-channel mac protocol for wireless mesh networks

    CERN Document Server

    Zhao, Bingxuan

    2011-01-01

    Within a wireless mesh network, a bottleneck arises as the number of concurrent traffic flows (NCTF) increases over a single common control channel, as in most conventional networks. To alleviate this problem, this paper proposes a two-stage coordination multi-radio multi-channel MAC (TSC-M2MAC) protocol that designates all available channels as both control channels and data channels in a time-division manner through a two-stage coordination. At the first stage, a load-balancing breadth-first-search-based vertex coloring algorithm for the multi-radio conflict graph is proposed to intelligently allocate multiple control channels. At the second stage, a REQ/ACK/RES mechanism is proposed to realize dynamic channel allocation for data transmission. At this stage, the Channel-and-Radio Utilization Structure (CRUS) maintained by each node is able to alleviate the hidden-node problem; also, the proposed adaptive adjustment algorithm for the Channel Negotiation and Allocation (CNA) sub-interval is ab...
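
    The first-stage allocation can be illustrated with a plain greedy coloring in breadth-first order over the conflict graph. This is a simplified stand-in for the load-balancing algorithm of the paper, and the small conflict graph below is hypothetical; the graph is assumed connected from the start node.

```python
from collections import deque

def bfs_vertex_coloring(adj, start):
    # greedy coloring in BFS order: adjacent nodes in the conflict graph
    # must not share a control channel (color = channel index)
    color = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in color:
                used = {color[w] for w in adj[v] if w in color}
                c = 0
                while c in used:   # smallest channel unused by neighbors
                    c += 1
                color[v] = c
                queue.append(v)
    return color

# hypothetical multi-radio conflict graph (adjacency lists)
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
colors = bfs_vertex_coloring(adj, 0)
# proper coloring: no conflicting pair shares a control channel
assert all(colors[u] != colors[v] for u in adj for v in adj[u])
```

    Node 3 reuses node 0's channel because they do not conflict, which is the channel-reuse effect the multi-channel design exploits.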

  12. Experimental Design for the INL Sample Collection Operational Test

    Energy Technology Data Exchange (ETDEWEB)

    Amidan, Brett G.; Piepel, Gregory F.; Matzke, Brett D.; Filliben, James J.; Jones, Barbara

    2007-12-13

    This document describes the test events and numbers of samples comprising the experimental design that was developed for the contamination, decontamination, and sampling of a building at the Idaho National Laboratory (INL). This study is referred to as the INL Sample Collection Operational Test. Specific objectives were developed to guide the construction of the experimental design. The main objective is to assess the relative abilities of judgmental and probabilistic sampling strategies to detect contamination in individual rooms or on a whole floor of the INL building. A second objective is to assess the use of probabilistic and Bayesian (judgmental + probabilistic) sampling strategies to make clearance statements of the form “X% confidence that at least Y% of a room (or floor of the building) is not contaminated.” The experimental design described in this report includes five test events. The test events (i) vary the floor of the building on which the contaminant will be released, (ii) provide for varying or adjusting the concentration of contaminant released to obtain the ideal concentration gradient across a floor of the building, and (iii) investigate overt as well as covert release of contaminants. The ideal contaminant gradient would have high concentrations of contaminant in rooms near the release point, with concentrations decreasing to zero in rooms at the opposite end of the building floor. For each of the five test events, the specified floor of the INL building will be contaminated with BG, a stand-in for Bacillus anthracis. The BG contaminant will be disseminated from a point-release device located in the room specified in the experimental design for each test event. Then judgmental and probabilistic samples will be collected according to the pre-specified sampling plan. Judgmental samples will be selected based on professional judgment and prior information. Probabilistic samples will be selected in sufficient numbers to provide desired confidence
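
    For clearance statements of the “X%/Y%” form, the required number of all-negative probabilistic samples follows from the standard acceptance-sampling bound. This is a textbook result under random sampling, not a formula taken from the report itself.

```python
import math

def clearance_sample_size(confidence, coverage):
    # all-negative samples needed to claim, with the given confidence,
    # that at least `coverage` fraction of the area is not contaminated
    # (standard acceptance-sampling bound; assumes random sampling)
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

# e.g. "95% confidence that at least 99% of the floor is not contaminated"
n = clearance_sample_size(0.95, 0.99)  # -> 299
```

    Relaxing the coverage target shrinks the sample burden quickly: `clearance_sample_size(0.95, 0.95)` needs only 59 samples.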

  13. Economic design of VSI GCCC charts for correlated samples

    Directory of Open Access Journals (Sweden)

    Y K Chen

    2013-08-01

    Generalised cumulative count of conforming (GCCC) charts have been proposed for monitoring a high-yield process that allows items to be inspected sample by sample rather than in production order. A recent study has shown that the GCCC chart with a variable sampling interval (VSI) is superior to the traditional chart with a fixed sampling interval (FSI) because of the additional flexibility of sampling interval it offers. However, the VSI chart is still costly when used for the prevention of defective products. This paper presents an economic model for the design problem of the VSI GCCC chart, taking into account the correlation of the production outputs within the same sample. In the economic design, a cost function is developed that includes the cost of sampling and inspection, the cost of false alarms, the cost of detecting and removing the assignable cause, and the cost incurred when the process is out of control. An evolutionary search method using the cost function is presented for finding the optimal design parameters of the VSI GCCC chart. Comparisons between VSI and FSI charts for expected cost per unit time are also made for various process and cost parameters.
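
    The shape of such an economic-design optimization can be sketched as follows. The cost terms, parameter ranges, and process constants below are invented placeholders (they do not reproduce the paper's cost function), and a plain random search stands in for the evolutionary method.

```python
import random

def hourly_cost(n, h1, h2, p_false,
                c_sample=1.0, c_false=50.0, c_repair=200.0,
                c_ooc=100.0, lam=0.01):
    # toy expected cost per unit time for a VSI chart design:
    # sampling/inspection + false alarms + repair + out-of-control loss
    cycle = 1.0 / lam + h2 * n          # illustrative cycle length
    return (c_sample * n / h1           # sampling and inspection rate
            + c_false * p_false / h1    # false-alarm cost rate
            + (c_repair + c_ooc * h2 * n) / cycle)

random.seed(0)
best = None
for _ in range(2000):                   # crude random search standing in
    n = random.randint(1, 10)           # for the evolutionary optimizer
    h1 = random.uniform(0.5, 8.0)       # long (in-control) interval
    h2 = random.uniform(0.1, h1)        # short (warning) interval
    cost = hourly_cost(n, h1, h2, p_false=0.0027)
    if best is None or cost < best[0]:
        best = (cost, n, h1, h2)
print(best)
```

    A real economic design would replace `hourly_cost` with the paper's correlated-sample cost model and constrain the search to statistically admissible designs.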

  14. Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys.

    Science.gov (United States)

    Hund, Lauren; Bedrick, Edward J; Pagano, Marcello

    2015-01-01

    Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis.
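
    For the designs where clustering can be ignored, the binomial decision rule mentioned above reduces to a small search over thresholds. The sketch below uses the classic 19-sample LQAS design as a check; the 80%/50% coverage thresholds and 10% risk levels are conventional illustration values, not taken from this paper.

```python
from math import comb

def binom_cdf(k, n, p):
    # P(X <= k) for X ~ Binomial(n, p); empty sum for k < 0
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_rule(n, p_hi, p_lo, alpha=0.10, beta=0.10):
    # smallest acceptance threshold d ("accept if >= d successes in n")
    # keeping P(reject | p_hi) <= alpha and P(accept | p_lo) <= beta;
    # returns None if no threshold works for this n
    for d in range(n + 1):
        rej_hi = binom_cdf(d - 1, n, p_hi)
        acc_lo = 1 - binom_cdf(d - 1, n, p_lo)
        if rej_hi <= alpha and acc_lo <= beta:
            return d
    return None

d = lqas_rule(19, 0.8, 0.5)  # the classic 19-sample design
```

    The cluster LQAS methods compared in the paper modify this calculation by replacing the binomial with a distribution that accommodates within-cluster correlation, which is what drives the sample-size differences the authors report.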

  15. Latent spatial models and sampling design for landscape genetics

    Science.gov (United States)

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  16. A Designated Harmonic Suppression Technology for Sampled SPWM

    Institute of Scientific and Technical Information of China (English)

    YANG Ping

    2005-01-01

    Sampled SPWM is an excellent VVVF method for motor speed control, but the harmonic components of the output wave impair its practical applications. A designated harmonic suppression technology is presented for sampled SPWM; it is an improved algorithm for harmonic suppression in the high-voltage, high-frequency spectrum. As the technology applies over the whole speed-adjusting range, the voltage can be conveniently controlled and the high-frequency harmonics of SPWM are also improved.

  17. Design of a Mars rover and sample return mission

    Science.gov (United States)

    Bourke, Roger D.; Kwok, Johnny H.; Friedlander, Alan

    1990-01-01

    The design of a Mars Rover Sample Return (MRSR) mission that satisfies scientific and human exploration precursor needs is described. Elements included in the design are an imaging rover that finds and certifies safe landing sites and maps rover traverse routes, a rover that operates on the surface with an associated lander for delivery, and a Mars communications orbiter that allows full-time contact with surface elements. A graph of MRSR candidate launch vehicle performance is presented.

  18. Convolution kernel design and efficient algorithm for sampling density correction.

    Science.gov (United States)

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, attempting to minimize the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with those from various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
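
    The convolution-based weighting can be illustrated in 1-D with the well-known iterative scheme w ← w / (w ∗ K). The Gaussian kernel width, the iteration count, and the toy quadratic trajectory below are stand-ins for the kernel actually designed in the paper.

```python
import numpy as np

def density_compensation(kx, sigma=0.08, iters=30):
    # iterative convolution-based density compensation: w <- w / (K @ w),
    # driving the kernel-smoothed weighted density toward a constant
    K = np.exp(-((kx[:, None] - kx[None, :]) ** 2) / (2.0 * sigma**2))
    w = np.ones_like(kx)
    for _ in range(iters):
        w = w / (K @ w)
    return w

# toy non-uniform trajectory: samples denser near the k-space center
x = np.linspace(-1.0, 1.0, 41)
kx = np.sign(x) * x**2
w = density_compensation(kx)
```

    As expected, the densely sampled center receives small weights and the sparse edges receive large ones; the paper's contribution is choosing `K` so that the final reconstruction error, not just the local density, is minimized.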

  19. Optimized design and analysis of sparse-sampling FMRI experiments.

    Science.gov (United States)

    Perrachione, Tyler K; Ghosh, Satrajit S

    2013-01-01

    Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase
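
    The interplay of TR delay, stimulation rate, and an HRF-convolved model can be sketched as below. The double-gamma shape parameters are conventional defaults, and the 10 s TR (2 s acquisition plus silent gap) and the three stimulus onsets per trial are invented illustration values, not the paper's settings.

```python
import math
import numpy as np

def gamma_pdf(t, a):
    # unit-scale gamma density, used to build the double-gamma HRF
    return np.where(t > 0, t ** (a - 1) * np.exp(-t) / math.gamma(a), 0.0)

def hrf(t):
    # canonical double-gamma hemodynamic response (conventional defaults)
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

dt = 0.1
t = np.arange(0.0, 400.0, dt)          # 40 sparse TRs of 10 s each

# three stimuli per silent gap (onsets at 2, 4, 6 s into each TR)
onsets = [trial * 10 + s for trial in range(40) for s in (2, 4, 6)]
stim = np.zeros_like(t)
stim[np.round(np.array(onsets) / dt).astype(int)] = 1.0

# physiologically informed model: stimulus train convolved with the HRF
pred = np.convolve(stim, hrf(np.arange(0.0, 32.0, dt)))[: t.size] * dt

# the scanner only samples this prediction once per TR (volume acquisition)
vol_idx = np.round(np.arange(0.0, 400.0, 10.0) / dt).astype(int)
vols = pred[vol_idx]
```

    Shifting the onsets or shortening the TR changes where on the convolved response each volume lands, which is exactly the TR-delay-by-stimulation-rate interaction the simulations in the report explore.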

  20. Optimal Design and Purposeful Sampling: Complementary Methodologies for Implementation Research.

    Science.gov (United States)

    Duan, Naihua; Bhaumik, Dulal K; Palinkas, Lawrence A; Hoagwood, Kimberly

    2015-09-01

    Optimal design has been an under-utilized methodology. However, it has significant real-world applications, particularly in mixed methods implementation research. We review the concept and demonstrate how it can be used to assess the sensitivity of design decisions and balance competing needs. For observational studies, this methodology enables selection of the most informative study units. For experimental studies, it entails selecting and assigning study units to intervention conditions in the most informative manner. We blend optimal design methods with purposeful sampling to show how these two concepts balance competing needs when there are multiple study aims, a common situation in implementation research.

  1. Two Stage Battery System for the ROSETTA Lander

    Science.gov (United States)

    Debus, André

    2002-01-01

    The ROSETTA mission, led by ESA, will be launched by Ariane V from Kourou in January 2003; after a long trip, the spacecraft will reach the comet Wirtanen 46P in 2011. The mission includes a lander, built under the leadership of DLR, to which CNES contributes a large part of the payload and several lander systems. Among these, CNES delivers a specific battery system designed to comply with the mission environment and scenario, in particular avoiding the radio-isotopic heaters and radio-isotopic electrical generators usually used for such missions far from the Sun. The battery system includes: a pack of primary lithium/thionyl-chloride cells; a secondary stage of rechargeable lithium-ion cells, used as redundancy; a specific electronic system dedicated to battery handling and to the secondary battery; and mechanical and thermal (insulation and heating devices) structures. The complete battery system has been designed, built and qualified to comply with the trip and mission requirements, within low mass and low volume limits. It is presently integrated into the ROSETTA lander flight model and will leave the Earth at the beginning of next year. This development and experience could be reused in the frame of future cometary and planetary missions.

  2. Experimental and modeling study of a two-stage pilot scale high solid anaerobic digester system.

    Science.gov (United States)

    Yu, Liang; Zhao, Quanbao; Ma, Jingwei; Frear, Craig; Chen, Shulin

    2012-11-01

    This study established a comprehensive model to configure a new two-stage high solid anaerobic digester (HSAD) system designed for the highly degradable organic fraction of municipal solid wastes (OFMSW). The HSAD reactor as the first stage was naturally separated into two zones due to biogas floatation and the low specific gravity of the solid waste. The solid waste was retained in the upper zone while only the liquid leachate resided in the lower zone of the HSAD reactor. Continuous stirred-tank reactor (CSTR) and advective-diffusive reactor (ADR) models were constructed in series to describe the whole system. Anaerobic digestion model No. 1 (ADM1) was used as the reaction kinetics and incorporated into each reactor module. Compared with the experimental data, the simulation results indicated that the model predicted the pH, volatile fatty acid (VFA) concentration and biogas production well.
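
    A drastically simplified stand-in for the series reactor structure is a two-zone mass balance with first-order rates. The rate constants, the 100-unit initial COD load, and the Euler integration are invented illustration choices; the actual model couples full ADM1 kinetics into CSTR and ADR modules.

```python
def simulate(days=30.0, dt=0.01, k=0.3, q=0.2):
    # two-zone series sketch of the HSAD reactor: first-order hydrolysis
    # (rate k, 1/day) moves COD from the upper solid zone S into the lower
    # leachate zone L; methanogenesis (rate q) converts L into biogas
    S, L, gas, t = 100.0, 0.0, 0.0, 0.0
    while t < days:
        hyd = k * S * dt       # solids hydrolyzed this step
        met = q * L * dt       # COD converted to biogas this step
        S -= hyd
        L += hyd - met
        gas += met
        t += dt
    return S, L, gas
```

    Because each step only moves mass between the three pools, total COD is conserved, which is the basic sanity check any such compartment model must pass before richer kinetics are added.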

  3. Study of a two-stage photobase generator for photolithography in microelectronics.

    Science.gov (United States)

    Turro, Nicholas J; Li, Yongjun; Jockusch, Steffen; Hagiwara, Yuji; Okazaki, Masahiro; Mesch, Ryan A; Schuster, David I; Willson, C Grant

    2013-03-01

    The investigation of the photochemistry of a two-stage photobase generator (PBG) is described. Absorption of a photon by a latent PBG (1) (first step) produces a PBG (2). Irradiation of 2 in the presence of water produces a base (second step). This two-photon sequence (1 + hν → 2 + hν → base) is an important component in the design of photoresists for pitch division technology, a method that doubles the resolution of projection photolithography for the production of microelectronic chips. In the present system, excitation of 1 results in a Norrish type II intramolecular hydrogen abstraction to generate a 1,4-biradical that undergoes cleavage to form 2 and acetophenone (Φ ∼ 0.04). In the second step, excitation of 2 causes cleavage of the oxime ester (Φ = 0.56), followed by base generation after reaction with water.

  4. STOCHASTIC DISCRETE MODEL OF TWO-STAGE ISOLATION SYSTEM WITH RIGID LIMITERS

    Institute of Scientific and Technical Information of China (English)

    HE Hua; FENG Qi; SHEN Rong-ying; WANG Yu

    2006-01-01

    The possible intermittent impacts of a two-stage isolation system with rigid limiters are investigated. The isolation system is under periodic external excitation disturbed by small stationary Gaussian white noise after a shock. For the period after the shock, the zero-order approximate stochastic discrete model and the first-order approximate stochastic model are developed. A real isolation system for an MTU diesel engine is used to evaluate the established models. Based on the numerical example, the effects of noise excitation on the isolation system are discussed. The results show that the behavior of the system is complicated by the intermittent impacts, that the difference between the zero-order and first-order models may be large, and that the effect of even small noise is obvious. The results should be useful to naval designers.

  5. Two-stage reflective optical system for achromatic 10 nm x-ray focusing

    Science.gov (United States)

    Motoyama, Hiroto; Mimura, Hidekazu

    2015-12-01

    Recently, coherent x-ray sources have promoted the development of optical systems for focusing, imaging, and interferometry. In this paper, we propose a two-stage focusing optical system with the goal of achromatically focusing pulses from an x-ray free-electron laser (XFEL) to a focal width of 10 nm. In this optical system, the x-ray beam is expanded by a grazing-incidence aspheric mirror and then focused by a mirror shaped as a solid of revolution. We describe the design procedure and discuss the theoretical focusing performance. In theory, soft-XFEL light can be focused to a 10 nm spot without chromatic aberration and with high reflectivity, creating an unprecedented power density of 10^20 W cm^-2 in the soft-x-ray range.

  6. Two-Stage Approach for Protein Superfamily Classification

    Directory of Open Access Journals (Sweden)

    Swati Vipsita

    2013-01-01

    Full Text Available We deal with the problem of protein superfamily classification, in which the family membership of a newly discovered amino acid sequence is predicted. Correct prediction is of great concern to researchers and drug analysts, as it aids the discovery of new drugs. Since this problem falls broadly under the category of pattern classification, we optimize feature extraction in the first stage and classifier design in the second stage, with the overall objective of maximizing classification accuracy. In the feature extraction phase, a Genetic Algorithm (GA)-based wrapper approach is used to select a few eigenvectors from the principal component analysis (PCA) space, encoded as binary strings in the chromosome. Based on the positions of 1's in the chromosome, eigenvectors are selected to build the transformation matrix, which maps the original high-dimensional feature space to a lower-dimensional one. Using PCA-NSGA-II (non-dominated sorting GA), the non-dominated solutions obtained from the Pareto front resolve the trade-off between the number of eigenvectors selected and the accuracy obtained by the classifier. In the second stage, a recursive orthogonal least squares algorithm (ROLSA) is used to train a radial basis function network (RBFN), selecting an optimal number of hidden centres and updating the output-layer weight matrix. This approach can be applied to large data sets with much lower computer memory requirements, yielding very small architectures with few hidden centres and a high level of classification accuracy.
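The chromosome-driven eigenvector selection described above can be illustrated with a small sketch. The helper names (`pca_eigenvectors`, `project`) and the example chromosome are hypothetical; the GA fitness loop and the RBFN classifier are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_eigenvectors(X):
    # Principal axes of the centered data, columns sorted by
    # descending eigenvalue (variance explained).
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]
    return vecs[:, order]

def project(X, eigvecs, chromosome):
    # Keep only the eigenvectors flagged by 1's in the binary chromosome,
    # then map the data into the reduced space.
    selected = eigvecs[:, np.flatnonzero(chromosome)]
    return (X - X.mean(axis=0)) @ selected

X = rng.normal(size=(50, 6))           # toy data: 50 sequences, 6 features
V = pca_eigenvectors(X)
chrom = np.array([1, 0, 1, 1, 0, 0])   # one hypothetical GA individual
Z = project(X, V, chrom)               # 50 x 3 reduced feature matrix
```

A GA wrapper would evaluate each chromosome by training the classifier on `Z` and scoring its accuracy, which is the trade-off the NSGA-II front explores.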

  7. Mars Science Laboratory Sample Acquisition, Sample Processing and Handling: Subsystem Design and Test Challenges

    Science.gov (United States)

    Jandura, Louise

    2010-01-01

    The Sample Acquisition/Sample Processing and Handling (SA/SPaH) subsystem for the Mars Science Laboratory is a highly mechanized, rover-based sampling system that acquires powdered rock and regolith samples from the Martian surface, sorts the samples into fine particles through sieving, and delivers small portions of the powder to two science instruments inside the rover. SA/SPaH utilizes 17 actuated degrees of freedom to perform the functions needed to produce 5 sample pathways in support of the scientific investigation on Mars. Both hardware redundancy and functional redundancy are employed in configuring this sampling system, so some functionality is retained even with the loss of a degree of freedom. Intentional dynamic environments are created to move sample, while vibration isolators attenuate this environment at the sensitive instruments located near the dynamic sources. In addition to the typical flight hardware qualification test program, two additional types of testing are essential for this kind of sampling system: characterization of the intentionally created dynamic environment, and testing of the sample acquisition and processing hardware functions using Mars analog materials in a low-pressure environment. The overall subsystem design and configuration are discussed, along with some of the challenges, tradeoffs, and lessons learned in the areas of fault tolerance, intentional dynamic environments, and special testing.

  8. The OSIRIS-REx Asteroid Sample Return Mission Operations Design

    Science.gov (United States)

    Gal-Edd, Jonathan S.; Cheuvront, Allan

    2015-01-01

    OSIRIS-REx is an acronym that captures the scientific objectives: Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer. OSIRIS-REx will thoroughly characterize near-Earth asteroid (101955) Bennu (previously known as 1999 RQ36). The OSIRIS-REx Asteroid Sample Return Mission delivers its science using five instruments and radio science along with the Touch-And-Go Sample Acquisition Mechanism (TAGSAM). All of the instruments and data analysis techniques have direct heritage from flown planetary missions. The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the Sample Return Capsule (SRC) lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations considerations. The Design Reference Mission (DRM) ties together spacecraft, instrument, and operations scenarios. Asteroid Touch and Go (TAG) has various options, ranging from ground-only to fully automated (natural feature tracking). Spacecraft constraints such as thermal limits and high-gain antenna pointing impact the timeline. The mission is sensitive to navigation errors, so a late command update has been implemented. The project implemented lessons learned from other "small body" missions; the key lesson learned was to 'expect the unexpected' and to implement planning tools early in the lifecycle.

  9. Computer-methodology for designing pest sampling and monitoring programs

    NARCIS (Netherlands)

    Werf, van der W.; Nyrop, J.P.; Binns, M.R.; Kovach, J.

    1999-01-01

    This paper evaluates two distinct enterprises: (1) an ongoing attempt to produce an introductory book plus accompanying software tools on sampling and monitoring in pest management; and (2) application of the modelling approaches discussed in that book to the design of monitoring methods for

  10. Mixed Estimation for a Forest Survey Sample Design

    Science.gov (United States)

    Francis A. Roesch

    1999-01-01

    Three methods of estimating the current state of forest attributes over small areas for the USDA Forest Service Southern Research Station's annual forest sampling design are compared. The three methods were (I) simple moving average, (II) single imputation of plot data that had been updated by externally developed models, and (III) local application of a global...

  11. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    Science.gov (United States)

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between
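The resampling and species accumulation analysis described above can be sketched in miniature. `species_accumulation` is a hypothetical helper (not the authors' code) that averages cumulative richness over random orderings of sampling nights:

```python
import random

def species_accumulation(night_records, n_perm=200, seed=1):
    """night_records: list of sets, the species caught per sampling night.
    Returns the mean cumulative species richness after 1..N nights,
    averaged over random night orderings (a species accumulation curve)."""
    rng = random.Random(seed)
    n = len(night_records)
    totals = [0.0] * n
    for _ in range(n_perm):
        order = night_records[:]
        rng.shuffle(order)          # one random re-ordering of nights
        seen = set()
        for i, rec in enumerate(order):
            seen |= rec             # accumulate species observed so far
            totals[i] += len(seen)
    return [t / n_perm for t in totals]
```

Comparing such curves for full-night versus six-hour subsampling schedules is one way to quantify the representativeness trade-off the study discusses.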

  13. A Sample Handling System for Mars Sample Return - Design and Status

    Science.gov (United States)

    Allouis, E.; Renouf, I.; Deridder, M.; Vrancken, D.; Gelmi, R.; Re, E.

    2009-04-01

    A mission to return atmosphere and soil samples from Mars is highly desired by planetary scientists from around the world, and space agencies are preparing for the launch of a sample return mission in the 2020 timeframe. Such a mission would return approximately 500 grams of atmosphere, rock, and soil samples to Earth by 2025. Development of a wide range of new technology will be critical to the successful implementation of such a challenging mission. Technical developments required to realise the mission include guided atmospheric entry, soft landing, sample handling robotics, biological sealing, Mars atmospheric ascent, sample rendezvous and capture, and Earth return. The European Space Agency has been performing system definition studies along with numerous technology development studies under the framework of the Aurora programme. Within the scope of these activities, Astrium has been responsible for defining an overall sample handling architecture in collaboration with European partners (sample acquisition and sample capture, Galileo Avionica; sample containment and automated bio-sealing, Verhaert). Our work has focused on the definition and development of the robotic systems required to move the sample through the transfer chain. This paper presents the Astrium team's high-level design for the surface transfer system and the orbiter transfer system. The surface transfer system is envisaged to use two robotic arms of different sizes to allow flexible operations and to enable sample transfer over relatively large distances (~2 to 3 metres): the first to deploy/retract the Drill Assembly used for sample collection, the second to transfer the Sample Container (the vessel containing all the collected samples) from the Drill Assembly to the Mars Ascent Vehicle (MAV). The sample transfer actuator also features a complex end-effector for handling the Sample Container. The orbiter transfer system will transfer the Sample Container from the capture

  14. A unit cost adjusting heuristic algorithm for the integrated planning and scheduling of a two-stage supply chain

    Directory of Open Access Journals (Sweden)

    Jianhua Wang

    2014-10-01

    Full Text Available Purpose: The stable one-supplier-one-customer relationship is gradually being replaced by a dynamic multi-supplier-multi-customer relationship in the current market, and efficient scheduling techniques are important tools for establishing such dynamic supply chain relationships. This paper studies the integrated planning and scheduling problem of a two-stage supply chain with multiple manufacturers and multiple retailers, whose manufacturers have different production capacities, holding and production cost rates, and transportation costs to retailers, with the objective of minimizing the supply chain operating cost. Design/methodology/approach: As a complex task allocation and scheduling problem, it is formulated as an INLP model and solved with a Unit Cost Adjusting (UCA) heuristic algorithm that adjusts the suppliers' supply quantities step by step according to their unit costs. Findings: A comparison between the UCA heuristic and the Lingo solver on a range of numerical experiments shows that the INLP model and the UCA algorithm obtain a near-optimal solution of the two-stage supply chain planning and scheduling problem within very short CPU time. Research limitations/implications: The proposed UCA heuristic can easily help managers optimize two-stage supply chain scheduling problems that do not include order delivery times and batches; since two-stage supply chains are the most common form of actual commercial relationships, modifications of the UCA heuristic should be able to optimize integrated planning and scheduling problems with more realistic constraints. Originality/value: This research proposes an innovative UCA heuristic for the integrated planning and scheduling problem of two-stage supply chains under supplier production capacity and order delivery time constraints, and has a great
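The step-by-step unit-cost adjustment can be illustrated with a much simpler greedy sketch. `allocate_by_unit_cost` is a hypothetical simplification that ignores scheduling, holding costs, and delivery batches; it only captures the core idea of shifting quantity toward cheaper suppliers within capacity:

```python
def allocate_by_unit_cost(demand, capacity, unit_cost):
    """Greedy unit-cost allocation: each retailer's demand is filled from
    the cheapest suppliers first, within remaining supplier capacity.
    demand: {retailer: qty}; capacity: {supplier: qty};
    unit_cost: {supplier: {retailer: cost}}."""
    remaining = dict(capacity)
    plan = {}  # (supplier, retailer) -> shipped quantity
    for retailer, qty in demand.items():
        # suppliers ordered by their unit cost of serving this retailer
        for s in sorted(remaining, key=lambda s: unit_cost[s][retailer]):
            if qty == 0:
                break
            take = min(qty, remaining[s])
            if take > 0:
                plan[(s, retailer)] = take
                remaining[s] -= take
                qty -= take
        if qty > 0:
            raise ValueError(f"insufficient capacity for retailer {retailer}")
    return plan
```

The paper's UCA heuristic additionally re-adjusts quantities iteratively as unit costs change with load; the sketch above is only the first allocation pass.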

  15. Loss Function Based Ranking in Two-Stage, Hierarchical Models

    Science.gov (United States)

    Lin, Rongheng; Louis, Thomas A.; Paddock, Susan M.; Ridgeway, Greg

    2009-01-01

    Performance evaluation of health services providers is burgeoning. Similarly, analysis of spatially related health information, ranking of teachers and schools, and identification of differentially expressed genes are increasing in prevalence and importance. Goals include valid and efficient ranking of units for profiling and league tables; identification of excellent and poor performers and of the most differentially expressed genes; and determining "exceedances" (how many, and which, unit-specific true parameters exceed a threshold). These data and inferential goals require a hierarchical Bayesian model that accounts for nesting relations and identifies both population values and random effects for unit-specific parameters. Furthermore, the Bayesian approach coupled with optimizing a loss function provides a framework for computing non-standard inferences such as ranks and histograms. Estimated ranks that minimize Squared Error Loss (SEL) between the true and estimated ranks have been investigated. The posterior mean ranks minimize SEL and are "general purpose," relevant to a broad spectrum of ranking goals. However, other loss functions, and optimizing ranks tuned to application-specific goals, require identification and evaluation. For example, when the goal is to identify the relatively good (e.g., in the upper 10%) or relatively poor performers, a loss function that penalizes classification errors produces estimates that minimize the error rate. We construct loss functions that address this and other goals, developing a unified framework that facilitates generating candidate estimates, comparing approaches, and producing data-analytic performance summaries. We compare performance for a fully parametric hierarchical model with Gaussian sampling distribution under Gaussian and mixture-of-Gaussians prior distributions. We illustrate the approaches via analysis of standardized mortality ratio data from the United States Renal Data System. Results show that SEL
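The SEL-optimal ranking described above (posterior mean ranks) can be computed directly from MCMC draws. `sel_optimal_ranks` is a hypothetical helper, not code from the paper:

```python
import numpy as np

def sel_optimal_ranks(posterior_draws):
    """posterior_draws: (n_draws, n_units) array of posterior samples of
    the unit-specific parameters. Rank the units within each draw; the
    posterior mean of those ranks minimizes squared-error loss on the
    ranks. Also return integer ranks of the posterior mean ranks."""
    draws = np.asarray(posterior_draws)
    # rank within each draw: 1 = smallest parameter value
    per_draw_ranks = draws.argsort(axis=1).argsort(axis=1) + 1
    mean_ranks = per_draw_ranks.mean(axis=0)
    return mean_ranks, mean_ranks.argsort().argsort() + 1
```

Loss functions tuned to other goals (e.g., penalizing above/below-threshold misclassification) would replace the mean with a different summary of `per_draw_ranks`.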

  16. Secondary Analysis under Cohort Sampling Designs Using Conditional Likelihood

    Directory of Open Access Journals (Sweden)

    Olli Saarela

    2012-01-01

    Full Text Available Under cohort sampling designs, additional covariate data are collected on cases of a specific type and a randomly selected subset of noncases, primarily for the purpose of studying associations with a time-to-event response of interest. With such data available, an interest may arise to reuse them for studying associations between the additional covariate data and a secondary non-time-to-event response variable, usually collected for the whole study cohort at the outset of the study. Following earlier literature, we refer to such a situation as secondary analysis. We outline a general conditional likelihood approach for secondary analysis under cohort sampling designs and discuss the specific situations of case-cohort and nested case-control designs. We also review alternative methods based on full likelihood and inverse probability weighting. We compare the alternative methods for secondary analysis in two simulated settings and apply them in a real-data example.

  17. High speed sampling circuit design for pulse laser ranging

    Science.gov (United States)

    Qian, Rui-hai; Gao, Xuan-yi; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Guo, Xiao-kang; He, Shi-jie

    2016-10-01

    In recent years, with the rapid development of digital chips, high-sampling-rate analog-to-digital converters can be used to sample narrow laser pulse echoes, and high-speed processors are widely applied to digital laser echo signal processing algorithms. These developments have greatly improved laser ranging detection accuracy, and high-speed sampling and processing circuits for laser ranging detection systems have gradually become a research hotspot. In this paper, a pulse laser echo data logging and digital signal processing circuit system based on high-speed sampling is studied. The circuit consists of two parts: the pulse laser echo data processing circuit and the data transmission circuit. The pulse laser echo data processing circuit includes a laser diode, a laser detector, and a high-sample-rate data logging circuit. The data transmission circuit receives the processed data from the pulse laser echo data processing circuit, and the sampled data are transmitted to a computer through a USB 2.0 interface. A PC interface is designed in C#, in which the sampled laser pulse echo signal is displayed and the processed laser pulse is plotted. Finally, a laser ranging experiment is carried out to test the system. The results demonstrate that the hardware achieves high-speed data logging, high-speed processing, and high-speed transmission of the sampled data.
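The basic range computation behind such a system (sample the echo, detect its arrival, convert the delay to distance) can be sketched as follows. The simple threshold detector is a deliberate oversimplification of the paper's signal processing; a real system would interpolate the crossing and calibrate delays:

```python
def range_from_samples(samples, sample_rate_hz, threshold):
    """Toy pulse time-of-flight: find the first sample of the echo that
    crosses the threshold and convert that delay to a one-way range."""
    c = 299_792_458.0  # speed of light, m/s
    for i, v in enumerate(samples):
        if v >= threshold:
            t = i / sample_rate_hz   # echo delay in seconds
            return c * t / 2.0       # round-trip time halved
    return None                      # no echo detected
```

At a 1 GS/s sampling rate each sample corresponds to about 15 cm of range, which is why high-speed ADCs matter for ranging accuracy.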

  18. DESIGN OF NOVEL HIGH PRESSURE- RESISTANT HYDROTHERMAL FLUID SAMPLE VALVE

    Institute of Scientific and Technical Information of China (English)

    LIU Wei; YANG Canjun; WU Shijun; XIE Yingjun; CHEN Ying

    2008-01-01

    Sampling study is an effective exploration method, but the extreme environments of hydrothermal vents pose considerable engineering challenges for sampling hydrothermal fluids. Moreover, traditional sampler systems with sample valves have difficulty maintaining samples at in-situ pressure, and decompression affects microorganisms sensitive to such stresses. To address the technical difficulty of collecting samples from hydrothermal vents, a new bidirectional high-pressure-resistant sample valve with a balanced poppet was designed. The sample valve uses the high-performance plastic PEEK as the poppet material. A poppet with unsuitable dimensions is prone to plastic deformation or rupture under high working pressures in experiments. To address this issue, stress distributions of the poppet were simulated with a finite element model for different structural parameters and preload spring forces, and the static axial deformations at the top of the poppet were measured experimentally. The simulated results agree with the experimental results. The new sample valve seals well and can withstand high working pressures.

  19. Design-based inference in time-location sampling.

    Science.gov (United States)

    Leon, Lucie; Jauffret-Roustide, Marie; Le Strat, Yann

    2015-07-01

    Time-location sampling (TLS), also called time-space sampling or venue-based sampling, is a sampling technique widely used in populations at high risk of infectious diseases. The principle is to reach individuals in places and at times where they gather. For example, men who have sex with men meet in gay venues at certain times of the day, and homeless people or drug users come together to take advantage of services provided to them (accommodation, care, meals). The statistical analysis of data coming from TLS surveys has been comprehensively discussed in the literature. Two issues of particular importance are whether or not to include sampling weights and how to deal with the frequency of venue attendance (FVA) of individuals during the course of the survey. The objective of this article is to present TLS in the context of sampling theory, to calculate sampling weights, and to propose design-based inference taking the FVA into account. The properties of an estimator ignoring the FVA and of the design-based estimator are assessed and contrasted both through a simulation study and using real data from a recent cross-sectional survey conducted in France among drug users. We show that the estimators of a prevalence or a total can be strongly biased if the FVA is ignored, while the design-based estimator taking FVA into account is unbiased even when declarative errors occur in the FVA.
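The FVA-weighting idea can be sketched with a crude Horvitz-Thompson-style estimator. The inclusion-probability model below (venue-visit selection probability times FVA) is a hypothetical simplification of the estimators analyzed in the paper, and all field names are illustrative:

```python
def tls_total_estimate(records):
    """Design-based total for a time-location sample: each respondent is
    weighted by the inverse of an approximate inclusion probability,
    taken here as (probability the sampled venue-visit selects them) x
    (their frequency of venue attendance, FVA), since frequent attenders
    have more chances of being sampled.
    records: list of dicts with keys 'y', 'p_visit', 'fva'."""
    total = 0.0
    for r in records:
        inclusion = r["p_visit"] * r["fva"]  # crude inclusion probability
        total += r["y"] / inclusion          # inverse-probability weight
    return total
```

Ignoring the FVA amounts to dropping the `r["fva"]` factor, which over-weights frequent attenders and produces the bias the paper demonstrates.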

  20. Design of sampling locations for river water quality monitoring considering seasonal variation of point and diffuse pollution loads.

    Science.gov (United States)

    Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar; Ghosh, N C

    2015-06-01

    The design of a water quality monitoring network (WQMN) is a complicated decision-making process because each sampling station involves high installation, operational, and maintenance costs; therefore, the data with the highest information content should be collected. The effect of seasonal variation in point and diffuse pollution loadings on river water quality may have a significant impact on the optimal selection of sampling locations, but this possible effect has never been addressed in the evaluation and design of monitoring networks. The present study proposes a systematic approach for siting an optimal number and location of river water quality sampling stations based on seasonal or monsoonal variations in both point and diffuse pollution loadings. The proposed approach conceptualizes water quality monitoring as a two-stage process. The first stage is to consider all potential water quality sampling sites, selected based on existing guidelines or frameworks and on the locations of both point and diffuse pollution sources; monitoring at all sites thus identified should be continued for an adequate period of time to account for the effect of the monsoon season. In the second stage, the monitoring network is designed separately for monsoon and non-monsoon periods by optimizing the number and locations of sampling sites, using a modified Sanders approach. The impacts of human interventions on the design of the sampling network are quantified geospatially by estimating diffuse pollution loads and verified with a land use map. To demonstrate the proposed methodology, the Kali River basin in the western Uttar Pradesh state of India was selected as the study area. The final design suggests consequential pre- and post-monsoonal changes in the location and priority of water quality monitoring stations based on the seasonal variation of point and diffuse pollution loadings.

  1. Tumor producing fibroblast growth factor 23 localized by two-staged venous sampling.

    NARCIS (Netherlands)

    Boekel, G.A.J van; Ruinemans-Koerts, J.; Joosten, F.; Dijkhuizen, P.; Sorge, A van; Boer, H de

    2008-01-01

    BACKGROUND: Tumor-induced osteomalacia is a rare paraneoplastic syndrome characterized by hypophosphatemia, renal phosphate wasting, suppressed 1,25-dihydroxyvitamin D production, and osteomalacia. It is caused by a usually benign mesenchymal tumor producing fibroblast growth factor 23 (FGF-23). Sur

  2. Software Radio Sampling Rate Selection, Design and Synchronization

    CERN Document Server

    Venosa, Elettra; Palmieri, Francesco A N

    2012-01-01

    Software Radio represents the future of communication devices. By moving a radio's hardware functionalities into software, SWR promises to transform communication devices, creating radios that, built on DSP-based hardware platforms, are multiservice, multiband, reconfigurable, and reprogrammable. This book describes the design of Software Radio (SWR). Rather than providing an overview of digital signal processing and communications, it focuses on topics that are crucial in the design and development of a SWR, explaining them in a simple yet precise manner and giving simulation results that confirm the effectiveness of the proposed designs. Readers will gain in-depth knowledge of key issues so they can actually implement a SWR. Specifically, the book addresses the following issues: proper low sampling rate selection in the multi-band received-signal scenario, architecture design for both software radio receiver and transmitter devices, and radio synchronization. Addresses very precisely the most imp...

  3. ACS sampling system: design, implementation, and performance evaluation

    Science.gov (United States)

    Di Marcantonio, Paolo; Cirami, Roberto; Chiozzi, Gianluca

    2004-09-01

    By means of ACS (ALMA Common Software) framework we designed and implemented a sampling system which allows sampling of every Characteristic Component Property with a specific, user-defined, sustained frequency limited only by the hardware. Collected data are sent to various clients (one or more Java plotting widgets, a dedicated GUI or a COTS application) using the ACS/CORBA Notification Channel. The data transport is optimized: samples are cached locally and sent in packets with a lower and user-defined frequency to keep network load under control. Simultaneous sampling of the Properties of different Components is also possible. Together with the design and implementation issues we present the performance of the sampling system evaluated on two different platforms: on a VME based system using VxWorks RTOS (currently adopted by ALMA) and on a PC/104+ embedded platform using Red Hat 9 Linux operating system. The PC/104+ solution offers, as an alternative, a low cost PC compatible hardware environment with free and open operating system.
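The cache-and-forward transport described above can be sketched in miniature. `DecimatingSampler` is a hypothetical stand-in for the ACS sampling object: values are sampled at a high rate but cached locally and emitted in packets at a lower, user-defined rate (here expressed as a packet size) to keep network load under control:

```python
class DecimatingSampler:
    """Cache samples locally; emit them as packets once the cache
    reaches packet_size. A stand-in for the ACS sampling system's
    Notification Channel transport, for illustration only."""
    def __init__(self, packet_size):
        self.packet_size = packet_size
        self._cache = []
        self.packets_sent = []   # stands in for the network channel

    def sample(self, value):
        self._cache.append(value)
        if len(self._cache) >= self.packet_size:
            self.flush()

    def flush(self):
        # send whatever is cached as one packet (e.g., at end of run)
        if self._cache:
            self.packets_sent.append(list(self._cache))
            self._cache.clear()
```

Batching N samples per packet cuts the message rate by a factor of N at the cost of added latency, which is the trade-off the user-defined flush frequency controls.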

  4. The OSIRIS-Rex Asteroid Sample Return: Mission Operations Design

    Science.gov (United States)

    Gal-Edd, Jonathan; Cheuvront, Allan

    2014-01-01

    The OSIRIS-REx mission employs a methodical, phased approach to ensure success in meeting the mission's science requirements. OSIRIS-REx launches in September 2016, with a backup launch period occurring one year later. Sampling occurs in 2019. The departure burn from Bennu occurs in March 2021. On September 24, 2023, the Sample Return Capsule (SRC) lands at the Utah Test and Training Range (UTTR). Stardust heritage procedures are followed to transport the SRC to Johnson Space Center, where the samples are removed and delivered to the OSIRIS-REx curation facility. After a six-month preliminary examination period the mission will produce a catalog of the returned sample, allowing the worldwide community to request samples for detailed analysis. Traveling to and returning a sample from an asteroid that has not been explored before requires unique operations considerations. The Design Reference Mission (DRM) ties together spacecraft, instrument, and operations scenarios. The project implemented lessons learned from other small-body missions: APL/NEAR, JPL/Dawn, and ESA/Rosetta. The key lesson learned was to 'expect the unexpected' and implement planning tools early in the lifecycle. In preparation for PDR, the project changed the asteroid arrival date to arrive one year earlier, providing additional time margin. STK is used for mission design and STK Scheduler for instrument coverage analysis.

  5. Mineral chemistry of the Tissint meteorite: Indications of two-stage crystallization in a closed system

    Science.gov (United States)

    Liu, Yang; Baziotis, Ioannis P.; Asimow, Paul D.; Bodnar, Robert J.; Taylor, Lawrence A.

    2016-12-01

    The Tissint meteorite is a geochemically depleted, olivine-phyric shergottite. Olivine megacrysts contain 300-600 μm cores with uniform Mg# (80 ± 1) followed by concentric zones of Fe-enrichment toward the rims. We applied a number of tests to distinguish the relationship of these megacrysts to the host rock. Major and trace element compositions of the Mg-rich cores in olivine are in equilibrium with the bulk rock, within uncertainty, and rare earth element abundances of melt inclusions in Mg-rich olivines reported in the literature are similar to those of the bulk rock. Moreover, P Kα intensity maps of two large olivine grains show no resorption between the uniform core and the rim. Taken together, these lines of evidence suggest the olivine megacrysts are phenocrysts. Among depleted olivine-phyric shergottites, Tissint is the first that behaved mostly as a closed system, with the olivine megacrysts being phenocrysts. The texture and mineral chemistry of Tissint indicate a crystallization sequence of: olivine (Mg# 80 ± 1) → olivine (Mg# 76) + chromite → olivine (Mg# 74) + Ti-chromite → olivine (Mg# 74-63) + pyroxene (Mg# 76-65) + Cr-ulvöspinel → olivine (Mg# 63-35) + pyroxene (Mg# 65-60) + plagioclase, followed by late-stage ilmenite and phosphate. The crystallization of the Tissint meteorite likely occurred in two stages: the uniform olivine cores crystallized under equilibrium conditions, and a fractional crystallization sequence formed the rest of the rock. The two-stage crystallization without crystal settling is simulated using MELTS and the Tissint bulk composition, and can broadly reproduce the crystallization sequence and mineral chemistry measured in the Tissint samples. The transition between equilibrium and fractional crystallization is associated with a dramatic increase in cooling rate and might have been driven by an acceleration in the ascent rate or by an encounter with a steep thermal gradient in the Martian crust.

  6. Design, data analysis and sampling techniques for clinical research.

    Science.gov (United States)

    Suresh, Karthik; Thomas, Sanjeev V; Suresh, Geetha

    2011-10-01

    Statistical analysis is an essential technique that enables a medical researcher to draw meaningful inferences from their data. Improper choice of study design or data analysis may yield insufficient or misleading results and conclusions. Converting a medical problem into a statistical hypothesis with an appropriate methodological and logical design, and then back-translating the statistical results into relevant medical knowledge, is a real challenge. This article explains various sampling methods that can be appropriately used in medical research under different scenarios and challenges.
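As a minimal illustration of two of the sampling methods such an article typically covers, the sketch below contrasts simple random sampling with proportionally allocated stratified sampling; the patient data and stratum variable are invented:

```python
import random

def simple_random_sample(population, n, seed=0):
    """Simple random sampling: every unit has equal selection probability."""
    return random.Random(seed).sample(population, n)

def stratified_sample(population, stratum_of, frac, seed=0):
    """Stratified sampling with proportional allocation: draw the same
    fraction within each stratum, guaranteeing subgroup representation."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for units in strata.values():
        sample.extend(rng.sample(units, max(1, round(frac * len(units)))))
    return sample

# Invented cohort: 100 patients, half female, half male.
patients = [{"id": i, "sex": "F" if i % 2 else "M"} for i in range(100)]
srs = simple_random_sample(patients, 10)
strat = stratified_sample(patients, lambda p: p["sex"], frac=0.10)
```

With stratification the 10% sample is exactly balanced across the sexes, whereas a simple random sample of the same size may not be.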

  7. Anti-kindling Induced by Two-Stage Coordinated Reset Stimulation with Weak Onset Intensity

    Science.gov (United States)

    Zeitler, Magteld; Tass, Peter A.

    2016-01-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing–dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, in which two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony, where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis, or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies. PMID:27242500

  8. Novel two-stage piezoelectric-based ocean wave energy harvesters for moored or unmoored buoys

    Science.gov (United States)

    Murray, R.; Rastegar, J.

    2009-03-01

    Harvesting mechanical energy from ocean wave oscillations for conversion to electrical energy has long been pursued as an alternative or self-contained power source. The attraction to harvesting energy from ocean waves stems from the sheer power of the wave motion, which can easily exceed 50 kW per meter of wave front. The principal barrier to harvesting this power is the very low and varying frequency of ocean waves, which generally ranges from 0.1 Hz to 0.5 Hz. In this paper the application of a novel class of two-stage electrical energy generators to buoyant structures is presented. The generators use the buoy's interaction with the ocean waves as a low-speed input to a primary system, which, in turn, successively excites an array of vibratory elements (the secondary system) into resonance, like a musician strumming a guitar. The key advantage of the present system is that, by decoupling the two systems, the low-frequency and highly varying buoy motion is converted into constant and much higher-frequency mechanical vibrations. Electrical energy may then be harvested from the vibrating elements of the secondary system with high efficiency using piezoelectric elements. The operating principles of the novel two-stage technique are presented, including analytical formulations describing the transfer of energy between the two systems. Also, prototypical design examples are offered, as well as an in-depth computer simulation of a prototypical heaving-based wave energy harvester that generates electrical energy from the up-and-down motion of a buoy riding on the ocean's surface.
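The frequency up-conversion at the heart of the two-stage approach can be illustrated with a single spring-mass vibratory element; the stiffness and mass values below are purely illustrative, not taken from the paper:

```python
import math

def natural_frequency_hz(stiffness_n_per_m, mass_kg):
    """Undamped natural frequency f = sqrt(k/m) / (2*pi) of one
    spring-mass vibratory element in the secondary system."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2.0 * math.pi)

wave_freq_hz = 0.25                 # ocean waves: roughly 0.1-0.5 Hz
# Illustrative element: 4000 N/m spring, 100 g proof mass.
f_n = natural_frequency_hz(stiffness_n_per_m=4000.0, mass_kg=0.1)
upconversion = f_n / wave_freq_hz   # how much faster the element vibrates
```

Even these modest assumed values place the element's resonance two orders of magnitude above the wave frequency, which is what lets the piezoelectric elements operate efficiently.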

  9. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld Zeitler

    2016-05-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing–dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e. unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, in which two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony, where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis, or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

  10. Reliability of single sample experimental designs: comfortable effort level.

    Science.gov (United States)

    Brown, W S; Morris, R J; DeGroot, T; Murry, T

    1998-12-01

    This study was designed to ascertain the intrasubject variability across multiple recording sessions, which is most often disregarded in reports of group mean data or unavailable because of single sample experimental designs. Intrasubject variability was assessed within and across several experimental sessions from measures of speaking fundamental frequency, vocal intensity, and reading rate. Three age groups of men and women (young, middle-aged, and elderly) repeated the vowel /a/, read a standard passage, and spoke extemporaneously during each experimental session. Statistical analyses were performed to assess each speaker's variability from his or her own mean, and to identify measures that varied consistently for any one speaking sample type, within or across days. Results indicated that intrasubject variability was minimal, with approximately 4% of the data exhibiting significant variation across experimental sessions.

  11. Preemptive scheduling in a two-stage supply chain to minimize the makespan

    NARCIS (Netherlands)

    Pei, Jun; Fan, Wenjuan; Pardalos, Panos M.; Liu, Xinbao; Goldengorin, Boris; Yang, Shanlin

    2015-01-01

    This paper deals with the problem of preemptive scheduling in a two-stage supply chain framework. The supply chain environment contains two stages: production and transportation. In the production stage, jobs are processed on a manufacturer's bounded serial batching machine, and preemptions are allowed […]

  12. Two-stage removal of nitrate from groundwater using biological and chemical treatments.

    Science.gov (United States)

    Ayyasamy, Pudukadu Munusamy; Shanthi, Kuppusamy; Lakshmanaperumalsamy, Perumalsamy; Lee, Soon-Jae; Choi, Nag-Choul; Kim, Dong-Ju

    2007-08-01

    In this study, we attempted to treat groundwater contaminated with nitrate using a two-stage removal system: biological treatment with the nitrate-degrading bacterium Pseudomonas sp. RS-7, followed by chemical treatment with a coagulant. For the biological stage, the effect of carbon sources on nitrate removal was first investigated using mineral salt medium (MSM) containing 500 mg l⁻¹ nitrate, to select the most effective carbon source. Among the three carbon sources tested (glucose, starch and cellulose), starch at 1% was found to be the most effective. Starch was therefore used as the carbon source for the remainder of the biological treatment, in which nitrate removal was carried out on MSM solution and groundwater samples containing 500 mg l⁻¹ and 460 mg l⁻¹ nitrate, respectively. About 86% and 89% of the nitrate was removed from the MSM solution and groundwater samples, respectively, at 72 h. Chemical coagulants, namely alum, lime and polyaluminium chloride, were then tested for removal of the nitrate remaining in the samples. Among the coagulants, lime at 150 mg l⁻¹ exhibited the highest nitrate removal efficiency, with complete disappearance in the MSM solutions. A combined system of biological and chemical treatment was thus found to be more effective for the complete removal of nitrate from groundwater.

  13. Incorporating the sampling design in weighting adjustments for panel attrition.

    Science.gov (United States)

    Chen, Qixuan; Gelman, Andrew; Tracy, Melissa; Norris, Fran H; Galea, Sandro

    2015-12-10

    We review weighting adjustment methods for panel attrition and suggest approaches for incorporating design variables, such as strata, clusters, and baseline sample weights. Design information can typically be included in attrition analysis using multilevel models or decision tree methods such as the chi-square automatic interaction detection (CHAID) algorithm. We use simulation to show that these weighting approaches can effectively reduce the bias in survey estimates that would occur from omitting the effect of design factors on attrition, while keeping the resulting weights stable. We provide a step-by-step illustration of creating weighting adjustments for panel attrition in the Galveston Bay Recovery Study, a survey of residents of a community following a disaster, and offer suggestions to analysts for decision-making about weighting approaches. Copyright © 2015 John Wiley & Sons, Ltd.
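A minimal sketch of a weighting-class adjustment of the kind reviewed above, with invented data; the cells could equally be the leaves of a CHAID-style tree built on design variables:

```python
from collections import defaultdict

def attrition_adjusted_weights(rows, cell_of):
    """Weighting-class adjustment for panel attrition: within each cell
    (defined, e.g., by strata or tree leaves), inflate the baseline
    weight by the inverse of the weighted response rate."""
    base_total = defaultdict(float)
    resp_total = defaultdict(float)
    for r in rows:
        c = cell_of(r)
        base_total[c] += r["w0"]
        if r["responded"]:
            resp_total[c] += r["w0"]
    # Respondents carry the weight of the cell's nonrespondents.
    return {r["id"]: r["w0"] * base_total[cell_of(r)] / resp_total[cell_of(r)]
            for r in rows if r["responded"]}

rows = [  # invented panel units: baseline weight w0 and wave-2 response
    {"id": 1, "stratum": "urban", "w0": 1.0, "responded": True},
    {"id": 2, "stratum": "urban", "w0": 1.0, "responded": False},
    {"id": 3, "stratum": "rural", "w0": 2.0, "responded": True},
    {"id": 4, "stratum": "rural", "w0": 2.0, "responded": True},
    {"id": 5, "stratum": "rural", "w0": 2.0, "responded": False},
]
w = attrition_adjusted_weights(rows, cell_of=lambda r: r["stratum"])
```

Note the adjusted weights of the respondents reproduce each cell's baseline weight total, so the design information carried by the baseline weights is preserved.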

  14. Remote liquid target loading system for LANL two-stage gas gun

    Science.gov (United States)

    Gibson, L. L.; Bartram, B.; Dattelbaum, D. M.; Sheffield, S. A.; Stahl, D. B.

    2009-06-01

    A Remote Liquid Loading System (RLLS) was designed to load high-hazard liquid materials into targets for gas-gun driven impact experiments. These high-hazard liquids tend to react with confining materials over a short period of time, degrading target assemblies and potentially building up pressure through the evolution of gas in the reactions. The ability to load a gas-gun target in place, immediately prior to firing the gun, therefore provides the most stable and reliable target-fielding approach. We present the design and evaluation of an RLLS built for the LANL two-stage gas gun. Targets for the gun are made of PMMA and assembled to form a liquid containment cell with a volume of approximately 25 cc. Material compatibility was a major consideration in the design of the system, particularly for use with highly concentrated hydrogen peroxide: Teflon and 304 stainless steel were the two materials most compatible with the liquids to be tested. Teflon valves and tubing, as well as stainless steel tubing, were used to handle the liquid, along with a stainless steel reservoir. Preliminary testing was done to ensure proper flow rate and safety. The system has been used to successfully load 97.5 percent hydrogen peroxide into a target cell just prior to a successful multiple magnetic gauge experiment. TV cameras on the target verified the bubble-free filling operation.

  15. Fabrication, phase formation and microstructure of Ni4Nb2O9 ceramics fabricated by using the two-stage sintering technique

    Science.gov (United States)

    Khamman, Orawan; Jainumpone, Jiraporn; Watcharapasorn, Anucha; Ananta, Supon

    2016-08-01

    The potential of two-stage sintering for the production of very dense and pure nickel diniobate (Ni4Nb2O9) ceramics at low firing temperature was demonstrated. The effects of the designed sintering conditions on the phase formation, densification and microstructure of the ceramics were characterized by X-ray diffraction (XRD), the Archimedes method and scanning electron microscopy (SEM), respectively. A minor phase of columbite NiNb2O6 tended to form together with the desired Ni4Nb2O9 phase, depending on the sintering conditions. Optimization of the sintering conditions can lead to single-phase Ni4Nb2O9 ceramics with an orthorhombic structure. Ceramics sintered in two stages at 950/1250 °C for 4 h exhibited a maximum density of ~92%. Microstructures with denser angular grain packing were generally found in the sintered Ni4Nb2O9 ceramics; however, the grains were irregular in shape when the samples were sintered at 1050/1250 °C. Two-stage sintering was also found to enhance the ferroelectric behavior of the Ni4Nb2O9 ceramic.

  16. A Structural Design of an Insulated, Heat-Conducting Ceramic Plate Used in Two-Stage Depressed Collectors

    Institute of Scientific and Technical Information of China (English)

    骆岷; 杨定义; 周明干

    2012-01-01

    A structural design of an insulated, heat-conducting ceramic plate for two-stage depressed collectors is presented, covering material selection, insulation distance and related considerations; the design results and their practical application are given.

  17. Comparison of single-stage and a two-stage vertical flow constructed wetland systems for different load scenarios.

    Science.gov (United States)

    Langergraber, Guenter; Pressl, Alexander; Leroch, Klaus; Rohrhofer, Roland; Haberl, Raimund

    2010-01-01

    Constructed wetlands (CWs) are known to be robust wastewater treatment systems and are therefore very suitable for small villages and single households. When nitrification is required, vertical flow (VF) CWs are widely used. This contribution compares the behaviour and treatment efficiencies of a single-stage VF CW and a two-stage VF CW system under varying operating and loading conditions, following the standardized testing procedures for small wastewater treatment plants described in the European standard EN 12566-3. The single-stage VF CW is designed and operated according to the Austrian design standards with an organic load of 20 g COD m⁻² d⁻¹ (i.e. 4 m² per person equivalent (PE)). The two-stage VF CW system is operated at 40 g COD m⁻² d⁻¹ (i.e. 2 m² per PE). During the 48-week testing period the Austrian threshold effluent concentrations were not exceeded in either system. The two-stage VF CW system proved more robust than the single-stage VF CW, especially during highly fluctuating loads at low temperatures.

  18. Decentralized combined heat and power production by two-stage biomass gasification and solid oxide fuel cells

    DEFF Research Database (Denmark)

    Bang-Møller, Christian; Rokni, Masoud; Elmegaard, Brian

    2013-01-01

    To investigate options for increasing the electrical efficiency of decentralized combined heat and power (CHP) plants fuelled with biomass compared to conventional technology, this research explored the performance of an alternative plant design based on thermal biomass gasification and solid oxide fuel cells (SOFC). Based on experimental data from a demonstrated 0.6 MWth two-stage gasifier, a model of the gasifier plant was developed and calibrated. Similarly, an SOFC model was developed using published experimental data. Simulation of a 3 MWth plant combining two-stage biomass gasification […], the carbon conversion factor in the gasifier and the efficiency of the DC/AC inverter were the most influential parameters in the model. Thus, a detailed study of the practical values of these parameters was conducted to determine the performance of the plant with the lowest possible uncertainty. The SOFC […]

  19. A Two-Stage State Recognition Method for Asynchronous SSVEP-Based Brain-Computer Interface System

    Institute of Scientific and Technical Information of China (English)

    ZHANG Zimu; DENG Zhidong

    2013-01-01

    A two-stage state recognition method is proposed for an asynchronous SSVEP (steady-state visual evoked potential) based brain-computer interface (SBCI) system. The two-stage method is composed of an idle state (IS) detection module and a control state (CS) discrimination module. Based on blind source separation and continuous wavelet transform techniques, the proposed method integrates the functions of multi-electrode spatial filtering and feature extraction. In the IS detection module, a method using the ensemble IS feature is proposed. In the CS discrimination module, the ensemble CS feature is designed as the feature vector for control intent classification. Further, performance comparisons are made between our IS detection module and existing ones. The experimental results also validate the satisfactory performance of our CS discrimination module.
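The paper's actual pipeline uses blind source separation and wavelet features; the sketch below only illustrates the two-stage idle/control logic on a single simulated channel, substituting simple FFT band power for those features (sampling rate, target frequencies and threshold are all assumed):

```python
import numpy as np

FS = 250.0                      # assumed sampling rate (Hz)
TARGETS = [8.0, 10.0, 12.0]     # assumed SSVEP stimulus frequencies (Hz)

def band_power(x, f_hz):
    """Power at the FFT bin closest to f_hz."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1.0 / FS)
    return spec[np.argmin(np.abs(freqs - f_hz))]

def two_stage_recognize(x, idle_threshold):
    powers = {f: band_power(x, f) for f in TARGETS}
    # Stage 1: idle-state (IS) detection -- no target sufficiently active.
    if max(powers.values()) < idle_threshold:
        return "idle"
    # Stage 2: control-state (CS) discrimination -- pick the dominant target.
    return max(powers, key=powers.get)

t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
ssvep = np.sin(2 * np.pi * 10.0 * t) + 0.01 * rng.standard_normal(t.size)
noise = 0.01 * rng.standard_normal(t.size)
```

Separating the two stages is what makes the interface asynchronous: commands are only decoded once the idle test has rejected the "no intent" hypothesis.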

  20. Two stage hydrolysis of corn stover at high solids content for mixing power saving and scale-up applications.

    Science.gov (United States)

    Liu, Ke; Zhang, Jian; Bao, Jie

    2015-11-01

    A two-stage hydrolysis of corn stover was designed to resolve the conflict between sufficient mixing at high solids content and the high power input encountered in large-scale bioreactors. The process starts with quick liquefaction, converting solid cellulose to a liquid slurry under strong mixing in small reactors, followed by comprehensive hydrolysis to complete the saccharification into fermentable sugars in large reactors without agitation apparatus. Sixty percent of the mixing energy consumption was saved by removing the mixing apparatus from the large-scale vessels. The scale-up ratio was small for the first-stage hydrolysis reactors because of the reduced reactor volume, and scale-up of the large second-stage saccharification reactors was easy because no mixing mechanism was involved. This two-stage hydrolysis is applicable to either simple hydrolysis or combined fermentation processes, and provides a practical process option for industrial-scale biorefinery processing of lignocellulosic biomass. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Two stages of isotopic exchanges experienced by the Ertaibei granite pluton, northern Xinjiang, China

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The 18O/16O and D/H ratios of coexisting feldspar, quartz, and biotite separates from twenty samples collected from the Ertaibei granite pluton, northern Xinjiang, China, were determined. The Ertaibei pluton is shown to have experienced two stages of isotopic exchange. The second stage of 18O/16O and D/H exchange, with meteoric water, brought about a marked decrease in the δ18O values of feldspar and biotite from the second group of samples. The D/H of biotite exhibits a higher sensitivity to meteoric water alteration than its 18O/16O. However, the first stage of 18O/16O exchange, with the 18O-rich aqueous fluid derived from dehydration within the deep crust, caused the Δ18OQuartz-Feldspar reversal. It is inferred that dehydration-melting may have been an important mechanism for anatexis. The deep fluid encircled the Ertaibei pluton like an envelope, serving as an effective screen against surface waters.

  2. Two stages of isotopic exchanges experienced by the Ertaibei granite pluton, northern Xinjiang, China

    Institute of Scientific and Technical Information of China (English)

    刘伟

    2000-01-01

    The 18O/16O and D/H ratios of coexisting feldspar, quartz, and biotite separates from twenty samples collected from the Ertaibei granite pluton, northern Xinjiang, China, were determined. The Ertaibei pluton is shown to have experienced two stages of isotopic exchange. The second stage of 18O/16O and D/H exchange, with meteoric water, brought about a marked decrease in the δ18O values of feldspar and biotite from the second group of samples. The D/H of biotite exhibits a higher sensitivity to meteoric water alteration than its 18O/16O. However, the first stage of 18O/16O exchange, with the 18O-rich aqueous fluid derived from dehydration within the deep crust, caused the Δ18OQuartz-Feldspar reversal. It is inferred that dehydration-melting may have been an important mechanism for anatexis. The deep fluid encircled the Ertaibei pluton like an envelope, serving as an effective screen against surface waters.

  3. A CURRENT MIRROR BASED TWO STAGE CMOS CASCODE OP-AMP FOR HIGH FREQUENCY APPLICATION

    Directory of Open Access Journals (Sweden)

    RAMKRISHNA KUNDU

    2017-03-01

    This paper presents a low-power, high-slew-rate, high-gain, ultra-wideband two-stage CMOS cascode operational amplifier for radio frequency applications. A current mirror based cascoding technique and a pole-zero cancellation technique are used to improve the gain and enhance the unity-gain bandwidth, respectively, which is the novelty of the circuit. In the cascode configuration, a common-source transistor drives a common-gate transistor; cascoding enhances the output resistance and hence improves the overall gain of the operational amplifier with less complexity and less power dissipation. A current mirror is used to bias the common-gate transistor. The proposed circuit is designed and simulated using Cadence analog and digital system design tools in 45 nm CMOS technology. The simulation results show a DC gain of 63.62 dB, a unity-gain bandwidth of 2.70 GHz, a slew rate of 1816 V/µs and a phase margin of 59.53º; the power supply of the proposed operational amplifier is 1.4 V (rail-to-rail ±700 mV) and the power consumption is 0.71 mW. These specifications meet the requirements of radio frequency applications.
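A back-of-envelope illustration of why cascoding raises gain (the gm and ro values are invented, not taken from the paper): with an ideal load, a common-source stage gives |Av| ≈ gm·ro, while cascoding raises the output resistance to roughly gm·ro², boosting |Av| to about (gm·ro)²:

```python
import math

def common_source_gain(gm, ro):
    """|Av| = gm*ro for a single common-source stage with an ideal load."""
    return gm * ro

def cascode_gain(gm, ro):
    """Cascoding raises the output resistance to roughly gm*ro**2,
    so |Av| grows to about (gm*ro)**2 (identical devices assumed)."""
    return gm * (gm * ro * ro)

gm, ro = 1e-3, 20e3                 # illustrative small-signal values (S, ohm)
gain_db = lambda a: 20.0 * math.log10(a)
cs_db = gain_db(common_source_gain(gm, ro))     # ~26 dB
casc_db = gain_db(cascode_gain(gm, ro))         # ~52 dB
```

The squared intrinsic gain is why a two-stage cascode amplifier can reach the reported 63.62 dB without long cascades of simple stages.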

  4. A two-stage storage routing model for green roof runoff detention.

    Science.gov (United States)

    Vesuviano, Gianni; Sonnenwald, Fred; Stovin, Virginia

    2014-01-01

    Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator, and the modelled time-series runoff is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean Rt² = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with different substrates.
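A two-stage storage-routing structure of the kind described can be sketched as two reservoirs in series; the linear-reservoir coefficients below are illustrative placeholders, not the paper's verified sub-models:

```python
def route_two_stage(rain, c_substrate=0.3, c_drain=0.7):
    """Two storage-routing stages in series: the substrate detains
    rainfall first, then drains into the drainage layer/protection mat.
    Linear reservoirs with illustrative, uncalibrated coefficients."""
    s1 = s2 = 0.0              # storages (mm)
    runoff = []
    for p in rain:
        s1 += p
        q1 = c_substrate * s1  # substrate outflow this time step
        s1 -= q1
        s2 += q1
        q2 = c_drain * s2      # drainage-layer outflow -> roof runoff
        s2 -= q2
        runoff.append(q2)
    return runoff, s1, s2

storm = [10.0] * 5 + [0.0] * 10   # 5 steps of 10 mm rain, then dry
hydrograph, s1, s2 = route_two_stage(storm)
```

Because the two stages are modelled separately, either coefficient can be swapped for a different (e.g. nonlinear) sub-model without touching the other, which is the adaptability the abstract argues for. Mass is conserved: total runoff plus the water still stored equals the rainfall input, and the peak outflow is attenuated below the peak rainfall rate.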

  5. Multifunctional Solar Systems Based On Two-Stage Regeneration Absorbent Solution

    Directory of Open Access Journals (Sweden)

    Doroshenko A.V.

    2015-04-01

    Concepts are developed for multifunctional solar systems for air dehumidification, heat supply, cooling, and air conditioning, based on an open absorption cycle with direct regeneration of the absorbent. The solar systems rely on preliminary drying of the air stream followed by evaporative cooling, using evaporative coolers of both direct and indirect types. The principle of two-stage regeneration of the absorbent is used in the solar systems, built around liquid and gas-liquid solar collectors, and the main design solutions for a new generation of gas-liquid solar collectors are presented. Heat losses in the gas-liquid solar collectors due to the mechanisms of convection and radiation are analyzed, and the optimal gas and liquid flow rates, as well as the basic dimensions and configuration of the collector's working channel, are identified. The heat and mass transfer devices of the evaporative cooling system are based on the interaction between a liquid film and the gas stream flowing over it, with multichannel polymeric structures used for the packing. A preliminary analysis of the capabilities of the multifunctional solar absorption systems for cooling and air conditioning is made on the basis of the authors' experimental data. The designed solar systems feature low power consumption and environmental friendliness.

  6. A Risk-Based Interval Two-Stage Programming Model for Agricultural System Management under Uncertainty

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2016-01-01

    Nonpoint source (NPS) pollution caused by agricultural activities is a main reason that water quality in watersheds worsens, even to the point of deterioration. Moreover, pollution control is accompanied by falling revenue for the agricultural system. How to design and generate a cost-effective and environmentally friendly agricultural production pattern is therefore a critical issue for local managers. In this study, a risk-based interval two-stage programming model (RBITSP) was developed. Compared to a general ITSP model, the significant contribution of the RBITSP model is that it emphasizes the importance of financial risk under various probabilistic levels, rather than concentrating only on expected economic benefit, where risk is expressed as the probability of not meeting a target profit under each individual scenario realization. This effectively avoids the inaccuracy of solutions caused by a traditional expected-value objective function, and generates a variety of solutions through adjustment of weight coefficients, reflecting the trade-off between system economy and reliability. A case study of agricultural production management in the Tai Lake watershed was used to demonstrate the superiority of the proposed model. The results could serve as a basis for designing land-structure adjustment patterns and farmland retirement schemes, balancing system benefit, system-failure risk and water-body protection.
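The risk measure used in the model, the probability of not meeting a target profit across scenario realizations, can be illustrated with a toy scenario set (all numbers invented, and no interval or optimization machinery included):

```python
def expected_profit_and_risk(profits, probs, target):
    """Expected profit of one cropping plan, plus its financial risk,
    defined as in the abstract: the probability of not meeting the
    target profit over the scenario realizations."""
    expected = sum(p * v for p, v in zip(probs, profits))
    risk = sum(p for p, v in zip(probs, profits) if v < target)
    return expected, risk

probs = [0.2, 0.5, 0.3]            # wet / normal / dry scenarios (invented)
intensive = [120.0, 100.0, 40.0]   # profit per scenario for an intensive plan
conservative = [90.0, 85.0, 80.0]  # ...and for a conservative plan
e_int, r_int = expected_profit_and_risk(intensive, probs, target=80.0)
e_con, r_con = expected_profit_and_risk(conservative, probs, target=80.0)
```

The intensive plan has the higher expected profit but a 30% chance of missing the target, while the conservative plan never misses it; weighting these two criteria is exactly the economy/reliability trade-off the model explores.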

  7. Two-stage seasonal streamflow forecasts to guide water resources decisions and water rights allocation

    Science.gov (United States)

    Block, P. J.; Gonzalez, E.; Bonnafous, L.

    2011-12-01

    Decision-making in water resources is inherently uncertain, producing numerous risks ranging from the operational (present) to the planning (season-ahead) to the design/adaptation (decadal) time-scales. These risks stem from human activity and from climate variability and change. As the risks in designing and operating water systems and allocating available supplies vary systematically in time, prospects for predicting and managing such risks become increasingly attractive. Considerable effort has been undertaken to improve seasonal forecast skill and to advocate for its integration to reduce risk, yet only minimal adoption is evident. The impediments are well defined, but tailoring forecast products and allowing for flexible adoption help overcome some obstacles. The semi-arid Elqui River basin in Chile is contending with increasing levels of water stress and demand coupled with insufficient investment in infrastructure, taxing its ability to meet agriculture, hydropower, and environmental requirements. The basin is fed by a retreating glacier, with allocation principles founded on a system of water rights and markets. A two-stage seasonal streamflow forecast at leads of one and two seasons prescribes the probability of reductions in the value of each water right, allowing water managers to inform their constituents in advance. A tool linking the streamflow forecast to a simple reservoir decision model also allows water managers to select a level of confidence in the forecast information.

  8. Confronting the ironies of optimal design: Nonoptimal sampling designs with desirable properties

    Science.gov (United States)

    Casman, Elizabeth A.; Naiman, Daniel Q.; Chamberlin, Charles E.

    1988-03-01

    Two sampling designs are developed for the improvement of parameter estimate precision in nonlinear regression, one for when there is uncertainty in the parameter values, and the other for when the correct model formulation is unknown. Although based on concepts of optimal design theory, the design criteria emphasize efficiency rather than optimality. The development is illustrated using a Streeter-Phelps dissolved oxygen-biochemical oxygen demand model.

  9. Evaluation of design flood estimates with respect to sample size

    Science.gov (United States)

    Kobierska, Florian; Engeland, Kolbjorn

    2016-04-01

    Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If less than 30 years of local data are available, an index flood approach is recommended, where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log Pearson III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Do the answers to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing the stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not depend excessively on the data sample. The reliability indices describe the degree to which design flood predictions can be trusted.
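
    The distribution-fitting step described above can be sketched as follows. This is an illustrative example, not the study's test bench: it fits a 3-parameter GEV by maximum likelihood to synthetic annual-maximum flows using SciPy's `genextreme` and reads off a 100-year design flood as the quantile with annual exceedance probability 1/100.

```python
# Illustrative sketch (synthetic data, not the study's test bench).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic 50-year annual-maximum series (units: m^3/s).
annual_max = stats.genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                                  size=50, random_state=rng)

# Maximum-likelihood fit of the 3-parameter GEV.
shape, loc, scale = stats.genextreme.fit(annual_max)

# 100-year design flood = quantile with annual exceedance probability 1/100.
q100 = stats.genextreme.ppf(1.0 - 1.0 / 100.0, shape, loc=loc, scale=scale)
print(f"100-year design flood estimate: {q100:.1f} m^3/s")
```

    The same pattern applies to the other candidate distributions (e.g. generalized logistic) by swapping the SciPy distribution object; L-moment and Bayesian estimators would replace the `fit` call.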

  10. Thermodynamic analysis of small-scale dimethyl ether (DME) and methanol plants based on the efficient two-stage gasifier

    DEFF Research Database (Denmark)

    Clausen, Lasse Røngaard; Elmegaard, Brian; Ahrenfeldt, Jesper

    2011-01-01

    Models of dimethyl ether (DME) and methanol synthesis plants have been designed by combining the features of the simulation tools DNA and Aspen Plus. The plants produce DME or methanol by catalytic conversion of a syngas generated by gasification of woody biomass. Electricity is co-produced in the plants by a gas engine utilizing the unconverted syngas. A two-stage gasifier with a cold gas efficiency of 93% is used, but because of the design of this type of gasifier, the plants have to be of small-scale (5 MWth biomass input). The plant models show energy efficiencies from biomass to DME/methanol...

  11. Numerical simulation of a step-piston type series two-stage pulse tube refrigerator

    Science.gov (United States)

    Zhu, Shaowei; Nogawa, Masafumi; Inoue, Tatsuo

    2007-09-01

    A two-stage pulse tube refrigerator has a great advantage in that there are no moving parts at low temperatures. The problem is low theoretical efficiency. In an ordinary two-stage pulse tube refrigerator, the expansion work of the first stage pulse tube is rather large, but is changed to heat. The theoretical efficiency is lower than that of a Stirling refrigerator. A series two-stage pulse tube refrigerator was introduced for solving this problem. The hot end of the regenerator of the second stage is connected to the hot end of the first stage pulse tube. The expansion work in the first stage pulse tube is part of the input work of the second stage, therefore the efficiency is increased. In a simulation result for a step-piston type two-stage series pulse tube refrigerator, the efficiency is increased by 13.8%.

  12. Theory and calculation of two-stage voltage stabilizer on zener diodes

    Directory of Open Access Journals (Sweden)

    G. S. Veksler

    1966-12-01

    Full Text Available The two-stage stabilizer is compared with a one-stage stabilizer. Formulas are derived that enable an engineering calculation, and an example calculation is given.
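
    As a hedged illustration of the kind of engineering calculation involved (the article's own formulas are not reproduced here), a common first-order model treats each series-resistor + zener stage as attenuating input-voltage ripple by roughly r_z / (R + r_z), where r_z is the zener dynamic resistance; cascaded stages multiply their attenuations. All component values below are hypothetical.

```python
# First-order sketch (assumed small-signal model, not Veksler's exact formulas).

def stage_attenuation(r_series: float, r_zener: float) -> float:
    """Small-signal ripple attenuation of one series-R + zener shunt stage."""
    return r_zener / (r_series + r_zener)

# Two-stage example with illustrative values: 470 ohm / 15 ohm, 330 ohm / 10 ohm.
a1 = stage_attenuation(470.0, 15.0)
a2 = stage_attenuation(330.0, 10.0)
total = a1 * a2   # cascade attenuation: the two stages multiply
print(f"stage 1: {a1:.4f}, stage 2: {a2:.4f}, cascade: {total:.6f}")
```

    The multiplication of per-stage factors is the reason a two-stage stabilizer achieves far better ripple rejection than a single stage with the same total resistance.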

  13. Two-stage fungal pre-treatment for improved biogas production from sisal leaf decortication residues

    National Research Council Canada - National Science Library

    Muthangya, Mutemi; Mshandete, Anthony Manoni; Kivaisi, Amelia Kajumulo

    2009-01-01

    .... Pre-treatment of the residue prior to its anaerobic digestion (AD) was investigated using a two-stage pre-treatment approach with two fungal strains, CCHT-1 and Trichoderma reesei in succession in anaerobic batch bioreactors...

  14. Experiment research on two-stage dry-fed entrained flow coal gasifier

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The process flow and the main devices of a new two-stage dry-fed coal gasification pilot plant with a throughput of 36 t/d are introduced in this paper. For comparison with traditional one-stage gasifiers, the influence of the coal feed ratio between the two stages on the performance of the gasifier is studied in detail through a series of experiments. The results reveal that two-stage gasification decreases the temperature of the syngas at the outlet of the gasifier, simplifies the gasification process, and reduces the size of the syngas cooler. Moreover, the cold gas efficiency of the gasifier can be improved by using two-stage gasification: in our experiments, the efficiency is about 3%-6% higher than that of existing one-stage gasifiers.
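
    Cold gas efficiency is the ratio of the chemical energy leaving in the cold syngas to the chemical energy fed as coal. The sketch below uses hypothetical flows and heating values (the abstract reports only the 3%-6% improvement, not the underlying measurements); only the 36 t/d feed rate is taken from the text.

```python
# Cold-gas-efficiency sketch with hypothetical numbers.

def cold_gas_efficiency(syngas_flow_nm3_h: float, syngas_lhv_mj_nm3: float,
                        coal_feed_kg_h: float, coal_lhv_mj_kg: float) -> float:
    """Chemical energy in cold syngas / chemical energy fed as coal."""
    return (syngas_flow_nm3_h * syngas_lhv_mj_nm3) / (coal_feed_kg_h * coal_lhv_mj_kg)

# 36 t/d pilot plant => 1500 kg/h coal; the other values are illustrative.
cge = cold_gas_efficiency(syngas_flow_nm3_h=3200.0, syngas_lhv_mj_nm3=9.5,
                          coal_feed_kg_h=1500.0, coal_lhv_mj_kg=25.0)
print(f"cold gas efficiency: {cge:.1%}")
```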

  15. TWO-STAGE CHARACTER CLASSIFICATION : A COMBINED APPROACH OF CLUSTERING AND SUPPORT VECTOR CLASSIFIERS

    NARCIS (Netherlands)

    Vuurpijl, L.; Schomaker, L.

    2000-01-01

    This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

  16. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    Full Text Available Abstract This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework including 2D and 3D probabilistic graphical models which can solve the occlusion problem in the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation during inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn the prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage framework to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of the distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which pose challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.

  17. Bayesian and frequentist two-stage treatment strategies based on sequential failure times subject to interval censoring.

    Science.gov (United States)

    Thall, Peter F; Wooten, Leiko H; Logothetis, Christopher J; Millikan, Randall E; Tannir, Nizar M

    2007-11-20

    For many diseases, therapy involves multiple stages, with the treatment in each stage chosen adaptively based on the patient's current disease status and history of previous treatments and clinical outcomes. Physicians routinely use such multi-stage treatment strategies, also called dynamic treatment regimes or treatment policies. We present a Bayesian framework for a clinical trial comparing two-stage strategies based on the time to overall failure, defined as either second disease worsening or discontinuation of therapy. Each patient is randomized among a set of treatments at enrollment, and if disease worsening occurs the patient is then re-randomized among a set of treatments excluding the treatment received initially. The goal is to select the two-stage strategy having the largest average overall failure time. A parametric model is formulated to account for non-constant failure time hazards, regression of the second failure time on the patient's first worsening time, and the complications that the failure time in either stage may be interval censored and there may be a delay between first worsening and the start of the second stage of therapy. Four different criteria, two Bayesian and two frequentist, for selecting a best strategy are considered. The methods are applied to a trial comparing two-stage strategies for treating metastatic renal cancer, and a simulation study in the context of this trial is presented. Advantages and disadvantages of this design compared to standard methods are discussed.

  18. A new multi-motor drive system based on two-stage direct power converter

    OpenAIRE

    Kumar, Dinesh

    2011-01-01

    The two-stage AC to AC direct power converter is an alternative matrix converter topology, which offers the benefits of sinusoidal input currents and output voltages, bidirectional power flow and controllable input power factor. The absence of any energy storage devices, such as electrolytic capacitors, has increased the potential lifetime of the converter. In this research work, a new multi-motor drive system based on a two-stage direct power converter has been proposed, with two motors c...

  19. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Full Text Available Objective: The purpose of this study was to apply a two-stage screening method for large-scale intelligence screening of military conscripts. Methods: We collected 99 conscripted soldiers whose educational level was senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST and the Wechsler Adult Intelligence Scale-Revised (WAIS-R assessments. Results: Logistic regression analysis showed the conceptual level responses (CLR index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84. We used the receiver operating characteristic curve to determine the optimal cut-off point of CLR. The optimal single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Compared with the two-stage positive screening, the two-stage window screening increased the area under the curve and the positive predictive value, while its cost decreased by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and of the possibility of the WCST replacing the WAIS-R in large-scale screening for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised
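
    The decision logic of a two-stage window screen can be sketched with the cut-off points reported in the abstract (CLR 49 and 66). The mapping of the three zones to outcomes is an assumption for illustration: very low CLR is referred directly, scores inside the window get the second-stage WAIS-R, and high scores screen negative.

```python
# Decision-logic sketch; zone labels are my assumption, cutoffs from the abstract.

def window_screen(clr: float, low: float = 49, high: float = 66) -> str:
    if clr <= low:
        return "refer"           # strong first-stage evidence of ID
    if clr <= high:
        return "second-stage"    # ambiguous window: administer WAIS-R
    return "negative"            # passes the screen

results = [window_screen(c) for c in (30, 55, 80)]
print(results)
```

    Only the middle band incurs the cost of the second-stage instrument, which is where the reported 59% cost reduction comes from.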

  20. Two-Stage Conversion of Land and Marine Biomass for Biogas and Biohydrogen Production

    OpenAIRE

    Nkemka, Valentine

    2012-01-01

    The replacement of fossil fuels by renewable fuels such as biogas and biohydrogen will require efficient and economically competitive process technologies together with new kinds of biomass. A two-stage system for biogas production has several advantages over the widely used one-stage continuous stirred tank reactor (CSTR). However, it has not yet been widely implemented on a large scale. Biohydrogen can be produced in the anaerobic two-stage system. It is considered to be a useful fuel for t...

  1. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie

    2009-08-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert the recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluent consisted primarily of: acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs) each pre-acclimated to a single substrate (single substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with synthetic effluent. Energy efficiencies based on electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluent. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC

  2. The impact of alcohol marketing on youth drinking behaviour: a two-stage cohort study.

    Science.gov (United States)

    Gordon, Ross; MacKintosh, Anne Marie; Moodie, Crawford

    2010-01-01

    To examine whether awareness of, and involvement with, alcohol marketing at age 13 is predictive of initiation of drinking, frequency of drinking and units of alcohol consumed at age 15. A two-stage cohort study was conducted, with a questionnaire survey combining interview and self-completion administered in respondents' homes. Respondents were drawn from secondary schools in three adjoining local authority areas in the West of Scotland, UK. From a baseline sample of 920 teenagers (aged 12-14, mean age 13) in 2006, a cohort of 552 was followed up 2 years later (aged 14-16, mean age 15). Data were gathered on multiple forms of alcohol marketing and on measures of drinking initiation, frequency and consumption. At follow-up, logistic regression demonstrated that, after controlling for confounding variables, involvement with alcohol marketing at baseline was predictive of both uptake of drinking and increased frequency of drinking. Awareness of marketing at baseline was also associated with an increased frequency of drinking at follow-up. Our findings demonstrate an association between involvement with, and awareness of, alcohol marketing and drinking uptake or increased drinking frequency, and we consider whether the current regulatory environment affords youth sufficient protection from alcohol marketing.
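
    The analysis shape (not the study's actual data or variables) can be sketched as a logistic regression of drinking uptake at follow-up on baseline marketing involvement with a confounder adjustment; everything below is synthetic and the variable names are hypothetical.

```python
# Analysis-shape sketch on synthetic data; only the cohort size (552) is
# taken from the abstract, all effects and variables are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 552
involvement = rng.integers(0, 5, size=n)            # baseline involvement score
confounder = rng.normal(size=n)                     # e.g. peer drinking (synthetic)
logit = -1.0 + 0.5 * involvement + 0.3 * confounder
uptake = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # simulated drinking uptake

X = np.column_stack([involvement, confounder])
model = LogisticRegression().fit(X, uptake)
odds_ratio = float(np.exp(model.coef_[0][0]))       # adjusted OR per involvement point
print(f"adjusted odds ratio per involvement point: {odds_ratio:.2f}")
```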

  3. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify factors for growing SMBs. An evolutionary phase approach has been used. The study also aims to find out if there are common and different denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data was collected and analyzed. Amongst the conclusions that may be drawn from the study are that it is not easy to find a common definition of success, it is important to stratify SMBs when studying them, an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics, and each industry has its own set of success variables. The study has identified three success variables for older firms that reflect contemporary strategic thinking, such as crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  4. A Two-Stage Compression Method for the Fault Detection of Roller Bearings

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2016-01-01

    Full Text Available Data measurement for roller bearing condition monitoring is carried out based on the Shannon sampling theorem, resulting in massive amounts of redundant information, which leads to a big-data problem that increases the difficulty of roller bearing fault diagnosis. To overcome this shortcoming, a two-stage compressed fault detection strategy is proposed in this study. First, a sliding window is utilized to divide the original signals into several segments, and a selected symptom parameter is employed to represent each segment; in this way a symptom parameter wave is obtained and the raw vibration signals are compressed to a certain level while the fault information remains. Second, a fault detection scheme based on compressed sensing is applied to extract the fault features, which compresses the symptom parameter wave thoroughly with a random matrix called the measurement matrix. The experimental results validate the effectiveness of the proposed method, and a comparison of the three selected symptom parameters is also presented in this paper.
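
    The two compression stages can be sketched as follows; window length, symptom parameter (RMS here) and compression ratio are illustrative choices, not the authors' exact parameters.

```python
# Pipeline-shape sketch: (1) raw signal -> symptom-parameter wave via
# sliding windows; (2) symptom wave -> random projections (compressed sensing).
import numpy as np

rng = np.random.default_rng(1)
fs = 10_000                                  # sample rate, Hz (illustrative)
t = np.arange(2 * fs) / fs
signal = np.sin(2 * np.pi * 157 * t) + 0.3 * rng.normal(size=t.size)

# Stage 1: RMS over non-overlapping windows of 500 samples.
win = 500
frames = signal[: signal.size // win * win].reshape(-1, win)
symptom_wave = np.sqrt((frames ** 2).mean(axis=1))   # length N

# Stage 2: M random projections of the N-sample symptom wave (M << N).
N = symptom_wave.size
M = N // 4
Phi = rng.normal(size=(M, N)) / np.sqrt(M)           # measurement matrix
y = Phi @ symptom_wave                               # compressed measurements
print(f"raw {signal.size} -> symptom {N} -> compressed {M}")
```

    Recovering the symptom wave from `y` would require a sparse-reconstruction solver (e.g. basis pursuit), which is outside this sketch.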

  5. Two-stage biofilter for effective NH3 removal from waste gases containing high concentrations of H2S.

    Science.gov (United States)

    Chung, Ying-Chien; Ho, Kuo-Ling; Tseng, Ching-Ping

    2007-03-01

    A high H2S concentration inhibits nitrification when H2S and NH3 are treated simultaneously in a single biofilter. To improve NH3 removal from waste gases containing concentrated H2S, a two-stage biofilter was designed. In this study, the first biofilter, inoculated with Thiobacillus thioparus, was intended mainly to remove H2S and to reduce the effect of H2S concentration on nitrification in the second biofilter; the second biofilter, inoculated with Nitrosomonas europaea, was to remove NH3. Extensive studies, which took into account the characteristics of gas removal, the engineering properties of the two biofilters, and biological parameters, were conducted over a 210-day operation. The results showed that an average 98% removal efficiency for H2S and a 100% removal efficiency for NH3 (empty bed retention time = 23-180 sec) were achieved after 70 days. The maximum degradation rate for NH3 was measured as 2.35 g N day(-1) kg of dry granular activated carbon(-1). Inhibition of nitrification was not found in the biofilter. This two-stage biofilter also exhibited good adaptability to shock loading and shutdown periods. Analysis of metabolic products and observation of the bacterial community revealed no obvious acidification or alkalinization phenomena. In addition, a low moisture content (approximately 40%) sufficient for microbial survival and a low pressure drop (average 24.39 mm H2O m(-1)) for system operation demonstrated that the two-stage biofilter was energy-saving and economical. Thus, the two-stage biofilter is a feasible system for enhancing NH3 removal in the presence of high concentrations of H2S.

  6. A farm-scale pilot plant for biohydrogen and biomethane production by two-stage fermentation

    Directory of Open Access Journals (Sweden)

    R. Oberti

    2013-09-01

    Full Text Available Hydrogen is considered one of the main possible energy carriers for the future, thanks to its unique environmental properties. Indeed, its energy content (120 MJ/kg) can be exploited virtually without emitting any exhaust into the atmosphere except water. Hydrogen can be produced renewably through the same common biological processes on which anaerobic digestion relies, a well-established technology in use at farm scale for treating different biomasses and residues. Although two-stage hydrogen- and methane-producing fermentation is a simple variant of traditional anaerobic digestion, it is a relatively new approach studied mainly at laboratory scale. It is based on biomass fermentation in two separate, sequential stages, each maintaining conditions optimized to promote specific bacterial consortia: hydrogen is produced in the first, acidophilic reactor, while the volatile-fatty-acid-rich effluent is sent to the second reactor, where traditional methane-rich biogas production is accomplished. A two-stage pilot-scale plant was designed, manufactured and installed at the experimental farm of the University of Milano and operated on a biomass mixture of livestock effluents mixed with sugar/starch-rich residues (rotten fruits and potatoes and expired fruit juices), a feedstock mixture based on waste biomasses directly available in the rural area where the plant is installed. The hydrogenic and methanogenic reactors, both of CSTR type, had total volumes of 0.7 m3 and 3.8 m3 respectively, were operated in thermophilic conditions (55 ± 2 °C) without any external pH control, and were fully automated. After a brief description of the requirements of the system, this contribution gives a detailed description of its components and of the engineering solutions to the problems encountered during plant realization and start-up. The paper also discusses the results obtained in a first experimental run, which led to production in the range of previous

  7. An evaluation of a two-stage spiral processing ultrafine bituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Matthew D. Benusa; Mark S. Klima [Penn State University, University Park, PA (United States). Energy and Mineral Engineering

    2008-10-15

    Testing was conducted to evaluate the performance of a multistage Multotec SX7 spiral concentrator treating ultrafine bituminous coal. This spiral mimics a two-stage separation in that the refuse is removed after four turns, and the clean coal and middlings are repulped (without water addition) and then separated in the final three turns. Feed samples were collected from the spiral circuit of a coal cleaning plant located in southwestern Pennsylvania. The samples consisted of undeslimed cyclone feed (nominal -0.15 mm) and deslimed spiral feed (nominal 0.15 x 0.053 mm). Testing was carried out to investigate the effects of slurry flow rate and solids concentration on spiral performance. Detailed size and ash analyses were performed on the spiral feed and product samples. For selected tests, float-sink and sulfur analyses were performed. In nearly all cases, ash reduction occurred down to approximately 0.025 mm, with some sulfur reduction occurring even in the -0.025 mm interval. The separation of the +0.025 mm material was not significantly affected by the presence of the -0.025 mm material when treating the undeslimed feed. The -0.025 mm material split in approximately the same ratio as the slurry, and the majority of the water traveled to the clean coal stream. This split ultimately increased the overall clean coal ash value. A statistical analysis determined that both flow rate and solids concentration affected the clean coal ash value and yield, though the flow rate had a greater effect on the separation. 23 refs.

  8. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model an urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is designed to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the number of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
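
    The mean sojourn time of the M/M/1→M/D/1 tandem can be sketched with standard queueing formulas (the rates below are illustrative, not from the paper). By Burke's theorem the departure stream of an M/M/1 queue is Poisson, so the second stage can indeed be treated as M/D/1, with its waiting time given by the Pollaczek-Khinchine formula.

```python
# Standard-formula sketch; arrival/service rates are illustrative.

def mm1_sojourn(lam: float, mu: float) -> float:
    """Mean time in an M/M/1 queue: W = 1 / (mu - lam)."""
    assert lam < mu
    return 1.0 / (mu - lam)

def md1_sojourn(lam: float, mu: float) -> float:
    """Mean time in an M/D/1 queue: service 1/mu plus Pollaczek-Khinchine wait."""
    assert lam < mu
    rho = lam / mu
    wait = rho / (2.0 * mu * (1.0 - rho))
    return 1.0 / mu + wait

lam = 0.8        # stormwater arrival rate (batches per unit time)
total = mm1_sojourn(lam, mu=1.2) + md1_sojourn(lam, mu=1.0)
print(f"overall mean sojourn time: {total:.3f}")
```

    A genetic algorithm such as the paper's would evaluate this sojourn time, alongside construction and operation cost, for each candidate drainage-system layout.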

  9. New Grapheme Generation Rules for Two-Stage Modelbased Grapheme-to-Phoneme Conversion

    Directory of Open Access Journals (Sweden)

    Seng Kheang

    2015-01-01

    Full Text Available The precise conversion of arbitrary text into its corresponding phoneme sequence (grapheme-to-phoneme or G2P conversion is implemented in speech synthesis and recognition, pronunciation learning software, spoken term detection and spoken document retrieval systems. Because the quality of this module plays an important role in the performance of such systems and many problems regarding G2P conversion have been reported, we propose a novel two-stage model-based approach, which is implemented using an existing weighted finite-state transducer-based G2P conversion framework, to improve the performance of the G2P conversion model. The first-stage model is built for automatic conversion of words to phonemes, while the second-stage model utilizes the input graphemes and output phonemes obtained from the first stage to determine the best final output phoneme sequence. Additionally, we designed new grapheme generation rules, which enable extra detail for the vowel and consonant graphemes appearing within a word. When compared with previous approaches, the evaluation results indicate that our approach using rules focusing on the vowel graphemes slightly improved the accuracy of the out-of-vocabulary dataset and consistently increased the accuracy of the in-vocabulary dataset.

  10. Cooperative Hierarchical PSO With Two Stage Variable Interaction Reconstruction for Large Scale Optimization.

    Science.gov (United States)

    Ge, Hongwei; Sun, Liang; Tan, Guozhen; Chen, Zheng; Chen, C L Philip

    2017-09-01

    Large scale optimization problems arise in diverse fields. Decomposing the large scale problem into small scale subproblems regarding the variable interactions and optimizing them cooperatively are critical steps in an optimization algorithm. To explore the variable interactions and perform the problem decomposition tasks, we develop a two stage variable interaction reconstruction algorithm. A learning model is proposed to explore part of the variable interactions as prior knowledge. A marginalized denoising model is proposed to construct the overall variable interactions using the prior knowledge, with which the problem is decomposed into small scale modules. To optimize the subproblems and relieve premature convergence, we propose a cooperative hierarchical particle swarm optimization framework, where the operators of contingency leadership, interactional cognition, and self-directed exploitation are designed. Finally, we conduct theoretical analysis for further understanding of the proposed algorithm. The analysis shows that the proposed algorithm can guarantee converging to the global optimal solutions if the problems are correctly decomposed. Experiments are conducted on the CEC2008 and CEC2010 benchmarks. The results demonstrate the effectiveness, convergence, and usefulness of the proposed algorithm.
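
    The interaction-detection idea underlying such decompositions can be illustrated with the standard finite-difference check (my illustration; the paper's learning and marginalized-denoising models are more elaborate): variables i and j interact if perturbing x_i changes the effect of perturbing x_j.

```python
# Finite-difference non-separability check between two variables of f.
import numpy as np

def interacts(f, x, i, j, delta=1.0, tol=1e-9) -> bool:
    """True if the cross-difference of f around x in directions i, j is nonzero."""
    ei = np.zeros_like(x); ei[i] = delta
    ej = np.zeros_like(x); ej[j] = delta
    cross = f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)
    return bool(abs(cross) > tol)

# f is additively separable in x0 vs (x1, x2): x1 and x2 interact, x0 does not.
f = lambda x: x[0] ** 2 + (x[1] * x[2]) ** 2
x0 = np.ones(3)
print(interacts(f, x0, 0, 1), interacts(f, x0, 1, 2))
```

    Grouping variables by such checks yields the small-scale modules that the cooperative optimizer then handles separately.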

  11. Autothermal two-stage gasification of low-density waste-derived fuels

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, Stefan [Universitaet Siegen, Institut fuer Energietechnik, Paul-Bonatz-Str. 9-11, D-57068 Siegen (Germany); Hasselbach, Holger [Universitaet Siegen, Institut fuer Energietechnik, Paul-Bonatz-Str. 9-11, D-57068 Siegen (Germany); Weil, Steffen [Universitaet Siegen, Institut fuer Energietechnik, Paul-Bonatz-Str. 9-11, D-57068 Siegen (Germany); Krumm, Wolfgang [Universitaet Siegen, Institut fuer Energietechnik, Paul-Bonatz-Str. 9-11, D-57068 Siegen (Germany)]. E-mail: w.krumm@et.mb.uni-siegen.de

    2007-02-15

    In order to increase the efficiency of waste utilization in thermal conversion processes, pre-treatment is advantageous. With the Herhof Stabilat® process, residual domestic waste is upgraded to waste-derived fuel by means of biological drying and mechanical separation of inerts and metals. The dried and homogenized waste-derived Stabilat® fuel has a relatively high calorific value and contains high volatile matter, which makes it suitable for gasification. As a result of extensive mechanical treatment, the Stabilat® produced is of a fluffy appearance with a low density. A two-stage gasifier, based on a parallel-arranged bubbling fluidized bed and a fixed bed reactor, has been developed to convert Stabilat® into hydrogen-rich product gas. This paper focuses on the design and construction of the configured laboratory-scale gasifier and experience with its operation. The processing of low-density fluffy waste-derived fuel using small-scale equipment demands special technical solutions for the core components as well as for the peripheral equipment. These are discussed here. The operating results of Stabilat® gasification are also presented.

  12. A two-stage visual tracking algorithm using dual-template

    Directory of Open Access Journals (Sweden)

    Yu Xia

    2016-10-01

    Full Text Available Template matching and updating are crucial steps in visual object tracking. In this article, we propose a two-stage object tracking algorithm using a dual template. In the first stage, the initial state of a target is estimated using a prior fixed template within a particle-filter-based tracking framework. The use of the prior template maintains the stability of the tracking algorithm, because it consists of invariant and important features. In the second stage, mean shift is used to obtain the optimal location of the object with the stage-update template. The stage template improves target recognition through a classified update method. The complementarity of the dual templates improves the quality of template matching and the performance of object tracking. Experimental results demonstrate that the proposed algorithm improves tracking performance in terms of accuracy and robustness, and it exhibits good results in the presence of deformation, noise and occlusion.

  13. TWO-STAGE PRODUCTION SCHEDULING WITH AN OPTION OF OUTSOURCING FROM A REMOTE SUPPLIER

    Institute of Scientific and Technical Information of China (English)

    Xiangtong QI

    2009-01-01

    This paper studies a two-stage production system with n job orders where each job needs two sequential operations. In addition to the two in-house production facilities, the manufacturer has another option of outsourcing some stage-one operations to a remote outside supplier. The jobs with their stage-one operations outsourced are subject to a batch transportation delay from the outside supplier before their respective stage-two operations can be started in-house. The problem is to design an integrated schedule that considers both the in-house production and the outsourcing with the aim of optimally balancing the outsourcing cost and the makespan. The problem is NP-hard. We have developed an optimal algorithm and a heuristic algorithm to solve the problem, and conducted computational experiments to validate our model and algorithms. Our modeling and algorithm framework can be extended to handle other more general cases such as when the outside supplier has a production facility with a different processing efficiency and when there are many outside suppliers on a spot market.
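
For intuition, the outsourcing-versus-makespan tradeoff can be illustrated with a toy brute-force model. This is a sketch under strong simplifying assumptions, not the authors' algorithm: all outsourced stage-one jobs are assumed to arrive together after a fixed transport delay, the in-house shop is a two-machine flow shop, and the stage-two machine simply processes jobs in order of availability. All names and numbers are invented.

```python
from itertools import combinations

def makespan(jobs, outsourced, transport_delay):
    """Completion time of all stage-2 operations. `jobs` is a list of
    (stage1_time, stage2_time); outsourced stage-1 jobs become available
    for stage 2 only after the batch transport delay."""
    t, ready = 0.0, {}
    for j, (a, _b) in enumerate(jobs):
        if j in outsourced:
            ready[j] = transport_delay      # arrives with the batch
        else:
            t += a                          # sequential in-house stage 1
            ready[j] = t
    t2 = 0.0
    for j in sorted(ready, key=ready.get):  # stage 2: earliest-available first
        t2 = max(t2, ready[j]) + jobs[j][1]
    return t2

def best_plan(jobs, cost, transport_delay, weight=1.0):
    """Brute force over outsourcing sets: minimize makespan + weight * cost.
    Exponential in the number of jobs -- illustration only."""
    n = len(jobs)
    best_obj, best_set = float("inf"), frozenset()
    for r in range(n + 1):
        for s in combinations(range(n), r):
            s = frozenset(s)
            obj = (makespan(jobs, s, transport_delay)
                   + weight * sum(cost[j] for j in s))
            if obj < best_obj:
                best_obj, best_set = obj, s
    return best_obj, best_set
```

Even this toy version shows the structure of the problem: outsourcing frees the in-house stage-one machine but delays the released jobs by the batch transport time, and the weight trades outsourcing cost against makespan.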

  14. A Compact Two-Stage 120 W GaN High Power Amplifier for SweepSAR Radar Systems

    Science.gov (United States)

    Thrivikraman, Tushar; Horst, Stephen; Price, Douglas; Hoffman, James; Veilleux, Louise

    2014-01-01

    This work presents the design and measured results of a fully integrated, switched-power, two-stage GaN HEMT high-power amplifier (HPA) achieving 60% power-added efficiency at over 120 W output power. This high-efficiency GaN HEMT HPA is an enabling technology for L-band SweepSAR interferometric instruments that enable frequent repeat intervals and high-resolution imagery. The L-band HPA was designed using space-qualified state-of-the-art GaN HEMT technology. The amplifier exhibits over 34 dB of power gain at 51 dBm of output power across an 80 MHz bandwidth. The HPA is divided into two stages: an 8 W driver stage and a 120 W output stage. The amplifier is designed for pulsed operation, with a high-speed DC drain switch that operates at the pulse-repetition interval and settles within 200 ns. In addition to the electrical design, a thermally optimized package was designed that allows direct thermal radiation to maintain low junction temperatures in the GaN parts, maximizing long-term reliability. Lastly, real radar waveforms are characterized, and analysis of amplitude and phase stability demonstrates ultra-stable operation over temperature: integrated bias-compensation circuitry holds amplitude variation below 0.2 dB and phase variation below 2° over a 70 °C range.

  15. Mars Rover Sample Return aerocapture configuration design and packaging constraints

    Science.gov (United States)

    Lawson, Shelby J.

    1989-01-01

    This paper discusses the aerodynamic requirements and the volume and mass constraints that lead to a biconic aeroshell vehicle design protecting the Mars Rover Sample Return (MRSR) mission elements from launch to Mars landing. The aerodynamic requirements for Mars aerocapture and entry, together with packaging constraints for the MRSR elements, result in a symmetric biconic aeroshell that develops an L/D of 1.0 at a 27.0 deg angle of attack. A significant problem in the study is obtaining a center of gravity (cg) that provides adequate aerodynamic stability and performance within the mission-imposed constraints. Packaging methods that relieve the cg problems include forward placement of the aeroshell propellant tanks and incorporating aeroshell structure as lander structure. The MRSR missions developed during the pre-phase A study are discussed, with dimensional and mass data included. Further study is needed for some missions to minimize MRSR element volume so that launch mass constraints can be met.

  16. Sparsely Sampling the Sky: A Bayesian Experimental Design Approach

    CERN Document Server

    Paykari, P

    2012-01-01

    The next generation of galaxy surveys will observe millions of galaxies over large volumes of the universe. These surveys are expensive in both time and cost, raising questions regarding the optimal investment of this time and money. In this work we investigate criteria for selecting among observing strategies for constraining the galaxy power spectrum and a set of cosmological parameters. Depending on the parameters of interest, it may be more efficient to observe a larger, but sparsely sampled, area of sky instead of a smaller contiguous area. Using the principles of Bayesian experimental design, we investigate the advantages and disadvantages of sparse sampling of the sky and discuss the circumstances in which a sparse survey is indeed the most efficient strategy. For the Dark Energy Survey (DES), we find that by sparsely observing the same area in a smaller amount of time, we only increase the errors on the parameters by a maximum of 0.45%. Conversely, investing the sam...

  17. Effects-Driven Participatory Design: Learning from Sampling Interruptions.

    Science.gov (United States)

    Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten; Karasti, Helena; Simonsen, Jesper

    2017-01-01

    Participatory design (PD) can play an important role in obtaining benefits from healthcare information technologies, but we contend that to fulfil this role PD must incorporate feedback from real use of the technologies. In this paper we describe an effects-driven PD approach that revolves around a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians' intra- and interdepartmental coordination. The hospital aimed to reduce the number of phone calls involved in coordinating work because many phone calls were seen as unnecessary interruptions. To learn about the interruptions we introduced an app for capturing quantitative data and qualitative feedback about the phone calls. The investigation showed that the electronic whiteboards had little potential for reducing the number of phone calls at the operating ward. The combination of quantitative data and qualitative feedback both served as a basis for aligning assumptions with data and demonstrated ESM as an instrument for triggering in-situ reflection. The participant-driven design and redesign of the way data were captured by means of ESM is a central contribution to the understanding of how to conduct effects-driven PD.

  18. End-wall boundary layer measurements in a two-stage fan

    Science.gov (United States)

    Ball, C. L.; Reid, L.; Schmidt, J. F.

    1983-01-01

    Detailed flow measurements made in the casing boundary layer of a two-stage transonic fan are summarized. These measurements were taken at stations upstream of the fan, between all blade rows, and downstream of the last blade row. At the design tip speed of 429 m/sec the fan achieved a peak efficiency of 0.846 at a pressure ratio of 2.471. The boundary layer data were obtained at three weight flows at the design speed: one near choke flow, one near peak efficiency, and one near stall. Conventional boundary layer parameters were calculated from the data measured at each measuring station for each of the three flows. A classical two-dimensional casing boundary layer was measured at the fan inlet and extended inward to approximately 15 percent of span. A highly three-dimensional boundary layer was measured at the exit of each blade row and extended inward to approximately 10 percent of span. The steep radial gradient of axial velocity noted at the exit of the rotors was reduced substantially as the flow passed through the stators. This reduced gradient is attributed to flow mixing. The amount of flow mixing was reflected in the radial redistribution of total temperature as the flow passed through the stators. The data also show overturning of the tip flow at the stator exits that is consistent with the expected effect of the secondary flow field. The blockage factors calculated from the measured data show an increase in blockage across the rotors and a decrease across the stators.
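
The "conventional boundary layer parameters" computed from such surveys are typically the displacement thickness, momentum thickness, and shape factor. In the standard compressible form (a textbook definition, not reproduced from the report) they are

```latex
\delta^* = \int_0^{\delta}\left(1-\frac{\rho u}{\rho_e u_e}\right)dy,\qquad
\theta   = \int_0^{\delta}\frac{\rho u}{\rho_e u_e}\left(1-\frac{u}{u_e}\right)dy,\qquad
H = \frac{\delta^*}{\theta},
```

where $u_e$ and $\rho_e$ are the velocity and density at the boundary layer edge and $\delta$ is the boundary layer thickness; the blockage factors mentioned above follow from $\delta^*$ integrated around the annulus.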

  19. A Reconfigurable Architecture for Rotation Invariant Multi-View Face Detection Based on a Novel Two-Stage Boosting Method

    Directory of Open Access Journals (Sweden)

    Zhengbin Pang

    2009-01-01

    Full Text Available We present a reconfigurable architecture model for rotation-invariant multi-view face detection based on a novel two-stage boosting method. A tree-structured detector hierarchy is designed to organize multiple detector nodes identifying pose ranges of faces. We propose a boosting algorithm for training the detector nodes. The strong classifier in each detector node is composed of multiple novel two-stage weak classifiers. With a shared output space of multi-component vectors, each detector node handles multi-dimensional binary classification problems. The design of the hardware architecture, which fully exploits spatial and temporal parallelism, is introduced in detail. We also study the reconfiguration of the architecture to find an appropriate tradeoff among hardware implementation cost, detection accuracy, and speed. Experiments on an FPGA show that high accuracy and speed are achieved compared with previous related works. The execution-time speedups range from 14.68 to 20.86 for image sizes from 160×120 up to 800×600 when our FPGA design (98 MHz) is compared with a software solution on a PC (Pentium 4, 2.8 GHz).

  20. Enhancing the hydrolysis process of a two-stage biogas technology for the organic fraction of municipal solid waste

    DEFF Research Database (Denmark)

    Nasir, Zeeshan; Uellendahl, Hinrich

    2015-01-01

    The Danish company Solum A/S has developed a two-stage dry anaerobic digestion process labelled AIKAN® for the biological conversion of the organic fraction of municipal solid waste (OFMSW) into biogas and compost. In the AIKAN® process design the methanogenic (2nd) stage is separated from...... time, recirculation rate of percolate, ratio of admixing effluent from the anaerobic stage to the percolate, water submerge of waste) on the efficiency of the hydrolytic stage. •The effect of addition of adapted mixed cultures and specific hydrolytic microorganisms on the hydrolysis of the waste. •The...

  1. Two-stage plasma gun based on a gas discharge with a self-heating hollow emitter.

    Science.gov (United States)

    Vizir, A V; Tyunkov, A V; Shandrikov, M V; Oks, E M

    2010-02-01

    The paper presents the results of tests of a new compact two-stage bulk gas plasma gun. The plasma gun is based on a nonself-sustained gas discharge with an electron emitter based on a discharge with a self-heating hollow cathode. The operating characteristics of the plasma gun are investigated. The discharge system makes it possible to produce uniform and stable gas plasma in the dc mode with a plasma density up to 3×10⁹ cm⁻³ at an operating gas pressure in the vacuum chamber of less than 2×10⁻² Pa. The device features high power efficiency, design simplicity, and compactness.

  2. Method of oxygen-enriched two-stage underground coal gasification

    Institute of Scientific and Technical Information of China (English)

    Liu Hongtao; Chen Feng; Pan Xia; Yao Kai; Liu Shuqin

    2011-01-01

    Two-stage underground coal gasification was studied to improve the calorific value of the syngas and to extend gas production times. A model test using the oxygen-enriched two-stage coal gasification method was carried out. The composition of the gas produced, the time ratio of the two stages, and the role of the temperature field were analysed. The results show that oxygen-enriched two-stage gasification shortens the time of the first stage and prolongs the time of the second stage. Feed oxygen concentrations of 30%, 35%, 40%, 45%, 60%, and 80% gave time ratios (first stage to second stage) of 1:0.12, 1:0.21, 1:0.51, 1:0.64, 1:0.90, and 1:4.0, respectively. Cooling rates of the temperature field after steam injection decreased with time, from about 19.1-27.4 ℃/min to 2.3-6.8 ℃/min, but increased with increasing oxygen concentration in the first stage. The calorific value of the syngas improves with increased oxygen concentration in the first stage. Injection of 80% oxygen-enriched air gave the gas with the highest calorific value and also the longest production time. The calorific value of the gas obtained by the oxygen-enriched two-stage gasification method lies in the range of 5.31 MJ/Nm³ to 10.54 MJ/Nm³.

  3. 13 K thermally coupled two-stage Stirling-type pulse tube refrigerator

    Institute of Scientific and Technical Information of China (English)

    TANG Ke; CHEN Guobang; THUMMES Günter

    2005-01-01

    Stirling-type pulse tube refrigerators have attracted academic and commercial interest in recent years due to their more compact configuration and higher efficiency than those of G-M type pulse tube refrigerators. In order to achieve a no-load cooling temperature below 20 K, a thermally coupled two-stage Stirling-type pulse tube refrigerator has been built. The thermally coupled arrangement was expected to minimize the interference between the two stages and to simplify the adjustment and optimization of the phase shifters. A no-load cooling temperature of 14.97 K has been realized with the two-stage cooler driven by one linear compressor of 200 W electric input. When the two stages are driven by two compressors respectively, with total electric input of 400 W, the prototype has attained a no-load cooling temperature of 12.96 K, which is the lowest temperature ever reported with two-stage Stirling-type pulse tube refrigerators.

  4. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression-making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid-mesial, distal, buccal, and lingual (MDBL) regions with a stereomicroscope using a standard method. Results. The results of the independent t-test showed that the mean marginal gap obtained with the one-stage impression technique was higher than that of the two-stage technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was found between the two techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage technique.

  5. Two-stage open-loop velocity compensating method applied to multi-mass elastic transmission system

    Directory of Open Access Journals (Sweden)

    Zhang Deli

    2014-02-01

    Full Text Available In this paper, a novel vibration-suppression open-loop control method for a multi-mass system is proposed, which uses a two-stage velocity compensating algorithm and a fuzzy I + P controller. The compensating method is based on model-based control theory and provides a damping effect on the mechanical part of the system. The mathematical model of the multi-mass system is built and reduced in order to estimate the velocities of the masses, and the velocity difference between adjacent masses is calculated dynamically. A 3-mass system is regarded as the composition of two 2-mass systems in order to realize the two-stage compensating algorithm. Instead of using a typical PI controller in the velocity compensating loop, a fuzzy I + P controller is designed, and its input variables are chosen according to their impact on the system, which differs from conventional fuzzy PID controller design rules. Simulations and experimental results show that the proposed velocity compensating method is effective in suppressing vibration in a 3-mass system, and performance improves further when the designed fuzzy I + P controller is used in the control system.
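
The distinguishing feature of an I + P controller is that the proportional term acts on the measured output rather than on the error, which removes the setpoint kick of a conventional PI loop. A minimal sketch on a hypothetical first-order velocity loop follows; the fuzzy gain adaptation and the multi-mass compensator from the paper are omitted, and all gains and plant numbers are invented for illustration.

```python
class IPController:
    """I + P controller: the integral term acts on the error, while the
    proportional term acts on the measured output only (no setpoint kick)."""
    def __init__(self, ki, kp, dt):
        self.ki, self.kp, self.dt = ki, kp, dt
        self.acc = 0.0                       # integral accumulator

    def update(self, ref, meas):
        self.acc += self.ki * (ref - meas) * self.dt
        return self.acc - self.kp * meas     # P acts on feedback, not error

def simulate(steps=4000, dt=1e-3, ref=1.0):
    """Toy plant: first-order velocity dynamics v' = (u - d*v) / J,
    with hypothetical inertia J = 0.1 and damping d = 0.5."""
    ctrl = IPController(ki=8.0, kp=2.0, dt=dt)
    v = 0.0
    for _ in range(steps):
        u = ctrl.update(ref, v)
        v += dt * (u - 0.5 * v) / 0.1
    return v
```

With these gains the closed loop is overdamped and the velocity settles at the reference; in the paper's scheme the two compensating stages would additionally inject damping terms computed from the estimated inter-mass velocity differences.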

  6. Effects of earthworm casts and zeolite on the two-stage composting of green waste.

    Science.gov (United States)

    Zhang, Lu; Sun, Xiangyang

    2015-05-01

    Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90-270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  7. Two-Stage Revision Anterior Cruciate Ligament Reconstruction: Bone Grafting Technique Using an Allograft Bone Matrix.

    Science.gov (United States)

    Chahla, Jorge; Dean, Chase S; Cram, Tyler R; Civitarese, David; O'Brien, Luke; Moulton, Samuel G; LaPrade, Robert F

    2016-02-01

    Outcomes of primary anterior cruciate ligament (ACL) reconstruction have been reported to be far superior to those of revision reconstruction. However, as the incidence of ACL reconstruction is rapidly increasing, so is the number of failures. The subsequent need for revision ACL reconstruction is estimated to occur in up to 13,000 patients each year in the United States. Revision ACL reconstruction can be performed in one or two stages. A two-stage approach is recommended in cases of improper placement of the original tunnels or in cases of unacceptable tunnel enlargement. The aim of this study was to describe the technique for allograft ACL tunnel bone grafting in patients requiring a two-stage revision ACL reconstruction.

  8. Pilot plant testing of IGT's two-stage fluidized-bed/cyclonic agglomerating combustor

    Energy Technology Data Exchange (ETDEWEB)

    Rehmat, A.; Mensinger, M.C. [Institute of Gas Technology, Chicago, IL (United States); Richardson, T.L. [Environmental Protection Agency, Cincinnati, OH (United States)

    1993-12-31

    The Institute of Gas Technology (IGT) is conducting a multi-year experimental program to develop and test, through pilot-scale operation, IGT's two-stage fluidized-bed/cyclonic agglomerating combustor (AGGCOM). The AGGCOM process combines fluidized-bed agglomeration and gasification technology with cyclonic combustion technology, both of which have been developed at IGT over many years. AGGCOM is a unique and extremely flexible combustor whose fluidized-bed first stage can operate over a wide range of conditions, from low temperature (desorption) to high temperature (agglomeration), including gasification of high-energy-content wastes. The AGGCOM combustor can easily and efficiently destroy solid, liquid, and gaseous organic wastes while isolating solid inorganic contaminants within an essentially non-leachable glassy matrix suitable for disposal in ordinary landfills. Fines elutriated from the first stage are captured by a high-efficiency cyclone and returned to the fluidized bed for ultimate incorporation into the agglomerates. Intense mixing in the second-stage cyclonic combustor ensures high destruction and removal efficiencies (DRE) for organic compounds that may be present in the feed material. This paper presents an overview of the experimental development of the AGGCOM process and the progress made to date in designing, constructing, and operating the 6-ton/day AGGCOM pilot plant. Results of the bench-scale tests conducted to determine the operating conditions necessary to agglomerate a soil were presented at the 1991 Incineration Conference. On-site construction of the AGGCOM pilot plant was initiated in August 1992 and completed at the end of March 1993, with shakedown testing following immediately thereafter. The initial tests in the AGGCOM pilot plant will focus on the integrated operation of both stages of the combustor and will be conducted with "clean" topsoil.

  9. METHODOLOGY AND RESULTS OF MOBILE OBJECT PURSUIT PROBLEM SOLUTION WITH TWO-STAGE DYNAMIC SYSTEM

    Directory of Open Access Journals (Sweden)

    A. Kiselev Mikhail

    2017-01-01

    Full Text Available The experience of developing unmanned fighting vehicles indicates that the main challenge in this field reduces itself to creating systems which can replace the pilot both as a sensor and as the operator of the flight. This problem can be partially solved by introducing remote control, but there are certain flight segments where it can only be executed under fully independent control and data support due to various reasons, such as tight time, short duration, lack of robust communication, etc. Such stages include close-range air combat maneuvering (CRACM), a key flight segment as far as the fighter's purpose is concerned, which also places the highest demands on the fighter's design. Until recently the creation of an unmanned fighter airplane has been a fundamentally impossible task due to the absence of sensors able to provide the necessary data support to control the fighter during CRACM. However, the development prospects of aircraft hardware (passive-type flush antennae, optico-locating panoramic view stations) are indicative of possible solutions to this problem in the nearest future. Therefore, presently the only fundamental impediment on the way to developing an unmanned fighting aircraft is the problem of creating algorithms for automatic trajectory control during CRACM. This paper presents a strategy of automatic trajectory control synthesis by a two-stage dynamic system aiming to reach the conditions specified with respect to an object in pursuit. It contains results of an assessment of the impact of control algorithm parameters on the effectiveness of the pursuit mission. Based on the obtained results, a deduction is drawn pertaining to the efficiency of the offered method and its possible utilization in automated control of an unmanned fighting aerial vehicle as well as in organizing group interaction during CRACM.

  10. Two-Stage Multi-Objective Collaborative Scheduling for Wind Farm and Battery Switch Station

    Directory of Open Access Journals (Sweden)

    Zhe Jiang

    2016-10-01

    Full Text Available In order to deal with the uncertainties of wind power, a wind farm and an electric vehicle (EV) battery switch station (BSS) were proposed to work together as an integrated system. In this paper, the collaborative scheduling problems of such a system are studied. Considering the features of the integrated system, three operating indices are proposed: battery swapping demand curtailment of the BSS, wind curtailment of the wind farm, and generation schedule tracking of the integrated system. In addition, a two-stage multi-objective collaborative scheduling model was designed. In the first stage, a day-ahead model was built based on the theory of dependent chance programming. With the aim of maximizing the realization probabilities of these three operating indices, random fluctuations of wind power and battery switch demand were taken into account simultaneously. In order to explore the capability of the BSS as reserve, the readjustment process of the BSS within each hour was considered in this stage. In addition, the stored energy rather than the charging/discharging power of the BSS during each period was optimized, which provides the basis for hour-ahead correction of the BSS. In the second stage, an hour-ahead model was established. In order to cope with the randomness of wind power and battery swapping demand, the proposed hour-ahead model utilized ultra-short-term prediction of the wind power and the battery switch demand to schedule the charging/discharging power of the BSS in a rolling manner. Finally, the effectiveness of the proposed models was validated by case studies. The simulation results indicated that the proposed model could realize complementarity between the wind farm and the BSS, reduce the dependence on the power grid, and facilitate the accommodation of wind power.

  11. Study of high power, two-stage, TWT X-band amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Wang, P.; Golkowski, C.; Hayashi, Y.; Ivers, J.D.; Nation, J.A.; Schachter, L.

    1999-07-01

    A disk-loaded slow-wave structure with a cold (no electron beam) TM01 phase velocity greater than the speed of light (1.05c) is used as the electron bunching stage of a two-stage X-band amplifier. The high-phase-velocity section produces well-defined electron bunches. The second section, where the cold phase velocity is 0.84c, i.e., less than the beam velocity of 0.91c, is used to generate the high-output-power microwave radiation. The tightly bunched beam from the high-phase-velocity section enhances the conversion of beam energy into microwave radiation compared to that obtained with synchronous electron-wave bunching. The amplifier is driven by a 7 mm diameter, 750 kV, 500 A pencil electron beam. The structure, which has a 4 GHz bandwidth, produces an amplified output with a power in the range of 20-60 MW. At higher output powers (>60 MW) pulse shortening develops. The authors suspect that the pulse shortening is a result of excitation of the hybrid mode HEM11, which overlaps (about 0.5 GHz separation) with the frequency domain of the desired TM01 mode. A new amplifier with similar phase-velocity characteristics but with a 1 GHz bandwidth and an HEM11-TM01 mode frequency separation of 3.3 GHz has been designed and constructed. The interaction frequency for the HEM mode is above the passband of the TM mode. Testing is in progress. The performance of the new amplifier will be compared with results obtained using the earlier configuration.

  12. The influence of Thomson effect in the performance optimization of a two stage thermoelectric cooler

    Science.gov (United States)

    Kaushik, S. C.; Manikandan, S.

    2015-12-01

    The exoreversible and irreversible thermodynamic models of a two-stage thermoelectric cooler (TTEC), considering the Thomson effect in conjunction with the Peltier, Joule, and Fourier heat conduction effects, have been investigated using exergy analysis. New expressions are derived for the interstage temperature, the optimum currents for the maximum cooling power, energy-efficiency, and exergy-efficiency conditions, and the energy and exergy efficiencies of a TTEC. The numbers of thermocouples in the first and second stages of a TTEC are optimized for the maximum cooling power, energy-efficiency, and exergy-efficiency conditions. The results show that the exergy efficiency is lower than the energy efficiency: e.g., in an irreversible TTEC with 30 thermocouples in total, a heat sink temperature (TH) of 300 K, and a heat source temperature (TC) of 280 K, the maximum cooling power, maximum energy efficiency, and maximum exergy efficiency obtained are 20.37 W, 0.7147, and 5.10%, respectively. It has been found that the Thomson effect increases the cooling power and energy efficiency of the TTEC system: e.g., in the exoreversible TTEC the cooling power and energy efficiency increased from 14.87 W to 16.36 W and from 0.4079 to 0.4998, respectively, for a ΔTC of 40 K when the Thomson effect is considered. It has also been found that the heat transfer area on the hot side of an irreversible TTEC should be larger than that on the cold side for maximum-performance operation. This study will help in the design of actual multistage thermoelectric cooling systems.
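
For a single stage, a common way to include the Thomson effect is the textbook formulation in which the Thomson heat is split equally between the junctions (this is a standard form, not reproduced from the paper):

```latex
\begin{aligned}
\dot{Q}_C &= \alpha T_C I - \tfrac{1}{2} I^2 R - K\,\Delta T + \tfrac{1}{2}\,\tau I\,\Delta T,\\
P &= \dot{Q}_H - \dot{Q}_C = \alpha\,\Delta T\, I + I^2 R - \tau I\,\Delta T,\\
\mathrm{COP} &= \dot{Q}_C / P,
\end{aligned}
```

where $\alpha$, $R$, $K$, and $\tau$ are the Seebeck coefficient, electrical resistance, thermal conductance, and Thomson coefficient of the couple, and $\Delta T = T_H - T_C$. The $+\tfrac{1}{2}\tau I\,\Delta T$ term in $\dot{Q}_C$ is what raises the cooling power and efficiency when the Thomson effect is included, consistent with the trend reported above.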

  13. Methane production from sweet sorghum residues via a two-stage process

    Energy Technology Data Exchange (ETDEWEB)

    Stamatelatou, K.; Dravillas, K.; Lyberatos, G. [University of Patras (Greece). Department of Chemical Engineering, Laboratory of Biochemical Engineering and Environmental Technology

    2003-07-01

    The start-up of a two-stage reactor configuration for the anaerobic digestion of sweet sorghum residues was evaluated. The sweet sorghum residues were a waste stream originating from the alcoholic fermentation of sweet sorghum and the subsequent distillation step. This waste stream contained a high concentration of solid matter (9% TS) and thus could be characterized as a semi-solid, not easily biodegradable wastewater with a high COD (115 g/l). The application of the proposed two-stage configuration (consisting of one thermophilic hydrolyser and one mesophilic methaniser) achieved a methane production of 16 l/l wastewater at a hydraulic retention time of 19 d. (author)

  14. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available This paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, together with the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is presented first and discussed. Two-stage penile urethroplasty is then reported, with a detailed description of first-stage urethroplasty according to the Johanson technique. A second-stage urethroplasty using a buccal mucosa graft and glue is presented. Finally, the postoperative course and follow-up are addressed.

  15. Terephthalic acid wastewater treatment by using two-stage aerobic process

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Based on tests comparing anoxic and aerobic processes, a two-stage aerobic process with a biological selector was chosen to treat terephthalic acid (PTA) wastewater. By adopting the two-stage aerobic process, the CODCr in PTA wastewater could be reduced from 4000-6000 mg/L to below 100 mg/L; the COD loading in the first aerobic tank could reach 7.0-8.0 kg CODCr/(m³·d) and that of the second stage was 0.2 to 0.4 kg CODCr/(m³·d). Further research on the kinetics of substrate degradation was carried out.

  16. Design, construction, and measurement of a dielectric single-mirror two-stage (DSMTS) photovoltaic concentrator

    Science.gov (United States)

    Mohedano Arroyo, Ruben; Benitez, Pablo; Minano, Juan C.; Bercero, Francisco; Lobato, Pablo

    2001-11-01

    The 30X DSMTS is a trough-like photovoltaic concentrator, meant to track the sun in one axis, which has a mirror accommodating two concentration stages and a secondary lens that increases its acceptance to ±2.3°. Given that the sun subtends an angle of ±0.256°, such acceptance seems excessive; thanks to it, however, we can relax requirements that often demand high accuracy in systems of this kind. For instance, the shape of the mirror can be achieved by simply bending an aluminum sheet. To foresee the results we might expect of this strategy, we carried out mechanical calculations whose results are the boundary conditions that lead to a minimum standard deviation of the local slope of the elastic mirror with respect to the theoretical value. We checked by ray tracing that such an error actually causes only a small decrease in acceptance. This persuaded us to manufacture two elastic prototypes. In the tests performed so far with them, we achieved an acceptance angle of ±1.63° and a collection efficiency of 98 percent at a geometric concentration of 30 times, results that can be considered outstanding in the photovoltaics framework.

  17. A Two-Stage Intercontinental Ballistic Missile (ICBM) Design Optimization Study and Life Cycle Cost Analysis

    Science.gov (United States)

    1992-12-01

    dollars. 6.5 Summary of Results - The System Performance Matrix The results of the NEMESIS systems engineering study are summarized in Table 6.12. Note...Edition. Boston, MA: PWS Engineering, 1984. 43. Hall, A.D. "Three-Dimensional Morphology of Systems Engineering." IEEE Transactions on Systems, Science and

  18. A GRASP model in network design for two-stage supply chain

    Directory of Open Access Journals (Sweden)

    Hassan Javanshir

    2011-04-01

    Full Text Available We consider a capacitated facility location problem (CFLP) comprising a production facility and distribution centers (DCs) supplying retailers' demand. The primary purpose is to locate distribution centers in the network, and the objective is to minimize the sum of fixed facility location, pipeline inventory, safety stock, and lost-sales costs. We use a greedy randomized adaptive search procedure (GRASP) to solve the model. The preliminary results indicate that the proposed method could provide competitive results in a reasonable amount of time.
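    The GRASP metaheuristic named in the record alternates a greedy randomized construction with local search. Below is a minimal sketch for a simplified, uncapacitated variant of the location problem (hypothetical costs, a restricted candidate list controlled by `alpha`); it illustrates the GRASP pattern, not the authors' implementation.

```python
import random

def grasp_uflp(fixed_costs, assign_costs, iters=50, alpha=0.3, seed=1):
    """GRASP sketch for a simple uncapacitated facility location problem.

    fixed_costs[j]: cost of opening DC j
    assign_costs[i][j]: cost of serving retailer i from DC j
    """
    rng = random.Random(seed)
    n_dc = len(fixed_costs)

    def total_cost(open_dcs):
        cost = sum(fixed_costs[j] for j in open_dcs)
        for row in assign_costs:
            cost += min(row[j] for j in open_dcs)  # each retailer uses its cheapest open DC
        return cost

    best, best_cost = None, float("inf")
    for _ in range(iters):
        # Greedy randomized construction: seed with a random DC, then add
        # DCs drawn from a restricted candidate list (RCL) while improving.
        sol = {rng.randrange(n_dc)}
        while len(sol) < n_dc:
            cur = total_cost(sol)
            gains = [(total_cost(sol | {j}) - cur, j)
                     for j in range(n_dc) if j not in sol]
            improving = [g for g in gains if g[0] < 0]
            if not improving:
                break
            lo = min(g for g, _ in improving)
            hi = max(g for g, _ in improving)
            rcl = [j for g, j in improving if g <= lo + alpha * (hi - lo)]
            sol.add(rng.choice(rcl))
        # Local search: toggle one DC open/closed while it lowers the cost.
        improved = True
        while improved:
            improved = False
            for j in range(n_dc):
                cand = sol ^ {j}
                if cand and total_cost(cand) < total_cost(sol):
                    sol, improved = cand, True
        cost = total_cost(sol)
        if cost < best_cost:
            best, best_cost = set(sol), cost
    return best, best_cost

fixed = [10.0, 10.0, 100.0]                       # hypothetical DC opening costs
assign = [[1, 10, 50], [10, 1, 50], [1, 10, 50]]  # hypothetical retailer-to-DC costs
print(grasp_uflp(fixed, assign))
```

    The `alpha` parameter trades off greediness (0) against pure randomness (1) in the candidate list, which is the defining GRASP design choice.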

  19. First Law Analysis of a Two-stage Ejector-vapor Compression Refrigeration Cycle working with R404A

    National Research Council Canada - National Science Library

    Feiza Memet; Daniela-Elena Mitu

    2011-01-01

    The traditional two-stage vapor compression refrigeration cycle might be replaced by a two-stage ejector-vapor compression refrigeration cycle if the aim is to decrease irreversibility during expansion...

  20. SAS procedures for designing and analyzing sample surveys

    Science.gov (United States)

    Stafford, Joshua D.; Reinecke, Kenneth J.; Kaminski, Richard M.

    2003-01-01

    Complex surveys often are necessary to estimate occurrence (or distribution), density, and abundance of plants and animals for purposes of research and conservation. Most scientists are familiar with simple random sampling, where sample units are selected from a population of interest (sampling frame) with equal probability. However, the goal of ecological surveys often is to make inferences about populations over large or complex spatial areas where organisms are not homogeneously distributed or sampling frames are inconvenient or impossible to construct. Candidate sampling strategies for such complex surveys include stratified, multistage, and adaptive sampling (Thompson 1992, Buckland 1994).
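    As an illustration of the stratified strategy mentioned in the abstract, here is a minimal Python sketch (not tied to the SAS procedures the record describes) that estimates a population mean by weighting per-stratum sample means by stratum size; the habitat frame and values are hypothetical.

```python
import random

def stratified_estimate(strata, n_per_stratum, seed=0):
    """Stratified random sampling sketch: sample within each stratum and
    combine the per-stratum means, weighted by stratum share of the frame.

    strata: dict mapping stratum name -> list of unit values (the frame).
    """
    rng = random.Random(seed)
    N = sum(len(units) for units in strata.values())
    est = 0.0
    for name, units in strata.items():
        sample = rng.sample(units, n_per_stratum)   # SRS within the stratum
        ybar = sum(sample) / len(sample)
        est += (len(units) / N) * ybar              # weight by stratum share
    return est

# Hypothetical frame: plot densities in two habitat strata.
frame = {"wetland": [12, 14, 15, 11, 13, 16, 12, 14],
         "upland":  [2, 3, 1, 2, 4, 3, 2, 3]}
print(round(stratified_estimate(frame, n_per_stratum=3), 2))
```

    Because units are sampled with equal probability only within strata, the weighting step is what keeps the estimator unbiased when strata differ in size.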

  1. Statistical power and optimal design in experiments in which samples of participants respond to samples of stimuli.

    Science.gov (United States)

    Westfall, Jacob; Kenny, David A; Judd, Charles M

    2014-10-01

    Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
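    The paper's central point, that power in crossed designs plateaus as participants increase because stimulus variability remains, can be reproduced with a simple Monte Carlo sketch. The variance components below are illustrative choices, and the analysis is a plain "by-stimulus" t-test on per-stimulus means, not the authors' power application.

```python
import numpy as np
from scipy import stats

def power_crossed(n_part, n_stim, d=0.5, sd_part=0.5, sd_stim=0.5,
                  sd_err=1.0, n_sim=2000, alpha=0.05, seed=0):
    """Monte Carlo power for a stimuli-between-conditions crossed design,
    analysed with a two-sample t-test on per-stimulus means."""
    rng = np.random.default_rng(seed)
    cond = np.repeat([0.0, 1.0], n_stim // 2)       # stimulus condition codes
    hits = 0
    for _ in range(n_sim):
        u = rng.normal(0, sd_part, n_part)[:, None]  # participant intercepts
        v = rng.normal(0, sd_stim, n_stim)[None, :]  # stimulus intercepts
        e = rng.normal(0, sd_err, (n_part, n_stim))  # residual noise
        y = d * cond[None, :] + u + v + e            # participants x stimuli
        m = y.mean(axis=0)                           # per-stimulus means
        _, p = stats.ttest_ind(m[cond == 1], m[cond == 0])
        hits += p < alpha
    return hits / n_sim

# Power does not approach 1 as participants grow: the stimulus sample
# still limits it, as the paper argues.
print(power_crossed(n_part=20, n_stim=16))
print(power_crossed(n_part=200, n_stim=16))
```

    Increasing `n_stim` rather than `n_part` raises the attainable ceiling, which mirrors the paper's design advice.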

  2. Overcoming the bottlenecks of anaerobic digestion of olive mill solid waste by two-stage fermentation.

    Science.gov (United States)

    Stoyanova, Elitza; Lundaa, Tserennyam; Bochmann, Günther; Fuchs, Werner

    2017-02-01

    Two-stage anaerobic digestion (AD) of two-phase olive mill solid waste (OMSW) was applied to reduce inhibiting factors by optimizing the acidification stage. Single-stage AD and co-fermentation with chicken manure were conducted simultaneously for direct comparison. Degradation of the polyphenols of up to 61% was observed during the methanogenic stage. Although the concentration of phenolic substances remained high, the two-stage fermentation stayed stable at an OLR of 1.5 kgVS/m³day. The buffer capacity of the system was twice as high as that of the one-stage fermentation, without additives. The two-stage AD was a combined process, with a thermophilic first stage and a mesophilic second stage, which proved the most profitable configuration for AD of OMSW: the hydraulic retention time (HRT) was reduced from 230 to 150 days, and start-up of the fermentation was three times faster than for the single-stage process and the co-fermentation. The optimal HRT and incubation temperature for the first stage were determined to be four days and 55°C. In terms of process stability, the performance of the two-stage AD was followed by that of co-digestion of OMSW with chicken manure as a nitrogen-rich co-substrate, making both viable options for waste disposal with concomitant energy recovery.

  3. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD into two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors have been operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) has been reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage, and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of the two-stage digestion.

  4. A two-stage ethanol-based biodiesel production in a packed bed reactor

    DEFF Research Database (Denmark)

    Xu, Yuan; Nordblad, Mathias; Woodley, John

    2012-01-01

    A two-stage enzymatic process for producing fatty acid ethyl ester (FAEE) in a packed bed reactor is reported. The process uses an experimental immobilized lipase (NS 88001) and Novozym 435 to catalyze transesterification (first stage) and esterification (second stage), respectively. Both stages...

  5. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP problem.

  6. Use a Log Splitter to Demonstrate Two-Stage Hydraulic Pump

    Science.gov (United States)

    Dell, Timothy W.

    2012-01-01

    The two-stage hydraulic pump is commonly used in many high school and college courses to demonstrate hydraulic systems. Unfortunately, many textbooks do not provide a good explanation of how the technology works. Another challenge that instructors run into with teaching hydraulic systems is the cost of procuring an expensive real-world machine…

  7. Capacity Analysis of Two-Stage Production lines with Many Products

    NARCIS (Netherlands)

    M.B.M. de Koster (René)

    1987-01-01

    textabstractWe consider two-stage production lines with an intermediate buffer. A buffer is needed when fluctuations occur. For single-product production lines fluctuations in capacity availability may be caused by random processing times, failures and random repair times. For multi-product producti

  8. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1 applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated...

  9. Two-stage data envelopment analysis technique for evaluating internal supply chain efficiency

    Directory of Open Access Journals (Sweden)

    Nisakorn Somsuk

    2014-12-01

    Full Text Available A two-stage data envelopment analysis (DEA), which uses mathematical linear programming techniques, is applied to evaluate the efficiency of a system composed of two related sub-processes, in which the outputs from the first sub-process (the intermediate outputs of the system) are the inputs for the second sub-process. The relative efficiencies of the system and its sub-processes can be measured by applying the two-stage DEA. According to the literature review on supply chain management, this technique can be used as a tool for evaluating the efficiency of a supply chain composed of two related sub-processes. The technique can help to identify the inefficient sub-processes. Once the efficiency of an inefficient sub-process is improved, the aggregate efficiency of the supply chain improves as well. This paper aims to present a procedure for evaluating the efficiency of the supply chain by using the two-stage DEA under the assumption of constant returns to scale, illustrated with an example of internal supply chain efficiency measurement of insurance companies. Moreover, the authors also present some observations on the application of this technique.
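    The linear program underlying DEA can be sketched compactly. The code below solves the input-oriented CCR model (envelopment form) under constant returns to scale, and applies it stage by stage: the intermediate measures Z are treated as stage-1 outputs and stage-2 inputs, as the abstract describes. The data are hypothetical, and this is a simplified illustration rather than the authors' full two-stage formulation.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (envelopment form):
    minimise theta s.t. lam @ X <= theta * X[o], lam @ Y >= Y[o], lam >= 0.

    X: inputs (n_dmu x n_in), Y: outputs (n_dmu x n_out)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o][:, None], X.T]          # lam @ X - theta*X[o] <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y.T]      # -lam @ Y <= -Y[o]
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[o]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1))
    return float(res.x[0])

# Hypothetical data: stage 1 turns inputs X1 into intermediates Z,
# stage 2 turns Z into final outputs Y2, for three DMUs.
X1 = np.array([[2.0], [4.0], [3.0]])
Z  = np.array([[4.0], [6.0], [3.0]])
Y2 = np.array([[8.0], [9.0], [5.0]])

for o in range(3):
    e1 = ccr_efficiency(X1, Z, o)   # stage 1 efficiency: X1 -> Z
    e2 = ccr_efficiency(Z, Y2, o)   # stage 2 efficiency: Z -> Y2
    print(f"DMU {o}: stage1={e1:.3f}, stage2={e2:.3f}")
```

    A DMU that is efficient overall but inefficient in one stage shows where improvement effort should go, which is the managerial use the abstract highlights.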

  10. Two-stage estimation in copula models used in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2005-01-01

    In this paper register based family studies provide the motivation for studying a two-stage estimation procedure in copula models for multivariate failure time data. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, generalising the approach by...

  11. Extraoral implants for orbit rehabilitation: a comparison between one-stage and two-stage surgeries.

    Science.gov (United States)

    de Mello, M C L M P; Guedes, R; de Oliveira, J A P; Pecorari, V A; Abrahão, M; Dib, L L

    2014-03-01

    The aim of the study was to compare the osseointegration success rate and time for delivery of the prosthesis among cases treated by two-stage or one-stage surgery for orbit rehabilitation between 2003 and 2011. Forty-five patients were included, 31 males and 14 females; 22 patients had two-stage surgery and 23 patients had one-stage surgery. A total 138 implants were installed, 42 (30.4%) on previously irradiated bone. The implant survival rate was 96.4%, with a success rate of 99.0% among non-irradiated patients and 90.5% among irradiated patients. Two-stage patients received 74 implants with a survival rate of 94.6% (four implants lost); one-stage surgery patients received 64 implants with a survival rate of 98.4% (one implant lost). The median time interval between implant fixation and delivery of the prosthesis for the two-stage group was 9.6 months and for the one-stage group was 4.0 months (P < 0.001). The one-stage technique proved to be reliable and was associated with few risks and complications; the rate of successful osseointegration was similar to those reported in the literature. The one-stage technique should be considered a viable procedure that shortens the time to final rehabilitation and facilitates appropriate patient follow-up treatment.

  12. Validation of Continuous CHP Operation of a Two-Stage Biomass Gasifier

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Henriksen, Ulrik Birk; Jensen, Torben Kvist

    2006-01-01

    The Viking gasification plant at the Technical University of Denmark was built to demonstrate a continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-d...

  13. High rate treatment of terephthalic acid production wastewater in a two-stage anaerobic bioreactor

    NARCIS (Netherlands)

    Kleerebezem, R.; Beckers, J.; Pol, L.W.H.; Lettinga, G.

    2005-01-01

    The feasibility was studied of anaerobic treatment of wastewater generated during purified terephthalic acid (PTA) production in a two-stage upflow anaerobic sludge blanket (UASB) reactor system. The artificial influent of the system contained the main organic substrates of PTA-wastewater: acetate, be

  14. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two stage process

    DEFF Research Database (Denmark)

    Antonopoulou, Georgia; Gavala, Hariklia N.; Skiadas, Ioannis

    2012-01-01

    The present study focused on the application of the Anaerobic Digestion Model 1 on the methane production from acidified sorghum extract generated from a hydrogen producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were...

  15. The rearrangement process in a two-stage broadcast switching network

    DEFF Research Database (Denmark)

    Jacobsen, Søren B.

    1988-01-01

    The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol.COM-33, no.10, p.1025-1035, Oct. 1985) is considered. By defining a certain function it is possible to calculate an upper bound on the number of connections to be moved...

  16. Two-stage laparoscopic resection of colon cancer and metastatic liver tumour

    Directory of Open Access Journals (Sweden)

    Yukio Iwashita

    2012-01-01

    Full Text Available We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both post-operative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

  17. Two-stage laparoscopic resection of colon cancer and metastatic liver tumour

    Directory of Open Access Journals (Sweden)

    Iwashita Yukio

    2005-01-01

    Full Text Available We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both postoperative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

  18. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  19. Novel two-stage piezoelectric-based electrical energy generators for low and variable speed rotary machinery

    Science.gov (United States)

    Rastegar, J.; Murray, R.

    2010-04-01

    A novel class of two-stage piezoelectric-based electrical energy generators is presented for rotary machinery in which the input speed is low and varies significantly, even reversing. Applications include wind mills, turbo-machinery for harvesting tidal flows, etc. Current technology using magnet-and-coil rotary generators requires gearing or similar mechanisms to increase the input speed and make the generation cycle efficient. Variable speed-control mechanisms are also usually needed to achieve high mechanical-to-electrical energy conversion efficiency. Presented here are generators that do not require gearing or speed-control mechanisms, significantly reducing complexity and cost, especially pertaining to maintenance and service. Additionally, these new generators can expand the application of energy harvesting to much slower input speeds than current technology allows. The primary novelty of this technology is the two-stage harvesting system. The harvesting environment (e.g. wind) provides input to the primary system, which is then used to successively excite a secondary system of vibratory elements into resonance, like strumming a guitar. The key advantage is that by having two decoupled systems, the low-and-varying-speed input can be converted into constant and much higher frequency vibrations. Energy is then harvested from the secondary system's vibrating elements with high efficiency using piezoelectric elements or magnet-and-coil generators. These new generators are uncomplicated, and can efficiently operate at widely varying and even reversing input speeds. Conceptual designs are presented for a number of generators and subsystems (e.g. for passing mechanical energy from the primary to the secondary system). Additionally, analysis of a complete two-stage energy harvesting system is discussed with predictions of performance and efficiency.

  20. The Bracka two-stage repair for severe proximal hypospadias: A single center experience

    Directory of Open Access Journals (Sweden)

    Rakesh S Joshi

    2015-01-01

    Full Text Available Background: Surgical correction of severe proximal hypospadias represents a significant surgical challenge, and single-stage corrections are often associated with complications and reoperations. The Bracka two-stage repair is an attractive alternative surgical procedure with superior, reliable, and reproducible results. Purpose: To study the feasibility and applicability of the Bracka two-stage repair for severe proximal hypospadias and to analyze the outcomes and complications of this surgical technique. Materials and Methods: This prospective study was conducted from January 2011 to December 2013. Bracka two-stage repair was performed using inner preputial skin as a free graft in subjects with proximal hypospadias in whom a severe degree of chordee and/or a poor urethral plate was present. Only primary cases were included in this study. All subjects received three doses of intramuscular testosterone 3 weeks apart before the first stage. The second stage was performed 6 months after the first stage. Follow-up ranged from 6 months to 24 months. Results: A total of 43 patients underwent the Bracka repair, of whom 30 completed the two-stage repair. The mean age of the patients was 4 years and 8 months. We achieved 100% graft uptake and no revision was required. Three patients developed a fistula, while two had meatal stenosis. Glans dehiscence, urethral stricture, and residual chordee were not found during follow-up, and satisfactory cosmetic results with a good urinary stream were achieved in all cases. Conclusion: The Bracka two-stage repair is a safe and reliable approach in select patients in whom it is impractical to maintain the axial integrity of the urethral plate, and a full-circumference urethral reconstruction therefore becomes necessary. It gives good results in terms of restoration of normal function, with minimal complications.

  1. Optimisation of two-stage screw expanders for waste heat recovery applications

    Science.gov (United States)

    Read, M. G.; Smith, I. K.; Stosic, N.

    2015-08-01

    It has previously been shown that the use of two-phase screw expanders in power generation cycles can achieve an increase in the utilisation of available energy from a low temperature heat source when compared with more conventional single-phase turbines. However, screw expander efficiencies are more sensitive to expansion volume ratio than turbines, and this sensitivity increases as the expander inlet vapour dryness fraction decreases. For single-stage screw machines with low inlet dryness, this can lead to under-expansion of the working fluid and low isentropic efficiency for the expansion process. The performance of the cycle can potentially be improved by using a two-stage expander, consisting of a low pressure machine and a smaller high pressure machine connected in series. By expanding the working fluid over two stages, the built-in volume ratios of the two machines can be selected to provide a better match with the overall expansion process, thereby increasing efficiency for particular inlet and discharge conditions. The mass flow rate through both stages must however be matched, and the compromise between increasing efficiency and maximising power output must also be considered. This research uses a rigorous thermodynamic screw machine model to compare the performance of single and two-stage expanders over a range of operating conditions. The model allows optimisation of the required intermediate pressure in the two-stage expander, along with the rotational speed and built-in volume ratio of both screw machine stages. The results allow the two-stage machine to be fully specified in order to achieve maximum efficiency for a required power output.

  2. A two-stage procedure for determining unsaturated hydraulic characteristics using a syringe pump and outflow observations

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Jensen, Karsten Høgh; Hollenbeck, Karl-Josef;

    1997-01-01

    A fast two-stage methodology for determining unsaturated flow characteristics is presented. The procedure builds on direct measurement of the retention characteristic using a syringe pump technique, combined with inverse estimation of the hydraulic conductivity characteristic based on one-step outflow experiments. The direct measurements are obtained with a commercial syringe pump, which continuously withdraws fluid from a soil sample at a very low and accurate flow rate, thus providing the water content in the soil sample. The retention curve is then established by simultaneously monitoring... The one-step outflow data and the independently measured retention data are included in the objective function of a traditional least-squares minimization routine, providing unique estimates of the unsaturated hydraulic characteristics by means of numerical inversion of the Richards equation. As opposed to what is often...

  3. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    Science.gov (United States)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
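    The non-intrusive collocation idea, evaluating the deterministic model at quadrature nodes and combining the samples with quadrature weights, can be shown in one stochastic dimension. The paper uses dimension-adaptive Smolyak sparse grids with nested Kronrod-Patterson-Hermite rules; the sketch below substitutes a plain Gauss-Hermite rule and a toy scalar model, purely for illustration.

```python
import numpy as np

def collocation_stats(model, level=5):
    """Stochastic collocation sketch for one standard-normal random input:
    evaluate the model at Gauss-Hermite nodes, then combine the samples
    with quadrature weights to obtain the output mean and variance."""
    # Probabilists' Hermite rule: integrates against exp(-x^2/2).
    x, w = np.polynomial.hermite_e.hermegauss(level)
    w = w / np.sqrt(2 * np.pi)                # normalise to the N(0,1) density
    vals = np.array([model(xi) for xi in x])  # non-intrusive model evaluations
    mean = np.sum(w * vals)
    var = np.sum(w * (vals - mean) ** 2)
    return mean, var

# Toy 'flow' model (hypothetical): scalar response q = 1/k with
# log-normal permeability k = exp(0.5 * xi).
mean, var = collocation_stats(lambda xi: 1.0 / np.exp(0.5 * xi))
print(mean, var)
```

    In higher dimensions the tensor rule above becomes intractable, which is exactly why the paper resorts to nested sparse grids and dimension adaptivity.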

  4. Design unbiased estimation in line intersect sampling using segmented transects

    Science.gov (United States)

    David L.R. Affleck; Timothy G. Gregoire; Harry T. Valentine; Harry T. Valentine

    2005-01-01

    In many applications of line intersect sampling, transects consist of multiple, connected segments in a prescribed configuration. The relationship between the transect configuration and the selection probability of a population element is illustrated and a consistent sampling protocol, applicable to populations composed of arbitrarily shaped elements, is proposed. It...

  5. Design of Automatic Sample Loading System for INAA

    Institute of Scientific and Technical Information of China (English)

    YAO; Yong-gang; XIAO; Cai-jin; WANG; Ping-sheng; JIN; Xiang-chun; HUA; Long; NI; Bang-fa

    2015-01-01

    Instrumental neutron activation analysis (INAA) is a technique in which the sample is bombarded with neutrons, causing the elements to form radioactive isotopes. It is possible to study the spectra of the emissions of the radioactive sample, and to determine the concentrations of the elements within it. Neutron activation analysis is a sensitive multi-element analytical technique that is used for both

  6. Preliminary chemical analysis and biological testing of materials from the HRI catalytic two-stage liquefaction (CTSL) process. [Aliphatic hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Later, D.W.; Wilson, B.W.

    1985-01-01

    Coal-derived materials from experimental runs of Hydrocarbon Research Incorporated's (HRI) catalytic two-stage liquefaction (CTSL) process were chemically characterized and screened for microbial mutagenicity. This process differs from other two-stage coal liquefaction processes in that a catalyst is used in both stages. Samples from both the first and second stages were class-fractionated by alumina adsorption chromatography. The fractions were analyzed by capillary column gas chromatography; gas chromatography/mass spectrometry; direct probe, low voltage mass spectrometry; and proton nuclear magnetic resonance spectrometry. Mutagenicity assays were performed with the crude materials and class fractions in Salmonella typhimurium TA98. Preliminary results of the chemical analyses indicate that >80% of the CTSL materials from both process stages were aliphatic hydrocarbon and polynuclear aromatic hydrocarbon (PAH) compounds. Furthermore, the gross and specific chemical composition of process materials from the first stage was very similar to that of the second stage. In general, the unfractionated materials were only slightly active in the TA98 mutagenicity assay. Like other coal liquefaction materials investigated in this laboratory, the nitrogen-containing polycyclic aromatic compound (N-PAC) class fractions were responsible for the bulk of the mutagenic activity of the crudes. Finally, it was shown that this activity correlated with the presence of amino-PAH. 20 figures, 9 tables.

  7. A New Soil Infiltration Technology for Decentralized Sewage Treatment: Two-Stage Anaerobic Tank and Soil Trench System

    Institute of Scientific and Technical Information of China (English)

    YE Chun; HU Zhan-Bo; KONG Hai-Nan; WANG Xin-Ze; HE Sheng-Bing

    2008-01-01

    The low removal efficiency of total nitrogen (TN) is one of the main disadvantages of the traditional single-stage subsurface infiltration system, which combines an anaerobic tank and a soil filter field. In this study, a full-scale, two-stage anaerobic tank and soil trench system was designed and operated to evaluate its feasibility and performance in treating sewage from a school campus over a one-year monitoring period. The raw sewage was prepared and fed into the first and second anaerobic tanks at 60% and 40%, respectively. This novel process could decrease chemical oxygen demand (dichromate method) by 89%-96%, suspended solids by 91%-97%, and total phosphorus by 91%-97%. Denitrification was satisfactory in the second-stage soil trench, so the removals of TN and ammonia nitrogen (NH4+-N) reached 68%-75% and 96%-99%, respectively. The removal of TN in this two-stage anaerobic tank and soil trench system was more effective than in the single-stage soil infiltration system. The effluent met the discharge standard for sewage treatment plants (GB18918-2002) of China.

  8. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two stage process.

    Science.gov (United States)

    Antonopoulou, Georgia; Gavala, Hariklia N; Skiadas, Ioannis V; Lyberatos, Gerasimos

    2012-02-01

    The present study focused on the application of the Anaerobic Digestion Model 1 on the methane production from acidified sorghum extract generated from a hydrogen producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were estimated through fitting of the model equations to the data obtained from batch experiments. The simulation of the continuous reactor performance at all HRTs tested (20, 15, and 10d) was very satisfactory. Specifically, the largest deviation of the theoretical predictions against the experimental data was 12% for the methane production rate at the HRT of 20d while the deviation values for the 15 and 10d HRT were 1.9% and 1.1%, respectively. The model predictions regarding pH, methane percentage in the gas phase and COD removal were in very good agreement with the experimental data with a deviation less than 5% for all steady states. Therefore, the ADM1 is a valuable tool for process design in the case of a two-stage anaerobic process as well.
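    Estimating kinetic parameters by fitting model equations to batch data, as described in the abstract, can be sketched with a nonlinear least-squares fit. The first-order rate law and the synthetic "batch" data below are illustrative stand-ins, not the ADM1 rate expressions or the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order substrate consumption sketch: S(t) = S0 * exp(-k * t).
# (ADM1 itself uses Monod-type kinetics; first-order is a common
# simplification for fitting batch consumption data.)
def substrate(t, s0, k):
    return s0 * np.exp(-k * t)

t = np.linspace(0, 10, 11)                  # days (hypothetical batch assay)
rng = np.random.default_rng(3)
obs = substrate(t, 5.0, 0.4) + rng.normal(0, 0.05, t.size)  # gCOD/L + noise

(p_s0, p_k), _ = curve_fit(substrate, t, obs, p0=[4.0, 0.1])
print(f"S0 ~ {p_s0:.2f} gCOD/L, k ~ {p_k:.2f} 1/d")
```

    The fitted parameters can then drive a continuous-reactor simulation at each HRT, which is how batch-derived kinetics feed into the steady-state predictions the abstract compares against experiments.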

  9. A two-stage approach for improved prediction of residue contact maps

    Directory of Open Access Journals (Sweden)

    Pollastri Gianluca

    2006-03-01

    Full Text Available Abstract Background Protein topology representations such as residue contact maps are an important intermediate step towards ab initio prediction of protein structure. Although improvements have occurred over the last years, the problem of accurately predicting residue contact maps from primary sequences is still largely unsolved. Among the reasons for this are the unbalanced nature of the problem (with far fewer examples of contacts than non-contacts), the formidable challenge of capturing long-range interactions in the maps, and the intrinsic difficulty of mapping one-dimensional input sequences into two-dimensional output maps. In order to alleviate these problems and achieve improved contact map predictions, in this paper we split the task into two stages: first, the prediction of a map's principal eigenvector (PE) from the primary sequence; second, the reconstruction of the contact map from the PE and primary sequence. Predicting the PE from the primary sequence consists in mapping a vector into a vector. This task is less complex than mapping vectors directly into two-dimensional matrices, since the size of the problem is drastically reduced and so is the scale length of the interactions that need to be learned. Results We develop architectures composed of ensembles of two-layered bidirectional recurrent neural networks to classify the components of the PE in 2, 3 and 4 classes from protein primary sequence, predicted secondary structure, and hydrophobicity interaction scales. Our predictor, tested on a non-redundant set of 2171 proteins, achieves classification performances of up to 72.6%, 16% above a baseline statistical predictor. We design a system for the prediction of contact maps from the predicted PE. Our results show that predicting maps through the PE yields sizeable gains, especially for long-range contacts, which are particularly critical for accurate protein 3D reconstruction. The final predictor's accuracy on a non-redundant set of 327 targets is 35

  10. Variance Estimation, Design Effects, and Sample Size Calculations for Respondent-Driven Sampling

    National Research Council Canada - National Science Library

    Salganik, Matthew J

    2006-01-01

    .... A recently developed statistical approach called respondent-driven sampling improves our ability to study hidden populations by allowing researchers to make unbiased estimates of the prevalence...

  11. A Typology of Mixed Methods Sampling Designs in Social Science Research

    Science.gov (United States)

    Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.

    2007-01-01

    This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…

  12. Conditional estimation of exponential random graph models from snowball sampling designs

    NARCIS (Netherlands)

    Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng

    2013-01-01

    A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members

  13. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available In this presentation, the author discusses a statistical method for deriving optimal spatial sampling schemes, focusing first on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting...

  14. Implications of sampling design and sample size for national carbon accounting systems

    Science.gov (United States)

    Michael Köhl; Andrew Lister; Charles T. Scott; Thomas Baldauf; Daniel. Plugge

    2011-01-01

    Countries willing to adopt a REDD regime need to establish a national Measurement, Reporting and Verification (MRV) system that provides information on forest carbon stocks and carbon stock changes. Due to the extensive areas covered by forests the information is generally obtained by sample based surveys. Most operational sampling approaches utilize a combination of...

  15. Design-based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation.

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Discusses some key statistical concepts in probabilistic and non-probabilistic sampling to provide an overview for understanding the inference process. Suggests a statistical model constituting the basis of statistical inference and provides a brief review of the finite population descriptive inference and a quota sampling inferential theory.…

  16. Design, data analysis and sampling techniques for clinical research

    OpenAIRE

    Karthik Suresh; Sanjeev V Thomas; Geetha Suresh

    2011-01-01

    Statistical analysis is an essential technique that enables a medical research practitioner to draw meaningful inferences from their data. Improper application of study design and data analysis may render insufficient and improper results and conclusions. Converting a medical problem into a statistical hypothesis with an appropriate methodological and logical design, and then back-translating the statistical results into relevant medical knowledge, is a real challenge. This article explains...

  17. Matching tutor to student: rules and mechanisms for efficient two-stage learning in neural circuits

    CERN Document Server

    Tesileanu, Tiberiu; Balasubramanian, Vijay

    2016-01-01

    Existing models of birdsong learning assume that brain area LMAN introduces variability into song for trial-and-error learning. Recent data suggest that LMAN also encodes a corrective bias driving short-term improvements in song. These later consolidate in area RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using a stochastic gradient descent approach, we derive how 'tutor' circuits should match plasticity mechanisms in 'student' circuits for efficient learning. We further describe a reinforcement learning framework with which the tutor can build its teaching signal. We show that mismatching the tutor signal and plasticity mechanism can impair or abolish learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.

  18. HRI catalytic two-stage liquefaction (CTSL) process materials: chemical analysis and biological testing

    Energy Technology Data Exchange (ETDEWEB)

    Wright, C.W.; Later, D.W.

    1985-12-01

    This report presents data from the chemical analysis and biological testing of coal liquefaction materials obtained from the Hydrocarbon Research, Incorporated (HRI) catalytic two-stage liquefaction (CTSL) process. Materials from both an experimental run and a 25-day demonstration run were analyzed. Chemical methods of analysis included adsorption column chromatography, high-resolution gas chromatography, gas chromatography/mass spectrometry, low-voltage probe-inlet mass spectrometry, and proton nuclear magnetic resonance spectroscopy. The biological activity was evaluated using the standard microbial mutagenicity assay and an initiation/promotion assay for mouse-skin tumorigenicity. Where applicable, the results obtained from the analyses of the CTSL materials have been compared to those obtained from the integrated and nonintegrated two-stage coal liquefaction processes. 18 refs., 26 figs., 22 tabs.

  19. Two-stage precipitation process of iron and arsenic from acid leaching solutions

    Institute of Scientific and Technical Information of China (English)

    N.J.BOLIN; J.E.SUNDKVIST

    2008-01-01

    A leaching process for base metals recovery often generates considerable amounts of impurities such as iron and arsenic in the solution. It is a challenge to separate the non-valuable metals into manageable and stable waste products for final disposal without losing the valuable constituents. Boliden Mineral AB has patented a two-stage precipitation process that gives a very clean iron-arsenic precipitate with a minimum of coprecipitation of base metals. The obtained product shows good sedimentation and filtration properties, which makes it easy to recover the iron-arsenic depleted solution by filtration and washing of the precipitate. Continuous bench-scale tests have been done, showing the excellent results achieved by the two-stage precipitation process.

  20. Power Frequency Oscillation Suppression Using Two-Stage Optimized Fuzzy Logic Controller for Multigeneration System

    Directory of Open Access Journals (Sweden)

    Y. K. Bhateshvar

    2016-01-01

    Full Text Available This paper attempts to develop a linearized model of automatic generation control (AGC) for an interconnected two-area reheat-type thermal power system in a deregulated environment. A comparison between a genetic algorithm optimized PID controller (GA-PID), a particle swarm optimized PID controller (PSO-PID), and the proposed two-stage PSO-optimized fuzzy logic controller (TSO-FLC) is presented. The proposed fuzzy controller is optimized at two stages: one is rule-base optimization and the other is scaling-factor and gain-factor optimization. The TSO-FLC shows the best dynamic response following a step load change under different cases of bilateral contracts in the deregulated environment. In addition, the performance of the proposed TSO-FLC is also examined for ±30% changes in system parameters with different types of contractual demands between control areas and compared with GA-PID and PSO-PID. MATLAB/Simulink® is used for all simulations.
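As background on the optimizer used in this record, a minimal particle swarm optimization loop might look like the sketch below; it is illustrative only (the paper's two-stage tuning of the rule base and scaling/gain factors is not reproduced), and all parameter values are generic defaults:

```python
import random

def pso(f, dim, n=20, iters=100, lo=-5.0, hi=5.0, w=0.7, c1=1.5, c2=1.5):
    """Minimize f over dim dimensions with a basic particle swarm."""
    random.seed(0)  # reproducible demo
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]            # personal best positions
    gbest = min(pbest, key=f)[:]          # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # Inertia plus attraction toward personal and global bests
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
        gbest = min(pbest, key=f)[:]
    return gbest

best = pso(lambda v: sum(x * x for x in v), dim=2)
print(best)  # near the optimum [0, 0]
```

In the paper's setting, the fitness function would score the controller's step-load response rather than a toy sphere function.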

  1. A two-stage scheme for multi-view human pose estimation

    Science.gov (United States)

    Yan, Junchi; Sun, Bing; Liu, Yuncai

    2010-08-01

    We present a two-stage scheme integrating voxel reconstruction and human motion tracking. By combining voxel reconstruction with human motion tracking interactively, our method can work in a cluttered background where perfect foreground silhouettes are hardly available. For each frame, a silhouette-based 3D volume reconstruction method and a hierarchical tracking algorithm are applied in two stages. In the first stage, coarse reconstruction and tracking results are obtained, and then refinement of the reconstruction is applied in the second stage. The experimental results demonstrate that our approach is promising. Although our method focuses on human body voxel reconstruction and motion tracking, our scheme can be used to reconstruct voxel data and infer the pose of many specified rigid and articulated objects.

  2. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach.

    Science.gov (United States)

    Tan, Robin; Perkowski, Marek

    2017-02-20

    Electrocardiogram (ECG) signals sensed from mobile devices have potential for biometric identity recognition applicable in remote access control systems where enhanced data security is demanded. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total subject verification accuracy of 99.52%, better than the 98.33% accuracy from random forest alone and the 96.31% accuracy from the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems.
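The general idea of a probabilistic threshold schema, where a decisive first-stage probability settles the decision and ambiguous cases fall through to a second-stage distance measure, can be sketched as follows; the thresholds, function names, and values are all illustrative, not taken from the paper:

```python
def two_stage_verify(p_stage1, wavelet_dist, p_hi=0.9, p_lo=0.1, d_max=1.0):
    """Stage 1 decides when its probability is decisive; stage 2 otherwise."""
    if p_stage1 >= p_hi:
        return True                    # confident match: accept
    if p_stage1 <= p_lo:
        return False                   # confident non-match: reject
    return wavelet_dist <= d_max       # ambiguous: distance measure decides

print(two_stage_verify(0.95, 2.0))     # stage 1 accepts outright
print(two_stage_verify(0.50, 0.4))     # deferred to stage 2, which accepts
```

The benefit of such a cascade is that the cheaper or more reliable classifier handles the easy cases, and the second stage is only consulted for the ambiguous middle band.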

  3. Effect of two-stage aging on superplasticity of Al-Li alloy

    Institute of Scientific and Technical Information of China (English)

    LUO Zhi-hui; ZHANG Xin-ming; DU Yu-xuan; YE Ling-ying

    2006-01-01

    The effect of two-stage aging on the microstructures and superplasticity of 01420 Al-Li alloy was investigated by means of OM, TEM analysis and stretching experiments. The results demonstrate that, compared with single-stage aging (300 ℃, 48 h), the second-phase particles are distributed more uniformly and with a larger volume fraction after the two-stage aging (120 ℃, 12 h + 300 ℃, 36 h). After rolling and recrystallization annealing, fine grains with sizes of 8-10 μm are obtained, and the superplastic elongation of the specimens reaches 560% at a strain rate of 8×10⁻⁴ s⁻¹ and 480 ℃. Uniformly distributed fine particles precipitate both on grain boundaries and in grains at the lower temperature. When the sheet is aged at high temperature, the particles become coarser with a large volume fraction.

  4. Two stage bioethanol refining with multi litre stacked microbial fuel cell and microbial electrolysis cell.

    Science.gov (United States)

    Sugnaux, Marc; Happe, Manuel; Cachelin, Christian Pierre; Gloriod, Olivier; Huguenin, Gérald; Blatter, Maxime; Fischer, Fabian

    2016-12-01

    Ethanol, electricity, hydrogen and methane were produced in a two-stage bioethanol refinery setup based on a 10 L microbial fuel cell (MFC) and a 33 L microbial electrolysis cell (MEC). The MFC was a triple stack for ethanol and electricity co-generation. The stack configuration produced more ethanol, with faster glucose consumption, the higher the stack potential. Under electrolytic conditions ethanol productivity outperformed standard conditions and reached 96.3% of the theoretical best case. At lower external loads, currents and working potentials oscillated in a self-synchronized manner over all three MFC units in the stack. In the second refining stage, fermentation waste was converted into methane using the scaled-up MEC stack. The bioelectric methanisation reached 91% efficiency at room temperature with an applied voltage of 1.5 V using nickel cathodes. The two-stage bioethanol refining process employing bioelectrochemical reactors produces more energy vectors than is possible with today's ethanol distilleries.

  5. HRI catalytic two-stage liquefaction (CTSL) process materials: chemical analysis and biological testing

    Energy Technology Data Exchange (ETDEWEB)

    Wright, C.W.; Later, D.W.

    1985-12-01

    This report presents data from the chemical analysis and biological testing of coal liquefaction materials obtained from the Hydrocarbon Research, Incorporated (HRI) catalytic two-stage liquefaction (CTSL) process. Materials from both an experimental run and a 25-day demonstration run were analyzed. Chemical methods of analysis included adsorption column chromatography, high-resolution gas chromatography, gas chromatography/mass spectrometry, low-voltage probe-inlet mass spectrometry, and proton nuclear magnetic resonance spectroscopy. The biological activity was evaluated using the standard microbial mutagenicity assay and an initiation/promotion assay for mouse-skin tumorigenicity. Where applicable, the results obtained from the analyses of the CTSL materials have been compared to those obtained from the integrated and nonintegrated two-stage coal liquefaction processes. 18 refs., 26 figs., 22 tabs.

  6. Performance measurement of insurance firms using a two-stage DEA method

    Directory of Open Access Journals (Sweden)

    Raha Jalili Sabet

    2013-01-01

    Full Text Available Measuring the relative performance of insurance firms plays an important role in this industry. In this paper, we present a two-stage data envelopment analysis to measure the performance of insurance firms that were active over the period 2006-2010. The proposed study performs the DEA method in two stages, where the first stage considers five inputs and three outputs, while the second stage takes the outputs of the first stage as its inputs and uses three different outputs. The results of our survey indicate that while there were 4 efficient insurance firms, most other insurers were noticeably inefficient. This means the market was monopolized mostly by a limited number of insurance firms and competition was not fair enough to let other firms participate in the economy more efficiently.

  7. Direct Torque Control of Sensorless Induction Machine Drives: A Two-Stage Kalman Filter Approach

    Directory of Open Access Journals (Sweden)

    Jinliang Zhang

    2015-01-01

    Full Text Available The extended Kalman filter (EKF) has been widely applied for sensorless direct torque control (DTC) in induction machines (IMs). One key problem associated with the EKF is that the estimator suffers from computational burden and numerical problems resulting from high-order mathematical models. To reduce the computational cost, a two-stage extended Kalman filter (TEKF)-based solution is presented in this paper for closed-loop stator flux, speed, and torque estimation of an IM to achieve sensorless DTC-SVM operation. The novel observer can be derived similarly to the optimal two-stage Kalman filter (TKF), which has been proposed by several researchers. Compared to a straightforward implementation of a conventional EKF, the TEKF estimator reduces the number of arithmetic operations. Simulation and experimental results verify the performance of the proposed TEKF estimator for DTC of IMs.
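For context on the filter family being optimized, the conventional Kalman recursion reduces, in the scalar linear case, to a short predict-update cycle; this is an illustration only, with invented noise values, and does not attempt the paper's two-stage decomposition or the nonlinear machine model:

```python
def kalman_step(x, p, z, q, r):
    """One scalar predict/update cycle: identity dynamics, direct measurement."""
    p_pred = p + q                     # predict: variance grows by process noise q
    k = p_pred / (p_pred + r)          # Kalman gain against measurement noise r
    x_new = x + k * (z - x)            # update state toward measurement z
    p_new = (1.0 - k) * p_pred         # update (shrink) variance
    return x_new, p_new

x, p = 0.0, 1.0                        # initial guess and its variance
for z in [1.2, 0.9, 1.1]:              # noisy measurements of a signal near 1.0
    x, p = kalman_step(x, p, z, q=0.01, r=0.1)
print(x, p)
```

In the EKF used for induction machines the state is a multi-dimensional flux/speed vector, so every step involves matrix products; that cost is what the two-stage factorization reduces.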

  8. Syme's two-stage amputation in insulin-requiring diabetics with gangrene of the forefoot.

    Science.gov (United States)

    Pinzur, M S; Morrison, C; Sage, R; Stuck, R; Osterman, H; Vrbos, L

    1991-06-01

    Thirty-five insulin-requiring adult diabetic patients underwent 38 Syme's two-stage amputations for gangrene of the forefoot with non-reconstructible peripheral vascular insufficiency. All had a minimum Doppler ischemic index of 0.5, serum albumin of 3.0 g/dl, and total lymphocyte count of 1500. Thirty-one (81.6%) eventually healed and were uneventfully fitted with a prosthesis. Regional anesthesia was used in all of the patients, with 22 spinal and 16 ankle-block anesthetics. Twenty-seven (71%) returned to their preamputation level of ambulatory function. Six (16%) had major, and fifteen (39%) minor, complications following the first-stage surgery. The results of this study support the use of the Syme's two-stage amputation in adult diabetic patients with gangrene of the forefoot requiring amputation.

  9. Low-noise SQUIDs with large transfer: two-stage SQUIDs based on DROSs

    Science.gov (United States)

    Podt, M.; Flokstra, J.; Rogalla, H.

    2002-08-01

    We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed-loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10⁵ Φ0/s and the measured white flux noise was 1.3 μΦ0/√Hz at 4.2 K. The system is based on a conventional dc SQUID with a double relaxation oscillation SQUID (DROS) as the second stage. Because of the large flux-to-voltage transfer, the sensitivity of the system is completely determined by the sensor SQUID and not by the DROS or the room-temperature preamplifier. Decreasing the Josephson junction area enables a further improvement of the sensitivity of two-stage SQUID systems.

  10. Experiment and surge analysis of centrifugal two-stage turbocharging system

    Institute of Scientific and Technical Information of China (English)

    Yituan HE; Chaochen MA

    2008-01-01

    To study a centrifugal two-stage turbocharging system's surge and its influencing factors, a special test bench was set up and a system surge test was performed. The test results indicate that measured parameters such as the air mass flow and rotation speed of the high pressure (HP) stage compressor can be converted into corrected parameters under a standard condition according to the Mach number similarity criterion, because the air flow in the HP stage compressor has entered the Reynolds number (Re) auto-modeling range. Accordingly, the causes of a two-stage turbocharging system's surge can be analyzed from the corrected mass flow characteristic maps and the actual operating conditions of the HP and low pressure (LP) stage compressors.

  11. Sample port design for ballast water sampling: Refinement of guidance regarding the isokinetic diameter.

    Science.gov (United States)

    Wier, Timothy P; Moser, Cameron S; Grant, Jonathan F; First, Matthew R; Riley, Scott C; Robbins-Wamsley, Stephanie H; Drake, Lisa A

    2015-09-15

    By using an appropriate in-line sampling system, it is possible to obtain representative samples of ballast water from the main ballast line. An important parameter of the sampling port is its "isokinetic diameter" (DISO), the diameter at which the velocity of water in the sample port equals the velocity of the water in the main ballast line. The guidance in the U.S. Environmental Technology Verification (ETV) program protocol suggests increasing the diameter from 1.0× DISO (in which velocity in the sample port is equivalent to velocity in the main line) to 1.5-2.0× DISO. In this manner, flow velocity is slowed (and mortality of organisms is theoretically minimized) as water enters the sample port. This report describes field and laboratory trials, as well as computational fluid dynamics modeling, to refine this guidance. From this work, a DISO of 1.0-2.0× (smaller diameter sample ports) is recommended.
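The isokinetic-diameter relation follows from matching flow velocities (velocity = flow/area in a circular pipe): equal velocities give DISO = D_main × √(q_sample/Q_main), and a port wider than DISO slows the flow by the square of the diameter ratio. A minimal sketch, with illustrative flow numbers not taken from the report:

```python
import math

def isokinetic_diameter(d_main, q_sample, q_main):
    """Sample-port diameter giving the same velocity (v = flow/area) as the main line."""
    return d_main * math.sqrt(q_sample / q_main)

def port_velocity_ratio(d_port, d_iso):
    """Port velocity relative to isokinetic; scales with (d_iso/d_port)^2."""
    return (d_iso / d_port) ** 2

d_iso = isokinetic_diameter(0.30, 5.0, 500.0)   # 0.30 m main line, 1% sample flow
print(round(d_iso, 6))                          # 0.03 m
print(round(port_velocity_ratio(1.5 * d_iso, d_iso), 4))  # slowed in a 1.5x DISO port
```

A 1.5× DISO port thus carries water at about 44% of the main-line velocity, which is the slowing effect the ETV guidance relies on.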

  12. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Full Text Available Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. The initial operative procedures performed were window colostomy (n = 6), colostomy proximal to the pouch (n = 4), and ligation of the colovesical fistula with end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull through (APPT) of the colon was performed in eight cases, and pouch excision with APPT of the ileum in three. The mean age at the time of the definitive procedures was 15.6 months (range 3 to 53 months) and the mean weight was 7.5 kg (range 4 to 11 kg). Good fecal continence was observed in six and fair in two cases over the follow-up period, while three of our cases were lost to follow-up. There was no mortality following the definitive procedures amongst the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can be performed safely with good results. Most importantly, the definitive procedure is done without a protective stoma, thereby avoiding stoma closure, stoma-related complications, and the cost and hospital stay associated with stoma closure.

  13. Hybrid staging of a Lysholm positive displacement engine with two Westinghouse two stage impulse Curtis turbines

    Energy Technology Data Exchange (ETDEWEB)

    Parker, D.A.

    1982-06-01

    The University of California at Berkeley has satisfactorily tested and modeled a hybrid-staged Lysholm engine (positive displacement) with a two-stage Curtis wheel turbine. The system operates in a stable manner over its operating range (0/1-3/1 water ratio, 120 psia input). Proposals are made for controlling interstage pressure with a partial admission turbine, and for volume expansion to control mass flow and pressure ratio for the Lysholm engine.

  14. Full noise characterization of a low-noise two-stage SQUID amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Falferi, P [Istituto di Fotonica e Nanotecnologie, CNR-Fondazione Bruno Kessler, 38100 Povo, Trento (Italy); Mezzena, R [INFN, Gruppo Collegato di Trento, Sezione di Padova, 38100 Povo, Trento (Italy); Vinante, A [INFN, Sezione di Padova, 35131 Padova (Italy)], E-mail: falferi@science.unitn.it

    2009-07-15

    From measurements performed on a low-noise two-stage SQUID amplifier coupled to a high-Q electrical resonator, we give a complete noise characterization of the SQUID amplifier around the resonator frequency of 11 kHz in terms of additive, back-action and cross-correlation noise spectral densities. The minimum noise temperature evaluated at 135 mK is 10 μK and corresponds to an energy resolution of 18ℏ.

  15. Development of a Novel Type Catalyst SY-2 for Two-Stage Hydrogenation of Pyrolysis Gasoline

    Institute of Scientific and Technical Information of China (English)

    Wu Linmei; Zhang Xuejun; Zhang Zhihua; Wang Fucun

    2004-01-01

    By using group ⅢB or group ⅦB metals, modulating the characteristics of electric charges on the carrier surface, and improving the catalyst preparation process and the techniques for loading the active metal components, a novel SY-2 catalyst earmarked for two-stage hydrogenation of pyrolysis gasoline has been developed. The catalyst evaluation results indicate that the novel catalyst is characterized by better hydrogenation reaction activity, giving a higher aromatics yield.

  16. Investigation on a two-stage solvay refrigerator with magnetic material regenerator

    Science.gov (United States)

    Chen, Guobang; Zheng, Jianyao; Zhang, Fagao; Yu, Jianping; Tao, Zhenshi; Ding, Cenyu; Zhang, Liang; Wu, Peiyi; Long, Yi

    This paper describes experimental results showing that the no-load temperature of a two-stage Solvay refrigerator has been lowered from the original 11.5 K into the liquid-helium temperature region by using magnetic regenerative material instead of lead. The structure and technological characteristics of the prototype machine are presented. The effects of operating frequency and pressure on the refrigerating temperature are discussed in this paper.

  17. Biological hydrogen production from olive mill wastewater with two-stage processes

    Energy Technology Data Exchange (ETDEWEB)

    Eroglu, Ela; Eroglu, Inci [Department of Chemical Engineering, Middle East Technical University, 06531, Ankara (Turkey); Guenduez, Ufuk; Yuecel, Meral [Department of Biology, Middle East Technical University, 06531, Ankara (Turkey); Tuerker, Lemi [Department of Chemistry, Middle East Technical University, 06531, Ankara (Turkey)

    2006-09-15

    In the present work, two novel two-stage hydrogen production processes from olive mill wastewater (OMW) are introduced. The first two-stage process involved dark fermentation followed by photofermentation. Dark fermentation by activated sludge cultures and photofermentation by Rhodobacter sphaeroides O.U.001 were both performed in 55 ml glass vessels under anaerobic conditions. In some of the dark fermentations, the activated sludge was initially acclimatized to the OMW to allow the microorganisms to adapt to the extreme conditions of OMW. The highest hydrogen production potential obtained was 29 l H2/l OMW, after photofermentation with 50% (v/v) effluent of dark fermentation with activated sludge. Photofermentation with 50% (v/v) effluent of dark fermentation with acclimated activated sludge had the highest hydrogen production rate (0.008 l l⁻¹ h⁻¹). The second two-stage process involved a clay treatment step followed by photofermentation by R. sphaeroides O.U.001. Photofermentation with the effluent of the clay pretreatment process (4% (v/v)) gave the highest hydrogen production potential (35 l H2/l OMW), light conversion efficiency (0.42%) and COD conversion efficiency (52%). It was concluded that both pretreatment processes enhanced the photofermentative hydrogen production process. Moreover, hydrogen could be produced from highly concentrated OMW. The two-stage processes developed in the present investigation have a high potential for solving the environmental problems caused by OMW. (author)

  18. The two-stage aegean extension, from localized to distributed, a result of slab rollback acceleration

    OpenAIRE

    Brun, Jean-Pierre; Faccenna, Claudio; Gueydan, Frédéric; Sokoutis, Dimitrios; Philippon, Mélody; Kydonakis, Konstantinos; Gorini, Christian

    2016-01-01

    Back-arc extension in the Aegean, driven by slab rollback since 45 Ma, is described here for the first time in two stages. From the Middle Eocene to the Middle Miocene, deformation was localized, leading to i) the exhumation of high-pressure metamorphic rocks to crustal depths, ii) the exhumation of high-temperature metamorphic rocks in core complexes and iii) the deposition of sedimentary basins. Since the Middle Miocene, extension distributed over the whole Aegean domai...

  19. A Two-stage Discriminating Framework for Making Supply Chain Operation Decisions under Uncertainties

    OpenAIRE

    Gu, H; Rong, G

    2010-01-01

    This paper addresses the problem of making supply chain operation decisions for refineries under two types of uncertainty: demand uncertainty and incomplete information shared with suppliers and transport companies. Most of the literature focuses on only one uncertainty or treats multiple uncertainties identically. However, we note that in the real world refineries have more power to control uncertainties in procurement and transportation than in demand. Thus, a two-stage framework for dealing wit...

  20. Low-noise SQUIDs with large transfer: two-stage SQUIDs based on DROSs

    NARCIS (Netherlands)

    Podt, M.; Flokstra, Jakob; Rogalla, Horst

    2002-01-01

    We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed-loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10⁵ Φ0/s and the measured white flux noise was 1.3 μΦ0/√Hz at 4.2 K. The

  1. Latent Inhibition as a Function of US Intensity in a Two-Stage CER Procedure

    Science.gov (United States)

    Rodriguez, Gabriel; Alonso, Gumersinda

    2004-01-01

    An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…

  2. Two stage dual gate MESFET monolithic gain control amplifier for Ka-band

    Science.gov (United States)

    Sokolov, V.; Geddes, J.; Contolatis, A.

    A monolithic two-stage gain-control amplifier has been developed using submicron gate-length dual-gate MESFETs fabricated on ion-implanted material. The amplifier has a gain of 12 dB at 30 GHz with a gain-control range of over 30 dB. This ion-implanted monolithic IC is readily integrable with other phased-array receiver functions such as low-noise amplifiers and phase shifters.

  3. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Science.gov (United States)

    Kılıç, Bayram

    2012-07-01

    In this study, exergy analyses of vapor compression refrigeration cycle with two-stage and intercooler using refrigerants R507, R407c, R404a were carried out. The necessary thermodynamic values for analyses were calculated by Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system in the different operating conditions for these refrigerants were investigated. The coefficient of performance, exergetic efficiency and total irreversibility rate for alternative refrigerants were compared.
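The two figures of merit compared in such analyses, the coefficient of performance and the exergetic (second-law) efficiency, here taken as COP divided by the Carnot COP between the same temperature limits, can be computed directly. The numbers below are illustrative, not the paper's results:

```python
def cop(q_evap, w_comp):
    """Coefficient of performance: cooling delivered per unit compressor work."""
    return q_evap / w_comp

def carnot_cop(t_cold_k, t_hot_k):
    """Ideal (reversible) refrigeration COP between two temperatures in kelvin."""
    return t_cold_k / (t_hot_k - t_cold_k)

def exergetic_efficiency(cop_actual, t_cold_k, t_hot_k):
    """Second-law efficiency: actual COP relative to the Carnot COP."""
    return cop_actual / carnot_cop(t_cold_k, t_hot_k)

c = cop(10.0, 4.0)                     # e.g. 10 kW cooling per 4 kW of work
print(c)                               # 2.5
print(round(exergetic_efficiency(c, 263.15, 303.15), 3))
```

The gap between exergetic efficiency and 1 corresponds to the total irreversibility rate the abstract mentions; refrigerant property data (e.g. from the Solkane program) supply the actual q and w values.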

  4. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Bayram [Mehmet Akif Ersoy University, Bucak Emin Guelmez Vocational School, Bucak, Burdur (Turkey)

    2012-07-15

    In this study, exergy analyses of vapor compression refrigeration cycle with two-stage and intercooler using refrigerants R507, R407c, R404a were carried out. The necessary thermodynamic values for analyses were calculated by Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system in the different operating conditions for these refrigerants were investigated. The coefficient of performance, exergetic efficiency and total irreversibility rate for alternative refrigerants were compared. (orig.)

  5. Performance of Combined Water Turbine Darrieus-Savonius with Two Stage Savonius Buckets and Single Deflector

    OpenAIRE

    Sahim, Kaprawi; Santoso, Dyos; Sipahutar, Riman

    2016-01-01

    The objective of this study is to show the effect of a single deflector plate on the performance of a combined Darrieus-Savonius water turbine. In order to overcome the disadvantage of the low torque of a solo Darrieus turbine, a plate deflector mounted in front of the returning Savonius bucket of a combined water turbine composed of Darrieus and Savonius rotors is proposed in this study. Some configurations of combined turbines with two-stage Savonius rotors were experimentally tested in a river of c...

  6. Perceived Health Benefits and Soy Consumption Behavior: Two-Stage Decision Model Approach

    OpenAIRE

    Moon, Wanki; Balasubramanian, Siva K.; Rimal, Arbindra

    2005-01-01

    A two-stage decision model is developed to assess the effect of perceived soy health benefits on consumers' decisions with respect to soy food. The first stage captures whether or not to consume soy food, while the second stage reflects how often to consume. A conceptual/analytical framework is also employed, combining Lancaster's characteristics model and Fishbein's multi-attribute model. Results show that perceived soy health benefits significantly influence both decision stages. Further, c...

  7. Designing a Repetitive Group Sampling Plan for Weibull Distributed Processes

    Directory of Open Access Journals (Sweden)

    Aijun Yan

    2016-01-01

    Full Text Available Acceptance sampling plans are useful tools for determining whether submitted lots should be accepted or rejected. An efficient and economical sampling plan is very desirable for the high quality levels required by production processes. The process capability index CL is an important quality parameter for measuring product quality. Utilizing the relationship between the CL index and the nonconforming rate, a repetitive group sampling (RGS) plan based on the CL index is developed in this paper for the case where the quality characteristic follows the Weibull distribution. The optimal parameters of the proposed RGS plan are determined by satisfying the commonly used producer’s risk and consumer’s risk while minimizing the average sample number (ASN), and are then tabulated for different combinations of acceptance quality level (AQL) and limiting quality level (LQL). The results show that the proposed plan has better performance than the single sampling plan in terms of ASN. Finally, the proposed RGS plan is illustrated with an industrial example.
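
    As a rough illustration of the mechanics described above, the following sketch computes the Weibull nonconforming rate and runs the accept/reject/resample loop of a repetitive group sampling plan. The plan constants (n, c_a, c_r) are invented for illustration, not the optimized values tabulated in the paper, and the decision statistic here is simply the observed fraction defective rather than the paper's CL-based statistic.

```python
import math
import random

def nonconforming_rate(k, lam, lsl):
    """P(X < LSL) for X ~ Weibull(shape k, scale lam): the Weibull CDF
    evaluated at the lower specification limit."""
    return 1.0 - math.exp(-((lsl / lam) ** k))

def rgs_decision(k, lam, lsl, n=20, c_a=0.05, c_r=0.15, seed=1):
    """One repetitive-group-sampling decision with hypothetical constants:
    accept if the observed fraction defective clears c_a, reject if it
    exceeds c_r, otherwise draw a fresh group and repeat."""
    rng = random.Random(seed)
    while True:
        group = [rng.weibullvariate(lam, k) for _ in range(n)]
        p_hat = sum(x < lsl for x in group) / n
        if p_hat <= c_a:
            return "accept"
        if p_hat >= c_r:
            return "reject"
        # between the two constants: resample -- this repetition is what
        # lowers the average sample number relative to single sampling

p = nonconforming_rate(2.0, 10.0, 2.0)
print(f"nonconforming rate: {p:.4f}")
print("decision for a high-quality lot:", rgs_decision(2.0, 10.0, 0.5))
```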

  8. High quantum efficiency mid-wavelength interband cascade infrared photodetectors with one and two stages

    Science.gov (United States)

    Zhou, Yi; Chen, Jianxin; Xu, Zhicheng; He, Li

    2016-08-01

    In this paper, we report on mid-wavelength infrared interband cascade photodetectors grown on InAs substrates. We studied the transport properties of the photon-generated carriers in the interband cascade structures by comparing two different detectors, a single-stage detector and a two-stage cascade detector. The two-stage device showed a quantum efficiency of around 19.8% at room temperature, and a clear optical response was measured even at a temperature of 323 K. The two detectors showed similar Johnson-noise-limited detectivity. The peak detectivity of the one- and two-stage devices was measured to be 2.15 × 10^14 cm·Hz^(1/2)/W and 2.19 × 10^14 cm·Hz^(1/2)/W at 80 K, and 1.21 × 10^9 cm·Hz^(1/2)/W and 1.23 × 10^9 cm·Hz^(1/2)/W at 300 K, respectively. The 300 K background-limited infrared performance (BLIP) operation temperature is estimated to be over 140 K.

  9. Performance analysis of RDF gasification in a two stage fluidized bed-plasma process.

    Science.gov (United States)

    Materazzi, M; Lettieri, P; Taylor, R; Chapman, C

    2016-01-01

    The major technical problems faced by stand-alone fluidized bed gasifiers (FBG) for waste-to-gas applications are intrinsically related to the composition and physical properties of waste materials such as RDF. The high quantity of ash and volatile material in RDF can decrease thermal output, create severe ash clinkering, and increase emissions of tars and CO2, thus affecting operability for clean syngas generation at industrial scale. By contrast, a two-stage process that separates primary gasification from selective tar and ash conversion is inherently more forgiving and stable. This can be achieved with a separate plasma converter, which has been successfully used in conjunction with conventional thermal treatment units for its ability to 'polish' the producer gas of organic contaminants and collect the inorganic fraction in a molten (and inert) state. This research focused on the performance analysis of a two-stage fluid bed gasification-plasma process to transform solid waste into clean syngas. A thermodynamic assessment using the two-stage equilibrium method was carried out to determine optimum conditions for the gasification of RDF and to understand the limitations and influence of the second stage on the process performance (gas heating value, cold gas efficiency, carbon conversion efficiency), along with other parameters. A comparison with a different thermal refining stage, i.e. thermal cracking (via partial oxidation), was also performed. The analysis is supported by experimental data from a pilot plant.
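
    The performance indicators named in parentheses have direct definitions. A minimal sketch with assumed numbers (all values are invented placeholders, not pilot-plant data):

```python
# Illustrative definitions of the gasification performance indicators;
# every numerical input below is an assumed placeholder.

def cold_gas_efficiency(m_gas, lhv_gas, m_feed, lhv_feed):
    """Chemical energy in the product gas / chemical energy in the feed."""
    return (m_gas * lhv_gas) / (m_feed * lhv_feed)

def carbon_conversion_efficiency(c_in_gas, c_in_feed):
    """Carbon leaving in the gas phase / carbon entering with the feed."""
    return c_in_gas / c_in_feed

# Example with assumed RDF values: 1 kg/s feed at 18 MJ/kg producing
# 2.2 kg/s syngas at 6.5 MJ/kg, with 0.42 of the feed's 0.48 kg/s of
# carbon converted to gas species.
cge = cold_gas_efficiency(2.2, 6.5, 1.0, 18.0)
cce = carbon_conversion_efficiency(0.42, 0.48)
print(f"CGE = {cge:.1%}, CCE = {cce:.1%}")
```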

  10. Continuous removal of endocrine disruptors by versatile peroxidase using a two-stage system.

    Science.gov (United States)

    Taboada-Puig, Roberto; Lu-Chau, Thelmo A; Eibes, Gemma; Feijoo, Gumersindo; Moreira, Maria T; Lema, Juan M

    2015-01-01

    The oxidant Mn(3+)-malonate, generated by the ligninolytic enzyme versatile peroxidase in a two-stage system, was used for the continuous removal of endocrine disrupting compounds (EDCs) from synthetic and real wastewaters. One plasticizer (bisphenol-A), one bactericide (triclosan) and three estrogenic compounds (estrone, 17β-estradiol, and 17α-ethinylestradiol) were removed from wastewater at degradation rates in the range of 28-58 µg/L·min, with low enzyme inactivation. First, the optimization of three main parameters affecting the generation of Mn(3+)-malonate (hydraulic retention time as well as Na-malonate and H2O2 feeding rates) was conducted following a response surface methodology (RSM). Under optimal conditions, the degradation of the EDCs was proven at high (1.3-8.8 mg/L) and environmental (1.2-6.1 µg/L) concentrations. Finally, when the two-stage system was compared with a conventional enzymatic membrane reactor (EMR) using the same enzyme, a 14-fold increase of the removal efficiency was observed. At the same time, operational problems found during EDCs removal in the EMR system (e.g., clogging of the membrane and enzyme inactivation) were avoided by physically separating the stages of complex formation and pollutant oxidation, allowing the system to be operated for a longer period (∼8 h). This study demonstrates the feasibility of the two-stage enzymatic system for removing EDCs both at high and environmental concentrations.

  11. Rehabilitation outcomes in patients with early and two-stage reconstruction of flexor tendon injuries.

    Science.gov (United States)

    Sade, Ilgin; İnanir, Murat; Şen, Suzan; Çakmak, Esra; Kablanoğlu, Serkan; Selçuk, Barin; Dursun, Nigar

    2016-08-01

    [Purpose] The primary aim of this study was to assess rehabilitation outcomes for early and two-stage repair of hand flexor tendon injuries. The secondary purpose was to compare the findings between treatment groups. [Subjects and Methods] Twenty-three patients were included in this study. The early repair (n=14) and two-stage repair (n=9) groups were included in a rehabilitation program that used hand splints. This retrospective study evaluated patients according to their demographic characteristics, including age, gender, injured hand, dominant hand, cause of injury, zone of injury, number of affected fingers, and accompanying injuries. Pain, range of motion, and grip strength were evaluated using a visual analog scale, goniometer, and dynamometer, respectively. [Results] Both groups showed significant improvements in pain and finger flexion after treatment compared with baseline measurements. However, no significant differences were observed between the two treatment groups. Similar results were obtained for grip strength and pinch grip, whereas gross grip was better in the early tendon repair group. [Conclusion] Early and two-stage reconstruction of flexor tendon injuries can achieve similarly favorable outcomes when combined with effective rehabilitation programs.

  12. A Comparison of Direct and Two-Stage Transportation of Patients to Hospital in Poland

    Directory of Open Access Journals (Sweden)

    Anna Rosiek

    2015-04-01

    Full Text Available Background: The rapid international expansion of telemedicine reflects the growth of technological innovations. This technological advancement is transforming the way in which patients receive health care. Materials and Methods: The study was conducted in Poland, at the Department of Cardiology of the Regional Hospital of Louis Rydygier in Torun. The researchers analyzed delays in the treatment of patients with acute coronary syndrome. The study was conducted as a survey and examined 67 consecutively admitted patients treated invasively in a two-stage transport system. Data were analyzed statistically. Results: Two-stage transportation does not meet the timeframe guidelines for the treatment of patients with acute myocardial infarction. Intervals for the analyzed group of patients were statistically significant (p < 0.0001). Conclusions: Direct transportation of the patient to a reference center with an interventional cardiology laboratory significantly reduces in-hospital delay for patients with acute coronary syndrome. Perspectives: This article presents the results of two-stage transportation of patients with acute coronary syndrome. This measure could help clinicians who seek to assess the time needed for intervention. It also shows how important the time from the onset of chest pain is, and how it may contribute to patient disability, death or well-being.

  13. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Full Text Available Two-stage liver transplantation (LT has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC, followed by the creation of a temporary end-to-side porto-caval shunt (TPCS. The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, TPMHV can be considered as a safe and effective alternative to standard TPCS.

  14. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression the endogenous variables are not replaced by first-stage predictors; instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of the inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
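
    The mechanics of residual inclusion can be sketched in a few lines. This is our own simulated illustration, not the paper's empirical example: a Poisson outcome with an endogenous regressor, where a naive fit is badly biased and adding the first-stage residual recovers the true coefficient.

```python
import numpy as np

# Simulated data (hypothetical): c is an unobserved confounder that makes
# x endogenous in the outcome equation; z is the instrument.
rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)
c = rng.normal(size=n)
x = 0.5 * z + c + rng.normal(scale=0.5, size=n)
y = rng.poisson(np.exp(0.3 + 0.7 * x - c))   # true x-effect is 0.7

def poisson_fit(X, y, iters=40):
    """Poisson regression (log link) fitted by Newton's method."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(np.clip(X @ beta, -30, 30))
        beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
    return beta

ones = np.ones(n)

# Naive second stage: ignores endogeneity, so the x-coefficient is
# heavily attenuated by the confounder.
b_naive = poisson_fit(np.column_stack([ones, x]), y)

# Stage 1: regress the endogenous x on the instrument, keep the residuals.
Z = np.column_stack([ones, z])
resid = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]

# Stage 2 (2SRI): keep x itself and ADD the first-stage residual as a
# regressor, rather than replacing x by its prediction as 2SPS would.
b_2sri = poisson_fit(np.column_stack([ones, x, resid]), y)

print(f"naive x-coefficient: {b_naive[1]:.3f}")
print(f"2SRI  x-coefficient: {b_2sri[1]:.3f}  (true value 0.7)")
```

    In this construction the residual proxies for the confounder, so the 2SRI estimate lands near 0.7 while the naive estimate collapses toward zero.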

  15. Study on two stage activated carbon/HFC-134a based adsorption chiller

    Science.gov (United States)

    Habib, K.

    2013-06-01

    In this paper, a theoretical analysis of the performance of a thermally driven two-stage four-bed adsorption chiller utilizing low-grade waste heat at temperatures between 50°C and 70°C, in combination with a heat sink (cooling water) at 30°C, for air-conditioning applications is described. Activated carbon (AC) of type Maxsorb III paired with HFC-134a has been examined as the adsorbent/refrigerant pair. A FORTRAN simulation program was developed to analyze the influence of operating conditions (hot and cooling water temperatures and adsorption/desorption cycle times) on the cycle performance in terms of cooling capacity and COP. The main advantage of this two-stage chiller is that it can operate with smaller regenerating temperature lifts than other heat-driven single-stage chillers. Simulation results show that the two-stage chiller can be operated effectively with heat sources at 50°C and 70°C in combination with a coolant at 30°C.

  16. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu, E-mail: zhanglu1211@gmail.com; Sun, Xiangyang, E-mail: xysunbjfu@gmail.com

    2015-05-15

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  17. A Two-stage injection-locked magnetron for accelerators with superconducting cavities

    CERN Document Server

    Kazakevich, Grigory; Flanagan, Gene; Marhauser, Frank; Neubauer, Mike; Yakovlev, Vyacheslav; Chase, Brian; Nagaitsev, Sergey; Pasquinelli, Ralph; Solyak, Nikolay; Tupikov, Vitali; Wolff, Daniel

    2013-01-01

    A concept for a two-stage injection-locked CW magnetron intended to drive Superconducting Cavities (SC) for intensity-frontier accelerators has been proposed. The concept considers two magnetrons whose output power differs by 15-20 dB; the lower-power magnetron, frequency-locked from an external source, locks the higher-power magnetron. The injection-locked two-stage CW magnetron can be used as an RF power source for Fermilab's Project-X to feed each of the 1.3 GHz SC of the 8 GeV pulsed linac separately. We expect an output/locking power ratio of about 30-40 dB assuming operation in a pulsed mode with a pulse duration of ~ 8 ms and a repetition rate of 10 Hz. The experimental setup of a two-stage magnetron utilising CW, S-band, 1 kW tubes operating at pulse durations of 1-10 ms, and the results obtained, are presented and discussed in this paper.
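
    For a rough feel of the locking-power trade-off, Adler's classic relation for the locking bandwidth of an injection-locked oscillator can be evaluated at the quoted 15-20 dB power difference. This is our own back-of-envelope illustration, and the loaded Q below is an assumed placeholder, not a measured value from the setup.

```python
import math

def locking_bandwidth(f0, q_loaded, power_ratio_db):
    """Adler's relation: delta_f ~ f0 / (2 Q) * sqrt(P_inj / P_out)."""
    ratio = 10 ** (power_ratio_db / 10)   # injected/output power ratio
    return f0 / (2 * q_loaded) * math.sqrt(ratio)

f0 = 1.3e9        # SC cavity frequency from the abstract [Hz]
q_loaded = 100.0  # assumed loaded Q of the magnetron (hypothetical)

# The abstract quotes a 15-20 dB power difference between the two stages:
for db in (-15, -20):
    bw = locking_bandwidth(f0, q_loaded, db)
    print(f"P_inj/P_out = {db} dB -> locking bandwidth ~ {bw / 1e3:.0f} kHz")
```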

  18. Study on the Control Algorithm of Two-Stage DC-DC Converter for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Changhao Piao

    2014-01-01

    Full Text Available Fast response, high efficiency, and good reliability are very important characteristics of dc-dc converters for electric vehicles (EVs). The two-stage dc-dc converter is a topology that can offer these characteristics to EVs. Nonlinear control is currently an active area of research in the field of dc-dc converter control algorithms; however, very few papers address two-stage converters for EVs. In this paper, a fixed switching frequency sliding mode (FSFSM) controller and a double-integral sliding mode (DISM) controller for a two-stage dc-dc converter are proposed, with a conventional linear (lag) controller chosen as the comparison. The performance of the proposed FSFSM controller is compared with that obtained by the lag controller. The satisfactory simulation and experiment results show that the FSFSM controller is capable of offering good large-signal operation with fast dynamic response to the converter. Finally, further simulation results are presented to prove that the DISM controller is a promising method for eliminating the steady-state error of the converter.
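
    A minimal simulation conveys the flavor of sliding mode control of a dc-dc stage. This is a generic hysteresis-type sliding mode on an ideal buck converter with invented component values; it is not the FSFSM or DISM design proposed in the paper.

```python
# Hysteresis sliding mode control of an ideal buck converter (a sketch;
# all component values are assumed for illustration).

VIN, VREF = 48.0, 12.0          # input and reference output voltage [V]
L, C, R = 100e-6, 470e-6, 6.0   # inductor, capacitor, load (assumed)
DT, STEPS = 1e-7, 400_000       # 40 ms of simulated time

iL, vC = 0.0, 0.0
for _ in range(STEPS):
    # Sliding surface: output-voltage error minus a damping term on the
    # capacitor current (gain chosen near sqrt(L/C) for illustration).
    iC = iL - vC / R
    s = (VREF - vC) - 0.5 * iC
    u = 1.0 if s > 0 else 0.0    # switch state from the sign of s

    # Forward-Euler integration of the ideal buck dynamics
    iL += DT * (u * VIN - vC) / L
    vC += DT * iC / C

print(f"steady-state output ~ {vC:.2f} V (target {VREF} V)")
```

    On the sliding surface the output voltage decays exponentially toward the reference with time constant C/2, which is the kind of large-signal behavior the abstract attributes to the sliding mode designs.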

  19. A novel sampling design to explore gene-longevity associations

    DEFF Research Database (Denmark)

    De Rango, Francesco; Dato, Serena; Bellizzi, Dina

    2008-01-01

    To investigate the genetic contribution to familial similarity in longevity, we set up a novel experimental design where cousin-pairs born from siblings who were concordant or discordant for the longevity trait were analyzed. To check this design, two chromosomal regions already known to encompass longevity-related genes were examined: 6p21.3 (genes TNFalpha, TNFbeta, HSP70.1) and 11p15.5 (genes SIRT3, HRAS1, IGF2, INS, TH). Population pools of 1.6, 2.3 and 2.0 million inhabitants were screened, respectively, in Denmark, France and Italy to identify families matching the design requirements. A total of 234 trios composed of one centenarian, his/her child and a child of his/her concordant or discordant sib were collected. By using population-specific allele frequencies, we reconstructed haplotype phase and estimated the likelihood of Identical By Descent (IBD) haplotype sharing in cousin-pairs born…

  20. Designing waveforms for temporal encoding using a frequency sampling method

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2007-01-01

    , the amplitude spectrum of the transmitted waveform can be optimized, such that most of the energy is transmitted where the transducer has large amplification. To test the design method, a waveform was designed for a BK8804 linear array transducer. The resulting nonlinear frequency modulated waveform… (for the linear frequency modulated signal) were tested for both waveforms in simulation with respect to the Doppler frequency shift occurring when probing moving objects. It was concluded that the Doppler effect of moving targets does not significantly degrade the filtered output. Finally, in vivo measurements…
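
    The frequency sampling idea can be sketched generically: specify the desired amplitude spectrum on a DFT grid, add a phase that spreads the energy in time, and inverse-transform. All numbers below (sampling rate, passband) are assumptions for illustration; the paper's design used the measured BK8804 transducer response rather than the Gaussian passband assumed here.

```python
import numpy as np

# Frequency-sampling waveform synthesis (generic sketch, assumed values).
fs = 40e6                  # sampling rate [Hz] (assumed)
n = 1024                   # waveform length in samples
duration = n / fs
f = np.fft.rfftfreq(n, d=1 / fs)

f0, bw = 7e6, 4e6          # assumed center frequency and bandwidth
f_lo = f0 - bw / 2

# Desired amplitude spectrum: a Gaussian passband imitating a transducer
# response, so energy lands where the (assumed) amplification is large.
amplitude = np.exp(-0.5 * ((f - f0) / (bw / 2.355)) ** 2)

# Quadratic spectral phase = linear group delay: each frequency is
# emitted at a different time, giving a chirp-like waveform.
phase = -np.pi * duration * (f - f_lo) ** 2 / bw

waveform = np.fft.irfft(amplitude * np.exp(1j * phase), n)
peak_bin = np.argmax(np.abs(np.fft.rfft(waveform)))
print(f"spectral peak at {f[peak_bin] / 1e6:.2f} MHz (designed for {f0 / 1e6} MHz)")
```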

  1. Practical Tools for Designing and Weighting Survey Samples

    Science.gov (United States)

    Valliant, Richard; Dever, Jill A.; Kreuter, Frauke

    2013-01-01

    Survey sampling is fundamentally an applied field. The goal in this book is to put an array of tools at the fingertips of practitioners by explaining approaches long used by survey statisticians, illustrating how existing software can be used to solve survey problems, and developing some specialized software where needed. This book serves at least…

  3. Effects-Driven Participatory Design: Learning from Sampling Interruptions

    DEFF Research Database (Denmark)

    Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten

    2017-01-01

    a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians’ intra...

  5. DESIGN, DEVELOPMENT AND FIELD DEPLOYMENT OF A TELEOPERATED SAMPLING SYSTEM

    Energy Technology Data Exchange (ETDEWEB)

    Dalmaso, M; Robert Fogle, R; Tony Hicks, T; Larry Harpring, L; Daniel Odell, D

    2007-11-09

    A teleoperated sampling system for the identification, collection and retrieval of samples following the detonation of an Improvised Nuclear Device (IND) or Radiological Dispersion Device (RDD) has been developed and tested in numerous field exercises. The system has been developed as part of the Defense Threat Reduction Agency's (DTRA) National Technical Nuclear Forensic (NTNF) Program. The system is based on a Remotec ANDROS Mark V-A1 platform. Extensive modifications and additions have been incorporated into the platform to enable it to meet the mission requirements. The Defense Science Board Task Force on Unconventional Nuclear Warfare Defense, 2000 Summer Study Volume III report recommended that the Department of Defense (DOD) improve nuclear forensics capabilities to achieve accurate and fast identification and attribution. One of the strongest elements of protection is deterrence through the threat of reprisal, but to accomplish this objective a more rapid and authoritative attribution system is needed. The NTNF program provides the capability for attribution. Early in the NTNF program, it was recognized that there would be a desire to collect debris samples for analysis as soon as possible after a nuclear event. Based on nuclear test experience, it was recognized that mean radiation fields associated with even low-yield events could be several thousand R/hr near the detonation point for some time after the detonation. In anticipation of pressure to rapidly sample debris near the crater, considerable effort is being devoted to developing a remotely controlled vehicle that could enter the high-radiation-field area and collect one or more samples for subsequent analysis.

  6. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    Energy Technology Data Exchange (ETDEWEB)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L., E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece); Vassiou, K. [Department of Anatomy, School of Medicine, University of Thessaly, Larissa 41500 (Greece)

    2015-10-15

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method for MC clusters is investigated. The first stage is targeted at accurate and time-efficient segmentation of the majority of the particles of an MC cluster, by means of a level set method. The second stage is targeted at shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method, in terms of inter- and intraobserver agreement, was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists’ segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted at capturing morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± Standard Error) utilizing

  7. An improved two stages dynamic programming/artificial neural network solution model to the unit commitment of thermal units

    Energy Technology Data Exchange (ETDEWEB)

    Abbasy, N.H. [College of Technological Studies, Shuwaikh (Kuwait); Elfayoumy, M.K. [Univ. of Alexandria (Egypt). Dept. of Electrical Engineering

    1995-11-01

    An improved two-stage solution model for the unit commitment of thermal units is developed in this paper. In the first stage, a pre-schedule is generated using a high-quality trained artificial neural network (ANN). A dynamic programming (DP) algorithm is implemented and applied in the second stage for the final determination of the commitment states. The developed solution model avoids the complications imposed by the generation of the variable window structure proposed by other techniques. A unified approach for the treatment of the ANN is also developed in the paper. The validity of the proposed technique is proved via numerical application to both sample and small practical power systems. 12 refs, 9 tabs
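
    The DP stage can be illustrated on a toy instance. The sketch below omits the ANN pre-scheduling stage and minimum up/down-time constraints, and all unit data are invented; the DP state is simply the on/off bitmask of the committed units.

```python
from itertools import product

# Toy dynamic-programming unit commitment (illustrative data only).
units = [  # (p_min MW, p_max MW, cost $/MWh, startup cost $) -- hypothetical
    (50, 200, 20.0, 500.0),
    (20, 100, 25.0, 200.0),
    (10,  50, 40.0,  50.0),
]
demand = [120, 250, 300, 150]  # hourly load in MW

def dispatch_cost(mask, load):
    """Cheapest hourly dispatch of the committed units (merit order is
    optimal here because marginal costs are constant per MWh)."""
    on = [u for i, u in enumerate(units) if mask >> i & 1]
    if not on or not (sum(u[0] for u in on) <= load <= sum(u[1] for u in on)):
        return None                       # infeasible commitment
    cost = sum(u[0] * u[2] for u in on)   # every unit runs at least p_min
    rest = load - sum(u[0] for u in on)
    for u in sorted(on, key=lambda u: u[2]):   # fill cheapest units first
        take = min(rest, u[1] - u[0])
        cost += take * u[2]
        rest -= take
    return cost

def startup_cost(prev, cur):
    return sum(u[3] for i, u in enumerate(units)
               if cur >> i & 1 and not prev >> i & 1)

# DP over commitment bitmasks; all units are off before hour 0.
best = {0: 0.0}
for load in demand:
    nxt = {}
    for prev, acc in best.items():
        for cur in range(2 ** len(units)):
            c = dispatch_cost(cur, load)
            if c is not None:
                total = acc + c + startup_cost(prev, cur)
                if total < nxt.get(cur, float("inf")):
                    nxt[cur] = total
    best = nxt

print("minimum total cost over the horizon: $", min(best.values()))
```

    In the paper's scheme, the ANN pre-schedule narrows the set of candidate commitments that a DP like this has to search.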

  8. Measuring Radionuclides in the environment: radiological quantities and sampling designs

    Energy Technology Data Exchange (ETDEWEB)

    Voigt, G. [ed.] [GSF - Forschungszentrum fuer Umwelt und Gesundheit Neuherberg GmbH, Oberschleissheim (Germany). Inst. fuer Strahlenschutz

    1998-10-01

    One aim of the workshop was to support and provide an ICRU (International Commission on Radiation Units and Measurements) report committee with current information on techniques, data and knowledge of modern radioecology when radionuclides are to be measured in the environment. It has been increasingly recognised that some studies in radioecology, especially those involving both field sampling and laboratory measurements, have not paid adequate attention to the problem of obtaining representative, unbiased samples. This can greatly affect the quality of scientific interpretation and the ability to manage the environment. Further, as the discipline of radioecology has developed, it has seen a growth in the number of quantities and units used, some of which are ill-defined and non-standardised. (orig.)

  9. Restricted Repetitive Sampling in Designing of Control Charts

    Directory of Open Access Journals (Sweden)

    Muhammad Anwar Mughal

    2017-06-01

    Full Text Available In this article, criteria are defined to classify existing repetitive sampling into soft, moderate and strict conditions. This classification is based on a suggested ratio of c2 (the constant used in the repetitive limits) to c1 (the constant used in the control limits), given in slabs. A restricted criterion is then devised on top of the existing repetitive sampling. Embedding the proposed scheme in the control chart makes it highly efficient in detecting shifts earlier, and it detects even smaller shifts at smaller ARLs. To give the user the best choice, the restricted criterion is further categorized into softly restricted, moderately restricted and strictly restricted. The restricted conditions depend on the value of the restriction parameter 'm', which varies from 2 to 6. The application of the proposed scheme to selected cases is given in self-explanatory tables.
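
    The repetitive-sampling decision rule can be illustrated with a small Monte Carlo. The chart constants, and our reading of the restriction parameter m as a cap on the number of repeat samples, are assumptions for illustration rather than the article's tabulated designs.

```python
import random
import statistics

def chart_decision(mu, n=5, c1=3.0, c2=1.5, m=4, rng=random.Random(0)):
    """One charting decision (True = out-of-control signal). Beyond the
    c1 sigma-limits: signal; inside the c2 limits: in control; in
    between: resample, at most m times (the 'restricted' cap)."""
    for _ in range(m):
        xbar = statistics.fmean(rng.gauss(mu, 1.0) for _ in range(n))
        zscore = xbar * n ** 0.5          # in-control mean 0, sigma 1
        if abs(zscore) >= c1:
            return True
        if abs(zscore) <= c2:
            return False
        # between c2 and c1: draw a fresh sample and repeat
    return False                          # cap reached without a signal

def average_run_length(mu, trials=800, seed=7, **kw):
    """Monte Carlo estimate of the ARL at process mean mu."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        run = 1
        while not chart_decision(mu, rng=rng, **kw):
            run += 1
        total += run
    return total / trials

arl_in_control = average_run_length(0.0)
arl_shifted = average_run_length(1.0)
print(f"in-control ARL ~ {arl_in_control:.0f}")
print(f"ARL after a 1-sigma mean shift ~ {arl_shifted:.1f}")
```

    With these assumed constants the in-control ARL stays in the usual few-hundreds range while a one-sigma shift is flagged within a couple of samples, which is the qualitative behavior the abstract describes.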

  10. Identification of residual non-biodegradable organic compounds in wastewater effluent after two-stage biochemical treatment

    Directory of Open Access Journals (Sweden)

    Xuqing Liu

    2016-01-01

    Full Text Available The main non-biodegradable compounds (soluble microbial products, SMP) in wastewater from the Maotai aromatic factories, located in the Chishui river region, were analyzed by UV spectroscopy, and by solid-phase extraction followed by gas chromatography coupled to mass spectrometry, after a two-stage biochemical treatment. The UV-Vis spectra revealed that the wastewater contained conjugated systems with two double bonds (conjugated dienes, α,β-unsaturated ketones, etc.) and simple non-conjugated chromophores containing n electrons from carbonyl groups or the like. The residual non-biodegradable organic substances were identified by SPE-GC/MS analysis as complex polymers containing hydroxyl, carbonyl, and carboxyl functional groups with multiple connections to either benzene rings or heterocyclic rings. As these compounds are difficult to remove by conventional biochemical treatments, our findings provide a scientific basis for the design of efficient new strategies to remove SMP from wastewater.

  11. Economically efficient operation of two-stage fluidized-bed combustion systems; Wirtschaftliche Betriebsweise von zweistufigen Wirbelschicht-Verbrennungsanlagen

    Energy Technology Data Exchange (ETDEWEB)

    Lehmann, E. [Hoelter-ABT GmbH, Essen (Germany)

    1996-11-01

    Two-stage stationary fluidized-bed combustion is an efficient technology for the thermal treatment of residues. These include, e.g., sorted residues from industrial processes, materials soiled with coatings, varnishes or glues, biomass (wood, straw, hay) and packaging materials. A simple and robust plant design with few moving parts ensures high availability, good performance, and low investment and operating costs. The modular structure, built from standardized function modules, contributes to this. (orig.)

  12. A CFD Analysis of Steam Flow in the Two-Stage Experimental Impulse Turbine with the Drum Rotor Arrangement

    Science.gov (United States)

    Yun, Kukchol; Tajč, L.; Kolovratník, M.

    2016-03-01

    The aim of the paper is to present the CFD analysis of the steam flow in the two-stage turbine with a drum rotor and balancing slots. The balancing slot is a part of every rotor blade and it can be used in the same way as balancing holes on the classical rotor disc. The main attention is focused on the explanation of the experimental knowledge about the impact of the slot covering and uncovering on the efficiency of the individual stages and the entire turbine. The pressure and temperature fields and the mass steam flows through the shaft seals, slots and blade cascades are calculated. The impact of the balancing slots covering or uncovering on the reaction and velocity conditions in the stages is evaluated according to the pressure and temperature fields. We have also concentrated on the analysis of the seal steam flow through the balancing slots. The optimized design of the balancing slots has been suggested.

  13. Design of the CERN MEDICIS Collection and Sample Extraction System

    CERN Document Server

    Brown, Alexander

MEDICIS is a new facility at CERN ISOLDE that aims to produce radio-isotopes for medical research. Possible designs for the collection and transport system for radio-isotopes were investigated. A system using readily available equipment was devised with the aim of keeping costs to a minimum whilst maintaining the highest safety standards. FLUKA, a Monte Carlo radiation transport code, was used to simulate the radiation from the isotopes to be collected. By simulating the collection of all isotopes of interest to CERN's MEDICIS facility, 44Sc was found to give the largest dose. The simulations helped guide the amount of shielding used in the final design. Swiss regulations stipulating the allowed activity levels of individual isotopes were also considered within the body of the work.

  14. Design-Based Sample and Probability Law-Assumed Sample: Their Role in Scientific Investigation

    Science.gov (United States)

    Ojeda, Mario Miguel; Sahai, Hardeo

    2002-01-01

    Students in statistics service courses are frequently exposed to dogmatic approaches for evaluating the role of randomization in statistical designs, and inferential data analysis in experimental, observational and survey studies. In order to provide an overview for understanding the inference process, in this work some key statistical concepts in…

  15. Prevalence of Mixed-Methods Sampling Designs in Social Science Research

    Science.gov (United States)

    Collins, Kathleen M. T.

    2006-01-01

    The purpose of this mixed-methods study was to document the prevalence of sampling designs utilised in mixed-methods research and to examine the interpretive consistency between interpretations made in mixed-methods studies and the sampling design used. Classification of studies was based on a two-dimensional mixed-methods sampling model. This…

  16. A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research

    Science.gov (United States)

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.

    2007-01-01

    A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…

  17. Establishing Interpretive Consistency When Mixing Approaches: Role of Sampling Designs in Evaluations

    Science.gov (United States)

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.

    2013-01-01

    The goal of this chapter is to recommend quality criteria to guide evaluators' selections of sampling designs when mixing approaches. First, we contextualize our discussion of quality criteria and sampling designs by discussing the concept of interpretive consistency and how it impacts sampling decisions. Embedded in this discussion are…

  18. Optimal adaptive group sequential design with flexible timing of sample size determination.

    Science.gov (United States)

    Cui, Lu; Zhang, Lanju; Yang, Bo

    2017-04-26

Flexible sample size designs, including group sequential and sample size re-estimation designs, have been used as alternatives to fixed sample size designs to achieve more robust statistical power and better trial efficiency. In this work, a new representation of the sample size re-estimation design suggested by Cui et al. [5,6] is introduced as an adaptive group sequential design with flexible timing of sample size determination. This generalized adaptive group sequential design allows one-time sample size determination either before the start of or in the mid-course of a clinical study. The new approach leads to possible design optimization on an expanded space of design parameters. Its equivalence to the sample size re-estimation design proposed by Cui et al. provides further insight into re-estimation design and helps to address common confusions and misunderstandings. Issues in designing flexible sample size trials, including the design objective, performance evaluation and implementation, are touched upon with an example to illustrate. Copyright © 2017. Published by Elsevier Inc.
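The one-time sample size determination described above can be illustrated with the standard two-sample normal-approximation formula, n = 2(z_α + z_β)²σ²/δ². This is only a hedged sketch of the general idea, not the optimized design of Cui et al.; the interim data, the assumed effect size `DELTA`, and the helper `reestimated_n` are invented for illustration:

```python
from math import ceil
from statistics import stdev

# Illustrative constants (assumptions, not from the paper):
Z_ALPHA = 1.96   # two-sided alpha = 0.05
Z_BETA = 0.84    # target power ~ 0.80
DELTA = 1.0      # assumed clinically relevant difference

def reestimated_n(interim_data):
    """Recompute the per-group sample size from the interim
    variance estimate: n = 2 * (z_a + z_b)^2 * sigma^2 / delta^2."""
    sigma = stdev(interim_data)
    return ceil(2 * (Z_ALPHA + Z_BETA) ** 2 * sigma ** 2 / DELTA ** 2)

interim = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 12.8, 9.9]
print(reestimated_n(interim))  # → 43
```

In an actual two-stage design the moment of this recomputation (before the trial or at an interim look) is itself a design parameter, which is the flexibility the paper formalizes.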

  19. A two stage algorithm for target and suspect analysis of produced water via gas chromatography coupled with high resolution time of flight mass spectrometry.

    Science.gov (United States)

    Samanipour, Saer; Langford, Katherine; Reid, Malcolm J; Thomas, Kevin V

    2016-09-09

Gas chromatography coupled with high resolution time of flight mass spectrometry (GC-HR-TOFMS) has gained popularity for the target and suspect analysis of complex samples. However, confident detection of target/suspect analytes in complex samples, such as produced water, remains a challenging task. Here we report on the development and validation of a two stage algorithm for the confident target and suspect analysis of produced water extracts. We performed both target and suspect analysis for 48 standards, a mixture of 28 aliphatic hydrocarbons and 20 alkylated phenols, in 3 produced water extracts. The two stage algorithm produces a chemical standard database of spectra in the first stage, which is used for target and suspect analysis during the second stage. The first stage is carried out in five steps by an algorithm referred to here as the unique ion extractor (UIE). In the first step, the m/z values in the spectrum of a standard that do not belong to that standard are removed in order to produce a clean spectrum; in the last step the cleaned spectrum is calibrated. The Dot-product algorithm, during the second stage, uses the cleaned and calibrated spectra of the standards for both target and suspect analysis. To validate the two stage algorithm, we performed the target analysis of the 48 standards in all 3 samples via conventional methods. The two stage algorithm was demonstrated to be more robust, reliable, and less sensitive to the signal-to-noise ratio (S/N) than the conventional method, and the Dot-product algorithm showed a lower potential for producing false positives than conventional methods when dealing with complex samples. We also evaluated the effect of mass accuracy on the performance of the Dot-product algorithm. Our results indicate the crucial importance of HR-MS data and mass accuracy for confident suspect analysis in complex samples.
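The second-stage matching relies on a dot-product (cosine) similarity between a measured spectrum and a cleaned library spectrum. A minimal sketch of that score over binned m/z intensities follows; the binning, the example spectra, and the function name are illustrative assumptions, not the published algorithm or its weighting:

```python
import math

def dot_product_score(library, measured):
    """Cosine (dot-product) similarity between two spectra given as
    {m/z bin: intensity} dicts; 1.0 means perfectly aligned spectra."""
    bins = set(library) | set(measured)
    a = [library.get(b, 0.0) for b in bins]
    c = [measured.get(b, 0.0) for b in bins]
    num = sum(x * y for x, y in zip(a, c))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in c))
    return num / den if den else 0.0

ref = {57.070: 100.0, 71.086: 60.0, 85.101: 35.0}               # cleaned standard spectrum
obs = {57.070: 95.0, 71.086: 63.0, 85.101: 30.0, 91.054: 5.0}   # measured sample spectrum
print(round(dot_product_score(ref, obs), 3))  # → 0.998
```

A threshold on this score (together with retention-time agreement) is what turns the similarity into a target/suspect detection decision.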

  20. Dual-Filter Estimation for Rotating-Panel Sample Designs

    Directory of Open Access Journals (Sweden)

    Francis A. Roesch

    2017-06-01

Full Text Available Dual-filter estimators are described and tested for use in the annual estimation for national forest inventories. The dual-filter approach involves the use of a moving window estimator in the first pass, which is used as input to Theil’s mixed estimator in the second pass. The moving window and dual-filter estimators are tested along with two other estimators in a sampling simulation of 152 simulated populations, which were developed from data collected in 38 states and Puerto Rico by the Forest Inventory and Analysis Program of the USDA Forest Service. The dual-filter estimators are shown to almost always provide some reduction in mean squared error (MSE) relative to the first pass moving window estimators.
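The first-pass moving window estimator referred to here simply averages the panels measured within a window of years centered on the target year. A minimal sketch, with an invented 5-year rotating panel and window width (the second-pass Theil mixed estimator is omitted):

```python
def moving_window_estimate(annual_panels, year, half_width=2):
    """First-pass estimator: mean of the panel means observed within
    [year - half_width, year + half_width]. annual_panels maps
    year -> mean of the panel measured that year."""
    window = [v for y, v in annual_panels.items()
              if abs(y - year) <= half_width]
    return sum(window) / len(window)

# Illustrative 5-year rotating panel: one panel measured per year
panels = {2010: 102.0, 2011: 98.0, 2012: 105.0, 2013: 101.0, 2014: 99.0}
print(moving_window_estimate(panels, 2012))  # → 101.0
```

The trade-off the paper addresses is visible even here: the window smooths panel-to-panel noise but lags behind real annual change, which is what the second filtering pass is meant to correct.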

  1. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous…
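The non-robust method-of-moments estimator referred to above is the Matheron estimator, γ̂(h) = Σ(z_i − z_j)² / (2N(h)), averaged over point pairs whose separation falls in lag class h. A minimal 1-D sketch (the coordinates, values, and lag classes are illustrative):

```python
from itertools import combinations

def empirical_variogram(points, values, lag_edges):
    """Matheron method-of-moments estimator on 1-D coordinates:
    gamma(h) = sum((z_i - z_j)^2) / (2 * N(h)) for each lag class."""
    sums = [0.0] * (len(lag_edges) - 1)
    counts = [0] * (len(lag_edges) - 1)
    for (xi, zi), (xj, zj) in combinations(zip(points, values), 2):
        h = abs(xi - xj)
        for k in range(len(lag_edges) - 1):
            if lag_edges[k] <= h < lag_edges[k + 1]:
                sums[k] += (zi - zj) ** 2
                counts[k] += 1
                break
    return [s / (2 * n) if n else None for s, n in zip(sums, counts)]

x = [0.0, 1.0, 2.0, 3.0, 4.0]
z = [5.0, 6.0, 4.5, 7.0, 5.5]
print(empirical_variogram(x, z, lag_edges=[0.5, 1.5, 2.5]))  # → [1.46875, 0.375]
```

Because each lag class averages squared differences, a single large outlier inflates every class it participates in, which is exactly why the study finds this estimator needs far more points than robust or likelihood-based alternatives.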

  2. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade, and it continues. The purpose of this paper is to direct the focus back on how the power is processed and not so much on the number of stages or the amount of power processed. The performance of the basic DC/DC topologies is reviewed with focus on the component stress. The knowledge obtained in this process is used to review some examples of the alternative PFC solutions and compare these solutions with the basic two-stage PFC solution.

  3. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest in extending coverage of a minimum wage to the non-union sector. Furthermore, the union sector does not seek to increase the non-union wage to a level above the market-clearing wage. In fact, it is optimal for the union sector to impose a market-clearing wage on the non-union sector. Finally, coverage…

  4. Two stage DOA and Fundamental Frequency Estimation based on Subspace Techniques

    DEFF Research Database (Denmark)

    Zhou, Zhenhua; Christensen, Mads Græsbøll; So, Hing-Cheung

    2012-01-01

In this paper, the problem of fundamental frequency and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is addressed. The estimation procedure consists of two stages. Firstly, by making use of the subspace technique and Markov-based eigenanalysis, a multi-channel optimally weighted harmonic multiple signal classification (MCOW-HMUSIC) estimator is devised for the estimation of fundamental frequencies. Secondly, the spatio-temporal multiple signal classification (ST-MUSIC) estimator is proposed for the estimation of DOA with the estimated frequencies. Statistical evaluation with synthetic signals shows the high accuracy of the proposed methods compared with their non-weighting versions.

  5. Two-Stage Bulk Electron Heating in the Diffusion Region of Anti-Parallel Symmetric Reconnection

    CERN Document Server

    Le, Ari; Daughton, William

    2016-01-01

Electron bulk energization in the diffusion region during anti-parallel symmetric reconnection entails two stages. First, the inflowing electrons are adiabatically trapped and energized by an ambipolar parallel electric field. Next, the electrons gain energy from the reconnection electric field as they undergo meandering motion. These collisionless mechanisms have been described previously, and they lead to highly structured electron velocity distributions. Nevertheless, a simplified control-volume analysis gives estimates for how the net effective heating scales with the upstream plasma conditions in agreement with fully kinetic simulations and spacecraft observations.

  6. Use of two-stage membrane countercurrent cascade for natural gas purification from carbon dioxide

    Science.gov (United States)

    Kurchatov, I. M.; Laguntsov, N. I.; Karaseva, M. D.

    2016-09-01

    Membrane technology scheme is offered and presented as a two-stage countercurrent recirculating cascade, in order to solve the problem of natural gas dehydration and purification from CO2. The first stage is a single divider, and the second stage is a recirculating two-module divider. This scheme allows natural gas to be cleaned from impurities, with any desired degree of methane extraction. In this paper, the optimal values of the basic parameters of the selected technological scheme are determined. An estimation of energy efficiency was carried out, taking into account the energy consumption of interstage compressor and methane losses in energy units.

  7. Forecasting long memory series subject to structural change: A two-stage approach

    DEFF Research Database (Denmark)

    Papailias, Fotis; Dias, Gustavo Fruet

    2015-01-01

A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural change and yields good forecasting results.

  8. Two-Stage Maximum Likelihood Estimation (TSMLE for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE technique suited for multitone code division multiple access (MT-CDMA system. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER of the system, over Rayleigh and Ricean fading channels. The analytical model is derived for quadrature phase shift keying (QPSK modulation technique by taking into account the number of tones, signal bandwidth (BW, bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  9. Two-Stage Electric Vehicle Charging Coordination in Low Voltage Distribution Grids

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

Increased environmental awareness in recent years has encouraged rapid growth of renewable energy sources (RESs), especially solar PV and wind. One of the effective solutions to compensate for intermittencies in generation from the RESs is to enable consumer participation in demand response (DR). Being a sizable rated element, electric vehicles (EVs) can offer a great deal of demand flexibility in future intelligent grids. This paper first investigates and analyzes the driving patterns and charging requirements of EVs. Secondly, a two-stage charging algorithm, namely local adaptive control…

  10. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly… of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage…

  11. The global stability of a delayed predator-prey system with two stage-structure

    Energy Technology Data Exchange (ETDEWEB)

    Wang Fengyan [College of Science, Jimei University, Xiamen Fujian 361021 (China)], E-mail: wangfy68@163.com; Pang Guoping [Department of Mathematics and Computer Science, Yulin Normal University, Yulin Guangxi 537000 (China)

    2009-04-30

    Based on the classical delayed stage-structured model and Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system, where prey and predator have two stages, an immature stage and a mature stage. The time delays are the time lengths between the immature's birth and maturity of prey and predator species. Results on global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  12. A Two-Stage Assembly-Type Flowshop Scheduling Problem for Minimizing Total Tardiness

    Directory of Open Access Journals (Sweden)

    Ju-Yong Lee

    2016-01-01

    Full Text Available This research considers a two-stage assembly-type flowshop scheduling problem with the objective of minimizing the total tardiness. The first stage consists of two independent machines, and the second stage consists of a single machine. Two types of components are fabricated in the first stage, and then they are assembled in the second stage. Dominance properties and lower bounds are developed, and a branch and bound algorithm is presented that uses these properties and lower bounds as well as an upper bound obtained from a heuristic algorithm. The algorithm performance is evaluated using a series of computational experiments on randomly generated instances and the results are reported.

  13. Biomass waste gasification - can be the two stage process suitable for tar reduction and power generation?

    Science.gov (United States)

    Sulc, Jindřich; Stojdl, Jiří; Richter, Miroslav; Popelka, Jan; Svoboda, Karel; Smetana, Jiří; Vacek, Jiří; Skoblja, Siarhei; Buryan, Petr

    2012-04-01

A pilot-scale gasification unit with a novel co-current updraft arrangement in the first stage and counter-current downdraft in the second stage was developed and used to study the effects of two-stage gasification of biomass (wood pellets), in comparison with one-stage gasification, on fuel gas composition and attainable gas purity. Significant producer gas parameters (gas composition, heating value, content of tar compounds, content of inorganic gas impurities) were compared for the two-stage and the one-stage arrangement, the latter with only the upward moving bed (co-current updraft). The main novel features of the gasifier include a grate-less reactor, an upward moving bed of biomass particles (e.g. pellets) driven by a screw elevator with variable rotational speed, and a gradually expanding diameter of the cylindrical reactor above the upper end of the screw. The gasifier concept and arrangement are considered suitable for a thermal power range of 100-350 kW(th). The second stage of the gasifier served mainly for destruction/reforming of tar compounds at increased temperature (around 950°C) and for the gasification reaction of the fuel gas with char. The second stage used additional combustion of the fuel gas with preheated secondary air to attain a higher temperature and faster gasification of the char remaining from the first stage. The measurements of gas composition and tar compound contents confirmed the superiority of the two-stage gasification system: a drastic decrease, by 1-2 orders of magnitude, of aromatic compounds with two or more benzene rings. On the other hand, two-stage gasification (with overall ER = 0.71) led to a substantial reduction of the gas heating value (LHV = 3.15 MJ/Nm3), an elevation of gas volume, and an increase of the nitrogen content in the fuel gas. The increased temperature (>950°C) at the entrance to the char bed also caused a substantial decrease of the ammonia content in the fuel gas. The char with higher content of ash leaving the…

  14. Two-stage continuous fermentation of Saccharomycopsis fibuligeria and Candida utilis.

    Science.gov (United States)

    Admassu, W; Korus, R A; Heimsch, R C

    1983-11-01

Biomass production and carbohydrate reduction were determined for a two-stage continuous fermentation process with a simulated potato processing waste feed. The amylolytic yeast Saccharomycopsis fibuligera was grown in the first stage and a mixed culture of S. fibuligera and Candida utilis was maintained in the second stage. All conditions for the first and second stages were fixed except that the flow of medium to the second stage was varied. Maximum biomass production occurred at a second-stage dilution rate, D(2), of 0.27 h⁻¹. Carbohydrate reduction was inversely proportional to D(2) between 0.10 and 0.35 h⁻¹.

  15. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    OpenAIRE

    Ladan Jamshidy; Hamid Reza Mozaffari; Payam Faraji; Roohollah Sharifi

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by standard method for full crowns with processed preparation finish line of 1 mm depth and convergence angle of 3-4°. Impression was made 20 times with one-stage technique and 20 times with ...

  16. An Investigation on the Formation of Carbon Nanotubes by Two-Stage Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    M. S. Shamsudin

    2012-01-01

    Full Text Available High density of carbon nanotubes (CNTs has been synthesized from agricultural hydrocarbon: camphor oil using a one-hour synthesis time and a titanium dioxide sol gel catalyst. The pyrolysis temperature is studied in the range of 700–900°C at increments of 50°C. The synthesis process is done using a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results showed that structural properties of CNT are highly dependent on pyrolysis temperature changes.

  17. Fast detection of lead dioxide (PbO2) in chlorinated drinking water by a two-stage iodometric method.

    Science.gov (United States)

    Zhang, Yan; Zhang, Yuanyuan; Lin, Yi-Pin

    2010-02-15

Lead dioxide (PbO2) is an important corrosion product associated with lead contamination in drinking water. Quantification of PbO2 in water samples has proven challenging due to the incomplete dissolution of PbO2 during sample preservation and digestion. In this study, we present a simple iodometric method for fast detection of PbO2 in chlorinated drinking water. PbO2 can oxidize iodide to form triiodide (I3−), a yellow-colored anion that can be detected by UV-vis spectrometry. Complete reduction of up to 20 mg/L PbO2 can be achieved within 10 min at pH 2.0 and KI = 4 g/L. Free chlorine can oxidize iodide and cause interference. However, this interference can be accounted for by a two-stage pH adjustment, allowing free chlorine to react completely with iodide at ambient pH, followed by sample acidification to pH 2.0 to accelerate iodide oxidation by PbO2. This method showed good recoveries of PbO2 (90-111%) in chlorinated water samples with concentrations ranging from 0.01 to 20 mg/L. In chloraminated water, the method is limited due to incomplete quenching of monochloramine by iodide at neutral to slightly alkaline pH values. The interference of other particles that may be present in the distribution system was also investigated.

  18. Efficiency and hardware comparison of analog control-based and digital control-based 70 W two-stage power factor corrector and DC-DC converters

    DEFF Research Database (Denmark)

    Török, Lajos; Munk-Nielsen, Stig

    2011-01-01

A comparison of an analog and a digital controller driven 70 W two-stage power factor corrector converter is presented. Both controllers are operated in average current-mode control for the PFC and peak current control for the DC-DC converter. Digital controller design and converter modeling are described. Results show that digital control can compete with the analog one in efficiency, PFC and THD.

  19. On effects of trawling, benthos and sampling design.

    Science.gov (United States)

    Gray, John S; Dayton, Paul; Thrush, Simon; Kaiser, Michel J

    2006-08-01

    The evidence for the wider effects of fishing on the marine ecosystem demands that we incorporate these considerations into our management of human activities. The consequences of the direct physical disturbance of the seabed caused by towed bottom-fishing gear have been studied extensively with over 100 manipulations reported in the peer-reviewed literature. The outcome of these studies varies according to the gear used and the habitat in which it was deployed. This variability in the response of different benthic systems concurs with established theoretical models of the response of community metrics to disturbance. Despite this powerful evidence, a recent FAO report wrongly concludes that the variability in the reported responses to fishing disturbance mean that no firm conclusion as to the effects of fishing disturbance can be made. This thesis is further supported (incorrectly) by the supposition that current benthic sampling methodologies are inadequate to demonstrate the effects of fishing disturbance on benthic systems. The present article addresses these two erroneous conclusions which may confuse non-experts and in particular policy-makers.

  20. The Effect Of Two-Stage Age Hardening Treatment Combined With Shot Peening On Stress Distribution In The Surface Layer Of 7075 Aluminum Alloy

    Directory of Open Access Journals (Sweden)

    Kaczmarek Ł.

    2015-09-01

Full Text Available The article presents the results of a study on the improvement of the mechanical properties of the surface layer of 7075 aluminum alloy via two-stage aging combined with shot peening. The experiments proved that thermo-mechanical treatment may significantly improve hardness and the stress distribution in the surface layer. Compressive stresses of 226±5.5 MPa and hardness of 210±2 HV were obtained for selected samples.

  1. [An evaluation of sampling design for estimating an epidemiologic volume of diabetes and for assessing present status of its control in Korea].

    Science.gov (United States)

    Lee, Ji-Sung; Kim, Jaiyong; Baik, Sei-Hyun; Park, Ie-Byung; Lee, Juneyoung

    2009-03-01

An appropriate sampling strategy for estimating the epidemiologic volume of diabetes was evaluated through a simulation. We analyzed about 250 million medical insurance claims submitted to the Health Insurance Review & Assessment Service with diabetes as a principal or subsequent diagnosis at least once in 2003. The database was re-constructed into a 'patient-hospital profile' with 3,676,164 cases, and then into a 'patient profile' consisting of 2,412,082 observations. The patient profile data were then used to test the validity of a proposed sampling frame and methods of sampling to develop diabetes-related epidemiologic indices. The simulation study showed that use of a stratified two-stage cluster sampling design with a total sample size of 4,000 will provide an estimate of 57.04% (95% prediction range, 49.83-64.24%) for the treatment prescription rate of diabetes. The proposed sampling design consists of first stratifying the nation by area ("metropolitan/city/county") and by hospital type ("tertiary/secondary/primary/clinic") in a proportion of 5:10:10:75. Hospitals were then randomly selected within the strata as the primary sampling unit, followed by a random selection of patients within the hospitals as the secondary sampling unit. The difference between the estimate and the parameter value was projected to be less than 0.3%. The proposed sampling scheme will be applied to a subsequent nationwide field survey not only for estimating the epidemiologic volume of diabetes but also for assessing the present status of nationwide diabetes control.
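The stratified two-stage cluster design described above — hospitals as primary sampling units within area × hospital-type strata, patients sampled within each selected hospital — can be sketched as follows. The strata, hospital identifiers, and counts are illustrative stand-ins, not the study's actual frame:

```python
import random

def two_stage_cluster_sample(frame, n_hospitals, n_patients, seed=0):
    """frame: {stratum: {hospital_id: [patient_ids]}}.
    Stage 1: randomly select hospitals (PSUs) within each stratum.
    Stage 2: randomly select patients (SSUs) within each chosen hospital."""
    rng = random.Random(seed)
    sample = {}
    for stratum, hospitals in frame.items():
        chosen = rng.sample(sorted(hospitals), min(n_hospitals, len(hospitals)))
        sample[stratum] = {
            h: rng.sample(hospitals[h], min(n_patients, len(hospitals[h])))
            for h in chosen
        }
    return sample

# Hypothetical frame with two of the strata named in the abstract
frame = {
    "metropolitan/tertiary": {"H1": list(range(50)), "H2": list(range(40))},
    "county/clinic": {"C1": list(range(30)), "C2": list(range(25)), "C3": list(range(20))},
}
picked = two_stage_cluster_sample(frame, n_hospitals=2, n_patients=10)
print({s: {h: len(p) for h, p in hs.items()} for s, hs in picked.items()})
```

In the actual design, the numbers of hospitals and patients per stratum would be allocated according to the 5:10:10:75 proportions rather than being equal across strata as in this sketch.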

  2. Experimental Technique of Two-Stage Electric Gun%二级电炮加载技术研究

    Institute of Scientific and Technical Information of China (English)

    罗斌强; 赵剑衡; 孙承纬; 莫建军; 贺佳; 张兴卫; 王桂吉; 谭福利

    2012-01-01

Using conventional electric-gun flyer-launching techniques, aluminum flyers of Ф10 mm×0.5 mm and Ф10 mm×0.3 mm were launched to 300 m/s and 500 m/s, respectively, with good planarity. This effectively extends the loading capability of the electric gun to relatively low flyer velocities and low strain rates (flyer velocity around 100 m/s, strain rate below 10^6 s^-1), so that it can be used over a wider range of strain rates and loading pressures. Using the two-stage electric gun, the dynamic response of Zr51Ti5Ni10Cu25Al9 bulk amorphous alloy was investigated under planar impact and good results were obtained. This work is of importance for the wider application of electric-gun loading techniques.

  3. Creel survey sampling designs for estimating effort in short-duration Chinook salmon fisheries

    Science.gov (United States)

    McCormick, Joshua L.; Quist, Michael C.; Schill, Daniel J.

    2013-01-01

    Chinook Salmon Oncorhynchus tshawytscha sport fisheries in the Columbia River basin are commonly monitored using roving creel survey designs and require precise, unbiased catch estimates. The objective of this study was to examine the relative bias and precision of total catch estimates using various sampling designs to estimate angling effort under the assumption that mean catch rate was known. We obtained information on angling populations based on direct visual observations of portions of Chinook Salmon fisheries in three Idaho river systems over a 23-d period. Based on the angling population, Monte Carlo simulations were used to evaluate the properties of effort and catch estimates for each sampling design. All sampling designs evaluated were relatively unbiased. Systematic random sampling (SYS) resulted in the most precise estimates. The SYS and simple random sampling designs had mean square error (MSE) estimates that were generally half of those observed with cluster sampling designs. The SYS design was more efficient (i.e., higher accuracy per unit cost) than a two-cluster design. Increasing the number of clusters available for sampling within a day decreased the MSE of estimates of daily angling effort, but the MSE of total catch estimates was variable depending on the fishery. The results of our simulations provide guidelines on the relative influence of sample sizes and sampling designs on parameters of interest in short-duration Chinook Salmon fisheries.
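The comparison of systematic random sampling (SYS) with simple random sampling (SRS) of within-day time intervals can be sketched with a small Monte Carlo: expand the mean angler count over sampled intervals to a daily effort total, and compare the mean squared error of the two designs. The hourly count series, interval structure, and replication count are invented for illustration and are far simpler than the fisheries simulated in the study:

```python
import random

def estimate_total(counts, idx):
    """Expand the mean count over the sampled intervals to the full day."""
    return len(counts) * sum(counts[i] for i in idx) / len(idx)

def mse(design, counts, n, reps=2000, seed=1):
    """Monte Carlo MSE of the daily-total estimator under SYS or SRS."""
    rng = random.Random(seed)
    true_total = sum(counts)
    errs = []
    for _ in range(reps):
        if design == "SYS":   # systematic: random start, fixed step
            step = len(counts) // n
            start = rng.randrange(step)
            idx = list(range(start, len(counts), step))[:n]
        else:                 # SRS: simple random sample of intervals
            idx = rng.sample(range(len(counts)), n)
        errs.append((estimate_total(counts, idx) - true_total) ** 2)
    return sum(errs) / reps

# Hypothetical angler counts per hour with a smooth daily trend
counts = [2, 4, 7, 11, 14, 15, 13, 10, 8, 6, 3, 1]
print(mse("SYS", counts, n=4) < mse("SRS", counts, n=4))
```

With a smooth within-day trend, systematic samples spread over the day and capture the trend in every replicate, which is consistent with the study's finding that SYS gives the most precise effort estimates.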

  4. Efficiency and hardware comparison of analog control-based and digital control-based 70 W two-stage power factor corrector and DC-DC converters

    DEFF Research Database (Denmark)

    Török, Lajos; Munk-Nielsen, Stig

    2011-01-01

    A comparison of an analog and a digital controller driving a 70 W two-stage power-factor-corrector converter is presented. Both controllers operate in average current-mode control for the PFC stage and peak current control for the DC-DC converter. Digital controller design and converter modeling are described. Results show that the digital control can compete with the analog one in efficiency, power factor, and THD.

  5. Sampling flies or sampling flaws? Experimental design and inference strength in forensic entomology.

    Science.gov (United States)

    Michaud, J-P; Schoenly, Kenneth G; Moreau, G

    2012-01-01

    Forensic entomology is an inferential science because postmortem interval estimates are based on the extrapolation of results obtained in field or laboratory settings. Although enormous gains in scientific understanding and methodological practice have been made in forensic entomology over the last few decades, a majority of the field studies we reviewed do not meet the standards for inference, which are 1) adequate replication, 2) independence of experimental units, and 3) experimental conditions that capture a representative range of natural variability. Using a mock case-study approach, we identify design flaws in field and lab experiments and suggest methodological solutions for increasing inference strength that can inform future casework. Suggestions for improving data reporting in future field studies are also proposed.

  6. Complex Dynamical Behavior of a Two-Stage Colpitts Oscillator with Magnetically Coupled Inductors

    Directory of Open Access Journals (Sweden)

    V. Kamdoum Tamba

    2014-01-01

    Full Text Available A five-dimensional (5D) controlled two-stage Colpitts oscillator is introduced and analyzed. This new electronic oscillator is constructed by considering the well-known two-stage Colpitts oscillator with two further elements (coupled inductors and a variable resistor). In contrast to current approaches based on a piecewise-linear (PWL) model, we propose a smooth mathematical model (with exponential nonlinearity) to investigate the dynamics of the oscillator. Several issues, such as the basic dynamical behaviour, bifurcation diagrams, Lyapunov exponents, and frequency spectra of the oscillator, are investigated theoretically and numerically by varying a single control resistor. It is found that the oscillator moves from the state of fixed-point motion to chaos via the usual paths of period-doubling and interior-crisis routes as the single control resistor is monitored. Furthermore, an experimental study of the controlled Colpitts oscillator is carried out. An appropriate electronic circuit is proposed for the investigation of the complex dynamical behaviour of the system. A very good qualitative agreement is obtained between the theoretical/numerical and experimental results.

  7. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

    Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy-balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures, and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs. The ratio n1/n2 should be about 8.

  8. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    Science.gov (United States)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a two-stage high-capacity free-piston Stirling cryocooler driven by a linear compressor to meet the requirement of the high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single piston linear compressor, a two-stage free piston Stirling cryocooler and a passive oscillator. A single stepped displacer configuration was adopted. A numerical model based on the thermoacoustic theory was used to optimize the system operating and structure parameters. Distributions of pressure wave, phase differences between the pressure wave and the volume flow rate and different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.
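The quoted relative Carnot efficiencies can be checked with a short calculation. The heat-rejection (ambient) temperature is an assumption here, since the abstract does not state the reference temperature behind its percentages:

```python
def relative_carnot_efficiency(q_cold_w, p_in_w, t_cold_k, t_hot_k=300.0):
    """Cooler COP divided by the Carnot COP between t_cold_k and t_hot_k.

    t_hot_k is an assumed heat-rejection temperature; the abstract does not
    state the reference temperature used for its quoted percentages.
    """
    cop = q_cold_w / p_in_w
    cop_carnot = t_cold_k / (t_hot_k - t_cold_k)
    return cop / cop_carnot

# Operating points quoted in the abstract (3.2 kW electric input):
print(f"{relative_carnot_efficiency(78, 3200, 40):.1%}")   # ~16%, paper: 14.8%
print(f"{relative_carnot_efficiency(284, 3200, 77):.1%}")  # ~26%, paper: 25.9%
```

With the assumed 300 K rejection temperature the calculation lands close to the reported values; the small discrepancies are consistent with a slightly different reference temperature in the paper.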

  9. Planning an Agricultural Water Resources Management System: A Two-Stage Stochastic Fractional Programming Model

    Directory of Open Access Journals (Sweden)

    Liang Cui

    2015-07-01

    Full Text Available Irrigation water management is crucial for agricultural production and livelihood security in many regions and countries throughout the world. In this study, a two-stage stochastic fractional programming (TSFP) method is developed for planning an agricultural water resources management system under uncertainty. TSFP can provide an effective linkage between conflicting economic benefits and the associated penalties; it can also balance conflicting objectives and maximize the system's marginal benefit per unit of input under uncertainty. The developed TSFP method is applied to a real case of agricultural water resources management of the Zhangweinan River Basin, China, which is one of the main food and cotton producing regions in north China and faces serious water shortage. The results demonstrate that the TSFP model is advantageous in balancing conflicting objectives and reflecting complicated relationships among multiple system factors. Results also indicate that, under the optimized irrigation target, the optimized water allocation rates of the Minyou Channel and Zhangnan Channel are 57.3% and 42.7%, respectively, which adapts to changes in the actual agricultural water resources management problem. Compared with the inexact two-stage water management (ITSP) method, TSFP could more effectively address the sustainable water management problem, provide more information regarding tradeoffs between multiple input factors and system benefits, and help the water managers maintain sustainable water resources development of the Zhangweinan River Basin.

  10. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem, considering some parameters of the linear constraints as interval-type discrete random variables with known probability distribution. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solution are analyzed in two-stage stochastic programming. To solve the stated problem, first we remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. Then the deterministic multiobjective model is solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively. This highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.
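The underlying two-stage structure can be illustrated with a deliberately simple toy problem (single-objective, point-valued data rather than the paper's multiobjective interval setting; all numbers invented): choose a first-stage decision, observe a discrete random outcome, then account for the recourse value, maximizing the expected objective over scenarios.

```python
# Toy two-stage stochastic program solved by brute-force enumeration
# (a newsvendor-style recourse problem; all numbers are invented).
# First stage: order x units at cost c. Second stage (recourse): sell
# min(x, d) at price p, salvage leftovers at s, for random demand d.
c, p, s = 3.0, 5.0, 1.0
scenarios = [(80, 0.3), (100, 0.5), (120, 0.2)]  # (demand, probability)

def expected_profit(x):
    """Expected first-stage cost plus expected second-stage recourse value."""
    return sum(prob * (p * min(x, d) + s * max(x - d, 0) - c * x)
               for d, prob in scenarios)

best_x = max(range(0, 151), key=expected_profit)
print(best_x, round(expected_profit(best_x), 2))  # → 100 176.0
```

The optimum sits at a demand breakpoint, as expected for a piecewise-linear concave expected-profit function; the paper's interval/multiobjective machinery generalizes this basic recourse structure.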

  11. Two-Stage Single-Compartment Models to Evaluate Dissolution in the Lower Intestine.

    Science.gov (United States)

    Markopoulos, Constantinos; Vertzoni, Maria; Symillides, Mira; Kesisoglou, Filippos; Reppas, Christos

    2015-09-01

    The purpose was to propose two-stage single-compartment models for evaluating dissolution characteristics in the distal ileum and ascending colon, under conditions simulating the bioavailability and bioequivalence studies in the fasted and fed states, by using the mini-paddle and the compendial flow-through apparatus (closed-loop mode). Immediate-release products of two highly dosed active pharmaceutical ingredients (APIs), sulfasalazine and L-870,810, and one mesalamine colon-targeting product were used for evaluating their usefulness. A change from a medium simulating the conditions in the distal ileum (SIFileum) to a medium simulating the conditions in the ascending colon in the fasted state and in the fed state was achieved by adding an appropriate solution to SIFileum. Data with immediate-release products suggest that dissolution in the lower intestine is substantially different from that in the upper intestine and is affected by regional pH differences > type/intensity of fluid convection > differences in concentration of other luminal components. Asacol® (400 mg/tab) was more sensitive to the type/intensity of fluid convection. In all cases, data were in line with available human data. Two-stage single-compartment models may be useful for the evaluation of dissolution in the lower intestine. The impact of the type/intensity of fluid convection and the viscosity of media on the luminal performance of other APIs and drug products requires further exploration.

  12. Metamodeling and Optimization of a Blister Copper Two-Stage Production Process

    Science.gov (United States)

    Jarosz, Piotr; Kusiak, Jan; Małecki, Stanisław; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz

    2016-06-01

    It is often difficult to estimate parameters for a two-stage production process of blister copper (containing 99.4 wt.% of Cu metal) as well as those for most industrial processes with high accuracy, which leads to problems related to process modeling and control. The first objective of this study was to model flash smelting and converting of Cu matte stages using three different techniques: artificial neural networks, support vector machines, and random forests, which utilized noisy technological data. Subsequently, more advanced models were applied to optimize the entire process (which was the second goal of this research). The obtained optimal solution was a Pareto-optimal one because the process consisted of two stages, making the optimization problem a multi-criteria one. A sequential optimization strategy was employed, which aimed for optimal control parameters consecutively for both stages. The obtained optimal output parameters for the first smelting stage were used as input parameters for the second converting stage. Finally, a search for another optimal set of control parameters for the second stage of a Kennecott-Outokumpu process was performed. The optimization process was modeled using a Monte-Carlo method, and both modeling parameters and computed optimal solutions are discussed.

  13. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes as a result of minimizing the transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. They may contain more than one objective function, more than one transportation stage, or more than one type of commodity with more than one means of transport. The aim of this paper is to construct an optimization model of the transportation problem for a mill-stones company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions associated with each of them, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of the methods for treating transportation problems, the theory of duality of linear programming, and the methods of solving bi-criteria linear programming problems.
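The bi-criteria idea (minimizing, say, cost and time simultaneously and reporting the non-dominated solutions) can be sketched on a toy single-stage 2x2 transportation problem; all supplies, demands, and coefficient matrices below are invented, and the paper's algorithm handles the two-stage structure and extreme points far more generally.

```python
# Toy bi-criteria transportation problem: 2 sources, 2 destinations.
supply = [10, 20]
demand = [15, 15]
cost = [[4, 6], [5, 3]]   # objective 1: transportation cost
time = [[2, 1], [3, 4]]   # objective 2: e.g. deterioration or delivery time

def objectives(x11):
    """A balanced 2x2 problem has one degree of freedom, x11."""
    x = [[x11, supply[0] - x11],
         [demand[0] - x11, supply[1] - (demand[0] - x11)]]
    f1 = sum(cost[i][j] * x[i][j] for i in range(2) for j in range(2))
    f2 = sum(time[i][j] * x[i][j] for i in range(2) for j in range(2))
    return (f1, f2)

candidates = [objectives(v) for v in range(0, min(supply[0], demand[0]) + 1)]
# Keep the non-dominated (cost, time) pairs:
pareto = [p for p in candidates
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                     for q in candidates)]
print(sorted(set(pareto)))
```

Here the two objectives trade off monotonically, so every feasible extreme of the free variable is non-dominated, giving the full efficient frontier from which a decision maker would choose.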

  14. An integrated two-stage support vector machine approach to forecast inundation maps during typhoons

    Science.gov (United States)

    Jhong, Bing-Chen; Wang, Jhih-Huang; Lin, Gwo-Fong

    2017-04-01

    During typhoons, accurate forecasts of hourly inundation depths are essential for inundation warning and mitigation. Due to the lack of observed data of inundation maps, sufficient observed data are not available for developing inundation forecasting models. In this paper, the inundation depths, which are simulated and validated by a physically based two-dimensional model (FLO-2D), are used as a database for inundation forecasting. A two-stage inundation forecasting approach based on Support Vector Machine (SVM) is proposed to yield 1- to 6-h lead-time inundation maps during typhoons. In the first stage (point forecasting), the proposed approach not only considers the rainfall intensity and inundation depth as model input but also simultaneously considers cumulative rainfall and forecasted inundation depths. In the second stage (spatial expansion), the geographic information of inundation grids and the inundation forecasts of reference points are used to yield inundation maps. The results clearly indicate that the proposed approach effectively improves the forecasting performance and decreases the negative impact of increasing forecast lead time. Moreover, the proposed approach is capable of providing accurate inundation maps for 1- to 6-h lead times. In conclusion, the proposed two-stage forecasting approach is suitable and useful for improving the inundation forecasting during typhoons, especially for long lead times.

  15. The influence of partial oxidation mechanisms on tar destruction in TwoStage biomass gasification

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Egsgaard, Helge; Stelte, Wolfgang

    2013-01-01

    TwoStage gasification of biomass results in almost tar-free producer gas suitable for multiple end-use purposes. In the present study, it is investigated to what extent the partial oxidation of the pyrolysis gas from the first stage is involved in direct and indirect tar destruction and conversion. The study identifies the following major factors affecting the tar content of the producer gas: oxidation temperature, excess air ratio, and biomass moisture content. In an experimental setup, wood pellets were pyrolyzed and the resulting pyrolysis gas was passed through a heated partial oxidation zone. A high moisture content of the biomass enhances the decomposition of phenol and inhibits the formation of naphthalene; this enhances tar conversion and gasification in the char-bed, and thus contributes indirectly to the tar destruction.

  16. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Martínez-Jerónimo, Fernando; Flores-Ortíz, Cesar Mateo

    2017-09-16

    A biomass production process including two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalga Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, experiments using shake-flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea+vitamins (U+V) at 30°C. The first stage of the TSHP process was done in a 6 L bioreactor, and the inductions in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved the maximal biomass concentration, increasing from 7.22 g L(-1) to 17.98 g L(-1) with an increase in initial glucose concentration from 10.6 g L(-1) to 30.3 g L(-1). However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Yx/s), possibly due to substrate inhibition. After 24 h of photoinduction, the lutein content in S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and the lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.

  17. Dynamics of installation way for the actuator of a two-stage active vibration-isolator

    Institute of Scientific and Technical Information of China (English)

    HU Li; HUANG Qi-bai; HE Xue-song; YUAN Ji-xuan

    2008-01-01

    We investigated the behaviors of an active control system of two-stage vibration isolation with the actuator installed in parallel with either the upper passive mount or the lower passive isolation mount. We revealed the relationships between the active control force of the actuator and the parameters of the passive isolators by studying the dynamics of two-stage active vibration isolation for the actuator at the foregoing two positions in turn. With the actuator installed beside the upper mount, a small active force can achieve a very good isolating effect when the frequency of the stimulating force is much larger than the natural frequency of the upper mount; a larger active force is required in the low-frequency domain; and the active force equals the stimulating force when the upper mount works within the resonance region, suggesting an approach to reducing wobble and ensuring desirable installation accuracy by increasing the upper-mount stiffness. In either the low or the high frequency region far away from the resonance region, the active force is smaller when the actuator is beside the lower mount than beside the upper mount.

  18. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements

  19. Hydrogen and methane production from household solid waste in the two-stage fermentation process

    DEFF Research Database (Denmark)

    Lui, D.; Liu, D.; Zeng, Raymond Jianxiong

    2006-01-01

    A two-stage process combining hydrogen and methane production from household solid waste was demonstrated to work successfully. A yield of 43 mL H2/g volatile solid (VS) added was obtained in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS added, a figure 21% higher than the methane yield of the one-stage process run as a control. Sparging the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage. Furthermore, the study provided direct evidence that, in the dynamic fermentation process, an increase in hydrogen production was reflected by an increase in the acetate-to-butyrate ratio in the liquid phase. (c) 2006 Elsevier Ltd. All rights reserved.

  20. Two-stage electrodialytic concentration of glyceric acid from fermentation broth.

    Science.gov (United States)

    Habe, Hiroshi; Shimada, Yuko; Fukuoka, Tokuma; Kitamoto, Dai; Itagaki, Masayuki; Watanabe, Kunihiko; Yanagishita, Hiroshi; Sakaki, Keiji

    2010-12-01

    The aim of this research was the application of a two-stage electrodialysis (ED) method for glyceric acid (GA) recovery from fermentation broth. First, by desalting ED, glycerate solutions (counterpart is Na+) were concentrated using ion-exchange membranes, and the glycerate recovery and energy consumption became more efficient with increasing the initial glycerate concentration (30 to 130 g/l). Second, by water-splitting ED, the concentrated glycerate was electroconverted to GA using bipolar membranes. Using a culture broth of Acetobacter tropicalis containing 68.6 g/l of D-glycerate, a final D-GA concentration of 116 g/l was obtained following the two-stage ED process. The total energy consumption for the D-glycerate concentration and its electroconversion to D-GA was approximately 0.92 kWh per 1 kg of D-GA. Copyright © 2010 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  1. Occurrence of two-stage hardening in C-Mn steel wire rods containing pearlitic microstructure

    Science.gov (United States)

    Singh, Balbir; Sahoo, Gadadhar; Saxena, Atul

    2016-09-01

    The 8 and 10 mm diameter wire rods intended for use as concrete reinforcement were hot rolled from a C-Mn steel with a composition in the range C: 0.55-0.65, Mn: 0.85-1.50, Si: 0.05-0.09, S: 0.04 max, P: 0.04 max and N: 0.006 max wt%. Depending upon the C and Mn contents, the product attained 85-93% pearlite, with the balance being polygonal ferrite transformed at prior-austenite grain boundaries. The pearlitic microstructure in the wire rods helped in achieving yield strength, tensile strength, total elongation and reduction-in-area values within the ranges of 422-515 MPa, 790-950 MPa, 22-15% and 45-35%, respectively. Analysis of the tensile results revealed that the material hardened in two stages separated by a knee strain of about 0.05. The occurrence of two-stage hardening in the steel, with hardening coefficients of 0.26 and 0.09, could thus be demonstrated from the derived relationships between flow stress and strain.
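Two-stage hardening of this kind is commonly represented by a piecewise Hollomon relation, σ = K·ε^n, with a different exponent on each side of the knee strain. A minimal sketch using the exponents quoted in the abstract; the strength coefficient K1 is invented, and K2 is fixed by requiring the flow curve to be continuous at the knee:

```python
N1, N2 = 0.26, 0.09   # stage-1 and stage-2 hardening exponents (from the abstract)
KNEE = 0.05           # knee (transition) true strain (from the abstract)
K1 = 1500.0           # MPa; hypothetical stage-1 strength coefficient

# Continuity of the flow curve at the knee fixes K2:
# K1 * KNEE**N1 == K2 * KNEE**N2  =>  K2 = K1 * KNEE**(N1 - N2)
K2 = K1 * KNEE ** (N1 - N2)

def flow_stress(strain):
    """Piecewise Hollomon flow stress (MPa) for a two-stage hardening fit."""
    if strain <= KNEE:
        return K1 * strain ** N1
    return K2 * strain ** N2

for eps in (0.02, 0.05, 0.10):
    print(f"strain {eps:.2f}: {flow_stress(eps):.0f} MPa")
```

On a log-log plot of flow stress versus strain, such a fit appears as two straight segments of slope 0.26 and 0.09 meeting at the knee, which is how two-stage hardening is usually detected.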

  2. Rules and mechanisms for efficient two-stage learning in neural circuits

    Science.gov (United States)

    Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay

    2017-01-01

    Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (e.g., LMAN) should match plasticity mechanisms in ‘student’ circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning. DOI: http://dx.doi.org/10.7554/eLife.20944.001 PMID:28374674

  3. Two-stage estimation for multivariate recurrent event data with a dependent terminal event.

    Science.gov (United States)

    Chen, Chyong-Mei; Chuang, Ya-Wen; Shen, Pao-Sheng

    2015-03-01

    Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of events repeatedly. The work in this article is motivated by the data from a study of repeated peritonitis for patients on peritoneal dialysis. For medical and cost reasons, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive peritonitis. Further, since death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model, which has three characteristics: first, we assume marginal proportional hazard and proportional rates models for terminal event time and recurrent event processes, respectively; second, the inter-recurrences dependence and the correlation between the multivariate recurrent event processes and terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage estimation procedure for estimating unknown parameters. We also establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. Two-stage earth-to-orbit vehicles with dual-fuel propulsion in the Orbiter

    Science.gov (United States)

    Martin, J. A.

    1982-01-01

    Earth-to-orbit vehicle studies of future replacements for the Space Shuttle are needed to guide technology development. Previous studies that have examined single-stage vehicles have shown advantages for dual-fuel propulsion. Previous two-stage system studies have assumed all-hydrogen fuel for the Orbiters. The present study examined dual-fuel Orbiters and found that the system dry mass could be reduced with this concept. The possibility of staging the booster at a staging velocity low enough to allow coast-back to the launch site is shown to be beneficial, particularly in combination with a dual-fuel Orbiter. An engine evaluation indicated the same ranking of engines as did a previous single-stage study. Propane and RP-1 fuels result in lower vehicle dry mass than methane, and staged-combustion engines are preferred over gas-generator engines. The sensitivity to the engine selection is less for two-stage systems than for single-stage systems.

  5. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called awareness cascade, during which individuals exhibit herd-like behavior because they are making decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate the epidemic spreading with awareness cascade, we propose a local awareness controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of epidemic threshold βc with the local awareness ratio α approximating 0.5 , which induces two-stage effects on epidemic threshold and the final epidemic size. These findings indicate that the increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc≈0.5 . The results can give us a better understanding of why some epidemics cannot break out in reality and also provide a potential access to suppressing and controlling the awareness cascading systems.
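The 1D lattice illustration can be sketched with a minimal simulation. This is a simplified toy, not the paper's microscopic Markov chain or multiplex UAU-SIS model, and all parameters are invented: a susceptible node is "aware" when the infected fraction among its two neighbours reaches the local awareness ratio α, and awareness cuts its infection probability.

```python
import random

def simulate(alpha, beta, n=200, gamma=0.2, awareness_factor=0.1,
             steps=300, seed=7):
    """Toy SIS epidemic on a 1D ring with local awareness.

    A susceptible node is 'aware' when the infected fraction among its two
    neighbours is >= alpha; awareness multiplies its infection probability
    by awareness_factor. Returns the infected fraction averaged over the
    last 50 steps. (Simplified illustration, not the paper's model.)
    """
    rng = random.Random(seed)
    infected = [rng.random() < 0.1 for _ in range(n)]
    history = []
    for _ in range(steps):
        nxt = infected[:]
        for i in range(n):
            neigh = (infected[(i - 1) % n], infected[(i + 1) % n])
            aware = sum(neigh) / 2 >= alpha
            if infected[i]:
                if rng.random() < gamma:          # recovery
                    nxt[i] = False
            elif any(neigh):
                b = beta * (awareness_factor if aware else 1.0)
                if rng.random() < b:              # infection attempt
                    nxt[i] = True
        infected = nxt
        history.append(sum(infected) / n)
    return sum(history[-50:]) / 50

# With two neighbours, the neighbour infected fraction is 0, 0.5 or 1, so
# alpha below/above 0.5 switches between broad and narrow awareness:
for alpha in (0.4, 0.6):
    print(alpha, round(simulate(alpha, beta=0.5), 3))
```

Because the neighbour fraction can only take the values 0, 0.5, and 1, the behaviour changes qualitatively as α crosses 0.5, which is the intuition behind the sharp transition at αc ≈ 0.5 reported above.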

  6. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    Science.gov (United States)

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading the third generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focus on establishing high performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the needed biomass quantity for industrial-scale microalgal biofuels production. The extremely dilute nature of large-volume microalgal suspension and the small size of microalgae cells in suspension create a significant processing cost during dewatering and this has raised major concerns towards the economic success of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis to establish an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged a promising technique with total energy input of 0.041 kWh, 0.05 kg CO2 emissions and a cost of $ 0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075
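The operational-analysis framework above reduces, per kilogram of biomass, to summing the energy, CO2, and cost contributions of each dewatering stage. A minimal sketch using the bioflocculation + TFF totals quoted in the abstract; the split between the two stages is hypothetical:

```python
# Per-stage operating figures per kg of dry biomass produced.
# The two-stage totals match the abstract (0.041 kWh, 0.05 kg CO2, $0.0043);
# the split between the stages is invented for illustration.
stages = [
    # (name, energy_kwh, co2_kg, cost_usd)
    ("bioflocculation (pre-concentration)", 0.001, 0.001, 0.0003),
    ("tangential flow filtration (TFF)",    0.040, 0.049, 0.0040),
]

def totals(train):
    """Aggregate energy, CO2, and cost over a dewatering train."""
    energy = sum(s[1] for s in train)
    co2 = sum(s[2] for s in train)
    cost = sum(s[3] for s in train)
    return energy, co2, cost

e, c, usd = totals(stages)
print(f"{e:.3f} kWh, {c:.2f} kg CO2, ${usd:.4f} per kg biomass")
```

The same aggregation can be repeated for alternative stage pairings (e.g. centrifugation or filter pressing as the second stage) to rank candidate two-stage trains on identical terms.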

  7. Two-stage image segmentation based on edge and region information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A two-stage method for image segmentation based on edge and region information is proposed. Different deformation schemes are used at the two stages to segment the object correctly in the image plane. At the first stage, the contour of the model is divided hierarchically into several segments, each of which deforms using an affine transformation. After the contour is deformed to the approximate boundary of the object, a fine-matching mechanism that uses statistical information of the local region to redefine the external energy of the model makes the contour fit the object's boundary exactly. The algorithm is effective: the hierarchical segmental deformation makes use of both global and local information of the image, the affine transformation keeps the consistency of the model, and revised formulations of the internal and external energy are proposed to reduce the algorithm's complexity. An adaptive method of defining the search area at the second stage makes the model converge quickly. The experimental results indicate that the proposed model is effective, robust to local minima, and able to capture concave objects.
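
The first-stage deformation can be pictured as applying an affine map to each contour segment; a minimal sketch (the transformation parameters are illustrative, not the paper's fitted values):

```python
# Apply a 2D affine transformation [[a, b], [c, d]] plus translation (tx, ty)
# to a list of contour points, as in the first (segment-wise) stage.

def affine(points, a, b, c, d, tx, ty):
    return [(a * x + b * y + tx, c * x + d * y + ty) for x, y in points]

segment = [(0, 0), (1, 0), (1, 1)]
# Scale by 2 about the origin, then translate by (1, 1).
print(affine(segment, 2, 0, 0, 2, 1, 1))  # → [(1, 1), (3, 1), (3, 3)]
```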

  8. Waste-gasification efficiency of a two-stage fluidized-bed gasification system.

    Science.gov (United States)

    Liu, Zhen-Shu; Lin, Chiou-Liang; Chang, Tsung-Jen; Weng, Wang-Chang

    2016-02-01

    This study employed a two-stage fluidized-bed gasifier as the gasification reactor and two additives (CaO and activated carbon) as the Stage-II bed material to investigate the effects of the operating temperature (700°C, 800°C, and 900°C) on the syngas composition, total gas yield, and gas-heating value during simulated waste gasification. The results showed that when the operating temperature increased from 700 to 900°C, the molar percentage of H2 in the syngas produced by the two-stage gasification process increased from 19.4 to 29.7 mol%, and the total gas yield and gas-heating value also increased. When CaO was used as the additive, the molar percentage of CO2 in the syngas decreased and that of H2 increased. When activated carbon was used, the molar percentage of CH4 in the syngas increased, and the total gas yield and gas-heating value increased. Overall, CaO was more effective for H2 production, whereas activated carbon clearly enhanced the total gas yield and gas-heating value. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  9. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction.

    Science.gov (United States)

    Zhang, Long; Li, Kang; Bai, Er-Wei; Irwin, George W

    2015-08-01

    A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can then be cast as a model selection problem in which a compact model is selected from all candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches; however, they may produce suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
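
The forward stage can be sketched as a greedy search driven by the error reduction ratio (ERR). The matching-pursuit-style simplification below deflates the residual directly rather than orthogonalizing the candidate terms, and omits the paper's backward refinement and term-exchange steps.

```python
# Greedy forward subset selection by error reduction ratio (simplified:
# residual deflation instead of explicit orthogonalization).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def forward_select(candidates, y, n_terms):
    """Greedily pick candidate terms that maximise the ERR."""
    residual = list(y)
    selected = []
    for _ in range(n_terms):
        best, best_err = None, 0.0
        for i, p in enumerate(candidates):
            if i in selected or dot(p, p) == 0:
                continue
            err = dot(p, residual) ** 2 / (dot(p, p) * dot(y, y))
            if err > best_err:
                best, best_err = i, err
        if best is None:      # no remaining term reduces the error
            break
        p = candidates[best]
        coef = dot(p, residual) / dot(p, p)
        residual = [r - coef * x for r, x in zip(residual, p)]
        selected.append(best)
    return selected

cands = [[1, 0, 0, 0], [0, 1, 1, 0], [1, 1, 1, 1]]
y = [0, 2, 2, 0]
print(forward_select(cands, y, 2))  # → [1]
```

Here the second candidate alone reproduces y exactly, so the first greedy pick drives the residual to zero and the selection stops early.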

  10. Two-stage numerical simulation for temperature profile in furnace of tangentially fired pulverized coal boiler

    Institute of Scientific and Technical Information of China (English)

    ZHOU Nai-jun; XU Qiong-hui; ZHOU Ping

    2005-01-01

    Considering that the temperature distribution in the furnace of a tangentially fired pulverized coal boiler is difficult to measure and monitor, a two-stage numerical simulation method was put forward. First, multi-field coupling simulations under typical working conditions were carried out off-line with the software CFX-4.3, and an expression for the temperature profile as a function of the operating parameters was obtained. Using real-time operating parameters, the temperature at an arbitrary point of the furnace can then be calculated from this expression, so the temperature profile can be shown on-line and monitoring of the combustion state in the furnace is realized. The simulation model was checked against parameters measured in an operating boiler, DG130-9.8/540. The maximum relative error is less than 12% and the maximum absolute error is less than 120 ℃, which shows that the proposed two-stage simulation method is reliable and able to satisfy the requirements of industrial application.
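
The fit-off-line, evaluate-on-line idea can be sketched with a one-parameter stand-in (a linear temperature-vs-load correlation with made-up numbers; the actual expression depends on position in the furnace and several operating parameters):

```python
# Stage 1 (off-line): fit a simple correlation to hypothetical CFD results.
# Stage 2 (on-line): evaluate the fitted expression at a measured load.

def fit_linear(xs, ys):
    """Least-squares line through off-line sample points."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    b = num / den
    return my - b * mx, b  # intercept, slope

# Hypothetical CFD points: furnace load (%) vs temperature (deg C).
loads = [60, 80, 100]
temps = [1150, 1250, 1350]
a, b = fit_linear(loads, temps)

print(a + b * 90)  # → 1300.0 (on-line evaluation at 90% load)
```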

  11. A low-voltage sense amplifier with two-stage operational amplifier clamping for flash memory

    Science.gov (United States)

    Guo, Jiarong

    2017-04-01

    A low-voltage sense amplifier for flash memory, with a reference current generator utilizing a two-stage operational amplifier clamp structure, is presented in this paper; it is capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain node of the reference cell, is used to generate the reference current, which avoids the threshold limitation caused by the current mirror transistor in traditional sense amplifiers. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted, which not only widens the sense window, enhancing read precision, but also saves power consumption. The sense amplifier was implemented in a flash memory realized in 90 nm flash technology. Experimental results show an access time of 14.7 ns with a power supply of 1.2 V at the slow corner at 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).

  12. Two-stage high temperature sludge gasification using the waste heat from hot blast furnace slags.

    Science.gov (United States)

    Sun, Yongqi; Zhang, Zuotai; Liu, Lili; Wang, Xidong

    2015-12-01

    Nowadays, disposal of sewage sludge from wastewater treatment plants and recovery of waste heat from the steel industry are two important environmental issues; to integrate these two problems, a two-stage high-temperature sludge gasification approach using the waste heat in hot slags was investigated herein. The whole process was divided into two stages, i.e., low-temperature sludge pyrolysis at ⩽900°C in an argon agent and high-temperature char gasification at ⩾900°C in a CO2 agent, during which the heat required was supplied by hot slags in different temperature ranges. Both the thermodynamic and kinetic mechanisms were identified, and it was found that an Avrami-Erofeev model best interprets the char gasification stage. Furthermore, a schematic concept of this strategy was portrayed, based on which the potential CO yield and CO2 emission reduction achievable in China could be ∼1.92×10⁹ m³ and 1.93×10⁶ t, respectively.
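
The kinetic analysis refers to the Avrami-Erofeev conversion law, x(t) = 1 − exp(−(kt)ⁿ); a small sketch with illustrative parameters (not the fitted values from the study):

```python
# Avrami-Erofeev conversion law for char gasification kinetics.
import math

def avrami(t, k, n):
    """Fraction converted at time t for rate constant k and exponent n."""
    return 1.0 - math.exp(-((k * t) ** n))

print(avrami(0.0, k=0.01, n=2))          # → 0.0 (no conversion at t = 0)
print(round(avrami(300.0, 0.01, 2), 3))  # → 1.0 (conversion essentially complete)
```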

  13. A two-stage broadcast message propagation model in social networks

    Science.gov (United States)

    Wang, Dan; Cheng, Shun-Jun

    2016-11-01

    Message propagation in social networks is becoming a popular topic in complex networks. One type of message in social networks is the broadcast message: a message with a unique destination that is unknown to the publisher, such as a 'lost and found' notice. Its propagation always has two stages. Due to this feature, rumor propagation models and epidemic propagation models have difficulty describing this kind of propagation accurately. In this paper, an improved two-stage susceptible-infected-removed model is proposed. We introduce the concepts of the first forwarding probability and the second forwarding probability. Another part of our work quantifies how the chance of successful message transmission at each level is influenced by multiple factors, including the topology of the network, the receiving probability, the first-stage forwarding probability, the second-stage forwarding probability, and the length of the shortest path between the publisher and the relevant destination. The proposed model has been simulated on real networks, and the results demonstrate the model's effectiveness.
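
A minimal Monte Carlo sketch of the two-probability idea (our simplification on a toy graph, not the authors' full model): the publisher forwards with the first-stage probability p1, every later holder forwards with the second-stage probability p2, and the destination stops forwarding.

```python
# Two-stage broadcast forwarding on a small graph (single-shot forwarding,
# no retries; p1 applies to the publisher, p2 to everyone else).
import random

def broadcast(adj, source, dest, p1, p2, rng):
    informed, frontier = {source}, [source]
    while frontier:
        nxt = []
        for node in frontier:
            p = p1 if node == source else p2
            for nb in adj[node]:
                if nb not in informed and rng.random() < p:
                    informed.add(nb)
                    if nb != dest:      # the destination stops forwarding
                        nxt.append(nb)
        frontier = nxt
    return dest in informed

# Estimate delivery probability on a 4-node path graph 0-1-2-3.
rng = random.Random(1)
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
runs = 10000
hits = sum(broadcast(adj, 0, 3, 0.9, 0.5, rng) for _ in range(runs))
print(hits / runs)  # roughly p1 * p2 * p2 = 0.225
```

On this path the delivery probability is p1·p2·p2 = 0.225, and the seeded simulation estimate lands near that value.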

  14. Selective capsulotomies of the expanded breast as a remodelling method in two-stage breast reconstruction.

    Science.gov (United States)

    Grimaldi, Luca; Campana, Matteo; Brandi, Cesare; Nisi, Giuseppe; Brafa, Anna; Calabrò, Massimiliano; D'Aniello, Carlo

    2013-06-01

    Two-stage breast reconstruction with a tissue expander and prosthesis is nowadays a common method for achieving a satisfactory appearance in selected patients who have had a mastectomy, but its most common aesthetic drawback is an excessive volumetric increase of the superior half of the reconstructed breast, with a convexity of the profile in that area. A possible solution to limit this effect, and to fill the inferior pole, is to reduce the inferior tissue resistance by means of capsulotomies. This study reports the effects of various types of capsulotomies, performed in 72 patients after removal of the mammary expander, with the aim of emphasising the convexity of the inferior mammary aspect of the expanded breast. For each kind of desired modification, possible solutions are described. On the basis of subjective and objective evaluations, an overall high degree of satisfaction was found. The described selective capsulotomies, when properly carried out, may significantly improve the aesthetic results in two-stage reconstructed breasts, with no additional scars, minimal risks, and little lengthening of the surgical time.

  15. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  16. Two-Stage Nerve Graft in Severe Scar: A Time-Course Study in a Rat Model

    Directory of Open Access Journals (Sweden)

    Shayan Zadegan

    2015-04-01

    According to the EPT and WRL, the two-stage nerve graft showed significant improvement (P=0.020 and P=0.017, respectively). The TOA showed no significant difference between the two groups. The total vascular index was significantly higher in the two-stage nerve graft group (P

  17. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Vaezzadeh, Mehdi [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)]. E-mail: mehdi@kntu.ac.ir; Yazdani, Ahmad [Tarbiat Modares University, P.O. Box 14155-4838, Tehran (Iran, Islamic Republic of); Vaezzadeh, Majid [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Daneshmand, Gissoo [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Kanzeghi, Ali [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)

    2006-05-01

    The magnetic behavior of the Gd-based intermetallic compound Gd₂Al₁₋ₓAuₓ, in powder and needle form, is investigated. All samples have an orthorhombic crystal structure. Only the compound with x = 0.4 shows the Kondo effect (the other compounds behave normally). For the powder compound with x = 0.4, the susceptibility measurement χ(T) shows two distinct stages. Moreover, for T > T_K2 a drop in the value of χ(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe at the first (T_K1) an increase of χ(T) and at the second stage (T_K2) a further remarkable decrease of χ(T) (T_K1 > T_K2). For the sample in needle form, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, clearly visible for the powder sample, can be attributed to a complete polarization of the Kondo cloud. The observation of these two Kondo stages could be due to a weak RKKY contribution.

  18. A cheap and quickly adaptable in situ electrical contacting TEM sample holder design.

    Science.gov (United States)

    Börrnert, Felix; Voigtländer, Ralf; Rellinghaus, Bernd; Büchner, Bernd; Rümmeli, Mark H; Lichte, Hannes

    2014-04-01

    In situ electrical characterization of nanostructures inside a transmission electron microscope provides crucial insight into the mechanisms of functioning micro- and nano-electronic devices. For such in situ investigations, specialized sample holders are necessary. A simple and affordable yet flexible design is important; especially when sample geometries change, a holder should be adaptable with minimum effort. Atomic-resolution imaging is standard nowadays, so a sample holder must retain this capability. A sample holder design for on-chip samples is presented that fulfils these requirements. On-chip sample devices have the advantage that they can be manufactured via standard fabrication routes.

  19. A comparison of two sampling designs for fish assemblage assessment in a large river

    Science.gov (United States)

    Kiraly, Ian A.; Coghlan Jr., Stephen M.; Zydlewski, Joseph; Hayes, Daniel

    2014-01-01

    We compared the efficiency of stratified random and fixed-station sampling designs to characterize fish assemblages in anticipation of dam removal on the Penobscot River, the largest river in Maine. We used boat electrofishing methods in both sampling designs. Multiple 500-m transects were selected randomly and electrofished in each of nine strata within the stratified random sampling design. Within the fixed-station design, up to 11 transects (1,000 m) were electrofished, all of which had been sampled previously. In total, 88 km of shoreline were electrofished during summer and fall in 2010 and 2011, and 45,874 individuals of 34 fish species were captured. Species-accumulation and dissimilarity curve analyses indicated that all sampling effort, other than fall 2011 under the fixed-station design, provided repeatable estimates of total species richness and proportional abundances. Overall, our sampling designs were similar in precision and efficiency for sampling fish assemblages. The fixed-station design was negatively biased for estimating the abundance of species such as Common Shiner Luxilus cornutus and Fallfish Semotilus corporalis and was positively biased for estimating biomass for species such as White Sucker Catostomus commersonii and Atlantic Salmon Salmo salar. However, we found no significant differences between the designs for proportional catch and biomass per unit effort, except in fall 2011. The difference observed in fall 2011 was due to limitations on the number and location of fixed sites that could be sampled, rather than an inherent bias within the design. Given the results from sampling in the Penobscot River, application of the stratified random design is preferable to the fixed-station design due to less potential for bias caused by varying sampling effort, such as what occurred in the fall 2011 fixed-station sample or due to purposeful site selection.

  20. A strategy for sampling on a sphere applied to 3D selective RF pulse design.

    Science.gov (United States)

    Wong, S T; Roos, M S

    1994-12-01

    Conventional constant angular velocity sampling of the surface of a sphere results in a higher sampling density near the two poles relative to the equatorial region. More samples, and hence longer sampling time, are required to achieve a given sampling density in the equatorial region when compared with uniform sampling. This paper presents a simple expression for a continuous sample path through a nearly uniform distribution of points on the surface of a sphere. Sampling of concentric spherical shells in k-space with the new strategy is used to design 3D selective inversion and spin-echo pulses. These new 3D selective pulses have been implemented and verified experimentally.
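
A common construction of such a near-uniform continuous path (an assumption on our part: the generalized spiral with area-uniform z-spacing and golden-angle azimuthal increments, which may differ in detail from the paper's closed-form expression):

```python
# Near-uniform spiral sampling of the unit sphere: z = cos(theta) is spaced
# uniformly (uniform in surface area), phi advances by the golden angle.
import math

def spiral_points(n):
    """n points along a single spiral path, nearly uniform in area."""
    pts = []
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle increment
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n           # uniform in z => uniform in area
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = golden * i
        pts.append((r * math.cos(phi), r * math.sin(phi), z))
    return pts

pts = spiral_points(500)
# Each point lies on the unit sphere; the hemispheres get equal shares.
print(sum(1 for x, y, z in pts if z > 0))  # → 250
```

Unlike constant-angular-velocity sampling, this spacing does not pile up samples at the poles, which is the efficiency gain the abstract describes.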

  1. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis†

    Science.gov (United States)

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-01-01

    OBJECTIVES Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed in either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. METHODS From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140 received two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). RESULTS The mean postoperative follow-up period was 12.5 months (range: 1–24 months). After surgery, the hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. CONCLUSIONS Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, permanently improving the quality of life in patients with palmar and axillary hyperhidrosis. PMID:23442937


  3. Note: Design and development of wireless controlled aerosol sampling network for large scale aerosol dispersion experiments

    Science.gov (United States)

    Gopalakrishnan, V.; Subramanian, V.; Baskaran, R.; Venkatraman, B.

    2015-07-01

    A wireless-based, custom-built aerosol sampling network is designed, developed, and implemented for environmental aerosol sampling. These aerosol sampling systems were used in a field measurement campaign in which sodium aerosol dispersion experiments were conducted as part of environmental impact studies related to a sodium-cooled fast reactor. The sampling network contains 40 aerosol sampling units, each with a custom-built sampling head and wireless control networking designed with Programmable System on Chip (PSoC™) and XBee Pro RF modules. The base station control is designed using the graphical programming language LabVIEW. The sampling network is programmed to operate at a preset time, and the running status of the samplers in the network is visualized from the base station. The system is developed in such a way that it can be used for any other environmental sampling system deployed over a wide area and uneven terrain, where manual operation is difficult due to the requirement of simultaneous operation and status logging.


  6. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a low-cost alternative for a significant portion of the computationally expensive full-fledged registration in the conventional scheme. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of the desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to that of the conventional single-stage selection method, but with a significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
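
The selection logic itself is simple once the two relevance metrics are available; a schematic sketch in which generic score callables stand in for the low-cost and full-fledged registration similarities (all names are illustrative):

```python
# Two-stage atlas subset selection: cheap metric prunes to an augmented
# subset, expensive metric re-ranks the survivors into the fusion set.

def two_stage_select(atlases, cheap_score, refined_score, n_aug, n_fuse):
    """Stage 1: keep n_aug atlases by the cheap metric.
    Stage 2: re-rank the survivors by the refined metric, keep n_fuse."""
    stage1 = sorted(atlases, key=cheap_score, reverse=True)[:n_aug]
    return sorted(stage1, key=refined_score, reverse=True)[:n_fuse]

# Toy example: the cheap score is a noisy version of the refined score,
# so n_aug must be large enough for the truly best atlases to survive.
atlases = list(range(10))
refined = {a: a for a in atlases}                        # atlas 9 is truly best
cheap = {a: a + (3 if a == 0 else 0) for a in atlases}   # noise boosts atlas 0
picked = two_stage_select(atlases, cheap.get, refined.get, n_aug=5, n_fuse=2)
print(picked)  # → [9, 8]
```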

  7. Two-stage vs single-stage management for concomitant gallstones and common bile duct stones

    Institute of Scientific and Technical Information of China (English)

    Jiong Lu; Yao Cheng; Xian-Ze Xiong; Yi-Xin Lin; Si-Jia Wu; Nan-Sheng Cheng

    2012-01-01

    AIM: To evaluate the safety and effectiveness of two-stage vs single-stage management for concomitant gallstones and common bile duct stones. METHODS: Four databases, including PubMed, Embase, the Cochrane Central Register of Controlled Trials and the Science Citation Index up to September 2011, were searched to identify all randomized controlled trials (RCTs). Data were extracted from the studies by two independent reviewers. The primary outcomes were stone clearance from the common bile duct, postoperative morbidity and mortality. The secondary outcomes were conversion to other procedures, number of procedures per patient, length of hospital stay, total operative time, hospitalization charges, patient acceptance and quality of life scores. RESULTS: Seven eligible RCTs [five trials (n = 621) comparing preoperative endoscopic retrograde cholangiopancreatography (ERCP)/endoscopic sphincterotomy (EST) + laparoscopic cholecystectomy (LC) with LC + laparoscopic common bile duct exploration (LCBDE); two trials (n = 166) comparing postoperative ERCP/EST + LC with LC + LCBDE], composed of 787 patients in total, were included in the final analysis. The meta-analysis detected no statistically significant difference between the two groups in stone clearance from the common bile duct [risk ratios (RR) = -0.10, 95% confidence intervals (CI): -0.24 to 0.04, P = 0.17], postoperative morbidity (RR = 0.79, 95% CI: 0.58 to 1.10, P = 0.16), mortality (RR = 2.19, 95% CI: 0.33 to 14.67, P = 0.42), conversion to other procedures (RR = 1.21, 95% CI: 0.54 to 2.70, P = 0.39), length of hospital stay (MD = 0.99, 95% CI: -1.59 to 3.57, P = 0.45) or total operative time (MD = 12.14, 95% CI: -1.83 to 26.10, P = 0.09). Two-stage (LC + ERCP/EST) management clearly required more procedures per patient than single-stage (LC + LCBDE) management. CONCLUSION: Single-stage management is equivalent to two-stage management but requires fewer procedures. However, the patient's condition, the operator's expertise and local resources should be taken into account in

  8. A Pilot Sampling Design for Estimating Outdoor Recreation Site Visits on the National Forests

    Science.gov (United States)

    Stanley J. Zarnoch; S.M. Kocis; H. Ken Cordell; D.B.K. English

    2002-01-01

    A pilot sampling design is described for estimating site visits to National Forest System lands. The three-stage sampling design consisted of national forest ranger districts, site days within ranger districts, and last-exiting recreation visitors within site days. Stratification was used at both the primary and secondary stages. Ranger districts were stratified based...

  9. A data structure for describing sampling designs to aid in compilation of stand attributes

    Science.gov (United States)

    John C. Byrne; Albert R. Stage

    1988-01-01

    Maintaining permanent plot data with different sampling designs over long periods within an organization, and sharing such information between organizations, requires that common standards be used. A data structure for the description of the sampling design within a stand is proposed. It is composed of just those variables and their relationships needed to compile...

  10. An Optimal Spatial Sampling Design for Intra-Urban Population Exposure Assessment.

    Science.gov (United States)

    Kumar, Naresh

    2009-02-01

    This article offers an optimal spatial sampling design that captures maximum variance with the minimum sample size. The proposed design addresses the weaknesses of the sampling design that Kanaroglou et al. (2005) used to identify 100 sites for capturing population exposure to NO2 in Toronto, Canada. Their design suffers from a number of weaknesses and fails to capture the spatial variability in NO2 effectively. The demand surface they used is spatially autocorrelated and weighted by population size, which leads to the selection of redundant sites. The location-allocation model (LAM) available in commercial software packages, which they used to identify their sample sites, is not designed to solve spatial sampling problems with spatially autocorrelated data. A computer application (written in C++) that utilizes a spatial search algorithm was developed to implement the proposed sampling design. The design was implemented in three different urban environments - namely Cleveland, OH; Delhi, India; and Iowa City, IA - to identify optimal sample sites for monitoring airborne particulates.
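
As a stand-in for the proposed spatial search (the paper's exact algorithm is not given here), greedy farthest-point selection illustrates how redundant, spatially autocorrelated sites are avoided:

```python
# Greedy farthest-point site selection: each new site maximises its distance
# to the already-chosen set, spreading monitors across the study area.
import math

def farthest_point_sites(candidates, k):
    """Pick k sites, each maximising distance to the chosen set."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(math.dist(c, s) for s in chosen),
        )
        chosen.append(best)
    return chosen

grid = [(x, y) for x in range(5) for y in range(5)]
print(farthest_point_sites(grid, 3))  # opposite corners are chosen first
```

A demand-weighted variant would multiply each candidate's distance score by its population weight; the unweighted form above shows only the de-clustering behaviour.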

  11. Numerical analysis of flow interaction of turbine system in two-stage turbocharger of internal combustion engine

    Science.gov (United States)

    Liu, Y. B.; Zhuge, W. L.; Zhang, Y. J.; Zhang, S. Y.

    2016-05-01

    To reach the goals of energy conservation and emission reduction, high intake pressure is needed to meet the demands of high power density and a high EGR rate in internal combustion engines. The power density of present diesel engines has reached 90 kW/L, and the required intake pressure ratio exceeds 5. A two-stage turbocharging system is an effective way to realize such a high compression ratio. Because the compression work of a turbocharging system derives from exhaust gas energy, the efficiency of exhaust gas energy use, which is influenced by the design and matching of the turbine system, is important to the performance of a highly supercharged engine. A conventional turbine system is assembled from single-stage turbocharger turbines, and turbine matching is based on turbine maps measured on a test rig. Flow between the turbines is assumed to be uniform, and the outlet physical quantities of each turbine are taken to equal ambient values. However, as several studies have demonstrated, three-dimensional flow field distortion and changes in outlet physical quantities influence the performance of the turbine system. For an engine equipped with a two-stage turbocharging system, optimizing the turbine system design will increase the efficiency of exhaust gas energy use and thereby increase engine power density. However, flow interaction within the turbine system changes the flow in each turbine and influences turbine performance. To characterize the interaction between the high-pressure turbine and the low-pressure turbine, flow in the turbine system was modeled and simulated numerically. The calculation results suggest that the static pressure field at the inlet to the low-pressure turbine increases the back pressure of the high-pressure turbine, although the efficiency of the high-pressure turbine changes little; the distorted velocity field at the outlet of the high-pressure turbine results in swirl at the inlet to the low-pressure turbine. Clockwise swirl produces a large negative angle of attack at the rotor inlet, which causes flow loss in the turbine impeller passages and decreases turbine efficiency.

  12. Two-stage dilute-acid and organic-solvent lignocellulosic pretreatment for enhanced bioprocessing

    Energy Technology Data Exchange (ETDEWEB)

    Brodeur, G.; Telotte, J.; Stickel, J. J.; Ramakrishnan, S.

    2016-11-01

    A two-stage pretreatment approach for biomass is developed in the current work, in which dilute acid (DA) pretreatment is followed by a solvent-based pretreatment (N-methylmorpholine N-oxide, NMMO). When the combined pretreatment (DAWNT) is applied to sugarcane bagasse and corn stover, the rates of hydrolysis and overall yields (>90%) improve dramatically, and under certain conditions the additional NMMO step cuts 48 h from the hydrolysis time needed to reach similar conversions. DAWNT shows a 2-fold increase in characteristic rates and also fractionates the different components of biomass: DA treatment removes the hemicellulose, while the remaining cellulose is broken down to simple sugars by enzymatic hydrolysis after NMMO treatment. The remaining residual solid is high-purity lignin. Future work will focus on developing a full-scale economic analysis of DAWNT for use in biomass fractionation.

  13. Reconstruction of Gene Regulatory Networks Based on Two-Stage Bayesian Network Structure Learning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Gui-xia Liu; Wei Feng; Han Wang; Lei Liu; Chun-guang Zhou

    2009-01-01

    In the post-genomic era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used to reconstruct gene regulatory networks because of its advantages, but how to determine the network structure and parameters still needs to be explored. This paper proposes a two-stage structure learning algorithm that integrates an immune evolutionary algorithm to build a Bayesian network. The new algorithm is evaluated with both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can recover many of the known regulatory relationships reported in the literature and predict unknown ones with high validity and accuracy.
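
    As a hypothetical sketch of the first stage of such a two-stage learner, candidate edges can be pruned by empirical mutual information over discrete expression data, leaving the second stage (an immune evolutionary search in the paper, not reproduced here) a much smaller space to explore. All names and the threshold are illustrative assumptions.

    ```python
    import numpy as np
    from itertools import combinations

    def mutual_info(x, y):
        """Empirical mutual information (in nats) between two discrete
        variables, from their joint and marginal frequencies."""
        mi = 0.0
        for a in np.unique(x):
            for b in np.unique(y):
                pxy = np.mean((x == a) & (y == b))
                if pxy > 0:
                    mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
        return mi

    def two_stage_candidates(data, threshold=0.05):
        """Stage 1: keep only variable pairs whose mutual information
        exceeds a threshold; stage 2 would search structures over this
        reduced candidate edge set."""
        n_vars = data.shape[1]
        return [(i, j) for i, j in combinations(range(n_vars), 2)
                if mutual_info(data[:, i], data[:, j]) > threshold]
    ```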

  14. The Application of Two-stage Structure Decomposition Technique to the Study of Industrial Carbon Emissions

    Institute of Scientific and Technical Information of China (English)

    Yanqiu HE

    2015-01-01

    Total carbon emissions control is the ultimate goal of carbon emission reduction, while industrial carbon emissions are the basic units of the total. Building on existing research, this paper proposes a two-stage input-output structure decomposition method that fully combines the input-output method with structure decomposition analysis. Compared with previous studies, more comprehensive technical progress indicators were chosen, including the utilization efficiency of all kinds of intermediate inputs such as energy and non-energy products, and these were ultimately mapped to the factors affecting the carbon emissions of different industries. Through this analysis, the effect of each factor on industrial carbon emissions was quantified, providing a theoretical basis and data support for total carbon emissions control in China from the perspective of industrial emissions.
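
    The core arithmetic of structure decomposition can be illustrated with a minimal two-factor case in polar-average form: a change in emissions C = intensity × activity splits exactly into an intensity (technique) effect and an activity (scale) effect. The variable names are hypothetical; the paper's method decomposes a full input-output system rather than a single product.

    ```python
    def structure_decompose(intensity0, intensity1, activity0, activity1):
        """Two-factor structural decomposition (polar-average form) of a
        change in emissions C = intensity * activity. The two effects sum
        exactly to the total change C1 - C0, with no residual term."""
        intensity_effect = (intensity1 - intensity0) * (activity0 + activity1) / 2
        activity_effect = (activity1 - activity0) * (intensity0 + intensity1) / 2
        return intensity_effect, activity_effect
    ```

    Averaging over both weighting "polar" orders is what makes the decomposition exact; weighting with only base-year or only end-year values leaves an interaction residual.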

  15. A two-stage metal valorisation process from electric arc furnace dust (EAFD)

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD together with other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electric resistance furnace, removal of lead is enabled by the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, zinc is valorised. Using this process, several final products were obtained, including a higher-purity zinc oxide whose properties correspond to those of washed Waelz oxide.

  16. A wavelet-based two-stage near-lossless coder.

    Science.gov (United States)

    Yea, Sehoon; Pearlman, William A

    2006-11-01

    In this paper, we present a two-stage near-lossless compression scheme. It belongs to the class of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual to guarantee a given L(infinity) error bound in the pixel domain. We focus on the selection of the optimum bit rate for the lossy layer to achieve the minimum total bit rate. Unlike other similar lossy plus lossless approaches using a wavelet-based lossy layer, the proposed method does not require iteration of decoding and inverse discrete wavelet transform in succession to locate the optimum bit rate. We propose a simple method to estimate the optimal bit rate, with a theoretical justification based on the critical rate argument from the rate-distortion theory and the independence of the residual error.
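
    The L(infinity) guarantee of the residual layer can be illustrated with a generic construction (not the authors' exact coder): uniformly quantize an integer residual with step 2*delta + 1, so every integer lies within delta of a reconstruction level, and entropy-code the indices.

    ```python
    import numpy as np

    def quantize_residual(residual, delta):
        """Quantize an integer-valued residual with step 2*delta + 1.
        The nearest multiple of the step is at most delta away from any
        integer, so the pixel-domain reconstruction error is bounded by
        delta in the L-infinity sense. The indices q are what an
        arithmetic coder would then compress."""
        step = 2 * delta + 1
        q = np.round(residual / step).astype(int)  # quantization indices
        recon = q * step                           # dequantized residual
        return q, recon
    ```

    With delta = 0 the step is 1 and the scheme degenerates to lossless residual coding, which is why this family is called "near-lossless".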

  17. Two-Stage Over-the-Air (OTA) Test Method for LTE MIMO Device Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Ya Jing

    2012-01-01

    With MIMO technology being adopted by the wireless communication standards LTE and HSPA+, MIMO OTA research has attracted wide interest from both industry and academia. Parallel studies are underway in COST2100, CTIA, and 3GPP RAN WG4. The major test challenge for MIMO OTA is how to create a repeatable scenario that accurately reflects MIMO antenna radiation performance in a realistic wireless propagation environment. Different MIMO OTA methods differ in how they reproduce a specified MIMO channel model. This paper introduces a novel, flexible, and cost-effective method for measuring MIMO OTA using a two-stage approach. In the first stage, the antenna pattern is measured in an anechoic chamber using a nonintrusive approach, that is, without cabled connections or modification of the device. In the second stage, the antenna pattern is convolved with the chosen channel model in a channel emulator, and throughput is measured over a cabled connection.
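
    The second-stage combination can be sketched as follows. Array shapes and names are illustrative assumptions: each receive antenna's measured complex gain at a path's angle of arrival weights that path's fading coefficient, so the cabled link sees the channel the device would experience over the air.

    ```python
    import numpy as np

    def emulate_radiated_channel(patterns, aoa_idx, path_coeffs):
        """Stage-two combination sketch for a MIMO OTA test.

        patterns:    (n_ant, n_angles) complex antenna gains from stage one
        aoa_idx:     (n_paths,) indices into the angle grid, one per path
        path_coeffs: (n_paths,) complex fading coefficients of the model
        returns:     (n_ant, n_paths) emulated per-antenna path coefficients
        """
        # look up each path's gain per antenna, then weight the fading taps
        return patterns[:, aoa_idx] * path_coeffs[None, :]
    ```

    In a real emulator the path coefficients are time-varying, so this weighting is applied per fading snapshot.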

  18. Two stages of parafoveal processing during reading: Evidence from a display change detection task.

    Science.gov (United States)

    Angele, Bernhard; Slattery, Timothy J; Rayner, Keith

    2016-08-01

    We used a display change detection paradigm (Slattery, Angele, & Rayner, 2011, Journal of Experimental Psychology: Human Perception and Performance, 37, 1924-1938) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when it was from a wordlike preview, but the preview benefit on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we found no influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not share the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage and a late, attention-dependent lexical access stage.

  19. Enhanced biodiesel production in Neochloris oleoabundans by a semi-continuous process in two stage photobioreactors.

    Science.gov (United States)

    Yoon, Se Young; Hong, Min Eui; Chang, Won Seok; Sim, Sang Jun

    2015-07-01

    Under autotrophic conditions, highly productive biodiesel production was achieved in Neochloris oleoabundans using a semi-continuous culture system. In particular, flue gas generated by the combustion of liquefied natural gas, together with natural solar radiation, was used for a cost-effective microalgal culture system. In the semi-continuous culture, the greater part (~80%) of the culture volume, containing vegetative cells grown under nitrogen-replete conditions in a first photobioreactor (PBR), was transferred directly to a second PBR and cultured sequentially under nitrogen-deplete conditions to accelerate oil accumulation. As a result, the biomass and biodiesel productivities of the cells in semi-continuous culture were increased by 58% (growth phase) and 51% (induction phase), respectively, compared with cells in batch culture. The semi-continuous culture system using two-stage photobioreactors is thus a very efficient strategy for further improving biodiesel production from microalgae under photoautotrophic conditions.

  20. The Sources of Efficiency of the Nigerian Banking Industry: A Two-Stage Approach

    Directory of Open Access Journals (Sweden)

    Frances Obafemi

    2013-11-01

    The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in either the pre- or post-liberalization era. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development, and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.
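
    The two-stage logic can be sketched under a strong simplification (a single input and a single output per bank, where the CCR efficiency score reduces to each bank's output/input ratio relative to the best ratio; the paper's DEA uses multiple inputs and outputs and requires a linear program per bank). Stage two then regresses the scores on an environmental variable such as market share. All names are hypothetical.

    ```python
    import numpy as np

    def two_stage_dea(inputs, outputs, market_share):
        """Two-stage efficiency analysis sketch.

        Stage 1: single-input, single-output CCR-style technical
        efficiency, i.e. each unit's output/input ratio scaled by the
        frontier (best) ratio, giving scores in (0, 1].
        Stage 2: ordinary least squares of the scores on an
        environmental variable (here market share)."""
        ratio = outputs / inputs
        efficiency = ratio / ratio.max()                 # stage 1 scores
        slope, intercept = np.polyfit(market_share, efficiency, 1)  # stage 2
        return efficiency, slope, intercept
    ```

    A positive fitted slope would correspond to the paper's finding that market share is a strong determinant of technical efficiency.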