WorldWideScience

Sample records for two-stage random sampling

  1. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  2. Two-stage sampling for acceptance testing

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, C.L.; Bryan, M.F.

    1992-09-01

    Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum, but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly, but instead is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.
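
    The achieved level of this two-stage rule is easy to check by simulation. Below is a minimal Python sketch (not the authors' code), assuming normal measurements, a one-sided upper t-bound, and a hypothetical fixed second-stage size n2; running it with the true mean at the allowed maximum estimates the actual Type I error, which exceeds the nominal level unless α is corrected as the note describes.

        import numpy as np
        from scipy import stats

        def two_stage_accept_rate(mu, sigma, max_allowed, n1, n2,
                                  alpha=0.05, reps=20000, seed=0):
            """Fraction of simulated lots accepted by the two-stage rule."""
            rng = np.random.default_rng(seed)
            accepts = 0
            for _ in range(reps):
                x = rng.normal(mu, sigma, n1)
                t = stats.t.ppf(1 - alpha, len(x) - 1)
                ucl = x.mean() + t * x.std(ddof=1) / np.sqrt(len(x))
                if ucl <= max_allowed:            # stage 1 passes outright
                    accepts += 1
                elif x.mean() <= max_allowed:     # mean OK, bound too high: enlarge sample
                    x = np.concatenate([x, rng.normal(mu, sigma, n2)])
                    t = stats.t.ppf(1 - alpha, len(x) - 1)
                    ucl = x.mean() + t * x.std(ddof=1) / np.sqrt(len(x))
                    accepts += ucl <= max_allowed
            return accepts / reps

        # With the true mean exactly at the allowed maximum, the acceptance rate
        # is the achieved Type I error; compare it against the nominal alpha.
        print(two_stage_accept_rate(mu=10.0, sigma=1.0, max_allowed=10.0, n1=10, n2=10))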

  3. Two-Stage Modelling of Random Phenomena

    Science.gov (United States)

    Barańska, Anna

    2015-12-01

    The main objective of this publication was to present a two-stage algorithm for modelling random phenomena, based on multidimensional function modelling, using the examples of modelling the real estate market for the purpose of real estate valuation and of estimating the model parameters of foundations' vertical displacements. The first stage of the presented algorithm includes the selection of a suitable form of the function model. In classical algorithms based on function modelling, the prediction of the dependent variable is its value obtained directly from the model. The better the model reflects the relationship between the independent variables and their effect on the dependent variable, the more reliable is the model value. In this paper, an algorithm has been proposed which comprises adjusting the value obtained from the model with a random correction determined from the residuals of the model for those cases which, in a separate analysis, were considered to be the most similar to the object for which we want to model the dependent variable. The effect of applying the developed quantitative procedures for calculating the corrections, and of the qualitative methods for assessing similarity, on the final outcome of the prediction and its accuracy was examined by statistical methods, mainly using appropriate parametric tests of significance. The idea of the presented algorithm has been designed so as to approximate the value of the dependent variable of the studied phenomenon to its value in reality and, at the same time, to have it "smoothed out" by a well-fitted modelling function.
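
    The second-stage correction is essentially a nearest-neighbour adjustment of a regression prediction. The following Python sketch is an illustrative reconstruction, not the author's implementation; the linear model, the Euclidean similarity measure, and the choice of k are all assumptions.

        import numpy as np

        def two_stage_predict(X, y, x_new, k=5):
            """Stage 1: fit a function model (here linear, via least squares).
            Stage 2: adjust the model value by the mean residual of the k
            training cases most similar to x_new."""
            A = np.column_stack([np.ones(len(X)), X])       # design matrix
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            residuals = y - A @ beta
            base = np.concatenate([[1.0], x_new]) @ beta    # stage-1 model value
            dist = np.linalg.norm(X - x_new, axis=1)        # similarity: Euclidean distance
            nearest = np.argsort(dist)[:k]
            return base + residuals[nearest].mean()         # stage-2 random correction

        rng = np.random.default_rng(1)
        X = rng.uniform(0, 1, (200, 3))                     # e.g. property attributes
        y = 100 + 40 * X[:, 0] - 25 * X[:, 1] + rng.normal(0, 5, 200)
        print(two_stage_predict(X, y, np.array([0.5, 0.2, 0.7])))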

  4. Precision and cost considerations for two-stage sampling in a panelized forest inventory design.

    Science.gov (United States)

    Westfall, James A; Lister, Andrew J; Scott, Charles T

    2016-01-01

    Due to the relatively high cost of measuring sample plots in forest inventories, considerable attention is given to sampling and plot designs during the forest inventory planning phase. A two-stage design can be efficient from a field work perspective as spatially proximate plots are grouped into work zones. A comparison between subsampling with units of unequal size (SUUS) and a simple random sample (SRS) design in a panelized framework assessed the statistical and economic implications of using the SUUS design for a case study in the Northeastern USA. The sampling errors for estimates of forest land area and biomass were approximately 1.5-2.2 times larger with SUUS prior to completion of the inventory cycle. Considerable sampling error reductions were realized by using the zones within a post-stratified sampling paradigm; however, post-stratification of plots in the SRS design always provided smaller sampling errors in comparison. Cost differences between the two designs indicated the SUUS design could reduce the field work expense by 2-7 %. The results also suggest the SUUS design may provide substantial economic advantage for tropical forest inventories, where remote areas, poor access, and lower wages are typically encountered.

  5. Two-Stage Sampling Procedures for Comparing Means When Population Distributions Are Non-Normal.

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen

    Two-stage sampling procedures for comparing two population means when variances are heterogeneous have been developed by D. G. Chapman (1950) and B. K. Ghosh (1975). Both procedures assume sampling from populations that are normally distributed. The present study reports on the effect that sampling from non-normal distributions has on Type I error…

  6. Two Stage Fully Differential Sample and Hold Circuit Using 0.18 µm Technology

    Directory of Open Access Journals (Sweden)

    Dharmendra Dongardiye

    2014-05-01

    This paper presents a well-established fully differential sample-and-hold circuit, implemented in 180-nm CMOS technology. In this two-stage method, the first stage gives a very high gain and the second stage gives a large voltage swing. The proposed op-amp provides 149 MHz unity-gain bandwidth, a 78-degree phase margin, and a differential peak-to-peak output swing of more than 2.4 V, using an improved fully differential two-stage operational amplifier with 76.7 dB gain. The sample-and-hold circuit also meets the requirements of the SNR specifications.

  7. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem in which some parameters of the linear constraints are interval-valued discrete random variables with known probability distributions. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. The deterministic multiobjective model is then solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively, which highlights the possible risk involved in the decision making. A numerical example is presented to demonstrate the proposed solution procedure.

  8. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km²-sized river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters, topographic wetness index and potential incoming solar radiation, derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points. The selection of sample point locations has been done using…

  9. A two-stage method to determine optimal product sampling considering dynamic potential market.

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level, and the changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a whole-picture analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.

  10. The role of the upper sample size limit in two-stage bioequivalence designs.

    Science.gov (United States)

    Karalis, Vangelis

    2013-11-01

    Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD design where sample re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, different starting number of subjects, and GMR. The use of UL leads to no inflation of type I error. As UL values increase, the % probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD which are more pronounced for highly variable drugs.

  11. Modifications of some simple One-stage Randomized Response Models to Two-stage in complex surveys

    Directory of Open Access Journals (Sweden)

    Mohammad Rafiq

    2016-06-01

    Warner (1965) introduced a Randomized Response Technique (RRT) to minimize bias due to non-response or false response. Thereafter, several researchers have made significant contributions to the development and modification of different randomized response models. We have modified a few one-stage simple randomized response models to two-stage randomized response models in complex surveys and found that our developed models are more efficient.

  12. Synthetic control charts with two-stage sampling for monitoring bivariate processes

    Directory of Open Access Journals (Sweden)

    Antonio F. B. Costa

    2007-04-01

    In this article, we consider the synthetic control chart with two-stage sampling (SyTS chart) to control bivariate processes. During the first stage, one item of the sample is inspected and two correlated quality characteristics (x, y) are measured. If the Hotelling statistic T1² for these individual observations of (x, y) is lower than a specified value UCL1, the sampling is interrupted. Otherwise, the sampling goes on to the second stage, where the remaining items are inspected and the Hotelling statistic T2² for the sample means of (x, y) is computed. When the statistic T2² is larger than a specified value UCL2, the sample is classified as nonconforming. According to the synthetic control chart procedure, the signal is based on the number of conforming samples between two neighboring nonconforming samples. The proposed chart detects process disturbances faster than bivariate charts with variable sample size and is, from the practical viewpoint, more convenient to administer.
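
    A minimal Python sketch of the per-sample decision logic, assuming known in-control parameters mu0 and Sigma; the limits ucl1 and ucl2, the sample size, and the synthetic-rule limit L are placeholders, and the signalling rule is shown in its simplest form.

        import numpy as np

        def t2(v, mu, Sinv):
            """Hotelling statistic (v - mu)' Sigma^-1 (v - mu)."""
            d = v - mu
            return float(d @ Sinv @ d)

        def sample_is_nonconforming(sample, mu0, Sigma, ucl1, ucl2):
            """Stage 1: T1^2 on the first item; stage 2 (only if needed): T2^2 on the means."""
            Sinv = np.linalg.inv(Sigma)
            if t2(sample[0], mu0, Sinv) < ucl1:
                return False                    # sampling interrupted after stage 1
            n = len(sample)                     # remaining items inspected
            return n * t2(sample.mean(axis=0), mu0, Sinv) > ucl2

        def synthetic_signal(nonconforming_flags, L=10):
            """Simplified synthetic rule: signal when two nonconforming samples
            occur at most L samples apart; returns the signalling index or None."""
            last = None
            for i, bad in enumerate(nonconforming_flags):
                if bad:
                    if last is not None and i - last <= L:
                        return i
                    last = i
            return None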

  13. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine.

  14. A covariate adjusted two-stage allocation design for binary responses in randomized clinical trials.

    Science.gov (United States)

    Bandyopadhyay, Uttam; Biswas, Atanu; Bhattacharya, Rahul

    2007-10-30

    In the present work, we develop a two-stage allocation rule for binary response using the log-odds ratio within the Bayesian framework, allowing the current allocation to depend on the covariate value of the current subject. We study, both numerically and theoretically, several exact and limiting properties of this design. The applicability of the proposed methodology is illustrated using a data set. We compare this rule with some of the existing rules by computing various performance measures.

  15. Sampling strategies for estimating forest cover from remote sensing-based two-stage inventories

    Institute of Scientific and Technical Information of China (English)

    Corona, Piermaria; Fattorini, Lorenzo; Pagliarella, Maria Chiara

    2015-01-01

    Background: Remote sensing-based inventories are essential in estimating forest cover in tropical and subtropical countries, where ground inventories cannot be performed periodically at a large scale owing to high costs and forest inaccessibility (e.g. REDD projects), and are mandatory for constructing historical records that can be used as forest cover baselines. Given the conditions of such inventories, the survey area is partitioned into a grid of imagery segments of pre-fixed size where the proportion of forest cover can be measured within segments using a combination of unsupervised (automated or semi-automated) classification of satellite imagery and manual (i.e. visual on-screen) enhancements. Because visual on-screen operations are time-expensive procedures, manual classification can be performed only for a sample of imagery segments selected at a first stage, while forest cover within each selected segment is estimated at a second stage from a sample of pixels selected within the segment. Because forest cover data arising from unsupervised satellite imagery classification may be freely available (e.g. Landsat imagery) over the entire survey area (wall-to-wall data) and are likely to be good proxies of manually classified cover data (sample data), they can be adopted as suitable auxiliary information. Methods: The question is how to choose the sample areas where manual classification is carried out. We have investigated the efficiency of one-per-stratum stratified sampling for selecting the segments and pixels where manual classification is carried out, and the efficiency of the difference estimator for exploiting auxiliary information at the estimation level. The performance of this strategy is compared with simple random sampling without replacement. Results: Our results were obtained theoretically from three artificial populations constructed from the Landsat classification (forest/non-forest) available at pixel level for a study area located in central Italy…
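
    The difference estimator at the heart of this strategy is compact enough to state in code. A minimal Python sketch under simple random sampling of segments (the one-per-stratum selection scheme of the paper is not reproduced), where x_all holds the wall-to-wall auxiliary cover proportions and y the manually classified proportions for the sampled segments:

        import numpy as np

        def difference_estimator(x_all, y_sample, idx):
            """Mean forest cover: wall-to-wall auxiliary mean plus the sample
            mean of the (manual - auxiliary) differences on sampled segments."""
            return x_all.mean() + (y_sample - x_all[idx]).mean()

        rng = np.random.default_rng(2)
        x_all = rng.beta(2, 5, 10_000)                     # auxiliary cover, every segment
        y_all = np.clip(x_all + rng.normal(0, 0.05, x_all.size), 0, 1)  # "manual" cover
        idx = rng.choice(x_all.size, 200, replace=False)   # segments classified manually
        print(difference_estimator(x_all, y_all[idx], idx), y_all.mean())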

  16. The Efficiency Level in the Estimation of the Nigerian Population: A Comparison of One-Stage and Two-Stage Sampling Technique (A Case Study of the 2006 Census of Nigerians)

    Directory of Open Access Journals (Sweden)

    T.J. Akingbade

    2014-09-01

    This research work compares the one-stage sampling technique (simple random sampling) and the two-stage sampling technique for estimating the population total of Nigerians, using the 2006 census result of Nigeria. A sample of twenty (20) states was selected out of a population of thirty-six (36) states at the Primary Sampling Unit (PSU), and one-third of each state selected at the PSU was sampled at the Secondary Sampling Unit (SSU) and analyzed. The result shows that, with the same sample size at the PSU, the one-stage sampling technique (simple random sampling) is more efficient than the two-stage sampling technique and is hence recommended.
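
    A minimal Python sketch of the two-stage estimator of the population total being compared here, with assumed data: pop is a list of per-state unit arrays, n_psu states are drawn by simple random sampling, and a third of the units are subsampled within each selected state.

        import numpy as np

        def two_stage_total(pop, n_psu, frac=1/3, rng=None):
            """Unbiased total under SRS of states (PSUs), then SRS of units within each."""
            rng = np.random.default_rng() if rng is None else rng
            N = len(pop)
            est = 0.0
            for i in rng.choice(N, n_psu, replace=False):
                units = pop[i]
                m = max(1, round(frac * len(units)))
                sub = rng.choice(units, m, replace=False)
                est += len(units) * sub.mean()     # expand within the state
            return N / n_psu * est                 # expand across states

        rng = np.random.default_rng(3)
        pop = [rng.poisson(5000, rng.integers(50, 200)) for _ in range(36)]  # 36 "states"
        print(sum(u.sum() for u in pop), two_stage_total(pop, n_psu=20, rng=rng))

    Replicating both estimators and comparing the variances of the resulting estimates reproduces the kind of efficiency comparison reported above.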

  17. Effect of a two-stage nursing assessment and intervention - a randomized intervention study

    DEFF Research Database (Denmark)

    Rosted, Elizabeth Emilie; Poulsen, Ingrid; Hendriksen, Carsten

    Background: Geriatric patients recently discharged from hospital are at risk of unplanned readmissions and admission to nursing home. When discharged directly from the Emergency Department (ED) the risk increases, as time pressure often requires focus on the presenting problem, although 80% of geriatric patients have complex and often unresolved caring needs. The objective was to examine the effect of a two-stage nursing assessment and intervention to address the patients' uncompensated problems, given just after discharge from the ED and one and six months after. Method: We conducted a prospective randomized intervention study; the intervention included referral to the geriatric outpatient clinic, community health centre, primary physician or arrangements with next-of-kin. Findings: Primary endpoints will be presented as unplanned readmission to the ED, admission to nursing home, and death. Secondary endpoints will be presented as physical function and depressive symptoms.

  18. Exact alpha-error determination for two-stage sampling strategies to substantiate freedom from disease.

    Science.gov (United States)

    Kopacka, I; Hofrichter, J; Fuchs, K

    2013-05-01

    Sampling strategies to substantiate freedom from disease are important when it comes to the trade of animals and animal products. When considering imperfect tests and finite populations, sample size calculation can, however, be a challenging task. The generalized hypergeometric formula developed by Cameron and Baldock (1998a) offers a framework that can elegantly be extended to multi-stage sampling strategies, which are widely used to account for disease clustering at herd-level. The achieved alpha-error of such surveys, however, typically depends on the realization of the sample and can differ from the pre-calculated value. In this paper, we introduce a new formula to evaluate the exact alpha-error induced by a specific sample. We further give a numerically viable approximation formula and analyze its properties using a data example of Brucella melitensis in the Austrian sheep population.
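
    For a single-stage survey with a perfectly specific test, the kind of calculation being generalized here is a hypergeometric mixture. The Python sketch below illustrates that standard Cameron-and-Baldock-style computation; it is not the authors' multi-stage formula.

        import numpy as np
        from scipy.stats import hypergeom

        def alpha_no_positives(N, D, n, se):
            """P(all n sampled animals test negative | D infected in a herd of N),
            for a test with sensitivity se and perfect specificity."""
            k = np.arange(0, min(D, n) + 1)            # infected animals in the sample
            return float(np.sum(hypergeom.pmf(k, N, D, n) * (1 - se) ** k))

        # Probability of missing a 2% prevalence in a herd of 500 when testing
        # 60 animals with an 85%-sensitive test:
        print(alpha_no_positives(N=500, D=10, n=60, se=0.85))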

  19. The Mark Coventry, MD, Award: Oral Antibiotics Reduce Reinfection After Two-Stage Exchange: A Multicenter, Randomized Controlled Trial.

    Science.gov (United States)

    Frank, Jonathan M; Kayupov, Erdan; Moric, Mario; Segreti, John; Hansen, Erik; Hartman, Curtis; Okroj, Kamil; Belden, Katherine; Roslund, Brian; Silibovsky, Randi; Parvizi, Javad; Della Valle, Craig J

    2017-01-01

    Many patients develop recurrent periprosthetic joint infection after two-stage exchange arthroplasty of the hip or knee. One potential but insufficiently tested strategy to decrease the risk of persistent or recurrent infection is to administer additional antibiotics after the second-stage reimplantation. (1) Does a 3-month course of oral antibiotics decrease the risk of failure secondary to infection after a two-stage exchange? (2) Are there any complications related to the administration of oral antibiotics after a two-stage exchange? (3) In those patients who develop a reinfection, is the infecting organism different from the initial infection? Patients at seven centers were randomized to receive 3 months of oral antibiotics or no further antibiotic treatment after operative cultures taken at the second-stage reimplantation were negative. Adult patients undergoing two-stage hip or knee revision arthroplasty for a periprosthetic infection who met Musculoskeletal Infection Society (MSIS) criteria for infection at the first stage were included. Oral antibiotic therapy was tailored to the original infecting organism(s) in consultation with an infectious disease specialist. MSIS criteria as used by the treating surgeon defined failure. Surveillance of patients for complications, including reinfection, occurred at 3 weeks, 6 weeks, 3 months, 12 months, and 24 months. If an organism demonstrated the same antibiotic sensitivities as the original organism, it was considered the same organism; no DNA subtyping was performed. Analysis was performed as intent to treat, with all randomized patients included in the groups to which they were randomized. A log-rank survival curve was used to analyze the primary outcome of reinfection. At planned interim analysis (enrollment is ongoing), 59 patients were successfully randomized to the antibiotic group and 48 patients to the control group. Fifty-seven patients had an infection after TKA and 50 after a THA. There was no minimum followup…

  1. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    Science.gov (United States)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  2. Two-stage sample-to-answer system based on nucleic acid amplification approach for detection of malaria parasites.

    Science.gov (United States)

    Liu, Qing; Nam, Jeonghun; Kim, Sangho; Lim, Chwee Teck; Park, Mi Kyoung; Shin, Yong

    2016-08-15

    Rapid, early, and accurate diagnosis of malaria is essential for effective disease management and surveillance, and can reduce the morbidity and mortality associated with the disease. Although significant advances have been achieved for the diagnosis of malaria, these technologies are still far from ideal, being time consuming, complex and poorly sensitive, as well as requiring separate assays for sample processing and detection. Therefore, the development of a fast and sensitive method that can integrate sample processing with detection of malarial infection is desirable. Here, we report a two-stage sample-to-answer system based on a nucleic acid amplification approach for detection of malaria parasites. It combines the dimethyl adipimidate (DMA)/thin film sample processing (DTS) technique as a first stage and the Mach-Zehnder interferometer-isothermal solid-phase DNA amplification (MZI-IDA) sensing technique as a second stage. The system can extract DNA from malarial parasites using the DTS technique in a closed system, not only reducing sample loss and contamination, but also facilitating multiplexed malarial DNA detection using the fast and accurate MZI-IDA technique. Here, we demonstrated that this system can deliver results within 60 min (including sample processing, amplification and detection) with high sensitivity, making it well suited for the detection of malaria in low-resource settings.

  3. FunSAV: predicting the functional effect of single amino acid variants using a two-stage random forest model.

    Directory of Open Access Journals (Sweden)

    Mingjun Wang

    Single amino acid variants (SAVs) are the most abundant form of known genetic variations associated with human disease. Successful prediction of the functional impact of SAVs from sequences can thus lead to an improved understanding of the underlying mechanisms of why a SAV may be associated with certain diseases. In this work, we constructed a high-quality structural dataset that contained 679 high-quality protein structures with 2,048 SAVs by collecting the human genetic variant data from multiple resources and dividing them into two categories, i.e., disease-associated and neutral variants. We built a two-stage random forest (RF) model, termed FunSAV, to predict the functional effect of SAVs by combining sequence, structure and residue-contact network features with other additional features that were not explored in previous studies. Importantly, a two-step feature selection procedure was proposed to select the most important and informative features that contribute to the prediction of disease association of SAVs. In cross-validation experiments on the benchmark dataset, FunSAV achieved a good prediction performance with an area under the curve (AUC) of 0.882, which is competitive with and in some cases better than other existing tools including SIFT, SNAP, PolyPhen2, PANTHER, nsSNPAnalyzer and PhD-SNP. The source code of FunSAV and the datasets can be downloaded at http://sunflower.kuicr.kyoto-u.ac.jp/sjn/FunSAV.

  4. Application of composite estimation in studies of animal population production with two-stage repeated sample designs.

    Science.gov (United States)

    Farver, T B; Holt, D; Lehenbauer, T; Greenley, W M

    1997-05-01

    This paper reports results from two example data sets of a two-stage sampling design where sampling (in panels) of both farms and animals within selected farms increases the efficiency of parameter estimation from measurements recorded over time. With such a design, not only are farms replaced from time to time, but the animals subsampled within retained farms are also subject to replacement. Three general categories of parameters estimated for the population (the set of animals belonging to the universe of farms of interest) were (1) the total at each measurement occasion; (2) the difference between means or totals on successive measurement occasions; (3) the total over a sequence of successive measurement periods. Whereas several responses at the farm level were highly correlated over time (ρ1), the corresponding animal responses were less correlated over time (ρ2), leading to only moderate gains in relative efficiency. Intraclass correlation values were too low in most cases to counteract the overall negative impact of ρ2. In general, sizeable gains in relative efficiency were observed for estimating change, confirming a previous result which showed this to be true provided that ρ1 was high (irrespective of ρ2).

  5. Robust prediction of B-factor profile from sequence using two-stage SVR based on random forest feature selection.

    Science.gov (United States)

    Pan, Xiao-Yong; Shen, Hong-Bin

    2009-01-01

    B-factor is highly correlated with protein internal motion and is used to measure the uncertainty in the position of an atom within a crystal structure. Although the rapid progress of structural biology in recent years makes more accurate protein structures available than ever, with the avalanche of new protein sequences emerging during the post-genomic era, the gap between the known protein sequences and the known protein structures becomes wider and wider. It is urgent to develop automated methods to predict the B-factor profile from the amino acid sequence directly, so as to be able to utilize it for basic research in a timely manner. In this article, we propose a novel approach, called PredBF, to predict the real value of the B-factor. We first extract both global and local features from the protein sequences as well as their evolution information; then random forest feature selection is applied to rank their importance, and the most important features are input to a two-stage support vector regression (SVR) for prediction, where the initial predicted outputs from the first SVR are further input to the second-layer SVR for final refinement. Our results reveal that a systematic analysis of the importance of different features gives deep insights into the different contributions of the features and is very necessary for developing effective B-factor prediction tools. The two-layer SVR prediction model designed in this study further enhanced the robustness of predicting the B-factor profile. As a web server, PredBF is freely available at: http://www.csbio.sjtu.edu.cn/bioinf/PredBF for academic use.

  6. Estimation of infection prevalence and sensitivity in a stratified two-stage sampling design employing highly specific diagnostic tests when there is no gold standard.

    Science.gov (United States)

    Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S

    2015-11-10

    In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate the infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence.
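
    Under the stated assumptions (100% specificity for both tests and conditional independence), the moment equations admit a simple closed form. The Python sketch below shows one way such a solution can be written; it is an illustration, not the authors' exact estimator.

        def moment_estimates(p1, r, q):
            """Method-of-moments solution assuming perfect specificity for both
            tests and conditional independence given infection status.
            p1: fraction positive on test A in the stage-1 random sample;
            r:  fraction positive on test B among the A-positive stage-2 stratum;
            q:  fraction positive on test B among the A-negative stage-2 stratum."""
            se_b = r                        # A+ implies infected, so P(B+ | A+) = Se_B
            prev = p1 + q * (1 - p1) / r    # since q * (1 - p1) / Se_B = prevalence - p1
            se_a = p1 / prev                # since p1 = Se_A * prevalence
            return prev, se_a, se_b

        print(moment_estimates(p1=0.10, r=0.80, q=0.02))   # hypothetical fractions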

  7. Mixed effect regression analysis for a cluster-based two-stage outcome-auxiliary-dependent sampling design with a continuous outcome.

    Science.gov (United States)

    Xu, Wangli; Zhou, Haibo

    2012-09-01

    Two-stage design is a well-known cost-effective way for conducting biomedical studies when the exposure variable is expensive or difficult to measure. Recent research development further allowed one or both stages of the two-stage design to be outcome dependent on a continuous outcome variable. This outcome-dependent sampling feature enables further efficiency gain in parameter estimation and overall cost reduction of the study (e.g. Wang, X. and Zhou, H., 2010. Design and inference for cancer biomarker study with an outcome and auxiliary-dependent subsampling. Biometrics 66, 502-511; Zhou, H., Song, R., Wu, Y. and Qin, J., 2011. Statistical inference for a two-stage outcome-dependent sampling design with a continuous outcome. Biometrics 67, 194-202). In this paper, we develop a semiparametric mixed effect regression model for data from a two-stage design where the second-stage data are sampled with an outcome-auxiliary-dependent sample (OADS) scheme. Our method allows the cluster- or center-effects of the study subjects to be accounted for. We propose an estimated likelihood function to estimate the regression parameters. Simulation study indicates that greater study efficiency gains can be achieved under the proposed two-stage OADS design with center-effects when compared with other alternative sampling schemes. We illustrate the proposed method by analyzing a dataset from the Collaborative Perinatal Project.

  8. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate that occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
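
    The flavour of such a design simulation is easy to reproduce. A minimal Python sketch, assuming negative-binomial quadrat counts to mimic spatial clustering and plain simple random sampling; the adaptive and two-stage designs evaluated in the study are not reproduced here.

        import numpy as np

        def simulate_srs(mean_density, clumping, n_quadrats, n_sampled, reps=2000):
            """CV of the density estimate and species detection probability under
            SRS of quadrats from a clustered (negative binomial) population."""
            rng = np.random.default_rng(4)
            ests, detected = [], 0
            for _ in range(reps):
                p = clumping / (clumping + mean_density)
                pop = rng.negative_binomial(clumping, p, n_quadrats)
                s = rng.choice(pop, n_sampled, replace=False)
                ests.append(s.mean())
                detected += s.sum() > 0
            ests = np.array(ests)
            return ests.std() / ests.mean(), detected / reps

        # Sparse, strongly clustered population: CV is large and detection imperfect.
        print(simulate_srs(mean_density=0.5, clumping=0.3, n_quadrats=5000, n_sampled=100))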

  9. Two-stage revision surgery with preformed spacers and cementless implants for septic hip arthritis: a prospective, non-randomized cohort study

    Directory of Open Access Journals (Sweden)

    Logoluso Nicola

    2011-05-01

    Background: Outcome data on two-stage revision surgery for deep infection after septic hip arthritis are limited and inconsistent. This study presents the medium-term results of a new, standardized two-stage arthroplasty with preformed hip spacers and cementless implants in a consecutive series of adult patients with septic arthritis of the hip treated according to the same protocol. Methods: Nineteen patients (20 hips) were enrolled in this prospective, non-randomized cohort study between 2000 and 2008. The first stage comprised femoral head resection, debridement, and insertion of a preformed, commercially available, antibiotic-loaded cement hip spacer. After eradication of infection, a cementless total hip arthroplasty was implanted in the second stage. Patients were assessed for infection recurrence, pain (visual analog scale [VAS]) and hip joint function (Harris Hip score). Results: The mean time between first diagnosis of infection and revision surgery was 5.8 ± 9.0 months; the average duration of follow-up was 56.6 (range, 24-104) months; all 20 hips were successfully converted to prosthesis an average of 22 ± 5.1 weeks after spacer implantation. Reinfection after total hip joint replacement occurred in 1 patient. The mean VAS pain score improved from 48 (range, 35-84) pre-operatively to 18 (range, 0-38) prior to spacer removal and to 8 (range, 0-15) at the last follow-up assessment after prosthesis implantation. The average Harris Hip score improved from 27.5 before surgery to 61.8 between the two stages to 92.3 at the final follow-up assessment. Conclusions: Satisfactory outcomes can be obtained with two-stage revision hip arthroplasty using preformed spacers and cementless implants for prosthetic hip joint infections of various etiologies.

  10. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Background: Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. Results: We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion: Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
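
    The first sampling stage, selecting grid cells with probability proportional to their gridded population counts, reduces to a weighted draw. A minimal Python sketch (with replacement for simplicity; the GIS work and the Google Earth-based household selection of the second stage are beyond a few lines):

        import numpy as np

        def select_clusters_pps(cell_pop, n_clusters, rng=None):
            """Stage 1: draw grid-cell indices with probability proportional
            to the gridded population count of each cell."""
            rng = np.random.default_rng() if rng is None else rng
            p = cell_pop / cell_pop.sum()
            return rng.choice(cell_pop.size, size=n_clusters, replace=True, p=p)

        rng = np.random.default_rng(5)
        cell_pop = rng.lognormal(6, 1, 2500)    # hypothetical 50 x 50 population grid
        print(select_clusters_pps(cell_pop, n_clusters=30, rng=rng))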

  11. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    Science.gov (United States)

    Sankar Sana, Shib

    2016-01-01

    The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study the production lot size/order quantity, the reorder point, and the sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and the sales teams' initiatives. The manufacturer produces the order quantity of the retailer in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales teams' initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that their optimum profits come close to their target profits. This study suggests how the management of firms can determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to provide further insight into the model.

  12. The liberation of arsenosugars from matrix components in difficult-to-extract seafood samples utilizing TMAOH/acetic acid sequentially in a two-stage extraction process

    Science.gov (United States)

    Sample extraction is one of the most important steps in arsenic speciation analysis of solid dietary samples. One of the problem areas in this analysis is the partial extraction of arsenicals from seafood samples. The partial extraction allows the toxicity of the extracted arse...

  13. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  14. Sample to sample fluctuations in the random energy model

    Energy Technology Data Exchange (ETDEWEB)

    Derrida, B. (Service de Physique Theorique, CEN Saclay, 91 - Gif-sur-Yvette (France)); Toulouse, G. (E.S.P.C.I., 75 - Paris (France))

    1985-03-15

    In the spin glass phase, mean field theory says that the weights of the valleys vary from sample to sample. Exact expressions for the probability laws of these fluctuations are derived from the random energy model, without recourse to the replica method.
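
    The fluctuations in question are easy to visualize numerically. A minimal Python sketch of the random energy model with 2^n independent Gaussian levels: below the freezing temperature the Boltzmann weight condenses onto a few valleys, and the participation ratio Y = sum of squared weights varies strongly from sample to sample.

        import numpy as np

        def participation_ratio(n=16, beta=2.0, rng=None):
            """One REM sample: Y = sum_i w_i^2 with w_i proportional to exp(-beta * E_i)."""
            rng = np.random.default_rng() if rng is None else rng
            E = rng.normal(0.0, np.sqrt(n / 2.0), 2 ** n)   # REM scaling: Var(E) = n/2
            w = np.exp(-beta * (E - E.min()))               # shift for numerical stability
            w /= w.sum()
            return float(np.sum(w ** 2))

        rng = np.random.default_rng(6)
        # beta above the freezing transition (beta_c = 2*sqrt(ln 2) ~ 1.67): Y fluctuates.
        print([round(participation_ratio(rng=rng), 3) for _ in range(10)])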

  15. Effects of regularly consuming dietary fibre rich soluble cocoa products on bowel habits in healthy subjects: a free-living, two-stage, randomized, crossover, single-blind intervention

    Directory of Open Access Journals (Sweden)

    Sarriá Beatriz

    2012-04-01

    Background: Dietary fibre is both preventive and therapeutic for bowel functional diseases. Soluble cocoa products are good sources of dietary fibre that may be supplemented with this dietary component. This study assessed the effects of regularly consuming two soluble cocoa products (A and B) with different non-starch polysaccharide levels (NSP, 15.1 and 22.0% w/w, respectively) on bowel habits, using subjective intestinal function and symptom questionnaires, a daily diary and a faecal marker in healthy individuals. Methods: A free-living, two-stage, randomized, crossover, single-blind intervention was carried out in 44 healthy men and women, 18-55 y old, who had not taken dietary supplements, laxatives, or antibiotics six months before the start of the study. In the four-week intervention stages, separated by a three-week wash-out stage, two servings of A and B, providing 2.26 vs. 6.60 g/day of NSP respectively, were taken. In each stage, volunteers' diet was recorded using a 72-h food intake report. Results: Regularly consuming cocoa A and B increased fibre intake, although only cocoa B significantly increased fibre intake. Conclusions: Regular consumption of the cocoa products increases dietary fibre intake to recommended levels, and product B improves bowel habits. The use of both objective and subjective assessments to evaluate the effects of food on bowel habits is recommended.

  16. Efficacy and safety of 5% lidocaine (lignocaine) medicated plaster in comparison with pregabalin in patients with postherpetic neuralgia and diabetic polyneuropathy: interim analysis from an open-label, two-stage adaptive, randomized, controlled trial.

    Science.gov (United States)

    Baron, Ralf; Mayoral, Victor; Leijon, Göran; Binder, Andreas; Steigerwald, Ilona; Serpell, Michael

    2009-01-01

    Postherpetic neuralgia (PHN) and diabetic polyneuropathy (DPN) are two common causes of peripheral neuropathic pain. Typical localized symptoms can include burning sensations or intermittent shooting or stabbing pains with or without allodynia. Evidence-based treatment guidelines recommend the 5% lidocaine (lignocaine) medicated plaster or pregabalin as first-line therapy for relief of peripheral neuropathic pain. This study aimed to compare 5% lidocaine medicated plaster treatment with pregabalin in patients with PHN and patients with DPN. The study was a two-stage, adaptive, randomized, controlled, open-label, multicentre trial that incorporated a drug wash-out phase of up to 2 weeks prior to the start of the comparative phase. At the end of the enrollment phase, patients who fulfilled the eligibility criteria were randomized to either 5% lidocaine medicated plaster or pregabalin treatment and entered the 4-week comparative phase. The interim analysis represents the first stage of the two-stage adaptive trial design and was planned to include data from the comparative phase for the first 150 randomized patients of the 300 total planned for the trial. Patients aged ≥ 18 years with PHN or DPN were recruited from 53 investigational centres in 14 European countries. For this interim analysis, 55 patients with PHN and 91 with DPN (full-analysis set [FAS]), randomly assigned to the treatment groups, were available for analysis. Topical 5% lidocaine medicated plaster treatment was administered by patients to the area of most painful skin. A maximum of three or four plasters were applied for up to 12 hours within each 24-hour period in patients with PHN or DPN, respectively. Pregabalin capsules were administered orally, twice daily. The dose was titrated to effect: all patients received 150 mg/day in the first week and 300 mg/day in the second week of treatment. After 1 week at 300 mg/day, the dose of pregabalin was further increased to 600 mg/day in patients with…

  17. Adapted random sampling patterns for accelerated MRI.

    Science.gov (United States)

    Knoll, Florian; Clason, Christian; Diwoky, Clemens; Stollberger, Rudolf

    2011-02-01

    Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions like polynomials of different order to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters which is very time consuming and therefore impractical for daily clinical use. This work presents a new approach that generates sampling patterns by making use of power spectra of existing reference data sets and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points. The approach is validated with downsampling experiments, as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and the generalization potential is tested by using a range of reference images. Quantitative evaluation is performed for the downsampling experiments using RMS differences to the original, fully sampled data set. Our results demonstrate that the image quality of the method presented in this paper is comparable to that of an established model-based strategy when optimization of the model parameter is carried out and yields superior results to non-optimized model parameters. However, no random sampling pattern showed superior performance when compared to conventional Cartesian subsampling for the considered reconstruction strategy.
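
    The key step, turning the power spectrum of a reference data set into a sampling density, needs only a few lines. A minimal Python sketch (MRI-specific details, such as fully sampling the k-space centre, are omitted):

        import numpy as np

        def pattern_from_reference(ref_image, n_samples, rng=None):
            """Draw k-space sampling positions with probability proportional
            to the power spectrum of a reference image."""
            rng = np.random.default_rng() if rng is None else rng
            spectrum = np.abs(np.fft.fftshift(np.fft.fft2(ref_image))) ** 2
            p = (spectrum / spectrum.sum()).ravel()
            chosen = rng.choice(p.size, size=n_samples, replace=False, p=p)
            mask = np.zeros(ref_image.shape, dtype=bool)
            mask.ravel()[chosen] = True
            return mask

        ref = np.random.default_rng(7).normal(size=(128, 128))   # stand-in reference image
        mask = pattern_from_reference(ref, n_samples=128 * 128 // 4)
        print(mask.mean())   # fraction of k-space retained: 0.25 (4x undersampling)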

  18. Saliency Based Tracking Method for Abrupt Motions via Two-stage Sampling

    Institute of Scientific and Technical Information of China (English)

    江晓莲; 李翠华; 李雄宗

    2014-01-01

    In this paper, a saliency-based tracking method via two-stage sampling is proposed for abrupt motions. Firstly, visual saliency is introduced as prior knowledge into the Wang-Landau Monte Carlo (WLMC)-based tracking algorithm. By dividing the spatial space into disjoint sub-regions and assigning each sub-region a saliency value, prior knowledge of the promising regions is obtained; then the saliency values of the sub-regions are integrated into the Markov chain Monte Carlo (MCMC) acceptance mechanism to guide effective state sampling. Secondly, considering that an abrupt motion sequence contains both abrupt and smooth motions, a two-stage sampling model is brought into the algorithm. In the first stage, the model detects the motion type of the target. According to the result of the first stage, the model chooses either the saliency-based WLMC method to track abrupt motions or the double-chain MCMC method to track smooth motions of the target in the second stage. The algorithm efficiently addresses tracking of abrupt motions while smooth motions are also accurately tracked. Experimental results demonstrate that this approach outperforms state-of-the-art algorithms on abrupt motion sequences and public benchmark sequences in terms of accuracy and robustness.

  19. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
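
    The spirit of the inference, observing only acceptable items and quantifying confidence that a large fraction of the unsampled items are also acceptable, can be conveyed with a simpler single-group Beta-binomial sketch in Python; the paper's two-group judgmental/random model is richer than this.

        from scipy.stats import beta

        def prob_population_clean(n_sampled, threshold=0.99, a=1.0, b=1.0):
            """Posterior P(acceptable fraction >= threshold) after observing
            n_sampled acceptable items and no defects, with a Beta(a, b) prior."""
            return float(beta.sf(threshold, a + n_sampled, b))

        for n in (10, 50, 200, 500):
            print(n, round(prob_population_clean(n), 3))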

  20. The construction of two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1988-01-01

    Although two-stage testing is not the most efficient form of adaptive testing, it has some advantages. In this paper, linear programming models are given for the construction of two-stage tests. In these models, practical constraints with respect to, among other things, test composition and administration…

  1. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, Clara M.; Buchhave, Preben; K. George, William

    2014-01-01

    The random, but velocity dependent, sampling of the LDA presents non-trivial signal processing challenges due to the high velocity bias and the arbitrariness of the particle path through the measuring volume, among other factors. To obtain the desired non-biased statistics, it has previously been shown analytically as well as empirically that residence time weighting is the suitable choice. Unfortunately, due to technical problems related to the processors providing erroneous measurements of the residence times, this previously widely accepted theory has been questioned and instead a wide spectrum of alternative methods attempting to produce correct power spectra have been invented and tested. The objective of the current study is to create a simple computer generated signal for baseline testing of residence time weighting and some of the most commonly proposed algorithms…
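
    A minimal Python sketch of a residence-time-weighted estimate in the form commonly written for burst-type LDA data, where each velocity sample u[i] arrives at time t[i] with residence time dt[i]; this illustrates the weighting idea only and is not the authors' processing chain.

        import numpy as np

        def rtw_spectrum(t, u, dt, freqs):
            """Residence-time weighting: the mean velocity uses dt as weights and
            the spectrum is a dt-weighted Fourier sum over the fluctuations."""
            T = t[-1] - t[0]                          # total record length
            u_mean = np.sum(u * dt) / np.sum(dt)      # residence-time-weighted mean
            v = u - u_mean                            # velocity fluctuations
            S = np.empty(freqs.size)
            for k, f in enumerate(freqs):
                F = np.sum(v * dt * np.exp(-2j * np.pi * f * t))
                S[k] = np.abs(F) ** 2 / T
            return S

        rng = np.random.default_rng(8)
        t = np.sort(rng.uniform(0.0, 10.0, 2000))     # random (velocity-biased) arrivals
        u = np.sin(2 * np.pi * 5 * t) + rng.normal(0, 0.2, 2000)
        dt = rng.exponential(1e-3, 2000)              # stand-in residence times
        print(rtw_spectrum(t, u, dt, np.array([2.0, 5.0, 8.0])))  # peak near 5 Hz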

  2. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, C. M.; Buchhave, P.; K. George, W.

    The random, but velocity dependent, sampling of the LDA presents non-trivial signal processing challenges due to the high velocity bias and the arbitrariness of the particle path through the measuring volume, among other factors. To obtain the desired non-biased statistics, it has previously been shown that residence time weighting is the suitable choice. If the algorithms are not able to produce correct statistics from this simple signal, then they will certainly not be able to function well for a more complex measured LDA signal. This is, of course, true also for other methods that are based on the tested algorithms. The extremes are tested by increasing… Residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from measurements on the jet centerline and the outer part of the jet…

  3. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose a new random pattern generator which copes with strict practical limitations imposed on patterns, with possibly minimal loss in randomness of sampling. The proposed generator is compared with existing sampling pattern generators using the introduced statistical methods. It is shown that the proposed…
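
    The following sketch shows one common form of constrained pattern generation: a minimum spacing between sampling points, as an ADC might require. It draws an unconstrained pattern on a shrunken grid and re-inserts the mandatory gaps, which samples uniformly among all feasible patterns. The constraint and names are illustrative; this is not the generator proposed in the paper.

        import numpy as np

        def constrained_pattern(n, grid_len, t_min, seed=None):
            """Uniformly draw n sampling points on a grid of length grid_len
            with spacing of at least t_min between consecutive points."""
            rng = np.random.default_rng(seed)
            m = grid_len - (n - 1) * (t_min - 1)   # shrunken grid size
            if m < n:
                raise ValueError("constraint infeasible")
            y = np.sort(rng.choice(m, size=n, replace=False))
            return y + np.arange(n) * (t_min - 1)  # re-insert mandatory gaps

        pattern = constrained_pattern(n=32, grid_len=1024, t_min=8, seed=3)
        assert np.all(np.diff(pattern) >= 8)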

  4. A two-stage rank test using density estimation

    NARCIS (Netherlands)

    Albers, Willem/Wim

    1995-01-01

    For the one-sample problem, a two-stage rank test is derived which realizes a required power against a given local alternative, for all sufficiently smooth underlying distributions. This is achieved using asymptotic expansions, resulting in a precision of order m⁻¹, where m is the size of the first-stage sample.

  5. Sample controllability of impulsive differential systems with random coefficients

    Science.gov (United States)

    Zhang, Shuorui; Sun, Jitao

    2016-07-01

    In this paper, we investigate the controllability of impulsive differential systems with random coefficients. Impulsive differential systems with random coefficients are a different stochastic model from stochastic differential equations. Sufficient conditions of sample controllability for impulsive differential systems with random coefficients are obtained by using random Sadovskii's fixed-point theorem. Finally, an example is given to illustrate our results.

  6. Control of Randomly Sampled Robotic Systems

    Science.gov (United States)

    1989-05-01

    …systems through communications. Communication between processes sharing a single processor is also subject to random delays due to memory management and interrupt latency, as are communications between processors.

  7. Two Stage Gear Tooth Dynamics Program

    Science.gov (United States)

    1989-08-01

    …conditions and associated iteration procedure become more complex. This is due to both the increased number of components and to the time for a… solved for each stage in the two-stage solution. There are (3 + number of planets) degrees of freedom for each stage, plus two degrees of freedom… should be devised. It should be noted that this is not a minor task. In general, each stage plus an input or output shaft will have 2 times (4 + number…

  8. Two-stage replication of previous genome-wide association studies of AS3MT-CNNM2-NT5C2 gene cluster region in a large schizophrenia case-control sample from Han Chinese population.

    Science.gov (United States)

    Guan, Fanglin; Zhang, Tianxiao; Li, Lu; Fu, Dongke; Lin, Huali; Chen, Gang; Chen, Teng

    2016-10-01

    Schizophrenia is a devastating psychiatric condition with high heritability. Replicating the specific genetic variants that increase susceptibility to schizophrenia in different populations is critical to better understand schizophrenia. CNNM2 and NT5C2 are genes recently identified as susceptibility genes for schizophrenia in Europeans, but the exact mechanism by which these genes confer risk for schizophrenia remains unknown. In this study, we examined the potential for genetic susceptibility to schizophrenia of a three-gene cluster region, AS3MT-CNNM2-NT5C2. We implemented a two-stage strategy to conduct association analyses of the targeted regions with schizophrenia. A total of 8218 individuals were recruited, and 45 pre-selected single nucleotide polymorphisms (SNPs) were genotyped. Both single-marker and haplotype-based analyses were conducted in addition to imputation analysis to increase the coverage of our genetic markers. Two SNPs, rs11191419 (OR = 1.24, P = 7.28×10⁻⁵) and rs11191514 (OR = 1.24, P = 0.0003), with significant independent effects were identified. These results were supported by the data from both the discovery and validation stages. Further haplotype and imputation analyses also validated these results, and bioinformatics analyses indicated that CALHM1, which is located approximately 630 kb away from CNNM2, might be a susceptibility gene for schizophrenia. Our results provide further support that AS3MT, CNNM2 and CALHM1 are involved in the etiology and pathogenesis of schizophrenia, suggesting these genes are potential targets of interest for the improvement of disease management and the development of novel pharmacological strategies.

  9. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  10. Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

    Science.gov (United States)

    Barranca, Victor J.; Kovačič, Gregor; Zhou, Douglas; Cai, David

    2016-08-01

    Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.
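
    A minimal sketch of the localized-random-sampling idea: pick a random center pixel, then measure nearby pixels with probability decaying with distance, here via Gaussian offsets. The offset distribution and parameter names are assumptions for illustration, not the authors' exact protocol.

        import numpy as np

        def localized_random_samples(shape, n_centers, n_local, scale, seed=0):
            """Return pixel coordinates sampled by repeatedly picking a random
            center and adding Gaussian-distributed offsets (std = scale)."""
            rng = np.random.default_rng(seed)
            h, w = shape
            coords = []
            for _ in range(n_centers):
                cy, cx = rng.integers(h), rng.integers(w)
                offs = rng.normal(0.0, scale, size=(n_local, 2))
                ys = np.clip(np.round(cy + offs[:, 0]), 0, h - 1).astype(int)
                xs = np.clip(np.round(cx + offs[:, 1]), 0, w - 1).astype(int)
                coords.append(np.column_stack([ys, xs]))
            return np.unique(np.vstack(coords), axis=0)   # drop duplicates

        img = np.random.rand(64, 64)
        pts = localized_random_samples(img.shape, n_centers=100, n_local=8,
                                       scale=2.0)
        measurements = img[pts[:, 0], pts[:, 1]]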

  12. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds…

  13. Two Stage Sibling Cycle Compressor/Expander.

    Science.gov (United States)

    1994-02-01

    PL-TR-94-1051, "Two Stage Sibling Cycle Compressor/Expander," Matthew P. Mitchell, Mitchell/Stirling Machines/Systems, Inc. This final report was prepared by Mitchell/Stirling Machines/Systems, Inc., Berkeley, CA, under contract. Cites: L. Bauwens and M.P. Mitchell, "Regenerator Analysis: Validation of the MS*2 Stirling Cycle Code," Proc. XVIIIth International…

  14. Random number datasets generated from statistical analysis of randomly sampled GSM recharge cards.

    Science.gov (United States)

    Okagbue, Hilary I; Opanuga, Abiodun A; Oguntunde, Pelumi E; Ugwoke, Paulinus O

    2017-02-01

    In this article, random number datasets were generated from random samples of used GSM (Global Systems for Mobile Communications) recharge cards. Statistical analyses were performed to refine the raw data into random number datasets arranged in tables. A detailed description of the method and relevant tests of randomness are also discussed.

  15. Measuring the Learning from Two-Stage Collaborative Group Exams

    CERN Document Server

    Ives, Joss

    2014-01-01

    A two-stage collaborative exam is one in which students first complete the exam individually, and then complete the same or similar exam in collaborative groups immediately afterward. To quantify the learning effect from the group component of these two-stage exams in an introductory Physics course, a randomized crossover design was used where each student participated in both the treatment and control groups. For each of the two two-stage collaborative group midterm exams, questions were designed to form matched near-transfer pairs with questions on an end-of-term diagnostic which was used as a learning test. For learning test questions paired with questions from the first midterm, which took place six to seven weeks before the learning test, an analysis using a mixed-effects logistic regression found no significant differences in learning-test performance between the control and treatment groups. For learning test questions paired with questions from the second midterm, which took place one to two weeks prior…

  16. Recursive algorithm for the two-stage EFOP estimation method

    Institute of Scientific and Technical Information of China (English)

    LUO GuiMing; HUANG Jian

    2008-01-01

    A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method was proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain estimation and frequency-domain estimation. It has improved anti-disturbance performance, and can precisely identify models from fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is an iterative method and takes too much computation to work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Some simulation examples are included to demonstrate the validity of the new method.

  17. Classification in two-stage screening.

    Science.gov (United States)

    Longford, Nicholas T

    2015-11-10

    Decision theory is applied to the problem of setting thresholds in medical screening when it is organised in two stages. In the first stage, which involves a less expensive procedure that can be applied on a mass scale, an individual is classified as a negative or a likely positive. In the second stage, the likely positives are subjected to another test that classifies them as (definite) positives or negatives. The second-stage test is more accurate, but also more expensive and more involved, so there are incentives to restrict its application. Robustness of the method with respect to the parameters, some of which have to be set by elicitation, is assessed by sensitivity analysis.
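
    A toy version of this decision-theoretic setup: with an assumed Gaussian score model for the inexpensive first-stage test and hypothetical costs, the expected per-person cost can be scanned over first-stage thresholds. All distributions, costs and second-stage accuracies below are illustrative assumptions, not values from the paper.

        import numpy as np
        from scipy import stats

        def expected_cost(t1, prev, c_test2, c_fn, c_fp,
                          mu_pos=1.0, mu_neg=0.0, sd=1.0,
                          sens2=0.95, spec2=0.95):
            """Expected per-person cost when first-stage scores above t1 are
            referred to the accurate (but costly) second-stage test."""
            p_ref_pos = stats.norm.sf(t1, mu_pos, sd)  # diseased referred
            p_ref_neg = stats.norm.sf(t1, mu_neg, sd)  # healthy referred
            refer = prev * p_ref_pos + (1 - prev) * p_ref_neg
            fn = prev * (1 - p_ref_pos) + prev * p_ref_pos * (1 - sens2)
            fp = (1 - prev) * p_ref_neg * (1 - spec2)
            return refer * c_test2 + fn * c_fn + fp * c_fp

        ts = np.linspace(-2, 3, 501)
        costs = [expected_cost(t, prev=0.02, c_test2=50, c_fn=5000, c_fp=200)
                 for t in ts]
        print("least-cost first-stage threshold:", ts[int(np.argmin(costs))])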

  18. Two stage gear tooth dynamics program

    Science.gov (United States)

    Boyd, Linda S.

    1989-01-01

    The epicyclic gear dynamics program was expanded to add the option of evaluating the tooth pair dynamics for two epicyclic gear stages with peripheral components. This was a practical extension to the program, as multiple gear stages are often used for speed reduction, space, weight, and/or auxiliary units. The option was developed for either stage to be a basic planetary, star, single external-external mesh, or single external-internal mesh. The two-stage system allows for modeling of the peripherals with an input mass and shaft, an output mass and shaft, and a connecting shaft. Execution of the initial test case indicated an instability in the solution, with the tooth pair loads growing to excessive magnitudes. A procedure to trace the instability is recommended, as well as a method of reducing the program's computation time by reducing the number of boundary condition iterations.

  19. Noncausal two-stage image filtration at presence of observations with anomalous errors

    Directory of Open Access Journals (Sweden)

    S. V. Vishnevyy

    2013-04-01

    Full Text Available Introduction. For the filtration of images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms which detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. An adaptive algorithm for noncausal two-stage filtration is developed. In the first stage, an adaptive one-dimensional algorithm for causal filtration is applied independently along the rows and columns of the image. In the second stage, the obtained data are combined and a posteriori estimates are calculated. Results of experimental investigations. The developed adaptive algorithm is investigated on a model sample by means of statistical modeling on a PC. The image is modeled as a realization of a Gaussian-Markov random field and corrupted with uncorrelated Gaussian noise; regions of the image with anomalous errors are corrupted with uncorrelated Gaussian noise of higher power than the normal noise on the rest of the image. Conclusions. The adaptive algorithm for noncausal two-stage filtration is analyzed and the accuracy of the computed estimates is characterized. The first and second stages of the developed adaptive algorithm are compared, and the adaptive algorithm is compared with a known uniform two-stage algorithm of image filtration. According to the obtained results, the uniform algorithm does not suppress anomalous noise, whereas the adaptive algorithm shows good results.

  20. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), affecting our inferences about population structure and abundance. I conclude with a discussion of ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  1. Different Random Distributions Research on Logistic-Based Sample Assumption

    Directory of Open Access Journals (Sweden)

    Jing Pan

    2014-01-01

    Full Text Available A logistic-based sample assumption is proposed in this paper, and different random distributions are studied through this system. It provides an assumption system for logistic-based samples, including the structure of the sample space. Moreover, the influence of different random distributions of the inputs has been studied through this logistic-based sample assumption system. In this paper, three different random distributions (normal, uniform, and beta) are used for testing. The experimental simulations illustrate the relationship between inputs and outputs under the different random distributions. Numerical analysis then infers that the distribution of the outputs depends on that of the inputs to some extent, and that this assumption system is not an independent-increment process but is quasistationary.

  2. On Two-stage Seamless Adaptive Design in Clinical Trials

    Directory of Open Access Journals (Sweden)

    Shein-Chung Chow

    2008-12-01

    Full Text Available In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular because of its efficiency and flexibility in modifying trial and/or statistical procedures of ongoing clinical trials. One of the most commonly considered adaptive designs is probably a two-stage seamless adaptive trial design that combines two separate studies into one single study. In many cases, the study endpoints considered in a two-stage seamless adaptive design may be similar but different (e.g. a biomarker versus a regular clinical endpoint, or the same study endpoint with different treatment durations). In this case, it is important to determine how the data collected from both stages should be combined for the final analysis. It is also of interest to know how the sample size calculation/allocation should be done to achieve the study objectives originally set for the two stages (separate studies). In this article, formulas for sample size calculation/allocation are derived for cases in which the study endpoints are continuous, discrete (e.g. binary responses), or time-to-event data, assuming that there is a well-established relationship between the study endpoints at different stages and that the study objectives at different stages are the same. In cases in which the study objectives at different stages are different (e.g. dose finding at the first stage and efficacy confirmation at the second stage), and when there is a shift in patient population caused by protocol amendments, the derived test statistics and formulas for sample size calculation and allocation are modified accordingly so as to control the overall type I error at the prespecified level.

  3. A Two Stage Classification Approach for Handwritten Devanagari Characters

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Malik, Latesh

    2010-01-01

    The paper presents a two-stage classification approach for handwritten Devanagari characters. The first stage uses structural properties such as the shirorekha and the spine of a character; the second stage exploits intersection features of characters, which are fed to a feedforward neural network. A simple histogram-based method does not work for finding the shirorekha or the vertical bar (spine) in handwritten Devanagari characters, so we designed a differential-distance-based technique to find a near-straight line for the shirorekha and spine. This approach has been tested on 50,000 samples and achieved 89.12% success.

  4. Near-Optimal Random Walk Sampling in Distributed Networks

    CERN Document Server

    Sarma, Atish Das; Pandurangan, Gopal

    2012-01-01

    Performing random walks in networks is a fundamental primitive that has found numerous applications in communication networks, such as token management, load balancing, network topology discovery and construction, search, and peer-to-peer membership management. While several such algorithms are ubiquitous, and use numerous random walk samples, the walks themselves have always been performed naively. In this paper, we focus on the problem of performing random walk sampling efficiently in a distributed network. Given bandwidth constraints, the goal is to minimize the number of rounds and messages required to obtain several random walk samples in a continuous online fashion. We present the first round- and message-optimal distributed algorithms, a significant improvement on all previous approaches. The theoretical analysis and comprehensive experimental evaluation of our algorithms show that they perform very well in different types of networks of differing topologies. In particular, our results show…

  5. Sequential time interleaved random equivalent sampling for repetitive signal

    Science.gov (United States)

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling times. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling is highly efficient.
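
    A sketch of the block measurement matrix described above: each acquisition run contributes a block B[m, k] = sinc((t_m − kT_eq)/T_eq) from the Whittaker-Shannon interpolation formula, relating its sampling sequence to the dense equivalent-time grid, and the blocks are stacked. The run structure is illustrative, loosely following the 1 GHz physical / 40 GHz equivalent rates quoted.

        import numpy as np

        def res_block_matrix(t_seq, n_grid, T_eq):
            """Block matrix relating one run's sampling times t_seq to the
            signal on the dense equivalent-time grid (spacing T_eq)."""
            grid = np.arange(n_grid) * T_eq
            return np.sinc((t_seq[:, None] - grid[None, :]) / T_eq)

        T_eq = 1 / 40e9                  # 40 GHz equivalent-time grid
        fs = 1e9                         # 1 GHz physical sample rate
        rng = np.random.default_rng(0)
        # eight runs of 64 samples, each with a random equivalent-time offset
        runs = [rng.uniform(0, 1 / fs) + np.arange(64) / fs for _ in range(8)]
        A = np.vstack([res_block_matrix(t, n_grid=2600, T_eq=T_eq)
                       for t in runs])
        # A is the combined measurement matrix handed to the CS reconstruction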

  6. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  7. Generalized Sampling Series Approximation of Random Signals from Local Averages

    Institute of Scientific and Technical Information of China (English)

    SONG Zhanjie; HE Gaiyun; YE Peixin; YANG Deyun

    2007-01-01

    Signals are often of random character: since they cannot bear any information if they are predictable for all time t, they are usually modelled as stationary random processes. On the other hand, because of the inertia of the measurement apparatus, the sampled values obtained in practice may not be the precise values of the signal X(t) at the times tk (k ∈ Z), but only local averages of X(t) near tk. In this paper, it is shown that a wide-sense (or weak-sense) stationary stochastic process can be approximated by a generalized sampling series based on local average samples.

  8. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  9. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  10. On the Impact of Bootstrap in Stratified Random Sampling

    Institute of Scientific and Technical Information of China (English)

    LIU Cheng; ZHAO Lian-wen

    2009-01-01

    In general, the accuracy of the mean estimator can be improved by stratified random sampling. In this paper, we provide an idea different from empirical methods: the accuracy can be further improved through the bootstrap resampling method under some conditions. The determination of sample size by the bootstrap method is also discussed, and a simulation is made to verify the accuracy of the proposed method. The simulation results show that the sample size based on bootstrapping is smaller than that based on the central limit theorem.
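
    A small sketch of the idea: resample with replacement within each stratum, recombine with the stratum weights, and read off the bootstrap standard error of the stratified mean; candidate sample sizes can then be compared against a target precision. Strata, weights and sizes are illustrative assumptions.

        import numpy as np

        def stratified_bootstrap_se(strata_samples, weights, n_boot=2000, seed=0):
            """Bootstrap standard error of the stratified mean estimator."""
            rng = np.random.default_rng(seed)
            reps = np.empty(n_boot)
            for b in range(n_boot):
                means = [rng.choice(s, size=len(s), replace=True).mean()
                         for s in strata_samples]
                reps[b] = np.dot(weights, means)
            return reps.std(ddof=1)

        rng = np.random.default_rng(1)
        strata = [rng.normal(10, 2, 40), rng.normal(14, 5, 60)]
        print(stratified_bootstrap_se(strata, weights=[0.3, 0.7]))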

  11. A random spatial sampling method in a rural developing nation.

    Science.gov (United States)

    Kondo, Michelle C; Bream, Kent D W; Barg, Frances K; Branas, Charles C

    2014-04-10

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available.

  12. Reduction of noise and bias in randomly sampled power spectra

    DEFF Research Database (Denmark)

    Buchhave, Preben; Velte, Clara Marika

    2015-01-01

    We consider the origin of noise and distortion in power spectral estimates of randomly sampled data, specifically velocity data measured with a burst-mode laser Doppler anemometer. The analysis guides us to new ways of reducing noise and removing spectral bias, e.g., distortions caused by modifications…

  13. Recording 2-D Nutation NQR Spectra by Random Sampling Method.

    Science.gov (United States)

    Glotova, Olga; Sinyavsky, Nikolaj; Jadzyn, Maciej; Ostafin, Michal; Nogaj, Boleslaw

    2010-10-01

    The method of random sampling was introduced for the first time into nutation nuclear quadrupole resonance (NQR) spectroscopy, where the nutation spectra show characteristic singularities in the form of shoulders. Analytic formulae for complex two-dimensional (2-D) nutation NQR spectra (I = 3/2) were obtained, and the condition for resolving the spectral singularities for small values of the asymmetry parameter η was determined. Our results show that random sampling of a nutation interferogram allows a significant reduction of the time required to perform a 2-D nutation experiment and does not worsen the spectral resolution.

  14. Two-stage designs for cross-over bioequivalence trials.

    Science.gov (United States)

    Kieser, Meinhard; Rauch, Geraldine

    2015-07-20

    The topic of applying two-stage designs in the field of bioequivalence studies has recently gained attention in the literature and in regulatory guidelines. While there exists some methodological research on the application of group sequential designs in bioequivalence studies, implementation of adaptive approaches has focused up to now on superiority and non-inferiority trials. Especially, no comparison of the features and performance characteristics of these designs has been performed, and therefore, the question of which design to employ in this setting remains open. In this paper, we discuss and compare 'classical' group sequential designs and three types of adaptive designs that offer the option of mid-course sample size recalculation. A comprehensive simulation study demonstrates that group sequential designs can be identified, which show power characteristics that are similar to those of the adaptive designs but require a lower average sample size. The methods are illustrated with a real bioequivalence study example.

  15. Capacity Analysis of Two-Stage Production lines with Many Products

    NARCIS (Netherlands)

    M.B.M. de Koster (René)

    1987-01-01

    We consider two-stage production lines with an intermediate buffer. A buffer is needed when fluctuations occur. For single-product production lines, fluctuations in capacity availability may be caused by random processing times, failures and random repair times. For multi-product production…

  16. Adaptive importance sampling of random walks on continuous state spaces

    Energy Technology Data Exchange (ETDEWEB)

    Baggerly, K.; Cox, D.; Picard, R.

    1998-11-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.

  17. Describing Typical Capstone Course Experiences from a National Random Sample

    Science.gov (United States)

    Grahe, Jon E.; Hauhart, Robert C.

    2013-01-01

    The pedagogical value of capstones has been regularly discussed within psychology. This study presents results from an examination of a national random sample of department webpages and an online survey that characterized the typical capstone course in terms of classroom activities and course administration. The department webpages provide an…

  18. Generalized and synthetic regression estimators for randomized branch sampling

    Science.gov (United States)

    David L. R. Affleck; Timothy G. Gregoire

    2015-01-01

    In felled-tree studies, ratio and regression estimators are commonly used to convert more readily measured branch characteristics to dry crown mass estimates. In some cases, data from multiple trees are pooled to form these estimates. This research evaluates the utility of both tactics in the estimation of crown biomass following randomized branch sampling (...

  19. Investigation of spectral analysis techniques for randomly sampled velocimetry data

    Science.gov (United States)

    Sree, Dave

    1993-01-01

    It is well known that laser velocimetry (LV) generates individual-realization velocity data that are randomly or unevenly sampled in time. Spectral analysis of such data to obtain the turbulence spectra, and hence turbulence scale information, requires special techniques. The 'slotting' technique of Mayo et al., also described by Roberts and Ajmani, and the 'direct transform' method of Gaster and Roberts are well known in the LV community. The slotting technique is computationally faster than the direct transform method. There are practical limitations, however, on how high in frequency an accurate estimate can be made for a given mean sampling rate. These high-frequency estimates are important in obtaining the microscale information of the turbulence structure. Previous studies found that reliable spectral estimates can be made up to about the mean sampling frequency (mean data rate) or less. If the data were evenly sampled, the frequency range would be half the sampling frequency (i.e., up to the Nyquist frequency); otherwise, aliasing problems would occur. The mean data rate and the sample size (total number of points) basically limit the frequency range. Also, there are large variabilities or errors associated with high-frequency estimates from randomly sampled signals. Roberts and Ajmani proposed certain pre-filtering techniques to reduce these variabilities, but at the cost of the low-frequency estimates; the pre-filtering acts as a high-pass filter. Further, Shapiro and Silverman showed theoretically that, for Poisson-sampled signals, it is possible to obtain alias-free spectral estimates far beyond the mean sampling frequency. But the question is, how far? During his tenure under the 1993 NASA-ASEE Summer Faculty Fellowship Program, the author found in his studies of spectral analysis techniques for randomly sampled signals that the spectral estimates can be enhanced or improved up to about 4-5 times the mean sampling frequency by using a suitable…
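
    A minimal sketch of the slotting technique mentioned above: lag products u_i·u_j of the randomly sampled record are accumulated in slots of width Δτ to estimate the autocorrelation, from which a spectrum follows by Fourier transform. The naive O(N²) pairing and the test signal are illustrative; refinements such as local normalization are omitted.

        import numpy as np

        def slotted_autocorrelation(t, u, dtau, n_slots):
            """Slotting estimate of the autocorrelation of randomly sampled
            data; lag products are binned into slots of width dtau."""
            u = u - np.mean(u)
            num = np.zeros(n_slots)
            cnt = np.zeros(n_slots)
            for i in range(len(t)):
                lag = t[i:] - t[i]
                k = np.floor(lag / dtau + 0.5).astype(int)  # nearest slot
                ok = k < n_slots
                np.add.at(num, k[ok], u[i] * u[i:][ok])
                np.add.at(cnt, k[ok], 1)
            return num / np.maximum(cnt, 1)

        rng = np.random.default_rng(0)
        t = np.cumsum(rng.exponential(1e-3, 4000))   # ~1 kHz mean data rate
        u = np.sin(2 * np.pi * 40 * t) + 0.2 * rng.standard_normal(t.size)
        R = slotted_autocorrelation(t, u, dtau=5e-4, n_slots=200)
        # a power spectral estimate then follows from the FFT of R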

  20. Dynamic characteristics of two-stage split herringbone gear trains considering the randomness of internal excitations

    Institute of Scientific and Technical Information of China (English)

    LIAO Yinghua; QIN Datong; LIU Changzhao

    2015-01-01

    According to its configuration, a bearing-gear coupled non-linear dynamic model of the two-stage split herringbone gear trains used in special equipment was presented. The model includes bearing stiffness, mesh stiffness, mesh errors and gear backlash. The Runge-Kutta step-by-step integration method was then used to solve the non-linear differential equations of motion, and the time histories and frequency spectra of the dynamic meshing force and dynamic bearing force were obtained under random mesh stiffness and mesh error excitations. Their statistical characteristics were determined by Monte Carlo simulation in order to analyze the influence of the randomness of mesh stiffness and mesh errors on the dynamic mesh force and dynamic bearing force. The results lay a foundation for the dynamic optimization and dynamic reliability optimization of two-stage split herringbone gear trains.

  1. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method for the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages: a pruning step that detects the scatterer support, and a resolution-enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  2. Sublinear Time Approximate Sum via Uniform Random Sampling

    CERN Document Server

    Fu, Bin; Peng, Zhiyong

    2012-01-01

    We investigate the approximation for computing the sum $a_1+\cdots+a_n$ with an input of a list of nonnegative elements $a_1,\ldots,a_n$. If all elements are in the range $[0,1]$, there is a randomized algorithm that can compute a $(1+\epsilon)$-approximation for the sum problem in time $O\big(\frac{n\log\log n}{\sum_{i=1}^n a_i}\big)$, where $\epsilon$ is a constant in $(0,1)$. Our randomized algorithm is based on uniform random sampling, which selects one element with equal probability from the input list each time. We also prove a lower bound of $\Omega\big(\frac{n}{\sum_{i=1}^n a_i}\big)$, which almost matches the upper bound.
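
    The heart of the upper bound is plain uniform random sampling with scaling; the sketch below uses a crude fixed sample size rather than the paper's adaptive $O(n\log\log n/\sum a_i)$ schedule, which is the part that makes the bound sublinear.

        import random

        def approx_sum(a, eps=0.1, seed=0):
            """Estimate sum(a) for a_i in [0, 1]: draw indices uniformly,
            scale the sample mean by n (fixed sample size for illustration)."""
            rng = random.Random(seed)
            n = len(a)
            m = max(1, int(4 / eps ** 2))        # crude fixed sample size
            s = sum(a[rng.randrange(n)] for _ in range(m))
            return n * s / m

        data = [0.5] * 10_000
        print(approx_sum(data, eps=0.05))        # close to the true sum 5000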

  3. Sampling Random Bioinformatics Puzzles using Adaptive Probability Distributions

    DEFF Research Database (Denmark)

    Have, Christian Theil; Appel, Emil Vincent; Bork-Jensen, Jette

    2016-01-01

    We present a probabilistic logic program to generate an educational puzzle that introduces the basic principles of next generation sequencing, gene finding and the translation of genes to proteins following the central dogma in biology. In the puzzle, a secret "protein word" must be found by assembling DNA from fragments (reads), locating a gene in this sequence and translating the gene to a protein. Sampling using this program generates random instances of the puzzle, but it is possible to constrain the difficulty and to customize the secret protein word. Because of these constraints and the randomness of the generation process, sampling may fail to generate a satisfactory puzzle. To avoid failure we employ a strategy using adaptive probabilities which change in response to previous steps of the generative process, thus minimizing the risk of failure.

  4. Random sampling of lattice paths with constraints, via transportation

    CERN Document Server

    Gerin, Lucas

    2010-01-01

    We discuss a Monte Carlo Markov chain (MCMC) procedure for the random sampling of one-dimensional lattice paths subject to various constraints. We show that an approach inspired by optimal transport allows us to bound the mixing time of the associated Markov chain efficiently. The algorithm is robust and easy to implement, and samples an "almost" uniform path of length $n$ in $n^{3+\varepsilon}$ steps. The bound makes use of a certain contraction property of the Markov chain, and is also used to derive a bound for the running time of Propp-Wilson's CFTP algorithm.
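
    As a concrete instance of MCMC sampling of constrained lattice paths, the sketch below runs an adjacent-transposition chain on nonnegative paths with fixed step counts: proposals that violate the constraint are rejected, and since the proposal is symmetric, the uniform distribution over valid paths is stationary. This generic chain illustrates the setting only; it is not the transport-based construction analyzed in the paper.

        import random

        def mcmc_lattice_path(n_up, n_down, steps, seed=0):
            """Near-uniform nonnegative lattice path with n_up +1 steps and
            n_down -1 steps, via Metropolis moves swapping adjacent steps."""
            rng = random.Random(seed)
            path = [1] * n_up + [-1] * n_down    # a valid starting path
            for _ in range(steps):
                i = rng.randrange(len(path) - 1)
                path[i], path[i + 1] = path[i + 1], path[i]
                h, ok = 0, True
                for s in path:                    # check nonnegativity
                    h += s
                    if h < 0:
                        ok = False
                        break
                if not ok:                        # reject: undo the swap
                    path[i], path[i + 1] = path[i + 1], path[i]
            return path

        print(mcmc_lattice_path(10, 10, steps=10_000)[:8])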

  5. Gas loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, Lee; Bartram, Brian; Dattelbaum, Dana; Lang, John; Morris, John

    2015-06-01

    A novel gas loading system was designed for the specific application of remotely loading high-purity gases into targets for gas-gun driven plate impact experiments. The high-purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas-gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel, with wetted seals made from Kalrez and Teflon. Preliminary testing was completed to ensure proper flow rates and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown. LA-UR-15-20521

  6. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in situ. Then a specifically designed self-compacting grout is introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense and homogeneous, and has in general improved engineering properties and durability. This paper presents results from a study of the effect of silica fume (SF) and superplasticizer admixtures (SP) on the compressive and tensile strength of TSC using various combinations of water-to-cement ratio (w/c) and cement-to-sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples (150 mm × 300 mm) of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, and superplasticizer at a dosage of 2% of the cement weight. Results indicated that both the tensile and compressive strength of TSC can be statistically derived as functions of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, that an increase in the water/cement ratio leads to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of both silica fume and superplasticizers caused a significant increase in strength relative to the control mixes.

  7. Treatment of cadmium dust with two-stage leaching process

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The treatment of cadmium dust with a two-stage leaching process was investigated to replace the existing sulphation roast-leaching processes. The process parameters in the first-stage leaching were basically similar to the neutral leaching in zinc hydrometallurgy. The effects of the process parameters in the second-stage leaching on the extraction of zinc and cadmium were mainly studied. The experimental results indicated that zinc and cadmium could be efficiently recovered from the cadmium dust by the two-stage leaching process. The extraction percentages of zinc and cadmium in two-stage leaching reached 95% and 88%, respectively, under the optimum conditions. The total extraction percentage of Zn and Cd reached 94%.

  8. High magnetostriction parameters for low-temperature sintered cobalt ferrite obtained by two-stage sintering

    Energy Technology Data Exchange (ETDEWEB)

    Khaja Mohaideen, K.; Joy, P.A., E-mail: pa.joy@ncl.res.in

    2014-12-15

    From studies of the magnetostriction characteristics of two-stage sintered polycrystalline CoFe₂O₄ made from nanocrystalline powders, it is found that two-stage sintering at low temperatures is very effective for enhancing the density and for attaining a higher magnetostriction coefficient. The magnetostriction coefficient and strain derivative are further enhanced by magnetic field annealing, and a relatively larger enhancement in the magnetostriction parameters is obtained after magnetic annealing for the samples sintered at lower temperatures, despite the fact that samples sintered at higher temperatures show larger magnetostriction coefficients before annealing. A high magnetostriction coefficient of ∼380 ppm is obtained after field annealing for the sample sintered at 1100 °C, below a magnetic field of 400 kA/m, which is the highest value so far reported at low magnetic fields for sintered polycrystalline cobalt ferrite. - Highlights: • Effect of two-stage sintering on the magnetostriction characteristics of CoFe₂O₄ is studied. • Two-stage sintering is very effective for enhancing the density and the magnetostriction parameters. • Higher magnetostriction for samples sintered at low temperatures and after magnetic field annealing. • Highest reported magnetostriction of 380 ppm at low fields after two-stage, low-temperature sintering.

  9. An environmental sampling model for combining judgment and randomly placed samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Anderson, Kevin K.; Matzke, Brett D.; Sieber, Karl; Shulman, Stanley; Bennett, James; Gillen, M.; Wilson, John E.; Pulsipher, Brent A.

    2007-08-23

    In the event of the release of a lethal agent (such as anthrax) inside a building, law enforcement and public health responders take samples to identify and characterize the contamination. Sample locations may be rapidly chosen based on available incident details and professional judgment. To achieve greater confidence of whether or not a room or zone was contaminated, or to certify that detectable contamination is not present after decontamination, we consider a Bayesian model for combining the information gained from both judgment and randomly placed samples. We investigate the sensitivity of the model to the parameter inputs and make recommendations for its practical use.

  10. LOGISTICS SCHEDULING: ANALYSIS OF TWO-STAGE PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yung-Chia CHANG; Chung-Yee LEE

    2003-01-01

    This paper studies the coordination effects between stages for scheduling problems where decision-making is a two-stage process. Two stages are considered as one system. The system can be a supply chain that links two stages, one stage representing a manufacturer; and the other, a distributor.It also can represent a single manufacturer, while each stage represents a different department responsible for a part of operations. A problem that jointly considers both stages in order to achieve ideal overall system performance is defined as a system problem. In practice, at times, it might not be feasible for the two stages to make coordinated decisions due to (i) the lack of channels that allow decision makers at the two stages to cooperate, and/or (ii) the optimal solution to the system problem is too difficult (or costly) to achieve.Two practical approaches are applied to solve a variant of two-stage logistic scheduling problems. The Forward Approach is defined as a solution procedure by which the first stage of the system problem is solved first, followed by the second stage. Similarly, the Backward Approach is defined as a solution procedure by which the second stage of the system problem is solved prior to solving the first stage. In each approach, two stages are solved sequentially and the solution generated is treated as a heuristic solution with respect to the corresponding system problem. When decision makers at two stages make decisions locally without considering consequences to the entire system,ineffectiveness may result - even when each stage optimally solves its own problem. The trade-off between the time complexity and the solution quality is the main concern. This paper provides the worst-case performance analysis for each approach.

  11. Residential Two-Stage Gas Furnaces - Do They Save Energy?

    Energy Technology Data Exchange (ETDEWEB)

    Lekov, Alex; Franco, Victor; Lutz, James

    2006-05-12

    Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, American Society of Heating Refrigeration and Air-conditioning Engineers (ASHRAE) proposed an update to its test procedure which corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure and to compare test results to research results from field tests. Overall, the DOE test procedure shows a reduction in the total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in the total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.

  12. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    JIANG JianCheng; LI JianTao

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as they would if the other components were known. The established asymptotic results also hold for two particular local M-estimations: local least squares and least absolute deviation estimation. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, the implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages, and at the same time overcome the disadvantages, of local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  14. Sample size in orthodontic randomized controlled trials: are numbers justified?

    Science.gov (United States)

    Koletsi, Despina; Pandis, Nikolaos; Fleming, Padhraig S

    2014-02-01

    Sample size calculations are advocated by the Consolidated Standards of Reporting Trials (CONSORT) group to justify sample sizes in randomized controlled trials (RCTs). This study aimed to analyse the reporting of sample size calculations in trials published as RCTs in orthodontic speciality journals. The performance of sample size calculations was assessed and calculations were verified where possible. Related aspects, including number of authors; parallel, split-mouth, or other design; single- or multi-centre study; region of publication; type of data analysis (intention-to-treat or per-protocol basis); and number of participants recruited and lost to follow-up, were considered. Of 139 RCTs identified, complete sample size calculations were reported in 41 studies (29.5 per cent). Parallel designs were typically adopted (n = 113; 81 per cent), with 80 per cent (n = 111) involving two arms and 16 per cent having three arms. Data analysis was conducted on an intention-to-treat (ITT) basis in a small minority of studies (n = 18; 13 per cent). According to the calculations presented, overall, a median of 46 participants were required to demonstrate sufficient power to highlight meaningful differences (typically at a power of 80 per cent). The median number of participants recruited was 60, with a median of 4 participants lost to follow-up. Our findings indicate good agreement between projected numbers required and those verified (median discrepancy: 5.3 per cent), although only a minority of trials (29.5 per cent) could be examined. Although sample size calculations are often reported in trials published as RCTs in orthodontic speciality journals, presentation is suboptimal and in need of significant improvement.
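
    For reference, the calculation CONSORT expects reports to justify is, for a two-arm parallel design with a continuous outcome, usually the standard formula below; the effect size and standard deviation plugged in are illustrative.

        from math import ceil
        from scipy.stats import norm

        def n_per_arm(delta, sd, alpha=0.05, power=0.8):
            """n = 2 * (z_{1-alpha/2} + z_{power})^2 * sd^2 / delta^2,
            the usual two-arm sample size for a continuous outcome."""
            z_a = norm.ppf(1 - alpha / 2)
            z_b = norm.ppf(power)
            return ceil(2 * (z_a + z_b) ** 2 * sd ** 2 / delta ** 2)

        # e.g. detect a 1.0 mm difference with sd 1.5 mm:
        print(n_per_arm(delta=1.0, sd=1.5))   # 36 participants per arm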

  15. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    Science.gov (United States)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model the stochastic information exchange among decentralized controllers, to name just a few applications. A practical suboptimal controller is proposed with the nice property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this restriction does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.

  16. The CSS and The Two-Staged Methods for Parameter Estimation in SARFIMA Models

    Directory of Open Access Journals (Sweden)

    Erol Egrioglu

    2011-01-01

    Seasonal Autoregressive Fractionally Integrated Moving Average (SARFIMA) models are used in the analysis of seasonal time series with long-memory dependence. Two methods, the conditional sum of squares (CSS) method and the two-staged method introduced by Hosking (1984), are proposed to estimate the parameters of SARFIMA models. However, no simulation study comparing them has been conducted in the literature, so it is not known how these methods behave under different parameter settings and sample sizes in SARFIMA models. The aim of this study is to show the behaviour of these methods by a simulation study. Based on its results, the advantages and disadvantages of both methods under different parameter settings and sample sizes are discussed by comparing the root mean square error (RMSE) obtained by the CSS and two-staged methods. The comparison shows that the CSS method produces better results than the two-staged method.

  17. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    As a popular simulation of photon propagation in turbid media, the Monte Carlo (MC) method suffers mainly from its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple scattering steps into a single-step process through random table querying, thus greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast version of the conventional MC simulation of photon propagation. It retains the flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of the fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
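
    The core table-querying idea can be illustrated in a few lines. This is a sketch under simplified assumptions (an exponential step-length law with a made-up attenuation coefficient), not the authors' TBRS implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
mu_t = 10.0  # illustrative total attenuation coefficient (1/cm)

# Precompute a table of step lengths s = -ln(u)/mu_t on a fine grid of u.
u_grid = (np.arange(10_000) + 0.5) / 10_000
step_table = -np.log(u_grid) / mu_t

def sample_steps(n):
    """Draw n photon step lengths by random table querying instead of
    evaluating -log(u)/mu_t afresh for every photon."""
    idx = rng.integers(0, step_table.size, size=n)
    return step_table[idx]

steps = sample_steps(1_000_000)
print(steps.mean(), 1 / mu_t)  # table-sampled mean vs analytic mean free path
```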

  18. Random sampling versus exact enumeration of attractors in random Boolean networks

    Energy Technology Data Exchange (ETDEWEB)

    Berdahl, Andrew; Shreim, Amer; Sood, Vishal; Paczuski, Maya; Davidsen, Joern [Complexity Science Group, Department of Physics and Astronomy, University of Calgary, Alberta (Canada)], E-mail: aberdahl@phas.ucalgary.ca

    2009-04-15

    We clarify the effect different sampling methods and weighting schemes have on the statistics of attractors in ensembles of random Boolean networks (RBNs). We directly measure the cycle lengths of attractors and the sizes of basins of attraction in RBNs using exact enumeration of the state space. In general, the distribution of attractor lengths differs markedly from that obtained by randomly choosing an initial state and following the dynamics to reach an attractor. Our results indicate that the former distribution decays as a power law with exponent 1 for all connectivities K>1 in the infinite system size limit. In contrast, the latter distribution decays as a power law only for K=2. This is because the mean basin size grows linearly with the attractor cycle length for K>2, and is statistically independent of the cycle length for K=2. We also find that the histograms of basin sizes are strongly peaked at integer multiples of powers of two for K<3.
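
    A minimal sketch of the two approaches compared in this record, for a toy RBN small enough to enumerate exactly (the network size, wiring, and truth tables below are illustrative):

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
N, K = 10, 2  # small network, so all 2^N states can be enumerated

inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
tables = rng.integers(0, 2, size=(N, 2 ** K))  # a random truth table per node

def step(s):
    """Synchronous update of the state encoded as an N-bit integer."""
    nxt = 0
    for i in range(N):
        addr = 0
        for j in inputs[i]:
            addr = (addr << 1) | ((s >> int(j)) & 1)
        nxt |= int(tables[i, addr]) << i
    return nxt

# Exact enumeration: follow every state to its attractor, label basins.
attractor_of = {}
for s0 in range(2 ** N):
    path, s = [], s0
    while s not in attractor_of and s not in path:
        path.append(s)
        s = step(s)
    label = attractor_of.get(s, s)  # cycle-entry state labels the attractor
    for p in path:
        attractor_of[p] = label

basin_sizes = Counter(attractor_of.values())
print("attractors:", len(basin_sizes), "basin sizes:", sorted(basin_sizes.values()))

# Random sampling of initial states instead weights each attractor by basin size.
samples = [attractor_of[int(rng.integers(0, 2 ** N))] for _ in range(1000)]
print("sampled attractor frequencies:", Counter(samples).most_common(3))
```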

  19. STARS A Two Stage High Gain Harmonic Generation FEL Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    M. Abo-Bakr; W. Anders; J. Bahrdt; P. Budz; K.B. Buerkmann-Gehrlein; O. Dressler; H.A. Duerr; V. Duerr; W. Eberhardt; S. Eisebitt; J. Feikes; R. Follath; A. Gaupp; R. Goergen; K. Goldammer; S.C. Hessler; K. Holldack; E. Jaeschke; Thorsten Kamps; S. Klauke; J. Knobloch; O. Kugeler; B.C. Kuske; P. Kuske; A. Meseck; R. Mitzner; R. Mueller; M. Neeb; A. Neumann; K. Ott; D. Pfluckhahn; T. Quast; M. Scheer; Th. Schroeter; M. Schuster; F. Senf; G. Wuestefeld; D. Kramer; Frank Marhauser

    2007-08-01

    BESSY is proposing a demonstration facility, called STARS, for a two-stage high-gain harmonic generation free electron laser (HGHG FEL). STARS is planned for lasing in the wavelength range 40 to 70 nm, requiring a beam energy of 325 MeV. The facility consists of a normal conducting gun, three superconducting TESLA-type acceleration modules modified for CW operation, a single-stage bunch compressor and finally a two-stage HGHG cascaded FEL. This paper describes the facility layout and the rationale behind the operation parameters.

  20. Dynamic Modelling of the Two-stage Gasification Process

    DEFF Research Database (Denmark)

    Gøbel, Benny; Henriksen, Ulrik B.; Houbak, Niels

    1999-01-01

    A two-stage gasification pilot plant was designed and built as a co-operative project between the Technical University of Denmark and the company REKA. A dynamic, mathematical model of the two-stage pilot plant was developed to serve as a tool for optimising the process and the operating conditions of the gasification plant. The model consists of modules corresponding to the different elements in the plant. The modules are coupled together through mass and heat conservation. Results from the model are compared with experimental data obtained during steady and unsteady operation of the pilot plant. A good...

  1. Phase Transitions in Sampling Algorithms and the Underlying Random Structures

    Science.gov (United States)

    Randall, Dana

    Sampling algorithms based on Markov chains arise in many areas of computing, engineering and science. The idea is to perform a random walk among the elements of a large state space so that samples chosen from the stationary distribution are useful for the application. In order to get reliable results, we require the chain to be rapidly mixing, or quickly converging to equilibrium. For example, to sample independent sets in a given graph G, the so-called hard-core lattice gas model, we can start at any independent set and repeatedly add or remove a single vertex (if allowed). By defining the transition probabilities of these moves appropriately, we can ensure that the chain will converge to a useful distribution over the state space Ω. For instance, the Gibbs (or Boltzmann) distribution, parameterized by Λ > 0, is defined so that π(I) = Λ^{|I|}/Z, where Z = Σ_{J ∈ Ω} Λ^{|J|} is the normalizing constant known as the partition function. An interesting phenomenon occurs as Λ is varied. For small values of Λ, local Markov chains converge quickly to stationarity, while for large values, they are prohibitively slow. To see why, imagine the underlying graph G is a region of the Cartesian lattice. Large independent sets will dominate the stationary distribution π when Λ is sufficiently large, and yet it will take a very long time to move from an independent set lying mostly on the odd sublattice to one that is mostly even. This phenomenon is well known in the statistical physics community, and is characterized by a phase transition in the underlying model.

  2. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach.

    Science.gov (United States)

    Tan, Robin; Perkowski, Marek

    2017-02-20

    Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition in remote access control systems where enhanced data security is demanded. In this study, we propose a new algorithm consisting of a two-stage classifier that combines a random forest and a wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a subject verification accuracy of 99.52%, better than the 98.33% accuracy of the random forest alone and the 96.31% accuracy of the wavelet distance measure alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems.
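
    A hedged sketch of the two-stage idea: a random-forest score decides when it is confident, and a distance to the enrolled template breaks the ties. The features, thresholds, and the plain Euclidean distance below are stand-ins for the paper's ECG features and wavelet distance:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Toy stand-ins for beat-level ECG feature vectors (shapes are hypothetical).
X_train = rng.normal(size=(200, 32))
y_train = rng.integers(0, 2, size=200)   # 1 = enrolled subject, 0 = impostor
X_test = rng.normal(size=(50, 32))
template = X_train[y_train == 1].mean(axis=0)  # enrolled subject's template

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

def verify(x, p_hi=0.8, p_lo=0.2, dist_thresh=6.0):
    """Stage 1: accept/reject when the forest is confident.
    Stage 2: otherwise fall back to a distance measure to the template."""
    p = rf.predict_proba(x.reshape(1, -1))[0, 1]
    if p >= p_hi:
        return True
    if p <= p_lo:
        return False
    return np.linalg.norm(x - template) < dist_thresh

decisions = [verify(x) for x in X_test]
print(sum(decisions), "of", len(decisions), "accepted")
```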

  3. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of the risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of the fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and its effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in the risky security increases as the risk-free return decreases, and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  4. Efficient Two-Stage Group Testing Algorithms for DNA Screening

    CERN Document Server

    Huber, Michael

    2011-01-01

    Group testing algorithms are very useful tools for DNA library screening. Building on recent work by Levenshtein (2003) and Tonchev (2008), we construct in this paper new infinite classes of combinatorial structures whose existence is essential for attaining the minimum number of individual tests at the second stage of a two-stage disjunctive testing algorithm.
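
    The first stage of such an algorithm is easy to sketch; the combinatorial designs in the paper optimize the second stage, which the naive version below performs by exhaustive individual testing (prevalence and group size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
population = rng.random(1000) < 0.02  # 2% positive items

def two_stage_group_testing(items, group_size=16):
    """Classic two-stage (Dorfman-type) screening: pool items, test each pool,
    then test individually only inside the positive pools."""
    tests, positives = 0, []
    for start in range(0, len(items), group_size):
        group = items[start:start + group_size]
        tests += 1                      # one pooled test
        if group.any():                 # pool is positive
            tests += len(group)         # second stage: individual tests
            positives.extend(np.flatnonzero(group) + start)
    return tests, positives

tests, found = two_stage_group_testing(population)
print(tests, "tests instead of", population.size)
```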

  5. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Based on more than 15 years of research and practical experience, the Technical University of Denmark (DTU) and COWI Consulting Engineers and Planners AS present the two-stage gasification process, a concept for high-efficiency gasification of biomass producing negligible amounts of tars. In the two-stage gasification concept, the pyrolysis and the gasification processes are physically separated. The volatiles from the pyrolysis are partially oxidized, and the hot gases are used as gasification medium to gasify the char. Hot gases from the gasifier and a combustion unit can be used for drying [...], and a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed bed gasifier. This design is well proven during more than 1000 hours of testing with various...

  6. FREE GRAFT TWO-STAGE URETHROPLASTY FOR HYPOSPADIAS REPAIR

    Institute of Scientific and Technical Information of China (English)

    Zhong-jin Yue; Ling-jun Zuo; Jia-ji Wang; Gan-ping Zhong; Jian-ming Duan; Zhi-ping Wang; Da-shan Qin

    2005-01-01

    Objective To evaluate the effectiveness of free graft transplantation two-stage urethroplasty for hypospadias repair. Methods Fifty-eight cases with different types of hypospadias, including 10 subcoronal, 36 penile shaft, 9 scrotal, and 3 perineal, were treated with free full-thickness skin graft or (and) buccal mucosal graft transplantation two-stage urethroplasty. Of the 58 cases, 45 were new cases and 13 had a history of previous failed surgeries. The operative procedure included two stages: the first stage is to correct penile curvature (chordee), prepare the transplanting bed, harvest and prepare the full-thickness skin graft or buccal mucosal graft, and perform graft transplantation. The second stage is to complete urethroplasty and glanuloplasty. Results After the first-stage operation, 56 of 58 cases (96.6%) were successful with grafts healing well; the other 2 foreskin grafts became gangrenous. After the second-stage operation on 56 cases, 5 cases failed with newly formed urethras opened due to infection, 8 cases had fistulas, and 43 cases (76.8%) healed well. Conclusions Free graft transplantation two-stage urethroplasty for hypospadias repair is an effective treatment with broad indications, a comparatively high success rate, fewer complications and good cosmetic results, suitable for repair of various types of hypospadias.

  7. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  8. The construction of customized two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1990-01-01

    In this paper mixed integer linear programming models for customizing two-stage tests are given. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. It is not difficult to modify the models to make them use

  9. A Two-Stage Compression Method for the Fault Detection of Roller Bearings

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2016-01-01

    Data measurement for roller bearing condition monitoring is carried out based on the Shannon sampling theorem, resulting in massive amounts of redundant information, which leads to a big-data problem that increases the difficulty of roller bearing fault diagnosis. To overcome this shortcoming, a two-stage compressed fault detection strategy is proposed in this study. First, a sliding window is utilized to divide the original signals into several segments, and a selected symptom parameter is employed to represent each segment; through this, a symptom parameter wave is obtained and the raw vibration signals are compressed to a certain level while the fault information remains. Second, a fault detection scheme based on compressed sensing is applied to extract the fault features, compressing the symptom parameter wave thoroughly with a random matrix called the measurement matrix. The experimental results validate the effectiveness of the proposed method, and a comparison of the three selected symptom parameters is also presented in this paper.
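
    A compact sketch of the two stages on a synthetic signal. The recovery step of compressed sensing is omitted, and the window length, symptom parameter (RMS), and Gaussian measurement matrix are illustrative choices, not necessarily the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated bearing vibration: noise plus a periodic fault impulse train.
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
signal = rng.normal(0, 0.1, t.size)
signal[::125] += 3.0  # 80 Hz impulses standing in for a localized fault

# Stage 1: sliding-window symptom parameter (RMS per segment).
win = 100
segments = signal.reshape(-1, win)
symptom_wave = np.sqrt((segments ** 2).mean(axis=1))  # length 100

# Stage 2: compressed-sensing-style measurement with a random matrix.
m = 25  # number of compressed measurements
Phi = rng.normal(size=(m, symptom_wave.size)) / np.sqrt(m)
y = Phi @ symptom_wave
print(signal.size, "->", symptom_wave.size, "->", y.size)
```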

  10. Sampling Strategies for Fast Updating of Gaussian Markov Random Fields

    OpenAIRE

    Brown, D. Andrew; McMahan, Christopher S.

    2017-01-01

    Gaussian Markov random fields (GMRFs) are popular for modeling temporal or spatial dependence in large areal datasets due to their ease of interpretation and computational convenience afforded by conditional independence and their sparse precision matrices needed for random variable generation. Using such models inside a Markov chain Monte Carlo algorithm requires repeatedly simulating random fields. This is a nontrivial issue, especially when the full conditional precision matrix depends on ...
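
    The standard trick referred to here: when the precision matrix Q is available, a draw from N(0, Q^{-1}) needs one factorization and one triangular solve. A dense toy sketch on a chain graph (CAR-type precision; all parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200

# Precision matrix of a first-order (CAR-type) GMRF on a chain graph:
# Q = tau * (D - rho * W), with W the adjacency and D its degree matrix.
tau, rho = 1.0, 0.95
W = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
D = np.diag(W.sum(axis=1))
Q = tau * (D - rho * W) + 1e-6 * np.eye(n)  # jitter for positive definiteness

# Draw x ~ N(0, Q^{-1}): factor Q = L L^T, then solve L^T x = z, z ~ N(0, I).
L = np.linalg.cholesky(Q)
z = rng.standard_normal(n)
x = np.linalg.solve(L.T, z)
print(x[:5])
```

    In large areal applications the same two steps are done with a sparse Cholesky factorization, which is where the computational convenience of the sparse precision matrix comes from.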

  11. Square Kilometre Array station configuration using two-stage beamforming

    CERN Document Server

    Jiwani, Aziz; Razavi-Ghods, Nima; Hall, Peter J; Padhi, Shantanu; de Vaate, Jan Geralt bij

    2012-01-01

    The lowest frequency band (70 - 450 MHz) of the Square Kilometre Array will consist of sparse aperture arrays grouped into geographically-localised patches, or stations. Signals from thousands of antennas in each station will be beamformed to produce station beams which form the inputs for the central correlator. Two-stage beamforming within stations can reduce SKA-low signal processing load and costs, but has not been previously explored for the irregular station layouts now favoured in radio astronomy arrays. This paper illustrates the effects of two-stage beamforming on sidelobes and effective area, for two representative station layouts (regular and irregular gridded tile on an irregular station). The performance is compared with a single-stage, irregular station. The inner sidelobe levels do not change significantly between layouts, but the more distant sidelobes are affected by the tile layouts; regular tile creates diffuse, but regular, grating lobes. With very sparse arrays, the station effective area...

  12. Two stage sorption type cryogenic refrigerator including heat regeneration system

    Science.gov (United States)

    Jones, Jack A.; Wen, Liang-Chi; Bard, Steven

    1989-01-01

    A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber providing a high pressure flow of a first refrigerant for the lower stage refrigeration cycle within a second pressurization chamber providing a high pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

  13. Two-stage approach to full Chinese parsing

    Institute of Scientific and Technical Information of China (English)

    Cao Hailong; Zhao Tiejun; Yang Muyun; Li Sheng

    2005-01-01

    Natural language parsing is a task of great importance and extreme difficulty. In this paper, we present a full Chinese parsing system based on a two-stage approach. Rather than identifying all phrases by a uniform model, we utilize a divide-and-conquer strategy. We propose an effective and fast method based on a Markov model to identify the base phrases. Then we make the first attempt to extend one of the best English parsing models, i.e. the head-driven model, to recognize Chinese complex phrases. Our two-stage approach is superior to the uniform approach in two aspects. First, it creates synergy between the Markov model and the head-driven model. Second, it reduces the complexity of full Chinese parsing and makes the parsing system space and time efficient. We evaluate our approach in PARSEVAL measures on the open test set; the parsing system performs at 87.53% precision and 87.95% recall.

  14. Income and Poverty across SMSAs: A Two-Stage Analysis

    OpenAIRE

    1993-01-01

    Two popular explanations of urban poverty are the "welfare-disincentive" and "urban-deindustrialization" theories. Using cross-sectional Census data, we develop a two-stage model to predict an SMSA's median family income and poverty rate. The model allows the city's welfare level and industrial structure to affect its median family income and poverty rate directly. It also allows welfare and industrial structure to affect income and poverty indirectly, through their effects on family structure...

  15. A Two-stage Polynomial Method for Spectrum Emissivity Modeling

    OpenAIRE

    Qiu, Qirong; Liu, Shi; Teng, Jing; Yan, Yong

    2015-01-01

    Spectral emissivity is key to temperature measurement by radiation methods, but it is not easy to determine in a combustion environment, due to the interrelated influence of the temperature and wavelength of the radiation. In multi-wavelength radiation thermometry, knowing the spectral emissivity of the material is a prerequisite. However, in many circumstances such a property is a complex function of temperature and wavelength, and reliable models are yet to be sought. In this study, a two-stage...

  16. Forty-five-degree two-stage venous cannula: advantages over standard two-stage venous cannulation.

    Science.gov (United States)

    Lawrence, D R; Desai, J B

    1997-01-01

    We present a 45-degree two-stage venous cannula that confers advantage to the surgeon using cardiopulmonary bypass. This cannula exits the mediastinum under the transverse bar of the sternal retractor, leaving the rostral end of the sternal incision free of apparatus. It allows for lifting of the heart with minimal effect on venous return and does not interfere with the radially laid out sutures of an aortic valve replacement using an interrupted suture technique.

  17. Empirical power and sample size calculations for cluster-randomized and cluster-randomized crossover studies.

    Science.gov (United States)

    Reich, Nicholas G; Myers, Jessica A; Obeng, Daniel; Milstone, Aaron M; Perl, Trish M

    2012-01-01

    In recent years, the number of studies using a cluster-randomized design has grown dramatically. In addition, the cluster-randomized crossover design has been touted as a methodological advance that can increase the efficiency of cluster-randomized studies in certain situations. While the cluster-randomized crossover trial has become a popular tool, standards of design, analysis, reporting and implementation have not been established for this emergent design. We address one particular aspect of cluster-randomized and cluster-randomized crossover trial design: estimating statistical power. We present a general framework for estimating power via simulation in cluster-randomized studies with or without one or more crossover periods. We have implemented this framework in the clusterPower software package for R, freely available online from the Comprehensive R Archive Network. Our simulation framework is easy to implement and users may customize the methods used for data analysis. We give four examples of using the software in practice. The clusterPower package could play an important role in the design of future cluster-randomized and cluster-randomized crossover studies. This work is the first to establish a universal method for calculating power for both cluster-randomized and cluster-randomized crossover trials. More research is needed to develop standardized and recommended methodology for cluster-randomized crossover studies.
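
    clusterPower itself is an R package; as a gloss on the simulation idea it implements, here is a minimal Python analogue for a two-arm cluster-randomized trial analysed at the cluster level (all parameter values illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def simulated_power(n_clusters=20, cluster_size=30, effect=0.3,
                    icc=0.05, sims=2000, alpha=0.05):
    """Empirical power for a two-arm cluster-randomized trial, analysing
    cluster means (a simple valid analysis under equal cluster sizes)."""
    sigma_b = np.sqrt(icc)       # between-cluster SD (total variance 1)
    sigma_w = np.sqrt(1 - icc)   # within-cluster SD
    hits = 0
    for _ in range(sims):
        arm = np.repeat([0, 1], n_clusters // 2)
        cluster_means = (effect * arm
                         + rng.normal(0, sigma_b, n_clusters)
                         + rng.normal(0, sigma_w / np.sqrt(cluster_size), n_clusters))
        _, p = stats.ttest_ind(cluster_means[arm == 1], cluster_means[arm == 0])
        hits += p < alpha
    return hits / sims

print(simulated_power())
```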

  18. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa.

    Science.gov (United States)

    Yadavalli, Rajasri; Heggers, Goutham Rao Venkata Naga

    2013-12-19

    Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern besides deteriorating their water quality. While physico-chemical treatment is the common mode of treatment, immobilized microalgae can be employed to treat the high organic content, offering numerous benefits along with waste water treatment. A novel, low-cost, two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. While NH4+-N was completely removed, 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in zebra fish, used as a model organism, at the end of a 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer on rice seeds, and a 30% increase in the length of root and shoot was observed after the addition of biomass to the rice plants. We conclude that the two-stage treatment of dairy effluent is highly effective in removal of BOD and COD as well as nutrients like nitrates and phosphates. The treatment also allows the treated waste water to be discharged safely into receiving water bodies, since it is non-toxic for aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and it offers great potential as a biofertilizer.

  19. Two-stage series array SQUID amplifier for space applications

    Science.gov (United States)

    Tuttle, J. G.; DiPirro, M. J.; Shirron, P. J.; Welty, R. P.; Radparvar, M.

    We present test results for a two-stage integrated SQUID amplifier which uses a series array of d.c. SQUIDs to amplify the signal from a single input SQUID. The device was developed by Welty and Martinis at NIST, and recent versions have been manufactured by HYPRES, Inc. Shielding and filtering techniques were employed during the testing to minimize the external noise. An energy resolution of 300 ℏ was demonstrated using a d.c. excitation at frequencies above 1 kHz, and better than 500 ℏ resolution was typical down to 300 Hz.

  20. Two-Stage Aggregate Formation via Streams in Myxobacteria

    Science.gov (United States)

    Alber, Mark; Kiskowski, Maria; Jiang, Yi

    2005-03-01

    In response to adverse conditions, myxobacteria form aggregates which develop into fruiting bodies. We model myxobacteria aggregation with a lattice cell model based entirely on short-range (non-chemotactic) cell-cell interactions. Local rules result in a two-stage process of aggregation mediated by transient streams. Aggregates resemble those observed in experiment and are stable against even very large perturbations. Noise in individual cell behavior increases the effects of streams and results in larger, more stable aggregates. Phys. Rev. Lett. 93: 068301 (2004).

  1. Straw Gasification in a Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Additive-prepared straw pellets were gasified in the 100 kW two-stage gasifier at the Department of Mechanical Engineering of the Technical University of Denmark (DTU). The fixed bed temperature range was 800-1000°C. In order to avoid bed sintering, as observed earlier with straw gasification [...], the ash residues were examined after the test. No agglomeration or sintering was observed in the ash residues. The tar content was measured both by the solid phase amino adsorption (SPA) method and by cold trapping (Petersen method). Both showed low tar contents (~42 mg/Nm3 without gas cleaning). The particle content...

  2. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  3. Two-Stage Eagle Strategy with Differential Evolution

    CERN Document Server

    Yang, Xin-She

    2012-01-01

    The efficiency of an optimization process is largely determined by the search algorithm and its fundamental characteristics. In a given optimization, a single type of algorithm is used in most applications. In this paper, we investigate the Eagle Strategy recently developed for global optimization, which uses a two-stage strategy combining two different algorithms to improve the overall search efficiency. We discuss this strategy with differential evolution and then evaluate their performance by solving real-world optimization problems such as pressure vessel and speed reducer design. Results suggest that we can reduce the computing effort by a factor of up to 10 in many applications.

  4. Aiming for a representative sample: Simulating random versus purposive strategies for hospital selection

    NARCIS (Netherlands)

    Hoeven, van Loan R.; Janssen, Mart P.; Roes, Kit C.B.; Koffijberg, Hendrik

    2015-01-01

    Background A ubiquitous issue in research is that of selecting a representative sample from the study population. While random sampling strategies are the gold standard, in practice, random sampling of participants is not always feasible nor necessarily the optimal choice. In our case, a selection m

  6. Sample size calculations for 3-level cluster randomized trials

    NARCIS (Netherlands)

    Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.

    2008-01-01

    BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health

  10. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  11. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the ROP problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  12. Two-Stage Heuristic Algorithm for Aircraft Recovery Problem

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2017-01-01

    This study focuses on the aircraft recovery problem (ARP). In real-life operations, disruptions cause schedule failures and great losses for airlines. Therefore, the main objective of the aircraft recovery problem is to minimize the total recovery cost and solve the problem within reasonable runtimes. An aircraft recovery model (ARM) is proposed herein to formulate the ARP, using feasible lines of flights as the basic variables in the model. We define a feasible line of flights (LOF) as a sequence of flights flown by an aircraft within one day. The number of LOFs grows exponentially with the number of flights. Hence, a two-stage heuristic is proposed to reduce the problem scale. The algorithm integrates a heuristic scoring procedure with an aggregated aircraft recovery model (AARM) to preselect LOFs. The approach is tested on five real-life test scenarios. The computational results show that the proposed model provides a good formulation of the problem and can be solved within reasonable runtimes with the proposed methodology. The two-stage heuristic significantly reduces the number of LOFs after each stage and finally reduces the number of variables and constraints in the aircraft recovery model.

  13. Is Knowledge Random? Introducing Sampling and Bias through Outdoor Inquiry

    Science.gov (United States)

    Stier, Sam

    2010-01-01

    Sampling, very generally, is the process of learning about something by selecting and assessing representative parts of the population or object of interest. In the inquiry activity described here, students learned about sampling techniques as they estimated the number of trees greater than 12 cm dbh (diameter at breast height) in a wooded, discrete area…
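
    The estimation behind the activity is a one-liner: scale the mean plot density up to the full area. A sketch with made-up plot counts and areas:

```python
import numpy as np

# Hypothetical survey: a 10,000 m^2 woodlot sampled with ten 100 m^2 plots.
area_total, area_plot = 10_000, 100
plot_counts = np.array([3, 5, 2, 4, 6, 3, 4, 5, 2, 4])  # trees > 12 cm dbh per plot

density = plot_counts.mean() / area_plot   # trees per m^2
estimate = density * area_total            # scale up to the whole area
se = plot_counts.std(ddof=1) / np.sqrt(plot_counts.size) / area_plot * area_total
print(f"estimated trees: {estimate:.0f} +/- {1.96 * se:.0f}")
```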

  14. Detecting Treatment Effects with Small Samples: The Power of Some Tests under the Randomization Model

    Science.gov (United States)

    Keller, Bryan

    2012-01-01

    Randomization tests are often recommended when parametric assumptions may be violated because they require no distributional or random sampling assumptions in order to be valid. In addition to being exact, a randomization test may also be more powerful than its parametric counterpart. This was demonstrated in a simulation study which examined the…
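
    A minimal randomization (permutation) test of a difference in means, of the kind whose power the study examines; the data below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

treat = np.array([12.1, 14.3, 11.8, 15.2, 13.5])
ctrl = np.array([10.9, 11.2, 12.0, 10.4, 11.7])
observed = treat.mean() - ctrl.mean()

# Re-randomize group labels many times; the p-value is the share of
# permutations with a difference at least as extreme as observed.
pooled = np.concatenate([treat, ctrl])
n_treat, reps = treat.size, 10_000
count = 0
for _ in range(reps):
    perm = rng.permutation(pooled)
    diff = perm[:n_treat].mean() - perm[n_treat:].mean()
    count += abs(diff) >= abs(observed)
print(f"randomization p-value: {count / reps:.4f}")
```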

  15. A Two-Stage Assembly-Type Flowshop Scheduling Problem for Minimizing Total Tardiness

    Directory of Open Access Journals (Sweden)

    Ju-Yong Lee

    2016-01-01

    This research considers a two-stage assembly-type flowshop scheduling problem with the objective of minimizing the total tardiness. The first stage consists of two independent machines, and the second stage consists of a single machine. Two types of components are fabricated in the first stage, and then they are assembled in the second stage. Dominance properties and lower bounds are developed, and a branch and bound algorithm is presented that uses these properties and lower bounds as well as an upper bound obtained from a heuristic algorithm. The algorithm's performance is evaluated in a series of computational experiments on randomly generated instances, and the results are reported.

  16. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through a part-based HOG [...] of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the part-based HOG approach, tracking based on a specific optimized feature, and porting to a real prototype.

  17. Laparoscopic management of a two staged gall bladdertorsion

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    Gall bladder torsion (GBT) is a relatively uncommon entity and rarely diagnosed preoperatively. A constant factor in all occurrences of GBT is a freely mobile gall bladder due to congenital or acquired anomalies. GBT is commonly observed in elderly white females. We report a 77-year-old Caucasian lady who was originally diagnosed with gall bladder perforation but was eventually found to have a two-staged torsion of the gall bladder with twisting of the Riedel's lobe (part of a tongue-like projection of liver segment 4A). This combination, to the best of our knowledge, has not been reported in the literature. We performed laparoscopic cholecystectomy and she had an uneventful postoperative period. GBT may create a diagnostic dilemma in the context of acute cholecystitis. Timely diagnosis and intervention are necessary, with extra care while operating, as the anatomy is generally distorted. The fundus-first approach can be useful due to the altered anatomy in the region of Calot's triangle. Laparoscopic cholecystectomy has the benefit of early recovery.

  18. Lightweight Concrete Produced Using a Two-Stage Casting Process

    Directory of Open Access Journals (Sweden)

    Jin Young Yoon

    2015-03-01

    The type of lightweight aggregate and its volume fraction in a mix determine the density of lightweight concrete. Minimizing the density obviously requires a higher volume fraction, but this usually causes aggregate segregation in a conventional mixing process. This paper proposes a two-stage casting process to produce lightweight concrete. This process involves placing lightweight aggregates in a frame and then filling the remaining interstitial voids with cementitious grout. The casting process results in the lowest density of lightweight concrete, which consequently has low compressive strength. Irregularly shaped aggregates compensate for this weak point in terms of strength, while round-shaped aggregates provide a strength of 20 MPa. Therefore, the proposed casting process can be applied to manufacturing non-structural elements and structural composites requiring a very low density and a strength of at most 20 MPa.

  19. TWO-STAGE OCCLUDED OBJECT RECOGNITION METHOD FOR MICROASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    WANG Huaming; ZHU Jianying

    2007-01-01

    A two-stage object recognition algorithm robust to occlusion is presented for microassembly. Coarse localization determines whether the template is in the image and approximately where it is, and fine localization gives its accurate position. In coarse localization, a local feature, which is invariant to translation, rotation and occlusion, is used to form signatures. By comparing the signature of the template with that of the image, an approximate transformation parameter from template to image is obtained, which is used as the initial parameter value for fine localization. An objective function, which is a function of the transformation parameter, is constructed in fine localization and minimized to achieve sub-pixel localization accuracy. The occluded pixels are not taken into account in the objective function, so the localization accuracy is not influenced by the occlusion.

  20. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The refrigerant in the compression cycle is applied within the temperature limits of the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, so the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, and some experimental results of the adsorption-compression two-stage cycle, powered with solar collectors, are shown. The high-temperature cycle is an adsorption system; the low-temperature cycle is a compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low-temperature cycle and for the whole system.

  1. Two Stage Assessment of Thermal Hazard in An Underground Mine

    Science.gov (United States)

    Drenda, Jan; Sułkowski, Józef; Pach, Grzegorz; Różański, Zenon; Wrona, Paweł

    2016-06-01

    The results of research into the application of selected thermal indices of men's work and climate indices in a two-stage assessment of climatic work conditions in underground mines are presented in this article. The difference between these two kinds of indices was pointed out during the project entitled "The recruiting requirements for miners working in hot underground mine environments", coordinated by the Institute of Mining Technologies at Silesian University of Technology as part of the Polish strategic project "Improvement of safety in mines", financed by the National Centre of Research and Development. Climate indices are based only on physical parameters of air and their measurements. Thermal indices include additional factors which are strictly connected with work, e.g. thermal resistance of clothing, kind of work, etc. Special emphasis has been put on the following indices: the substitute Silesian temperature (TS), which is considered a climate index, and the thermal discomfort index (δ), which belongs to the thermal indices group. The possibility of a two-stage application of these indices has been considered (preliminary and detailed estimation). The examples show that applying the detailed, thermal-index estimation can avoid additional technical solutions that the climate index alone would require to reduce the thermal hazard at particular workplaces. The threshold limit value for TS has been set based on these results. It was shown that below TS = 24°C it is not necessary to perform the detailed estimation.

  2. Applicability and intrarespondent reliability of the pediatric evaluation of disability inventory in a random Danish sample

    DEFF Research Database (Denmark)

    Stahlhut, Michelle; Christensen, Jette; Aadahl, Mette

    2010-01-01

    To examine the applicability of US reference data from the Pediatric Evaluation of Disability Inventory (PEDI) in a random Danish sample and to assess intrarespondent reliability.

  3. STATISTICAL LANDMARKS AND PRACTICAL ISSUES REGARDING THE USE OF SIMPLE RANDOM SAMPLING IN MARKET RESEARCHES

    Directory of Open Access Journals (Sweden)

    CODRUŢA DURA

    2010-01-01

    The sample represents a particular segment of the statistical population chosen to represent it as a whole. The representativeness of the sample determines the accuracy of estimations made on the basis of calculating the research indicators and the inferential statistics. The method of random sampling is part of the probabilistic methods which can be used within marketing research, and it is characterized by the fact that it imposes the requirement that each unit belonging to the statistical population should have an equal chance of being selected for the sampling process. When simple random sampling is meant to be rigorously put into practice, it is recommended to use the technique of random number tables in order to configure the sample which will provide the information that the marketer needs. The paper also details the practical procedure implemented in order to create a sample for a marketing research by generating random numbers using the facilities offered by Microsoft Excel.
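
    The random-number-table procedure described here maps directly onto any seeded pseudo-random generator. A Python equivalent of the Excel-based selection (frame size and sample size are illustrative):

```python
import random

# Hypothetical sampling frame of 5,000 units, numbered 1..5000.
frame = list(range(1, 5001))

random.seed(42)                       # reproducible, like a fixed number table
sample = random.sample(frame, k=200)  # each unit has an equal selection chance
print(sorted(sample)[:10])
```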

  4. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster

    DEFF Research Database (Denmark)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented which randomizes the eggs in a water column and dimini... To obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.

  6. Characterization of component interactions in two-stage axial turbine

    Directory of Open Access Journals (Sweden)

    Adel Ghenaiet

    2016-08-01

    This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when simulating the turbine stages simultaneously or separately. By considering multiple blades per row and the scaling technique, the computational fluid dynamics (CFD) produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space-time periodicity described by a double Fourier decomposition. The Fast Fourier Transform (FFT) analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, where each lobed structure of the pressure wave corresponds to the vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect is seen to dominate almost the whole blade span, while downstream of the blades this effect dominates from hub to mid-span. Near the shroud, the prevailing effect is rather linked to the blade tip flow structure.
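
    The harmonic analysis described is a plain FFT of the pressure trace. A sketch on a synthetic signal with a hypothetical blade-passing frequency (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(11)

# Synthetic pressure trace: a blade-passing tone plus one harmonic and noise,
# standing in for a probe signal recorded between blade rows.
fs = 50_000
t = np.arange(0, 0.2, 1 / fs)
bpf = 2500.0  # hypothetical blade-passing frequency (blade count x shaft speed)
p = (np.sin(2 * np.pi * bpf * t) + 0.4 * np.sin(2 * np.pi * 2 * bpf * t)
     + 0.1 * rng.standard_normal(t.size))

spec = np.abs(np.fft.rfft(p)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
peaks = freqs[np.argsort(spec)[-2:]]          # two strongest spectral lines
print("dominant frequencies (Hz):", sorted(peaks.round()))
```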

  7. A continuous two stage solar coal gasification system

    Science.gov (United States)

    Mathur, V. K.; Breault, R. W.; Lakshmanan, S.; Manasse, F. K.; Venkataramanan, V.

    The characteristics of a two-stage fluidized-bed hybrid coal gasification system to produce syngas from coal, lignite, and peat are described. Devolatilization heat at 823 K is supplied by recirculating gas heated by a solar receiver/coal heater. A second-stage gasifier maintained at 1227 K serves to crack the remaining tar and light oil to yield a product free from tar and other condensables, and sulfur can be removed by hot clean-up processes. CO is minimized because the coal is not burned with oxygen, and the product gas contains 50% H2. The bench-scale reactors consist of a stage I unit 0.1 m in diameter which is fed coal 200 microns in size. The stage II reactor has an inner diameter of 0.36 m and serves to gasify the char from stage I. A solar power source of 10 kWt is required for the bench model, and will be obtained from a central receiver with quartz or heat pipe configurations for heat transfer.

  9. Two stages kinetics of municipal solid waste inoculation composting processes

    Institute of Scientific and Technical Information of China (English)

    XI Bei-dou1; HUANG Guo-he; QIN Xiao-sheng; LIU Hong-liang

    2004-01-01

    In order to understand the key mechanisms of the composting processes, the municipal solid waste (MSW) composting processes were divided into two stages, and the characteristics of typical experimental scenarios were analyzed from the viewpoint of microbial kinetics. Through experimentation with an advanced composting reactor under controlled composting conditions, several equations were worked out to simulate the degradation rate of the substrate. The equations showed that the degradation rate was controlled by the concentration of microbes in the first stage. The degradation rates of substrates for the inoculation Runs A, B, C and the Control composting system were 13.61 g/(kg·h), 13.08 g/(kg·h), 15.671 g/(kg·h), and 10.5 g/(kg·h), respectively; the value for Run C is around 1.5 times that of the Control system. The decomposition rate of the second stage is controlled by the concentration of substrate. Although the organic matter decomposition rates were similar for all Runs, inoculation could reduce the values of the half-velocity coefficient and make the composting stabilize more efficiently. In particular, for Run C the decomposition rate is high in the first stage and low in the second stage. The results indicated that inoculation was efficient for the composting processes.
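
    One plausible reading of the two regimes, stated as standard microbial kinetics (a gloss on the abstract, not the paper's fitted equations; the symbols are as usually defined):

$$
\text{Stage 1 (microbe-limited):}\quad \frac{dS}{dt} = -k\,X,
\qquad
\text{Stage 2 (substrate-limited, Monod form):}\quad \frac{dS}{dt} = -\frac{\mu_{\max}\,S}{K_s + S},
$$

    where $S$ is the substrate concentration, $X$ the microbial biomass, and $K_s$ the half-velocity coefficient that the abstract reports inoculation reduces.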

  10. Loss Function Based Ranking in Two-Stage, Hierarchical Models

    Science.gov (United States)

    Lin, Rongheng; Louis, Thomas A.; Paddock, Susan M.; Ridgeway, Greg

    2009-01-01

    Performance evaluation of health services providers is burgeoning. Similarly, analyzing spatially related health information, ranking teachers and schools, and identification of differentially expressed genes are increasing in prevalence and importance. Goals include valid and efficient ranking of units for profiling and league tables, identification of excellent and poor performers, the most differentially expressed genes, and determining “exceedances” (how many and which unit-specific true parameters exceed a threshold). These data and inferential goals require a hierarchical, Bayesian model that accounts for nesting relations and identifies both population values and random effects for unit-specific parameters. Furthermore, the Bayesian approach coupled with optimizing a loss function provides a framework for computing non-standard inferences such as ranks and histograms. Estimated ranks that minimize Squared Error Loss (SEL) between the true and estimated ranks have been investigated. The posterior mean ranks minimize SEL and are “general purpose,” relevant to a broad spectrum of ranking goals. However, other loss functions and optimizing ranks that are tuned to application-specific goals require identification and evaluation. For example, when the goal is to identify the relatively good (e.g., in the upper 10%) or relatively poor performers, a loss function that penalizes classification errors produces estimates that minimize the error rate. We construct loss functions that address this and other goals, developing a unified framework that facilitates generating candidate estimates, comparing approaches and producing data analytic performance summaries. We compare performance for a fully parametric, hierarchical model with Gaussian sampling distribution under Gaussian and a mixture of Gaussians prior distributions. We illustrate approaches via analysis of standardized mortality ratio data from the United States Renal Data System. Results show that SEL…
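
    The SEL-optimal ranks described above are straightforward to compute from posterior draws: rank the units within each draw, then average those ranks across draws. A minimal sketch on hypothetical posterior samples (the data and dimensions are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws: S MCMC samples of K unit-specific parameters.
S, K = 4000, 10
theta = rng.normal(loc=np.linspace(-1.0, 1.0, K), scale=0.5, size=(S, K))

# Rank units within each posterior draw (1 = smallest parameter value).
ranks_per_draw = theta.argsort(axis=1).argsort(axis=1) + 1

# The posterior mean ranks minimize squared-error loss (SEL) on the true ranks.
sel_ranks = ranks_per_draw.mean(axis=0)

# Integer "league table": rank the posterior mean ranks themselves.
league_table = sel_ranks.argsort().argsort() + 1
print(np.round(sel_ranks, 2))
print(league_table)
```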

  11. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. Firstly, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. J. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than various known estimators, including the Gupta and Shabbir (2008) estimator.
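
    For context, the classical ratio estimator that this line of work builds on scales the sample mean of y by the ratio of the known auxiliary population mean of x to its sample mean. A simulation sketch on an invented population (not the authors' proposed estimator or data), showing the efficiency gain over the plain sample mean when y and x are positively correlated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite population in which y is roughly proportional to x.
N = 5000
x = rng.gamma(shape=4.0, scale=10.0, size=N)
y = 2.5 * x + rng.normal(0.0, 8.0, size=N)
X_bar = x.mean()                      # auxiliary population mean, assumed known

def ratio_estimate(idx):
    """Classical ratio estimator of the population mean of y under SRS."""
    return y[idx].mean() * X_bar / x[idx].mean()

n, reps = 100, 2000
est_ratio, est_mean = np.empty(reps), np.empty(reps)
for r in range(reps):
    idx = rng.choice(N, size=n, replace=False)   # simple random sample
    est_ratio[r], est_mean[r] = ratio_estimate(idx), y[idx].mean()

print("true mean            :", y.mean())
print("MSE, ratio estimator :", ((est_ratio - y.mean()) ** 2).mean())
print("MSE, sample mean     :", ((est_mean - y.mean()) ** 2).mean())
```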

  12. PERFORMANCE STUDY OF A TWO STAGE SOLAR ADSORPTION REFRIGERATION SYSTEM

    Directory of Open Access Journals (Sweden)

    BAIJU. V

    2011-07-01

    The present study experimentally investigates the performance of a two-stage solar adsorption refrigeration system with an activated carbon-methanol pair. Such a system was fabricated and tested under the conditions of the National Institute of Technology Calicut, Kerala, India. The system consists of a parabolic solar concentrator, two water tanks, two adsorbent beds, a condenser, an expansion device, an evaporator and an accumulator. In this particular system the second water tank acts as a sensible heat storage device so that the system can also be used during the night. The system has been designed for heating 50 litres of water from 25 °C to 90 °C as well as cooling 10 litres of water from 30 °C to 10 °C within one hour. Performance parameters such as specific cooling power (SCP), cycle coefficient of performance (COP), solar COP and exergetic efficiency are studied, as is the dependence of the exergetic efficiency and cycle COP on the driving heat source temperature. The optimum heat source temperature for this system is determined to be 72.4 °C. The results show that the system performs better during the night than during the day: the mean cycle COP is 0.196 during the day and 0.335 at night, and the mean SCP values during day and night are 47.83 and 68.2, respectively. The experimental results also demonstrate that the refrigerator has a cooling capacity of 47 to 78 W during the day and 57.6 to 104.4 W at night.

  13. Prevalence and Severity of College Student Bereavement Examined in a Randomly Selected Sample

    Science.gov (United States)

    Balk, David E.; Walker, Andrea C.; Baker, Ardith

    2010-01-01

    The authors used stratified random sampling to assess the prevalence and severity of bereavement in college undergraduates, providing an advance over findings that emerge from convenience sampling methods or from anecdotal observations. Prior research using convenience sampling indicated that 22% to 30% of college students are within 12 months of…

  14. Generalized Yule-Walker and two-stage identification algorithms for dual-rate systems

    Institute of Scientific and Technical Information of China (English)

    Feng DING

    2006-01-01

    In this paper, two approaches are developed for directly identifying single-rate models of dual-rate stochastic systems in which the input updating frequency is an integer multiple of the output sampling frequency. The first is the generalized Yule-Walker algorithm and the second is a two-stage algorithm based on the correlation technique. The basic idea is to directly identify the parameters of underlying single-rate models instead of the lifted models of dual-rate systems from the dual-rate input-output data, assuming that the measurement data are stationary and ergodic. An example is given.

  15. FORMATION OF HIGHLY RESISTANT CARBIDE AND BORIDE COATINGS BY A TWO-STAGE DEPOSITION METHOD

    Directory of Open Access Journals (Sweden)

    W. I. Sawich

    2011-01-01

    A study was made of the aspects of forming highly resistant coatings in the surface zone of tool steels and solid carbide inserts by a two-stage method. At the first stage, pure Ta or Nb coatings were electrodeposited on samples of tool steel and solid carbide insert in a molten salt bath containing Ta and Nb fluorides. At the second stage, the electrodeposited Ta (Nb) coating was subjected to carburizing or boriding to form carbide (TaC, NbC) or boride (TaB, NbB) cladding layers.

  16. Conflict-cost based random sampling design for parallel MRI with low rank constraints

    Science.gov (United States)

    Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie

    2015-05-01

    In compressed sensing MRI, the design of the random sampling pattern is very important. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling; it formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high density region, so the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to a k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.
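
    A simple baseline of the kind this record improves on is plain variable-density random undersampling, where the sampling probability decays with distance from the k-space centre. The sketch below is that baseline only (decay exponent, acceleration factor and calibration-region size are illustrative choices, not the paper's conflict-cost algorithm):

```python
import numpy as np

rng = np.random.default_rng(2)

def vd_random_mask(ny, nx, accel=4.0, power=4.0, calib=16):
    """Variable-density random undersampling mask for 2D k-space.

    Sampling probability decays with distance from the k-space centre as
    (1 - r)**power and is scaled to the target acceleration; a fully
    sampled calibration block is kept at the centre.
    """
    ky, kx = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx),
                         indexing="ij")
    r = np.minimum(np.sqrt(ky ** 2 + kx ** 2), 1.0)
    pdf = (1.0 - r) ** power
    pdf *= (ny * nx / accel) / pdf.sum()        # aim at 1/accel sampled fraction
    mask = rng.random((ny, nx)) < np.clip(pdf, 0.0, 1.0)
    cy, cx = ny // 2, nx // 2
    mask[cy - calib // 2:cy + calib // 2, cx - calib // 2:cx + calib // 2] = True
    return mask

mask = vd_random_mask(256, 256)
print("sampled fraction: %.3f" % mask.mean())
```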

  17. Metamodeling and Optimization of a Blister Copper Two-Stage Production Process

    Science.gov (United States)

    Jarosz, Piotr; Kusiak, Jan; Małecki, Stanisław; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz

    2016-06-01

    It is often difficult to estimate parameters with high accuracy for a two-stage production process of blister copper (containing 99.4 wt.% Cu), as for most industrial processes, which leads to problems in process modeling and control. The first objective of this study was to model the flash smelting and Cu matte converting stages using three different techniques, artificial neural networks, support vector machines, and random forests, trained on noisy technological data. Subsequently, the resulting models were applied to optimize the entire process (the second goal of this research). The obtained optimal solution was a Pareto-optimal one because the process consists of two stages, making the optimization problem a multi-criteria one. A sequential optimization strategy was employed, which sought optimal control parameters consecutively for both stages: the optimal output parameters of the first (smelting) stage were used as input parameters for the second (converting) stage, and a search was then performed for an optimal set of control parameters for the second stage of the Kennecott-Outokumpu process. The optimization was carried out with a Monte Carlo method, and both the modeling parameters and the computed optimal solutions are discussed.

  18. Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source

    DEFF Research Database (Denmark)

    Mashiach, Adam; Østergaard, Jan; Zamir, Ram

    2013-01-01

    Random binning is an efficient, yet complex, coding technique for the symmetric L-description source coding problem. We propose an alternative approach that uses the quantized samples of a bandlimited source as "descriptions". By the Nyquist condition, the source can be reconstructed if enough samples are received. We examine a coding scheme that combines sampling and noise-shaped quantization for a scenario in which only K of the L descriptions are received; some K-sets of descriptions correspond to uniform sampling while others correspond to non-uniform sampling. This scheme achieves the optimum rate-distortion performance for uniform-sampling K-sets, but suffers noise amplification for non-uniform-sampling K-sets. We then show that by increasing the sampling rate and adding a random-binning stage, the optimal operation point is achieved for any K-set.

  19. Right Axillary Sweating After Left Thoracoscopic Sympathectomy in Two-Stage Surgery

    Directory of Open Access Journals (Sweden)

    Berkant Ozpolat

    2013-06-01

    One-stage bilateral or two-stage unilateral video-assisted thoracoscopic sympathectomy can be performed in the treatment of primary focal hyperhidrosis. Here we present a case of compensatory sweating of the contralateral side after a two-stage operation.

  20. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  1. Hypercalciuria, hyperoxaluria, and hypocitraturia screening from random urine samples in patients with calcium lithiasis.

    Science.gov (United States)

    Arrabal-Polo, Miguel Angel; Arias-Santiago, Salvador; Girón-Prieto, María Sierra; Abad-Menor, Felix; López-Carmona Pintado, Fernando; Zuluaga-Gomez, Armando; Arrabal-Martin, Miguel

    2012-10-01

    Calcium lithiasis is the most frequently diagnosed form of renal lithiasis, and a high percentage of affected patients have metabolic disorders such as hypercalciuria, hypocitraturia, and hyperoxaluria. The present study included 50 patients with recurrent calcium lithiasis. We conducted a random urine test during nocturnal fasting and a 24-h urine test, and examined calcium, oxalate, and citrate. The linear correlations between the metabolites were studied, and receiver operating characteristic (ROC) curves were analyzed for the random urine samples to determine the cutoff values for hypercalciuria (excretion greater than 200 mg), hyperoxaluria (excretion greater than 40 mg), and hypocitraturia (excretion less than 320 mg) in the 24-h urine. Linear relationships were observed between the calcium levels in the random and 24-h urine samples (R = 0.717, p = 0.0001), the oxalate levels in the random and 24-h urine samples (R = 0.838, p = 0.0001), and the citrate levels in the random and 24-h urine samples (R = 0.799, p = 0.0001). From the ROC curves, we observed that more than 10.15 mg/dl of random calcium and more than 16.45 mg/l of random oxalate were indicative of hypercalciuria and hyperoxaluria, respectively, in the 24-h urine, and that less than 183 mg/l of random citrate was indicative of hypocitraturia in the 24-h urine. Using the proposed values, screening for hypercalciuria, hyperoxaluria, and hypocitraturia can be performed with a random urine sample during fasting with an overall sensitivity greater than 86%.

  2. Two-Stage Exams Improve Student Learning in an Introductory Geology Course: Logistics, Attendance, and Grades

    Science.gov (United States)

    Knierim, Katherine; Turner, Henry; Davis, Ralph K.

    2015-01-01

    Two-stage exams--where students complete part one of an exam closed book and independently and part two is completed open book and independently (two-stage independent, or TS-I) or collaboratively (two-stage collaborative, or TS-C)--provide a means to include collaborative learning in summative assessments. Collaborative learning has been shown to…

  3. Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient!

    NARCIS (Netherlands)

    van Breukelen, Gerard J.P.; Candel, Math J.J.M.

    2012-01-01

    Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given…
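
    The textbook route to such sample sizes inflates the individual-randomization sample size by the design effect 1 + (m-1)*ICC. A hedged sketch of that standard calculation (the generic design-effect formula, not necessarily the authors' optimal-design equation):

```python
from scipy.stats import norm

def cluster_rct_sample_size(delta, m, icc, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for a two-arm cluster randomized trial.

    delta: standardized effect size; m: (average) cluster size;
    icc: intraclass correlation coefficient.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_individual = 2 * z ** 2 / delta ** 2        # per arm, individual randomization
    deff = 1 + (m - 1) * icc                      # design effect for clustering
    n_cluster_trial = n_individual * deff
    clusters_per_arm = int(-(-n_cluster_trial // m))   # round up
    return int(n_cluster_trial) + 1, clusters_per_arm

print(cluster_rct_sample_size(delta=0.3, m=20, icc=0.05))
```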

  4. Occupational position and its relation to mental distress in a random sample of Danish residents

    DEFF Research Database (Denmark)

    Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D

    2010-01-01

    PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random sample of the Danish population. Participants filled out a survey that included the 92-item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self-employed…

  5. Comparison of kriging interpolation precision between grid sampling scheme and simple random sampling scheme for precision agriculture

    Directory of Open Access Journals (Sweden)

    Jiang Houlong

    2016-01-01

    Sampling methods are important factors that can potentially limit the accuracy of predictions of spatial distribution patterns. A 10 ha tobacco-planted field was selected to compare the accuracy of predicting the spatial distribution of soil properties, using ordinary kriging and cross-validation, between a grid sampling scheme and a simple random sampling scheme (SRS). To achieve this objective, we collected soil samples from the topsoil (0-20 cm) in March 2012; both schemes comprised 115 sample points. Accuracies of spatial interpolation under the two sampling schemes were then evaluated based on validation samples (36 points) and deviations of the estimates. The results suggested that soil pH and nitrate-N (NO3-N) had low variation, whereas all other soil properties exhibited medium variation. Soil pH, organic matter (OM), total nitrogen (TN), cation exchange capacity (CEC), total phosphorus (TP) and available phosphorus (AP) matched the spherical variogram model, whereas the remaining variables fit an exponential model, with both sampling methods. The interpolation errors of soil pH, TP, and AP were lowest with SRS, while the errors for OM, CEC, TN, available potassium (AK) and total potassium (TK) were lowest with grid sampling. The interpolation precision of soil NO3-N showed no significant difference between the two sampling schemes. Considering our data on interpolation precision and the importance of minerals for the cultivation of flue-cured tobacco, the grid-sampling scheme should be used in tobacco-planted fields to determine the spatial distribution of soil properties. The grid-sampling method can be applied in a practical and cost-effective manner to facilitate soil sampling in tobacco-planted fields.
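
    The evaluation pipeline used in this record, ordinary kriging judged by cross-validation, can be sketched with the third-party pykrige package (an assumed dependency; the synthetic field below stands in for the study's soil data):

```python
import numpy as np
from pykrige.ok import OrdinaryKriging   # assumed: pip install pykrige

rng = np.random.default_rng(3)

# Hypothetical soil-property observations at 115 sampled locations.
x, y = rng.uniform(0, 300, 115), rng.uniform(0, 300, 115)
z = 6.0 + 0.004 * x + 0.5 * np.sin(y / 40.0) + rng.normal(0.0, 0.2, 115)

def loo_rmse(x, y, z, model="spherical"):
    """Leave-one-out cross-validation RMSE for ordinary kriging."""
    errs = []
    for i in range(len(z)):
        keep = np.arange(len(z)) != i
        ok = OrdinaryKriging(x[keep], y[keep], z[keep], variogram_model=model)
        pred, _ = ok.execute("points", x[i:i + 1], y[i:i + 1])
        errs.append(float(pred[0]) - z[i])
    return float(np.sqrt(np.mean(np.square(errs))))

print("LOO RMSE, spherical variogram:", loo_rmse(x, y, z))
```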

  6. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in water resources systems, and traditional two-stage stochastic programming is risk-neutral: it compares the random variables (e.g., total benefit) only in expectation to identify the best decisions. To deal with risk, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, a conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss through the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
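
    Stripped of the interval-parameter and CVaR components, the risk-neutral core of such a model is a two-stage linear program with recourse over discrete supply scenarios, which scipy can solve directly. A minimal sketch (benefit, penalty and scenario values are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# First stage: promise an allocation target x (benefit b per unit) before the
# seasonal supply w is known. Second stage: the shortage y_s = max(x - w_s, 0)
# in scenario s incurs a penalty pen > b (recourse cost).
b, pen, x_max = 100.0, 180.0, 60.0
w = np.array([30.0, 45.0, 70.0])        # scenario water supplies
p = np.array([0.3, 0.5, 0.2])           # scenario probabilities
S = len(w)

# Decision vector v = [x, y_1, ..., y_S]; minimize -b*x + sum_s p_s*pen*y_s.
c = np.concatenate(([-b], pen * p))

# Shortage constraints: x - y_s <= w_s (equivalent to y_s >= x - w_s, y_s >= 0).
A_ub = np.zeros((S, 1 + S))
A_ub[:, 0] = 1.0
A_ub[np.arange(S), 1 + np.arange(S)] = -1.0

res = linprog(c, A_ub=A_ub, b_ub=w, bounds=[(0.0, x_max)] + [(0.0, None)] * S)
print("optimal risk-neutral allocation target:", round(res.x[0], 2))  # 45.0 here
```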

  7. Aerobic and two-stage anaerobic-aerobic sludge digestion with pure oxygen and air aeration.

    Science.gov (United States)

    Zupancic, Gregor D; Ros, Milenko

    2008-01-01

    The degradability of excess activated sludge from a wastewater treatment plant was studied, with the objective of establishing the degree of degradation achieved using either air or pure oxygen at different temperatures. Sludge treated with pure oxygen was degraded at temperatures from 22 °C to 50 °C, while samples treated with air were degraded between 32 °C and 65 °C. Using air, sludge was efficiently degraded at 37 °C and at 50-55 °C; with oxygen, sludge was most effectively degraded at 38 °C or at 25-30 °C. Two-stage anaerobic-aerobic processes were also studied. The first, anaerobic stage was always operated at 5 days hydraulic retention time (HRT), and the second stage involved aeration with pure oxygen and an HRT between 5 and 10 days. Under these conditions, there was 53.5% VSS removal and 55.4% COD degradation at 15 days total HRT (5 days anaerobic, 10 days aerobic). Sludge digested with pure oxygen at 25 °C in a batch reactor converted 48% of sludge total Kjeldahl nitrogen to nitrate. Adding an aerobic stage with pure-oxygen aeration to the anaerobic digestion enhances ammonium nitrogen removal: in the two-stage anaerobic-aerobic sludge digestion process with 8 days HRT in the aerobic stage, the removal of ammonium nitrogen was 85%.

  8. The Effect of Dead Time in Random Sampling of the LDA

    Science.gov (United States)

    Velte, Clara; Buchhave, Preben; George, William

    2012-11-01

    The random sampling that results from acquiring the velocities of randomly arriving particles in LDA measurements has, since Gaster and Roberts, commonly been believed to eliminate aliasing. For a perfect signal, in the sense that acquisition is truly instantaneous and random, this is in principle correct. For real signals, however, the acquisition is always afflicted with some finite time/space averaging. This hinders the capture of all realizations and re-introduces aliasing. Contrary to common practice, using the time-slot approximation of the autocorrelation to obtain the power spectrum also re-introduces aliasing (as noted even by Blackman and Tukey). We will demonstrate techniques for minimizing these adverse effects.
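
    Spectral estimation from randomly sampled data of this kind is often illustrated with the Lomb-Scargle periodogram, which works directly on irregular sample times. A small sketch (Poisson arrival times and a 50 Hz test signal are invented; this is not the authors' dead-time analysis):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(4)

# Burst-mode LDA surrogate: Poisson arrival times sampling a 50 Hz velocity signal.
T, rate = 10.0, 200.0                       # record length [s], mean arrival rate [1/s]
t = np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
u = np.sin(2 * np.pi * 50.0 * t) + 0.3 * rng.standard_normal(t.size)

# The Lomb-Scargle periodogram handles irregular sampling directly, so the
# 50 Hz line is recovered without interpolating onto a uniform grid.
freqs_hz = np.linspace(1.0, 100.0, 500)
pgram = lombscargle(t, u - u.mean(), 2 * np.pi * freqs_hz)
print("spectral peak at ~%.1f Hz" % freqs_hz[pgram.argmax()])
```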

  9. Random Walks on Directed Networks: Inference and Respondent-driven Sampling

    CERN Document Server

    Malmros, Jens; Britton, Tom

    2013-01-01

    Respondent-driven sampling (RDS) is a method often used to estimate population properties (e.g. sexual risk behavior) in hard-to-reach populations. It combines an effective modified snowball sampling methodology with an estimation procedure that yields unbiased population estimates under the assumption that the sampling process behaves like a random walk on the social network of the population. Current RDS estimation methodology assumes that the social network is undirected, i.e. that all edges are reciprocal. However, empirical social networks in general also have non-reciprocated edges. To account for this fact, we develop a new estimation method for RDS in the presence of directed edges on the basis of random walks on directed networks. We distinguish directed and undirected edges and consider the possibility that the random walk returns to its current position in two steps through an undirected edge. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing…
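
    The undirected-network baseline that this work generalizes is the classical RDS estimator, which weights each respondent by the reciprocal of their reported degree, since a stationary random walk visits nodes proportionally to degree. A toy sketch with invented data:

```python
import numpy as np

# Hypothetical RDS-style sample: each respondent reports a binary trait and a
# personal network size (degree). Under the random-walk model, inclusion
# probability is proportional to degree, so the Volz-Heckathorn estimator
# reweights each respondent by 1/degree.
trait = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
degree = np.array([12, 3, 25, 8, 4, 6, 30, 5, 10, 7])

p_vh = np.sum(trait / degree) / np.sum(1.0 / degree)
print("VH estimate: %.3f   naive sample mean: %.3f" % (p_vh, trait.mean()))
```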

  10. Random Sampling of Quantum States: a Survey of Methods. And Some Issues Regarding the Overparametrized Method

    Science.gov (United States)

    Maziero, Jonas

    2015-12-01

    The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state to obtain RQS from random DPDs and random unitary matrices. In the sequence, the Bloch vector parametrization method is described; this approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we consider the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices, in a simple way that is friendly for numerical implementations. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices, and we point out an overly fast concentration of measure in the quantum state space that appears in this parametrization.
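
    The Ginibre technique mentioned here is essentially a two-line construction: draw a complex matrix with i.i.d. Gaussian entries, then normalize G·G† to unit trace, which is positive semidefinite by construction. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(5)

def random_density_matrix(d):
    """Random density matrix via the Ginibre construction: G G^dagger / tr(G G^dagger)."""
    g = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    rho = g @ g.conj().T                 # positive semidefinite by construction
    return rho / np.trace(rho).real     # normalize to unit trace

rho = random_density_matrix(4)
print("trace:", np.trace(rho).real)                       # 1.0
print("min eigenvalue:", np.linalg.eigvalsh(rho).min())   # >= 0 (up to rounding)
```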

  12. Two-stage vs single-stage management for concomitant gallstones and common bile duct stones

    Institute of Scientific and Technical Information of China (English)

    Jiong Lu; Yao Cheng; Xian-Ze Xiong; Yi-Xin Lin; Si-Jia Wu; Nan-Sheng Cheng

    2012-01-01

    AIM: To evaluate the safety and effectiveness of two-stage vs single-stage management for concomitant gallstones and common bile duct stones. METHODS: Four databases, including PubMed, Embase, the Cochrane Central Register of Controlled Trials and the Science Citation Index up to September 2011, were searched to identify all randomized controlled trials (RCTs). Data were extracted from the studies by two independent reviewers. The primary outcomes were stone clearance from the common bile duct, postoperative morbidity and mortality. The secondary outcomes were conversion to other procedures, number of procedures per patient, length of hospital stay, total operative time, hospitalization charges, patient acceptance and quality of life scores. RESULTS: Seven eligible RCTs [five trials (n = 621) comparing preoperative endoscopic retrograde cholangiopancreatography (ERCP)/endoscopic sphincterotomy (EST) + laparoscopic cholecystectomy (LC) with LC + laparoscopic common bile duct exploration (LCBDE); two trials (n = 166) comparing postoperative ERCP/EST + LC with LC + LCBDE], comprising 787 patients in total, were included in the final analysis. The meta-analysis detected no statistically significant difference between the two groups in stone clearance from the common bile duct (RR = -0.10, 95% CI: -0.24 to 0.04, P = 0.17), postoperative morbidity (RR = 0.79, 95% CI: 0.58 to 1.10, P = 0.16), mortality (RR = 2.19, 95% CI: 0.33 to 14.67, P = 0.42), conversion to other procedures (RR = 1.21, 95% CI: 0.54 to 2.70, P = 0.39), length of hospital stay (MD = 0.99, 95% CI: -1.59 to 3.57, P = 0.45), or total operative time (MD = 12.14, 95% CI: -1.83 to 26.10, P = 0.09). Two-stage (LC + ERCP/EST) management clearly required more procedures per patient than single-stage (LC + LCBDE) management. CONCLUSION: Single-stage management is equivalent to two-stage management but requires fewer procedures. However, the patient's condition, the operator's expertise and local resources should be taken into account in…

  13. Meta-analysis using individual participant data: one-stage and two-stage approaches, and why they may differ.

    Science.gov (United States)

    Burke, Danielle L; Ensor, Joie; Riley, Richard D

    2017-02-28

    Meta-analysis using individual participant data (IPD) obtains and synthesises the raw, participant-level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta-analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual-level interactions, such as treatment-effect modifiers. There are two statistical approaches for conducting an IPD meta-analysis: one-stage and two-stage. The one-stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two-stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta-analysis model. There have been numerous comparisons of the one-stage and two-stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one-stage and two-stage IPD meta-analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one-stage or two-stage itself. We illustrate the concepts with recently published IPD meta-analyses, summarise key statistical software and provide recommendations for future IPD meta-analyses. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
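
    The two-stage pipeline described here is easy to make concrete: estimate the treatment effect within each study's IPD, then pool the per-study estimates with an inverse-variance fixed-effect model and a DerSimonian-Laird random-effects model. A sketch on simulated IPD (study effects and sizes are invented):

```python
import numpy as np

rng = np.random.default_rng(6)

# Stage 1: estimate the treatment effect separately within each study's IPD
# (difference in means with its variance).
effects, variances = [], []
for beta in (0.30, 0.45, 0.25, 0.50):
    n = 200
    treat = rng.integers(0, 2, n)
    y = beta * treat + rng.standard_normal(n)
    d = y[treat == 1].mean() - y[treat == 0].mean()
    v = (y[treat == 1].var(ddof=1) / (treat == 1).sum()
         + y[treat == 0].var(ddof=1) / (treat == 0).sum())
    effects.append(d)
    variances.append(v)
effects, variances = np.array(effects), np.array(variances)

# Stage 2: inverse-variance fixed-effect pooling, then DerSimonian-Laird
# random-effects pooling.
w = 1.0 / variances
fe = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - fe) ** 2)
tau2 = max(0.0, (q - (len(w) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1.0 / (variances + tau2)
re = np.sum(w_re * effects) / np.sum(w_re)
print("fixed effect: %.3f   random effects: %.3f   tau^2: %.4f" % (fe, re, tau2))
```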

  14. Evaluation of a Two-Stage Approach in Trans-Ethnic Meta-Analysis in Genome-Wide Association Studies.

    Science.gov (United States)

    Hong, Jaeyoung; Lunetta, Kathryn L; Cupples, L Adrienne; Dupuis, Josée; Liu, Ching-Ti

    2016-05-01

    Meta-analysis of genome-wide association studies (GWAS) has achieved great success in detecting loci underlying human diseases. Incorporating GWAS results from diverse ethnic populations for meta-analysis, however, remains challenging because of the possible heterogeneity across studies. Conventional fixed-effects (FE) or random-effects (RE) methods may not be the most suitable to aggregate multiethnic GWAS results because of violation of the homogeneous effect assumption across studies (FE) or low power to detect signals (RE). Three recently proposed methods, the modified RE (RE-HE) model, the binary-effects (BE) model and a Bayesian approach (Meta-analysis of Transethnic Association [MANTRA]), show increased power over FE and RE methods while incorporating heterogeneity of effects when meta-analyzing trans-ethnic GWAS results. We propose a two-stage approach to account for heterogeneity in trans-ethnic meta-analysis in which we cluster studies with cohort-specific ancestry information prior to meta-analysis. We compare this to a no-prior-clustering (crude) approach, evaluating type I error and power of these two strategies in an extensive simulation study to investigate whether the two-stage approach offers any improvements over the crude approach. We find that the two-stage approach and the crude approach for all five methods (FE, RE, RE-HE, BE, MANTRA) provide well-controlled type I error rates. However, the two-stage approach shows increased power for BE and RE-HE, and similar power for MANTRA and FE compared to their corresponding crude approach, especially when there is heterogeneity across the multiethnic GWAS results. These results suggest that prior clustering in the two-stage approach can be an effective and efficient intermediate step in meta-analysis to account for the multiethnic heterogeneity.

  15. A sero-survey of rinderpest in nomadic pastoral systems in central and southern Somalia from 2002 to 2003, using a spatially integrated random sampling approach.

    Science.gov (United States)

    Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M

    2010-12-01

    A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.

  16. Bayesian and frequentist two-stage treatment strategies based on sequential failure times subject to interval censoring.

    Science.gov (United States)

    Thall, Peter F; Wooten, Leiko H; Logothetis, Christopher J; Millikan, Randall E; Tannir, Nizar M

    2007-11-20

    For many diseases, therapy involves multiple stages, with the treatment in each stage chosen adaptively based on the patient's current disease status and history of previous treatments and clinical outcomes. Physicians routinely use such multi-stage treatment strategies, also called dynamic treatment regimes or treatment policies. We present a Bayesian framework for a clinical trial comparing two-stage strategies based on the time to overall failure, defined as either second disease worsening or discontinuation of therapy. Each patient is randomized among a set of treatments at enrollment, and if disease worsening occurs the patient is then re-randomized among a set of treatments excluding the treatment received initially. The goal is to select the two-stage strategy having the largest average overall failure time. A parametric model is formulated to account for non-constant failure time hazards, regression of the second failure time on the patient's first worsening time, and the complications that the failure time in either stage may be interval censored and there may be a delay between first worsening and the start of the second stage of therapy. Four different criteria, two Bayesian and two frequentist, for selecting a best strategy are considered. The methods are applied to a trial comparing two-stage strategies for treating metastatic renal cancer, and a simulation study in the context of this trial is presented. Advantages and disadvantages of this design compared to standard methods are discussed.

  17. Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling

    Directory of Open Access Journals (Sweden)

    Bo Yu

    2015-01-01

    This paper considers the problem of estimating binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
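
    The single-occasion building block of such techniques is Warner's randomized response model: each respondent answers the sensitive question truthfully with probability P and answers its complement otherwise, so the population proportion can be recovered from the observed yes-rate. A simulation sketch (P, pi and n are illustrative; the paper's successive-sampling extension is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(7)

def warner_estimate(yes_rate, P):
    """Unbiased estimate of the sensitive proportion pi from Warner's model.

    P(yes) = P*pi + (1 - P)*(1 - pi), so pi = (yes_rate - (1 - P)) / (2P - 1),
    valid for P != 1/2.
    """
    return (yes_rate - (1.0 - P)) / (2.0 * P - 1.0)

pi_true, P, n = 0.15, 0.7, 2000
truthful = rng.random(n) < P                 # spinner: answer truthfully w.p. P
sensitive = rng.random(n) < pi_true          # respondent's true status
answers = np.where(truthful, sensitive, ~sensitive)
print("estimated pi: %.3f (true %.2f)" % (warner_estimate(answers.mean(), P), pi_true))
```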

  18. Effect partitioning under interference in two-stage randomized vaccine trials.

    Science.gov (United States)

    Vanderweele, Tyler J; Tchetgen Tchetgen, Eric J

    2011-07-01

    In the presence of interference, the exposure of one individual may affect the outcomes of others. We provide new effect partitioning results under interference that express the overall effect as a sum of (i) the indirect (or spillover) effect and (ii) a contrast between two direct effects.

  19. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  20. Do "Instant Polls" Hit the Spot? Phone-In vs. Random Sampling of Public Opinion.

    Science.gov (United States)

    Bates, Benjamin; Harmon, Mark

    1993-01-01

    Compares television phone-in polls to random sample polling. Finds significant differences between the two types of opinion indicators. Shows that persons with strongly held opinions and a pro-change, activist stance are more likely to respond in phone-in polls. (SR)

  1. Random sampling for the monomer-dimer model on a lattice

    NARCIS (Netherlands)

    J. van den Berg (Rob); R.M. Brouwer (Rachel)

    1999-01-01

    In the monomer-dimer model on a graph, each matching (collection of non-overlapping edges) $M$ has a probability proportional to $\lambda^{|M|}$, where $\lambda > 0$ is the model parameter and $|M|$ denotes the number of edges in $M$. An approximate random sample from the monomer-dimer…

  2. Power and sample size calculations for Mendelian randomization studies using one genetic instrument.

    Science.gov (United States)

    Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary

    2013-08-01

    Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas, derived using asymptotic statistical theory, for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. The formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure, proportional to the residual variance of the outcome after removing the effect of the exposure, and inversely proportional to the square of the effect size.
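
    Reading off the proportionalities in the last sentence gives a back-of-the-envelope sample size calculator. The sketch below assembles them with the usual normal quantiles and should be read as an approximation consistent with the abstract, not the paper's exact published formula:

```python
from scipy.stats import norm

def mr_sample_size(beta, rho_gx, sigma_res=1.0, alpha=0.05, power=0.80):
    """Approximate sample size for Mendelian randomization with one instrument.

    n grows with the residual outcome variance (sigma_res) and shrinks with
    the squared instrument-exposure correlation (rho_gx) and the squared
    causal effect (beta), assuming a standardized exposure.
    """
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(z ** 2 * sigma_res / (beta ** 2 * rho_gx ** 2)) + 1

# e.g. effect beta = 0.2 with an instrument explaining 4% of exposure variance:
print(mr_sample_size(beta=0.2, rho_gx=0.2))
```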

  3. On the repeated measures designs and sample sizes for randomized controlled trials.

    Science.gov (United States)

    Tango, Toshiro

    2016-04-01

    For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis-of-covariance-type analysis using a pre-defined pair of "pre-post" data, in which the pre-treatment (baseline) data are used as a covariate for adjustment together with other covariates. The major design issue is then to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but also on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are that (1) it can easily handle missing data by applying likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials.

  4. Flexible sampling large-scale social networks by self-adjustable random walk

    Science.gov (United States)

    Xu, Xiao-Ke; Zhu, Jonathan J. H.

    2016-12-01

    Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and frequent unavailability of OSN population data; sampling then becomes the only feasible solution. How to draw samples that represent the underlying OSNs has remained a formidable task for a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling by comparative evaluation of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge and assumptions about large-scale real OSN data.

  5. Random serial sampling to evaluate efficacy of iron fortification: a randomized controlled trial of margarine fortification with ferric pyrophosphate or sodium iron edetate

    NARCIS (Netherlands)

    Andersson, M.; Theis, W.; Zimmermann, M.B.; Forman, J.T.; Jakel, M.; Duchateau, G.S.M.J.E.; Frenken, L.G.J.; Hurrell, R.F.

    2010-01-01

    Background: Random serial sampling is widely used in population pharmacokinetic studies and may have advantages compared with conventional fixed time-point evaluation of iron fortification. Objective: Our objective was to validate random serial sampling to judge the efficacy of iron fortification of…

  6. Tumor producing fibroblast growth factor 23 localized by two-staged venous sampling.

    NARCIS (Netherlands)

    Boekel, G.A.J van; Ruinemans-Koerts, J.; Joosten, F.; Dijkhuizen, P.; Sorge, A van; Boer, H de

    2008-01-01

    BACKGROUND: Tumor-induced osteomalacia is a rare paraneoplastic syndrome characterized by hypophosphatemia, renal phosphate wasting, suppressed 1,25-dihydroxyvitamin D production, and osteomalacia. It is caused by a usually benign mesenchymal tumor producing fibroblast growth factor 23 (FGF-23). Sur…

  7. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster.

    Science.gov (United States)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness of the samples between the two methods was assessed. The method presented here was superior to the traditional method. Only 14% of the samples had a standard deviation higher than expected, as compared with 58% in the traditional method. To reduce bias in the estimation of the variance and the mean of a trait and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.

  8. Effective number of samples and pseudo-random nonlinear distortions in digital OFDM coded signal

    CERN Document Server

    Rudziński, Adam

    2013-01-01

    This paper concerns the theoretical modeling of the degradation of an OFDM-coded signal caused by pseudo-random nonlinear distortions introduced by an analog-to-digital or digital-to-analog converter. A new quantity, the effective number of samples, is defined and used to derive accurate expressions for the autocorrelation function and the total power of the distortions. The derivation is based on a probabilistic model of the signal and its transition probability. It is shown that for digital (discrete and quantized) signals the effective number of samples replaces the total number of samples and is the proper quantity defining their properties.

  9. Monte Carlo non-local means: random sampling for large-scale image filtering.

    Science.gov (United States)

    Chan, Stanley H; Zickler, Todd; Lu, Yue M

    2014-08-01

    We propose a randomized version of the nonlocal means (NLM) algorithm for large-scale image filtering. The new algorithm, called Monte Carlo nonlocal means (MCNLM), speeds up the classical NLM by computing a small subset of image patch distances, which are randomly selected according to a designed sampling pattern. We make two contributions. First, we analyze the performance of the MCNLM algorithm and show that, for large images or large external image databases, the random outcomes of MCNLM are tightly concentrated around the deterministic full NLM result. In particular, our error probability bounds show that, at any given sampling ratio, the probability for MCNLM to have a large deviation from the original NLM solution decays exponentially as the size of the image or database grows. Second, we derive explicit formulas for optimal sampling patterns that minimize the error probability bound by exploiting partial knowledge of the pairwise similarity weights. Numerical experiments show that MCNLM is competitive with other state-of-the-art fast NLM algorithms for single-image denoising. When applied to denoising images using an external database containing ten billion patches, MCNLM returns a randomized solution that is within 0.2 dB of the full NLM solution while reducing the runtime by three orders of magnitude.
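
    The core idea of MCNLM, replacing the full weighted average over all candidate patches with a randomly chosen subset, fits in a few lines. A per-pixel sketch using uniform random sampling of candidates (patch size, search window, bandwidth and sampling ratio are illustrative; the paper's optimized sampling patterns are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(8)

def randomized_nlm_pixel(img, i, j, patch=3, search=10, h=0.15, ratio=0.3):
    """Denoise one pixel with a randomly subsampled non-local means average.

    Instead of weighting every patch in the search window, only a random
    fraction `ratio` of candidate patches is used, in the spirit of MCNLM.
    """
    r = patch // 2
    ref = img[i - r:i + r + 1, j - r:j + r + 1]
    ys, xs = np.meshgrid(np.arange(i - search, i + search + 1),
                         np.arange(j - search, j + search + 1), indexing="ij")
    cand = np.stack([ys.ravel(), xs.ravel()], axis=1)
    keep = cand[rng.random(len(cand)) < ratio]       # random sampling stage
    num = den = 0.0
    for (yy, xx) in keep:
        other = img[yy - r:yy + r + 1, xx - r:xx + r + 1]
        wgt = np.exp(-np.mean((ref - other) ** 2) / h ** 2)
        num += wgt * img[yy, xx]
        den += wgt
    return num / den

noisy = np.clip(0.5 + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
print(randomized_nlm_pixel(noisy, 32, 32))   # denoised value at pixel (32, 32)
```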

  10. Two-stage re-estimation adaptive design: a simulation study

    Directory of Open Access Journals (Sweden)

    Francesca Galli

    2013-10-01

    Background: Adaptive clinical trial design has been proposed as a promising new approach to improve the drug discovery process. Among the many options available, adaptive sample size re-estimation is of great interest, mainly because of its ability to avoid a large up-front commitment of resources. In this simulation study, we investigate the statistical properties of two-stage sample size re-estimation designs in terms of type I error control, study power and sample size, in comparison with the fixed-sample study. Methods: We simulated a balanced two-arm trial aimed at comparing two means of normally distributed data, using the inverse normal method to combine the results of each stage, and considering scenarios jointly defined by the following factors: the sample size re-estimation method, the information fraction, the type of group sequential boundaries and the use of futility stopping. Calculations were performed using the statistical software SAS™ (version 9.2). Results: Under the null hypothesis, every type of adaptive design considered maintained the pre-specified type I error rate, but futility stopping was required to avoid an unwanted increase in sample size. When deviating from the null hypothesis, the gain in power usually achieved with the adaptive design and its performance in terms of sample size were influenced by the specific design options considered. Conclusions: We show that adaptive designs incorporating futility stopping, a sufficiently high information fraction (50-70%) and the conditional power method for sample size re-estimation have good statistical properties, which include a gain in power when trial results are less favourable than anticipated.
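
    The inverse normal method used in this record combines the stage-wise p-values with weights fixed in advance, so the combined statistic stays standard normal under the null hypothesis even when the second-stage sample size is re-estimated from stage-1 data. A minimal sketch (equal weights assumed for illustration):

```python
import numpy as np
from scipy.stats import norm

def inverse_normal_combination(p1, p2, w1=0.5):
    """Combine one-sided stage-wise p-values with the inverse normal method.

    With weights w1 + w2 = 1 fixed in advance, z = sqrt(w1)*z1 + sqrt(w2)*z2
    is N(0,1) under H0 even if the stage-2 sample size was re-estimated.
    """
    z1, z2 = norm.isf(p1), norm.isf(p2)
    z = np.sqrt(w1) * z1 + np.sqrt(1.0 - w1) * z2
    return z, norm.sf(z)

z, p = inverse_normal_combination(p1=0.04, p2=0.03)
print("combined z = %.3f, combined one-sided p = %.4f" % (z, p))
```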

  11. Nonuniform sampling of hypercomplex multidimensional NMR experiments: Dimensionality, quadrature phase and randomization

    Science.gov (United States)

    Schuyler, Adam D; Maciejewski, Mark W; Stern, Alan S; Hoch, Jeffrey C

    2015-01-01

    Nonuniform sampling (NUS) in multidimensional NMR permits the exploration of higher dimensional experiments and longer evolution times than the Nyquist Theorem practically allows for uniformly sampled experiments. However, the spectra of NUS data include sampling-induced artifacts and may be subject to distortions imposed by sparse data reconstruction techniques, issues not encountered with the discrete Fourier transform (DFT) applied to uniformly sampled data. The characterization of these NUS-induced artifacts allows for more informed sample schedule design and improved spectral quality. The DFT–Convolution Theorem, via the point-spread function (PSF) for a given sampling scheme, provides a useful framework for exploring the nature of NUS sampling artifacts. In this work, we analyze the PSFs for a set of specially constructed NUS schemes to quantify the interplay between randomization and dimensionality for reducing artifacts relative to uniformly undersampled controls. In particular, we find a synergistic relationship between the indirect time dimensions and the “quadrature phase dimension” (i.e. the hypercomplex components collected for quadrature detection). The quadrature phase dimension provides additional degrees of freedom that enable partial-component NUS (collecting a subset of quadrature components) to further reduce sampling-induced aliases relative to traditional full-component NUS (collecting all quadrature components). The efficacy of artifact reduction is exponentially related to the dimensionality of the sample space. Our results quantify the utility of partial-component NUS as an additional means for introducing decoherence into sampling schemes and reducing sampling artifacts in high dimensional experiments. PMID:25899289
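
    The point-spread function reasoning in this record can be reproduced in a few lines: the PSF of a sampling schedule is the DFT of its 0/1 mask, and the side-lobe pattern shows where sampling-induced aliases land. A 1D sketch contrasting uniform undersampling with random NUS (grid size and sampling fraction are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)

N, kept = 256, 64        # grid size and number of sampled points (25% NUS)

def psf_side_lobes(mask):
    """PSF of a sampling schedule: the DFT of its 0/1 mask, main-lobe normalized."""
    f = np.abs(np.fft.fft(mask.astype(float)))
    return f / f[0]

uniform = np.zeros(N)
uniform[::N // kept] = 1.0                               # uniform undersampling
random_ = np.zeros(N)
random_[rng.choice(N, size=kept, replace=False)] = 1.0   # random NUS

# Uniform undersampling concentrates the alias energy in a few coherent
# spikes; random sampling spreads the same energy into low-level noise.
print("max side lobe, uniform: %.3f" % psf_side_lobes(uniform)[1:].max())
print("max side lobe, random : %.3f" % psf_side_lobes(random_)[1:].max())
```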

  12. Mineral chemistry of the Tissint meteorite: Indications of two-stage crystallization in a closed system

    Science.gov (United States)

    Liu, Yang; Baziotis, Ioannis P.; Asimow, Paul D.; Bodnar, Robert J.; Taylor, Lawrence A.

    2016-12-01

    The Tissint meteorite is a geochemically depleted, olivine-phyric shergottite. Olivine megacrysts contain 300-600 μm cores with uniform Mg# (80 ± 1) followed by concentric zones of Fe-enrichment toward the rims. We applied a number of tests to distinguish the relationship of these megacrysts to the host rock. Major and trace element compositions of the Mg-rich core in olivine are in equilibrium with the bulk rock, within uncertainty, and rare earth element abundances of melt inclusions in Mg-rich olivines reported in the literature are similar to those of the bulk rock. Moreover, the P Kα intensity maps of two large olivine grains show no resorption between the uniform core and the rim. Taken together, these lines of evidence suggest the olivine megacrysts are phenocrysts. Among depleted olivine-phyric shergottites, Tissint is the first one that acts mostly as a closed system with olivine megacrysts being the phenocrysts. The texture and mineral chemistry of Tissint indicate a crystallization sequence of olivine (Mg# 80 ± 1) → olivine (Mg# 76) + chromite → olivine (Mg# 74) + Ti-chromite → olivine (Mg# 74-63) + pyroxene (Mg# 76-65) + Cr-ulvöspinel → olivine (Mg# 63-35) + pyroxene (Mg# 65-60) + plagioclase, followed by late-stage ilmenite and phosphate. The crystallization of the Tissint meteorite likely occurred in two stages: the uniform olivine cores likely crystallized under equilibrium conditions, followed by a fractional crystallization sequence that formed the rest of the rock. The two-stage crystallization without crystal settling is simulated using MELTS and the Tissint bulk composition, and can broadly reproduce the crystallization sequence and mineral chemistry measured in the Tissint samples. The transition between equilibrium and fractional crystallization is associated with a dramatic increase in cooling rate and might have been driven by an acceleration in the ascent rate or by an encounter with a steep thermal gradient in the Martian crust.

  13. Evidence for non-random sampling in randomised, controlled trials by Yuhji Saitoh.

    Science.gov (United States)

    Carlisle, J B; Loadsman, J A

    2017-01-01

    A large number of randomised trials authored by Yoshitaka Fujii have been retracted, in part as a consequence of a previous analysis finding a very low probability of random sampling. Dr Yuhji Saitoh co-authored 34 of those trials and was corresponding author for eight of them. We found a number of additional randomised, controlled trials that included baseline data, with Saitoh as corresponding author, that Fujii did not co-author. We used Monte Carlo simulations to analyse the baseline data from 32 relevant trials in total, as well as an outcome (muscle twitch recovery ratios) reported in several. We also compared a series of muscle twitch recovery graphs appearing in a number of Saitoh's publications. The baseline data in 14/32 randomised, controlled trials had p values indicating a low probability of random sampling. Combining the continuous and categorical probabilities of the 32 included trials, we found a very low likelihood of random sampling: p = 1.27 × 10^-8 (about 1 in 100,000,000). The high probability of non-random sampling and the repetition of lines in multiple graphs suggest that further scrutiny of Saitoh's work is warranted. © 2016 The Association of Anaesthetists of Great Britain and Ireland.

  14. High-speed random equivalent sampling system for time-domain reflectometry

    Science.gov (United States)

    Song, Jian-hui; Yuan, Feng; Ding, Zhen-liang

    2008-10-01

    Time-domain reflectometry (TDR) has been commonly used for testing cables for years. The attenuation and distortion of the TDR pulse waveform are an inherent problem for the correct definition of the arrival time and propagation velocity of the travelling wave. For the purpose of obtaining the required information about the incident and reflected pulse waveforms, a high-speed random equivalent sampling (RES) system with 65 ps sampling resolution is proposed for a high-resolution TDR. The problem of data storage and communication caused by the high sampling rate is solved by using both digital signal processors (DSP) and field programmable gate arrays (FPGA). The detailed architecture of the implemented circuit and software is described, including the control logic and the data processing algorithm. The real-time sampling rate of the system is up to 125 MHz, with 15.4 GHz equivalent sampling bandwidth. The test results show that the proposed system can be used as a high-speed data acquisition and processing unit.
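
    The principle behind random equivalent sampling is simple to demonstrate: each trigger of a repetitive waveform is sampled at a random delay, and sorting the samples by delay reconstructs the waveform on a time grid far finer than the real-time rate. A numpy sketch (pulse shape, period and shot count are invented, not the hardware described above):

```python
import numpy as np

rng = np.random.default_rng(10)

period = 10e-9                       # 10 ns repetitive TDR waveform

def pulse(t):
    """Hypothetical reflected-pulse shape (Gaussian echo at 4 ns)."""
    return np.exp(-((t - 4e-9) / 0.8e-9) ** 2)

shots = 5000
delays = rng.uniform(0.0, period, shots)      # one random sample instant per trigger
samples = pulse(delays) + 0.02 * rng.standard_normal(shots)

# Sorting by delay rebuilds the waveform on an equivalent-time grid whose
# resolution is set by the number of shots, not by the real-time rate.
order = np.argsort(delays)
t_eq, v_eq = delays[order], samples[order]
print("mean equivalent-time resolution: %.1f ps" % (period / shots * 1e12))
print("reconstructed peak near %.2f ns" % (t_eq[np.argmax(v_eq)] * 1e9))
```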

  15. On Simon's two-stage design for single-arm phase IIA cancer clinical trials under beta-binomial distribution.

    Science.gov (United States)

    Liu, Junfeng; Lin, Yong; Shih, Weichung Joe

    2010-05-10

    Simon's two-stage design (Control. Clin. Trials 1989; 10:1-10) has been broadly applied to single-arm phase IIA cancer clinical trials in order to minimize either the expected or the maximum sample size under the null hypothesis of drug inefficacy, i.e. when the pre-specified amount of improvement in response rate (RR) is not expected to be observed. This paper studies a realistic scenario where the standard and experimental treatment RRs follow two continuous distributions (e.g. beta distributions) rather than taking two single values. The binomial probabilities in Simon's design are replaced by prior predictive beta-binomial probabilities, which are ratios of beta functions; restricting the RRs to sub-domains introduces incomplete beta functions into the null hypothesis acceptance probability. We illustrate that the beta-binomial mixture model based two-stage design retains certain desirable properties for hypothesis testing purposes. However, numerical results show that such designs may not exist under certain hypothesis and error rate (type I and II) setups within a maximum sample size of approximately 130. Furthermore, we give theoretical conditions for asymptotic two-stage design non-existence (as the sample size goes to infinity) in order to improve the efficiency of the design search and to avoid needless searching.
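
    To make the beta-binomial machinery concrete, the sketch below computes the prior predictive probability of accepting the null in a Simon-style two-stage design when the response rate carries a Beta(a, b) prior. The design parameters (r1/n1 = 1/10, r/n = 5/29) and the prior are hypothetical choices for illustration; the paper's domain-restricted (incomplete beta) variants are not reproduced here.

    ```python
    import numpy as np
    from scipy.special import betaln, gammaln

    def log_comb(n, k):
        return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

    def beta_binom_pmf(k, n, a, b):
        """Prior predictive P(X = k | n) when the response rate p ~ Beta(a, b)."""
        return np.exp(log_comb(n, k) + betaln(k + a, n - k + b) - betaln(a, b))

    def accept_prob(r1, n1, r, n, a, b):
        """P(accept H0) in a Simon-style two-stage design with a Beta(a, b) prior."""
        # Stop (accept) at stage 1 if X1 <= r1; otherwise accept if X1 + X2 <= r.
        p = sum(beta_binom_pmf(x1, n1, a, b) for x1 in range(r1 + 1))
        for x1 in range(r1 + 1, min(n1, r) + 1):
            # After stage 1 the posterior is Beta(a + x1, b + n1 - x1).
            p += beta_binom_pmf(x1, n1, a, b) * sum(
                beta_binom_pmf(x2, n - n1, a + x1, b + n1 - x1)
                for x2 in range(r - x1 + 1)
            )
        return p

    # Hypothetical design r1/n1 = 1/10, r/n = 5/29 with a Beta(2, 8) prior (mean RR 0.2).
    print(f"P(accept H0) = {accept_prob(1, 10, 5, 29, 2, 8):.4f}")
    ```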

  16. Preemptive scheduling in a two-stage supply chain to minimize the makespan

    NARCIS (Netherlands)

    Pei, Jun; Fan, Wenjuan; Pardalos, Panos M.; Liu, Xinbao; Goldengorin, Boris; Yang, Shanlin

    2015-01-01

    This paper deals with the problem of preemptive scheduling in a two-stage supply chain framework. The supply chain environment contains two stages: production and transportation. In the production stage jobs are processed on a manufacturer's bounded serial batching machine, preemptions are allowed,

  17. Two-stage removal of nitrate from groundwater using biological and chemical treatments.

    Science.gov (United States)

    Ayyasamy, Pudukadu Munusamy; Shanthi, Kuppusamy; Lakshmanaperumalsamy, Perumalsamy; Lee, Soon-Jae; Choi, Nag-Choul; Kim, Dong-Ju

    2007-08-01

    In this study, we attempted to treat groundwater contaminated with nitrate using a two-stage removal system: biological treatment using the nitrate-degrading bacterium Pseudomonas sp. RS-7, followed by chemical treatment using a coagulant. For the biological stage, the effect of carbon sources on nitrate removal was first investigated using mineral salt medium (MSM) containing 500 mg l(-1) nitrate, to select the most effective carbon source. Among three carbon sources, namely glucose, starch and cellulose, starch at 1% was found to be the most effective. Thus, starch was used as the representative carbon source for the remainder of the biological treatment, where nitrate removal was carried out for MSM solution and groundwater samples containing 500 mg l(-1) and 460 mg l(-1) nitrate, respectively. About 86% and 89% of the nitrate was removed from the MSM solution and groundwater samples, respectively, at 72 h. Chemical coagulants such as alum, lime and poly aluminium chloride were tested for the removal of the nitrate remaining in the samples. Among the coagulants, lime at 150 mg l(-1) exhibited the highest nitrate removal efficiency, with complete removal from the MSM solution. Thus, a combined system of biological and chemical treatments was found to be more effective for the complete removal of nitrate from groundwater.

  18. Contextual Classification of Point Clouds Using a Two-Stage Crf

    Science.gov (United States)

    Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2015-03-01

    In this investigation, we address the task of airborne LiDAR point cloud labelling for urban areas by presenting a contextual classification methodology based on a Conditional Random Field (CRF). A two-stage CRF is set up: in a first step, a point-based CRF is applied. The resulting labellings are then used to generate a segmentation of the classified points using a Conditional Euclidean Clustering algorithm. This algorithm combines neighbouring points with the same object label into one segment. The second step comprises the classification of these segments, again with a CRF. As the number of the segments is much smaller than the number of points, it is computationally feasible to integrate long range interactions into this framework. Additionally, two different types of interactions are introduced: one for the local neighbourhood and another one operating on a coarser scale. This paper presents the entire processing chain. We show preliminary results achieved using the Vaihingen LiDAR dataset from the ISPRS Benchmark on Urban Classification and 3D Reconstruction, which consists of three test areas characterised by different and challenging conditions. The utilised classification features are described, and the advantages and remaining problems of our approach are discussed. We also compare our results to those generated by a point-based classification and show that a slight improvement is obtained with this first implementation.
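
    Between the two CRF stages, segments are produced by merging neighbouring points that share a class label. A minimal sketch of such conditional Euclidean clustering follows, using a k-d tree and union-find; the radius and points are illustrative assumptions, and this is not the authors' implementation.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def conditional_euclidean_clustering(points, labels, radius=1.0):
        """Merge neighbouring points with the same label into segments:
        points closer than `radius` that share a label join one segment."""
        n = len(points)
        parent = np.arange(n)

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i

        tree = cKDTree(points)
        for i, j in tree.query_pairs(radius):   # all pairs within the radius
            if labels[i] == labels[j]:
                parent[find(i)] = find(j)       # union same-label neighbours

        roots = np.array([find(i) for i in range(n)])
        _, segment_ids = np.unique(roots, return_inverse=True)
        return segment_ids

    # Toy example: two same-label clusters separated by more than the radius.
    pts = np.array([[0, 0, 0], [0.5, 0, 0], [10, 0, 0], [10.5, 0, 0]], float)
    lab = np.array([1, 1, 1, 1])
    print(conditional_euclidean_clustering(pts, lab, radius=1.0))  # -> [0 0 1 1]
    ```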

  19. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method, M/M/1→M/D/1, to model an urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the number of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
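
    The steady-state behaviour of such a tandem queue is easy to sketch: by Burke's theorem the output of a stable M/M/1 stage is again Poisson, so the M/D/1 stage can be analysed with the Pollaczek-Khinchine formula. The snippet below computes the mean sojourn time through both stages for hypothetical arrival and service rates; the paper's cost and fuzziness layers are not reproduced.

    ```python
    # Mean sojourn time in a two-stage M/M/1 -> M/D/1 tandem, using standard
    # steady-state results (Burke's theorem: the M/D/1 stage sees Poisson input).
    def mm1_sojourn(lam, mu):
        assert lam < mu, "stage 1 must be stable"
        return 1.0 / (mu - lam)

    def md1_sojourn(lam, mu):
        # Pollaczek-Khinchine with deterministic service: Wq = rho / (2*mu*(1 - rho)).
        assert lam < mu, "stage 2 must be stable"
        rho = lam / mu
        return rho / (2 * mu * (1 - rho)) + 1.0 / mu

    # Hypothetical rates: arrivals 0.8 units/min, service rates 1.0 and 1.2 units/min.
    lam, mu1, mu2 = 0.8, 1.0, 1.2
    total = mm1_sojourn(lam, mu1) + md1_sojourn(lam, mu2)
    print(f"mean sojourn time ~ {total:.2f} min")
    ```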

  20. A New Two-Stage Approach to Short Term Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Dragan Tasić

    2013-04-01

    Full Text Available In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random, non-stationary process influenced by a number of factors, which makes it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed. These models are based on different mathematical methods and offer different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, in the first stage the next-day average load demand is forecast, and in the second stage it is used in the model for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market. The obtained results confirm the validity and advantage of the proposed approach.
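
    A minimal sketch of this two-stage idea follows, on synthetic data: stage 1 forecasts the next-day average load from recent daily averages, and stage 2 uses that forecast as an extra feature for hourly forecasting. Scikit-learn's SVR is used here as a stand-in for the paper's least-squares SVM (LS-SVM is not in scikit-learn), and all data and hyperparameters are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)

    # Synthetic history: 200 days x 24 hourly loads = daily average x daily shape + noise.
    days, hours = 200, np.arange(24)
    shape = 1.0 + 0.3 * np.sin((hours - 6) / 24 * 2 * np.pi)
    daily_avg = 100 + 10 * np.sin(np.arange(days) / 7 * 2 * np.pi) + rng.normal(0, 2, days)
    loads = daily_avg[:, None] * shape[None, :] + rng.normal(0, 1, (days, 24))

    # Stage 1: forecast the next-day average load from the last 7 daily averages.
    X1 = np.array([daily_avg[i - 7:i] for i in range(7, days)])
    y1 = daily_avg[7:days]
    stage1 = SVR(kernel="rbf", C=100.0).fit(X1[:-1], y1[:-1])
    avg_hat = stage1.predict(X1[-1:])          # forecast for the final day

    # Stage 2: forecast each hour from (hour index, daily average feature).
    X2 = np.column_stack([np.tile(hours, days - 8), np.repeat(y1[:-1], 24)])
    y2 = loads[7:days - 1].ravel()
    stage2 = SVR(kernel="rbf", C=100.0).fit(X2, y2)
    hourly_hat = stage2.predict(np.column_stack([hours, np.repeat(avg_hat, 24)]))
    print(np.round(hourly_hat[:6], 1))
    ```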

  1. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and facial expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if the inputs are of low quality and have large variations in viewpoint and facial expression.

  2. An inversion method based on random sampling for real-time MEG neuroimaging

    CERN Document Server

    Pascarella, Annalisa

    2016-01-01

    The MagnetoEncephaloGraphy (MEG) has gained great interest in neurorehabilitation training due to its high temporal resolution. The challenge is to localize the active regions of the brain in a fast and accurate way. In this paper we use an inversion method based on random spatial sampling to solve the real-time MEG inverse problem. Several numerical tests on synthetic but realistic data show that the method takes just a few hundredths of a second on a laptop to produce an accurate map of the electric activity inside the brain. Moreover, it requires very little memory storage. For these reasons the random sampling method is particularly attractive in real-time MEG applications.

  3. Field quality control of an earth dam: random versus purposive sampling

    Energy Technology Data Exchange (ETDEWEB)

    Kotzias, P.C.; Stamatopoulos, A.C.

    1996-08-01

    Two sampling operations, the random and purposive techniques, for field quality control of an earth dam were presented. Each technique was analyzed and their similarities and differences were compared. Each took into consideration the attributes or variables such as strength, temperature, slump, air content, density, and water content. The purposive operation needed much less field sampling and testing than the random operation. The advantages and disadvantages of each technique were described. It was concluded that both techniques could be used individually or jointly on the same project as long as the extent of the application, with all rules and underlying assumptions, were recognized for each case and were strictly adhered to in the field. 19 refs., 4 tabs., 2 figs.

  4. Quality Assessment of Attribute Data in GIS Based on Simple Random Sampling

    Institute of Scientific and Technical Information of China (English)

    LIU Chun; SHI Wenzhong; LIU Dajie

    2003-01-01

    On the basis of the principles of simple random sampling, the statistical model of rate of disfigurement (RD) is put forward and described in detail. According to the definition of simple random sampling for the attribute data in GIS, the mean and variance of the RD are deduced as the characteristic value of the statistical model in order to explain the feasibility of the accuracy measurement of the attribute data in GIS by using the RD. Moreover, on the basis of the mean and variance of the RD, the quality assessment method for attribute data of vector maps during the data collecting is discussed. The RD spread graph is also drawn to see whether the quality of the attribute data is under control. The RD model can synthetically judge the quality of attribute data, which is different from other measurement coefficients that only discuss accuracy of classification.
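
    The estimator machinery the RD model builds on is the standard simple-random-sampling one: a sample proportion, its variance with the finite population correction, and a confidence interval. A small illustration on a synthetic attribute table (all sizes and error rates are hypothetical):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical GIS attribute table: N records, ~4% carry an attribute error.
    N = 20_000
    population = rng.random(N) < 0.04

    # Simple random sample without replacement.
    n = 500
    sample = rng.choice(population, size=n, replace=False)

    # Estimated error rate and its variance under SRSWOR
    # (with the finite population correction 1 - n/N).
    p_hat = sample.mean()
    var_hat = (1 - n / N) * p_hat * (1 - p_hat) / (n - 1)
    print(f"rate = {p_hat:.3f} +/- {1.96 * np.sqrt(var_hat):.3f} (95% CI half-width)")
    ```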

  5. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    Science.gov (United States)

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
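
    Generating the sampling sites themselves is straightforward: a uniform random offset within the first grid cell fixes the origin, and all further sites follow at a fixed step. A short sketch, with region size and step as hypothetical values:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def systematic_random_sites(extent_x, extent_y, step):
        """Site coordinates for systematic random sampling: one random start
        inside the first grid cell, then equidistant steps across the region."""
        x0 = rng.uniform(0, step)
        y0 = rng.uniform(0, step)
        xs = np.arange(x0, extent_x, step)
        ys = np.arange(y0, extent_y, step)
        return [(x, y) for x in xs for y in ys]

    # Hypothetical 2 x 1.5 mm region sampled every 250 um.
    sites = systematic_random_sites(2000.0, 1500.0, 250.0)
    print(len(sites), sites[:3])
    ```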

  6. Ranking of Simultaneous Equation Techniques to Small Sample Properties and Correlated Random Deviates

    Directory of Open Access Journals (Sweden)

    A. A. Adepoju

    2009-01-01

    Full Text Available Problem statement: All simultaneous equation estimation methods have some desirable asymptotic properties, and these properties become effective in large samples. This study is relevant since samples available to researchers are mostly small in practice and are often plagued by mutual correlation between pairs of random deviates, which violates the assumption of mutual independence between pairs of such deviates. The objective of this research was to study the small sample properties of these estimators when the errors are correlated, to determine whether the properties still hold when available samples are relatively small and the errors are correlated. Approach: Most of the evidence on the small sample properties of the simultaneous equation estimators comes from sampling (Monte Carlo) experiments. It is important to rank estimators on the merit they have when applied to small samples. This study examined the performances of five simultaneous estimation techniques using some of the basic characteristics of the sampling distributions rather than their full description. The characteristics considered here are the mean, the total absolute bias and the root mean square error. Results: The results revealed that the ranking of the five estimators in respect of the Average Total Absolute Bias (ATAB) is invariant to the choice of the upper (P1) or lower (P2) triangular matrix. Judged by the RMSE of estimates, FIML was clearly best in the open-ended intervals and clearly poorest in the closed interval when P1 and P2 were combined. Conclusion: (i) The ranking of the various simultaneous estimation methods considered, based on their small sample properties, differs according to the correlation status of the error term, the identifiability status of the equation and the assumed triangular matrix. (ii) The nature of the relationship under study also determined which of the criteria for judging the

  7. Sample size calculation in cost-effectiveness cluster randomized trials: optimal and maximin approaches.

    Science.gov (United States)

    Manju, Md Abu; Candel, Math J J M; Berger, Martijn P F

    2014-07-10

    In this paper, the optimal sample sizes at the cluster and person levels for each of two treatment arms are obtained for cluster randomized trials in which the cost-effectiveness of treatments on a continuous scale is studied. The optimal sample sizes maximize the efficiency or power for a given budget, or minimize the budget for a given efficiency or power. Optimal sample sizes require information on the intra-cluster correlations (ICCs) for effects and costs, the correlations between costs and effects at the individual and cluster levels, the ratio of the variance of effects translated into costs to the variance of the costs (the variance ratio), sampling and measuring costs, and the budget. When planning a study, information on the model parameters is usually not available. To overcome this local optimality problem, the current paper also presents maximin sample sizes. The maximin sample sizes turn out to be rather robust against misspecifying the correlation between costs and effects at the cluster and individual levels, but may lose much efficiency when the variance ratio is misspecified. The robustness of the maximin sample sizes against misspecifying the ICCs depends on the variance ratio. The maximin sample sizes are robust under misspecification of the ICC for costs for realistic values of the variance ratio greater than one, but not robust under misspecification of the ICC for effects. Finally, we show how to calculate optimal or maximin sample sizes that yield sufficient power for a test on the cost-effectiveness of an intervention.

  8. Knowledge of health information and services in a random sample of the population of Glasgow.

    Science.gov (United States)

    Moynihan, M; Jones, A K; Stewart, G T; Lucas, R W

    1980-01-01

    A random sample of adults in Glasgow was surveyed by trained interviewers to determine public knowledge on four topics chosen specifically for each of four age groups. The topics were: welfare rights and services; coronary heart disease (CHD) and individual action that can reduce risk; the dangers of smoking in pregnancy; and fluoride and its functions, together with the connections between good health and habitual behaviour.

  9. Ratio Estimators in Simple Random Sampling Using Information on Auxiliary Attribute

    Directory of Open Access Journals (Sweden)

    Rajesh Singh

    2008-01-01

    Full Text Available Some ratio estimators for estimating the population mean of the variable under study, which make use of information regarding the population proportion possessing a certain attribute, are proposed. Under the simple random sampling without replacement (SRSWOR) scheme, expressions for the bias and mean-squared error (MSE), up to the first order of approximation, are derived. The results obtained have been illustrated numerically by taking some empirical populations considered in the literature.
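
    The classical attribute-based ratio estimator takes the form ȳ_r = ȳ(P/p̂), where P is the known population proportion possessing the attribute and p̂ its sample analogue. The simulation below, on a synthetic population in which the study variable is roughly proportional to the attribute, illustrates the MSE reduction such an estimator can give over the plain sample mean; the population, sample size, and coefficients are illustrative assumptions, not the paper's empirical populations.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic population: study variable y strongly tied to a binary attribute phi.
    N = 5_000
    phi = rng.random(N) < 0.5           # auxiliary attribute
    y = 2 + 12 * phi + rng.normal(0, 1, N)
    P = phi.mean()                       # known population proportion

    n, reps = 60, 20_000
    plain, ratio = [], []
    for _ in range(reps):
        idx = rng.choice(N, size=n, replace=False)   # SRSWOR draw
        y_bar, p_hat = y[idx].mean(), phi[idx].mean()
        plain.append(y_bar)
        if p_hat > 0:                    # guard against empty-attribute samples
            ratio.append(y_bar * P / p_hat)

    mse = lambda est: np.mean((np.array(est) - y.mean()) ** 2)
    print(f"MSE plain mean: {mse(plain):.4f}   MSE ratio estimator: {mse(ratio):.4f}")
    ```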

  10. Minimizing makespan in a two-stage hybrid flow shop scheduling problem with open shop in one stage

    Institute of Scientific and Technical Information of China (English)

    DONG Jian-ming; HU Jue-liang; CHEN Yong

    2013-01-01

    This paper considers a scheduling problem in a two-stage hybrid flow shop, where the first stage consists of two machines forming an open shop and the other stage has only one machine. The objective is to minimize the makespan, i.e., the maximum completion time of all jobs. We first show the problem is NP-hard in the strong sense, and then present two heuristics to solve it. Computational experiments show that the combined algorithm of the two heuristics performs well on randomly generated problem instances.

  11. A novel 3D Cartesian random sampling strategy for Compressive Sensing Magnetic Resonance Imaging.

    Science.gov (United States)

    Valvano, Giuseppe; Martini, Nicola; Santarelli, Maria Filomena; Chiappino, Dante; Landini, Luigi

    2015-01-01

    In this work we propose a novel acquisition strategy for accelerated 3D Compressive Sensing Magnetic Resonance Imaging (CS-MRI). This strategy is based on 3D Cartesian sampling with random switching of the frequency encoding direction with other k-space directions. Two 3D sampling strategies are presented. In the first strategy, the frequency encoding direction is randomly switched with one of the two phase encoding directions. In the second strategy, the frequency encoding direction is randomly chosen among all the directions of k-space. These strategies can lower the coherence of the acquisition, in order to produce reduced aliasing artifacts and to achieve better image quality after Compressive Sensing (CS) reconstruction. Furthermore, the proposed strategies can reduce the typical smoothing of CS due to the limited sampling of high-frequency locations. We demonstrated by means of simulations that the proposed acquisition strategies outperformed the standard Compressive Sensing acquisition. This results in better quality of the reconstructed images and in a greater achievable acceleration.
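
    The first strategy can be sketched as a binary k-space mask in which every readout is a fully sampled line along a randomly chosen axis, placed at random positions in the remaining dimensions. The grid size and number of readouts below are illustrative assumptions, not the authors' protocol:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def random_switch_mask(nx, ny, nz, n_readouts):
        """Binary k-space mask: each readout is a full line along a randomly
        chosen axis (frequency encoding switched with one phase direction),
        at random positions in the two remaining dimensions."""
        mask = np.zeros((nx, ny, nz), dtype=bool)
        for _ in range(n_readouts):
            axis = rng.integers(0, 2)      # 0: readout along x, 1: readout along y
            if axis == 0:
                mask[:, rng.integers(ny), rng.integers(nz)] = True
            else:
                mask[rng.integers(nx), :, rng.integers(nz)] = True
        return mask

    m = random_switch_mask(64, 64, 32, n_readouts=820)
    print(f"sampling fraction: {m.mean():.2%}")
    ```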

  12. Randomly dividing homologous samples leads to overinflated accuracies for emotion recognition.

    Science.gov (United States)

    Liu, Shuang; Zhang, Di; Xu, Minpeng; Qi, Hongzhi; He, Feng; Zhao, Xin; Zhou, Peng; Zhang, Lixin; Ming, Dong

    2015-04-01

    There are numerous studies measuring brain emotional status by analyzing EEG recorded under emotional stimuli. However, they often randomly divide the homologous samples into training and testing groups, a practice known as randomly dividing homologous samples (RDHS), without considering the impact of the non-emotional information shared among them, which inflates the recognition accuracy. This work proposed a modified method, integrating homologous samples (IHS), in which the homologous samples are used either to build the classifier or to test it, but not both. The results showed that the classification accuracy was much lower for the IHS than for the RDHS. Furthermore, a positive correlation was found between the accuracy and the overlapping rate of the homologous samples. These findings imply that overinflated accuracy did exist in previous studies where the RDHS method was employed for emotion recognition. Moreover, this study performed a feature selection for the IHS condition based on support vector machine-recursive feature elimination, after which the average accuracies were greatly improved, to 85.71% and 77.18% in the picture-induced and video-induced tasks, respectively.
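
    The leakage mechanism is easy to reproduce. In the sketch below, segments from the same trial share a strong trial-specific offset (the "non-emotional information"); a plain shuffled split (RDHS-style) puts sibling segments in both train and test and inflates accuracy, while a grouped split (IHS-style, via scikit-learn's GroupKFold) keeps each trial on one side. All data are synthetic and the dimensions are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.model_selection import KFold, GroupKFold, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(6)

    # 40 stimulus trials x 10 EEG segments each; the label depends weakly on the
    # features, but each trial carries a strong trial-specific offset.
    trials, segs, dim = 40, 10, 8
    labels_trial = rng.integers(0, 2, trials)
    offsets = rng.normal(0, 3, (trials, dim))        # non-emotional information
    X = np.vstack([offsets[t] + 0.3 * labels_trial[t] + rng.normal(0, 1, (segs, dim))
                   for t in range(trials)])
    y = np.repeat(labels_trial, segs)
    groups = np.repeat(np.arange(trials), segs)

    clf = SVC(kernel="rbf")
    rdhs = cross_val_score(clf, X, y, cv=KFold(5, shuffle=True, random_state=0))
    ihs = cross_val_score(clf, X, y, cv=GroupKFold(5), groups=groups)
    print(f"RDHS-style split: {rdhs.mean():.2f}   IHS-style split: {ihs.mean():.2f}")
    ```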

  13. Two-Stage Multi-Objective Collaborative Scheduling for Wind Farm and Battery Switch Station

    Directory of Open Access Journals (Sweden)

    Zhe Jiang

    2016-10-01

    Full Text Available In order to deal with the uncertainties of wind power, a wind farm and an electric vehicle (EV) battery switch station (BSS) were proposed to work together as an integrated system. In this paper, the collaborative scheduling problems of such a system were studied. Considering the features of the integrated system, three indices are proposed: battery swapping demand curtailment of the BSS, wind curtailment of the wind farm, and generation schedule tracking of the integrated system. In addition, a two-stage multi-objective collaborative scheduling model was designed. In the first stage, a day-ahead model was built based on the theory of dependent chance programming. With the aim of maximizing the realization probabilities of these three operating indices, random fluctuations of wind power and battery switch demand were taken into account simultaneously. In order to explore the capability of the BSS as a reserve, the readjustment process of the BSS within each hour was considered in this stage. In addition, the stored energy, rather than the charging/discharging power of the BSS during each period, was optimized, which provides a basis for the hour-ahead further correction of the BSS. In the second stage, an hour-ahead model was established. In order to cope with the randomness of wind power and battery swapping demand, the proposed hour-ahead model utilized ultra-short-term predictions of the wind power and the battery switch demand to schedule the charging/discharging power of the BSS in a rolling manner. Finally, the effectiveness of the proposed models was validated by case studies. The simulation results indicated that the proposed model could realize complementarity between the wind farm and the BSS, reduce the dependence on the power grid, and facilitate the accommodation of wind power.

  14. AREA DETERMINATION OF DIABETIC FOOT ULCER IMAGES USING A CASCADED TWO-STAGE SVM BASED CLASSIFICATION.

    Science.gov (United States)

    Wang, Lei; Pedersen, Peder; Agu, Emmanuel; Strong, Diane; Tulu, Bengisu

    2016-11-23

    It is standard practice for clinicians and nurses to primarily assess patients' wounds via visual examination. This subjective method can be inaccurate and also represents a significant clinical workload. Hence, computer-based systems, especially those implemented on mobile devices, can provide automatic, quantitative wound assessment and can thus be valuable for accurately monitoring wound healing status. Of all the wound assessment parameters, the measurement of the wound area is the most suitable for automated analysis. Most current wound boundary determination methods only process the image of the wound area along with a small amount of surrounding healthy skin. In this paper, we present a novel approach that uses a Support Vector Machine (SVM) to determine the wound boundary on a foot ulcer image captured with an image capture box, which provides controlled lighting, angle and range conditions. The Simple Linear Iterative Clustering (SLIC) method is applied for effective super-pixel segmentation. A cascaded two-stage classifier is trained as follows: in the first stage, a set of k binary SVM classifiers are trained on and applied to different subsets of the entire training image dataset, and a set of incorrectly classified instances is collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from super-pixels, which are used as input for each stage of the classifier training. Specifically, we apply the color and Bag-of-Words (BoW) representation of local Dense SIFT features (DSIFT) as the descriptor for ruling out irrelevant regions (first stage), and apply color and wavelet-based features as descriptors for distinguishing healthy tissue from wound regions (second stage). Finally, the detected wound boundary is refined by applying a Conditional Random Field (CRF) image processing technique. We have implemented the wound classification on a Nexus
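
    The cascade itself is independent of the image features. The sketch below reproduces the training pattern on synthetic data: k first-stage SVMs are trained on disjoint subsets, the training instances their majority vote misclassifies are collected, and a second-stage SVM is trained on those hard cases. The deferral rule at prediction time (send near-tied votes to stage 2) is our own illustrative assumption; the abstract does not spell out the paper's inference rule.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.datasets import make_classification

    rng = np.random.default_rng(7)
    X, y = make_classification(n_samples=600, n_features=10, random_state=0)

    # Stage 1: k binary SVMs, each trained on a different subset of the data.
    k, stage1 = 5, []
    for subset in np.array_split(rng.permutation(len(X)), k):
        stage1.append(SVC(kernel="rbf").fit(X[subset], y[subset]))

    # Collect the training instances the stage-1 majority vote gets wrong
    # (assumes the ensemble misclassifies at least some instances).
    votes = np.mean([clf.predict(X) for clf in stage1], axis=0)
    hard = (votes > 0.5).astype(int) != y
    stage2 = SVC(kernel="rbf").fit(X[hard], y[hard])

    def cascade_predict(x):
        v = np.mean([clf.predict(x) for clf in stage1], axis=0)
        out = (v > 0.5).astype(int)
        unsure = np.abs(v - 0.5) < 0.2     # defer near-tied votes to stage 2
        if unsure.any():
            out[unsure] = stage2.predict(x[unsure])
        return out

    print("train accuracy:", (cascade_predict(X) == y).mean())
    ```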

  15. Improving ambulatory saliva-sampling compliance in pregnant women: a randomized controlled study.

    Directory of Open Access Journals (Sweden)

    Julian Moeller

    Full Text Available OBJECTIVE: Noncompliance with scheduled ambulatory saliva sampling is common and has been associated with biased cortisol estimates in nonpregnant subjects. This study is the first to investigate, in pregnant women, strategies to improve ambulatory saliva-sampling compliance, and the association between sampling noncompliance and saliva cortisol estimates. METHODS: We instructed 64 pregnant women to collect eight scheduled saliva samples on each of two consecutive days. Objective compliance with scheduled sampling times was assessed with a Medication Event Monitoring System, and self-reported compliance with a paper-and-pencil diary. In a randomized controlled study, we estimated whether a disclosure intervention (informing women about objective compliance monitoring) and a reminder intervention (use of acoustical reminders) improved compliance. A mixed model analysis was used to estimate associations between women's objective compliance and their diurnal cortisol profiles, and between deviation from scheduled sampling and the cortisol concentration measured in the related sample. RESULTS: Self-reported compliance with the saliva-sampling protocol was 91%, and objective compliance was 70%. The disclosure intervention was associated with improved objective compliance (informed: 81%, noninformed: 60%; F(1,60) = 17.64, p < 0.001), but not the reminder intervention (reminders: 68%, without reminders: 72%; F(1,60) = 0.78, p = 0.379). Furthermore, a woman's increased objective compliance was associated with a higher diurnal cortisol profile, F(2,64) = 8.22, p < 0.001. Altered cortisol levels were observed in less objectively compliant samples, F(1,705) = 7.38, p = 0.007, with delayed sampling associated with lower cortisol levels. CONCLUSIONS: The results suggest that in pregnant women, objective noncompliance with scheduled ambulatory saliva sampling is common and is associated with biased cortisol estimates. To improve sampling compliance, results suggest

  16. A gas-loading system for LANL two-stage gas guns

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Lloyd Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bartram, Brian Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dattelbaum, Dana Mcgraw [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lang, John Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Morris, John Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rates and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.

  17. Two stages of isotopic exchanges experienced by the Ertaibei granite pluton, northern Xinjiang, China

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    18O/16O and D/H ratios of coexisting feldspar, quartz, and biotite separates from twenty samples collected from the Ertaibei granite pluton, northern Xinjiang, China, are determined. It is shown that the Ertaibei pluton experienced two stages of isotopic exchange. The second stage of 18O/16O and D/H exchange with meteoric water brought about a marked decrease in the δ18O values of feldspar and biotite from the second group of samples. The D/H of biotite exhibits a higher sensitivity to the meteoric water alteration than its 18O/16O. However, the first stage of 18O/16O exchange with the 18O-rich aqueous fluid derived from dehydration within the deep crust caused the Δ18OQuartz-Feldspar reversal. It is inferred that dehydration-melting may have been an important mechanism for anatexis. It is shown that the deep fluid encircled the Ertaibei pluton like an envelope, serving as an effective screen against the surface waters.

  18. Two stages of isotopic exchanges experienced by the Ertaibei granite pluton, northern Xinjiang, China

    Institute of Scientific and Technical Information of China (English)

    刘伟

    2000-01-01

    18O/16O and D/H ratios of coexisting feldspar, quartz, and biotite separates from twenty samples collected from the Ertaibei granite pluton, northern Xinjiang, China, are determined. It is shown that the Ertaibei pluton experienced two stages of isotopic exchange. The second stage of 18O/16O and D/H exchange with meteoric water brought about a marked decrease in the δ18O values of feldspar and biotite from the second group of samples. The D/H of biotite exhibits a higher sensitivity to the meteoric water alteration than its 18O/16O. However, the first stage of 18O/16O exchange with the 18O-rich aqueous fluid derived from dehydration within the deep crust caused the Δ18OQuartz-Feldspar reversal. It is inferred that dehydration-melting may have been an important mechanism for anatexis. It is shown that the deep fluid encircled the Ertaibei pluton like an envelope, serving as an effective screen against the surface waters.

  19. A gas-loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, L. L.; Bartram, B. D.; Dattelbaum, D. M.; Lang, J. M.; Morris, J. S.

    2017-01-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rates and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.

  20. DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [ORNL; Rice, C Keith [ORNL; Abdelaziz, Omar [ORNL; Shrestha, Som S [ORNL

    2015-01-01

    This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high-stage compressor was modelled using a compressor map, and the low-stage compressor was studied experimentally using calorimeter testing. A single-stage heat pump system was modelled as the baseline. System performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.

  1. DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [ORNL; Rice, C Keith [ORNL; Abdelaziz, Omar [ORNL; Shrestha, Som S [ORNL

    2015-01-01

    This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high-stage compressor was modelled using a compressor map, and the low-stage compressor was studied experimentally using calorimeter testing. A single-stage heat pump system was modelled as the baseline. System performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.

  2. Validation of Random Sampling as an Estimation Procedure for Lyme Disease Surveillance in Massachusetts and Minnesota.

    Science.gov (United States)

    Bjork, J; Brown, C; Friedlander, H; Schiffman, E; Neitzel, D

    2016-08-03

    Many disease surveillance programs, including those of the Massachusetts Department of Public Health and the Minnesota Department of Health, are challenged by marked increases in Lyme disease (LD) reports. The purpose of this study was to retrospectively analyse LD reports from 2005 through 2012 to determine whether key epidemiologic characteristics were statistically indistinguishable when an estimation procedure based on sampling was utilized. Estimates of the number of LD cases were produced by taking random 20% and 50% samples of laboratory-only reports, multiplying by 5 or 2, respectively, and adding the number of provider-reported confirmed cases. Estimated LD case counts were compared to observed, confirmed cases each year. In addition, the proportions of cases that were male, were ≤12 years of age, had erythema migrans (EM), had any late manifestation of LD, had a specific late manifestation of LD (arthritis, cranial neuritis or carditis) or lived in a specific region were compared to the proportions of cases identified using standard surveillance, to determine whether estimated proportions were representative of observed proportions. Results indicate that the estimated counts of confirmed LD cases were consistently similar to the observed, confirmed LD cases and accurately conveyed temporal trends. Most of the key demographic and disease manifestation characteristics were not significantly different at the 0.05 level, although estimates from the 20% random sample deviated more than those from the 50% random sample. Applying this estimation procedure in endemic states could conserve limited resources by reducing follow-up effort while maintaining the ability to track disease trends.
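
    The arithmetic of the procedure is simple inverse-probability scaling: confirmed cases found in a p-fraction sample of laboratory-only reports are multiplied by 1/p, and provider-reported confirmed cases are added in full. A toy simulation follows; all counts and the confirmation rate are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Hypothetical surveillance year: lab-only reports plus provider-confirmed cases.
    lab_only_reports = 4_000          # reports that would each need follow-up
    provider_confirmed = 1_200
    true_confirmation_rate = 0.55     # fraction of lab-only reports that are true cases

    def estimate(fraction):
        """Follow up only a random fraction of lab-only reports, then scale up."""
        sampled = rng.random(lab_only_reports) < fraction
        confirmed_in_sample = rng.random(sampled.sum()) < true_confirmation_rate
        return confirmed_in_sample.sum() / fraction + provider_confirmed

    observed = lab_only_reports * true_confirmation_rate + provider_confirmed
    print(f"observed ~ {observed:.0f}, 20% sample -> {estimate(0.20):.0f}, "
          f"50% sample -> {estimate(0.50):.0f}")
    ```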

  3. Nicotine therapy sampling to induce quit attempts among smokers unmotivated to quit: a randomized clinical trial.

    Science.gov (United States)

    Carpenter, Matthew J; Hughes, John R; Gray, Kevin M; Wahlquist, Amy E; Saladin, Michael E; Alberg, Anthony J

    2011-11-28

    Rates of smoking cessation have not changed in a decade, accentuating the need for novel approaches to prompt quit attempts. Within a nationwide randomized clinical trial (N = 849) to induce further quit attempts and cessation, smokers currently unmotivated to quit were randomized to a practice quit attempt (PQA) alone or to nicotine replacement therapy (hereafter referred to as nicotine therapy) sampling within the context of a PQA. Following a 6-week intervention period, participants were followed up for 6 months to assess outcomes. The PQA intervention was designed to increase motivation, confidence, and coping skills. The combination of a PQA plus nicotine therapy sampling added samples of nicotine lozenges to enhance attitudes toward pharmacotherapy and to promote the use of additional cessation resources. Primary outcomes included the incidence of any ever-occurring self-defined quit attempt and of any 24-hour quit attempt. Secondary measures included 7-day point prevalence abstinence at any time during the study (ie, floating abstinence) and at the final follow-up assessment. Compared with the PQA intervention, nicotine therapy sampling was associated with a significantly higher incidence of any quit attempt (49% vs 40%; relative risk [RR], 1.2; 95% CI, 1.1-1.4) and any 24-hour quit attempt (43% vs 34%; 1.3; 1.1-1.5). Nicotine therapy sampling was marginally more likely to promote floating abstinence (19% vs 15%; RR, 1.3; 95% CI, 1.0-1.7); 6-month point prevalence abstinence rates were no different between groups (16% vs 14%; 1.2; 0.9-1.6). Nicotine therapy sampling during a PQA represents a novel strategy to motivate smokers to make a quit attempt. clinicaltrials.gov Identifier: NCT00706979.

  4. Randomized comparison of 3 different-sized biopsy forceps for quality of sampling in Barrett's esophagus.

    Science.gov (United States)

    Gonzalez, Susana; Yu, Woojin M; Smith, Michael S; Slack, Kristen N; Rotterdam, Heidrun; Abrams, Julian A; Lightdale, Charles J

    2010-11-01

    Several types of forceps are available for use in sampling Barrett's esophagus (BE). Few data exist with regard to biopsy quality for histologic assessment. To evaluate sampling quality of 3 different forceps in patients with BE. Single-center, randomized clinical trial. Consecutive patients with BE undergoing upper endoscopy. Patients randomized to have biopsy specimens taken with 1 of 3 types of forceps: standard, large capacity, or jumbo. Specimen adequacy was defined a priori as a well-oriented biopsy sample 2 mm or greater in diameter and with at least muscularis mucosa present. A total of 65 patients were enrolled and analyzed (standard forceps, n = 21; large-capacity forceps, n = 21; jumbo forceps, n = 23). Compared with jumbo forceps, a significantly higher proportion of biopsy samples with large-capacity forceps were adequate (37.8% vs 25.2%, P = .002). Of the standard forceps biopsy samples, 31.9% were adequate, which was not significantly different from specimens taken with large-capacity (P = .20) or jumbo (P = .09) forceps. Biopsy specimens taken with jumbo forceps had the largest diameter (median, 3.0 mm vs 2.5 mm [standard] vs 2.8 mm [large capacity]; P = .0001). However, jumbo forceps had the lowest proportion of specimens that were well oriented (overall P = .001). Heterogeneous patient population precluded dysplasia detection analyses. Our results challenge the requirement of jumbo forceps and therapeutic endoscopes to properly perform the Seattle protocol. We found that standard and large-capacity forceps used with standard upper endoscopes produced biopsy samples at least as adequate as those obtained with jumbo forceps and therapeutic endoscopes in patients with BE. Copyright © 2010 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  5. Location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling

    Directory of Open Access Journals (Sweden)

    Alireza Goli

    2015-09-01

    Full Text Available The distribution and optimal allocation of emergency resources are among the most important tasks to be accomplished during a crisis. When a natural disaster such as an earthquake or flood takes place, it is necessary to deliver rescue efforts as quickly as possible. Therefore, it is important to find the optimal location and distribution of emergency relief resources. When a natural disaster occurs, it is not possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this approach, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed method is implemented on randomly generated instances of different sizes. The preliminary results indicate that the proposed method is capable of reaching desirable solutions in a reasonable amount of time.

  6. Numerical simulation of a step-piston type series two-stage pulse tube refrigerator

    Science.gov (United States)

    Zhu, Shaowei; Nogawa, Masafumi; Inoue, Tatsuo

    2007-09-01

    A two-stage pulse tube refrigerator has a great advantage in that there are no moving parts at low temperatures. The problem is its low theoretical efficiency. In an ordinary two-stage pulse tube refrigerator, the expansion work of the first stage pulse tube is rather large, but it is dissipated as heat; the theoretical efficiency is therefore lower than that of a Stirling refrigerator. A series two-stage pulse tube refrigerator was introduced to solve this problem. The hot end of the regenerator of the second stage is connected to the hot end of the first stage pulse tube. The expansion work in the first stage pulse tube becomes part of the input work of the second stage, and the efficiency is therefore increased. In a simulation of a step-piston type two-stage series pulse tube refrigerator, the efficiency is increased by 13.8%.

  7. Theory and calculation of two-stage voltage stabilizer on zener diodes

    Directory of Open Access Journals (Sweden)

    G. S. Veksler

    1966-12-01

    Full Text Available A two-stage stabilizer is compared with a one-stage one. Formulas are derived that enable an engineering calculation, and a worked example of the calculation is given.

  8. Two-stage fungal pre-treatment for improved biogas production from sisal leaf decortication residues

    National Research Council Canada - National Science Library

    Muthangya, Mutemi; Mshandete, Anthony Manoni; Kivaisi, Amelia Kajumulo

    2009-01-01

    .... Pre-treatment of the residue prior to its anaerobic digestion (AD) was investigated using a two-stage pre-treatment approach with two fungal strains, CCHT-1 and Trichoderma reesei in succession in anaerobic batch bioreactors...

  9. Experiment research on two-stage dry-fed entrained flow coal gasifier

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The process flow and the main devices of a new two-stage dry-fed coal gasification pilot plant with a throughput of 36 t/d are introduced in this paper. For comparison with traditional one-stage gasifiers, the influence of the coal feed ratio between the two stages on the performance of the gasifier is studied in detail through a series of experiments. The results reveal that two-stage gasification decreases the temperature of the syngas at the outlet of the gasifier, simplifies the gasification process, and reduces the size of the syngas cooler. Moreover, the cold gas efficiency of the gasifier can be improved by using two-stage gasification. In our experiments, the efficiency was about 3%-6% higher than that of existing one-stage gasifiers.

  10. TWO-STAGE CHARACTER CLASSIFICATION : A COMBINED APPROACH OF CLUSTERING AND SUPPORT VECTOR CLASSIFIERS

    NARCIS (Netherlands)

    Vuurpijl, L.; Schomaker, L.

    2000-01-01

    This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

  11. A Two-Stage Waste Gasification Reactor for Mars In-Situ Resource Utilization Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design, build, and test a two-stage waste processing reactor for space applications. Our proposed technology converts waste from space missions into...

  12. Saddlepoint approximation based line sampling method for uncertainty propagation in fuzzy and random reliability analysis

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    For a structural system with random basic variables as well as fuzzy basic variables, uncertainty propagation from the two kinds of basic variables to the response of the structure is investigated. A novel algorithm for obtaining the membership function of fuzzy reliability is presented, using a saddlepoint approximation (SA) based line sampling method. In the presented method, the value domain of the fuzzy basic variables under a given membership level is first obtained according to their membership functions. In this value domain, bounds on the reliability of the structure response satisfying the safety requirement are obtained by employing the SA based line sampling method in the reduced space of the random variables. In this way the uncertainty of the basic variables is propagated to the safety measure of the structure, and the fuzzy membership function of the reliability is obtained. Compared to the direct Monte Carlo method for propagating the uncertainties of the fuzzy and random basic variables, the presented method considerably improves computational efficiency with acceptable precision. The presented method also has wider applicability than the transformation method, because it does not restrict the distribution of the variables or require an explicit expression of the performance function, and no approximation of the performance function is made during the computation. Additionally, the presented method can easily treat performance functions with cross terms of the fuzzy and random variables, which are not suitably approximated by the existing transformation methods. Several examples are provided to illustrate the advantages of the presented method.
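
    The saddlepoint layer is not reproduced here, but the line sampling core is compact: in standard normal space, each sample is projected onto the hyperplane orthogonal to an important direction α, the distance c* to the limit state is found by root-finding along α, and each line contributes the one-dimensional probability Φ(−c*). A minimal sketch on a linear limit state, for which the estimator is exact by construction; all parameters are illustrative assumptions:

    ```python
    import numpy as np
    from scipy.optimize import brentq
    from scipy.stats import norm

    rng = np.random.default_rng(9)

    # Limit state in standard normal space (hypothetical): failure when g < 0.
    beta = 3.0
    g = lambda z: beta * np.sqrt(2) - z[0] - z[1]      # exact Pf = Phi(-3)

    alpha = np.array([1.0, 1.0]) / np.sqrt(2)          # important direction

    def line_sampling(n_lines=200):
        pf = []
        for _ in range(n_lines):
            z = rng.standard_normal(2)
            z_perp = z - (z @ alpha) * alpha           # component orthogonal to alpha
            h = lambda c: g(z_perp + c * alpha)
            c_star = brentq(h, -10.0, 10.0)            # distance to the limit state
            pf.append(norm.cdf(-c_star))               # 1-D partial failure probability
        return np.mean(pf)

    # For this linear g, every line returns Phi(-beta), i.e. a zero-variance estimate.
    print(f"line sampling: {line_sampling():.2e}   exact: {norm.cdf(-beta):.2e}")
    ```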

  13. Experience sampling-based personalized feedback and positive affect: a randomized controlled trial in depressed patients.

    Directory of Open Access Journals (Sweden)

    Jessica A Hartmann

    Full Text Available Positive affect (PA) plays a crucial role in the development, course, and recovery of depression. Recently, we showed that a therapeutic application of the experience sampling method (ESM), consisting of feedback focusing on PA in daily life, was associated with a decrease in depressive symptoms. The present study investigated whether the experience of PA increased during the course of this intervention. Design: multicentre parallel randomized controlled trial; an electronic random sequence generator was used to allocate treatments. Setting: a university, two local mental health care institutions, and one local hospital. Participants: 102 pharmacologically treated outpatients with a DSM-IV diagnosis of major depressive disorder, randomized over three treatment arms. Interventions: six weeks of ESM self-monitoring combined with weekly PA-focused feedback sessions (experimental group); six weeks of ESM self-monitoring combined with six weekly sessions without feedback (pseudo-experimental group); or treatment as usual (control group). The interaction between treatment allocation and time in predicting positive and negative affect (NA) was investigated in multilevel regression models. 102 patients were randomized (mean age 48.0, SD 10.2), of whom 81 finished the entire study protocol. All 102 patients were included in the analyses. The experimental group did not show a significantly larger increase in momentary PA during or shortly after the intervention compared to the pseudo-experimental or control groups (χ2(2) = 0.33, p = .846). The pseudo-experimental group showed a larger decrease in NA compared to the control group (χ2(1) = 6.29, p = .012). PA-focused feedback did not significantly impact daily life PA during or shortly after the intervention. As the previously reported reduction in depressive symptoms associated with the feedback unveiled itself only after weeks, it is conceivable that the effects on daily life PA also evolve slowly and therefore were not captured by the experience sampling procedure.

  14. A Two-Stage Bayesian Network Method for 3D Human Pose Estimation from Monocular Image Sequences

    Directory of Open Access Journals (Sweden)

    Wang Yuan-Kai

    2010-01-01

    Full Text Available Abstract This paper proposes a novel human motion capture method that locates human body joint positions and reconstructs the human pose in 3D space from monocular images. We propose a two-stage framework, including 2D and 3D probabilistic graphical models, which can solve the occlusion problem in the estimation of human joint positions. The 2D and 3D models adopt a directed acyclic structure to avoid error propagation during inference. Image observations corresponding to shape and appearance features of humans are considered as evidence for the inference of 2D joint positions in the 2D model. Both the 2D and 3D models utilize the Expectation Maximization algorithm to learn prior distributions of the models. An annealed Gibbs sampling method is proposed for the two-stage method to infer the maximum a posteriori distributions of joint positions. The annealing process can efficiently explore the modes of the distributions and find solutions in high-dimensional space. Experiments are conducted on the HumanEva dataset with image sequences of walking motion, which pose the challenges of occlusion and loss of image observations. Experimental results show that the proposed two-stage approach can efficiently estimate more accurate human poses.

  15. A new multi-motor drive system based on two-stage direct power converter

    OpenAIRE

    Kumar, Dinesh

    2011-01-01

    The two-stage AC to AC direct power converter is an alternative matrix converter topology, which offers the benefits of sinusoidal input currents and output voltages, bidirectional power flow and controllable input power factor. The absence of any energy storage devices, such as electrolytic capacitors, has increased the potential lifetime of the converter. In this research work, a new multi-motor drive system based on a two-stage direct power converter has been proposed, with two motors c...

  16. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Full Text Available Objective: The purpose of this study was to apply a two-stage screening method to the large-scale intelligence screening of military conscripts. Methods: We recruited 99 conscripted soldiers whose educational level was senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R). Results: Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off points of the CLR. The optimum single cut-off point was 66; the two cut-off points were 49 and 66. Compared with two-stage positive screening, two-stage window screening increased the area under the curve and the positive predictive value, and decreased the cost by 59%. Conclusion: Two-stage window screening is more accurate and economical than two-stage positive screening. Our results provide an example of the use of two-stage screening and of the possibility of the WCST replacing the WAIS-R in future large-scale screenings for ID. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised

  17. Two-Stage Conversion of Land and Marine Biomass for Biogas and Biohydrogen Production

    OpenAIRE

    Nkemka, Valentine

    2012-01-01

    The replacement of fossil fuels by renewable fuels such as biogas and biohydrogen will require efficient and economically competitive process technologies together with new kinds of biomass. A two-stage system for biogas production has several advantages over the widely used one-stage continuous stirred tank reactor (CSTR). However, it has not yet been widely implemented on a large scale. Biohydrogen can be produced in the anaerobic two-stage system. It is considered to be a useful fuel for t...

  18. On the creation of representative samples of random quasi-orders

    Directory of Open Access Journals (Sweden)

    Martin eSchrepp

    2015-11-01

    Full Text Available Dependencies between educational test items can be represented as quasi-orders on the item set of a knowledge domain and used for efficient adaptive assessment of knowledge. One approach to uncovering such dependencies is exploratory algorithms of Item Tree Analysis (ITA), of which several variants are available. The basic tool for comparing the quality of such algorithms is large-scale simulation studies, which crucially depend on a large collection of quasi-orders. A serious problem is that all known ITA algorithms are sensitive to the structure of the underlying quasi-order. Thus, any simulation study that compares the algorithms must be based upon samples of quasi-orders that are representative, meaning each quasi-order is included in a sample with the same probability. Up to now, no method for creating representative quasi-orders on larger item sets has been known. Non-optimal algorithms for quasi-order generation were used in previous studies, which caused misinterpretations and erroneous conclusions. In this paper, we present a method for creating representative random samples of quasi-orders. The basic idea is to consider random extensions of quasi-orders from lower to higher dimension and to discard extensions that do not satisfy the transitivity property.
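
    For very small item sets, representativeness can be obtained directly by rejection sampling: draw reflexive relations uniformly at random and keep only the transitive ones. This is exactly uniform but scales terribly, which is the problem the authors' extension-based construction addresses; the sketch below shows only this naive baseline, not the paper's method.

    ```python
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(10)

    def is_transitive(R):
        """A quasi-order here is a reflexive, transitive binary relation."""
        n = len(R)
        return all(not (R[i][j] and R[j][k]) or R[i][k]
                   for i, j, k in product(range(n), repeat=3))

    def random_quasi_order(n):
        """Rejection sampling: uniform over reflexive relations, keep the
        transitive ones. Exact (representative) but only feasible for small n."""
        while True:
            R = rng.integers(0, 2, (n, n)).astype(bool)
            np.fill_diagonal(R, True)      # enforce reflexivity
            if is_transitive(R):
                return R

    # Each of the 355 quasi-orders on 4 items is returned with equal probability.
    sample = [random_quasi_order(4) for _ in range(3)]
    print(sample[0].astype(int))
    ```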

  19. Estimating the Size of a Large Network and its Communities from a Random Sample

    CERN Document Server

    Chen, Lin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...
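
    The setting can be illustrated with a deliberately naive estimator (not PULSE itself): if W is a uniform sample of n out of N vertices, then for each sampled vertex the expected fraction of its neighbours that fall inside W is (n-1)/(N-1), so the observed in-sample degrees together with the total degrees already identify N. A minimal sketch, with an Erdős-Rényi graph standing in for the SBM:

        import random

        def naive_size_estimate(sample_degrees):
            """sample_degrees: list of (in_sample_degree, total_degree) pairs.
            Uses E[in/total] = (n-1)/(N-1) under uniform vertex sampling."""
            n = len(sample_degrees)
            ratios = [i / t for i, t in sample_degrees if t > 0]
            return 1 + (n - 1) / (sum(ratios) / len(ratios))

        # Demonstration on a small Erdos-Renyi graph with N = 500 vertices.
        random.seed(1)
        N, p = 500, 0.02
        adj = {v: set() for v in range(N)}
        for u in range(N):
            for v in range(u + 1, N):
                if random.random() < p:
                    adj[u].add(v)
                    adj[v].add(u)

        W = set(random.sample(range(N), 100))
        observed = [(len(adj[v] & W), len(adj[v])) for v in W]
        print(naive_size_estimate(observed))   # close to 500

    PULSE additionally exploits the block memberships to estimate each community's size and, per the abstract, does so with provable accuracy; the sketch above ignores the block structure entirely.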

  20. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.

  1. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie

    2009-08-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert the recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluents consisted primarily of acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs) each pre-acclimated to a single substrate (single substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with the synthetic effluent. Energy efficiencies based on electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for the lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluents. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC process.

  2. The impact of alcohol marketing on youth drinking behaviour: a two-stage cohort study.

    Science.gov (United States)

    Gordon, Ross; MacKintosh, Anne Marie; Moodie, Crawford

    2010-01-01

    To examine whether awareness of, and involvement with, alcohol marketing at age 13 is predictive of initiation of drinking, frequency of drinking and units of alcohol consumed at age 15. A two-stage cohort study, involving a questionnaire survey combining interview and self-completion, was administered in respondents' homes. Respondents were drawn from secondary schools in three adjoining local authority areas in the West of Scotland, UK. From a baseline sample of 920 teenagers (aged 12-14, mean age 13) in 2006, a cohort of 552 was followed up 2 years later (aged 14-16, mean age 15). Data were gathered on multiple forms of alcohol marketing and measures of drinking initiation, frequency and consumption. At follow-up, logistic regression demonstrated that, after controlling for confounding variables, involvement with alcohol marketing at baseline was predictive of both uptake of drinking and increased frequency of drinking. Awareness of marketing at baseline was also associated with an increased frequency of drinking at follow-up. Our findings demonstrate an association between involvement with, and awareness of, alcohol marketing and drinking uptake or increased drinking frequency, and we consider whether the current regulatory environment affords youth sufficient protection from alcohol marketing.

  3. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify success factors for growing SMBs. An evolutionary phase approach has been used. The study also aims to find out if there are common and different denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data was collected and analyzed. Amongst the conclusions that may be drawn from the study are that it is not easy to find a common definition of success; that it is important to stratify SMBs when studying them; that an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics; and that each industry has its own set of success variables. The study has identified three success variables for older firms that reflect contemporary strategic thinking, such as crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  4. An evaluation of a two-stage spiral processing ultrafine bituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Matthew D. Benusa; Mark S. Klima [Penn State University, University Park, PA (United States). Energy and Mineral Engineering

    2008-10-15

    Testing was conducted to evaluate the performance of a multistage Multotec SX7 spiral concentrator treating ultrafine bituminous coal. This spiral mimics a two-stage separation in that the refuse is removed after four turns, and the clean coal and middlings are repulped (without water addition) and then separated in the final three turns. Feed samples were collected from the spiral circuit of a coal cleaning plant located in southwestern Pennsylvania. The samples consisted of undeslimed cyclone feed (nominal -0.15 mm) and deslimed spiral feed (nominal 0.15 x 0.053 mm). Testing was carried out to investigate the effects of slurry flow rate and solids concentration on spiral performance. Detailed size and ash analyses were performed on the spiral feed and product samples. For selected tests, float-sink and sulfur analyses were performed. In nearly all cases, ash reduction occurred down to approximately 0.025 mm, with some sulfur reduction occurring even in the -0.025 mm interval. The separation of the +0.025 mm material was not significantly affected by the presence of the -0.025 mm material when treating the undeslimed feed. The -0.025 mm material split in approximately the same ratio as the slurry, and the majority of the water traveled to the clean coal stream. This split ultimately increased the overall clean coal ash value. A statistical analysis determined that both flow rate and solids concentration affected the clean coal ash value and yield, though the flow rate had a greater effect on the separation. 23 refs.

  5. How to get an exact sample from a generic Markov chain and sample a random spanning tree from a directed graph, both within the cover time

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, D.B.; Propp, J.G.

    1996-12-31

    This paper shows how to obtain unbiased samples from an unknown Markov chain by observing it for O(T_c) steps, where T_c is the cover time. This algorithm improves on several previous algorithms, and there is a matching lower bound. Using the techniques from the sampling algorithm, we also show how to sample random directed spanning trees from a weighted directed graph, with arcs directed to a root, and probability proportional to the product of the edge weights. This tree sampling algorithm runs within 18 cover times of the associated random walk, and is more generally applicable than the algorithm of Broder and Aldous.
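
    For the special case of an undirected, unweighted graph, the loop-erased random-walk construction that Propp and Wilson are known for samples a uniformly random spanning tree with arcs directed to the root; the dictionary-based loop erasure below is a standard implementation trick. This sketch is offered as an illustration of random-walk tree sampling, not as the exact algorithm of the paper.

        import random

        def random_spanning_tree(adj, root):
            """Sample a uniform spanning tree of the undirected graph
            `adj` ({vertex: [neighbours]}), directed toward `root`,
            via loop-erased random walks (Wilson's algorithm)."""
            in_tree = {root}
            parent = {}
            for start in adj:
                # Walk from `start` until the current tree is hit.
                path = [start]
                while path[-1] not in in_tree:
                    path.append(random.choice(adj[path[-1]]))
                # Loop-erase: later visits overwrite earlier ones, so
                # `step` keeps only the last exit taken from each vertex.
                step = {}
                for u, v in zip(path, path[1:]):
                    step[u] = v
                u = start
                while u not in in_tree:
                    parent[u] = step[u]
                    in_tree.add(u)
                    u = step[u]
            return parent   # parent[v] = next arc on v's path to the root

        # Example: one of the four spanning trees of a 4-cycle, rooted at 0.
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
        print(random_spanning_tree(adj, 0))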

  6. A profile of US-Mexico border mobility among a stratified random sample of Hispanics living in the El Paso-Juarez area.

    Science.gov (United States)

    Lapeyrouse, L M; Morera, O; Heyman, J M C; Amaya, M A; Pingitore, N E; Balcazar, H

    2012-04-01

    Examination of border-specific characteristics such as trans-border mobility and trans-border health services illuminates the heterogeneity of border Hispanics and may provide greater insight toward understanding differential health behaviors and status among these populations. In this study, we create a descriptive profile of the concept of trans-border mobility by exploring the relationship between mobility status and a series of demographic, economic and socio-cultural characteristics among mobile and non-mobile Hispanics living in the El Paso-Juarez border region. Using a two-stage stratified random sampling design, bilingual interviewers collected survey data from border residents (n = 1,002). Findings show that significant economic, cultural, and behavioral differences exist between mobile and non-mobile respondents. While non-mobile respondents were found to have higher socioeconomic status than their mobile counterparts, mobility across the border was found to offer less acculturated and poorer Hispanics access to alternative sources of health care and other services.
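
    A two-stage stratified design of the kind described draws clusters (e.g. census blocks) within each stratum first and then draws respondents within the selected clusters. The sketch below is generic, with made-up structure and sample sizes; it is not the actual sampling plan of the El Paso-Juarez survey.

        import random

        def two_stage_stratified_sample(strata, clusters_per_stratum, units_per_cluster):
            """strata: {stratum: {cluster_id: [unit, ...]}}.
            Stage 1 samples clusters within each stratum; stage 2 samples
            units within each chosen cluster (both without replacement)."""
            chosen = []
            for stratum, clusters in strata.items():
                stage1 = random.sample(list(clusters),
                                       min(clusters_per_stratum, len(clusters)))
                for cid in stage1:
                    units = clusters[cid]
                    stage2 = random.sample(units, min(units_per_cluster, len(units)))
                    chosen.extend((stratum, cid, u) for u in stage2)
            return chosen

        # Toy frame: two strata, three blocks each, households as integers.
        frame = {"low_income":  {"b1": list(range(40)), "b2": list(range(35)), "b3": list(range(50))},
                 "high_income": {"b4": list(range(30)), "b5": list(range(45)), "b6": list(range(25))}}
        print(len(two_stage_stratified_sample(frame, 2, 10)))   # 2 strata x 2 blocks x 10 units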

  7. A conditional random fields method for RNA sequence-structure relationship modeling and conformation sampling.

    Science.gov (United States)

    Wang, Zhiyong; Xu, Jinbo

    2011-07-01

    Accurate tertiary structures are very important for the functional study of non-coding RNA molecules. However, predicting RNA tertiary structures is extremely challenging, because of the large conformation space to be explored and the lack of an accurate scoring function differentiating the native structure from decoys. Fragment-based conformation sampling methods (e.g. FARNA) have the shortcoming that the limited size of a fragment library makes it infeasible to represent all possible conformations well. A recent dynamic Bayesian network method, BARNACLE, overcomes the issue of fragment assembly. In addition, neither of these methods makes use of sequence information in sampling conformations. Here, we present a new probabilistic graphical model, conditional random fields (CRFs), to model the RNA sequence-structure relationship, which enables us to accurately estimate the probability of an RNA conformation from sequence. Coupled with a novel tree-guided sampling scheme, our CRF model is then applied to RNA conformation sampling. Experimental results show that our CRF method can model the RNA sequence-structure relationship well and that sequence information is important for conformation sampling. Our method, named TreeFolder, generates a much higher percentage of native-like decoys than FARNA and BARNACLE, although we use the same simple energy function as BARNACLE. Supplementary data are available at Bioinformatics online.

  8. Improved river flow and random sample consensus for curve lane detection

    Directory of Open Access Journals (Sweden)

    Huachun Tan

    2015-07-01

    Full Text Available Accurate and robust lane detection, especially curve lane detection, is a premise of lane departure warning systems and forward collision warning systems. In this article, an algorithm based on improved river flow and random sample consensus is proposed to detect curve lanes under challenging conditions, including dashed lane markings and vehicle occlusion. The curve lanes are modeled as a hyperbola pair. To determine the coefficient of curvature, an improved river flow method is presented to search for feature points in the far vision field, guided by the straight lines detected in the near vision field or by the curved lines from the last frame, which can connect dashed or obscured lane markings. As a result, the method is robust to dashed lane markings and vehicle occlusion. Then, random sample consensus is utilized to calculate the curvature, which eliminates the noisy feature points produced by the improved river flow. The experimental results show that the proposed method can accurately detect lanes under challenging conditions.
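
    The random-sample-consensus step is the easiest part to make concrete: repeatedly fit a candidate curve to a minimal random subset of the feature points and keep the candidate that the most points agree with. The sketch below fits a quadratic (standing in for the paper's hyperbola-pair lane model) and illustrates plain RANSAC, not the full detection pipeline.

        import numpy as np

        def ransac_quadratic(points, n_iter=500, tol=2.0):
            """RANSAC fit of y = a*x**2 + b*x + c to noisy feature points."""
            pts = np.asarray(points, dtype=float)
            best_inliers = None
            for _ in range(n_iter):
                idx = np.random.choice(len(pts), 3, replace=False)
                coeffs = np.polyfit(pts[idx, 0], pts[idx, 1], 2)   # exact fit to 3 points
                resid = np.abs(np.polyval(coeffs, pts[:, 0]) - pts[:, 1])
                inliers = pts[resid < tol]
                if best_inliers is None or len(inliers) > len(best_inliers):
                    best_inliers = inliers
            # Refit on all inliers of the best candidate for the final model.
            return np.polyfit(best_inliers[:, 0], best_inliers[:, 1], 2)

        # Demo: points on y = 0.01x^2 + 0.2x + 5 plus noise and a few outliers.
        rng = np.random.default_rng(0)
        xs = rng.uniform(0, 100, 80)
        ys = 0.01 * xs**2 + 0.2 * xs + 5 + rng.normal(0, 0.5, 80)
        pts = list(zip(xs, ys)) + [(10.0, 90.0), (50.0, 0.0), (80.0, 150.0)]
        print(ransac_quadratic(pts).round(3))   # close to [0.01, 0.2, 5]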

  9. Determination of Initial Conditions for the Safety Analysis by Random Sampling of Operating Parameters

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hae-Yong; Park, Moon-Ghu [Sejong University, Seoul (Korea, Republic of)

    2015-05-15

    In most existing evaluation methodologies, which follow a conservative approach, the most conservative initial conditions are searched for each transient scenario through extensive assessment over wide operating windows or the limiting conditions for operation (LCO) allowed by the operating guidelines. In this procedure, a user effect can be introduced, and considerable time and human resources are consumed. In the present study, we investigated a more effective statistical method for selecting the most conservative initial condition: random sampling of the operating parameters affecting the initial conditions. A method for the determination of initial conditions based on random sampling of plant design parameters is proposed. This method is expected to be applicable to the selection of the most conservative initial plant conditions in safety analyses using a conservative evaluation methodology. In the method, the initial conditions of reactor coolant flow rate, pressurizer level, pressurizer pressure, and SG level are adjusted by controlling the pump rated flow and the setpoints of the PLCS, PPCS, and FWCS, respectively. The proposed technique is expected to help eliminate the human factors introduced in the conventional safety analysis procedure and to reduce the human resources invested in the safety evaluation of nuclear power plants.
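
    The sampling step itself is straightforward to sketch. Below, each operating parameter is drawn uniformly from its allowed operating window and the case maximizing some safety figure of merit is retained; the parameter names, windows, uniform distributions and the dummy figure of merit are all illustrative assumptions, not values from the study.

        import random

        # Hypothetical operating windows (LCO bands) for the sampled parameters.
        WINDOWS = {
            "rc_flow":       (95.0, 105.0),   # reactor coolant flow, % of rated
            "przr_level":    (50.0, 60.0),    # pressurizer level, % span (PLCS)
            "przr_pressure": (15.2, 15.8),    # pressurizer pressure, MPa (PPCS)
            "sg_level":      (48.0, 52.0),    # steam generator level, % (FWCS)
        }

        def sample_initial_conditions(n):
            """Draw n random initial-condition sets from the operating windows."""
            return [{k: random.uniform(lo, hi) for k, (lo, hi) in WINDOWS.items()}
                    for _ in range(n)]

        def most_conservative(cases, figure_of_merit):
            """Keep the case maximizing the figure of merit, e.g. the peak
            cladding temperature returned by a transient simulation."""
            return max(cases, key=figure_of_merit)

        cases = sample_initial_conditions(200)
        # Dummy figure of merit standing in for a full transient calculation:
        worst = most_conservative(cases, lambda c: c["przr_pressure"] - 0.01 * c["rc_flow"])
        print(worst)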

  10. A descriptive analysis of a representative sample of pediatric randomized controlled trials published in 2007

    Directory of Open Access Journals (Sweden)

    Thomson Denise

    2010-12-01

    Full Text Available Abstract Background Randomized controlled trials (RCTs) are the gold standard for trials assessing the effects of therapeutic interventions; therefore it is important to understand how they are conducted. Our objectives were to provide an overview of a representative sample of pediatric RCTs published in 2007 and assess the validity of their results. Methods We searched the Cochrane Central Register of Controlled Trials using a pediatric filter and randomly selected 300 RCTs published in 2007. We extracted data on trial characteristics; outcomes; methodological quality; reporting; and registration and protocol characteristics. Trial registration and protocol availability were determined for each study based on the publication, an Internet search and an author survey. Results Most studies (83%) were efficacy trials, 40% evaluated drugs, and 30% were placebo-controlled. Primary outcomes were specified in 41%; 43% reported on adverse events. At least one statistically significant outcome was reported in 77% of trials; 63% favored the treatment group. Trial registration was declared in 12% of publications and 23% were found through an Internet search. Risk of bias (ROB) was high in 59% of trials, unclear in 33%, and low in 8%. Registered trials were more likely to have low ROB than non-registered trials (16% vs. 5%; p = 0.008). Effect sizes tended to be larger for trials at high vs. low ROB (0.28, 95% CI 0.21-0.35 vs. 0.16, 95% CI 0.07-0.25). Among survey respondents (50% response rate), the most common reason for trial registration was a publication requirement and, for non-registration, a lack of familiarity with the process. Conclusions More than half of this random sample of pediatric RCTs published in 2007 was at high ROB and three quarters of trials were not registered. There is an urgent need to improve the design, conduct, and reporting of child health research.

  11. Random sampling causes the low reproducibility of rare eukaryotic OTUs in Illumina COI metabarcoding

    Science.gov (United States)

    Knowlton, Nancy

    2017-01-01

    DNA metabarcoding, the PCR-based profiling of natural communities, is becoming the method of choice for biodiversity monitoring because it circumvents some of the limitations inherent to traditional ecological surveys. However, potential sources of bias that can affect the reproducibility of this method remain to be quantified. The interpretation of differences in patterns of sequence abundance and the ecological relevance of rare sequences remain particularly uncertain. Here we used one artificial mock community to explore the significance of abundance patterns and disentangle the effects of two potential biases on data reproducibility: indexed PCR primers and random sampling during Illumina MiSeq sequencing. We amplified a short fragment of the mitochondrial Cytochrome c Oxidase Subunit I (COI) for a single mock sample containing equimolar amounts of total genomic DNA from 34 marine invertebrates belonging to six phyla. We used seven indexed broad-range primers and sequenced the resulting library on two consecutive Illumina MiSeq runs. The total number of Operational Taxonomic Units (OTUs) was ∼4 times higher than expected based on the composition of the mock sample. Moreover, the total number of reads for the 34 components of the mock sample differed by up to three orders of magnitude. However, 79 out of 86 of the unexpected OTUs were represented by very few reads. Our results further reinforce the need for technical replicates (parallel PCR and sequencing from the same sample) in metabarcoding experimental designs. Data reproducibility should be determined empirically as it will depend upon the sequencing depth, the type of sample, the sequence analysis pipeline, and the number of replicates. Moreover, estimating relative biomasses or abundances based on read counts remains elusive at the OTU level.

  12. Method of oxygen-enriched two-stage underground coal gasification

    Institute of Scientific and Technical Information of China (English)

    Liu Hongtao; Chen Feng; Pan Xia; Yao Kai; Liu Shuqin

    2011-01-01

    Two-stage underground coal gasification was studied to improve the caloric value of the syngas and to extend gas production times. A model test using the oxygen-enriched two-stage coal gasification method was carried out. The composition of the gas produced, the time ratio of the two stages, and the role of the temperature field were analysed. The results show that oxygen-enriched two-stage gasification shortens the time of the first stage and prolongs the time of the second stage. Feed oxygen concentrations of 30%, 35%, 40%, 45%, 60%, or 80% gave time ratios (first stage to second stage) of 1:0.12, 1:0.21, 1:0.51, 1:0.64, 1:0.90, and 1:4.0 respectively. Cooling rates of the temperature field after steam injection decreased with time from about 19.1-27.4 °C/min to 2.3-6.8 °C/min, but this rate increased with increasing oxygen concentrations in the first stage. The caloric value of the syngas improves with increased oxygen concentration in the first stage. Injection of 80% oxygen-enriched air gave gas with the highest caloric value and also gave the longest production time. The caloric value of the gas obtained from the oxygen-enriched two-stage gasification method lies in the range from 5.31 MJ/Nm3 to 10.54 MJ/Nm3.

  13. 13 K thermally coupled two-stage Stirling-type pulse tube refrigerator

    Institute of Scientific and Technical Information of China (English)

    TANG Ke; CHEN Guobang; THUMMES Günter

    2005-01-01

    Stirling-type pulse tube refrigerators have attracted academic and commercial interest in recent years due to their more compact configuration and higher efficiency than those of G-M type pulse tube refrigerators. In order to achieve a no-load cooling temperature below 20 K, a thermally coupled two-stage Stirling-type pulse tube refrigerator has been built. The thermally coupled arrangement was expected to minimize the interference between the two stages and to simplify the adjustment and optimization of the phase shifters. A no-load cooling temperature of 14.97 K has been realized with the two-stage cooler driven by one linear compressor of 200 W electric input. When the two stages are driven by two compressors respectively, with total electric input of 400 W, the prototype has attained a no-load cooling temperature of 12.96 K, which is the lowest temperature ever reported with two-stage Stirling-type pulse tube refrigerators.

  14. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A laboratory-made resin model of a first molar was prepared by a standard method for full crowns, with a prepared finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent t-test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  15. Experience Sampling-Based Personalized Feedback and Positive Affect: A Randomized Controlled Trial in Depressed Patients

    Science.gov (United States)

    Hartmann, Jessica A.; Wichers, Marieke; Menne-Lothmann, Claudia; Kramer, Ingrid; Viechtbauer, Wolfgang; Peeters, Frenk; Schruers, Koen R. J.; van Bemmel, Alex L.; Myin-Germeys, Inez; Delespaul, Philippe; van Os, Jim; Simons, Claudia J. P.

    2015-01-01

    Objectives Positive affect (PA) plays a crucial role in the development, course, and recovery of depression. Recently, we showed that a therapeutic application of the experience sampling method (ESM), consisting of feedback focusing on PA in daily life, was associated with a decrease in depressive symptoms. The present study investigated whether the experience of PA increased during the course of this intervention. Design Multicentre parallel randomized controlled trial. An electronic random sequence generator was used to allocate treatments. Settings University, two local mental health care institutions, one local hospital. Participants 102 pharmacologically treated outpatients with a DSM-IV diagnosis of major depressive disorder, randomized over three treatment arms. Intervention Six weeks of ESM self-monitoring combined with weekly PA-focused feedback sessions (experimental group); six weeks of ESM self-monitoring combined with six weekly sessions without feedback (pseudo-experimental group); or treatment as usual (control group). Main outcome The interaction between treatment allocation and time in predicting positive and negative affect (NA) was investigated in multilevel regression models. Results 102 patients were randomized (mean age 48.0, SD 10.2), of whom 81 finished the entire study protocol. All 102 patients were included in the analyses. The experimental group did not show a significantly larger increase in momentary PA during or shortly after the intervention compared to the pseudo-experimental or control groups (χ2(2) = 0.33, p = .846). The pseudo-experimental group showed a larger decrease in NA compared to the control group (χ2(1) = 6.29, p = .012). Conclusion PA-focused feedback did not significantly impact daily life PA during or shortly after the intervention. As the previously reported reduction in depressive symptoms associated with the feedback unveiled itself only after weeks, it is conceivable that the effects on daily life PA also evolve

  16. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

    This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique two-stage driver design which allows a more compact free piston compressor with a lower overall cost. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.

  17. Analysis of performance and optimum configuration of two-stage semiconductor thermoelectric module

    Institute of Scientific and Technical Information of China (English)

    Li Kai-Zhen; Liang Rui-Sheng; Wei Zheng-Jun

    2008-01-01

    In this paper, a theoretical analysis and simulation calculations were conducted for a basic two-stage semiconductor thermoelectric module, which contains one thermocouple in the second stage and several thermocouples in the first stage. The study focused on the configuration of the two-stage semiconductor thermoelectric cooler, especially investigating the influences of some parameters, such as the current I1 of the first stage, the area A1 of every thermocouple and the number n of thermocouples in the first stage, on the cooling performance of the module. The results of the analysis indicate that changing the current I1 of the first stage, the area A1 of the thermocouples and the number n of thermocouples in the first stage can improve the cooling performance of the module. These results can be used to optimize the configuration of the two-stage semiconductor thermoelectric module and provide guidance for the design and application of thermoelectric coolers.

  18. Effects of earthworm casts and zeolite on the two-stage composting of green waste.

    Science.gov (United States)

    Zhang, Lu; Sun, Xiangyang

    2015-05-01

    Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90-270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  19. Two-Stage Revision Anterior Cruciate Ligament Reconstruction: Bone Grafting Technique Using an Allograft Bone Matrix.

    Science.gov (United States)

    Chahla, Jorge; Dean, Chase S; Cram, Tyler R; Civitarese, David; O'Brien, Luke; Moulton, Samuel G; LaPrade, Robert F

    2016-02-01

    Outcomes of primary anterior cruciate ligament (ACL) reconstruction have been reported to be far superior to those of revision reconstruction. However, as the incidence of ACL reconstruction is rapidly increasing, so is the number of failures. The subsequent need for revision ACL reconstruction is estimated to occur in up to 13,000 patients each year in the United States. Revision ACL reconstruction can be performed in one or two stages. A two-stage approach is recommended in cases of improper placement of the original tunnels or in cases of unacceptable tunnel enlargement. The aim of this study was to describe the technique for allograft ACL tunnel bone grafting in patients requiring a two-stage revision ACL reconstruction.

  20. [Random sampling according to section 17 c KHG--report on the experience of maximal care hospitals in Hesse].

    Science.gov (United States)

    van Essen, J; Hübner, M; von Mittelstaedt, G

    2007-03-01

    Hospital billing for in-patient treatment in Germany has been converted to German diagnosis-related groups (G-DRG) and is subject to review, except in psychiatry, where per-diem rates are still in use. Currently, thousands of bills are sent to the Medical Service for scrutiny. In addition, the law on hospital financing (Krankenhausfinanzierungsgesetz, para. 17 c) provides for systematic checks on a random sample of bills from a given hospital. The Medical Service of the Social Security Health Insurance reports on the experience in the State of Hesse. Present regulations exclude from the random sample those bills that have already been presented for a check on a case-by-case basis. Excluding these cases introduces an avoidable bias into the random sample; the present rule undermines valid conclusions from random sampling and should be abolished.

  1. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. Firstly, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating data from a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The result shows that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.
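
    The bootstrap half of the method is simple to sketch: resample the few available measurements with replacement and read the estimated interval off the quantiles of the resampled statistic. The paper's GBM additionally couples this with a grey GM(1,1) forecasting model to track the frequency-varying behaviour; only the plain bootstrap interval is shown here, and the numbers are made up.

        import numpy as np

        def bootstrap_interval(samples, n_boot=5000, conf=0.90, seed=0):
            """Percentile bootstrap interval for the mean of a small sample."""
            rng = np.random.default_rng(seed)
            data = np.asarray(samples, dtype=float)
            resampled_means = rng.choice(data, (n_boot, len(data)), replace=True).mean(axis=1)
            return tuple(np.quantile(resampled_means, [(1 - conf) / 2, (1 + conf) / 2]))

        # e.g. five vibration-level measurements from a single flight test
        print(bootstrap_interval([2.1, 2.4, 1.9, 2.6, 2.2]))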

  2. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    Science.gov (United States)

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data from FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the estimated transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
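
    The estimation problem can be illustrated with a crude stand-in for the paper's MMSE/ML estimators: treat paired population-fraction measurements as approximately satisfying x_{t+1} = x_t P, solve for P by least squares, and project back onto valid transition matrices. Everything below, including the noise model, is an illustrative assumption.

        import numpy as np

        def estimate_transition_matrix(X, Y):
            """Least-squares estimate of a Markov transition matrix P from
            paired fraction measurements, using x_next ~ x @ P."""
            P, *_ = np.linalg.lstsq(X, Y, rcond=None)
            P = np.clip(P, 0.0, None)               # enforce non-negativity
            P /= P.sum(axis=1, keepdims=True)       # rows must sum to one
            return P

        # Demonstration: recover a known 2-state chain from noisy measurements.
        rng = np.random.default_rng(0)
        P_true = np.array([[0.9, 0.1],
                           [0.2, 0.8]])
        X = rng.dirichlet([1.0, 1.0], size=200)              # observed fractions
        Y = X @ P_true + rng.normal(0.0, 0.01, (200, 2))     # noisy next step
        print(estimate_transition_matrix(X, Y).round(2))     # close to P_true

    Treating the observed fractions as random variables, as the authors argue, matters precisely because a plug-in estimate like the one above degrades quickly as measurement noise or the sampling error from small cell populations grows.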

  3. A two-stage subsurface vertical flow constructed wetland for high-rate nitrogen removal.

    Science.gov (United States)

    Langergraber, Guenter; Leroch, Klaus; Pressl, Alexander; Rohrhofer, Roland; Haberl, Raimund

    2008-01-01

    By using a two-stage constructed wetland (CW) system operated with an organic load of 40 gCOD/(m2.d) (2 m2 per person equivalent), average nitrogen removal efficiencies of about 50% and average nitrogen elimination rates of 980 gN/(m2.yr) could be achieved. Two vertical flow beds with intermittent loading have been operated in series. The first stage uses sand with a grain size of 2-3.2 mm for the main layer and has a drainage layer that is impounded; the second stage uses sand with a grain size of 0.06-4 mm and a drainage layer with free drainage. The high nitrogen removal can be achieved without recirculation, thus it is possible to operate the two-stage CW system without energy input. The paper shows performance data for the two-stage CW system regarding removal of organic matter and nitrogen for the two-year operating period of the system. Additionally, its efficiency is compared with the efficiency of a single-stage vertical flow CW system designed and operated according to the Austrian design standards with 4 m2 per person equivalent. The comparison shows that a higher effluent quality could be reached with the two-stage system although the two-stage CW system is operated with double the organic load or half the specific surface area requirement, respectively. Another advantage is that the specific investment costs of the two-stage CW system amount to 1,200 EUR per person (without mechanical pre-treatment) and are only about 60% of the specific investment costs of the single-stage CW system.

  4. Calculating the probability of random sampling for continuous variables in submitted or published randomised controlled trials.

    Science.gov (United States)

    Carlisle, J B; Dexter, F; Pandit, J J; Shafer, S L; Yentis, S M

    2015-07-01

    In a previous paper, one of the authors (JBC) used a chi-squared method to analyse the means (SD) of baseline variables, such as height or weight, from randomised controlled trials by Fujii et al., concluding that the probabilities that the reported distributions arose by chance were infinitesimally small. Subsequent testing of that chi-squared method, using simulation, suggested that the method was incorrect. This paper corrects the chi-squared method and tests its performance and the performance of Monte Carlo simulations and ANOVA to analyse the probability of random sampling. The corrected chi-squared method and the ANOVA method became inaccurate when applied to means that were reported imprecisely. Monte Carlo simulations confirmed that baseline data from 158 randomised controlled trials by Fujii et al. were different to those from 329 trials published by other authors and that the distribution of Fujii et al.'s data was different to the expected distribution, with both p values being vanishingly small. Such methods can be used to detect non-random (i.e. unreliable) data in randomised controlled trials submitted to journals. © 2015 The Association of Anaesthetists of Great Britain and Ireland.
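
    The Monte Carlo idea is easy to sketch for a single trial: under genuine randomization, both arms are samples from one population, so the between-arm difference in a baseline mean has a known simulation distribution. The sketch below illustrates that idea only; it is not Carlisle's exact procedure (which also aggregates such probabilities across many trials, flagging both implausibly different and implausibly similar baselines), and the example numbers are invented.

        import numpy as np

        def mc_baseline_pvalue(mean_a, mean_b, sd, n_a, n_b, n_sim=100_000, seed=0):
            """P(|difference in arm means| >= observed) under random sampling
            from one population with the reported SD."""
            rng = np.random.default_rng(seed)
            diffs = (rng.normal(0.0, sd / np.sqrt(n_a), n_sim)
                     - rng.normal(0.0, sd / np.sqrt(n_b), n_sim))
            return float(np.mean(np.abs(diffs) >= abs(mean_a - mean_b)))

        # e.g. reported baseline weights of 62.1 vs 62.0 kg (SD 8, n = 40 per arm)
        print(mc_baseline_pvalue(62.1, 62.0, 8.0, 40, 40))

    A single such p-value is uninformative on its own; the signal comes from the joint distribution over dozens of trials, where values piling up near 0 or near 1 both indicate non-random data.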

  5. Effectiveness of hand hygiene education among a random sample of women from the community.

    Science.gov (United States)

    Ubheeram, J; Biranjia-Hurdoyal, S D

    2017-03-01

    The effectiveness of hand hygiene education was investigated by studying hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. A questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. A better hand hygiene awareness score was significantly associated with age, scarce bacterial growth and absence of potential pathogens (p < 0.05). Most participants washed with plain soap as compared to antiseptic soaps (69.7% vs 30.3%, p = 0.000; OR = 4.11; 95% CI: 1.67-10.12). The level of hand hygiene awareness among the participants was satisfactory, but not the compliance with hand-washing practice, especially among the elderly.

  6. Growth by random walker sampling and scaling of the dielectric breakdown model

    Science.gov (United States)

    Somfai, Ellák; Goold, Nicholas R.; Ball, Robin C.; Devita, Jason P.; Sander, Leonard M.

    2004-11-01

    Random walkers absorbing on a boundary sample the harmonic measure linearly and independently: we discuss how the recurrence times between impacts enable nonlinear moments of the measure to be estimated. From this we derive a technique to simulate dielectric breakdown model growth, which is governed nonlinearly by the harmonic measure. For diffusion-limited aggregation, recurrence times are shown to be accurate and effective in probing the multifractal growth measure in its active region. For the dielectric breakdown model our technique grows large clusters efficiently and we are led to significantly revise earlier exponent estimates. Previous results by two conformal mapping techniques were less converged than expected, and in particular a recent theoretical suggestion of superuniversality is firmly refuted.

  7. Methane production from sweet sorghum residues via a two-stage process

    Energy Technology Data Exchange (ETDEWEB)

    Stamatelatou, K.; Dravillas, K.; Lyberatos, G. [University of Patras (Greece). Department of Chemical Engineering, Laboratory of Biochemical Engineering and Environmental Technology

    2003-07-01

    The start-up of a two-stage reactor configuration for the anaerobic digestion of sweet sorghum residues was evaluated. The sweet sorghum residues were a waste stream originating from the alcoholic fermentation of sweet sorghum and the subsequent distillation step. This waste stream contained high concentration of solid matter (9% TS) and thus could be characterized as a semi-solid, not easily biodegradable wastewater with high COD (115 g/l). The application of the proposed two-stage configuration (consisting of one thermophilic hydrolyser and one mesophilic methaniser) achieved a methane production of 16 l/l wastewater under a hydraulic retention time of 19 d. (author)

  8. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, together with the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first presented and discussed. Two-stage penile urethroplasty is then reported, with a detailed description of first-stage urethroplasty according to the Johanson technique and of second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

  9. Development of a linear compressor for two-stage pulse tube cryocoolers

    Institute of Scientific and Technical Information of China (English)

    Peng-da YAN; Wei-li GAO; Guo-bang CHEN

    2009-01-01

    A valveless linear compressor was built to drive a self-made two-stage pulse tube cryocooler. With a designed maximum swept volume of 60 cm3, the compressor can provide the cryocooler with a pressure volume (PV) power of 400 W. Preliminary measurements of the compressor indicated that both an efficiency of 35%-55% and a pressure ratio of 1.3-1.4 could be obtained. The two-stage pulse tube cryocooler driven by this compressor achieved a lowest temperature of 14.2 K.

  10. Terephthalic acid wastewater treatment by using two-stage aerobic process

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Based on comparative tests of anoxic and aerobic processes, a two-stage aerobic process with a biological selector was chosen to treat terephthalic acid (PTA) wastewater. By adopting the two-stage aerobic process, the CODCr in PTA wastewater could be reduced from 4000-6000 mg/L to below 100 mg/L; the COD loading in the first aerobic tank could reach 7.0-8.0 kgCODCr/(m3.d) and that of the second stage was from 0.2 to 0.4 kgCODCr/(m3.d). Further research on the kinetics of substrate degradation was carried out.

  11. Airway hyperresponsiveness to mannitol and methacholine and exhaled nitric oxide: a random-sample population study.

    Science.gov (United States)

    Sverrild, Asger; Porsbjerg, Celeste; Thomsen, Simon Francis; Backer, Vibeke

    2010-11-01

    Studies of selected patient groups have shown that airway hyperresponsiveness (AHR) to mannitol is more specific than methacholine for the diagnosis of asthma, as well as more closely associated with markers of airway inflammation in asthma. We sought to compare AHR to mannitol and methacholine and exhaled nitric oxide (eNO) levels in a nonselected population sample. In 238 young adults randomly drawn from the nationwide civil registration list in Copenhagen, Denmark, AHR to mannitol and methacholine, as well as levels of eNO, were determined, and the association with asthma was analyzed. In diagnosing asthma the specificity of methacholine and mannitol was 80.2% (95% CI, 77.1% to 82.9%) and 98.4% (95% CI, 96.2% to 99.4%), respectively, with a positive predictive value of 48.6% versus 90.4%, whereas the sensitivity was 68.6% (95% CI, 57.1% to 78.4%) and 58.8% (95% CI, 50.7% to 62.6%), respectively. In asthmatic subjects AHR to mannitol was associated with increased eNO levels (positive AHR to mannitol: median, 47 ppb [interquartile range, 35-68 ppb]; negative AHR to mannitol: median, 19 ppb [interquartile range, 13-30 ppb]; P = .001), whereas this was not the case for AHR to methacholine (median of 37 ppb [interquartile range, 26-51 ppb] vs 24 ppb [interquartile range, 15-39 ppb], P = .13). In this random population sample, AHR to mannitol was less sensitive but more specific than methacholine in the diagnosis of asthma. Furthermore, AHR to mannitol was more closely associated with ongoing airway inflammation in terms of increased eNO levels. Copyright © 2010 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

  12. First Law Analysis of a Two-stage Ejector-vapor Compression Refrigeration Cycle working with R404A

    National Research Council Canada - National Science Library

    Feiza Memet; Daniela-Elena Mitu

    2011-01-01

    The traditional two-stage vapor compression refrigeration cycle might be replaced by a two-stage ejector-vapor compression refrigeration cycle if the aim is to decrease the irreversibility during expansion...

  13. State-independent importance sampling for random walks with regularly varying increments

    Directory of Open Access Journals (Sweden)

    Karthyek R. A. Murthy

    2015-03-01

    Full Text Available We develop importance sampling based efficient simulation techniques for three commonly encountered rare event probabilities associated with random walks having i.i.d. regularly varying increments; namely, (1) the large deviation probabilities, (2) the level crossing probabilities, and (3) the level crossing probabilities within a regenerative cycle. Exponential twisting based state-independent methods, which are effective in efficiently estimating these probabilities for light-tailed increments, are not applicable when the increments are heavy-tailed. To address the latter case, more complex and elegant state-dependent efficient simulation algorithms have been developed in the literature over the last few years. We propose that by suitably decomposing these rare event probabilities into a dominant and further residual components, simpler state-independent importance sampling algorithms can be devised for each component, resulting in composite unbiased estimators with desirable efficiency properties. When the increments have infinite variance, there is an added complexity in estimating the level crossing probabilities as even the well known zero-variance measures have an infinite expected termination time. We adapt our algorithms so that this expectation is finite while the estimators remain strongly efficient. Numerically, the proposed estimators perform at least as well, and sometimes substantially better than the existing state-dependent estimators in the literature.

  14. The contribution of simple random sampling to observed variations in faecal egg counts.

    Science.gov (United States)

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

    It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided.
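
    The point about large confidence intervals follows directly from the Poisson model and is easy to compute. If k eggs are counted on the slide and each counted egg represents `factor` eggs per gram, an exact Poisson interval for the count can be scaled to eggs per gram. The multiplication factor of 50 below is a common choice but depends on the dilution and chamber volume actually used; the function is a sketch, not taken from the paper.

        from scipy.stats import chi2

        def mcmaster_epg_interval(egg_count, factor=50, conf=0.95):
            """Point estimate and exact Poisson CI, in eggs per gram."""
            a = 1.0 - conf
            lo = 0.5 * chi2.ppf(a / 2, 2 * egg_count) if egg_count > 0 else 0.0
            hi = 0.5 * chi2.ppf(1 - a / 2, 2 * egg_count + 2)
            return egg_count * factor, lo * factor, hi * factor

        # Counting 4 eggs reports 200 epg, but the 95% interval is roughly 54-512 epg:
        print(mcmaster_epg_interval(4))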

  15. Overcoming the bottlenecks of anaerobic digestion of olive mill solid waste by two-stage fermentation.

    Science.gov (United States)

    Stoyanova, Elitza; Lundaa, Tserennyam; Bochmann, Günther; Fuchs, Werner

    2017-02-01

    Two-stage anaerobic digestion (AD) of two-phase olive mill solid waste (OMSW) was applied to reduce the inhibiting factors by optimizing the acidification stage. Single-stage AD and co-fermentation with chicken manure were conducted simultaneously for direct comparison. Degradation of the polyphenols of up to 61% was observed during the methanogenic stage. Nevertheless, although the concentration of phenolic substances remained high, the two-stage fermentation stayed stable at an OLR of 1.5 kgVS/m³·day. The buffer capacity of the system was twice as high as that of the one-stage fermentation, without additives. The two-stage AD was a combined process - a thermophilic first stage and a mesophilic second stage - which proved to be the most profitable option for AD of OMSW: the hydraulic retention time (HRT) was reduced from 230 to 150 days, and start-up was three times faster than for the single-stage and co-fermentation processes. The optimal HRT and incubation temperature for the first stage were determined to be four days and 55°C. The stable performance of the two-stage AD was confirmed by the co-digestion of OMSW with chicken manure as a nitrogen-rich co-substrate, which makes both viable options for waste disposal with concomitant energy recovery.

  16. The Design, Construction and Operation of a 75 kW Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Birk; Ahrenfeldt, Jesper; Jensen, Torben Kvist

    2003-01-01

    The Two-Stage Gasifier was operated for several weeks (465 hours), 190 hours of which were continuous. The gasifier is operated automatically unattended day and night, and only small adjustments of the feeding rate were necessary once or twice a day. The operation was successful, and the output a...... of the reactor had to be constructed in some other material.

  17. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study, the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD in two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as a higher methane content of the produced biogas. In our experiments, the reactors were operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) was reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of the two-stage digestion.

  18. A two-stage ethanol-based biodiesel production in a packed bed reactor

    DEFF Research Database (Denmark)

    Xu, Yuan; Nordblad, Mathias; Woodley, John

    2012-01-01

    A two-stage enzymatic process for producing fatty acid ethyl ester (FAEE) in a packed bed reactor is reported. The process uses an experimental immobilized lipase (NS 88001) and Novozym 435 to catalyze transesterification (first stage) and esterification (second stage), respectively. Both stages...

  19. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP...... problem....

  20. Use a Log Splitter to Demonstrate Two-Stage Hydraulic Pump

    Science.gov (United States)

    Dell, Timothy W.

    2012-01-01

    The two-stage hydraulic pump is commonly used in many high school and college courses to demonstrate hydraulic systems. Unfortunately, many textbooks do not provide a good explanation of how the technology works. Another challenge that instructors run into with teaching hydraulic systems is the cost of procuring an expensive real-world machine…

  1. Some design aspects of a two-stage rail-to-rail CMOS op amp

    NARCIS (Netherlands)

    Gierkink, S.L.J.; Holzmann, Peter J.; Wiegerink, R.J.; Wassenaar, R.F.

    1999-01-01

    A two-stage low-voltage CMOS op amp with rail-to-rail input and output voltage ranges is presented. The circuit uses complementary differential input pairs to achieve the rail-to-rail common-mode input voltage range. The differential pairs operate in strong inversion, and the constant transconductance...

  2. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1 applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated...

  3. An intracooling system for a novel two-stage sliding-vane air compressor

    Science.gov (United States)

    Murgia, Stefano; Valenti, Gianluca; Costanzo, Ida; Colletta, Daniele; Contaldi, Giulio

    2017-08-01

    Lube-oil injection is used in positive-displacement compressors and, among them, in sliding-vane machines to guarantee the correct lubrication of the moving parts and to act as a seal preventing air leakage. Furthermore, lube-oil injection allows the lubricant to be exploited as a thermal ballast with a great thermal capacity to minimize the temperature increase during compression. This study presents the design of a two-stage sliding-vane rotary compressor in which the air cooling is operated by high-pressure cold oil injection into a connection duct between the two stages. The heat exchange between the atomized oil jet and the air results in a decrease of the air temperature before the second stage, improving the overall system efficiency. This cooling system is named here intracooling, as opposed to intercooling. The oil injection is realized via pressure-swirl nozzles, both within the compressors and inside the intracooling duct. The design of the two-stage sliding-vane compressor is accomplished by way of a lumped-parameter model. The model predicts an input power reduction as large as 10% for intercooled and intracooled two-stage compressors, the latter being slightly better, with respect to a conventional single-stage compressor for compressed air applications. An experimental campaign was conducted on a first prototype that comprises the low-pressure compressor and the intracooling duct, indicating that a significant temperature reduction is achieved in the duct.

  4. Development of a heavy-duty diesel engine with two-stage turbocharging

    NARCIS (Netherlands)

    Sturm, L.; Kruithof, J.

    2001-01-01

    A mean value model was developed using the Matrixx/Systembuild simulation tool for designing real-time control algorithms for the two-stage engine. All desired characteristics are achieved, apart from the lower A/F ratio at lower engine speeds, and the turbocharger matches calculations. The CANbus is used to...

  5. Two-stage, dilute sulfuric acid hydrolysis of wood : an investigation of fundamentals

    Science.gov (United States)

    John F. Harris; Andrew J. Baker; Anthony H. Conner; Thomas W. Jeffries; James L. Minor; Roger C. Pettersen; Ralph W. Scott; Edward L Springer; Theodore H. Wegner; John I. Zerbe

    1985-01-01

    This paper presents a fundamental analysis of the processing steps in the production of methanol from southern red oak (Quercus falcata Michx.) by two-stage dilute sulfuric acid hydrolysis. Data for hemicellulose and cellulose hydrolysis are correlated using models. This information is used to develop and evaluate a process design.

  6. Two-stage data envelopment analysis technique for evaluating internal supply chain efficiency

    Directory of Open Access Journals (Sweden)

    Nisakorn Somsuk

    2014-12-01

    Full Text Available A two-stage data envelopment analysis (DEA), which uses mathematical linear programming techniques, is applied to evaluate the efficiency of a system composed of two relational sub-processes, in which the outputs from the first sub-process (the intermediate outputs of the system) are the inputs for the second sub-process. The relative efficiencies of the system and its sub-processes can be measured by applying the two-stage DEA. According to the literature review on supply chain management, this technique can be used as a tool for evaluating the efficiency of a supply chain composed of two relational sub-processes. The technique can help to determine the inefficient sub-processes. Once the efficiency of an inefficient sub-process is improved, the aggregate efficiency of the supply chain improves. This paper aims to present a procedure for evaluating the efficiency of the supply chain by using the two-stage DEA, under the assumption of constant returns to scale, with an example of internal supply chain efficiency measurement of insurance companies for illustration. Moreover, in this paper the authors also present some observations on the application of this technique.
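
    The mechanics described above can be sketched with a standard input-oriented CCR model solved once per decision-making unit; in a two-stage run, the first stage maps inputs X to intermediates Z and the second maps Z to outputs Y. The data layout and names below are illustrative assumptions, not the authors' implementation:

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            # Input-oriented CCR efficiency of DMU o under constant returns
            # to scale. X: (m, n) inputs, Y: (s, n) outputs; columns are DMUs.
            m, n = X.shape
            s = Y.shape[0]
            c = np.zeros(n + 1)
            c[0] = 1.0                                   # minimise theta
            A = np.block([[-X[:, [o]], X],               # X@lam <= theta*x_o
                          [np.zeros((s, 1)), -Y]])       # Y@lam >= y_o
            b = np.concatenate([np.zeros(m), -Y[:, o]])
            bounds = [(0, None)] * (n + 1)
            res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
            return res.x[0]

        # Two-stage scoring: stage 1 uses X -> Z, stage 2 uses Z -> Y
        # eff1 = ccr_efficiency(X, Z, o); eff2 = ccr_efficiency(Z, Y, o)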

  7. Two-stage estimation in copula models used in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2005-01-01

    In this paper, register-based family studies provide the motivation for studying a two-stage estimation procedure in copula models for multivariate failure time data. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, generalising the approach by...
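
    The two-stage logic (margins first, dependence second) reduces to a few lines in the simplest uncensored case. The sketch below assumes exponential margins and a Clayton copula; the paper itself handles censored failure-time data and derives the asymptotics, neither of which is attempted here:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def clayton_loglik(theta, u, v):
            # Log-density of the Clayton copula, valid for theta > 0
            return np.sum(np.log(1 + theta)
                          - (theta + 1) * (np.log(u) + np.log(v))
                          - (2 + 1 / theta) * np.log(u**-theta + v**-theta - 1))

        def two_stage_copula(x, y):
            # Stage 1: fit the margins by maximum likelihood (exponential here)
            rate_x, rate_y = 1 / x.mean(), 1 / y.mean()
            u = 1 - np.exp(-rate_x * x)       # probability-integral transforms
            v = 1 - np.exp(-rate_y * y)
            # Stage 2: maximise the copula likelihood over theta alone
            res = minimize_scalar(lambda t: -clayton_loglik(t, u, v),
                                  bounds=(0.01, 20.0), method="bounded")
            return rate_x, rate_y, res.x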

  8. Innovative two-stage anaerobic process for effective codigestion of cheese whey and cattle manure.

    Science.gov (United States)

    Bertin, Lorenzo; Grilli, Selene; Spagni, Alessandro; Fava, Fabio

    2013-01-01

    The valorisation of agroindustrial waste through anaerobic digestion represents a significant opportunity for refuse treatment and renewable energy production. This study aimed to improve the codigestion of cheese whey (CW) and cattle manure (CM) by an innovative two-stage process, based on concentric acidogenic and methanogenic phases, designed for enhancing performance and reducing footprint. The optimum CW to CM ratio was evaluated under batch conditions. Thereafter, codigestion was implemented under continuous-flow conditions comparing one- and two-stage processes. The results demonstrated that the addition of CM in codigestion with CW greatly improved the anaerobic process. The highest methane yield was obtained co-treating the two substrates at equal ratio by using the innovative two-stage process. The proposed system reached the maximum value of 258 mL(CH4) gVS(-1), which was more than twice the value obtained by the one-stage process and 10% higher than the value obtained by the two-stage one.

  9. Extraoral implants for orbit rehabilitation: a comparison between one-stage and two-stage surgeries.

    Science.gov (United States)

    de Mello, M C L M P; Guedes, R; de Oliveira, J A P; Pecorari, V A; Abrahão, M; Dib, L L

    2014-03-01

    The aim of the study was to compare the osseointegration success rate and time for delivery of the prosthesis among cases treated by two-stage or one-stage surgery for orbit rehabilitation between 2003 and 2011. Forty-five patients were included, 31 males and 14 females; 22 patients had two-stage surgery and 23 patients had one-stage surgery. A total 138 implants were installed, 42 (30.4%) on previously irradiated bone. The implant survival rate was 96.4%, with a success rate of 99.0% among non-irradiated patients and 90.5% among irradiated patients. Two-stage patients received 74 implants with a survival rate of 94.6% (four implants lost); one-stage surgery patients received 64 implants with a survival rate of 98.4% (one implant lost). The median time interval between implant fixation and delivery of the prosthesis for the two-stage group was 9.6 months and for the one-stage group was 4.0 months (P < 0.001). The one-stage technique proved to be reliable and was associated with few risks and complications; the rate of successful osseointegration was similar to those reported in the literature. The one-stage technique should be considered a viable procedure that shortens the time to final rehabilitation and facilitates appropriate patient follow-up treatment.

  10. Validation of Continuous CHP Operation of a Two-Stage Biomass Gasifier

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Henriksen, Ulrik Birk; Jensen, Torben Kvist

    2006-01-01

    The Viking gasification plant at the Technical University of Denmark was built to demonstrate a continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-d...

  11. High rate treatment of terephthalic acid production wastewater in a two-stage anaerobic bioreactor

    NARCIS (Netherlands)

    Kleerebezem, R.; Beckers, J.; Pol, L.W.H.; Lettinga, G.

    2005-01-01

    The feasibility was studied of anaerobic treatment of wastewater generated during purified terephthalic acid (PTA) production in a two-stage upflow anaerobic sludge blanket (UASB) reactor system. The artificial influent of the system contained the main organic substrates of PTA-wastewater: acetate, be

  12. Thermal design of two-stage evaporative cooler based on thermal comfort criterion

    Science.gov (United States)

    Gilani, Neda; Poshtiri, Amin Haghighi

    2017-04-01

    Performance of two-stage evaporative coolers at various outdoor air conditions was numerically studied, and their geometric and physical characteristics were obtained based on thermal comfort criteria. For this purpose, a mathematical model was developed based on the conservation equations of mass, momentum and energy to determine the heat and mass transfer characteristics of the system. The results showed that a two-stage indirect/direct cooler can provide the thermal comfort condition when outdoor air temperature and relative humidity lie in the ranges of 34-54 °C and 10-60 %, respectively. Moreover, as the relative humidity of the ambient air rises, a two-stage evaporative cooler with a smaller direct and a larger indirect cooler will be needed. In buildings with high cooling demand, thermal comfort may be achieved at a greater air change per hour number, and thus an expensive two-stage evaporative cooler with a higher electricity consumption would be required. Finally, a design guideline was proposed to determine the size of the required plate heat exchangers at various operating conditions.

  13. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two-stage process

    DEFF Research Database (Denmark)

    Antonopoulou, Georgia; Gavala, Hariklia N.; Skiadas, Ioannis

    2012-01-01

    The present study focused on the application of the Anaerobic Digestion Model 1 on the methane production from acidified sorghum extract generated from a hydrogen-producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were...

  14. Thermal design of two-stage evaporative cooler based on thermal comfort criterion

    Science.gov (United States)

    Gilani, Neda; Poshtiri, Amin Haghighi

    2016-09-01

    Performance of two-stage evaporative coolers at various outdoor air conditions was numerically studied, and their geometric and physical characteristics were obtained based on thermal comfort criteria. For this purpose, a mathematical model was developed based on the conservation equations of mass, momentum and energy to determine the heat and mass transfer characteristics of the system. The results showed that a two-stage indirect/direct cooler can provide the thermal comfort condition when outdoor air temperature and relative humidity lie in the ranges of 34-54 °C and 10-60 %, respectively. Moreover, as the relative humidity of the ambient air rises, a two-stage evaporative cooler with a smaller direct and a larger indirect cooler will be needed. In buildings with high cooling demand, thermal comfort may be achieved at a greater air change per hour number, and thus an expensive two-stage evaporative cooler with a higher electricity consumption would be required. Finally, a design guideline was proposed to determine the size of the required plate heat exchangers at various operating conditions.

  15. A Two-Stage Exercise on the Binomial Distribution Using Minitab.

    Science.gov (United States)

    Shibli, M. Abdullah

    1990-01-01

    Describes a two-stage experiment that was designed to explain binomial distribution to undergraduate statistics students. A manual coin flipping exercise is explained as the first stage; a computerized simulation using MINITAB software is presented as stage two; and output from the MINITAB exercises is included. (two references) (LRW)
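
    The computerised stage of such an exercise is easy to reproduce outside MINITAB; a minimal Python stand-in, with the flip counts and trial numbers as arbitrary choices:

        import random
        from collections import Counter

        def binomial_demo(n_flips=10, n_trials=1000, p=0.5, seed=1):
            # Simulate n_trials experiments of n_flips tosses each and
            # tabulate the head counts, approximating Binomial(n_flips, p)
            rng = random.Random(seed)
            heads = [sum(rng.random() < p for _ in range(n_flips))
                     for _ in range(n_trials)]
            return Counter(heads)

        counts = binomial_demo()
        for k in range(11):                      # crude text histogram
            print(f"{k:2d} {'#' * (counts[k] // 10)}")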

  16. The rearrangement process in a two-stage broadcast switching network

    DEFF Research Database (Denmark)

    Jacobsen, Søren B.

    1988-01-01

    The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol.COM-33, no.10, p.1025-1035, Oct. 1985) is considered. By defining a certain function it is possible to calculate an upper bound on the number of connections to be moved...

  17. Two-stage laparoscopic resection of colon cancer and metastatic liver tumour

    Directory of Open Access Journals (Sweden)

    Yukio Iwashita

    2012-01-01

    Full Text Available We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both post-operative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

  18. Two-stage laparoscopic resection of colon cancer and metastatic liver tumour

    Directory of Open Access Journals (Sweden)

    Iwashita Yukio

    2005-01-01

    Full Text Available We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both postoperative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

  19. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  20. The Bracka two-stage repair for severe proximal hypospadias: A single center experience

    Directory of Open Access Journals (Sweden)

    Rakesh S Joshi

    2015-01-01

    Full Text Available Background: Surgical correction of severe proximal hypospadias represents a significant surgical challenge, and single-stage corrections are often associated with complications and reoperations. The Bracka two-stage repair is an attractive alternative surgical procedure with superior, reliable, and reproducible results. Purpose: To study the feasibility and applicability of the Bracka two-stage repair for severe proximal hypospadias and to analyze the outcomes and complications of this surgical technique. Materials and Methods: This prospective study was conducted from January 2011 to December 2013. The Bracka two-stage repair was performed using inner preputial skin as a free graft in subjects with proximal hypospadias in whom a severe degree of chordee and/or a poor urethral plate was present. Only primary cases were included in this study. All subjects received three doses of intramuscular testosterone 3 weeks apart before the first stage. The second stage was performed 6 months after the first stage. Follow-up ranged from 6 months to 24 months. Results: A total of 43 patients underwent the Bracka repair, of whom 30 completed the two-stage repair. The mean age of the patients was 4 years and 8 months. We achieved 100% graft uptake and no revision was required. Three patients developed fistula, while two had meatal stenosis. Glans dehiscence, urethral stricture and residual chordee were not found during follow-up, and satisfactory cosmetic results with a good urinary stream were achieved in all cases. Conclusion: The Bracka two-stage repair is a safe and reliable approach in select patients in whom it is impractical to maintain the axial integrity of the urethral plate and a full-circumference urethral reconstruction therefore becomes necessary. It gives good results both in terms of restoration of normal function and minimal complications.

  1. Optimisation of two-stage screw expanders for waste heat recovery applications

    Science.gov (United States)

    Read, M. G.; Smith, I. K.; Stosic, N.

    2015-08-01

    It has previously been shown that the use of two-phase screw expanders in power generation cycles can achieve an increase in the utilisation of available energy from a low temperature heat source when compared with more conventional single-phase turbines. However, screw expander efficiencies are more sensitive to expansion volume ratio than turbines, and this sensitivity increases as the expander inlet vapour dryness fraction decreases. For single-stage screw machines with low inlet dryness, this can lead to under-expansion of the working fluid and low isentropic efficiency for the expansion process. The performance of the cycle can potentially be improved by using a two-stage expander, consisting of a low pressure machine and a smaller high pressure machine connected in series. By expanding the working fluid over two stages, the built-in volume ratios of the two machines can be selected to provide a better match with the overall expansion process, thereby increasing efficiency for particular inlet and discharge conditions. The mass flow rate through both stages must however be matched, and the compromise between increasing efficiency and maximising power output must also be considered. This research uses a rigorous thermodynamic screw machine model to compare the performance of single and two-stage expanders over a range of operating conditions. The model allows optimisation of the required intermediate pressure in the two-stage expander, along with the rotational speed and built-in volume ratio of both screw machine stages. The results allow the two-stage machine to be fully specified in order to achieve maximum efficiency for a required power output.

  2. A two-stage procedure for determining unsaturated hydraulic characteristics using a syringe pump and outflow observations

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Jensen, Karsten Høgh; Hollenbeck, Karl-Josef;

    1997-01-01

    A fast two-stage methodology for determining unsaturated flow characteristics is presented. The procedure builds on direct measurement of the retention characteristic using a syringe pump technique, combined with inverse estimation of the hydraulic conductivity characteristic based on one-step outflow experiments. The direct measurements are obtained with a commercial syringe pump, which continuously withdraws fluid from a soil sample at a very low and accurate flow rate, thus providing the water content in the soil sample. The retention curve is then established by simultaneously monitoring the pressure in the sample. The one-step outflow data and the independently measured retention data are included in the objective function of a traditional least-squares minimization routine, providing unique estimates of the unsaturated hydraulic characteristics by means of numerical inversion of Richards equation. As opposed to what is often...
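
    Stage one of such a procedure yields water-content/pressure pairs to which a parametric retention model can be fitted directly. A minimal sketch using the van Genuchten form with invented data points; stage two, the numerical inversion of Richards equation, is well beyond a snippet:

        import numpy as np
        from scipy.optimize import curve_fit

        def van_genuchten(h, theta_r, theta_s, alpha, n):
            # Water content as a function of suction head h (positive, cm)
            m = 1.0 - 1.0 / n
            return theta_r + (theta_s - theta_r) / (1 + (alpha * h) ** n) ** m

        # Hypothetical syringe-pump retention data: suction (cm), content (-)
        h_obs = np.array([10, 30, 60, 100, 300, 600, 1000.0])
        theta_obs = np.array([0.42, 0.40, 0.35, 0.29, 0.18, 0.12, 0.09])
        popt, _ = curve_fit(van_genuchten, h_obs, theta_obs,
                            p0=[0.05, 0.43, 0.02, 1.5], maxfev=5000)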

  3. An R package for spatial coverage sampling and random sampling from compact geographical strata by k-means

    NARCIS (Netherlands)

    Walvoort, D.J.J.; Brus, D.J.; Gruijter, de J.J.

    2010-01-01

    Both for mapping and for estimating spatial means of an environmental variable, the accuracy of the result will usually be increased by dispersing the sample locations so that they cover the study area as uniformly as possible. We developed a new R package for designing spatial coverage samples for

  4. Preliminary chemical analysis and biological testing of materials from the HRI catalytic two-stage liquefaction (CTSL) process. [Aliphatic hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Later, D.W.; Wilson, B.W.

    1985-01-01

    Coal-derived materials from experimental runs of Hydrocarbon Research Incorporated's (HRI) catalytic two-stage liquefaction (CTSL) process were chemically characterized and screened for microbial mutagenicity. This process differs from other two-stage coal liquefaction processes in that catalyst is used in both stages. Samples from both the first and second stages were class-fractionated by alumina adsorption chromatography. The fractions were analyzed by capillary column gas chromatography; gas chromatography/mass spectrometry; direct probe, low voltage mass spectrometry; and proton nuclear magnetic resonance spectrometry. Mutagenicity assays were performed with the crude and class fractions in Salmonella typhimurium, TA98. Preliminary results of chemical analyses indicate that >80% of the CTSL materials from both process stages were aliphatic hydrocarbon and polynuclear aromatic hydrocarbon (PAH) compounds. Furthermore, the gross and specific chemical composition of process materials from the first stage were very similar to those of the second stage. In general, the unfractionated materials were only slightly active in the TA98 mutagenicity assay. Like other coal liquefaction materials investigated in this laboratory, the nitrogen-containing polycyclic aromatic compound (N-PAC) class fractions were responsible for the bulk of the mutagenic activity of the crudes. Finally, it was shown that this activity correlated with the presence of amino-PAH. 20 figures, 9 tables.

  5. Reality check for the Chinese microblog space: a random sampling approach.

    Directory of Open Access Journals (Sweden)

    King-wa Fu

    Full Text Available Chinese microblogs have drawn global attention to this online application's potential impact on the country's social and political environment. However, representative and reliable statistics on Chinese microbloggers are limited. Using a random sampling approach, this study collected Chinese microblog data from the service provider, analyzing the profile and the pattern of usage for 29,998 microblog accounts. From our analysis, 57.4% (95% CI 56.9%, 58.0%) of the accounts' timelines were empty. Among the 12,774 non-zero status samples, 86.9% (95% CI 86.2%, 87.4%) did not make an original post in the 7-day study period. By contrast, 0.51% (95% CI 0.4%, 0.65%) wrote twenty or more original posts and 0.45% (95% CI 0.35%, 0.60%) reposted more than 40 unique messages within the 7-day period. A small group of microbloggers created a majority of the content and drew other users' attention. About 4.8% (95% CI 4.4%, 5.2%) of the 12,774 users contributed more than 80% (95% CI 78.6%, 80.3%) of the original posts and about 4.8% (95% CI 4.5%, 5.2%) managed to create posts that were reposted or received comments at least once. Moreover, a regression analysis revealed that the volume of followers is a key determinant of creating original microblog posts, reposting messages, being reposted, and receiving comments. The volume of friends is found to be linked only with the number of reposts. Gender differences and regional disparities in using microblogs in China are also observed.
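
    The quoted intervals are standard confidence limits for proportions under simple random sampling. A minimal sketch using the normal approximation; the success count below is back-computed from the reported 57.4% and is purely illustrative:

        import math

        def proportion_ci(k, n, z=1.96):
            # Point estimate and normal-approximation 95% CI for k/n
            p = k / n
            se = math.sqrt(p * (1 - p) / n)
            return p, (p - z * se, p + z * se)

        # e.g. 17224 empty timelines out of 29998 sampled accounts (assumed)
        p, (lo, hi) = proportion_ci(17224, 29998)
        print(f"{p:.1%} (95% CI {lo:.1%}, {hi:.1%})")

    With these counts the sketch reproduces the reported interval of roughly (56.9%, 58.0%).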

  6. A coupled well-balanced and random sampling scheme for computing bubble oscillations

    Directory of Open Access Journals (Sweden)

    Jung Jonathan

    2012-04-01

    Full Text Available We propose a finite volume scheme to study the oscillations of a spherical bubble of gas in a liquid phase. Spherical symmetry implies a geometric source term in the Euler equations. Our scheme satisfies the well-balanced property. It is based on the VFRoe approach. In order to avoid spurious pressure oscillations, the well-balanced approach is coupled with an ALE (Arbitrary Lagrangian Eulerian) technique at the interface and a random sampling remap.

  7. Cognitive deficits and morphological cerebral changes in a random sample of social drinkers.

    Science.gov (United States)

    Bergman, H

    1985-01-01

    A random sample of 200 men and 200 women taken from the general population as well as subsamples of 31 male and 17 female excessive social drinkers were investigated with neuropsychological tests and computed tomography of the brain. Relatively high alcohol intake per drinking occasion did not give evidence of cognitive deficits or morphological cerebral changes. However, in males, mild cognitive deficits and morphological cerebral changes as a result of high recent alcohol intake, particularly during the 24-hr period prior to the investigation, were observed. When excluding acute effects of recent alcohol intake, mild cognitive deficits but not morphological cerebral changes that are apparently due to long-term excessive social drinking were observed in males. In females there was no association between the drinking variables and cognitive deficits or morphological cerebral changes, probably due to their less advanced drinking habits. It is suggested that future risk evaluations and estimations of safe alcohol intake should take into consideration the potential risk for brain damage due to excessive social drinking. However, it is premature to make any definite statements about safe alcohol intake and the risk for brain damage in social drinkers from the general population.

  8. Simple random sampling-based probe station selection for fault detection in wireless sensor networks.

    Science.gov (United States)

    Huang, Rimao; Qiu, Xuesong; Rui, Lanlan

    2011-01-01

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution leads however to several deficiencies. Firstly, by assigning the fault detection task only to the manager node, the whole network is out of balance, and this quickly overloads the already heavily burdened manager node, which in turn ultimately shortens the lifetime of the whole network. Secondly, probing at a fixed frequency often generates too much useless network traffic, which results in a waste of the limited network energy. Thirdly, the traditional algorithm for choosing a probing node is too complicated to be used in energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of the faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjusting rule for the probing frequency is also proposed to reduce the number of useless probing packets. The simulation experiments demonstrate that the algorithm and adjusting rule we present can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.
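
    The selection step itself is ordinary simple random sampling; a minimal sketch, with a hypothetical back-off rule standing in for the paper's frequency-adjustment logic:

        import random

        def select_probes(nodes, k, seed=None):
            # Draw k probe stations by simple random sampling, no replacement
            rng = random.Random(seed)
            return rng.sample(nodes, k)

        def adjust_period(period, faults_seen, lo=1.0, hi=60.0):
            # Hypothetical rule: probe twice as often after a fault,
            # otherwise back off to save energy
            period = period / 2 if faults_seen else period * 2
            return min(max(period, lo), hi)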

  9. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory]; Hyman, James M [Los Alamos National Laboratory]; Robinson, Bruce A [Los Alamos National Laboratory]; Higdon, Dave [Los Alamos National Laboratory]; Ter Braak, Cajo J F [NETHERLANDS]; Diks, Cees G H [UNIV OF AMSTERDAM]

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
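
    A stripped-down cousin of the scheme, plain differential-evolution Metropolis without DREAM's randomized-subspace sampling and adaptive crossover, conveys the core idea of proposing moves from differences of other chains. A minimal sketch, not the published algorithm:

        import numpy as np

        def de_mc(logpost, n_chains=10, n_iter=2000, d=2, seed=0):
            # Differential-evolution Metropolis: each chain jumps along the
            # difference of two randomly chosen other chains (the full DREAM
            # algorithm adds subspace sampling and adaptation, omitted here)
            rng = np.random.default_rng(seed)
            X = rng.normal(size=(n_chains, d))
            logp = np.array([logpost(x) for x in X])
            gamma = 2.38 / np.sqrt(2 * d)            # standard DE-MC scale
            out = []
            for _ in range(n_iter):
                for i in range(n_chains):
                    others = [j for j in range(n_chains) if j != i]
                    r1, r2 = rng.choice(others, size=2, replace=False)
                    prop = (X[i] + gamma * (X[r1] - X[r2])
                            + 1e-6 * rng.normal(size=d))
                    lp = logpost(prop)
                    if np.log(rng.random()) < lp - logp[i]:   # Metropolis
                        X[i], logp[i] = prop, lp
                out.append(X.copy())
            return np.asarray(out)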

  10. Image reconstruction in EIT with unreliable electrode data using random sample consensus method

    Science.gov (United States)

    Jeon, Min Ho; Khambampati, Anil Kumar; Kim, Bong Seok; In Kang, Suk; Kim, Kyung Youn

    2017-04-01

    In electrical impedance tomography (EIT), it is important to acquire reliable measurement data through the EIT system to achieve a good reconstructed image. In order to have reliable data, various methods for checking and optimizing the EIT measurement system have been studied. However, most of the methods involve additional cost for testing, and the measurement setup is often evaluated before the experiment. It is useful to have a method which can detect faulty electrode data during the experiment without any additional cost. This paper presents a method based on random sample consensus (RANSAC) to find incorrect data from faulty electrodes in EIT measurements. RANSAC is a curve-fitting method that removes outlier data from measurement data. The RANSAC method is applied with the Gauss-Newton (GN) method for image reconstruction of the human thorax with faulty data. Numerical and phantom experiments are performed, and the reconstruction performance of the proposed RANSAC method with GN is compared with the conventional GN method. From the results, it can be noticed that RANSAC with GN has better reconstruction performance than the conventional GN method when faulty electrode data are present.
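
    RANSAC itself reduces to a few lines for the curve-fitting case the abstract alludes to. A line-fitting sketch; the paper applies the idea to electrode measurements, not to this toy model:

        import numpy as np

        def ransac_line(x, y, n_iter=200, tol=0.1, seed=0):
            # Repeatedly fit y = a*x + b to two random points and keep the
            # model with the largest consensus set; outliers (e.g. faulty
            # electrode readings) are excluded from the final fit
            rng = np.random.default_rng(seed)
            best = np.zeros(len(x), dtype=bool)
            for _ in range(n_iter):
                i, j = rng.choice(len(x), size=2, replace=False)
                if x[i] == x[j]:
                    continue
                a = (y[j] - y[i]) / (x[j] - x[i])
                b = y[i] - a * x[i]
                inliers = np.abs(y - (a * x + b)) < tol
                if inliers.sum() > best.sum():
                    best = inliers
            a, b = np.polyfit(x[best], y[best], 1)   # refit on inliers only
            return a, b, best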

  11. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
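
    The estimating machinery is, at its core, a weighted least-squares solve. A generic sketch; in the actual method the weights come from the known sequential randomization probabilities, which are not modeled here:

        import numpy as np

        def wls(X, y, w):
            # Weighted least squares: solve (X'WX) beta = X'W y
            Xw = X * w[:, None]                 # scale each row by its weight
            return np.linalg.solve(X.T @ Xw, X.T @ (w * y))

        # usage: beta = wls(design_matrix, outcomes, weights)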

  12. Matching tutor to student: rules and mechanisms for efficient two-stage learning in neural circuits

    CERN Document Server

    Tesileanu, Tiberiu; Balasubramanian, Vijay

    2016-01-01

    Existing models of birdsong learning assume that brain area LMAN introduces variability into song for trial-and-error learning. Recent data suggest that LMAN also encodes a corrective bias driving short-term improvements in song. These later consolidate in area RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using a stochastic gradient descent approach, we derive how 'tutor' circuits should match plasticity mechanisms in 'student' circuits for efficient learning. We further describe a reinforcement learning framework with which the tutor can build its teaching signal. We show that mismatching the tutor signal and plasticity mechanism can impair or abolish learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.

  13. HRI catalytic two-stage liquefaction (CTSL) process materials: chemical analysis and biological testing

    Energy Technology Data Exchange (ETDEWEB)

    Wright, C.W.; Later, D.W.

    1985-12-01

    This report presents data from the chemical analysis and biological testing of coal liquefaction materials obtained from the Hydrocarbon Research, Incorporated (HRI) catalytic two-stage liquefaction (CTSL) process. Materials from both an experimental run and a 25-day demonstration run were analyzed. Chemical methods of analysis included adsorption column chromatography, high-resolution gas chromatography, gas chromatography/mass spectrometry, low-voltage probe-inlet mass spectrometry, and proton nuclear magnetic resonance spectroscopy. The biological activity was evaluated using the standard microbial mutagenicity assay and an initiation/promotion assay for mouse-skin tumorigenicity. Where applicable, the results obtained from the analyses of the CTSL materials have been compared to those obtained from the integrated and nonintegrated two-stage coal liquefaction processes. 18 refs., 26 figs., 22 tabs.

  14. Two-stage precipitation process of iron and arsenic from acid leaching solutions

    Institute of Scientific and Technical Information of China (English)

    N.J.BOLIN; J.E.SUNDKVIST

    2008-01-01

    A leaching process for base metals recovery often generates considerable amounts of impurities such as iron and arsenic in the solution. It is a challenge to separate the non-valuable metals into manageable and stable waste products for final disposal without losing the valuable constituents. Boliden Mineral AB has patented a two-stage precipitation process that gives a very clean iron-arsenic precipitate with a minimum of coprecipitation of base metals. The obtained product shows good sedimentation and filtration properties, which makes it easy to recover the iron-arsenic depleted solution by filtration and washing of the precipitate. Continuous bench-scale tests have been done, showing the excellent results achieved by the two-stage precipitation process.

  15. S-band gain-flattened EDFA with two-stage double-pass configuration

    Science.gov (United States)

    Fu, Hai-Wei; Xu, Shi-Chao; Qiao, Xue-Guang; Jia, Zhen-An; Liu, Ying-Gang; Zhou, Hong

    2011-11-01

    A gain-flattened S-band erbium-doped fiber amplifier (EDFA) using standard erbium-doped fiber (EDF) is proposed and experimentally demonstrated. The proposed amplifier with a two-stage double-pass configuration employs two C-band suppressing filters to obtain optical gain in the S-band. The amplifier provides a maximum signal gain of 41.6 dB at 1524 nm with a corresponding noise figure of 3.8 dB. Furthermore, with a well-designed short-pass filter as a gain flattening filter (GFF), we are able to develop an S-band EDFA with a flattened gain of more than 20 dB in the 1504-1524 nm range. In the experiment, the two-stage double-pass amplifier configuration improves the gain and noise figure compared with a single-stage double-pass S-band EDFA.

  16. Power Frequency Oscillation Suppression Using Two-Stage Optimized Fuzzy Logic Controller for Multigeneration System

    Directory of Open Access Journals (Sweden)

    Y. K. Bhateshvar

    2016-01-01

    Full Text Available This paper develops a linearized model of automatic generation control (AGC) for an interconnected two-area reheat-type thermal power system in a deregulated environment. A comparison between a genetic algorithm optimized PID controller (GA-PID), a particle swarm optimized PID controller (PSO-PID), and the proposed two-stage PSO optimized fuzzy logic controller (TSO-FLC) is presented. The proposed fuzzy based controller is optimized at two stages: one is rule-base optimization and the other is scaling factor and gain factor optimization. The TSO-FLC shows the best dynamic response following a step load change for different cases of bilateral contracts in the deregulated environment. In addition, the performance of the proposed TSO-FLC is also examined for ±30% changes in system parameters with different types of contractual demands between control areas and compared with GA-PID and PSO-PID. MATLAB/Simulink® is used for all simulations.

  17. A two-stage scheme for multi-view human pose estimation

    Science.gov (United States)

    Yan, Junchi; Sun, Bing; Liu, Yuncai

    2010-08-01

    We present a two-stage scheme integrating voxel reconstruction and human motion tracking. By combining voxel reconstruction with human motion tracking interactively, our method can work in a cluttered background where perfect foreground silhouettes are hardly available. For each frame, a silhouette-based 3D volume reconstruction method and a hierarchical tracking algorithm are applied in two stages. In the first stage, coarse reconstruction and tracking results are obtained, and then refinement of the reconstruction is applied in the second stage. The experimental results demonstrate that our approach is promising. Although our method focuses on the problem of human body voxel reconstruction and motion tracking in this paper, our scheme can be used to reconstruct voxel data and infer the pose of many specified rigid and articulated objects.

  18. Effect of two-stage aging on superplasticity of Al-Li alloy

    Institute of Scientific and Technical Information of China (English)

    LUO Zhi-hui; ZHANG Xin-ming; DU Yu-xuan; YE Ling-ying

    2006-01-01

    The effect of two-stage aging on the microstructures and superplasticity of 01420 Al-Li alloy was investigated by means of OM and TEM analysis and stretching experiments. The results demonstrate that, after two-stage aging (120 ℃ for 12 h + 300 ℃ for 36 h), the second-phase particles are distributed more uniformly and with a larger volume fraction than after single-stage aging (300 ℃ for 48 h). After rolling and recrystallization annealing, fine grains 8-10 μm in size are obtained, and the superplastic elongation of the specimens reaches 560% at a strain rate of 8×10^-4 s^-1 and 480 ℃. Uniformly distributed fine particles precipitate both on grain boundaries and in grains at the lower temperature. When the sheet is aged at the higher temperature, the particles become coarser, with a large volume fraction.

  19. Two stage bioethanol refining with multi litre stacked microbial fuel cell and microbial electrolysis cell.

    Science.gov (United States)

    Sugnaux, Marc; Happe, Manuel; Cachelin, Christian Pierre; Gloriod, Olivier; Huguenin, Gérald; Blatter, Maxime; Fischer, Fabian

    2016-12-01

    Ethanol, electricity, hydrogen and methane were produced in a two-stage bioethanol refinery setup based on a 10 L microbial fuel cell (MFC) and a 33 L microbial electrolysis cell (MEC). The MFC was a triple stack for ethanol and electricity co-generation. The higher the stack potential, the more ethanol the stack configuration produced and the faster glucose was consumed. Under electrolytic conditions ethanol productivity outperformed standard conditions and reached 96.3% of the theoretical best case. At lower external loads, currents and working potentials oscillated in a self-synchronized manner over all three MFC units in the stack. In the second refining stage, fermentation waste was converted into methane using the scaled-up MEC stack. The bioelectric methanisation reached 91% efficiency at room temperature with an applied voltage of 1.5 V using nickel cathodes. The two-stage bioethanol refining process employing bioelectrochemical reactors produces more energy vectors than is possible with today's ethanol distilleries.

  1. Performance measurement of insurance firms using a two-stage DEA method

    Directory of Open Access Journals (Sweden)

    Raha Jalili Sabet

    2013-01-01

    Full Text Available Measuring the relative performance of insurance firms plays an important role in this industry. In this paper, we present a two-stage data envelopment analysis to measure the performance of insurance firms that were active over the period 2006-2010. The proposed study performs the DEA method in two stages, where the first stage considers five inputs and three outputs, while the second stage takes the outputs of the first stage as its inputs and uses three different outputs. The results of our survey indicate that while there were 4 efficient insurance firms, most of the other insurers were noticeably inefficient. This means the market was dominated by a limited number of insurance firms and competition was not fair enough to let other firms participate in the economy more efficiently.

  2. Direct Torque Control of Sensorless Induction Machine Drives: A Two-Stage Kalman Filter Approach

    Directory of Open Access Journals (Sweden)

    Jinliang Zhang

    2015-01-01

    Full Text Available The extended Kalman filter (EKF) has been widely applied for sensorless direct torque control (DTC) in induction machines (IMs). One key problem associated with the EKF is that the estimator suffers from a computational burden and numerical problems resulting from high-order mathematical models. To reduce the computational cost, a two-stage extended Kalman filter (TEKF) based solution is presented in this paper for closed-loop stator flux, speed, and torque estimation of an IM to achieve sensorless DTC-SVM operation. The novel observer can be derived similarly to the optimal two-stage Kalman filter (TKF) which has been proposed by several researchers. Compared to a straightforward implementation of a conventional EKF, the TEKF estimator can reduce the number of arithmetic operations. Simulation and experimental results verify the performance of the proposed TEKF estimator for DTC of IMs.

  3. Syme's two-stage amputation in insulin-requiring diabetics with gangrene of the forefoot.

    Science.gov (United States)

    Pinzur, M S; Morrison, C; Sage, R; Stuck, R; Osterman, H; Vrbos, L

    1991-06-01

    Thirty-five insulin-requiring adult diabetic patients underwent 38 Syme's two-stage amputations for gangrene of the forefoot with nonreconstructible peripheral vascular insufficiency. All had a minimum Doppler ischemic index of 0.5, serum albumin of 3.0 g/dl, and total lymphocyte count of 1500. Thirty-one (81.6%) eventually healed and were uneventfully fitted with a prosthesis. Regional anesthesia was used in all of the patients, with 22 spinal and 16 ankle block anesthetics. Twenty-seven (71%) returned to their preamputation level of ambulatory function. Six (16%) had major, and fifteen (39%) minor, complications following the first-stage surgery. The results of this study support the use of the Syme's two-stage amputation in adult diabetic patients with gangrene of the forefoot requiring amputation.

  4. Low-noise SQUIDs with large transfer: two-stage SQUIDs based on DROSs

    Science.gov (United States)

    Podt, M.; Flokstra, J.; Rogalla, H.

    2002-08-01

    We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10^5 Φ0/s and the measured white flux noise was 1.3 μΦ0/√Hz at 4.2 K. The system is based on a conventional dc SQUID with a double relaxation oscillation SQUID (DROS) as the second stage. Because of the large flux-to-voltage transfer, the sensitivity of the system is completely determined by the sensor SQUID and not by the DROS or the room-temperature preamplifier. Decreasing the Josephson junction area enables a further improvement of the sensitivity of the two-stage SQUID systems.

  5. Interval estimation of binomial proportion in clinical trials with a two-stage design.

    Science.gov (United States)

    Tsai, Wei-Yann; Chi, Yunchan; Chen, Chia-Min

    2008-01-15

    Generally, a two-stage design is employed in Phase II clinical trials to avoid giving patients an ineffective drug. If the number of patients with significant improvement, which is a binomial response, is greater than a pre-specified value at the first stage, then another binomial response at the second stage is also observed. This paper considers interval estimation of the response probability when the second stage is allowed to continue. Two asymptotic interval estimators, Wald and score, as well as two exact interval estimators, Clopper-Pearson and Sterne, are constructed according to the two binomial responses from this two-stage design, where the binomial response at the first stage follows a truncated binomial distribution. The mean actual coverage probability and expected interval width are employed to evaluate the performance of these interval estimators. According to the comparison results, the score interval is recommended for both Simon's optimal and minimax designs.
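
    For reference, the two asymptotic intervals compared in the paper look as follows when applied naively to pooled counts; the paper's actual versions account for the truncated binomial at the first stage, which this sketch ignores:

        import math

        def wald_ci(k, n, z=1.96):
            # Wald interval for a binomial proportion
            p = k / n
            se = math.sqrt(p * (1 - p) / n)
            return p - z * se, p + z * se

        def score_ci(k, n, z=1.96):
            # Wilson score interval for a binomial proportion
            p = k / n
            denom = 1 + z**2 / n
            centre = (p + z**2 / (2 * n)) / denom
            half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
            return centre - half, centre + half

        # e.g. 11 responders among 40 patients across both stages (assumed)
        print(wald_ci(11, 40), score_ci(11, 40))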

  6. Experiment and surge analysis of centrifugal two-stage turbocharging system

    Institute of Scientific and Technical Information of China (English)

    Yituan HE; Chaochen MA

    2008-01-01

    To study a centrifugal two-stage turbocharging system's surge and its influencing factors, a special test bench was set up and the system surge test was performed. The test results indicate that the measured parameters, such as the air mass flow and rotation speed of the high pressure (HP) stage compressor, can be converted into corrected parameters under a standard condition according to the Mach number similarity criterion, because the air flow in the HP stage compressor has entered the Reynolds number (Re) auto-modeling range. Accordingly, the reasons leading to a two-stage turbocharging system's surge can be analyzed according to the corrected mass flow characteristic maps and the actual operating conditions of the HP and low pressure (LP) stage compressors.

  7. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Full Text Available Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. The initial operative procedures performed were window colostomy (n = 6), colostomy proximal to pouch (n = 4), and ligation of colovesical fistula and end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull through (APPT) of colon in eight, and pouch excision with APPT of ileum in three, were performed. The mean age at the time of the definitive procedures was 15.6 months (range 3 to 53 months) and the mean weight was 7.5 kg (range 4 to 11 kg). Good fecal continence was observed in six and fair in two cases during the follow-up period, while three of our cases were lost to follow-up. There was no mortality following the definitive procedures among the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can be performed safely with good results. Most importantly, the definitive procedure is done without a protective stoma, thereby avoiding stoma closure, stoma-related complications, the related cost of stoma closure, and hospital stay.

  8. Hybrid staging of a Lysholm positive displacement engine with two Westinghouse two stage impulse Curtis turbines

    Energy Technology Data Exchange (ETDEWEB)

    Parker, D.A.

    1982-06-01

    The University of California at Berkeley has satisfactorily tested and modeled a hybrid staged Lysholm engine (positive displacement) with a two-stage Curtis wheel turbine. The system operates in a stable manner over its operating range (0/1-3/1 water ratio, 120 psia input). Proposals are made for controlling the interstage pressure with a partial admission turbine and volume expansion to control the mass flow and pressure ratio for the Lysholm engine.

  9. Full noise characterization of a low-noise two-stage SQUID amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Falferi, P [Istituto di Fotonica e Nanotecnologie, CNR-Fondazione Bruno Kessler, 38100 Povo, Trento (Italy); Mezzena, R [INFN, Gruppo Collegato di Trento, Sezione di Padova, 38100 Povo, Trento (Italy); Vinante, A [INFN, Sezione di Padova, 35131 Padova (Italy)], E-mail: falferi@science.unitn.it

    2009-07-15

    From measurements performed on a low-noise two-stage SQUID amplifier coupled to a high-Q electrical resonator we give a complete noise characterization of the SQUID amplifier around the resonator frequency of 11 kHz in terms of additive, back action and cross-correlation noise spectral densities. The minimum noise temperature evaluated at 135 mK is 10 μK and corresponds to an energy resolution of 18ℏ.

  10. Development of a Novel Type Catalyst SY-2 for Two-Stage Hydrogenation of Pyrolysis Gasoline

    Institute of Scientific and Technical Information of China (English)

    Wu Linmei; Zhang Xuejun; Zhang Zhihua; Wang Fucun

    2004-01-01

    By using group ⅢB or group ⅦB metals and modulating the characteristics of the electric charges on the carrier surface, as well as improving the catalyst preparation process and the techniques for loading the active metal components, a novel type SY-2 catalyst earmarked for two-stage hydrogenation of pyrolysis gasoline has been developed. The catalyst evaluation results indicate that the novel catalyst is characterized by better hydrogenation activity, giving a higher aromatics yield.

  11. Investigation on a two-stage solvay refrigerator with magnetic material regenerator

    Science.gov (United States)

    Chen, Guobang; Zheng, Jianyao; Zhang, Fagao; Yu, Jianping; Tao, Zhenshi; Ding, Cenyu; Zhang, Liang; Wu, Peiyi; Long, Yi

    This paper describes experimental results showing that the no-load temperature of a two-stage Solvay refrigerator has been lowered from the original 11.5 K into the liquid helium temperature region by using magnetic regenerative material instead of lead. The structure and technological characteristics of the prototype machine are presented. The effects of operating frequency and pressure on the refrigerating temperature are discussed.

  12. Biological hydrogen production from olive mill wastewater with two-stage processes

    Energy Technology Data Exchange (ETDEWEB)

    Eroglu, Ela; Eroglu, Inci [Department of Chemical Engineering, Middle East Technical University, 06531, Ankara (Turkey); Guenduez, Ufuk; Yuecel, Meral [Department of Biology, Middle East Technical University, 06531, Ankara (Turkey); Tuerker, Lemi [Department of Chemistry, Middle East Technical University, 06531, Ankara (Turkey)

    2006-09-15

    In the present work two novel two-stage hydrogen production processes from olive mill wastewater (OMW) have been introduced. The first two-stage process involved dark-fermentation followed by a photofermentation process. Dark-fermentation by activated sludge cultures and photofermentation by Rhodobacter sphaeroides O.U.001 were both performed in 55 ml glass vessels, under anaerobic conditions. In some cases of dark-fermentation, activated sludge was initially acclimatized to the OMW to provide the adaptation of microorganisms to the extreme conditions of OMW. The highest hydrogen production potential obtained was 29 l(H2)/l(OMW) after photofermentation with 50% (v/v) effluent of dark fermentation with activated sludge. Photofermentation with 50% (v/v) effluent of dark fermentation with acclimated activated sludge had the highest hydrogen production rate (0.008 l l(-1) h(-1)). The second two-stage process involved a clay treatment step followed by photofermentation by R. sphaeroides O.U.001. Photofermentation with the effluent of the clay pretreatment process (4% (v/v)) gives the highest hydrogen production potential (35 l(H2)/l(OMW)), light conversion efficiency (0.42%) and COD conversion efficiency (52%). It was concluded that both pretreatment processes enhanced the photofermentative hydrogen production process. Moreover, hydrogen could be produced with highly concentrated OMW. Two-stage processes developed in the present investigation have a high potential for solving the environmental problems caused by OMW. (author)

  13. The two-stage aegean extension, from localized to distributed, a result of slab rollback acceleration

    OpenAIRE

    Brun, Jean-Pierre; Faccenna, Claudio; Gueydan, Frédéric; Sokoutis, Dimitrios; Philippon, Mélody; Kydonakis, Konstantinos; Gorini, Christian

    2016-01-01

    Back-arc extension in the Aegean, which was driven by slab rollback since 45 Ma, is described here for the first time in two stages. From the Middle Eocene to the Middle Miocene, deformation was localized, leading to i) the exhumation of high-pressure metamorphic rocks to crustal depths, ii) the exhumation of high-temperature metamorphic rocks in core complexes and iii) the deposition of sedimentary basins. Since the Middle Miocene, extension distributed over the whole Aegean domai...

  14. A Two-stage Discriminating Framework for Making Supply Chain Operation Decisions under Uncertainties

    OpenAIRE

    Gu, H; Rong, G

    2010-01-01

    This paper addresses the problem of making supply chain operation decisions for refineries under two types of uncertainty: demand uncertainty and incomplete information shared with suppliers and transport companies. Most of the literature focuses on only one uncertainty or treats multiple uncertainties identically. However, we note that refineries have more power to control uncertainties in procurement and transportation than in demand in the real world. Thus, a two-stage framework for dealing wit...

  15. Low-noise SQUIDs with large transfer: two-stage SQUIDs based on DROSs

    NARCIS (Netherlands)

    Podt, M.; Flokstra, Jakob; Rogalla, Horst

    2002-01-01

    We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10^5 Φ0/s and the measured white flux noise was 1.3 μΦ0/√Hz at 4.2 K. The

  16. Latent Inhibition as a Function of US Intensity in a Two-Stage CER Procedure

    Science.gov (United States)

    Rodriguez, Gabriel; Alonso, Gumersinda

    2004-01-01

    An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…

  17. Two stage dual gate MESFET monolithic gain control amplifier for Ka-band

    Science.gov (United States)

    Sokolov, V.; Geddes, J.; Contolatis, A.

    A monolithic two-stage gain control amplifier has been developed using submicron gate length dual-gate MESFETs fabricated on ion-implanted material. The amplifier has a gain of 12 dB at 30 GHz with a gain control range of over 30 dB. This ion-implanted monolithic IC is readily integrable with other phased array receiver functions such as low noise amplifiers and phase shifters.

  18. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Science.gov (United States)

    Kılıç, Bayram

    2012-07-01

    In this study, exergy analyses of a vapor compression refrigeration cycle with two stages and an intercooler using refrigerants R507, R407c and R404a were carried out. The necessary thermodynamic values for the analyses were calculated by the Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system under different operating conditions for these refrigerants were investigated. The coefficient of performance, exergetic efficiency and total irreversibility rate for the alternative refrigerants were compared.

  19. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Bayram [Mehmet Akif Ersoy University, Bucak Emin Guelmez Vocational School, Bucak, Burdur (Turkey)

    2012-07-15

    In this study, exergy analyses of a vapor compression refrigeration cycle with two stages and an intercooler using refrigerants R507, R407c and R404a were carried out. The necessary thermodynamic values for the analyses were calculated by the Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system under different operating conditions for these refrigerants were investigated. The coefficient of performance, exergetic efficiency and total irreversibility rate for the alternative refrigerants were compared. (orig.)
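
    For orientation, the two headline metrics of such an exergy analysis can be computed as in the short Python sketch below. This is a generic textbook calculation, not the Solkane-based cycle model of the paper; the cooling load, work input and temperatures are hypothetical.

    # COP and exergetic efficiency of a refrigeration cycle from the cooling
    # load Q_L (kW), compressor work W (kW), and the evaporator and ambient
    # temperatures T_L and T_0 (K).
    def cop(q_l, w):
        return q_l / w

    def exergetic_efficiency(q_l, w, t_l, t_0):
        cop_carnot = t_l / (t_0 - t_l)  # reversible (Carnot) COP between T_L and T_0
        return cop(q_l, w) / cop_carnot

    print(cop(10.0, 4.0))                                    # COP = 2.5
    print(exergetic_efficiency(10.0, 4.0, 253.15, 298.15))   # ≈ 0.44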

  20. Performance of Combined Water Turbine Darrieus-Savonius with Two Stage Savonius Buckets and Single Deflector

    OpenAIRE

    Sahim, Kaprawi; Santoso, Dyos; Sipahutar, Riman

    2016-01-01

    The objective of this study is to show the effect of single deflector plate on the performance of combined Darrieus-Savonius water turbine. In order to overcome the disadvantages of low torque of solo Darrieus turbine, a plate deflector mounted in front of returning Savonius bucket of combined water turbine composing of Darrieus and Savonius rotor has been proposed in this study. Some configurations of combined turbines with two stage Savonius rotors were experimentally tested in a river of c...

  1. Perceived Health Benefits and Soy Consumption Behavior: Two-Stage Decision Model Approach

    OpenAIRE

    Moon, Wanki; Balasubramanian, Siva K.; Rimal, Arbindra

    2005-01-01

    A two-stage decision model is developed to assess the effect of perceived soy health benefits on consumers' decisions with respect to soy food. The first stage captures whether or not to consume soy food, while the second stage reflects how often to consume. A conceptual/analytical framework is also employed, combining Lancaster's characteristics model and Fishbein's multi-attribute model. Results show that perceived soy health benefits significantly influence both decision stages. Further, c...

  2. High quantum efficiency mid-wavelength interband cascade infrared photodetectors with one and two stages

    Science.gov (United States)

    Zhou, Yi; Chen, Jianxin; Xu, Zhicheng; He, Li

    2016-08-01

    In this paper, we report on mid-wavelength infrared interband cascade photodetectors grown on InAs substrates. We studied the transport properties of the photon-generated carriers in the interband cascade structures by comparing two different detectors, a single stage detector and a two-stage cascade detector. The two-stage device showed quantum efficiency around 19.8% at room temperature, and clear optical response was measured even at a temperature of 323 K. The two detectors showed similar Johnson-noise limited detectivity. The peak detectivity of the one- and two-stage devices was measured to be 2.15 × 10^14 cm·Hz^1/2/W and 2.19 × 10^14 cm·Hz^1/2/W at 80 K, and 1.21 × 10^9 cm·Hz^1/2/W and 1.23 × 10^9 cm·Hz^1/2/W at 300 K, respectively. The 300 K background limited infrared performance (BLIP) operation temperature is estimated to be over 140 K.

  3. Development of Two-Stage Stirling Cooler for ASTRO-F

    Science.gov (United States)

    Narasaki, K.; Tsunematsu, S.; Ootsuka, K.; Kyoya, M.; Matsumoto, T.; Murakami, H.; Nakagawa, T.

    2004-06-01

    A two-stage small Stirling cooler has been developed and tested for the infrared astronomical satellite ASTRO-F that is planned to be launched by Japanese M-V rocket in 2005. ASTRO-F has a hybrid cryogenic system that is a combination of superfluid liquid helium (HeII) and two-stage Stirling coolers. The mechanical cooler has a two-stage displacer driven by a linear motor in a cold head and a new linear-ball-bearing system for the piston-supporting structure in a compressor. The linear-ball-bearing supporting system achieves the piston clearance seal, the long piston-stroke operation and the low frequency operation. The typical cooling power is 200 mW at 20 K and the total input power to the compressor and the cold head is below 90 W without driver electronics. The engineering, the prototype and the flight models of the cooler have been fabricated and evaluated to verify the capability for ASTRO-F. This paper describes the design of the cooler and the results from verification tests including cooler performance test, thermal vacuum test, vibration test and lifetime test.

  4. Performance analysis of RDF gasification in a two stage fluidized bed-plasma process.

    Science.gov (United States)

    Materazzi, M; Lettieri, P; Taylor, R; Chapman, C

    2016-01-01

    The major technical problems faced by stand-alone fluidized bed gasifiers (FBG) for waste-to-gas applications are intrinsically related to the composition and physical properties of waste materials, such as RDF. The high quantity of ash and volatile material in RDF can cause a decrease in thermal output, create high ash clinkering, and increase emission of tars and CO2, thus affecting the operability for clean syngas generation at industrial scale. By contrast, a two-stage process which separates primary gasification and selective tar and ash conversion would be inherently more forgiving and stable. This can be achieved with the use of a separate plasma converter, which has been successfully used in conjunction with conventional thermal treatment units for its ability to 'polish' the producer gas of organic contaminants and collect the inorganic fraction in a molten (and inert) state. This research focused on the performance analysis of a two-stage fluid bed gasification-plasma process to transform solid waste into clean syngas. Thermodynamic assessment using the two-stage equilibrium method was carried out to determine optimum conditions for the gasification of RDF and to understand the limitations and influence of the second stage on the process performance (gas heating value, cold gas efficiency, carbon conversion efficiency), along with other parameters. Comparison with a different thermal refining stage, i.e. thermal cracking (via partial oxidation), was also performed. The analysis is supported by experimental data from a pilot plant.

  5. Continuous removal of endocrine disruptors by versatile peroxidase using a two-stage system.

    Science.gov (United States)

    Taboada-Puig, Roberto; Lu-Chau, Thelmo A; Eibes, Gemma; Feijoo, Gumersindo; Moreira, Maria T; Lema, Juan M

    2015-01-01

    The oxidant Mn(3+)-malonate, generated by the ligninolytic enzyme versatile peroxidase in a two-stage system, was used for the continuous removal of endocrine disrupting compounds (EDCs) from synthetic and real wastewaters. One plasticizer (bisphenol-A), one bactericide (triclosan) and three estrogenic compounds (estrone, 17β-estradiol, and 17α-ethinylestradiol) were removed from wastewater at degradation rates in the range of 28-58 µg/L·min, with low enzyme inactivation. First, the optimization of three main parameters affecting the generation of Mn(3+)-malonate (hydraulic retention time as well as Na-malonate and H2O2 feeding rates) was conducted following a response surface methodology (RSM). Under optimal conditions, the degradation of the EDCs was proven at high (1.3-8.8 mg/L) and environmental (1.2-6.1 µg/L) concentrations. Finally, when the two-stage system was compared with a conventional enzymatic membrane reactor (EMR) using the same enzyme, a 14-fold increase of the removal efficiency was observed. At the same time, operational problems found during EDC removal in the EMR system (e.g., clogging of the membrane and enzyme inactivation) were avoided by physically separating the stages of complex formation and pollutant oxidation, allowing the system to be operated for a longer period (∼8 h). This study demonstrates the feasibility of the two-stage enzymatic system for removing EDCs both at high and environmental concentrations.

  6. A two-stage Stirling-type pulse tube cryocooler with a cold inertance tube

    Science.gov (United States)

    Gan, Z. H.; Fan, B. Y.; Wu, Y. Z.; Qiu, L. M.; Zhang, X. J.; Chen, G. B.

    2010-06-01

    A thermally coupled two-stage Stirling-type pulse tube cryocooler (PTC) with inertance tubes as phase shifters has been designed, manufactured and tested. In order to obtain a larger phase shift at the low acoustic power of about 2.0 W, a cold inertance tube as well as a cold reservoir for the second stage, precooled by the cold end of the first stage, was introduced into the system. The transmission line model was used to calculate the phase shift produced by the cold inertance tube. The effect of regenerator material, geometry and charging pressure on the performance of the second stage of the two-stage PTC was investigated based on the well-known regenerator model REGEN. Experiments on the two-stage PTC were carried out with an emphasis on the performance of the second stage. A lowest cooling temperature of 23.7 K and 0.50 W at 33.9 K were obtained with an input electric power of 150.0 W and an operating frequency of 40 Hz.

  7. Rehabilitation outcomes in patients with early and two-stage reconstruction of flexor tendon injuries.

    Science.gov (United States)

    Sade, Ilgin; İnanir, Murat; Şen, Suzan; Çakmak, Esra; Kablanoğlu, Serkan; Selçuk, Barin; Dursun, Nigar

    2016-08-01

    [Purpose] The primary aim of this study was to assess rehabilitation outcomes for early and two-stage repair of hand flexor tendon injuries. The secondary purpose was to compare the findings between treatment groups. [Subjects and Methods] Twenty-three patients were included in this study. The early repair (n=14) and two-stage repair (n=9) groups were included in a rehabilitation program that used hand splints. This retrospective study evaluated patients according to their demographic characteristics, including age, gender, injured hand, dominant hand, cause of injury, zone of injury, number of affected fingers, and accompanying injuries. Pain, range of motion, and grip strength were evaluated using a visual analog scale, goniometer, and dynamometer, respectively. [Results] Both groups showed significant improvements in pain and finger flexion after treatment compared with baseline measurements. However, no significant differences were observed between the two treatment groups. Similar results were obtained for grip strength and pinch grip, whereas gross grip was better in the early tendon repair group. [Conclusion] Early and two-stage reconstruction of flexor tendon injuries can be performed with similarly favorable outcomes given an effective rehabilitation program.

  8. A Comparison of Direct and Two-Stage Transportation of Patients to Hospital in Poland

    Directory of Open Access Journals (Sweden)

    Anna Rosiek

    2015-04-01

    Background: The rapid international expansion of telemedicine reflects the growth of technological innovations. This technological advancement is transforming the way in which patients can receive health care. Materials and Methods: The study was conducted in Poland, at the Department of Cardiology of the Regional Hospital of Louis Rydygier in Torun. The researchers analyzed the delay in the treatment of patients with acute coronary syndrome. The study was conducted as a survey and examined 67 consecutively admitted patients treated invasively in a two-stage transport system. Data were analyzed statistically. Results: Two-stage transportation does not meet the timeframe guidelines for the treatment of patients with acute myocardial infarction. Intervals for the analyzed group of patients were statistically significant (p < 0.0001). Conclusions: Direct transportation of the patient to a reference center with an interventional cardiology laboratory has a significant impact on reducing in-hospital delay for patients with acute coronary syndrome. Perspectives: This article presents the results of two-stage transportation of patients with acute coronary syndrome. This measure could help clinicians who seek to assess the time needed for intervention. It also shows how important the time from the onset of chest pain is, and that it may determine patient disability, death or well-being.

  9. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, the TPMHV can be considered a safe and effective alternative to the standard TPCS.

  10. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
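
    The contrast between the two estimators is easy to see in a small simulation. Below is a minimal Python sketch with numpy/statsmodels; the data-generating process and variable names are invented for illustration, not taken from the paper.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5000
    z = rng.normal(size=n)                    # instrument
    u = rng.normal(size=n)                    # unobserved confounder
    x = 0.8 * z + u + rng.normal(size=n)      # endogenous regressor
    p = 1.0 / (1.0 + np.exp(-(0.5 * x + u)))  # outcome depends on x AND u
    y = rng.binomial(1, p)

    # Stage 1: regress the endogenous variable on the instrument(s)
    stage1 = sm.OLS(x, sm.add_constant(z)).fit()
    resid = x - stage1.fittedvalues

    # Stage 2 (2SRI): keep x itself and ADD the first-stage residual
    X2 = sm.add_constant(np.column_stack([x, resid]))
    twosri = sm.Logit(y, X2).fit(disp=0)
    print(twosri.params)  # [const, x, residual]; the x coefficient is the 2SRI estimate

    # 2SPS would instead REPLACE x with stage1.fittedvalues in the second
    # stage, which the paper shows to be inconsistent in nonlinear models.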

  11. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.

  12. Industrial demonstration plant for the gasification of herb residue by fluidized bed two-stage process.

    Science.gov (United States)

    Zeng, Xi; Shao, Ruyi; Wang, Fang; Dong, Pengwei; Yu, Jian; Xu, Guangwen

    2016-04-01

    A fluidized bed two-stage gasification process, consisting of a fluidized-bed (FB) pyrolyzer and a transport fluidized bed (TFB) gasifier, has been proposed to gasify biomass for fuel gas production with low tar content. On the basis of our previous fundamental study, an autothermal two-stage gasifier has been designed and built to gasify a Chinese herb residue with a treating capacity of 600 kg/h. The testing data in the operationally stable stage of the industrial demonstration plant showed that when keeping the reaction temperatures of the pyrolyzer and gasifier at about 700 °C and 850 °C, respectively, the heating value of the fuel gas can reach 1200 kcal/Nm³, and the tar content in the produced fuel gas was about 0.4 g/Nm³. The results from this pilot industrial demonstration plant fully verified the feasibility and technical features of the proposed FB two-stage gasification process.
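
    As a quick plausibility check on numbers like the 1200 kcal/Nm³ quoted above, the lower heating value of a producer gas can be estimated as a mole-fraction-weighted sum of component heating values. A sketch follows; the gas composition is hypothetical and the LHV constants are rounded standard values, not data from the plant.

    # Rough LHV of a producer gas from its dry volumetric composition.
    LHV = {"H2": 2570, "CO": 3020, "CH4": 8550}   # kcal/Nm3, approximate values

    # Hypothetical composition (mole fractions); N2 and CO2 contribute nothing.
    gas = {"H2": 0.15, "CO": 0.18, "CH4": 0.03, "N2": 0.48, "CO2": 0.16}

    lhv_gas = sum(frac * LHV[k] for k, frac in gas.items() if k in LHV)
    print(f"LHV ≈ {lhv_gas:.0f} kcal/Nm3")        # ~1190 kcal/Nm3 for this mix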

  13. Study on two stage activated carbon/HFC-134a based adsorption chiller

    Science.gov (United States)

    Habib, K.

    2013-06-01

    In this paper, a theoretical analysis of the performance of a thermally driven two-stage four-bed adsorption chiller utilizing low-grade waste heat at temperatures between 50°C and 70°C, in combination with a heat sink (cooling water) of 30°C, for air-conditioning applications is described. The activated carbon (AC) Maxsorb III/HFC-134a pair has been examined as the adsorbent/refrigerant pair. A FORTRAN simulation program is developed to analyze the influence of operating conditions (hot and cooling water temperatures and adsorption/desorption cycle times) on the cycle performance in terms of cooling capacity and COP. The main advantage of this two-stage chiller is that it can be operational with smaller regenerating temperature lifts than other heat-driven single-stage chillers. Simulation results show that the two-stage chiller can be operated effectively with heat sources of 50°C and 70°C in combination with a coolant at 30°C.

  14. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu, E-mail: zhanglu1211@gmail.com; Sun, Xiangyang, E-mail: xysunbjfu@gmail.com

    2015-05-15

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  15. A Two-stage injection-locked magnetron for accelerators with superconducting cavities

    CERN Document Server

    Kazakevich, Grigory; Flanagan, Gene; Marhauser, Frank; Neubauer, Mike; Yakovlev, Vyacheslav; Chase, Brian; Nagaitsev, Sergey; Pasquinelli, Ralph; Solyak, Nikolay; Tupikov, Vitali; Wolff, Daniel

    2013-01-01

    A concept for a two-stage injection-locked CW magnetron intended to drive Superconducting Cavities (SC) for intensity-frontier accelerators has been proposed. The concept considers two magnetrons in which the output power differs by 15-20 dB and the lower power magnetron being frequency-locked from an external source locks the higher power magnetron. The injection-locked two-stage CW magnetron can be used as an RF power source for Fermilab's Project-X to feed separately each of the 1.3 GHz SC of the 8 GeV pulsed linac. We expect output/locking power ratio of about 30-40 dB assuming operation in a pulsed mode with pulse duration of ~ 8 ms and repetition rate of 10 Hz. The experimental setup of a two-stage magnetron utilising CW, S-band, 1 kW tubes operating at pulse duration of 1-10 ms, and the obtained results are presented and discussed in this paper.

  16. Study on the Control Algorithm of Two-Stage DC-DC Converter for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Changhao Piao

    2014-01-01

    Fast response, high efficiency, and good reliability are very important characteristics of electric vehicle (EV) dc/dc converters. The two-stage dc-dc converter is one of the dc-dc topologies that can offer these characteristics to EVs. Presently, nonlinear control is an active area of research in the field of control algorithms for dc-dc converters. However, very few papers study the two-stage converter for EVs. In this paper, a fixed switching frequency sliding mode (FSFSM) controller and a double-integral sliding mode (DISM) controller for the two-stage dc-dc converter are proposed, and a conventional linear (lag) controller is chosen as the comparison. The performances of the proposed FSFSM controller are compared with those obtained by the lag controller. In consequence, the satisfactory simulation and experiment results show that the FSFSM controller is capable of offering good large-signal operation with fast dynamic responses to the converter. At last, some other simulation results are presented to prove that the DISM controller is a promising method for the converter to eliminate the steady-state error.

  17. Sample size for cluster randomized trials: effect of coefficient of variation of cluster size and analysis method.

    Science.gov (United States)

    Eldridge, Sandra M; Ashby, Deborah; Kerry, Sally

    2006-10-01

    Cluster randomized trials are increasingly popular. In many of these trials, cluster sizes are unequal. This can affect trial power, but standard sample size formulae for these trials ignore this. Previous studies addressing this issue have mostly focused on continuous outcomes or methods that are sometimes difficult to use in practice. We show how a simple formula can be used to judge the possible effect of unequal cluster sizes for various types of analyses and both continuous and binary outcomes. We explore the practical estimation of the coefficient of variation of cluster size required in this formula and demonstrate the formula's performance for a hypothetical but typical trial randomizing UK general practices. The simple formula provides a good estimate of sample size requirements for trials analysed using cluster-level analyses weighting by cluster size and a conservative estimate for other types of analyses. For trials randomizing UK general practices the coefficient of variation of cluster size depends on variation in practice list size, variation in incidence or prevalence of the medical condition under examination, and practice and patient recruitment strategies, and for many trials is expected to be approximately 0.65. Individual-level analyses can be noticeably more efficient than some cluster-level analyses in this context. When the coefficient of variation is <0.23, the effect of adjustment for variable cluster size on sample size is negligible. Most trials randomizing UK general practices and many other cluster randomized trials should account for variable cluster size in their sample size calculations.
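
    The role played by the coefficient of variation of cluster size can be illustrated with the widely cited inflation-factor approximation, design effect ≈ 1 + ((cv² + 1)·m̄ − 1)·ρ, where m̄ is the mean cluster size and ρ the intracluster correlation. The Python sketch below uses hypothetical inputs and this common approximation, which may differ in detail from the formula of the paper; it shows why cv < 0.23 is said to have a negligible effect.

    # Sample-size inflation ("design effect") for a cluster randomized trial
    # with mean cluster size m_bar, coefficient of variation of cluster size
    # cv, and intracluster correlation coefficient icc.
    def design_effect(m_bar, cv, icc):
        return 1 + ((cv**2 + 1) * m_bar - 1) * icc

    n_individual = 400            # individuals needed under simple randomization
    m_bar, icc = 20, 0.05
    for cv in (0.0, 0.23, 0.65):  # equal clusters vs typical UK general practices
        deff = design_effect(m_bar, cv, icc)
        print(f"cv={cv:.2f}: design effect {deff:.2f}, "
              f"total n ≈ {n_individual * deff:.0f}")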

  18. PSP_MCSVM: brainstorming consensus prediction of protein secondary structures using two-stage multiclass support vector machines.

    Science.gov (United States)

    Chatterjee, Piyali; Basu, Subhadip; Kundu, Mahantapas; Nasipuri, Mita; Plewczynski, Dariusz

    2011-09-01

    Secondary structure prediction is a crucial task for understanding the variety of protein structures and performed biological functions. Prediction of secondary structures for new proteins using their amino acid sequences is of fundamental importance in bioinformatics. We propose a novel technique to predict protein secondary structures based on position-specific scoring matrices (PSSMs) and physico-chemical properties of amino acids. It is a two stage approach involving multiclass support vector machines (SVMs) as classifiers for three different structural conformations, viz., helix, sheet and coil. In the first stage, PSSMs obtained from PSI-BLAST and five specially selected physicochemical properties of amino acids are fed into SVMs as features for sequence-to-structure prediction. Confidence values for forming helix, sheet and coil that are obtained from the first stage SVM are then used in the second stage SVM for performing structure-to-structure prediction. The two-stage cascaded classifiers (PSP_MCSVM) are trained with proteins from RS126 dataset. The classifiers are finally tested on target proteins of critical assessment of protein structure prediction experiment-9 (CASP9). PSP_MCSVM with brainstorming consensus procedure performs better than the prediction servers like Predator, DSC, SIMPA96, for randomly selected proteins from CASP9 targets. The overall performance is found to be comparable with the current state-of-the art. PSP_MCSVM source code, train-test datasets and supplementary files are available freely in public domain at: http://sysbio.icm.edu.pl/secstruct and http://code.google.com/p/cmater-bioinfo/
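
    A toy sketch of the two-stage cascade idea in Python/scikit-learn follows. Synthetic features stand in for the PSSM and physico-chemical inputs, and labels 0/1/2 stand in for helix/sheet/coil; the real PSP_MCSVM feeds windows of stage-1 confidence values to the second stage, whereas this minimal version passes the confidences directly.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for per-residue PSSM + physico-chemical features.
    X, y = make_classification(n_samples=2000, n_features=25, n_informative=10,
                               n_classes=3, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Stage 1: sequence-to-structure SVM producing class confidences.
    svm1 = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
    conf_tr, conf_te = svm1.predict_proba(X_tr), svm1.predict_proba(X_te)

    # Stage 2: structure-to-structure SVM fed with stage-1 confidences.
    svm2 = SVC().fit(conf_tr, y_tr)
    print("cascade accuracy:", svm2.score(conf_te, y_te))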

  19. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    Science.gov (United States)

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
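
    The sampling schemes compared above are simple to express in code. A sketch of systematic random sampling of 50 nuclei follows (Python; the nuclear-area distribution is simulated, not patient data).

    import numpy as np

    rng = np.random.default_rng(1)
    areas = rng.lognormal(mean=4.0, sigma=0.5, size=1000)  # simulated nuclear areas

    def systematic_random_sample(values, n, rng):
        """Every k-th item from a random start, with k = len(values) // n."""
        k = len(values) // n
        start = rng.integers(k)
        return values[start::k][:n]

    sample = systematic_random_sample(areas, 50, rng)
    print("MNA  =", sample.mean())          # mean nuclear area
    print("SDNA =", sample.std(ddof=1))     # standard deviation of nuclear area
    # An 'at convenience' sample that unconsciously skips the smallest and
    # largest nuclei would bias SDNA downwards, as the study observed.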

  20. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    Energy Technology Data Exchange (ETDEWEB)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L., E-mail: costarid@upatras.gr [Department of Medical Physics, School of Medicine, University of Patras, Patras 26504 (Greece); Vassiou, K. [Department of Anatomy, School of Medicine, University of Thessaly, Larissa 41500 (Greece)

    2015-10-15

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter and intraobserver agreements was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists’ segmentations quantitatively by two distance metrics (Hausdorff distance—HDIST{sub cluster}, average of minimum distance—AMINDIST{sub cluster}) and the area overlap measure (AOM{sub cluster}). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed in a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under receiver operating characteristic curve (Az ± Standard Error) utilizing

  1. Alcohol consumption and metabolic syndrome among Shanghai adults: A randomized multistage stratified cluster sampling investigation

    Institute of Scientific and Technical Information of China (English)

    Jian-Gao Fan; Xiao-Bu Cai; Lui Li; Xing-Jian Li; Fei Dai; Jun Zhu

    2008-01-01

    AIM: To examine the relations of alcohol consumption to the prevalence of metabolic syndrome in Shanghai adults. METHODS: We performed a cross-sectional analysis of data from the randomized multistage stratified cluster sampling of Shanghai adults, who were evaluated for alcohol consumption and each component of metabolic syndrome, using the adapted U.S. National Cholesterol Education Program criteria. Current alcohol consumption was defined as drinking alcohol more than once per month. RESULTS: The study population consisted of 3953 participants (1524 men) with a mean age of 54.3 ± 12.1 years. Among them, 448 subjects (11.3%) were current alcohol drinkers, including 405 males and 43 females. After adjustment for age and sex, the prevalence of current alcohol drinking and metabolic syndrome in the general population of Shanghai was 13.0% and 15.3%, respectively. Compared with nondrinkers, the prevalence of hypertriglyceridemia and hypertension was higher while the prevalence of abdominal obesity, low serum high-density-lipoprotein cholesterol (HDL-C) and diabetes mellitus was lower in subjects who consumed alcohol twice or more per month, with a trend toward reducing the prevalence of metabolic syndrome. Among the current alcohol drinkers, systolic blood pressure, HDL-C, fasting plasma glucose, and prevalence of hypertriglyceridemia tended to increase with increased alcohol consumption. However, low-density-lipoprotein cholesterol concentration and the prevalence of abdominal obesity, low serum HDL-C and metabolic syndrome showed a tendency to decrease. Moreover, these statistically significant differences were independent of gender and age. CONCLUSION: Current alcohol consumption is associated with a lower prevalence of metabolic syndrome irrespective of alcohol intake (g/d), and has a favorable influence on HDL-C, waist circumference, and possibly diabetes mellitus. However, alcohol intake increases the likelihood of hypertension, hypertriglyceridemia and hyperglycemia.

  2. An improved two stages dynamic programming/artificial neural network solution model to the unit commitment of thermal units

    Energy Technology Data Exchange (ETDEWEB)

    Abbasy, N.H. [College of Technological Studies, Shuwaikh (Kuwait); Elfayoumy, M.K. [Univ. of Alexandria (Egypt). Dept. of Electrical Engineering

    1995-11-01

    An improved two-stage solution model for the unit commitment of thermal units is developed in this paper. In the first stage a pre-schedule is generated using a high-quality trained artificial neural network (ANN). A dynamic programming (DP) algorithm is implemented and applied in the second stage for the final determination of the commitment states. The developed solution model avoids the complications imposed by the generation of the variable window structure proposed by other techniques. A unified approach for the treatment of the ANN is also developed in the paper. The validity of the proposed technique is proved via numerical applications to both sample and small practical power systems. 12 refs, 9 tabs

  3. Randomization tests

    CERN Document Server

    Edgington, Eugene

    2007-01-01

    Contents include: Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani...
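
    The core idea of a randomization test, as covered by the chapters above, fits in a few lines: compute the observed statistic, re-randomize the group labels many times, and count how often the re-randomized statistic is at least as extreme. A Python sketch with made-up scores:

    import numpy as np

    rng = np.random.default_rng(0)
    a = np.array([5.1, 4.8, 6.2, 5.9, 5.4])   # made-up treatment scores
    b = np.array([4.2, 4.9, 4.1, 5.0, 4.4])   # made-up control scores

    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])

    n_perm, count = 10000, 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)          # re-randomize group labels
        diff = perm[:len(a)].mean() - perm[len(a):].mean()
        if abs(diff) >= abs(observed):
            count += 1
    print("two-sided p ≈", count / n_perm)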

  4. A two stage algorithm for target and suspect analysis of produced water via gas chromatography coupled with high resolution time of flight mass spectrometry.

    Science.gov (United States)

    Samanipour, Saer; Langford, Katherine; Reid, Malcolm J; Thomas, Kevin V

    2016-09-09

    Gas chromatography coupled with high resolution time of flight mass spectrometry (GC-HR-TOFMS) has gained popularity for the target and suspect analysis of complex samples. However, confident detection of target/suspect analytes in complex samples, such as produced water, remains a challenging task. Here we report on the development and validation of a two-stage algorithm for the confident target and suspect analysis of produced water extracts. We performed both target and suspect analysis for 48 standards, which were a mixture of 28 aliphatic hydrocarbons and 20 alkylated phenols, in 3 produced water extracts. The two-stage algorithm produces a chemical standard database of spectra in the first stage, which is used for target and suspect analysis during the second stage. The first stage is carried out in five steps via an algorithm here referred to as the unique ion extractor (UIE): during the first step, the m/z values in the spectrum of a standard that do not belong to that standard are removed in order to produce a clean spectrum, and during the last step the cleaned spectrum is calibrated. The Dot-product algorithm, during the second stage, uses the cleaned and calibrated spectra of the standards for both target and suspect analysis. To validate the two-stage algorithm, we performed the target analysis of the 48 standards in all 3 samples via conventional methods. The two-stage algorithm was demonstrated to be more robust, reliable, and less sensitive to the signal-to-noise ratio (S/N) than the conventional method. The Dot-product algorithm showed a lower potential for producing false positives than the conventional methods when dealing with complex samples. We also evaluated the effect of mass accuracy on the performance of the Dot-product algorithm. Our results indicate the crucial importance of HR-MS data and mass accuracy for confident suspect analysis in complex samples.
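
    The Dot-product matching stage can be sketched compactly. In the Python snippet below the spectra are invented and binned to unit mass for simplicity, whereas the paper works with high-resolution, mass-calibrated spectra binned within a mass-accuracy tolerance.

    import numpy as np

    def dot_product_score(lib, sample):
        """Cosine (dot-product) similarity between two centroided spectra,
        each given as a {m/z: intensity} mapping on a common binning."""
        mz = sorted(set(lib) | set(sample))
        u = np.array([lib.get(m, 0.0) for m in mz])
        v = np.array([sample.get(m, 0.0) for m in mz])
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    library_spectrum = {57: 100.0, 71: 62.0, 85: 40.0}            # hypothetical standard
    sample_spectrum = {57: 95.0, 71: 58.0, 85: 35.0, 91: 20.0}    # hypothetical extract

    print(round(dot_product_score(library_spectrum, sample_spectrum), 3))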

  5. Albumin to creatinine ratio in a random urine sample: Correlation with severity of preeclampsia

    Directory of Open Access Journals (Sweden)

    Fady S. Moiety

    2014-06-01

    Conclusions: Random urine ACR may be a reliable method for prediction and assessment of severity of preeclampsia. Using the estimated cut-off may add to the predictive value of such a simple quick test.

  6. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

    The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade and it continues. The purpose of this paper is to direct the focus back on how the power is processed and not so much on the number of stages or the amount of power processed. The performance of the basic DC/DC topologies is reviewed with focus on the component stress. The knowledge obtained in this process is used to review some examples of the alternative PFC solutions and compare these solutions with the basic two-stage PFC solution.

  7. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest in extending coverage of a minimum wage to the non-union sector. Furthermore, the union sector does not seek to increase the non-union wage to a level above the market-clearing wage. In fact, it is optimal for the union sector to impose a market-clearing wage on the non-union sector. Finally, coverage...

  8. SQL/JavaScript Hybrid Worms As Two-stage Quines

    CERN Document Server

    Orlicki, José I

    2009-01-01

    Delving into present trends and anticipating future malware trends, a hybrid self-replicating worm, SQL on the server side and JavaScript on the client side, based on two-stage quines was designed and implemented on an ad-hoc scenario instantiating a very common software pattern. The proof-of-concept code combines techniques seen in the wild, in the form of SQL injections leading to cross-site scripting JavaScript inclusion, and seen in the laboratory, in the form of SQL quines propagated via RFIDs, resulting in a hybrid code injection. General features of hybrid worms are also discussed.

  9. Two stage DOA and Fundamental Frequency Estimation based on Subspace Techniques

    DEFF Research Database (Denmark)

    Zhou, Zhenhua; Christensen, Mads Græsbøll; So, Hing-Cheung

    2012-01-01

    In this paper, the problem of fundamental frequency and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is addressed. The estimation procedure consists of two stages. Firstly, by making use of the subspace technique and Markov-based eigenanalysis, a multi-channel optimally weighted harmonic multiple signal classification (MCOW-HMUSIC) estimator is devised for the estimation of fundamental frequencies. Secondly, the spatio-temporal multiple signal classification (ST-MUSIC) estimator is proposed for the estimation of DOA with the estimated frequencies. Statistical evaluation with synthetic signals shows the high accuracy of the proposed methods compared with their non-weighting versions.
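
    For background, the subspace idea underlying both estimators can be sketched with plain MUSIC for a single complex sinusoid. This is the textbook algorithm in Python/numpy, not the MCOW-HMUSIC or ST-MUSIC weighting of the paper, and all signal parameters are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    N, m, p, f0 = 256, 32, 1, 0.1      # samples, snapshot length, #sinusoids, true freq
    n = np.arange(N)
    x = np.exp(2j * np.pi * f0 * n) + 0.1 * (rng.normal(size=N) + 1j * rng.normal(size=N))

    # Sample covariance from overlapping snapshots
    snaps = np.array([x[i:i + m] for i in range(N - m)])
    R = snaps.conj().T @ snaps / len(snaps)

    # Noise subspace = eigenvectors of the m - p smallest eigenvalues
    w, V = np.linalg.eigh(R)           # eigh sorts eigenvalues ascending
    En = V[:, :m - p]

    # MUSIC pseudospectrum peaks where steering vectors are orthogonal
    # to the noise subspace
    freqs = np.linspace(0.0, 0.5, 2001)
    A = np.exp(2j * np.pi * np.outer(np.arange(m), freqs))
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    print("estimated frequency:", freqs[np.argmax(P)])   # ≈ 0.1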

  10. Performance of the SITP 35K two-stage Stirling cryocooler

    Science.gov (United States)

    Liu, Dongyu; Li, Ao; Li, Shanshan; Wu, Yinong

    2010-04-01

    This paper presents the design, development, optimization experiments and performance of the SITP two-stage Stirling cryocooler. The geometry of the cooler, especially the diameter and length of the regenerator, was analyzed. Operating parameters were optimized experimentally to maximize the second-stage cooling performance. In the tests the cooler was operated at various drive frequencies, phase shifts between displacer and piston, and fill pressures. The experimental results indicate that the cryocooler has a high efficiency, with a performance of 0.85 W at 35 K for a compressor input power of 56 W, at a phase shift of 65°, an operating frequency of 40 Hz and 1 MPa fill pressure.

  11. Two-Stage Bulk Electron Heating in the Diffusion Region of Anti-Parallel Symmetric Reconnection

    CERN Document Server

    Le, Ari; Daughton, William

    2016-01-01

    Electron bulk energization in the diffusion region during anti-parallel symmetric reconnection entails two stages. First, the inflowing electrons are adiabatically trapped and energized by an ambipolar parallel electric field. Next, the electrons gain energy from the reconnection electric field as they undergo meandering motion. These collisionless mechanisms have been described previously, and they lead to highly-structured electron velocity distributions. Nevertheless, a simplified control-volume analysis gives estimates for how the net effective heating scales with the upstream plasma conditions, in agreement with fully kinetic simulations and spacecraft observations.

  12. Use of two-stage membrane countercurrent cascade for natural gas purification from carbon dioxide

    Science.gov (United States)

    Kurchatov, I. M.; Laguntsov, N. I.; Karaseva, M. D.

    2016-09-01

    A membrane technology scheme, configured as a two-stage countercurrent recirculating cascade, is proposed in order to solve the problem of natural gas dehydration and purification from CO2. The first stage is a single divider, and the second stage is a recirculating two-module divider. This scheme allows natural gas to be cleaned of impurities with any desired degree of methane extraction. In this paper, the optimal values of the basic parameters of the selected technological scheme are determined. An estimation of energy efficiency was carried out, taking into account the energy consumption of the interstage compressor and methane losses in energy units.

  13. Forecasting long memory series subject to structural change: A two-stage approach

    DEFF Research Database (Denmark)

    Papailias, Fotis; Dias, Gustavo Fruet

    2015-01-01

    A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural change and yields good forecasting results.
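
    A minimal Python/numpy sketch of the two steps follows. The series is simulated, the memory parameter d is taken as given rather than estimated, and the short-memory forecaster is a naive mean where the paper would fit a proper model.

    import numpy as np

    def frac_diff_weights(d, n):
        """Binomial-expansion weights of (1 - B)**d, with w[0] = 1."""
        w = np.ones(n)
        for k in range(1, n):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return w

    def frac_diff(x, d):
        """Apply (1 - B)**d to a series (expanding-window convolution)."""
        w = frac_diff_weights(d, len(x))
        return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

    rng = np.random.default_rng(0)
    x = np.cumsum(rng.normal(size=500)) * 0.1   # stand-in long-memory series
    d_hat = 0.4                                  # taken as given here

    # Step 1: fractionally difference to get the weakly dependent series
    u = frac_diff(x, d_hat)

    # Step 2: forecast u with a short-memory model (naive mean here), then
    # cumulate back with (1 - B)**(-d) to get the long-memory forecast.
    u_ext = np.r_[u, np.full(10, u.mean())]
    x_forecast = frac_diff(u_ext, -d_hat)[-10:]
    print(x_forecast)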

  14. Space Station Freedom carbon dioxide removal assembly two-stage rotary sliding vane pump

    Science.gov (United States)

    Matteau, Dennis

    1992-07-01

    The design and development of a positive displacement pump selected to operate as an essential part of the carbon dioxide removal assembly (CDRA) are described. An oilless two-stage rotary sliding vane pump was selected as the optimum concept to meet the CDRA application requirements. This positive displacement pump is characterized by low weight and a small envelope per unit flow, the ability to pump saturated gases and moderate amounts of liquid, small clearance volumes, and low vibration. It is easily modified to accommodate several stages on a single shaft, optimizing space and weight, which makes the concept ideal for a range of demanding space applications.

  15. Two-Stage Maximum Likelihood Estimation (TSMLE for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.
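
    As a baseline for the kind of BER expressions derived in the paper, the standard closed form for Gray-coded QPSK over flat Rayleigh fading is easy to evaluate. The Python snippet below gives only this single-user baseline and ignores the multitone and multiple-access interference the paper models.

    import numpy as np

    def ber_qpsk_rayleigh(gamma_b):
        """Average BER of Gray-coded QPSK over flat Rayleigh fading,
        with gamma_b the average SNR per bit (linear scale)."""
        return 0.5 * (1.0 - np.sqrt(gamma_b / (1.0 + gamma_b)))

    for snr_db in (0, 10, 20):
        g = 10.0 ** (snr_db / 10.0)
        print(f"{snr_db:2d} dB: BER ≈ {ber_qpsk_rayleigh(g):.3e}")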

  16. Two-Stage Electric Vehicle Charging Coordination in Low Voltage Distribution Grids

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    Increased environmental awareness in recent years has encouraged rapid growth of renewable energy sources (RESs), especially solar PV and wind. One of the effective solutions to compensate for intermittencies in generation from the RESs is to enable consumer participation in demand response (DR). Being sizable rated elements, electric vehicles (EVs) can offer a great deal of demand flexibility in future intelligent grids. This paper first investigates and analyzes the driving patterns and charging requirements of EVs. Secondly, a two-stage charging algorithm, namely local adaptive control...

  17. Health care planning and education via gaming-simulation: a two-stage experiment.

    Science.gov (United States)

    Gagnon, J H; Greenblat, C S

    1977-01-01

    A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.

  18. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and are mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly addresses the behaviour of systems like this. Contrary to common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...

  19. The global stability of a delayed predator-prey system with two stage-structure

    Energy Technology Data Exchange (ETDEWEB)

    Wang Fengyan [College of Science, Jimei University, Xiamen Fujian 361021 (China)], E-mail: wangfy68@163.com; Pang Guoping [Department of Mathematics and Computer Science, Yulin Normal University, Yulin Guangxi 537000 (China)

    2009-04-30

    Based on the classical delayed stage-structured model and Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system, where prey and predator have two stages, an immature stage and a mature stage. The time delays are the time lengths between the immature's birth and maturity of prey and predator species. Results on global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  20. Biomass waste gasification - can be the two stage process suitable for tar reduction and power generation?

    Science.gov (United States)

    Sulc, Jindřich; Stojdl, Jiří; Richter, Miroslav; Popelka, Jan; Svoboda, Karel; Smetana, Jiří; Vacek, Jiří; Skoblja, Siarhei; Buryan, Petr

    2012-04-01

    A pilot scale gasification unit with a novel co-current updraft arrangement in the first stage and counter-current downdraft in the second stage was developed and exploited for studying the effects of two-stage gasification of biomass (wood pellets), in comparison with one-stage gasification, on fuel gas composition and attainable gas purity. Significant producer gas parameters (gas composition, heating value, content of tar compounds, content of inorganic gas impurities) were compared for the two-stage and the one-stage gasification arrangement with only an upward moving bed (co-current updraft). The main novel features of the gasifier conception include a grate-less reactor, an upward moving bed of biomass particles (e.g. pellets) by means of a screw elevator with changeable rotational speed, and a gradually expanding diameter of the cylindrical reactor in the part above the upper end of the screw. The gasifier concept and arrangement are considered convenient for the thermal power range of 100-350 kW(th). The second stage of the gasifier served mainly for tar compound destruction/reforming at increased temperature (around 950°C) and for the gasification reaction of the fuel gas with char. The second stage used additional combustion of the fuel gas by preheated secondary air to attain higher temperature and faster gasification of the remaining char from the first stage. The measurements of gas composition and tar compound contents confirmed the superiority of the two-stage gasification system: a drastic decrease, by 1-2 orders of magnitude, of aromatic compounds with two or more benzene rings. On the other hand, the two-stage gasification (with overall ER=0.71) led to a substantial reduction of gas heating value (LHV=3.15 MJ/Nm³), an elevation of gas volume, and an increase of nitrogen content in the fuel gas. The increased temperature (>950°C) at the entrance to the char bed also caused a substantial decrease of ammonia content in the fuel gas. The char with higher content of ash leaving the...

  1. Two-stage continuous fermentation of Saccharomycopsis fibuligera and Candida utilis.

    Science.gov (United States)

    Admassu, W; Korus, R A; Heimsch, R C

    1983-11-01

    Biomass production and carbohydrate reduction were determined for a two-stage continuous fermentation process with a simulated potato processing waste feed. The amylolytic yeast Saccharomycopsis fibuligera was grown in the first stage and a mixed culture of S. fibuligera and Candida utilis was maintained in the second stage. All conditions for the first and second stages were fixed, except that the flow of medium to the second stage was varied. Maximum biomass production occurred at a second-stage dilution rate, D2, of 0.27 h⁻¹. Carbohydrate reduction was inversely proportional to D2 between 0.10 and 0.35 h⁻¹.

  2. Structural requirements and basic design concepts for a two-stage winged launcher system (Saenger)

    Science.gov (United States)

    Kuczera, H.; Keller, K.; Kunz, R.

    1988-10-01

    An evaluation is made of materials and structures technologies deemed capable of increasing the mass fraction-to-orbit of the Saenger two-stage launcher system while adequately addressing thermal-control and cryogenic fuel storage insulation problems. Except in its leading edges, nose cone, and airbreathing propulsion system air intakes, Ti alloy-based materials will be the basis of the airframe primary structure. Lightweight metallic thermal-protection measures will be employed. Attention is given to the design of the large lower stage element of Saenger.

  3. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    OpenAIRE

    Ladan Jamshidy; Hamid Reza Mozaffari; Payam Faraji; Roohollah Sharifi

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, representing the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with ...

  4. An Investigation on the Formation of Carbon Nanotubes by Two-Stage Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    M. S. Shamsudin

    2012-01-01

    A high density of carbon nanotubes (CNTs) has been synthesized from an agricultural hydrocarbon, camphor oil, using a one-hour synthesis time and a titanium dioxide sol-gel catalyst. The pyrolysis temperature is studied in the range of 700–900°C at increments of 50°C. The synthesis process is done using a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results showed that the structural properties of the CNTs are highly dependent on pyrolysis temperature changes.

  5. Fast detection of lead dioxide (PbO2) in chlorinated drinking water by a two-stage iodometric method.

    Science.gov (United States)

    Zhang, Yan; Zhang, Yuanyuan; Lin, Yi-Pin

    2010-02-15

    Lead dioxide (PbO2) is an important corrosion product associated with lead contamination in drinking water. Quantification of PbO2 in water samples has proven challenging due to the incomplete dissolution of PbO2 during sample preservation and digestion. In this study, we present a simple iodometric method for fast detection of PbO2 in chlorinated drinking water. PbO2 can oxidize iodide to form triiodide (I3-), a yellow-colored anion that can be detected by UV-vis spectrometry. Complete reduction of up to 20 mg/L PbO2 can be achieved within 10 min at pH 2.0 and KI = 4 g/L. Free chlorine can oxidize iodide and cause interference. However, this interference can be accounted for by a two-stage pH adjustment, allowing free chlorine to react completely with iodide at ambient pH, followed by sample acidification to pH 2.0 to accelerate iodide oxidation by PbO2. This method showed good recoveries of PbO2 (90-111%) in chlorinated water samples with concentrations ranging from 0.01 to 20 mg/L. In chloraminated water, the method is limited due to incomplete quenching of monochloramine by iodide at neutral to slightly alkaline pH values. The interference of other particles that may be present in the distribution system was also investigated.

  6. Two-stage method to remove population- and individual-level outliers from longitudinal data in a primary care database.

    Science.gov (United States)

    Welch, C; Petersen, I; Walters, K; Morris, R W; Nazareth, I; Kalaitzaki, E; White, I R; Marston, L; Carpenter, J

    2012-07-01

    PURPOSE: In the UK, primary care databases include repeated measurements of health indicators at the individual level. As these databases encompass a large population, some individuals have extreme values, but some values may also be recorded incorrectly. The challenge for researchers is to distinguish between records that are due to incorrect recording and those which represent true but extreme values. This study evaluated different methods to identify outliers. METHODS: Ten percent of practices were selected at random to evaluate the recording of 513,367 height measurements. Population-level outliers were identified using boundaries defined using Health Survey for England data. Individual-level outliers were identified by fitting a random-effects model with subject-specific slopes for height measurements adjusted for age and sex. Any height measurements with a patient-level standardised residual more extreme than ±10 were identified as an outlier and excluded. The model was subsequently refitted twice after removing outliers at each stage. This method was compared with existing methods of removing outliers. RESULTS: Most outliers were identified at the population level using the boundaries defined using Health Survey for England (1550 of 1643). Once these were removed from the database, fitting the random-effects model to the remaining data successfully identified only 75 further outliers. This method was more efficient at identifying true outliers compared with existing methods. CONCLUSIONS: We propose a new, two-stage approach in identifying outliers in longitudinal data and show that it can successfully identify outliers at both population and individual level. Copyright © 2011 John Wiley & Sons, Ltd.
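
    A minimal sketch of this two-stage screen, assuming a random-intercept model stands in for the paper's subject-specific slopes; the synthetic data, column names, and bounds are illustrative only:

```python
# Hypothetical sketch of the two-stage outlier screen: population-level
# bounds first, then iterative exclusion on random-effects residuals.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_pat, n_obs = 200, 5
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_pat), n_obs),
    "age": np.tile(np.arange(n_obs), n_pat) + 40,
    "sex": np.repeat(rng.integers(0, 2, n_pat), n_obs),
})
df["height"] = 165 + 8 * df.sex + rng.normal(0, 6, len(df))
df.loc[rng.choice(len(df), 10, replace=False), "height"] = 1020  # bad entries

# Stage 1: population-level plausibility bounds (cm).
df = df[(df.height > 100) & (df.height < 230)].copy()

# Stage 2: random-effects model; drop |standardized residual| > 10, refit twice.
for _ in range(3):
    fit = smf.mixedlm("height ~ age + sex", df, groups=df["patient_id"]).fit()
    z = fit.resid / fit.resid.std()
    if (z.abs() <= 10).all():
        break
    df = df[z.abs() <= 10].copy()
print(len(df))
```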

  7. The Effect Of Two-Stage Age Hardening Treatment Combined With Shot Peening On Stress Distribution In The Surface Layer Of 7075 Aluminum Alloy

    Directory of Open Access Journals (Sweden)

    Kaczmarek Ł.

    2015-09-01

    Full Text Available The article presents the results of a study on the improvement of the mechanical properties of the surface layer of 7075 aluminum alloy via two-stage aging combined with shot peening. The experiments proved that the thermo-mechanical treatment may significantly improve hardness and the stress distribution in the surface layer. Compressive stresses of 226 ± 5.5 MPa and hardness of 210 ± 2 HV were obtained for selected samples.

  8. Experiences from the full-scale implementation of a new two-stage vertical flow constructed wetland design.

    Science.gov (United States)

    Langergraber, Guenter; Pressl, Alexander; Haberl, Raimund

    2014-01-01

    This paper describes the results of the first full-scale implementation of a two-stage vertical flow constructed wetland (CW) system developed to increase nitrogen removal. The full-scale system was constructed for the Bärenkogelhaus, which is located in Styria at the top of a mountain, 1,168 m above sea level. The Bärenkogelhaus has a restaurant with 70 seats, 16 rooms for overnight guests and is a popular site for day visits, especially during weekends and public holidays. The CW treatment system was designed for a hydraulic load of 2,500 L·d⁻¹ with a specific surface area requirement of 2.7 m² per person equivalent (PE). It was built in fall 2009 and started operation in April 2010 when the restaurant was re-opened. Samples were taken between July 2010 and June 2013 and were analysed in the laboratory of the Institute of Sanitary Engineering at BOKU University using standard methods. During 2010 the restaurant at Bärenkogelhaus was open 5 days a week, whereas from 2011 the Bärenkogelhaus was open only on demand for events. This resulted in decreased organic loads on the system in the later period. In general, the measured effluent concentrations were low and the removal efficiencies high. During the whole period the ammonia nitrogen effluent concentration was below 1 mg/L even at effluent water temperatures below 3 °C. Investigations during high-load periods, i.e. events like weddings and festivals at weekends with more than 100 visitors, showed a very robust treatment performance of the two-stage CW system. Effluent concentrations of chemical oxygen demand and NH4-N were not affected by these events with high hydraulic loads.

  9. Beyond Random Walk and Metropolis-Hastings Samplers: Why You Should Not Backtrack for Unbiased Graph Sampling

    CERN Document Server

    Lee, Chul-Ho; Eun, Do Young

    2012-01-01

    Graph sampling via crawling has been actively considered as a generic and important tool for collecting uniform node samples so as to consistently estimate and uncover various characteristics of complex networks. The so-called simple random walk with re-weighting (SRW-rw) and the Metropolis-Hastings (MH) algorithm have been popular in the literature for such unbiased graph sampling. However, an unavoidable downside of their core random walks, slow diffusion over the space, can cause poor estimation accuracy. In this paper, we propose a non-backtracking random walk with re-weighting (NBRW-rw) and an MH algorithm with delayed acceptance (MHDA), which are theoretically guaranteed to achieve, at almost no additional cost, not only unbiased graph sampling but also higher efficiency (smaller asymptotic variance of the resulting unbiased estimators) than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable feature of the MHDA is its applicability for any non-uniform node sampling like the MH algorithm,...
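
    The re-weighting idea is easy to sketch: random walks (backtracking or not) visit nodes roughly in proportion to degree, so each sampled value is weighted by 1/degree to recover an unbiased mean. The following toy crawler is our illustration, not the authors' code:

```python
# Illustrative sketch of a non-backtracking random walk with re-weighting
# (NBRW-rw): degree-proportional visit bias is undone by 1/degree weights.
import random
import networkx as nx

def nbrw_rw_estimate(G, f, steps=20000, seed=0):
    rng = random.Random(seed)
    prev, cur = None, rng.choice(list(G.nodes))
    num = den = 0.0
    for _ in range(steps):
        w = 1.0 / G.degree(cur)
        num, den = num + w * f(cur), den + w
        nbrs = list(G.neighbors(cur))
        if prev is not None and len(nbrs) > 1:
            nbrs.remove(prev)        # never backtrack unless forced to
        prev, cur = cur, rng.choice(nbrs)
    return num / den

# Example: estimate the average clustering coefficient by crawling,
# and compare against the exact value.
G = nx.connected_watts_strogatz_graph(1000, 6, 0.3, seed=1)
cc = nx.clustering(G)
print(nbrw_rw_estimate(G, cc.get), sum(cc.values()) / len(cc))
```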

  10. Synchronization of Complex Networks with Random Coupling Strengths and Mixed Probabilistic Time-Varying Coupling Delays Using Sampled Data

    Directory of Open Access Journals (Sweden)

    Jian-An Wang

    2014-01-01

    Full Text Available The sampled-data synchronization problem for complex networks with random coupling strengths, probabilistic time-varying coupling delay, and distributed delay (mixed delays) is investigated. The sampling period is assumed to be time varying and bounded. By using the properties of random variables and an input delay approach, new synchronization error dynamics are constructed. Based on the delay decomposition method and a reciprocally convex approach, a delay-dependent mean square synchronization condition is established in terms of linear matrix inequalities (LMIs). According to the proposed condition, an explicit expression for a set of desired sampled-data controllers can be achieved by solving the LMIs. Numerical examples are given to demonstrate the effectiveness of the theoretical results.

  11. Complex Dynamical Behavior of a Two-Stage Colpitts Oscillator with Magnetically Coupled Inductors

    Directory of Open Access Journals (Sweden)

    V. Kamdoum Tamba

    2014-01-01

    Full Text Available A five-dimensional (5D) controlled two-stage Colpitts oscillator is introduced and analyzed. This new electronic oscillator is constructed by augmenting the well-known two-stage Colpitts oscillator with two further elements (coupled inductors and a variable resistor). In contrast to current approaches based on a piecewise linear (PWL) model, we propose a smooth mathematical model (with exponential nonlinearity) to investigate the dynamics of the oscillator. Several issues, such as the basic dynamical behaviour, bifurcation diagrams, Lyapunov exponents, and frequency spectra of the oscillator, are investigated theoretically and numerically by varying a single control resistor. It is found that the oscillator moves from the state of fixed point motion to chaos via the usual paths of period-doubling and interior crisis routes as the single control resistor is varied. Furthermore, an experimental study of the controlled Colpitts oscillator is carried out. An appropriate electronic circuit is proposed for the investigation of the complex dynamical behaviour of the system. A very good qualitative agreement is obtained between the theoretical/numerical and experimental results.

  12. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

    Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures, and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs. The ratio n1/n2 should be about 8.

  13. A two-stage series diode for intense large-area moderate pulsed X rays production

    Science.gov (United States)

    Lai, Dingguo; Qiu, Mengtong; Xu, Qifu; Su, Zhaofeng; Li, Mo; Ren, Shuqing; Huang, Zhongliang

    2017-01-01

    This paper presents a method for producing moderate pulsed X rays with a series diode, which can be driven by a high voltage pulse to generate intense, large-area, uniform sub-100-keV X rays. A two-stage series diode was designed for the Flash-II accelerator and experimentally investigated. A compact support system for the floating converter/cathode was invented; the extra cathode is floating electrically and mechanically, by withdrawing three support pins several milliseconds before a diode electrical pulse. A double-ring cathode was developed to improve the surface electric field and emission stability. The cathode radii and diode separation gap were optimized to enhance the uniformity of the X rays and the coincidence of the two diode voltages, based on simulation and theoretical calculation. The experimental results show that the two-stage series diode can work stably at 700 kV and 300 kA; the average energy of the X rays is 86 keV, and the dose is about 296 rad(Si) over a 615 cm2 area with 2:1 uniformity at 5 cm from the last converter. Compared with the single diode, the average X-ray energy is reduced from 132 keV to 88 keV, and the proportion of sub-100-keV photons increases from 39% to 69%.

  14. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    Science.gov (United States)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a two-stage, high-capacity free-piston Stirling cryocooler driven by a linear compressor, developed to meet the requirements of high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single-piston linear compressor, a two-stage free-piston Stirling cryocooler, and a passive oscillator. A single stepped-displacer configuration was adopted. A numerical model based on thermoacoustic theory was used to optimize the system operating and structural parameters. Distributions of the pressure wave, phase differences between the pressure wave and the volume flow rate, and different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.

  15. Planning an Agricultural Water Resources Management System: A Two-Stage Stochastic Fractional Programming Model

    Directory of Open Access Journals (Sweden)

    Liang Cui

    2015-07-01

    Full Text Available Irrigation water management is crucial for agricultural production and livelihood security in many regions and countries throughout the world. In this study, a two-stage stochastic fractional programming (TSFP) method is developed for planning an agricultural water resources management system under uncertainty. TSFP can provide an effective linkage between conflicting economic benefits and the associated penalties; it can also balance conflicting objectives and maximize the system marginal benefit per unit of input under uncertainty. The developed TSFP method is applied to a real case of agricultural water resources management in the Zhangweinan River Basin, China, which is one of the main food and cotton producing regions in north China and faces serious water shortage. The results demonstrate that the TSFP model is advantageous in balancing conflicting objectives and reflecting complicated relationships among multiple system factors. Results also indicate that, under the optimized irrigation target, the optimized water allocation rates of the Minyou Channel and Zhangnan Channel are 57.3% and 42.7%, respectively, which adapts to changes in the actual agricultural water resources management problem. Compared with the inexact two-stage water management (ITSP) method, TSFP could more effectively address the sustainable water management problem, provide more information regarding tradeoffs between multiple input factors and system benefits, and help water managers maintain sustainable water resources development in the Zhangweinan River Basin.
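
    The generic two-stage structure that TSFP builds on (first-stage targets fixed before uncertainty is revealed, second-stage recourse penalties afterwards) can be sketched as a small deterministic-equivalent linear program; the numbers and the linear, non-fractional objective below are invented:

```python
# Generic two-stage stochastic program with recourse, sketched to show
# the structure only; not the paper's fractional model.
from scipy.optimize import linprog

p = [0.3, 0.5, 0.2]            # scenario probabilities (wet/normal/dry)
supply = [120.0, 100.0, 70.0]  # available water per scenario
target = 110.0                 # cap on the first-stage irrigation target x
benefit, penalty = 50.0, 80.0  # benefit per unit delivered, penalty per unit short

# Variables: [x, s1, s2, s3], s_k = shortage in scenario k.
# maximize benefit*x - sum_k p_k*penalty*s_k  ->  minimize the negative
c = [-benefit] + [p_k * penalty for p_k in p]
# Shortage must cover the deficit in each scenario: x - s_k <= supply_k
A_ub = [[1, -1, 0, 0], [1, 0, -1, 0], [1, 0, 0, -1]]
res = linprog(c, A_ub=A_ub, b_ub=supply,
              bounds=[(0, target)] + [(0, None)] * 3)
print(res.x[0], res.x[1:])   # optimal target and per-scenario shortages
```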

  16. A separate two-stage pulse tube cooler working at liquid helium temperature

    Institute of Scientific and Technical Information of China (English)

    QIU Limin; HE Yonglin; GAN Zhihua; WAN Laihong; CHEN Guobang

    2005-01-01

    A novel 4 K separate two-stage pulse tube cooler (PTC) was designed and tested. The cooler consists of two separate pulse tube coolers, in which the cold end of the first-stage regenerator is thermally connected with the middle part of the second regenerator. Compared to a traditional coupled multi-stage pulse tube cooler, the mutual interference between stages can be largely eliminated. The lowest refrigeration temperature obtained at the first-stage pulse tube was 13.8 K, a new record for a single-stage PTC. In a driving mode with two compressors and two rotary valves, the separate two-stage PTC reached a refrigeration temperature of 2.5 K at the second stage. Cooling capacities of 508 mW at 4.2 K and 15 W at 37.5 K were achieved simultaneously. A one-compressor, one-rotary-valve driving mode has been proposed to further simplify the structure of the separate-type PTC.

  17. Two-Stage Single-Compartment Models to Evaluate Dissolution in the Lower Intestine.

    Science.gov (United States)

    Markopoulos, Constantinos; Vertzoni, Maria; Symillides, Mira; Kesisoglou, Filippos; Reppas, Christos

    2015-09-01

    The purpose was to propose two-stage single-compartment models for evaluating dissolution characteristics in the distal ileum and ascending colon, under conditions simulating bioavailability and bioequivalence studies in the fasted and fed states, using the mini-paddle and the compendial flow-through apparatus (closed-loop mode). Immediate-release products of two highly dosed active pharmaceutical ingredients (APIs), sulfasalazine and L-870,810, and one mesalamine colon-targeting product were used to evaluate their usefulness. Change from a medium simulating the conditions in the distal ileum (SIFileum) to a medium simulating the conditions in the ascending colon in the fasted state and in the fed state was achieved by adding an appropriate solution to SIFileum. Data with immediate-release products suggest that dissolution in the lower intestine is substantially different from that in the upper intestine and is affected by regional pH differences > type/intensity of fluid convection > differences in the concentration of other luminal components. Asacol® (400 mg/tab) was more sensitive to the type/intensity of fluid convection. In all cases, the data were in line with available human data. Two-stage single-compartment models may be useful for the evaluation of dissolution in the lower intestine. The impact of the type/intensity of fluid convection and the viscosity of media on the luminal performance of other APIs and drug products requires further exploration.

  18. Simultaneous bile duct and portal venous branch ligation in two-stage hepatectomy

    Institute of Scientific and Technical Information of China (English)

    Hiroya Iida; Chiaki Yasui; Tsukasa Aihara; Shinichi Ikuta; Hidenori Yoshie; Naoki Yamanaka

    2011-01-01

    Hepatectomy is an effective surgical treatment for multiple bilobar liver metastases from colon cancer; however, one of the primary obstacles to completing surgical resection for these cases is an insufficient volume of the future remnant liver, which may cause postoperative liver failure. To induce atrophy of the unilateral lobe and hypertrophy of the future remnant liver, procedures to occlude the portal vein have been conventionally used prior to major hepatectomy. We report a case of a 50-year-old woman in whom two-stage hepatectomy was performed in combination with intraoperative ligation of the portal vein and the bile duct of the right hepatic lobe. This procedure was designed to promote the atrophic effect on the right hepatic lobe more effectively than the conventional technique, and to the best of our knowledge, it was used for the first time in the present case. Despite successful induction of liver volume shift as well as the following procedure, the patient died of subsequent liver failure after developing recurrent tumors. We discuss the first case in which simultaneous ligation of the portal vein and the biliary system was successfully applied as part of the first step of two-stage hepatectomy.

  19. Development and optimization of a two-stage gasifier for heat and power production

    Science.gov (United States)

    Kosov, V. V.; Zaichenko, V. M.

    2016-11-01

    The major methods of biomass thermal conversion are combustion in excess oxygen, gasification in reduced oxygen, and pyrolysis in the absence of oxygen. The end products of these methods are heat, gas, liquid and solid fuels. From the point of view of energy production, none of these methods can be considered optimal. A two-stage thermal conversion of biomass, based on pyrolysis as the first stage and cracking of the pyrolysis products as the second stage, can be considered the optimal method for energy production: it yields synthesis gas consisting of hydrogen and carbon monoxide and containing no liquid or solid particles. On the basis of the two-stage cracking technology, an experimental power plant with an electric power of up to 50 kW was designed. The power plant consists of a thermal conversion module and a gas engine power generator adapted for operation on syngas. The purposes of the work were to determine the optimal operating temperature of the thermal conversion module and the optimal mass ratio of processed biomass to charcoal in the cracking chamber of the module. Experiments on cracking the pyrolysis products at various temperatures show that the optimum cracking temperature is 1000 °C. From measurements of the volume of gas produced at different mass ratios of charcoal to processed wood biomass, it follows that the maximum gas volume is obtained at a mass ratio in the range of 0.5-0.6.

  20. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes as a result of minimizing transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. They may contain one or more objective functions, one or more transportation stages, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for a millstones company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problem, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of methods for treating transportation problems, the duality theory of linear programming, and methods for solving bi-criteria linear programming problems.
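
    One simple way to trace trade-offs in such a bi-criteria two-stage problem is a weighted-sum sweep over the two objectives; the toy sketch below uses invented data and replaces the paper's extreme-point enumeration with this cruder scalarization:

```python
# Toy weighted-sum scalarization of a bi-criteria two-stage transportation
# problem (sources -> warehouses -> destinations); all data are invented.
import numpy as np
from scipy.optimize import linprog

cost = np.array([3, 5, 4, 2, 6, 1, 2, 4])   # objective 1: shipping cost
time = np.array([2, 1, 3, 5, 1, 4, 3, 2])   # objective 2: shipping time
# Variables: [x11,x12,x21,x22, y11,y12,y21,y22], stage-1 then stage-2 flows.
A_eq = [
    [1, 1, 0, 0, 0, 0, 0, 0],    # source 1 ships its supply (10)
    [0, 0, 1, 1, 0, 0, 0, 0],    # source 2 ships its supply (15)
    [1, 0, 1, 0, -1, -1, 0, 0],  # warehouse 1 flow conservation
    [0, 1, 0, 1, 0, 0, -1, -1],  # warehouse 2 flow conservation
    [0, 0, 0, 0, 1, 0, 1, 0],    # destination 1 demand (12)
    [0, 0, 0, 0, 0, 1, 0, 1],    # destination 2 demand (13)
]
b_eq = [10, 15, 0, 0, 12, 13]

for w in np.linspace(0, 1, 6):               # sweep the trade-off weight
    res = linprog(w * cost + (1 - w) * time, A_eq=A_eq, b_eq=b_eq)
    print(round(w, 1), cost @ res.x, time @ res.x)
```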

  1. An integrated two-stage support vector machine approach to forecast inundation maps during typhoons

    Science.gov (United States)

    Jhong, Bing-Chen; Wang, Jhih-Huang; Lin, Gwo-Fong

    2017-04-01

    During typhoons, accurate forecasts of hourly inundation depths are essential for inundation warning and mitigation. Because observed inundation maps are scarce, sufficient data are not available for developing inundation forecasting models. In this paper, inundation depths simulated and validated by a physically based two-dimensional model (FLO-2D) are used as a database for inundation forecasting. A two-stage inundation forecasting approach based on Support Vector Machine (SVM) is proposed to yield 1- to 6-h lead-time inundation maps during typhoons. In the first stage (point forecasting), the proposed approach considers not only rainfall intensity and inundation depth as model input but also cumulative rainfall and forecasted inundation depths. In the second stage (spatial expansion), the geographic information of inundation grids and the inundation forecasts of reference points are used to yield inundation maps. The results clearly indicate that the proposed approach effectively improves forecasting performance and decreases the negative impact of increasing forecast lead time. Moreover, the proposed approach is capable of providing accurate inundation maps for 1- to 6-h lead times. In conclusion, the proposed two-stage forecasting approach is suitable and useful for improving inundation forecasting during typhoons, especially for long lead times.
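
    A minimal sketch of the two-stage idea with synthetic data and illustrative feature choices (not the paper's configuration) follows:

```python
# Stage 1 forecasts depth at a reference point from rainfall features;
# stage 2 expands that forecast over grid cells using geographic inputs.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stage 1 (point forecasting): predictors = rainfall intensity, cumulative
# rainfall, current depth; target = depth at t + lead_time (synthetic).
X1 = rng.random((200, 3))
y1 = 0.6 * X1[:, 0] + 0.3 * X1[:, 1] + 0.5 * X1[:, 2] \
     + 0.05 * rng.standard_normal(200)
stage1 = SVR(kernel="rbf", C=10.0).fit(X1, y1)

# Stage 2 (spatial expansion): predictors = grid easting/northing/elevation
# plus the stage-1 forecast at the reference point; target = grid-cell depth.
ref_forecast = stage1.predict(X1)[:, None]
geo = rng.random((200, 3))
X2 = np.hstack([geo, ref_forecast])
y2 = ref_forecast[:, 0] * (1.2 - geo[:, 2]) + 0.02 * rng.standard_normal(200)
stage2 = SVR(kernel="rbf", C=10.0).fit(X2, y2)
print(stage2.predict(X2[:3]))
```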

  2. The influence of partial oxidation mechanisms on tar destruction in TwoStage biomass gasification

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Egsgaard, Helge; Stelte, Wolfgang

    2013-01-01

    TwoStage gasification of biomass results in almost tar-free producer gas suitable for multiple end-use purposes. In the present study, it is investigated to what extent the partial oxidation process of the pyrolysis gas from the first stage is involved in direct and indirect tar destruction and conversion. The study identifies the following major impact factors regarding tar content in the producer gas: oxidation temperature, excess air ratio, and biomass moisture content. In an experimental setup, wood pellets were pyrolyzed and the resulting pyrolysis gas was transferred to a heated partial oxidation ... tar destruction, and a high moisture content of the biomass enhances the decomposition of phenol and inhibits the formation of naphthalene. This enhances tar conversion and gasification in the char-bed, and thus contributes indirectly to the tar destruction.

  3. Numerical simulation of municipal solid waste combustion in a novel two-stage reciprocating incinerator.

    Science.gov (United States)

    Huai, X L; Xu, W L; Qu, Z Y; Li, Z G; Zhang, F P; Xiang, G M; Zhu, S Y; Chen, G

    2008-01-01

    A mathematical model was presented in this paper for the combustion of municipal solid waste in a novel two-stage reciprocating grate furnace. Numerical simulations were performed to predict the temperature, flow, and species distributions in the furnace, with practical operational conditions taken into account. The calculated results agree well with the test data, and the burning behavior of municipal solid waste in the novel two-stage reciprocating incinerator is demonstrated well. The thickness of the waste bed, the initial moisture content, the excess air coefficient, and the secondary air are the major factors that influence the combustion process. If the initial moisture content of the waste is high, both the heating value of the waste and the temperature inside the incinerator are low, and less oxygen is necessary for combustion. The air supply rate and the primary air distribution along the grate should be adjusted according to the initial moisture content of the waste. A reasonable bed thickness and an adequate excess air coefficient can maintain a higher temperature, promote the burnout of combustibles, and consequently reduce the emission of dioxin pollutants. When the total air supply is constant, properly reducing the primary air and introducing secondary air can enhance turbulence and mixing, prolong the residence time of the flue gas, and promote the complete combustion of combustibles. This study provides an important reference for optimizing the design and operation of municipal solid waste furnaces.

  4. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Martínez-Jerónimo, Fernando; Flores-Ortíz, Cesar Mateo

    2017-09-16

    A biomass production process comprising two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalga Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, experiments using shake-flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea+vitamins (U+V) at 30°C. The first stage of the TSHP process was done in a 6 L bioreactor, and the inductions in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved the maximal biomass concentration, increasing from 7.22 g/L to 17.98 g/L with an increase in initial glucose concentration from 10.6 g/L to 30.3 g/L. However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Yx/s), possibly due to substrate inhibition. After 24 h of photoinduction, the lutein content in S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and the lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.

  5. Dynamics of installation way for the actuator of a two-stage active vibration-isolator

    Institute of Scientific and Technical Information of China (English)

    HU Li; HUANG Qi-bai; HE Xue-song; YUAN Ji-xuan

    2008-01-01

    We investigated the behaviors of an active control system of two-stage vibration isolation with the actuator installed in parallel with either the upper passive mount or the lower passive isolation mount. We revealed the relationships between the active control force of the actuator and the parameters of the passive isolators by studying the dynamics of two-stage active vibration isolation for the actuator at the foregoing two positions in turn. With the actuator installed beside the upper mount, a small active force can achieve a very good isolating effect when the frequency of the stimulating force is much larger than the natural frequency of the upper mount; a larger active force is required in the low-frequency domain; and the active force equals the stimulating force when the upper mount works within the resonance region, suggesting an approach to reducing wobble and ensuring desirable installation accuracy by increasing the upper-mount stiffness. In either the low or the high frequency region far away from the resonance region, the active force is smaller when the actuator is beside the lower mount than beside the upper mount.

  6. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of the ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to the ADS, but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option for transuranic (TRU) elements.

  7. Hydrogen and methane production from household solid waste in the two-stage fermentation process

    DEFF Research Database (Denmark)

    Lui, D.; Liu, D.; Zeng, Raymond Jianxiong

    2006-01-01

    A two-stage process combining hydrogen and methane production from household solid waste was demonstrated to work successfully. A yield of 43 mL H2/g volatile solid (VS) added was generated in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS added. This figure was 21% higher than the methane yield from the one-stage process, which was run as a control. Sparging of the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage. Furthermore, this study also provided direct evidence in the dynamic fermentation process that an increase in hydrogen production was reflected by an increase in the acetate-to-butyrate ratio in the liquid phase. (c) 2006 Elsevier Ltd. All rights reserved.

  8. Two-stage electrodialytic concentration of glyceric acid from fermentation broth.

    Science.gov (United States)

    Habe, Hiroshi; Shimada, Yuko; Fukuoka, Tokuma; Kitamoto, Dai; Itagaki, Masayuki; Watanabe, Kunihiko; Yanagishita, Hiroshi; Sakaki, Keiji

    2010-12-01

    The aim of this research was the application of a two-stage electrodialysis (ED) method for glyceric acid (GA) recovery from fermentation broth. First, by desalting ED, glycerate solutions (with Na+ as the counterion) were concentrated using ion-exchange membranes; the glycerate recovery and energy consumption became more efficient with increasing initial glycerate concentration (30 to 130 g/l). Second, by water-splitting ED, the concentrated glycerate was electroconverted to GA using bipolar membranes. Using a culture broth of Acetobacter tropicalis containing 68.6 g/l of D-glycerate, a final D-GA concentration of 116 g/l was obtained following the two-stage ED process. The total energy consumption for the D-glycerate concentration and its electroconversion to D-GA was approximately 0.92 kWh per kg of D-GA. Copyright © 2010 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  9. Occurrence of two-stage hardening in C-Mn steel wire rods containing pearlitic microstructure

    Science.gov (United States)

    Singh, Balbir; Sahoo, Gadadhar; Saxena, Atul

    2016-09-01

    Wire rods of 8 and 10 mm diameter intended for use as concrete reinforcement were produced/hot rolled from a C-Mn steel chemistry containing various elements within the ranges C: 0.55-0.65, Mn: 0.85-1.50, Si: 0.05-0.09, S: 0.04 max, P: 0.04 max and N: 0.006 max wt%. Depending upon the C and Mn contents, the product attained a pearlitic microstructure in the range of 85-93%, with the balance being polygonal ferrite transformed at prior austenite grain boundaries. The pearlitic microstructure in the wire rods helped in achieving yield strength, tensile strength, total elongation and reduction-in-area values within the ranges of 422-515 MPa, 790-950 MPa, 22-15% and 45-35%, respectively. Analysis of the tensile results revealed that the material experienced hardening in two stages, separable by a knee strain value of about 0.05. The occurrence of two-stage hardening in the steel, with hardening coefficients of 0.26 and 0.09, could thus be demonstrated with the help of relationships derived between flow stress and strain.
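
    The two exponents can be recovered by fitting the Hollomon relation sigma = K*eps^n on log-log axes separately below and above the knee; the sketch below does this on a synthetic flow curve built from the reported exponents (the strength coefficient K is invented):

```python
# Recover two-stage hardening exponents by piecewise log-log fits of
# sigma = K * eps**n around the knee strain.
import numpy as np

knee = 0.05
eps = np.linspace(0.005, 0.15, 60)
# Synthetic flow curve, continuous at the knee, with n = 0.26 then 0.09.
sigma = np.where(eps < knee, 900 * eps**0.26,
                 900 * knee**0.26 / knee**0.09 * eps**0.09)

for lo, hi in [(0, knee), (knee, 1)]:
    m = (eps >= lo) & (eps < hi)
    n, logK = np.polyfit(np.log(eps[m]), np.log(sigma[m]), 1)
    print(f"strain {lo}-{hi}: n = {n:.2f}, K = {np.exp(logK):.0f} MPa")
```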

  10. Rules and mechanisms for efficient two-stage learning in neural circuits

    Science.gov (United States)

    Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay

    2017-01-01

    Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (e.g., LMAN) should match plasticity mechanisms in ‘student’ circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning. DOI: http://dx.doi.org/10.7554/eLife.20944.001 PMID:28374674
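
    A toy numerical illustration of the matched two-stage idea (our construction, not the paper's model): a fast tutor builds a corrective bias by reward-guided exploration, and a slow student plasticity rule consolidates that bias while it decays:

```python
# Toy sketch of two-stage learning: tutor explores, student consolidates.
import numpy as np

rng = np.random.default_rng(0)
target = rng.standard_normal(20)      # desired output (e.g. song template)
student = np.zeros(20)                # slowly consolidated weights (RA-like)
bias = np.zeros(20)                   # fast corrective bias (LMAN-like)
baseline = None

for trial in range(5000):
    noise = 0.3 * rng.standard_normal(20)        # tutor-driven exploration
    out = student + bias + noise
    reward = -np.sum((out - target) ** 2)
    baseline = reward if baseline is None else 0.9 * baseline + 0.1 * reward
    bias += 0.003 * (reward - baseline) * noise  # tutor: reinforcement update
    student += 0.01 * bias                       # student: consolidates bias
    bias *= 0.99                                 # bias decays once transferred

print(np.sum((student + bias - target) ** 2))    # residual error
```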

  11. Two-stage estimation for multivariate recurrent event data with a dependent terminal event.

    Science.gov (United States)

    Chen, Chyong-Mei; Chuang, Ya-Wen; Shen, Pao-Sheng

    2015-03-01

    Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of event repeatedly. The work in this article is motivated by data from a study of repeated peritonitis in patients on peritoneal dialysis. For medical and cost reasons, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive peritonitis. Further, since death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model with three characteristics: first, we assume marginal proportional hazards and proportional rates models for the terminal event time and the recurrent event processes, respectively; second, the inter-recurrence dependence and the correlation between the multivariate recurrent event processes and the terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage estimation procedure for estimating the unknown parameters and establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Two-stage earth-to-orbit vehicles with dual-fuel propulsion in the Orbiter

    Science.gov (United States)

    Martin, J. A.

    1982-01-01

    Earth-to-orbit vehicle studies of future replacements for the Space Shuttle are needed to guide technology development. Previous studies that have examined single-stage vehicles have shown advantages for dual-fuel propulsion. Previous two-stage system studies have assumed all-hydrogen fuel for the Orbiters. The present study examined dual-fuel Orbiters and found that the system dry mass could be reduced with this concept. The possibility of staging the booster at a staging velocity low enough to allow coast-back to the launch site is shown to be beneficial, particularly in combination with a dual-fuel Orbiter. An engine evaluation indicated the same ranking of engines as did a previous single-stage study. Propane and RP-1 fuels result in lower vehicle dry mass than methane, and staged-combustion engines are preferred over gas-generator engines. The sensitivity to the engine selection is less for two-stage systems than for single-stage systems.

  13. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called an awareness cascade, during which individuals exhibit herd-like behavior because they make decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate epidemic spreading with awareness cascades, we propose a local-awareness-controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and by numerical simulations, we find the emergence of an abrupt transition of the epidemic threshold βc at a local awareness ratio α of approximately 0.5, which induces two-stage effects on the epidemic threshold and the final epidemic size. These findings indicate that an increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc≈0.5. The results can give us a better understanding of why some epidemics cannot break out in reality and also provide a potential approach to suppressing and controlling awareness cascading systems.
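
    A rough simulation sketch of such a local-awareness-controlled contagion, under a deliberately simplified reading (awareness never decays; all parameters are invented), shows how varying α changes the final epidemic size:

```python
# Two-layer toy: herd-like awareness cascade on an information layer,
# SIS-like epidemic on a contact layer; awareness scales infectivity down.
import random
import networkx as nx

def final_size(alpha, beta=0.08, mu=0.3, gamma=0.3, n=500, steps=150, seed=0):
    rng = random.Random(seed)
    info = nx.erdos_renyi_graph(n, 0.03, seed=seed)          # awareness layer
    contact = nx.erdos_renyi_graph(n, 0.03, seed=seed + 1)   # epidemic layer
    infected = set(rng.sample(range(n), 5))
    aware = set(infected)
    for _ in range(steps):
        # Awareness stage: become aware when the aware fraction of
        # information-layer neighbors reaches the local ratio alpha.
        newly = {v for v in range(n) if v not in aware and info.degree(v) > 0
                 and sum(u in aware for u in info.neighbors(v))
                     / info.degree(v) >= alpha}
        aware |= newly
        # Epidemic stage: transmission with reduced rate to aware nodes.
        nxt = {v for v in infected if rng.random() >= mu}    # recovery
        for v in infected:
            for u in contact.neighbors(v):
                b = beta * gamma if u in aware else beta
                if u not in infected and rng.random() < b:
                    nxt.add(u)
        infected = nxt
        aware |= infected
    return len(infected) / n

for alpha in (0.2, 0.5, 0.8):
    print(alpha, final_size(alpha))
```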

  14. Configuration Consideration for Expander in Transcritical Carbon Dioxide Two-Stage Compression Cycle

    Institute of Scientific and Technical Information of China (English)

    MA Yitai; YANG Junlan; GUAN Haiqing; LI Minxia

    2005-01-01

    To investigate the configuration of the expander in a transcritical carbon dioxide two-stage compression cycle, the best place in the cycle to reinvest the recovered work should be identified so as to improve the system efficiency. The expander and the compressor are connected to the same shaft and integrated into one unit, with the latter driven by the former; thus the transfer loss and leakage loss can be decreased greatly. In these systems, the expander can be connected either with the first-stage compressor (the DCDL cycle) or with the second-stage compressor (the DCDH cycle), but the two configurations yield different performances. By setting up a theoretical model for the two expander configurations in the transcritical carbon dioxide two-stage compression cycle, the first and second laws of thermodynamics are used to analyze the coefficient of performance, exergy efficiency, inter-stage pressure, discharge temperature, and exergy losses of each component for the two cycles. The model results show that the performance of the DCDH cycle is better than that of the DCDL cycle. The analysis results provide a theoretical basis for practical design and operation.

  15. Two-stage coordination multi-radio multi-channel mac protocol for wireless mesh networks

    CERN Document Server

    Zhao, Bingxuan

    2011-01-01

    Within a wireless mesh network, a bottleneck problem arises as the number of concurrent traffic flows (NCTF) increases over a single common control channel, as is the case for most conventional networks. To alleviate this problem, this paper proposes a two-stage coordination multi-radio multi-channel MAC (TSC-M2MAC) protocol that designates all available channels as both control channels and data channels in a time-division manner through a two-stage coordination. At the first stage, a load-balancing breadth-first-search-based vertex coloring algorithm for the multi-radio conflict graph is proposed to intelligently allocate multiple control channels. At the second stage, a REQ/ACK/RES mechanism is proposed to realize dynamic channel allocation for data transmission. At this stage, the Channel-and-Radio Utilization Structure (CRUS) maintained by each node is able to alleviate the hidden-node problem; also, the proposed adaptive adjustment algorithm for the Channel Negotiation and Allocation (CNA) sub-interval is ab...
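
    The first-stage allocation can be sketched as a BFS-ordered greedy coloring of the conflict graph that prefers the least-loaded admissible channel; the toy graph and channel count below are invented for illustration:

```python
# BFS-ordered, load-balancing greedy coloring of a conflict graph.
import networkx as nx
from collections import deque

def bfs_balanced_coloring(G, n_channels):
    order, seen = [], set()
    for src in G.nodes:                      # BFS ordering, per component
        if src in seen:
            continue
        seen.add(src)
        queue = deque([src])
        while queue:
            v = queue.popleft()
            order.append(v)
            for n in G.neighbors(v):
                if n not in seen:
                    seen.add(n)
                    queue.append(n)
    color, load = {}, [0] * n_channels
    for v in order:
        banned = {color[n] for n in G.neighbors(v) if n in color}
        free = [c for c in range(n_channels) if c not in banned]
        # Fall back to the least-loaded channel if conflicts are unavoidable.
        c = min(free or range(n_channels), key=load.__getitem__)
        color[v] = c
        load[c] += 1
    return color

G = nx.random_geometric_graph(30, 0.3, seed=2)   # toy conflict graph
print(bfs_balanced_coloring(G, n_channels=3))
```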

  16. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    Science.gov (United States)

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading third-generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focused on establishing high-performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the biomass quantity needed for industrial-scale microalgal biofuel production. The extremely dilute nature of large-volume microalgal suspensions and the small size of microalgae cells create a significant processing cost during dewatering, and this has raised major concerns about the economic success of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis for establishing an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique, with a total energy input of 0.041 kWh, 0.05 kg CO2 emissions, and a cost of $0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075

  17. Two-stage image segmentation based on edge and region information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A two-stage method for image segmentation based on edge and region information is proposed. Different deformation schemes are used at the two stages to segment the object correctly in the image plane. At the first stage, the contour of the model is divided hierarchically into several segments, which deform respectively using affine transformations. After the contour is deformed to the approximate boundary of the object, a fine-matching mechanism that uses statistical information of the local region to redefine the external energy of the model makes the contour fit the object's boundary exactly. The algorithm is effective: the hierarchical segmental deformation makes use of both global and local information of the image, the affine transformation keeps the consistency of the model, and revised approaches to computing the internal and external energy are proposed to reduce the algorithm's complexity. The adaptive method of defining the search area at the second stage makes the model converge quickly. The experimental results indicate that the proposed model is effective, robust to local minima, and able to search for concave objects.

  18. Waste-gasification efficiency of a two-stage fluidized-bed gasification system.

    Science.gov (United States)

    Liu, Zhen-Shu; Lin, Chiou-Liang; Chang, Tsung-Jen; Weng, Wang-Chang

    2016-02-01

    This study employed a two-stage fluidized-bed gasifier as a gasification reactor and two additives (CaO and activated carbon) as the Stage-II bed material to investigate the effects of the operating temperature (700°C, 800°C, and 900°C) on the syngas composition, total gas yield, and gas-heating value during simulated waste gasification. The results showed that when the operating temperature increased from 700 to 900°C, the molar percentage of H2 in the syngas produced by the two-stage gasification process increased from 19.4 to 29.7 mol% and that the total gas yield and gas-heating value also increased. When CaO was used as the additive, the molar percentage of CO2 in the syngas decreased, and the molar percentage of H2 increased. When activated carbon was used, the molar percentage of CH4 in the syngas increased, and the total gas yield and gas-heating value increased. Overall, CaO had better effects on the production of H2, whereas activated carbon clearly enhanced the total gas yield and gas-heating value. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  19. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction.

    Science.gov (United States)

    Zhang, Long; Li, Kang; Bai, Er-Wei; Irwin, George W

    2015-08-01

    A number of neural networks can be formulated as linear-in-the-parameters models. Training such networks can be transformed into a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped in a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and backward stages to avoid repetitive computations, using the inherent orthogonal properties of the least squares methods. Furthermore, a new term-exchanging scheme for backward model refinement is introduced to reduce the computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness achieved by the proposed technique in comparison with some popular methods.
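
    The forward-then-backward structure can be sketched with naive least-squares refits standing in for the paper's orthogonal recursions:

```python
# Simplified two-stage subset selection: a forward pass grows the model
# term by term, then a backward pass tries swapping each selected term
# for a better unselected candidate.
import numpy as np

def sse(X, y, idx):
    beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    r = y - X[:, idx] @ beta
    return r @ r

def two_stage_select(X, y, k):
    selected, rest = [], list(range(X.shape[1]))
    for _ in range(k):                       # stage 1: forward selection
        best = min(rest, key=lambda j: sse(X, y, selected + [j]))
        selected.append(best)
        rest.remove(best)
    improved = True
    while improved:                          # stage 2: backward refinement
        improved = False
        for i, s in enumerate(selected):
            others = selected[:i] + selected[i + 1:]
            # Ties keep the current term s, so every swap strictly helps.
            swap = min([s] + rest, key=lambda j: sse(X, y, others + [j]))
            if swap != s:
                rest.remove(swap)
                rest.append(s)
                selected[i] = swap
                improved = True
    return selected

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 12))
y = X[:, 2] - 2 * X[:, 7] + 0.5 * X[:, 9] + 0.1 * rng.standard_normal(100)
print(sorted(two_stage_select(X, y, k=3)))   # expect [2, 7, 9]
```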

  20. Two-stage numerical simulation for temperature profile in furnace of tangentially fired pulverized coal boiler

    Institute of Scientific and Technical Information of China (English)

    ZHOU Nai-jun; XU Qiong-hui; ZHOU Ping

    2005-01-01

    Considering that the temperature distribution in the furnace of a tangentially fired pulverized coal boiler is difficult to measure and monitor, a two-stage numerical simulation method was put forward. First, multi-field coupling simulations under typical working conditions were carried out off-line with the software CFX-4.3, and an expression for the temperature profile as a function of the operating parameters was obtained. Using real-time operating parameters, the temperature at an arbitrary point of the furnace can then be calculated from this expression. Thus the temperature profile can be shown on-line, and monitoring of the combustion state in the furnace is realized. The simulation model was checked against parameters measured in an operating boiler, DG130-9.8/540. The maximum relative error is less than 12% and the absolute error is less than 120 °C, which shows that the proposed two-stage simulation method is reliable and able to satisfy the requirements of industrial application.

  1. A low-voltage sense amplifier with two-stage operational amplifier clamping for flash memory

    Science.gov (United States)

    Guo, Jiarong

    2017-04-01

    A low-voltage sense amplifier with a reference current generator utilizing a two-stage operational amplifier clamp structure for flash memory is presented in this paper, capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain node of the reference cell, is used to generate the reference current, which avoids the threshold limitation caused by the current-mirror transistor in the traditional sense amplifier. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted; it not only improves the sense window, enhancing read precision, but also saves power. The sense amplifier was implemented in a flash memory realized in 90 nm flash technology. Experimental results show an access time of 14.7 ns with a 1.2 V power supply at the slow corner and 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).

  2. Two-stage high temperature sludge gasification using the waste heat from hot blast furnace slags.

    Science.gov (United States)

    Sun, Yongqi; Zhang, Zuotai; Liu, Lili; Wang, Xidong

    2015-12-01

    Disposal of sewage sludge from wastewater treatment plants and recovery of waste heat from the steel industry have become two important environmental issues; to integrate these two problems, a two-stage high-temperature sludge gasification approach using the waste heat in hot blast furnace slags was investigated herein. The whole process was divided into two stages, i.e., low-temperature sludge pyrolysis at ⩽900°C in an argon agent and high-temperature char gasification at ⩾900°C in a CO2 agent, during which the heat required was supplied by hot slags in different temperature ranges. Both the thermodynamic and kinetic mechanisms were identified, and it was indicated that an Avrami-Erofeev model could best interpret the char gasification stage. Furthermore, a schematic concept of this strategy was portrayed, based on which the potential CO yield and CO2 emission reduction achievable in China could be ∼1.92 × 10⁹ m³ and 1.93 × 10⁶ t, respectively.

  3. A two-stage broadcast message propagation model in social networks

    Science.gov (United States)

    Wang, Dan; Cheng, Shun-Jun

    2016-11-01

    Message propagation in social networks is becoming a popular topic in complex networks. One of the message types in social networks is the broadcast message. It refers to a type of message which has a unique destination unknown to the publisher, such as a 'lost and found' notice, and its propagation always has two stages. Due to this feature, rumor propagation models and epidemic propagation models have difficulty describing this kind of propagation accurately. In this paper, an improved two-stage susceptible-infected-removed model is proposed. We introduce the concepts of the first forwarding probability and the second forwarding probability. Another part of our work is quantifying how the chance of successful message transmission at each level is influenced by multiple factors, including the topology of the network, the receiving probability, the first-stage forwarding probability, the second-stage forwarding probability, and the length of the shortest path between the publisher and the relevant destination. The proposed model has been simulated on real networks and the results demonstrate its effectiveness.
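
    A toy Monte Carlo sketch of two-stage forwarding follows; the stage boundary (a hop count h) and all parameters are our invention for illustration, not the authors' exact formulation:

```python
# Holders forward with probability p1 within the first h hops (stage 1)
# and p2 afterwards (stage 2); estimate the chance a random, publisher-
# unknown destination receives the broadcast.
import random
import networkx as nx

def reach_probability(G, p1, p2, h=2, trials=1000, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    hits = 0
    for _ in range(trials):
        src, dst = rng.sample(nodes, 2)
        received, frontier, hop = {src}, [src], 0
        while frontier:
            p = p1 if hop < h else p2     # stage 1 near the publisher
            nxt = []
            for v in frontier:
                if rng.random() >= p:     # this holder declines to forward
                    continue
                for u in G.neighbors(v):
                    if u not in received:
                        received.add(u)
                        nxt.append(u)
            frontier, hop = nxt, hop + 1
        hits += dst in received
    return hits / trials

G = nx.barabasi_albert_graph(300, 3, seed=1)
print(reach_probability(G, p1=0.6, p2=0.2))
```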

  4. Two staged incentive contract focused on efficiency and innovation matching in critical chain project management

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2014-09-01

    Full Text Available Purpose: The purpose of this paper is to define the relatively optimal incentive contract to effectively encourage employees to improve work efficiency while actively implementing innovative behavior. Design/methodology/approach: This paper analyzes a two-staged incentive contract coordinating efficiency and innovation in Critical Chain Project Management, using learning real options and based on principal-agent theory. A situational experiment is used to analyze the validity of the basic model. Findings: The two-staged incentive scheme is more suitable for encouraging employees to create and implement learning real options, through which they will throw themselves into the innovation process efficiently in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include the individual characteristics of uncertain perception, which might affect the consistency of external validity. The basic model and the experiment design need improvement. Practical implications: Project managers should pay closer attention to early innovation behavior and monitoring feedback of competition time in the implementation of Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using principal-agent theory, to encourage the completion of CCPM methods as well as imitative free-riding on the creative ideas of other members in the team.

  5. Selective capsulotomies of the expanded breast as a remodelling method in two-stage breast reconstruction.

    Science.gov (United States)

    Grimaldi, Luca; Campana, Matteo; Brandi, Cesare; Nisi, Giuseppe; Brafa, Anna; Calabrò, Massimiliano; D'Aniello, Carlo

    2013-06-01

    The two-stage breast reconstruction with tissue expander and prosthesis is nowadays a common method for achieving a satisfactory appearance in selected patients who had a mastectomy, but its most common aesthetic drawback is represented by an excessive volumetric increment of the superior half of the reconstructed breast, with a convexity of the profile in that area. A possible solution to limit this effect, and to fulfil the inferior pole, may be obtained by reducing the inferior tissue resistance by means of capsulotomies. This study reports the effects of various types of capsulotomies, performed in 72 patients after removal of the mammary expander, with the aim of emphasising the convexity of the inferior mammary aspect in the expanded breast. According to each kind of desired modification, possible solutions are described. On the basis of subjective and objective evaluations, an overall high degree of satisfaction has been evidenced. The described selective capsulotomies, when properly carried out, may significantly improve the aesthetic results in two-stage reconstructed breasts, with no additional scars, with minimal risks, and with little lengthening of the surgical time.

  6. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  7. Two-Stage Nerve Graft in Severe Scar: A Time-Course Study in a Rat Model

    Directory of Open Access Journals (Sweden)

    Shayan Zadegan

    2015-04-01

    According to the EPT and WRL, the two-stage nerve graft showed significant improvement (P = 0.020 and P = 0.017, respectively). The TOA showed no significant difference between the two groups. The total vascular index was significantly higher in the two-stage nerve graft group (P

  8. Inhaled Pharmacotherapy and Stroke Risk in Patients with Chronic Obstructive Pulmonary Disease: A Nationwide Population Based Study Using Two-Stage Approach.

    Directory of Open Access Journals (Sweden)

    Hui-Wen Lin

    Patients with chronic obstructive pulmonary disease (COPD) are at higher risk of stroke than those without COPD. This study aims to explore the impact of inhaled pharmacotherapy on stroke risk in COPD patients during a three-year follow-up, using a nationwide, population-based study and a matched cohort design. The study cohort comprised 10,413 patients who had received COPD treatment between 2004 and 2006; 41,652 randomly selected subjects comprised the comparison cohort. Cox proportional hazard regressions and two-stage propensity score calibration were performed to determine the impact of various inhaled therapies, including short-acting muscarinic antagonists, long-acting muscarinic antagonists, short-acting β-agonists (SABAs), long-acting β-agonists (LABAs), and LABA plus inhaled corticosteroid (ICS), on the risk after adjustment for patient demographic characteristics and comorbid disorders. Of the 52,065 sampled patients, 2,689 (5.2%) developed stroke during follow-up, including 727 (7.0%) from the COPD cohort and 1,962 (4.7%) from the comparison cohort (p < 0.001). Treatment with SABA was associated with a 1.67-fold (95% CI 1.45-1.91; p < 0.001) increased risk of stroke in COPD patients. By contrast, the cumulative incidence of stroke was significantly lower in those treated with LABA plus ICS than in those treated without (adjusted hazard ratio 0.75, 95% CI 0.60-0.94, p = 0.014). Among COPD patients, the use of inhaled SABA is associated with an increased risk of stroke, and combination treatment with inhaled LABA and ICS relates to a risk reduction. Further prospective research is needed to verify whether LABA plus ICS confers protection against stroke in patients with COPD.
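
    A hedged sketch of the kind of survival analysis reported here: a plain Cox proportional hazards fit on simulated data. This is not the paper's two-stage propensity score calibration or its cohort; the exposure flag, covariate, effect sizes and follow-up window below are all invented, and the lifelines package is assumed to be available.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n = 2000
        saba = rng.binomial(1, 0.3, n)                 # hypothetical exposure flag
        age = rng.normal(65, 10, n)
        # Simulate stroke times with a hazard that rises with exposure and age
        lam = 0.01 * np.exp(0.5 * saba + 0.02 * (age - 65))
        t = rng.exponential(1 / lam)
        observed = t < 3.0                             # three-year follow-up
        df = pd.DataFrame({"T": np.minimum(t, 3.0), "E": observed,
                           "saba": saba, "age": age})

        cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
        print(cph.summary[["coef", "exp(coef)", "p"]])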

  9. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Vaezzadeh, Mehdi [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)]. E-mail: mehdi@kntu.ac.ir; Yazdani, Ahmad [Tarbiat Modares University, P.O. Box 14155-4838, Tehran (Iran, Islamic Republic of); Vaezzadeh, Majid [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Daneshmand, Gissoo [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Kanzeghi, Ali [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)

    2006-05-01

    The magnetic behavior of the Gd-based intermetallic compound Gd2Al(1-x)Aux, in powder and needle form, is investigated. All samples have an orthorhombic crystal structure. Only the compound with x = 0.4 shows the Kondo effect (the other compounds behave normally). For the powder compound with x = 0.4, however, the susceptibility measurement χ(T) shows two different stages. Moreover, for T > T_K2 a fall in the value of χ(T) is observable, which indicates a weak presence of a ferromagnetic phase. Concerning the two stages of the Kondo effect, we observe at the first (T_K1) an increase of χ(T) and at the second stage (T_K2) a new remarkable decrease of χ(T) (T_K1 > T_K2). For the sample in needle form, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, which is remarkably visible for the sample in powder form, can be attributed to a complete polarization of the Kondo cloud. The observation of these two Kondo stages could be due to the weak presence of an RKKY contribution.

  10. EDGEWORTH EXPANSION AND BOOTSTRAP APPROXIMATION FOR THE STUDENTIZED MLE FROM RANDOMLY CENSORED EXPONENTIAL SAMPLES

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    In this paper, the author studies the asymptotic accuracy of the one-term Edgeworth expansion and the bootstrap approximation for the studentized MLE from a randomly censored exponential population. It is shown that the Edgeworth expansion and the bootstrap approximation are asymptotically close to the exact distribution of the studentized MLE, with a rate.
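
    For a randomly censored exponential sample, the MLE of the rate is the number of uncensored failures divided by the total observed time. A minimal sketch of bootstrapping the studentized MLE follows; the sample size, censoring scheme and bootstrap settings are assumptions for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def censored_sample(n, lam=1.0, cens_rate=0.5):
            # Exponential lifetimes subject to independent exponential censoring
            x = rng.exponential(1 / lam, n)
            c = rng.exponential(1 / cens_rate, n)
            return np.minimum(x, c), (x <= c)

        def studentized_mle(t, delta, lam0):
            d = delta.sum()
            lam_hat = d / t.sum()          # MLE under random censoring
            return np.sqrt(d) * (lam_hat - lam0) / lam_hat

        t, delta = censored_sample(100)
        lam_hat = delta.sum() / t.sum()

        # Bootstrap the studentized statistic by resampling (t_i, delta_i) pairs
        B, n = 2000, len(t)
        stats = np.empty(B)
        for b in range(B):
            idx = rng.integers(0, n, n)
            stats[b] = studentized_mle(t[idx], delta[idx], lam_hat)

        print("bootstrap 95% quantiles:", np.quantile(stats, [0.025, 0.975]))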

  11. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis†

    Science.gov (United States)

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-01-01

    OBJECTIVES Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. METHODS From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140, two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). RESULTS The mean postoperative follow-up period was 12.5 (range: 1–24 months). After surgery, hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. CONCLUSIONS Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, improving permanently the quality of life in patients with palmar and axillary hyperhidrosis. PMID:23442937

  13. Biogas Upgrading via Hydrogenotrophic Methanogenesis in Two-Stage Continuous Stirred Tank Reactors at Mesophilic and Thermophilic Conditions.

    Science.gov (United States)

    Bassani, Ilaria; Kougias, Panagiotis G; Treu, Laura; Angelidaki, Irini

    2015-10-20

    This study proposes an innovative setup composed of two-stage reactors to achieve biogas upgrading, coupling the CO2 in the biogas with external H2 and subsequently converting it into CH4 by hydrogenotrophic methanogenesis. In this configuration, the biogas produced in the first reactor was transferred to the second one, where H2 was injected. The configuration was tested at both mesophilic and thermophilic conditions. After H2 addition, the produced biogas was upgraded to an average CH4 content of 89% in the mesophilic reactor and 85% in the thermophilic one. At thermophilic conditions, a higher efficiency of CH4 production and CO2 conversion was recorded. The consequent increase of pH did not inhibit the process, indicating adaptation of the microorganisms to higher pH levels. The effects of H2 on the microbial community were studied using high-throughput Illumina random sequences and full-length 16S rRNA genes extracted from the total sequences. The relative abundance of the archaeal community markedly increased upon H2 addition, with Methanoculleus as the dominant genus. The increase of hydrogenotrophic methanogens and syntrophic Desulfovibrio and the decrease of aceticlastic methanogens indicate a H2-mediated shift toward the hydrogenotrophic pathway, enhancing biogas upgrading. Moreover, Thermoanaerobacteraceae were likely involved in syntrophic acetate oxidation with hydrogenotrophic methanogens in the absence of aceticlastic methanogenesis.

  14. Combined Two-Stage Stochastic Programming and Receding Horizon Control Strategy for Microgrid Energy Management Considering Uncertainty

    Directory of Open Access Journals (Sweden)

    Zhongwen Li

    2016-06-01

    Microgrids (MGs) are presented as a cornerstone of smart grids. With the potential to integrate intermittent renewable energy sources (RES) in a flexible and environmentally friendly way, the MG concept has gained even more attention. Due to the randomness of RES, load, and electricity price in an MG, forecast errors will affect the performance of the power scheduling and the operating cost of the MG. In this paper, a combined stochastic programming and receding horizon control (SPRHC) strategy is proposed for microgrid energy management under uncertainty, which combines the advantages of two-stage stochastic programming (SP) and receding horizon control (RHC). With the SP strategy, a scheduling plan can be derived that minimizes the risk of uncertainty by incorporating the uncertainty of the MG into the optimization model. With the RHC strategy, the uncertainty within the MG can be further compensated through a feedback mechanism using the latest updated forecast information. In our approach, a proper strategy is also proposed to keep the SP model a mixed integer linear constrained quadratic programming (MILCQP) problem, which is solvable without resorting to any heuristic algorithms. The results of numerical experiments explicitly demonstrate the superiority of the proposed strategy for both islanded and grid-connected operating modes of an MG.
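
    A toy two-stage stochastic program in this spirit is sketched below as a scenario-based linear program: a first-stage energy purchase is made before demand is known, and per-scenario recourse purchases cover any shortfall. It is far simpler than the paper's MILCQP formulation, and the prices, probabilities and demands are invented.

        import numpy as np
        from scipy.optimize import linprog

        c, q = 0.10, 0.25                      # day-ahead and recourse prices
        p = np.array([0.3, 0.5, 0.2])          # scenario probabilities
        d = np.array([80.0, 100.0, 130.0])     # scenario demands (kWh)

        S = len(p)
        cost = np.concatenate(([c], p * q))    # variables: [x, y_1..y_S]
        A_ub = np.hstack([-np.ones((S, 1)), -np.eye(S)])   # x + y_s >= d_s
        b_ub = -d

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (S + 1))
        print(f"first-stage purchase x = {res.x[0]:.1f} kWh, "
              f"expected cost = {res.fun:.2f}")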

  15. A two-stage strategy to accommodate general patterns of confounding in the design of observational studies.

    Science.gov (United States)

    Haneuse, Sebastien; Schildcrout, Jonathan; Gillen, Daniel

    2012-04-01

    Accommodating general patterns of confounding in sample size/power calculations for observational studies is extremely challenging, both technically and scientifically. While employing previously implemented sample size/power tools is appealing, they typically ignore important aspects of the design/data structure. In this paper, we show that sample size/power calculations that ignore confounding can be much more unreliable than is conventionally thought; using real data from the US state of North Carolina, naive calculations yield sample size estimates that are half those obtained when confounding is appropriately acknowledged. Unfortunately, eliciting realistic design parameters for confounding mechanisms is difficult. To overcome this, we propose a novel two-stage strategy for observational study design that can accommodate arbitrary patterns of confounding. At the first stage, researchers establish bounds for power that facilitate the decision of whether or not to initiate the study. At the second stage, internal pilot data are used to estimate key scientific inputs that can be used to obtain realistic sample size/power. Our results indicate that the strategy is effective at replicating gold standard calculations based on knowing the true confounding mechanism. Finally, we show that consideration of the nature of confounding is a crucial aspect of the elicitation process; depending on whether the confounder is positively or negatively associated with the exposure of interest and outcome, naive power calculations can either under or overestimate the required sample size. Throughout, simulation is advocated as the only general means to obtain realistic estimates of statistical power; we describe, and provide in an R package, a simple algorithm for estimating power for a case-control study.
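
    A minimal sketch of the simulation-based power estimation advocated above, for a logistic model with a single confounder; the effect sizes, confounding mechanism and sample size are placeholders, and this Python sketch is not the authors' R package.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)

        def simulate_power(n, beta_exposure=0.5, beta_conf=0.8,
                           n_sims=500, alpha=0.05):
            # Estimate power as the fraction of simulated datasets in which
            # the exposure effect is detected at level alpha.
            hits = 0
            for _ in range(n_sims):
                conf = rng.normal(size=n)                        # confounder
                expo = rng.binomial(1, 1 / (1 + np.exp(-conf)))  # exposure tied to confounder
                logit = -1.0 + beta_exposure * expo + beta_conf * conf
                y = rng.binomial(1, 1 / (1 + np.exp(-logit)))
                X = sm.add_constant(np.column_stack([expo, conf]))
                fit = sm.Logit(y, X).fit(disp=0)
                hits += fit.pvalues[1] < alpha
            return hits / n_sims

        print(f"estimated power at n=600: {simulate_power(600):.2f}")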

  16. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
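
    The retro-fitting step can be sketched for a continuous outcome with the standard normal approximation: compute N from an assumed standard deviation, then evaluate the power actually achieved when the true SD differs. The formulas below are the textbook two-sample approximation, not the paper's exact procedure, and the numbers are illustrative.

        from scipy.stats import norm

        def n_per_group(delta, sigma, alpha=0.05, power=0.80):
            # Normal-approximation sample size for comparing two means
            za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
            return 2 * ((za + zb) * sigma / delta) ** 2

        def real_power(n, delta, sigma_true, alpha=0.05):
            # Power actually achieved with n per group if the true SD differs
            za = norm.ppf(1 - alpha / 2)
            return norm.cdf(delta / (sigma_true * (2 / n) ** 0.5) - za)

        n = n_per_group(delta=5.0, sigma=10.0)     # planned with assumed SD = 10
        print(f"planned n per group: {n:.0f}")
        for err in (-0.2, 0.0, 0.2):               # relative error on the SD
            print(f"SD off by {err:+.0%}: real power = "
                  f"{real_power(n, 5.0, 10.0 * (1 + err)):.2f}")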

  17. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  18. The Power of Slightly More than One Sample in Randomized Load Balancing

    Science.gov (United States)

    2015-04-26

    of which can be executed in parallel on possibly different servers. In queueing-theory parlance, this model differs from the models mentioned earlier... goal of minimizing queueing delays. When the number of processors is very large, a popular routing algorithm works as follows: select two servers at random and route an arriving task to the least loaded of the two. It is well-known that this algorithm dramatically reduces queueing delays.
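
    The "two choices" effect is easy to reproduce in the static balls-into-bins abstraction of this routing policy. The sketch below compares purely random placement (d = 1) with least-loaded-of-two placement (d = 2); bin counts and sample sizes are arbitrary.

        import random

        def max_load(n_bins, n_balls, d, rng):
            # Each ball samples d bins uniformly (with replacement) and joins
            # the least loaded one; d = 1 is plain random placement.
            load = [0] * n_bins
            for _ in range(n_balls):
                choice = min(rng.choices(range(n_bins), k=d),
                             key=load.__getitem__)
                load[choice] += 1
            return max(load)

        rng = random.Random(42)
        n = 10000
        print("d=1 max load:", max_load(n, n, 1, rng))  # grows like log n / log log n
        print("d=2 max load:", max_load(n, n, 2, rng))  # grows like log log n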

  19. Two-stage dilute-acid and organic-solvent lignocellulosic pretreatment for enhanced bioprocessing

    Energy Technology Data Exchange (ETDEWEB)

    Brodeur, G.; Telotte, J.; Stickel, J. J.; Ramakrishnan, S.

    2016-11-01

    A two-stage pretreatment approach for biomass is developed in the current work, in which dilute acid (DA) pretreatment is followed by a solvent-based pretreatment (N-methyl morpholine N-oxide -- NMMO). When the combined pretreatment (DAWNT) is applied to sugarcane bagasse and corn stover, the rates of hydrolysis and overall yields (>90%) improve dramatically, and under certain conditions the additional NMMO step can take 48 h off the hydrolysis time needed to reach similar conversions. DAWNT shows a 2-fold increase in characteristic rates and also fractionates the different components of biomass -- DA treatment removes the hemicellulose, while the remaining cellulose is broken down to simple sugars by enzymatic hydrolysis after NMMO treatment. The remaining residual solid is high-purity lignin. Future work will focus on developing a full-scale economic analysis of DAWNT for use in biomass fractionation.

  20. Reconstruction of Gene Regulatory Networks Based on Two-Stage Bayesian Network Structure Learning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Gui-xia Liu; Wei Feng; Han Wang; Lei Liu; Chun-guang Zhou

    2009-01-01

    In the post-genomic biology era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used in reconstructing the gene regulatory network for its advantages, but how to determine the network structure and parameters is still an important open question. This paper proposes a two-stage structure learning algorithm which integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated with the use of both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can find many of the known real regulatory relationships from the literature and predict other, unknown relationships with high validity and accuracy.
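
    A compact sketch of the two-stage idea follows: stage one prunes candidate parents with a cheap marginal-correlation filter, stage two scores the survivors exhaustively with BIC. The correlation filter and exhaustive search stand in for the paper's immune evolution algorithm, the data are synthetic, and the variable ordering is assumed known.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(0)

        # Toy expression data with a known chain X0 -> X1 -> X2
        n = 500
        X0 = rng.normal(size=n)
        X1 = 0.8 * X0 + rng.normal(scale=0.5, size=n)
        X2 = 0.9 * X1 + rng.normal(scale=0.5, size=n)
        data = np.column_stack([X0, X1, X2])

        def bic(y, X):
            # BIC of a linear-Gaussian node given its candidate parent set
            if X.shape[1]:
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                resid = y - X @ beta
            else:
                resid = y - y.mean()
            m = len(y)
            return m * np.log(resid.var() + 1e-12) + (X.shape[1] + 1) * np.log(m)

        # Stage 1: prune candidate parents by marginal correlation
        corr = np.corrcoef(data.T)
        cands = {j: [i for i in range(j) if abs(corr[i, j]) > 0.3]
                 for j in range(3)}

        # Stage 2: exhaustive BIC search over the pruned sets
        for j, cand in cands.items():
            subsets = (s for k in range(len(cand) + 1)
                       for s in combinations(cand, k))
            best = min(subsets, key=lambda s: bic(data[:, j], data[:, list(s)]))
            print(f"parents of X{j}: {best}")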

  1. The Application of Two-stage Structure Decomposition Technique to the Study of Industrial Carbon Emissions

    Institute of Scientific and Technical Information of China (English)

    Yanqiu HE

    2015-01-01

    Total carbon emissions control is the ultimate goal of carbon emission reduction, while industrial carbon emissions are the basic units of the total. Building on existing research results, this paper proposes a two-stage input-output structure decomposition method that fully combines the input-output method with the structure decomposition technique. In this study, more comprehensive technical progress indicators were chosen than in previous studies, including the utilization efficiency of all kinds of intermediate inputs such as energy and non-energy products, and these were ultimately mapped to the factors affecting the carbon emissions of different industries. Through the analysis, the contribution rate of each factor to industrial carbon emissions was obtained. A theoretical basis and data support are thus provided for the total carbon emissions control of China from the perspective of industrial emissions.

  2. A two-stage metal valorisation process from electric arc furnace dust (EAFD)

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD with other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistant furnace, removal of lead is enabled by the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is conducted. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

  3. A wavelet-based two-stage near-lossless coder.

    Science.gov (United States)

    Yea, Sehoon; Pearlman, William A

    2006-11-01

    In this paper, we present a two-stage near-lossless compression scheme. It belongs to the class of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual to guarantee a given L∞ error bound in the pixel domain. We focus on the selection of the optimum bit rate for the lossy layer to achieve the minimum total bit rate. Unlike other similar lossy plus lossless approaches using a wavelet-based lossy layer, the proposed method does not require iteration of decoding and inverse discrete wavelet transform in succession to locate the optimum bit rate. We propose a simple method to estimate the optimal bit rate, with a theoretical justification based on the critical rate argument from rate-distortion theory and the independence of the residual error.
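
    The second-stage L∞ guarantee comes from uniform quantization of the residual with step 2·emax + 1: every reconstructed pixel then differs from the original by at most emax. A minimal sketch follows; the "lossy layer" here is a stand-in array rather than a wavelet coder.

        import numpy as np

        def near_lossless(original, lossy, emax):
            # Quantize the residual so that |reconstruction - original| <= emax
            residual = original.astype(np.int64) - lossy.astype(np.int64)
            step = 2 * emax + 1
            q = np.round(residual / step).astype(np.int64)  # indices to entropy-code
            recon = lossy + q * step
            return q, recon

        rng = np.random.default_rng(0)
        orig = rng.integers(0, 4096, (64, 64))              # 12-bit image
        lossy = orig + rng.integers(-20, 21, orig.shape)    # stand-in lossy layer
        q, recon = near_lossless(orig, lossy, emax=2)
        print("max abs error:", np.abs(recon - orig).max())  # <= 2 guaranteed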

  4. Two-Stage Over-the-Air (OTA) Test Method for LTE MIMO Device Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Ya Jing

    2012-01-01

    With MIMO technology being adopted by the wireless communication standards LTE and HSPA+, MIMO OTA research has attracted wide interest from both industry and academia. Parallel studies are underway in COST2100, CTIA, and 3GPP RAN WG4. The major test challenge for MIMO OTA is how to create a repeatable scenario which accurately reflects the MIMO antenna radiation performance in a realistic wireless propagation environment. Different MIMO OTA methods differ in the way they reproduce a specified MIMO channel model. This paper introduces a novel, flexible, and cost-effective method for measuring MIMO OTA using a two-stage approach. In the first stage, the antenna pattern is measured in an anechoic chamber using a nonintrusive approach, that is, without cabled connections or modifications to the device. In the second stage, the antenna pattern is convolved with the chosen channel model in a channel emulator to measure throughput using a cabled connection.

  5. Two stages of parafoveal processing during reading: Evidence from a display change detection task.

    Science.gov (United States)

    Angele, Bernhard; Slattery, Timothy J; Rayner, Keith

    2016-08-01

    We used a display change detection paradigm (Slattery, Angele, & Rayner, 2011, Human Perception and Performance, 37, 1924-1938) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when the change was from a wordlike preview, but the preview benefit effect on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we did not find any influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not use the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage, and a late, attention-dependent lexical access stage.

  6. Enhanced biodiesel production in Neochloris oleoabundans by a semi-continuous process in two stage photobioreactors.

    Science.gov (United States)

    Yoon, Se Young; Hong, Min Eui; Chang, Won Seok; Sim, Sang Jun

    2015-07-01

    Under autotrophic conditions, highly productive biodiesel production was achieved using a semi-continuous culture system in Neochloris oleoabundans. In particular, the flue gas generated by combustion of liquefied natural gas and natural solar radiation were used for cost-effective microalgal culture system. In semi-continuous culture, the greater part (~80%) of the culture volume containing vegetative cells grown under nitrogen-replete conditions in a first photobioreactor (PBR) was directly transferred to a second PBR and cultured sequentially under nitrogen-deplete conditions for accelerating oil accumulation. As a result, in semi-continuous culture, the productivities of biomass and biodiesel in the cells were increased by 58% (growth phase) and 51% (induction phase) compared to the cells in batch culture, respectively. The semi-continuous culture system using two stage photobioreactors is a very efficient strategy to further improve biodiesel production from microalgae under photoautotrophic conditions.

  7. The Sources of Efficiency of the Nigerian Banking Industry: A Two- Stage Approach

    Directory of Open Access Journals (Sweden)

    Frances Obafemi

    2013-11-01

    The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in either the pre- or post-liberalization era. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.

  8. Two-stage triolein breath test differentiates pancreatic insufficiency from other causes of malabsorption

    Energy Technology Data Exchange (ETDEWEB)

    Goff, J.S.

    1982-07-01

    In 24 patients with malabsorption, (14C)triolein breath tests were conducted before and together with the administration of pancreatic enzymes (Pancrease, Johnson and Johnson, Skillman, N.J.). Eleven patients with pancreatic insufficiency had a significant rise in peak percent dose per hour of 14CO2 excretion after Pancrease, whereas 13 patients with other causes of malabsorption had no increase in 14CO2 excretion (2.61 ± 0.96 vs. 0.15 ± 0.45, p < 0.001). The two-stage (14C)triolein breath test appears to be an accurate and simple noninvasive test of fat malabsorption that differentiates steatorrhea secondary to pancreatic insufficiency from other causes of steatorrhea.

  9. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    Science.gov (United States)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach for lot streaming based on critical machine consideration for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical for valid reasons, and problems of this kind are known to be NP-hard. A mathematical model was developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining the optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments. All possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule in all eleven cases. A procedure for identifying the best lot streaming strategy was suggested.

  10. Product prioritization in a two-stage food production system with intermediate storage

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter

    2007-01-01

    In the food-processing industry, usually a limited number of storage tanks for intermediate storage is available, and these are used for different products. The market sometimes requires extremely short lead times for some products, leading to prioritization of these products, partly through the dedication of a storage tank. This type of situation has hardly been investigated, although planners struggle with it in practice. This paper aims at investigating the fundamental effect of prioritization and dedicated storage in a two-stage production system, for various product mixes. We show

  11. Experimental and modeling study of a two-stage pilot scale high solid anaerobic digester system.

    Science.gov (United States)

    Yu, Liang; Zhao, Quanbao; Ma, Jingwei; Frear, Craig; Chen, Shulin

    2012-11-01

    This study established a comprehensive model to configure a new two-stage high solid anaerobic digester (HSAD) system designed for highly degradable organic fraction of municipal solid wastes (OFMSW). The HSAD reactor as the first stage was naturally separated into two zones due to biogas floatation and low specific gravity of solid waste. The solid waste was retained in the upper zone while only the liquid leachate resided in the lower zone of the HSAD reactor. Continuous stirred-tank reactor (CSTR) and advective-diffusive reactor (ADR) models were constructed in series to describe the whole system. Anaerobic digestion model No. 1 (ADM1) was used as reaction kinetics and incorporated into each reactor module. Compared with the experimental data, the simulation results indicated that the model was able to well predict the pH, volatile fatty acid (VFA) and biogas production.

  12. Study of a two-stage photobase generator for photolithography in microelectronics.

    Science.gov (United States)

    Turro, Nicholas J; Li, Yongjun; Jockusch, Steffen; Hagiwara, Yuji; Okazaki, Masahiro; Mesch, Ryan A; Schuster, David I; Willson, C Grant

    2013-03-01

    The investigation of the photochemistry of a two-stage photobase generator (PBG) is described. Absorption of a photon by a latent PBG (1) (first step) produces a PBG (2). Irradiation of 2 in the presence of water produces a base (second step). This two-photon sequence (1 + hν → 2 + hν → base) is an important component in the design of photoresists for pitch division technology, a method that doubles the resolution of projection photolithography for the production of microelectronic chips. In the present system, the excitation of 1 results in a Norrish type II intramolecular hydrogen abstraction to generate a 1,4-biradiacal that undergoes cleavage to form 2 and acetophenone (Φ ∼ 0.04). In the second step, excitation of 2 causes cleavage of the oxime ester (Φ = 0.56) followed by base generation after reaction with water.

  13. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two-stage process

    DEFF Research Database (Denmark)

    Antonopoulou, Georgia; Gavala, Hariklia N.; Skiadas, Ioannis

    2012-01-01

    The present study focused on the application of Anaerobic Digestion Model 1 (ADM1) to methane production from acidified sorghum extract generated by a hydrogen-producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were estimated by fitting the model equations to data obtained from batch experiments. The simulation of the continuous reactor performance at all HRTs tested (20, 15 and 10 d) was very satisfactory. Specifically, the largest deviation of the theoretical predictions from the experimental data was 12% for the methane production rate at the HRT of 20 d, while the deviation values for the 15 and 10 d HRTs were 1.9% and 1.1%, respectively. The model predictions regarding pH, methane percentage in the gas phase and COD removal were in very good agreement with the experimental data, with a deviation

  14. A Two-Stage Diagnosis Framework for Wind Turbine Gearbox Condition Monitoring

    Directory of Open Access Journals (Sweden)

    Janet M. Twomey

    2013-01-01

    Advances in high-performance sensing technologies enable the development of wind turbine condition monitoring systems to diagnose and predict the system-wide effects of failure events. This paper presents a vibration-based two-stage fault detection framework for failure diagnosis of rotating components in wind turbines. The proposed framework integrates an analytical defect detection method and a graphical verification method to ensure diagnosis efficiency and accuracy. The efficacy of the proposed methodology is demonstrated with a case study on the gearbox condition monitoring Round Robin study dataset provided by the National Renewable Energy Laboratory (NREL). The developed methodology successfully picked five faults out of seven in total, with accurate severity levels and without producing any false alarm in the blind analysis. The case study results indicated that the developed fault detection framework is effective for analyzing gear and bearing faults in wind turbine drive train systems based upon system vibration characteristics.

  15. Nitrification and microalgae cultivation for two-stage biological nutrient valorization from source separated urine.

    Science.gov (United States)

    Coppens, Joeri; Lindeboom, Ralph; Muys, Maarten; Coessens, Wout; Alloul, Abbas; Meerbergen, Ken; Lievens, Bart; Clauwaert, Peter; Boon, Nico; Vlaeminck, Siegfried E

    2016-07-01

    Urine contains the majority of the nutrients in urban wastewaters and is an ideal nutrient recovery target. In this study, stabilization of real undiluted urine through nitrification and subsequent microalgae cultivation were explored as a strategy for biological nutrient recovery. A nitrifying inoculum screening revealed a commercial aquaculture inoculum to have the highest halotolerance. This inoculum was compared with municipal activated sludge for the start-up of two nitrification membrane bioreactors. Complete nitrification of undiluted urine was achieved in both systems at a conductivity of 75 mS cm(-1) and a loading rate above 450 mg N L(-1) d(-1). The halotolerant inoculum shortened the start-up time by 54%. Nitrite oxidizers showed faster salt adaptation, and Nitrobacter spp. became the dominant nitrite oxidizers. Nitrified urine as a growth medium for Arthrospira platensis demonstrated superior growth compared to untreated urine and resulted in a high protein content of 62%. This two-stage strategy is therefore a promising approach for biological nutrient recovery.

  16. STOCHASTIC DISCRETE MODEL OF TWO-STAGE ISOLATION SYSTEM WITH RIGID LIMITERS

    Institute of Scientific and Technical Information of China (English)

    HE Hua; FENG Qi; SHEN Rong-ying; WANG Yu

    2006-01-01

    The possible intermittent impacts of a two-stage isolation system with rigid limiters have been investigated. The isolation system is under periodic external excitation disturbed by small stationary Gaussian white noise after shock. Considering the maximal impact, the zero-order approximate stochastic discrete model and the first-order approximate stochastic model are developed for the period after shock. The real isolation system of an MTU diesel engine is used to evaluate the established model. After calculation of the numerical example, the effects of noise excitation on the isolation system are discussed. The results show that the behavior of the system is complicated due to intermittent impact, and the difference between the zero-order model and the first-order model may be great. The effect of small noise is obvious. The results may be expected to be useful to naval designers.

  17. Two-stage high frequency pulse tube cooler for refrigeration at 25 K

    CERN Document Server

    Dietrich, M

    2009-01-01

    A two-stage Stirling-type U-shape pulse tube cryocooler driven by a 10 kW-class linear compressor was designed, built and tested. A special feature of the cold head is the absence of a heat exchanger at the cold end of the first stage, since the intended application requires no cooling power at an intermediate temperature. Simulations were done using Sage software to find the optimum operating conditions and cold head geometry. Flow-impedance matching was required to connect the compressor, designed for 60 Hz operation, to the 40 Hz cold head. A cooling power of 12.9 W at 25 K with an electrical input power of 4.6 kW has been achieved so far. The lowest temperature reached is 13.7 K.

  18. Two-stage reflective optical system for achromatic 10 nm x-ray focusing

    Science.gov (United States)

    Motoyama, Hiroto; Mimura, Hidekazu

    2015-12-01

    Recently, coherent x-ray sources have promoted developments of optical systems for focusing, imaging, and interferometry. In this paper, we propose a two-stage focusing optical system with the goal of achromatically focusing pulses from an x-ray free-electron laser (XFEL) to a focal width of 10 nm. In this optical system, the x-ray beam is expanded by a grazing-incidence aspheric mirror and focused by a mirror shaped as a solid of revolution. We describe the design procedure and discuss the theoretical focusing performance. In theory, soft-XFEL light can be focused to a 10 nm area without chromatic aberration and with high reflectivity, creating an unprecedented power density of 10²⁰ W cm⁻² in the soft-x-ray range.

  19. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the solar irradiance and temperature measurements that have been used in conventional power reserve control schemes to estimate the available PV power are not required, thereby being a sensorless approach with reduced cost. Experimental tests have been...

  20. Sensorless Reserved Power Control Strategy for Two-Stage Grid-Connected Photovoltaic Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2016-01-01

    Due to the still increasing penetration level of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A reserved power control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the reserved power control for grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the irradiance measurements that have been used in conventional control schemes to estimate the available PV power are not required, thereby being a sensorless solution. Simulations and experimental tests have been performed on a 3-kW two-stage single...

  1. Prey-Predator Model with Two-Stage Infection in Prey: Concerning Pest Control

    Directory of Open Access Journals (Sweden)

    Swapan Kumar Nandi

    2015-01-01

    A prey-predator model system is developed; specifically, a disease is introduced into the prey population. Here the prey population is taken as the pest, and the predators consume the selected pest. Moreover, we assume that the prey species is infected with a viral disease, forming a susceptible class and two-stage infected classes, and that the early stage of infected prey is more vulnerable to predation by the predator. It is also assumed that the later stage of infected pests is not eaten by the predator. Different equilibria of the system are investigated, and their stability and the Hopf bifurcation of the system around the interior equilibrium are discussed. A modified model has been constructed by considering an alternative food source for the predator population, and the dynamical behavior of the modified model has been investigated. The analytical results are demonstrated numerically using a simulated set of parameter values.
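
    One plausible ODE formalization of the model described above is sketched here: susceptible pest S, early and late infected stages I1 and I2, and a predator P that consumes S and, preferentially, I1 but not I2. All parameter values are invented, and the functional forms are assumptions rather than the paper's exact equations.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, r=1.0, K=100.0, beta=0.05, sigma=0.2,
                a1=0.01, a2=0.04, e=0.4, mu=0.3, d=0.2):
            # S: susceptible pest, I1/I2: early/late infected, P: predator.
            # Early-stage infected prey are eaten preferentially (a2 > a1);
            # late-stage infected prey are not eaten, as in the model above.
            S, I1, I2, P = y
            dS = r * S * (1 - (S + I1 + I2) / K) - beta * S * (I1 + I2) - a1 * S * P
            dI1 = beta * S * (I1 + I2) - sigma * I1 - a2 * I1 * P
            dI2 = sigma * I1 - mu * I2
            dP = e * (a1 * S + a2 * I1) * P - d * P
            return [dS, dI1, dI2, dP]

        sol = solve_ivp(rhs, (0.0, 200.0), [50.0, 5.0, 0.0, 2.0], max_step=0.5)
        print("final state (S, I1, I2, P):", np.round(sol.y[:, -1], 2))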

  2. Lossless and near-lossless digital angiography coding using a two-stage motion compensation approach.

    Science.gov (United States)

    dos Santos, Rafael A P; Scharcanski, Jacob

    2008-07-01

    This paper presents a two-stage motion compensation coding scheme for image sequences in hemodynamics. The first stage of the proposed method implements motion compensation, and the second stage corrects local pixel intensity distortions with a context adaptive linear predictor. The proposed method is robust to the local intensity distortions and the noise that often degrade these image sequences, providing lossless and near-lossless quality. Our experiments with lossless compression of 12 bits/pixel studies indicate that, potentially, our approach can perform 3.8%, 2% and 1.6% better than JPEG-2000, JPEG-LS and the method proposed by Scharcanski [1], respectively. The performance tends to improve for near-lossless compression. Therefore, this work presents experimental evidence that, for coding image sequences in hemodynamics, an adequate motion compensation scheme can be more efficient than the still-image coding methods often used nowadays.

  3. Quasi-estimation as a Basis for Two-stage Solving of Regression Problem

    CERN Document Server

    Gordinsky, Anatoly

    2010-01-01

    An effective two-stage method for estimating the parameters of a linear regression is considered. For this purpose we introduce a certain quasi-estimator that, in contrast to the usual estimator, produces two alternative estimates. It is proved that, in comparison to the least squares estimate, one alternative has a significantly smaller quadratic risk, while retaining unbiasedness and consistency. These properties hold true for one-dimensional, multi-dimensional, orthogonal and non-orthogonal problems. Moreover, a Monte-Carlo simulation confirms high robustness of the quasi-estimator to violations of the initial assumptions. Therefore, at the first stage of the estimation we calculate the two alternative estimates mentioned above. At the second stage we choose the better estimate out of these alternatives. In order to do so we use additional information, partly but not exclusively of an a priori nature. In case of two alternatives the volume of such information should be minimal. Furthermore, the additional ...

  4. A characteristics study on the performance of a two-stage light gas gun

    Institute of Scientific and Technical Information of China (English)

    吴应湘; 郑之初; P.Kupschus

    1995-01-01

    In order to obtain an overall and systematic understanding of the performance of a two-stage light gas gun (TLGG), a numerical code to simulate the processes occurring in a gun shot is advanced, based on the quasi-one-dimensional unsteady equations of motion with the real gas effect, friction and heat transfer taken into account in a characteristic formulation for both driver and propellant gas. Comparisons of projectile velocities and projectile pressures along the barrel with experimental results from JET (Joint European Torus) and with computational data obtained by the Lagrangian method indicate that this code can provide results with good accuracy over a wide range of gun geometries and loading conditions.

  5. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2014-10-01

    We present a two-stage approach for the "helicopters and vehicles" intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and yielded several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is.

  6. A Two-Stage LGSM for Three-Point BVPs of Second-Order ODEs

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2008-08-01

    The study in this paper is a numerical integration of second-order three-point boundary value problems under two imposed nonlocal boundary conditions at t=t0, t=ξ, and t=t1 in a general setting, where t0<ξ<t1. We construct a two-stage Lie-group shooting method for finding unknown initial conditions, which are obtained through an iterative solution of derived algebraic equations in terms of a weighting factor r∈(0,1). The best r is selected by matching the target with a minimal discrepancy. Numerical examples are examined to confirm that the new approach has high efficiency and accuracy with a fast speed of convergence. Even for multiple solutions, the present method is also effective in finding them.

  8. Shaft Position Influence on Technical Characteristics of Universal Two-Stages Helical Speed Reducers

    Directory of Open Access Journals (Sweden)

    Мilan Rackov

    2005-10-01

    Purchasers of speed reducers decide to buy those units that most closely satisfy their demands at the lowest cost. The amount of material used, i.e., the mass and dimensions of the gear unit, influences the gear unit's price. Mass and dimensions, together with output torque, gear unit ratio and efficiency, are the most important technical characteristics of gear units and indicators of their quality. Centre distance and the position of the shafts have a significant influence on output torque, gear unit ratio and mass through the overall dimensions of the gear unit housing; these characteristics are therefore mutually dependent. This paper analyzes the influence of centre distance and shaft position on the output torque and ratio of universal two-stage gear units.

  9. A Two-stage Tuning Method of Servo Parameters for Feed Drives in Machine Tools

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the evaluation of dynamic performance for feed drives in machine tools, this paper presents a two-stage tuning method of servo parameters. In the first stage, the evaluation of dynamic performance, parameter tuning and optimization on a mechatronic integrated system simulation platform of feed drives are performed. As a result, a servo parameter combination is acquired. In the second stage, the servo parameter combination from the first stage is set and tuned further in a real machine tool whose dynamic performance is measured and evaluated using the cross grid encoder developed by Heidenhain GmbH. A case study shows that this method simplifies the test process effectively and results in a good dynamic performance in a real machine tool.

  10. Treatment of Domestic Sewage by Two-Stage-Bio-Contact Oxidation Process

    Institute of Scientific and Technical Information of China (English)

    LI Xiang-dong; FENG Qi-yan; LIU Zhong-wei; XIAO Xin; LIN Guo-hua

    2005-01-01

    Effects of hydraulic retention time (HRT) and gas volume on the efficiency of wastewater treatment are discussed based on a simulation experiment in which domestic sewage was treated by the two-stage bio-contact oxidation process. The results show that the average CODcr, BOD5, suspended solids (SS), and ammonia-nitrogen removal rates are 94.5%, 93.2%, 91.7% and 46.9%, respectively, under the conditions of a total air/water ratio of 5:1, an air/water ratio of 3:1 for oxidation tank 1 and 2:1 for oxidation tank 2, and a hydraulic retention time of 1 h for each stage. The method is suitable for domestic sewage treatment in residential communities and small towns as well.

  11. Alignment and characterization of the two-stage time delay compensating XUV monochromator

    CERN Document Server

    Eckstein, Martin; Kubin, Markus; Yang, Chung-Hsin; Frassetto, Fabio; Poletto, Luca; Vrakking, Marc J J; Kornilov, Oleg

    2016-01-01

    We present the design, implementation and alignment procedure for a two-stage time-delay-compensating monochromator. The setup spectrally filters the radiation of a high-order harmonic generation source, providing wavelength-selected XUV pulses with a bandwidth of 300 to 600 meV in the photon energy range of 3 to 50 eV. XUV pulses as short as 12 ± 3 fs are demonstrated. Transmission of the 400 nm (3.1 eV) light facilitates precise alignment of the monochromator. This alignment strategy, together with the stable mechanical design of the motorized beamline components, enables us to automatically scan the XUV photon energy in pump-probe experiments that require XUV beam pointing stability. The performance of the beamline is demonstrated by the generation of IR-assisted sidebands in XUV photoionization of argon atoms.

  12. Final two-stage MOAO on-sky demonstration with CANARY

    Science.gov (United States)

    Gendron, E.; Morris, T.; Basden, A.; Vidal, F.; Atkinson, D.; Bitenc, U.; Buey, T.; Chemla, F.; Cohen, M.; Dickson, C.; Dipper, N.; Feautrier, P.; Gach, J.-L.; Gratadour, D.; Henry, D.; Huet, J.-M.; Morel, C.; Morris, S.; Myers, R.; Osborn, J.; Perret, D.; Reeves, A.; Rousset, G.; Sevin, A.; Stadler, E.; Talbot, G.; Todd, S.; Younger, E.

    2016-07-01

    CANARY is an on-sky Laser Guide Star (LGS) tomographic AO demonstrator in operation at the 4.2m William Herschel Telescope (WHT) in La Palma. From the early demonstration of open-loop tomography on a single deformable mirror using natural guide stars in 2010, CANARY has been progressively upgraded each year to reach its final goal in July 2015. It is now a two-stage system that mimics the future E-ELT: a GLAO-driven woofer based on 4 laser guide stars delivers a ground-layer-compensated field to a figure-sensor-locked tweeter DM, which achieves the final on-axis tomographic compensation. We present the overall system, the control strategy and an overview of its on-sky performance.

  13. Performance of a highly loaded two stage axial-flow fan

    Science.gov (United States)

    Ruggeri, R. S.; Benser, W. A.

    1974-01-01

    A two-stage axial-flow fan with a tip speed of 1450 ft/sec (442 m/sec) and an overall pressure ratio of 2.8 was designed, built, and tested. At design speed and pressure ratio, the measured flow matched the design value of 184.2 lbm/sec (83.55 kg/sec). The adiabatic efficiency at the design operating point was 85.7 percent. The stall margin at design speed was 10 percent. A first-bending-mode flutter of the second-stage rotor blades was encountered near stall at speeds between 77 and 93 percent of design, and also at high pressure ratios at speeds above 105 percent of design. A 5 deg closed reset of the first-stage stator eliminated second-stage flutter for all but a narrow speed range near 90 percent of design.

  14. A Two-stage Kalman Filter for Sensorless Direct Torque Controlled PM Synchronous Motor Drive

    Directory of Open Access Journals (Sweden)

    Boyu Yi

    2013-01-01

    Full Text Available This paper presents an optimal two-stage extended Kalman filter (OTSEKF) for closed-loop flux, torque and speed estimation of a permanent magnet synchronous motor (PMSM), to achieve sensorless DTC-SVPWM operation of the drive system. The novel observer is obtained by using the same transformation as in the linear Kalman observer proposed by C.-S. Hsieh and F.-C. Chen in 1999. The OTSEKF is an effective implementation of the extended Kalman filter (EKF) and provides a recursive optimum state estimate for PMSMs using terminal signals that may be polluted by noise. Compared to a conventional EKF, the OTSEKF reduces the number of arithmetic operations. Simulation and experimental results verify the effectiveness of the proposed OTSEKF observer for DTC of PMSMs.
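
The abstract does not reproduce the OTSEKF equations, so the sketch below shows only the generic recursive predict/update structure that the OTSEKF builds on: a plain linear Kalman filter step. The Hsieh-Chen two-stage transformation itself is not implemented here.

```python
import numpy as np

# Minimal linear Kalman filter step, shown only to illustrate recursive
# optimum state estimation from noisy terminal signals; this is not the
# OTSEKF of the paper.

def kalman_step(x, P, z, F, H, Q, R):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with the noisy measurement z
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```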

  15. Synchronous rapid start-up of the methanation and anammox processes in two-stage ASBRs

    Science.gov (United States)

    Duan, Y.; Li, W. R.; Zhao, Y.

    2017-01-01

    The “methanation + anaerobic ammonia oxidation autotrophic denitrification” method was implemented in anaerobic sequencing batch reactors (ASBRs) and achieved satisfactory synchronous removal of chemical oxygen demand (COD) and ammonia-nitrogen (NH4+-N) from wastewater after 75 days of operation. 90% of COD was removed at a COD load of 1.2 kg/(m3·d) and 90% of TN was removed at a TN load of 0.14 kg/(m3·d). The anammox reaction ratio was estimated to be 1:1.32:0.26. The results show that synchronous rapid start-up of the methanation and anaerobic ammonia oxidation processes in two-stage ASBRs is feasible.

  16. A Remote Liquid Target Loading System for a Two-Stage Gas Gun

    Science.gov (United States)

    Gibson, L. L.; Bartram, B.; Dattelbaum, D. M.; Sheffield, S. A.; Stahl, D. B.

    2009-12-01

    A Remote Liquid Loading System (RLLS) was designed and tested for the application of loading high-hazard liquid materials into instrumented target cells for gas gun-driven plate impact experiments. These high-hazard liquids tend to react with confining materials in a short period of time, degrading target assemblies and potentially building up pressure through the evolution of gas in the reactions. Therefore, the ability to load a gas gun target immediately prior to gun firing provides the most stable and reliable target fielding approach. We present the design and evaluation of an RLLS built for the LANL two-stage gas gun. The system has been used successfully to interrogate the shock initiation behavior of ~98 wt% hydrogen peroxide (H2O2) solutions, using embedded electromagnetic gauges for measurement of shock wave profiles in situ.

  17. Two-Stage Surgery for a Large Cervical Dumbbell Tumour in Neurofibromatosis 1: A Case Report

    Directory of Open Access Journals (Sweden)

    Mohd Ariff S

    2011-11-01

    Full Text Available Spinal neurofibromas occur sporadically and typically in association with neurofibromatosis 1. Patients afflicted with neurofibromatosis 1 usually present with involvement of several nerve roots. This report describes the case of a 14-year-old child with a large intraspinal but extradural dumbbell neurofibroma of the cervical region with paraspinal extension, extending from the C2 to C4 vertebrae. The lesions were readily detected by MR imaging and were successfully resected in a two-stage surgery, with an interval of one month between the first and second operations. We provide a brief review of the literature regarding various surgical approaches, emphasising the utility of anterior and posterior approaches.

  18. Colorimetric characterization of liquid crystal display using an improved two-stage model

    Institute of Scientific and Technical Information of China (English)

    Yong Wang; Haisong Xu

    2006-01-01

    An improved two-stage model for the colorimetric characterization of liquid crystal displays (LCDs) is proposed. The model comprises an S-shaped nonlinear function with four coefficients per channel to fit the tone reproduction curve (TRC), followed by a linear transfer matrix with black-level correction. For comparison with the simple model (SM), gain-offset-gain (GOG), S-curve and three one-dimensional look-up tables (3-1D LUTs) models, an identical LCD was characterized and the color differences were calculated and summarized over a set of 7 × 7 × 7 digital-to-analog converter (DAC) triplets as test data. The experimental results show that the proposed model outperforms the GOG and SM models and approaches the accuracy of the S-curve model and the 3-1D LUTs method.
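
A minimal sketch of such a two-stage characterization pipeline is given below. The TRC coefficients and the RGB-to-XYZ matrix are placeholders, not the paper's fitted values.

```python
import numpy as np

# Two-stage display characterization sketch: per-channel tone curve
# followed by a linear RGB->XYZ matrix with black-level correction.

def s_curve(d, a, b, c, g):
    """Generic S-shaped TRC mapping a normalized DAC value d in [0, 1]
    to relative channel radiance (placeholder coefficients)."""
    return a * d**g / (d**g + b) + c

M = np.array([[0.41, 0.36, 0.18],   # placeholder RGB->XYZ primaries
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
XYZ_black = np.array([0.1, 0.1, 0.1])  # placeholder measured black level

def dac_to_xyz(rgb_dac):
    rgb_lin = np.array([s_curve(d, 1.0, 0.05, 0.0, 2.2) for d in rgb_dac])
    return M @ rgb_lin + XYZ_black

print(dac_to_xyz([0.5, 0.5, 0.5]))  # mid-gray test triplet
```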

  19. Fast Image Segmentation Based on a Two-Stage Geometrical Active Contour

    Institute of Scientific and Technical Information of China (English)

    肖昌炎; 张素; 陈亚珠

    2005-01-01

    A fast two-stage geometric active contour algorithm for image segmentation is developed. First, the Eikonal equation is solved quickly using an improved fast sweeping method, and a criterion of local minimum of area gradient (LMAG) is presented to extract the optimal arrival time. Then, the final time function is passed as the initial state to an area- and length-minimizing flow model, which adjusts the interface more accurately and prevents it from leaking. For objects with complete and salient edges, the first stage alone is able to obtain an ideal result, with a time complexity of O(M), where M is the number of points in each coordinate direction. Both stages are needed for convoluted shapes, but the computational cost is still drastically reduced. The efficiency of the algorithm is verified in segmentation experiments on real images with different features.
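
A minimal sketch of the first stage, i.e. a standard fast sweeping solver for the Eikonal equation |grad T| = 1/F on a regular grid, is shown below; the LMAG arrival-time extraction and the second-stage area/length minimizing flow are not reproduced here.

```python
import numpy as np

BIG = 1e12  # stands in for "not yet reached"

def fast_sweeping(F, seeds, h=1.0, n_sweeps=8):
    """Solve |grad T| = 1/F with T = 0 at the seed points."""
    ni, nj = F.shape
    T = np.full((ni, nj), BIG)
    for (i, j) in seeds:
        T[i, j] = 0.0
    orders = [(range(ni), range(nj)),
              (range(ni - 1, -1, -1), range(nj)),
              (range(ni), range(nj - 1, -1, -1)),
              (range(ni - 1, -1, -1), range(nj - 1, -1, -1))]
    for _ in range(n_sweeps):
        for ir, jr in orders:                    # 4 Gauss-Seidel orderings
            for i in ir:
                for j in jr:
                    a = min(T[i - 1, j] if i > 0 else BIG,
                            T[i + 1, j] if i < ni - 1 else BIG)
                    b = min(T[i, j - 1] if j > 0 else BIG,
                            T[i, j + 1] if j < nj - 1 else BIG)
                    f = h / F[i, j]
                    if abs(a - b) >= f:          # one-sided update
                        t_new = min(a, b) + f
                    else:                        # two-sided quadratic update
                        t_new = 0.5 * (a + b + np.sqrt(2*f*f - (a - b)**2))
                    if t_new < T[i, j]:
                        T[i, j] = t_new
    return T

T = fast_sweeping(np.ones((64, 64)), seeds=[(32, 32)])  # distance map demo
```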

  20. Parametric theoretical study of a two-stage solar organic Rankine cycle for RO desalination

    Energy Technology Data Exchange (ETDEWEB)

    Kosmadakis, G.; Manolakos, D.; Papadakis, G. [Department of Natural Resources and Agricultural Engineering, Agricultural University of Athens, 75 Iera Odos Street, 11855 Athens (Greece)

    2010-05-15

    The present work concerns the parametric study of an autonomous, two-stage solar organic Rankine cycle for RO desalination. The main goal of the simulation is to estimate the efficiency and to calculate the annual mechanical energy available for desalination in the considered cases, in order to evaluate the influence of various parameters on the performance of the system. The parametric study varies the different parameters one at a time without actually changing the baseline case. The effects of the collectors' slope and of the total number of evacuated tube collectors used have been examined extensively. The total cost is also calculated for the different cases examined, along with the specific fresh water cost (EUR/m³). (author)
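
The shape of such a parametric study can be sketched as a nested sweep; annual_energy() and water_cost() below are hypothetical stand-ins for the authors' thermodynamic and economic models, and the parameter ranges are illustrative only.

```python
# Hypothetical parametric-study skeleton: sweep collector slope and
# collector count, record annual energy and specific water cost.

def parametric_study(annual_energy, water_cost):
    results = []
    for slope_deg in range(10, 61, 10):          # collector slope
        for n_collectors in range(50, 201, 25):  # evacuated tube collectors
            e = annual_energy(slope_deg, n_collectors)  # kWh/year
            c = water_cost(slope_deg, n_collectors)     # EUR/m^3
            results.append((slope_deg, n_collectors, e, c))
    return results
```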

  1. Conditional estimation of exponential random graph models from snowball sampling designs

    NARCIS (Netherlands)

    Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng

    2013-01-01

    A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members of an initial sample are asked to nominate their network partners, who are in turn added to the sample.

  2. Random or systematic sampling to detect a localised microbial contamination within a batch of food

    NARCIS (Netherlands)

    Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.

    2011-01-01

    Pathogenic microorganisms are known to be distributed heterogeneously in food products that are solid, semi-solid or powdered, like for instance peanut butter, cereals, or powdered milk. This complicates effective detection of the pathogens by sampling. Two-class sampling plans, which are deployed w

  3. Removal of trichloroethylene (TCE) contaminated soil using a two-stage anaerobic-aerobic composting technique.

    Science.gov (United States)

    Ponza, Supat; Parkpian, Preeda; Polprasert, Chongrak; Shrestha, Rajendra P; Jugsujinda, Aroon

    2010-01-01

    The effect of organic carbon addition on the remediation of trichloroethylene (TCE) contaminated clay soil was investigated using a two-stage anaerobic-aerobic composting system, and the TCE removal rate and the processes involved were determined. Uncontaminated clay soil was treated with composting materials (dried cow manure, rice husk and cane molasses) to give carbon-based treatments of 5%, 10% and 20% organic carbon (OC). All treatments were spiked with TCE at 1,000 mg TCE/kg DW and incubated under anaerobic, mesophilic conditions (35°C) for 8 weeks, followed by continuous aerobic conditions for another 6 weeks. TCE dissipation, its metabolites and biogas composition were measured throughout the experimental period. The results show that TCE degradation depended on the amount of organic carbon (OC) contained in the composting treatments/matrices. The highest TCE removal percentage (97%) and rate (75.06 µmol/kg DW/day) were obtained with the 10% OC composting matrices, compared with 87% and 27.75 µmol/kg DW/day for 20% OC, and 83% and 38.08 µmol/kg DW/day for the soil control treatment. TCE removal followed first-order reaction kinetics. The highest degradation rate constant (k₁ = 0.035 day⁻¹) was also obtained for the 10% OC treatment, followed by 20% OC (k₁ = 0.026 day⁻¹) and the 5% OC or soil control treatment (k₁ = 0.023 day⁻¹); the corresponding half-lives were 20, 27 and 30 days. The overall results suggest that the sequential two-stage anaerobic-aerobic composting technique has potential for remediation of TCE in heavy-textured soil, provided that an easily biodegradable source of organic carbon is present.
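
The reported half-lives follow directly from the first-order rate constants, as a consistency check:

```latex
\[
  C(t) = C_0\,e^{-k_1 t}, \qquad t_{1/2} = \frac{\ln 2}{k_1}
  \;\Rightarrow\;
  t_{1/2} = \tfrac{0.693}{0.035} \approx 20,\quad
  \tfrac{0.693}{0.026} \approx 27,\quad
  \tfrac{0.693}{0.023} \approx 30 \ \text{days}
\]
```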

  4. Two-Stage Surgical Treatment for Non-Union of a Shortened Osteoporotic Femur

    Directory of Open Access Journals (Sweden)

    Galal Zaki Said

    2013-01-01

    Full Text Available Introduction: We report a case of non-union with severe shortening of the femur following diaphysectomy for chronic osteomyelitis. Case Presentation: A boy aged 16 years presented with a dangling and excessively short left lower limb, using an elbow crutch in his right hand to help him walk. He had a history of diaphysectomy for chronic osteomyelitis at the age of 9. Examination revealed a freely mobile non-union of the left femur, with 18 cm of shortening and a 4 cm defect at the non-union site; the knee joint was ankylosed in extension. The tibia and fibula were 10 cm short. Considering the extensive shortening of the femur and tibia, in addition to osteoporosis, he was treated in two stages. In Stage I, the femoral non-union was treated by open reduction, internal fixation and iliac bone grafting; the patient was then allowed to walk with full weight bearing in an extension brace for 7 months. In Stage II, equalization of the leg length discrepancy (LLD) was achieved by simultaneous distraction of the femur and tibia using unilateral frames. At the 6-month follow-up he was fully weight bearing without any walking aid, with a heel lift to compensate for the residual 1.5 cm shortening. Three years later he reported that he was satisfied with the result of treatment and was leading a normal life as a university student. Conclusions: The two-stage treatment succeeded in restoring about 20 cm of femoral shortening in severely osteoporotic bone, and also in reducing the time spent in an external fixator.

  5. Design and Characterization of two stage High-Speed CMOS Operational Amplifier

    Directory of Open Access Journals (Sweden)

    Rahul Chaudhari

    2014-03-01

    Full Text Available This paper describes the design of a two-stage CMOS operational amplifier and analyzes the effect of various aspect ratios on its characteristics. The op-amp operates from a 1.8 V power supply in tsmc 0.18 μm CMOS technology. Trade-off curves are computed between characteristics such as gain, phase margin (PM), gain-bandwidth product (GBW), ICMR, CMRR and slew rate. The designed op-amp exhibits a unity-gain frequency of 14 MHz and a gain of 59.98 dB with a 61.235° phase margin. The design was carried out in Mentor Graphics tools, and simulation results were verified using ModelSim Eldo and Design Architect IC. The task of CMOS op-amp design optimization is investigated in this work: when analyzed as a search problem, it translates into a multi-objective optimization in which various specifications (gain, GBW, phase margin and others) have to be taken into account. The results are compared against the standard characteristics of the op-amp in graphs and tables, and simulations agree with theoretical predictions. The simulations confirm that the settling time can be further improved by increasing GBW; a settling time of 19 ns is achieved. It is also demonstrated that as W/L increases, GBW increases and settling time decreases.
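
As a rough design guide (a single-pole, unity-feedback approximation, not the paper's model), the link between GBW and settling time can be written as:

```latex
\[
  \tau \approx \frac{1}{2\pi\,\mathrm{GBW}}, \qquad
  t_s(\varepsilon) \approx \tau\,\ln\frac{1}{\varepsilon}
\]
```

For GBW = 14 MHz this gives τ ≈ 11 ns, so ideal 1% settling (ln 100 ≈ 4.6) would take roughly 50 ns; slewing and non-dominant poles lengthen this, which is why the achievable settling time improves as GBW is raised.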

  6. Anti-kindling Induced by Two-Stage Coordinated Reset Stimulation with Weak Onset Intensity

    Science.gov (United States)

    Zeitler, Magteld; Tass, Peter A.

    2016-01-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, in which two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis, or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies. PMID:27242500

  7. Focused ultrasound simultaneous irradiation/MRI imaging, and two-stage general kinetic model.

    Directory of Open Access Journals (Sweden)

    Sheng-Yao Huang

    Full Text Available Many studies have investigated how focused ultrasound (FUS) can be used to temporarily disrupt the blood-brain barrier (BBB) in order to facilitate the delivery of medication into lesion sites in the brain. In this study, through the setup of a real-time system, FUS irradiation and injections of ultrasound contrast agent (UCA) and Gadodiamide (Gd, an MRI contrast agent) can be conducted simultaneously during MRI scanning. Using this real-time system, we investigated in detail how the general kinetic model (GKM) is used to estimate Gd penetration in the FUS-irradiated area of a rat's brain as a function of UCA concentration changes after a single FUS irradiation. A two-stage GKM was proposed to estimate Gd penetration in the FUS-irradiated area under experimental conditions with repeated FUS irradiation combined with different UCA concentrations. The results showed that the focal increase in the transfer rate constant Ktrans caused by BBB disruption was dependent on the dose of UCA. Moreover, the amount of in vivo penetration of Evans blue in the FUS-irradiated area under various FUS irradiation conditions was assessed and showed a positive correlation with the transfer rate constants. Compared to the single-stage GKM, the two-stage GKM is more suitable for estimating the transfer rate constants of brain tissue treated with repeated FUS irradiations. This study demonstrated that the entire process of BBB disruption by FUS can be quantitatively monitored by real-time dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).
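
For reference, the standard single-stage GKM (Tofts model) that the two-stage variant extends can be sketched as a discrete convolution. Parameter values below are placeholders, and the paper's two-stage extension for repeated irradiation is not reproduced.

```python
import numpy as np

# Standard general kinetic (Tofts) model used in DCE-MRI:
#   C_t(t) = Ktrans * integral_0^t Cp(tau) * exp(-kep * (t - tau)) dtau

def tissue_curve(Cp, dt, Ktrans, kep):
    t = np.arange(len(Cp)) * dt
    kernel = np.exp(-kep * t)
    return Ktrans * np.convolve(Cp, kernel)[:len(Cp)] * dt

# Example with a boxcar arterial input function (all values placeholders).
dt = 1.0 / 60.0                       # 1 s sampling, expressed in minutes
Cp = np.zeros(600); Cp[10:60] = 1.0   # plasma concentration (mM)
Ct = tissue_curve(Cp, dt, Ktrans=0.05, kep=0.5)  # Ktrans, kep in 1/min
```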

  8. Condition monitoring of distributed systems using two-stage Bayesian inference data fusion

    Science.gov (United States)

    Jaramillo, Víctor H.; Ottewill, James R.; Dudek, Rafał; Lepiarczyk, Dariusz; Pawlik, Paweł

    2017-03-01

    In industrial practice, condition monitoring is typically applied to critical machinery. A particular piece of machinery may have its own condition monitoring system that allows the health condition of said piece of equipment to be assessed independently of any connected assets. However, industrial machines are typically complex sets of components that continuously interact with one another. In some cases, dynamics resulting from the inception and development of a fault can propagate between individual components. For example, a fault in one component may lead to an increased vibration level in both the faulty component, as well as in connected healthy components. In such cases, a condition monitoring system focusing on a specific element in a connected set of components may either incorrectly indicate a fault, or conversely, a fault might be missed or masked due to the interaction of a piece of equipment with neighboring machines. In such cases, a more holistic condition monitoring approach that can not only account for such interactions, but utilize them to provide a more complete and definitive diagnostic picture of the health of the machinery is highly desirable. In this paper, a Two-Stage Bayesian Inference approach allowing data from separate condition monitoring systems to be combined is presented. Data from distributed condition monitoring systems are combined in two stages, the first data fusion occurring at a local, or component, level, and the second fusion combining data at a global level. Data obtained from an experimental rig consisting of an electric motor, two gearboxes, and a load, operating under a range of different fault conditions is used to illustrate the efficacy of the method at pinpointing the root cause of a problem. The obtained results suggest that the approach is adept at refining the diagnostic information obtained from each of the different machine components monitored, therefore improving the reliability of the health assessment of
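
A minimal sketch of the two-stage fusion idea over discrete health states follows; all priors, likelihoods and coupling tables are illustrative placeholders, not values from the experimental rig.

```python
import numpy as np

# Two-stage Bayesian fusion sketch; states: 0 = healthy, 1 = faulty.

def normalize(p):
    return p / p.sum()

def local_fusion(prior, sensor_liks):
    """Stage 1: fuse all sensors attached to one component.
    Each lik is a vector with lik[state] = P(observation | state)."""
    post = prior.copy()
    for lik in sensor_liks:
        post = normalize(post * lik)
    return post

def global_fusion(prior, comp_posts, p_fault_given_cause):
    """Stage 2: combine component posteriors into a root-cause posterior.
    p_fault_given_cause[cause][k] = P(component k faulty | cause)."""
    post = prior.copy()
    for cause in range(len(prior)):
        for k, comp in enumerate(comp_posts):
            q = p_fault_given_cause[cause][k]
            post[cause] *= q * comp[1] + (1 - q) * comp[0]
    return normalize(post)

# Example: motor and gearbox, each monitored by two vibration sensors.
motor = local_fusion(np.array([0.9, 0.1]),
                     [np.array([0.7, 0.3]), np.array([0.4, 0.6])])
gearbox = local_fusion(np.array([0.9, 0.1]),
                       [np.array([0.2, 0.8]), np.array([0.3, 0.7])])
causes = global_fusion(np.array([0.5, 0.5]),     # gearbox fault vs. no fault
                       [motor, gearbox],
                       np.array([[0.3, 0.9],
                                 [0.05, 0.05]]))
print(causes)
```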

  9. Novel two-stage piezoelectric-based ocean wave energy harvesters for moored or unmoored buoys

    Science.gov (United States)

    Murray, R.; Rastegar, J.

    2009-03-01

    Harvesting mechanical energy from ocean wave oscillations for conversion to electrical energy has long been pursued as an alternative or self-contained power source. The attraction to harvesting energy from ocean waves stems from the sheer power of the wave motion, which can easily exceed 50 kW per meter of wave front. The principal barrier to harvesting this power is the very low and varying frequency of ocean waves, which generally varies from 0.1 Hz to 0.5 Hz. In this paper the application of a novel class of two-stage electrical energy generators to buoyant structures is presented. The generators use the buoy's interaction with the ocean waves as a low-speed input to a primary system, which, in turn, successively excites an array of vibratory elements (secondary system) into resonance, like a musician strumming a guitar. The key advantage of the present system is that by having two decoupled systems, the low-frequency and highly varying buoy motion is converted into constant and much higher frequency mechanical vibrations. Electrical energy may then be harvested from the vibrating elements of the secondary system with high efficiency using piezoelectric elements. The operating principles of the novel two-stage technique are presented, including analytical formulations describing the transfer of energy between the two systems. Also, prototypical design examples are offered, as well as an in-depth computer simulation of a prototypical heaving-based wave energy harvester which generates electrical energy from the up-and-down motion of a buoy riding on the ocean's surface.
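
The quoted figure of ~50 kW per meter of wave front is consistent with the standard deep-water energy flux formula for irregular waves (a textbook relation, not taken from the paper), with significant wave height H_s in meters and energy period T_e in seconds:

```latex
\[
  P \;=\; \frac{\rho g^{2}}{64\pi}\, H_s^{2}\, T_e
    \;\approx\; 0.49\, H_s^{2}\, T_e \ \ \mathrm{kW/m}
\]
```

For example, H_s = 3.5 m and T_e = 9 s give P ≈ 54 kW per meter of wave front.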

  10. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld eZeitler

    2016-05-01

    Full Text Available Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, in which two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis, or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

  11. Two-Stage Latissimus Dorsi Flap with Implant for Unilateral Breast Reconstruction: Getting the Size Right

    Directory of Open Access Journals (Sweden)

    Jiajun Feng

    2016-03-01

    Full Text Available Background: The aim of unilateral breast reconstruction after mastectomy is to craft a natural-looking breast with symmetry. The latissimus dorsi (LD) flap with implant is an established technique for this purpose. However, it is challenging to obtain adequate volume and satisfactory aesthetic results using a one-stage operation when considering factors such as muscle atrophy, wound dehiscence and excessive scarring. The two-stage reconstruction addresses these difficulties by using a tissue expander to gradually enlarge the skin pocket which eventually holds an appropriately sized implant. Methods: We analyzed nine patients who underwent unilateral two-stage LD reconstruction. In the first stage, an expander was placed along with the LD flap to reconstruct the mastectomy defect, followed by gradual tissue expansion to achieve overexpansion of the skin pocket. The final implant volume was determined by measuring the residual expander volume after aspirating the excess saline. Finally, the expander was replaced with the chosen implant. Results: The average volume of tissue expansion was 460 mL. The resultant expansion allowed an implant ranging in volume from 255 to 420 mL to be placed alongside the LD muscle. Seven patients scored less than six on the relative breast retraction assessment formula for breast symmetry, indicating excellent breast symmetry. The remaining two patients scored between six and eight, indicating good symmetry. Conclusions: This approach allows the size of the eventual implant to be estimated after the skin pocket has healed completely and the LD muscle has undergone natural atrophy. Optimal reconstruction results were achieved using this approach.

  12. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Directory of Open Access Journals (Sweden)

    Andreas Steimer

    Full Text Available Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains, at a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and improves as the voltage baseline is raised towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimates of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing
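
A minimal sketch of the sampling idea: an Euler-discretized EIF neuron driven by a noisy current, with each ISI recorded as one sample. Parameter values are generic textbook choices, not those of the paper, and the noise term is a crude discretization chosen so the neuron fires tonically.

```python
import numpy as np

# Exponential integrate-and-fire (EIF) neuron; ISIs are the "samples".
rng = np.random.default_rng(0)
tau, EL, VT, DT = 20.0, -65.0, -50.0, 2.0   # ms, mV, mV, mV
V_reset, V_peak = -70.0, -30.0              # reset and spike cutoff (mV)
dt, T = 0.05, 5_000.0                       # time step and duration (ms)

V, t_last, isis = EL, 0.0, []
for step in range(int(T / dt)):
    I = 16.0 + 5.0 * rng.standard_normal()  # noisy suprathreshold drive
    dV = (-(V - EL) + DT * np.exp((V - VT) / DT) + I) / tau
    V += dt * dV
    if V >= V_peak:                         # spike: record ISI, reset
        t = step * dt
        isis.append(t - t_last)
        t_last, V = t, V_reset

print(f"{len(isis)} ISIs, mean {np.mean(isis):.1f} ms")
```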

  13. Sulfur removal in advanced two stage pressurized fluidized bed combustion. Technical report, December 1, 1994--February 28, 1995

    Energy Technology Data Exchange (ETDEWEB)

    Abbasian, J.

    1996-03-01

    The objective of this study is to obtain data on the rates and extent of sulfation reactions involving partially sulfided calcium-based sorbents, oxygen and sulfur dioxide at operating conditions closely simulating those prevailing in the second stage (combustor) of Advanced Two-Stage Pressurized Fluidized-Bed Combustors (PFBC). In these systems the CO₂ partial pressure generally exceeds the equilibrium value for calcium carbonate decomposition. Therefore, calcium sulfate is produced both through the reaction between SO₂ and calcium carbonate and through the reaction between calcium sulfide and oxygen. To achieve this objective, the rates of the reactions involving SO₂ and oxygen (gaseous reactants) and calcium sulfide and calcium carbonate (solid reactants) will be determined by conducting tests in a high-pressure thermogravimetric analyzer (HPTGA) unit. The effects of sorbent type, sorbent particle size, reactor temperature and pressure, and O₂ as well as SO₂ partial pressures on the sulfation reaction rates will be determined. During this quarter, samples of the selected limestone and dolomite, sulfided in the fluidized-bed reactor during the last quarter, were analyzed. The extent of sulfidation in these samples was in the range of 20 to 50%, representing carbonizer discharge material at different operating conditions. The HPTGA unit has been modified and a new pressure control system was installed to eliminate pressure fluctuations during the sulfation tests.

  14. Sulfur removal in advanced two stage pressurized fluidized bed combustion. Technical report, September 1--November 30, 1994

    Energy Technology Data Exchange (ETDEWEB)

    Abbasian, J.; Hill, A.; Wangerow, J.R. [Inst. of Gas Technology, Chicago, IL (United States)

    1994-12-31

    The objective of this study is to obtain data on the rates and extent of sulfation reactions involving partially sulfided calcium-based sorbents, oxygen and sulfur dioxide at operating conditions closely simulating those prevailing in the second stage (combustor) of Advanced Two-Stage Pressurized Fluidized-Bed Combustors (PFBC). In these systems the CO₂ partial pressure generally exceeds the equilibrium value for calcium carbonate decomposition. Therefore, calcium sulfate is produced both through the reaction between SO₂ and calcium carbonate and through the reaction between calcium sulfide and oxygen. To achieve this objective, the rates of the reactions involving SO₂ and oxygen (gaseous reactants) and calcium sulfide and calcium carbonate (solid reactants) will be determined by conducting tests in a high-pressure thermogravimetric analyzer (HPTGA) unit. The effects of sorbent type, sorbent particle size, reactor temperature and pressure, and O₂ as well as SO₂ partial pressures on the sulfation reaction rates will be determined. During this quarter, samples of the selected limestone and dolomite were sulfided in the fluidized-bed reactor. These tests were conducted under both calcining and non-calcining operating conditions to produce partially sulfided sorbents containing calcium oxide and calcium carbonate, respectively. These samples, which represent the carbonizer discharge material, will be used as the feed material in the sulfation tests to be conducted in the HPTGA unit during the next quarter.

  15. Randomized controlled trial of endoscopic ultrasound-guided fine-needle sampling with or without suction for better cytological diagnosis

    DEFF Research Database (Denmark)

    Puri, Rajesh; Vilmann, Peter; Saftoiu, Adrian

    2009-01-01

    The trial compared two different techniques of EUS-guided sampling of solid masses, using either non-suction or suction with a 10-ml syringe. MATERIAL AND METHODS: Patients assessed during a 6-month period were randomized to three passes of EUS-guided sampling with suction (26 patients) or non-suction (26 patients). RESULTS: EUS-guided fine-needle sampling with suction increased the number of pathology slides (17.8±7.1 slides for suction compared with 10.2±5.5 for non-suction, p=0.0001) without increasing the overall bloodiness of each sample. Sensitivity and negative predictive value were higher when suction was applied than in the non-suction group (85.7% compared with 66.7%, p=0.05). CONCLUSIONS: This prospective randomized study showed that EUS-guided fine-needle sampling of solid masses using suction yields a higher number of slides without increasing bloodiness.

  16. Hybrid alkali-hydrodynamic disintegration of waste-activated sludge before two-stage anaerobic digestion process.

    Science.gov (United States)

    Grübel, Klaudiusz; Suschka, Jan

    2015-05-01

    The first step of anaerobic digestion, hydrolysis, is regarded as the rate-limiting step in the degradation of complex organic compounds such as waste-activated sludge (WAS). The aim of these lab-scale experiments was to pre-hydrolyze the sludge by low-intensity alkaline conditioning before applying hydrodynamic disintegration as the pre-treatment procedure. Applying both processes as a hybrid sludge disintegration technology released more organic matter (soluble chemical oxygen demand, SCOD) into the liquid sludge phase than the two processes conducted separately. The total SCOD after alkalization to pH 9 (pH in the range 8.96-9.10, SCOD = 600 mg O2/L) and after hydrodynamic disintegration (SCOD = 1450 mg O2/L) equaled 2050 mg/L; due to the synergistic effect, however, the obtained SCOD value amounted to 2800 mg/L, which constitutes an additional chemical oxygen demand (COD) dissolution of about 35%. A similar synergistic effect was obtained after alkalization to pH 10. The applied hybrid pre-hydrolysis technology resulted in a disintegration degree of 28-35%. The experiments aimed at selecting the most appropriate procedures for optimal sludge digestion, including high organic matter degradation (removal) and high biogas production. The analyzed soft hybrid technology positively influenced the effectiveness of mesophilic/thermophilic anaerobic digestion and ensured sludge minimization. The adopted pre-treatment technology (alkalization + hydrodynamic cavitation) resulted in 22-27% higher biogas production and a 13-28% higher biogas yield. After two stages of anaerobic digestion (mesophilic (MAD) + thermophilic anaerobic digestion (TAD)), the highest total solids (TS) reduction amounted to 45.6% and was obtained for the sample after 7 days MAD + 17 days TAD. About 7% higher TS reduction was noticed compared with the sample after 9
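
The reported synergy can be checked arithmetically from the pH 9 figures:

```latex
\[
  \underbrace{600}_{\text{alkaline}} \;+\; \underbrace{1450}_{\text{hydrodynamic}}
  \;=\; 2050\ \mathrm{mg\,O_2/L},
  \qquad
  \frac{2800 - 2050}{2050} \;\approx\; 0.37
\]
```

That is, the hybrid treatment released roughly 35-37% more SCOD than the sum of the two processes applied separately, consistent with the reported gain.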

  17. Selection of negative samples and two-stage combination of multiple features for action detection in thousands of videos

    NARCIS (Netherlands)

    Burghouts, G.J.; Schutte, K.; Bouma, H.; Hollander, R.J.M. den

    2013-01-01

    In this paper, a system is presented that can detect 48 human actions in realistic videos, ranging from simple actions such as ‘walk’ to complex actions such as ‘exchange’. We propose a method that yields a major improvement in performance. The reason for this major improvement is related to a diffe

  18. Genetic variants at 1p11.2 and breast cancer risk: a two-stage study in Chinese women.

    Directory of Open Access Journals (Sweden)

    Yue Jiang

    Full Text Available BACKGROUND: Genome-wide association studies (GWAS) have identified several breast cancer susceptibility loci, and one genetic variant, rs11249433, at 1p11.2 was reported to be associated with breast cancer in European populations. To explore the genetic variants in this region associated with breast cancer in Chinese women, we conducted a two-stage fine-mapping study with a total of 1792 breast cancer cases and 1867 controls. METHODOLOGY/PRINCIPAL FINDINGS: Seven single nucleotide polymorphisms (SNPs), including rs11249433, in a 277 kb region at 1p11.2 were selected, and genotyping was performed with the TaqMan® OpenArray™ Genotyping System for stage 1 samples (878 cases and 900 controls). In stage 2 (914 cases and 967 controls), three SNPs (rs2580520, rs4844616 and rs11249433) were further selected and genotyped for validation. The results showed that one SNP (rs2580520), located in a predicted enhancer region of SRGAP2, was consistently associated with a significantly increased risk of breast cancer under a recessive genetic model [odds ratio (OR) = 1.66, 95% confidence interval (CI) = 1.16-2.36 for stage 2 samples; OR = 1.51, 95% CI = 1.16-1.97 for combined samples]. However, no significant association was observed between rs11249433 and breast cancer risk in this Chinese population (dominant genetic model in combined samples: OR = 1.20, 95% CI = 0.92-1.57). CONCLUSIONS/SIGNIFICANCE: The rs2580520 genotypes at 1p11.2 suggest that Chinese women may have different breast cancer susceptibility loci, which may contribute to the development of breast cancer in this population.
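
For illustration, an odds ratio and its 95% confidence interval under a recessive model (variant homozygotes versus all other genotypes) are computed from a 2×2 table as below; the counts are made up, not the study's data.

```python
import math

# 2x2 table: rows = cases/controls, columns = CC / CT+TT genotypes.
a, b = 90, 824   # cases:    CC, CT+TT  (illustrative counts)
c, d = 60, 907   # controls: CC, CT+TT

or_ = (a * d) / (b * c)                         # odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)           # SE of log(OR)
lo, hi = (math.exp(math.log(or_) + z * se) for z in (-1.96, 1.96))
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```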

  19. Network sampling coverage II: The effect of non-random missing data on network measurement.

    Science.gov (United States)

    Smith, Jeffrey A; Moody, James; Morgan, Jonathan

    2017-01-01

    Missing data is an important, but often ignored, aspect of a network study. Measurement validity is affected by missing data, but the level of bias can be difficult to gauge. Here, we describe the effect of missing data on network measurement across widely different circumstances. In Part I of this study (Smith and Moody, 2013), we explored the effect of measurement bias due to randomly missing nodes. Here, we drop the assumption that data are missing at random: what happens to estimates of key network statistics when central nodes are more or less likely to be missing? We answer this question using a wide range of empirical networks and network measures. We find that bias is worse when more central nodes are missing. With respect to network measures, Bonacich centrality is highly sensitive to the loss of central nodes, while closeness centrality is not; distance and bicomponent size are more affected than triad summary measures, and behavioral homophily is more robust than degree homophily. With respect to types of networks, larger, directed networks tend to be more robust, but the relation is weak. We end the paper with a practical application, showing how researchers can use our results (translated into a publicly available Java application) to gauge the bias in their own data.
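
A minimal sketch of the Part II design: delete nodes at random versus preferentially by centrality, then compare a summary statistic against the full graph. This uses a synthetic network and is not the authors' Java application.

```python
import networkx as nx
import numpy as np

def stat_under_missingness(G, frac=0.2, central_first=True, seed=0):
    """Drop a fraction of nodes (most central first, or at random) and
    return the mean degree centrality of the remaining graph."""
    rng = np.random.default_rng(seed)
    n_drop = int(frac * G.number_of_nodes())
    if central_first:
        cent = nx.degree_centrality(G)
        drop = sorted(cent, key=cent.get, reverse=True)[:n_drop]
    else:
        drop = rng.choice(list(G.nodes), size=n_drop, replace=False)
    H = G.copy()
    H.remove_nodes_from(drop)
    return np.mean(list(nx.degree_centrality(H).values()))

G = nx.barabasi_albert_graph(500, 3, seed=1)
full = np.mean(list(nx.degree_centrality(G).values()))
print(full,
      stat_under_missingness(G, central_first=True),   # central nodes lost
      stat_under_missingness(G, central_first=False))  # random nodes lost
```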

  20. Experimental and numerical studies on two-stage combustion of biomass

    Energy Technology Data Exchange (ETDEWEB)

    Houshfar, Eshan

    2012-07-01

    In this thesis, two-stage combustion of biomass was investigated experimentally and numerically in a multifuel reactor. The main emissions issues addressed are (1) NOx and N2O, (2) unburnt species (CO and CxHy) and (3) corrosion-related emissions. The study focused on two-stage combustion in order to reduce pollutant emissions (primarily NOx). It is well known that pollutant emissions depend strongly on process conditions such as temperature, reactant concentrations and residence times; on the other hand, emissions also depend on fuel properties (moisture content, volatiles, alkali content, etc.). A detailed study of the important parameters was performed with suitable biomass fuels in order to optimize the various process conditions. Different experimental studies were carried out on biomass fuels to study the effect of fuel properties and combustion parameters on pollutant emissions, under process conditions typical of biomass combustion and using advanced experimental equipment. The experiments clearly showed the effects of staged air combustion, compared with non-staged combustion, on emission levels: a NOx reduction of up to 85% was reached with staged air combustion using demolition wood as fuel, and an optimum primary excess air ratio of 0.8-0.95 was found to minimize NOx emissions. Air staging had, however, a negative effect on N2O emissions. Although the trends showed a very small reduction in the NOx level with increasing temperature for non-staged combustion, the effect of temperature was not significant for NOx or CxHy in either staged or non-staged combustion, while it greatly influenced N2O and CO emissions, whose levels decreased with increasing temperature. Furthermore, flue gas recirculation (FGR) was used in combination with staged combustion to obtain an enhanced NOx reduction.