WorldWideScience

Sample records for two-stage random sampling

  1. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would serve as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates, generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
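
    As a rough, hypothetical illustration of the idea (not the proposed radio design), a two-stage SRC chain can be sketched as an integer decimation stage followed by a fractional-rate interpolation stage. The boxcar "filter" and linear interpolator below are simplifying assumptions:

```python
def decimate(x, factor):
    """Stage 1: reduce the rate by an integer factor.
    A boxcar average stands in for a proper anti-aliasing filter."""
    return [sum(x[i:i + factor]) / factor
            for i in range(0, len(x) - factor + 1, factor)]

def resample_linear(x, ratio):
    """Stage 2: fractional-rate conversion by linear interpolation.
    ratio = output_rate / input_rate; it need not be rational."""
    out = []
    t = 0.0
    while t < len(x) - 1:
        i = int(t)
        frac = t - i
        out.append(x[i] * (1 - frac) + x[i + 1] * frac)
        t += 1.0 / ratio
    return out

def two_stage_src(x, int_factor, final_ratio):
    """Cheap integer-rate first stage, flexible fractional second stage."""
    return resample_linear(decimate(x, int_factor), final_ratio)
```

    In a real receiver the first stage would use a proper anti-aliasing filter and the second a polyphase or Farrow interpolator, but the structure is the same: a cheap integer-rate stage followed by a flexible fractional-rate stage.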

  2. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem in which some parameters of the linear constraints are interval-valued discrete random variables with known probability distributions. Further, the concepts of best-optimum and worst-optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. The deterministic multiobjective model is then solved using the weighting method, applying the solution procedure of interval linear programming to obtain the upper and lower bounds of the objective function as the best and worst values, respectively. This highlights the possible risk involved in the decision-making tool. A numerical example demonstrates the proposed solution procedure.

  3. Don't spin the pen: two alternative methods for second-stage sampling in urban cluster surveys

    Directory of Open Access Journals (Sweden)

    Rose Angela MC

    2007-06-01

    In two-stage cluster surveys, the traditional method used in second-stage sampling (in which the first household in a cluster is selected) is time-consuming and may result in biased estimates of the indicator of interest. First, a random direction from the center of the cluster is selected, usually by spinning a pen. The houses along that direction are then counted out to the boundary of the cluster, and one is selected at random to be the first household surveyed. This process favors households toward the center of the cluster. During a recent meningitis vaccination coverage survey in Maradi, Niger, we compared this method of first-household selection with two alternatives in urban zones: (1) superimposing a grid on the map of the cluster area and randomly selecting an intersection; and (2) drawing the perimeter of the cluster area using a Global Positioning System (GPS) and randomly selecting one point within the perimeter. Although we compared only a limited number of clusters with each method, we found the sampling-grid method to be the fastest and easiest for field survey teams, although it does require a map of the area. Selecting a random GPS point was also a good method, provided adequate training is given. Spinning the pen and counting households to the boundary was the most complicated and time-consuming. The two methods tested here represent simpler, quicker, and potentially more robust alternatives to spinning the pen for cluster surveys in urban areas. In rural areas, however, these alternatives would favor initial household selection from lower-density (or even potentially empty) areas. Bearing in mind these limitations, as well as available resources and feasibility, investigators should choose the most appropriate method for their particular survey context.
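
    The two alternatives can be sketched in a few lines of code. This is a hypothetical illustration, not the survey teams' actual tooling; the grid spacing, polygon handling, and rejection-sampling loop are assumptions:

```python
import random

def point_in_polygon(pt, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def random_point_in_perimeter(poly, rng=random):
    """Method 2: rejection-sample a uniform point inside the GPS-drawn perimeter."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    while True:
        pt = (rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
        if point_in_polygon(pt, poly):
            return pt

def random_grid_intersection(x0, y0, width, height, spacing, rng=random):
    """Method 1: pick a random intersection of a grid laid over the cluster map."""
    nx = int(width // spacing) + 1
    ny = int(height // spacing) + 1
    return (x0 + rng.randrange(nx) * spacing, y0 + rng.randrange(ny) * spacing)
```

    Rejection sampling stays efficient only when the perimeter fills a reasonable fraction of its bounding box; for very irregular cluster shapes, the grid method may need a finer spacing.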

  4. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which treats the treatment effect as a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, adjusting the sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
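
    The flavor of a fully Bayesian interim reestimation can be conveyed with a small Monte Carlo sketch. This is not the authors' design: the Beta(1, 1) priors, the 0.95 "conclusiveness" target, the projection rule, and the function names are all illustrative assumptions:

```python
import random

def posterior_prob_superiority(s1, n1, s2, n2, a=1, b=1, draws=20000, rng=random):
    """Monte Carlo estimate of Pr(p1 > p2 | data) under independent
    Beta(a, b) priors on the two arms' response probabilities."""
    wins = 0
    for _ in range(draws):
        p1 = rng.betavariate(a + s1, b + n1 - s1)
        p2 = rng.betavariate(a + s2, b + n2 - s2)
        wins += p1 > p2
    return wins / draws

def reestimate_stage2_n(s1, n1, s2, n2, target=0.95, n_grid=range(0, 201, 10)):
    """Hypothetical interim rule: smallest additional per-arm enrollment whose
    *projected* posterior (assuming observed rates continue) reaches the target."""
    r1, r2 = s1 / n1, s2 / n2
    for extra in n_grid:
        prob = posterior_prob_superiority(
            s1 + round(r1 * extra), n1 + extra,
            s2 + round(r2 * extra), n2 + extra)
        if prob >= target:
            return extra
    return max(n_grid)
```

    With a strong interim signal (e.g. 30/50 versus 15/50 responders) the projected posterior already clears the target and no additional enrollment is requested; weak signals push the rule toward larger second stages.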

  5. Evaluating the Validity of a Two-stage Sample in a Birth Cohort Established from Administrative Databases.

    Science.gov (United States)

    El-Zein, Mariam; Conus, Florence; Benedetti, Andrea; Parent, Marie-Elise; Rousseau, Marie-Claude

    2016-01-01

    When using administrative databases for epidemiologic research, a subsample of subjects can be interviewed, eliciting information on undocumented confounders. This article presents a thorough investigation of the validity of a two-stage sample encompassing an assessment of nonparticipation and quantification of the extent of bias. Established through record linkage of administrative databases, the Québec Birth Cohort on Immunity and Health (n = 81,496) aims to study the association between Bacillus Calmette-Guérin vaccination and asthma. Among 76,623 subjects classified in four Bacillus Calmette-Guérin-asthma strata, a two-stage sampling strategy with a balanced design was used to randomly select individuals for interviews. We compared stratum-specific sociodemographic characteristics and healthcare utilization of stage 2 participants (n = 1,643) with those of eligible nonparticipants (n = 74,980) and nonrespondents (n = 3,157). We used logistic regression to determine whether participation varied across strata according to these characteristics. The effect of nonparticipation was described by the relative odds ratio (ROR = OR_participants / OR_source population) for the association between sociodemographic characteristics and asthma. Parental age at childbirth, area of residence, family income, and healthcare utilization were comparable between groups. Participants were slightly more likely to be women and have a mother born in Québec. Participation did not vary across strata by sex, parental birthplace, or material and social deprivation. Estimates were not biased by nonparticipation; most RORs were below one and bias never exceeded 20%. Our analyses evaluate and provide a detailed demonstration of the validity of a two-stage sample for researchers assembling similar research infrastructures.

  6. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and RNA-Seq (next-generation sequencing) experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis in which these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specified overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool for determining the sample size of high-dimensional studies when, in the planning phase, there is high uncertainty regarding the expected effect sizes and variability.
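
    The FDR control underlying such designs is typically the Benjamini-Hochberg step-up procedure, which can be sketched as follows (a generic implementation, not the authors' adaptive code):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean rejection list controlling the FDR at level alpha.
    Step-up rule: find the largest rank k with p_(k) <= k/m * alpha,
    then reject the k hypotheses with the smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject
```

    Because the threshold scales with the rank, hypotheses with p-values just above alpha/m can still be rejected when many small p-values accumulate below them.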

  7. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    Science.gov (United States)

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and sampling relevant objects at sites placed at predetermined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
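
    Systematic random sampling itself is simple to state in code: one random start per axis, then fixed steps. The sketch below (hypothetical, stdlib-only) mimics what a motorized stage, or a software alternative, has to accomplish:

```python
import random

def systematic_positions(extent, interval, rng=random):
    """1-D systematic random sample: random start in [0, interval),
    then equidistant sites across the extent."""
    start = rng.uniform(0, interval)
    positions = []
    x = start
    while x < extent:
        positions.append(x)
        x += interval
    return positions

def systematic_grid(width, height, dx, dy, rng=random):
    """2-D systematic random sample, e.g. field positions on a section:
    one random offset per axis, shared by every row and column."""
    xs = systematic_positions(width, dx, rng)
    ys = systematic_positions(height, dy, rng)
    return [(x, y) for y in ys for x in xs]
```

    Note that the random offsets are drawn once and shared across the whole grid; drawing a fresh offset per row would break the equidistant structure that makes the estimator efficient.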

  8. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
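
    The classical two-stage computation is short. Below is a generic sketch of inverse-variance fixed-effect pooling and DerSimonian-Laird random-effects pooling from per-study estimates and standard errors (standard formulas, not code from this paper):

```python
def fixed_effect_meta(estimates, ses):
    """Stage two of a two-stage meta-analysis: inverse-variance pooling
    of per-study estimates and standard errors."""
    ws = [1 / se**2 for se in ses]
    est = sum(w * y for w, y in zip(ws, estimates)) / sum(ws)
    se = (1 / sum(ws)) ** 0.5
    return est, se

def dersimonian_laird(estimates, ses):
    """Random-effects pooling with the DerSimonian-Laird tau^2 estimate."""
    ws = [1 / se**2 for se in ses]
    k = len(estimates)
    fixed, _ = fixed_effect_meta(estimates, ses)
    q = sum(w * (y - fixed) ** 2 for w, y in zip(ws, estimates))
    c = sum(ws) - sum(w**2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (k - 1)) / c)  # truncated at zero
    ws_star = [1 / (se**2 + tau2) for se in ses]
    est = sum(w * y for w, y in zip(ws_star, estimates)) / sum(ws_star)
    return est, (1 / sum(ws_star)) ** 0.5
```

    When between-study heterogeneity is absent (tau^2 truncates to zero) the two estimators coincide, which is one reason the one-stage versus two-stage distinction matters less than the modelling assumptions themselves.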

  9. Adjuvant therapy in stage I and stage II epithelial ovarian cancer. Results of two prospective randomized trials

    International Nuclear Information System (INIS)

    Young, R.C.; Walton, L.A.; Ellenberg, S.S.; Homesley, H.D.; Wilbanks, G.D.; Decker, D.G.; Miller, A.; Park, R.; Major, F. Jr.

    1990-01-01

    About a third of patients with ovarian cancer present with localized disease; despite surgical resection, up to half the tumors recur. Since it has not been established whether adjuvant treatment can benefit such patients, we conducted two prospective, randomized national cooperative trials of adjuvant therapy in patients with localized ovarian carcinoma. All patients underwent surgical resection plus comprehensive staging and, 18 months later, surgical re-exploration. In the first trial, 81 patients with well-differentiated or moderately well differentiated cancers confined to the ovaries (stages Ia(i) and Ib(i)) were assigned to receive either no chemotherapy or melphalan (0.2 mg per kilogram of body weight per day for five days, repeated every four to six weeks for up to 12 cycles). After a median follow-up of more than six years, there were no significant differences between the patients given no chemotherapy and those treated with melphalan with respect to either five-year disease-free survival or overall survival. In the second trial, 141 patients with poorly differentiated Stage I tumors or with cancer outside the ovaries but limited to the pelvis (Stage II) were randomly assigned to treatment with either melphalan (in the same regimen as above) or a single intraperitoneal dose of ³²P (15 mCi) at the time of surgery. In this trial (median follow-up, greater than 6 years) the outcomes for the two treatment groups were similar with respect to five-year disease-free survival (80 percent in both groups) and overall survival (81 percent with melphalan vs. 78 percent with ³²P; P = 0.48). We conclude that in patients with localized ovarian cancer, comprehensive staging at the time of surgical resection can serve to identify those patients (as defined by the first trial) who can be followed without adjuvant chemotherapy.

  10. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; we then use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can obtain the global solution while avoiding the introduction of new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
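
    The SAA idea, evaluating the expectation on a fixed set of scenario draws and optimizing the first-stage decision against that average, can be illustrated with a deliberately simplified single-product model. The newsvendor-style cost structure and all parameter values below are assumptions, not the paper's supply-chain model:

```python
import random

def saa_expected_cost(order_q, demand_samples, unit_cost=1.0, salvage=0.2, price=2.0):
    """Sample average approximation of the expected cost of one
    first-stage decision (order quantity) over fixed demand scenarios."""
    total = 0.0
    for d in demand_samples:
        sold = min(order_q, d)          # second-stage recourse: sell what you can
        leftover = order_q - sold       # salvage the rest
        total += unit_cost * order_q - price * sold - salvage * leftover
    return total / len(demand_samples)

def saa_solve(demand_sampler, n_samples, q_grid, rng=random):
    """Draw the scenarios once, then optimize the first-stage decision
    against the resulting sample average."""
    samples = [demand_sampler(rng) for _ in range(n_samples)]
    return min(q_grid, key=lambda q: saa_expected_cost(q, samples))
```

    As the number of scenarios grows, the SAA optimum concentrates around the true critical-fractile solution (here roughly 106 units for demand uniform on [50, 150]), which is the convergence behavior the paper establishes for its richer model.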

  11. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. Impact analysis of the key factors shows that an increase in either the external or the internal coefficient has a negative influence on the sampling level; the rate of change of the potential market has no significant influence, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, a global sensitivity analysis captures the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  12. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    Under § 761.79(b)(3), 40 CFR 761.308 specifies sample selection by random number generation on any two-dimensional square grid: for each area created in accordance with paragraph (a) of that section, select two random numbers, one for each axis of the grid.

  13. Relative Efficiencies of a Three-Stage Versus a Two-Stage Sample Design For a New NLS Cohort Study. 22U-884-38.

    Science.gov (United States)

    Folsom, R. E.; Weber, J. H.

    Two sampling designs were compared for the planned 1978 national longitudinal survey of high school seniors with respect to statistical efficiency and cost. The 1972 survey used a stratified two-stage sample of high schools and seniors within schools. In order to minimize interviewer travel costs, an alternate sampling design was proposed,…

  14. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We extend this work to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  15. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters, topographic wetness index and potential incoming solar radiation, derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times, and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points.
The selection of sample point locations has been done using

  16. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests.

    Science.gov (United States)

    Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark

    2015-01-01

    Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been used successfully in genome-wide association studies (GWAS) to identify genetic variants that have relatively large effects in some common, complex diseases. Among them, the most successful is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS with selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method for random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies p-value assessment to find a cut-off point that separates informative and irrelevant SNPs into two groups. The informative SNP group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only SNPs from these two sub-groups are taken into account. The feature subspaces thus always contain highly informative SNPs when used to split a node of a tree. This approach enables one to generate more accurate trees with a lower prediction error while possibly avoiding overfitting. It allows one to detect interactions of multiple SNPs with the diseases and to reduce the dimensionality and the amount of genome-wide association data needed for learning the RF model.
Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprised of 408,803 SNPs and Alzheimer case-control data comprised of 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction errors and outperformed
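
    A toy version of the two-stage subspace sampling can be sketched as follows; the cut-off, the split fraction, and the share drawn from the highly informative group are illustrative parameters, not the values used in ts-RF:

```python
import random

def two_stage_subspace(pvals, mtry, cutoff=0.05, strong_frac=0.3,
                       strong_share=0.5, rng=random):
    """ts-RF-style feature subspace draw (a sketch with assumed thresholds).
    SNPs with p-value above `cutoff` are discarded; the informative rest are
    split into 'highly informative' (smallest p-values) and 'weakly
    informative' sub-groups, and each subspace mixes draws from both."""
    informative = sorted((i for i, p in enumerate(pvals) if p <= cutoff),
                         key=lambda i: pvals[i])
    n_strong = max(1, int(len(informative) * strong_frac))
    strong, weak = informative[:n_strong], informative[n_strong:]
    n_from_strong = min(len(strong), max(1, int(mtry * strong_share)))
    subspace = rng.sample(strong, n_from_strong)
    if weak:
        subspace += rng.sample(weak, min(len(weak), mtry - n_from_strong))
    return subspace
```

    Every subspace drawn this way contains at least one highly informative SNP, which is the property the paper credits for more accurate node splits.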

  17. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Supply chain management, which is concerned with material and information flows between facilities and the final customers, is widely considered a key operations strategy for improving organizational competitiveness. With advances in computer technology, it has become easy to derive an acceptance sampling plan satisfying both the producer's and the consumer's quality and risk requirements. However, the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain, minimizing the producer's and the consumer's total quality cost while satisfying both parties' quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal-failure, and postsale-failure costs also affect the optimal sampling plan.
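
    The noneconomic baseline the paper improves upon, choosing the smallest single sampling plan (n, c) that satisfies the producer's risk at the acceptable quality level (AQL) and the consumer's risk at the lot tolerance percent defective (LTPD), can be sketched directly from the binomial operating-characteristic curve. This is a generic illustration, not the paper's economic model:

```python
from math import comb

def accept_prob(n, c, p):
    """Operating characteristic: Pr(accept lot | defect rate p)
    for a single sampling plan (sample n items, accept if <= c defects)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(c + 1))

def find_plan(aql, ltpd, alpha=0.05, beta=0.10, n_max=500):
    """Smallest (n, c) with producer's risk <= alpha at the AQL
    and consumer's risk <= beta at the LTPD."""
    for n in range(1, n_max + 1):
        for c in range(0, n + 1):
            if (accept_prob(n, c, aql) >= 1 - alpha
                    and accept_prob(n, c, ltpd) <= beta):
                return n, c
    return None
```

    An economic design would instead search over (n, c) minimizing total quality cost, inspection plus internal and postsale failure costs, subject to these same risk constraints.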

  18. Systematic versus random sampling in stereological studies.

    Science.gov (United States)

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.

  19. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    Science.gov (United States)

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.

  20. Two-stage revision of septic knee prosthesis with articulating knee spacers yields better infection eradication rate than one-stage or two-stage revision with static spacers.

    Science.gov (United States)

    Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L

    2012-12-01

    The best method for treating chronic periprosthetic knee infection remains controversial, and randomized, comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision and static versus articulating spacers in two-stage procedures. We reviewed full-text papers and those with an abstract in English published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with two different types of spacers. In all, 6 original articles reporting the results after one-stage knee exchange arthroplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate in the eradication of infection was 89.8% after a two-stage revision and 81.9% after a one-stage procedure at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2% versus 87%). The methodological limitations of this study and the heterogeneous material in the studies reviewed notwithstanding, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of eradication of infection than one-stage revision for septic knee prosthesis, and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. Level of evidence: IV.

  1. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…

  2. BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal

    International Nuclear Information System (INIS)

    Sagar, B.

    1989-01-01

    1 - Description of program or function: Random samples for different distribution types are generated. The distribution types required for performance-assessment modeling of geologic nuclear waste disposal are provided: uniform, log-uniform (base 10 or natural), normal, lognormal (base 10 or natural), exponential, Bernoulli, and user-defined continuous distributions. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions whose usual definition has infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included.
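
    A minimal sketch of the program's approach: a linear congruential generator for uniforms, with transforms layered on top. The generator constants (a common textbook choice) and the method names here are illustrative, not BWIP-RANDOM-SAMPLING's actual ones:

```python
import math

class LCG:
    """Minimal linear congruential generator with transform methods."""

    def __init__(self, seed=12345):
        self.state = seed
        self.m = 2**32          # modulus
        self.a = 1664525        # multiplier (textbook constants, assumed)
        self.c = 1013904223     # increment

    def uniform(self):
        """Uniform draw on [0, 1)."""
        self.state = (self.a * self.state + self.c) % self.m
        return self.state / self.m

    def exponential(self, rate):
        """Inverse-CDF transform of a uniform draw."""
        return -math.log(1.0 - self.uniform()) / rate

    def bernoulli(self, p):
        return 1 if self.uniform() < p else 0

    def loguniform10(self, lo, hi):
        """Log-uniform (base 10) between lo and hi."""
        return 10 ** (math.log10(lo) + self.uniform() * (math.log10(hi) - math.log10(lo)))

    def stratified_uniform(self, n):
        """Stratified sampling: one draw per equal-probability stratum."""
        return [(i + self.uniform()) / n for i in range(n)]
```

    The stratified variant guarantees exactly one draw in each of the n equal-probability bins, which reduces variance relative to n independent uniform draws.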

  3. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge

    OpenAIRE

    Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick

    2015-01-01

    Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed f...

  4. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  5. Single-stage laparoscopic common bile duct exploration and cholecystectomy versus two-stage endoscopic stone extraction followed by laparoscopic cholecystectomy for patients with concomitant gallbladder stones and common bile duct stones: a randomized controlled trial.

    Science.gov (United States)

    Bansal, Virinder Kumar; Misra, Mahesh C; Rajan, Karthik; Kilambi, Ragini; Kumar, Subodh; Krishna, Asuri; Kumar, Atin; Pandav, Chandrakant S; Subramaniam, Rajeshwari; Arora, M K; Garg, Pramod Kumar

    2014-03-01

    The ideal method for managing concomitant gallbladder stones and common bile duct (CBD) stones is debatable. The currently preferred method is two-stage endoscopic stone extraction followed by laparoscopic cholecystectomy (LC). This prospective randomized trial compared the success and cost effectiveness of single- and two-stage management of patients with concomitant gallbladder and CBD stones. Consecutive patients with concomitant gallbladder and CBD stones were randomized to either single-stage laparoscopic CBD exploration and cholecystectomy (group 1) or endoscopic retrograde cholangiopancreatography (ERCP) for endoscopic extraction of CBD stones followed by LC (group 2). Success was defined as complete clearance of CBD and cholecystectomy by the intended method. Cost effectiveness was measured using the incremental cost-effectiveness ratio. Intention-to-treat analysis was performed to compare outcomes. From February 2009 to October 2012, 168 patients were randomized: 84 to the single-stage procedure (group 1) and 84 to the two-stage procedure (group 2). Both groups were matched with regard to demographic and clinical parameters. The success rates of laparoscopic CBD exploration and ERCP for clearance of CBD were similar (91.7 vs. 88.1 %). The overall success rate also was comparable: 88.1 % in group 1 and 79.8 % in group 2 (p = 0.20). Direct choledochotomy was performed in 83 of the 84 patients. The mean operative time was significantly longer in group 1 (135.7 ± 36.6 vs. 72.4 ± 27.6 min; p ≤ 0.001), but the overall hospital stay was significantly shorter (4.6 ± 2.4 vs. 5.3 ± 6.2 days; p = 0.03). Group 2 required a significantly greater number of procedures per patient. Single-stage and two-stage management of patients with concomitant gallbladder and CBD stones had similar success and complication rates, but the single-stage strategy was better in terms of shorter hospital stay, need for fewer procedures, and cost effectiveness.

  6. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially available electrolyser with a large 50 A current, and then in a newly designed electrolyser, which avoids the memory effect, with a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained when using the commercially available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L and were half or less of those in samples collected at the same sites in 1992. (author)

  7. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    Full Text Available In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  8. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Full Text Available Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of areas undergoing land cover changes, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and verification estimations rely on a high quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach we applied it for assessment of deforestation in an area characterized by frequent cloud cover and very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.
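The bootstrap step described above (complementing the accuracy measures and area estimates with confidence intervals) can be sketched as follows. This is a simplified, hypothetical illustration with made-up reference labels, not the authors' procedure, and it collapses the two-stage stratified design into a single stage.

```python
import random

rng = random.Random(7)

# Hypothetical reference data: 200 visually interpreted sampling units,
# 1 = land cover change confirmed, 0 = no change (made-up labels)
units = [1] * 14 + [0] * 186
rng.shuffle(units)

def bootstrap_ci(data, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for the mean."""
    n = len(data)
    means = sorted(
        sum(rng.choice(data) for _ in range(n)) / n for _ in range(n_boot)
    )
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

estimate = sum(units) / len(units)
lo, hi = bootstrap_ci(units)
print(estimate, lo, hi)   # point estimate of the change proportion and its CI
```

Resampling the interpreted units with replacement and reading off percentiles gives an interval without any distributional assumption, which is the appeal of the bootstrap for rare-change classes.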

  9. Projection correlation between two random vectors.

    Science.gov (United States)

    Zhu, Liping; Xu, Kai; Li, Runze; Zhong, Wei

    2017-12-01

    We propose the use of projection correlation to characterize dependence between two random vectors. Projection correlation has several appealing properties. It equals zero if and only if the two random vectors are independent, it is not sensitive to the dimensions of the two random vectors, it is invariant with respect to the group of orthogonal transformations, and its estimation is free of tuning parameters and does not require moment conditions on the random vectors. We show that the sample estimate of the projection correlation is n-consistent if the two random vectors are independent and root-n-consistent otherwise. Monte Carlo simulation studies indicate that the projection correlation has higher power than the distance correlation and the ranks of distances in tests of independence, especially when the dimensions are relatively large or the moment conditions required by the distance correlation are violated.
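As a point of comparison, the distance correlation that the abstract benchmarks against can be computed directly from double-centered pairwise distance matrices. The sketch below is the biased V-statistic estimator of distance correlation, not the projection correlation itself.

```python
import numpy as np

def _centered(z):
    """Double-centered pairwise Euclidean distance matrix of an (n, d) array."""
    d = np.sqrt(((z[:, None, :] - z[None, :, :]) ** 2).sum(-1))
    return d - d.mean(axis=0) - d.mean(axis=1)[:, None] + d.mean()

def distance_correlation(x, y):
    """Sample distance correlation (biased V-statistic form) between two
    samples of random vectors, given as (n, p) and (n, q) arrays."""
    A, B = _centered(x), _centered(y)
    dcov2 = (A * B).mean()
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return float(np.sqrt(dcov2 / denom)) if denom > 0 else 0.0

x = np.linspace(0.0, 1.0, 60)[:, None]
y = np.sin(2 * np.pi * x)        # deterministic, nonlinear dependence on x
print(distance_correlation(x, y))
```

In population, distance correlation vanishes only under independence given finite first moments, which is exactly the moment condition the projection correlation dispenses with.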

  10. Two-stage revision surgery with preformed spacers and cementless implants for septic hip arthritis: a prospective, non-randomized cohort study

    Directory of Open Access Journals (Sweden)

    Logoluso Nicola

    2011-05-01

    Full Text Available Abstract Background Outcome data on two-stage revision surgery for deep infection after septic hip arthritis are limited and inconsistent. This study presents the medium-term results of a new, standardized two-stage arthroplasty with preformed hip spacers and cementless implants in a consecutive series of adult patients with septic arthritis of the hip treated according to the same protocol. Methods Nineteen patients (20 hips) were enrolled in this prospective, non-randomized cohort study between 2000 and 2008. The first stage comprised femoral head resection, debridement, and insertion of a preformed, commercially available, antibiotic-loaded cement hip spacer. After eradication of infection, a cementless total hip arthroplasty was implanted in the second stage. Patients were assessed for infection recurrence, pain (visual analog scale [VAS]), and hip joint function (Harris Hip score). Results The mean time between first diagnosis of infection and revision surgery was 5.8 ± 9.0 months; the average duration of follow up was 56.6 (range, 24-104) months; all 20 hips were successfully converted to prosthesis an average 22 ± 5.1 weeks after spacer implantation. Reinfection after total hip joint replacement occurred in 1 patient. The mean VAS pain score improved from 48 (range, 35-84) pre-operatively to 18 (range, 0-38) prior to spacer removal and to 8 (range, 0-15) at the last follow-up assessment after prosthesis implantation. The average Harris Hip score improved from 27.5 before surgery to 61.8 between the two stages to 92.3 at the final follow-up assessment. Conclusions Satisfactory outcomes can be obtained with two-stage revision hip arthroplasty using preformed spacers and cementless implants for prosthetic hip joint infections of various etiologies.

  11. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random-systematic sample, an unbiased estimator o...

  12. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

    Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of the logistics and costs of large river sampling and the spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single- and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution, and sample size. However, designs did differ in the rate at which occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on sampling designs for freshwater mussels in the UMR, and presumably other large rivers.

  13. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  14. Sampling large random knots in a confined space

    Science.gov (United States)

    Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.

    2007-09-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  15. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
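The diagrammatic complexity discussed in these abstracts can be explored with a small experiment: generate a 2D uniform random polygon (vertices uniform in the unit square) and count proper crossings between non-adjacent edges. This is an illustrative reimplementation of the crossing count only, assuming vertices in general position, not the authors' code.

```python
import random

def ccw(a, b, c):
    """True if the triangle a -> b -> c turns counter-clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0

def crossings(poly):
    """Count proper crossings between non-adjacent edges of the closed
    polygon through poly's vertices (general position assumed)."""
    n = len(poly)
    edges = [(poly[i], poly[(i + 1) % n]) for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # first and last edges share the closing vertex
            (p1, p2), (p3, p4) = edges[i], edges[j]
            if ccw(p1, p2, p3) != ccw(p1, p2, p4) and ccw(p3, p4, p1) != ccw(p3, p4, p2):
                count += 1
    return count

rng = random.Random(0)
poly = [(rng.random(), rng.random()) for _ in range(50)]
print(crossings(poly))   # grows on the order of n^2 for uniform random polygons
```

Two segments properly cross exactly when each segment's endpoints lie on opposite sides of the other segment, which the pair of orientation tests checks.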

  16. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    k-Means: Random Sampling Procedure. Optimal 1-Mean is approximated by the centroid of a random sample (Inaba et al): if S is a random sample of size O(1/ε), then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
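The claim above can be checked numerically: the centroid of a small random sample S nearly minimizes the 1-means cost over the full point set P. A minimal sketch, with the sample size chosen ad hoc to stand in for O(1/ε):

```python
import random

rng = random.Random(1)
P = [(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(100000)]

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def cost(points, c):
    """1-means cost: total squared distance from the points to centre c."""
    return sum((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for p in points)

S = rng.sample(P, 100)            # sample size stands in for O(1/eps)
ratio = cost(P, centroid(S)) / cost(P, centroid(P))
print(ratio)                      # close to 1: sample centroid is near-optimal
```

Because the full centroid is the exact minimizer of the 1-means cost, the ratio is always at least 1; for a sample of 100 points it is typically within a percent or two of 1.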

  17. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge.

    Science.gov (United States)

    Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick

    2015-01-01

    Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.

  18. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose ... algorithm generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
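A constrained sampling pattern of the kind described (sampling points on a time grid, with an ADC-imposed minimum spacing between consecutive points) can be generated with a standard gap-expansion trick. This is a generic sketch under assumed constraints, not the generator proposed in the paper.

```python
import random

def constrained_pattern(n_samples, grid_length, min_gap, rng=random):
    """Draw n_samples grid indices in [0, grid_length) such that
    consecutive sampling points are at least min_gap slots apart.
    Trick: sample distinct slots in a shrunken grid, then re-expand the gaps."""
    slack = grid_length - (n_samples - 1) * min_gap
    if slack < n_samples:
        raise ValueError("constraints cannot be satisfied")
    base = sorted(rng.sample(range(slack), n_samples))
    return [b + i * min_gap for i, b in enumerate(base)]

rng = random.Random(3)
pattern = constrained_pattern(10, 100, 5, rng)
print(pattern)
```

Shifting the i-th sorted slot by i times the minimum gap guarantees the spacing constraint while keeping the last point inside the grid.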

  19. Health plan auditing: 100-percent-of-claims vs. random-sample audits.

    Science.gov (United States)

    Sillup, George P; Klimberg, Ronald K

    2011-01-01

    The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
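The shortfall of random-sample audits reported above is easy to reproduce in simulation when errors are rare and heavily skewed; the claim population below is entirely hypothetical.

```python
import random

rng = random.Random(11)

# Entirely hypothetical claim population: 20,000 claims, 1% with errors,
# and error amounts that are heavily skewed
claims = [0.0] * 19800 + [rng.uniform(50.0, 5000.0) for _ in range(200)]
rng.shuffle(claims)

total_error = sum(claims)                  # what a 100%-of-claims audit finds

sample = rng.sample(claims, 300)           # a 300-claim random-sample audit
estimate = sum(sample) / len(sample) * len(claims)

print(round(total_error), round(estimate))
```

With only a few hundred sampled claims, the extrapolated estimate can miss the true error total by a large margin whenever the handful of large errors is over- or under-represented in the sample, which is the gap the study quantifies.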

  20. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdös-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree, and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. Firstly, networks with small scale and simple structure are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within limited steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, salient characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
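The no-retracing (NR) strategy, as we read the abstract, forbids the walker from immediately stepping back along the edge it just traversed. A minimal sketch on a toy graph (our own adjacency structure, not the networks studied):

```python
import random

def nr_random_walk(adj, start, steps, rng=random):
    """No-retracing (NR) random walk: never step straight back along the
    edge just traversed, unless the current node is a dead end."""
    walk, prev, node = [start], None, start
    for _ in range(steps):
        choices = [n for n in adj[node] if n != prev]
        if not choices:           # dead end: retracing is the only way out
            choices = adj[node]
        prev, node = node, rng.choice(choices)
        walk.append(node)
    return walk

# Toy undirected graph: a 5-cycle with one chord (0-2)
adj = {0: [1, 2, 4], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [0, 3]}
rng = random.Random(5)
walk = nr_random_walk(adj, 0, 20, rng)
print(walk)
```

Excluding only the previous node keeps the walk cheap to compute while reducing the path overlap that the abstract identifies as the source of sampling inefficiency.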

  1. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly developed system-test statistic validates the system under different computer operating environments.
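The two-stage structure can be sketched with numpy: each stage applies a nonrecursive (FIR) filter by direct convolution and then discards samples. The taps below are a crude moving average chosen only for illustration, not the filter designed in the report.

```python
import numpy as np

def fir_decimate(x, taps, m):
    """One stage: nonrecursive (FIR) low-pass filtering by direct
    convolution, followed by keeping every m-th sample."""
    return np.convolve(x, taps, mode="same")[::m]

# Illustrative taps only: crude moving averages, not designed filters
stage1 = np.ones(8) / 8.0
stage2 = np.ones(4) / 4.0

t = np.arange(4096)
x = np.sin(2 * np.pi * t / 512.0)   # slow waveform, well below both cutoffs

y = fir_decimate(fir_decimate(x, stage1, 4), stage2, 2)  # overall rate 1/8
print(y.size)   # 4096 / 8 = 512 samples retained
```

Splitting a large rate reduction into two smaller stages is the standard motivation for this architecture: each stage's anti-aliasing filter can be much shorter than a single filter doing the whole job.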

  2. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
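A heavily simplified, one-group version of this setup is the conjugate beta-binomial calculation: if all n sampled items are acceptable, the posterior for the acceptability rate is Beta(a + n, b), and the quantity of interest is the posterior probability that the rate exceeds some threshold. The sketch below uses a uniform prior and ignores the paper's two-group structure and expert-judgment component.

```python
import math

def prob_rate_at_least(n_clean, p_star, a=1.0, b=1.0):
    """All n_clean sampled items acceptable -> posterior Beta(a + n_clean, b)
    for the acceptability rate; return P(rate >= p_star) by midpoint
    integration of the Beta density (one-group simplification)."""
    a_post, b_post = a + n_clean, b
    norm = math.gamma(a_post + b_post) / (math.gamma(a_post) * math.gamma(b_post))
    steps = 10000
    width = (1.0 - p_star) / steps
    total = 0.0
    for i in range(steps):
        theta = p_star + (i + 0.5) * width
        total += norm * theta ** (a_post - 1) * (1.0 - theta) ** (b_post - 1) * width
    return min(total, 1.0)

# Classic design: 59 clean samples, uniform prior, 95% threshold
p = prob_rate_at_least(59, 0.95)
print(p)   # about 0.95, i.e. the familiar "95/95" criterion
```

With a uniform prior (b = 1) the integral has the closed form 1 - p_star**(a + n_clean), which is a handy check on the numerical integration.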

  3. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  4. Sampling problems for randomly broken sticks

    Energy Technology Data Exchange (ETDEWEB)

    Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)

    2003-04-11

    Consider the random partitioning model of a population (represented by a stick of length 1) into n species (fragments) with identically distributed random weights (sizes). Upon ranking the fragments' weights according to ascending sizes, let S_{m:n} be the size of the mth smallest fragment. Assume that some observer is sampling such populations as follows: drop at random k points (the sample size) onto this stick and record the corresponding numbers of visited fragments. We shall investigate the following sampling problems: (1) what is the sample size if the sampling is carried out until the first visit of the smallest fragment (size S_{1:n})? (2) For a given sample size, have all the fragments of the stick been visited at least once or not? This question is related to Feller's random coupon collector problem. (3) In what order are new fragments being discovered, and what is the random number of samples separating the discovery of consecutive new fragments until exhaustion of the list? For this problem, the distribution of the size-biased permutation of the species' weights (that is, the sequence of their weights in order of appearance) is needed and studied.
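Sampling problems (1) and (2) can be simulated directly: break the stick at n - 1 uniform points, drop k uniform sample points, and record which fragments are visited. An illustrative sketch, not the paper's analysis:

```python
import random

rng = random.Random(9)

def break_stick(n, rng):
    """Partition [0, 1] into n fragments using n - 1 uniform cut points."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    bounds = [0.0] + cuts + [1.0]
    return list(zip(bounds[:-1], bounds[1:]))

def visited_fragments(fragments, k, rng):
    """Drop k uniform sample points; return the indices of fragments hit."""
    hit = set()
    for _ in range(k):
        u = rng.random()
        for i, (lo, hi) in enumerate(fragments):
            if lo <= u < hi:
                hit.add(i)
                break
    return hit

frags = break_stick(10, rng)
hit = visited_fragments(frags, 50, rng)
print(len(hit), "of", len(frags), "fragments visited by 50 sample points")
```

Repeating the experiment and recording the first sample index at which the smallest fragment is hit, or at which all fragments have been hit, gives Monte Carlo answers to problems (1) and (2).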

  5. Assessment of heavy metals in Averrhoa bilimbi and A. carambola fruit samples at two developmental stages.

    Science.gov (United States)

    Soumya, S L; Nair, Bindu R

    2016-05-01

    Though the fruits of Averrhoa bilimbi and A. carambola are economically and medicinally important, they remain underutilized. The present study reports heavy metal quantitation in fruit samples of A. bilimbi and A. carambola (Oxalidaceae) collected at two stages of maturity. Heavy metals are known to interfere with the functioning of vital cellular components. Although toxic, some elements are considered essential for human health in trace quantities. Heavy metals such as Cr, Mn, Co, Cu, Zn, As, Se, Pb, and Cd were analyzed by atomic absorption spectroscopy (AAS). The samples under investigation included A. bilimbi unripe (BU) and ripe (BR), A. carambola sour unripe (CSU) and ripe (CSR), and A. carambola sweet unripe (CTU) and ripe (CTR). Heavy metal analysis showed that a relatively higher level of heavy metals was present in BR samples compared with the rest of the samples. The highest amounts of As and Se were recorded in BU samples, while the Mn content was highest in CSU samples and Co in CSR. The lowest amounts of Cr, Zn, Se, Cd, and Pb were noted in CTU, while Mn, Cu, and As were lowest in CTR. Thus, the sweet types of A. carambola (CTU, CTR) had comparatively lower heavy metal content. There appears to be no reason for concern, since the different fruit samples of Averrhoa studied presently showed the presence of the various heavy metals only in trace quantities.

  6. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge.

    Directory of Open Access Journals (Sweden)

    Rosa Catarino

    Full Text Available Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.

  7. Preoperative staging of lung cancer with PET/CT: cost-effectiveness evaluation alongside a randomized controlled trial

    DEFF Research Database (Denmark)

    Søgaard, Rikke; Fischer, Barbara Malene B; Mortensen, Jann

    2011-01-01

    PURPOSE: Positron emission tomography (PET)/CT has become a widely used technology for preoperative staging of non-small cell lung cancer (NSCLC). Two recent randomized controlled trials (RCT) have established its efficacy over conventional staging, but no studies have assessed its cost-effectiveness. The objective of this study was to assess the cost-effectiveness of PET/CT as an adjunct to conventional workup for preoperative staging of NSCLC. METHODS: The study was conducted alongside an RCT in which 189 patients were allocated to conventional staging (n = 91) or conventional staging + PET/CT (n = 98) and followed for 1 year, after which the numbers of futile thoracotomies in each group were monitored. A full health care sector perspective was adopted for costing resource use. The outcome parameter was defined as the number needed to treat (NNT)-here, the number of PET/CT scans needed-to avoid one futile

  8. Chronic infections in hip arthroplasties: comparing risk of reinfection following one-stage and two-stage revision: a systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Lange J

    2012-03-01

    Full Text Available Jeppe Lange,1,2 Anders Troelsen,3 Reimar W Thomsen,4 Kjeld Søballe1,5 (1Lundbeck Foundation Centre for Fast-Track Hip and Knee Surgery, Aarhus C; 2Center for Planned Surgery, Silkeborg Regional Hospital, Silkeborg; 3Department of Orthopaedics, Hvidovre Hospital, Hvidovre; 4Department of Clinical Epidemiology, Aarhus University Hospital, Aalborg; 5Department of Orthopaedics, Aarhus University Hospital, Aarhus C, Denmark) Background: Two-stage revision is regarded by many as the best treatment of chronic infection in hip arthroplasties. Some international reports, however, have advocated one-stage revision. No systematic review or meta-analysis has ever compared the risk of reinfection following one-stage and two-stage revisions for chronic infection in hip arthroplasties. Methods: The review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis. Relevant studies were identified using PubMed and Embase. We assessed studies that included patients with a chronic infection of a hip arthroplasty treated with either one-stage or two-stage revision and with available data on occurrence of reinfections. We performed a meta-analysis estimating absolute risk of reinfection using a random-effects model. Results: We identified 36 studies eligible for inclusion. None were randomized controlled trials or comparative studies. The patients in these studies had received either one-stage revision (n = 375) or two-stage revision (n = 929). Reinfection occurred with an estimated absolute risk of 13.1% (95% confidence interval: 10.0%–17.1%) in the one-stage cohort and 10.4% (95% confidence interval: 8.5%–12.7%) in the two-stage cohort. The methodological quality of most included studies was considered low, with insufficient data to evaluate confounding factors. Conclusions: Our results may indicate three additional reinfections per 100 reimplanted patients when performing a one-stage versus two-stage revision. However, the

  9. SIMS analysis using a new novel sample stage

    International Nuclear Information System (INIS)

    Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo

    2006-01-01

    We have developed a novel sample stage for Cameca IMS-series instruments that allows us to adjust the tilt of the sample holder and to vary the height of the sample surface from outside the vacuum chamber. A third function of the stage is the capability to cool the sample to −150 °C using liquid nitrogen. Using this stage, we can measure line profiles of 10 mm in length without any variation in the secondary ion yields. By moving the sample surface toward the input lens, the primary ion beam remains well focused when the energy of the primary ions is reduced. Sample cooling is useful for samples such as organic materials that are easily damaged by primary ions or electrons

  10. SIMS analysis using a new novel sample stage

    Energy Technology Data Exchange (ETDEWEB)

    Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)

    2006-07-30

    We have developed a novel sample stage for Cameca IMS-series instruments that allows us to adjust the tilt of the sample holder and to vary the height of the sample surface from outside the vacuum chamber. A third function of the stage is the capability to cool the sample to −150 °C using liquid nitrogen. Using this stage, we can measure line profiles of 10 mm in length without any variation in the secondary ion yields. By moving the sample surface toward the input lens, the primary ion beam remains well focused when the energy of the primary ions is reduced. Sample cooling is useful for samples such as organic materials that are easily damaged by primary ions or electrons.

  11. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    Science.gov (United States)

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

    Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95% confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, surgical technique and antibiotic regime disparities limit recommendations that can be made. Recent studies suggest one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent optimum treatment; however, patient selection criteria and key components of surgical and post-operative anti-microbial management remain to be defined. Level of evidence: III.

  12. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSU). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSU), i.e., households, are selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with some other designs, namely: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap are used for variance replication. Simple random sampling with sample sizes (462 to 561) gave moderate variances both by jackknife and bootstrap. By applying systematic sampling, we obtained moderate variance with sample size (467). In jackknife with systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes (467 to 631). At sample size (952), the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. Multi-stage sampling applied by HIES gave large variances, especially if used with a single study variable.
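
    A minimal sketch of the ratio and regression estimators with bootstrap variance replication, in the spirit of the abstract; the income/household-size data and the known population mean household size are invented for illustration:

```python
import random
import statistics

def ratio_estimate(y, x, x_mean_pop):
    # Ratio estimator of the population mean of y using auxiliary variable x
    return (sum(y) / sum(x)) * x_mean_pop

def regression_estimate(y, x, x_mean_pop):
    # Linear regression estimator of the population mean of y
    x_bar, y_bar = statistics.mean(x), statistics.mean(y)
    b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / sum(
        (xi - x_bar) ** 2 for xi in x)
    return y_bar + b * (x_mean_pop - x_bar)

def bootstrap_variance(estimator, y, x, x_mean_pop, reps=1000, seed=1):
    # Bootstrap variance replication: resample (y, x) pairs with replacement
    rng = random.Random(seed)
    n = len(y)
    est = []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        est.append(estimator([y[i] for i in idx], [x[i] for i in idx], x_mean_pop))
    return statistics.variance(est)

# Illustrative data: household income (y) against household size (x),
# with an assumed population mean household size of 6.0
income = [210, 180, 320, 260, 150, 400, 230, 290]
size = [5, 4, 8, 6, 3, 9, 5, 7]
r_hat = ratio_estimate(income, size, 6.0)
g_hat = regression_estimate(income, size, 6.0)
```

    The same bootstrap routine can be handed either estimator, which is how the variance comparison in the abstract would be replicated.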

  13. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, together with the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first shown and discussed. Two-stage penile urethroplasty is then reported: a detailed description of first-stage urethroplasty according to the Johanson technique, followed by second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

  14. Teaching basic life support with an automated external defibrillator using the two-stage or the four-stage teaching technique.

    Science.gov (United States)

    Bjørnshave, Katrine; Krogh, Lise Q; Hansen, Svend B; Nebsbjerg, Mette A; Thim, Troels; Løfgren, Bo

    2018-02-01

    Laypersons often hesitate to perform basic life support (BLS) and use an automated external defibrillator (AED) because of self-perceived lack of knowledge and skills. Training may reduce the barrier to intervene. Reduced training time and costs may allow training of more laypersons. The aim of this study was to compare BLS/AED skills acquisition and self-evaluated BLS/AED skills after instructor-led training with a two-stage versus a four-stage teaching technique. Laypersons were randomized to either two-stage or four-stage teaching technique courses. Immediately after training, the participants were tested in a simulated cardiac arrest scenario to assess their BLS/AED skills. Skills were assessed using the European Resuscitation Council BLS/AED assessment form. The primary endpoint was passing the test (17 of 17 skills adequately performed). A prespecified noninferiority margin of 20% was used. The two-stage teaching technique (n=72, pass rate 57%) was noninferior to the four-stage technique (n=70, pass rate 59%), with a difference in pass rates of -2%; 95% confidence interval: -18 to 15%. Nor were there significant differences between the two-stage and four-stage groups in the chest compression rate (114±12 vs. 115±14/min), chest compression depth (47±9 vs. 48±9 mm) and number of sufficient rescue breaths between compression cycles (1.7±0.5 vs. 1.6±0.7). In both groups, all participants believed that their training had improved their skills. Teaching laypersons BLS/AED using the two-stage teaching technique was noninferior to the four-stage teaching technique, although the pass rate was -2% (95% confidence interval: -18 to 15%) lower with the two-stage teaching technique.
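
    The noninferiority comparison above can be sketched with a simple Wald interval for the difference in pass rates. The pass counts (41 of 72 and 41 of 70) are back-calculated from the reported percentages, and the Wald interval is an illustrative choice rather than necessarily the authors' method:

```python
import math

def noninferiority_ci(passes_a, n_a, passes_b, n_b, margin=0.20, z=1.96):
    # Wald CI for the difference in pass rates (A minus B);
    # noninferiority of A is claimed if the lower bound exceeds -margin
    p_a, p_b = passes_a / n_a, passes_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    lo, hi = diff - z * se, diff + z * se
    return diff, (lo, hi), lo > -margin

# Two-stage (A) vs four-stage (B); counts are an assumption inferred
# from the reported rates, not data taken from the paper
diff, (lo, hi), noninferior = noninferiority_ci(41, 72, 41, 70)
```

    With these inputs the interval comes out close to the reported -18% to 15%, and its lower bound clears the -20% margin, reproducing the noninferiority conclusion.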

  15. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
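
    As an illustration of the book's central problem, a minimal rejection sampler (one classical general-purpose technique for drawing independent samples from an arbitrary, even unnormalized, density) might look like this; the target and proposal here are illustrative choices:

```python
import math
import random

def rejection_sample(target, proposal_draw, proposal_pdf, c, n, rng):
    # Accept x with probability target(x) / (c * proposal_pdf(x));
    # requires target(x) <= c * proposal_pdf(x) for all x
    out = []
    while len(out) < n:
        x = proposal_draw(rng)
        if rng.random() * c * proposal_pdf(x) <= target(x):
            out.append(x)
    return out

# Example: unnormalized half-normal density restricted to [0, 1], with a
# Uniform(0, 1) proposal; c = 1 bounds the density since exp(-x^2/2) <= 1
rng = random.Random(7)
samples = rejection_sample(
    target=lambda x: math.exp(-x * x / 2.0),
    proposal_draw=lambda r: r.random(),
    proposal_pdf=lambda x: 1.0,
    c=1.0,
    n=500,
    rng=rng,
)
```

    Accepted draws are independent and exactly distributed according to the (normalized) target, which is the weakest-assumption setting the book emphasizes.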

  16. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

    Full Text Available Abstract. Background: Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. Results: We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion: Sampling is a challenge in retrospective population-based mortality studies and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
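
    A first-stage selection of clusters with probability proportional to (gridded) population size can be sketched as follows; the systematic PPS routine and the example figures are illustrative, not the study's actual implementation:

```python
import random

def pps_systematic(units, sizes, n_select, rng):
    # Systematic PPS: lay unit sizes end to end along a cumulative total,
    # then step through it at a fixed interval from a random start; a unit
    # larger than the interval can be selected more than once
    total = float(sum(sizes))
    interval = total / n_select
    start = rng.uniform(0.0, interval)
    points = [start + k * interval for k in range(n_select)]
    chosen, cum, i = [], 0.0, 0
    for unit, size in zip(units, sizes):
        cum += size
        while i < len(points) and points[i] <= cum:
            chosen.append(unit)
            i += 1
    return chosen

# Hypothetical grid cells with estimated populations
rng = random.Random(3)
clusters = ["A", "B", "C", "D", "E"]
populations = [1200, 300, 800, 150, 550]
stage_one = pps_systematic(clusters, populations, 3, rng)
```

    The second stage (household selection from imagery-based sampling grids) would then draw a fixed number of households within each selected cluster.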

  17. Two-sample discrimination of Poisson means

    Science.gov (United States)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
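
    The conditional binomial test described above is straightforward to implement from the cumulative binomial distribution; a minimal sketch with a two-sided p-value (function names are ours):

```python
from math import comb

def binom_cdf(k, n, f):
    # P(X <= k) for X ~ Binomial(n, f), by direct summation
    return sum(comb(n, i) * f**i * (1 - f)**(n - i) for i in range(k + 1))

def poisson_two_sample_p(a, b, exposure_a, exposure_b):
    # Conditional test that counts a and b share a common Poisson rate,
    # given their exposures: under the null, a ~ Binomial(a + b, f) with
    # partition fraction f = exposure_a / (exposure_a + exposure_b)
    n = a + b
    f = exposure_a / (exposure_a + exposure_b)
    lower = binom_cdf(a, n, f)
    upper = 1.0 - binom_cdf(a - 1, n, f) if a > 0 else 1.0
    return min(1.0, 2.0 * min(lower, upper))
```

    For equal exposures, counts of 10 vs. 10 are perfectly consistent with the null, while 15 vs. 5 falls just below the conventional 0.05 threshold. The paper's lookup-table implementations precompute the critical A(B) contour instead of evaluating the tail sum per observation.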

  18. Experimental studies of two-stage centrifugal dust concentrator

    Science.gov (United States)

    Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.

    2018-03-01

    The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and shows the development of an engineering calculation method and laboratory investigations. For the experiments, the authors used quartz, ceramic dust and slag. Dispersion analysis of the dust particles was obtained by the sedimentation method. To build a mathematical model of the dust-collection process, a central composite rotatable design of the four-factor experiment was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.

  19. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  20. Optics of two-stage photovoltaic concentrators with dielectric second stages.

    Science.gov (United States)

    Ning, X; O'Gallagher, J; Winston, R

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  1. Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  2. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

    Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four random located plots of 0.16 m² with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance
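
    A required-sample-size calculation for a predefined precision can be sketched with a simple normal approximation, using the reported mean and SD (325 ± 479 L3/kg DH); the relative-precision formulation below is our assumption, not necessarily the authors' formula:

```python
import math

def plots_required(mean_count, sd_count, rel_precision, z=1.96):
    # Plots needed so the CI half-width equals rel_precision * mean,
    # assuming independent random plots and a normal approximation
    cv = sd_count / mean_count
    return math.ceil((z * cv / rel_precision) ** 2)

# Reported contamination under random plot sampling: 325 +/- 479 L3/kg DH
n_50pct = plots_required(325, 479, 0.50)  # estimate within +/-50% of mean
```

    With such a large coefficient of variation (~1.5), even a generous ±50% precision target requires dozens of plots, which is why quantifying the sampling variance matters here.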

  3. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    Science.gov (United States)

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  4. A versatile ultra high vacuum sample stage with six degrees of freedom

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, A. W.; Tromp, R. M. [IBM T.J. Watson Research Center, 1101 Kitchawan Road, P.O. Box 218, Yorktown Heights, New York 10598 (United States)

    2013-07-15

    We describe the design and practical realization of a versatile sample stage with six degrees of freedom. The stage was designed for use in a Low Energy Electron Microscope, but its basic design features will be useful for numerous other applications. The degrees of freedom are X, Y, and Z, two tilts, and azimuth. All motions are actuated in an ultrahigh vacuum base pressure environment by piezoelectric transducers with integrated position sensors. The sample can be load-locked. During observation, the sample is held at a potential of −15 kV, at temperatures between room temperature and 1500 °C, and in background gas pressures up to 1 × 10⁻⁴ Torr.

  5. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
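
    The second-stage RR-BLUP fit reduces to a ridge-regression solve for the marker effects; a dependency-free sketch (the tiny marker matrix and shrinkage value are illustrative):

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for A x = b
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rr_blup_effects(X, y, lam):
    # RR-BLUP marker effects: solve (X'X + lam * I) b = X'y, where X holds
    # marker genotypes and y the (adjusted) phenotypic means from stage one
    n, p = len(X), len(X[0])
    XtX = [[sum(X[k][i] * X[k][j] for k in range(n)) + (lam if i == j else 0.0)
            for j in range(p)] for i in range(p)]
    Xty = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    return solve(XtX, Xty)
```

    Increasing `lam` shrinks all marker effects toward zero, which is the ridge penalty that distinguishes RR-BLUP from ordinary least squares; the rotation step discussed in the abstract changes the inputs to this solve, not the solve itself.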

  6. Possible two-stage 87Sr evolution in the Stockdale Rhyolite

    International Nuclear Information System (INIS)

    Compston, W.; McDougall, I.; Wyborn, D.

    1982-01-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage 87Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial 87Sr/86Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y. (orig.)

  7. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

    Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage BVT and 74 (56%) two-stage BVT fistulas were created during the study period by 8 different vascular surgeons, each of whom performed both procedures. There was no significant difference in the mean age, male gender, white race, diabetes, coronary disease, or medication profile among patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates of one-stage BVT were 56% at 12-month follow-up, whereas primary patency rates of two-stage BVT were 72% at 12-month follow-up. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage BVT vs 80% for two-stage BVT) and 24 months (44% for one-stage BVT vs 73% for two-stage BVT) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  8. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  9. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  10. Numerical simulation of two-dimensional late-stage coarsening for nucleation and growth

    International Nuclear Information System (INIS)

    Akaiwa, N.; Meiron, D.I.

    1995-01-01

    Numerical simulations of two-dimensional late-stage coarsening for nucleation and growth or Ostwald ripening are performed at area fractions 0.05 to 0.4 using the monopole and dipole approximations of a boundary integral formulation for the steady state diffusion equation. The simulations are performed using two different initial spatial distributions. One is a random spatial distribution, and the other is a random spatial distribution with depletion zones around the particles. We characterize the spatial correlations of particles by the radial distribution function, the pair correlation functions, and the structure function. Although the initial spatial correlations are different, we find time-independent scaled correlation functions in the late stage of coarsening. An important feature of the late-stage spatial correlations is that depletion zones exist around particles. A log-log plot of the structure function shows that the slope at small wave numbers is close to 4 and is -3 at very large wave numbers for all area fractions. At large wave numbers we observe oscillations in the structure function. We also confirm the cubic growth law of the average particle radius. The rate constant of the cubic growth law and the particle size distribution functions are also determined. We find qualitatively good agreement between experiments and the present simulations. In addition, the present results agree well with simulation results using the Cahn-Hilliard equation
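The cubic growth law confirmed above, ⟨R⟩³ = R₀³ + Kt, can be checked by a linear least-squares fit of R³ against t. A sketch on synthetic data obeying the law exactly (R₀ and K are assumed values, not the paper's rate constants):

```python
# Sketch: verify the cubic coarsening law <R>^3 = R0^3 + K*t by a
# closed-form least-squares fit of R^3 against t.
def fit_cubic_growth(times, radii):
    """Return (R0^3, K) from a linear fit of R^3 = R0^3 + K*t."""
    y = [r ** 3 for r in radii]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(y) / n
    k = sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y)) / \
        sum((t - tbar) ** 2 for t in times)
    return ybar - k * tbar, k

times = [float(t) for t in range(1, 11)]
radii = [(1.0 + 0.5 * t) ** (1 / 3) for t in times]  # exact law: R0^3 = 1, K = 0.5
r0_cubed, k = fit_cubic_growth(times, radii)
print(round(r0_cubed, 3), round(k, 3))  # ≈ 1.0 0.5
```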

  11. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size estimated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
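The rule of thumb amounts to one line of arithmetic: add back the number of subjects expected to drop out of a sample of the original size. A sketch with assumed group sizes and dropout rates:

```python
import math

# Rule-of-thumb sketch (numbers assumed for illustration): inflate the
# dropout-free sample size by the expected number of dropouts.
def adjusted_sample_size(n_per_group, dropout_rate):
    return n_per_group + math.ceil(n_per_group * dropout_rate)

print(adjusted_sample_size(50, 0.20))  # 60
print(adjusted_sample_size(50, 0.25))  # 63
```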

  12. Discriminative motif discovery via simulated evolution and random under-sampling.

    Directory of Open Access Journals (Sweden)

    Tao Song

    Full Text Available Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of the discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs) training, a random under-sampling method is introduced for the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
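Random under-sampling of the majority class, as used above before HMM training, is straightforward to sketch: shrink the negative set to the size of the positive set (toy labels assumed):

```python
import random

# Sketch of random under-sampling: reduce the majority (negative) set
# to the size of the minority (positive) set before model training.
def under_sample(positives, negatives, seed=0):
    rng = random.Random(seed)
    sampled = rng.sample(negatives, k=len(positives))
    return positives, sampled

pos = ["p1", "p2", "p3"]
neg = [f"n{i}" for i in range(20)]
pos_out, neg_out = under_sample(pos, neg)
print(len(pos_out), len(neg_out))  # 3 3
```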


  14. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  15. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
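The transformation-of-moments idea behind such methods can be sketched for two correlated, inherently positive parameters: convert the given means and relative uncertainties to the parameters of the underlying normal distribution, transform the target correlation into normal space, then exponentiate. All numerical values below are assumed for illustration:

```python
import math, random

# Sketch of correlated lognormal sampling via transformation of the
# correlation coefficient (illustrative means, relative uncertainties
# cv and target correlation rho; not the paper's data).
def lognormal_pair(m1, cv1, m2, cv2, rho, rng):
    s1 = math.sqrt(math.log(1 + cv1 ** 2))   # underlying normal sigmas
    s2 = math.sqrt(math.log(1 + cv2 ** 2))
    mu1 = math.log(m1) - 0.5 * s1 ** 2       # chosen so the lognormal mean is m1
    mu2 = math.log(m2) - 0.5 * s2 ** 2
    # transform the lognormal-space correlation to normal-space correlation
    rho_z = math.log(1 + rho * cv1 * cv2) / (s1 * s2)
    z1 = rng.gauss(0, 1)
    z2 = rho_z * z1 + math.sqrt(1 - rho_z ** 2) * rng.gauss(0, 1)
    return math.exp(mu1 + s1 * z1), math.exp(mu2 + s2 * z2)

rng = random.Random(42)
samples = [lognormal_pair(1.0, 0.3, 2.0, 0.5, 0.6, rng) for _ in range(200000)]
mean1 = sum(x for x, _ in samples) / len(samples)
print(round(mean1, 2))  # ≈ 1.0: the sample mean matches the requested mean
```

By construction every sampled value is positive, which is the point of using a lognormal rather than a normal model for inherently positive parameters.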

  16. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  17. Investigating causal associations between use of nicotine, alcohol, caffeine and cannabis: a two-sample bidirectional Mendelian randomization study.

    Science.gov (United States)

    Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M

    2018-07-01

    Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine and cannabis use. Two-sample MR was employed to estimate bidirectional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week) and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these were not supported by the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine and cannabis use. © 2018 Society for the Study of Addiction.
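The basic two-sample MR estimator referred to above, the inverse-variance-weighted (IVW) estimate, combines per-variant Wald ratios (SNP-outcome effect divided by SNP-exposure effect) weighted by the precision of the outcome associations. A minimal sketch with invented summary statistics:

```python
# Sketch of the IVW two-sample MR estimate (bx = SNP-exposure effects,
# by = SNP-outcome effects, se_by = standard errors of by; all invented).
def ivw_estimate(bx, by, se_by):
    weights = [b ** 2 / s ** 2 for b, s in zip(bx, se_by)]  # 1 / var of the Wald ratio
    ratios = [o / e for o, e in zip(by, bx)]                # per-SNP Wald ratios
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

bx = [0.10, 0.20, 0.15]          # SNP-exposure effects
by = [0.05, 0.10, 0.075]         # SNP-outcome effects (exactly 0.5 * bx here)
se = [0.01, 0.01, 0.01]
print(round(ivw_estimate(bx, by, se), 6))  # 0.5
```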

  18. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.
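The two-stage structure described above can be illustrated in the degenerate single-input, single-output case, where CCR efficiency reduces to each DMU's output/input ratio scaled by the best observed ratio (toy data assumed; general two-stage DEA requires a linear-programming solver):

```python
# Minimal DEA sketch: single-input, single-output CCR efficiency is the
# output/input ratio normalized by the best ratio among the DMUs.
def ccr_efficiency(inputs, outputs):
    ratios = [y / x for x, y in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Two-stage chain: stage 1 turns the input into an intermediate measure,
# stage 2 turns the intermediate into the final output (values invented).
x = [2.0, 4.0, 3.0]          # stage-1 inputs for three DMUs
z = [4.0, 6.0, 6.0]          # intermediate measures
y = [8.0, 6.0, 9.0]          # final outputs
stage1 = ccr_efficiency(x, z)
stage2 = ccr_efficiency(z, y)
overall = [a * b for a, b in zip(stage1, stage2)]
print(overall)  # each DMU's overall score is the product of its stage scores
```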


  20. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two stage-type railgun accelerator capable of injecting a flying body (an ice pellet formed by solidifying a gaseous hydrogen isotope as fuel) into the central portion of the plasma of a thermonuclear reactor at higher speed. Namely, the two stage-type railgun accelerator accelerates the flying body injected from an initial-stage accelerator along the portion between the rails by the Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoids are disposed for compressing the plasma armature in the longitudinal direction of the rails. The first and second sets of solenoid coils are previously supplied with electric current. After passage of the flying body, the armature, formed into a plasma by a gas laser disposed behind the flying body, is compressed in the longitudinal direction of the rails by the magnetic force of the first and second sets of solenoid coils to increase the plasma density. The current density is also increased simultaneously. Then, the first solenoid coil current is turned OFF to accelerate the flying body in two stages by the compressed plasma armature. (I.S.)

  1. Improving the collection of knowledge, attitude and practice data with community surveys: a comparison of two second-stage sampling methods.

    Science.gov (United States)

    Davis, Rosemary H; Valadez, Joseph J

    2014-12-01

    Second-stage sampling techniques, including spatial segmentation, are widely used in community health surveys when reliable household sampling frames are not available. In India, an unresearched technique for household selection is used in eight states, which samples the house with the last marriage or birth as the starting point. Users question whether this last-birth or last-marriage (LBLM) approach introduces bias affecting survey results. We conducted two simultaneous population-based surveys. One used segmentation sampling; the other used LBLM. LBLM sampling required modification before assessment was possible and a more systematic approach was tested using last birth only. We compared coverage proportions produced by the two independent samples for six malaria indicators and demographic variables (education, wealth and caste). We then measured the level of agreement between the caste of the selected participant and the caste of the health worker making the selection. No significant difference between methods was found for the point estimates of six malaria indicators, education, caste or wealth of the survey participants (range of P: 0.06 to >0.99). A poor level of agreement occurred between the caste of the health worker used in household selection and the caste of the final participant, (Κ = 0.185), revealing little association between the two, and thereby indicating that caste was not a source of bias. Although LBLM was not testable, a systematic last-birth approach was tested. If documented concerns of last-birth sampling are addressed, this new method could offer an acceptable alternative to segmentation in India. However, inter-state caste variation could affect this result. Therefore, additional assessment of last birth is required before wider implementation is recommended. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
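The level of agreement reported above is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A sketch with toy rater labels (not the study's caste data):

```python
from collections import Counter

# Sketch of Cohen's kappa: observed agreement corrected for chance
# agreement between two raters (toy labels assumed).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_exp = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

a = ["x", "x", "x", "y", "y", "y"]
b = ["x", "x", "y", "y", "y", "y"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

A value near 0, like the 0.185 reported above, indicates little association beyond chance.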

  2. Possible two-stage ⁸⁷Sr evolution in the Stockdale Rhyolite

    Energy Technology Data Exchange (ETDEWEB)

    Compston, W.; McDougall, I. (Australian National Univ., Canberra. Research School of Earth Sciences); Wyborn, D. (Department of Minerals and Energy, Canberra (Australia). Bureau of Mineral Resources)

    1982-12-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage ⁸⁷Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial ⁸⁷Sr/⁸⁶Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y.

  3. Effects of regularly consuming dietary fibre rich soluble cocoa products on bowel habits in healthy subjects: a free-living, two-stage, randomized, crossover, single-blind intervention

    Directory of Open Access Journals (Sweden)

    Sarriá Beatriz

    2012-04-01

    Full Text Available Abstract Background Dietary fibre is both preventive and therapeutic for bowel functional diseases. Soluble cocoa products are good sources of dietary fibre that may be supplemented with this dietary component. This study assessed the effects of regularly consuming two soluble cocoa products (A and B) with different non-starch polysaccharide (NSP) levels (15.1 and 22.0% w/w, respectively) on bowel habits using subjective intestinal function and symptom questionnaires, a daily diary and a faecal marker in healthy individuals. Methods A free-living, two-stage, randomized, crossover, single-blind intervention was carried out in 44 healthy men and women, between 18-55 y old, who had not taken dietary supplements, laxatives, or antibiotics six months before the start of the study. In the four-week-long intervention stages, separated by a three-week wash-out stage, two servings of A and B, which provided 2.26 vs. 6.60 g/day of NSP respectively, were taken. In each stage, volunteers' diet was recorded using a 72-h food intake report. Results Regularly consuming cocoa A and B increased fibre intake, although only cocoa B significantly increased fibre intake. Conclusions Regular consumption of the cocoa products increases dietary fibre intake to recommended levels and product B improves bowel habits. The use of both objective and subjective assessments to evaluate the effects of food on bowel habits is recommended.

  4. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  5. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on the recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain-lattice and grain-boundary diffusion, coupled with a two-step burn-up factor in the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its prediction is in better agreement with the experimental measurements than that of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  6. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  7. Effects of pushing techniques during the second stage of labor: A randomized controlled trial.

    Science.gov (United States)

    Koyucu, Refika Genç; Demirci, Nurdan

    2017-10-01

    Spontaneous pushing is a method used in the management of the second stage of labor and suggested to be more physiological for the mother and infant. The present study aims to evaluate the effects of pushing techniques on the mother and newborn. This randomized prospective study was performed between June 2013-March 2014 in a tertiary maternity clinic in Istanbul. 80 low-risk, nulliparous cases were randomized to pushing groups. The Valsalva pushing group was told to hold their breath while pushing. No visual-verbal instructions were given to the spontaneous pushing group, and they were encouraged to push without preventing respiration. Demographic data, second stage duration, perineal laceration rates, fetal heart rate patterns, presence of meconium-stained amniotic fluid, newborn APGAR scores, POP-Q examination and Q-tip test results were evaluated in these cases. The second stage of labor was significantly longer with spontaneous pushing. The decrease in Hb levels was greater in the Valsalva pushing group than in the spontaneous pushing group. An increased urethral mobility was observed in the Valsalva pushing group. Although the duration of the second stage of labor was longer compared with the Valsalva pushing technique, women were able to give birth without requiring any verbal or visual instruction, without exceeding the limit value of two hours and without affecting fetal wellness and neonatal results. Copyright © 2017. Published by Elsevier B.V.

  8. FunSAV: predicting the functional effect of single amino acid variants using a two-stage random forest model.

    Directory of Open Access Journals (Sweden)

    Mingjun Wang

    Full Text Available Single amino acid variants (SAVs) are the most abundant form of known genetic variations associated with human disease. Successful prediction of the functional impact of SAVs from sequences can thus lead to an improved understanding of the underlying mechanisms of why a SAV may be associated with certain disease. In this work, we constructed a high-quality structural dataset that contained 679 high-quality protein structures with 2,048 SAVs by collecting the human genetic variant data from multiple resources and dividing them into two categories, i.e., disease-associated and neutral variants. We built a two-stage random forest (RF) model, termed FunSAV, to predict the functional effect of SAVs by combining sequence, structure and residue-contact network features with other additional features that were not explored in previous studies. Importantly, a two-step feature selection procedure was proposed to select the most important and informative features that contribute to the prediction of disease association of SAVs. In cross-validation experiments on the benchmark dataset, FunSAV achieved a good prediction performance with an area under the curve (AUC) of 0.882, which is competitive with and in some cases better than other existing tools including SIFT, SNAP, PolyPhen2, PANTHER, nsSNPAnalyzer and PhD-SNP. The source code of FunSAV and the datasets can be downloaded at http://sunflower.kuicr.kyoto-u.ac.jp/sjn/FunSAV.

  9. Placental cord drainage in the third stage of labor: Randomized clinical trial.

    Science.gov (United States)

    Vasconcelos, Fernanda Barros; Katz, Leila; Coutinho, Isabela; Lins, Vanessa Laranjeiras; de Amorim, Melania Maria

    2018-01-01

    An open randomized clinical trial was developed at Instituto de Medicina Integral Prof. Fernando Figueira (IMIP) in Recife and at Petronila Campos Municipal Hospital in São Lourenço da Mata, both in Pernambuco, northeastern Brazil, including 226 low-risk pregnant women bearing a single, full-term, live fetus after delayed cord clamping, 113 randomized to placental cord drainage and 113 to a control group not submitted to this procedure. Women incapable of understanding the study objectives and those who went on to have an instrumental or cesarean delivery were excluded. Duration of the third stage of labor did not differ between the two groups (14.2±12.9 versus 13.7±12.1 minutes (mean ± SD), p = 0.66). Likewise, there was no significant difference in mean blood loss (248±254 versus 208±187ml, p = 0.39) or in postpartum hematocrit levels (32.3±4.06 versus 32.8±4.25mg/dl, p = 0.21). Furthermore, no differences were found between the groups for any of the secondary outcomes (postpartum hemorrhage >500 or >1000ml, therapeutic use of oxytocin, third stage >30 or 60 minutes, digital evacuation of the uterus or curettage, symptoms of postpartum anemia and maternal satisfaction). Placental cord drainage had no effect in reducing duration or blood loss during the third stage of labor. ClinicalTrials.gov: www.clinicaltrial.gov, NCT01655576.

  10. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    International Nuclear Information System (INIS)

    Vaezzadeh, Mehdi; Yazdani, Ahmad; Vaezzadeh, Majid; Daneshmand, Gissoo; Kanzeghi, Ali

    2006-01-01

    The magnetic behavior of the Gd-based intermetallic compound Gd₂Al₁₋ₓAuₓ, in the form of powder and needles, is investigated. All the samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the compound in powder form with x=0.4, however, the susceptibility measurement χ(T) shows two different stages. Moreover, for T>T_K2 a fall in the value of χ(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe first (at T_K1) an increase of χ(T) and in the second stage (at T_K2) a new remarkable decrease of χ(T) (T_K1>T_K2). For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, which is clearly visible for the sample in powder form, can be attributed to a complete polarization of the Kondo cloud. Observation of these two Kondo stages could be due to the weak presence of the RKKY contribution

  11. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impression was made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of plaster dies was vertically determined in mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  12. Comparison of single-stage and temperature-phased two-stage anaerobic digestion of oily food waste

    International Nuclear Information System (INIS)

    Wu, Li-Jie; Kobayashi, Takuro; Li, Yu-You; Xu, Kai-Qin

    2015-01-01

    Highlights: • A single-stage and two two-stage anaerobic systems were synchronously operated. • Similar methane production of 0.44 L/g VS_added from oily food waste was achieved. • The first stage of the two-stage process became inefficient due to a serious pH drop. • Recycle favored hythane production in the two-stage digestion. • The conversion of unsaturated fatty acids was enhanced by recycle introduction. - Abstract: Anaerobic digestion is an effective technology to recover energy from oily food waste. A single-stage system and temperature-phased two-stage systems with and without recycle for anaerobic digestion of oily food waste were constructed to compare the operation performances. The synchronous operation indicated a similar ability to produce methane in the three systems, with a methane yield of 0.44 L/g VS_added. The pH drop to less than 4.0 in the first stage of the two-stage system without recycle resulted in poor hydrolysis, and methane or hydrogen was not produced in this stage. Alkalinity supplement from the second stage of the two-stage system with recycle improved the pH in the first stage to 5.4. Consequently, 35.3% of the particulate COD in the influent was reduced in the first stage of the two-stage system with recycle according to a COD mass balance, and hydrogen was produced at a percentage of 31.7%, accordingly. Similar solids and organic matter were removed in the single-stage system and the two-stage system without recycle. More lipid degradation and conversion of long-chain fatty acids were achieved in the single-stage system. Recycling was proved to be effective in promoting the conversion of unsaturated long-chain fatty acids into saturated fatty acids in the two-stage system.

  13. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in water resources systems, and traditional two-stage stochastic programming is risk-neutral: it compares random outcomes (e.g., total benefit) only through their expectations to identify the best decisions. To deal with risk, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends traditional two-stage stochastic programming by enabling uncertainties presented as probability density functions and discrete intervals to be incorporated effectively within the optimization framework. It can not only provide information on the benefits of the allocation plan to decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
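The risk-aversion idea the abstract describes — an expected-benefit objective penalized by the conditional value-at-risk (CVaR) of the second-stage penalty cost — can be sketched on a toy scenario model. Everything below (the benefit and penalty coefficients, the gamma supply distribution, and the risk weight) is a hypothetical stand-in, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy risk-averse two-stage water-allocation model (all numbers hypothetical):
# stage 1 commits to an allocation x earning benefit b per unit; stage 2
# observes random supply w and pays penalty p per unit of shortfall.
b, p, lam, alpha = 5.0, 8.0, 0.5, 0.95   # benefit, penalty, risk weight, CVaR level
w = rng.gamma(shape=4.0, scale=2.5, size=10_000)  # supply scenarios

def objective(x):
    cost = p * np.maximum(x - w, 0.0)    # second-stage recourse penalty per scenario
    var = np.quantile(cost, alpha)       # value-at-risk of the penalty
    cvar = var + np.maximum(cost - var, 0.0).mean() / (1.0 - alpha)
    return b * x - cost.mean() - lam * cvar  # risk-adjusted expected net benefit

xs = np.linspace(0.0, 20.0, 401)
best = xs[np.argmax([objective(x) for x in xs])]
print(f"risk-averse first-stage allocation: {best:.2f}")
```

Setting `lam = 0` recovers the risk-neutral two-stage solution; increasing `lam` trades expected benefit for protection against extreme penalty scenarios.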

  14. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

    The method of Bayesian inference is reexamined for its applicability and for the underlying assumptions required in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required by other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is also indicated how various quantiles may be generated with a least-squares technique.

  15. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages and problems get easily out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, where we use importance path sampling for the upper bound estimation. Initial numerical results turned out to be promising.
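The value of importance sampling for estimating expectations dominated by rare, expensive scenarios can be seen in a minimal example. The standard-normal tail probability and the shifted proposal below are illustrative only; Infanger's method applies the same likelihood-ratio reweighting inside Benders cut estimation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical rare "high-cost scenario": a standard-normal parameter exceeding 4.
# Naive Monte Carlo almost never samples it; importance sampling shifts the
# proposal toward the region that matters and reweights by the likelihood ratio.
threshold = 4.0

naive = (rng.standard_normal(n) > threshold).mean()

x = rng.normal(loc=threshold, scale=1.0, size=n)     # shifted proposal N(4, 1)
weights = np.exp(threshold**2 / 2 - threshold * x)   # N(0,1) density / N(4,1) density
is_est = np.mean((x > threshold) * weights)

print(f"naive MC: {naive:.2e}, importance sampling: {is_est:.2e}")
```

With the same number of samples, the importance-sampled estimate is accurate to a few tenths of a percent, while the naive estimate rests on a handful of hits at best.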

  16. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
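The core of systematic random sampling — equidistant points with a uniformly random origin — is easy to sketch. This is a minimal illustration of the principle, not RandomSpot's actual implementation:

```python
import random

def systematic_random_points(width, height, spacing, seed=None):
    """Place equidistant sample points on a grid whose origin is drawn
    uniformly at random -- the virtual-slide analogue of a graticule."""
    rnd = random.Random(seed)
    x0 = rnd.uniform(0, spacing)   # random origin makes the sample unbiased
    y0 = rnd.uniform(0, spacing)
    points = []
    y = y0
    while y < height:
        x = x0
        while x < width:
            points.append((x, y))
            x += spacing
        y += spacing
    return points

pts = systematic_random_points(1000, 800, 100, seed=1)
print(len(pts), pts[:3])
```

Because only the origin is random, every location has equal inclusion probability while the points remain evenly spread across the region of interest.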

  17. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A laboratory-made resin model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid-mesial, distal, buccal, and lingual (MDBL) regions by stereomicroscope using a standard method. Results. The results of the independent t-test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was found between the two techniques in the MDL regions and overall. Conclusion. The findings of the present study indicate higher accuracy for the two-stage impression technique than for the one-stage technique.

  18. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Vaezzadeh, Mehdi [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)]. E-mail: mehdi@kntu.ac.ir; Yazdani, Ahmad [Tarbiat Modares University, P.O. Box 14155-4838, Tehran (Iran, Islamic Republic of); Vaezzadeh, Majid [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Daneshmand, Gissoo [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Kanzeghi, Ali [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)

    2006-05-01

    The magnetic behavior of the Gd-based intermetallic compound Gd{sub 2}Al{sub (1-x)}Au{sub x}, in the form of powder and needles, is investigated. All the samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the powder compound with x=0.4, the susceptibility measurement {chi}(T) shows two different stages. Moreover, for T>T{sub K2} a fall in the value of {chi}(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe at the first (T{sub K1}) an increase of {chi}(T) and at the second stage (T{sub K2}) a further remarkable decrease of {chi}(T) (T{sub K1}>T{sub K2}). For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, which is clearly visible for the sample in the form of powder, can be attributed to a complete polarization of the Kondo cloud. The observation of these two Kondo stages could be due to the weak presence of the RKKY contribution.

  19. Two-Stage Centrifugal Fan

    Science.gov (United States)

    Converse, David

    2011-01-01

    Fan designs are often constrained by envelope, rotational speed, weight, and power. Aerodynamic performance and motor electrical performance are heavily influenced by rotational speed. The fan used in this work is at a practical limit for rotational speed due to motor performance characteristics, and there is no more space available in the packaging for a larger fan, yet the pressure rise requirements keep growing. Ordinarily, a higher DP is accommodated by spinning faster or growing the fan rotor diameter. The invention is to put two radially oriented stages on a single disk. Flow enters the first stage from the center; energy is imparted to the flow in the first-stage blades, the flow is redirected somewhat opposite to the direction of rotation in the fixed stators, and more energy is imparted to the flow in the second-stage blades. Without increasing either rotational speed or disk diameter, it is believed that as much as 50 percent more DP can be achieved with this design than with an ordinary, single-stage centrifugal design. This invention is useful primarily for fans having relatively low flow rates with relatively high pressure rise requirements.

  20. Quantification of physical activity using the QAPACE Questionnaire: a two stage cluster sample design survey of children and adolescents attending urban school.

    Science.gov (United States)

    Barbosa, Nicolas; Sanchez, Carlos E; Patino, Efrain; Lozano, Benigno; Thalabard, Jean C; LE Bozec, Serge; Rieu, Michel

    2016-05-01

    Quantification of physical activity as energy expenditure, starting in youth, is important for the prevention of chronic non-communicable diseases in adulthood. The aim was to quantify physical activity, expressed as daily energy expenditure (DEE), in school children and adolescents aged 8-16 years in Bogotá, by age, gender and socioeconomic level (SEL). This is a two-stage cluster sample survey drawn from a universe of 4700 schools and 760000 students across the three socioeconomic levels in Bogotá (low, medium and high). The random sample was 20 schools and 1840 students (904 boys and 936 girls). Anticipating participant dropout and inconsistency in questionnaire responses, the sample size was increased: 6 individuals of each gender for each of the nine age groups were selected, resulting in a total sample of 2160 individuals. Selected students filled in the QAPACE questionnaire under supervision. The data were analyzed by comparing means with a multivariate general linear model. Fixed factors were gender (boys and girls), age (8 to 16 years) and tri-strata SEL (low, medium and high); the independent variables were height, weight and leisure time (hours/day); the dependent variables were DEE (kJ.kg-1.day-1) during leisure time (DEE-LT), school time (DEE-ST) and vacation time (DEE-VT), and total mean DEE per year (DEEm-TY). RESULTS: In boys, differences in DEE by gender were significant for LT and all DEE measures, and all variables were significant with SEL, but the age-SEL interaction was significant only for DEE-VT. In girls, all variables were significant with SEL. Post hoc multiple comparisons with age, using Fisher's Least Significant Difference (LSD) test, were significant for all variables. For all SELs, girls had the higher values except in SEL high (5-6). Boys had higher values in DEE-LT, DEE-ST and DEE-VT, except DEEm-TY in SEL (5-6). In SEL (5-6), all DEEs for both genders were highest.
For SEL
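A two-stage cluster draw of this kind — schools first, then students within the selected schools — can be sketched as follows. The school roster here is synthetic, with the stage sizes chosen to match the study's 20 schools and 2160 students:

```python
import random

rnd = random.Random(2016)

# Synthetic sampling frame (hypothetical sizes): 100 schools, each with a
# roster of 200-400 students. Stage 1 draws schools; stage 2 draws a fixed
# number of students per selected school.
schools = {f"school_{i}": [f"s{i}_{j}" for j in range(rnd.randint(200, 400))]
           for i in range(100)}

stage1 = rnd.sample(sorted(schools), k=20)                 # 20 schools
stage2 = {sch: rnd.sample(schools[sch], k=108) for sch in stage1}  # 108 students each

total = sum(len(v) for v in stage2.values())
print(total)
```

With 108 students per school (6 per gender for each of nine age groups), 20 schools yield the 2160-student sample described in the abstract.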

  1. Two-stage single-volume exchange transfusion in severe hemolytic disease of the newborn.

    Science.gov (United States)

    Abbas, Wael; Attia, Nayera I; Hassanein, Sahar M A

    2012-07-01

    Evaluation of two-stage single-volume exchange transfusion (TSSV-ET) in decreasing the post-exchange rebound increase in serum bilirubin level, with subsequent reduction of the need for repeated exchange transfusions. The study included 104 neonates with hyperbilirubinemia needing exchange transfusion. They were randomly enrolled into two equal groups of 52 neonates each. TSSV-ET was performed for 52 neonates and the traditional single-stage double-volume exchange transfusion (SSDV-ET) for the other 52. TSSV-ET significantly lowered rebound serum bilirubin level (12.7 ± 1.1 mg/dL), compared to SSDV-ET (17.3 ± 1.7 mg/dL), p < 0.001. Need for repeated exchange transfusions was significantly lower in the TSSV-ET group (13.5%), compared to 32.7% in the SSDV-ET group, p < 0.05. No significant difference was found between the two groups as regards morbidity (11.5% and 9.6%, respectively) and mortality (1.9% for both groups). Two-stage single-volume exchange transfusion proved to be more effective in reducing rebound serum bilirubin level post-exchange and in decreasing the need for repeated exchange transfusions.

  2. Multi-stage pulsed laser deposition of aluminum nitride at different temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Duta, L. [National Institute for Lasers, Plasma, and Radiation Physics, 409 Atomistilor Street, 077125 Magurele (Romania); Stan, G.E. [National Institute of Materials Physics, 105 bis Atomistilor Street, 077125 Magurele (Romania); Stroescu, H.; Gartner, M.; Anastasescu, M. [Institute of Physical Chemistry “Ilie Murgulescu”, Romanian Academy, 202 Splaiul Independentei, 060021 Bucharest (Romania); Fogarassy, Zs. [Research Institute for Technical Physics and Materials Science, Hungarian Academy of Sciences, Konkoly Thege Miklos u. 29-33, H-1121 Budapest (Hungary); Mihailescu, N. [National Institute for Lasers, Plasma, and Radiation Physics, 409 Atomistilor Street, 077125 Magurele (Romania); Szekeres, A., E-mail: szekeres@issp.bas.bg [Institute of Solid State Physics, Bulgarian Academy of Sciences, Tzarigradsko Chaussee 72, Sofia 1784 (Bulgaria); Bakalova, S. [Institute of Solid State Physics, Bulgarian Academy of Sciences, Tzarigradsko Chaussee 72, Sofia 1784 (Bulgaria); Mihailescu, I.N., E-mail: ion.mihailescu@inflpr.ro [National Institute for Lasers, Plasma, and Radiation Physics, 409 Atomistilor Street, 077125 Magurele (Romania)

    2016-06-30

    Highlights: • Multi-stage pulsed laser deposition of aluminum nitride at different temperatures. • An 800 °C seed film boosts the subsequent growth of crystalline structures at lower temperature. • Two-stage deposited AlN samples exhibit randomly oriented wurtzite structures. • Band gap energy values increase with deposition temperature. • A correlation was observed between single- and multi-stage AlN films. - Abstract: We report on multi-stage pulsed laser deposition of aluminum nitride (AlN) on Si (1 0 0) wafers, at different temperatures. The first stage of deposition was carried out at 800 °C, the optimum temperature for AlN crystallization. In the second stage, the deposition was conducted at lower temperatures (room temperature, 350 °C or 450 °C), in ambient nitrogen at 0.1 Pa. The synthesized structures were analyzed by grazing incidence X-ray diffraction (GIXRD), transmission electron microscopy (TEM), atomic force microscopy and spectroscopic ellipsometry (SE). GIXRD measurements indicated that the two-stage deposited AlN samples exhibited a randomly oriented wurtzite structure with nanosized crystallites. The peaks were shifted to larger angles, indicative of smaller inter-planar distances. Remarkably, TEM images demonstrated that the high-temperature AlN “seed” layers (800 °C) promoted the growth of poly-crystalline AlN structures at lower deposition temperatures. With increasing deposition temperature, the surface roughness of the samples ranged from 0.4 to 2.3 nm. SE analyses yielded band gap values within the range of 4.0–5.7 eV. A correlation between the results of single- and multi-stage AlN depositions was observed.

  3. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
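For reference, a plain (non-survival) two-stage instrumental variable estimator under the additive-exposure setup looks like this. The simulated coefficients and sample size are hypothetical, and the paper's estimator additionally corrects the second stage for selective survival, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Simulated Mendelian-randomization-style data (all names and coefficients
# hypothetical): instrument g and unmeasured confounder u act additively on
# the exposure x; the true causal effect of x on y is 1.5.
g = rng.binomial(2, 0.3, n).astype(float)        # genotype instrument
u = rng.standard_normal(n)                       # unmeasured confounder
x = 0.5 * g + u + rng.standard_normal(n)
y = 1.5 * x + 2.0 * u + rng.standard_normal(n)

# Stage 1: regress exposure on the instrument. Stage 2: regress outcome on
# the fitted exposure. Plain OLS of y on x is biased by u; 2SLS is not.
Z = np.column_stack([np.ones(n), g])
x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
beta_2sls = np.linalg.lstsq(np.column_stack([np.ones(n), x_hat]), y, rcond=None)[0][1]
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]
print(f"OLS: {beta_ols:.2f}  2SLS: {beta_2sls:.2f}")
```

The 2SLS estimate recovers the true effect because the instrument is independent of the confounder; the OLS estimate absorbs the confounding path through u.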

  4. Randomized branch sampling to estimate fruit production in Pecan trees cv. ‘Barton’

    Directory of Open Access Journals (Sweden)

    Filemom Manoel Mokochinski

    Full Text Available ABSTRACT: Sampling techniques to quantify fruit production are still very scarce, leaving a gap in crop development research. This study was conducted on a rural property in the county of Cachoeira do Sul - RS to estimate the efficiency of randomized branch sampling (RBS) in quantifying the production of pecan fruit at three different ages (5, 7 and 10 years). Two selection techniques were tested: probability proportional to diameter (PPD) and uniform probability (UP), which were applied to nine trees, three of each age, chosen at random. The RBS underestimated fruit production for all ages, and its main drawback was the high sampling error (125.17% for PPD and 111.04% for UP). The UP was regarded as more efficient than the PPD, though both techniques estimated similar production and similar experimental errors. In conclusion, branch sampling was inaccurate for this case study, and new studies are required to produce estimates with smaller sampling errors.
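The PPD variant of randomized branch sampling can be sketched as a single random path through the tree: each branch is selected with probability proportional to its diameter, and the terminal fruit count is divided by the path's inclusion probability. The tiny branch tree below is invented for illustration:

```python
import random

# Toy branch tree (hypothetical): each node has a diameter and either
# sub-branches or a terminal fruit count. True total fruit = 40 + 25 + 15 = 80.
tree = {"diam": 30, "fruit": 0, "children": [
    {"diam": 12, "fruit": 40, "children": []},
    {"diam": 18, "fruit": 0, "children": [
        {"diam": 10, "fruit": 25, "children": []},
        {"diam": 8,  "fruit": 15, "children": []},
    ]},
]}

def rbs_estimate(node, rnd, prob=1.0):
    """One randomized-branch-sampling path with PPD selection; returns
    fruit / inclusion probability, which is unbiased for the tree total."""
    if not node["children"]:
        return node["fruit"] / prob
    total_d = sum(c["diam"] for c in node["children"])
    r = rnd.uniform(0, total_d)
    for c in node["children"]:
        if r < c["diam"]:
            return rbs_estimate(c, rnd, prob * c["diam"] / total_d)
        r -= c["diam"]
    last = node["children"][-1]                 # guard against float edge case
    return rbs_estimate(last, rnd, prob * last["diam"] / total_d)

rnd = random.Random(5)
estimates = [rbs_estimate(tree, rnd) for _ in range(20_000)]
mean_est = sum(estimates) / len(estimates)
print(mean_est)
```

Each path estimate is unbiased for the whole-tree total, but, as the abstract's large sampling errors show, individual path estimates can vary widely; only their average converges.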

  5. Genetic diversity of two Tunisian sheep breeds using random ...

    African Journals Online (AJOL)

    Random amplified polymorphic DNA (RAPD) markers were used to study genetic diversity and population structure in six sheep populations belonging to two native Tunisian breeds (the Barbarine and the Western thin tail). A total of 96 samples were typed using eight RAPD primers. 62 bands were scored, of which 44 ...

  6. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if the inputs are of low quality and have large variations in viewpoint and face expression.

  7. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    Science.gov (United States)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Large surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily imprints fiducial fluctuation signals onto the random samples and weakens the BAO signal when cosmic variance cannot be ignored. We propose a smooth function of the redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signal improves by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current cosmological parameter measurements, such improvements would be valuable for future measurements of galaxy clustering.
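The remedy described here — drawing random-catalogue redshifts from a smooth fit to n(z) rather than from the data's noisy histogram — can be sketched with inverse-CDF sampling. The analytic n(z) form below is a common survey-like shape chosen for illustration, not the authors' actual fitting function:

```python
import numpy as np

rng = np.random.default_rng(12)

# Hypothetical smooth n(z), here n(z) ∝ z^2 exp(-(z/z0)^1.5), standing in for
# a curve fitted to the data's redshift histogram.
z0 = 0.35
zgrid = np.linspace(0.0, 1.5, 2000)
nz = zgrid**2 * np.exp(-(zgrid / z0) ** 1.5)

# Inverse-CDF sampling: the randoms follow the smooth curve instead of the
# noisy data histogram, so no fiducial fluctuations are imprinted on them.
cdf = np.cumsum(nz)
cdf /= cdf[-1]
randoms = np.interp(rng.uniform(size=500_000), cdf, zgrid)

print(f"mean z of randoms: {randoms.mean():.3f}")
```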

  8. The Orientation of Gastric Biopsy Samples Improves the Inter-observer Agreement of the OLGA Staging System.

    Science.gov (United States)

    Cotruta, Bogdan; Gheorghe, Cristian; Iacob, Razvan; Dumbrava, Mona; Radu, Cristina; Bancila, Ion; Becheanu, Gabriel

    2017-12-01

    Evaluation of severity and extension of gastric atrophy and intestinal metaplasia is recommended to identify subjects with a high risk for gastric cancer. The inter-observer agreement for the assessment of gastric atrophy is reported to be low. The aim of the study was to evaluate the inter-observer agreement for the assessment of severity and extension of gastric atrophy using oriented and unoriented gastric biopsy samples. Furthermore, the quality of biopsy specimens in oriented and unoriented samples was analyzed. A total of 35 subjects with dyspeptic symptoms referred for gastrointestinal endoscopy who agreed to enter the study were prospectively enrolled. The OLGA/OLGIM gastric biopsies protocol was used. From each subject two sets of biopsies were obtained (four from the antrum, two oriented and two unoriented; two from the gastric incisure, one oriented and one unoriented; four from the gastric body, two oriented and two unoriented). The orientation of the biopsy samples was completed using nitrocellulose filters (Endokit®, BioOptica, Milan, Italy). The samples were blindly examined by two experienced pathologists. Inter-observer agreement was evaluated using the kappa statistic. The quality of histopathology specimens was analyzed in oriented vs. unoriented samples based on the identification of the lamina propria; samples with detectable lamina propria mucosae were defined as good-quality specimens. Categorical data were analyzed using the chi-square test, and a two-sided p value <0.05 was considered statistically significant. A total of 350 biopsy samples were analyzed (175 oriented / 175 unoriented). The kappa index values for oriented/unoriented OLGA 0/I/II/III and IV stages were 0.62/0.13, 0.70/0.20, 0.61/0.06, 0.62/0.46, and 0.77/0.50, respectively. For OLGIM 0/I/II/III stages the kappa index values for oriented/unoriented samples were 0.83/0.83, 0.88/0.89, 0.70/0.88 and 0.83/1, respectively. No case of OLGIM IV

  9. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
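A stratified random draw with proportional allocation, of the kind such a spatial method builds on, can be sketched as follows (the strata and household roster are hypothetical):

```python
import random

rnd = random.Random(3)

# Hypothetical village roster stratified by region; proportional allocation
# draws from every stratum, so hard-to-reach strata cannot be missed entirely.
strata = {"north": list(range(0, 500)),
          "river": list(range(500, 800)),
          "hills": list(range(800, 1000))}

def stratified_sample(strata, n_total):
    pop = sum(len(units) for units in strata.values())
    sample = {}
    for name, units in strata.items():
        n_h = round(n_total * len(units) / pop)   # proportional allocation
        sample[name] = rnd.sample(units, n_h)
    return sample

s = stratified_sample(strata, 100)
print({k: len(v) for k, v in s.items()})  # → {'north': 50, 'river': 30, 'hills': 20}
```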

  10. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling; Horvitz-Thompson Estimator; Sufficiency; Likelihood; Non-Existence Theorem. More Intricacies: Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...
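The book's opening material — estimating a population mean from a simple random sample without replacement, with the finite-population correction — reduces to a few lines:

```python
import random
import statistics

rnd = random.Random(9)

# Textbook SRSWOR estimators on a synthetic population: the sample mean
# estimates the population mean, and its estimated variance carries the
# finite-population correction (1 - n/N).
population = [rnd.gauss(50, 10) for _ in range(10_000)]
N, n = len(population), 400

sample = rnd.sample(population, n)        # simple random sample w/o replacement
y_bar = statistics.fmean(sample)
s2 = statistics.variance(sample)
var_hat = (1 - n / N) * s2 / n            # estimated variance of the sample mean

print(f"estimate {y_bar:.2f} +/- {var_hat**0.5:.2f}")
```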

  11. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
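The activity's logic — re-randomize the group labels many times and see how often the shuffled F statistic exceeds the observed one — can be written directly (the data below are made up for illustration):

```python
import random
import statistics

def f_stat(groups):
    """Classic one-way ANOVA F statistic: between-group over within-group
    mean squares."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = statistics.fmean([x for g in groups for x in g])
    ss_between = sum(len(g) * (statistics.fmean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - statistics.fmean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def randomization_p_value(groups, n_perm=2000, seed=0):
    """Shuffle group labels n_perm times; the p-value is the fraction of
    shuffles whose F statistic is at least the observed one."""
    rnd = random.Random(seed)
    observed = f_stat(groups)
    pooled = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]
    exceed = 0
    for _ in range(n_perm):
        rnd.shuffle(pooled)
        it = iter(pooled)
        shuffled = [[next(it) for _ in range(s)] for s in sizes]
        if f_stat(shuffled) >= observed:
            exceed += 1
    return exceed / n_perm

groups = [[12.1, 11.8, 12.5, 12.0], [12.2, 12.4, 12.3, 12.6], [14.0, 14.2, 13.8, 14.1]]
p = randomization_p_value(groups)
print(p)
```

The randomization distribution of the shuffled F statistics is exactly the sampling distribution the activity asks students to build by hand.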

  12. [Acupotomy and acupuncture in the treatment of avascular necrosis of femoral head at the early and middle stages:a clinical randomized controlled trial].

    Science.gov (United States)

    Wang, Zhanyou; Zhou, Xuelong; Xie, Lishuang; Liang, Dongyue; Wang, Ying; Zhang, Hong-An; Zheng, Jinghong

    2016-10-12

    To compare the efficacy difference between acupotomy and acupuncture in the treatment of avascular necrosis of the femoral head at the early and middle stages. A randomized controlled prospective study method was adopted. Sixty cases of avascular necrosis of the femoral head at Ficat-Arlet stages Ⅰ to Ⅱ were randomized into an acupotomy group (32 cases) and an acupuncture group (28 cases) by a third party. In the acupotomy group, acupotomy was adopted for loosening release at the treatment sites of the hip joint, once every two weeks, for a total of 3 treatments. In the acupuncture group, ashi points around the hip joint were selected and stimulated with warm acupuncture therapy, once every day, for 6 weeks. The Harris hip score was observed before and after treatment, and the efficacy was evaluated in the two groups. The Harris hip score was improved significantly after treatment in both groups (both P<0.05) … avascular necrosis of the femoral head at the early and middle stages.

  13. A comparison of two sampling approaches for assessing the urban forest canopy cover from aerial photography.

    Science.gov (United States)

    Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker

    2016-01-01

    Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...
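The random point-based approach amounts to estimating a proportion from Bernoulli trials. The sketch below runs it against a synthetic canopy map; the classification function and the binomial standard error are illustrative, not the study's imagery or analysis:

```python
import random

rnd = random.Random(11)

# Synthetic "city": canopy occupies a few circular tree clusters inside a
# 100 x 100 unit extent (a stand-in for classified aerial imagery).
def is_canopy(x, y):
    centers = [(25, 25), (70, 40), (50, 80)]
    return any((x - cx) ** 2 + (y - cy) ** 2 <= 15 ** 2 for cx, cy in centers)

# True cover, computed by exhaustive cell-center enumeration for comparison.
true_cover = sum(is_canopy(x + 0.5, y + 0.5)
                 for x in range(100) for y in range(100)) / 10_000

n = 1000   # the point-based approach in the abstract used 1000 sample points
hits = sum(is_canopy(rnd.uniform(0, 100), rnd.uniform(0, 100)) for _ in range(n))
p_hat = hits / n
se = (p_hat * (1 - p_hat) / n) ** 0.5   # binomial standard error
print(f"true {true_cover:.3f}, estimated {p_hat:.3f} +/- {se:.3f}")
```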

  14. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
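The halving procedure can be sketched as repeated random selection of one half of the remaining area. This is a simplified illustration of the geometric idea only, not the regulation's full protocol, which specifies how the halves are delineated and recorded:

```python
import random

def select_subarea(rnd, depth=2):
    """Sketch of 'random selection of halves': starting from a 1 m x 1 m
    square, repeatedly split the remaining area in half (alternating the
    cut direction) and keep one half at random. depth=2 leaves a quarter."""
    x0, y0, x1, y1 = 0.0, 0.0, 1.0, 1.0
    for level in range(depth):
        if level % 2 == 0:                  # vertical cut
            mid = (x0 + x1) / 2
            x0, x1 = (x0, mid) if rnd.random() < 0.5 else (mid, x1)
        else:                               # horizontal cut
            mid = (y0 + y1) / 2
            y0, y1 = (y0, mid) if rnd.random() < 0.5 else (mid, y1)
    return x0, y0, x1, y1

rnd = random.Random(0)
region = select_subarea(rnd)
area = (region[2] - region[0]) * (region[3] - region[1])
print(region, area)
```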

  15. Influence of Cu(NO3)2 initiation additive in two-stage mode conditions of coal pyrolytic decomposition

    Directory of Open Access Journals (Sweden)

    Larionov Kirill

    2017-01-01

    Full Text Available The two-stage (pyrolysis and oxidation) pyrolytic decomposition of a brown coal sample with a Cu(NO3)2 additive was studied. The additive was introduced by capillary wetness impregnation at a mass concentration of 5%. Sample reactivity was studied by thermogravimetric analysis with staged gaseous medium supply (argon, then air) at a heating rate of 10 °C/min and intermediate isothermal soaking. Introduction of the initiating additive was found to significantly reduce the volatile release temperature and accelerate thermal decomposition of the sample. Mass-spectral analysis reveals that the significant difference in process characteristics is connected to the volatile matter release stage, which is initiated by nitrous oxide produced during copper nitrate decomposition.

  16. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June 1980. At that time, the two-stage FEL was only a concept proposed by Luis Elias. The range of parameters over which such a laser could be operated successfully, the attainable power output, and the constraints on laser operation were not known. The primary reason for supporting this research at that time was its potential for producing short-wavelength radiation using a relatively low-voltage electron beam. One advantage of a low-voltage two-stage FEL would be greatly reduced shielding requirements compared with single-stage short-wavelength FELs. If the electron energy were kept below about 10 MeV, X-rays generated by electrons striking the beam-line wall would not excite neutron resonances in atomic nuclei; these resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV a meter or more of concrete shielding is required for the system, whereas below 10 MeV a few millimeters of lead would be adequate.

  17. Coupling methods for multistage sampling

    OpenAIRE

    Chauvet, Guillaume

    2015-01-01

    Multistage sampling is commonly used for household surveys when there exists no sampling frame, or when the population is scattered over a wide area. Multistage sampling usually introduces a complex dependence in the selection of the final units, which makes asymptotic results quite difficult to prove. In this work, we consider multistage sampling with simple random without replacement sampling at the first stage, and with an arbitrary sampling design for further stages. We consider coupling ...

  18. Hierarchical Cluster Analysis of Three-Dimensional Reconstructions of Unbiased Sampled Microglia Shows not Continuous Morphological Changes from Stage 1 to 2 after Multiple Dengue Infections in Callithrix penicillata

    Science.gov (United States)

    Diniz, Daniel G.; Silva, Geane O.; Naves, Thaís B.; Fernandes, Taiany N.; Araújo, Sanderson C.; Diniz, José A. P.; de Farias, Luis H. S.; Sosthenes, Marcia C. K.; Diniz, Cristovam G.; Anthony, Daniel C.; da Costa Vasconcelos, Pedro F.; Picanço Diniz, Cristovam W.

    2016-01-01

    It is known that microglial morphology and function are related, but few studies have explored the subtleties of microglial morphological changes in response to specific pathogens. In the present report we quantitated microglial morphological changes in a monkey model of dengue disease with virus CNS invasion. To mimic the multiple infections that usually occur in endemic areas, where higher dengue infection incidence and abundant mosquito vectors carrying different serotypes coexist, subjects received weekly subcutaneous injections of DENV3 (genotype III)-infected culture supernatant, followed 24 h later by an injection of anti-DENV2 antibody. Control animals received either weekly anti-DENV2 antibodies, or no injections. Brain sections were immunolabeled for DENV3 antigens and IBA-1. Random and systematic microglial samples were taken from the polymorphic layer of the dentate gyrus for 3-D reconstructions, where we found intense immunostaining for TNFα and DENV3 virus antigens. We submitted all bi- or multimodal morphological parameters of microglia to hierarchical cluster analysis and found two major morphological phenotypes designated types I and II. Compared to type I (stage 1), type II microglia were more complex, displaying a higher number of nodes, processes, and trees, and larger surface areas and volumes (stage 2). Type II microglia were found only in infected monkeys, whereas type I microglia were found in both control and infected subjects. Hierarchical cluster analysis of morphological parameters of 3-D reconstructions of randomly and systematically selected samples in control and ADE dengue infected monkeys suggests that microglia morphological changes from stage 1 to stage 2 may not be continuous. PMID:27047345

  19. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (v₈ and v_W) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (v₂ and v₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with ...
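The precision advantage of systematic over random transect allocation in a patchy population can be reproduced with a toy simulation. The layout below (100 transects, one contiguous habitat patch, 10-transect samples) is invented for illustration and is far simpler than the six populations of the study.

```python
import random
from statistics import fmean, pvariance

# Toy population: 100 transect counts, organisms confined to one patch.
counts = [10.0] * 50 + [0.0] * 50   # patch occupies transects 0..49
true_mean = fmean(counts)

rng = random.Random(1)
rand_means, sys_means = [], []
for _ in range(2000):
    # Random design: 10 transects drawn by SRS without replacement.
    rand_means.append(fmean(rng.sample(counts, 10)))
    # One-start aligned systematic design: random start, every 10th transect.
    start = rng.randrange(10)
    sys_means.append(fmean(counts[start::10]))

var_rand = pvariance(rand_means)  # empirical variance of the survey mean
var_sys = pvariance(sys_means)
```

Here every systematic sample crosses the patch the same number of times, so its survey mean barely varies, while random allocation can land too many or too few transects inside the patch, reproducing the several-fold variance gap the record reports.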

  20. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when an urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are total c...
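The tandem M/M/1→M/D/1 structure named in the abstract can be sketched with a simple recursive simulation: Markovian (exponential) inter-arrival and first-stage service times, then a deterministic second-stage service time. The rates below are invented for illustration and are not the paper's calibration.

```python
import random

def simulate_tandem(lam, mu1, d2, n, seed=0):
    """Tandem queue M/M/1 -> M/D/1: Poisson arrivals (rate lam),
    exponential service at stage 1 (rate mu1), deterministic service
    time d2 at stage 2. Returns the mean total sojourn time."""
    rng = random.Random(seed)
    t = 0.0              # arrival epoch of the current customer
    dep1 = dep2 = 0.0    # departure epochs of the previous customer
    total = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)                   # next arrival
        dep1 = max(t, dep1) + rng.expovariate(mu1)  # clears stage 1
        dep2 = max(dep1, dep2) + d2                 # clears stage 2
        total += dep2 - t                           # this customer's sojourn
    return total / n

mean_sojourn = simulate_tandem(lam=0.5, mu1=1.0, d2=0.8, n=50_000)
```

With these rates the M/M/1 stage alone contributes a mean sojourn of 1/(mu1 - lam) = 2.0, so the simulated total should settle a little above 2.8 once the deterministic stage and its queueing delay are added.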

  1. One-stage vs two-stage cartilage repair: a current review

    Directory of Open Access Journals (Sweden)

    Daniel Meyerkort

    2010-10-01

    Full Text Available Daniel Meyerkort, David Wood, Ming-Hao Zheng (Center for Orthopaedic Research, School of Surgery and Pathology, University of Western Australia, Perth, Australia). Introduction: Articular cartilage has a poor capacity for regeneration if damaged. Various methods have been used to restore the articular surface, improve pain and function, and slow progression to osteoarthritis. Method: A PubMed review was performed on 18 March, 2010. Search terms included "autologous chondrocyte implantation (ACI)" and "microfracture" or "mosaicplasty". The aim of this review was to determine if 1-stage or 2-stage procedures for cartilage repair produced different functional outcomes. Results: The main procedures currently used are ACI and microfracture. Both first-generation ACI and microfracture result in clinical and functional improvement with no significant differences. A significant increase in functional outcome has been observed in second-generation procedures such as Hyalograft C, matrix-induced ACI, and ChondroCelect compared with microfracture. ACI results in a higher percentage of patients with clinical improvement than mosaicplasty; however, these results may take longer to achieve. Conclusion: Clinical and functional improvements have been demonstrated with ACI, microfracture, mosaicplasty, and synthetic cartilage constructs. Heterogeneous products and lack of good-quality randomized controlled trials make product comparison difficult. Future developments involve scaffolds, gene therapy, growth factors, and stem cells to create a single-stage procedure that results in hyaline articular cartilage. Keywords: autologous chondrocyte implantation, microfracture, cartilage repair

  2. A stage is a stage is a stage: a direct comparison of two scoring systems.

    Science.gov (United States)

    Dawson, Theo L

    2003-09-01

    L. Kohlberg (1969) argued that his moral stages captured a developmental sequence specific to the moral domain. To explore that contention, the author compared stage assignments obtained with the Standard Issue Scoring System (A. Colby & L. Kohlberg, 1987a, 1987b) and those obtained with a generalized content-independent stage-scoring system called the Hierarchical Complexity Scoring System (T. L. Dawson, 2002a), on 637 moral judgment interviews (participants' ages ranged from 5 to 86 years). The correlation between stage scores produced with the 2 systems was .88. Although standard issue scoring and hierarchical complexity scoring often awarded different scores up to Kohlberg's Moral Stage 2/3, from his Moral Stage 3 onward, scores awarded with the two systems predominantly agreed. The author explores the implications for developmental research.

  3. Random vs. systematic sampling from administrative databases involving human subjects.

    Science.gov (United States)

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes of n (50, 100, 150, 200, 250, 300, 500, 800). From the profile summaries of the four known factors (gender, average age, number (%) of chiropractors in each province, and years in practice), between- and within-method chi-square tests and unpaired t-tests were performed to determine whether any of the differences (descriptively greater than 7% or 7 yr) were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreement for each (provincial pairwise-comparison methods). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
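The two techniques compared in this record are easy to state side by side in code. The membership frame below is synthetic (a cyclic age pattern standing in for an alphabetically ordered list with no order bias), and the 7-unit similarity criterion is applied only to age, as one illustration.

```python
import random

def simple_random_sample(frame, n, seed=0):
    """SRS: n records drawn without replacement from the whole frame."""
    return random.Random(seed).sample(frame, n)

def systematic_sample(frame, n, seed=0):
    """SS: a random start in the first interval, then every k-th record."""
    k = len(frame) // n                       # sampling interval
    start = random.Random(seed).randrange(k)  # random start
    return frame[start::k][:n]

# Synthetic membership list with no inherent order bias in age.
frame = [{"id": i, "age": 30 + (i * 7) % 40} for i in range(2000)]
srs = simple_random_sample(frame, 200, seed=3)
ss = systematic_sample(frame, 200, seed=3)
```

On an unordered frame like this, both sample means of age fall well within the record's 7-yr tolerance of the frame mean, which is the sense in which SS is "acceptably similar" to SRS.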

  4. Effect of a two-stage nursing assessment and intervention - a randomized intervention study

    DEFF Research Database (Denmark)

    Rosted, Elizabeth Emilie; Poulsen, Ingrid; Hendriksen, Carsten

    % of geriatric patients have complex and often unresolved caring needs. The objective was to examine the effect of a two-stage nursing assessment and intervention to address the patients' uncompensated problems, given just after discharge from the ED and one and six months after. Method: We conducted a prospective ... nursing assessment comprising a checklist of 10 physical, mental, medical and social items. The focus was on unresolved problems which require medical intervention, new or different home care services, or comprehensive geriatric assessment. Following this, the nurses made relevant referrals ... to the geriatric outpatient clinic, community health centre, primary physician or arrangements with next-of-kin. Findings: Primary endpoints will be presented as unplanned readmission to the ED, admission to a nursing home, and death. Secondary endpoints will be presented as physical function and depressive symptoms ...

  5. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report

  6. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify success factors for growing SMBs, using an evolutionary-phase approach. It also aims to find out whether there are common and different denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data was collected and analyzed. Among the conclusions that may be drawn from the study are that it is not easy to find a common definition of success; that it is important to stratify SMBs when studying them; that an evolutionary-stage approach helps to compare firms with roughly the same external and internal dynamics; and that each industry has its own set of success variables. The study identified three success variables for older firms that reflect contemporary strategic thinking: crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  7. Comparison of One-Stage Free Gracilis Muscle Flap With Two-Stage Method in Chronic Facial Palsy

    Directory of Open Access Journals (Sweden)

    J Ghaffari

    2007-08-01

    Full Text Available Background: Rehabilitation of facial paralysis is one of the greatest challenges faced by reconstructive surgeons today. The traditional treatment for patients with facial palsy is the two-stage free gracilis flap, which entails a long latency period between the two stages of surgery. Methods: In this paper, we prospectively compared the results of the one-stage gracilis flap method with the two-stage technique. Results: Out of 41 patients with facial palsy referred to Hazrat-e-Fatemeh Hospital, 31 were selected, of whom 22 underwent the two-stage and 9 the one-stage method. The two groups were identical according to age, sex, intensity of illness, duration, and chronicity of illness. Mean duration of follow-up was 37 months. There was no significant difference between the two groups regarding the symmetry of the face in repose, smiling, whistling, and nasolabial folds. The frequency of complications was equal in both groups. Postoperative surgeon and patient satisfaction were equal in both groups. There was no significant difference between the mean excursion of the muscle flap in the one-stage (9.8 mm) and two-stage (8.9 mm) groups. The ratio of contraction of the affected side compared to the normal side was similar in both groups. The mean time to initial contraction of the muscle flap in the one-stage group (5.5 months) differed significantly (P=0.001) from that in the two-stage group (6.5 months). The study revealed a highly significant difference (P=0.0001) between the mean waiting period from the first operation to the beginning of muscle contraction in the one-stage (5.5 months) and two-stage (17.1 months) groups. Conclusion: It seems that the results and complications of the two methods are the same, but the one-stage method requires less time for facial reanimation and is cost-effective because it saves time and decreases hospitalization costs.

  8. Statistical Methods and Tools for Hanford Staged Feed Tank Sampling

    Energy Technology Data Exchange (ETDEWEB)

    Fountain, Matthew S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brigantic, Robert T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-10-01

    This report summarizes work conducted by Pacific Northwest National Laboratory to technically evaluate the current approach to staged feed sampling of high-level waste (HLW) sludge to meet waste acceptance criteria (WAC) for transfer from tank farms to the Hanford Waste Treatment and Immobilization Plant (WTP). The current sampling and analysis approach is detailed in the document titled Initial Data Quality Objectives for WTP Feed Acceptance Criteria, 24590-WTP-RPT-MGT-11-014, Revision 0 (Arakali et al. 2011). The goal of this current work is to evaluate and provide recommendations to support a defensible, technical and statistical basis for the staged feed sampling approach that meets WAC data quality objectives (DQOs).

  9. A New Two-Stage Approach to Short Term Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Dragan Tasić

    2013-04-01

    Full Text Available In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random non-stationary process influenced by a number of factors, which makes it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed. These models are based on different mathematical methods and offer different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, in the first stage the next-day average load demand is forecast, and this forecast is then used in the second-stage model for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market. The obtained results confirm the validity and advantage of the proposed approach.
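The two-stage idea, forecast tomorrow's average load first and feed it into the hourly model, can be sketched with a deliberately simple stand-in for the least-squares SVM: a least-squares trend for stage 1 and a normalized daily profile for stage 2. All numbers are invented for illustration.

```python
import math

def two_stage_forecast(history):
    """history: past days, each a list of 24 hourly loads (MW).
    Stage 1: forecast the next-day average load by a least-squares trend
    through the past daily averages (standing in for the LS-SVM).
    Stage 2: scale the mean normalized daily shape by that forecast."""
    m = len(history)
    daily_avg = [sum(day) / 24 for day in history]
    xm, ym = (m - 1) / 2, sum(daily_avg) / m
    sxx = sum((i - xm) ** 2 for i in range(m))
    slope = sum((i - xm) * (a - ym) for i, a in enumerate(daily_avg)) / sxx
    next_avg = ym + slope * (m - xm)          # extrapolate one day ahead
    grand = sum(sum(day) for day in history)
    shape = [24 * sum(day[h] for day in history) / grand  # mean 1.0 profile
             for h in range(24)]
    return next_avg, [next_avg * s for s in shape]

# Illustrative history: 7 days, rising average, sinusoidal daily profile.
history = [[100.0 + 5.0 * d + 30.0 * math.sin(2 * math.pi * h / 24)
            for h in range(24)] for d in range(7)]
next_avg, hourly = two_stage_forecast(history)
```

By construction the hourly forecast averages exactly to the stage-1 forecast, which is the coupling between the two stages that the paper exploits.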

  10. Extending cluster Lot Quality Assurance Sampling designs for surveillance programs

    OpenAIRE

    Hund, Lauren; Pagano, Marcello

    2014-01-01

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than ...
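The binary accept/reject classification behind LQAS can be sketched for the classic non-clustered case; the two-stage cluster extensions the abstract discusses adjust the variance behind these binomial risks. The sample size and coverage thresholds below are a conventional textbook example, not values from the paper.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p); returns 0.0 for k < 0."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k + 1))

def lqas_decision_rule(n, p_poor, p_acceptable, alpha=0.10, beta=0.10):
    """Smallest threshold d such that classifying an area 'acceptable'
    when at least d of n sampled indicators are positive bounds both
    risks: P(accept | true coverage p_poor) <= alpha and
    P(reject | true coverage p_acceptable) <= beta."""
    for d in range(n + 1):
        risk_accept_poor = 1 - binom_cdf(d - 1, n, p_poor)
        risk_reject_good = binom_cdf(d - 1, n, p_acceptable)
        if risk_accept_poor <= alpha and risk_reject_good <= beta:
            return d, risk_accept_poor, risk_reject_good
    return None

# Classify against 50% (poor) vs 80% (acceptable) coverage with n = 19.
rule = lqas_decision_rule(n=19, p_poor=0.50, p_acceptable=0.80)
```

Clustered sampling inflates the variance relative to this binomial model, which is why the extensions in the record are needed before such rules can be applied to cluster surveys.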

  11. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  12. An inexact two-stage stochastic robust programming for residential micro-grid management-based on random demand

    International Nuclear Information System (INIS)

    Ji, L.; Niu, D.X.; Huang, G.H.

    2014-01-01

    In this paper a stochastic robust optimization problem for residential micro-grid energy management is presented. Combined cooling, heating and power (CCHP) technology is introduced to satisfy various energy demands. Two-stage programming is utilized to find the optimal installed capacity investment and operation control of the CCHP plant. Moreover, interval programming and robust stochastic optimization methods are exploited to obtain interval robust solutions under different robustness levels which remain feasible for uncertain data. The obtained results can help micro-grid managers minimize the investment and operation cost with lower system failure risk when facing a fluctuating energy market and uncertain technology parameters. The different robustness levels reflect the risk preference of the micro-grid manager. The proposed approach is applied to residential area energy management in North China. Detailed computational results under different robustness levels are presented and analyzed to support investment decisions and operation strategies. - Highlights: • An inexact two-stage stochastic robust programming model for CCHP management. • The energy market and technical parameter uncertainties were considered. • Investment decision, operation cost, and system safety were analyzed. • Uncertainties expressed as discrete intervals and probability distributions
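The two-stage structure (capacity investment now, operation recourse after demand is revealed) can be shown on a tiny scenario model. This is a generic sketch solved by brute force, not the paper's interval robust formulation; the investment cost, spot price, and demand scenarios are invented.

```python
def two_stage_cost(x, scenarios, invest=10.0, spot=25.0):
    """First stage: install capacity x at unit cost `invest`.
    Second stage (recourse): once demand d is revealed, buy any
    shortfall (d - x)+ at the higher unit cost `spot`."""
    expected_recourse = sum(p * spot * max(d - x, 0.0)
                            for d, p in scenarios)
    return invest * x + expected_recourse

scenarios = [(80, 0.3), (100, 0.5), (120, 0.2)]  # (demand, probability)
# Brute-force the here-and-now decision over integer capacities.
best = min(range(0, 141), key=lambda x: two_stage_cost(x, scenarios))
```

The optimum balances cheap capacity against expensive recourse: with a critical ratio of (25 - 10)/25 = 0.6, the smallest capacity whose cumulative demand probability reaches 0.6 is 100.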

  13. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

    Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive ...

  14. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    International Nuclear Information System (INIS)

    Ariunbaatar, Javkhlan; Scotto Di Perta, Ester; Panico, Antonio; Frunzo, Luigi; Esposito, Giovanni; Lens, Piet N.L.; Pirozzi, Francesco

    2015-01-01

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation than the one-stage system due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. A small amount of biohydrogen was also detected from the first stage of the two-stage reactor, making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing eventual failure by volatile fatty acid (VFA) accumulation. However, re-circulation also resulted in ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results, the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used in this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid

  15. Staging of gastric adenocarcinoma using two-phase spiral CT: correlation with pathologic staging

    International Nuclear Information System (INIS)

    Seo, Tae Seok; Lee, Dong Ho; Ko, Young Tae; Lim, Joo Won

    1998-01-01

    To correlate the preoperative staging of gastric adenocarcinoma using two-phase spiral CT with pathologic staging. One hundred and eighty patients with gastric cancers confirmed during surgery underwent two-phase spiral CT and were evaluated retrospectively. CT scans were obtained in the prone position after ingestion of water. Scans were performed 35 and 80 seconds after the start of infusion of 120 mL of non-ionic contrast material at a rate of 3 mL/sec. Five-mm collimation, 7 mm/sec table feed, and a 5-mm reconstruction interval were used. T- and N-stage were determined using spiral CT images, without knowledge of the pathologic results. Pathologic staging was later compared with CT staging. Pathologic T-stage was T1 in 70 cases (38.9%), T2 in 33 (18.3%), T3 in 73 (40.6%), and T4 in 4 (2.2%). Type I or IIa elevated lesions accounted for 10 of 70 T1 cases (14.3%) and flat or depressed lesions (type IIb, IIc, or III) for 60 (85.7%). Pathologic N-stage was N0 in 85 cases (47.2%), N1 in 42 (23.3%), N2 in 31 (17.2%), and N3 in 22 (12.2%). The detection rate of early gastric cancer using two-phase spiral CT was 100.0% (10 of 10 cases) among elevated lesions and 78.3% (47 of 60 cases) among flat or depressed lesions. With regard to T-stage, there was good correlation between CT image and pathology in 86 of 180 cases (47.8%). Overstaging occurred in 23.3% (42 of 180 cases) and understaging in 28.9% (52 of 180 cases). With regard to N-stage, good correlation between CT image and pathology was noted in 94 of 180 cases (52.2%). The rate of understaging (31.7%, 57 of 180 cases) was higher than that of overstaging (16.1%, 29 of 180 cases) (p<0.001). The detection rate of early gastric cancer using two-phase spiral CT was 81.4%, and there was no significant difference in detectability between elevated and depressed lesions. Two-phase spiral CT for determining the T- and N-stage of gastric cancer was not effective; it was accurate in about 50% of cases, and understaging tended to occur.

  16. Mediastinal lymph node dissection versus mediastinal lymph node sampling for early stage non-small cell lung cancer: a systematic review and meta-analysis.

    Science.gov (United States)

    Huang, Xiongfeng; Wang, Jianmin; Chen, Qiao; Jiang, Jielin

    2014-01-01

    This systematic review and meta-analysis aimed to evaluate the overall survival, local recurrence, distant metastasis, and complications of mediastinal lymph node dissection (MLND) versus mediastinal lymph node sampling (MLNS) in stage I-IIIA non-small cell lung cancer (NSCLC) patients. A systematic search of published literature was conducted using the main databases (MEDLINE, PubMed, EMBASE, and Cochrane databases) to identify relevant randomized controlled trials that compared MLND vs. MLNS in NSCLC patients. Methodological quality of included randomized controlled trials was assessed according to the criteria from the Cochrane Handbook for Systematic Review of Interventions (Version 5.1.0). Meta-analysis was performed using The Cochrane Collaboration's Review Manager 5.3. The results of the meta-analysis were expressed as hazard ratio (HR) or risk ratio (RR), with their corresponding 95% confidence interval (CI). We included results reported from six randomized controlled trials, with a total of 1,791 patients included in the primary meta-analysis. Compared to MLNS in NSCLC patients, there was no statistically significant difference in MLND on overall survival (HR = 0.77, 95% CI 0.55 to 1.08; P = 0.13). In addition, the results indicated that local recurrence rate (RR = 0.93, 95% CI 0.68 to 1.28; P = 0.67), distant metastasis rate (RR = 0.88, 95% CI 0.74 to 1.04; P = 0.15), and total complications rate (RR = 1.10, 95% CI 0.67 to 1.79; P = 0.72) were similar, no significant difference found between the two groups. Results for overall survival, local recurrence rate, and distant metastasis rate were similar between MLND and MLNS in early stage NSCLC patients. There was no evidence that MLND increased complications compared with MLNS. Whether or not MLND is superior to MLNS for stage II-IIIA remains to be determined.
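The pooled ratios in this record come from inverse-variance weighting of study-level effect sizes on the log scale. A generic fixed-effect sketch of that computation (not the Review Manager implementation, which the study used with possible random-effects models) looks like this; the single study fed in below reuses the record's local-recurrence RR purely as a round-trip check.

```python
import math

def pooled_rr_fixed(studies, z=1.959964):
    """Fixed-effect inverse-variance pooling of risk ratios.
    studies: list of (rr, ci_lower, ci_upper) with 95% CIs.
    The SE of each log-RR is recovered from the CI width."""
    logs = [(math.log(rr), (math.log(hi) - math.log(lo)) / (2 * z))
            for rr, lo, hi in studies]
    weights = [1.0 / se ** 2 for _, se in logs]
    mean = sum(w * l for (l, _), w in zip(logs, weights)) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))
    return (math.exp(mean), math.exp(mean - z * se), math.exp(mean + z * se))

# Pooling a single study must return (approximately) that study's
# RR and CI; small deviations reflect rounding in the reported CI.
rr, lo, hi = pooled_rr_fixed([(0.93, 0.68, 1.28)])
```

With several studies, each study's weight is the reciprocal of its squared log-scale standard error, so larger trials dominate the pooled estimate.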

  17. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.
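Whichever modeling approach is used, the characteristic spectral lines of a two-stage gearbox sit at the mesh frequencies of the two stages. A toy spectrum can be computed with a plain DFT; the two mesh frequencies and amplitudes below are hypothetical, and a naive O(N²) transform stands in for an FFT library.

```python
import cmath
import math

def dft_magnitude(signal):
    """One-sided magnitude spectrum of a real signal via a naive DFT."""
    n = len(signal)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(signal))) / n
            for k in range(n // 2)]

fs = 256                 # sampling rate (Hz): 1 Hz bins over a 1 s record
mesh1, mesh2 = 20, 57    # hypothetical mesh frequencies of the two stages
signal = [math.sin(2 * math.pi * mesh1 * i / fs)
          + 0.5 * math.sin(2 * math.pi * mesh2 * i / fs)
          for i in range(fs)]
spectrum = dft_magnitude(signal)
peaks = sorted(range(len(spectrum)), key=spectrum.__getitem__,
               reverse=True)[:2]  # bins of the two largest lines
```

Real planetary-gear spectra additionally show sidebands around each mesh line from carrier rotation, which is the structure the two modeling approaches in this record set out to describe.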

  18. Study of shallow junction formation by boron-containing cluster ion implantation of silicon and two-stage annealing

    Science.gov (United States)

    Lu, Xin-Ming

    Shallow junction formation by low-energy ion implantation and rapid thermal annealing faces a major challenge for ULSI (ultra-large-scale integration) as line widths decrease into the submicrometer region. The issues include low beam current and the channeling effect in low-energy ion implantation, and TED (transient enhanced diffusion) during post-implantation annealing. In this work, boron-containing small cluster ions, such as GeB, SiB and SiB2, were generated using a SNICS (source of negative ions by cesium sputtering) ion source and implanted into Si substrates to form shallow junctions. The use of boron-containing cluster ions effectively reduces the boron energy while keeping the energy of the cluster ion beam at a high level. At the same time, it reduces the channeling effect through amorphization by the co-implanted heavy atoms such as Ge and Si. Cluster ions have been used to produce 0.65-2 keV boron for low-energy ion implantation. Two-stage annealing, a combination of low-temperature (550 °C) preannealing and high-temperature (1000 °C) annealing, was carried out on the Si samples implanted with GeB and SiBn clusters. The key concept of two-stage annealing, that is, the separation of crystal regrowth and point-defect removal with dopant activation from dopant diffusion, is discussed in detail. The advantages of two-stage annealing include better lattice structure, better dopant activation, and retarded boron diffusion. The junction depth of the two-stage annealed GeB sample was only half that of the one-step annealed sample, indicating that TED was suppressed by two-stage annealing. Junction depths as small as 30 nm have been achieved by two-stage annealing at 1000 °C for 1 second of a sample implanted with 5 x 10-4/cm2 of 5 keV GeB. The samples were evaluated by SIMS (secondary ion mass spectrometry) profiling, TEM (transmission electron microscopy), and RBS (Rutherford backscattering spectrometry)/channeling. Cluster ion implantation

  19. Distribution of age at menopause in two Danish samples

    DEFF Research Database (Denmark)

    Boldsen, J L; Jeune, B

    1990-01-01

    We analyzed the distribution of reported age at natural menopause in two random samples of Danish women (n = 176 and n = 150) to determine the shape of the distribution and to disclose any possible trends in the distribution parameters. It was necessary to correct the frequencies of the reported ages for the effect of differing ages at reporting. The corrected distribution of age at menopause differs from the normal distribution in the same way in both samples. Both distributions could be described by a mixture of two normal distributions. It appears that most of the parameters of the normal distribution mixtures remain unchanged over a 50-year time lag. The position of the distribution, that is, the mean age at menopause, however, increases slightly but significantly.
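
    A mixture-of-two-normals fit of the kind described can be sketched with a small EM routine. The sketch below is illustrative only: the data are synthetic, the quartile-based initialization is an assumption, and the paper's correction for differing ages at reporting is not reproduced.

```python
import math
import random
import statistics

def norm_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_two_normal_mixture(xs, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns (weights, means, variances). Means are initialized at the lower
    and upper quartiles so the components start on opposite tails.
    """
    s = sorted(xs)
    n = len(s)
    mu = [s[n // 4], s[3 * n // 4]]
    var = [statistics.pvariance(xs)] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each observation
        r0 = []
        for x in xs:
            p0 = w[0] * norm_pdf(x, mu[0], var[0])
            p1 = w[1] * norm_pdf(x, mu[1], var[1])
            r0.append(p0 / (p0 + p1))
        # M-step: re-estimate mixing weights, means and variances
        n0 = sum(r0)
        n1 = n - n0
        w = [n0 / n, n1 / n]
        mu = [sum(r * x for r, x in zip(r0, xs)) / n0,
              sum((1 - r) * x for r, x in zip(r0, xs)) / n1]
        var = [max(sum(r * (x - mu[0]) ** 2 for r, x in zip(r0, xs)) / n0, 1e-6),
               max(sum((1 - r) * (x - mu[1]) ** 2 for r, x in zip(r0, xs)) / n1, 1e-6)]
    return w, mu, var
```

    On well-separated synthetic "age" data the fitted means land near the true component means; with real menopause data the components overlap more and the fit is sensitive to initialization.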

  20. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

    Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers on nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and methods perform uniformly. We discuss consequences of inappropriate distribution assumptions and reason for different behaviors between the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
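
    The inverse-probability idea behind these corrections can be sketched as a weighted resample of the biased sample, with weights proportional to 1/(inclusion probability). This is a simplified sketch only: the paper's stochastic oversampling and parametric bagging additionally perturb the resampled points to restore the covariance structure, which is omitted here, and the stratum labels and probabilities below are invented for illustration.

```python
import random

def ip_oversample(records, incl_prob, n_out, seed=0):
    """Resample a stratified (biased) sample with weights 1/pi so the
    resample approximates the stratum proportions of the source population.

    records:   list of (stratum, payload) pairs
    incl_prob: stratum -> probability that a population member was sampled
    """
    rng = random.Random(seed)
    weights = [1.0 / incl_prob[stratum] for stratum, _ in records]
    return rng.choices(records, weights=weights, k=n_out)
```

    For example, if all 50 cases in a population of 1,000 are sampled (inclusion probability 1.0) but only 50 of 950 controls are, the weighted resample recovers roughly the population's 5% case rate instead of the sample's 50%.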

  1. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) for various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic sampling...
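
    The plain multiple-random-starts scheme can be sketched as follows. This is a sketch of the linear version only, under the assumption that the population size is a multiple of the sampling interval k; the balanced and modified variants of Sethi and Singh et al. additionally reorder units within each interval.

```python
import random

def systematic_sample_multi_start(population, k, m, seed=0):
    """Linear systematic sampling with m independent random starts.

    Each start r, drawn without replacement from 0..k-1, contributes the
    elements population[r::k]. Assumes len(population) is a multiple of k.
    """
    rng = random.Random(seed)
    starts = rng.sample(range(k), m)
    chosen = []
    for r in starts:
        chosen.extend(population[r::k])
    return chosen
```

    With m > 1 starts the sample supports an unbiased variance estimate from the m independent systematic subsamples, which a single-start systematic sample does not.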

  2. Two phase sampling

    CERN Document Server

    Ahmad, Zahoor; Hanif, Muhammad

    2013-01-01

    The development of estimators of population parameters based on two-phase sampling schemes has seen a dramatic increase in the past decade. Various authors have developed estimators of population parameters using either one or two auxiliary variables. The present volume is a comprehensive collection of estimators available in single and two phase sampling. The book covers estimators which utilize information on single, two and multiple auxiliary variables of both quantitative and qualitative nature. Th...

  3. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    Science.gov (United States)

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology, to reduce the number of patients placed on ineffective experimental therapies. Recently Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies where the actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method is difficult to apply, the estimate based on our method appears to have a certain advantage in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
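
    The exact distribution underlying any such inference follows from two binomial stages: stop after stage 1 unless responses exceed r1, then reject the null only if total responses exceed r. The sketch below computes the rejection probability for an assumed design (n1, r1, n, r); the parameter values used in the example are illustrative, not taken from Simon's published tables, and this is not the paper's likelihood-ratio ordering itself.

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def simon_reject_prob(p, n1, r1, n, r):
    """Probability of declaring the treatment promising under true response
    rate p for a Simon two-stage design: continue to stage 2 only if stage-1
    responses exceed r1, and reject H0 only if total responses exceed r.
    """
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):           # trials that continue to stage 2
        for x2 in range(0, n - n1 + 1):        # stage-2 responses
            if x1 + x2 > r:
                total += binom_pmf(x1, n1, p) * binom_pmf(x2, n - n1, p)
    return total
```

    Evaluating this function at the null rate p0 gives the design's exact type I error, and at the alternative p1 its power; the same double sum is the starting point for ordering sample paths when computing exact p-values.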

  4. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life-history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem depends, of course, on the magnitude of the bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life-history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), affecting our inferences about population structure and abundance. I conclude with a discussion of ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  5. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and the effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in risky security increases as the risk-free return decreases and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  6. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to make a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide bandwidth bilateral control system can be constructed based on a mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system has been developed and basic performance was evaluated by experiment. The experimental results showed that a light mechanical impedance with a mass of 10 g and a damping coefficient of 2.5 N/(m/s) that is an important factor to establish good transparency in bilateral control has been successfully achieved and also showed that a better force and position responses between a master and slave is achieved by using the proposed two-stage actuator system compared with a narrow bandwidth case using a single ball screw system. (author)

  7. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first stage precipitator, precipitation temperatures, and residence time in the first stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide

  8. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    Science.gov (United States)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency.

  9. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    Energy Technology Data Exchange (ETDEWEB)

    Ariunbaatar, Javkhlan, E-mail: jaka@unicas.it [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Scotto Di Perta, Ester [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy); Panico, Antonio [Telematic University PEGASO, Piazza Trieste e Trento, 48, 80132 Naples (Italy); Frunzo, Luigi [Department of Mathematics and Applications Renato Caccioppoli, University of Naples Federico II, Via Claudio, 21, 80125 Naples (Italy); Esposito, Giovanni [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); Lens, Piet N.L. [UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Pirozzi, Francesco [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy)

    2015-04-15

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC{sub 50} of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.

  10. Two-stage dental implants inserted in a one-stage procedure : a prospective comparative clinical study

    NARCIS (Netherlands)

    Heijdenrijk, Kees

    2002-01-01

    The results of this study indicate that dental implants designed for a submerged implantation procedure can be used in a single-stage procedure and may be as predictable as one-stage implants. Although one-stage implant systems and two-stage.

  11. Note on an Identity Between Two Unbiased Variance Estimators for the Grand Mean in a Simple Random Effects Model.

    Science.gov (United States)

    Levin, Bruce; Leu, Cheng-Shiun

    2013-01-01

    We demonstrate the algebraic equivalence of two unbiased variance estimators for the sample grand mean in a random sample of subjects from an infinite population where subjects provide repeated observations following a homoscedastic random effects model.

  12. Maxillofacial growth and speech outcome after one-stage or two-stage palatoplasty in unilateral cleft lip and palate. A systematic review.

    Science.gov (United States)

    Reddy, Rajgopal R; Gosla Reddy, Srinivas; Vaidhyanathan, Anitha; Bergé, Stefaan J; Kuijpers-Jagtman, Anne Marie

    2017-06-01

    The number of surgical procedures used to repair a cleft palate may play a role in the outcome for maxillofacial growth and speech. The aim of this systematic review was to investigate the relationship between the number of surgical procedures performed to repair the cleft palate and maxillofacial growth, speech and fistula formation in non-syndromic patients with unilateral cleft lip and palate. An electronic search was performed in PubMed/old MEDLINE, the Cochrane Library, EMBASE, Scopus and CINAHL databases for publications between 1960 and December 2015. Publications before 1950, in journals of plastic and maxillofacial surgery, were hand searched. Additional hand searches were performed on studies mentioned in the reference lists of relevant articles. Search terms included unilateral, cleft lip and/or palate, and palatoplasty. Two reviewers assessed eligibility for inclusion, extracted data, applied quality indicators and graded the level of evidence. Twenty-six studies met the inclusion criteria. All were retrospective, non-randomized comparisons of one- and two-stage palatoplasty. The methodological quality of most of the studies was graded moderate to low. The outcomes concerned the comparison of one- and two-stage palatoplasty with respect to growth of the mandible, maxilla and cranial base, and to speech and fistula formation. Due to the lack of high-quality studies, there is no conclusive evidence of a relationship between one- or two-stage palatoplasty and facial growth, speech or fistula formation in patients with unilateral cleft lip and palate. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  13. Comparative assessment of single-stage and two-stage anaerobic digestion for the treatment of thin stillage.

    Science.gov (United States)

    Nasr, Noha; Elbeshbishy, Elsayed; Hafez, Hisham; Nakhla, George; El Naggar, M Hesham

    2012-05-01

    A comparative evaluation of single-stage and two-stage anaerobic digestion processes for biomethane and biohydrogen production using thin stillage was performed to assess the impact of separating the acidogenic and methanogenic stages on anaerobic digestion. Thin stillage, the main by-product from ethanol production, was characterized by high total chemical oxygen demand (TCOD) of 122 g/L and total volatile fatty acids (TVFAs) of 12 g/L. A maximum methane yield of 0.33 L CH(4)/gCOD(added) (STP) was achieved in the two-stage process while a single-stage process achieved a maximum yield of only 0.26 L CH(4)/gCOD(added) (STP). The separation of acidification stage increased the TVFAs to TCOD ratio from 10% in the raw thin stillage to 54% due to the conversion of carbohydrates into hydrogen and VFAs. Comparison of the two processes based on energy outcome revealed that an increase of 18.5% in the total energy yield was achieved using two-stage anaerobic digestion. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds and the level of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer.

  15. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

    Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10^9 cm^-2 are realized. Remarkable insulating states are found below a critical density of p_c = 4 × 10^9 cm^-2, or r_s ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition supporting the melting of a Wigner solid as a two-stage first-order transition.

  16. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

    A Spanish subbituminous coal has been processed by two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction was carried out in a continuous pilot plant at Clausthal Technical University in Germany at 400°C and 20 MPa hydrogen pressure, with anthracene oil as solvent. The second-stage coal liquefaction was performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.), comprising 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. The heteroatom removal for the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids have about 70% of compounds with boiling point below 350°C, and meet the sulphur and nitrogen specifications for refinery feedstocks. Liquids from two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  17. Optimal production lot size and reorder point of a two-stage supply chain while random demand is sensitive with sales teams' initiatives

    Science.gov (United States)

    Sankar Sana, Shib

    2016-01-01

    The paper develops a production-inventory model of a two-stage supply chain consisting of one manufacturer and one retailer to study the production lot size/order quantity, the reorder point and the sales teams' initiatives, where demand of the end customers depends simultaneously on a random variable and on the sales teams' initiatives. The manufacturer produces the retailer's order quantity in one lot, in which the procurement cost per unit quantity follows a realistic convex function of the production lot size. In the chain, the cost of the sales teams' initiatives/promotion efforts and the wholesale price of the manufacturer are negotiated at points such that their optimal profits come close to their target profits. This study helps the management of firms determine the optimal order quantity/production quantity, reorder point and sales teams' initiatives/promotional effort in order to achieve their maximum profits. An analytical method is applied to determine the optimal values of the decision variables. Finally, numerical examples with graphical presentation and a sensitivity analysis of the key parameters are presented to illustrate further insights of the model.

  18. Two-Stage Multi-Objective Collaborative Scheduling for Wind Farm and Battery Switch Station

    Directory of Open Access Journals (Sweden)

    Zhe Jiang

    2016-10-01

    Full Text Available In order to deal with the uncertainties of wind power, a wind farm and an electric vehicle (EV) battery switch station (BSS) were proposed to work together as an integrated system. In this paper, the collaborative scheduling problems of such a system are studied. Considering the features of the integrated system, three indices are proposed: battery swapping demand curtailment of the BSS, wind curtailment of the wind farm, and generation schedule tracking of the integrated system. In addition, a two-stage multi-objective collaborative scheduling model was designed. In the first stage, a day-ahead model was built based on the theory of dependent chance programming. With the aim of maximizing the realization probabilities of these three operating indices, random fluctuations of wind power and battery switch demand were taken into account simultaneously. In order to explore the capability of the BSS as reserve, the readjustment process of the BSS within each hour was considered in this stage. In addition, the stored energy rather than the charging/discharging power of the BSS during each period was optimized, which provides a basis for further hour-ahead correction of the BSS. In the second stage, an hour-ahead model was established. In order to cope with the randomness of wind power and battery swapping demand, the proposed hour-ahead model utilizes ultra-short-term predictions of the wind power and the battery switch demand to schedule the charging/discharging power of the BSS in a rolling manner. Finally, the effectiveness of the proposed models was validated by case studies. The simulation results indicated that the proposed model could realize complementarity between the wind farm and the BSS, reduce the dependence on the power grid, and facilitate the accommodation of wind power.

  19. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics, from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in the effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
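
    Three of the probability designs named above (simple random, systematic, and stratified sampling) can be contrasted in a few lines of code. This is a minimal sketch over a toy integer population; real designs would sample sampling units such as patients or clinics, and the stratum function is an assumption for illustration.

```python
import random

def simple_random_sample(pop, n, rng):
    """Each element has an equal, independent chance; drawn without replacement."""
    return rng.sample(pop, n)

def systematic_sample(pop, n, rng):
    """One random start, then every k-th element, with interval k = N // n."""
    k = len(pop) // n
    start = rng.randrange(k)
    return pop[start::k][:n]

def stratified_sample(pop, stratum_of, n_per_stratum, rng):
    """Independent simple random samples drawn within each stratum."""
    strata = {}
    for item in pop:
        strata.setdefault(stratum_of(item), []).append(item)
    chosen = []
    for members in strata.values():
        chosen.extend(rng.sample(members, n_per_stratum))
    return chosen
```

    Cluster and multi-stage sampling follow the same pattern one level up: first sample whole groups (clusters), then, in the multi-stage case, apply one of the above within each selected group.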

  20. Randomized study of preoperative radiation and surgery or irradiation alone in the treatment of Stage IB and IIA carcinoma of the uterine cervix

    International Nuclear Information System (INIS)

    Perez, C.A.; Camel, H.M.; Kao, M.S.; Askin, F.

    1980-01-01

    A prospective randomized study in selected patients with Stage IB and IIA carcinoma of the uterine cervix was carried out. Patients were randomized to be treated with (1) irradiation alone, consisting of 1000 rad to the whole pelvis, an additional 4000 rad to the parametria with a step-wedge midline block, and two intracavitary insertions for 7500 mgh; or (2) irradiation and surgery, consisting of 2000 rad whole-pelvis irradiation and one intracavitary insertion for 5000 to 6000 mgh, followed two to six weeks later by a radical hysterectomy with pelvic lymphadenectomy. The five-year tumor-free actuarial survival for Stage IB patients treated with radiation was 87%, and with preoperative radiation and surgery 82%. In Stage IIA, the actuarial five-year survival NED was 57% for the irradiation-alone group and 71% for the patients treated with preoperative radiation and radical hysterectomy. Major complications of therapy were slightly higher in the patients treated with radiation alone (9.4%, consisting of one recto-vaginal fistula, one vesico-vaginal fistula, and a combined recto-vesico-vaginal fistula in another patient). In the preoperative radiation group, only two ureteral strictures (4.1%) were noted. The present study shows no significant difference in therapeutic results or morbidity for invasive carcinoma of the uterine cervix Stage IB or IIA treated with irradiation alone or combined with a radical hysterectomy.

  1. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

    There also seems to be a trend of decreasing urethritis and an increase of instrumentation- and catheter-related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two-stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  2. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    International Nuclear Information System (INIS)

    Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W

    2013-01-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)

  3. Preoperative staging of lung cancer with PET/CT: cost-effectiveness evaluation alongside a randomized controlled trial

    International Nuclear Information System (INIS)

    Soegaard, Rikke; Fischer, Barbara Malene B.; Mortensen, Jann; Hoejgaard, Liselotte; Lassen, Ulrik

    2011-01-01

    Positron emission tomography (PET)/CT has become a widely used technology for preoperative staging of non-small cell lung cancer (NSCLC). Two recent randomized controlled trials (RCTs) have established its efficacy over conventional staging, but no studies have assessed its cost-effectiveness. The objective of this study was to assess the cost-effectiveness of PET/CT as an adjunct to conventional workup for preoperative staging of NSCLC. The study was conducted alongside an RCT in which 189 patients were allocated to conventional staging (n = 91) or conventional staging + PET/CT (n = 98) and followed for 1 year, after which the number of futile thoracotomies in each group was monitored. A full health care sector perspective was adopted for costing resource use. The outcome parameter was defined as the number needed to treat (NNT) - here the number of PET/CT scans needed - to avoid one futile thoracotomy. All monetary estimates were inflated to 2010 EUR. The incremental cost of the PET/CT-based regimen was estimated at 3,927 EUR [95% confidence interval (CI) -3,331; 10,586] and the NNT at 4.92 (95% CI 3.00; 13.62). These resulted in an average incremental cost-effectiveness ratio of 19,314 EUR, which would be cost-effective at a probability of 0.90 given a willingness to pay of 50,000 EUR per avoided futile thoracotomy. When costs of comorbidity-related hospital services were excluded, the PET/CT regimen appeared dominant. Applying a full health care sector perspective, the cost-effectiveness of PET/CT for staging NSCLC seems to depend on the willingness to pay to avoid a futile thoracotomy. However, given that four outliers in terms of extreme comorbidity were all randomized to the PET/CT arm, there is uncertainty about the conclusion. When hospital costs of comorbidity were excluded, the PET/CT regimen was found to be both more accurate and cost saving. (orig.)

  4. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality

    Science.gov (United States)

    Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in comparison to household surveys.
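The four AAPOR outcome rates quoted above can be sketched from final call dispositions. The helper below is a simplified rendering of AAPOR-style definitions, not the survey's actual computation, and the disposition counts in the example are hypothetical (invented only to land near the quoted 31%/81%/7%/39%), since the abstract does not report denominators:

```python
def aapor_rates(complete, partial, refusal, noncontact, other, unknown, e=1.0):
    """Simplified AAPOR-style outcome rates from final dispositions.

    e = estimated share of unknown-eligibility numbers that are truly
    eligible (e = 1 is the conservative choice).
    """
    eligible = complete + partial + refusal + noncontact + other + e * unknown
    contacted = complete + partial + refusal + other
    response = (complete + partial) / eligible      # ~ AAPOR RR2
    cooperation = (complete + partial) / contacted  # ~ AAPOR COOP2
    refusal_rate = refusal / eligible               # ~ AAPOR REF1
    contact_rate = contacted / eligible             # ~ AAPOR CON1
    return response, cooperation, refusal_rate, contact_rate

# Hypothetical dispositions for illustration only (completes and partials
# are the published 9,469 and 3,547; all other counts are invented):
rr, coop, ref, con = aapor_rates(9469, 3547, 2900, 16000, 150, 9921)
```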

  5. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality.

    Science.gov (United States)

    L'Engle, Kelly; Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in comparison to household surveys. 
Random digit dialing of mobile

  6. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality.

    Directory of Open Access Journals (Sweden)

    Kelly L'Engle

    Full Text Available Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample.The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census.The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample.The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in comparison to household surveys. 
Random digit

  7. A Gas-Spring-Loaded X-Y-Z Stage System for X-ray Microdiffraction Sample Manipulation

    International Nuclear Information System (INIS)

    Shu Deming; Cai Zhonghou; Lai, Barry

    2007-01-01

We have designed and constructed a gas-spring-loaded x-y-z stage system for x-ray microdiffraction sample manipulation at the Advanced Photon Source XOR 2-ID-D station. The stage system includes three DC-motor-driven linear stages and a gas-spring-based heavy preloading structure, which provides antigravity forces to ensure that the stage system maintains high positioning performance under variable goniometer orientations. Microdiffraction experiments with this new stage system showed significantly improved sample-manipulation performance.

  8. Evaluation of a modified two-stage inferior alveolar nerve block technique: A preliminary investigation

    Directory of Open Access Journals (Sweden)

    Ashwin Rao

    2017-01-01

Full Text Available Introduction: The two-stage technique of inferior alveolar nerve block (IANB) administration does not address the pain associated with “needle insertion” and “local anesthetic solution deposition” in the “first stage” of the injection. This study evaluated the reaction of children to “needle insertion” and “local anesthetic solution deposition” during the “first stage” of a “modified two-stage technique” and compared it to the “first phase” of the IANB administered with the standard one-stage technique. Materials and Methods: This was a parallel, single-blinded comparative study. A total of 34 children (between 6 and 10 years of age) were randomly divided into two groups to receive an IANB either through the modified two-stage technique (MTST) (Group A; 15 children) or the standard one-stage technique (SOST) (Group B; 19 children). The evaluation was done using the Face Legs Activity Cry Consolability (FLACC) scale, an objective scale based on the expressions of the child. The obtained data were analyzed using Fisher's exact test, with P < 0.05 set as the level of significance. Results: 73.7% of children in Group B indicated moderate pain during the “first phase” of SOST, whereas no children did so in the “first stage” of Group A. Group A had 33.3% of children who scored “0”, indicating relaxed/comfortable children, compared to 0% in Group B. In Group A, 66.7% of children scored between 1 and 3, indicating mild discomfort, compared to 26.3% in Group B. The difference in scores between the two groups in each category (relaxed/comfortable, mild discomfort, moderate pain) was highly significant (P < 0.001). Conclusion: The reaction of children in Group A during “needle insertion” and “local anesthetic solution deposition” in the “first stage” of MTST was significantly lower than that of Group B during the “first phase” of the SOST.

  9. Effect of food additives on hyperphosphatemia among patients with end-stage renal disease: a randomized controlled trial.

    Science.gov (United States)

    Sullivan, Catherine; Sayre, Srilekha S; Leon, Janeen B; Machekano, Rhoderick; Love, Thomas E; Porter, David; Marbury, Marquisha; Sehgal, Ashwini R

    2009-02-11

High dietary phosphorus intake has deleterious consequences for renal patients and is possibly harmful for the general public as well. To prevent hyperphosphatemia, patients with end-stage renal disease limit their intake of foods that are naturally high in phosphorus. However, phosphorus-containing additives are increasingly being added to processed and fast foods. The effect of such additives on serum phosphorus levels is unclear. To determine the effect of limiting the intake of phosphorus-containing food additives on serum phosphorus levels among patients with end-stage renal disease. Cluster randomized controlled trial at 14 long-term hemodialysis facilities in northeast Ohio. Two hundred seventy-nine patients with elevated baseline serum phosphorus levels (>5.5 mg/dL) were recruited between May and October 2007. Two shifts at each of 12 large facilities and 1 shift at each of 2 small facilities were randomly assigned to an intervention or control group. Intervention participants (n=145) received education on avoiding foods with phosphorus additives when purchasing groceries or visiting fast food restaurants. Control participants (n=134) continued to receive usual care. Change in serum phosphorus level after 3 months. At baseline, there was no significant difference in serum phosphorus levels between the 2 groups. After 3 months, the decline in serum phosphorus levels was 0.6 mg/dL larger among intervention vs control participants (95% confidence interval, -1.0 to -0.1 mg/dL). Intervention participants also had statistically significant increases in reading ingredient lists, but no significant change in food knowledge scores (P = .13). Educating end-stage renal disease patients to avoid phosphorus-containing food additives resulted in modest improvements in hyperphosphatemia. clinicaltrials.gov Identifier: NCT00583570.

  10. Comparison of two-staged ORIF and limited internal fixation with external fixator for closed tibial plafond fractures.

    Science.gov (United States)

    Wang, Cheng; Li, Ying; Huang, Lei; Wang, Manyi

    2010-10-01

To compare the results of two-staged open reduction and internal fixation (ORIF) and limited internal fixation with external fixator (LIFEF) for closed tibial plafond fractures. From January 2005 to June 2007, 56 patients with closed type B3 or C Pilon fractures were randomly allocated into groups I and II. Two-staged ORIF was performed in group I and LIFEF in group II. The outcome measures included bone union, nonunion, malunion, pin-tract infection, wound infection, osteomyelitis, ankle joint function, etc. These postoperative data were analyzed with Statistical Package for Social Sciences (SPSS) 13.0. The incidence of superficial soft tissue infection (wound infection or pin-tract infection) in group I was lower than that in group II; the groups were comparable in rates of delayed union and arthritis symptoms, with no statistically significant differences. Both groups achieved similar ankle joint function. Logistic regression analysis indicated that smoking and fracture pattern were the two factors significantly influencing the final outcomes. In the treatment of closed tibial plafond fractures, both two-staged ORIF and LIFEF offer similar results. Patients undergoing LIFEF carried significantly greater radiation exposure and a higher superficial soft tissue infection rate (which usually occurs at the pin tract and does not affect the final outcomes).

  11. Late-stage pharmaceutical R&D and pricing policies under two-stage regulation.

    Science.gov (United States)

    Jobjörnsson, Sebastian; Forster, Martin; Pertile, Paolo; Burman, Carl-Fredrik

    2016-12-01

    We present a model combining the two regulatory stages relevant to the approval of a new health technology: the authorisation of its commercialisation and the insurer's decision about whether to reimburse its cost. We show that the degree of uncertainty concerning the true value of the insurer's maximum willingness to pay for a unit increase in effectiveness has a non-monotonic impact on the optimal price of the innovation, the firm's expected profit and the optimal sample size of the clinical trial. A key result is that there exists a range of values of the uncertainty parameter over which a reduction in uncertainty benefits the firm, the insurer and patients. We consider how different policy parameters may be used as incentive mechanisms, and the incentives to invest in R&D for marginal projects such as those targeting rare diseases. The model is calibrated using data on a new treatment for cystic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling

    OpenAIRE

    Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.

    2013-01-01

Gini index, Bonferroni index, and Absolute Lorenz index are popular indices of inequality, each capturing a different feature of inequality measurement. In general, a simple random sampling procedure is commonly used to estimate inequality indices and to draw the related inferences. The condition that samples be drawn via simple random sampling makes calculations much simpler, but this assumption is often violated in practice, as the data do not always yield a simple random ...
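To make the first of these indices concrete: under simple random sampling the plug-in estimate of the Gini index is the mean absolute difference over all pairs of observations, divided by twice the mean. A minimal sketch (O(n^2), fine for illustration):

```python
def gini(sample):
    """Plug-in Gini index: G = sum_ij |x_i - x_j| / (2 * n^2 * mean)."""
    n = len(sample)
    mean = sum(sample) / n
    mad = sum(abs(xi - xj) for xi in sample for xj in sample)
    return mad / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))  # 0.0 - perfect equality
print(gini([0, 0, 0, 1]))  # 0.75 - one unit holds everything
```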

  13. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

Full Text Available This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. Firstly, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties under simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than several known estimators, including the Gupta and Shabbir (2008) estimator.
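For context, the classical ratio estimator that this line of work builds on scales the sample mean of the study variable y by the ratio of the known population mean of the auxiliary variable x to its sample mean; the estimators discussed in the paper generalize this form. A minimal sketch:

```python
def ratio_estimate(y, x, x_pop_mean):
    """Classical ratio estimator of the population mean of y under simple
    random sampling, given auxiliary x with known population mean."""
    y_bar = sum(y) / len(y)
    x_bar = sum(x) / len(x)
    return y_bar * x_pop_mean / x_bar

# If the sample under-represents x (x_bar < X_bar), y_bar is scaled up:
print(ratio_estimate([2.0, 4.0], [1.0, 2.0], 1.8))  # 3.0 * 1.8 / 1.5 = 3.6
```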

  14. Systematic random sampling of the comet assay.

    Science.gov (United States)

    McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan

    2009-07-01

The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away leaving only the DNA trapped in an agarose cavity which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Comet selection during such acquisition is expected to be both objective and random. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. To achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be used either manually or in automated systems. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled variation experiment showed that the SRS technique attained a lower variability than the traditional approach. A single-user repetition experiment showed greater individual variances, though these were not detrimental to the overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
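The SRS scheme described — a random start within the first interval, then a fixed step through an unbiased sampling frame — can be sketched generically as follows (an illustration of the sampling rule, not the authors' microscope-vernier implementation):

```python
import random

def systematic_sample(items, k, rng=random):
    """Systematic random sampling: draw a uniform random start in [0, k),
    then take every k-th item; each item has inclusion probability 1/k."""
    start = rng.randrange(k)
    return items[start::k]

fields = list(range(12))  # e.g. 12 fields of view on a slide
chosen = systematic_sample(fields, 4, random.Random(1))
print(chosen)  # three fields, evenly spaced 4 apart
```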

  15. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
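One standard way to realize such correlated samples — plausibly the core step here, though the abstract does not spell out the algorithm — is to factor the target covariance matrix with a Cholesky decomposition and transform i.i.d. standard normal draws; correlated log-normal samples then follow by exponentiation:

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])           # target covariance (correlation 0.8)
L = np.linalg.cholesky(sigma)            # sigma = L @ L.T
z = rng.standard_normal((100_000, 2))    # independent standard normals
x = z @ L.T                              # correlated normal samples
x_lognormal = np.exp(x)                  # correlated log-normal samples

print(np.corrcoef(x[:, 0], x[:, 1])[0, 1])  # close to 0.8
```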

  16. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  17. Routine conventional karyotyping of lymphoma staging bone marrow samples does not contribute clinically relevant information.

    Science.gov (United States)

    Nardi, Valentina; Pulluqi, Olja; Abramson, Jeremy S; Dal Cin, Paola; Hasserjian, Robert P

    2015-06-01

    Bone marrow (BM) evaluation is an important part of lymphoma staging, which guides patient management. Although positive staging marrow is defined as morphologically identifiable disease, such samples often also include flow cytometric analysis and conventional karyotyping. Cytogenetic analysis is a labor-intensive and costly procedure and its utility in this setting is uncertain. We retrospectively reviewed pathological reports of 526 staging marrow specimens in which conventional karyotyping had been performed. All samples originated from a single institution from patients with previously untreated Hodgkin and non-Hodgkin lymphomas presenting in an extramedullary site. Cytogenetic analysis revealed clonal abnormalities in only eight marrow samples (1.5%), all of which were positive for lymphoma by morphologic evaluation. Flow cytometry showed a small clonal lymphoid population in three of the 443 morphologically negative marrow samples (0.7%). Conventional karyotyping is rarely positive in lymphoma staging marrow samples and, in our cohort, the BM karyotype did not contribute clinically relevant information in the vast majority of cases. Our findings suggest that karyotyping should not be performed routinely on BM samples taken to stage previously diagnosed extramedullary lymphomas unless there is pathological evidence of BM involvement by lymphoma. © 2015 Wiley Periodicals, Inc.

  18. [Comparison research on two-stage sequencing batch MBR and one-stage MBR].

    Science.gov (United States)

    Yuan, Xin-Yan; Shen, Heng-Gen; Sun, Lei; Wang, Lin; Li, Shi-Feng

    2011-01-01

Aiming at resolving problems in MBR operation, such as low nitrogen and phosphorus removal efficiency and severe membrane fouling, a comparison between two-stage sequencing batch MBR (TSBMBR) and one-stage aerobic MBR has been carried out in this paper. The results indicated that TSBMBR retains the advantages of an SBR in removing nitrogen and phosphorus, which makes up for the deficiency of the traditional one-stage aerobic MBR in nitrogen and phosphorus removal. During the steady operation period, average effluent NH4(+)-N, TN and TP concentrations were 2.83, 12.20 and 0.42 mg/L respectively, which could meet the requirements for domestic scenic environment reuse. From the membrane fouling control point of view, TSBMBR had lower SMP in the supernatant, a lower specific trans-membrane flux reduction rate, and lower membrane fouling resistance than the one-stage aerobic MBR. The sedimentation and gel layer resistances of TSBMBR were only 6.5% and 33.12% of those of the one-stage aerobic MBR. Besides high efficiency in removing nitrogen and phosphorus, TSBMBR could effectively reduce sedimentation and gel layer fouling on the membrane surface. Compared with the one-stage MBR, TSBMBR could operate with higher trans-membrane flux, a lower membrane fouling rate and better pollutant removal.

  19. Energy demand in Portuguese manufacturing: a two-stage model

    International Nuclear Information System (INIS)

    Borges, A.M.; Pereira, A.M.

    1992-01-01

    We use a two-stage model of factor demand to estimate the parameters determining energy demand in Portuguese manufacturing. In the first stage, a capital-labor-energy-materials framework is used to analyze the substitutability between energy as a whole and other factors of production. In the second stage, total energy demand is decomposed into oil, coal and electricity demands. The two stages are fully integrated since the energy composite used in the first stage and its price are obtained from the second stage energy sub-model. The estimates obtained indicate that energy demand in manufacturing responds significantly to price changes. In addition, estimation results suggest that there are important substitution possibilities among energy forms and between energy and other factors of production. The role of price changes in energy-demand forecasting, as well as in energy policy in general, is clearly established. (author)

  20. The influence of magnetic field strength in ionization stage on ion transport between two stages of a double stage Hall thruster

    International Nuclear Information System (INIS)

    Yu Daren; Song Maojiang; Li Hong; Liu Hui; Han Ke

    2012-01-01

It is futile to design a special ionization stage for a double stage Hall thruster if the ionized ions cannot enter the acceleration stage. Based on this viewpoint, the ion transport under different magnetic field strengths in the ionization stage is investigated, and the physical mechanisms affecting the ion transport are analyzed in this paper. With a combined experimental and particle-in-cell simulation study, it is found that the ion transport between the two stages is chiefly affected by the potential well, the potential barrier, and the potential drop at the bottom of the potential well. As the magnetic field strength in the ionization stage increases, a larger potential well produces a larger plasma density. Furthermore, the potential barrier near the intermediate electrode first declines and then rises, while the potential drop at the bottom of the potential well first rises and then declines, as the magnetic field strength in the ionization stage increases. Consequently, both the ion current entering the acceleration stage and the total ion current ejected from the thruster first rise and then decline as the magnetic field strength in the ionization stage increases. Therefore, there is an optimal magnetic field strength in the ionization stage to guide the ion transport between the two stages.

  1. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS) (pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
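The reductions above follow the standard design factor for averaging m repeated measures with a common test-retest correlation rho: the required n scales by (1 + (m - 1) * rho) / m relative to a single measurement. The sketch below assumes rho = 0.7 (an illustrative value, not taken from the paper); it roughly reproduces the reported 15%, 21% and 24% reductions for two, three and four measures:

```python
def n_scale(m, rho):
    """Relative sample size when the outcome is the mean of m repeated
    measures with pairwise correlation rho (compound symmetry assumed)."""
    return (1 + (m - 1) * rho) / m

rho = 0.7  # assumed test-retest correlation, for illustration only
for m in (2, 3, 4):
    print(m, round(1 - n_scale(m, rho), 3))  # fractional reduction in n
```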

  2. Synthesis of Programmable Main-chain Liquid-crystalline Elastomers Using a Two-stage Thiol-acrylate Reaction.

    Science.gov (United States)

    Saed, Mohand O; Torbati, Amir H; Nair, Devatha P; Yakacki, Christopher M

    2016-01-19

    This study presents a novel two-stage thiol-acrylate Michael addition-photopolymerization (TAMAP) reaction to prepare main-chain liquid-crystalline elastomers (LCEs) with facile control over network structure and programming of an aligned monodomain. Tailored LCE networks were synthesized using routine mixing of commercially available starting materials and pouring monomer solutions into molds to cure. An initial polydomain LCE network is formed via a self-limiting thiol-acrylate Michael-addition reaction. Strain-to-failure and glass transition behavior were investigated as a function of crosslinking monomer, pentaerythritol tetrakis(3-mercaptopropionate) (PETMP). An example non-stoichiometric system of 15 mol% PETMP thiol groups and an excess of 15 mol% acrylate groups was used to demonstrate the robust nature of the material. The LCE formed an aligned and transparent monodomain when stretched, with a maximum failure strain over 600%. Stretched LCE samples were able to demonstrate both stress-driven thermal actuation when held under a constant bias stress or the shape-memory effect when stretched and unloaded. A permanently programmed monodomain was achieved via a second-stage photopolymerization reaction of the excess acrylate groups when the sample was in the stretched state. LCE samples were photo-cured and programmed at 100%, 200%, 300%, and 400% strain, with all samples demonstrating over 90% shape fixity when unloaded. The magnitude of total stress-free actuation increased from 35% to 115% with increased programming strain. Overall, the two-stage TAMAP methodology is presented as a powerful tool to prepare main-chain LCE systems and explore structure-property-performance relationships in these fascinating stimuli-sensitive materials.

  3. Self-monitoring of urinary salt excretion as a method of salt-reduction education: a parallel, randomized trial involving two groups.

    Science.gov (United States)

    Yasutake, Kenichiro; Miyoshi, Emiko; Misumi, Yukiko; Kajiyama, Tomomi; Fukuda, Tamami; Ishii, Taeko; Moriguchi, Ririko; Murata, Yusuke; Ohe, Kenji; Enjoji, Munechika; Tsuchihashi, Takuya

    2018-02-20

    The present study aimed to evaluate salt-reduction education using a self-monitoring urinary salt-excretion device. Parallel, randomized trial involving two groups. The following parameters were checked at baseline and endline of the intervention: salt check sheet, eating behaviour questionnaire, 24 h home urine collection, blood pressure before and after urine collection. The intervention group self-monitored urine salt excretion using a self-measuring device for 4 weeks. In the control group, urine salt excretion was measured, but the individuals were not informed of the result. Seventy-eight individuals (control group, n 36; intervention group, n 42) collected two 24 h urine samples from a target population of 123 local resident volunteers. The samples were then analysed. There were no differences in clinical background or related parameters between the two groups. The 24 h urinary Na:K ratio showed a significant decrease in the intervention group (-1·1) compared with the control group (-0·0; P=0·033). Blood pressure did not change in either group. The results of the salt check sheet did not change in the control group but were significantly lower in the intervention group. The score of the eating behaviour questionnaire did not change in the control group, but the intervention group showed a significant increase in eating behaviour stage. Self-monitoring of urinary salt excretion helps to improve 24 h urinary Na:K, salt check sheet scores and stage of eating behaviour. Thus, usage of self-monitoring tools has an educational potential in salt intake reduction.

  4. Single-stage Acetabular Revision During Two-stage THA Revision for Infection is Effective in Selected Patients.

    Science.gov (United States)

    Fink, Bernd; Schlumberger, Michael; Oremek, Damian

    2017-08-01

    The treatment of periprosthetic infections of hip arthroplasties typically involves use of either a single- or two-stage (with implantation of a temporary spacer) revision surgery. In patients with severe acetabular bone deficiencies, either already present or after component removal, spacers cannot be safely implanted. In such hips where it is impossible to use spacers and yet a two-stage revision of the prosthetic stem is recommended, we have combined a two-stage revision of the stem with a single revision of the cup. To our knowledge, this approach has not been reported before. (1) What proportion of patients treated with single-stage acetabular reconstruction as part of a two-stage revision for an infected THA remain free from infection at 2 or more years? (2) What are the Harris hip scores after the first stage and at 2 years or more after the definitive reimplantation? Between June 2009 and June 2014, we treated all patients undergoing surgical treatment for an infected THA using a single-stage acetabular revision as part of a two-stage THA exchange if the acetabular defect classification was Paprosky Types 2B, 2C, 3A, 3B, or pelvic discontinuity and a two-stage procedure was preferred for the femur. The procedure included removal of all components, joint débridement, definitive acetabular reconstruction (with a cage to bridge the defect, and a cemented socket), and a temporary cemented femoral component at the first stage; the second stage consisted of repeat joint and femoral débridement and exchange of the femoral component to a cementless device. During the period noted, 35 patients met those definitions and were treated with this approach. No patients were lost to followup before 2 years; mean followup was 42 months (range, 24-84 months). The clinical evaluation was performed with the Harris hip scores and resolution of infection was assessed by the absence of clinical signs of infection and a C-reactive protein level less than 10 mg/L. All

  5. Early-stage squamous cell carcinoma of the oropharynx: Radiotherapy vs. Trans-Oral Robotic Surgery (ORATOR) – study protocol for a randomized phase II trial

    International Nuclear Information System (INIS)

    Nichols, Anthony C; Kuruvilla, Sara; Chen, Jeff; Corsten, Martin; Odell, Michael; Eapen, Libni; Theurer, Julie; Doyle, Philip C; Wehrli, Bret; Kwan, Keith; Palma, David A; Yoo, John; Hammond, J Alex; Fung, Kevin; Winquist, Eric; Read, Nancy; Venkatesan, Varagur; MacNeil, S Danielle; Ernst, D Scott

    2013-01-01

    The incidence of oropharyngeal squamous cell carcinoma (OPSCC) has markedly increased over the last three decades due to newly found associations with human papillomavirus (HPV) infection. Primary radiotherapy (RT) is the treatment of choice for OPSCC at most centers, and over the last decade, the addition of concurrent chemotherapy has led to a significant improvement in survival, but at the cost of increased acute and late toxicity. Transoral robotic surgery (TORS) has emerged as a promising alternative treatment, with preliminary case series demonstrating encouraging oncologic, functional, and quality of life (QOL) outcomes. However, comparisons of TORS and RT in a non-randomized fashion are susceptible to bias. The goal of this randomized phase II study is to compare QOL, functional outcomes, toxicity profiles, and survival following primary RT (± chemotherapy) vs. TORS (± adjuvant [chemo] RT) in patients with OPSCC. The target patient population comprises OPSCC patients who would be unlikely to require chemotherapy post-resection: Tumor stage T1-T2 with likely negative margins at surgery; Nodal stage N0-2, ≤3 cm in size, with no evidence of extranodal extension on imaging. Participants will be randomized in a 1:1 ratio between Arm 1 (RT ± chemotherapy) and Arm 2 (TORS ± adjuvant [chemo] RT). In Arm 1, patients with N0 disease will receive RT alone, whereas N1-2 patients will receive concurrent chemoradiation. In Arm 2, patients will undergo TORS along with selective neck dissections, which may be staged. Pathologic high-risk features will be used to determine the requirement for adjuvant radiotherapy +/- chemotherapy. The primary endpoint is QOL score using the M.D. Anderson Dysphagia Inventory (MDADI), with secondary endpoints including survival, toxicity, other QOL outcomes, and swallowing function. A sample of 68 patients is required. This study, if successful, will provide a much-needed randomized comparison of the conventional strategy of primary RT

  6. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

    Full Text Available With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional camera-based human detection is not high enough to meet the requirements of operation surveillance. One of the key reasons is that Histograms of Oriented Gradients (HOG) features of the human body differ greatly between front-and-back standing (F&B) and side standing (Side) postures. When HOG features are extracted directly from samples of different postures, the final classifier training therefore gains only a few specific features that contribute to classification, which are insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first, preprocessing stage, images are divided into possible F&B human bodies and non-F&B bodies; the latter are then passed to the second-stage classification, which distinguishes side-standing humans from non-humans. Experimental results at Tianjin port show that the two-stage classifier clearly improves the classification accuracy of human detection.
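A minimal sketch of such a two-stage cascade, using synthetic feature vectors in place of real HOG descriptors (all data, dimensions, and class means below are invented for illustration; this is not the paper's implementation):

```python
import numpy as np
from sklearn.svm import LinearSVC

# synthetic "HOG-like" features for three window classes (invented data)
rng = np.random.default_rng(0)
fnb = rng.normal(2.0, 1.0, size=(200, 8))    # front/back-standing humans
side = rng.normal(-2.0, 1.0, size=(200, 8))  # side-standing humans
neg = rng.normal(0.0, 1.0, size=(200, 8))    # non-human windows

# Stage 1: possible F&B human body vs. everything else
X1 = np.vstack([fnb, side, neg])
y1 = np.array([1] * 200 + [0] * 400)
stage1 = LinearSVC().fit(X1, y1)

# Stage 2: among windows rejected by stage 1, side human vs. non-human
X2 = np.vstack([side, neg])
y2 = np.array([1] * 200 + [0] * 200)
stage2 = LinearSVC().fit(X2, y2)

def detect(x):
    # a window is a human if either stage accepts it
    x = x.reshape(1, -1)
    return bool(stage1.predict(x)[0] or stage2.predict(x)[0])
```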

  7. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    Application of the Fourier transform to the processing of 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The results revealed the general applicability of the proposed method and a significant improvement in resolution in comparison with conventional spectra recorded in the same time
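The pairwise evaluation can be sketched as a direct (non-uniform) 2D Fourier transform over randomly sampled evolution-time pairs; the signal model, resonance frequencies, and sampling ranges below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts = 400

# randomly sampled evolution-time pairs (t1, t2), in seconds (assumed ranges)
t1 = rng.uniform(0.0, 0.05, n_pts)
t2 = rng.uniform(0.0, 0.05, n_pts)

# synthetic signal with a single resonance at (300 Hz, 150 Hz)
f1_true, f2_true = 300.0, 150.0
signal = np.exp(2j * np.pi * (f1_true * t1 + f2_true * t2))

def ft2(f1, f2):
    # direct 2D FT evaluated for one frequency pair, instead of
    # a sequence of one-dimensional transforms on a regular grid
    return abs(np.sum(signal * np.exp(-2j * np.pi * (f1 * t1 + f2 * t2))))

freqs = np.arange(0.0, 500.0, 10.0)
spectrum = np.array([[ft2(a, b) for b in freqs] for a in freqs])
i, j = np.unravel_index(spectrum.argmax(), spectrum.shape)
peak = (freqs[i], freqs[j])  # the spectrum peaks at the true resonance
```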

  8. Single-stage laparoscopic common bile duct exploration and cholecystectomy versus two-stage endoscopic stone extraction followed by laparoscopic cholecystectomy for patients with gallbladder stones with common bile duct stones: systematic review and meta-analysis of randomized trials with trial sequential analysis.

    Science.gov (United States)

    Singh, Anand Narayan; Kilambi, Ragini

    2018-03-30

    The ideal management of common bile duct (CBD) stones associated with gall stones is a matter of debate. We planned a meta-analysis of randomized trials comparing single-stage laparoscopic CBD exploration and cholecystectomy (LCBDE) with two-stage preoperative endoscopic stone extraction followed by cholecystectomy (ERCP + LC). We searched the Pubmed/Medline, Web of science, Science citation index, Google scholar and Cochrane Central Register of Controlled trials electronic databases till June 2017 for all English language randomized trials comparing the two approaches. Statistical analysis was performed using Review Manager (RevMan) [Computer program], Version 5.3. Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration, 2014 and results were expressed as odds ratio for dichotomous variables and mean difference for continuous. p value ≤ 0.05 was considered significant. Trial sequential analysis (TSA) was performed using TSA version 0.9.5.5 (Copenhagen: The Copenhagen Trial Unit, Centre for Clinical Intervention Research, 2016). PROSPERO trial registration number is CRD42017074673. A total of 11 trials were included in the analysis, with a total of 1513 patients (751-LCBDE; 762-ERCP + LC). LCBDE was found to have significantly lower rates of technical failure [OR 0.59, 95% CI (0.38, 0.93), p = 0.02] and shorter hospital stay [MD - 1.63, 95% CI (- 3.23, - 0.03), p = 0.05]. There was no significant difference in mortality [OR 0.37, 95% CI (0.09, 1.51), p = 0.17], morbidity [OR 0.97, 95% CI (0.70, 1.33), p = 0.84], cost [MD - 379.13, 95% CI (- 784.80, 111.2), p = 0.13] or recurrent/retained stones [OR 1.01, 95% CI (0.38, 2.73), p = 0.98]. TSA showed that although the Z-curve crossed the boundaries of conventional significance, the estimated information size is yet to be achieved. Single-stage LCBDE is superior to ERCP + LC in terms of technical success and shorter hospital stay in good-risk patients with

  9. Delayed versus immediate pushing in second stage of labor.

    Science.gov (United States)

    Kelly, Mary; Johnson, Eileen; Lee, Vickie; Massey, Liz; Purser, Debbie; Ring, Karen; Sanderson, Stephanye; Styles, Juanita; Wood, Deb

    2010-01-01

    Comparison of two different methods for management of the second stage of labor: immediate pushing at complete cervical dilation of 10 cm and delayed pushing 90 minutes after complete cervical dilation. This study was a randomized clinical trial in a labor and delivery unit of a not-for-profit community hospital. A sample of 44 nulliparous mothers with continuous epidural anesthesia was studied after random assignment to treatment groups. Subjects were managed with either immediate or delayed pushing during the second stage of labor at the time cervical dilation was complete. The primary outcome measure was the length of pushing during the second stage of labor. Secondary outcomes included length of the second stage of labor, maternal fatigue and perineal injuries, and fetal heart rate decelerations. Two-tailed, unpaired Student's t-tests and Chi-square analysis were used for data analysis. Level of significance was set at p < .05 (N = 28 immediate pushing; N = 16 delayed pushing). The delayed pushing group spent significantly less time pushing than the immediate pushing group (38.9 +/- 6.9 vs. 78.7 +/- 7.9 minutes, respectively, p = .002). Maternal fatigue scores, perineal injuries, and fetal heart rate decelerations were similar for both groups. Delaying pushing for up to 90 minutes after complete cervical dilation resulted in a significant decrease in the time mothers spent pushing without a significant increase in total time in the second stage of labor. In clinical practice, healthcare providers sometimes resist delaying the onset of pushing after the second stage of labor has begun because of a belief that it will increase labor time. This study's finding of a 51% reduction in pushing time when mothers delay pushing for up to 90 minutes, with no significant increase in overall time for the second stage of labor, disputes that concern.

  10. Traditionally vs sonographically coached pushing in second stage of labor: a pilot randomized controlled trial.

    Science.gov (United States)

    Bellussi, F; Alcamisi, L; Guizzardi, G; Parma, D; Pilu, G

    2018-03-13

    To investigate the usefulness of visual biofeedback using transperineal ultrasound to improve coached pushing during the active second stage of labor in nulliparous women. This was a randomized controlled trial of low-risk nulliparous women in the active second stage of labor. Patients were allocated to either coached pushing aided by visual demonstration on transperineal ultrasound of the progress of the fetal head (sonographic coaching) or traditional coaching. Patients in both groups were coached by an obstetrician for the first 20 min of the active second stage of labor and, subsequently, the labor was supervised by a midwife. Primary outcomes were duration of the active second stage and increase in the angle of progression at the end of the coaching process. Secondary outcomes included the incidence of operative delivery and complications of labor. Forty women were recruited into the study. Those who received sonographic coaching had a shorter active phase of the second stage (30 min (interquartile range (IQR), 24-42 min) vs 45 min (IQR, 39-55 min); P = 0.01) and a greater increase in the angle of progression (13.5° (IQR, 9-20°) vs 5° (IQR, 3-9.5°); P = 0.01) in the first 20 min of the active second stage of labor than did those who had traditional coaching. No differences were found in the secondary outcomes between the two groups. Our preliminary data suggest that transperineal ultrasound may be a useful adjunct to coached pushing during the active second stage of labor. Further studies are required to confirm these findings and better define the benefits of this approach. Copyright © 2018 ISUOG. Published by John Wiley & Sons Ltd.

  11. A random sampling procedure for anisotropic distributions

    International Nuclear Information System (INIS)

    Nagrajan, P.S.; Sethulakshmi, P.; Raghavendran, C.P.; Bhatia, D.P.

    1975-01-01

    A procedure is described for sampling the scattering angle of neutrons according to specified angular distribution data. The cosine of the scattering angle is written as a double Legendre expansion in the incident neutron energy and a random number. The coefficients of the expansion are given for C, N, O, Si, Ca, Fe and Pb, elements of interest in dosimetry and shielding. (author)
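The sampling rule can be sketched as follows. The coefficient table `a` below is a made-up placeholder (the paper tabulates the real coefficients for C, N, O, Si, Ca, Fe and Pb), and the linear mapping of energy onto [-1, 1] is an assumption:

```python
import numpy as np
from numpy.polynomial import legendre

# hypothetical 2x2 coefficient table a[i, j] for the double expansion
# mu(E, xi) = sum_ij a_ij * P_i(x_E) * P_j(2*xi - 1)
a = np.array([[0.0, 0.3],
              [0.1, 0.05]])

def sample_mu(energy, e_min, e_max, xi):
    # map the incident neutron energy onto [-1, 1] (assumed normalization)
    x_e = 2.0 * (energy - e_min) / (e_max - e_min) - 1.0
    # legval2d evaluates the double Legendre series at (x_e, 2*xi - 1)
    return legendre.legval2d(x_e, 2.0 * xi - 1.0, a)

rng = np.random.default_rng(2)
mus = [sample_mu(2.0, 0.0, 14.0, rng.random()) for _ in range(1000)]
# every sampled cosine must lie in [-1, 1]
```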

  12. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    International Nuclear Information System (INIS)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.; Park, T. K.; Deng, P.; Yang, G.; Jung, Y. S.; Kim, T. K.; Stauff, N. E.

    2016-01-01

    This report presents the performance characteristics of two "two-stage" fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements

  13. Preoperative Radiotherapy in Resectable Rectal Cancer: A Prospective Randomized Study of Two Different Approaches

    International Nuclear Information System (INIS)

    EITTA, M.A.; EL- WAHIDI, G.F.; FOUDA, M.A.; ABO EL-NAGA, E.M.; GAD EL-HAK, N.

    2010-01-01

    Preoperative radiotherapy in resectable rectal cancer has a number of potential advantages, most importantly reducing local recurrence, increasing survival and a down-staging effect. Purpose: This prospective study was designed to compare two different approaches to preoperative radiotherapy, either short-course or long-course radiotherapy. The primary endpoint is to evaluate the local recurrence rate, overall survival (OS) and disease-free survival (DFS). The secondary endpoint is to evaluate down-staging, treatment toxicity and the ability to perform a sphincter-sparing procedure (SSP), aiming to help in the choice of the optimal treatment modality. Patients and Methods: This is a prospective randomized study of patients with resectable rectal cancer who presented to the department of Clinical Oncology and Nuclear Medicine, Mansoura University, during the time period between June 2007 and September 2009. These patients received preoperative radiotherapy and were randomized into two arms: Arm 1, short-course radiotherapy (SCRT), 25 Gy in 5 fractions over 1 week followed by surgery within one week; and Arm 2, long-course preoperative radiotherapy (LCRT), 45 Gy in 25 fractions over 5 weeks followed by surgery after 4-6 weeks. Adjuvant chemotherapy was given 4-6 weeks after surgery according to the postoperative pathology. Results: After a median follow-up of 18 months (range 6 to 28 months), we studied the patterns of recurrence. Three patients experienced local recurrence (LR), two out of 14 (14.2%) in arm 1 and one out of 15 patients (6.7%) in arm 2 (p = 0.598). Three patients developed distant metastases [two in arm 1 (14.2%) and one in arm 2 (6.7%), p = 0.598]. The 2-year OS rate was 64±3% and 66±2% (p = 0.389), and the 2-year DFS rate was 61±2% and 83±2% for arms 1 and 2, respectively (p = 0.83). Tumor (T) down-staging was achieved more often in the LCRT arm, with a statistically significant difference, but the difference did not reach statistical significance for node (N) down-staging. SSP was more available in LCRT but with no

  14. On A Two-Stage Supply Chain Model In The Manufacturing Industry ...

    African Journals Online (AJOL)

    We model a two-stage supply chain where the upstream stage (stage 2) always meets demand from the downstream stage (stage 1). Demand is stochastic, hence shortages will occasionally occur at stage 2. Stage 2 must fill these shortages by expediting using overtime production and/or backordering. We derive optimal ...

  15. Preservation Effect of Two-Stage Cinnamon Bark (Cinnamomum Burmanii) Oleoresin Microcapsules On Vacuum-Packed Ground Beef During Refrigerated Storage

    Science.gov (United States)

    Irfiana, D.; Utami, R.; Khasanah, L. U.; Manuhara, G. J.

    2017-04-01

    The purpose of this study was to determine the effect of two-stage cinnamon bark oleoresin microcapsules (0%, 0.5% and 1%) on the TPC (Total Plate Count), TBA (thiobarbituric acid), pH, and RGB color (Red, Green, and Blue) of vacuum-packed ground beef during refrigerated storage (at 0, 4, 8, 12, and 16 days). This study showed that the addition of two-stage cinnamon bark oleoresin microcapsules affected the quality of vacuum-packed ground beef during 16 days of refrigerated storage. The results showed that the TPC values of the vacuum-packed ground beef samples with 0.5% and 1% microcapsules were lower than that of the control sample. The TPC values of the control sample and of the samples with 0.5% and 1% microcapsules were 5.94, 5.46, and 5.16 log CFU/g, respectively; the corresponding TBA values were 0.055, 0.041, and 0.044 mg malonaldehyde/kg, respectively, on the 16th day of storage. The addition of two-stage cinnamon bark oleoresin microcapsules inhibited microbial growth and slowed the oxidation of vacuum-packed ground beef. Moreover, the changes in pH and RGB color of the vacuum-packed ground beef with 0.5% and 1% microcapsules were smaller than those of the control sample. The addition of 1% microcapsules showed the best preservation effect on the vacuum-packed ground beef.

  16. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
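The unbiased filtering step can be sketched as a uniform subsample of the newly generated candidate modes at each iteration (the integer candidates here are placeholders; real elementary modes would be flux vectors):

```python
import random

def filter_candidates(candidates, sample_size, rng=random):
    # Every candidate gets the same selection probability, so the retained
    # subset is an unbiased sample of the new combinations of modes.
    candidates = list(candidates)
    if len(candidates) <= sample_size:
        return candidates
    return rng.sample(candidates, sample_size)

# at each iteration of the canonical basis approach, cap the working set
# to avoid the exponential growth of the number of modes
kept = filter_candidates(range(10000), 500)
```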

  17. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD in two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors were operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) was reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage, and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of two-stage digestion.
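As a back-of-envelope check of the figures quoted above (the ~350 mL/g theoretical maximum is a commonly used reference value, not taken from this abstract):

```python
# volumetric methane production and OLR as reported in the abstract
volumetric_ch4 = 1092.0   # mL CH4 per litre of reactor per day
olr = 6.5                 # g TCOD per litre of reactor per day

# specific methane yield per gram of COD fed
yield_per_g_cod = volumetric_ch4 / olr          # = 168.0 mL CH4 / g TCOD
fraction_of_max = yield_per_g_cod / 350.0       # vs. ~350 mL/g theoretical max

# total hydraulic retention time: first stage + second stage
hrt_total = 5.7 + 15.3                          # = 21.0 days
```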

  18. A random cluster survey and a convenience sample give comparable estimates of immunity to vaccine preventable diseases in children of school age in Victoria, Australia.

    Science.gov (United States)

    Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L

    2002-08-19

    We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.

  19. Fast randomized point location without preprocessing in two- and three-dimensional Delaunay triangulations

    Energy Technology Data Exchange (ETDEWEB)

    Muecke, E.P.; Saias, I.; Zhu, B.

    1996-05-01

    This paper studies the point location problem in Delaunay triangulations without preprocessing and additional storage. The proposed procedure locates the query point simply by walking through the triangulation, after selecting a good starting point by random sampling. The analysis generalizes and extends a recent result for d = 2 dimensions by proving that this procedure takes expected time close to O(n^(1/(d+1))) for point location in Delaunay triangulations of n random points in d = 3 dimensions. Empirical results in both two and three dimensions show that this procedure is efficient in practice.
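A minimal 2D jump-and-walk sketch of this idea, built on SciPy's Delaunay triangulation (the sample size and the simple visibility-walk rule here are illustrative choices, not the paper's exact procedure):

```python
import numpy as np
from scipy.spatial import Delaunay

def _cross(u, v):
    # z-component of the 2D cross product
    return u[0] * v[1] - u[1] * v[0]

def jump_and_walk(tri, q, n_samples, rng):
    pts = tri.points
    # Jump: sample a few vertices at random and start near the closest one
    sample = rng.choice(len(pts), size=n_samples, replace=False)
    start = sample[np.argmin(np.linalg.norm(pts[sample] - q, axis=1))]
    simplex = int(tri.vertex_to_simplex[start])
    # Walk: step into the neighbouring triangle across any edge that
    # separates the current triangle from the query point q
    while True:
        tpts = pts[tri.simplices[simplex]]
        for i in range(3):
            a, b = tpts[(i + 1) % 3], tpts[(i + 2) % 3]
            if _cross(b - a, q - a) * _cross(b - a, tpts[i] - a) < 0:
                simplex = int(tri.neighbors[simplex][i])
                if simplex == -1:
                    return -1  # q lies outside the triangulation
                break
        else:
            return simplex  # no separating edge: q is inside this triangle

rng = np.random.default_rng(0)
tri = Delaunay(rng.random((500, 2)))
q = np.array([0.5, 0.5])
found = jump_and_walk(tri, q, 8, rng)
```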

  20. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  1. On the Sampling

    OpenAIRE

    Güleda Doğan

    2017-01-01

    This editorial is on statistical sampling, which is one of the two most important reasons for editorial rejection from our journal, Turkish Librarianship. The stages of quantitative research, the stage at which sampling takes place, the importance of sampling for a research study, deciding on sample size, and sampling methods are summarised briefly.

  2. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

    This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea of this work is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages. Stage 1 of the framework removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains results similar to or better than NARM while taking only 0.3% of its computation time.
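The core idea, partition the patch space with a tree and fit one linear map per subspace, can be sketched on synthetic data (the patch dimensions, the single-tree simplification, and the toy linear ground truth below are all illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# toy "patches": 16-dim low-res features mapped linearly to 36-dim high-res
X = rng.normal(size=(2000, 16))
W = rng.normal(size=(16, 36))
Y = X @ W + 0.01 * rng.normal(size=(2000, 36))

# partition the patch space with a shallow tree (standing in for one
# tree of the random forest)
tree = DecisionTreeRegressor(max_leaf_nodes=8, random_state=0).fit(X, Y)
leaves = tree.apply(X)

# learn a separate linear regression for each leaf (subspace)
models = {leaf: LinearRegression().fit(X[leaves == leaf], Y[leaves == leaf])
          for leaf in np.unique(leaves)}

def upscale(x):
    # route the patch to its subspace, then apply that subspace's linear map
    leaf = tree.apply(x.reshape(1, -1))[0]
    return models[leaf].predict(x.reshape(1, -1))[0]
```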

  3. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster

    DEFF Research Database (Denmark)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column...... and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness...... and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila....

  4. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in situ. Later, a specifically designed self-compacting grout is introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense, homogeneous and has in general improved engineering properties and durability. This paper presents the results of a study of the effect of silica fume (SF) and superplasticizer admixtures (SP) on the compressive and tensile strength of TSC using various combinations of water-to-cement ratio (w/c) and cement-to-sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples of size 150 mm × 300 mm of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, while superplasticizer was added at a dosage of 2% of cement weight. Results indicated that both tensile and compressive strength of TSC can be statistically derived as a function of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, that an increase in the water/cement ratio leads to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of both silica fume and superplasticizer caused a significant increase in strength relative to the control mixes.
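Deriving strength as a function of w/c and c/s, as the abstract describes, amounts to a small least-squares regression; the strength values below are invented placeholders, not the paper's measurements:

```python
import numpy as np

# hypothetical mix design points (w/c, c/s) and strengths in MPa
wc = np.array([0.45, 0.45, 0.55, 0.55, 0.85, 0.85])
cs = np.array([0.5, 1.5, 0.5, 1.5, 0.5, 1.5])
strength = np.array([32.0, 28.5, 27.0, 24.0, 18.0, 16.5])  # invented data

# linear model: strength = b0 + b1 * (w/c) + b2 * (c/s)
A = np.column_stack([np.ones_like(wc), wc, cs])
coef, *_ = np.linalg.lstsq(A, strength, rcond=None)
# coef[1] < 0: strength falls as w/c rises, mirroring the stated principle
```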

  5. Measuring factor IX activity of nonacog beta pegol with commercially available one-stage clotting and chromogenic assay kits: a two-center study.

    Science.gov (United States)

    Bowyer, A E; Hillarp, A; Ezban, M; Persson, P; Kitchen, S

    2016-07-01

    Essentials: Validated assays are required to precisely measure factor IX (FIX) activity in FIX products. N9-GP and two other FIX products were assessed in various coagulation assay systems at two sites. Large variations in FIX activity measurements were observed for N9-GP using some assays. One-stage and chromogenic assays accurately measuring FIX activity for N9-GP were identified. Background: Measurement of FIX activity (FIX:C) with activated partial thromboplastin time-based one-stage clotting assays is associated with a large degree of interlaboratory variation in samples containing glycoPEGylated recombinant FIX (rFIX), i.e. nonacog beta pegol (N9-GP). Validation and qualification of specific assays and conditions are necessary for the accurate assessment of FIX:C in samples containing N9-GP. Objectives: To assess the accuracy of various one-stage clotting and chromogenic assays for measuring FIX:C in samples containing N9-GP as compared with samples containing rFIX or plasma-derived FIX (pdFIX) across two laboratory sites. Methods: FIX:C, in severe hemophilia B plasma spiked with a range of concentrations (from very low, i.e. 0.03 IU mL(-1) , to high, i.e. 0.90 IU mL(-1) ) of N9-GP, rFIX (BeneFIX), and pdFIX (Mononine), was determined at two laboratory sites with 10 commercially available one-stage clotting assays and two chromogenic FIX:C assays. Assays were performed with a plasma calibrator and different analyzers. Results: A high degree of variation in FIX:C measurement was observed for one-stage clotting assays for N9-GP as compared with rFIX or pdFIX. Acceptable N9-GP recovery was observed in the low-concentration to high-concentration samples tested with one-stage clotting assays using SynthAFax or DG Synth, or with chromogenic FIX:C assays. Similar patterns of FIX:C measurement were observed at both laboratory sites, with minor differences probably being attributable to the use of different analyzers. Conclusions: These results suggest that, of the

  6. Velocity and Dispersion for a Two-Dimensional Random Walk

    International Nuclear Information System (INIS)

    Li Jinghui

    2009-01-01

    In the paper, we consider the transport of a two-dimensional random walk. The velocity and the dispersion of this two-dimensional random walk are derived. It mainly shows that: (i) by controlling the values of the transition rates, the direction of the random walk can be reversed; (ii) for some suitably selected transition rates, our two-dimensional random walk can be efficient in comparison with the one-dimensional random walk. Our work is motivated in part by the challenge of explaining the unidirectional transport of motor proteins. When the motor proteins move at the turn points of their tracks (i.e., the cytoskeleton filaments and the DNA molecular tubes), some of our results in this paper can be used to deal with the problem. (general)

  7. Penicillin at the late stage of leptospirosis: a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Costa Everaldo

    2003-01-01

    Full Text Available There is evidence that an early start of penicillin reduces the case-fatality rate of leptospirosis and that chemoprophylaxis is efficacious in persons exposed to the sources of leptospira. The existing data, however, are inconsistent regarding the benefit of introducing penicillin at a late stage of leptospirosis. The present study was developed to assess whether the introduction of penicillin after more than four days of symptoms reduces the in-hospital case-fatality rate of leptospirosis. A total of 253 patients aged 15 to 76 years with advanced leptospirosis, i.e., more than four days of symptoms, admitted to an infectious disease hospital located in Salvador, Brazil, were selected for the study. The patients were randomized to one of two treatment groups: with intravenous penicillin, 6 million units/day (one million units every four hours) for seven days (n = 125), or without penicillin (n = 128). The main outcome was death during hospitalization. The case-fatality rate was approximately twice as high in the group treated with penicillin (12%; 15/125) as in the comparison group (6.3%; 8/128). This difference pointed in the opposite direction of the study hypothesis, but was not statistically significant (p = 0.112). Length of hospital stay was similar between the treatment groups. According to the results of the present randomized clinical trial, initiation of penicillin in patients with severe forms of leptospirosis after at least four days of symptomatic leptospirosis is not beneficial. Therefore, more attention should be directed to prevention and earlier initiation of the treatment of leptospirosis.

  8. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without LEU support. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of the ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to the ADS, but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements.

  9. Stereotactic ablative radiotherapy versus lobectomy for operable stage I non-small-cell lung cancer : a pooled analysis of two randomised trials

    NARCIS (Netherlands)

    Chang, Joe Y.; Senan, Suresh; Paul, Marinus A.; Mehran, Reza J.; Louie, Alexander V.; Balter, Peter; Groen, Harry; McRae, Stephen E.; Widder, Joachim; Feng, Lei; van den Borne, Ben E. E. M.; Munsell, Mark F.; Hurkmans, Coen; Berry, Donald A.; van Werkhoven, Erik; Kresl, John J.; Dingemans, Anne-Marie; Dawood, Omar; Haasbeek, Cornelis J. A.; Carpenter, Larry S.; De Jaeger, Katrien; Komaki, Ritsuko; Slotman, Ben J.; Smit, Egbert F.; Roth, Jack A.

    Background The standard of care for operable, stage I, non-small-cell lung cancer (NSCLC) is lobectomy with mediastinal lymph node dissection or sampling. Stereotactic ablative radiotherapy (SABR) for inoperable stage I NSCLC has shown promising results, but two independent, randomised, phase 3

  10. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  11. STATISTICAL LANDMARKS AND PRACTICAL ISSUES REGARDING THE USE OF SIMPLE RANDOM SAMPLING IN MARKET RESEARCHES

    Directory of Open Access Journals (Sweden)

    CODRUŢA DURA

    2010-01-01

    Full Text Available The sample represents a particular segment of the statistical population chosen to represent it as a whole. The representativeness of the sample determines the accuracy for estimations made on the basis of calculating the research indicators and the inferential statistics. The method of random sampling is part of the probabilistic methods which can be used within marketing research and it is characterized by the fact that it imposes the requirement that each unit belonging to the statistical population should have an equal chance of being selected for the sampling process. When simple random sampling is meant to be rigorously put into practice, it is recommended to use the technique of random number tables in order to configure the sample which will provide the information that the marketer needs. The paper also details the practical procedure implemented in order to create a sample for a marketing research by generating random numbers using the facilities offered by Microsoft Excel.
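The equal-chance selection requirement described above can be sketched in a few lines. The frame and sample size here are illustrative stand-ins for the random-number-table or Excel procedure the paper describes:

```python
import random

def simple_random_sample(population, n, seed=None):
    """Draw a simple random sample of size n without replacement:
    every unit has the same probability n/N of being selected."""
    rng = random.Random(seed)
    return rng.sample(list(population), n)

# Hypothetical sampling frame of N = 500 units
frame = [f"unit_{i:03d}" for i in range(1, 501)]
sample = simple_random_sample(frame, 50, seed=42)
```

The seed is fixed only for reproducibility of the illustration; in practice the random numbers would come from a table or a spreadsheet generator as the paper suggests.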

  12. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  13. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
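In the linear special case the mechanics of 2SRI are easy to see: regress the endogenous variable on the instrument, then include the first-stage residual alongside the endogenous regressor in the second stage. The simulation below is a minimal sketch with made-up variable names and effect sizes; the paper's interest is in nonlinear second stages, where 2SPS and 2SRI genuinely diverge, but the residual-inclusion mechanics are the same:

```python
import random

def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]   # augmented matrix
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k + 1):
                A[r][j] -= f * A[c][j]
    b = [0.0] * k
    for r in range(k - 1, -1, -1):
        b[r] = (A[r][k] - sum(A[r][j] * b[j] for j in range(r + 1, k))) / A[r][r]
    return b

rng = random.Random(0)
n = 5000
# Simulated data: instrument z, unobserved confounder u,
# endogenous regressor x, outcome y with true causal effect 2.0
z = [rng.gauss(0, 1) for _ in range(n)]
u = [rng.gauss(0, 1) for _ in range(n)]
x = [0.5 * zi + ui + rng.gauss(0, 1) for zi, ui in zip(z, u)]
y = [1.0 + 2.0 * xi + ui + rng.gauss(0, 1) for xi, ui in zip(x, u)]

# Naive OLS of y on x is biased upward because x is correlated with u
b_naive = ols([[1.0, xi] for xi in x], y)

# First stage: regress x on (1, z), keep the residuals
a0, a1 = ols([[1.0, zi] for zi in z], x)
resid = [xi - (a0 + a1 * zi) for xi, zi in zip(x, z)]

# 2SRI second stage: regress y on (1, x, first-stage residual)
b_2sri = ols([[1.0, xi, ri] for xi, ri in zip(x, resid)], y)
```

In this linear setting the 2SRI coefficient on x recovers the true effect (about 2.0), while the naive slope is biased toward roughly 2.4; the residual's coefficient absorbs the endogenous variation.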

  14. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    Science.gov (United States)

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

    The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. We recruited 99 conscripted soldiers with educational levels of senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimal cut-off point of CLR. The optimal single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. The two-stage window screening is thus more accurate and economical than the two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future.
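A cut-off such as the CLR = 66 reported above is commonly chosen by maximizing Youden's J over the ROC curve. The abstract does not state which criterion the authors used, so the following is a generic sketch with toy scores (lower score = positive screen):

```python
def youden_cutoff(scores, labels):
    """Return the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    labels: 1 = condition present, 0 = absent; a unit screens positive
    when its score is <= the cut-off (lower scores flag the condition)."""
    best_j, best_c = -1.0, None
    for c in sorted(set(scores)):                 # candidate cut-offs
        tp = sum(1 for s, l in zip(scores, labels) if s <= c and l == 1)
        fn = sum(1 for s, l in zip(scores, labels) if s > c and l == 1)
        tn = sum(1 for s, l in zip(scores, labels) if s > c and l == 0)
        fp = sum(1 for s, l in zip(scores, labels) if s <= c and l == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Toy data: perfectly separable scores for illustration only
scores = [40, 45, 50, 60, 62, 70, 75, 80, 85, 90]
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
best_c, best_j = youden_cutoff(scores, labels)
```

With the toy data the optimum falls at 62, where sensitivity and specificity are both 1; real screening data would of course give a cut-off with J < 1.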

  15. The Two-Word Stage: Motivated by Linguistic or Cognitive Constraints?

    Science.gov (United States)

    Berk, Stephanie; Lillo-Martin, Diane

    2012-01-01

    Child development researchers often discuss a "two-word" stage during language acquisition. However, there is still debate over whether the existence of this stage reflects primarily cognitive or linguistic constraints. Analyses of longitudinal data from two Deaf children, Mei and Cal, not exposed to an accessible first language (American Sign…

  16. A two-stage heating scheme for heat assisted magnetic recording

    Science.gov (United States)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in² for the next generation of magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the NFT heating process into two individual heating stages, performed by an optical waveguide and an NFT, respectively. In the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface, heating it to a peak temperature somewhat lower than the Curie temperature of the magnetic material. The NFT then works as the second heating stage, further heating a smaller area inside the waveguide-heated area to reach the Curie point. The energy applied to the NFT in the second heating stage is reduced compared with a typical single-stage NFT heating system. With the thermal load on the NFT reduced by the two-stage heating scheme, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  17. Efficacy of single-stage and two-stage Fowler–Stephens laparoscopic orchidopexy in the treatment of intraabdominal high testis

    Directory of Open Access Journals (Sweden)

    Chang-Yuan Wang

    2017-11-01

    Conclusion: In the case of a testis with good collateral circulation, single-stage F-S laparoscopic orchidopexy had the same safety and efficacy as the two-stage F-S procedure. Surgical options should be based on comprehensive consideration of intraoperative testicular location, the testicular ischemia test, and the collateral circulation surrounding the testes. Under the appropriate conditions, we propose that single-stage F-S laparoscopic orchidopexy be preferred. It may be appropriate to avoid unnecessary application of the two-stage procedure, which has a higher cost and causes more pain for patients.

  18. On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests

    Directory of Open Access Journals (Sweden)

    Aaditya Ramdas

    2017-01-01

    Full Text Available Nonparametric two-sample or homogeneity testing is a decision theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests, including multivariate methods involving energy statistics and kernel-based maximum mean discrepancy, and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots, and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem to have not been noticed thus far. Given nonparametric two-sample testing’s classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others.

  19. Two-Stage Regularized Linear Discriminant Analysis for 2-D Data.

    Science.gov (United States)

    Zhao, Jianhua; Shi, Lei; Zhu, Ji

    2015-08-01

    Fisher linear discriminant analysis (LDA) involves within-class and between-class covariance matrices. For 2-D data such as images, regularized LDA (RLDA) can improve LDA due to the regularized eigenvalues of the estimated within-class matrix. However, it fails to consider the eigenvectors and the estimated between-class matrix. To improve these two matrices simultaneously, we propose in this paper a new two-stage method for 2-D data, namely a bidirectional LDA (BLDA) in the first stage and the RLDA in the second stage, where both BLDA and RLDA are based on the Fisher criterion that tackles correlation. BLDA performs the LDA under special separable covariance constraints that incorporate the row and column correlations inherent in 2-D data. The main novelty is that we propose a simple but effective statistical test to determine the subspace dimensionality in the first stage. As a result, the first stage reduces the dimensionality substantially while keeping the significant discriminant information in the data. This enables the second stage to perform RLDA in a much lower dimensional subspace, and thus improves the two estimated matrices simultaneously. Experiments on a number of 2-D synthetic and real-world data sets show that BLDA+RLDA outperforms several closely related competitors.

  20. Systematic mediastinal lymphadenectomy or mediastinal lymph node sampling in patients with pathological stage I NSCLC: a meta-analysis.

    Science.gov (United States)

    Dong, Siyuan; Du, Jiang; Li, Wenya; Zhang, Shuguang; Zhong, Xinwen; Zhang, Lin

    2015-02-01

    To evaluate the evidence comparing systematic mediastinal lymphadenectomy (SML) and mediastinal lymph node sampling (MLS) in the treatment of pathological stage I NSCLC using meta-analytical techniques. A literature search was undertaken until January 2014 to identify comparative studies evaluating 1-, 3-, and 5-year survival rates. The pooled odds ratios (OR) and the 95 % confidence intervals (95 % CI) were calculated with either the fixed or random effect models. One RCT study and four retrospective studies were included in our meta-analysis. These studies included a total of 711 patients: 317 treated with SML and 394 treated with MLS. SML and MLS did not demonstrate a significant difference in the 1-year survival rate. There were statistically significant differences in the 3-year (P = 0.03) and 5-year survival rates (P = 0.004), which favored SML. This meta-analysis suggests that in pathological stage I NSCLC, MLS can achieve an outcome similar to SML in terms of the 1-year survival rate. However, SML is superior to MLS in terms of the 3- and 5-year survival rates.
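A pooled OR of the kind reported here is commonly obtained by inverse-variance weighting of study-level log odds ratios under a fixed-effect model. The 2x2 tables below are hypothetical stand-ins, not the meta-analysis data:

```python
import math

def pooled_or_fixed(tables):
    """Inverse-variance fixed-effect pooling of 2x2 tables.
    Each table is (a, b, c, d) = (events, non-events) in group 1,
    then (events, non-events) in group 2. Returns (pooled OR, 95% CI)."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d      # Woolf variance of log OR
        w = 1 / var
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

# Two hypothetical studies (deaths / survivors per arm), for illustration only
tables = [(30, 70, 20, 80), (25, 75, 15, 85)]
pooled_or, ci = pooled_or_fixed(tables)
```

Zero cells would need a continuity correction (e.g. adding 0.5 to each count), which this sketch omits.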

  1. Two-stage precipitation of plutonium trifluoride

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-04-01

    Plutonium trifluoride was precipitated using a two-stage precipitation system. A series of precipitation experiments identified the significant process variables affecting precipitate characteristics. A mathematical precipitation model was developed which was based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter that can be used to control particle characteristics

  2. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    Science.gov (United States)

    Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.

    2011-09-01

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of the development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with BPRML test samples fabricated from a WSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  3. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    International Nuclear Information System (INIS)

    Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.

    2011-01-01

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of the development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  4. LOD score exclusion analyses for candidate genes using random population samples.

    Science.gov (United States)

    Deng, H W; Li, J; Recker, R R

    2001-05-01

    While extensive analyses have been conducted to test for the importance of candidate genes with random population samples, no formal analyses have been conducted to test against it. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed, and if a LOD score is ≤ -2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative, and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and is parallel to the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application testing the importance of the vitamin D receptor and estrogen receptor genes underlying the differential risk of osteoporotic fractures.
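The exclusion criterion (LOD ≤ -2.0 for a specified effect size) can be illustrated with a simple normal phenotype model: the LOD compares the likelihood under the hypothesized genetic effect with the no-effect null. Everything in this sketch (genotype coding, effect size, simulated sample) is made up to illustrate the criterion and is not the authors' likelihood model:

```python
import math
import random

def exclusion_lod(phenotypes, genotypes, effect, sd=1.0):
    """LOD comparing a hypothesized additive allelic effect at a candidate
    locus against the no-effect null, under a normal phenotype model.
    LOD <= -2.0 excludes effects at least this large."""
    def loglik(beta):
        c = -0.5 * math.log(2 * math.pi * sd * sd)
        return sum(c - (y - beta * g) ** 2 / (2 * sd * sd)
                   for y, g in zip(phenotypes, genotypes))
    return (loglik(effect) - loglik(0.0)) / math.log(10)   # log10 likelihood ratio

rng = random.Random(1)
n = 400
# Genotypes coded 0/1/2 copies of the allele (freq 0.5, Hardy-Weinberg)
genotypes = [rng.choice([0, 1, 1, 2]) for _ in range(n)]
# Phenotypes simulated with NO true genetic effect
phenotypes = [rng.gauss(0, 1) for _ in range(n)]
lod = exclusion_lod(phenotypes, genotypes, effect=0.5)
```

Because the data carry no genetic effect, the LOD for a hypothesized effect of 0.5 SD comes out strongly negative, so an effect of at least that size would be excluded.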

  5. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages.

    Science.gov (United States)

    Peterman, William; Brocato, Emily R; Semlitsch, Raymond D; Eggert, Lori S

    2016-01-01

    In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.

  6. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages

    Directory of Open Access Journals (Sweden)

    William Peterman

    2016-03-01

    Full Text Available In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC and isolation-by-distance (IBD among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.

  7. A New Concept of Two-Stage Multi-Element Resonant-/Cyclo-Converter for Two-Phase IM/SM Motor

    Directory of Open Access Journals (Sweden)

    Mahmud Ali Rzig Abdalmula

    2013-01-01

    Full Text Available The paper deals with a new concept for a power electronic two-phase system with a two-stage DC/AC/AC converter and a two-phase IM/PMSM motor. The proposed two-stage converter system comprises: an input resonant boost converter with AC output, a two-phase half-bridge cyclo-converter commutated by the HF AC input voltage, and an induction or synchronous motor. Such a system with an AC interlink, as a whole unit, has better properties than a 3-phase reference VSI inverter: higher efficiency due to soft switching of both converter stages, higher switching frequency, smaller dimensions and weight with fewer power semiconductor switches, and a better price. In comparison with currently used conventional system configurations, the proposed system features good efficiency of the electronic converters and also good torque overloading of two-phase AC induction or synchronous motors. The design of the two-stage multi-element resonant converter and results of simulation experiments are presented in the paper.

  8. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  9. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    OpenAIRE

    Chen, Yanju; Wang, Ye

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the s...

  10. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhehui [Los Alamos; Iaroshenko, O. [Los Alamos; Li, S. [Los Alamos; Liu, T. [Fermilab; Parab, N. [Argonne (main); Chen, W. W. [Purdue U.; Chu, P. [Los Alamos; Kenyon, G. [Los Alamos; Lipton, R. [Fermilab; Sun, K.-X. [Nevada U., Las Vegas

    2017-09-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
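The sparse-readout idea in case (a) can be modeled very simply in software: read each pixel address independently with some probability and keep the values found. This is only an illustration of random pixel access on a sparse frame, not the in-pixel circuit architecture described:

```python
import random

def random_pixel_sample(frame, fraction, seed=None):
    """ROPS-style sparse readout: each pixel address is read out
    independently with probability `fraction` (illustrative model)."""
    rng = random.Random(seed)
    return {addr: val for addr, val in frame.items() if rng.random() < fraction}

rng = random.Random(3)
# Sparse hypothetical 64x64 "X-ray frame": 40 random hit pixels, rest zero
frame = {(i, j): 0 for i in range(64) for j in range(64)}
for _ in range(40):
    frame[(rng.randrange(64), rng.randrange(64))] = rng.randint(50, 255)

sampled = random_pixel_sample(frame, fraction=0.25, seed=3)
```

When the true hit density is low, most of the image information survives the sub-sampling in expectation, which is the premise that makes the compressed-sensing reconstruction step feasible.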

  11. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2012-01-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general

  12. Randomized comparison of vaginal self-sampling by standard vs. dry swabs for Human papillomavirus testing

    International Nuclear Information System (INIS)

    Eperon, Isabelle; Vassilakos, Pierre; Navarria, Isabelle; Menoud, Pierre-Alain; Gauthier, Aude; Pache, Jean-Claude; Boulvain, Michel; Untiet, Sarah; Petignat, Patrick

    2013-01-01

    To evaluate if human papillomavirus (HPV) self-sampling (Self-HPV) using a dry vaginal swab is a valid alternative for HPV testing. Women attending colposcopy clinic were recruited to collect two consecutive Self-HPV samples: a Self-HPV using a dry swab (S-DRY) and a Self-HPV using a standard wet transport medium (S-WET). These samples were analyzed for HPV using real time PCR (Roche Cobas). Participants were randomized to determine the order of the tests. Questionnaires assessing preferences and acceptability for both tests were conducted. Subsequently, women were invited for colposcopic examination; a physician collected a cervical sample (physician-sampling) with a broom-type device and placed it into a liquid-based cytology medium. Specimens were then processed for the production of cytology slides and a Hybrid Capture HPV DNA test (Qiagen) was performed from the residual liquid. Biopsies were performed if indicated. Unweighted kappa statistics (κ) and McNemar tests were used to measure the agreement among the sampling methods. A total of 120 women were randomized. Overall HPV prevalence was 68.7% (95% Confidence Interval (CI) 59.3–77.2) by S-WET, 54.4% (95% CI 44.8–63.9) by S-DRY and 53.8% (95% CI 43.8–63.7) by HC. Among paired samples (S-WET and S-DRY), the overall agreement was good (85.7%; 95% CI 77.8–91.6) and the κ was substantial (0.70; 95% CI 0.57-0.70). The proportion of positive type-specific HPV agreement was also good (77.3%; 95% CI 68.2-84.9). No differences in sensitivity for cervical intraepithelial neoplasia grade one (CIN1) or worse between the two Self-HPV tests were observed. Women reported the two Self-HPV tests as highly acceptable. Self-HPV using dry swab transfer does not appear to compromise specimen integrity. Further study in a large screening population is needed. ClinicalTrials.gov: http://clinicaltrials.gov/show/NCT01316120
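The agreement measure reported above (κ = 0.70) is Cohen's unweighted kappa on paired positive/negative calls. A self-contained sketch with toy paired results, not the trial's data:

```python
def unweighted_kappa(pairs):
    """Cohen's unweighted kappa for paired categorical results, e.g. paired
    HPV calls (positive/negative) from two sampling methods.
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(pairs)
    po = sum(1 for a, b in pairs if a == b) / n           # observed agreement
    cats = {a for a, _ in pairs} | {b for _, b in pairs}
    pe = sum((sum(1 for a, _ in pairs if a == c) / n) *   # chance agreement
             (sum(1 for _, b in pairs if b == c) / n) for c in cats)
    return (po - pe) / (1 - pe)

# Toy paired calls (1 = HPV positive, 0 = negative), for illustration only
k_perfect = unweighted_kappa([(1, 1)] * 30 + [(0, 0)] * 20)
k_mixed = unweighted_kappa([(1, 1)] * 40 + [(0, 0)] * 40
                           + [(1, 0)] * 10 + [(0, 1)] * 10)
```

Perfect agreement gives κ = 1, while 80% raw agreement with balanced marginals gives κ = 0.6, illustrating how kappa discounts the agreement expected by chance.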

  13. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
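The two-stage (delayed-acceptance) MCMC step described above can be sketched as follows. This is a minimal illustration with a toy one-dimensional likelihood; the surrogate, step size and target are illustrative stand-ins, not the paper's subsurface model or polynomial chaos surrogate:

```python
import math
import random

random.seed(0)

def log_like(theta):
    # Expensive "true" log-likelihood (toy: standard normal target).
    return -0.5 * theta ** 2

def log_surrogate(theta):
    # Cheap response-surface approximation (toy: slightly biased version).
    return -0.5 * (theta ** 2) * 1.05

def two_stage_mh(n_steps, step=1.0):
    """Delayed-acceptance Metropolis: the surrogate screens proposals so the
    expensive likelihood is evaluated only for surrogate-approved moves."""
    theta = 0.0
    chain, expensive_calls = [], 0
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        # Stage 1: cheap surrogate screen filters out most bad proposals.
        if math.log(random.random()) < log_surrogate(prop) - log_surrogate(theta):
            # Stage 2: exact correction factor restores the true target.
            expensive_calls += 1
            a2 = (log_like(prop) - log_like(theta)) \
                 - (log_surrogate(prop) - log_surrogate(theta))
            if math.log(random.random()) < a2:
                theta = prop
        chain.append(theta)
    return chain, expensive_calls

chain, calls = two_stage_mh(5000)
print(calls, "expensive evaluations for", len(chain), "steps")
```

The computational gain reported in the abstract comes from the same mechanism: the expensive model is only run for proposals that survive the surrogate screen.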

  14. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    Science.gov (United States)

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  15. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
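The permanent-random-number mechanism behind this coordination can be shown in a few lines. The population size and inclusion probabilities below are invented, and this sketch shows plain Poisson sampling rather than the fixed-size Conditional Poisson variant the paper develops:

```python
import random

random.seed(1)

N = 10  # population size (illustrative)
# One permanent random number (PRN) per unit drives every selection.
prn = {i: random.random() for i in range(N)}

def poisson_sample(incl_prob):
    """Poisson sampling with PRNs: unit i is selected on any occasion
    where its permanent number falls below its inclusion probability."""
    return {i for i in range(N) if prn[i] < incl_prob[i]}

# Two occasions with different inclusion probabilities.
pi_t1 = {i: 0.4 for i in range(N)}
pi_t2 = {i: 0.5 for i in range(N)}

s1 = poisson_sample(pi_t1)
s2 = poisson_sample(pi_t2)
# Positive coordination: s1 is nested in s2 because every probability grew.
print(sorted(s1), sorted(s2))
```

Because the same PRNs are reused, overlap between occasions is maximal; the sample size, however, is random, which is exactly the drawback CP sampling removes.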

  16. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%); others were overpowered (real power >90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
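The "retro-fitting" idea, fixing N from an assumed nuisance parameter and then computing the power actually attained under the true one, can be sketched for the continuous-outcome case using the standard normal-approximation sample-size formula. The effect size and standard deviations below are illustrative, not the paper's:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

Z_ALPHA = 1.959964  # two-sided 5% significance level
Z_BETA = 0.841621   # nominal power of 80%

def per_group_n(delta, sigma):
    """Standard two-group sample size for a continuous outcome."""
    return math.ceil(2 * ((Z_ALPHA + Z_BETA) * sigma / delta) ** 2)

def real_power(delta, sigma_assumed, sigma_true):
    """Power actually attained when the SD assumed at the design stage
    differs from the true SD (the paper's retro-fitting step)."""
    n = per_group_n(delta, sigma_assumed)
    return norm_cdf(delta * math.sqrt(n / 2.0) / sigma_true - Z_ALPHA)

# A 25% under-estimate of the SD erodes power well below the nominal 80%.
print(round(real_power(delta=5, sigma_assumed=10, sigma_true=12.5), 3))
```

Repeating this with relative SD errors drawn from an empirical distribution reproduces the kind of under/over-powering tally the abstract reports.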

  17. Transport fuels from two-stage coal liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Benito, A.; Cebolla, V.; Fernandez, I.; Martinez, M.T.; Miranda, J.L.; Oelert, H.; Prado, J.G. (Instituto de Carboquimica CSIC, Zaragoza (Spain))

    1994-03-01

Four Spanish lignites and their vitrinite concentrates were evaluated for coal liquefaction. Correlations between vitrinite content and conversion in direct liquefaction were observed for the lignites but not for the vitrinite concentrates. The most reactive of the four coals was processed in two-stage liquefaction at a larger scale. First-stage coal liquefaction was carried out in a continuous unit at Clausthal University at a temperature of 400°C, at 20 MPa hydrogen pressure and with anthracene oil as a solvent. The coal conversion obtained was 75.41%, comprising 3.79% gases, 2.58% primary condensate and 69.04% heavy liquids. A hydroprocessing unit was built at the Instituto de Carboquimica for the second-stage coal liquefaction. Whole and deasphalted liquids from the first-stage liquefaction were processed at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The effects of liquid hourly space velocity (LHSV), temperature, gas/liquid ratio and catalyst on heteroatom removal from the liquids were studied; levels of 5 ppm of nitrogen and 52 ppm of sulphur were reached at 450°C, 10 MPa hydrogen pressure, 0.08 kg H2/kg feedstock and with the Harshaw HT-500E catalyst. The liquids obtained were hydroprocessed again at 420°C, 10 MPa hydrogen pressure and 0.06 kg H2/kg feedstock to hydrogenate the aromatic structures. Under these conditions the aromaticity was reduced considerably, and 39% of naphthas and 35% of kerosene fractions were obtained. 18 refs., 4 figs., 4 tabs.

  18. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

Full Text Available Chia-Chang Chien, Shu-Fen Huang, For-Wey Lung (Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; Calo Psychiatric Center, Pingtung County, Taiwan). Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We recruited 99 conscripted soldiers whose educational level was senior high school or lower. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off points of the CLR. The optimum single cut-off point was 66; the two cut-off points were 49 and 66. Compared with two-stage positive screening, two-stage window screening increased the area under the curve and the positive predictive value; moreover, its cost was 59% lower. Conclusion: The two-stage window screening is more accurate and economical than two-stage positive screening. Our results provide an example of the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised
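The window-screening rule can be sketched as a decision function. The interpretation that low CLR scores flag ID directly, high scores are cleared, and only the 49–66 window proceeds to the full WAIS-R is an assumption drawn from the abstract's cut-off points, not a protocol stated in it:

```python
def two_stage_window_screen(clr, lower=49, upper=66):
    """Two-stage 'window' screening sketch: scores below the lower cut-off
    are flagged directly, scores above the upper cut-off are cleared, and
    only the window in between is referred for the full (and expensive)
    second-stage WAIS-R assessment. Cut-offs follow the abstract."""
    if clr < lower:
        return "flag: likely intellectual disability"
    if clr > upper:
        return "clear: no further testing"
    return "refer: second-stage WAIS-R"

for score in (30, 55, 80):
    print(score, "->", two_stage_window_screen(score))
```

The economy comes from the middle branch: only the ambiguous window incurs the cost of the second-stage test.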

  19. Two-stage model of development of heterogeneous uranium-lead systems in zircon

    International Nuclear Information System (INIS)

    Mel'nikov, N.N.; Zevchenkov, O.A.

    1985-01-01

The behaviour of the isotope systems of multiphase zircons under two-stage disturbance is considered. The calculations show that linear correlations on the concordia diagram can be explained by two-stage opening of the U-Pb systems of cogenetic zircons, if zircon is treated as physically heterogeneous, with its different parts losing different proportions of accumulated radiogenic lead. The "metamorphism ages" given by such two-stage opened zircons are intermediate and have no geochronological significance, while the "crystallization ages" remain rather close to the real ones. Two-stage opened zircons can in some cases be diagnosed by the discordance of their crystal components.

  20. Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing.

    Science.gov (United States)

    Williamson, Graham R

    2003-11-01

    This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
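The randomization-test alternative proposed above can be sketched as a simple permutation test on a difference in means, which derives its P-value from random assignment alone rather than from an assumed random sample. The data below are invented for illustration:

```python
import random

random.seed(42)

def randomization_test(a, b, n_perm=10000):
    """Permutation test on the difference in group means: valid under
    random assignment, with no random-sampling assumption required."""
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)  # re-randomize group labels
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b)
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm  # two-sided P-value

treatment = [12.1, 14.3, 13.8, 15.2, 14.9, 13.1]
control = [11.0, 12.4, 11.8, 12.9, 11.2, 12.0]
p = randomization_test(treatment, control)
print(p)
```

Because the reference distribution is built from the actual re-randomizations, the resulting P-value is interpretable for convenience samples, which is precisely the situation the review found in 68% of the papers.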

  1. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
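For the linear case, a TSRI fit with bootstrap standard errors, one of the corrected options compared in the paper, might be sketched as follows on simulated data. The data-generating values (instrument strength, confounding, true effect 0.8) are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2000
g = rng.binomial(2, 0.3, n).astype(float)   # genotype instrument
u = rng.normal(size=n)                      # unobserved confounder
x = 0.5 * g + u + rng.normal(size=n)        # exposure
y = 0.8 * x + u + rng.normal(size=n)        # outcome, true causal effect 0.8

def tsri(g, x, y):
    """Linear two-stage residual inclusion: stage-1 residuals enter stage 2."""
    G = np.column_stack([np.ones_like(g), g])
    r = x - G @ np.linalg.lstsq(G, x, rcond=None)[0]   # stage-1 residuals
    X2 = np.column_stack([np.ones_like(x), x, r])
    return np.linalg.lstsq(X2, y, rcond=None)[0][1]    # coefficient on x

def bootstrap_se(g, x, y, n_boot=200):
    """Resample individuals and redo BOTH stages, so the standard error
    reflects stage-1 estimation error (unlike naive stage-2 errors)."""
    idx = np.arange(len(g))
    est = [tsri(g[i], x[i], y[i])
           for i in (rng.choice(idx, len(idx)) for _ in range(n_boot))]
    return float(np.std(est, ddof=1))

print(round(tsri(g, x, y), 3), round(bootstrap_se(g, x, y), 3))
```

Redoing both stages inside the bootstrap loop is what distinguishes this from reporting the unadjusted stage-2 standard error, the practice the paper argues against.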

  2. Two technicians apply insulation to S-II second stage

    Science.gov (United States)

    1964-01-01

    Two technicians apply insulation to the outer surface of the S-II second stage booster for the Saturn V moon rocket. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.

  3. A simulation-based interval two-stage stochastic model for agricultural nonpoint source pollution control through land retirement

    International Nuclear Information System (INIS)

    Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.

    2006-01-01

This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters being provided by the statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concern under random effluent discharge for a land retirement scheme through minimizing the expected value of long-term total economic and environmental cost. In addition, the uncertainties presented as interval numbers in the agriculture-water system can be effectively quantified with interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. Obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under various uncertainties.
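The two-stage structure, a first-stage decision made before the random outcome is known plus scenario-wise recourse, can be illustrated with a tiny deterministic-equivalent LP in the land-retirement spirit. All coefficients are invented, and the interval arithmetic and watershed simulation of the paper are omitted:

```python
from scipy.optimize import linprog

# Toy two-stage stochastic program: the first stage chooses hectares to
# retire (x); the second stage buys treatment (y_s) for whatever pollutant
# load exceeds the cap in each runoff scenario. Numbers are illustrative.
total_ha, cap = 100.0, 20.0
loss_per_ha = 0.4                       # forgone revenue per retired hectare
scenarios = [(0.4, 0.8), (0.6, 0.3)]    # (probability, load per farmed ha)

# Decision vector [x, y_wet, y_dry]: minimize loss*x + sum_s p_s * y_s.
c = [loss_per_ha, scenarios[0][0], scenarios[1][0]]
# Recourse: y_s >= rate_s*(total_ha - x) - cap, rewritten as A_ub @ v <= b_ub.
A_ub = [[-scenarios[0][1], -1.0, 0.0],
        [-scenarios[1][1], 0.0, -1.0]]
b_ub = [cap - rate * total_ha for _, rate in scenarios]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, total_ha), (0, None), (0, None)])
print(round(res.x[0], 2), round(res.fun, 2))  # retired hectares, expected cost
```

The optimizer retires just enough land that recourse treatment is needed only in the wet scenario, which is the expected-cost tradeoff the SITSP model formalizes (with interval-valued coefficients on top).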

  4. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability is also included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
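The RS strategy, repeatedly drawing source signatures from their distributions and re-solving the mass balance, can be sketched for the simplest two-source, one-marker case. All signature values below are invented (e.g. a stable-isotope-like marker):

```python
import random
import statistics

random.seed(7)

def source_fractions(n_draws=10000):
    """Random-sampling estimate of the fraction from source 1 when the
    end-member signatures themselves are variable (two sources, one marker)."""
    mix = -26.0                         # measured signature of the mixture
    fractions = []
    for _ in range(n_draws):
        d1 = random.gauss(-28.0, 1.0)   # source-1 signature, drawn each time
        d2 = random.gauss(-21.0, 1.5)   # source-2 signature, drawn each time
        f = (mix - d2) / (d1 - d2)      # mass-balance solution for f1
        if 0.0 <= f <= 1.0:             # keep physically meaningful draws
            fractions.append(f)
    return fractions

f1 = source_fractions()
print(round(statistics.mean(f1), 3), round(statistics.stdev(f1), 3))
```

The spread of the resulting distribution is the uncertainty that the single algebraic solution ignores, and its mean can drift from the plug-in answer, which is the bias the abstract highlights.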

  5. Marginal Bone Remodeling around healing Abutment vs Final Abutment Placement at Second Stage Implant Surgery: A 12-month Randomized Clinical Trial.

    Science.gov (United States)

    Nader, Nabih; Aboulhosn, Maissa; Berberi, Antoine; Manal, Cordahi; Younes, Ronald

    2016-01-01

The peri-implant bone level has been used as one of the criteria to assess the success of dental implants. It has been documented that the bone supporting two-piece implants undergoes resorption first following the second-stage surgery and later following abutment connection and delivery of the final prosthesis. The aim of this multicentric randomized clinical trial was to evaluate the crestal bone resorption around internal-connection dental implants using a new surgical protocol that aims to respect the biological distance, relying on the benefit of a friction-fit connection abutment (test group), compared with implants receiving conventional healing abutments at second-stage surgery (control group). A total of partially edentulous patients were consecutively treated at two private clinics, with two adjacent two-stage implants. Three months after the first surgery, one of the implants was randomly allocated to the control group and was uncovered using a healing abutment, while the other implant received a standard final abutment that was seated and tightened to 30 Ncm. At each step of the prosthetic try-in, the abutment in the test group was removed and then retightened to 30 Ncm. Horizontal bone changes were assessed using periapical radiographs immediately after implant placement and at the 3 (second-stage surgery), 6, 9 and 12 months follow-up examinations. At 12 months follow-up, no implant failure was reported in either group. In the control group, the mean peri-implant bone resorption was 0.249 ± 0.362 at M3, 0.773 ± 0.413 at M6, 0.904 ± 0.36 at M9 and 1.047 ± 0.395 at M12. The test group revealed a statistically significant lower marginal bone loss of 20.88% at M3 (0.197 ± 0.262), 22.25% at M6 (0.601 ± 0.386), 24.23% at M9 (0.685 ± 0.341) and 19.2% at M12 (0.846 ± 0.454). The results revealed that bone loss increased over time, with the greatest change occurring between 3 and 6 months. Alveolar bone loss was significantly greater in the

  6. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
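A minimal illustration of the under-estimation problem motivating the report: with sparse samples, even the largest of n draws rarely reaches a far tail quantile, so naive plug-in estimates under-cover. The chance that the maximum of n draws bounds the true 97.5th percentile is exactly 1 - 0.975^n:

```python
import random
import statistics

random.seed(3)

def coverage_of_sample_max(n=5, trials=10000):
    """How often does the max of n draws bound the true 97.5th percentile?
    Theory gives 1 - 0.975**n, so sparse samples under-cover badly."""
    true_q = statistics.NormalDist().inv_cdf(0.975)
    hits = sum(max(random.gauss(0, 1) for _ in range(n)) >= true_q
               for _ in range(trials))
    return hits / trials

c = coverage_of_sample_max(5)
print(round(c, 3))  # theory: 1 - 0.975**5 ≈ 0.119
```

With only five samples, the naive bound covers the true quantile about 12% of the time; the report's methods trade away some of that optimism for quantified conservatism.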

  7. Two sampling techniques for game meat

    OpenAIRE

    van der Merwe, Maretha; Jooste, Piet J.; Hoffman, Louw C.; Calitz, Frikkie J.

    2013-01-01

A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g) and square centimetres (cm²) for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12) that statistically proved the two measuring units correlated. The two sampling...

  8. Investigation of Power Losses of Two-Stage Two-Phase Converter with Two-Phase Motor

    Directory of Open Access Journals (Sweden)

    Michal Prazenica

    2011-01-01

Full Text Available The paper deals with the determination of the losses of a two-stage power electronic system with a two-phase variable orthogonal output. The simulation is focused on the investigation of losses in the converter during one period in steady-state operation. Modeling and simulation of two matrix converters with an R-L load is shown in the paper. The simulation results confirm very good time-waveforms of the phase current, and the system seems to be suitable for low-cost applications in the automotive/aerospace industries and in applications with high-frequency voltage sources.

  9. A tale of two "forests": random forest machine learning AIDS tropical forest carbon mapping.

    Science.gov (United States)

    Mascaro, Joseph; Asner, Gregory P; Knapp, David E; Kennedy-Bowdoin, Ty; Martin, Roberta E; Anderson, Christopher; Higgins, Mark; Chadwick, K Dana

    2014-01-01

Accurate and spatially-explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely-sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, both with and without spatial contextual modeling by including--in the latter case--x and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (i.e., called "out-of-bag"), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best-performing run of Random Forest, and explained 59% of LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% by Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha(-1) when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation.
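The "spatial context" variant, which simply adds the x and y coordinates as predictors, can be sketched with scikit-learn on synthetic data. The covariate, the spatial trend, and the noise levels are invented; this is not the paper's LiDAR pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

n = 2000
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)  # plot coordinates
elev = rng.uniform(0, 1, n)                            # a remote-sensing covariate
# Synthetic carbon stock with a smooth spatial trend the covariate misses.
carbon = 50 + 20 * elev + 0.3 * x + 0.2 * y + rng.normal(0, 5, n)

train = rng.random(n) < 0.5                            # hold out half for validation
feats_plain = np.column_stack([elev])
feats_spatial = np.column_stack([elev, x, y])          # add x, y as predictors

scores = {}
for name, F in [("no context", feats_plain), ("spatial context", feats_spatial)]:
    model = RandomForestRegressor(n_estimators=50, random_state=0)
    model.fit(F[train], carbon[train])
    scores[name] = r2_score(carbon[~train], model.predict(F[~train]))
print(scores)
```

Note the explicit held-out validation split, mirroring the paper's point that out-of-bag scores alone are insufficient for spatially structured data.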

  11. Device for two-stage cementing of casing

    Energy Technology Data Exchange (ETDEWEB)

    Kudimov, D A; Goncharevskiy, Ye N; Luneva, L G; Shchelochkov, S N; Shil' nikova, L N; Tereshchenko, V G; Vasiliev, V A; Volkova, V V; Zhdokov, K I

    1981-01-01

    A device is claimed for two-stage cementing of casing. It consists of a body with lateral plugging vents, upper and lower movable sleeves, a check valve with axial channels that's situated in the lower sleeve, and a displacement limiting device for the lower sleeve. To improve the cementing process of the casing by preventing overflow of cementing fluids from the annular space into the first stage casing, the limiter is equipped with a spring rod that is capable of covering the axial channels of the check valve while it's in an operating mode. In addition, the rod in the upper part is equipped with a reinforced area under the axial channels of the check valve.

  12. MVP Chemotherapy and Hyperfractionated Radiotherapy for Stage III Unresectable Non-Small Cell Lung Cancer, Randomized for Maintenance Chemotherapy vs. Observation: Preliminary Report

    International Nuclear Information System (INIS)

    Choi, Euk Kyung; Chang, Hye Sook; Suh, Cheol Won

    1991-01-01

To evaluate the effect of MVP chemotherapy and hyperfractionated radiotherapy in Stage III unresectable non-small cell lung cancer (NSCLC), the authors have conducted a prospective randomized study since January 1991. Stage IIIa or IIIb unresectable NSCLC patients were treated with hyperfractionated radiotherapy (120 cGy/fx BID) up to 6,500 cGy following 3 cycles of induction MVP (mitomycin C 6 mg/m², vinblastine 6 mg/m², cisplatin 60 mg/m²) and randomized to either observation or 3 cycles of maintenance MVP chemotherapy. Until August 1991, 18 patients were registered in this study: 4 cases were stage IIIa and 14 were stage IIIb. Among the 18 cases, 2 were lost after 2 cycles of chemotherapy, and 16 were analyzed for this preliminary report. The response rate of induction chemotherapy was 62.5%: partial response, 50%, and minimal response, 12.5%. The residual tumor of one partial responder completely disappeared after radiotherapy. Among the 6 cases who progressed during induction chemotherapy, 4 also progressed after radiotherapy. All patients tolerated BID radiotherapy without a definite increase in acute complications compared with the conventional radiotherapy group. However, at the time of this report, one patient had died two months after the completion of radiotherapy because of a treatment-related complication. Although longer follow-up is needed, the authors are encouraged by the high response rate and acceptable toxicity of this treatment, and believe this study is worth continuing.

  13. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy-balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs.

  14. Growth stage-based modulation in physiological and biochemical attributes of two genetically diverse wheat (Triticum aestivum L.) cultivars grown in salinized hydroponic culture.

    Science.gov (United States)

    Ashraf, Muhammad Arslan; Ashraf, Muhammad

    2016-04-01

A hydroponic experiment was conducted to appraise variation in the salt tolerance potential of two wheat cultivars (salt-tolerant S-24 and moderately salt-sensitive MH-97) at different growth stages. These two wheat cultivars are not genetically related, as evident from random amplified polymorphic DNA (RAPD) analysis, which revealed 28% genetic diversity. Salinity stress caused a marked reduction in the grain yield of both wheat cultivars. However, cv. S-24 was superior to cv. MH-97 in maintaining grain yield under saline stress. Furthermore, salinity caused significant variation in different physiological attributes measured at different growth stages. Salt stress caused a considerable reduction in different water relation attributes of the wheat plants. A significant reduction in leaf water, osmotic, and turgor potentials was recorded in both wheat cultivars at different growth stages. The maximal reduction in leaf water potential was recorded at the reproductive stage in both wheat cultivars. In contrast, maximal turgor potential was observed at the boot stage. The salt-induced adverse effects on different water relation attributes were more prominent in cv. MH-97 than in cv. S-24. Salt stress caused a substantial decrease in glycine betaine and alpha-tocopherol. These biochemical attributes exhibited significant salt-induced variation at different growth stages in both wheat cultivars. For example, maximal accumulation of glycine betaine was evident at the early growth stages (vegetative and boot). However, cv. S-24 showed higher accumulation of this organic osmolyte, and this could be the reason for its maintenance of higher turgor than cv. MH-97 under stress conditions. Salt stress significantly increased the endogenous levels of toxic ions (Na(+) and Cl(-)) and decreased essential cations (K(+) and Ca(2+)) in both wheat cultivars at different growth stages. Furthermore, K(+)/Na(+) and Ca(2+)/Na(+) ratios

  15. The effect of uterine fundal pressure on the duration of the second stage of labor: a randomized controlled trial.

    Science.gov (United States)

    Api, Olus; Balcin, Muge Emeksiz; Ugurel, Vedat; Api, Murat; Turan, Cem; Unal, Orhan

    2009-01-01

    To determine the effect of uterine fundal pressure on shortening the second stage of labor and on the fetal outcome. Randomized controlled trial. Teaching and research hospital. One hundred ninety-seven women between 37 and 42 gestational weeks with singleton cephalic presentation admitted to the delivery unit. Random allocation into groups with or without manual fundal pressure during the second stage of labor. The primary outcome measure was the duration of the second stage of labor. Secondary outcome measures were umbilical artery pH, HCO3-, base excess, pO2, pCO2 values and the rate of instrumental delivery, severe maternal morbidity/mortality, neonatal trauma, admission to neonatal intensive care unit, and neonatal death. There were no significant differences in the mean duration of the second stage of labor and secondary outcome measures except for mean pO2 which was lower and mean pCO2 which was higher in the fundal pressure group. Nevertheless, the values still remained within normal ranges and there were no neonates with an Apgar score <7 in either of the groups. Application of fundal pressure on a delivering woman was ineffective in shortening the second stage of labor.

  16. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa

    Science.gov (United States)

    2013-01-01

    Background Dairy effluents contain a high organic load, and unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern as it deteriorates their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be potentially employed to treat the high organic content, which offers numerous benefits along with waste water treatment. Methods A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Results Whilst NH4+-N was completely removed, a 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in zebrafish, used as a model organism, at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer for rice seeds, and a 30% increase in root and shoot length was observed after the addition of biomass to the rice plants. Conclusions We conclude that the two-stage treatment of dairy effluent is highly effective in removal of BOD and COD besides nutrients like nitrates and phosphates. The treatment also allows treated waste water to be discharged safely into the receiving water bodies, since it is non-toxic for aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and offers great potential as a biofertilizer. PMID:24355316

  17. Repetitive, small-bore two-stage light gas gun

    International Nuclear Information System (INIS)

    Combs, S.K.; Foust, C.R.; Fehling, D.T.; Gouge, M.J.; Milora, S.L.

    1991-01-01

    A repetitive two-stage light gas gun for high-speed pellet injection has been developed at Oak Ridge National Laboratory. In general, applications of the two-stage light gas gun have been limited to only single shots, with a finite time (at least minutes) needed for recovery and preparation for the next shot. The new device overcomes problems associated with repetitive operation, including rapidly evacuating the propellant gases, reloading the gun breech with a new projectile, returning the piston to its initial position, and refilling the first- and second-stage gas volumes to the appropriate pressure levels. In addition, some components are subjected to and must survive severe operating conditions, which include rapid cycling to high pressures and temperatures (up to thousands of bars and thousands of kelvins) and significant mechanical shocks. Small plastic projectiles (4-mm nominal size) and helium gas have been used in the prototype device, which was equipped with a 1-m-long pump tube and a 1-m-long gun barrel, to demonstrate repetitive operation (up to 1 Hz) at relatively high pellet velocities (up to 3000 m/s). The equipment is described, and experimental results are presented. 124 refs., 6 figs., 5 tabs

  18. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, TPMHV can be considered as a safe and effective alternative to standard TPCS.

  19. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  20. Magnetoresistance of a two-dimensional electron gas in a random magnetic field

    DEFF Research Database (Denmark)

    Smith, Anders; Taboryski, Rafael Jozef; Hansen, Luise Theil

    1994-01-01

    We report magnetoresistance measurements on a two-dimensional electron gas made from a high-mobility GaAs/AlxGa1-xAs heterostructure, where the externally applied magnetic field was expelled from regions of the semiconductor by means of superconducting lead grains randomly distributed on the surface of the sample. A theoretical explanation in excellent agreement with the experiment is given within the framework of the semiclassical Boltzmann equation. © 1994 The American Physical Society.

  1. Spatial Distribution and Sampling Plans for Grapevine Plant Canopy-Inhabiting Scaphoideus titanus (Hemiptera: Cicadellidae) Nymphs.

    Science.gov (United States)

    Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann

    2016-04-01

    The paper deals with the study of the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and data were subjected to an ANOVA. There were no significant differences in density either among the strata within the vineyard or between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoot- and productive shoot-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs' spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. In additional vineyards, data were collected and used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability 1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, 2) of the enumerative sampling plan, and 3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost efficient than enumerative sampling.
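    The mean crowding to mean regression behind sampling plans of this kind (Iwao's patchiness regression) can be sketched in a few lines. This is an illustrative sketch only: the function names, the synthetic count data, and the precision level are assumptions, not values from the record.

    ```python
    import statistics

    def mean_crowding(counts):
        """Lloyd's mean crowding m* = m + (s^2/m - 1) for one set of leaf counts."""
        m = statistics.fmean(counts)
        s2 = statistics.variance(counts)
        return m + (s2 / m - 1.0)

    def iwao_regression(samples):
        """Least-squares fit of m* = alpha + beta * m across sampling occasions.

        beta > 1 indicates an aggregated spatial distribution."""
        ms = [statistics.fmean(c) for c in samples]
        mstars = [mean_crowding(c) for c in samples]
        mbar, ybar = statistics.fmean(ms), statistics.fmean(mstars)
        beta = (sum((x - mbar) * (y - ybar) for x, y in zip(ms, mstars))
                / sum((x - mbar) ** 2 for x in ms))
        return ybar - beta * mbar, beta  # (alpha, beta)

    def enumerative_n(alpha, beta, mean_density, precision):
        """Iwao's required sample size for a fixed relative precision D."""
        return ((alpha + 1.0) / mean_density + (beta - 1.0)) / precision ** 2
    ```

    For example, with hypothetical coefficients alpha = 0.5 and beta = 1.5, a mean density of 2 nymphs per leaf, and a relative precision of D = 0.25, the enumerative plan calls for 20 sampling units.
    
    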

  2. Marginal bone level in two Danish cross-sectional population samples in 1997-1998 and 2007-2008.

    Science.gov (United States)

    Bahrami, Golnosh; Vaeth, Michael; Wenzel, Ann; Isidor, Flemming

    2018-04-12

    The aim of this study was to compare the marginal bone level of two randomly selected population samples from 1997/1998 and 2007/2008, with special emphasis on the role of smoking habits and gender. Two cross-sectional randomly selected population samples [1997/1998 (N = 616) and 2007/2008 (N = 396)] were analysed with respect to the marginal bone level. The marginal bone level was measured in full-mouth intraoral radiographs. Information on smoking was gathered using questionnaires. Multiple regression analysis was used in order to adjust for correlating factors (gender, age, smoking habits and number of teeth). After adjusting for confounding factors, the population sample from 2007/2008 had a slightly, but statistically significantly, more reduced marginal bone level (on average 0.15 mm) than the population sample from 1997/1998. Men had a more reduced marginal bone level than women (0.12 mm). Smokers in both population samples had a more reduced marginal bone level than non-smokers (0.39 mm vs. 0.12 mm for 1997/1998; 0.65 mm vs. 0.16 mm for 2007/2008). In these populations, sampled 10 years apart, the 2007/2008 population sample had a slightly more reduced marginal bone level than the 1997/1998 population sample. Men had a more reduced marginal bone level than women, and smoking is considered a major risk factor for a reduced marginal bone level.

  3. Research on Two-channel Interleaved Two-stage Paralleled Buck DC-DC Converter for Plasma Cutting Power Supply

    DEFF Research Database (Denmark)

    Yang, Xi-jun; Qu, Hao; Yao, Chen

    2014-01-01

    For high-power plasma power supplies, the multi-channel interleaved multi-stage paralleled Buck DC-DC converter becomes the first choice due to its high efficiency and flexibility. In the paper, a two-channel interleaved two-stage paralleled Buck DC-DC converter powered by a three-phase AC power supply

  4. Production of endo-pectate lyase by two stage cultivation of Erwinia carotovora

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, Satoshi; Kobayashi, Yoshiaki

    1987-02-26

    The productivity of endo-pectate lyase from Erwinia carotovora GIR 1044 was found to be greatly improved by two stage cultivation: in the first stage the bacterium was grown with an inducing carbon source, e.g., pectin, and in the second stage it was cultivated with glycerol, xylose, or fructose with the addition of monosodium L-glutamate as nitrogen source. In the two stage cultivation using pectin or glycerol as the carbon source the enzyme activity reached 400 units/ml, almost 3 times as much as that of one stage cultivation in a 10 liter fermentor. Using two stage cultivation in the 200 liter fermentor improved enzyme productivity over that in the 10 liter fermentor, with 500 units/ml of activity. Compared with the cultivation in Erlenmeyer flasks, fermentor cultivation improved enzyme productivity. The optimum cultivating conditions were agitation of 480 rpm with aeration of 0.5 vvm at 28 °C. (4 figs, 4 tabs, 14 refs)

  5. Hypospadias repair: Byar's two stage operation revisited.

    Science.gov (United States)

    Arshad, A R

    2005-06-01

    Hypospadias is a congenital deformity characterised by an abnormally located urethral opening, which can occur anywhere proximal to its normal location, from the ventral surface of the glans penis to the perineum. Many operations have been described for the management of this deformity. One hundred and fifteen patients with hypospadias were treated at the Department of Plastic Surgery, Hospital Kuala Lumpur, Malaysia between September 1987 and December 2002, of whom 100 had Byar's procedure performed on them. The age of the patients ranged from neonates to 26 years old. Sixty-seven patients had penoscrotal (58%), 20 had proximal penile (18%), 13 had distal penile (11%) and 15 had subcoronal hypospadias (13%). Operations performed were Byar's two-staged (100), Bracka's two-staged (11), flip-flap (2) and MAGPI (2). The most common complication encountered following hypospadias surgery was urethral fistula, at a rate of 18%. There is a higher incidence of proximal hypospadias in the Malaysian community. Byar's procedure is a very versatile technique and can be used for all types of hypospadias. The fistula rate was 18% in this series.

  6. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  7. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer

  8. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    Science.gov (United States)

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
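    The two-stage approach can be illustrated with the standard fixed-effect inverse-variance pooling of per-database estimates. This is a hedged sketch: the per-database log odds ratios and standard errors below are invented for illustration, and the study itself fits matched case-control models within each database before pooling.

    ```python
    import math

    def pool_fixed_effect(estimates):
        """Fixed-effect inverse-variance pooling of per-database (log OR, SE) pairs."""
        weights = [1.0 / se ** 2 for _, se in estimates]
        logor = sum(w * b for w, (b, _) in zip(weights, estimates)) / sum(weights)
        se = math.sqrt(1.0 / sum(weights))
        return logor, se

    # Hypothetical per-database log odds ratios and standard errors (stage 1),
    # combined in stage 2:
    per_db = [(0.41, 0.20), (0.35, 0.15), (0.52, 0.30)]
    logor, se = pool_fixed_effect(per_db)
    lo, hi = logor - 1.96 * se, logor + 1.96 * se
    print(f"pooled OR = {math.exp(logor):.2f}, 95% CI [{math.exp(lo):.2f}, {math.exp(hi):.2f}]")
    ```

    A one-stage analysis would instead fit a single model to the pooled patient-level data; as the record notes, ignoring clustering in that model biases the estimate when the exposure effect is heterogeneous across databases.
    
    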

  9. Monitoring oil persistence on beaches : SCAT versus stratified random sampling designs

    International Nuclear Information System (INIS)

    Short, J.W.; Lindeberg, M.R.; Harris, P.M.; Maselko, J.M.; Pella, J.J.; Rice, S.D.

    2003-01-01

    In the event of a coastal oil spill, shoreline clean-up assessment teams (SCAT) commonly rely on visual inspection of the entire affected area to monitor the persistence of the oil on beaches. Occasionally, pits are excavated to evaluate the persistence of subsurface oil. This approach is practical for directing clean-up efforts directly following a spill. However, sampling of the 1989 Exxon Valdez oil spill in Prince William Sound 12 years later has shown that visual inspection combined with pit excavation does not offer estimates of contaminated beach area or stranded oil volumes. This information is needed to statistically evaluate the significance of change with time. Assumptions regarding the correlation of visually-evident surface oil and cryptic subsurface oil are usually not evaluated as part of the SCAT mandate. Stratified random sampling can avoid such problems and could produce precise estimates of oiled area and volume that allow for statistical assessment of major temporal trends and the extent of the impact. The 2001 sampling of the shoreline of Prince William Sound showed that 15 per cent of surface oil occurrences were associated with subsurface oil. This study demonstrates the usefulness of the stratified random sampling method and shows how sampling design parameters impact statistical outcome. Power analysis based on the study results indicates that optimum power is derived when unnecessary stratification is avoided. It was emphasized that sampling effort should be balanced between choosing sufficient beaches for sampling and the intensity of sampling.

  10. FIRST DIRECT EVIDENCE OF TWO STAGES IN FREE RECALL

    Directory of Open Access Journals (Sweden)

    Eugen Tarnow

    2015-12-01

    I find that exactly two stages can be seen directly in sequential free recall distributions. These distributions show that the first three recalls come from the emptying of working memory, recalls 6 and above come from a second stage, and the 4th and 5th recalls are mixtures of the two. A discontinuity, a rounded step function, is shown to exist in the fitted linear slope of the recall distributions as the recall shifts from the emptying of working memory (positive slope) to the second stage (negative slope). The discontinuity leads to a first estimate of the capacity of working memory at 4-4.5 items. The total recall is shown to be a linear combination of the content of working memory and items recalled in the second stage, with 3.0-3.9 items coming from working memory, a second estimate of the capacity of working memory. A third, separate upper limit on the capacity of working memory is found (3.06 items), corresponding to the requirement that the content of working memory cannot exceed the total recall, item by item. This third limit is presumably the best limit on the average capacity of unchunked working memory. The second stage of recall is shown to be reactivation: the average times to retrieve additional items in free recall obey a linear relationship as a function of the recall probability, which mimics recognition and cued recall, both mechanisms using reactivation (Tarnow, 2008).

  11. Extending cluster lot quality assurance sampling designs for surveillance programs.

    Science.gov (United States)

    Hund, Lauren; Pagano, Marcello

    2014-07-20

    Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance on the basis of the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible nonparametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. Copyright © 2014 John Wiley & Sons, Ltd.
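    A conventional way to inflate an LQAS sample size for a two-stage cluster design is to multiply by the design effect DEFF = 1 + (m - 1)ρ, where m is the per-cluster take and ρ the intracluster correlation. The sketch below illustrates that arithmetic only; the numbers are assumptions, and the paper's own nonparametric procedure is more flexible than this textbook formula.

    ```python
    import math

    def design_effect(cluster_size, icc):
        """Standard design effect for equal-sized clusters."""
        return 1.0 + (cluster_size - 1) * icc

    def inflate_lqas_n(n_srs, cluster_size, icc):
        """Inflate a simple-random-sample LQAS size for a two-stage cluster design.

        Returns the inflated sample size and the number of clusters to visit."""
        n = math.ceil(n_srs * design_effect(cluster_size, icc))
        clusters = math.ceil(n / cluster_size)
        return n, clusters

    # Hypothetical survey: an SRS design needing 33 children, 10 children per
    # village, and an assumed intracluster correlation of 0.05:
    n, clusters = inflate_lqas_n(n_srs=33, cluster_size=10, icc=0.05)
    ```

    Here DEFF = 1.45, so the cluster design needs 48 children drawn from 5 villages; the classification threshold is then recomputed for the inflated size.
    
    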

  12. Modal intersection types, two-level languages, and staged synthesis

    DEFF Research Database (Denmark)

    Henglein, Fritz; Rehof, Jakob

    2016-01-01

    A typed λ-calculus, λ∩ ⎕, is introduced, combining intersection types and modal types. We develop the metatheory of λ∩ ⎕, with particular emphasis on the theory of subtyping and distributivity of the modal and intersection type operators. We describe how a stratification of λ∩ ⎕ leads to a multi-linguistic framework for staged program synthesis, where metaprograms are automatically synthesized which, when executed, generate code in a target language. We survey the basic theory of staged synthesis and illustrate by example how a two-level language theory specialized from λ∩ ⎕ can be used to understand the process of staged synthesis.

  13. Empirical study of classification process for two-stage turbo air classifier in series

    Science.gov (United States)

    Yu, Yuan; Liu, Jiaxiang; Li, Gang

    2013-05-01

    Suitable process parameters for a two-stage turbo air classifier are important for obtaining ultrafine powder with a narrow particle-size distribution; however, little has been published internationally on the classification process for the two-stage turbo air classifier in series. The influence of the process parameters of a two-stage turbo air classifier in series on classification performance is empirically studied using aluminum oxide powders as the experimental material. The experimental results show the following: 1) When the rotor cage rotary speed of the first-stage classifier is increased from 2 300 r/min to 2 500 r/min with a constant rotor cage rotary speed of the second-stage classifier, classification precision is increased from 0.64 to 0.67. However, in this case, the final ultrafine powder yield is decreased from 79% to 74%, which means the classification precision and the final ultrafine powder yield can be regulated through adjusting the rotor cage rotary speed of the first-stage classifier. 2) When the rotor cage rotary speed of the second-stage classifier is increased from 2 500 r/min to 3 100 r/min with a constant rotor cage rotary speed of the first-stage classifier, the cut size is decreased from 13.16 μm to 8.76 μm, which means the cut size of the ultrafine powder can be regulated through adjusting the rotor cage rotary speed of the second-stage classifier. 3) When the feeding speed is increased from 35 kg/h to 50 kg/h, the "fish-hook" effect is strengthened, which makes the ultrafine powder yield decrease. 4) To weaken the "fish-hook" effect, the equalization of the two-stage wind speeds or the combination of a high first-stage wind speed with a low second-stage wind speed should be selected. This empirical study provides a criterion for process parameter configurations of a two-stage or multi-stage classifier in series, which offers a theoretical basis for practical production.

  14. Two-stage, high power X-band amplifier experiment

    International Nuclear Information System (INIS)

    Kuang, E.; Davis, T.J.; Ivers, J.D.; Kerslick, G.S.; Nation, J.A.; Schaechter, L.

    1993-01-01

    At output powers in excess of 100 MW the authors have noted the development of sidebands in many TWT structures. To address this problem an experiment using a narrow bandwidth, two-stage TWT is in progress. The TWT amplifier consists of a dielectric (ε = 5) slow-wave structure, a 30 dB sever section, and a periodic metallic structure with an 8.8-9.0 GHz passband. The electron beam used in this experiment is a 950 kV, 1 kA, 50 ns pencil beam propagating along an applied axial field of 9 kG. The dielectric first stage has a maximum gain of 30 dB measured at 8.87 GHz, with output powers of up to 50 MW in the TM01 mode. In these experiments the dielectric amplifier output power is about 3-5 MW and the output power of the complete two-stage device is ∼160 MW at the input frequency. The sidebands detected in earlier experiments have been eliminated. The authors also report measurements of the energy spread of the electron beam resulting from the amplification process. These experimental results are compared with MAGIC code simulations and analytic work they have carried out on such devices

  15. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
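    The centering and scaling of a largest sample eigenvalue toward the TW law can be sketched using Johnstone's classical constants for white Wishart matrices. This is an illustrative sketch under stated assumptions: the critical value and the variant of the centering constants are taken from the standard white-noise formulation, not from this paper, which additionally scales and centers its REML estimates empirically.

    ```python
    import math

    TW1_CRIT_95 = 0.98  # approximate 95th percentile of the Tracy-Widom (beta = 1) law

    def tw_scaled_statistic(largest_eig, n, p):
        """Johnstone's centering and scaling for the largest eigenvalue of a
        p-variate Wishart matrix formed from n observations of unit white noise.

        largest_eig is assumed to be on the Wishart scale, i.e. n times the
        largest eigenvalue of the sample covariance matrix."""
        mu = (math.sqrt(n - 1) + math.sqrt(p)) ** 2
        sigma = ((math.sqrt(n - 1) + math.sqrt(p))
                 * (1.0 / math.sqrt(n - 1) + 1.0 / math.sqrt(p)) ** (1.0 / 3.0))
        return (largest_eig - mu) / sigma

    def exceeds_sampling_error(largest_eig, n, p):
        """True if the eigenvalue is larger than pure sampling noise would explain."""
        return tw_scaled_statistic(largest_eig, n, p) > TW1_CRIT_95
    ```

    For example, with n = 100 and p = 10 the null expectation of the largest Wishart eigenvalue is about 172, so an observed eigenvalue of 200 scales to roughly 2.9 and would be flagged as exceeding sampling error.
    
    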

  16. Novel rare missense variations and risk of autism spectrum disorder: whole-exome sequencing in two families with affected siblings and a two-stage follow-up study in a Japanese population.

    Directory of Open Access Journals (Sweden)

    Jun Egawa

    Rare inherited variations in multiplex families with autism spectrum disorder (ASD) are suggested to play a major role in the genetic etiology of ASD. To further investigate the role of rare inherited variations, we performed whole-exome sequencing (WES) in two families, each with three affected siblings. We also performed a two-stage follow-up case-control study in a Japanese population. WES of the six affected siblings identified six novel rare missense variations. Among these variations, CLN8 R24H was inherited in one family by three affected siblings from an affected father and thus co-segregated with ASD. In the first stage of the follow-up study, we genotyped the six novel rare missense variations identified by WES in 241 patients and 667 controls (the Niigata sample). Only CLN8 R24H had a higher mutant allele frequency in patients (1/482) compared with controls (1/1334). In the second stage, this variation was further genotyped, yet was not detected in a sample of 309 patients and 350 controls (the Nagoya sample). In the combined Niigata and Nagoya samples, there was no significant association (odds ratio = 1.8, 95% confidence interval = 0.1-29.6). These results suggest that CLN8 R24H plays a role in the genetic etiology of ASD, at least in a subset of ASD patients.

  17. Critical Behaviour of a Two-Dimensional Random Antiferromagnet

    DEFF Research Database (Denmark)

    Als-Nielsen, Jens Aage; Birgeneau, R. J.; Guggenheim, H. J.

    1976-01-01

    A neutron scattering study of the order parameter, correlation length and staggered susceptibility of the two-dimensional random antiferromagnet Rb2Mn0.5Ni0.5F4 is reported. The system is found to exhibit a well-defined phase transition with critical exponents identical to those of the isomorphous pure materials K2NiF4 and K2MnF4. Thus, in these systems, which have the asymptotic critical behaviour of the two-dimensional Ising model, randomness has no measurable effect on the phase-transition behaviour.

  18. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMMING APPROACH

    Directory of Open Access Journals (Sweden)

    P.S.A. Cunha

    Full Text Available ABSTRACT Here, we propose a novel methodology for replenishment and control of inventories in two-echelon logistics networks using two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer service, we introduce a variable rationing rule to allocate quantities of the item in short supply. The devised models are reformulated into their deterministic equivalents, resulting in nonlinear mixed-integer programming models, which are then approximately linearized. To deal with the uncertain nature of the item demand levels, we apply a Monte Carlo simulation-based method to generate finite and discrete sets of scenarios. Moreover, the proposed approach does not require restrictive assumptions about the behavior of the probabilistic phenomena, as do several existing methods in the literature. Numerical experiments with the proposed approach on randomly generated instances of the problem show results with errors around 1%.
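The deterministic-equivalent idea behind such two-stage models can be sketched on a toy single-item problem (all numbers hypothetical, and far simpler than the paper's network model): the first-stage order quantity is chosen before demand is known, and per-scenario recourse variables absorb shortages, weighted by scenario probabilities in the objective.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discrete demand scenarios with probabilities.
demand = np.array([80.0, 100.0, 120.0])
prob = np.array([0.3, 0.4, 0.3])
c_order, c_short = 1.0, 4.0  # first-stage unit cost, shortage penalty

S = len(demand)
# Decision vector: [x, y_1, ..., y_S];
# minimize c_order*x + sum_s prob_s * c_short * y_s
cost = np.concatenate(([c_order], c_short * prob))

# Recourse constraint y_s >= demand_s - x, written as -x - y_s <= -demand_s
A_ub = np.zeros((S, S + 1))
A_ub[:, 0] = -1.0
A_ub[np.arange(S), np.arange(S) + 1] = -1.0
b_ub = -demand

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (S + 1))
print(res.x[0], res.fun)  # optimal first-stage order and expected cost
```

With these numbers the high shortage penalty makes it optimal to cover the worst scenario (x = 120); lowering `c_short` below the critical ratio shifts the optimum to a smaller order.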

  19. Noncausal two-stage image filtration in the presence of observations with anomalous errors

    OpenAIRE

    S. V. Vishnevyy; S. Ya. Zhuk; A. N. Pavliuchenkova

    2013-01-01

    Introduction. For the filtration of images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms that detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. An adaptive algorithm for noncausal two-stage filtration is developed. In the first stage the adaptiv...

  20. CFD simulations of compressed air two stage rotary Wankel expander – Parametric analysis

    International Nuclear Information System (INIS)

    Sadiq, Ghada A.; Tozer, Gavin; Al-Dadah, Raya; Mahmoud, Saad

    2017-01-01

    Highlights: • CFD ANSYS-Fluent 3D simulation of Wankel expander is developed. • Single and two-stage expander’s performance is compared. • Inlet and outlet ports shape and configurations are investigated. • Isentropic efficiency of two stage Wankel expander of 91% is achieved. - Abstract: A small-scale volumetric Wankel expander is a powerful device for small-scale power generation in compressed air energy storage (CAES) systems and organic Rankine cycles powered by different heat sources such as biomass, low-temperature geothermal, solar and waste heat, leading to significant reduction in CO_2 emissions. Wankel expanders outperform other types of expander due to their ability to produce two power pulses per revolution per chamber, in addition to higher compactness, lower noise and vibration, and lower cost. In this paper, a computational fluid dynamics (CFD) model was developed using ANSYS 16.2 to simulate the flow dynamics for single and two-stage Wankel expanders and to investigate the effect of port configurations, including size and spacing, on the expander’s power output and isentropic efficiency. Also, single-stage and two-stage expanders were analysed under different operating conditions. Single-stage 3D CFD results were compared to published work, showing close agreement. The CFD modelling was used to investigate the performance of the rotary device using air as an ideal gas with various port diameters ranging from 15 mm to 50 mm; port spacing varying from 28 mm to 66 mm; different Wankel expander sizes (r = 48, e = 6.6, b = 32) mm and (r = 58, e = 8, b = 40) mm, both as single-stage and as two-stage expanders with different configurations and various operating conditions. Results showed that the best Wankel expander design for a single stage was (r = 48, e = 6.6, b = 32) mm, with port diameters of 20 mm and port spacing equal to 50 mm. Moreover, combining two Wankel expanders horizontally, with a larger one at front, produced 8.52 kW compared

  1. A sero-survey of rinderpest in nomadic pastoral systems in central and southern Somalia from 2002 to 2003, using a spatially integrated random sampling approach.

    Science.gov (United States)

    Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M

    2010-12-01

    A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.

  2. Reducing sample size by combining superiority and non-inferiority for two primary endpoints in the Social Fitness study.

    Science.gov (United States)

    Donkers, Hanneke; Graff, Maud; Vernooij-Dassen, Myrra; Nijhuis-van der Sanden, Maria; Teerenstra, Steven

    2017-01-01

    In randomized controlled trials, two endpoints may be necessary to capture the multidimensional concept of the intervention and the objectives of the study adequately. We show how to calculate sample size when defining success of a trial by combinations of superiority and/or non-inferiority aims for the endpoints. The randomized controlled trial design of the Social Fitness study uses two primary endpoints, which can be combined into five different scenarios for defining success of the trial. We show how to calculate power and sample size for each scenario and compare these for different settings of power of each endpoint and correlation between them. Compared to a single primary endpoint, using two primary endpoints often gives more power when success is defined as: improvement in one of the two endpoints and no deterioration in the other. This also gives better power than when success is defined as: improvement in one prespecified endpoint and no deterioration in the remaining endpoint. When two primary endpoints are equally important, but a positive effect in both simultaneously is not per se required, the objective of having one superior and the other (at least) non-inferior could make sense and reduce sample size. Copyright © 2016 Elsevier Inc. All rights reserved.
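The power comparisons in this abstract can be illustrated with a Monte Carlo sketch (all numbers are hypothetical, not the Social Fitness study's): simulate correlated z-statistics for the two endpoints, treat superiority as z > 1.96 and non-inferiority as the margin-shifted bound z > 1.96 - shift, and compare the three success definitions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed effect size (common noncentrality mu), correlation rho between
# endpoint statistics, and non-inferiority margin shift -- all illustrative.
rho, mu, shift = 0.5, 2.0, 2.5
cov = [[1.0, rho], [rho, 1.0]]
z1, z2 = rng.multivariate_normal([mu, mu], cov, size=200_000).T

sup1, sup2 = z1 > 1.96, z2 > 1.96
ni1, ni2 = z1 > 1.96 - shift, z2 > 1.96 - shift

p_either = np.mean((sup1 & ni2) | (sup2 & ni1))  # improve one, no deterioration in other
p_prespec = np.mean(sup1 & ni2)                  # improve endpoint 1, NI on endpoint 2
p_both = np.mean(sup1 & sup2)                    # both superior
print(p_either, p_prespec, p_both)
```

As the abstract argues, "improvement in one endpoint with no deterioration in the other" dominates the prespecified-endpoint definition, which in turn dominates requiring simultaneous superiority, so the first definition needs the smallest sample size for a given target power.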

  3. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
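The general resampling idea behind the pseudoprofile/random-sampling approaches can be sketched as follows. This is a toy illustration with made-up concentrations, not the paper's 2-phase algorithm: at each time point one observation is drawn per matrix to assemble a profile, the AUC ratio is computed by the trapezoidal rule, and repeating this yields a sampling distribution (and hence uncertainty) for the tissue-to-plasma ratio.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sparse design: at each time point, a few destructively
# sampled subjects (one sample each) for tissue and for plasma.
times = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
tissue = np.array([[4.1, 3.8, 4.4], [5.0, 5.6, 4.9], [4.2, 4.6, 4.0],
                   [2.9, 3.3, 2.7], [1.4, 1.6, 1.2]])
plasma = np.array([[2.0, 2.2, 1.9], [2.6, 2.4, 2.8], [2.1, 2.3, 2.0],
                   [1.5, 1.4, 1.6], [0.8, 0.7, 0.9]])

def auc(t, y):
    # trapezoidal area under the concentration-time curve
    return np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(t))

# Resample one observation per time point to build pseudoprofiles,
# compute the AUC ratio, repeat to obtain a sampling distribution.
idx = np.arange(len(times))
ratios = np.array([
    auc(times, tissue[idx, rng.integers(0, tissue.shape[1], len(times))])
    / auc(times, plasma[idx, rng.integers(0, plasma.shape[1], len(times))])
    for _ in range(2000)
])
print(ratios.mean(), np.percentile(ratios, [2.5, 97.5]))
```

Unlike the naive data-averaging AUC ratio, the resampled distribution comes with a spread, so percentile intervals can be reported alongside the point estimate.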

  4. Multi stage electrodialysis for separation of two metal ion species

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, K.; Sakurai, H.; Nii, S.; Sugiura, K. [Nagoya Univ., Nagoya (Japan)

    1995-04-20

    In this article, separation of two metal ions by electrodialysis with a cation exchange membrane has been investigated. Specifically, separation of potassium and sodium ions was investigated using batch dialysis with and without an electric field, and continuous electrodialysis with a four-stage dialyzer. The difference in permselectivity between dialysis with and without an electric field was not appreciable for the potassium/sodium system with the cation exchange membrane. In continuous electrodialysis, the concentration ratio between potassium and sodium ions in the outlet solution from the recovery side of the dialyzer increased with the reflux flow rate and the number of stages. When the reflux flow rate was zero, the concentration ratio with the four-stage dialyzer was 1.5, almost the same as that with a two-stage dialyzer consisting of a simple membrane. When the reflux flow ratio was 0.7, the concentration ratio reached 3.6. 20 refs., 8 figs.

  5. A two-stage approach to estimate spatial and spatio-temporal disease risks in the presence of local discontinuities and clusters.

    Science.gov (United States)

    Adin, A; Lee, D; Goicoa, T; Ugarte, María Dolores

    2018-01-01

    Disease risk maps for areal unit data are often estimated from Poisson mixed models with local spatial smoothing, for example by incorporating random effects with a conditional autoregressive prior distribution. However, one of the limitations is that local discontinuities in the spatial pattern are not usually modelled, leading to over-smoothing of the risk maps and a masking of clusters of hot/coldspot areas. In this paper, we propose a novel two-stage approach to estimate and map disease risk in the presence of such local discontinuities and clusters. We propose approaches in both spatial and spatio-temporal domains, where for the latter the clusters can either be fixed or allowed to vary over time. In the first stage, we apply an agglomerative hierarchical clustering algorithm to training data to provide sets of potential clusters, and in the second stage, a two-level spatial or spatio-temporal model is applied to each potential cluster configuration. The superiority of the proposed approach with regard to a previous proposal is shown by simulation, and the methodology is applied to two important public health problems in Spain, namely stomach cancer mortality across Spain and brain cancer incidence in the Navarre and Basque Country regions of Spain.
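Stage 1 of the proposed approach can be sketched with a one-dimensional toy example (the data and cluster counts are invented; the paper clusters training data from a fitted disease-mapping model, which is not reproduced here): agglomerative hierarchical clustering of areal log-risks yields a dendrogram from which candidate cluster configurations of different sizes are cut, each of which would then be passed to the stage-2 two-level model.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)

# Toy areal log-risks: 30 background areas and 10 hot-spot areas.
log_risk = np.concatenate([rng.normal(0.0, 0.1, 30),
                           rng.normal(0.8, 0.1, 10)])

# Ward agglomerative clustering on the (here 1-D) risk values.
Z = linkage(log_risk[:, None], method="ward")

# Candidate configurations with 2..5 clusters; stage 2 would fit a
# two-level spatial model to each and select one by a model criterion.
configs = {k: fcluster(Z, t=k, criterion="maxclust") for k in range(2, 6)}
labels2 = configs[2]
print(labels2)
```

With this well-separated toy data, the two-cluster cut recovers the hot-spot/background partition exactly; real areal data would also feed spatial adjacency into the clustering.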

  6. Hybrid biogas upgrading in a two-stage thermophilic reactor

    DEFF Research Database (Denmark)

    Corbellini, Viola; Kougias, Panagiotis; Treu, Laura

    2018-01-01

    The aim of this study is to propose a hybrid biogas upgrading configuration composed of two-stage thermophilic reactors. Hydrogen is directly injected in the first-stage reactor. The output gas from the first reactor (in-situ biogas upgrade) is subsequently transferred to a second upflow reactor (ex-situ upgrade), in which an enriched hydrogenotrophic culture is responsible for the hydrogenation of carbon dioxide to methane. The overall objective of the work was to perform an initial methane enrichment in the in-situ reactor, avoiding deterioration of the process due to elevated pH levels, and subsequently to complete the biogas upgrading process in the ex-situ chamber. The methane content in the first-stage reactor reached on average 87% and the corresponding value in the second stage was 91%, with a maximum of 95%. A remarkable accumulation of volatile fatty acids was observed in the first...

  7. Runway Operations Planning: A Two-Stage Solution Methodology

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, ROP is a critical component of airport operations planning in general and of surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures, given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans has previously been approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, ATC constraints), at the same time. Since, however, at any given point in time, there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces Runway Operations Planning (ROP) as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the ROP problem. Focus is specifically given to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the

  8. Profile fitting and the two-stage method in neutron powder diffractometry for structure and texture analysis

    International Nuclear Information System (INIS)

    Jansen, E.; Schaefer, W.; Will, G.; Kernforschungsanlage Juelich G.m.b.H.

    1988-01-01

    An outline and an application of the two-stage method in neutron powder diffractometry are presented. Stage (1): Individual reflection data like position, half-width and integrated intensity are analysed by profile fitting. The profile analysis is based on an experimentally determined instrument function and can be applied without prior knowledge of a structural model. A mathematical procedure is described which results in a variance-covariance matrix containing standard deviations and correlations of the refined reflection parameters. Stage (2): The individual reflection data derived from the profile fitting procedure can be used for appropriate purposes either in structure determination or in texture and strain or stress analysis. The integrated intensities are used in the non-diagonal weighted least-squares routine POWLS for structure refinement. The weight matrix is given by the inverted variance-covariance matrix of stage (1). This procedure is the basis for reliable and real Bragg R values and for a realistic estimation of standard deviations of structural parameters. In the case of texture analysis the integrated intensities are compiled into pole figures representing the intensity distribution for all sample orientations of individual hkl. Various examples for the wide application of the two-stage method in structure and texture analysis are given: Structure refinement of a standard quartz specimen, magnetic ordering in the system Tb x Y 1-x Ag, preferred orientation effects in deformed marble and texture investigations of a triclinic plagioclase. (orig.)
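The stage-2 weighting scheme described above is generalized (non-diagonal) weighted least squares. A NumPy sketch with invented numbers: given stage-1 intensities y and their variance-covariance matrix C, the weight matrix is W = C^{-1}, the estimate is beta = (X^T W X)^{-1} X^T W y, and the inverse normal matrix supplies realistic parameter standard deviations, as the abstract emphasizes.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stage-1 output: 8 integrated intensities y with a
# non-diagonal variance-covariance matrix C from profile fitting.
X = rng.standard_normal((8, 2))        # design matrix (model derivatives)
beta_true = np.array([1.5, -0.7])
C = 0.05 * (np.eye(8) + 0.3 * np.ones((8, 8)))  # correlated errors
L = np.linalg.cholesky(C)
y = X @ beta_true + L @ rng.standard_normal(8)

# Stage 2: non-diagonal weighted least squares, W = C^{-1}.
W = np.linalg.inv(C)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
cov_beta = np.linalg.inv(X.T @ W @ X)  # parameter variance-covariance
print(beta, np.sqrt(np.diag(cov_beta)))
```

Using the full matrix W rather than only its diagonal is what propagates the stage-1 correlations into the stage-2 standard deviations and R values.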

  9. Soluble CD44 concentration in the serum and peritoneal fluid samples of patients with different stages of endometriosis.

    Science.gov (United States)

    Mashayekhi, Farhad; Aryaee, Hadis; Mirzajani, Ebrahim; Yasin, Ashraf Ale; Fathi, Abdolsatar

    2015-09-01

    Endometriosis is a gynecological disease defined by the histological presence of endometrial glands and stroma outside the uterine cavity, most commonly implanted over the visceral and peritoneal surfaces within the female pelvis. CD44 is a membrane protein expressed by human endometrial cells, and it has been shown to promote the adhesion of endometrial cells. The aim of this study was to determine the levels of soluble CD44 (sCD44) in the serum and peritoneal fluid (PF) samples of patients with different stages of endometriosis. 39 PF and serum samples from normal healthy controls and 130 samples from patients with different stages of endometriosis (33 cases of stage I, 38 stage II, 30 stage III and 29 stage IV) were included in this study. Total protein concentration (TPC) and the level of sCD44 in the serum were determined by Bio-Rad protein assay based on the Bradford dye procedure and enzyme-linked immunosorbent assay, respectively. No significant change in the TPC was seen in the serum of patients with endometriosis when compared to normal controls. All serum and peritoneal fluid samples presented sCD44 expression, and from stage I to stage IV endometriosis a significant increase of sCD44 expression was observed compared with the control group. The results of this study show that high expression of sCD44 is correlated with advanced stages of endometriosis. It is also concluded that the detection of serum and/or peritoneal fluid sCD44 may be useful in classifying endometriosis.

  10. Single-stage versus two-stage anaerobic fluidized bed bioreactors in treating municipal wastewater: Performance, foulant characteristics, and microbial community.

    Science.gov (United States)

    Wu, Bing; Li, Yifei; Lim, Weikang; Lee, Shi Lin; Guo, Qiming; Fane, Anthony G; Liu, Yu

    2017-03-01

    This study examined the treatment performance, membrane foulant characteristics, and microbial community in single-stage and two-stage anaerobic fluidized membrane bioreactors (AFMBR) treating settled raw municipal wastewater, with the aim of exploring fouling mechanisms and microbial community structure in both systems. Both AFMBRs exhibited comparable organic removal efficiency and membrane performance. In the single-stage AFMBR, fewer soluble organic substances were removed through biosorption by GAC and biodegradation than in the two-stage AFMBR. Compared to the two-stage AFMBR, the formation of a cake layer was the main cause of the observed membrane fouling in the single-stage AFMBR at the same employed flux. The accumulation rate of the biopolymers was linearly correlated with the membrane fouling rate. In the chemically cleaned foulants, humic acid-like substances and silicon were identified as the predominant organic and inorganic foulants, respectively. As such, the fluidized GAC particles might not be effective in removing these substances from the membrane surfaces. High-throughput pyrosequencing analysis further revealed that beta-Proteobacteria were predominant members in both AFMBRs, which contributed to the development of biofilms on the fluidized GAC and membrane surfaces. However, it was also noted that the abundance of the identified dominant members in the membrane surface-associated biofilm seemed to be related to the permeate flux and reactor configuration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. TWO-STAGE HEAT PUMPS FOR ENERGY SAVING TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    A. E. Denysova

    2017-09-01

    Full Text Available The problem of energy saving has become one of the most important in power engineering. It is caused by the exhaustion of world reserves of hydrocarbon fuels such as gas, oil and coal, the sources of traditional heat supply. Conventional sources have essential shortcomings, namely low energy, ecological and economic efficiency, which can be mitigated by alternative methods of power supply, such as the one considered here: using the low-temperature natural heat of groundwater on the basis of heat pump installations. The heat supply system considered makes effective use of a two-stage heat pump installation with groundwater as the heat source during the period of lowest ambient temperatures. A calculation method for heat pump installations based on groundwater energy is proposed. The electric energy consumption of the compressor drives and the transformation coefficient µ of the heat supply system are calculated for a low-potential groundwater heat source, demonstrating the high efficiency of two-stage heat pump installations.
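The transformation coefficient µ mentioned above is delivered heat over compressor work, µ = Q/W, bounded for an ideal cycle by the Carnot value T_hot/(T_hot - T_cold). A short sketch with hypothetical temperatures (not the paper's calculation method) shows why splitting the temperature lift over two stages helps: each stage spans a smaller temperature difference, so its ideal bound is higher.

```python
# Illustrative only: Carnot bound on the transformation coefficient mu
# of a heat pump, mu = Q / W <= T_hot / (T_hot - T_cold), in kelvin.

def carnot_mu(t_cold_c: float, t_hot_c: float) -> float:
    t_cold, t_hot = t_cold_c + 273.15, t_hot_c + 273.15
    return t_hot / (t_hot - t_cold)

# Groundwater source at 10 degC, heating supply at 55 degC (assumed values):
single = carnot_mu(10.0, 55.0)

# Two stages splitting the lift at an intermediate 30 degC; each stage
# works across a smaller temperature difference than the single stage.
stage1 = carnot_mu(10.0, 30.0)
stage2 = carnot_mu(30.0, 55.0)
print(single, stage1, stage2)
```

Real two-stage installations fall well below these ideal bounds, but the ordering (each stage's bound above the single-stage bound) is the thermodynamic motivation for two-stage designs.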

  12. Development and testing of a two stage granular filter to improve collection efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Rangan, R.S.; Prakash, S.G.; Chakravarti, S.; Rao, S.R.

    1999-07-01

    A circulating bed granular filter (CBGF) with a single filtration stage was tested with a PFB combustor in the Coal Research Facility of BHEL R and D in Hyderabad during the years 1993--95. Filter outlet dust loading varied between 20--50 mg/Nm{sup 3} for an inlet dust loading of 5--8 g/Nm{sup 3}. The results were reported in Fluidized Bed Combustion-Volume 2, ASME 1995. Though the outlet consists of predominantly fine particulates below 2 microns, it is still beyond present day gas turbine specifications for particulate concentration. In order to enhance the collection efficiency, a two-stage granular filtration concept was evolved, wherein the filter depth is divided between two stages, accommodated in two separate vertically mounted units. The design also incorporates BHEL's scale-up concept of multiple parallel stages. The two-stage concept minimizes reentrainment of captured dust by providing clean granules in the upper stage, from where gases finally exit the filter. The design ensures that dusty gases come in contact with granules having a higher dust concentration at the bottom of the two-stage unit, where most of the cleaning is completed. A second filtration stage of cleaned granules is provided in the top unit (where the granules are returned to the system after dedusting), minimizing reentrainment. Tests were conducted to determine the optimum granule to dust ratio (G/D ratio), which determines the granule circulation rate required for the desired collection efficiency. The data bring out the importance of pre-separation and the limitation on inlet dust loading for any continuous system of granular filtration. Collection efficiencies obtained were much higher (outlet dust being 3--9 mg/Nm{sup 3}) than in the single stage filter tested earlier for similar dust loading at the inlet. The results indicate that two-stage granular filtration has a high potential for HTHT application with fewer risks as compared to other systems under development.

  13. Automatic sleep stage classification using two facial electrodes.

    Science.gov (United States)

    Virkkala, Jussi; Velin, Riitta; Himanen, Sari-Leena; Värri, Alpo; Müller, Kiti; Hasan, Joel

    2008-01-01

    Standard sleep stage classification is based on visual analysis of central EEG, EOG and EMG signals. Automatic analysis with a reduced number of sensors has been studied as an easy alternative to the standard. In this study, a single-channel electro-oculography (EOG) algorithm was developed for separation of wakefulness, SREM, light sleep (S1, S2) and slow wave sleep (S3, S4). The algorithm was developed and tested with 296 subjects. Additional validation was performed on 16 subjects using a low weight single-channel Alive Monitor. In the validation study, subjects attached the disposable EOG electrodes themselves at home. In separating the four stages total agreement (and Cohen's Kappa) in the training data set was 74% (0.59), in the testing data set 73% (0.59) and in the validation data set 74% (0.59). Self-applicable electro-oculography with only two facial electrodes was found to provide reasonable sleep stage information.
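Agreement figures like the 74% (kappa 0.59) reported above are computed from a confusion matrix of automatic versus visual staging. A short sketch with invented counts (not the study's data) for the four classes (wake, SREM, light sleep, slow wave sleep):

```python
import numpy as np

# Hypothetical confusion matrix: rows = visual scoring, columns =
# automatic EOG scoring; classes: wake, SREM, light sleep, SWS.
cm = np.array([[80,  5,  10,  1],
               [ 6, 60,  12,  2],
               [ 8, 10, 150, 15],
               [ 1,  2,  14, 70]], dtype=float)

n = cm.sum()
po = np.trace(cm) / n                # observed (total) agreement
pe = (cm.sum(0) @ cm.sum(1)) / n**2  # agreement expected by chance
kappa = (po - pe) / (1 - pe)         # Cohen's kappa
print(round(po, 3), round(kappa, 3))
```

Kappa discounts the chance agreement pe, which is why it is always below the raw agreement po and is the fairer statistic when class prevalences are unbalanced, as they are across sleep stages.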

  14. Randomized trial on external radiation therapy alone versus external radiation therapy followed by brachytherapy in early stage nasopharyngeal carcinoma with a long term result

    International Nuclear Information System (INIS)

    Gao Li; Yuan Zhiyong; Xu Guozhen; Li Suyan; Xiao Guangli; Cai Weiming

    2004-01-01

    Objective: To compare local control and toxicity in patients treated with external beam radiotherapy followed by intracavitary brachytherapy (BT) versus external beam radiotherapy alone (RT) for locally early stage nasopharyngeal carcinoma (NPC). Methods: From 1990 to 1997, 126 NPC patients staged T1 and T2 by the 1992 Fuzhou Staging System (oropharynx, carotid sheath and soft tissue around cervical vertebral involvement excluded) were randomized into RT alone and RT followed by BT groups. The two groups were comparable in age, gender, stage and pathology. The median follow-up was 112 months. T1 patients were randomized before treatment into the RT alone group of 66-70 Gy or RT of 56 Gy plus a 10-16 Gy BT boost to the nasopharynx. For T2 patients, if MRI or CT showed no residual lesion in the parapharyngeal space after 50 Gy, they were randomized into RT alone (median dose: 72 Gy) or RT of 66 Gy followed by an 8-24 Gy BT boost (1-3 fractions over 1-3 weeks). Results: In the RT group, 8 patients (13.1%) failed at the primary site during the follow-up period, versus 7 (11%) in the BT group. The 5-year local control rate was 86% for the RT group and 88% for the BT group (P=0.47). The 5-year overall survival rates were 83% and 84% (P=0.84), respectively. Ten patients (18%) in the RT group (4 of grade I, 6 of grade II) and 7 patients (11%) in the BT group (4 of grade I, 3 of grade II; P=0.31) developed radiation-induced encephalopathy. The incidence of trismus was much lower in the BT group than in the RT group (10% versus 26%, P=0.02). No soft palate perforation or sphenoid necrosis was observed. Conclusion: Compared to conventional external beam radiotherapy, planned irradiation plus intracavitary brachytherapy not only achieves similar local control and survival rates for locally early stage nasopharyngeal carcinoma, but also decreases the irradiation dose and the incidence of trismus. (authors)

  15. Area Determination of Diabetic Foot Ulcer Images Using a Cascaded Two-Stage SVM-Based Classification.

    Science.gov (United States)

    Wang, Lei; Pedersen, Peder C; Agu, Emmanuel; Strong, Diane M; Tulu, Bengisu

    2017-09-01

    The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area being best suited for automated analysis. Here, we present a novel approach, using support vector machines (SVM), to determine the wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, a set of k binary SVM classifiers is trained and applied to different subsets of the entire training image dataset, and incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from superpixels that are used as input for each stage of the classifier training. Specifically, color and bag-of-words representations of local dense scale-invariant feature transform (SIFT) features are descriptors for ruling out irrelevant regions, and color and wavelet-based features are descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We have implemented the wound classification on a Nexus 5 smartphone platform, except for training, which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for smartphone-based image analysis.

  16. Two spatial light modulator system for laboratory simulation of random beam propagation in random media.

    Science.gov (United States)

    Wang, Fei; Toselli, Italo; Korotkova, Olga

    2016-02-10

    An optical system consisting of a laser source and two independent consecutive phase-only spatial light modulators (SLMs) is shown to accurately simulate a generated random beam (first SLM) after interaction with a stationary random medium (second SLM). To illustrate the range of possibilities, a recently introduced class of random optical frames is examined on propagation in free space and several weak turbulent channels with Kolmogorov and non-Kolmogorov statistics.

  17. Evaluation of social-cognitive versus stage-matched, self-help physical activity interventions at the workplace.

    Science.gov (United States)

    Griffin-Blake, C Shannon; DeJoy, David M

    2006-01-01

    To compare the effectiveness of stage-matched vs. social-cognitive physical activity interventions in a work setting. Both interventions were designed as minimal-contact, self-help programs suitable for large-scale application. Randomized trial. Participants were randomized into one of the two intervention groups at baseline; the follow-up assessment was conducted 1 month later. A large, public university in the southeastern region of the United States. Employees from two academic colleges within the participating institution were eligible to participate: 366 employees completed the baseline assessment; 208 of these completed both assessments (baseline and follow-up) and met the compliance criteria. Printed, self-help exercise booklets (12 to 16 pages in length) either (1) matched to the individual's stage of motivational readiness for exercise adoption at baseline or (2) derived from social-cognitive theory but not matched by stage. Standard questionnaires were administered to assess stage of motivational readiness for physical activity; physical activity participation; and exercise-related processes of change, decisional balance, self-efficacy, outcome expectancy, and goal satisfaction. The two interventions were equally effective in moving participants to higher levels of motivational readiness for regular physical activity. Among participants not already in maintenance at baseline, 34.9% in the stage-matched condition progressed, while 33.9% in the social-cognitive group did so (chi2 = not significant). Analyses of variance showed that the two treatment groups did not differ in terms of physical activity participation, cognitive and behavioral process use, decisional balance, or the other psychological constructs. For both treatment groups, cognitive process use remained high across all stages, while behavioral process use increased at the higher stages. The pros component of decisional balance did not vary across stage, whereas cons decreased significantly

  18. 39% access time improvement, 11% energy reduction, 32 kbit 1-read/1-write 2-port static random-access memory using two-stage read boost and write-boost after read sensing scheme

    Science.gov (United States)

    Yamamoto, Yasue; Moriwaki, Shinichi; Kawasumi, Atsushi; Miyano, Shinji; Shinohara, Hirofumi

    2016-04-01

    We propose novel circuit techniques for 1-clock (1CLK) 1-read/1-write (1R/1W) 2-port static random-access memories (SRAMs) to improve read access time (tAC) and write margins at low voltages. Two-stage read boost (TSR-BST) and write word line boost (WWL-BST) after read sensing schemes have been proposed. TSR-BST reduces the worst read bit line (RBL) delay by 61% and RBL amplitude by 10% at VDD = 0.5 V, which improves tAC by 39% and reduces energy dissipation by 11% at VDD = 0.55 V. The WWL-BST after read sensing scheme improves the minimum operating voltage (Vmin) by 140 mV. A 32 kbit 1CLK 1R/1W 2-port SRAM with TSR-BST and WWL-BST has been developed in a 40 nm CMOS process.

  19. Development of Explosive Ripper with Two-Stage Combustion

    Science.gov (United States)

    1974-10-01

    … inch pipe duct work; the width of this duct proved to be detrimental in marginally rippable material, as the duct, instead of the penetrator tip, was … marginally rippable rock. … The two-stage combustion device is designed to operate using the same diesel …

  20. One- vs 2-Stage Bursectomy for Septic Olecranon and Prepatellar Bursitis: A Prospective Randomized Trial.

    Science.gov (United States)

    Uçkay, Ilker; von Dach, Elodie; Perez, Cédric; Agostinho, Americo; Garnerin, Philippe; Lipsky, Benjamin A; Hoffmeyer, Pierre; Pittet, Didier

    2017-07-01

    To assess the optimal surgical approach and costs for patients hospitalized with septic bursitis. From May 1, 2011, through December 24, 2014, hospitalized patients with septic bursitis at University of Geneva Hospitals were randomized (1:1) to receive 1- vs 2-stage bursectomy. All the patients received postsurgical oral antibiotic drug therapy for 7 days. Of 164 enrolled patients, 130 had bursitis of the elbow and 34 of the patella. The surgical approach used was 1-stage in 79 patients and 2-stage in 85. Overall, there were 22 treatment failures: 8 of 79 patients (10%) in the 1-stage arm and 14 of 85 (16%) in the 2-stage arm (Pearson χ² test; P=.23). Recurrent infection was caused by the same pathogen in 7 patients (4%) and by a different pathogen in 5 (3%). Outcomes were better in the 1- vs 2-stage arm for wound dehiscence for elbow bursitis (1 of 66 vs 9 of 64; Fisher exact test, P=.03), median length of hospital stay (4.5 vs 6.0 days), nurses' workload (605 vs 1055 points), and total costs (Sw₣6881 vs Sw₣11,178; all P<…). For septic bursitis requiring hospital admission, bursectomy with primary closure, together with antibiotic drug therapy for 7 days, was safe, effective, and resource saving. Using a 2-stage approach may be associated with a higher rate of wound dehiscence for olecranon bursitis than the 1-stage approach. Clinicaltrials.gov Identifier: NCT01406652. Copyright © 2017 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  1. The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.

    Science.gov (United States)

    Rodgers, J L

    1999-10-01

    A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
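
    The two axes of this taxonomy (with vs. without replacement; whole sample vs. subset) can be illustrated in a few lines of Python; the data values and replicate counts below are made up for demonstration:

```python
import random
import statistics

random.seed(0)
sample = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7, 2.5, 3.1]
n = len(sample)

# Bootstrap: resample the WHOLE sample WITH replacement.
boot_means = [statistics.mean(random.choices(sample, k=n)) for _ in range(1000)]
bootstrap_se = statistics.stdev(boot_means)   # empirical standard error of the mean

# Jackknife: leave-one-out SUBSETS, drawn WITHOUT replacement.
jack_means = [statistics.mean(sample[:i] + sample[i + 1:]) for i in range(n)]

# Randomization (permutation) test: reshuffle the WHOLE pooled sample
# WITHOUT replacement and recompute the two-group test statistic.
group_size = 4
observed = statistics.mean(sample[:group_size]) - statistics.mean(sample[group_size:])
extreme = 0
for _ in range(1000):
    pooled = sample[:]
    random.shuffle(pooled)   # a permutation is sampling without replacement
    diff = statistics.mean(pooled[:group_size]) - statistics.mean(pooled[group_size:])
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / 1000
```

    Each scheme builds an empirical sampling distribution (`boot_means`, `jack_means`, the permuted `diff` values) from which standard errors, confidence intervals, or p-values follow.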

  2. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min⁻¹ applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated … that the austenitization kinetics is governed by Ni-diffusion and that the slow transformation kinetics separating the two stages is caused by soft impingement in the martensite phase. Increasing the lath width in the kinetics model had a similar effect on the austenitization kinetics as increasing the heating-rate. …

  3. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    Science.gov (United States)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers instant decision-making needs of the automobile manufactures for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the altered sequence due to paint defects and upstream department constraints. First stage of the model decides the spare vehicle quantities, where the second stage model recovers the scrambled sequence respect to pre-defined rules. The problem is solved by sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of heuristic model with optimal ones and provide following insights: (i) as the mismatch between paint entrance and scheduled sequence decreases, the rule-based heuristic model recovers the scrambled sequence as good as the optimal resequencing model, (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences for recovering the scrambled sequence, (iii) as the defect rate increases, the difference in recovery effectiveness between rule-based heuristic and optimal solutions increases, (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms heuristic model, (v) as expected the rule-based model holds more inventory than the optimization model.
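
    The sample average approximation step can be sketched as follows: the expectation over random paint-defect scenarios is replaced by an average over sampled scenarios, and the first-stage decision (spare-vehicle count) minimizing the estimated cost is chosen. The defect rate, costs, and buffer range below are illustrative assumptions, not values from the study:

```python
import random

random.seed(7)

HOLDING_COST = 1.0       # per spare vehicle kept in the pre-assembly buffer
MISMATCH_COST = 5.0      # per defect that cannot be covered by a spare
DEFECT_RATE = 0.08
SEQUENCE_LENGTH = 60
N_SCENARIOS = 500

# Sample N scenarios of the random number of paint defects per sequence.
scenarios = [
    sum(random.random() < DEFECT_RATE for _ in range(SEQUENCE_LENGTH))
    for _ in range(N_SCENARIOS)
]

def estimated_cost(spares):
    # Second stage (recourse): each defect not covered by a spare scrambles
    # the sequence; average the recourse cost over the sampled scenarios.
    recourse = sum(max(d - spares, 0) for d in scenarios) / N_SCENARIOS
    return HOLDING_COST * spares + MISMATCH_COST * recourse

# First stage: pick the spare count minimizing the SAA objective.
best_spares = min(range(0, 16), key=estimated_cost)
```
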

  4. An adaptive two-stage dose-response design method for establishing proof of concept.

    Science.gov (United States)

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.

  5. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Full Text Available Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. Initial operative procedures performed were window colostomy (n = 6), colostomy proximal to pouch (n = 4), and ligation of colovesical fistula and end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull through (APPT) of colon was performed in eight cases, and pouch excision with APPT of ileum in three. The mean age at the time of the definitive procedures was 15.6 months (range, 3 to 53 months) and the mean weight was 7.5 kg (range, 4 to 11 kg). Good fecal continence was observed in six cases and fair in two during the follow-up period, while three of our cases were lost to follow-up. There was no mortality following definitive procedures among the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can be performed safely with good results. Importantly, the definitive procedure is done without a protective stoma, thereby avoiding stoma closure, stoma-related complications, and the cost and hospital stay associated with stoma closure.

  6. Comparison of Four Estimators under sampling without Replacement

    African Journals Online (AJOL)

    The results were obtained using a program written in the Microsoft Visual C++ programming language. It was observed that the two-stage sampling estimator under unequal probabilities without replacement is always better than the other three estimators considered. Keywords: Unequal probability sampling, two-stage sampling, ...

  7. A two-stage preventive maintenance optimization model incorporating two-dimensional extended warranty

    International Nuclear Information System (INIS)

    Su, Chun; Wang, Xiaolin

    2016-01-01

    In practice, customers can decide whether to buy an extended warranty or not, at the time of item sale or at the end of the basic warranty. In this paper, by taking into account the moments of customers purchasing two-dimensional extended warranty, the optimization of imperfect preventive maintenance for repairable items is investigated from the manufacturer's perspective. A two-dimensional preventive maintenance strategy is proposed, under which the item is preventively maintained according to a specified age interval or usage interval, whichever occurs first. It is highlighted that when the extended warranty is purchased upon the expiration of the basic warranty, the manufacturer faces a two-stage preventive maintenance optimization problem. Moreover, in the second stage, the possibility of reducing the servicing cost over the extended warranty period is explored by classifying customers on the basis of their usage rates and then providing them with customized preventive maintenance programs. Numerical examples show that offering customized preventive maintenance programs can reduce the manufacturer's warranty cost, while a larger saving in warranty cost comes from encouraging customers to buy the extended warranty at the time of item sale. - Highlights: • A two-dimensional PM strategy is investigated. • Imperfect PM strategy is optimized by considering both two-dimensional BW and EW. • Customers are categorized based on their usage rates throughout the BW period. • Servicing cost of the EW is reduced by offering customized PM programs. • Customers buying the EW at the time of sale is preferred for the manufacturer.

  8. Two-stage agglomeration of fine-grained herbal nettle waste

    Science.gov (United States)

    Obidziński, Sławomir; Joka, Magdalena; Fijoł, Olga

    2017-10-01

    This paper compares the densification work necessary for the pressure agglomeration of fine-grained dusty nettle waste, with the densification work involved in two-stage agglomeration of the same material. In the first stage, the material was pre-densified through coating with a binder material in the form of a 5% potato starch solution, and then subjected to pressure agglomeration. A number of tests were conducted to determine the effect of the moisture content in the nettle waste (15, 18 and 21%), as well as the process temperature (50, 70, 90°C) on the values of densification work and the density of the obtained pellets. For pre-densified pellets from a mixture of nettle waste and a starch solution, the conducted tests determined the effect of pellet particle size (1, 2, and 3 mm) and the process temperature (50, 70, 90°C) on the same values. On the basis of the tests, we concluded that the introduction of a binder material and the use of two-stage agglomeration in nettle waste densification resulted in increased densification work (as compared to the densification of nettle waste alone) and increased pellet density.

  9. Two-stage hepatectomy: who will not jump over the second hurdle?

    Science.gov (United States)

    Turrini, O; Ewald, J; Viret, F; Sarran, A; Goncalves, A; Delpero, J-R

    2012-03-01

    Two-stage hepatectomy uses compensatory liver regeneration after a first noncurative hepatectomy to enable a second curative resection in patients with bilobar colorectal liver metastasis (CLM). To determine the predictive factors of failure of two-stage hepatectomy. Between 2000 and 2010, 48 patients with irresectable CLM were eligible for two-stage hepatectomy. The planned strategy was (a) cleaning of the left hepatic lobe (first hepatectomy), (b) right portal vein embolisation and (c) right hepatectomy (second hepatectomy). Six patients had occult CLM (n = 5) or extra-hepatic disease (n = 1), which was discovered during the first hepatectomy. Thus, 42 patients completed the first hepatectomy and underwent portal vein embolisation in order to receive the second hepatectomy. Eight patients did not undergo a second hepatectomy due to disease progression. Upon univariate analysis, two factors were identified that precluded patients from having the second hepatectomy: the combined resection of a primary tumour during the first hepatectomy (p = 0.01) and administration of chemotherapy between the two hepatectomies (p = 0.03). Multivariate analysis demonstrated that only the combined resection of the primary colorectal cancer during the first hepatectomy was independently associated with failure to complete the two-stage strategy (p = 0.04). Due to the small number of patients and the absence of equivalent conclusions in other studies, we cannot recommend performance of an isolated colorectal resection prior to chemotherapy. However, resection of an asymptomatic primary tumour before chemotherapy should not be considered as an outdated procedure. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Assessing the suitability of summary data for two-sample Mendelian randomization analyses using MR-Egger regression: the role of the I2 statistic.

    Science.gov (United States)

    Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R

    2016-12-01

    MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach that assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects.
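
    A minimal sketch of the MR-Egger estimate and the I²GX statistic on synthetic summary data follows. All effect sizes and standard errors are made up for illustration; with constant weights the weighted Egger regression reduces to ordinary least squares, and I²GX is computed as (Q_GX − (k − 1))/Q_GX from the SNP-exposure estimates:

```python
import random
import statistics

random.seed(1)

# Hypothetical summary statistics for k genetic variants (illustrative only).
k = 20
beta_x = [random.uniform(0.05, 0.3) for _ in range(k)]   # SNP-exposure estimates
se_x = [0.02] * k                                        # their standard errors
true_effect = 0.5
beta_y = [true_effect * bx + random.gauss(0, 0.01) for bx in beta_x]

# MR-Egger: regression of beta_y on beta_x WITH an intercept; the slope is the
# causal-effect estimate, the intercept tests for directional pleiotropy.
mean_x = statistics.mean(beta_x)
mean_y = statistics.mean(beta_y)
sxx = sum((bx - mean_x) ** 2 for bx in beta_x)
sxy = sum((bx - mean_x) * (by - mean_y) for bx, by in zip(beta_x, beta_y))
egger_slope = sxy / sxx
egger_intercept = mean_y - egger_slope * mean_x

# I2_GX: heterogeneity-style statistic on the SNP-exposure estimates,
# quantifying NOME violation (values near 1 indicate little dilution).
w = [1 / s ** 2 for s in se_x]
gamma_bar = sum(wi * bx for wi, bx in zip(w, beta_x)) / sum(w)
q_gx = sum(wi * (bx - gamma_bar) ** 2 for wi, bx in zip(w, beta_x))
i2_gx = max(0.0, (q_gx - (k - 1)) / q_gx)
```
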

  11. Improving ambulatory saliva-sampling compliance in pregnant women: a randomized controlled study.

    Directory of Open Access Journals (Sweden)

    Julian Moeller

    Full Text Available OBJECTIVE: Noncompliance with scheduled ambulatory saliva sampling is common and has been associated with biased cortisol estimates in nonpregnant subjects. This study is the first to investigate in pregnant women strategies to improve ambulatory saliva-sampling compliance, and the association between sampling noncompliance and saliva cortisol estimates. METHODS: We instructed 64 pregnant women to collect eight scheduled saliva samples on each of two consecutive days. Objective compliance with scheduled sampling times was assessed with a Medication Event Monitoring System and self-reported compliance with a paper-and-pencil diary. In a randomized controlled study, we estimated whether a disclosure intervention (informing women about objective compliance monitoring) and a reminder intervention (use of acoustical reminders) improved compliance. A mixed model analysis was used to estimate associations between women's objective compliance and their diurnal cortisol profiles, and between deviation from scheduled sampling and the cortisol concentration measured in the related sample. RESULTS: Self-reported compliance with the saliva-sampling protocol was 91%, and objective compliance was 70%. The disclosure intervention was associated with improved objective compliance (informed: 81%, noninformed: 60%; F(1,60) = 17.64, p < 0.001), but the reminder intervention was not (reminders: 68%, without reminders: 72%; F(1,60) = 0.78, p = 0.379). Furthermore, a woman's increased objective compliance was associated with a higher diurnal cortisol profile, F(2,64) = 8.22, p < 0.001. Altered cortisol levels were observed in less objectively compliant samples, F(1,705) = 7.38, p = 0.007, with delayed sampling associated with lower cortisol levels. CONCLUSIONS: The results suggest that in pregnant women, objective noncompliance with scheduled ambulatory saliva sampling is common and is associated with biased cortisol estimates. To improve sampling compliance, results suggest …

  12. Monte Carlo studies of two-dimensional random-anisotropy magnets

    Science.gov (United States)

    Denholm, D. R.; Sluckin, T. J.

    1993-07-01

    We have carried out a systematic set of Monte Carlo simulations of the Harris-Plischke-Zuckermann lattice model of random magnetic anisotropy on a two-dimensional square lattice, using the classical Metropolis algorithm. We have considered varying temperature T, external magnetic field H (both in the reproducible and irreproducible limits), time scale of the simulation τ in Monte Carlo steps and anisotropy ratio D/J. In the absence of randomness this model reduces to the XY model in two dimensions, which possesses the familiar Kosterlitz-Thouless low-temperature phase with algebraic but no long-range order. In the presence of random anisotropy we find evidence of a low-temperature phase with some disordered features, which might be identified with a spin-glass phase. The low-temperature Kosterlitz-Thouless phase survives at intermediate temperatures for low randomness, but is no longer present for large D/J. We have also studied the high-H approach to perfect order, for which there are theoretical predictions due to Chudnovsky.
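
    The Metropolis procedure for a random-anisotropy XY Hamiltonian of this kind can be sketched as follows. The lattice size, temperature, anisotropy ratio, and step count are illustrative, not the simulation parameters of the study; each site carries a spin angle and a frozen random easy axis:

```python
import math
import random

random.seed(3)

L = 8                       # lattice side (illustrative; far smaller than a real run)
J, D, T = 1.0, 1.0, 0.5     # exchange, anisotropy strength, temperature
theta = [[random.uniform(0, 2 * math.pi) for _ in range(L)] for _ in range(L)]
alpha = [[random.uniform(0, math.pi) for _ in range(L)] for _ in range(L)]  # frozen easy axes

def site_energy(i, j, th):
    # Random-anisotropy term plus XY exchange with the four neighbors
    # (periodic boundary conditions).
    e = -D * math.cos(2 * (th - alpha[i][j]))
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        e -= J * math.cos(th - theta[(i + di) % L][(j + dj) % L])
    return e

accepted = 0
for _ in range(2000):       # single-spin Metropolis updates
    i, j = random.randrange(L), random.randrange(L)
    new = theta[i][j] + random.gauss(0, 0.5)
    dE = site_energy(i, j, new) - site_energy(i, j, theta[i][j])
    if dE <= 0 or random.random() < math.exp(-dE / T):   # Metropolis rule
        theta[i][j] = new % (2 * math.pi)
        accepted += 1
```
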

  13. Labor Union Effects on Innovation and Commercialization Productivity: An Integrated Propensity Score Matching and Two-Stage Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Dongphil Chun

    2015-04-01

    Full Text Available Research and development (R&D) is a critical factor in sustaining a firm’s competitive advantage. Accurate measurement of R&D productivity and investigation of its influencing factors are of value for R&D productivity improvements. This study is divided into two sections. The first section outlines the innovation and commercialization stages of firm-level R&D activities. This section analyzes the productivity of each stage using a propensity score matching (PSM) and two-stage data envelopment analysis (DEA) integrated model to solve the selection bias problem. Second, this study conducts a comparative analysis of productivity at each stage among subgroups categorized as labor unionized or non-labor unionized. We used Korea Innovation Survey (KIS) data for analysis, with a sample of 400 Korean manufacturers. The key findings of this study include: (1) firm innovation and commercialization productivity are balanced, with relatively low innovation productivity; and (2) labor unions have a positive effect on commercialization productivity. Moreover, labor unions are an influential factor in determining manufacturing firms’ commercialization productivity.

  14. Managing uncertainty - a qualitative study of surgeons' decision-making for one-stage and two-stage revision surgery for prosthetic hip joint infection.

    Science.gov (United States)

    Moore, Andrew J; Blom, Ashley W; Whitehouse, Michael R; Gooberman-Hill, Rachael

    2017-04-12

    Approximately 88,000 primary hip replacements are performed in England and Wales each year. Around 1% go on to develop deep prosthetic joint infection. Whether one-stage or two-stage revision arthroplasty is the better treatment remains unclear. Our aims were to characterise consultant orthopaedic surgeons' decisions about performing either one-stage or two-stage revision surgery for patients with deep prosthetic joint infection (PJI) after hip arthroplasty, and to identify whether a randomised trial comparing one-stage with two-stage revision would be feasible. Semi-structured interviews were conducted with 12 consultant surgeons who perform revision surgery for PJI after hip arthroplasty at 5 high-volume National Health Service (NHS) orthopaedic departments in England and Wales. Surgeons were interviewed before the development of a multicentre randomised controlled trial. Data were analysed using a thematic approach. There is no single standardised surgical intervention for the treatment of PJI. Surgeons balance multiple factors when choosing a surgical strategy, including patient-related factors, their own knowledge and expertise, the available infrastructure and the infecting organism. Surgeons questioned whether it was appropriate that two-stage revision remained the best treatment, and some surgeons' willingness to consider more one-stage revisions had increased over recent years, influenced by growing evidence showing equivalence between surgical techniques and by local observations of successful one-stage revisions. The use of custom-made articulating spacers was a practice that enabled uncertainty to be managed in the absence of definitive evidence about the superiority of one surgical technique over the other. Surgeons highlighted the need for research evidence to inform practice and thought that a randomised trial to compare treatments was needed. Most surgeons thought that patients they treated would be eligible for trial participation …

  15. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    Science.gov (United States)

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling method, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides better prognostic value in patients with invasive breast cancer.
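
    Systematic random sampling of this kind (a random start followed by a fixed interval through the ordered list, so every nucleus has the same inclusion probability) can be sketched as follows; the list size, sample size, and area values are illustrative:

```python
import random

def systematic_random_sample(items, k):
    """Systematic random sampling: random start, then a fixed interval.

    Unlike 'at convenience' selection, every item has the same inclusion
    probability, so unusually small and large nuclei are not silently skipped.
    """
    n = len(items)
    if k >= n:
        return list(items)
    step = n / k                      # fractional interval yields exactly k picks
    start = random.random() * step    # random start in [0, step)
    return [items[int(start + i * step)] for i in range(k)]

random.seed(42)
# Hypothetical nuclear-area measurements (values are made up).
nuclear_areas = [random.gauss(45.0, 12.0) for _ in range(500)]
sample_50 = systematic_random_sample(nuclear_areas, 50)
```
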

  16. Preoperative chemoradiotherapy versus postoperative chemoradiotherapy for stage II–III resectable rectal cancer: a meta-analysis of randomized controlled trials

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jin Ho [Gyeongsang National University School of Medicine, Jinju (Korea, Republic of); Jeong, Jae Uk [Chonnam National University School of Medicine, Gwangju (Korea, Republic of); Lee, Jong Hoon; Kim, Sung Hwan [The Catholic University of Korea, Suwon (Korea, Republic of); Cho, Hyeon Min [The Catholic University of Korea, Suwon (Korea, Republic of); Um, Jun Won [University Ansan Hospital, Ansan (Korea, Republic of); Jang, Hong Seok [The Catholic University of Korea, Seoul (Korea, Republic of)

    2017-09-15

    Whether preoperative chemoradiotherapy (CRT) is better than postoperative CRT in oncologic outcome and toxicity is contentious in prospective randomized clinical trials. We systematically analyze and compare the treatment results, toxicity, and sphincter preservation rates between preoperative CRT and postoperative CRT in stage II–III rectal cancer. We searched Medline, Embase, and the Cochrane Library from 1990 to 2014 for relevant trials. Only phase III randomized studies performing CRT and curative surgery were selected and the data were extracted. Meta-analysis was used to pool oncologic outcome and toxicity data across studies. Three randomized phase III trials were finally identified. The meta-analysis results showed a significantly lower 5-year locoregional recurrence rate in the preoperative-CRT group than in the postoperative-CRT group (hazard ratio, 0.59; 95% confidence interval, 0.41–0.84; p = 0.004). The 5-year distant recurrence rate (p = 0.55), relapse-free survival (p = 0.14), and overall survival (p = 0.22) showed no significant difference between the two groups. Acute toxicity was significantly lower in the preoperative-CRT group than in the postoperative-CRT group (p < 0.001). However, there was no significant difference between the two groups in perioperative and chronic complications (p = 0.53). The sphincter-saving rate was not significantly different between the two groups (p = 0.24). The conversion rate from abdominoperineal resection to low anterior resection in low rectal cancer was significantly higher in the preoperative-CRT group than in the postoperative-CRT group (p < 0.001). As compared to postoperative CRT, preoperative CRT improves only locoregional control, not distant control and survival, with similar chronic toxicity and sphincter preservation rate in rectal cancer patients.

  17. Occupational position and its relation to mental distress in a random sample of Danish residents

    DEFF Research Database (Denmark)

    Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D

    2010-01-01

    PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random...... sample of the Danish population. Participants filled out a survey that included the 92 item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self...

  18. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two-stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch control problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds the optimal solution, which satisfies power balance constraints, generation and transmission inequality constraints, and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two-stage method obtains an average speedup ratio of 10.64 compared to the classical LP-based method.
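
    The two-stage idea (solve the aggregated classical dispatch first, then enforce the inequality constraints) can be sketched for quadratic generator costs. The cost coefficients, limits, and demand below are illustrative; the paper's second stage uses linear programming, which is replaced here by a simple clip-and-redispatch loop for compactness:

```python
# Quadratic costs C_i(p) = a_i * p^2 + b_i * p (coefficients are illustrative).
a = [0.010, 0.012, 0.008]
b = [2.0, 1.8, 2.2]
p_min = [10.0, 10.0, 10.0]
p_max = [100.0, 80.0, 120.0]
demand = 220.0

# Stage 1 (aggregated problem): equal incremental cost,
# dC_i/dp = 2*a_i*p + b_i = lambda for all i, solved in closed form
# from the power-balance equation sum(p_i) = demand.
lam = (demand + sum(bi / (2 * ai) for ai, bi in zip(a, b))) \
      / sum(1 / (2 * ai) for ai in a)
p = [(lam - bi) / (2 * ai) for ai, bi in zip(a, b)]

# Stage 2 (disaggregation): fix any unit violating its limits at the limit,
# then re-dispatch the remaining free units to restore the power balance,
# repeating until all inequality constraints hold.
free = list(range(len(a)))
while True:
    violated = [i for i in free if p[i] < p_min[i] or p[i] > p_max[i]]
    if not violated:
        break
    for i in violated:
        p[i] = min(max(p[i], p_min[i]), p_max[i])
        free.remove(i)
    residual = demand - sum(p[i] for i in range(len(a)) if i not in free)
    lam = (residual + sum(b[i] / (2 * a[i]) for i in free)) \
          / sum(1 / (2 * a[i]) for i in free)
    for i in free:
        p[i] = (lam - b[i]) / (2 * a[i])
```
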

  19. Diagnosis Of Persistent Infection In Prosthetic Two-Stage Exchange: PCR analysis of Sonication fluid From Bone Cement Spacers.

    Science.gov (United States)

    Mariaux, Sandrine; Tafin, Ulrika Furustrand; Borens, Olivier

    2017-01-01

    Introduction: When treating periprosthetic joint infections with a two-stage procedure, antibiotic-impregnated spacers are used in the interval between removal of prosthesis and reimplantation. According to our experience, cultures of sonicated spacers are most often negative. The objective of our study was to investigate whether PCR analysis would improve the detection of bacteria in the spacer sonication fluid. Methods: A prospective monocentric study was performed from September 2014 to January 2016. Inclusion criteria were two-stage procedure for prosthetic infection and agreement of the patient to participate in the study. Beside tissues samples and sonication, broad range bacterial PCRs, specific S. aureus PCRs and Unyvero-multiplex PCRs were performed on the sonicated spacer fluid. Results: 30 patients were identified (15 hip, 14 knee and 1 ankle replacements). At reimplantation, cultures of tissue samples and spacer sonication fluid were all negative. Broad range PCRs were all negative. Specific S. aureus PCRs were positive in 5 cases. We had two persistent infections and four cases of infection recurrence were observed, with bacteria different than for the initial infection in three cases. Conclusion: The three different types of PCRs did not detect any bacteria in spacer sonication fluid that was culture-negative. In our study, PCR did not improve the bacterial detection and did not help to predict whether the patient will present a persistent or recurrent infection. Prosthetic 2-stage exchange with short interval and antibiotic-impregnated spacer is an efficient treatment to eradicate infection as both culture- and molecular-based methods were unable to detect bacteria in spacer sonication fluid after reimplantation.

  20. Two-Stage Multiobjective Optimization for Emergency Supplies Allocation Problem under Integrated Uncertainty

    Directory of Open Access Journals (Sweden)

    Xuejie Bai

    2016-01-01

    Full Text Available This paper proposes a new two-stage optimization method for an emergency supplies allocation problem with multiple suppliers, affected areas, relief supplies, and vehicles. The triplet of supply, demand, and path availability is unknown prior to the extraordinary event and is described by fuzzy random variables. Considering fairness, timeliness, and economic efficiency, a multiobjective expected value model is built for facility location, vehicle routing, and supply allocation decisions. The goals of the proposed model are to minimize the proportion of unsatisfied demand, the response time of emergency reliefs, and the total cost of the whole process. When the demand and the path availability are discrete, the expected values in the objective functions are converted into their equivalent forms. When the supply amount is continuous, the equilibrium chance in the constraint is transformed into its equivalent one. To overcome the computational difficulty caused by multiple objectives, a goal programming model is formulated to obtain a compromise solution. Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method.
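    The goal-programming step that produces the compromise solution can be sketched in a few lines: each objective is given an aspiration level, and candidate decisions are ranked by the weighted overshoot above their goals. The plans, goals, and weights below are invented for illustration only; the paper's actual model additionally involves fuzzy random parameters and an underlying location-routing formulation.

```python
def goal_deviation(objectives, goals, weights):
    """Weighted sum of one-sided overshoots above each aspiration level
    (all three objectives are to be minimized)."""
    return sum(w * max(f - g, 0.0) for f, g, w in zip(objectives, goals, weights))

# Hypothetical plans: (unmet-demand proportion, response time [h], cost [$k]).
candidates = {
    "plan_A": (0.10, 6.0, 120.0),
    "plan_B": (0.05, 8.0, 150.0),
    "plan_C": (0.20, 4.0, 90.0),
}
goals = (0.10, 6.0, 100.0)   # aspiration levels for the three goals
weights = (10.0, 1.0, 0.1)   # rough scale-balancing preference weights

# Compromise solution: the plan with the smallest total goal deviation.
best = min(candidates, key=lambda k: goal_deviation(candidates[k], goals, weights))
```

    The weights double as crude normalization here; a more careful sketch would scale each deviation by its goal value before weighting.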

  1. Two-stage exchange knee arthroplasty: does resistance of the infecting organism influence the outcome?

    Science.gov (United States)

    Kurd, Mark F; Ghanem, Elie; Steinbrecher, Jill; Parvizi, Javad

    2010-08-01

    Periprosthetic joint infection after TKA is a challenging complication. Two-stage exchange arthroplasty is the accepted standard of care, but reported failure rates are increasing. It has been suggested this is due to the increased prevalence of methicillin-resistant infections. We asked the following questions: (1) What is the reinfection rate after two-stage exchange arthroplasty? (2) Which risk factors predict failure? (3) Which variables are associated with acquiring a resistant organism periprosthetic joint infection? This was a case-control study of 102 patients with infected TKA who underwent a two-stage exchange arthroplasty. Ninety-six patients were followed for a minimum of 2 years (mean, 34.5 months; range, 24-90.1 months). Cases were defined as failures of two-stage exchange arthroplasty. Two-stage exchange arthroplasty was successful in controlling the infection in 70 patients (73%). Patients who failed two-stage exchange arthroplasty were 3.37 times more likely to have been originally infected with a methicillin-resistant organism. Older age, higher body mass index, and history of thyroid disease were predisposing factors to infection with a methicillin-resistant organism. Innovative interventions are needed to improve the effectiveness of two-stage exchange arthroplasty for TKA infection with a methicillin-resistant organism as current treatment protocols may not be adequate for control of these virulent pathogens. Level IV, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.

  2. Radiotherapy in stage 3, unresectable, asymptomatic non-small cell lung cancer. Final results of a prospective randomized study of 240 patients

    International Nuclear Information System (INIS)

    Reinfuss, M.; Glinski, B.; Kowalska, T.; Kulpa, J.; Zawila, K.; Reinfuss, K.; Dymek, P.; Herman, K.; Skolyszewski, J.

    1999-01-01

    Purpose: To report the results of a prospective randomized study concerning the role of radiotherapy in the treatment of stage III, unresectable, asymptomatic non-small cell lung cancer. Material and methods: Between 1992 and 1996, 240 patients with stage III, unresectable, asymptomatic non-small cell lung cancer were enrolled in this study and sequentially randomized to one of three treatment arms: conventional irradiation, hypo-fractionated irradiation or control. In the conventional irradiation arm (79 patients), a dose of 50 Gy in 25 fractions over five weeks was delivered to the primary tumor and the mediastinum. In the hypo-fractionated irradiation arm (81 patients), there were two courses of irradiation separated by an interval of four weeks; in each course, patients received 20 Gy in five fractions over five days, in the same treatment volume as the conventional irradiation group. In the control arm, 80 patients initially received no radiotherapy and were only observed; delayed palliative hypo-fractionated irradiation (20-25 Gy in four to five fractions over four to five days) was given to the primary tumor when major symptoms developed. Results: The two-year actuarial survival rates for patients in the conventional irradiation, hypo-fractionated irradiation and control arms were 18%, 6% and 0%, with median survival times of 12 months, nine months and six months, respectively. The differences between survival rates were statistically significant at the 0.05 level. Conclusion: Although irradiation provides good palliation, the results are disappointing. The comparison of conventional and hypo-fractionated irradiation shows an advantage for conventional schedules. (author)

  3. Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling

    Directory of Open Access Journals (Sweden)

    Bo Yu

    2015-01-01

    Full Text Available This paper considers the problem of estimating binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is sampled repeatedly on successive occasions. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
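    As a concrete baseline for what such techniques estimate, the classic Warner (1965) randomized-response estimator recovers the sensitive proportion from the observed "yes" rate on a single occasion; the paper's successive-sampling estimator builds on this idea across occasions. The counts below are illustrative, not data from the paper.

```python
def warner_estimate(n_yes, n, p):
    """Classic Warner (1965) randomized-response estimator.

    Each respondent answers the sensitive statement with probability p
    (p != 0.5) and its complement with probability 1 - p, so the expected
    "yes" rate is lam = p*pi + (1 - p)*(1 - pi).  Solving for pi gives the
    estimator below; the variance formula is Warner's.  This is the
    single-occasion textbook version, not the paper's successive-sampling
    estimator."""
    lam = n_yes / n                                   # observed "yes" proportion
    pi_hat = (lam - (1.0 - p)) / (2.0 * p - 1.0)      # estimated sensitive proportion
    var = lam * (1.0 - lam) / ((2.0 * p - 1.0) ** 2 * n)
    return pi_hat, var

# 420 "yes" answers out of 1000 respondents, design probability p = 0.7.
pi_hat, var = warner_estimate(n_yes=420, n=1000, p=0.7)
```

    With these numbers the estimate is 0.30: well below the raw 42% "yes" rate, since many "yes" answers come from the complement statement.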

  4. Analysis of U and Pu resin bead samples with a single stage mass spectrometer

    International Nuclear Information System (INIS)

    Smith, D.H.; Walker, R.L.; Bertram, L.K.; Carter, J.A.

    1979-01-01

    Resin bead sampling enables the shipment of nanogram quantities of U and Pu for analysis. Application of this sampling technique to safeguards was investigated with a single-stage mass spectrometer. Standards gave results in good agreement with NBS certified values. External precisions of ±0.5% were obtained on isotopic ratios of approximately 0.01; precisions on quantitative measurements are ±1.0%.

  5. Comparison of two techniques used for the recovery of third-stage strongylid nematode larvae from herbage.

    Science.gov (United States)

    Krecek, R C; Maingi, N

    2004-07-14

    A laboratory trial was carried out to determine the efficacy of two methods in recovering known numbers of third-stage (L3) strongylid nematode larvae from herbage. Herbage samples consisting almost entirely of star grass (Cynodon aethiopicus) and free of L3 parasitic nematode larvae were collected at Onderstepoort, South Africa. Two-hundred-gram samples were placed in fibreglass fly-gauze bags and seeded with third-stage strongylid nematode larvae at 11 different levels of herbage infectivity ranging from 50 to 8000 L3/kg. Eight replicates were prepared for each of the 11 levels of herbage infectivity. Four of these were processed using a modified automatic Speed Queen heavy-duty washing machine at a regular normal cycle, followed by isolation of larvae through centrifugation-flotation in saturated sugar solution. Larvae in the other four samples were recovered by soaking the herbage in water overnight and isolating the larvae with the Baermann technique. There was a strong correlation between the number of larvae recovered using both methods and the number of larvae in the seeded samples, indicating that the two methods give a good indication of changes in the numbers of larvae on pasture if applied in epidemiological studies. The washing machine method recovered higher numbers of larvae than the soaking and Baermann method at all levels of pasture seeding, probably because the machine washed the samples more thoroughly and a sugar centrifugation-flotation step was used. Larval suspensions obtained using the washing machine method were also cleaner and thus easier to examine under the microscope. In contrast, the soaking and Baermann method may be more suitable for fieldwork, especially in places where resources and equipment are scarce, as it is less costly in equipment and less labour intensive. Neither method recovered all the larvae from the seeded samples. The recovery rates for the washing machine method ranged from 18 to 41% while

  6. A Two-Stage Maximum Entropy Prior of Location Parameter with a Stochastic Multivariate Interval Constraint and Its Properties

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2016-05-01

    Full Text Available This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon’s entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN, which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.

  7. Kinetics of two-stage fermentation process for the production of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Kaushik [Department of Chemical Engineering, G.H. Patel College of Engineering and Technology, Vallabh Vidyanagar 388 120, Gujarat (India); Muthukumar, Manoj; Kumar, Anish; Das, Debabrata [Fermentation Technology Laboratory, Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India)

    2008-02-15

    The two-stage process described in the present work is a combination of dark and photofermentation in a sequential batch mode. In the first stage, glucose is fermented to acetate, CO2 and H2 in an anaerobic dark fermentation by Enterobacter cloacae DM11. This is followed by a second stage in which acetate is converted to H2 and CO2 in a photobioreactor by the photosynthetic bacterium Rhodobacter sphaeroides O.U. 001. The yield of hydrogen in the first stage was about 3.31 mol H2 per mol glucose (approximately 82% of theoretical) and that in the second stage was about 1.5-1.72 mol H2 per mol acetic acid (approximately 37-43% of theoretical). The overall yield of hydrogen in the two-stage process, considering glucose as the primary substrate, was found to be higher than in a single-stage process. A Monod model, with incorporation of a substrate inhibition term, has been used to determine the growth kinetic parameters for the first stage. The values of the maximum specific growth rate (μmax) and the saturation constant (Ks) were 0.398 h⁻¹ and 5.509 g l⁻¹, respectively, using glucose as substrate. The experimental substrate and biomass concentration profiles agree well with the kinetic model predictions. A model based on the logistic equation has been developed to describe the growth of R. sphaeroides O.U. 001 in the second stage. The modified Gompertz equation was applied to estimate the hydrogen production potential, rate and lag-phase time in a batch process for various initial concentrations of glucose, based on the cumulative hydrogen production curves. Both the curve fitting and statistical analysis showed that the equation was suitable to describe the progress of cumulative hydrogen production. (author)
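    The two fitted models can be written compactly. The μmax and Ks defaults below are the paper's reported values for growth on glucose; the substrate-inhibition term of the paper's Monod variant is omitted, and the Gompertz parameters (P, Rm, lag) are placeholders since the abstract does not report them.

```python
import math

def monod_mu(S, mu_max=0.398, Ks=5.509):
    """Monod specific growth rate (h^-1) at substrate concentration S (g/l).
    Defaults are the paper's fitted values; the paper's substrate-inhibition
    term is omitted in this sketch."""
    return mu_max * S / (Ks + S)

def gompertz_h2(t, P, Rm, lag):
    """Modified Gompertz curve for cumulative hydrogen production:
    H(t) = P * exp(-exp(Rm*e/P * (lag - t) + 1)),
    with P the production potential, Rm the maximum rate, lag the lag time."""
    return P * math.exp(-math.exp(Rm * math.e / P * (lag - t) + 1.0))

# Hypothetical batch: potential 500 ml, peak rate 40 ml/h, 5 h lag phase.
P, Rm, lag = 500.0, 40.0, 5.0
curve = [gompertz_h2(t, P, Rm, lag) for t in (0, 12, 24, 36, 48)]
```

    The curve rises sigmoidally after the lag phase and saturates toward P, which is why fitting it to cumulative production data yields the potential, rate, and lag time in one step.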

  8. Two-stage energy storage equalization system for lithium-ion battery pack

    Science.gov (United States)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To this end, a two-stage energy storage equalization system, comprising a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed on the basis of bidirectional active equalization theory, with the objective of making the voltages of lithium-ion battery packs, and of the cells inside each pack, consistent as measured by the range method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. Equalization time was 0.5 ms, a 33.3 percent reduction compared with the DC-DC converter alone. The proposed two-stage lithium-ion battery equalization system can therefore achieve maximum storage capacity across battery packs and cells inside packs, while the efficiency of energy storage is significantly improved.
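    The range-based consistency metric referred to above can be read as the spread of pack (or cell) voltages relative to their mean; the helper below is my interpretation of that metric, with made-up voltages, not code or data from the paper.

```python
def range_dispersion(voltages):
    """Relative voltage dispersion by the range method: (max - min) / mean.
    The paper's equalization target keeps this figure below 2 percent."""
    mean = sum(voltages) / len(voltages)
    return (max(voltages) - min(voltages)) / mean

# Hypothetical pack voltages after equalization (volts).
packs_dispersion = range_dispersion([48.1, 48.4, 48.0, 48.3])
```

    For these values the dispersion is about 0.83%, comfortably inside the stated 2% band.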

  9. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

    The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade, and it continues. The purpose of this paper is to direct the focus back to how the power is processed and not so much to the number of stages or the amount of power processed...

  10. Treatment of Middle East Respiratory Syndrome with a combination of lopinavir-ritonavir and interferon-β1b (MIRACLE trial): study protocol for a randomized controlled trial.

    Science.gov (United States)

    Arabi, Yaseen M; Alothman, Adel; Balkhy, Hanan H; Al-Dawood, Abdulaziz; AlJohani, Sameera; Al Harbi, Shmeylan; Kojan, Suleiman; Al Jeraisy, Majed; Deeb, Ahmad M; Assiri, Abdullah M; Al-Hameed, Fahad; AlSaedi, Asim; Mandourah, Yasser; Almekhlafi, Ghaleb A; Sherbeeni, Nisreen Murad; Elzein, Fatehi Elnour; Memon, Javed; Taha, Yusri; Almotairi, Abdullah; Maghrabi, Khalid A; Qushmaq, Ismael; Al Bshabshe, Ali; Kharaba, Ayman; Shalhoub, Sarah; Jose, Jesna; Fowler, Robert A; Hayden, Frederick G; Hussein, Mohamed A

    2018-01-30

    It has been more than 5 years since the first case of Middle East Respiratory Syndrome coronavirus (MERS-CoV) infection was recorded, but no specific treatment has been investigated in randomized clinical trials. Results from in vitro and animal studies suggest that a combination of lopinavir/ritonavir and interferon-β1b (IFN-β1b) may be effective against MERS-CoV. The aim of this study is to investigate the efficacy of treatment with a combination of lopinavir/ritonavir and recombinant IFN-β1b given with standard supportive care, compared to placebo given with standard supportive care, in patients with laboratory-confirmed MERS requiring hospital admission. The protocol is prepared in accordance with the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) guidelines. Hospitalized adult patients with laboratory-confirmed MERS will be enrolled in this recursive, two-stage, group sequential, multicenter, placebo-controlled, double-blind randomized controlled trial. The trial is initially designed to include two two-stage components. The first two-stage component is designed to adjust sample size and determine futility stopping, but not efficacy stopping. The second two-stage component is designed to determine efficacy stopping and possibly readjustment of sample size. The primary outcome is 90-day mortality. This will be the first randomized controlled trial of a potential treatment for MERS. The study is sponsored by King Abdullah International Medical Research Center, Riyadh, Saudi Arabia. Enrollment for this study began in November 2016, and thirteen patients had been enrolled as of 24 January 2018. ClinicalTrials.gov, ID: NCT02845843. Registered on 27 July 2016.

  11. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    Science.gov (United States)

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

    Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual outcome is calculated from a regression of the outcome variable on covariates, and then the relationship between the adjusted outcome and the SNP is evaluated by a simple linear regression of the adjusted outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation (r²) between the SNP and the covariate. For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP and covariates are uncorrelated, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
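    The reported attenuation follows from a short covariance calculation: with a standardized SNP G and covariate C at correlation r, residualizing Y on C shrinks the SNP slope by exactly (1 - r²). The sketch below is my own derivation in code, not the authors' software.

```python
def two_stage_beta(beta_g, beta_c, r):
    """Expected SNP slope from residual-outcome regression under
    Y = beta_g*G + beta_c*C + e, with G and C standardized and Corr(G, C) = r.
    Stage 1 regresses Y on C alone (slope gamma = beta_g*r + beta_c);
    stage 2 regresses the residual Y - gamma*C on G, whose slope is
    Cov(Y - gamma*C, G) since Var(G) = 1.  Algebraically this equals
    beta_g * (1 - r**2), independent of beta_c."""
    gamma = beta_g * r + beta_c                  # slope of Y on C alone
    return (beta_g + beta_c * r) - gamma * r     # Cov(residual, G)

# Attenuated slopes for true beta_g = 1 at r^2 = 0, 0.1, 0.5.
attenuated = [two_stage_beta(1.0, 0.8, r2 ** 0.5) for r2 in (0.0, 0.1, 0.5)]
```

    This reproduces the abstract's 0%, 10%, and 50% attenuation figures for r² = 0, 0.1, and 0.5.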

  12. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    Science.gov (United States)

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  13. Adaptive importance sampling of random walks on continuous state spaces

    International Nuclear Information System (INIS)

    Baggerly, K.; Cox, D.; Picard, R.

    1998-01-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.

  14. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
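    Stripped of the registration machinery, the two-stage selection logic is a shortlist-then-refine pattern: a cheap relevance score prunes the atlas pool, and only the survivors pay for the expensive refined score. The scores and subset sizes below are invented for illustration and do not come from the paper.

```python
def two_stage_select(atlases, cheap_score, full_score, m, k):
    """Two-stage subset selection sketch (the general idea, not the authors'
    pipeline).  cheap_score approximates full_score at a fraction of its cost;
    only the m stage-1 survivors are evaluated with the expensive full_score.
    Higher score means more relevant; returns the final fusion set of size k."""
    stage1 = sorted(atlases, key=cheap_score, reverse=True)[:m]   # augmented subset
    return sorted(stage1, key=full_score, reverse=True)[:k]      # fusion set

# Hypothetical relevance scores for 8 atlases (cheap vs full-fledged metric).
cheap = {"a1": 0.9, "a2": 0.8, "a3": 0.7, "a4": 0.6,
         "a5": 0.5, "a6": 0.4, "a7": 0.3, "a8": 0.2}
full = {"a1": 0.6, "a2": 0.9, "a3": 0.8, "a4": 0.5,
        "a5": 0.7, "a6": 0.3, "a7": 0.2, "a8": 0.1}
fusion_set = two_stage_select(list(cheap), cheap.get, full.get, m=5, k=3)
```

    Choosing m large enough that the truly best atlases survive stage 1 with high probability is exactly what the paper's inference model is for.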

  15. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas

  16. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  17. Experimental study on an innovative multifunction heat pipe type heat recovery two-stage sorption refrigeration system

    International Nuclear Information System (INIS)

    Li, T.X.; Wang, R.Z.; Wang, L.W.; Lu, Z.S.

    2008-01-01

    An innovative multifunction heat pipe type sorption refrigeration system is designed, in which a two-stage sorption thermodynamic cycle based on two heat recovery processes is employed to reduce the driving heat source temperature, and a composite sorbent of CaCl2 and activated carbon is used to improve the mass and heat transfer performance. In this test unit, the heating, cooling and heat recovery processes between the two reactive beds are performed by multifunction heat pipes. The aim of this paper is to investigate the cycle characteristics of the two-stage sorption refrigeration system with heat recovery processes. The two sub-cycles of a two-stage cycle have different sorption platforms even though the adsorption and desorption temperatures are equivalent. The experimental results showed that the pressure evolutions of the two beds are nearly equivalent during the first stage, while the desorption pressure during the second stage is much higher than that in the first stage although the desorption temperatures are the same during the two operation stages. In comparison with the conventional two-stage cycle, the two-stage cycle with heat recovery processes can reduce the heating load for the desorber and the cooling load for the adsorber; the coefficient of performance (COP) was improved by more than 23% when both cycles had the same regeneration temperature of 103 °C and a cooling water temperature of 30 °C. The advanced two-stage cycle provides an effective method for applying sorption refrigeration technology with low-grade-temperature heat sources or renewable energy.

  18. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    Science.gov (United States)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising method is performed on every time slice extracted from the 3D noisy data along the source and receiver directions; the 2D non-equispaced fast Fourier transform (NFFT) is then introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient method for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients at each decomposition scale, and effective curvelet coefficients are obtained for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
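    Once the data have been mapped into a sparse transform domain, the core denoising step is coefficient thresholding: large coefficients are kept (slightly shrunk), small ones, assumed to be mostly noise, are zeroed. The toy soft-thresholding rule below, applied to made-up coefficients, stands in for the scale-dependent curvelet-coefficient thresholding of the paper.

```python
def soft_threshold(coeffs, tau):
    """Soft-thresholding of transform coefficients: zero any coefficient
    whose magnitude is at or below tau, and shrink the rest toward zero by
    tau.  A generic stand-in for the paper's per-scale curvelet thresholds."""
    return [0.0 if abs(c) <= tau else (abs(c) - tau) * (1.0 if c > 0 else -1.0)
            for c in coeffs]

# Two strong "signal" coefficients among small "noise" ones.
denoised = soft_threshold([3.0, -0.2, 0.1, -2.5, 0.05], tau=0.5)
```

    Applying the inverse transform to such thresholded coefficients yields the denoised data; the paper's contribution is making the forward transform well defined when the input samples are not on a uniform grid.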

  19. Comparison of sample types and diagnostic methods for in vivo detection of Mycoplasma hyopneumoniae during early stages of infection.

    Science.gov (United States)

    Pieters, Maria; Daniels, Jason; Rovira, Albert

    2017-05-01

    Detection of Mycoplasma hyopneumoniae in live pigs during the early stages of infection is critical for timely implementation of control measures, but is technically challenging. This study compared the sensitivity of various sample types and diagnostic methods for detection of M. hyopneumoniae during the first 28 days after experimental exposure. Twenty-one 8-week-old pigs were intra-tracheally inoculated on day 0 with M. hyopneumoniae strain 232. Two age-matched pigs were mock inoculated and maintained as negative controls. On post-inoculation days 0, 2, 5, 9, 14, 21 and 28, nasal swabs, laryngeal swabs, tracheobronchial lavage fluid, and blood samples were obtained from each pig, and oral fluid samples were obtained from each room in which pigs were housed. Serum samples were assayed by ELISA for IgM and IgG M. hyopneumoniae antibodies and C-reactive protein. All other samples were tested for M. hyopneumoniae DNA by species-specific real-time PCR. Serum antibodies (IgG) to M. hyopneumoniae were detected in challenge-inoculated pigs on days 21 and 28. M. hyopneumoniae DNA was detected in samples from experimentally inoculated pigs beginning at 5 days post-inoculation. Laryngeal swabs at all samplings beginning on day 5 showed the highest sensitivity for M. hyopneumoniae DNA detection, while oral fluids showed the lowest sensitivity. Although laryngeal swabs are not considered the typical M. hyopneumoniae diagnostic sample, under the conditions of this study laryngeal swabs tested by PCR proved to be a practical and reliable diagnostic sample for M. hyopneumoniae detection in vivo during early-stage infection. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Genetic variants at 1p11.2 and breast cancer risk: a two-stage study in Chinese women.

    Directory of Open Access Journals (Sweden)

    Yue Jiang

    Full Text Available BACKGROUND: Genome-wide association studies (GWAS) have identified several breast cancer susceptibility loci, and one genetic variant, rs11249433, at 1p11.2 was reported to be associated with breast cancer in European populations. To explore the genetic variants in this region associated with breast cancer in Chinese women, we conducted a two-stage fine-mapping study with a total of 1792 breast cancer cases and 1867 controls. METHODOLOGY/PRINCIPAL FINDINGS: Seven single nucleotide polymorphisms (SNPs) including rs11249433 in a 277 kb region at 1p11.2 were selected, and genotyping was performed using the TaqMan® OpenArray™ Genotyping System for stage 1 samples (878 cases and 900 controls). In stage 2 (914 cases and 967 controls), three SNPs (rs2580520, rs4844616 and rs11249433) were further selected and genotyped for validation. The results showed that one SNP (rs2580520) located at a predicted enhancer region of SRGAP2 was consistently associated with a significantly increased risk of breast cancer in a recessive genetic model [odds ratio (OR) = 1.66, 95% confidence interval (CI) = 1.16-2.36 for stage 2 samples; OR = 1.51, 95% CI = 1.16-1.97 for combined samples, respectively]. However, no significant association was observed between rs11249433 and breast cancer risk in this Chinese population (dominant genetic model in combined samples: OR = 1.20, 95% CI = 0.92-1.57). CONCLUSIONS/SIGNIFICANCE: Genotypes of rs2580520 at 1p11.2 suggest that Chinese women may have different breast cancer susceptibility loci, which may contribute to the development of breast cancer in this population.

  1. [Asthma at acute attack stage treated with "Shao's five needling therapy": a multi-central randomized controlled study].

    Science.gov (United States)

    Shao, Su-Ju; Quan, Chun-Fen; Shao, Su-Xia; Zhou, Miao; Jing, Xin-Jian; Zhao, Yu-Xiao; Ren, Zhi-Xin; Wang, Pei-Yu; Gao, Xi-Yan; Yang, Jie; Ren, Zhong; Kong, Li

    2013-09-01

    To evaluate the clinical efficacy of "Shao's five needling therapy" for asthma at the acute attack stage. A randomized controlled method was applied to divide 210 cases into an observation group and a control group, 105 cases in each. In the observation group, "Shao's five needling therapy" [Feishu (BL 13), Dazhui (GV 14), Fengmen (BL 12)] and combined therapy were adopted, including oxygen uptake, aerosol inhalation and oral administration of prednisone. In the control group, oral administration of theophylline sustained-release tablets and the combined therapy were applied. The treatment was continued for 7 days. The clinical symptoms and physical signs such as wheezing, cough, expectoration, chest stuffiness, wheezing rale and shortness of breath, as well as lung function indices such as forced expiratory volume in one second (FEV1) and peak expiratory flow (PEF), were observed before and after treatment in the two groups. In the observation group, 69 cases were cured clinically, 20 cases were remarkably effective, 7 cases effective and 0 cases failed. In the control group, 49 cases were cured clinically, 31 cases remarkably effective, 15 cases effective and 0 cases failed. The difference in efficacy between the two groups was significant, in favor of "Shao's five needling therapy" for asthma at the acute attack stage. It significantly relieves the symptoms and physical signs of the patients and improves lung function. The effect is better than that of theophylline sustained-release tablets.

  2. Reliability and validity of the Modified Erikson Psychosocial Stage Inventory in diverse samples.

    Science.gov (United States)

    Leidy, N K; Darling-Fisher, C S

    1995-04-01

    The Modified Erikson Psychosocial Stage Inventory (MEPSI) is a relatively simple survey measure designed to assess the strength of psychosocial attributes that arise from progression through Erikson's eight stages of development. The purpose of this study was to employ secondary analysis to evaluate the internal-consistency reliability and construct validity of the MEPSI across four diverse samples: healthy young adults, hemophilic men, healthy older adults, and older adults with chronic obstructive pulmonary disease. Special attention was given to the performance of the measure across gender, with exploratory analyses examining possible age cohort and health status effects. Internal-consistency estimates for the aggregate measure were high, whereas subscale reliability levels varied across age groups. Construct validity was supported across samples. Gender, cohort, and health effects offered interesting psychometric and theoretical insights and direction for further research. Findings indicated that the MEPSI might be a useful instrument for operationalizing and testing Eriksonian developmental theory in adults.

  3. A controlled, randomized, comparative study of a radiant heat bandage on the healing of stage 3-4 pressure ulcers: a pilot study.

    Science.gov (United States)

    Thomas, David R; Diebold, Marilyn R; Eggemeyer, Linda M

    2005-01-01

    Pressure ulcers, like other chronic wounds, fail to proceed through an orderly and timely process to produce anatomical or functional integrity. Treatment of pressure ulcers is directed to improving host factors and providing an optimum wound environment. In addition to providing a moist wound environment, it has been theorized that preventing hypothermia in a wound and maintaining a normothermic state might improve wound healing. Forty-one subjects with a stage 3 or stage 4 truncal pressure ulcer >1.0 cm² were recruited from outpatient clinics, long-term care nursing homes, and a rehabilitation center. The experimental group was randomized to a radiant-heat dressing device and the control group was randomized to a hydrocolloid dressing, with or without a calcium alginate filler. Subjects were followed until healed or for 12 weeks. Eight subjects (57%) in the experimental group had complete healing of their pressure ulcer compared with 7 subjects (44%) with complete healing in the control group (P = .46). Although a 13% difference in healing rate between the two arms of the study was found, this difference was not statistically significant. At almost all points along the healing curve, the proportion not healed was higher in the control arm.

  4. One-Stage and Two-Stage Schemes of High Performance Synchronous PWM with Smooth Pulses-Ratio Changing

    DEFF Research Database (Denmark)

    Oleschuk, V.; Blaabjerg, Frede

    2002-01-01

    This paper presents a detailed description of one-stage and two-stage schemes of a novel method of synchronous pulsewidth modulation (PWM) for voltage source inverters for ac drive applications. The proposed control functions provide accurate realization of different versions of voltage space vector modulation with synchronization of the voltage waveform of the inverter and with smooth pulse-ratio changing. Voltage spectra do not contain even harmonics or sub-harmonics (combined harmonics) during the whole control range, including the zone of overmodulation. Examples of determination of the basic control...

  5. One-stage exchange with antibacterial hydrogel coated implants provides similar results to two-stage revision, without the coating, for the treatment of peri-prosthetic infection.

    Science.gov (United States)

    Capuano, Nicola; Logoluso, Nicola; Gallazzi, Enrico; Drago, Lorenzo; Romanò, Carlo Luca

    2018-03-16

    The aim of this study was to verify the hypothesis that a one-stage exchange procedure, performed with an antibiotic-loaded, fast-resorbable hydrogel coating, provides an infection recurrence rate similar to that of a two-stage procedure without the coating, in patients affected by peri-prosthetic joint infection (PJI). In this two-center case-control study, 22 patients treated with a one-stage procedure, using implants coated with an antibiotic-loaded hydrogel [defensive antibacterial coating (DAC)], were compared with 22 retrospectively matched controls treated with a two-stage revision procedure without the coating. At a mean follow-up of 29.3 ± 5.0 months, two patients (9.1%) in the DAC group showed an infection recurrence, compared to three patients (13.6%) in the two-stage group. Clinical scores were similar between groups, while average hospital stay and antibiotic treatment duration were significantly reduced after one-stage, compared to two-stage, revision (18.9 ± 2.9 versus 35.8 ± 3.4 and 23.5 ± 3.3 versus 53.7 ± 5.6 days, respectively). Although from a relatively limited series of patients, our data show a similar infection recurrence rate after one-stage exchange with DAC-coated implants, compared to two-stage revision without the coating, with reduced overall hospitalization time and antibiotic treatment duration. These findings warrant further studies of the possible applications of antibacterial coating technologies to treat implant-related infections. III.

  6. Adults' Knowledge of Child Development in Alberta, Canada: Comparing the Level of Knowledge of Adults in Two Samples in 2007 and 2013

    Science.gov (United States)

    Pujadas Botey, Anna; Vinturache, Angela; Bayrampour, Hamideh; Breitkreuz, Rhonda; Bukutu, Cecilia; Gibbard, Ben; Tough, Suzanne

    2017-01-01

    Parents and non-parental adults who interact with children influence child development. This study evaluates the knowledge of child development in two large and diverse samples of adults from Alberta in 2007 and 2013. Telephone interviews were completed by two random samples (1,443 in 2007; 1,451 in 2013). Participants were asked when specific…

  7. A two-phase inspection model for a single component system with three-stage degradation

    International Nuclear Information System (INIS)

    Wang, Huiying; Wang, Wenbin; Peng, Rui

    2017-01-01

    This paper presents a two-phase inspection schedule and an age-based replacement policy for a single plant item subject to a three-stage degradation process. The two-phase inspection schedule can be observed in practice. The three stages are defined as the normal working stage, the low-grade defective stage and the critical defective stage. When an inspection detects that an item is in the low-grade defective stage, the preventive replacement action may be delayed if the time to the age-based replacement is less than or equal to a threshold level; if it is above this threshold, the item is replaced immediately. If the item is found in the critical defective stage, it is replaced immediately. A hybrid bee colony algorithm is developed to find the optimal solution for the proposed model, which has multiple decision variables. A numerical example is conducted to show the efficiency of this algorithm, and simulations are conducted to verify the correctness of the model. - Highlights: • A two-phase inspection model is studied. • The failure process has three stages. • Delayed replacement is considered.
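    The replacement logic described in the abstract can be sketched as a simple decision rule; the stage names and the threshold parameter below are illustrative assumptions, not the paper's actual model values:

    ```python
    # Three degradation stages, as described in the abstract.
    NORMAL, LOW_GRADE, CRITICAL = "normal", "low-grade defective", "critical defective"

    def inspection_decision(stage, time_to_age_replacement, delay_threshold):
        """Action taken at an inspection under the three-stage policy."""
        if stage == CRITICAL:
            return "replace immediately"          # critical defect: no delay allowed
        if stage == LOW_GRADE:
            # Defer only if the planned age-based replacement is close enough.
            if time_to_age_replacement <= delay_threshold:
                return "delay until age-based replacement"
            return "replace immediately"
        return "continue operating"               # normal working stage

    action = inspection_decision(LOW_GRADE, time_to_age_replacement=5.0,
                                 delay_threshold=10.0)
    ```

    The optimization in the paper then searches over the inspection intervals, the threshold, and the replacement age; the rule above is only the per-inspection decision it evaluates.
    
    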

  8. Two sampling techniques for game meat

    Directory of Open Access Journals (Sweden)

    Maretha van der Merwe

    2013-03-01

    Full Text Available A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g) and square centimetres (cm2) for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12) that statistically proved the two measuring units correlated. The two sampling techniques were conducted on the same game carcasses (n = 13) and analyses performed for aerobic plate count (APC), Escherichia coli and Staphylococcus aureus, for both techniques. A more representative result was obtained by swabbing and no damage was caused to the carcass. Conversely, the excision technique yielded fewer organisms and caused minor damage to the carcass. The recovery ratio from the sampling technique improved 5.4 times for APC, 108.0 times for E. coli and 3.4 times for S. aureus over the results obtained from the excision technique. It was concluded that the sampling methods of excision and swabbing can be used to obtain bacterial profiles from both export and local carcasses and could be used to indicate whether game carcasses intended for the local market are possibly on par with game carcasses intended for the export market and therefore safe for human consumption.

  9. Two sampling techniques for game meat.

    Science.gov (United States)

    van der Merwe, Maretha; Jooste, Piet J; Hoffman, Louw C; Calitz, Frikkie J

    2013-03-20

    A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g) and square centimetres (cm2) for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12) that statistically proved the two measuring units correlated. The two sampling techniques were conducted on the same game carcasses (n = 13) and analyses performed for aerobic plate count (APC), Escherichia coli and Staphylococcus aureus, for both techniques. A more representative result was obtained by swabbing and no damage was caused to the carcass. Conversely, the excision technique yielded fewer organisms and caused minor damage to the carcass. The recovery ratio from the sampling technique improved 5.4 times for APC, 108.0 times for E. coli and 3.4 times for S. aureus over the results obtained from the excision technique. It was concluded that the sampling methods of excision and swabbing can be used to obtain bacterial profiles from both export and local carcasses and could be used to indicate whether game carcasses intended for the local market are possibly on par with game carcasses intended for the export market and therefore safe for human consumption.

  10. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of vertices with high node degree can possess most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so that they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods on three commonly used simulation networks, namely scale-free, random and small-world networks, and also on two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
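    The abstract does not give the algorithms themselves, but the underlying idea of favouring high-degree vertices can be sketched generically; the degree-proportional weighting below is an illustrative assumption, not the paper's SRS- or SBS-based procedure:

    ```python
    import random

    def degree_biased_sample(adjacency, sample_size, rng):
        """Sample nodes (with replacement) with probability proportional to
        degree, a simple way to favour high-degree, information-rich nodes."""
        nodes = list(adjacency)
        weights = [len(adjacency[n]) for n in nodes]
        return rng.choices(nodes, weights=weights, k=sample_size)

    # Tiny toy network: node 0 is a hub connected to four leaves.
    adjacency = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
    rng = random.Random(42)
    sample = degree_biased_sample(adjacency, sample_size=1000, rng=rng)
    hub_share = sample.count(0) / len(sample)   # expected near 4/8 = 0.5
    ```

    Even at a low sampling rate, a scheme of this kind keeps hitting the hubs that carry most of the structural information, which is the intuition the paper's methods exploit.
    
    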

  11. Experimental Results of the First Two Stages of an Advanced Transonic Core Compressor Under Isolated and Multi-Stage Conditions

    Science.gov (United States)

    Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.

    2015-01-01

    NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency of large gas turbine engines. Under ERA, the task for a High Pressure Ratio Core Technology program calls for a higher overall pressure ratio of 60 to 70. This means that the HPC would have to almost double in pressure ratio while keeping its high level of efficiency. The challenge is how to match the corrected mass flow rate of the front two supersonic, high-reaction, high-corrected-tip-speed stages with a total pressure ratio of 3.5. NASA and GE teamed to address this challenge by using the initial geometry of an advanced GE compressor design to meet the requirements of the first two stages of the very high pressure ratio core compressor. The rig was configured to run as a two-stage machine: the strut and IGV, rotor 1 and stator 1 were first run as independent tests, followed by the addition of the second stage. The goal is to fully understand the stage performances under isolated and multi-stage conditions, to understand any differences, and to provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to isolate fluid-dynamic loss source mechanisms due to interaction and endwalls. The paper presents the description of the compressor test article, its predicted performance and operability, and the experimental results for both the single-stage and two-stage configurations. We focus the detailed measurements on 97% and 100% of design speed at three vane setting angles.

  12. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  13. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering as little of the unknown network as possible. Despite several theoretical advances in this area, RW-based sampling techniques usually make the strong assumption that the samples are in the stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without the burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations, either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT) estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of the RL-estimator are much more stable than those of standard random-walk-based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS), which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of the RT-estimator decays much faster than that of RDS with time. The newly developed RW-based estimators (RL- and RT-estimators) avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time.
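    The regeneration idea can be sketched with a minimal tour-based ratio estimator: the walk restarts its bookkeeping each time it returns to an anchor node, and a degree reweighting corrects the random walk's bias toward high-degree nodes. This is a generic sketch of the tour idea, not the paper's RT-estimator:

    ```python
    import random

    def rw_tour_estimate(adjacency, f, anchor, num_tours, rng):
        """Estimate the network average of f via random-walk tours that
        regenerate at each return to the anchor node."""
        num_sum = den_sum = 0.0
        v = anchor
        tours = 0
        while tours < num_tours:
            deg = len(adjacency[v])
            num_sum += f(v) / deg     # reweight: a simple RW favours high degree
            den_sum += 1.0 / deg
            v = rng.choice(adjacency[v])
            if v == anchor:
                tours += 1            # regeneration: one tour completed
        return num_sum / den_sum

    # Toy graph: a triangle; f is the node label, with true average 1.0.
    adjacency = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
    rng = random.Random(7)
    est = rw_tour_estimate(adjacency, f=lambda v: v, anchor=0,
                           num_tours=2000, rng=rng)
    ```

    Because tours between regenerations are independent and identically distributed, no burn-in is needed, which is precisely the property the paper's estimators exploit.
    
    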

  14. Compressed gas combined single- and two-stage light-gas gun

    Science.gov (United States)

    Lamberson, L. E.; Boettcher, P. A.

    2018-02-01

    With more than 1 trillion artificial objects smaller than 1 μm in low and geostationary Earth orbit, space assets are subject to the constant threat of space debris impact. These collisions occur at hypervelocity, or speeds greater than 3 km/s. In order to characterize material behavior under this extreme event as well as study next-generation materials for space exploration, this paper presents a unique two-stage light-gas gun capable of replicating hypervelocity impacts. While a limited number of these types of facilities exist, they typically are extremely large and can be costly and dangerous to operate. The design presented in this paper is novel in two distinct ways. First, it does not use a form of combustion in the first stage. The projectile is accelerated from a pressure differential using air and inert gases (or purely inert gases), firing a projectile in a nominal range of 1-4 km/s. Second, the design is modular in that the first stage sits on a track sled and can be pulled back and used by itself to study lower-speed impacts without any further modifications, with the first-stage piston as the impactor. The modularity of the instrument allows investigation of three orders of magnitude of impact velocities, between 10¹ and 10³ m/s, in a single, relatively small, cost-effective instrument.

  15. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    Full Text Available The Monte Carlo (MC) method is a popular simulation of photon propagation in turbid media, but its main problem is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering into a single-step process through random table querying, thus greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. TBRS is a fast alternative to the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of the fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
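    The table-querying idea can be illustrated generically: compute a lookup table of outcomes once by inverse-CDF sampling, then draw each sample by cheap random indexing instead of repeated transcendental evaluation. The exponential step-length distribution below is a stand-in for illustration, not the paper's photon-scattering tables:

    ```python
    import math
    import random

    TABLE_SIZE = 10_000
    MEAN_FREE_PATH = 2.0

    # Build the table once, by inverse-CDF sampling at equally spaced quantiles.
    table = [-MEAN_FREE_PATH * math.log(1.0 - (i + 0.5) / TABLE_SIZE)
             for i in range(TABLE_SIZE)]

    def sample_step(rng):
        """Draw a step length by querying the precomputed table at random."""
        return table[rng.randrange(TABLE_SIZE)]

    rng = random.Random(1)
    draws = [sample_step(rng) for _ in range(50_000)]
    mean_step = sum(draws) / len(draws)   # should be close to MEAN_FREE_PATH
    ```

    The one-time table-building cost is amortized over millions of photon steps, which is where the speedup over per-step sampling comes from.
    
    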

  16. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations.Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  17. Statistics of light deflection in a random two-phase medium

    International Nuclear Information System (INIS)

    Sviridov, A P

    2007-01-01

    The statistics of the angles of light deflection during its propagation in a random two-phase medium with randomly oriented phase interfaces is considered within the framework of geometrical optics. The probabilities of finding a randomly walking photon in different phases of the inhomogeneous medium are calculated. Analytic expressions are obtained for the scattering phase function and the scattering phase matrix which relates the Stokes vector of the incident light beam with the Stokes vectors of deflected beams. (special issue devoted to multiple radiation scattering in random media)

  18. Design and construction of a two-stage centrifugal pump | Nordiana ...

    African Journals Online (AJOL)

    Centrifugal pumps are widely used in moving liquids from one location to another in homes, offices and industries. Due to the ever increasing demand for centrifugal pumps it became necessary to design and construction of a two-stage centrifugal pump. The pump consisted of an electric motor, a shaft, two rotating impellers ...

  19. A Nationwide Random Sampling Survey of Potential Complicated Grief in Japan

    Science.gov (United States)

    Mizuno, Yasunao; Kishimoto, Junji; Asukai, Nozomu

    2012-01-01

    To investigate the prevalence of significant loss, potential complicated grief (CG), and its contributing factors, we conducted a nationwide random sampling survey of Japanese adults aged 18 or older (N = 1,343) using a self-rating Japanese-language version of the Complicated Grief Brief Screen. Among them, 37.0% experienced their most significant…

  20. Response of stiff piles to random two-way lateral loading

    DEFF Research Database (Denmark)

    Bakmar, Christian LeBlanc; Byrne, B.W.; Houlsby, G. T.

    2010-01-01

    A model for predicting the accumulated rotation of stiff piles under random two-way loading is presented. The model is based on a strain superposition rule similar to Miner's rule and uses rainflow-counting to decompose a random time-series of varying loads into a set of simple load reversals. The method is consistent with the work of LeBlanc et al. (2010) and is supported by 1g laboratory tests. An example is given for an offshore wind turbine indicating that accumulated pile rotation during the life of the turbine is dominated by the worst expected load.
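    The first step of rainflow-counting is reducing the load time-series to its local extrema (turning points), which the algorithm then pairs into load reversals. The sketch below covers only this reduction step; the full cycle-pairing stage (e.g. as standardized in ASTM E1049) is omitted:

    ```python
    def turning_points(series):
        """Reduce a load time-series to its local extrema, the first step of
        rainflow-counting before reversals are extracted."""
        pts = [series[0]]
        for x in series[1:]:
            if x == pts[-1]:
                continue                 # ignore repeated values
            if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) > 0:
                pts[-1] = x              # same direction: extend the excursion
            else:
                pts.append(x)            # direction change: new turning point
        return pts

    load = [0, 1, 2, 1, 3, -1, -2, 0]
    extrema = turning_points(load)       # [0, 2, 1, 3, -2, 0]
    ```

    Each adjacent pair of extrema defines a load reversal whose contribution can then be accumulated with a Miner-style superposition rule, as in the model above.
    
    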

  1. Area G perimeter surface-soil and single-stage water sampling. Environmental surveillance for fiscal year 95. Progress report

    International Nuclear Information System (INIS)

    Childs, M.; Conrad, R.

    1997-09-01

    ESH-19 personnel collected soil and single-stage water samples around the perimeter of Area G at Los Alamos National Laboratory (LANL) during FY 95 to characterize possible radionuclide movement out of Area G through surface water and entrained sediment runoff. Soil samples were analyzed for tritium, total uranium, isotopic plutonium, americium-241, and cesium-137. The single-stage water samples were analyzed for tritium and plutonium isotopes. All radiochemical data were compared with analogous samples collected during FY 93 and 94 and reported in LA-12986 and LA-13165-PR. Six surface soils were also submitted for metal analyses. These data were included with similar data generated for soil samples collected during FY 94 and compared with metals in background samples collected at the Area G expansion area

  2. A Two-stage Improvement Method for Robot Based 3D Surface Scanning

    Science.gov (United States)

    He, F. B.; Liang, Y. D.; Wang, R. F.; Lin, Y. S.

    2018-03-01

    Since the surface of an unknown object is difficult to measure or recognize precisely, 3D laser scanning technology was introduced and is widely used in surface reconstruction. Usually, slower surface scanning yields better quality, while faster scanning degrades it. The paper therefore presents a new two-stage scanning method to pursue high-quality surface scanning at a faster speed. The first stage is a rough scan to obtain general point-cloud data of the object's surface; the second stage is a specific scan to repair missing regions, which are determined by a chord-length discretization method. A system containing a robotic manipulator and a handy scanner was also developed to implement the two-stage scanning method, and the corresponding paths were planned according to minimum-enclosing-ball and regional-coverage theories.

  3. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    Science.gov (United States)

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. Here we consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data from FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differs widely from the real values if one uses the standard deterministic approach for noisy measurements. This supports our argument that for the analysis of FACS data one should treat the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
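    The small-sample effect discussed in the second problem can be illustrated with a simple simulation; the state probability and population sizes below are hypothetical, chosen only to show how the spread of the estimate scales with the number of cells measured:

    ```python
    import random

    def estimate_state_probability(true_p, n_cells, n_trials, rng):
        """Repeatedly estimate a cell-state probability from samples of n_cells
        cells; return the mean and spread (std dev) of the estimates."""
        estimates = []
        for _ in range(n_trials):
            in_state = sum(rng.random() < true_p for _ in range(n_cells))
            estimates.append(in_state / n_cells)
        mean = sum(estimates) / n_trials
        var = sum((e - mean) ** 2 for e in estimates) / n_trials
        return mean, var ** 0.5

    rng = random.Random(0)
    mean_small, spread_small = estimate_state_probability(0.3, n_cells=20,
                                                          n_trials=2000, rng=rng)
    _, spread_large = estimate_state_probability(0.3, n_cells=2000,
                                                 n_trials=2000, rng=rng)
    # Binomial theory: std ≈ sqrt(p(1-p)/n), so 20 cells is ~10x noisier
    # than 2000 cells.
    ```

    Treating the observed proportion as a random variable with this spread, rather than as the true probability, is exactly the shift in viewpoint the paper argues for.
    
    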

  4. A high-power two stage traveling-wave tube amplifier

    International Nuclear Information System (INIS)

    Shiffler, D.; Nation, J.A.; Schachter, L.; Ivers, J.D.; Kerslick, G.S.

    1991-01-01

    Results are presented on the development of a two-stage, high-efficiency, high-power 8.76-GHz traveling-wave tube amplifier. The work presented augments previously reported data on a single-stage amplifier and presents new data on the operational characteristics of two identical amplifiers operated in series and separated from each other by a sever. Peak powers of 410 MW have been obtained over the complete pulse duration of the device, with a conversion efficiency from the electron beam to microwave energy of 45%. In all operating conditions the severed amplifier showed a ''sideband''-like structure in the frequency spectrum of the microwave radiation. A similar structure was apparent at output powers in excess of 70 MW in the single-stage device. The frequencies of the ''sidebands'' are not symmetric with respect to the center frequency. The maximum single-frequency average output power was 210 MW, corresponding to an amplifier efficiency of 24%. Simulation data are also presented indicating that the short amplifiers used in this work exhibit significant differences in behavior from conventional low-power amplifiers. These include finite-length effects on the gain characteristics, which may account for the observed narrow bandwidth of the amplifiers and for the appearance of the sidebands. It is also found that the bunching length for the beam may be a significant fraction of the total amplifier length

  5. Randomization tests

    CERN Document Server

    Edgington, Eugene

    2007-01-01

    Contents: Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani...

  6. Biogas production of Chicken Manure by Two-stage fermentation process

    Science.gov (United States)

    Liu, Xin Yuan; Wang, Jing Jing; Nie, Jia Min; Wu, Nan; Yang, Fang; Yang, Ren Jie

    2018-06-01

    This paper reports a batch experiment on pre-acidification treatment and methane production from chicken manure by a two-stage anaerobic fermentation process. Results show that acetate was the main component of the volatile fatty acids produced at the end of the pre-acidification stage, accounting for 68% of the total amount. Daily biogas production experienced three peak periods during the methane production stage; the methane content reached 60% in the second period and then slowly declined to 44.5% in the third. The cumulative methane production was fitted by the modified Gompertz equation, and the kinetic parameters (methane production potential, maximum methane production rate and lag phase time) were 345.2 ml, 0.948 ml/h and 343.5 h, respectively. A methane yield of 183 ml-CH4/g-VSremoved during the methane production stage and a VS removal efficiency of 52.7% for the whole fermentation process were achieved.
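
    The modified Gompertz fit mentioned above can be sketched with a standard nonlinear least-squares call. The three parameter values (345.2 ml, 0.948 ml/h, 343.5 h) come from the abstract; the time grid, noise level, and starting guesses are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

E = np.e

def gompertz(t, P, Rm, lam):
    """Modified Gompertz model for cumulative methane production M(t):
    P = production potential (ml), Rm = max rate (ml/h), lam = lag (h)."""
    return P * np.exp(-np.exp(Rm * E / P * (lam - t) + 1.0))

# synthetic "measurements" built from the reported parameters plus noise
t = np.linspace(0.0, 1200.0, 120)
rng = np.random.default_rng(0)
m_obs = gompertz(t, 345.2, 0.948, 343.5) + rng.normal(0.0, 2.0, t.size)

popt, _ = curve_fit(gompertz, t, m_obs, p0=(300.0, 1.0, 300.0))
P_hat, Rm_hat, lam_hat = popt
```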

  7. Two-stage high frequency pulse tube refrigerator with base temperature below 10 K

    Science.gov (United States)

    Chen, Liubiao; Wu, Xianlin; Liu, Sixue; Zhu, Xiaoshuang; Pan, Changzhao; Guo, Jia; Zhou, Yuan; Wang, Junjie

    2017-12-01

    This paper introduces our recent experimental results on pulse tube refrigerators driven by linear compressors. The working frequency is 23-30 Hz, much higher than that of G-M type coolers (the developed cryocooler is therefore called a high-frequency pulse tube refrigerator, HPTR, in this paper). To achieve a temperature below 10 K, two types of two-stage configuration, gas-coupled and thermal-coupled, have been designed, built and tested. At present, both types can achieve a no-load temperature below 10 K using only one compressor. For the gas-coupled HPTR, the second stage achieves a cooling power of 16 mW at 10 K when a 400 mW heat load is applied to the first stage at 60 K, with a total input power of 400 W. For the thermal-coupled HPTR, the designed cooling power of the first stage is 10 W at 80 K, and the second stage can then reach a temperature below 10 K with a total input power of 300 W. In the current preliminary experiment, liquid nitrogen is used in place of the first-stage coaxial configuration as the precooling stage, and a no-load temperature of 9.6 K can be achieved with a stainless-steel mesh regenerator. Using Er3Ni spheres with diameters of about 50-60 microns, simulation results show that it is possible to achieve a temperature below 8 K. The configurations, phase shifters and regenerative materials of the two types of two-stage high-frequency pulse tube refrigerator are discussed, and some typical experimental results and considerations for achieving better performance are also presented.

  8. The Stages of Change in Smoking Cessation in a Representative Sample of Korean Adult Smokers

    OpenAIRE

    Jhun, Hyung-Joon; Seo, Hong-Gwan

    2006-01-01

    This study reports the stages of change in smoking cessation in a representative sample of Korean adult smokers. The study subjects, all adult smokers (n=2,422), were recruited from the second Korea National Health and Nutrition Examination Survey conducted in 2001. The stages of change were categorized using demographic (age and sex), socioeconomic (education, residence, and household income), and smoking characteristics (age at smoking onset, duration of smoking, and number of cigarettes sm...

  9. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method, M/M/1→M/D/1, to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. A sensitivity analysis of four main parameters (quantity of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels) is also presented to provide guidance for designing urban drainage systems.
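
    As a sketch of the M/M/1→M/D/1 tandem structure itself (not the paper's cost model or genetic algorithm), the overall sojourn-time objective can be estimated by direct simulation using the Lindley recursion; the arrival and service rates below are made-up, chosen only to keep both stages stable:

```python
import numpy as np

def tandem_sojourn(lam=0.8, mu=1.2, d=0.7, n=200_000, seed=1):
    """Mean total sojourn time in an M/M/1 -> M/D/1 tandem queue (FIFO,
    infinite buffers): stage 1 has exponential service at rate mu; its
    departures feed stage 2, which has deterministic service time d."""
    rng = np.random.default_rng(seed)
    arrivals = np.cumsum(rng.exponential(1.0 / lam, n))
    svc1 = rng.exponential(1.0 / mu, n)
    dep1 = np.empty(n)
    busy_until = 0.0
    for i in range(n):
        # Lindley recursion: start when both the customer and server are free
        busy_until = max(arrivals[i], busy_until) + svc1[i]
        dep1[i] = busy_until
    dep2 = np.empty(n)
    busy_until = 0.0
    for i in range(n):
        busy_until = max(dep1[i], busy_until) + d
        dep2[i] = busy_until
    return float(np.mean(dep2 - arrivals))

mean_sojourn = tandem_sojourn()
```

    For these rates the analytic value is about 3.6 (2.5 for the M/M/1 stage, plus roughly 1.15 for the M/D/1 stage, whose input is Poisson by Burke's theorem), which the simulation reproduces.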

  10. Excitations in a Two-Dimensional Random Antiferromagnet

    DEFF Research Database (Denmark)

    Birgeneau, R. J.; Walker, L. R.; Guggenheim, H. J.

    1975-01-01

    Inelastic neutron scattering studies of the magnetic excitations in the planar Heisenberg random antiferromagnet Rb2Mn0.5Ni0.5F4 at 7K are reported. Two well-defined bands of excitations are observed. A simple mean crystal model is found to predict accurately the measured dispersion relations using...

  11. LOD score exclusion analyses for candidate QTLs using random population samples.

    Science.gov (United States)

    Deng, Hong-Wen

    2003-11-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach for exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed and if an LOD score is ≤ -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of the Vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.
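
    The exclusion rule (exclude the locus if the LOD score is at most -2) can be illustrated with a toy likelihood comparison; the normal model and the hypothesized effect below are illustrative stand-ins, not the paper's QTL likelihood:

```python
import numpy as np

def lod_score(loglik_h1, loglik_h0):
    """LOD = log10 of the likelihood ratio between the hypothesized
    model H1 (e.g. a candidate QTL with a specified heritability)
    and the null model H0."""
    return (loglik_h1 - loglik_h0) / np.log(10.0)

def is_excluded(lod, threshold=-2.0):
    """Exclusion rule from the abstract: LOD <= -2 excludes the locus
    from having an effect as large as the one specified under H1."""
    return lod <= threshold

def normal_loglik(x, mean):
    return float(np.sum(-0.5 * (x - mean) ** 2 - 0.5 * np.log(2 * np.pi)))

# toy data carrying no effect (mean 0), while H1 posits a large effect
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 50)
lod = lod_score(normal_loglik(x, 2.0), normal_loglik(x, 0.0))
```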

  12. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    Science.gov (United States)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. Hydrogeological properties are assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared with three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
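
    A minimal sketch of the LULHS idea: stratified (Latin Hypercube) standard normals are correlated with the lower-triangular factor of the target covariance matrix, here an assumed 1-D exponential covariance model (a Cholesky factorization, the LU analogue for symmetric positive-definite matrices). The grid, sill, and correlation length are made-up values:

```python
import numpy as np
from statistics import NormalDist

def lhs_standard_normal(n_samples, n_vars, seed=0):
    """Latin Hypercube Sample pushed through the normal inverse CDF:
    each variable receives exactly one draw from each of n_samples
    equal-probability strata, shuffled across realizations."""
    rng = np.random.default_rng(seed)
    inv_cdf = NormalDist().inv_cdf
    z = np.empty((n_samples, n_vars))
    for j in range(n_vars):
        # one stratified uniform per bin, randomly ordered over samples
        u = (rng.permutation(n_samples) + rng.uniform(size=n_samples)) / n_samples
        z[:, j] = [inv_cdf(p) for p in u]
    return z

def lulhs_realizations(coords, sill=1.0, corr_len=10.0, n_real=500, seed=0):
    """Unconditional LULHS: multiply LHS normals by the transpose of the
    lower-triangular factor of an exponential covariance model."""
    dist = np.abs(coords[:, None] - coords[None, :])
    cov = sill * np.exp(-dist / corr_len)
    L = np.linalg.cholesky(cov)
    return lhs_standard_normal(n_real, len(coords), seed) @ L.T

fields = lulhs_realizations(np.arange(20.0))  # each row is one realization
```

    The stratification only changes how the uncorrelated normals are drawn; the spatial correlation is imposed entirely by the factorized covariance, so the empirical correlation of neighboring cells tracks the model value exp(-1/10) ≈ 0.90.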

  13. The effect of homogenization pressure and stages on the amounts of Lactic and Acetic acids of probiotic yoghurt

    Directory of Open Access Journals (Sweden)

    R Massoud

    2014-12-01

    Nowadays the use of probiotic products, especially yogurt, has become popular worldwide owing to their notable health properties. In this study, the effect of homogenization pressure (100, 150 and 200 bar) and stages (single and two) on the amounts of lactic and acetic acids was investigated. Yoghurts were manufactured from low-fat milk treated by high-pressure homogenization at 100, 150 and 200 bar and at 60°C. The amounts of lactic and acetic acids were determined after 1, 7, 14 and 21 days of storage at 4ºC. The experiments were set up using a completely randomized design. With increasing pressure and number of homogenization stages, the amounts of both acids increased (p<0.01). The greatest amounts of lactic and acetic acids during the storage period were observed in the sample homogenized at a pressure of 200 bar in two stages.

  14. The Dirichlet-Multinomial model for multivariate randomized response data and small samples

    NARCIS (Netherlands)

    Avetisyan, Marianna; Fox, Gerardus J.A.

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The

  15. Two-stage Catalytic Reduction of NOx with Hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Umit S. Ozkan; Erik M. Holmgreen; Matthew M. Yung; Jonathan Halter; Joel Hiltner

    2005-12-21

    A two-stage system for the catalytic reduction of NO from lean-burn natural gas reciprocating engine exhaust is investigated. Each of the two stages uses a distinct catalyst. The first stage is oxidation of NO to NO{sub 2} and the second stage is reduction of NO{sub 2} to N{sub 2} with a hydrocarbon. The central idea is that since NO{sub 2} is a more easily reduced species than NO, it should be better able to compete with oxygen for the combustion reaction of hydrocarbon, which is a challenge in lean conditions. Early work focused on demonstrating that the N{sub 2} yield obtained when NO{sub 2} was reduced was greater than when NO was reduced. NO{sub 2} reduction catalysts were designed and silver supported on alumina (Ag/Al{sub 2}O{sub 3}) was found to be quite active, able to achieve 95% N{sub 2} yield in 10% O{sub 2} using propane as the reducing agent. The design of a catalyst for NO oxidation was also investigated, and a Co/TiO{sub 2} catalyst prepared by sol-gel was shown to have high activity for the reaction, able to reach equilibrium conversion of 80% at 300 C at GHSV of 50,000h{sup -1}. After it was shown that NO{sub 2} could be more easily reduced to N{sub 2} than NO, the focus shifted to developing a catalyst that could use methane as the reducing agent. The Ag/Al{sub 2}O{sub 3} catalyst was tested and found to be inactive for NOx reduction with methane. Through iterative catalyst design, a palladium-based catalyst on a sulfated-zirconia support (Pd/SZ) was synthesized and shown to be able to selectively reduce NO{sub 2} in lean conditions using methane. Development of catalysts for the oxidation reaction also continued and higher activity, as well as stability in 10% water, was observed on a Co/ZrO{sub 2} catalyst, which reached equilibrium conversion of 94% at 250 C at the same GHSV. The Co/ZrO{sub 2} catalyst was also found to be extremely active for oxidation of CO, ethane, and propane, which could potentially eliminate the need for any separate

  16. Validation of two complementary oral-health related quality of life indicators (OIDP and OSS 0-10) in two qualitatively distinct samples of the Spanish population

    Directory of Open Access Journals (Sweden)

    Albaladejo A

    2008-11-01

    Background: Oral health-related quality of life can be assessed positively, by measuring satisfaction with the mouth, or negatively, by measuring oral impact on the performance of daily activities. The study objective was to validate two complementary indicators, i.e., the OIDP (Oral Impacts on Daily Performances) and the Oral Satisfaction 0-10 Scale (OSS), in two qualitatively different socio-demographic samples of the Spanish adult population, and to analyse the factors affecting both perspectives of well-being. Methods: A cross-sectional study was performed, recruiting a Validation Sample from randomly selected Health Centres in Granada (Spain), representing the general population (n = 253), and a Working Sample (n = 561) randomly selected from active Regional Government staff, i.e., representing the more privileged end of the socio-demographic spectrum of this reference population. All participants were examined according to WHO methodology and completed an in-person interview on their oral impacts and oral satisfaction using the OIDP and OSS 0-10, respectively. The reliability and validity of the two indicators were assessed. An alternative method of describing the causes of oral impacts is presented. Results: The reliability coefficient (Cronbach's alpha) of the OIDP was above the recommended 0.7 threshold in both the Validation and Working samples (0.79 and 0.71, respectively). Test-retest analysis confirmed the external reliability of the OSS (Intraclass Correlation Coefficient, 0.89). Conclusion: OIDP and OSS are valid and reliable subjective measures of oral impacts and oral satisfaction, respectively, in an adult Spanish population. Exploring these issues simultaneously may provide useful insights into how satisfaction and impact on well-being are constructed.

  17. The global stability of a delayed predator-prey system with two stage-structure

    International Nuclear Information System (INIS)

    Wang Fengyan; Pang Guoping

    2009-01-01

    Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system in which both prey and predator have two stages, an immature stage and a mature stage. The time delays are the times from birth to maturity of the prey and predator species. Results on the global asymptotic stability of nonnegative equilibria of the delayed system are given, which generalize earlier results and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  18. Experimental and numerical studies on two-stage combustion of biomass

    Energy Technology Data Exchange (ETDEWEB)

    Houshfar, Eshan

    2012-07-01

    In this thesis, two-stage combustion of biomass was investigated experimentally and numerically in a multifuel reactor. The following emission issues have been the main focus of the work: (1) NOx and N2O; (2) unburnt species (CO and CxHy); (3) corrosion-related emissions. The study focused on two-stage combustion in order to reduce pollutant emissions (primarily NOx emissions). It is well known that pollutant emissions are very dependent on process conditions such as temperature, reactant concentrations and residence times. On the other hand, emissions also depend on fuel properties (moisture content, volatiles, alkali content, etc.). A detailed study of the important parameters with suitable biomass fuels was performed in order to optimize the various process conditions. Different experimental studies were carried out on biomass fuels to study the effect of fuel properties and combustion parameters on pollutant emissions. Process conditions typical of biomass combustion were studied, using advanced experimental equipment. The experiments clearly showed the effects of staged air combustion, compared to non-staged combustion, on the emission levels. A NOx reduction of up to 85% was reached with staged air combustion using demolition wood as fuel. An optimum primary excess air ratio of 0.8-0.95 was found to minimize NOx emissions in staged air combustion. Air staging had, however, a negative effect on N2O emissions. Even though the trends showed a very small reduction in the NOx level as temperature increased for non-staged combustion, the effect of temperature was not significant for NOx and CxHy in either staged or non-staged combustion, while it had a great influence on the N2O and CO emissions, whose levels decreased with increasing temperature. Furthermore, flue gas recirculation (FGR) was used in combination with staged combustion to obtain an enhanced NOx reduction.
The

  19. Articulating spacers used in two-stage revision of infected hip and knee prostheses abrade with time.

    Science.gov (United States)

    Fink, Bernd; Rechtenbach, Annett; Büchner, Hubert; Vogt, Sebastian; Hahn, Michael

    2011-04-01

    Articulating spacers used in two-stage revision surgery of infected prostheses have the potential to abrade and subsequently induce third-body wear of the new prosthesis. We asked whether particulate material abraded from spacers could be detected in the synovial membrane 6 weeks after implantation, when the spacers were removed for the second stage of the revision. Sixteen hip spacers (a cemented prosthesis stem articulating with a cement cup) and four knee spacers (customized mobile cement spacers) were explanted 6 weeks after implantation, and the synovial membranes were removed at the same time. The membranes were examined by x-ray fluorescence spectroscopy and x-ray diffraction for the presence of abraded particles originating from the spacer material, and analyzed semiquantitatively by inductively coupled plasma mass spectrometry. Histologic analyses also were performed. We found zirconium dioxide in substantial amounts in all samples, and in the specimens of the hip synovial lining we detected particles that originated from the metal heads of the spacers. Histologically, zirconium oxide particles were seen in the synovial membrane of every spacer, and bone cement particles in one knee and two hip spacers. The observations suggest cement spacers do abrade within 6 weeks. Given the presence of abrasion debris, we recommend total synovectomy and extensive lavage during the second-stage reimplantation surgery to minimize the number of abraded particles and any retained bacteria.

  20. Correlation between Cervical Vertebral Maturation Stages and Dental Maturation in a Saudi Sample

    Directory of Open Access Journals (Sweden)

    Nayef H Felemban

    2017-01-01

    Background: The aim of the present study was to compare the cervical vertebral maturation stages method and dental maturity using tooth calcification stages. Methods: The study comprised 405 subjects selected from orthodontic patients of Saudi origin attending the clinics of specialized dental centers in the western region of Saudi Arabia. Dental age was assessed according to the developmental stages of the upper and lower third molars, and skeletal maturation according to the cervical vertebral maturation stage method. Statistical analysis was done using the Kruskal-Wallis H test, Mann-Whitney U test, chi-square test, t-test and Spearman correlation coefficient for intergroup comparison. Results: The females were younger than the males in all cervical stages. CS1-CS2 covers the period before the peak of growth, CS3-CS5 spans the pubertal growth spurt, and CS6 is the period after the peak of growth. The mean ages and standard deviations for cervical stages CS2, CS3 and CS4 were 12.09 ±1.72, 13.19 ±1.62 and 14.88 ±1.52 years, respectively. The Spearman correlation coefficients between cervical vertebral and dental maturation were between 0.166 and 0.612 and between 0.243 and 0.832 for the two sexes for the upper and lower third molars, respectively. All coefficients were significant at the 0.01 and 0.05 levels. Conclusion: The results of this study showed that skeletal maturity increased with dental age for both genders. Earlier skeletal maturation was observed in females. This study needs further analysis using a larger sample covering the entire dentition.
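
    The Spearman rank correlation used in the study is a one-call computation; the paired stage scores below are invented for illustration, not the study's measurements:

```python
from scipy.stats import spearmanr

# hypothetical paired maturation scores for ten patients:
# cervical vertebral stage vs. third-molar calcification stage
cervical = [1, 2, 2, 3, 3, 4, 5, 5, 6, 6]
dental = [2, 3, 4, 4, 5, 6, 6, 7, 7, 8]
rho, p_value = spearmanr(cervical, dental)  # rank-based, handles ties
```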

  1. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2015-01-01

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
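
    Heckman's two-stage estimator discussed in the abstract can be sketched on simulated data: a probit fit of the selection equation, then OLS of the outcome on the regressors plus the inverse Mills ratio evaluated at the fitted index. All data-generating values below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# simulated sample-selection data (not the paper's): z is the exclusion
# restriction (it affects selection only), and the two error terms are
# correlated, which is what biases naive OLS on the selected sample
rng = np.random.default_rng(42)
n = 5000
x = rng.normal(size=n)
z = rng.normal(size=n)
u = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=n)
selected = 0.5 + 1.0 * z + u[:, 0] > 0          # selection equation
y = 1.0 + 2.0 * x + u[:, 1]                     # outcome equation

# stage 1: probit of the selection indicator on (1, z), fit by MLE
W = np.column_stack([np.ones(n), z])
q = 2 * selected - 1.0

def neg_loglik(g):
    return -np.sum(norm.logcdf(q * (W @ g)))

gamma_hat = minimize(neg_loglik, np.zeros(2), method="BFGS").x

# stage 2: OLS of y on (1, x, inverse Mills ratio) over selected units
xb = (W @ gamma_hat)[selected]
mills = norm.pdf(xb) / norm.cdf(xb)
X = np.column_stack([np.ones(xb.size), x[selected], mills])
beta_hat = np.linalg.lstsq(X, y[selected], rcond=None)[0]
```

    On this design the correction matters for the intercept (the outcome error is correlated with selection), and the Mills-ratio coefficient estimates rho times sigma, here 0.5.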

  2. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  3. Hybrid alkali-hydrodynamic disintegration of waste-activated sludge before two-stage anaerobic digestion process.

    Science.gov (United States)

    Grübel, Klaudiusz; Suschka, Jan

    2015-05-01

    The first step of anaerobic digestion, hydrolysis, is regarded as the rate-limiting step in the degradation of complex organic compounds such as waste-activated sludge (WAS). The aim of the lab-scale experiments was to pre-hydrolyze the sludge by low-intensity alkaline conditioning before applying hydrodynamic disintegration as the pre-treatment procedure. Applying both processes as a hybrid sludge-disintegration technology resulted in a higher release of organic matter (soluble chemical oxygen demand, SCOD) into the liquid sludge phase than either process conducted separately. The total SCOD after alkalization at pH 9 (pH in the range 8.96-9.10, SCOD = 600 mg O2/L) and after hydrodynamic disintegration (SCOD = 1450 mg O2/L) equaled 2050 mg/L; however, due to the synergistic effect, the obtained SCOD value amounted to 2800 mg/L, an additional chemical oxygen demand (COD) dissolution of about 35%. A similar synergistic effect was obtained after alkalization at pH 10. The applied hybrid pre-hydrolysis technology resulted in a disintegration degree of 28-35%. The experiments aimed at selecting the most appropriate procedures in terms of optimal sludge digestion results, including high organic matter degradation (removal) and high biogas production. The analyzed soft hybrid technology positively influenced the effectiveness of mesophilic/thermophilic anaerobic digestion and ensured sludge minimization. The adopted pre-treatment technology (alkalization + hydrodynamic cavitation) resulted in 22-27% higher biogas production and 13-28% higher biogas yield. After two stages of anaerobic digestion (mesophilic anaerobic digestion (MAD) + thermophilic anaerobic digestion (TAD)), the highest total solids (TS) reduction amounted to 45.6% and was obtained for the sample at 7 days MAD + 17 days TAD. About 7% higher TS reduction was noticed compared with the sample after 9

  4. Assessing differences in groups randomized by recruitment chain in a respondent-driven sample of Seattle-area injection drug users.

    Science.gov (United States)

    Burt, Richard D; Thiede, Hanne

    2014-11-01

    Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), having had an HIV test in the previous 12 months (51% vs. 38%), and serologic HIV status, where the difference was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.
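
    A comparison of two randomized groups on a categorical variable reduces to a chi-square test on a contingency table. The counts below are assumed for illustration, chosen to roughly match the reported 4% vs. 18% HIV-positive proportions (the abstract reports only percentages, not group sizes):

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical 2x2 table: rows = randomized groups,
# columns = (HIV-positive, HIV-negative) counts
table = np.array([[10, 240],    # group 1: ~4% positive
                  [45, 205]])   # group 2: ~18% positive
chi2, p_value, dof, expected = chi2_contingency(table)
```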

  5. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism

    Science.gov (United States)

    Yang, B.; Guo, X.; Wang, Q. H.; Lu, C. F.; Hu, D.

    2018-04-01

    The design, simulation, fabrication, and testing of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Unlike conventional detection methods for flow sensors, two differential resonators are adopted to transduce the air flow rate through two-stage leverage magnification. The proposed flow sensor has high sensitivity, since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimensions of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of the resonance frequency. The sensor, fabricated by a standard deep dry silicon-on-glass process, has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)2 at a resonant frequency of 22 kHz for a hair height of 9 mm, and that the sensitivity increases by a factor of 2.42 as the hair height extends from 3 mm to 9 mm. A detection limit of 3.23 mm/s air flow amplitude at 60 Hz is also confirmed. The proposed flow sensor has great application prospects in micro-autonomous systems and technology, self-stabilizing micro air vehicles, and environmental monitoring.
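
    The sensitivity unit Hz/(m/s)2 implies a quadratic resonance shift with flow speed. A sketch of the forward response and its inversion, assuming an exactly quadratic characteristic with no drift (an idealization, not the paper's calibration):

```python
F0_HZ = 22_000.0   # resonant frequency reported in the abstract
S = 7.41           # mechanical sensitivity, Hz/(m/s)^2, for the 9 mm hair

def frequency_shift(v):
    """Resonance shift (Hz) for flow speed v (m/s): |df| = S * v**2."""
    return S * v ** 2

def flow_speed(delta_f_hz):
    """Invert the quadratic response to recover flow speed from a
    measured shift; sign and calibration drift are ignored here."""
    return (abs(delta_f_hz) / S) ** 0.5
```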

  6. Frequency of hepatitis E virus, rotavirus and porcine enteric calicivirus at various stages of pork carcass processing in two pork processing plants.

    Science.gov (United States)

    Jones, Tineke H; Muehlhauser, Victoria

    2017-10-16

    Hepatitis E virus (HEV), rotavirus (RV), and porcine enteric calicivirus (PEC) infections are common in swine and raise concerns about the potential for zoonotic transmission through undercooked meat products. Enteric viruses can potentially contaminate carcasses during meat processing operations. There is a lack of information on the prevalence and control of enteric viruses in the pork processing chain. This study compared the incidence and levels of contamination of hog carcasses with HEV, RV and PEC at different stages of the dressing process. A total of 1000 swabs were collected from 2 pork processing plants on 10 separate occasions over the span of a year. The samples were obtained from random sites on hog carcasses at 4 dressing stages (plant A: bleeding, dehairing, pasteurization, and evisceration; plant B: bleeding, skinning, evisceration, and washing) and from meat cuts. Numbers of genome copies (gc) of HEV, RV and PEC were determined by RT-qPCR. RV and PEC were detected in 100% and 18% of samples, respectively, after bleeding for plant A, and in 98% and 36% of samples, respectively, after bleeding for plant B. After evisceration, RV and PEC were detected in 21% and 3% of samples, respectively, for plant A, and in 1% and 0% of samples, respectively, for plant B. RV and PEC were detected on 1% and 5% of pork cuts, respectively, for plant A, and on 0% and 0% of pork cuts, respectively, for plant B. HEV was not detected in any pork carcass or retail pork samples from plants A or B. The frequency of PEC and RV on pork was progressively reduced along the pork processing chain, but the viruses were not completely eliminated. The findings suggest that consumers could be at risk when consuming undercooked meat contaminated with pathogenic enteric viruses. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  7. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice and on populations that are accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling methods (such as a simple random sample or a stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use purposive sampling for the study.
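
    The two probability-sampling designs named above (simple random sample and stratified random sample) can be contrasted in a few lines; the sampling frame and stratum labels below are invented:

```python
import random

# toy sampling frame with an urban/rural stratum label
population = [{"id": i, "stratum": "urban" if i % 3 else "rural"}
              for i in range(300)]

def simple_random_sample(frame, n, seed=0):
    """Probability sampling: every unit has the same selection chance."""
    return random.Random(seed).sample(frame, n)

def stratified_random_sample(frame, n_per_stratum, seed=0):
    """Draw an independent simple random sample within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(unit["stratum"], []).append(unit)
    sample = []
    for units in strata.values():
        sample.extend(rng.sample(units, n_per_stratum))
    return sample

srs = simple_random_sample(population, 60)
strat = stratified_random_sample(population, 30)
```

    The stratified draw guarantees the urban/rural split of the sample, whereas the simple random sample only matches it in expectation.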

  8. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice, or on populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as the simple random sample or the stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  9. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early detection of targets and high-precision target tracking are two important performance indicators that need to be balanced in a practical target search and tracking system. This paper proposes a target tracking system with a preliminary and precise two-stage compound camera arrangement. The system uses a large field of view to perform the target search. After the target is found and confirmed, the system switches to a small field of view for two-field-of-view target tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This combination of preliminary and precise two-stage compound cameras extends the search range and improves the target tracking accuracy, and the method has practical value.
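
    As a rough illustration of the two-gain-group idea, the sketch below selects between a coarse PID gain set for the wide field of view and a fine gain set for the narrow one. All gains and the switch threshold are invented for illustration; they are not taken from the paper:

```python
class PID:
    """Textbook PID controller; gains here are illustrative only."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Two gain groups: coarse for the wide-field search stage, fine for narrow-field tracking
COARSE = PID(0.8, 0.05, 0.1)
FINE = PID(2.0, 0.2, 0.3)
SWITCH_THRESHOLD = 0.05   # hypothetical field-switch criterion (normalized error)

def controller_output(err, dt):
    """Pick the gain group according to how close the target is to boresight."""
    pid = FINE if abs(err) < SWITCH_THRESHOLD else COARSE
    return pid.update(err, dt)
```

    Large errors (target far from center, wide field of view) drive the coarse loop; once the error falls below the threshold, the fine loop with stiffer gains takes over.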

  10. Divergent methylation pattern in adult stage between two forms of Tetranychus urticae (Acari: Tetranychidae).

    Science.gov (United States)

    Yang, Si-Xia; Guo, Chao; Zhao, Xiu-Ting; Sun, Jing-Tao; Hong, Xiao-Yue

    2017-02-19

    The two-spotted spider mite, Tetranychus urticae Koch has two forms: green form and red form. Understanding the molecular basis of how these two forms established without divergent genetic background is an intriguing area. As a well-known epigenetic process, DNA methylation has particularly important roles in gene regulation and developmental variation across diverse organisms that do not alter genetic background. Here, to investigate whether DNA methylation could be associated with different phenotypic consequences in the two forms of T. urticae, we surveyed the genome-wide cytosine methylation status and expression level of DNA methyltransferase 3 (Tudnmt3) throughout their entire life cycle. Methylation-sensitive amplification polymorphism (MSAP) analyses of 585 loci revealed variable methylation patterns in the different developmental stages. In particular, principal coordinates analysis (PCoA) indicates a significant epigenetic differentiation between female adults of the two forms. The gene expression of Tudnmt3 was detected in all examined developmental stages, which was significantly different in the adult stage of the two forms. Together, our results reveal the epigenetic distance between the two forms of T. urticae, suggesting that DNA methylation might be implicated in different developmental demands, and contribute to different phenotypes in the adult stage of these two forms. © 2017 Institute of Zoology, Chinese Academy of Sciences.

  11. Phase 1b randomized trial and follow-up study in Uganda of the blood-stage malaria vaccine candidate BK-SE36.

    Science.gov (United States)

    Palacpac, Nirianne Marie Q; Ntege, Edward; Yeka, Adoke; Balikagala, Betty; Suzuki, Nahoko; Shirai, Hiroki; Yagi, Masanori; Ito, Kazuya; Fukushima, Wakaba; Hirota, Yoshio; Nsereko, Christopher; Okada, Takuya; Kanoi, Bernard N; Tetsutani, Kohhei; Arisue, Nobuko; Itagaki, Sawako; Tougan, Takahiro; Ishii, Ken J; Ueda, Shigeharu; Egwang, Thomas G; Horii, Toshihiro

    2013-01-01

    Up to now a malaria vaccine remains elusive. The Plasmodium falciparum serine repeat antigen-5 formulated with aluminum hydroxyl gel (BK-SE36) is a blood-stage malaria vaccine candidate that has undergone phase 1a trial in malaria-naive Japanese adults. We have now assessed the safety and immunogenicity of BK-SE36 in a malaria endemic area in Northern Uganda. We performed a two-stage, randomized, single-blinded, placebo-controlled phase 1b trial (Current Controlled trials ISRCTN71619711). A computer-generated sequence randomized healthy subjects for 2 subcutaneous injections at 21-day intervals in Stage1 (21-40 year-olds) to 1-mL BK-SE36 (BKSE1.0) (n = 36) or saline (n = 20) and in Stage2 (6-20 year-olds) to BKSE1.0 (n = 33), 0.5-mL BK-SE36 (BKSE0.5) (n = 33), or saline (n = 18). Subjects and laboratory personnel were blinded. Safety and antibody responses 21-days post-second vaccination (Day42) were assessed. Post-trial, to compare the risk of malaria episodes 130-365 days post-second vaccination, Stage2 subjects were age-matched to 50 control individuals. Nearly all subjects who received BK-SE36 had induration (Stage1, n = 33, 92%; Stage2, n = 63, 96%) as a local adverse event. No serious adverse event related to BK-SE36 was reported. Pre-existing anti-SE36 antibody titers negatively correlated with vaccination-induced antibody response. At Day42, change in antibody titers was significant for seronegative adults (1.95-fold higher than baseline [95% CI, 1.56-2.43], p = 0.004) and 6-10 year-olds (5.71-fold [95% CI, 2.38-13.72], p = 0.002) vaccinated with BKSE1.0. Immunogenicity response to BKSE0.5 was low and not significant (1.55-fold [95% CI, 1.24-1.94], p = 0.75). In the ancillary analysis, cumulative incidence of first malaria episodes with ≥5000 parasites/µL was 7 cases/33 subjects in BKSE1.0 and 10 cases/33 subjects in BKSE0.5 vs. 29 cases/66 subjects in the control group. Risk ratio for BKSE1.0 was 0.48 (95% CI, 0

  12. Hydrodeoxygenation of oils from cellulose in single and two-stage hydropyrolysis

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, J.D.; Snape, C.E. [Strathclyde Univ., Glasgow (United Kingdom); Luengo, C.A. [Universidade Estadual de Campinas, SP (Brazil). Dept. de Fisica Aplicada

    1996-09-01

    To investigate the removal of oxygen (hydrodeoxygenation) during the hydropyrolysis of cellulose, single and two-stage experiments on pure cellulose have been carried out using hydrogen pressures up to 10 MPa and temperatures over the range 300-520°C. Carbon, oxygen and aromaticity balances have been determined from the product yields and compositions. For the two-stage tests, the primary oils were passed through a bed of commercial Ni/Mo γ-alumina-supported catalyst (Criterion 424, presulphided) at 400°C. Raising the hydrogen pressure from atmospheric to 10 MPa increased the carbon conversion by 10 mole %, which was roughly equally divided between the oil and hydrocarbon gases. The oxygen content of the primary oil was reduced by over 10% to below 20% w/w. The addition of a dispersed iron sulphide catalyst further increased the oil yield at 10 MPa and reduced the oxygen content of the oil by a further 10%. The effect of hydrogen pressure on oil yields was most pronounced at low flow rates, where it is beneficial in helping to overcome diffusional resistances. Unlike the dispersed iron sulphide in the first stage, the use of the Ni-Mo catalyst in the second stage reduced both the oxygen content and the aromaticity of the oils. (Author)

  13. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
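
    A minimal numerical sketch of the constrained-sampling idea, using the simple conditioning formula for a single point constraint (sample an unconstrained realization, then add a covariance-weighted correction so the field takes a prescribed value at one lattice site). The grid size, squared-exponential covariance kernel, and constrained value are all assumptions for illustration, not the algorithm of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.linspace(0.0, 1.0, n)
# Squared-exponential covariance on a 1D lattice (hypothetical kernel choice)
C = np.exp(-0.5 * (x[:, None] - x[None, :])**2 / 0.1**2) + 1e-10 * np.eye(n)
L = np.linalg.cholesky(C)

def sample_constrained(k, value):
    """Draw an unconstrained Gaussian field, then correct it so f[k] == value.

    The correction C[:, k] / C[k, k] * (value - f[k]) is the conditional-mean
    shift for a single linear (point) constraint.
    """
    f = L @ rng.standard_normal(n)   # unconstrained realization with covariance C
    return f + C[:, k] / C[k, k] * (value - f[k])

# e.g. force a 2.5-sigma 'peak' at the middle of the lattice
f_c = sample_constrained(32, 2.5)
```

    Away from the constrained site the field relaxes back to an ordinary realization over roughly one correlation length, which is exactly the behaviour one wants for placing a rare peak or void into N-body initial conditions.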

  14. An efficient method of randomly sampling the coherent angular scatter distribution

    International Nuclear Information System (INIS)

    Williamson, J.F.; Morin, R.L.

    1983-01-01

    Monte Carlo simulations of photon transport phenomena require random selection of an interaction process at each collision site along the photon track. Possible choices are usually limited to photoelectric absorption and incoherent scatter as approximated by the Klein-Nishina distribution. A technique is described for sampling the coherent angular scatter distribution, for the benefit of workers in medical physics. (U.K.)
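
    The generic way to draw angles from a tabulated distribution is inverse-transform sampling through the cumulative distribution. The sketch below uses a Thomson-like (1 + cos²θ) shape as a stand-in for the actual form-factor-weighted coherent cross-section, which the abstract does not give:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tabulated angular distribution p(theta) ~ (1 + cos^2 theta),
# standing in for a real coherent-scatter cross-section; sin(theta) is the
# solid-angle weight for a polar angle on the sphere.
theta = np.linspace(0.0, np.pi, 1000)
pdf = (1.0 + np.cos(theta)**2) * np.sin(theta)
cdf = np.cumsum(pdf)
cdf = cdf / cdf[-1]          # normalize to a proper CDF

def sample_angles(n):
    """Inverse-transform sampling: map uniform deviates through the tabulated CDF."""
    u = rng.random(n)
    return np.interp(u, cdf, theta)

angles = sample_angles(100_000)
```

    Because the table is precomputed once, each sample costs only a uniform deviate and a binary-search interpolation, which is what makes the approach attractive inside a Monte Carlo transport loop.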

  15. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice, or on populations that are accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. Random sampling methods (such as the simple random sample or the stratified random sample) are forms of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  16. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is limited to temperatures between its triple point and its critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, therefore the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, together with some experimental results of the adsorption-compression two-stage cycle powered by solar collectors. The adsorption system is applied as the high-temperature cycle. The low-temperature cycle is the compression stage with carbon dioxide as the working fluid. This makes it possible to achieve a relatively high COP for the low-temperature cycle and for the whole system.

  17. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through part-based HOG... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the HOG part-based approach, tracking based on specific optimized features, and porting to a real prototype.

  18. Anaerobic digestion of citrus waste using two-stage membrane bioreactor

    Science.gov (United States)

    Millati, Ria; Lukitawesa; Dwi Permanasari, Ervina; Wulan Sari, Kartika; Nur Cahyanto, Muhammad; Niklasson, Claes; Taherzadeh, Mohammad J.

    2018-03-01

    Anaerobic digestion is a promising method to treat citrus waste. However, the presence of limonene in citrus waste inhibits the anaerobic digestion process. Limonene is an antimicrobial compound and can inhibit the methane-forming bacteria, which take longer to recover than the injured acid-forming bacteria. Hence, volatile fatty acids accumulate and methane production decreases. One way to solve this problem is to carry out the anaerobic digestion process in two stages. The first stage is aimed at the hydrolysis, acidogenesis, and acetogenesis reactions, and the second stage at the methanogenesis reaction. Separating the stages further allows each to run at its optimum conditions, making the process more stable. In this research, anaerobic digestion was carried out in batch mode using 120-ml glass-bottle bioreactors in two stages. The first stage was performed in a free-cells bioreactor, whereas the second stage was performed both in a free-cells bioreactor and in a membrane bioreactor. In the first stage, the reactor was set to ‘anaerobic’ and ‘semi-aerobic’ conditions to examine the effect of oxygen on facultative anaerobic bacteria in acid production. In the second stage, the protection the membrane affords the cells against limonene was tested. For the first stage, the basal medium was prepared with 1.5 g VS of inoculum and 4.5 g VS of citrus waste. The digestion process was carried out at 55°C for four days. For the second stage, the membrane bioreactor was prepared with 3 g of cells encased and sealed in a 3×6 cm² polyvinylidene fluoride membrane. The medium contained 40 ml basal medium and 10 ml liquid from the first stage. The bioreactors were incubated at 55°C for 2 days under anaerobic conditions. The results from the first stage showed that the maximum total sugar under ‘anaerobic’ and ‘semi-aerobic’ conditions was 294.3 g/l and 244.7 g/l, respectively. The corresponding values for total volatile

  19. Is the continuous two-stage anaerobic digestion process well suited for all substrates?

    Science.gov (United States)

    Lindner, Jonas; Zielonka, Simon; Oechsner, Hans; Lemmer, Andreas

    2016-01-01

    Two-stage anaerobic digestion systems are often considered advantageous compared to one-stage processes. Although process conditions and fermenter setups are well examined, overall substrate degradation in these systems is controversially discussed. Therefore, the aim of this study was to investigate how substrates with different fibre and sugar contents (hay/straw, maize silage, sugar beet) influence the degradation rate and methane production. Intermediates and gas compositions, as well as methane yields and VS-degradation degrees, were recorded. The sugar beet substrate led to a larger pH-value drop (to 5.67) in the acidification reactor, which resulted in a six times higher hydrogen production in comparison to the hay/straw substrate (pH-value drop to 5.34). The achieved yields in the two-stage system showed a difference of 70.6% for the hay/straw substrate but only 7.8% for the sugar beet substrate. Therefore, two-stage systems seem to be recommendable only for digesting sugar-rich substrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Assessing efficiency and effectiveness of Malaysian Islamic banks: A two stage DEA analysis

    Science.gov (United States)

    Kamarudin, Norbaizura; Ismail, Wan Rosmanira; Mohd, Muhammad Azri

    2014-06-01

    Islamic banks in Malaysia are indispensable players in the financial industry, given the growing need for a syariah-compliant system. In the banking industry, most recent studies have concentrated only on operational efficiency, and rarely on operational effectiveness. Since the production process of the banking industry can be described as a two-stage process, two-stage Data Envelopment Analysis (DEA) can be applied to measure bank performance. This study was designed to measure the overall performance, in terms of efficiency and effectiveness, of Islamic banks in Malaysia using a two-stage DEA approach. This paper presents the analysis of a DEA model which splits efficiency and effectiveness in order to evaluate the performance of ten selected Islamic banks in Malaysia for the financial year ended 2011. The analysis shows that the average efficiency score is higher than the average effectiveness score, so Malaysian Islamic banks can be said to be more efficient than effective. Furthermore, none of the banks exhibits best practice in both stages; a bank with better efficiency does not always have better effectiveness at the same time.
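
    One stage of such an analysis can be sketched as an input-oriented CCR envelopment linear program, solved here with SciPy. The bank data below (two inputs, one output, five decision-making units) are made up for illustration; the paper's actual inputs and outputs are not given in the abstract:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for 5 bank-like DMUs: inputs (staff, assets), output (loans)
X = np.array([[20.0, 300], [30, 500], [40, 400], [25, 350], [50, 600]])  # one row per DMU
Y = np.array([[100.0], [150], [160], [120], [180]])

def ccr_efficiency(j):
    """Input-oriented CCR LP for DMU j: min theta s.t. X'lam <= theta*x_j, Y'lam >= y_j, lam >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                           # minimize theta
    A_ub = np.block([
        [-X[j][:, None], X.T],                            # X.T @ lam - theta * x_j <= 0
        [np.zeros((Y.shape[1], 1)), -Y.T],                # -Y.T @ lam <= -y_j
    ])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[j]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

scores = [ccr_efficiency(j) for j in range(X.shape[0])]
```

    A score of 1 marks a DMU on the efficient frontier; a score below 1 gives the proportional input reduction that would bring it onto the frontier. A two-stage study runs such a model once per stage, with the first stage's outputs feeding the second stage as inputs.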

  1. Efficiency of primary care in rural Burkina Faso. A two-stage DEA analysis.

    Science.gov (United States)

    Marschall, Paul; Flessa, Steffen

    2011-07-20

    Providing health care services in Africa is hampered by severe scarcity of personnel, medical supplies and financial funds. Consequently, managers of health care institutions are called to measure and improve the efficiency of their facilities in order to provide the best possible services with their resources. However, very little is known about the efficiency of health care facilities in Africa, and instruments of performance measurement are hardly applied in this context. This study determines the relative efficiency of primary care facilities in Nouna, a rural health district in Burkina Faso. Furthermore, it analyses the factors influencing the efficiency of these institutions. We apply a two-stage Data Envelopment Analysis (DEA) based on data from a comprehensive provider and household information system. In the first stage, the relative efficiency of each institution is calculated by a traditional DEA model. In the second stage, we identify the reasons for being inefficient by regression techniques. The DEA projections suggest that inefficiency is mainly a result of poor utilization of health care facilities, as they were either too big or the demand was too low. Regression results showed that distance is an important factor influencing the efficiency of a health care institution. Compared to the findings of existing one-stage DEA analyses of health facilities in Africa, the share of relatively efficient units is slightly higher. The difference might be explained by a rather homogenous structure of the primary care facilities in the Burkina Faso sample. The study also indicates that improving the accessibility of primary care facilities will have a major impact on the efficiency of these institutions. Thus, health decision-makers are called to overcome the demand-side barriers in accessing health care.

  2. The correlation between concept mastery and stage of moral reasoning student using socio-scientific issues on reproductive system material

    Science.gov (United States)

    Lestari, T. A.; Saefudin; Priyandoko, D.

    2018-05-01

    This research aims to analyze the correlation between concept mastery and the moral reasoning stages of students. The research method is a correlational study with a stratified random sampling technique. The population in this research is all eleventh-grade students in senior high schools in Bandung. Data were collected from 297 eleventh-grade students of three senior high schools in Bandung using an instrument consisting of an examination and a stage-of-moral-reasoning questionnaire. The stage of moral reasoning in this research consists of two categories of students' moral reasoning, based on a 16-item questionnaire with indicators from Jones et al. (2007). The results of this research show that the average moral reasoning stage of the eleventh-grade students is the advanced stage, and that concept mastery and the stage of moral reasoning have a positive correlation of 0.370. This research provides an overview of eleventh-grade students' concept mastery and stage of moral reasoning using socio-scientific issues.

  3. Engineering analysis of the two-stage trifluoride precipitation process

    International Nuclear Information System (INIS)

    Luerkens, D.w.W.

    1984-06-01

    An engineering analysis of two-stage trifluoride precipitation processes is developed. Precipitation kinetics are modeled using consecutive reactions to represent fluoride complexation. Material balances across the precipitators are used to model the time dependent concentration profiles of the main chemical species. The results of the engineering analysis are correlated with previous experimental work on plutonium trifluoride and cerium trifluoride

  4. Recent developments of a two-stage light gas gun for pellet injection

    International Nuclear Information System (INIS)

    Reggiori, A.

    1984-01-01

    A report is given on a two-stage pneumatic gun, operated with ambient air as the first-stage driver, which has been built and tested. Cylindrical polyethylene pellets of 1 mm diameter and 1 mm length have been launched at velocities up to 1800 m/s, with divergence angles of the pellet trajectory of less than 1°. It is possible to optimize the pressure pulse for pellets of different masses simply by changing the mass of the piston and/or the initial pressures in the second stage. (author)

  5. A scaling analysis of electronic localization in two-dimensional random media

    International Nuclear Information System (INIS)

    Ye Zhen

    2003-01-01

    Using an improved scaling analysis, we suggest that two possibilities may arise concerning electronic localization in two-dimensional random media. The first is that all electronic states are localized in two dimensions, as conjectured previously. The second possibility is that electronic behaviors in two- and three-dimensional random systems are similar, in agreement with a recent calculation based on a direct computation of the conductance with the use of the Kubo formula. In this case, non-localized states are possible in two dimensions and have some peculiar properties. A few predictions are proposed. Moreover, the present analysis accommodates results from the previous scaling analysis.

  6. Considerations Regarding Age at Surgery and Fistula Incidence Using One- and Two-stage Closure for Cleft Palate

    Directory of Open Access Journals (Sweden)

    Simona Stoicescu

    2013-12-01

    Full Text Available Introduction: Although cleft lip and palate (CLP) is one of the most common congenital malformations, occurring in 1 in 700 live births, there is still no generally accepted treatment protocol. Numerous surgical techniques have been described for cleft palate repair; these techniques can be divided into one-stage (one operation) cleft palate repair and two-stage cleft palate closure. The aim of this study is to present our cleft palate team's experience in using two-stage cleft palate closure and the clinical outcomes in terms of oronasal fistula rate. Material and methods: A retrospective analysis was performed on the medical records of 80 patients who underwent palate repair over a five-year period, from 2008 to 2012. All cleft palate patients were included. Information on each patient's gender, cleft type, age at repair, and one- or two-stage cleft palate repair was collected and analyzed. Results: Fifty-three (66%) and twenty-seven (34%) patients underwent two-stage and one-stage repair, respectively. According to the Veau classification, more than 60% of them were Veau III and IV, associating cleft lip with cleft palate. Fistula occurred in 34% of the two-stage repairs versus 7% of one-stage repairs, with an overall incidence of 24%. Conclusions: Our study has shown that two-stage cleft palate closure has a higher rate of fistula formation when compared with one-stage repair. Two-stage repair is the protocol of choice in wide complete cleft lip and palate cases, while the one-stage procedure is a good option for cleft palate alone or some specific cleft lip and palate cases (narrow cleft palate, older age at surgery).

  7. Two-stage acid saccharification of fractionated Gelidium amansii minimizing the sugar decomposition.

    Science.gov (United States)

    Jeong, Tae Su; Kim, Young Soo; Oh, Kyeong Keun

    2011-11-01

    Two-stage acid hydrolysis was conducted on the easily reacting cellulose and the resistant reacting cellulose of fractionated Gelidium amansii (f-GA). Acid hydrolysis of f-GA was performed at between 170 and 200 °C for a period of 0-5 min and an acid concentration of 2-5% (w/v, H2SO4) to determine the optimal conditions for acid hydrolysis. In the first stage of the acid hydrolysis, an optimum glucose yield of 33.7% was obtained at a reaction temperature of 190 °C, an acid concentration of 3.0%, and a reaction time of 3 min. In the second stage, a glucose yield of 34.2%, on the basis of the amount of residual cellulose from the f-GA, was obtained at a temperature of 190 °C, a sulfuric acid concentration of 4.0%, and a reaction time of 3.7 min. Finally, 68.58% of the cellulose derived from f-GA was converted into glucose through two-stage acid saccharification under the aforementioned conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. IPP-rich milk protein hydrolysate lowers blood pressure in subjects with stage 1 hypertension, a randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Kloek Joris

    2010-11-01

    Full Text Available Abstract Background Milk-derived peptides have been identified as potential antihypertensive agents. The primary objective was to investigate the effectiveness of IPP-rich milk protein hydrolysates (MPH) in reducing blood pressure (BP), as well as to investigate safety parameters and tolerability. The secondary objective was to confirm or falsify ACE inhibition as the mechanism underlying BP reductions by measuring plasma renin activity and angiotensin I and II. Methods We conducted a randomized, placebo-controlled, double-blind, crossover study including 70 Caucasian subjects with prehypertension or stage 1 hypertension. Study treatments consisted of daily consumption of two capsules of MPH1 (each containing 7.5 mg Isoleucine-Proline-Proline; IPP), MPH2 (each containing 6.6 mg Methionine-Alanine-Proline, 2.3 mg Leucine-Proline-Proline, and 1.8 mg IPP), or placebo (containing cellulose) for 4 weeks. Results In subjects with stage 1 hypertension, MPH1 lowered systolic BP by 3.8 mm Hg (P = 0.0080) and diastolic BP by 2.3 mm Hg (P = 0.0065) compared with placebo. In prehypertensive subjects, the differences in BP between MPH1 and placebo were not significant. MPH2 did not change BP significantly compared with placebo in stage 1 hypertensive or prehypertensive subjects. Intake of MPHs was well tolerated and safe. No treatment differences in hematology, clinical laboratory parameters or adverse effects were observed. No significant differences between MPHs and placebo were found in plasma renin activity or angiotensin I and II. Conclusions MPH1, containing IPP and no minerals, exerts clinically relevant BP-lowering effects in subjects with stage 1 hypertension. It may be included in lifestyle changes aiming to prevent or reduce high BP. Trial registration ClinicalTrials.gov NCT00471263

  9. Two-stage nuclear refrigeration with enhanced nuclear moments

    International Nuclear Information System (INIS)

    Hunik, R.

    1979-01-01

    Experiments are described in which an enhanced nuclear system is used as a precoolant for a nuclear demagnetisation stage. The results show the promising advantages of such a system in those circumstances for which a large cooling power is required at extremely low temperatures. A theoretical review of nuclear enhancement at the microscopic level and its macroscopic thermodynamical consequences is given. The experimental equipment for the implementation of the nuclear enhanced refrigeration method is described, and the experiments on two-stage nuclear demagnetisation are discussed. With the nuclear enhanced system PrCu6 the author could precool a nuclear stage of indium in a magnetic field of 6 T down to temperatures below 10 mK; this resulted in temperatures below 1 mK after demagnetisation of the indium. It is demonstrated that the interaction energy between the nuclear moments in an enhanced nuclear system can exceed the nuclear dipolar interaction. Several experiments are described on pulsed nuclear magnetic resonance, as utilised for thermometry purposes. It is shown that platinum NMR thermometry gives very satisfactory results around 1 mK. The results of experiments on nuclear orientation of radioactive nuclei, e.g. the brute-force polarisation of 95NbPt and 60CoCu, are presented, some of which are of major importance for thermometry in the milli-Kelvin region. (Auth.)

  10. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    Science.gov (United States)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Anaerobic digestion systems should be used for the processing of organic waste. Managing the process of anaerobic recycling of organic waste requires reliable prediction of biogas production. Developing a mathematical model of the organic waste digestion process makes it possible to determine the rate of biogas output in the two-stage anaerobic digestion process, taking the first stage into account. Verification of Konto's model, based on the studied anaerobic processing of organic waste, is carried out. The dependences of biogas output and its rate on time are established and may be used to predict the course of anaerobic processing of organic waste.
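
    The shape of such time dependences can be sketched with a standard first-order kinetic model, a common simplification in biogas modelling. The parameters below are invented for illustration, and this is not necessarily the form of Konto's model used in the paper:

```python
import numpy as np

# Hypothetical parameters for a first-order cumulative-yield model
B_max = 450.0   # ultimate biogas yield, L per kg VS (assumed)
k = 0.15        # first-order rate constant, 1/day (assumed)

def biogas_yield(t):
    """Cumulative biogas yield B(t) = B_max * (1 - exp(-k t))."""
    return B_max * (1.0 - np.exp(-k * t))

def biogas_rate(t):
    """Instantaneous production rate dB/dt = k * B_max * exp(-k t)."""
    return k * B_max * np.exp(-k * t)

t = np.linspace(0.0, 30.0, 301)   # digestion time, days
yield_curve = biogas_yield(t)
rate_curve = biogas_rate(t)
```

    The yield rises toward the plateau B_max while the rate decays exponentially; fitting k and B_max to measured data is what turns such a model into a prediction tool for digester management.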

  11. A pilot randomized trial assessing the effects of autogenic training in early stage cancer patients in relation to psychological status and immune system responses.

    Science.gov (United States)

    Hidderley, Margaret; Holt, Martin

    2004-03-01

    Autogenic training (AT) is a type of meditation usually used for reducing stress. This pilot study describes how AT was used with a group of early-stage cancer patients and the observed effects on stress-related behaviours and immune system responses. This was a randomized trial with 31 women with early-stage breast cancer who had received a lumpectomy and adjuvant radiotherapy. The women were randomized into two groups. Group 1 received a home visit only. Group 2 received a home visit and 2 months of weekly autogenic training. At the beginning and end of the 2-month period, the Hospital Anxiety and Depression Scale (HADS) and T and B cell markers were measured to give an indication of changes in immune system responses and to measure anxiety and depression. At the end of the study, HADS scores and T and B cell markers remained similar in the women who did not receive AT. The women receiving AT showed a statistically significant improvement in their HADS scores, and those women observed in a meditative state, as opposed to a relaxed state, were found to have an increase in their immune responses. This study suggests AT is a powerful self-help therapy.

  12. A two-stage procedure for determining unsaturated hydraulic characteristics using a syringe pump and outflow observations

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Jensen, Karsten Høgh; Hollenbeck, Karl-Josef

    1997-01-01

    A fast two-stage methodology for determining unsaturated flow characteristics is presented. The procedure builds on direct measurement of the retention characteristic using a syringe pump technique, combined with inverse estimation of the hydraulic conductivity characteristic based on one......-step outflow experiments. The direct measurements are obtained with a commercial syringe pump, which continuously withdraws fluid from a soil sample at a very low and accurate flow rate, thus providing the water content in the soil sample. The retention curve is then established by simultaneously monitoring......-step outflow data and the independently measured retention data are included in the objective function of a traditional least-squares minimization routine, providing unique estimates of the unsaturated hydraulic characteristics by means of numerical inversion of Richards equation. As opposed to what is often...

  13. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly...... of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...... pays any attention to this. In this paper, we show how various capacity and time constraints influence the performance of a specific two-stage system. We study the effects of several basic scheduling and sequencing rules in the presence of these constraints in order to learn the characteristics...

  14. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
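    The composition of accuracies that makes two-stage strategies worth modeling can be sketched for the simplest case, a serial screen in which only stage-1 positives proceed to stage 2 (a minimal illustration, not the authors' maximum likelihood ROC estimator; the sensitivity/specificity values below are hypothetical):

    ```python
    def serial_two_stage(se1, sp1, se2, sp2):
        """Overall sensitivity/specificity of a serial two-stage screen
        where a subject is called positive only if BOTH stages are positive."""
        se = se1 * se2                # a case must pass both stages
        sp = sp1 + (1 - sp1) * sp2   # a non-case is cleared at stage 1 OR stage 2
        return se, sp

    # Serial testing trades sensitivity for specificity:
    se, sp = serial_two_stage(0.95, 0.80, 0.90, 0.85)   # hypothetical stage accuracies
    ```

    This illustrates why a joint, two-stage analysis is needed: neither stage's ROC curve alone characterizes the overall classifier.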

  15. Two-Stage Load Shedding for Secondary Control in Hierarchical Operation of Islanded Microgrids

    DEFF Research Database (Denmark)

    Zhou, Quan; Li, Zhiyi; Wu, Qiuwei

    2018-01-01

    A two-stage load shedding scheme is presented to cope with the severe power deficit caused by microgrid islanding. Coordinated with the fast response of inverter-based distributed energy resources (DERs), load shedding at each stage and the resulting power flow redistribution are estimated....... The first stage of load shedding will cease rapid frequency decline in which the measured frequency deviation is employed to guide the load shedding level and process. Once a new steady-state is reached, the second stage is activated, which performs load shedding according to the priorities of loads...

  16. Comparison of two-stage thermophilic (68 degrees C/55 degrees C) anaerobic digestion with one-stage thermophilic (55 degrees C) digestion of cattle manure

    DEFF Research Database (Denmark)

    Nielsen, H.B.; Mladenovska, Zuzana; Westermann, Peter

    2004-01-01

    A two-stage 68°C/55°C anaerobic degradation process for treatment of cattle manure was studied. In batch experiments, an increase of the specific methane yield, ranging from 24% to 56%, was obtained when cattle manure and its fractions (fibers and liquid) were pretreated at 68°C......, was compared with a conventional single-stage reactor running at 55°C with 15-day HRT. When an organic loading of 3 g volatile solids (VS) per liter per day was applied, the two-stage setup had a 6% to 8% higher specific methane yield and a 9% more effective VS removal than the conventional single......-stage reactor. The 68°C reactor generated 7% to 9% of the total amount of methane of the two-stage system and maintained a volatile fatty acids (VFA) concentration of 4.0 to 4.4 g acetate per liter. Population size and activity of aceticlastic methanogens, syntrophic bacteria, and hydrolytic...

  17. Health-related quality-of-life outcomes: a reflexology trial with patients with advanced-stage breast cancer.

    Science.gov (United States)

    Wyatt, Gwen; Sikorskii, Alla; Rahbar, Mohammad Hossein; Victorson, David; You, Mei

    2012-11-01

    To evaluate the safety and efficacy of reflexology, a complementary therapy that applies pressure to specific areas of the feet. Longitudinal, randomized clinical trial. Thirteen community-based medical oncology clinics across the midwestern United States. A convenience sample of 385 predominantly Caucasian women with advanced-stage breast cancer receiving chemotherapy and/or hormonal therapy. Following the baseline interview, women were randomized into three primary groups: reflexology (n = 95), lay foot manipulation (LFM) (n = 95), or conventional care (n = 96). Two preliminary reflexology (n = 51) and LFM (n = 48) test groups were used to establish the protocols. Participants were interviewed again postintervention at study weeks 5 and 11. Breast cancer-specific health-related quality of life (HRQOL), physical functioning, and symptoms. No adverse events were reported. A longitudinal comparison revealed significant improvements in physical functioning for the reflexology group compared to the control group (p = 0.04). Severity of dyspnea was also reduced in the reflexology group compared to the control group. Reflexology may be added to existing evidence-based supportive care to improve HRQOL for patients with advanced-stage breast cancer during chemotherapy and/or hormonal therapy. Reflexology can be recommended for safety and usefulness in relieving dyspnea and enhancing functional status among women with advanced-stage breast cancer.

  18. Application of two-stage biofilter system for the removal of odorous compounds.

    Science.gov (United States)

    Jeong, Gwi-Taek; Park, Don-Hee; Lee, Gwang-Yeon; Cha, Jin-Myeong

    2006-01-01

    Biofiltration is a biological process considered to be one of the more successful examples of biotechnological applications in environmental engineering, and it is most commonly used for the removal of odoriferous compounds. In this study, we attempted to assess the efficiency with which both single and complex odoriferous compounds could be removed using one- or two-stage biofiltration systems. The tested single odor gases, limonene, alpha-pinene, and iso-butyl alcohol, were evaluated separately in the biofilters. Both limonene and alpha-pinene were removed at 90% or higher efficiency (elimination capacities, EC, of 364 g/m3/h and 321 g/m3/h, respectively) at an input concentration of 50 ppm and a retention time of 30 s. The iso-butyl alcohol was maintained at an effective removal yield of more than 90% (EC 375 g/m3/h) at an input concentration of 100 ppm. The complex gas removal scheme was applied with inlet concentrations of 200 ppm ethanol, 70 ppm acetaldehyde, and 70 ppm toluene, with a residence time of 45 s, in one- and two-stage biofiltration systems. The removal yield of toluene was lower than that of the other gases in the one-stage biofilter. In contrast, the complex gases were sufficiently eliminated by the two-stage biofiltration system.

  19. A Concept of Two-Stage-To-Orbit Reusable Launch Vehicle

    Science.gov (United States)

    Yang, Yong; Wang, Xiaojun; Tang, Yihua

    2002-01-01

    A Reusable Launch Vehicle (RLV) has the capability of delivering a wide range of payloads to earth orbit with greater reliability, lower cost, and more flexibility and operability than any of today's launch vehicles. It is the goal of future space transportation systems. Past experience with single-stage-to-orbit (SSTO) RLVs, such as NASA's NASP project, which aimed at developing a rocket-based combined-cycle (RBCC) airplane, and X-33, which aimed at developing a rocket RLV, indicates that an SSTO RLV cannot be realized in the next few years based on state-of-the-art technologies. This paper presents a concept for an all-rocket two-stage-to-orbit (TSTO) reusable launch vehicle. The TSTO RLV comprises an orbiter and a booster stage. The orbiter is mounted on top of the booster stage. The TSTO RLV takes off vertically. At an altitude of about 50 km the booster stage separates from the orbiter, returns, and lands by parachutes and airbags, or lands horizontally by means of its own propulsion system. The orbiter continues its ascent and delivers the payload into LEO. After completing its orbital mission, the orbiter reenters the atmosphere, automatically flies to the ground base, and finally lands horizontally on the runway. A TSTO RLV has fewer technological difficulties and less risk than an SSTO, and may be the practical approach to an RLV in the near future.

  20. Evaluating damping elements for two-stage suspension vehicles

    Directory of Open Access Journals (Sweden)

    Ronald M. Martinod R.

    2012-01-01

    Full Text Available The technical state of the damping elements of a vehicle having two-stage suspension was evaluated by using numerical models based on multi-body system theory; a set of virtual tests used the eigenproblem mathematical method. A test based on experimental modal analysis (EMA) applied to a physical system was developed as the basis for validating the numerical models. The study focused on evaluating vehicle dynamics to determine the influence of the dampers’ technical state in each suspension stage.

  1. Vaccines for preventing malaria (blood-stage).

    Science.gov (United States)

    Graves, P; Gelband, H

    2006-10-18

    A malaria vaccine is needed because of the heavy burden of mortality and morbidity due to this disease. This review describes the results of trials of blood (asexual)-stage vaccines. Several are under development, but only one (MSP/RESA, also known as Combination B) has been tested in randomized controlled trials. To assess the effect of blood-stage malaria vaccines in preventing infection, disease, and death. In March 2006, we searched the Cochrane Infectious Diseases Group Specialized Register, CENTRAL (The Cochrane Library 2006, Issue 1), MEDLINE, EMBASE, LILACS, and the Science Citation Index. We also searched conference proceedings and reference lists of articles, and contacted organizations and researchers in the field. Randomized controlled trials comparing blood-stage vaccines (other than SPf66) against P. falciparum, P. vivax, P. malariae, or P. ovale with placebo, control vaccine, or routine antimalarial control measures in people of any age receiving a challenge malaria infection. Both authors independently assessed trial quality and extracted data. Results for dichotomous data were expressed as relative risks (RR) with 95% confidence intervals (CI). Five trials of MSP/RESA vaccine with 217 participants were included; all five reported on safety, and two on efficacy. No severe or systemic adverse effects were reported at doses of 13 to 15 microg of each antigen (39 to 45 microg total). One small efficacy trial with 17 non-immune participants with blood-stage parasites showed no reduction or delay in parasite growth rates after artificial challenge. In the second efficacy trial in 120 children aged five to nine years in Papua New Guinea, episodes of clinical malaria were not reduced, but MSP/RESA significantly reduced parasite density only in children who had not been pretreated with an antimalarial drug (sulfadoxine-pyrimethamine). 
Infections with the 3D7 parasite subtype of MSP2 (the variant included in the vaccine) were reduced (RR 0.38, 95% CI 0.26 to
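    The relative risks with 95% confidence intervals reported in trials like this one are conventionally computed on the log scale. A minimal sketch (the event counts below are hypothetical, not this trial's data):

    ```python
    import math

    def relative_risk_ci(a, n1, c, n2, z=1.96):
        """Relative risk of an event (a/n1 in treated vs c/n2 in control)
        with a Wald confidence interval computed on the log scale."""
        rr = (a / n1) / (c / n2)
        # Standard error of log(RR) for independent binomial proportions
        se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
        lo = math.exp(math.log(rr) - z * se_log)
        hi = math.exp(math.log(rr) + z * se_log)
        return rr, lo, hi

    # Hypothetical counts: 10/100 events in the vaccine arm, 20/100 in control
    rr, lo, hi = relative_risk_ci(10, 100, 20, 100)
    ```

    An interval excluding 1.0, as for the 3D7 result quoted above, indicates a statistically significant reduction.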

  2. Determination and Variation of Core Bacterial Community in a Two-Stage Full-Scale Anaerobic Reactor Treating High-Strength Pharmaceutical Wastewater.

    Science.gov (United States)

    Ma, Haijun; Ye, Lin; Hu, Haidong; Zhang, Lulu; Ding, Lili; Ren, Hongqiang

    2017-10-28

    Knowledge of the functional characteristics and temporal variation of anaerobic bacterial populations is important for better understanding the microbial process of two-stage anaerobic reactors. However, owing to the high diversity of anaerobic bacteria, attention should be prioritized toward the frequently abundant bacteria, defined here as core bacteria and putatively functionally important. In this study, using MiSeq sequencing technology, a core bacterial community of 98 operational taxonomic units (OTUs) was determined in a two-stage upflow blanket filter reactor treating pharmaceutical wastewater. The core bacterial community accounted for 61.66% of the total sequences and predicted the sample location in the principal coordinates analysis scatter plot as accurately as the total bacterial OTUs did. The core bacterial communities in the first-stage (FS) and second-stage (SS) reactors were generally distinct, in that the FS core bacterial community was more related to a higher-level fermentation process, while the SS core bacterial community contained more microbes in syntrophic cooperation with methanogens. Moreover, the different responses of the FS and SS core bacterial communities to temperature shock and influent disturbance caused by solid contamination were fully investigated. Co-occurrence analysis at the order level implied that Bacteroidales, Selenomonadales, Anaerolineales, Synergistales, and Thermotogales might play key roles in anaerobic digestion owing to their high abundance and tight correlation with other microbes. These findings advance our knowledge of the core bacterial community and its temporal variability for future comparative research and improvement of two-stage anaerobic system operation.

  3. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    Science.gov (United States)

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  4. Theoretical and experimental investigations on the cooling capacity distributions at the stages in the thermally-coupled two-stage Stirling-type pulse tube cryocooler without external precooling

    Science.gov (United States)

    Tan, Jun; Dang, Haizheng

    2017-03-01

    The two-stage Stirling-type pulse tube cryocooler (SPTC) has the advantage of simultaneously providing cooling power at two different temperatures, and the ability to distribute these cooling capacities between the stages is significant for its practical applications. In this paper, a theoretical model of the thermally-coupled two-stage SPTC without external precooling is established based on the electric circuit analogy, considering real gas effects, and simulations of both the cooling performance and the PV power distribution between stages are conducted. The results indicate that the PV power is inversely proportional to the acoustic impedance of each stage, and that the cooling capacity distribution is determined jointly by the cold finger cooling efficiency and the PV power into each stage. Design methods for the cold fingers to achieve both the desired PV power and the desired cooling capacity distribution between the stages are summarized. The two-stage SPTC was developed and tested based on the above theoretical investigations, and the experimental results show that it can simultaneously achieve 0.69 W at 30 K and 3.1 W at 85 K with an electric input power of 330 W and a reject temperature of 300 K. The consistency between the simulated and experimental results is observed, and the theoretical investigations are thereby experimentally verified.

  5. Lingual mucosal graft two-stage Bracka technique for redo hypospadias repair

    Directory of Open Access Journals (Sweden)

    Ahmed Sakr

    2017-09-01

    Conclusion: Lingual mucosa is a reliable and versatile graft material in the armamentarium of two-stage Bracka hypospadias repair with the merits of easy harvesting and minor donor-site complications.

  6. Prednisolone and acupuncture in Bell's palsy: study protocol for a randomized, controlled trial

    Directory of Open Access Journals (Sweden)

    Wang Kangjun

    2011-06-01

    Full Text Available Abstract Background There are a variety of treatment options for Bell's palsy. Evidence from randomized controlled trials indicates corticosteroids can be used as a proven therapy for Bell's palsy. Acupuncture is one of the most commonly used methods to treat Bell's palsy in China. Recent studies suggest that staged treatment is more suitable for Bell's palsy, according to the different path-stages of this disease. The aim of this study is to compare the effects of prednisolone and staged acupuncture on recovery of the affected facial nerve, and to verify whether prednisolone in combination with staged acupuncture is more effective than prednisolone alone for Bell's palsy in a large number of patients. Methods/Design In this article, we report the design and protocol of a large-sample multi-center randomized controlled trial to treat Bell's palsy with prednisolone and/or acupuncture. In total, 1200 patients aged 18 to 75 years within 72 h of onset of acute, unilateral, peripheral facial palsy will be assessed. There are six treatment groups, four treated according to different path-stages and two not. Patients are randomly assigned to one of the following six treatment groups: (1) placebo prednisolone; (2) prednisolone; (3) placebo prednisolone plus acute-stage acupuncture; (4) prednisolone plus acute-stage acupuncture; (5) placebo prednisolone plus resting-stage acupuncture; (6) prednisolone plus resting-stage acupuncture. The primary outcome is the time to complete recovery of facial function, assessed by the Sunnybrook system and the House-Brackmann scale. The secondary outcomes include the incidence of ipsilateral pain in the early stage of palsy (and the duration of this pain), the proportion of patients with severe pain, the occurrence of synkinesis, facial spasm or contracture, and the severity of residual facial symptoms during the study period.
Discussion The result of this trial will assess the

  7. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    Science.gov (United States)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into the equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have an attractive property: they are fast and stable. Tests show that the method works up to the parameter value CFL, so any value of the parameter between zero and this value is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special parameter values 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give desirable accuracy. The errors and CPU times of these schemes and of a Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
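    The convex-combination flux construction described above can be illustrated on a scalar conservation law (a sketch using Burgers' equation as a stand-in model, not the paper's two-phase system; `theta` plays the role of the combination parameter, with `theta=0` giving the Lax-Friedrichs flux and `theta=1` the Richtmyer flux):

    ```python
    import numpy as np

    def flux(u):
        """Burgers flux f(u) = u^2/2, used here as a simple stand-in model."""
        return 0.5 * u * u

    def combined_flux(uL, uR, dx, dt, theta):
        """Convex combination of Lax-Friedrichs and Richtmyer numerical fluxes."""
        f_lf = 0.5 * (flux(uL) + flux(uR)) - 0.5 * (dx / dt) * (uR - uL)
        u_mid = 0.5 * (uL + uR) - 0.5 * (dt / dx) * (flux(uR) - flux(uL))
        f_rm = flux(u_mid)
        return (1 - theta) * f_lf + theta * f_rm

    def step(u, dx, dt, theta):
        """One conservative update: u_i -= dt/dx * (F_{i+1/2} - F_{i-1/2})."""
        F = combined_flux(u[:-1], u[1:], dx, dt, theta)
        u = u.copy()
        u[1:-1] -= (dt / dx) * (F[1:] - F[:-1])
        return u
    ```

    A constant (equilibrium) state is preserved exactly by this update, which is the discrete analogue of the well-balanced property the paper targets.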

  8. Two-stage pervaporation process for effective in situ removal acetone-butanol-ethanol from fermentation broth.

    Science.gov (United States)

    Cai, Di; Hu, Song; Miao, Qi; Chen, Changjing; Chen, Huidong; Zhang, Changwei; Li, Ping; Qin, Peiyong; Tan, Tianwei

    2017-01-01

    Two-stage pervaporation for ABE recovery from fermentation broth was studied to reduce the energy cost. The permeate from the first-stage in situ pervaporation system was further used as the feedstock for the second-stage pervaporation unit using the same PDMS/PVDF membrane. A total of 782.5 g/L of ABE (304.56 g/L of acetone, 451.98 g/L of butanol, and 25.97 g/L of ethanol) was achieved in the second-stage permeate, while the overall acetone, butanol, and ethanol separation factors were 70.7-89.73, 70.48-84.74, and 9.05-13.58, respectively. Furthermore, the theoretical evaporation energy requirement for ABE separation in the consolidated fermentation, which contains the two-stage pervaporation and the following distillation process, was estimated at less than ∼13.2 MJ/kg-butanol. The required evaporation energy was only 36.7% of the energy content of butanol. The novel two-stage pervaporation process was effective in increasing ABE production and reducing the energy consumption of the solvent separation system.
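    The separation factors quoted above follow the standard pervaporation definition, the ratio of component odds in permeate versus feed. A minimal sketch (the fractions below are hypothetical, not the study's measurements):

    ```python
    def separation_factor(x_feed, y_permeate):
        """Standard pervaporation separation factor for a target solvent:
        alpha = (y / (1 - y)) / (x / (1 - x)),
        where x and y are the solvent mass fractions in feed and permeate."""
        return (y_permeate / (1 - y_permeate)) / (x_feed / (1 - x_feed))

    # Hypothetical: 1 wt% butanol in the broth enriched to 50 wt% in the permeate
    alpha = separation_factor(0.01, 0.5)
    ```

    Chaining two stages multiplies the enrichment, which is how permeate concentrations far above the broth concentration are reached.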

  9. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions.
Our measure
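    For context, the classical design-effect route that this article improves upon can be sketched as follows. This uses the widely cited approximation DE = 1 + ((cv² + 1)·m̄ − 1)·ICC for variable cluster sizes from the cluster-trial literature, not the authors' noncentrality-parameter measure; all numbers in the usage line are hypothetical:

    ```python
    import math

    def design_effect(mean_m, cv, icc):
        """Approximate design effect for a cluster randomized trial with
        variable cluster sizes: mean size mean_m, coefficient of variation
        of cluster size cv, and intracluster correlation icc."""
        return 1 + ((cv**2 + 1) * mean_m - 1) * icc

    def clusters_per_arm(n_individual, mean_m, cv, icc):
        """Inflate an individually randomized sample size by the design
        effect, then convert to a number of clusters per arm."""
        return math.ceil(n_individual * design_effect(mean_m, cv, icc) / mean_m)

    # Hypothetical: 128 subjects/arm needed without clustering,
    # mean cluster size 20, equal sizes (cv=0), ICC 0.05
    k = clusters_per_arm(128, 20, 0.0, 0.05)
    ```

    Setting cv > 0 shows directly how cluster-size variability inflates the required sample size, which is the phenomenon the noncentrality-based measure quantifies more precisely.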

  10. Double sampling with multiple imputation to answer large sample meta-research questions: Introduction and illustration by evaluating adherence to two simple CONSORT guidelines

    Directory of Open Access Journals (Sweden)

    Patrice L. Capers

    2015-03-01

    Full Text Available BACKGROUND: Meta-research can involve manual retrieval and evaluation of research, which is resource intensive. Creation of high throughput methods (e.g., search heuristics, crowdsourcing has improved feasibility of large meta-research questions, but possibly at the cost of accuracy. OBJECTIVE: To evaluate the use of double sampling combined with multiple imputation (DS+MI to address meta-research questions, using as an example adherence of PubMed entries to two simple Consolidated Standards of Reporting Trials (CONSORT guidelines for titles and abstracts. METHODS: For the DS large sample, we retrieved all PubMed entries satisfying the filters: RCT; human; abstract available; and English language (n=322,107. For the DS subsample, we randomly sampled 500 entries from the large sample. The large sample was evaluated with a lower rigor, higher throughput (RLOTHI method using search heuristics, while the subsample was evaluated using a higher rigor, lower throughput (RHITLO human rating method. Multiple imputation of the missing-completely-at-random RHITLO data for the large sample was informed by: RHITLO data from the subsample; RLOTHI data from the large sample; whether a study was an RCT; and country and year of publication. RESULTS: The RHITLO and RLOTHI methods in the subsample largely agreed (phi coefficients: title=1.00, abstract=0.92. Compliance with abstract and title criteria has increased over time, with non-US countries improving more rapidly. DS+MI logistic regression estimates were more precise than subsample estimates (e.g., 95% CI for change in title and abstract compliance by Year: subsample RHITLO 1.050-1.174 vs. DS+MI 1.082-1.151. As evidence of improved accuracy, DS+MI coefficient estimates were closer to RHITLO than the large sample RLOTHI. CONCLUSIONS: Our results support our hypothesis that DS+MI would result in improved precision and accuracy. 
This method is flexible and may provide a practical way to examine large corpora of
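    The DS+MI idea, using the double-sampled subsample to impute the missing high-rigor (RHITLO) labels for the large sample, can be sketched in a deliberately simplified form. This toy version conditions on a single binary low-rigor (RLOTHI) predictor and draws imputation probabilities from a Beta posterior, rather than the full logistic-regression imputation model described above; all data in the test are synthetic:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def impute_rhitlo(rlothi_large, rlothi_sub, rhitlo_sub, n_imputations=5):
        """Multiple imputation of missing RHITLO labels for the large sample.
        P(RHITLO=1 | RLOTHI=v) is estimated on the subsample; each imputation
        uses a fresh Beta(k+1, n-k+1) posterior draw (uniform prior) so that
        estimation uncertainty propagates into the imputed datasets."""
        imputations = []
        for _ in range(n_imputations):
            probs = np.empty_like(rlothi_large, dtype=float)
            for v in (0, 1):
                mask = rlothi_sub == v
                k, n = rhitlo_sub[mask].sum(), mask.sum()
                p = rng.beta(k + 1, n - k + 1)   # posterior draw of P(RHITLO=1 | RLOTHI=v)
                probs[rlothi_large == v] = p
            imputations.append(rng.random(len(rlothi_large)) < probs)
        return imputations
    ```

    Analyses would then be run on each imputed dataset and pooled with the usual multiple-imputation combining rules.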

  11. Two-Stage Residual Inclusion Estimation in Health Services Research and Health Economics.

    Science.gov (United States)

    Terza, Joseph V

    2018-06-01

    Empirical analyses in health services research and health economics often require implementation of nonlinear models whose regressors include one or more endogenous variables, that is, regressors that are correlated with the unobserved random component of the model. In such cases, conventional regression methods that ignore endogeneity will likely produce results that are biased and not causally interpretable. Terza et al. (2008) discuss a relatively simple estimation method that avoids endogeneity bias and is applicable in a wide variety of nonlinear regression contexts. They call this method two-stage residual inclusion (2SRI). In the present paper, I offer a 2SRI how-to guide for practitioners and a step-by-step protocol that can be implemented with any of the popular statistical or econometric software packages. I introduce the protocol and its Stata implementation in the context of a real data example, then discuss implementation of 2SRI for a very broad class of nonlinear models and give additional examples. I analyze cigarette smoking as a determinant of infant birthweight using data from Mullahy (1997). It is hoped that the discussion will serve as a practical guide to implementation of the 2SRI protocol for applied researchers.
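    The two stages of 2SRI can be sketched in a few lines: regress the endogenous variable on instruments and exogenous covariates, then include the first-stage residual as an extra regressor in the outcome model. This is a minimal sketch with a linear second stage for illustration (the paper's focus is nonlinear second stages, e.g. probit or Poisson; function and variable names are mine, not the paper's):

    ```python
    import numpy as np

    def two_stage_residual_inclusion(y, x_endog, w, z):
        """2SRI sketch.
        Stage 1: regress the endogenous regressor x_endog on instruments z
        and exogenous covariates w; keep the residual.
        Stage 2: regress y on x_endog, w, and the stage-1 residual.
        Including the residual controls for the unobserved confounder."""
        n = len(y)
        ones = np.ones((n, 1))
        X1 = np.hstack([ones, w, z])
        b1, *_ = np.linalg.lstsq(X1, x_endog, rcond=None)
        resid = x_endog - X1 @ b1
        X2 = np.hstack([ones, x_endog.reshape(-1, 1), w, resid.reshape(-1, 1)])
        b2, *_ = np.linalg.lstsq(X2, y, rcond=None)
        return b2   # b2[1] is the bias-corrected effect of x_endog
    ```

    On simulated data with a common unobserved shock in both equations, naive OLS is biased while the residual-inclusion estimate recovers the true coefficient.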

  12. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.

  13. EOG and EMG: two important switches in automatic sleep stage classification.

    Science.gov (United States)

    Estrada, E; Nazeran, H; Barragan, J; Burk, J R; Lucas, E A; Behbehani, K

    2006-01-01

    Sleep is a natural periodic state of rest for the body, in which the eyes are usually closed and consciousness is completely or partially lost. In this investigation we used EOG and EMG signals acquired from 10 patients undergoing overnight polysomnography, with their sleep stages determined by expert sleep specialists based on R&K rules. Differentiating between Stage 1, Awake, and REM stages challenged a well-trained neural network classifier when only EEG-derived signal features were used. To meet this challenge and improve the classification rate, extra features extracted from the EOG and EMG signals were fed to the classifier. In this study, two simple feature extraction algorithms were applied to the EOG and EMG signals. The statistics of the results were calculated and displayed in an easy-to-visualize fashion to observe tendencies for each sleep stage. Inclusion of these features shows great promise for improving the classification rate toward the target of 100%.
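    Simple per-epoch features of the kind described can be computed in a few lines. This is a generic sketch (the paper does not specify its two algorithms; signal energy and zero-crossing rate are my stand-in choices):

    ```python
    import numpy as np

    def eog_emg_features(signal):
        """Two simple per-epoch features that could supplement EEG inputs
        for sleep staging: signal energy (variance after mean removal) and
        zero-crossing rate (a crude dominant-frequency proxy)."""
        x = np.asarray(signal, dtype=float)
        x = x - x.mean()
        variance = x.var()
        # Fraction of adjacent sample pairs whose signs differ
        zcr = np.mean(np.signbit(x[:-1]) != np.signbit(x[1:]))
        return variance, zcr
    ```

    Such scalars, one pair per 30-second epoch per channel, can be appended to the EEG feature vector fed to the classifier.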

  14. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening out insignificant random variables and ranking significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from tests of hypothesis, to classify random variables as significant or insignificant. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables
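    The flavor of a CDF-based, sampling-driven sensitivity measure can be sketched as follows. This is an illustrative stand-in (a Kolmogorov-Smirnov-type distance between conditional output distributions), not the paper's specific measures or acceptance limits; the model and data are synthetic:

    ```python
    import numpy as np

    def cdf_sensitivity(x, y):
        """A simple sampling-based, CDF-type sensitivity measure: the
        Kolmogorov-Smirnov distance between the output distributions
        conditional on the input being below vs above its median.
        Values near 0 suggest the input is insignificant."""
        lo, hi = y[x <= np.median(x)], y[x > np.median(x)]
        grid = np.sort(y)
        F_lo = np.searchsorted(np.sort(lo), grid, side="right") / len(lo)
        F_hi = np.searchsorted(np.sort(hi), grid, side="right") / len(hi)
        return np.max(np.abs(F_lo - F_hi))

    # Synthetic model: y depends strongly on x1 and not at all on x2
    rng = np.random.default_rng(2)
    x1 = rng.normal(size=2000)
    x2 = rng.normal(size=2000)
    y = 3.0 * x1 + 0.01 * rng.normal(size=2000)
    ```

    Because the same random samples are reused for every input, the cost does not grow with the number of variables, which is the property the abstract highlights.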

  15. Improvement of two-stage GM refrigerator performance using a hybrid regenerator

    International Nuclear Information System (INIS)

    Ke, G.; Makuuchi, H.; Hashimoto, T.; Onishi, A.; Li, R.; Satoh, T.; Kanazawa, Y.

    1994-01-01

    To improve the performance of two-stage GM refrigerators, a hybrid regenerator with the magnetic materials Er3Ni and ErNi0.9Co0.1 was used in the 2nd-stage regenerator because of its large heat exchange capacity. The largest refrigeration capacity achieved with the hybrid regenerator was 0.95 W at the liquid-helium temperature of 4.2 K. This capacity is 15.9% greater than the 0.82 W achieved with Er3Ni alone as the 2nd-stage regenerator material. Use of the hybrid regenerator not only increases the refrigeration capacity at 4.2 K, but also allows the 4 K GM refrigerator to be used with a large 1st-stage refrigeration capacity, thus making it more practical.

  16. NxStage dialysis system-associated thrombocytopenia: a report of two cases.

    Science.gov (United States)

    Sekkarie, Mohamed; Waldron, Michelle; Reynolds, Texas

    2016-01-01

    Thrombocytopenia in hemodialysis patients has recently been reported to be commonly caused by electron-beam sterilization of dialysis filters. We report the occurrence of thrombocytopenia in the first two patients of a newly established home hemodialysis program. The two patients switched from conventional hemodialysis using polysulfone electron-beam-sterilized dialyzers to a NxStage system, which uses gamma-sterilized polyethersulfone dialyzers incorporated into a drop-in cartridge. The thrombocytopenia resolved after return to conventional dialysis in both patients and recurred upon rechallenge in the patient who opted to retry NxStage. To the authors' knowledge, this is the first report of thrombocytopenia with the NxStage system. The pathophysiology and clinical significance of dialysis-associated thrombocytopenia are not well understood and warrant additional investigation.

  17. Evaluation of the effect of one stage versus two stage full mouth disinfection on C-reactive protein and leucocyte count in patients with chronic periodontitis.

    Science.gov (United States)

    Pabolu, Chandra Mohan; Mutthineni, Ramesh Babu; Chintala, Srikanth; Naheeda; Mutthineni, Navya

    2013-07-01

    Conventional non-surgical periodontal therapy is carried out on a quadrant basis with a 1-2 week interval. This time lag may result in re-infection of the instrumented pockets and may impair healing. Therefore, a new approach to full-mouth non-surgical therapy, completed within two consecutive days with full-mouth disinfection (FMD), has been suggested. In periodontitis, leukocyte counts and levels of C-reactive protein (CRP) are likely to be slightly elevated, indicating the presence of infection or inflammation. The aim of this study is to compare the efficacy of one-stage and two-stage non-surgical therapy on clinical parameters along with CRP levels and total white blood cell (TWBC) count. A total of 20 patients were selected and divided into two groups. Group 1 received one-stage FMD and Group 2 received two-stage FMD. Plaque index, sulcus bleeding index, probing depth, clinical attachment loss, serum CRP and TWBC count were evaluated for both groups at baseline and at 1 month post-treatment. The results were analyzed using the Student t-test. Both treatment modalities led to a significant improvement in the clinical and hematological parameters; however, comparison between the two groups showed no significant difference after 1 month. The therapeutic intervention may have a systemic effect on blood counts in periodontitis patients. Though one-stage FMD had limited benefits over two-stage FMD, the therapy can be accomplished in a shorter duration.

  18. A Two-Stage Fuzzy Logic Control Method of Traffic Signal Based on Traffic Urgency Degree

    OpenAIRE

    Yan Ge

    2014-01-01

    City intersection traffic signal control is an important method to improve the efficiency of the road network and alleviate traffic congestion. This paper investigates a fuzzy traffic signal control method for a single intersection. A two-stage traffic signal control method based on traffic urgency degree is proposed, built on two-stage fuzzy inference at a single intersection. At the first stage, calculate the traffic urgency degree for all red phases using the traffic urgency evaluation module and select t...
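
Since the abstract is truncated, the following is only a hedged sketch of what the stage-one urgency computation might look like: triangular memberships over queue length and waiting time, a few illustrative rules, and centroid defuzzification. All membership breakpoints, rule outputs, and phase names are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def urgency(queue, wait):
    """Stage-1 sketch: fuzzy urgency degree of a red phase from queue
    length (vehicles) and waiting time (s), defuzzified by the weighted
    average of three illustrative rule outputs (0.2, 0.5, 0.9)."""
    q_long = tri(queue, 5, 15, 25)
    w_long = tri(wait, 30, 60, 90)
    rules = [(min(q_long, w_long), 0.9),      # long queue AND long wait
             (max(q_long, w_long), 0.5),      # either one is long
             (1 - max(q_long, w_long), 0.2)]  # neither is long
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

# Stage 2 would then grant green to the red phase with the highest urgency.
phases = {'NS-left': urgency(18, 70), 'EW-through': urgency(4, 20)}
most_urgent = max(phases, key=phases.get)
```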

  19. Design and construction of a heat stage for investigations of samples by atomic force microscopy above ambient temperatures

    DEFF Research Database (Denmark)

    Bækmark, Thomas Rosleff; Bjørnholm, Thomas; Mouritsen, Ole G.

    1997-01-01

    The construction, from simple and cheap commercially available parts, of a miniature heat stage for the direct heating of samples studied with a commercially available optical-lever-detection atomic force microscope is reported. We demonstrate that by using this heat stage, atomic resolution can be obtained on highly oriented pyrolytic graphite at 52 °C. The heat stage is of potential use for the investigation of biological material at physiological temperatures. ©1997 American Institute of Physics.

  20. Single-staged vs. two-staged implant placement using bone ring technique in vertically deficient alveolar ridges - Part 1: histomorphometric and micro-CT analysis.

    Science.gov (United States)

    Nakahara, Ken; Haga-Tsujimura, Maiko; Sawada, Kosaku; Kobayashi, Eizaburo; Mottini, Matthias; Schaller, Benoit; Saulacic, Nikola

    2016-11-01

    Simultaneous implant placement with bone grafting shortens the overall treatment period, but might lead to peri-implant bone loss or even implant failure. The aim of this study was to compare single-staged to two-staged implant placement using the bone ring technique. Four standardized alveolar bone defects were made in the mandibles of nine dogs. Dental implants (Straumann BL®, Basel, Switzerland) were inserted simultaneously with the bone ring technique in the test group and after a 6-month healing period in the control group. Animals of both groups were euthanized at 3 and 6 months of the osseointegration period. The harvested samples were analyzed by means of histology and micro-CT. The amount of residual bone decreased while the amount of new bone increased up to 9 months of the healing period. All morphometric parameters remained stable between 3 and 6 months of the osseointegration period within groups. At a given time point, the median area of residual bone graft was higher in the test group and the area of new bone in the control group. The volume of the bone ring was greater in the test than in the control group, reaching significance at 6 months of the osseointegration period (P = 0.002). In the present type of bone defect, single-staged implant placement may be potentially useful to shorten the overall treatment period. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. A randomized phase II study of carboplatin with weekly or every-3-week nanoparticle albumin-bound paclitaxel (abraxane) in patients with extensive-stage small cell lung cancer.

    Science.gov (United States)

    Grilley-Olson, Juneko E; Keedy, Vicki L; Sandler, Alan; Moore, Dominic T; Socinski, Mark A; Stinchcombe, Thomas E

    2015-02-01

    Platinum plus etoposide is the standard therapy for extensive-stage small cell lung cancer (ES-SCLC) and is associated with significant myelosuppression. We hypothesized that the combination of carboplatin and nanoparticle albumin-bound paclitaxel (nab-paclitaxel) would be better tolerated. We investigated carboplatin with nab-paclitaxel on every-3-week and weekly schedules. This noncomparative randomized phase II trial used a two-stage design. The primary objective was objective response rate, and secondary objectives were progression-free survival, overall survival, and toxicity. Patients with ES-SCLC and an Eastern Cooperative Oncology Group performance status ≤2 and no prior chemotherapy were randomized in a 1:1 ratio to arm A (carboplatin area under the curve [AUC] of 6 on day 1 and nab-paclitaxel of 300 mg/m² on day 1 every 3 weeks) or arm B (carboplatin AUC of 6 on day 1 and nab-paclitaxel 100 mg/m² on days 1, 8, and 15 every 21 days). Response was assessed after every two cycles. Patients required frequent dose reductions, treatment delays, and omission of the weekly therapy. The trial was closed because of slow accrual. Carboplatin and nab-paclitaxel demonstrated activity in ES-SCLC but required frequent dose adjustments. ©AlphaMed Press; the data published online to support this summary is the property of the authors.

  2. Multisite tumor sampling enhances the detection of intratumor heterogeneity at all different temporal stages of tumor evolution.

    Science.gov (United States)

    Erramuzpe, Asier; Cortés, Jesús M; López, José I

    2018-02-01

    Intratumor heterogeneity (ITH) is an inherent process of tumor development that has received much attention in recent years, as it has become a major obstacle to the success of targeted therapies. ITH is also temporally unpredictable across tumor evolution, which makes its characterization even more problematic, since detection success depends on the temporal snapshot at which ITH is analyzed. New and more efficient strategies for tumor sampling, which currently relies entirely on the pathologist's interpretation, are needed to overcome these difficulties. Recently, we showed that a new strategy, multisite tumor sampling, works better than the routine sampling protocol for ITH detection when tumor time evolution is not taken into consideration. Here, we extend this work and compare the ITH detection of multisite tumor sampling and routine sampling protocols across tumor time evolution; in particular, we provide in silico analyses of both strategies at early and late temporal stages for four different models of tumor evolution (linear, branched, neutral, and punctuated). Our results indicate that multisite tumor sampling outperforms routine protocols in detecting ITH at all temporal stages of tumor evolution. We conclude that multisite tumor sampling is more advantageous than routine protocols in detecting intratumor heterogeneity.
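
The intuition behind the comparison can be illustrated with a deliberately crude Monte Carlo stand-in for the paper's in silico analyses (the 1-D tumor section, segment-based clone model, and sample counts are all invented here): scattered samples are more likely than one contiguous block of samples to capture more than one clone.

```python
import random

def simulate_detection(n_clones=4, grid=16, n_samples=8, multisite=True,
                       n_trials=2000, seed=1):
    """Probability that a sampling protocol captures more than one clone
    in a 1-D tumor section divided into contiguous clonal segments
    (a crude stand-in for a branched-evolution snapshot)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # clone identity per position: random contiguous segments
        cuts = sorted(rng.sample(range(1, grid), n_clones - 1))
        clone, prev = [], 0
        for i, c in enumerate(cuts + [grid]):
            clone += [i] * (c - prev)
            prev = c
        if multisite:  # spread samples evenly across the whole section
            idx = [round(j * (grid - 1) / (n_samples - 1))
                   for j in range(n_samples)]
        else:          # routine: samples from one contiguous region
            start = rng.randrange(grid - n_samples + 1)
            idx = range(start, start + n_samples)
        if len({clone[i] for i in idx}) > 1:
            hits += 1
    return hits / n_trials

p_multi = simulate_detection(multisite=True)
p_routine = simulate_detection(multisite=False)
```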

  3. Optimization of refueling-shuffling scheme in PWR core by random search strategy

    International Nuclear Information System (INIS)

    Wu Yuan

    1991-11-01

    A random method for optimizing refueling management in a pressurized water reactor (PWR) core by simulation is described. The main purpose of the optimization was to select the 'best' refueling arrangement scheme, the one which would produce maximum economic benefits under certain imposed conditions. To fulfill this goal, an effective optimization strategy, a two-stage random search method, was developed. First, the search is made in a manner similar to the stratified sampling technique; a local optimum can be reached by comparison of successive results. Then, further random trials are carried out across strata to try to find the global optimum. In general, the method can be used as a practical tool for conventional fuel management schemes, but it can also be used in studies on the optimization of low-leakage fuel management. Some calculations were done for a typical PWR core on a CYBER-180/830 computer. The results show that the proposed method can reach a satisfactory solution at reasonably low computational cost.
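
The two-stage search structure can be sketched as follows, with a toy scoring function standing in for the core-physics/economics evaluation (the stratification into equal index blocks and all parameters are illustrative assumptions, not the paper's model):

```python
import random

def two_stage_search(score, n=12, n_strata=3, iters=300, seed=0):
    """Two-stage random search over loading patterns (permutations).

    Stage 1: stratified local search -- random swaps confined to each
    stratum, keeping only improvements (reaches a local optimum).
    Stage 2: random swaps across strata, to try to escape toward the
    global optimum.
    """
    rng = random.Random(seed)
    pattern = list(range(n))
    rng.shuffle(pattern)
    best = score(pattern)
    size = n // n_strata
    strata = [list(range(s * size, (s + 1) * size)) for s in range(n_strata)]

    def try_swap(i, j):
        nonlocal best
        pattern[i], pattern[j] = pattern[j], pattern[i]
        s = score(pattern)
        if s > best:
            best = s                                   # keep improvement
        else:
            pattern[i], pattern[j] = pattern[j], pattern[i]  # undo

    for _ in range(iters):                 # stage 1: within-stratum swaps
        i, j = rng.sample(rng.choice(strata), 2)
        try_swap(i, j)
    for _ in range(iters):                 # stage 2: cross-strata swaps
        i, j = rng.sample(range(n), 2)
        try_swap(i, j)
    return pattern, best

# Toy objective: prefer assembly k in position k (optimum score = 0)
toy = lambda p: -sum(abs(v - i) for i, v in enumerate(p))
pat, val = two_stage_search(toy)
```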

  4. Thermodynamics analysis of a modified dual-evaporator CO2 transcritical refrigeration cycle with two-stage ejector

    International Nuclear Information System (INIS)

    Bai, Tao; Yan, Gang; Yu, Jianlin

    2015-01-01

    In this paper, a modified dual-evaporator CO2 transcritical refrigeration cycle with a two-stage ejector (MDRC) is proposed. In the MDRC, the two-stage ejector is employed to recover expansion work from the cycle's throttling processes, enhancing system performance while providing dual-temperature refrigeration. The effects of some key parameters on the thermodynamic performance of the modified cycle are theoretically investigated based on energetic and exergetic analyses. The simulation results for the modified cycle show that the two-stage ejector yields a more effective performance improvement than a single ejector in the CO2 dual-temperature refrigeration cycle, and the improvements in the maximum system COP (coefficient of performance) and system exergy efficiency reach 37.61% and 31.9% over those of the conventional dual-evaporator cycle under the given operating conditions. The exergetic analysis of each component at the optimum discharge pressure indicates that the gas cooler, compressor, two-stage ejector and expansion valves contribute the main portion of the total system exergy destruction, and the exergy destruction caused by the two-stage ejector can amount to 16.91% of the exergy input. The performance characteristics of the proposed cycle show its promise for dual-evaporator refrigeration systems. - Highlights: • A two-stage ejector is used in a dual-evaporator CO2 transcritical refrigeration cycle. • Energetic and exergetic methods are used to analyze the system performance. • The modified cycle obtains dual-temperature refrigeration simultaneously. • The two-stage ejector effectively improves system COP and exergy efficiency.

  5. The experimental study of a two-stage photovoltaic thermal system based on solar trough concentration

    International Nuclear Information System (INIS)

    Tan, Lijun; Ji, Xu; Li, Ming; Leng, Congbin; Luo, Xi; Li, Haili

    2014-01-01

    Highlights: • A two-stage photovoltaic thermal system based on solar trough concentration. • Maximum cell efficiency of 5.21% with a mirror opening width of 57 cm. • With a single cycle, the maximum temperature rise in the heating stage is 12.06 °C. • With 30 min of multiple cycles, the working medium reaches 62.8 °C, an increase of 28.7 °C. - Abstract: A two-stage photovoltaic thermal system based on solar trough concentration is proposed, in which a metal-cavity heating stage is added on top of the PV/T stage, so that thermal energy at higher temperature is output along with electric energy. With the 1.8 m² mirror PV/T system, the characteristic parameters of the space solar cell under non-concentrating and concentrating solar radiation were tested experimentally, as were the solar cell output characteristics at different opening widths of the concentrating mirror of the PV/T stage under concentration. When the mirror opening width was 57 cm, the solar cell efficiency reached a maximum value of 5.21%. The experimental platform of the two-stage photovoltaic thermal system was established, with a 1.8 m² mirror PV/T stage and a 15 m² mirror heating stage, or a 1.8 m² mirror PV/T stage and a 30 m² mirror heating stage. The results showed that with a single cycle, the long metal-cavity heating stage yields lower thermal efficiency but a higher temperature rise of the working medium, up to 12.06 °C with only a single cycle. With 30 min of closed multiple cycles, the temperature of the working medium in the water tank was 62.8 °C, an increase of 28.7 °C, and thermal energy at higher temperature could be output.

  6. Stage-specific sampling by pattern recognition receptors during Candida albicans phagocytosis.

    Directory of Open Access Journals (Sweden)

    Sigrid E M Heinsbroek

    2008-11-01

    Candida albicans is a medically important pathogen, and recognition by innate immune cells is critical for its clearance. Although a number of pattern recognition receptors have been shown to be involved in recognition and phagocytosis of this fungus, the relative roles of these receptors have not been formally examined. In this paper, we have investigated the contribution of the mannose receptor (MR), Dectin-1, and complement receptor 3, and we have demonstrated that Dectin-1 is the main non-opsonic receptor involved in fungal uptake. However, both Dectin-1 and complement receptor 3 were found to accumulate at the site of uptake, while the mannose receptor accumulated on C. albicans phagosomes at later stages. These results suggest a potential role for MR in phagosome sampling; accordingly, MR deficiency led to a reduction in TNF-alpha and MCP-1 production in response to C. albicans uptake. Our data suggest that pattern recognition receptors sample the fungal phagosome in a sequential fashion.

  7. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

    This report outlines the design and construction of the X-2 two-stage free-piston-driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique two-stage driver design which allows a more compact and lower-cost free-piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free-piston driver concept.

  8. New Results On the Sum of Two Generalized Gaussian Random Variables

    KAUST Repository

    Soury, Hamza

    2015-01-01

    We propose in this paper a new method to compute the characteristic function (CF) of a generalized Gaussian (GG) random variable in terms of the Fox H function. The CF of the sum of two independent GG random variables is then deduced. Based on these results, the probability density function (PDF) and the cumulative distribution function (CDF) of the sum distribution are obtained. These functions are expressed in terms of the bivariate Fox H function. Next, the statistics of the distribution of the sum, such as the moments, the cumulants, and the kurtosis, are analyzed and computed. Due to the complexity of the bivariate Fox H function, a solution that reduces this complexity is to approximate the sum of two independent GG random variables by one GG random variable with a suitable shape factor. The approximation method depends on the utility of the system, so three methods of estimating the shape factor are studied and presented.

  9. New Results on the Sum of Two Generalized Gaussian Random Variables

    KAUST Repository

    Soury, Hamza

    2016-01-06

    We propose in this paper a new method to compute the characteristic function (CF) of a generalized Gaussian (GG) random variable in terms of the Fox H function. The CF of the sum of two independent GG random variables is then deduced. Based on these results, the probability density function (PDF) and the cumulative distribution function (CDF) of the sum distribution are obtained. These functions are expressed in terms of the bivariate Fox H function. Next, the statistics of the distribution of the sum, such as the moments, the cumulants, and the kurtosis, are analyzed and computed. Due to the complexity of the bivariate Fox H function, a solution that reduces this complexity is to approximate the sum of two independent GG random variables by one GG random variable with a suitable shape factor. The approximation method depends on the utility of the system, so three methods of estimating the shape factor are studied and presented [1].

  10. New Results on the Sum of Two Generalized Gaussian Random Variables

    KAUST Repository

    Soury, Hamza; Alouini, Mohamed-Slim

    2016-01-01

    We propose in this paper a new method to compute the characteristic function (CF) of a generalized Gaussian (GG) random variable in terms of the Fox H function. The CF of the sum of two independent GG random variables is then deduced. Based on these results, the probability density function (PDF) and the cumulative distribution function (CDF) of the sum distribution are obtained. These functions are expressed in terms of the bivariate Fox H function. Next, the statistics of the distribution of the sum, such as the moments, the cumulants, and the kurtosis, are analyzed and computed. Due to the complexity of the bivariate Fox H function, a solution that reduces this complexity is to approximate the sum of two independent GG random variables by one GG random variable with a suitable shape factor. The approximation method depends on the utility of the system, so three methods of estimating the shape factor are studied and presented [1].
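
The approximation step can be illustrated with a Monte Carlo sketch: draw two independent GG variates, estimate the kurtosis of their sum, and pick the single-GG shape factor whose theoretical kurtosis matches it. This is one of several possible moment-matching estimators; the gamma-based sampler and the bisection bracket are implementation choices, not taken from the paper.

```python
import math
import numpy as np

def gg_sample(rng, beta, alpha=1.0, size=10**6):
    """Generalized Gaussian samples, density ~ exp(-(|x|/alpha)**beta):
    |X| = alpha * G**(1/beta) with G ~ Gamma(1/beta), random sign."""
    g = rng.standard_gamma(1.0 / beta, size)
    return rng.choice([-1.0, 1.0], size) * alpha * g ** (1.0 / beta)

def gg_kurtosis(beta):
    """Theoretical kurtosis of a GG with shape factor beta (decreasing)."""
    return (math.gamma(1 / beta) * math.gamma(5 / beta)
            / math.gamma(3 / beta) ** 2)

def match_shape(kurt, lo=0.3, hi=10.0):
    """Shape factor whose GG kurtosis equals `kurt`, by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if gg_kurtosis(mid) > kurt:
            lo = mid          # kurtosis too high -> beta must grow
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
s = gg_sample(rng, 1.0) + gg_sample(rng, 1.0)  # sum of two Laplacians
k = np.mean(s**4) / np.mean(s**2) ** 2         # empirical kurtosis
beta_hat = match_shape(k)                      # single-GG approximation
```

For two Laplacians (beta = 1, kurtosis 6 each), the sum has kurtosis 4.5, so the matched shape factor lands between the Laplacian (1) and Gaussian (2) cases.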

  11. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

    Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster (C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample sizes and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
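
The beta-binomial construction can be sketched as follows. This is a simplified stand-in for the paper's design code: equal cluster sizes, a single intra-cluster correlation `rho` mapped to a Beta(a, b) mixing distribution, and an accept-when-count-exceeds-d rule; the example's prevalence thresholds and sizes are invented.

```python
import math

def betabinom_pmf(x, n, a, b):
    """Beta-binomial pmf: binomial count with Beta(a, b) success prob."""
    return (math.comb(n, x)
            * math.exp(math.lgamma(x + a) + math.lgamma(n - x + b)
                       - math.lgamma(n + a + b)
                       + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)))

def cluster_risks(p_hi, p_lo, m, k, d, rho):
    """Misclassification risks of a C-LQAS rule: accept the lot when the
    total count over m clusters of size k exceeds d. Intra-cluster
    correlation rho enters via a beta-binomial count per cluster; the m
    cluster counts are convolved to get the distribution of the sum."""
    def sum_dist(p):
        a = p * (1 - rho) / rho
        b = (1 - p) * (1 - rho) / rho
        pmf_c = [betabinom_pmf(x, k, a, b) for x in range(k + 1)]
        dist = [1.0]
        for _ in range(m):                    # convolve m cluster counts
            new = [0.0] * (len(dist) + k)
            for i, pi in enumerate(dist):
                for x, px in enumerate(pmf_c):
                    new[i + x] += pi * px
            dist = new
        return dist
    alpha = sum(sum_dist(p_hi)[: d + 1])      # reject although quality high
    beta = sum(sum_dist(p_lo)[d + 1:])        # accept although quality low
    return alpha, beta

a_err, b_err = cluster_risks(p_hi=0.9, p_lo=0.6, m=10, k=6, d=46, rho=0.1)
```

Scanning `d` (and `m`, `k`) until both risks fall below user-specified limits reproduces the flavor of the design search described in the abstract.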

  12. One-stage or two-stage revision surgery for prosthetic hip joint infection--the INFORM trial: a study protocol for a randomised controlled trial.

    Science.gov (United States)

    Strange, Simon; Whitehouse, Michael R; Beswick, Andrew D; Board, Tim; Burston, Amanda; Burston, Ben; Carroll, Fran E; Dieppe, Paul; Garfield, Kirsty; Gooberman-Hill, Rachael; Jones, Stephen; Kunutsor, Setor; Lane, Athene; Lenguerrand, Erik; MacGowan, Alasdair; Moore, Andrew; Noble, Sian; Simon, Joanne; Stockley, Ian; Taylor, Adrian H; Toms, Andrew; Webb, Jason; Whittaker, John-Paul; Wilson, Matthew; Wylde, Vikki; Blom, Ashley W

    2016-02-17

    Periprosthetic joint infection (PJI) affects approximately 1% of patients following total hip replacement (THR) and often results in severe physical and emotional suffering. Current surgical treatment options are debridement, antibiotics and implant retention; revision THR; excision of the joint and amputation. Revision surgery can be done as either a one-stage or two-stage operation. Both types of surgery are well-established practice in the NHS and result in similar rates of re-infection, but little is known about the impact of these treatments from the patient's perspective. The main aim of this randomised controlled trial is to determine whether there is a difference in patient-reported outcome measures 18 months after randomisation for one-stage or two-stage revision surgery. INFORM (INFection ORthopaedic Management) is an open, two-arm, multi-centre, randomised, superiority trial. We aim to randomise 148 patients with eligible PJI of the hip from approximately seven secondary care NHS orthopaedic units from across England and Wales. Patients will be randomised via a web-based system to receive either a one-stage revision or a two-stage revision THR. Blinding is not possible due to the nature of the intervention. All patients will be followed up for 18 months. The primary outcome is the WOMAC Index, which assesses hip pain, function and stiffness, collected by questionnaire at 18 months. Secondary outcomes include the following: cost-effectiveness, complications, re-infection rates, objective hip function assessment and quality of life. A nested qualitative study will explore patients' and surgeons' experiences, including their views about trial participation and randomisation. INFORM is the first ever randomised trial to compare two widely accepted surgical interventions for the treatment of PJI: one-stage and two-stage revision THR. 
The results of the trial will benefit patients in the future as the main focus is on patient-reported outcomes: pain, function

  13. Two-stage laparoscopic approaches for high anorectal malformation: transumbilical colostomy and anorectoplasty.

    Science.gov (United States)

    Yang, Li; Tang, Shao-Tao; Li, Shuai; Aubdoollah, T H; Cao, Guo-Qing; Lei, Hai-Yan; Wang, Xin-Xing

    2014-11-01

    Trans-umbilical colostomy (TUC) has previously been created in patients with Hirschsprung's disease and intermediate anorectal malformation (ARM), but not in patients with high ARM. The purposes of this study were to assess the feasibility, safety, complications and cosmetic results of TUC in a divided fashion, with stoma closure and laparoscopic-assisted anorectoplasty (LAARP) subsequently completed simultaneously by using the colostomy site as a laparoscopic port in high-ARM patients. Twenty male patients with high ARMs were chosen for this two-stage procedure. The first stage consisted of creating the TUC as a double-barreled colostomy with a high chimney at the umbilicus, with the loop divided at the same time, such that the two diverting ends were located at the umbilical incision with the distal end half closed and slightly higher than the proximal end. In the second stage, 3 to 7 months later, the stoma was closed through a peristomal skin incision followed by end-to-end anastomosis, and LAARP was simultaneously performed by placing a laparoscopic port at the umbilicus, previously the colostomy site. Umbilical wound closure was performed in a semi-opened fashion to create a deep umbilicus. TUC and LAARP were successfully performed in 20 patients. Four cases with bladder neck fistulas and 16 cases with prostatic urethra fistulas were found. Postoperative complications were rectal mucosal prolapse in three cases, anal stricture in two cases and wound dehiscence in one case. Neither umbilical ring narrowing, parastomal hernia nor obstructive symptoms were observed. Neither umbilical nor perineal wound infection was observed. Stoma care was easily carried out by attaching a stoma bag. Healing of the umbilical wounds after the second stage was excellent. Early functional stooling outcomes were satisfactory. The umbilicus may be an alternative stoma site for double-barreled colostomy in high-ARM patients. The two-stage laparoscopic

  14. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. 
CVaR (Conditional Value at Risk) and
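
The two-stage recourse structure described above can be illustrated with a deliberately tiny model (all numbers are invented and bear no relation to the StormWISE data): stage one commits GI units before their effectiveness is known; after the scenario is revealed, stage two buys costlier corrective units to meet the runoff target.

```python
# scenarios: (probability, runoff reduction per unit of GI)
scenarios = [(0.3, 0.6), (0.5, 1.0), (0.2, 1.4)]
target = 50.0          # required runoff-volume reduction
c1, c2 = 1.0, 1.8      # stage-1 and (more expensive) stage-2 unit costs

def expected_cost(x1):
    """Build x1 GI units now ('here & now'); once the true effectiveness
    is learned ('wait & see'), stage-2 recourse tops up the shortfall."""
    cost = c1 * x1
    for prob, eff in scenarios:
        shortfall = max(0.0, target - eff * x1)
        cost += prob * c2 * shortfall / eff   # stage-2 units at known eff
    return cost

# stage-1 decision: the build size with least expected total cost
best_x1 = min((expected_cost(x), x) for x in range(0, 101))[1]
```

Hedging against the low-effectiveness scenario drives the first-stage build above what the average scenario alone would require; a CVaR term would further penalize the worst-case tail.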

  15. Two stage approach to dynamic soil structure interaction

    International Nuclear Information System (INIS)

    Nelson, I.

    1981-01-01

    A two-stage approach is used to reduce the effective size of the soil island required to solve dynamic soil-structure interaction problems. The fictitious boundaries of the conventional soil island are chosen sufficiently far from the structure that the presence of the structure causes only a slight perturbation of the soil response near the boundaries. While the resulting finite element model of the soil-structure system can be solved, it requires a formidable computational effort. Currently, a two-stage approach is used to reduce this effort. The combined soil-structure system has many frequencies and wavelengths. For a stiff structure, the lowest frequencies are those associated with the motion of the structure as a rigid body. In the soil, these modes have the longest wavelengths and attenuate most slowly. The higher-frequency deformational modes of the structure have shorter wavelengths, and their effect attenuates more rapidly with distance from the structure. The difference in soil response between a computation with a refined structural model and one with a crude model tends towards zero a very short distance from the structure. In the current work, the 'crude model' is a rigid structure with the same geometry and inertial properties as the refined model. Preliminary calculations indicated that a rigid structure would be a good low-frequency approximation to the actual structure, provided the structure was much stiffer than the native soil. (orig./RW)

  16. Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling

    Directory of Open Access Journals (Sweden)

    Hyun-Joo Oh

    2017-01-01

    This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times in total, and each time it was applied to the training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences, repeated ten times. Ten landslide susceptibility maps were obtained from the integration of the causative factors with their assigned FR scores. The landslide susceptibility maps were validated using each validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%, all higher than 89%. Moreover, the ten-fold iterative FR modeling may contribute to a better understanding of the regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into the landslide susceptibility analysis, and the approach can be extended to other areas.
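
The core FR score is simple to state: for each class of a causative factor, the share of landslide cells falling in the class divided by the class's share of the total area; the iterative part of the paper repeats this over ten random train/validation splits. The toy factor map below is invented for illustration.

```python
def frequency_ratio(classes, landslide):
    """FR score per class of one causative factor: the ratio of the
    landslide share in the class to the area share of the class.
    FR > 1 means the class is over-represented among landslides."""
    total, slides = len(classes), sum(landslide)
    fr = {}
    for c in set(classes):
        in_c = [i for i, v in enumerate(classes) if v == c]
        area_share = len(in_c) / total
        slide_share = sum(landslide[i] for i in in_c) / slides
        fr[c] = slide_share / area_share
    return fr

# toy factor map: 100 cells, steep cells host most of the landslide cells
slope = ['steep'] * 40 + ['gentle'] * 60
slide = [1] * 30 + [0] * 10 + [1] * 5 + [0] * 55
fr = frequency_ratio(slope, slide)
```

A susceptibility map is then the per-cell sum of FR scores over all thirteen factors, recomputed for each random training sample.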

  17. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
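    As a rough sketch of the two-stage pipeline, the fragment below substitutes a simple distance-to-centroid detector for the paper's enhanced one-class SVM, and plain k-means for its self-organizing map; the flow features, thresholds, and traffic distributions are all invented for illustration.

```python
import math
import random

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Stage 1: one-class anomaly detector trained on normal flows only
# (a stand-in for the enhanced one-class SVM).
def fit_stage1(normal_flows, quantile=0.95):
    d = len(normal_flows[0])
    centroid = [sum(f[i] for f in normal_flows) / len(normal_flows) for i in range(d)]
    radii = sorted(dist(f, centroid) for f in normal_flows)
    return centroid, radii[int(quantile * (len(radii) - 1))]

def is_malicious(flow, centroid, radius):
    return dist(flow, centroid) > radius

# Stage 2: group flagged flows into alert clusters
# (k-means as a stand-in for the self-organizing map).
def cluster(points, k=2, iters=20, seed=0):
    rnd = random.Random(seed)
    centers = rnd.sample(points, k)
    dim = len(points[0])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist(p, centers[j]))].append(p)
        centers = [[sum(p[i] for p in g) / len(g) for i in range(dim)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return groups

random.seed(7)
normal = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(500)]
attacks = ([(random.gauss(6, 0.5), random.gauss(6, 0.5)) for _ in range(20)]
           + [(random.gauss(-6, 0.5), random.gauss(-6, 0.5)) for _ in range(20)])

centroid, radius = fit_stage1(normal)
flagged = [f for f in normal + attacks if is_malicious(f, centroid, radius)]
alert_clusters = cluster(flagged, k=2)
```

    Because stage 1 is trained only on normal traffic, novel attacks need no labels to be flagged; stage 2 then organizes the alerts so an analyst sees groups rather than individual flows.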

  18. A Novel Two-Stage Dynamic Spectrum Sharing Scheme in Cognitive Radio Networks

    Institute of Scientific and Technical Information of China (English)

    Guodong Zhang; Wei Heng; Tian Liang; Chao Meng; Jinming Hu

    2016-01-01

    In order to enhance the efficiency of spectrum utilization and reduce communication overhead in the spectrum sharing process, we propose a two-stage dynamic spectrum sharing scheme in which cooperative and noncooperative modes are analyzed in both stages. In particular, the existence and the uniqueness of Nash Equilibrium (NE) strategies for the noncooperative mode are proved. In addition, a distributed iterative algorithm is proposed to obtain the optimal solutions of the scheme. Simulation studies are carried out to show the performance comparison between the two modes as well as the system revenue improvement of the proposed scheme compared with a conventional scheme without a virtual price control factor.

  19. Graphics for the multivariate two-sample problem

    International Nuclear Information System (INIS)

    Friedman, J.H.; Rafsky, L.C.

    1981-01-01

    Some graphical methods for comparing multivariate samples are presented. These methods are based on minimal spanning tree techniques developed for multivariate two-sample tests. The utility of these methods is illustrated through examples using both real and artificial data

  20. Opposed piston linear compressor driven two-stage Stirling Cryocooler for cooling of IR sensors in space application

    Science.gov (United States)

    Bhojwani, Virendra; Inamdar, Asif; Lele, Mandar; Tendolkar, Mandar; Atrey, Milind; Bapat, Shridhar; Narayankhedkar, Kisan

    2017-04-01

    A two-stage Stirling cryocooler has been developed and tested for cooling IR sensors in space applications. The concept uses an opposed-piston linear compressor to drive the two-stage Stirling expander. The configuration uses a moving coil linear motor for the compressor as well as for the expander unit. An electrical phase difference of 80 degrees was maintained between the voltage waveforms supplied to the compressor motor and the expander motor. The piston and displacer surfaces were coated with Rulon, an anti-friction material, to ensure oil-less operation of the unit. The present article discusses analysis results, features of the cryocooler, and experimental tests conducted on the developed unit. The two stages of the cryo-cylinder and the expander unit were manufactured from a single piece to ensure precise alignment between the two stages. Flexure bearings were used to suspend the piston and displacer about their mean positions. The objective of the work was to develop a two-stage Stirling cryocooler with cooling capacities of 2 W at 120 K and 0.5 W at 60 K for the two stages, at an input power of less than 120 W. The cryocooler achieved a minimum temperature of 40.7 K at stage 2.

  1. Two Stage Fuzzy Methodology to Evaluate the Credit Risks of Investment Projects

    OpenAIRE

    O. Badagadze; G. Sirbiladze; I. Khutsishvili

    2014-01-01

    The work proposes a decision support methodology for credit risk minimization in the selection of investment projects. The methodology provides two stages of project evaluation. Preliminary selection of projects with minor credit risks is made using the Expertons Method. The second stage ranks the chosen projects using the Possibilistic Discrimination Analysis Method, a new modification of the well-known Method of Fuzzy Discrimination Analysis.

  2. Visualizing the Sample Standard Deviation

    Science.gov (United States)

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
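    The pairwise interpretation stated in the abstract is an algebraic identity that is easy to check numerically: the sample variance equals twice the mean of the squared half deviations (x_i − x_j)/2 taken over all C(n,2) pairs, because the sum of squared pairwise differences satisfies Σ_{i<j}(x_i − x_j)² = n Σ_i (x_i − x̄)². A minimal sketch with made-up data:

```python
import itertools
import math
import statistics

def sd_via_pairwise(xs):
    """Sample SD as the square root of twice the mean squared
    pairwise half deviation between any two observations."""
    halves = [((a - b) / 2) ** 2 for a, b in itertools.combinations(xs, 2)]
    return math.sqrt(2 * sum(halves) / len(halves))

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
pairwise_sd = sd_via_pairwise(xs)
```

    This agrees exactly with the usual definition (square root of the sample variance with the n − 1 denominator), with no reference to the sample mean.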

  3. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Martínez-Jerónimo, Fernando; Flores-Ortíz, Cesar Mateo

    2017-11-20

    A biomass production process comprising two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalga Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, experiments using shake-flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea+vitamins (U+V) at 30°C. The first stage of the TSHP process was done in a 6 L bioreactor, and the inductions in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved the maximal biomass concentration, increasing from 7.22 g L⁻¹ to 17.98 g L⁻¹ with an increase in initial glucose concentration from 10.6 g L⁻¹ to 30.3 g L⁻¹. However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Y_x/s), possibly due to substrate inhibition. After 24 h of photoinduction, the lutein content in S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and the lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.

  4. One-stage (Warsaw) and two-stage (Oslo) repair of unilateral cleft lip and palate: Craniofacial outcomes.

    Science.gov (United States)

    Fudalej, Piotr Stanislaw; Wegrodzka, Ewa; Semb, Gunvor; Hortis-Dzierzbicka, Maria

    2015-09-01

    The aim of this study was to compare facial development in subjects with complete unilateral cleft lip and palate (CUCLP) treated with two different surgical protocols. Lateral cephalometric radiographs of 61 patients (42 boys, 19 girls; mean age, 10.9 years; SD, 1) treated consecutively in Warsaw with one-stage repair and 61 age-matched and sex-matched patients treated in Oslo with two-stage surgery were selected to evaluate craniofacial morphology. On each radiograph, 13 angular and two ratio variables were measured in order to describe the hard and soft tissues of the facial region. The analysis showed that differences between the groups were limited to hard tissues: the maxillary prominence in subjects from the Warsaw group was decreased by almost 4° in comparison with the Oslo group (sella-nasion-A-point (SNA) = 75.3° and 79.1°, respectively), and maxillo-mandibular morphology was less favorable in the Warsaw group than the Oslo group (ANB angle = 0.8° and 2.8°, respectively). The soft tissue contour was comparable in both groups. In conclusion, inter-group differences suggest a more favorable outcome in the Oslo group. However, the distinctiveness of facial morphology in the background populations (i.e., in Poles and Norwegians) could have contributed to the observed results. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  5. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    The sampling method has been paid much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small fraction of vertices with high node degree can possess most of the structural information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...

  6. A two-stage metal valorisation process from electric arc furnace dust (EAFD)

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    Full Text Available This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD with other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistant furnace, removal of lead is enabled due to the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is conducted. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

  7. A Two-stage DC-DC Converter for the Fuel Cell-Supercapacitor Hybrid System

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2009-01-01

    A wide input range multi-stage converter is proposed with the fuel cells and supercapacitors as a hybrid system. The front-end two-phase boost converter is used to optimize the output power and to reduce the current ripple of the fuel cells. The supercapacitor power module is connected by a push-pull-forward half bridge (PPFHB) converter with coupled inductors in the second stage to handle the slow transient response of the fuel cells and realize bidirectional power flow control. Moreover, this cascaded structure simplifies the power management. The control strategy for the whole system is analyzed and designed. A 1 kW prototype controlled by a TMS320F2808 DSP is built in the lab. Simulation and experimental results confirm the feasibility of the proposed two-stage dc-dc converter system.

  8. Device for sampling HTGR recycle fuel particles

    International Nuclear Information System (INIS)

    Suchomel, R.R.; Lackey, W.J.

    1977-03-01

    Devices for sampling High-Temperature Gas-Cooled Reactor fuel microspheres were evaluated. Analyses of samples obtained with each of two specially designed passive samplers were compared with data generated by more common techniques. A ten-stage two-way sampler was found to produce a representative sample with a constant batch-to-sample ratio.

  9. Sampling Polya-Gamma random variates: alternate and approximate techniques

    OpenAIRE

    Windle, Jesse; Polson, Nicholas G.; Scott, James G.

    2014-01-01

    Efficiently sampling from the Pólya-Gamma distribution, PG(b, z), is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the PG(1, z) distribution. We build two new samplers that offer improved performance when sampling from the PG(b, z) distribution and b is not unity.

  10. Numerical simulation of brain tumor growth model using two-stage ...

    African Journals Online (AJOL)

    In recent years, the study of glioma growth has been an active field of research. Mathematical models that describe the proliferation and diffusion properties of the growth have been developed by many researchers. In this work, the performance analysis of the two-stage Gauss-Seidel (TSGS) method to solve the glioma growth ...

  11. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes as a result of minimizing transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. They may contain one or more objective functions, one or more stages of transport, or one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model for the transportation problem of a millstone company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers and warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is based mainly on the fruitful application of the methods for treating transportation problems, the theory of duality of linear programming, and the methods of solving bi-criteria linear programming problems.
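    To make the bi-criteria idea concrete, the sketch below enumerates every feasible plan of a tiny, invented single-stage instance (two suppliers, two destinations, two criteria per route) and extracts the non-dominated set. The paper's algorithm handles the two-stage case via linear programming duality rather than enumeration; this is only a toy illustration of what "non-dominated" means.

```python
from itertools import product

# Hypothetical instance: two suppliers (supply 2 each), two destinations
# (demand 2 each); each route carries a pair of criteria (money, hours).
cost = {(0, 0): 4, (0, 1): 1, (1, 0): 2, (1, 1): 3}
time = {(0, 0): 1, (0, 1): 5, (1, 0): 2, (1, 1): 1}
supply, demand = [2, 2], [2, 2]

def feasible_plans():
    """Yield every integer shipment plan meeting all supplies and demands."""
    for x in product(range(3), repeat=4):
        plan = {(0, 0): x[0], (0, 1): x[1], (1, 0): x[2], (1, 1): x[3]}
        if (x[0] + x[1] == supply[0] and x[2] + x[3] == supply[1]
                and x[0] + x[2] == demand[0] and x[1] + x[3] == demand[1]):
            yield plan

def objectives(plan):
    return (sum(cost[r] * v for r, v in plan.items()),
            sum(time[r] * v for r, v in plan.items()))

plans = [(objectives(p), p) for p in feasible_plans()]
# A plan is non-dominated if no other plan is at least as good on both
# criteria and strictly better on one.
pareto = sorted(f for f, _ in plans
                if not any(g != f and g[0] <= f[0] and g[1] <= f[1]
                           for g, _ in plans))
```

    The decision maker then picks one point from the Pareto set according to the preferred trade-off between the two criteria.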

  12. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An...
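    For reference, the standard follow-up-scores calculation the abstract alludes to inflates the individually randomized sample size by the design effect 1 + (m − 1)ρ, where m is the cluster size and ρ the intraclass correlation. The paper's contribution is a further correction for baseline adjustment (ANCOVA), which is not reproduced here; the numbers below are purely illustrative.

```python
from math import ceil

def subjects_per_arm(delta, sigma, m, icc, z_alpha=1.96, z_power=0.84):
    """Subjects per arm for a cluster randomized trial analysed on
    follow-up scores only (two-sided alpha = 0.05, power = 0.80),
    rounded up to whole clusters of size m."""
    n_individual = 2 * (z_alpha + z_power) ** 2 * sigma ** 2 / delta ** 2
    design_effect = 1 + (m - 1) * icc
    return ceil(n_individual * design_effect / m) * m

# Detecting a 0.5 SD difference with clusters of 20 and ICC = 0.05,
# versus the same calculation with no clustering (m = 1):
n_clustered = subjects_per_arm(delta=0.5, sigma=1.0, m=20, icc=0.05)
n_unclustered = subjects_per_arm(delta=0.5, sigma=1.0, m=1, icc=0.05)
```

    Even a modest ICC roughly doubles the required sample here, which is why ANCOVA-based corrections that shrink the design effect are practically valuable.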

  13. Reliability estimation system: its application to the nuclear geophysical sampling of ore deposits

    International Nuclear Information System (INIS)

    Khaykovich, I.M.; Savosin, S.I.

    1992-01-01

    The reliability estimation system accepted in the Soviet Union for sampling data in nuclear geophysics is based on unique requirements in metrology and methodology. It involves estimating characteristic errors in calibration, as well as errors in measurement and interpretation. This paper describes the methods of estimating the levels of systematic and random errors at each stage of the problem. The data of nuclear geophysics sampling are considered to be reliable if there are no statistically significant, systematic differences between ore intervals determined by this method and by geological control, or by other methods of sampling; the reliability of the latter having been verified. The difference between the random errors is statistically insignificant. The system allows one to obtain information on the parameters of ore intervals with a guaranteed random error and without systematic errors. (Author)

  14. Insights into cadmium diffusion mechanisms in two-stage diffusion profiles in solar-grade Cu(In,Ga)Se2 thin films

    International Nuclear Information System (INIS)

    Biderman, N. J.; Sundaramoorthy, R.; Haldar, Pradeep; Novak, Steven W.; Lloyd, J. R.

    2015-01-01

    Cadmium diffusion experiments were performed on polished copper indium gallium diselenide (Cu(In,Ga)Se₂, or CIGS) samples, with the resulting cadmium diffusion profiles measured by time-of-flight secondary ion mass spectroscopy. Experiments done in the annealing temperature range between 275 °C and 425 °C reveal two-stage cadmium diffusion profiles, which may be indicative of multiple diffusion mechanisms. Each stage can be described by the standard solutions of Fick's second law. The slower cadmium diffusion in the first stage can be described by the Arrhenius equation D₁ = 3 × 10⁻⁴ exp(−1.53 eV/kBT) cm² s⁻¹, possibly representing vacancy-mediated diffusion. The faster second-stage diffusion coefficients determined in these experiments match the previously reported cadmium diffusion Arrhenius equation D₂ = 4.8 × 10⁻⁴ exp(−1.04 eV/kBT) cm² s⁻¹, suggesting an interstitial-based mechanism
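    The two quoted Arrhenius expressions are directly computable. A short sketch (Boltzmann constant in eV/K) confirms that the second-stage coefficient exceeds the first-stage one throughout the quoted annealing range:

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius(d0, ea_ev, t_kelvin):
    """Diffusion coefficient D = D0 * exp(-Ea / (kB * T)), in cm^2/s."""
    return d0 * math.exp(-ea_ev / (K_B * t_kelvin))

def d_stage1(t):  # slower first stage, possibly vacancy-mediated
    return arrhenius(3e-4, 1.53, t)

def d_stage2(t):  # faster second stage, possibly interstitial-based
    return arrhenius(4.8e-4, 1.04, t)

# Endpoints of the annealing range, converted to kelvin.
anneal_range_k = [275 + 273.15, 425 + 273.15]
coeffs = {t: (d_stage1(t), d_stage2(t)) for t in anneal_range_k}
```

    Since D₂ has both the larger prefactor and the smaller activation energy, the ratio D₂/D₁ = (4.8/3)·exp(0.49 eV/kBT) is greater than 1 at every temperature, consistent with the faster second stage.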

  15. Application of a stratified random sampling technique to the estimation and minimization of respirable quartz exposure to underground miners

    International Nuclear Information System (INIS)

    Makepeace, C.E.; Horvath, F.J.; Stocker, H.

    1981-11-01

    The aim of a stratified random sampling plan is to provide the best estimate (in the absence of full-shift personal gravimetric sampling) of personal exposure to respirable quartz among underground miners. One also gains information on the exposure distribution of all the miners at the same time. Three variables (or strata) are considered in the present scheme: locations, occupations and times of sampling. Random sampling within each stratum ensures that each location, occupation and time of sampling has an equal opportunity of being selected without bias. Following implementation of the plan and analysis of the collected data, one can determine the individual exposures and the mean. This information can then be used to identify those groups whose exposure contributes significantly to the collective exposure. In turn, this identification, along with other considerations, allows the mine operator to carry out a cost-benefit optimization and eventual implementation of engineering controls for these groups. This optimization and engineering control procedure, together with the random sampling plan, can then be used in an iterative manner to minimize the mean value of the distribution and the collective exposures
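    The selection step can be sketched minimally as follows, with a hypothetical sampling frame of miners keyed by (location, occupation, sampling time); drawing with random.sample within each stratum gives every member of a stratum the same selection probability, as the plan requires.

```python
import random

def stratified_sample(frame, n_per_stratum, seed=0):
    """Draw a simple random sample of fixed size from every stratum."""
    rnd = random.Random(seed)
    return {stratum: rnd.sample(members, min(n_per_stratum, len(members)))
            for stratum, members in frame.items()}

# Hypothetical frame: miner IDs grouped by (location, occupation, shift).
frame = {
    ("stope-12", "driller", "day"):   ["m01", "m02", "m03", "m04"],
    ("stope-12", "driller", "night"): ["m05", "m06", "m07"],
    ("haulage-3", "loader", "day"):   ["m08", "m09", "m10", "m11", "m12"],
}
picked = stratified_sample(frame, n_per_stratum=2)
```

    Weighting each stratum's mean measured exposure by its stratum size then gives the overall estimate, and the strata with the highest contributions flag the groups that drive the collective exposure.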

  16. Event-triggered synchronization for reaction-diffusion complex networks via random sampling

    Science.gov (United States)

    Dong, Tao; Wang, Aijuan; Zhu, Huiyun; Liao, Xiaofeng

    2018-04-01

    In this paper, the synchronization problem of reaction-diffusion complex networks (RDCNs) with Dirichlet boundary conditions is considered, where the data is sampled randomly. An event-triggered controller based on the sampled data is proposed, which can reduce the number of controller updates and the communication load. Under this strategy, the synchronization problem of the diffusion complex network is equivalently converted to the stability of a reaction-diffusion complex dynamical system with time delay. By using the matrix inequality technique and the Lyapunov method, the synchronization conditions of the RDCNs are derived, which are dependent on the diffusion term. Moreover, it is found that the proposed control strategy naturally rules out Zeno behavior. Finally, a numerical example is given to verify the obtained results.

  17. Current Status of Stereotactic Ablative Radiotherapy (SABR) for Early-stage Non-small Cell Lung Cancer

    Directory of Open Access Journals (Sweden)

    Anhui SHI

    2016-06-01

    Full Text Available High-level evidence from randomized studies comparing stereotactic ablative radiotherapy (SABR) to surgery is lacking. Although a pooled analysis of two randomized trials, STARS and ROSEL, showed that SABR is better tolerated and might lead to better overall survival than surgery for operable clinical stage I non-small cell lung cancer (NSCLC), SABR is only recommended as a preferred treatment option for early-stage NSCLC patients who cannot or will not undergo surgery. We, therefore, are waiting for the results of the ongoing randomized studies [Veterans Affairs lung cancer surgery or stereotactic radiotherapy in the US (VALOR) and the SABRTooth study in the United Kingdom (SABRTooth)]. Many retrospective and case-control studies showed that SABR is safe and effective (local control rate higher than 90%, 5-year survival rate reaching 70%), but there are considerable variations in the definitions and staging of lung cancer, operability determination, and surgical approaches to operable lung cancer (open vs video-assisted). Therefore, it is difficult to compare the superiority of radiotherapy and surgery in the treatment of early-stage lung cancer. Most studies demonstrated that the efficacy of the two modalities for early-stage lung cancer is equivalent; however, due to the limited data, the conclusions from those studies are difficult to regard as evidence-based. Therefore, the controversies will remain focused on the safety and invasiveness of the two treatment modalities. This article will review the ongoing debate in light of these goals.

  18. Sampling and estimating recreational use.

    Science.gov (United States)

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

    Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
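    A two-stage estimate of total use at a single site can be sketched as follows: sample n of N days, then m of the M_i countable hours within each sampled day, and expand at both stages. The visitor counts below are invented, and the final loop checks design-unbiasedness by averaging the estimator over every possible sample.

```python
from itertools import combinations, product

# Hypothetical single-site population: N = 3 days, each with M_i = 2
# countable hours; entries are visitors counted per hour.
days = [[3, 5], [2, 2], [7, 1]]
true_total = sum(sum(d) for d in days)  # 20 visits

N, n, m = len(days), 2, 1  # sample 2 of 3 days, then 1 of 2 hours per day

def estimate(day_idx, hour_idx):
    """Two-stage expansion estimator: hours up to the day, days up to the season."""
    per_day = sum((len(days[d]) / m) * days[d][h]
                  for d, h in zip(day_idx, hour_idx))
    return (N / n) * per_day

# Average the estimator over all possible samples
# (3 day pairs x 2 x 2 hour choices = 12 samples).
all_estimates = [estimate(d_idx, h_idx)
                 for d_idx in combinations(range(N), n)
                 for h_idx in product(range(2), repeat=n)]
mean_estimate = sum(all_estimates) / len(all_estimates)
```

    With simple random sampling without replacement at both stages, the expansion estimator's average over all samples equals the true total exactly, which is what makes it usable for estimating recreational use from sparse visit counts.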

  19. Influence of species and stage of maturity on intake and partial ...

    African Journals Online (AJOL)


    Four silage diets were studied in a 2 x 2 factorial experimental design. The treatments consisted of two grass species (P. maximum or D. eriantha) and two growth stages (boot or full bloom) at harvest. Four multi-cannulated sheep fitted with ruminal, abomasal and ileal cannulas were randomly allocated to one of the four diets.

  20. Two-stage heterotopic urethroplasty with usage of groin flap. Case report

    Directory of Open Access Journals (Sweden)

    R. T. Adamyan

    2014-11-01

    Full Text Available The article is devoted to the use of plastic surgery to address problems of the urogenital region arising as a consequence of iatrogenic injury. It provides a case report of the two-stage treatment of a patient with complete loss of part of the urethra and of the bladder neck due to iatrogenic injury. The first stage of surgical treatment is the formation of an artificial urethra from a rotated groin flap with axial blood supply. The second stage connects the native urethra to the artificial one, with heterotopic placement of the lower urinary tract.

  1. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  2. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Science.gov (United States)

    Bril, Aleksander; Kalinina, Olga; Levina, Anastasia

    2018-03-01

    The paper is devoted to the current, much-debated problem of how to select effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings, and this proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering system elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies. The most attractive option for development is the use of venture financing for small innovative businesses. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects allows creating an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering system elements for the construction business.

  3. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Directory of Open Access Journals (Sweden)

    Bril Aleksander

    2018-01-01

    Full Text Available The paper is devoted to the current, much-debated problem of how to select effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings, and this proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering system elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies. The most attractive option for development is the use of venture financing for small innovative businesses. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects allows creating an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering system elements for the construction business.

  4. Quick pace of property acquisitions requires two-stage evaluations

    International Nuclear Information System (INIS)

    Hollo, R.; Lockwood, S.

    1994-01-01

    The traditional method of evaluating oil and gas reserves may be too cumbersome for the quick pace of oil and gas property acquisition. An acquisition evaluator must decide quickly if a property meets basic purchase criteria. The current business climate requires a two-stage approach. First, the evaluator makes a quick assessment of the property and submits a bid. If the bid is accepted, the evaluator goes on with a detailed analysis, which represents the second stage. Acquisition of producing properties has become an important activity for many independent oil and gas producers, who must be able to evaluate reserves quickly enough to make effective business decisions yet accurately enough to avoid costly mistakes. Independents thus must be familiar with how transactions usually progress as well as with the basic methods of property evaluation. The paper discusses acquisition activity, the initial offer, the final offer, property evaluation, and fair market value

  5. Influence of one- or two-stage methods for polymerizing complete dentures on adaptation and teeth movements

    Directory of Open Access Journals (Sweden)

    Moises NOGUEIRA

    Full Text Available Abstract Introduction The quality of complete dentures might be influenced by the method of fabrication. Objective To evaluate the influence of two different methods of processing muco-supported complete dentures on their adaptation and teeth movements. Material and method Dentures were fabricated in two groups (n=10 for upper and lower arches) according to the polymerization method: (1) conventional one-stage: a wax trial base was made, teeth were arranged, and the denture was polymerized; (2) two-stage method: the base was waxed and first polymerized; with the denture base polymerized, the teeth were arranged and the final polymerization was then performed. Teeth movements were evaluated as the distances between incisors (I-I), premolars (P-P), molars (M-M), left incisor to left molar (LI-LM), and right incisor to right molar (RI-RM). For the adaptation analysis, dentures were cut in three different positions: (A) distal face of the canines, (B) mesial face of the first molars, and (C) distal face of the second molars. Result Denture bases showed significantly better adaptation when polymerized in the one-stage procedure for both the upper (p=0.000) and the lower (p=0.000) arches, with region A presenting significantly better adaptation than region C. In the upper arch, a significant reduction in the I-I distance was observed with the one-stage technique, while the two-stage technique promoted a significant reduction in the RI-RM distance. In the lower arch, the one-stage technique promoted a significant reduction in the RI-RM distance and the two-stage technique in the LI-LM distance. Conclusion The conventional one-stage method presented the better results for denture adaptation. Both fabrication methods presented some alteration in teeth movements.

  6. Enhancing the hydrolysis process of a two-stage biogas technology for the organic fraction of municipal solid waste

    DEFF Research Database (Denmark)

    Nasir, Zeeshan; Uellendahl, Hinrich

    2015-01-01

    The Danish company Solum A/S has developed a two-stage dry anaerobic digestion process labelled AIKAN® for the biological conversion of the organic fraction of municipal solid waste (OFMSW) into biogas and compost. In the AIKAN® process design, the methanogenic (2nd) stage is separated from the hydrolytic (1st) stage, which enables pump-free feeding of the waste into the 1st stage (processing module) and eliminates the risk of blocking pumps and pipes, since only the percolate from the 1st stage is pumped into the 2nd stage (biogas reactor tank). The biogas yield of the AIKAN® two-stage process, however, has been shown to be only about 60% of the theoretical maximum. Previous monitoring of the hydrolytic and methanogenic activity in the two stages of the process revealed that the bottleneck of the whole degradation process is rather found in the hydrolytic first stage while the methanogenic second

  7. Two-stage combustion for reducing pollutant emissions from gas turbine combustors

    Science.gov (United States)

    Clayton, R. M.; Lewis, D. H.

    1981-01-01

    Combustion and emission results are presented for a premix combustor fueled with admixtures of JP5 with neat H2 and of JP5 with simulated partial-oxidation product gas. The combustor was operated with inlet-air state conditions typical of cruise power for high performance aviation engines. Ultralow NOx, CO and HC emissions and extended lean burning limits were achieved simultaneously. Laboratory scale studies of the non-catalyzed rich-burning characteristics of several paraffin-series hydrocarbon fuels and of JP5 showed sooting limits at equivalence ratios of about 2.0 and that in order to achieve very rich sootless burning it is necessary to premix the reactants thoroughly and to use high levels of air preheat. The application of two-stage combustion for the reduction of fuel NOx was reviewed. An experimental combustor designed and constructed for two-stage combustion experiments is described.

  8. A cause and effect two-stage BSC-DEA method for measuring the relative efficiency of organizations

    Directory of Open Access Journals (Sweden)

    Seyed Esmaeel Najafi

    2011-01-01

    This paper presents an integration of the balanced scorecard (BSC) with two-stage data envelopment analysis (DEA). The proposed model uses different financial and non-financial perspectives to evaluate the performance of decision-making units in different BSC stages. At each stage, a two-stage DEA method is implemented to measure the relative efficiency of decision-making units, and the results are monitored using the cause and effect relationships. An empirical study for a banking sector is also performed using the method developed in this paper, and the results are briefly analyzed.

  9. Concentration of polycyclic aromatic hydrocarbons in water samples from different stages of treatment

    Science.gov (United States)

    Pogorzelec, Marta; Piekarska, Katarzyna

    2017-11-01

    The aim of this study was to analyze the presence and concentration of selected polycyclic aromatic hydrocarbons in water samples from different stages of treatment and to verify the usefulness of semipermeable membrane devices for analysis of drinking water. For this purpose, study was conducted for a period of 5 months. Semipermeable membrane devices were deployed in a surface water treatment plant located in Lower Silesia (Poland). To determine the effect of water treatment on concentration of PAHs, three sampling places were chosen: raw water input, stream of water just before disinfection and treated water output. After each month of sampling SPMDs were changed for fresh ones and prepared for further analysis. Concentrations of fifteen polycyclic aromatic hydrocarbons were determined by high performance liquid chromatography (HPLC). Presented study indicates that the use of semipermeable membrane devices can be an effective tool for the analysis of aquatic environment, including monitoring of drinking water, where organic micropollutants are present at very low concentrations.

  10. Concentration of polycyclic aromatic hydrocarbons in water samples from different stages of treatment

    Directory of Open Access Journals (Sweden)

    Pogorzelec Marta

    2017-01-01

    The aim of this study was to analyze the presence and concentration of selected polycyclic aromatic hydrocarbons in water samples from different stages of treatment and to verify the usefulness of semipermeable membrane devices for analysis of drinking water. For this purpose, study was conducted for a period of 5 months. Semipermeable membrane devices were deployed in a surface water treatment plant located in Lower Silesia (Poland). To determine the effect of water treatment on concentration of PAHs, three sampling places were chosen: raw water input, stream of water just before disinfection and treated water output. After each month of sampling SPMDs were changed for fresh ones and prepared for further analysis. Concentrations of fifteen polycyclic aromatic hydrocarbons were determined by high performance liquid chromatography (HPLC). Presented study indicates that the use of semipermeable membrane devices can be an effective tool for the analysis of aquatic environment, including monitoring of drinking water, where organic micropollutants are present at very low concentrations.

  11. A two staged condensation of vapors of an isobutane tower in installations for sulfuric acid alkylation

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, N.P.; Feyzkhanov, R.I.; Idrisov, A.D.; Navalikhin, P.G.; Sakharov, V.D.

    1983-01-01

    In order to increase the concentration of isobutane to greater than 72 to 76 percent in an installation for sulfuric acid alkylation, a system of two-stage condensation of vapors from an isobutane tower is placed into operation. The first stage condenses the heavier part of the upper distillate of the tower, which is achieved through a slight increase in the condensation temperature. The product condensed in the first stage is completely returned to the tower as live irrigation. The vapors of the isobutane fraction which did not condense in the first stage are sent to two newly installed condensers, from which the condensed product passes through intermediate tanks to further depropanization. The two-stage condensation of vapors of the isobutane tower reduces the content of the inert diluents, propane and n-butane, in the upper distillate of the isobutane tower and creates more favorable conditions for the operation of the isobutane and propane towers.

  12. On the efficiency of a randomized mirror descent algorithm in online optimization problems

    Science.gov (United States)

    Gasnikov, A. V.; Nesterov, Yu. E.; Spokoiny, V. G.

    2015-04-01

    A randomized online version of the mirror descent method is proposed. It differs from the existing versions by the randomization method. Randomization is performed at the stage of the projection of a subgradient of the function being optimized onto the unit simplex rather than at the stage of the computation of a subgradient, which is common practice. As a result, a componentwise subgradient descent with a randomly chosen component is obtained, which admits an online interpretation. This observation, for example, has made it possible to uniformly interpret results on weighting expert decisions and propose the most efficient method for searching for an equilibrium in a zero-sum two-person matrix game with sparse matrix.
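The scheme described above, where randomization happens at the projection onto the unit simplex so that only one randomly chosen component is updated per step, can be sketched as entropic (multiplicative-weights) mirror descent. The linear objective, step size, and estimator below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def randomized_mirror_descent(grad, x0, steps, eta, seed=None):
    """Entropic mirror descent on the unit simplex where each step uses
    only one randomly chosen coordinate of the subgradient (a sketch of
    the componentwise randomized scheme, not the paper's exact method)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad(x)
        i = rng.integers(len(x))           # randomly chosen component
        estimate = np.zeros_like(x)
        estimate[i] = g[i] * len(x)        # unbiased estimate of the full subgradient
        x = x * np.exp(-eta * estimate)    # multiplicative (entropic) update
        x /= x.sum()                       # renormalize: stay on the simplex
    return x

# Minimize the linear function f(x) = <c, x> over the simplex;
# the optimum concentrates all mass on argmin(c), here index 1.
c = np.array([0.9, 0.1, 0.5])
x = randomized_mirror_descent(lambda x: c, np.ones(3) / 3, steps=2000, eta=0.1, seed=0)
```

For a linear objective this reduces to the multiplicative-weights update familiar from expert-advice settings, which is the connection the abstract draws to weighting expert decisions.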

  13. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld eZeitler

    2016-05-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e. unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, where two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony, where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

  14. Validated sampling strategy for assessing contaminants in soil stockpiles

    International Nuclear Information System (INIS)

    Lame, Frank; Honders, Ton; Derksen, Giljam; Gadella, Michiel

    2005-01-01

    Dutch legislation on the reuse of soil requires a sampling strategy to determine the degree of contamination. This sampling strategy was developed in three stages. Its main aim is to obtain a single analytical result, representative of the true mean concentration of the soil stockpile. The development process started with an investigation into how sample pre-treatment could be used to obtain representative results from composite samples of heterogeneous soil stockpiles. Combining a large number of random increments allows stockpile heterogeneity to be fully represented in the sample. The resulting pre-treatment method was then combined with a theoretical approach to determine the necessary number of increments per composite sample. At the second stage, the sampling strategy was evaluated using computerised models of contaminant heterogeneity in soil stockpiles. The now theoretically based sampling strategy was implemented by the Netherlands Centre for Soil Treatment in 1995. It was applied to all types of soil stockpiles, ranging from clean to heavily contaminated, over a period of four years. This resulted in a database containing the analytical results of 2570 soil stockpiles. At the final stage these results were used for a thorough validation of the sampling strategy. It was concluded that the model approach has indeed resulted in a sampling strategy that achieves analytical results representative of the mean concentration of soil stockpiles. - A sampling strategy that ensures analytical results representative of the mean concentration in soil stockpiles is presented and validated
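The rationale for combining a large number of random increments can be illustrated numerically: the error of a composite sample's mean shrinks as the number of increments grows, so stockpile heterogeneity is increasingly well represented. The lognormal contaminant distribution below is a hypothetical stand-in for heterogeneity, not the Dutch validation data:

```python
import numpy as np

# Illustrative Monte Carlo of composite sampling from a heterogeneous
# stockpile (hypothetical lognormal contaminant levels, not real data).
rng = np.random.default_rng(42)
stockpile = rng.lognormal(mean=1.0, sigma=1.0, size=100_000)
true_mean = stockpile.mean()

def composite_error(n_increments, trials=2000):
    """Mean absolute error of a composite sample built from
    n_increments random increments, averaged over many trials."""
    picks = rng.choice(stockpile, size=(trials, n_increments))
    return np.abs(picks.mean(axis=1) - true_mean).mean()

# More increments per composite -> analytical result closer to the true mean.
err_few, err_many = composite_error(5), composite_error(100)
```

Under these assumptions the error scales roughly as 1/sqrt(n), which is the statistical argument behind fixing a minimum number of increments per composite sample.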

  15. Design and control of a decoupled two degree of freedom translational parallel micro-positioning stage.

    Science.gov (United States)

    Lai, Lei-Jie; Gu, Guo-Ying; Zhu, Li-Min

    2012-04-01

    This paper presents a novel decoupled two-degrees-of-freedom (2-DOF) translational parallel micro-positioning stage. The stage consists of a monolithic compliant mechanism driven by two piezoelectric actuators. The end-effector of the stage is connected to the base by four independent kinematic limbs. Two types of compound flexure module are serially connected to provide 2-DOF for each limb. The compound flexure modules and the mirror-symmetric distribution of the four limbs significantly reduce the input and output cross couplings and the parasitic motions. Based on the stiffness matrix method, static and dynamic models are constructed and optimal design is performed under certain constraints. The finite element analysis results are then given to validate the design model, and a prototype of the XY stage is fabricated for performance tests. Open-loop tests show that the maximum static and dynamic cross couplings between the two linear motions are below 0.5% and -45 dB, which are low enough to utilize single-input-single-output control strategies. Finally, according to the identified dynamic model, an inversion-based feedforward controller in conjunction with a proportional-integral-derivative controller is applied to compensate for the nonlinearities and uncertainties. The experimental results show that good positioning and tracking performances are achieved, which verifies the effectiveness of the proposed mechanism and controller design. The resonant frequencies of the loaded stage at 2 kg and 5 kg are 105 Hz and 68 Hz, respectively. Therefore, the performance of the stage is reasonably good in terms of a 200 N load capacity. © 2012 American Institute of Physics
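The control structure named in the abstract, an inversion-based feedforward term combined with PID feedback, can be sketched for a toy discrete-time plant. The first-order model and the gains below are illustrative assumptions, not the identified dynamics or tuning of the actual stage:

```python
def track(ref, a=0.9, b=0.1, kp=0.5, ki=0.05, kd=0.0, dt=1.0):
    """Track the reference list 'ref' for the toy plant
    x[k+1] = a*x[k] + b*u[k], using inversion-based feedforward
    plus PID feedback (illustrative model and gains)."""
    x, integ, prev_err, out = 0.0, 0.0, 0.0, []
    for r_now, r_next in zip(ref, ref[1:] + ref[-1:]):
        u_ff = (r_next - a * r_now) / b        # exact inversion of the toy plant
        err = r_now - x
        integ += err * dt
        u_fb = kp * err + ki * integ + kd * (err - prev_err) / dt
        prev_err = err
        x = a * x + b * (u_ff + u_fb)          # simulate one plant step
        out.append(x)
    return out

y = track([1.0] * 50)  # step reference
```

The feedforward term does the bulk of the tracking by inverting the nominal model; the PID loop then only has to correct for model error, which is the division of labor the abstract describes.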

  16. Two-stage decision approach to material accounting

    International Nuclear Information System (INIS)

    Opelka, J.H.; Sutton, W.B.

    1982-01-01

    The validity of the alarm threshold 4σ has been checked for hypothetical large and small facilities using a two-stage decision model in which the diverter's strategic variable is the quantity diverted, and the defender's strategic variables are the alarm threshold and the effectiveness of the physical security and material control systems in the possible presence of a diverter. For large facilities, the material accounting system inherently appears not to be a particularly useful system for the deterrence of diversions, and essentially no improvement can be made by lowering the alarm threshold below 4σ. For small facilities, reduction of the threshold to 2σ or 3σ is a cost-effective change for the accounting system, but is probably less cost-effective than making improvements in the material control and physical security systems
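The trade-off examined here, how lowering the alarm threshold changes the chance of detecting a diversion, can be illustrated with a simple Gaussian model of the material-balance statistic. The model and the numbers below are illustrative assumptions, not the paper's formulation:

```python
from math import erf, sqrt

def alarm_probability(diverted, threshold_sigmas, sigma=1.0):
    """P(accounting alarm | amount diverted), assuming the material-balance
    statistic is Gaussian with standard deviation 'sigma' centered on the
    diverted amount (an illustrative model, not the paper's)."""
    z = (threshold_sigmas * sigma - diverted) / sigma
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))  # P(N(diverted, sigma^2) > threshold)

# Lowering the threshold from 4-sigma to 2-sigma raises the detection
# probability for a diversion of 2 sigma-equivalents of material.
p4 = alarm_probability(diverted=2.0, threshold_sigmas=4)
p2 = alarm_probability(diverted=2.0, threshold_sigmas=2)
```

In this toy model a 2σ diversion is caught about half the time at a 2σ threshold but only a few percent of the time at 4σ; the paper's point is that the false-alarm cost and the other defensive systems must be weighed against that gain.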

  17. Operation of a two-stage continuous fermentation process producing hydrogen and methane from artificial food wastes

    Energy Technology Data Exchange (ETDEWEB)

    Nagai, Kohki; Mizuno, Shiho; Umeda, Yoshito; Sakka, Makiko [Toho Gas Co., Ltd. (Japan); Osaka, Noriko [Tokyo Gas Co. Ltd. (Japan); Sakka, Kazuo [Mie Univ. (Japan)

    2010-07-01

    An anaerobic two-stage continuous fermentation process with combined thermophilic hydrogenogenic and methanogenic stages (two-stage fermentation process) was applied to artificial food wastes on a laboratory scale. In this report, organic loading rate (OLR) conditions for hydrogen fermentation were optimized before operating the two-stage fermentation process. The OLR was set at 11.2, 24.3, 35.2, 45.6, 56.1, and 67.3 g-CODCr L⁻¹ day⁻¹ at a temperature of 60 °C, pH 5.5, and 5.0% total solids. As a result, approximately 1.8-2.0 mol-H₂ mol-hexose⁻¹ was obtained at OLRs of 11.2-56.1 g-CODCr L⁻¹ day⁻¹. In contrast, the hydrogen yield at the OLR of 67.3 g-CODCr L⁻¹ day⁻¹ was inferred to decrease because of an increase in lactate concentration in the culture medium. The performance of the two-stage fermentation process was also evaluated over three months. The hydraulic retention time (HRT) of methane fermentation could be shortened to 5.0 days (under OLR 12.4 g-CODCr L⁻¹ day⁻¹ conditions) when the OLR of hydrogen fermentation was 44.0 g-CODCr L⁻¹ day⁻¹, and the average gasification efficiency of the two-stage fermentation process was 81% at that time. (orig.)

  18. Spontaneous Pushing in Lateral Position versus Valsalva Maneuver During Second Stage of Labor on Maternal and Fetal Outcomes: A Randomized Clinical Trial.

    Science.gov (United States)

    Vaziri, Farideh; Arzhe, Amene; Asadi, Nasrin; Pourahmad, Saeedeh; Moshfeghy, Zeinab

    2016-10-01

    There are concerns about the harmful effects of the Valsalva maneuver during the second stage of labor. This study compared the effects of spontaneous pushing in the lateral position with the Valsalva maneuver during the second stage of labor on maternal and fetal outcomes. Inclusion criteria in this randomized clinical trial conducted in Iran were as follows: nulliparous mothers, live fetus with vertex presentation, gestational age of 37 - 40 weeks, spontaneous labor, and no complications. The intervention group pushed spontaneously while they were in the lateral position, whereas the control group pushed using the Valsalva method while in the supine position at the onset of the second stage of labor. Maternal outcomes such as pain and fatigue severity and fetal outcomes such as pH and pO2 of the umbilical cord blood were measured. Data pertaining to 69 patients, divided into the intervention group (35 subjects) and control group (34 subjects), were analyzed statistically. The mean pain (7.80 ± 1.21 versus 9.05 ± 1.11) and fatigue scores (46.59 ± 21 versus 123.36 ± 43.20) of the two groups showed a statistically significant difference. Spontaneous pushing in the lateral position reduced fatigue and pain severity of the mothers. Also, it did not worsen fetal outcomes. Thus, it can be used as an alternative to the Valsalva maneuver.

  19. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    International Nuclear Information System (INIS)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L.; Vassiou, K.

    2015-01-01

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time-efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter- and intraobserver agreement was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± Standard Error) utilizing tenfold cross-validation.
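The agreement metrics named in the abstract (Hausdorff distance, average of minimum distance, and area overlap) can be sketched directly. The definitions below are common textbook forms, with area overlap taken as intersection over union; they may differ in detail from the study's exact variants:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets (N x 2, M x 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def avg_min_distance(a, b):
    """Symmetrized average of minimum point-to-set distances."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def area_overlap(m1, m2):
    """Area overlap measure of two boolean masks (intersection over union)."""
    return np.logical_and(m1, m2).sum() / np.logical_or(m1, m2).sum()

# Two segmentation contours: a unit square and the same square shifted by 0.5.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
shifted = square + [0.5, 0.0]
```

For the shifted square both distance metrics evaluate to 0.5, the shift amount, which is the kind of boundary disagreement these metrics quantify between two radiologists' outlines.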

  20. Insufficient sensitivity of joint aspiration during the two-stage exchange of the hip with spacers.

    Science.gov (United States)

    Boelch, Sebastian Philipp; Weissenberger, Manuel; Spohn, Frederik; Rudert, Maximilian; Luedemann, Martin

    2018-01-10

    Evaluation of infection persistence during the two-stage exchange of the hip is challenging. Joint aspiration before reconstruction is supposed to rule out infection persistence. The sensitivity and specificity of synovial fluid culture and synovial leucocyte count for detecting infection persistence during the two-stage exchange of the hip were evaluated. Ninety-two aspirations performed before planned joint reconstruction during two-stage exchange of the hip with spacers were retrospectively analyzed. The sensitivity and specificity of synovial fluid culture were 4.6 and 94.3%. The sensitivity and specificity of synovial leucocyte count at a cut-off value of 2000 cells/μl were 25.0 and 96.9%. C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) values were significantly higher before prosthesis removal and before reconstruction or spacer exchange (p = 0.00; p = 0.013 and p = 0.039; p = 0.002) in the infection persistence group. Receiver operating characteristic area under the curve values before prosthesis removal and reconstruction or spacer exchange were lower for ESR (0.516 and 0.635) than for CRP (0.720 and 0.671). Synovial fluid culture and leucocyte count cannot rule out infection persistence during the two-stage exchange of the hip.
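The reported sensitivity and specificity follow from a standard 2×2 confusion table. The counts below are hypothetical, chosen only to sum to the study's 92 aspirations and land near the reported values for synovial fluid culture; they are not the study's raw data:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split of 92 aspirations (22 persistent infections,
# 70 without persistence), illustrating the reported ~4.6% / ~94.3%:
sens, spec = sensitivity_specificity(tp=1, fn=21, tn=66, fp=4)
```

The near-zero sensitivity with high specificity is exactly the pattern the abstract describes: a positive culture is informative, but a negative one cannot rule out infection persistence.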