WorldWideScience

Sample records for two-stage probability sample

  1. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

The method of Bayesian inference is reexamined for its applicability and for the underlying assumptions required in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required for other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least-squares technique.
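The population variability step of the first scheme reduces to a simple computation: normalize each frequency-based failure-rate distribution and average them bin by bin with weights. A minimal sketch follows; the bin counts and the 3:1 weights are invented for illustration, not values from the paper.

```python
def population_variability(freq_dists, weights):
    """Weighted average of normalized frequency distributions, bin by bin."""
    norm = [[c / sum(d) for c in d] for d in freq_dists]
    total_w = sum(weights)
    n_bins = len(norm[0])
    return [sum(w * d[i] for w, d in zip(weights, norm)) / total_w
            for i in range(n_bins)]

# Two hypothetical frequency distributions over four failure-rate bins,
# weighted 3:1.
pvd = population_variability([[2, 5, 2, 1], [1, 3, 4, 2]], weights=[3, 1])
assert abs(sum(pvd) - 1.0) < 1e-12  # result is still a probability distribution
```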

  2. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
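A minimal sketch of the two-stage idea, assuming a fixed integer-decimation first stage and an arbitrary-ratio second stage done here by plain linear interpolation (the actual system's filter design is not described in the record):

```python
def decimate(x, m):
    """Stage 1: fixed-rate reduction, keeping every m-th sample
    (a real receiver would low-pass filter before this step)."""
    return x[::m]

def resample_linear(x, ratio):
    """Stage 2: arbitrary-ratio conversion by linear interpolation;
    ratio = output rate / input rate, not necessarily rational."""
    n_out = int(len(x) * ratio)
    out = []
    for k in range(n_out):
        t = k / ratio                  # fractional index into x
        i = int(t)
        frac = t - i
        if i + 1 < len(x):
            out.append((1 - frac) * x[i] + frac * x[i + 1])
        else:
            out.append(x[i])
    return out

x = [float(i) for i in range(100)]     # ramp sampled at the input rate
stage1 = decimate(x, 2)                # fixed stage: input rate / 2
ratio = 0.5 ** 0.5                     # an irrational output/input ratio
y = resample_linear(stage1, ratio)
```

On a linear ramp the interpolation is exact, which makes the sketch easy to check by hand.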

  3. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which treats the treatment effect as a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
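The interim Bayesian ingredient of such a design can be illustrated with a short Monte Carlo sketch: under independent Beta(1,1) priors, estimate the posterior probability that the treatment response rate exceeds control. A reestimation rule of the kind described above would act on a quantity like this; the interim counts and priors here are invented, not taken from the paper.

```python
# Posterior probability of treatment superiority from interim two-arm
# binary data, via Monte Carlo over independent Beta posteriors.
import random

random.seed(0)

def post_prob_superior(s1, n1, s2, n2, draws=20_000):
    """P(p_treat > p_control | data) under independent Beta(1,1) priors."""
    wins = 0
    for _ in range(draws):
        p1 = random.betavariate(1 + s1, 1 + n1 - s1)   # treatment posterior
        p2 = random.betavariate(1 + s2, 1 + n2 - s2)   # control posterior
        wins += p1 > p2
    return wins / draws

# Hypothetical interim data: 14/20 responses on treatment vs 8/20 on control.
prob = post_prob_superior(14, 20, 8, 20)
```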

  4. Evaluation of single and two-stage adaptive sampling designs for estimation of density and abundance of freshwater mussels in a large river

    Science.gov (United States)

    Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.

    2011-01-01

Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations that varied in density and degree of spatial clustering. Because of the logistics and costs of large-river sampling and the spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate at which occupied quadrats were encountered. Occupied units had a higher probability of selection under adaptive designs than under conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on sampling designs for freshwater mussels in the UMR and, presumably, other large rivers.
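The kind of simulation described can be sketched compactly: place a clustered population on a grid of quadrats, repeatedly draw simple random samples of quadrats, and summarize precision as the CV of the density estimate. The grid size, clustering pattern, and sample size below are illustrative choices, not the study's settings.

```python
import random, statistics

random.seed(1)
N_QUADRATS, CLUSTERS, PER_CLUSTER = 400, 10, 30

# Build a clustered population: each cluster scatters its individuals
# over a few neighbouring quadrats.
counts = [0] * N_QUADRATS
for _ in range(CLUSTERS):
    centre = random.randrange(N_QUADRATS)
    for _ in range(PER_CLUSTER):
        cell = min(N_QUADRATS - 1, max(0, centre + random.randint(-3, 3)))
        counts[cell] += 1

def density_estimate(sample_size):
    """Mean count per quadrat from a simple random sample of quadrats."""
    return statistics.mean(random.sample(counts, sample_size))

# Precision (CV) of the estimator at a fixed sample size of 40 quadrats.
estimates = [density_estimate(40) for _ in range(500)]
cv = statistics.stdev(estimates) / statistics.mean(estimates)
```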

  5. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.
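One of the interim quantities mentioned above, the proportion of true null hypotheses, can be estimated from first-stage p-values with a simple Storey-type estimator: count p-values above a threshold lambda, where true nulls are uniform. The simulated p-values and the 80/20 null/alternative split below are invented for illustration.

```python
import random

random.seed(2)
m = 1000
# Interim p-values: 800 true nulls (uniform on [0,1]) and 200
# alternatives (concentrated near zero).
pvals = [random.random() for _ in range(800)] + \
        [random.random() * 0.05 for _ in range(200)]

# Storey-type estimate at lambda = 0.5: nulls land above lambda with
# probability 1 - lambda, alternatives essentially never do.
lam = 0.5
pi0_hat = sum(p > lam for p in pvals) / ((1 - lam) * m)
```

A second-stage sample size rule would then scale effort to hit the target overall power given `pi0_hat`.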

  6. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

Full Text Available We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
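The SAA step is the core mechanism: replace an expected cost E[c(x, D)] by its average over sampled demand scenarios, then minimize over the first-stage decision x. The newsvendor-style cost below is a minimal stand-in for the paper's supply-chain model; all parameters are illustrative.

```python
import random

random.seed(3)
HOLD, SHORT = 1.0, 4.0          # unit overstock / understock costs (invented)

def cost(x, d):
    """Cost of ordering x units of a perishable product when demand is d."""
    return HOLD * max(x - d, 0) + SHORT * max(d - x, 0)

# Sampled demand scenarios standing in for the true distribution.
demand = [random.uniform(50, 150) for _ in range(5000)]

def saa_objective(x):
    """Sample average approximation of E[cost(x, D)]."""
    return sum(cost(x, d) for d in demand) / len(demand)

best_x = min(range(50, 151), key=saa_objective)
```

For uniform demand on [50, 150] the true optimum is the 0.8-quantile (SHORT / (HOLD + SHORT)), i.e. near 130, so the SAA solution should land close to it.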

  7. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic and regression analysis, a global sensitivity analysis examines the interaction of all parameters, yielding a two-stage method to estimate the impact of the relevant parameters when their values are known only imprecisely and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  8. Sampling little fish in big rivers: Larval fish detection probabilities in two Lake Erie tributaries and implications for sampling effort and abundance indices

    Science.gov (United States)

    Pritt, Jeremy J.; DuFour, Mark R.; Mayer, Christine M.; Roseman, Edward F.; DeBruyne, Robin L.

    2014-01-01

    Larval fish are frequently sampled in coastal tributaries to determine factors affecting recruitment, evaluate spawning success, and estimate production from spawning habitats. Imperfect detection of larvae is common, because larval fish are small and unevenly distributed in space and time, and coastal tributaries are often large and heterogeneous. We estimated detection probabilities of larval fish from several taxa in the Maumee and Detroit rivers, the two largest tributaries of Lake Erie. We then demonstrated how accounting for imperfect detection influenced (1) the probability of observing taxa as present relative to sampling effort and (2) abundance indices for larval fish of two Detroit River species. We found that detection probabilities ranged from 0.09 to 0.91 but were always less than 1.0, indicating that imperfect detection is common among taxa and between systems. In general, taxa with high fecundities, small larval length at hatching, and no nesting behaviors had the highest detection probabilities. Also, detection probabilities were higher in the Maumee River than in the Detroit River. Accounting for imperfect detection produced up to fourfold increases in abundance indices for Lake Whitefish Coregonus clupeaformis and Gizzard Shad Dorosoma cepedianum. The effect of accounting for imperfect detection in abundance indices was greatest during periods of low abundance for both species. Detection information can be used to determine the appropriate level of sampling effort for larval fishes and may improve management and conservation decisions based on larval fish data.
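The detection probabilities above imply a direct effort calculation: if a single sample detects a taxon with probability p, the chance of at least one detection in n independent samples is 1 - (1 - p)^n, so the effort needed for a target detection level follows immediately. The p values below span the 0.09-0.91 range reported in the abstract; the 0.95 target is an illustrative choice.

```python
import math

def samples_needed(p, target=0.95):
    """Smallest n with cumulative detection 1 - (1-p)^n >= target."""
    return math.ceil(math.log(1 - target) / math.log(1 - p))

for p in (0.09, 0.5, 0.91):
    n = samples_needed(p)
    assert 1 - (1 - p) ** n >= 0.95   # target is actually met at n
```

A taxon detected 9% of the time per sample needs roughly an order of magnitude more effort than one detected 91% of the time.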

  9. Relative Efficiencies of a Three-Stage Versus a Two-Stage Sample Design For a New NLS Cohort Study. 22U-884-38.

    Science.gov (United States)

    Folsom, R. E.; Weber, J. H.

    Two sampling designs were compared for the planned 1978 national longitudinal survey of high school seniors with respect to statistical efficiency and cost. The 1972 survey used a stratified two-stage sample of high schools and seniors within schools. In order to minimize interviewer travel costs, an alternate sampling design was proposed,…

  10. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals, assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.

  11. Probability sampling in legal cases: Kansas cellphone users

    Science.gov (United States)

    Kadane, Joseph B.

    2012-10-01

    Probability sampling is a standard statistical technique. This article introduces the basic ideas of probability sampling, and shows in detail how probability sampling was used in a particular legal case.

  12. Probability Judgements in Multi-Stage Problems : Experimental Evidence of Systematic Biases

    NARCIS (Netherlands)

    Gneezy, U.

    1996-01-01

    We report empirical evidence that in problems of random walk with positive drift, bounded rationality leads individuals to under-estimate the probability of success in the long run.In particular, individuals who were given the stage by stage probability distribution failed to aggregate this
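The aggregation the subjects reportedly failed to perform is easy to state: in a random walk with positive drift, the per-stage success probability understates the chance of being ahead after many stages. A small illustration with an invented per-stage probability of 0.6:

```python
from math import comb

def prob_ahead(p, stages):
    """P(more successes than failures in `stages` Bernoulli(p) steps)."""
    return sum(comb(stages, k) * p**k * (1 - p)**(stages - k)
               for k in range(stages // 2 + 1, stages + 1))

single = prob_ahead(0.6, 1)     # one stage: just the per-stage probability
long_run = prob_ahead(0.6, 25)  # drift dominates over many stages
```

Judging the long run by the single-stage probability (0.6) underestimates the true long-run success probability, which here rises well above 0.8.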

  13. Secondary School Students' Reasoning about Conditional Probability, Samples, and Sampling Procedures

    Science.gov (United States)

    Prodromou, Theodosia

    2016-01-01

    In the Australian mathematics curriculum, Year 12 students (aged 16-17) are asked to solve conditional probability problems that involve the representation of the problem situation with two-way tables or three-dimensional diagrams and consider sampling procedures that result in different correct answers. In a small exploratory study, we…

  14. Comparison of Four Estimators under sampling without Replacement

    African Journals Online (AJOL)

    The results were obtained using a program written in Microsoft Visual C++ programming language. It was observed that the two-stage sampling under unequal probabilities without replacement is always better than the other three estimators considered. Keywords: Unequal probability sampling, two-stage sampling, ...

  15. Probability analysis of MCO over-pressurization during staging

    International Nuclear Information System (INIS)

    Pajunen, A.L.

    1997-01-01

    The purpose of this calculation is to determine the probability of Multi-Canister Overpacks (MCOs) over-pressurizing during staging at the Canister Storage Building (CSB). Pressurization of an MCO during staging is dependent upon changes to the MCO gas temperature and the build-up of reaction products during the staging period. These effects are predominantly limited by the amount of water that remains in the MCO following cold vacuum drying that is available for reaction during staging conditions. Because of the potential for increased pressure within an MCO, provisions for a filtered pressure relief valve and rupture disk have been incorporated into the MCO design. This calculation provides an estimate of the frequency that an MCO will contain enough water to pressurize beyond the limits of these design features. The results of this calculation will be used in support of further safety analyses and operational planning efforts. Under the bounding steady state CSB condition assumed for this analysis, an MCO must contain less than 1.6 kg (3.7 lbm) of water available for reaction to preclude actuation of the pressure relief valve at 100 psid. To preclude actuation of the MCO rupture disk at 150 psid, an MCO must contain less than 2.5 kg (5.5 lbm) of water available for reaction. These limits are based on the assumption that hydrogen generated by uranium-water reactions is the sole source of gas produced within the MCO and that hydrates in fuel particulate are the primary source of water available for reactions during staging conditions. The results of this analysis conclude that the probability of the hydrate water content of an MCO exceeding 1.6 kg is 0.08 and the probability that it will exceed 2.5 kg is 0.01. This implies that approximately 32 of 400 staged MCOs may experience pressurization to the point where the pressure relief valve actuates. 
In the event that an MCO pressure relief valve fails to open, the probability is 1 in 100 that the MCO would experience
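The stated probabilities translate directly into expected counts for the 400 staged MCOs; this reproduces the 32-canister figure quoted above, and the corresponding rupture-disk count follows from the same arithmetic (the 0.01 figure is from the abstract; the resulting count of 4 is derived here, not stated there).

```python
N_MCOS = 400
p_relief = 0.08    # P(water > 1.6 kg): relief valve actuates at 100 psid
p_rupture = 0.01   # P(water > 2.5 kg): rupture disk threshold at 150 psid

expected_vent = round(N_MCOS * p_relief)     # MCOs expected to actuate the valve
expected_rupture_risk = round(N_MCOS * p_rupture)
assert expected_vent == 32                   # matches the figure in the abstract
```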

  16. A method to combine non-probability sample data with probability sample data in estimating spatial means of environmental variables

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    2003-01-01

    In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the pi estimator that uses the probability sample can be

  17. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Full Text Available Supply Chain Management, which is concerned with material and information flows between facilities and the final customers, has been considered the most popular operations strategy for improving organizational competitiveness nowadays. With the advanced development of computer technology, it is getting easier to derive an acceptance sampling plan satisfying both the producer's and consumer's quality and risk requirements. However, all the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain that minimizes the producer's and the consumer's total quality cost while satisfying both the producer's and consumer's quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal failure, and postsale failure costs also have an effect on the optimal sampling plan.
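The producer's and consumer's risks that such a plan must satisfy come from the binomial operating characteristic curve: a plan (n, c) accepts a lot when at most c defectives appear in a sample of n. The plan (n=50, c=2) and the two quality levels below are illustrative, not outputs of the paper's economic model.

```python
from math import comb

def p_accept(n, c, p):
    """P(accept lot | true defect rate p) = P(Binomial(n, p) <= c)."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

plan_n, plan_c = 50, 2
good = p_accept(plan_n, plan_c, 0.01)   # acceptance at good quality (1% defective)
bad = p_accept(plan_n, plan_c, 0.10)    # acceptance at poor quality (10% defective)
```

A plan is admissible when `good` exceeds 1 minus the producer's risk and `bad` stays below the consumer's risk; an economic design then picks the cheapest admissible (n, c).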

  18. Evaluating the Validity of a Two-stage Sample in a Birth Cohort Established from Administrative Databases.

    Science.gov (United States)

    El-Zein, Mariam; Conus, Florence; Benedetti, Andrea; Parent, Marie-Elise; Rousseau, Marie-Claude

    2016-01-01

When using administrative databases for epidemiologic research, a subsample of subjects can be interviewed, eliciting information on undocumented confounders. This article presents a thorough investigation of the validity of a two-stage sample encompassing an assessment of nonparticipation and quantification of the extent of bias. Established through record linkage of administrative databases, the Québec Birth Cohort on Immunity and Health (n = 81,496) aims to study the association between Bacillus Calmette-Guérin vaccination and asthma. Among 76,623 subjects classified in four Bacillus Calmette-Guérin-asthma strata, a two-stage sampling strategy with a balanced design was used to randomly select individuals for interviews. We compared stratum-specific sociodemographic characteristics and healthcare utilization of stage 2 participants (n = 1,643) with those of eligible nonparticipants (n = 74,980) and nonrespondents (n = 3,157). We used logistic regression to determine whether participation varied across strata according to these characteristics. The effect of nonparticipation was described by the relative odds ratio (ROR = OR(participants)/OR(source population)) for the association between sociodemographic characteristics and asthma. Parental age at childbirth, area of residence, family income, and healthcare utilization were comparable between groups. Participants were slightly more likely to be women and have a mother born in Québec. Participation did not vary across strata by sex, parental birthplace, or material and social deprivation. Estimates were not biased by nonparticipation; most RORs were below one and bias never exceeded 20%. Our analyses evaluate and provide a detailed demonstration of the validity of a two-stage sample for researchers assembling similar research infrastructures.

  19. Probability Issues in without Replacement Sampling

    Science.gov (United States)

    Joarder, A. H.; Al-Sabah, W. S.

    2007-01-01

    Sampling without replacement is an important aspect in teaching conditional probabilities in elementary statistics courses. Different methods proposed in different texts for calculating probabilities of events in this context are reviewed and their relative merits and limitations in applications are pinpointed. An alternative representation of…

  20. A country-wide probability sample of public attitudes toward stuttering in Portugal.

    Science.gov (United States)

    Valente, Ana Rita S; St Louis, Kenneth O; Leahy, Margaret; Hall, Andreia; Jesus, Luis M T

    2017-06-01

    Negative public attitudes toward stuttering have been widely reported, although differences among countries and regions exist. Clear reasons for these differences remain obscure. Published research is unavailable on public attitudes toward stuttering in Portugal as well as a representative sample that explores stuttering attitudes in an entire country. This study sought to (a) determine the feasibility of a country-wide probability sampling scheme to measure public stuttering attitudes in Portugal using a standard instrument (the Public Opinion Survey of Human Attributes-Stuttering [POSHA-S]) and (b) identify demographic variables that predict Portuguese attitudes. The POSHA-S was translated to European Portuguese through a five-step process. Thereafter, a local administrative office-based, three-stage, cluster, probability sampling scheme was carried out to obtain 311 adult respondents who filled out the questionnaire. The Portuguese population held stuttering attitudes that were generally within the average range of those observed from numerous previous POSHA-S samples. Demographic variables that predicted more versus less positive stuttering attitudes were respondents' age, region of the country, years of school completed, working situation, and number of languages spoken. Non-predicting variables were respondents' sex, marital status, and parental status. A local administrative office-based, probability sampling scheme generated a respondent profile similar to census data and indicated that Portuguese attitudes are generally typical. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Triangulation based inclusion probabilities: a design-unbiased sampling approach

    OpenAIRE

    Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph

    2011-01-01

    A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...

  2. Public Attitudes toward Stuttering in Turkey: Probability versus Convenience Sampling

    Science.gov (United States)

    Ozdemir, R. Sertan; St. Louis, Kenneth O.; Topbas, Seyhun

    2011-01-01

    Purpose: A Turkish translation of the "Public Opinion Survey of Human Attributes-Stuttering" ("POSHA-S") was used to compare probability versus convenience sampling to measure public attitudes toward stuttering. Method: A convenience sample of adults in Eskisehir, Turkey was compared with two replicates of a school-based,…

  3. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
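The "two-stage" procedure discussed above, in its simplest fixed-effect form, is just inverse-variance pooling of per-study estimates. The three study estimates and standard errors below are made up for illustration.

```python
# Stage 1 (already done per study): an effect estimate and standard error each.
est = [0.30, 0.10, 0.25]          # per-study treatment effects (invented)
se  = [0.12, 0.15, 0.10]

# Stage 2: combine with inverse-variance weights.
w = [1 / s**2 for s in se]
pooled = sum(wi * ei for wi, ei in zip(w, est)) / sum(w)
pooled_se = (1 / sum(w)) ** 0.5
```

Fitting the corresponding one-stage Gaussian model under the same assumptions gives approximately the same `pooled` and `pooled_se`, which is the paper's point about the number of stages being irrelevant for precision.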

  4. Probability sampling design in ethnobotanical surveys of medicinal plants

    Directory of Open Access Journals (Sweden)

    Mariano Martinez Espinosa

    2012-07-01

Full Text Available Non-probability sampling design can be used in ethnobotanical surveys of medicinal plants. However, this method does not allow statistical inferences to be made from the data generated. The aim of this paper is to present a probability sampling design that is applicable in ethnobotanical studies of medicinal plants. The sampling design employed in the research titled "Ethnobotanical knowledge of medicinal plants used by traditional communities of Nossa Senhora Aparecida do Chumbo district (NSACD), Poconé, Mato Grosso, Brazil" was used as a case study. Probability sampling methods (simple random and stratified sampling) were used in this study. In order to determine the sample size, the following data were considered: population size (N) of 1179 families; confidence coefficient of 95%; sample error (d) of 0.05; and a proportion (p) of 0.5. The application of this sampling method resulted in a sample size (n) of at least 290 families in the district. The present study concludes that probability sampling methods necessarily have to be employed in ethnobotanical studies of medicinal plants, particularly where statistical inferences have to be made using the data obtained. This can be achieved by applying different existing probability sampling methods, or better still, a combination of such methods.
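The sample size reported above can be reproduced with the standard finite-population formula for estimating a proportion, n = N z² p(1-p) / (d²(N-1) + z² p(1-p)), using the parameters stated in the abstract (z = 1.96 for a 95% confidence coefficient):

```python
import math

def sample_size(N, d=0.05, p=0.5, z=1.96):
    """Finite-population sample size for estimating a proportion."""
    num = N * z**2 * p * (1 - p)
    den = d**2 * (N - 1) + z**2 * p * (1 - p)
    return math.ceil(num / den)

n = sample_size(N=1179)   # parameters as stated in the abstract
assert n == 290           # matches the "at least 290 families" figure
```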

  5. Propagating Mixed Uncertainties in Cyber Attacker Payoffs: Exploration of Two-Phase Monte Carlo Sampling and Probability Bounds Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Chatterjee, Samrat; Tipireddy, Ramakrishna; Oster, Matthew R.; Halappanavar, Mahantesh

    2016-09-16

    Securing cyber-systems on a continual basis against a multitude of adverse events is a challenging undertaking. Game-theoretic approaches, that model actions of strategic decision-makers, are increasingly being applied to address cybersecurity resource allocation challenges. Such game-based models account for multiple player actions and represent cyber attacker payoffs mostly as point utility estimates. Since a cyber-attacker’s payoff generation mechanism is largely unknown, appropriate representation and propagation of uncertainty is a critical task. In this paper we expand on prior work and focus on operationalizing the probabilistic uncertainty quantification framework, for a notional cyber system, through: 1) representation of uncertain attacker and system-related modeling variables as probability distributions and mathematical intervals, and 2) exploration of uncertainty propagation techniques including two-phase Monte Carlo sampling and probability bounds analysis.
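The two-phase Monte Carlo idea can be sketched in a few lines: an outer loop samples epistemic uncertainty (here, a payoff mean known only to lie in an interval), and an inner loop samples aleatory variability around each sampled mean. The attacker-payoff model and all numbers are entirely notional.

```python
import random, statistics

random.seed(4)

def inner_phase(mu, runs=200):
    """Aleatory loop: payoff varies randomly about a fixed mean mu."""
    return statistics.mean(random.gauss(mu, 5.0) for _ in range(runs))

# Epistemic loop: the true mean payoff is only known to lie in [20, 40].
outer = [inner_phase(random.uniform(20, 40)) for _ in range(300)]
lo, hi = min(outer), max(outer)
```

The output is a band of plausible mean payoffs rather than a single point estimate, which is what distinguishes this treatment from point-utility game models.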

  6. Generalized Probability-Probability Plots

    NARCIS (Netherlands)

    Mushkudiani, N.A.; Einmahl, J.H.J.

    2004-01-01

    We introduce generalized Probability-Probability (P-P) plots in order to study the one-sample goodness-of-fit problem and the two-sample problem, for real valued data.These plots, that are constructed by indexing with the class of closed intervals, globally preserve the properties of classical P-P

  7. Acceptance Probability (Pa) Analysis for Process Validation Lifecycle Stages.

    Science.gov (United States)

    Alsmeyer, Daniel; Pazhayattil, Ajay; Chen, Shu; Munaretto, Francesco; Hye, Maksuda; Sanghvi, Pradeep

    2016-04-01

This paper introduces an innovative statistical approach towards understanding how variation impacts the acceptance criteria of quality attributes. Because of more complex stage-wise acceptance criteria, traditional process capability measures are inadequate for general application in the pharmaceutical industry. The probability of acceptance concept provides a clear measure, derived from the specific acceptance criteria for each quality attribute. In line with the 2011 FDA Guidance, this approach systematically evaluates data and scientifically establishes evidence that a process is capable of consistently delivering quality product. The probability of acceptance provides a direct and readily understandable indication of product risk. As with traditional capability indices, the acceptance probability approach assumes that underlying data distributions are normal. The computational solutions for dosage uniformity and dissolution acceptance criteria are readily applicable. For dosage uniformity, the expected AV range may be determined using the s_lo and s_hi values along with worst-case estimates of the mean. This approach permits a risk-based assessment of future batch performance of the critical quality attributes. The concept is also readily applicable to sterile/non-sterile liquid dose products. Quality attributes such as deliverable volume and assay per spray have stage-wise acceptance criteria that can be converted into an acceptance probability. Accepted statistical guidelines regard processes with Cpk > 1.33 as performing well within statistical control and those with Cpk < 1.33 as needing improvement. A Cpk > 1.33 is associated with a centered process that will statistically produce less than 63 defective units per million. This is equivalent to an acceptance probability of >99.99%.
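The 63-per-million figure follows from the normal model: Cpk = 1.33 puts each specification limit 3 × 1.33 = 4 sigma from a centered mean, and the two-sided defect rate is 2·Φ(-4).

```python
import math

def defects_per_million(cpk):
    """Two-sided normal defect rate for a centered process at a given Cpk."""
    z = 3.0 * cpk                              # sigmas to each spec limit
    tail = 0.5 * math.erfc(z / math.sqrt(2))   # Phi(-z)
    return 2 * tail * 1e6                      # both tails, per million units

dpm = defects_per_million(4 / 3)               # Cpk = 1.33...
```

The acceptance probability is then 1 - dpm/1e6, i.e. just over 99.99%, consistent with the abstract.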

  8. A Tale of Two Probabilities

    Science.gov (United States)

    Falk, Ruma; Kendig, Keith

    2013-01-01

    Two contestants debate the notorious probability problem of the sex of the second child. The conclusions boil down to explication of the underlying scenarios and assumptions. Basic principles of probability theory are highlighted.

  9. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  10. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    Science.gov (United States)

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology, to reduce the number of patients placed on ineffective experimental therapies. Recently Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies whose actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihoods. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method is difficult to apply, the estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan; reported p-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
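
    For orientation, the exact tail probability under a Simon design, ordering outcomes by total responses (the common convention that the authors' conditional-likelihood ordering refines), can be sketched as follows; the design constants in the usage line are illustrative:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def simon_p_value(x_total, n1, r1, n2, p0):
    """Exact P(at least x_total total responses | rate = p0) under a Simon
    two-stage design that continues to stage 2 only if stage-1 responses
    exceed r1. Outcomes are ordered by total responses; the paper instead
    orders sample paths by conditional likelihood."""
    p = 0.0
    for x1 in range(r1 + 1, n1 + 1):          # paths that reach stage 2
        need = max(0, x_total - x1)           # stage-2 responses still needed
        tail2 = sum(binom_pmf(k, n2, p0) for k in range(need, n2 + 1))
        p += binom_pmf(x1, n1, p0) * tail2
    return p

# Illustrative design: n1=10, r1=1, n2=19, null response rate p0=0.1.
print(simon_p_value(6, 10, 1, 19, 0.1))
```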

  11. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially available electrolyser at a large 50 A current, and then in a newly designed electrolyser, which avoids the memory effect, at a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained using the commercially available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L, half or less of the concentrations measured in samples collected at the same sites in 1992. (author)

  12. Remote Sensing Based Two-Stage Sampling for Accuracy Assessment and Area Estimation of Land Cover Changes

    Directory of Open Access Journals (Sweden)

    Heinz Gallaun

    2015-09-01

    Land cover change processes are accelerating at the regional to global level. The remote sensing community has developed reliable and robust methods for wall-to-wall mapping of land cover changes; however, land cover changes often occur at rates below the mapping errors. In the current publication, we propose a cost-effective approach to complement wall-to-wall land cover change maps with a sampling approach, which is used for accuracy assessment and accurate estimation of the areas undergoing land cover change, including provision of confidence intervals. We propose a two-stage sampling approach in order to keep the accuracy, efficiency, and effort of the estimations in balance. Stratification is applied in both stages in order to gain control over the sample size allocated to rare land cover change classes on the one hand and the cost constraints for very high resolution reference imagery on the other. Bootstrapping is used to complement the accuracy measures and the area estimates with confidence intervals. The area estimates and accuracy assessments rely on a high-quality visual interpretation of the sampling units based on time series of satellite imagery. To demonstrate the cost-effective operational applicability of the approach, we applied it to the assessment of deforestation in an area characterized by frequent cloud cover and a very low change rate in the Republic of Congo, which makes accurate deforestation monitoring particularly challenging.
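
    The bootstrap confidence intervals mentioned above can be illustrated with a percentile bootstrap on a simple set of change/no-change reference labels; a real estimator would also carry the two-stage stratified weights, which are omitted here:

```python
import random

def bootstrap_ci(labels, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI for the proportion of 'change' units."""
    rng = random.Random(seed)
    n = len(labels)
    # Resample with replacement and collect the sorted replicate proportions.
    props = sorted(sum(rng.choices(labels, k=n)) / n for _ in range(n_boot))
    lo = props[int(n_boot * alpha / 2)]
    hi = props[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

# 500 reference units, 5% of which were interpreted as deforestation.
labels = [1] * 25 + [0] * 475
print(bootstrap_ci(labels))
```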

  13. Sampling and estimating recreational use.

    Science.gov (United States)

    Timothy G. Gregoire; Gregory J. Buhyoff

    1999-01-01

    Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.

  14. Sampling, Probability Models and Statistical Reasoning - Statistical Inference

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 1; Issue 5. Sampling, Probability Models and Statistical Reasoning Statistical Inference. Mohan Delampady V R Padmawar. General Article Volume 1 Issue 5 May 1996 pp 49-58 ...

  15. Failure Probability Estimation Using Asymptotic Sampling and Its Dependence upon the Selected Sampling Scheme

    Directory of Open Access Journals (Sweden)

    Martinásková Magdalena

    2017-12-01

    The article examines the use of Asymptotic Sampling (AS) for the estimation of failure probability. The AS algorithm requires samples of multidimensional Gaussian random vectors, which may be obtained by many alternative means that influence the performance of the AS method. Several reliability problems (test functions) have been selected in order to test AS with various sampling schemes: (i) Monte Carlo designs; (ii) LHS designs optimized using the Periodic Audze-Eglājs (PAE) criterion; (iii) designs prepared using Sobol' sequences. All results are compared with the exact failure probability value.
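
    Bucher's Asymptotic Sampling idea can be sketched for a linear limit state in standard normal space: Monte Carlo runs at inflated standard deviations 1/f (f < 1) make failures frequent, the scaled reliability indices are fitted with beta(f) = A*f + B/f, and the fit is extrapolated to f = 1. The limit state and support points below are illustrative:

```python
import random
from math import erfc, sqrt

def norm_sf(x):
    return 0.5 * erfc(x / sqrt(2))  # P(Z > x)

def inv_phi(p):
    # Crude bisection inverse of the standard normal CDF.
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 1 - norm_sf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def asymptotic_sampling_pf(g, dim, supports=(0.4, 0.6), n=20000, seed=0):
    rng = random.Random(seed)
    betas = []
    for f in supports:
        # Monte Carlo with std inflated to 1/f makes failures frequent.
        fails = sum(g([rng.gauss(0, 1 / f) for _ in range(dim)]) < 0
                    for _ in range(n))
        betas.append(-inv_phi(fails / n))     # beta(f) = -Phi^{-1}(p_f(f))
    # Fit beta(f) = A*f + B/f through the two support points (Cramer's rule).
    (f1, f2), (b1, b2) = supports, betas
    det = f1 / f2 - f2 / f1
    A = (b1 / f2 - b2 / f1) / det
    B = (b2 * f1 - b1 * f2) / det
    return norm_sf(A + B)                     # extrapolate to f = 1

# Linear limit state g(x) = 4 - (x1 + x2)/sqrt(2): exact pf = Phi(-4) ~ 3.2e-5.
g = lambda x: 4 - (x[0] + x[1]) / sqrt(2)
print(asymptotic_sampling_pf(g, dim=2))
```

For this linear case beta(f) is exactly linear in f, so the extrapolation recovers the rare-event probability from two runs at failure rates crude Monte Carlo could actually resolve.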

  16. Sampling, Probability Models and Statistical Reasoning -RE ...

    Indian Academy of Sciences (India)

    random sampling allows data to be modelled with the help of probability ... g based on different trials to get an estimate of the experimental error. ... research interests lie in the .... if e is indeed the true value of the proportion of defectives in the.

  17. Don't spin the pen: two alternative methods for second-stage sampling in urban cluster surveys

    Directory of Open Access Journals (Sweden)

    Rose Angela MC

    2007-06-01

    In two-stage cluster surveys, the traditional method used in second-stage sampling (in which the first household in a cluster is selected) is time-consuming and may result in biased estimates of the indicator of interest. First, a random direction from the center of the cluster is selected, usually by spinning a pen. The houses along that direction are then counted out to the boundary of the cluster, and one is selected at random to be the first household surveyed. This process favors households towards the center of the cluster, but it could easily be improved. During a recent meningitis vaccination coverage survey in Maradi, Niger, we compared this method of first-household selection with two alternatives in urban zones: (1) using a grid superimposed on the map of the cluster area and randomly selecting an intersection; and (2) drawing the perimeter of the cluster area using a Global Positioning System (GPS) and randomly selecting one point within the perimeter. Although we only compared a limited number of clusters using each method, we found the sampling-grid method to be the fastest and easiest for field survey teams, although it does require a map of the area. Selecting a random GPS point was also found to be a good method, provided adequate training is available. Spinning the pen and counting households to the boundary was the most complicated and time-consuming. The two methods tested here represent simpler, quicker and potentially more robust alternatives to spinning the pen for cluster surveys in urban areas. However, in rural areas, these alternatives would favor initial household selection from lower-density (or even potentially empty) areas. Bearing in mind these limitations, as well as available resources and feasibility, investigators should choose the most appropriate method for their particular survey context.
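
    The second alternative above, drawing a random GPS point within the cluster perimeter, reduces to sampling a uniform point inside a polygon; a minimal sketch using rejection sampling from the bounding box (the perimeter coordinates are illustrative):

```python
import random

def point_in_polygon(x, y, poly):
    # Ray-casting test: count edge crossings of a horizontal ray from (x, y).
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def random_point_in_polygon(poly, rng=random):
    # Rejection sampling: draw from the bounding box until a point lands inside.
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    while True:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            return (x, y)

# Illustrative cluster perimeter (a unit square).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(random_point_in_polygon(square))
```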

  18. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite-impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly-developed system-test statistic validates the system under different computer-operating environments

  19. Adjuvant Chemotherapy Improves the Probability of Freedom From Recurrence in Patients With Resected Stage IB Lung Adenocarcinoma.

    Science.gov (United States)

    Hung, Jung-Jyh; Wu, Yu-Chung; Chou, Teh-Ying; Jeng, Wen-Juei; Yeh, Yi-Chen; Hsu, Wen-Hu

    2016-04-01

    The benefit of adjuvant chemotherapy remains controversial for patients with stage IB non-small-cell lung cancer (NSCLC). This study investigated the effect of adjuvant chemotherapy and the predictors of benefit from adjuvant chemotherapy in patients with stage IB lung adenocarcinoma. A total of 243 patients with completely resected pathologic stage IB lung adenocarcinoma were included in the study. Predictors of the benefits of improved overall survival (OS) or probability of freedom from recurrence (FFR) from platinum-based adjuvant chemotherapy in patients with resected stage IB lung adenocarcinoma were investigated. Among the 243 patients, 70 (28.8%) had received platinum-based doublet adjuvant chemotherapy. A micropapillary/solid-predominant pattern (versus an acinar/papillary-predominant pattern) was a significantly worse prognostic factor for probability of FFR (p = 0.033). Although adjuvant chemotherapy (versus surgical intervention alone) was not a significant prognostic factor for OS (p = 0.303), it was a significant prognostic factor for a better probability of FFR (p = 0.029) on multivariate analysis. In propensity-score-matched pairs, there was no significant difference in OS between patients who received adjuvant chemotherapy and those who did not (p = 0.386). Patients who received adjuvant chemotherapy had a significantly better probability of FFR than those who did not (p = 0.043). For patients with a predominantly micropapillary/solid pattern, adjuvant chemotherapy (p = 0.033) was a significant prognostic factor for a better probability of FFR on multivariate analysis. Adjuvant chemotherapy is a favorable prognostic factor for the probability of FFR in patients with stage IB lung adenocarcinoma, particularly in those with a micropapillary/solid-predominant pattern. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Assessment of heavy metals in Averrhoa bilimbi and A. carambola fruit samples at two developmental stages.

    Science.gov (United States)

    Soumya, S L; Nair, Bindu R

    2016-05-01

    Though the fruits of Averrhoa bilimbi and A. carambola are economically and medicinally important, they remain underutilized. The present study reports heavy metal quantitation in the fruit samples of A. bilimbi and A. carambola (Oxalidaceae), collected at two stages of maturity. Heavy metals are known to interfere with the functioning of vital cellular components. Although toxic, some elements are considered essential for human health, in trace quantities. Heavy metals such as Cr, Mn, Co, Cu, Zn, As, Se, Pb, and Cd were analyzed by atomic absorption spectroscopy (AAS). The samples under investigation included, A. bilimbi unripe (BU) and ripe (BR), A. carambola sour unripe (CSU) and ripe (CSR), and A. carambola sweet unripe (CTU) and ripe (CTR). Heavy metal analysis showed that relatively higher level of heavy metals was present in BR samples compared to the rest of the samples. The highest amount of As and Se were recorded in BU samples while Mn content was highest in CSU samples and Co in CSR. Least amounts of Cr, Zn, Se, Cd, and Pb were noted in CTU while, Mn, Cu, and As were least in CTR. Thus, the sweet types of A. carambola (CTU, CTR) had comparatively lower heavy metal content. There appears to be no reason for concern since different fruit samples of Averrhoa studied presently showed the presence of various heavy metals in trace quantities.

  1. Fixation probability in a two-locus intersexual selection model.

    Science.gov (United States)

    Durand, Guillermo; Lessard, Sabin

    2016-06-01

    We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Plant specification of a generic human-error data through a two-stage Bayesian approach

    International Nuclear Information System (INIS)

    Heising, C.D.; Patterson, E.I.

    1984-01-01

    Expert judgement concerning human performance in nuclear power plants is quantitatively coupled with actuarial data on such performance in order to derive plant-specific human-error rate probability distributions. The coupling procedure consists of a two-stage application of Bayes' theorem to information which is grouped by type. The first information type contains expert judgement concerning human performance at nuclear power plants in general. Data collected on human performance at a group of similar plants form the second information type. The third information type consists of data on human performance in a specific plant which has the same characteristics as the group members. The first and second information types are coupled in the first application of Bayes' theorem to derive a probability distribution for population performance. This distribution is then combined with the third information type in a second application of Bayes' theorem to determine a plant-specific human-error rate probability distribution. The two-stage Bayesian procedure thus provides a means to quantitatively couple sparse data with expert judgement in order to obtain a human performance probability distribution based upon available information. Example calculations for a group of like reactors are also given. (author)

  3. A methodology for more efficient tail area sampling with discrete probability distribution

    International Nuclear Information System (INIS)

    Park, Sang Ryeol; Lee, Byung Ho; Kim, Tae Woon

    1988-01-01

    The Monte Carlo method is commonly used to observe the overall distribution and to determine lower or upper bound values in statistical approaches when direct analytical calculation is unavailable. However, this method is not efficient when the tail area of a distribution is of concern. A new method, 'Two-Step Tail Area Sampling', is developed; it assumes a discrete probability distribution and samples only the tail area without distorting the overall distribution. The method uses a two-step sampling procedure. First, sampling is done at points separated by large intervals; second, sampling is done at points separated by small intervals around check points determined in the first step. Comparison with the Monte Carlo method shows that results obtained from the new method converge to the analytic value faster than the Monte Carlo method for the same number of calculations. The new method is applied to the DNBR (Departure from Nucleate Boiling Ratio) prediction problem in the design of pressurized light-water nuclear reactors.
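
    The two-step idea, a coarse pass to bracket the tail followed by a fine pass only where it matters, can be sketched for a standard normal tail probability; the grid step sizes are illustrative:

```python
from math import exp, pi, sqrt

def phi(x):
    # Standard normal density.
    return exp(-x * x / 2) / sqrt(2 * pi)

def tail_prob_two_step(t, x_max=10.0, coarse=1.0, fine=0.01):
    # Step 1: coarse scan to find where the density becomes negligible,
    # so the fine pass does not waste effort far beyond the tail.
    x = t
    while x < x_max and phi(x) > 1e-12:
        x += coarse
    # Step 2: fine trapezoidal integration over the bracketed tail only.
    total, u = 0.0, t
    while u < x:
        total += 0.5 * (phi(u) + phi(u + fine)) * fine
        u += fine
    return total

print(tail_prob_two_step(1.645))  # close to the one-sided 5% tail
```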

  4. Bounds for Tail Probabilities of the Sample Variance

    Directory of Open Access Journals (Sweden)

    Van Zuijlen M

    2009-01-01

    We provide bounds for tail probabilities of the sample variance. The bounds are expressed in terms of Hoeffding functions and are the sharpest known. They are designed with applications in auditing in mind, as well as in processing data related to the environment.

  5. SIMS analysis using a new novel sample stage

    International Nuclear Information System (INIS)

    Miwa, Shiro; Nomachi, Ichiro; Kitajima, Hideo

    2006-01-01

    We have developed a novel sample stage for Cameca IMS-series instruments that allows us to adjust the tilt of the sample holder and to vary the height of the sample surface from outside the vacuum chamber. A third function of the stage is the capability to cool samples to -150 °C using liquid nitrogen. Using this stage, we can measure line profiles of 10 mm in length without any variation in the secondary ion yields. By moving the sample surface toward the input lens, the primary ion beam is well focused when the energy of the primary ions is reduced. Sample cooling is useful for samples such as organic materials that are easily damaged by primary ions or electrons

  6. SIMS analysis using a new novel sample stage

    Energy Technology Data Exchange (ETDEWEB)

    Miwa, Shiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan)]. E-mail: Shiro.Miwa@jp.sony.com; Nomachi, Ichiro [Materials Analysis Lab., Sony Corporation, 4-16-1 Okata, Atsugi 243-0021 (Japan); Kitajima, Hideo [Nanotechnos Corp., 5-4-30 Nishihashimoto, Sagamihara 229-1131 (Japan)

    2006-07-30

    We have developed a novel sample stage for Cameca IMS-series instruments that allows us to adjust the tilt of the sample holder and to vary the height of the sample surface from outside the vacuum chamber. A third function of the stage is the capability to cool samples to -150 °C using liquid nitrogen. Using this stage, we can measure line profiles of 10 mm in length without any variation in the secondary ion yields. By moving the sample surface toward the input lens, the primary ion beam is well focused when the energy of the primary ions is reduced. Sample cooling is useful for samples such as organic materials that are easily damaged by primary ions or electrons.

  7. Two-slit experiment: quantum and classical probabilities

    International Nuclear Information System (INIS)

    Khrennikov, Andrei

    2015-01-01

    Inter-relation between quantum and classical probability models is one of the most fundamental problems of quantum foundations. Nowadays this problem also plays an important role in quantum technologies, in quantum cryptography and the theory of quantum random generators. In this letter, we compare the viewpoint of Richard Feynman that the behavior of quantum particles cannot be described by classical probability theory with the viewpoint that quantum-classical inter-relation is more complicated (cf, in particular, with the tomographic model of quantum mechanics developed in detail by Vladimir Man'ko). As a basic example, we consider the two-slit experiment, which played a crucial role in quantum foundational debates at the beginning of quantum mechanics (QM). In particular, its analysis led Niels Bohr to the formulation of the principle of complementarity. First, we demonstrate that in complete accordance with Feynman's viewpoint, the probabilities for the two-slit experiment have the non-Kolmogorovian structure, since they violate one of the basic laws of classical probability theory, the law of total probability (the heart of the Bayesian analysis). However, then we show that these probabilities can be embedded in a natural way into the classical (Kolmogorov, 1933) probability model. To do this, one has to take into account the randomness of selection of different experimental contexts, the joint consideration of which led Feynman to a conclusion about the non-classicality of quantum probability. We compare this embedding of non-Kolmogorovian quantum probabilities into the Kolmogorov model with well-known embeddings of non-Euclidean geometries into Euclidean space (e.g., the Poincaré disk model for the Lobachevsky plane). (paper)
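
    The violation of the law of total probability mentioned above is exactly the interference cross-term; a toy numeric check with illustrative slit amplitudes:

```python
import cmath
from math import pi, sqrt

# Toy amplitudes for reaching a detector point via slit 1 or slit 2
# (equal magnitudes, a relative phase of pi/3; values are illustrative).
psi1 = cmath.exp(1j * 0.0) / sqrt(2)
psi2 = cmath.exp(1j * pi / 3) / sqrt(2)

# Both slits open: amplitudes add before squaring.
p_quantum = abs(psi1 + psi2) ** 2
# Classical "law of total probability" would just add the two probabilities.
p_classical = abs(psi1) ** 2 + abs(psi2) ** 2
# The difference is the interference cross-term 2*Re(conj(psi1)*psi2).
interference = 2 * (psi1.conjugate() * psi2).real

print(p_quantum, p_classical, interference)
```

Since the interference term is nonzero, p_quantum differs from p_classical, which is the total-probability violation the abstract refers to.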

  8. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers to draw statistical conclusions on the reliability of the NDE system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre (Petten, The Netherlands) supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive; thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is therefore the appropriate sample size.

  9. Bayesian enhancement two-stage design for single-arm phase II clinical trials with binary and time-to-event endpoints.

    Science.gov (United States)

    Shi, Haolun; Yin, Guosheng

    2018-02-21

    Simon's two-stage design is one of the most commonly used methods in phase II clinical trials with binary endpoints. The design tests the null hypothesis that the response rate is less than an uninteresting level, versus the alternative hypothesis that the response rate is greater than a desirable target level. From a Bayesian perspective, we compute the posterior probabilities of the null and alternative hypotheses given that a promising result is declared in Simon's design. Our study reveals that, because the frequentist hypothesis testing framework places its focus on the null hypothesis, a potentially efficacious treatment identified by rejecting the null under Simon's design could have less than a 10% posterior probability of attaining the desirable target level. Due to the indifference region between the null and alternative, rejecting the null does not necessarily mean that the drug achieves the desirable response level. To clarify such ambiguity, we propose a Bayesian enhancement two-stage (BET) design, which guarantees a high posterior probability of the response rate reaching the target level, while allowing for early termination and sample size saving in case the drug's response rate is smaller than the clinically uninteresting level. Moreover, the BET design can be naturally adapted to accommodate survival endpoints. We conduct extensive simulation studies to examine the empirical performance of our design and present two trial examples as applications. © 2018, The International Biometric Society.
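
    The paper's central observation, that rejecting the null under a Simon design need not imply a high posterior probability of reaching the target rate, can be explored by simulation; the uniform prior and the design constants below are illustrative choices, not those of the paper:

```python
import random

def posterior_reaches_target(n1, r1, n, r, p1, n_sim=50_000, seed=1):
    """Monte Carlo estimate of P(true rate >= p1 | design rejects H0),
    under a uniform prior on the response rate (illustrative choice)."""
    rng = random.Random(seed)
    hits = rejects = 0
    for _ in range(n_sim):
        p = rng.random()                               # prior draw
        x1 = sum(rng.random() < p for _ in range(n1))  # stage-1 responses
        if x1 <= r1:
            continue                                   # early stop: accept H0
        x = x1 + sum(rng.random() < p for _ in range(n - n1))
        if x > r:                                      # final rejection of H0
            rejects += 1
            hits += p >= p1
    return hits / rejects

# A standard Simon design for p0=0.1 versus p1=0.3: n1=10, r1=1, n=29, r=5.
print(posterior_reaches_target(10, 1, 29, 5, 0.3))
```

Rerunning with a prior concentrated near the null shows how sharply the posterior can drop despite a formal rejection, which is the ambiguity the BET design is built to remove.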

  10. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    Science.gov (United States)

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

    Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95 % confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, surgical technique and antibiotic regime disparities limit recommendations that can be made. Recent studies suggest one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent optimum treatment; however, patient selection criteria and key components of surgical and post-operative anti-microbial management remain to be defined. III.

  11. A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin

    Science.gov (United States)

    Blaschek, Michael; Duttmann, Rainer

    2015-04-01

    The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design was applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters, topographic wetness index and potential incoming solar radiation, derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules, masking out roads and buildings with a 20 m buffer, were applied before polygon selection. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered during the first phase at all or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points.
The selection of sample point locations has been done using
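
    The quantile-based stratification and per-stratum random selection described above can be sketched in miniature; the covariate values and stratum counts are illustrative:

```python
import random

def quantile_strata(values, n_strata):
    # Assign each unit to a stratum by the quantile rank of its covariate.
    order = sorted(range(len(values)), key=lambda i: values[i])
    strata = [[] for _ in range(n_strata)]
    for rank, idx in enumerate(order):
        strata[rank * n_strata // len(values)].append(idx)
    return strata

def stratified_sample(values, n_strata, per_stratum, seed=0):
    # Draw a fixed number of units at random from each quantile stratum.
    rng = random.Random(seed)
    sample = []
    for stratum in quantile_strata(values, n_strata):
        k = min(per_stratum, len(stratum))
        sample.extend(rng.sample(stratum, k))
    return sample

# E.g. a wetness-index surrogate for 100 units, 4 strata, 3 units each.
wetness = list(range(100))
print(stratified_sample(wetness, 4, 3))
```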

  12. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters, (ii) computing, for each sample, the system response by a mechanistic T-H code and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined and a number of conditional one-dimensional problems are solved along such direction; this allows for a significant reduction of the variance of the failure probability estimator, with respect, for example, to standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve the computational efficiency by reducing the variance of the failure probability estimator, no evidence has been given yet that accurate and precise failure probability estimates can be obtained with a number of samples reduced to below a few hundreds, which may be required in case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated to the
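
    The Line Sampling estimator can be illustrated on a linear limit state in standard normal space, where the important direction is known analytically and each line contributes the same one-dimensional Gaussian tail, so the estimator is exact; a real application would replace the explicit limit-state function with the T-H code:

```python
import random
from math import erfc, sqrt

def norm_sf(x):
    return 0.5 * erfc(x / sqrt(2))  # P(Z > x)

def line_sampling_pf(a, b, n_lines=1000, seed=0):
    """Line Sampling for the linear limit state g(x) = b - a.x < 0 in
    standard normal space, with important direction alpha = a/|a|."""
    rng = random.Random(seed)
    norm_a = sqrt(sum(ai * ai for ai in a))
    alpha = [ai / norm_a for ai in a]
    total = 0.0
    for _ in range(n_lines):
        z = [rng.gauss(0, 1) for _ in a]
        # Project out the component along alpha: z_perp fixes the line offset.
        dot = sum(zi * al for zi, al in zip(z, alpha))
        z_perp = [zi - dot * al for zi, al in zip(z, alpha)]
        # Along x = z_perp + c*alpha, failure when a.z_perp + c*|a| > b,
        # i.e. c > (b - a.z_perp) / |a|: a one-dimensional Gaussian tail.
        a_dot_perp = sum(ai * zi for ai, zi in zip(a, z_perp))
        c_star = (b - a_dot_perp) / norm_a
        total += norm_sf(c_star)
    return total / n_lines

# For this linear case every line yields Phi(-b/|a|), so the variance is zero.
print(line_sampling_pf((1.0, 1.0), 4.0))
```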

  13. Inadequate Iodine Intake in Population Groups Defined by Age, Life Stage and Vegetarian Dietary Practice in a Norwegian Convenience Sample.

    Science.gov (United States)

    Brantsæter, Anne Lise; Knutsen, Helle Katrine; Johansen, Nina Cathrine; Nyheim, Kristine Aastad; Erlund, Iris; Meltzer, Helle Margrete; Henjum, Sigrun

    2018-02-17

Inadequate iodine intake has been identified in populations considered iodine replete for decades. The objective of the current study is to evaluate urinary iodine concentration (UIC) and the probability of adequate iodine intake in subgroups of the Norwegian population defined by age, life stage and vegetarian dietary practice. In a cross-sectional survey, we assessed the probability of adequate iodine intake by two 24-h food diaries and UIC from two fasting morning spot urine samples in 276 participants. The participants included children (n = 47), adolescents (n = 46), adults (n = 71), the elderly (n = 23), pregnant women (n = 45), ovo-lacto vegetarians (n = 25), and vegans (n = 19). In all participants combined, the median (95% CI) UIC was 101 (90, 110) µg/L, median (25th, 75th percentile) calculated iodine intake was 112 (77, 175) µg/day and median (25th, 75th percentile) estimated usual iodine intake was 101 (75, 150) µg/day. According to WHO's criteria for evaluation of median UIC, iodine intake was inadequate in the elderly, pregnant women, vegans and non-pregnant women of childbearing age. Children had the highest (82%) and vegans the lowest (14%) probability of adequate iodine intake according to reported food and supplement intakes. This study confirms the need for monitoring iodine intake and status in nationally representative study samples in Norway.

  14. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

The kernel density was determined based on sampling points obtained in a Markov chain simulation and was used as the importance sampling function. A Kriging metamodel was constructed in greater detail in the vicinity of the limit state. The failure probability was calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was also calculated.
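The importance-sampling estimator at the core of this approach can be sketched in a few lines. This is a minimal illustration rather than the authors' method: the Kriging metamodel and Markov-chain kernel density are replaced by a fixed analytic limit state g(x) = x − 3 on a standard-normal input, and by a normal importance density centred on the limit state; all names and values are illustrative.

```python
import math
import random

random.seed(0)

def unnorm_density(x, mu):
    # Normal density up to a constant; the constant cancels in the weight ratio.
    return math.exp(-0.5 * (x - mu) ** 2)

# Failure event: g(x) = x - 3 > 0 with X ~ N(0,1); true probability ≈ 1.35e-3.
# Importance density: N(3, 1), centred on the limit state.
N = 100_000
acc = 0.0
for _ in range(N):
    x = random.gauss(3.0, 1.0)                 # draw from the importance density
    if x > 3.0:                                # failure indicator
        # likelihood-ratio weight: nominal density / importance density
        acc += unnorm_density(x, 0.0) / unnorm_density(x, 3.0)
est = acc / N
print(est)
```

The likelihood-ratio weight corrects for sampling from the shifted density; at this rarity level the estimator's variance is orders of magnitude below that of crude Monte Carlo with the same sample size.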

  15. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty, including preoperative patient evaluation and the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first presented and discussed. Two-stage penile urethroplasty is then reported: a detailed description of first-stage urethroplasty according to the Johanson technique, followed by a second-stage urethroplasty using a buccal mucosa graft and glue. Finally, the postoperative course and follow-up are addressed.

  16. FIRST DIRECT EVIDENCE OF TWO STAGES IN FREE RECALL

    Directory of Open Access Journals (Sweden)

    Eugen Tarnow

    2015-12-01

Full Text Available I find that exactly two stages can be seen directly in sequential free recall distributions. These distributions show that the first three recalls come from the emptying of working memory, recalls 6 and above come from a second stage, and the 4th and 5th recalls are mixtures of the two. A discontinuity, a rounded step function, is shown to exist in the fitted linear slope of the recall distributions as the recall shifts from the emptying of working memory (positive slope) to the second stage (negative slope). The discontinuity leads to a first estimate of the capacity of working memory at 4-4.5 items. The total recall is shown to be a linear combination of the content of working memory and items recalled in the second stage, with 3.0-3.9 items coming from working memory, a second estimate of the capacity of working memory. A third, separate upper limit on the capacity of working memory is found (3.06 items), corresponding to the requirement that the content of working memory cannot exceed the total recall, item by item. This third limit is presumably the best limit on the average capacity of unchunked working memory. The second stage of recall is shown to be reactivation: the average times to retrieve additional items in free recall obey a linear relationship as a function of the recall probability, which mimics recognition and cued recall, both mechanisms using reactivation (Tarnow, 2008).

17. A two-stage cluster sampling method using gridded population data, a GIS, and Google Earth™ imagery in a population-based mortality survey in Iraq

    Directory of Open Access Journals (Sweden)

    Galway LP

    2012-04-01

Full Text Available Abstract Background Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing the challenge of estimating mortality using retrospective population-based surveys. Results We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Conclusion Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
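The two sampling stages described here — selection of clusters with probability proportional to size, then selection of households within each selected cluster — can be sketched generically. The gridded population data, GIS, and imagery steps are replaced by a plain dictionary frame; every cluster and household name below is hypothetical.

```python
import random

random.seed(42)

# Hypothetical sampling frame: cluster id -> population size and household list.
clusters = {f"C{i}": {"pop": random.randint(500, 5000),
                      "households": [f"C{i}-H{j}" for j in range(200)]}
            for i in range(50)}

# Stage 1: select 10 clusters with probability proportional to size
# (PPS with replacement, as in many rapid mortality surveys).
ids = list(clusters)
weights = [clusters[c]["pop"] for c in ids]
stage1 = random.choices(ids, weights=weights, k=10)

# Stage 2: simple random sample of 20 households within each selected cluster.
sample = [hh for c in stage1
          for hh in random.sample(clusters[c]["households"], k=20)]
print(len(sample))  # 10 clusters x 20 households = 200
```

In a real survey the stage-2 draw is made from a sampling grid laid over satellite imagery rather than from an enumerated household list, but the two-stage probability structure is the same.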

  18. Sampling informative/complex a priori probability distributions using Gibbs sampling assisted by sequential simulation

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Cordua, Knud Skou

    2010-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample the solutions to non-linear inverse problems. In principle these methods allow incorporation of arbitrarily complex a priori information, but current methods allow only relatively simple...... this algorithm with the Metropolis algorithm to obtain an efficient method for sampling posterior probability densities for nonlinear inverse problems....
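A minimal random-walk Metropolis sampler of the kind referred to here fits in a few lines. The target below is a stand-in un-normalised log-posterior (a standard normal), not the sequential-simulation prior discussed in the abstract; step size and chain length are arbitrary illustrative choices.

```python
import math
import random

random.seed(1)

def log_target(x):
    # Un-normalised log posterior; a standard normal stands in for the
    # problem-specific posterior of a non-linear inverse problem.
    return -0.5 * x * x

x, chain = 0.0, []
for _ in range(50_000):
    prop = x + random.uniform(-1.0, 1.0)   # symmetric random-walk proposal
    log_ratio = log_target(prop) - log_target(x)
    if random.random() < math.exp(min(0.0, log_ratio)):
        x = prop                           # Metropolis accept step
    chain.append(x)

burned = chain[5_000:]                     # discard burn-in
mean = sum(burned) / len(burned)
var = sum((v - mean) ** 2 for v in burned) / len(burned)
print(round(mean, 2), round(var, 2))
```

For the N(0,1) target the chain's sample mean and variance should settle near 0 and 1, which is a quick sanity check on the sampler before swapping in a real posterior.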

  19. A two-stage inexact joint-probabilistic programming method for air quality management under uncertainty.

    Science.gov (United States)

    Lv, Y; Huang, G H; Li, Y P; Yang, Z F; Sun, W

    2011-03-01

    A two-stage inexact joint-probabilistic programming (TIJP) method is developed for planning a regional air quality management system with multiple pollutants and multiple sources. The TIJP method incorporates the techniques of two-stage stochastic programming, joint-probabilistic constraint programming and interval mathematical programming, where uncertainties expressed as probability distributions and interval values can be addressed. Moreover, it can not only examine the risk of violating joint-probability constraints, but also account for economic penalties as corrective measures against any infeasibility. The developed TIJP method is applied to a case study of a regional air pollution control problem, where the air quality index (AQI) is introduced for evaluation of the integrated air quality management system associated with multiple pollutants. The joint-probability exists in the environmental constraints for AQI, such that individual probabilistic constraints for each pollutant can be efficiently incorporated within the TIJP model. The results indicate that useful solutions for air quality management practices have been generated; they can help decision makers to identify desired pollution abatement strategies with minimized system cost and maximized environmental efficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. Measuring factor IX activity of nonacog beta pegol with commercially available one-stage clotting and chromogenic assay kits: a two-center study.

    Science.gov (United States)

    Bowyer, A E; Hillarp, A; Ezban, M; Persson, P; Kitchen, S

    2016-07-01

Essentials Validated assays are required to precisely measure factor IX (FIX) activity in FIX products. N9-GP and two other FIX products were assessed in various coagulation assay systems at two sites. Large variations in FIX activity measurements were observed for N9-GP using some assays. One-stage and chromogenic assays accurately measuring FIX activity for N9-GP were identified. Background Measurement of factor IX activity (FIX:C) with activated partial thromboplastin time-based one-stage clotting assays is associated with a large degree of interlaboratory variation in samples containing glycoPEGylated recombinant FIX (rFIX), i.e. nonacog beta pegol (N9-GP). Validation and qualification of specific assays and conditions are necessary for the accurate assessment of FIX:C in samples containing N9-GP. Objectives To assess the accuracy of various one-stage clotting and chromogenic assays for measuring FIX:C in samples containing N9-GP as compared with samples containing rFIX or plasma-derived FIX (pdFIX) across two laboratory sites. Methods FIX:C, in severe hemophilia B plasma spiked with a range of concentrations (from very low, i.e. 0.03 IU mL⁻¹, to high, i.e. 0.90 IU mL⁻¹) of N9-GP, rFIX (BeneFIX), and pdFIX (Mononine), was determined at two laboratory sites with 10 commercially available one-stage clotting assays and two chromogenic FIX:C assays. Assays were performed with a plasma calibrator and different analyzers. Results A high degree of variation in FIX:C measurement was observed for one-stage clotting assays for N9-GP as compared with rFIX or pdFIX. Acceptable N9-GP recovery was observed in the low-concentration to high-concentration samples tested with one-stage clotting assays using SynthAFax or DG Synth, or with chromogenic FIX:C assays. Similar patterns of FIX:C measurement were observed at both laboratory sites, with minor differences probably being attributable to the use of different analyzers. Conclusions These results suggest that, of the

  1. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  3. Probability Sampling Method for a Hidden Population Using Respondent-Driven Sampling: Simulation for Cancer Survivors.

    Science.gov (United States)

    Jung, Minsoo

    2015-01-01

When there is no sampling frame within a certain group, or the group is concerned that making its population public would bring social stigma, we say the population is hidden. Such populations are difficult to survey because the response rate is low and members are not entirely honest in their responses when probability sampling is used. The only alternative known to address the problems of previous methods such as snowball sampling is respondent-driven sampling (RDS), developed by Heckathorn and his colleagues. RDS is based on a Markov chain and uses the social network information of the respondent. This characteristic allows for probability sampling when surveying a hidden population. We verified through computer simulation whether RDS can be used on a hidden population of cancer survivors. According to the simulation results of this thesis, the seed dependence of RDS chain-referral sampling tends to diminish as the sample gets bigger, and it stabilizes as the waves progress. Therefore, the final sample can be completely independent of the initial seeds if a certain sample size is secured, even if the initial seeds were selected through convenience sampling. Thus, RDS can be considered an alternative that improves upon both key informant sampling and ethnographic surveys, and it needs to be utilized for various cases domestically as well.
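The chain-referral mechanism that RDS builds on can be illustrated with a toy simulation. The population size, network structure, seed count, and coupon limit below are all invented for illustration; a real RDS implementation additionally records who recruited whom and each respondent's network degree, which is what makes design-based weighting possible.

```python
import random

random.seed(7)

# Toy hidden population: 2000 members, each with a few acquaintances.
N = 2000
network = {i: random.sample(range(N), k=random.randint(3, 8)) for i in range(N)}

seeds = random.sample(range(N), k=5)   # convenience-chosen seeds
sampled, wave = set(seeds), list(seeds)
waves = 0
while len(sampled) < 400 and wave:
    nxt = []
    for person in wave:
        # each respondent passes up to 3 "coupons" to unrecruited acquaintances
        for peer in network[person][:3]:
            if peer not in sampled:
                sampled.add(peer)
                nxt.append(peer)
    wave = nxt
    waves += 1
print(len(sampled), waves)
```

After a handful of waves the sample composition is driven by the network rather than by the seeds, which is the intuition behind the seed-independence claim in the abstract.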

4. Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  5. Two-stage decision approach to material accounting

    International Nuclear Information System (INIS)

    Opelka, J.H.; Sutton, W.B.

    1982-01-01

The validity of the alarm threshold 4σ has been checked for hypothetical large and small facilities using a two-stage decision model in which the diverter's strategic variable is the quantity diverted, and the defender's strategic variables are the alarm threshold and the effectiveness of the physical security and material control systems in the possible presence of a diverter. For large facilities, the material accounting system inherently appears not to be a particularly useful system for the deterrence of diversions, and essentially no improvement can be made by lowering the alarm threshold below 4σ. For small facilities, reduction of the threshold to 2σ or 3σ is a cost-effective change for the accounting system, but is probably less cost effective than making improvements in the material control and physical security systems

  6. A versatile ultra high vacuum sample stage with six degrees of freedom

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, A. W.; Tromp, R. M. [IBM T.J. Watson Research Center, 1101 Kitchawan Road, P.O. Box 218, Yorktown Heights, New York 10598 (United States)

    2013-07-15

We describe the design and practical realization of a versatile sample stage with six degrees of freedom. The stage was designed for use in a Low Energy Electron Microscope, but its basic design features will be useful for numerous other applications. The degrees of freedom are X, Y, and Z, two tilts, and azimuth. All motions are actuated in an ultrahigh vacuum base pressure environment by piezoelectric transducers with integrated position sensors. The sample can be load-locked. During observation, the sample is held at a potential of −15 kV, at temperatures between room temperature and 1500 °C, and in background gas pressures up to 1 × 10⁻⁴ Torr.

  7. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information on the variables is extracted by pre-sampling points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)

  8. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
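The componentwise boosting step described — repeatedly regressing the current residual on each marker separately and updating only the best-fitting coefficient by a shrunken amount — can be sketched on simulated marker data. This is an illustration of the generic algorithm, not the authors' code; the dataset sizes, true marker effects, and learning rate are arbitrary choices.

```python
import random

random.seed(3)

# Toy marker data: n genotypes x p markers; phenotype driven by markers 2 and 7.
n, p = 200, 20
X = [[random.gauss(0, 1) for _ in range(p)] for _ in range(n)]
y = [row[2] * 1.5 - row[7] * 1.0 + random.gauss(0, 0.3) for row in X]

beta = [0.0] * p
nu = 0.1                                    # shrinkage / learning rate
for _ in range(300):                        # boosting iterations
    resid = [y[i] - sum(b * X[i][j] for j, b in enumerate(beta))
             for i in range(n)]
    # least-squares fit of the residual on each single marker; keep the best
    best_j, best_b, best_loss = 0, 0.0, float("inf")
    for j in range(p):
        num = sum(resid[i] * X[i][j] for i in range(n))
        den = sum(X[i][j] ** 2 for i in range(n))
        b = num / den
        loss = sum((resid[i] - b * X[i][j]) ** 2 for i in range(n))
        if loss < best_loss:
            best_j, best_b, best_loss = j, b, loss
    beta[best_j] += nu * best_b             # componentwise (single-coordinate) update
print(round(beta[2], 2), round(beta[7], 2))
```

Because only one coordinate moves per iteration, the procedure performs automatic marker selection; the shrinkage factor plays the role of the regularization that RR-BLUP achieves through its ridge penalty.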

9. Possible two-stage ⁸⁷Sr evolution in the Stockdale Rhyolite

    International Nuclear Information System (INIS)

    Compston, W.; McDougall, I.; Wyborn, D.

    1982-01-01

The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage ⁸⁷Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial ⁸⁷Sr/⁸⁶Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y. (orig.)

  10. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage BVT and 74 (56%) two-stage BVT fistulas were created by 8 different vascular surgeons during the study period, each of whom performed both procedures. There was no significant difference in the mean age, male gender, white race, diabetes, coronary disease, or medication profile between patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates of one-stage BVT were 56% at 12-month follow-up, whereas primary patency rates of two-stage BVT were 72% at 12-month follow-up. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage BVT vs 80% for two-stage BVT) and 24 months (44% for one-stage BVT vs 73% for two-stage BVT) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  11. Approximation of rejective sampling inclusion probabilities and application to high order correlations

    NARCIS (Netherlands)

    Boistard, H.; Lopuhää, H.P.; Ruiz-Gazen, A.

    2012-01-01

    This paper is devoted to rejective sampling. We provide an expansion of joint inclusion probabilities of any order in terms of the inclusion probabilities of order one, extending previous results by Hájek (1964) and Hájek (1981) and making the remainder term more precise. Following Hájek (1981), the

  12. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  13. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

Full Text Available A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test. The method minimizes the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed-sample-size procedure. The results are applied to cases in which the random variate follows a normal law as well as a Bernoullian law.
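The classical SPRT that the paper modifies can be sketched for a Bernoulli rate. The hypotheses, error probabilities, and true success rate below are illustrative, and the stopping boundaries use Wald's standard approximations A ≈ log((1−β)/α) and B ≈ log(β/(1−α)).

```python
import math
import random

random.seed(5)

p0, p1 = 0.3, 0.7                  # simple null and simple alternative
alpha = beta = 0.025               # error probabilities; their sum is fixed at 0.05
A = math.log((1 - beta) / alpha)   # cross above -> accept H1
B = math.log(beta / (1 - alpha))   # cross below -> accept H0

def sprt(true_p):
    """Run one sequential test on Bernoulli(true_p) data; return decision and n."""
    llr, n = 0.0, 0
    while B < llr < A:
        x = 1 if random.random() < true_p else 0
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        n += 1
    return ("H1" if llr >= A else "H0"), n

# With data truly from H1, the test should accept H1 roughly 97.5% of the time,
# with far fewer observations on average than a fixed-sample-size test.
runs = [sprt(0.7) for _ in range(200)]
frac_h1 = sum(d == "H1" for d, _ in runs) / len(runs)
avg_n = sum(n for _, n in runs) / len(runs)
print(frac_h1, round(avg_n, 1))
```

The average sample number printed here is the quantity whose weighted average (over the two hypotheses) the proposed method minimizes subject to the fixed sum α + β.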

  14. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  15. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients, fitted to the available repeated measurements for each subject separately, serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under the conditions of the proposed study.
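The adjustment described amounts to simple arithmetic. A hypothetical helper (the function name and numbers are invented for illustration) makes the rule of thumb concrete:

```python
def adjust_for_dropouts(n_required, dropout_rate):
    """Rule of thumb from the abstract: to the dropout-free sample size,
    add the number of subjects expected to drop out of a sample of that
    original size."""
    return n_required + round(n_required * dropout_rate)

# e.g. 120 subjects needed without dropouts, 25% expected dropout
print(adjust_for_dropouts(120, 0.25))  # -> 150
```

Note that this differs from the common inflation n / (1 − rate) (which would give 160 here); the article's empirical results support the simpler additive correction for this two-stage slope analysis.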

  16. Exploring Middle School Students' Heuristic Thinking about Probability

    OpenAIRE

    Mistele, Jean May

    2014-01-01

This descriptive qualitative study examines six eighth-grade students' thinking while solving probability problems. The study aimed to gather direct information on students' problem-solving processes informed by the heuristics and biases framework. It used purposive sampling (Patton, 1990) to identify eighth-grade students who were knowledgeable about probability and had reached the formal operational stage of cognitive development. These criteria were necessary to redu...

  17. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

Uncertainties exist in the water resources system, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with the risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that it could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
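The recourse structure underlying two-stage stochastic programming — a first-stage allocation fixed before the random demand is known, plus a scenario-dependent second-stage penalty — can be shown with a toy enumeration. The probabilities, demands, and costs are invented; real models of this kind add interval parameters and a CVaR term, and solve a linear program rather than scanning a grid.

```python
# Scenarios: (probability, water demand). First-stage water is allocated at
# unit cost 1; any shortage is covered at a second-stage penalty cost of 3.
scenarios = [(0.3, 60.0), (0.5, 100.0), (0.2, 140.0)]
c_first, c_penalty = 1.0, 3.0

def expected_cost(x):
    """First-stage cost plus expected second-stage (recourse) penalty."""
    return c_first * x + sum(p * c_penalty * max(d - x, 0.0)
                             for p, d in scenarios)

# Risk-neutral optimum: scan integer allocations (a stand-in for an LP solver).
best_x = min(range(0, 201), key=expected_cost)
print(best_x, round(expected_cost(best_x), 1))
```

The optimum balances the marginal first-stage cost against the expected marginal penalty (allocate while penalty × P(demand > x) exceeds the unit cost); a risk-averse variant would penalize the tail of the second-stage cost distribution instead of only its mean.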

  18. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem in which some parameters of the linear constraints are interval-type discrete random variables with known probability distribution. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. The deterministic multiobjective model is then solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively, which highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.

  19. Hodgkin's disease: correlation of clinical characteristics with probabilities for negative lymphangiogram vs. negative laparotomy findings in patients with stage I supradiaphragmatic presentations vs. those in patients with stage II

    International Nuclear Information System (INIS)

    Fuller, Lillian M.; Mirza, Nadeem Q.; Palmer, J. Lynn; Davis, Barry R.; Ha, Chul S.; Rodriguez, M. Alma; Hagemeister, Fredrick B.; Cabanillas, Fernando; McLaughlin, Peter; Butler, James J.; North, Luceil B.; Martin, Richard G.

    1998-01-01

    Purpose: Late complications and second malignancies have become a growing concern, staging laparotomy has been largely abandoned, and comparative studies for staging Hodgkin's disease by state-of-the-art computed tomography (CT) vs. lymphangiography have revealed minimal differences in results between these procedures. Against this background, our purpose in undertaking this study was twofold. Our initial reason was to determine and compare probabilities for negative abdominal findings for patients with Stage I presentations with those for patients with Stage II, as determined by lymphangiography and subsequently by laparotomy for those patients who had negative lymphangiograms. Our second reason, an extension of the first, was to create a resource that can be used in conjunction with other information for arriving at appropriate treatment decisions, including giving either more or, particularly, less than standard institutional therapy, especially with respect to the abdomen. Methods and Materials: Data on 714 patients with prelymphangiogram Stage I-II upper torso presentations of Hodgkin's disease were entered prospectively in our database between 1968 and 1987. Twenty-eight patients with lymphocyte-predominant disease who had both negative lymphangiogram and negative laparotomy findings, and 17 with questionable diagnoses of lymphocyte-depleted or unclassified disease, were excluded, leaving 669 patients with nodular sclerosis (NS) and mixed cellularity (MC) diagnoses for subsequent analyses. Results: Stage I: in final logistic models, negative lymphangiogram findings were associated strongly with a combination of no constitutional symptoms and nodular sclerosis histology, whereas negative laparotomy findings correlated strongly with a combination of no constitutional symptoms and female sex. Predicted probabilities depended on the ratios of favorable to unfavorable characteristics. Stage II: in final logistic models, negative lymphangiogram findings were associated

  20. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  1. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics, from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study; they include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
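Three of the probability sampling designs listed above can be sketched with Python's standard library; the population, strata, and sample sizes below are hypothetical:

```python
import random

random.seed(42)
population = list(range(1, 101))  # hypothetical sampling frame of 100 patients

# Simple random sampling: each element has an equal, independent chance.
simple = random.sample(population, 10)

# Systematic sampling: every k-th element after a random start.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

# Stratified sampling: draw proportionally within predefined strata
# (the two "wards" below are hypothetical strata of sizes 40 and 60).
strata = {"ward_a": population[:40], "ward_b": population[40:]}
stratified = [unit
              for units in strata.values()
              for unit in random.sample(units, len(units) // 10)]
```

Cluster and multi-stage sampling follow the same pattern, except that whole groups (clusters) are sampled first and elements are then sampled within the selected clusters.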

  2. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.
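As an illustration of the two-stage least squares estimator discussed in this record, a minimal pure-Python sketch. The data are fabricated and noise-free so that both stages are exact; in real applications each regression would carry sampling error:

```python
def slope(x, y):
    """Ordinary least-squares slope of y on x (simple regression with intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

# Fabricated data: z instruments x, and y responds structurally to x.
z = [1.0, 2.0, 3.0, 4.0, 5.0]
x = [2.0 * v for v in z]      # first-stage relation: x = 2 z
y = [3.0 * v for v in x]      # structural relation:  y = 3 x

# Stage 1: regress x on the instrument z and form fitted values.
b1 = slope(z, x)
mz, mx = sum(z) / len(z), sum(x) / len(x)
x_hat = [mx + b1 * (v - mz) for v in z]

# Stage 2: regress y on the fitted values; this is the 2SLS estimate.
beta_2sls = slope(x_hat, y)   # recovers the structural coefficient 3.0 here
```

The robustness analysis in the record concerns precisely how errors or outliers entering stage 1 propagate into `beta_2sls` in stage 2.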

  3. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs), which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  5. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two stage-type railgun accelerator capable of injecting a flying body (an ice pellet formed by solidifying a gaseous hydrogen isotope as a fuel) at higher speed into the central portion of the plasma of a thermonuclear reactor. Namely, the two stage-type railgun accelerator accelerates the flying body, injected from an initial-stage accelerator into the region between the rails, by the Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoid coils are disposed for compressing the plasma armature in the longitudinal direction of the rails. The first and second sets of solenoid coils are supplied with electric current in advance. After passage of the flying body, the armature, formed into a plasma by a gas laser disposed at the back of the flying body, is compressed in the longitudinal direction of the rails by the magnetic force of the first and second sets of solenoid coils to increase the plasma density. The current density is also increased simultaneously. Then, the first solenoid coil current is turned OFF to accelerate the flying body in two stages by the compressed plasma armature. (I.S.)

  6. Improving the collection of knowledge, attitude and practice data with community surveys: a comparison of two second-stage sampling methods.

    Science.gov (United States)

    Davis, Rosemary H; Valadez, Joseph J

    2014-12-01

    Second-stage sampling techniques, including spatial segmentation, are widely used in community health surveys when reliable household sampling frames are not available. In India, an unresearched technique for household selection is used in eight states, which samples the house with the last marriage or birth as the starting point. Users question whether this last-birth or last-marriage (LBLM) approach introduces bias affecting survey results. We conducted two simultaneous population-based surveys. One used segmentation sampling; the other used LBLM. LBLM sampling required modification before assessment was possible and a more systematic approach was tested using last birth only. We compared coverage proportions produced by the two independent samples for six malaria indicators and demographic variables (education, wealth and caste). We then measured the level of agreement between the caste of the selected participant and the caste of the health worker making the selection. No significant difference between methods was found for the point estimates of six malaria indicators, education, caste or wealth of the survey participants (range of P: 0.06 to >0.99). A poor level of agreement occurred between the caste of the health worker used in household selection and the caste of the final participant, (Κ = 0.185), revealing little association between the two, and thereby indicating that caste was not a source of bias. Although LBLM was not testable, a systematic last-birth approach was tested. If documented concerns of last-birth sampling are addressed, this new method could offer an acceptable alternative to segmentation in India. However, inter-state caste variation could affect this result. Therefore, additional assessment of last birth is required before wider implementation is recommended. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
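The level of agreement reported above (Κ = 0.185) is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A small sketch of its computation, with hypothetical caste labels rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_chance = sum(ca[k] * cb.get(k, 0) for k in ca) / n ** 2
    return (p_obs - p_chance) / (1.0 - p_chance)

# Hypothetical caste labels (not the study's data): the health worker making
# the selection vs. the participant finally selected in each household.
worker      = ["A", "A", "B", "B", "C", "C", "A", "B"]
participant = ["A", "B", "B", "C", "C", "A", "B", "A"]

kappa = cohens_kappa(worker, participant)  # close to 0: little association
```

Values near 0 indicate agreement no better than chance, which is why the study's κ of 0.185 supports the conclusion that caste was not a source of selection bias.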

  7. Possible two-stage ⁸⁷Sr evolution in the Stockdale Rhyolite

    Energy Technology Data Exchange (ETDEWEB)

    Compston, W.; McDougall, I. (Australian National Univ., Canberra. Research School of Earth Sciences); Wyborn, D. (Department of Minerals and Energy, Canberra (Australia). Bureau of Mineral Resources)

    1982-12-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage ⁸⁷Sr evolution for several of the samples is explored to explain this, as an alternative to the variation in initial ⁸⁷Sr/⁸⁶Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 ± 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 ± 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y.

  8. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  9. A combined Importance Sampling and Kriging reliability method for small failure probabilities with time-demanding numerical models

    International Nuclear Information System (INIS)

    Echard, B.; Gayton, N.; Lemaire, M.; Relun, N.

    2013-01-01

    Applying reliability methods to a complex structure is often delicate for two main reasons. First, such a structure is fortunately designed with codified rules leading to a large safety margin, which means that failure is a small probability event. Such a probability level is difficult to assess efficiently. Second, the structure's mechanical behaviour is modelled numerically in an attempt to reproduce the real response, and the numerical model tends to be more and more time-demanding as its complexity is increased to improve accuracy and to consider particular mechanical behaviour. As a consequence, performing a large number of model computations cannot be considered in order to assess the failure probability. To overcome these issues, this paper proposes an original and easily implementable method called AK-IS, for active learning and Kriging-based Importance Sampling. This new method is based on the AK-MCS algorithm previously published by Echard et al. [AK-MCS: an active learning reliability method combining Kriging and Monte Carlo simulation. Structural Safety 2011;33(2):145–54]. It associates the Kriging metamodel and its advantageous stochastic property with the Importance Sampling method to assess small failure probabilities. It enables the correction or validation of the FORM approximation with only very few mechanical model computations. The efficiency of the method is first demonstrated on two academic applications. It is then applied to assess the reliability of a challenging aerospace case study subjected to fatigue.
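The Importance Sampling component of AK-IS can be illustrated without the Kriging metamodel: to estimate a small failure probability such as P(X > 4) for a standard normal response, samples are drawn from a proposal density centred near the failure region and reweighted by the likelihood ratio. The threshold, proposal, and sample size below are illustrative choices only:

```python
import math
import random

random.seed(0)

THRESHOLD = 4.0  # hypothetical limit state: "failure" when the response exceeds this

def norm_pdf(x, mu=0.0):
    """Unit-variance normal density with mean mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

n = 20000
shift = THRESHOLD  # centre the proposal at the design point (the FORM-like idea)
total = 0.0
for _ in range(n):
    x = random.gauss(shift, 1.0)                   # draw from the proposal N(shift, 1)
    if x > THRESHOLD:                              # failure indicator
        total += norm_pdf(x) / norm_pdf(x, shift)  # likelihood-ratio weight

pf_is = total / n  # importance-sampling estimate of P(X > 4), around 3e-5
```

A crude Monte Carlo estimate of the same probability would need on the order of millions of samples to see any failures at all; the reweighted proposal makes roughly half the samples informative.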

  10. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain lattice and grain boundary diffusion, coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. Results reveal that its prediction is in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  11. Digital simulation of two-dimensional random fields with arbitrary power spectra and non-Gaussian probability distribution functions.

    Science.gov (United States)

    Yura, Harold T; Hanson, Steen G

    2012-04-01

    Methods for simulation of two-dimensional signals with arbitrary power spectral densities and signal amplitude probability density functions are disclosed. The method relies on initially transforming a white noise sample set of random Gaussian distributed numbers into a corresponding set with the desired spectral distribution, after which this colored Gaussian probability distribution is transformed via an inverse transform into the desired probability distribution. In most cases the method provides satisfactory results and can thus be considered an engineering approach. Several illustrative examples with relevance for optics are given.
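The second step of the method, transforming a (colored) Gaussian sample set into a target amplitude distribution via an inverse transform, can be sketched as follows. For brevity the spectral-coloring step is omitted and an exponential target distribution is assumed for illustration:

```python
import math
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()  # standard normal, provides the CDF

# Stand-in for step one: a white Gaussian sample set. In the full method this
# set would first be colored (filtered) to the desired power spectral density.
gauss = [random.gauss(0.0, 1.0) for _ in range(20000)]

def gaussian_to_exponential(x, rate=1.0):
    """Pointwise amplitude transform: Gaussian -> uniform -> target exponential."""
    u = nd.cdf(x)                       # maps the Gaussian value to uniform (0, 1)
    return -math.log(1.0 - u) / rate    # inverse CDF of the target distribution

samples = [gaussian_to_exponential(x) for x in gauss]
mean = sum(samples) / len(samples)  # exponential(rate=1) has mean 1
```

Because the transform is monotone and applied pointwise, it reshapes the amplitude distribution while largely preserving the spatial correlation imposed by the (omitted) coloring step, which is the engineering compromise the abstract alludes to.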

  12. Two-stage revision of septic knee prosthesis with articulating knee spacers yields better infection eradication rate than one-stage or two-stage revision with static spacers.

    Science.gov (United States)

    Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L

    2012-12-01

    The best method for treating chronic periprosthetic knee infection remains controversial. Randomized, comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision and static versus articulating spacers in two-stage procedures. We reviewed full-text papers and those with an abstract in English published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with two different types of spacers. In all, 6 original articles reporting the results after one-stage knee exchange arthroplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate in the eradication of infection was 89.8% after a two-stage revision and 81.9% after a one-stage procedure at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2% versus 87%). The methodological limitations of this study and the heterogeneous material in the studies reviewed notwithstanding, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of eradication of infection than one-stage revision for septic knee prosthesis and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. Level of evidence: IV.

  13. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  14. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  15. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas
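The two-stage selection idea in this record can be caricatured with scalar relevance scores standing in for the cheap and full-fledged registration metrics. All names, scores, and subset sizes below are hypothetical:

```python
import random

random.seed(3)

atlases = list(range(100))  # hypothetical atlas database

# Scalar stand-ins for the metrics: "refined" plays the expensive, trustworthy
# relevance metric; "preliminary" is a cheap but noisy preview of it.
refined = {a: random.random() for a in atlases}
preliminary = {a: refined[a] + random.gauss(0.0, 0.1) for a in atlases}

def two_stage_select(pool, augmented_size=20, fusion_size=5):
    # Stage 1: rank everything by the low-cost metric, keep an augmented subset.
    stage1 = sorted(pool, key=preliminary.get, reverse=True)[:augmented_size]
    # Stage 2: re-rank only the augmented subset with the expensive metric.
    return sorted(stage1, key=refined.get, reverse=True)[:fusion_size]

fusion_set = two_stage_select(atlases)
```

The expensive metric is evaluated on only 20 of the 100 candidates, which is the source of the computation reduction; the inference model in the paper is what justifies choosing an `augmented_size` large enough that the truly best atlases survive stage 1 with high probability.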

  17. On the assessment of extremely low breakdown probabilities by an inverse sampling procedure [gaseous insulation

    DEFF Research Database (Denmark)

    Thyregod, Poul; Vibholm, Svend

    1991-01-01

    First breakdown voltages obtained under the inverse sampling procedure assuming a double exponential flashover probability function are discussed. An inverse sampling procedure commences the voltage application at a very low level, followed by applications at stepwise increased levels until a breakdown occurs. We derive the flashover probability function and the corresponding distribution of first breakdown voltages under the inverse sampling procedure, and show how this relation may be utilized to assess the single-shot flashover probability corresponding to the observed average first breakdown voltage. Since the procedure is based on voltage applications in the neighbourhood of the quantile under investigation, the procedure is found to be insensitive to the underlying distributional assumptions.

  18. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    International Nuclear Information System (INIS)

    Vaezzadeh, Mehdi; Yazdani, Ahmad; Vaezzadeh, Majid; Daneshmand, Gissoo; Kanzeghi, Ali

    2006-01-01

    The magnetic behavior of the Gd-based intermetallic compound Gd2Al(1-x)Au(x), in the form of powder and needles, is investigated. All samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the compound in the form of powder with x=0.4, the susceptibility measurement χ(T) shows two different stages. Moreover, for T>T_K2 a fall in the value of χ(T) is observable, which indicates a weak presence of a ferromagnetic phase. Regarding the two stages of the Kondo effect, we observe at the first (T_K1) an increase of χ(T) and at the second stage (T_K2) a new remarkable decrease of χ(T) (T_K1>T_K2). For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and itinerant electrons. The second stage, which is clearly visible for the sample in the form of powder, can be attributed to a complete polarization of the Kondo cloud. Observation of these two Kondo stages could be due to the weak presence of an RKKY contribution.

  19. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage technique.

  20. Comparison of single-stage and temperature-phased two-stage anaerobic digestion of oily food waste

    International Nuclear Information System (INIS)

    Wu, Li-Jie; Kobayashi, Takuro; Li, Yu-You; Xu, Kai-Qin

    2015-01-01

    Highlights: • A single-stage and two two-stage anaerobic systems were operated synchronously. • Similar methane production of 0.44 L/g VS_added from oily food waste was achieved. • The first stage of the two-stage process became inefficient due to a serious pH drop. • Recycling favored hythane production in the two-stage digestion. • The conversion of unsaturated fatty acids was enhanced by introducing recycling. - Abstract: Anaerobic digestion is an effective technology to recover energy from oily food waste. A single-stage system and temperature-phased two-stage systems with and without recycling for anaerobic digestion of oily food waste were constructed to compare operational performance. The synchronous operation indicated a similar ability to produce methane in the three systems, with a methane yield of 0.44 L/g VS_added. The pH drop to less than 4.0 in the first stage of the two-stage system without recycling resulted in poor hydrolysis, and neither methane nor hydrogen was produced in this stage. Alkalinity supplemented from the second stage of the two-stage system with recycling improved the pH in the first stage to 5.4. Consequently, 35.3% of the particulate COD in the influent was reduced in the first stage of the two-stage system with recycling according to a COD mass balance, and hydrogen was produced at a percentage of 31.7%. Similar amounts of solids and organic matter were removed in the single-stage system and the two-stage system without recycling. More lipid degradation and conversion of long-chain fatty acids were achieved in the single-stage system. Recycling proved effective in promoting the conversion of unsaturated long-chain fatty acids into saturated fatty acids in the two-stage system.

  1. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of the expected future costs and of the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, using importance path sampling for the upper-bound estimation. Initial numerical results are promising.

  2. The probability of an encounter of two Brownian particles before escape

    International Nuclear Information System (INIS)

    Holcman, D; Kupka, I

    2009-01-01

We study the probability that two Brownian particles meet before one of them exits a finite interval. We obtain an explicit expression for this probability, as a function of the initial distance between the two particles, in terms of the Weierstrass elliptic function. We also find the law of the meeting location. Brownian simulations confirm the accuracy of our analysis. Finally, we discuss applications to the probability that a double-strand DNA break is repaired in a confined environment.
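The quantity studied above can also be checked numerically with a crude Monte Carlo sketch (all parameters below are illustrative, not from the paper): two discretized Brownian paths on an interval, counting runs in which the particles come within one step of each other before either one exits.

```python
import random

def meet_before_escape(x1, x2, length=1.0, dt=1e-3, trials=500, seed=1):
    """Estimate the probability that two Brownian particles started at
    x1 and x2 inside (0, length) meet before either exits the interval."""
    rng = random.Random(seed)
    step = dt ** 0.5                    # std dev of one Brownian increment
    meets = 0
    for _ in range(trials):
        a, b = x1, x2
        while 0.0 < a < length and 0.0 < b < length:
            if abs(a - b) <= step:      # the two particles have met
                meets += 1
                break
            a += rng.gauss(0.0, step)
            b += rng.gauss(0.0, step)
    return meets / trials
```

As the paper's explicit formula predicts, the estimate decreases as the initial distance between the particles grows.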

  3. Comparison of two techniques used for the recovery of third-stage strongylid nematode larvae from herbage.

    Science.gov (United States)

    Krecek, R C; Maingi, N

    2004-07-14

A laboratory trial to determine the efficacy of two methods for recovering known numbers of third-stage (L3) strongylid nematode larvae from herbage was carried out. Herbage samples consisting almost entirely of star grass (Cynodon aethiopicus), free of L3 parasitic nematode larvae, were collected at Onderstepoort, South Africa. Two-hundred-gram samples were placed in fibreglass fly-gauze bags and seeded with third-stage strongylid nematode larvae at 11 different levels of herbage infectivity ranging from 50 to 8000 L3/kg. Eight replicates were prepared for each of the 11 levels of herbage infectivity. Four of these were processed using a modified automatic Speed Queen heavy-duty washing machine at a regular normal cycle, followed by isolation of larvae through centrifugation-flotation in saturated sugar solution. Larvae in the other four samples were recovered by soaking the herbage in water overnight, followed by isolation of the larvae with the Baermann technique. There was a strong correlation between the number of larvae recovered using both methods and the number of larvae in the seeded samples, indicating that the two methods give a good indication of changes in the numbers of larvae on pasture if applied in epidemiological studies. The washing machine method recovered higher numbers of larvae than the soaking and Baermann method at all levels of pasture seeding, probably because the machine washed the samples more thoroughly and a sugar centrifugation-flotation step was used. Larval suspensions obtained using the washing machine method were therefore cleaner and thus easier to examine under the microscope. In contrast, the soaking and Baermann method may be more suitable for fieldwork, especially in places where resources and equipment are scarce, as it requires less costly equipment and is less labour intensive. Neither method recovered all the larvae from the seeded samples. The recovery rates for the washing machine method ranged from 18 to 41% while

  4. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

Full Text Available Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of the first molar was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid mesial, distal, buccal, and lingual (MDBL) regions by stereomicroscope, using a standard method. Results. The results of the independent test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was found between the two impression techniques in the MDL regions and overall. Conclusion. The findings of the present study indicate higher accuracy for the two-stage impression technique than for the one-stage technique.

  5. Two stages of Kondo effect and competition between RKKY and Kondo in Gd-based intermetallic compound

    Energy Technology Data Exchange (ETDEWEB)

    Vaezzadeh, Mehdi [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)]. E-mail: mehdi@kntu.ac.ir; Yazdani, Ahmad [Tarbiat Modares University, P.O. Box 14155-4838, Tehran (Iran, Islamic Republic of); Vaezzadeh, Majid [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Daneshmand, Gissoo [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of); Kanzeghi, Ali [Department of Physics, K.N.Toosi University of Technology, P.O. Box 15875-4416, Tehran (Iran, Islamic Republic of)

    2006-05-01

The magnetic behavior of the Gd-based intermetallic compound Gd{sub 2}Al{sub (1-x)}Au{sub x}, in the form of powder and of needles, is investigated. All the samples have an orthorhombic crystal structure. Only the compound with x=0.4 shows the Kondo effect (the other compounds behave normally). For the compound with x=0.4 in powder form, the susceptibility measurement {chi}(T) shows two different stages. Moreover, for T>T{sub K2} a fall in the value of {chi}(T) is observable, which indicates a weak presence of a ferromagnetic phase. Concerning the two stages of the Kondo effect, we observe at the first (T{sub K1}) an increase of {chi}(T), and at the second stage (T{sub K2}) a further remarkable decrease of {chi}(T) (T{sub K1}>T{sub K2}). For the sample in the form of needles, the first stage is observable only under a high magnetic field. This first stage could correspond to a narrow resonance between the Kondo cloud and the itinerant electrons. The second stage, which is clearly visible for the sample in powder form, can be attributed to a complete polarization of the Kondo cloud. The observation of these two Kondo stages could be due to the weak presence of an RKKY contribution.

  6. Two-Stage Centrifugal Fan

    Science.gov (United States)

    Converse, David

    2011-01-01

Fan designs are often constrained by envelope, rotational speed, weight, and power. Aerodynamic performance and motor electrical performance are heavily influenced by rotational speed. The fan used in this work is at a practical limit for rotational speed due to motor performance characteristics, and there is no more space available in the packaging for a larger fan, yet the pressure rise requirements keep growing. The ordinary ways to accommodate a higher DP are to spin faster or to grow the fan rotor diameter. The invention is to put two radially oriented stages on a single disk. Flow enters the first stage from the center; energy is imparted to the flow in the first-stage blades, the flow is redirected somewhat opposite to the direction of rotation in the fixed stators, and more energy is imparted to the flow in the second-stage blades. Without increasing either rotational speed or disk diameter, it is believed that as much as 50 percent more DP can be achieved with this design than with an ordinary, single-stage centrifugal design. This invention is useful primarily for fans having relatively low flow rates with relatively high pressure rise requirements.
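The DP gain from stacking two radial stages can be illustrated with the ideal Euler turbomachine relation, dP = rho * U_out * C_theta_out per stage. The rpm, radii, density, and slip factor below are invented for illustration, and real stage losses are ignored; this is a sketch of the scaling argument, not the actual fan's figures.

```python
import math

RHO = 1.2        # air density in kg/m^3 (assumed)
SLIP = 0.9       # simple slip factor (assumed)

def stage_dp(rpm, r_out):
    """Ideal pressure rise of one radial stage from the Euler turbomachine
    equation dP = rho * U_out * C_theta_out, with no inlet swirl and
    C_theta_out = SLIP * U_out at the stage exit radius r_out (metres)."""
    u_out = rpm * 2.0 * math.pi / 60.0 * r_out   # blade speed at exit
    return RHO * u_out * (SLIP * u_out)

# One stage sweeping the whole disk versus two stages in series on the
# same disk: the first stage exits at mid-radius, the second at the rim.
single    = stage_dp(6000, 0.10)
two_stage = stage_dp(6000, 0.06) + stage_dp(6000, 0.10)
```

With these numbers the two-stage layout delivers 36 percent more ideal pressure rise at the same speed and diameter, the same order as the 50 percent claimed above.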

  7. Phase I (or phase II) dose-ranging clinical trials: proposal of a two-stage Bayesian design.

    Science.gov (United States)

    Zohar, Sarah; Chevret, Sylvie

    2003-02-01

    We propose a new design for phase I (or phase II) dose-ranging clinical trials aiming at determining a dose of an experimental treatment to satisfy safety (respectively efficacy) requirements, at treating a sufficiently large number of patients to estimate the toxicity (respectively failure) probability of the dose level with a given reliability, and at stopping the trial early if it is likely that no dose is safe (respectively efficacious). A two-stage design was derived from the Continual Reassessment Method (CRM), with implementation of Bayesian criteria to generate stopping rules. A simulation study was conducted to compare the operating characteristics of the proposed two-stage design to those reached by the traditional CRM. Finally, two applications to real data sets are provided.
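A minimal grid-approximation sketch of one Bayesian CRM update may clarify the machinery: the power model and normal prior below are common textbook choices, and the skeleton, target, and trial data are invented, not taken from the paper.

```python
import math

def crm_posterior_probs(skeleton, doses_given, tox):
    """One Bayesian update of the Continual Reassessment Method with the
    power model p_i = skeleton_i ** exp(a) and an N(0, 1.34) prior on a,
    evaluated on a grid. Returns posterior-mean toxicity probabilities."""
    grid = [i / 100.0 for i in range(-300, 301)]
    weights = []
    for a in grid:
        w = math.exp(-a * a / (2 * 1.34))        # prior kernel
        for d, y in zip(doses_given, tox):       # binary toxicity outcomes
            p = skeleton[d] ** math.exp(a)
            w *= p if y else (1.0 - p)
        weights.append(w)
    z = sum(weights)
    weights = [w / z for w in weights]
    return [sum(w * skeleton[i] ** math.exp(a) for w, a in zip(weights, grid))
            for i in range(len(skeleton))]

skeleton = [0.05, 0.10, 0.20, 0.35]   # prior toxicity guesses per dose
est = crm_posterior_probs(skeleton, doses_given=[1, 1, 1], tox=[0, 0, 1])
next_dose = min(range(len(skeleton)), key=lambda i: abs(est[i] - 0.25))
```

The two-stage design described above adds Bayesian stopping rules on top of updates like this one; those rules are not sketched here.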

  8. Sleep Stage Transition Dynamics Reveal Specific Stage 2 Vulnerability in Insomnia.

    Science.gov (United States)

    Wei, Yishul; Colombo, Michele A; Ramautar, Jennifer R; Blanken, Tessa F; van der Werf, Ysbrand D; Spiegelhalder, Kai; Feige, Bernd; Riemann, Dieter; Van Someren, Eus J W

    2017-09-01

    Objective sleep impairments in insomnia disorder (ID) are insufficiently understood. The present study evaluated whether whole-night sleep stage dynamics derived from polysomnography (PSG) differ between people with ID and matched controls and whether sleep stage dynamic features discriminate them better than conventional sleep parameters. Eighty-eight participants aged 21-70 years, including 46 with ID and 42 age- and sex-matched controls without sleep complaints, were recruited through www.sleepregistry.nl and completed two nights of laboratory PSG. Data of 100 people with ID and 100 age- and sex-matched controls from a previously reported study were used to validate the generalizability of findings. The second night was used to obtain, in addition to conventional sleep parameters, probabilities of transitions between stages and bout duration distributions of each stage. Group differences were evaluated with nonparametric tests. People with ID showed higher empirical probabilities to transition from stage N2 to the lighter sleep stage N1 or wakefulness and a faster decaying stage N2 bout survival function. The increased transition probability from stage N2 to stage N1 discriminated people with ID better than any of their deviations in conventional sleep parameters, including less total sleep time, less sleep efficiency, more stage N1, and more wake after sleep onset. Moreover, adding this transition probability significantly improved the discriminating power of a multiple logistic regression model based on conventional sleep parameters. Quantification of sleep stage dynamics revealed a particular vulnerability of stage N2 in insomnia. The feature characterizes insomnia better than-and independently of-any conventional sleep parameter. © Sleep Research Society 2017. Published by Oxford University Press on behalf of the Sleep Research Society. All rights reserved. For permissions, please e-mail journals.permissions@oup.com.
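For illustration, empirical stage-transition probabilities of the kind analyzed above can be computed from a per-epoch hypnogram as follows; the toy hypnogram is invented, and only stage changes are counted.

```python
from collections import Counter, defaultdict

def transition_probs(stages):
    """Empirical probabilities of transitions between sleep stages in a
    per-epoch hypnogram (list of stage labels), counting changes only."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(stages, stages[1:]):
        if cur != nxt:                  # ignore epochs that stay in stage
            counts[cur][nxt] += 1
    return {s: {t: n / sum(c.values()) for t, n in c.items()}
            for s, c in counts.items()}

hypnogram = ["W", "N1", "N2", "N1", "N2", "N3", "N2", "W", "N1", "N2", "N2"]
probs = transition_probs(hypnogram)   # e.g. probs["N2"]["N1"] = 1/3
```

The paper's key feature, the probability of leaving N2 toward N1 or wakefulness, is read off the `"N2"` row of such a table.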

  9. Cortical hypometabolism and hypoperfusion in Parkinson's disease is extensive: probably even at early disease stages

    DEFF Research Database (Denmark)

    Borghammer, Per; Chakravarty, Mallar; Jonsdottir, Kristjana Yr

    2010-01-01

    independent samples of PD patients. We compared SPECT CBF images of 32 early-stage and 33 late-stage PD patients with that of 60 matched controls. We also compared PET FDG images from 23 late-stage PD patients with that of 13 controls. Three different normalization methods were compared: (1) GM normalization...

  10. Probability Machines: Consistent Probability Estimation Using Nonparametric Learning Machines

    Science.gov (United States)

    Malley, J. D.; Kruppa, J.; Dasgupta, A.; Malley, K. G.; Ziegler, A.

    2011-01-01

    Summary Background Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. Objectives The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Methods Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Results Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Conclusions Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications. PMID:21915433
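As a minimal illustration of the idea (a consistent nonparametric regression learner used to estimate P(Y=1|x)), here is a one-dimensional k-nearest-neighbour probability machine on invented data; the paper's own examples use R packages rather than this sketch.

```python
def knn_probability(train_x, train_y, x, k=5):
    """Probability machine via k-nearest-neighbour regression: estimate
    P(Y = 1 | x) as the mean of the k nearest binary labels."""
    nearest = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

# Invented one-dimensional data where P(Y=1|x) increases with x.
xs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
ys = [0,   0,   0,   0,   1,   0,   1,   1,   1,   1]
p_low  = knn_probability(xs, ys, 0.15)   # neighbourhood mostly labelled 0
p_high = knn_probability(xs, ys, 0.95)   # neighbourhood mostly labelled 1
```

Averaging labels instead of taking a majority vote is exactly what turns a classifier into a probability estimator.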

  11. Risk and protective factors of dissocial behavior in a probability sample.

    Science.gov (United States)

    Moral de la Rubia, José; Ortiz Morales, Humberto

    2012-07-01

The aims of this study were to identify risk and protective factors for dissocial behavior, bearing in mind that self-reports of dissocial behavior are biased by impression management. A probability sample of adolescents living in two neighborhoods with high rates of gangs and offenses (112 men and 86 women) was collected. The 27-item Dissocial Behavior Scale (ECODI27; Pacheco & Moral, 2010), the Balanced Inventory of Desirable Responding, version 6 (BIDR-6; Paulhus, 1991), the Sensation Seeking Scale, form V (SSS-V; Zuckerman, Eysenck, & Eysenck, 1978), the Parent-Adolescent Communication Scale (PACS; Barnes & Olson, 1982), the 30-item Rathus Assertiveness Schedule (RAS; Rathus, 1973), the Interpersonal Reactivity Index (IRI; Davis, 1983), and a social relationship questionnaire (SRQ) were applied. Binary logistic regression was used for the data analysis. A third of the participants showed dissocial behavior. Belonging to a gang in school (schooled adolescents) or to a gang outside school and work (total sample) and disinhibition were risk factors; being female, perspective taking, and open communication with the father were protective factors. School-leaving was a differential aspect. We insist on the need for intervention on these variables.

  12. Multivariate survivorship analysis using two cross-sectional samples.

    Science.gov (United States)

    Hill, M E

    1999-11-01

    As an alternative to survival analysis with longitudinal data, I introduce a method that can be applied when one observes the same cohort in two cross-sectional samples collected at different points in time. The method allows for the estimation of log-probability survivorship models that estimate the influence of multiple time-invariant factors on survival over a time interval separating two samples. This approach can be used whenever the survival process can be adequately conceptualized as an irreversible single-decrement process (e.g., mortality, the transition to first marriage among a cohort of never-married individuals). Using data from the Integrated Public Use Microdata Series (Ruggles and Sobek 1997), I illustrate the multivariate method through an investigation of the effects of race, parity, and educational attainment on the survival of older women in the United States.
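The core of the method can be sketched with invented census counts: survival over the interval is the ratio of cohort counts between the two cross-sections, and a log-probability model with one binary covariate is then identified exactly. The group names and numbers below are illustrative, not from the paper.

```python
import math

# Invented counts for one birth cohort of women observed in two censuses
# taken ten years apart, split by a binary covariate (education).
counts_t1 = {"educ_low": 5000, "educ_high": 3000}   # earlier cross-section
counts_t2 = {"educ_low": 4000, "educ_high": 2700}   # later cross-section

def log_prob_survivorship(c1, c2):
    """Log-probability survivorship model log S(x) = b0 + b1*x for a single
    binary covariate: with two groups it is identified exactly by the
    ratios of cohort counts between the two cross-sections."""
    s_low = c2["educ_low"] / c1["educ_low"]
    s_high = c2["educ_high"] / c1["educ_high"]
    b0 = math.log(s_low)
    b1 = math.log(s_high) - math.log(s_low)
    return b0, b1

b0, b1 = log_prob_survivorship(counts_t1, counts_t2)
s_low, s_high = math.exp(b0), math.exp(b0 + b1)   # implied survival probs
```

With several covariates the coefficients are no longer exactly identified and must be fit, but the irreversible single-decrement logic is the same.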

  13. The Orientation of Gastric Biopsy Samples Improves the Inter-observer Agreement of the OLGA Staging System.

    Science.gov (United States)

    Cotruta, Bogdan; Gheorghe, Cristian; Iacob, Razvan; Dumbrava, Mona; Radu, Cristina; Bancila, Ion; Becheanu, Gabriel

    2017-12-01

    Evaluation of severity and extension of gastric atrophy and intestinal metaplasia is recommended to identify subjects with a high risk for gastric cancer. The inter-observer agreement for the assessment of gastric atrophy is reported to be low. The aim of the study was to evaluate the inter-observer agreement for the assessment of severity and extension of gastric atrophy using oriented and unoriented gastric biopsy samples. Furthermore, the quality of biopsy specimens in oriented and unoriented samples was analyzed. A total of 35 subjects with dyspeptic symptoms addressed for gastrointestinal endoscopy that agreed to enter the study were prospectively enrolled. The OLGA/OLGIM gastric biopsies protocol was used. From each subject two sets of biopsies were obtained (four from the antrum, two oriented and two unoriented, two from the gastric incisure, one oriented and one unoriented, four from the gastric body, two oriented and two unoriented). The orientation of the biopsy samples was completed using nitrocellulose filters (Endokit®, BioOptica, Milan, Italy). The samples were blindly examined by two experienced pathologists. Inter-observer agreement was evaluated using kappa statistic for inter-rater agreement. The quality of histopathology specimens taking into account the identification of lamina propria was analyzed in oriented vs. unoriented samples. The samples with detectable lamina propria mucosae were defined as good quality specimens. Categorical data was analyzed using chi-square test and a two-sided p value <0.05 was considered statistically significant. A total of 350 biopsy samples were analyzed (175 oriented / 175 unoriented). The kappa index values for oriented/unoriented OLGA 0/I/II/III and IV stages have been 0.62/0.13, 0.70/0.20, 0.61/0.06, 0.62/0.46, and 0.77/0.50, respectively. For OLGIM 0/I/II/III stages the kappa index values for oriented/unoriented samples were 0.83/0.83, 0.88/0.89, 0.70/0.88 and 0.83/1, respectively. No case of OLGIM IV
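Inter-observer agreement of the kind reported above is quantified with kappa statistics; a self-contained Cohen's kappa on invented stage assignments shows the computation (the labels below are illustrative, not the study's data).

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)  # chance agreement
    return (observed - expected) / (1.0 - expected)

# Invented OLGA stage assignments from two pathologists on ten samples.
rater_1 = ["0", "I", "I", "II", "III", "0", "II", "I", "IV", "0"]
rater_2 = ["0", "I", "II", "II", "III", "0", "II", "0", "IV", "0"]
kappa = cohens_kappa(rater_1, rater_2)
```

Here observed agreement is 0.8 and chance agreement 0.23, giving kappa of about 0.74, which would fall in the range the study reports for oriented samples.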

  14. PSA, subjective probability and decision making

    International Nuclear Information System (INIS)

    Clarotti, C.A.

    1989-01-01

PSA is the natural way of making decisions in the face of uncertainty about potentially dangerous plants; subjective probability, subjective utility, and Bayesian statistics are the ideal tools for carrying out a PSA. To support this statement, this paper examines the various stages of the PSA procedure in detail and proves, step by step, the superiority of Bayesian techniques over the machinery of sampling theory.

  15. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

In this paper, a combined response surface and importance sampling method is applied to calculate the parameter failure probability of a thermodynamic system. A mathematical model is presented for parameter failure of the physical process in the thermodynamic system, from which the combined response surface and importance sampling algorithm is established; the performance degradation model of the components and the simulation process of parameter failure in the physical process of the thermodynamic system are also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained with the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and non-linear characteristics: it achieves satisfactory precision with less computing time than the direct sampling method while avoiding the drawbacks of the response surface method. (authors)

  16. Non-parametric adaptive importance sampling for the probability estimation of a launcher impact position

    International Nuclear Information System (INIS)

    Morio, Jerome

    2011-01-01

Importance sampling (IS) is a useful simulation technique for estimating critical probabilities with better accuracy than Monte Carlo methods. It consists in generating random weighted samples from an auxiliary distribution rather than from the distribution of interest. The crucial part of this algorithm is the choice of an efficient auxiliary PDF, one able to generate the rare random events of interest more often. In practice, the optimisation of this auxiliary distribution is often very difficult. In this article, we propose to approximate the optimal IS auxiliary density with non-parametric adaptive importance sampling (NAIS). We apply this technique to the probability estimation of the spatial impact position of a launcher, which has become an increasingly important issue in the field of aeronautics.
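A standard textbook instance of the idea (not the NAIS algorithm itself) is estimating a Gaussian tail probability with an auxiliary density shifted into the rare region; every draw that lands in the event is reweighted by the likelihood ratio between the true and auxiliary densities.

```python
import math
import random

def is_tail_estimate(threshold=4.0, n=20000, seed=7):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1),
    sampling from the auxiliary density N(threshold, 1) and reweighting
    each draw by the likelihood ratio."""
    rng = random.Random(seed)

    def phi(x, mu):                      # unit-variance normal density
        return math.exp(-(x - mu) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)    # draw from the auxiliary PDF
        if x > threshold:                # indicator of the rare event
            total += phi(x, 0.0) / phi(x, threshold)
    return total / n

p_hat = is_tail_estimate()   # true value is about 3.17e-5
```

Plain Monte Carlo with the same 20,000 draws would typically see this event zero or one time; the shifted sampler hits it on roughly half the draws and lets the weights do the rest.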

  17. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

Exposure to Sampling: Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling; Horvitz-Thompson Estimator; Sufficiency; Likelihood; Non-Existence Theorem. More Intricacies: Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  18. Predicting binary choices from probability phrase meanings.

    Science.gov (United States)

    Wallsten, Thomas S; Jang, Yoonhee

    2008-08-01

    The issues of how individuals decide which of two events is more likely and of how they understand probability phrases both involve judging relative likelihoods. In this study, we investigated whether derived scales representing probability phrase meanings could be used within a choice model to predict independently observed binary choices. If they can, this simultaneously provides support for our model and suggests that the phrase meanings are measured meaningfully. The model assumes that, when deciding which of two events is more likely, judges take a single sample from memory regarding each event and respond accordingly. The model predicts choice probabilities by using the scaled meanings of individually selected probability phrases as proxies for confidence distributions associated with sampling from memory. Predictions are sustained for 34 of 41 participants but, nevertheless, are biased slightly low. Sequential sampling models improve the fit. The results have both theoretical and applied implications.
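The choice rule in the model, draw one sample per event and pick the larger, can be written down directly; with discrete scaled meanings the predicted choice probability is an exact enumeration. The phrase distributions below are invented stand-ins for the derived scales.

```python
# Invented scaled meanings of two probability phrases, as discrete
# distributions of values on the 0-1 probability scale.
likely   = [0.6, 0.7, 0.7, 0.8, 0.9]
doubtful = [0.1, 0.2, 0.3, 0.3, 0.4]

def p_choose_first(meaning_a, meaning_b):
    """Choice probability under the one-sample rule: draw one value from
    each phrase's meaning distribution and choose the event whose draw is
    larger (ties split evenly). Computed by exact enumeration."""
    wins = ties = 0
    for a in meaning_a:
        for b in meaning_b:
            wins += a > b
            ties += a == b
    pairs = len(meaning_a) * len(meaning_b)
    return (wins + 0.5 * ties) / pairs
```

The sequential-sampling refinement mentioned above would replace the single draw per event with repeated draws accumulated to a decision threshold.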

  19. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize under these uncertainties in an adaptive management framework. A two-stage model represents an immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to deploy GI extensively over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and addressing community concerns such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and
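The "here & now" / "wait & see" structure can be illustrated with a toy two-stage problem (all numbers invented): commit to green infrastructure now, then buy more expensive gray infrastructure once GI effectiveness is revealed, covering any shortfall against a runoff-reduction target.

```python
# Toy two-stage stochastic program (all numbers invented): choose green
# infrastructure x now at cost 1.0 per unit of nominal runoff reduction;
# after GI effectiveness is revealed, buy gray infrastructure at unit
# cost 3.0 to cover any shortfall against a 10-unit reduction target.
SCENARIOS = [(0.5, 1.2), (0.5, 0.6)]     # (probability, GI effectiveness)
TARGET, C_GREEN, C_GRAY = 10.0, 1.0, 3.0

def total_cost(x):
    """First-stage cost plus expected second-stage (recourse) cost."""
    recourse = sum(p * C_GRAY * max(0.0, TARGET - eff * x)
                   for p, eff in SCENARIOS)
    return C_GREEN * x + recourse

# 'Here & now' decision: minimise over a grid of first-stage choices.
best_x = min((i * 0.1 for i in range(301)), key=total_cost)
```

The full StormWISE extension replaces this enumeration with linear programming over many objectives and scenarios, and Bayesian updating revises the scenario probabilities between stages.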

  20. Prognostic meta-signature of breast cancer developed by two-stage mixture modeling of microarray data

    Directory of Open Access Journals (Sweden)

    Ghosh Debashis

    2004-12-01

Full Text Available Abstract Background An increasing number of studies have profiled tumor specimens using distinct microarray platforms and analysis techniques. With the accumulating amount of microarray data, one of the most intriguing yet challenging tasks is to develop robust statistical models to integrate the findings. Results By applying a two-stage Bayesian mixture modeling strategy, we were able to assimilate and analyze four independent microarray studies to derive an inter-study validated "meta-signature" associated with breast cancer prognosis. Combining multiple studies (n = 305 samples) on a common probability scale, we developed a 90-gene meta-signature, which strongly associated with survival in breast cancer patients. Given the set of independent studies using different microarray platforms, which included spotted cDNAs, Affymetrix GeneChip, and inkjet oligonucleotides, the individually identified classifiers yielded gene sets predictive of survival in each study cohort. The study-specific gene signatures, however, had minimal overlap with each other, and performed poorly in pairwise cross-validation. The meta-signature, on the other hand, accommodated such heterogeneity and achieved comparable or better prognostic performance when compared with the individual signatures. Further, compared with a global standardization method, the mixture-model-based data transformation demonstrated superior properties for data integration and provided a solid basis for building classifiers at the second stage. Functional annotation revealed that genes involved in cell cycle and signal transduction activities were over-represented in the meta-signature. Conclusion The mixture modeling approach unifies disparate gene expression data on a common probability scale, allowing for robust, inter-study validated prognostic signatures to be obtained. With the emerging utility of microarrays for cancer prognosis, it will be important to establish paradigms to meta
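The paper's transformation is a two-stage Bayesian mixture model; as a much simpler stand-in, a mid-rank/ECDF transform illustrates what "placing studies on a common probability scale" means. The toy data below are invented, and this sketch is not the authors' method.

```python
def to_probability_scale(values):
    """Map one study's measurements onto a common (0, 1) scale via the
    empirical CDF (mid-ranks); a simple stand-in for the paper's
    mixture-model transformation."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    probs = [0.0] * n
    for rank, idx in enumerate(order):
        probs[idx] = (rank + 0.5) / n     # mid-rank ECDF value
    return probs

# Two 'platforms' measuring the same five samples on different scales:
# after the transform they land on identical probability values.
study_a = [2.1, 5.3, 3.3, 9.0, 4.4]
study_b = [210, 530, 330, 900, 440]
```

Once every study lives on the same (0, 1) scale, a classifier trained at the second stage can pool samples across platforms.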

  1. Effect of optimum stratification on sampling with varying probabilities under proportional allocation

    Directory of Open Access Journals (Sweden)

    Syed Ejaz Husain Rizvi

    2007-10-01

Full Text Available The problem of optimum stratification on an auxiliary variable, when the units from different strata are selected with probability proportional to the value of the auxiliary variable (PPSWR), was considered by Singh (1975) for the univariate case. In this paper we extend the same problem, for proportional allocation, to the case when two variates are under study. A cum. ∛R3(x) rule for obtaining approximately optimum strata boundaries is provided. It is shown, theoretically as well as empirically, that stratification has an inverse effect on the relative efficiency of PPSWR as compared to unstratified PPSWR when the proportional method of allocation is envisaged. Further comparison showed that, as the number of strata increases, stratified simple random sampling becomes as efficient as PPSWR.
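For reference, PPSWR selection and its unbiased Hansen-Hurwitz total estimator look like this in miniature; the population values are invented, and `y` is made roughly proportional to the size measure `x`, the situation in which PPSWR is most efficient.

```python
import random

def ppswr_total_estimate(y, x, n, seed=3):
    """Draw n units with replacement with probability proportional to the
    size measure x (PPSWR) and return the Hansen-Hurwitz estimate of the
    population total of y: the average of y_i / p_i over the draws."""
    rng = random.Random(seed)
    total_x = sum(x)
    draws = rng.choices(range(len(y)), weights=x, k=n)
    return sum(y[i] * total_x / x[i] for i in draws) / n

# Invented population; the true total of y is 105.
x = [10, 20, 30, 40]
y = [12, 19, 33, 41]
estimate = ppswr_total_estimate(y, x, n=500)
```

Because the per-draw values y_i/p_i are nearly constant here, the estimator's variance is small; stratifying on x changes exactly this variance structure, which is the effect the paper studies.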

  2. Influence of Cu(NO3)2 initiation additive in two-stage mode conditions of coal pyrolytic decomposition

    Directory of Open Access Journals (Sweden)

    Larionov Kirill

    2017-01-01

Full Text Available The two-stage (pyrolysis and oxidation) pyrolytic decomposition of a brown coal sample with a Cu(NO3)2 additive was studied. The additive was introduced by the capillary wetness impregnation method at a mass concentration of 5%. Sample reactivity was studied by thermogravimetric analysis with staged gaseous medium supply (argon and air) at a heating rate of 10 °C/min and intermediate isothermal soaking. The introduction of the initiating additive was found to significantly reduce the volatile release temperature and accelerate the thermal decomposition of the sample. Mass-spectral analysis reveals that the significant difference in process characteristics is connected with the volatile matter release stage, which is initiated by nitrous oxide produced during copper nitrate decomposition.

  3. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, the attainable power output, and the constraints on laser operation were not known. The primary reason for supporting this research at that time was that it had the potential for producing short-wavelength radiation using a relatively low voltage electron beam. One advantage of a low-voltage two-stage FEL would be that shielding requirements would be greatly reduced compared with single-stage short-wavelength FELs. If the electron energy were kept below about 10 MeV, X-rays generated by electrons striking the beam line wall would not excite neutron resonances in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  4. A stage is a stage is a stage: a direct comparison of two scoring systems.

    Science.gov (United States)

    Dawson, Theo L

    2003-09-01

    L. Kohlberg (1969) argued that his moral stages captured a developmental sequence specific to the moral domain. To explore that contention, the author compared stage assignments obtained with the Standard Issue Scoring System (A. Colby & L. Kohlberg, 1987a, 1987b) and those obtained with a generalized content-independent stage-scoring system called the Hierarchical Complexity Scoring System (T. L. Dawson, 2002a), on 637 moral judgment interviews (participants' ages ranged from 5 to 86 years). The correlation between stage scores produced with the 2 systems was .88. Although standard issue scoring and hierarchical complexity scoring often awarded different scores up to Kohlberg's Moral Stage 2/3, from his Moral Stage 3 onward, scores awarded with the two systems predominantly agreed. The author explores the implications for developmental research.

  5. One- and two-stage Arrhenius models for pharmaceutical shelf life prediction.

    Science.gov (United States)

    Fan, Zhewen; Zhang, Lanju

    2015-01-01

    One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years. Long-term stability studies are required to support such a shelf life claim at registration. However, during drug development, to facilitate formulation and dosage form selection, an accelerated stability study under stressed storage conditions is preferred to quickly obtain a good prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation, which describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normality of the errors, and shelf life projection is usually based on the confidence band of a regression line; however, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf life estimation based on accelerated stability testing, and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than nominal levels. Our bootstrap methods gave better coverage and led to a shelf life prediction closer to that based on long-term stability data.
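    As a rough illustration of the Arrhenius extrapolation and nonparametric bootstrap discussed in this abstract, the following is a minimal sketch with hypothetical rate data; it does not reproduce the authors' actual procedures.

```python
import numpy as np

# Hypothetical accelerated-stability data: temperatures (K) and observed
# first-order degradation rate constants k (1/month).
T = np.array([298.0, 313.0, 323.0, 333.0])
k = np.array([0.0010, 0.0042, 0.0118, 0.0300])

R = 8.314  # gas constant, J/(mol*K)

def fit_arrhenius(T, k):
    """Linearized Arrhenius fit: ln k = ln A - Ea/(R*T)."""
    slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
    return -slope * R, intercept  # activation energy Ea (J/mol), ln A

def predict_k(Ea, lnA, T_target):
    return np.exp(lnA - Ea / (R * T_target))

Ea, lnA = fit_arrhenius(T, k)
k25 = predict_k(Ea, lnA, 298.0)  # predicted ambient-temperature rate

# Nonparametric residual bootstrap of the ambient-rate prediction
rng = np.random.default_rng(0)
fitted = lnA - Ea / (R * T)
resid = np.log(k) - fitted
boot = []
for _ in range(2000):
    k_star = np.exp(fitted + rng.choice(resid, size=len(T), replace=True))
    Ea_b, lnA_b = fit_arrhenius(T, k_star)
    boot.append(predict_k(Ea_b, lnA_b, 298.0))
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% bootstrap interval for k at 25 C
```

    A shelf life estimate would then follow by inverting the degradation model at the upper rate bound; that step depends on the specification limit and kinetic order.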

  6. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics, and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed a pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report.

  7. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years, and as size-dependent at early life stages but size-independent at later life stages, for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size, and including explanatory covariates.

  8. The Linking Probability of Deep Spider-Web Networks

    OpenAIRE

    Pippenger, Nicholas

    2005-01-01

    We consider crossbar switching networks with base $b$ (that is, constructed from $b\\times b$ crossbar switches), scale $k$ (that is, with $b^k$ inputs, $b^k$ outputs and $b^k$ links between each consecutive pair of stages) and depth $l$ (that is, with $l$ stages). We assume that the crossbars are interconnected according to the spider-web pattern, whereby two diverging paths reconverge only after at least $k$ stages. We assume that each vertex is independently idle with probability $q$, the v...

  9. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify factors for growing SMBs, using an evolutionary phase approach. The study also aims to find out whether there are common and different denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data was collected and analyzed. Among the conclusions that may be drawn from the study are that it is not easy to find a common definition of success; that it is important to stratify SMBs when studying them; and that an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics, since each industry has its own set of success variables. The study has identified three success variables for older firms that reflect contemporary strategic thinking: crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  10. Comparison of One-Stage Free Gracilis Muscle Flap With Two-Stage Method in Chronic Facial Palsy

    Directory of Open Access Journals (Sweden)

    J Ghaffari

    2007-08-01

    Full Text Available Background: Rehabilitation of facial paralysis is one of the greatest challenges faced by reconstructive surgeons today. The traditional method for treatment of patients with facial palsy is the two-stage free gracilis flap, which has a long latency period between the two stages of surgery. Methods: In this paper, we prospectively compared the results of the one-stage gracilis flap method with the two-stage technique. Results: Of 41 patients with facial palsy referred to Hazrat-e-Fatemeh Hospital, 31 were selected, of whom 22 underwent the two-stage and 9 the one-stage treatment. The two groups were identical with respect to age, sex, intensity of illness, duration, and chronicity of illness. Mean duration of follow-up was 37 months. There was no significant difference between the two groups regarding symmetry of the face in repose, smiling, whistling, and nasolabial folds. The frequency of complications was equal in both groups, as were postoperative surgeon and patient satisfaction. There was no significant difference between the mean excursion of the muscle flap in the one-stage (9.8 mm) and two-stage (8.9 mm) groups. The ratio of contraction of the affected side compared to the normal side was similar in both groups. The mean time to initial contraction of the muscle flap in the one-stage group (5.5 months) differed significantly (P=0.001) from that in the two-stage group (6.5 months). The study also revealed a highly significant difference (P=0.0001) between the mean waiting periods from the first operation to the beginning of muscle contraction in the one-stage (5.5 months) and two-stage (17.1 months) groups. Conclusion: It seems that the results and complications of the two methods are the same, but the one-stage method requires less time for facial reanimation and is cost-effective because it saves time and decreases hospitalization costs.

  11. Statistical Methods and Tools for Hanford Staged Feed Tank Sampling

    Energy Technology Data Exchange (ETDEWEB)

    Fountain, Matthew S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Brigantic, Robert T. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Peterson, Reid A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-10-01

    This report summarizes work conducted by Pacific Northwest National Laboratory to technically evaluate the current approach to staged feed sampling of high-level waste (HLW) sludge to meet waste acceptance criteria (WAC) for transfer from tank farms to the Hanford Waste Treatment and Immobilization Plant (WTP). The current sampling and analysis approach is detailed in the document titled Initial Data Quality Objectives for WTP Feed Acceptance Criteria, 24590-WTP-RPT-MGT-11-014, Revision 0 (Arakali et al. 2011). The goal of this current work is to evaluate and provide recommendations to support a defensible, technical and statistical basis for the staged feed sampling approach that meets WAC data quality objectives (DQOs).

  12. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method for the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages: a pruning step that detects the scatterer support, and a resolution-enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  13. The effects of the probability activities in thinking science program on the development of the probabilistic thinking of middle school students

    International Nuclear Information System (INIS)

    Shin, Kyung In; Lee, Sang Kwon; Shin, Ae Kyung; Choi, Byung Soon

    2003-01-01

    The purposes of this study were to investigate the correlation between cognitive level and probabilistic thinking level, and to analyze the effects of the probability activities in the Thinking Science (TS) program on the development of probabilistic thinking. A sample of 219 7th-grade middle school students was divided into an experimental group and a control group. The probability activities in the TS program were implemented in the experimental group, while only the normal curriculum was taught in the control group. The results showed that most 7th-grade students were in the concrete operational stage and used both subjective and quantitative strategies simultaneously in probability problem solving. It was also found that the higher the cognitive level of the students, the higher their probabilistic thinking level. Among the constructs of probability, sample space and probability of an event developed earlier than probability comparisons and conditional probability. The probability activities encouraged the students to use a quantitative strategy in probability problem solving and to recognize the probability of an event. The effect was relatively greater for students in the mid concrete operational stage than for those in any other stage.

  14. May one-stage exchange for Candida albicans peri-prosthetic infection be successful?

    Science.gov (United States)

    Jenny, J-Y; Goukodadja, O; Boeri, C; Gaudias, J

    2016-02-01

    Fungal infection of a total joint arthroplasty has a low incidence but is generally considered more difficult to cure than bacterial infection. As for bacterial infection, two-stage exchange is considered the gold standard of treatment. We report two cases of one-stage total joint exchange for fungal peri-prosthetic infection with Candida albicans, where the responsible pathogen was only identified on intraoperative samples. This situation can be considered a one-stage exchange for fungal peri-prosthetic infection without preoperative identification of the responsible organism, which is considered to have a poor prognosis. Both cases were free of infection after two years. One-stage revision has several potential advantages over two-stage revision, including shorter hospital stay and rehabilitation, no interim period with significant functional impairment, shorter antibiotic treatment, better functional outcome, and probably lower costs. We suggest that one-stage revision for C. albicans peri-prosthetic infection may be successful even without preoperative fungal identification. Level IV-Historical cases. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  15. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    International Nuclear Information System (INIS)

    Ariunbaatar, Javkhlan; Scotto Di Perta, Ester; Panico, Antonio; Frunzo, Luigi; Esposito, Giovanni; Lens, Piet N.L.; Pirozzi, Francesco

    2015-01-01

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.
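    The relation between the reported IC50 of 3.8 g/L TAN and 146 mg/L free ammonia follows from the free-ammonia equilibrium fraction. A minimal sketch is below; the pH of 7.5 and temperature of 35 °C are assumptions for illustration (the abstract does not state them), and the pKa relation used is the commonly cited temperature-dependent form, not necessarily the one the authors applied.

```python
def free_ammonia_fraction(pH, T_kelvin):
    """Fraction of total ammoniacal nitrogen (TAN) present as free NH3,
    using the commonly cited temperature-dependent ammonium pKa:
    pKa = 0.09018 + 2729.92 / T."""
    pKa = 0.09018 + 2729.92 / T_kelvin
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# Assumed mesophilic conditions (not stated in the abstract): pH 7.5, 35 C
tan_mg_per_L = 3800.0                 # the reported IC50 expressed as TAN
frac = free_ammonia_fraction(7.5, 308.15)
fan_mg_per_L = tan_mg_per_L * frac    # free ammonia nitrogen, mg/L
```

    Under these assumed conditions the sketch yields roughly 130 mg/L free ammonia for 3.8 g/L TAN, the same order as the abstract's 146 mg/L; the exact figure depends on the actual reactor pH and temperature.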

  16. Staging of gastric adenocarcinoma using two-phase spiral CT: correlation with pathologic staging

    International Nuclear Information System (INIS)

    Seo, Tae Seok; Lee, Dong Ho; Ko, Young Tae; Lim, Joo Won

    1998-01-01

    To correlate the preoperative staging of gastric adenocarcinoma using two-phase spiral CT with pathologic staging. One hundred and eighty patients with gastric cancers confirmed during surgery underwent two-phase spiral CT and were evaluated retrospectively. CT scans were obtained in the prone position after ingestion of water. Scans were performed 35 and 80 seconds after the start of infusion of 120 mL of non-ionic contrast material at a rate of 3 mL/sec. Five mm collimation, 7 mm/sec table feed, and 5 mm reconstruction interval were used. T- and N-stage were determined from spiral CT images without knowledge of the pathologic results. Pathologic staging was later compared with CT staging. Pathologic T-stage was T1 in 70 cases (38.9%), T2 in 33 (18.3%), T3 in 73 (40.6%), and T4 in 4 (2.2%). Type-I or IIa elevated lesions accounted for 10 of 70 T1 cases (14.3%) and flat or depressed lesions (type IIb, IIc, or III) for 60 (85.7%). Pathologic N-stage was N0 in 85 cases (47.2%), N1 in 42 (23.3%), N2 in 31 (17.2%), and N3 in 22 (12.2%). The detection rate of early gastric cancer using two-phase spiral CT was 100.0% (10 of 10 cases) among elevated lesions and 78.3% (47 of 60 cases) among flat or depressed lesions. With regard to T-stage, there was good correlation between CT images and pathology in 86 of 180 cases (47.8%). Overstaging occurred in 23.3% (42 of 180 cases) and understaging in 28.9% (52 of 180 cases). With regard to N-stage, good correlation between CT images and pathology was noted in 94 of 180 cases (52.2%). The rate of understaging (31.7%, 57 of 180 cases) was higher than that of overstaging (16.1%, 29 of 180 cases) (p<0.001). The overall detection rate of early gastric cancer using two-phase spiral CT was 81.4%, with no significant difference in detectability between elevated and depressed lesions. Two-phase spiral CT for determining the T- and N-stage of gastric cancer was not effective; it was accurate in about 50% of cases, and understaging tended to occur.

  17. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  18. Study of shallow junction formation by boron-containing cluster ion implantation of silicon and two-stage annealing

    Science.gov (United States)

    Lu, Xin-Ming

    Shallow junction formation by low-energy ion implantation and rapid thermal annealing faces a major challenge for ULSI (ultra-large-scale integration) as the line width decreases into the sub-micrometer region. The issues include low beam current and the channeling effect in low-energy ion implantation, and TED (transient enhanced diffusion) during annealing after ion implantation. In this work, boron-containing small cluster ions, such as GeB, SiB, and SiB2, were generated using the SNICS (source of negative ions by cesium sputtering) ion source and implanted into Si substrates to form shallow junctions. The use of boron-containing cluster ions effectively reduces the boron energy while keeping the energy of the cluster ion beam at a high level. At the same time, it reduces the channeling effect through amorphization by the co-implanted heavy atoms such as Ge and Si. Cluster ions have been used to produce 0.65-2 keV boron for low-energy ion implantation. Two-stage annealing, a combination of low-temperature (550°C) preannealing and high-temperature (1000°C) annealing, was carried out on the Si samples implanted with GeB and SiBn clusters. The key concept of two-stage annealing, that is, the separation of crystal regrowth and point-defect removal with dopant activation from dopant diffusion, is discussed in detail. The advantages of two-stage annealing include better lattice structure, better dopant activation, and retarded boron diffusion. The junction depth of the two-stage annealed GeB sample was only half that of the one-step annealed sample, indicating that TED was suppressed by two-stage annealing. Junction depths as small as 30 nm have been achieved by two-stage annealing at 1000°C for 1 second of samples implanted with 5 x 10-4/cm2 of 5 keV GeB. The samples were evaluated by SIMS (secondary ion mass spectrometry) profiling, TEM (transmission electron microscopy), and RBS (Rutherford backscattering spectrometry)/channeling. Cluster ion implantation

  19. Two phase sampling

    CERN Document Server

    Ahmad, Zahoor; Hanif, Muhammad

    2013-01-01

    The development of estimators of population parameters based on two-phase sampling schemes has seen a dramatic increase in the past decade. Various authors have developed estimators of population parameters using either one or two auxiliary variables. The present volume is a comprehensive collection of estimators available in single- and two-phase sampling. The book covers estimators which utilize information on single, two, and multiple auxiliary variables of both quantitative and qualitative nature. Th...

  20. Prognosis of endometrial carcinoma stage I in two Swedish regions

    International Nuclear Information System (INIS)

    Sorbe, B.; Kjellgren, O.; Stenson, S.; Umeaa Univ. Hospital; Uppsala Univ.

    1990-01-01

    A high dose-rate afterloading technique (60Co) was compared with a low dose-rate packing method (226Ra) in the treatment of endometrial carcinoma stage I. In all, 1021 patients treated during the period 1977-1986 at two Swedish gynecologic oncology centers were analyzed regarding treatment set-up, histopathologic outcome in the operative specimens, recurrence rates, survival rates, and radiation side effects. Complete tumor eradication in the operative specimen was achieved in 80% after radium therapy and in 60% after irradiation by the high dose-rate technique. The overall recurrence rate was 15.7% in the radium packing series and 11.5% after cobalt afterloading treatment. The risk of pelvic recurrence increased by a factor of 2.1-2.6 if hysterectomy was replaced by dilatation and curettage. The two radiation techniques seemed to be comparable with regard to the risk of both pelvic recurrences and distant metastases. The 5-year crude survival rates were 85% in the afterloading series and 82% in the radium series. The corrected survival rates were similar (90%) for the two techniques. Age, tumor grade, and uterine size were significant prognostic factors with regard to the probability of death due to cancer. Early radiation reactions occurred at quite similar rates in the two series, whereas late radiation reactions were more frequent in the high dose-rate afterloading group in the 10-12 Gy dose fraction range, but not in the 5-8 Gy range. The radium packing method seemed to give a higher frequency of tumor-free operative specimens in this study, but with regard to recurrence rates and survival probabilities the techniques were comparable. Since the different proportions of surgery in the two series and the histopathologic evaluation might have influenced the rate of local tumor eradication in the operative specimens and the risk of pelvic recurrences, the results must be assessed with great caution and only a crude comparison of the two treatment techniques could be made. (orig.)

  1. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of the risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of the fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and its effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal amount invested in the risky security increases as the risk-free return decreases, and that the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  2. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to make a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide-bandwidth bilateral control system can be constructed based on mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system was developed and its basic performance evaluated by experiment. The experimental results showed that a light mechanical impedance, with a mass of 10 g and a damping coefficient of 2.5 N/(m/s), an important factor in establishing good transparency in bilateral control, was successfully achieved, and that better force and position responses between master and slave are obtained with the proposed two-stage actuator system than in a narrow-bandwidth case using a single ball screw system. (author)
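    The mechanical impedance target quoted in this abstract (a virtual mass of 10 g with damping of 2.5 N/(m/s)) can be illustrated with a minimal impedance-control sketch. The discrete-time update and the 1 kHz control rate below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of rendering a virtual mass-damper impedance with the
# target values quoted in the abstract: m = 10 g, b = 2.5 N/(m/s).
m = 0.010   # virtual mass, kg
b = 2.5     # virtual damping, N/(m/s)
dt = 1e-3   # assumed control period, s

def step(f_ext, v):
    """One control step: m*a + b*v = f_ext gives the commanded
    acceleration, which is integrated into a new velocity command."""
    a = (f_ext - b * v) / m
    return v + a * dt

# A constant 1 N push from rest settles toward f/b = 0.4 m/s with a
# time constant of m/b = 4 ms, i.e. well within 100 steps here.
v = 0.0
for _ in range(100):
    v = step(1.0, v)
```

    The small mass-to-damping ratio is what makes the rendered handle feel light; the same law scales to the master-slave force reflection described in the abstract.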

  3. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first stage precipitator, precipitation temperatures, and residence time in the first stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide

  4. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    Energy Technology Data Exchange (ETDEWEB)

    Ariunbaatar, Javkhlan, E-mail: jaka@unicas.it [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Scotto Di Perta, Ester [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy); Panico, Antonio [Telematic University PEGASO, Piazza Trieste e Trento, 48, 80132 Naples (Italy); Frunzo, Luigi [Department of Mathematics and Applications Renato Caccioppoli, University of Naples Federico II, Via Claudio, 21, 80125 Naples (Italy); Esposito, Giovanni [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); Lens, Piet N.L. [UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Pirozzi, Francesco [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy)

    2015-04-15

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.

  5. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    Full Text Available In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using the bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSUs), and the sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e., households, are selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with some other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). The jackknife and bootstrap are used for replication-based variance estimation. Simple random sampling with sample sizes of 462 to 561 gave moderate variances by both jackknife and bootstrap. Applying systematic sampling, we obtained moderate variance with a sample size of 467. In the jackknife with systematic sampling, the variance of the regression estimator was greater than that of the ratio estimator for sample sizes of 467 to 631; at a sample size of 952, the variance of the ratio estimator became greater than that of the regression estimator. The most efficient design turned out to be ranked set sampling compared with the other designs: ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. The multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
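    The ratio estimation with jackknife variance replication described in this abstract can be sketched as follows. The income and household-size data below are hypothetical, and the known auxiliary population mean is an assumption; the HIES design itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sample: household size x (auxiliary) and income y (study variable)
x = rng.integers(1, 9, size=200).astype(float)
y = 5000.0 * x + rng.normal(0.0, 2000.0, size=200)

X_BAR_POP = 4.5  # assumed known population mean household size

def ratio_estimate(y, x, x_bar_pop):
    """Classical ratio estimator of the population mean of y."""
    return (y.mean() / x.mean()) * x_bar_pop

theta = ratio_estimate(y, x, X_BAR_POP)

# Delete-one jackknife variance of the ratio estimator
n = len(y)
theta_i = np.array([
    ratio_estimate(np.delete(y, i), np.delete(x, i), X_BAR_POP)
    for i in range(n)
])
var_jack = (n - 1) / n * np.sum((theta_i - theta_i.mean()) ** 2)
```

    The regression estimator the abstract compares against replaces the ratio y.mean()/x.mean() with a least-squares slope; the same delete-one loop then yields its jackknife variance.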

  6. Two-stage dental implants inserted in a one-stage procedure : a prospective comparative clinical study

    NARCIS (Netherlands)

    Heijdenrijk, Kees

    2002-01-01

    The results of this study indicate that dental implants designed for a submerged implantation procedure can be used in a single-stage procedure and may be as predictable as one-stage implants. Although one-stage implant systems and two-stage.

  7. Correcting Classifiers for Sample Selection Bias in Two-Phase Case-Control Studies

    Science.gov (United States)

    Theis, Fabian J.

    2017-01-01

Epidemiological studies often utilize stratified data in which rare outcomes or exposures are artificially enriched. This design can increase precision in association tests but distorts predictions when applying classifiers to nonstratified data. Several methods correct for this so-called sample selection bias, but their performance remains unclear, especially for machine learning classifiers. With an emphasis on two-phase case-control studies, we aim to assess which corrections to perform in which setting and to obtain methods suitable for machine learning techniques, especially the random forest. We propose two new resampling-based methods to resemble the original data and covariance structure: stochastic inverse-probability oversampling and parametric inverse-probability bagging. We compare all techniques for the random forest and other classifiers, both theoretically and on simulated and real data. Empirical results show that the random forest profits from only the parametric inverse-probability bagging proposed by us. For other classifiers, correction is mostly advantageous, and the methods perform uniformly. We discuss the consequences of inappropriate distribution assumptions and the reasons for the different behaviors of the random forest and other classifiers. In conclusion, we provide guidance for choosing correction methods when training classifiers on biased samples. For random forests, our method outperforms state-of-the-art procedures if distribution assumptions are roughly fulfilled. We provide our implementation in the R package sambia. PMID:29312464
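The authors' implementation is the R package sambia; as a language-neutral illustration, a minimal Python sketch of the inverse-probability resampling idea (drawing each unit with weight inversely proportional to its known stratum sampling probability, so the resample mimics the population mix) might look as follows. The strata, counts and sampling probabilities are invented:

```python
import random

random.seed(0)

# Hypothetical biased training set: cases were enriched relative to their
# population frequency.  sampling_prob[s] is the (assumed known)
# probability that a population unit of stratum s entered the study.
data = [("case", x) for x in range(100)] + [("control", x) for x in range(300)]
sampling_prob = {"case": 0.50, "control": 0.05}

def ip_oversample(data, sampling_prob, k):
    """Inverse-probability oversampling: draw k units with replacement,
    weighting each unit by 1 / (its stratum's sampling probability)."""
    weights = [1.0 / sampling_prob[s] for s, _ in data]
    return random.choices(data, weights=weights, k=k)

resampled = ip_oversample(data, sampling_prob, k=10000)
frac_cases = sum(1 for s, _ in resampled if s == "case") / len(resampled)
# Implied population case fraction: (100/0.5) / (100/0.5 + 300/0.05)
print(round(frac_cases, 3))
```

A classifier trained on `resampled` then sees approximately the population class balance rather than the enriched study balance; the paper's parametric variant additionally perturbs the resampled points to restore the covariance structure.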

  8. Comparative assessment of single-stage and two-stage anaerobic digestion for the treatment of thin stillage.

    Science.gov (United States)

    Nasr, Noha; Elbeshbishy, Elsayed; Hafez, Hisham; Nakhla, George; El Naggar, M Hesham

    2012-05-01

A comparative evaluation of single-stage and two-stage anaerobic digestion processes for biomethane and biohydrogen production using thin stillage was performed to assess the impact of separating the acidogenic and methanogenic stages on anaerobic digestion. Thin stillage, the main by-product of ethanol production, was characterized by a high total chemical oxygen demand (TCOD) of 122 g/L and total volatile fatty acids (TVFAs) of 12 g/L. A maximum methane yield of 0.33 L CH4/g COD added (STP) was achieved in the two-stage process, while the single-stage process achieved a maximum yield of only 0.26 L CH4/g COD added (STP). Separating the acidification stage increased the TVFA-to-TCOD ratio from 10% in the raw thin stillage to 54%, owing to the conversion of carbohydrates into hydrogen and VFAs. Comparison of the two processes based on energy outcome revealed that an increase of 18.5% in the total energy yield was achieved using two-stage anaerobic digestion. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The levels of organic compounds and of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer.

  10. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10^9 cm^-2 are realized. Remarkable insulating states are found below a critical density of p_c = 4 × 10^9 cm^-2, or r_s ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition supporting the melting of a Wigner solid as a two-stage first-order transition.

  11. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of the Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of NDT reliability is necessary; a POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of the optimal sample size for deriving a POD curve, so that adequate guidance could be given to practitioners of inspection reliability. Manufacturing test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive, so there is a tendency to reduce sample sizes, which in turn reduces the conservatism associated with the derived POD curve. Not much guidance on the correct sample size can be found in the published literature, where qualitative statements are often given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
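A common way to model hit/miss POD data, which this record discusses only qualitatively, is a logistic curve in log flaw size. The sketch below fits such a curve by plain gradient ascent on simulated data; the "true" parameters and sample size are invented, and this is not the ENIQ-recommended fitting procedure:

```python
import math
import random

random.seed(1)

# Simulated hit/miss inspection data from an assumed "true" POD curve
# POD(a) = logistic(-4 + 3 ln a); flaw sizes a are in arbitrary units.
def true_pod(a):
    return 1 / (1 + math.exp(-(-4 + 3 * math.log(a))))

sizes = [random.uniform(0.5, 10) for _ in range(400)]
hits = [1 if random.random() < true_pod(a) else 0 for a in sizes]

# Fit POD(a) = logistic(b0 + b1 ln a) by gradient ascent on the Bernoulli
# log-likelihood -- a bare-bones stand-in for a proper ML fit.
b0 = b1 = 0.0
for _ in range(2000):
    g0 = g1 = 0.0
    for a, y in zip(sizes, hits):
        x = math.log(a)
        p = 1 / (1 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
    b0 += 0.5 * g0 / len(sizes)
    b1 += 0.5 * g1 / len(sizes)

# a90: the flaw size detected with 90% probability under the fitted curve
# (logit 0.9 = ln 9).
a90 = math.exp((math.log(9) - b0) / b1)
print(round(b0, 2), round(b1, 2), round(a90, 2))
```

In practice the sample-size question discussed in the record shows up as the width of the confidence bound around a90: fewer test pieces mean a wider bound and a more conservative qualified flaw size.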

  12. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

A Spanish subbituminous coal has been processed by two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction was carried out in a continuous pilot plant at Clausthal Technical University in Germany at 400°C and 20 MPa hydrogen pressure, with anthracene oil as solvent. The second-stage coal liquefaction was performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.): 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. The heteroatom removal in the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids have about 70% of compounds with boiling points below 350°C, and meet the sulphur and nitrogen specifications for refinery feedstocks. The liquids from two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  13. A preventive maintenance model with a two-level inspection policy based on a three-stage failure process

    International Nuclear Information System (INIS)

    Wang, Wenbin; Zhao, Fei; Peng, Rui

    2014-01-01

Inspection is always an important preventive maintenance (PM) activity and can have different depths and cover all or part of a plant system. This paper introduces a two-level inspection policy model for a single-component plant system based on a three-stage failure process. Such a failure process divides the system's life into three stages: good, minor defective and severe defective. The first level of inspection, the minor inspection, can only identify the minor defective stage with a certain probability, but always reveals the severe defective stage. The major inspection identifies both defective stages perfectly. Once the system is found to be in the minor defective stage, a shortened inspection interval is adopted. If the system is found to be in the severe defective stage, the maintenance action may be delayed when the time to the next planned PM window is less than a threshold level; otherwise, the system is replaced immediately. This corresponds to a maintenance policy widely adopted in practice, namely periodic inspections with planned PMs. A numerical example is presented to demonstrate the proposed model by comparison with other models. - Highlights: • The system's deterioration goes through a three-stage process: normal, minor defective and severe defective. • Two levels of inspection are proposed: minor and major inspections. • Once the minor defective stage is found, instead of a maintenance action, a shortened inspection interval is recommended. • When the severe defective stage is found, maintenance is delayed according to the threshold to the next PM. • The decision variables are the inspection intervals and the threshold to PM
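A rough Monte Carlo sketch of such a three-stage failure process under a two-level-style inspection rule is given below. The sojourn-time distributions (exponential) and all numerical parameters are assumptions for illustration, not values from the paper, and only the interval-shortening part of the policy is modelled:

```python
import random

random.seed(7)

# Assumed exponential sojourn times for the three stages.
MEAN_GOOD, MEAN_MINOR, MEAN_SEVERE = 100.0, 30.0, 10.0
T_MINOR = 10.0   # routine minor-inspection interval
T_SHORT = 4.0    # shortened interval once the minor stage is spotted
P_MINOR = 0.6    # chance a minor inspection detects the minor stage

def defect_found_before_failure():
    t_minor = random.expovariate(1 / MEAN_GOOD)               # good -> minor
    t_severe = t_minor + random.expovariate(1 / MEAN_MINOR)   # minor -> severe
    t_fail = t_severe + random.expovariate(1 / MEAN_SEVERE)   # severe -> failed
    t, dt = T_MINOR, T_MINOR
    while t < t_fail:
        if t >= t_severe:
            return True          # the severe stage is always revealed
        if t >= t_minor and random.random() < P_MINOR:
            dt = T_SHORT         # minor stage found: inspect more often
        t += dt
    return False                 # system failed before being caught

n = 20000
caught = sum(defect_found_before_failure() for _ in range(n)) / n
print(round(caught, 3))
```

Replacing `caught` with a cost or downtime criterion and searching over `T_MINOR`, `T_SHORT` and the PM threshold would mirror the optimisation the paper carries out analytically.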

  14. The development and validation of the Male Genital Self-Image Scale: results from a nationally representative probability sample of men in the United States.

    Science.gov (United States)

    Herbenick, Debby; Schick, Vanessa; Reece, Michael; Sanders, Stephanie A; Fortenberry, J Dennis

    2013-06-01

    Numerous factors may affect men's sexual experiences, including their health status, past trauma or abuse, medication use, relationships, mood, anxiety, and body image. Little research has assessed the influence of men's genital self-image on their sexual function or behaviors and none has done so in a nationally representative sample. The purpose of this study was to, in a nationally representative probability sample of men ages 18 to 60, assess the reliability and validity of the Male Genital Self-Image Scale (MGSIS), and to examine the relationship between scores on the MGSIS and men's scores on the International Index of Erectile Function (IIEF). The MGSIS was developed in two stages. Phase One involved a review of the literature and an analysis of cross-sectional survey data. Phase Two involved an administration of the scale items to a nationally representative sample of men in the United States ages 18 to 60. Measures include demographic items, the IIEF, and the MGSIS. Overall, most men felt positively about their genitals. However, 24.6% of men expressed some discomfort letting a healthcare provider examine their genitals and about 20% reported dissatisfaction with their genital size. The MGSIS was found to be reliable and valid, with the MGSIS-5 (consisting of five items) being the best fit to the data. The MGSIS was found to be a reliable and valid measure. In addition, men's scores on the MGSIS-5 were found to be positively related to men's scores on the IIEF. © 2013 International Society for Sexual Medicine.

  15. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme that achieves computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is then refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, from which the augmented subset size is rigorously derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved segmentation accuracy comparable to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) versus (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme that achieves significant cost reduction while guaranteeing high segmentation accuracy.
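The preselect-then-refine pattern described in this record can be sketched with toy relevance scores standing in for real registration similarities. The collection sizes and the noise model linking the cheap metric to the expensive one are assumptions:

```python
import random

random.seed(3)

# expensive_score: stand-in for full-fledged registration similarity;
# cheap_score: a noisy low-cost surrogate (the preliminary metric).
N_ATLASES, M_AUG, K_FUSION = 200, 30, 5
expensive_score = [random.random() for _ in range(N_ATLASES)]
cheap_score = [s + random.gauss(0, 0.1) for s in expensive_score]

# Stage 1: trim the collection to an augmented subset with the cheap metric.
augmented = sorted(range(N_ATLASES), key=lambda a: cheap_score[a])[-M_AUG:]
# Stage 2: refine to the fusion set with the expensive metric, which is
# now computed for only M_AUG atlases instead of all N_ATLASES.
fusion = sorted(augmented, key=lambda a: expensive_score[a])[-K_FUSION:]

# Compare with the ideal fusion set chosen by exhaustive expensive scoring.
ideal = sorted(range(N_ATLASES), key=lambda a: expensive_score[a])[-K_FUSION:]
overlap = len(set(fusion) & set(ideal)) / K_FUSION
print(round(overlap, 2))
```

The paper's contribution is choosing `M_AUG` from an inference model of the cheap-vs-expensive relation so that the overlap is high with a guaranteed probability; here it is simply fixed by hand.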

  16. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

... there also seems to be a trend of decreasing urethritis and an increase of instrumentation- and catheter-related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two-stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  17. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    Science.gov (United States)

    Chardon, Jérémy; Hingray, Benoit; Favre, Anne-Catherine

    2018-01-01

Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions based on a statistical link, identified from observations, between local weather and a set of large-scale predictors. As the physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and non-frequent weather configurations.
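The two-stage analog/regression idea can be caricatured on toy data: first select the atmospheric analogs of the prediction day, then estimate occurrence probability and amounts from the analog set. The sketch below replaces the paper's two GLMs with simple empirical estimates over the analogs, and uses a single invented scalar predictor instead of geopotential-height fields:

```python
import math
import random

random.seed(5)

# Toy archive: each day has one large-scale predictor value z (a crude
# stand-in for a geopotential field) and a local rainfall amount.
days = []
for _ in range(3000):
    z = random.gauss(0, 1)
    wet = random.random() < 1 / (1 + math.exp(-2 * (z - 0.5)))
    amount = math.exp(random.gauss(z, 0.5)) if wet else 0.0
    days.append((z, amount))

def predict(z_today, k=100):
    """Stage 1: select the k analogs of today in predictor space.
    Stage 2: estimate occurrence probability and mean non-zero amount
    from the analog set (a crude stand-in for the paper's two GLMs)."""
    analogs = sorted(days, key=lambda d: abs(d[0] - z_today))[:k]
    wet = [a for _, a in analogs if a > 0]
    p_occ = len(wet) / k
    mean_amt = sum(wet) / len(wet) if wet else 0.0
    return p_occ, mean_amt

p_dry, _ = predict(-2.0)
p_wet, amt_wet = predict(2.0)
print(round(p_dry, 2), round(p_wet, 2), round(amt_wet, 2))
```

Fitting a logistic and a gamma regression on the analog set, rather than taking raw frequencies, is what lets the actual model adapt its predictors day by day.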

  18. A Gas-Spring-Loaded X-Y-Z Stage System for X-ray Microdiffraction Sample Manipulation

    International Nuclear Information System (INIS)

    Shu Deming; Cai Zhonghou; Lai, Barry

    2007-01-01

We have designed and constructed a gas-spring-loaded x-y-z stage system for x-ray microdiffraction sample manipulation at the Advanced Photon Source XOR 2-ID-D station. The stage system includes three DC-motor-driven linear stages and a gas-spring-based heavy preloading structure, which provides antigravity forces to ensure that the stage system maintains high positioning performance under variable goniometer orientations. Microdiffraction experiments with this new stage system showed a significant improvement in sample manipulation performance.

  19. Probability an introduction

    CERN Document Server

    Goldberg, Samuel

    1960-01-01

    Excellent basic text covers set theory, probability theory for finite sample spaces, binomial theorem, probability distributions, means, standard deviations, probability function of binomial distribution, more. Includes 360 problems with answers for half.

  20. Late-stage pharmaceutical R&D and pricing policies under two-stage regulation.

    Science.gov (United States)

    Jobjörnsson, Sebastian; Forster, Martin; Pertile, Paolo; Burman, Carl-Fredrik

    2016-12-01

    We present a model combining the two regulatory stages relevant to the approval of a new health technology: the authorisation of its commercialisation and the insurer's decision about whether to reimburse its cost. We show that the degree of uncertainty concerning the true value of the insurer's maximum willingness to pay for a unit increase in effectiveness has a non-monotonic impact on the optimal price of the innovation, the firm's expected profit and the optimal sample size of the clinical trial. A key result is that there exists a range of values of the uncertainty parameter over which a reduction in uncertainty benefits the firm, the insurer and patients. We consider how different policy parameters may be used as incentive mechanisms, and the incentives to invest in R&D for marginal projects such as those targeting rare diseases. The model is calibrated using data on a new treatment for cystic fibrosis. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Electrofishing capture probability of smallmouth bass in streams

    Science.gov (United States)

    Dauwalter, D.C.; Fisher, W.L.

    2007-01-01

Abundance estimation is an integral part of understanding the ecology and advancing the management of fish populations and communities. Mark-recapture and removal methods are commonly used to estimate the abundance of stream fishes. Alternatively, abundance can be estimated by dividing the number of individuals sampled by the probability of capture. We conducted a mark-recapture study and used multiple repeated-measures logistic regression to determine the influence of fish size, sampling procedures, and stream habitat variables on the cumulative capture probability for smallmouth bass Micropterus dolomieu in two eastern Oklahoma streams. The predicted capture probability was used to adjust the number of individuals sampled to obtain abundance estimates. The observed capture probabilities were higher for larger fish and decreased with successive electrofishing passes for larger fish only. Model selection suggested that the number of electrofishing passes, fish length, and mean thalweg depth affected capture probabilities the most; there was little evidence for any effect of electrofishing power density and woody debris density on capture probability. Leave-one-out cross validation showed that the cumulative capture probability model predicts smallmouth bass abundance accurately. © Copyright by the American Fisheries Society 2007.
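The adjustment this record describes, dividing the count of sampled individuals by the cumulative capture probability, is simple enough to state directly. The per-pass probability, the independence assumption across passes, and the fish count below are all made up for illustration:

```python
# N-hat = n / p: the canonical capture-probability adjustment.
def abundance_estimate(n_captured, capture_prob):
    if not 0 < capture_prob <= 1:
        raise ValueError("capture probability must be in (0, 1]")
    return n_captured / capture_prob

# Cumulative capture probability over 3 electrofishing passes, assuming
# (for this sketch only) an equal, independent per-pass probability q.
q = 0.4
p_cum = 1 - (1 - q) ** 3      # probability of being caught in any pass
n_hat = abundance_estimate(47, p_cum)
print(round(p_cum, 3), round(n_hat, 1))
```

In the study itself, `p_cum` comes from the fitted logistic regression and so varies with fish length and habitat covariates rather than being a single constant.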

  2. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  3. Routine conventional karyotyping of lymphoma staging bone marrow samples does not contribute clinically relevant information.

    Science.gov (United States)

    Nardi, Valentina; Pulluqi, Olja; Abramson, Jeremy S; Dal Cin, Paola; Hasserjian, Robert P

    2015-06-01

    Bone marrow (BM) evaluation is an important part of lymphoma staging, which guides patient management. Although positive staging marrow is defined as morphologically identifiable disease, such samples often also include flow cytometric analysis and conventional karyotyping. Cytogenetic analysis is a labor-intensive and costly procedure and its utility in this setting is uncertain. We retrospectively reviewed pathological reports of 526 staging marrow specimens in which conventional karyotyping had been performed. All samples originated from a single institution from patients with previously untreated Hodgkin and non-Hodgkin lymphomas presenting in an extramedullary site. Cytogenetic analysis revealed clonal abnormalities in only eight marrow samples (1.5%), all of which were positive for lymphoma by morphologic evaluation. Flow cytometry showed a small clonal lymphoid population in three of the 443 morphologically negative marrow samples (0.7%). Conventional karyotyping is rarely positive in lymphoma staging marrow samples and, in our cohort, the BM karyotype did not contribute clinically relevant information in the vast majority of cases. Our findings suggest that karyotyping should not be performed routinely on BM samples taken to stage previously diagnosed extramedullary lymphomas unless there is pathological evidence of BM involvement by lymphoma. © 2015 Wiley Periodicals, Inc.

  4. [Comparison research on two-stage sequencing batch MBR and one-stage MBR].

    Science.gov (United States)

    Yuan, Xin-Yan; Shen, Heng-Gen; Sun, Lei; Wang, Lin; Li, Shi-Feng

    2011-01-01

To address problems in MBR operation such as low nitrogen and phosphorus removal efficiency and severe membrane fouling, a comparative study of a two-stage sequencing batch MBR (TSBMBR) and a one-stage aerobic MBR was carried out in this paper. The results indicated that the TSBMBR retains the advantages of an SBR in removing nitrogen and phosphorus, which makes up for the deficiency of the traditional one-stage aerobic MBR in nitrogen and phosphorus removal. During the steady operation period, the average effluent NH4+-N, TN and TP concentrations were 2.83, 12.20 and 0.42 mg/L, respectively, meeting the standard for domestic scenic environment reuse. From the membrane fouling control point of view, the TSBMBR had lower SMP in the supernatant, a lower specific trans-membrane flux decline rate, and lower membrane fouling resistance than the one-stage aerobic MBR. The sedimentation and gel layer resistance of the TSBMBR was only 6.5% and 33.12% of that of the one-stage aerobic MBR. Besides its high efficiency in removing nitrogen and phosphorus, the TSBMBR could effectively reduce sedimentation and gel-layer fouling on the membrane surface. Compared with the one-stage MBR, the TSBMBR could operate with a higher trans-membrane flux, a lower membrane fouling rate and better pollutant removal.

  5. Energy demand in Portuguese manufacturing: a two-stage model

    International Nuclear Information System (INIS)

    Borges, A.M.; Pereira, A.M.

    1992-01-01

    We use a two-stage model of factor demand to estimate the parameters determining energy demand in Portuguese manufacturing. In the first stage, a capital-labor-energy-materials framework is used to analyze the substitutability between energy as a whole and other factors of production. In the second stage, total energy demand is decomposed into oil, coal and electricity demands. The two stages are fully integrated since the energy composite used in the first stage and its price are obtained from the second stage energy sub-model. The estimates obtained indicate that energy demand in manufacturing responds significantly to price changes. In addition, estimation results suggest that there are important substitution possibilities among energy forms and between energy and other factors of production. The role of price changes in energy-demand forecasting, as well as in energy policy in general, is clearly established. (author)

  6. The influence of magnetic field strength in ionization stage on ion transport between two stages of a double stage Hall thruster

    International Nuclear Information System (INIS)

    Yu Daren; Song Maojiang; Li Hong; Liu Hui; Han Ke

    2012-01-01

Designing a dedicated ionization stage for a double-stage Hall thruster is futile if the ions it produces cannot enter the acceleration stage. Based on this viewpoint, the ion transport under different magnetic field strengths in the ionization stage is investigated, and the physical mechanisms affecting it are analyzed in this paper. A combined experimental and particle-in-cell simulation study shows that the ion transport between the two stages is chiefly governed by the potential well, the potential barrier, and the potential drop at the bottom of the potential well. As the magnetic field strength in the ionization stage increases, the plasma density grows because of a deeper potential well. Furthermore, the potential barrier near the intermediate electrode first declines and then rises, while the potential drop at the bottom of the potential well first rises and then declines. Consequently, both the ion current entering the acceleration stage and the total ion current ejected from the thruster first rise and then decline as the magnetic field strength in the ionization stage increases. There is therefore an optimal magnetic field strength in the ionization stage for guiding the ion transport between the two stages.

  7. An adaptive two-stage analog/regression model for probabilistic prediction of small-scale precipitation in France

    Directory of Open Access Journals (Sweden)

    J. Chardon

    2018-01-01

Full Text Available Statistical downscaling models (SDMs) are often used to produce local weather scenarios from large-scale atmospheric information. SDMs include transfer functions based on a statistical link, identified from observations, between local weather and a set of large-scale predictors. As the physical processes driving surface weather vary in time, the most relevant predictors and the regression link are likely to vary in time too. This is well known for precipitation, for instance, and the link is thus often estimated after some seasonal stratification of the data. In this study, we present a two-stage analog/regression model where the regression link is estimated from atmospheric analogs of the current prediction day. Atmospheric analogs are identified from fields of geopotential heights at 1000 and 500 hPa. For the regression stage, two generalized linear models are further used to model the probability of precipitation occurrence and the distribution of non-zero precipitation amounts, respectively. The two-stage model is evaluated for the probabilistic prediction of small-scale precipitation over France. It noticeably improves the skill of the prediction for both precipitation occurrence and amount. As the analog days vary from one prediction day to another, the atmospheric predictors selected in the regression stage and the values of the corresponding regression coefficients can vary from one prediction day to another. The model thus allows for day-to-day adaptive and tailored downscaling. It can also reveal specific predictors for peculiar and non-frequent weather configurations.

  8. Synthesis of Programmable Main-chain Liquid-crystalline Elastomers Using a Two-stage Thiol-acrylate Reaction.

    Science.gov (United States)

    Saed, Mohand O; Torbati, Amir H; Nair, Devatha P; Yakacki, Christopher M

    2016-01-19

    This study presents a novel two-stage thiol-acrylate Michael addition-photopolymerization (TAMAP) reaction to prepare main-chain liquid-crystalline elastomers (LCEs) with facile control over network structure and programming of an aligned monodomain. Tailored LCE networks were synthesized using routine mixing of commercially available starting materials and pouring monomer solutions into molds to cure. An initial polydomain LCE network is formed via a self-limiting thiol-acrylate Michael-addition reaction. Strain-to-failure and glass transition behavior were investigated as a function of crosslinking monomer, pentaerythritol tetrakis(3-mercaptopropionate) (PETMP). An example non-stoichiometric system of 15 mol% PETMP thiol groups and an excess of 15 mol% acrylate groups was used to demonstrate the robust nature of the material. The LCE formed an aligned and transparent monodomain when stretched, with a maximum failure strain over 600%. Stretched LCE samples were able to demonstrate both stress-driven thermal actuation when held under a constant bias stress or the shape-memory effect when stretched and unloaded. A permanently programmed monodomain was achieved via a second-stage photopolymerization reaction of the excess acrylate groups when the sample was in the stretched state. LCE samples were photo-cured and programmed at 100%, 200%, 300%, and 400% strain, with all samples demonstrating over 90% shape fixity when unloaded. The magnitude of total stress-free actuation increased from 35% to 115% with increased programming strain. Overall, the two-stage TAMAP methodology is presented as a powerful tool to prepare main-chain LCE systems and explore structure-property-performance relationships in these fascinating stimuli-sensitive materials.

  9. On Wasserstein Two-Sample Testing and Related Families of Nonparametric Tests

    Directory of Open Access Journals (Sweden)

    Aaditya Ramdas

    2017-01-01

Full Text Available Nonparametric two-sample or homogeneity testing is a decision-theoretic problem that involves identifying differences between two random variables without making parametric assumptions about their underlying distributions. The literature is old and rich, with a wide variety of statistics having been designed and analyzed, both for the unidimensional and the multivariate setting. In this short survey, we focus on test statistics that involve the Wasserstein distance. Using an entropic smoothing of the Wasserstein distance, we connect these to very different tests including multivariate methods involving energy statistics and kernel-based maximum mean discrepancy, and univariate methods like the Kolmogorov–Smirnov test, probability or quantile (PP/QQ) plots and receiver operating characteristic or ordinal dominance (ROC/ODC) curves. Some observations are implicit in the literature, while others seem not to have been noticed thus far. Given nonparametric two-sample testing's classical and continued importance, we aim to provide useful connections for theorists and practitioners familiar with one subset of methods but not others.
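For equal-size samples, the empirical one-dimensional 1-Wasserstein distance reduces to the mean absolute difference between order statistics, which makes a small permutation test easy to sketch. This is illustrative only; the survey itself covers far more general statistics and settings:

```python
import random

random.seed(11)

def wasserstein_1d(x, y):
    """Empirical 1-D 1-Wasserstein distance for equal-size samples:
    the mean absolute difference between order statistics."""
    return sum(abs(a - b) for a, b in zip(sorted(x), sorted(y))) / len(x)

def perm_test(x, y, n_perm=500):
    """Permutation p-value for H0: both samples share one distribution."""
    obs = wasserstein_1d(x, y)
    pooled = x + y
    count = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        if wasserstein_1d(pooled[:len(x)], pooled[len(x):]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)

x = [random.gauss(0, 1) for _ in range(100)]
y = [random.gauss(1, 1) for _ in range(100)]   # mean shifted by 1
pval = perm_test(x, y)
print(round(pval, 4))
```

The permutation null sidesteps distributional assumptions entirely, which is in the same nonparametric spirit as the tests the survey compares.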

  10. Single-stage Acetabular Revision During Two-stage THA Revision for Infection is Effective in Selected Patients.

    Science.gov (United States)

    Fink, Bernd; Schlumberger, Michael; Oremek, Damian

    2017-08-01

    The treatment of periprosthetic infections of hip arthroplasties typically involves use of either a single- or two-stage (with implantation of a temporary spacer) revision surgery. In patients with severe acetabular bone deficiencies, either already present or after component removal, spacers cannot be safely implanted. In such hips where it is impossible to use spacers and yet a two-stage revision of the prosthetic stem is recommended, we have combined a two-stage revision of the stem with a single revision of the cup. To our knowledge, this approach has not been reported before. (1) What proportion of patients treated with single-stage acetabular reconstruction as part of a two-stage revision for an infected THA remain free from infection at 2 or more years? (2) What are the Harris hip scores after the first stage and at 2 years or more after the definitive reimplantation? Between June 2009 and June 2014, we treated all patients undergoing surgical treatment for an infected THA using a single-stage acetabular revision as part of a two-stage THA exchange if the acetabular defect classification was Paprosky Types 2B, 2C, 3A, 3B, or pelvic discontinuity and a two-stage procedure was preferred for the femur. The procedure included removal of all components, joint débridement, definitive acetabular reconstruction (with a cage to bridge the defect, and a cemented socket), and a temporary cemented femoral component at the first stage; the second stage consisted of repeat joint and femoral débridement and exchange of the femoral component to a cementless device. During the period noted, 35 patients met those definitions and were treated with this approach. No patients were lost to followup before 2 years; mean followup was 42 months (range, 24-84 months). The clinical evaluation was performed with the Harris hip scores and resolution of infection was assessed by the absence of clinical signs of infection and a C-reactive protein level less than 10 mg/L. All

  11. Can the dissociative PTSD subtype be identified across two distinct trauma samples meeting caseness for PTSD?

    Science.gov (United States)

    Hansen, Maj; Műllerová, Jana; Elklit, Ask; Armour, Cherie

    2016-08-01

    For over a century, the occurrence of dissociative symptoms in connection to traumatic exposure has been acknowledged in the scientific literature. Recently, the importance of dissociation has also been recognized in the long-term traumatic response within the DSM-5 nomenclature. Several studies have confirmed the existence of the dissociative posttraumatic stress disorder (PTSD) subtype. However, there is a lack of studies investigating latent profiles of PTSD solely in victims with PTSD. This study investigates the possible presence of PTSD subtypes using latent class analysis (LCA) across two distinct trauma samples meeting caseness for DSM-5 PTSD based on self-reports (N = 787). Moreover, we assessed if a number of risk factors resulted in an increased probability of membership in a dissociative compared with a non-dissociative PTSD class. The results of LCA revealed a two-class solution with two highly symptomatic classes: a dissociative class and a non-dissociative class across both samples. Increased emotion-focused coping increased the probability of individuals being grouped into the dissociative class across both samples. Social support reduced the probability of individuals being grouped into the dissociative class but only in the victims of motor vehicle accidents (MVAs) suffering from whiplash. The results are discussed in light of their clinical implications and suggest that the dissociative subtype can be identified in victims of incest and victims of MVA suffering from whiplash meeting caseness for DSM-5 PTSD.

  12. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

    Full Text Available With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional camera-based human detection is not high enough to meet the requirements of operation surveillance. One key reason is that Histograms of Oriented Gradients (HOG) features differ greatly between front-and-back standing (F&B) and side standing (Side) human bodies. Consequently, when HOG features are extracted directly from samples of different postures, training yields only a few specific features that contribute to classification, which is insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first, preprocessing stage, images are divided into possible F&B human bodies and non-F&B bodies; the latter are then passed to a second-stage classification that distinguishes side-standing humans from non-humans. Experimental results in Tianjin port show that the two-stage classifier clearly improves the classification accuracy of human detection.
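
    The cascade described above can be sketched generically. The predicates `is_fb_human` and `is_side_human` stand in for the two trained HOG classifiers and are hypothetical names, not from the paper.

```python
def two_stage_classify(sample, is_fb_human, is_side_human):
    """Two-stage cascade: stage 1 screens for front/back-posture humans;
    everything else goes to a stage-2 side-posture vs. non-human check."""
    if is_fb_human(sample):        # stage 1: coarse posture filter
        return "human"
    return "human" if is_side_human(sample) else "non-human"

# Toy usage with stub classifiers in place of trained HOG/SVM models
label = two_stage_classify("frame_001", lambda s: False, lambda s: True)
# label == "human": missed by the F&B stage, caught by the side stage
```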

  13. Non-Archimedean Probability

    NARCIS (Netherlands)

    Benci, Vieri; Horsten, Leon; Wenmackers, Sylvia

    We propose an alternative approach to probability theory closely related to the framework of numerosity theory: non-Archimedean probability (NAP). In our approach, unlike in classical probability theory, all subsets of an infinite sample space are measurable and only the empty set gets assigned

  14. Probability Aggregates in Probability Answer Set Programming

    OpenAIRE

    Saad, Emad

    2013-01-01

    Probability answer set programming is a declarative programming paradigm that has been shown effective for representing and reasoning about a variety of probabilistic reasoning tasks. However, the lack of probability aggregates, e.g. expected values, in the language of disjunctive hybrid probability logic programs (DHPP) disallows the natural and concise representation of many interesting problems. In this paper, we extend DHPP to allow arbitrary probability aggregates. We introduce two types of p...

  15. Tumor Control Probability Modeling for Stereotactic Body Radiation Therapy of Early-Stage Lung Cancer Using Multiple Bio-physical Models

    Science.gov (United States)

    Liu, Feng; Tai, An; Lee, Percy; Biswas, Tithi; Ding, George X.; El Naqa, Isaam; Grimm, Jimm; Jackson, Andrew; Kong, Feng-Ming (Spring); LaCouture, Tamara; Loo, Billy; Miften, Moyed; Solberg, Timothy; Li, X Allen

    2017-01-01

    Purpose To analyze pooled clinical data using different radiobiological models and to understand the relationship between biologically effective dose (BED) and tumor control probability (TCP) for stereotactic body radiotherapy (SBRT) of early-stage non-small cell lung cancer (NSCLC). Method and Materials The clinical data of 1-, 2-, 3-, and 5-year actuarial or Kaplan-Meier TCP from 46 selected studies of SBRT for NSCLC were collected from the literature. The TCP data were separated for Stage T1 and T2 tumors where possible, and otherwise collected for combined stages. BED was calculated at isocenters using six radiobiological models. For each model, the independent model parameters were determined from a fit to the TCP data using the least chi-square (χ2) method, with either one set of parameters regardless of tumor stage or two sets for T1 and T2 tumors separately. Results The fits to the clinical data consistently yield large α/β ratios of about 20 Gy for all models investigated. The regrowth model, which accounts for tumor repopulation and heterogeneity, leads to a better fit to the data than the other five models, whose fits were indistinguishable from one another. Based on the fitted parameters, the models predict that T2 tumors require about 1 Gy of additional physical dose at the isocenter per fraction (≤5 fractions) to achieve the optimal TCP compared with T1 tumors. Conclusion This systematic analysis of a large set of published clinical data using different radiobiological models shows that local TCP for SBRT of early-stage NSCLC depends strongly on BED, with large α/β ratios of about 20 Gy. The six models predict that a BED (calculated with α/β of 20) of 90 Gy is sufficient to achieve TCP ≥ 95%. Among the models considered, the regrowth model leads to a better fit to the clinical data. PMID:27871671
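
    The linear-quadratic BED underlying these fits can be computed directly. The α/β of 20 Gy below is the value reported in the abstract, while the 3 × 18 Gy schedule is only an illustrative SBRT fractionation, not one from the pooled studies.

```python
def bed(n_fractions, dose_per_fraction, alpha_beta=20.0):
    """Biologically effective dose (Gy) under the linear-quadratic model:
    BED = n * d * (1 + d / (alpha/beta))."""
    total_dose = n_fractions * dose_per_fraction
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

# Example: 3 fractions of 18 Gy at alpha/beta = 20 Gy
print(bed(3, 18.0))  # ≈ 102.6 Gy, above the ~90 Gy associated with TCP ≥ 95%
```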

  16. Probability analysis of nuclear power plant hazards

    International Nuclear Information System (INIS)

    Kovacs, Z.

    1985-01-01

    The probability analysis of risk, used for quantifying the risk of complex technological systems, especially nuclear power plants, is described. Risk is defined as the product of the probability of occurrence of a dangerous event and the significance of its consequences. The analysis may be divided into the stage of power plant analysis up to the point of release of harmful material into the environment (reliability analysis) and the stage of analysis of the consequences of this release and assessment of the risk. The sequence of operations in the individual stages is characterized. The tasks which Czechoslovakia faces in developing the probability analysis of risk are listed, and a composition of the work team for coping with the task is recommended. (J.C.)

  17. Probability representations of a class of two-way diffusions

    Energy Technology Data Exchange (ETDEWEB)

    Clifford, P.; Green, N.J.P. [Department of Statistics, University of Oxford, Oxford (United Kingdom); Feng, J.F. [COGS, Sussex University, Brighton (United Kingdom); Wei, G. [Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Kowloon, Hong Kong (China)

    2002-07-19

    There has been little progress in the analysis of two-way diffusion in the last few decades because of the difficulties posed by the interface section, which resembles a free boundary condition. In this paper, however, the equivalent probability model is considered and the interface section is described precisely by an integral equation. The solution of the two-way diffusion is then expressed in an integral form in which the integrand is the solution of a classical first-passage-time model together with the solution of a one-dimensional integral equation that is comparatively easy to solve. The exact expression of the two-way diffusion enables us to find the explicit solution of the model with infinite horizontal boundaries and without drift. (author)

  18. Collective animal behavior from Bayesian estimation and probability matching.

    Directory of Open Access Journals (Sweden)

    Alfonso Pérez-Escudero

    2011-11-01

    Full Text Available Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is mainly based on empirical fits to observations, with less emphasis in obtaining first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probabilistic matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model derives very simple rules of interaction in animal collectives that depend only on two types of reliability parameters, one that each animal assigns to the other animals and another given by the quality of the non-social information. We test our model by obtaining theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows a better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior.
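
    A toy version of the two-stage rule can be written down directly. This is a hedged sketch: the reliability parameterization below, in which each observed choice multiplies the odds by a fixed factor, is an illustrative stand-in for the paper's likelihood, and the function names are ours.

```python
import numpy as np

def posterior_best_behavior(prior, personal_lik, social_counts, reliability):
    """Stage 1 (Bayesian estimation): posterior that each behaviour is best,
    combining a private-cue likelihood with a social term in which each
    observed choice multiplies the odds by a reliability factor."""
    log_p = (np.log(prior) + np.log(personal_lik)
             + np.asarray(social_counts) * np.log(reliability))
    p = np.exp(log_p - log_p.max())   # stabilized normalization
    return p / p.sum()

def probability_matching_choice(posterior, rng):
    """Stage 2 (probability matching): pick behaviour b with probability
    equal to its posterior probability of being the best one."""
    return rng.choice(len(posterior), p=posterior)

# Two behaviours, uninformative private cues, 3 of 4 animals chose behaviour 0
post = posterior_best_behavior([0.5, 0.5], [0.5, 0.5], [3, 1], reliability=2.0)
# post is [0.8, 0.2]: the social majority dominates the estimate
```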

  19. Efficient Bayesian inference of subsurface flow models using nested sampling and sparse polynomial chaos surrogates

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    An efficient Bayesian calibration method based on the nested sampling (NS) algorithm and non-intrusive polynomial chaos method is presented. Nested sampling is a Bayesian sampling algorithm that builds a discrete representation of the posterior distributions by iteratively re-focusing a set of samples to high likelihood regions. NS allows representing the posterior probability density function (PDF) with a smaller number of samples and reduces the curse of dimensionality effects. The main difficulty of the NS algorithm is in the constrained sampling step which is commonly performed using a random walk Markov Chain Monte-Carlo (MCMC) algorithm. In this work, we perform a two-stage sampling using a polynomial chaos response surface to filter out rejected samples in the Markov Chain Monte-Carlo method. The combined use of nested sampling and the two-stage MCMC based on approximate response surfaces provides significant computational gains in terms of the number of simulation runs. The proposed algorithm is applied for calibration and model selection of subsurface flow models. © 2013.
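
    The two-stage (delayed-acceptance) idea can be sketched in isolation: a cheap surrogate log-density screens proposals, and only survivors pay for the expensive posterior evaluation, with a second acceptance step restoring detailed balance. This is a generic illustration, not the paper's polynomial-chaos implementation.

```python
import math
import random

def two_stage_mcmc(x0, log_post, log_surrogate, step, n_steps, rng):
    """Delayed-acceptance Metropolis sampler for a scalar state."""
    x = x0
    chain = [x]
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, step)
        # Stage 1: screen the proposal with the cheap surrogate
        a1 = math.exp(min(0.0, log_surrogate(y) - log_surrogate(x)))
        if rng.random() >= a1:
            chain.append(x)
            continue
        # Stage 2: correct with the expensive posterior (detailed balance)
        log_a2 = (log_post(y) - log_post(x)) - (log_surrogate(y) - log_surrogate(x))
        if rng.random() < math.exp(min(0.0, log_a2)):
            x = y
        chain.append(x)
    return chain

rng = random.Random(42)
chain = two_stage_mcmc(0.0,
                       lambda x: -0.5 * x * x,   # true target: standard normal
                       lambda x: -0.4 * x * x,   # surrogate: slightly too wide
                       step=1.0, n_steps=20000, rng=rng)
```

    Because the stage-2 ratio divides out the surrogate, the chain still targets the true posterior even when the surrogate is biased; the surrogate only changes how many expensive evaluations are avoided.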

  20. Global Analysis of a Model of Viral Infection with Latent Stage and Two Types of Target Cells

    Directory of Open Access Journals (Sweden)

    Shuo Liu

    2013-01-01

    Full Text Available By introducing a probability function describing the latency of infected cells, we unify several models of viral infection with a latent stage. When the probability function is a step function, implying that the latency period of the infected cells is constant, the corresponding model is a delay differential system. The model with latency delay and two types of target cells is investigated, and the results show that when the basic reproduction number is less than or equal to unity, the infection-free equilibrium is globally stable, that is, the free virus in the host is eventually cleared; when the basic reproduction number is greater than unity, the infection equilibrium is globally stable, that is, the viral infection becomes chronic and persists in the host. By comparing the basic reproduction numbers of the ordinary differential system and the associated delayed differential system, we conclude that it is necessary to select an appropriate type of probability function for predicting the final outcome of viral infection in the host.

  1. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    International Nuclear Information System (INIS)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.; Park, T. K.; Deng, P.; Yang, G.; Jung, Y. S.; Kim, T. K.; Stauff, N. E.

    2016-01-01

    This report presents the performance characteristics of two "two-stage" fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements

  2. On A Two-Stage Supply Chain Model In The Manufacturing Industry ...

    African Journals Online (AJOL)

    We model a two-stage supply chain where the upstream stage (stage 2) always meets demand from the downstream stage (stage 1). Demand is stochastic, hence shortages will occasionally occur at stage 2. Stage 2 must fill these shortages by expediting using overtime production and/or backordering. We derive optimal ...

  3. The probability of malignancy in small pulmonary nodules coexisting with potentially operable lung cancer detected by CT

    International Nuclear Information System (INIS)

    Yuan, Yue; Matsumoto, Tsuneo; Hiyama, Atsuto; Miura, Goji; Tanaka, Nobuyuki; Emoto, Takuya; Kawamura, Takeo; Matsunaga, Naofumi

    2003-01-01

    The aim of this study was to assess the probability of malignancy in one or two small nodules 1 cm or less coexisting with potentially operable lung cancer (coexisting small nodules). The preoperative helical CT scans of 223 patients with lung cancer were retrospectively reviewed. The probability of malignancy of coexisting small nodules was evaluated based on nodule size, location, and clinical stage of the primary lung cancers. Seventy-one coexisting small nodules were found on conventional CT in 58 (26%) of 223 patients, and 14 (6%) patients had malignant nodules. Eighteen (25%) of such nodules were malignant. The probability of malignancy was not significantly different between two groups of nodules larger and smaller than 0.5 cm (p=0.1). The probability of malignancy of such nodules within primary tumor lobe was significantly higher than that in the other lobes (p<0.01). Metastatic nodules were significantly fewer in clinical stage-IA patients than in the patients with the other stage (p<0.01); however, four (57%) of seven synchronous lung cancers were located in the non-primary tumor lobes in the clinical stage-I patients. Malignant coexisting small nodules are not infrequent, and such nodules in the non-primary tumor lobes should be carefully diagnosed. (orig.)

  4. Preservation Effect of Two-Stage Cinnamon Bark (Cinnamomum Burmanii) Oleoresin Microcapsules On Vacuum-Packed Ground Beef During Refrigerated Storage

    Science.gov (United States)

    Irfiana, D.; Utami, R.; Khasanah, L. U.; Manuhara, G. J.

    2017-04-01

    The purpose of this study was to determine the effect of two-stage cinnamon bark oleoresin microcapsules (0%, 0.5% and 1%) on the TPC (Total Plate Count), TBA (thiobarbituric acid), pH, and RGB color (Red, Green, and Blue) of vacuum-packed ground beef during refrigerated storage (at 0, 4, 8, 12, and 16 days). This study showed that the addition of two-stage cinnamon bark oleoresin microcapsules affected the quality of vacuum-packed ground beef during 16 days of refrigerated storage. The TPC values of the samples with 0.5% and 1% microcapsules were lower than that of the control sample: 5.94, 5.46, and 5.16 log CFU/g for the control, 0.5%, and 1% samples, respectively. The corresponding TBA values were 0.055, 0.041, and 0.044 mg malonaldehyde/kg on the 16th day of storage. The addition of two-stage cinnamon bark oleoresin microcapsules could inhibit microbial growth and slow the oxidation of vacuum-packed ground beef. Moreover, the changes in pH and RGB color of the samples with 0.5% and 1% microcapsules were smaller than those of the control sample. The addition of 1% microcapsules showed the best preservation effect on the vacuum-packed ground beef.

  5. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of AD into two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as a higher methane content of the produced biogas. In our experiments the reactors were operated in semi-continuous, phase-separated mode. A specific methane production of 1,092 mL/(L·d) was reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of two-stage digestion.

  6. Chemical and functional properties of cell wall polymers from two cherry varieties at two developmental stages.

    Science.gov (United States)

    Basanta, María F; de Escalada Plá, Marina F; Stortz, Carlos A; Rojas, Ana M

    2013-01-30

    The cell wall polysaccharides of Regina and Sunburst cherry varieties at two developmental stages were extracted sequentially, and the changes in their monosaccharide composition and functional properties were studied. The loosely attached pectins showed a lower d-galacturonic acid/rhamnose ratio than the ionically bound pectins, as well as weaker thickening effects in their respective 2% aqueous solutions: the lowest Newtonian viscosity and shear-rate dependence during the pseudoplastic phase. The main constituents of the cell wall matrix were covalently bound pectins (probably through diferulate cross-links), with long arabinan side chains at the RG-I cores. This pectin domain was also anchored into the XG-cellulose elastic network. Ripening occurred with a decrease in the proportion of HGs, water-extractable GGM and xylogalacturonan, and with a concomitant increase in neutral sugars. Ripening was also associated with higher viscosities and thickening effects, and with a broader distribution of molecular weights. The greater firmness and compactness of the Regina cherry may be associated with its higher proportion of calcium-bound HGs localized in the middle lamellae of the cell walls, and with a somewhat higher molar proportion of NS (Rha and Ara) in the covalently bound pectins. These pectins showed significantly better hydration properties than the hemicellulose and cellulose network. The chemical composition and functional properties of the cell wall polymers depended on cherry variety and ripening stage, and helped explain the contrasting firmness of the Regina and Sunburst varieties. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Probability of coincidental similarity among the orbits of small bodies - I. Pairing

    Science.gov (United States)

    Jopek, Tadeusz Jan; Bronikowska, Małgorzata

    2017-09-01

    The probability of coincidental clustering among the orbits of comets, asteroids and meteoroids depends on many factors, such as the size of the orbital sample searched for clusters and the size of the identified group; it is different for groups of 2, 3, 4, … members. Because the probability of coincidental clustering is assessed by numerical simulation, it also depends on the method used to generate the synthetic orbits. We have tested the impact of some of these factors. For a given size of the orbital sample, we have assessed the probability of random pairing among several orbital populations of different sizes, and found how these probabilities vary with the size of the orbital samples. Finally, keeping the size of the orbital sample fixed, we have shown that the probability of random pairing can differ significantly between orbital samples obtained by different observation techniques. For the user's convenience, we have also obtained several formulae that, for a given sample size, can be used to calculate the similarity threshold corresponding to a small probability of coincidental similarity between two orbits.
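
    The simulation idea can be illustrated with a one-dimensional toy stand-in for an orbital similarity criterion (an entirely hypothetical setup, shown only to convey the Monte Carlo shape of the estimate):

```python
import random

def prob_coincidental_pair(n_orbits, threshold, n_trials, rng):
    """Monte Carlo estimate of the probability that a purely random
    sample of `n_orbits` points on [0, 1) contains at least one pair
    closer than `threshold` (a 1-D proxy for orbital similarity)."""
    hits = 0
    for _ in range(n_trials):
        pts = sorted(rng.random() for _ in range(n_orbits))
        # after sorting, the closest pair is always a neighbouring pair
        if any(b - a < threshold for a, b in zip(pts, pts[1:])):
            hits += 1
    return hits / n_trials

rng = random.Random(0)
p_small = prob_coincidental_pair(2, 0.1, 20000, rng)   # analytic value: 0.19
p_large = prob_coincidental_pair(20, 0.1, 2000, rng)   # certain for big samples
```

    For two uniform points the analytic answer is 1 - (1 - 0.1)^2 = 0.19, a useful sanity check; as the sample grows, coincidental pairs become unavoidable, which is exactly why the similarity threshold must shrink with sample size.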

  8. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  9. Sampling the stream landscape: Improving the applicability of an ecoregion-level capture probability model for stream fishes

    Science.gov (United States)

    Mollenhauer, Robert; Mouser, Joshua B.; Brewer, Shannon K.

    2018-01-01

    Temporal and spatial variability in streams result in heterogeneous gear capture probability (i.e., the proportion of available individuals identified) that confounds interpretation of data used to monitor fish abundance. We modeled tow-barge electrofishing capture probability at multiple spatial scales for nine Ozark Highland stream fishes. In addition to fish size, we identified seven reach-scale environmental characteristics associated with variable capture probability: stream discharge, water depth, conductivity, water clarity, emergent vegetation, wetted width–depth ratio, and proportion of riffle habitat. The magnitude of the relationship between capture probability and both discharge and depth varied among stream fishes. We also identified lithological characteristics among stream segments as a coarse-scale source of variable capture probability. The resulting capture probability model can be used to adjust catch data and derive reach-scale absolute abundance estimates across a wide range of sampling conditions with similar effort as used in more traditional fisheries surveys (i.e., catch per unit effort). Adjusting catch data based on variable capture probability improves the comparability of data sets, thus promoting both well-informed conservation and management decisions and advances in stream-fish ecology.
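
    The adjustment itself is simple once a capture-probability model is in hand. The logit-linear form and the coefficients below are placeholders for illustration, not the fitted Ozark Highland model.

```python
import math

def capture_probability(intercept, coefs, covariates):
    """Logit-linear capture probability from reach-scale covariates
    (e.g. discharge, depth, conductivity); coefficients are illustrative."""
    eta = intercept + sum(b * x for b, x in zip(coefs, covariates))
    return 1.0 / (1.0 + math.exp(-eta))

def adjusted_abundance(catch, p_capture):
    """Absolute-abundance estimate: raw catch scaled by the estimated
    proportion of available individuals the gear actually captures."""
    return catch / p_capture

p = capture_probability(0.5, [-0.8, 0.3], [1.2, 0.4])  # hypothetical reach
n_hat = adjusted_abundance(30, p)  # 30 fish caught -> estimated abundance
```

    The same raw catch implies very different abundances in reaches with different capture probabilities, which is why adjusted estimates are more comparable across sites than catch per unit effort.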

  10. On the Sampling

    OpenAIRE

    Güleda Doğan

    2017-01-01

    This editorial is on statistical sampling, which is one of the two most important reasons for editorial rejection from our journal, Turkish Librarianship. The stages of quantitative research, the stage at which sampling takes place, the importance of sampling for a research study, deciding on sample size, and sampling methods are summarised briefly.

  11. Teaching basic life support with an automated external defibrillator using the two-stage or the four-stage teaching technique.

    Science.gov (United States)

    Bjørnshave, Katrine; Krogh, Lise Q; Hansen, Svend B; Nebsbjerg, Mette A; Thim, Troels; Løfgren, Bo

    2018-02-01

    Laypersons often hesitate to perform basic life support (BLS) and use an automated external defibrillator (AED) because of a self-perceived lack of knowledge and skills. Training may reduce the barrier to intervene, and reduced training time and costs may allow training of more laypersons. The aim of this study was to compare BLS/AED skill acquisition and self-evaluated BLS/AED skills after instructor-led training with a two-stage versus a four-stage teaching technique. Laypersons were randomized to courses using either the two-stage or the four-stage teaching technique. Immediately after training, the participants were tested in a simulated cardiac arrest scenario to assess their BLS/AED skills. Skills were assessed using the European Resuscitation Council BLS/AED assessment form. The primary endpoint was passing the test (17 of 17 skills adequately performed). A prespecified noninferiority margin of 20% was used. The two-stage teaching technique (n=72, pass rate 57%) was noninferior to the four-stage technique (n=70, pass rate 59%), with a difference in pass rates of -2%; 95% confidence interval: -18 to 15%. Nor were there significant differences between the two-stage and four-stage groups in chest compression rate (114±12 vs. 115±14/min), chest compression depth (47±9 vs. 48±9 mm) or number of sufficient rescue breaths between compression cycles (1.7±0.5 vs. 1.6±0.7). In both groups, all participants believed that their training had improved their skills. Teaching laypersons BLS/AED using the two-stage teaching technique was noninferior to the four-stage technique, with a pass rate 2 percentage points lower (95% confidence interval: -18 to 15%).
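
    The noninferiority verdict can be reproduced from the reported summary numbers with a Wald interval for a difference in proportions (a sketch; the abstract does not state the trial's exact analysis method):

```python
import math

def noninferiority(p_new, n_new, p_ref, n_ref, margin=-0.20, z=1.96):
    """Wald 95% CI for (p_new - p_ref); the new arm is noninferior
    if the lower CI bound stays above the prespecified margin."""
    diff = p_new - p_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    lo, hi = diff - z * se, diff + z * se
    return diff, (lo, hi), lo > margin

# Pass rates from the abstract: two-stage 57% of 72, four-stage 59% of 70
diff, (lo, hi), ok = noninferiority(0.57, 72, 0.59, 70)
# diff = -0.02, CI roughly (-0.18, 0.14): the -20% margin is not crossed
```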

  12. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in situ. A specifically designed self-compacting grout is then introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense, homogeneous and in general has improved engineering properties and durability. This paper presents the results of a research effort to study the effect of silica fume (SF) and superplasticizer (SP) admixtures on the compressive and tensile strength of TSC using various combinations of water-to-cement ratio (w/c) and cement-to-sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples of size 150 mm × 300 mm containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, while superplasticizer was added at a dosage of 2% of the cement weight. Results indicated that both the tensile and compressive strength of TSC can be statistically derived as a function of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, that an increase in water/cement ratio leads to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of silica fume and superplasticizer caused a significant increase in strength relative to the control mixes.
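
    The reported statistical derivation of strength as a function of w/c and c/s can be mimicked with an ordinary least-squares fit. The strength values below are synthetic placeholders chosen only to show the shape of the calculation, not the paper's measurements.

```python
import numpy as np

# Synthetic (made-up) mean strengths in MPa over the tested w/c and c/s grid
wc = np.array([0.45, 0.55, 0.85, 0.45, 0.55, 0.85])
cs = np.array([0.5, 0.5, 0.5, 1.5, 1.5, 1.5])
strength = np.array([38.0, 33.0, 24.0, 42.0, 37.0, 28.0])

# Linear model: strength ~ a + b*(w/c) + c*(c/s), fitted by least squares
A = np.column_stack([np.ones_like(wc), wc, cs])
coef, *_ = np.linalg.lstsq(A, strength, rcond=None)
pred = A @ coef
r2 = 1.0 - np.sum((strength - pred) ** 2) / np.sum((strength - strength.mean()) ** 2)
# coef[1] < 0: higher w/c lowers strength, as in traditional concrete
```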

  13. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without LEU support. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of the ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not sent directly to the ADS, but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements.

  14. Stereotactic ablative radiotherapy versus lobectomy for operable stage I non-small-cell lung cancer : a pooled analysis of two randomised trials

    NARCIS (Netherlands)

    Chang, Joe Y.; Senan, Suresh; Paul, Marinus A.; Mehran, Reza J.; Louie, Alexander V.; Balter, Peter; Groen, Harry; McRae, Stephen E.; Widder, Joachim; Feng, Lei; van den Borne, Ben E. E. M.; Munsell, Mark F.; Hurkmans, Coen; Berry, Donald A.; van Werkhoven, Erik; Kresl, John J.; Dingemans, Anne-Marie; Dawood, Omar; Haasbeek, Cornelis J. A.; Carpenter, Larry S.; De Jaeger, Katrien; Komaki, Ritsuko; Slotman, Ben J.; Smit, Egbert F.; Roth, Jack A.

    Background The standard of care for operable, stage I, non-small-cell lung cancer (NSCLC) is lobectomy with mediastinal lymph node dissection or sampling. Stereotactic ablative radiotherapy (SABR) for inoperable stage I NSCLC has shown promising results, but two independent, randomised, phase 3

  15. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  16. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors; instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
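
    The residual-inclusion mechanics can be sketched numerically with simulated data (variable names and the simulation design below are hypothetical; in a fully linear model like this one, 2SRI coincides with two-stage least squares, and its advantage over 2SPS appears only with nonlinear second stages):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)                  # instrument
u = rng.normal(size=n)                  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor
y = 1.0 + 2.0 * x + 1.5 * u + rng.normal(size=n)  # true slope on x is 2.0

# Naive OLS is biased because x is correlated with the confounder u
X = np.column_stack([np.ones(n), x])
beta_naive = np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress x on the instrument and keep the residuals
Z = np.column_stack([np.ones(n), z])
stage1 = np.linalg.lstsq(Z, x, rcond=None)[0]
resid = x - Z @ stage1

# Stage 2 (2SRI): keep x itself and ADD the first-stage residual as a regressor
X2 = np.column_stack([np.ones(n), x, resid])
beta_2sri = np.linalg.lstsq(X2, y, rcond=None)[0]
```

Here `beta_naive[1]` overshoots the true slope of 2.0, while `beta_2sri[1]` recovers it, because the included residual absorbs the endogenous part of `x`.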

  17. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    Science.gov (United States)

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

    The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. The participants were 99 conscripted soldiers whose educational levels were senior high school or lower. Every participant took the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R). Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimal single cut-off point of CLR was 66; the optimal pair of cut-off points was 49 and 66. Compared with the two-stage positive screening, the two-stage window screening increased the area under the curve and the positive predictive value, and its cost was 59% lower. The two-stage window screening is therefore more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and of the possibility that the WCST could replace the WAIS-R in future large-scale screenings for ID.
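
    The cut-off search described here can be sketched generically as a Youden's J sweep over a ROC-style grid (the scores and labels below are hypothetical, and this is a stand-in for, not a reproduction of, the authors' procedure):

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Return the cut-off maximizing sensitivity + specificity - 1 (Youden's J).
    A LOW score is treated as flagging the condition, as with the CLR index."""
    best_c, best_j = None, -1.0
    for c in np.unique(scores):
        flagged = scores <= c
        sens = flagged[labels == 1].mean()      # true positives among cases
        spec = (~flagged)[labels == 0].mean()   # true negatives among non-cases
        j = sens + spec - 1.0
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j

# Hypothetical WCST-style scores: cases (label 1) tend to score low
scores = np.array([40, 50, 60, 62, 70, 80, 85, 90])
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
cutoff, j = youden_cutoff(scores, labels)
```

A two-stage "window" screen would apply two such cut-offs: scores below the lower one are classified directly, and only the window in between goes on to the expensive second-stage test.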

  18. The impact of sample non-normality on ANOVA and alternative methods.

    Science.gov (United States)

    Lantz, Björn

    2013-05-01

    In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
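
    The comparison under distinct non-normality can be illustrated quickly, assuming SciPy is available (the lognormal distribution, shift, and sample sizes here are arbitrary choices for illustration, not the paper's simulation design):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Three groups from a distinctly non-normal (lognormal) population;
# the third group is shifted in location
a = rng.lognormal(mean=0.0, sigma=1.0, size=50)
b = rng.lognormal(mean=0.0, sigma=1.0, size=50)
c = rng.lognormal(mean=0.8, sigma=1.0, size=50)

f_stat, p_anova = stats.f_oneway(a, b, c)   # classical ANOVA on raw values
h_stat, p_kw = stats.kruskal(a, b, c)       # rank-based Kruskal-Wallis
```

Because the Kruskal-Wallis test works on ranks, its behavior is unaffected by the heavy right skew of the raw values, which is the property the paper's simulations exercise.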

  19. The Two-Word Stage: Motivated by Linguistic or Cognitive Constraints?

    Science.gov (United States)

    Berk, Stephanie; Lillo-Martin, Diane

    2012-01-01

    Child development researchers often discuss a "two-word" stage during language acquisition. However, there is still debate over whether the existence of this stage reflects primarily cognitive or linguistic constraints. Analyses of longitudinal data from two Deaf children, Mei and Cal, not exposed to an accessible first language (American Sign…

  20. A two-stage heating scheme for heat assisted magnetic recording

    Science.gov (United States)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in² for the next generation of magnetic storage. A near-field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the heating process into two individual stages, one from an optical waveguide and one from the NFT. In the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface, heating it to a peak temperature somewhat lower than the Curie temperature of the magnetic material. The NFT then acts as the second heating stage, heating a smaller area within the waveguide-heated region up to the Curie point. The energy applied to the NFT in the second stage is reduced compared with a typical single-stage NFT heating system. With this reduced thermal load, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  1. CAN'T MISS--conquer any number task by making important statistics simple. Part 2. Probability, populations, samples, and normal distributions.

    Science.gov (United States)

    Hansen, John P

    2003-01-01

    Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 2, describes probability, populations, and samples. The uses of descriptive and inferential statistics are outlined. The article also discusses the properties and probability of normal distributions, including the standard normal distribution.

  2. Efficacy of single-stage and two-stage Fowler–Stephens laparoscopic orchidopexy in the treatment of intraabdominal high testis

    Directory of Open Access Journals (Sweden)

    Chang-Yuan Wang

    2017-11-01

    Conclusion: In cases of testis with good collateral circulation, single-stage F-S laparoscopic orchidopexy had the same safety and efficacy as the two-stage F-S procedure. Surgical options should be based on comprehensive consideration of intraoperative testicular location, the testicular ischemia test, and the collateral circulation surrounding the testes. Under appropriate conditions, we propose that single-stage F-S laparoscopic orchidopexy be preferred, as this avoids unnecessary application of the two-stage procedure, which has a higher cost and causes more pain for patients.

  3. p-adic probability prediction of correlations between particles in the two-slit and neutron interferometry experiments

    International Nuclear Information System (INIS)

    Khrennikov, A.

    1998-01-01

    The author starts from Feynman's idea of using negative probabilities to describe the two-slit experiment and other quantum interference experiments. Formally, by using negative probability distributions, one can explain the results of the two-slit experiment on the basis of a purely corpuscular picture of quantum mechanics. However, negative probabilities are absurd objects in the framework of the standard Kolmogorov theory of probability. The author presents a large class of non-Kolmogorovian probability models in which negative probabilities are well defined on a frequency basis. These are models with probabilities belonging to the so-called field of p-adic numbers. However, these models are characterized by correlations between trials. Therefore, correlations between particles in interference experiments are predicted. In fact, the predictions are similar to those of the so-called nonergodic interpretation of quantum mechanics proposed by V. Buonomano. The author proposes concrete experiments (in particular, in the framework of neutron interferometry) to verify these predictions of correlations.

  4. An inexact two-stage stochastic robust programming for residential micro-grid management-based on random demand

    International Nuclear Information System (INIS)

    Ji, L.; Niu, D.X.; Huang, G.H.

    2014-01-01

    In this paper a stochastic robust optimization problem for residential micro-grid energy management is presented. Combined cooling, heating and power (CCHP) technology is introduced to satisfy various energy demands. Two-stage programming is utilized to find the optimal installed capacity investment and operation control of the CCHP plant. Moreover, interval programming and robust stochastic optimization methods are exploited to obtain interval robust solutions under different robustness levels which are feasible for uncertain data. The obtained results can help micro-grid managers minimize investment and operation costs with lower system failure risk when facing a fluctuating energy market and uncertain technology parameters. The different robustness levels reflect the risk preference of the micro-grid manager. The proposed approach is applied to residential area energy management in North China. Detailed computational results under different robustness levels are presented and analyzed to support investment decisions and operation strategies. - Highlights: • An inexact two-stage stochastic robust programming model for CCHP management. • The energy market and technical parameter uncertainties were considered. • Investment decision, operation cost, and system safety were analyzed. • Uncertainties expressed as discrete intervals and probability distributions

  5. An empirical probability model of detecting species at low densities.

    Science.gov (United States)

    Delaney, David G; Leung, Brian

    2010-06-01

    False negatives, not detecting things that are actually present, are an important but understudied problem. False negatives are the result of our inability to perfectly detect species, especially those at low density such as endangered species or newly arriving introduced species. They reduce our ability to interpret presence-absence survey data and make sound management decisions (e.g., rapid response). To reduce the probability of false negatives, we need to compare the efficacy and sensitivity of different sampling approaches and quantify an unbiased estimate of the probability of detection. We conducted field experiments in the intertidal zone of New England and New York to test the sensitivity of two sampling approaches (quadrat vs. total area search, TAS), given different target characteristics (mobile vs. sessile). Using logistic regression we built detection curves for each sampling approach that related the sampling intensity and the density of targets to the probability of detection. The TAS approach reduced the probability of false negatives and detected targets faster than the quadrat approach. Mobility of targets increased the time to detection but did not affect detection success. Finally, we interpreted two years of presence-absence data on the distribution of the Asian shore crab (Hemigrapsus sanguineus) in New England and New York, using our probability model for false negatives. The type of experimental approach in this paper can help to reduce false negatives and increase our ability to detect species at low densities by refining sampling approaches, which can guide conservation strategies and management decisions in various areas of ecology such as conservation biology and invasion ecology.
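
    The logic of a detection curve can be sketched with a simple Poisson-thinning model (a generic stand-in rather than the authors' fitted logistic regression; the per-target detection probability below is hypothetical):

```python
import math

def prob_detect(density, area, p_per_target=0.5):
    """P(at least one detection) when targets follow a Poisson distribution
    with the given density over the searched area, and each target present
    is detected independently with probability p_per_target."""
    expected_detections = density * area * p_per_target
    return 1.0 - math.exp(-expected_detections)

# Detection probability rises with sampling intensity (area searched) and
# with density, so false negatives concentrate at low densities and low effort
low_effort = prob_detect(density=0.05, area=10)
high_effort = prob_detect(density=0.05, area=100)
```

Inverting such a curve gives the survey effort needed to cap the false-negative rate at a chosen level for a given target density.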

  6. Two-Stage Regularized Linear Discriminant Analysis for 2-D Data.

    Science.gov (United States)

    Zhao, Jianhua; Shi, Lei; Zhu, Ji

    2015-08-01

    Fisher linear discriminant analysis (LDA) involves within-class and between-class covariance matrices. For 2-D data such as images, regularized LDA (RLDA) can improve LDA due to the regularized eigenvalues of the estimated within-class matrix. However, it fails to consider the eigenvectors and the estimated between-class matrix. To improve these two matrices simultaneously, we propose in this paper a new two-stage method for 2-D data, namely a bidirectional LDA (BLDA) in the first stage and the RLDA in the second stage, where both BLDA and RLDA are based on the Fisher criterion that tackles correlation. BLDA performs the LDA under special separable covariance constraints that incorporate the row and column correlations inherent in 2-D data. The main novelty is that we propose a simple but effective statistical test to determine the subspace dimensionality in the first stage. As a result, the first stage reduces the dimensionality substantially while keeping the significant discriminant information in the data. This enables the second stage to perform RLDA in a much lower dimensional subspace, and thus improves the two estimated matrices simultaneously. Experiments on a number of 2-D synthetic and real-world data sets show that BLDA+RLDA outperforms several closely related competitors.

  7. Two-stage precipitation of plutonium trifluoride

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-04-01

    Plutonium trifluoride was precipitated using a two-stage precipitation system. A series of precipitation experiments identified the significant process variables affecting precipitate characteristics. A mathematical precipitation model was developed which was based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter that can be used to control particle characteristics

  8. Two-stage stochastic programming model for the regional-scale electricity planning under demand uncertainty

    International Nuclear Information System (INIS)

    Huang, Yun-Hsun; Wu, Jung-Hua; Hsu, Yu-Ju

    2016-01-01

    Traditional electricity supply planning models regard electricity demand as a deterministic parameter and require the total power output to satisfy the aggregate electricity demand. Today, however, electric system planners face tremendously complex environments full of uncertainties, in which electricity demand is a key source of uncertainty. In addition, electricity demand patterns are considerably different for different regions. This paper developed a multi-region optimization model based on a two-stage stochastic programming framework to incorporate the demand uncertainty. Furthermore, the decision tree method and Monte Carlo simulation approach are integrated into the model to simplify electricity demands in the form of nodes and to determine their values and probabilities. The proposed model was successfully applied to a real case study (Taiwan's electricity sector) to show its applicability. Detailed simulation results were presented and compared with those generated by a deterministic model. Finally, a long-term electricity development roadmap at the regional level could be provided on the basis of the simulation results. - Highlights: • A multi-region, two-stage stochastic programming model has been developed. • The decision tree and Monte Carlo simulation are integrated into the framework. • Taiwan's electricity sector is used to illustrate the applicability of the model. • The results under deterministic and stochastic cases are shown for comparison. • Optimal portfolios of regional generation technologies can be identified.
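
    The two-stage structure (commit to capacity now, operate after demand is revealed) can be illustrated with a toy scenario model; all demands, probabilities, and costs below are invented for illustration:

```python
# Demand scenarios for one region: (demand, probability)
scenarios = [(80, 0.3), (100, 0.5), (130, 0.2)]
invest_cost = 5.0   # first-stage cost per unit of installed capacity
gen_cost = 1.0      # second-stage cost per unit actually generated
penalty = 12.0      # second-stage cost per unit of unmet demand

def expected_total(capacity):
    """First-stage investment plus expected second-stage (recourse) cost."""
    total = invest_cost * capacity
    for demand, prob in scenarios:
        served = min(capacity, demand)
        total += prob * (gen_cost * served + penalty * (demand - served))
    return total

# First-stage decision: the capacity minimizing expected total cost
best_capacity = min(range(0, 201), key=expected_total)
```

With these numbers the optimum covers the medium-demand scenario but not the rare peak: adding a unit of capacity is worthwhile only while the probability-weighted penalty it avoids exceeds its investment cost.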

  9. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    Science.gov (United States)

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean when using small samples, allowing researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
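
    Under a normality assumption, the probability in question has a closed form, P(|x̄ − μ| < f·σ) = 2Φ(f·√n) − 1, which can be evaluated directly (a sketch of the general idea, not the paper's exact procedure):

```python
import math

def prob_within(f, n):
    """P(|sample mean - true mean| < f * sigma) for n i.i.d. normal
    observations: 2 * Phi(f * sqrt(n)) - 1, written via the error function."""
    return math.erf(f * math.sqrt(n) / math.sqrt(2.0))

p_single = prob_within(1.96, 1)  # one observation within 1.96 sigma: ~0.95
p_n16 = prob_within(0.5, 16)     # mean of 16 within 0.5 sigma: ~0.95
```

Reading the function in reverse answers the post hoc question: given a sample of size n, how tight a fraction f of the standard deviation can one claim at a chosen probability.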

  10. Reducing bias in population and landscape genetic inferences: the effects of sampling related individuals and multiple life stages.

    Science.gov (United States)

    Peterman, William; Brocato, Emily R; Semlitsch, Raymond D; Eggert, Lori S

    2016-01-01

    In population or landscape genetics studies, an unbiased sampling scheme is essential for generating accurate results, but logistics may lead to deviations from the sample design. Such deviations may come in the form of sampling multiple life stages. Presently, it is largely unknown what effect sampling different life stages can have on population or landscape genetic inference, or how mixing life stages can affect the parameters being measured. Additionally, the removal of siblings from a data set is considered best-practice, but direct comparisons of inferences made with and without siblings are limited. In this study, we sampled embryos, larvae, and adult Ambystoma maculatum from five ponds in Missouri, and analyzed them at 15 microsatellite loci. We calculated allelic richness, heterozygosity and effective population sizes for each life stage at each pond and tested for genetic differentiation (FST and DC) and isolation-by-distance (IBD) among ponds. We tested for differences in each of these measures between life stages, and in a pooled population of all life stages. All calculations were done with and without sibling pairs to assess the effect of sibling removal. We also assessed the effect of reducing the number of microsatellites used to make inference. No statistically significant differences were found among ponds or life stages for any of the population genetic measures, but patterns of IBD differed among life stages. There was significant IBD when using adult samples, but tests using embryos, larvae, or a combination of the three life stages were not significant. We found that increasing the ratio of larval or embryo samples in the analysis of genetic distance weakened the IBD relationship, and when using DC, the IBD was no longer significant when larvae and embryos exceeded 60% of the population sample. Further, power to detect an IBD relationship was reduced when fewer microsatellites were used in the analysis.
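
    Isolation-by-distance tests of this kind are typically Mantel tests of genetic against geographic distance matrices; a bare-bones permutation version (run here on synthetic matrices, not the authors' data or software) might look like:

```python
import numpy as np

def mantel(dist_a, dist_b, n_perm=999, seed=0):
    """Bare-bones Mantel test: correlation of the upper triangles of two
    distance matrices, with a one-sided permutation p-value obtained by
    relabeling the sites of the first matrix."""
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(dist_a, k=1)
    r_obs = np.corrcoef(dist_a[iu], dist_b[iu])[0, 1]
    exceed = 0
    for _ in range(n_perm):
        p = rng.permutation(dist_a.shape[0])
        r = np.corrcoef(dist_a[p][:, p][iu], dist_b[iu])[0, 1]
        if r >= r_obs:
            exceed += 1
    return r_obs, (exceed + 1) / (n_perm + 1)

# Synthetic example: 8 "ponds" on a line, genetic distance tracking geography
coords = np.arange(8.0)
geo = np.abs(coords[:, None] - coords[None, :])
noise = np.random.default_rng(3).normal(0.0, 0.01, size=(8, 8))
gen = 0.1 * geo + (noise + noise.T) / 2.0
np.fill_diagonal(gen, 0.0)
r, p_value = mantel(geo, gen)
```

Permuting site labels rather than individual distances is what keeps the test valid despite the non-independence of entries within a distance matrix.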

  12. A New Concept of Two-Stage Multi-Element Resonant-/Cyclo-Converter for Two-Phase IM/SM Motor

    Directory of Open Access Journals (Sweden)

    Mahmud Ali Rzig Abdalmula

    2013-01-01

    The paper deals with a new concept for a power electronic two-phase system with a two-stage DC/AC/AC converter and a two-phase IM/PMSM motor. The proposed two-stage converter system comprises an input resonant boost converter with AC output, a two-phase half-bridge cyclo-converter commutated by the HF AC input voltage, and an induction or synchronous motor. Such a system with an AC interlink, as a whole unit, has better properties than a 3-phase reference VSI inverter: higher efficiency due to soft switching of both converter stages, higher switching frequency, smaller dimensions and weight with fewer power semiconductor switches, and a better price. In comparison with currently used conventional system configurations, the proposed system features good efficiency of the electronic converters and good torque overload capability for two-phase AC induction or synchronous motors. The design of the two-stage multi-element resonant converter and results of simulation experiments are presented in the paper.

  13. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    OpenAIRE

    Chen, Yanju; Wang, Ye

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the s...

  14. Probability distributions for first neighbor distances between resonances that belong to two different families

    International Nuclear Information System (INIS)

    Difilippo, F.C.

    1994-01-01

    For a mixture of two families of resonances, we found the probability distribution for the distance, as first neighbors, between resonances that belong to different families. Integration of this distribution gives the probability of accidental overlapping of resonances of one isotope by resonances of the other, provided that the resonances of each isotope belong to a single family. (author)
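
    In the simplest limiting case, where the other family's resonance positions are uncorrelated (a Poisson sequence) with mean spacing D, the accidental-overlap probability within a half-width Γ of a given resonance reduces to 1 − exp(−2Γ/D). A sketch of that limit (not the paper's full distribution, which accounts for within-family correlations):

```python
import math

def overlap_prob(half_width, mean_spacing):
    """P(a resonance of the other family falls within +/- half_width of a
    given resonance), assuming the other family's positions form a Poisson
    sequence with the given mean spacing."""
    return 1.0 - math.exp(-2.0 * half_width / mean_spacing)

# For narrow windows this is approximately 2 * half_width / mean_spacing
p_narrow = overlap_prob(0.001, 1.0)
```

Integrating the first-neighbor distance distribution of the mixed sequence, as the paper does, replaces this uncorrelated limit with the family-specific spacing statistics.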

  15. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2012-01-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general

  16. Visualization techniques for spatial probability density function data

    Directory of Open Access Journals (Sweden)

    Udeepta D Bordoloi

    2006-01-01

    Novel visualization methods are presented for spatial probability density function data. These are spatial datasets, where each pixel is a random variable, and has multiple samples which are the results of experiments on that random variable. We use clustering as a means to reduce the information contained in these datasets; and present two different ways of interpreting and clustering the data. The clustering methods are used on two datasets, and the results are discussed with the help of visualization techniques designed for the spatial probability data.

  17. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    Science.gov (United States)

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  18. Randomization-Based Inference about Latent Variables from Complex Samples: The Case of Two-Stage Sampling

    Science.gov (United States)

    Li, Tiandong

    2012-01-01

    In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…
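
    The two-stage design itself (sample clusters first, then units within clusters) can be illustrated with a minimal design-based estimate of a population mean; the cluster counts and normal populations below are hypothetical, and this sketch ignores the plausible-value machinery:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: draw 12 clusters (e.g., schools), each with its own mean score
cluster_locs = rng.normal(loc=50.0, scale=3.0, size=12)
# Stage 2: draw 10 units (e.g., students) within each sampled cluster
samples = [rng.normal(loc=m, scale=1.0, size=10) for m in cluster_locs]

# Estimate the population mean from cluster means (equal cluster sizes), with
# a standard error driven by BETWEEN-cluster variability: treating the 120
# observations as a simple random sample would understate the error
cluster_means = np.array([s.mean() for s in samples])
est_mean = cluster_means.mean()
std_err = cluster_means.std(ddof=1) / np.sqrt(len(cluster_means))
```

This between-cluster standard error is the design effect that randomization-based inference for latent constructs has to carry through the imputation step.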

  19. Formulas for Rational-Valued Separability Probabilities of Random Induced Generalized Two-Qubit States

    Directory of Open Access Journals (Sweden)

    Paul B. Slater

    2015-01-01

    Previously, a formula, incorporating a 5F4 hypergeometric function, for the Hilbert-Schmidt-averaged determinantal moments ⟨|ρ^PT|^n |ρ|^k⟩/⟨|ρ|^k⟩ of 4×4 density matrices ρ and their partial transposes ρ^PT, was applied with k=0 to the generalized two-qubit separability probability question. The formula can, furthermore, be viewed, as we note here, as an averaging over “induced measures in the space of mixed quantum states.” The associated induced-measure separability probabilities (k=1,2,…) are found (via a high-precision density approximation procedure) to assume interesting, relatively simple rational values in the two-re[al]bit (α=1/2), (standard) two-qubit (α=1), and two-quater[nionic]bit (α=2) cases. We deduce rather simple companion (rebit, qubit, quaterbit, …) formulas that successfully reproduce the rational values assumed for general k. These formulas are observed to share certain features, possibly allowing them to be incorporated into a single master formula.

  20. Transport fuels from two-stage coal liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Benito, A.; Cebolla, V.; Fernandez, I.; Martinez, M.T.; Miranda, J.L.; Oelert, H.; Prado, J.G. (Instituto de Carboquimica CSIC, Zaragoza (Spain))

    1994-03-01

    Four Spanish lignites and their vitrinite concentrates were evaluated for coal liquefaction. Correlations between vitrinite content and conversion in direct liquefaction were observed for the lignites but not for the vitrinite concentrates. The most reactive of the four coals was processed in two-stage liquefaction at a larger scale. First-stage coal liquefaction was carried out in a continuous unit at Clausthal University at a temperature of 400°C, at 20 MPa hydrogen pressure and with anthracene oil as a solvent. The coal conversion obtained was 75.41%, comprising 3.79% gases, 2.58% primary condensate and 69.04% heavy liquids. A hydroprocessing unit was built at the Instituto de Carboquimica for the second-stage coal liquefaction. Whole and deasphalted liquids from the first-stage liquefaction were processed at 450°C and 10 MPa hydrogen pressure with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The effects of liquid hourly space velocity (LHSV), temperature, gas/liquid ratio and catalyst on heteroatom removal from the liquids were studied, and levels of 5 ppm of nitrogen and 52 ppm of sulphur were reached at 450°C, 10 MPa hydrogen pressure, 0.08 kg H2/kg feedstock and with the Harshaw HT-500E catalyst. The liquids obtained were hydroprocessed again at 420°C, 10 MPa hydrogen pressure and 0.06 kg H2/kg feedstock to hydrogenate the aromatic structures. Under these conditions the aromaticity was reduced considerably, and 39% of naphthas and 35% of kerosene fractions were obtained. 18 refs., 4 figs., 4 tabs.

  1. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Full Text Available Chia-Chang Chien1, Shu-Fen Huang1,2,3,4, For-Wey Lung1,2,3,4. 1Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; 2Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; 3Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; 4Calo Psychiatric Center, Pingtung County, Taiwan. Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We recruited 99 conscripted soldiers whose educational level was senior high school or lower as participants. Every participant took the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Compared with two-stage positive screening, two-stage window screening increased the area under the curve and the positive predictive value, and its cost was 59% lower. Conclusion: Two-stage window screening is more accurate and economical than two-stage positive screening. Our results provide an example of the use of two-stage screening and of the possibility of the WCST replacing the WAIS-R in large-scale screening for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised
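The cost logic behind window screening can be illustrated with a toy simulation. Everything here except the cut-offs 49 and 66 (the score distribution, cohort size, and relative costs) is an invented assumption, and the stage-2 assessment is idealized:

```python
import random

random.seed(1)

# Hypothetical stage-1 screening scores (e.g. a WCST-derived index,
# higher = better). Distribution, cohort size, and costs are assumptions;
# only the cut-offs 49 and 66 come from the abstract.
scores = [random.gauss(70, 12) for _ in range(1000)]

LOW, HIGH = 49, 66
STAGE1_COST, STAGE2_COST = 1, 10   # assumed cost per person per stage

def stage2_referrals(strategy):
    """People referred to the expensive full WAIS-R-style assessment."""
    if strategy == "window":    # only the ambiguous band LOW <= s < HIGH
        return [s for s in scores if LOW <= s < HIGH]
    if strategy == "positive":  # everyone below the single cut-off HIGH
        return [s for s in scores if s < HIGH]
    raise ValueError(strategy)

for strategy in ("positive", "window"):
    referred = len(stage2_referrals(strategy))
    cost = len(scores) * STAGE1_COST + referred * STAGE2_COST
    print(f"{strategy}: {referred} referred, total cost {cost}")
```

The window strategy refers only the ambiguous band to stage 2 (scores below the lower cut-off are classified outright), so it never refers more people than single-cut-off positive screening.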

  2. Two-stage model of development of heterogeneous uranium-lead systems in zircon

    International Nuclear Information System (INIS)

    Mel'nikov, N.N.; Zevchenkov, O.A.

    1985-01-01

    The behaviour of the isotope systems of multiphase zircons under two-stage disturbance is considered. The calculations show that linear correlations on the concordia diagram can be explained by two-stage opening of the U-Pb systems of cogenetic zircons, if the zircon is considered physically heterogeneous, with its different parts losing different fractions of the accumulated radiogenic lead. The ''metamorphism ages'' given by such two-stage opened zircons are intermediate and have no geochronological significance, while the ''crystallization ages'' remain rather close to the real ones. Two-stage opened zircons can in some cases be diagnosed by the discordance of their crystal components.

  3. Two-Stage Multi-Objective Collaborative Scheduling for Wind Farm and Battery Switch Station

    Directory of Open Access Journals (Sweden)

    Zhe Jiang

    2016-10-01

    Full Text Available In order to deal with the uncertainties of wind power, a wind farm and an electric vehicle (EV) battery switch station (BSS) were proposed to work together as an integrated system. In this paper, the collaborative scheduling problems of such a system are studied. Considering the features of the integrated system, three indices are proposed: battery-swapping demand curtailment of the BSS, wind curtailment of the wind farm, and generation schedule tracking of the integrated system. In addition, a two-stage multi-objective collaborative scheduling model is designed. In the first stage, a day-ahead model is built based on the theory of dependent chance programming. With the aim of maximizing the realization probabilities of the three operating indices, random fluctuations of wind power and battery-swapping demand are taken into account simultaneously. In order to explore the capability of the BSS as reserve, the readjustment process of the BSS within each hour is considered in this stage, and the stored energy, rather than the charging/discharging power, of the BSS during each period is optimized, which provides a basis for further hour-ahead correction of the BSS. In the second stage, an hour-ahead model is established. In order to cope with the randomness of wind power and battery-swapping demand, the hour-ahead model uses ultra-short-term predictions of wind power and battery-swap demand to schedule the charging/discharging power of the BSS in a rolling manner. Finally, the effectiveness of the proposed models is validated by case studies. The simulation results indicate that the proposed model can realize complementarity between the wind farm and the BSS, reduce dependence on the power grid, and facilitate the accommodation of wind power.

  4. Two technicians apply insulation to S-II second stage

    Science.gov (United States)

    1964-01-01

    Two technicians apply insulation to the outer surface of the S-II second stage booster for the Saturn V moon rocket. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.

  5. Correlations and Non-Linear Probability Models

    DEFF Research Database (Denmark)

    Breen, Richard; Holm, Anders; Karlson, Kristian Bernt

    2014-01-01

    Although the parameters of logit and probit and other non-linear probability models are often explained and interpreted in relation to the regression coefficients of an underlying linear latent variable model, we argue that they may also be usefully interpreted in terms of the correlations between the dependent variable of the latent variable model and its predictor variables. We show how this correlation can be derived from the parameters of non-linear probability models, develop tests for the statistical significance of the derived correlation, and illustrate its usefulness in two applications. Under certain circumstances, which we explain, the derived correlation provides a way of overcoming the problems inherent in cross-sample comparisons of the parameters of non-linear probability models.

  6. A New Method Based on Two-Stage Detection Mechanism for Detecting Ships in High-Resolution SAR Images

    Directory of Open Access Journals (Sweden)

    Xu Yongli

    2017-01-01

    Full Text Available Ship detection in synthetic aperture radar (SAR) remote sensing images, a fundamental but challenging problem in satellite image analysis, plays an important role in a wide range of applications and has received significant attention in recent years. Aiming at the requirements of ship detection in high-resolution SAR images (accuracy, a higher level of intelligence, real-time operation, and processing efficiency), we analyzed the characteristics of the ocean background and of ship targets in high-resolution SAR images and propose a ship detection algorithm with two stages. The first stage uses a pre-trained classifier based on an improved spectral residual visual model to quickly obtain the visually salient regions containing ship targets, yielding candidate detections. In the second stage, drawing on the Bayesian theory of binary hypothesis detection, a local maximum a posteriori (MAP) classifier is designed for pixel classification. After parameter estimation and application of the judgment criterion, the pixels in the salient regions are classified into the two types. Several types of satellite image data, such as TerraSAR-X (TS-X) and Radarsat-2, are used to evaluate the performance of the detection method. Compared with classical CFAR detection algorithms, experimental results show that the algorithm better suppresses false alarms caused by speckle noise and by inhomogeneity of the ocean clutter background, while increasing detection speed by 25% to 45%.

  7. A probable risk factor of female breast cancer: study on benign and malignant breast tissue samples.

    Science.gov (United States)

    Rehman, Sohaila; Husnain, Syed M

    2014-01-01

    The study reports enhanced Fe, Cu, and Zn contents in breast tissues, a probable risk factor of breast cancer in females. Forty-one formalin-fixed breast tissues were analyzed using atomic absorption spectrophotometry. Twenty malignant, six adjacent-to-malignant and 15 benign tissue samples were investigated. The malignant tissue samples were of grade II and of the invasive ductal carcinoma type. The quantitative comparison between the elemental levels measured in the two types of specimens (benign and malignant tissues, removed after surgery) suggests significant elevation of these metals (Fe, Cu, and Zn) in the malignant tissue. The specimens were collected just after mastectomy from women aged 19 to 59 years at hospitals in Islamabad and Rawalpindi, Pakistan. Most of the patients belonged to urban areas of Pakistan. The findings indicate that these elements have a promising role in the initiation and development of carcinoma, as a consistent pattern of elevation of Fe, Cu, and Zn was observed. The results showed excessive accumulation of Fe (229 ± 121 mg/L) in malignant breast tissue samples of patients (p factor of breast cancer. In order to validate our method of analysis, the certified reference material lyophilized muscle tissue (IAEA) MA-M-2/TM was analyzed for the metals studied. The determined concentrations were in good agreement with the certified levels. Asymmetric concentration distributions for Fe, Cu, and Zn were observed in both malignant and benign tissue samples.

  8. Two sampling techniques for game meat

    OpenAIRE

    van der Merwe, Maretha; Jooste, Piet J.; Hoffman, Louw C.; Calitz, Frikkie J.

    2013-01-01

    A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g) and square centimetres (cm2) for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12) that statistically proved the two measuring units correlated. The two sampling...

  9. Investigation of Power Losses of Two-Stage Two-Phase Converter with Two-Phase Motor

    Directory of Open Access Journals (Sweden)

    Michal Prazenica

    2011-01-01

    Full Text Available The paper deals with the determination of losses in a two-stage power electronic system with a two-phase variable orthogonal output. The simulation is focused on the investigation of losses in the converter during one period of steady-state operation. Modeling and simulation of two matrix converters with an R-L load are shown in the paper. The simulation results confirm a very good time waveform of the phase current, and the system seems suitable for low-cost applications in the automotive/aerospace industries and in applications with high-frequency voltage sources.

  10. Evaluation and comparison of estimation methods for failure rates and probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, Jussi K. [Fortum Power and Heat Oy, P.O. Box 23, 07901 Loviisa (Finland)]. E-mail: jussi.vaurio@fortum.com; Jaenkaelae, Kalle E. [Fortum Nuclear Services, P.O. Box 10, 00048 Fortum (Finland)

    2006-02-01

    An updated parametric robust empirical Bayes (PREB) estimation methodology is presented as an alternative to several two-stage Bayesian methods used to assimilate failure data from multiple units or plants. PREB is based on prior-moment matching and avoids multi-dimensional numerical integrations. The PREB method is presented for failure-truncated and time-truncated data. Erlangian and Poisson likelihoods with gamma prior are used for failure rate estimation, and Binomial data with beta prior are used for failure probability per demand estimation. Combined models and assessment uncertainties are accounted for. One objective is to compare several methods with numerical examples and show that PREB works as well if not better than the alternative more complex methods, especially in demanding problems of small samples, identical data and zero failures. False claims and misconceptions are straightened out, and practical applications in risk studies are presented.
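The gamma-Poisson flavor of prior-moment matching that the abstract alludes to can be sketched as follows. This is a generic empirical-Bayes illustration with invented plant data, not the paper's PREB algorithm; subtracting the average Poisson sampling variance is one common moment-matching choice:

```python
# Invented per-plant failure counts and exposure times (arbitrary units);
# the moment-matching recipe below is a generic sketch, not PREB itself.
failures = [0, 1, 8, 2, 0, 30]
hours = [2.0, 2.0, 4.0, 3.0, 2.0, 6.0]

rates = [k / t for k, t in zip(failures, hours)]
n = len(rates)
m = sum(rates) / n                          # matched prior mean
raw_var = sum((r - m) ** 2 for r in rates) / (n - 1)
# subtract the average Poisson sampling variance of k/t (approx. m/t)
sampling = sum(m / t for t in hours) / n
v = max(raw_var - sampling, 1e-9)           # matched prior variance

alpha, beta = m * m / v, m / v              # gamma(alpha, beta) prior

# Gamma-Poisson posterior mean per plant: shrinks k/t toward m.
posterior = [(alpha + k) / (beta + t) for k, t in zip(failures, hours)]
print([round(p, 3) for p in posterior])
```

Each posterior mean is a convex combination of the plant's raw rate and the population mean, which is the shrinkage behavior such two-stage/empirical-Bayes schemes share.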

  11. Janus-faced probability

    CERN Document Server

    Rocchi, Paolo

    2014-01-01

    The problem of probability interpretation was long overlooked before exploding in the 20th century, when the frequentist and subjectivist schools formalized two conflicting conceptions of probability. Beyond the radical followers of the two schools, a circle of pluralist thinkers tends to reconcile the opposing concepts. The author uses two theorems in order to prove that the various interpretations of probability do not come into opposition and can be used in different contexts. The goal here is to clarify the multifold nature of probability by means of a purely mathematical approach and to show how philosophical arguments can only serve to deepen actual intellectual contrasts. The book can be considered as one of the most important contributions in the analysis of probability interpretation in the last 10-15 years.

  12. Epidemiology of undiagnosed trichomoniasis in a probability sample of urban young adults.

    Directory of Open Access Journals (Sweden)

    Susan M Rogers

    Full Text Available T. vaginalis infection (trichomoniasis) is the most common curable sexually transmitted infection (STI) in the U.S. It is associated with increased HIV risk and adverse pregnancy outcomes. Trichomoniasis surveillance data do not exist for either national or local populations. The Monitoring STIs Survey Program (MSSP) collected survey data and specimens which were tested using nucleic acid amplification tests to monitor trichomoniasis and other STIs in 2006-09 among a probability sample of young adults (N = 2,936) in Baltimore, Maryland, an urban area with high rates of reported STIs. The estimated prevalence of trichomoniasis was 7.5% (95% CI 6.3, 9.1) in the overall population and 16.1% (95% CI 13.0, 19.8) among Black women. The overwhelming majority of infected men (98.5%) and women (73.3%) were asymptomatic. Infections were more common in both women (OR = 3.6, 95% CI 1.6, 8.2) and men (OR = 9.0, 95% CI 1.8, 44.3) with concurrent chlamydial infection. Trichomoniasis did not vary significantly by age for either men or women. Women with two or more partners in the past year and women with a history of personal or partner incarceration were more likely to have an infection. Overall, these results suggest that routine T vaginalis screening in populations at elevated risk of infection should be considered.
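As a point of comparison for the reported design-based intervals, a naive confidence interval that ignores the complex sample design can be computed with the Wilson score method; the gap between it and the published 95% CI (6.3, 9.1) reflects the design effect of the two-stage probability sample:

```python
from math import sqrt

def wilson_ci(p_hat, n, z=1.96):
    """Wilson score interval for a binomial proportion (assumes SRS)."""
    denom = 1 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half = z * sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 7.5% prevalence among N = 2,936, as in the abstract. Under simple
# random sampling the interval is narrower than the survey's
# design-based CI of (6.3, 9.1).
low, high = wilson_ci(0.075, 2936)
print(round(low, 3), round(high, 3))  # → 0.066 0.085
```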

  13. Device for two-stage cementing of casing

    Energy Technology Data Exchange (ETDEWEB)

    Kudimov, D A; Goncharevskiy, Ye N; Luneva, L G; Shchelochkov, S N; Shil'nikova, L N; Tereshchenko, V G; Vasiliev, V A; Volkova, V V; Zhdokov, K I

    1981-01-01

    A device is claimed for two-stage cementing of casing. It consists of a body with lateral plugging vents, upper and lower movable sleeves, a check valve with axial channels situated in the lower sleeve, and a displacement limiter for the lower sleeve. To improve the cementing of the casing by preventing overflow of cementing fluids from the annular space into the first-stage casing, the limiter is equipped with a spring rod capable of covering the axial channels of the check valve while in operating mode. In addition, the upper part of the rod has a reinforced area under the axial channels of the check valve.

  14. A two-phase sampling design for increasing detections of rare species in occupancy surveys

    Science.gov (United States)

    Pacifici, Krishna; Dorazio, Robert M.; Dorazio, Michael J.

    2012-01-01

    1. Occupancy estimation is a commonly used tool in ecological studies owing to the ease with which data can be collected and the large spatial extent that can be covered. One major obstacle to using an occupancy-based approach is the complications associated with designing and implementing an efficient survey. These logistical challenges become magnified when working with rare species, when effort can be wasted in areas with few or no individuals. 2. Here, we develop a two-phase sampling approach that mitigates these problems by using a design that places more effort in areas with a higher predicted probability of occurrence. We compare our new sampling design to traditional single-season occupancy estimation under a range of conditions and population characteristics. We develop an intuitive measure of predictive error to compare the two approaches and use simulations to assess the relative accuracy of each approach. 3. Our two-phase approach exhibited lower predictive error rates compared to the traditional single-season approach in highly spatially correlated environments. The difference was greatest when detection probability was high (0·75) regardless of the habitat or sample size. When the true occupancy rate was below 0·4 (0·05-0·4), we found that allocating 25% of the sample to the first phase resulted in the lowest error rates. 4. In the majority of scenarios, the two-phase approach showed lower error rates than the traditional single-season approach, suggesting that our new approach is fairly robust to a broad range of conditions and design factors and merits use under a wide variety of settings. 5. Synthesis and applications. Conservation and management of rare species are a challenging task facing natural resource managers. It is critical for studies involving rare species to efficiently allocate effort and resources as they are usually of a finite nature. We believe our approach provides a framework for optimal allocation of effort while
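The intuition of concentrating visits where predicted occupancy is higher can be shown with a small deterministic calculation. The site occupancy values, per-visit detection probability, and visit budget below are illustrative assumptions, not the authors' design:

```python
# Toy comparison of survey-effort allocation for a rare species, in the
# spirit of the two-phase idea: put more visits where phase-1 predicted
# occupancy is higher. All numbers are invented for illustration.
psi = [0.05, 0.10, 0.40, 0.60, 0.85]   # predicted occupancy per site
p = 0.3                                 # per-visit detection probability
budget = 15                             # total visits available

def expected_detections(visits):
    # a site contributes psi_i * P(detect at least once in v_i visits)
    return sum(w * (1 - (1 - p) ** v) for w, v in zip(psi, visits))

uniform = [budget // len(psi)] * len(psi)            # 3 visits everywhere
# weight visits by predicted occupancy (simple proportional rounding)
weighted = [round(budget * w / sum(psi)) for w in psi]

print(expected_detections(uniform), expected_detections(weighted))
```

Even though proportional rounding here spends one visit fewer than the budget, the occupancy-weighted allocation yields more expected detections than spreading visits evenly.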

  15. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

    Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs.

  16. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa

    Science.gov (United States)

    2013-01-01

    Background Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern besides deteriorating their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be potentially employed to treat the high organic content, which offers numerous benefits along with waste water treatment. Methods A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Results Whilst NH4+-N was completely removed, 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in zebra fish, used as a model, at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer for rice seeds, and a 30% increase in the length of root and shoot was observed after the addition of the biomass to the rice plants. Conclusions We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD besides nutrients like nitrates and phosphates. The treatment also allows treated waste water to be discharged safely into receiving water bodies, since it is non-toxic for aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and offers great potential as a biofertilizer. PMID:24355316

  17. Repetitive, small-bore two-stage light gas gun

    International Nuclear Information System (INIS)

    Combs, S.K.; Foust, C.R.; Fehling, D.T.; Gouge, M.J.; Milora, S.L.

    1991-01-01

    A repetitive two-stage light gas gun for high-speed pellet injection has been developed at Oak Ridge National Laboratory. In general, applications of the two-stage light gas gun have been limited to only single shots, with a finite time (at least minutes) needed for recovery and preparation for the next shot. The new device overcomes problems associated with repetitive operation, including rapidly evacuating the propellant gases, reloading the gun breech with a new projectile, returning the piston to its initial position, and refilling the first- and second-stage gas volumes to the appropriate pressure levels. In addition, some components are subjected to and must survive severe operating conditions, which include rapid cycling to high pressures and temperatures (up to thousands of bars and thousands of kelvins) and significant mechanical shocks. Small plastic projectiles (4-mm nominal size) and helium gas have been used in the prototype device, which was equipped with a 1-m-long pump tube and a 1-m-long gun barrel, to demonstrate repetitive operation (up to 1 Hz) at relatively high pellet velocities (up to 3000 m/s). The equipment is described, and experimental results are presented. 124 refs., 6 figs., 5 tabs

  18. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Full Text Available Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, the TPMHV can be considered a safe and effective alternative to the standard TPCS.

  19. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  20. Markov transition probability-based network from time series for characterizing experimental two-phase flow

    International Nuclear Information System (INIS)

    Gao Zhong-Ke; Hu Li-Dan; Jin Ning-De

    2013-01-01

    We generate a directed weighted complex network by a method based on Markov transition probability to represent an experimental two-phase flow. We first systematically carry out gas-liquid two-phase flow experiments for measuring the time series of flow signals. Then we construct directed weighted complex networks from various time series in terms of a network generation method based on Markov transition probability. We find that the generated network inherits the main features of the time series in the network structure. In particular, the networks from time series with different dynamics exhibit distinct topological properties. Finally, we construct two-phase flow directed weighted networks from experimental signals and associate the dynamic behavior of gas-liquid two-phase flow with the topological statistics of the generated networks. The results suggest that the topological statistics of two-phase flow networks allow quantitative characterization of the dynamic flow behavior in the transitions among different gas-liquid flow patterns. (general)
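The network construction described above can be sketched in a few lines: coarse-grain the series into symbolic states (the nodes) and weight each directed edge by the empirical transition probability. The toy signal and the equal-width binning are assumptions for illustration, not the authors' measured flow data:

```python
from collections import defaultdict

# Toy stand-in for a measured flow signal.
signal = [0.1, 0.4, 0.8, 0.7, 0.2, 0.3, 0.9, 0.6, 0.1, 0.5]

def symbolize(x, n_bins=3):
    """Map a sample to one of n_bins equal-width amplitude states."""
    lo, hi = min(signal), max(signal)
    return min(int((x - lo) / (hi - lo) * n_bins), n_bins - 1)

# Count one-step transitions between consecutive states.
counts = defaultdict(lambda: defaultdict(int))
states = [symbolize(x) for x in signal]
for a, b in zip(states, states[1:]):
    counts[a][b] += 1

# Edge weight w(a, b) = P(next state is b | current state is a).
network = {
    a: {b: c / sum(nexts.values()) for b, c in nexts.items()}
    for a, nexts in counts.items()
}
print(network)
```

By construction the outgoing edge weights of every node sum to one, so the adjacency structure is a row-stochastic transition matrix restricted to observed states.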

  1. Quantifying seining detection probability for fishes of Great Plains sand‐bed rivers

    Science.gov (United States)

    Mollenhauer, Robert; Logue, Daniel R.; Brewer, Shannon K.

    2018-01-01

    Species detection error (i.e., imperfect and variable detection probability) is an essential consideration when investigators map distributions and interpret habitat associations. Addressing fish detection error due to highly variable instream environments is a particular challenge in the sand-bed streams of the Great Plains. We quantified seining detection probability for diminutive Great Plains fishes across a range of sampling conditions in two sand-bed rivers in Oklahoma. Imperfect detection resulted in underestimates of species occurrence using naïve estimates, particularly for less common fishes. Seining detection probability also varied among fishes and across sampling conditions. We observed a quadratic relationship between water depth and detection probability, in which the exact nature of the relationship was species-specific and dependent on water clarity. Similarly, the direction of the relationship between water clarity and detection probability was species-specific and dependent on differences in water depth. The relationship between water temperature and detection probability was also species-dependent, where both the magnitude and direction of the relationship varied among fishes. We showed how ignoring detection error confounded an underlying relationship between species occurrence and water depth. Despite imperfect and heterogeneous detection, our results support that determining species absence can be accomplished with two to six spatially replicated seine hauls per 200-m reach under average sampling conditions; however, required effort would be higher under certain conditions. Detection probability was low for the Arkansas River Shiner Notropis girardi, which is federally listed as threatened, and more than 10 seine hauls per 200-m reach would be required to assess presence across sampling conditions. Our model allows scientists to estimate sampling effort to confidently assess species occurrence, which
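Under the simplifying assumption of independent seine hauls with a constant per-haul detection probability (the paper's model lets detection vary with depth, clarity, and temperature), the required effort follows from P(at least one detection in n hauls) = 1 - (1 - p)^n:

```python
from math import ceil, log

def hauls_needed(p_detect, confidence=0.95):
    """Seine hauls needed so that P(>= 1 detection | site occupied)
    reaches `confidence`, assuming independent hauls with a constant
    per-haul detection probability (a simplification of the paper's
    condition-dependent model)."""
    return ceil(log(1 - confidence) / log(1 - p_detect))

# A readily caught species vs. a low-detectability one (illustrative p).
print(hauls_needed(0.60))  # → 4
print(hauls_needed(0.25))  # → 11
```

Solving 1 - (1 - p)^n >= 0.95 for n reproduces the pattern in the abstract: a handful of hauls suffices at moderate detection probability, while low-detectability species such as the Arkansas River Shiner need well over 10.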

  2. Research on Two-channel Interleaved Two-stage Paralleled Buck DC-DC Converter for Plasma Cutting Power Supply

    DEFF Research Database (Denmark)

    Yang, Xi-jun; Qu, Hao; Yao, Chen

    2014-01-01

    As for a high-power plasma power supply, the multi-channel interleaved multi-stage paralleled Buck DC-DC converter becomes the first choice due to its high efficiency and flexibility. In the paper, a two-channel interleaved two-stage paralleled Buck DC-DC converter powered by a three-phase AC power supply...

  3. Effect of optimum stratification on sampling with varying probabilities under proportional allocation

    OpenAIRE

    Syed Ejaz Husain Rizvi; Jaj P. Gupta; Manoj Bhargava

    2007-01-01

    The problem of optimum stratification on an auxiliary variable when the units from different strata are selected with probability proportional to the value of the auxiliary variable (PPSWR) was considered by Singh (1975) for the univariate case. In this paper we extend the same problem, for proportional allocation, to the case when two variates are under study. A cum. 3 R3(x) rule for obtaining approximately optimum strata boundaries has been provided. It has been shown theoretically as well as empiricall...

  4. Production of endo-pectate lyase by two stage cultivation of Erwinia carotovora

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, Satoshi; Kobayashi, Yoshiaki

    1987-02-26

    The productivity of endo-pectate lyase from Erwinia carotovora GIR 1044 was found to be greatly improved by two-stage cultivation: in the first stage the bacterium was grown with an inducing carbon source, e.g., pectin, and in the second stage it was cultivated with glycerol, xylose, or fructose with the addition of monosodium L-glutamate as nitrogen source. In the two-stage cultivation using pectin or glycerol as the carbon source the enzyme activity reached 400 units/ml, almost 3 times as much as that of one-stage cultivation in a 10 liter fermentor. Using two-stage cultivation in the 200 liter fermentor improved enzyme productivity over that in the 10 liter fermentor, with 500 units/ml of activity. Compared with the cultivation in Erlenmeyer flasks, fermentor cultivation improved enzyme productivity. The optimum cultivating conditions were agitation of 480 rpm with aeration of 0.5 vvm at 28 °C. (4 figs, 4 tabs, 14 refs)

  5. Chronic infections in hip arthroplasties: comparing risk of reinfection following one-stage and two-stage revision: a systematic review and meta-analysis

    Directory of Open Access Journals (Sweden)

    Lange J

    2012-03-01

    Jeppe Lange1,2, Anders Troelsen3, Reimar W Thomsen4, Kjeld Søballe1,5 1Lundbeck Foundation Centre for Fast-Track Hip and Knee Surgery, Aarhus C, 2Center for Planned Surgery, Silkeborg Regional Hospital, Silkeborg, 3Department of Orthopaedics, Hvidovre Hospital, Hvidovre, 4Department of Clinical Epidemiology, Aarhus University Hospital, Aalborg, 5Department of Orthopaedics, Aarhus University Hospital, Aarhus C, Denmark. Background: Two-stage revision is regarded by many as the best treatment of chronic infection in hip arthroplasties. Some international reports, however, have advocated one-stage revision. No systematic review or meta-analysis has ever compared the risk of reinfection following one-stage and two-stage revisions for chronic infection in hip arthroplasties. Methods: The review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analysis. Relevant studies were identified using PubMed and Embase. We assessed studies that included patients with a chronic infection of a hip arthroplasty treated with either one-stage or two-stage revision and with available data on occurrence of reinfections. We performed a meta-analysis estimating absolute risk of reinfection using a random-effects model. Results: We identified 36 studies eligible for inclusion. None were randomized controlled trials or comparative studies. The patients in these studies had received either one-stage revision (n = 375) or two-stage revision (n = 929). Reinfection occurred with an estimated absolute risk of 13.1% (95% confidence interval: 10.0%–17.1%) in the one-stage cohort and 10.4% (95% confidence interval: 8.5%–12.7%) in the two-stage cohort. The methodological quality of most included studies was considered low, with insufficient data to evaluate confounding factors. Conclusions: Our results may indicate three additional reinfections per 100 reimplanted patients when performing a one-stage versus two-stage revision. However, the
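    The headline conclusion is simple arithmetic on the two pooled absolute risks; a minimal check using the figures reported in the abstract:

```python
one_stage_risk = 0.131   # pooled absolute risk of reinfection, one-stage cohort
two_stage_risk = 0.104   # pooled absolute risk of reinfection, two-stage cohort

# Risk difference, expressed per 100 reimplanted patients.
risk_difference = one_stage_risk - two_stage_risk
extra_per_100 = risk_difference * 100
# Rounds to the "three additional reinfections per 100 patients" of the conclusion.
print(f"{extra_per_100:.1f} additional reinfections per 100 patients")  # → 2.7
```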

  6. Hypospadias repair: Byar's two stage operation revisited.

    Science.gov (United States)

    Arshad, A R

    2005-06-01

    Hypospadias is a congenital deformity characterised by an abnormally located urethral opening, which may lie anywhere proximal to its normal position on the ventral surface of the glans penis, as far back as the perineum. Many operations have been described for the management of this deformity. One hundred and fifteen patients with hypospadias were treated at the Department of Plastic Surgery, Hospital Kuala Lumpur, Malaysia between September 1987 and December 2002, of whom 100 had Byar's procedure performed on them. The age of the patients ranged from neonates to 26 years old. Sixty-seven patients had penoscrotal (58%), 20 had proximal penile (18%), 13 had distal penile (11%) and 15 had subcoronal hypospadias (13%). Operations performed were Byar's two-staged (100), Bracka's two-staged (11), flip-flap (2) and MAGPI operation (2). The most common complication encountered following hypospadias surgery was urethral fistula, at a rate of 18%. There is a higher incidence of proximal hypospadias in the Malaysian community. Byar's procedure is a very versatile technique and can be used for all types of hypospadias. The fistula rate was 18% in this series.

  7. DEVELOPMENT OF THE PROBABLY-GEOGRAPHICAL FORECAST METHOD FOR DANGEROUS WEATHER PHENOMENA

    Directory of Open Access Journals (Sweden)

    Elena S. Popova

    2015-12-01

    This paper presents a scheme for the probably-geographical forecast method for dangerous weather phenomena and discusses two general stages in the realization of this method. It is emphasized that the method under development responds to pressing questions in modern weather forecasting, namely that the forecast is produced for a specific point in space and the corresponding moment in time.

  8. Assessment procedure and probability determination methods of aircraft crash events in siting for nuclear power plants

    International Nuclear Information System (INIS)

    Zheng Qiyan; Zhang Lijun; Huang Weiqi; Yin Qingliao

    2010-01-01

    The assessment procedure for aircraft crash events in siting nuclear power plants, and the methods of probability determination in the two different stages of preliminary screening and detailed evaluation, are introduced in this paper. Besides general air traffic, airport operations and aircraft in corridors, the probability of aircraft crash from military operations in military airspaces is also considered here. (authors)

  9. Two-step estimation in ratio-of-mediator-probability weighted causal mediation analysis.

    Science.gov (United States)

    Bein, Edward; Deutsch, Jonah; Hong, Guanglei; Porter, Kristin E; Qin, Xu; Yang, Cheng

    2018-04-15

    This study investigates appropriate estimation of estimator variability in the context of causal mediation analysis that employs propensity score-based weighting. Such an analysis decomposes the total effect of a treatment on the outcome into an indirect effect transmitted through a focal mediator and a direct effect bypassing the mediator. Ratio-of-mediator-probability weighting estimates these causal effects by adjusting for the confounding impact of a large number of pretreatment covariates through propensity score-based weighting. In step 1, a propensity score model is estimated. In step 2, the causal effects of interest are estimated using weights derived from the prior step's regression coefficient estimates. Statistical inferences obtained from this 2-step estimation procedure are potentially problematic if the estimated standard errors of the causal effect estimates do not reflect the sampling uncertainty in the estimation of the weights. This study extends to ratio-of-mediator-probability weighting analysis a solution to the 2-step estimation problem by stacking the score functions from both steps. We derive the asymptotic variance-covariance matrix for the indirect effect and direct effect 2-step estimators, provide simulation results, and illustrate with an application study. Our simulation results indicate that the sampling uncertainty in the estimated weights should not be ignored. The standard error estimation using the stacking procedure offers a viable alternative to bootstrap standard error estimation. We discuss broad implications of this approach for causal analysis involving propensity score-based weighting. Copyright © 2018 John Wiley & Sons, Ltd.

  10. Development of a two-stage light gas gun to accelerate hydrogen pellets to high speeds for plasma fueling applications

    International Nuclear Information System (INIS)

    Combs, S.K.; Milora, S.L.; Foust, C.R.; Gouge, M.J.; Fehling, D.T.; Sparks, D.O.

    1988-01-01

    The development of a two-stage light gas gun to accelerate hydrogen isotope pellets to high speeds is under way at Oak Ridge National Laboratory. High velocities (>2 km/s) are desirable for plasma fueling applications, since the faster pellets can penetrate more deeply into large, hot plasmas and deposit atoms of fuel directly in a larger fraction of the plasma volume. In the initial configuration of the two-stage device, a 2.2-l volume (/ 3 for frozen hydrogen isotopes). However, the use of sabots to encase and protect the cryogenic pellets from the high peak pressures will probably be required to realize speeds of ∼3 km/s or greater. The experimental plan includes acceleration of hydrogen isotopes as soon as the gun geometry and operating parameters are optimized; theoretical models are being used to aid in this process. The hardware is being designed to accommodate repetitive operation, which is the objective of this research and is required for future applications. 25 refs., 6 figs., 1 tab

  11. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer

  12. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    Science.gov (United States)

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
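    In a two-stage meta-analysis, each database contributes its own effect estimate and standard error, and only these summaries are pooled. A minimal sketch of the second stage using fixed-effect inverse-variance pooling of log odds ratios (the per-database numbers are invented for illustration, not ARITMO results):

```python
import math

# Hypothetical per-database stage-1 estimates: (log odds ratio, standard error)
estimates = [(0.45, 0.20), (0.30, 0.25), (0.55, 0.30)]

# Stage 2: fixed-effect inverse-variance pooling of the stage-1 results.
weights = [1.0 / se**2 for _, se in estimates]
pooled_log_or = sum(w * b for (b, _), w in zip(estimates, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval on the odds-ratio scale.
lo = math.exp(pooled_log_or - 1.96 * pooled_se)
hi = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"pooled OR = {math.exp(pooled_log_or):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A random-effects version would add a between-database variance component to each weight, which is exactly where heterogeneity of the exposure effect enters the comparison the abstract describes.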

  13. Assessing representativeness of sampling methods for reaching men who have sex with men: a direct comparison of results obtained from convenience and probability samples.

    Science.gov (United States)

    Schwarcz, Sandra; Spindler, Hilary; Scheer, Susan; Valleroy, Linda; Lansky, Amy

    2007-07-01

    Convenience samples are used to determine HIV-related behaviors among men who have sex with men (MSM) without measuring the extent to which the results are representative of the broader MSM population. We compared results from a cross-sectional survey of MSM recruited from gay bars between June and October 2001 to a random digit dial telephone survey conducted between June 2002 and January 2003. The men in the probability sample were older, better educated, and had higher incomes than men in the convenience sample; the convenience sample enrolled more employed men and men of color. Substance use around the time of sex was higher in the convenience sample, but other sexual behaviors were similar. HIV testing was common among men in both samples. Periodic validation, through comparison of data collected by different sampling methods, may be useful when relying on survey data for program and policy development.

  14. Predicting the probability of mortality of gastric cancer patients using decision tree.

    Science.gov (United States)

    Mohammadzadeh, F; Noorkojuri, H; Pourhoseingholi, M A; Saadat, S; Baghestani, A R

    2015-06-01

    Gastric cancer is the fourth most common cancer worldwide. This motivated us to investigate gastric cancer risk factors utilizing statistical methods. The aim of this study was to identify the most important factors influencing the mortality of patients who suffer from gastric cancer and to introduce a classification approach, based on a decision tree model, for predicting the probability of mortality from this disease. Data on 216 patients with gastric cancer, who were registered in Taleghani hospital in Tehran, Iran, were analyzed. At first, patients were divided into two groups: dead and alive. Then, to fit the decision tree model to our data, we randomly selected 20% of the dataset as the test sample and treated the remaining dataset as the training sample. Finally, the validity of the model was examined with sensitivity, specificity, diagnostic accuracy and the area under the receiver operating characteristic curve. The CART version 6.0 and SPSS version 19.0 software were used for the analysis of the data. Diabetes, ethnicity, tobacco, tumor size, surgery, pathologic stage, age at diagnosis, exposure to chemical weapons and alcohol consumption were determined to be factors affecting mortality from gastric cancer. The sensitivity, specificity and accuracy of the decision tree were 0.72, 0.75 and 0.74, respectively. These indices show that the decision tree model has acceptable accuracy for predicting the probability of mortality in gastric cancer patients, so a simple decision tree built from factors affecting mortality may help clinicians as a reliable and practical tool to predict the probability of mortality in these patients.
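    The validation step described above reduces to computing sensitivity, specificity and accuracy from the confusion matrix of the held-out test sample. A minimal sketch (the cell counts below are hypothetical; the paper reports only the resulting indices):

```python
def validation_indices(tp, fp, fn, tn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion matrix,
    as used to validate a classifier on a held-out test set."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical test-set counts (true/false positives and negatives):
sens, spec, acc = validation_indices(tp=18, fp=5, fn=7, tn=15)
print(round(sens, 2), round(spec, 2), round(acc, 2))  # → 0.72 0.75 0.73
```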

  15. Correlations between channel probabilities in collisional dissociation of D3+

    International Nuclear Information System (INIS)

    Abraham, S.; Nir, D.; Rosner, B.

    1984-01-01

    Measurements of the dissociation of D3+ ions at 300–600 keV under single- and multiple-collision conditions in Ar- and H2-gas targets have been performed. A complete separation of all dissociation channels was achieved, including the neutral channels, which were resolved using a fine-mesh technique. Data analysis in the multiple-collision regime confirms the validity of the rate equations governing the charge exchange processes. In the single-collision region the analysis yields constant relations between channel probabilities. Data rearrangement shows probability factorization and suggests that collisional dissociation is a two-stage process, a fast electron exchange followed by rearrangement and branching to the exit channels

  16. Diachronic changes in word probability distributions in daily press

    Directory of Open Access Journals (Sweden)

    Stanković Jelena

    2006-01-01

    Changes in probability distributions of individual words and word types were investigated within two samples of daily press spanning fifty years. One sample was derived from the Corpus of Serbian Language (CSL) (Kostić, Đ., 2001), which covers the period between 1945 and 1957, and the other from the Ebart Media Documentation (EBR), which was compiled from seven daily newspapers and five weekly magazines from 2002 and 2003. Each sample consisted of about 1 million words. The obtained results indicate that nouns and adjectives were more frequent in the CSL, while verbs and prepositions are more frequent in the EBR sample, suggesting a decrease of sentence length in the last five decades. Conspicuous changes in probability distribution of individual words were observed for nouns and adjectives, while minimal or no changes were observed for verbs and prepositions. Such an outcome suggests that nouns and adjectives are most susceptible to diachronic changes, while verbs and prepositions appear to be resistant to such changes.

  17. Modal intersection types, two-level languages, and staged synthesis

    DEFF Research Database (Denmark)

    Henglein, Fritz; Rehof, Jakob

    2016-01-01

    A typed λ-calculus, λ∩⎕, is introduced, combining intersection types and modal types. We develop the metatheory of λ∩⎕, with particular emphasis on the theory of subtyping and distributivity of the modal and intersection type operators. We describe how a stratification of λ∩⎕ leads to a multi-linguistic framework for staged program synthesis, where metaprograms are automatically synthesized which, when executed, generate code in a target language. We survey the basic theory of staged synthesis and illustrate by example how a two-level language theory specialized from λ∩⎕ can be used to understand the process of staged synthesis.

  18. Empirical study of classification process for two-stage turbo air classifier in series

    Science.gov (United States)

    Yu, Yuan; Liu, Jiaxiang; Li, Gang

    2013-05-01

    The suitable process parameters for a two-stage turbo air classifier are important for obtaining the ultrafine powder that has a narrow particle-size distribution, however little has been published internationally on the classification process for the two-stage turbo air classifier in series. The influence of the process parameters of a two-stage turbo air classifier in series on classification performance is empirically studied by using aluminum oxide powders as the experimental material. The experimental results show the following: 1) When the rotor cage rotary speed of the first-stage classifier is increased from 2 300 r/min to 2 500 r/min with a constant rotor cage rotary speed of the second-stage classifier, classification precision is increased from 0.64 to 0.67. However, in this case, the final ultrafine powder yield is decreased from 79% to 74%, which means the classification precision and the final ultrafine powder yield can be regulated through adjusting the rotor cage rotary speed of the first-stage classifier. 2) When the rotor cage rotary speed of the second-stage classifier is increased from 2 500 r/min to 3 100 r/min with a constant rotor cage rotary speed of the first-stage classifier, the cut size is decreased from 13.16 μm to 8.76 μm, which means the cut size of the ultrafine powder can be regulated through adjusting the rotor cage rotary speed of the second-stage classifier. 3) When the feeding speed is increased from 35 kg/h to 50 kg/h, the "fish-hook" effect is strengthened, which makes the ultrafine powder yield decrease. 4) To weaken the "fish-hook" effect, the equalization of the two-stage wind speeds or the combination of a high first-stage wind speed with a low second-stage wind speed should be selected. This empirical study provides a criterion of process parameter configurations for a two-stage or multi-stage classifier in series, which offers a theoretical basis for practical production.

  19. Two-stage, high power X-band amplifier experiment

    International Nuclear Information System (INIS)

    Kuang, E.; Davis, T.J.; Ivers, J.D.; Kerslick, G.S.; Nation, J.A.; Schaechter, L.

    1993-01-01

    At output powers in excess of 100 MW the authors have noted the development of sidebands in many TWT structures. To address this problem an experiment using a narrow bandwidth, two-stage TWT is in progress. The TWT amplifier consists of a dielectric (ε = 5) slow-wave structure, a 30 dB sever section and an 8.8–9.0 GHz passband periodic, metallic structure. The electron beam used in this experiment is a 950 kV, 1 kA, 50 ns pencil beam propagating along an applied axial field of 9 kG. The dielectric first stage has a maximum gain of 30 dB measured at 8.87 GHz, with output powers of up to 50 MW in the TM01 mode. In these experiments the dielectric amplifier output power is about 3-5 MW and the output power of the complete two-stage device is ∼160 MW at the input frequency. The sidebands detected in earlier experiments have been eliminated. The authors also report measurements of the energy spread of the electron beam resulting from the amplification process. These experimental results are compared with MAGIC code simulations and analytic work they have carried out on such devices

  20. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    Science.gov (United States)

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods We compared 148 MSM aged 18–64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010–2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. Results MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%–95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. Conclusions National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. PMID:26965869

  1. Novel rare missense variations and risk of autism spectrum disorder: whole-exome sequencing in two families with affected siblings and a two-stage follow-up study in a Japanese population.

    Directory of Open Access Journals (Sweden)

    Jun Egawa

    Rare inherited variations in multiplex families with autism spectrum disorder (ASD) are suggested to play a major role in the genetic etiology of ASD. To further investigate the role of rare inherited variations, we performed whole-exome sequencing (WES) in two families, each with three affected siblings. We also performed a two-stage follow-up case-control study in a Japanese population. WES of the six affected siblings identified six novel rare missense variations. Among these variations, CLN8 R24H was inherited in one family by three affected siblings from an affected father and thus co-segregated with ASD. In the first stage of the follow-up study, we genotyped the six novel rare missense variations identified by WES in 241 patients and 667 controls (the Niigata sample). Only CLN8 R24H had a higher mutant allele frequency in patients (1/482) than in controls (1/1334). In the second stage, this variation was further genotyped, yet was not detected in a sample of 309 patients and 350 controls (the Nagoya sample). In the combined Niigata and Nagoya samples, there was no significant association (odds ratio = 1.8, 95% confidence interval = 0.1–29.6). These results suggest that CLN8 R24H plays a role in the genetic etiology of ASD, at least in a subset of ASD patients.

  2. Application of binomial and multinomial probability statistics to the sampling design process of a global grain tracing and recall system

    Science.gov (United States)

    Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
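    The core of such a sampling design is binomial: if tracers are mixed uniformly at a known rate, the probability that a sample of n kernels captures at least k tracers follows directly. A sketch of that calculation (the tracer rate and sample size below are hypothetical design values, not figures from the record):

```python
from math import comb

def p_at_least_k_tracers(n_sample, tracer_rate, k=1):
    """Probability that a sample of n_sample kernels contains at least k
    tracers, with tracers assumed mixed uniformly at rate tracer_rate per
    kernel (binomial approximation to sampling without replacement)."""
    p_fewer = sum(comb(n_sample, i) * tracer_rate**i * (1 - tracer_rate)**(n_sample - i)
                  for i in range(k))
    return 1.0 - p_fewer

# Hypothetical design: 1 tracer per 10,000 kernels, 30,000-kernel sample.
print(round(p_at_least_k_tracers(30_000, 1e-4), 4))  # → 0.9502
```

Solving this relation for n_sample is what turns a target recall confidence into a required sample size; a multinomial extension handles several tracer codes mixed into the same lot.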

  3. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    Energy Technology Data Exchange (ETDEWEB)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-07-01

    Adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, with regard to the simultaneous minimization of maximum job completion time and average latency functions, has been investigated in a multi-objective manner. In this study, the parameter setting has been carried out using the Taguchi Method based on the quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicated the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, which are the real constraints.

  4. Solving no-wait two-stage flexible flow shop scheduling problem with unrelated parallel machines and rework time by the adjusted discrete Multi Objective Invasive Weed Optimization and fuzzy dominance approach

    International Nuclear Information System (INIS)

    Jafarzadeh, Hassan; Moradinasab, Nazanin; Gerami, Ali

    2017-01-01

    Adjusted discrete Multi-Objective Invasive Weed Optimization (DMOIWO) algorithm, which uses a fuzzy dominance approach for ordering, has been proposed to solve the no-wait two-stage flexible flow shop scheduling problem. Design/methodology/approach: The no-wait two-stage flexible flow shop scheduling problem, considering sequence-dependent setup times and probable rework in both stations, different ready times for all jobs, rework times for both stations, and unrelated parallel machines, with regard to the simultaneous minimization of maximum job completion time and average latency functions, has been investigated in a multi-objective manner. In this study, the parameter setting has been carried out using the Taguchi Method based on the quality indicator for better performance of the algorithm. Findings: The results of this algorithm have been compared with those of conventional multi-objective algorithms to show the better performance of the proposed algorithm. The results clearly indicated the greater performance of the proposed algorithm. Originality/value: This study provides an efficient method for solving the multi-objective no-wait two-stage flexible flow shop scheduling problem by considering sequence-dependent setup times, probable rework in both stations, different ready times for all jobs, rework times for both stations and unrelated parallel machines, which are the real constraints.

  5. Noncausal two-stage image filtration at presence of observations with anomalous errors

    OpenAIRE

    S. V. Vishnevyy; S. Ya. Zhuk; A. N. Pavliuchenkova

    2013-01-01

    Introduction. For the purposes of filtering images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms that detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. The adaptive algorithm for noncausal two-stage filtration is developed. On the first stage the adaptiv...

  6. CFD simulations of compressed air two stage rotary Wankel expander – Parametric analysis

    International Nuclear Information System (INIS)

    Sadiq, Ghada A.; Tozer, Gavin; Al-Dadah, Raya; Mahmoud, Saad

    2017-01-01

    Highlights: • CFD ANSYS-Fluent 3D simulation of Wankel expander is developed. • Single and two-stage expander’s performance is compared. • Inlet and outlet ports shape and configurations are investigated. • Isentropic efficiency of two stage Wankel expander of 91% is achieved. - Abstract: A small scale volumetric Wankel expander is a powerful device for small-scale power generation in compressed air energy storage (CAES) systems and Organic Rankine cycles powered by different heat sources such as, biomass, low temperature geothermal, solar and waste heat leading to significant reduction in CO_2 emissions. Wankel expanders outperform other types of expander due to their ability to produce two power pulses per revolution per chamber additional to higher compactness, lower noise and vibration and lower cost. In this paper, a computational fluid dynamics (CFD) model was developed using ANSYS 16.2 to simulate the flow dynamics for a single and two stage Wankel expanders and to investigate the effect of port configurations, including size and spacing, on the expander’s power output and isentropic efficiency. Also, single-stage and two-stage expanders were analysed with different operating conditions. Single-stage 3D CFD results were compared to published work showing close agreement. The CFD modelling was used to investigate the performance of the rotary device using air as an ideal gas with various port diameters ranging from 15 mm to 50 mm; port spacing varying from 28 mm to 66 mm; different Wankel expander sizes (r = 48, e = 6.6, b = 32) mm and (r = 58, e = 8, b = 40) mm both as single-stage and as two-stage expanders with different configurations and various operating conditions. Results showed that the best Wankel expander design for a single-stage was (r = 48, e = 6.6, b = 32) mm, with the port diameters 20 mm and port spacing equal to 50 mm. Moreover, combining two Wankel expanders horizontally, with a larger one at front, produced 8.52 kW compared

  7. Estimating the joint survival probabilities of married individuals

    NARCIS (Netherlands)

    Sanders, Lisanne; Melenberg, Bertrand

    We estimate the joint survival probability of spouses using a large random sample drawn from a Dutch census. As benchmarks we use two bivariate Weibull models. We consider more flexible models, using a semi-nonparametric approach, by extending the independent Weibull distribution using squared

  8. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice among a population that is accessible and available. Some of the non-probabilit...
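
    The two families of sampling methods described above can be illustrated with a short sketch. This is a generic illustration using Python's standard library, not code from the module itself; the 100-unit sampling frame is hypothetical.

```python
import random

def simple_random_sample(population, n, seed=None):
    """Probability sampling: every unit has a known, equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def systematic_sample(population, n, seed=None):
    """Probability sampling at a fixed interval: every k-th unit after a random start."""
    rng = random.Random(seed)
    k = len(population) // n          # sampling interval
    start = rng.randrange(k)          # random start keeps selection chance-based
    return [population[start + i * k] for i in range(n)]

# hypothetical sampling frame of 100 units
population = list(range(1, 101))
sample = simple_random_sample(population, 10, seed=42)
```

    A non-probability (convenience) sample, by contrast, would simply take whichever units are accessible, e.g. `population[:10]`, with no chance mechanism and hence no basis for design-based inference.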

  9. Multi stage electrodialysis for separation of two metal ion species

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, K.; Sakurai, H.; Nii, S.; Sugiura, K. [Nagoya Univ., Nagoya (Japan)

    1995-04-20

    In this article, the separation of two metal ions by electrodialysis with a cation exchange membrane is investigated: the separation of potassium and sodium ions was studied using batch dialysis with and without an electric field, and continuous electrodialysis with a four-stage dialyzer. The difference in permselectivity between dialysis with and without an electric field was not appreciable for the potassium/sodium system with the cation exchange membrane. In continuous electrodialysis, the concentration ratio between potassium and sodium ions in the outlet solution from the recovery side of the dialyzer increased with the reflux flow rate and the number of stages. When the reflux flow rate was zero, the concentration ratio with the four-stage dialyzer was 1.5, almost the same as that with a two-stage dialyzer consisting of a simple membrane. When the reflux flow ratio was 0.7, the concentration ratio reached 3.6. 20 refs., 8 figs.

  10. Efficient simulation of tail probabilities of sums of correlated lognormals

    DEFF Research Database (Denmark)

    Asmussen, Søren; Blanchet, José; Juneja, Sandeep

    We consider the problem of efficient estimation of tail probabilities of sums of correlated lognormals via simulation. This problem is motivated by the tail analysis of portfolios of assets driven by correlated Black-Scholes models. We propose two estimators that can be rigorously shown to be efficient. The first estimator … optimizes the scaling parameter of the covariance. The second estimator decomposes the probability of interest into two contributions and takes advantage of the fact that large deviations for a sum of correlated lognormals are (asymptotically) caused by the largest increment. Importance sampling…
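
    The variance problem that motivates such estimators can be seen with a crude Monte Carlo sketch: as the threshold u moves into the tail, the hit indicator becomes rare and the relative standard error of the naive estimator blows up. The parameters below (a bivariate lognormal with correlation 0.5) are assumptions for illustration; this is not the paper's efficient estimators.

```python
import numpy as np

def tail_prob_mc(mu, cov, u, n_samples=100_000, seed=0):
    """Crude Monte Carlo estimate of P(exp(Z_1) + ... + exp(Z_d) > u)
    for Z ~ N(mu, cov). The relative standard error grows as u moves
    into the tail, which is what importance sampling addresses."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal(mu, cov, size=n_samples)
    s = np.exp(z).sum(axis=1)          # sum of correlated lognormals
    hits = s > u
    p_hat = hits.mean()
    se = hits.std(ddof=1) / np.sqrt(n_samples)
    return p_hat, se

mu = np.zeros(2)
cov = np.array([[1.0, 0.5],
                [0.5, 1.0]])           # assumed correlation between the two assets
p_hat, se = tail_prob_mc(mu, cov, u=10.0)
```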

  11. Hybrid biogas upgrading in a two-stage thermophilic reactor

    DEFF Research Database (Denmark)

    Corbellini, Viola; Kougias, Panagiotis; Treu, Laura

    2018-01-01

    The aim of this study is to propose a hybrid biogas upgrading configuration composed of two-stage thermophilic reactors. Hydrogen is directly injected in the first-stage reactor. The output gas from the first reactor (in-situ biogas upgrading) is subsequently transferred to a second upflow reactor (ex-situ upgrading), in which an enriched hydrogenotrophic culture is responsible for the hydrogenation of carbon dioxide to methane. The overall objective of the work was to perform an initial methane enrichment in the in-situ reactor, avoiding deterioration of the process due to elevated pH levels, and subsequently to complete the biogas upgrading process in the ex-situ chamber. The methane content in the first-stage reactor reached on average 87% and the corresponding value in the second stage was 91%, with a maximum of 95%. A remarkable accumulation of volatile fatty acids was observed in the first…

  12. Runway Operations Planning: A Two-Stage Solution Methodology

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, Runway Operations Planning (ROP) is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans was previously approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, Air Traffic Control (ATC) constraints) at the same time. Since, however, at any given point in time, there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces Runway Operations Planning (ROP) as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two stage' heuristic algorithm for solving the Runway Operations Planning (ROP) problem. Focus is specifically given to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the

  13. Profile fitting and the two-stage method in neutron powder diffractometry for structure and texture analysis

    International Nuclear Information System (INIS)

    Jansen, E.; Schaefer, W.; Will, G.; Kernforschungsanlage Juelich G.m.b.H.

    1988-01-01

    An outline and an application of the two-stage method in neutron powder diffractometry are presented. Stage (1): Individual reflection data like position, half-width and integrated intensity are analysed by profile fitting. The profile analysis is based on an experimentally determined instrument function and can be applied without prior knowledge of a structural model. A mathematical procedure is described which results in a variance-covariance matrix containing standard deviations and correlations of the refined reflection parameters. Stage (2): The individual reflection data derived from the profile fitting procedure can be used for appropriate purposes either in structure determination or in texture and strain or stress analysis. The integrated intensities are used in the non-diagonal weighted least-squares routine POWLS for structure refinement. The weight matrix is given by the inverted variance-covariance matrix of stage (1). This procedure is the basis for reliable and real Bragg R values and for a realistic estimation of standard deviations of structural parameters. In the case of texture analysis the integrated intensities are compiled into pole figures representing the intensity distribution over all sample orientations for individual hkl. Various examples of the wide application of the two-stage method in structure and texture analysis are given: structure refinement of a standard quartz specimen, magnetic ordering in the system Tb_xY_{1-x}Ag, preferred orientation effects in deformed marble, and texture investigations of a triclinic plagioclase. (orig.)
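
    The stage-(2) refinement described above amounts to least squares with a non-diagonal weight matrix: the inverse of the variance-covariance matrix produced by stage (1). A minimal generalized-least-squares sketch (not the POWLS code itself; the design matrix and covariance below are illustrative):

```python
import numpy as np

def gls_fit(A, y, V):
    """Weighted least squares with a non-diagonal weight matrix:
    minimize (y - A x)^T V^{-1} (y - A x), where V is the
    variance-covariance matrix of the observations."""
    W = np.linalg.inv(V)       # inverted variance-covariance matrix = weight matrix
    N = A.T @ W @ A            # normal-equations matrix
    x = np.linalg.solve(N, A.T @ W @ y)
    cov_x = np.linalg.inv(N)   # realistic parameter standard deviations come from here
    return x, cov_x

# Illustrative data: straight-line model y = x0 + x1*t with correlated observation errors
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([1.0, 3.0, 5.0])
V = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])
x, cov_x = gls_fit(A, y, V)    # recovers [1, 2] exactly for this noise-free y
```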

  14. Soluble CD44 concentration in the serum and peritoneal fluid samples of patients with different stages of endometriosis.

    Science.gov (United States)

    Mashayekhi, Farhad; Aryaee, Hadis; Mirzajani, Ebrahim; Yasin, Ashraf Ale; Fathi, Abdolsatar

    2015-09-01

    Endometriosis is a gynecological disease defined by the histological presence of endometrial glands and stroma outside the uterine cavity, most commonly implanted over visceral and peritoneal surfaces within the female pelvis. CD44 is a membrane protein expressed by human endometrial cells, and it has been shown to promote the adhesion of endometrial cells. The aim of this study was to determine the levels of soluble CD44 (sCD44) in the serum and peritoneal fluid (PF) samples of patients with different stages of endometriosis. 39 PF and serum samples from normal healthy controls and 130 samples from patients with different stages of endometriosis (33 cases of stage I, 38 stage II, 30 stage III and 29 stage IV) were included in this study. Total protein concentration (TPC) and the level of sCD44 in the serum were determined by Bio-Rad protein assay, based on the Bradford dye procedure, and enzyme-linked immunosorbent assay, respectively. No significant change in the TPC was seen in the serum of patients with endometriosis when compared to normal controls. All serum and peritoneal fluid samples presented sCD44 expression, and from stage I to stage IV endometriosis a significant increase in sCD44 expression was observed compared to the control group. The results of this study show that high expression of sCD44 is correlated with advanced stages of endometriosis. It is also concluded that the detection of serum and/or peritoneal fluid sCD44 may be useful in classifying endometriosis.

  15. Single-stage versus two-stage anaerobic fluidized bed bioreactors in treating municipal wastewater: Performance, foulant characteristics, and microbial community.

    Science.gov (United States)

    Wu, Bing; Li, Yifei; Lim, Weikang; Lee, Shi Lin; Guo, Qiming; Fane, Anthony G; Liu, Yu

    2017-03-01

    This study examined the respective performance, membrane foulant characteristics, and microbial community in single-stage and two-stage anaerobic fluidized membrane bioreactors (AFMBRs) treating settled raw municipal wastewater, with the aims of exploring fouling mechanisms and microbial community structure in both systems. Both AFMBRs exhibited comparable organic removal efficiency and membrane performance. In the single-stage AFMBR, fewer soluble organic substances were removed through biosorption by GAC and biodegradation than in the two-stage AFMBR. Compared to the two-stage AFMBR, the formation of a cake layer was the main cause of the observed membrane fouling in the single-stage AFMBR at the same employed flux. The accumulation rate of the biopolymers was linearly correlated with the membrane fouling rate. In the chemically cleaned foulants, humic acid-like substances and silicon were identified as the predominant organic and inorganic foulants, respectively. As such, the fluidized GAC particles might not be effective in removing these substances from the membrane surfaces. High-throughput pyrosequencing analysis further revealed that beta-Proteobacteria were predominant members in both AFMBRs, contributing to the development of biofilms on the fluidized GAC and membrane surfaces. However, the abundance of the identified dominant members in the membrane surface-associated biofilm seemed to be related to the permeate flux and reactor configuration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys.

    Science.gov (United States)

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-09-01

    To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. We compared 148 MSM aged 18-64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3) undertaken in 2010-2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European MSM Internet Survey (EMIS); 797 in the London Gay Men's Sexual Health Survey; and 1234 in Scotland's Gay Men's Sexual Health Survey. Analyses compared men reporting at least one male sexual partner (past year) on similarly worded questions and multivariable analyses accounted for sociodemographic differences between the surveys. MSM in convenience surveys were younger and better educated than MSM in Natsal-3, and a larger proportion identified as gay (85%-95% vs 62%). Partner numbers were higher and same-sex anal sex more common in convenience surveys. Unprotected anal intercourse was more commonly reported in EMIS. Compared with Natsal-3, MSM in convenience surveys were more likely to report gonorrhoea diagnoses and HIV testing (both past year). Differences between the samples were reduced when restricting analysis to gay-identifying MSM. National probability surveys better reflect the population of MSM but are limited by their smaller samples of MSM. Convenience surveys recruit larger samples of MSM but tend to over-represent MSM identifying as gay and reporting more sexual risk behaviours. Because both sampling strategies have strengths and weaknesses, methods are needed to triangulate data from probability and convenience surveys. Published by the BMJ Publishing Group Limited.

  17. TWO-STAGE HEAT PUMPS FOR ENERGY SAVING TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    A. E. Denysova

    2017-09-01

    Full Text Available The problem of energy saving has become one of the most important in power engineering. It is caused by the exhaustion of world reserves of hydrocarbon fuels, such as gas, oil and coal, which are the sources of traditional heat supply. Conventional sources have essential shortcomings (low power, ecological and economic efficiency) that can be eliminated by using alternative methods of power supply, such as the one considered here: low-temperature natural heat of ground waters exploited by heat pump installations. The heat supply system considered provides an effective use of a two-stage heat pump installation operating with ground waters as the heat source during the lowest ambient temperature period. A calculation method for heat pump installations on the basis of groundwater energy is proposed. The electric energy consumption of the compressor drives and the transformation coefficient µ of the heat supply system for a low-potential heat source of ground waters are calculated, demonstrating the high efficiency of two-stage heat pump installations.

  18. Development and testing of a two stage granular filter to improve collection efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Rangan, R.S.; Prakash, S.G.; Chakravarti, S.; Rao, S.R.

    1999-07-01

    A circulating bed granular filter (CBGF) with a single filtration stage was tested with a PFB combustor in the Coal Research Facility of BHEL R and D in Hyderabad during the years 1993--95. Filter outlet dust loading varied between 20--50 mg/Nm{sup 3} for an inlet dust loading of 5--8 gms/Nm{sup 3}. The results were reported in Fluidized Bed Combustion-Volume 2, ASME 1995. Though the outlet consists of predominantly fine particulates below 2 microns, it is still beyond present day gas turbine specifications for particulate concentration. In order to enhance the collection efficiency, a two-stage granular filtration concept was evolved, wherein the filter depth is divided between two stages, accommodated in two separate vertically mounted units. The design also incorporates BHEL's scale-up concept of multiple parallel stages. The two-stage concept minimizes reentrainment of captured dust by providing clean granules in the upper stage, from where gases finally exit the filter. The design ensures that dusty gases come in contact with granules having a higher dust concentration at the bottom of the two-stage unit, where most of the cleaning is completed. A second filtration stage of cleaned granules is provided in the top unit (where the granules are returned to the system after dedusting) minimizing reentrainment. Tests were conducted to determine the optimum granule to dust ratio (G/D ratio) which decides the granule circulation rate required for the desired collection efficiency. The data brings out the importance of pre-separation and the limitation on inlet dust loading for any continuous system of granular filtration. Collection efficiencies obtained were much higher (outlet dust being 3--9 mg/Nm{sup 3}) than in the single stage filter tested earlier for similar dust loading at the inlet. The results indicate that two-stage granular filtration has a high potential for HTHT application with fewer risks as compared to other systems under development.

  19. Automatic sleep stage classification using two facial electrodes.

    Science.gov (United States)

    Virkkala, Jussi; Velin, Riitta; Himanen, Sari-Leena; Värri, Alpo; Müller, Kiti; Hasan, Joel

    2008-01-01

    Standard sleep stage classification is based on visual analysis of central EEG, EOG and EMG signals. Automatic analysis with a reduced number of sensors has been studied as an easy alternative to the standard. In this study, a single-channel electro-oculography (EOG) algorithm was developed for separation of wakefulness, SREM, light sleep (S1, S2) and slow wave sleep (S3, S4). The algorithm was developed and tested with 296 subjects. Additional validation was performed on 16 subjects using a low weight single-channel Alive Monitor. In the validation study, subjects attached the disposable EOG electrodes themselves at home. In separating the four stages total agreement (and Cohen's Kappa) in the training data set was 74% (0.59), in the testing data set 73% (0.59) and in the validation data set 74% (0.59). Self-applicable electro-oculography with only two facial electrodes was found to provide reasonable sleep stage information.

  20. Excluding joint probabilities from quantum theory

    Science.gov (United States)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after Born's probability for a single observable. Instead, various definitions were suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables, in Hilbert spaces with dimension larger than two. If measurement contexts are included in the definition, joint probabilities are no longer excluded, but they are still constrained by imprecise probabilities.

  1. Daniel Courgeau: Probability and social science: methodological relationships between the two approaches [Review of: Probability and social science: methodological relationships between the two approaches]

    NARCIS (Netherlands)

    Willekens, F.J.C.

    2013-01-01

    Throughout history, humans engaged in games in which randomness plays a role. In the 17th century, scientists started to approach chance scientifically and to develop a theory of probability. Courgeau describes how the relationship between probability theory and the social sciences emerged and evolved.

  2. Development of Explosive Ripper with Two-Stage Combustion

    Science.gov (United States)

    1974-10-01

    …inch pipe duct work; the width of this duct proved to be detrimental in marginally rippable material: the duct, instead of the penetrator tip, was… marginally rippable rock. Operating Requirements, Fuel: the two-stage combustion device is designed to operate using the same diesel…

  3. Sexual behaviors, relationships, and perceived health among adult men in the United States: results from a national probability sample.

    Science.gov (United States)

    Reece, Michael; Herbenick, Debby; Schick, Vanessa; Sanders, Stephanie A; Dodge, Brian; Fortenberry, J Dennis

    2010-10-01

    Population-based data describing men's sexual behaviors and their correlates are needed to provide a foundation for those who deliver sexual health services and programs to men in the United States. The purpose of this study was to assess, in a national probability survey of men ages 18-94 years, the occurrence and frequency of sexual behaviors and their associations with relationship status and health status. A national probability sample of 2,522 men aged 18 to 94 completed a cross-sectional survey about their sexual behaviors, relationship status, and health. Relationship status; health status; experience of solo masturbation, partnered masturbation, giving oral sex, receiving oral sex, vaginal intercourse and anal intercourse, in the past 90 days; frequency of solo masturbation, vaginal intercourse and anal intercourse in the past year. Masturbation, oral intercourse, and vaginal intercourse are prevalent among men throughout most of their adult life, with both occurrence and frequency varying with age and as functions of relationship type and physical health status. Masturbation is prevalent and frequent across various stages of life and for both those with and without a relational partner, with fewer men with fair to poor health reporting recent masturbation. Patterns of giving oral sex to a female partner were similar to those for receiving oral sex. Vaginal intercourse in the past 90 days was more prevalent among men in their late 20s and 30s than in the other age groups, although being reported by approximately 50% of men in the sixth and seventh decades of life. Anal intercourse and sexual interactions with other men were less common than all other sexual behaviors. Contemporary men in the United States engage in diverse solo and partnered sexual activities; however, sexual behavior is less common and more infrequent among older age cohorts. © 2010 International Society for Sexual Medicine.

  4. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1, applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated… that the austenitization kinetics is governed by Ni-diffusion and that the slow transformation kinetics separating the two stages is caused by soft impingement in the martensite phase. Increasing the lath width in the kinetics model had a similar effect on the austenitization kinetics as increasing the heating rate…

  5. Model of unplanned smoking initiation of children and adolescents: an integrated stage model of smoking behavior.

    Science.gov (United States)

    Kremers, S P J; Mudde, A N; De Vries, H

    2004-05-01

    Two lines of psychological research have attempted to spell out the stages of adolescent smoking initiation. The first has focused on behavioral stages of smoking initiation, while the second line emphasized motivational stages. A large international sample of European adolescents (N = 10,170, mean age = 13.3 years) was followed longitudinally. Self-reported motivational and behavioral stages of smoking initiation were integrated, leading to the development of the Model of Unplanned Smoking Initiation of Children and Adolescents (MUSICA). The MUSICA postulates that youngsters experiment with smoking while they are in an unmotivated state as regards their plans for smoking regularly in the future. More than 95% of the total population resided in one of the seven stages distinguished by MUSICA. The probability of starting to smoke regularly during the 12 months follow-up period increased with advanced stage assignment at baseline. Unique social cognitive predictors of stage progression from the various stages were identified, but effect sizes of predictors of transitions were small. The integration of motivational and behavioral dimensions improves our understanding of the process of smoking initiation. In contrast to current theories of smoking initiation, adolescent uptake of smoking behavior was found to be an unplanned action.

  6. Quantification of physical activity using the QAPACE Questionnaire: a two stage cluster sample design survey of children and adolescents attending urban school.

    Science.gov (United States)

    Barbosa, Nicolas; Sanchez, Carlos E; Patino, Efrain; Lozano, Benigno; Thalabard, Jean C; LE Bozec, Serge; Rieu, Michel

    2016-05-01

    Quantification of physical activity as energy expenditure from youth onward is important for the prevention of chronic non-communicable diseases in adulthood. The aim was to quantify physical activity, expressed as daily energy expenditure (DEE), in school children and adolescents aged 8-16 years, by age, gender and socioeconomic level (SEL) in Bogotá. This is a two-stage cluster survey sample. From a universe of 4700 schools and 760,000 students across the three socioeconomic levels existing in Bogotá (low, medium and high), the random sample was 20 schools and 1840 students (904 boys and 936 girls). Anticipating participant dropout and inconsistent questionnaire responses, the sample size was increased: six individuals of each gender for each of the nine age groups were selected, giving a total sample of 2160 individuals. Selected students filled in the QAPACE questionnaire under supervision. The data were analyzed by comparing means with a multivariate general linear model. Fixed factors were gender (boys and girls), age (8 to 16 years old) and tri-strata SEL (low, medium and high); the independent variables assessed were height, weight and leisure time, expressed in hours/day; the dependent variable was daily energy expenditure DEE (kJ.kg-1.day-1) during leisure time (DEE-LT), during school time (DEE-ST), during vacation time (DEE-VT), and the total mean DEE per year (DEEm-TY). RESULTS: Differences in DEE by gender: in boys, LT and all DEE variables were significant with SEL, but age-SEL was only significant for DEE-VT. In girls, all variables were significant with SEL. Post hoc multiple comparisons with age, using Fisher's Least Significant Difference (LSD) test, were significant for all variables. For both genders and all SELs, girls had the higher values, except in the high SEL (5-6). Boys had higher values in DEE-LT, DEE-ST and DEE-VT, except for DEEm-TY in SEL (5-6). In SEL (5-6), all DEEs for both genders were highest.
    For SEL

  7. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    Science.gov (United States)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by the sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model, (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence, (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases, (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model, and (v) as expected, the rule-based model holds more inventory than the optimization model.
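
    The sample average approximation idea used here can be sketched on a deliberately simplified stand-in problem: choose a first-stage buffer quantity, then average the second-stage shortage cost over sampled defect scenarios. The costs, defect rate, and newsvendor-style recourse below are assumptions for illustration, not the paper's resequencing model.

```python
import random

def saa_buffer_size(scenarios, hold_cost, short_cost, max_q):
    """Sample average approximation (SAA) for a toy two-stage problem:
    the first stage picks a buffer quantity q; the second stage pays a
    shortage cost for defects exceeding q in each sampled scenario.
    The true expectation is replaced by the average over scenarios."""
    def avg_cost(q):
        recourse = sum(short_cost * max(d - q, 0) for d in scenarios) / len(scenarios)
        return hold_cost * q + recourse
    # first-stage decision: enumerate candidate buffer sizes
    return min(range(max_q + 1), key=avg_cost)

random.seed(1)
# hypothetical scenarios: number of defective vehicles out of 50, with a 10% defect rate
scenarios = [sum(random.random() < 0.1 for _ in range(50)) for _ in range(1000)]
q_star = saa_buffer_size(scenarios, hold_cost=1.0, short_cost=5.0, max_q=20)
```

    With these assumed costs the SAA solution approximates the newsvendor quantile at which the probability of a shortage drops below hold_cost/short_cost.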

  8. An adaptive two-stage dose-response design method for establishing proof of concept.

    Science.gov (United States)

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
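
    Combining stage-wise results as described can be illustrated with a standard inverse-normal combination rule for two-stage designs; this is a generic sketch with pre-specified equal weights, not the authors' specific conditional error function.

```python
from math import sqrt
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine one-sided stage-wise p-values with pre-specified weights
    (w1 + w2 = 1). A small combined p-value establishes the global
    proof of concept across both stages."""
    nd = NormalDist()
    z = sqrt(w1) * nd.inv_cdf(1.0 - p1) + sqrt(w2) * nd.inv_cdf(1.0 - p2)
    return 1.0 - nd.cdf(z)

# Two moderately significant stages combine into a stronger global result
p_global = inverse_normal_combination(0.04, 0.03)
```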

  9. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Full Text Available Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. Initial operative procedures performed were window colostomy (n = 6), colostomy proximal to pouch (n = 4), and ligation of colovesical fistula and end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull-through (APPT) of colon in eight, and pouch excision with APPT of ileum in three were performed. The mean age at the time of the definitive procedures was 15.6 months (range 3 to 53 months) and the mean weight was 7.5 kg (range 4 to 11 kg). Good fecal continence was observed in six and fair in two cases during follow-up, while three of our cases were lost to follow-up. There was no mortality following the definitive procedures among the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can be performed safely with good results. Importantly, the definitive procedure is done without a protective stoma, thereby avoiding stoma closure, stoma-related complications, and the cost and hospital stay related to stoma closure.

  10. Persistence Probabilities of Two-Sided (Integrated) Sums of Correlated Stationary Gaussian Sequences

    Science.gov (United States)

    Aurzada, Frank; Buck, Micha

    2018-02-01

    We study the persistence probability for some two-sided, discrete-time Gaussian sequences that are discrete-time analogues of fractional Brownian motion and integrated fractional Brownian motion, respectively. Our results extend the corresponding ones in continuous time in Molchan (Commun Math Phys 205(1):97-111, 1999) and Molchan (J Stat Phys 167(6):1546-1554, 2017) to a wide class of discrete-time processes.
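As an illustration of persistence probabilities, here is a Monte Carlo sketch for an AR(1) sequence, a simple correlated stationary Gaussian process (not the fractional processes treated in the paper); the function name and parameters are ours.

```python
import random

def persistence_probability(n, rho, trials=20000, seed=1):
    """Monte Carlo estimate of P(X_1 > 0, ..., X_n > 0) for a stationary
    AR(1) Gaussian sequence X_k = rho*X_{k-1} + sqrt(1-rho^2)*Z_k,
    started from the stationary distribution X_1 ~ N(0, 1)."""
    rng = random.Random(seed)
    scale = (1.0 - rho * rho) ** 0.5
    count = 0
    for _ in range(trials):
        x = rng.gauss(0.0, 1.0)
        ok = x > 0.0
        k = 1
        while ok and k < n:
            x = rho * x + scale * rng.gauss(0.0, 1.0)
            ok = x > 0.0
            k += 1
        if ok:
            count += 1
    return count / trials
```

Positive correlation raises the persistence probability above the independent value 2^(-n), which is the qualitative effect the paper quantifies asymptotically.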

  11. A two-stage preventive maintenance optimization model incorporating two-dimensional extended warranty

    International Nuclear Information System (INIS)

    Su, Chun; Wang, Xiaolin

    2016-01-01

    In practice, customers can decide whether or not to buy an extended warranty, either at the time of item sale or at the end of the basic warranty. In this paper, by taking into account the moment at which customers purchase a two-dimensional extended warranty, the optimization of imperfect preventive maintenance for repairable items is investigated from the manufacturer's perspective. A two-dimensional preventive maintenance strategy is proposed, under which the item is preventively maintained according to a specified age interval or usage interval, whichever occurs first. It is highlighted that when the extended warranty is purchased upon the expiration of the basic warranty, the manufacturer faces a two-stage preventive maintenance optimization problem. Moreover, in the second stage, the possibility of reducing the servicing cost over the extended warranty period is explored by classifying customers on the basis of their usage rates and then providing them with customized preventive maintenance programs. Numerical examples show that offering customized preventive maintenance programs can reduce the manufacturer's warranty cost, while a larger saving in warranty cost comes from encouraging customers to buy the extended warranty at the time of item sale. - Highlights: • A two-dimensional PM strategy is investigated. • Imperfect PM strategy is optimized by considering both the two-dimensional BW and EW. • Customers are categorized based on their usage rates throughout the BW period. • Servicing cost of the EW is reduced by offering customized PM programs. • Customers buying the EW at the time of sale is preferable for the manufacturer.
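The "whichever occurs first" trigger of the two-dimensional PM strategy can be sketched in a few lines. This is our own schematic, assuming a constant usage rate per customer, not the authors' optimization model.

```python
def next_pm_time(t_last, usage_rate, age_interval, usage_interval):
    """Time of the next preventive maintenance under a two-dimensional
    policy: PM is due when the age interval or the usage interval
    elapses, whichever occurs first.

    usage_rate: usage accumulated per unit time (assumed constant).
    """
    t_by_age = t_last + age_interval
    # A zero usage rate never triggers the usage limit.
    t_by_usage = (t_last + usage_interval / usage_rate
                  if usage_rate > 0 else float("inf"))
    return min(t_by_age, t_by_usage)
```

A heavy user (usage rate 2) hits the usage limit first, while a light user (usage rate 0.5) is maintained on the age schedule — the customer classification the second stage exploits.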

  12. Two-stage agglomeration of fine-grained herbal nettle waste

    Science.gov (United States)

    Obidziński, Sławomir; Joka, Magdalena; Fijoł, Olga

    2017-10-01

    This paper compares the densification work necessary for the pressure agglomeration of fine-grained dusty nettle waste, with the densification work involved in two-stage agglomeration of the same material. In the first stage, the material was pre-densified through coating with a binder material in the form of a 5% potato starch solution, and then subjected to pressure agglomeration. A number of tests were conducted to determine the effect of the moisture content in the nettle waste (15, 18 and 21%), as well as the process temperature (50, 70, 90°C) on the values of densification work and the density of the obtained pellets. For pre-densified pellets from a mixture of nettle waste and a starch solution, the conducted tests determined the effect of pellet particle size (1, 2, and 3 mm) and the process temperature (50, 70, 90°C) on the same values. On the basis of the tests, we concluded that the introduction of a binder material and the use of two-stage agglomeration in nettle waste densification resulted in increased densification work (as compared to the densification of nettle waste alone) and increased pellet density.

  13. Introduction to probability with R

    CERN Document Server

    Baclawski, Kenneth

    2008-01-01

    Foreword; Preface; Sets, Events, and Probability; The Algebra of Sets; The Bernoulli Sample Space; The Algebra of Multisets; The Concept of Probability; Properties of Probability Measures; Independent Events; The Bernoulli Process; The R Language; Finite Processes; The Basic Models; Counting Rules; Computing Factorials; The Second Rule of Counting; Computing Probabilities; Discrete Random Variables; The Bernoulli Process: Tossing a Coin; The Bernoulli Process: Random Walk; Independence and Joint Distributions; Expectations; The Inclusion-Exclusion Principle; General Random Variable

  14. Two-stage hepatectomy: who will not jump over the second hurdle?

    Science.gov (United States)

    Turrini, O; Ewald, J; Viret, F; Sarran, A; Goncalves, A; Delpero, J-R

    2012-03-01

    Two-stage hepatectomy uses compensatory liver regeneration after a first noncurative hepatectomy to enable a second, curative resection in patients with bilobar colorectal liver metastasis (CLM). Our aim was to determine the predictive factors of failure of two-stage hepatectomy. Between 2000 and 2010, 48 patients with irresectable CLM were eligible for two-stage hepatectomy. The planned strategy was (a) clearing of the left hepatic lobe (first hepatectomy), (b) right portal vein embolisation and (c) right hepatectomy (second hepatectomy). Six patients had occult CLM (n = 5) or extra-hepatic disease (n = 1) discovered during the first hepatectomy. Thus, 42 patients completed the first hepatectomy and underwent portal vein embolisation in preparation for the second hepatectomy. Eight patients did not undergo a second hepatectomy due to disease progression. Upon univariate analysis, two factors were identified that precluded patients from having the second hepatectomy: combined resection of a primary tumour during the first hepatectomy (p = 0.01) and administration of chemotherapy between the two hepatectomies (p = 0.03). On multivariate analysis, only combined resection of the primary colorectal cancer during the first hepatectomy was independently associated with failure to complete the two-stage strategy (p = 0.04). Due to the small number of patients and the absence of equivalent conclusions in other studies, we cannot recommend performance of an isolated colorectal resection prior to chemotherapy. However, resection of an asymptomatic primary tumour before chemotherapy should not be considered an outdated procedure. Copyright © 2011 Elsevier Ltd. All rights reserved.

  15. Labor Union Effects on Innovation and Commercialization Productivity: An Integrated Propensity Score Matching and Two-Stage Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Dongphil Chun

    2015-04-01

    Research and development (R&D) is a critical factor in sustaining a firm's competitive advantage. Accurate measurement of R&D productivity and investigation of its influencing factors are of value for R&D productivity improvements. This study is divided into two sections. The first section outlines the innovation and commercialization stages of firm-level R&D activities. This section analyzes the productivity of each stage using an integrated propensity score matching (PSM) and two-stage data envelopment analysis (DEA) model to solve the selection-bias problem. Second, this study conducts a comparative analysis of productivity at each stage among subgroups categorized as labor-unionized or non-labor-unionized. We used Korea Innovation Survey (KIS) data for a sample of 400 Korean manufacturers. The key findings of this study include: (1) firm innovation and commercialization productivity are balanced, with relatively low innovation productivity; and (2) labor unions have a positive effect on commercialization productivity. Moreover, labor unions are an influential factor in determining manufacturing firms' commercialization productivity.
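The matching step of the PSM stage can be sketched as greedy nearest-neighbour matching on precomputed propensity scores (the scores themselves would come from, e.g., a logistic regression of unionization on firm covariates; the DEA stage is not shown). This is our illustrative sketch, not the study's implementation.

```python
def nn_match(treated, controls):
    """Greedy 1-nearest-neighbour matching without replacement on
    precomputed propensity scores.

    treated, controls: lists of (unit_id, score). Each treated unit is
    matched to the closest remaining control; returns a list of
    (treated_id, control_id) pairs.
    """
    pool = list(controls)
    pairs = []
    for tid, tscore in treated:
        if not pool:
            break
        j = min(range(len(pool)), key=lambda i: abs(pool[i][1] - tscore))
        cid, _ = pool.pop(j)
        pairs.append((tid, cid))
    return pairs

# Hypothetical firms: unionized (treated) matched to non-unionized controls.
matched = nn_match([("t1", 0.8), ("t2", 0.3)],
                   [("c1", 0.75), ("c2", 0.29), ("c3", 0.5)])
```

Matching on the score rather than on raw covariates is what lets the subsequent stage-wise DEA comparison be read as free of the selection bias the paper worries about.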

  16. Managing uncertainty - a qualitative study of surgeons' decision-making for one-stage and two-stage revision surgery for prosthetic hip joint infection.

    Science.gov (United States)

    Moore, Andrew J; Blom, Ashley W; Whitehouse, Michael R; Gooberman-Hill, Rachael

    2017-04-12

    Approximately 88,000 primary hip replacements are performed in England and Wales each year. Around 1% go on to develop deep prosthetic joint infection. The best treatment option, one-stage or two-stage revision arthroplasty, remains unclear. Our aims were to characterise consultant orthopaedic surgeons' decisions about performing either one-stage or two-stage revision surgery for patients with deep prosthetic joint infection (PJI) after hip arthroplasty, and to identify whether a randomised trial comparing one-stage with two-stage revision would be feasible. Semi-structured interviews were conducted with 12 consultant surgeons who perform revision surgery for PJI after hip arthroplasty at 5 high-volume National Health Service (NHS) orthopaedic departments in England and Wales. Surgeons were interviewed before the development of a multicentre randomised controlled trial. Data were analysed using a thematic approach. There is no single standardised surgical intervention for the treatment of PJI. Surgeons balance multiple factors when choosing a surgical strategy, including patient-related factors, their own knowledge and expertise, the available infrastructure and the infecting organism. Surgeons questioned whether it was appropriate that two-stage revision remained regarded as the best treatment, and some surgeons' willingness to consider more one-stage revisions had increased over recent years, influenced by growing evidence of equivalence between the surgical techniques and by local observations of successful one-stage revisions. The use of custom-made articulating spacers was a practice that enabled uncertainty to be managed in the absence of definitive evidence about the superiority of one surgical technique over the other. Surgeons highlighted the need for research evidence to inform practice and thought that a randomised trial to compare treatments was needed.
Most surgeons thought that patients who they treated would be eligible for trial participation in instances

  17. Use of Electronic Health Records in Residential Care Communities

    Science.gov (United States)

    ... The 2010 NSRCF used a stratified two-stage probability sampling design. The first stage was the selection ... 3,605 residential care communities were sampled with probability proportional to size. Interviews were completed with 2, ...

  18. Fixation Probability in a Two-Locus Model by the Ancestral Recombination–Selection Graph

    Science.gov (United States)

    Lessard, Sabin; Kermany, Amir R.

    2012-01-01

    We use the ancestral influence graph (AIG) for a two-locus, two-allele selection model in the limit of a large population size to obtain an analytic approximation for the probability of ultimate fixation of a single mutant allele A. We assume that this new mutant is introduced at a given locus into a finite population in which a previous mutant allele B is already segregating with a wild type at another linked locus. We deduce that the fixation probability increases as the recombination rate increases if allele A is either in positive epistatic interaction with B and allele B is beneficial or in no epistatic interaction with B and then allele A itself is beneficial. This holds at least as long as the recombination fraction and the selection intensity are small enough and the population size is large enough. In particular this confirms the Hill–Robertson effect, which predicts that recombination renders more likely the ultimate fixation of beneficial mutants at different loci in a population in the presence of random genetic drift even in the absence of epistasis. More importantly, we show that this is true from weak negative epistasis to positive epistasis, at least under weak selection. In the case of deleterious mutants, the fixation probability decreases as the recombination rate increases. This supports Muller’s ratchet mechanism to explain the accumulation of deleterious mutants in a population lacking recombination. PMID:22095080

  19. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two-stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds the optimal solution satisfying the power balance constraints, generation and transmission inequality constraints, and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two-stage method obtains an average speedup ratio of 10.64 compared to the classical LP-based method
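The first (aggregated) stage — classical economic dispatch — can be sketched as equal-incremental-cost dispatch with quadratic generator costs, solved by bisection on the system marginal price lambda. This is a generic textbook sketch, not the paper's code, and it omits the second-stage LP refinement.

```python
def dispatch(units, demand, tol=1e-6):
    """Classical economic dispatch: find lambda such that total output
    meets demand, with each unit at equal incremental cost.

    units: list of (b, c, pmin, pmax), cost C(P) = a + b*P + c*P**2,
    so the marginal cost is b + 2*c*P. Demand is assumed feasible.
    """
    def output(lam):
        points = []
        for b, c, pmin, pmax in units:
            p = (lam - b) / (2.0 * c)          # equal-lambda output
            points.append(max(pmin, min(pmax, p)))  # respect limits
        return points

    lo = min(b for b, _, _, _ in units)                   # all units at pmin
    hi = max(b + 2 * c * pmax for b, c, _, pmax in units)  # all at pmax
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(output(mid)) < demand:
            lo = mid
        else:
            hi = mid
    return output(0.5 * (lo + hi))
```

For two units with marginal costs 2 + 0.02P and 3 + 0.02P serving 100 MW, the equal-lambda solution is lambda = 3.5, i.e. 75 MW and 25 MW.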

  20. Diagnosis Of Persistent Infection In Prosthetic Two-Stage Exchange: PCR analysis of Sonication fluid From Bone Cement Spacers.

    Science.gov (United States)

    Mariaux, Sandrine; Tafin, Ulrika Furustrand; Borens, Olivier

    2017-01-01

    Introduction: When treating periprosthetic joint infections with a two-stage procedure, antibiotic-impregnated spacers are used in the interval between removal of the prosthesis and reimplantation. In our experience, cultures of sonicated spacers are most often negative. The objective of our study was to investigate whether PCR analysis would improve the detection of bacteria in the spacer sonication fluid. Methods: A prospective monocentric study was performed from September 2014 to January 2016. Inclusion criteria were a two-stage procedure for prosthetic infection and the patient's agreement to participate in the study. Besides tissue samples and sonication, broad-range bacterial PCRs, specific S. aureus PCRs and Unyvero multiplex PCRs were performed on the spacer sonication fluid. Results: 30 patients were identified (15 hip, 14 knee and 1 ankle replacements). At reimplantation, cultures of tissue samples and spacer sonication fluid were all negative. Broad-range PCRs were all negative. Specific S. aureus PCRs were positive in 5 cases. Two persistent infections and four recurrences were observed; in three of the recurrences the bacteria differed from those of the initial infection. Conclusion: None of the three types of PCR detected bacteria in the culture-negative spacer sonication fluid. In our study, PCR did not improve bacterial detection and did not help to predict whether the patient would present a persistent or recurrent infection. Prosthetic two-stage exchange with a short interval and an antibiotic-impregnated spacer is an efficient treatment to eradicate infection, as both culture- and molecular-based methods were unable to detect bacteria in spacer sonication fluid after reimplantation.

  1. High throughput nonparametric probability density estimation.

    Science.gov (United States)

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high-throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample-size-invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function, which identifies atypical fluctuations. This criterion resists under- and overfitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high-throughput statistical inference.

  2. Two-stage exchange knee arthroplasty: does resistance of the infecting organism influence the outcome?

    Science.gov (United States)

    Kurd, Mark F; Ghanem, Elie; Steinbrecher, Jill; Parvizi, Javad

    2010-08-01

    Periprosthetic joint infection after TKA is a challenging complication. Two-stage exchange arthroplasty is the accepted standard of care, but reported failure rates are increasing. It has been suggested this is due to the increased prevalence of methicillin-resistant infections. We asked the following questions: (1) What is the reinfection rate after two-stage exchange arthroplasty? (2) Which risk factors predict failure? (3) Which variables are associated with acquiring a resistant organism periprosthetic joint infection? This was a case-control study of 102 patients with infected TKA who underwent a two-stage exchange arthroplasty. Ninety-six patients were followed for a minimum of 2 years (mean, 34.5 months; range, 24-90.1 months). Cases were defined as failures of two-stage exchange arthroplasty. Two-stage exchange arthroplasty was successful in controlling the infection in 70 patients (73%). Patients who failed two-stage exchange arthroplasty were 3.37 times more likely to have been originally infected with a methicillin-resistant organism. Older age, higher body mass index, and history of thyroid disease were predisposing factors to infection with a methicillin-resistant organism. Innovative interventions are needed to improve the effectiveness of two-stage exchange arthroplasty for TKA infection with a methicillin-resistant organism as current treatment protocols may not be adequate for control of these virulent pathogens. Level IV, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.

  3. Unification of field theory and maximum entropy methods for learning probability densities

    OpenAIRE

    Kinney, Justin B.

    2014-01-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy de...

  4. Analysis of U and Pu resin bead samples with a single stage mass spectrometer

    International Nuclear Information System (INIS)

    Smith, D.H.; Walker, R.L.; Bertram, L.K.; Carter, J.A.

    1979-01-01

    Resin bead sampling enables the shipment of nanogram U and Pu quantities for analysis. Application of this sampling technique to safeguards was investigated with a single-stage mass spectrometer. Standards gave results in good agreement with NBS certified values. External precisions of ±0.5% were obtained on isotopic ratios of approx. 0.01; precisions on quantitative measurements are ±1.0%.

  5. A Two-Stage Maximum Entropy Prior of Location Parameter with a Stochastic Multivariate Interval Constraint and Its Properties

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2016-05-01

    This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon's entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN), which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model, and demonstrate the scope of its applicability.

  6. Kinetics of two-stage fermentation process for the production of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Kaushik [Department of Chemical Engineering, G.H. Patel College of Engineering and Technology, Vallabh Vidyanagar 388 120, Gujarat (India); Muthukumar, Manoj; Kumar, Anish; Das, Debabrata [Fermentation Technology Laboratory, Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India)

    2008-02-15

    The two-stage process described in the present work is a combination of dark and photo-fermentation in a sequential batch mode. In the first stage, glucose is fermented to acetate, CO2 and H2 in an anaerobic dark fermentation by Enterobacter cloacae DM11. This is followed by a second stage in which acetate is converted to H2 and CO2 in a photobioreactor by the photosynthetic bacterium Rhodobacter sphaeroides O.U. 001. The yield of hydrogen in the first stage was about 3.31 mol H2 (mol glucose)^-1 (approximately 82% of theoretical) and that in the second stage was about 1.5-1.72 mol H2 (mol acetic acid)^-1 (approximately 37-43% of theoretical). The overall yield of hydrogen in the two-stage process, considering glucose as the primary substrate, was found to be higher than in a single-stage process. A Monod model, with incorporation of a substrate inhibition term, has been used to determine the growth kinetic parameters for the first stage. The values of the maximum specific growth rate (mu_max) and the saturation constant (K_s) were 0.398 h^-1 and 5.509 g l^-1, respectively, using glucose as substrate. The experimental substrate and biomass concentration profiles agree well with the kinetic model predictions. A model based on the logistic equation has been developed to describe the growth of R. sphaeroides O.U. 001 in the second stage. The modified Gompertz equation was applied to estimate the hydrogen production potential, rate and lag-phase time in a batch process for various initial concentrations of glucose, based on the cumulative hydrogen production curves. Both the curve fitting and statistical analysis showed that the equation was suitable to describe the progress of cumulative hydrogen production. (author)
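The modified Gompertz equation used for cumulative hydrogen production has the standard form H(t) = P·exp(−exp(Rm·e/P·(λ−t) + 1)), with P the production potential, Rm the maximum production rate, and λ the lag-phase time. A minimal sketch (the parameter values below are ours, not fitted values from the study):

```python
import math

def gompertz_h2(t, P, Rm, lam):
    """Modified Gompertz cumulative hydrogen production H(t).

    P: production potential (e.g. ml), Rm: maximum production rate,
    lam: lag-phase time, all in units consistent with t.
    """
    return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))
```

The curve is sigmoidal: H(λ) = P·exp(−e) ≈ 0.066·P at the end of the lag phase, and H(t) approaches the plateau P as t grows, which is why cumulative-production curves identify P, Rm and λ well.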

  7. Two-stage energy storage equalization system for lithium-ion battery pack

    Science.gov (United States)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To address it, a two-stage energy storage equalization system is proposed, comprising a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, built on bidirectional active equalization theory, with the objective of keeping lithium-ion battery pack voltages and the cell voltages inside packs consistent, as measured by the range method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. The equalization time was 0.5 ms, 33.3 percent shorter than with a DC-DC converter alone. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity across battery packs and the cells inside packs, while the efficiency of energy storage is significantly improved.
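The range-based consistency objective can be sketched as a simple dispersion check against the 2 percent target. This is our schematic reading of the range method, not the authors' controller.

```python
def range_dispersion(voltages):
    """Voltage dispersion by the range method: (max - min) / mean."""
    return (max(voltages) - min(voltages)) / (sum(voltages) / len(voltages))

def needs_equalization(voltages, threshold=0.02):
    """True when pack or cell voltage dispersion exceeds the 2% target."""
    return range_dispersion(voltages) > threshold
```

A pack reading [3.30, 3.31, 3.32] V has dispersion of about 0.6% and is left alone, while [3.0, 3.3, 3.4] V (about 12%) would trigger the equalization stage.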

  8. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

    The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade, and it continues. The purpose of this paper is to direct the focus back to how the power is processed, and not so much to the number of stages or the amount of power processed...

  9. Gardner's Two Children Problems and Variations: Puzzles with Conditional Probability and Sample Spaces

    Science.gov (United States)

    Taylor, Wendy; Stacey, Kaye

    2014-01-01

    This article presents "The Two Children Problem," published by Martin Gardner, who wrote a famous and widely-read math puzzle column in the magazine "Scientific American," and a problem presented by puzzler Gary Foshee. This paper explains the paradox of Problems 2 and 3 and many other variations of the theme. Then the authors…
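The paradoxes can be checked by brute-force enumeration of the sample space. The sketch below reproduces Gardner's classic answer (1/3), the "older child is a boy" variant (1/2), and Foshee's boy-born-on-Tuesday variation (13/27); the helper name is ours.

```python
from itertools import product

def conditional(sample_space, event, condition):
    """P(event | condition) by direct counting in an equiprobable space."""
    cond = [s for s in sample_space if condition(s)]
    return sum(1 for s in cond if event(s)) / len(cond)

families = list(product("BG", repeat=2))  # ('B','B'), ('B','G'), ...

# "At least one child is a boy": P(both boys) = 1/3, not 1/2.
p_classic = conditional(families, lambda f: f == ("B", "B"),
                        lambda f: "B" in f)

# "The older child is a boy": the condition changes, and the answer is 1/2.
p_older = conditional(families, lambda f: f == ("B", "B"),
                      lambda f: f[0] == "B")

# Foshee's variation: "one is a boy born on a Tuesday" (day index 2).
kids = list(product("BG", range(7)))
pairs = list(product(kids, repeat=2))
p_tuesday = conditional(pairs,
                        lambda f: f[0][0] == "B" and f[1][0] == "B",
                        lambda f: ("B", 2) in f)
```

Enumerating the sample space makes the paradox transparent: each phrasing selects a different conditioning event, so the "same" question yields 1/3, 1/2, or 13/27.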

  10. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    Science.gov (United States)

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

    Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual outcome is calculated from a regression of the outcome variable on covariates, and then the relationship between the adjusted outcome and the SNP is evaluated by a simple linear regression of the adjusted outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation (r²) between the SNP and the covariate. For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP and covariates are uncorrelated, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
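The attenuation can be reproduced with a small simulation: with unit-variance predictors and SNP-covariate correlation r, the two-stage slope tends to beta·(1 − r²), while MLR recovers beta. Everything below is our illustrative sketch (variable names and parameter values are ours, not the paper's).

```python
import random

def slope(xs, ys):
    """OLS slope of y on a single predictor x (with intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

def mlr_beta(g, c, y):
    """Slope on g in a regression of y on both g and c (centered OLS)."""
    n = len(g)
    mg, mc, my = sum(g) / n, sum(c) / n, sum(y) / n
    gg = sum((a - mg) ** 2 for a in g)
    cc = sum((a - mc) ** 2 for a in c)
    gc = sum((a - mg) * (b - mc) for a, b in zip(g, c))
    gy = sum((a - mg) * (b - my) for a, b in zip(g, y))
    cy = sum((a - mc) * (b - my) for a, b in zip(c, y))
    return (gy * cc - cy * gc) / (gg * cc - gc * gc)

rng = random.Random(42)
n, r, beta, gamma = 20000, 0.5, 1.0, 1.0
g = [rng.gauss(0, 1) for _ in range(n)]                          # "SNP"
c = [r * gi + (1 - r * r) ** 0.5 * rng.gauss(0, 1) for gi in g]  # covariate
y = [beta * gi + gamma * ci + rng.gauss(0, 1)
     for gi, ci in zip(g, c)]

# Two-stage: regress y on the covariate, then the residual on the SNP.
b_c = slope(c, y)
resid = [yi - b_c * ci for yi, ci in zip(y, c)]
beta_two_stage = slope(g, resid)   # tends to beta * (1 - r**2) = 0.75
```

With r = 0.5 (r² = 0.25), the two-stage estimate is attenuated by about 25% toward the null, matching the paper's r²-attenuation rule, while `mlr_beta(g, c, y)` stays near the true beta = 1.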

  11. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    Science.gov (United States)

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  12. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice among the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
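The two probability designs mentioned can be sketched with the standard library; the population names below are invented for illustration.

```python
import random

def simple_random_sample(population, n, seed=0):
    """Probability sampling: every subset of size n is equally likely."""
    return random.Random(seed).sample(population, n)

def stratified_sample(strata, n_per_stratum, seed=0):
    """Draw a simple random sample of fixed size within each stratum."""
    rng = random.Random(seed)
    return {name: rng.sample(units, n_per_stratum)
            for name, units in strata.items()}

# Hypothetical sampling frame split into two strata.
clinics = {"urban": [f"u{i}" for i in range(50)],
           "rural": [f"r{i}" for i in range(50)]}
srs = simple_random_sample(clinics["urban"] + clinics["rural"], 10)
strat = stratified_sample(clinics, 5)
```

A simple random sample may by chance over-represent one stratum; the stratified design guarantees 5 urban and 5 rural clinics, which is why stratification is preferred when subgroup estimates matter.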

  13. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice among the population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  14. Towards a Categorical Account of Conditional Probability

    Directory of Open Access Journals (Sweden)

    Robert Furber

    2015-11-01

    Full Text Available This paper presents a categorical account of conditional probability, covering both the classical and the quantum case. Classical conditional probabilities are expressed as a certain "triangle-fill-in" condition, connecting marginal and joint probabilities, in the Kleisli category of the distribution monad. The conditional probabilities are induced by a map together with a predicate (the condition). The latter is a predicate in the logic of effect modules on this Kleisli category. This same approach can be transferred to the category of C*-algebras (with positive unital maps), whose predicate logic is also expressed in terms of effect modules. Conditional probabilities can again be expressed via a triangle-fill-in property. In the literature, there are several proposals for what quantum conditional probability should be, and also there are extra difficulties not present in the classical case. At this stage, we only describe quantum systems with classical parametrization.
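In the classical case, the relationship between joint, marginal, and conditional probabilities that the triangle-fill-in condition abstracts is the familiar one:

```latex
P(A \mid B) = \frac{P(A \cap B)}{P(B)}, \qquad P(B) > 0,
\qquad\text{equivalently}\qquad
P(A \cap B) = P(A \mid B)\, P(B).
```

The categorical account recasts this factorisation of a joint probability through a marginal as a diagram-completion property in the Kleisli category.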

  15. Experimental study on an innovative multifunction heat pipe type heat recovery two-stage sorption refrigeration system

    International Nuclear Information System (INIS)

    Li, T.X.; Wang, R.Z.; Wang, L.W.; Lu, Z.S.

    2008-01-01

    An innovative multifunction heat pipe type sorption refrigeration system is designed, in which a two-stage sorption thermodynamic cycle based on two heat recovery processes is employed to reduce the driving heat source temperature, and a composite sorbent of CaCl2 and activated carbon is used to improve the mass and heat transfer performance. For this test unit, the heating, cooling and heat recovery processes between the two reactive beds are performed by multifunction heat pipes. The aim of this paper is to investigate the cycle characteristics of the two-stage sorption refrigeration system with heat recovery processes. The two sub-cycles of a two-stage cycle have different sorption platforms even though the adsorption and desorption temperatures are equivalent. The experimental results showed that the pressure evolutions of the two beds are nearly equivalent during the first stage, and the desorption pressure during the second stage is much higher than that in the first stage, while the desorption temperatures are the same during the two operation stages. In comparison with the conventional two-stage cycle, the two-stage cycle with heat recovery processes can reduce the heating load for the desorber and the cooling load for the adsorber; the coefficient of performance (COP) is improved by more than 23% when both cycles have the same regeneration temperature of 103 °C and cooling water temperature of 30 °C. The advanced two-stage cycle provides an effective method for applying sorption refrigeration technology with a low-grade temperature heat source or renewable energy.

  16. Comparison of sample types and diagnostic methods for in vivo detection of Mycoplasma hyopneumoniae during early stages of infection.

    Science.gov (United States)

    Pieters, Maria; Daniels, Jason; Rovira, Albert

    2017-05-01

    Detection of Mycoplasma hyopneumoniae in live pigs during the early stages of infection is critical for timely implementation of control measures, but is technically challenging. This study compared the sensitivity of various sample types and diagnostic methods for detection of M. hyopneumoniae during the first 28 days after experimental exposure. Twenty-one 8-week-old pigs were intra-tracheally inoculated on day 0 with M. hyopneumoniae strain 232. Two age-matched pigs were mock inoculated and maintained as negative controls. On post-inoculation days 0, 2, 5, 9, 14, 21 and 28, nasal swabs, laryngeal swabs, tracheobronchial lavage fluid, and blood samples were obtained from each pig, and oral fluid samples were obtained from each room in which pigs were housed. Serum samples were assayed by ELISA for IgM and IgG M. hyopneumoniae antibodies and C-reactive protein. All other samples were tested for M. hyopneumoniae DNA by species-specific real-time PCR. Serum antibodies (IgG) to M. hyopneumoniae were detected in challenge-inoculated pigs on days 21 and 28. M. hyopneumoniae DNA was detected in samples from experimentally inoculated pigs beginning at 5 days post-inoculation. Laryngeal swabs at all samplings beginning on day 5 showed the highest sensitivity for M. hyopneumoniae DNA detection, while oral fluids showed the lowest sensitivity. Although laryngeal swabs are not considered the typical M. hyopneumoniae diagnostic sample, under the conditions of this study laryngeal swabs tested by PCR proved to be a practical and reliable diagnostic sample for M. hyopneumoniae detection in vivo during early-stage infection. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Genetic variants at 1p11.2 and breast cancer risk: a two-stage study in Chinese women.

    Directory of Open Access Journals (Sweden)

    Yue Jiang

    Full Text Available BACKGROUND: Genome-wide association studies (GWAS) have identified several breast cancer susceptibility loci, and one genetic variant, rs11249433, at 1p11.2 was reported to be associated with breast cancer in European populations. To explore the genetic variants in this region associated with breast cancer in Chinese women, we conducted a two-stage fine-mapping study with a total of 1792 breast cancer cases and 1867 controls. METHODOLOGY/PRINCIPAL FINDINGS: Seven single nucleotide polymorphisms (SNPs), including rs11249433, in a 277 kb region at 1p11.2 were selected, and genotyping was performed using the TaqMan® OpenArray™ Genotyping System for stage 1 samples (878 cases and 900 controls). In stage 2 (914 cases and 967 controls), three SNPs (rs2580520, rs4844616 and rs11249433) were further selected and genotyped for validation. The results showed that one SNP (rs2580520), located at a predicted enhancer region of SRGAP2, was consistently associated with a significantly increased risk of breast cancer in a recessive genetic model [odds ratio (OR) = 1.66, 95% confidence interval (CI) = 1.16-2.36 for stage 2 samples; OR = 1.51, 95% CI = 1.16-1.97 for combined samples, respectively]. However, no significant association was observed between rs11249433 and breast cancer risk in this Chinese population (dominant genetic model in combined samples: OR = 1.20, 95% CI = 0.92-1.57). CONCLUSIONS/SIGNIFICANCE: Genotypes of rs2580520 at 1p11.2 suggest that Chinese women may have different breast cancer susceptibility loci, which may contribute to the development of breast cancer in this population.

  18. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers or flipping a coin); and 2) non-probability sampling – based on the researcher's choice of a population that is accessible and available. Some of the non-probability sampling methods are purposive sampling, convenience sampling, and quota sampling. Random sampling (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to state the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when a convenience sample was actually used). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  19. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    Energy Technology Data Exchange (ETDEWEB)

    Garza, J. [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States); Millwater, H., E-mail: harry.millwater@utsa.edu [University of Texas at San Antonio, Mechanical Engineering, 1 UTSA circle, EB 3.04.50, San Antonio, TX 78249 (United States)

    2012-04-15

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing, non-intrusive manner, thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections, have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.
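A minimal sketch of the sample-reuse idea, under assumptions not taken from the paper: a hypothetical exponential POD curve pod(a) = 1 - exp(-a/λ) and an exponential crack-size distribution. The same Monte Carlo samples estimate both the POF and, via central differences with common random numbers, its sensitivity to the POD parameter λ, so no extra model evaluations are needed:

```python
import math
import random

def pod(a, lam):
    """Illustrative parametric probability-of-detection curve (assumed form)."""
    return 1.0 - math.exp(-a / lam)

def pof_estimate(samples, lam, a_crit=5.0):
    """POF = P(crack size exceeds a_crit AND the inspection missed it)."""
    fails = sum(1 for a, u in samples if a > a_crit and u > pod(a, lam))
    return fails / len(samples)

random.seed(0)
# Reusable samples: (crack size, uniform draw deciding detection).
samples = [(random.expovariate(1.0 / 3.0), random.random()) for _ in range(200_000)]

lam = 4.0
pof = pof_estimate(samples, lam)
# Sensitivity dPOF/dlam by central differences on the SAME samples
# (common random numbers), reusing the draws used for the POF itself.
h = 0.01
dpof_dlam = (pof_estimate(samples, lam + h) - pof_estimate(samples, lam - h)) / (2 * h)
print(round(pof, 4), dpof_dlam > 0)
```

A larger λ weakens detection in this assumed POD form, so the estimated sensitivity comes out positive, matching the intuition that a worse inspection raises the POF.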

  20. Sensitivity of probability-of-failure estimates with respect to probability of detection curve parameters

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2012-01-01

    A methodology has been developed and demonstrated that can be used to compute the sensitivity of the probability-of-failure (POF) with respect to the parameters of inspection processes that are simulated using probability of detection (POD) curves. The formulation is such that the probabilistic sensitivities can be obtained at negligible cost using sampling methods by reusing the samples used to compute the POF. As a result, the methodology can be implemented for negligible cost in a post-processing non-intrusive manner thereby facilitating implementation with existing or commercial codes. The formulation is generic and not limited to any specific random variables, fracture mechanics formulation, or any specific POD curve as long as the POD is modeled parametrically. Sensitivity estimates for the cases of different POD curves at multiple inspections, and the same POD curves at multiple inspections have been derived. Several numerical examples are presented and show excellent agreement with finite difference estimates with significant computational savings. - Highlights: ► Sensitivity of the probability-of-failure with respect to the probability-of-detection curve. ► The sensitivities are computed with negligible cost using Monte Carlo sampling. ► The change in the POF due to a change in the POD curve parameters can be easily estimated.

  1. Reliability and validity of the Modified Erikson Psychosocial Stage Inventory in diverse samples.

    Science.gov (United States)

    Leidy, N K; Darling-Fisher, C S

    1995-04-01

    The Modified Erikson Psychosocial Stage Inventory (MEPSI) is a relatively simple survey measure designed to assess the strength of psychosocial attributes that arise from progression through Erikson's eight stages of development. The purpose of this study was to employ secondary analysis to evaluate the internal-consistency reliability and construct validity of the MEPSI across four diverse samples: healthy young adults, hemophilic men, healthy older adults, and older adults with chronic obstructive pulmonary disease. Special attention was given to the performance of the measure across gender, with exploratory analyses examining possible age cohort and health status effects. Internal-consistency estimates for the aggregate measure were high, whereas subscale reliability levels varied across age groups. Construct validity was supported across samples. Gender, cohort, and health effects offered interesting psychometric and theoretical insights and direction for further research. Findings indicated that the MEPSI might be a useful instrument for operationalizing and testing Eriksonian developmental theory in adults.

  2. One-Stage and Two-Stage Schemes of High Performance Synchronous PWM with Smooth Pulses-Ratio Changing

    DEFF Research Database (Denmark)

    Oleschuk, V.; Blaabjerg, Frede

    2002-01-01

    This paper presents a detailed description of one-stage and two-stage schemes of a novel method of synchronous pulsewidth modulation (PWM) for voltage-source inverters for ac drive applications. The proposed control functions provide accurate realization of different versions of voltage space vector modulation with synchronization of the inverter voltage waveform and with smooth pulse-ratio changing. Voltage spectra do not contain even harmonics or sub-harmonics (combined harmonics) over the whole control range, including the zone of overmodulation. Examples of determination of the basic control...

  3. One-stage exchange with antibacterial hydrogel coated implants provides similar results to two-stage revision, without the coating, for the treatment of peri-prosthetic infection.

    Science.gov (United States)

    Capuano, Nicola; Logoluso, Nicola; Gallazzi, Enrico; Drago, Lorenzo; Romanò, Carlo Luca

    2018-03-16

    The aim of this study was to verify the hypothesis that a one-stage exchange procedure, performed with an antibiotic-loaded, fast-resorbable hydrogel coating, provides an infection recurrence rate similar to that of a two-stage procedure without the coating, in patients affected by peri-prosthetic joint infection (PJI). In this two-center case-control study, 22 patients treated with a one-stage procedure, using implants coated with an antibiotic-loaded hydrogel [defensive antibacterial coating (DAC)], were compared with 22 retrospective matched controls treated with a two-stage revision procedure without the coating. At a mean follow-up of 29.3 ± 5.0 months, two patients (9.1%) in the DAC group showed an infection recurrence, compared to three patients (13.6%) in the two-stage group. Clinical scores were similar between groups, while average hospital stay and antibiotic treatment duration were significantly reduced after one-stage compared to two-stage revision (18.9 ± 2.9 versus 35.8 ± 3.4 days and 23.5 ± 3.3 versus 53.7 ± 5.6 days, respectively). Although based on a relatively limited series of patients, our data show a similar infection recurrence rate after one-stage exchange with DAC-coated implants compared to two-stage revision without the coating, with reduced overall hospitalization time and antibiotic treatment duration. These findings warrant further studies on possible applications of antibacterial coating technologies to treat implant-related infections. Level of evidence: III.

  4. Importance Sampling Simulation of Population Overflow in Two-node Tandem Networks

    NARCIS (Netherlands)

    Nicola, V.F.; Zaburnenko, T.S.; Baier, C; Chiola, G.; Smirni, E.

    2005-01-01

    In this paper we consider the application of importance sampling in simulations of Markovian tandem networks in order to estimate the probability of rare events, such as network population overflow. We propose a heuristic methodology to obtain a good approximation to the 'optimal' state-dependent
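The rare-event setting above can be illustrated with a toy importance-sampling estimator (a standard normal tail probability, not the tandem-network model itself): sampling from a distribution shifted toward the rare region and reweighting each hit by the likelihood ratio.

```python
import math
import random

# Plain Monte Carlo almost never sees the event {X > 4} for X ~ N(0, 1)
# (true probability ~3.17e-5); importance sampling shifts the sampling
# distribution into the rare region and corrects with likelihood ratios.
random.seed(1)
N = 100_000
shift = 4.0  # sample from N(4, 1) instead of N(0, 1)

total = 0.0
for _ in range(N):
    y = random.gauss(shift, 1.0)
    if y > 4.0:
        # Likelihood ratio  N(0,1)-density / N(4,1)-density  at y.
        total += math.exp(-shift * y + 0.5 * shift * shift)
p_est = total / N

true_p = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # exact standard normal tail
print(p_est, true_p)
```

With the change of measure, roughly half the draws hit the event and the estimator attains a small relative error, whereas crude Monte Carlo with the same budget would typically record a handful of hits at best. The heuristic in the abstract plays the analogous role of choosing a good (state-dependent) change of measure for the network.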

  5. A two-phase inspection model for a single component system with three-stage degradation

    International Nuclear Information System (INIS)

    Wang, Huiying; Wang, Wenbin; Peng, Rui

    2017-01-01

    This paper presents a two-phase inspection schedule and an age-based replacement policy for a single plant item contingent on a three-stage degradation process. The two-phase inspection schedule can be observed in practice. The three stages are defined as the normal working stage, the low-grade defective stage and the critical defective stage. When an inspection detects that an item is in the low-grade defective stage, the preventive replacement action may be delayed if the time to the age-based replacement is less than or equal to a threshold level. However, if it is above this threshold level, the item is replaced immediately. If the item is found in the critical defective stage, it is replaced immediately. A hybrid bee colony algorithm is developed to find the optimal solution for the proposed model, which has multiple decision variables. A numerical example is conducted to show the efficiency of this algorithm, and simulations are conducted to verify the correctness of the model. - Highlights: • A two-phase inspection model is studied. • The failure process has three stages. • Delayed replacement is considered.
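The replacement policy described above can be sketched as a simple decision rule applied at each inspection (function and stage names are illustrative, not the paper's notation):

```python
def replacement_action(stage, time_to_age_replacement, delay_threshold):
    """Decision at an inspection under the three-stage policy sketched above.

    stage: 'normal', 'low_grade', or 'critical' (detected degradation stage)
    time_to_age_replacement: time remaining until the age-based replacement
    delay_threshold: maximum remaining time for which replacement is deferred
    """
    if stage == "critical":
        return "replace immediately"
    if stage == "low_grade":
        if time_to_age_replacement <= delay_threshold:
            # Close enough to the scheduled replacement: defer to it.
            return "delay until age-based replacement"
        return "replace immediately"
    return "continue operating"

print(replacement_action("low_grade", 2.0, 5.0))  # → delay until age-based replacement
```

The optimisation problem the bee colony algorithm solves is then the choice of the inspection intervals, the age-based replacement time, and this delay threshold.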

  6. Two sampling techniques for game meat

    Directory of Open Access Journals (Sweden)

    Maretha van der Merwe

    2013-03-01

    Full Text Available A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g and square centimetres (cm2 for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12 that statistically proved the two measuring units correlated. The two sampling techniques were conducted on the same game carcasses (n = 13 and analyses performed for aerobic plate count (APC, Escherichia coli and Staphylococcus aureus, for both techniques. A more representative result was obtained by swabbing and no damage was caused to the carcass. Conversely, the excision technique yielded fewer organisms and caused minor damage to the carcass. The recovery ratio from the sampling technique improved 5.4 times for APC, 108.0 times for E. coli and 3.4 times for S. aureus over the results obtained from the excision technique. It was concluded that the sampling methods of excision and swabbing can be used to obtain bacterial profiles from both export and local carcasses and could be used to indicate whether game carcasses intended for the local market are possibly on par with game carcasses intended for the export market and therefore safe for human consumption.

  7. Two sampling techniques for game meat.

    Science.gov (United States)

    van der Merwe, Maretha; Jooste, Piet J; Hoffman, Louw C; Calitz, Frikkie J

    2013-03-20

    A study was conducted to compare the excision sampling technique used by the export market and the sampling technique preferred by European countries, namely the biotrace cattle and swine test. The measuring unit for the excision sampling was grams (g) and square centimetres (cm2) for the swabbing technique. The two techniques were compared after a pilot test was conducted on spiked approved beef carcasses (n = 12) that statistically proved the two measuring units correlated. The two sampling techniques were conducted on the same game carcasses (n = 13) and analyses performed for aerobic plate count (APC), Escherichia coli and Staphylococcus aureus, for both techniques. A more representative result was obtained by swabbing and no damage was caused to the carcass. Conversely, the excision technique yielded fewer organisms and caused minor damage to the carcass. The recovery ratio from the sampling technique improved 5.4 times for APC, 108.0 times for E. coli and 3.4 times for S. aureus over the results obtained from the excision technique. It was concluded that the sampling methods of excision and swabbing can be used to obtain bacterial profiles from both export and local carcasses and could be used to indicate whether game carcasses intended for the local market are possibly on par with game carcasses intended for the export market and therefore safe for human consumption.

  8. Experimental Results of the First Two Stages of an Advanced Transonic Core Compressor Under Isolated and Multi-Stage Conditions

    Science.gov (United States)

    Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.

    2015-01-01

    NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency of large gas turbine engines. Under ERA, the task for a High Pressure Ratio Core Technology program calls for a higher overall pressure ratio of 60 to 70. This means that the HPC would have to almost double in pressure ratio while keeping its high level of efficiency. The challenge is how to match the corrected mass flow rate of the front two supersonic, high-reaction, high-corrected-tip-speed stages with a total pressure ratio of 3.5. NASA and GE teamed to address this challenge by using the initial geometry of an advanced GE compressor design to meet the requirements of the first 2 stages of the very high pressure ratio core compressor. The rig was configured to run as a 2-stage machine, with the strut and IGV, Rotor 1, and Stator 1 run as independent tests, which were then followed by adding the second stage. The goal is to fully understand the stage performances under isolated and multi-stage conditions, fully understand any differences, and provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to isolate fluid dynamic loss source mechanisms due to interaction and endwalls. The paper presents the description of the compressor test article, its predicted performance and operability, and the experimental results for both the single-stage and two-stage configurations. We focus the detailed measurements on 97% and 100% of design speed at 3 vane setting angles.

  9. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  10. Romantic Relationship Characteristics and Adolescent Relationship Abuse in a Probability-Based Sample of Youth.

    Science.gov (United States)

    Taylor, Bruce; Joseph, Hannah; Mumford, Elizabeth

    2017-09-01

    This study examines the longitudinal association between baseline adolescent romantic relationship characteristics and later adolescent relationship abuse (ARA). Data are from the first two waves of the National Survey on Teen Relationships and Intimate Violence (STRiV). Girls and boys ages 10 to 18 were recruited randomly from the children of adults participating in a larger national household probability sample panel. About three quarters of the sample identified as White, non-Hispanic. Controlling behavior by a romantic partner consistently predicted later ARA. Higher levels of controlling behavior in the relationship were associated with higher rates of sexual and/or physical ARA victimization and higher rates for similar acts of perpetration. More controlling behavior by the partner was also associated with higher rates of psychological ARA victimization (and higher rates for psychological ARA perpetration). Our results suggest that ARA prevention programs should include explicit discussions of the deleterious effects of controlling behavior with adolescents. Respondents reporting higher feelings of passionate love were also at higher risk of experiencing sexual and/or physical ARA victimization. This finding will need to be considered by clinicians and prevention specialists in their work with youth as a potential risk marker for ARA. Baseline reports of at least one form of ARA were predictive of 1-year follow-up rates of ARA in all of our models, underscoring a long line of research showing that past aggressive or violent behavior is one of the strongest predictors of current aggressive or violent behavior. We also observed that female respondents were twice as likely to be perpetrators of physical and/or sexual ARA as male respondents. Prevention messaging often is focused on girls as ARA victims, and our results imply that messaging should also be directed toward girls as perpetrators.

  11. Compressed gas combined single- and two-stage light-gas gun

    Science.gov (United States)

    Lamberson, L. E.; Boettcher, P. A.

    2018-02-01

    With more than 1 trillion artificial objects smaller than 1 μm in low and geostationary Earth orbit, space assets are subject to the constant threat of space debris impact. These collisions occur at hypervelocity, or speeds greater than 3 km/s. In order to characterize material behavior under this extreme event, as well as to study next-generation materials for space exploration, this paper presents a unique two-stage light-gas gun capable of replicating hypervelocity impacts. While a limited number of these types of facilities exist, they typically are extremely large and can be costly and dangerous to operate. The design presented in this paper is novel in two distinct ways. First, it does not use a form of combustion in the first stage. The projectile is accelerated by a pressure differential using air and inert gases (or purely inert gases), firing a projectile in a nominal range of 1-4 km/s. Second, the design is modular in that the first stage sits on a track sled and can be pulled back and used in itself to study lower speed impacts without any further modifications, with the first-stage piston as the impactor. The modularity of the instrument allows the ability to investigate three orders of magnitude of impact velocities, between 10¹ and 10³ m/s, in a single, relatively small, cost-effective instrument.

  12. Models for probability and statistical inference theory and applications

    CERN Document Server

    Stapleton, James H

    2007-01-01

    This concise, yet thorough, book is enhanced with simulations and graphs to build the intuition of readers. Models for Probability and Statistical Inference was written over a five-year period and serves as a comprehensive treatment of the fundamentals of probability and statistical inference. With detailed theoretical coverage found throughout the book, readers acquire the fundamentals needed to advance to more specialized topics, such as sampling, linear models, design of experiments, statistical computing, survival analysis, and bootstrapping. Ideal as a textbook for a two-semester sequence on probability and statistical inference, early chapters provide coverage of probability and include discussions of: discrete models and random variables; discrete distributions including binomial, hypergeometric, geometric, and Poisson; continuous, normal, gamma, and conditional distributions; and limit theory. Since limit theory is usually the most difficult topic for readers to master, the author thoroughly discusses mo...

  13. Attitudes toward Bisexual Men and Women among a Nationally Representative Probability Sample of Adults in the United States.

    Science.gov (United States)

    Dodge, Brian; Herbenick, Debby; Friedman, M Reuel; Schick, Vanessa; Fu, Tsung-Chieh Jane; Bostwick, Wendy; Bartelt, Elizabeth; Muñoz-Laboy, Miguel; Pletta, David; Reece, Michael; Sandfort, Theo G M

    2016-01-01

    As bisexual individuals in the United States (U.S.) face significant health disparities, researchers have posited that these differences may be fueled, at least in part, by negative attitudes, prejudice, stigma, and discrimination toward bisexual individuals from heterosexual and gay/lesbian individuals. Previous studies of individual and social attitudes toward bisexual men and women have been conducted almost exclusively with convenience samples, with limited generalizability to the broader U.S. Our study provides an assessment of attitudes toward bisexual men and women among a nationally representative probability sample of heterosexual, gay, lesbian, and other-identified adults in the U.S. Data were collected from the 2015 National Survey of Sexual Health and Behavior (NSSHB), via an online questionnaire with a probability sample of adults (18 years and over) from throughout the U.S. We included two modified 5-item versions of the Bisexualities: Indiana Attitudes Scale (BIAS), validated sub-scales that were developed to measure attitudes toward bisexual men and women. Data were analyzed using descriptive statistics, gamma regression, and paired t-tests. Gender, sexual identity, age, race/ethnicity, income, and educational attainment were all significantly associated with participants' attitudes toward bisexual individuals. In terms of responses to individual scale items, participants were most likely to "neither agree nor disagree" with all attitudinal statements. Across sexual identities, self-identified other participants reported the most positive attitudes, while heterosexual male participants reported the least positive attitudes. As in previous research on convenience samples, we found a wide range of demographic characteristics were related with attitudes toward bisexual individuals in our nationally-representative study of heterosexual, gay/lesbian, and other-identified adults in the U.S. In particular, gender emerged as a significant characteristic

  14. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations.Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  15. The Work Sample Verification and the Calculation of the Statistical, Mathematical and Economical Probability for the Risks of the Direct Procurement

    Directory of Open Access Journals (Sweden)

    Lazăr Cristiana Daniela

    2017-01-01

    Full Text Available Each organization has, among the multiple secondary objectives subordinated to a central one, that of avoiding contingencies. Direct procurement is carried out on the market in SEAP (Electronic System of Public Procurement), and performant management in a public institution also rests on risk management. The risks may be investigated by econometric simulation, calculated through the calculus of probability and through the sample used to determine the relevance of these probabilities.

  16. Design and construction of a two-stage centrifugal pump | Nordiana ...

    African Journals Online (AJOL)

    Centrifugal pumps are widely used in moving liquids from one location to another in homes, offices and industries. Due to the ever-increasing demand for centrifugal pumps, it became necessary to design and construct a two-stage centrifugal pump. The pump consisted of an electric motor, a shaft, two rotating impellers ...

  17. Area G perimeter surface-soil and single-stage water sampling. Environmental surveillance for fiscal year 95. Progress report

    International Nuclear Information System (INIS)

    Childs, M.; Conrad, R.

    1997-09-01

    ESH-19 personnel collected soil and single-stage water samples around the perimeter of Area G at Los Alamos National Laboratory (LANL) during FY 95 to characterize possible radionuclide movement out of Area G through surface water and entrained sediment runoff. Soil samples were analyzed for tritium, total uranium, isotopic plutonium, americium-241, and cesium-137. The single-stage water samples were analyzed for tritium and plutonium isotopes. All radiochemical data were compared with analogous samples collected during FY 93 and 94 and reported in LA-12986 and LA-13165-PR. Six surface soil samples were also submitted for metal analyses. These data were included with similar data generated for soil samples collected during FY 94 and compared with metals in background samples collected at the Area G expansion area.

  18. A Two-stage Improvement Method for Robot Based 3D Surface Scanning

    Science.gov (United States)

    He, F. B.; Liang, Y. D.; Wang, R. F.; Lin, Y. S.

    2018-03-01

    Since the surface of an unknown object is difficult to measure or recognize precisely, 3D laser scanning technology was introduced and is widely used in surface reconstruction. Usually, slower surface scanning yields better quality, while faster scanning yields worse quality. Given this trade-off, this paper presents a new two-stage scanning method that pursues surface scanning quality at a faster speed. The first stage is a rough scan to obtain general point cloud data of the object’s surface; the second stage is a specific scan to repair missing regions, which are determined by the chord length discrete method. Meanwhile, a system containing a robotic manipulator and a handy scanner was developed to implement the two-stage scanning method, and the relevant paths were planned according to minimum enclosing ball and regional coverage theories.

  19. A high-power two stage traveling-wave tube amplifier

    International Nuclear Information System (INIS)

    Shiffler, D.; Nation, J.A.; Schachter, L.; Ivers, J.D.; Kerslick, G.S.

    1991-01-01

    Results are presented on the development of a two-stage high-efficiency, high-power 8.76-GHz traveling-wave tube amplifier. The work presented augments previously reported data on a single-stage amplifier and presents new data on the operational characteristics of two identical amplifiers operated in series and separated from each other by a sever. Peak powers of 410 MW have been obtained over the complete pulse duration of the device, with a conversion efficiency from the electron beam to microwave energy of 45%. In all operating conditions the severed amplifier showed a ''sideband''-like structure in the frequency spectrum of the microwave radiation. A similar structure was apparent at output powers in excess of 70 MW in the single-stage device. The frequencies of the ''sidebands'' are not symmetric with respect to the center frequency. The maximum, single-frequency, average output power was 210 MW, corresponding to an amplifier efficiency of 24%. Simulation data are also presented indicating that the short amplifiers used in this work exhibit significant differences in behavior from conventional low-power amplifiers. These include finite-length effects on the gain characteristics, which may account for the observed narrow bandwidth of the amplifiers and for the appearance of the sidebands. It is also found that the bunching length for the beam may be a significant fraction of the total amplifier length.

  20. Biogas production of Chicken Manure by Two-stage fermentation process

    Science.gov (United States)

    Liu, Xin Yuan; Wang, Jing Jing; Nie, Jia Min; Wu, Nan; Yang, Fang; Yang, Ren Jie

    2018-06-01

    This paper reports a batch experiment on pre-acidification treatment and methane production from chicken manure by a two-stage anaerobic fermentation process. Results show that acetate was the main component of the volatile fatty acids produced at the end of the pre-acidification stage, accounting for 68% of the total amount. The daily biogas production went through three peak periods in the methane production stage; the methane content reached 60% in the second period and then slowly decreased to 44.5% in the third period. The cumulative methane production was fitted by a modified Gompertz equation, and the kinetic parameters of methane production potential, maximum methane production rate, and lag phase time were 345.2 ml, 0.948 ml/h, and 343.5 h, respectively. A methane yield of 183 ml-CH4/g-VS removed during the methane production stage and a VS removal efficiency of 52.7% for the whole fermentation process were achieved.
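
    The fitted model can be evaluated directly from the three reported kinetic parameters. A small sketch, assuming the common modified-Gompertz parameterization M(t) = P·exp(−exp(Rm·e/P·(λ − t) + 1)) (the abstract does not spell out the equation form):

    ```python
    import math

    def modified_gompertz(t, P, Rm, lam):
        """Cumulative methane production (ml) at time t (h).

        P   : methane production potential (ml)
        Rm  : maximum methane production rate (ml/h)
        lam : lag phase time (h)
        """
        return P * math.exp(-math.exp(Rm * math.e / P * (lam - t) + 1.0))

    # Kinetic parameters reported in the abstract
    P, Rm, lam = 345.2, 0.948, 343.5

    early = modified_gompertz(100.0, P, Rm, lam)   # well inside the ~343 h lag
    late = modified_gompertz(3000.0, P, Rm, lam)   # approaches the potential P
    ```

    Production is negligible during the lag phase and saturates at the potential P afterwards, which is what makes the three parameters directly interpretable.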

  1. Two-stage high frequency pulse tube refrigerator with base temperature below 10 K

    Science.gov (United States)

    Chen, Liubiao; Wu, Xianlin; Liu, Sixue; Zhu, Xiaoshuang; Pan, Changzhao; Guo, Jia; Zhou, Yuan; Wang, Junjie

    2017-12-01

    This paper introduces our recent experimental results on pulse tube refrigerators driven by linear compressors. The working frequency is 23-30 Hz, much higher than that of G-M type coolers (the developed cryocooler will be called a high frequency pulse tube refrigerator, HPTR, in this paper). To achieve a temperature below 10 K, two types of two-stage configuration, gas-coupled and thermal-coupled, have been designed, built, and tested. At present, both types can achieve a no-load temperature below 10 K using only one compressor. As to the gas-coupled HPTR, the second stage achieves a cooling power of 16 mW at 10 K when a 400 mW heat load is applied to the first stage at 60 K, with a total input power of 400 W. As to the thermal-coupled HPTR, the designed cooling power of the first stage is 10 W at 80 K, and the second stage then reaches a temperature below 10 K with a total input power of 300 W. In the current preliminary experiment, liquid nitrogen is used in place of the first coaxial configuration as the precooling stage, and a no-load temperature of 9.6 K can be achieved with a stainless steel mesh regenerator. Using Er3Ni spheres with diameters of about 50-60 microns, simulation results show it is possible to achieve a temperature below 8 K. The configuration, phase shifters, and regenerative materials of the two developed types of two-stage high frequency pulse tube refrigerator are discussed, and some typical experimental results and considerations for achieving better performance are also presented in this paper.

  2. The Stages of Change in Smoking Cessation in a Representative Sample of Korean Adult Smokers

    OpenAIRE

    Jhun, Hyung-Joon; Seo, Hong-Gwan

    2006-01-01

    This study reports the stages of change in smoking cessation in a representative sample of Korean adult smokers. The study subjects, all adult smokers (n=2,422), were recruited from the second Korea National Health and Nutrition Examination Survey conducted in 2001. The stages of change were categorized using demographic (age and sex), socioeconomic (education, residence, and household income), and smoking characteristics (age at smoking onset, duration of smoking, and number of cigarettes sm...

  3. Quantum probabilities as Dempster-Shafer probabilities in the lattice of subspaces

    International Nuclear Information System (INIS)

    Vourdas, A.

    2014-01-01

    The orthocomplemented modular lattice of subspaces L[H(d)] of a quantum system with d-dimensional Hilbert space H(d) is considered. A generalized additivity relation which holds for Kolmogorov probabilities is violated by quantum probabilities in the full lattice L[H(d)] (it is only valid within the Boolean subalgebras of L[H(d)]). This suggests the use of more general (than Kolmogorov) probability theories, and here the Dempster-Shafer probability theory is adopted. An operator D(H1, H2), which quantifies deviations from Kolmogorov probability theory, is introduced, and it is shown to be intimately related to the commutator of the projectors P(H1), P(H2) onto the subspaces H1, H2. As an application, it is shown that the proof of the inequalities of Clauser, Horne, Shimony, and Holt for a system of two spin-1/2 particles is valid for Kolmogorov probabilities, but it is not valid for Dempster-Shafer probabilities. The violation of these inequalities in experiments supports the interpretation of quantum probabilities as Dempster-Shafer probabilities

  4. Bayesian Probability Theory

    Science.gov (United States)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  5. Data-driven probability concentration and sampling on manifold

    Energy Technology Data Exchange (ETDEWEB)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr [Université Paris-Est, Laboratoire Modélisation et Simulation Multi-Echelle, MSME UMR 8208 CNRS, 5 bd Descartes, 77454 Marne-La-Vallée Cedex 2 (France); Ghanem, R., E-mail: ghanem@usc.edu [University of Southern California, 210 KAP Hall, Los Angeles, CA 90089 (United States)

    2016-09-15

    A new methodology is proposed for generating realizations of a random vector with values in a finite-dimensional Euclidean space that are statistically consistent with a dataset of observations of this vector. The probability distribution of this random vector, while a priori not known, is presumed to be concentrated on an unknown subset of the Euclidean space. A random matrix is introduced whose columns are independent copies of the random vector and for which the number of columns is the number of data points in the dataset. The approach is based on the use of (i) the multidimensional kernel-density estimation method for estimating the probability distribution of the random matrix, (ii) a MCMC method for generating realizations for the random matrix, (iii) the diffusion-maps approach for discovering and characterizing the geometry and the structure of the dataset, and (iv) a reduced-order representation of the random matrix, which is constructed using the diffusion-maps vectors associated with the first eigenvalues of the transition matrix relative to the given dataset. The convergence aspects of the proposed methodology are analyzed and a numerical validation is explored through three applications of increasing complexity. The proposed method is found to be robust to noise levels and data complexity as well as to the intrinsic dimension of data and the size of experimental datasets. Both the methodology and the underlying mathematical framework presented in this paper contribute new capabilities and perspectives at the interface of uncertainty quantification, statistical data analysis, stochastic modeling and associated statistical inverse problems.
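
    Steps (i) and (ii) of the methodology, estimating a kernel density from the dataset and drawing new realizations from it, can be sketched in a drastically simplified form (a plain Gaussian KDE with Silverman's bandwidth on a toy manifold-concentrated dataset; the paper's diffusion-maps reduction and MCMC machinery are omitted):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset concentrated near a circle (a 1-D manifold embedded in R^2)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=500)
    data = np.column_stack([np.cos(theta), np.sin(theta)])
    data += 0.05 * rng.normal(size=data.shape)

    def kde_sample(data, n, rng):
        """Draw n new realizations from a Gaussian kernel-density estimate.

        Sampling from a Gaussian KDE amounts to picking a stored data point
        uniformly at random and perturbing it with kernel noise; the bandwidth
        follows Silverman's rule per dimension.
        """
        m, d = data.shape
        h = m ** (-1.0 / (d + 4)) * data.std(axis=0)
        idx = rng.integers(0, m, size=n)
        return data[idx] + h * rng.normal(size=(n, d))

    new = kde_sample(data, 1000, rng)
    radii = np.linalg.norm(new, axis=1)   # new samples stay near the circle
    ```

    The point of the paper's diffusion-maps step is precisely to avoid the leakage away from the manifold that a plain KDE like this exhibits.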

  6. Two-stage Catalytic Reduction of NOx with Hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Umit S. Ozkan; Erik M. Holmgreen; Matthew M. Yung; Jonathan Halter; Joel Hiltner

    2005-12-21

    A two-stage system for the catalytic reduction of NO from lean-burn natural gas reciprocating engine exhaust is investigated. Each of the two stages uses a distinct catalyst. The first stage is oxidation of NO to NO{sub 2} and the second stage is reduction of NO{sub 2} to N{sub 2} with a hydrocarbon. The central idea is that since NO{sub 2} is a more easily reduced species than NO, it should be better able to compete with oxygen for the combustion reaction of hydrocarbon, which is a challenge in lean conditions. Early work focused on demonstrating that the N{sub 2} yield obtained when NO{sub 2} was reduced was greater than when NO was reduced. NO{sub 2} reduction catalysts were designed and silver supported on alumina (Ag/Al{sub 2}O{sub 3}) was found to be quite active, able to achieve 95% N{sub 2} yield in 10% O{sub 2} using propane as the reducing agent. The design of a catalyst for NO oxidation was also investigated, and a Co/TiO{sub 2} catalyst prepared by sol-gel was shown to have high activity for the reaction, able to reach equilibrium conversion of 80% at 300 C at GHSV of 50,000h{sup -1}. After it was shown that NO{sub 2} could be more easily reduced to N{sub 2} than NO, the focus shifted to developing a catalyst that could use methane as the reducing agent. The Ag/Al{sub 2}O{sub 3} catalyst was tested and found to be inactive for NOx reduction with methane. Through iterative catalyst design, a palladium-based catalyst on a sulfated-zirconia support (Pd/SZ) was synthesized and shown to be able to selectively reduce NO{sub 2} in lean conditions using methane. Development of catalysts for the oxidation reaction also continued and higher activity, as well as stability in 10% water, was observed on a Co/ZrO{sub 2} catalyst, which reached equilibrium conversion of 94% at 250 C at the same GHSV. The Co/ZrO{sub 2} catalyst was also found to be extremely active for oxidation of CO, ethane, and propane, which could potentially eliminate the need for any separate

  7. The global stability of a delayed predator-prey system with two stage-structure

    International Nuclear Information System (INIS)

    Wang Fengyan; Pang Guoping

    2009-01-01

    Based on the classical delayed stage-structured model and the Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system in which both prey and predator have two stages, an immature stage and a mature stage. The time delays are the lengths of time from birth to maturity of the prey and predator species. Results on the global asymptotic stability of nonnegative equilibria of the delayed system are given, which generalize earlier work and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.
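
    The abstract does not give the model equations. As a hedged stand-in, the classical single-species stage-structured model with maturation delay (Aiello-Freedman type), whose mature-class equilibrium is globally stable, can be integrated with a simple Euler scheme and a history buffer:

    ```python
    import math

    # Aiello-Freedman-type stage-structured model with maturation delay tau:
    #   x_i'(t) = a*x_m(t) - g*x_i(t) - a*exp(-g*tau)*x_m(t - tau)
    #   x_m'(t) = a*exp(-g*tau)*x_m(t - tau) - b*x_m(t)**2
    # (x_i immature class, x_m mature class; parameters chosen for illustration)
    a, g, b, tau = 1.0, 0.1, 1.0, 1.0
    dt = 0.01
    steps = int(300 / dt)
    lag = int(tau / dt)

    x_i, x_m = 0.1, 0.1
    hist = [x_m] * (lag + 1)          # constant history on [-tau, 0]

    for _ in range(steps):
        x_m_lag = hist.pop(0)                      # x_m(t - tau)
        surv = a * math.exp(-g * tau) * x_m_lag    # maturation flux
        x_i += dt * (a * x_m - g * x_i - surv)
        x_m += dt * (surv - b * x_m ** 2)
        hist.append(x_m)

    # Positive equilibrium of the mature class: a*exp(-g*tau)/b
    equilibrium = a * math.exp(-g * tau) / b
    ```

    The trajectory settles on the positive equilibrium, illustrating the kind of global-stability behavior the paper establishes for its two-species generalization.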

  8. Experimental and numerical studies on two-stage combustion of biomass

    Energy Technology Data Exchange (ETDEWEB)

    Houshfar, Eshan

    2012-07-01

    In this thesis, two-stage combustion of biomass was investigated experimentally and numerically in a multifuel reactor. The following emissions issues have been the main focus of the work: 1- NOx and N2O; 2- unburnt species (CO and CxHy); 3- corrosion-related emissions. The study focused on two-stage combustion in order to reduce pollutant emissions (primarily NOx emissions). It is well known that pollutant emissions are strongly dependent on process conditions such as temperature, reactant concentrations, and residence times. On the other hand, emissions also depend on fuel properties (moisture content, volatiles, alkali content, etc.). A detailed study of the important parameters with suitable biomass fuels in order to optimize the various process conditions was performed. Different experimental studies were carried out on biomass fuels in order to study the effect of fuel properties and combustion parameters on pollutant emissions. Process conditions typical for biomass combustion processes were studied. Advanced experimental equipment was used in these studies. The experiments clearly showed the effects of staged air combustion, compared to non-staged combustion, on the emission levels. A NOx reduction of up to 85% was reached with staged air combustion using demolition wood as fuel. An optimum primary excess air ratio of 0.8-0.95 was found to minimize the NOx emissions in staged air combustion. Air staging had, however, a negative effect on N2O emissions. Even though the trends showed a very small reduction in the NOx level as temperature increased for non-staged combustion, the effect of temperature was not significant for NOx and CxHy, either in staged air combustion or in non-staged combustion, while it had a great influence on the N2O and CO emissions, whose levels decreased with increasing temperature. Furthermore, flue gas recirculation (FGR) was used in combination with staged combustion to obtain an enhanced NOx reduction.
The

  9. Articulating spacers used in two-stage revision of infected hip and knee prostheses abrade with time.

    Science.gov (United States)

    Fink, Bernd; Rechtenbach, Annett; Büchner, Hubert; Vogt, Sebastian; Hahn, Michael

    2011-04-01

    Articulating spacers used in two-stage revision surgery of infected prostheses have the potential to abrade and subsequently induce third-body wear of the new prosthesis. We asked whether particulate material abraded from spacers could be detected in the synovial membrane 6 weeks after implantation when the spacers were removed for the second stage of the revision. Sixteen hip spacers (cemented prosthesis stem articulating with a cement cup) and four knee spacers (customized mobile cement spacers) were explanted 6 weeks after implantation and the synovial membranes were removed at the same time. The membranes were examined by x-ray fluorescence spectroscopy and x-ray diffraction for the presence of abraded particles originating from the spacer material, and analyzed semiquantitatively by inductively coupled plasma mass spectrometry. Histologic analyses also were performed. We found zirconium dioxide in substantial amounts in all samples, and in the specimens of the hip synovial lining, we detected particles that originated from the metal heads of the spacers. Histologically, zirconium oxide particles were seen in the synovial membrane of every spacer and bone cement particles in one knee and two hip spacers. These observations suggest cement spacers do abrade within 6 weeks. Given the presence of abrasion debris, we recommend total synovectomy and extensive lavage during the second-stage reimplantation surgery to minimize the number of abraded particles and any retained bacteria.

  10. Correlation between Cervical Vertebral Maturation Stages and Dental Maturation in a Saudi Sample

    Directory of Open Access Journals (Sweden)

    Nayef H Felemban

    2017-01-01

    Full Text Available Background: The aim of the present study was to compare the cervical vertebral maturation stages method with dental maturity assessed using tooth calcification stages. Methods: The study comprised 405 subjects selected from orthodontic patients of Saudi origin attending clinics of the specialized dental centers in the western region of Saudi Arabia. Dental age was assessed according to the developmental stages of the upper and lower third molars, and skeletal maturation according to the cervical vertebral maturation stage method. Statistical analysis was done using the Kruskal-Wallis H test, Mann-Whitney U test, chi-square test, t-test, and Spearman correlation coefficient for inter-group comparison. Results: The females were younger than the males at all cervical stages. CS1-CS2 represent the period before the peak of growth, CS3-CS5 the pubertal growth spurt, and CS6 the period after the peak of growth. The mean ages and standard deviations for cervical stages CS2, CS3, and CS4 were 12.09 ± 1.72, 13.19 ± 1.62, and 14.88 ± 1.52 years, respectively. The Spearman correlation coefficients between cervical vertebral and dental maturation were between 0.166 and 0.612 and between 0.243 and 0.832 for the two sexes for the upper and lower third molars. All coefficients were significant at the 0.01 and 0.05 levels. Conclusion: The results of this study showed that skeletal maturity increased with increasing dental age for both genders. An earlier skeletal maturation was observed in females. This study needs further analysis using a larger sample covering the entire dentition.

  11. Collision probability in two-dimensional lattice by ray-trace method and its applications to cell calculations

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro

    1985-03-01

    A series of formulations to evaluate collision probability for multi-region cells expressed by either of three one-dimensional coordinate systems (plane, sphere and cylinder) or by the general two-dimensional cylindrical coordinate system is presented. They are expressed in a suitable form to have a common numerical process named ''Ray-Trace'' method. Applications of the collision probability method to two optional treatments for the resonance absorption are presented. One is a modified table-look-up method based on the intermediate resonance approximation, and the other is a rigorous method to calculate the resonance absorption in a multi-region cell in which nearly continuous energy spectra of the resonance neutron range can be solved and interaction effect between different resonance nuclides can be evaluated. Two works on resonance absorption in a doubly heterogeneous system with grain structure are presented. First, the effect of a random distribution of particles embedded in graphite diluent on the resonance integral is studied. Next, the ''Accretion'' method proposed by Leslie and Jonsson to define the collision probability in a doubly heterogeneous system is applied to evaluate the resonance absorption in coated particles dispersed in fuel pellet of the HTGR. Several optional models are proposed to define the collision rates in the medium with the microscopic heterogeneity. By making use of the collision probability method developed by the present study, the JAERI thermal reactor standard nuclear design code system SRAC has been developed. Results of several benchmark tests for the SRAC are presented. The analyses of critical experiments of the SHE, DCA, and FNR show good agreement of critical masses with their experimental values. (J.P.N.)

  12. Market-implied risk-neutral probabilities, actual probabilities, credit risk and news

    Directory of Open Access Journals (Sweden)

    Shashidhar Murthy

    2011-09-01

    Full Text Available Motivated by the credit crisis, this paper investigates links between risk-neutral probabilities of default implied by markets (e.g. from yield spreads and their actual counterparts (e.g. from ratings. It discusses differences between the two and clarifies underlying economic intuition using simple representations of credit risk pricing. Observed large differences across bonds in the ratio of the two probabilities are shown to imply that apparently safer securities can be more sensitive to news.
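
    The link between a yield spread and the implied risk-neutral default probability can be illustrated with a standard one-period reduced-form sketch (the exact pricing representation used in the paper may differ; the numbers below are illustrative):

    ```python
    import math

    def risk_neutral_default_prob(spread, maturity, recovery):
        """Risk-neutral default probability implied by a credit spread.

        One-period reduced-form sketch: a defaultable zero-coupon bond that
        pays `recovery` of face value on default prices consistently with its
        spread s when exp(-s * T) = (1 - q) + q * recovery, hence
        q = (1 - exp(-s * T)) / (1 - recovery).
        """
        return (1.0 - math.exp(-spread * maturity)) / (1.0 - recovery)

    # 200 bp spread, 5-year horizon, 40% recovery
    q = risk_neutral_default_prob(0.02, 5.0, 0.4)

    # If a rating-based ("actual") 5-year default probability were, say, 4%,
    # the ratio q/p illustrates the large market-implied risk premium the
    # paper discusses.
    ratio = q / 0.04
    ```

    Large cross-sectional variation in this ratio is exactly the observation the paper uses to argue that apparently safer securities can be more sensitive to news.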

  13. Introduction to probability with Mathematica

    CERN Document Server

    Hastings, Kevin J

    2009-01-01

    Discrete ProbabilityThe Cast of Characters Properties of Probability Simulation Random SamplingConditional ProbabilityIndependenceDiscrete DistributionsDiscrete Random Variables, Distributions, and ExpectationsBernoulli and Binomial Random VariablesGeometric and Negative Binomial Random Variables Poisson DistributionJoint, Marginal, and Conditional Distributions More on ExpectationContinuous ProbabilityFrom the Finite to the (Very) Infinite Continuous Random Variables and DistributionsContinuous ExpectationContinuous DistributionsThe Normal Distribution Bivariate Normal DistributionNew Random Variables from OldOrder Statistics Gamma DistributionsChi-Square, Student's t, and F-DistributionsTransformations of Normal Random VariablesAsymptotic TheoryStrong and Weak Laws of Large Numbers Central Limit TheoremStochastic Processes and ApplicationsMarkov ChainsPoisson Processes QueuesBrownian MotionFinancial MathematicsAppendixIntroduction to Mathematica Glossary of Mathematica Commands for Probability Short Answers...

  14. Adjuvant therapy in stage I and stage II epithelial ovarian cancer. Results of two prospective randomized trials

    International Nuclear Information System (INIS)

    Young, R.C.; Walton, L.A.; Ellenberg, S.S.; Homesley, H.D.; Wilbanks, G.D.; Decker, D.G.; Miller, A.; Park, R.; Major, F. Jr.

    1990-01-01

    About a third of patients with ovarian cancer present with localized disease; despite surgical resection, up to half the tumors recur. Since it has not been established whether adjuvant treatment can benefit such patients, we conducted two prospective, randomized national cooperative trials of adjuvant therapy in patients with localized ovarian carcinoma. All patients underwent surgical resection plus comprehensive staging and, 18 months later, surgical re-exploration. In the first trial, 81 patients with well-differentiated or moderately well differentiated cancers confined to the ovaries (Stages Iai and Ibi) were assigned to receive either no chemotherapy or melphalan (0.2 mg per kilogram of body weight per day for five days, repeated every four to six weeks for up to 12 cycles). After a median follow-up of more than six years, there were no significant differences between the patients given no chemotherapy and those treated with melphalan with respect to either five-year disease-free survival or overall survival. In the second trial, 141 patients with poorly differentiated Stage I tumors or with cancer outside the ovaries but limited to the pelvis (Stage II) were randomly assigned to treatment with either melphalan (in the same regimen as above) or a single intraperitoneal dose of 32P (15 mCi) at the time of surgery. In this trial (median follow-up, greater than 6 years) the outcomes for the two treatment groups were similar with respect to five-year disease-free survival (80 percent in both groups) and overall survival (81 percent with melphalan vs. 78 percent with 32P; P = 0.48). We conclude that in patients with localized ovarian cancer, comprehensive staging at the time of surgical resection can serve to identify those patients (as defined by the first trial) who can be followed without adjuvant chemotherapy

  15. Fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy logical relationships.

    Science.gov (United States)

    Chen, Shyi-Ming; Chen, Shen-Wen

    2015-03-01

    In this paper, we present a new method for fuzzy forecasting based on two-factors second-order fuzzy-trend logical relationship groups and the probabilities of trends of fuzzy-trend logical relationships. Firstly, the proposed method fuzzifies the historical training data of the main factor and the secondary factor into fuzzy sets, respectively, to form two-factors second-order fuzzy logical relationships. Then, it groups the obtained two-factors second-order fuzzy logical relationships into two-factors second-order fuzzy-trend logical relationship groups. Then, it calculates the probability of the "down-trend," the probability of the "equal-trend" and the probability of the "up-trend" of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group, respectively. Finally, it performs the forecasting based on the probabilities of the down-trend, the equal-trend, and the up-trend of the two-factors second-order fuzzy-trend logical relationships in each two-factors second-order fuzzy-trend logical relationship group. We also apply the proposed method to forecast the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) and the NTD/USD exchange rates. The experimental results show that the proposed method outperforms the existing methods.
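
    A drastically simplified, scalar version of the trend-probability step (counting down/equal/up trends conditioned on the previous two trends, standing in for the paper's two-factors second-order fuzzy-trend logical relationship groups; the series below is toy data) can be sketched as:

    ```python
    from collections import Counter, defaultdict

    def trend(prev, curr, tol=0.0):
        """Classify a one-step change as 'down', 'equal', or 'up'."""
        if curr > prev + tol:
            return "up"
        if curr < prev - tol:
            return "down"
        return "equal"

    def trend_probabilities(series):
        """Probability of each next trend conditioned on the previous two
        trends, estimated by counting occurrences in the training series."""
        trends = [trend(a, b) for a, b in zip(series, series[1:])]
        groups = defaultdict(Counter)
        for t1, t2, t3 in zip(trends, trends[1:], trends[2:]):
            groups[(t1, t2)][t3] += 1
        return {
            key: {t: n / sum(cnt.values()) for t, n in cnt.items()}
            for key, cnt in groups.items()
        }

    probs = trend_probabilities([1, 2, 3, 2, 3, 4, 3, 4, 5])
    ```

    In this toy series two consecutive up-trends are always followed by a down-trend, so the estimated conditional probability of "down" given ("up", "up") is 1; the paper's method applies the same idea to fuzzified two-factor data.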

  16. Hybrid alkali-hydrodynamic disintegration of waste-activated sludge before two-stage anaerobic digestion process.

    Science.gov (United States)

    Grübel, Klaudiusz; Suschka, Jan

    2015-05-01

    The first step of anaerobic digestion, hydrolysis, is regarded as the rate-limiting step in the degradation of complex organic compounds such as waste-activated sludge (WAS). The aim of the lab-scale experiments was to pre-hydrolyze the sludge by means of low-intensity alkaline conditioning before applying hydrodynamic disintegration as the pre-treatment procedure. Applying both processes as a hybrid sludge disintegration technology resulted in a higher release of organic matter (soluble chemical oxygen demand, SCOD) to the liquid sludge phase than the processes conducted separately. The total SCOD after alkalization at pH 9 (pH in the range 8.96-9.10, SCOD = 600 mg O2/L) and after hydrodynamic disintegration (SCOD = 1450 mg O2/L) equaled 2050 mg/L. However, due to a synergistic effect, the obtained SCOD value amounted to 2800 mg/L, which constitutes an additional chemical oxygen demand (COD) dissolution of about 35%. A similar synergistic effect was obtained after alkalization at pH 10. The applied hybrid pre-hydrolysis technology resulted in a disintegration degree of 28-35%. The experiments aimed at selecting the most appropriate procedures in terms of optimal sludge digestion results, including high organic matter degradation (removal) and high biogas production. The analyzed soft hybrid technology positively influenced the effectiveness of mesophilic/thermophilic anaerobic digestion and ensured sludge minimization. The adopted pre-treatment technology (alkalization + hydrodynamic cavitation) resulted in 22-27% higher biogas production and a 13-28% higher biogas yield. After two stages of anaerobic digestion (mesophilic anaerobic digestion (MAD) + thermophilic anaerobic digestion (TAD)), the highest total solids (TS) reduction amounted to 45.6% and was obtained for the sample digested for 7 days MAD + 17 days TAD. About 7% higher TS reduction was noticed compared with the sample after 9

  17. Advances in delimiting the Hilbert-Schmidt separability probability of real two-qubit systems

    International Nuclear Information System (INIS)

    Slater, Paul B

    2010-01-01

    We seek to derive the probability, expressed in terms of the Hilbert-Schmidt (Euclidean or flat) metric, that a generic (nine-dimensional) real two-qubit system is separable, by implementing the well-known Peres-Horodecki test on the partial transposes (PTs) of the associated 4 x 4 density matrices (ρ). But the full implementation of the test, requiring that the determinant of the PT be nonnegative for separability to hold, appears to be, at least presently, computationally intractable. So, we have previously implemented, using the auxiliary concept of a diagonal-entry-parameterized separability function (DESF), the weaker implied test of nonnegativity of the six 2 x 2 principal minors of the PT. This yielded an exact upper bound on the separability probability of 1024/(135π²) ≈ 0.76854. Here, we piece together (reflection-symmetric) results obtained by requiring that each of the four 3 x 3 principal minors of the PT, in turn, be nonnegative, giving an improved/reduced upper bound of 22/35 ≈ 0.628571. Then, we conclude that a still further improved upper bound of 1129/2100 ≈ 0.537619 can be found by similarly piecing together the (reflection-symmetric) results of enforcing the simultaneous nonnegativity of certain pairs of the four 3 x 3 principal minors. Numerical simulations, as opposed to exact symbolic calculations, indicate, on the other hand, that the true probability is certainly less than 1/2. Our analyses lead us to suggest a possible form for the true DESF, yielding a separability probability of 29/64 = 0.453125, while the absolute separability probability of (6928 − 2205π)/2^(9/2) ≈ 0.0348338 provides the best exact lower bound established so far. In deriving our improved upper bounds, we rely repeatedly upon the use of certain integrals over cubes that arise. Finally, we apply an independence assumption to a pair of DESFs that comes close to reproducing our numerical estimate of the true separability function.
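
    The quoted bounds are explicit constants, so they can be checked numerically; a quick sketch:

    ```python
    import math

    # Numerical check of the constants quoted in the abstract
    upper_2x2_minors = 1024 / (135 * math.pi ** 2)        # six 2x2 minors bound
    upper_3x3_minors = 22 / 35                            # four 3x3 minors bound
    upper_paired_minors = 1129 / 2100                     # paired 3x3 minors bound
    conjectured = 29 / 64                                 # suggested DESF value
    abs_sep_lower = (6928 - 2205 * math.pi) / 2 ** 4.5    # absolute separability

    # The successive upper bounds tighten toward the conjectured value below 1/2
    bounds = [upper_2x2_minors, upper_3x3_minors, upper_paired_minors]
    ```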

  18. Probability-1

    CERN Document Server

    Shiryaev, Albert N

    2016-01-01

    This book contains a systematic treatment of probability from the ground up, starting with intuitive ideas and gradually developing more sophisticated subjects, such as random walks, martingales, Markov chains, the measure-theoretic foundations of probability theory, weak convergence of probability measures, and the central limit theorem. Many examples are discussed in detail, and there are a large number of exercises. The book is accessible to advanced undergraduates and can be used as a text for independent study. To accommodate the greatly expanded material in the third edition of Probability, the book is now divided into two volumes. This first volume contains updated references and substantial revisions of the first three chapters of the second edition. In particular, new material has been added on generating functions, the inclusion-exclusion principle, theorems on monotonic classes (relying on a detailed treatment of “π-λ” systems), and the fundamental theorems of mathematical statistics.

  19. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism

    Science.gov (United States)

    Yang, B.; Guo, X.; Wang, Q. H.; Lu, C. F.; Hu, D.

    2018-04-01

    The design, simulation, fabrication, and experiments of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Unlike conventional detection methods for flow sensors, two differential resonators are adopted to transduce the air flow rate through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimensions of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of the resonance frequency. The sensor, fabricated by a standard deep dry silicon-on-glass process, has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)² at a resonant frequency of 22 kHz for a hair height of 9 mm and increases by a factor of 2.42 as the hair height extends from 3 mm to 9 mm. Simultaneously, a detection limit of 3.23 mm/s air flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in micro-autonomous systems and technology, self-stabilizing micro-air vehicles, and environmental monitoring.

  20. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early detection of targets and high-precision target tracking are two important performance indicators that need to be balanced in a practical target search and tracking system. This paper proposes a target tracking system that compounds preliminary and precise stages. The system uses a large field of view to perform the target search; after the target is found and confirmed, it switches to a small field of view for target tracking across the two fields of view. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This compound preliminary-precise two-stage approach extends the range of the target search and improves target tracking accuracy, and the method has practical value.

  1. Divergent methylation pattern in adult stage between two forms of Tetranychus urticae (Acari: Tetranychidae).

    Science.gov (United States)

    Yang, Si-Xia; Guo, Chao; Zhao, Xiu-Ting; Sun, Jing-Tao; Hong, Xiao-Yue

    2017-02-19

    The two-spotted spider mite, Tetranychus urticae Koch, has two forms: a green form and a red form. Understanding the molecular basis of how these two forms became established without a divergent genetic background is an intriguing question. As a well-known epigenetic process, DNA methylation plays particularly important roles in gene regulation and developmental variation across diverse organisms without altering the genetic background. Here, to investigate whether DNA methylation could be associated with the different phenotypic consequences in the two forms of T. urticae, we surveyed the genome-wide cytosine methylation status and the expression level of DNA methyltransferase 3 (Tudnmt3) throughout their entire life cycle. Methylation-sensitive amplification polymorphism (MSAP) analyses of 585 loci revealed variable methylation patterns in the different developmental stages. In particular, principal coordinates analysis (PCoA) indicates a significant epigenetic differentiation between female adults of the two forms. The expression of Tudnmt3 was detected in all examined developmental stages and was significantly different in the adult stage of the two forms. Together, our results reveal the epigenetic distance between the two forms of T. urticae, suggesting that DNA methylation might be implicated in different developmental demands and contribute to the different phenotypes in the adult stage of these two forms. © 2017 Institute of Zoology, Chinese Academy of Sciences.

  2. Hydrodeoxygenation of oils from cellulose in single and two-stage hydropyrolysis

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, J.D.; Snape, C.E. [Strathclyde Univ., Glasgow (United Kingdom); Luengo, C.A. [Universidade Estadual de Campinas, SP (Brazil). Dept. de Fisica Aplicada

    1996-09-01

    To investigate the removal of oxygen (hydrodeoxygenation) during the hydropyrolysis of cellulose, single and two-stage experiments on pure cellulose have been carried out using hydrogen pressures up to 10 MPa and temperatures over the range 300-520°C. Carbon, oxygen and aromaticity balances have been determined from the product yields and compositions. For the two-stage tests, the primary oils were passed through a bed of commercial Ni/Mo γ-alumina-supported catalyst (Criterion 424, presulphided) at 400°C. Raising the hydrogen pressure from atmospheric to 10 MPa increased the carbon conversion by 10 mole %, which was roughly equally divided between the oil and hydrocarbon gases. The oxygen content of the primary oil was reduced by over 10% to below 20% w/w. The addition of a dispersed iron sulphide catalyst further increased the oil yield at 10 MPa and reduced the oxygen content of the oil by a further 10%. The effect of hydrogen pressure on oil yields was most pronounced at low flow rates, where it is beneficial in helping to overcome diffusional resistances. Unlike the dispersed iron sulphide in the first stage, the use of the Ni-Mo catalyst in the second stage reduced both the oxygen content and aromaticity of the oils. (Author)

  3. "I Don't Really Understand Probability at All": Final Year Pre-Service Teachers' Understanding of Probability

    Science.gov (United States)

    Maher, Nicole; Muir, Tracey

    2014-01-01

    This paper reports on one aspect of a wider study that investigated a selection of final year pre-service primary teachers' responses to four probability tasks. The tasks focused on foundational ideas of probability including sample space, independence, variation and expectation. Responses suggested that strongly held intuitions appeared to…

  4. Soybean P34 Probable Thiol Protease Probably Has Proteolytic Activity on Oleosins.

    Science.gov (United States)

    Zhao, Luping; Kong, Xiangzhen; Zhang, Caimeng; Hua, Yufei; Chen, Yeming

    2017-07-19

    P34 probable thiol protease (P34) and Gly m Bd 30K (30K) are strongly implicated as the protease acting on the 24 kDa oleosin of soybean oil bodies. In this study, 9-day-germinated soybean was used to separate bioprocessed P34 (P32) from bioprocessed 30K (28K). Interestingly, P32 existed as a dimer, whereas 28K existed as a monomer; a P32-rich sample had proteolytic activity and high cleavage site specificity (Lys-Thr of the 24 kDa oleosin), whereas a 28K-rich sample showed low proteolytic activity; the P32-rich sample contained one thiol protease. After mixing with purified oil bodies, all P32 dimers were dissociated and bound to 24 kDa oleosins to form P32-24 kDa oleosin complexes. Upon incubation, the 24 kDa oleosin was preferentially hydrolyzed, and two hydrolyzed products (HPs; 17 and 7 kDa) were confirmed. After most of the 24 kDa oleosin was hydrolyzed, some P32 existed as a dimer and the rest as P32-17 kDa HP. It was suggested that P32 is the protease.

  5. A scenario tree model for the Canadian Notifiable Avian Influenza Surveillance System and its application to estimation of probability of freedom and sample size determination.

    Science.gov (United States)

    Christensen, Jette; Stryhn, Henrik; Vallières, André; El Allaki, Farouk

    2011-05-01

    In 2008, Canada designed and implemented the Canadian Notifiable Avian Influenza Surveillance System (CanNAISS) with six surveillance activities in a phased-in approach. CanNAISS was a surveillance system because it had more than one surveillance activity or component in 2008: passive surveillance; pre-slaughter surveillance; and voluntary enhanced notifiable avian influenza surveillance. Our objectives were to give a short overview of two active surveillance components in CanNAISS and to describe the CanNAISS scenario tree model and its application to the estimation of the probability of populations being free of NAI virus infection and to sample size determination. Our data from the pre-slaughter surveillance component included diagnostic test results from 6296 serum samples representing 601 commercial chicken and turkey farms collected from 25 August 2008 to 29 January 2009. In addition, we included data from a sub-population of farms with high biosecurity standards: 36,164 samples from 55 farms sampled repeatedly over the 24-month study period from January 2007 to December 2008. All submissions were negative for Notifiable Avian Influenza (NAI) virus infection. We developed the CanNAISS scenario tree model to estimate the surveillance component sensitivity and the probability of a population being free of NAI at design prevalences of 0.01 at the farm level and 0.3 within farms. We propose that a general model, such as the CanNAISS scenario tree model, may have a broader application than more detailed models that require disease-specific input parameters, such as relative risk estimates. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
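    The probability-of-freedom arithmetic behind such scenario tree models can be sketched as follows. Only the design prevalences (0.01 between farms, 0.30 within farms) are taken from the abstract; the test sensitivity, sample counts per farm, and prior are illustrative assumptions, not CanNAISS parameters.

```python
def herd_sensitivity(test_se, within_prev, n_samples):
    """P(at least one positive sample) in an infected herd, assuming
    independent samples and a perfectly specific test."""
    return 1.0 - (1.0 - test_se * within_prev) ** n_samples

def system_sensitivity(herd_se, herd_prev, n_herds):
    """P(the surveillance component detects infection) when the
    population is infected at the design prevalence."""
    return 1.0 - (1.0 - herd_se * herd_prev) ** n_herds

def prob_freedom(prior_infected, sse):
    """Posterior P(free) after an all-negative surveillance round (Bayes)."""
    return (1.0 - prior_infected) / (1.0 - prior_infected * sse)

# Illustrative run: assumed test Se = 0.9, 10 samples per farm, 601 farms
# (the farm count from the abstract), assumed prior P(infected) = 0.05.
hse = herd_sensitivity(test_se=0.9, within_prev=0.30, n_samples=10)
sse = system_sensitivity(hse, herd_prev=0.01, n_herds=601)
print(round(hse, 3), round(sse, 3), round(prob_freedom(0.05, sse), 4))
```

With these made-up inputs the component sensitivity is high and the posterior probability of freedom approaches 1, which is the qualitative behaviour such models are designed to quantify.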

  6. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is restricted to temperatures between the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, therefore the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, together with some experimental results of the adsorption-compression two-stage cycle powered with solar collectors. The adsorption system is applied as the high-temperature cycle; the low-temperature cycle is the compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low-temperature cycle and for the whole system.

  7. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through a part-based HOG … of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the HOG part-based approach, tracking based on specific optimized features, and porting to a real prototype…

  8. Probability of failure of the watershed algorithm for peak detection in comprehensive two-dimensional chromatography

    NARCIS (Netherlands)

    Vivó-Truyols, G.; Janssen, H.-G.

    2010-01-01

    The watershed algorithm is the most common method used for peak detection and integration in two-dimensional chromatography. However, the retention time variability in the second dimension may cause the algorithm to fail. A study calculating the probabilities of failure of the watershed algorithm was

  9. Anaerobic digestion of citrus waste using two-stage membrane bioreactor

    Science.gov (United States)

    Millati, Ria; Lukitawesa; Dwi Permanasari, Ervina; Wulan Sari, Kartika; Nur Cahyanto, Muhammad; Niklasson, Claes; Taherzadeh, Mohammad J.

    2018-03-01

    Anaerobic digestion is a promising method to treat citrus waste. However, the presence of limonene in citrus waste inhibits the anaerobic digestion process. Limonene is an antimicrobial compound and can inhibit methane-forming bacteria, which take a longer time to recover than the injured acid-forming bacteria. Hence, volatile fatty acids accumulate and methane production decreases. One way to solve this problem is to conduct the anaerobic digestion process in two stages. The first stage is aimed at the hydrolysis, acidogenesis, and acetogenesis reactions, and the second stage at the methanogenesis reaction. The separation of the system further allows each stage to run at its optimum conditions, making the process more stable. In this research, anaerobic digestion was carried out in batch operation using 120 ml glass-bottle bioreactors in two stages. The first stage was performed in a free-cell bioreactor, whereas the second stage was performed both in a free-cell bioreactor and in a membrane bioreactor. In the first stage, the reactor was set to ‘anaerobic’ and ‘semi-aerobic’ conditions to examine the effect of oxygen on facultative anaerobic bacteria in acid production. In the second stage, the protection the membrane affords the cells against limonene was tested. For the first stage, the basal medium was prepared with 1.5 g VS of inoculum and 4.5 g VS of citrus waste. The digestion process was carried out at 55°C for four days. For the second stage, the membrane bioreactor was prepared with 3 g of cells encased and sealed in a 3 × 6 cm² polyvinylidene fluoride membrane. The medium contained 40 ml of basal medium and 10 ml of liquid from the first stage. The bioreactors were incubated at 55°C for 2 days under anaerobic conditions. The results from the first stage showed that the maximum total sugar under ‘anaerobic’ and ‘semi-aerobic’ conditions was 294.3 g/l and 244.7 g/l, respectively. The corresponding values for total volatile

  10. Is the continuous two-stage anaerobic digestion process well suited for all substrates?

    Science.gov (United States)

    Lindner, Jonas; Zielonka, Simon; Oechsner, Hans; Lemmer, Andreas

    2016-01-01

    Two-stage anaerobic digestion systems are often considered advantageous compared to one-stage processes. Although process conditions and fermenter setups are well examined, overall substrate degradation in these systems is controversially discussed. Therefore, the aim of this study was to investigate how substrates with different fibre and sugar contents (hay/straw, maize silage, sugar beet) influence the degradation rate and methane production. Intermediates and gas compositions, as well as methane yields and VS-degradation degrees, were recorded. The sugar beet substrate led to a larger pH-value drop (to 5.67) in the acidification reactor, which resulted in a six times higher hydrogen production in comparison to the hay/straw substrate (pH-value drop to 5.34). The achieved yields in the two-stage system showed a difference of 70.6% for the hay/straw substrate, but only 7.8% for the sugar beet substrate. Therefore, two-stage systems seem recommendable only for digesting sugar-rich substrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Assessing efficiency and effectiveness of Malaysian Islamic banks: A two stage DEA analysis

    Science.gov (United States)

    Kamarudin, Norbaizura; Ismail, Wan Rosmanira; Mohd, Muhammad Azri

    2014-06-01

    Islamic banks in Malaysia are indispensable players in the financial industry, given the growing need for a syariah-compliant system. In the banking industry, most recent studies have been concerned only with operational efficiency, and rarely with operational effectiveness. Since the production process of the banking industry can be described as a two-stage process, two-stage Data Envelopment Analysis (DEA) can be applied to measure bank performance. This study was designed to measure the overall performance, in terms of efficiency and effectiveness, of Islamic banks in Malaysia using a two-stage DEA approach. This paper presents the analysis of a DEA model which splits efficiency and effectiveness in order to evaluate the performance of ten selected Islamic banks in Malaysia for the financial year ended 2011. The analysis shows that the average efficiency score is higher than the average effectiveness score, thus we can say that Malaysian Islamic banks were more efficient than effective. Furthermore, none of the banks exhibits best practice in both stages; that is, a bank with better efficiency does not always have better effectiveness at the same time.
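    The efficiency stage of such a DEA analysis can be sketched with an input-oriented CCR envelopment model solved as a linear program. The bank data below are made-up toy numbers, not the Malaysian sample; the formulation itself is the standard CCR model.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores.

    X: inputs, shape (m, n_units); Y: outputs, shape (s, n_units).
    For each unit o:  min theta  s.t.  X @ lam <= theta * x_o,
    Y @ lam >= y_o, lam >= 0.  Scores lie in (0, 1]."""
    m, n = X.shape
    s = Y.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                 # minimise theta
        A_ub = np.vstack([
            np.hstack([-X[:, [o]], X]),             # X lam - theta x_o <= 0
            np.hstack([np.zeros((s, 1)), -Y]),      # -Y lam <= -y_o
        ])
        b_ub = np.r_[np.zeros(m), -Y[:, o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# Toy data: 2 inputs (staff, assets) and 1 output (financing) for 4 banks.
X = np.array([[20.0, 30.0, 40.0, 20.0],
              [30.0, 20.0, 50.0, 40.0]])
Y = np.array([[60.0, 60.0, 80.0, 40.0]])
scores = dea_ccr_input(X, Y)
print(np.round(scores, 3))  # the first two toy banks lie on the frontier
```

A second, analogous run with the stage-one outputs as inputs would give the effectiveness scores of the two-stage approach.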

  12. Efficiency of primary care in rural Burkina Faso. A two-stage DEA analysis.

    Science.gov (United States)

    Marschall, Paul; Flessa, Steffen

    2011-07-20

    Providing health care services in Africa is hampered by a severe scarcity of personnel, medical supplies and financial funds. Consequently, managers of health care institutions are called to measure and improve the efficiency of their facilities in order to provide the best possible services with their resources. However, very little is known about the efficiency of health care facilities in Africa, and instruments of performance measurement are hardly applied in this context. This study determines the relative efficiency of primary care facilities in Nouna, a rural health district in Burkina Faso. Furthermore, it analyses the factors influencing the efficiency of these institutions. We apply a two-stage Data Envelopment Analysis (DEA) based on data from a comprehensive provider and household information system. In the first stage, the relative efficiency of each institution is calculated by a traditional DEA model. In the second stage, we identify the reasons for being inefficient by a regression technique. The DEA projections suggest that inefficiency is mainly a result of poor utilization of health care facilities, as they were either too big or the demand was too low. Regression results showed that distance is an important factor influencing the efficiency of a health care institution. Compared to the findings of existing one-stage DEA analyses of health facilities in Africa, the share of relatively efficient units is slightly higher. The difference might be explained by a rather homogeneous structure of the primary care facilities in the Burkina Faso sample. The study also indicates that improving the accessibility of primary care facilities will have a major impact on the efficiency of these institutions. Thus, health decision-makers are called to overcome the demand-side barriers in accessing health care.

  13. Classroom Research: Assessment of Student Understanding of Sampling Distributions of Means and the Central Limit Theorem in Post-Calculus Probability and Statistics Classes

    Science.gov (United States)

    Lunsford, M. Leigh; Rowell, Ginger Holmes; Goodson-Espy, Tracy

    2006-01-01

    We applied a classroom research model to investigate student understanding of sampling distributions of sample means and the Central Limit Theorem in post-calculus introductory probability and statistics courses. Using a quantitative assessment tool developed by previous researchers and a qualitative assessment tool developed by the authors, we…
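    The kind of simulation used to teach sampling distributions can be reproduced in a few lines: draw repeated samples from a skewed population and check that the sample means concentrate around μ with spread σ/√n, as the Central Limit Theorem predicts. The population, sample size, and replication count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 30, 20000

# Exponential population with mean 1 and standard deviation 1 (heavily skewed).
sample_means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)

center = sample_means.mean()        # ~ mu = 1.0
spread = sample_means.std(ddof=1)   # ~ sigma / sqrt(n) = 1/sqrt(30) ≈ 0.183
print(round(center, 3), round(spread, 3))
```

A histogram of `sample_means` is close to normal even though the population itself is far from it, which is exactly the point the assessed courses aim to convey.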

  14. Engineering analysis of the two-stage trifluoride precipitation process

    International Nuclear Information System (INIS)

    Luerkens, D.W.W.

    1984-06-01

    An engineering analysis of two-stage trifluoride precipitation processes is developed. Precipitation kinetics are modeled using consecutive reactions to represent fluoride complexation. Material balances across the precipitators are used to model the time dependent concentration profiles of the main chemical species. The results of the engineering analysis are correlated with previous experimental work on plutonium trifluoride and cerium trifluoride
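    The consecutive-reaction kinetics used to model the fluoride complexation can be sketched with the generic first-order scheme A → B → C; the rate constants and initial amount below are illustrative, not fitted to the trifluoride data.

```python
import numpy as np

def consecutive_first_order(t, a0=1.0, k1=1.0, k2=0.4):
    """Closed-form concentrations for A -k1-> B -k2-> C (k1 != k2),
    a toy stand-in for sequential complexation steps."""
    a = a0 * np.exp(-k1 * t)
    b = a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    c = a0 - a - b          # mass balance closes the system
    return a, b, c

t = np.linspace(0.0, 10.0, 101)
a, b, c = consecutive_first_order(t)
print(round(float(b.max()), 3))  # the intermediate peaks, then decays
```

The intermediate B peaking and decaying is the qualitative profile a two-stage precipitator exploits: the second stage receives feed whose composition depends on the residence time of the first.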

  15. Eliciting Subjective Probabilities with Binary Lotteries

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Martínez-Correa, Jimmy; Swarthout, J. Todd

    objective probabilities. Drawing a sample from the same subject population, we find evidence that the binary lottery procedure induces linear utility in a subjective probability elicitation task using the Quadratic Scoring Rule. We also show that the binary lottery procedure can induce direct revelation...

  16. Recent developments of a two-stage light gas gun for pellet injection

    International Nuclear Information System (INIS)

    Reggiori, A.

    1984-01-01

    A report is given on a two-stage pneumatic gun, operated with ambient air as the first-stage driver, which has been built and tested. Cylindrical polyethylene pellets of 1 mm diameter and 1 mm length have been launched at velocities up to 1800 m/s, with divergence angles of the pellet trajectory less than 1°. It is possible to optimize the pressure pulse for pellets of different masses, simply by changing the mass of the piston and/or the initial pressures in the second stage. (author)

  17. Considerations Regarding Age at Surgery and Fistula Incidence Using One- and Two-stage Closure for Cleft Palate

    Directory of Open Access Journals (Sweden)

    Simona Stoicescu

    2013-12-01

    Full Text Available Introduction: Although cleft lip and palate (CLP) is one of the most common congenital malformations, occurring in 1 in 700 live births, there is still no generally accepted treatment protocol. Numerous surgical techniques have been described for cleft palate repair; these techniques can be divided into one-stage (one operation) cleft palate repair and two-stage cleft palate closure. The aim of this study is to present our cleft palate team's experience in using two-stage cleft palate closure and the clinical outcomes in terms of oronasal fistula rate. Material and methods: A retrospective analysis was performed on the medical records of 80 patients who underwent palate repair over a five-year period, from 2008 to 2012. All cleft palate patients were included. Information on patients' gender, cleft type, age at repair, and one- or two-stage cleft palate repair was collected and analyzed. Results: Fifty-three (66%) and twenty-seven (34%) patients underwent two-stage and one-stage repair, respectively. According to the Veau classification, more than 60% of them were Veau III and IV, associating cleft lip with cleft palate. Fistula occurred in 34% of the two-stage repairs versus 7% of the one-stage repairs, with an overall incidence of 24%. Conclusions: Our study has shown that two-stage cleft palate closure has a higher rate of fistula formation than one-stage repair. Two-stage repair is the protocol of choice in wide complete cleft lip and palate cases, while the one-stage procedure is a good option for cleft palate alone or some specific cleft lip and palate cases (narrow cleft palate, older age at surgery)

  18. Two-stage acid saccharification of fractionated Gelidium amansii minimizing the sugar decomposition.

    Science.gov (United States)

    Jeong, Tae Su; Kim, Young Soo; Oh, Kyeong Keun

    2011-11-01

    Two-stage acid hydrolysis was conducted on the easily reacting cellulose and the resistant reacting cellulose of fractionated Gelidium amansii (f-GA). Acid hydrolysis of f-GA was performed at between 170 and 200 °C, for a period of 0-5 min, and at an acid concentration of 2-5% (w/v, H2SO4) to determine the optimal conditions for acid hydrolysis. In the first stage of the acid hydrolysis, an optimum glucose yield of 33.7% was obtained at a reaction temperature of 190 °C, an acid concentration of 3.0%, and a reaction time of 3 min. In the second stage, a glucose yield of 34.2%, on the basis of the amount of residual cellulose from the f-GA, was obtained at a temperature of 190 °C, a sulfuric acid concentration of 4.0%, and a reaction time of 3.7 min. Finally, 68.58% of the cellulose derived from f-GA was converted into glucose through two-stage acid saccharification under the aforementioned conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Two-stage nuclear refrigeration with enhanced nuclear moments

    International Nuclear Information System (INIS)

    Hunik, R.

    1979-01-01

    Experiments are described in which an enhanced nuclear system is used as a precoolant for a nuclear demagnetisation stage. The results show the promising advantages of such a system in circumstances where a large cooling power is required at extremely low temperatures. A theoretical review of nuclear enhancement at the microscopic level and its macroscopic thermodynamical consequences is given. The experimental equipment for the implementation of the nuclear enhanced refrigeration method is described, and the experiments on two-stage nuclear demagnetisation are discussed. With the nuclear enhanced system PrCu₆ the author could precool a nuclear stage of indium in a magnetic field of 6 T down to temperatures below 10 mK; this resulted in temperatures below 1 mK after demagnetisation of the indium. It is demonstrated that the interaction energy between the nuclear moments in an enhanced nuclear system can exceed the nuclear dipolar interaction. Several experiments are described on pulsed nuclear magnetic resonance, as utilised for thermometry purposes. It is shown that platinum NMR thermometry gives very satisfactory results around 1 mK. The results of experiments on nuclear orientation of radioactive nuclei, e.g. the brute-force polarisation of ⁹⁵NbPt and ⁶⁰CoCu, are presented, some of which are of major importance for thermometry in the milli-Kelvin region. (Auth.)
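    The ideal cooling step in such a demagnetisation stage follows from the constancy of entropy, which for non-interacting moments depends only on B_eff/T. A back-of-the-envelope sketch is below; the 1 mT internal field is an assumed figure for illustration, not a value from the experiment.

```python
import math

def final_temperature(t_i, b_i, b_f, b_int):
    """Ideal adiabatic demagnetisation: entropy depends on B_eff / T, so
    T_f = T_i * sqrt(b_f**2 + b_int**2) / sqrt(b_i**2 + b_int**2),
    where b_int models the residual internal (dipolar or enhanced) field
    that limits the attainable temperature."""
    return t_i * math.sqrt(b_f ** 2 + b_int ** 2) / math.sqrt(b_i ** 2 + b_int ** 2)

# Illustrative numbers: start at 10 mK in 6 T (as in the abstract) and
# demagnetise to zero applied field with an assumed 1 mT internal field.
tf = final_temperature(10e-3, 6.0, 0.0, 1e-3)
print(tf)  # on the order of a microkelvin in this idealised limit
```

The idealised result lies far below the reported sub-millikelvin temperatures, as expected: heat leaks and interactions keep real systems well above the non-interacting limit.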

  20. Quantitative determination on heavy metals in different stages of wine production by Total Reflection X-ray Fluorescence and Energy Dispersive X-ray Fluorescence: Comparison on two vineyards

    Energy Technology Data Exchange (ETDEWEB)

    Pessanha, Sofia [Centro Fisica Atomica, Departamento de Fisica, Faculdade de Ciencias, Universidade de Lisboa, Av. Prof. Gama Pinto, 2, 1649-003 Lisboa (Portugal); Carvalho, Maria Luisa, E-mail: luisa@cii.fc.ul.p [Centro Fisica Atomica, Departamento de Fisica, Faculdade de Ciencias, Universidade de Lisboa, Av. Prof. Gama Pinto, 2, 1649-003 Lisboa (Portugal); Becker, Maria; Bohlen, Alex von [Institute for analytical Sciences, Bunsen-Kirchhoff-Str. 11, 44139 Dortmund (Germany)

    2010-06-15

    The purpose of this study is to determine the elemental content, namely heavy metals, of samples of vine-leaves, grape must and wine. In order to assess the influence of the vineyard age on the elemental content throughout the several stages of wine production, determinations of trace elements were made on products obtained from two vineyards, aged 6 and 14 years, from the Douro region. The elemental content of vine-leaves and grapes was determined by Energy Dispersive X-ray Fluorescence (EDXRF), while analysis of the must and wine was performed by Total Reflection X-ray Fluorescence (TXRF). Almost all elements present in the wine and must samples did not exceed the recommended values found in the literature for wine. Bromine was present in the 6-year-old wine at a concentration one order of magnitude greater than is usually detected. The Cu content in vine-leaves from the older vineyard was found to be extremely high, probably due to the excessive use of Cu-based fungicides to control vine downy mildew. A higher Cu content was also detected in the grapes, although less pronounced. Concerning the wine, a slightly higher level was detected in the older vineyard, even so not exceeding the recommended value.

  1. Quantitative determination on heavy metals in different stages of wine production by Total Reflection X-ray Fluorescence and Energy Dispersive X-ray Fluorescence: Comparison on two vineyards

    International Nuclear Information System (INIS)

    Pessanha, Sofia; Carvalho, Maria Luisa; Becker, Maria; Bohlen, Alex von

    2010-01-01

    The purpose of this study is to determine the elemental content, namely heavy metals, of samples of vine-leaves, grape must and wine. In order to assess the influence of the vineyard age on the elemental content throughout the several stages of wine production, determinations of trace elements were made on products obtained from two vineyards, aged 6 and 14 years, from the Douro region. The elemental content of vine-leaves and grapes was determined by Energy Dispersive X-ray Fluorescence (EDXRF), while analysis of the must and wine was performed by Total Reflection X-ray Fluorescence (TXRF). Almost all elements present in the wine and must samples did not exceed the recommended values found in the literature for wine. Bromine was present in the 6-year-old wine at a concentration one order of magnitude greater than is usually detected. The Cu content in vine-leaves from the older vineyard was found to be extremely high, probably due to the excessive use of Cu-based fungicides to control vine downy mildew. A higher Cu content was also detected in the grapes, although less pronounced. Concerning the wine, a slightly higher level was detected in the older vineyard, even so not exceeding the recommended value.

  2. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    Science.gov (United States)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Systems of anaerobic digestion should be used for the processing of organic waste. Managing the process of anaerobic recycling of organic waste requires reliable prediction of biogas production. The development of a mathematical model of the process of organic waste digestion allows the rate of biogas output to be determined in the two-stage process of anaerobic digestion, taking the first stage into account. Verification of Konto's model, based on the studied anaerobic processing of organic waste, is implemented. The dependences of biogas output and its rate on time are established and may be used to predict the process of anaerobic processing of organic waste.
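    A simple first-order kinetic is often used as a baseline for this kind of cumulative biogas prediction; a minimal sketch follows. The ultimate yield B_max and rate constant k below are illustrative values, not the fitted parameters of the paper.

```python
import math

def biogas_yield(t_days, b_max=0.35, k=0.15):
    """Cumulative biogas yield (e.g. m^3 per kg VS) under first-order
    kinetics: B(t) = B_max * (1 - exp(-k t))."""
    return b_max * (1.0 - math.exp(-k * t_days))

def biogas_rate(t_days, b_max=0.35, k=0.15):
    """Instantaneous production rate dB/dt = k * (B_max - B(t))."""
    return k * b_max * math.exp(-k * t_days)

for day in (5, 10, 20, 40):
    print(day, round(biogas_yield(day), 3), round(biogas_rate(day), 4))
```

The yield rises monotonically toward B_max while the rate decays, which is the qualitative behaviour a verified digestion model must reproduce before it can be used for managing a two-stage plant.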

  3. Attitudes toward Bisexual Men and Women among a Nationally Representative Probability Sample of Adults in the United States

    Science.gov (United States)

    Herbenick, Debby; Friedman, M. Reuel; Schick, Vanessa; Fu, Tsung-Chieh (Jane); Bostwick, Wendy; Bartelt, Elizabeth; Muñoz-Laboy, Miguel; Pletta, David; Reece, Michael; Sandfort, Theo G. M.

    2016-01-01

    As bisexual individuals in the United States (U.S.) face significant health disparities, researchers have posited that these differences may be fueled, at least in part, by negative attitudes, prejudice, stigma, and discrimination toward bisexual individuals from heterosexual and gay/lesbian individuals. Previous studies of individual and social attitudes toward bisexual men and women have been conducted almost exclusively with convenience samples, with limited generalizability to the broader U.S. population. Our study provides an assessment of attitudes toward bisexual men and women among a nationally representative probability sample of heterosexual, gay, lesbian, and other-identified adults in the U.S. Data were collected from the 2015 National Survey of Sexual Health and Behavior (NSSHB), via an online questionnaire with a probability sample of adults (18 years and over) from throughout the U.S. We included two modified 5-item versions of the Bisexualities: Indiana Attitudes Scale (BIAS), validated sub-scales that were developed to measure attitudes toward bisexual men and women. Data were analyzed using descriptive statistics, gamma regression, and paired t-tests. Gender, sexual identity, age, race/ethnicity, income, and educational attainment were all significantly associated with participants' attitudes toward bisexual individuals. In terms of responses to individual scale items, participants were most likely to “neither agree nor disagree” with all attitudinal statements. Across sexual identities, self-identified other participants reported the most positive attitudes, while heterosexual male participants reported the least positive attitudes. As in previous research on convenience samples, we found a wide range of demographic characteristics were related to attitudes toward bisexual individuals in our nationally-representative study of heterosexual, gay/lesbian, and other-identified adults in the U.S. In particular, gender emerged as a significant

  4. A two-stage procedure for determining unsaturated hydraulic characteristics using a syringe pump and outflow observations

    DEFF Research Database (Denmark)

    Wildenschild, Dorthe; Jensen, Karsten Høgh; Hollenbeck, Karl-Josef

    1997-01-01

    A fast two-stage methodology for determining unsaturated flow characteristics is presented. The procedure builds on direct measurement of the retention characteristic using a syringe pump technique, combined with inverse estimation of the hydraulic conductivity characteristic based on one-step outflow experiments. The direct measurements are obtained with a commercial syringe pump, which continuously withdraws fluid from a soil sample at a very low and accurate flow rate, thus providing the water content in the soil sample. The retention curve is then established by simultaneously monitoring … The one-step outflow data and the independently measured retention data are included in the objective function of a traditional least-squares minimization routine, providing unique estimates of the unsaturated hydraulic characteristics by means of numerical inversion of the Richards equation. As opposed to what is often …

  5. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly pays any attention to this. In this paper, we show how various capacity and time constraints influence the performance of a specific two-stage system. We study the effects of several basic scheduling and sequencing rules in the presence of these constraints in order to learn the characteristics of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage …

  6. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
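    The equal-variance Gaussian signal detection model underlying such ROC analyses admits a simple closed-form sensitivity estimate from observed hit and false-alarm rates. A minimal sketch (this is the textbook d' estimator, not the paper's maximum likelihood procedure; the rates in the test below are hypothetical):

```python
from statistics import NormalDist

def dprime(hit_rate, false_alarm_rate):
    """Equal-variance Gaussian sensitivity index: d' = z(H) - z(F)."""
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Decision criterion c = -(z(H) + z(F)) / 2; 0 means unbiased."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(false_alarm_rate)) / 2
```

    For symmetric rates such as H = 0.84 and F = 0.16, d' is close to 2 and the criterion is neutral.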

  7. Two-Stage Load Shedding for Secondary Control in Hierarchical Operation of Islanded Microgrids

    DEFF Research Database (Denmark)

    Zhou, Quan; Li, Zhiyi; Wu, Qiuwei

    2018-01-01

    A two-stage load shedding scheme is presented to cope with the severe power deficit caused by microgrid islanding. Coordinated with the fast response of inverter-based distributed energy resources (DERs), load shedding at each stage and the resulting power flow redistribution are estimated. The first stage of load shedding will cease rapid frequency decline, in which the measured frequency deviation is employed to guide the load shedding level and process. Once a new steady-state is reached, the second stage is activated, which performs load shedding according to the priorities of loads …

  8. Comparison of two-stage thermophilic (68°C/55°C) anaerobic digestion with one-stage thermophilic (55°C) digestion of cattle manure

    DEFF Research Database (Denmark)

    Nielsen, H.B.; Mladenovska, Zuzana; Westermann, Peter

    2004-01-01

    A two-stage 68°C/55°C anaerobic degradation process for treatment of cattle manure was studied. In batch experiments, an increase of the specific methane yield, ranging from 24% to 56%, was obtained when cattle manure and its fractions (fibers and liquid) were pretreated at 68°C … was compared with a conventional single-stage reactor running at 55°C with a 15-day HRT. When an organic loading of 3 g volatile solids (VS) per liter per day was applied, the two-stage setup had a 6% to 8% higher specific methane yield and a 9% more effective VS removal than the conventional single-stage reactor. The 68°C reactor generated 7% to 9% of the total amount of methane of the two-stage system and maintained a volatile fatty acids (VFA) concentration of 4.0 to 4.4 g acetate per liter. Population size and activity of aceticlastic methanogens, syntrophic bacteria, and hydrolytic …

  9. Application of two-stage biofilter system for the removal of odorous compounds.

    Science.gov (United States)

    Jeong, Gwi-Taek; Park, Don-Hee; Lee, Gwang-Yeon; Cha, Jin-Myeong

    2006-01-01

    Biofiltration is a biological process considered to be one of the more successful examples of biotechnological applications to environmental engineering, and is most commonly used for the removal of odoriferous compounds. In this study, we attempted to assess the efficiency with which both single and complex odoriferous compounds could be removed, using one- or two-stage biofiltration systems. The tested single odor gases, limonene, alpha-pinene, and iso-butyl alcohol, were separately evaluated in the biofilters. Both limonene and alpha-pinene were removed with yields of 90% or more (elimination capacities, EC, of 364 g/m³/h and 321 g/m³/h, respectively) at an input concentration of 50 ppm and a retention time of 30 s. The iso-butyl alcohol was maintained at an effective removal yield of more than 90% (EC of 375 g/m³/h) at an input concentration of 100 ppm. The complex gas removal scheme was applied with inlet concentrations of 200 ppm ethanol, 70 ppm acetaldehyde, and 70 ppm toluene, with a residence time of 45 s, in one- and two-stage biofiltration systems. The removal yield of toluene was lower than that of the other gases in the one-stage biofilter. In contrast, the complex gases were sufficiently eliminated by the two-stage biofiltration system.
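    Elimination capacity and removal efficiency, the two performance metrics quoted in such biofilter studies, follow directly from the inlet/outlet concentrations, gas flow, and bed volume. A hedged sketch of the standard definitions (the example numbers are illustrative, not this study's operating data):

```python
def elimination_capacity(flow_m3_per_h, c_in_g_per_m3, c_out_g_per_m3, bed_volume_m3):
    """EC (g/m^3/h): pollutant mass removed per unit bed volume per hour."""
    return flow_m3_per_h * (c_in_g_per_m3 - c_out_g_per_m3) / bed_volume_m3

def removal_efficiency(c_in, c_out):
    """RE (%): fraction of the inlet concentration eliminated by the bed."""
    return 100.0 * (c_in - c_out) / c_in
```

    For instance, a bed of 0.03 m³ treating 10 m³/h of gas and cutting the concentration from 1.2 to 0.12 g/m³ achieves 90% removal at an EC of 360 g/m³/h.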

  10. A Concept of Two-Stage-To-Orbit Reusable Launch Vehicle

    Science.gov (United States)

    Yang, Yong; Wang, Xiaojun; Tang, Yihua

    2002-01-01

    A Reusable Launch Vehicle (RLV) has the capability of delivering a wide range of payloads to earth orbit with greater reliability, lower cost, and more flexibility and operability than any of today's launch vehicles. It is the goal of future space transportation systems. Past experience with single-stage-to-orbit (SSTO) RLVs, such as NASA's NASP project, which aimed at developing a rocket-based combined-cycle (RBCC) airplane, and X-33, which aimed at developing a rocket RLV, indicates that an SSTO RLV cannot be realized in the next few years based on state-of-the-art technologies. This paper presents a concept for an all-rocket two-stage-to-orbit (TSTO) reusable launch vehicle. The TSTO RLV comprises an orbiter and a booster stage, with the orbiter mounted on top of the booster stage. The TSTO RLV takes off vertically. At an altitude of about 50 km the booster stage separates from the orbiter, then returns and lands by parachutes and airbags, or lands horizontally by means of its own propulsion system. The orbiter continues its ascent flight and delivers the payload into LEO. After completing its orbital mission, the orbiter reenters the atmosphere, automatically flies to the ground base, and finally lands horizontally on the runway. A TSTO RLV has fewer technological difficulties and less risk than an SSTO, and may be the practical approach to the RLV in the near future.

  11. Evaluating damping elements for two-stage suspension vehicles

    Directory of Open Access Journals (Sweden)

    Ronald M. Martinod R.

    2012-01-01

    The technical state of the damping elements of a vehicle having two-stage suspension was evaluated by using numerical models based on multi-body system theory; a set of virtual tests used the eigenproblem mathematical method. A test based on experimental modal analysis (EMA) applied to a physical system served as the basis for validating the numerical models. The study focused on evaluating vehicle dynamics to determine the influence of the dampers' technical state in each suspension stage.

  12. Determination and Variation of Core Bacterial Community in a Two-Stage Full-Scale Anaerobic Reactor Treating High-Strength Pharmaceutical Wastewater.

    Science.gov (United States)

    Ma, Haijun; Ye, Lin; Hu, Haidong; Zhang, Lulu; Ding, Lili; Ren, Hongqiang

    2017-10-28

    Knowledge of the functional characteristics and temporal variation of anaerobic bacterial populations is important for better understanding of the microbial process in two-stage anaerobic reactors. However, owing to the high diversity of anaerobic bacteria, attention should be prioritized toward the frequently abundant bacteria, which were defined as core bacteria and are putatively functionally important. In this study, using MiSeq sequencing technology, a core bacterial community of 98 operational taxonomic units (OTUs) was determined in a two-stage upflow blanket filter reactor treating pharmaceutical wastewater. The core bacterial community accounted for 61.66% of the total sequences and predicted the sample location in the principal coordinates analysis scatter plot as accurately as the total bacterial OTUs did. The core bacterial communities in the first-stage (FS) and second-stage (SS) reactors were generally distinct: the FS core bacterial community was more related to a higher-level fermentation process, and the SS core bacterial community contained more microbes in syntrophic cooperation with methanogens. Moreover, the different responses of the FS and SS core bacterial communities to temperature shock and to the influent disturbance caused by solid contamination were fully investigated. Co-occurrence analysis at the Order level implied that Bacteroidales, Selenomonadales, Anaerolineales, Synergistales, and Thermotogales might play key roles in anaerobic digestion due to their high abundance and tight correlation with other microbes. These findings advance our knowledge of the core bacterial community and its temporal variability for future comparative research and improvement of two-stage anaerobic system operation.

  13. Theoretical and experimental investigations on the cooling capacity distributions at the stages in the thermally-coupled two-stage Stirling-type pulse tube cryocooler without external precooling

    Science.gov (United States)

    Tan, Jun; Dang, Haizheng

    2017-03-01

    The two-stage Stirling-type pulse tube cryocooler (SPTC) has the advantage of simultaneously providing cooling powers at two different temperatures, and the ability to distribute these cooling capacities between the stages is significant for its practical applications. In this paper, a theoretical model of the thermally-coupled two-stage SPTC without external precooling is established based on the electric circuit analogy, considering real-gas effects, and simulations of both the cooling performance and the PV power distribution between stages are conducted. The results indicate that the PV power is inversely proportional to the acoustic impedance of each stage, and that the cooling capacity distribution is determined jointly by the cold finger cooling efficiency and the PV power into each stage. Design methods for the cold fingers to achieve both the desired PV power and the desired cooling capacity distribution between the stages are summarized. The two-stage SPTC was developed and tested based on the above theoretical investigations, and the experimental results show that it can simultaneously achieve 0.69 W at 30 K and 3.1 W at 85 K with an electric input power of 330 W and a reject temperature of 300 K. Consistency between the simulated and experimental results is observed, and the theoretical investigations are thus experimentally verified.

  14. Lingual mucosal graft two-stage Bracka technique for redo hypospadias repair

    Directory of Open Access Journals (Sweden)

    Ahmed Sakr

    2017-09-01

    Conclusion: Lingual mucosa is a reliable and versatile graft material in the armamentarium of two-stage Bracka hypospadias repair with the merits of easy harvesting and minor donor-site complications.

  15. Building fast well-balanced two-stage numerical schemes for a model of two-phase flows

    Science.gov (United States)

    Thanh, Mai Duc

    2014-06-01

    We present a set of well-balanced two-stage schemes for an isentropic model of two-phase flows arising from the modeling of deflagration-to-detonation transition in granular materials. The first stage absorbs the source term in nonconservative form into equilibria. In the second stage, these equilibria are composed into a numerical flux formed by a convex combination of the numerical flux of a stable Lax-Friedrichs-type scheme and that of a higher-order Richtmyer-type scheme. Numerical schemes constructed in this way are expected to have a desirable property: they are fast and stable. Tests show that the method works for parameter values up to the CFL number, so any value of the parameter between zero and this value is expected to work as well. All the schemes in this family are shown to capture stationary waves and preserve the positivity of the volume fractions. The special parameter values 0, 1/2, 1/(1+CFL), and CFL in this family define the Lax-Friedrichs-type, FAST1, FAST2, and FAST3 schemes, respectively. These schemes are shown to give a desirable accuracy. The errors and CPU times of these schemes and of a Roe-type scheme are calculated and compared. The constructed schemes are shown to be well-balanced and faster than the Roe-type scheme.
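    The convex combination of a Lax-Friedrichs-type flux with a Richtmyer-type flux can be sketched for a scalar conservation law u_t + f(u)_x = 0; with theta = 1/2 this reduces to the classical FORCE flux. This is an illustrative sketch under those assumptions, not the authors' two-phase scheme (the function names and the periodic update are mine):

```python
def lax_friedrichs_flux(f, ul, ur, dx, dt):
    """First-order, dissipative Lax-Friedrichs-type numerical flux."""
    return 0.5 * (f(ul) + f(ur)) - 0.5 * (dx / dt) * (ur - ul)

def richtmyer_flux(f, ul, ur, dx, dt):
    """Second-order Richtmyer (two-step Lax-Wendroff) numerical flux."""
    u_half = 0.5 * (ul + ur) - 0.5 * (dt / dx) * (f(ur) - f(ul))
    return f(u_half)

def combined_flux(f, ul, ur, dx, dt, theta=0.5):
    """Convex combination; theta=0.5 is the classical FORCE flux."""
    return ((1 - theta) * lax_friedrichs_flux(f, ul, ur, dx, dt)
            + theta * richtmyer_flux(f, ul, ur, dx, dt))

def step(u, f, dx, dt, theta=0.5):
    """One conservative update on a periodic grid."""
    n = len(u)
    F = [combined_flux(f, u[i], u[(i + 1) % n], dx, dt, theta) for i in range(n)]
    return [u[i] - dt / dx * (F[i] - F[i - 1]) for i in range(n)]
```

    Because the update is in conservation form, the total mass sum(u) is preserved up to rounding on a periodic grid, which is easy to check on a step profile.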

  16. Two-stage pervaporation process for effective in situ removal acetone-butanol-ethanol from fermentation broth.

    Science.gov (United States)

    Cai, Di; Hu, Song; Miao, Qi; Chen, Changjing; Chen, Huidong; Zhang, Changwei; Li, Ping; Qin, Peiyong; Tan, Tianwei

    2017-01-01

    Two-stage pervaporation for ABE recovery from fermentation broth was studied to reduce the energy cost. The permeate from the first-stage in situ pervaporation system was further used as the feedstock of the second-stage pervaporation unit, using the same PDMS/PVDF membrane. A total of 782.5 g/L of ABE (304.56 g/L of acetone, 451.98 g/L of butanol and 25.97 g/L of ethanol) was achieved in the second-stage permeate, while the overall acetone, butanol and ethanol separation factors were 70.7-89.73, 70.48-84.74 and 9.05-13.58, respectively. Furthermore, the theoretical evaporation energy requirement for ABE separation in the consolidated fermentation, which contains the two-stage pervaporation and the following distillation process, was estimated at less than ∼13.2 MJ/kg butanol. The required evaporation energy was only 36.7% of the energy content of butanol. The novel two-stage pervaporation process was effective in increasing ABE production and reducing the energy consumption of the solvent separation system. Copyright © 2016 Elsevier Ltd. All rights reserved.
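    Separation factors of the kind quoted above are conventionally defined from component fractions in the permeate and feed. A minimal sketch, assuming mass fractions for a single solvent versus water (the example fractions are illustrative, not the paper's measurements):

```python
def separation_factor(y_solvent, x_solvent):
    """Pervaporation separation factor on mass fractions:
    alpha = (y_s / (1 - y_s)) / (x_s / (1 - x_s)),
    where y is the permeate fraction and x the feed fraction."""
    return (y_solvent / (1 - y_solvent)) / (x_solvent / (1 - x_solvent))
```

    For example, enriching a solvent from 1 wt% in the feed to 45 wt% in the permeate corresponds to a separation factor of 81.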

  17. The association of diagnosis in the private or NHS sector on prostate cancer stage and treatment.

    Science.gov (United States)

    Barbiere, J M; Greenberg, D C; Wright, K A; Brown, C H; Palmer, C; Neal, D E; Lyratzopoulos, G

    2012-03-01

    To examine associations of private healthcare with stage and management of prostate cancer. Regional population-based cancer registry information on 15 916 prostate cancer patients. Compared with patients diagnosed in the National Health Service (NHS) (94%), those diagnosed in private hospitals (5%) were significantly more affluent (69 versus 52% in deprivation quintiles 1-2), younger (mean 69 versus 73 years) and diagnosed at an earlier stage (72 versus 79% in Stages …). Private hospital of diagnosis was independently associated with a lower probability of advanced disease stage [odds ratio (OR) 0.75, P = 0.002], a higher probability of surgery use (OR 1.28, P = 0.037) and a lower probability of radiotherapy use (OR 0.75, P = 0.001). Private hospital of diagnosis independently predicted higher surgery and lower radiotherapy use, particularly in more deprived patients aged ≤ 70. In prostate cancer patients, private hospital diagnosis predicts earlier disease stage, higher use of surgery and lower use of radiotherapy, independently of case-mix differences between the two sectors. Substantial socioeconomic differences in stage and treatment patterns remain across centres in the NHS, even after adjusting for private sector diagnosis. Cancer registration data could be used to identify private care use on a population basis and the potential associated treatment disparities.

  18. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.
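    The performance metrics listed above are linked by the 2D thermodynamic limit: a line-focus concentrator with acceptance half-angle theta can reach at most C = 1/sin(theta), so the concentration-acceptance product CAP = C*sin(theta) never exceeds 1. A small sketch of these relations (the angle values in the usage note are illustrative, not the paper's designs):

```python
import math

def cmax_2d(acceptance_half_angle_deg):
    """Thermodynamic limit on geometric concentration for a 2D
    (line-focus) concentrator: C_max = 1 / sin(theta_a)."""
    return 1.0 / math.sin(math.radians(acceptance_half_angle_deg))

def cap_2d(concentration, acceptance_half_angle_deg):
    """Concentration-acceptance product CAP = C * sin(theta_a); <= 1 in 2D."""
    return concentration * math.sin(math.radians(acceptance_half_angle_deg))
```

    For the solar half-angle of about 0.27°, the 2D limit is roughly C ≈ 212; any real design's CAP quantifies how closely it approaches the limit of 1.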

  19. Fixation Probability in a Haploid-Diploid Population.

    Science.gov (United States)

    Bessho, Kazuhiro; Otto, Sarah P

    2017-01-01

    Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright-Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species. Copyright © 2017 by the Genetics Society of America.
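    For a fully haploid life cycle, the classical benchmark the authors generalize can be checked directly: a single beneficial mutant with selection coefficient s fixes with probability roughly 1 - e^(-2s) ≈ 2s for small s. A Monte Carlo sketch under a plain haploid Wright-Fisher model (an assumption for illustration; this is not the authors' haploid-diploid model):

```python
import random

def fixation_probability(N, s, reps=3000, seed=2):
    """Monte Carlo estimate of the fixation probability of a single
    beneficial mutant in a haploid Wright-Fisher population of size N."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(reps):
        i = 1                                # copies of the mutant allele
        while 0 < i < N:
            # expected mutant frequency after selection
            p = i * (1 + s) / (i * (1 + s) + (N - i))
            # binomial sampling of the next generation
            i = sum(rng.random() < p for _ in range(N))
        if i == N:
            fixed += 1
    return fixed / reps
```

    With N = 100 and s = 0.05, the estimate lands near the branching-process approximation 1 - e^(-0.1) ≈ 0.095, far above the neutral value 1/N = 0.01.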

  20. Health system delay and its effect on clinical stage of breast cancer: Multicenter study.

    Science.gov (United States)

    Unger-Saldaña, Karla; Miranda, Alfonso; Zarco-Espinosa, Gelasio; Mainero-Ratchelous, Fernando; Bargalló-Rocha, Enrique; Miguel Lázaro-León, Jesús

    2015-07-01

    The objective of this study was to determine the correlation between health system delay and clinical disease stage in patients with breast cancer. This was a cross-sectional study of 886 patients who were referred to 4 of the largest public cancer hospitals in Mexico City for the evaluation of a probable breast cancer. Data on time intervals, sociodemographic factors, and clinical stage at diagnosis were retrieved. A logistic regression model was used to estimate the average marginal effects of delay on the probability of being diagnosed with advanced breast cancer (stages III and IV). The median time between problem identification and the beginning of treatment was 7 months. The subinterval with the largest delay was that between the first medical consultation and diagnosis (median, 4 months). Only 15% of the patients who had cancer were diagnosed with stage 0 and I disease, and 48% were diagnosed with stage III and IV disease. Multivariate analyses confirmed independent correlations for the means of problem identification, patient delay, health system delay, and age with a higher probability that patients would begin cancer treatment in an advanced stage. In the sample studied, the majority of patients with breast cancer began treatment after a delay. Both patient delays and provider delays were associated with advanced disease. Research aimed at identifying specific access barriers to medical services is much needed to guide the design of tailored health policies that go beyond the promotion of breast care awareness and screening participation to include improvements in health services that facilitate access to timely diagnosis and treatment. © 2015 The Authors. Cancer published by Wiley Periodicals, Inc. on behalf of American Cancer Society.
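    The average marginal effect reported by such a logistic model is the sample-averaged difference in predicted probability of advanced-stage diagnosis with and without delay. A hedged sketch with made-up coefficients and covariate values (not the study's estimates):

```python
import math

def logistic(z):
    """Inverse logit: maps a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def average_marginal_effect(b0, b_delay, b_age, ages):
    """Average marginal effect of a binary 'delay' regressor:
    mean over the sample of P(advanced | delay=1) - P(advanced | delay=0).
    Coefficients b0, b_delay, b_age are hypothetical."""
    diffs = [logistic(b0 + b_delay + b_age * a) - logistic(b0 + b_age * a)
             for a in ages]
    return sum(diffs) / len(diffs)
```

    With a positive delay coefficient the effect is positive: delayed patients have a higher average predicted probability of advanced disease.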

  1. EOG and EMG: two important switches in automatic sleep stage classification.

    Science.gov (United States)

    Estrada, E; Nazeran, H; Barragan, J; Burk, J R; Lucas, E A; Behbehani, K

    2006-01-01

    Sleep is a natural periodic state of rest for the body, in which the eyes are usually closed and consciousness is completely or partially lost. In this investigation we used the EOG and EMG signals acquired from 10 patients undergoing overnight polysomnography, with their sleep stages determined by expert sleep specialists based on RK rules. Differentiating between the Stage 1, Awake, and REM stages challenged a well-trained neural network classifier when only EEG-derived signal features were used. To meet this challenge and improve the classification rate, extra features extracted from the EOG and EMG signals were fed to the classifier. In this study, two simple feature extraction algorithms were applied to the EOG and EMG signals. The statistics of the results were calculated and displayed in an easy-to-visualize fashion to observe tendencies for each sleep stage. Inclusion of these features shows great promise for improving the classification rate towards the target rate of 100%.

  2. Improvement of two-stage GM refrigerator performance using a hybrid regenerator

    International Nuclear Information System (INIS)

    Ke, G.; Makuuchi, H.; Hashimoto, T.; Onishi, A.; Li, R.; Satoh, T.; Kanazawa, Y.

    1994-01-01

    To improve the performance of two-stage GM refrigerators, a hybrid regenerator with the magnetic materials Er3Ni and ErNi0.9Co0.1 was used in the 2nd-stage regenerator because of its large heat exchange capacity. The largest refrigeration capacity achieved with the hybrid regenerator was 0.95 W at the liquid-helium temperature of 4.2 K. This capacity is 15.9% greater than the 0.82 W obtained with only Er3Ni as the 2nd-stage regenerator material. Use of the hybrid regenerator not only increases the refrigeration capacity at 4.2 K, but also allows the 4 K GM refrigerator to be used with a large 1st-stage refrigeration capacity, thus making it more practical.

  3. NxStage dialysis system-associated thrombocytopenia: a report of two cases.

    Science.gov (United States)

    Sekkarie, Mohamed; Waldron, Michelle; Reynolds, Texas

    2016-01-01

    Thrombocytopenia in hemodialysis patients has recently been reported to be commonly caused by electron-beam sterilization of dialysis filters. We report the occurrence of thrombocytopenia in the first two patients of a newly established home hemodialysis program. The 2 patients switched from conventional hemodialysis using polysulfone electron-beam-sterilized dialyzers to a NxStage system, which uses gamma-sterilized polyethersulfone dialyzers incorporated into a drop-in cartridge. The thrombocytopenia resolved after return to conventional dialysis in both patients and recurred upon rechallenge in the patient who opted to retry NxStage. To the authors' knowledge, this is the first report of thrombocytopenia with the NxStage system. The pathophysiology and clinical significance of dialysis-associated thrombocytopenia are not well understood and warrant additional investigation.

  4. Conditional Probability Modulates Visual Search Efficiency

    Directory of Open Access Journals (Sweden)

    Bryan eCort

    2013-10-01

    We investigated the effects of probability on visual search. Previous work has shown that people can utilize spatial and sequential probability information to improve target detection. We hypothesized that performance improvements from probability information would extend to the efficiency of visual search. Our task was a simple visual search in which the target was always present among a field of distractors and could take one of two colors. The absolute probability of the target being either color was 0.5; however, the conditional probability (the likelihood of a particular color given a particular combination of two cues) varied from 0.1 to 0.9. We found that participants searched more efficiently for high conditional probability targets and less efficiently for low conditional probability targets, but only when they were explicitly informed of the probability relationship between cues and target color.
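    A design like this, where the marginal color probability stays at 0.5 while the cue-conditional probability spans 0.1 to 0.9, can be reproduced from a small joint cue-color table. A sketch with hypothetical table values chosen to match the reported range:

```python
# Hypothetical joint distribution over (cue1, cue2, target color), built so
# the marginal P(red) is 0.5 while P(red | cue pair) ranges from 0.1 to 0.9.
cond_red = {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 0.9}
joint = {}
for cues, p_red in cond_red.items():
    joint[cues + ('red',)] = 0.25 * p_red            # cue pairs equally likely
    joint[cues + ('green',)] = 0.25 * (1.0 - p_red)

def marginal(color):
    """Absolute probability of a target color, summed over cue pairs."""
    return sum(p for key, p in joint.items() if key[2] == color)

def conditional(color, cues):
    """P(color | cue pair) = P(color, cues) / P(cues)."""
    denom = joint[cues + ('red',)] + joint[cues + ('green',)]
    return joint[cues + (color,)] / denom
```

    The symmetric choice of conditionals keeps the marginal at exactly 0.5, so any search advantage must come from the cue-conditional structure rather than base rates.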

  5. Evaluation of the effect of one stage versus two stage full mouth disinfection on C-reactive protein and leucocyte count in patients with chronic periodontitis.

    Science.gov (United States)

    Pabolu, Chandra Mohan; Mutthineni, Ramesh Babu; Chintala, Srikanth; Naheeda; Mutthineni, Navya

    2013-07-01

    Conventional non-surgical periodontal therapy is carried out on a quadrant basis with a 1-2 week interval. This time lag may result in re-infection of instrumented pockets and may impair healing. Therefore, a new approach to full-mouth non-surgical therapy, completed within two consecutive days with full-mouth disinfection (FMD), has been suggested. In periodontitis, leukocyte counts and levels of C-reactive protein (CRP) are likely to be slightly elevated, indicating the presence of infection or inflammation. The aim of this study is to compare the efficacy of one-stage and two-stage non-surgical therapy on clinical parameters along with CRP levels and total white blood cell (TWBC) count. A total of 20 patients were selected and divided into two groups. Group 1 received one-stage FMD and Group 2 received two-stage FMD. Plaque index, sulcus bleeding index, probing depth, clinical attachment loss, serum CRP and TWBC count were evaluated for both groups at baseline and at 1 month post-treatment. The results were analyzed using the Student t-test. Both treatment modalities led to a significant improvement of the clinical and hematological parameters; however, comparison between the two groups showed no significant difference after 1 month. The therapeutic intervention may have a systemic effect on blood counts in periodontitis patients. Though one-stage FMD had limited benefits over two-stage FMD, the therapy can be accomplished in a shorter duration.

  6. A Two-Stage Fuzzy Logic Control Method of Traffic Signal Based on Traffic Urgency Degree

    OpenAIRE

    Yan Ge

    2014-01-01

    City intersection traffic signal control is an important method to improve the efficiency of the road network and alleviate traffic congestion. This paper investigates a fuzzy control method for the traffic signal at a single intersection. A two-stage traffic signal control method based on traffic urgency degree is proposed, built on two-stage fuzzy inference at a single intersection. At the first stage, the traffic urgency degree is calculated for all red phases using a traffic urgency evaluation module, and select t...

  7. Joint probabilities reproducing three EPR experiments on two qubits

    NARCIS (Netherlands)

    Roy, S. M.; Atkinson, D.; Auberson, G.; Mahoux, G.; Singh, V.

    2007-01-01

    An eight-parameter family of the most general non-negative quadruple probabilities is constructed for EPR-Bohm-Aharonov experiments when only three pairs of analyser settings are used. It is a simultaneous representation of three different Bohr-incompatible experimental configurations involving

  8. Probability and stochastic modeling

    CERN Document Server

    Rotar, Vladimir I

    2012-01-01

    Basic NotionsSample Space and EventsProbabilitiesCounting TechniquesIndependence and Conditional ProbabilityIndependenceConditioningThe Borel-Cantelli TheoremDiscrete Random VariablesRandom Variables and VectorsExpected ValueVariance and Other Moments. Inequalities for DeviationsSome Basic DistributionsConvergence of Random Variables. The Law of Large NumbersConditional ExpectationGenerating Functions. Branching Processes. Random Walk RevisitedBranching Processes Generating Functions Branching Processes Revisited More on Random WalkMarkov ChainsDefinitions and Examples. Probability Distributions of Markov ChainsThe First Step Analysis. Passage TimesVariables Defined on a Markov ChainErgodicity and Stationary DistributionsA Classification of States and ErgodicityContinuous Random VariablesContinuous DistributionsSome Basic Distributions Continuous Multivariate Distributions Sums of Independent Random Variables Conditional Distributions and ExpectationsDistributions in the General Case. SimulationDistribution F...

  9. Design and construction of a heat stage for investigations of samples by atomic force microscopy above ambient temperatures

    DEFF Research Database (Denmark)

    Bækmark, Thomas Rosleff; Bjørnholm, Thomas; Mouritsen, Ole G.

    1997-01-01

    The construction from simple and cheap commercially available parts of a miniature heat stage for the direct heating of samples studied with a commercially available optical-lever-detection atomic force microscope is reported. We demonstrate that by using this heat stage, atomic resolution can be obtained on highly oriented pyrolytic graphite at 52 °C. The heat stage is of potential use for the investigation of biological material at physiological temperatures. ©1997 American Institute of Physics.

  10. Single-staged vs. two-staged implant placement using bone ring technique in vertically deficient alveolar ridges - Part 1: histomorphometric and micro-CT analysis.

    Science.gov (United States)

    Nakahara, Ken; Haga-Tsujimura, Maiko; Sawada, Kosaku; Kobayashi, Eizaburo; Mottini, Matthias; Schaller, Benoit; Saulacic, Nikola

    2016-11-01

    Simultaneous implant placement with bone grafting shortens the overall treatment period but might lead to peri-implant bone loss or even implant failure. The aim of this study was to compare single-staged to two-staged implant placement using the bone ring technique. Four standardized alveolar bone defects were made in the mandibles of nine dogs. Dental implants (Straumann BL®, Basel, Switzerland) were inserted simultaneously with the bone ring technique in the test group and after a 6-month healing period in the control group. Animals of both groups were euthanized at 3 and 6 months of the osseointegration period. The harvested samples were analyzed by means of histology and micro-CT. The amount of residual bone decreased while the amount of new bone increased up to 9 months of healing. All morphometric parameters remained stable between 3 and 6 months of osseointegration within groups. At a given time point, the median area of residual bone graft was higher in the test group and the area of new bone in the control group. The volume of the bone ring was greater in the test than in the control group, reaching significance at 6 months of osseointegration (P = 0.002). In the present type of bone defect, single-staged implant placement may be potentially useful to shorten the overall treatment period. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  11. Multisite tumor sampling enhances the detection of intratumor heterogeneity at all different temporal stages of tumor evolution.

    Science.gov (United States)

    Erramuzpe, Asier; Cortés, Jesús M; López, José I

    2018-02-01

    Intratumor heterogeneity (ITH) is an inherent feature of tumor development that has received much attention in recent years, as it has become a major obstacle to the success of targeted therapies. ITH is also temporally unpredictable across tumor evolution, which makes its precise characterization even more problematic, since detection success depends on the temporal snapshot at which ITH is analyzed. New and more efficient strategies for tumor sampling, which currently relies entirely on the pathologist's interpretation, are needed to overcome these difficulties. Recently, we showed that a new strategy, multisite tumor sampling, works better than the routine sampling protocol for ITH detection when tumor time evolution is not taken into consideration. Here, we extend this work and compare ITH detection by multisite tumor sampling and routine sampling protocols across tumor time evolution; in particular, we provide in silico analyses of both strategies at early and late temporal stages for four different models of tumor evolution (linear, branched, neutral, and punctuated). Our results indicate that multisite tumor sampling outperforms routine protocols in detecting ITH at all temporal stages of tumor evolution. We conclude that multisite tumor sampling is more advantageous than routine protocols in detecting intratumor heterogeneity.

  12. Thermodynamics analysis of a modified dual-evaporator CO2 transcritical refrigeration cycle with two-stage ejector

    International Nuclear Information System (INIS)

    Bai, Tao; Yan, Gang; Yu, Jianlin

    2015-01-01

    In this paper, a modified dual-evaporator CO2 transcritical refrigeration cycle with a two-stage ejector (MDRC) is proposed. In the MDRC, the two-stage ejector is employed to recover expansion work from the cycle's throttling processes, enhancing system performance while providing dual-temperature refrigeration. The effects of some key parameters on the thermodynamic performance of the modified cycle are theoretically investigated based on energetic and exergetic analyses. The simulation results show that the two-stage ejector improves system performance more effectively than a single ejector in a CO2 dual-temperature refrigeration cycle, and the improvements in maximum system COP (coefficient of performance) and system exergy efficiency could reach 37.61% and 31.9% over those of the conventional dual-evaporator cycle under the given operating conditions. The exergetic analysis of each component at optimum discharge pressure indicates that the gas cooler, compressor, two-stage ejector and expansion valves contribute the main portion of the total system exergy destruction, and the exergy destruction caused by the two-stage ejector could amount to 16.91% of the exergy input. The performance characteristics of the proposed cycle show its promise in dual-evaporator refrigeration systems. - Highlights: • Two-stage ejector is used in a dual-evaporator CO2 transcritical refrigeration cycle. • Energetic and exergetic methods are used to analyze the system performance. • The modified cycle provides dual-temperature refrigeration simultaneously. • The two-stage ejector could effectively improve system COP and exergy efficiency

  13. The experimental study of a two-stage photovoltaic thermal system based on solar trough concentration

    International Nuclear Information System (INIS)

    Tan, Lijun; Ji, Xu; Li, Ming; Leng, Congbin; Luo, Xi; Li, Haili

    2014-01-01

    Highlights: • A two-stage photovoltaic thermal system based on solar trough concentration. • Maximum cell efficiency of 5.21% with a mirror opening width of 57 cm. • With a single cycle, the maximum temperature rise in the heating stage is 12.06 °C. • With 30 min multiple cycles, working medium temperature 62.8 °C, increased 28.7 °C. - Abstract: A two-stage photovoltaic thermal system based on solar trough concentration is proposed, in which a metal-cavity heating stage is added after the PV/T stage so that higher-temperature thermal energy is output along with electric energy. With the 1.8 m² mirror PV/T system, the characteristic parameters of the space solar cell under non-concentrating and concentrating solar radiation were tested experimentally, as were the solar cell output characteristics at different opening widths of the concentrating mirror of the PV/T stage. When the mirror opening width was 57 cm, the solar cell efficiency reached a maximum of 5.21%. The experimental platform of the two-stage photovoltaic thermal system was established with a 1.8 m² mirror PV/T stage and either a 15 m² or a 30 m² mirror heating stage. The results showed that with a single cycle, the long metal-cavity heating stage yielded lower thermal efficiency but a higher temperature rise of the working medium, up to 12.06 °C in a single cycle. With 30 min of closed multiple cycles, the temperature of the working medium in the water tank reached 62.8 °C, an increase of 28.7 °C, and higher-temperature thermal energy could be output

  14. Stage-specific sampling by pattern recognition receptors during Candida albicans phagocytosis.

    Directory of Open Access Journals (Sweden)

    Sigrid E M Heinsbroek

    2008-11-01

    Candida albicans is a medically important pathogen, and recognition by innate immune cells is critical for its clearance. Although a number of pattern recognition receptors have been shown to be involved in recognition and phagocytosis of this fungus, the relative roles of these receptors have not been formally examined. In this paper, we have investigated the contribution of the mannose receptor (MR), Dectin-1, and complement receptor 3, and we have demonstrated that Dectin-1 is the main non-opsonic receptor involved in fungal uptake. However, both Dectin-1 and complement receptor 3 were found to accumulate at the site of uptake, while the mannose receptor accumulated on C. albicans phagosomes at later stages. These results suggest a potential role for MR in phagosome sampling; accordingly, MR deficiency led to a reduction in TNF-alpha and MCP-1 production in response to C. albicans uptake. Our data suggest that pattern recognition receptors sample the fungal phagosome in a sequential fashion.

  15. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

    This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique two-stage driver design which allows a more compact and lower-cost free piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.

  16. One-stage or two-stage revision surgery for prosthetic hip joint infection--the INFORM trial: a study protocol for a randomised controlled trial.

    Science.gov (United States)

    Strange, Simon; Whitehouse, Michael R; Beswick, Andrew D; Board, Tim; Burston, Amanda; Burston, Ben; Carroll, Fran E; Dieppe, Paul; Garfield, Kirsty; Gooberman-Hill, Rachael; Jones, Stephen; Kunutsor, Setor; Lane, Athene; Lenguerrand, Erik; MacGowan, Alasdair; Moore, Andrew; Noble, Sian; Simon, Joanne; Stockley, Ian; Taylor, Adrian H; Toms, Andrew; Webb, Jason; Whittaker, John-Paul; Wilson, Matthew; Wylde, Vikki; Blom, Ashley W

    2016-02-17

    Periprosthetic joint infection (PJI) affects approximately 1% of patients following total hip replacement (THR) and often results in severe physical and emotional suffering. Current surgical treatment options are debridement, antibiotics and implant retention; revision THR; and excision of the joint and amputation. Revision surgery can be done as either a one-stage or a two-stage operation. Both types of surgery are well-established practice in the NHS and result in similar rates of re-infection, but little is known about the impact of these treatments from the patient's perspective. The main aim of this randomised controlled trial is to determine whether there is a difference in patient-reported outcome measures 18 months after randomisation for one-stage or two-stage revision surgery. INFORM (INFection ORthopaedic Management) is an open, two-arm, multi-centre, randomised, superiority trial. We aim to randomise 148 patients with eligible PJI of the hip from approximately seven secondary care NHS orthopaedic units across England and Wales. Patients will be randomised via a web-based system to receive either a one-stage or a two-stage revision THR. Blinding is not possible due to the nature of the intervention. All patients will be followed up for 18 months. The primary outcome is the WOMAC Index, which assesses hip pain, function and stiffness, collected by questionnaire at 18 months. Secondary outcomes include the following: cost-effectiveness, complications, re-infection rates, objective hip function assessment and quality of life. A nested qualitative study will explore patients' and surgeons' experiences, including their views about trial participation and randomisation. INFORM is the first randomised trial to compare these two widely accepted surgical interventions for the treatment of PJI: one-stage and two-stage revision THR. The results of the trial will benefit patients in the future as the main focus is on patient-reported outcomes: pain, function

  17. Two-stage laparoscopic approaches for high anorectal malformation: transumbilical colostomy and anorectoplasty.

    Science.gov (United States)

    Yang, Li; Tang, Shao-Tao; Li, Shuai; Aubdoollah, T H; Cao, Guo-Qing; Lei, Hai-Yan; Wang, Xin-Xing

    2014-11-01

    Transumbilical colostomy (TUC) has previously been created in patients with Hirschsprung's disease and intermediate anorectal malformation (ARM), but not in patients with high ARM. The purposes of this study were to assess the feasibility, safety, complications and cosmetic results of TUC in a divided fashion; subsequently, stoma closure and laparoscopic-assisted anorectoplasty (LAARP) were completed simultaneously by using the colostomy site for a laparoscopic port in high-ARM patients. Twenty male patients with high ARMs were chosen for this two-stage procedure. The first stage consisted of creating the TUC as a double-barreled colostomy with a high chimney at the umbilicus, with the loop divided at the same time, in such a way that the two diverting ends were located at the umbilical incision with the distal end half closed and slightly higher than the proximal end. In the second stage, 3 to 7 months later, the stoma was closed through a peristomal skin incision followed by end-to-end anastomosis, and LAARP was simultaneously performed by placing a laparoscopic port at the umbilicus, which was previously the colostomy site. Umbilical wound closure was performed in a semi-opened fashion to create a deep umbilicus. TUC and LAARP were successfully performed in 20 patients. Four cases with bladder neck fistulas and 16 cases with prostatic urethra fistulas were found. Postoperative complications were rectal mucosal prolapse in three cases, anal stricture in two cases and wound dehiscence in one case. Neither umbilical ring narrowing, parastomal hernia nor obstructive symptoms were observed. Neither umbilical nor perineal wound infection was observed. Stoma care was easily carried out by attaching a stoma bag. Healing of umbilical wounds after the second stage was excellent. Early functional stooling outcomes were satisfactory. The umbilicus may be an alternative stoma site for double-barreled colostomy in high-ARM patients. The two-stage laparoscopic

  18. Probability in physics

    CERN Document Server

    Hemmo, Meir

    2012-01-01

    What is the role and meaning of probability in physical theory, in particular in two of the most successful theories of our age, quantum physics and statistical mechanics? Laws once conceived as universal and deterministic, such as Newton's laws of motion, or the second law of thermodynamics, are replaced in these theories by inherently probabilistic laws. This collection of essays by some of the world's foremost experts presents an in-depth analysis of the meaning of probability in contemporary physics. Among the questions addressed are: How are probabilities defined? Are they objective or subjective? What is their explanatory value? What are the differences between quantum and classical probabilities? The result is an informative and thought-provoking book for the scientifically inquisitive.

  19. Two stage approach to dynamic soil structure interaction

    International Nuclear Information System (INIS)

    Nelson, I.

    1981-01-01

    A two-stage approach is used to reduce the effective size of the soil island required to solve dynamic soil-structure interaction problems. The fictitious boundaries of the conventional soil island are chosen sufficiently far from the structure that the presence of the structure causes only a slight perturbation of the soil response near the boundaries. While the resulting finite element model of the soil-structure system can be solved, it requires a formidable computational effort. Currently, a two-stage approach is used to reduce this effort. The combined soil-structure system has many frequencies and wavelengths. For a stiff structure, the lowest frequencies are those associated with the motion of the structure as a rigid body. In the soil, these modes have the longest wavelengths and attenuate most slowly. The higher-frequency deformational modes of the structure have shorter wavelengths, and their effect attenuates more rapidly with distance from the structure. The difference in soil response between a computation with a refined structural model and one with a crude model tends towards zero within a very short distance from the structure. In the current work, the 'crude model' is a rigid structure with the same geometry and inertial properties as the refined model. Preliminary calculations indicated that a rigid structure would be a good low-frequency approximation to the actual structure, provided the structure was much stiffer than the native soil. (orig./RW)

  20. Environmental DNA (eDNA) Detection Probability Is Influenced by Seasonal Activity of Organisms.

    Science.gov (United States)

    de Souza, Lesley S; Godwin, James C; Renshaw, Mark A; Larson, Eric

    2016-01-01

    Environmental DNA (eDNA) holds great promise for conservation applications like the monitoring of invasive or imperiled species, yet this emerging technique requires ongoing testing in order to determine the contexts in which it is effective. For example, little research to date has evaluated how seasonality of organism behavior or activity may influence the detection probability of eDNA. We applied eDNA to survey for two highly imperiled species endemic to the upper Black Warrior River basin in Alabama, US: the Black Warrior Waterdog (Necturus alabamensis) and the Flattened Musk Turtle (Sternotherus depressus). Importantly, these species have contrasting patterns of seasonal activity, with N. alabamensis more active in the cool season (October-April) and S. depressus more active in the warm season (May-September). We surveyed sites historically occupied by these species across cool and warm seasons over two years with replicated eDNA water samples, which were analyzed in the laboratory using species-specific quantitative PCR (qPCR) assays. We then used occupancy estimation with detection probability modeling to evaluate both the effects of landscape attributes on organism presence and of season of sampling on the detection probability of eDNA. Importantly, we found that season strongly affected eDNA detection probability for both species, with N. alabamensis having higher eDNA detection probabilities during the cool season and S. depressus during the warm season. These results illustrate the influence of organismal behavior or activity on eDNA detection in the environment and identify an important role for basic natural history in designing eDNA monitoring programs.
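
The seasonal effect described above can be illustrated with a toy calculation (all numbers below are invented, not taken from the study): with independent replicate water samples, the cumulative probability of at least one eDNA detection is 1 - (1 - p)^n, so a seasonally lower per-sample detection probability p sharply reduces the chance of detecting a species even across several replicates.

```python
# Hypothetical illustration of seasonal eDNA detection probability.
# Numbers are invented; the study itself fit formal occupancy models.

def cumulative_detection(p_per_sample: float, n_samples: int) -> float:
    """P(at least one detection in n independent replicate samples)."""
    return 1.0 - (1.0 - p_per_sample) ** n_samples

# Assumed per-sample detection probabilities for a cool-season-active
# species: higher when the organism is active, lower off-season.
p_cool, p_warm = 0.6, 0.2

for n in (1, 3, 5):
    print(n,
          round(cumulative_detection(p_cool, n), 3),
          round(cumulative_detection(p_warm, n), 3))
```

Even five replicates in the wrong season (p = 0.2) reach only about a 67% cumulative detection probability, versus about 99% for three replicates in the active season, which is why survey timing matters as much as replication.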

  1. Probability an introduction with statistical applications

    CERN Document Server

    Kinney, John J

    2014-01-01

    Praise for the First Edition: "This is a well-written and impressively presented introduction to probability and statistics. The text throughout is highly readable, and the author makes liberal use of graphs and diagrams to clarify the theory." - The Statistician. Thoroughly updated, Probability: An Introduction with Statistical Applications, Second Edition features a comprehensive exploration of statistical data analysis as an application of probability. The new edition provides an introduction to statistics with accessible coverage of reliability, acceptance sampling, confidence intervals, h

  2. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
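
As a rough illustration of the two-stage pipeline (not the authors' implementation), the sketch below uses a simple distance-to-centroid rule in place of the one-class SVM for stage one, and a naive k-means in place of the self-organizing map for stage two; the flow features, thresholds, and data are all invented.

```python
# Toy two-stage flow-based detection sketch. Stage 1 separates anomalous
# flows from normal traffic; stage 2 groups the flagged flows into alert
# clusters. Substitutes: distance rule for one-class SVM, k-means for SOM.
import math
import random

def stage1_flag(flows, normal_centroid, radius):
    """Stage 1: flag flows whose distance from 'normal' exceeds radius."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [f for f in flows if dist(f, normal_centroid) > radius]

def stage2_cluster(flows, k, iters=20, seed=0):
    """Stage 2: group flagged flows into k alert clusters (naive k-means)."""
    rng = random.Random(seed)
    centers = rng.sample(flows, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for f in flows:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(f, centers[c])))
            groups[nearest].append(f)
        centers = [tuple(sum(vals) / len(g) for vals in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

# Invented flow features (e.g. normalized duration, bytes per packet):
flows = [(1.1, 0.9), (0.9, 1.2), (5.0, 5.0), (5.2, 4.8), (9.0, 1.0)]
flagged = stage1_flag(flows, normal_centroid=(1.0, 1.0), radius=1.0)
alerts = stage2_cluster(flagged, k=2)
print(len(flagged), [len(g) for g in alerts])
```

The design point the sketch preserves is the division of labor: stage one only answers "normal or not", so it can be trained on unlabeled traffic, while stage two organizes the anomalies so that an operator sees a handful of alert groups rather than individual flows.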

  3. Two-Way Tables: Issues at the Heart of Statistics and Probability for Students and Teachers

    Science.gov (United States)

    Watson, Jane; Callingham, Rosemary

    2014-01-01

    Some problems exist at the intersection of statistics and probability, creating a dilemma in relation to the best approach to assist student understanding. Such is the case with problems presented in two-way tables representing conditional information. The difficulty can be confounded if the context within which the problem is set is one where…
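
The conditional reasoning such problems demand can be made concrete with a small made-up two-way table: the conditional probability P(outcome | condition) is a cell count divided by its row total, which students readily conflate with the marginal P(outcome) taken over the whole table.

```python
# Made-up two-way table: rows are a condition (studied / no_study),
# columns an outcome (passed / failed). Counts are hypothetical.
table = {
    ("studied", "passed"): 40, ("studied", "failed"): 10,
    ("no_study", "passed"): 20, ("no_study", "failed"): 30,
}
total = sum(table.values())

def p(event):
    """Marginal probability of a row or column value over the whole table."""
    return sum(v for k, v in table.items() if event in k) / total

def p_given(outcome, condition):
    """Conditional probability P(outcome | condition): cell / row total."""
    row_total = sum(v for k, v in table.items() if condition in k)
    return table[(condition, outcome)] / row_total

print(p("passed"))                   # marginal: 60/100
print(p_given("passed", "studied"))  # conditional: 40/50
```

Here P(passed) = 0.6 but P(passed | studied) = 0.8; keeping the two denominators distinct (table total versus row total) is exactly the issue the article identifies.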

  4. A Novel Two-Stage Dynamic Spectrum Sharing Scheme in Cognitive Radio Networks

    Institute of Scientific and Technical Information of China (English)

    Guodong Zhang; Wei Heng; Tian Liang; Chao Meng; Jinming Hu

    2016-01-01

    In order to enhance the efficiency of spectrum utilization and reduce communication overhead in the spectrum sharing process, we propose a two-stage dynamic spectrum sharing scheme in which cooperative and noncooperative modes are analyzed in both stages. In particular, the existence and uniqueness of Nash Equilibrium (NE) strategies for the noncooperative mode are proved. In addition, a distributed iterative algorithm is proposed to obtain the optimal solutions of the scheme. Simulation studies are carried out to show the performance comparison between the two modes as well as the system revenue improvement of the proposed scheme compared with a conventional scheme without a virtual price control factor.

  5. Experimental studies of two-stage centrifugal dust concentrator

    Science.gov (United States)

    Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.

    2018-03-01

    The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and outlines the development of an engineering calculation method and laboratory investigations. For the experiments, the authors used quartz, ceramic dust and slag. Dispersion analysis of the dust particles was obtained by the sedimentation method. To build a mathematical model of the dust collection process, a central composite rotatable design of a four-factor experiment was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.

  6. Graphics for the multivariate two-sample problem

    International Nuclear Information System (INIS)

    Friedman, J.H.; Rafsky, L.C.

    1981-01-01

    Some graphical methods for comparing multivariate samples are presented. These methods are based on minimal spanning tree techniques developed for multivariate two-sample tests. The utility of these methods is illustrated through examples using both real and artificial data
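
The minimal-spanning-tree idea behind such two-sample methods can be sketched as a toy version of the Friedman-Rafsky statistic (the data below are invented): build an MST over the pooled samples and count the edges that join points from different samples; few cross-sample edges suggest the two samples come from different distributions.

```python
# Toy MST-based two-sample statistic over pooled points (invented data).
import math

def mst_edges(points):
    """Prim's algorithm; returns (i, j) index pairs of the MST edges."""
    n = len(points)
    in_tree = {0}
    edges = []
    def d(i, j):
        return math.dist(points[i], points[j])  # Euclidean, Python 3.8+
    while len(in_tree) < n:
        # Cheapest edge from the tree to any point not yet in it.
        i, j = min(((a, b) for a in in_tree
                    for b in range(n) if b not in in_tree),
                   key=lambda e: d(*e))
        in_tree.add(j)
        edges.append((i, j))
    return edges

# Two well-separated samples (made up):
sample_a = [(0.0, 0.0), (0.5, 0.1), (0.2, 0.4)]
sample_b = [(5.0, 5.0), (5.3, 4.9), (4.8, 5.2)]
pooled = sample_a + sample_b
labels = ["a"] * len(sample_a) + ["b"] * len(sample_b)

# Count MST edges whose endpoints carry different sample labels.
cross = sum(1 for i, j in mst_edges(pooled) if labels[i] != labels[j])
print(cross)
```

For these separated clusters the MST contains exactly one cross-sample edge; if the two samples were drawn from the same distribution, cross-sample edges would be roughly as common as within-sample ones.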

  7. Opposed piston linear compressor driven two-stage Stirling Cryocooler for cooling of IR sensors in space application

    Science.gov (United States)

    Bhojwani, Virendra; Inamdar, Asif; Lele, Mandar; Tendolkar, Mandar; Atrey, Milind; Bapat, Shridhar; Narayankhedkar, Kisan

    2017-04-01

    A two-stage Stirling cryocooler has been developed and tested for cooling IR sensors in space applications. The concept uses an opposed-piston linear compressor to drive the two-stage Stirling expander. The configuration used a moving coil linear motor for the compressor as well as for the expander unit. An electrical phase difference of 80 degrees was maintained between the voltage waveforms supplied to the compressor motor and the expander motor. The piston and displacer surfaces were coated with Rulon, an anti-friction material, to ensure oil-less operation of the unit. The present article discusses analysis results, features of the cryocooler and experimental tests conducted on the developed unit. The two stages of the cryo-cylinder and the expander unit were manufactured from a single piece to ensure precise alignment between the stages. Flexure bearings were used to suspend the piston and displacer about their mean positions. The objective of the work was to develop a two-stage Stirling cryocooler with cooling capacities of 2 W at 120 K and 0.5 W at 60 K for the two stages and an input power of less than 120 W. The cryocooler achieved a minimum temperature of 40.7 K at stage 2.

  8. Probabilistic interpretation of command and control signals: Bayesian updating of the probability of nuclear attack

    International Nuclear Information System (INIS)

    Pate-Cornell, M.Elisabeth; Fischbeck, Paul S.

    1995-01-01

    A warning system such as the Command, Control, Communication, and Intelligence (C3I) system for the United States nuclear forces operates on the basis of various sources of information, among which are signals from sensors. A fundamental problem in the use of such signals is that these sensors provide only imperfect information. Bayesian probability, defined as a degree of belief in the possibility of each event, is therefore a key concept in the logical treatment of the signals. However, the base of evidence for estimation of these probabilities may be small and, therefore, the results of the updating (posterior probabilities of attack) may also be uncertain. In this paper, we examine the case where uncertainties hinge upon the existence of several possible underlying hypotheses (or models), and where the decision-maker attributes a different probability of attack to each of these fundamental hypotheses. We present a two-stage Bayesian updating process: first the probabilities of the fundamental hypotheses are updated, then the probabilities of attack conditional on each hypothesis, given a positive signal from the C3I system. We illustrate the method in the discrete case where there are only two possible fundamental hypotheses, and in the case of a continuous set of hypotheses. We discuss briefly the implications of the results for decision-making. The method can be generalized to other warning systems with imperfect signals, when the prior probability of the event of interest is uncertain
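
The two-stage updating scheme can be sketched numerically for the discrete two-hypothesis case (all probabilities below are invented for illustration): given a positive signal, first update the probability of each underlying hypothesis about the sensors, then average the hypothesis-conditional posterior attack probabilities over the updated hypothesis weights.

```python
# Hedged numerical sketch of two-stage Bayesian updating; every number
# here is an assumption chosen for illustration, not from the paper.

# Priors over two hypotheses about sensor reliability:
prior_h = {"reliable": 0.7, "degraded": 0.3}
# Signal likelihoods under each hypothesis:
p_signal_attack = {"reliable": 0.99, "degraded": 0.60}    # P(signal | attack, h)
p_signal_noattack = {"reliable": 0.01, "degraded": 0.20}  # P(signal | no attack, h)
p_attack = 1e-4  # assumed prior probability of attack

# P(signal | h), marginalizing over attack / no attack:
p_signal_h = {h: p_signal_attack[h] * p_attack
                 + p_signal_noattack[h] * (1 - p_attack)
              for h in prior_h}

# Stage 1: posterior over the hypotheses given the positive signal.
norm = sum(prior_h[h] * p_signal_h[h] for h in prior_h)
post_h = {h: prior_h[h] * p_signal_h[h] / norm for h in prior_h}

# Stage 2: attack posterior conditional on each hypothesis, then
# averaged over the updated hypothesis probabilities.
post_attack_h = {h: p_signal_attack[h] * p_attack / p_signal_h[h]
                 for h in prior_h}
post_attack = sum(post_h[h] * post_attack_h[h] for h in prior_h)
print(round(post_attack, 4))
```

With these assumed numbers, the positive signal shifts weight toward the "degraded sensor" hypothesis (false alarms are more likely under it), which keeps the overall posterior probability of attack small despite the alarm.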

  9. Finding needles in a haystack: a methodology for identifying and sampling community-based youth smoking cessation programs.

    Science.gov (United States)

    Emery, Sherry; Lee, Jungwha; Curry, Susan J; Johnson, Tim; Sporer, Amy K; Mermelstein, Robin; Flay, Brian; Warnecke, Richard

    2010-02-01

    Surveys of community-based programs are difficult to conduct when there is virtually no information about the number or locations of the programs of interest. This article describes the methodology used by the Helping Young Smokers Quit (HYSQ) initiative to identify and profile community-based youth smoking cessation programs in the absence of a defined sample frame. We developed a two-stage sampling design, with counties as the first-stage probability sampling units. The second stage used snowball sampling to saturation, to identify individuals who administered youth smoking cessation programs across three economic sectors in each county. Multivariate analyses modeled the relationship between program screening, eligibility, and response rates and economic sector and stratification criteria. Cumulative logit models analyzed the relationship between the number of contacts in a county and the number of programs screened, eligible, or profiled in a county. The snowball process yielded 9,983 unique and traceable contacts. Urban and high-income counties yielded significantly more screened program administrators; urban counties produced significantly more eligible programs, but there was no significant association between the county characteristics and program response rate. There is a positive relationship between the number of informants initially located and the number of programs screened, eligible, and profiled in a county. Our strategy to identify youth tobacco cessation programs could be used to create a sample frame for other nonprofit organizations that are difficult to identify due to a lack of existing directories, lists, or other traditional sample frames.

  10. Two Stage Fuzzy Methodology to Evaluate the Credit Risks of Investment Projects

    OpenAIRE

    O. Badagadze; G. Sirbiladze; I. Khutsishvili

    2014-01-01

    The work proposes a decision support methodology for the credit risk minimization in selection of investment projects. The methodology provides two stages of projects’ evaluation. Preliminary selection of projects with minor credit risks is made using the Expertons Method. The second stage makes ranking of chosen projects using the Possibilistic Discrimination Analysis Method. The latter is a new modification of a well-known Method of Fuzzy Discrimination Analysis.

  11. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Martínez-Jerónimo, Fernando; Flores-Ortíz, Cesar Mateo

    2017-11-20

    A biomass production process comprising two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalga Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, experiments using shake-flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea+vitamins (U+V) at 30 °C. The first stage of the TSHP process was done in a 6 L bioreactor, and the inductions in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved the maximal biomass concentration, increasing from 7.22 g/L to 17.98 g/L with an increase in initial glucose concentration from 10.6 g/L to 30.3 g/L. However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Yx/s), possibly due to substrate inhibition. After 24 h of photoinduction, the lutein content of S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.

  12. One-stage (Warsaw) and two-stage (Oslo) repair of unilateral cleft lip and palate: Craniofacial outcomes.

    Science.gov (United States)

    Fudalej, Piotr Stanislaw; Wegrodzka, Ewa; Semb, Gunvor; Hortis-Dzierzbicka, Maria

    2015-09-01

    The aim of this study was to compare facial development in subjects with complete unilateral cleft lip and palate (CUCLP) treated with two different surgical protocols. Lateral cephalometric radiographs of 61 patients (42 boys, 19 girls; mean age, 10.9 years; SD, 1) treated consecutively in Warsaw with one-stage repair and 61 age-matched and sex-matched patients treated in Oslo with two-stage surgery were selected to evaluate craniofacial morphology. On each radiograph 13 angular and two ratio variables were measured in order to describe hard and soft tissues of the facial region. The analysis showed that differences between the groups were limited to hard tissues – the maxillary prominence in subjects from the Warsaw group was decreased by almost 4° in comparison with the Oslo group (sella-nasion-A-point (SNA) = 75.3° and 79.1°, respectively) and maxillo-mandibular morphology was less favorable in the Warsaw group than the Oslo group (ANB angle = 0.8° and 2.8°, respectively). The soft tissue contour was comparable in both groups. In conclusion, inter-group differences suggest a more favorable outcome in the Oslo group. However, the distinctiveness of facial morphology in background populations (ie, in Poles and Norwegians) could have contributed to the observed results. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  13. A two-stage cognitive theory of the positive symptoms of psychosis. Highlighting the role of lowered decision thresholds.

    Science.gov (United States)

    Moritz, Steffen; Pfuhl, Gerit; Lüdtke, Thies; Menon, Mahesh; Balzan, Ryan P; Andreou, Christina

    2017-09-01

    We outline a two-stage heuristic account for the pathogenesis of the positive symptoms of psychosis. A narrative review on the empirical evidence of the liberal acceptance (LA) account of positive symptoms is presented. At the heart of our theory is the idea that psychosis is characterized by a lowered decision threshold, which results in the premature acceptance of hypotheses that a nonpsychotic individual would reject. Once the hypothesis is judged as valid, counterevidence is not sought anymore due to a bias against disconfirmatory evidence as well as confirmation biases, consolidating the false hypothesis. As a result of LA, confidence in errors is enhanced relative to controls. Subjective probabilities are initially low for hypotheses in individuals with delusions, and delusional ideas at stage 1 (belief formation) are often fragile. In the course of the second stage (belief maintenance), fleeting delusional ideas evolve into fixed false beliefs, particularly if the delusional idea is congruent with the emotional state and provides "meaning". LA may also contribute to hallucinations through a misattribution of (partially) normal sensory phenomena. Interventions such as metacognitive training that aim to "plant the seeds of doubt" decrease positive symptoms by encouraging individuals to seek more information and to attenuate confidence. The effect of antipsychotic medication is explained by its doubt-inducing properties. The model needs to be confirmed by longitudinal designs that allow an examination of causal relationships. Evidence is currently weak for hallucinations. The theory may account for positive symptoms in a subgroup of patients. Future directions are outlined. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  14. A two-stage metal valorisation process from electric arc furnace dust (EAFD)

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    Full Text Available This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD with other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistant furnace, removal of lead is enabled by the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is conducted. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

  15. A Two-stage DC-DC Converter for the Fuel Cell-Supercapacitor Hybrid System

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2009-01-01

    A wide input range multi-stage converter is proposed with fuel cells and supercapacitors as a hybrid system. The front-end two-phase boost converter is used to optimize the output power and to reduce the current ripple of the fuel cells. The supercapacitor power module is connected by a push-pull-forward half bridge (PPFHB) converter with coupled inductors in the second stage to handle the slow transient response of the fuel cells and to realize bidirectional power flow control. Moreover, this cascaded structure simplifies the power management. The control strategy for the whole system is analyzed and designed. A 1 kW prototype controlled by a TMS320F2808 DSP is built in the lab. Simulation and experimental results confirm the feasibility of the proposed two-stage dc-dc converter system.

  16. Device for sampling HTGR recycle fuel particles

    International Nuclear Information System (INIS)

    Suchomel, R.R.; Lackey, W.J.

    1977-03-01

    Devices for sampling High-Temperature Gas-Cooled Reactor fuel microspheres were evaluated. Analyses of samples obtained with each of two specially designed passive samplers were compared with data generated by more common techniques. A ten-stage two-way sampler was found to produce a representative sample with a constant batch-to-sample ratio

  17. Numerical simulation of brain tumor growth model using two-stage ...

    African Journals Online (AJOL)

    In recent years, the study of glioma growth has been an active field of research. Mathematical models that describe the proliferation and diffusion properties of the growth have been developed by many researchers. In this work, the performance analysis of the two-stage Gauss-Seidel (TSGS) method to solve the glioma growth ...

  18. A novel two-stage stochastic programming model for uncertainty characterization in short-term optimal strategy for a distribution company

    International Nuclear Information System (INIS)

    Ahmadi, Abdollah; Charwand, Mansour; Siano, Pierluigi; Nezhad, Ali Esmaeel; Sarno, Debora; Gitizadeh, Mohsen; Raeisi, Fatima

    2016-01-01

    In order to supply the demands of end users in a competitive market, a distribution company purchases energy from the wholesale market; distributed generation units and interruptible loads offer additional options when the company possesses them. In this regard, this study presents a two-stage stochastic programming model for a distribution company's energy acquisition, to manage the involvement of different electric energy resources characterized by uncertainties at minimum cost. In particular, the distribution company's operations planning over a day-ahead horizon is modeled as a stochastic mathematical optimization with the objective of minimizing costs. By this, the distribution company's decisions on grid purchases, owned distributed generation units, and interruptible load scheduling are determined. These decisions are then treated as boundary constraints in a second step, which deals with the distribution company's operations in the hour-ahead market with the objective of minimizing the short-term cost. The uncertainties in spot market prices and wind speed are modeled by means of probability distribution functions of their forecast errors, and the roulette wheel mechanism and lattice Monte Carlo simulation are used to generate scenarios. Numerical results show the capability of the proposed method. - Highlights: • Proposing a new stochastic-based two-stage operations framework in retail competitive markets. • Proposing a Mixed Integer Non-Linear stochastic programming model. • Employing the roulette wheel mechanism and lattice Monte Carlo simulation.
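    The scenario-generation step described in this record (sampling forecast-error distributions via a roulette-wheel mechanism) can be sketched as follows. The price forecast, error bins, and probabilities below are illustrative assumptions, not values from the paper:

    ```python
    import random

    def roulette_wheel(bins, probs, rng=random.Random(42)):
        """Pick one forecast-error bin with probability proportional to probs."""
        r = rng.random()
        cumulative = 0.0
        for value, p in zip(bins, probs):
            cumulative += p
            if r <= cumulative:
                return value
        return bins[-1]  # guard against floating-point round-off

    def generate_scenarios(forecast, bins, probs, n_scenarios, rng=random.Random(42)):
        """Each scenario is the point forecast plus a sampled forecast error."""
        return [forecast + roulette_wheel(bins, probs, rng) for _ in range(n_scenarios)]

    # Illustrative spot-price forecast and discretized forecast-error bins
    price_scenarios = generate_scenarios(
        forecast=50.0,
        bins=[-10.0, -5.0, 0.0, 5.0, 10.0],
        probs=[0.1, 0.2, 0.4, 0.2, 0.1],
        n_scenarios=1000,
    )
    print(min(price_scenarios), max(price_scenarios))
    ```

    In the paper's setting such scenarios would feed the second (hour-ahead) stage of the stochastic program; here they simply illustrate the sampling mechanism.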

  19. Approximate solutions of the two-dimensional integral transport equation by collision probability methods

    International Nuclear Information System (INIS)

    Sanchez, Richard

    1977-01-01

    A set of approximate solutions for the isotropic two-dimensional neutron transport problem has been developed using the Interface Current formalism. The method has been applied to regular lattices of rectangular cells containing a fuel pin, cladding and water, or homogenized structural material. The cells are divided into zones which are homogeneous. A zone-wise flux expansion is used to formulate a direct collision probability problem within a cell. The coupling of the cells is made by making extra assumptions on the currents entering and leaving the interfaces. Two codes have been written: the first uses a cylindrical cell model and one or three terms for the flux expansion; the second uses a two-dimensional flux representation and does a truly two-dimensional calculation inside each cell. In both codes, one or three terms can be used to make a space-independent expansion of the angular fluxes entering and leaving each side of the cell. The accuracies and computing times achieved with the different approximations are illustrated by numerical studies on two benchmark problems.

  20. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes as a result of minimizing the transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application for linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form: they may contain one or more objective functions, one or more stages of transport, or one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for one of the mill-stones companies. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of suppliers, warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of the methods for treating transportation problems, the theory of duality of linear programming, and the methods of solving bi-criteria linear programming problems.
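    The single-objective core of a two-stage transportation problem (sources → warehouses → destinations, with supply, warehouse-capacity, and demand constraints) can be written as one linear program. The sketch below uses SciPy's `linprog` as a stand-in for the paper's specialized bi-criteria algorithm; the cost matrices and capacities are illustrative assumptions, not the case-study data:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Illustrative data: 2 sources, 2 warehouses, 2 destinations
    supply = np.array([30.0, 40.0])          # source capacities
    wh_cap = np.array([50.0, 35.0])          # warehouse capacities
    demand = np.array([25.0, 35.0])          # destination requirements
    c1 = np.array([[4.0, 6.0], [5.0, 3.0]])  # cost source i -> warehouse j
    c2 = np.array([[2.0, 7.0], [4.0, 2.0]])  # cost warehouse j -> destination k

    # Decision vector: x[i,j] (4 vars) then y[j,k] (4 vars), all >= 0
    c = np.concatenate([c1.ravel(), c2.ravel()])

    A_ub, b_ub = [], []
    for i in range(2):                       # shipments out of source i <= supply[i]
        row = np.zeros(8); row[2*i:2*i+2] = 1
        A_ub.append(row); b_ub.append(supply[i])
    for j in range(2):                       # inflow to warehouse j <= wh_cap[j]
        row = np.zeros(8); row[[j, 2+j]] = 1
        A_ub.append(row); b_ub.append(wh_cap[j])

    A_eq, b_eq = [], []
    for j in range(2):                       # flow balance at warehouse j
        row = np.zeros(8); row[[j, 2+j]] = 1; row[4+2*j:4+2*j+2] = -1
        A_eq.append(row); b_eq.append(0.0)
    for k in range(2):                       # destination k receives its demand
        row = np.zeros(8); row[[4+k, 6+k]] = 1
        A_eq.append(row); b_eq.append(demand[k])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
    print(res.fun)  # minimum total transportation cost
    ```

    The bi-criteria version of the paper would carry a second cost vector and enumerate non-dominated extreme points; this sketch solves only one objective.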

  1. Two-stage single-volume exchange transfusion in severe hemolytic disease of the newborn.

    Science.gov (United States)

    Abbas, Wael; Attia, Nayera I; Hassanein, Sahar M A

    2012-07-01

    Evaluation of two-stage single-volume exchange transfusion (TSSV-ET) in decreasing the post-exchange rebound increase in serum bilirubin level, with subsequent reduction of the need for repeated exchange transfusions. The study included 104 neonates with hyperbilirubinemia needing exchange transfusion. They were randomly enrolled into two equal groups, each group comprised 52 neonates. TSSV-ET was performed for the 52 neonates and the traditional single-stage double-volume exchange transfusion (SSDV-ET) was performed to 52 neonates. TSSV-ET significantly lowered rebound serum bilirubin level (12.7 ± 1.1 mg/dL), compared to SSDV-ET (17.3 ± 1.7 mg/dL), p < 0.001. Need for repeated exchange transfusions was significantly lower in TSSV-ET group (13.5%), compared to 32.7% in SSDV-ET group, p < 0.05. No significant difference was found between the two groups as regards the morbidity (11.5% and 9.6%, respectively) and the mortality (1.9% for both groups). Two-stage single-volume exchange transfusion proved to be more effective in reducing rebound serum bilirubin level post-exchange and in decreasing the need for repeated exchange transfusions.

  2. Probability Sampling - A Guideline for Quantitative Health Care ...

    African Journals Online (AJOL)

    A more direct definition is the method used for selecting a given ... description of the chosen population, the sampling procedure giving ... target population, precision, and stratification. The ... survey estimates, it is recommended that researchers first analyze a .... The optimum sample size has a relation to the type of planned ...

  3. Insights into cadmium diffusion mechanisms in two-stage diffusion profiles in solar-grade Cu(In,Ga)Se2 thin films

    International Nuclear Information System (INIS)

    Biderman, N. J.; Sundaramoorthy, R.; Haldar, Pradeep; Novak, Steven W.; Lloyd, J. R.

    2015-01-01

    Cadmium diffusion experiments were performed on polished copper indium gallium diselenide (Cu(In,Ga)Se2 or CIGS) samples, with the resulting cadmium diffusion profiles measured by time-of-flight secondary ion mass spectroscopy. Experiments done in the annealing temperature range between 275 °C and 425 °C reveal two-stage cadmium diffusion profiles which may be indicative of multiple diffusion mechanisms. Each stage can be described by the standard solutions of Fick's second law. The slower cadmium diffusion in the first stage can be described by the Arrhenius equation D1 = 3 × 10^−4 exp(−1.53 eV/kBT) cm^2 s^−1, possibly representing vacancy-mediated diffusion. The faster second-stage diffusion coefficients determined in these experiments match the previously reported cadmium diffusion Arrhenius equation D2 = 4.8 × 10^−4 exp(−1.04 eV/kBT) cm^2 s^−1, suggesting an interstitial-based mechanism
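    Plugging the two reported Arrhenius expressions into code shows how much faster the second-stage diffusion is at a given anneal temperature; the 350 °C evaluation point below is an arbitrary choice within the reported 275–425 °C range:

    ```python
    import math

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    def arrhenius(d0_cm2_s, ea_ev, temp_c):
        """Diffusion coefficient D = D0 * exp(-Ea / (kB * T)) in cm^2/s."""
        t_kelvin = temp_c + 273.15
        return d0_cm2_s * math.exp(-ea_ev / (K_B * t_kelvin))

    # Parameters as reported in the record for the two diffusion stages
    d_slow = arrhenius(3.0e-4, 1.53, 350.0)   # first stage (vacancy-mediated)
    d_fast = arrhenius(4.8e-4, 1.04, 350.0)   # second stage (interstitial-based)
    print(d_slow, d_fast, d_fast / d_slow)
    ```

    At this temperature the second-stage coefficient is several orders of magnitude larger, consistent with the record's description of a slower first stage and a faster second stage.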

  4. Two-sample discrimination of Poisson means

    Science.gov (United States)

    Lampton, M.

    1994-01-01

    This paper presents a statistical test for detecting significant differences between two random count accumulations. The null hypothesis is that the two samples share a common random arrival process with a mean count proportional to each sample's exposure. The model represents the partition of N total events into two counts, A and B, as a sequence of N independent Bernoulli trials whose partition fraction, f, is determined by the ratio of the exposures of A and B. The detection of a significant difference is claimed when the background (null) hypothesis is rejected, which occurs when the observed sample falls in a critical region of (A, B) space. The critical region depends on f and the desired significance level, alpha. The model correctly takes into account the fluctuations in both the signals and the background data, including the important case of small numbers of counts in the signal, the background, or both. The significance can be exactly determined from the cumulative binomial distribution, which in turn can be inverted to determine the critical A(B) or B(A) contour. This paper gives efficient implementations of these tests, based on lookup tables. Applications include the detection of clustering of astronomical objects, the detection of faint emission or absorption lines in photon-limited spectroscopy, the detection of faint emitters or absorbers in photon-limited imaging, and dosimetry.
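    The partition test described in this record reduces to an exact binomial tail computation. A minimal stdlib sketch follows, with illustrative counts and exposures; the doubled-smaller-tail convention for the two-sided p-value is one common choice, not necessarily the paper's exact implementation:

    ```python
    from math import comb

    def binom_sf(k, n, f):
        """P(X >= k) for X ~ Binomial(n, f), computed exactly."""
        return sum(comb(n, i) * f**i * (1 - f) ** (n - i) for i in range(k, n + 1))

    def poisson_two_sample_pvalue(count_a, count_b, exposure_a, exposure_b):
        """Exact conditional test that counts A and B share one Poisson rate.

        Under the null, A | (A + B = N) is Binomial(N, f) with partition
        fraction f = exposure_a / (exposure_a + exposure_b); the p-value is
        the doubled smaller tail, capped at 1.
        """
        f = exposure_a / (exposure_a + exposure_b)
        n = count_a + count_b
        upper = binom_sf(count_a, n, f)            # P(X >= A)
        lower = 1.0 - binom_sf(count_a + 1, n, f)  # P(X <= A)
        return min(1.0, 2.0 * min(upper, lower))

    # Source region: 40 counts in 100 s exposure; background region: 15 counts in 100 s
    print(poisson_two_sample_pvalue(40, 15, 100.0, 100.0))
    ```

    With equal exposures the partition fraction is 0.5, so 40 vs. 15 counts is a clearly significant imbalance, while equal counts return a p-value of 1.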

  5. Jenis Sample: Keuntungan dan Kerugiannya (Sample Types: Their Advantages and Disadvantages)

    OpenAIRE

    Suprapto, Agus

    1994-01-01

    A sample is a part of a population that is used in a study for the purpose of making estimates about the nature of the total population, and it is obtained with a sampling technique. Sampling is more advantageous than a census because it can reduce cost and time, and it can gather deeper information and more accurate data. It is useful to distinguish two major types of sampling techniques. First, probability sampling, e.g. simple random sampling. Second, non-probability sampling, e.g. systematic sampling...

  6. Uncertainty relation and probability. Numerical illustration

    International Nuclear Information System (INIS)

    Fujikawa, Kazuo; Umetsu, Koichiro

    2011-01-01

    The uncertainty relation and the probability interpretation of quantum mechanics are intrinsically connected, as is evidenced by the evaluation of standard deviations. It is thus natural to ask if one can associate a very small uncertainty product of suitably sampled events with a very small probability. We have shown elsewhere that some examples of the evasion of the uncertainty relation noted in the past are in fact understood in this way. We here numerically illustrate that a very small uncertainty product is realized if one performs a suitable sampling of measured data that occur with a very small probability. We introduce a notion of cyclic measurements. It is also shown that our analysis is consistent with the Landau-Pollak-type uncertainty relation. It is suggested that the present analysis may help reconcile the contradicting views about the 'standard quantum limit' in the detection of gravitational waves. (author)

  7. Two-stage heterotopic urethroplasty with usage of groin flap. Case report

    Directory of Open Access Journals (Sweden)

    R. T. Adamyan

    2014-11-01

    Full Text Available The article is devoted to the treatment, by means of plastic surgery, of urogenital problems arising as a consequence of iatrogenic injury. It provides a case report of the two-stage treatment of a patient with complete loss of part of the urethra and of the bladder neck due to iatrogenic injury. The first stage of surgical treatment is the formation of an artificial urethra from a rotated groin flap with an axial blood supply. The second stage connects the native urethra with the artificial one, with a heterotopic location of the lower urinary tract.

  8. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  9. Measuring public opinion on alcohol policy: a factor analytic study of a US probability sample.

    Science.gov (United States)

    Latimer, William W; Harwood, Eileen M; Newcomb, Michael D; Wagenaar, Alexander C

    2003-03-01

    Public opinion has been one factor affecting change in policies designed to reduce underage alcohol use. Extant research, however, has been criticized for using single survey items of unknown reliability to define adult attitudes on alcohol policy issues. The present investigation addresses a critical gap in the literature by deriving scales on public attitudes, knowledge, and concerns pertinent to alcohol policies designed to reduce underage drinking using a US probability sample survey of 7021 adults. Five attitudinal scales were derived from exploratory and confirmatory factor analyses addressing policies to: (1) regulate alcohol marketing, (2) regulate alcohol consumption in public places, (3) regulate alcohol distribution, (4) increase alcohol taxes, and (5) regulate youth access. The scales exhibited acceptable psychometric properties and were largely consistent with a rational framework which guided the survey construction.

  10. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Science.gov (United States)

    Bril, Aleksander; Kalinina, Olga; Levina, Anastasia

    2018-03-01

    The paper is devoted to the current and debatable problem of methodology of choosing the effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings. This proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering systems elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies. The most attractive for development is the use of venture financing of small innovative business. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects allows creating an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering systems elements for the construction business.

  11. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Directory of Open Access Journals (Sweden)

    Bril Aleksander

    2018-01-01

    Full Text Available The paper is devoted to the current and debatable problem of methodology of choosing the effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings. This proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering systems elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies. The most attractive for development is the use of venture financing of small innovative business. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects allows creating an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering systems elements for the construction business.

  12. A Risk-Based Interval Two-Stage Programming Model for Agricultural System Management under Uncertainty

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2016-01-01

    Full Text Available Nonpoint source (NPS) pollution caused by agricultural activities is a main reason that water quality in a watershed worsens, even to the point of deterioration. Moreover, pollution control is accompanied by a fall in revenue for the agricultural system. How to design and generate a cost-effective and environmentally friendly agricultural production pattern is a critical issue for local managers. In this study, a risk-based interval two-stage programming model (RBITSP) was developed. Compared to the general ITSP model, the significant contribution of the RBITSP model is that it emphasizes the importance of financial risk under various probabilistic levels, rather than concentrating only on expected economic benefit; here risk is expressed as the probability of not meeting a target profit under each individual scenario realization. This effectively avoids the inaccuracy of solutions caused by a traditional expected-value objective function and generates a variety of solutions through the adjustment of weight coefficients, reflecting the trade-off between system economy and reliability. A case study of agricultural production management in the Tai Lake watershed was used to demonstrate the superiority of the proposed model. The obtained results could be a basis for designing land-structure adjustment patterns and farmland retirement schemes and for balancing system benefit, system-failure risk, and water-body protection.

  13. Gambling problems in the family – A stratified probability sample study of prevalence and reported consequences

    Directory of Open Access Journals (Sweden)

    Øren Anita

    2008-12-01

    Full Text Available Abstract Background Prior studies on the impact of problem gambling in the family mainly include help-seeking populations with small numbers of participants. The objective of the present stratified probability sample study was to explore the epidemiology of problem gambling in the family in the general population. Methods Men and women 16–74 years old, randomly selected from the Norwegian national population database, received an invitation to participate in this postal questionnaire study. The response rate was 36.1% (3,483/9,638). Given the lack of validated criteria, two survey questions ("Have you ever noticed that a close relative spent more and more money on gambling?" and "Have you ever experienced that a close relative lied to you about how much he/she gambles?") were extrapolated from the Lie/Bet Screen for pathological gambling. Respondents answering "yes" to both questions were defined as Concerned Significant Others (CSOs). Results Overall, 2.0% of the study population was defined as CSOs. Young age, female gender, and divorced marital status were factors positively associated with being a CSO. CSOs often reported having experienced conflicts in the family related to gambling, worsening of the family's financial situation, and impaired mental and physical health. Conclusion Problematic gambling behaviour not only affects the gambling individual but also has a strong impact on the quality of life of family members.
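    A prevalence estimate like the 2.0% CSO figure is usually reported with a confidence interval. A stdlib sketch using the Wilson score interval follows; the count of 70 positives is back-calculated from 2.0% of the 3,483 respondents and is therefore approximate:

    ```python
    import math

    def wilson_ci(successes, n, z=1.96):
        """Wilson score 95% confidence interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    # 2.0% CSOs among 3,483 respondents corresponds to roughly 70 positives
    lo, hi = wilson_ci(70, 3483)
    print(lo, hi)
    ```

    The interval is asymmetric around 2.0%, as expected for a proportion this close to zero.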

  14. Quick pace of property acquisitions requires two-stage evaluations

    International Nuclear Information System (INIS)

    Hollo, R.; Lockwood, S.

    1994-01-01

    The traditional method of evaluating oil and gas reserves may be too cumbersome for the quick pace of oil and gas property acquisition. An acquisition evaluator must decide quickly if a property meets basic purchase criteria. The current business climate requires a two-stage approach. First, the evaluator makes a quick assessment of the property and submits a bid. If the bid is accepted, the evaluator goes on with a detailed analysis, which represents the second stage. Acquisition of producing properties has become an important activity for many independent oil and gas producers, who must be able to evaluate reserves quickly enough to make effective business decisions yet accurately enough to avoid costly mistakes. Independents thus must be familiar with how transactions usually progress as well as with the basic methods of property evaluation. The paper discusses acquisition activity, the initial offer, the final offer, property evaluation, and fair market value.

  15. Influence of one- or two-stage methods for polymerizing complete dentures on adaptation and teeth movements

    Directory of Open Access Journals (Sweden)

    Moises NOGUEIRA

    Full Text Available Abstract Introduction The quality of complete dentures might be influenced by the method of fabrication. Objective To evaluate the influence of two different methods of processing muco-supported complete dentures on their adaptation and teeth movements. Material and method Dentures were assigned to two groups (n=10) for upper and lower arches according to polymerization method: (1) conventional one-stage: a wax trial base was made, teeth were arranged, and the denture was polymerized; (2) two-stage method: the base was waxed and first polymerized; with the denture base polymerized, the teeth were arranged and the final polymerization was performed. Teeth movements were evaluated as the distances between incisors (I-I), premolars (P-P), molars (M-M), left incisor to left molar (LI-LM), and right incisor to right molar (RI-RM). For the adaptation analysis, dentures were cut at three different positions: (A) distal face of canines, (B) mesial face of the first molars, and (C) distal face of second molars. Result Denture bases showed significantly better adaptation when polymerized by the one-stage procedure for both the upper (p=0.000) and the lower (p=0.000) arches, with region A presenting significantly better adaptation than region C. In the upper arch, a significant reduction in the I-I distance was observed with the one-stage technique, while the two-stage technique promoted a significant reduction in the RI-RM distance. In the lower arch, the one-stage technique promoted a significant reduction in the RI-RM distance and the two-stage technique in the LI-LM distance. Conclusion The conventional one-stage method presented the better results for denture adaptation. Both fabrication methods presented some alteration in teeth movements.

  16. Enhancing the hydrolysis process of a two-stage biogas technology for the organic fraction of municipal solid waste

    DEFF Research Database (Denmark)

    Nasir, Zeeshan; Uellendahl, Hinrich

    2015-01-01

    The Danish company Solum A/S has developed a two-stage dry anaerobic digestion process labelled AIKAN® for the biological conversion of the organic fraction of municipal solid waste (OFMSW) into biogas and compost. In the AIKAN® process design, the methanogenic (2nd) stage is separated from the hydrolytic (1st) stage, which enables pump-free feeding of the waste into the 1st stage (processing module) and eliminates the risk of blocking pumps and pipes, since only the percolate from the 1st stage is pumped into the 2nd stage (biogas reactor tank). The biogas yield of the AIKAN® two-stage process, however, has shown to be only about 60% of the theoretical maximum. Previous monitoring of the hydrolytic and methanogenic activity in the two stages of the process revealed that the bottleneck of the whole degradation process is found in the hydrolytic first stage rather than in the methanogenic second stage.

  17. Two-stage combustion for reducing pollutant emissions from gas turbine combustors

    Science.gov (United States)

    Clayton, R. M.; Lewis, D. H.

    1981-01-01

    Combustion and emission results are presented for a premix combustor fueled with admixtures of JP5 with neat H2 and of JP5 with simulated partial-oxidation product gas. The combustor was operated with inlet-air state conditions typical of cruise power for high performance aviation engines. Ultralow NOx, CO and HC emissions and extended lean burning limits were achieved simultaneously. Laboratory scale studies of the non-catalyzed rich-burning characteristics of several paraffin-series hydrocarbon fuels and of JP5 showed sooting limits at equivalence ratios of about 2.0 and that in order to achieve very rich sootless burning it is necessary to premix the reactants thoroughly and to use high levels of air preheat. The application of two-stage combustion for the reduction of fuel NOx was reviewed. An experimental combustor designed and constructed for two-stage combustion experiments is described.

  18. A cause and effect two-stage BSC-DEA method for measuring the relative efficiency of organizations

    Directory of Open Access Journals (Sweden)

    Seyed Esmaeel Najafi

    2011-01-01

    Full Text Available This paper presents an integration of the balanced scorecard (BSC) with two-stage data envelopment analysis (DEA). The proposed model uses different financial and non-financial perspectives to evaluate the performance of decision-making units in different BSC stages. At each stage, a two-stage DEA method is implemented to measure the relative efficiency of the decision-making units, and the results are monitored using the cause-and-effect relationships. An empirical study for a banking sector is also performed using the proposed method, and the results are briefly analyzed.

  19. Associations between problematic gaming and psychiatric symptoms among adolescents in two samples.

    Science.gov (United States)

    Vadlin, Sofia; Åslund, Cecilia; Hellström, Charlotta; Nilsson, Kent W

    2016-10-01

    The aim of the present study was to investigate associations between problematic gaming and psychiatric symptoms among adolescents. Data from adolescents in the SALVe cohort, including adolescents in Västmanland who were born in 1997 and 1999 (N=1868; 1034 girls), and data from consecutive adolescent psychiatric outpatients in Västmanland (N=242; 169 girls) were analyzed. Adolescents self-rated on the Gaming Addiction Identification Test (GAIT), Adult ADHD Self-Report Scale Adolescent version (ASRS-A), Depression Self-Rating Scale Adolescent version (DSRS-A), Spence Children's Anxiety Scale (SCAS), and psychotic-like experiences (PLEs). Multivariable logistic regression analyses were performed, adjusted for sex, age, study population, school bullying, and family maltreatment, with interactions by sex and two-way interactions between psychiatric measurements. Boys had higher self-rated problematic gaming in both samples, whereas girls self-rated higher in all psychiatric domains. Boys had more than eight times the odds (odds ratio, OR) of problematic gaming. Symptoms of ADHD, depression, and anxiety were associated with ORs of 2.43 (95% CI 1.44-4.11), 2.47 (95% CI 1.44-4.25), and 2.06 (95% CI 1.27-3.33), respectively, in relation to coexisting problematic gaming. Problematic gaming was associated with psychiatric symptoms in adolescents; when problematic gaming is considered, the probability of coexisting psychiatric symptoms should also be considered, and vice versa. Copyright © 2016 Elsevier Ltd. All rights reserved.
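    Odds ratios with 95% confidence intervals like those reported in this record follow from the standard log-odds formula over a 2×2 exposure-outcome table. The counts below are invented purely for illustration and are not from the SALVe cohort:

    ```python
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """Odds ratio and z-based CI for a 2x2 table:
        a = exposed cases, b = exposed non-cases,
        c = unexposed cases, d = unexposed non-cases."""
        or_ = (a * d) / (b * c)
        # Standard error of log(OR) is sqrt of summed reciprocal counts
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - z * se_log)
        hi = math.exp(math.log(or_) + z * se_log)
        return or_, lo, hi

    # Hypothetical counts, chosen only to show the computation:
    print(odds_ratio_ci(40, 60, 25, 90))
    ```

    The interval is symmetric on the log scale, which is why published ORs (like 2.43, 95% CI 1.44-4.11 above) sit closer to the lower bound.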

  20. Concentration of polycyclic aromatic hydrocarbons in water samples from different stages of treatment

    Science.gov (United States)

    Pogorzelec, Marta; Piekarska, Katarzyna

    2017-11-01

    The aim of this study was to analyze the presence and concentration of selected polycyclic aromatic hydrocarbons in water samples from different stages of treatment and to verify the usefulness of semipermeable membrane devices for analysis of drinking water. For this purpose, study was conducted for a period of 5 months. Semipermeable membrane devices were deployed in a surface water treatment plant located in Lower Silesia (Poland). To determine the effect of water treatment on concentration of PAHs, three sampling places were chosen: raw water input, stream of water just before disinfection and treated water output. After each month of sampling SPMDs were changed for fresh ones and prepared for further analysis. Concentrations of fifteen polycyclic aromatic hydrocarbons were determined by high performance liquid chromatography (HPLC). Presented study indicates that the use of semipermeable membrane devices can be an effective tool for the analysis of aquatic environment, including monitoring of drinking water, where organic micropollutants are present at very low concentrations.

  1. Concentration of polycyclic aromatic hydrocarbons in water samples from different stages of treatment

    Directory of Open Access Journals (Sweden)

    Pogorzelec Marta

    2017-01-01

    Full Text Available The aim of this study was to analyze the presence and concentration of selected polycyclic aromatic hydrocarbons in water samples from different stages of treatment and to verify the usefulness of semipermeable membrane devices for analysis of drinking water. For this purpose, the study was conducted for a period of 5 months. Semipermeable membrane devices were deployed in a surface water treatment plant located in Lower Silesia (Poland). To determine the effect of water treatment on the concentration of PAHs, three sampling places were chosen: raw water input, the stream of water just before disinfection, and treated water output. After each month of sampling, SPMDs were changed for fresh ones and prepared for further analysis. Concentrations of fifteen polycyclic aromatic hydrocarbons were determined by high-performance liquid chromatography (HPLC). The presented study indicates that the use of semipermeable membrane devices can be an effective tool for the analysis of the aquatic environment, including monitoring of drinking water, where organic micropollutants are present at very low concentrations.

  2. A two staged condensation of vapors of an isobutane tower in installations for sulfuric acid alkylation

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, N.P.; Feyzkhanov, R.I.; Idrisov, A.D.; Navalikhin, P.G.; Sakharov, V.D.

    1983-01-01

    In order to increase the concentration of isobutane to above 72 to 76 percent in a sulfuric acid alkylation installation, a two-stage system for condensing the vapors from an isobutane tower was placed into operation. The first stage condenses the heavier part of the tower's overhead distillate, which is achieved through a slight increase in the condensate temperature. The product condensed in the first stage is returned entirely to the tower as reflux. The isobutane-fraction vapors that did not condense in the first stage are sent to two newly installed condensers, from which the condensed product passes through intermediate tanks to further depropanization. The two-stage condensation of the isobutane tower vapors reduces the content of the inert diluents propane and n-butane in the tower's overhead distillate and creates more favorable conditions for the operation of the isobutane and propane towers.

  3. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld Zeitler

    2016-05-01

    Full Text Available Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, in which two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony, where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis, or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

  4. One-stage vs two-stage cartilage repair: a current review

    Directory of Open Access Journals (Sweden)

    Daniel Meyerkort

    2010-10-01

    Full Text Available Daniel Meyerkort, David Wood, Ming-Hao Zheng; Center for Orthopaedic Research, School of Surgery and Pathology, University of Western Australia, Perth, Australia. Introduction: Articular cartilage has a poor capacity for regeneration if damaged. Various methods have been used to restore the articular surface, improve pain and function, and slow progression to osteoarthritis. Method: A PubMed review was performed on 18 March, 2010. Search terms included "autologous chondrocyte implantation (ACI)" and "microfracture" or "mosaicplasty". The aim of this review was to determine whether 1-stage or 2-stage procedures for cartilage repair produced different functional outcomes. Results: The main procedures currently used are ACI and microfracture. Both first-generation ACI and microfracture result in clinical and functional improvement with no significant differences. A significant increase in functional outcome has been observed in second-generation procedures such as Hyalograft C, matrix-induced ACI, and ChondroCelect compared with microfracture. ACI results in a higher percentage of patients with clinical improvement than mosaicplasty; however, these results may take longer to achieve. Conclusion: Clinical and functional improvements have been demonstrated with ACI, microfracture, mosaicplasty, and synthetic cartilage constructs. Heterogeneous products and a lack of good-quality randomized controlled trials make product comparison difficult. Future developments involve scaffolds, gene therapy, growth factors, and stem cells to create a single-stage procedure that results in hyaline articular cartilage. Keywords: autologous chondrocyte implantation, microfracture, cartilage repair

  5. Design and control of a decoupled two degree of freedom translational parallel micro-positioning stage.

    Science.gov (United States)

    Lai, Lei-Jie; Gu, Guo-Ying; Zhu, Li-Min

    2012-04-01

    This paper presents a novel decoupled two-degrees-of-freedom (2-DOF) translational parallel micro-positioning stage. The stage consists of a monolithic compliant mechanism driven by two piezoelectric actuators. The end-effector of the stage is connected to the base by four independent kinematic limbs. Two types of compound flexure module are serially connected to provide 2-DOF for each limb. The compound flexure modules and the mirror-symmetric distribution of the four limbs significantly reduce the input and output cross couplings and the parasitic motions. Based on the stiffness matrix method, static and dynamic models are constructed and optimal design is performed under certain constraints. Finite element analysis results are then given to validate the design model, and a prototype of the XY stage is fabricated for performance tests. Open-loop tests show that the maximum static and dynamic cross couplings between the two linear motions are below 0.5% and -45 dB, low enough to permit single-input-single-output control strategies. Finally, based on the identified dynamic model, an inversion-based feedforward controller in conjunction with a proportional-integral-derivative controller is applied to compensate for the nonlinearities and uncertainties. The experimental results show that good positioning and tracking performances are achieved, which verifies the effectiveness of the proposed mechanism and controller design. The resonant frequencies of the stage loaded with 2 kg and 5 kg are 105 Hz and 68 Hz, respectively, so the performance of the stage is reasonably good in terms of its 200 N load capacity. © 2012 American Institute of Physics

  6. Operation of a two-stage continuous fermentation process producing hydrogen and methane from artificial food wastes

    Energy Technology Data Exchange (ETDEWEB)

    Nagai, Kohki; Mizuno, Shiho; Umeda, Yoshito; Sakka, Makiko [Toho Gas Co., Ltd. (Japan); Osaka, Noriko [Tokyo Gas Co. Ltd. (Japan); Sakka, Kazuo [Mie Univ. (Japan)

    2010-07-01

    An anaerobic two-stage continuous fermentation process combining thermophilic hydrogenogenic and methanogenic stages (two-stage fermentation process) was applied to artificial food wastes on a laboratory scale. In this report, organic loading rate (OLR) conditions for hydrogen fermentation were optimized before operating the two-stage fermentation process. The OLR was set at 11.2, 24.3, 35.2, 45.6, 56.1, and 67.3 g-COD_Cr L^-1 day^-1, at a temperature of 60 °C, pH 5.5, and 5.0% total solids. As a result, approximately 1.8-2.0 mol-H2 mol-hexose^-1 was obtained at OLRs of 11.2-56.1 g-COD_Cr L^-1 day^-1. In contrast, the hydrogen yield at the OLR of 67.3 g-COD_Cr L^-1 day^-1 was inferred to decrease because of an increase in lactate concentration in the culture medium. The performance of the two-stage fermentation process was also evaluated over three months. The hydraulic retention time (HRT) of methane fermentation could be shortened to 5.0 days (under OLR 12.4 g-COD_Cr L^-1 day^-1 conditions) when the OLR of hydrogen fermentation was 44.0 g-COD_Cr L^-1 day^-1, and the average gasification efficiency of the two-stage fermentation process was 81% at that time. (orig.)

  7. The two-parametric scaling and new temporal asymptotic of survival probability of diffusing particle in the medium with traps.

    Science.gov (United States)

    Arkhincheev, V E

    2017-03-01

    The new asymptotic behavior of the survival probability of particles in a medium with absorbing traps in an electric field has been established in two ways: by using the scaling approach and by directly solving the diffusion equation in the field. It has been shown that at long times this drift mechanism leads to a new temporal behavior of the survival probability of particles in a medium with absorbing traps.

  8. The Probability of Extinction of Infectious Salmon Anemia Virus in One and Two Patches.

    Science.gov (United States)

    Milliken, Evan

    2017-12-01

    Single-type and multitype branching processes have been used to study the dynamics of a variety of stochastic birth-death type phenomena in biology and physics. Their use in epidemiology goes back to Whittle's study of a susceptible-infected-recovered (SIR) model in the 1950s. In the case of an SIR model, the presence of only one infectious class allows for the use of single-type branching processes. Multitype branching processes allow for multiple infectious classes and have latterly been used to study metapopulation models of disease. In this article, we develop a continuous time Markov chain (CTMC) model of infectious salmon anemia virus in two patches, two CTMC models in one patch and companion multitype branching process (MTBP) models. The CTMC models are related to deterministic models which inform the choice of parameters. The probability of extinction is computed for the CTMC via numerical methods and approximated by the MTBP in the supercritical regime. The stochastic models are treated as toy models, and the parameter choices are made to highlight regions of the parameter space where CTMC and MTBP agree or disagree, without regard to biological significance. Partial extinction events are defined and their relevance discussed. A case is made for calculating the probability of such events, noting that MTBPs are not suitable for making these calculations.
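    For the single-type branching processes mentioned above, the extinction probability is the smallest root of q = G(q) on [0, 1], where G is the offspring probability generating function, and can be found by fixed-point iteration from q = 0. A minimal sketch with a hypothetical offspring distribution (not the salmon-anemia model of the article):

    ```python
    def extinction_prob(pgf, tol=1e-12, max_iter=10_000):
        """Smallest fixed point of the offspring pgf G(s) on [0, 1],
        i.e. the extinction probability of a single-type branching process.
        Iterating q <- G(q) from 0 converges monotonically to that root."""
        q = 0.0
        for _ in range(max_iter):
            q_next = pgf(q)
            if abs(q_next - q) < tol:
                return q_next
            q = q_next
        return q

    # Hypothetical offspring law: P(0)=0.2, P(1)=0.3, P(2)=0.5 (mean 1.3, supercritical)
    G = lambda s: 0.2 + 0.3 * s + 0.5 * s ** 2
    print(extinction_prob(G))  # converges to the smaller root, 0.4
    ```

    In the supercritical regime (mean offspring > 1) this root is strictly below 1, which is what makes the MTBP approximation of the extinction probability informative there.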

  9. Capacitor blocks for linear transformer driver stages.

    Science.gov (United States)

    Kovalchuk, B M; Kharlov, A V; Kumpyak, E V; Smorudov, G V; Zherlitsyn, A A

    2014-01-01

    In the Linear Transformer Driver (LTD) technology, the low-inductance energy storage components and switches are directly incorporated into the individual cavities (named stages) to generate a fast output voltage pulse, which is added along a vacuum coaxial line as in an inductive voltage adder. LTD stages with air insulation were recently developed, in which air is used both as insulation on the primary side of the stages and as the working gas in the LTD spark gap switches. A custom-designed unit, referred to as a capacitor block, was developed for use as the main structural element of the transformer stages. The capacitor block incorporates two capacitors GA 35426 (40 nF, 100 kV) and a multichannel multigap gas switch. Several modifications of the capacitor blocks were developed and tested for lifetime and self-breakdown probability. Blocks were tested both as separate units and in an assembly of a capacitive module consisting of five capacitor blocks. This paper presents the detailed design of the capacitor blocks, a description of the operation regimes, numerical simulation of the electric field in the switches, and test results.

  10. A two-stage method for microcalcification cluster segmentation in mammography by deformable models

    International Nuclear Information System (INIS)

    Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.; Karahaliou, A.; Costaridou, L.; Vassiou, K.

    2015-01-01

    Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is a prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method for MC clusters is investigated. The first stage targets accurate and time-efficient segmentation of the majority of the particles of an MC cluster, by means of a level set method. The second stage targets shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method, in terms of inter- and intraobserver agreement, was evaluated in a case sample of 80 MC clusters originating from the Digital Database for Screening Mammography, corresponding to 4 morphology types of MC clusters (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24), assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted, and a correlation-based feature selection method yielded a feature subset to feed into a support vector machine classifier. Classification performance of the MC cluster features was estimated by means of the area under the receiver operating characteristic curve (Az ± standard error) utilizing tenfold cross-validation.
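    Of the agreement metrics named in this record, the Hausdorff distance is the simplest to state: the largest of all nearest-neighbor distances between two contours. A minimal sketch on toy point sets (hypothetical coordinates, not mammography data):

    ```python
    import math

    def hausdorff(A, B):
        """Symmetric Hausdorff distance between two finite point sets:
        the worst-case distance from a point in one set to its nearest
        neighbor in the other set, taken in both directions."""
        directed = lambda X, Y: max(min(math.dist(x, y) for y in Y) for x in X)
        return max(directed(A, B), directed(B, A))

    # Two toy "segmentation contours":
    A = [(0, 0), (1, 0), (1, 1)]
    B = [(0, 0), (1, 0), (2, 1)]
    print(hausdorff(A, B))  # 1.0
    ```

    Because it takes a maximum over all points, the Hausdorff distance is sensitive to single outlier points, which is why an averaged metric such as AMINDIST is often reported alongside it.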

  11. Men who have sex with men in Great Britain: comparing methods and estimates from probability and convenience sample surveys

    OpenAIRE

    Prah, Philip; Hickson, Ford; Bonell, Chris; McDaid, Lisa M; Johnson, Anne M; Wayal, Sonali; Clifton, Soazig; Sonnenberg, Pam; Nardone, Anthony; Erens, Bob; Copas, Andrew J; Riddell, Julie; Weatherburn, Peter; Mercer, Catherine H

    2016-01-01

    Objective: To examine sociodemographic and behavioural differences between men who have sex with men (MSM) participating in recent UK convenience surveys and a national probability sample survey. Methods: We compared 148 MSM aged 18-64 years interviewed for Britain's third National Survey of Sexual Attitudes and Lifestyles (Natsal-3), undertaken in 2010-2012, with men in the same age range participating in contemporaneous convenience surveys of MSM: 15 500 British resident men in the European...

  12. Insufficient sensitivity of joint aspiration during the two-stage exchange of the hip with spacers.

    Science.gov (United States)

    Boelch, Sebastian Philipp; Weissenberger, Manuel; Spohn, Frederik; Rudert, Maximilian; Luedemann, Martin

    2018-01-10

    Evaluation of infection persistence during the two-stage exchange of the hip is challenging. Joint aspiration before reconstruction is supposed to rule out infection persistence. The sensitivity and specificity of synovial fluid culture and of the synovial leucocyte count for detecting infection persistence during the two-stage exchange of the hip were evaluated. Ninety-two aspirations performed before planned joint reconstruction during the two-stage exchange of the hip with spacers were retrospectively analyzed. The sensitivity and specificity of synovial fluid culture were 4.6% and 94.3%. The sensitivity and specificity of the synovial leucocyte count at a cut-off value of 2000 cells/μl were 25.0% and 96.9%. C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) values were significantly higher before prosthesis removal and before reconstruction or spacer exchange (p = 0.00; p = 0.013 and p = 0.039; p = 0.002) in the infection-persistence group. Receiver operating characteristic area under the curve values before prosthesis removal and before reconstruction or spacer exchange were lower for ESR (0.516 and 0.635) than for CRP (0.720 and 0.671). Synovial fluid culture and leucocyte count cannot rule out infection persistence during the two-stage exchange of the hip.
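    Sensitivity and specificity figures like those above are simple ratios over a 2×2 confusion table. A sketch with hypothetical counts (chosen only to illustrate the formulas, not the study's raw data):

    ```python
    def sens_spec(tp, fn, tn, fp):
        """Sensitivity = TP / (TP + FN): fraction of true infections detected.
        Specificity = TN / (TN + FP): fraction of non-infections correctly cleared."""
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical counts for a diagnostic test on 92 aspirations:
    sens, spec = sens_spec(tp=1, fn=21, tn=66, fp=4)
    print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
    ```

    The asymmetry in the record (very low sensitivity, high specificity) means a positive culture is informative but a negative one is not, which is exactly the study's conclusion.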

  13. Performance of an iterative two-stage bayesian technique for population pharmacokinetic analysis of rich data sets

    NARCIS (Netherlands)

    Proost, Johannes H.; Eleveld, Douglas J.

    2006-01-01

    Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with

  14. Two-stage soil infiltration treatment system for treating ammonium wastewaters of low COD/TN ratios.

    Science.gov (United States)

    Lei, Zhongfang; Wu, Ting; Zhang, Yi; Liu, Xiang; Wan, Chunli; Lee, Duu-Jong; Tay, Joo-Hwa

    2013-01-01

    Soil infiltration treatment (SIT) is ineffective for treating ammonium wastewaters with total nitrogen (TN) > 100 mg l(-1). This study applied a novel two-stage SIT process for effective TN removal from wastewaters with TN > 100 mg l(-1) and a chemical oxygen demand (COD)/TN ratio of 3.2-8.6. The wastewater was first fed into the soil column (stage 1) at a hydraulic loading rate (HLR) of 0.06 m(3) m(-2) d(-1) for COD removal and total phosphorus (TP) immobilization. The effluent from stage 1 was then fed individually into four soil columns (stage 2) at an HLR of 0.02 m(3) m(-2) d(-1), with different proportions of raw wastewater as an additional carbon source. Over the one-year field test, balanced nitrification and denitrification in the two-stage SIT yielded excellent TN removal (>90%) from the tested wastewaters. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. High-speed double-disc TMP [thermomechanical pulp] from northern and southern softwoods: One or two refining stages

    Energy Technology Data Exchange (ETDEWEB)

    Sabourin, M.J. (Andritz Sprout-Bauer, Inc., Springfield, OH (United States)); Cort, J.B.; Musselman, R.L. (Andritz Sprout-Bauer, Inc., Muncy, PA (United States))

    1994-01-01

    Pilot-plant studies were carried out to evaluate one- and two-stage high-speed refining processes for production of thermomechanical pulp (TMP) at minimal energy consumption. Both northern (black spruce/balsam fir) and southern (loblolly pine) wood species were tested. Preliminary results indicate both one- and two-stage high-speed refining are suitable for the production of TMP from spruce and fir. Single-stage high-speed refining of spruce/fir resulted in over 25% energy savings compared to conventional TMP production. The resulting TMP had improved optical and shive-content properties, with slightly reduced pulp strength and long-fiber content. Two stages of refining were necessary to optimize pulp quality from the loblolly pine furnish. A 15% energy reduction was obtained when comparing high-speed and conventional TMP pulping of loblolly pine at similar operating conditions. The high-speed pine TMP had comparable bonding strength and shive content, and lower tear, than conventional two-stage loblolly pine TMP. 14 refs., 11 figs., 6 tabs.

  16. Wave functions and two-electron probability distributions of the Hooke's-law atom and helium

    International Nuclear Information System (INIS)

    O'Neill, Darragh P.; Gill, Peter M. W.

    2003-01-01

    The Hooke's-law atom (hookium) provides an exactly soluble model for a two-electron atom in which the nuclear-electron Coulombic attraction has been replaced by a harmonic one. Starting from the known exact position-space wave function for the ground state of hookium, we present the momentum-space wave function. We also look at the intracules, two-electron probability distributions, for hookium in position, momentum, and phase space. These are compared with the Hartree-Fock results and the Coulomb holes (the difference between the exact and Hartree-Fock intracules) in position, momentum, and phase space are examined. We then compare these results with analogous results for the ground state of helium using a simple, explicitly correlated wave function

  17. A two-stage stochastic programming approach for operating multi-energy systems

    DEFF Research Database (Denmark)

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

    This paper provides a two-stage stochastic programming approach for the joint operation of multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...
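    The two-stage structure this record refers to (commit now, take recourse after the uncertainty is revealed) can be sketched by scenario enumeration. The prices, demand scenarios, and variable names below are entirely hypothetical and are not the paper's model:

    ```python
    # Stage 1: commit to a purchase quantity x at price c1.
    # Stage 2: after demand d is revealed, buy any shortfall at a
    # higher recourse (spot) price c2. Choose x minimizing expected cost.

    scenarios = [(0.3, 50.0), (0.5, 80.0), (0.2, 120.0)]  # (probability, demand)
    c1, c2 = 1.0, 1.8  # first-stage price vs. recourse spot price

    def expected_cost(x):
        """First-stage cost plus probability-weighted recourse cost."""
        return c1 * x + sum(p * c2 * max(0.0, d - x) for p, d in scenarios)

    # Brute-force search over integer commitments (fine for a sketch):
    best_x = min(range(0, 201), key=expected_cost)
    print(best_x, round(expected_cost(best_x), 2))
    ```

    The optimum sits where the marginal first-stage cost equals the expected marginal recourse saving, i.e. where P(d > x) falls below c1/c2; real multi-energy models replace this scalar search with a large linear or mixed-integer program over network constraints.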

  18. UT Biomedical Informatics Lab (BMIL) probability wheel

    Science.gov (United States)

    Huang, Sheng-Cheng; Lee, Sara; Wang, Allen; Cantor, Scott B.; Sun, Clement; Fan, Kaili; Reece, Gregory P.; Kim, Min Soon; Markey, Mia K.

    A probability wheel app is intended to facilitate communication between two people, an "investigator" and a "participant", about uncertainties inherent in decision-making. Traditionally, a probability wheel is a mechanical prop with two colored slices. A user adjusts the sizes of the slices to indicate the relative value of the probabilities assigned to them. A probability wheel can improve the adjustment process and attenuate the effect of anchoring bias when it is used to estimate or communicate probabilities of outcomes. The goal of this work was to develop a mobile application of the probability wheel that is portable, easily available, and more versatile. We provide a motivating example from medical decision-making, but the tool is widely applicable for researchers in the decision sciences.
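    The wheel's core mechanic, two complementary slices whose central angles encode p and 1 - p, reduces to a one-line mapping. A minimal sketch (the function name is our own, not from the BMIL app):

    ```python
    def wheel_angles(p):
        """Central angles in degrees of the two colored slices
        representing probability p and its complement 1 - p."""
        if not 0.0 <= p <= 1.0:
            raise ValueError("p must be a probability in [0, 1]")
        return 360.0 * p, 360.0 * (1.0 - p)

    print(wheel_angles(0.25))  # (90.0, 270.0)
    ```

    Dragging the slice boundary in the app is the inverse mapping: the chosen angle divided by 360 recovers the elicited probability.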

  19. Prevalence of food sensitization and probable food allergy among adults in India: the EuroPrevall INCO study.

    Science.gov (United States)

    Mahesh, P A; Wong, Gary W K; Ogorodova, L; Potts, J; Leung, T F; Fedorova, O; Holla, Amrutha D; Fernandez-Rivas, M; Clare Mills, E N; Kummeling, I; Versteeg, S A; van Ree, R; Yazdanbakhsh, M; Burney, P

    2016-07-01

    Data are lacking regarding the prevalence of food sensitization and probable food allergy in the general population in India. We report the prevalence of sensitization and probable food allergy to 24 common foods among adults from the general population in Karnataka, South India. The study was conducted in two stages: a screening study and a case-control study. A total of 11 791 adults aged 20-54 were randomly sampled from the general population in South India and answered a screening questionnaire. A total of 588 subjects (236 cases and 352 controls) participated in the case-control study, involving a detailed questionnaire and specific IgE estimation for 24 common foods. A high level of sensitization (26.5%) was observed for most of the foods in the general population, higher than that observed among adults in Europe, except for those foods that cross-react with birch pollen. Most of the sensitization was observed in subjects whose total IgE was above the median IgE level. A high level of cross-reactivity was observed among different pollens and foods, and among foods. The prevalence of probable food allergy (self-reports of adverse symptoms after the consumption of a food plus specific IgE to the same food) was 1.2%, mainly accounted for by cow's milk (0.5%) and apple (0.5%). Very high levels of sensitization were observed for most foods, including those not commonly consumed in the general population. Given these levels of sensitization, the prevalence of probable food allergy was low. This dissociation needs to be further explored in future studies. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. What Are Probability Surveys used by the National Aquatic Resource Surveys?

    Science.gov (United States)

    The National Aquatic Resource Surveys (NARS) use probability-survey designs to assess the condition of the nation’s waters. In probability surveys (also known as sample-surveys or statistical surveys), sampling sites are selected randomly.

  1. Multiple Genes Related to Muscle Identified through a Joint Analysis of a Two-stage Genome-wide Association Study for Racing Performance of 1,156 Thoroughbreds

    Directory of Open Access Journals (Sweden)

    Dong-Hyun Shin

    2015-06-01

    Full Text Available Thoroughbred, a relatively recent horse breed, is best known for its use in horse racing. Although myostatin (MSTN) variants have been reported to be highly associated with horse racing performance, the trait is more likely polygenic in nature. The purpose of this study was to identify genetic variants strongly associated with racing performance by using the estimated breeding value (EBV) for race time as the phenotype. We conducted a two-stage genome-wide association study to search for genetic variants associated with the EBV. In the first stage, a relatively large number of markers (~54,000 single-nucleotide polymorphisms, SNPs) was evaluated in a small number of samples (240 horses). In the second stage, a relatively small number of markers identified as having large effects (170 SNPs) was evaluated in a much larger number of samples (1,156 horses). We also validated the SNPs related to MSTN known to have large effects on racing performance and found significant associations in the stage-two analysis, but not in stage one. We identified 28 significant SNPs related to 17 genes. Among these, six genes have a function related to myogenesis and five genes are involved in muscle maintenance. To our knowledge, these genes are newly reported for their genetic association with racing performance of Thoroughbreds. This complements recent horse genome-wide association studies of racing performance that identified other SNPs and genes as the most significant variants. These results will help to expand our knowledge of the polygenic nature of racing performance in Thoroughbreds.

  2. Examination Of Gifted Students’ Probability Problem Solving Process In Terms Of Mathematical Thinking

    Directory of Open Access Journals (Sweden)

    Serdal BALTACI

    2016-10-01

It is a widely known fact that gifted students have different skills compared to their peers. However, to what extent gifted students use mathematical thinking skills during the probability problem-solving process emerges as a significant question. The main aim of the present study is therefore to examine 8th-grade gifted students' probability problem-solving process, related to daily life, in terms of mathematical thinking skills. A case study design was used. The participants were six 8th-grade students (four girls and two boys) from the Science and Art Center. Maximum variation sampling, one of the purposeful sampling methods, was used for selecting the participants. Clinical interviews and problems were used as data collection tools. As a result of the study, it was determined that during the probability problem-solving process gifted students mostly used reasoning and strategy skills, among the mathematical thinking skills, and used communication skills the least.

  3. Two-stage categorization in brand extension evaluation: electrophysiological time course evidence.

    Directory of Open Access Journals (Sweden)

    Qingguo Ma

A brand name can be considered a mental category. Similarity-based categorization theory has been used to explain how consumers judge a new product as a member of a known brand, a process called brand extension evaluation. This event-related potential study was conducted in two experiments and found a two-stage categorization process, reflected by the P2 and N400 components, in brand extension evaluation. In experiment 1, a prime-probe paradigm presented pairs consisting of a brand name and a product name in three conditions: in-category extension, similar-category extension, and out-of-category extension. Although the task was unrelated to brand extension evaluation, P2 distinguished out-of-category extensions from similar-category and in-category ones, and N400 distinguished similar-category extensions from in-category ones. In experiment 2, a prime-probe paradigm with a related task was used, in which product names included subcategory and major-category product names. The N400 elicited by subcategory products was significantly more negative than that elicited by major-category products, with no salient difference in P2. We speculate that P2 reflects early, low-level, similarity-based processing in the first stage, whereas N400 reflects late, analytic, category-based processing in the second stage.

  4. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling.

    Science.gov (United States)

    Boos, Moritz; Seer, Caroline; Lange, Florian; Kopp, Bruno

    2016-01-01

Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modeling techniques. A classic urn-ball paradigm served as the experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behavior. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model's success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modeling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modeling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.
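One widely used parametric family of inverted-S-shaped weighting functions is the one-parameter Tversky-Kahneman form; the sketch below is illustrative, with a γ value taken from the decision-making literature rather than from this study:

```python
def weight(p, gamma=0.61):
    """Tversky-Kahneman one-parameter probability weighting function.
    gamma < 1 yields the inverted-S shape: small probabilities are
    overweighted, moderate-to-large ones underweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Endpoints are preserved while the interior is distorted:
print(round(weight(0.01), 3), round(weight(0.5), 3), round(weight(0.9), 3))
assert weight(0.01) > 0.01 and weight(0.9) < 0.9
```

With γ = 1 the function reduces to the identity, so letting γ vary per subject, drawn from a weakly informative prior, captures exactly the individual differences in distortion that the hierarchical model targets.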

  5. Probabilistic inference: Task dependency and individual differences of probability weighting revealed by hierarchical Bayesian modelling

    Directory of Open Access Journals (Sweden)

Moritz Boos

    2016-05-01

Cognitive determinants of probabilistic inference were examined using hierarchical Bayesian modelling techniques. A classic urn-ball paradigm served as the experimental strategy, involving a factorial two (prior probabilities) by two (likelihoods) design. Five computational models of cognitive processes were compared with the observed behaviour. Parameter-free Bayesian posterior probabilities and parameter-free base rate neglect provided inadequate models of probabilistic inference. The introduction of distorted subjective probabilities yielded more robust and generalizable results. A general class of (inverted) S-shaped probability weighting functions had been proposed; however, the possibility of large differences in probability distortions not only across experimental conditions, but also across individuals, seems critical for the model’s success. It also seems advantageous to consider individual differences in parameters of probability weighting as being sampled from weakly informative prior distributions of individual parameter values. Thus, the results from hierarchical Bayesian modelling converge with previous results in revealing that probability weighting parameters show considerable task dependency and individual differences. Methodologically, this work exemplifies the usefulness of hierarchical Bayesian modelling techniques for cognitive psychology. Theoretically, human probabilistic inference might be best described as the application of individualized strategic policies for Bayesian belief revision.

  6. Introduction to probability and statistics for science, engineering, and finance

    CERN Document Server

    Rosenkrantz, Walter A

    2008-01-01

Data Analysis: Orientation; The Role and Scope of Statistics in Science and Engineering; Types of Data: Examples from Engineering, Public Health, and Finance; The Frequency Distribution of a Variable Defined on a Population; Quantiles of a Distribution; Measures of Location (Central Value) and Variability; Covariance, Correlation, and Regression: Computing a Stock's Beta; Mathematical Details and Derivations; Large Data Sets. Probability Theory: Orientation; Sample Space, Events, Axioms of Probability Theory; Mathematical Models of Random Sampling; Conditional Probability and Baye…

  7. Measuring demand for flat water recreation using a two-stage/disequilibrium travel cost model with adjustment for overdispersion and self-selection

    Science.gov (United States)

    McKean, John R.; Johnson, Donn; Taylor, R. Garth

    2003-04-01

    An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.

  8. Modelling of an air-cooled two-stage Rankine cycle for electricity production

    International Nuclear Information System (INIS)

    Liu, Bo

    2014-01-01

This work considers a two-stage Rankine cycle architecture slightly different from a standard Rankine cycle for electricity generation. Instead of expanding the steam to extremely low pressure, the vapor leaves the turbine at a higher pressure and therefore with a much smaller specific volume, so the size of the steam turbine can be greatly reduced. The remaining energy is recovered by a bottoming cycle using a working fluid of much higher density than water steam. The turbines and heat exchangers are thus more compact, and the turbine exhaust velocity loss is lower. This configuration makes it possible to reduce the overall size of the steam turbine considerably and facilitates the use of a dry cooling system. The main advantage of such an air-cooled two-stage Rankine cycle is the possibility of choosing the installation site of a large or medium power plant without needing a large and constantly available water source; in addition, compared with water-cooled cycles, the risk regarding future operations is reduced (climate conditions may affect water availability or temperature and imply changes in water-supply regulatory rules). The concept has been investigated by EDF R and D. A 22 MW prototype was developed in the 1970s using ammonia as the working fluid of the bottoming cycle, chosen for its high density and high latent heat. However, this fluid is toxic. In order to find more suitable working fluids for the two-stage Rankine cycle application and to identify the optimal cycle configuration, we established a working-fluid selection methodology. Some potential candidates have been identified. We evaluated the performance of two-stage Rankine cycles operating with different working fluids in both design and off-design conditions. For the most acceptable working fluids, the components of the cycle have been sized. The power plant concept can then be evaluated on a life-cycle cost basis. (author)

  9. A Two-Stage Layered Mixture Experiment Design for a Nuclear Waste Glass Application-Part 1

    International Nuclear Information System (INIS)

    Cooley, Scott K.; Piepel, Gregory F.; Gan, Hao; Kot, Wing; Pegg, Ian L.

    2003-01-01

A layered experimental design involving mixture variables was generated to support developing property-composition models for high-level waste (HLW) glasses. The design was generated in two stages, each having unique characteristics. Each stage used a layered design having an outer layer, an inner layer, a center point, and some replicates. The layers were defined by single- and multi-variable constraints. The first stage involved 15 glass components treated as mixture variables. For each layer, vertices were generated and optimal design software was used to select alternative subsets of vertices and calculate design optimality measures. Two partial quadratic mixture models, containing 25 terms for the outer layer and 30 terms for the inner layer, were the basis for the optimal design calculations. Distributions of predicted glass property values were plotted and evaluated for the alternative subsets of vertices. Based on the optimality measures and the predicted property distributions, a "best" subset of vertices was selected for each layer to form a layered design for the first stage. The design for the second stage was selected to augment the first-stage design. The discussion of the second-stage design begins in this Part 1 and is continued in Part 2 (Cooley and Piepel, 2003b).

  10. Failure probability under parameter uncertainty.

    Science.gov (United States)

    Gerrard, R; Tsanakas, A

    2011-05-01

In many problems of risk analysis, failure is equivalent to the event of a random risk factor exceeding a given threshold. Failure probabilities can be controlled if a decision maker is able to set the threshold at an appropriate level. This abstract situation applies, for example, to environmental risks with infrastructure controls; to supply chain risks with inventory controls; and to insurance solvency risks with capital controls. However, uncertainty around the distribution of the risk factor implies that parameter error will be present and the measures taken to control failure probabilities may not be effective. We show that parameter uncertainty increases the probability (understood as expected frequency) of failures. For a large class of loss distributions, arising from increasing transformations of location-scale families (including the log-normal, Weibull, and Pareto distributions), the article shows that failure probabilities can be exactly calculated, as they are independent of the true (but unknown) parameters. Hence it is possible to obtain an explicit measure of the effect of parameter uncertainty on failure probability. Failure probability can be controlled in two different ways: (1) by reducing the nominal required failure probability, depending on the size of the available data set, and (2) by modifying the distribution itself that is used to calculate the risk control. Approach (1) corresponds to a frequentist/regulatory view of probability, while approach (2) is consistent with a Bayesian/personalistic view. We furthermore show that the two approaches are consistent in achieving the required failure probability. Finally, we briefly discuss the effects of data pooling and its systemic risk implications. © 2010 Society for Risk Analysis.
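The paper's central effect, that thresholds estimated from finite data produce a realized failure frequency above the nominal one, can be checked with a small simulation; the normal/log-normal model, sample size, and seed below are illustrative assumptions, not the paper's calculations:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(1)
nominal = 0.01              # required failure probability
n, reps = 30, 4000          # data-set size seen by the decision maker
z = NormalDist().inv_cdf(1 - nominal)

freq = 0.0
for _ in range(reps):
    # The decision maker only sees n log-losses from the true N(0, 1) model
    # (i.e. log-normal losses), fits mean and standard deviation...
    data = [random.gauss(0.0, 1.0) for _ in range(n)]
    # ...and sets the control threshold at the fitted (1 - nominal)-quantile.
    threshold = mean(data) + z * stdev(data)
    # True probability that a future log-loss exceeds that threshold:
    freq += 1 - NormalDist(0.0, 1.0).cdf(threshold)
freq /= reps

print(f"nominal {nominal:.3f}, realized {freq:.4f}")  # realized exceeds nominal
```

Shrinking the nominal level until the realized frequency matches the target corresponds to the first (frequentist/regulatory) control approach discussed in the abstract.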

  11. Two-Stage Performance Engineering of Container-based Virtualization

    Directory of Open Access Journals (Sweden)

    Zheng Li

    2018-02-01

Cloud computing has become a compelling paradigm built on compute and storage virtualization technologies. The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Given the recent booming of the container ecosystem, container-based virtualization has started receiving more attention as a promising alternative. Although container technologies are generally considered lightweight, no virtualization solution is ideally resource-free, and the corresponding performance overheads lead to negative impacts on the quality of Cloud services. To facilitate understanding container technologies from a performance engineering perspective, we conducted a two-stage performance investigation into Docker containers as a concrete example. At the first stage, we used a physical machine with “just-enough” resources as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to the related work, our evaluation results show that the virtualization performance overhead can vary not only on a feature-by-feature basis but also on a job-to-job basis. Moreover, the hypervisor-based technology does not come with higher performance overhead in every case; for example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. At the ongoing second stage, we employed a physical machine with “fair-enough” resources to implement a container-based MapReduce application and tried to optimize its performance. In fact, this machine could not afford VM-based MapReduce clusters at the same scale. The performance tuning results show that the effects of different optimization strategies can largely depend on the data characteristics; for example, LZO compression brought the most significant performance improvement when dealing with text data in our case.

  12. Standard practices for sampling uranium-Ore concentrate

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

1.1 These practices are intended to provide the nuclear industry with procedures for obtaining representative bulk samples from uranium-ore concentrates (UOC) (see Specification C967). 1.2 These practices also provide for obtaining a series of representative secondary samples from the original bulk sample for the determination of moisture and other test purposes, and for the preparation of pulverized analytical samples (see Test Methods C1022). 1.3 These practices consist of a number of alternative procedures for sampling and sample preparation which have been shown to be satisfactory through long experience in the nuclear industry. These procedures are described in the following order (stage: procedure, section): Primary Sampling: one-stage falling stream (4), two-stage falling stream (5), auger (6); Secondary Sampling: straight-path (reciprocating) (7), rotating (Vezin) (8, 9); Sample Preparation (10): concurrent-drying (11-13), natural moisture (14-16), calcination (17, 18); Sample Packaging (19); Wax s...

  13. Comparison of two-stage thermophilic (68 degrees C/55 degrees C) anaerobic digestion with one-stage thermophilic (55 degrees C) digestion of cattle manure.

    Science.gov (United States)

    Nielsen, H B; Mladenovska, Z; Westermann, P; Ahring, B K

    2004-05-05

A two-stage 68 degrees C/55 degrees C anaerobic degradation process for treatment of cattle manure was studied. In batch experiments, an increase of the specific methane yield, ranging from 24% to 56%, was obtained when cattle manure and its fractions (fibers and liquid) were pretreated at 68 degrees C for periods of 36, 108, and 168 h, and subsequently digested at 55 degrees C. In a lab-scale experiment, the performance of a two-stage reactor system, consisting of a digester operating at 68 degrees C with a hydraulic retention time (HRT) of 3 days, connected to a 55 degrees C reactor with a 12-day HRT, was compared with a conventional single-stage reactor running at 55 degrees C with a 15-day HRT. When an organic loading of 3 g volatile solids (VS) per liter per day was applied, the two-stage setup had a 6% to 8% higher specific methane yield and 9% more effective VS removal than the conventional single-stage reactor. The 68 degrees C reactor generated 7% to 9% of the total amount of methane of the two-stage system and maintained a volatile fatty acids (VFA) concentration of 4.0 to 4.4 g acetate per liter. Population size and activity of aceticlastic methanogens, syntrophic bacteria, and hydrolytic/fermentative bacteria were significantly lower in the 68 degrees C reactor than in the 55 degrees C reactors. The density levels of methanogens utilizing H2/CO2 or formate were, however, in the same range for all reactors, although the degradation of these substrates was significantly lower in the 68 degrees C reactor than in the 55 degrees C reactors. Temporal temperature gradient electrophoresis (TTGE) profiles of the 68 degrees C reactor demonstrated a stable bacterial community along with a less divergent community of archaeal species. Copyright 2004 Wiley Periodicals, Inc.

  14. Prevalence of probable Attention-Deficit/Hyperactivity Disorder symptoms: result from a Spanish sample of children.

    Science.gov (United States)

    Cerrillo-Urbina, Alberto José; García-Hermoso, Antonio; Martínez-Vizcaíno, Vicente; Pardo-Guijarro, María Jesús; Ruiz-Hermosa, Abel; Sánchez-López, Mairena

    2018-03-15

The aims of our study were to: (i) determine the prevalence of children aged 4 to 6 years with probable Attention-Deficit/Hyperactivity Disorder (ADHD) symptoms in the Spanish population; and (ii) analyse the association of probable ADHD symptoms with sex, age, type of school, origin (native or foreign), and socio-economic status in these children. This cross-sectional study included 1189 children (4 to 6 years old) from 21 primary schools in 19 towns of the Ciudad Real and Cuenca provinces, Castilla-La Mancha region, Spain. The ADHD Rating Scale IV for parents and teachers was administered to determine the probability of ADHD. The 90th percentile cut-off was used to establish the prevalence of the inattention, hyperactivity/impulsivity, and combined subtypes. The prevalence of children with probable ADHD symptoms was 5.4% (2.6% inattention subtype, 1.5% hyperactivity/impulsivity subtype, and 1.3% combined subtype). Children aged 4 to 5 years showed a higher prevalence of probable ADHD for the inattention subtype and for all subtypes combined than children aged 6 years, and children with low socio-economic status showed a higher prevalence of probable ADHD symptoms (each subtype and all subtypes combined) than those with medium and high socio-economic status. Early diagnosis and an understanding of the predictors of probable ADHD are needed to direct appropriate identification and intervention efforts. These screening efforts should especially address vulnerable groups, particularly low socio-economic status families and younger children.

  15. Two stage study of wound microorganisms affecting burns and plastic surgery inpatients.

    Science.gov (United States)

    Miranda, Benjamin H; Ali, Syed N; Jeffery, Steven L A; Thomas, Sunil S

    2008-01-01

…culture and sensitivity results. Stage 2 demonstrated that ABAU, MRSA, and PAER were significantly more prevalent in the ICU setting. Furthermore, military inpatients' wounds grew more ABAU, MRSA, and PAER than civilians' wounds, probably due to the longer inpatient stays and the dirty nature, site, and complex mechanism of injury. Finally, this study suggests that ABAU was brought into the unit by military patients.

  16. The proportionator: unbiased stereological estimation using biased automatic image analysis and non-uniform probability proportional to size sampling

    DEFF Research Database (Denmark)

    Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb

    2008-01-01

… the desired number of fields are sampled automatically with probability proportional to the weight and presented to the expert observer. Using any known stereological probe and estimator, the correct count in these fields leads to a simple, unbiased estimate of the total amount of structure in the sections examined, which in turn leads to any of the known stereological estimates, including size distributions and spatial distributions. The unbiasedness is not a function of the assumed relation between the weight and the structure, which is in practice always a biased relation from a stereological (integral geometric) point of view. The efficiency of the proportionator depends, however, directly on this relation being positive. The sampling and estimation procedure is simulated in sections with characteristics and various kinds of noises in possibly realistic ranges. In all cases examined, the proportionator…
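The sampling scheme at the heart of the proportionator, drawing fields with probability proportional to an arbitrary positive weight and reweighting counts by the inverse of that probability, can be sketched as follows; the field weights and counts below are made up for illustration, and the with-replacement (Hansen-Hurwitz) estimator is a simplification of the actual procedure:

```python
import random

random.seed(7)

# Hypothetical microscope fields: `weights` is the automatic image-analysis
# score (a biased but positive predictor), `counts` the true particle counts
# an expert observer would report. Both are made-up numbers.
weights = [1, 2, 5, 10, 20, 30]
counts = [0, 1, 3, 7, 15, 22]

def proportionator_estimate(weights, counts, n_fields):
    """Draw fields with probability proportional to weight (with replacement)
    and return a Hansen-Hurwitz style estimate of the total count."""
    total_w = sum(weights)
    est = 0.0
    for _ in range(n_fields):
        i = random.choices(range(len(weights)), weights=weights)[0]
        # Each sampled field contributes count / inclusion probability / draws.
        est += counts[i] * total_w / (weights[i] * n_fields)
    return est

true_total = sum(counts)  # 48
estimates = [proportionator_estimate(weights, counts, 3) for _ in range(20000)]
print(true_total, round(sum(estimates) / len(estimates), 2))
```

Because the weight only enters through the inclusion probability, a poor weight merely inflates the variance; the expected value of the estimate stays at the true total, which is the unbiasedness property the abstract emphasizes.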

  17. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

A solution to the problem of reducing the noise of grayscale images is presented. To capture the intrascale and interscale dependencies, this study makes use of a state-space model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by a first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide an approximate noise-free image, from which the parameters of the model, such as the state transition matrix, the variance of the process noise, the observation model, and the covariance of the observation noise, are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter estimates the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performance competitive with state-of-the-art denoising methods in terms of both peak signal-to-noise ratio and subjective visual quality.
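A one-dimensional sketch of the second stage, a scalar Kalman filter driven by a first-order Markov (AR(1)) state model, is given below; all model parameters are illustrative and not taken from the paper:

```python
import random

random.seed(0)

# First-order Markov (AR(1)) model for the clean coefficients, observed
# through additive noise: the same state-space form the paper applies to
# wavelet coefficients. Parameters are illustrative.
a, q, r = 0.95, 0.1, 0.5   # state transition, process var, observation var

x = 0.0
truth, obs = [], []
for _ in range(500):
    x = a * x + random.gauss(0.0, q ** 0.5)
    truth.append(x)
    obs.append(x + random.gauss(0.0, r ** 0.5))

# Scalar Kalman filter: predict from the previous estimate, then correct
# with the noisy observation.
xhat, p = 0.0, 1.0
est = []
for z in obs:
    xhat, p = a * xhat, a * a * p + q              # predict
    k = p / (p + r)                                # Kalman gain
    xhat, p = xhat + k * (z - xhat), (1 - k) * p   # update
    est.append(xhat)

mse_raw = sum((z - t) ** 2 for z, t in zip(obs, truth)) / len(truth)
mse_kal = sum((e - t) ** 2 for e, t in zip(est, truth)) / len(truth)
print(round(mse_raw, 3), round(mse_kal, 3))  # filtering reduces the error
```

In the paper's setting the state index runs across scales rather than time, so the "previous state" is the parent coefficient at the coarser scale.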

  18. The dynamics of patients' life quality indices under the conditions of two-stage knee joint replacement

    Directory of Open Access Journals (Sweden)

    Shpinyak S.P.

    2017-09-01

The aim: to analyze the changes in the life quality indices of patients with deep periprosthetic joint infection of the knee during two-stage surgical treatment. Material and Methods. 57 patients who underwent two-stage revisionary treatment in the Research Institute of Traumatology, Orthopedics and Neurosurgery were interviewed with the life quality questionnaire Short Form Medical Outcomes Study (SF-36 v.1). Interview results were compared with standardized population indices of the SF-36 scales for males and females. Results. In all groups, regardless of sex, there was a general tendency for the physical and psychological health components to rise to mean population values after the first stage of surgery, with further growth after the second stage. The rehabilitation potential of psychomotor health was higher in women than in men. The ability to handle stress decreased in direct proportion to the patients' age. Conclusion. Two-stage re-endoprosthetic treatment with implantation of an articulating antimicrobial spacer with a high grade of fixation is an effective treatment method for deep periprosthetic infection, which increases physical health and improves the social functioning of patients.

  19. Exploring non-signalling polytopes with negative probability

    International Nuclear Information System (INIS)

    Oas, G; Barros, J Acacio de; Carvalhaes, C

    2014-01-01

    Bipartite and tripartite EPR–Bell type systems are examined via joint quasi-probability distributions where probabilities are permitted to be negative. It is shown that such distributions exist only when the no-signalling condition is satisfied. A characteristic measure, the probability mass, is introduced and, via its minimization, limits the number of quasi-distributions describing a given marginal probability distribution. The minimized probability mass is shown to be an alternative way to characterize non-local systems. Non-signalling polytopes for two to eight settings in the bipartite scenario are examined and compared to prior work. Examining perfect cloning of non-local systems within the tripartite scenario suggests defining two categories of signalling. It is seen that many properties of non-local systems can be efficiently described by quasi-probability theory. (paper)

  20. Sampling design for the Study of Cardiovascular Risks in Adolescents (ERICA)

    Directory of Open Access Journals (Sweden)

    Mauricio Teixeira Leite de Vasconcellos

    2015-05-01

The Study of Cardiovascular Risk in Adolescents (ERICA) aims to estimate the prevalence of cardiovascular risk factors and metabolic syndrome in adolescents (12-17 years) enrolled in public and private schools of the 273 municipalities with over 100,000 inhabitants in Brazil. The study population was stratified into 32 geographical strata (27 capitals and five sets grouping the other municipalities of each macro-region of the country), and a sample of 1,251 schools was selected with probability proportional to size. In each school, three combinations of shift (morning and afternoon) and grade were selected, and within each of these combinations, one class was selected. All eligible students in the selected classes were included in the study. The design sampling weights were calculated as the product of the reciprocals of the inclusion probabilities at each sampling stage, and were later calibrated against the projections of the numbers of adolescents enrolled in schools located in the geographical strata, by sex and age.
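The weight calculation described above, the product of the reciprocals of the stage-wise inclusion probabilities followed by calibration to projected enrolment totals, can be sketched with made-up numbers:

```python
# Design weight for one sampled student: the product of the reciprocals of
# the inclusion probabilities at each sampling stage (school selected with
# probability proportional to size, then shift/grade combination, then
# class; every student in the class is taken, so that probability is 1).
# All numbers below are made up for illustration.

def design_weight(stage_probs):
    w = 1.0
    for p in stage_probs:
        w *= 1.0 / p
    return w

# e.g. school PPS prob 0.04, combination 1/2, class 1/4, student 1.0
w = design_weight([0.04, 0.5, 0.25, 1.0])
print(round(w, 1))  # 200.0

# Calibration: rescale the weights so they add up to the projected number
# of enrolled adolescents in the stratum (a sex-by-age cell).
weights = [w, w, 1.5 * w]
projected_total = 10_000
g = projected_total / sum(weights)
calibrated = [g * wi for wi in weights]
print(round(sum(calibrated), 1))  # 10000.0
```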