WorldWideScience

Sample records for two-stage stochastic combinatorial

  1. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    W. Romeijnders; L. Stougie (Leen); M. van der Vlerk

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value.

  2. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    Romeijnders, W.; Stougie, L.; van der Vlerk, M.H.

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value. However,
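
    Both records above concern the same formulation; for readers unfamiliar with it, the generic two-stage stochastic (integer) program with recourse studied in this line of work can be written as follows (a textbook statement, not the authors' specific notation):

```latex
\begin{align*}
\min_{x \in X}\; & c^\top x \;+\; \mathbb{E}_{\xi}\!\left[\, Q(x,\xi) \,\right],\\
\text{where}\quad Q(x,\xi) \;=\; \min_{y \in Y}\; & q(\xi)^\top y
\quad \text{s.t.}\quad W(\xi)\, y \;\ge\; h(\xi) - T(\xi)\, x .
\end{align*}
```

    The first-stage decision $x$ is fixed before the uncertainty $\xi$ is revealed, and the recourse decision $y$ is chosen afterwards; in the integer case, $X$ and/or $Y$ carry integrality restrictions, which is what makes the approximation question studied here hard.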

  3. A two-stage stochastic programming approach for operating multi-energy systems

    DEFF Research Database (Denmark)

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

    This paper provides a two-stage stochastic programming approach for joint operating multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...

  4. Two-stage stochastic programming model for the regional-scale electricity planning under demand uncertainty

    International Nuclear Information System (INIS)

    Huang, Yun-Hsun; Wu, Jung-Hua; Hsu, Yu-Ju

    2016-01-01

    Traditional electricity supply planning models regard the electricity demand as a deterministic parameter and require the total power output to satisfy the aggregate electricity demand. But in today's world, electric system planners are facing tremendously complex environments full of uncertainties, where electricity demand is a key source of uncertainty. In addition, electricity demand patterns are considerably different for different regions. This paper developed a multi-region optimization model based on a two-stage stochastic programming framework to incorporate the demand uncertainty. Furthermore, the decision tree method and Monte Carlo simulation approach are integrated into the model to simplify electricity demands in the form of nodes and determine their values and probabilities. The proposed model was successfully applied to a real case study (i.e. Taiwan's electricity sector) to show its applicability. Detailed simulation results were presented and compared with those generated by a deterministic model. Finally, the long-term electricity development roadmap at a regional level could be provided on the basis of our simulation results. - Highlights: • A multi-region, two-stage stochastic programming model has been developed. • The decision tree and Monte Carlo simulation are integrated into the framework. • Taiwan's electricity sector is used to illustrate the applicability of the model. • The results under deterministic and stochastic cases are shown for comparison. • Optimal portfolios of regional generation technologies can be identified.

  5. A two-stage stochastic programming model for the optimal design of distributed energy systems

    International Nuclear Information System (INIS)

    Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.

    2013-01-01

    Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using a genetic algorithm and a Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem, with a genetic algorithm performing the search on the first-stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using the proposed modelling framework and solution approach.
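
    A minimal sketch of the decomposition idea described above (a genetic-algorithm-style search over a first-stage design variable, with Monte Carlo sampling of the second stage) is given below. The capacity bounds, demand distribution, and cost coefficients are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

CAPEX = 120.0      # hypothetical cost per unit of installed capacity
PENALTY = 900.0    # hypothetical penalty per unit of unmet demand
N_SCENARIOS = 500  # Monte Carlo samples of the uncertain demand

def expected_cost(capacity: float) -> float:
    """First-stage investment cost plus a Monte Carlo estimate of second-stage penalties."""
    demand = rng.normal(loc=50.0, scale=10.0, size=N_SCENARIOS)  # assumed demand model
    unmet = np.maximum(demand - capacity, 0.0)
    return CAPEX * capacity + PENALTY * unmet.mean()

def evolutionary_search(pop_size=20, generations=40, sigma=5.0):
    """Toy (mu + lambda) evolutionary search over the first-stage capacity variable."""
    population = rng.uniform(10.0, 100.0, size=pop_size)
    for _ in range(generations):
        offspring = population + rng.normal(0.0, sigma, size=pop_size)   # mutation
        candidates = np.clip(np.concatenate([population, offspring]), 0.0, None)
        fitness = np.array([expected_cost(c) for c in candidates])
        population = candidates[np.argsort(fitness)[:pop_size]]          # selection
    return population[0], expected_cost(population[0])

best_capacity, best_cost = evolutionary_search()
print(f"best capacity ~ {best_capacity:.1f}, expected cost ~ {best_cost:.1f}")
```

    The noisy Monte Carlo fitness is re-sampled at every evaluation, mirroring the way the second stage is handled by simulation rather than by an exact recourse function.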

  6. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem considering some parameters of the linear constraints as interval-type discrete random variables with known probability distribution. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solution are analyzed in two-stage stochastic programming. To solve the stated problem, first we remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. Then the deterministic multiobjective model is solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst value, respectively. It highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.

  7. Risk averse optimal operation of a virtual power plant using two stage stochastic programming

    International Nuclear Information System (INIS)

    Tajeddini, Mohammad Amin; Rahimi-Kian, Ashkan; Soroudi, Alireza

    2014-01-01

    VPP (Virtual Power Plant) is defined as a cluster of energy conversion/storage units which are centrally operated in order to improve the technical and economic performance. This paper addresses the optimal operation of a VPP considering the risk factors affecting its daily operation profits. The optimal operation is modelled in both the day-ahead and balancing markets as a two-stage stochastic mixed integer linear programming problem in order to maximize a GenCo's (generation company's) expected profit. Furthermore, the CVaR (Conditional Value at Risk) is used as a risk measure technique in order to control the risk of low-profit scenarios. The uncertain parameters, including the PV power output, wind power output and day-ahead market prices, are modelled through scenarios. The proposed model is successfully applied to a real case study to show its applicability and the results are presented and thoroughly discussed. - Highlights: • Virtual power plant modelling considering a set of energy generating and conversion units. • Uncertainty modelling using a two-stage stochastic programming technique. • Risk modelling using conditional value at risk. • Flexible operation of renewable energy resources. • Electricity price uncertainty in day-ahead energy markets
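
    The CVaR measure used in this record is usually embedded in scenario-based two-stage models via the Rockafellar-Uryasev linearization; a generic statement (not the paper's exact notation, with $L_s$ the loss in scenario $s$ and $\pi_s$ its probability) is:

```latex
\mathrm{CVaR}_{\alpha}(L)
\;=\; \min_{\zeta \in \mathbb{R}} \left\{ \zeta + \tfrac{1}{1-\alpha}\, \mathbb{E}\!\left[(L-\zeta)^{+}\right] \right\}
\;\;\approx\;\;
\min_{\zeta,\; \eta_s \ge 0} \;\; \zeta + \frac{1}{1-\alpha} \sum_{s} \pi_s\, \eta_s
\quad \text{s.t.}\quad \eta_s \;\ge\; L_s - \zeta \quad \forall s .
```

    Because the right-hand side is linear in $\zeta$ and $\eta_s$, it can be added directly to a two-stage stochastic MILP such as the VPP model described above.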

  8. Optimal design of distributed energy resource systems based on two-stage stochastic programming

    International Nuclear Information System (INIS)

    Yang, Yun; Zhang, Shijie; Xiao, Yunhan

    2017-01-01

    Highlights: • A two-stage stochastic programming model is built to design DER systems under uncertainties. • Uncertain energy demands have a significant effect on the optimal design. • Uncertain energy prices and renewable energy intensity have little effect on the optimal design. • The economy is overestimated if the system is designed without considering the uncertainties. • The uncertainty in energy prices has the significant and greatest effect on the economy. - Abstract: Multiple uncertainties exist in the optimal design of distributed energy resource (DER) systems. The expected energy, economic, and environmental benefits may not be achieved and a deficit in energy supply may occur if the uncertainties are not handled properly. This study focuses on the optimal design of DER systems with consideration of the uncertainties. A two-stage stochastic programming model is built in consideration of the discreteness of equipment capacities, equipment partial load operation and output bounds as well as of the influence of ambient temperature on gas turbine performance. The stochastic model is then transformed into its deterministic equivalent and solved. For an illustrative example, the model is applied to a hospital in Lianyungang, China. Comparative studies are performed to evaluate the effect of the uncertainties in load demands, energy prices, and renewable energy intensity separately and simultaneously on the system’s economy and optimal design. Results show that the uncertainties in load demands have a significant effect on the optimal system design, whereas the uncertainties in energy prices and renewable energy intensity have almost no effect. Results regarding economy show that it is obviously overestimated if the system is designed without considering the uncertainties.

  9. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of supply chain with multiple factories and distributors for perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use sample average approximation (SAA) method to approximate the expected values of the underlying r...

  10. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and
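
    The Bayesian updating of elicited expert probabilities mentioned above can be illustrated with a minimal conjugate example; the Beta prior parameters and the stage-one monitoring outcome below are hypothetical stand-ins, not the StormWISE inputs.

```python
from scipy import stats

# Hypothetical elicited prior: probability that a rain garden meets its runoff-reduction target.
prior_alpha, prior_beta = 4.0, 2.0        # assumed expert-elicitation values

# Hypothetical stage-one monitoring data: 12 of 15 installed units met the target.
successes, trials = 12, 15

# Conjugate Beta-Binomial update applied before re-solving the second-stage problem.
post_alpha = prior_alpha + successes
post_beta = prior_beta + (trials - successes)
posterior = stats.beta(post_alpha, post_beta)

print(f"prior mean      = {prior_alpha / (prior_alpha + prior_beta):.3f}")
print(f"posterior mean  = {posterior.mean():.3f}")
print(f"90% credible interval = {posterior.interval(0.90)}")
```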

  11. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    Science.gov (United States)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second-stage model recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model, (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences for recovering the scrambled sequence, (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases, (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model, (v) as expected, the rule-based model holds more inventory than the optimization model.

  12. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in the water resources system, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with the risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model was a hybrid methodology of interval-parameter programming, a conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It could not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.

  13. A Two-Stage Maximum Entropy Prior of Location Parameter with a Stochastic Multivariate Interval Constraint and Its Properties

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2016-05-01

    This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint of the location parameter of a scale mixture of normal model. Using Shannon's entropy, this study demonstrates how the prior, obtained by using two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior plays the role of bridging the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions that is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN), which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model and demonstrates the scope of its applicability.

  14. Effects of Risk Aversion on Market Outcomes: A Stochastic Two-Stage Equilibrium Model

    DEFF Research Database (Denmark)

    Kazempour, Jalal; Pinson, Pierre

    2016-01-01

    This paper evaluates how different risk preferences of electricity producers alter the market-clearing outcomes. Toward this goal, we propose a stochastic equilibrium model for electricity markets with two settlements, i.e., day-ahead and balancing, in which a number of conventional and stochastic...... by its optimality conditions, resulting in a mixed complementarity problem. Numerical results from a case study based on the IEEE one-area reliability test system are derived and discussed....

  15. An Efficient Robust Solution to the Two-Stage Stochastic Unit Commitment Problem

    DEFF Research Database (Denmark)

    Blanco, Ignacio; Morales González, Juan Miguel

    2017-01-01

    This paper proposes a reformulation of the scenario-based two-stage unit commitment problem under uncertainty that allows finding unit-commitment plans that perform reasonably well both in expectation and for the worst-case realization of the uncertainties. The proposed reformulation is based on part...

  16. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into a one-stage stochastic model equivalently; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
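
    As a minimal sketch of the sample average approximation step described in this record (not the authors' smoothing algorithm or SSD constraint), the expected recourse cost of a candidate first-stage decision can be replaced by an average over sampled scenarios; the newsvendor-style cost function and demand distribution below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

ORDER_COST, SHORTAGE_COST, WASTE_COST = 3.0, 10.0, 1.0   # hypothetical unit costs

def recourse_cost(order_qty: float, demand: np.ndarray) -> np.ndarray:
    """Second-stage cost per scenario: shortage penalty plus disposal of spoiled surplus."""
    shortage = np.maximum(demand - order_qty, 0.0)
    surplus = np.maximum(order_qty - demand, 0.0)
    return SHORTAGE_COST * shortage + WASTE_COST * surplus

def saa_objective(order_qty: float, n_samples: int = 2000) -> float:
    """SAA estimate of the first-stage cost plus the expected recourse cost."""
    demand = rng.gamma(shape=9.0, scale=10.0, size=n_samples)   # assumed demand model
    return ORDER_COST * order_qty + recourse_cost(order_qty, demand).mean()

# Crude search over a grid of first-stage decisions; a real solver would exploit convexity.
grid = np.linspace(0, 250, 251)
values = [saa_objective(q) for q in grid]
best = grid[int(np.argmin(values))]
print(f"SAA-optimal order quantity ~ {best:.0f}")
```

    The convergence result cited in the abstract concerns exactly this replacement: as the sample size grows, the minimizer of the sampled objective approaches that of the true expected-cost problem.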

  17. A primal-dual decomposition based interior point approach to two-stage stochastic linear programming

    NARCIS (Netherlands)

    A.B. Berkelaar (Arjan); C.L. Dert (Cees); K.P.B. Oldenkamp; S. Zhang (Shuzhong)

    1999-01-01

    Decision making under uncertainty is a challenge faced by many decision makers. Stochastic programming is a major tool developed to deal with optimization with uncertainties that has found applications in, e.g. finance, such as asset-liability and bond-portfolio management.

  18. Stochastic Real-World Drive Cycle Generation Based on a Two Stage Markov Chain Approach

    NARCIS (Netherlands)

    Balau, A.E.; Kooijman, D.; Vazquez Rodarte, I.; Ligterink, N.

    2015-01-01

    This paper presents a methodology and tool that stochastically generates drive cycles based on measured data, with the purpose of testing and benchmarking light duty vehicles in a simulation environment or on a test-bench. The WLTP database, containing real world driving measurements, was used as
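
    A drastically simplified sketch of the Markov-chain idea behind this record (not the actual two-stage tool or the WLTP data) is shown below; the three speed states and the transition matrix are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical speed states (km/h) and first-order transition probabilities;
# in the real method these would be estimated from measured driving data.
states = np.array([0.0, 30.0, 70.0])            # idle, urban, highway
transition = np.array([[0.70, 0.25, 0.05],
                       [0.15, 0.70, 0.15],
                       [0.05, 0.25, 0.70]])

def generate_cycle(n_seconds: int = 600, start_state: int = 0) -> np.ndarray:
    """Sample a synthetic speed trace by walking the Markov chain once per second."""
    idx = start_state
    speeds = np.empty(n_seconds)
    for t in range(n_seconds):
        speeds[t] = states[idx]
        idx = rng.choice(len(states), p=transition[idx])
    return speeds

cycle = generate_cycle()
print(f"mean speed of generated cycle: {cycle.mean():.1f} km/h")
```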

  19. An inexact two-stage stochastic robust programming for residential micro-grid management-based on random demand

    International Nuclear Information System (INIS)

    Ji, L.; Niu, D.X.; Huang, G.H.

    2014-01-01

    In this paper a stochastic robust optimization problem of residential micro-grid energy management is presented. Combined cooling, heating and electricity technology (CCHP) is introduced to satisfy various energy demands. Two-stage programming is utilized to find the optimal installed capacity investment and operation control of the CCHP (combined cooling, heating and power) system. Moreover, interval programming and robust stochastic optimization methods are exploited to obtain interval robust solutions under different robustness levels which are feasible for uncertain data. The obtained results can help micro-grid managers minimize the investment and operation cost with lower system failure risk when facing a fluctuating energy market and uncertain technology parameters. The different robustness levels reflect the risk preference of the micro-grid manager. The proposed approach is applied to residential area energy management in North China. Detailed computational results under different robustness levels are presented and analyzed for providing investment decisions and operation strategies. - Highlights: • An inexact two-stage stochastic robust programming model for CCHP management. • The energy market and technical parameter uncertainties were considered. • Investment decision, operation cost, and system safety were analyzed. • Uncertainties expressed as discrete intervals and probability distributions

  20. Capacity expansion of stochastic power generation under two-stage electricity markets

    DEFF Research Database (Denmark)

    Pineda, Salvador; Morales González, Juan Miguel

    2016-01-01

    ... of stochastic power generating units. This framework includes the explicit representation of a day-ahead and a balancing market-clearing mechanism to properly capture the impact of forecast errors of power production on the short-term operation of a power system. The proposed generation expansion problems are first formulated from the standpoint of a social planner to characterize a perfectly competitive market. We investigate the effect of two paradigmatic market designs on generation expansion planning: a day-ahead market that is cleared following a conventional cost merit-order principle, and an ideal market-clearing procedure that determines day-ahead dispatch decisions accounting for their impact on balancing operation costs. Furthermore, we reformulate the proposed models to determine the optimal expansion decisions that maximize the profit of a collusion of stochastic power producers in order...

  1. A simulation-based interval two-stage stochastic model for agricultural nonpoint source pollution control through land retirement

    International Nuclear Information System (INIS)

    Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.

    2006-01-01

    This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by the development of an interval two-stage stochastic program, with its random parameters being provided by the statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concern under random effluent discharge for a land retirement scheme through minimizing the expected value of long-term total economic and environmental cost. In addition, the uncertainties presented as interval numbers in the agriculture-water system can be effectively quantified with interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-sensitive cropland can be identified and an optimal land retirement scheme can be obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. Obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under various uncertainties

  2. A Two-Stage Stochastic Mixed-Integer Programming Approach to the Smart House Scheduling Problem

    Science.gov (United States)

    Ozoe, Shunsuke; Tanaka, Yoichi; Fukushima, Masao

    A “Smart House” is a highly energy-optimized house equipped with photovoltaic systems (PV systems), electric battery systems, fuel cell cogeneration systems (FC systems), electric vehicles (EVs) and so on. Smart houses are attracting much attention recently thanks to their enhanced ability to save energy by making full use of renewable energy and by achieving power grid stability despite an increased power draw for installed PV systems. Yet running a smart house's power system, with its multiple power sources and power storages, is no simple task. In this paper, we consider the problem of power scheduling for a smart house with a PV system, an FC system and an EV. We formulate the problem as a mixed integer programming problem, and then extend it to a stochastic programming problem involving recourse costs to cope with uncertain electricity demand, heat demand and PV power generation. Using our method, we seek to achieve the optimal power schedule running at the minimum expected operation cost. We present some results of numerical experiments with data on real-life demands and PV power generation to show the effectiveness of our method.

  3. An inexact two-stage stochastic energy systems planning model for managing greenhouse gas emission at a municipal level

    International Nuclear Information System (INIS)

    Lin, Q.G.; Huang, G.H.

    2010-01-01

    Energy management systems are highly complicated with greenhouse-gas emission reduction issues and a variety of social, economic, political, environmental and technical factors. To address such complexities, municipal energy systems planning models are desired as they can take account of these factors and their interactions within municipal energy management systems. This research develops an interval-parameter two-stage stochastic municipal energy systems planning model (ITS-MEM) for supporting decisions of energy systems planning and GHG (greenhouse gases) emission management at a municipal level. ITS-MEM is then applied to a case study. The results indicated that the developed model was capable of supporting municipal energy systems planning and environmental management under uncertainty. Solutions of ITS-MEM would provide an effective linkage between the pre-regulated environmental policies (GHG-emission reduction targets) and the associated economic implications (GHG-emission credit trading).

  4. An inexact fuzzy two-stage stochastic model for quantifying the efficiency of nonpoint source effluent trading under uncertainty

    International Nuclear Information System (INIS)

    Luo, B.; Maqsood, I.; Huang, G.H.; Yin, Y.Y.; Han, D.J.

    2005-01-01

    Reduction of nonpoint source (NPS) pollution from agricultural lands is a major concern in most countries. One method to reduce NPS pollution is through land retirement programs. This method, however, may result in enormous economic costs especially when large sums of croplands need to be retired. To reduce the cost, effluent trading can be employed to couple with land retirement programs. However, the trading efforts can also become inefficient due to various uncertainties existing in stochastic, interval, and fuzzy formats in agricultural systems. Thus, it is desired to develop improved methods to effectively quantify the efficiency of potential trading efforts by considering those uncertainties. In this respect, this paper presents an inexact fuzzy two-stage stochastic programming model to tackle such problems. The proposed model can facilitate decision-making to implement trading efforts for agricultural NPS pollution reduction through land retirement programs. The applicability of the model is demonstrated through a hypothetical effluent trading program within a subcatchment of the Lake Tai Basin in China. The study results indicate that the efficiency of the trading program is significantly influenced by precipitation amount, agricultural activities, and level of discharge limits of pollutants. The results also show that the trading program will be more effective for low precipitation years and with stricter discharge limits

  5. Combined Two-Stage Stochastic Programming and Receding Horizon Control Strategy for Microgrid Energy Management Considering Uncertainty

    Directory of Open Access Journals (Sweden)

    Zhongwen Li

    2016-06-01

    Microgrids (MGs) are presented as a cornerstone of smart grids. With the potential to integrate intermittent renewable energy sources (RES) in a flexible and environmentally friendly way, the MG concept has gained even more attention. Due to the randomness of RES, load, and electricity price in an MG, the forecast errors of MGs will affect the performance of the power scheduling and the operating cost of an MG. In this paper, a combined stochastic programming and receding horizon control (SPRHC) strategy is proposed for microgrid energy management under uncertainty, which combines the advantages of two-stage stochastic programming (SP) and a receding horizon control (RHC) strategy. With an SP strategy, a scheduling plan can be derived that minimizes the risk of uncertainty by involving the uncertainty of the MG in the optimization model. With an RHC strategy, the uncertainty within the MG can be further compensated through a feedback mechanism with the lately updated forecast information. In our approach, a proper strategy is also proposed to maintain the SP model as a mixed integer linear constrained quadratic programming (MILCQP) problem, which is solvable without resorting to any heuristic algorithms. The results of numerical experiments explicitly demonstrate the superiority of the proposed strategy for both islanded and grid-connected operating modes of an MG.
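
    To make the combined SP/RHC idea concrete, the toy receding-horizon loop below re-decides a battery action each hour using a freshly drawn price forecast for the look-ahead window; the battery limits, prices, and the deliberately trivial decision rule are assumptions for illustration, not the paper's MILCQP model.

```python
import numpy as np

rng = np.random.default_rng(1)

HOURS, HORIZON = 24, 6
CAPACITY, RATE = 10.0, 2.0                      # assumed battery size (kWh) and power (kW)
price_true = 30 + 10 * np.sin(np.arange(HOURS) * np.pi / 12) + rng.normal(0, 2, HOURS)

soc, cost = 5.0, 0.0
for t in range(HOURS):
    # Forecast for the look-ahead window: true prices plus noise, re-drawn each step,
    # standing in for the updated scenario information used by the RHC feedback loop.
    window = slice(t, min(t + HORIZON, HOURS))
    forecast = price_true[window] + rng.normal(0, 2, window.stop - window.start)

    # Trivial surrogate for the re-optimized first-stage decision: discharge when the
    # current price looks expensive relative to the forecast window, otherwise charge.
    if forecast[0] >= np.median(forecast) and soc >= RATE:
        soc -= RATE
        cost -= RATE * price_true[t]            # revenue from discharging
    elif soc + RATE <= CAPACITY:
        soc += RATE
        cost += RATE * price_true[t]            # cost of charging

print(f"final state of charge: {soc:.1f} kWh, net energy cost: {cost:.1f}")
```

    Only the first decision of each window is ever implemented; the horizon then rolls forward one hour, which is the essence of the receding-horizon feedback described above.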

  6. River water quality management considering agricultural return flows: application of a nonlinear two-stage stochastic fuzzy programming.

    Science.gov (United States)

    Tavakoli, Ali; Nikoo, Mohammad Reza; Kerachian, Reza; Soltani, Maryam

    2015-04-01

    In this paper, a new fuzzy methodology is developed to optimize water and waste load allocation (WWLA) in rivers under uncertainty. An interactive two-stage stochastic fuzzy programming (ITSFP) method is utilized to handle parameter uncertainties, which are expressed as fuzzy boundary intervals. An iterative linear programming (ILP) is also used for solving the nonlinear optimization model. To accurately consider the impacts of the water and waste load allocation strategies on the river water quality, a calibrated QUAL2Kw model is linked with the WWLA optimization model. The soil, water, atmosphere, and plant (SWAP) simulation model is utilized to determine the quantity and quality of each agricultural return flow. To control pollution loads of agricultural networks, it is assumed that a part of each agricultural return flow can be diverted to an evaporation pond and also another part of it can be stored in a detention pond. In detention ponds, contaminated water is exposed to solar radiation for disinfecting pathogens. Results of applying the proposed methodology to the Dez River system in the southwestern region of Iran illustrate its effectiveness and applicability for water and waste load allocation in rivers. In the planning phase, this methodology can be used for estimating the capacities of return flow diversion system and evaporation and detention ponds.

  7. A production planning model considering uncertain demand using two-stage stochastic programming in a fresh vegetable supply chain context.

    Science.gov (United States)

    Mateo, Jordi; Pla, Lluis M; Solsona, Francesc; Pagès, Adela

    2016-01-01

    Production planning models are attracting more interest for use in the primary sector of the economy. The proposed model relies on the formulation of a location model representing a set of farms susceptible of being selected by a grocery shop brand to supply local fresh products under seasonal contracts. The main aim is to minimize overall procurement costs and meet future demand. This kind of problem is rather common in fresh vegetable supply chains where producers are located in proximity either to processing plants or retailers. The proposed two-stage stochastic model determines which suppliers should be selected for production contracts to ensure high quality products and minimal time from farm to table. Moreover, Lagrangian relaxation and parallel computing algorithms are proposed to solve these instances efficiently in a reasonable computational time. The results obtained show computational gains from our algorithmic proposals compared with using the plain CPLEX solver. Furthermore, the results ensure the competitive advantages of using the proposed model by purchase managers in the fresh vegetables industry.

  8. Implementation of equity in resource allocation for regional earthquake risk mitigation using two-stage stochastic programming.

    Science.gov (United States)

    Zolfaghari, Mohammad R; Peyghaleh, Elnaz

    2015-03-01

    This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decisionmakers (government and authorities) to structure budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two-stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in model formulation. These constraints limit inequity to the user-defined level to achieve the equity-efficiency tradeoff in the decision-making process. To present practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. The risk-return tradeoff, equity-reconstruction expenditures tradeoff, and variation of per-capita expected earthquake loss in different income classes are also presented. © 2015 Society for Risk Analysis.

  9. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMING APPROACH

    OpenAIRE

    Cunha, P.S.A.; Oliveira, F.; Raupp, Fernanda M.P.

    2017-01-01

    Here, we propose a novel methodology for replenishment and control systems for inventories of two-echelon logistics networks using a two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer services, we introduce a variable rationing rule to address quantities of the item in short. The devised models are reformulated into their deterministic equivalent, resulting in nonlinear mixed-integer programming models, which a...

  10. A novel two-stage stochastic programming model for uncertainty characterization in short-term optimal strategy for a distribution company

    International Nuclear Information System (INIS)

    Ahmadi, Abdollah; Charwand, Mansour; Siano, Pierluigi; Nezhad, Ali Esmaeel; Sarno, Debora; Gitizadeh, Mohsen; Raeisi, Fatima

    2016-01-01

    In order to supply the demands of the end users in a competitive market, a distribution company purchases energy from the wholesale market, while other options are accessible if it possesses distributed generation units and interruptible loads. In this regard, this study presents a two-stage stochastic programming model for a distribution company energy acquisition market model to manage the involvement of different electric energy resources characterized by uncertainties with the minimum cost. In particular, the distribution company's operations planning over a day-ahead horizon is modeled as a stochastic mathematical optimization, with the objective of minimizing costs. By this, distribution company decisions on grid purchase, owned distributed generation units and interruptible load scheduling are determined. Then, these decisions are considered as boundary constraints to a second step, which deals with the distribution company's operations in the hour-ahead market with the objective of minimizing the short-term cost. The uncertainties in spot market prices and wind speed are modeled by means of probability distribution functions of their forecast errors, and the roulette wheel mechanism and lattice Monte Carlo simulation are used to generate scenarios. Numerical results show the capability of the proposed method. - Highlights: • Proposing a new stochastic-based two-stage operations framework in retail competitive markets. • Proposing a Mixed Integer Non-Linear stochastic programming model. • Employing roulette wheel mechanism and Lattice Monte Carlo Simulation.

  11. Risk-Based Two-Stage Stochastic Optimization Problem of Micro-Grid Operation with Renewables and Incentive-Based Demand Response Programs

    Directory of Open Access Journals (Sweden)

    Pouria Sheikhahmadi

    2018-03-01

    The operation problem of a micro-grid (MG) in grid-connected mode is an optimization one in which the main objective of the MG operator (MGO) is to minimize the operation cost with optimal scheduling of resources and optimal trading energy with the main grid. The MGO can use incentive-based demand response programs (DRPs) to pay an incentive to the consumers to change their demands in the peak hours. Moreover, the MGO forecasts the output power of renewable energy resources (RERs) and models their uncertainties in its problem. In this paper, the operation problem of an MGO is modeled as a risk-based two-stage stochastic optimization problem. To model the uncertainties of RERs, two-stage stochastic programming is considered and the conditional value at risk (CVaR) index is used to manage the MGO’s risk-level. Moreover, the non-linear economic models of incentive-based DRPs are used by the MGO to change the peak load. The numerical studies are done to investigate the effect of incentive-based DRPs on the operation problem of the MGO. Moreover, to show the effect of the risk-averse parameter on MGO decisions, a sensitivity analysis is carried out.

  12. Electricity price forecast using Combinatorial Neural Network trained by a new stochastic search method

    International Nuclear Information System (INIS)

    Abedinia, O.; Amjady, N.; Shafie-khah, M.; Catalão, J.P.S.

    2015-01-01

    Highlights: • Presenting a Combinatorial Neural Network. • Suggesting a new stochastic search method. • Adapting the suggested method as a training mechanism. • Proposing a new forecast strategy. • Testing the proposed strategy on real-world electricity markets. - Abstract: Electricity price forecast is key information for successful operation of electricity market participants. However, the time series of electricity price has nonlinear, non-stationary and volatile behaviour and so its forecast method should have high learning capability to extract the complex input/output mapping function of electricity price. In this paper, a Combinatorial Neural Network (CNN) based forecasting engine is proposed to predict the future values of price data. The CNN-based forecasting engine is equipped with a new training mechanism for optimizing the weights of the CNN. This training mechanism is based on an efficient stochastic search method, which is a modified version of chemical reaction optimization algorithm, giving high learning ability to the CNN. The proposed price forecast strategy is tested on the real-world electricity markets of Pennsylvania–New Jersey–Maryland (PJM) and mainland Spain and its obtained results are extensively compared with the results obtained from several other forecast methods. These comparisons illustrate effectiveness of the proposed strategy.
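
    The paper's training mechanism is a modified chemical reaction optimization, which is not reproduced here; the sketch below only illustrates the general idea of fitting a small forecasting network by stochastic search (accepting a random weight perturbation when it reduces the error), with synthetic price data and an arbitrary network size.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic hourly "price" series and 24-hour lagged inputs (stand-ins for market data).
prices = 40 + 5 * np.sin(np.arange(300) / 6) + rng.normal(0, 1, 300)
X = np.stack([prices[i:i + 24] for i in range(len(prices) - 25)])
y = prices[24:-1]                      # next-hour target for each 24-hour window

def forward(w, X):
    """Tiny one-hidden-layer network: 24 inputs -> 8 tanh units -> 1 output."""
    W1, b1, W2, b2 = w
    return np.tanh(X @ W1 + b1) @ W2 + b2

def init_weights():
    return [rng.normal(0, 0.1, (24, 8)), np.zeros(8), rng.normal(0, 0.1, 8), 0.0]

def mse(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# Stochastic search: keep a random perturbation only if it improves the fit.
w = init_weights()
best = mse(w)
for _ in range(3000):
    trial = [p + rng.normal(0, 0.02, np.shape(p)) for p in w]
    err = mse(trial)
    if err < best:
        w, best = trial, err

print(f"training MSE after stochastic search: {best:.2f}")
```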

  13. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMING APPROACH

    Directory of Open Access Journals (Sweden)

    P.S.A. Cunha

    Here, we propose a novel methodology for replenishment and control systems for inventories of two-echelon logistics networks using a two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer services, we introduce a variable rationing rule to address quantities of the item in short. The devised models are reformulated into their deterministic equivalent, resulting in nonlinear mixed-integer programming models, which are then approximately linearized. To deal with the uncertain nature of the item demand levels, we apply a Monte Carlo simulation-based method to generate finite and discrete sets of scenarios. Moreover, the proposed approach does not require restrictive assumptions on the behavior of the probabilistic phenomena, as do several existing methods in the literature. Numerical experiments with the proposed approach for randomly generated instances of the problem show results with errors around 1%.

  14. Design of problem-specific evolutionary algorithm/mixed-integer programming hybrids: two-stage stochastic integer programming applied to chemical batch scheduling

    Science.gov (United States)

    Urselmann, Maren; Emmerich, Michael T. M.; Till, Jochen; Sand, Guido; Engell, Sebastian

    2007-07-01

    Engineering optimization often deals with large, mixed-integer search spaces with a rigid structure due to the presence of a large number of constraints. Metaheuristics, such as evolutionary algorithms (EAs), are frequently suggested as solution algorithms in such cases. In order to exploit the full potential of these algorithms, it is important to choose an adequate representation of the search space and to integrate expert knowledge into the stochastic search operators, without adding unnecessary bias to the search. Moreover, hybridisation with mathematical programming techniques such as mixed-integer programming (MIP) based on a problem decomposition can be considered for improving algorithmic performance. In order to design problem-specific EAs, it is desirable to have a set of design guidelines that specify properties of search operators and representations. Recently, a set of guidelines has been proposed that gives rise to so-called Metric-based EAs (MBEAs). Extended by the minimal moves mutation, they allow for a generalization of EAs with self-adaptive mutation strength in discrete search spaces. In this article, a problem-specific EA for a process engineering task is designed, following the MBEA guidelines and minimal moves mutation. Against the background of the application, the usefulness of the design framework is discussed, and further extensions and corrections are proposed. As a case study, a two-stage stochastic programming problem in chemical batch process scheduling is considered. The algorithm design problem can be viewed as the choice of a hierarchical decision structure, where on different layers of the decision process symmetries and similarities can be exploited for the design of minimal moves. After a discussion of the design approach and its instantiation for the case study, the resulting problem-specific EA/MIP is compared to a straightforward application of a canonical EA/MIP and to a monolithic mathematical programming algorithm. In view of the

  15. A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems

    Directory of Open Access Journals (Sweden)

    Angel A. Juan

    2015-12-01

    Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need for obtaining high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but they frequently assume that the problem inputs, the underlying objective function, and the set of optimization constraints are deterministic. However, uncertainty is all around us, which often makes deterministic models oversimplified versions of real-life systems. After completing an extensive review of related work, this paper describes a general methodology that allows for extending metaheuristics through simulation to solve stochastic COPs. 'Simheuristics' allow modelers to deal with real-life uncertainty in a natural way by integrating simulation (in any of its variants) into a metaheuristic-driven framework. These optimization-driven algorithms rely on the fact that efficient metaheuristics already exist for the deterministic version of the corresponding COP. Simheuristics also facilitate the introduction of risk and/or reliability analysis criteria during the assessment of alternative high-quality solutions to stochastic COPs. Several examples of applications in different fields illustrate the potential of the proposed methodology.
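
    A minimal simheuristic loop in the spirit of this review might wrap a deterministic metaheuristic move (here, a 2-opt segment reversal for a toy routing instance) around Monte Carlo evaluation of the uncertain objective; the instance and the lognormal travel-time noise are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

N = 12
coords = rng.uniform(0, 100, size=(N, 2))                     # toy customer locations
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

def simulate_tour_cost(tour, n_sims=200):
    """Monte Carlo estimate of the expected tour duration with lognormal travel-time noise."""
    legs = dist[tour, np.roll(tour, -1)]                      # leg distances along the tour
    noise = rng.lognormal(mean=0.0, sigma=0.25, size=(n_sims, N))
    return float((legs * noise).sum(axis=1).mean())

# Simheuristic loop: deterministic neighbourhood move + stochastic (simulated) evaluation.
tour = np.arange(N)
best_cost = simulate_tour_cost(tour)
for _ in range(2000):
    i, j = sorted(rng.choice(N, size=2, replace=False))
    candidate = tour.copy()
    candidate[i:j + 1] = candidate[i:j + 1][::-1]             # 2-opt segment reversal
    cost = simulate_tour_cost(candidate)
    if cost < best_cost:
        tour, best_cost = candidate, cost

print("best tour:", tour.tolist())
print(f"estimated expected duration: {best_cost:.1f}")
```

    The same simulation pass could also return a risk measure (e.g. the 95th percentile of tour duration) so that alternative high-quality tours are ranked by reliability as well as expected cost, as the review suggests.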

  16. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  17. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  18. Two-stage stochastic day-ahead optimal resource scheduling in a distribution network with intensive use of distributed energy resources

    DEFF Research Database (Denmark)

    Sousa, Tiago; Ghazvini, Mohammad Ali Fotouhi; Morais, Hugo

    2015-01-01

    The integration of renewable sources and electric vehicles will introduce new uncertainties to the optimal resource scheduling, namely at the distribution level. These uncertainties mainly originate from the power generated by renewable sources and from the electric vehicles' charging requirements.... This paper proposes a two-stage stochastic programming approach to solve the day-ahead optimal resource scheduling problem. The case study considers a 33-bus distribution network with 66 distributed generation units and 1000 electric vehicles....

  19. ANALYSIS OF THE FACTORS AFFECTING THE EFFICIENCY OF ISLAMIC COMMERCIAL BANKS (BANK UMUM SYARIAH) IN INDONESIA USING A TWO-STAGE STOCHASTIC FRONTIER APPROACH (An Analytical Study of Islamic Commercial Banks)

    Directory of Open Access Journals (Sweden)

    Wahab Wahab

    2015-10-01

    The performance of Islamic banking can be measured using efficiency as one of its parameters. The efficiency of the sampled Islamic commercial banks over 2006-2009 was 0.9467, which reflects the efficiency level of Islamic banks during that period. The highest efficiency was recorded at Bank Syariah Mandiri (BSM) in 2009, at 0.9631, which is very close to the optimal efficiency value. This study uses the parametric Stochastic Frontier Approach (SFA) to measure the efficiency of BSM. The variables examined are ROA, CAR, FDR, BOPO, PPAP, and NPF. The results show that Return on Assets (ROA) has a positive but insignificant effect, Capital Adequacy Ratio (CAR) has a positive but insignificant effect, Financing to Deposit Ratio (FDR) has a significant positive effect, the ratio of operating expenses to operating income (BOPO) has a negative but insignificant effect, the allowance for earning asset losses (PPAP) has a positive but insignificant effect, and Non-Performing Financing (NPF) has a negative but insignificant effect on the efficiency level of Bank Syariah Mandiri under the SFA approach.

  20. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  1. A robust decision-making approach for p-hub median location problems based on two-stage stochastic programming and mean-variance theory : a real case study

    NARCIS (Netherlands)

    Ahmadi, T.; Karimi, H.; Davoudpour, H.

    2015-01-01

    The stochastic location-allocation p-hub median problems are related to long-term decisions made in risky situations. Due to the importance of this type of problems in real-world applications, the authors were motivated to propose an approach to obtain more reliable policies in stochastic

  2. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  3. Combinatorial chemistry

    DEFF Research Database (Denmark)

    Nielsen, John

    1994-01-01

    An overview of combinatorial chemistry is presented. Combinatorial chemistry, sometimes referred to as `irrational drug design,' involves the generation of molecular diversity. The resulting chemical library is then screened for biologically active compounds.

  4. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which uses a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  5. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-12-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which uses a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs that are used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  6. Two-stage precipitation of plutonium trifluoride

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-04-01

    Plutonium trifluoride was precipitated using a two-stage precipitation system. A series of precipitation experiments identified the significant process variables affecting precipitate characteristics. A mathematical precipitation model was developed which was based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter that can be used to control particle characteristics

  7. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  8. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds and the level of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer.

  9. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite-impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly-developed system-test statistic validates the system under different computer-operating environments

  10. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two stage-type railgun accelerator capable of injecting a flying body (an ice pellet formed by solidifying a gaseous hydrogen isotope used as fuel for a thermonuclear reactor) at higher speed into the central portion of the plasma. Namely, the two stage-type railgun accelerator accelerates the flying body injected from an initial-stage accelerator along the region between the rails by the Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoids are disposed to compress the plasma armature in the longitudinal direction of the rails. The first and the second sets of solenoid coils are supplied with electric current beforehand. After the flying body has passed, the armature, formed into a plasma by a gas laser disposed behind the flying body, is compressed in the longitudinal direction of the rails by the magnetic force of the first and second sets of solenoid coils, increasing the plasma density. The current density is also increased simultaneously. Then, the first solenoid coil current is turned OFF so that the compressed plasma armature accelerates the flying body in two stages. (I.S.)

  11. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June, 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, attainable power output, and constraints on laser operation were not known. The primary reason for supporting this research at that time was that it had the potential for producing short-wavelength radiation using a relatively low voltage electron beam. One advantage of a low-voltage two-stage FEL would be that shielding requirements would be greatly reduced compared with single-stage short-wavelength FEL's. If the electron energy were kept below about 10 MeV, X-rays, generated by electrons striking the beam line wall, would not excite neutron resonance in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  12. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossings slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
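
    The sketch below is one way to read the two-stage idea in this record: a first stage that ranks sequences of departure class slots by simulated throughput under randomly perturbed separation times, and a second stage that fills the chosen slots with specific flights. It is only a toy illustration; the separation times and flight data are invented, and a greedy assignment stands in for the integer program solved by Branch & Bound in the paper.

```python
# Illustrative sketch of a two-stage runway-planning heuristic (hypothetical data).
# Stage 1 ranks sequences of departure *class* slots by simulated throughput;
# stage 2 fills the best sequence with specific flights. The paper's second stage
# uses an integer program solved by Branch & Bound; a greedy assignment stands in here.
import itertools
import random

# Mean separation/occupancy time (seconds) when one class follows another (assumed values).
SEPARATION = {('Heavy', 'Heavy'): 90, ('Heavy', 'Large'): 120, ('Heavy', 'Small'): 150,
              ('Large', 'Heavy'): 60, ('Large', 'Large'): 90, ('Large', 'Small'): 120,
              ('Small', 'Heavy'): 60, ('Small', 'Large'): 60, ('Small', 'Small'): 90}

def simulated_makespan(class_sequence, n_draws=200, jitter=0.15):
    """Expected time to clear the sequence, with random perturbation of separations."""
    total = 0.0
    for _ in range(n_draws):
        t = 0.0
        for prev, nxt in zip(class_sequence, class_sequence[1:]):
            t += SEPARATION[(prev, nxt)] * random.uniform(1 - jitter, 1 + jitter)
        total += t
    return total / n_draws

def stage_one(class_counts):
    """Rank all distinct orderings of the class slots by expected makespan."""
    slots = [c for c, n in class_counts.items() for _ in range(n)]
    candidates = set(itertools.permutations(slots))
    return sorted(candidates, key=simulated_makespan)

def stage_two(best_sequence, flights):
    """Greedily assign the earliest-ready flight of the required class to each slot."""
    pools = {}
    for flight_id, cls, ready_time in flights:
        pools.setdefault(cls, []).append((ready_time, flight_id))
    for cls in pools:
        pools[cls].sort()
    return [(slot_cls, pools[slot_cls].pop(0)[1]) for slot_cls in best_sequence]

if __name__ == "__main__":
    random.seed(1)
    counts = {'Heavy': 2, 'Large': 2, 'Small': 1}
    flights = [("AC101", 'Heavy', 0), ("AC102", 'Heavy', 300),
               ("AC201", 'Large', 60), ("AC202", 'Large', 120),
               ("AC301", 'Small', 30)]
    best = stage_one(counts)[0]
    print("best class sequence:", best)
    print("slot assignment:", stage_two(list(best), flights))
```

    With the hypothetical separation table above, the ranking step penalises sequences that place a small aircraft directly behind a heavy one, which is exactly the kind of class-level structure the first stage is intended to capture.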

  13. On A Two-Stage Supply Chain Model In The Manufacturing Industry ...

    African Journals Online (AJOL)

    We model a two-stage supply chain where the upstream stage (stage 2) always meets demand from the downstream stage (stage 1). Demand is stochastic, hence shortages will occasionally occur at stage 2. Stage 2 must fill these shortages by expediting using overtime production and/or backordering. We derive optimal ...

  14. Hypospadias repair: Byar's two stage operation revisited.

    Science.gov (United States)

    Arshad, A R

    2005-06-01

    Hypospadias is a congenital deformity characterised by an abnormally located urethral opening, which can occur anywhere proximal to its normal location, from the ventral surface of the glans penis to the perineum. Many operations have been described for the management of this deformity. One hundred and fifteen patients with hypospadias were treated at the Department of Plastic Surgery, Hospital Kuala Lumpur, Malaysia between September 1987 and December 2002, of whom 100 had Byar's procedure performed on them. The age of the patients ranged from neonates to 26 years old. Sixty-seven patients had penoscrotal (58%), 20 had proximal penile (18%), 13 had distal penile (11%) and 15 had subcoronal hypospadias (13%). Operations performed were Byar's two-staged (100), Bracka's two-staged (11), flip-flap (2) and MAGPI operation (2). The most common complication encountered following hypospadias surgery was urethral fistula, at a rate of 18%. There is a higher incidence of proximal hypospadias in the Malaysian community. Byar's procedure is a very versatile technique and can be used for all types of hypospadias. The fistula rate is 18% in this series.

  15. Applications of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2013-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area. These topics also deal with fundamental notions and approaches as well as with several classical applications of combinatorial optimization. "Applications of Combinatorial Optimization" presents a number of the most common and well-known applications of combinatorial optimization.

  16. Runway Operations Planning: A Two-Stage Solution Methodology

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, Runway Operations Planning (ROP) is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans was previously approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, Air Traffic Control (ATC) constraints) at the same time. Since, however, at any given point in time, there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (OP) research project, this paper introduces Runway Operations Planning (ROP) as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two stage' heuristic algorithm for solving the Runway Operations Planning (ROP) problem. Focus is specifically given to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the

  17. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
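
    For Gaussian outcomes, the "two-stage" route the authors discuss reduces each study to an estimate and a standard error and then pools them; the sketch below shows the standard fixed-effect inverse-variance pooling together with a DerSimonian-Laird estimate of between-study variance for a random-effects version. The study estimates and standard errors are made up, and this is a generic second-stage computation, not the authors' analysis.

```python
# Minimal sketch of "two-stage" meta-analysis for Gaussian outcomes (hypothetical numbers).
# Stage 1: each study is reduced to an estimate and its standard error.
# Stage 2: estimates are pooled by inverse-variance weighting (fixed effect),
# with a DerSimonian-Laird estimate of between-study variance for a random-effects version.
import math

estimates = [0.42, 0.31, 0.55, 0.18]    # per-study treatment effects (stage 1 output)
std_errors = [0.10, 0.15, 0.12, 0.20]

def fixed_effect(est, se):
    w = [1.0 / s**2 for s in se]
    pooled = sum(wi * yi for wi, yi in zip(w, est)) / sum(w)
    return pooled, math.sqrt(1.0 / sum(w)), w

def dersimonian_laird_tau2(est, se):
    pooled, _, w = fixed_effect(est, se)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, est))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    return max(0.0, (q - (len(est) - 1)) / c)

pooled_fe, se_fe, _ = fixed_effect(estimates, std_errors)
tau2 = dersimonian_laird_tau2(estimates, std_errors)
w_re = [1.0 / (s**2 + tau2) for s in std_errors]
pooled_re = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)

print(f"fixed effect : {pooled_fe:.3f} (SE {se_fe:.3f})")
print(f"tau^2 (DL)   : {tau2:.4f}")
print(f"random effect: {pooled_re:.3f} (SE {math.sqrt(1.0 / sum(w_re)):.3f})")
```

    When the between-study variance estimate is zero, the random-effects and fixed-effect results coincide, which echoes the abstract's point that it is the modelling assumptions, rather than the number of stages, that drive the answer.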

  18. Quantum fields and processes a combinatorial approach

    CERN Document Server

    Gough, John

    2018-01-01

    Wick ordering of creation and annihilation operators is of fundamental importance for computing averages and correlations in quantum field theory and, by extension, in the Hudson-Parthasarathy theory of quantum stochastic processes, quantum mechanics, stochastic processes, and probability. This book develops the unified combinatorial framework behind these examples, starting with the simplest mathematically, and working up to the Fock space setting for quantum fields. Emphasizing ideas from combinatorics such as the role of lattice of partitions for multiple stochastic integrals by Wallstrom-Rota and combinatorial species by Joyal, it presents insights coming from quantum probability. It also introduces a 'field calculus' which acts as a succinct alternative to standard Feynman diagrams and formulates quantum field theory (cumulant moments, Dyson-Schwinger equation, tree expansions, 1-particle irreducibility) in this language. Featuring many worked examples, the book is aimed at mathematical physicists, quant...

  19. Quantum fields and processes a combinatorial approach

    CERN Document Server

    Gough, John

    2018-01-01

    Wick ordering of creation and annihilation operators is of fundamental importance for computing averages and correlations in quantum field theory and, by extension, in the Hudson–Parthasarathy theory of quantum stochastic processes, quantum mechanics, stochastic processes, and probability. This book develops the unified combinatorial framework behind these examples, starting with the simplest mathematically, and working up to the Fock space setting for quantum fields. Emphasizing ideas from combinatorics such as the role of lattice of partitions for multiple stochastic integrals by Wallstrom–Rota and combinatorial species by Joyal, it presents insights coming from quantum probability. It also introduces a 'field calculus' which acts as a succinct alternative to standard Feynman diagrams and formulates quantum field theory (cumulant moments, Dyson–Schwinger equation, tree expansions, 1-particle irreducibility) in this language. Featuring many worked examples, the book is aimed at mathematical physicists,...

  20. Short term load forecasting: two stage modelling

    Directory of Open Access Journals (Sweden)

    SOARES, L. J.

    2009-06-01

    Full Text Available This paper studies the hourly electricity load demand in the area covered by a utility situated in Seattle, USA, the Puget Sound Power and Light Company. Our proposal is tested on the well-known dataset from this company. We propose a stochastic model which employs ANN (Artificial Neural Networks) to model short-run dynamics and the dependence among adjacent hours. The proposed model treats each hour's load separately as an individual series. This approach avoids modeling the intricate intra-day pattern (load profile) displayed by the load, which varies throughout the days of the week and the seasons. The forecasting performance of the model is evaluated in a similar manner to the TLSAR (Two-Level Seasonal Autoregressive) model proposed by Soares (2003), using the years 1995 and 1996 as the holdout sample. Moreover, we conclude that nonlinearity is present in some of these series. The model results are analyzed. The experiment shows that our tool can be used to produce load forecasts in places with a tropical climate.
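
    The "one model per hour" idea can be illustrated with a short sketch in which each hour-of-day series is fitted separately; a plain least-squares autoregression stands in for the ANN of the abstract, and the data are synthetic rather than the Puget Sound series.

```python
# Sketch of the "one model per hour" idea from the record: each hour-of-day load series
# is modeled separately. A plain least-squares autoregression stands in for the ANN
# described in the abstract; the data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_days, order = 200, 3
hours = np.arange(24)
# Synthetic hourly loads: daily profile + weekly-ish trend + noise, shape (n_days, 24).
profile = 100 + 30 * np.sin(2 * np.pi * hours / 24)
loads = profile + 5 * rng.standard_normal((n_days, 24)) + \
        10 * np.sin(2 * np.pi * np.arange(n_days)[:, None] / 7)

models = {}
for h in range(24):
    series = loads[:, h]
    # Regress today's load at hour h on the previous `order` days at the same hour.
    X = np.column_stack([series[order - k - 1:len(series) - k - 1] for k in range(order)])
    X = np.column_stack([np.ones(len(X)), X])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    models[h] = coef

# One-day-ahead forecast for each hour, from the last `order` observed days.
forecast = []
for h in range(24):
    recent = loads[-order:, h][::-1]          # most recent day first
    forecast.append(models[h] @ np.concatenate(([1.0], recent)))
print(np.round(forecast, 1))
```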

  1. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.
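
    Since the two-stage least squares estimator is one of the examples treated in the note, a minimal simulation helps fix ideas: stage one projects the endogenous regressor onto an instrument, stage two regresses the outcome on the fitted values. The data-generating process and all coefficients below are invented for illustration.

```python
# Minimal two-stage least squares (2SLS) sketch on simulated data, as an example of the
# two-stage estimators the note analyzes. One endogenous regressor x, one instrument z.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
z = rng.standard_normal(n)                              # instrument
u = rng.standard_normal(n)                              # unobserved confounder
x = 0.8 * z + 0.6 * u + 0.3 * rng.standard_normal(n)    # endogenous regressor
y = 1.0 + 2.0 * x + 1.5 * u + rng.standard_normal(n)    # true slope on x is 2.0

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project the endogenous regressor onto the instrument (plus intercept).
Z = np.column_stack([np.ones(n), z])
x_hat = Z @ ols(Z, x)

# Stage 2: regress the outcome on the fitted values from stage 1.
X2 = np.column_stack([np.ones(n), x_hat])
beta_2sls = ols(X2, y)

beta_ols = ols(np.column_stack([np.ones(n), x]), y)
print("naive OLS slope :", round(beta_ols[1], 3))   # biased upward by the confounder
print("2SLS slope      :", round(beta_2sls[1], 3))  # close to the true value 2.0
```

    Note that the naive second-stage standard errors are not valid for 2SLS; the sketch only recovers the point estimate.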

  2. Concepts of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2014-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area. These topics also deal with fundamental notions and approaches as well as with several classical applications of combinatorial optimization. Concepts of Combinatorial Optimization is divided into three parts: - On the complexity of combinatorial optimization problems, presenting basics about worst-case and randomi

  3. Combinatorial commutative algebra

    CERN Document Server

    Miller, Ezra

    2005-01-01

    Offers an introduction to combinatorial commutative algebra, focusing on combinatorial techniques for multigraded polynomial rings, semigroup algebras, and determinantal rings. The chapters in this work cover topics ranging from homological invariants of monomial ideals and their polyhedral resolutions, to tools for studying algebraic varieties.

  4. Dynamic combinatorial chemistry

    NARCIS (Netherlands)

    Otto, Sijbren; Furlan, Ricardo L.E.; Sanders, Jeremy K.M.

    2002-01-01

    A combinatorial library that responds to its target by increasing the concentration of strong binders at the expense of weak binders sounds ideal. Dynamic combinatorial chemistry has the potential to achieve exactly this. In this review, we will highlight the unique features that distinguish dynamic

  5. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  6. A Two Stage Solution Procedure for Production Planning System with Advance Demand Information

    Science.gov (United States)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji

    We model the ‘Naiji System’, a unique cooperation technique between a manufacturer and its suppliers in Japan. We propose a two-stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Exploiting the convexity and the special structure of the correlation matrix in the problem, where inventories in different periods are not independent, we propose a two-stage solution procedure consisting of the Mass Customization Production Planning & Management System (MCPS) and a Variable Mesh Neighborhood Search (VMNS) based on meta-heuristics. It is shown that the proposed solution procedure obtains a near-optimal solution efficiently and is practical for making a good master production schedule at the suppliers.

  7. Integer and combinatorial optimization

    CERN Document Server

    Nemhauser, George L

    1999-01-01

    Rave reviews for INTEGER AND COMBINATORIAL OPTIMIZATION ""This book provides an excellent introduction and survey of traditional fields of combinatorial optimization . . . It is indeed one of the best and most complete texts on combinatorial optimization . . . available. [And] with more than 700 entries, [it] has quite an exhaustive reference list.""-Optima ""A unifying approach to optimization problems is to formulate them like linear programming problems, while restricting some or all of the variables to the integers. This book is an encyclopedic resource for such f

  8. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi; Jin, Bangti; Zou, Jun

    2013-01-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer

  9. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

    Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10^9 cm^-2 are realized. Remarkable insulating states are found below a critical density of p_c = 4 × 10^9 cm^-2, or r_s ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition supporting the melting of a Wigner solid as a two-stage first-order transition.

  10. Nonparametric combinatorial sequence models.

    Science.gov (United States)

    Wauthier, Fabian L; Jordan, Michael I; Jojic, Nebojsa

    2011-11-01

    This work considers biological sequences that exhibit combinatorial structures in their composition: groups of positions of the aligned sequences are "linked" and covary as one unit across sequences. If multiple such groups exist, complex interactions can emerge between them. Sequences of this kind arise frequently in biology but methodologies for analyzing them are still being developed. This article presents a nonparametric prior on sequences which allows combinatorial structures to emerge and which induces a posterior distribution over factorized sequence representations. We carry out experiments on three biological sequence families which indicate that combinatorial structures are indeed present and that combinatorial sequence models can more succinctly describe them than simpler mixture models. We conclude with an application to MHC binding prediction which highlights the utility of the posterior distribution over sequence representations induced by the prior. By integrating out the posterior, our method compares favorably to leading binding predictors.

  11. Combinatorial Hybrid Systems

    DEFF Research Database (Denmark)

    Larsen, Jesper Abildgaard; Wisniewski, Rafal; Grunnet, Jacob Deleuran

    2008-01-01

    indicates for a given face the future simplex. In the suggested definition we allow nondeterminacy in the form of splitting and merging of solution trajectories. The combinatorial vector field gives rise to combinatorial counterparts of most concepts from dynamical systems, such as duals to vector fields, flows, flow lines, fixed points and Lyapunov functions. Finally, it will be shown how this theory extends to switched dynamical systems, and an algorithmic overview of how to do supervisory control will be given towards the end.

  12. Stochastic programming with integer recourse

    NARCIS (Netherlands)

    van der Vlerk, Maarten Hendrikus

    1995-01-01

    In this thesis we consider two-stage stochastic linear programming models with integer recourse. Such models are at the intersection of two different branches of mathematical programming. On the one hand some of the model parameters are random, which places the problem in the field of stochastic
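
    As a reminder of what this model class looks like, the toy problem below is a two-stage stochastic program with an integer first-stage decision and integer recourse, solved by brute-force enumeration of its deterministic equivalent. The costs, scenarios, and recourse rule are invented, and enumeration is only sensible at this tiny scale; realistic instances need specialised algorithms of the kind studied in this thesis.

```python
# Toy two-stage stochastic program with integer recourse, solved by brute-force enumeration
# of the deterministic equivalent (only sensible at this tiny scale; numbers are made up).
# First stage: choose an integer production quantity x before demand is known.
# Second stage: after demand d is revealed, buy an integer emergency quantity y >= d - x.
PROD_COST, EMERGENCY_COST, HOLD_COST = 3.0, 8.0, 1.0
SCENARIOS = [(0.3, 5), (0.5, 8), (0.2, 12)]   # (probability, demand)

def recourse_cost(x, demand):
    y = max(0, demand - x)          # cheapest feasible integer recourse action
    leftover = max(0, x - demand)
    return EMERGENCY_COST * y + HOLD_COST * leftover

def expected_total_cost(x):
    return PROD_COST * x + sum(p * recourse_cost(x, d) for p, d in SCENARIOS)

best_x = min(range(0, 16), key=expected_total_cost)
print("optimal first-stage quantity:", best_x)
print("expected total cost:", round(expected_total_cost(best_x), 2))
```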

  13. Stochastic optimization: beyond mathematical programming

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Stochastic optimization, which includes bio-inspired algorithms, is gaining momentum in areas where more classical optimization algorithms fail to deliver satisfactory results, or simply cannot be directly applied. This presentation will introduce baseline stochastic optimization algorithms, and illustrate their efficiency in different domains, from continuous non-convex problems to combinatorial optimization problems, to problems for which a non-parametric formulation can help explore unforeseen possible solution spaces.
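
    As an example of the kind of baseline stochastic optimization algorithm such a presentation typically introduces, the sketch below runs simulated annealing on a small random max-cut instance. The instance, cooling schedule, and parameters are arbitrary choices made for illustration.

```python
# Baseline stochastic optimization sketch: simulated annealing on a small random
# max-cut instance (a classic combinatorial test problem; the instance is random).
import math
import random

random.seed(7)
n = 30
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.2]

def cut_value(assign):
    return sum(1 for i, j in edges if assign[i] != assign[j])

assign = [random.randint(0, 1) for _ in range(n)]
cur_val = cut_value(assign)
best_val = cur_val
temperature = 2.0
for step in range(20000):
    i = random.randrange(n)
    assign[i] ^= 1                                   # propose flipping one vertex
    new_val = cut_value(assign)
    delta = new_val - cur_val
    if delta >= 0 or random.random() < math.exp(delta / temperature):
        cur_val = new_val                            # accept (always if not worse)
        best_val = max(best_val, cur_val)
    else:
        assign[i] ^= 1                               # reject: undo the flip
    temperature *= 0.9995                            # geometric cooling
print("edges:", len(edges), "best cut value found:", best_val)
```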

  14. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially available electrolyser with a large 50 A current, and then in a newly designed electrolyser that avoids the memory effect, with a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained when using the commercially available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L and were half or less of those in samples collected at the same sites in 1992. (author)

  15. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report

  16. Development of Explosive Ripper with Two-Stage Combustion

    Science.gov (United States)

    1974-10-01

    ... inch pipe duct work, the width of this duct proved to be detrimental in marginally rippable material; the duct, instead of the penetrator tip, was ... marginally rippable rock. The two-stage combustion device is designed to operate using the same diesel ...

  17. Engineering analysis of the two-stage trifluoride precipitation process

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-06-01

    An engineering analysis of two-stage trifluoride precipitation processes is developed. Precipitation kinetics are modeled using consecutive reactions to represent fluoride complexation. Material balances across the precipitators are used to model the time dependent concentration profiles of the main chemical species. The results of the engineering analysis are correlated with previous experimental work on plutonium trifluoride and cerium trifluoride

  18. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  19. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2012-01-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general

  20. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and the effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in risky security increases as the risk-free return decreases and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  1. Introduction to combinatorial designs

    CERN Document Server

    Wallis, WD

    2007-01-01

    Combinatorial theory is one of the fastest growing areas of modern mathematics. Focusing on a major part of this subject, Introduction to Combinatorial Designs, Second Edition provides a solid foundation in the classical areas of design theory as well as in more contemporary designs based on applications in a variety of fields. After an overview of basic concepts, the text introduces balanced designs and finite geometries. The author then delves into balanced incomplete block designs, covering difference methods, residual and derived designs, and resolvability. Following a chapter on the e

  2. A Two-Stage Approach to the Orienteering Problem with Stochastic Weights

    NARCIS (Netherlands)

    Evers, L.; Glorie, K.; Ster, S. van der; Barros, A.I.; Monsuur, H.

    2014-01-01

    The Orienteering Problem (OP) is a routing problem which has many interesting applications in logistics, tourism and defense. The aim of the OP is to find a maximum profit path or tour, which is feasible with respect to a capacity constraint on the total weight of the selected arcs. In this paper we

  3. A two-stage approach to the orienteering problem with stochastic weights

    NARCIS (Netherlands)

    Evers, L.; Glorie, K.M.; van der Ster, S.L.; Barros, A.I.; Monsuur, H.

    2014-01-01

    The Orienteering Problem (OP) is a routing problem which has many interesting applications in logistics, tourism and defense. The aim of the OP is to find a maximum profit path or tour, which is feasible with respect to a capacity constraint on the total weight of the selected arcs. In this paper we

  4. Infinitary Combinatory Reduction Systems

    DEFF Research Database (Denmark)

    Ketema, Jeroen; Simonsen, Jakob Grue

    2011-01-01

    We define infinitary Combinatory Reduction Systems (iCRSs), thus providing the first notion of infinitary higher-order rewriting. The systems defined are sufficiently general that ordinary infinitary term rewriting and infinitary λ-calculus are special cases. Furthermore, we generalise a number...

  5. Manipulating Combinatorial Structures.

    Science.gov (United States)

    Labelle, Gilbert

    This set of transparencies shows how the manipulation of combinatorial structures in the context of modern combinatorics can easily lead to interesting teaching and learning activities at every level of education from elementary school to university. The transparencies describe: (1) the importance and relations of combinatorics to science and…

  6. Introduction to combinatorial geometry

    International Nuclear Information System (INIS)

    Gabriel, T.A.; Emmett, M.B.

    1985-01-01

    The combinatorial geometry package as used in many three-dimensional multimedia Monte Carlo radiation transport codes, such as HETC, MORSE, and EGS, is becoming the preferred way to describe simple and complicated systems. Just about any system can be modeled using the package with relatively few input statements. This can be contrasted against the older style geometry packages in which the required input statements could be large even for relatively simple systems. However, with advancements come some difficulties. The users of combinatorial geometry must be able to visualize more, and, in some instances, all of the system at a time. Errors can be introduced into the modeling which, though slight, and at times hard to detect, can have devastating effects on the calculated results. As with all modeling packages, the best way to learn the combinatorial geometry is to use it, first on a simple system then on more complicated systems. The basic technique for the description of the geometry consists of defining the location and shape of the various zones in terms of the intersections and unions of geometric bodies. The geometric bodies which are generally included in most combinatorial geometry packages are: (1) box, (2) right parallelepiped, (3) sphere, (4) right circular cylinder, (5) right elliptic cylinder, (6) ellipsoid, (7) truncated right cone, (8) right angle wedge, and (9) arbitrary polyhedron. The data necessary to describe each of these bodies are given. As can be easily noted, there are some subsets included for simplicity
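
    The core idea, zones defined as intersections and unions of primitive bodies, can be mimicked in a few lines of code with point-membership tests. The sketch below is a conceptual illustration only and is not the input format of HETC, MORSE, or EGS; the two bodies and the example zone are invented.

```python
# Sketch of the combinatorial-geometry idea described above: primitive bodies with
# point-membership tests, combined into zones by intersection, union, and complement.
# This mirrors the concept only; it is not the input format of HETC, MORSE, or EGS.
from dataclasses import dataclass

@dataclass
class Sphere:
    cx: float; cy: float; cz: float; r: float
    def contains(self, p):
        x, y, z = p
        return (x - self.cx)**2 + (y - self.cy)**2 + (z - self.cz)**2 <= self.r**2

@dataclass
class Box:
    xmin: float; xmax: float; ymin: float; ymax: float; zmin: float; zmax: float
    def contains(self, p):
        x, y, z = p
        return (self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax
                and self.zmin <= z <= self.zmax)

def _inside(body, p):
    return body.contains(p) if hasattr(body, "contains") else body(p)

def intersection(*bodies):
    return lambda p: all(_inside(b, p) for b in bodies)

def union(*bodies):
    return lambda p: any(_inside(b, p) for b in bodies)

def complement(body):
    return lambda p: not _inside(body, p)

# Example zone: the part of a 10 cm cube that lies outside a central sphere of radius 3 cm.
cube = Box(-5, 5, -5, 5, -5, 5)
void = Sphere(0, 0, 0, 3)
shield_zone = intersection(cube, complement(void))

print(shield_zone((4.0, 0.0, 0.0)))   # True: inside the cube, outside the sphere
print(shield_zone((1.0, 0.0, 0.0)))   # False: inside the spherical void
```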

  7. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.

  8. Energy demand in Portuguese manufacturing: a two-stage model

    International Nuclear Information System (INIS)

    Borges, A.M.; Pereira, A.M.

    1992-01-01

    We use a two-stage model of factor demand to estimate the parameters determining energy demand in Portuguese manufacturing. In the first stage, a capital-labor-energy-materials framework is used to analyze the substitutability between energy as a whole and other factors of production. In the second stage, total energy demand is decomposed into oil, coal and electricity demands. The two stages are fully integrated since the energy composite used in the first stage and its price are obtained from the second stage energy sub-model. The estimates obtained indicate that energy demand in manufacturing responds significantly to price changes. In addition, estimation results suggest that there are important substitution possibilities among energy forms and between energy and other factors of production. The role of price changes in energy-demand forecasting, as well as in energy policy in general, is clearly established. (author)

  9. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed which incorporates two-stage diffusion processes, grain lattice and grain boundary diffusion, coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for the benchmarking and validation of this model. The results reveal that its predictions are in better agreement with the experimental measurements than those of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  10. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    OpenAIRE

    Chen, Yanju; Wang, Ye

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the s...

  11. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first stage precipitator, precipitation temperatures, and residence time in the first stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide

  12. Combinatorial vector fields and the valley structure of fitness landscapes.

    Science.gov (United States)

    Stadler, Bärbel M R; Stadler, Peter F

    2010-12-01

    Adaptive (downhill) walks are a computationally convenient way of analyzing the geometric structure of fitness landscapes. Their inherently stochastic nature has limited their mathematical analysis, however. Here we develop a framework that interprets adaptive walks as deterministic trajectories in combinatorial vector fields and in return associate these combinatorial vector fields with weights that measure their steepness across the landscape. We show that the combinatorial vector fields and their weights have a product structure that is governed by the neutrality of the landscape. This product structure makes practical computations feasible. The framework presented here also provides an alternative, and mathematically more convenient, way of defining notions of valleys, saddle points, and barriers in landscape. As an application, we propose a refined approximation for transition rates between macrostates that are associated with the valleys of the landscape.
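
    The underlying stochastic process, an adaptive walk that repeatedly moves to a fitter neighbour until it reaches a local optimum, is easy to sketch on a random landscape over bitstrings. The code below shows only that walk on an invented landscape; it does not reproduce the combinatorial vector fields, weights, or barrier computations developed in the paper.

```python
# Sketch of the adaptive walks discussed above: greedy moves between Hamming neighbors
# on a random fitness landscape over bitstrings. The combinatorial-vector-field
# formalism of the paper is not reproduced here; this is only the underlying walk.
import random

random.seed(3)
L = 12
fitness_cache = {}

def fitness(genotype):
    # A random landscape: each genotype gets an i.i.d. fitness value, memoized.
    if genotype not in fitness_cache:
        fitness_cache[genotype] = random.random()
    return fitness_cache[genotype]

def neighbors(genotype):
    return [genotype[:i] + ('1' if genotype[i] == '0' else '0') + genotype[i + 1:]
            for i in range(len(genotype))]

def adaptive_walk(start):
    """Repeatedly move to a random fitter neighbor until a local optimum is reached."""
    current, path = start, [start]
    while True:
        fitter = [g for g in neighbors(current) if fitness(g) > fitness(current)]
        if not fitter:
            return path                      # local optimum: no uphill neighbor left
        current = random.choice(fitter)      # stochastic choice among uphill moves
        path.append(current)

start = ''.join(random.choice('01') for _ in range(L))
walk = adaptive_walk(start)
print("walk length:", len(walk) - 1, "final fitness:", round(fitness(walk[-1]), 3))
```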

  13. A two-stage inexact joint-probabilistic programming method for air quality management under uncertainty.

    Science.gov (United States)

    Lv, Y; Huang, G H; Li, Y P; Yang, Z F; Sun, W

    2011-03-01

    A two-stage inexact joint-probabilistic programming (TIJP) method is developed for planning a regional air quality management system with multiple pollutants and multiple sources. The TIJP method incorporates the techniques of two-stage stochastic programming, joint-probabilistic constraint programming and interval mathematical programming, where uncertainties expressed as probability distributions and interval values can be addressed. Moreover, it can not only examine the risk of violating joint-probability constraints, but also account for economic penalties as corrective measures against any infeasibility. The developed TIJP method is applied to a case study of a regional air pollution control problem, where the air quality index (AQI) is introduced for evaluation of the integrated air quality management system associated with multiple pollutants. The joint-probability exists in the environmental constraints for AQI, such that individual probabilistic constraints for each pollutant can be efficiently incorporated within the TIJP model. The results indicate that useful solutions for air quality management practices have been generated; they can help decision makers to identify desired pollution abatement strategies with minimized system cost and maximized environmental efficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa

    Science.gov (United States)

    2013-01-01

    Background: Dairy effluents contain a high organic load, and the unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern, besides deteriorating their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be employed to treat the high organic content, which offers numerous benefits alongside waste water treatment. Methods: A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Results: Whilst NH4+-N was completely removed, 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in the zebra fish used as a model at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer for rice seeds, and a 30% increase in root and shoot length was observed after the addition of the biomass to the rice plants. Conclusions: We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD as well as nutrients like nitrates and phosphates. The treatment also allows treated waste water to be discharged safely into receiving water bodies, since it is non-toxic for aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and offers great potential as a biofertilizer. PMID:24355316

  15. Experimental studies of two-stage centrifugal dust concentrator

    Science.gov (United States)

    Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.

    2018-03-01

    The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and outlines the development of an engineering calculation method and the laboratory investigations. For the experiments, the authors used quartz, ceramic dust and slag. The experimental dispersion analysis of the dust particles was obtained by the sedimentation method. To build a mathematical model of the dust collection process, a central composite rotatable design for a four-factor experiment was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.

  16. Evaluating damping elements for two-stage suspension vehicles

    Directory of Open Access Journals (Sweden)

    Ronald M. Martinod R.

    2012-01-01

    Full Text Available The technical state of the damping elements for a vehicle having two-stage suspension was evaluated by using numerical models based on the multi-body system theory; a set of virtual tests used the eigenproblem mathematical method. A test was developed based on experimental modal analysis (EMA) applied to a physical system as the basis for validating the numerical models. The study focused on evaluating vehicle dynamics to determine the influence of the dampers’ technical state in each suspension state.

  17. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  18. Two-stage, high power X-band amplifier experiment

    International Nuclear Information System (INIS)

    Kuang, E.; Davis, T.J.; Ivers, J.D.; Kerslick, G.S.; Nation, J.A.; Schaechter, L.

    1993-01-01

    At output powers in excess of 100 MW the authors have noted the development of sidebands in many TWT structures. To address this problem an experiment using a narrow bandwidth, two-stage TWT is in progress. The TWT amplifier consists of a dielectric (e = 5) slow-wave structure, a 30 dB sever section and a 8.8-9.0 GHz passband periodic, metallic structure. The electron beam used in this experiment is a 950 kV, 1 kA, 50 ns pencil beam propagating along an applied axial field of 9 kG. The dielectric first stage has a maximum gain of 30 dB measured at 8.87 GHz, with output powers of up to 50 MW in the TM 01 mode. In these experiments the dielectric amplifier output power is about 3-5 MW and the output power of the complete two-stage device is ∼160 MW at the input frequency. The sidebands detected in earlier experiments have been eliminated. The authors also report measurements of the energy spread of the electron beam resulting from the amplification process. These experimental results are compared with MAGIC code simulations and analytic work they have carried out on such devices

  19. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

    A Spanish subbituminous coal has been processed in two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction has been carried out in a continuous pilot plant in Germany at Clausthal Technical University at 400°C, 20 MPa hydrogen pressure and anthracene oil as solvent. The second-stage coal liquefaction has been performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.), being 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. The heteroatoms removal for the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids have about 70% of compounds with boiling point below 350°C, and meet the sulphur and nitrogen specifications for refinery feedstocks. Liquids from two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  20. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  1. TWO-STAGE HEAT PUMPS FOR ENERGY SAVING TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    A. E. Denysova

    2017-09-01

    Full Text Available The problem of energy saving has become one of the most important in power engineering. It is driven by the exhaustion of world reserves of hydrocarbon fuels, such as gas, oil and coal, which are the sources of traditional heat supply. Conventional sources have essential shortcomings: low energy, ecological and economic efficiency, which can be overcome by using alternative methods of power supply, such as the one considered here: the low-temperature natural heat of ground waters exploited by heat pump installations. The heat supply system considered makes effective use of a two-stage heat pump installation operating with ground water as the heat source during the period of lowest ambient temperature. A calculation method for heat pump installations based on groundwater energy is proposed. The electric energy consumption of the compressor drives and the transformation coefficient µ of the heat supply system are calculated for a low-potential heat source of ground water, allowing the high efficiency of two-stage heat pump installations to be assessed.
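
    A back-of-envelope sketch of the transformation coefficient µ = Q_heat / W_electric for single- and two-stage configurations is given below. Carnot coefficients scaled by an assumed efficiency factor stand in for the paper's calculation method, and every temperature and factor is an assumption chosen only to show how a two-stage arrangement can be compared with a single lift.

```python
# Back-of-envelope sketch of the transformation coefficient mu = Q_heat / W_electric
# for single- and two-stage heat pump cycles. Carnot COPs scaled by an assumed
# efficiency factor stand in for the paper's calculation; all numbers are assumptions.
T_SOURCE = 8 + 273.15       # groundwater temperature, K (assumed)
T_SUPPLY = 55 + 273.15      # heating-system supply temperature, K (assumed)
T_MID = 30 + 273.15         # intermediate condensing/evaporating level, K (assumed)
ETA = 0.55                  # fraction of the Carnot COP actually achieved (assumed)

def cop_heating(t_cold, t_hot, eta=ETA):
    return eta * t_hot / (t_hot - t_cold)

# Single stage: one temperature lift from the groundwater to the supply level.
mu_single = cop_heating(T_SOURCE, T_SUPPLY)

# Two stages in series: the heat delivered by stage 1 is lifted again by stage 2.
cop1 = cop_heating(T_SOURCE, T_MID)
cop2 = cop_heating(T_MID, T_SUPPLY)
# Per unit of heat delivered at the supply level, electricity use is
# W = Q/cop2 + (Q - Q/cop2)/cop1, so the combined coefficient is:
mu_two_stage = 1.0 / (1.0 / cop2 + (1.0 - 1.0 / cop2) / cop1)

print(f"single-stage mu  : {mu_single:.2f}")
print(f"two-stage mu     : {mu_two_stage:.2f}")
```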

  2. Two stage approach to dynamic soil structure interaction

    International Nuclear Information System (INIS)

    Nelson, I.

    1981-01-01

    A two stage approach is used to reduce the effective size of the soil island required to solve dynamic soil structure interaction problems. The fictitious boundaries of the conventional soil island are chosen sufficiently far from the structure so that the presence of the structure causes only a slight perturbation on the soil response near the boundaries. While the resulting finite element model of the soil structure system can be solved, it requires a formidable computational effort. Currently, a two stage approach is used to reduce this effort. The combined soil structure system has many frequencies and wavelengths. For a stiff structure, the lowest frequencies are those associated with the motion of the structure as a rigid body. In the soil, these modes have the longest wavelengths and attenuate most slowly. The higher frequency deformational modes of the structure have shorter wavelengths and their effect attenuates more rapidly with distance from the structure. The difference in soil response between a computation with a refined structural model, and one with a crude model, tends towards zero a very short distance from the structure. In the current work, the 'crude model' is a rigid structure with the same geometry and inertial properties as the refined model. Preliminary calculations indicated that a rigid structure would be a good low frequency approximation to the actual structure, provided the structure was much stiffer than the native soil. (orig./RW)

  3. Repetitive, small-bore two-stage light gas gun

    International Nuclear Information System (INIS)

    Combs, S.K.; Foust, C.R.; Fehling, D.T.; Gouge, M.J.; Milora, S.L.

    1991-01-01

    A repetitive two-stage light gas gun for high-speed pellet injection has been developed at Oak Ridge National Laboratory. In general, applications of the two-stage light gas gun have been limited to only single shots, with a finite time (at least minutes) needed for recovery and preparation for the next shot. The new device overcomes problems associated with repetitive operation, including rapidly evacuating the propellant gases, reloading the gun breech with a new projectile, returning the piston to its initial position, and refilling the first- and second-stage gas volumes to the appropriate pressure levels. In addition, some components are subjected to and must survive severe operating conditions, which include rapid cycling to high pressures and temperatures (up to thousands of bars and thousands of kelvins) and significant mechanical shocks. Small plastic projectiles (4-mm nominal size) and helium gas have been used in the prototype device, which was equipped with a 1-m-long pump tube and a 1-m-long gun barrel, to demonstrate repetitive operation (up to 1 Hz) at relatively high pellet velocities (up to 3000 m/s). The equipment is described, and experimental results are presented. 124 refs., 6 figs., 5 tabs

  4. On the prior probabilities for two-stage Bayesian estimates

    International Nuclear Information System (INIS)

    Kohut, P.

    1992-01-01

    The method of Bayesian inference is reexamined for its applicability and for the required underlying assumptions in obtaining and using prior probability estimates. Two different approaches are suggested to determine the first-stage priors in the two-stage Bayesian analysis which avoid certain assumptions required for other techniques. In the first scheme, the prior is obtained through a true frequency-based distribution generated at selected intervals utilizing actual sampling of the failure rate distributions. The population variability distribution is generated as the weighted average of the frequency distributions. The second method is based on a non-parametric Bayesian approach using the Maximum Entropy Principle. Specific features such as integral properties or selected parameters of prior distributions may be obtained with minimal assumptions. It is indicated how various quantiles may also be generated with a least squares technique

  5. Two-stage hydroprocessing of synthetic crude gas oil

    Energy Technology Data Exchange (ETDEWEB)

    Mahay, A.; Chmielowiec, J.; Fisher, I.P.; Monnier, J. (Petro-Canada Products, Missisauga, ON (Canada). Research and Development Centre)

    1992-02-01

    The hydrocracking of synthetic crude gas oils (SGO), which are commercially produced from Canadian oil sands, is strongly inhibited by nitrogen-containing species. To alleviate the pronounced effect of these nitrogenous compounds, SGO was hydrotreated at severe conditions prior to hydrocracking to reduce its N content from 1665 to about 390 ppm (by weight). Hydrocracking was then performed using a commercial nickel-tungsten catalyst supported on silica-alumina. Two-stage hydroprocessing of SGO was assessed in terms of product yields and quality. As expected, higher gas oil conversions were achieved, mostly from an increase in naphtha yield. The middle distillate product quality was also clearly improved, as the diesel fuel cetane number increased by 13%. Diesel engine tests indicated that particulate emissions in exhaust gases were lowered by 20%. Finally, pseudo first-order kinetic equations were derived for the overall conversion of the major gas oil components. 17 refs., 2 figs., 8 tabs.

  6. Quick pace of property acquisitions requires two-stage evaluations

    International Nuclear Information System (INIS)

    Hollo, R.; Lockwood, S.

    1994-01-01

    The traditional method of evaluating oil and gas reserves may be too cumbersome for the quick pace of oil and gas property acquisition. An acquisition evaluator must decide quickly if a property meets basic purchase criteria. The current business climate requires a two-stage approach. First, the evaluator makes a quick assessment of the property and submits a bid. If the bid is accepted, then the evaluator goes on with a detailed analysis, which represents the second stage. Acquisition of producing properties has become an important activity for many independent oil and gas producers, who must be able to evaluate reserves quickly enough to make effective business decisions yet accurately enough to avoid costly mistakes. Independents thus must be familiar with how transactions usually progress as well as with the basic methods of property evaluation. The paper discusses acquisition activity, the initial offer, the final offer, property evaluation, and fair market value

  7. Hybrid biogas upgrading in a two-stage thermophilic reactor

    DEFF Research Database (Denmark)

    Corbellini, Viola; Kougias, Panagiotis; Treu, Laura

    2018-01-01

    The aim of this study is to propose a hybrid biogas upgrading configuration composed of two-stage thermophilic reactors. Hydrogen is directly injected in the first stage reactor. The output gas from the first reactor (in-situ biogas upgrade) is subsequently transferred to a second upflow reactor...... (ex-situ upgrade), in which enriched hydrogenotrophic culture is responsible for the hydrogenation of carbon dioxide to methane. The overall objective of the work was to perform an initial methane enrichment in the in-situ reactor, avoiding deterioration of the process due to elevated pH levels......, and subsequently, to complete the biogas upgrading process in the ex-situ chamber. The methane content in the first stage reactor reached on average 87% and the corresponding value in the second stage was 91%, with a maximum of 95%. A remarkable accumulation of volatile fatty acids was observed in the first...

  8. GENERALISED MODEL BASED CONFIDENCE INTERVALS IN TWO STAGE CLUSTER SAMPLING

    Directory of Open Access Journals (Sweden)

    Christopher Ouma Onyango

    2010-09-01

    Full Text Available Chambers and Dorfman (2002) constructed bootstrap confidence intervals in model-based estimation for finite population totals assuming that auxiliary values are available throughout a target population and that the auxiliary values are independent. They also assumed that the cluster sizes are known throughout the target population. We now extend to two-stage sampling in which the cluster sizes are known only for the sampled clusters, and we therefore predict the unobserved part of the population total. Jan and Elinor (2008) have done similar work, but unlike them, we use a general model, in which the auxiliary values are not necessarily independent. We demonstrate that the asymptotic properties of our proposed estimator and its coverage rates are better than those constructed under the model-assisted local polynomial regression model.
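
    As an editorial aside, the design-based analogue of the estimation problem described above can be sketched in a few lines: the classical two-stage expansion estimator of a population total under simple random sampling at both stages, with cluster sizes known only for the sampled clusters. This is background only, not the model-based predictor or the bootstrap intervals of the record; the function name and the illustrative numbers are hypothetical.

```python
def two_stage_total_estimate(N, sampled_clusters):
    """Design-based expansion estimator of a population total under
    two-stage cluster sampling (simple random sampling at both stages).

    N                -- total number of clusters in the population
    sampled_clusters -- list of (M_i, y_sample_i) pairs, where M_i is the
                        size of sampled cluster i and y_sample_i is the list
                        of y-values observed for its second-stage sample.
    """
    n = len(sampled_clusters)
    total = 0.0
    for M_i, y_sample in sampled_clusters:
        m_i = len(y_sample)
        cluster_total_hat = M_i * sum(y_sample) / m_i   # expand within the cluster
        total += cluster_total_hat
    return N / n * total                                # expand across clusters

# Hypothetical illustration: 3 clusters sampled out of N = 10
sample = [(40, [2.0, 3.5, 4.0]), (25, [1.0, 1.5]), (60, [5.0, 4.5, 6.0, 5.5])]
print(two_stage_total_estimate(10, sample))
```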

  9. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years and detectors based on classifiers have...... gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through part-based HOG...... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of a part-based HOG approach, tracking based on a specific optimized feature, and porting to a real prototype....

  10. Device for two-stage cementing of casing

    Energy Technology Data Exchange (ETDEWEB)

    Kudimov, D A; Goncharevskiy, Ye N; Luneva, L G; Shchelochkov, S N; Shil' nikova, L N; Tereshchenko, V G; Vasiliev, V A; Volkova, V V; Zhdokov, K I

    1981-01-01

    A device is claimed for two-stage cementing of casing. It consists of a body with lateral plugging vents, upper and lower movable sleeves, a check valve with axial channels that's situated in the lower sleeve, and a displacement limiting device for the lower sleeve. To improve the cementing process of the casing by preventing overflow of cementing fluids from the annular space into the first stage casing, the limiter is equipped with a spring rod that is capable of covering the axial channels of the check valve while it's in an operating mode. In addition, the rod in the upper part is equipped with a reinforced area under the axial channels of the check valve.

  11. Two-stage decision approach to material accounting

    International Nuclear Information System (INIS)

    Opelka, J.H.; Sutton, W.B.

    1982-01-01

    The validity of the alarm threshold 4sigma has been checked for hypothetical large and small facilities using a two-stage decision model in which the diverter's strategic variable is the quantity diverted, and the defender's strategic variables are the alarm threshold and the effectiveness of the physical security and material control systems in the possible presence of a diverter. For large facilities, the material accounting system inherently appears not to be a particularly useful system for the deterrence of diversions, and essentially no improvement can be made by lowering the alarm threshold below 4sigma. For small facilities, reduction of the threshold to 2sigma or 3sigma is a cost effective change for the accounting system, but is probably less cost effective than making improvements in the material control and physical security systems

  12. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is limited to the temperature range between the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, therefore the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in a hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, together with some experimental results for the adsorption-compression two-stage cycle powered with solar collectors. The adsorption system is applied as the high-temperature cycle. The low-temperature cycle is the compression stage with carbon dioxide as the working fluid. This allows a relatively high COP to be achieved for the low-temperature cycle and for the whole system.

  13. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    , air preheating and pyrolysis, hereby very high energy efficiencies can be achieved. Encouraging results are obtained at a 100 kWth laboratory facility. The tar content in the raw gas is measured to be below 25 mg/Nm3 and around 5 mg/Nm3 after gas cleaning with traditional baghouse filter. Furthermore...... a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed bed gasifier. This design is well proven during more than 1000 hours of testing with various...... fuels, and is a suitable design for medium size gasifiers....

  14. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  15. The combinatorial derivation

    Directory of Open Access Journals (Sweden)

    Igor V. Protasov

    2013-09-01

    $\Delta(A)=\{g\in G:|gA\cap A|=\infty\}$. The mapping $\Delta:\mathcal{P}_G\rightarrow\mathcal{P}_G$, $A\mapsto\Delta(A)$, is called a combinatorial derivation and can be considered as an analogue of the topological derivation $d:\mathcal{P}_X\rightarrow\mathcal{P}_X$, $A\mapsto A^d$, where $X$ is a topological space and $A^d$ is the set of all limit points of $A$. Content: elementary properties, thin and almost thin subsets, partitions, inverse construction and $\Delta$-trajectories, $\Delta$ and $d$.

  16. Dynamic Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Lisbjerg, Micke

    This thesis is divided into seven chapters, which can all be read individually. The first chapter, however, contains a general introduction to the chemistry used in the remaining six chapters, and it is therefore recommended to read chapter one before reading the other chapters. Chapter 1...... is a general introductory chapter for the whole thesis. The history and concepts of dynamic combinatorial chemistry are described, as are some of the new and intriguing results recently obtained. Finally, the properties of a broad range of hexameric macrocycles are described in detail. Chapter 2 gives...

  17. Boltzmann Oracle for Combinatorial Systems

    OpenAIRE

    Pivoteau , Carine; Salvy , Bruno; Soria , Michèle

    2008-01-01

    Boltzmann random generation applies to well-defined systems of recursive combinatorial equations. It relies on oracles giving values of the enumeration generating series inside their disk of convergence. We show that the combinatorial systems translate into numerical iteration schemes that provide such oracles. In particular, we give a fast oracle based on Newton iteration.

  18. Combinatorial matrix theory

    CERN Document Server

    Mitjana, Margarida

    2018-01-01

    This book contains the notes of the lectures delivered at an Advanced Course on Combinatorial Matrix Theory held at Centre de Recerca Matemàtica (CRM) in Barcelona. These notes correspond to five series of lectures. The first series is dedicated to the study of several matrix classes defined combinatorially, and was delivered by Richard A. Brualdi. The second one, given by Pauline van den Driessche, is concerned with the study of spectral properties of matrices with a given sign pattern. Dragan Stevanović delivered the third one, devoted to describing the spectral radius of a graph as a tool to provide bounds of parameters related with properties of a graph. The fourth lecture was delivered by Stephen Kirkland and is dedicated to the applications of the Group Inverse of the Laplacian matrix. The last one, given by Ángeles Carmona, focuses on boundary value problems on finite networks with special in-depth on the M-matrix inverse problem.

  19. Cryptographic Combinatorial Securities Exchanges

    Science.gov (United States)

    Thorpe, Christopher; Parkes, David C.

    We present a useful new mechanism that facilitates the atomic exchange of many large baskets of securities in a combinatorial exchange. Cryptography prevents information about the securities in the baskets from being exploited, enhancing trust. Our exchange offers institutions who wish to trade large positions a new alternative to existing methods of block trading: they can reduce transaction costs by taking advantage of other institutions’ available liquidity, while third party liquidity providers guarantee execution—preserving their desired portfolio composition at all times. In our exchange, institutions submit encrypted orders which are crossed, leaving a “remainder”. The exchange proves facts about the portfolio risk of this remainder to third party liquidity providers without revealing the securities in the remainder, the knowledge of which could also be exploited. The third parties learn either (depending on the setting) the portfolio risk parameters of the remainder itself, or how their own portfolio risk would change if they were to incorporate the remainder into a portfolio they submit. In one setting, these third parties submit bids on the commission, and the winner supplies necessary liquidity for the entire exchange to clear. This guaranteed clearing, coupled with external price discovery from the primary markets for the securities, sidesteps difficult combinatorial optimization problems. This latter method of proving how taking on the remainder would change risk parameters of one’s own portfolio, without revealing the remainder’s contents or its own risk parameters, is a useful protocol of independent interest.

  20. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in situ. Later a specifically designed self-compacting grout is introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense, homogeneous and has in general improved engineering properties and durability. This paper presents the results from a research work attempting to study the effect of silica fume (SF) and superplasticizer admixtures (SP) on the compressive and tensile strength of TSC using various combinations of water to cement ratio (w/c) and cement to sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples of size (150 mm × 300 mm) of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, while superplasticizer was added at a dosage of 2% of cement weight. Results indicated that both the tensile and compressive strength of TSC can be statistically derived as a function of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, which says that an increase in water/cement ratio will lead to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of both silica fume and superplasticizer caused a significant increase in strength relative to the control mixes.

  1. Eliminating Survivor Bias in Two-stage Instrumental Variable Estimators.

    Science.gov (United States)

    Vansteelandt, Stijn; Walter, Stefan; Tchetgen Tchetgen, Eric

    2018-07-01

    Mendelian randomization studies commonly focus on elderly populations. This makes the instrumental variables analysis of such studies sensitive to survivor bias, a type of selection bias. A particular concern is that the instrumental variable conditions, even when valid for the source population, may be violated for the selective population of individuals who survive the onset of the study. This is potentially very damaging because Mendelian randomization studies are known to be sensitive to bias due to even minor violations of the instrumental variable conditions. Interestingly, the instrumental variable conditions continue to hold within certain risk sets of individuals who are still alive at a given age when the instrument and unmeasured confounders exert additive effects on the exposure, and moreover, the exposure and unmeasured confounders exert additive effects on the hazard of death. In this article, we will exploit this property to derive a two-stage instrumental variable estimator for the effect of exposure on mortality, which is insulated against the above described selection bias under these additivity assumptions.
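
    As an editorial aside, the generic two-stage least squares recipe that the record's estimator builds on can be sketched as follows. This is plain 2SLS under the usual instrumental variable conditions, not the authors' survivor-bias-corrected estimator (which additionally relies on additive effects on the exposure and on the hazard of death); the function name and the simulated check are hypothetical.

```python
import numpy as np

def two_stage_least_squares(z, x, y):
    """Plain 2SLS with a single instrument z, exposure x and outcome y.
    Stage 1: regress x on z; Stage 2: regress y on the fitted x-hat.
    Generic background only, not the survivor-bias-corrected estimator
    of the paper."""
    Z = np.column_stack([np.ones_like(z), z])       # design matrix with intercept
    # Stage 1: fitted exposure values from the instrument
    gamma, *_ = np.linalg.lstsq(Z, x, rcond=None)
    x_hat = Z @ gamma
    # Stage 2: regress the outcome on the fitted exposure
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta[1]                                  # causal effect estimate

# Hypothetical simulated check: true effect of x on y is 2.0
rng = np.random.default_rng(0)
u = rng.normal(size=5000)                           # unmeasured confounder
z = rng.normal(size=5000)
x = 0.8 * z + u + rng.normal(size=5000)
y = 2.0 * x + u + rng.normal(size=5000)
print(two_stage_least_squares(z, x, y))             # approximately 2.0
```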

  2. Two-stage image denoising considering interscale and intrascale dependencies

    Science.gov (United States)

    Shahdoosti, Hamid Reza

    2017-11-01

    A solution to the problem of reducing the noise of grayscale images is presented. To consider the intrascale and interscale dependencies, this study makes use of a model. It is shown that the dependency between a wavelet coefficient and its predecessors can be modeled by the first-order Markov chain, which means that the parent conveys all of the information necessary for efficient estimation. Using this fact, the proposed method employs the Kalman filter in the wavelet domain for image denoising. The proposed method has two stages. The first stage employs a simple denoising algorithm to provide the noise-free image, by which the parameters of the model such as state transition matrix, variance of the process noise, the observation model, and the covariance of the observation noise are estimated. In the second stage, the Kalman filter is applied to the wavelet coefficients of the noisy image to estimate the noise-free coefficients. In fact, the Kalman filter is used to estimate the coefficients of high-frequency subbands from the coefficients of coarser scales and noisy observations of neighboring coefficients. In this way, both the interscale and intrascale dependencies are taken into account. Results are presented and discussed on a set of standard 8-bit grayscale images. The experimental results demonstrate that the proposed method achieves performances competitive with the state-of-the-art denoising methods in terms of both peak-signal-to-noise ratio and subjective visual quality.
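
    As an editorial aside, the predict/update recursion at the heart of the method can be illustrated with a minimal scalar Kalman filter applied to a noisy first-order Markov sequence. This is a sketch of the filter only, not the paper's wavelet-domain model with interscale and intrascale dependencies; the parameter values are hypothetical.

```python
import numpy as np

def scalar_kalman_filter(observations, a, q, r, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for the state model x_k = a*x_{k-1} + w_k,
    observation z_k = x_k + v_k, with Var(w) = q and Var(v) = r.
    Illustrates only the predict/update recursion; the paper applies it to
    wavelet coefficients with parameters estimated from a pre-denoised image."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        # Predict
        x_pred = a * x
        p_pred = a * a * p + q
        # Update with the noisy observation
        k = p_pred / (p_pred + r)          # Kalman gain
        x = x_pred + k * (z - x_pred)
        p = (1.0 - k) * p_pred
        estimates.append(x)
    return np.array(estimates)

# Hypothetical usage: denoise a noisy AR(1) sequence
rng = np.random.default_rng(1)
x = np.zeros(200)
for k in range(1, 200):
    x[k] = 0.9 * x[k - 1] + rng.normal(scale=0.5)
z = x + rng.normal(scale=1.0, size=200)
x_hat = scalar_kalman_filter(z, a=0.9, q=0.25, r=1.0)
```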

  3. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting is supposed to play a key role in reducing generation costs, and deals with the reliability of the power system. However, due to demand peaks in the power system, forecasts are inaccurate and prone to high numbers of errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load values equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with a mean absolute percentage error (MAPE) of 3.10% and a resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
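
    As an editorial aside, two quantities from the record are easy to make concrete: the peak definition (load at or above the 99th percentile of the load distribution) and the MAPE metric. The sketch below assumes numpy arrays of actual and forecast load; r-MAPE is omitted because its exact definition is not given in the record, and the synthetic series is hypothetical.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def peak_labels(load, percentile=99):
    """Label the peaks used in the first (classification) stage: a load value
    is a peak if it is at or above the given percentile of the load
    distribution (the record uses the 99th percentile)."""
    threshold = np.percentile(load, percentile)
    return load >= threshold

# Hypothetical usage with a synthetic load series
rng = np.random.default_rng(2)
load = 1000 + 200 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 50, 500)
forecast = load + rng.normal(0, 30, 500)
print(mape(load, forecast), peak_labels(load).sum())
```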

  4. FIRST DIRECT EVIDENCE OF TWO STAGES IN FREE RECALL

    Directory of Open Access Journals (Sweden)

    Eugen Tarnow

    2015-12-01

    Full Text Available I find that exactly two stages can be seen directly in sequential free recall distributions. These distributions show that the first three recalls come from the emptying of working memory, recalls 6 and above come from a second stage, and the 4th and 5th recalls are mixtures of the two. A discontinuity, a rounded step function, is shown to exist in the fitted linear slope of the recall distributions as the recall shifts from the emptying of working memory (positive slope) to the second stage (negative slope). The discontinuity leads to a first estimate of the capacity of working memory at 4-4.5 items. The total recall is shown to be a linear combination of the content of working memory and items recalled in the second stage, with 3.0-3.9 items coming from working memory, a second estimate of the capacity of working memory. A third, separate upper limit on the capacity of working memory is found (3.06 items), corresponding to the requirement that the content of working memory cannot exceed the total recall, item by item. This third limit is presumably the best limit on the average capacity of unchunked working memory. The second stage of recall is shown to be reactivation: the average times to retrieve additional items in free recall obey a linear relationship as a function of the recall probability, which mimics recognition and cued recall, both mechanisms using reactivation (Tarnow, 2008).

  5. A two-stage DEA approach for environmental efficiency measurement.

    Science.gov (United States)

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model based on constant returns to scale has achieved some good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage successfully solves the "dependence" problem of outputs, that is, that we cannot increase the desirable outputs without producing any undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of decision making units.
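
    As an editorial aside, a minimal plain-DEA baseline helps to situate the record: the sketch below solves the standard input-oriented CCR envelopment linear program with scipy, which handles desirable outputs only. It is not the SBM model with undesirable outputs or the network extension proposed by the authors; the data in the usage example are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (plain DEA, desirable outputs
    only; NOT the SBM/undesirable-output model of the paper).

    X : (m, n) array of inputs, Y : (s, n) array of outputs,
    columns index the n decision making units.
    Returns theta in (0, 1]; theta == 1 indicates (weak) efficiency."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.r_[1.0, np.zeros(n)]                      # minimise theta
    # inputs:  sum_j lam_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # outputs: -sum_j lam_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Hypothetical usage: 4 DMUs, 2 inputs, 1 output
X = np.array([[2.0, 4.0, 3.0, 5.0], [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_input_efficiency(X, Y, o), 3) for o in range(4)])
```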

  6. Two-stage nuclear refrigeration with enhanced nuclear moments

    International Nuclear Information System (INIS)

    Hunik, R.

    1979-01-01

    Experiments are described in which an enhanced nuclear system is used as a precoolant for a nuclear demagnetisation stage. The results show the promising advantages of such a system in those circumstances for which a large cooling power is required at extremely low temperatures. A theoretical review of nuclear enhancement at the microscopic level and its macroscopic thermodynamical consequences is given. The experimental equipment for the implementation of the nuclear enhanced refrigeration method is described and the experiments on two-stage nuclear demagnetisation are discussed. With the nuclear enhanced system PrCu 6 the author could precool a nuclear stage of indium in a magnetic field of 6 T down to temperatures below 10 mK; this resulted in temperatures below 1 mK after demagnetisation of the indium. It is demonstrated that the interaction energy between the nuclear moments in an enhanced nuclear system can exceed the nuclear dipolar interaction. Several experiments are described on pulsed nuclear magnetic resonance, as utilised for thermometry purposes. It is shown that platinum NMR-thermometry gives very satisfactory results around 1 mK. The results of experiments on nuclear orientation of radioactive nuclei, e.g. the brute force polarisation of 95 NbPt and 60 CoCu, are presented, some of which are of major importance for the thermometry in the milli-Kelvin region. (Auth.)

  7. Two-stage Catalytic Reduction of NOx with Hydrocarbons

    Energy Technology Data Exchange (ETDEWEB)

    Umit S. Ozkan; Erik M. Holmgreen; Matthew M. Yung; Jonathan Halter; Joel Hiltner

    2005-12-21

    A two-stage system for the catalytic reduction of NO from lean-burn natural gas reciprocating engine exhaust is investigated. Each of the two stages uses a distinct catalyst. The first stage is oxidation of NO to NO₂ and the second stage is reduction of NO₂ to N₂ with a hydrocarbon. The central idea is that since NO₂ is a more easily reduced species than NO, it should be better able to compete with oxygen for the combustion reaction of the hydrocarbon, which is a challenge in lean conditions. Early work focused on demonstrating that the N₂ yield obtained when NO₂ was reduced was greater than when NO was reduced. NO₂ reduction catalysts were designed and silver supported on alumina (Ag/Al₂O₃) was found to be quite active, able to achieve 95% N₂ yield in 10% O₂ using propane as the reducing agent. The design of a catalyst for NO oxidation was also investigated, and a Co/TiO₂ catalyst prepared by sol-gel was shown to have high activity for the reaction, able to reach the equilibrium conversion of 80% at 300 °C at a GHSV of 50,000 h⁻¹. After it was shown that NO₂ could be more easily reduced to N₂ than NO, the focus shifted to developing a catalyst that could use methane as the reducing agent. The Ag/Al₂O₃ catalyst was tested and found to be inactive for NOx reduction with methane. Through iterative catalyst design, a palladium-based catalyst on a sulfated-zirconia support (Pd/SZ) was synthesized and shown to be able to selectively reduce NO₂ in lean conditions using methane. Development of catalysts for the oxidation reaction also continued, and higher activity, as well as stability in 10% water, was observed on a Co/ZrO₂ catalyst, which reached an equilibrium conversion of 94% at 250 °C at the same GHSV. The Co/ZrO₂ catalyst was also found to be extremely active for oxidation of CO, ethane, and propane, which could potentially eliminate the need for any separate

  8. Causes for the two stages of the disruption energy quench

    Energy Technology Data Exchange (ETDEWEB)

    Schueller, F.C.; Donne, A.J.H.; Heijnen, S.H.; Rommers, J.R.; Tanzi, C.P. [FOM-Instituut voor Plasmafysica, Rijnhuizen (Netherlands); Vries, P.C. de; Waidmann, G. [Forschungszentrum Juelich GmbH (Germany). Inst. fuer Plasmaphysik

    1994-12-31

    It is a well-established fact that the energy quench of tokamak disruptions takes place in two stages separated by a plateau period. The total quench duration of typically a few hundred μs is thought to be a combination of Alfven and magnetic diffusion times: Phase 1: a large cold m=1 bubble eats out the hot core within the q=1 surface. Since the normal thermal isolation of the outer layers is still intact this phase means an adiabatic flattening of the inner temperature distribution. Phase 2: after a plateau period the second quench occurs when the edge thermal barrier collapses and a major part of the plasma energy is lost in conjunction with a negative surface voltage spike and a positive spike of the plasma current. In the experimental and theoretical literature on this subject not much attention is given to the evolution of the density distribution during these two phases. This may be caused by the great difficulties one has to keep the fringe counters of multichannel interferometers on track during the very fast changing evolution. The interferometer at TEXTOR can follow this evolution. The spatial resolution after inversion is limited because of the modest number of interferometer channels. In RTP an 18-channel fast interferometer is available next to a 4-channel pulse radar reflectometer which makes it possible to investigate the density profile evolution with both good time (2 μs)- and spatial (0.1a)-resolution. A fast 20-channel ECE-heterodyne radiometer and a 5-camera SXR system allow the temperature profile evolution to be followed as well. In this paper theoretical models will be revisited and compared to the new experimental evidence. (author) 9 refs., 3 figs.

  10. Transport fuels from two-stage coal liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Benito, A.; Cebolla, V.; Fernandez, I.; Martinez, M.T.; Miranda, J.L.; Oelert, H.; Prado, J.G. (Instituto de Carboquimica CSIC, Zaragoza (Spain))

    1994-03-01

    Four Spanish lignites and their vitrinite concentrates were evaluated for coal liquefaction. Correlations between vitrinite content and conversion in direct liquefaction were observed for the lignites but not for the vitrinite concentrates. The most reactive of the four coals was processed in two-stage liquefaction at a larger scale. First-stage coal liquefaction was carried out in a continuous unit at Clausthal University at a temperature of 400°C, at 20 MPa hydrogen pressure and with anthracene oil as a solvent. The coal conversion obtained was 75.41%, comprising 3.79% gases, 2.58% primary condensate and 69.04% heavy liquids. A hydroprocessing unit was built at the Instituto de Carboquimica for the second-stage coal liquefaction. Whole and deasphalted liquids from the first-stage liquefaction were processed at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al₂O₃) and HT-500E (Ni-Mo/Al₂O₃). The effects of liquid hourly space velocity (LHSV), temperature, gas/liquid ratio and catalyst on the heteroatom content of the liquids were studied, and levels of 5 ppm of nitrogen and 52 ppm of sulphur were reached at 450°C, 10 MPa hydrogen pressure, 0.08 kg H₂/kg feedstock and with the Harshaw HT-500E catalyst. The liquids obtained were hydroprocessed again at 420°C, 10 MPa hydrogen pressure and 0.06 kg H₂/kg feedstock to hydrogenate the aromatic structures. Under these conditions, the aromaticity was reduced considerably, and 39% of naphtha and 35% of kerosene fractions were obtained. 18 refs., 4 figs., 4 tabs.

  11. Two-Stage Performance Engineering of Container-based Virtualization

    Directory of Open Access Journals (Sweden)

    Zheng Li

    2018-02-01

    Full Text Available Cloud computing has become a compelling paradigm built on compute and storage virtualization technologies. The current virtualization solution in the Cloud widely relies on hypervisor-based technologies. Given the recent boom of the container ecosystem, container-based virtualization has started receiving more attention as a promising alternative. Although container technologies are generally considered to be lightweight, no virtualization solution is ideally resource-free, and the corresponding performance overheads will lead to negative impacts on the quality of Cloud services. To facilitate understanding container technologies from the performance engineering perspective, we conducted two-stage performance investigations into Docker containers as a concrete example. At the first stage, we used a physical machine with “just-enough” resource as a baseline to investigate the performance overhead of a standalone Docker container against a standalone virtual machine (VM). With findings contrary to the related work, our evaluation results show that the virtualization performance overhead could vary not only on a feature-by-feature basis but also on a job-to-job basis. Moreover, the hypervisor-based technology does not come with a higher performance overhead in every case. For example, Docker containers particularly exhibit lower QoS in terms of storage transaction speed. At the ongoing second stage, we employed a physical machine with “fair-enough” resource to implement a container-based MapReduce application and tried to optimize its performance. In fact, this machine could not support VM-based MapReduce clusters at the same scale. The performance tuning results show that the effects of different optimization strategies could largely be related to the data characteristics. For example, LZO compression can bring the most significant performance improvement when dealing with text data in our case.

  12. Combinatorial optimization games

    Energy Technology Data Exchange (ETDEWEB)

    Deng, X. [York Univ., North York, Ontario (Canada); Ibaraki, Toshihide; Nagamochi, Hiroshi [Kyoto Univ. (Japan)

    1997-06-01

    We introduce a general integer programming formulation for a class of combinatorial optimization games, which immediately allows us to improve the algorithmic result for finding imputations in the core (an important solution concept in cooperative game theory) of the network flow game on simple networks by Kalai and Zemel. An interesting result is a general theorem that the core for this class of games is nonempty if and only if a related linear program has an integer optimal solution. We study the properties needed for this mathematical condition to hold for several interesting problems, and apply them to resolve algorithmic and complexity issues for their cores along the following lines: decide whether the core is empty; if the core is not empty, find an imputation in the core; given an imputation x, test whether x is in the core. We also explore the properties of totally balanced games in this succinct formulation of cooperative games.

  13. Combinatorial Testing for VDM

    DEFF Research Database (Denmark)

    Larsen, Peter Gorm; Lausdahl, Kenneth; Battle, Nick

    2010-01-01

    Combinatorial testing in VDM involves the automatic generation and execution of a large collection of test cases derived from templates provided in the form of trace definitions added to a VDM specification. The main value of this is the rapid detection of run-time errors caused by forgotten preconditions as well as broken invariants and post-conditions. Trace definitions are defined as regular expressions describing possible sequences of operation calls, and are conceptually similar to UML sequence diagrams. In this paper we present a tool enabling test automation based on VDM traces, and explain how it is possible to reduce large collections of test cases in different ways. Its use is illustrated with a small case study.

  14. On an extension of a combinatorial identity

    Indian Academy of Sciences (India)

    to an infinite family of 4-way combinatorial identities. In some particular cases we get even 5-way combinatorial identities, which give us four new combinatorial versions of the Göllnitz–Gordon identities. Keywords: n-color partitions; lattice paths; Frobenius partitions; Göllnitz–Gordon identities; combinatorial interpretations.

  15. Stochastic integer programming by dynamic programming

    NARCIS (Netherlands)

    Lageweg, B.J.; Lenstra, J.K.; Rinnooy Kan, A.H.G.; Stougie, L.; Ermoliev, Yu.; Wets, R.J.B.

    1988-01-01

    Stochastic integer programming is a suitable tool for modeling hierarchical decision situations with combinatorial features. In continuation of our work on the design and analysis of heuristics for such problems, we now try to find optimal solutions. Dynamic programming techniques can be used to
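
    As an editorial aside, the modeling idea named in the record (a two-stage stochastic program with integer recourse, attacked with dynamic programming) can be illustrated on a toy problem: a first-stage capacity choice followed, in each scenario, by a 0/1 knapsack recourse solved by DP. This is a minimal sketch under made-up data, not the heuristics or exact methods of the cited work.

```python
def knapsack_dp(values, weights, capacity):
    """Second-stage recourse: 0/1 knapsack solved by dynamic programming."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

def two_stage_value(cap_cost, scenarios):
    """Enumerate first-stage capacity decisions; for each, the expected
    second-stage profit is the probability-weighted knapsack value.
    scenarios: list of (prob, values, weights) tuples (hypothetical data)."""
    best_decision, best_profit = None, float("-inf")
    for capacity in range(0, 11):                     # first-stage integer choice
        expected = sum(p * knapsack_dp(v, w, capacity)
                       for p, v, w in scenarios)
        profit = expected - cap_cost * capacity
        if profit > best_profit:
            best_decision, best_profit = capacity, profit
    return best_decision, best_profit

# Hypothetical usage: two equally likely demand scenarios
scenarios = [(0.5, [6, 5, 4], [3, 2, 2]), (0.5, [9, 3], [4, 1])]
print(two_stage_value(cap_cost=1.0, scenarios=scenarios))
```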

  16. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, which incorporates the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Sensitivity discussions of four main parameters, namely the number of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
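
    As an editorial aside, the queueing structure named in the record, an M/M/1 stage feeding an M/D/1 stage, can be simulated directly with the Lindley-type recursion for single-server FIFO stations. The sketch below estimates the overall sojourn time only; it does not include the fuzziness, cost terms, or genetic algorithm of the paper, and the parameter values are hypothetical.

```python
import random

def tandem_mm1_md1(lam, mu, d, n=100_000, seed=0):
    """Simulate a tandem queue: Poisson arrivals (rate lam) to an M/M/1 server
    (rate mu), whose departures feed an M/D/1 server with deterministic
    service time d.  Returns the mean overall sojourn time.  Each station uses
    the recursion D_k = max(arrival_k, D_{k-1}) + service_k."""
    rng = random.Random(seed)
    t = 0.0                      # arrival time of the current customer
    d1 = d2 = 0.0                # departure times from stations 1 and 2
    total = 0.0
    for _ in range(n):
        t += rng.expovariate(lam)                 # Poisson arrival process
        d1 = max(t, d1) + rng.expovariate(mu)     # stage 1: exponential service
        d2 = max(d1, d2) + d                      # stage 2: deterministic service
        total += d2 - t                           # sojourn through both stages
    return total / n

# Hypothetical usage: both stations stable (utilisations 0.75 and 0.6)
print(tandem_mm1_md1(lam=3.0, mu=4.0, d=0.2))
```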

  17. Fleet Planning Decision-Making: Two-Stage Optimization with Slot Purchase

    Directory of Open Access Journals (Sweden)

    Lay Eng Teoh

    2016-01-01

    Full Text Available Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies, which did not consider slot purchase even though it affects the service frequency determination of airlines, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated in which the first stage selects the individual operating routes that require slot purchase for network expansion, while the second stage, in the form of a probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency) to meet the demand profitably. By analyzing an illustrative case study (with 38 international routes), the results show that the incorporation of slot purchase in fleet planning is beneficial to airlines in achieving economic and social sustainability. The developed model is practically viable for airlines not only to provide a better service quality (via a higher service frequency) to meet more demand but also to obtain a higher revenue and profit margin, by making optimal slot purchase and fleet planning decisions throughout the long-term planning horizon.

  18. Combinatorial Nano-Bio Interfaces.

    Science.gov (United States)

    Cai, Pingqiang; Zhang, Xiaoqian; Wang, Ming; Wu, Yun-Long; Chen, Xiaodong

    2018-06-08

    Nano-bio interfaces are emerging from the convergence of engineered nanomaterials and biological entities. Despite rapid growth, clinical translation of biomedical nanomaterials is heavily compromised by the lack of comprehensive understanding of biophysicochemical interactions at nano-bio interfaces. In the past decade, a few investigations have adopted a combinatorial approach toward decoding nano-bio interfaces. Combinatorial nano-bio interfaces comprise the design of nanocombinatorial libraries and high-throughput bioevaluation. In this Perspective, we address challenges in combinatorial nano-bio interfaces and call for multiparametric nanocombinatorics (composition, morphology, mechanics, surface chemistry), multiscale bioevaluation (biomolecules, organelles, cells, tissues/organs), and the recruitment of computational modeling and artificial intelligence. Leveraging combinatorial nano-bio interfaces will shed light on precision nanomedicine and its potential applications.

  19. Combinatorial designs constructions and analysis

    CERN Document Server

    Stinson, Douglas R

    2004-01-01

    Created to teach students many of the most important techniques used for constructing combinatorial designs, this is an ideal textbook for advanced undergraduate and graduate courses in combinatorial design theory. The text features clear explanations of basic designs, such as Steiner and Kirkman triple systems, mutual orthogonal Latin squares, finite projective and affine planes, and Steiner quadruple systems. In these settings, the student will master various construction techniques, both classic and modern, and will be well-prepared to construct a vast array of combinatorial designs. Design theory offers a progressive approach to the subject, with carefully ordered results. It begins with simple constructions that gradually increase in complexity. Each design has a construction that contains new ideas or that reinforces and builds upon similar ideas previously introduced. A new text/reference covering all apsects of modern combinatorial design theory. Graduates and professionals in computer science, applie...

  20. Combinatorial Mathematics: Research into Practice

    Science.gov (United States)

    Sriraman, Bharath; English, Lyn D.

    2004-01-01

    Implications and suggestions for using combinatorial mathematics in the classroom through a survey and synthesis of numerous research studies are presented. The implications revolve around five major themes that emerge from analysis of these studies.

  1. A combinatorial framework to quantify peak/pit asymmetries in complex dynamics

    NARCIS (Netherlands)

    Hasson, Uri; Iacovacci, Jacopo; Davis, Ben; Flanagan, Ryan; Tagliazucchi, E.; Laufs, Helmut; Lacasa, Lucas

    2018-01-01

    We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results on some canonical dynamical models (stochastic

  2. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  3. Number systems and combinatorial problems

    OpenAIRE

    Yordzhev, Krasimir

    2014-01-01

    The present work has been designed for students in secondary school and their teachers of mathematics. We show how, with the help of our knowledge of number systems, we can solve problems from other fields of mathematics, for example in combinatorial analysis and, most of all, when proving some combinatorial identities. To demonstrate the method discussed in this article, we have chosen several suitable mathematical tasks.

  4. Relativity in Combinatorial Gravitational Fields

    Directory of Open Access Journals (Sweden)

    Mao Linfan

    2010-04-01

    Full Text Available A combinatorial spacetime $(\mathscr{C}_G|\overline{t})$ is a smoothly combinatorial manifold $\mathscr{C}$ underlying a graph $G$ evolving on a time vector $\overline{t}$. As is well known, Einstein's general relativity is suitable for use only in one spacetime. What is its disguise in a combinatorial spacetime? Applying combinatorial Riemannian geometry enables us to present a combinatorial spacetime model for the Universe and to suggest a generalized Einstein gravitational equation in such a model. For finding its solutions, a generalized relativity principle, called the projective principle, is proposed, i.e., a physics law in a combinatorial spacetime is invariant under a projection on its subspaces; spherically symmetric multi-solutions of the generalized Einstein gravitational equations in vacuum or for a charged body are then found. We also consider the geometrical structure of such solutions with physical formations, and conclude that an ultimate theory for the Universe may be established if all such spacetimes lie in ${\bf R}^3$. Otherwise, our theory is only an approximate theory and endless forever.

  5. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  6. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic...... treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility...... models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...
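
    As an editorial aside, the stochastic-leverage idea can be pictured with a small Euler-type simulation of a Heston-style model in which the return/volatility correlation follows a bounded, Jacobi-like process instead of being constant. This is an illustrative sketch with arbitrary parameters, not the generalised Heston or generalised Barndorff-Nielsen & Shephard models of the paper.

```python
import numpy as np

def simulate_stochastic_leverage(T=1.0, n=2000, seed=3):
    """Euler-type simulation of a Heston-style model in which the
    return/volatility correlation rho_t is itself stochastic and kept
    inside (-1, 1).  Parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    dt = T / n
    s, v, rho = 1.0, 0.04, -0.5         # price, variance, leverage state
    kappa, theta, xi = 2.0, 0.04, 0.3   # variance dynamics (Heston-type)
    a, b = 4.0, -0.5                    # mean reversion of the leverage process
    path = np.empty(n)
    for k in range(n):
        z1, z2, z3 = rng.standard_normal(3)
        dw_v = np.sqrt(dt) * z1
        dw_s = rho * dw_v + np.sqrt(max(1.0 - rho**2, 0.0)) * np.sqrt(dt) * z2
        # bounded (Jacobi-like) dynamics keep rho inside (-1, 1)
        rho += a * (b - rho) * dt + 0.5 * np.sqrt(max(1.0 - rho**2, 0.0)) * np.sqrt(dt) * z3
        rho = min(max(rho, -0.99), 0.99)
        v = max(v + kappa * (theta - v) * dt + xi * np.sqrt(max(v, 0.0)) * dw_v, 0.0)
        s *= np.exp(-0.5 * v * dt + np.sqrt(max(v, 0.0)) * dw_s)
        path[k] = s
    return path

print(simulate_stochastic_leverage()[-1])
```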

  7. Combinatorial synthesis of natural products

    DEFF Research Database (Denmark)

    Nielsen, John

    2002-01-01

    Combinatorial syntheses allow production of compound libraries in an expeditious and organized manner immediately applicable for high-throughput screening. Natural products possess a pedigree to justify quality and appreciation in drug discovery and development. Currently, we are seeing a rapid...... increase in application of natural products in combinatorial chemistry and vice versa. The therapeutic areas of infectious disease and oncology still dominate but many new areas are emerging. Several complex natural products have now been synthesised by solid-phase methods and have created the foundation...... for preparation of combinatorial libraries. In other examples, natural products or intermediates have served as building blocks or scaffolds in the synthesis of complex natural products, bioactive analogues or designed hybrid molecules. Finally, structural motifs from the biologically active parent molecule have...

  8. Combinatorial optimization theory and algorithms

    CERN Document Server

    Korte, Bernhard

    2018-01-01

    This comprehensive textbook on combinatorial optimization places special emphasis on theoretical results and algorithms with provably good performance, in contrast to heuristics. It is based on numerous courses on combinatorial optimization and specialized topics, mostly at graduate level. This book reviews the fundamentals, covers the classical topics (paths, flows, matching, matroids, NP-completeness, approximation algorithms) in detail, and proceeds to advanced and recent topics, some of which have not appeared in a textbook before. Throughout, it contains complete but concise proofs, and also provides numerous exercises and references. This sixth edition has again been updated, revised, and significantly extended. Among other additions, there are new sections on shallow-light trees, submodular function maximization, smoothed analysis of the knapsack problem, the (ln 4+ɛ)-approximation for Steiner trees, and the VPN theorem. Thus, this book continues to represent the state of the art of combinatorial opti...

  9. Effect of the Implicit Combinatorial Model on Combinatorial Reasoning in Secondary School Pupils.

    Science.gov (United States)

    Batanero, Carmen; And Others

    1997-01-01

    Elementary combinatorial problems may be classified into three different combinatorial models: (1) selection; (2) partition; and (3) distribution. The main goal of this research was to determine the effect of the implicit combinatorial model on pupils' combinatorial reasoning before and after instruction. Gives an analysis of variance of the…

  10. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  11. Two-stage model of development of heterogeneous uranium-lead systems in zircon

    International Nuclear Information System (INIS)

    Mel'nikov, N.N.; Zevchenkov, O.A.

    1985-01-01

    The behaviour of the isotope systems of multiphase zircons under two-stage disturbance is considered. The calculations show that linear correlations on the concordia diagram can be explained by two-stage opening of the U-Pb systems of cogenetic zircons if the zircon is considered physically heterogeneous, with its different parts losing different proportions of the accumulated radiogenic lead. The ''metamorphism ages'' obtained from such two-stage opening zircons are intermediate and have no geochronological significance, while the ''crystallization ages'' remain rather close to the real ones. In some cases, two-stage opening zircons can be diagnosed by the discordance of their crystal component

  13. Distributing the computation in combinatorial optimization experiments over the cloud

    Directory of Open Access Journals (Sweden)

    Mario Brcic

    2017-12-01

    Full Text Available Combinatorial optimization is an area of great importance, since many real-world problems have discrete parameters which are part of the objective function to be optimized. The development of combinatorial optimization algorithms is guided by empirical study of candidate ideas and their performance over a wide range of settings or scenarios in order to infer general conclusions. The number of scenarios can be overwhelming, especially when modeling uncertainty in some of the problem's parameters. Since the process is also iterative and many ideas and hypotheses may be tested, the execution time of each experiment plays an important role in its efficiency and success. The structure of such experiments allows for significant execution-time improvements by distributing the computation. We focus on cloud computing as a cost-efficient solution in these circumstances. In this paper we present a system for validating and comparing stochastic combinatorial optimization algorithms. The system also deals with the selection of the optimal settings for computational nodes and the number of nodes in terms of the performance-cost tradeoff. We present applications of the system to a new class of project scheduling problems. We show that we can optimize the selection over cloud service providers as one of the settings and, according to the model, this resulted in substantial cost savings while meeting the deadline.
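
    As an editorial aside, the core pattern of the record, evaluating many independent scenarios in parallel, can be mimicked locally with process-level parallelism; cloud nodes would play the role of the worker processes. The sketch below is a local analogue using Python's standard library, not the authors' system, and the evaluation function is a hypothetical placeholder.

```python
from concurrent.futures import ProcessPoolExecutor
import random

def evaluate_scenario(args):
    """Placeholder for one experiment run: evaluate a candidate solution
    (here just a seed-dependent dummy objective) under one scenario."""
    scenario_id, seed = args
    rng = random.Random(seed)
    return scenario_id, sum(rng.random() for _ in range(100_000))

def run_distributed(num_scenarios=32, workers=4):
    """Distribute independent scenario evaluations over worker processes,
    the local analogue of spreading them over cloud nodes."""
    jobs = [(i, 1000 + i) for i in range(num_scenarios)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = dict(pool.map(evaluate_scenario, jobs))
    return results

if __name__ == "__main__":
    out = run_distributed()
    print(len(out), min(out.values()), max(out.values()))
```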

  14. Combinatorial optimization networks and matroids

    CERN Document Server

    Lawler, Eugene

    2011-01-01

    Perceptively written text examines optimization problems that can be formulated in terms of networks and algebraic structures called matroids. Chapters cover shortest paths, network flows, bipartite matching, nonbipartite matching, matroids and the greedy algorithm, matroid intersections, and the matroid parity problems. A suitable text or reference for courses in combinatorial computing and concrete computational complexity in departments of computer science and mathematics.

  15. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  16. Computational Complexity of Combinatorial Surfaces

    NARCIS (Netherlands)

    Vegter, Gert; Yap, Chee K.

    1990-01-01

    We investigate the computational problems associated with combinatorial surfaces. Specifically, we present an algorithm (based on the Brahana-Dehn-Heegaard approach) for transforming the polygonal schema of a closed triangulated surface into its canonical form in O(n log n) time, where n is the

  17. Combinatorial synthesis of ceramic materials

    Science.gov (United States)

    Lauf, Robert J.; Walls, Claudia A.; Boatner, Lynn A.

    2006-11-14

    A combinatorial library includes a gelcast substrate defining a plurality of cavities in at least one surface thereof; and a plurality of gelcast test materials in the cavities, at least two of the test materials differing from the substrate in at least one compositional characteristic, the two test materials differing from each other in at least one compositional characteristic.

  18. Combinatorial auctions for electronic business

    Indian Academy of Sciences (India)

    (6) Information feedback: An auction protocol may be a direct mechanism or an .... transparency of allocation decisions arise in resolving these ties. .... bidding, however more recently, combinatorial bids are allowed [50] making ...... Also, truth revelation and other game theoretic considerations are not taken into account.

  19. Combinatorial Proofs and Algebraic Proofs

    Indian Academy of Sciences (India)

    Permanent link: https://www.ias.ac.in/article/fulltext/reso/018/07/0630-0645. Keywords. Combinatorial proof; algebraic proof; binomial identity; recurrence relation; composition; Fibonacci number; Fibonacci identity; Pascal triangle. Author Affiliations. Shailesh A Shirali1. Sahyadri School Tiwai Hill, Rajgurunagar Pune 410 ...

  20. Combinatorial Speculations and the Combinatorial Conjecture for Mathematics

    OpenAIRE

    Mao, Linfan

    2006-01-01

    Combinatorics is a powerful tool for dealing with relations among the objects that mushroomed in the past century. However, a more important task for mathematicians is to apply combinatorics to other branches of mathematics and to other sciences, not merely to find the combinatorial behaviour of objects. Recently, such research has appeared in journals of mathematics and of theoretical physics on the cosmos. The main purpose of this paper is to survey this thinking and these ideas for mathematics and cosmological physics, s...

  1. Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model

    Directory of Open Access Journals (Sweden)

    Mi-Yuan Shan

    2013-01-01

    Full Text Available We propose a combinatorial clustering algorithm of cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve the stochastic problem. The algorithm employs a novel probability model as well as a permutation-based local search method. The parameters of COCQPSO are set based on a design of experiments. In a comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms, we show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than other fuzzy sets. Finally, numerical examples show the clustering effectiveness of the COCQPSO and FCOCQPSO clustering algorithms, which is remarkable.
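    The cloud-model component and the permutation-based local search of COCQPSO are specific to the paper and are not reproduced here. The sketch below only illustrates the standard quantum-behaved PSO position update that such algorithms build on, applied to a generic continuous objective; the sphere function, swarm size and contraction-expansion coefficient are arbitrary placeholder choices.

        # Minimal quantum-behaved PSO (QPSO) sketch; COCQPSO's cloud-model and
        # permutation-based local-search components are not included.
        import math, random

        def qpso(objective, dim=5, n_particles=20, iters=200, beta=0.75, bounds=(-5.0, 5.0)):
            lo, hi = bounds
            X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
            pbest = [x[:] for x in X]                    # personal best positions
            pval = [objective(x) for x in X]             # personal best values
            g = min(range(n_particles), key=lambda i: pval[i])
            gbest, gval = pbest[g][:], pval[g]           # global best
            for _ in range(iters):
                # "mainstream thought point": mean of all personal bests
                mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
                for i in range(n_particles):
                    for d in range(dim):
                        phi, u = random.random(), random.random()
                        attractor = phi * pbest[i][d] + (1 - phi) * gbest[d]
                        step = beta * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                        X[i][d] = attractor + step if random.random() < 0.5 else attractor - step
                    f = objective(X[i])
                    if f < pval[i]:
                        pval[i], pbest[i] = f, X[i][:]
                        if f < gval:
                            gval, gbest = f, X[i][:]
            return gbest, gval

        # Placeholder objective: sphere function, minimum at the origin
        print(qpso(lambda x: sum(v * v for v in x)))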

  2. Combinatorial algebra syntax and semantics

    CERN Document Server

    Sapir, Mark V

    2014-01-01

    Combinatorial Algebra: Syntax and Semantics provides a comprehensive account of many areas of combinatorial algebra. It contains self-contained proofs of more than 20 fundamental results, both classical and modern. This includes Golod–Shafarevich and Olshanskii's solutions of Burnside problems, Shirshov's solution of Kurosh's problem for PI rings, Belov's solution of Specht's problem for varieties of rings, Grigorchuk's solution of Milnor's problem, the Bass–Guivarc'h theorem about the growth of nilpotent groups, Kleiman's solution of Hanna Neumann's problem for varieties of groups, Adian's solution of the von Neumann-Day problem, and Trahtman's solution of the road coloring problem of Adler, Goodwyn and Weiss. The book emphasizes several "universal" tools, such as trees, subshifts, uniformly recurrent words, diagrams and automata. With over 350 exercises at various levels of difficulty and with hints for the more difficult problems, this book can be used as a textbook, and aims to reach a wide and diversified...

  3. Combinatorial aspects of covering arrays

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2004-11-01

    Full Text Available Covering arrays generalize orthogonal arrays by requiring that t-tuples be covered, but not requiring that the appearance of t-tuples be balanced. Their use in screening experiments has found application in software testing, hardware testing, and a variety of fields in which interactions among factors are to be identified. Here a combinatorial view of covering arrays is adopted, encompassing basic bounds, direct constructions, recursive constructions, algorithmic methods, and applications.
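    The covering property itself is easy to state operationally: every choice of t columns must exhibit every possible t-tuple of symbol values in at least one row. The short check below verifies this for a small strength-2 array on three binary factors (which happens to be an orthogonal array as well); it is an illustrative sketch, not a construction method from the survey.

        # Sketch: verify the covering property of an array.
        from itertools import combinations

        def is_covering_array(rows, t, levels):
            """rows: equal-length tuples of symbols 0..levels-1; t: strength."""
            k = len(rows[0])
            for cols in combinations(range(k), t):
                seen = {tuple(row[c] for c in cols) for row in rows}
                if len(seen) < levels ** t:       # some t-tuple is never covered
                    return False
            return True

        # A strength-2 covering (in fact orthogonal) array on 3 binary factors, 4 rows:
        CA = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
        print(is_covering_array(CA, t=2, levels=2))      # True
        print(is_covering_array(CA[:3], t=2, levels=2))  # False: 3 rows cannot cover all pairs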

  4. Two-stage exchange knee arthroplasty: does resistance of the infecting organism influence the outcome?

    Science.gov (United States)

    Kurd, Mark F; Ghanem, Elie; Steinbrecher, Jill; Parvizi, Javad

    2010-08-01

    Periprosthetic joint infection after TKA is a challenging complication. Two-stage exchange arthroplasty is the accepted standard of care, but reported failure rates are increasing. It has been suggested this is due to the increased prevalence of methicillin-resistant infections. We asked the following questions: (1) What is the reinfection rate after two-stage exchange arthroplasty? (2) Which risk factors predict failure? (3) Which variables are associated with acquiring a resistant organism periprosthetic joint infection? This was a case-control study of 102 patients with infected TKA who underwent a two-stage exchange arthroplasty. Ninety-six patients were followed for a minimum of 2 years (mean, 34.5 months; range, 24-90.1 months). Cases were defined as failures of two-stage exchange arthroplasty. Two-stage exchange arthroplasty was successful in controlling the infection in 70 patients (73%). Patients who failed two-stage exchange arthroplasty were 3.37 times more likely to have been originally infected with a methicillin-resistant organism. Older age, higher body mass index, and history of thyroid disease were predisposing factors to infection with a methicillin-resistant organism. Innovative interventions are needed to improve the effectiveness of two-stage exchange arthroplasty for TKA infection with a methicillin-resistant organism as current treatment protocols may not be adequate for control of these virulent pathogens. Level IV, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.

  5. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    Science.gov (United States)

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

    The purpose of this study was to apply a two-stage screening method to the large-scale intelligence screening of military conscripts. The participants were 99 conscripted soldiers whose educational level was senior high school or lower. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and of the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future.
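    One plausible reading of the two-stage "window" screening described above is a simple decision rule on the WCST conceptual level responses (CLR) score: scores below the lower cut-off are classified at stage 1, scores at or above the upper cut-off are cleared at stage 1, and only scores inside the window are referred to the full WAIS-R. The cut-offs 49 and 66 and the FIQ ≤ 84 criterion come from the abstract; the direction of each decision is an assumption, and this is not the authors' implementation.

        # Assumed two-stage "window" screening rule (cut-offs from the abstract,
        # decision directions assumed; lower CLR taken to indicate poorer performance):
        #   CLR <  49        -> classify as intellectual disability (ID) at stage 1
        #   CLR >= 66        -> classify as non-ID at stage 1
        #   49 <= CLR < 66   -> refer to the full WAIS-R (stage 2); ID if FIQ <= 84
        def window_screen(clr, waisr_fiq=None, low=49, high=66):
            if clr < low:
                return "ID (stage 1)"
            if clr >= high:
                return "non-ID (stage 1)"
            if waisr_fiq is None:
                return "refer to WAIS-R (stage 2)"
            return "ID (stage 2)" if waisr_fiq <= 84 else "non-ID (stage 2)"

        print(window_screen(40))                # resolved without the costly WAIS-R
        print(window_screen(55))                # falls inside the window -> stage 2 needed
        print(window_screen(55, waisr_fiq=80))  # stage-2 decision from the WAIS-R FIQ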

  6. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  7. Solving stochastic multiobjective vehicle routing problem using probabilistic metaheuristic

    Directory of Open Access Journals (Sweden)

    Gannouni Asmae

    2017-01-01

    closed form expression. This novel approach is based on combinatorial probability and can be incorporated in a multiobjective evolutionary algorithm. (ii) Provide probabilistic approaches to elitism and diversification in multiobjective evolutionary algorithms. Finally, the behavior of the resulting Probabilistic Multi-objective Evolutionary Algorithms (PrMOEAs) is empirically investigated on the multi-objective stochastic VRP problem.

  8. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie; Thammannagowda, Shivegowda; Mohagheghi, Ali; Maness, Pin-Ching; Logan, Bruce E.

    2009-01-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert the recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol

  9. Lingual mucosal graft two-stage Bracka technique for redo hypospadias repair

    Directory of Open Access Journals (Sweden)

    Ahmed Sakr

    2017-09-01

    Conclusion: Lingual mucosa is a reliable and versatile graft material in the armamentarium of two-stage Bracka hypospadias repair with the merits of easy harvesting and minor donor-site complications.

  10. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

    Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage BVT and 74 (56%) two-stage BVT fistulas were created among 8 different vascular surgeons during the study period that each performed both procedures. There was no significant difference in the mean age, male gender, white race, diabetes, coronary disease, or medication profile among patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates of one-stage BVT were 56% at 12-month follow-up, whereas primary patency rates of two-stage BVT were 72% at 12-month follow-up. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage BVT vs 80% for two-stage BVT) and 24 months (44% for one-stage BVT vs 73% for two-stage BVT) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  11. TWO-STAGE CHARACTER CLASSIFICATION : A COMBINED APPROACH OF CLUSTERING AND SUPPORT VECTOR CLASSIFIERS

    NARCIS (Netherlands)

    Vuurpijl, L.; Schomaker, L.

    2000-01-01

    This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

  12. Cost-effectiveness Analysis of a Two-stage Screening Intervention for Hepatocellular Carcinoma in Taiwan

    Directory of Open Access Journals (Sweden)

    Sophy Ting-Fang Shih

    2010-01-01

    Conclusion: Screening the population of high-risk individuals for HCC with the two-stage screening intervention in Taiwan is considered potentially cost-effective compared with opportunistic screening in the target population of an HCC endemic area.

  13. Intrinsic information carriers in combinatorial dynamical systems

    Science.gov (United States)

    Harmer, Russ; Danos, Vincent; Feret, Jérôme; Krivine, Jean; Fontana, Walter

    2010-09-01

    Many proteins are composed of structural and chemical features—"sites" for short—characterized by definite interaction capabilities, such as noncovalent binding or covalent modification of other proteins. This modularity allows for varying degrees of independence, as the behavior of a site might be controlled by the state of some but not all sites of the ambient protein. Independence quickly generates a startling combinatorial complexity that shapes most biological networks, such as mammalian signaling systems, and effectively prevents their study in terms of kinetic equations—unless the complexity is radically trimmed. Yet, if combinatorial complexity is key to the system's behavior, eliminating it will prevent, not facilitate, understanding. A more adequate representation of a combinatorial system is provided by a graph-based framework of rewrite rules where each rule specifies only the information that an interaction mechanism depends on. Unlike reactions, which deal with molecular species, rules deal with patterns, i.e., multisets of molecular species. Although the stochastic dynamics induced by a collection of rules on a mixture of molecules can be simulated, it appears useful to capture the system's average or deterministic behavior by means of differential equations. However, expansion of the rules into kinetic equations at the level of molecular species is not only impractical, but conceptually indefensible. If rules describe bona fide patterns of interaction, molecular species are unlikely to constitute appropriate units of dynamics. Rather, we must seek aggregate variables reflective of the causal structure laid down by the rules. We call these variables "fragments" and the process of identifying them "fragmentation." Ideally, fragments are aspects of the system's microscopic population that the set of rules can actually distinguish on average; in practice, it may only be feasible to identify an approximation to this. Most importantly, fragments are

  14. Intrinsic information carriers in combinatorial dynamical systems.

    Science.gov (United States)

    Harmer, Russ; Danos, Vincent; Feret, Jérôme; Krivine, Jean; Fontana, Walter

    2010-09-01

    Many proteins are composed of structural and chemical features--"sites" for short--characterized by definite interaction capabilities, such as noncovalent binding or covalent modification of other proteins. This modularity allows for varying degrees of independence, as the behavior of a site might be controlled by the state of some but not all sites of the ambient protein. Independence quickly generates a startling combinatorial complexity that shapes most biological networks, such as mammalian signaling systems, and effectively prevents their study in terms of kinetic equations-unless the complexity is radically trimmed. Yet, if combinatorial complexity is key to the system's behavior, eliminating it will prevent, not facilitate, understanding. A more adequate representation of a combinatorial system is provided by a graph-based framework of rewrite rules where each rule specifies only the information that an interaction mechanism depends on. Unlike reactions, which deal with molecular species, rules deal with patterns, i.e., multisets of molecular species. Although the stochastic dynamics induced by a collection of rules on a mixture of molecules can be simulated, it appears useful to capture the system's average or deterministic behavior by means of differential equations. However, expansion of the rules into kinetic equations at the level of molecular species is not only impractical, but conceptually indefensible. If rules describe bona fide patterns of interaction, molecular species are unlikely to constitute appropriate units of dynamics. Rather, we must seek aggregate variables reflective of the causal structure laid down by the rules. We call these variables "fragments" and the process of identifying them "fragmentation." Ideally, fragments are aspects of the system's microscopic population that the set of rules can actually distinguish on average; in practice, it may only be feasible to identify an approximation to this. Most importantly, fragments are

  15. A Two-Stage Fuzzy Logic Control Method of Traffic Signal Based on Traffic Urgency Degree

    OpenAIRE

    Yan Ge

    2014-01-01

    City intersection traffic signal control is an important method for improving the efficiency of the road network and alleviating traffic congestion. This paper studies a fuzzy control method for the traffic signals at a single intersection. A two-stage traffic signal control method based on traffic urgency degree is proposed, built on two-stage fuzzy inference at a single intersection. At the first stage, calculate the traffic urgency degree for all red phases using the traffic urgency evaluation module and select t...

  16. Noncausal two-stage image filtration at presence of observations with anomalous errors

    OpenAIRE

    S. V. Vishnevyy; S. Ya. Zhuk; A. N. Pavliuchenkova

    2013-01-01

    Introduction. For the filtration of images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms that detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. The adaptive algorithm for noncausal two-stage filtration is developed. On the first stage the adaptiv...

  17. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Full Text Available Chia-Chang Chien(1), Shu-Fen Huang(1,2,3,4), For-Wey Lung(1,2,3,4); (1) Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; (2) Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; (3) Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; (4) Calo Psychiatric Center, Pingtung County, Taiwan. Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We collected 99 conscripted soldiers whose educational levels were senior high school level or lower to be the participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum one cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility of the WCST to replace WAIS-R in large-scale screenings for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised

  18. Two-stage discrete-continuous multi-objective load optimization: An industrial consumer utility approach to demand response

    International Nuclear Information System (INIS)

    Abdulaal, Ahmed; Moghaddass, Ramin; Asfour, Shihab

    2017-01-01

    Highlights:
    • Two-stage model links discrete optimization to real-time system dynamics operation.
    • The solutions obtained are non-dominated Pareto optimal solutions.
    • Computationally efficient GA solver through customized chromosome coding.
    • Modest to considerable savings are achieved depending on the consumer's preference.
    Abstract: In the wake of today's highly dynamic and competitive energy markets, optimal dispatching of energy sources requires effective demand responsiveness. Suppliers have adopted a dynamic pricing strategy in efforts to control the downstream demand. This method however requires consumer awareness, flexibility, and timely responsiveness. While residential activities are more flexible and schedulable, larger commercial consumers remain an obstacle due to the impacts on industrial performance. This paper combines methods from quadratic, stochastic, and evolutionary programming with multi-objective optimization and continuous simulation, to propose a two-stage discrete-continuous multi-objective load optimization (DiCoMoLoOp) autonomous approach for industrial consumer demand response (DR). Stage 1 defines discrete-event load shifting targets. Accordingly, controllable loads are continuously optimized in stage 2 while considering the consumer's utility. Utility functions, which measure the loads' time value to the consumer, are derived and weights are assigned through an analytical hierarchy process (AHP). The method is demonstrated for an industrial building model using real data. The proposed method integrates with building energy management system and solves in real-time with autonomous and instantaneous load shifting in the hour-ahead energy price (HAP) market. The simulation shows the occasional existence of multiple load management options on the Pareto frontier. Finally, the computed savings, based on the simulation analysis with real consumption, climate, and price data, ranged from modest to considerable amounts
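    The AHP weighting step mentioned above is a standard technique; the generic sketch below (not the paper's hierarchy or data) derives priority weights as the normalized principal eigenvector of a pairwise comparison matrix, using a made-up 3x3 comparison of three load categories.

        # Generic AHP weighting sketch: priority weights from the principal eigenvector
        # of a pairwise comparison matrix (Saaty 1-9 scale); the example matrix is made up.
        import numpy as np

        A = np.array([
            [1.0,   3.0,   5.0],
            [1/3.0, 1.0,   2.0],
            [1/5.0, 1/2.0, 1.0],
        ])

        eigvals, eigvecs = np.linalg.eig(A)
        k = int(np.argmax(eigvals.real))            # index of the principal eigenvalue
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                # priority weights, summing to 1
        print("weights:", np.round(w, 3))

        # lambda_max close to n indicates a (near-)consistent comparison matrix
        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
        print("lambda_max:", round(float(eigvals.real[k]), 3), "CI:", round(float(ci), 3))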

  19. Dynamic combinatorial libraries : new opportunities in systems chemistry

    NARCIS (Netherlands)

    Hunt, Rosemary A. R.; Otto, Sijbren; Hunt, Rosemary A.R.

    2011-01-01

    Combinatorial chemistry is a tool for selecting molecules with special properties. Dynamic combinatorial chemistry started off aiming to be just that. However, unlike ordinary combinatorial chemistry, the interconnectedness of dynamic libraries gives them an extra dimension. An understanding of

  20. The Yoccoz Combinatorial Analytic Invariant

    DEFF Research Database (Denmark)

    Petersen, Carsten Lunde; Roesch, Pascale

    2008-01-01

    In this paper we develop a combinatorial analytic encoding of the Mandelbrot set M. The encoding is implicit in Yoccoz' proof of local connectivity of M at any Yoccoz parameter, i.e. any at most finitely renormalizable parameter for which all periodic orbits are repelling. Using this encoding we ...... to reprove that the dyadic veins of M are arcs and that more generally any two Yoccoz parameters are joined by a unique ruled (in the sense of Douady-Hubbard) arc in M....

  1. Probabilistic methods in combinatorial analysis

    CERN Document Server

    Sachkov, Vladimir N

    2014-01-01

    This 1997 work explores the role of probabilistic methods for solving combinatorial problems. These methods not only provide the means of efficiently using such notions as characteristic and generating functions, the moment method and so on but also let us use the powerful technique of limit theorems. The basic objects under investigation are nonnegative matrices, partitions and mappings of finite sets, with special emphasis on permutations and graphs, and equivalence classes specified on sequences of finite length consisting of elements of partially ordered sets; these specify the probabilist

  2. Log-balanced combinatorial sequences

    Directory of Open Access Journals (Sweden)

    Tomislav Došlic

    2005-01-01

    Full Text Available We consider log-convex sequences that satisfy an additional constraint imposed on their rate of growth. We call such sequences log-balanced. It is shown that all such sequences satisfy a pair of double inequalities. Sufficient conditions for log-balancedness are given for the case when the sequence satisfies a two- (or more-) term linear recurrence. It is shown that many combinatorially interesting sequences belong to this class, and, as a consequence, that the above-mentioned double inequalities are valid for all of them.
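    For reference (standard definitions, not specific to the paper): a sequence of positive numbers $(a_n)$ is log-convex or log-concave according to the direction of the inequality below; the log-balanced class adds a further restriction on the growth of the quotients $a_{n+1}/a_n$, whose exact form is given in the paper and is not reproduced here.

        \[
          a_n^2 \le a_{n-1}\,a_{n+1} \quad\text{(log-convex)},
          \qquad
          a_n^2 \ge a_{n-1}\,a_{n+1} \quad\text{(log-concave)},
          \qquad n \ge 1 .
        \]

    For a positive log-convex sequence the quotients $a_{n+1}/a_n$ are nondecreasing, which is why a condition on their rate of growth is the natural extra requirement.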

  3. Stochastic processes

    CERN Document Server

    Parzen, Emanuel

    1962-01-01

    Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building.Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine

  4. Combinatorial optimization on a Boltzmann machine

    NARCIS (Netherlands)

    Korst, J.H.M.; Aarts, E.H.L.

    1989-01-01

    We discuss the problem of solving (approximately) combinatorial optimization problems on a Boltzmann machine. It is shown for a number of combinatorial optimization problems how they can be mapped directly onto a Boltzmann machine by choosing appropriate connection patterns and connection strengths.
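    The mapping idea can be made concrete on a small example: for MAX-CUT, choosing a connection strength of -2*w_ij for every edge and a bias of sum_j w_ij for every vertex makes the machine's consensus function equal to the cut value, so maximizing consensus maximizes the cut. The annealed stochastic-update loop below is a generic sketch of this construction, not the formulation or parameter settings from the paper.

        # Generic sketch: a Boltzmann-machine-style stochastic search for MAX-CUT.
        import math, random

        def boltzmann_max_cut(n, edges, T0=2.0, cooling=0.995, sweeps=400):
            W = [[0.0] * n for _ in range(n)]
            b = [0.0] * n
            for i, j, w in edges:
                W[i][j] = W[j][i] = -2.0 * w     # connection strengths
                b[i] += w                        # biases: weighted degree
                b[j] += w
            s = [random.randint(0, 1) for _ in range(n)]
            T = T0
            for _ in range(sweeps):
                for _ in range(n):
                    i = random.randrange(n)
                    # change in consensus if unit i flips its state
                    local = b[i] + sum(W[i][j] * s[j] for j in range(n))
                    dC = (1 - 2 * s[i]) * local
                    if random.random() < 1.0 / (1.0 + math.exp(-dC / T)):
                        s[i] = 1 - s[i]
                T *= cooling                     # annealing schedule
            cut = sum(w for i, j, w in edges if s[i] != s[j])
            return s, cut

        # A 4-cycle with one chord; the best achievable cut value for this graph is 4.
        edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 1.0)]
        print(boltzmann_max_cut(4, edges))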

  5. Combinatorial Interpretation of General Eulerian Numbers

    OpenAIRE

    Tingyao Xiong; Jonathan I. Hall; Hung-Ping Tsao

    2014-01-01

    Since the 1950s, mathematicians have successfully interpreted the traditional Eulerian numbers and $q$-Eulerian numbers combinatorially. In this paper, the authors give a combinatorial interpretation of the general Eulerian numbers defined on general arithmetic progressions {a, a+d, a+2d, ...}.
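    For context, the classical objects being generalized are the Eulerian numbers, which count the permutations of $\{1,\dots,n\}$ with exactly $k$ descents and satisfy the well-known recurrence below; the paper's general Eulerian numbers over an arithmetic progression are a generalization whose combinatorial interpretation is the subject of the article itself.

        \[
          \left\langle {n \atop k} \right\rangle
            = (k+1)\left\langle {n-1 \atop k} \right\rangle
            + (n-k)\left\langle {n-1 \atop k-1} \right\rangle ,
          \qquad
          \sum_{k=0}^{n-1} \left\langle {n \atop k} \right\rangle = n! .
        \]

    For example, for $n = 3$ the values are $1, 4, 1$: one permutation with no descent ($123$), four with one descent, and one with two descents ($321$), accounting for all $3! = 6$ permutations.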

  6. Fourier analysis in combinatorial number theory

    International Nuclear Information System (INIS)

    Shkredov, Il'ya D

    2010-01-01

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  7. Fourier analysis in combinatorial number theory

    Energy Technology Data Exchange (ETDEWEB)

    Shkredov, Il'ya D [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)]

    2010-09-16

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  8. Toward Chemical Implementation of Encoded Combinatorial Libraries

    DEFF Research Database (Denmark)

    Nielsen, John; Janda, Kim D.

    1994-01-01

    The recent application of "combinatorial libraries" to supplement existing drug screening processes might simplify and accelerate the search for new lead compounds or drugs. Recently, a scheme for encoded combinatorial chemistry was put forward to surmount a number of the limitations possessed...

  9. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by standard method for full crowns with processed preparation finish line of 1 mm depth and convergence angle of 3-4°. Impression was made 20 times with one-stage technique and 20 times with two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and preparation finish line of plaster dies was vertically determined in mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of independent test showed that the mean value of the marginal gap obtained by one-stage impression technique was higher than that of two-stage impression technique. Further, there was no significant difference between one- and two-stage impression techniques in mid buccal region, but a significant difference was reported between the two impression techniques in MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for two-stage impression technique than for the one-stage impression technique.

  10. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by standard method for full crowns with processed preparation finish line of 1 mm depth and convergence angle of 3-4°. Impression was made 20 times with one-stage technique and 20 times with two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and preparation finish line of plaster dies was vertically determined in mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of independent test showed that the mean value of the marginal gap obtained by one-stage impression technique was higher than that of two-stage impression technique. Further, there was no significant difference between one- and two-stage impression techniques in mid buccal region, but a significant difference was reported between the two impression techniques in MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for two-stage impression technique than for the one-stage impression technique.

  11. Stochastic quantization

    International Nuclear Information System (INIS)

    Klauder, J.R.

    1983-01-01

    The author provides an introductory survey to stochastic quantization in which he outlines this new approach for scalar fields, gauge fields, fermion fields, and condensed matter problems such as electrons in solids and the statistical mechanics of quantum spins. (Auth.)

  12. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  13. Optimisation of Refrigeration System with Two-Stage and Intercooler Using Fuzzy Logic and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Bayram Kılıç

    2017-04-01

    Full Text Available Two-stage compression prevents excessive compressor outlet pressure and temperature and provides more efficient working conditions in low-temperature refrigeration applications. A vapor-compression refrigeration system with two stages and an intercooler is a very good solution for low-temperature refrigeration applications. In this study, a refrigeration system with two stages and an intercooler was optimized using fuzzy logic and a genetic algorithm. The thermodynamic characteristics necessary for the optimization were estimated with fuzzy logic, and the liquid-phase enthalpy, vapour-phase enthalpy, liquid-phase entropy and vapour-phase entropy values were compared with the actual values. As a result, the optimum working condition of the system was estimated by the genetic algorithm as -6.0449 °C for the evaporator temperature, 25.0115 °C for the condenser temperature and 5.9666 for the COP. Moreover, the irreversibility values of the refrigeration system were calculated.

  14. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

    This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique, two-stage driver design which allows a more compact and lower overall cost free piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.

  15. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    Science.gov (United States)

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

    Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual-outcome is calculated from a regression of the outcome variable on covariates and then the relationship between the adjusted-outcome and the SNP is evaluated by a simple linear regression of the adjusted-outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation between the SNP and the covariate. For example, for squared correlations of 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP and the covariates are uncorrelated, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
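    The attenuation described above is easy to reproduce numerically. The simulation below is a sketch with arbitrary sample size, effect sizes and correlation (not values from the article): it compares the SNP coefficient from a residual-outcome two-stage analysis with the multiple-linear-regression estimate when the SNP and the covariate are correlated.

        # Sketch: attenuation of the SNP effect under two-stage residual-outcome analysis
        # when the SNP and a covariate are correlated (all numbers are illustration values).
        import numpy as np

        rng = np.random.default_rng(0)
        n, beta_snp, beta_cov, rho = 50_000, 0.5, 1.0, 0.7

        snp = rng.binomial(2, 0.3, size=n).astype(float)    # additive genotype coded 0/1/2
        # covariate built to have correlation ~ rho with the SNP
        cov = rho * (snp - snp.mean()) / snp.std() + np.sqrt(1 - rho**2) * rng.standard_normal(n)
        y = beta_snp * snp + beta_cov * cov + rng.standard_normal(n)

        def ols(X, y):
            """Least-squares coefficients with an intercept; X is a list of predictors."""
            D = np.column_stack([np.ones(len(y))] + list(X))
            return np.linalg.lstsq(D, y, rcond=None)[0]

        # Two-stage: regress y on the covariate, then regress the residual on the SNP
        resid = y - ols([cov], y) @ np.column_stack([np.ones(n), cov]).T
        two_stage_beta = ols([snp], resid)[1]

        # Multiple linear regression: y on SNP and covariate jointly
        mlr_beta = ols([snp, cov], y)[1]

        print(f"true SNP effect    : {beta_snp:.3f}")
        print(f"two-stage estimate : {two_stage_beta:.3f}  (attenuated toward 0)")
        print(f"MLR estimate       : {mlr_beta:.3f}")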

  16. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1 applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated...... that the austenitization kinetics is governed by Ni-diffusion and that slow transformation kinetics separating the two stages is caused by soft impingement in the martensite phase. Increasing the lath width in the kinetics model had a similar effect on the austenitization kinetics as increasing the heating-rate....

  17. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty. The paper provides the reader with the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first presented and discussed. Two-stage penile urethroplasty is then reported. A detailed description of first-stage urethroplasty according to the Johanson technique is reported. A second-stage urethroplasty using a buccal mucosa graft and glue is presented. Finally, the postoperative course and follow-up are addressed.

  18. STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS

    African Journals Online (AJOL)

    eobe

    STOCHASTIC ASSESSMENT OF NIGERIAN WOOD FOR BRIDGE DECKS ... abandoned bridges with defects only in their decks in both rural and urban locations can be effectively .... which can be seen as the detection of rare physical.

  19. A two-stage biological gas to liquid transfer process to convert carbon dioxide into bioplastic

    KAUST Repository

    Al Rowaihi, Israa; Kick, Benjamin; Grötzinger, Stefan W.; Burger, Christian; Karan, Ram; Weuster-Botz, Dirk; Eppinger, Jörg; Arold, Stefan T.

    2018-01-01

    The fermentation of carbon dioxide (CO2) with hydrogen (H2) uses available low-cost gases to synthesize acetic acid. Here, we present a two-stage biological process that allows the gas-to-liquid transfer (Bio-GTL) of CO2 into the biopolymer

  20. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of AD into two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as a higher methane content of the produced biogas. In our experiments the reactors have been operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) has been reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage and 15.3 days in the second-stage reactor). Even though the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of the two-stage digestion.
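    As an illustrative back-of-the-envelope check (not a figure reported in the record itself), the quoted volumetric methane production and organic loading rate imply a specific methane yield per unit of COD fed of roughly

        \[
          \frac{1092\ \mathrm{mL\,CH_4}/(\mathrm{L}\cdot\mathrm{d})}
               {6.5\ \mathrm{g\,TCOD}/(\mathrm{L}\cdot\mathrm{d})}
          \approx 168\ \mathrm{mL\,CH_4\ per\ g\ TCOD},
        \]

    and the stated stage retention times are consistent with the total HRT, $5.7 + 15.3 = 21$ days.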

  1. On response time and cycle time distributions in a two-stage cyclic queue

    NARCIS (Netherlands)

    Boxma, O.J.; Donk, P.

    1982-01-01

    We consider a two-stage closed cyclic queueing model. For the case of an exponential server at each queue we derive the joint distribution of the successive response times of a customer at both queues, using a reversibility argument. This joint distribution turns out to have a product form. The

  2. Simultaneous versus sequential pharmacokinetic-pharmacodynamic population analysis using an iterative two-stage Bayesian technique

    NARCIS (Netherlands)

    Proost, Johannes H.; Schiere, Sjouke; Eleveld, Douglas J.; Wierda, J. Mark K. H.

    A method for simultaneous pharmacokinetic-pharmacodynamic (PK-PD) population analysis using an Iterative Two-Stage Bayesian (ITSB) algorithm was developed. The method was evaluated using clinical data and Monte Carlo simulations. Data from a clinical study with rocuronium in nine anesthetized

  3. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

    Dec 2, 2015 ... there also seems to be a trend of decreasing urethritis and an increase of instrumentation and catheter related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two- stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  4. Numerical simulation of brain tumor growth model using two-stage ...

    African Journals Online (AJOL)

    In recent years, the study of glioma growth has been an active field of research. Mathematical models that describe the proliferation and diffusion properties of the growth have been developed by many researchers. In this work, the performance analysis of a two-stage Gauss-Seidel (TSGS) method to solve the glioma growth ...

  5. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  6. Design and construction of a two-stage centrifugal pump | Nordiana ...

    African Journals Online (AJOL)

    Centrifugal pumps are widely used in moving liquids from one location to another in homes, offices and industries. Due to the ever increasing demand for centrifugal pumps it became necessary to design and construct a two-stage centrifugal pump. The pump consisted of an electric motor, a shaft, two rotating impellers ...

  7. Some design aspects of a two-stage rail-to-rail CMOS op amp

    NARCIS (Netherlands)

    Gierkink, Sander L.J.; Holzmann, Peter J.; Wiegerink, Remco J.; Wassenaar, R.F.

    1999-01-01

    A two-stage low-voltage CMOS op amp with rail-to-rail input and output voltage ranges is presented. The circuit uses complementary differential input pairs to achieve the rail-to-rail common-mode input voltage range. The differential pairs operate in strong inversion, and the constant

  8. Insufficient sensitivity of joint aspiration during the two-stage exchange of the hip with spacers.

    Science.gov (United States)

    Boelch, Sebastian Philipp; Weissenberger, Manuel; Spohn, Frederik; Rudert, Maximilian; Luedemann, Martin

    2018-01-10

    Evaluation of infection persistence during the two-stage exchange of the hip is challenging. Joint aspiration before reconstruction is supposed to rule out infection persistence. Sensitivity and specificity of synovial fluid culture and synovial leucocyte count for detecting infection persistence during the two-stage exchange of the hip were evaluated. Ninety-two aspirations before planned joint reconstruction during the two-stage exchange with spacers of the hip were retrospectively analyzed. The sensitivity and specificity of synovial fluid culture was 4.6 and 94.3%. The sensitivity and specificity of synovial leucocyte count at a cut-off value of 2000 cells/μl was 25.0 and 96.9%. C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) values were significantly higher before prosthesis removal and reconstruction or spacer exchange (p = 0.00; p = 0.013 and p = 0.039; p = 0.002) in the infection persistence group. Receiver operating characteristic area under the curve values before prosthesis removal and reconstruction or spacer exchange for ESR were lower (0.516 and 0.635) than for CRP (0.720 and 0.671). Synovial fluid culture and leucocyte count cannot rule out infection persistence during the two-stage exchange of the hip.

  9. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

    The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade and it continues. The purpose of this paper is to direct the focus back to how the power is processed and not so much to the number of stages or the amount of power processed...

  10. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    Science.gov (United States)

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  11. The RTD measurement of two stage anaerobic digester using radiotracer in WWTP

    International Nuclear Information System (INIS)

    Jin-Seop, Kim; Jong-Bum, Kim; Sung-Hee, Jung

    2006-01-01

    The aims of this study are to assess the existence and location of the stagnant zone by estimating the MRT (mean residence time) of the two-stage anaerobic digester, with the results to be used as an informative clue for its better operation

  12. A two-stage meta-analysis identifies several new loci for Parkinson's disease.

    NARCIS (Netherlands)

    Plagnol, V.; Nalls, M.A.; Bras, J.M.; Hernandez, D.; Sharma, M.; Sheerin, U.M.; Saad, M.; Simon-Sanchez, J.; Schulte, C.; Lesage, S.; Sveinbjornsdottir, S.; Amouyel, P.; Arepalli, S.; Band, G.; Barker, R.A.; Bellinguez, C.; Ben-Shlomo, Y.; Berendse, H.W.; Berg, D; Bhatia, K.P.; Bie, R.M. de; Biffi, A.; Bloem, B.R.; Bochdanovits, Z.; Bonin, M.; Brockmann, K.; Brooks, J.; Burn, D.J.; Charlesworth, G.; Chen, H.; Chinnery, P.F.; Chong, S.; Clarke, C.E.; Cookson, M.R.; Cooper, J.M.; Corvol, J.C.; Counsell, J.; Damier, P.; Dartigues, J.F.; Deloukas, P.; Deuschl, G.; Dexter, D.T.; Dijk, K.D. van; Dillman, A.; Durif, F.; Durr, A.; Edkins, S.; Evans, J.R.; Foltynie, T.; Freeman, C.; Gao, J.; Gardner, M.; Gibbs, J.R.; Goate, A.; Gray, E.; Guerreiro, R.; Gustafsson, O.; Harris, C.; Hellenthal, G.; Hilten, J.J. van; Hofman, A.; Hollenbeck, A.; Holton, J.L.; Hu, M.; Huang, X.; Huber, H; Hudson, G.; Hunt, S.E.; Huttenlocher, J.; Illig, T.; Jonsson, P.V.; Langford, C.; Lees, A.J.; Lichtner, P.; Limousin, P.; Lopez, G.; McNeill, A.; Moorby, C.; Moore, M.; Morris, H.A.; Morrison, K.E.; Mudanohwo, E.; O'Sullivan, S.S; Pearson, J.; Pearson, R.; Perlmutter, J.; Petursson, H.; Pirinen, M.; Polnak, P.; Post, B.; Potter, S.C.; Ravina, B.; Revesz, T.; Riess, O.; Rivadeneira, F.; Rizzu, P.; Ryten, M.; Sawcer, S.J.; Schapira, A.; Scheffer, H.; Shaw, K.; Shoulson, I.; Sidransky, E.; Silva, R. de; Smith, C.; Spencer, C.C.; Stefansson, H.; Steinberg, S.; Stockton, J.D.; Strange, A.; Su, Z.; Talbot, K.; Tanner, C.M.; Tashakkori-Ghanbaria, A.; Tison, F.; Trabzuni, D.; Traynor, B.J.; Uitterlinden, A.G.; Vandrovcova, J.; Velseboer, D.; Vidailhet, M.; Vukcevic, D.; Walker, R.; Warrenburg, B.P.C. van de; Weale, M.E.; Wickremaratchi, M.; Williams, N.; Williams-Gray, C.H.; Winder-Rhodes, S.; Stefansson, K.; Martinez, M.; Donnelly, P.; Singleton, A.B.; Hardy, J.; Heutink, P.; Brice, A.; Gasser, T.; Wood, N.W.

    2011-01-01

    A previous genome-wide association (GWA) meta-analysis of 12,386 PD cases and 21,026 controls conducted by the International Parkinson's Disease Genomics Consortium (IPDGC) discovered or confirmed 11 Parkinson's disease (PD) loci. This first analysis of the two-stage IPDGC study

  13. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP...

  14. A Two-Stage Approach to Civil Conflict: Contested Incompatibilities and Armed Violence

    DEFF Research Database (Denmark)

    Bartusevicius, Henrikas; Gleditsch, Kristian Skrede

    2017-01-01

    conflict origination but have no clear effect on militarization, whereas other features emphasized as shaping the risk of civil war, such as refugee flows and soft state power, strongly influence militarization but not incompatibilities. We posit that a two-stage approach to conflict analysis can help...

  15. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to make a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide-bandwidth bilateral control system can be constructed based on mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system has been developed and its basic performance was evaluated by experiment. The experimental results showed that a light mechanical impedance, with a mass of 10 g and a damping coefficient of 2.5 N/(m/s), which is an important factor in establishing good transparency in bilateral control, has been successfully achieved, and also that better force and position responses between master and slave are obtained by using the proposed two-stage actuator system compared with a narrow-bandwidth case using a single ball screw system. (author)

  16. Advancing early detection of autism spectrum disorder by applying an integrated two-stage screening approach

    NARCIS (Netherlands)

    Oosterling, Iris J.; Wensing, Michel; Swinkels, Sophie H.; van der Gaag, Rutger Jan; Visser, Janne C.; Woudenberg, Tim; Minderaa, Ruud; Steenhuis, Mark-Peter; Buitelaar, Jan K.

    Background: Few field trials exist on the impact of implementing guidelines for the early detection of autism spectrum disorders (ASD). The aims of the present study were to develop and evaluate a clinically relevant integrated early detection programme based on the two-stage screening approach of

  17. A Two-Stage Meta-Analysis Identifies Several New Loci for Parkinson's Disease

    NARCIS (Netherlands)

    Plagnol, Vincent; Nalls, Michael A.; Bras, Jose M.; Hernandez, Dena G.; Sharma, Manu; Sheerin, Una-Marie; Saad, Mohamad; Simon-Sanchez, Javier; Schulte, Claudia; Lesage, Suzanne; Sveinbjornsdottir, Sigurlaug; Amouyel, Philippe; Arepalli, Sampath; Band, Gavin; Barker, Roger A.; Bellinguez, Celine; Ben-Shlomo, Yoav; Berendse, Henk W.; Berg, Daniela; Bhatia, Kailash; de Bie, Rob M. A.; Biffi, Alessandro; Bloem, Bas; Bochdanovits, Zoltan; Bonin, Michael; Brockmann, Kathrin; Brooks, Janet; Burn, David J.; Charlesworth, Gavin; Chen, Honglei; Chinnery, Patrick F.; Chong, Sean; Clarke, Carl E.; Cookson, Mark R.; Cooper, J. Mark; Corvol, Jean Christophe; Counsell, Carl; Damier, Philippe; Dartigues, Jean-Francois; Deloukas, Panos; Deuschl, Guenther; Dexter, David T.; van Dijk, Karin D.; Dillman, Allissa; Durif, Frank; Duerr, Alexandra; Edkins, Sarah; Evans, Jonathan R.; Foltynie, Thomas; Freeman, Colin; Gao, Jianjun; Gardner, Michelle; Gibbs, J. Raphael; Goate, Alison; Gray, Emma; Guerreiro, Rita; Gustafsson, Omar; Harris, Clare; Hellenthal, Garrett; van Hilten, Jacobus J.; Hofman, Albert; Hollenbeck, Albert; Holton, Janice; Hu, Michele; Huang, Xuemei; Huber, Heiko; Hudson, Gavin; Hunt, Sarah E.; Huttenlocher, Johanna; Illig, Thomas; Jonsson, Palmi V.; Langford, Cordelia; Lees, Andrew; Lichtner, Peter; Limousin, Patricia; Lopez, Grisel; Lorenz, Delia; McNeill, Alisdair; Moorby, Catriona; Moore, Matthew; Morris, Huw; Morrison, Karen E.; Mudanohwo, Ese; O'Sullivan, Sean S.; Pearson, Justin; Pearson, Richard; Perlmutter, Joel S.; Petursson, Hjoervar; Pirinen, Matti; Pollak, Pierre; Post, Bart; Potter, Simon; Ravina, Bernard; Revesz, Tamas; Riess, Olaf; Rivadeneira, Fernando; Rizzu, Patrizia; Ryten, Mina; Sawcer, Stephen; Schapira, Anthony; Scheffer, Hans; Shaw, Karen; Shoulson, Ira; Sidransky, Ellen; de Silva, Rohan; Smith, Colin; Spencer, Chris C. A.; Stefansson, Hreinn; Steinberg, Stacy; Stockton, Joanna D.; Strange, Amy; Su, Zhan; Talbot, Kevin; Tanner, Carlie M.; Tashakkori-Ghanbaria, Avazeh; Tison, Francois; Trabzuni, Daniah; Traynor, Bryan J.; Uitterlinden, Andre G.; Vandrovcova, Jana; Velseboer, Daan; Vidailhet, Marie; Vukcevic, Damjan; Walker, Robert; van de Warrenburg, Bart; Weale, Michael E.; Wickremaratchi, Mirdhu; Williams, Nigel; Williams-Gray, Caroline H.; Winder-Rhodes, Sophie; Stefansson, Kari; Martinez, Maria; Donnelly, Peter; Singleton, Andrew B.; Hardy, John; Heutink, Peter; Brice, Alexis; Gasser, Thomas; Wood, Nicholas W.

    2011-01-01

    A previous genome-wide association (GWA) meta-analysis of 12,386 PD cases and 21,026 controls conducted by the International Parkinson's Disease Genomics Consortium (IPDGC) discovered or confirmed 11 Parkinson's disease (PD) loci. This first analysis of the two-stage IPDGC study focused on the set

  18. Quantum stochastics

    CERN Document Server

    Chang, Mou-Hsiung

    2015-01-01

    The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, quantum Markov semigrou...

  19. Tumor-targeting peptides from combinatorial libraries*

    Science.gov (United States)

    Liu, Ruiwu; Li, Xiaocen; Xiao, Wenwu; Lam, Kit S.

    2018-01-01

    Cancer is one of the major and leading causes of death worldwide. Two of the greatest challenges in fighting cancer are early detection and effective treatments with no or minimum side effects. Widespread use of targeted therapies and molecular imaging in clinics requires high affinity, tumor-specific agents as effective targeting vehicles to deliver therapeutics and imaging probes to the primary or metastatic tumor sites. Combinatorial libraries such as phage-display and one-bead one-compound (OBOC) peptide libraries are powerful approaches in discovering tumor-targeting peptides. This review gives an overview of different combinatorial library technologies that have been used for the discovery of tumor-targeting peptides. Examples of tumor-targeting peptides identified from each combinatorial library method will be discussed. Published tumor-targeting peptide ligands and their applications will also be summarized by the combinatorial library methods and their corresponding binding receptors. PMID:27210583

  20. Combinatorial Micro-Macro Dynamical Systems

    OpenAIRE

    Diaz, Rafael; Villamarin, Sergio

    2015-01-01

    The second law of thermodynamics states that the entropy of an isolated system is almost always increasing. We propose combinatorial formalizations of the second law and explore their conditions of possibilities.

  1. A stochastic programming approach to manufacturing flow control

    OpenAIRE

    Haurie, Alain; Moresino, Francesco

    2012-01-01

    This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...

  2. Cubical version of combinatorial differential forms

    DEFF Research Database (Denmark)

    Kock, Anders

    2010-01-01

    The theory of combinatorial differential forms is usually presented in simplicial terms. We present here a cubical version; it depends on the possibility of forming affine combinations of mutual neighbour points in a manifold, in the context of synthetic differential geometry.

  3. Conferences on Combinatorial and Additive Number Theory

    CERN Document Server

    2014-01-01

    This proceedings volume is based on papers presented at the Workshops on Combinatorial and Additive Number Theory (CANT), which were held at the Graduate Center of the City University of New York in 2011 and 2012. The goal of the workshops is to survey recent progress in combinatorial number theory and related parts of mathematics. The workshop attracts researchers and students who discuss the state-of-the-art, open problems, and future challenges in number theory.

  4. Jack superpolynomials: physical and combinatorial definitions

    International Nuclear Information System (INIS)

    Desrosiers, P.; Mathieu, P.; Lapointe, L.

    2004-01-01

    Jack superpolynomials are eigenfunctions of the supersymmetric extension of the quantum trigonometric Calogero-Moser-Sutherland Hamiltonian. They are orthogonal with respect to the scalar product, dubbed physical, that is naturally induced by this quantum-mechanical problem. But Jack superpolynomials can also be defined more combinatorially, starting from the multiplicative bases of symmetric superpolynomials, enforcing orthogonality with respect to a one-parameter deformation of the combinatorial scalar product. Both constructions turn out to be equivalent. (author)

  5. CFD simulations of compressed air two stage rotary Wankel expander – Parametric analysis

    International Nuclear Information System (INIS)

    Sadiq, Ghada A.; Tozer, Gavin; Al-Dadah, Raya; Mahmoud, Saad

    2017-01-01

    Highlights: • CFD ANSYS-Fluent 3D simulation of Wankel expander is developed. • Single and two-stage expander’s performance is compared. • Inlet and outlet ports shape and configurations are investigated. • Isentropic efficiency of two stage Wankel expander of 91% is achieved. - Abstract: A small scale volumetric Wankel expander is a powerful device for small-scale power generation in compressed air energy storage (CAES) systems and Organic Rankine cycles powered by different heat sources such as, biomass, low temperature geothermal, solar and waste heat leading to significant reduction in CO_2 emissions. Wankel expanders outperform other types of expander due to their ability to produce two power pulses per revolution per chamber additional to higher compactness, lower noise and vibration and lower cost. In this paper, a computational fluid dynamics (CFD) model was developed using ANSYS 16.2 to simulate the flow dynamics for a single and two stage Wankel expanders and to investigate the effect of port configurations, including size and spacing, on the expander’s power output and isentropic efficiency. Also, single-stage and two-stage expanders were analysed with different operating conditions. Single-stage 3D CFD results were compared to published work showing close agreement. The CFD modelling was used to investigate the performance of the rotary device using air as an ideal gas with various port diameters ranging from 15 mm to 50 mm; port spacing varying from 28 mm to 66 mm; different Wankel expander sizes (r = 48, e = 6.6, b = 32) mm and (r = 58, e = 8, b = 40) mm both as single-stage and as two-stage expanders with different configurations and various operating conditions. Results showed that the best Wankel expander design for a single-stage was (r = 48, e = 6.6, b = 32) mm, with the port diameters 20 mm and port spacing equal to 50 mm. Moreover, combining two Wankel expanders horizontally, with a larger one at front, produced 8.52 kW compared

  6. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but up to recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
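
    As a rough, generic illustration of the class of problems described above (not the method of the report), the following Python sketch builds and solves the deterministic equivalent ("extensive form") of a tiny two-stage stochastic linear program with recourse, with the second-stage scenarios drawn by Monte Carlo sampling; the newsvendor-style cost, price, and demand figures are invented for the example.

        import numpy as np
        from scipy.optimize import linprog

        # Deterministic equivalent of a toy two-stage stochastic LP with recourse.
        # Stage 1: choose order quantity x at unit cost c (before demand is known).
        # Stage 2: after demand d_s is revealed, sell y_s <= min(x, d_s) at price q.
        rng = np.random.default_rng(0)
        n_scen = 200                                    # Monte Carlo demand scenarios
        demand = rng.normal(100, 20, n_scen).clip(min=0)
        prob = np.full(n_scen, 1.0 / n_scen)
        c, q = 1.0, 1.5                                 # unit cost and unit selling price

        # Decision vector: [x, y_1, ..., y_S]; minimize c*x - sum_s p_s * q * y_s
        obj = np.concatenate(([c], -prob * q))

        # Recourse constraints for every scenario: y_s - x <= 0 and y_s <= d_s
        A_ub = np.zeros((2 * n_scen, 1 + n_scen))
        b_ub = np.zeros(2 * n_scen)
        for s in range(n_scen):
            A_ub[s, 0] = -1.0                # -x + y_s <= 0
            A_ub[s, 1 + s] = 1.0
            A_ub[n_scen + s, 1 + s] = 1.0    #  y_s <= d_s
            b_ub[n_scen + s] = demand[s]

        res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n_scen))
        print(f"first-stage order quantity: {res.x[0]:.1f}")
        print(f"expected profit: {-res.fun:.1f}")

    Decomposition methods such as those discussed in the report avoid forming this full extensive form, whose size grows with the number of sampled scenarios.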

  7. Stochastic cooling

    International Nuclear Information System (INIS)

    Bisognano, J.; Leemann, C.

    1982-03-01

    Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker so that an individual particle receives the amplified version of the signal it produced at the pick-up. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large number of particles, there remains a residue of the single particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p anti p colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for a anti p accumulator for the Tevatron

  8. A Two-stage Improvement Method for Robot Based 3D Surface Scanning

    Science.gov (United States)

    He, F. B.; Liang, Y. D.; Wang, R. F.; Lin, Y. S.

    2018-03-01

    As is known, the surface of an unknown object is difficult to measure or recognize precisely; hence 3D laser scanning technology was introduced and is widely used in surface reconstruction. Usually, slower surface scanning gives better quality, while faster scanning gives worse quality. Therefore, this paper presents a new two-stage scanning method that pursues high scanning quality at a faster speed. The first stage is a rough scan to obtain general point cloud data of the object's surface, and the second stage is a specific scan to repair missing regions, which are determined by the chord length discrete method. Meanwhile, a system containing a robotic manipulator and a handy scanner was also developed to implement the two-stage scanning method, and the relevant paths were planned according to minimum enclosing ball and regional coverage theories.

  9. An adaptive two-stage dose-response design method for establishing proof of concept.

    Science.gov (United States)

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.

  10. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  11. Target tracking system based on preliminary and precise two-stage compound cameras

    Science.gov (United States)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early detection of targets and high-precision target tracking are two important performance indicators that need to be balanced in a practical target search and tracking system. This paper proposes a target tracking system with a preliminary and precise two-stage compound design. The system uses a large field of view to search for the target; after the target is found and confirmed, it switches to the small field of view for two-field-of-view target tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce the tracking error. This combination of preliminary and precise stages can extend the search scope and improve the target tracking accuracy, and the method has practical value.

  12. Gas pollutants removal in a single- and two-stage ejector-venturi scrubber.

    Science.gov (United States)

    Gamisans, Xavier; Sarrà, Montserrrat; Lafuente, F Javier

    2002-03-29

    The absorption of SO(2) and NH(3) from the flue gas into NaOH and H(2)SO(4) solutions, respectively has been studied using an industrial scale ejector-venturi scrubber. A statistical methodology is presented to characterise the performance of the scrubber by varying several factors such as gas pollutant concentration, air flowrate and absorbing solution flowrate. Some types of venturi tube constructions were assessed, including the use of a two-stage venturi tube. The results showed a strong influence of the liquid scrubbing flowrate on pollutant removal efficiency. The initial pollutant concentration and the gas flowrate had a slight influence. The use of a two-stage venturi tube considerably improved the absorption efficiency, although it increased energy consumption. The results of this study will be applicable to the optimal design of venturi-based absorbers for gaseous pollution control or chemical reactors.

  13. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly pays any attention to this. In this paper, we show how various capacity and time constraints influence the performance of a specific two-stage system. We study the effects of several basic scheduling and sequencing rules in the presence of these constraints in order to learn the characteristics of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...

  14. A simple two stage optimization algorithm for constrained power economic dispatch

    International Nuclear Information System (INIS)

    Huang, G.; Song, K.

    1994-01-01

    A simple two-stage optimization algorithm is proposed and investigated for fast computation of constrained power economic dispatch control problems. The method is a simple demonstration of the hierarchical aggregation-disaggregation (HAD) concept. The algorithm first solves an aggregated problem to obtain an initial solution. This aggregated problem turns out to be the classical economic dispatch formulation, and it can be solved in 1% of the overall computation time. In the second stage, a linear programming method finds an optimal solution that satisfies the power balance constraints, generation and transmission inequality constraints, and security constraints. Implementation of the algorithm for IEEE systems and EPRI Scenario systems shows that the two-stage method obtains an average speedup ratio of 10.64 compared to the classical LP-based method
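
    The following toy sketch mirrors the two-stage idea described above in a generic way (it is not a reconstruction of the authors' algorithm): stage one solves the classical equal-incremental-cost economic dispatch with the limits ignored, and stage two restores feasibility under the power balance and generator limits. The paper's second stage uses linear programming; here a general-purpose SLSQP solver is substituted for brevity, and all generator data are invented.

        import numpy as np
        from scipy.optimize import brentq, minimize

        b = np.array([8.0, 7.0, 9.0])          # linear cost coefficients ($/MWh)
        c = np.array([0.010, 0.015, 0.020])    # quadratic cost coefficients ($/MW^2 h)
        pmin = np.array([50.0, 40.0, 30.0])
        pmax = np.array([300.0, 150.0, 150.0])
        demand = 500.0
        cost = lambda p: np.sum(b * p + c * p**2)

        # Stage 1: each unit runs at P_i = (lambda - b_i) / (2 c_i); find the system
        # incremental cost lambda that balances total generation against demand.
        lam = brentq(lambda lam: np.sum((lam - b) / (2 * c)) - demand, b.min(), 1e3)
        p1 = np.clip((lam - b) / (2 * c), pmin, pmax)   # clipping may break the balance

        # Stage 2: restore feasibility and optimality under all constraints.
        res = minimize(cost, p1, method="SLSQP",
                       bounds=list(zip(pmin, pmax)),
                       constraints=[{"type": "eq", "fun": lambda p: np.sum(p) - demand}])
        print("stage-1 dispatch (MW):", np.round(p1, 1))
        print("stage-2 dispatch (MW):", np.round(res.x, 1))
        print("total cost ($/h):", round(cost(res.x), 1))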

  15. Two-stage combustion for reducing pollutant emissions from gas turbine combustors

    Science.gov (United States)

    Clayton, R. M.; Lewis, D. H.

    1981-01-01

    Combustion and emission results are presented for a premix combustor fueled with admixtures of JP5 with neat H2 and of JP5 with simulated partial-oxidation product gas. The combustor was operated with inlet-air state conditions typical of cruise power for high performance aviation engines. Ultralow NOx, CO and HC emissions and extended lean burning limits were achieved simultaneously. Laboratory scale studies of the non-catalyzed rich-burning characteristics of several paraffin-series hydrocarbon fuels and of JP5 showed sooting limits at equivalence ratios of about 2.0 and that in order to achieve very rich sootless burning it is necessary to premix the reactants thoroughly and to use high levels of air preheat. The application of two-stage combustion for the reduction of fuel NOx was reviewed. An experimental combustor designed and constructed for two-stage combustion experiments is described.

  16. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where namely the active power from the PV panels is reserved during operation, is required for grid...... support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control...... performed on a 3-kW two-stage single-phase grid-connected PV system, where the power reserve control is achieved upon demands....

  17. A two staged condensation of vapors of an isobutane tower in installations for sulfuric acid alkylation

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, N.P.; Feyzkhanov, R.I.; Idrisov, A.D.; Navalikhin, P.G.; Sakharov, V.D.

    1983-01-01

    In order to increase the concentration of isobutane to greater than 72 to 76 percent in an installation for sulfuric acid alkylation, a system of two staged condensation of vapors from an isobutane tower is placed into operation. The first stage condenses the heavier part of the upper distillate of the tower, which is achieved through somewhat of an increase in the condensate temperature. The product which is condensed in the first stage is completely returned to the tower as a live irrigation. The vapors of the isobutane fraction which did not condense in the first stage are sent to two newly installed condensers, from which the product after condensation passes through intermediate tanks to further depropanization. The two staged condensation of vapors of the isobutane tower reduces the content of the inert diluents, the propane and n-butane in the upper distillate of the isobutane tower and creates more favorable conditions for the operation of the isobutane and propane tower.

  18. Optimising the refrigeration cycle with a two-stage centrifugal compressor and a flash intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Roeyttae, Pekka; Turunen-Saaresti, Teemu; Honkatukia, Juha [Lappeenranta University of Technology, Laboratory of Energy and Environmental Technology, PO Box 20, 53851 Lappeenranta (Finland)

    2009-09-15

    The optimisation of a refrigeration process with a two-stage centrifugal compressor and flash intercooler is presented in this paper. The two-stage centrifugal compressor stages are on the same shaft and the electric motor is cooled with the refrigerant. The performance of the centrifugal compressor is evaluated based on semi-empirical specific-speed curves and the effect of the Reynolds number, surface roughness and tip clearance have also been taken into account. The thermodynamic and transport properties of the working fluids are modelled with a real-gas model. The condensing and evaporation temperatures, the temperature after the flash intercooler, and cooling power have been chosen as fixed values in the process. The aim is to gain a maximum coefficient of performance (COP). The method of optimisation, the operation of the compressor and flash intercooler, and the method for estimating the electric motor cooling are also discussed in the article. (author)

  19. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    OpenAIRE

    He, Xinhua; Hu, Wenfa

    2017-01-01

    Extreme rainstorm is a main factor to cause urban floods when urban drainage system cannot discharge stormwater successfully. This paper investigates distribution feature of rainstorms and draining process of urban drainage systems and uses a two-stage single-counter queue method M/M/1→M/D/1 to model urban drainage system. The model emphasizes randomness of extreme rainstorms, fuzziness of draining process, and construction and operation cost of drainage system. Its two objectives are total c...
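
    For orientation, the sketch below evaluates the standard mean-waiting-time formulas behind such a tandem M/M/1→M/D/1 model (illustrative only; it is not the authors' optimization model, and the arrival and service rates are invented). By Burke's theorem the departure stream of a stationary M/M/1 queue is again Poisson, so the deterministic-service second stage can be treated with the Pollaczek-Khinchine formula.

        # Mean waiting times in a tandem M/M/1 -> M/D/1 system (illustrative rates).
        lam = 0.8    # arrival rate of storm inflow "customers" (per minute)
        mu1 = 1.2    # service rate of stage 1 (exponential service)
        mu2 = 1.0    # service rate of stage 2 (deterministic service)

        rho1, rho2 = lam / mu1, lam / mu2
        assert rho1 < 1 and rho2 < 1, "both stages must be stable"

        wq1 = rho1 / (mu1 - lam)               # M/M/1 mean wait in queue
        wq2 = rho2 / (2 * mu2 * (1 - rho2))    # M/D/1 (Pollaczek-Khinchine) mean wait
        total = wq1 + 1 / mu1 + wq2 + 1 / mu2  # mean time to pass through both stages

        print(f"stage 1 (M/M/1): utilization {rho1:.2f}, mean queue wait {wq1:.2f} min")
        print(f"stage 2 (M/D/1): utilization {rho2:.2f}, mean queue wait {wq2:.2f} min")
        print(f"mean total sojourn time: {total:.2f} min")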

  20. Generation of dense, pulsed beams of refractory metal atoms using two-stage laser ablation

    International Nuclear Information System (INIS)

    Kadar-Kallen, M.A.; Bonin, K.D.

    1994-01-01

    We report a technique for generating a dense, pulsed beam of refractory metal atoms using two-stage laser ablation. An atomic beam of uranium was produced with a peak, ground-state number density of 1x10^12 cm^-3 at a distance of z=27 cm from the source. This density can be scaled as 1/z^3 to estimate the density at other distances which are also far from the source

  1. Two-stage hepatectomy: who will not jump over the second hurdle?

    Science.gov (United States)

    Turrini, O; Ewald, J; Viret, F; Sarran, A; Goncalves, A; Delpero, J-R

    2012-03-01

    Two-stage hepatectomy uses compensatory liver regeneration after a first noncurative hepatectomy to enable a second curative resection in patients with bilobar colorectal liver metastasis (CLM). To determine the predictive factors of failure of two-stage hepatectomy. Between 2000 and 2010, 48 patients with irresectable CLM were eligible for two-stage hepatectomy. The planned strategy was a) cleaning of the left hepatic lobe (first hepatectomy), b) right portal vein embolisation and c) right hepatectomy (second hepatectomy). Six patients had occult CLM (n = 5) or extra-hepatic disease (n = 1), which was discovered during the first hepatectomy. Thus, 42 patients completed the first hepatectomy and underwent portal vein embolisation in order to receive the second hepatectomy. Eight patients did not undergo a second hepatectomy due to disease progression. Upon univariate analysis, two factors were identified that precluded patients from having the second hepatectomy: the combined resection of a primary tumour during the first hepatectomy (p = 0.01) and administration of chemotherapy between the two hepatectomies (p = 0.03). An independent association with impairment to perform the two-stage strategy was demonstrated by multivariate analysis for only the combined resection of the primary colorectal cancer during the first hepatectomy (p = 0.04). Due to the small number of patients and the absence of equivalent conclusions in other studies, we cannot recommend performance of an isolated colorectal resection prior to chemotherapy. However, resection of an asymptomatic primary tumour before chemotherapy should not be considered as an outdated procedure. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Single-stage-to-orbit versus two-stage-two-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  3. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Bayram [Mehmet Akif Ersoy University, Bucak Emin Guelmez Vocational School, Bucak, Burdur (Turkey)

    2012-07-15

    In this study, exergy analyses of a two-stage vapor compression refrigeration cycle with an intercooler were carried out using the refrigerants R507, R407c, and R404a. The necessary thermodynamic values for the analyses were calculated with the Solkane program. The coefficient of performance, exergetic efficiency, and total irreversibility rate of the system under different operating conditions were investigated for these refrigerants. The coefficient of performance, exergetic efficiency, and total irreversibility rate for the alternative refrigerants were then compared. (orig.)
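
    As a generic illustration of the figures of merit named above (not a reproduction of the paper's Solkane-based property calculations), the snippet below computes the COP and the exergetic (second-law) efficiency of a two-stage cycle from an assumed cooling load, compressor work inputs, and evaporating/ambient temperatures; all numbers are invented.

        # Back-of-the-envelope second-law bookkeeping for a two-stage vapor-compression
        # cycle (illustrative values only).
        T0 = 298.15             # dead-state / ambient temperature (K)
        Te = 263.15             # evaporating temperature (K)
        Q_evap = 10.0           # cooling load (kW)
        W_lp, W_hp = 2.1, 2.4   # low- and high-stage compressor work inputs (kW)

        W_total = W_lp + W_hp
        cop = Q_evap / W_total            # first-law figure of merit
        cop_rev = Te / (T0 - Te)          # reversible (Carnot) COP for the same temperatures
        W_min = Q_evap / cop_rev          # minimum work for the same cooling duty
        eta_ex = W_min / W_total          # exergetic (second-law) efficiency
        irrev = W_total - W_min           # total irreversibility rate (kW)

        print(f"COP = {cop:.2f}, reversible COP = {cop_rev:.2f}")
        print(f"exergetic efficiency = {eta_ex:.1%}, irreversibility = {irrev:.2f} kW")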

  4. Control strategy research of two stage topology for pulsed power supply

    International Nuclear Information System (INIS)

    Shi Chunfeng; Wang Rongkun; Huang Yuzhen; Chen Youxin; Yan Hongbin; Gao Daqing

    2013-01-01

    A pulsed power supply for HIRFL-CSR is introduced; the ripple and the current error of the power supply's topological structure during operation are analyzed, and a two-stage topology for the pulsed power supply is presented. The control strategy was simulated and experiments were carried out on a digital power platform. The results show that the main circuit structure and control method are feasible. (authors)

  5. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism

    Science.gov (United States)

    Yang, B.; Guo, X.; Wang, Q. H.; Lu, C. F.; Hu, D.

    2018-04-01

    The design, simulation, fabrication, and experiments of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Different from the conventional detection methods for flow sensors, two differential resonators are adopted to implement air flow rate transformation through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimension of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of resonance frequency. The sensor fabricated by the standard deep dry silicon on a glass process has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)2 at a resonant frequency of 22 kHz for the hair height of 9 mm and increases by 2.42 times as hair height extends from 3 mm to 9 mm. Simultaneously, a detection-limit of 3.23 mm/s air flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in the micro-autonomous system and technology, self-stabilizing micro-air vehicles, and environmental monitoring.

  6. Two Stage Fuzzy Methodology to Evaluate the Credit Risks of Investment Projects

    OpenAIRE

    O. Badagadze; G. Sirbiladze; I. Khutsishvili

    2014-01-01

    The work proposes a decision support methodology for the credit risk minimization in selection of investment projects. The methodology provides two stages of projects’ evaluation. Preliminary selection of projects with minor credit risks is made using the Expertons Method. The second stage makes ranking of chosen projects using the Possibilistic Discrimination Analysis Method. The latter is a new modification of a well-known Method of Fuzzy Discrimination Analysis.

  7. A Two-Stage Rural Household Demand Analysis: Microdata Evidence from Jiangsu Province, China

    OpenAIRE

    X.M. Gao; Eric J. Wailes; Gail L. Cramer

    1996-01-01

    In this paper we evaluate economic and demographic effects on China's rural household demand for nine food commodities: vegetables, pork, beef and lamb, poultry, eggs, fish, sugar, fruit, and grain; and five nonfood commodity groups: clothing, fuel, stimulants, housing, and durables. A two-stage budgeting allocation procedure is used to obtain an empirically tractable amalgamative demand system for food commodities which combine an upper-level AIDS model and a lower-level GLES as a modeling f...

  8. Latent Inhibition as a Function of US Intensity in a Two-Stage CER Procedure

    Science.gov (United States)

    Rodriguez, Gabriel; Alonso, Gumersinda

    2004-01-01

    An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…

  9. Two-stage meta-analysis of survival data from individual participants using percentile ratios

    Science.gov (United States)

    Barrett, Jessica K; Farewell, Vern T; Siannis, Fotios; Tierney, Jayne; Higgins, Julian P T

    2012-01-01

    Methods for individual participant data meta-analysis of survival outcomes commonly focus on the hazard ratio as a measure of treatment effect. Recently, Siannis et al. (2010, Statistics in Medicine 29:3030–3045) proposed the use of percentile ratios as an alternative to hazard ratios. We describe a novel two-stage method for the meta-analysis of percentile ratios that avoids distributional assumptions at the study level. Copyright © 2012 John Wiley & Sons, Ltd. PMID:22825835

  10. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery, over a period of 12 years from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. Distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. Initial operative procedures performed were window colostomy (n = 6), colostomy proximal to pouch (n = 4), and ligation of colovesical fistula and end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull through (APPT) of colon in eight, and pouch excision with APPT of ileum in three were performed. The mean age at the time of definitive procedures was 15.6 months (range 3 to 53 months) and the mean weight was 7.5 kg (range 4 to 11 kg). Good fecal continence was observed in six and fair in two cases during follow-up, while three of our cases were lost to follow-up. There was no mortality following definitive procedures amongst the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can also be performed safely with good results. The most important fact is that the definitive procedure is done without a protective stoma, and it therefore avoids stoma closure, stoma-related complications, the related cost of stoma closure, and hospital stay.

  11. Modelling of an air-cooled two-stage Rankine cycle for electricity production

    International Nuclear Information System (INIS)

    Liu, Bo

    2014-01-01

    This work considers a two stage Rankine cycle architecture slightly different from a standard Rankine cycle for electricity generation. Instead of expanding the steam to extremely low pressure, the vapor leaves the turbine at a higher pressure then having a much smaller specific volume. It is thus possible to greatly reduce the size of the steam turbine. The remaining energy is recovered by a bottoming cycle using a working fluid which has a much higher density than the water steam. Thus, the turbines and heat exchangers are more compact; the turbine exhaust velocity loss is lower. This configuration enables to largely reduce the global size of the steam water turbine and facilitate the use of a dry cooling system. The main advantage of such an air cooled two stage Rankine cycle is the possibility to choose the installation site of a large or medium power plant without the need of a large and constantly available water source; in addition, as compared to water cooled cycles, the risk regarding future operations is reduced (climate conditions may affect water availability or temperature, and imply changes in the water supply regulatory rules). The concept has been investigated by EDF R and D. A 22 MW prototype was developed in the 1970's using ammonia as the working fluid of the bottoming cycle for its high density and high latent heat. However, this fluid is toxic. In order to search more suitable working fluids for the two stage Rankine cycle application and to identify the optimal cycle configuration, we have established a working fluid selection methodology. Some potential candidates have been identified. We have evaluated the performances of the two stage Rankine cycles operating with different working fluids in both design and off design conditions. For the most acceptable working fluids, components of the cycle have been sized. The power plant concept can then be evaluated on a life cycle cost basis. (author)

  12. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    OpenAIRE

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where namely the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Track...

  13. Actuator Fault Diagnosis in a Boeing 747 Model via Adaptive Modified Two-Stage Kalman Filter

    Directory of Open Access Journals (Sweden)

    Fikret Caliskan

    2014-01-01

    An adaptive modified two-stage linear Kalman filtering algorithm is utilized to identify the loss of control effectiveness and the magnitude of low degree of stuck faults in a closed-loop nonlinear B747 aircraft. Control effectiveness factors and stuck magnitudes are used to quantify faults entering control systems through actuators. Pseudorandom excitation inputs are used to help distinguish partial loss and stuck faults. The partial loss and stuck faults in the stabilizer are isolated and identified successfully.

  14. Two-stage energy storage equalization system for lithium-ion battery pack

    Science.gov (United States)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To this end, a two-stage energy storage equalization system, which contains a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed on the basis of bidirectional active equalization theory, in order to achieve consistent voltages across lithium-ion battery packs and across the cells inside the packs by using the Range method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. The equalization time was 0.5 ms, which shortens the equalization time by 33.3 percent compared with a DC-DC converter. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity between lithium-ion battery packs and cells inside packs, while the efficiency of energy storage is significantly improved.

  15. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
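
    The simulation sketch below illustrates the mechanics of the two estimators on invented data (it is not the authors' analysis): a linear first stage regresses the endogenous variable on an instrument; 2SPS then substitutes the first-stage prediction into the outcome model, while 2SRI keeps the endogenous variable and adds the first-stage residual as an extra regressor. In this deliberately simple log-linear Poisson example both two-stage slopes happen to land near the truth while the naive fit is biased by the confounder; the paper's point is that in general nonlinear settings only 2SRI remains consistent.

        import numpy as np
        from scipy.optimize import minimize

        # Invented data: true coefficient on x is 1.0, u is an unobserved confounder,
        # z is an instrument.
        rng = np.random.default_rng(1)
        n = 5000
        z = rng.normal(size=n)
        u = rng.normal(size=n)
        x = 0.8 * z + u + rng.normal(size=n)              # endogenous regressor
        y = rng.poisson(np.exp(0.5 + 1.0 * x - 1.0 * u))  # count outcome

        def poisson_fit(X, y):
            """Poisson regression (log link) via direct likelihood maximization."""
            nll = lambda b: np.mean(np.exp(X @ b) - y * (X @ b))
            grad = lambda b: X.T @ (np.exp(X @ b) - y) / len(y)
            return minimize(nll, np.zeros(X.shape[1]), jac=grad, method="BFGS").x

        ones = np.ones(n)
        b_naive = poisson_fit(np.column_stack([ones, x]), y)   # ignores endogeneity

        # First stage: OLS of the endogenous x on the instrument.
        Z = np.column_stack([ones, z])
        xhat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
        resid = x - xhat

        b_2sps = poisson_fit(np.column_stack([ones, xhat]), y)      # predictor substitution
        b_2sri = poisson_fit(np.column_stack([ones, x, resid]), y)  # residual inclusion

        print("naive slope:", round(b_naive[1], 3))
        print("2SPS slope: ", round(b_2sps[1], 3))
        print("2SRI slope: ", round(b_2sri[1], 3))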

  16. Production of endo-pectate lyase by two stage cultivation of Erwinia carotovora

    Energy Technology Data Exchange (ETDEWEB)

    Fukuoka, Satoshi; Kobayashi, Yoshiaki

    1987-02-26

    The productivity of endo-pectate lyase from Erwinia carotovora GIR 1044 was found to be greatly improved by two stage cultivation: in the first stage the bacterium was grown with an inducing carbon source, e.g., pectin, and in the second stage it was cultivated with glycerol, xylose, or fructose with the addition of monosodium L-glutamate as nitrogen source. In the two stage cultivation using pectin or glycerol as the carbon source the enzyme activity reached 400 units/ml, almost 3 times as much as that of one stage cultivation in a 10 liter fermentor. Using two stage cultivation in the 200 liter fermentor improved enzyme productivity over that in the 10 liter fermentor, with 500 units/ml of activity. Compared with the cultivation in Erlenmeyer flasks, fermentor cultivation improved enzyme productivity. The optimum cultivating conditions were agitation of 480 rpm with aeration of 0.5 vvm at 28 /sup 0/C. (4 figs, 4 tabs, 14 refs)

  17. Assessing efficiency and effectiveness of Malaysian Islamic banks: A two stage DEA analysis

    Science.gov (United States)

    Kamarudin, Norbaizura; Ismail, Wan Rosmanira; Mohd, Muhammad Azri

    2014-06-01

    Islamic banks in Malaysia are indispensable players in the financial industry, given the growing need for syariah-compliant systems. In the banking industry, most recent studies have been concerned only with operational efficiency and rarely with operational effectiveness. Since the production process of the banking industry can be described as a two-stage process, two-stage Data Envelopment Analysis (DEA) can be applied to measure bank performance. This study was designed to measure the overall performance, in terms of efficiency and effectiveness, of Islamic banks in Malaysia using a two-stage DEA approach. This paper presents an analysis of a DEA model which separates efficiency and effectiveness in order to evaluate the performance of ten selected Islamic banks in Malaysia for the financial year ended 2011. The analysis shows that the average efficiency score is higher than the average effectiveness score; thus Malaysian Islamic banks were more efficient than effective. Furthermore, none of the banks exhibited best practice in both stages, which shows that a bank with better efficiency does not always have better effectiveness at the same time.
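
    A minimal sketch of the kind of calculation involved: the standard input-oriented CCR DEA score is obtained from one small linear program per decision-making unit, and the independent two-stage variant simply runs it twice, once from inputs to an intermediate measure (efficiency) and once from the intermediate measure to outputs (effectiveness). The bank data below are invented placeholders, not the ten banks of the study.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y):
            """Input-oriented CCR DEA score for every DMU.
            X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
            n = X.shape[0]
            scores = []
            for o in range(n):
                # variables: [theta, lambda_1, ..., lambda_n]
                c = np.r_[1.0, np.zeros(n)]
                A_in = np.c_[-X[o][:, None], X.T]                  # X^T lam <= theta * x_o
                A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]     # Y^T lam >= y_o
                A_ub = np.vstack([A_in, A_out])
                b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
                res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
                scores.append(res.x[0])
            return np.array(scores)

        # Invented data for 5 hypothetical banks:
        inputs       = np.array([[100, 20], [120, 25], [90, 18], [150, 40], [110, 22]])  # e.g. deposits, staff
        intermediate = np.array([[80], [85], [75], [95], [70]])                          # e.g. loans
        outputs      = np.array([[12, 3], [10, 2], [11, 3], [9, 2], [8, 1]])             # e.g. income, profit

        efficiency    = ccr_efficiency(inputs, intermediate)   # stage 1: operations
        effectiveness = ccr_efficiency(intermediate, outputs)  # stage 2: results
        for i, (e1, e2) in enumerate(zip(efficiency, effectiveness), 1):
            print(f"bank {i}: efficiency {e1:.3f}, effectiveness {e2:.3f}")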

  18. A two-stage extraction procedure for insensitive munition (IM) explosive compounds in soils.

    Science.gov (United States)

    Felt, Deborah; Gurtowski, Luke; Nestler, Catherine C; Johnson, Jared; Larson, Steven

    2016-12-01

    The Department of Defense (DoD) is developing a new category of insensitive munitions (IMs) that are more resistant to detonation or promulgation from external stimuli than traditional munition formulations. The new explosive constituent compounds are 2,4-dinitroanisole (DNAN), nitroguanidine (NQ), and nitrotriazolone (NTO). The production and use of IM formulations may result in interaction of IM component compounds with soil. The chemical properties of these IM compounds present unique challenges for extraction from environmental matrices such as soil. A two-stage extraction procedure was developed and tested using several soil types amended with known concentrations of IM compounds. This procedure incorporates both an acidified phase and an organic phase to account for the chemical properties of the IM compounds. The method detection limits (MDLs) for all IM compounds in all soil types were below the regulatory risk-based Regional Screening Level (RSL) criteria for soil proposed by the U.S. Army Public Health Center. At defined environmentally relevant concentrations, the average recovery of each IM compound in each soil type was consistent and greater than 85%. The two-stage extraction method decreased the influence of soil composition on IM compound recovery. UV analysis of NTO established an isosbestic point based on varied pH at a detection wavelength of 341 nm. The two-stage soil extraction method is equally effective for traditional munition compounds, a potentially important point when examining soils exposed to both traditional and insensitive munitions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.
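
    For reference, the short calculation below evaluates two of the metrics mentioned above for a 2D (line-focus) concentrator: the ideal concentration limit 1/sin(theta) for a given acceptance half-angle (refractive index 1) and the concentration-acceptance product CAP = C*sin(theta); the angles and the example concentration ratio are generic values, not the paper's designs.

        import math

        SUN_HALF_ANGLE_DEG = 0.27   # apparent half-angle of the solar disc

        for theta_deg in (SUN_HALF_ANGLE_DEG, 1.0, 2.0):
            theta = math.radians(theta_deg)
            c_max = 1.0 / math.sin(theta)   # thermodynamic limit for a 2D concentrator
            print(f"half-acceptance {theta_deg:5.2f} deg: ideal C_max = {c_max:7.1f}")

        # A trough design with geometric concentration C and acceptance half-angle
        # theta has CAP = C*sin(theta) <= 1 in 2D; for example:
        C, theta = 50.0, math.radians(1.0)
        print(f"CAP for C = 50 at 1.0 deg acceptance: {C * math.sin(theta):.2f}")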

  20. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    International Nuclear Information System (INIS)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.; Park, T. K.; Deng, P.; Yang, G.; Jung, Y. S.; Kim, T. K.; Stauff, N. E.

    2016-01-01

    This report presents the performance characteristics of two ''two-stage'' fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements

  1. Application of two-stage biofilter system for the removal of odorous compounds.

    Science.gov (United States)

    Jeong, Gwi-Taek; Park, Don-Hee; Lee, Gwang-Yeon; Cha, Jin-Myeong

    2006-01-01

    Biofiltration is a biological process which is considered to be one of the more successful examples of biotechnological applications to environmental engineering, and is most commonly used in the removal of odoriferous compounds. In this study, we have attempted to assess the efficiency with which both single and complex odoriferous compounds could be removed, using one- or two-stage biofiltration systems. The tested single odor gases, limonene, alpha-pinene, and iso-butyl alcohol, were separately evaluated in the biofilters. Both limonene and alpha-pinene were removed with yields of 90% or more, corresponding to elimination capacities (EC) of 364 g/m3/h and 321 g/m3/h, respectively, at an input concentration of 50 ppm and a retention time of 30 s. Removal of iso-butyl alcohol was maintained at an effective yield of more than 90% (EC 375 g/m3/h) at an input concentration of 100 ppm. The complex gas removal scheme was applied with a 200 ppm inlet concentration of ethanol, 70 ppm of acetaldehyde, and 70 ppm of toluene, with a residence time of 45 s, in a one- or two-stage biofiltration system. The removal yield of toluene was determined to be lower than that of the other gases in the one-stage biofilter. In contrast, the complex gases were sufficiently eliminated by the two-stage biofiltration system.

  2. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu, E-mail: zhanglu1211@gmail.com; Sun, Xiangyang, E-mail: xysunbjfu@gmail.com

    2015-05-15

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  3. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    International Nuclear Information System (INIS)

    Zhang, Lu; Sun, Xiangyang

    2015-01-01

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL

  4. Is the continuous two-stage anaerobic digestion process well suited for all substrates?

    Science.gov (United States)

    Lindner, Jonas; Zielonka, Simon; Oechsner, Hans; Lemmer, Andreas

    2016-01-01

    Two-stage anaerobic digestion systems are often considered to be advantageous compared to one-stage processes. Although process conditions and fermenter setups are well examined, overall substrate degradation in these systems is controversially discussed. Therefore, the aim of this study was to investigate how substrates with different fibre and sugar contents (hay/straw, maize silage, sugar beet) influence the degradation rate and methane production. Intermediates and gas compositions, as well as methane yields and VS-degradation degrees were recorded. The sugar beet substrate led to a higher pH-value drop (5.67) in the acidification reactor, which resulted in a six times higher hydrogen production in comparison to the hay/straw substrate (pH-value drop 5.34). The achieved yields in the two-stage system showed a difference of 70.6% for the hay/straw substrate, and only 7.8% for the sugar beet substrate. Therefore, two-stage systems seem to be recommendable only for digesting sugar-rich substrates. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Study on the Control Algorithm of Two-Stage DC-DC Converter for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Changhao Piao

    2014-01-01

    The fast response, high efficiency, and good reliability are very important characteristics of electric vehicle (EV) dc/dc converters. The two-stage dc-dc converter is a kind of dc-dc topology that can offer those characteristics to EVs. Presently, nonlinear control is an active area of research in the field of control algorithms for dc-dc converters. However, very few papers study the two-stage converter for EVs. In this paper, a fixed switching frequency sliding mode (FSFSM) controller and a double-integral sliding mode (DISM) controller for the two-stage dc-dc converter are proposed, and a conventional linear (lag) controller is chosen for comparison. The performances of the proposed FSFSM controller are compared with those obtained by the lag controller. In consequence, the satisfactory simulation and experiment results show that the FSFSM controller is capable of offering good large-signal operation with fast dynamic responses to the converter. At last, some other simulation results are presented to prove that the DISM controller is a promising method for the converter to eliminate the steady-state error.

  6. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Science.gov (United States)

    Bril, Aleksander; Kalinina, Olga; Levina, Anastasia

    2018-03-01

    The paper is devoted to the current and debatable problem of methodology of choosing the effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings. This proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering systems elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies. The most attractive for development is the use of venture financing of small innovative business. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects allows creating an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering systems elements for the construction business.

  7. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    Directory of Open Access Journals (Sweden)

    Bril Aleksander

    2018-01-01

    Full Text Available The paper is devoted to the current and debatable problem of methodology of choosing the effective innovative enterprises for venture financing. A two-stage system of commercial innovation evaluation based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings. This proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that the production of internal engineering systems elements based on innovative technologies has a growth trend. The production of simple elements is organized in small enterprises on the basis of new technologies. The most attractive for development is the use of venture financing of small innovative business. To improve the efficiency of these operations, the paper proposes a methodology for a two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects allows creating an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering systems elements for the construction business.

  8. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Full Text Available Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, TPMHV can be considered as a safe and effective alternative to standard TPCS.

  9. Stochastic thermodynamics

    Science.gov (United States)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response

  10. Empirical study of classification process for two-stage turbo air classifier in series

    Science.gov (United States)

    Yu, Yuan; Liu, Jiaxiang; Li, Gang

    2013-05-01

    The suitable process parameters for a two-stage turbo air classifier are important for obtaining the ultrafine powder that has a narrow particle-size distribution; however, little has been published internationally on the classification process for the two-stage turbo air classifier in series. The influence of the process parameters of a two-stage turbo air classifier in series on classification performance is empirically studied by using aluminum oxide powders as the experimental material. The experimental results show the following: 1) When the rotor cage rotary speed of the first-stage classifier is increased from 2 300 r/min to 2 500 r/min with a constant rotor cage rotary speed of the second-stage classifier, classification precision is increased from 0.64 to 0.67. However, in this case, the final ultrafine powder yield is decreased from 79% to 74%, which means the classification precision and the final ultrafine powder yield can be regulated through adjusting the rotor cage rotary speed of the first-stage classifier. 2) When the rotor cage rotary speed of the second-stage classifier is increased from 2 500 r/min to 3 100 r/min with a constant rotor cage rotary speed of the first-stage classifier, the cut size is decreased from 13.16 μm to 8.76 μm, which means the cut size of the ultrafine powder can be regulated through adjusting the rotor cage rotary speed of the second-stage classifier. 3) When the feeding speed is increased from 35 kg/h to 50 kg/h, the "fish-hook" effect is strengthened, which makes the ultrafine powder yield decrease. 4) To weaken the "fish-hook" effect, the equalization of the two-stage wind speeds or the combination of a high first-stage wind speed with a low second-stage wind speed should be selected. This empirical study provides a criterion of process parameter configurations for a two-stage or multi-stage classifier in series, which offers a theoretical basis for practical production.

  11. Development and testing of a two stage granular filter to improve collection efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Rangan, R.S.; Prakash, S.G.; Chakravarti, S.; Rao, S.R.

    1999-07-01

    A circulating bed granular filter (CBGF) with a single filtration stage was tested with a PFB combustor in the Coal Research Facility of BHEL R and D in Hyderabad during the years 1993--95. Filter outlet dust loading varied between 20--50 mg/Nm{sup 3} for an inlet dust loading of 5--8 gms/Nm{sup 3}. The results were reported in Fluidized Bed Combustion-Volume 2, ASME 1995. Though the outlet consists of predominantly fine particulates below 2 microns, it is still beyond present day gas turbine specifications for particulate concentration. In order to enhance the collection efficiency, a two-stage granular filtration concept was evolved, wherein the filter depth is divided between two stages, accommodated in two separate vertically mounted units. The design also incorporates BHEL's scale-up concept of multiple parallel stages. The two-stage concept minimizes reentrainment of captured dust by providing clean granules in the upper stage, from where gases finally exit the filter. The design ensures that dusty gases come in contact with granules having a higher dust concentration at the bottom of the two-stage unit, where most of the cleaning is completed. A second filtration stage of cleaned granules is provided in the top unit (where the granules are returned to the system after dedusting) minimizing reentrainment. Tests were conducted to determine the optimum granule to dust ratio (G/D ratio) which decides the granule circulation rate required for the desired collection efficiency. The data brings out the importance of pre-separation and the limitation on inlet dust loading for any continuous system of granular filtration. Collection efficiencies obtained were much higher (outlet dust being 3--9 mg/Nm{sup 3}) than in the single stage filter tested earlier for similar dust loading at the inlet. The results indicate that two-stage granular filtration has a high potential for HTHT application with fewer risks as compared to other systems under development.

  12. Stochastic Analysis 2010

    CERN Document Server

    Crisan, Dan

    2011-01-01

    "Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa

  13. Stochastic processes

    CERN Document Server

    Borodin, Andrei N

    2017-01-01

    This book provides a rigorous yet accessible introduction to the theory of stochastic processes. A significant part of the book is devoted to the classic theory of stochastic processes. In turn, it also presents proofs of well-known results, sometimes together with new approaches. Moreover, the book explores topics not previously covered elsewhere, such as distributions of functionals of diffusions stopped at different random times, the Brownian local time, diffusions with jumps, and an invariance principle for random walks and local times. Supported by carefully selected material, the book showcases a wealth of examples that demonstrate how to solve concrete problems by applying theoretical results. It addresses a broad range of applications, focusing on concrete computational techniques rather than on abstract theory. The content presented here is largely self-contained, making it suitable for researchers and graduate students alike.

  14. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    Science.gov (United States)

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
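    The overall accuracy of a serial two-stage strategy of the kind evaluated here can be illustrated with a short calculation. The sketch below is not the authors' signal detection model; it simply shows, under an assumed conditional independence of the two tests and purely hypothetical per-stage values, how sensitivity and specificity combine when the second test is applied only to first-stage positives.

```python
# Minimal sketch: overall accuracy of a serial two-stage screening strategy,
# where the second test is applied only to subjects flagged positive by the
# first test and a case is called positive only if both tests are positive.
# Assumes conditional independence of the tests; numbers are hypothetical.

def serial_two_stage(se1, sp1, se2, sp2):
    """Return (sensitivity, specificity) of the combined two-stage classifier."""
    sensitivity = se1 * se2              # a case must pass both stages
    specificity = sp1 + (1 - sp1) * sp2  # cleared at stage 1, or at stage 2
    return sensitivity, specificity

if __name__ == "__main__":
    se, sp = serial_two_stage(se1=0.95, sp1=0.70, se2=0.90, sp2=0.92)
    print(f"combined sensitivity = {se:.3f}, specificity = {sp:.3f}")
```

    The serial rule trades some sensitivity for a large gain in specificity, which is the trade-off the extended ROC methodology is designed to quantify from data.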

  15. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
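    A minimal numerical sketch of the stochastic robust idea described above is given below: each scenario family has its own uncertainty set and probability, the worst case is taken within each set, and the probability-weighted sum of worst cases is minimised over the first-stage decision. The cost figures, demand ranges and the simple grid search are illustrative assumptions, not the paper's unit-commitment model or solution algorithm.

```python
# Sketch of a "stochastic robust" objective: expectation over per-set worst cases.
# All numbers (costs, demand ranges, probabilities) are invented for illustration.
import numpy as np

def second_stage_cost(x, demand, shortfall_price=10.0):
    """Recourse cost: cover any unmet demand at a penalty price."""
    return shortfall_price * max(demand - x, 0.0)

def stochastic_robust_cost(x, uncertainty_sets, weights, unit_capacity_cost=3.0):
    worst_cases = [max(second_stage_cost(x, d) for d in U) for U in uncertainty_sets]
    return unit_capacity_cost * x + float(np.dot(weights, worst_cases))

# Three uncertainty sets (e.g., demand ranges under different regimes) with probabilities.
U = [np.linspace(80, 100, 5), np.linspace(90, 130, 5), np.linspace(60, 90, 5)]
w = [0.5, 0.35, 0.15]

candidates = np.linspace(0, 150, 301)          # first-stage capacity decisions
costs = [stochastic_robust_cost(x, U, w) for x in candidates]
best = candidates[int(np.argmin(costs))]
print(f"capacity minimising the expected worst-case cost: {best:.1f}")
```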

  16. Combinatorial stresses kill pathogenic Candida species

    Science.gov (United States)

    Kaloriti, Despoina; Tillmann, Anna; Cook, Emily; Jacobsen, Mette; You, Tao; Lenardon, Megan; Ames, Lauren; Barahona, Mauricio; Chandrasekaran, Komelapriya; Coghill, George; Goodman, Daniel; Gow, Neil A. R.; Grebogi, Celso; Ho, Hsueh-Lui; Ingram, Piers; McDonagh, Andrew; De Moura, Alessandro P. S.; Pang, Wei; Puttnam, Melanie; Radmaneshfar, Elahe; Romano, Maria Carmen; Silk, Daniel; Stark, Jaroslav; Stumpf, Michael; Thiel, Marco; Thorne, Thomas; Usher, Jane; Yin, Zhikang; Haynes, Ken; Brown, Alistair J. P.

    2012-01-01

    Pathogenic microbes exist in dynamic niches and have evolved robust adaptive responses to promote survival in their hosts. The major fungal pathogens of humans, Candida albicans and Candida glabrata, are exposed to a range of environmental stresses in their hosts including osmotic, oxidative and nitrosative stresses. Significant efforts have been devoted to the characterization of the adaptive responses to each of these stresses. In the wild, cells are frequently exposed simultaneously to combinations of these stresses and yet the effects of such combinatorial stresses have not been explored. We have developed a common experimental platform to facilitate the comparison of combinatorial stress responses in C. glabrata and C. albicans. This platform is based on the growth of cells in buffered rich medium at 30°C, and was used to define relatively low, medium and high doses of osmotic (NaCl), oxidative (H2O2) and nitrosative stresses (e.g., dipropylenetriamine (DPTA)-NONOate). The effects of combinatorial stresses were compared with the corresponding individual stresses under these growth conditions. We show for the first time that certain combinations of combinatorial stress are especially potent in terms of their ability to kill C. albicans and C. glabrata and/or inhibit their growth. This was the case for combinations of osmotic plus oxidative stress and for oxidative plus nitrosative stress. We predict that combinatorial stresses may be highly significant in host defences against these pathogenic yeasts. PMID:22463109

  17. The experimental study of a two-stage photovoltaic thermal system based on solar trough concentration

    International Nuclear Information System (INIS)

    Tan, Lijun; Ji, Xu; Li, Ming; Leng, Congbin; Luo, Xi; Li, Haili

    2014-01-01

    Highlights: • A two-stage photovoltaic thermal system based on solar trough concentration. • Maximum cell efficiency of 5.21% with the mirror opening width of 57 cm. • With a single cycle, the maximum temperature rise in the heating stage is 12.06 °C. • With 30 min multiple cycles, working medium temperature 62.8 °C, increased 28.7 °C. - Abstract: A two-stage photovoltaic thermal system based on solar trough concentration is proposed, in which the metal cavity heating stage is added on the basis of the PV/T stage, and thermal energy with higher temperature is output while electric energy is output. With the 1.8 m² mirror PV/T system, the characteristic parameters of the space solar cell under non-concentrating solar radiation and concentrating solar radiation are respectively tested experimentally, and the solar cell output characteristics at different opening widths of the concentrating mirror of the PV/T stage under concentration are also tested experimentally. When the mirror opening width was 57 cm, the solar cell efficiency reached a maximum value of 5.21%. The experimental platform of the two-stage photovoltaic thermal system was established, with a 1.8 m² mirror PV/T stage and a 15 m² mirror heating stage, or a 1.8 m² mirror PV/T stage and a 30 m² mirror heating stage. The results showed that with a single cycle, the long metal cavity heating stage would bring lower thermal efficiency, but the temperature rise of the working medium is higher, up to 12.06 °C with only a single cycle. With 30 min closed multiple cycles, the temperature of the working medium in the water tank was 62.8 °C, with an increase of 28.7 °C, and thermal energy with higher temperature could be output.

  18. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
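    The RR-BLUP step used in both the single-stage and stage-wise analyses amounts to ridge regression of phenotypes (or stage-one adjusted means) on marker covariates. The sketch below fits RR-BLUP in closed form on simulated data; the dimensions, the ridge parameter and the simulation itself are assumptions for illustration and do not reproduce the paper's datasets, rotation step or boosting comparison.

```python
# Minimal RR-BLUP sketch on simulated marker data (illustrative, not the paper's code).
import numpy as np

rng = np.random.default_rng(1)
n_geno, n_markers = 200, 1000
Z = rng.integers(0, 3, size=(n_geno, n_markers)).astype(float)  # 0/1/2 allele counts
true_effects = rng.normal(0, 0.05, n_markers)
y = Z @ true_effects + rng.normal(0, 1.0, n_geno)   # stand-in for stage-one adjusted means
y = y - y.mean()
Zc = Z - Z.mean(axis=0)

lam = 100.0                                   # ridge parameter, e.g. sigma_e^2 / sigma_m^2
u_hat = np.linalg.solve(Zc.T @ Zc + lam * np.eye(n_markers), Zc.T @ y)
gebv = Zc @ u_hat                             # genomic estimated breeding values

print("correlation with true genetic values:",
      round(float(np.corrcoef(gebv, Z @ true_effects)[0, 1]), 3))
```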

  19. Clinical evaluation of two-stage mandibular wisdom tooth extraction method to avoid mental nerve paresthesia

    International Nuclear Information System (INIS)

    Nozoe, Etsuro; Nakamura, Yasunori; Okawachi, Takako; Ishihata, Kiyohide; Shinnakasu, Mana; Nakamura, Norifumi

    2011-01-01

    Clinical courses following two-stage mandibular wisdom tooth extraction (TMWTE) carried out for preventing postoperative mental nerve paresthesia (MNP) were analyzed. When panoramic X-ray showed overlapping of wisdom tooth root on the superior 1/2 or more of the mandibular canal, interruption of the white line of the superior wall of the canal, or diversion of the canal, CT examination was facilitated. In cases where contact between the tooth root and canal was demonstrated in CT examination, TMWTE was then selected after gaining the patient's consent. TMWTE consisted of removing more than a half of the tooth crown and tooth root extraction at the second step after 2-3 months. The clinical features of wisdom teeth extracted and postoperative courses including tooth movement and occurrence of MNP during two-stage MWTE were evaluated. TMWTE was carried out for 40 teeth among 811 wisdom teeth (4.9%) that were extracted from 2007 to 2009. Among them, complete procedures were accomplished in 39 teeth, and crown removal was performed insufficiently at the first-stage operation in one tooth. Tooth movement was detected in 37 of 40 cases (92.5%). No postoperative MNP was observed in cases in which complete two-stage MWTE was carried out, but one case with insufficient crown removal was complicated by postoperative MNP. Seven mild complications (dehiscence, cold sensitivity, etc.) were noted after the first-stage operation. Therefore, we conclude that TMWTE for high-risk cases assessed by X-ray findings is useful to avoid MNP after MWTE. (author)

  20. Recent developments of a two-stage light gas gun for pellet injection

    International Nuclear Information System (INIS)

    Reggiori, A.

    1984-01-01

    A report is given on a two-stage pneumatic gun operated with ambient air as first stage driver which has been built and tested. Cylindrical polyethylene pellets of 1 mm diameter and 1 mm length have been launched at velocities up to 1800 m/s, with divergence angles of the pellet trajectory less than 1°. It is possible to optimize the pressure pulse for pellets of different masses, simply changing the mass of the piston and/or the initial pressures in the second stage. (author)

  1. Grids heat loading of an ion source in two-stage acceleration system

    International Nuclear Information System (INIS)

    Okumura, Yoshikazu; Ohara, Yoshihiro; Ohga, Tokumichi

    1978-05-01

    Heat loading of the extraction grids, which is one of the critical problems limiting the beam pulse duration at high power level, has been investigated experimentally, with an ion source in a two-stage acceleration system of four multi-aperture grids. The loading of each grid depends largely on extraction current and grid gap pressures; it decreases with improvement of the beam optics and with decrease of the pressures. In optimum operating modes, its level is typically less than ~2% of the total beam power or ~200 W/cm² at beam energies of 50 - 70 kV. (auth.)

  2. Two-Stage Maximum Likelihood Estimation (TSMLE) for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for multitone code division multiple access (MT-CDMA) systems. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system, over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.

  3. The global stability of a delayed predator-prey system with two stage-structure

    International Nuclear Information System (INIS)

    Wang Fengyan; Pang Guoping

    2009-01-01

    Based on the classical delayed stage-structured model and Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system, where prey and predator have two stages, an immature stage and a mature stage. The time delays are the time lengths between the immature's birth and maturity of prey and predator species. Results on global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.
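    A generic delayed stage-structured predator-prey system of the type described, written here in a common Aiello-Freedman-style form (the paper's exact equations and notation may differ), is

```latex
\begin{aligned}
\dot{x}_i(t) &= a\,x_m(t) - d_i\,x_i(t) - a\,e^{-d_i\tau_1}\,x_m(t-\tau_1),\\
\dot{x}_m(t) &= a\,e^{-d_i\tau_1}\,x_m(t-\tau_1) - b\,x_m^2(t) - p\,x_m(t)\,y_m(t),\\
\dot{y}_i(t) &= c\,p\,x_m(t)\,y_m(t) - d_j\,y_i(t) - c\,p\,e^{-d_j\tau_2}\,x_m(t-\tau_2)\,y_m(t-\tau_2),\\
\dot{y}_m(t) &= c\,p\,e^{-d_j\tau_2}\,x_m(t-\tau_2)\,y_m(t-\tau_2) - d_m\,y_m(t),
\end{aligned}
```

    where x_i, x_m (y_i, y_m) denote immature and mature prey (predator) densities, τ1 and τ2 are the maturation delays, and the exponential factors account for juveniles that die before maturing; the global stability results concern the nonnegative equilibria of systems of this kind.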

  4. Forecasting long memory series subject to structural change: A two-stage approach

    DEFF Research Database (Denmark)

    Papailias, Fotis; Dias, Gustavo Fruet

    2015-01-01

    A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural change.
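    A stripped-down version of the two-stage recipe can be written in a few lines: fractionally difference with an assumed memory parameter d, forecast the resulting weakly dependent series with a simple AR(1), and cumulate back to the long-memory scale. The simulated data, the fixed d and the AR(1) choice are illustrative assumptions; the paper estimates the fractional exponent and allows richer short-memory dynamics.

```python
# Two-stage long-memory forecasting sketch (illustrative parameters and data).
import numpy as np

def fracdiff_weights(d, n):
    """Binomial expansion weights of (1 - L)^d, with w_0 = 1."""
    w = np.zeros(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def fracdiff(x, d):
    """Apply the truncated fractional differencing filter to x."""
    w = fracdiff_weights(d, len(x))
    return np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])

rng = np.random.default_rng(0)
T, d, h = 500, 0.4, 10

# simulate an ARFIMA(0, d, 0)-type series: (1 - L)^d x_t = eps_t, truncated at t = 0
eps = rng.normal(size=T)
w_sim = fracdiff_weights(d, T + h + 1)
x = np.zeros(T)
for t in range(T):
    x[t] = eps[t] - w_sim[1:t + 1] @ x[t - 1::-1][:t]

u = fracdiff(x, d)                                   # stage 1: weakly dependent series
phi = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])           # stage 2: AR(1) by least squares

x_ext = list(x)
for i in range(h):                                   # forecast, then fractionally cumulate
    u_fc = phi ** (i + 1) * u[-1]
    t = len(x_ext)
    x_ext.append(u_fc - w_sim[1:t + 1] @ np.array(x_ext[::-1]))

print("long-memory forecasts:", np.round(x_ext[-h:], 3))
```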

  5. Two-Stage Load Shedding for Secondary Control in Hierarchical Operation of Islanded Microgrids

    DEFF Research Database (Denmark)

    Zhou, Quan; Li, Zhiyi; Wu, Qiuwei

    2018-01-01

    A two-stage load shedding scheme is presented to cope with the severe power deficit caused by microgrid islanding. Coordinated with the fast response of inverter-based distributed energy resources (DERs), load shedding at each stage and the resulting power flow redistribution are estimated. The first stage of load shedding will cease rapid frequency decline in which the measured frequency deviation is employed to guide the load shedding level and process. Once a new steady-state is reached, the second stage is activated, which performs load shedding according to the priorities of loads...

  6. The rearrangement process in a two-stage broadcast switching network

    DEFF Research Database (Denmark)

    Jacobsen, Søren B.

    1988-01-01

    The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol.COM-33, no.10, p.1025-1035, Oct. 1985) is considered. By defining a certain function it is possible to calculate an upper bound on the number of connections to be moved during a rearrangement. When each inlet channel appears twice, the maximum number of connections to be moved is found. For a special class of inlet assignment patterns in the case of which each inlet channel appears three times, the maximum number of connections to be moved is also found. In the general...

  7. Risk-Averse Suppliers’ Optimal Pricing Strategies in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Rui Shen

    2013-01-01

    Full Text Available Risk-averse suppliers' optimal pricing strategies in two-stage supply chains under a competitive environment are discussed. The suppliers in this paper focus more on losses as compared to profits, and they care about their long-term relationship with their customers. We introduce for the suppliers a loss function, which covers both current loss and future loss. The optimal wholesale price is solved under situations of risk neutrality, risk aversion, and a combination of minimizing loss and controlling risk, respectively. Besides, some properties of and relations among these optimal wholesale prices are given as well. A numerical example is given to illustrate the performance of the proposed method.

  8. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    Science.gov (United States)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Systems of anaerobic digestion should be used for processing of organic waste. Managing the process of anaerobic recycling of organic waste requires reliable prediction of biogas production. The development of a mathematical model of organic waste digestion allows the rate of biogas output to be determined for the two-stage anaerobic digestion process, with the first stage taken into account. Verification of Konto's model, based on the studied anaerobic processing of organic waste, is implemented. The dependences of biogas output and its rate on time are established and may be used to predict the process of anaerobic processing of organic waste.

  9. Simple Digital Control of a Two-Stage PFC Converter Using DSPIC30F Microprocessor

    DEFF Research Database (Denmark)

    Török, Lajos; Munk-Nielsen, Stig

    2010-01-01

    The use of dsPIC digital signal controllers (DSC) in Switch Mode Power Supply (SMPS) applications opens new perspectives for cheap and flexible digital control solutions. This paper presents the digital control of a two stage power factor corrector (PFC) converter. The PFC circuit is designed and built for 70W rated output power. Average current mode control for boost converter and current programmed control for forward converter are implemented on a dsPIC30F1010. Pulse Width Modulation (PWM) technique is used to drive the switching MOSFETs. Results show that digital solutions with ds...

  10. A comprehensive review on two-stage integrative schemes for the valorization of dark fermentative effluents.

    Science.gov (United States)

    Sivagurunathan, Periyasamy; Kuppam, Chandrasekhar; Mudhoo, Ackmez; Saratale, Ganesh D; Kadier, Abudukeremu; Zhen, Guangyin; Chatellard, Lucile; Trably, Eric; Kumar, Gopalakrishnan

    2017-12-21

    This review provides the alternative routes towards the valorization of dark H2 fermentation effluents that are mainly rich in volatile fatty acids such as acetate and butyrate. Various enhancement and alternative routes such as photo fermentation, anaerobic digestion, utilization of microbial electrochemical systems, and algal systems towards the generation of bioenergy and electricity, and also for efficient organic matter utilization, are highlighted. What is more, various integration schemes and two-stage fermentation for the possible scale up are reviewed. Moreover, recent progress in enhancing performance towards waste stabilization and in the overall utilization of the residual COD present in the organic source into value-added products is extensively discussed.

  11. An Investigation on the Formation of Carbon Nanotubes by Two-Stage Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    M. S. Shamsudin

    2012-01-01

    Full Text Available High density of carbon nanotubes (CNTs) has been synthesized from agricultural hydrocarbon: camphor oil using a one-hour synthesis time and a titanium dioxide sol gel catalyst. The pyrolysis temperature is studied in the range of 700–900°C at increments of 50°C. The synthesis process is done using a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results showed that structural properties of CNT are highly dependent on pyrolysis temperature changes.

  12. A Novel Two-Stage Dynamic Spectrum Sharing Scheme in Cognitive Radio Networks

    Institute of Scientific and Technical Information of China (English)

    Guodong Zhang; Wei Heng; Tian Liang; Chao Meng; Jinming Hu

    2016-01-01

    In order to enhance the efficiency of spectrum utilization and reduce communication overhead in spectrum sharing process, we propose a two-stage dynamic spectrum sharing scheme in which cooperative and noncooperative modes are analyzed in both stages. In particular, the existence and the uniqueness of Nash Equilibrium (NE) strategies for noncooperative mode are proved. In addition, a distributed iterative algorithm is proposed to obtain the optimal solutions of the scheme. Simulation studies are carried out to show the performance comparison between two modes as well as the system revenue improvement of the proposed scheme compared with a conventional scheme without a virtual price control factor.

  13. The Design, Construction and Operation of a 75 kW Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Birk; Ahrenfeldt, Jesper; Jensen, Torben Kvist

    2003-01-01

    The Two-Stage Gasifier was operated for several weeks (465 hours), 190 hours of these continuously. The gasifier is operated automatically unattended day and night, and only small adjustments of the feeding rate were necessary once or twice a day. The operation was successful, and the output ... as expected. The engine operated well on the produced gas, and no deposits were observed in the engine afterwards. The bag house filter was an excellent and well operating gas cleaning system. Small amounts of deposits consisting of salts and carbonates were observed in the hot gas heat exchangers. The top

  14. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    OpenAIRE

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-01

    Chia-Chang Chien1, Shu-Fen Huang1,2,3,4, For-Wey Lung1,2,3,41Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; 2Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; 3Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; 4Calo Psychiatric Center, Pingtung County, TaiwanObjective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of militar...

  15. High-speed pellet injection with a two-stage pneumatic gun

    International Nuclear Information System (INIS)

    Reggiori, A.; Carlevaro, R.; Riva, G.; Daminelli, G.B.; Scaramuzzi, F.; Frattolillo, A.; Martinis, L.; Cardoni, P.; Mori, L.

    1988-01-01

    The injection of pellets of frozen hydrogen isotopes into fusion plasmas is envisioned as a fueling technique for future fusion reactors. Research is underway to obtain high injection speeds for solid H2 and D2 pellets. The optimization of a two-stage light gas gun is being pursued by the Milano group; the search for a convenient method of creating pellets with good mechanical properties and a secure attachment to the cold surface on which they are formed is carried out in Frascati. Velocities >2000 m/s have been obtained, but reproducibility is not yet satisfactory.

  16. Artificial immune system and sheep flock algorithms for two-stage fixed-charge transportation problem

    DEFF Research Database (Denmark)

    Kannan, Devika; Govindan, Kannan; Soleimani, Hamed

    2014-01-01

    In this paper, we cope with a two-stage distribution planning problem of a supply chain with fixed charges. The focus of the paper is on developing efficient solution methodologies for the selected NP-hard problem. Owing to computational limitations, common exact and approximation solution approaches are unable to solve real-world instances of such NP-hard problems in a reasonable time. These approaches involve cumbersome computational steps in real-size cases. In order to solve the mixed integer linear programming model, we develop an artificial immune system and a sheep flock algorithm...

  17. Local formulae for combinatorial Pontryagin classes

    International Nuclear Information System (INIS)

    Gaifullin, Alexander A

    2004-01-01

    Let p(|K|) be the characteristic class of a combinatorial manifold K given by a polynomial p in the rational Pontryagin classes of K. We prove that for any polynomial p there is a function taking each combinatorial manifold K to a cycle z_p(K) in its rational simplicial chains such that: 1) the Poincaré dual of z_p(K) represents the cohomology class p(|K|); 2) the coefficient of each simplex Δ in the cycle z_p(K) is determined solely by the combinatorial type of link Δ. We explicitly describe all such functions for the first Pontryagin class. We obtain estimates for the denominators of the coefficients of the simplices in the cycles z_p(K).

  18. Accessing Specific Peptide Recognition by Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Li, Ming

    Molecular recognition is at the basis of all processes for life, and plays a central role in many biological processes, such as protein folding, the structural organization of cells and organelles, signal transduction, and the immune response. Hence, my PhD project is entitled “Accessing Specific Peptide Recognition by Combinatorial Chemistry”. Molecular recognition is a specific interaction between two or more molecules through noncovalent bonding, such as hydrogen bonding, metal coordination, van der Waals forces, π−π, hydrophobic, or electrostatic interactions. The association involves kinetic... Combinatorial chemistry was invented in the 1980s based on observations of functional aspects of the adaptive immune system. It was employed for drug development and optimization in conjunction with high-throughput synthesis and screening. (chapter 2) Combinatorial chemistry is able to rapidly produce many thousands...

  19. The economics of planning electricity transmission to accommodate renewables: Using two-stage optimisation to evaluate flexibility and the cost of disregarding uncertainty

    International Nuclear Information System (INIS)

    Weijde, Adriaan Hendrik van der; Hobbs, Benjamin F.

    2012-01-01

    Aggressive development of renewable electricity sources will require significant expansions in transmission infrastructure. We present a stochastic two-stage optimisation model that captures the multistage nature of transmission planning under uncertainty and use it to evaluate interregional grid reinforcements in Great Britain (GB). In our model, a proactive transmission planner makes investment decisions in two time periods, each time followed by a market response. Uncertainty is represented by economic, technology, and regulatory scenarios, and first-stage investments must be made before it is known which scenario will occur. The model allows us to identify expected cost-minimising first-stage investments, as well as estimate the value of information, the cost of ignoring uncertainty, and the value of flexibility. Our results show that ignoring risk in planning transmission for renewables has quantifiable economic consequences, and that considering uncertainty can yield decisions that have lower expected costs than traditional deterministic planning methods. In the GB case, the value of information and cost of disregarding uncertainty in transmission planning were of the same order of magnitude (approximately £100 M, in present worth terms). Further, the best plan under a risk-neutral decision criterion can differ from the best under risk-aversion. Finally, a traditional sensitivity analysis-based robustness analysis also yields different results than the stochastic model, although the former's expected cost is not much higher.
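    The structure of such a proactive two-stage stochastic plan can be illustrated with a toy extensive-form linear program: a single candidate line is sized before the demand scenario is known, and cheap remote energy or expensive local generation serves demand afterwards. The sketch below uses continuous capacity, one investment period and invented cost and demand figures, so it is only a schematic of the modelling idea, not the GB model of the paper.

```python
# Toy two-stage stochastic transmission-sizing LP in extensive form (illustrative numbers).
from scipy.optimize import linprog

scenarios = {"low": (0.3, 80.0), "mid": (0.5, 120.0), "high": (0.2, 180.0)}  # prob, demand
c_line, c_remote, c_local = 5.0, 10.0, 60.0   # per-unit build cost and energy costs

# variable order: [x, f_low, g_low, f_mid, g_mid, f_high, g_high]
probs   = [p for p, _ in scenarios.values()]
demands = [d for _, d in scenarios.values()]
c = [c_line] + [coef for p in probs for coef in (p * c_remote, p * c_local)]

A_eq, b_eq, A_ub, b_ub = [], [], [], []
for i, d in enumerate(demands):
    eq = [0.0] * 7; eq[1 + 2 * i] = 1.0; eq[2 + 2 * i] = 1.0     # f_s + g_s = D_s
    A_eq.append(eq); b_eq.append(d)
    ub = [0.0] * 7; ub[0] = -1.0; ub[1 + 2 * i] = 1.0            # f_s <= x (line capacity)
    A_ub.append(ub); b_ub.append(0.0)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 7)
print(f"first-stage line capacity: {res.x[0]:.1f}, expected cost: {res.fun:.0f}")
```

    The expected cost of this single stochastic solution can be compared with plans optimised for each scenario separately, which is essentially how the value of information and the cost of disregarding uncertainty are quantified.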

  20. Development of an innovative two-stage process, a combination of acidogenic hydrogenesis and methanogenesis

    Energy Technology Data Exchange (ETDEWEB)

    Han, S.K.; Shin, H.S. [Korea Advanced Inst. of Science and Technology, Daejeon (Korea, Republic of). Dept. of Civil and Enviromental Engineering

    2004-07-01

    Hydrogen produced from waste by means of fermentative bacteria is an attractive way to produce this fuel as an alternative to fossil fuels. It also helps treat the associated waste. The authors have undertaken to optimize acidogenic hydrogenesis and methanogenesis. Building on this, they then developed a two-stage process that produces both hydrogen and methane. Acidogenic hydrogenesis of food waste was investigated using a leaching bed reactor. The dilution rate was varied in order to maximize efficiency which was as high as 70.8 per cent. Further to this, an upflow anaerobic sludge blanket reactor converted the wastewater from acidogenic hydrogenesis into methane. Chemical oxygen demand (COD) removal rates exceeded 96 per cent up to a COD loading of 12.9 COD/l/d. After this, the authors devised a new two-stage process based on a combination of acidogenic hydrogenesis and methanogenesis. The authors report on results for this process using food waste as feedstock. 5 refs., 5 figs.

  1. Fueling of magnetically confined plasmas by single- and two-stage repeating pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Foust, C.R.; Milora, S.L.

    1990-01-01

    Advanced plasma fueling systems for magnetic fusion confinement experiments are under development at Oak Ridge National Laboratory (ORNL). The general approach is that of producing and accelerating frozen hydrogenic pellets to speeds in the kilometer-per-second range using single shot and repetitive pneumatic (light-gas gun) pellet injectors. The millimeter-to-centimeter size pellets enter the plasma and continuously ablate because of the plasma electron heat flux, depositing fuel atoms along the pellet trajectory. This fueling method allows direct fueling in the interior of the hot plasma and is more efficient than the alternative method of injecting room temperature fuel gas at the wall of the plasma vacuum chamber. Single-stage pneumatic injectors based on the light-gas gun concept have provided hydrogenic fuel pellets in the speed range of 1--2 km/s in single-shot injector designs. Repetition rates up to 5 Hz have been demonstrated in repetitive injector designs. Future fusion reactor-scale devices may need higher pellet velocities because of the larger plasma size and higher plasma temperatures. Repetitive two-stage pneumatic injectors are under development at ORNL to provide long-pulse plasma fueling in the 3--5 km/s speed range. Recently, a repeating, two-stage light-gas gun achieved repetitive operation at 1 Hz with speeds in the range of 2--3 km/s

  2. Plant specification of a generic human-error data through a two-stage Bayesian approach

    International Nuclear Information System (INIS)

    Heising, C.D.; Patterson, E.I.

    1984-01-01

    Expert judgement concerning human performance in nuclear power plants is quantitatively coupled with actuarial data on such performance in order to derive plant-specific human-error rate probability distributions. The coupling procedure consists of a two-stage application of Bayes' theorem to information which is grouped by type. The first information type contains expert judgement concerning human performance at nuclear power plants in general. Data collected on human performance at a group of similar plants forms the second information type. The third information type consists of data on human performance in a specific plant which has the same characteristics as the group members. The first and second information types are coupled in the first application of Bayes' theorem to derive a probability distribution for population performance. This distribution is then combined with the third information type in a second application of Bayes' theorem to determine a plant-specific human-error rate probability distribution. The two stage Bayesian procedure thus provides a means to quantitatively couple sparse data with expert judgement in order to obtain a human performance probability distribution based upon available information. Example calculations for a group of like reactors are also given. (author)
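    The two-stage coupling can be illustrated with a simple conjugate (Beta-Binomial) analogue: an expert-based prior is updated first with pooled data from the group of similar plants and then with the plant-specific record. The prior parameters and event counts below are hypothetical, and the paper's procedure works with error-rate probability distributions rather than this simplified demand-count form.

```python
# Two-stage Bayesian update sketch with a conjugate Beta prior (hypothetical numbers).
from scipy.stats import beta

a0, b0 = 2.0, 200.0                          # expert prior: mean error probability ~0.01

group_errors, group_demands = 15, 1200       # stage 1: pooled data from similar plants
a1, b1 = a0 + group_errors, b0 + group_demands - group_errors

plant_errors, plant_demands = 1, 300         # stage 2: plant-specific data
a2, b2 = a1 + plant_errors, b1 + plant_demands - plant_errors

post = beta(a2, b2)
print(f"posterior mean: {post.mean():.4f}, "
      f"90% interval: ({post.ppf(0.05):.4f}, {post.ppf(0.95):.4f})")
```

    Conjugacy keeps both applications of Bayes' theorem in closed form, which is convenient when the plant-specific data are sparse.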

  3. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase of the external coefficient or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete analysis of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters in the case of inaccuracy of the parameters and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
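    The external and internal coefficients mentioned above are the hallmarks of Bass-type diffusion, so a rough feel for the sampling effect can be obtained from a simple simulation in which a fraction of a slowly growing potential market is seeded with free samples. This is only an illustrative dynamic with assumed parameters, not the authors' optimisation model or two-stage estimation method.

```python
# Bass-type diffusion with free-sample seeding (illustrative assumption, not the paper's model).
import numpy as np

def simulate_adoption(p=0.01, q=0.35, market0=100_000, growth=0.002,
                      sample_fraction=0.05, periods=52):
    """Weekly adoption with external (p) and internal (q) influence and a growing market."""
    market = float(market0)
    adopters = sample_fraction * market          # sampled customers start as adopters
    total = [adopters]
    for _ in range(periods):
        market *= 1 + growth                     # dynamic potential market
        hazard = p + q * adopters / market       # external + internal (word-of-mouth) influence
        new = hazard * max(market - adopters, 0.0)
        adopters += new
        total.append(adopters)
    return np.array(total)

no_sampling = simulate_adoption(sample_fraction=0.0)
with_sampling = simulate_adoption(sample_fraction=0.05)
print("extra adopters after one year:", int(with_sampling[-1] - no_sampling[-1]))
```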

  4. Two stage bioethanol refining with multi litre stacked microbial fuel cell and microbial electrolysis cell.

    Science.gov (United States)

    Sugnaux, Marc; Happe, Manuel; Cachelin, Christian Pierre; Gloriod, Olivier; Huguenin, Gérald; Blatter, Maxime; Fischer, Fabian

    2016-12-01

    Ethanol, electricity, hydrogen and methane were produced in a two stage bioethanol refinery setup based on a 10L microbial fuel cell (MFC) and a 33L microbial electrolysis cell (MEC). The MFC was a triple stack for ethanol and electricity co-generation. The stack configuration produced more ethanol with faster glucose consumption the higher the stack potential. Under electrolytic conditions ethanol productivity outperformed standard conditions and reached 96.3% of the theoretically best case. At lower external loads currents and working potentials oscillated in a self-synchronized manner over all three MFC units in the stack. In the second refining stage, fermentation waste was converted into methane, using the scaled-up MEC stack. The bioelectric methanisation reached 91% efficiency at room temperature with an applied voltage of 1.5V using nickel cathodes. The two stage bioethanol refining process employing bioelectrochemical reactors produces more energy vectors than is possible with today's ethanol distilleries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Two staged incentive contract focused on efficiency and innovation matching in critical chain project management

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2014-09-01

    Full Text Available Purpose: The purpose of this paper is to define the relative optimal incentive contract to effectively encourage employees to improve work efficiency while actively implementing innovative behavior. Design/methodology/approach: This paper analyzes a two staged incentive contract coordinated with efficiency and innovation in Critical Chain Project Management using learning real options, based on principal-agent theory. The situational experiment is used to analyze the validity of the basic model. Findings: The two staged incentive scheme is more suitable for employees to create and implement learning real options, which encourages them to engage efficiently in the innovation process in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include the individual characteristics of uncertain perception, which might affect the consistency of external validity. The basic model and the experiment design need to be improved. Practical Implications: The project managers should pay closer attention to early innovation behavior and monitoring feedback of competition time in the implementation of Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using the principal-agent theory, to encourage the completion of CCPM methods as well as imitative free-riding on the creative ideas of other members in the team.

  6. Two-stage acid saccharification of fractionated Gelidium amansii minimizing the sugar decomposition.

    Science.gov (United States)

    Jeong, Tae Su; Kim, Young Soo; Oh, Kyeong Keun

    2011-11-01

    Two-stage acid hydrolysis was conducted on easy reacting cellulose and resistant reacting cellulose of fractionated Gelidium amansii (f-GA). Acid hydrolysis of f-GA was performed between 170 and 200 °C for a period of 0-5 min, and an acid concentration of 2-5% (w/v, H2SO4) to determine the optimal conditions for acid hydrolysis. In the first stage of the acid hydrolysis, an optimum glucose yield of 33.7% was obtained at a reaction temperature of 190 °C, an acid concentration of 3.0%, and a reaction time of 3 min. In the second stage, a glucose yield of 34.2%, on the basis of the amount of residual cellulose from the f-GA, was obtained at a temperature of 190 °C, a sulfuric acid concentration of 4.0%, and a reaction time of 3.7 min. Finally, 68.58% of the cellulose derived from f-GA was converted into glucose through two-stage acid saccharification under the aforementioned conditions. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Use of a two-stage light-gas gun as an injector for electromagnetic railguns

    International Nuclear Information System (INIS)

    Shahinpoor, M.

    1989-01-01

    Ablation of wall materials is known to be a major factor limiting the performance of railguns. To minimize this effect, it is desirable to inject projectiles into the railgun at velocities greater than the ablation threshold velocity (6-8 km/s for copper rails). Because two-stage light-gas guns are capable of achieving such velocities, a program was initiated to design, build and evaluate the performance of a two-stage light gas gun, utilizing hydrogen gas, for use as an injector to an electromagnetic railgun. This effort is part of a project to develop a hypervelocity electromagnetic launcher (HELEOS) for use in equation-of-state studies. In this paper, the specific design features that enhance compatibility of the injector with the railgun, including a slip-joint between the injector launch tube and the coupling section to the railgun, are described. The operational capabilities for using all major projectile velocity measuring techniques, such as in-bore pressure gauges, laser and CW x-ray interrupt techniques, flash x-ray and continuous in-bore velocity measurements using VISAR interferometry, are also discussed. Finally, an internal ballistics code for optimizing gun performance has been utilized to interpret performance data of the gun.

  8. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    Science.gov (United States)

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in the oncology trials to reduce the number of patients placed on ineffective experimental therapies. Recently Koyama and Chen (2008) discussed how to conduct proper inference for such studies because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies when the actual second stage sample sizes differ from planned ones. We consider an alternative inference method based on likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantage in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverages. We also illustrated the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan. Reported P-values, point estimates and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
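    A minimal sketch of likelihood-based exact inference after a Simon two-stage design is shown below: permissible sample paths are enumerated, ordered by a one-sided likelihood ratio against the null response rate, and the p-value is the null probability of outcomes at least as extreme as the one observed. The design parameters, p0 and this particular ordering are illustrative; the authors' conditional-likelihood ordering and their adjustments for modified second-stage sample sizes may differ in detail.

```python
# Exact p-value under a Simon two-stage design using a likelihood-ratio ordering (sketch).
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

def outcomes(n1, r1, n2):
    """Permissible sample paths: stop for futility if x1 <= r1, else continue to stage 2."""
    for x1 in range(n1 + 1):
        if x1 <= r1:
            yield (x1, None, x1, n1)                 # (x1, x2, responses, enrolled)
        else:
            for x2 in range(n2 + 1):
                yield (x1, x2, x1 + x2, n1 + n2)

def lr_stat(k, n, p0):
    """One-sided likelihood ratio versus H0: p = p0 (equal to 1 when the MLE <= p0)."""
    phat = k / n
    if phat <= p0:
        return 1.0
    return (phat / p0) ** k * ((1 - phat) / (1 - p0)) ** (n - k)

def p_value(observed, n1, r1, n2, p0):
    obs_lr = lr_stat(observed[2], observed[3], p0)
    total = 0.0
    for x1, x2, k, n in outcomes(n1, r1, n2):
        prob = binom_pmf(x1, n1, p0) if x2 is None else \
               binom_pmf(x1, n1, p0) * binom_pmf(x2, n2, p0)
        if lr_stat(k, n, p0) >= obs_lr:
            total += prob
    return total

# Example: a design with n1=13, r1=3, n2=30 and an observed path of 4/13 then 10/30.
print(round(p_value((4, 10, 14, 43), n1=13, r1=3, n2=30, p0=0.20), 4))
```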

  9. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    Science.gov (United States)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a two-stage high-capacity free-piston Stirling cryocooler driven by a linear compressor to meet the requirement of the high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single piston linear compressor, a two-stage free piston Stirling cryocooler and a passive oscillator. A single stepped displacer configuration was adopted. A numerical model based on the thermoacoustic theory was used to optimize the system operating and structure parameters. Distributions of pressure wave, phase differences between the pressure wave and the volume flow rate and different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.

  10. Two-stage heterotrophic and phototrophic culture strategy for algal biomass and lipid production.

    Science.gov (United States)

    Zheng, Yubin; Chi, Zhanyou; Lucker, Ben; Chen, Shulin

    2012-01-01

    A two-stage heterotrophic and phototrophic culture strategy for algal biomass and lipid production was studied, wherein high density heterotrophic cultures of Chlorella sorokiniana serve as seed for subsequent phototrophic growth. The data showed growth rate, cell density and productivity of heterotrophic C. sorokiniana were 3.0, 3.3 and 7.4 times higher than the phototrophic counterpart, respectively. Hetero- and phototrophic algal seeds had similar biomass/lipid production and fatty acid profile when inoculated into a phototrophic culture system. To expand the application, food waste and wastewater were tested as feedstock for heterotrophic growth, and supported cell growth successfully. These results demonstrated the advantages of using heterotrophic algae cells as seeds for open algae culture systems. Additionally, a high inoculation rate of heterotrophic algal seed can be utilized as an effective method for contamination control. This two-stage heterotrophic phototrophic process is promising to provide a more efficient way for large scale production of algal biomass and biofuels. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Two-stage single-volume exchange transfusion in severe hemolytic disease of the newborn.

    Science.gov (United States)

    Abbas, Wael; Attia, Nayera I; Hassanein, Sahar M A

    2012-07-01

    Evaluation of two-stage single-volume exchange transfusion (TSSV-ET) in decreasing the post-exchange rebound increase in serum bilirubin level, with subsequent reduction of the need for repeated exchange transfusions. The study included 104 neonates with hyperbilirubinemia needing exchange transfusion. They were randomly enrolled into two equal groups, each group comprising 52 neonates. TSSV-ET was performed on 52 neonates, and the traditional single-stage double-volume exchange transfusion (SSDV-ET) was performed on the other 52 neonates. TSSV-ET significantly lowered rebound serum bilirubin level (12.7 ± 1.1 mg/dL), compared to SSDV-ET (17.3 ± 1.7 mg/dL), p < 0.001. Need for repeated exchange transfusions was significantly lower in TSSV-ET group (13.5%), compared to 32.7% in SSDV-ET group, p < 0.05. No significant difference was found between the two groups as regards the morbidity (11.5% and 9.6%, respectively) and the mortality (1.9% for both groups). Two-stage single-volume exchange transfusion proved to be more effective in reducing rebound serum bilirubin level post-exchange and in decreasing the need for repeated exchange transfusions.

  12. QUICKGUN: An algorithm for estimating the performance of two-stage light gas guns

    International Nuclear Information System (INIS)

    Milora, S.L.; Combs, S.K.; Gouge, M.J.; Kincaid, R.W.

    1990-09-01

    An approximate method is described for solving the equation of motion of a projectile accelerated by a two-stage light gas gun that uses high-pressure (<100-bar) gas from a storage reservoir to drive a piston to moderate speed (<400 m/s) for the purpose of compressing the low molecular weight propellant gas (hydrogen or helium) to high pressure (1000 to 10,000 bar) and temperature (1000 to 10,000 K). Zero-dimensional, adiabatic (isentropic) processes are used to describe the time dependence of the ideal gas thermodynamic properties of the storage reservoir and the first and second stages of the system. A one-dimensional model based on an approximate method of characteristics, or wave diagram analysis, for flow with friction (nonisentropic) is used to describe the nonsteady compressible flow processes in the launch tube. Linear approximations are used for the characteristic and fluid particle trajectories by averaging the values of the flow parameters at the breech and at the base of the projectile. An assumed functional form for the Mach number at the breech provides the necessary boundary condition. Results of the calculation are compared with data obtained from two-stage light gas gun experiments at Oak Ridge National Laboratory for solid deuterium and nylon projectiles with masses ranging from 10 to 35 mg and for projectile speeds between 1.6 and 4.5 km/s. The predicted and measured velocities generally agree to within 15%. 19 refs., 3 figs., 2 tabs
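    The zero-dimensional, adiabatic part of such a model can be sketched very compactly: the propellant behind the projectile is treated as a lumped ideal gas expanding isentropically, and Newton's law is integrated until the projectile leaves the launch tube. The parameters below are invented, and the sketch ignores the reservoir, the piston dynamics, friction and the nonsteady launch-tube flow that QUICKGUN's wave-diagram stage captures, so it overestimates attainable speeds.

```python
# Zero-dimensional isentropic launch-phase sketch (illustrative parameters, not ORNL data).
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.4                 # light propellant gas treated as an ideal diatomic gas
p0, V0 = 1e8, 1e-6          # breech pressure [Pa] and initial propellant volume [m^3]
A = np.pi * (1.5e-3) ** 2   # bore cross-section for a 3 mm launch tube [m^2]
m = 30e-6                   # projectile mass [kg]
L = 1.0                     # launch tube length [m]

def rhs(t, y):
    x, v = y
    p = p0 * (V0 / (V0 + A * x)) ** gamma      # isentropic expansion behind the projectile
    return [v, p * A / m]

def muzzle(t, y):                              # stop when the projectile reaches the muzzle
    return y[0] - L
muzzle.terminal = True

sol = solve_ivp(rhs, (0.0, 0.01), [0.0, 0.0], events=muzzle, max_step=1e-6)
print(f"ideal muzzle velocity ~ {sol.y[1, -1]:.0f} m/s after {sol.t[-1] * 1e3:.2f} ms")
```

    Because gas inertia is neglected, the lumped model has no sound-speed limit; the wave-diagram treatment in the full code is what brings predictions within about 15% of the measured 1.6-4.5 km/s range quoted above.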

  13. Fate of dissolved organic nitrogen in two-stage trickling filter process.

    Science.gov (United States)

    Simsek, Halis; Kasi, Murthy; Wadhawan, Tanush; Bye, Christopher; Blonigen, Mark; Khan, Eakalak

    2012-10-15

    Dissolved organic nitrogen (DON) represents a significant portion of nitrogen in the final effluent of wastewater treatment plants (WWTPs). The biodegradable portion of DON (BDON) can support algal growth and/or consume dissolved oxygen in receiving waters. The fate of DON and BDON has not been studied for trickling filter WWTPs. DON and BDON data were collected along the treatment train of a WWTP with a two-stage trickling filter process. DON concentrations in the influent and effluent were 27% and 14% of total dissolved nitrogen (TDN), respectively. The plant removed about 62% and 72% of the influent DON and BDON, respectively, mainly in the trickling filters. The final effluent BDON values averaged 1.8 mg/L. BDON was found to be between 51% and 69% of the DON in raw wastewater and after various treatment units. The fate of DON and BDON through the two-stage trickling filter treatment plant was modeled. The BioWin v3.1 model was successfully applied to simulate ammonia, nitrite, nitrate, TDN, DON and BDON concentrations along the treatment train. The maximum growth rates for ammonia oxidizing bacteria (AOB) and nitrite oxidizing bacteria, and the AOB half saturation constant, influenced the ammonia and nitrate output results. Hydrolysis and ammonification rates influenced all of the nitrogen species in the model output, including BDON. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Martínez-Jerónimo, Fernando; Flores-Ortíz, Cesar Mateo

    2017-11-20

    A biomass production process including two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalgae Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, experiments using shake flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea+vitamins (U+V) at 30°C. The first stage of the TSHP process was done in a 6 L bioreactor, and the inductions in a 3 L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved the maximal biomass concentration, increasing from 7.22 g/L to 17.98 g/L with an increase in initial glucose concentration from 10.6 g/L to 30.3 g/L. However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Yx/s), possibly due to substrate inhibition. After 24 h of photoinduction, the lutein content in S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and the lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.

  15. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

    Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs. The ratio n1/n2 should be about 8.
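    As a point of reference for the figure of merit optimized above (a standard textbook definition, not an equation quoted from the paper), the exergetic (second-law) efficiency of a cooler is usually written as the coefficient of performance relative to the Carnot limit:

```latex
% Standard second-law (exergetic) efficiency of a refrigerator/cooler:
% Q_c is the cooling power, P_el the electrical input, T_c and T_h the cold- and hot-side temperatures.
\eta_{ex} \;=\; \frac{\mathrm{COP}}{\mathrm{COP}_{\mathrm{Carnot}}}
          \;=\; \frac{\dot{Q}_c/P_{el}}{T_c/(T_h-T_c)}
          \;=\; \frac{\dot{Q}_c\,(T_h-T_c)}{P_{el}\,T_c}
```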

  16. Hydrodeoxygenation of oils from cellulose in single and two-stage hydropyrolysis

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, J.D.; Snape, C.E. [Strathclyde Univ., Glasgow (United Kingdom); Luengo, C.A. [Universidade Estadual de Campinas, SP (Brazil). Dept. de Fisica Aplicada

    1996-09-01

    To investigate the removal of oxygen (hydrodeoxygenation) during the hydropyrolysis of cellulose, single and two-stage experiments on pure cellulose have been carried out using hydrogen pressures up to 10 MPa and temperatures over the range 300-520°C. Carbon, oxygen and aromaticity balances have been determined from the product yields and compositions. For the two-stage tests, the primary oils were passed through a bed of commercial Ni/Mo γ-alumina-supported catalyst (Criterion 424, presulphided) at 400°C. Raising the hydrogen pressure from atmospheric to 10 MPa increased the carbon conversion by 10 mole %, which was roughly equally divided between the oil and hydrocarbon gases. The oxygen content of the primary oil was reduced by over 10% to below 20% w/w. The addition of a dispersed iron sulphide catalyst further increased the oil yield at 10 MPa and reduced the oxygen content of the oil by a further 10%. The effect of hydrogen pressure on oil yields was most pronounced at low flow rates, where it is beneficial in helping to overcome diffusional resistances. Unlike the dispersed iron sulphide in the first stage, the use of the Ni-Mo catalyst in the second stage reduced both the oxygen content and aromaticity of the oils. (Author)

  17. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without LEU support. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of the ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not sent directly to the ADS, but are partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option for transuranic (TRU) elements.

  18. Many-Objective Particle Swarm Optimization Using Two-Stage Strategy and Parallel Cell Coordinate System.

    Science.gov (United States)

    Hu, Wang; Yen, Gary G; Luo, Guangchun

    2017-06-01

    It is a daunting challenge to balance the convergence and diversity of an approximate Pareto front in a many-objective optimization evolutionary algorithm. A novel algorithm, named many-objective particle swarm optimization with the two-stage strategy and parallel cell coordinate system (PCCS), is proposed in this paper to improve the comprehensive performance in terms of the convergence and diversity. In the proposed two-stage strategy, the convergence and diversity are separately emphasized at different stages by a single-objective optimizer and a many-objective optimizer, respectively. A PCCS is exploited to manage the diversity, such as maintaining a diverse archive, identifying the dominance resistant solutions, and selecting the diversified solutions. In addition, a leader group is used for selecting the global best solutions to balance the exploitation and exploration of a population. The experimental results illustrate that the proposed algorithm outperforms six chosen state-of-the-art designs in terms of the inverted generational distance and hypervolume over the DTLZ test suite.
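    To make the two-stage idea concrete, the sketch below runs a minimal particle swarm on a toy bi-objective problem: the first phase drives convergence with a single aggregated objective, and the second phase switches to Pareto-dominance updates with a bounded non-dominated archive supplying the global guides. The PCCS diversity management, leader group and DTLZ benchmarks of the paper are not reproduced; everything here (problem, parameters, archive truncation) is an illustrative assumption.

```python
# Illustrative sketch only: a minimal two-phase particle swarm on a toy bi-objective
# problem. Phase 1 pushes convergence with an aggregated objective; phase 2 keeps a
# bounded archive of non-dominated solutions for diversity. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
DIM, N, ITERS, SWITCH, ARCHIVE_MAX = 10, 30, 200, 100, 50

def objectives(x):
    # Toy convex bi-objective: trade-off on the first coordinate plus a distance term
    g = np.sum(x[1:] ** 2)
    return np.array([x[0] ** 2 + g, (x[0] - 2.0) ** 2 + g])

def dominates(a, b):
    return np.all(a <= b) and np.any(a < b)

pos = rng.uniform(-2.0, 4.0, size=(N, DIM))
vel = np.zeros((N, DIM))
pbest = pos.copy()
pbest_f = np.array([objectives(p) for p in pbest])
archive = []   # list of (position, objective vector)

for it in range(ITERS):
    for i in range(N):
        f = objectives(pos[i])
        if it < SWITCH:
            # Stage 1: single-objective surrogate (sum of objectives) for convergence
            if f.sum() < pbest_f[i].sum():
                pbest[i], pbest_f[i] = pos[i].copy(), f
        else:
            # Stage 2: Pareto-dominance update for diversity
            if dominates(f, pbest_f[i]):
                pbest[i], pbest_f[i] = pos[i].copy(), f
        # Maintain a non-dominated archive (crudely truncated when too large)
        if not any(dominates(af, f) for _, af in archive):
            archive = [(ap, af) for ap, af in archive if not dominates(f, af)]
            archive.append((pos[i].copy(), f))
            if len(archive) > ARCHIVE_MAX:
                archive.pop(rng.integers(len(archive)))
    # Global guide: best aggregated pbest in stage 1, random archive member in stage 2
    if it < SWITCH:
        gbest = pbest[np.argmin(pbest_f.sum(axis=1))]
    else:
        gbest = archive[rng.integers(len(archive))][0]
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    vel = 0.6 * vel + 1.6 * r1 * (pbest - pos) + 1.6 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -2.0, 4.0)

print(f"{len(archive)} non-dominated solutions approximating the Pareto front")
```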

  19. Two stage, low temperature, catalyzed fluidized bed incineration with in situ neutralization for radioactive mixed wastes

    International Nuclear Information System (INIS)

    Wade, J.F.; Williams, P.M.

    1995-01-01

    A two-stage, low-temperature, catalyzed fluidized bed incineration process is proving successful at incinerating hazardous wastes containing nuclear material. The process operates at 550°C and 650°C in its two stages. Acid gas neutralization takes place in situ using sodium carbonate as a sorbent in the first-stage bed. The feed material to the incinerator is hazardous waste, as defined by the Resource Conservation and Recovery Act, mixed with radioactive materials. The radioactive materials are plutonium, uranium, and americium that are byproducts of nuclear weapons production. Despite its low-temperature operation, this system successfully destroyed polychlorinated biphenyls at a 99.99992% destruction and removal efficiency. Radionuclides and volatile heavy metals leave the fluidized beds and enter the air pollution control system in minimal amounts. Recently collected modeling and experimental data show the process minimizes dioxin and furan production. The report also discusses air pollution, ash solidification, and other data collected from pilot- and demonstration-scale testing. The testing took place at the Rocky Flats Environmental Technology Site, a US Department of Energy facility, in the 1970s, 1980s, and 1990s.

  20. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes as a result of minimizing transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application of linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations which minimizes the total transportation cost. From the practical point of view, transportation problems may differ from the classical form. They may involve one or more objective functions, one or more transportation stages, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for a milling company. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers, the warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems and to obtain the set of non-dominated extreme points and the efficient solutions associated with each of them, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the fruitful application of methods for treating transportation problems, the theory of duality in linear programming, and methods for solving bi-criteria linear programming problems.
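    To make the model class concrete (an illustrative sketch with made-up data, not the authors' algorithm), the snippet below builds a tiny bi-criteria two-stage transportation LP, with flows from sources to warehouses and from warehouses to destinations, and scalarizes the two cost criteria with a weighted sum; sweeping the weight traces candidate non-dominated points a decision maker could compare.

```python
# Hypothetical data and weighted-sum scalarisation only: a minimal sketch of a
# bi-criteria two-stage transportation LP (sources -> warehouses -> destinations).
import numpy as np
from scipy.optimize import linprog

supply = np.array([60.0, 50.0])          # sources
capacity = np.array([70.0, 70.0])        # warehouses
demand = np.array([40.0, 55.0])          # destinations

# Two cost criteria (e.g. money and time) per arc: first stage then second stage
c1 = np.array([4, 6, 5, 3,  2, 7, 6, 3], dtype=float)
c2 = np.array([1, 2, 3, 1,  4, 1, 2, 5], dtype=float)

# Variable order: x11,x12,x21,x22 (source->warehouse), y11,y12,y21,y22 (warehouse->dest)
A_ub, b_ub = [], []
for i in range(2):                        # supply limits at sources
    row = np.zeros(8); row[2*i:2*i+2] = 1; A_ub.append(row); b_ub.append(supply[i])
for j in range(2):                        # warehouse capacities
    row = np.zeros(8); row[[j, 2+j]] = 1; A_ub.append(row); b_ub.append(capacity[j])

A_eq, b_eq = [], []
for j in range(2):                        # flow conservation at warehouse j
    row = np.zeros(8); row[[j, 2+j]] = 1; row[[4+2*j, 5+2*j]] = -1
    A_eq.append(row); b_eq.append(0.0)
for k in range(2):                        # demand satisfaction at destinations
    row = np.zeros(8); row[[4+k, 6+k]] = 1; A_eq.append(row); b_eq.append(demand[k])

for w in (0.0, 0.25, 0.5, 0.75, 1.0):
    res = linprog(w*c1 + (1-w)*c2, A_ub=np.array(A_ub), b_ub=b_ub,
                  A_eq=np.array(A_eq), b_eq=b_eq, bounds=[(0, None)]*8)
    print(f"w={w:.2f}  cost1={c1 @ res.x:7.1f}  cost2={c2 @ res.x:7.1f}")
```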

  1. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called an awareness cascade, during which individuals exhibit herd-like behavior because they are making decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate epidemic spreading with an awareness cascade, we propose a local awareness controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of the epidemic threshold βc as the local awareness ratio α approaches 0.5, which induces two-stage effects on the epidemic threshold and the final epidemic size. These findings indicate that an increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc≈0.5. The results give us a better understanding of why some epidemics cannot break out in reality and also provide a potential means of suppressing and controlling awareness cascading systems.

  2. A preventive maintenance policy based on dependent two-stage deterioration and external shocks

    International Nuclear Information System (INIS)

    Yang, Li; Ma, Xiaobing; Peng, Rui; Zhai, Qingqing; Zhao, Yu

    2017-01-01

    This paper proposes a preventive maintenance policy for a single-unit system whose failure has two competing and dependent causes, i.e., internal deterioration and sudden shocks. The internal failure process is divided into two stages, i.e., normal and defective. Shocks arrive according to a non-homogeneous Poisson process (NHPP), and each shock leads to the immediate failure of the system. The occurrence rate of a shock is affected by the state of the system. Both an age-based replacement and a finite number of periodic inspections are scheduled simultaneously to deal with the competing failures. The objective of this study is to determine the optimal preventive replacement interval, inspection interval and number of inspections such that the expected cost per unit time is minimized. A case study on oil pipeline maintenance is presented to illustrate the maintenance policy. - Highlights: • A maintenance model based on two-stage deterioration and sudden shocks is developed. • The impact of the internal system state on the external shock process is studied. • A new preventive maintenance strategy combining age-based replacements and periodic inspections is proposed. • Postponed replacement of a defective system is provided by restricting the number of inspections.
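    A rough feel for how such a policy is evaluated can be obtained with a renewal-reward Monte Carlo sketch. The version below is deliberately simplified and is not the paper's model: sojourn times in the normal and defective stages are exponential, the shock process is reduced to constant state-dependent Poisson rates instead of an NHPP, and all cost figures are invented.

```python
# Renewal-reward Monte Carlo sketch of an inspection / age-replacement policy under
# two-stage (delay-time) deterioration with state-dependent shocks. Simplifying
# assumptions (NOT the paper's model): exponential stage sojourn times, constant
# shock rates per state instead of an NHPP, and invented cost figures.
import numpy as np

rng = np.random.default_rng(1)

def cost_rate(tau, n_insp, T, n_cycles=100_000,
              lam_normal=1/10, lam_defect=1/3,    # defect-arrival / delay-time rates
              nu_normal=1/50, nu_defect=1/15,     # shock rates in normal / defective state
              c_i=0.1, c_p=1.0, c_f=5.0):         # inspection, preventive, failure costs
    insp_times = tau * np.arange(1, n_insp + 1)
    total_cost = total_time = 0.0
    for _ in range(n_cycles):
        x = rng.exponential(1 / lam_normal)        # time at which the defect appears
        y = rng.exponential(1 / lam_defect)        # remaining life once defective
        s1 = rng.exponential(1 / nu_normal)        # candidate shock while normal
        s2 = x + rng.exponential(1 / nu_defect)    # candidate shock after the defect
        shock = s1 if s1 < x else s2
        fail = min(x + y, shock)                   # first failure of either kind
        detect = next((t for t in insp_times if t > x), np.inf)  # inspection finds defect
        end = min(fail, detect, T)                 # cycle ends at failure, detection or age T
        n_done = np.searchsorted(insp_times, end, side="right")  # inspections performed
        total_cost += c_i * n_done + (c_f if end == fail else c_p)
        total_time += end
    return total_cost / total_time

print(f"estimated long-run cost per unit time: {cost_rate(tau=2.0, n_insp=4, T=12.0):.4f}")
```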

  3. Two-stage supercharging of a passenger car diesel engine; Zweistufige Aufladung eines Pkw-Dieselmotors

    Energy Technology Data Exchange (ETDEWEB)

    Wittmer, A.; Albrecht, P.; Becker, B.; Vogt, G.; Fischer, R. [Erphi Elektronik GmbH, Holzkirchen (Germany)

    2004-07-01

    Two-stage supercharging of internal combustion engines, with specific power outputs beyond 70 kW/l, opens up further options for reducing engine displacement. A low-pressure and a high-pressure supercharger are connected in series with bypass lines. The control strategy required for a controlled transition from one stage to the next is described in this contribution using a model of the exhaust back-pressure: the control element is actuated so that the desired pressure upstream of the turbines is established. Steady-state results from two engines demonstrated the performance potential of a two-stage supercharged diesel engine with common-rail injection, and dynamic driving tests confirmed fast boost-pressure build-up even from low engine speeds, together with good transition behaviour from the high-pressure to the low-pressure stage. With the control method presented here, the two-stage supercharged diesel engine thus offers optimum preconditions for downsizing, provided that the driving performance is not affected. (orig.)

  4. A low-voltage sense amplifier with two-stage operational amplifier clamping for flash memory

    Science.gov (United States)

    Guo, Jiarong

    2017-04-01

    A low-voltage sense amplifier with a reference current generator utilizing a two-stage operational amplifier clamp structure for flash memory is presented in this paper, capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain pole of the reference cell, is used to generate the reference current, which avoids the threshold limitation caused by the current-mirror transistor in the traditional sense amplifier. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted, which not only improves the sense window, enhancing read precision, but also saves power consumption. The sense amplifier was implemented in a flash memory realized in 90 nm flash technology. Experimental results show an access time of 14.7 ns with a power supply of 1.2 V at the slow corner and 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).

  5. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    Science.gov (United States)

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  6. Two-stage agglomeration of fine-grained herbal nettle waste

    Science.gov (United States)

    Obidziński, Sławomir; Joka, Magdalena; Fijoł, Olga

    2017-10-01

    This paper compares the densification work necessary for the pressure agglomeration of fine-grained dusty nettle waste, with the densification work involved in two-stage agglomeration of the same material. In the first stage, the material was pre-densified through coating with a binder material in the form of a 5% potato starch solution, and then subjected to pressure agglomeration. A number of tests were conducted to determine the effect of the moisture content in the nettle waste (15, 18 and 21%), as well as the process temperature (50, 70, 90°C) on the values of densification work and the density of the obtained pellets. For pre-densified pellets from a mixture of nettle waste and a starch solution, the conducted tests determined the effect of pellet particle size (1, 2, and 3 mm) and the process temperature (50, 70, 90°C) on the same values. On the basis of the tests, we concluded that the introduction of a binder material and the use of two-stage agglomeration in nettle waste densification resulted in increased densification work (as compared to the densification of nettle waste alone) and increased pellet density.

  7. Two-Stage Classification Approach for Human Detection in Camera Video in Bulk Ports

    Directory of Open Access Journals (Sweden)

    Mi Chao

    2015-09-01

    Full Text Available With the development of automation in ports, video surveillance systems with automated human detection have begun to be applied in open-air handling operation areas for safety and security. The accuracy of traditional human detection based on video cameras is not high enough to meet the requirements of operation surveillance. One of the key reasons is that Histograms of Oriented Gradients (HOG) features of the human body differ greatly between front-and-back standing (F&B) and side standing (Side) postures. Therefore, when HOG features extracted from samples of different human postures are used directly, the final classifier training captures only a few specific features that contribute to classification, which are insufficient to support effective classification. This paper proposes a two-stage classification method to improve the accuracy of human detection. In the first stage, a preprocessing classification mainly divides images into possible F&B human bodies and non-F&B human bodies; the latter are then passed to the second-stage classification, which distinguishes side-standing humans from non-humans. The experimental results in Tianjin port show that the two-stage classifier can obviously improve the classification accuracy of human detection.
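    The cascade logic described above can be sketched in a few lines. The snippet below only illustrates the two-stage idea with generic tools (skimage HOG features and linear SVMs); the actual descriptors, classifiers and training data of the paper are not reproduced, and details such as a fixed patch size and the labels {"fb", "side", "none"} are assumptions.

```python
# Illustrative two-stage cascade only, not the authors' exact classifiers or data.
# Stage 1 separates "front-and-back (F&B) human" from everything else; stage 2 then
# separates "side-standing human" from "non-human" among the stage-1 rejects.
# All patches are assumed to be grayscale arrays of one common size.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_features(patches):
    return np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for p in patches])

def train_two_stage(patches, labels):
    X = hog_features(patches)
    y = np.asarray(labels)
    clf_fb = LinearSVC().fit(X, (y == "fb").astype(int))                 # stage 1: F&B vs rest
    rest = y != "fb"
    clf_side = LinearSVC().fit(X[rest], (y[rest] == "side").astype(int))  # stage 2: side vs non-human
    return clf_fb, clf_side

def predict_two_stage(clf_fb, clf_side, patches):
    X = hog_features(patches)
    is_fb = clf_fb.predict(X).astype(bool)
    out = np.full(len(X), "none", dtype=object)
    out[is_fb] = "fb"
    if (~is_fb).any():
        side = clf_side.predict(X[~is_fb]).astype(bool)
        out[np.flatnonzero(~is_fb)[side]] = "side"
    return out
```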

  8. A two-stage heating scheme for heat assisted magnetic recording

    Science.gov (United States)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in.2 for the next generation of magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the NFT heating process into two individual heating stages, from an optical waveguide and a NFT, respectively. In the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface to heat it up to a peak temperature somewhat lower than the Curie temperature of the magnetic material. Then, the NFT works as the second heating stage to further heat a smaller area inside the waveguide-heated area to reach the Curie point. The energy applied to the NFT in the second heating stage is reduced compared with a typical single-stage NFT heating system. With this reduced thermal load from the two-stage heating scheme, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  9. Stochastic kinetics

    International Nuclear Information System (INIS)

    Colombino, A.; Mosiello, R.; Norelli, F.; Jorio, V.M.; Pacilio, N.

    1975-01-01

    Nuclear system kinetics is formulated according to a stochastic approach. The detailed probability balance equations are written for the probability of finding the mixed population of neutrons and detected neutrons, i.e. detectrons, at a given level at a given instant of time. The equations are integrated in search of a probability profile: a series of cases is analyzed through a progressive criterion that takes into account an increasing number of physical processes within the chosen model. The most important contribution is that the solutions interpret analytically experimental conditions of equilibrium (noise analysis) and non-equilibrium (pulsed neutron measurements, source drop technique, start-up procedures).

  10. Stochastic Jeux

    Directory of Open Access Journals (Sweden)

    Romanu Ekaterini

    2006-01-01

    Full Text Available This article shows the similarities between Claude Debussy's and Iannis Xenakis' philosophy of music and work, in particular the former's Jeux and the latter's Metastasis and the stochastic works succeeding it, which seem to proceed in parallel (with no personal contact) with what is perceived as the evolution of 20th-century Western music. Those two composers observed the dominant (German) tradition as outsiders, and negated some of its elements considered constant or natural by "traditional" innovators (i.e. serialists): the linearity of musical texture, its form and rhythm.

  11. A quark interpretation of the combinatorial hierarchy

    International Nuclear Information System (INIS)

    Enqvist, Kari.

    1979-01-01

    We propose a physical interpretation of the second level of the combinatorial hierarchy in terms of three quarks, three antiquarks and the vacuum. This interpretation allows us to introduce a new quantum number, which measures the electromagnetic mass splitting of the quarks. We extend our argument by analogy to baryons, and find some SU(3) and some new mass formulas for baryons. The generalization of our approach to other hierarchy levels is discussed. We also present an empirical mass formula for baryons, which seems to be loosely connected with the combinatorial hierarchy. (author)

  12. Combinatorial designs a tribute to Haim Hanani

    CERN Document Server

    Hartman, A

    1989-01-01

    Haim Hanani pioneered the techniques for constructing designs and the theory of pairwise balanced designs, leading directly to Wilson's Existence Theorem. He also led the way in the study of resolvable designs, covering and packing problems, latin squares, 3-designs and other combinatorial configurations. The Hanani volume is a collection of research and survey papers at the forefront of research in combinatorial design theory, including Professor Hanani's own latest work on Balanced Incomplete Block Designs. Other areas covered include Steiner systems, finite geometries, quasigroups, and t-designs.

  13. Dynamic combinatorial chemistry with diselenides and disulfides in water

    DEFF Research Database (Denmark)

    Rasmussen, Brian; Sørensen, Anne; Gotfredsen, Henrik

    2014-01-01

    Diselenide exchange is introduced as a reversible reaction in dynamic combinatorial chemistry in water. At neutral pH, diselenides are found to mix with disulfides and form dynamic combinatorial libraries of diselenides, disulfides, and selenenylsulfides.

  14. Distribution of extracellular potassium and electrophysiologic changes during two-stage coronary ligation in the isolated, perfused canine heart

    NARCIS (Netherlands)

    Coronel, R.; Fiolet, J. W.; Wilms-Schopman, J. G.; Opthof, T.; Schaapherder, A. F.; Janse, M. J.

    1989-01-01

    We studied the relation between [K+]o and the electrophysiologic changes during a "Harris two-stage ligation," which is an occlusion of a coronary artery, preceded by a 30-minute period of 50% reduction of flow through the artery. This two-stage ligation has been reported to be antiarrhythmic. Local

  15. Performance of an iterative two-stage bayesian technique for population pharmacokinetic analysis of rich data sets

    NARCIS (Netherlands)

    Proost, Johannes H.; Eleveld, Douglas J.

    2006-01-01

    Purpose. To test the suitability of an Iterative Two-Stage Bayesian (ITSB) technique for population pharmacokinetic analysis of rich data sets, and to compare ITSB with Standard Two-Stage (STS) analysis and nonlinear Mixed Effect Modeling (MEM). Materials and Methods. Data from a clinical study with

  16. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  17. A New Approach for Proving or Generating Combinatorial Identities

    Science.gov (United States)

    Gonzalez, Luis

    2010-01-01

    A new method for proving, in an immediate way, many combinatorial identities is presented. The method is based on a simple recursive combinatorial formula involving n + 1 arbitrary real parameters. Moreover, this formula enables one not only to prove, but also to generate many different combinatorial identities (without being required to know them "a priori").

  18. Assessing for Structural Understanding in Children's Combinatorial Problem Solving.

    Science.gov (United States)

    English, Lyn

    1999-01-01

    Assesses children's structural understanding of combinatorial problems when presented in a variety of task situations. Provides an explanatory model of students' combinatorial understandings that informs teaching and assessment. Addresses several components of children's structural understanding of elementary combinatorial problems. (Contains 50…

  19. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas

  20. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  1. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
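    The selection logic common to the three records above can be illustrated numerically. In the sketch below every ingredient is a stand-in assumption: the "preliminary relevance metric" is correlation on heavily downsampled images, the "refined metric" is full-resolution correlation standing in for a similarity computed after full-fledged registration, and the subset sizes are arbitrary.

```python
# Minimal numerical sketch of two-stage atlas subset selection (all metrics, sizes
# and data are illustrative assumptions, not the method's actual components).
import numpy as np

def correlation(a, b):
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def two_stage_select(target, atlases, augmented_size=10, fusion_size=3, stride=8):
    # Stage 1: cheap metric on downsampled data -> augmented subset
    cheap = [correlation(target[::stride, ::stride], a[::stride, ::stride]) for a in atlases]
    augmented = np.argsort(cheap)[::-1][:augmented_size]
    # Stage 2: expensive metric only on the augmented subset -> final fusion set
    refined = {i: correlation(target, atlases[i]) for i in augmented}
    fusion = sorted(refined, key=refined.get, reverse=True)[:fusion_size]
    return list(augmented), fusion

# Toy usage with random "images"
rng = np.random.default_rng(0)
target = rng.random((128, 128))
atlases = [target + rng.normal(0, s, target.shape) for s in np.linspace(0.1, 2.0, 40)]
aug, fusion = two_stage_select(target, atlases)
print("augmented subset:", aug, "\nfusion set:", fusion)
```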

  2. Two stages and three components of the postural preparation to action.

    Science.gov (United States)

    Krishnan, Vennila; Aruin, Alexander S; Latash, Mark L

    2011-07-01

    Previous studies of postural preparation to action/perturbation have primarily focused on anticipatory postural adjustments (APAs), the changes in muscle activation levels resulting in the production of net forces and moments of force. We hypothesized that postural preparation to action consists of two stages: (1) early postural adjustments (EPAs), seen a few hundred ms prior to an expected external perturbation, and (2) APAs, seen about 100 ms prior to the perturbation. We also hypothesized that each stage consists of three components: anticipatory synergy adjustments, seen as changes in the covariation of the magnitudes of commands to muscle groups (M-modes); changes in the trial-averaged levels of muscle activation; and mechanical effects such as shifts of the center of pressure. Nine healthy participants were subjected to external perturbations created by a swinging pendulum while standing in a semi-squatting posture. Electrical activity of twelve trunk and leg muscles and displacements of the center of pressure were recorded and analyzed. Principal component analysis was used to identify four M-modes within the space of muscle activations using indices of integrated muscle activation. This analysis was performed twice, over two phases: 400-700 ms prior to the perturbation and over the 200 ms just prior to the perturbation. Similar robust results were obtained using the data from both phases. An index of a multi-M-mode synergy stabilizing the center of pressure displacement was computed using the framework of the uncontrolled manifold hypothesis. The results showed high synergy indices during quiet stance. Each of the two stages started with a drop in the synergy index followed by a change in the trial-averaged activation levels of postural muscles. There was a very long electromechanical delay during the early postural adjustments and a much shorter delay during the APAs. Overall, the results support our main hypothesis on the two stages and three components

  3. Evaluation of a modified two-stage inferior alveolar nerve block technique: A preliminary investigation

    Directory of Open Access Journals (Sweden)

    Ashwin Rao

    2017-01-01

    Full Text Available Introduction: The two-stage technique of inferior alveolar nerve block (IANB) administration does not address the pain associated with "needle insertion" and "local anesthetic solution deposition" in the "first stage" of the injection. This study evaluated a "modified two-stage technique" with respect to the reaction of children during "needle insertion" and "local anesthetic solution deposition" during the "first stage" and compared it to the "first phase" of the IANB administered with the standard one-stage technique. Materials and Methods: This was a parallel, single-blinded comparative study. A total of 34 children (between 6 and 10 years of age) were randomly divided into two groups to receive an IANB either through the modified two-stage technique (MTST) (Group A; 15 children) or the standard one-stage technique (SOST) (Group B; 19 children). The evaluation was done using the Face Legs Activity Cry Consolability (FLACC) scale, which is an objective scale based on the expressions of the child. The obtained data were analyzed using Fisher's exact test, with the P value set at <0.05 as the level of significance. Results: 73.7% of children in Group B indicated moderate pain during the "first phase" of SOST, whereas no children indicated such in the "first stage" of Group A. Group A had 33.3% of children who scored "0", indicating relaxed/comfortable children, compared to 0% in Group B. In Group A, 66.7% of children scored between 1-3, indicating mild discomfort, compared to 26.3% in Group B. The difference in the scores between the two groups in each category (relaxed/comfortable, mild discomfort, moderate pain) was highly significant (P < 0.001). Conclusion: The reaction of children in Group A during "needle insertion" and "local anesthetic solution deposition" in the "first stage" of MTST was significantly lower than that of Group B during the "first phase" of the SOST.

  4. Product prioritization in a two-stage food production system with intermediate storage

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter

    2007-01-01

    In the food-processing industry, usually a limited number of storage tanks for intermediate storage is available, which are used for different products. The market sometimes requires extremely short lead times for some products, leading to prioritization of these products, partly through the dedication of a storage tank. This type of situation has hardly been investigated, although planners struggle with it in practice. This paper aims at investigating the fundamental effect of prioritization and dedicated storage in a two-stage production system, for various product mixes. We show the performance improvements for the prioritized product, as well as the negative effects for the other products. We also show how the effect decreases with more storage tanks, and increases with more products.

  5. Hugoniot measurements in vanadium using the LLNL two-stage light-gas gun

    International Nuclear Information System (INIS)

    Gathers, G.R.; Mitchell, A.C.; Holmes, N.C.

    1983-01-01

    Hugoniot measurements on vanadium have been made using the LLNL two-stage light-gas gun. The direct collision method, with electrical pins and a tantalum flyer accelerated to 6.28 km/s, was used. Al'tshuler et al. have reported Hugoniot measurements in vanadium using explosives and the impedance match method. They reported a kink in the Us-Up relationship at 183 GPa, and attribute it to electronic transitions. The upper portion of their curve is based on a single point at 339 GPa. The present work was performed to further investigate the equation of state in the high-pressure range.

  6. Two-stage multilevel en bloc spondylectomy with resection and replacement of the aorta.

    Science.gov (United States)

    Gösling, Thomas; Pichlmaier, Maximilian A; Länger, Florian; Krettek, Christian; Hüfner, Tobias

    2013-05-01

    We report a case of multilevel spondylectomy in which resection and replacement of the adjacent aorta were done. Although spondylectomy is nowadays an established technique, no report on a combined aortic resection and replacement has been reported so far. The case of a 43-year-old man with a primary chondrosarcoma of the thoracic spine is presented. The local pathology necessitated resection of the aorta. We did a two-stage procedure with resection and replacement of the aorta using a heart-lung machine followed by secondary tumor resection and spinal reconstruction. The procedure was successful. A tumor-free margin was achieved. The patient is free of disease 48 months after surgery. En bloc spondylectomy in combination with aortic resection is feasible and might expand the possibility of producing tumor-free margins in special situations.

  7. Integrated Circuit Design of 3 Electrode Sensing System Using Two-Stage Operational Amplifier

    Science.gov (United States)

    Rani, S.; Abdullah, W. F. H.; Zain, Z. M.; N, Aqmar N. Z.

    2018-03-01

    This paper presents the design of a two-stage operational amplifier (op amp) for the readout circuits of a 3-electrode sensing system. The design has been simulated using 0.13 μm CMOS technology from Silterra (Malaysia) with Mentor Graphics tools. The purpose of this project is mainly to design a miniature interfacing circuit to detect the redox reaction in the form of current using standard analog modules. The potentiostat consists of several op amps combined in order to analyse the signal coming from the 3-electrode sensing system. This op amp design will be used in the potentiostat circuit and to analyse the functionality of each module of the system.
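    For context only, the textbook small-signal figures of merit for a generic Miller-compensated two-stage CMOS op amp are reproduced below; these are standard expressions (Allen/Holberg-style transistor numbering), not values or equations taken from this particular design.

```latex
% Generic Miller-compensated two-stage CMOS op amp (textbook expressions):
% DC gain = product of the two stage gains; unity-gain bandwidth set by the
% first-stage transconductance g_m1 and the compensation capacitor C_c.
A_{v0} \;\approx\; g_{m1}\,(r_{o2}\parallel r_{o4})\;\cdot\; g_{m6}\,(r_{o6}\parallel r_{o7}),
\qquad
\mathrm{GBW} \;\approx\; \frac{g_{m1}}{2\pi\,C_c}
```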

  8. Design of a Two-stage High-capacity Stirling Cryocooler Operating below 30K

    Science.gov (United States)

    Wang, Xiaotao; Dai, Wei; Zhu, Jian; Chen, Shuai; Li, Haibing; Luo, Ercang

    A high-capacity cryocooler working below 30 K can find many applications, such as superconducting motors, superconducting cables and cryopumps. Compared to the GM cryocooler, the Stirling cryocooler can achieve higher efficiency and a more compact structure. Because of these obvious advantages, we have designed a two-stage free-piston Stirling cryocooler system, which is driven by a moving-magnet linear compressor with an operating frequency of 40 Hz and a maximum input electric power of 5 kW. The first stage of the cryocooler is designed to operate at liquid nitrogen temperature and provide a cooling power of 100 W. The second stage is expected to simultaneously provide a cooling power of 50 W below a temperature of 30 K. In order to achieve the best system efficiency, a numerical model based on the thermoacoustic model was developed to optimize the system operating and structure parameters.

  9. Two-stage autotransplantation of human submandibular gland: a novel approach to treat postradiogenic xerostomia.

    Science.gov (United States)

    Hagen, Rudolf; Scheich, Matthias; Kleinsasser, Norbert; Burghartz, Marc

    2016-08-01

    Xerostomia is a persistent side effect of radiotherapy (RT), which severely reduces the quality of life of the patients affected. Besides drug treatment and new irradiation strategies, surgical procedures aim for tissue protection of the submandibular gland. Using a new surgical approach, the submandibular gland was autotransplanted in 6 patients to the patient's forearm for the period of RT and reimplanted into the floor of the mouth 2-3 months after completion of RT. Saxon's test was performed during different time points to evaluate patient's saliva production. Furthermore patients had to answer EORTC QLQ-HN35 questionnaire and visual analog scale. Following this two-stage autotransplantation, xerostomia in the patients was markedly reduced due to improved saliva production of the reimplanted gland. Whether this promising novel approach is a reliable treatment option for RT patients in general should be evaluated in further studies.

  10. Compact high-flux two-stage solar collectors based on tailored edge-ray concentrators

    Science.gov (United States)

    Friedman, Robert P.; Gordon, Jeffrey M.; Ries, Harald

    1995-08-01

    Using the recently invented tailored edge-ray concentrator (TERC) approach for the design of compact two-stage high-flux solar collectors (a focusing primary reflector and a nonimaging TERC secondary reflector), we present: 1) a new primary reflector shape based on the TERC approach and a secondary TERC tailored to its particular flux map, such that more compact concentrators emerge at flux concentration levels in excess of 90% of the thermodynamic limit; and 2) calculations and ray-trace simulation results which demonstrate that V-cone approximations to a wide variety of TERCs attain the concentration of the TERC to within a few percent, and hence represent practical secondary concentrators that may be superior to corresponding compound parabolic concentrator or trumpet secondaries.

  11. On the optimal use of a slow server in two-stage queueing systems

    Science.gov (United States)

    Papachristos, Ioannis; Pandelis, Dimitrios G.

    2017-07-01

    We consider two-stage tandem queueing systems with a dedicated server in each queue and a slower flexible server that can attend both queues. We assume Poisson arrivals and exponential service times, and linear holding costs for jobs present in the system. We study the optimal dynamic assignment of servers to jobs assuming that two servers cannot collaborate to work on the same job and preemptions are not allowed. We formulate the problem as a Markov decision process and derive properties of the optimal allocation for the dedicated (fast) servers. Specifically, we show that the one downstream should not idle, and the same is true for the one upstream when holding costs are larger there. The optimal allocation of the slow server is investigated through extensive numerical experiments that lead to conjectures on the structure of the optimal policy.

  12. Sensorless Reserved Power Control Strategy for Two-Stage Grid-Connected Photovoltaic Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2016-01-01

    Due to the still increasing penetration level of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A reserved power control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the reserved power control for grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the irradiance measurements that have been used in conventional control schemes to estimate the available PV power are not required, thereby making it a sensorless solution. Simulations and experimental tests have been performed on a 3-kW two-stage single
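    The sensorless idea can be summarized in a few lines of control logic. The sketch below is a conceptual illustration only (the interval lengths, the 300 W reserve, and the use of an "infinite" reference to mean "let the MPPT act" are assumptions, not the paper's implementation): the controller periodically lets the MPPT run to refresh an estimate of the available power, and otherwise holds the output at that estimate minus the required reserve.

```python
# Conceptual sketch of sensorless power-reserve control (all names, periods and the
# 300 W reserve are assumptions for illustration, not the paper's implementation).
def reserved_power_reference(t, measured_power, state, delta_p_reserve=300.0,
                             mppt_period=60.0, mppt_window=5.0):
    """Return the active-power reference in W for time t in seconds.

    state is a dict carrying 'p_avail', the latest available-power estimate in W.
    """
    if (t % mppt_period) < mppt_window:
        # MPPT phase: the converter tracks the maximum power point, so the measured
        # output is itself the available power; refresh the estimate, do not curtail.
        state["p_avail"] = measured_power
        return float("inf")                       # interpreted as "no power limit"
    # CPG phase: hold the output below the estimated available power by the reserve.
    return max(0.0, state.get("p_avail", 0.0) - delta_p_reserve)

# Toy usage: two samples inside the MPPT window, two in the constant-power phase
state = {}
for t, p in [(0.0, 2500.0), (2.0, 2600.0), (10.0, 2300.0), (30.0, 2300.0)]:
    print(f"t={t:5.1f} s  P_ref={reserved_power_reference(t, p, state)} W")
```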

  13. Two-Stage Optimal Scheduling of Electric Vehicle Charging based on Transactive Control

    DEFF Research Database (Denmark)

    Liu, Zhaoxi; Wu, Qiuwei; Ma, Kang

    2018-01-01

    In this paper, a two-stage optimal charging scheme based on transactive control is proposed for the aggregator to manage day-ahead electricity procurement and real-time EV charging management in order to minimize its total operating cost. The day-ahead electricity procurement considers both the day-ahead energy cost and the expected real-time operation cost. In the real-time charging management, the cost of employing the charging flexibility from the EV owners is explicitly modelled. The aggregator uses a transactive market to manage the real-time charging demand to provide the regulating power. A model predictive control (MPC) based method is proposed for the aggregator to clear the transactive market. The real-time charging decisions of the EVs are determined by the clearing of the proposed transactive market according to the real-time requests and preferences of the EV owners. As such, the aggregators

  14. Determination Bounds for Intermediate Products in a Two-Stage Network DEA

    Directory of Open Access Journals (Sweden)

    Hadi Bagherzadeh Valami

    2016-03-01

    Full Text Available The internal structure of the decision making unit (DMU) is the key element in the extension of network DEA. In general, considering the internal performance evaluation of a system is a better criterion than the conventional DEA models, which are essentially based on the initial inputs and final outputs of the system. The internal performance of a system depends on the relations between the sub-DMUs and the intermediate products. Since the intermediate measures are consumed by some sub-DMUs and produced by others, the role of intermediate products in such systems is twofold, as both outputs and inputs; that is why they can be analyzed on the basis of conventional mathematical modeling. In this paper we introduce a new method for determining bounds for the intermediate products in a two-stage network DEA structure.

  15. Two-Stage Electric Vehicle Charging Coordination in Low Voltage Distribution Grids

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    Increased environmental awareness in recent years has encouraged rapid growth of renewable energy sources (RESs), especially solar PV and wind. One of the effective solutions to compensate for intermittencies in generation from the RESs is to enable consumer participation in demand response (DR). Being a sizable rated element, electric vehicles (EVs) can offer a great deal of demand flexibility in future intelligent grids. This paper first investigates and analyzes the driving pattern and charging requirements of EVs. Secondly, a two-stage charging algorithm, namely a local adaptive control encompassed by a central coordinative control, is proposed to realize the flexibility offered by EVs. The local control enables adaptive charging, whereas the central coordinative control prepares optimized charging schedules. Results from various scenarios show that the proposed algorithm enables significant

  16. An X-ray Experiment with Two-Stage Korean Sounding Rocket

    Directory of Open Access Journals (Sweden)

    Uk-Won Nam

    1998-12-01

    Full Text Available The test results of the X-ray observation system which has been developed at the Korea Astronomy Observatory over 3 years (1995-1997) are presented. The instrument, which is composed of detector and signal-processing parts, is designed for future observations of compact X-ray sources. The performance of the instrument was tested by mounting it on the two-stage Korean Sounding Rocket, which was launched from the Taean rocket flight center on June 11, 1998 at 10:00 KST. Telemetry data were received from individual parts of the instrument for 32 and 55.7 sec, respectively, after the launch of the rocket. In this paper, the results of the data analysis based on the telemetry data are reported, together with a discussion of the performance of the instrument.

  17. Mediastinal Bronchogenic Cyst With Acute Cardiac Dysfunction: Two-Stage Surgical Approach.

    Science.gov (United States)

    Smail, Hassiba; Baste, Jean Marc; Melki, Jean; Peillon, Christophe

    2015-10-01

    We describe a two-stage surgical approach in a patient with cardiac dysfunction and hemodynamic compromise resulting from a massive and compressive mediastinal bronchogenic cyst. To drain this cyst, video-assisted mediastinoscopy was performed as an emergency procedure, which immediately improved the patient's cardiac function. Five days later and under video thoracoscopy, resection of the cyst margins was impossible because the cyst was tightly adherent to the left atrium. We performed deroofing of this cyst through a right thoracotomy. The patient had an uncomplicated postoperative recovery, and no recurrence was observed at the long-term follow-up visit. Copyright © 2015 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Two-stage triolein breath test differentiates pancreatic insufficiency from other causes of malabsorption

    International Nuclear Information System (INIS)

    Goff, J.S.

    1982-01-01

    In 24 patients with malabsorption, [14C]triolein breath tests were conducted before and together with the administration of pancreatic enzymes (Pancrease, Johnson and Johnson, Skillman, N.J.). Eleven patients with pancreatic insufficiency had a significant rise in peak percent dose per hour of 14CO2 excretion after Pancrease, whereas 13 patients with other causes of malabsorption had no increase in 14CO2 excretion (2.61 ± 0.96 vs. 0.15 ± 0.45, p < 0.001). The two-stage [14C]triolein breath test appears to be an accurate and simple noninvasive test of fat malabsorption that differentiates steatorrhea secondary to pancreatic insufficiency from other causes of steatorrhea.

  19. Two-Stage Surgery for a Large Cervical Dumbbell Tumour in Neurofibromatosis 1: A Case Report

    Directory of Open Access Journals (Sweden)

    Mohd Ariff S

    2011-11-01

    Full Text Available Spinal neurofibromas occur sporadically and typically in association with neurofibromatosis 1. Patients afflicted with neurofibromatosis 1 usually present with involvement of several nerve roots. This report describes the case of a 14-year-old child with a large intraspinal but extradural dumbbell neurofibroma with paraspinal extension in the cervical region, extending from the C2 to C4 vertebrae. The lesions were readily detected by MR imaging and were successfully resected in a two-stage surgery. The time interval between the first and second surgery was one month. We provide a brief review of the literature regarding various surgical approaches, emphasising the utility of anterior and posterior approaches.

  20. Effect of a two-stage nursing assessment and intervention - a randomized intervention study

    DEFF Research Database (Denmark)

    Rosted, Elizabeth Emilie; Poulsen, Ingrid; Hendriksen, Carsten

    % of geriatric patients have complex and often unresolved caring needs. The objective was to examine the effect of a two-stage nursing assessment and intervention to address the patients' uncompensated problems just after discharge from the ED and one and six months after. Method: We conducted a prospective ... nursing assessment comprising a checklist of 10 physical, mental, medical and social items. The focus was on unresolved problems which require medical intervention, new or different home care services, or comprehensive geriatric assessment. Following this, the nurses made relevant referrals ... to the geriatric outpatient clinic, community health centre, primary physician or arrangements with next-of-kin. Findings: Primary endpoints will be presented as unplanned readmission to the ED, admission to a nursing home, and death. Secondary endpoints will be presented as physical function and depressive symptoms ...

  1. Two-stage SQUID systems and transducers development for MiniGRAIL

    International Nuclear Information System (INIS)

    Gottardi, L; Podt, M; Bassan, M; Flokstra, J; Karbalai-Sadegh, A; Minenkov, Y; Reinke, W; Shumack, A; Srinivas, S; Waard, A de; Frossati, G

    2004-01-01

    We present measurements on a two-stage SQUID system based on a dc-SQUID as the sensor and a DROS as the amplifier. We measured the intrinsic noise of the dc-SQUID at 4.2 K. A new dc-SQUID has been fabricated, specially designed to be used with the MiniGRAIL transducers. Cooling fins have been added in order to improve the cooling of the SQUID, and the design is optimized to achieve the quantum limit of the sensor SQUID at temperatures above 100 mK. In this paper we also report the effect of the deposition of a Nb film on the quality factor of a small-mass Al5056 resonator. Finally, the results of Q-factor measurements on a capacitive transducer for the current MiniGRAIL run are presented.

  2. A Two-Stage Diagnosis Framework for Wind Turbine Gearbox Condition Monitoring

    Directory of Open Access Journals (Sweden)

    Janet M. Twomey

    2013-01-01

    Full Text Available Advances in high-performance sensing technologies enable the development of wind turbine condition monitoring systems to diagnose and predict the system-wide effects of failure events. This paper presents a vibration-based two-stage fault detection framework for failure diagnosis of rotating components in wind turbines. The proposed framework integrates an analytical defect detection method and a graphical verification method to ensure diagnosis efficiency and accuracy. The efficacy of the proposed methodology is demonstrated in a case study with the gearbox condition monitoring Round Robin study dataset provided by the National Renewable Energy Laboratory (NREL). The developed methodology successfully identified five of the seven faults, with accurate severity levels and without producing any false alarm in the blind analysis. The case study results indicate that the developed fault detection framework is effective for analyzing gear and bearing faults in the wind turbine drive train system based upon its vibration characteristics.

  3. Development of advanced air-blown entrained-flow two-stage bituminous coal IGCC gasifier

    Directory of Open Access Journals (Sweden)

    Abaimov Nikolay A.

    2017-01-01

    Full Text Available Integrated gasification combined cycle (IGCC) technology has two main advantages: high efficiency and low levels of harmful emissions. The key element of an IGCC plant is the gasifier, which converts solid fuel into a combustible synthesis gas. One of the most promising gasifiers is the air-blown entrained-flow two-stage bituminous coal gasifier developed by Mitsubishi Heavy Industries (MHI). The most direct way to develop an advanced gasifier is to improve the commercial-scale 1700 t/d MHI gasifier using the computational fluid dynamics (CFD) method. The modernization of the commercial-scale 1700 t/d MHI gasifier is made by changing the regime parameters in order to improve its cold gas efficiency (CGE) and environmental performance, namely the H2/CO ratio. The first change is the supply of high-temperature (900°C) steam to the gasifier's second stage, and the second change is additional heating of the blast air to 900°C.

  4. A Two-Stage Foot Repair in a 55-Year-Old Man with Poliomyelitis.

    Science.gov (United States)

    Pollack, Daniel

    2018-01-01

    A 55-year-old man with poliomyelitis presented with a plantarflexed foot and painful ulceration of the sub-first metatarsophalangeal joint present for many years. A two-stage procedure was performed to bring the foot to 90°, perpendicular to the leg, and resolve the ulceration. The first stage corrected only soft-tissue components. It involved using a hydrosurgery system to debride and prepare the ulcer, a unilobed rotational skin plasty to close the ulcer, and a tendo Achillis lengthening to decrease forefoot pressure. The second stage corrected the osseous deformity with a dorsiflexory wedge osteotomy of the first metatarsal. The ulceration has remained closed since the procedures, with complete resolution of pain.

  5. The Sources of Efficiency of the Nigerian Banking Industry: A Two- Stage Approach

    Directory of Open Access Journals (Sweden)

    Frances Obafemi

    2013-11-01

    Full Text Available The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in either the pre- or post-liberalization era. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.

  6. A Two-stage DC-DC Converter for the Fuel Cell-Supercapacitor Hybrid System

    DEFF Research Database (Denmark)

    Zhang, Zhe; Thomsen, Ole Cornelius; Andersen, Michael A. E.

    2009-01-01

    A wide-input-range multi-stage converter is proposed with fuel cells and supercapacitors as a hybrid system. The front-end two-phase boost converter is used to optimize the output power and to reduce the current ripple of the fuel cells. The supercapacitor power module is connected through a push-pull-forward half bridge (PPFHB) converter with coupled inductors in the second stage to handle the slow transient response of the fuel cells and realize bidirectional power flow control. Moreover, this cascaded structure simplifies the power management. The control strategy for the whole system is analyzed and designed. A 1 kW prototype controlled by a TMS320F2808 DSP was built in the lab. Simulation and experimental results confirm the feasibility of the proposed two-stage dc-dc converter system.

  7. Improvement of two-stage GM refrigerator performance using a hybrid regenerator

    International Nuclear Information System (INIS)

    Ke, G.; Makuuchi, H.; Hashimoto, T.; Onishi, A.; Li, R.; Satoh, T.; Kanazawa, Y.

    1994-01-01

    To improve the performance of two-stage GM refrigerators, a hybrid regenerator containing the magnetic materials Er3Ni and ErNi0.9Co0.1 was used in the 2nd-stage regenerator because of its large heat exchange capacity. The largest refrigeration capacity achieved with the hybrid regenerator was 0.95 W at the liquid-helium temperature of 4.2 K. This capacity is 15.9% greater than the 0.82 W obtained with only Er3Ni as the 2nd-stage regenerator material. Use of the hybrid regenerator not only increases the refrigeration capacity at 4.2 K, but also allows the 4 K GM refrigerator to be used with a large 1st-stage refrigeration capacity, thus making it more practical.

  8. Hydrogen and methane production from household solid waste in the two-stage fermentation process

    DEFF Research Database (Denmark)

    Lui, D.; Liu, D.; Zeng, Raymond Jianxiong

    2006-01-01

    A two-stage process combining hydrogen and methane production from household solid waste was demonstrated to work successfully. A yield of 43 mL H2/g volatile solid (VS) added was generated in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS added. This figure was 21% higher than the methane yield from the one-stage process, which was run as a control. Sparging of the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage. Furthermore, this study provided direct evidence that, in the dynamic fermentation process, an increase in hydrogen production was reflected by an increase in the acetate-to-butyrate ratio in the liquid phase.

  9. Discrete time population dynamics of a two-stage species with recruitment and capture

    International Nuclear Information System (INIS)

    Ladino, Lilia M.; Mammana, Cristiana; Michetti, Elisabetta; Valverde, Jose C.

    2016-01-01

    This work models and analyzes the dynamics of a two-stage species with recruitment and capture factors. It arises from the discretization of a previous model developed by Ladino and Valverde (2013), which represents progress in the knowledge of the dynamics of exploited populations. Although the methods used here relate to the study of discrete-time systems and differ from those for the continuous version, the results are similar in both the discrete and the continuous case, which confirms the suitability of the factors selected to design the model. Unlike in the continuous-time case, in the discrete-time one some (non-negative) parametric constraints are derived from the biological significance of the model and become fundamental for the proofs of such results. Finally, numerical simulations show different scenarios of dynamics related to the analytical results, which confirm the validity of the model.

  10. Validation of Continuous CHP Operation of a Two-Stage Biomass Gasifier

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Henriksen, Ulrik Birk; Jensen, Torben Kvist

    2006-01-01

    The Viking gasification plant at the Technical University of Denmark was built to demonstrate continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-day measurement campaign was performed. The campaign verified stable operation of the plant, and the energy balance resulted in an overall fuel-to-gas efficiency of 93% and a wood-to-electricity efficiency of 25%. Very low tar content in the producer gas was observed: only 0.1 mg/Nm3 naphthalene could be measured in the raw gas. Stable engine operation on the producer gas was observed, and very low emissions of aldehydes, N2O, and polycyclic aromatic hydrocarbons were measured.

  11. Evaluation of biological hydrogen sulfide oxidation coupled with two-stage upflow filtration for groundwater treatment.

    Science.gov (United States)

    Levine, Audrey D; Raymer, Blake J; Jahn, Johna

    2004-01-01

    Hydrogen sulfide in groundwater can be oxidized by aerobic bacteria to form elemental sulfur and biomass. While this treatment approach is effective for conversion of hydrogen sulfide, it is important to have adequate control of the biomass exiting the biological treatment system to prevent release of elemental sulfur into the distribution system. Pilot scale tests were conducted on a Florida groundwater to evaluate the use of two-stage upflow filtration downstream of biological sulfur oxidation. The combined biological and filtration process was capable of excellent removal of hydrogen sulfide and associated turbidity. Additional benefits of this treatment approach include elimination of odor generation, reduction of chlorine demand, and improved stability of the finished water.

  12. Shaft Position Influence on Technical Characteristics of Universal Two-Stages Helical Speed Reducers

    Directory of Open Access Journals (Sweden)

    Мilan Rackov

    2005-10-01

    Full Text Available Purchasers of speed reducers decide to buy those reducers that most closely satisfy their demands at the lowest cost. The amount of material used, i.e. the mass and dimensions of the gear unit, influences the gear unit's price. Mass and dimensions, together with output torque, gear ratio and efficiency, are the most important parameters of the technical characteristics of gear units and their quality. Centre distance and shaft position have a significant influence on the output torque, gear ratio and mass of the gear unit through the overall dimensions of the gear unit housing; these characteristics are therefore dependent on each other. This paper analyzes the influence of centre distance and shaft position on the output torque and ratio of universal two-stage gear units.

  13. Stepwise encapsulation and controlled two-stage release system for cis-Diamminediiodoplatinum.

    Science.gov (United States)

    Chen, Yun; Li, Qian; Wu, Qingsheng

    2014-01-01

    cis-Diamminediiodoplatinum (cis-DIDP) is a cisplatin-like anticancer drug with higher anticancer activity, but lower stability and price, than cisplatin. In this study, a cis-DIDP carrier system based on micro-sized stearic acid was prepared by an emulsion solvent evaporation method. The maximum drug loading capacity of cis-DIDP-loaded solid lipid nanoparticles was 22.03%, and their encapsulation efficiency was 97.24%. In vitro drug release in phosphate-buffered saline (pH = 7.4) at 37.5°C exhibited a unique two-stage process, which could prove beneficial for patients with tumors and malignancies. MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) assay results showed that cis-DIDP released from cis-DIDP-loaded solid lipid nanoparticles had better inhibition activity than cis-DIDP that had not been loaded.
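
The record quotes a drug loading capacity of 22.03% and an encapsulation efficiency of 97.24% but does not reproduce the formulas; the definitions below are the ones commonly used for solid lipid nanoparticles and are given only for orientation (the paper's exact expressions may differ).

```latex
% Commonly used definitions (assumed, not quoted from the paper):
\mathrm{EE}\,(\%) = \frac{m_{\mathrm{drug,\,encapsulated}}}{m_{\mathrm{drug,\,added}}}\times 100,
\qquad
\mathrm{DL}\,(\%) = \frac{m_{\mathrm{drug,\,encapsulated}}}{m_{\mathrm{particles,\,total}}}\times 100
```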

  14. A Two-stage Kalman Filter for Sensorless Direct Torque Controlled PM Synchronous Motor Drive

    Directory of Open Access Journals (Sweden)

    Boyu Yi

    2013-01-01

    Full Text Available This paper presents an optimal two-stage extended Kalman filter (OTSEKF) for closed-loop flux, torque, and speed estimation of a permanent magnet synchronous motor (PMSM) to achieve sensorless DTC-SVPWM operation of the drive system. The novel observer is obtained by using the same transformation as in a linear Kalman observer, proposed by C.-S. Hsieh and F.-C. Chen in 1999. The OTSEKF is an effective implementation of the extended Kalman filter (EKF) and provides recursive optimum state estimation for PMSMs using terminal signals that may be polluted by noise. Compared to a conventional EKF, the OTSEKF reduces the number of arithmetic operations. Simulation and experimental results verify the effectiveness of the proposed OTSEKF observer for DTC of PMSMs.
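
The OTSEKF transformation itself is not given in the record; for reference, the standard discrete-time EKF recursion that such two-stage filters reorganise is sketched below (the symbols follow the usual predict/update notation and are not taken from the paper).

```latex
% Predict step
\hat{x}_{k|k-1} = f(\hat{x}_{k-1|k-1}, u_{k-1}), \qquad
P_{k|k-1} = F_k P_{k-1|k-1} F_k^{\top} + Q_k
% Update step
K_k = P_{k|k-1} H_k^{\top}\,(H_k P_{k|k-1} H_k^{\top} + R_k)^{-1}
\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k\,\bigl(z_k - h(\hat{x}_{k|k-1})\bigr), \qquad
P_{k|k} = (I - K_k H_k)\, P_{k|k-1}
```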

  15. Economic Design of Acceptance Sampling Plans in a Two-Stage Supply Chain

    Directory of Open Access Journals (Sweden)

    Lie-Fern Hsu

    2012-01-01

    Full Text Available Supply Chain Management, which is concerned with material and information flows between facilities and the final customers, has been considered the most popular operations strategy for improving organizational competitiveness nowadays. With the advanced development of computer technology, it is getting easier to derive an acceptance sampling plan satisfying both the producer's and consumer's quality and risk requirements. However, all the available QC tables and computer software determine the sampling plan on a noneconomic basis. In this paper, we design an economic model to determine the optimal sampling plan in a two-stage supply chain that minimizes the producer's and the consumer's total quality cost while satisfying both the producer's and consumer's quality and risk requirements. Numerical examples show that the optimal sampling plan is quite sensitive to the producer's product quality. The product's inspection, internal failure, and postsale failure costs also have an effect on the optimal sampling plan.
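
The economic model itself is not reproduced in the record; as a point of reference, the acceptance probability of a single sampling plan (n, c) under a binomial model, which underlies the producer's and consumer's risk requirements mentioned above, can be sketched as follows. The function names and example parameters are illustrative, not taken from the paper.

```python
from scipy.stats import binom

def acceptance_probability(n: int, c: int, p: float) -> float:
    """Probability of accepting a lot with defect rate p under a single
    sampling plan: inspect n items, accept if at most c are defective
    (the binomial operating-characteristic curve)."""
    return binom.cdf(c, n, p)

def satisfies_risks(n, c, aql, ltpd, alpha=0.05, beta=0.10):
    """Classical producer's/consumer's risk check: accept good lots
    (p = AQL) with prob >= 1 - alpha, accept bad lots (p = LTPD)
    with prob <= beta."""
    return (acceptance_probability(n, c, aql) >= 1 - alpha and
            acceptance_probability(n, c, ltpd) <= beta)

# Illustrative values only (not from the paper):
print(satisfies_risks(n=125, c=3, aql=0.01, ltpd=0.06))
```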

  16. A two-stage metal valorisation process from electric arc furnace dust (EAFD

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    Full Text Available This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD together with other synergetic iron-bearing wastes and by-products (mill scale, pyrite-cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistant furnace, removal of lead is enabled by the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is conducted. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

  17. Combinatorial biosynthesis of medicinal plant secondary metabolites

    NARCIS (Netherlands)

    Julsing, Mattijs K.; Koulman, Albert; Woerdenbag, Herman J.; Quax, Wim J.; Kayser, Oliver

    2006-01-01

    Combinatorial biosynthesis is a new tool in the generation of novel natural products and for the production of rare and expensive natural products. The basic concept is combining metabolic pathways in different organisms on a genetic level. As a consequence heterologous organisms provide precursors

  18. Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies

    NARCIS (Netherlands)

    Ketema, J.; Simonsen, Jakob Grue

    2010-01-01

    We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in

  19. PIPERIDINE OLIGOMERS AND COMBINATORIAL LIBRARIES THEREOF

    DEFF Research Database (Denmark)

    1999-01-01

    The present invention relates to piperidine oligomers, methods for the preparation of piperidine oligomers and compound libraries thereof, and the use of piperidine oligomers as drug substances. The present invention also relates to the use of combinatorial libraries of piperidine oligomers...... in libraries (arrays) of compounds especially suitable for screening purposes....

  20. Dendrimer-based dynamic combinatorial libraries

    NARCIS (Netherlands)

    Chang, T.; Meijer, E.W.

    2005-01-01

    The aim of this project is to create water-soluble dynamic combinatorial libraries based upon dendrimer-guest complexes. The guest molecules are designed to bind to dendrimers using multiple secondary interactions, such as electrostatics and hydrogen bonding. We have been able to incorporate various guest

  1. Gian-Carlos Rota and Combinatorial Math.

    Science.gov (United States)

    Kolata, Gina Bari

    1979-01-01

    Presents the first of a series of occasional articles about mathematics as seen through the eyes of its prominent scholars. In an interview with Gian-Carlos Rota of the Massachusetts Institute of Technology he discusses how combinatorial mathematics began as a field and its future. (HM)

  2. A Model of Students' Combinatorial Thinking

    Science.gov (United States)

    Lockwood, Elise

    2013-01-01

    Combinatorial topics have become increasingly prevalent in K-12 and undergraduate curricula, yet research on combinatorics education indicates that students face difficulties when solving counting problems. The research community has not yet addressed students' ways of thinking at a level that facilitates deeper understanding of how students…

  3. Torus actions, combinatorial topology, and homological algebra

    International Nuclear Information System (INIS)

    Bukhshtaber, V M; Panov, T E

    2000-01-01

    This paper is a survey of new results and open problems connected with fundamental combinatorial concepts, including polytopes, simplicial complexes, cubical complexes, and arrangements of subspaces. Attention is concentrated on simplicial and cubical subdivisions of manifolds, and especially on spheres. Important constructions are described that enable one to study these combinatorial objects by using commutative and homological algebra. The proposed approach to combinatorial problems is based on the theory of moment-angle complexes recently developed by the authors. The crucial construction assigns to each simplicial complex K with m vertices a T^m-space Z_K with special bigraded cellular decomposition. In the framework of this theory, well-known non-singular toric varieties arise as orbit spaces of maximally free actions of subtori on moment-angle complexes corresponding to simplicial spheres. It is shown that diverse invariants of simplicial complexes and related combinatorial-geometric objects can be expressed in terms of bigraded cohomology rings of the corresponding moment-angle complexes. Finally, it is shown that the new relationships between combinatorics, geometry, and topology lead to solutions of some well-known topological problems

  4. Combinatorial Aspects of the Generalized Euler's Totient

    Directory of Open Access Journals (Sweden)

    Nittiya Pabhapote

    2010-01-01

    Full Text Available A generalized Euler's totient is defined as a Dirichlet convolution of a power function and a product of the Souriau-Hsu-Möbius function with a completely multiplicative function. Two combinatorial aspects of the generalized Euler's totient, namely, its connections to other totients and its relations with counting formulae, are investigated.
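
For orientation, the classical case that this generalized totient extends can itself be written as a Dirichlet convolution; the exact generalized form used in the paper (involving the Souriau-Hsu-Möbius function) is not reproduced here.

```latex
% Classical Euler totient as a Dirichlet convolution (mu * id):
\varphi(n) \;=\; \sum_{d \mid n} \mu(d)\,\frac{n}{d},
\qquad\text{equivalently}\qquad
\varphi = \mu * \mathrm{id}.
```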

  5. Quantum Resonance Approach to Combinatorial Optimization

    Science.gov (United States)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is that the computing time is independent of the dimensionality of the problem. As an example, the solution of a constraint satisfaction problem of exponential complexity is demonstrated.

  6. Logging to Facilitate Combinatorial System Testing

    NARCIS (Netherlands)

    Kruse, P.M.; Prasetya, I.S.W.B.; Hage, J; Elyasov, Alexander

    2014-01-01

    Testing a web application is typically very complicated. Imposing simple coverage criteria such as function or line coverage is often not sufficient to uncover bugs due to incorrect component integration. Combinatorial testing can enforce a stronger criterion, while still allowing the

  7. Kinetics of two-stage fermentation process for the production of hydrogen

    Energy Technology Data Exchange (ETDEWEB)

    Nath, Kaushik [Department of Chemical Engineering, G.H. Patel College of Engineering and Technology, Vallabh Vidyanagar 388 120, Gujarat (India); Muthukumar, Manoj; Kumar, Anish; Das, Debabrata [Fermentation Technology Laboratory, Department of Biotechnology, Indian Institute of Technology, Kharagpur 721302 (India)

    2008-02-15

    The two-stage process described in the present work is a combination of dark and photofermentation in a sequential batch mode. In the first stage, glucose is fermented to acetate, CO2 and H2 in an anaerobic dark fermentation by Enterobacter cloacae DM11. This is followed by a second stage in which acetate is converted to H2 and CO2 in a photobioreactor by the photosynthetic bacterium Rhodobacter sphaeroides O.U. 001. The yield of hydrogen in the first stage was about 3.31 mol H2/mol glucose (approximately 82% of the theoretical value) and that in the second stage was about 1.5-1.72 mol H2/mol acetic acid (approximately 37-43% of the theoretical value). The overall yield of hydrogen in the two-stage process, considering glucose as the starting substrate, was found to be higher than that of a single-stage process. The Monod model, with incorporation of a substrate inhibition term, has been used to determine the growth kinetic parameters for the first stage. The values of the maximum specific growth rate (mu_max) and the saturation constant (K_s) were 0.398 h^-1 and 5.509 g/l, respectively, using glucose as substrate. The experimental substrate and biomass concentration profiles agree well with those obtained from the kinetic model predictions. A model based on the logistic equation has been developed to describe the growth of R. sphaeroides O.U. 001 in the second stage. The modified Gompertz equation was applied to estimate the hydrogen production potential, rate and lag-phase time in a batch process for various initial concentrations of glucose, based on the cumulative hydrogen production curves. Both the curve fitting and statistical analysis showed that the equation was suitable to describe the progress of cumulative hydrogen production. (author)
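
The abstract names the two kinetic models but does not reproduce them; commonly used forms are the Andrews/Haldane substrate-inhibition extension of the Monod equation and the modified Gompertz equation for cumulative hydrogen production. The inhibition constant K_i below belongs to the standard form and is not a value reported in the paper.

```latex
% Monod growth with a substrate-inhibition term (Andrews/Haldane form):
\mu(S) = \frac{\mu_{\max}\, S}{K_s + S + S^2/K_i}

% Modified Gompertz equation for cumulative hydrogen production H(t),
% with potential P, maximum rate R_m and lag time \lambda:
H(t) = P \exp\!\left\{-\exp\!\left[\frac{R_m\, e}{P}\,(\lambda - t) + 1\right]\right\}
```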

  8. Removal of trichloroethylene (TCE) contaminated soil using a two-stage anaerobic-aerobic composting technique.

    Science.gov (United States)

    Ponza, Supat; Parkpian, Preeda; Polprasert, Chongrak; Shrestha, Rajendra P; Jugsujinda, Aroon

    2010-01-01

    The effect of organic carbon addition on the remediation of trichloroethylene (TCE) contaminated clay soil was investigated using a two-stage anaerobic-aerobic composting system. The TCE removal rate and the processes involved were determined. Uncontaminated clay soil was treated with composting materials (dried cow manure, rice husk and cane molasses) to represent carbon-based treatments (5%, 10% and 20% OC). All treatments were spiked with TCE at 1,000 mg TCE/kg DW and incubated under anaerobic, mesophilic conditions (35 °C) for 8 weeks, followed by continuous aerobic conditions for another 6 weeks. TCE dissipation, its metabolites and the biogas composition were measured throughout the experimental period. Results show that TCE degradation depended upon the amount of organic carbon (OC) contained within the composting treatments/matrices. The highest TCE removal percentage (97%) and rate (75.06 micromol/kg DW/day) were obtained from the treatment with 10% OC composting matrices, as compared to 87% and 27.75 micromol/kg DW/day for 20% OC, and 83% and 38.08 micromol/kg DW/day for the soil control treatment. TCE removal followed first-order reaction kinetics. The highest degradation rate constant (k1 = 0.035 day^-1) was also obtained from the 10% OC treatment, followed by 20% OC (k1 = 0.026 day^-1) and the 5% OC or soil control treatment (k1 = 0.023 day^-1). The half-lives were 20, 27 and 30 days, respectively. The overall results suggest that the sequential two-stage anaerobic-aerobic composting technique has potential for remediation of TCE in heavy-textured soil, provided that an easily biodegradable source of organic carbon is present.

  9. Design of a Two-Stage Light Gas Gun for Muzzle Velocities of 10-11 km/s

    Science.gov (United States)

    Bogdanoff, David W.

    2016-01-01

    Space debris poses a major risk to spacecraft. In low Earth orbit, impact velocities can be 10-11 km/s and as high as 15 km/s. For debris shield design, it would be desirable to be able to launch controlled-shape projectiles to these velocities. The design of the proposed 10-11 km/s gun uses, as a starting point, the Ames 1.28 in./0.22 in. two-stage gun, which has achieved muzzle velocities of 10-11.3 km/s. That gun is scaled up to a 0.3125 in. launch tube diameter. The gun is then optimized with respect to maximum pressures by varying the pump tube length-to-diameter ratio (L/D), the piston mass and the hydrogen pressure. A pump tube L/D of 36.4 is selected, giving the best overall performance. Piezometric ratios for the optimized guns are found to be 2.3, much more favorable than for more traditional two-stage light gas guns, which range from 4 to 6. The maximum powder chamber pressures are 20 to 30 ksi. To reduce maximum pressures, the desirable range of the included angle of the cone of the high-pressure coupling is found to be 7.3 to 14.6 degrees. Lowering the break valve rupture pressure is found to lower the maximum projectile base pressure, but to raise the maximum gun pressure. For the optimized gun with a pump tube L/D of 36.4, increasing the muzzle velocity by decreasing the projectile mass and increasing the powder loads is studied. It appears that saboted spheres could be launched to 10.25 km/s and possibly as high as 10.7-10.8 km/s, and that disc-like plastic models could be launched to 11.05 km/s. The use of a tantalum liner to greatly reduce bore erosion and increase muzzle velocity is discussed. With a tantalum liner, CFD code calculations predict muzzle velocities as high as 12 to 13 km/s.

  10. Study of two-stage turbine characteristic and its influence on turbo-compound engine performance

    International Nuclear Information System (INIS)

    Zhao, Rongchao; Zhuge, Weilin; Zhang, Yangjun; Yang, Mingyang; Martinez-Botas, Ricardo; Yin, Yong

    2015-01-01

    Highlights: • An analytical model was built to study the interactions between two turbines in series. • The impacts of HP VGT and LP VGT on turbo-compound engine performance were investigated. • The fuel reductions obtained by HP VGT at 1900 rpm and 1000 rpm are 3.08% and 7.83%, respectively. • The optimum value of AR ranged from 2.0 to 2.5 as the turbo-compound engine speed decreases. - Abstract: Turbo-compounding is an effective way to recover waste heat from engine exhaust and reduce fuel consumption for internal combustion engines (ICE). The characteristics of the two-stage turbine, comprising the turbocharger turbine and the power turbine, have significant effects on the overall performance of a turbo-compound engine. This paper investigates the interaction between the two turbines in a turbo-compound engine and its impact on the engine performance. First, an analytical model is built to investigate the effects of turbine equivalent flow area on the two-stage turbine characteristics, including swallowing capacity and load split. Next, both simulation and experimental methods are used to study the effects of a high-pressure variable geometry turbine (HP VGT), a low-pressure variable geometry turbine (LP VGT) and combined VGT on the overall engine performance. The results show that the engine performance is more sensitive to HP VGT than to LP VGT at all operating conditions, which is caused by the larger influence of HP VGT on the total expansion ratio and the engine air-fuel ratio. Using the HP VGT method, the fuel reductions of the turbo-compound engine at 1900 rpm and 1000 rpm are 3.08% and 7.83%, respectively, in comparison with the baseline engine. The corresponding optimum values of AR are 2.0 and 2.5

  11. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld eZeitler

    2016-05-01

    Full Text Available Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing-dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e. unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, where two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

  12. Two-Stage Latissimus Dorsi Flap with Implant for Unilateral Breast Reconstruction: Getting the Size Right

    Directory of Open Access Journals (Sweden)

    Jiajun Feng

    2016-03-01

    Full Text Available Background: The aim of unilateral breast reconstruction after mastectomy is to craft a natural-looking breast with symmetry. The latissimus dorsi (LD) flap with implant is an established technique for this purpose. However, it is challenging to obtain adequate volume and satisfactory aesthetic results using a one-stage operation when considering factors such as muscle atrophy, wound dehiscence and excessive scarring. The two-stage reconstruction addresses these difficulties by using a tissue expander to gradually enlarge the skin pocket which eventually holds an appropriately sized implant. Methods: We analyzed nine patients who underwent unilateral two-stage LD reconstruction. In the first stage, an expander was placed along with the LD flap to reconstruct the mastectomy defect, followed by gradual tissue expansion to achieve overexpansion of the skin pocket. The final implant volume was determined by measuring the residual expander volume after aspirating the excess saline. Finally, the expander was replaced with the chosen implant. Results: The average volume of tissue expansion was 460 mL. The resultant expansion allowed an implant ranging in volume from 255 to 420 mL to be placed alongside the LD muscle. Seven patients scored less than six on the relative breast retraction assessment formula for breast symmetry, indicating excellent breast symmetry. The remaining two patients scored between six and eight, indicating good symmetry. Conclusions: This approach allows the size of the eventual implant to be estimated after the skin pocket has healed completely and the LD muscle has undergone natural atrophy. Optimal reconstruction results were achieved using this approach.

  13. Two-stage double-effect ammonia/lithium nitrate absorption cycle

    International Nuclear Information System (INIS)

    Ventas, R.; Lecuona, A.; Vereda, C.; Legrand, M.

    2016-01-01

    Highlights: • A two-stage double-effect cycle with NH3-LiNO3 is proposed. • The cycle operates at lower pressures than the conventional one. • The adiabatic absorber offers better performance than the diabatic version. • Evaporator external inlet temperatures higher than −10 °C avoid crystallization. • The maximum COP is 1.25 for a driving water inlet temperature of 100 °C. - Abstract: The two-stage configuration of a double-effect absorption cycle using ammonia/lithium nitrate as the working fluid is studied by means of a thermodynamic model. The maximum pressure of this cycle configuration is the same as that of the single-effect cycle, up to 15.8 bar, which is an advantage over the conventional double-effect configuration with three pressure levels, whose maximum pressure is much higher. The performance of the cycle and the limitation imposed by crystallization of the working fluid are determined for both adiabatic and diabatic absorber cycles. Both cycles offer similar COP; however, the adiabatic variant shows a larger margin against crystallization. This cycle can produce cold for external inlet evaporator temperatures down to −10 °C, although at this limit crystallization could happen at high inlet generator temperatures. The maximum COP can be 1.25 for an external inlet generator temperature of 100 °C. This cycle shows a better COP than a typical double-effect cycle with in-parallel configuration for the range of moderate temperatures under study and using the same working fluid. Comparisons with double-effect cycles using H2O/LiBr and NH3/H2O as working fluids are also offered, highlighting the present configuration's advantages regarding COP, evaporation and condensation temperatures, as well as crystallization.
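
The COP values quoted in the record follow the usual definition for thermally driven absorption chillers, reproduced here only for reference (pump work is often neglected; the paper's exact bookkeeping is not quoted).

```latex
\mathrm{COP} \;=\; \frac{\dot{Q}_{\mathrm{evap}}}{\dot{Q}_{\mathrm{gen}} + \dot{W}_{\mathrm{pump}}}
\;\approx\; \frac{\dot{Q}_{\mathrm{evap}}}{\dot{Q}_{\mathrm{gen}}}
```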

  14. Two-stage high frequency pulse tube refrigerator with base temperature below 10 K

    Science.gov (United States)

    Chen, Liubiao; Wu, Xianlin; Liu, Sixue; Zhu, Xiaoshuang; Pan, Changzhao; Guo, Jia; Zhou, Yuan; Wang, Junjie

    2017-12-01

    This paper introduces our recent experimental results on pulse tube refrigerators driven by a linear compressor. The working frequency is 23-30 Hz, which is much higher than that of a G-M type cooler (the developed cryocooler is therefore called a high-frequency pulse tube refrigerator, HPTR, in this paper). To achieve a temperature below 10 K, two types of two-stage configuration, gas-coupled and thermal-coupled, have been designed, built and tested. At present, both types can achieve a no-load temperature below 10 K using only one compressor. For the gas-coupled HPTR, the second stage can achieve a cooling power of 16 mW at 10 K when a 400 mW heat load is applied to the first stage at 60 K, with a total input power of 400 W. For the thermal-coupled HPTR, the designed cooling power of the first stage is 10 W at 80 K, and the second stage can then reach a temperature below 10 K with a total input power of 300 W. In the current preliminary experiment, liquid nitrogen is used to replace the first coaxial configuration as the precooling stage, and a no-load temperature of 9.6 K can be achieved with a stainless steel mesh regenerator. Using Er3Ni spheres with a diameter of about 50-60 microns, simulation results show it is possible to achieve a temperature below 8 K. The configuration, the phase shifters and the regenerative materials of the two developed types of two-stage high-frequency pulse tube refrigerator will be discussed, and some typical experimental results and considerations for achieving better performance will also be presented in this paper.

  15. A Two-Stage Composition Method for Danger-Aware Services Based on Context Similarity

    Science.gov (United States)

    Wang, Junbo; Cheng, Zixue; Jing, Lei; Ota, Kaoru; Kansen, Mizuo

    Context-aware systems detect a user's physical and social contexts based on sensor networks, and provide services that adapt to the user accordingly. Representing, detecting, and managing contexts are important issues in context-aware systems. Composition of contexts is a useful method for these tasks, since it can detect a context by automatically composing small pieces of information to discover services. Danger-aware services are a kind of context-aware service which needs a description of the relations between a user and his/her surrounding objects and between users. However, when the existing composition methods are applied to danger-aware services, they show the following shortcomings: (1) they do not provide an explicit method for representing the composition of multiple users' contexts, and (2) there is no flexible reasoning mechanism based on the similarity of contexts, so they can only provide services that exactly follow the predefined context reasoning rules. Therefore, in this paper, we propose a two-stage composition method based on context similarity to solve the above problems. The first stage is composition of the useful information to represent the context of a single user. The second stage is composition of multiple users' contexts to provide services by considering the relations between users. Finally, the danger degree of the detected context is computed by using the context similarity between the detected context and the predefined context. Context is dynamically represented based on the two-stage composition rules and a Situation-theory-based Ontology, which combines the advantages of Ontology and Situation theory. We implement the system in an indoor ubiquitous environment, and evaluate it through two experiments with the support of subjects. The experimental results show the method is effective, and the accuracy of danger detection is acceptable for a danger-aware system.
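
The record states that the danger degree is computed from the similarity between the detected context and a predefined dangerous context, but gives no formula; the cosine-similarity sketch below is a hypothetical illustration only (the feature encoding, function names and weighting are assumptions, not the authors' method).

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two context feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def danger_degree(detected, predefined, base_danger):
    """Scale the danger level of a predefined context by how similar
    the detected context is to it (illustrative only)."""
    return base_danger * cosine_similarity(detected, predefined)

# Hypothetical encoded contexts: [near_sharp_object, on_stairs, alone, night]
detected   = [1.0, 0.0, 1.0, 1.0]
predefined = [1.0, 0.0, 1.0, 0.0]
print(round(danger_degree(detected, predefined, base_danger=0.9), 3))
```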

  16. A heterogeneous stochastic FEM framework for elliptic PDEs

    International Nuclear Information System (INIS)

    Hou, Thomas Y.; Liu, Pengfei

    2015-01-01

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using the randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage
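
As a rough sketch of the "randomized range finding" step mentioned above, the snippet below extracts a low-dimensional orthonormal basis from a matrix of solution snapshots; it is a generic Halko-style range finder under assumed inputs, not the paper's HSFEM construction, and all names are illustrative.

```python
import numpy as np

def randomized_range_finder(snapshots: np.ndarray, k: int, rng=None):
    """Return an orthonormal basis Q whose range approximates the range
    of `snapshots` (columns = sampled solutions), via a Gaussian test
    matrix and a QR factorization."""
    rng = np.random.default_rng(rng)
    omega = rng.standard_normal((snapshots.shape[1], k))  # test matrix
    y = snapshots @ omega                                 # sample the range
    q, _ = np.linalg.qr(y)                                # orthonormalize
    return q

# Offline stage (illustrative): build a local stochastic basis from
# solutions computed at random samples of the coefficient a(x, w).
snapshots = np.random.default_rng(0).standard_normal((200, 50))
basis = randomized_range_finder(snapshots, k=10)
print(basis.shape)  # (200, 10)
```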

  17. Combinatorial structures to modeling simple games and applications

    Science.gov (United States)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.
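
The record does not spell out the influence-game encoding; the fragment below only illustrates the underlying notion of a simple game (a yes/no rule on coalitions) with a weighted-majority example, and is not the authors' graph-based construction.

```python
from itertools import combinations

def weighted_majority_game(weights, quota):
    """Return the winning coalitions of a simple game in which a
    coalition wins iff its total weight reaches the quota."""
    players = range(len(weights))
    winning = []
    for r in range(len(weights) + 1):
        for coalition in combinations(players, r):
            if sum(weights[i] for i in coalition) >= quota:
                winning.append(coalition)
    return winning

# Illustrative 3-player example: weights (3, 2, 2), quota 4.
print(weighted_majority_game([3, 2, 2], quota=4))
```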

  18. Stochastic modeling

    CERN Document Server

    Lanchier, Nicolas

    2017-01-01

    Three coherent parts form the material covered in this text, portions of which have not been widely covered in traditional textbooks. In this coverage the reader is quickly introduced to several different topics enriched with 175 exercises which focus on real-world problems. Exercises range from the classics of probability theory to more exotic research-oriented problems based on numerical simulations. Intended for graduate students in mathematics and applied sciences, the text provides the tools and training needed to write and use programs for research purposes. The first part of the text begins with a brief review of measure theory and revisits the main concepts of probability theory, from random variables to the standard limit theorems. The second part covers traditional material on stochastic processes, including martingales, discrete-time Markov chains, Poisson processes, and continuous-time Markov chains. The theory developed is illustrated by a variety of examples surrounding applications such as the ...

  19. Comparison of single-stage and temperature-phased two-stage anaerobic digestion of oily food waste

    International Nuclear Information System (INIS)

    Wu, Li-Jie; Kobayashi, Takuro; Li, Yu-You; Xu, Kai-Qin

    2015-01-01

    Highlights: • A single-stage and two two-stage anaerobic systems were operated synchronously. • A similar methane production of 0.44 L/g VS added from oily food waste was achieved. • The first stage of the two-stage process became inefficient due to a serious pH drop. • Recycling favored hythane production in the two-stage digestion. • The conversion of unsaturated fatty acids was enhanced by introducing recycle. - Abstract: Anaerobic digestion is an effective technology to recover energy from oily food waste. A single-stage system and temperature-phased two-stage systems with and without recycle for anaerobic digestion of oily food waste were constructed to compare their operational performance. The synchronous operation indicated a similar ability to produce methane in the three systems, with a methane yield of 0.44 L/g VS added. The pH drop to less than 4.0 in the first stage of the two-stage system without recycle resulted in poor hydrolysis, and methane or hydrogen was not produced in this stage. Alkalinity supplied from the second stage of the two-stage system with recycle improved the pH in the first stage to 5.4. Consequently, 35.3% of the particulate COD in the influent was reduced in the first stage of the two-stage system with recycle according to a COD mass balance, and hydrogen was produced at a content of 31.7%, accordingly. Similar amounts of solids and organic matter were removed in the single-stage system and the two-stage system without recycle. More lipid degradation and conversion of long-chain fatty acids were achieved in the single-stage system. Recycling proved to be effective in promoting the conversion of unsaturated long-chain fatty acids into saturated fatty acids in the two-stage system.

  20. Machine learning meliorates computing and robustness in discrete combinatorial optimization problems.

    Directory of Open Access Journals (Sweden)

    Fushing Hsieh

    2016-11-01

    Full Text Available Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high-dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problems. Can machine learning algorithms extract such information content and make combinatorial optimization tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that the answers to both questions are positive. One key reason is permutation invariance: the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations of the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that helps the optimizing task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, which then reveals the robustness of the original optimal solution. Two simulated systems, the Assignment problem and the Traveling Salesman problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose brand-new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real-world settings.
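
The Data Mechanics algorithm is not described in enough detail in the abstract to reproduce; the snippet below shows only the generic idea of reordering the rows and columns of a subject-vs-measurement matrix with hierarchical clustering, as a stand-in for the optimal-permutation step (not the authors' DM procedure).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

def cluster_order(matrix: np.ndarray) -> np.ndarray:
    """Row order given by the leaves of a hierarchical clustering."""
    return leaves_list(linkage(matrix, method="average"))

def reorder(matrix: np.ndarray) -> np.ndarray:
    """Permute rows and columns so that similar rows/columns are adjacent."""
    rows = cluster_order(matrix)
    cols = cluster_order(matrix.T)
    return matrix[np.ix_(rows, cols)]

# Illustrative random subject-vs-measurement matrix.
data = np.random.default_rng(1).random((12, 8))
print(reorder(data).shape)  # (12, 8)
```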

  1. STOCHASTIC FLOWS OF MAPPINGS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow is the generalization of superprocesses of stochastic flows and stochastic diffeomorphism induced by the strong solutions of stochastic differential equations.

  2. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
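
To make the idea of perturbation-based, model-free gradient estimation concrete, here is a minimal SPSA-style sketch on a static map; it is an illustration under assumed parameters, not one of the book's gradient- or Newton-based extremum seeking schemes.

```python
import random

def stochastic_extremum_seeking(cost, theta0, a=0.05, gain=0.2, steps=200):
    """Minimal model-free optimisation loop: perturb the parameter with a
    random probe, estimate the gradient from measured costs, and descend.
    Illustrative only; not the book's exact algorithms."""
    theta = theta0
    for _ in range(steps):
        eta = random.choice([-1.0, 1.0])                  # random perturbation
        y_plus = cost(theta + a * eta)
        y_minus = cost(theta - a * eta)
        grad_est = (y_plus - y_minus) / (2 * a) * eta     # SPSA-style estimate
        theta -= gain * grad_est                          # descent step
    return theta

# Unknown static map with a minimum at theta = 2 (for illustration).
print(round(stochastic_extremum_seeking(lambda t: (t - 2.0) ** 2, theta0=0.0), 2))
```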

  3. Experimental and numerical studies on two-stage combustion of biomass

    Energy Technology Data Exchange (ETDEWEB)

    Houshfar, Eshan

    2012-07-01

    In this thesis, two-stage combustion of biomass was investigated experimentally and numerically in a multifuel reactor. The following emission issues have been the main focus of the work: (1) NOx and N2O, (2) unburnt species (CO and CxHy), and (3) corrosion-related emissions. The study focused on two-stage combustion in order to reduce pollutant emissions (primarily NOx emissions). It is well known that pollutant emissions are very dependent on process conditions such as temperature, reactant concentrations and residence times. On the other hand, emissions also depend on the fuel properties (moisture content, volatiles, alkali content, etc.). A detailed study of the important parameters with suitable biomass fuels was performed in order to optimize the various process conditions. Different experimental studies were carried out on biomass fuels in order to study the effect of fuel properties and combustion parameters on pollutant emissions. Process conditions typical for biomass combustion processes were studied using advanced experimental equipment. The experiments clearly showed the effects of staged-air combustion, compared to non-staged combustion, on the emission levels. A NOx reduction of up to 85% was reached with staged-air combustion using demolition wood as fuel. An optimum primary excess air ratio of 0.8-0.95 was found to minimize the NOx emissions in staged-air combustion. Air staging had, however, a negative effect on N2O emissions. Even though the trends showed a very small reduction in the NOx level as temperature increased for non-staged combustion, the effect of temperature was not significant for NOx and CxHy in either staged-air or non-staged combustion, while it had a great influence on the N2O and CO emissions, whose levels decreased with increasing temperature. Furthermore, flue gas recirculation (FGR) was used in combination with staged combustion to obtain an enhanced NOx reduction.

  4. Hydrogen and methane production from condensed molasses fermentation soluble by a two-stage anaerobic process

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Chiu-Yue; Liang, You-Chyuan; Lay, Chyi-How [Feng Chia Univ., Taichung, Taiwan (China). Dept. of Environmental Engineering and Science; Chen, Chin-Chao [Chungchou Institute of Technology, Taiwan (China). Environmental Resources Lab.; Chang, Feng-Yuan [Feng Chia Univ., Taichung, Taiwan (China). Research Center for Energy and Resources

    2010-07-01

    The treatment of condensed molasses fermentation soluble (CMS) is a troublesome problem for glutamate manufacturing factories. However, CMS contains high carbohydrate and nutrient contents and is an attractive and commercially promising feedstock for bioenergy production. The aim of this paper is to produce hydrogen and methane by a two-stage anaerobic fermentation process. Fermentative hydrogen production from CMS was conducted in a continuously stirred tank bioreactor (working volume 4 L) operated at a hydraulic retention time (HRT) of 8 h, an organic loading rate (OLR) of 120 kg COD/m3-d, a temperature of 35 °C and pH 5.5, with sewage sludge as seed. Anaerobic methane production was conducted in an up-flow bioreactor (working volume 11 L) operated at an HRT of 24-60 h, an OLR of 4.0-10 kg COD/m3-d, a temperature of 35 °C and pH 7.0, using anaerobic granular sludge from a fructose manufacturing factory as the seed and the effluent from the hydrogen production process as the substrate. These two reactors have been operated successfully for more than 400 days. The steady-state hydrogen content, hydrogen production rate and hydrogen production yield in the hydrogen fermentation system were 37%, 169 mmol H2/L-d and 93 mmol H2/g carbohydrate removed, respectively. In the methane fermentation system, the peak methane content and methane production rate were 66.5% and 86.8 mmol CH4/L-d, with a methane production yield of 189.3 mmol CH4/g COD removed at an OLR of 10 kg/m3-d. The energy production rate was used to elucidate the energy efficiency of this two-stage process. A total energy production rate of 133.3 kJ/L/d was obtained, with 5.5 kJ/L/d from hydrogen fermentation and 127.8 kJ/L/d from methane fermentation. (orig.)

  5. Combined two-stage xanthate processes for the treatment of copper-containing wastewater

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Y.K. [Department of Safety Health and Environmental Engineering, Central Taiwan University of Sciences and Technology, Taichung (Taiwan); Leu, M.H. [Department of Environmental Engineering, Kun Shan University of Technology, Yung-Kang City (Taiwan); Chang, J.E.; Lin, T.F.; Chen, T.C. [Department of Environmental Engineering, National Cheng Kung University, Tainan City (Taiwan); Chiang, L.C.; Shih, P.H. [Department of Environmental Engineering and Science, Fooyin University, Kaohsiung County (Taiwan)

    2007-02-15

    Heavy metal removal is mainly conducted by adjusting the wastewater pH to form metal hydroxide precipitates. However, in recent years the xanthate process, with its high metal removal efficiency, has attracted attention due to its use of sorption/desorption of heavy metals from aqueous solutions. In this study, two kinds of agricultural xanthates, insoluble peanut-shell xanthate (IPX) and insoluble starch xanthate (ISX), were used as sorbents to treat copper-containing wastewater (Cu concentrations from 50 to 1,000 mg/L). The experimental results showed that the maximum Cu removal efficiency by IPX was 93.5% in the case of high Cu concentrations, whereby 81.1% of the copper could be removed rapidly within one minute. Moreover, copper-containing wastewater could also be treated by ISX over a wide range (50 to 1,000 mg/L) to a level that meets the Taiwan EPA's effluent regulations (3 mg/L) within 20 minutes. Whereas IPX had a maximum binding capacity for copper of 185 mg/g IPX, the capacity for ISX was 120 mg/g ISX. IPX is cheaper than ISX and has the benefits of a rapid reaction and a high copper binding capacity; however, it exhibits a lower copper removal efficiency. A sequential IPX and ISX treatment (i.e., a two-stage xanthate process) could therefore be an excellent alternative. The results obtained using the two-stage xanthate process revealed an effective copper treatment: the effluent concentration (Ce) was below 0.6 mg/L, compared to an influent (C0) of 1,001 mg/L, at pH = 4 and a dilution rate of 0.6 h^-1. Furthermore, the Cu-ISX complex formed could meet the Taiwan TCLP regulations and be classified as non-hazardous waste. The xanthatilization of agricultural wastes offers a comprehensive strategy for solving both agricultural waste disposal and metal-containing wastewater treatment problems.

  6. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie

    2009-08-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluent consisted primarily of acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs), each pre-acclimated to a single substrate (single substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with the synthetic effluent. Energy efficiencies based on the electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for the lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluents. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC process.

  7. Studies on quantitative physiology of Trichoderma reesei with two-stage continuous culture for cellulase production

    Energy Technology Data Exchange (ETDEWEB)

    Ryu, D; Andreotti, R; Mandels, M; Gallo, B; Reese, E T

    1979-11-01

    By employing a two-stage continuous-culture system, some of the more important physiological parameters involved in cellulase biosynthesis have been evaluated with an ultimate objective of designing an optimally controlled cellulase process. The two-stage continuous-culture system was run for a period of 1350 hr with Trichoderma reesei strain MCG-77. The temperature and pH were controlled at 32 °C and pH 4.5 for the first stage (growth) and 28 °C and pH 3.5 for the second stage (enzyme production). Lactose was the only carbon source for both stages. The ratio of the specific uptake rate of carbon to that of nitrogen, Q(C)/Q(N), that supported good cell growth ranged from 11 to 15, and the ratio for maximum specific enzyme productivity ranged from 5 to 13. The maintenance coefficients determined for oxygen, M_O, and for the carbon source, M_C, are 0.85 mmol O2/g biomass/hr and 0.14 mmol hexose/g biomass/hr, respectively. The yield constants determined are: Y_X/O = 32.3 g biomass/mol O2, Y_X/C = 1.1 g biomass/g C (or 0.44 g biomass/g hexose), Y_X/N = 12.5 g biomass/g nitrogen for the cell growth stage, and Y_X/N = 16.6 g biomass/g nitrogen for the enzyme production stage. Enzyme was produced only in the second stage. Volumetric and specific enzyme productivities obtained were 90 IU/liter/hr and 8 IU/g biomass/hr, respectively. The maximum specific enzyme productivity observed was 14.8 IU/g biomass/hr. The optimal dilution rate in the second stage that corresponded to the maximum enzyme productivity was 0.026-0.028 hr^-1, and the specific growth rate in the second stage that supported maximum specific enzyme productivity was equal to or slightly less than zero.

  8. Removal of cesium from simulated liquid waste with countercurrent two-stage adsorption followed by microfiltration

    Energy Technology Data Exchange (ETDEWEB)

    Han, Fei; Zhang, Guang-Hui [School of Environmental Science and Engineering, Tianjin University, Tianjin, 300072 (China); Gu, Ping, E-mail: guping@tju.edu.cn [School of Environmental Science and Engineering, Tianjin University, Tianjin, 300072 (China)

    2012-07-30

    Highlights: • The adsorption isotherm of cesium on copper ferrocyanide followed a Freundlich model. • The decontamination factor of cesium was higher in the lab-scale test than in the jar test. • A countercurrent two-stage adsorption-microfiltration process was achieved. • The cesium concentration in the effluent could be calculated. • It is a new cesium removal process with a higher decontamination factor. - Abstract: Copper ferrocyanide (CuFC) was used as an adsorbent to remove cesium. Jar test results showed that the adsorption capacity of CuFC was better than that of potassium zinc hexacyanoferrate. Lab-scale tests were performed with an adsorption-microfiltration process, and the mean decontamination factor (DF) was 463 when the initial cesium concentration was 101.3 µg/L, the dosage of CuFC was 40 mg/L and the adsorption time was 20 min. The cesium concentration in the effluent continuously decreased with operation time, which indicated that the used adsorbent retained adsorption capacity. To exploit this capacity, experiments on a countercurrent two-stage adsorption (CTA)-microfiltration (MF) process were carried out with CuFC adsorption combined with membrane separation. A calculation method for determining the cesium concentration in the effluent was given, and batch tests in a pressure cup were performed to verify the calculation method. The results showed that the experimental values fitted well with the calculated values in the CTA-MF process. The mean DF was 1123 when the dilution factor was 0.4, the initial cesium concentration was 98.75 µg/L, and the dosage of CuFC and the adsorption time were the same as those used in the lab-scale test. The DF obtained by the CTA-MF process was more than three times higher than that of the single-stage adsorption in the jar test.
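
    For readers unfamiliar with how a countercurrent two-stage effluent concentration can be predicted, the sketch below sets up generic steady-state mass balances with Freundlich equilibrium in each stage. It illustrates the general technique only, not this paper's calculation method or fitted isotherm; the Freundlich constants, flows and dose are placeholder values.

```python
# Hypothetical steady-state countercurrent two-stage adsorption: water flows
# stage 1 -> stage 2, fresh adsorbent enters stage 2 and leaves loaded from stage 1.
# Equilibrium in each stage is assumed to follow a Freundlich isotherm q = K_F * C**(1/n).
import numpy as np
from scipy.optimize import least_squares

K_F, inv_n = 500.0, 0.7   # placeholder Freundlich parameters (q in ug/g, C in ug/L)
Q = 1.0                   # water flow, L/min
M = 0.04                  # adsorbent feed, g/min (i.e. 40 mg/L at 1 L/min)
C0 = 100.0                # influent cesium concentration, ug/L

def q_eq(C):
    return K_F * C ** inv_n

def balances(x):
    C1, C2 = x
    r1 = Q * (C0 - C1) - M * (q_eq(C1) - q_eq(C2))  # stage 1: raw water + preloaded adsorbent
    r2 = Q * (C1 - C2) - M * q_eq(C2)               # stage 2: stage-1 water + fresh adsorbent
    return [r1, r2]

sol = least_squares(balances, x0=[10.0, 1.0], bounds=(1e-9, C0))
C1, C2 = sol.x
print(f"intermediate C1 = {C1:.2f} ug/L, effluent C2 = {C2:.3f} ug/L, DF = {C0 / C2:.0f}")
```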

  9. A farm-scale pilot plant for biohydrogen and biomethane production by two-stage fermentation

    Directory of Open Access Journals (Sweden)

    R. Oberti

    2013-09-01

    Full Text Available Hydrogen is considered one of the main possible energy carriers for the future, thanks to its unique environmental properties. Indeed, its energy content (120 MJ/kg) can be exploited with virtually no exhaust emitted to the atmosphere except water. Hydrogen can be produced renewably through the same biological processes on which anaerobic digestion relies, a technology already well established at farm scale for treating different kinds of biomass and residues. Although two-stage hydrogen- and methane-producing fermentation is a simple variant of traditional anaerobic digestion, it is a relatively new approach studied mainly at laboratory scale. It is based on biomass fermentation in two separate, sequential stages, each maintaining conditions optimized to promote specific bacterial consortia: hydrogen is produced in the first, acidophilic reactor, while the volatile fatty acid-rich effluent is sent to the second reactor, where traditional methane-rich biogas production is accomplished. A two-stage pilot-scale plant was designed, manufactured and installed at the experimental farm of the University of Milano and operated using a mixture of livestock effluents and sugar/starch-rich residues (rotten fruits and potatoes and expired fruit juices), a feedstock based on waste biomasses directly available in the rural area where the plant is installed. The hydrogenic and methanogenic reactors, both of CSTR type, had total volumes of 0.7 m3 and 3.8 m3 respectively, were operated under thermophilic conditions (55 ± 2 °C) without any external pH control, and were fully automated. After a brief description of the requirements of the system, this contribution gives a detailed description of its components and of the engineering solutions to the problems encountered during plant realization and start-up. The paper also discusses the results obtained in a first experimental run, which led to production in the range of previous

  10. Comparative Analysis of Direct Hospital Care Costs between Aseptic and Two-Stage Septic Knee Revision

    Science.gov (United States)

    Kasch, Richard; Merk, Sebastian; Assmann, Grit; Lahm, Andreas; Napp, Matthias; Merk, Harry; Flessa, Steffen

    2017-01-01

    Background The most common intermediate and long-term complications of total knee arthroplasty (TKA) include aseptic and septic failure of prosthetic joints. These complications cause suffering, and their management is expensive. In the future the number of revision TKAs will increase, which involves a greater financial burden. Little concrete data is available on the direct costs of aseptic and two-stage septic knee revisions with an in-depth analysis of septic explantation and implantation. Questions/Purposes A retrospective consecutive analysis of the major partial costs involved in revision TKA for aseptic and septic failure was undertaken to compare 1) demographic and clinical characteristics, and 2) variable direct costs (from a hospital department's perspective) between patients who underwent single-stage aseptic and two-stage septic revision of TKA in a hospital providing maximum care. We separately analyzed the explantation and implantation procedures in septic revision cases and identified the major cost drivers of knee revision operations. Methods A total of 106 consecutive patients (71 aseptic and 35 septic) were included. All direct costs of diagnosis, surgery, and treatment from the hospital department's perspective were calculated as real purchase prices. Personnel involvement was calculated in units of minutes. Results Aseptic and septic revisions differed significantly in terms of length of hospital stay (15.2 vs. 39.9 days), number of reported secondary diagnoses (6.3 vs. 9.8) and incision-suture time (108.3 min vs. 193.2 min). The management of septic revision TKA was significantly more expensive than that of aseptic failure ($12,223.79 vs. $6,749.43). The costs of the explantation stage ($4,540.46) were lower than those of aseptic revision TKA ($6,749.43), which in turn were lower than those of the septic implantation stage ($7,683.33). The mean costs of the stays were not comparable, as they differed significantly. The major cost drivers were the cost of the implant and

  11. Two-stage laparoscopic approaches for high anorectal malformation: transumbilical colostomy and anorectoplasty.

    Science.gov (United States)

    Yang, Li; Tang, Shao-Tao; Li, Shuai; Aubdoollah, T H; Cao, Guo-Qing; Lei, Hai-Yan; Wang, Xin-Xing

    2014-11-01

    Trans-umbilical colostomy (TUC) has previously been created in patients with Hirschsprung's disease and intermediate anorectal malformation (ARM), but not in patients with high ARM. The purposes of this study were to assess the feasibility, safety, complications and cosmetic results of TUC in a divided fashion, with stoma closure and laparoscopic-assisted anorectoplasty (LAARP) subsequently completed simultaneously by using the colostomy site for a laparoscopic port in high-ARM patients. Twenty male patients with high ARMs were chosen for this two-stage procedure. The first stage consisted of creating the TUC as a double-barreled colostomy with a high chimney at the umbilicus; the loop was divided at the same time, in such a way that the two diverting ends were located at the umbilical incision, with the distal end half closed and slightly higher than the proximal end. In the second stage, 3 to 7 months later, the stoma was closed through a peristomal skin incision followed by end-to-end anastomosis, and LAARP was simultaneously performed by placing a laparoscopic port at the umbilicus, which was previously the colostomy site. Umbilical wound closure was performed in a semi-opened fashion to create a deep umbilicus. TUC and LAARP were successfully performed in 20 patients. Four cases with bladder neck fistulas and 16 cases with prostatic urethral fistulas were found. Postoperative complications were rectal mucosal prolapse in three cases, anal stricture in two cases and wound dehiscence in one case. Neither umbilical ring narrowing, parastomal hernia nor obstructive symptoms were observed, and neither umbilical nor perineal wound infection occurred. Stoma care was easily carried out by attaching a stoma bag. Healing of the umbilical wounds after the second stage was excellent. Early functional stooling outcomes were satisfactory. The umbilicus may be an alternative stoma site for double-barreled colostomy in high-ARM patients. The two-stage laparoscopic

  12. Numerical Investigation and Experimental Demonstration of Chaos from Two-Stage Colpitts Oscillator in the Ultrahigh Frequency Range

    DEFF Research Database (Denmark)

    Bumeliene, S.; Tamasevicius, A.; Mykolaitis, G.

    2006-01-01

    A hardware prototype of the two-stage Colpitts oscillator employing microwave BFG520-type transistors with a threshold frequency of 9 GHz and designed to operate in the ultrahigh frequency range (300–1000 MHz) is described. In addition to the intrinsic two-stage oscillator, the practical circuit contains an emitter follower acting as a buffer and minimizing the influence of the load. The circuit is investigated both numerically and experimentally. Typical phase portraits, Lyapunov exponents, Lyapunov dimension and broadband continuous power spectra are presented. The main advantage

  13. A New Cost-Effective Multi-Drive Solution based on a Two-Stage Direct Power Electronic Conversion Topology

    DEFF Research Database (Denmark)

    Klumpner, Christian; Blaabjerg, Frede

    2002-01-01

    The need for a protection circuit involving twelve diodes with full voltage/current ratings, used only during fault situations, makes this topology less attractive. Lately, two-stage Direct Power Electronic Conversion (DPEC) topologies have been proposed, providing similar functionality to a matrix converter but allowing the front-end stage to be shared by many loads, making the topology more cost effective. The functionality of the proposed two-stage multi-drive direct power electronic conversion topology is validated by experiments on a realistic laboratory prototype.

  14. Normal Theory Two-Stage ML Estimator When Data Are Missing at the Item Level.

    Science.gov (United States)

    Savalei, Victoria; Rhemtulla, Mijke

    2017-08-01

    In many modeling contexts, the variables in the model are linear composites of the raw items measured for each participant; for instance, regression and path analysis models rely on scale scores, and structural equation models often use parcels as indicators of latent constructs. Currently, no analytic estimation method exists to appropriately handle missing data at the item level. Item-level multiple imputation (MI), however, can handle such missing data straightforwardly. In this article, we develop an analytic approach for dealing with item-level missing data-that is, one that obtains a unique set of parameter estimates directly from the incomplete data set and does not require imputations. The proposed approach is a variant of the two-stage maximum likelihood (TSML) methodology, and it is the analytic equivalent of item-level MI. We compare the new TSML approach to three existing alternatives for handling item-level missing data: scale-level full information maximum likelihood, available-case maximum likelihood, and item-level MI. We find that the TSML approach is the best analytic approach, and its performance is similar to item-level MI. We recommend its implementation in popular software and its further study.
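
    As a rough illustration of the two-stage idea (and explicitly not the authors' TSML implementation, which also supplies corrected standard errors and test statistics), the sketch below estimates the saturated item-level mean and covariance by EM under multivariate normality in stage 1, and then derives the scale-score (composite) moments analytically in stage 2; those moments could then feed any complete-data model. All data and the item-to-scale mapping are synthetic.

```python
import numpy as np

def em_mvn(X, n_iter=100):
    """ML mean/covariance of a multivariate normal with values missing at random."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    mu = np.nanmean(X, axis=0)
    sigma = np.diag(np.nanvar(X, axis=0)) + 1e-6 * np.eye(p)
    for _ in range(n_iter):
        sum_x, sum_xx = np.zeros(p), np.zeros((p, p))
        for row in X:
            miss = np.isnan(row)
            x_hat = row.copy()
            cov_add = np.zeros((p, p))
            if miss.any():
                obs = ~miss
                reg = sigma[np.ix_(miss, obs)] @ np.linalg.inv(sigma[np.ix_(obs, obs)])
                x_hat[miss] = mu[miss] + reg @ (row[obs] - mu[obs])        # E[x_m | x_o]
                cov_add[np.ix_(miss, miss)] = (sigma[np.ix_(miss, miss)]
                                               - reg @ sigma[np.ix_(obs, miss)])
            sum_x += x_hat
            sum_xx += np.outer(x_hat, x_hat) + cov_add
        mu = sum_x / n
        sigma = sum_xx / n - np.outer(mu, mu)
    return mu, sigma

# stage 1: saturated item-level estimates from incomplete synthetic data
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(6), 0.4 + 0.6 * np.eye(6), size=500)
X[rng.random(X.shape) < 0.2] = np.nan            # ~20% item-level missingness
mu, sigma = em_mvn(X)

# stage 2: composites (scale scores) are linear in the items, so their moments follow
A = np.array([[1, 1, 1, 0, 0, 0],                # scale 1 = items 0-2
              [0, 0, 0, 1, 1, 1]], dtype=float)  # scale 2 = items 3-5
print("scale means:", np.round(A @ mu, 2))
print("scale covariance:\n", np.round(A @ sigma @ A.T, 2))
```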

  15. Biogas production of Chicken Manure by Two-stage fermentation process

    Science.gov (United States)

    Liu, Xin Yuan; Wang, Jing Jing; Nie, Jia Min; Wu, Nan; Yang, Fang; Yang, Ren Jie

    2018-06-01

    This paper reports a batch experiment on pre-acidification treatment and methane production from chicken manure by a two-stage anaerobic fermentation process. The results show that acetate was the main component of the volatile fatty acids produced at the end of the pre-acidification stage, accounting for 68% of the total amount. Daily biogas production went through three peak periods during the methane production stage; the methane content reached 60% in the second period and then slowly decreased to 44.5% in the third period. The cumulative methane production was fitted with a modified Gompertz equation, and the kinetic parameters, namely the methane production potential, the maximum methane production rate and the lag phase time, were 345.2 ml, 0.948 ml/h and 343.5 h, respectively. A methane yield of 183 ml-CH4/g-VS removed during the methane production stage and a VS removal efficiency of 52.7% for the whole fermentation process were achieved.
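
    For readers who want to reproduce this kind of kinetic fit, the sketch below fits the modified Gompertz equation, M(t) = P·exp(−exp(Rmax·e/P·(λ − t) + 1)), to a cumulative methane curve with scipy. The data are synthetic, generated only to mimic the shape of such a curve around the reported parameter values; the paper's actual measurements and fitting details are not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def modified_gompertz(t, P, Rmax, lam):
    # P: methane production potential (ml), Rmax: max production rate (ml/h), lam: lag time (h)
    return P * np.exp(-np.exp(Rmax * np.e / P * (lam - t) + 1.0))

t = np.linspace(0, 900, 90)                                       # h
y_true = modified_gompertz(t, 345.2, 0.948, 343.5)                # parameters reported above
y_obs = y_true + np.random.default_rng(0).normal(0, 5, t.size)    # fake "measurements"

popt, _ = curve_fit(modified_gompertz, t, y_obs, p0=[300.0, 1.0, 300.0])
print("P = %.1f ml, Rmax = %.3f ml/h, lag = %.1f h" % tuple(popt))
```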

  16. Study on the effect of mutated bacillus megaterium in two-stage fermentation of vitamin C

    International Nuclear Information System (INIS)

    Lv Shujuan; Wang Jun; Yao Jianming; Yu Zengliang

    2003-01-01

    Bacillus megaterium, as a companion strain in the two-stage fermentation of vitamin C, can secrete active substances that spur the growth of Gluconobacter oxydans to produce 2-KLG. In a fermenting system where Gluconobacter oxydans was combined with GB82, a strain of B. megaterium mutated by ion implantation, the amount of 2-KLG harvested was larger than that produced when the original B. megaterium BP52 was used in place of GB82. In this paper, the authors studied the effect of the active substances secreted by GB82 in enhancing the capability of Gluconobacter oxydans to produce 2-KLG. The supernatant of GB82 sampled at different cultivation times consistently showed much greater activity in spurring Gluconobacter oxydans to yield 2-KLG than that of the original B. megaterium, which might be due to genetic changes in the active components caused by ion implantation. Furthermore, the active substances in GB82's supernatant lost part of their activity in extreme environments, which is typical of some proteins

  17. A Risk-Based Interval Two-Stage Programming Model for Agricultural System Management under Uncertainty

    Directory of Open Access Journals (Sweden)

    Ye Xu

    2016-01-01

    Full Text Available Nonpoint source (NPS) pollution caused by agricultural activities is a main reason why water quality in watersheds worsens and even deteriorates. Moreover, pollution control is accompanied by a fall in revenue for the agricultural system. How to design and generate a cost-effective and environmentally friendly agricultural production pattern is a critical issue for local managers. In this study, a risk-based interval two-stage programming model (RBITSP) was developed. Compared to a general ITSP model, the significant contribution of the RBITSP model is that it emphasizes the importance of financial risk under various probabilistic levels, rather than concentrating only on expected economic benefit; here risk is expressed as the probability of not meeting a target profit under each individual scenario realization. This effectively avoids the inaccuracy caused by a purely expectation-based objective function and generates a variety of solutions through adjustment of the weight coefficients, reflecting the trade-off between system economy and reliability. A case study of agricultural production management in the Tai Lake watershed was used to demonstrate the superiority of the proposed model. The results obtained could serve as a basis for designing land-structure adjustment patterns and farmland retirement schemes and for balancing system benefit, system-failure risk, and water-body protection.
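
    The snippet below is a deliberately simplified numerical sketch of the same general family of models, not the RBITSP formulation itself: a first-stage land allocation, scenario-dependent recourse (sales), and a linear expected-shortfall term below a target profit standing in for the probabilistic risk measure. All coefficients are invented placeholders.

```python
import numpy as np
from scipy.optimize import linprog

# scenarios: (probability, yield of crop A, yield of crop B) in t/ha -- all invented
scenarios = [(0.3, 2.0, 3.5), (0.5, 2.5, 3.0), (0.2, 1.5, 2.0)]
price = np.array([200.0, 150.0])     # selling price, $/t
cost = np.array([150.0, 100.0])      # planting cost, $/ha
land = 100.0                         # available land, ha
target = 30_000.0                    # target profit, $
rho = 0.5                            # weight on expected shortfall below the target

S = len(scenarios)
n = 2 + 2 * S + S                    # variables: [x_A, x_B, y_{s,A}, y_{s,B} per s, z_s per s]
c = np.zeros(n)
c[:2] = cost                                      # minimise cost - expected revenue + risk term
for s, (p, _, _) in enumerate(scenarios):
    c[2 + 2 * s: 4 + 2 * s] = -p * price
    c[2 + 2 * S + s] = rho * p

A_ub, b_ub = [np.zeros(n)], [land]
A_ub[0][:2] = 1.0                                 # land limit: x_A + x_B <= land
for s, (_, yA, yB) in enumerate(scenarios):
    for i, yld in enumerate((yA, yB)):            # sales cannot exceed harvest: y - yield*x <= 0
        row = np.zeros(n); row[i] = -yld; row[2 + 2 * s + i] = 1.0
        A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(n)                             # shortfall: z_s >= target - profit_s
    row[:2] = cost; row[2 + 2 * s: 4 + 2 * s] = -price; row[2 + 2 * S + s] = -1.0
    A_ub.append(row); b_ub.append(-target)

res = linprog(c, A_ub=np.vstack(A_ub), b_ub=b_ub, bounds=[(0, None)] * n)
print("plant (ha):", np.round(res.x[:2], 1), " objective:", round(-res.fun, 1))
```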

  18. A two-stage flow-based intrusion detection model for next-generation networks.

    Science.gov (United States)

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.
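
    A rough sketch of the two-stage idea on synthetic flow features is given below: stage 1 separates anomalous flows with a one-class SVM, and stage 2 groups the flagged flows into alert clusters. The paper uses an enhanced one-class SVM and a self-organizing map; here an off-the-shelf OneClassSVM and k-means serve purely as readily available stand-ins, and the six flow features are made up.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(2000, 6))   # fake flow features (bytes, packets, duration, ...)
attack = rng.normal(4.0, 1.5, size=(100, 6))    # a blob of anomalous flows
X = StandardScaler().fit_transform(np.vstack([normal, attack]))

# stage 1: one-class SVM trained on (assumed) benign traffic only
stage1 = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X[:2000])
flagged = X[stage1.predict(X) == -1]            # -1 => predicted malicious/outlier flow

# stage 2: group the flagged flows into alert clusters (k-means standing in for a SOM)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(flagged)
for k in range(3):
    print(f"alert cluster {k}: {np.sum(labels == k)} flagged flows")
```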

  19. A two-stage bioprocess for hydrogen and methane production from rice straw bioethanol residues.

    Science.gov (United States)

    Cheng, Hai-Hsuan; Whang, Liang-Ming; Wu, Chao-Wei; Chung, Man-Chien

    2012-06-01

    This study evaluates a two-stage bioprocess for recovering hydrogen and methane while treating the organic residues of fermentative bioethanol production from rice straw. The results indicate that controlling a proper volumetric loading rate, substrate-to-biomass ratio, or F/M ratio is important for maximizing biohydrogen production from rice straw bioethanol residues. Clostridium tyrobutyricum, identified as the major hydrogen-producing bacterium enriched in the hydrogen bioreactor, likely utilizes lactate and acetate for biohydrogen production. The occurrence of acetogenesis during biohydrogen fermentation may reduce the B/A ratio and lead to lower hydrogen production. Organic residues remaining in the effluent of the hydrogen bioreactor can be effectively converted to methane at a rate of 2.8 mmol CH4/gVSS/h at a VLR of 4.6 kg COD/m3/d. Finally, approximately 75% of the COD in rice straw bioethanol residues can be removed, and of that, 1.3% and 66.1% of the COD can be recovered in the forms of hydrogen and methane, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. Antioxidant activity and total phenolic content of Moringa oleifera leaves in two stages of maturity.

    Science.gov (United States)

    Sreelatha, S; Padma, P R

    2009-12-01

    Antioxidants play an important role in inhibiting and scavenging free radicals, thus protecting humans against infections and degenerative diseases. Current research is directed towards natural antioxidants of plant origin because of their safety as therapeutics. Moringa oleifera is used in Indian traditional medicine for a wide range of ailments. To understand the mechanism of its pharmacological actions, the antioxidant properties of Moringa oleifera leaf extracts were tested at two stages of maturity using standard in vitro models. The successive aqueous extract of Moringa oleifera exhibited a strong scavenging effect on the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical, superoxide and nitric oxide radicals, and inhibited lipid peroxidation. The free radical scavenging effect of Moringa oleifera leaf extract was comparable with that of the reference antioxidants. The data obtained in the present study suggest that the extracts of both mature and tender Moringa oleifera leaves have potent antioxidant activity against free radicals, prevent oxidative damage to major biomolecules and afford significant protection against oxidative damage.

  1. Spectral Characteristic Based on Fabry—Pérot Laser Diode with Two-Stage Optical Feedback

    International Nuclear Information System (INIS)

    Wu Jian-Wei; Nakarmi Bikash

    2013-01-01

    An optical device consisting of a multi-mode Fabry-Pérot laser diode (MMFP-LD) with two-stage optical feedback is proposed and experimentally demonstrated. The results show that single-mode output with a side-mode suppression ratio (SMSR) of ∼21.7 dB is attained using the first-stage feedback. With the second-stage feedback, the SMSR of single-mode operation could be increased to ∼28.5 dB when an injected feedback power of −29 dBm is introduced into the laser diode. As the feedback power is increased beyond −29 dBm, the SMSR decays rapidly to a very low level, so that clear multi-mode operation in the output spectrum is obtained at a feedback power of −15.5 dBm. Thus, the transition between single- and multi-mode operation can be flexibly controlled by adjusting the injected power in the second-stage feedback system. Additionally, in the case of injection locking, the resulting SMSR and output power at the locked wavelength are as high as ∼50 dB and ∼5.8 dBm, respectively.

  2. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    International Nuclear Information System (INIS)

    Khramtsov, P P; Vasetskij, V A; Makhnach, A I; Grishenko, V M; Chernik, M Yu; Shikh, I A; Doroshko, M V

    2016-01-01

    The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a pressing problem, owing to the increasing duration of space flights and the necessity of providing adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents the results of an experimental study of a two-stage light-gas magnetoplasma launcher for accelerating a macroparticle, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. The particles were launched in vacuum. A speed-measuring method was developed for projectile velocity control; the error of this method does not exceed 5%. The flight of the projectile from the barrel and the collision of the particle with a target were recorded with a high-speed camera. The results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, helium was used in these experiments for safety reasons. Therefore, the range of mass and velocity of the accelerated particles can be expected to be extended by using hydrogen as the accelerating gas. (paper)

  3. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    Science.gov (United States)

    Khramtsov, P. P.; Vasetskij, V. A.; Makhnach, A. I.; Grishenko, V. M.; Chernik, M. Yu; Shikh, I. A.; Doroshko, M. V.

    2016-11-01

    The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a pressing problem, owing to the increasing duration of space flights and the necessity of providing adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents the results of an experimental study of a two-stage light-gas magnetoplasma launcher for accelerating a macroparticle, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. The particles were launched in vacuum. A speed-measuring method was developed for projectile velocity control; the error of this method does not exceed 5%. The flight of the projectile from the barrel and the collision of the particle with a target were recorded with a high-speed camera. The results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, helium was used in these experiments for safety reasons. Therefore, the range of mass and velocity of the accelerated particles can be expected to be extended by using hydrogen as the accelerating gas.

  4. Two stage S-N curve in corrosion fatigue of extruded magnesium alloy AZ31

    Directory of Open Access Journals (Sweden)

    Yoshiharu Mutoh

    2009-11-01

    Full Text Available Tension-compression fatigue tests of extruded AZ31 magnesium alloys were carried out under corrosive environments: (a) a high-humidity environment (80% RH) and (b) a 5 wt.% NaCl environment. It was found that the reduction rate of fatigue strength due to the corrosive environment was 0.12 under high humidity and 0.53 under the NaCl environment. It was also observed that under corrosive environments the S-N curve was not a single curve but a two-stage curve. Above the fatigue limit under low humidity, the crack nucleation mechanism was a localized slip band formation mechanism. Below the fatigue limit under low humidity, the reduction in fatigue strength was attributed to corrosion pit formation and growth to the critical size for fatigue crack nucleation under the combined effect of cyclic loading and the corrosive environment. The critical size was attained when the stress intensity factor range reached the threshold value for crack growth.

  5. Spread and Control of Mobile Benign Worm Based on Two-Stage Repairing Mechanism

    Directory of Open Access Journals (Sweden)

    Meng Wang

    2014-01-01

    Full Text Available Both in traditional social networks and in mobile network environments, worms are a serious threat, and this threat is growing all the time. Mobile smartphones generally promote the development of mobile networks. Traditional antivirus technologies have become powerless when facing mobile networks. The development of benign worms, especially active benign worms and passive benign worms, has become a new network security measure. In this paper, we focus on the spread of worms in a mobile environment and propose a benign-worm control and repair mechanism. The control process of mobile benign worms is divided into two stages: the first stage is rapid repair control, which uses active benign worms to deal with malicious worms in the mobile network; when the network is relatively stable, the process enters the second, post-repair stage and uses a passive mode to optimize the environment for the purpose of controlling the mobile network. Considering whether benign worms exist or not, we simplified the model and analyzed four situations. Finally, we use simulation to verify the model. This control mechanism for benign worm propagation provides guidance for maintaining network security.
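
    The toy model below is a generic SIR-style sketch of such a two-stage control scheme, not the equations of this paper: an active benign worm spreads aggressively until a switch time, after which it falls back to a slower passive repair mode. All rates and the switch time are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

beta_m = 0.5        # malicious worm contact rate
beta_active = 0.8   # benign worm contact rate, stage 1 (active repair)
beta_passive = 0.1  # benign worm contact rate, stage 2 (passive post-repair)
t_switch = 10.0     # time at which the network is considered "relatively stable"

def rhs(t, y):
    S, I, B = y                                  # susceptible, infected, benign-patched
    beta_b = beta_active if t < t_switch else beta_passive
    dS = -beta_m * S * I - beta_b * S * B        # benign worm immunises susceptibles
    dI = beta_m * S * I - beta_b * I * B         # ... and cleans infected hosts
    dB = beta_b * (S + I) * B
    return [dS, dI, dB]

sol = solve_ivp(rhs, (0.0, 60.0), y0=[0.94, 0.05, 0.01], max_step=0.1)
print(f"infected fraction at t=60: {sol.y[1, -1]:.4f}")
```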

  6. Two-Stage Regularized Linear Discriminant Analysis for 2-D Data.

    Science.gov (United States)

    Zhao, Jianhua; Shi, Lei; Zhu, Ji

    2015-08-01

    Fisher linear discriminant analysis (LDA) involves within-class and between-class covariance matrices. For 2-D data such as images, regularized LDA (RLDA) can improve LDA due to the regularized eigenvalues of the estimated within-class matrix. However, it fails to consider the eigenvectors and the estimated between-class matrix. To improve these two matrices simultaneously, we propose in this paper a new two-stage method for 2-D data, namely a bidirectional LDA (BLDA) in the first stage and the RLDA in the second stage, where both BLDA and RLDA are based on the Fisher criterion that tackles correlation. BLDA performs the LDA under special separable covariance constraints that incorporate the row and column correlations inherent in 2-D data. The main novelty is that we propose a simple but effective statistical test to determine the subspace dimensionality in the first stage. As a result, the first stage reduces the dimensionality substantially while keeping the significant discriminant information in the data. This enables the second stage to perform RLDA in a much lower dimensional subspace, and thus improves the two estimated matrices simultaneously. Experiments on a number of 2-D synthetic and real-world data sets show that BLDA+RLDA outperforms several closely related competitors.
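
    As a point of reference only (not the paper's BLDA+RLDA pipeline), the snippet below shows plain regularized LDA, where shrinkage of the within-class covariance plays the role of regularization; the 2-D inputs are simply flattened, which is exactly the step the bidirectional first stage is designed to improve upon. Data are synthetic.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n, h, w = 200, 8, 8                        # 200 tiny 8x8 "images", two classes
X = rng.normal(size=(n, h, w))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.3                           # shift class 1 so the classes are separable

X_flat = X.reshape(n, h * w)               # flattening discards the row/column structure
rlda = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto")  # Ledoit-Wolf shrinkage
rlda.fit(X_flat[:150], y[:150])
print("held-out accuracy:", rlda.score(X_flat[150:], y[150:]))
```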

  7. Design of Korean nuclear reliability data-base network using a two-stage Bayesian concept

    International Nuclear Information System (INIS)

    Kim, T.W.; Jeong, K.S.; Chae, S.K.

    1987-01-01

    In an analysis of the probabilistic risk, safety, and reliability of a nuclear power plant, a reliability data base (DB) must be established first. As the importance of the reliability data base increases, event reporting systems such as the US Nuclear Regulatory Commission's Licensee Event Report and the International Atomic Energy Agency's Incident Reporting System have been developed. In Korea, however, a systematic reliability data base is not yet available, and foreign data bases have therefore been quoted directly in reliability analyses of Korean plants. In order to develop a reliability data base for Korean plants, the question of which methodology to use must be answered, and the application limits of the selected method must be clarified. After the start of commercial operation of Korea Nuclear Unit-1 (KNU-1) in 1978, six nuclear power plants began operation. Of these, only KNU-3 is a Canada Deuterium Uranium pressurized heavy-water reactor; the others are all pressurized water reactors. This paper describes the proposed reliability data-base network (KNRDS) for Korean nuclear power plants in the context of Kaplan's two-stage Bayesian (TSB) procedure. It describes the TSB concept for obtaining a Korean plant-specific reliability data base, which is updated by incorporating both reported generic reliability data and the operating experience of similar plants
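
    A heavily simplified stand-in for the flavor of a two-stage Bayesian update is sketched below (it is not Kaplan's full TSB procedure, which treats the population-variability distribution itself as uncertain): stage 1 builds a generic gamma prior for a component failure rate from other plants' experience by moment matching, and stage 2 updates it with plant-specific data via the conjugate gamma-Poisson rule. All failure counts and exposure times are invented.

```python
import numpy as np
from scipy.stats import gamma

# stage 1: generic experience from "similar plants": (failures, operating hours)
generic = [(2, 8.0e4), (5, 1.2e5), (1, 6.0e4), (3, 9.0e4)]
rates = np.array([f / t for f, t in generic])
m, v = rates.mean(), rates.var(ddof=1)
alpha0, beta0 = m**2 / v, m / v                  # moment-matched gamma prior

# stage 2: update with plant-specific evidence (conjugate gamma-Poisson)
failures, hours = 1, 4.0e4
alpha1, beta1 = alpha0 + failures, beta0 + hours

lo, hi = gamma.ppf([0.05, 0.95], a=alpha1, scale=1.0 / beta1)
print(f"prior mean {alpha0/beta0:.2e} /h -> posterior mean {alpha1/beta1:.2e} /h")
print(f"posterior 90% interval: [{lo:.2e}, {hi:.2e}] /h")
```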

  8. Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm

    International Nuclear Information System (INIS)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2002-01-01

    A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so that optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is only LP optimization applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained
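
    For orientation, the skeleton below shows the bare crossover/mutation/selection/elitism loop that such a method is built around, applied to a toy continuous fitness function rather than a coupled neutronic and thermal-hydraulic core simulator; it is a generic illustration, not the deterministic-operator GA or the if-then CRP heuristic of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_genes, pop_size, n_gen = 20, 40, 100

def fitness(ind):                                # toy stand-in for a 3-D core evaluation
    return -np.sum((ind - np.linspace(0.0, 1.0, n_genes)) ** 2)

pop = rng.random((pop_size, n_genes))
for _ in range(n_gen):
    order = np.argsort([fitness(ind) for ind in pop])[::-1]
    elite = pop[order[:2]].copy()                # elitism: carry the two best forward
    parents = pop[order[: pop_size // 2]]        # truncation selection
    children = []
    while len(children) < pop_size - 2:
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_genes)           # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        mutate = rng.random(n_genes) < 0.05      # mutation
        child[mutate] = rng.random(mutate.sum())
        children.append(child)
    pop = np.vstack([elite, np.array(children)])

best = max(pop, key=fitness)
print("best fitness found:", round(fitness(best), 4))
```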

  9. Characterization of a low frequency magnetic noise from a two stage pulse tube cryocooler

    International Nuclear Information System (INIS)

    Eshraghi, Mohamad Javad; Sasada, Ichiro; Kim, Jin Mok; Lee, Yong Ho

    2008-01-01

    The magnetic noise of a two-stage pulse tube cryocooler (PT) has been measured by a fundamental-mode orthogonal fluxgate magnetometer and by an LTS SQUID gradiometer. The magnetometer was installed in a Dewar made of aluminum, 12 cm away from the section containing the magnetic regenerative materials of the PT. The magnetic noise shows a clear peak at 1.8 Hz, which is the fundamental frequency of the He gas pumping rate. During the cooling process, the 1.8 Hz magnetic noise peaked when the cold stage temperature was at (or close to) 12 K, resembling the 1.8 Hz variation of the temperature of the second cold stage. Hence we attribute the main source of this magnetic noise to the temperature dependence of the magnetic susceptibility of magnetic regenerative materials such as Er3Ni and HoCu2 used in the second stage. We point out that superconducting magnetic shielding by lead sheets reduced the interfering magnetic noise generated from this part. With this scheme, the magnetic noise amplitude measured with the first-order gradiometer DROS mounted in the vicinity of the magnetic regenerator, at the operating point where the noise amplitude is minimum (which could be found from the fluxgate measurement results), was less than 500 pT peak to peak, whereas without lead shielding the noise level was higher than the dynamic range of the SQUID instrumentation, which is around ±10 nT. (author)

  10. Heat transfer and pressure measurements and comparison with prediction for the SSME two-stage turbine

    Science.gov (United States)

    Dunn, M. G.; Kim, J.

    1992-01-01

    Time averaged Stanton number and surface pressure distributions are reported for the first stage vane row, the first stage blade row, and the second stage vane row of the Rocketdyne Space Shuttle Main Engine (SSME) two-stage fuel-side turbine. Unsteady pressure envelope measurements for the first blade are also reported. These measurements were made at 10 percent, 50 percent, and 90 percent span on both the pressure and suction surfaces of the first stage components. Additional Stanton number measurements were made on the first stage blade platform, blade tip, and shroud, and at 50 percent span on the second vane. A shock tube was used as a short duration source of heated and pressurized air to which the turbine was subjected. Platinum thin film heat flux gages were used to obtain the heat flux measurements, while miniature silicon diaphragm flush-mounted pressure transducers were used to obtain the pressure measurements. The first stage vane Stanton number distributions are compared with predictions obtained using a version of STAN5 and quasi-3D Navier-Stokes solution. This same quasi-3D N-S code was also used to obtain predictions for the first blade and the second vane.

  11. Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

    Energy Technology Data Exchange (ETDEWEB)

    Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven; Heyer, Gerhard; Koch, Steffen; Ertl, Thomas; Weber, Gunther H.

    2010-07-19

    During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high dimensional information space to both two dimensional (2-D) and three dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.
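
    As a minimal illustration of the first step such a pipeline rests on (and not the authors' topology-based framework), the snippet below builds the tf-idf document-term matrix and projects the resulting high-dimensional point cloud to 2-D with a truncated SVD; the document list is a toy example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = [
    "nuclear reactor fuel loading optimization",
    "genetic algorithm for reactor core design",
    "anaerobic digestion of manure for methane",
    "hydrogen production by dark fermentation",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)     # sparse tf-idf matrix
coords = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
for doc, (x, y) in zip(docs, coords):
    print(f"({x:+.2f}, {y:+.2f})  {doc}")
```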

  12. A Concept of Two-Stage-To-Orbit Reusable Launch Vehicle

    Science.gov (United States)

    Yang, Yong; Wang, Xiaojun; Tang, Yihua

    2002-01-01

    Reusable Launch Vehicles (RLVs) have the capability of delivering a wide range of payloads to earth orbit with greater reliability, lower cost, and more flexibility and operability than any of today's launch vehicles, and they are the goal of future space transportation systems. Past experience with single-stage-to-orbit (SSTO) RLVs, such as NASA's NASP project, which aimed at developing a rocket-based combined-cycle (RBCC) airplane, and X-33, which aimed at developing a rocket RLV, indicates that an SSTO RLV cannot be realized in the next few years based on state-of-the-art technologies. This paper presents a concept for an all-rocket two-stage-to-orbit (TSTO) reusable launch vehicle. The TSTO RLV comprises an orbiter and a booster stage, with the orbiter mounted on top of the booster stage. The TSTO RLV takes off vertically. At an altitude of about 50 km the booster stage is separated from the orbiter, and it returns and lands by parachutes and airbags, or lands horizontally by means of its own propulsion system. The orbiter continues its ascent flight and delivers the payload into LEO. After completing its orbital mission, the orbiter re-enters the atmosphere, automatically flies to the ground base and finally lands horizontally on the runway. A TSTO RLV has fewer technological difficulties and less risk than an SSTO, and may be the practical approach to an RLV in the near future.

  13. Evaluation of carcinogenic potential of diuron in a rat mammary two-stage carcinogenesis model.

    Science.gov (United States)

    Grassi, Tony Fernando; Rodrigues, Maria Aparecida Marchesan; de Camargo, João Lauro Viana; Barbisan, Luís Fernando

    2011-04-01

    This study aimed to evaluate the carcinogenic potential of the herbicide Diuron in a two-stage rat medium-term mammary carcinogenesis model initiated by 7,12-dimethylbenz(a)anthracene (DMBA). Female seven-week-old Sprague-Dawley (SD) rats were allocated to six groups: groups G1 to G4 received intragastrically (i.g.) a single 50 mg/kg dose of DMBA; groups G5 and G6 received a single administration of canola oil (the vehicle of DMBA). Groups G1 and G5 received a basal diet, and groups G2, G3, G4, and G6 were fed the basal diet with the addition of Diuron at 250, 1250, 2500, and 2500 ppm, respectively. After twenty-five weeks, the animals were euthanized and mammary tumors were histologically confirmed and quantified. Tumor samples were also processed for immunohistochemical evaluation of the expression of proliferating cell nuclear antigen (PCNA), cleaved caspase-3, estrogen receptor-α (ER-α), p63, bcl-2, and bak. Diuron treatment did not increase the incidence or multiplicity of mammary tumors (groups G2 to G4 versus group G1). Also, exposure to Diuron did not alter tumor growth (cell proliferation and apoptosis indexes) or immunoreactivity to ER-α, p63 (a myoepithelial marker), or bcl-2 and bak (apoptosis regulatory proteins). These findings indicate that Diuron does not have a promoting potential on mammary carcinogenesis in female SD rats initiated with DMBA.

  14. Evidence that viral RNAs have evolved for efficient, two-stage packaging.

    Science.gov (United States)

    Borodavka, Alexander; Tuma, Roman; Stockley, Peter G

    2012-09-25

    Genome packaging is an essential step in virus replication and a potential drug target. Single-stranded RNA viruses have been thought to encapsidate their genomes by gradual co-assembly with capsid subunits. In contrast, using a single molecule fluorescence assay to monitor RNA conformation and virus assembly in real time, with two viruses from differing structural families, we have discovered that packaging is a two-stage process. Initially, the genomic RNAs undergo rapid and dramatic (approximately 20-30%) collapse of their solution conformations upon addition of cognate coat proteins. The collapse occurs with a substoichiometric ratio of coat protein subunits and is followed by a gradual increase in particle size, consistent with the recruitment of additional subunits to complete a growing capsid. Equivalently sized nonviral RNAs, including high copy potential in vivo competitor mRNAs, do not collapse. They do support particle assembly, however, but yield many aberrant structures in contrast to viral RNAs that make only capsids of the correct size. The collapse is specific to viral RNA fragments, implying that it depends on a series of specific RNA-protein interactions. For bacteriophage MS2, we have shown that collapse is driven by subsequent protein-protein interactions, consistent with the RNA-protein contacts occurring in defined spatial locations. Conformational collapse appears to be a distinct feature of viral RNA that has evolved to facilitate assembly. Aspects of this process mimic those seen in ribosome assembly.

  15. Production of acids and alcohols from syngas in a two-stage continuous fermentation process.

    Science.gov (United States)

    Abubackar, Haris Nalakath; Veiga, María C; Kennes, Christian

    2018-04-01

    A two-stage continuous system with two stirred tank reactors in series was utilized to perform syngas fermentation using Clostridium carboxidivorans. The first bioreactor (bioreactor 1) was maintained at pH 6 to promote acidogenesis and the second one (bioreactor 2) at pH 5 to stimulate solventogenesis. Both reactors were operated in continuous mode by feeding syngas (CO:CO 2 :H 2 :N 2 ; 30:10:20:40; vol%) at a constant flow rate while supplying a nutrient medium at different flow rates of 8.1, 15, 22 and 30 ml/h. A cell recycling unit was added to bioreactor 2 in order to recycle the cells back to the reactor, maintaining the OD 600 around 1 in bioreactor 2 throughout the experimental run. When comparing the flow rates, the best results in terms of solvent production were obtained with a flow rate of 22 ml/h, reaching the highest average outlet concentration for alcohols (1.51 g/L) and the most favorable alcohol/acid ratio of 0.32. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Two stage enucleation and deflation of a large unicystic ameloblastoma with mural invasion in mandible.

    Science.gov (United States)

    Sasaki, Ryo; Watanabe, Yorikatsu; Ando, Tomohiro; Akizuki, Tanetaka

    2014-06-01

    A treatment strategy for unicystic ameloblastoma (UA) should be decided according to its pathological type, luminal or mural. The luminal type of UA can be treated by enucleation alone, but UA with mural invasion should be treated as aggressively as conventional ameloblastomas. However, it is difficult to diagnose the subtype of UA from an initial biopsy, and there is a possibility that the lesion is an ordinary cyst or a keratocystic odontogenic tumor, so aggressive initial surgery risks overtreatment. Therefore, in this study enucleation of the cyst wall and deflation were performed first, and the pathological findings confirmed mural invasion into the cystic wall, leading to a second surgery. The second surgery consisted of enucleation of scar tissue, bone curettage, and deflation, and contributed to reducing the recurrence rate by removing tumor nests in the scar tissue and new bone, enhancing new bone formation, and shrinking the expanded mandible by fenestration. In this study, a large UA with mural invasion involving the condyle was treated by "two-stage enucleation and deflation" in a 20-year-old patient.

  17. Two-Stage Tissue-Expander Breast Reconstruction: A Focus on the Surgical Technique

    Directory of Open Access Journals (Sweden)

    Elisa Bellini

    2017-01-01

    Full Text Available Objective. Breast cancer, the most common malignancy in women, comprises 18% of all female cancers. Mastectomy is an essential intervention to save lives, but it can destroy one’s body image, causing both physical and psychological trauma. Reconstruction is an important step in restoring patient quality of life after the mutilating treatment. Material and Methods. Tissue expanders and implants are now commonly used in breast reconstruction. Autologous reconstruction allows a better aesthetic result; however, many patients prefer implant reconstruction due to the shorter operation time and lack of donor site morbidity. Moreover, this reconstruction strategy is safe and can be performed in patients with multiple health problems. Tissue-expander reconstruction is conventionally performed as a two-stage procedure starting immediately after mammary gland removal. Results. Mastectomy is a destructive but essential intervention for women with breast cancer. Tissue expansion breast reconstruction is a safe, reliable, and efficacious procedure with considerable psychological benefits since it provides a healthy body image. Conclusion. This article focuses on this surgical technique and how to achieve the best reconstruction possible.

  18. Stepwise encapsulation and controlled two-stage release system for cis-Diamminediiodoplatinum

    Directory of Open Access Journals (Sweden)

    Chen Y

    2014-06-01

    Full Text Available Abstract: cis-Diamminediiodoplatinum (cis-DIDP) is a cisplatin-like anticancer drug with higher anticancer activity, but lower stability and price, than cisplatin. In this study, a cis-DIDP carrier system based on micro-sized stearic acid was prepared by an emulsion solvent evaporation method. The maximum drug loading capacity of the cis-DIDP-loaded solid lipid nanoparticles was 22.03%, and their encapsulation efficiency was 97.24%. In vitro drug release in phosphate-buffered saline (pH = 7.4) at 37.5°C exhibited a unique two-stage process, which could prove beneficial for patients with tumors and malignancies. MTT (3-[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) assay results showed that cis-DIDP released from the cis-DIDP-loaded solid lipid nanoparticles had better inhibition activity than cis-DIDP that had not been loaded. Keywords: stearic acid, emulsion solvent evaporation method, drug delivery, cis-DIDP, in vitro

  19. Effluent composition prediction of a two-stage anaerobic digestion process: machine learning and stoichiometry techniques.

    Science.gov (United States)

    Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene

    2018-05-16

    Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method for predicting the effluent composition of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent, the most inhibiting and polluting compound in AD, was predicted. Despite the limited data available, the SVM-based model outperformed the analytical method for TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy than Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
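
    A minimal sketch of the machine-learning side of such a comparison is shown below: support vector regression mapping influent measurements to effluent TAN, scored by relative average error. The four features, the synthetic relationship and all numbers are placeholders, not the poultry-manure data or the tuned SVM of the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform([1, 10, 5, 8], [5, 60, 40, 50], size=(120, 4))  # fake influent TAN, COD, TS, TVS
y = 0.6 * X[:, 0] + 0.02 * X[:, 1] + rng.normal(0, 0.2, 120)    # fake effluent TAN (g/L)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_tr, y_tr)
rel_err = np.mean(np.abs(model.predict(X_te) - y_te) / y_te) * 100.0
print(f"relative average error on held-out data: {rel_err:.1f}%")
```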

  20. A two-stage preventive maintenance optimization model incorporating two-dimensional extended warranty

    International Nuclear Information System (INIS)

    Su, Chun; Wang, Xiaolin

    2016-01-01

    In practice, customers can decide whether to buy an extended warranty or not, at the time of item sale or at the end of the basic warranty. In this paper, by taking into account the moments of customers purchasing two-dimensional extended warranty, the optimization of imperfect preventive maintenance for repairable items is investigated from the manufacturer's perspective. A two-dimensional preventive maintenance strategy is proposed, under which the item is preventively maintained according to a specified age interval or usage interval, whichever occurs first. It is highlighted that when the extended warranty is purchased upon the expiration of the basic warranty, the manufacturer faces a two-stage preventive maintenance optimization problem. Moreover, in the second stage, the possibility of reducing the servicing cost over the extended warranty period is explored by classifying customers on the basis of their usage rates and then providing them with customized preventive maintenance programs. Numerical examples show that offering customized preventive maintenance programs can reduce the manufacturer's warranty cost, while a larger saving in warranty cost comes from encouraging customers to buy the extended warranty at the time of item sale. - Highlights: • A two-dimensional PM strategy is investigated. • Imperfect PM strategy is optimized by considering both two-dimensional BW and EW. • Customers are categorized based on their usage rates throughout the BW period. • Servicing cost of the EW is reduced by offering customized PM programs. • Customers buying the EW at the time of sale is preferred for the manufacturer.

  1. Implications of the two stage clonal expansion model to radiation risk estimation

    International Nuclear Information System (INIS)

    Curtis, S.B.; Hazelton, W.D.; Luebeck, E.G.; Moolgavkar, S.H.

    2003-01-01

    The Two Stage Clonal Expansion Model of carcinogenesis has been applied to the analysis of several cohorts of persons exposed to chronic exposures of high and low LET radiation. The results of these analyses are: (1) the importance of radiation-induced initiation is small and, if present at all, contributes to cancers only late in life and only if exposure begins early in life, (2) radiation-induced promotion dominates and produces the majority of cancers by accelerating proliferation of already-initiated cells, and (3) radiation-induced malignant conversion is important only during and immediately after exposure ceases and tends to dominate only late in life, acting on already initiated and promoted cells. Two populations, the Colorado Plateau miners (high-LET, radon exposed) and the Canadian radiation workers (low-LET, gamma ray exposed) are used as examples to show the time dependence of the hazard function and the relative importance of the three hypothesized processes (initiation, promotion and malignant conversion) for each radiation quality

  2. Two-Stage Dynamic Pricing and Advertising Strategies for Online Video Services

    Directory of Open Access Journals (Sweden)

    Zhi Li

    2017-01-01

    Full Text Available As the demand for online video services increases intensively, the selection of business models has drawn great attention from online providers. Among them, the pay-per-view mode and the advertising mode are two important resource modes, for which a reasonable fee and a suitable volume of ads need to be determined. This paper establishes an analytical framework for studying the optimal dynamic pricing and advertising strategies of online providers; it shows how the strategies are influenced by the time for which videos are available and by the viewers' emotional factor. We create a two-stage strategy of revenue models involving a single fee mode and a mixed fee-free mode and derive the optimal fee and advertising level of online video services. According to the results, the optimal video price and ads volume vary dynamically over time. The viewers' aversion to advertising has a direct effect on both the volume of ads and the number of viewers who select low-quality content. The optimal volume of ads decreases as the ads-aversion coefficient increases, while it increases with the quality of the videos. The results also indicate that, in the long run, a pure fee mode or a pure free mode is the optimal strategy for online providers.

  3. Compressed gas combined single- and two-stage light-gas gun

    Science.gov (United States)

    Lamberson, L. E.; Boettcher, P. A.

    2018-02-01

    With more than 1 trillion artificial objects smaller than 1 μm in low and geostationary Earth orbit, space assets are subject to the constant threat of space debris impact. These collisions occur at hypervelocity or speeds greater than 3 km/s. In order to characterize material behavior under this extreme event as well as study next-generation materials for space exploration, this paper presents a unique two-stage light-gas gun capable of replicating hypervelocity impacts. While a limited number of these types of facilities exist, they typically are extremely large and can be costly and dangerous to operate. The design presented in this paper is novel in two distinct ways. First, it does not use a form of combustion in the first stage. The projectile is accelerated from a pressure differential using air and inert gases (or purely inert gases), firing a projectile in a nominal range of 1-4 km/s. Second, the design is modular in that the first stage sits on a track sled and can be pulled back and used in itself to study lower speed impacts without any further modifications, with the first stage piston as the impactor. The modularity of the instrument allows the ability to investigate three orders of magnitude of impact velocities, between 10¹ and 10³ m/s, in a single, relatively small, cost-effective instrument.

  4. Two-stage combined treatment of acid mine drainage and municipal wastewater.

    Science.gov (United States)

    Deng, Dongyang; Lin, Lian-Shin

    2013-01-01

    This study examined the feasibility of the combined treatment of field-collected acid mine drainages (AMD, pH = 4.2 ± 0.9, iron = 112 ± 118 mg/L, sulfate = 1,846 ± 594 mg/L) and municipal wastewater (MWW, avg. chemical oxygen demand (COD) = 234-333 mg/L) using a two-stage process. The process consisted of batch mixing of the two wastes to condition the mixture solutions, followed by anaerobic biological treatment. The mixings performed under a range of AMD/MWW ratios resulted in phosphate removal of 9 to ∼100%, mixture pH values of 6.2-7.9, and COD/sulfate concentration ratios of 0.05-5.4. The biological treatment consistently removed COD and sulfate by >80% from the mixture solutions for COD/sulfate ratios of 0.6-5.4. Alkalinity was produced in the biological treatment, causing increased pH and further removal of metals from the solutions. Scanning electron microscopy of produced sludge with energy dispersion analysis suggested chemical precipitation and associated adsorption and co-precipitation as the mechanisms for metal removal (Fe: >99%, Al: ∼100%, Mn: 75 to ∼100%, Ca: 52-81%, Mg: 13-76%, and Na: 56-76%). The study showed promising results for the treatment method and demonstrated the potential of developing innovative technologies for combined management of the two wastes in mining regions.

  5. HOUSEHOLD FOOD DEMAND IN INDONESIA: A TWO-STAGE BUDGETING APPROACH

    Directory of Open Access Journals (Sweden)

    Agus Widarjono

    2016-05-01

    A two-stage budgeting approach was applied to analyze the food demand in urban areas separated by geographical areas and classified by income groups. The demographically augmented Quadratic Almost Ideal Demand System (QUAIDS) was employed to estimate the demand elasticity. Data from the National Social and Economic Survey of Households (SUSENAS) in 2011 were used. The demand system is a censored model because the data contains zero expenditures and is estimated by employing the consistent two-step estimation procedure to solve biased estimation. The results show that price and income elasticities become less elastic from poor households to rich households. Demand by urban households in Java is more responsive to price but less responsive to income than urban households outside of Java. Simulation policies indicate that an increase in food prices would have more adverse impacts than a decrease in income levels. Poor families would suffer more than rich families from rising food prices and/or decreasing incomes. More importantly, urban households on Java are more vulnerable to an economic crisis, and would respond by reducing their food consumption. Economic policies to stabilize food prices are better than income policies, such as the cash transfer, to maintain the well-being of the population in Indonesia.

  6. A two-stage storage routing model for green roof runoff detention.

    Science.gov (United States)

    Vesuviano, Gianni; Sonnenwald, Fred; Stovin, Virginia

    2014-01-01

    Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator. The time-series modelled runoff is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean R_t^2 = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with different substrate.
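
    For readers unfamiliar with storage routing, the following minimal sketch chains two generic nonlinear reservoirs (one standing in for the substrate, one for the drainage layer and protection mat) and routes a synthetic storm through them; it is not the paper's calibrated sub-models, and all coefficients are hypothetical.

```python
import numpy as np

# Hypothetical reservoir coefficients (q = k * S**n, in mm/min and mm) -- illustrative only.
K_SUB, N_SUB = 0.05, 2.0     # substrate layer
K_DRN, N_DRN = 0.40, 1.5     # drainage layer + protection mat
DT = 1.0                     # time step in minutes

def route(rainfall_mm_per_min):
    """Route a rainfall time series through two nonlinear reservoirs in series."""
    s_sub = s_drn = 0.0
    runoff = []
    for rain in rainfall_mm_per_min:
        q_sub = K_SUB * s_sub**N_SUB          # outflow of the substrate sub-model
        s_sub = max(s_sub + (rain - q_sub) * DT, 0.0)
        q_drn = K_DRN * s_drn**N_DRN          # outflow of the drainage-layer sub-model
        s_drn = max(s_drn + (q_sub - q_drn) * DT, 0.0)
        runoff.append(q_drn)
    return np.array(runoff)

if __name__ == "__main__":
    storm = np.concatenate([np.full(15, 1.2), np.zeros(60)])  # 15 min at 1.2 mm/min, then dry
    q = route(storm)
    print(f"peak runoff {q.max():.2f} mm/min at t = {q.argmax()} min; "
          f"total depth {q.sum()*DT:.1f} of {storm.sum()*DT:.1f} mm")
```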

  7. Two-stage collaborative global optimization design model of the CHPG microgrid

    Science.gov (United States)

    Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng

    2017-06-01

    With the continuous development of technology and reduction of investment costs, the proportion of renewable energy in the power grid is becoming higher and higher because of its clean and environmentally friendly characteristics, which may require larger-capacity energy storage devices and thus increase the cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper, to minimize the cost by using virtual storage without extending the existing storage system. P2G technology is used as virtual multi-energy storage in CHPG, which can coordinate the operation of the electric energy network and the natural gas network at the same time. Demand response is also a kind of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, and achieve a lower-cost operation scheme simultaneously. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.

  8. A two-stage biological gas to liquid transfer process to convert carbon dioxide into bioplastic

    KAUST Repository

    Al Rowaihi, Israa

    2018-03-06

    The fermentation of carbon dioxide (CO2) with hydrogen (H2) uses available low-cost gases to synthesize acetic acid. Here, we present a two-stage biological process that allows the gas-to-liquid transfer (Bio-GTL) of CO2 into the biopolymer polyhydroxybutyrate (PHB). Using the same medium in both stages, first, acetic acid is produced (3.2 g L−1) by Acetobacterium woodii from 5.2 L of a CO2:H2 gas mixture (15:85 v/v) under elevated pressure (≥2.0 bar) to increase H2 solubility in water. Second, acetic acid is converted to PHB (3 g L−1 acetate into 0.5 g L−1 PHB) by Ralstonia eutropha H16. The efficiencies and space-time yields were evaluated, and our data show the conversion of CO2 into PHB with a 33.3% microbial cell content (percentage of the ratio of PHB concentration to cell concentration) after 217 h. Collectively, our results provide a resourceful platform for future optimization and commercialization of a Bio-GTL process for PHB production.

  9. The Effect of Effluent Recirculation in a Semi-Continuous Two-Stage Anaerobic Digestion System

    Directory of Open Access Journals (Sweden)

    Karthik Rajendran

    2013-06-01

    The effect of recirculation in increasing the organic loading rate (OLR) and decreasing the hydraulic retention time (HRT) in a semi-continuous two-stage anaerobic digestion system using a continuous stirred tank reactor (CSTR) and an upflow anaerobic sludge bed (UASB) was evaluated. Two parallel processes were in operation for 100 days, one with recirculation (closed system) and the other without recirculation (open system). For this purpose, two structurally different carbohydrate-based substrates were used: starch and cotton. The digestion of starch and cotton in the closed system resulted in production of 91% and 80% of the theoretical methane yield during the first 60 days. In contrast, in the open system the methane yield was decreased to 82% and 56% of the theoretical value, for starch and cotton, respectively. The OLR could successfully be increased to 4 gVS/L/day for cotton and 10 gVS/L/day for starch. It is concluded that the recirculation supports the microorganisms for effective hydrolysis of polyhydrocarbons in the CSTR and preserves the nutrients in the system at higher OLRs, thereby improving the overall performance and stability of the process.

  10. Multifunctional Solar Systems Based On Two-Stage Regeneration Absorbent Solution

    Directory of Open Access Journals (Sweden)

    Doroshenko A.V.

    2015-04-01

    Concepts are developed for multifunctional solar systems for dehumidification, heat supply, cooling and air conditioning based on the open absorption cycle with direct regeneration of the absorbent. The solar systems are based on preliminary drying of the air stream and subsequent evaporative cooling, using evaporative coolers of both direct and indirect types. The principle of two-stage regeneration of the absorbent is used in the solar systems, and it forms the basis of both liquid and gas-liquid solar collectors. The main design solutions are developed for a new generation of gas-liquid solar collectors. An analysis of the heat losses in the gas-liquid solar collectors due to convection and radiation is made. The optimal flow rates of gas and liquid, as well as the basic dimensions and configuration of the working channel of the solar collector, are identified. The heat and mass transfer devices belonging to the evaporative cooling system are based on the interaction between a liquid film and the gas stream flowing over it. A multichannel structure of polymeric materials is used to create the packing. Evaporative water and air coolers of both types (direct and indirect) are used for cooling in the solar systems. A preliminary analysis of the possibilities of the multifunctional solar absorption systems for cooling of media and for air conditioning is made on the basis of the authors' experimental data. The designed solar systems feature low power consumption and environmental friendliness.

  11. Armature formation in a railgun using a two-stage light-gas gun injector

    International Nuclear Information System (INIS)

    Hawke, R.S.; Susoeff, A.R.; Asay, J.R.; Hall, C.A.; Konrad, C.H.; Hickman, R.J.; Sauve, J.L.

    1989-01-01

    During the past decade several research groups have tried to achieve reliable acceleration of projectiles to velocities in excess of 8 km/s by using a railgun. All attempts have met with difficulties. However, in the past four years the researchers have come to agree on the nature and causes of the difficulties. The consensus is that the hot plasma armature - used to commutate across the rails and to accelerate the projectile - causes ablation of the barrel wall; this ablation ultimately results in parasitic secondary arc formation through armature separation and/or restrike. The subsequent deprivation of current to the propulsion armature results in a limit to the achievable projectile velocity. Methods of mitigating the process are under study. One method uses a two-stage light-gas gun as a preaccelerator/injector to the railgun. The gas gun serves a double purpose: It quickly accelerates the projectile to a high velocity, and it fills the barrel behind the propulsive armature with insulating gas. While this approach is expected to improve railgun performance, it also requires development of techniques to form the propulsive armature behind the projectile in the high-velocity, high-pressure gas stream. This paper briefly summarizes the problems encountered in attempts to achieve hypervelocities with a railgun. Included is a description of the phenomenology and details of joint Sandia National Laboratories, Albuquerque/Lawrence Livermore National Laboratory (SNLA/LLNL) work at SNLA on a method for forming the needed plasma armature

  12. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    Science.gov (United States)

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through the data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if inputs are of low quality, and have large variations in viewpoint and face expression.

  13. TWO-STAGE REVISION HIP REPLACEMENT IN PATIENTS WITH SEVERE ACETABULUM DEFECT (CASE REPORT)

    Directory of Open Access Journals (Sweden)

    V. V. Pavlov

    2017-01-01

    Favorable short-term results of arthroplasty are observed in 80–90% of cases; however, over a longer follow-up period the percentage of positive outcomes gradually decreases. The need for revision of the prosthesis or its components increases in proportion to the time elapsed since the surgery. In addition, such revision is accompanied by a need to substitute the bone defect of the acetabulum. As a solution, the authors propose to replace pelvic defects in two stages. During the first stage the defect was filled with bone allograft with platelet-rich fibrin (allografting with the use of PRF technology). After remodeling of the allograft, during the second stage the revision surgery is performed by implanting standard prostheses. The authors present a clinical case of a female patient with aseptic loosening of the acetabular component of the prosthesis in the right hip joint, with failed hip function of stage 2 and right limb shortening of 2 cm. The treatment results confirm the efficiency and rationality of the proposed bone grafting option. The authors conclude that bone allografting in combination with the PRF technology is an alternative to the implantation of massive metal implants in the acetabulum, while it reduces the risk of implant-associated infection and of metallosis in surrounding tissues and expands further revision options.

  14. Two stages of directed forgetting: Electrophysiological evidence from a short-term memory task.

    Science.gov (United States)

    Gao, Heming; Cao, Bihua; Qi, Mingming; Wang, Jing; Zhang, Qi; Li, Fuhong

    2016-06-01

    In this study, a short-term memory test was used to investigate the temporal course and neural mechanism of directed forgetting under different memory loads. Within each trial, two memory items with high or low load were presented sequentially, followed by a cue indicating whether the presented items should be remembered. After an interval, subjects were asked to respond to the probe stimuli. The ERPs locked to the cues showed that (a) the effect of cue type was initially observed during the P2 (160-240 ms) time window, with more positive ERPs for remembering relative to forgetting cues; (b) load effects were observed during the N2-P3 (250-500 ms) time window, with more positive ERPs for the high-load than low-load condition; (c) the cue effect was also observed during the N2-P3 time window, with more negative ERPs for forgetting versus remembering cues. These results demonstrated that directed forgetting involves two stages: task-relevance identification and information discarding. The cue effects during the N2 epoch supported the view that directed forgetting is an active process. © 2016 Society for Psychophysiological Research.

  15. Computational Modelling of Large Scale Phage Production Using a Two-Stage Batch Process

    Directory of Open Access Journals (Sweden)

    Konrad Krysiak-Baltyn

    2018-04-01

    Cost-effective and scalable methods for phage production are required to meet an increasing demand for phage as an alternative to antibiotics. Computational models can assist the optimization of such production processes. A model is developed here that can simulate the dynamics of phage population growth and production in a two-stage, self-cycling process. The model incorporates variable infection parameters as a function of bacterial growth rate and employs ordinary differential equations, allowing application to a setup with multiple reactors. The model provides simple cost estimates as a function of key operational parameters including substrate concentration, feed volume and cycling times. For the phage and bacteria pairing examined, costs and productivity varied by three orders of magnitude, with the lowest cost found to be most sensitive to the influent substrate concentration and the low-level setting in the first vessel. An example case study of phage production is also presented, showing how parameter values affect the production costs and estimating production times. The approach presented is flexible and can be used to optimize phage production at laboratory or factory scale by minimizing costs or maximizing productivity.
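
    A minimal sketch of the kind of ODE system such a model builds on is given below: susceptible bacteria grow on substrate (Monod kinetics), phage infect and lyse cells with a fixed burst size, and the adsorption rate is scaled with the instantaneous growth rate to mimic growth-rate-dependent infection parameters. The equations, names and parameter values are illustrative assumptions, not the published model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for a simple batch bacteria-phage system -- illustrative only.
MU_MAX, KS, YIELD = 0.8, 0.5, 5e8          # 1/h, g/L, cells produced per g of substrate
K_ADS0, BURST, LATENT = 1e-9, 100.0, 0.5   # base adsorption rate (mL/cell/h), burst size, latent period (h)

def rates(t, y):
    s, x, i, p = y                      # substrate, susceptible cells, infected cells, phage
    s = max(s, 0.0)
    mu = MU_MAX * s / (KS + s)          # Monod growth rate
    k_ads = K_ADS0 * mu / MU_MAX        # infection parameter scaled with the growth rate
    infection = k_ads * x * p
    lysis = i / LATENT                  # first-order exit from the infected pool
    ds = -mu * x / YIELD
    dx = mu * x - infection
    di = infection - lysis
    dp = BURST * lysis - infection
    return [ds, dx, di, dp]

if __name__ == "__main__":
    y0 = [10.0, 1e6, 0.0, 1e4]          # g/L substrate, cells/mL, infected/mL, phage/mL
    sol = solve_ivp(rates, (0.0, 24.0), y0, method="LSODA", max_step=0.05)
    print(f"final phage titre ~ {sol.y[3, -1]:.2e} per mL after {sol.t[-1]:.0f} h")
```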

  16. Improving neuromodulation technique for refractory voiding dysfunctions: two-stage implant.

    Science.gov (United States)

    Janknegt, R A; Weil, E H; Eerdmans, P H

    1997-03-01

    Neuromodulation is a new technique that uses electrical stimulation of the sacral nerves for patients with refractory urinary urge/frequency or urge-incontinence, and some forms of urinary retention. The limiting factor for receiving an implant is often a failure of the percutaneous nerve evaluation (PNE) test. Present publications mention only about a 50% success score for PNE of all patients, although the micturition diaries and urodynamic parameters are similar. We wanted to investigate whether PNE results improved by using a permanent electrode as a PNE test. This would show that improvement of the PNE technique is feasible. In 10 patients where the original PNE had failed to improve the micturition diary parameters more than 50%, a permanent electrode was implanted by operation. It was connected to an external stimulator. In those cases where the patients improved according to their micturition diary by more than 50% during a period of 4 days, the external stimulator was replaced by a permanent subcutaneous neurostimulator. Eight of the 10 patients had a good to very good result (60% to 90% improvement) during the testing period and received their implant 5 to 14 days after the first stage. The good results of the two-stage implant technique we used indicate that the development of better PNE electrodes may lead to an improvement of the testing technique and better selection between nonresponders and technical failures.

  17. Performance analysis of a potassium-steam two stage vapour cycle

    International Nuclear Information System (INIS)

    Mitachi, Kohshi; Saito, Takeshi

    1983-01-01

    Raising the thermal efficiency of thermal power plants is an important subject. In present thermal power plants using the steam cycle, the plant thermal efficiency has already reached 41 to 42%, with a steam temperature of 839 K and a steam pressure of 24.2 MPa. That is, the thermal efficiency of a steam cycle is approaching its limit. In this study, an analysis was made of the performance of a metal-vapour/steam two-stage Rankine cycle obtained by combining a metal vapour cycle with a present steam cycle. Three different combinations, using a high-temperature potassium regenerative cycle and a low-temperature steam regenerative cycle, a potassium regenerative cycle and a steam reheat and regenerative cycle, and a potassium bleed cycle and a steam reheat and regenerative cycle, were systematically analyzed for the overall thermal efficiency, the output ratio and the flow rate ratio, when the inlet temperature of the potassium turbine, the temperature of the potassium condenser, and other parameters were varied. Though the overall thermal efficiency is improved by lowering the condensing temperature of the potassium vapour, this is limited by the construction because the specific volume of potassium in the low-pressure section increases greatly. In the combination of a potassium vapour regenerative cycle with a steam regenerative cycle, the overall thermal efficiency can be 58.5%, and 60.2% if a steam reheat and regenerative cycle is employed. If a cycle that heats steam with the bled vapour out of the potassium vapour cycle is adopted, an overall thermal efficiency of 63.3% is expected. (Wakatsuki, Y.)
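
    The headline figures can be sanity-checked with the standard binary-cycle relation: if the potassium topping cycle has efficiency η_K and its rejected heat is the sole heat input of a steam bottoming cycle with efficiency η_S, the overall efficiency is η = η_K + (1 − η_K)·η_S. The stage efficiencies below are illustrative guesses, chosen only to show that values in the 25-45% range combine to overall efficiencies near the 58-63% quoted.

```python
def combined_efficiency(eta_topping, eta_bottoming):
    """Overall efficiency of a binary cycle in which the topping cycle's rejected
    heat is the sole heat input of the bottoming cycle."""
    return eta_topping + (1.0 - eta_topping) * eta_bottoming

if __name__ == "__main__":
    # Illustrative stage efficiencies only -- not values taken from the paper.
    for eta_k, eta_s in [(0.25, 0.42), (0.30, 0.42), (0.33, 0.45)]:
        print(f"potassium {eta_k:.0%} + steam {eta_s:.0%} -> overall "
              f"{combined_efficiency(eta_k, eta_s):.1%}")
```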

  18. Comparison of Microalgae Cultivation in Photobioreactor, Open Raceway Pond, and a Two-Stage Hybrid System

    Energy Technology Data Exchange (ETDEWEB)

    Narala, Rakesh R.; Garg, Sourabh; Sharma, Kalpesh K.; Thomas-Hall, Skye R.; Deme, Miklos; Li, Yan; Schenk, Peer M., E-mail: p.schenk@uq.edu.au [Algae Biotechnology Laboratory, School of Agriculture and Food Sciences, The University of Queensland, Brisbane, QLD (Australia)

    2016-08-02

    In the wake of intensive fossil fuel usage and CO2 accumulation in the environment, research is targeted toward sustainable alternative bioenergy that can satisfy the growing need for fuel and also leaves a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved and are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive-pressure air-lift-driven bioreactors with a separate synchronized high-lipid induction phase in nutrient-deplete open raceway ponds. A comparison to either the bioreactor or the open raceway pond cultivation system suggests that this process potentially leads to significantly higher productivity of algal lipids. Nutrients are only added to the closed bioreactors, while open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

  19. Modeling two-stage bunch compression with wakefields: Macroscopic properties and microbunching instability

    Directory of Open Access Journals (Sweden)

    R. A. Bosch

    2008-09-01

    In a two-stage compression and acceleration system, where each stage compresses a chirped bunch in a magnetic chicane, wakefields affect high-current bunches. The longitudinal wakes affect the macroscopic energy and current profiles of the compressed bunch and cause microbunching at short wavelengths. For macroscopic wavelengths, impedance formulas and tracking simulations show that the wakefields can be dominated by the resistive impedance of coherent edge radiation. For this case, we calculate the minimum initial bunch length that can be compressed without producing an upright tail in phase space and associated current spike. Formulas are also obtained for the jitter in the bunch arrival time downstream of the compressors that results from the bunch-to-bunch variation of current, energy, and chirp. Microbunching may occur at short wavelengths where the longitudinal space-charge wakes dominate or at longer wavelengths dominated by edge radiation. We model this range of wavelengths with frequency-dependent impedance before and after each stage of compression. The growth of current and energy modulations is described by analytic gain formulas that agree with simulations.

  20. Feasibility of a two-stage biological aerated filter for depth processing of electroplating-wastewater.

    Science.gov (United States)

    Liu, Bo; Yan, Dongdong; Wang, Qi; Li, Song; Yang, Shaogui; Wu, Wenfei

    2009-09-01

    A "two-stage biological aerated filter" (T-SBAF) consisting of two columns in series was developed to treat electroplating wastewater. Due to the low BOD/CODcr values of electroplating wastewater, a "twice start-up" was employed to reduce the time needed for adaptation of the microorganisms, a process that takes about 20 days. Under steady-state conditions, the removal of CODcr and NH4+-N increased first and then decreased while the hydraulic loadings increased from 0.75 to 1.5 m3 m-2 h-1. The air/water ratio had the same influence on the removal of CODcr and NH4+-N when increasing from 3:1 to 6:1. When the hydraulic loading and air/water ratio were 1.20 m3 m-2 h-1 and 4:1, the optimal removals of CODcr, NH4+-N and total nitrogen (T-N) were 90.13%, 92.51% and 55.46%, respectively. The effluent steadily reached the wastewater reuse standard. Compared to the traditional BAF, the period before backwashing of the T-SBAF could be extended to 10 days, and the recovery time was considerably shortened.

  1. Determinants of the Efficiency Level of Islamic Banking in Indonesia: Two-Stage Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Zulfikar Bagus Pambuko

    2016-12-01

    Efficiency is an important indicator for observing banks' ability to resist and face the tight rivalry in the banking industry. The study aims to evaluate the efficiency and to analyze the determinants of efficiency of Islamic banks in Indonesia over 2010-2013 with a Two-Stage Data Envelopment Analysis approach. The objects of the study are 11 Islamic banks (BUS). The first phase of testing, using the Data Envelopment Analysis (DEA) method, showed that the Islamic banks are inefficient in managing their resources and that small Islamic banks are more efficient than the larger ones. The second phase of testing, using a Tobit model, showed that the Capital Adequacy Ratio (CAR), Return on Assets (ROA), Non-Performing Financing (NPF), Financing to Deposit Ratio (FDR), and Net Interest Margin (NIM) have a positive significant effect on the efficiency of Islamic banks, while Good Corporate Governance (GCG) has a negative significant effect. Moreover, the macroeconomic variables, such as GDP growth and inflation, have no significant effect on the efficiency of Islamic banks. It suggests that realizing the optimum level of Islamic banks' efficiency is related only to bank-specific factors, while the volatility of macroeconomic conditions contributes nothing.
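
    As an illustration of the first stage only, the sketch below solves an input-oriented CCR DEA model (one linear program per bank) with scipy; the bank data and the input and output choices are toy values, and the paper's second-stage Tobit regression is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.r_[1.0, np.zeros(n)]
        A_ub, b_ub = [], []
        for i in range(m):                      # sum_j lambda_j x_ij <= theta * x_io
            A_ub.append(np.r_[-X[o, i], X[:, i]])
            b_ub.append(0.0)
        for r in range(s):                      # sum_j lambda_j y_rj >= y_ro
            A_ub.append(np.r_[0.0, -Y[:, r]])
            b_ub.append(-Y[o, r])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)

if __name__ == "__main__":
    # Toy data: inputs = [deposits, operating cost], outputs = [financing, income].
    X = np.array([[100, 12], [80, 9], [120, 20], [60, 8]], dtype=float)
    Y = np.array([[90, 10], [75, 9], [85, 11], [55, 6]], dtype=float)
    for bank, theta in zip("ABCD", dea_ccr_input(X, Y)):
        print(f"bank {bank}: efficiency {theta:.3f}")
```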

  2. Enhanced acarbose production by Streptomyces M37 using a two-stage fermentation strategy.

    Directory of Open Access Journals (Sweden)

    Fei Ren

    In this work, we investigated the effect of pH on Streptomyces M37 growth and its acarbose biosynthesis ability. We observed that low pH was beneficial for cell growth, whereas high pH favored acarbose synthesis. Moreover, addition of glucose and maltose to the fermentation medium after 72 h of cultivation promoted acarbose production. Based on these results, a two-stage fermentation strategy was developed to improve acarbose production. Accordingly, pH was kept at 7.0 during the first 72 h and switched to 8.0 after that. At the same time, glucose and maltose were fed to increase acarbose accumulation. With this strategy, we achieved an acarbose titer of 6210 mg/L, representing an 85.7% increase over traditional batch fermentation without pH control. Finally, we determined that the increased acarbose production was related to the high activity of glutamate dehydrogenase and glucose 6-phosphate dehydrogenase.

  3. Possible two-stage 87Sr evolution in the Stockdale Rhyolite

    Energy Technology Data Exchange (ETDEWEB)

    Compston, W.; McDougall, I. (Australian National Univ., Canberra. Research School of Earth Sciences); Wyborn, D. (Department of Minerals and Energy, Canberra (Australia). Bureau of Mineral Resources)

    1982-12-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage 87Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial 87Sr/86Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 +- 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 +- 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y.

  4. Possible two-stage 87Sr evolution in the Stockdale Rhyolite

    International Nuclear Information System (INIS)

    Compston, W.; McDougall, I.; Wyborn, D.

    1982-01-01

    The Rb-Sr total-rock data for the Stockdale Rhyolite, of significance for the Palaeozoic time scale, are more scattered about a single-stage isochron than expected from experimental error. Two-stage 87Sr evolution for several of the samples is explored to explain this, as an alternative to variation in the initial 87Sr/86Sr which is customarily used in single-stage dating models. The deletion of certain samples having very high Rb/Sr removes most of the excess scatter and leads to an estimate of 430 +- 7 m.y. for the age of extrusion. There is a younger alignment of Rb-Sr data within each sampling site at 412 +- 7 m.y. We suggest that the Stockdale Rhyolite is at least 430 m.y. old, that its original range in Rb/Sr was smaller than now observed, and that it experienced a net loss in Sr during later hydrothermal alteration at ca. 412 m.y. (orig.)

  5. New Grapheme Generation Rules for Two-Stage Modelbased Grapheme-to-Phoneme Conversion

    Directory of Open Access Journals (Sweden)

    Seng Kheang

    2015-01-01

    The precise conversion of arbitrary text into its corresponding phoneme sequence (grapheme-to-phoneme or G2P conversion) is implemented in speech synthesis and recognition, pronunciation learning software, spoken term detection and spoken document retrieval systems. Because the quality of this module plays an important role in the performance of such systems and many problems regarding G2P conversion have been reported, we propose a novel two-stage model-based approach, which is implemented using an existing weighted finite-state transducer-based G2P conversion framework, to improve the performance of the G2P conversion model. The first-stage model is built for automatic conversion of words to phonemes, while the second-stage model utilizes the input graphemes and output phonemes obtained from the first stage to determine the best final output phoneme sequence. Additionally, we designed new grapheme generation rules, which enable extra detail for the vowel and consonant graphemes appearing within a word. When compared with previous approaches, the evaluation results indicate that our approach using rules focusing on the vowel graphemes slightly improved the accuracy on the out-of-vocabulary dataset and consistently increased the accuracy on the in-vocabulary dataset.

  6. Two-Stage Residual Inclusion Estimation in Health Services Research and Health Economics.

    Science.gov (United States)

    Terza, Joseph V

    2018-06-01

    Empirical analyses in health services research and health economics often require implementation of nonlinear models whose regressors include one or more endogenous variables-regressors that are correlated with the unobserved random component of the model. In such cases, implementation of conventional regression methods that ignore endogeneity will likely produce results that are biased and not causally interpretable. Terza et al. (2008) discuss a relatively simple estimation method that avoids endogeneity bias and is applicable in a wide variety of nonlinear regression contexts. They call this method two-stage residual inclusion (2SRI). In the present paper, I offer a 2SRI how-to guide for practitioners and a step-by-step protocol that can be implemented with any of the popular statistical or econometric software packages. We introduce the protocol and its Stata implementation in the context of a real data example. Implementation of 2SRI for a very broad class of nonlinear models is then discussed. Additional examples are given. We analyze cigarette smoking as a determinant of infant birthweight using data from Mullahy (1997). It is hoped that the discussion will serve as a practical guide to implementation of the 2SRI protocol for applied researchers. © Health Research and Educational Trust.
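
    A minimal sketch of the 2SRI protocol on simulated data is shown below, using statsmodels: stage one regresses the endogenous regressor on an instrument and the exogenous control, and stage two adds the stage-one residual to the outcome model (here a Poisson regression as an example nonlinear model). The data-generating process and variable names are hypothetical, and the naive second-stage standard errors ignore first-stage estimation, so in practice they should be corrected, e.g. by bootstrapping.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Simulated data: x_endog is correlated with the unobserved u, z is a valid instrument.
z = rng.normal(size=n)                       # instrument
w = rng.normal(size=n)                       # exogenous control
u = rng.normal(size=n)                       # unobserved confounder
x_endog = 0.8 * z + 0.5 * w + 0.7 * u + rng.normal(size=n)
y = rng.poisson(np.exp(0.3 + 0.4 * x_endog + 0.2 * w - 0.7 * u))

# Stage 1: regress the endogenous variable on the instrument + exogenous control.
X1 = sm.add_constant(np.column_stack([z, w]))
stage1 = sm.OLS(x_endog, X1).fit()
resid = x_endog - stage1.fittedvalues

# Stage 2: include the first-stage residual as an extra regressor in the outcome model.
X2 = sm.add_constant(np.column_stack([x_endog, w, resid]))
stage2 = sm.GLM(y, X2, family=sm.families.Poisson()).fit()

# Naive model that ignores endogeneity, for comparison.
X_naive = sm.add_constant(np.column_stack([x_endog, w]))
naive = sm.GLM(y, X_naive, family=sm.families.Poisson()).fit()

print(f"true effect 0.40 | naive Poisson {naive.params[1]:.3f} | 2SRI {stage2.params[1]:.3f}")
```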

  7. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    Science.gov (United States)

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance than the OMP algorithm, especially in noisy environments. This study also designed and implemented the algorithm by using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.
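
    For context, the baseline OMP reconstruction that the proposed two-stage processor improves upon can be sketched in a few lines; the measurement matrix, the sparse scene and the fixed-sparsity stopping rule below are synthetic simplifications, not the paper's radar model.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit on the selected support, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_range_bins, n_meas, sparsity = 256, 64, 3        # synthetic radar scene
    A = rng.normal(size=(n_meas, n_range_bins)) / np.sqrt(n_meas)
    x_true = np.zeros(n_range_bins)
    x_true[rng.choice(n_range_bins, sparsity, replace=False)] = rng.normal(size=sparsity) + 2
    y = A @ x_true + 0.01 * rng.normal(size=n_meas)
    x_hat = omp(A, y, sparsity)
    print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
    print("true support     :", np.flatnonzero(x_true))
```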

  8. Two-stage categorization in brand extension evaluation: electrophysiological time course evidence.

    Directory of Open Access Journals (Sweden)

    Qingguo Ma

    A brand name can be considered a mental category. Similarity-based categorization theory has been used to explain how consumers judge a new product as a member of a known brand, a process called brand extension evaluation. This study was an event-related potential study conducted in two experiments. The study found a two-stage categorization process reflected by the P2 and N400 components in brand extension evaluation. In experiment 1, a prime-probe paradigm was presented in a pair consisting of a brand name and a product name in three conditions, i.e., in-category extension, similar-category extension, and out-of-category extension. Although the task was unrelated to brand extension evaluation, P2 distinguished out-of-category extensions from similar-category and in-category ones, and N400 distinguished similar-category extensions from in-category ones. In experiment 2, a prime-probe paradigm with a related task was used, in which product names included subcategory and major-category product names. The N400 elicited by subcategory products was more significantly negative than that elicited by major-category products, with no salient difference in P2. We speculated that P2 could reflect the early low-level and similarity-based processing in the first stage, whereas N400 could reflect the late analytic and category-based processing in the second stage.

  9. A CURRENT MIRROR BASED TWO STAGE CMOS CASCODE OP-AMP FOR HIGH FREQUENCY APPLICATION

    Directory of Open Access Journals (Sweden)

    RAMKRISHNA KUNDU

    2017-03-01

    This paper presents a low-power, high-slew-rate, high-gain, ultra-wide-band two-stage CMOS cascode operational amplifier for radio frequency applications. A current-mirror-based cascoding technique and a pole-zero cancellation technique are used to improve the gain and to enhance the unity-gain bandwidth, respectively, which is the novelty of the circuit. In the cascode arrangement, a common-source transistor drives a common-gate transistor. The cascoding is used to enhance the output resistance and hence improve the overall gain of the operational amplifier with less complexity and less power dissipation. To bias the common-gate transistor, a current mirror is used in this paper. The proposed circuit is designed and simulated using Cadence analog and digital system design tools in 45-nanometer CMOS technology. The simulated results of the circuit show a DC gain of 63.62 dB, a unity-gain bandwidth of 2.70 GHz, a slew rate of 1816 V/µs, a phase margin of 59.53º, a power supply of 1.4 V (rail-to-rail ±700 mV), and a power consumption of 0.71 mW. This circuit specification meets the requirements of radio frequency applications.

  10. A New Two-Stage Approach to Short Term Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Dragan Tasić

    2013-04-01

    In the deregulated energy market, the accuracy of load forecasting has a significant effect on the planning and operational decision making of utility companies. Electric load is a random non-stationary process influenced by a number of factors, which makes it difficult to model. To achieve better forecasting accuracy, a wide variety of models have been proposed. These models are based on different mathematical methods and offer different features. This paper presents a new two-stage approach for short-term electrical load forecasting based on least-squares support vector machines. With the aim of improving forecasting accuracy, one more feature was added to the model feature set: the next-day average load demand. As this feature is unknown one day ahead, in the first stage the next-day average load demand is forecast and then used in the second-stage model for next-day hourly load forecasting. The effectiveness of the presented model is shown on real data from the ISO New England electricity market. The obtained results confirm the validity and advantage of the proposed approach.
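
    The two-stage idea can be sketched on synthetic data as follows, using scikit-learn's SVR as a stand-in for least-squares support vector machines (which scikit-learn does not provide): stage one predicts the next-day average load, and stage two feeds that prediction in as an extra feature of the hourly model. The features, data and hyperparameters are illustrative assumptions only.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_days, hours = 200, 24

# Synthetic hourly load: weekly cycle + daily shape + noise (purely illustrative).
day_idx = np.arange(n_days)
daily_avg = 100 + 10 * np.sin(2 * np.pi * day_idx / 7) + rng.normal(0, 2, n_days)
shape = 1 + 0.3 * np.sin(2 * np.pi * (np.arange(hours) - 6) / 24)
load = daily_avg[:, None] * shape[None, :] + rng.normal(0, 3, (n_days, hours))

# Stage 1: predict the next day's average load from the previous day's average and day of week.
X1 = np.column_stack([daily_avg[:-1], day_idx[1:] % 7])
y1 = load[1:].mean(axis=1)
split = n_days - 31                      # keep the last 30 target days for testing
m1 = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.5))
m1.fit(X1[:split], y1[:split])
avg_hat = m1.predict(X1)                 # avg_hat[d - 1] is the forecast average for day d

# Stage 2: hourly model uses hour, day of week, yesterday's same-hour load and the stage-1 output.
rows, targets = [], []
for d in range(1, n_days):
    for h in range(hours):
        rows.append([h, d % 7, load[d - 1, h], avg_hat[d - 1]])
        targets.append(load[d, h])
X2, y2 = np.array(rows), np.array(targets)
cut = split * hours
m2 = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.5))
m2.fit(X2[:cut], y2[:cut])
pred = m2.predict(X2[cut:])
mape = np.mean(np.abs(pred - y2[cut:]) / y2[cut:]) * 100
print(f"hourly test MAPE over the last {n_days - 1 - split} days: {mape:.2f}%")
```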

  11. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    In order to deliver the maximum available power to the load under conditions of varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been used widely in PV systems. Among all the MPPT schemes, the chaos method is one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which can make the search faster and more effective. In the proposed chaos search, an improved logistic mapping with better ergodicity is used as the first carrier process. After finding the current optimal solution with a certain guarantee, a power-function carrier is used as the secondary carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method can track the change quickly and accurately and also has better optimization results. The proposed method provides a new efficient way to track the maximum power point of a PV array.
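
    A minimal sketch of a two-stage chaos search on a toy P-V curve is given below: the first carrier uses the logistic map to explore the whole voltage range, and the second stage contracts the search around the best point found. The P-V curve and all constants are hypothetical, and the shrinking-window second carrier is a simplification of the paper's power-function carrier.

```python
import numpy as np

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy single-peak P-V curve of a PV array (illustrative only)."""
    i = i_sc * (1 - np.exp((v - v_oc) / 3.0))
    return np.clip(v * i, 0.0, None)

def two_stage_chaos_search(f, v_min, v_max, n1=60, n2=60):
    # Stage 1: logistic-map carrier explores the full range [v_min, v_max].
    z = 0.3141
    best_v, best_p = v_min, f(v_min)
    for _ in range(n1):
        z = 4.0 * z * (1.0 - z)                 # logistic map, fully chaotic at r = 4
        v = v_min + z * (v_max - v_min)
        p = f(v)
        if p > best_p:
            best_v, best_p = v, p
    # Stage 2: second carrier searches a shrinking window around the stage-1 optimum.
    radius = 0.1 * (v_max - v_min)
    for _ in range(n2):
        z = 4.0 * z * (1.0 - z)
        v = np.clip(best_v + (2.0 * z - 1.0) * radius, v_min, v_max)
        p = f(v)
        if p > best_p:
            best_v, best_p = v, p
        radius *= 0.95                          # contract the search space
    return best_v, best_p

if __name__ == "__main__":
    v_star, p_star = two_stage_chaos_search(pv_power, 0.0, 40.0)
    grid = np.linspace(0.0, 40.0, 4001)
    print(f"chaos search MPP: {p_star:.2f} W at {v_star:.2f} V "
          f"(grid-search reference {pv_power(grid).max():.2f} W)")
```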

  12. [Comparison research on two-stage sequencing batch MBR and one-stage MBR].

    Science.gov (United States)

    Yuan, Xin-Yan; Shen, Heng-Gen; Sun, Lei; Wang, Lin; Li, Shi-Feng

    2011-01-01

    To address problems in MBR operation, such as low nitrogen and phosphorus removal efficiency and severe membrane fouling, a comparative study of a two-stage sequencing batch MBR (TSBMBR) and a one-stage aerobic MBR was carried out in this paper. The results indicated that the TSBMBR has the advantages of an SBR in removing nitrogen and phosphorus, which can make up for the deficiency of the traditional one-stage aerobic MBR in nitrogen and phosphorus removal. During the steady operation period, the average effluent NH4+-N, TN and TP concentrations were 2.83, 12.20 and 0.42 mg/L, respectively, which meets the standard for domestic scenic environment reuse. From the membrane fouling control point of view, the TSBMBR had lower supernatant SMP, a lower specific trans-membrane flux reduction rate and lower membrane fouling resistance than the one-stage aerobic MBR. The sedimentation and gel layer resistances of the TSBMBR were only 6.5% and 33.12% of those of the one-stage aerobic MBR. Besides its high efficiency in removing nitrogen and phosphorus, the TSBMBR could effectively reduce sedimentation and gel layer fouling on the membrane surface. Compared with the one-stage MBR, the TSBMBR could operate with a higher trans-membrane flux, a lower membrane fouling rate and better pollutant removal effects.

  13. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.

    Science.gov (United States)

    Wang, Jiaxi; Gronalt, Manfred; Sun, Yan

    2017-01-01

    Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model, in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.

  14. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.

    Directory of Open Access Journals (Sweden)

    Jiaxi Wang

    Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in delivering passengers and goods. Having emerged in the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple units (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach focuses on determining an optimal size of shunting drivers. The second stage is formulated as a bi-objective optimization model, in which we comprehensively consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.

  15. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    Directory of Open Access Journals (Sweden)

    Kuei-Chi Tsao

    2018-04-01

    Complementary metal-oxide-semiconductor (CMOS) radar has recently gained much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, the complexity is still very high because the high resolution of human respiration leads to high-dimension signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance than the OMP algorithm, especially in noisy environments. This study also designed and implemented the algorithm by using a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.

  16. Comparison of Paired ROC Curves through a Two-Stage Test.

    Science.gov (United States)

    Yu, Wenbao; Park, Eunsik; Chang, Yuan-Chin Ivan

    2015-01-01

    The area under the receiver operating characteristic (ROC) curve (AUC) is a popularly used index when comparing two ROC curves. Statistical tests based on it for analyzing the difference have been well developed. However, this index is less informative when two ROC curves cross and have similar AUCs. In order to detect differences between ROC curves in such situations, a two-stage nonparametric test that uses a shifted area under the ROC curve (sAUC), along with AUCs, is proposed for paired designs. The new procedure is shown, numerically, to be effective in terms of power under a wide range of scenarios; additionally, it outperforms two conventional ROC-type tests, especially when two ROC curves cross each other and have similar AUCs. Larger sAUC implies larger partial AUC at the range of low false-positive rates in this case. Because high specificity is important in many classification tasks, such as medical diagnosis, this is an appealing characteristic. The test also implicitly analyzes the equality of two commonly used binormal ROC curves at every operating point. We also apply the proposed method to synthesized data and two real examples to illustrate its usefulness in practice.
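
    The sAUC statistic itself is not reproduced here; as a rough stand-in, the sketch below compares two paired markers on full AUC and on partial AUC restricted to low false-positive rates (scikit-learn's max_fpr option), using a paired permutation test that randomly swaps the two markers within subjects. The simulated markers are designed to have similar overall AUCs but different low-FPR behaviour.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 300
y = np.r_[np.ones(150), np.zeros(150)]

# Two correlated markers measured on the same subjects (paired design).  Marker 2 has a
# heavier-tailed positive class, so the two ROC curves differ mostly at low FPR.
latent = rng.normal(size=n)
s1 = latent + 1.4 * y + rng.normal(0.0, 1.0, n)
s2 = latent + 2.8 * y + rng.normal(0.0, np.where(y == 1, 2.5, 1.0), n)

def stat_pair(a, b):
    """Differences in full AUC and in McClish-standardized partial AUC over FPR <= 0.2."""
    return (roc_auc_score(y, a) - roc_auc_score(y, b),
            roc_auc_score(y, a, max_fpr=0.2) - roc_auc_score(y, b, max_fpr=0.2))

obs_auc, obs_pauc = stat_pair(s1, s2)
hits_auc = hits_pauc = 0
n_perm = 2000
for _ in range(n_perm):
    swap = rng.random(n) < 0.5                   # exchange the two markers within subjects
    a, b = np.where(swap, s2, s1), np.where(swap, s1, s2)
    d_auc, d_pauc = stat_pair(a, b)
    hits_auc += abs(d_auc) >= abs(obs_auc)
    hits_pauc += abs(d_pauc) >= abs(obs_pauc)

print(f"AUC difference  {obs_auc:+.3f}, permutation p = {(hits_auc + 1) / (n_perm + 1):.3f}")
print(f"pAUC difference {obs_pauc:+.3f}, permutation p = {(hits_pauc + 1) / (n_perm + 1):.3f}")
```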

  17. A high-power two stage traveling-wave tube amplifier

    International Nuclear Information System (INIS)

    Shiffler, D.; Nation, J.A.; Schachter, L.; Ivers, J.D.; Kerslick, G.S.

    1991-01-01

    Results are presented on the development of a two-stage high-efficiency, high-power 8.76-GHz traveling-wave tube amplifier. The work presented augments previously reported data on a single-stage amplifier and presents new data on the operational characteristics of two identical amplifiers operated in series and separated from each other by a sever. Peak powers of 410 MW have been obtained over the complete pulse duration of the device, with a conversion efficiency from the electron beam to microwave energy of 45%. In all operating conditions the severed amplifier showed a ''sideband''-like structure in the frequency spectrum of the microwave radiation. A similar structure was apparent at output powers in excess of 70 MW in the single-stage device. The frequencies of the ''sidebands'' are not symmetric with respect to the center frequency. The maximum, single-frequency, average output power was 210 MW, corresponding to an amplifier efficiency of 24%. Simulation data is also presented that indicates that the short amplifiers used in this work exhibit significant differences in behavior from conventional low-power amplifiers. These include finite-length effects on the gain characteristics, which may account for the observed narrow bandwidth of the amplifiers and for the appearance of the sidebands. It is also found that the bunching length for the beam may be a significant fraction of the total amplifier length.

  18. Non-ideal magnetohydrodynamic simulations of the two-stage fragmentation model for cluster formation

    International Nuclear Information System (INIS)

    Bailey, Nicole D.; Basu, Shantanu

    2014-01-01

    We model molecular cloud fragmentation with thin-disk, non-ideal magnetohydrodynamic simulations that include ambipolar diffusion and partial ionization that transitions from primarily ultraviolet-dominated to cosmic-ray-dominated regimes. These simulations are used to determine the conditions required for star clusters to form through a two-stage fragmentation scenario. Recent linear analyses have shown that the fragmentation length scales and timescales can undergo a dramatic drop across the column density boundary that separates the ultraviolet- and cosmic-ray-dominated ionization regimes. As found in earlier studies, the absence of an ionization drop and regular perturbations leads to a single-stage fragmentation on pc scales in transcritical clouds, so that the nonlinear evolution yields the same fragment sizes as predicted by linear theory. However, we find that a combination of initial transcritical mass-to-flux ratio, evolution through a column density regime in which the ionization drop takes place, and regular small perturbations to the mass-to-flux ratio is sufficient to cause a second stage of fragmentation during the nonlinear evolution. Cores of size ∼0.1 pc are formed within an initial fragment of ∼pc size. Regular perturbations to the mass-to-flux ratio also accelerate the onset of runaway collapse.

  19. Addition of seaweed and bentonite accelerates the two-stage composting of green waste.

    Science.gov (United States)

    Zhang, Lu; Sun, Xiangyang

    2017-11-01

    Green waste (GW) is an important recyclable resource, and composting is an effective technology for the recycling of organic solid waste, including GW. This study investigated the changes in physical and chemical characteristics during the two-stage composting of GW with or without addition of seaweed (SW, Ulva ohnoi) (at 0, 35, and 55%) and bentonite (BT) (at 0.0, 2.5%, and 4.5%). During the bio-oxidative phase, the combined addition of SW and BT improved the physicochemical conditions, increased the respiration rate and enzyme activities, and decreased ammonia and nitrous oxide emissions. The combination of SW and BT also enhanced the quality of the final compost in terms of water-holding capacity, porosity, particle-size distribution, water soluble organic carbon/organic nitrogen ratio, humification, nutrient content, and phytotoxicity. The best quality compost, which matured in only 21 days, was obtained with 35% SW and 4.5% BT. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Chromium (Ⅵ) removal from aqueous solutions through powdered activated carbon countercurrent two-stage adsorption.

    Science.gov (United States)

    Wang, Wenqiang

    2018-01-01

    To exploit the adsorption capacity of commercial powdered activated carbon (PAC) and to improve the efficiency of Cr(VI) removal from aqueous solutions, the adsorption of Cr(VI) by commercial PAC and the countercurrent two-stage adsorption (CTA) process was investigated. Different adsorption kinetics models and isotherms were compared, and the pseudo-second-order model and the Langmuir and Freundlich models fit the experimental data well. The Cr(VI) removal efficiency was >80% and was improved by 37% through the CTA process compared with the conventional single-stage adsorption process when the initial Cr(VI) concentration was 50 mg/L with a PAC dose of 1.250 g/L and a pH of 3. A method for calculating the effluent Cr(VI) concentration and the PAC dose was developed for the CTA process, and the validity of the method was confirmed by a deviation of <5%. Copyright © 2017. Published by Elsevier Ltd.
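
    The mass balances behind such a calculation can be sketched as follows: with a Freundlich isotherm q = K_F·C^(1/n), fresh PAC contacts the partly treated solution in stage 2 and the loaded carbon is then reused against the raw feed in stage 1 (countercurrent), and the two stage balances are solved simultaneously with scipy. The isotherm constants and operating values are hypothetical, not the paper's fitted parameters.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical Freundlich constants and operating values -- illustrative only.
KF, N_INV = 8.0, 0.45        # q = KF * C**N_INV  (mg/g, with C in mg/L)
C0 = 50.0                    # raw Cr(VI) concentration, mg/L
DOSE = 1.25                  # PAC dose, g/L, reused countercurrently through both stages

def isotherm(c):
    return KF * np.maximum(c, 0.0) ** N_INV

def balances(vars_):
    c1, c2 = vars_
    q1, q2 = isotherm(c1), isotherm(c2)
    # Stage 1: liquid C0 -> c1 meets carbon already loaded to q2 in stage 2.
    eq1 = DOSE * (q1 - q2) - (C0 - c1)
    # Stage 2: liquid c1 -> c2 meets fresh carbon (initial loading zero).
    eq2 = DOSE * q2 - (c1 - c2)
    return [eq1, eq2]

if __name__ == "__main__":
    c1, c2 = fsolve(balances, x0=[C0 / 2, C0 / 10])
    removal = 100 * (1 - c2 / C0)
    print(f"stage-1 effluent {c1:.2f} mg/L, final effluent {c2:.2f} mg/L, "
          f"overall removal {removal:.1f}%")

    # Single-stage reference with the same total dose, for comparison.
    c_single = fsolve(lambda c: DOSE * isotherm(c[0]) - (C0 - c[0]), x0=[C0 / 2])[0]
    print(f"single-stage effluent {c_single:.2f} mg/L "
          f"({100 * (1 - c_single / C0):.1f}% removal)")
```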

  1. Design and Analysis of a Split Deswirl Vane in a Two-Stage Refrigeration Centrifugal Compressor

    Directory of Open Access Journals (Sweden)

    Jeng-Min Huang

    2014-09-01

    This study numerically investigated the influence of using the second row of a double-row deswirl vane as the inlet guide vane of the second stage on the performance of the first stage in a two-stage refrigeration centrifugal compressor. The working fluid was R134a, and the turbulence model was the Spalart-Allmaras model. The parameters discussed included the cutting position of the deswirl vane, the staggered angle of the two rows of vanes, and the rotation angle of the second row. The results showed that the performance at a staggered angle of 7.5° was better than that at 15° or 22.5°. When the staggered angle was 7.5°, the performance of cutting at 1/3 and 1/2 of the original deswirl vane length was slightly different from that of the original vane but obviously better than that of cutting at 2/3. When the staggered angle was 15°, the cutting position influenced the performance only slightly. At a low flow rate prone to surge, when the second row at a staggered angle of 7.5° and cut at half the vane length was rotated by 10°, the efficiency was reduced by only about 0.6%, and 10% of the swirl remained as the preswirl of the second stage, which is generally better than other designs.

  2. Optimization of a Two Stage Pulse Tube Refrigerator for the Integrated Current Lead System

    Science.gov (United States)

    Maekawa, R.; Matsubara, Y.; Okada, A.; Takami, S.; Konno, M.; Tomioka, A.; Imayoshi, T.; Hayashi, H.; Mito, T.

    2008-03-01

    The implementation of a conventional current lead with a pulse tube refrigerator has been validated as a working Integrated Current Lead (ICL) system for Superconducting Magnetic Energy Storage (SMES). The realization of the system is primarily accounted for by the flexibility of a pulse tube refrigerator, which does not possess any mechanical piston and/or displacer. In an ultimate version of the ICL system, a High Temperature Superconducting (HTS) lead links a superconducting coil with a conventional copper lead. To ensure the minimization of heat loads to the superconducting coil, the pulse tube refrigerator has been upgraded to have a second cooling stage. This arrangement reduces not only the heat loads to the superconducting coil but also the operating cost of a SMES system. A prototype two-stage pulse tube refrigerator, in a series-connected arrangement, was designed and fabricated to satisfy the requirements for the ICL system. The first-stage refrigerator operates in a four-valve mode, while the second stage utilizes a double-inlet configuration to ensure its confined geometry. The paper discusses the optimization of the second-stage cooling to validate the conceptual design.

  3. Two-stage case-control association study of dopamine-related genes and migraine

    Directory of Open Access Journals (Sweden)

    Pardo Julio

    2009-09-01

    Full Text Available Abstract Background: We previously reported risk haplotypes for two genes related to serotonin and dopamine metabolism: MAOA in migraine without aura and DDC in migraine with aura. Herein we investigate the contribution to migraine susceptibility of eight additional genes involved in dopamine neurotransmission. Methods: We performed a two-stage case-control association study of 50 tag single nucleotide polymorphisms (SNPs), selected according to genetic coverage parameters. The first analysis consisted of 263 patients and 274 controls, and the replication study comprised 259 cases and 287 controls. All cases were diagnosed according to ICHD-II criteria, were Spanish Caucasian, and were sex-matched with control subjects. Results: Single-marker analysis of the first population identified nominal associations of five genes with migraine. After applying a false discovery rate correction of 10%, the differences remained significant only for DRD2 (rs2283265) and TH (rs2070762). Multiple-marker analysis identified a five-marker T-C-G-C-G (rs12363125-rs2283265-rs2242592-rs1554929-rs2234689) risk haplotype in DRD2 and a two-marker A-C (rs6356-rs2070762) risk haplotype in TH that remained significant after correction by permutations. These results, however, were not replicated in the second independent cohort. Conclusion: The present study does not support the involvement of the DRD1, DRD2, DRD3, DRD5, DBH, COMT, SLC6A3 and TH genes in the genetic predisposition to migraine in the Spanish population.
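
    For readers unfamiliar with the 10% false discovery rate correction mentioned above, the following is a minimal sketch of the Benjamini-Hochberg step-up procedure applied to a set of single-marker p-values; the p-values are invented for illustration and are not from the study.

```python
"""Sketch of the Benjamini-Hochberg step-up procedure at a 10% false discovery
rate, as one might apply to single-marker association p-values. The p-values
below are made up for illustration; they are not from the study."""
import numpy as np

def bh_reject(pvals, q=0.10):
    """Return a boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    ranked = p[order]
    # largest k with p_(k) <= (k/m) * q; reject all hypotheses up to that rank
    below = ranked <= (np.arange(1, m + 1) / m) * q
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[: k + 1]] = True
    return reject

pvals = [0.0004, 0.0031, 0.012, 0.046, 0.21, 0.37, 0.58, 0.74]  # hypothetical markers
print(bh_reject(pvals, q=0.10))
```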

  4. The study on a gas-coupled two-stage stirling-type pulse tube cryocooler

    Science.gov (United States)

    Wu, X. L.; Chen, L. B.; Zhu, X. S.; Pan, C. Z.; Guo, J.; Wang, J. J.; Zhou, Y.

    2017-12-01

    A two-stage gas-coupled Stirling-type pulse tube cryocooler (SPTC) driven by a linear dual-opposed compressor has been designed, manufactured and tested. Both stages adopt a coaxial structure for compactness. The effect of a cold double-inlet at the second stage on the cooling performance was investigated. The test results show that the cold double-inlet helps to achieve a lower cooling temperature but is not conducive to a higher cooling capacity. At present, without the cold double-inlet, the second stage has achieved a no-load temperature of 11.28 K and a cooling capacity of 620 mW at 20 K with an input electric power of 450 W. With the cold double-inlet, the no-load temperature is lowered to 9.4 K, but the cooling capacity is reduced to 400 mW at 20 K. The structure of the developed cryocooler and the influences of charge pressure, operating frequency and hot-end temperature are also presented in this paper.

  5. SUCCESS FACTORS IN GROWING SMBs: A STUDY OF TWO INDUSTRIES AT TWO STAGES OF DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Tor Jarl Trondsen

    2002-01-01

    Full Text Available The study attempts to identify success factors for growing SMBs. An evolutionary phase approach has been used. The study also aims to find out whether there are common and differing denominators for newer and older firms that can affect their profitability. The study selects a sampling frame that isolates two groups of firms in two industries at two stages of development. A variety of organizational and structural data were collected and analyzed. Among the conclusions that may be drawn from the study are that it is not easy to find a common definition of success, that it is important to stratify SMBs when studying them, that an evolutionary stage approach helps to compare firms with roughly the same external and internal dynamics, and that each industry has its own set of success variables. The study identified three success variables for older firms that reflect contemporary strategic thinking: crafting a good strategy and changing it only incrementally, building core competencies and outsourcing the rest, and keeping up with innovation and honing competitive skills.

  6. Comparison of microalgae cultivation in photobioreactor, open raceway pond, and a two-stage hybrid system

    Directory of Open Access Journals (Sweden)

    Rakesh R Narala

    2016-08-01

    Full Text Available In the wake of intensive fossil fuel usage and CO2 accumulation in the environment, research is targeted toward sustainable alternative bioenergy that can meet the growing need for fuel while leaving a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved, and they are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive-pressure, air-lift-driven bioreactors with a separate, synchronized high-lipid induction phase in nutrient-deplete open raceway ponds. A comparison with either the bioreactor or the open raceway pond system alone suggests that this process potentially leads to significantly higher productivity of algal lipids. Nutrients are added only to the closed bioreactors, while the open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

  7. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    Science.gov (United States)

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary drawn from the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, the initial result is improved by calculating the reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to that in the first stage, but it highlights the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and a graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
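
    As a point of reference for the kernel used above, the following is a minimal sketch of a Gaussian Log-Euclidean kernel between two region covariance descriptors (symmetric positive-definite matrices). The dictionary construction, sparse coding and reconstruction-error saliency steps are omitted, and the feature dimension, sigma value and random covariances are illustrative assumptions.

```python
"""Minimal sketch of the Log-Euclidean kernel between two region covariance
descriptors (symmetric positive-definite matrices). Dictionary construction,
sparse coding and the reconstruction-error saliency itself are omitted."""
import numpy as np

def spd_logm(C):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    w, U = np.linalg.eigh(C)
    return (U * np.log(np.clip(w, 1e-12, None))) @ U.T

def log_euclidean_kernel(C1, C2, sigma=1.0):
    """Gaussian kernel on the Log-Euclidean (Frobenius) distance between SPD matrices."""
    d = np.linalg.norm(spd_logm(C1) - spd_logm(C2), ord="fro")
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)

def random_cov(dim=5, n=200):
    X = rng.normal(size=(n, dim))            # stand-in for per-pixel features of one superpixel
    return np.cov(X, rowvar=False) + 1e-6 * np.eye(dim)

C_region, C_background = random_cov(), random_cov()
print(f"k(region, background) = {log_euclidean_kernel(C_region, C_background, sigma=2.0):.4f}")
```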

  8. Dynamic two-stage mechanism of versatile DNA damage recognition by xeroderma pigmentosum group C protein

    Energy Technology Data Exchange (ETDEWEB)

    Clement, Flurina C.; Camenisch, Ulrike; Fei, Jia; Kaczmarek, Nina; Mathieu, Nadine [Institute of Pharmacology and Toxicology, University of Zuerich-Vetsuisse, Winterthurerstrasse 260, CH-8057 Zuerich (Switzerland); Naegeli, Hanspeter, E-mail: naegelih@vetpharm.uzh.ch [Institute of Pharmacology and Toxicology, University of Zuerich-Vetsuisse, Winterthurerstrasse 260, CH-8057 Zuerich (Switzerland)

    2010-03-01

    The recognition and subsequent repair of DNA damage are essential reactions for the maintenance of genome stability. A key general sensor of DNA lesions is the xeroderma pigmentosum group C (XPC) protein, which recognizes a wide variety of helix-distorting DNA adducts arising from ultraviolet (UV) radiation, genotoxic chemicals and reactive metabolic byproducts. By detecting damaged DNA sites, this unique molecular sensor initiates the global genome repair (GGR) pathway, which allows for the removal of all the aforementioned lesions by a limited repertoire of excision factors. Faulty GGR activity causes the accumulation of DNA adducts, leading to mutagenesis, carcinogenesis, neurological degeneration and other traits of premature aging. Recent findings indicate that XPC protein achieves its extraordinary substrate versatility by an entirely indirect readout strategy implemented in two clearly discernible stages. First, the XPC subunit uses a dynamic sensor interface to monitor the double helix for the presence of non-hydrogen-bonded bases. This initial screening generates a transient nucleoprotein intermediate that subsequently matures into the ultimate recognition complex by trapping undamaged nucleotides in the abnormally oscillating native strand, such that no direct contacts are made between XPC protein and the offending lesion itself. It remains to be elucidated how accessory factors like Rad23B, centrin-2 or the UV-damaged DNA-binding complex contribute to this dynamic two-stage quality control process.

  9. Comparison of Microalgae Cultivation in Photobioreactor, Open Raceway Pond, and a Two-Stage Hybrid System

    International Nuclear Information System (INIS)

    Narala, Rakesh R.; Garg, Sourabh; Sharma, Kalpesh K.; Thomas-Hall, Skye R.; Deme, Miklos; Li, Yan; Schenk, Peer M.

    2016-01-01

    In the wake of intensive fossil fuel usage and CO2 accumulation in the environment, research is targeted toward sustainable alternative bioenergy that can meet the growing need for fuel while leaving a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved, and they are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive-pressure, air-lift-driven bioreactors with a separate, synchronized high-lipid induction phase in nutrient-deplete open raceway ponds. A comparison with either the bioreactor or the open raceway pond system alone suggests that this process potentially leads to significantly higher productivity of algal lipids. Nutrients are added only to the closed bioreactors, while the open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

  10. Ethanol production from rape straw by a two-stage pretreatment under mild conditions.

    Science.gov (United States)

    Romero, Inmaculada; López-Linares, Juan C; Delgado, Yaimé; Cara, Cristóbal; Castro, Eulogio

    2015-08-01

    The growing interest in rape oil as a raw material for biodiesel production has resulted in an increasing availability of rape straw, an agricultural residue that is an attractive renewable source for the production of second-generation bioethanol. Pretreatment is one of the key steps in such a conversion process. In this work, a sequential two-stage pretreatment with dilute sulfuric acid (130 °C, 60 min, 2% w/v H2SO4) followed by H2O2 (1-5% w/v) in alkaline medium (NaOH) at low temperature (60 or 90 °C) and different pretreatment times (30-90 min) was investigated. The first (acid) stage solubilises the hemicellulose fraction into fermentable sugars. The second (alkaline peroxide) stage delignifies the solid material, while the cellulose remaining in the rape straw becomes highly digestible by cellulases. Simultaneous saccharification and fermentation at 15% (w/v) loading of the substrate delignified at 90 °C with 5% H2O2 for 60 min led to a maximum ethanol concentration of 53 g/L, corresponding to 85% of the theoretical yield.
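
    As a back-of-the-envelope check of how a percent-of-theoretical yield like the 85% quoted above is typically computed, the sketch below converts an assumed glucan fraction of the delignified solid into a theoretical ethanol maximum; the glucan fraction is an assumption chosen for illustration, not a composition reported in the record.

```python
"""Back-of-the-envelope check of an SSF ethanol yield expressed as a percent of
the theoretical maximum. The glucan (cellulose) fraction below is an assumed
placeholder, not a composition reported in the abstract."""
substrate_loading = 150.0   # g/L, i.e. 15% (w/v) delignified solids
glucan_fraction = 0.73      # assumed glucan content of the delignified solid
ethanol_measured = 53.0     # g/L, reported maximum ethanol titre

HYDRATION = 180.16 / 162.14   # glucan -> glucose mass gain on hydrolysis
STOICH = 0.511                # g ethanol per g glucose (2 EtOH + 2 CO2 per glucose)

theoretical_max = substrate_loading * glucan_fraction * HYDRATION * STOICH
percent_of_theoretical = 100.0 * ethanol_measured / theoretical_max
print(f"theoretical max: {theoretical_max:.1f} g/L, "
      f"measured 53 g/L = {percent_of_theoretical:.0f}% of theoretical")
```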

  11. A cooperation model based on CVaR measure for a two-stage supply chain

    Science.gov (United States)

    Xu, Xinsheng; Meng, Zhiqing; Shen, Rui

    2015-07-01

    In this paper, we introduce a cooperation model (CM) for a two-stage supply chain consisting of a manufacturer and a retailer. In this model, the manufacturer's objective is to maximise profit, while the retailer's objective is to minimise CVaR so as to control the risk originating from fluctuations in market demand. In practice, the manufacturer and the retailer each choose their own wholesale price and order quantity to optimise their own objectives, with the result that the manufacturer's preferred decision and the retailer's may conflict. To achieve cooperation, both the manufacturer and the retailer therefore need to make some concessions. The proposed model aims to coordinate the decisions of the manufacturer and the retailer and to balance the concessions each makes in their cooperation. We introduce an s*-optimal equilibrium solution for this model, which determines the minimum concession that the manufacturer and the retailer need to make for their cooperation, and prove that the s*-optimal equilibrium solution can be obtained by solving a goal programming problem. Further, the case of different concessions made by the manufacturer and the retailer is also discussed. Numerical results show that the CM is efficient in dealing with the cooperation between the manufacturer and the retailer.
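
    To make the retailer's risk measure above concrete, the following is a Monte Carlo sketch of CVaR for a newsvendor-style retailer facing random demand; the price, cost and demand parameters are invented, and the s*-optimal equilibrium and goal-programming machinery of the paper is not reproduced.

```python
"""Monte Carlo sketch of the retailer's CVaR objective in a two-stage
(manufacturer-retailer) chain. All parameters are hypothetical; this only
illustrates the CVaR measure itself, not the paper's equilibrium model."""
import numpy as np

rng = np.random.default_rng(42)

def retailer_cvar(order_qty, wholesale, retail, alpha=0.95, n=100_000):
    """CVaR_alpha of the retailer's loss (negative profit) under random demand."""
    demand = rng.normal(loc=100.0, scale=30.0, size=n).clip(min=0.0)
    profit = retail * np.minimum(demand, order_qty) - wholesale * order_qty
    loss = -profit
    var = np.quantile(loss, alpha)     # Value-at-Risk at level alpha
    return loss[loss >= var].mean()    # mean of the worst (1 - alpha) tail

for q in (80, 100, 120):
    print(f"Q = {q:3d}: CVaR_0.95 of loss = {retailer_cvar(q, wholesale=6.0, retail=10.0):8.2f}")
```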

  12. Two-Stage Multiobjective Optimization for Emergency Supplies Allocation Problem under Integrated Uncertainty

    Directory of Open Access Journals (Sweden)

    Xuejie Bai

    2016-01-01

    Full Text Available This paper proposes a new two-stage optimization method for the emergency supplies allocation problem with multiple suppliers, affected areas, relief supplies, and vehicles. The triplet of supply, demand, and path availability is unknown prior to the extraordinary event and is described by fuzzy random variables. Considering fairness, timeliness, and economic efficiency, a multiobjective expected value model is built for the facility location, vehicle routing, and supply allocation decisions. The objectives of the proposed model are to minimize the proportion of unsatisfied demand, the response time of emergency relief, and the total cost of the whole process. When the demand and the path availability are discrete, the expected values in the objective functions are converted into their equivalent forms. When the supply amount is continuous, the equilibrium chance in the constraint is transformed into its equivalent form. To overcome the computational difficulty caused by the multiple objectives, a goal programming model is formulated to obtain a compromise solution. Finally, an example is presented to illustrate the validity of the proposed model and the effectiveness of the solution method.
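
    To illustrate the goal-programming compromise step mentioned above, here is a toy weighted goal program over two invented objectives (cost and response time) for a small allocation decision; all coefficients, goals and weights are assumptions, and the fuzzy random supply/demand/path model of the paper is not modelled.

```python
"""Toy weighted goal-programming compromise for two conflicting objectives
(cost and response time) of a supplies-allocation decision. All coefficients,
goals and weights are invented for illustration."""
from scipy.optimize import linprog

w_cost, w_time = 1.0, 1.0          # weights on goal overshoots
# decision vector: [x1, x2, d_cost, d_time]  (x_i = units shipped from supplier i)
c = [0.0, 0.0, w_cost, w_time]     # minimise the weighted overshoot of the two goals
A_ub = [
    [3.0, 5.0, -1.0, 0.0],         # cost 3*x1 + 5*x2 may exceed its goal (320) only by d_cost
    [2.0, 1.0, 0.0, -1.0],         # time 2*x1 + 1*x2 may exceed its goal (160) only by d_time
    [-1.0, -1.0, 0.0, 0.0],        # total shipped must cover the demand of 100 units
]
b_ub = [320.0, 160.0, -100.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
x1, x2, d_cost, d_time = res.x
print(f"ship ({x1:.0f}, {x2:.0f}); cost overshoot {d_cost:.0f}, time overshoot {d_time:.0f}")
```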

  13. Opposed piston linear compressor driven two-stage Stirling Cryocooler for cooling of IR sensors in space application

    Science.gov (United States)

    Bhojwani, Virendra; Inamdar, Asif; Lele, Mandar; Tendolkar, Mandar; Atrey, Milind; Bapat, Shridhar; Narayankhedkar, Kisan

    2017-04-01

    A two-stage Stirling cryocooler has been developed and tested for cooling IR sensors in space applications. The concept uses an opposed-piston linear compressor to drive the two-stage Stirling expander. The configuration uses a moving-coil linear motor for the compressor as well as for the expander unit. An electrical phase difference of 80 degrees was maintained between the voltage waveforms supplied to the compressor motor and the expander motor. The piston and displacer surfaces were coated with Rulon, an anti-friction material, to ensure oil-less operation of the unit. The present article discusses analysis results, features of the cryocooler and experimental tests conducted on the developed unit. The two stages of the cryo-cylinder and the expander unit were manufactured from a single piece to ensure precise alignment between the stages. Flexure bearings were used to suspend the piston and displacer about their mean positions. The objective of the work was to develop a two-stage Stirling cryocooler with cooling capacities of 2 W at 120 K and 0.5 W at 60 K for the two stages and an input power of less than 120 W. The cryocooler achieved a minimum temperature of 40.7 K at stage 2.

  14. Two-stage revision of septic knee prosthesis with articulating knee spacers yields better infection eradication rate than one-stage or two-stage revision with static spacers.

    Science.gov (United States)

    Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L

    2012-12-01

    The best method for treating chronic periprosthetic knee infection remains controversial, and randomized comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision, and after static versus articulating spacers in two-stage procedures. We reviewed full-text papers, and those with an abstract in English, published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with the two different types of spacers. In all, 6 original articles reporting the results of one-stage knee exchange arthroplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate in the eradication of infection was 89.8% after a two-stage revision and 81.9% after a one-stage procedure, at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2% versus 87.0%). Notwithstanding the methodological limitations of this study and the heterogeneous material in the studies reviewed, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of infection eradication than one-stage revision for a septic knee prosthesis, and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. Level of evidence: IV.

  15. Gems of combinatorial optimization and graph algorithms

    CERN Document Server

    Skutella, Martin; Stiller, Sebastian; Wagner, Dorothea

    2015-01-01

    Are you looking for new lectures for your course on algorithms, combinatorial optimization, or algorithmic game theory?  Maybe you need a convenient source of relevant, current topics for a graduate student or advanced undergraduate student seminar?  Or perhaps you just want an enjoyable look at some beautiful mathematical and algorithmic results, ideas, proofs, concepts, and techniques in discrete mathematics and theoretical computer science?   Gems of Combinatorial Optimization and Graph Algorithms is a handpicked collection of up-to-date articles, carefully prepared by a select group of international experts, who have contributed some of their most mathematically or algorithmically elegant ideas.  Topics include longest tours and Steiner trees in geometric spaces, cartograms, resource buying games, congestion games, selfish routing, revenue equivalence and shortest paths, scheduling, linear structures in graphs, contraction hierarchies, budgeted matching problems, and motifs in networks.   This ...

  16. Three Syntactic Theories for Combinatory Graph Reduction

    DEFF Research Database (Denmark)

    Danvy, Olivier; Zerny, Ian

    2011-01-01

    We present a purely syntactic theory of graph reduction for the canonical combinators S, K, and I, where graph vertices are represented with evaluation contexts and let expressions. We express this syntactic theory as a reduction semantics, which we refocus into the first storeless abstract machine for combinatory graph reduction, which we refunctionalize into the first storeless natural semantics for combinatory graph reduction. We then factor out the introduction of let expressions to denote as many graph vertices as possible upfront instead of on demand, resulting in a second syntactic theory, this one ... in a third syntactic theory. The structure of the store-based abstract machine corresponding to this third syntactic theory coincides with that of Turner's original reduction machine. The three syntactic theories presented here therefore have the following ...

  17. Three Syntactic Theories for Combinatory Graph Reduction

    DEFF Research Database (Denmark)

    Danvy, Olivier; Zerny, Ian

    2013-01-01

    We present a purely syntactic theory of graph reduction for the canonical combinators S, K, and I, where graph vertices are represented with evaluation contexts and let expressions. We express this first syntactic theory as a storeless reduction semantics of combinatory terms. We then factor out the introduction of let expressions to denote as many graph vertices as possible upfront instead of on demand. The factored terms can be interpreted as term graphs in the sense of Barendregt et al. We express this second syntactic theory, which we prove equivalent to the first, as a storeless reduction semantics ..., as a store-based reduction semantics of combinatory term graphs. We then refocus this store-based reduction semantics into a store-based abstract machine. The architecture of this store-based abstract machine coincides with that of Turner's original reduction machine. The three syntactic theories presented ...
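
    To make the rewrite rules behind these two records concrete, here is a tiny normal-order reducer for the S, K, and I combinators. It works on plain terms rather than shared graphs, so the let-expression and store machinery that is the point of the papers is deliberately not modelled, and the term encoding is an assumption of this sketch.

```python
"""Tiny normal-order reducer for the S, K, I combinators. Terms are nested
tuples ('app', f, x) or combinator/variable names; sharing of graph vertices
is not modelled."""

def app(f, x):
    return ("app", f, x)

def rebuild(head, args):
    """Re-apply a head term to a list of argument terms, left to right."""
    for a in args:
        head = app(head, a)
    return head

def step(t):
    """One leftmost-outermost reduction step; returns (new_term, reduced?)."""
    if isinstance(t, tuple):
        spine, head = [], t
        while isinstance(head, tuple):      # unravel the application spine
            spine.append(head[2])
            head = head[1]
        spine.reverse()                     # arguments in application order
        if head == "I" and len(spine) >= 1:         # I x        -> x
            return rebuild(spine[0], spine[1:]), True
        if head == "K" and len(spine) >= 2:         # K x y      -> x
            return rebuild(spine[0], spine[2:]), True
        if head == "S" and len(spine) >= 3:         # S f g x    -> (f x) (g x)
            f, g, x = spine[0], spine[1], spine[2]
            return rebuild(app(app(f, x), app(g, x)), spine[3:]), True
        for i, arg in enumerate(spine):     # otherwise reduce inside an argument
            new, done = step(arg)
            if done:
                return rebuild(head, spine[:i] + [new] + spine[i + 1:]), True
    return t, False

def normalise(t, limit=100):
    for _ in range(limit):
        t, changed = step(t)
        if not changed:
            return t
    return t

# S K K x reduces to x (S K K behaves like the identity combinator)
term = app(app(app("S", "K"), "K"), "x")
print(normalise(term))   # -> 'x'
```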

  18. DNA-Encoded Dynamic Combinatorial Chemical Libraries.

    Science.gov (United States)

    Reddavide, Francesco V; Lin, Weilin; Lehnert, Sarah; Zhang, Yixin

    2015-06-26

    Dynamic combinatorial chemistry (DCC) explores the thermodynamic equilibrium of reversible reactions. Its application in the discovery of protein binders is largely limited by difficulties in the analysis of complex reaction mixtures. DNA-encoded chemical library (DECL) technology allows the selection of binders from a mixture of up to billions of different compounds; however, experimental results often show a low signal-to-noise ratio and poor correlation between enrichment factor and binding affinity. Herein we describe the design and application of DNA-encoded dynamic combinatorial chemical libraries (EDCCLs). Our experiments have shown that the EDCCL approach can be used not only to convert monovalent binders into high-affinity bivalent binders, but also to produce remarkably enhanced enrichment of potent bivalent binders by driving their in situ synthesis. We also demonstrate the application of EDCCLs in DNA-templated chemical reactions. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Exploiting Quantum Resonance to Solve Combinatorial Problems

    Science.gov (United States)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  20. Combinatorial Cis-regulation in Saccharomyces Species

    Directory of Open Access Journals (Sweden)

    Aaron T. Spivak

    2016-03-01

    Full Text Available Transcriptional control of gene expression requires interactions between the cis-regulatory elements (CREs) controlling gene promoters. We developed a sensitive computational method to identify CRE combinations with conserved spacing that does not require genome alignments. When applied to seven sensu stricto and sensu lato Saccharomyces species, 80% of the predicted interactions displayed some evidence of combinatorial transcriptional behavior in several existing datasets, including: (1) chromatin immunoprecipitation data for colocalization of transcription factors, (2) gene expression data for coexpression of predicted regulatory targets, and (3) gene ontology databases for common pathway membership of predicted regulatory targets. We tested several predicted CRE interactions with chromatin immunoprecipitation experiments in a wild-type strain and strains in which a predicted cofactor was deleted. Our experiments confirmed that transcription factor (TF) occupancy at the promoters of the CRE combination target genes depends on the predicted cofactor, while occupancy of other promoters is independent of the predicted cofactor. Our method has the additional advantage of identifying regulatory differences between species. By analyzing the S. cerevisiae and S. bayanus genomes, we identified differences in combinatorial cis-regulation between the species and showed that the predicted changes in gene regulation explain several of the species-specific differences seen in gene expression datasets. In some instances, the same CRE combinations appear to regulate genes involved in distinct biological processes in the two different species. The results of this research demonstrate that (1) combinatorial cis-regulation can be inferred by multi-genome analysis and (2) combinatorial cis-regulation can explain differences in gene expression between species.
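
    As a rough illustration of the alignment-free idea described above (motif pairs with conserved spacing), the sketch below tallies spacings between two hypothetical CRE consensus strings across toy promoter sequences; the motifs, the sequences, and the absence of any statistical scoring are simplifications of this sketch, not features of the paper's method.

```python
"""Illustrative sketch of spacing-constrained CRE pair detection: find
co-occurrences of two motifs in a set of promoter sequences and tabulate the
spacings between them. Motifs and sequences are toy examples."""
import re
from collections import Counter

MOTIF_A = "TGACTC"     # hypothetical CRE consensus (Gcn4-like site)
MOTIF_B = "CACGTG"     # hypothetical partner CRE (E-box-like site)

promoters = {
    "gene1": "AAATGACTCTTTTTCACGTGAAA",
    "gene2": "CCTGACTCGGGGGCACGTGTTT",
    "gene3": "TGACTCAAAAACACGTG",
}

def pair_spacings(seq, a=MOTIF_A, b=MOTIF_B):
    """Distances from the end of each A occurrence to the start of each downstream B."""
    a_ends = [m.end() for m in re.finditer(a, seq)]
    b_starts = [m.start() for m in re.finditer(b, seq)]
    return [s - e for e in a_ends for s in b_starts if s >= e]

spacing_counts = Counter(d for seq in promoters.values() for d in pair_spacings(seq))
print(spacing_counts)   # a sharp peak across many promoters suggests conserved spacing
```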