WorldWideScience

Sample records for two-stage stochastic combinatorial

  1. Stochastic integrals: a combinatorial approach

    OpenAIRE

    Rota, Gian-Carlo; Wallstrom, Timothy C.

    1997-01-01

    A combinatorial definition of multiple stochastic integrals is given in the setting of random measures. It is shown that some properties of such stochastic integrals, formerly known to hold in special cases, are instances of combinatorial identities on the lattice of partitions of a set. The notion of stochastic sequences of binomial type is introduced as a generalization of special polynomial sequences occurring in stochastic integration, such as Hermite, Poisson–Charlier an...

  2. Stochastic Combinatorial Optimization under Probabilistic Constraints

    CERN Document Server

    Agrawal, Shipra; Ye, Yinyu

    2008-01-01

    In this paper, we present approximation algorithms for combinatorial optimization problems under probabilistic constraints. Specifically, we focus on stochastic variants of two important combinatorial optimization problems: the k-center problem and the set cover problem, with uncertainty characterized by a probability distribution over the set of points or elements to be covered. We consider these problems under adaptive and non-adaptive settings, and present efficient approximation algorithms for the case when the underlying distribution is a product distribution. In contrast to the expected cost model prevalent in the stochastic optimization literature, our problem definitions support restrictions on the probability distributions of the total costs, by incorporating constraints that bound the probability with which the incurred costs may exceed a given threshold.
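
    For orientation, the probabilistic (chance) constraint referred to above can be written generically as follows; this is a sketch with placeholder symbols, not the authors' formulation:

      \[
        \min_{x \in \mathcal{F}} \; c^\top x
        \quad \text{subject to} \quad
        \Pr\big[\, \widetilde{C}(x) > B \,\big] \;\le\; \rho ,
      \]

    where x is the combinatorial decision (e.g., a choice of centers or a cover), \widetilde{C}(x) is its random total cost, B is the cost threshold, and \rho bounds the probability of exceeding it.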

  3. STOCHASTIC DISCRETE MODEL OF TWO-STAGE ISOLATION SYSTEM WITH RIGID LIMITERS

    Institute of Scientific and Technical Information of China (English)

    HE Hua; FENG Qi; SHEN Rong-ying; WANG Yu

    2006-01-01

    The possible intermittent impacts of a two-stage isolation system with rigid limiters have been investigated. The isolation system is under periodic external excitation disturbed by small stationary Gaussian white noise after shock. For the period after shock, the zero-order approximate stochastic discrete model and the first-order approximate stochastic model are developed. The real isolation system of an MTU diesel engine is used to evaluate the established models. After calculation of the numerical example, the effects of noise excitation on the isolation system are discussed. The results show that the behaviour of the system is complicated due to intermittent impact, that the difference between the zero-order and first-order models may be great, and that the effect of even small noise is obvious. The results are expected to be useful to naval designers.

  4. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem considering some parameters of the linear constraints as interval-type discrete random variables with known probability distributions. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best optimum and worst optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, first we remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. Then the deterministic multiobjective model is solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively. This highlights the possible risk involved in the decision-making tool. A numerical example is presented to demonstrate the proposed solution procedure.
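
    For readers new to the formulation, a generic two-stage stochastic linear program with recourse, of the kind extended here to interval-valued discrete random parameters, can be sketched as follows (generic notation, not the paper's):

      \[
        \min_{x \ge 0} \; c^\top x + \mathbb{E}_{\xi}\big[ Q(x,\xi) \big],
        \qquad
        Q(x,\xi) \;=\; \min_{y \ge 0} \big\{\, q(\xi)^\top y \;:\; W y = h(\xi) - T(\xi)\, x \,\big\},
      \]

    where the first-stage decision x is fixed before the random data \xi are observed and the second-stage (recourse) decision y compensates once they are.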

  5. Combinatorial Model Involving Stochastic Choices of Destination, Mode and Route

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Traffic assignment models are one of the basic tools for the analysis and design of transportation systems. However, the existing models have some defects. Considering the characteristics of Chinese urban mixed traffic and the randomness of transportation information, the author develops a combinatorial model involving stochastic choices of destination, mode and route. Its uniqueness and equivalence are also proved using optimization theory.

  6. Planning an Agricultural Water Resources Management System: A Two-Stage Stochastic Fractional Programming Model

    Directory of Open Access Journals (Sweden)

    Liang Cui

    2015-07-01

    Full Text Available Irrigation water management is crucial for agricultural production and livelihood security in many regions and countries throughout the world. In this study, a two-stage stochastic fractional programming (TSFP) method is developed for planning an agricultural water resources management system under uncertainty. TSFP can provide an effective linkage between conflicting economic benefits and the associated penalties; it can also balance conflicting objectives and maximize the system marginal benefit per unit of input under uncertainty. The developed TSFP method is applied to a real case of agricultural water resources management of the Zhangweinan River Basin, China, which is one of the main food and cotton producing regions in north China and faces serious water shortage. The results demonstrate that the TSFP model is advantageous in balancing conflicting objectives and reflecting complicated relationships among multiple system factors. Results also indicate that, under the optimized irrigation target, the optimized water allocation rates of the Minyou Channel and the Zhangnan Channel are 57.3% and 42.7%, respectively, which adapts to the changes in the actual agricultural water resources management problem. Compared with the inexact two-stage water management (ITSP) method, TSFP could more effectively address the sustainable water management problem, provide more information regarding tradeoffs between multiple input factors and system benefits, and help the water managers maintain sustainable water resources development of the Zhangweinan River Basin.

  7. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and community concerns, such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and

  8. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in the water resources system, while traditional two-stage stochastic programming is risk-neutral and compares the random variables (e.g., total benefit) to identify the best decisions. To deal with the risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid methodology of interval-parameter programming, the conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to the decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
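
    The conditional value-at-risk term referred to above is commonly handled through the Rockafellar–Uryasev representation; a generic mean–CVaR two-stage objective of this type reads (a sketch in generic notation, not the exact model of the paper):

      \[
        \mathrm{CVaR}_{\alpha}(Z) \;=\; \min_{\eta \in \mathbb{R}} \Big\{ \eta + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(Z-\eta)^{+}\big] \Big\},
        \qquad
        \min_{x} \; c^\top x + (1-\lambda)\,\mathbb{E}\big[Q(x,\xi)\big] + \lambda\,\mathrm{CVaR}_{\alpha}\big(Q(x,\xi)\big),
      \]

    where Q(x,\xi) is the second-stage penalty cost, \alpha is the confidence level, and \lambda weights risk aversion against expected cost.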

  9. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    Science.gov (United States)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  10. Effects of Risk Aversion on Market Outcomes: A Stochastic Two-Stage Equilibrium Model

    DEFF Research Database (Denmark)

    Kazempour, Jalal; Pinson, Pierre

    2016-01-01

    This paper evaluates how different risk preferences of electricity producers alter the market-clearing outcomes. Toward this goal, we propose a stochastic equilibrium model for electricity markets with two settlements, i.e., day-ahead and balancing, in which a number of conventional and stochastic...... by its optimality conditions, resulting in a mixed complementarity problem. Numerical results from a case study based on the IEEE one-area reliability test system are derived and discussed....

  11. A primal-dual decomposition based interior point approach to two-stage stochastic linear programming

    NARCIS (Netherlands)

    A.B. Berkelaar (Arjan); C.L. Dert (Cees); K.P.B. Oldenkamp; S. Zhang (Shuzhong)

    1999-01-01

    textabstractDecision making under uncertainty is a challenge faced by many decision makers. Stochastic programming is a major tool developed to deal with optimization with uncertainties that has found applications in, e.g. finance, such as asset-liability and bond-portfolio management. Computationa

  12. Stochastic Real-World Drive Cycle Generation Based on a Two Stage Markov Chain Approach

    NARCIS (Netherlands)

    Balau, A.E.; Kooijman, D.; Vazquez Rodarte, I.; Ligterink, N.

    2015-01-01

    This paper presents a methodology and tool that stochastically generates drive cycles based on measured data, with the purpose of testing and benchmarking light duty vehicles in a simulation environment or on a test-bench. The WLTP database, containing real world driving measurements, was used as in

  13. Capacity expansion of stochastic power generation under two-stage electricity markets

    DEFF Research Database (Denmark)

    Pineda, Salvador; Morales González, Juan Miguel

    2016-01-01

    of stochastic power generating units. This framework includes the explicit representation of a day-ahead and a balancing market-clearing mechanisms to properly capture the impact of forecast errors of power production on the short-term operation of a power system. The proposed generation expansion problems...... are first formulated from the standpoint of a social planner to characterize a perfectly competitive market. We investigate the effect of two paradigmatic market designs on generation expansion planning: a day-ahead market that is cleared following a conventional cost merit-order principle, and an ideal...... market-clearing procedure that determines day-ahead dispatch decisions accounting for their impact on balancing operation costs. Furthermore, we reformulate the proposed models to determine the optimal expansion decisions that maximize the profit of a collusion of stochastic power producers in order...

  14. TSCC: Two-Stage Combinatorial Clustering for virtual screening using protein-ligand interactions and physicochemical features

    Science.gov (United States)

    2010-01-01

    Background The increasing numbers of 3D compounds and protein complexes stored in databases contribute greatly to current advances in biotechnology, being employed in several pharmaceutical and industrial applications. However, screening and retrieving appropriate candidates as well as handling false positives presents a challenge for all post-screening analysis methods employed in retrieving therapeutic and industrial targets. Results Using the TSCC method, virtually screened compounds were clustered based on their protein-ligand interactions, followed by structure clustering employing physicochemical features, to retrieve the final compounds. Based on the protein-ligand interaction profile (first stage), docked compounds can be clustered into groups with distinct binding interactions. Structure clustering (second stage) grouped similar compounds obtained from the first stage into clusters of similar structures; the lowest energy compound from each cluster being selected as a final candidate. Conclusion By representing interactions at the atomic-level and including measures of interaction strength, better descriptions of protein-ligand interactions and a more specific analysis of virtual screening was achieved. The two-stage clustering approach enhanced our post-screening analysis resulting in accurate performances in clustering, mining and visualizing compound candidates, thus, improving virtual screening enrichment. PMID:21143810

  15. Combined Two-Stage Stochastic Programming and Receding Horizon Control Strategy for Microgrid Energy Management Considering Uncertainty

    Directory of Open Access Journals (Sweden)

    Zhongwen Li

    2016-06-01

    Full Text Available Microgrids (MGs) are presented as a cornerstone of smart grids. With the potential to integrate intermittent renewable energy sources (RES) in a flexible and environmentally friendly way, the MG concept has gained even more attention. Due to the randomness of RES, load, and electricity price in an MG, the forecast errors of MGs will affect the performance of the power scheduling and the operating cost of an MG. In this paper, a combined stochastic programming and receding horizon control (SPRHC) strategy is proposed for microgrid energy management under uncertainty, which combines the advantages of two-stage stochastic programming (SP) and the receding horizon control (RHC) strategy. With an SP strategy, a scheduling plan can be derived that minimizes the risk of uncertainty by involving the uncertainty of the MG in the optimization model. With an RHC strategy, the uncertainty within the MG can be further compensated through a feedback mechanism with the latest updated forecast information. In our approach, a proper strategy is also proposed to maintain the SP model as a mixed integer linear constrained quadratic programming (MILCQP) problem, which is solvable without resorting to any heuristic algorithms. The results of numerical experiments explicitly demonstrate the superiority of the proposed strategy for both islanded and grid-connected operating modes of an MG.
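
    The combined SP and RHC idea can be pictured with a toy example: at every step a scenario-based look-ahead decision is computed, only its first step is applied, and the plan is recomputed with fresh information. The sketch below does this for a single battery trading against an uncertain price, using brute-force enumeration as a stand-in for the MILCQP of the paper; all data, the scenario generator, and the salvage value are invented for illustration only.

      import itertools
      import numpy as np

      rng = np.random.default_rng(0)
      H, T = 4, 8                                 # look-ahead horizon and number of control steps
      cap, soc = 10.0, 5.0                        # battery capacity and initial state of charge (kWh)
      actions = (-2.0, -1.0, 0.0, 1.0, 2.0)       # energy bought (+) or sold (-) per step (kWh)
      SALVAGE = 30.0                              # assumed value of energy left in storage

      def price_scenarios(t, n=40):
          # Crude scenario generator (assumption): sinusoidal daily price plus Gaussian noise.
          base = 30.0 + 10.0 * np.sin(2.0 * np.pi * (t + np.arange(H)) / 24.0)
          return base + rng.normal(0.0, 5.0, size=(n, H))

      def plan_cost(plan, prices, soc0):
          # Purchase cost of one candidate plan under one price scenario, respecting storage limits.
          soc_t, cost = soc0, 0.0
          for a, p in zip(plan, prices):
              a = float(np.clip(a, -soc_t, cap - soc_t))
              soc_t += a
              cost += p * a
          return cost - SALVAGE * soc_t           # credit leftover energy at the salvage value

      for t in range(T):
          scen = price_scenarios(t)
          # Scenario-based look-ahead: pick the plan with the lowest expected cost ...
          plan = min(itertools.product(actions, repeat=H),
                     key=lambda pl: np.mean([plan_cost(pl, s, soc) for s in scen]))
          # ... but commit only its first step (the "here-and-now" decision), then re-plan.
          a0 = float(np.clip(plan[0], -soc, cap - soc))
          soc += a0
          print(f"t={t}: buy/sell {a0:+.1f} kWh, state of charge {soc:.1f} kWh")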

  16. River water quality management considering agricultural return flows: application of a nonlinear two-stage stochastic fuzzy programming.

    Science.gov (United States)

    Tavakoli, Ali; Nikoo, Mohammad Reza; Kerachian, Reza; Soltani, Maryam

    2015-04-01

    In this paper, a new fuzzy methodology is developed to optimize water and waste load allocation (WWLA) in rivers under uncertainty. An interactive two-stage stochastic fuzzy programming (ITSFP) method is utilized to handle parameter uncertainties, which are expressed as fuzzy boundary intervals. An iterative linear programming (ILP) is also used for solving the nonlinear optimization model. To accurately consider the impacts of the water and waste load allocation strategies on the river water quality, a calibrated QUAL2Kw model is linked with the WWLA optimization model. The soil, water, atmosphere, and plant (SWAP) simulation model is utilized to determine the quantity and quality of each agricultural return flow. To control pollution loads of agricultural networks, it is assumed that a part of each agricultural return flow can be diverted to an evaporation pond and also another part of it can be stored in a detention pond. In detention ponds, contaminated water is exposed to solar radiation for disinfecting pathogens. Results of applying the proposed methodology to the Dez River system in the southwestern region of Iran illustrate its effectiveness and applicability for water and waste load allocation in rivers. In the planning phase, this methodology can be used for estimating the capacities of return flow diversion system and evaporation and detention ponds.

  17. Implementation of equity in resource allocation for regional earthquake risk mitigation using two-stage stochastic programming.

    Science.gov (United States)

    Zolfaghari, Mohammad R; Peyghaleh, Elnaz

    2015-03-01

    This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decisionmakers (government and authorities) to structure budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two-stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in model formulation. These constraints limit inequity to the user-defined level to achieve the equity-efficiency tradeoff in the decision-making process. To present practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. The risk-return tradeoff, equity-reconstruction expenditures tradeoff, and variation of per-capita expected earthquake loss in different income classes are also presented.

  18. Scheduling Internal Audit Activities: A Stochastic Combinatorial Optimization Problem

    NARCIS (Netherlands)

    Rossi, R.; Tarim, S.A.; Hnich, B.; Prestwich, S.; Karacaer, S.

    2010-01-01

    The problem of finding the optimal timing of audit activities within an organisation has been addressed by many researchers. We propose a stochastic programming formulation with Mixed Integer Linear Programming (MILP) and Constraint Programming (CP) certainty-equivalent models. In experiments neithe

  20. The Semimartingale Approach to Almost Sure Stability Analysis of a Two-Stage Numerical Method for Stochastic Delay Differential Equation

    Directory of Open Access Journals (Sweden)

    Qian Guo

    2014-01-01

    Full Text Available Almost sure exponential stability of the split-step backward Euler (SSBE) method applied to an Itô-type stochastic differential equation with time-varying delay is discussed by techniques based on the Doob-Meyer decomposition and the semimartingale convergence theorem. Numerical experiments confirm the theoretical analysis.
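
    For reference, for an Itô stochastic delay differential equation dX(t) = f(X(t), X(t-\tau(t)))\,dt + g(X(t), X(t-\tau(t)))\,dW(t), the split-step backward Euler step with step size h can be sketched as (X_{n-m_n} denotes the stored delayed value; this is the generic form of the scheme, not necessarily the exact variant analysed in the paper):

      \[
        X_n^{\ast} = X_n + h\, f\big(X_n^{\ast},\, X_{n-m_n}\big),
        \qquad
        X_{n+1} = X_n^{\ast} + g\big(X_n^{\ast},\, X_{n-m_n}\big)\, \Delta W_n ,
      \]

    where \Delta W_n are independent N(0, h) Brownian increments and the first (implicit) equation is solved for X_n^{\ast} at every step.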

  1. Stability and multiattractor dynamics of a toggle switch based on a two-stage model of stochastic gene expression.

    Science.gov (United States)

    Strasser, Michael; Theis, Fabian J; Marr, Carsten

    2012-01-04

    A toggle switch consists of two genes that mutually repress each other. This regulatory motif is active during cell differentiation and is thought to act as a memory device, being able to choose and maintain cell fate decisions. Commonly, this switch has been modeled in a deterministic framework where transcription and translation are lumped together. In this description, bistability occurs for transcription factor cooperativity, whereas autoactivation leads to a tristable system with an additional undecided state. In this contribution, we study the stability and dynamics of a two-stage gene expression switch within a probabilistic framework inspired by the properties of the Pu/Gata toggle switch in myeloid progenitor cells. We focus on low mRNA numbers, high protein abundance, and monomeric transcription-factor binding. Contrary to the expectation from a deterministic description, this switch shows complex multiattractor dynamics without autoactivation and cooperativity. Most importantly, the four attractors of the system, which only emerge in a probabilistic two-stage description, can be identified with committed and primed states in cell differentiation. To begin, we study the dynamics of the system and infer the mechanisms that move the system between attractors using both the quasipotential and the probability flux of the system. Next, we show that the residence times of the system in one of the committed attractors are geometrically distributed. We derive an analytical expression for the parameter of the geometric distribution, therefore completely describing the statistics of the switching process and elucidate the influence of the system parameters on the residence time. Moreover, we find that the mean residence time increases linearly with the mean protein level. This scaling also holds for a one-stage scenario and for autoactivation. Finally, we study the implications of this distribution for the stability of a switch and discuss the influence of the
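
    A two-stage (mRNA and protein) toggle switch of the kind discussed above can be simulated directly with Gillespie's algorithm; the sketch below uses monomeric repression and purely illustrative rate constants, not the parameters of the Pu/Gata system.

      import numpy as np

      rng = np.random.default_rng(6)

      # State vector: [mA, mB, pA, pB] (mRNA and protein counts for genes A and B).
      state = np.array([0, 0, 200, 0])
      k_m, k_p, d_m, d_p, K = 2.0, 10.0, 1.0, 0.05, 100.0   # assumed rate constants

      def propensities(s):
          mA, mB, pA, pB = s
          return np.array([
              k_m / (1.0 + pB / K),   # transcription of A, repressed monomerically by protein B
              k_m / (1.0 + pA / K),   # transcription of B, repressed monomerically by protein A
              d_m * mA, d_m * mB,     # mRNA degradation
              k_p * mA, k_p * mB,     # translation
              d_p * pA, d_p * pB,     # protein degradation
          ])

      # Stoichiometric changes corresponding to the eight reactions above.
      changes = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [-1, 0, 0, 0], [0, -1, 0, 0],
                          [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, -1, 0], [0, 0, 0, -1]])

      t, t_end = 0.0, 500.0
      while t < t_end:
          a = propensities(state)
          a_total = a.sum()                     # transcription never stops, so a_total > 0
          t += rng.exponential(1.0 / a_total)   # waiting time to the next reaction
          state = state + changes[rng.choice(len(a), p=a / a_total)]

      print("final state [mA, mB, pA, pB]:", state)

    Tracking the protein counts over many such runs is one simple way to see the attractors and the residence times that the abstract describes.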

  2. Ising computation based combinatorial optimization using spin-Hall effect (SHE) induced stochastic magnetization reversal

    Science.gov (United States)

    Shim, Yong; Jaiswal, Akhilesh; Roy, Kaushik

    2017-05-01

    The Ising spin model is considered an efficient computing method for solving combinatorial optimization problems, based on its natural tendency to converge towards low-energy states. The basic functions underlying the Ising model can be categorized into two parts, "annealing" and "majority vote." In this paper, we propose an Ising cell based on Spin Hall Effect (SHE) induced magnetization switching in a Magnetic Tunnel Junction (MTJ). The stochasticity of our proposed Ising cell based on SHE induced MTJ switching can implement the natural annealing process by preventing the system from being stuck in solutions with local minima. Further, by controlling the current through the Heavy-Metal (HM) underlying the MTJ, we can mimic the majority vote function which determines the next state of the individual spins. By solving coupled Landau-Lifshitz-Gilbert equations, we demonstrate that our Ising cell can be replicated to map certain combinatorial problems. We present results for two representative problems, Maximum-cut and Graph coloring, to illustrate the feasibility of the proposed device-circuit configuration in solving combinatorial problems. Our proposed solution using an HM-based MTJ device can be exploited to implement a compact, fast, and energy-efficient Ising spin model.
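
    The annealing-plus-stochastic-update scheme that such hardware emulates can be illustrated in software; below is a minimal simulated-annealing sketch for Max-Cut on a random weighted graph. The graph, cooling schedule, and Metropolis acceptance rule are illustrative assumptions, not a model of the SHE-MTJ device.

      import numpy as np

      rng = np.random.default_rng(1)

      # Random weighted graph (assumed: 12 vertices, roughly half of the possible edges present).
      n = 12
      mask = rng.random((n, n)) < 0.5
      W = np.triu(rng.random((n, n)) * mask, k=1)
      W = W + W.T                               # symmetric weight matrix, zero diagonal

      # Ising encoding of Max-Cut: with spins s_i in {-1,+1}, the cut value is
      # sum_{i<j} W_ij (1 - s_i s_j)/2, so maximizing the cut minimizes H(s) = sum_{i<j} W_ij s_i s_j.
      s = rng.choice([-1, 1], size=n)

      T_hot, T_cold, sweeps = 5.0, 0.05, 300    # illustrative annealing schedule
      for k in range(sweeps):
          T = T_hot * (T_cold / T_hot) ** (k / (sweeps - 1))   # geometric cooling
          for i in rng.permutation(n):
              dE = -2.0 * s[i] * (W[i] @ s)     # energy change if spin i is flipped
              # Stochastic update ("annealing"): downhill moves always accepted,
              # uphill moves accepted with Boltzmann probability.
              if dE <= 0 or rng.random() < np.exp(-dE / T):
                  s[i] = -s[i]

      cut = 0.25 * np.sum(W * (1 - np.outer(s, s)))   # each edge counted twice in the full sum
      print("partition:", s, " cut weight:", round(float(cut), 3))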

  3. Value framework of two-stage supply chain with stochastic demand

    Institute of Scientific and Technical Information of China (English)

    郝海

    2011-01-01

    The value composition of a two-stage supply chain with stochastic demand is discussed. A game model of the two-stage supply chain is established and a formula for the optimal profit of the supply chain is derived. The concepts of the retained value and surplus value of the supply chain, together with methods for measuring them, are proposed, and the appreciation of customer equity within the supply chain is analyzed for the given model. Through an empirical study, the various value components of the supply chain are estimated starting from its maximal profit, pointing out directions for improving supply chain management.

  4. A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems

    Directory of Open Access Journals (Sweden)

    Angel A. Juan

    2015-12-01

    Full Text Available Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need for obtaining high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but they frequently assume that the problem inputs, the underlying objective function, and the set of optimization constraints are deterministic. However, uncertainty is all around us, which often makes deterministic models oversimplified versions of real-life systems. After completing an extensive review of related work, this paper describes a general methodology that allows for extending metaheuristics through simulation to solve stochastic COPs. ‘Simheuristics’ allow modelers to deal with real-life uncertainty in a natural way by integrating simulation (in any of its variants) into a metaheuristic-driven framework. These optimization-driven algorithms rely on the fact that efficient metaheuristics already exist for the deterministic version of the corresponding COP. Simheuristics also facilitate the introduction of risk and/or reliability analysis criteria during the assessment of alternative high-quality solutions to stochastic COPs. Several examples of applications in different fields illustrate the potential of the proposed methodology.
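
    As a deliberately tiny illustration of the simheuristic pattern described above, the sketch below couples a bit-flip hill climber on a deterministic surrogate with Monte Carlo re-evaluation of an elite pool, for a toy stochastic knapsack. The problem data, move operator, and scoring rule are assumptions made for this example only.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy stochastic knapsack: values are known, weights are random with known means,
      # and a capacity overrun voids the profit of that realization.
      n_items, CAP = 15, 40.0
      values = rng.uniform(5, 20, n_items)
      w_mean = rng.uniform(2, 8, n_items)

      def det_profit(sol):
          # Deterministic surrogate used inside the metaheuristic: mean weights only.
          return float(values @ sol) if float(w_mean @ sol) <= CAP else -np.inf

      def simulate(sol, runs=500):
          # Monte Carlo assessment of a candidate under weight uncertainty.
          w = w_mean * rng.lognormal(0.0, 0.25, size=(runs, n_items))
          feasible = (w @ sol) <= CAP
          return float(np.mean(values @ sol * feasible)), float(np.mean(feasible))

      # Metaheuristic stage: bit-flip hill climbing on the deterministic surrogate,
      # keeping an elite pool of improving solutions found along the way.
      best = np.zeros(n_items, dtype=int)
      elite = []
      for _ in range(3000):
          cand = best.copy()
          cand[rng.integers(n_items)] ^= 1
          if det_profit(cand) > det_profit(best):
              best = cand
              elite.append(cand.copy())
      elite = elite[-5:]                        # a few promising deterministic solutions

      # Simulation stage: re-rank the elite pool by expected profit and report reliability.
      scored = [(sol, *simulate(sol)) for sol in elite]
      sol, exp_profit, reliability = max(scored, key=lambda t: t[1])
      print("chosen items:", np.flatnonzero(sol),
            f" expected profit: {exp_profit:.1f}  P(feasible): {reliability:.2f}")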

  5. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  6. A multi-objective stochastic approach to combinatorial technology space exploration

    Science.gov (United States)

    Patel, Chirag B.

    Historically, aerospace development programs have frequently been marked by performance shortfalls, cost growth, and schedule slippage. New technologies included in systems are considered to be one of the major sources of this programmatic risk. Decisions regarding the choice of technologies to include in a design are therefore crucial for a successful development program. This problem of technology selection is a challenging exercise in multi-objective decision making. The complexity of this selection problem is compounded by the geometric growth of the combinatorial space with the number of technologies being considered and the uncertainties inherent in the knowledge of the technological attributes. These problems are not typically addressed in the selection methods employed in common practice. Consequently, a method is desired to aid the selection of technologies for complex systems design with consideration of the combinatorial complexity, multi-dimensionality, and the presence of uncertainties. Several categories of techniques are explored to address the shortcomings of current approaches and to realize the goal of an efficient and effective combinatorial technology space exploration method. For the multi-objective decision making, a posteriori preference articulation is implemented. To realize this, a stochastic algorithm for Pareto optimization is formulated based on the concepts of SPEA2. Techniques to address the uncertain nature of technology impact on the system are also examined. Monte Carlo simulations using the surrogate models are used for uncertainty quantification. The concepts of graph theory are used for modeling and analyzing compatibility constraints among technologies and assessing their impact on the technology combinatorial space. The overall decision making approach is enabled by the application of an uncertainty quantification technique under the framework of an efficient probabilistic Pareto optimization algorithm. As a result, multiple

  7. The transmission process: A combinatorial stochastic process for the evolution of transmission trees over networks.

    Science.gov (United States)

    Sainudiin, Raazesh; Welch, David

    2016-12-07

    We derive a combinatorial stochastic process for the evolution of the transmission tree over the infected vertices of a host contact network in a susceptible-infected (SI) model of an epidemic. Models of transmission trees are crucial to understanding the evolution of pathogen populations. We provide an explicit description of the transmission process on the product state space of (rooted planar ranked labelled) binary transmission trees and labelled host contact networks with SI-tags as a discrete-state continuous-time Markov chain. We give the exact probability of any transmission tree when the host contact network is a complete, star or path network - three illustrative examples. We then develop a biparametric Beta-splitting model that directly generates transmission trees with exact probabilities as a function of the model parameters, but without explicitly modelling the underlying contact network, and show that for specific values of the parameters we can recover the exact probabilities for our three example networks through the Markov chain construction that explicitly models the underlying contact network. We use the maximum likelihood estimator (MLE) to consistently infer the two parameters driving the transmission process based on observations of the transmission trees and use the exact MLE to characterize equivalence classes over the space of contact networks with a single initial infection. An exploratory simulation study of the MLEs from transmission trees sampled from three other deterministic and four random families of classical contact networks is conducted to shed light on the relation between the MLEs of these families with some implications for statistical inference along with pointers to further extensions of our models. The insights developed here are also applicable to the simplest models of "meme" evolution in online social media networks through transmission events that can be distilled from observable actions such as "likes", "mentions
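
    A minimal simulation of the SI transmission process on a host contact network, recording only who infected whom (a simplification of the ranked planar labelled trees used in the paper), is sketched below for the path network, one of the three illustrative cases mentioned above; the transmission rate and network size are arbitrary assumptions.

      import numpy as np

      rng = np.random.default_rng(3)

      n, beta = 8, 1.0
      # A path network 0-1-2-...-7.
      edges = [(i, i + 1) for i in range(n - 1)]
      adj = {i: set() for i in range(n)}
      for u, v in edges:
          adj[u].add(v)
          adj[v].add(u)

      infected = {0}                      # single initial infection at vertex 0
      parent = {0: None}                  # transmission tree stored as parent pointers
      t = 0.0
      while len(infected) < n:
          # Susceptible-infected contact pairs; each transmits at rate beta.
          pairs = [(i, s) for i in infected for s in adj[i] if s not in infected]
          if not pairs:
              break                       # no reachable susceptibles left
          rate = beta * len(pairs)
          t += rng.exponential(1.0 / rate)        # waiting time to the next transmission
          src, dst = pairs[rng.integers(len(pairs))]
          infected.add(dst)
          parent[dst] = src                        # record the transmission edge

      print("transmission edges (child <- parent):",
            {c: p for c, p in parent.items() if p is not None})
      print(f"time of last transmission: {t:.2f}")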

  8. Two-stage stochastic day-ahead optimal resource scheduling in a distribution network with intensive use of distributed energy resources

    DEFF Research Database (Denmark)

    Sousa, Tiago; Ghazvini, Mohammad Ali Fotouhi; Morais, Hugo

    2015-01-01

    The integration of renewable sources and electric vehicles will introduce new uncertainties to the optimal resource scheduling, namely at the distribution level. These uncertainties mainly originate from the power generated by renewable sources and from the electric vehicles' charging requirements....... This paper proposes a two-stage stochastic programming approach to solve the day-ahead optimal resource scheduling problem. The case study considers a 33-bus distribution network with 66 distributed generation units and 1000 electric vehicles.

  9. A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems

    OpenAIRE

    Juan, Angel A.; Javier Faulin; Scott E. Grasman; Markus Rabe; Gonçalo Figueira

    2015-01-01

    Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need for obtaining high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but ...

  10. An inexact two-stage stochastic model for water resource management under uncertainty

    Institute of Scientific and Technical Information of China (English)

    徐毅; 汤烨; 付殿峥; 解玉磊

    2012-01-01

    To address the problems of water allocation among different enterprises in a river basin and of water pollution caused by their production discharges, interval two-stage stochastic programming is used to couple an inexact two-stage model (ITSP) with an inexact water quality model (IS-P), yielding an inexact two-stage stochastic water quality-quantity coupled programming model (ITSP-SP). Taking the maximal system benefit within the basin as the objective function, the model simulates the water allocation to each enterprise and the changes in river water quality during waste discharge, and it optimizes the expected water allocation targets and adjusts the production scales of the enterprises while keeping the river water quality within standards. Interval solutions obtained from the model provide managers with a variety of decision alternatives. Moreover, the model fully accounts for the influence of uncertain factors on system benefit and can effectively avoid decision failures and missing alternatives.

  11. Efficient Two-Stage Group Testing Algorithms for DNA Screening

    CERN Document Server

    Huber, Michael

    2011-01-01

    Group testing algorithms are very useful tools for DNA library screening. Building on recent work by Levenshtein (2003) and Tonchev (2008), we construct in this paper new infinite classes of combinatorial structures, the existence of which is essential for attaining the minimum number of individual tests at the second stage of a two-stage disjunctive testing algorithm.
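
    To fix ideas, the simplest two-stage scheme (Dorfman-style pooling, shown below only as a hedged illustration of the two-stage principle, not of the disjunctive designs constructed in the paper) tests disjoint pools first and then retests individually only the members of positive pools.

      import numpy as np

      rng = np.random.default_rng(7)

      N, prevalence, pool_size = 96, 0.02, 8
      positive = rng.random(N) < prevalence           # hidden true status of each sample

      tests = 0
      confirmed = []
      for start in range(0, N, pool_size):
          pool = np.arange(start, start + pool_size)
          tests += 1                                   # one pooled (first-stage) test
          if positive[pool].any():                     # positive pool -> second-stage resolution
              tests += len(pool)
              confirmed.extend(int(i) for i in pool if positive[i])

      print(f"true positives found: {confirmed}, total tests: {tests} (vs {N} individual tests)")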

  12. Stochastic Constraint Programming

    OpenAIRE

    Walsh, Toby

    2009-01-01

    To model combinatorial decision problems involving uncertainty and probability, we introduce stochastic constraint programming. Stochastic constraint programs contain both decision variables (which we can set) and stochastic variables (which follow a probability distribution). They combine together the best features of traditional constraint satisfaction, stochastic integer programming, and stochastic satisfiability. We give a semantics for stochastic constraint programs, and propose a number...

  13. Optimal water resources planning based on interval-parameter two-stage stochastic programming

    Institute of Scientific and Technical Information of China (English)

    付银环; 郭萍; 方世奇; 李茉

    2014-01-01

    Research on the uncertainty of optimal water resources allocation in irrigation districts is important for improving water use efficiency, reducing agricultural irrigation water use and building a water-saving society, especially in the arid and semi-arid areas of China. Addressing the uncertainties in irrigation water resources systems, this paper takes the Xiying, Qingyuan and Yongchang irrigation districts as the study area and uses interval two-stage stochastic programming to build a model for optimal allocation of water resources among irrigation districts with conjunctive dispatch of surface water and groundwater. The model takes the minimal cost of the multi-district, multi-source joint dispatching system as the objective function, introduces random numbers and interval numbers to express the uncertainties in the system, and optimizes the allocation of surface water and groundwater among the districts. With the allocation results as input data, and based on crop water production functions over the whole growth period, a nonlinear interval uncertain optimization model for the irrigation quotas of different crops is then built, which distributes the optimally allocated water to the typical crops of the districts. Both models give the optimization results in interval form, providing decision makers with a more accurate decision space and reflecting the actual form of optimal water resources allocation more realistically.

  14. Combinatorial chemistry

    DEFF Research Database (Denmark)

    Nielsen, John

    1994-01-01

    An overview of combinatorial chemistry is presented. Combinatorial chemistry, sometimes referred to as `irrational drug design,' involves the generation of molecular diversity. The resulting chemical library is then screened for biologically active compounds.

  16. The construction of two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1988-01-01

    Although two-stage testing is not the most efficient form of adaptive testing, it has some advantages. In this paper, linear programming models are given for the construction of two-stage tests. In these models, practical constraints with respect to, among other things, test composition, administrat

  17. Recursive algorithm for the two-stage EFOP estimation method

    Institute of Scientific and Technical Information of China (English)

    LUO GuiMing; HUANG Jian

    2008-01-01

    A recursive algorithm for the two-stage empirical frequency-domain optimal parameter (EFOP) estimation method is proposed. The EFOP method is a novel system identification method for black-box models that combines time-domain estimation and frequency-domain estimation. It has improved anti-disturbance performance and can precisely identify models from fewer samples. The two-stage EFOP method based on the bootstrap technique is generally suitable for black-box models, but it is an iterative method and requires too much computation to work well online. A recursive algorithm is therefore proposed for disturbed stochastic systems. Some simulation examples are included to demonstrate the validity of the new method.

  18. COMBINATORIAL LIBRARIES

    DEFF Research Database (Denmark)

    1997-01-01

    The invention provides a method for the production of a combinatorial library of compounds of general formula (I) using solid-phase methodologies. The cleavage of the array of immobilised compounds of the phthalimido type from the solid support matrix is accomplished by using an array of dinucleophiles, e.g. hydrazines (hydrazinolysis) or N-hydroxylamines, whereby a combinatorial dimension is introduced in the cleavage step. The invention also provides a compound library.

  19. Combinatorial Optimization

    CERN Document Server

    Chvátal, V

    2011-01-01

    This book is a collection of six articles arising from the meeting of the NATO Advanced Study Institute (ASI) "Combinatorial Optimization: Methods and Applications," which was held at the University of Montreal in June 2006. This ASI consisted of seven series of five one-hour lectures and one series of four one-hour lectures. It was attended by some sixty students of graduate or postdoctoral level from fifteen countries worldwide. It includes topics such as: integer and mixed integer programming, facility location, branching on split disjunctions, convexity in combinatorial optimizat

  20. Combinatorial Origami

    Science.gov (United States)

    Dieleman, Peter; Waitukaitis, Scott; van Hecke, Martin

    To design rigidly foldable quadrilateral meshes one generally needs to solve a complicated set of constraints. Here we present a systematic, combinatorial approach to create rigidly foldable quadrilateral meshes with a limited number of different vertices. The number of discrete, 1 degree-of-freedom folding branches for some of these meshes scales exponentially with the number of vertices on the edge, whilst other meshes generated this way only have two discrete folding branches, regardless of mesh size. We show how these two different behaviours both emerge from the two folding branches present in a single generic 4-vertex. Furthermore, we model generic 4-vertices as a spherical linkage and exploit a previously overlooked symmetry to create non-developable origami patterns using the same combinatorial framework.

  1. A Combinatorial Model of Demand Elasticity Under Coexistence of User Equilibrium and Stochastic User Equilibrium

    Institute of Scientific and Technical Information of China (English)

    罗朝晖

    2011-01-01

    This paper discusses the situation in which user equilibrium (UE) and stochastic user equilibrium (SUE) coexist. The traffic system is first divided into two subsystems that follow the UE and SUE principles, respectively. Considering elastic demand and the interaction between the users of the two subsystems, a combined model with elastic demand in which UE and SUE coexist is presented. It is proved that the first-order conditions of the model satisfy the UE and SUE route-choice conditions, and the form of the elastic demand function is derived. Finally, a combined diagonalization and MSA algorithm is given.

  2. Two-stage sampling for acceptance testing

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, C.L.; Bryan, M.F.

    1992-09-01

    Sometimes a regulatory requirement or a quality-assurance procedure sets an allowed maximum on a confidence limit for a mean. If the sample mean of the measurements is below the allowed maximum, but the confidence limit is above it, a very widespread practice is to increase the sample size and recalculate the confidence bound. The confidence level of this two-stage procedure is rarely found correctly, but instead is typically taken to be the nominal confidence level, found as if the final sample size had been specified in advance. In typical settings, the correct nominal α should be between the desired P(Type I error) and half that value. This note gives tables for the correct α to use, some plots of power curves, and an example of correct two-stage sampling.
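
    The inflation described in this note is easy to reproduce by simulation; the sketch below assumes normal measurements, a one-sided t-based upper confidence limit, and a single fixed second-stage size (all choices made for illustration), and estimates the actual Type I error of the two-stage practice when the true mean sits exactly at the allowed maximum.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)

      LIMIT, SIGMA = 0.0, 1.0          # allowed maximum and measurement spread (assumed)
      n1, n2, alpha = 10, 10, 0.05     # stage sizes and nominal significance level
      reps = 20000

      def ucl(x, a):
          # One-sided upper confidence limit for the mean of a normal sample.
          m, se = x.mean(), x.std(ddof=1) / np.sqrt(len(x))
          return m + stats.t.ppf(1 - a, len(x) - 1) * se

      passes = 0
      for _ in range(reps):
          x = rng.normal(LIMIT, SIGMA, n1)
          if ucl(x, alpha) <= LIMIT:
              passes += 1                              # passed at the first stage
          elif x.mean() < LIMIT:
              # Sample mean below the limit but UCL above it: take more data and retest,
              # reusing the nominal alpha as if the final sample size had been fixed upfront.
              x = np.concatenate([x, rng.normal(LIMIT, SIGMA, n2)])
              if ucl(x, alpha) <= LIMIT:
                  passes += 1

      print(f"nominal alpha = {alpha:.3f}, estimated two-stage Type I error = {passes / reps:.3f}")

    Running this typically gives an error rate noticeably above the nominal α, which is the effect the note's tables correct for.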

  4. Two Stage Gear Tooth Dynamics Program

    Science.gov (United States)

    1989-08-01

    conditions and associated iteration procedure become more complex. This is due to both the increased number of components and to the time for a...solved for each stage in the two stage solution. There are (3 + number of planets) degrees of freedom for each stage plus two degrees of freedom...should be devised. It should be noted that this is not a minor task. In general, each stage plus an input or output shaft will have 2 times (4 + number

  5. Combinatorial model and algorithm involving OD distribution and stochastic user equilibrium assignment

    Institute of Scientific and Technical Information of China (English)

    周溪召

    2001-01-01

    Because the randomness of transportation information has not been considered in current practical transportation planning, the accuracy and efficiency of transportation planning are reduced. To overcome this drawback, a combinatorial model simultaneously involving the OD distribution of trips in a transportation network and the stochastic user equilibrium assignment (SUEA) of trips to routes within each OD pair is developed, based on an analysis of the randomness in route and destination choices for a given mode. By introducing a Lagrangian function, it is proved that the solution of the combinatorial model is unique and satisfies the Wardrop principle of SUEA and the requirements of OD distribution. Finally, an algorithm for the model based on a direction-search method is given.

  6. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossings slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  7. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds...

  8. Two Stage Sibling Cycle Compressor/Expander.

    Science.gov (United States)

    1994-02-01

    Final report PL-TR-94-1051, "Two Stage Sibling Cycle Compressor/Expander," prepared by Matthew P. Mitchell, Mitchell/Stirling Machines/Systems, Inc., Berkeley, CA, under contract.

  9. Multi-water conjunctive optimal allocation based on interval-parameter two-stage fuzzy-stochastic programming

    Institute of Scientific and Technical Information of China (English)

    李晨洋; 张志鑫

    2016-01-01

    Addressing the uncertainty and complexity in irrigation water dispatching systems, this paper takes the Hong Xinglong irrigation area of China as the study region and develops an interval-parameter two-stage fuzzy-stochastic programming model, which is applied to the conjunctive allocation of surface water and groundwater in the irrigation area. The model takes the maximal benefit of the multi-source joint dispatching system as the objective function and introduces interval numbers, fuzzy numbers and random variables to represent the uncertainties in the system, optimizing the water allocation targets of surface water and groundwater among the crops. The calculation yields the optimal allocation targets and the optimal allocated water amounts from different water sources to different crops. The model not only fully considers the influence of uncertain factors on system benefit, but also trades off economic benefit against penalty risk. Taking the 2006 cropping and irrigation situation of the Hong Xinglong irrigation area as an example, the maximal system benefit is found to lie between 1355.144 × 10^6 and 2371.792 × 10^6 yuan. Because the optimization results are given in interval form, they offer decision makers a wider decision space from which the most scientifically sound decision scheme can be obtained.

  10. Applications of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2013-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying in the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aims to cover a wide range of topics in this area. These topics also deal with fundamental notions and approaches as with several classical applications of combinatorial optimization. "Applications of Combinatorial Optimization" is presenting a certain number among the most common and well-known applications of Combinatorial Optimization.

  11. Classification in two-stage screening.

    Science.gov (United States)

    Longford, Nicholas T

    2015-11-10

    Decision theory is applied to the problem of setting thresholds in medical screening when it is organised in two stages. In the first stage, which involves a less expensive procedure that can be applied on a mass scale, an individual is classified as a negative or a likely positive. In the second stage, the likely positives are subjected to another test that classifies them as (definite) positives or negatives. The second-stage test is more accurate, but also more expensive and more involved, and so there are incentives to restrict its application. Robustness of the method with respect to the parameters, some of which have to be set by elicitation, is assessed by sensitivity analysis.

  12. Two stage gear tooth dynamics program

    Science.gov (United States)

    Boyd, Linda S.

    1989-01-01

    The epicyclic gear dynamics program was expanded to add the option of evaluating the tooth pair dynamics for two epicyclic gear stages with peripheral components. This was a practical extension to the program, as multiple gear stages are often used for speed reduction, space, weight, and/or auxiliary units. The option was developed for either stage to be a basic planetary, star, single external-external mesh, or single external-internal mesh. The two-stage system allows for modeling of the peripherals with an input mass and shaft, an output mass and shaft, and a connecting shaft. Execution of the initial test case indicated an instability in the solution, with the tooth pair loads growing to excessive magnitudes. A procedure to trace the instability is recommended, as well as a method of reducing the program's computation time by reducing the number of boundary condition iterations.

  13. Two-Stage Modelling Of Random Phenomena

    Science.gov (United States)

    Barańska, Anna

    2015-12-01

    The main objective of this publication was to present a two-stage algorithm for modelling random phenomena, based on multidimensional function modelling, using as examples the modelling of the real estate market for the purpose of real estate valuation and the estimation of model parameters of foundations' vertical displacements. The first stage of the presented algorithm includes the selection of a suitable form of the function model. In classical algorithms based on function modelling, the prediction of the dependent variable is its value obtained directly from the model. The better the model reflects the relationship between the independent variables and their effect on the dependent variable, the more reliable is the model value. In this paper, an algorithm has been proposed which comprises adjustment of the value obtained from the model with a random correction determined from the residuals of the model for those cases which, in a separate analysis, were considered to be the most similar to the object for which we want to model the dependent variable. The effect of applying the developed quantitative procedures for calculating the corrections, and qualitative methods to assess the similarity, on the final outcome of the prediction and its accuracy was examined by statistical methods, mainly using appropriate parametric tests of significance. The idea of the presented algorithm has been designed so as to approximate the value of the dependent variable of the studied phenomenon to its value in reality and, at the same time, to have it "smoothed out" by a well-fitted modelling function.
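
    The core of the algorithm, a model value adjusted by a residual-based correction taken from the most similar cases, can be sketched in a few lines. The example below uses synthetic data, ordinary least squares as the stage-one function model, and plain Euclidean distance as a stand-in for the paper's qualitative similarity assessment, so it only illustrates the idea.

"""Sketch of the two-stage idea: a fitted model value adjusted by the mean
residual of the most similar training cases.

The data are synthetic and the similarity measure is plain Euclidean distance
in the space of independent variables; the paper's similarity assessment is
more elaborate.
"""
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 2))                  # independent variables
y = 3.0 + 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 1.0, 200)

# Stage 1: fit a functional model (here ordinary least squares).
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ beta

def predict_two_stage(x_new, k=10):
    """Model value plus the mean residual of the k most similar training cases."""
    model_value = np.array([1.0, *x_new]) @ beta
    dist = np.linalg.norm(X - x_new, axis=1)
    correction = residuals[np.argsort(dist)[:k]].mean()
    return model_value + correction

x_new = np.array([4.0, 7.0])
print("stage-1 model value :", float(np.array([1.0, *x_new]) @ beta))
print("two-stage prediction:", float(predict_two_stage(x_new)))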

  14. Concepts of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2014-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area. These topics also deal with fundamental notions and approaches as well as with several classical applications of combinatorial optimization. Concepts of Combinatorial Optimization is divided into three parts: - On the complexity of combinatorial optimization problems, presenting basics about worst-case and randomi

  15. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2002-01-01

    Composite likelihood; Two-stage estimation; Family studies; Copula; Optimal weights; All possible pairs

  16. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.

  17. Matching tutor to student: rules and mechanisms for efficient two-stage learning in neural circuits

    CERN Document Server

    Tesileanu, Tiberiu; Balasubramanian, Vijay

    2016-01-01

    Existing models of birdsong learning assume that brain area LMAN introduces variability into song for trial-and-error learning. Recent data suggest that LMAN also encodes a corrective bias driving short-term improvements in song. These later consolidate in area RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using a stochastic gradient descent approach, we derive how 'tutor' circuits should match plasticity mechanisms in 'student' circuits for efficient learning. We further describe a reinforcement learning framework with which the tutor can build its teaching signal. We show that mismatching the tutor signal and plasticity mechanism can impair or abolish learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.

  18. Generalized Yule-walker and two-stage identification algorithms for dual-rate systems

    Institute of Scientific and Technical Information of China (English)

    Feng DING

    2006-01-01

    In this paper, two approaches are developed for directly identifying single-rate models of dual-rate stochastic systems in which the input updating frequency is an integer multiple of the output sampling frequency. The first is the generalized Yule-Walker algorithm and the second is a two-stage algorithm based on the correlation technique. The basic idea is to directly identify the parameters of underlying single-rate models instead of the lifted models of dual-rate systems from the dual-rate input-output data, assuming that the measurement data are stationary and ergodic. An example is given.
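
    As a point of reference for the correlation-based identification described above, the classical single-rate Yule-Walker estimator can be written down directly from sample autocovariances. The sketch below identifies an AR(2) model from simulated data; it does not implement the dual-rate generalisation or the lifted-model machinery of the paper.

"""Standard Yule-Walker estimation of AR coefficients from output data.

This is the classical single-rate version, shown only to illustrate the kind
of correlation-based identification that the paper generalises to dual-rate
systems.
"""
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(2) process: y[t] = a1*y[t-1] + a2*y[t-2] + e[t].
a1, a2, n = 0.6, -0.3, 5000
y = np.zeros(n)
e = rng.normal(0, 1, n)
for t in range(2, n):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + e[t]

def autocov(x, lag):
    x = x - x.mean()
    return np.dot(x[: len(x) - lag], x[lag:]) / len(x)

# Yule-Walker equations: R * a = r, with R the Toeplitz autocovariance matrix.
p = 2
r = np.array([autocov(y, k) for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a_hat = np.linalg.solve(R, r[1 : p + 1])
print("true coefficients     :", (a1, a2))
print("Yule-Walker estimates :", np.round(a_hat, 3))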

  19. Integer and combinatorial optimization

    CERN Document Server

    Nemhauser, George L

    1999-01-01

    Rave reviews for INTEGER AND COMBINATORIAL OPTIMIZATION ""This book provides an excellent introduction and survey of traditional fields of combinatorial optimization . . . It is indeed one of the best and most complete texts on combinatorial optimization . . . available. [And] with more than 700 entries, [it] has quite an exhaustive reference list.""-Optima ""A unifying approach to optimization problems is to formulate them like linear programming problems, while restricting some or all of the variables to the integers. This book is an encyclopedic resource for such f

  20. Treatment of cadmium dust with two-stage leaching process

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The treatment of cadmium dust with a two-stage leaching process was investigated to replace the existing sulphation roast-leaching processes. The process parameters in the first stage leaching were basically similar to the neutral leaching in zinc hydrometallurgy. The effects of process parameters in the second stage leaching on the extraction of zinc and cadmium were mainly studied. The experimental results indicated that zinc and cadmium could be efficiently recovered from the cadmium dust by the two-stage leaching process. The extraction percentages of zinc and cadmium in two-stage leaching reached 95% and 88% respectively under the optimum conditions. The total extraction percentage of Zn and Cd reached 94%.

  1. The combinatorial approach

    Directory of Open Access Journals (Sweden)

    Wilhelm F. Maier

    2004-10-01

    Full Text Available Two recently published books examine combinatorial materials synthesis, high-throughput screening of libraries, and the design of successful experiments. Both are a must for those interested in materials development and discovery, says Wilhelm F. Maier

  2. Combinatorial Floer Homology

    CERN Document Server

    de Silva, Vin; Salamon, Dietmar

    2012-01-01

    We define combinatorial Floer homology of a transverse pair of noncontractible, nonisotopic embedded loops in an oriented 2-manifold without boundary, prove that it is invariant under isotopy, and prove that it is isomorphic to the original Lagrangian Floer homology.

  3. Normal Order: Combinatorial Graphs

    CERN Document Server

    Solomon, A I; Blasiak, P; Horzela, A; Penson, K A; Solomon, Allan I.; Duchamp, Gerard; Blasiak, Pawel; Horzela, Andrzej; Penson, Karol A.

    2004-01-01

    A conventional context for supersymmetric problems arises when we consider systems containing both boson and fermion operators. In this note we consider the normal ordering problem for a string of such operators. In the general case, upon which we touch briefly, this problem leads to combinatorial numbers, the so-called Rook numbers. Since we assume that the two species, bosons and fermions, commute, we subsequently restrict ourselves to consideration of a single species, single-mode boson monomials. This problem leads to elegant generalisations of well-known combinatorial numbers, specifically Bell and Stirling numbers. We explicitly give the generating functions for some classes of these numbers. In this note we concentrate on the combinatorial graph approach, showing how some important classical results of graph theory lead to transparent representations of the combinatorial numbers associated with the boson normal ordering problem.

  4. LOGISTICS SCHEDULING: ANALYSIS OF TWO-STAGE PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yung-Chia CHANG; Chung-Yee LEE

    2003-01-01

    This paper studies the coordination effects between stages for scheduling problems where decision-making is a two-stage process. Two stages are considered as one system. The system can be a supply chain that links two stages, one stage representing a manufacturer and the other a distributor. It can also represent a single manufacturer, while each stage represents a different department responsible for a part of operations. A problem that jointly considers both stages in order to achieve ideal overall system performance is defined as a system problem. In practice, at times, it might not be feasible for the two stages to make coordinated decisions due to (i) the lack of channels that allow decision makers at the two stages to cooperate, and/or (ii) the optimal solution to the system problem being too difficult (or costly) to achieve. Two practical approaches are applied to solve a variant of two-stage logistic scheduling problems. The Forward Approach is defined as a solution procedure by which the first stage of the system problem is solved first, followed by the second stage. Similarly, the Backward Approach is defined as a solution procedure by which the second stage of the system problem is solved prior to solving the first stage. In each approach, the two stages are solved sequentially and the solution generated is treated as a heuristic solution with respect to the corresponding system problem. When decision makers at the two stages make decisions locally without considering consequences to the entire system, ineffectiveness may result, even when each stage optimally solves its own problem. The trade-off between the time complexity and the solution quality is the main concern. This paper provides the worst-case performance analysis for each approach.

  5. Residential Two-Stage Gas Furnaces - Do They Save Energy?

    Energy Technology Data Exchange (ETDEWEB)

    Lekov, Alex; Franco, Victor; Lutz, James

    2006-05-12

    Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, American Society of Heating Refrigeration and Air-conditioning Engineers (ASHRAE) proposed an update to its test procedure which corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure and to compare test results to research results from field tests. Overall, the DOE test procedure shows a reduction in the total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in the total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.

  6. Two-stage local M-estimation of additive models

    Institute of Scientific and Technical Information of China (English)

    JIANG JianCheng; LI JianTao

    2008-01-01

    This paper studies local M-estimation of the nonparametric components of additive models. A two-stage local M-estimation procedure is proposed for estimating the additive components and their derivatives. Under very mild conditions, the proposed estimators of each additive component and its derivative are jointly asymptotically normal and share the same asymptotic distributions as if the other components were known. The established asymptotic results also hold for two particular local M-estimations: the local least squares and least absolute deviation estimations. However, for general two-stage local M-estimation with continuous and nonlinear ψ-functions, its implementation is time-consuming. To reduce the computational burden, one-step approximations to the two-stage local M-estimators are developed. The one-step estimators are shown to achieve the same efficiency as the fully iterative two-stage local M-estimators, which makes the two-stage local M-estimation more feasible in practice. The proposed estimators inherit the advantages and at the same time overcome the disadvantages of the local least-squares based smoothers. In addition, the practical implementation of the proposed estimation is considered in detail. Simulations demonstrate the merits of the two-stage local M-estimation, and a real example illustrates the performance of the methodology.

  8. Research into Integrated Optimization for Two-stage Logistics Distribution Network under Stochastic Demand Based on Time

    Institute of Scientific and Technical Information of China (English)

    马汉武; 杨相; 赵林度; 程发新

    2012-01-01

    Distribution network design covers the design and optimization of facility location, inventory control, and transportation, but in the past these were studied separately at the strategic, tactical, and operational levels. In fact, there are complex interactions and widespread trade-off relations among these decision factors, which become especially pronounced in a changing environment. Taking full account of the importance of the time factor, and from the perspective of integrated logistics optimization, this paper builds an integrated location-routing-inventory problem (ILRIP) model for multiple potential distribution centers and multiple customers under stochastic demand, and designs a bi-level particle swarm optimization (PSO) algorithm to solve it. A numerical example is given at the end of the paper. The results help to integrate and optimize the distribution network of the supply chain, shorten the commodity turnover cycle, improve the customer service level, and enhance competitiveness.
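
    The solution engine mentioned above is a bi-level particle swarm optimization. The sketch below shows only the basic single-level PSO velocity/position update on an invented two-dimensional cost surface; the bi-level decomposition over location, routing and inventory decisions is not reproduced.

"""Plain particle swarm optimisation (PSO) on a toy continuous objective.

The ILRIP model above uses a bi-level PSO over location, routing and inventory
decisions; this single-level sketch with an invented objective only shows the
velocity/position update rule that such an algorithm is built on.
"""
import random

random.seed(5)

def objective(x):                       # hypothetical cost surface to minimise
    return (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2 + 2.0

DIM, SWARM, ITERS = 2, 15, 100
W, C1, C2 = 0.7, 1.5, 1.5               # inertia and acceleration coefficients

pos = [[random.uniform(-10, 10) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=objective)[:]

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]

print("best solution found:", [round(v, 3) for v in gbest])
print("objective value    :", round(objective(gbest), 4))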

  9. STARS A Two Stage High Gain Harmonic Generation FEL Demonstrator

    Energy Technology Data Exchange (ETDEWEB)

    M. Abo-Bakr; W. Anders; J. Bahrdt; P. Budz; K.B. Buerkmann-Gehrlein; O. Dressler; H.A. Duerr; V. Duerr; W. Eberhardt; S. Eisebitt; J. Feikes; R. Follath; A. Gaupp; R. Goergen; K. Goldammer; S.C. Hessler; K. Holldack; E. Jaeschke; Thorsten Kamps; S. Klauke; J. Knobloch; O. Kugeler; B.C. Kuske; P. Kuske; A. Meseck; R. Mitzner; R. Mueller; M. Neeb; A. Neumann; K. Ott; D. Pfluckhahn; T. Quast; M. Scheer; Th. Schroeter; M. Schuster; F. Senf; G. Wuestefeld; D. Kramer; Frank Marhauser

    2007-08-01

    BESSY is proposing a demonstration facility, called STARS, for a two-stage high-gain harmonic generation free electron laser (HGHG FEL). STARS is planned for lasing in the wavelength range 40 to 70 nm, requiring a beam energy of 325 MeV. The facility consists of a normal conducting gun, three superconducting TESLA-type acceleration modules modified for CW operation, a single-stage bunch compressor and finally a two-stage HGHG cascaded FEL. This paper describes the facility layout and the rationale behind the operation parameters.

  10. Dynamic Modelling of the Two-stage Gasification Process

    DEFF Research Database (Denmark)

    Gøbel, Benny; Henriksen, Ulrik B.; Houbak, Niels

    1999-01-01

    A two-stage gasification pilot plant was designed and built as a co-operative project between the Technical University of Denmark and the company REKA. A dynamic, mathematical model of the two-stage pilot plant was developed to serve as a tool for optimising the process and the operating conditions of the gasification plant. The model consists of modules corresponding to the different elements in the plant. The modules are coupled together through mass and heat conservation. Results from the model are compared with experimental data obtained during steady and unsteady operation of the pilot plant. A good...

  11. Combinatorial Hybrid Systems

    DEFF Research Database (Denmark)

    Larsen, Jesper Abildgaard; Wisniewski, Rafal; Grunnet, Jacob Deleuran

    2008-01-01

    As initially suggested by E. Sontag, it is possible to approximate an arbitrary nonlinear system by a set of piecewise linear systems. In this work we concentrate on how to control a system given by a set of piecewise linear systems defined on simplices. By using the results of L. Habets and J. van Schuppen, it is possible to find a controller for the system on each of the simplices, thus guaranteeing that the system flow on the simplex will only leave the simplex through a subset of its faces. Motivated by R. Forman, on the triangulated state space we define a combinatorial vector field, which indicates for a given face the future simplex. In the suggested definition we allow nondeterminacy in the form of splitting and merging of solution trajectories. The combinatorial vector field gives rise to combinatorial counterparts of most concepts from dynamical systems, such as duals to vector fields, flow...

  12. Introduction to combinatorial designs

    CERN Document Server

    Wallis, WD

    2007-01-01

    Combinatorial theory is one of the fastest growing areas of modern mathematics. Focusing on a major part of this subject, Introduction to Combinatorial Designs, Second Edition provides a solid foundation in the classical areas of design theory as well as in more contemporary designs based on applications in a variety of fields. After an overview of basic concepts, the text introduces balanced designs and finite geometries. The author then delves into balanced incomplete block designs, covering difference methods, residual and derived designs, and resolvability. Following a chapter on the e

  13. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    Full Text Available This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which helps us to discuss the properties of the optimal solution. Finally, some numerical experiments are performed to demonstrate the new modeling idea and the effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal invested amount in risky security increases as the risk-free return decreases and the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.

  14. High Performance Gasification with the Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Gøbel, Benny; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Based on more than 15 years of research and practical experience, the Technical University of Denmark (DTU) and COWI Consulting Engineers and Planners AS present the two-stage gasification process, a concept for high-efficiency gasification of biomass producing negligible amounts of tars. In the two-stage gasification concept, the pyrolysis and the gasification processes are physically separated. The volatiles from the pyrolysis are partially oxidized, and the hot gases are used as gasification medium to gasify the char. Hot gases from the gasifier and a combustion unit can be used for drying... a cold gas efficiency exceeding 90% is obtained. In the original design of the two-stage gasification process, the pyrolysis unit consists of a screw conveyor with external heating, and the char unit is a fixed bed gasifier. This design is well proven during more than 1000 hours of testing with various

  15. FREE GRAFT TWO-STAGE URETHROPLASTY FOR HYPOSPADIAS REPAIR

    Institute of Scientific and Technical Information of China (English)

    Zhong-jin Yue; Ling-jun Zuo; Jia-ji Wang; Gan-ping Zhong; Jian-ming Duan; Zhi-ping Wang; Da-shan Qin

    2005-01-01

    Objective: To evaluate the effectiveness of free graft transplantation two-stage urethroplasty for hypospadias repair. Methods: Fifty-eight cases with different types of hypospadias, including 10 subcoronal, 36 penile shaft, 9 scrotal, and 3 perineal, were treated with free full-thickness skin graft or (and) buccal mucosal graft transplantation two-stage urethroplasty. Of the 58 cases, 45 were new cases and 13 had a history of previous failed surgeries. The operative procedure included two stages: the first stage is to correct penile curvature (chordee), prepare the transplanting bed, harvest and prepare the full-thickness skin graft or buccal mucosal graft, and perform graft transplantation. The second stage is to complete urethroplasty and glanuloplasty. Results: After the first-stage operation, 56 of 58 cases (96.6%) were successful with grafts healing well; the other 2 foreskin grafts became gangrenous. After the second-stage operation on 56 cases, 5 cases failed with the newly formed urethras opened due to infection, 8 cases had fistulas, and 43 (76.8%) healed well. Conclusions: Free graft transplantation two-stage urethroplasty for hypospadias repair is an effective treatment with broad indications, a comparatively high success rate, fewer complications, and good cosmetic results, and it is suitable for repair of various types of hypospadias.

  16. Composite likelihood and two-stage estimation in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2004-01-01

    In this paper register based family studies provide the motivation for linking a two-stage estimation procedure in copula models for multivariate failure time data with a composite likelihood approach. The asymptotic properties of the estimators in both parametric and semi-parametric models are d...

  17. A two-stage rank test using density estimation

    NARCIS (Netherlands)

    Albers, Willem/Wim

    1995-01-01

    For the one-sample problem, a two-stage rank test is derived which realizes a required power against a given local alternative, for all sufficiently smooth underlying distributions. This is achieved using asymptotic expansions resulting in a precision of order m⁻¹, where m is the size of the first
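
    The two-stage idea, use a first-stage sample to learn what is needed for the second stage, can be illustrated with a Stein-type sample-size calculation. The sketch below estimates the variance from a pilot sample and derives the total sample size for a prescribed interval half-width; it mimics only the two-stage structure, not the rank-test and density-estimation construction of the paper.

"""Stein-type two-stage sampling sketch: a first-stage sample of size m is used
to estimate variability, which then fixes the total sample size needed for a
prescribed precision.  This mimics only the two-stage idea, not the rank-test
machinery of the paper.
"""
import math
import random

random.seed(42)

def two_stage_sample_size(first_stage, half_width=0.2, z=1.96):
    """Total n so that a z-interval for the mean has the given half-width."""
    m = len(first_stage)
    mean = sum(first_stage) / m
    s2 = sum((x - mean) ** 2 for x in first_stage) / (m - 1)
    n_required = math.ceil((z ** 2) * s2 / half_width ** 2)
    return max(m, n_required)

first_stage = [random.gauss(0.0, 1.0) for _ in range(30)]   # m = 30 pilot observations
n_total = two_stage_sample_size(first_stage)
print("first-stage size m :", len(first_stage))
print("total sample size n:", n_total)
print("second-stage draws :", n_total - len(first_stage))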

  18. The construction of customized two-stage tests

    NARCIS (Netherlands)

    Adema, Jos J.

    1990-01-01

    In this paper mixed integer linear programming models for customizing two-stage tests are given. Model constraints are imposed with respect to test composition, administration time, inter-item dependencies, and other practical considerations. It is not difficult to modify the models to make them use

  19. Manipulating Combinatorial Structures.

    Science.gov (United States)

    Labelle, Gilbert

    This set of transparencies shows how the manipulation of combinatorial structures in the context of modern combinatorics can easily lead to interesting teaching and learning activities at every level of education from elementary school to university. The transparencies describe: (1) the importance and relations of combinatorics to science and…

  20. A Two-Stage Approach to the Orienteering Problem with Stochastic Weights

    NARCIS (Netherlands)

    Evers, L.; Glorie, K.; Ster, S. van der; Barros, A.I.; Monsuur, H.

    2014-01-01

    The Orienteering Problem (OP) is a routing problem which has many interesting applications in logistics, tourism and defense. The aim of the OP is to find a maximum profit path or tour, which is feasible with respect to a capacity constraint on the total weight of the selected arcs. In this paper we
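
    A two-stage reading of the problem, pick a set of profitable nodes first and repair the plan once the random weights are revealed, can be illustrated by brute force on a tiny instance. In the sketch below the profits, weight scenarios, capacity and greedy recourse rule are all invented; real instances require the specialised algorithms the paper develops.

"""Brute-force sketch of a two-stage view of the orienteering problem with
stochastic weights: choose a subset of nodes to plan for (stage one), then in
each weight scenario keep planned nodes only while they fit (a crude recourse).
The instance and the recourse rule are invented for illustration.
"""
from itertools import combinations

profits = {"a": 5, "b": 8, "c": 4, "d": 7}
# Weight of visiting a node, per scenario, with scenario probabilities.
scenarios = [({"a": 3, "b": 5, "c": 2, "d": 4}, 0.5),
             ({"a": 4, "b": 7, "c": 3, "d": 6}, 0.5)]
capacity = 12

def expected_profit(plan):
    total = 0.0
    for weights, prob in scenarios:
        load, collected = 0, 0
        # Recourse: keep planned nodes in decreasing profit/weight order while they fit.
        for node in sorted(plan, key=lambda v: profits[v] / weights[v], reverse=True):
            if load + weights[node] <= capacity:
                load += weights[node]
                collected += profits[node]
        total += prob * collected
    return total

best = max((set(s) for r in range(len(profits) + 1)
            for s in combinations(profits, r)),
           key=expected_profit)
print("best first-stage plan:", sorted(best))
print("expected profit      :", expected_profit(best))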

  1. A simple randomised algorithm for convex optimisation - Application to two-stage stochastic programming.

    NARCIS (Netherlands)

    M. Dyer; R. Kannan; L. Stougie (Leen)

    2014-01-01

    We consider maximising a concave function over a convex set by a simple randomised algorithm. The strength of the algorithm is that it requires only approximate function evaluations for the concave function and a weak membership oracle for the convex set. Under smoothness conditions on the
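
    The interface assumed above, approximate function evaluations plus a weak membership oracle, can be exercised with a naive randomised hill climb that averages repeated noisy evaluations and tests feasibility through the oracle. The sketch below is emphatically not the algorithm analysed in the paper; the objective, feasible set and step rule are invented for illustration.

"""Toy randomised maximisation of a concave function using only noisy function
evaluations and a membership oracle for the feasible set.  This naive
accept-if-better random walk is not the algorithm analysed in the paper; it
only illustrates the oracle interface the paper assumes.
"""
import random

random.seed(11)

def noisy_f(x):                                   # approximate evaluation oracle
    exact = -((x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2)
    return exact + random.gauss(0.0, 0.05)

def member(x):                                    # membership oracle: disc of radius 4
    return x[0] ** 2 + x[1] ** 2 <= 16.0

def evaluate(x, repeats=20):
    """Average repeated noisy evaluations to sharpen the estimate."""
    return sum(noisy_f(x) for _ in range(repeats)) / repeats

x = [0.0, 0.0]
best_val = evaluate(x)
for _ in range(2000):
    step = [random.gauss(0.0, 0.3), random.gauss(0.0, 0.3)]
    candidate = [x[0] + step[0], x[1] + step[1]]
    if member(candidate):
        val = evaluate(candidate)
        if val > best_val:                        # hill-climb on the averaged estimate
            x, best_val = candidate, val

print("approximate maximiser:", [round(v, 2) for v in x])
print("estimated value      :", round(best_val, 3))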

  2. Research on universal combinatorial coding.

    Science.gov (United States)

    Lu, Jun; Zhang, Zhuo; Mo, Juan

    2014-01-01

    The concept of universal combinatorial coding is proposed. Relations exist, more or less, among many coding methods, which suggests that a universal coding method objectively exists and can serve as a bridge connecting many coding methods. Universal combinatorial coding is lossless and is based on combinatorics. Its combinational and exhaustive properties make it closely related to existing coding methods. Universal combinatorial coding does not depend on the probability statistics of the information source, and it has characteristics that cut across the three branches of coding. The paper analyses the relationship between universal combinatorial coding and a variety of coding methods and investigates several application technologies of this coding method. In addition, the efficiency of universal combinatorial coding is analysed theoretically. The multi-characteristic and multi-application nature of universal combinatorial coding is unique among existing coding methods. Universal combinatorial coding has both theoretical research and practical application value.
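
    A concrete, classical relative of the idea is enumerative coding, a lossless code built purely from combinatorics: a binary string is transmitted as its length, its number of ones, and its rank in a canonical enumeration of all strings with that many ones. The sketch below implements that classical scheme, not the authors' universal combinatorial coding.

"""Enumerative coding of a binary string: transmit its length, its number of
ones, and its rank in a canonical enumeration of all length-n strings with
that many ones.  A classical combinatorics-based lossless code, shown only as
a concrete relative of the 'universal combinatorial coding' idea.
"""
from math import comb

def encode(bits):
    n, k = len(bits), sum(bits)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b == 1:
            ones_left -= 1
        else:
            # Skip all strings that place a 1 in this position instead.
            rank += comb(n - i - 1, ones_left - 1) if ones_left > 0 else 0
    return n, k, rank

def decode(n, k, rank):
    bits, ones_left = [], k
    for i in range(n):
        block = comb(n - i - 1, ones_left - 1) if ones_left > 0 else 0
        if rank < block:
            bits.append(1)
            ones_left -= 1
        else:
            bits.append(0)
            rank -= block
    return bits

msg = [1, 0, 1, 1, 0, 0, 1, 0]
n, k, rank = encode(msg)
print("code (n, k, rank):", (n, k, rank))
print("round trip OK    :", decode(n, k, rank) == msg)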

  3. Square Kilometre Array station configuration using two-stage beamforming

    CERN Document Server

    Jiwani, Aziz; Razavi-Ghods, Nima; Hall, Peter J; Padhi, Shantanu; de Vaate, Jan Geralt bij

    2012-01-01

    The lowest frequency band (70 - 450 MHz) of the Square Kilometre Array will consist of sparse aperture arrays grouped into geographically-localised patches, or stations. Signals from thousands of antennas in each station will be beamformed to produce station beams which form the inputs for the central correlator. Two-stage beamforming within stations can reduce SKA-low signal processing load and costs, but has not been previously explored for the irregular station layouts now favoured in radio astronomy arrays. This paper illustrates the effects of two-stage beamforming on sidelobes and effective area, for two representative station layouts (regular and irregular gridded tile on an irregular station). The performance is compared with a single-stage, irregular station. The inner sidelobe levels do not change significantly between layouts, but the more distant sidelobes are affected by the tile layouts; regular tile creates diffuse, but regular, grating lobes. With very sparse arrays, the station effective area...

  4. Two stage sorption type cryogenic refrigerator including heat regeneration system

    Science.gov (United States)

    Jones, Jack A.; Wen, Liang-Chi; Bard, Steven

    1989-01-01

    A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber providing a high pressure flow of a first refrigerant for the lower stage refrigeration cycle within a second pressurization chamber providing a high pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

  5. Two-stage approach to full Chinese parsing

    Institute of Scientific and Technical Information of China (English)

    Cao Hailong; Zhao Tiejun; Yang Muyun; Li Sheng

    2005-01-01

    Natural language parsing is a task of great importance and extreme difficulty. In this paper, we present a full Chinese parsing system based on a two-stage approach. Rather than identifying all phrases with a uniform model, we utilize a divide-and-conquer strategy. We propose an effective and fast method based on a Markov model to identify the base phrases. Then we make the first attempt to extend one of the best English parsing models, i.e. the head-driven model, to recognize Chinese complex phrases. Our two-stage approach is superior to the uniform approach in two aspects. First, it creates synergy between the Markov model and the head-driven model. Second, it reduces the complexity of full Chinese parsing and makes the parsing system space- and time-efficient. We evaluate our approach in PARSEVAL measures on the open test set; the parsing system performs at 87.53% precision and 87.95% recall.

  6. Income and Poverty across SMSAs: A Two-Stage Analysis

    OpenAIRE

    1993-01-01

    Two popular explanations of urban poverty are the "welfare-disincentive" and "urban-deindustrialization" theories. Using cross-sectional Census data, we develop a two-stage model to predict an SMSA's median family income and poverty rate. The model allows the city's welfare level and industrial structure to affect its median family income and poverty rate directly. It also allows welfare and industrial structure to affect income and poverty indirectly, through their effects on family structure...

  7. A Two-stage Polynomial Method for Spectrum Emissivity Modeling

    OpenAIRE

    Qiu, Qirong; Liu, Shi; Teng, Jing; Yan, Yong

    2015-01-01

    Spectral emissivity is key to temperature measurement by radiation methods, but it is not easy to determine in a combustion environment because of the interrelated influence of the temperature and wavelength of the radiation. In multi-wavelength radiation thermometry, knowing the spectral emissivity of the material is a prerequisite. However, in many circumstances such a property is a complex function of temperature and wavelength, and reliable models are yet to be sought. In this study, a two stages...

  8. Measuring the Learning from Two-Stage Collaborative Group Exams

    CERN Document Server

    Ives, Joss

    2014-01-01

    A two-stage collaborative exam is one in which students first complete the exam individually, and then complete the same or similar exam in collaborative groups immediately afterward. To quantify the learning effect from the group component of these two-stage exams in an introductory Physics course, a randomized crossover design was used where each student participated in both the treatment and control groups. For each of the two two-stage collaborative group midterm exams, questions were designed to form matched near-transfer pairs with questions on an end-of-term diagnostic which was used as a learning test. For learning test questions paired with questions from the first midterm, which took place six to seven weeks before the learning test, an analysis using a mixed-effects logistic regression found no significant differences in learning-test performance between the control and treatment group. For learning test questions paired with questions from the second midterm, which took place one to two weeks prio...

  9. Combinatorial vector fields and the valley structure of fitness landscapes.

    Science.gov (United States)

    Stadler, Bärbel M R; Stadler, Peter F

    2010-12-01

    Adaptive (downhill) walks are a computationally convenient way of analyzing the geometric structure of fitness landscapes. Their inherently stochastic nature has limited their mathematical analysis, however. Here we develop a framework that interprets adaptive walks as deterministic trajectories in combinatorial vector fields and in return associate these combinatorial vector fields with weights that measure their steepness across the landscape. We show that the combinatorial vector fields and their weights have a product structure that is governed by the neutrality of the landscape. This product structure makes practical computations feasible. The framework presented here also provides an alternative, and mathematically more convenient, way of defining notions of valleys, saddle points, and barriers in landscape. As an application, we propose a refined approximation for transition rates between macrostates that are associated with the valleys of the landscape.
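
    The deterministic reading of adaptive walks can be illustrated on a toy landscape: assigning every genotype its steepest downhill neighbour defines a combinatorial 'vector' at each point, and following it partitions genotype space into basins around local minima. The sketch below uses an i.i.d. random fitness assignment over bit strings purely for illustration; it does not compute the weights or the product structure discussed in the paper.

"""Steepest-descent (adaptive downhill) walks on a random landscape of bit
strings.  Assigning every genotype its single steepest downhill neighbour
gives a deterministic trajectory, a toy stand-in for the combinatorial vector
fields discussed above.  The landscape here is i.i.d. random, purely for
illustration.
"""
import random

random.seed(3)
N = 6                                          # genotypes are bit strings of length N
fitness = {g: random.random() for g in range(2 ** N)}

def neighbours(g):
    return [g ^ (1 << i) for i in range(N)]    # single-bit flips

def downhill_step(g):
    """Combinatorial 'vector': point to the steepest downhill neighbour, if any."""
    best = min(neighbours(g), key=lambda h: fitness[h])
    return best if fitness[best] < fitness[g] else g   # g is a local minimum

def walk(g):
    while True:
        nxt = downhill_step(g)
        if nxt == g:
            return g
        g = nxt

basins = {}
for g in range(2 ** N):
    basins.setdefault(walk(g), []).append(g)

print("number of local minima (valleys):", len(basins))
for minimum, members in sorted(basins.items(), key=lambda kv: -len(kv[1])):
    print(f"  minimum {minimum:2d}: basin size {len(members)}")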

  10. Rules and mechanisms for efficient two-stage learning in neural circuits

    Science.gov (United States)

    Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay

    2017-01-01

    Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (e.g., LMAN) should match plasticity mechanisms in ‘student’ circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning. DOI: http://dx.doi.org/10.7554/eLife.20944.001 PMID:28374674
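
    A caricature of the two-stage picture, a fast corrective bias that steers the output immediately and a slow weight that consolidates it, can be written as a two-timescale update rule. The dynamics, rates and hand-over term in the sketch below are invented to convey the idea and are not taken from the model in the paper.

"""Caricature of two-stage learning: a fast 'tutor' bias (LMAN-like) steers the
output immediately, while a slow 'student' weight (RA-like) consolidates that
bias over time.  The dynamics and rates are invented for illustration, not
taken from the model in the paper.
"""
target = 1.0          # desired motor output
w, b = 0.0, 0.0       # slow student weight, fast tutor bias
eta_fast, eta_slow = 0.5, 0.05

for step in range(200):
    output = w + b                     # the bias corrects the output on short timescales
    b += eta_fast * (target - output)  # tutor quickly reduces the remaining error
    w += eta_slow * b                  # student slowly consolidates the tutor's bias
    b *= (1 - eta_slow)                # consolidated part of the bias is handed over

print(f"student weight w = {w:.3f} (target {target})")
print(f"residual tutor bias b = {b:.3f}")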

  11. Combinatorial materials synthesis

    Directory of Open Access Journals (Sweden)

    Ichiro Takeuchi

    2005-10-01

    Full Text Available The pace at which major technological changes take place is often dictated by the rate at which new materials are discovered, and the timely arrival of new materials has always played a key role in bringing advances to our society. It is no wonder then that the so-called combinatorial or high-throughput strategy has been embraced by practitioners of materials science in virtually every field. High-throughput experimentation allows simultaneous synthesis and screening of large arrays of different materials. Pioneered by the pharmaceutical industry, the combinatorial method is now widely considered to be a watershed in accelerating the discovery and optimization of new materials [1–5].

  12. Combinatorial Reciprocity Theorems

    CERN Document Server

    Beck, Matthias

    2012-01-01

    A common theme of enumerative combinatorics is formed by counting functions that are polynomials evaluated at positive integers. In this expository paper, we focus on four families of such counting functions connected to hyperplane arrangements, lattice points in polyhedra, proper colorings of graphs, and $P$-partitions. We will see that in each instance we get interesting information out of a counting function when we evaluate it at a \\emph{negative} integer (and so, a priori the counting function does not make sense at this number). Our goals are to convey some of the charm these "alternative" evaluations of counting functions exhibit, and to weave a unifying thread through various combinatorial reciprocity theorems by looking at them through the lens of geometry, which will include some scenic detours through other combinatorial concepts.

  13. Forty-five-degree two-stage venous cannula: advantages over standard two-stage venous cannulation.

    Science.gov (United States)

    Lawrence, D R; Desai, J B

    1997-01-01

    We present a 45-degree two-stage venous cannula that confers advantage to the surgeon using cardiopulmonary bypass. This cannula exits the mediastinum under the transverse bar of the sternal retractor, leaving the rostral end of the sternal incision free of apparatus. It allows for lifting of the heart with minimal effect on venous return and does not interfere with the radially laid out sutures of an aortic valve replacement using an interrupted suture technique.

  14. Pseudorandomness and Combinatorial Constructions

    OpenAIRE

    2006-01-01

    In combinatorics, the probabilistic method is a very powerful tool to prove the existence of combinatorial objects with interesting and useful properties. Explicit constructions of objects with such properties are often very difficult, or unknown. In computer science, probabilistic algorithms are sometimes simpler and more efficient than the best known deterministic algorithms for the same problem. Despite this evidence for the power of random choices, the computational theory of pseudorandom...

  15. Combinatorial group theory

    CERN Document Server

    Lyndon, Roger C

    2001-01-01

    From the reviews: "This book (...) defines the boundaries of the subject now called combinatorial group theory. (...) it is a considerable achievement to have concentrated a survey of the subject into 339 pages. This includes a substantial and useful bibliography (over 1100 [items]). ... the book is a valuable and welcome addition to the literature, containing many results not previously available in a book. It will undoubtedly become a standard reference." Mathematical Reviews, AMS, 1979.

  16. Combinatorial Quantum Gravity

    CERN Document Server

    Trugenberger, Carlo A

    2016-01-01

    In a recently developed approach, geometry is modelled as an emergent property of random networks. Here I show that one of these models I proposed is exactly quantum gravity defined in terms of the combinatorial Ricci curvature recently derived by Ollivier. Geometry in the weak (classical) gravity regime arises in a phase transition driven by the condensation of short graph cycles. The strong (quantum) gravity regime corresponds to "small world" random graphs with logarithmic distance scaling.

  17. A two-stage heuristic method for vehicle routing problem with split deliveries and pickups

    Institute of Scientific and Technical Information of China (English)

    Yong WANG; Xiao-lei MA; Yun-teng LAO; Hai-yan YU; Yong LIU

    2014-01-01

    The vehicle routing problem (VRP) is a well-known combinatorial optimization issue in transportation and logistics network systems. There exist several limitations associated with the traditional VRP. Releasing the restricted conditions of traditional VRP has become a research focus in the past few decades. The vehicle routing problem with split deliveries and pickups (VRPSPDP) is particularly proposed to release the constraints on the visiting times per customer and vehicle capacity, that is, to allow the deliveries and pickups for each customer to be simultaneously split more than once. Few studies have focused on the VRPSPDP problem. In this paper we propose a two-stage heuristic method integrating the initial heuristic algorithm and hybrid heuristic algorithm to study the VRPSPDP problem. To validate the proposed algorithm, Solomon benchmark datasets and extended Solomon benchmark datasets were modified to compare with three other popular algorithms. A total of 18 datasets were used to evaluate the effectiveness of the proposed method. The computational results indicated that the proposed algorithm is superior to these three algorithms for VRPSPDP in terms of total travel cost and average loading rate.

  18. On Two-stage Seamless Adaptive Design in Clinical Trials

    Directory of Open Access Journals (Sweden)

    Shein-Chung Chow

    2008-12-01

    Full Text Available In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular because of its efficiency and flexibility in modifying trial and/or statistical procedures of ongoing clinical trials. One of the most commonly considered adaptive designs is probably a two-stage seamless adaptive trial design that combines two separate studies into one single study. In many cases, study endpoints considered in a two-stage seamless adaptive design may be similar but different (e.g. a biomarker versus a regular clinical endpoint, or the same study endpoint with different treatment durations). In this case, it is important to determine how the data collected from both stages should be combined for the final analysis. It is also of interest to know how the sample size calculation/allocation should be done for achieving the study objectives originally set for the two stages (separate studies). In this article, formulas for sample size calculation/allocation are derived for cases in which the study endpoints are continuous, discrete (e.g. binary responses), or time-to-event data, assuming that there is a well-established relationship between the study endpoints at different stages, and that the study objectives at different stages are the same. In cases in which the study objectives at different stages are different (e.g. dose finding at the first stage and efficacy confirmation at the second stage) and when there is a shift in patient population caused by protocol amendments, the derived test statistics and formulas for sample size calculation and allocation are necessarily modified for controlling the overall type I error at the prespecified level.
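
    One standard device for pooling evidence across the two stages while protecting the overall type I error is the inverse-normal combination of stage-wise p-values, with weights fixed in advance from the planned sample sizes. The sketch below shows that combination rule on hypothetical p-values and sample sizes; it does not reproduce the article's sample-size formulas for the different endpoint types.

"""Inverse-normal combination of stage-wise p-values, a standard way to pool
evidence from the two stages of a seamless design while protecting the type I
error.  The weights and p-values below are hypothetical; the article's own
sample-size formulas for different endpoint types are not reproduced here.
"""
from math import sqrt
from statistics import NormalDist

norm = NormalDist()

def combined_p(p1, p2, n1, n2):
    """Weight each stage by the square root of its sample-size fraction."""
    w1 = sqrt(n1 / (n1 + n2))
    w2 = sqrt(n2 / (n1 + n2))
    z = w1 * norm.inv_cdf(1 - p1) + w2 * norm.inv_cdf(1 - p2)
    return 1 - norm.cdf(z)

p_combined = combined_p(p1=0.08, p2=0.02, n1=60, n2=120)
print("combined one-sided p-value:", round(p_combined, 4))
print("reject at alpha = 0.025?  :", p_combined < 0.025)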

  19. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa.

    Science.gov (United States)

    Yadavalli, Rajasri; Heggers, Goutham Rao Venkata Naga

    2013-12-19

    Dairy effluents contain a high organic load, and unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern besides deteriorating their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be potentially employed to treat the high organic content, which offers numerous benefits along with waste water treatment. A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Whilst NH4+-N was completely removed, 98% removal of PO43--P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed in the zebra fish used as a model at the end of the 96 h bioassay. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. Also, the separated biomass was tested as a biofertilizer on rice seeds, and a 30% increase in the length of root and shoot was observed after the addition of biomass to the rice plants. We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD as well as nutrients like nitrates and phosphates. The treatment also helps in discharging treated waste water safely into the receiving water bodies, since it is non-toxic for aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and it offers great potential as a biofertilizer.

  20. Two-stage series array SQUID amplifier for space applications

    Science.gov (United States)

    Tuttle, J. G.; DiPirro, M. J.; Shirron, P. J.; Welty, R. P.; Radparvar, M.

    We present test results for a two-stage integrated SQUID amplifier which uses a series array of d.c. SQUIDS to amplify the signal from a single input SQUID. The device was developed by Welty and Martinis at NIST and recent versions have been manufactured by HYPRES, Inc. Shielding and filtering techniques were employed during the testing to minimize the external noise. Energy resolution of 300 h was demonstrated using a d.c. excitation at frequencies above 1 kHz, and better than 500 h resolution was typical down to 300 Hz.

  1. A Two Stage Classification Approach for Handwritten Devanagari Characters

    CERN Document Server

    Arora, Sandhya; Nasipuri, Mita; Malik, Latesh

    2010-01-01

    The paper presents a two-stage classification approach for handwritten Devanagari characters. The first stage uses structural properties such as the shirorekha and spine of a character, and the second stage exploits intersection features of characters, which are fed to a feedforward neural network. A simple histogram-based method does not work for finding the shirorekha and vertical bar (spine) in handwritten Devanagari characters, so we designed a differential-distance-based technique to find a near-straight line for the shirorekha and spine. This approach has been tested on 50,000 samples and achieved an 89.12% success rate.

  2. Two-Stage Aggregate Formation via Streams in Myxobacteria

    Science.gov (United States)

    Alber, Mark; Kiskowski, Maria; Jiang, Yi

    2005-03-01

    In response to adverse conditions, myxobacteria form aggregates which develop into fruiting bodies. We model myxobacteria aggregation with a lattice cell model based entirely on short-range (non-chemotactic) cell-cell interactions. Local rules result in a two-stage process of aggregation mediated by transient streams. Aggregates resemble those observed in experiment and are stable against even very large perturbations. Noise in individual cell behavior increases the effects of streams and results in larger, more stable aggregates. Phys. Rev. Lett. 93: 068301 (2004).

  3. Straw Gasification in a Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Hindsgaul, Claus; Henriksen, Ulrik Birk

    2002-01-01

    Additive-prepared straw pellets were gasified in the 100 kW two-stage gasifier at the Department of Mechanical Engineering of the Technical University of Denmark (DTU). The fixed bed temperature range was 800-1000°C. In order to avoid bed sintering, as observed earlier with straw gasification ... residues were examined after the test. No agglomeration or sintering was observed in the ash residues. The tar content was measured both by the solid phase amino adsorption (SPA) method and by cold trapping (Petersen method). Both showed low tar contents (~42 mg/Nm3 without gas cleaning). The particle content...

  4. Two-Stage Fan I: Aerodynamic and Mechanical Design

    Science.gov (United States)

    Messenger, H. E.; Kennedy, E. E.

    1972-01-01

    A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.

  5. Two-Stage Eagle Strategy with Differential Evolution

    CERN Document Server

    Yang, Xin-She

    2012-01-01

    Efficiency of an optimization process is largely determined by the search algorithm and its fundamental characteristics. In a given optimization, a single type of algorithm is used in most applications. In this paper, we will investigate the Eagle Strategy recently developed for global optimization, which uses a two-stage strategy by combining two different algorithms to improve the overall search efficiency. We will discuss this strategy with differential evolution and then evaluate their performance by solving real-world optimization problems such as pressure vessel and speed reducer design. Results suggest that we can reduce the computing effort by a factor of up to 10 in many applications.
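
    The two-stage idea is easy to prototype: a cheap global exploration stage proposes a promising region, and differential evolution then refines it. The sketch below runs this on a simple sphere function with invented parameters; it is a minimal illustration, not the benchmark problems or the tuned Eagle Strategy of the paper.

"""Minimal Eagle-Strategy-style two-stage search: random global exploration
followed by differential evolution (DE/rand/1/bin-style) for refinement.
The test function, population size and parameters are chosen for illustration
only and do not reproduce the pressure-vessel or speed-reducer benchmarks.
"""
import random

random.seed(7)

def sphere(x):                     # simple test objective to minimise
    return sum(v * v for v in x)

def random_exploration(bounds, trials=200):
    """Stage 1: coarse global search by uniform random sampling."""
    return min((tuple(random.uniform(lo, hi) for lo, hi in bounds) for _ in range(trials)),
               key=sphere)

def differential_evolution(seed_point, bounds, pop_size=20, gens=100, F=0.7, CR=0.9):
    """Stage 2: simplified DE seeded with the exploration result."""
    dim = len(bounds)
    pop = [list(seed_point)] + [[random.uniform(lo, hi) for lo, hi in bounds]
                                for _ in range(pop_size - 1)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                     for d in range(dim)]
            trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
            if sphere(trial) <= sphere(pop[i]):
                pop[i] = trial
    return min(pop, key=sphere)

bounds = [(-5.0, 5.0)] * 3
stage1 = random_exploration(bounds)
stage2 = differential_evolution(stage1, bounds)
print("after exploration  :", [round(v, 3) for v in stage1], round(sphere(stage1), 4))
print("after DE refinement:", [round(v, 3) for v in stage2], round(sphere(stage2), 6))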

  6. Two-stage perceptual learning to break visual crowding.

    Science.gov (United States)

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  7. Two-Stage Heuristic Algorithm for Aircraft Recovery Problem

    Directory of Open Access Journals (Sweden)

    Cheng Zhang

    2017-01-01

    Full Text Available This study focuses on the aircraft recovery problem (ARP. In real-life operations, disruptions always cause schedule failures and make airlines suffer from great loss. Therefore, the main objective of the aircraft recovery problem is to minimize the total recovery cost and solve the problem within reasonable runtimes. An aircraft recovery model (ARM is proposed herein to formulate the ARP and use feasible line of flights as the basic variables in the model. We define the feasible line of flights (LOFs as a sequence of flights flown by an aircraft within one day. The number of LOFs exponentially grows with the number of flights. Hence, a two-stage heuristic is proposed to reduce the problem scale. The algorithm integrates a heuristic scoring procedure with an aggregated aircraft recovery model (AARM to preselect LOFs. The approach is tested on five real-life test scenarios. The computational results show that the proposed model provides a good formulation of the problem and can be solved within reasonable runtimes with the proposed methodology. The two-stage heuristic significantly reduces the number of LOFs after each stage and finally reduces the number of variables and constraints in the aircraft recovery model.

  8. Combinatorial auctions for electronic business

    Indian Academy of Sciences (India)

    Y Narahari; Pankaj Dayama

    2005-04-01

    Combinatorial auctions (CAs) have recently generated significant interest as an automated mechanism for buying and selling bundles of goods. They are proving to be extremely useful in numerous e-business applications such as e-selling, e-procurement, e-logistics, and B2B exchanges. In this article, we introduce combinatorial auctions and bring out important issues in the design of combinatorial auctions. We also highlight important contributions in current research in this area. This survey emphasizes combinatorial auctions as applied to electronic business situations.
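
    The computational core of a combinatorial auction is winner determination: choose a revenue-maximising set of bundle bids that do not share items. The sketch below solves a tiny invented instance by exhaustive search; practical e-business deployments rely on integer programming or specialised search rather than enumeration.

"""Brute-force winner determination for a tiny combinatorial auction: choose
the revenue-maximising set of non-overlapping bundle bids.  The bids are
invented, and exhaustive search is used only because the instance is tiny.
"""
from itertools import combinations

# Each bid: (bidder, bundle of items, price offered for the whole bundle).
bids = [("b1", {"A", "B"}, 10),
        ("b2", {"B", "C"}, 8),
        ("b3", {"C"}, 5),
        ("b4", {"A"}, 4),
        ("b5", {"A", "B", "C"}, 14)]

def compatible(subset):
    """Accepted bids must not share any item."""
    seen = set()
    for _, bundle, _ in subset:
        if seen & bundle:
            return False
        seen |= bundle
    return True

best_value, best_set = 0, ()
for r in range(1, len(bids) + 1):
    for subset in combinations(bids, r):
        if compatible(subset):
            value = sum(price for _, _, price in subset)
            if value > best_value:
                best_value, best_set = value, subset

print("winning bids   :", [name for name, _, _ in best_set])
print("auction revenue:", best_value)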

  9. The Yoccoz Combinatorial Analytic Invariant

    DEFF Research Database (Denmark)

    Petersen, Carsten Lunde; Roesch, Pascale

    2008-01-01

    In this paper we develop a combinatorial analytic encoding of the Mandelbrot set M. The encoding is implicit in Yoccoz' proof of local connectivity of M at any Yoccoz parameter, i.e. any at most finitely renormalizable parameter for which all periodic orbits are repelling. Using this encoding we...... define an explicit combinatorial analytic modelspace, which is sufficiently abstract that it can serve as a go-between for proving that other sets such as the parabolic Mandelbrot set M1 has the same combinatorial structure as M. As an immediate application we use here the combinatorial-analytic model...

  10. Introduction to combinatorial analysis

    CERN Document Server

    Riordan, John

    2002-01-01

    This introduction to combinatorial analysis defines the subject as "the number of ways there are of doing some well-defined operation." Chapter 1 surveys that part of the theory of permutations and combinations that finds a place in books on elementary algebra, which leads to the extended treatment of generating functions in Chapter 2, where an important result is the introduction of a set of multivariable polynomials. Chapter 3 contains an extended treatment of the principle of inclusion and exclusion which is indispensable to the enumeration of permutations with restricted position given

  11. Infinitary Combinatory Reduction Systems

    DEFF Research Database (Denmark)

    Ketema, Jeroen; Simonsen, Jakob Grue

    2011-01-01

    We define infinitary Combinatory Reduction Systems (iCRSs), thus providing the first notion of infinitary higher-order rewriting. The systems defined are sufficiently general that ordinary infinitary term rewriting and infinitary λ-calculus are special cases. Furthermore, we generalise a number...... of known results from first-order infinitary rewriting and infinitary λ-calculus to iCRSs. In particular, for fully-extended, left-linear iCRSs we prove the well-known compression property, and for orthogonal iCRSs we prove that (1) if a set of redexes U has a complete development, then all complete developments...

  12. Dynamic Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Lisbjerg, Micke

    This thesis is divided into seven chapters, which can all be read individually. The first chapter, however, contains a general introduction to the chemistry used in the remaining six chapters, and it is therefore recommended to read chapter one before reading the other chapters. Chapter 1...... is a general introductory chapter for the whole thesis. The history and concepts of dynamic combinatorial chemistry are described, as are some of the new and intriguing results recently obtained. Finally, the properties of a broad range of hexameric macrocycles are described in detail. Chapter 2 gives...

  14. Two-Stage Part-Based Pedestrian Detection

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Prioletti, Antonio; Trivedi, Mohan M.

    2012-01-01

    Detecting pedestrians is still a challenging task for automotive vision systems due to the extreme variability of targets, lighting conditions, occlusions, and high-speed vehicle motion. A lot of research has been focused on this problem in the last 10 years, and detectors based on classifiers have...... gained a special place among the different approaches presented. This work presents a state-of-the-art pedestrian detection system based on a two-stage classifier. Candidates are extracted with a Haar cascade classifier trained with the DaimlerDB dataset and then validated through part-based HOG...... of several metrics, such as detection rate, false positives per hour, and frame rate. The novelty of this system lies in the combination of the HOG part-based approach, tracking based on specific optimized features, and porting to a real prototype....

  15. A two-stage method for inverse medium scattering

    KAUST Repository

    Ito, Kazufumi

    2013-03-01

    We present a novel numerical method to the time-harmonic inverse medium scattering problem of recovering the refractive index from noisy near-field scattered data. The approach consists of two stages, one pruning step of detecting the scatterer support, and one resolution enhancing step with nonsmooth mixed regularization. The first step is strictly direct and of sampling type, and it faithfully detects the scatterer support. The second step is an innovative application of nonsmooth mixed regularization, and it accurately resolves the scatterer size as well as intensities. The nonsmooth model can be efficiently solved by a semi-smooth Newton-type method. Numerical results for two- and three-dimensional examples indicate that the new approach is accurate, computationally efficient, and robust with respect to data noise. © 2012 Elsevier Inc.

  16. Laparoscopic management of a two-staged gall bladder torsion

    Institute of Scientific and Technical Information of China (English)

    2015-01-01

    Gall bladder torsion (GBT) is a relatively uncommon entity and rarely diagnosed preoperatively. A constant factor in all occurrences of GBT is a freely mobile gall bladder due to congenital or acquired anomalies. GBT is commonly observed in elderly white females. We report a 77-year-old Caucasian lady who was originally diagnosed as gall bladder perforation but was eventually found with a two-staged torsion of the gall bladder with twisting of the Riedel's lobe (part of a tongue-like projection of liver segment 4A). This, together, has not been reported in the literature, to the best of our knowledge. We performed laparoscopic cholecystectomy and she had an uneventful postoperative period. GBT may create a diagnostic dilemma in the context of acute cholecystitis. Timely diagnosis and intervention is necessary, with extra care while operating as the anatomy is generally distorted. The fundus-first approach can be useful due to altered anatomy in the region of Calot's triangle. Laparoscopic cholecystectomy has the benefit of early recovery.

  17. Lightweight Concrete Produced Using a Two-Stage Casting Process

    Directory of Open Access Journals (Sweden)

    Jin Young Yoon

    2015-03-01

    Full Text Available The type of lightweight aggregate and its volume fraction in a mix determine the density of lightweight concrete. Minimizing the density obviously requires a higher volume fraction, but this usually causes aggregate segregation in a conventional mixing process. This paper proposes a two-stage casting process to produce lightweight concrete. This process involves placing lightweight aggregates in a frame and then filling in the remaining interstitial voids with cementitious grout. The casting process results in the lowest density of lightweight concrete, which consequently has low compressive strength. The irregularly shaped aggregates compensate for this weak point in terms of strength, while the round-shaped aggregates provide a strength of 20 MPa. Therefore, the proposed casting process can be applied for manufacturing non-structural elements and structural composites requiring a very low density and a strength of at most 20 MPa.

  18. TWO-STAGE OCCLUDED OBJECT RECOGNITION METHOD FOR MICROASSEMBLY

    Institute of Scientific and Technical Information of China (English)

    WANG Huaming; ZHU Jianying

    2007-01-01

    A two-stage object recognition algorithm in the presence of occlusion is presented for microassembly. Coarse localization determines whether the template is in the image and approximately where it is, while fine localization gives its accurate position. In coarse localization, a local feature that is invariant to translation, rotation and occlusion is used to form signatures. By comparing the signature of the template with that of the image, an approximate transformation parameter from template to image is obtained, which is used as the initial parameter value for fine localization. An objective function of the transformation parameter is constructed in fine localization and minimized to achieve sub-pixel localization accuracy. Occluded pixels are not taken into account in the objective function, so the localization accuracy is not influenced by the occlusion.
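
    As a rough illustration of the coarse-to-fine idea described above, the following sketch runs a sparse grid search over integer translations and then refines the estimate locally, scoring candidates with a mean squared difference that ignores pixels flagged as occluded. It is only a sketch under strong assumptions (translation only, a known occlusion mask, a synthetic smooth template); the function names and all numbers are invented, and the paper's signature-based coarse step and sub-pixel objective are not reproduced.

        import numpy as np

        def masked_msd(image, template, top, left, valid_mask):
            # Mean squared difference over pixels not flagged as occluded.
            h, w = template.shape
            window = image[top:top + h, left:left + w]
            diff = (window - template) * valid_mask
            return float(np.sum(diff ** 2)) / max(int(valid_mask.sum()), 1)

        def search(image, template, valid_mask, candidates):
            best_score, best_pos = np.inf, None
            for top, left in candidates:
                score = masked_msd(image, template, top, left, valid_mask)
                if score < best_score:
                    best_score, best_pos = score, (top, left)
            return best_pos

        H = W = 64
        h = w = 16
        yy, xx = np.mgrid[0:h, 0:w]
        template = np.sin(xx / 3.0) + np.cos(yy / 4.0) + 2.0      # smooth synthetic template
        rng = np.random.default_rng(0)
        image = 0.1 * rng.standard_normal((H, W))                 # noisy background
        image[21:21 + h, 33:33 + w] = template                    # paste target at (21, 33)
        image[21:29, 33:41] = 0.0                                 # occlude one quadrant
        mask = np.ones((h, w)); mask[:8, :8] = 0.0                # ignore the occluded pixels

        # Stage 1: coarse localization on a sparse grid of translations.
        grid = [(t, l) for t in range(0, H - h + 1, 4) for l in range(0, W - w + 1, 4)]
        coarse = search(image, template, mask, grid)

        # Stage 2: fine localization in a small neighbourhood of the coarse estimate.
        neigh = [(coarse[0] + dt, coarse[1] + dl)
                 for dt in range(-3, 4) for dl in range(-3, 4)
                 if 0 <= coarse[0] + dt <= H - h and 0 <= coarse[1] + dl <= W - w]
        print("coarse:", coarse, "fine:", search(image, template, mask, neigh))  # fine -> (21, 33)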

  19. Two-stage designs for cross-over bioequivalence trials.

    Science.gov (United States)

    Kieser, Meinhard; Rauch, Geraldine

    2015-07-20

    The topic of applying two-stage designs in the field of bioequivalence studies has recently gained attention in the literature and in regulatory guidelines. While there exists some methodological research on the application of group sequential designs in bioequivalence studies, implementation of adaptive approaches has focused up to now on superiority and non-inferiority trials. Especially, no comparison of the features and performance characteristics of these designs has been performed, and therefore, the question of which design to employ in this setting remains open. In this paper, we discuss and compare 'classical' group sequential designs and three types of adaptive designs that offer the option of mid-course sample size recalculation. A comprehensive simulation study demonstrates that group sequential designs can be identified, which show power characteristics that are similar to those of the adaptive designs but require a lower average sample size. The methods are illustrated with a real bioequivalence study example.

  20. The hybrid two stage anticlockwise cycle for ecological energy conversion

    Directory of Open Access Journals (Sweden)

    Cyklis Piotr

    2016-01-01

    Full Text Available The anticlockwise cycle is commonly used for refrigeration, air conditioning and heat pump applications. The application of a refrigerant in the compression cycle is limited to temperatures between the triple point and the critical point. New refrigerants such as 1234yf or 1234ze have many disadvantages, therefore the application of natural refrigerants is favourable. Carbon dioxide and water can be applied only in the hybrid two-stage cycle. The possibilities of this solution are shown for refrigerating applications, and some experimental results of the adsorption-compression two-stage cycle, powered with solar collectors, are also shown. As the high-temperature cycle, the adsorption system is applied. The low-temperature cycle is the compression stage with carbon dioxide as the working fluid. This makes it possible to achieve a relatively high COP for the low-temperature cycle and for the whole system.

  1. Two Stage Assessment of Thermal Hazard in An Underground Mine

    Science.gov (United States)

    Drenda, Jan; Sułkowski, Józef; Pach, Grzegorz; Różański, Zenon; Wrona, Paweł

    2016-06-01

    The results of research into the application of selected thermal indices of men's work and climate indices in a two-stage assessment of climatic work conditions in underground mines have been presented in this article. The difference between these two kinds of indices was pointed out during the project entitled "The recruiting requirements for miners working in hot underground mine environments". The project was coordinated by The Institute of Mining Technologies at Silesian University of Technology. It was part of a Polish strategic project, "Improvement of safety in mines", financed by the National Centre of Research and Development. Climate indices are based only on physical parameters of the air and their measurements. Thermal indices include additional factors which are strictly connected with the work, e.g. thermal resistance of clothing, kind of work, etc. Special emphasis has been put on the following indices: the substitute Silesian temperature (TS), which is considered a climate index, and the thermal discomfort index (δ), which belongs to the thermal indices group. The possibility of a two-stage application of these indices has been taken into consideration (preliminary and detailed estimation). Based on the examples, it was proved that by applying the thermal indices (detailed estimation) it is possible to avoid additional technical solutions which, according to the climate index, would be necessary to reduce the thermal hazard at particular workplaces. The threshold limit value for TS has been set based on these results. It was shown that below TS = 24°C it is not necessary to perform the detailed estimation.
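
    A minimal sketch of the two-stage screening logic described above, in Python. The TS threshold of 24°C is taken from the abstract; the formulas for TS and for the thermal discomfort index δ are not given there, so ts_index(), discomfort_index() and the delta_limit value are hypothetical placeholders.

        def ts_index(dry_bulb_c, humidity, air_velocity):
            """Placeholder for the substitute Silesian temperature TS (climate index)."""
            raise NotImplementedError  # actual formula not reproduced here

        def discomfort_index(ts, clothing_insulation, work_rate_w):
            """Placeholder for the thermal discomfort index delta (thermal index)."""
            raise NotImplementedError  # actual formula not reproduced here

        def assess_workplace(dry_bulb_c, humidity, air_velocity,
                             clothing_insulation, work_rate_w,
                             ts_limit=24.0, delta_limit=1.0):
            # Stage 1: preliminary estimation from air parameters only.
            ts = ts_index(dry_bulb_c, humidity, air_velocity)
            if ts < ts_limit:
                return "no significant thermal hazard (preliminary screen)"
            # Stage 2: detailed estimation including work-related factors.
            delta = discomfort_index(ts, clothing_insulation, work_rate_w)
            if delta <= delta_limit:
                return "acceptable after detailed estimation"
            return "technical or organisational measures required"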

  2. Simple Combinatorial Optimisation Cost Games

    NARCIS (Netherlands)

    van Velzen, S.

    2005-01-01

    In this paper we introduce the class of simple combinatorial optimisation cost games, which are games associated to {0, 1}-matrices. A coalitional value of a combinatorial optimisation game is determined by solving an integer program associated with this matrix and the characteristic vector of the

  3. Polyhedral Techniques in Combinatorial Optimization

    NARCIS (Netherlands)

    Aardal, K.I.; van Hoesel, S.

    1995-01-01

    Combinatorial optimization problems arise in several areas ranging from management to mathematics and graph theory. Most combinatorial optimization problems are computationally hard due to the restriction that a subset of the variables have to take integral values. During the last two decades

  4. Multistage quadratic stochastic programming

    Science.gov (United States)

    Lau, Karen K.; Womersley, Robert S.

    2001-04-01

    Quadratic stochastic programming (QSP) in which each subproblem is a convex piecewise quadratic program with stochastic data, is a natural extension of stochastic linear programming. This allows the use of quadratic or piecewise quadratic objective functions which are essential for controlling risk in financial and project planning. Two-stage QSP is a special case of extended linear-quadratic programming (ELQP). The recourse functions in QSP are piecewise quadratic convex and Lipschitz continuous. Moreover, they have Lipschitz gradients if each QP subproblem is strictly convex and differentiable. Using these properties, a generalized Newton algorithm exhibiting global and superlinear convergence has been proposed recently for the two stage case. We extend the generalized Newton algorithm to multistage QSP and show that it is globally and finitely convergent under suitable conditions. We present numerical results on randomly generated data and modified publicly available stochastic linear programming test sets. Efficiency schemes on different scenario tree structures are discussed. The large-scale deterministic equivalent of the multistage QSP is also generated and their accuracy compared.
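
    For concreteness, the sketch below writes a tiny two-stage QSP as its deterministic equivalent and hands it to a generic convex solver; it does not reproduce the generalized Newton algorithm discussed above. The cvxpy package is assumed to be available, and the cost coefficients, demand scenarios and probabilities are invented for illustration.

        import numpy as np
        import cvxpy as cp

        demand = np.array([80.0, 100.0, 130.0])      # demand scenarios (assumed)
        prob = np.array([0.3, 0.5, 0.2])             # scenario probabilities (assumed)

        x = cp.Variable(nonneg=True)                 # first-stage production decision
        y = cp.Variable(len(demand), nonneg=True)    # recourse (shortfall) per scenario

        # Convex piecewise-quadratic costs: quadratic production cost plus the
        # expected quadratic recourse cost of covering any shortfall.
        first_stage = 2.0 * x + 0.01 * cp.square(x)
        recourse = cp.sum(cp.multiply(prob, 5.0 * y + 0.05 * cp.square(y)))
        constraints = [x + y[s] >= demand[s] for s in range(len(demand))]

        problem = cp.Problem(cp.Minimize(first_stage + recourse), constraints)
        problem.solve()
        print("first-stage x =", round(float(x.value), 2),
              "expected cost =", round(float(problem.value), 2))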

  5. Effect of Silica Fume on two-stage Concrete Strength

    Science.gov (United States)

    Abdelgader, H. S.; El-Baden, A. S.

    2015-11-01

    Two-stage concrete (TSC) is an innovative concrete that does not require vibration for placing and compaction. TSC is a simple concept; it is made using the same basic constituents as traditional concrete: cement, coarse aggregate, sand and water, as well as mineral and chemical admixtures. As its name suggests, it is produced through a two-stage process. First, washed coarse aggregate is placed into the formwork in situ. Later a specifically designed self-compacting grout is introduced into the form from the lowest point under gravity pressure to fill the voids, cementing the aggregate into a monolith. The hardened concrete is dense, homogeneous and has in general improved engineering properties and durability. This paper presents the results of a research effort to study the effect of silica fume (SF) and superplasticizer admixtures (SP) on the compressive and tensile strength of TSC using various combinations of water-to-cement ratio (w/c) and cement-to-sand ratio (c/s). Thirty-six concrete mixes with different grout constituents were tested. From each mix, twenty-four standard cylinder samples of size 150 mm × 300 mm of concrete containing crushed aggregate were produced. The tested samples were made from combinations of w/c equal to 0.45, 0.55 and 0.85, and three c/s values: 0.5, 1 and 1.5. Silica fume was added at a dosage of 6% of the weight of cement, while superplasticizer was added at a dosage of 2% of the cement weight. Results indicated that both the tensile and compressive strength of TSC can be statistically derived as a function of w/c and c/s with good correlation coefficients. The basic principle of traditional concrete, which says that an increase in the water/cement ratio will lead to a reduction in compressive strength, was shown to hold true for the TSC specimens tested. Using a combination of both silica fume and superplasticizers caused a significant increase in strength relative to the control mixes.
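
    The abstract states that TSC strength can be statistically derived as a function of w/c and c/s. As a sketch of that kind of fit, the code below estimates one possible (assumed linear) form by ordinary least squares; the data points are hypothetical, since the paper's measurements and its actual model form are not reproduced here.

        import numpy as np

        # Hypothetical (w/c, c/s, compressive strength in MPa) triples.
        data = np.array([
            [0.45, 0.5, 32.1], [0.45, 1.0, 35.4], [0.45, 1.5, 37.8],
            [0.55, 0.5, 27.3], [0.55, 1.0, 30.2], [0.55, 1.5, 32.0],
            [0.85, 0.5, 18.9], [0.85, 1.0, 21.5], [0.85, 1.5, 23.1],
        ])
        wc, cs, strength = data[:, 0], data[:, 1], data[:, 2]

        # Fit strength ~ a + b*(w/c) + c*(c/s) by ordinary least squares.
        X = np.column_stack([np.ones_like(wc), wc, cs])
        coef, *_ = np.linalg.lstsq(X, strength, rcond=None)
        pred = X @ coef
        r2 = 1 - np.sum((strength - pred) ** 2) / np.sum((strength - strength.mean()) ** 2)
        print("a, b, c =", np.round(coef, 2), " R^2 =", round(r2, 3))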

  6. Characterization of component interactions in two-stage axial turbine

    Directory of Open Access Journals (Sweden)

    Adel Ghenaiet

    2016-08-01

    Full Text Available This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when simulating the turbine stages simultaneously or separately. By considering multiple blades per row and the scaling technique, the computational fluid dynamics (CFD) produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space–time periodicity described by a double Fourier decomposition. The Fast Fourier Transform (FFT) analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, and each lobed pressure-wave structure corresponds to the vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect is seen to dominate nearly the entire blade span, while downstream of the blades this effect seems to dominate from hub to mid-span. Near the shroud the prevailing effect is rather linked to the blade tip flow structure.
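
    A small sketch of the kind of FFT analysis mentioned above: a synthetic static-pressure trace containing a blade-passing tone and two harmonics is transformed, and the dominant frequency components are reported. The blade count, rotational speed and amplitudes are arbitrary assumptions, not values from the study.

        import numpy as np

        n_blades, rpm = 60, 6000
        bpf = n_blades * rpm / 60.0                    # blade-passing frequency, Hz
        fs, duration = 20 * bpf, 0.2                   # sampling rate (Hz) and record length (s)
        t = np.arange(0.0, duration, 1.0 / fs)
        p = (1.0 * np.sin(2 * np.pi * bpf * t)
             + 0.4 * np.sin(2 * np.pi * 2 * bpf * t)
             + 0.2 * np.sin(2 * np.pi * 3 * bpf * t)
             + 0.3 * np.random.default_rng(1).standard_normal(t.size))

        # One-sided amplitude spectrum; peaks are expected at the BPF and its multiples.
        spectrum = np.abs(np.fft.rfft(p)) * 2.0 / t.size
        freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
        peaks = np.sort(freqs[np.argsort(spectrum)[-3:]])
        print("strongest components (Hz):", np.round(peaks, 1), " BPF =", bpf)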

  7. A continuous two stage solar coal gasification system

    Science.gov (United States)

    Mathur, V. K.; Breault, R. W.; Lakshmanan, S.; Manasse, F. K.; Venkataramanan, V.

    The characteristics of a two-stage fluidized-bed hybrid coal gasification system to produce syngas from coal, lignite, and peat are described. Heat for devolatilization at 823 K is supplied by recirculating gas heated by a solar receiver/coal heater. A second-stage gasifier maintained at 1227 K serves to crack the remaining tar and light oil to yield a product free from tar and other condensables, and sulfur can be removed by hot clean-up processes. CO is minimized because the coal is not burned with oxygen, and the product gas contains 50% H2. Bench-scale reactors consist of a stage I unit 0.1 m in diameter which is fed with coal 200 microns in size. A stage II reactor has an inner diameter of 0.36 m and serves to gasify the char from stage I. A solar power source of 10 kWt is required for the bench model, and will be obtained from a central receiver with quartz or heat pipe configurations for heat transfer.

  8. Characterization of component interactions in two-stage axial turbine

    Institute of Scientific and Technical Information of China (English)

    Adel Ghenaiet; Kaddour Touil

    2016-01-01

    This study concerns the characterization of both the steady and unsteady flows and the analysis of stator/rotor interactions of a two-stage axial turbine. The predicted aerodynamic performances show noticeable differences when simulating the turbine stages simultaneously or separately. By considering the multi-blade per row and the scaling technique, the Computational fluid dynamics (CFD) produced better results concerning the effect of pitchwise positions between vanes and blades. The recorded pressure fluctuations exhibit a high unsteadiness characterized by a space–time periodicity described by a double Fourier decomposition. The Fast Fourier Transform (FFT) analysis of the static pressure fluctuations recorded at different interfaces reveals the existence of principal harmonics and their multiples, and each lobed structure of pressure wave corresponds to the number of vane/blade count. The potential effect is seen to propagate both upstream and downstream of each blade row and becomes accentuated at low mass flow rates. Between vanes and blades, the potential effect is seen to dominate the quasi totality of blade span, while downstream the blades this effect seems to dominate from hub to mid span. Near the shroud the prevailing effect is rather linked to the blade tip flow structure.

  9. Two stages kinetics of municipal solid waste inoculation composting processes

    Institute of Scientific and Technical Information of China (English)

    XI Bei-dou1; HUANG Guo-he; QIN Xiao-sheng; LIU Hong-liang

    2004-01-01

    In order to understand the key mechanisms of the composting process, the municipal solid waste (MSW) composting process was divided into two stages, and the characteristics of typical experimental scenarios were analyzed from the viewpoint of microbial kinetics. Through experimentation with an advanced composting reactor under controlled composting conditions, several equations were worked out to simulate the degradation rate of the substrate. The equations showed that the degradation rate was controlled by the concentration of microbes in the first stage. The degradation rates of substrates of the inoculation Run A, B, C and Control composting systems were 13.61 g/(kg·h), 13.08 g/(kg·h), 15.671 g/(kg·h), and 10.5 g/(kg·h), respectively. The value of Run C is around 1.5 times that of the Control system. The decomposition rate of the second stage is controlled by the concentration of substrate. Although the organic matter decomposition rates were similar for all runs, inoculation could reduce the value of the half-velocity coefficient and make the composting process stabilize more efficiently. In particular, for Run C the decomposition rate is high in the first stage and low in the second stage. The results indicated that the inoculation was efficient for the composting process.
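
    A hedged sketch of the two-stage kinetics described above: a first stage whose degradation rate is proportional to the microbial biomass, followed by a substrate-limited (Monod-type) second stage governed by a half-velocity coefficient. The rate forms and all constants are illustrative assumptions, not the fitted equations of the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        K1 = 0.05      # stage-1 rate per unit biomass (assumed)
        MU_MAX = 0.12  # stage-2 maximum specific degradation rate, 1/h (assumed)
        KS = 40.0      # half-velocity coefficient, g/kg (inoculation is said to lower it)

        def stage1(t, y):
            s, x = y                        # substrate and microbial biomass, g/kg
            ds = -K1 * x                    # rate controlled by microbe concentration
            dx = 0.05 * x * (1 - x / 50.0)  # simple logistic microbial growth (assumption)
            return [ds, dx]

        def stage2(t, y):
            s, x = y
            ds = -MU_MAX * s / (KS + s) * x  # Monod-type, controlled by substrate level
            return [ds, 0.0]

        sol1 = solve_ivp(stage1, (0, 48), [300.0, 5.0])     # first 48 h
        sol2 = solve_ivp(stage2, (48, 240), sol1.y[:, -1])  # remainder of the process
        print("substrate after stage 1: %.1f g/kg" % sol1.y[0, -1])
        print("substrate after stage 2: %.1f g/kg" % sol2.y[0, -1])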

  10. Gas loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, Lee; Bartram, Brian; Dattelbaum, Dana; Lang, John; Morris, John

    2015-06-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez and Teflon. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system, and example data from the plate impact experiments, will be shown. LA-UR-15-20521

  11. An explicit combinatorial design

    CERN Document Server

    Ma, Xiongfeng

    2011-01-01

    A combinatorial design is a family of sets that are almost disjoint, which is applied in pseudo random number generation and randomness extraction. The parameter, $\rho$, quantifying the overlap between the sets within the family, is directly related to the length of a random seed needed and the efficiency of an extractor. Nisan and Wigderson proposed an explicit construction of designs in 1994. Later in 2003, Hartman and Raz proved a bound of $\rho \le e^2$ for the Nisan-Wigderson construction. In this work, we prove a tighter bound of $\rho

  12. Combinatorial Maps with Normalized Knot

    CERN Document Server

    Zeps, Dainis

    2010-01-01

    We consider combinatorial maps whose fixed combinatorial knot is numbered with an augmenting numeration, called a normalized knot. We show that normalizing the knot does not restrict the generality of the combinatorial map. Knot normalization leads to a more concise numeration of the corners in maps; e.g., odd or even corners make it easy to follow the distinguished cycles in the map induced by fixing the knot. Knot normalization may also be applied to the edge-structuring knot. If both are normalized, then one is fully and the other partially normalized with respect to the other.

  13. PERFORMANCE STUDY OF A TWO STAGE SOLAR ADSORPTION REFRIGERATION SYSTEM

    Directory of Open Access Journals (Sweden)

    BAIJU. V

    2011-07-01

    Full Text Available The present study deals with the performance of a two-stage solar adsorption refrigeration system with an activated carbon–methanol pair, investigated experimentally. Such a system was fabricated and tested under the conditions of the National Institute of Technology Calicut, Kerala, India. The system consists of a parabolic solar concentrator, two water tanks, two adsorbent beds, a condenser, an expansion device, an evaporator and an accumulator. In this particular system the second water tank acts as a sensible heat storage device so that the system can also be used during the night. The system has been designed for heating 50 litres of water from 25°C to 90°C as well as cooling 10 litres of water from 30°C to 10°C within one hour. The performance parameters such as specific cooling power (SCP), coefficient of performance (COP), solar COP and exergetic efficiency are studied. The dependency of the exergetic efficiency and cycle COP on the driving heat source temperature is also studied. The optimum heat source temperature for this system is determined as 72.4°C. The results show that the system has better performance during the night as compared to the day. The system has a mean cycle COP of 0.196 during day time and 0.335 during night time. The mean SCP values during day time and night time are 47.83 and 68.2, respectively. The experimental results also demonstrate that the refrigerator has a cooling capacity of 47 to 78 W during day time and 57.6 W to 104.4 W during night time.
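
    The performance ratios quoted above follow from their standard definitions; the short calculation below evaluates them on invented energy figures (chosen only to be of a similar order of magnitude to the reported results), since the measured data are not reproduced here.

        Q_evap = 250e3        # J of cooling produced over one cycle (assumed)
        Q_gen = 1.25e6        # J of driving heat delivered to the adsorbent bed (assumed)
        Q_solar = 2.5e6       # J of solar energy collected (assumed)
        m_ads = 1.0           # kg of activated carbon (assumed)
        cycle_time = 3600.0   # s per cycle (assumed)

        cop_cycle = Q_evap / Q_gen            # thermal COP of the adsorption cycle
        cop_solar = Q_evap / Q_solar          # solar COP
        scp = Q_evap / (m_ads * cycle_time)   # specific cooling power, W per kg adsorbent
        print(f"cycle COP = {cop_cycle:.3f}, solar COP = {cop_solar:.3f}, SCP = {scp:.1f} W/kg")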

  14. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

    Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems and uses a two-stage single-counter queue method, M/M/1→M/D/1, to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is redesigned to solve this complex nondeterministic problem, which incorporates stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, namely the number of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
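
    The queueing backbone M/M/1→M/D/1 mentioned above has simple steady-state formulas: by Burke's theorem the departures of the M/M/1 stage are again Poisson, so the second stage can be treated as M/D/1 and its mean wait obtained from the Pollaczek-Khinchine formula. The sketch below adds the two mean sojourn times; the arrival and service rates are assumptions, and the genetic algorithm and fuzzy elements of the paper are not reproduced.

        def mm1_sojourn(lam, mu1):
            """Mean time spent in an M/M/1 stage (arrival rate lam, service rate mu1)."""
            assert lam < mu1, "stage 1 unstable"
            return 1.0 / (mu1 - lam)

        def md1_sojourn(lam, mu2):
            """Mean time spent in an M/D/1 stage, via the Pollaczek-Khinchine formula."""
            assert lam < mu2, "stage 2 unstable"
            rho = lam / mu2
            wait = rho / (2.0 * mu2 * (1.0 - rho))   # mean queueing delay, deterministic service
            return wait + 1.0 / mu2

        # Illustrative rates (per hour): inflows arriving, pipe-network conveyance,
        # pump-station discharge. These values are assumptions, not data from the paper.
        lam, mu_pipe, mu_pump = 8.0, 12.0, 10.0
        total = mm1_sojourn(lam, mu_pipe) + md1_sojourn(lam, mu_pump)
        print(f"mean sojourn of stormwater through both stages: {total:.3f} h")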

  15. Fleet Planning Decision-Making: Two-Stage Optimization with Slot Purchase

    Directory of Open Access Journals (Sweden)

    Lay Eng Teoh

    2016-01-01

    Full Text Available Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies, which did not consider slot purchase even though it affects the service frequency determination of airlines, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated in which the first stage selects the individual operating routes that require slot purchase for network expansion, while the second stage, in the form of a probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency) to meet the demand profitably. By analyzing an illustrative case study (with 38 international routes), the results show that the incorporation of slot purchase in fleet planning is beneficial to airlines in achieving economic and social sustainability. The developed model is practically viable for airlines not only to provide a better service quality (via a higher service frequency) to meet more demand but also to obtain a higher revenue and profit margin, by making an optimal slot purchase and fleet planning decision throughout the long-term planning horizon.

  16. Combinatorial fractal Brownian motion model

    Institute of Scientific and Technical Information of China (English)

    朱炬波; 梁甸农

    2000-01-01

    To solve the problem of how to determine the non-scaled interval when processing radar clutter using the fractal Brownian motion (FBM) model, the concept of a combinatorial FBM model is presented. Since the earth (or sea) surface varies diversely with space, a radar clutter contains several fractal structures, which coexist on all scales. Taking the combination of two FBMs into account, via theoretical derivation we establish a combinatorial FBM model and present a method to estimate its fractal parameters. The correctness of the model and the method is proved by simulation experiments and computation of practical data. Furthermore, we obtain the relationship between the fractal parameters when the combinatorial model is processed with a single FBM model. Meanwhile, by theoretical analysis it is concluded that when the combinatorial model is observed on different scales, one of the fractal structures is more obvious.

  17. Combinatorial designs constructions and analysis

    CERN Document Server

    Stinson, Douglas R

    2004-01-01

    Created to teach students many of the most important techniques used for constructing combinatorial designs, this is an ideal textbook for advanced undergraduate and graduate courses in combinatorial design theory. The text features clear explanations of basic designs, such as Steiner and Kirkman triple systems, mutually orthogonal Latin squares, finite projective and affine planes, and Steiner quadruple systems. In these settings, the student will master various construction techniques, both classic and modern, and will be well-prepared to construct a vast array of combinatorial designs. Design theory offers a progressive approach to the subject, with carefully ordered results. It begins with simple constructions that gradually increase in complexity. Each design has a construction that contains new ideas or that reinforces and builds upon similar ideas previously introduced. A new text/reference covering all aspects of modern combinatorial design theory. Graduates and professionals in computer science, applie...

  18. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  19. Combinatorial chemistry in the agrosciences.

    Science.gov (United States)

    Lindell, Stephen D; Pattenden, Lisa C; Shannon, Jonathan

    2009-06-15

    Combinatorial chemistry and high throughput screening have had a profound effect upon the way in which agrochemical companies conduct their lead discovery research. The article reviews recent applications of combinatorial synthesis in the lead discovery process for new fungicides, herbicides and insecticides. The role and importance of bioavailability guidelines, natural products, privileged structures, virtual screening and X-ray crystallographic protein structures on the design of solid- and solution-phase compound libraries is discussed and illustrated.

  20. Relativity in Combinatorial Gravitational Fields

    Directory of Open Access Journals (Sweden)

    Mao Linfan

    2010-04-01

    Full Text Available A combinatorial spacetime $(\mathscr{C}_G|\overline{t})$ is a smoothly combinatorial manifold $\mathscr{C}$ underlying a graph $G$ evolving on a time vector $\overline{t}$. As we know, Einstein's general relativity is suitable for use only in one spacetime. What is its disguise in a combinatorial spacetime? Applying combinatorial Riemannian geometry enables us to present a combinatorial spacetime model for the Universe and suggest a generalized Einstein gravitational equation in such a model. For finding its solutions, a generalized relativity principle, called the projective principle, is proposed, i.e., a physics law in a combinatorial spacetime is invariant under a projection on a subspace of it, and then spherically symmetric multi-solutions of the generalized Einstein gravitational equations in vacuum or charged body are found. We also consider the geometrical structure in such solutions with physical formations, and conclude that an ultimate theory for the Universe may be established if all such spacetimes are in ${\bf R}^3$. Otherwise, our theory is only an approximate theory and endless forever.

  1. Algorithmic Strategies in Combinatorial Chemistry

    Energy Technology Data Exchange (ETDEWEB)

    GOLDMAN,DEBORAH; ISTRAIL,SORIN; LANCIA,GIUSEPPE; PICCOLBONI,ANTONIO; WALENZ,BRIAN

    2000-08-01

    Combinatorial Chemistry is a powerful new technology in drug design and molecular recognition. It is a wet-laboratory methodology aimed at "massively parallel" screening of chemical compounds for the discovery of compounds that have a certain biological activity. The power of the method comes from the interaction between experimental design and computational modeling. Principles of "rational" drug design are used in the construction of combinatorial libraries to speed up the discovery of lead compounds with the desired biological activity. This paper presents algorithms, software development and computational complexity analysis for problems arising in the design of combinatorial libraries for drug discovery. The authors provide exact polynomial time algorithms and intractability results for several inverse problems, formulated as (chemical) graph reconstruction problems, related to the design of combinatorial libraries. These are the first rigorous algorithmic results in the literature. The authors also present results provided by the combinatorial chemistry software package OCOTILLO for combinatorial peptide design using real data libraries. The package provides exact solutions for general inverse problems based on shortest-path topological indices. The results are superior both in accuracy and computing time to the best software reports published in the literature. For 5-peptoid design, the computation is rigorously reduced to an exhaustive search of about 2% of the search space; the exact solutions are found in a few minutes.

  2. Right Axillary Sweating After Left Thoracoscopic Sympathectomy in Two-Stage Surgery

    Directory of Open Access Journals (Sweden)

    Berkant Ozpolat

    2013-06-01

    Full Text Available One-stage bilateral or two-stage unilateral video-assisted thoracoscopic sympathectomy can be performed in the treatment of primary focal hyperhidrosis. Here we present a case of compensatory sweating of the contralateral side after a two-stage operation.

  3. The Two-stage Constrained Equal Awards and Losses Rules for Multi-Issue Allocation Situation

    NARCIS (Netherlands)

    Lorenzo-Freire, S.; Casas-Mendez, B.; Hendrickx, R.L.P.

    2005-01-01

    This paper considers two-stage solutions for multi-issue allocation situations. Characterisations are provided for the two-stage constrained equal awards and constrained equal losses rules, based on the properties of composition and path independence.

  4. Two-Stage Exams Improve Student Learning in an Introductory Geology Course: Logistics, Attendance, and Grades

    Science.gov (United States)

    Knierim, Katherine; Turner, Henry; Davis, Ralph K.

    2015-01-01

    Two-stage exams--where students complete part one of an exam closed book and independently and part two is completed open book and independently (two-stage independent, or TS-I) or collaboratively (two-stage collaborative, or TS-C)--provide a means to include collaborative learning in summative assessments. Collaborative learning has been shown to…

  5. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic...... treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility...... models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...
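
    As a small illustration of a stochastic leverage process, the sketch below simulates a Jacobi-type mean-reverting diffusion confined to (-1, 1) with a plain Euler scheme; such a process could play the role of the time-varying correlation described above. The parameter values and the crude boundary handling are assumptions for illustration; the generalised Heston and Barndorff-Nielsen & Shephard models themselves are not implemented.

        import numpy as np

        def simulate_jacobi_leverage(rho0=-0.5, kappa=2.0, theta=-0.6, sigma=0.4,
                                     T=1.0, n_steps=2000, seed=0):
            """Euler scheme for d rho_t = kappa*(theta - rho_t) dt
            + sigma*sqrt((1 - rho_t)*(1 + rho_t)) dW_t, kept inside (-1, 1)."""
            rng = np.random.default_rng(seed)
            dt = T / n_steps
            rho = np.empty(n_steps + 1)
            rho[0] = rho0
            for i in range(n_steps):
                vol = sigma * np.sqrt(max((1.0 - rho[i]) * (1.0 + rho[i]), 0.0))
                rho[i + 1] = (rho[i] + kappa * (theta - rho[i]) * dt
                              + vol * np.sqrt(dt) * rng.standard_normal())
                rho[i + 1] = float(np.clip(rho[i + 1], -0.999, 0.999))  # crude boundary handling
            return rho

        path = simulate_jacobi_leverage()
        print("mean simulated leverage:", round(float(path.mean()), 3))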

  6. Universally Balanced Combinatorial Optimization Games

    Directory of Open Access Journals (Sweden)

    Xiaotie Deng

    2010-09-01

    Full Text Available This article surveys studies on universally balanced properties of cooperative games defined in a succinct form. In particular, we focus on combinatorial optimization games in which the values of coalitions are defined through linear optimization programs, possibly combinatorial, that is, subject to integer constraints. In economic settings, the integer requirement reflects some form of indivisibility. We are interested in the classes of games that guarantee a non-empty core no matter what admissible values are assigned to the parameters defining these programs. We call such classes universally balanced. We present characterization and complexity results on the universal balancedness property for some classes of interesting combinatorial optimization games. In particular, we focus on the algorithmic properties for identifying universal balancedness for the games under discussion.

  7. Combinatorial optimization theory and algorithms

    CERN Document Server

    Korte, Bernhard

    2002-01-01

    Combinatorial optimization is one of the youngest and most active areas of discrete mathematics, and is probably its driving force today. This book describes the most important ideas, theoretical results, and algorithms of this field. It is conceived as an advanced graduate text, and it can also be used as an up-to-date reference work for current research. The book includes the essential fundamentals of graph theory, linear and integer programming, and complexity theory. It covers classical topics in combinatorial optimization as well as very recent ones. The emphasis is on theoretical results and algorithms with provably good performance. Some applications and heuristics are mentioned, too.

  8. Combinatorial synthesis of natural products

    DEFF Research Database (Denmark)

    Nielsen, John

    2002-01-01

    Combinatorial syntheses allow production of compound libraries in an expeditious and organized manner immediately applicable for high-throughput screening. Natural products possess a pedigree to justify quality and appreciation in drug discovery and development. Currently, we are seeing a rapid... for preparation of combinatorial libraries. In other examples, natural products or intermediates have served as building blocks or scaffolds in the synthesis of complex natural products, bioactive analogues or designed hybrid molecules. Finally, structural motifs from the biologically active parent molecule have...

  9. On an Extension of a Combinatorial Identity

    Indian Academy of Sciences (India)

    M Rana; A K Agarwal

    2009-02-01

    Using Frobenius partitions we extend the main results of [4]. This leads to an infinite family of 4-way combinatorial identities. In some particular cases we even get 5-way combinatorial identities, which give us four new combinatorial versions of the Göllnitz–Gordon identities.

  10. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice.

    Directory of Open Access Journals (Sweden)

    Yong Wang

    Full Text Available This paper proposes a two-stage algorithm to simultaneously estimate the origin-destination (OD) matrix, link choice proportions, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared errors (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior.

  11. Combinatorial optimization networks and matroids

    CERN Document Server

    Lawler, Eugene

    2011-01-01

    Perceptively written text examines optimization problems that can be formulated in terms of networks and algebraic structures called matroids. Chapters cover shortest paths, network flows, bipartite matching, nonbipartite matching, matroids and the greedy algorithm, matroid intersections, and the matroid parity problems. A suitable text or reference for courses in combinatorial computing and concrete computational complexity in departments of computer science and mathematics.

  12. Combinatorial reasoning to solve problems

    NARCIS (Netherlands)

    Coenen, Tom; Hof, Frits; Verhoef, Nellie

    2016-01-01

    This study reports combinatorial reasoning to solve problems. We observed the mathematical thinking of students aged 14-16. We study the variation of the students’ solution strategies in the context of emergent modelling. The results show that the students are tempted to begin the problem solving pr

  13. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  14. The evolution of combinatorial phonology

    NARCIS (Netherlands)

    Zuidema, Willem; de Boer, Bart

    2009-01-01

    A fundamental, universal property of human language is that its phonology is combinatorial. That is, one can identify a set of basic, distinct units (phonemes, syllables) that can be productively combined in many different ways. In this paper, we develop a methodological framework based on evolution

  15. Combinatorial synthesis of natural products

    DEFF Research Database (Denmark)

    Nielsen, John

    2002-01-01

    Combinatorial syntheses allow production of compound libraries in an expeditious and organized manner immediately applicable for high-throughput screening. Natural products possess a pedigree to justify quality and appreciation in drug discovery and development. Currently, we are seeing a rapid...

  16. The Yoccoz Combinatorial Analytic Invariant

    DEFF Research Database (Denmark)

    Petersen, Carsten Lunde; Roesch, Pascale

    2008-01-01

    In this paper we develop a combinatorial analytic encoding of the Mandelbrot set M. The encoding is implicit in Yoccoz' proof of local connectivity of M at any Yoccoz parameter, i.e. any at most finitely renormalizable parameter for which all periodic orbits are repelling. Using this encoding we ...

  17. Stochastic Shadowing and Stochastic Stability

    OpenAIRE

    Todorov, Dmitry

    2014-01-01

    The notion of stochastic shadowing property is introduced. Relations to stochastic stability and standard shadowing are studied. Using tent map as an example it is proved that, in contrast to what happens for standard shadowing, there are significantly non-uniformly hyperbolic systems that satisfy stochastic shadowing property.

  18. Preemptive scheduling in a two-stage supply chain to minimize the makespan

    NARCIS (Netherlands)

    Pei, Jun; Fan, Wenjuan; Pardalos, Panos M.; Liu, Xinbao; Goldengorin, Boris; Yang, Shanlin

    2015-01-01

    This paper deals with the problem of preemptive scheduling in a two-stage supply chain framework. The supply chain environment contains two stages: production and transportation. In the production stage jobs are processed on a manufacturer's bounded serial batching machine, preemptions are allowed,

  19. Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model

    Directory of Open Access Journals (Sweden)

    Mi-Yuan Shan

    2013-01-01

    Full Text Available We propose a combinatorial clustering algorithm of cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve the stochastic problem. The algorithm employs a novel probability model as well as a permutation-based local search method. We set the parameters of COCQPSO based on a design of experiments. In a comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms, we show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than other fuzzy sets. Finally, numerical examples show the clustering effectiveness of the COCQPSO and FCOCQPSO clustering algorithms, which is extremely remarkable.
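
    The cloud-model and clustering layers of COCQPSO are not reproduced here, but the quantum-behaved particle swarm update it builds on can be sketched compactly. The following is a basic QPSO minimiser (mean-best attractor with a contraction-expansion coefficient) applied to a toy sphere function; all parameter choices are assumptions.

        import numpy as np

        def qpso_minimize(f, dim=5, n_particles=20, n_iter=200, lb=-5.0, ub=5.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lb, ub, (n_particles, dim))
            pbest = x.copy()
            pbest_val = np.array([f(p) for p in pbest])
            gbest = pbest[pbest_val.argmin()].copy()
            for it in range(n_iter):
                beta = 1.0 - 0.5 * it / n_iter          # contraction-expansion coefficient
                mbest = pbest.mean(axis=0)              # mean of the personal bests
                phi = rng.random((n_particles, dim))
                u = rng.random((n_particles, dim)) + 1e-12
                attractor = phi * pbest + (1 - phi) * gbest
                sign = np.where(rng.random((n_particles, dim)) < 0.5, -1.0, 1.0)
                x = np.clip(attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u), lb, ub)
                vals = np.array([f(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, float(pbest_val.min())

        best, val = qpso_minimize(lambda z: float(np.sum(z ** 2)))
        print("best sphere-function value found:", round(val, 6))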

  20. The Shape of Things to Come: The Computational Pictograph as a Bridge from Combinatorial Space to Outcome Distribution

    Science.gov (United States)

    Abrahamson, Dor

    2006-01-01

    This snapshot introduces a computer-based representation and activity that enables students to simultaneously "see" the combinatorial space of a stochastic device (e.g., dice, spinner, coins) and its outcome distribution. The author argues that the "ambiguous" representation fosters student insight into probability. [Snapshots are subject to peer…
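
    A text-only analogue of that idea, assuming two ordinary dice as the stochastic device: enumerate the combinatorial space and tally the induced outcome (sum) distribution. This is only an illustration of the concept; the article's software is not reproduced.

        from itertools import product
        from collections import Counter
        from fractions import Fraction

        space = list(product(range(1, 7), repeat=2))          # 36 equiprobable ordered pairs
        distribution = Counter(a + b for a, b in space)       # induced distribution of the sum
        for total in sorted(distribution):
            p = Fraction(distribution[total], len(space))
            print(f"sum {total:2d}: {distribution[total]:2d} pairs, P = {p}")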

  1. Combinatorial algebra syntax and semantics

    CERN Document Server

    Sapir, Mark V

    2014-01-01

    Combinatorial Algebra: Syntax and Semantics provides a comprehensive account of many areas of combinatorial algebra. It contains self-contained proofs of more than 20 fundamental results, both classical and modern. This includes Golod–Shafarevich and Olshanskii's solutions of Burnside problems, Shirshov's solution of Kurosh's problem for PI rings, Belov's solution of Specht's problem for varieties of rings, Grigorchuk's solution of Milnor's problem, the Bass–Guivarc'h theorem about the growth of nilpotent groups, Kleiman's solution of Hanna Neumann's problem for varieties of groups, Adian's solution of von Neumann-Day's problem, and Trahtman's solution of the road coloring problem of Adler, Goodwyn and Weiss. The book emphasizes several "universal" tools, such as trees, subshifts, uniformly recurrent words, diagrams and automata. With over 350 exercises at various levels of difficulty and with hints for the more difficult problems, this book can be used as a textbook, and aims to reach a wide and diversified...

  2. Combinatorial Properties of Finite Models

    CERN Document Server

    Hubicka, Jan

    2010-01-01

    We study countable embedding-universal and homomorphism-universal structures and unify results related to both of these notions. We show that many universal and ultrahomogeneous structures allow a concise description (called here a finite presentation). Extending classical work of Rado (for the random graph), we find a finite presentation for each of the following classes: homogeneous undirected graphs, homogeneous tournaments and homogeneous partially ordered sets. We also give a finite presentation of the rational Urysohn metric space and some homogeneous directed graphs. We survey well known structures that are finitely presented. We focus on structures endowed with natural partial orders and prove their universality. These partial orders include partial orders on sets of words, partial orders formed by geometric objects, grammars, polynomials and homomorphism orders for various combinatorial objects. We give a new combinatorial proof of the existence of embedding-universal objects for homomorphism-defined...

  3. Stem cells and combinatorial science.

    Science.gov (United States)

    Fang, Yue Qin; Wong, Wan Qing; Yap, Yan Wen; Orner, Brendan P

    2007-09-01

    Stem cell-based technologies have the potential to help cure a number of cell degenerative diseases. Combinatorial and high throughput screening techniques could provide tools to control and manipulate the self-renewal and differentiation of stem cells. This review chronicles historic and recent progress in the stem cell field involving both pluripotent and multipotent cells, and it highlights relevant cellular signal transduction pathways. This review further describes screens using libraries of soluble, small-molecule ligands, and arrays of molecules immobilized onto surfaces while proposing future trends in similar studies. It is hoped that by reviewing both the stem cell and the relevant high throughput screening literature, this paper can act as a resource to the combinatorial science community.

  4. Combinatorial Approach of Associative Classification

    OpenAIRE

    P. R. Pal; R.C. Jain

    2010-01-01

    Association rule mining and classification are two important techniques of data mining in the knowledge discovery process. Integration of these two has produced class association rule mining, or associative classification techniques, which in many cases have shown better classification accuracy than conventional classifiers. Motivated by this study, we have explored and applied combinatorial mathematics in class association rule mining in this paper. Our algorithm is based on producing co...

  5. Combinatorial aspects of covering arrays

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2004-11-01

    Full Text Available Covering arrays generalize orthogonal arrays by requiring that t-tuples be covered, but not requiring that the appearance of t-tuples be balanced. Their use in screening experiments has found application in software testing, hardware testing, and a variety of fields in which interactions among factors are to be identified. Here a combinatorial view of covering arrays is adopted, encompassing basic bounds, direct constructions, recursive constructions, algorithmic methods, and applications.
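
    As a concrete illustration of the covering requirement, the sketch below brute-force checks that every choice of t columns of an array covers all v**t value combinations, and verifies it on a small strength-2 binary covering array (5 rows, 4 factors).

        from itertools import combinations

        def is_covering_array(array, t, v):
            """True if every t columns exhibit all v**t value tuples in some row."""
            k = len(array[0])
            for cols in combinations(range(k), t):
                seen = {tuple(row[c] for c in cols) for row in array}
                if len(seen) < v ** t:
                    return False
            return True

        # Every pair of columns shows all four combinations 00, 01, 10, 11.
        ca = [
            (0, 0, 0, 0),
            (0, 1, 1, 1),
            (1, 0, 1, 1),
            (1, 1, 0, 1),
            (1, 1, 1, 0),
        ]
        print(is_covering_array(ca, t=2, v=2))   # True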

  6. Stochastic power flow modeling

    Energy Technology Data Exchange (ETDEWEB)

    1980-06-01

    The stochastic nature of customer demand and equipment failure on large interconnected electric power networks has produced a keen interest in the accurate modeling and analysis of the effects of probabilistic behavior on steady state power system operation. The principal avenue of approach has been to obtain a solution to the steady state network flow equations which adheres both to Kirchhoff's Laws and probabilistic laws, using either combinatorial or functional approximation techniques. Clearly the need of the present is to develop sound techniques for producing meaningful data to serve as input. This research has addressed this end and serves to bridge the gap between electric demand modeling, equipment failure analysis, etc., and the area of algorithm development. Therefore, the scope of this work lies squarely on developing an efficient means of producing sensible input information in the form of probability distributions for the many types of solution algorithms that have been developed. Two major areas of development are described in detail: a decomposition of stochastic processes which gives hope of stationarity, ergodicity, and perhaps even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one dimensional probability spaces.

  7. A production planning model considering uncertain demand using two-stage stochastic programming in a fresh vegetable supply chain context

    National Research Council Canada - National Science Library

    Mateo, Jordi; Pla, Lluis M; Solsona, Francesc; Pagès, Adela

    2016-01-01

    .... The main aim is to minimize overall procurement costs and meet future demand. This kind of problem is rather common in fresh vegetable supply chains where producers are located in proximity either to processing plants or retailers...
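
    Only a fragment of the cited model is described above. As a generic illustration of two-stage stochastic programming for production planning, the sketch below solves the deterministic equivalent of a small problem with a first-stage production quantity and per-scenario shortage/surplus recourse, using scipy's linear programming routine; all demands, probabilities and costs are invented.

        import numpy as np
        from scipy.optimize import linprog

        demand = np.array([90.0, 120.0, 150.0])   # demand scenarios, tonnes (assumed)
        prob = np.array([0.25, 0.50, 0.25])       # scenario probabilities (assumed)
        S = len(demand)
        c_prod, c_short, c_surplus, capacity = 3.0, 8.0, 1.0, 140.0

        # Decision vector: [x, short_1..S, surplus_1..S]; minimize production cost
        # plus the expected cost of the second-stage (recourse) variables.
        c = np.concatenate(([c_prod], prob * c_short, prob * c_surplus))

        A_ub, b_ub = [], []
        for s in range(S):
            row = np.zeros(1 + 2 * S)
            row[0], row[1 + s] = -1.0, -1.0        # -x - short_s <= -d_s  (cover demand)
            A_ub.append(row); b_ub.append(-demand[s])
            row = np.zeros(1 + 2 * S)
            row[0], row[1 + S + s] = 1.0, -1.0     #  x - surplus_s <= d_s (measure surplus)
            A_ub.append(row); b_ub.append(demand[s])

        bounds = [(0.0, capacity)] + [(0.0, None)] * (2 * S)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
        print(f"produce {res.x[0]:.1f} t now; expected total cost {res.fun:.1f}")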

  8. DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [ORNL; Rice, C Keith [ORNL; Abdelaziz, Omar [ORNL; Shrestha, Som S [ORNL

    2015-01-01

    This paper uses a well-regarded, hardware based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high stage compressor was modelled using a compressor map, and the low stage compressor was experimentally studied using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.

  10. Cost-Based Domain Filtering for Stochastic Constraint Programming

    NARCIS (Netherlands)

    Rossi, R.; Tarim, S.A.; Hnich, B.; Prestwich, S.

    2008-01-01

    Cost-based filtering is a novel approach that combines techniques from Operations Research and Constraint Programming to filter from decision variable domains values that do not lead to better solutions [7]. Stochastic Constraint Programming is a framework for modeling combinatorial optimization pro

  12. Solving stochastic multiobjective vehicle routing problem using probabilistic metaheuristic

    Directory of Open Access Journals (Sweden)

    Gannouni Asmae

    2017-01-01

    closed form expression. This novel approach is based on combinatorial probability and can be incorporated in a multiobjective evolutionary algorithm. (ii) Provide probabilistic approaches to elitism and diversification in multiobjective evolutionary algorithms. Finally, the behavior of the resulting Probabilistic Multi-objective Evolutionary Algorithms (PrMOEAs) is empirically investigated on the multi-objective stochastic VRP problem.

  13. Some polyhedral results in combinatorial optimization

    OpenAIRE

    Xiao, Han; 肖汉

    2016-01-01

    Many combinatorial optimization problems can be conceived of as optimizing a linear function over a polyhedron. Investigating properties of the associated polyhedron has been evidenced to be a powerful schema for solving combinatorial optimization problems, especially for characterizing min-max relations. Three different topics in combinatorial optimization are explored in this thesis, which fall within a unified characterization: integrality of polyhedra. Various min-max relations in com...

  14. Numerical simulation of a step-piston type series two-stage pulse tube refrigerator

    Science.gov (United States)

    Zhu, Shaowei; Nogawa, Masafumi; Inoue, Tatsuo

    2007-09-01

    A two-stage pulse tube refrigerator has a great advantage in that there are no moving parts at low temperatures. The problem is low theoretical efficiency. In an ordinary two-stage pulse tube refrigerator, the expansion work of the first stage pulse tube is rather large, but is changed to heat. The theoretical efficiency is lower than that of a Stirling refrigerator. A series two-stage pulse tube refrigerator was introduced for solving this problem. The hot end of the regenerator of the second stage is connected to the hot end of the first stage pulse tube. The expansion work in the first stage pulse tube is part of the input work of the second stage, therefore the efficiency is increased. In a simulation result for a step-piston type two-stage series pulse tube refrigerator, the efficiency is increased by 13.8%.

  15. Theory and calculation of two-stage voltage stabilizer on zener diodes

    Directory of Open Access Journals (Sweden)

    G. S. Veksler

    1966-12-01

    Full Text Available The two-stage stabilizer is compared with the one-stage design. Formulas are derived that make an engineering calculation possible, and a worked example of the calculation is given.

  16. Two-stage fungal pre-treatment for improved biogas production from sisal leaf decortication residues

    National Research Council Canada - National Science Library

    Muthangya, Mutemi; Mshandete, Anthony Manoni; Kivaisi, Amelia Kajumulo

    2009-01-01

    .... Pre-treatment of the residue prior to its anaerobic digestion (AD) was investigated using a two-stage pre-treatment approach with two fungal strains, CCHT-1 and Trichoderma reesei in succession in anaerobic batch bioreactors...

  17. Experiment research on two-stage dry-fed entrained flow coal gasifier

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The process flow and the main devices of a new two-stage dry-fed coal gasification pilot plant with a throughput of 36 t/d are introduced in this paper. For comparison with traditional one-stage gasifiers, the influence of the coal feed ratio between the two stages on the performance of the gasifier is studied in detail through a series of experiments. The results reveal that the two-stage gasification decreases the temperature of the syngas at the outlet of the gasifier, simplifies the gasification process, and reduces the size of the syngas cooler. Moreover, the cold gas efficiency of the gasifier can be improved by using the two-stage gasification. In our experiments, the efficiency is about 3%-6% higher than that of existing one-stage gasifiers.

  18. TWO-STAGE CHARACTER CLASSIFICATION : A COMBINED APPROACH OF CLUSTERING AND SUPPORT VECTOR CLASSIFIERS

    NARCIS (Netherlands)

    Vuurpijl, L.; Schomaker, L.

    2000-01-01

    This paper describes a two-stage classification method for (1) classification of isolated characters and (2) verification of the classification result. Character prototypes are generated using hierarchical clustering. For those prototypes known to sometimes produce wrong classification results, a

  19. A Two-Stage Waste Gasification Reactor for Mars In-Situ Resource Utilization Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to design, build, and test a two-stage waste processing reactor for space applications. Our proposed technology converts waste from space missions into...

  20. Two-stage model of radon-induced malignant lung tumors in rats: effects of cell killing

    Science.gov (United States)

    Luebeck, E. G.; Curtis, S. B.; Cross, F. T.; Moolgavkar, S. H.

    1996-01-01

    A two-stage stochastic model of carcinogenesis is used to analyze lung tumor incidence in 3750 rats exposed to varying regimens of radon carried on a constant-concentration uranium ore dust aerosol. New to this analysis is the parameterization of the model such that cell killing by the alpha particles could be included. The model contains parameters characterizing the rate of the first mutation, the net proliferation rate of initiated cells, the ratio of the rates of cell loss (cell killing plus differentiation) and cell division, and the lag time between the appearance of the first malignant cell and the tumor. Data analysis was by standard maximum likelihood estimation techniques. Results indicate that the rate of the first mutation is dependent on radon and consistent with in vitro rates measured experimentally, and that the rate of the second mutation is not dependent on radon. An initial sharp rise in the net proliferation rate of initiated cells was found with increasing exposure rate (denoted model I), which leads to an unrealistically high cell-killing coefficient. A second model (model II) was studied, in which the initial rise was attributed to promotion via a step function, implying that it is due not to radon but to the uranium ore dust. This model resulted in values for the cell-killing coefficient consistent with those found for in vitro cells. An "inverse dose-rate" effect is seen, i.e. an increase in the lifetime probability of tumor with a decrease in exposure rate. This is attributed in large part to promotion of intermediate lesions. Model II is therefore preferable on biological grounds, since it yields a plausible cell-killing coefficient and attributes promotion to an agent other than radon, such as the uranium ore dust. This analysis presents evidence that a two-stage model describes the data adequately and generates hypotheses regarding the mechanism of radon-induced carcinogenesis.
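
    For orientation, the hazard of the two-stage (initiation-promotion-conversion) model sketched above is often summarised, when both mutation rates are small, by the approximation h(t) ≈ mu1*mu2*N*(exp((alpha-beta)t) - 1)/(alpha-beta), where N is the number of normal cells, mu1 and mu2 are the two mutation rates, and alpha-beta is the net proliferation rate of initiated cells. The short Python sketch below evaluates this textbook approximation with purely illustrative parameter values; it is not the maximum likelihood machinery used in the record.

        import math

        def approx_hazard(t, mu1, mu2, n_cells, alpha, beta):
            # Small-mutation-rate approximation of the two-stage clonal-expansion hazard.
            g = alpha - beta                      # net proliferation rate of initiated cells
            return mu1 * mu2 * n_cells * (math.exp(g * t) - 1.0) / g

        def tumour_probability(t, mu1, mu2, n_cells, alpha, beta, steps=1000):
            # P(tumour by time t) = 1 - exp(-cumulative hazard), trapezoidal integration.
            h = [approx_hazard(i * t / steps, mu1, mu2, n_cells, alpha, beta)
                 for i in range(steps + 1)]
            cum = sum((h[i] + h[i + 1]) / 2.0 * (t / steps) for i in range(steps))
            return 1.0 - math.exp(-cum)

        # Illustrative (not fitted) values: lifetime tumour probability of roughly 1%.
        print(round(tumour_probability(70.0, mu1=1e-7, mu2=1e-7,
                                       n_cells=1e7, alpha=1.0, beta=0.9), 4))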

  1. A new multi-motor drive system based on two-stage direct power converter

    OpenAIRE

    Kumar, Dinesh

    2011-01-01

    The two-stage AC to AC direct power converter is an alternative matrix converter topology, which offers the benefits of sinusoidal input currents and output voltages, bidirectional power flow and controllable input power factor. The absence of any energy storage devices, such as electrolytic capacitors, has increased the potential lifetime of the converter. In this research work, a new multi-motor drive system based on a two-stage direct power converter has been proposed, with two motors c...

  2. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Full Text Available Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We collected 99 conscripted soldiers whose educational levels were senior high school or lower to be the participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example for the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised
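
    As a rough illustration of the cut-off logic described above (not the study's data or exact procedure), the sketch below picks a single first-stage cut-off by maximising Youden's J over candidate thresholds and then routes only the ambiguous middle band of scores to a second-stage full assessment, which is the idea behind the window screening:

        import numpy as np

        def youden_cutoff(scores, labels):
            # Choose the threshold maximising sensitivity + specificity - 1 (Youden's J).
            # labels: 1 = condition present; low first-stage scores flag the condition.
            best_j, best_c = -1.0, None
            for c in np.unique(scores):
                flagged = scores <= c
                tp = np.sum(flagged & (labels == 1)); fn = np.sum(~flagged & (labels == 1))
                tn = np.sum(~flagged & (labels == 0)); fp = np.sum(flagged & (labels == 0))
                j = tp / (tp + fn) + tn / (tn + fp) - 1
                if j > best_j:
                    best_j, best_c = j, c
            return best_c

        def two_stage_window(score, low, high, confirm):
            # Window screening: clear pass above `high`, clear flag below `low`,
            # only the ambiguous window [low, high] goes to the costly second stage.
            if score > high:
                return 0
            if score < low:
                return 1
            return confirm(score)   # e.g. a full second-stage assessment

        scores = np.array([40, 55, 62, 70, 48, 66, 72, 58])
        labels = np.array([1, 1, 0, 0, 1, 0, 0, 1])      # illustrative, not study data
        print(youden_cutoff(scores, labels))             # 58 for this toy data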

  3. Two-Stage Conversion of Land and Marine Biomass for Biogas and Biohydrogen Production

    OpenAIRE

    Nkemka, Valentine

    2012-01-01

    The replacement of fossil fuels by renewable fuels such as biogas and biohydrogen will require efficient and economically competitive process technologies together with new kinds of biomass. A two-stage system for biogas production has several advantages over the widely used one-stage continuous stirred tank reactor (CSTR). However, it has not yet been widely implemented on a large scale. Biohydrogen can be produced in the anaerobic two-stage system. It is considered to be a useful fuel for t...

  4. Dynamic Combinatorial Libraries : From Exploring Molecular Recognition to Systems Chemistry

    NARCIS (Netherlands)

    Li, Jianwei; Nowak, Piotr; Otto, Sijbren

    2013-01-01

    Dynamic combinatorial chemistry (DCC) is a subset of combinatorial chemistry where the library members interconvert continuously by exchanging building blocks with each other. Dynamic combinatorial libraries (DCLs) are powerful tools for discovering the unexpected and have given rise to many

  5. Probabilistic methods in combinatorial analysis

    CERN Document Server

    Sachkov, Vladimir N

    2014-01-01

    This 1997 work explores the role of probabilistic methods for solving combinatorial problems. These methods not only provide the means of efficiently using such notions as characteristic and generating functions, the moment method and so on but also let us use the powerful technique of limit theorems. The basic objects under investigation are nonnegative matrices, partitions and mappings of finite sets, with special emphasis on permutations and graphs, and equivalence classes specified on sequences of finite length consisting of elements of partially ordered sets; these specify the probabilist

  6. Statistical mechanics of combinatorial auctions

    Science.gov (United States)

    Galla, Tobias; Leone, Michele; Marsili, Matteo; Sellitto, Mauro; Weigt, Martin; Zecchina, Riccardo

    2006-05-01

    Combinatorial auctions are formulated as frustrated lattice gases on sparse random graphs, allowing the determination of the optimal revenue by methods of statistical physics. Transitions between computationally easy and hard regimes are found and interpreted in terms of the geometric structure of the space of solutions. We introduce an iterative algorithm to solve intermediate and large instances, and discuss competing states of optimal revenue and maximal number of satisfied bidders. The algorithm can be generalized to the hard phase and to more sophisticated auction protocols.

  7. Fairness in Combinatorial Auctioning Systems

    CERN Document Server

    Saini, Megha

    2008-01-01

    One of the Multi-Agent Systems that is widely used by various government agencies, buyers and sellers in a market economy, in such a manner so as to attain optimized resource allocation, is the Combinatorial Auctioning System (CAS). We study another important aspect of resource allocations in CAS, namely fairness. We present two important notions of fairness in CAS, extended fairness and basic fairness. We give an algorithm that works by incorporating a metric to ensure fairness in a CAS that uses the Vickrey-Clark-Groves (VCG) mechanism, and uses an algorithm of Sandholm to achieve optimality. Mathematical formulations are given to represent measures of extended fairness and basic fairness.
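
    To make the VCG mechanism mentioned above concrete, here is a small, self-contained Python sketch for single-minded bidders (each bidder wants exactly one bundle). The brute-force winner determination merely stands in for Sandholm's optimal search algorithm, the bids are invented for the example, and the fairness metrics of the record are not reproduced here:

        from itertools import combinations

        def feasible(subset, bids):
            # An allocation is feasible if the winners' bundles are pairwise disjoint.
            taken = set()
            for b in subset:
                items, _ = bids[b]
                if taken & items:
                    return False
                taken |= items
            return True

        def best_allocation(bids, exclude=frozenset()):
            # Brute-force winner determination (stand-in for an optimal search algorithm).
            names = [b for b in bids if b not in exclude]
            best_w, best_set = 0.0, ()
            for r in range(len(names) + 1):
                for subset in combinations(names, r):
                    if feasible(subset, bids):
                        w = sum(bids[b][1] for b in subset)
                        if w > best_w:
                            best_w, best_set = w, subset
            return best_w, best_set

        def vcg_outcome(bids):
            # Each winner pays the externality it imposes on the other bidders.
            welfare, winners = best_allocation(bids)
            payments = {}
            for i in winners:
                without_i, _ = best_allocation(bids, exclude={i})
                payments[i] = without_i - (welfare - bids[i][1])
            return winners, payments

        bids = {"A": (frozenset({"x"}), 6.0),
                "B": (frozenset({"y"}), 5.0),
                "C": (frozenset({"x", "y"}), 8.0)}
        print(vcg_outcome(bids))   # (('A', 'B'), {'A': 3.0, 'B': 2.0})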

  8. Cubature formulas on combinatorial graphs

    CERN Document Server

    Pesenson, Isaac Z

    2011-01-01

    Many contemporary applications, for example, cataloging of galaxies, document analysis, face recognition, learning theory, image processing, operate with a large amount of data which is often represented as a graph embedded into a high dimensional Euclidean space. The variety of problems arising in contemporary data processing requires developing on graphs such topics of classical harmonic analysis as Shannon sampling, splines, wavelets, and cubature formulas. The goal of the paper is to establish cubature formulas on finite combinatorial graphs. The results have direct applications to problems that arise in connection with data filtering, data denoising and data dimension reduction.

  9. Stochastic integrals

    CERN Document Server

    McKean, Henry P

    2005-01-01

    This little book is a brilliant introduction to an important boundary field between the theory of probability and differential equations. -E. B. Dynkin, Mathematical Reviews This well-written book has been used for many years to learn about stochastic integrals. The book starts with the presentation of Brownian motion, then deals with stochastic integrals and differentials, including the famous Itô lemma. The rest of the book is devoted to various topics of stochastic integral equations, including those on smooth manifolds. Originally published in 1969, this classic book is ideal for supplemen

  10. Stochastic processes

    CERN Document Server

    Parzen, Emanuel

    2015-01-01

    Well-written and accessible, this classic introduction to stochastic processes and related mathematics is appropriate for advanced undergraduate students of mathematics with a knowledge of calculus and continuous probability theory. The treatment offers examples of the wide variety of empirical phenomena for which stochastic processes provide mathematical models, and it develops the methods of probability model-building.Chapter 1 presents precise definitions of the notions of a random variable and a stochastic process and introduces the Wiener and Poisson processes. Subsequent chapters examine

  11. Combinatorial algorithms for the seriation problem

    NARCIS (Netherlands)

    Seminaroti, Matteo

    2016-01-01

    In this thesis we study the seriation problem, a combinatorial problem arising in data analysis, which asks to sequence a set of objects in such a way that similar objects are ordered close to each other. We focus on the combinatorial structure and properties of Robinsonian matrices, a special class

  12. Combinatorial Interpretation of General Eulerian Numbers

    Directory of Open Access Journals (Sweden)

    Tingyao Xiong

    2014-01-01

    Full Text Available Since the 1950s, mathematicians have successfully interpreted the traditional Eulerian numbers and q-Eulerian numbers combinatorially. In this paper, the authors give a combinatorial interpretation to the general Eulerian numbers defined on general arithmetic progressions a,a+d,a+2d,….
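
    As background for the generalisation discussed in the record, the classical Eulerian number A(n, k) counts permutations of n elements with exactly k descents and satisfies A(n, k) = (k+1)A(n-1, k) + (n-k)A(n-1, k-1). The short Python check below is illustrative only and covers the classical case, not the arithmetic-progression version introduced in the paper:

        from itertools import permutations

        def eulerian(n, k):
            # Classical Eulerian number: permutations of n elements with k descents.
            if k < 0 or (n > 0 and k > n - 1):
                return 0
            if n == 0:
                return 1 if k == 0 else 0
            return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

        def descents(p):
            return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

        n = 4
        by_recurrence = [eulerian(n, k) for k in range(n)]
        by_counting = [sum(1 for p in permutations(range(n)) if descents(p) == k)
                       for k in range(n)]
        print(by_recurrence, by_counting)   # [1, 11, 11, 1] [1, 11, 11, 1]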

  13. Combinatorial Solutions to Normal Ordering of Bosons

    CERN Document Server

    Blasiak, P; Horzela, A; Penson, K A; Solomon, A I

    2005-01-01

    We present a combinatorial method of constructing solutions to the normal ordering of boson operators. Generalizations of standard combinatorial notions - the Stirling and Bell numbers, Bell polynomials and Dobinski relations - lead to calculational tools which allow one to find explicit normally ordered forms for a large class of operator functions.
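
    For readers unfamiliar with the combinatorial ingredients named above, the sketch below computes Stirling numbers of the second kind and Bell numbers by their standard recurrences; in the simplest textbook case these S(n, k) are the coefficients in the normal-ordered form of (a^dag a)^n. The code is an illustration and is not taken from the record:

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def stirling2(n, k):
            # Stirling numbers of the second kind S(n, k): partitions of n items into k blocks.
            if n == 0 and k == 0:
                return 1
            if n == 0 or k == 0:
                return 0
            return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

        def bell(n):
            # Bell number B(n) = sum over k of S(n, k).
            return sum(stirling2(n, k) for k in range(n + 1))

        # Normal ordering of (a^dag a)^3 = S(3,1) a^dag a + S(3,2) a^dag^2 a^2 + S(3,3) a^dag^3 a^3
        print([stirling2(3, k) for k in range(1, 4)])   # [1, 3, 1]
        print([bell(n) for n in range(6)])              # [1, 1, 2, 5, 15, 52]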

  14. Combinatorial Properties of Finite Models

    Science.gov (United States)

    Hubicka, Jan

    2010-09-01

    We study countable embedding-universal and homomorphism-universal structures and unify results related to both of these notions. We show that many universal and ultrahomogeneous structures allow a concise description (called here a finite presentation). Extending classical work of Rado (for the random graph), we find a finite presentation for each of the following classes: homogeneous undirected graphs, homogeneous tournaments and homogeneous partially ordered sets. We also give a finite presentation of the rational Urysohn metric space and some homogeneous directed graphs. We survey well known structures that are finitely presented. We focus on structures endowed with natural partial orders and prove their universality. These partial orders include partial orders on sets of words, partial orders formed by geometric objects, grammars, polynomials and homomorphism orders for various combinatorial objects. We give a new combinatorial proof of the existence of embedding-universal objects for homomorphism-defined classes of structures. This relates countable embedding-universal structures to homomorphism dualities (finite homomorphism-universal structures) and Urysohn metric spaces. Our explicit construction also allows us to show several properties of these structures.

  15. Combinatorial Clustering and the Beta Negative Binomial Process.

    Science.gov (United States)

    Broderick, Tamara; Mackey, Lester; Paisley, John; Jordan, Michael I

    2015-02-01

    We develop a Bayesian nonparametric approach to a general family of latent class problems in which individuals can belong simultaneously to multiple classes and where each class can be exhibited multiple times by an individual. We introduce a combinatorial stochastic process known as the negative binomial process (NBP) as an infinite-dimensional prior appropriate for such problems. We show that the NBP is conjugate to the beta process, and we characterize the posterior distribution under the beta-negative binomial process (BNBP) and hierarchical models based on the BNBP (the HBNBP). We study the asymptotic properties of the BNBP and develop a three-parameter extension of the BNBP that exhibits power-law behavior. We derive MCMC algorithms for posterior inference under the HBNBP, and we present experiments using these algorithms in the domains of image segmentation, object recognition, and document analysis.

  16. Stochastic optimization

    CERN Document Server

    Schneider, Johannes J

    2007-01-01

    This book addresses stochastic optimization procedures in a broad manner. The first part offers an overview of relevant optimization philosophies; the second deals with benchmark problems in depth, by applying a selection of optimization procedures. Written primarily with scientists and students from the physical and engineering sciences in mind, this book addresses a larger community of all who wish to learn about stochastic optimization techniques and how to use them.

  17. Method of oxygen-enriched two-stage underground coal gasification

    Institute of Scientific and Technical Information of China (English)

    Liu Hongtao; Chen Feng; Pan Xia; Yao Kai; Liu Shuqin

    2011-01-01

    Two-stage underground coal gasification was studied to improve the caloric value of the syngas and to extend gas production times. A model test using the oxygen-enriched two-stage coal gasification method was carried out. The composition of the gas produced, the time ratio of the two stages, and the role of the temperature field were analysed. The results show that oxygen-enriched two-stage gasification shortens the time of the first stage and prolongs the time of the second stage. Feed oxygen concentrations of 30%, 35%, 40%, 45%, 60%, or 80% gave time ratios (first stage to second stage) of 1:0.12, 1:0.21, 1:0.51, 1:0.64, 1:0.90, and 1:4.0 respectively. Cooling rates of the temperature field after steam injection decreased with time from about 19.1-27.4 ℃/min to 2.3-6.8 ℃/min. But this rate increased with increasing oxygen concentrations in the first stage. The caloric value of the syngas improves with increased oxygen concentration in the first stage. Injection of 80% oxygen-enriched air gave gas with the highest caloric value and also gave the longest production time. The caloric value of the gas obtained from the oxygen-enriched two-stage gasification method lies in the range from 5.31 MJ/Nm3 to 10.54 MJ/Nm3.

  18. High magnetostriction parameters for low-temperature sintered cobalt ferrite obtained by two-stage sintering

    Energy Technology Data Exchange (ETDEWEB)

    Khaja Mohaideen, K.; Joy, P.A., E-mail: pa.joy@ncl.res.in

    2014-12-15

    From the studies on the magnetostriction characteristics of two-stage sintered polycrystalline CoFe2O4 made from nanocrystalline powders, it is found that two-stage sintering at low temperatures is very effective for enhancing the density and for attaining higher magnetostriction coefficient. Magnetostriction coefficient and strain derivative are further enhanced by magnetic field annealing and relatively larger enhancement in the magnetostriction parameters is obtained for the samples sintered at lower temperatures, after magnetic annealing, despite the fact that samples sintered at higher temperatures show larger magnetostriction coefficients before annealing. A high magnetostriction coefficient of ∼380 ppm is obtained after field annealing for the sample sintered at 1100 °C, below a magnetic field of 400 kA/m, which is the highest value so far reported at low magnetic fields for sintered polycrystalline cobalt ferrite. - Highlights: • Effect of two-stage sintering on the magnetostriction characteristics of CoFe2O4 is studied. • Two-stage sintering is very effective for enhancing the density and the magnetostriction parameters. • Higher magnetostriction for samples sintered at low temperatures and after magnetic field annealing. • Highest reported magnetostriction of 380 ppm at low fields after two-stage, low-temperature sintering.

  19. 13 K thermally coupled two-stage Stirling-type pulse tube refrigerator

    Institute of Scientific and Technical Information of China (English)

    TANG Ke; CHEN Guobang; THUMMES Günter

    2005-01-01

    Stirling-type pulse tube refrigerators have attracted academic and commercial interest in recent years due to their more compact configuration and higher efficiency than those of G-M type pulse tube refrigerators. In order to achieve a no-load cooling temperature below 20 K, a thermally coupled two-stage Stirling-type pulse tube refrigerator has been built. The thermally coupled arrangement was expected to minimize the interference between the two stages and to simplify the adjustment and optimization of the phase shifters. A no-load cooling temperature of 14.97 K has been realized with the two-stage cooler driven by one linear compressor of 200 W electric input. When the two stages are driven by two compressors respectively, with total electric input of 400 W, the prototype has attained a no-load cooling temperature of 12.96 K, which is the lowest temperature ever reported with two-stage Stirling-type pulse tube refrigerators.

  20. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A laboratory-made resin model of a first molar was prepared by a standard method for full crowns, with a preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically in the mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean marginal gap obtained with the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicate higher accuracy for the two-stage impression technique than for the one-stage impression technique.
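
    The comparison reported above is a standard independent two-sample test on the measured marginal gaps. A minimal sketch follows; the gap values below are purely hypothetical and only the structure of the comparison is illustrated:

        from scipy import stats

        # Hypothetical marginal-gap measurements in micrometres (NOT the study's data).
        one_stage = [82, 95, 88, 101, 90, 97, 85, 93]
        two_stage = [70, 76, 68, 81, 74, 79, 72, 77]

        t_stat, p_value = stats.ttest_ind(one_stage, two_stage)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # larger mean gap for the one-stage group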

  1. Design and construction of the X-2 two-stage free piston driven expansion tube

    Science.gov (United States)

    Doolan, Con

    1995-01-01

    This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory where commissioning is about to take place. The X-2 uses a unique, two-stage driver design which allows a more compact and lower-cost free piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.

  2. Analysis of performance and optimum configuration of two-stage semiconductor thermoelectric module

    Institute of Scientific and Technical Information of China (English)

    Li Kai-Zhen; Liang Rui-Sheng; Wei Zheng-Jun

    2008-01-01

    In this paper, the theoretical analysis and simulating calculation were conducted for a basic two-stage semiconductor thermoelectric module, which contains one thermocouple in the second stage and several thermocouples in the first stage. The study focused on the configuration of the two-stage semiconductor thermoelectric cooler, especially investigating the influences of some parameters, such as the current I1 of the first stage, the area A1 of every thermocouple and the number n of thermocouples in the first stage, on the cooling performance of the module. The obtained results of analysis indicate that changing the current I1 of the first stage, the area A1 of thermocouples and the number n of thermocouples in the first stage can improve the cooling performance of the module. These results can be used to optimize the configuration of the two-stage semiconductor thermoelectric module and provide guidance for the design and application of thermoelectric coolers.

  3. Effects of earthworm casts and zeolite on the two-stage composting of green waste.

    Science.gov (United States)

    Zhang, Lu; Sun, Xiangyang

    2015-05-01

    Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90-270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  4. Two-Stage Revision Anterior Cruciate Ligament Reconstruction: Bone Grafting Technique Using an Allograft Bone Matrix.

    Science.gov (United States)

    Chahla, Jorge; Dean, Chase S; Cram, Tyler R; Civitarese, David; O'Brien, Luke; Moulton, Samuel G; LaPrade, Robert F

    2016-02-01

    Outcomes of primary anterior cruciate ligament (ACL) reconstruction have been reported to be far superior to those of revision reconstruction. However, as the incidence of ACL reconstruction is rapidly increasing, so is the number of failures. The subsequent need for revision ACL reconstruction is estimated to occur in up to 13,000 patients each year in the United States. Revision ACL reconstruction can be performed in one or two stages. A two-stage approach is recommended in cases of improper placement of the original tunnels or in cases of unacceptable tunnel enlargement. The aim of this study was to describe the technique for allograft ACL tunnel bone grafting in patients requiring a two-stage revision ACL reconstruction.

  5. The CSS and The Two-Staged Methods for Parameter Estimation in SARFIMA Models

    Directory of Open Access Journals (Sweden)

    Erol Egrioglu

    2011-01-01

    Full Text Available Seasonal Autoregressive Fractionally Integrated Moving Average (SARFIMA) models are used in the analysis of seasonal long memory-dependent time series. Two methods, which are conditional sum of squares (CSS) and two-staged methods introduced by Hosking (1984), are proposed to estimate the parameters of SARFIMA models. However, no simulation study has been conducted in the literature. Therefore, it is not known how these methods behave under different parameter settings and sample sizes in SARFIMA models. The aim of this study is to show the behavior of these methods by a simulation study. According to results of the simulation, advantages and disadvantages of both methods under different parameter settings and sample sizes are discussed by comparing the root mean square error (RMSE) obtained by the CSS and two-staged methods. As a result of the comparison, it is seen that the CSS method produces better results than those obtained from the two-staged method.

  6. A two-stage subsurface vertical flow constructed wetland for high-rate nitrogen removal.

    Science.gov (United States)

    Langergraber, Guenter; Leroch, Klaus; Pressl, Alexander; Rohrhofer, Roland; Haberl, Raimund

    2008-01-01

    By using a two-stage constructed wetland (CW) system operated with an organic load of 40 g COD/(m2.d) (2 m2 per person equivalent), average nitrogen removal efficiencies of about 50% and average nitrogen elimination rates of 980 g N/(m2.yr) could be achieved. Two vertical flow beds with intermittent loading have been operated in series. The first stage uses sand with a grain size of 2-3.2 mm for the main layer and has a drainage layer that is impounded; the second stage uses sand with a grain size of 0.06-4 mm and a drainage layer with free drainage. The high nitrogen removal can be achieved without recirculation; thus it is possible to operate the two-stage CW system without energy input. The paper shows performance data for the two-stage CW system regarding removal of organic matter and nitrogen over the two-year operating period of the system. Additionally, its efficiency is compared with the efficiency of a single-stage vertical flow CW system designed and operated according to the Austrian design standards with 4 m2 per person equivalent. The comparison shows that a higher effluent quality could be reached with the two-stage system even though the two-stage CW system is operated with double the organic load, or half the specific surface area requirement, respectively. Another advantage is that the specific investment costs of the two-stage CW system amount to 1,200 EUR per person (without mechanical pre-treatment), only about 60% of the specific investment costs of the single-stage CW system.

  7. Stochastic Vehicle Routing with Recourse

    CERN Document Server

    Goertz, Inge Li; Saket, Rishi

    2012-01-01

    We study the classic Vehicle Routing Problem in the setting of stochastic optimization with recourse. StochVRP is a two-stage optimization problem, where demand is satisfied using two routes: fixed and recourse. The fixed route is computed using only a demand distribution. Then after observing the demand instantiations, a recourse route is computed -- but costs here become more expensive by a factor λ. We present an O(log²n · log(nλ))-approximation algorithm for this stochastic routing problem, under arbitrary distributions. The main idea in this result is relating StochVRP to a special case of submodular orienteering, called knapsack rank-function orienteering. We also give a better approximation ratio for knapsack rank-function orienteering than what follows from prior work. Finally, we provide a Unique Games Conjecture based ω(1) hardness of approximation for StochVRP, even on star-like metrics on which our algorithm achieves a logarithmic approximation.
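
    The two-stage cost structure described above (a fixed first-stage route plus recourse that is λ times more expensive once demands are revealed) can be illustrated with a toy Monte Carlo evaluation. Everything below — the star metric, the pre-booked capacity of 0.5 per customer, the distances — is invented for the example and is not the paper's approximation algorithm:

        import random

        def expected_two_stage_cost(fixed_cost, recourse_cost, scenarios, lam):
            # E[ fixed + lam * cheapest recourse after seeing the demand ], by Monte Carlo.
            return sum(fixed_cost + lam * recourse_cost(d) for d in scenarios) / len(scenarios)

        # Toy star metric: depot at the centre, customer i at distance dist[i]; the fixed
        # route pre-books capacity 0.5 per customer, recourse serves any excess demand
        # with dedicated round trips whose cost is inflated by the factor lam.
        dist = [2.0, 3.0, 5.0]
        fixed = 2 * sum(dist)

        def recourse(demand):
            return sum(2 * dist[i] * max(demand[i] - 0.5, 0.0) for i in range(len(dist)))

        random.seed(1)
        scenarios = [[random.random() for _ in dist] for _ in range(20000)]
        print(round(expected_two_stage_cost(fixed, recourse, scenarios, lam=3.0), 2))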

  8. Stochastic vehicle routing with recourse

    DEFF Research Database (Denmark)

    Gørtz, Inge Li; Nagarajan, Viswanath; Saket, Rishi

    2012-01-01

    We study the classic Vehicle Routing Problem in the setting of stochastic optimization with recourse. StochVRP is a two-stage problem, where demand is satisfied using two routes: fixed and recourse. The fixed route is computed using only a demand distribution. Then after observing the demand...... instantiations, a recourse route is computed - but costs here become more expensive by a factor λ. We present an O(log2n ·log(nλ))-approximation algorithm for this stochastic routing problem, under arbitrary distributions. The main idea in this result is relating StochVRP to a special case of submodular...... orienteering, called knapsack rank-function orienteering. We also give a better approximation ratio for knapsack rank-function orienteering than what follows from prior work. Finally, we provide a Unique Games Conjecture based ω(1) hardness of approximation for StochVRP, even on star-like metrics on which our...

  9. Methane production from sweet sorghum residues via a two-stage process

    Energy Technology Data Exchange (ETDEWEB)

    Stamatelatou, K.; Dravillas, K.; Lyberatos, G. [University of Patras (Greece). Department of Chemical Engineering, Laboratory of Biochemical Engineering and Environmental Technology

    2003-07-01

    The start-up of a two-stage reactor configuration for the anaerobic digestion of sweet sorghum residues was evaluated. The sweet sorghum residues were a waste stream originating from the alcoholic fermentation of sweet sorghum and the subsequent distillation step. This waste stream contained a high concentration of solid matter (9% TS) and thus could be characterized as a semi-solid, not easily biodegradable wastewater with a high COD (115 g/l). The application of the proposed two-stage configuration (consisting of one thermophilic hydrolyser and one mesophilic methaniser) achieved a methane production of 16 l/l wastewater under a hydraulic retention time of 19 d. (author)

  10. Two Stage Fully Differential Sample and Hold Circuit Using .18µm Technology

    Directory of Open Access Journals (Sweden)

    Dharmendra Dongardiye

    2014-05-01

    Full Text Available This paper presents a well-established fully differential sample-and-hold circuit implemented in 180-nm CMOS technology. In this two-stage approach the first stage gives very high gain and the second stage gives a large voltage swing. The proposed op amp, an improved fully differential two-stage operational amplifier with 76.7 dB gain, provides a 149 MHz unity-gain bandwidth, a 78 degree phase margin and a differential peak-to-peak output swing of more than 2.4 V. The sample-and-hold circuit meets the SNR specification requirements.

  11. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    Full Text Available The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty. It covers the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first shown and discussed. Two-stage penile urethroplasty is then reported: a detailed description of first-stage urethroplasty according to the Johanson technique is given, and a second-stage urethroplasty using a buccal mucosa graft and glue is presented. Finally, the postoperative course and follow-up are addressed.

  12. Development of a linear compressor for two-stage pulse tube cryocoolers

    Institute of Scientific and Technical Information of China (English)

    Peng-da YAN; Wei-li GAO; Guo-bang CHEN

    2009-01-01

    A valveless linear compressor was built to drive a self-made two-stage pulse tube cryocooler. With a designed maximum swept volume of 60 cm3, the compressor can provide the cryocooler with a pressure-volume (PV) power of 400 W. Preliminary measurements of the compressor indicated that an efficiency of 35%-55% and a pressure ratio of 1.3-1.4 could both be obtained. The two-stage pulse tube cryocooler driven by this compressor achieved a lowest temperature of 14.2 K.

  13. Terephthalic acid wastewater treatment by using two-stage aerobic process

    Institute of Scientific and Technical Information of China (English)

    1999-01-01

    Based on comparative tests of anoxic and aerobic processes, a two-stage aerobic process with a biological selector was chosen to treat terephthalic acid (PTA) wastewater. By adopting the two-stage aerobic process, the CODCr in PTA wastewater could be reduced from 4000-6000 mg/L to below 100 mg/L; the COD loading in the first aerobic tank could reach 7.0-8.0 kgCODCr/(m3.d) and that of the second stage was 0.2 to 0.4 kgCODCr/(m3.d). Further research on the kinetics of substrate degradation was carried out.

  14. First Law Analysis of a Two-stage Ejector-vapor Compression Refrigeration Cycle working with R404A

    National Research Council Canada - National Science Library

    Feiza Memet; Daniela-Elena Mitu

    2011-01-01

    ....The traditional two-stage vapor compression refrigeration cycle might be replaced by a two-stage ejector-vapor compression refrigeration cycle if the aim is to decrease irreversibility during expansion...

  15. Natural Dynamics for Combinatorial Optimization

    CERN Document Server

    Ovchinnikov, Igor V

    2015-01-01

    Stochastic and/or natural dynamical systems (DSs) are dominated by sudden nonlinear processes such as neuroavalanches, gamma-ray bursts, solar flares, earthquakes etc. that exhibit scale-free statistics. These behaviors also occur in many nanosystems. On phase diagrams, these DSs belong to a finite-width phase that separates the phases of thermodynamic equilibrium and ordinary chaotic dynamics, and that is known under such names as intermittency, noise-induced chaos, and self-organized criticality. Within the recently formulated approximation-free cohomological theory of stochastic differential equations, the noise-induced chaos can be roughly interpreted as a noise-induced overlap between regular (integrable) and chaotic (non-integrable) deterministic dynamics so that DSs in this phase inherit the properties of both. Here, we analyze this unique set of properties and conclude that such DSs must be the most efficient natural optimizers. Based on this understanding, we propose the method of the natural dyn...

  16. Combinatorial Chemistry for Optical Sensing Applications

    Science.gov (United States)

    Díaz-García, M. E.; Luis, G. Pina; Rivero-Espejel, I. A.

    The recent interest in combinatorial chemistry for the synthesis of selective recognition materials for optical sensing applications is presented. The preparation, screening, and applications of libraries of ligands and chemosensors against molecular species and metal ions are first considered. Included in this chapter are also the developments involving applications of combinatorial approaches to the discovery of sol-gel and acrylic-based imprinted materials for optical sensing of antibiotics and pesticides, as well as libraries of doped sol-gels for high-throughput optical sensing of oxygen. The potential of combinatorial chemistry applied to the discovery of new sensing materials is highlighted.

  17. Overcoming the bottlenecks of anaerobic digestion of olive mill solid waste by two-stage fermentation.

    Science.gov (United States)

    Stoyanova, Elitza; Lundaa, Tserennyam; Bochmann, Günther; Fuchs, Werner

    2017-02-01

    Two-stage anaerobic digestion (AD) of two-phase olive mill solid waste (OMSW) was applied to reduce the inhibiting factors by optimizing the acidification stage. Single-stage AD and co-fermentation with chicken manure were run simultaneously for direct comparison. Degradation of the polyphenols of up to 61% was observed during the methanogenic stage. Nevertheless, the concentration of phenolic substances was still high; the two-stage fermentation remained stable at an OLR of 1.5 kgVS/m³day. The buffer capacity of the system was twice as high as that of the one-stage fermentation, without additives. The two-stage AD combined a thermophilic first stage with a mesophilic second stage, which proved the most favourable configuration for AD of OMSW: the hydraulic retention time (HRT) was reduced from 230 to 150 days, and start-up of the fermentation was three times faster than for the single-stage process and the co-fermentation. The optimal HRT and incubation temperature for the first stage were determined to be four days and 55°C. The performance of the two-stage AD with respect to process stability was further examined through co-digestion of OMSW with chicken manure as a nitrogen-rich co-substrate, which makes them viable options for waste disposal with concomitant energy recovery.

  18. The Design, Construction and Operation of a 75 kW Two-Stage Gasifier

    DEFF Research Database (Denmark)

    Henriksen, Ulrik Birk; Ahrenfeldt, Jesper; Jensen, Torben Kvist

    2003-01-01

    The Two-Stage Gasifier was operated for several weeks (465 hours) and of these 190 hours continuously. The gasifier is operated automatically unattended day and night, and only small adjustments of the feeding rate were necessary once or twice a day. The operation was successful, and the output a...... of the reactor had to be constructed in some other material....

  19. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD in two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors have been operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) has been reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage, and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of the two-stage digestion.

  20. A two-stage ethanol-based biodiesel production in a packed bed reactor

    DEFF Research Database (Denmark)

    Xu, Yuan; Nordblad, Mathias; Woodley, John

    2012-01-01

    A two-stage enzymatic process for producing fatty acid ethyl ester (FAEE) in a packed bed reactor is reported. The process uses an experimental immobilized lipase (NS 88001) and Novozym 435 to catalyze transesterification (first stage) and esterification (second stage), respectively. Both stages...

  1. Two-Stage MAS Technique for Analysis of DRA Elements and Arrays on Finite Ground Planes

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2007-01-01

    A two-stage Method of Auxiliary Sources (MAS) technique is proposed for analysis of dielectric resonator antenna (DRA) elements and arrays on finite ground planes (FGPs). The problem is solved by first analysing the DRA on an infinite ground plane (IGP) and then using this solution to model the FGP...... problem....

  2. Use a Log Splitter to Demonstrate Two-Stage Hydraulic Pump

    Science.gov (United States)

    Dell, Timothy W.

    2012-01-01

    The two-stage hydraulic pump is commonly used in many high school and college courses to demonstrate hydraulic systems. Unfortunately, many textbooks do not provide a good explanation of how the technology works. Another challenge that instructors run into with teaching hydraulic systems is the cost of procuring an expensive real-world machine…

  3. Two-Stage Sampling Procedures for Comparing Means When Population Distributions Are Non-Normal.

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen

    Two-stage sampling procedures for comparing two population means when variances are heterogeneous have been developed by D. G. Chapman (1950) and B. K. Ghosh (1975). Both procedures assume sampling from populations that are normally distributed. The present study reports on the effect that sampling from non-normal distributions has on Type I error…

  4. Some design aspects of a two-stage rail-to-rail CMOS op amp

    NARCIS (Netherlands)

    Gierkink, S.L.J.; Holzmann, Peter J.; Wiegerink, R.J.; Wassenaar, R.F.

    1999-01-01

    A two-stage low-voltage CMOS op amp with rail-to-rail input and output voltage ranges is presented. The circuit uses complementary differential input pairs to achieve the rail-to-rail common-mode input voltage range. The differential pairs operate in strong inversion, and the constant transconductan

  5. Capacity Analysis of Two-Stage Production lines with Many Products

    NARCIS (Netherlands)

    M.B.M. de Koster (René)

    1987-01-01

    textabstractWe consider two-stage production lines with an intermediate buffer. A buffer is needed when fluctuations occur. For single-product production lines fluctuations in capacity availability may be caused by random processing times, failures and random repair times. For multi-product producti

  6. Kinetics analysis of two-stage austenitization in supermartensitic stainless steel

    DEFF Research Database (Denmark)

    Nießen, Frank; Villa, Matteo; Hald, John

    2017-01-01

    The martensite-to-austenite transformation in X4CrNiMo16-5-1 supermartensitic stainless steel was followed in-situ during isochronal heating at 2, 6 and 18 K min−1 applying energy-dispersive synchrotron X-ray diffraction at the BESSY II facility. Austenitization occurred in two stages, separated...

  7. An intracooling system for a novel two-stage sliding-vane air compressor

    Science.gov (United States)

    Murgia, Stefano; Valenti, Gianluca; Costanzo, Ida; Colletta, Daniele; Contaldi, Giulio

    2017-08-01

    Lube-oil injection is used in positive-displacement compressors and, among them, in sliding-vane machines to guarantee the correct lubrication of the moving parts and as a seal to prevent air leakage. Furthermore, lube-oil injection allows the lubricant to also be exploited as a thermal ballast with a large thermal capacity to minimize the temperature increase during the compression. This study presents the design of a two-stage sliding-vane rotary compressor in which the air cooling is operated by high-pressure cold oil injection into a connection duct between the two stages. The heat exchange between the atomized oil jet and the air results in a decrease of the air temperature before the second stage, improving the overall system efficiency. This cooling system is named here intracooling, as opposed to intercooling. The oil injection is realized via pressure-swirl nozzles, both within the compressors and inside the intracooling duct. The design of the two-stage sliding-vane compressor is accomplished by way of a lumped parameter model. The model predicts an input power reduction as large as 10% for intercooled and intracooled two-stage compressors, the latter being slightly better, with respect to a conventional single-stage compressor for compressed air applications. An experimental campaign is conducted on a first prototype that comprises the low-pressure compressor and the intracooling duct, indicating that a significant temperature reduction is achieved in the duct.

  8. Development of a heavy-duty diesel engine with two-stage turbocharging

    NARCIS (Netherlands)

    Sturm, L.; Kruithof, J.

    2001-01-01

    A mean value model was developed using the Matrixx/Systembuild simulation tool for designing real-time control algorithms for the two-stage engine. All desired characteristics are achieved, apart from a lower A/F ratio at lower engine speeds and the turbocharger matching calculations. The CANbus is used to

  9. Two-stage, dilute sulfuric acid hydrolysis of wood : an investigation of fundamentals

    Science.gov (United States)

    John F. Harris; Andrew J. Baker; Anthony H. Conner; Thomas W. Jeffries; James L. Minor; Roger C. Pettersen; Ralph W. Scott; Edward L Springer; Theodore H. Wegner; John I. Zerbe

    1985-01-01

    This paper presents a fundamental analysis of the processing steps in the production of methanol from southern red oak (Quercus falcata Michx.) by two-stage dilute sulfuric acid hydrolysis. Data for hemicellulose and cellulose hydrolysis are correlated using models. This information is used to develop and evaluate a process design.

  10. Two-stage data envelopment analysis technique for evaluating internal supply chain efficiency

    Directory of Open Access Journals (Sweden)

    Nisakorn Somsuk

    2014-12-01

    Full Text Available A two-stage data envelopment analysis (DEA), which uses mathematical linear programming techniques, is applied to evaluate the efficiency of a system composed of two related sub-processes, in which the outputs from the first sub-process (the intermediate outputs of the system) are the inputs for the second sub-process. The relative efficiencies of the system and its sub-processes can be measured by applying the two-stage DEA. According to the literature on supply chain management, this technique can be used as a tool for evaluating the efficiency of a supply chain composed of two related sub-processes. The technique can help to identify the inefficient sub-processes. Once the efficiency of an inefficient sub-process is improved, the aggregate efficiency of the supply chain improves as well. This paper aims to present a procedure for evaluating the efficiency of the supply chain by using the two-stage DEA, under the assumption of constant returns to scale, with an example of internal supply chain efficiency measurement of insurance companies for illustration. Moreover, the authors also present some observations on the application of this technique.
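
    Each stage of the two-stage DEA described above is itself an ordinary DEA linear program. As a building block, the sketch below solves the standard input-oriented CCR model (constant returns to scale) for one decision-making unit with scipy; the data are invented, and chaining two such programs through the intermediate outputs, as the record describes, is omitted for brevity:

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            # Input-oriented CCR efficiency of DMU o under constant returns to scale.
            # X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); decision vars: [theta, lambdas].
            n, m = X.shape
            s = Y.shape[1]
            c = np.r_[1.0, np.zeros(n)]                      # minimise theta
            A_in = np.c_[-X[o].reshape(m, 1), X.T]           # sum_j lam_j x_j <= theta * x_o
            A_out = np.c_[np.zeros((s, 1)), -Y.T]            # sum_j lam_j y_j >= y_o
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[o]],
                          bounds=[(0, None)] * (n + 1), method="highs")
            return res.fun

        X = np.array([[2.0], [4.0], [3.0]])   # one input per DMU (illustrative data)
        Y = np.array([[2.0], [3.0], [3.0]])   # one output per DMU
        print([round(ccr_efficiency(X, Y, o), 3) for o in range(3)])   # [1.0, 0.75, 1.0]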

  11. Two-stage estimation in copula models used in family studies

    DEFF Research Database (Denmark)

    Andersen, Elisabeth Anne Wreford

    2005-01-01

    In this paper register based family studies provide the motivation for studying a two-stage estimation procedure in copula models for multivariate failure time data. The asymptotic properties of the estimators in both parametric and semi-parametric models are derived, generalising the approach by...

  12. Innovative two-stage anaerobic process for effective codigestion of cheese whey and cattle manure.

    Science.gov (United States)

    Bertin, Lorenzo; Grilli, Selene; Spagni, Alessandro; Fava, Fabio

    2013-01-01

    The valorisation of agroindustrial waste through anaerobic digestion represents a significant opportunity for refuse treatment and renewable energy production. This study aimed to improve the codigestion of cheese whey (CW) and cattle manure (CM) by an innovative two-stage process, based on concentric acidogenic and methanogenic phases, designed for enhancing performance and reducing footprint. The optimum CW to CM ratio was evaluated under batch conditions. Thereafter, codigestion was implemented under continuous-flow conditions comparing one- and two-stage processes. The results demonstrated that the addition of CM in codigestion with CW greatly improved the anaerobic process. The highest methane yield was obtained co-treating the two substrates at equal ratio by using the innovative two-stage process. The proposed system reached the maximum value of 258 mL(CH4) gVS(-1), which was more than twice the value obtained by the one-stage process and 10% higher than the value obtained by the two-stage one.

  13. Extraoral implants for orbit rehabilitation: a comparison between one-stage and two-stage surgeries.

    Science.gov (United States)

    de Mello, M C L M P; Guedes, R; de Oliveira, J A P; Pecorari, V A; Abrahão, M; Dib, L L

    2014-03-01

    The aim of the study was to compare the osseointegration success rate and time for delivery of the prosthesis among cases treated by two-stage or one-stage surgery for orbit rehabilitation between 2003 and 2011. Forty-five patients were included, 31 males and 14 females; 22 patients had two-stage surgery and 23 patients had one-stage surgery. A total of 138 implants were installed, 42 (30.4%) on previously irradiated bone. The implant survival rate was 96.4%, with a success rate of 99.0% among non-irradiated patients and 90.5% among irradiated patients. Two-stage patients received 74 implants with a survival rate of 94.6% (four implants lost); one-stage surgery patients received 64 implants with a survival rate of 98.4% (one implant lost). The median time interval between implant fixation and delivery of the prosthesis for the two-stage group was 9.6 months and for the one-stage group was 4.0 months (P < 0.001). The one-stage technique proved to be reliable and was associated with few risks and complications; the rate of successful osseointegration was similar to those reported in the literature. The one-stage technique should be considered a viable procedure that shortens the time to final rehabilitation and facilitates appropriate patient follow-up treatment.

  14. Validation of Continuous CHP Operation of a Two-Stage Biomass Gasifier

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Henriksen, Ulrik Birk; Jensen, Torben Kvist

    2006-01-01

    The Viking gasification plant at the Technical University of Denmark was built to demonstrate a continuous combined heat and power operation of a two-stage gasifier fueled with wood chips. The nominal input of the gasifier is 75 kW thermal. To validate the continuous operation of the plant, a 9-d...

  15. High rate treatment of terephthalic acid production wastewater in a two-stage anaerobic bioreactor

    NARCIS (Netherlands)

    Kleerebezem, R.; Beckers, J.; Pol, L.W.H.; Lettinga, G.

    2005-01-01

    The feasibility was studied of anaerobic treatment of wastewater generated during purified terephthalic acid (PTA) production in a two-stage upflow anaerobic sludge blanket (UASB) reactor system. The artificial influent of the system contained the main organic substrates of PTA-wastewater: acetate, be

  16. Thermal design of two-stage evaporative cooler based on thermal comfort criterion

    Science.gov (United States)

    Gilani, Neda; Poshtiri, Amin Haghighi

    2017-04-01

    Performance of two-stage evaporative coolers at various outdoor air conditions was numerically studied, and its geometric and physical characteristics were obtained based on thermal comfort criteria. For this purpose, a mathematical model was developed based on conservation equations of mass, momentum and energy to determine heat and mass transfer characteristics of the system. The results showed that the two-stage indirect/direct cooler can provide the thermal comfort condition when outdoor air temperature and relative humidity are located in the range of 34-54 °C and 10-60 %, respectively. Moreover, as relative humidity of the ambient air rises, a two-stage evaporative cooler with a smaller direct and larger indirect cooler will be needed. In buildings with high cooling demand, thermal comfort may be achieved at a greater air change per hour number, and thus an expensive two-stage evaporative cooler with a higher electricity consumption would be required. Finally, a design guideline was proposed to determine the size of required plate heat exchangers at various operating conditions.
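
    As a hedged aid to reading the supply-temperature results, the series composition of the two stages can be written with the conventional wet-bulb effectiveness definitions below; these are assumed standard forms, not the paper's full conservation-equation model.

    ```latex
    % Series composition of the indirect (IEC) and direct (DEC) stages, using
    % conventional wet-bulb effectivenesses (assumed standard definitions):
    T_{1} = T_{0} - \varepsilon_{\mathrm{IEC}}\,(T_{0} - T_{\mathrm{wb},0}),
    \qquad
    T_{2} = T_{1} - \varepsilon_{\mathrm{DEC}}\,(T_{1} - T_{\mathrm{wb},1}),
    ```

    where T_0 is the outdoor dry-bulb temperature, T_1 the air temperature leaving the indirect stage, T_2 the supply temperature leaving the direct stage, and T_wb,0, T_wb,1 the wet-bulb temperatures entering each stage.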

  17. ADM1-based modeling of methane production from acidified sweet sorghum extract in a two-stage process

    DEFF Research Database (Denmark)

    Antonopoulou, Georgia; Gavala, Hariklia N.; Skiadas, Ioannis

    2012-01-01

    The present study focused on the application of the Anaerobic Digestion Model 1 on the methane production from acidified sorghum extract generated from a hydrogen producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acids consumption were...

  18. Thermal design of two-stage evaporative cooler based on thermal comfort criterion

    Science.gov (United States)

    Gilani, Neda; Poshtiri, Amin Haghighi

    2016-09-01

    Performance of two-stage evaporative coolers at various outdoor air conditions was numerically studied, and its geometric and physical characteristics were obtained based on thermal comfort criteria. For this purpose, a mathematical model was developed based on conservation equations of mass, momentum and energy to determine heat and mass transfer characteristics of the system. The results showed that the two-stage indirect/direct cooler can provide the thermal comfort condition when outdoor air temperature and relative humidity are located in the range of 34-54 °C and 10-60 %, respectively. Moreover, as relative humidity of the ambient air rises, a two-stage evaporative cooler with a smaller direct and larger indirect cooler will be needed. In buildings with high cooling demand, thermal comfort may be achieved at a greater air change per hour number, and thus an expensive two-stage evaporative cooler with a higher electricity consumption would be required. Finally, a design guideline was proposed to determine the size of required plate heat exchangers at various operating conditions.

  19. A Two-Stage Exercise on the Binomial Distribution Using Minitab.

    Science.gov (United States)

    Shibli, M. Abdullah

    1990-01-01

    Describes a two-stage experiment that was designed to explain the binomial distribution to undergraduate statistics students. A manual coin-flipping exercise is explained as the first stage; a computerized simulation using MINITAB software is presented as stage two; and output from the MINITAB exercises is included. (two references) (LRW)
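
    The same two-stage idea, a small hand experiment followed by a larger computer simulation, can be reproduced today without MINITAB; the sketch below uses Python and NumPy purely as an illustrative stand-in for the MINITAB session described in the article.

    ```python
    # Two-stage binomial exercise: a small "hand" sample followed by a large
    # simulated sample, compared against the theoretical Binomial(n, p) pmf.
    # Python/NumPy is an assumed stand-in for the MINITAB commands in the article.
    import numpy as np
    from math import comb

    rng = np.random.default_rng(42)
    n, p = 10, 0.5                      # 10 coin flips per trial, fair coin

    # Stage 1: a handful of trials, as a class might do by hand.
    stage1 = rng.binomial(n, p, size=20)

    # Stage 2: a computerized simulation with many more trials.
    stage2 = rng.binomial(n, p, size=10_000)

    theoretical = {k: comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)}
    for k in range(n + 1):
        emp1 = np.mean(stage1 == k)
        emp2 = np.mean(stage2 == k)
        print(f"k={k:2d}  stage1={emp1:.3f}  stage2={emp2:.3f}  theory={theoretical[k]:.3f}")
    ```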

  20. The rearrangement process in a two-stage broadcast switching network

    DEFF Research Database (Denmark)

    Jacobsen, Søren B.

    1988-01-01

    The rearrangement process in the two-stage broadcast switching network presented by F.K. Hwang and G.W. Richards (ibid., vol.COM-33, no.10, p.1025-1035, Oct. 1985) is considered. By defining a certain function it is possible to calculate an upper bound on the number of connections to be moved...

  1. Two-stage laparoscopic resection of colon cancer and metastatic liver tumour

    Directory of Open Access Journals (Sweden)

    Yukio Iwashita

    2012-01-01

    Full Text Available We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both post-operative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

  2. Two-stage laparoscopic resection of colon cancer and metastatic liver tumour

    Directory of Open Access Journals (Sweden)

    Iwashita Yukio

    2005-01-01

    Full Text Available We report herein the case of a 70-year-old woman in whom colon cancer and a synchronous metastatic liver tumour were successfully resected laparoscopically. The tumours were treated in two stages. Both postoperative courses were uneventful, and there has been no recurrence during the 8 months since the second procedure.

  3. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining whereas the firms set wages in the non-union sector. In this model firms and unions of the union sector have a commonality of interest...

  4. Quantum stochastics

    CERN Document Server

    Chang, Mou-Hsiung

    2015-01-01

    The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, quantum Markov semigrou...

  5. Stochastic partial differential equations

    CERN Document Server

    Chow, Pao-Liu

    2014-01-01

    Preliminaries: Introduction; Some Examples; Brownian Motions and Martingales; Stochastic Integrals; Stochastic Differential Equations of Itô Type; Lévy Processes and Stochastic Integrals; Stochastic Differential Equations of Lévy Type; Comments. Scalar Equations of First Order: Introduction; Generalized Itô's Formula; Linear Stochastic Equations; Quasilinear Equations; General Remarks. Stochastic Parabolic Equations: Introduction; Preliminaries; Solution of Stochastic Heat Equation; Linear Equations with Additive Noise; Some Regularity Properties; Stochastic Reaction-Diffusion Equations; Parabolic Equations with Grad

  6. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. (Stanford Univ., CA (United States). Dept. of Operations Research Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft)

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
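
    The structure being exploited, a first-stage decision followed by a scenario-dependent recourse linear program, can be illustrated with a small sample-average sketch; the toy data, the plain Monte Carlo estimator (rather than the paper's importance sampling and decomposition), and the use of scipy.optimize.linprog are all assumptions made for illustration.

    ```python
    # Toy two-stage stochastic program with recourse, solved by brute force:
    # first-stage capacity x at unit cost c1; for each sampled demand d the
    # recourse LP buys the shortfall y at unit cost c2.  Plain Monte Carlo is
    # used here instead of the importance sampling / decomposition of the paper.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    c1, c2 = 1.0, 4.0                              # first- and second-stage unit costs
    demands = rng.normal(100.0, 20.0, size=200)    # sampled scenarios

    def recourse_cost(x, d):
        # min c2*y  s.t.  y >= d - x,  y >= 0   (a one-variable recourse LP)
        res = linprog(c=[c2], A_ub=[[-1.0]], b_ub=[-(d - x)],
                      bounds=[(0, None)], method="highs")
        return res.fun

    def expected_total_cost(x):
        return c1 * x + np.mean([recourse_cost(x, d) for d in demands])

    grid = np.linspace(60, 160, 21)
    costs = [expected_total_cost(x) for x in grid]
    best = grid[int(np.argmin(costs))]
    print(f"best first-stage decision ~ {best:.1f}, estimated cost ~ {min(costs):.1f}")
    ```

    For this newsvendor-like cost the optimum is of course available in closed form; the point of the sketch is only the two-stage structure, a here-and-now decision evaluated against many sampled recourse problems.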

  7. The Bracka two-stage repair for severe proximal hypospadias: A single center experience

    Directory of Open Access Journals (Sweden)

    Rakesh S Joshi

    2015-01-01

    Full Text Available Background: Surgical correction of severe proximal hypospadias represents a significant surgical challenge, and single-stage corrections are often associated with complications and reoperations. The Bracka two-stage repair is an attractive alternative surgical procedure with superior, reliable, and reproducible results. Purpose: To study the feasibility and applicability of the Bracka two-stage repair for severe proximal hypospadias and to analyze the outcomes and complications of this surgical technique. Materials and Methods: This prospective study was conducted from January 2011 to December 2013. Bracka two-stage repair was performed using inner preputial skin as a free graft in subjects with proximal hypospadias in whom a severe degree of chordee and/or a poor urethral plate was present. Only primary cases were included in this study. All subjects received three doses of intramuscular testosterone 3 weeks apart before the first stage. The second stage was performed 6 months after the first stage. Follow-up ranged from 6 months to 24 months. Results: A total of 43 patients were operated on for Bracka repair, out of which 30 patients completed the two-stage repair. The mean age of the patients was 4 years and 8 months. We achieved 100% graft uptake and no revision was required. Three patients developed a fistula, while two had meatal stenosis. Glans dehiscence, urethral stricture and residual chordee were not found during follow-up, and satisfactory cosmetic results with a good urinary stream were achieved in all cases. Conclusion: The Bracka two-stage repair is a safe and reliable approach in select patients in whom it is impractical to maintain the axial integrity of the urethral plate and a full-circumference urethral reconstruction therefore becomes necessary. It gives good results in terms of restoration of normal function with minimal complications.

  8. Optimisation of two-stage screw expanders for waste heat recovery applications

    Science.gov (United States)

    Read, M. G.; Smith, I. K.; Stosic, N.

    2015-08-01

    It has previously been shown that the use of two-phase screw expanders in power generation cycles can achieve an increase in the utilisation of available energy from a low temperature heat source when compared with more conventional single-phase turbines. However, screw expander efficiencies are more sensitive to expansion volume ratio than turbines, and this increases as the expander inlet vapour dryness fraction decreases. For single-stage screw machines with low inlet dryness, this can lead to under expansion of the working fluid and low isentropic efficiency for the expansion process. The performance of the cycle can potentially be improved by using a two-stage expander, consisting of a low pressure machine and a smaller high pressure machine connected in series. By expanding the working fluid over two stages, the built-in volume ratios of the two machines can be selected to provide a better match with the overall expansion process, thereby increasing efficiency for particular inlet and discharge conditions. The mass flow rate through both stages must however be matched, and the compromise between increasing efficiency and maximising power output must also be considered. This research uses a rigorous thermodynamic screw machine model to compare the performance of single and two-stage expanders over a range of operating conditions. The model allows optimisation of the required intermediate pressure in the two-stage expander, along with the rotational speed and built-in volume ratio of both screw machine stages. The results allow the two-stage machine to be fully specified in order to achieve maximum efficiency for a required power output.
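
    As a rough illustration of the intermediate-pressure choice discussed above, the sketch below treats each screw stage as a polytropic expander of an ideal gas and scans candidate intermediate pressures for the split whose stage volume ratios best match given built-in volume ratios; the ideal-gas/polytropic assumption and all numbers are illustrative, not the paper's rigorous thermodynamic screw-machine model.

    ```python
    # Illustrative split of an overall expansion between two screw stages.
    # Ideal-gas, polytropic behaviour is assumed (v ~ p^(-1/n)), which is a
    # simplification of the rigorous screw-machine model used in the paper.
    import numpy as np

    p_in, p_out = 10.0e5, 1.0e5      # inlet / discharge pressure [Pa] (assumed)
    n_poly = 1.1                     # polytropic exponent (assumed)
    vr_builtin_hp, vr_builtin_lp = 2.5, 3.5   # built-in volume ratios (assumed)

    def stage_volume_ratio(p1, p2):
        """Volume ratio required for a polytropic expansion from p1 to p2."""
        return (p1 / p2) ** (1.0 / n_poly)

    candidates = np.linspace(1.5e5, 8.0e5, 200)      # candidate intermediate pressures
    def mismatch(p_mid):
        # penalise under/over-expansion in both stages
        return (abs(stage_volume_ratio(p_in, p_mid) - vr_builtin_hp)
                + abs(stage_volume_ratio(p_mid, p_out) - vr_builtin_lp))

    best = min(candidates, key=mismatch)
    print(f"intermediate pressure ~ {best/1e5:.2f} bar, "
          f"HP ratio {stage_volume_ratio(p_in, best):.2f}, "
          f"LP ratio {stage_volume_ratio(best, p_out):.2f}")
    ```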

  9. Accessing Specific Peptide Recognition by Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Li, Ming

    Peptide Recognition by Combinatorial Chemistry". Molecular recognition is a specific interaction between two or more molecules through noncovalent bonding, such as hydrogen bonding, metal coordination, van der Waals forces, π−π, hydrophobic, or electrostatic interactions. The association involves kinetic... Combinatorial chemistry was invented in the 1980s based on observation of functional aspects of the adaptive immune system. It was employed for drug development and optimization in conjunction with high-throughput synthesis and screening. (chapter 2) Combinatorial chemistry is able to rapidly produce many thousands... was studied with this hook peptide library via the bead-bead adhesion screening approach. The recognition pairs interlocked and formed a complex. (chapter 8) During accessing peptide molecular recognition by combinatorial chemistry, we faced several problems, which were solved by a range of analytical...

  10. Combinatorial Discovery and Optimization of New Materials

    Institute of Scientific and Technical Information of China (English)

    Gao Chen; Zhang Xinyi; Yan Dongsheng

    2001-01-01

    The concept of the combinatorial discovery and optimization of new materials, and its background,importance, and application, as well as its current status in the world, are briefly reviewed in this paper.

  11. Conferences on Combinatorial and Additive Number Theory

    CERN Document Server

    2014-01-01

    This proceedings volume is based on papers presented at the Workshops on Combinatorial and Additive Number Theory (CANT), which were held at the Graduate Center of the City University of New York in 2011 and 2012. The goal of the workshops is to survey recent progress in combinatorial number theory and related parts of mathematics. The workshop attracts researchers and students who discuss the state-of-the-art, open problems, and future challenges in number theory.

  12. A product formula and combinatorial field theory

    CERN Document Server

    Horzela, A; Duchamp, G H E; Penson, K A; Solomon, A I

    2004-01-01

    We treat the problem of normally ordering expressions involving the standard boson operators a, a* where [a,a*]=1. We show that a simple product formula for formal power series - essentially an extension of the Taylor expansion - leads to a double exponential formula which enables a powerful graphical description of the generating functions of the combinatorial sequences associated with such functions - in essence, a combinatorial field theory. We apply these techniques to some examples related to specific physical Hamiltonians.

  13. Cubical version of combinatorial differential forms

    DEFF Research Database (Denmark)

    Kock, Anders

    2010-01-01

    The theory of combinatorial differential forms is usually presented in simplicial terms. We present here a cubical version; it depends on the possibility of forming affine combinations of mutual neighbour points in a manifold, in the context of synthetic differential geometry.

  14. Stochastic cooling

    Energy Technology Data Exchange (ETDEWEB)

    Bisognano, J.; Leemann, C.

    1982-03-01

    Stochastic cooling is the damping of betatron oscillations and momentum spread of a particle beam by a feedback system. In its simplest form, a pickup electrode detects the transverse positions or momenta of particles in a storage ring, and the signal produced is amplified and applied downstream to a kicker. The time delay of the cable and electronics is designed to match the transit time of particles along the arc of the storage ring between the pickup and kicker so that an individual particle receives the amplified version of the signal it produced at the pickup. If there were only a single particle in the ring, it is obvious that betatron oscillations and momentum offset could be damped. However, in addition to its own signal, a particle receives signals from other beam particles. In the limit of an infinite number of particles, no damping could be achieved; we have Liouville's theorem with constant density of the phase space fluid. For a finite, albeit large number of particles, there remains a residue of the single particle damping which is of practical use in accumulating low phase space density beams of particles such as antiprotons. It was the realization of this fact that led to the invention of stochastic cooling by S. van der Meer in 1968. Since its conception, stochastic cooling has been the subject of much theoretical and experimental work. The earliest experiments were performed at the ISR in 1974, with the subsequent ICE studies firmly establishing the stochastic cooling technique. This work directly led to the design and construction of the Antiproton Accumulator at CERN and the beginnings of p anti p colliding beam physics at the SPS. Experiments in stochastic cooling have been performed at Fermilab in collaboration with LBL, and a design is currently under development for an anti p accumulator for the Tevatron.

  15. HRI catalytic two-stage liquefaction (CTSL) process materials: chemical analysis and biological testing

    Energy Technology Data Exchange (ETDEWEB)

    Wright, C.W.; Later, D.W.

    1985-12-01

    This report presents data from the chemical analysis and biological testing of coal liquefaction materials obtained from the Hydrocarbon Research, Incorporated (HRI) catalytic two-stage liquefaction (CTSL) process. Materials from both an experimental run and a 25-day demonstration run were analyzed. Chemical methods of analysis included adsorption column chromatography, high-resolution gas chromatography, gas chromatography/mass spectrometry, low-voltage probe-inlet mass spectrometry, and proton nuclear magnetic resonance spectroscopy. The biological activity was evaluated using the standard microbial mutagenicity assay and an initiation/promotion assay for mouse-skin tumorigenicity. Where applicable, the results obtained from the analyses of the CTSL materials have been compared to those obtained from the integrated and nonintegrated two-stage coal liquefaction processes. 18 refs., 26 figs., 22 tabs.

  16. Two-stage precipitation process of iron and arsenic from acid leaching solutions

    Institute of Scientific and Technical Information of China (English)

    N.J.BOLIN; J.E.SUNDKVIST

    2008-01-01

    A leaching process for base metals recovery often generates considerable amounts of impurities such as iron and arsenic into the solution. It is a challenge to separate the non-valuable metals into manageable and stable waste products for final disposal, without losing the valuable constituents. Boliden Mineral AB has patented a two-stage precipitation process that gives a very clean iron-arsenic precipitate with a minimum of coprecipitation of base metals. The obtained product shows good sedimentation and filtration properties, which makes it easy to recover the iron-arsenic depleted solution by filtration and washing of the precipitate. Continuous bench-scale tests have been done, showing the excellent results achieved by the two-stage precipitation process.

  17. S-band gain-flattened EDFA with two-stage double-pass configuration

    Science.gov (United States)

    Fu, Hai-Wei; Xu, Shi-Chao; Qiao, Xue-Guang; Jia, Zhen-An; Liu, Ying-Gang; Zhou, Hong

    2011-11-01

    A gain-flattened S-band erbium-doped fiber amplifier (EDFA) using standard erbium-doped fiber (EDF) is proposed and experimentally demonstrated. The proposed amplifier with a two-stage double-pass configuration employs two C-band suppressing filters to obtain optical gain in the S-band. The amplifier provides a maximum signal gain of 41.6 dB at 1524 nm with a corresponding noise figure of 3.8 dB. Furthermore, with a well-designed short-pass filter as a gain flattening filter (GFF), we are able to develop an S-band EDFA with a flattened gain of more than 20 dB over 1504-1524 nm. In the experiment, the two-stage double-pass amplifier configuration improves the gain and noise figure performance compared with the single-stage double-pass S-band EDFA configuration.

  18. Power Frequency Oscillation Suppression Using Two-Stage Optimized Fuzzy Logic Controller for Multigeneration System

    Directory of Open Access Journals (Sweden)

    Y. K. Bhateshvar

    2016-01-01

    Full Text Available This paper attempts to develop a linearized model of automatic generation control (AGC) for an interconnected two-area reheat type thermal power system in a deregulated environment. A comparison between a genetic algorithm optimized PID controller (GA-PID), a particle swarm optimized PID controller (PSO-PID), and the proposed two-stage PSO-optimized fuzzy logic controller (TSO-FLC) is presented. The proposed fuzzy based controller is optimized at two stages: one is rule-base optimization and the other is scaling factor and gain factor optimization. The TSO-FLC shows the best dynamic response following a step load change with different cases of bilateral contracts in the deregulated environment. In addition, the performance of the proposed TSO-FLC is also examined for ±30% changes in system parameters with different types of contractual demands between control areas and compared with GA-PID and PSO-PID. MATLAB/Simulink® is used for all simulations.

  19. A two-stage scheme for multi-view human pose estimation

    Science.gov (United States)

    Yan, Junchi; Sun, Bing; Liu, Yuncai

    2010-08-01

    We present a two-stage scheme integrating voxel reconstruction and human motion tracking. By combining voxel reconstruction with human motion tracking interactively, our method can work in a cluttered background where perfect foreground silhouettes are hardly available. For each frame, a silhouette-based 3D volume reconstruction method and a hierarchical tracking algorithm are applied in two stages. In the first stage, coarse reconstruction and tracking results are obtained, and then the refinement for reconstruction is applied in the second stage. The experimental results demonstrate that our approach is promising. Although our method focuses on the problem of human body voxel reconstruction and motion tracking in this paper, our scheme can be used to reconstruct voxel data and infer the pose of many specified rigid and articulated objects.

  20. Toward Improving Electrocardiogram (ECG) Biometric Verification using Mobile Sensors: A Two-Stage Classifier Approach.

    Science.gov (United States)

    Tan, Robin; Perkowski, Marek

    2017-02-20

    Electrocardiogram (ECG) signals sensed from mobile devices have the potential for biometric identity recognition applicable in remote access control systems where enhanced data security is in demand. In this study, we propose a new algorithm that consists of a two-stage classifier combining random forest and wavelet distance measure through a probabilistic threshold schema, to improve the effectiveness and robustness of a biometric recognition system using ECG data acquired from a biosensor integrated into mobile devices. The proposed algorithm is evaluated using a mixed dataset from 184 subjects under different health conditions. The proposed two-stage classifier achieves a total of 99.52% subject verification accuracy, better than the 98.33% accuracy from random forest alone and 96.31% accuracy from the wavelet distance measure algorithm alone. These results demonstrate the superiority of the proposed algorithm for biometric identification, hence supporting its practicality in areas such as cloud data security, cyber-security or remote healthcare systems.
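
    A minimal sketch of the two-stage idea, a random forest whose confident decisions are accepted and whose uncertain ones fall through to a distance-based check, is given below; the toy features, the plain Euclidean template distance standing in for the paper's wavelet distance measure, and the threshold values are all assumptions for illustration.

    ```python
    # Two-stage verification sketch: stage 1 = random forest with a probability
    # threshold; stage 2 = template-distance check for low-confidence cases.
    # The Euclidean distance here is a stand-in for the paper's wavelet distance.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    # Assumed toy data: rows are fixed-length ECG beat feature vectors.
    X_train = rng.normal(size=(200, 40))
    y_train = rng.integers(0, 2, size=200)        # 1 = genuine subject, 0 = impostor
    X_test = rng.normal(size=(50, 40))

    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X_train, y_train)

    # Enrolment template for the claimed subject (mean of genuine training beats).
    template = X_train[y_train == 1].mean(axis=0)
    DIST_THRESHOLD = np.median(np.linalg.norm(X_train[y_train == 1] - template, axis=1))

    def verify(x, low=0.4, high=0.6):
        """Stage 1 accepts/rejects when the forest is confident; otherwise the
        decision falls through to the stage-2 distance check."""
        p_genuine = forest.predict_proba(x.reshape(1, -1))[0, 1]
        if p_genuine >= high:
            return True
        if p_genuine <= low:
            return False
        return np.linalg.norm(x - template) <= DIST_THRESHOLD

    decisions = [verify(x) for x in X_test]
    print(f"accepted {sum(decisions)} of {len(decisions)} test beats")
    ```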

  1. Effect of two-stage aging on superplasticity of Al-Li alloy

    Institute of Scientific and Technical Information of China (English)

    LUO Zhi-hui; ZHANG Xin-ming; DU Yu-xuan; YE Ling-ying

    2006-01-01

    The effect of two-stage aging on the microstructures and superplasticity of 01420 Al-Li alloy was investigated by means of OM, TEM analysis and stretching experiments. The results demonstrate that the second-phase particles are distributed more uniformly, with a larger volume fraction, after the two-stage aging (120 ℃, 12 h + 300 ℃, 36 h) compared with the single aging (300 ℃, 48 h). After rolling and recrystallization annealing, fine grains with a size of 8-10 μm are obtained, and the superplastic elongation of the specimens reaches 560% at a strain rate of 8×10^-4 s^-1 and 480 ℃. Uniformly distributed fine particles precipitate both on grain boundaries and in grains at the lower aging temperature. When the sheet is aged at the higher temperature, the particles become coarser with a large volume fraction.

  2. Two stage bioethanol refining with multi litre stacked microbial fuel cell and microbial electrolysis cell.

    Science.gov (United States)

    Sugnaux, Marc; Happe, Manuel; Cachelin, Christian Pierre; Gloriod, Olivier; Huguenin, Gérald; Blatter, Maxime; Fischer, Fabian

    2016-12-01

    Ethanol, electricity, hydrogen and methane were produced in a two-stage bioethanol refinery setup based on a 10 L microbial fuel cell (MFC) and a 33 L microbial electrolysis cell (MEC). The MFC was a triple stack for ethanol and electricity co-generation. The stack configuration produced more ethanol, with faster glucose consumption, the higher the stack potential. Under electrolytic conditions ethanol productivity outperformed standard conditions and reached 96.3% of the theoretical best case. At lower external loads, currents and working potentials oscillated in a self-synchronized manner over all three MFC units in the stack. In the second refining stage, fermentation waste was converted into methane using the scaled-up MEC stack. The bioelectric methanisation reached 91% efficiency at room temperature with an applied voltage of 1.5 V using nickel cathodes. The two-stage bioethanol refining process employing bioelectrochemical reactors produces more energy vectors than is possible with today's ethanol distilleries.

  3. HRI catalytic two-stage liquefaction (CTSL) process materials: chemical analysis and biological testing

    Energy Technology Data Exchange (ETDEWEB)

    Wright, C.W.; Later, D.W.

    1985-12-01

    This report presents data from the chemical analysis and biological testing of coal liquefaction materials obtained from the Hydrocarbon Research, Incorporated (HRI) catalytic two-stage liquefaction (CTSL) process. Materials from both an experimental run and a 25-day demonstration run were analyzed. Chemical methods of analysis included adsorption column chromatography, high-resolution gas chromatography, gas chromatography/mass spectrometry, low-voltage probe-inlet mass spectrometry, and proton nuclear magnetic resonance spectroscopy. The biological activity was evaluated using the standard microbial mutagenicity assay and an initiation/promotion assay for mouse-skin tumorigenicity. Where applicable, the results obtained from the analyses of the CTSL materials have been compared to those obtained from the integrated and nonintegrated two-stage coal liquefaction processes. 18 refs., 26 figs., 22 tabs.

  4. Performance measurement of insurance firms using a two-stage DEA method

    Directory of Open Access Journals (Sweden)

    Raha Jalili Sabet

    2013-01-01

    Full Text Available Measuring the relative performance of insurance firms plays an important role in this industry. In this paper, we present a two-stage data envelopment analysis to measure the performance of insurance firms that were active over the period 2006-2010. The proposed study performs the DEA method in two stages, where the first stage considers five inputs and three outputs, while the second stage uses the outputs of the first stage as its inputs and considers three different outputs. The results of our survey indicate that while there were 4 efficient insurance firms, most other insurers were noticeably inefficient. This means the market was monopolized mostly by a limited number of insurance firms and competition was not fair enough to let other firms participate in the economy more efficiently.

  5. Direct Torque Control of Sensorless Induction Machine Drives: A Two-Stage Kalman Filter Approach

    Directory of Open Access Journals (Sweden)

    Jinliang Zhang

    2015-01-01

    Full Text Available The extended Kalman filter (EKF) has been widely applied for sensorless direct torque control (DTC) in induction machines (IMs). One key problem associated with the EKF is that the estimator suffers from computational burden and numerical problems resulting from high-order mathematical models. To reduce the computational cost, a two-stage extended Kalman filter (TEKF) based solution is presented in this paper for closed-loop stator flux, speed, and torque estimation of an IM to achieve sensorless DTC-SVM operation. The novel observer can be derived similarly to the optimal two-stage Kalman filter (TKF), which has been proposed by several researchers. Compared to a straightforward implementation of a conventional EKF, the TEKF estimator reduces the number of arithmetic operations. Simulation and experimental results verify the performance of the proposed TEKF estimator for DTC of IMs.
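
    For readers unfamiliar with the filter family referred to above, the following is a minimal sketch of a plain linear Kalman filter predict/update step; it is a generic textbook form used only to fix notation, not the TEKF of the paper, and the toy system matrices are assumptions.

    ```python
    # Generic linear Kalman filter predict/update (textbook form).  The paper's
    # TEKF splits the estimation into two stages to cut the arithmetic cost of a
    # full EKF; that splitting is not reproduced here.
    import numpy as np

    # Assumed toy system: constant-velocity model, position measured with noise.
    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                # measurement matrix
    Q = 1e-4 * np.eye(2)                      # process noise covariance
    R = np.array([[1e-2]])                    # measurement noise covariance

    def kf_step(x, P, z):
        # predict
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # update
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new

    x, P = np.zeros(2), np.eye(2)
    for z in [np.array([0.10]), np.array([0.12]), np.array([0.15])]:
        x, P = kf_step(x, P, z)
    print("state estimate:", x)
    ```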

  6. Syme's two-stage amputation in insulin-requiring diabetics with gangrene of the forefoot.

    Science.gov (United States)

    Pinzur, M S; Morrison, C; Sage, R; Stuck, R; Osterman, H; Vrbos, L

    1991-06-01

    Thirty-five insulin-requiring adult diabetic patients underwent 38 Syme's Two-Stage amputations for gangrene of the forefoot with nonreconstructible peripheral vascular insufficiency. All had a minimum Doppler ischemic index of 0.5, serum albumin of 3.0 gm/dl, and total lymphocyte count of 1500. Thirty-one (81.6%) eventually healed and were uneventfully fit with a prosthesis. Regional anesthesia was used in all of the patients, with 22 spinal and 16 ankle block anesthetics. Twenty-seven (71%) returned to their preamputation level of ambulatory function. Six (16%) had major, and fifteen (39%) minor complications following the first stage surgery. The results of this study support the use of the Syme's Two-Stage amputation in adult diabetic patients with gangrene of the forefoot requiring amputation.

  7. Low-noise SQUIDs with large transfer: two-stage SQUIDs based on DROSs

    Science.gov (United States)

    Podt, M.; Flokstra, J.; Rogalla, H.

    2002-08-01

    We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10^5 Φ0/s and the measured white flux noise was 1.3 μΦ0/√Hz at 4.2 K. The system is based on a conventional dc SQUID with a double relaxation oscillation SQUID (DROS) as the second stage. Because of the large flux-to-voltage transfer, the sensitivity of the system is completely determined by the sensor SQUID and not by the DROS or the room-temperature preamplifier. Decreasing the Josephson junction area enables a further improvement of the sensitivity of the two-stage SQUID systems.

  8. Interval estimation of binomial proportion in clinical trials with a two-stage design.

    Science.gov (United States)

    Tsai, Wei-Yann; Chi, Yunchan; Chen, Chia-Min

    2008-01-15

    Generally, a two-stage design is employed in Phase II clinical trials to avoid giving patients an ineffective drug. If the number of patients with significant improvement, which is a binomial response, is greater than a pre-specified value at the first stage, then another binomial response at the second stage is also observed. This paper considers interval estimation of the response probability when the second stage is allowed to continue. Two asymptotic interval estimators, Wald and score, as well as two exact interval estimators, Clopper-Pearson and Sterne, are constructed according to the two binomial responses from this two-stage design, where the binomial response at the first stage follows a truncated binomial distribution. The mean actual coverage probability and expected interval width are employed to evaluate the performance of these interval estimators. According to the comparison results, the score interval is recommended for both Simon's optimal and minimax designs.
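
    To fix ideas, the two asymptotic intervals compared in the paper can be sketched for an ordinary single-stage binomial sample as below; the adjustment for the truncated binomial of the first stage of a two-stage design is not reproduced here, and scipy.stats.norm is used only for the normal quantile.

    ```python
    # Wald and (Wilson) score intervals for a binomial proportion, single-sample
    # versions only; the paper's estimators additionally condition on the
    # two-stage design (truncated binomial at stage 1).
    from math import sqrt
    from scipy.stats import norm

    def wald_interval(x, n, alpha=0.05):
        z = norm.ppf(1 - alpha / 2)
        p = x / n
        half = z * sqrt(p * (1 - p) / n)
        return max(0.0, p - half), min(1.0, p + half)

    def score_interval(x, n, alpha=0.05):
        z = norm.ppf(1 - alpha / 2)
        p = x / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return centre - half, centre + half

    print(wald_interval(12, 40))    # e.g. 12 responders out of 40 patients
    print(score_interval(12, 40))
    ```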

  9. Experiment and surge analysis of centrifugal two-stage turbocharging system

    Institute of Scientific and Technical Information of China (English)

    Yituan HE; Chaochen MA

    2008-01-01

    To study the surge of a centrifugal two-stage turbocharging system and its influencing factors, a special test bench was set up and a system surge test was performed. The test results indicate that measured parameters such as the air mass flow and rotation speed of the high pressure (HP) stage compressor can be converted into corrected parameters under a standard condition according to the Mach number similarity criterion, because the air flow in the HP stage compressor has entered the Reynolds number (Re) auto-modeling range. Accordingly, the reasons leading to a two-stage turbocharging system's surge can be analyzed according to the corrected mass flow characteristic maps and actual operating conditions of the HP and low pressure (LP) stage compressors.

  10. Two-staged management for all types of congenital pouch colon

    Directory of Open Access Journals (Sweden)

    Rajendra K Ghritlaharey

    2013-01-01

    Full Text Available Background: The aim of this study was to review our experience with two-staged management for all types of congenital pouch colon (CPC). Patients and Methods: This retrospective study included CPC cases that were managed with two-staged procedures in the Department of Paediatric Surgery over a period of 12 years, from 1 January 2000 to 31 December 2011. Results: CPC comprised 13.71% (97 of 707) of all anorectal malformations (ARM) and 28.19% (97 of 344) of high ARM. Eleven CPC cases (all males) were managed with two-staged procedures. The distribution of cases (Narsimha Rao et al.'s classification) into types I, II, III, and IV was 1, 2, 6, and 2, respectively. The initial operative procedures performed were window colostomy (n = 6), colostomy proximal to the pouch (n = 4), and ligation of the colovesical fistula with end colostomy (n = 1). As definitive procedures, pouch excision with abdomino-perineal pull through (APPT) of colon was performed in eight cases, and pouch excision with APPT of ileum in three. The mean age at the time of the definitive procedure was 15.6 months (range 3 to 53 months) and the mean weight was 7.5 kg (range 4 to 11 kg). Good fecal continence was observed in six and fair in two cases during follow-up, while three of our cases were lost to follow-up. There was no mortality following the definitive procedures among the above 11 cases. Conclusions: Two-staged procedures for all types of CPC can be performed safely with good results. Most importantly, the definitive procedure is done without a protective stoma; it therefore avoids stoma closure, stoma-related complications, and the cost and hospital stay associated with stoma closure.

  11. Hybrid staging of a Lysholm positive displacement engine with two Westinghouse two stage impulse Curtis turbines

    Energy Technology Data Exchange (ETDEWEB)

    Parker, D.A.

    1982-06-01

    The University of California at Berkeley has tested and satisfactorily modeled a hybrid-staged Lysholm engine (positive displacement) with a two-stage Curtis wheel turbine. The system operates in a stable manner over its operating range (0/1-3/1 water ratio, 120 psia input). Proposals are made for controlling interstage pressure with a partial admission turbine and volume expansion to control mass flow and pressure ratio for the Lysholm engine.

  12. Noncausal two-stage image filtration at presence of observations with anomalous errors

    Directory of Open Access Journals (Sweden)

    S. V. Vishnevyy

    2013-04-01

    Full Text Available Introduction. For the filtration of images that contain regions with anomalous errors, it is necessary to develop adaptive algorithms that detect such regions and apply a filter with appropriate parameters to suppress the anomalous noise. Development of an adaptive algorithm for noncausal two-stage image filtration in the presence of observations with anomalous errors. An adaptive algorithm for noncausal two-stage filtration is developed. In the first stage, an adaptive one-dimensional causal filtration algorithm is used to process the rows and columns of the image independently. In the second stage, the obtained data are combined and a posteriori estimates are calculated. Results of experimental investigations. The developed adaptive algorithm for noncausal image filtration in the presence of observations with anomalous errors is investigated on a model sample by means of statistical modeling on a PC. The image is modeled as a realization of a Gaussian-Markov random field and corrupted with uncorrelated Gaussian noise. Regions of the image with anomalous errors are corrupted with uncorrelated Gaussian noise of higher power than the normal noise on the rest of the image. Conclusions. The analysis of the adaptive algorithm for noncausal two-stage filtration is carried out. The accuracy characteristics of the computed estimates are shown. The first and second stages of the developed adaptive algorithm are compared, and the adaptive algorithm is compared with a known uniform two-stage image filtration algorithm. According to the obtained results, the uniform algorithm does not suppress anomalous noise, whereas the adaptive algorithm shows good results.
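
    A highly simplified sketch of the two-stage structure (independent row-wise and column-wise 1-D filtering, then a combination of the two estimates) is given below; the exponential smoothing filter, the plain averaging used for the second stage, and the noise model are assumptions, not the authors' adaptive a posteriori estimator.

    ```python
    # Simplified two-stage (row/column) image filtration sketch.  Stage 1 runs a
    # causal 1-D exponential smoother along rows and along columns independently;
    # stage 2 combines the two estimates (here a plain average, standing in for
    # the a posteriori combination of the paper).
    import numpy as np

    def causal_smooth_1d(x, alpha=0.3):
        y = np.empty_like(x, dtype=float)
        y[0] = x[0]
        for k in range(1, len(x)):
            y[k] = alpha * x[k] + (1 - alpha) * y[k - 1]
        return y

    def two_stage_filter(img, alpha=0.3):
        rows = np.apply_along_axis(causal_smooth_1d, 1, img, alpha)   # stage 1a
        cols = np.apply_along_axis(causal_smooth_1d, 0, img, alpha)   # stage 1b
        return 0.5 * (rows + cols)                                    # stage 2

    rng = np.random.default_rng(0)
    clean = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    noisy = clean + rng.normal(0, 0.05, clean.shape)
    noisy[20:30, 20:30] += rng.normal(0, 0.5, (10, 10))   # region with anomalous errors
    filtered = two_stage_filter(noisy)
    print("MSE noisy   :", np.mean((noisy - clean) ** 2))
    print("MSE filtered:", np.mean((filtered - clean) ** 2))
    ```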

  13. Full noise characterization of a low-noise two-stage SQUID amplifier

    Energy Technology Data Exchange (ETDEWEB)

    Falferi, P [Istituto di Fotonica e Nanotecnologie, CNR-Fondazione Bruno Kessler, 38100 Povo, Trento (Italy); Mezzena, R [INFN, Gruppo Collegato di Trento, Sezione di Padova, 38100 Povo, Trento (Italy); Vinante, A [INFN, Sezione di Padova, 35131 Padova (Italy)], E-mail: falferi@science.unitn.it

    2009-07-15

    From measurements performed on a low-noise two-stage SQUID amplifier coupled to a high-Q electrical resonator we give a complete noise characterization of the SQUID amplifier around the resonator frequency of 11 kHz in terms of additive, back action and cross-correlation noise spectral densities. The minimum noise temperature evaluated at 135 mK is 10 μK and corresponds to an energy resolution of 18ℏ.

  14. A covariate adjusted two-stage allocation design for binary responses in randomized clinical trials.

    Science.gov (United States)

    Bandyopadhyay, Uttam; Biswas, Atanu; Bhattacharya, Rahul

    2007-10-30

    In the present work, we develop a two-stage allocation rule for binary responses using the log-odds ratio within the Bayesian framework, allowing the current allocation to depend on the covariate value of the current subject. We study, both numerically and theoretically, several exact and limiting properties of this design. The applicability of the proposed methodology is illustrated using a data set. We compare this rule with some of the existing rules by computing various performance measures.

  15. Development of a Novel Type Catalyst SY-2 for Two-Stage Hydrogenation of Pyrolysis Gasoline

    Institute of Scientific and Technical Information of China (English)

    Wu Linmei; Zhang Xuejun; Zhang Zhihua; Wang Fucun

    2004-01-01

    By using group ⅢB or group ⅦB metals and modulating the characteristics of electric charges on the carrier surface, and by improving the catalyst preparation process and the techniques for loading the active metal components, a novel SY-2 catalyst earmarked for two-stage hydrogenation of pyrolysis gasoline has been developed. The catalyst evaluation results indicate that the novel catalyst is characterized by better hydrogenation activity, giving a higher aromatics yield.

  16. Investigation on a two-stage solvay refrigerator with magnetic material regenerator

    Science.gov (United States)

    Chen, Guobang; Zheng, Jianyao; Zhang, Fagao; Yu, Jianping; Tao, Zhenshi; Ding, Cenyu; Zhang, Liang; Wu, Peiyi; Long, Yi

    This paper describes experimental results showing that the no-load temperature of a two-stage Solvay refrigerator has been lowered into the liquid helium temperature region from the original 11.5 K by using magnetic regenerative material instead of lead. The structure and technological characteristics of the prototype machine are presented. The effects of operating frequency and pressure on the refrigerating temperature are discussed in this paper.

  17. Biological hydrogen production from olive mill wastewater with two-stage processes

    Energy Technology Data Exchange (ETDEWEB)

    Eroglu, Ela; Eroglu, Inci [Department of Chemical Engineering, Middle East Technical University, 06531, Ankara (Turkey); Guenduez, Ufuk; Yuecel, Meral [Department of Biology, Middle East Technical University, 06531, Ankara (Turkey); Tuerker, Lemi [Department of Chemistry, Middle East Technical University, 06531, Ankara (Turkey)

    2006-09-15

    In the present work two novel two-stage hydrogen production processes from olive mill wastewater (OMW) have been introduced. The first two-stage process involved dark-fermentation followed by a photofermentation process. Dark-fermentation by activated sludge cultures and photofermentation by Rhodobacter sphaeroides O.U.001 were both performed in 55 ml glass vessels, under anaerobic conditions. In some cases of dark-fermentation, activated sludge was initially acclimatized to the OMW to provide the adaptation of microorganisms to the extreme conditions of OMW. The highest hydrogen production potential obtained was 29 L(H2)/L(OMW) after photofermentation with 50% (v/v) effluent of dark fermentation with activated sludge. Photofermentation with 50% (v/v) effluent of dark fermentation with acclimated activated sludge had the highest hydrogen production rate (0.008 L L^-1 h^-1). The second two-stage process involved a clay treatment step followed by photofermentation by R. sphaeroides O.U.001. Photofermentation with the effluent of the clay pretreatment process (4% (v/v)) gives the highest hydrogen production potential (35 L(H2)/L(OMW)), light conversion efficiency (0.42%) and COD conversion efficiency (52%). It was concluded that both pretreatment processes enhanced the photofermentative hydrogen production process. Moreover, hydrogen could be produced with highly concentrated OMW. Two-stage processes developed in the present investigation have a high potential for solving the environmental problems caused by OMW. (author)

  18. The two-stage Aegean extension, from localized to distributed, a result of slab rollback acceleration

    OpenAIRE

    Brun, Jean-Pierre; Faccenna, Claudio; Gueydan, Frédéric; Sokoutis, Dimitrios; Philippon, Mélody; Kydonakis, Konstantinos; Gorini, Christian

    2016-01-01

    Back-arc extension in the Aegean, which was driven by slab rollback since 45 Ma, is described here for the first time in two stages. From Middle Eocene to Middle Miocene, deformation was localized leading to i) the exhumation of high-pressure metamorphic rocks to crustal depths, ii) the exhumation of high-temperature metamorphic rocks in core complexes and iii) the deposition of sedimentary basins. Since Middle Miocene, extension distributed over the whole Aegean domai...

  19. A Two-stage Discriminating Framework for Making Supply Chain Operation Decisions under Uncertainties

    OpenAIRE

    Gu, H; Rong, G

    2010-01-01

    This paper addresses the problem of making supply chain operation decisions for refineries under two types of uncertainty: demand uncertainty and incomplete information shared with suppliers and transport companies. Most of the literature focuses only on one uncertainty or treats multiple uncertainties identically. However, we note that in the real world refineries have more power to control uncertainties in procurement and transportation than in demand. Thus, a two-stage framework for dealing wit...

  20. Low-noise SQUIDs with large transfer: two-stage SQUIDs based on DROSs

    NARCIS (Netherlands)

    Podt, M.; Flokstra, Jakob; Rogalla, Horst

    2002-01-01

    We have realized a two-stage integrated superconducting quantum interference device (SQUID) system with a closed loop bandwidth of 2.5 MHz, operated in a direct voltage readout mode. The corresponding flux slew rate was 1.3×10^5 Φ0/s and the measured white flux noise was 1.3 μΦ0/√Hz at 4.2 K. The

  1. Latent Inhibition as a Function of US Intensity in a Two-Stage CER Procedure

    Science.gov (United States)

    Rodriguez, Gabriel; Alonso, Gumersinda

    2004-01-01

    An experiment is reported in which the effect of unconditioned stimulus (US) intensity on latent inhibition (LI) was examined, using a two-stage conditioned emotional response (CER) procedure in rats. A tone was used as the pre-exposed and conditioned stimulus (CS), and a foot-shock of either a low (0.3 mA) or high (0.7 mA) intensity was used as…

  2. Two stage dual gate MESFET monolithic gain control amplifier for Ka-band

    Science.gov (United States)

    Sokolov, V.; Geddes, J.; Contolatis, A.

    A monolithic two stage gain control amplifier has been developed using submicron gate length dual gate MESFETs fabricated on ion implanted material. The amplifier has a gain of 12 dB at 30 GHz with a gain control range of over 30 dB. This ion implanted monolithic IC is readily integrable with other phased array receiver functions such as low noise amplifiers and phase shifters.

  3. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Science.gov (United States)

    Kılıç, Bayram

    2012-07-01

    In this study, exergy analyses of vapor compression refrigeration cycle with two-stage and intercooler using refrigerants R507, R407c, R404a were carried out. The necessary thermodynamic values for analyses were calculated by Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system in the different operating conditions for these refrigerants were investigated. The coefficient of performance, exergetic efficiency and total irreversibility rate for alternative refrigerants were compared.
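
    As a reminder of the quantities being compared for the alternative refrigerants, the standard definitions below are assumed here (textbook forms, not reproduced from the paper):

    ```latex
    % Assumed standard definitions for a two-stage vapor compression cycle:
    \mathrm{COP} = \frac{\dot{Q}_{\mathrm{evap}}}{\dot{W}_{\mathrm{comp,LP}} + \dot{W}_{\mathrm{comp,HP}}},
    \qquad
    \eta_{\mathrm{ex}} = \mathrm{COP}\,\frac{T_{0} - T_{\mathrm{evap}}}{T_{\mathrm{evap}}},
    \qquad
    \dot{I}_{\mathrm{total}} = T_{0}\,\dot{S}_{\mathrm{gen}},
    ```

    where T_0 is the dead-state (ambient) temperature, T_evap the evaporating temperature, and the Gouy-Stodola relation links the total irreversibility rate to the entropy generation rate.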

  4. Exergy analysis of vapor compression refrigeration cycle with two-stage and intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Kilic, Bayram [Mehmet Akif Ersoy University, Bucak Emin Guelmez Vocational School, Bucak, Burdur (Turkey)

    2012-07-15

    In this study, exergy analyses of vapor compression refrigeration cycle with two-stage and intercooler using refrigerants R507, R407c, R404a were carried out. The necessary thermodynamic values for analyses were calculated by Solkane program. The coefficient of performance, exergetic efficiency and total irreversibility rate of the system in the different operating conditions for these refrigerants were investigated. The coefficient of performance, exergetic efficiency and total irreversibility rate for alternative refrigerants were compared. (orig.)

  5. Performance of Combined Water Turbine Darrieus-Savonius with Two Stage Savonius Buckets and Single Deflector

    OpenAIRE

    Sahim, Kaprawi; Santoso, Dyos; Sipahutar, Riman

    2016-01-01

    The objective of this study is to show the effect of a single deflector plate on the performance of a combined Darrieus-Savonius water turbine. In order to overcome the disadvantage of the low torque of a solo Darrieus turbine, a plate deflector mounted in front of the returning Savonius bucket of a combined water turbine composed of Darrieus and Savonius rotors has been proposed in this study. Some configurations of combined turbines with two-stage Savonius rotors were experimentally tested in a river of c...

  6. Perceived Health Benefits and Soy Consumption Behavior: Two-Stage Decision Model Approach

    OpenAIRE

    Moon, Wanki; Balasubramanian, Siva K.; Rimal, Arbindra

    2005-01-01

    A two-stage decision model is developed to assess the effect of perceived soy health benefits on consumers' decisions with respect to soy food. The first stage captures whether or not to consume soy food, while the second stage reflects how often to consume. A conceptual/analytical framework is also employed, combining Lancaster's characteristics model and Fishbein's multi-attribute model. Results show that perceived soy health benefits significantly influence both decision stages. Further, c...

  7. Stochastic thermodynamics

    Science.gov (United States)

    Eichhorn, Ralf; Aurell, Erik

    2014-04-01

    'Stochastic thermodynamics as a conceptual framework combines the stochastic energetics approach introduced a decade ago by Sekimoto [1] with the idea that entropy can consistently be assigned to a single fluctuating trajectory [2]'. This quote, taken from Udo Seifert's [3] 2008 review, nicely summarizes the basic ideas behind stochastic thermodynamics: for small systems, driven by external forces and in contact with a heat bath at a well-defined temperature, stochastic energetics [4] defines the exchanged work and heat along a single fluctuating trajectory and connects them to changes in the internal (system) energy by an energy balance analogous to the first law of thermodynamics. Additionally, providing a consistent definition of trajectory-wise entropy production gives rise to second-law-like relations and forms the basis for a 'stochastic thermodynamics' along individual fluctuating trajectories. In order to construct meaningful concepts of work, heat and entropy production for single trajectories, their definitions are based on the stochastic equations of motion modeling the physical system of interest. Because of this, they are valid even for systems that are prevented from equilibrating with the thermal environment by external driving forces (or other sources of non-equilibrium). In that way, the central notions of equilibrium thermodynamics, such as heat, work and entropy, are consistently extended to the non-equilibrium realm. In the (non-equilibrium) ensemble, the trajectory-wise quantities acquire distributions. General statements derived within stochastic thermodynamics typically refer to properties of these distributions, and are valid in the non-equilibrium regime even beyond the linear response. The extension of statistical mechanics and of exact thermodynamic statements to the non-equilibrium realm has been discussed from the early days of statistical mechanics more than 100 years ago. This debate culminated in the development of linear response
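
    The trajectory-level balances referred to in the quote can be written compactly as follows; these are the standard textbook relations of stochastic thermodynamics, stated here for orientation rather than quoted from the review.

    ```latex
    % First-law-like balance and entropy production along a single trajectory
    % (standard forms; q is the heat absorbed from the bath at temperature T):
    \Delta u = w + q,
    \qquad
    \Delta s_{\mathrm{tot}} = \Delta s_{\mathrm{sys}} + \Delta s_{\mathrm{med}},
    \qquad
    \Delta s_{\mathrm{med}} = -\frac{q}{T},
    \qquad
    \left\langle e^{-\Delta s_{\mathrm{tot}}/k_{B}} \right\rangle = 1
    \;\Rightarrow\;
    \langle \Delta s_{\mathrm{tot}} \rangle \ge 0 .
    ```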

  8. High quantum efficiency mid-wavelength interband cascade infrared photodetectors with one and two stages

    Science.gov (United States)

    Zhou, Yi; Chen, Jianxin; Xu, Zhicheng; He, Li

    2016-08-01

    In this paper, we report on mid-wavelength infrared interband cascade photodetectors grown on InAs substrates. We studied the transport properties of the photon-generated carriers in the interband cascade structures by comparing two different detectors, a single-stage detector and a two-stage cascade detector. The two-stage device showed a quantum efficiency of around 19.8% at room temperature, and a clear optical response was measured even at a temperature of 323 K. The two detectors showed similar Johnson-noise limited detectivity. The peak detectivity of the one- and two-stage devices was measured to be 2.15 × 10^14 cm·Hz^(1/2)/W and 2.19 × 10^14 cm·Hz^(1/2)/W at 80 K, and 1.21 × 10^9 cm·Hz^(1/2)/W and 1.23 × 10^9 cm·Hz^(1/2)/W at 300 K, respectively. The 300 K background limited infrared performance (BLIP) operation temperature is estimated to be over 140 K.

  9. Development of Two-Stage Stirling Cooler for ASTRO-F

    Science.gov (United States)

    Narasaki, K.; Tsunematsu, S.; Ootsuka, K.; Kyoya, M.; Matsumoto, T.; Murakami, H.; Nakagawa, T.

    2004-06-01

    A two-stage small Stirling cooler has been developed and tested for the infrared astronomical satellite ASTRO-F that is planned to be launched by Japanese M-V rocket in 2005. ASTRO-F has a hybrid cryogenic system that is a combination of superfluid liquid helium (HeII) and two-stage Stirling coolers. The mechanical cooler has a two-stage displacer driven by a linear motor in a cold head and a new linear-ball-bearing system for the piston-supporting structure in a compressor. The linear-ball-bearing supporting system achieves the piston clearance seal, the long piston-stroke operation and the low frequency operation. The typical cooling power is 200 mW at 20 K and the total input power to the compressor and the cold head is below 90 W without driver electronics. The engineering, the prototype and the flight models of the cooler have been fabricated and evaluated to verify the capability for ASTRO-F. This paper describes the design of the cooler and the results from verification tests including cooler performance test, thermal vacuum test, vibration test and lifetime test.

  10. Performance analysis of RDF gasification in a two stage fluidized bed-plasma process.

    Science.gov (United States)

    Materazzi, M; Lettieri, P; Taylor, R; Chapman, C

    2016-01-01

    The major technical problems faced by stand-alone fluidized bed gasifiers (FBG) for waste-to-gas applications are intrinsically related to the composition and physical properties of waste materials such as RDF. The high quantity of ash and volatile material in RDF can decrease thermal output, cause severe ash clinkering, and increase emissions of tars and CO2, thus affecting the operability for clean syngas generation at industrial scale. By contrast, a two-stage process which separates primary gasification from selective tar and ash conversion would be inherently more forgiving and stable. This can be achieved with the use of a separate plasma converter, which has been successfully used in conjunction with conventional thermal treatment units for its ability to 'polish' the producer gas by converting organic contaminants and collecting the inorganic fraction in a molten (and inert) state. This research focused on the performance analysis of a two-stage fluid bed gasification-plasma process to transform solid waste into clean syngas. Thermodynamic assessment using the two-stage equilibrium method was carried out to determine optimum conditions for the gasification of RDF and to understand the limitations and influence of the second stage on the process performance (gas heating value, cold gas efficiency, carbon conversion efficiency), along with other parameters. A comparison with a different thermal refining stage, i.e. thermal cracking (via partial oxidation), was also performed. The analysis is supported by experimental data from a pilot plant.

  11. Continuous removal of endocrine disruptors by versatile peroxidase using a two-stage system.

    Science.gov (United States)

    Taboada-Puig, Roberto; Lu-Chau, Thelmo A; Eibes, Gemma; Feijoo, Gumersindo; Moreira, Maria T; Lema, Juan M

    2015-01-01

    The oxidant Mn(3+)-malonate, generated by the ligninolytic enzyme versatile peroxidase in a two-stage system, was used for the continuous removal of endocrine disrupting compounds (EDCs) from synthetic and real wastewaters. One plasticizer (bisphenol-A), one bactericide (triclosan) and three estrogenic compounds (estrone, 17β-estradiol, and 17α-ethinylestradiol) were removed from wastewater at degradation rates in the range of 28-58 µg/L·min, with low enzyme inactivation. First, the optimization of three main parameters affecting the generation of Mn(3+)-malonate (hydraulic retention time as well as Na-malonate and H2O2 feeding rates) was conducted following a response surface methodology (RSM). Under optimal conditions, the degradation of the EDCs was proven at both high (1.3-8.8 mg/L) and environmental (1.2-6.1 µg/L) concentrations. Finally, when the two-stage system was compared with a conventional enzymatic membrane reactor (EMR) using the same enzyme, a 14-fold increase in removal efficiency was observed. At the same time, operational problems found during EDC removal in the EMR system (e.g., clogging of the membrane and enzyme inactivation) were avoided by physically separating the stages of complex formation and pollutant oxidation, allowing the system to be operated for a longer period (∼8 h). This study demonstrates the feasibility of the two-stage enzymatic system for removing EDCs at both high and environmental concentrations.

  12. A two-stage Stirling-type pulse tube cryocooler with a cold inertance tube

    Science.gov (United States)

    Gan, Z. H.; Fan, B. Y.; Wu, Y. Z.; Qiu, L. M.; Zhang, X. J.; Chen, G. B.

    2010-06-01

    A thermally coupled two-stage Stirling-type pulse tube cryocooler (PTC) with inertance tubes as phase shifters has been designed, manufactured and tested. In order to obtain a larger phase shift at the low acoustic power of about 2.0 W, a cold inertance tube and a cold reservoir for the second stage, precooled by the cold end of the first stage, were introduced into the system. The transmission line model was used to calculate the phase shift produced by the cold inertance tube. The effects of regenerator material, geometry and charging pressure on the performance of the second stage of the two-stage PTC were investigated based on the well-known regenerator model REGEN. Experiments on the two-stage PTC were carried out with an emphasis on the performance of the second stage. A lowest cooling temperature of 23.7 K and a cooling power of 0.50 W at 33.9 K were obtained with an input electric power of 150.0 W and an operating frequency of 40 Hz.

  13. Rehabilitation outcomes in patients with early and two-stage reconstruction of flexor tendon injuries.

    Science.gov (United States)

    Sade, Ilgin; İnanir, Murat; Şen, Suzan; Çakmak, Esra; Kablanoğlu, Serkan; Selçuk, Barin; Dursun, Nigar

    2016-08-01

    [Purpose] The primary aim of this study was to assess rehabilitation outcomes for early and two-stage repair of hand flexor tendon injuries. The secondary purpose was to compare the findings between treatment groups. [Subjects and Methods] Twenty-three patients were included in this study. Early repair (n=14) and two-stage repair (n=9) groups were included in a rehabilitation program that used hand splints. This retrospective study evaluated patients according to their demographic characteristics, including age, gender, injured hand, dominant hand, cause of injury, zone of injury, number of affected fingers, and accompanying injuries. Pain, range of motion, and grip strength were evaluated using a visual analog scale, goniometer, and dynamometer, respectively. [Results] Both groups showed significant improvements in pain and finger flexion after treatment compared with baseline measurements. However, no significant differences were observed between the two treatment groups. Similar results were obtained for grip strength and pinch grip, whereas gross grip was better in the early tendon repair group. [Conclusion] Early and two-stage reconstruction of flexor tendon injuries can be performed with similarly favorable outcomes when combined with effective rehabilitation programs.

  14. A Comparison of Direct and Two-Stage Transportation of Patients to Hospital in Poland

    Directory of Open Access Journals (Sweden)

    Anna Rosiek

    2015-04-01

    Full Text Available Background: The rapid international expansion of telemedicine reflects the growth of technological innovations. This technological advancement is transforming the way in which patients can receive health care. Materials and Methods: The study was conducted in Poland, at the Department of Cardiology of the Regional Hospital of Louis Rydygier in Torun. The researchers analyzed the delay in the treatment of patients with acute coronary syndrome. The study was conducted as a survey and examined 67 consecutively admitted patients treated invasively in a two-stage transport system. Data were analyzed statistically. Results: Two-stage transportation does not meet the timeframe guidelines for the treatment of patients with acute myocardial infarction. Intervals for the analyzed group of patients were statistically significant (p < 0.0001). Conclusions: Direct transportation of the patient to a reference center with an interventional cardiology laboratory has a significant impact on reducing in-hospital delay for patients with acute coronary syndrome. Perspectives: This article presents the results of two-stage transportation of patients with acute coronary syndrome. This measure could help clinicians who seek to assess the time needed for intervention. It also shows how important the time from the onset of chest pain is, and how it may determine patient disability, death or well-being.

  15. Two-Stage Liver Transplantation with Temporary Porto-Middle Hepatic Vein Shunt

    Directory of Open Access Journals (Sweden)

    Giovanni Varotti

    2010-01-01

    Full Text Available Two-stage liver transplantation (LT) has been reported for cases of fulminant liver failure that can lead to toxic hepatic syndrome, or massive hemorrhages resulting in uncontrollable bleeding. Technically, the first stage of the procedure consists of a total hepatectomy with preservation of the recipient's inferior vena cava (IVC), followed by the creation of a temporary end-to-side porto-caval shunt (TPCS). The second stage consists of removing the TPCS and implanting a liver graft when one becomes available. We report a case of a two-stage total hepatectomy and LT in which a temporary end-to-end anastomosis between the portal vein and the middle hepatic vein (TPMHV) was performed as an alternative to the classic end-to-end TPCS. The creation of a TPMHV proved technically feasible and showed some advantages compared to the standard TPCS. In cases in which a two-stage LT with side-to-side caval reconstruction is utilized, TPMHV can be considered as a safe and effective alternative to standard TPCS.

  16. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
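
    To make the contrast concrete, the following minimal sketch simulates a data set with one endogenous regressor and one instrument and fits both estimators with statsmodels; the data-generating process and variable names are illustrative assumptions, not the paper's generic parametric framework or its empirical example.

        # Minimal 2SRI vs. 2SPS sketch on simulated data (illustrative assumptions only).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 5000
        z = rng.normal(size=n)                  # instrument
        u = rng.normal(size=n)                  # unobserved confounder
        x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor
        # Binary outcome whose latent index depends on x and on the confounder u
        y = (0.5 * x + u + rng.normal(size=n) > 0).astype(int)

        # Stage 1: regress the endogenous regressor on the instrument, keep residuals
        stage1 = sm.OLS(x, sm.add_constant(z)).fit()

        # Stage 2 (2SRI): keep x itself and add the first-stage residual as a regressor
        X_2sri = sm.add_constant(np.column_stack([x, stage1.resid]))
        print("2SRI:", sm.Logit(y, X_2sri).fit(disp=0).params)

        # Stage 2 (2SPS, shown only for contrast): replace x by its first-stage prediction
        X_2sps = sm.add_constant(stage1.fittedvalues)
        print("2SPS:", sm.Logit(y, X_2sps).fit(disp=0).params)

    The 2SPS variant substitutes the first-stage prediction into the nonlinear second stage, which is exactly the practice the paper argues against; 2SRI instead retains the endogenous regressor and controls for the first-stage residual.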

  17. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    Science.gov (United States)

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.
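
    For orientation, the concentration-acceptance product (CAP) mentioned above can be referred to the standard two-dimensional (line-focus) thermodynamic limit C_max = 1/sin(theta_a). The short sketch below evaluates it for a few illustrative acceptance half-angles; the numbers are examples, not the paper's designs.

        # Illustrative 2D thermodynamic concentration limit and CAP values.
        # Acceptance angles and the 60%-of-limit "design" are assumed numbers.
        import math

        for theta_deg in (0.75, 1.0, 2.0):            # half acceptance angle in degrees
            theta = math.radians(theta_deg)
            c_max = 1.0 / math.sin(theta)              # 2D (line-focus) thermodynamic limit
            c_design = 0.6 * c_max                     # hypothetical design at 60% of the limit
            cap = c_design * math.sin(theta)           # CAP <= 1 for any 2D concentrator
            print(f"theta = {theta_deg:4.2f} deg  C_max = {c_max:6.1f}  "
                  f"C_design = {c_design:6.1f}  CAP = {cap:4.2f}")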

  18. Industrial demonstration plant for the gasification of herb residue by fluidized bed two-stage process.

    Science.gov (United States)

    Zeng, Xi; Shao, Ruyi; Wang, Fang; Dong, Pengwei; Yu, Jian; Xu, Guangwen

    2016-04-01

    A fluidized bed two-stage gasification process, consisting of a fluidized-bed (FB) pyrolyzer and a transport fluidized bed (TFB) gasifier, has been proposed to gasify biomass for fuel gas production with low tar content. On the basis of our previous fundamental study, an autothermal two-stage gasifier has been designed and built to gasify a Chinese herb residue with a treating capacity of 600 kg/h. The test data from the stable operational stage of the industrial demonstration plant showed that, when the reaction temperatures of the pyrolyzer and gasifier were kept at about 700 °C and 850 °C respectively, the heating value of the fuel gas reached 1200 kcal/Nm^3, and the tar content in the produced fuel gas was about 0.4 g/Nm^3. The results from this pilot industrial demonstration plant fully verified the feasibility and technical features of the proposed FB two-stage gasification process.

  19. Study on two stage activated carbon/HFC-134a based adsorption chiller

    Science.gov (United States)

    Habib, K.

    2013-06-01

    In this paper, a theoretical analysis of the performance of a thermally driven two-stage four-bed adsorption chiller utilizing low-grade waste heat at temperatures between 50°C and 70°C, in combination with a heat sink (cooling water) at 30°C, for air-conditioning applications is described. Activated carbon (AC) of type Maxsorb III paired with HFC-134a has been examined as the adsorbent/refrigerant pair. A FORTRAN simulation program was developed to analyze the influence of operating conditions (hot and cooling water temperatures and adsorption/desorption cycle times) on the cycle performance in terms of cooling capacity and COP. The main advantage of this two-stage chiller is that it can operate with smaller regenerating temperature lifts than other heat-driven single-stage chillers. Simulation results show that the two-stage chiller can be operated effectively with heat sources of 50-70°C in combination with a coolant at 30°C.

  20. Effects of earthworm casts and zeolite on the two-stage composting of green waste

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Lu, E-mail: zhanglu1211@gmail.com; Sun, Xiangyang, E-mail: xysunbjfu@gmail.com

    2015-05-15

    Highlights: • Earthworm casts (EWCs) and clinoptilolite (CL) were used in green waste composting. • Addition of EWCs + CL improved physico-chemical and microbiological properties. • Addition of EWCs + CL extended the duration of thermophilic periods during composting. • Addition of EWCs + CL enhanced humification, cellulose degradation, and nutrients. • Combined addition of 0.30% EWCs + 25% CL reduced composting time to 21 days. - Abstract: Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90–270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL.

  1. A Two-stage injection-locked magnetron for accelerators with superconducting cavities

    CERN Document Server

    Kazakevich, Grigory; Flanagan, Gene; Marhauser, Frank; Neubauer, Mike; Yakovlev, Vyacheslav; Chase, Brian; Nagaitsev, Sergey; Pasquinelli, Ralph; Solyak, Nikolay; Tupikov, Vitali; Wolff, Daniel

    2013-01-01

    A concept for a two-stage injection-locked CW magnetron intended to drive superconducting cavities (SC) for intensity-frontier accelerators has been proposed. The concept uses two magnetrons whose output power differs by 15-20 dB; the lower-power magnetron, frequency-locked by an external source, in turn locks the higher-power magnetron. The injection-locked two-stage CW magnetron can be used as an RF power source for Fermilab's Project-X to feed each of the 1.3 GHz SC of the 8 GeV pulsed linac separately. We expect an output/locking power ratio of about 30-40 dB, assuming operation in a pulsed mode with a pulse duration of ~8 ms and a repetition rate of 10 Hz. The experimental setup of a two-stage magnetron utilising CW, S-band, 1 kW tubes operating at pulse durations of 1-10 ms, and the results obtained, are presented and discussed in this paper.

  2. Study on the Control Algorithm of Two-Stage DC-DC Converter for Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Changhao Piao

    2014-01-01

    Full Text Available Fast response, high efficiency, and good reliability are very important characteristics of electric vehicle (EV) dc/dc converters. The two-stage dc-dc converter is a kind of dc-dc topology that can offer those characteristics to EVs. Presently, nonlinear control is an active area of research in the field of control algorithms for dc-dc converters; however, very few papers address two-stage converters for EVs. In this paper, a fixed switching frequency sliding mode (FSFSM) controller and a double-integral sliding mode (DISM) controller for a two-stage dc-dc converter are proposed, and a conventional linear (lag) controller is chosen for comparison. The performance of the proposed FSFSM controller is compared with that obtained with the lag controller. The satisfactory simulation and experimental results show that the FSFSM controller is capable of offering good large-signal operation with fast dynamic response for the converter. Finally, further simulation results are presented to show that the DISM controller is a promising method for eliminating the steady-state error of the converter.
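
    The FSFSM and DISM controllers of the paper are specific fixed-frequency designs for the two-stage topology. As a generic orientation only, the sketch below simulates the basic sliding-surface idea with simple hysteresis modulation on a single-stage buck converter model; the component values, gains and control law are assumptions for illustration, not the authors' design.

        # Toy hysteresis sliding-mode control of a single-stage buck converter model.
        # All component values and gains are assumed; not the paper's FSFSM/DISM design.
        Vin, Vref = 48.0, 24.0            # input and reference output voltage [V]
        L, C, R = 220e-6, 470e-6, 10.0    # inductor, capacitor, load resistance (assumed)
        lam, hyst = 2000.0, 0.05          # sliding-surface gain and hysteresis band

        dt, T = 1e-7, 0.02                # integration step and simulated time [s]
        iL = vo = 0.0
        u = 0                             # switch state (0 = off, 1 = on)

        for _ in range(int(T / dt)):
            e = Vref - vo                 # output voltage error
            de = -(iL - vo / R) / C       # d(error)/dt = -dvo/dt
            s = de + lam * e              # sliding surface
            if s > hyst:
                u = 1
            elif s < -hyst:
                u = 0
            iL += dt * (u * Vin - vo) / L     # inductor current dynamics (CCM model)
            vo += dt * (iL - vo / R) / C      # output voltage dynamics

        print(f"steady-state output ~ {vo:.2f} V (target {Vref} V)")

    Hysteresis modulation yields a variable switching frequency; the fixed-frequency variant studied in the paper replaces the hysteresis block with a PWM-based equivalent control.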

  3. Stochastic Analysis 2010

    CERN Document Server

    Crisan, Dan

    2011-01-01

    "Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa

  4. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
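
    For readers unfamiliar with the underlying structure, the sketch below solves the deterministic equivalent of a plain two-stage stochastic linear program (a first-stage capacity decision followed by scenario-wise recourse). All numbers are made up, and it is a generic illustration rather than the paper's stochastic-robust unit commitment model.

        # Deterministic equivalent of a toy two-stage stochastic LP:
        # choose capacity x now, buy recourse generation y_s per demand scenario.
        from scipy.optimize import linprog

        c_cap, c_gen = 10.0, 25.0                          # first-stage and recourse unit costs
        scen = [(0.3, 80.0), (0.5, 100.0), (0.2, 130.0)]   # (probability, demand)

        # Decision vector: [x, y_1, y_2, y_3]; minimize c_cap*x + sum_s p_s*c_gen*y_s
        c = [c_cap] + [p * c_gen for p, _ in scen]

        # Demand constraints per scenario: x + y_s >= d_s  ->  -x - y_s <= -d_s
        A_ub, b_ub = [], []
        for s, (_, d) in enumerate(scen):
            row = [-1.0] + [0.0] * len(scen)
            row[1 + s] = -1.0
            A_ub.append(row)
            b_ub.append(-d)

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
        print(f"first-stage capacity x = {res.x[0]:.1f}, expected total cost = {res.fun:.1f}")

    The robust version described in the abstract would replace the fixed scenario set by worst cases over uncertainty sets, but the two-stage structure (here-and-now decision plus recourse) is the same.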

  5. Combinatorial stresses kill pathogenic Candida species.

    Science.gov (United States)

    Kaloriti, Despoina; Tillmann, Anna; Cook, Emily; Jacobsen, Mette; You, Tao; Lenardon, Megan; Ames, Lauren; Barahona, Mauricio; Chandrasekaran, Komelapriya; Coghill, George; Goodman, Daniel; Gow, Neil A R; Grebogi, Celso; Ho, Hsueh-Lui; Ingram, Piers; McDonagh, Andrew; de Moura, Alessandro P S; Pang, Wei; Puttnam, Melanie; Radmaneshfar, Elahe; Romano, Maria Carmen; Silk, Daniel; Stark, Jaroslav; Stumpf, Michael; Thiel, Marco; Thorne, Thomas; Usher, Jane; Yin, Zhikang; Haynes, Ken; Brown, Alistair J P

    2012-10-01

    Pathogenic microbes exist in dynamic niches and have evolved robust adaptive responses to promote survival in their hosts. The major fungal pathogens of humans, Candida albicans and Candida glabrata, are exposed to a range of environmental stresses in their hosts including osmotic, oxidative and nitrosative stresses. Significant efforts have been devoted to the characterization of the adaptive responses to each of these stresses. In the wild, cells are frequently exposed simultaneously to combinations of these stresses and yet the effects of such combinatorial stresses have not been explored. We have developed a common experimental platform to facilitate the comparison of combinatorial stress responses in C. glabrata and C. albicans. This platform is based on the growth of cells in buffered rich medium at 30°C, and was used to define relatively low, medium and high doses of osmotic (NaCl), oxidative (H(2)O(2)) and nitrosative stresses (e.g., dipropylenetriamine (DPTA)-NONOate). The effects of combinatorial stresses were compared with the corresponding individual stresses under these growth conditions. We show for the first time that certain combinations of combinatorial stress are especially potent in terms of their ability to kill C. albicans and C. glabrata and/or inhibit their growth. This was the case for combinations of osmotic plus oxidative stress and for oxidative plus nitrosative stress. We predict that combinatorial stresses may be highly significant in host defences against these pathogenic yeasts.

  6. Partition functions and graphs: A combinatorial approach

    CERN Document Server

    Solomon, A I; Duchamp, G; Horzela, A; Penson, K A; Solomon, Allan I.; Blasiak, Pawel; Duchamp, Gerard; Horzela, Andrzej; Penson, Karol A.

    2004-01-01

    Although symmetry methods and analysis are a necessary ingredient in every physicist's toolkit, rather less use has been made of combinatorial methods. One exception is in the realm of Statistical Physics, where the calculation of the partition function, for example, is essentially a combinatorial problem. In this talk we shall show that one approach is via the normal ordering of the second quantized operators appearing in the partition function. This in turn leads to a combinatorial graphical description, giving essentially Feynman-type graphs associated with the theory. We illustrate this methodology by the explicit calculation of two model examples, the free boson gas and a superfluid boson model. We show how the calculation of partition functions can be facilitated by knowledge of the combinatorics of the boson normal ordering problem; this naturally gives rise to the Bell numbers of combinatorics. The associated graphical representation of these numbers gives a perturbation expansion in terms of a sequen...
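
    Since the abstract points to the Bell numbers arising from the boson normal ordering problem, a short reference sketch of the Bell triangle, which generates them, is included below.

        # Bell numbers count the partitions of an n-element set; computed via the Bell triangle.
        def bell_numbers(n_max):
            bells = [1]               # B_0 = 1
            row = [1]
            for _ in range(n_max):
                # Each new row starts with the last entry of the previous row; each
                # further entry is the sum of its left and upper-left neighbours.
                new_row = [row[-1]]
                for value in row:
                    new_row.append(new_row[-1] + value)
                bells.append(new_row[0])
                row = new_row
            return bells

        print(bell_numbers(8))   # [1, 1, 2, 5, 15, 52, 203, 877, 4140]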

  7. Accessing Specific Peptide Recognition by Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Li, Ming

    Molecular recognition is at the basis of all processes for life, and plays a central role in many biological processes, such as protein folding, the structural organization of cells and organelles, signal transduction, and the immune response. Hence, my PhD project is entitled "Accessing Specific Peptide Recognition by Combinatorial Chemistry". Molecular recognition is a specific interaction between two or more molecules through noncovalent bonding, such as hydrogen bonding, metal coordination, van der Waals forces, π−π, hydrophobic, or electrostatic interactions. The association involves kinetic... Combinatorial chemistry was invented in the 1980s based on observation of functional aspects of the adaptive immune system. It was employed for drug development and optimization in conjunction with high-throughput synthesis and screening. (chapter 2) Combinatorial chemistry is able to rapidly produce many thousands...

  8. Two-Stage Power Factor Corrected Power Supplies: The Low Component-Stress Approach

    DEFF Research Database (Denmark)

    Petersen, Lars; Andersen, Michael Andreas E.

    2002-01-01

    The discussion concerning the use of single-stage versus two-stage PFC solutions has been going on for the last decade and it continues. The purpose of this paper is to direct the focus back on how the power is processed and not so much on the number of stages or the amount of power processed. The performance of the basic DC/DC topologies is reviewed with a focus on component stress. The knowledge obtained in this process is used to review some examples of the alternative PFC solutions and compare these solutions with the basic two-stage PFC solution.

  9. Two-stage bargaining with coverage extension in a dual labour market

    DEFF Research Database (Denmark)

    Roberts, Mark A.; Stæhr, Karsten; Tranæs, Torben

    2000-01-01

    This paper studies coverage extension in a simple general equilibrium model with a dual labour market. The union sector is characterized by two-stage bargaining, whereas the firms set wages in the non-union sector. In this model, firms and unions of the union sector have a commonality of interest in extending coverage of a minimum wage to the non-union sector. Furthermore, the union sector does not seek to increase the non-union wage to a level above the market-clearing wage. In fact, it is optimal for the union sector to impose a market-clearing wage on the non-union sector. Finally, coverage...

  10. SQL/JavaScript Hybrid Worms As Two-stage Quines

    CERN Document Server

    Orlicki, José I

    2009-01-01

    Delving into present trends and anticipating future malware trends, a hybrid self-replicating worm (SQL on the server side, JavaScript on the client side) based on two-stage quines was designed and implemented in an ad-hoc scenario instantiating a very common software pattern. The proof-of-concept code combines techniques seen in the wild, in the form of SQL injections leading to cross-site scripting JavaScript inclusion, and seen in the laboratory, in the form of SQL quines propagated via RFIDs, resulting in a hybrid code injection. General features of hybrid worms are also discussed.

  11. Two stage DOA and Fundamental Frequency Estimation based on Subspace Techniques

    DEFF Research Database (Denmark)

    Zhou, Zhenhua; Christensen, Mads Græsbøll; So, Hing-Cheung

    2012-01-01

    In this paper, the problem of fundamental frequency and direction-of-arrival (DOA) estimation for multi-channel harmonic sinusoidal signals is addressed. The estimation procedure consists of two stages. Firstly, by making use of the subspace technique and Markov-based eigenanalysis, a multi-channel optimally weighted harmonic multiple signal classification (MCOW-HMUSIC) estimator is devised for the estimation of fundamental frequencies. Secondly, the spatio-temporal multiple signal classification (ST-MUSIC) estimator is proposed for the estimation of DOA with the estimated frequencies. Statistical evaluation with synthetic signals shows the high accuracy of the proposed methods compared with their non-weighting versions.
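
    To illustrate the subspace idea behind these estimators, the sketch below runs a generic narrowband spatial MUSIC scan for a uniform linear array on simulated data; it is not the MCOW-HMUSIC or ST-MUSIC method of the paper, and the array geometry, source angles and noise level are assumptions.

        # Generic narrowband spatial MUSIC sketch for a uniform linear array (ULA).
        import numpy as np

        rng = np.random.default_rng(1)
        M, N, d = 8, 200, 0.5                  # sensors, snapshots, spacing in wavelengths
        true_doas = np.deg2rad([-20.0, 35.0])  # two sources (K = 2 assumed known)

        def steering(theta):
            # M x len(theta) matrix of ULA steering vectors
            return np.exp(-2j * np.pi * d * np.arange(M)[:, None] * np.sin(theta))

        S = (rng.normal(size=(2, N)) + 1j * rng.normal(size=(2, N))) / np.sqrt(2)
        noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
        X = steering(true_doas) @ S + noise

        R = X @ X.conj().T / N                 # sample spatial covariance
        eigval, eigvec = np.linalg.eigh(R)     # eigenvalues in ascending order
        En = eigvec[:, : M - 2]                # noise subspace (smallest M - K eigenvalues)

        grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
        p_music = 1.0 / np.sum(np.abs(En.conj().T @ steering(grid)) ** 2, axis=0)

        # crude peak picking: keep the two largest local maxima of the pseudospectrum
        is_peak = (p_music[1:-1] > p_music[:-2]) & (p_music[1:-1] > p_music[2:])
        peak_idx = np.where(is_peak)[0] + 1
        top2 = peak_idx[np.argsort(p_music[peak_idx])[-2:]]
        print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top2])))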

  12. Performance of the SITP 35K two-stage Stirling cryocooler

    Science.gov (United States)

    Liu, Dongyu; Li, Ao; Li, Shanshan; Wu, Yinong

    2010-04-01

    This paper presents the design, development, optimization experiments and performance of the SITP two-stage Stirling cryocooler. The geometry of the cooler, especially the diameter and length of the regenerator, was analyzed. Operating parameters were optimized experimentally to maximize the second-stage cooling performance. In the tests the cooler was operated at various drive frequencies, phase shifts between displacer and piston, and fill pressures. The experimental results indicate that the cryocooler achieves high efficiency, with a cooling performance of 0.85 W at 35 K for a compressor input power of 56 W at a phase shift of 65°, an operating frequency of 40 Hz and a fill pressure of 1 MPa.

  13. Two-Stage Bulk Electron Heating in the Diffusion Region of Anti-Parallel Symmetric Reconnection

    CERN Document Server

    Le, Ari; Daughton, William

    2016-01-01

    Electron bulk energization in the diffusion region during anti-parallel symmetric reconnection entails two stages. First, the inflowing electrons are adiabatically trapped and energized by an ambipolar parallel electric field. Next, the electrons gain energy from the reconnection electric field as they undergo meandering motion. These collisionless mechanisms have been described previously, and they lead to highly structured electron velocity distributions. Nevertheless, a simplified control-volume analysis gives estimates for how the net effective heating scales with the upstream plasma conditions, in agreement with fully kinetic simulations and spacecraft observations.

  14. Use of two-stage membrane countercurrent cascade for natural gas purification from carbon dioxide

    Science.gov (United States)

    Kurchatov, I. M.; Laguntsov, N. I.; Karaseva, M. D.

    2016-09-01

    A membrane technology scheme, presented as a two-stage countercurrent recirculating cascade, is proposed in order to solve the problem of natural gas dehydration and purification from CO2. The first stage is a single divider, and the second stage is a recirculating two-module divider. This scheme allows natural gas to be cleaned of impurities with any desired degree of methane extraction. In this paper, the optimal values of the basic parameters of the selected technological scheme are determined. An estimation of energy efficiency was carried out, taking into account the energy consumption of the interstage compressor and the methane losses expressed in energy units.

  15. Forecasting long memory series subject to structural change: A two-stage approach

    DEFF Research Database (Denmark)

    Papailias, Fotis; Dias, Gustavo Fruet

    2015-01-01

    A two-stage forecasting approach for long memory time series is introduced. In the first step, we estimate the fractional exponent and, by applying the fractional differencing operator, obtain the underlying weakly dependent series. In the second step, we produce multi-step-ahead forecasts for the weakly dependent series and obtain their long memory counterparts by applying the fractional cumulation operator. The methodology applies to both stationary and nonstationary cases. Simulations and an application to seven time series provide evidence that the new methodology is more robust to structural change and yields good forecasting results.
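
    A minimal sketch of the two-stage mechanics is given below: a series is fractionally differenced with a given memory parameter d (here assumed rather than estimated), the weakly dependent remainder is forecast naively, and the forecasts are fractionally cumulated back. The toy series, the value of d and the naive second-stage forecast are illustrative assumptions, not the paper's estimator or data.

        # Fractional differencing / cumulation sketch for a two-stage forecast.
        import numpy as np

        def frac_diff_weights(d, n):
            # Binomial expansion of (1 - L)^d: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k
            w = np.empty(n)
            w[0] = 1.0
            for k in range(1, n):
                w[k] = w[k - 1] * (k - 1 - d) / k
            return w

        rng = np.random.default_rng(0)
        d, n, horizon = 0.4, 500, 10
        x = np.cumsum(rng.normal(size=n)) * 0.05 + rng.normal(size=n)   # toy series

        w = frac_diff_weights(d, n)
        u = np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(n)])   # (1 - L)^d x

        u_forecast = np.full(horizon, u[-50:].mean())   # naive stage-2 forecast (placeholder)

        # Stage 2 -> long memory: apply the inverse filter (1 - L)^{-d} to the extended series
        w_inv = frac_diff_weights(-d, n + horizon)
        u_ext = np.concatenate([u, u_forecast])
        x_forecast = np.array([w_inv[:t + 1][::-1] @ u_ext[:t + 1]
                               for t in range(n, n + horizon)])
        print(x_forecast)

    In the actual methodology the exponent d is estimated in the first step and the weakly dependent series is forecast with a proper short-memory model rather than a sample mean.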

  16. Space Station Freedom carbon dioxide removal assembly two-stage rotary sliding vane pump

    Science.gov (United States)

    Matteau, Dennis

    1992-07-01

    The design and development of a positive displacement pump selected to operate as an essential part of the carbon dioxide removal assembly (CDRA) are described. An oilless two-stage rotary sliding vane pump was selected as the optimum concept to meet the CDRA application requirements. This positive displacement pump is characterized by low weight and small envelope per unit flow, ability to pump saturated gases and moderate amount of liquid, small clearance volumes, and low vibration. It is easily modified to accommodate several stages on a single shaft optimizing space and weight, which makes the concept ideal for a range of demanding space applications.

  17. Two-Stage Maximum Likelihood Estimation (TSMLE for MT-CDMA Signals in the Indoor Environment

    Directory of Open Access Journals (Sweden)

    Sesay Abu B

    2004-01-01

    Full Text Available This paper proposes a two-stage maximum likelihood estimation (TSMLE) technique suited for a multitone code division multiple access (MT-CDMA) system. Here, an analytical framework is presented in the indoor environment for determining the average bit error rate (BER) of the system over Rayleigh and Ricean fading channels. The analytical model is derived for the quadrature phase shift keying (QPSK) modulation technique by taking into account the number of tones, signal bandwidth (BW), bit rate, and transmission power. Numerical results are presented to validate the analysis, and to justify the approximations made therein. Moreover, these results are shown to agree completely with those obtained by simulation.
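
    For orientation only, the sketch below evaluates the textbook closed-form average bit error rate of coherent, Gray-coded QPSK over a flat Rayleigh fading channel against the AWGN reference; it omits the multitone/CDMA specifics and is not the analytical model derived in the paper.

        # Textbook QPSK bit error rates: AWGN vs. flat Rayleigh fading (orientation only).
        import math

        def q_function(x):
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        for snr_db in (0, 5, 10, 15, 20):
            g = 10.0 ** (snr_db / 10.0)                   # average Eb/N0 (linear)
            ber_awgn = q_function(math.sqrt(2.0 * g))     # coherent Gray-coded QPSK in AWGN
            ber_rayleigh = 0.5 * (1.0 - math.sqrt(g / (1.0 + g)))
            print(f"Eb/N0 = {snr_db:2d} dB  AWGN BER = {ber_awgn:.2e}  "
                  f"Rayleigh BER = {ber_rayleigh:.2e}")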

  18. Two-Stage Electric Vehicle Charging Coordination in Low Voltage Distribution Grids

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    Increased environmental awareness in recent years has encouraged rapid growth of renewable energy sources (RESs), especially solar PV and wind. One of the effective solutions to compensate for intermittency in generation from the RESs is to enable consumer participation in demand response (DR). Being sizeable rated elements, electric vehicles (EVs) can offer a great deal of demand flexibility in future intelligent grids. This paper first investigates and analyzes the driving patterns and charging requirements of EVs. Secondly, a two-stage charging algorithm, namely local adaptive control

  19. Health care planning and education via gaming-simulation: a two-stage experiment.

    Science.gov (United States)

    Gagnon, J H; Greenblat, C S

    1977-01-01

    A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.

  20. Influence of capacity- and time-constrained intermediate storage in two-stage food production systems

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter; Gaalman, Gerard

    2007-01-01

    In food processing, two-stage production systems with a batch processor in the first stage and packaging lines in the second stage are common and mostly separated by capacity- and time-constrained intermediate storage. This combination of constraints is common in practice, but the literature hardly...... of systems like this. Contrary to the common sense in operations management, the LPT rule is able to maximize the total production volume per day. Furthermore, we show that adding one tank has considerable effects. Finally, we conclude that the optimal setup frequency for batches in the first stage...

  1. The global stability of a delayed predator-prey system with two stage-structure

    Energy Technology Data Exchange (ETDEWEB)

    Wang Fengyan [College of Science, Jimei University, Xiamen Fujian 361021 (China)], E-mail: wangfy68@163.com; Pang Guoping [Department of Mathematics and Computer Science, Yulin Normal University, Yulin Guangxi 537000 (China)

    2009-04-30

    Based on the classical delayed stage-structured model and Lotka-Volterra predator-prey model, we introduce and study a delayed predator-prey system, where prey and predator have two stages, an immature stage and a mature stage. The time delays are the time lengths between the immature's birth and maturity of prey and predator species. Results on global asymptotic stability of nonnegative equilibria of the delay system are given, which generalize and suggest that good continuity exists between the predator-prey system and its corresponding stage-structured system.

  2. A Two-Stage Assembly-Type Flowshop Scheduling Problem for Minimizing Total Tardiness

    Directory of Open Access Journals (Sweden)

    Ju-Yong Lee

    2016-01-01

    Full Text Available This research considers a two-stage assembly-type flowshop scheduling problem with the objective of minimizing the total tardiness. The first stage consists of two independent machines, and the second stage consists of a single machine. Two types of components are fabricated in the first stage, and then they are assembled in the second stage. Dominance properties and lower bounds are developed, and a branch and bound algorithm is presented that uses these properties and lower bounds as well as an upper bound obtained from a heuristic algorithm. The algorithm performance is evaluated using a series of computational experiments on randomly generated instances and the results are reported.
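
    The objective used in the paper can be stated compactly in code: in a permutation schedule, each job's two components are fabricated on the two stage-1 machines, and assembly starts once both components and the assembly machine are available. The sketch below evaluates the total tardiness of a given sequence with made-up job data; it is only the objective evaluation, not the branch and bound algorithm.

        # Total tardiness of a sequence in a two-stage assembly-type flowshop
        # (two independent stage-1 machines, one stage-2 assembly machine).
        def total_tardiness(sequence, p1, p2, pa, due):
            c1 = c2 = ca = 0.0      # completion times on machine 1, machine 2, assembly
            tardiness = 0.0
            for j in sequence:
                c1 += p1[j]                       # component 1 fabrication
                c2 += p2[j]                       # component 2 fabrication
                ca = max(ca, c1, c2) + pa[j]      # assembly waits for both components
                tardiness += max(0.0, ca - due[j])
            return tardiness

        p1  = [4, 2, 6, 3]      # processing times, stage-1 machine 1 (made-up data)
        p2  = [3, 5, 2, 4]      # processing times, stage-1 machine 2
        pa  = [2, 3, 2, 4]      # assembly times, stage 2
        due = [10, 8, 12, 16]   # due dates

        print(total_tardiness([1, 0, 2, 3], p1, p2, pa, due))   # -> 5.0 for this sequence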

  3. Biomass waste gasification - can be the two stage process suitable for tar reduction and power generation?

    Science.gov (United States)

    Sulc, Jindřich; Stojdl, Jiří; Richter, Miroslav; Popelka, Jan; Svoboda, Karel; Smetana, Jiří; Vacek, Jiří; Skoblja, Siarhei; Buryan, Petr

    2012-04-01

    A pilot-scale gasification unit with a novel co-current updraft arrangement in the first stage and a counter-current downdraft in the second stage was developed and used to study the effects of two-stage gasification of biomass (wood pellets), in comparison with one-stage gasification, on fuel gas composition and attainable gas purity. Significant producer gas parameters (gas composition, heating value, content of tar compounds, content of inorganic gas impurities) were compared for the two-stage and the one-stage gasification arrangements with only the upward moving bed (co-current updraft). The main novel features of the gasifier conception include a grate-less reactor, an upward moving bed of biomass particles (e.g. pellets) by means of a screw elevator with changeable rotational speed, and a gradually expanding diameter of the cylindrical reactor in the part above the upper end of the screw. The gasifier concept and arrangement are considered convenient for the thermal power range 100-350 kWth. The second stage of the gasifier served mainly for tar compound destruction/reforming at increased temperature (around 950°C) and for gasification reaction of the fuel gas with char. The second stage used additional combustion of the fuel gas with preheated secondary air to attain a higher temperature and faster gasification of the remaining char from the first stage. The measurements of gas composition and tar compound contents confirmed the superiority of the two-stage gasification system, with a drastic decrease, by 1-2 orders of magnitude, of aromatic compounds with two or more benzene rings. On the other hand, the two-stage gasification (with overall ER = 0.71) led to a substantial reduction of the gas heating value (LHV = 3.15 MJ/Nm^3), an increase in gas volume and an increase in the nitrogen content of the fuel gas. The increased temperature (>950°C) at the entrance to the char bed also caused a substantial decrease of the ammonia content in the fuel gas. The char with higher content of ash leaving the

  4. Two-stage continuous fermentation of Saccharomycopsis fibuligera and Candida utilis.

    Science.gov (United States)

    Admassu, W; Korus, R A; Heimsch, R C

    1983-11-01

    Biomass production and carbohydrate reduction were determined for a two-stage continuous fermentation process with a simulated potato processing waste feed. The amylolytic yeast Saccharomycopsis fibuligera was grown in the first stage, and a mixed culture of S. fibuligera and Candida utilis was maintained in the second stage. All conditions for the first and second stages were fixed except the flow of medium to the second stage, which was varied. Maximum biomass production occurred at a second-stage dilution rate, D(2), of 0.27 h^-1. Carbohydrate reduction was inversely proportional to D(2) between 0.10 and 0.35 h^-1.

  5. Structural requirements and basic design concepts for a two-stage winged launcher system (Saenger)

    Science.gov (United States)

    Kuczera, H.; Keller, K.; Kunz, R.

    1988-10-01

    An evaluation is made of materials and structures technologies deemed capable of increasing the mass fraction-to-orbit of the Saenger two-stage launcher system while adequately addressing thermal-control and cryogenic fuel storage insulation problems. Except in its leading edges, nose cone, and airbreathing propulsion system air intakes, Ti alloy-based materials will be the basis of the airframe primary structure. Lightweight metallic thermal-protection measures will be employed. Attention is given to the design of the large lower stage element of Saenger.

  6. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    OpenAIRE

    Ladan Jamshidy; Hamid Reza Mozaffari; Payam Faraji; Roohollah Sharifi

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by standard method for full crowns with processed preparation finish line of 1 mm depth and convergence angle of 3-4°. Impression was made 20 times with one-stage technique and 20 times with ...

  7. An Investigation on the Formation of Carbon Nanotubes by Two-Stage Chemical Vapor Deposition

    Directory of Open Access Journals (Sweden)

    M. S. Shamsudin

    2012-01-01

    Full Text Available A high density of carbon nanotubes (CNTs) has been synthesized from an agricultural hydrocarbon, camphor oil, using a one-hour synthesis time and a titanium dioxide sol-gel catalyst. The pyrolysis temperature is studied in the range of 700–900°C at increments of 50°C. The synthesis process is done using a custom-made two-stage catalytic chemical vapor deposition apparatus. The CNT characteristics are investigated by field emission scanning electron microscopy and micro-Raman spectroscopy. The experimental results showed that the structural properties of the CNTs are highly dependent on the pyrolysis temperature.

  8. FORMATION OF HIGHLY RESISTANT CARBIDE AND BORIDE COATINGS BY A TWO-STAGE DEPOSITION METHOD

    Directory of Open Access Journals (Sweden)

    W. I. Sawich

    2011-01-01

    Full Text Available A study was made of the aspects of forming highly resistant coatings in the surface zone of tool steels and solid carbide inserts by a two-stage method. At the first stage of the method, pure Ta or Nb coatings were electrodeposited on samples of tool steel and solid carbide insert in a molten salt bath containing Ta and Nb fluorides. At the second stage, the electrodeposited coating of Ta (Nb) was subjected to carburizing or boriding to form carbide (TaC, NbC) or boride (TaB, NbB) cladding layers.

  9. Combinatorial study of colored Hurwitz polyzêtas

    OpenAIRE

    Enjalbert, Jean-Yves; Minh, Hoang Ngoc

    2012-01-01

    A combinatorial study discloses two surjective morphisms between generalized shuffle algebras and algebras generated by the colored Hurwitz polyzêtas. The combinatorial aspects of the products and co-products involved in these algebras will be examined.

  10. Combinatorial set theory partition relations for cardinals

    CERN Document Server

    Erdös, P; Hajnal, A; Rado, P

    2011-01-01

    This work presents the most important combinatorial ideas in partition calculus and discusses ordinary partition relations for cardinals without the assumption of the generalized continuum hypothesis. A separate section of the book describes the main partition symbols scattered in the literature. A chapter on the applications of the combinatorial methods in partition calculus includes a section on topology with Arhangel'skii's famous result that a first countable compact Hausdorff space has cardinality at most the continuum. Several sections on set mappings are included as well as an account of

  11. Toward Chemical Implementation of Encoded Combinatorial Libraries

    DEFF Research Database (Denmark)

    Nielsen, John; Janda, Kim D.

    1994-01-01

    The recent application of "combinatorial libraries" to supplement existing drug screening processes might simplify and accelerate the search for new lead compounds or drugs. Recently, a scheme for encoded combinatorial chemistry was put forward to surmount a number of the limitations possessed by existing methodologies. Here we detail the synthesis of several matrices and the necessary chemistry to implement the conceptual scheme. In addition, we disclose how this novel technology permits a controlled 'dendritic' display of the chemical libraries. © 1994 Academic Press. All rights reserved.

  12. Combinatorial designs a tribute to Haim Hanani

    CERN Document Server

    Hartman, A

    1989-01-01

    Haim Hanani pioneered the techniques for constructing designs and the theory of pairwise balanced designs, leading directly to Wilson's Existence Theorem. He also led the way in the study of resolvable designs, covering and packing problems, Latin squares, 3-designs and other combinatorial configurations. The Hanani volume is a collection of research and survey papers at the forefront of research in combinatorial design theory, including Professor Hanani's own latest work on Balanced Incomplete Block Designs. Other areas covered include Steiner systems, finite geometries, quasigroups, and t-designs.

  13. Stochastic Jeux

    Directory of Open Access Journals (Sweden)

    Romanu Ekaterini

    2006-01-01

    Full Text Available This article shows the similarities between Claude Debussy's and Iannis Xenakis' philosophy of music and work, in particular the former's Jeux and the latter's Metastasis and the stochastic works succeeding it, which seem to proceed in parallel (with no personal contact) with what is perceived as the evolution of 20th century Western music. Those two composers observed the dominant (German) tradition as outsiders, and negated some of its elements considered constant or natural by "traditional" innovators (i.e., serialists): the linearity of musical texture, its form and rhythm.

  14. Analytic approach to stochastic cellular automata: exponential and inverse power distributions out of Random Domino Automaton

    CERN Document Server

    Bialecki, Mariusz

    2010-01-01

    Inspired by an extremely simplified view of earthquakes, we propose a stochastic domino cellular automaton model exhibiting avalanches. From elementary combinatorial arguments we derive a set of nonlinear equations describing the automaton. Exact relations between the average parameters of the model are presented. Depending on the imposed triggering, the model reproduces both exponential and inverse-power statistics of clusters.
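
    As a rough illustration of the avalanche mechanism (not necessarily the exact rules or parameters of the model analysed in the paper), the toy simulation below drops particles onto random sites of a one-dimensional lattice: a hit on an empty site fills it, while a hit on an occupied site removes the whole connected cluster and records its size as an avalanche.

        # Toy 1D avalanche automaton in the spirit of a stochastic domino model.
        import numpy as np

        rng = np.random.default_rng(0)
        L, steps = 200, 200_000
        lattice = np.zeros(L, dtype=bool)
        avalanche_sizes = []

        for _ in range(steps):
            i = rng.integers(L)
            if not lattice[i]:
                lattice[i] = True                 # empty site becomes occupied
            else:
                # remove the connected cluster containing site i (an avalanche)
                left = i
                while left > 0 and lattice[left - 1]:
                    left -= 1
                right = i
                while right < L - 1 and lattice[right + 1]:
                    right += 1
                avalanche_sizes.append(right - left + 1)
                lattice[left:right + 1] = False

        sizes, counts = np.unique(avalanche_sizes, return_counts=True)
        print(dict(zip(sizes[:10].tolist(), counts[:10].tolist())))   # avalanche-size statistics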

  15. Complex Dynamical Behavior of a Two-Stage Colpitts Oscillator with Magnetically Coupled Inductors

    Directory of Open Access Journals (Sweden)

    V. Kamdoum Tamba

    2014-01-01

    Full Text Available A five-dimensional (5D) controlled two-stage Colpitts oscillator is introduced and analyzed. This new electronic oscillator is constructed by considering the well-known two-stage Colpitts oscillator with two further elements (coupled inductors and a variable resistor). In contrast to current approaches based on a piecewise linear (PWL) model, we propose a smooth mathematical model (with exponential nonlinearity) to investigate the dynamics of the oscillator. Several issues, such as the basic dynamical behaviour, bifurcation diagrams, Lyapunov exponents, and frequency spectra of the oscillator, are investigated theoretically and numerically by varying a single control resistor. It is found that the oscillator moves from the state of fixed point motion to chaos via the usual paths of period-doubling and interior crisis routes as the single control resistor is varied. Furthermore, an experimental study of the controlled Colpitts oscillator is carried out. An appropriate electronic circuit is proposed for the investigation of the complex dynamical behaviour of the system. A very good qualitative agreement is obtained between the theoretical/numerical and experimental results.

  16. Optimization of Two-Stage Peltier Modules: Structure and Exergetic Efficiency

    Directory of Open Access Journals (Sweden)

    Cesar Ramirez-Lopez

    2012-08-01

    Full Text Available In this paper we undertake the theoretical analysis of a two-stage semiconductor thermoelectric module (TEM) which contains an arbitrary and different number of thermocouples, n1 and n2, in each stage (pyramid-styled TEM). The analysis is based on a dimensionless entropy balance set of equations. We study the effects of n1 and n2, the electric currents flowing through each stage, the applied temperatures and the thermoelectric properties of the semiconductor materials on the exergetic efficiency. Our main result implies that the electric currents flowing in each stage must necessarily be different, with a ratio of about 4.3, if the best thermal performance and the highest possible temperature difference between the cold and hot sides of the device are pursued. This fact had not been pointed out before for pyramid-styled two-stage TEMs. The ratio n1/n2 should be about 8.

  17. A two-stage series diode for intense large-area moderate pulsed X rays production

    Science.gov (United States)

    Lai, Dingguo; Qiu, Mengtong; Xu, Qifu; Su, Zhaofeng; Li, Mo; Ren, Shuqing; Huang, Zhongliang

    2017-01-01

    This paper presents a method for producing moderate-energy pulsed X rays with a series diode, which can be driven by a high-voltage pulse to generate intense, large-area, uniform sub-100-keV X rays. A two-stage series diode was designed for the Flash-II accelerator and experimentally investigated. A compact support system for the floating converter/cathode was invented; the extra cathode is left floating electrically and mechanically by withdrawing three support pins several milliseconds before the diode electrical pulse. A double-ring cathode was developed to improve the surface electric field and emission stability. The cathode radii and diode separation gap were optimized to enhance the uniformity of the X rays and the coincidence of the two diode voltages, based on simulation and theoretical calculation. The experimental results show that the two-stage series diode can work stably under 700 kV and 300 kA; the average energy of the X rays is 86 keV, and the dose is about 296 rad(Si) over a 615 cm2 area with a uniformity of 2:1 at 5 cm from the last converter. Compared with the single diode, the average X-ray energy is reduced from 132 keV to 88 keV, and the proportion of sub-100-keV photons increases from 39% to 69%.

  18. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    Science.gov (United States)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a two-stage high-capacity free-piston Stirling cryocooler driven by a linear compressor to meet the requirement of the high temperature superconductor (HTS) motor applications. The cryocooler system comprises a single piston linear compressor, a two-stage free piston Stirling cryocooler and a passive oscillator. A single stepped displacer configuration was adopted. A numerical model based on the thermoacoustic theory was used to optimize the system operating and structure parameters. Distributions of pressure wave, phase differences between the pressure wave and the volume flow rate and different energy flows are presented for a better understanding of the system. Some characterizing experimental results are presented. Thus far, the cryocooler has reached a lowest cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which indicates a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of different parameters such as mean pressure, input electric power and cold-head temperature are also investigated.

  19. A separate two-stage pulse tube cooler working at liquid helium temperature

    Institute of Scientific and Technical Information of China (English)

    QIU Limin; HE Yonglin; GAN Zhihua; WAN Laihong; CHEN Guobang

    2005-01-01

    A novel 4 K separate two-stage pulse tube cooler (PTC) was designed and tested. The cooler consists of two separate pulse tube coolers, in which the cold end of the first-stage regenerator is thermally connected with the middle part of the second regenerator. Compared to the traditional coupled multi-stage pulse tube cooler, the mutual interference between stages can be significantly reduced. The lowest refrigeration temperature obtained at the first-stage pulse tube was 13.8 K, a new record for a single-stage PTC. With a two-compressor, two-rotary-valve driving mode, the separate two-stage PTC reached a refrigeration temperature of 2.5 K at the second stage. Cooling capacities of 508 mW at 4.2 K and 15 W at 37.5 K were achieved simultaneously. A one-compressor, one-rotary-valve driving mode has been proposed to further simplify the structure of the separate-type PTC.

  20. Two-Stage Single-Compartment Models to Evaluate Dissolution in the Lower Intestine.

    Science.gov (United States)

    Markopoulos, Constantinos; Vertzoni, Maria; Symillides, Mira; Kesisoglou, Filippos; Reppas, Christos

    2015-09-01

    The purpose was to propose two-stage single-compartment models for evaluating dissolution characteristics in distal ileum and ascending colon, under conditions simulating the bioavailability and bioequivalence studies in fasted and fed state by using the mini-paddle and the compendial flow-through apparatus (closed-loop mode). Immediate release products of two highly dosed active pharmaceutical ingredients (APIs), sulfasalazine and L-870,810, and one mesalamine colon targeting product were used for evaluating their usefulness. Change of medium composition simulating the conditions in distal ileum (SIFileum ) to a medium simulating the conditions in ascending colon in fasted state and in fed state was achieved by adding an appropriate solution in SIFileum . Data with immediate release products suggest that dissolution in lower intestine is substantially different than in upper intestine and is affected by regional pH differences > type/intensity of fluid convection > differences in concentration of other luminal components. Asacol® (400 mg/tab) was more sensitive to type/intensity of fluid convection. In all the cases, data were in line with available human data. Two-stage single-compartment models may be useful for the evaluation of dissolution in lower intestine. The impact of type/intensity of fluid convection and viscosity of media on luminal performance of other APIs and drug products requires further exploration.

  1. Simultaneous bile duct and portal venous branch ligation in two-stage hepatectomy

    Institute of Scientific and Technical Information of China (English)

    Hiroya Iida; Chiaki Yasui; Tsukasa Aihara; Shinichi Ikuta; Hidenori Yoshie; Naoki Yamanaka

    2011-01-01

    Hepatectomy is an effective surgical treatment for multiple bilobar liver metastases from colon cancer; however, one of the primary obstacles to completing surgical resection for these cases is an insufficient volume of the future remnant liver, which may cause postoperative liver failure. To induce atrophy of the unilateral lobe and hypertrophy of the future remnant liver, procedures to occlude the portal vein have been conventionally used prior to major hepatectomy. We report a case of a 50-year-old woman in whom two-stage hepatectomy was performed in combination with intraoperative ligation of the portal vein and the bile duct of the right hepatic lobe. This procedure was designed to promote the atrophic effect on the right hepatic lobe more effectively than the conventional technique, and to the best of our knowledge, it was used for the first time in the present case. Despite successful induction of liver volume shift as well as the following procedure, the patient died of subsequent liver failure after developing recurrent tumors. We discuss the first case in which simultaneous ligation of the portal vein and the biliary system was successfully applied as part of the first step of two-stage hepatectomy.

  2. Metamodeling and Optimization of a Blister Copper Two-Stage Production Process

    Science.gov (United States)

    Jarosz, Piotr; Kusiak, Jan; Małecki, Stanisław; Morkisz, Paweł; Oprocha, Piotr; Pietrucha, Wojciech; Sztangret, Łukasz

    2016-06-01

    It is often difficult to estimate, with high accuracy, the parameters of a two-stage production process for blister copper (containing 99.4 wt.% Cu), as it is for most industrial processes, and this leads to problems in process modeling and control. The first objective of this study was to model the flash smelting and Cu matte converting stages using three different techniques, artificial neural networks, support vector machines, and random forests, trained on noisy technological data. Subsequently, more advanced models were applied to optimize the entire process (the second goal of this research). The obtained optimal solution was a Pareto-optimal one because the process consisted of two stages, making the optimization problem a multi-criteria one. A sequential optimization strategy was employed, which sought optimal control parameters consecutively for both stages. The obtained optimal output parameters for the first smelting stage were used as input parameters for the second converting stage. Finally, a search for another optimal set of control parameters for the second stage of a Kennecott-Outokumpu process was performed. The optimization process was modeled using a Monte-Carlo method, and both the modeling parameters and the computed optimal solutions are discussed.
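
    In the spirit of the metamodeling step described above, the sketch below fits a random forest surrogate to noisy process data and then searches the surrogate over a grid of candidate control settings. The toy response function, the variable names (temperature, oxygen ratio) and their ranges are assumptions for illustration, not the plant data or models of the study.

        # Random forest surrogate + grid search over candidate control settings (toy example).
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n = 400
        X = rng.uniform([1150.0, 0.8], [1300.0, 1.2], size=(n, 2))   # [temperature, O2 ratio]
        y = (-(X[:, 0] - 1250.0) ** 2 / 800.0 - 50.0 * (X[:, 1] - 1.05) ** 2
             + rng.normal(scale=0.5, size=n))                        # noisy "quality" metric

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

        # Brute-force search of the fitted surrogate over a grid of candidate settings
        t_grid = np.linspace(1150.0, 1300.0, 61)
        r_grid = np.linspace(0.8, 1.2, 41)
        cand = np.array([(t, r) for t in t_grid for r in r_grid])
        best = cand[np.argmax(model.predict(cand))]
        print(f"surrogate optimum: temperature ~ {best[0]:.0f}, O2 ratio ~ {best[1]:.2f}")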

  3. Development and optimization of a two-stage gasifier for heat and power production

    Science.gov (United States)

    Kosov, V. V.; Zaichenko, V. M.

    2016-11-01

    The major methods of biomass thermal conversion are combustion in excess oxygen, gasification in reduced oxygen, and pyrolysis in the absence of oxygen. The end products of these methods are heat, gas, and liquid and solid fuels. From the point of view of energy production, none of these methods can be considered optimal. A two-stage thermal conversion of biomass, based on pyrolysis as the first stage and cracking of the pyrolysis products as the second stage, can be considered the optimal method for energy production: it allows obtaining synthesis gas consisting of hydrogen and carbon monoxide and containing no liquid or solid particles. On the basis of the two-stage cracking technology, an experimental power plant with an electric power of up to 50 kW was designed. The power plant consists of a thermal conversion module and a gas engine power generator adapted for operation on syngas. The purposes of the work were to determine the optimal operating temperature of the thermal conversion module and the optimal mass ratio of processed biomass and charcoal in the cracking chamber of the thermal conversion module. Experiments on the cracking of pyrolysis products at various temperatures show that the optimum cracking temperature is 1000 °C. From measurements of the volume of gas produced at different mass ratios of charcoal to processed wood biomass, it follows that the maximum gas volume is obtained in the mass-ratio range of 0.5-0.6.

  4. On bi-criteria two-stage transportation problem: a case study

    Directory of Open Access Journals (Sweden)

    Ahmad MURAD

    2010-01-01

    Full Text Available The study of the optimum distribution of goods between sources and destinations is one of the important topics in project economics. This importance comes from the need to minimize transportation cost, deterioration, time, etc. The classical transportation problem constitutes one of the major areas of application of linear programming. The aim of this problem is to obtain the optimum distribution of goods from different sources to different destinations that minimizes the total transportation cost. From a practical point of view, transportation problems may differ from the classical form: they may contain one or more objective functions, one or more transportation stages, and one or more types of commodity with one or more means of transport. The aim of this paper is to construct an optimization model of the transportation problem for one of the millstones companies. The model is formulated as a bi-criteria two-stage transportation problem with a special structure depending on the capacities of the suppliers, the warehouses and the requirements of the destinations. A solution algorithm is introduced to solve this class of bi-criteria two-stage transportation problems, obtaining the set of non-dominated extreme points and the efficient solutions accompanying each one, which enables the decision maker to choose the best one. The solution algorithm is mainly based on the application of methods for treating transportation problems, the theory of duality of linear programming, and methods for solving bi-criteria linear programming problems.
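
    The abstract does not reproduce the model itself. For orientation, a generic bi-criteria two-stage transportation formulation (suppliers to warehouses to destinations, with two objectives such as cost and time) might be written as below; the notation is illustrative and not taken from the paper.

```latex
% Generic bi-criteria two-stage transportation model (illustrative notation):
% x_{ij}: flow from supplier i to warehouse j;  y_{jk}: flow from warehouse j
% to destination k;  c, t: unit costs and unit times for the two criteria.
\begin{align*}
\min\ z_1 &= \sum_{i=1}^{m}\sum_{j=1}^{p} c^{(1)}_{ij}\,x_{ij}
           + \sum_{j=1}^{p}\sum_{k=1}^{n} c^{(2)}_{jk}\,y_{jk},
\qquad
\min\ z_2 = \sum_{i,j} t^{(1)}_{ij}\,x_{ij} + \sum_{j,k} t^{(2)}_{jk}\,y_{jk},\\
\text{s.t.}\quad
&\sum_{j=1}^{p} x_{ij} \le a_i \ \ (i=1,\dots,m), \qquad
 \sum_{i=1}^{m} x_{ij} \le w_j \ \ (j=1,\dots,p),\\
&\sum_{k=1}^{n} y_{jk} = \sum_{i=1}^{m} x_{ij} \ \ (j=1,\dots,p), \qquad
 \sum_{j=1}^{p} y_{jk} \ge b_k \ \ (k=1,\dots,n), \qquad
 x_{ij},\, y_{jk} \ge 0.
\end{align*}
```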

  5. An integrated two-stage support vector machine approach to forecast inundation maps during typhoons

    Science.gov (United States)

    Jhong, Bing-Chen; Wang, Jhih-Huang; Lin, Gwo-Fong

    2017-04-01

    During typhoons, accurate forecasts of hourly inundation depths are essential for inundation warning and mitigation. Due to the lack of observed data of inundation maps, sufficient observed data are not available for developing inundation forecasting models. In this paper, the inundation depths, which are simulated and validated by a physically based two-dimensional model (FLO-2D), are used as a database for inundation forecasting. A two-stage inundation forecasting approach based on Support Vector Machine (SVM) is proposed to yield 1- to 6-h lead-time inundation maps during typhoons. In the first stage (point forecasting), the proposed approach not only considers the rainfall intensity and inundation depth as model input but also simultaneously considers cumulative rainfall and forecasted inundation depths. In the second stage (spatial expansion), the geographic information of inundation grids and the inundation forecasts of reference points are used to yield inundation maps. The results clearly indicate that the proposed approach effectively improves the forecasting performance and decreases the negative impact of increasing forecast lead time. Moreover, the proposed approach is capable of providing accurate inundation maps for 1- to 6-h lead times. In conclusion, the proposed two-stage forecasting approach is suitable and useful for improving the inundation forecasting during typhoons, especially for long lead times.
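
    As a rough illustration of the two-stage idea (point forecasting with SVMs, then spatial expansion onto grid cells), the sketch below trains one support-vector regressor per lead time and spreads the point forecasts with a simple inverse-distance rule. The feature set, hyperparameters and the spatial-expansion rule are assumptions made for illustration; the paper's second stage also uses SVM together with geographic information.

```python
# Hedged sketch (not the authors' model): stage 1 fits one SVR per lead time
# from rainfall/depth features at a reference point; stage 2 spreads the point
# forecasts over grid cells with inverse-distance weighting.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_point_models(features, depths, lead_times=range(1, 7)):
    """features: (T, F) rainfall/cumulative-rainfall/depth inputs; depths: (T,)."""
    models = {}
    for h in lead_times:
        X, y = features[:-h], depths[h:]          # predict depth h hours ahead
        models[h] = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)).fit(X, y)
    return models

def spatial_expansion(point_xy, point_forecasts, grid_xy, power=2.0):
    """Inverse-distance weighting of reference-point forecasts onto grid cells."""
    d = np.linalg.norm(grid_xy[:, None, :] - point_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w @ point_forecasts) / w.sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats, depth = rng.random((200, 4)), rng.random(200)   # synthetic stand-in data
    models = fit_point_models(feats, depth)
    one_hour = models[1].predict(feats[-1:])                # 1-h-ahead point forecast
    grid = spatial_expansion(np.array([[0.0, 0.0]]), one_hour, rng.random((10, 2)))
    print(grid.shape)
```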

  6. The influence of partial oxidation mechanisms on tar destruction in TwoStage biomass gasification

    DEFF Research Database (Denmark)

    Ahrenfeldt, Jesper; Egsgaard, Helge; Stelte, Wolfgang

    2013-01-01

    Two-Stage gasification of biomass results in an almost tar-free producer gas suitable for multiple end-use purposes. In the present study, it is investigated to what extent the partial oxidation of the pyrolysis gas from the first stage is involved in direct and indirect tar destruction and conversion. The study identifies the following major impact factors on the tar content of the producer gas: oxidation temperature, excess air ratio and biomass moisture content. In an experimental setup, wood pellets were pyrolyzed and the resulting pyrolysis gas was transferred to a heated partial oxidation stage. A high moisture content of the biomass enhances the decomposition of phenol and inhibits the formation of naphthalene; this enhances tar conversion and gasification in the char bed and thus contributes indirectly to the tar destruction.

  7. Numerical simulation of municipal solid waste combustion in a novel two-stage reciprocating incinerator.

    Science.gov (United States)

    Huai, X L; Xu, W L; Qu, Z Y; Li, Z G; Zhang, F P; Xiang, G M; Zhu, S Y; Chen, G

    2008-01-01

    A mathematical model was presented in this paper for the combustion of municipal solid waste in a novel two-stage reciprocating grate furnace. Numerical simulations were performed to predict the temperature, the flow and the species distributions in the furnace, with practical operational conditions taken into account. The calculated results agree well with the test data, and the burning behavior of municipal solid waste in the novel two-stage reciprocating incinerator can be demonstrated well. The thickness of waste bed, the initial moisture content, the excessive air coefficient and the secondary air are the major factors that influence the combustion process. If the initial moisture content of waste is high, both the heat value of waste and the temperature inside incinerator are low, and less oxygen is necessary for combustion. The air supply rate and the primary air distribution along the grate should be adjusted according to the initial moisture content of the waste. A reasonable bed thickness and an adequate excessive air coefficient can keep a higher temperature, promote the burnout of combustibles, and consequently reduce the emission of dioxin pollutants. When the total air supply is constant, reducing primary air and introducing secondary air properly can enhance turbulence and mixing, prolong the residence time of flue gas, and promote the complete combustion of combustibles. This study provides an important reference for optimizing the design and operation of municipal solid wastes furnace.

  8. Two stage heterotrophy/photoinduction culture of Scenedesmus incrassatulus: potential for lutein production.

    Science.gov (United States)

    Flórez-Miranda, Liliana; Cañizares-Villanueva, Rosa Olivia; Melchy-Antonio, Orlando; Jerónimo, Fernando Martínez-; Flores-Ortíz, Cesar Mateo

    2017-09-16

    A biomass production process including two stages, heterotrophy/photoinduction (TSHP), was developed to improve biomass and lutein production by the green microalgae Scenedesmus incrassatulus. To determine the effects of different nitrogen sources (yeast extract and urea) and temperature in the heterotrophic stage, experiments using shake flask cultures with glucose as the carbon source were carried out. The highest biomass productivity and specific pigment concentrations were reached using urea+vitamins (U+V) at 30°C. The first stage of the TSHP process was done in a 6L bioreactor, and the inductions in a 3L airlift photobioreactor. At the end of the heterotrophic stage, S. incrassatulus achieved the maximal biomass concentration, increasing from 7.22gL(-1) to 17.98gL(-1) with an increase in initial glucose concentration from 10.6gL(-1) to 30.3gL(-1). However, the higher initial glucose concentration resulted in a lower specific growth rate (μ) and lower cell yield (Yx/s), possibly due to substrate inhibition. After 24h of photoinduction, lutein content in S. incrassatulus biomass was 7 times higher than that obtained at the end of heterotrophic cultivation, and the lutein productivity was 1.6 times higher compared with autotrophic culture of this microalga. Hence, the two-stage heterotrophy/photoinduction culture is an effective strategy for high cell density and lutein production in S. incrassatulus. Copyright © 2017. Published by Elsevier B.V.

  9. Aerobic and two-stage anaerobic-aerobic sludge digestion with pure oxygen and air aeration.

    Science.gov (United States)

    Zupancic, Gregor D; Ros, Milenko

    2008-01-01

    The degradability of excess activated sludge from a wastewater treatment plant was studied. The objective was establishing the degree of degradation using either air or pure oxygen at different temperatures. Sludge treated with pure oxygen was degraded at temperatures from 22 degrees C to 50 degrees C while samples treated with air were degraded between 32 degrees C and 65 degrees C. Using air, sludge is efficiently degraded at 37 degrees C and at 50-55 degrees C. With oxygen, sludge was most effectively degraded at 38 degrees C or at 25-30 degrees C. Two-stage anaerobic-aerobic processes were studied. The first anaerobic stage was always operated for 5 days HRT, and the second stage involved aeration with pure oxygen and an HRT between 5 and 10 days. Under these conditions, there is 53.5% VSS removal and 55.4% COD degradation at 15 days HRT - 5 days anaerobic, 10 days aerobic. Sludge digested with pure oxygen at 25 degrees C in a batch reactor converted 48% of sludge total Kjeldahl nitrogen to nitrate. Addition of an aerobic stage with pure oxygen aeration to the anaerobic digestion enhances ammonium nitrogen removal. In a two-stage anaerobic-aerobic sludge digestion process within 8 days HRT of the aerobic stage, the removal of ammonium nitrogen was 85%.

  10. Dynamics of installation way for the actuator of a two-stage active vibration-isolator

    Institute of Scientific and Technical Information of China (English)

    HU Li; HUANG Qi-bai; HE Xue-song; YUAN Ji-xuan

    2008-01-01

    We investigated the behaviors of an active control system of two-stage vibration isolation with the actuator installed in parallel with either the upper passive mount or the lower passive isolation mount. We revealed the relationships between the active control force of the actuator and the parameters of the passive isolators by studying the dynamics of two-stage active vibration isolation for the actuator at the foregoing two positions in turn. With the actuator installed beside the upper mount, a small active force can achieve a very good isolating effect when the frequency of the stimulating force is much larger than the natural frequency of the upper mount; a larger active force is required in the low-frequency domain; and the active force equals the stimulating force when the upper mount works within the resonance region, suggesting an approach to reducing wobble and ensuring desirable installation accuracy by increasing the upper-mount stiffness. In either the low or the high frequency region far away from the resonance region, the active force is smaller when the actuator is beside the lower mount than beside the upper mount.

  11. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements

  12. Hydrogen and methane production from household solid waste in the two-stage fermentation process

    DEFF Research Database (Denmark)

    Lui, D.; Liu, D.; Zeng, Raymond Jianxiong

    2006-01-01

    A two-stage process combining hydrogen and methane production from household solid waste was demonstrated to work successfully. A yield of 43 mL H2/g volatile solid (VS) added was generated in the first, hydrogen-producing stage, and the methane production in the second stage was 500 mL CH4/g VS added. This figure was 21% higher than the methane yield from the one-stage process, which was run as a control. Sparging of the hydrogen reactor with methane gas resulted in a doubling of the hydrogen production. pH was observed to be a key factor affecting the fermentation pathway in the hydrogen production stage. Furthermore, this study also provided direct evidence in the dynamic fermentation process that an increase in hydrogen production was reflected by an increase in the acetate-to-butyrate ratio in the liquid phase. (c) 2006 Elsevier Ltd. All rights reserved.

  13. Two-stage electrodialytic concentration of glyceric acid from fermentation broth.

    Science.gov (United States)

    Habe, Hiroshi; Shimada, Yuko; Fukuoka, Tokuma; Kitamoto, Dai; Itagaki, Masayuki; Watanabe, Kunihiko; Yanagishita, Hiroshi; Sakaki, Keiji

    2010-12-01

    The aim of this research was the application of a two-stage electrodialysis (ED) method for glyceric acid (GA) recovery from fermentation broth. First, by desalting ED, glycerate solutions (counterpart is Na+) were concentrated using ion-exchange membranes, and the glycerate recovery and energy consumption became more efficient with increasing the initial glycerate concentration (30 to 130 g/l). Second, by water-splitting ED, the concentrated glycerate was electroconverted to GA using bipolar membranes. Using a culture broth of Acetobacter tropicalis containing 68.6 g/l of D-glycerate, a final D-GA concentration of 116 g/l was obtained following the two-stage ED process. The total energy consumption for the D-glycerate concentration and its electroconversion to D-GA was approximately 0.92 kWh per 1 kg of D-GA. Copyright © 2010 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  14. Occurrence of two-stage hardening in C-Mn steel wire rods containing pearlitic microstructure

    Science.gov (United States)

    Singh, Balbir; Sahoo, Gadadhar; Saxena, Atul

    2016-09-01

    The 8 and 10 mm diameter wire rods intended for use as concrete reinforcement were produced (hot rolled) from a C-Mn steel chemistry containing various elements within the range of C: 0.55-0.65, Mn: 0.85-1.50, Si: 0.05-0.09, S: 0.04 max, P: 0.04 max and N: 0.006 max wt%. Depending upon the C and Mn contents, the product attained a pearlitic microstructure in the range of 85-93%, with the balance being polygonal ferrite transformed at prior austenite grain boundaries. The pearlitic microstructure in the wire rods helped in achieving yield strength, tensile strength, total elongation and reduction in area values within the ranges of 422-515 MPa, 790-950 MPa, 22-15% and 45-35%, respectively. Analysis of the tensile results revealed that the material experienced hardening in two stages, separable by a knee strain value of about 0.05. The occurrence of two-stage hardening in the steel, with hardening coefficients of 0.26 and 0.09, could thus be demonstrated with the help of derived relationships between flow stress and strain.
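
    The abstract does not state the fitted equation. If the reported coefficients are strain-hardening exponents, a common way to express such two-stage hardening is a piecewise Hollomon-type power law, sketched below with the knee strain and the two reported exponents; the constants K_1 and K_2 are illustrative.

```latex
% Piecewise Hollomon-type description of two-stage hardening (illustrative):
\sigma =
\begin{cases}
K_1\,\varepsilon^{\,n_1}, & \varepsilon \le \varepsilon_k \approx 0.05, \quad n_1 \approx 0.26,\\
K_2\,\varepsilon^{\,n_2}, & \varepsilon > \varepsilon_k, \quad n_2 \approx 0.09,
\end{cases}
\qquad\text{with } K_2 = K_1\,\varepsilon_k^{\,n_1-n_2}\ \text{for continuity at } \varepsilon_k.
```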

  15. Two-stage estimation for multivariate recurrent event data with a dependent terminal event.

    Science.gov (United States)

    Chen, Chyong-Mei; Chuang, Ya-Wen; Shen, Pao-Sheng

    2015-03-01

    Recurrent event data arise in longitudinal follow-up studies, where each subject may experience the same type of events repeatedly. The work in this article is motivated by the data from a study of repeated peritonitis for patients on peritoneal dialysis. Due to the aspects of medicine and cost, the peritonitis cases were classified into two types: Gram-positive and non-Gram-positive peritonitis. Further, since the death and hemodialysis therapy preclude the occurrence of recurrent events, we face multivariate recurrent event data with a dependent terminal event. We propose a flexible marginal model, which has three characteristics: first, we assume marginal proportional hazard and proportional rates models for terminal event time and recurrent event processes, respectively; second, the inter-recurrences dependence and the correlation between the multivariate recurrent event processes and terminal event time are modeled through three multiplicative frailties corresponding to the specified marginal models; third, the rate model with frailties for recurrent events is specified only on the time before the terminal event. We propose a two-stage estimation procedure for estimating unknown parameters. We also establish the consistency of the two-stage estimator. Simulation studies show that the proposed approach is appropriate for practical use. The methodology is applied to the peritonitis cohort data that motivated this study. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
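
    The abstract gives no formulas; a hedged sketch of marginal models of the kind described (proportional hazards for the terminal event, proportional rates for each recurrent event type, linked through multiplicative frailties, with the rate model defined only before the terminal event) could look as follows. The notation is illustrative rather than the authors' own.

```latex
% Hedged sketch of the marginal models described (illustrative notation):
% D = terminal event time, Z = covariates, u_0, u_1, u_2 = multiplicative frailties.
\begin{align*}
\lambda^{D}(t \mid Z, u_0) &= u_0\,\lambda_0(t)\,\exp\!\big(\beta_0^{\top} Z\big)
  && \text{terminal event: marginal proportional hazards,}\\
r_k(t \mid Z, u_k) &= u_k\, r_{0k}(t)\,\exp\!\big(\beta_k^{\top} Z\big),
  \quad k = 1,2,\ \ t \le D,
  && \text{type-$k$ recurrences: proportional rates before } D.
\end{align*}
```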

  16. Two-stage earth-to-orbit vehicles with dual-fuel propulsion in the Orbiter

    Science.gov (United States)

    Martin, J. A.

    1982-01-01

    Earth-to-orbit vehicle studies of future replacements for the Space Shuttle are needed to guide technology development. Previous studies that have examined single-stage vehicles have shown advantages for dual-fuel propulsion. Previous two-stage system studies have assumed all-hydrogen fuel for the Orbiters. The present study examined dual-fuel Orbiters and found that the system dry mass could be reduced with this concept. The possibility of staging the booster at a staging velocity low enough to allow coast-back to the launch site is shown to be beneficial, particularly in combination with a dual-fuel Orbiter. An engine evaluation indicated the same ranking of engines as did a previous single-stage study. Propane and RP-1 fuels result in lower vehicle dry mass than methane, and staged-combustion engines are preferred over gas-generator engines. The sensitivity to the engine selection is less for two-stage systems than for single-stage systems.

  17. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    Science.gov (United States)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called awareness cascade, during which individuals exhibit herd-like behavior because they are making decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate the epidemic spreading with awareness cascade, we propose a local awareness controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of epidemic threshold βc with the local awareness ratio α approximating 0.5 , which induces two-stage effects on epidemic threshold and the final epidemic size. These findings indicate that the increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc≈0.5 . The results can give us a better understanding of why some epidemics cannot break out in reality and also provide a potential access to suppressing and controlling the awareness cascading systems.
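
    A minimal simulation sketch of a local-awareness-controlled contagion on a two-layer network is given below: an unaware node becomes aware once the fraction of aware neighbours on the information layer reaches the local awareness ratio, and aware nodes are infected with a reduced probability. The update rules and parameter values are illustrative assumptions, not the authors' model specification.

```python
# Hedged sketch (not the authors' code): local-awareness-controlled contagion
# on a two-layer (multiplex) network.  `alpha` is the local awareness ratio,
# `gamma` the infection-probability reduction for aware nodes.
import random
import networkx as nx

def simulate(n=1000, beta=0.08, gamma=0.4, mu=0.2, alpha=0.5, steps=200, seed=1):
    rng = random.Random(seed)
    info = nx.erdos_renyi_graph(n, 0.01, seed=seed)          # awareness layer
    contact = nx.erdos_renyi_graph(n, 0.01, seed=seed + 1)   # epidemic layer
    infected = {rng.randrange(n)}
    aware = set(infected)                                     # infected nodes are aware
    for _ in range(steps):
        # awareness cascade: herd-like local rule on the information layer
        newly_aware = {
            v for v in info if v not in aware and info.degree(v) > 0
            and sum(u in aware for u in info[v]) / info.degree(v) >= alpha
        }
        aware |= newly_aware
        # epidemic spreading on the contact layer (SIS-type update)
        new_inf, recovered = set(), set()
        for v in infected:
            for u in contact[v]:
                if u not in infected:
                    p = beta * (gamma if u in aware else 1.0)
                    if rng.random() < p:
                        new_inf.add(u)
            if rng.random() < mu:
                recovered.add(v)
        infected = (infected | new_inf) - recovered
        aware |= new_inf
    return len(infected) / n

if __name__ == "__main__":
    print("final infected fraction:", simulate())
```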

  18. Configuration Consideration for Expander in Transcritical Carbon Dioxide Two-Stage Compression Cycle

    Institute of Scientific and Technical Information of China (English)

    MA Yitai; YANG Junlan; GUAN Haiqing; LI Minxia

    2005-01-01

    To investigate the configuration of the expander in a transcritical carbon dioxide two-stage compression cycle, the best place in the cycle to reinvest the recovered work should be identified so as to improve the system efficiency. The expander and the compressor are connected to the same shaft and integrated into one unit, with the latter being driven by the former, so that the transfer loss and leakage loss can be decreased greatly. In these systems, the expander can be coupled either with the first-stage compressor (shortened as the DCDL cycle) or with the second-stage compressor (shortened as the DCDH cycle), but the two configurations give different performances. By setting up theoretical models for the two expander configurations in the transcritical carbon dioxide two-stage compression cycle, the first and second laws of thermodynamics are used to analyze the coefficient of performance, exergy efficiency, inter-stage pressure, discharge temperature and exergy losses of each component for the two cycles. The model results show that the performance of the DCDH cycle is better than that of the DCDL cycle. The analysis results provide a theoretical basis for practical design and operation.

  19. Two-stage coordination multi-radio multi-channel mac protocol for wireless mesh networks

    CERN Document Server

    Zhao, Bingxuan

    2011-01-01

    Within the wireless mesh network, a bottleneck problem arises as the number of concurrent traffic flows (NCTF) increases over a single common control channel, as it is for most conventional networks. To alleviate this problem, this paper proposes a two-stage coordination multi-radio multi-channel MAC (TSC-M2MAC) protocol that designates all available channels as both control channels and data channels in a time division manner through a two-stage coordination. At the first stage, a load balancing breadth-first-search-based vertex coloring algorithm for multi-radio conflict graph is proposed to intelligently allocate multiple control channels. At the second stage, a REQ/ACK/RES mechanism is proposed to realize dynamical channel allocation for data transmission. At this stage, the Channel-and-Radio Utilization Structure (CRUS) maintained by each node is able to alleviate the hidden nodes problem; also, the proposed adaptive adjustment algorithm for the Channel Negotiation and Allocation (CNA) sub-interval is ab...

  20. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    Science.gov (United States)

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading the third generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focus on establishing high performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the needed biomass quantity for industrial-scale microalgal biofuels production. The extremely dilute nature of large-volume microalgal suspension and the small size of microalgae cells in suspension create a significant processing cost during dewatering and this has raised major concerns towards the economic success of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis to establish an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged a promising technique with total energy input of 0.041 kWh, 0.05 kg CO2 emissions and a cost of $ 0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075

  1. Two-stage image segmentation based on edge and region information

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A two-stage method for image segmentation based on edge and region information is proposed. Different deformation schemes are used at the two stages to segment the object correctly in the image plane. In the first stage, the contour of the model is divided hierarchically into several segments, each of which deforms using an affine transformation. After the contour has deformed to the approximate boundary of the object, a fine-matching mechanism that uses statistical information of the local region to redefine the external energy of the model makes the contour fit the object's boundary exactly. The algorithm is effective: the hierarchical segmental deformation makes use of both global and local information of the image, the affine transformation keeps the consistency of the model, and reformulated approaches to computing the internal and external energy are proposed to reduce the algorithmic complexity. The adaptive method for defining the search area in the second stage makes the model converge quickly. The experimental results indicate that the proposed model is effective, robust to local minima, and able to search for concave objects.

  2. Waste-gasification efficiency of a two-stage fluidized-bed gasification system.

    Science.gov (United States)

    Liu, Zhen-Shu; Lin, Chiou-Liang; Chang, Tsung-Jen; Weng, Wang-Chang

    2016-02-01

    This study employed a two-stage fluidized-bed gasifier as a gasification reactor and two additives (CaO and activated carbon) as the Stage-II bed material to investigate the effects of the operating temperature (700°C, 800°C, and 900°C) on the syngas composition, total gas yield, and gas-heating value during simulated waste gasification. The results showed that when the operating temperature increased from 700 to 900°C, the molar percentage of H2 in the syngas produced by the two-stage gasification process increased from 19.4 to 29.7mol% and that the total gas yield and gas-heating value also increased. When CaO was used as the additive, the molar percentage of CO2 in the syngas decreased, and the molar percentage of H2 increased. When activated carbon was used, the molar percentage of CH4 in the syngas increased, and the total gas yield and gas-heating value increased. Overall, CaO had better effects on the production of H2, whereas activated carbon clearly enhanced the total gas yield and gas-heating value. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  3. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction.

    Science.gov (United States)

    Zhang, Long; Li, Kang; Bai, Er-Wei; Irwin, George W

    2015-08-01

    A number of neural networks can be formulated as the linear-in-the-parameters models. Training such networks can be transformed to a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped into a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.
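
    For readers unfamiliar with orthogonal-least-squares subset selection, the sketch below shows a plain forward selection by error reduction ratio followed by a naive backward swap pass. It is only a generic illustration of the forward/backward idea; the paper's contribution (reusing orthogonal quantities between the two stages and a term-exchanging scheme) is not reproduced here.

```python
# Hedged sketch (not the paper's implementation): greedy forward selection of
# regressor terms for a linear-in-the-parameters model using the error
# reduction ratio (ERR), then a naive backward pass that swaps a selected term
# for an unused candidate whenever that lowers the residual sum of squares.
import numpy as np

def err(y, q):
    """Error reduction ratio of the orthogonalised regressor q for target y."""
    return (y @ q) ** 2 / ((q @ q) * (y @ y))

def forward_ols(P, y, n_terms):
    """Select n_terms columns of the candidate matrix P by maximum ERR."""
    selected, Q = [], []
    for _ in range(n_terms):
        best, best_q, best_err = None, None, -1.0
        for j in range(P.shape[1]):
            if j in selected:
                continue
            q = P[:, j].astype(float).copy()
            for qk in Q:                      # Gram-Schmidt against chosen terms
                q -= (q @ qk) / (qk @ qk) * qk
            if q @ q < 1e-12:                 # numerically dependent column
                continue
            e = err(y, q)
            if e > best_err:
                best, best_q, best_err = j, q, e
        selected.append(best)
        Q.append(best_q)
    return selected

def backward_refine(P, y, selected):
    """Swap a selected term for an unused candidate if it lowers the SSE."""
    def sse(cols):
        theta, *_ = np.linalg.lstsq(P[:, cols], y, rcond=None)
        resid = y - P[:, cols] @ theta
        return float(resid @ resid)
    improved = True
    while improved:
        improved = False
        for i in range(len(selected)):
            for j in range(P.shape[1]):
                if j in selected:
                    continue
                trial = selected[:i] + [j] + selected[i + 1:]
                if sse(trial) < sse(selected):
                    selected, improved = trial, True
                    break
            if improved:
                break
    return selected
```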

  4. Two-stage numerical simulation for temperature profile in furnace of tangentially fired pulverized coal boiler

    Institute of Scientific and Technical Information of China (English)

    ZHOU Nai-jun; XU Qiong-hui; ZHOU Ping

    2005-01-01

    Considering the fact that the temperature distribution in the furnace of a tangentially fired pulverized coal boiler is difficult to measure and monitor, a two-stage numerical simulation method was put forward. First, a multi-field coupling simulation of typical working conditions was carried out off-line with the software CFX-4.3, and an expression for the temperature profile as a function of the operating parameters was obtained. According to the real-time operating parameters, the temperature at an arbitrary point of the furnace can then be calculated from this expression. Thus the temperature profile can be shown on-line, and monitoring of the combustion state in the furnace is realized. The simulation model was checked against parameters measured in an operating boiler, DG130-9.8/540. The maximum relative error is less than 12% and the absolute error is less than 120 ℃, which shows that the proposed two-stage simulation method is reliable and able to satisfy the requirements of industrial application.

  5. A low-voltage sense amplifier with two-stage operational amplifier clamping for flash memory

    Science.gov (United States)

    Guo, Jiarong

    2017-04-01

    A low-voltage sense amplifier with a reference current generator utilizing a two-stage operational amplifier clamp structure for flash memory is presented in this paper, capable of operating with a minimum supply voltage of 1 V. A new reference current generation circuit, composed of a reference cell and a two-stage operational amplifier clamping the drain pole of the reference cell, is used to generate the reference current, which avoids the threshold limitation caused by the current mirror transistor in the traditional sense amplifier. A novel reference voltage generation circuit using a dummy bit-line structure without pull-down current is also adopted, which not only improves the sense window, enhancing read precision, but also saves power consumption. The sense amplifier was implemented in a flash memory realized in 90 nm flash technology. Experimental results show that the access time is 14.7 ns with a power supply of 1.2 V and the slow corner at 125 °C. Project supported by the National Natural Science Foundation of China (No. 61376028).

  6. Two-stage high temperature sludge gasification using the waste heat from hot blast furnace slags.

    Science.gov (United States)

    Sun, Yongqi; Zhang, Zuotai; Liu, Lili; Wang, Xidong

    2015-12-01

    Nowadays, the disposal of sewage sludge from wastewater treatment plants and the recovery of waste heat from the steel industry are two important environmental issues. To integrate these two problems, a two-stage high-temperature sludge gasification approach using the waste heat in hot slags was investigated herein. The whole process was divided into two stages, i.e., low-temperature sludge pyrolysis at ≤900°C in an argon agent and high-temperature char gasification at ≥900°C in a CO2 agent, during which the heat required was supplied by hot slags in different temperature ranges. Both the thermodynamic and kinetic mechanisms were identified, and it was indicated that an Avrami-Erofeev model could best interpret the char gasification stage. Furthermore, a schematic concept of this strategy was portrayed, based on which the potential CO yield and CO2 emission reduction achieved in China could be ~1.92 × 10^9 m^3 and 1.93 × 10^6 t, respectively.

  7. A two-stage broadcast message propagation model in social networks

    Science.gov (United States)

    Wang, Dan; Cheng, Shun-Jun

    2016-11-01

    Message propagation in social networks is becoming a popular topic in complex networks. One type of message in social networks is the broadcast message: a message with a unique destination that is unknown to the publisher, such as 'lost and found'. Its propagation always has two stages. Due to this feature, rumor propagation models and epidemic propagation models have difficulty describing this kind of propagation accurately. In this paper, an improved two-stage susceptible-infected-removed model is proposed. We introduce the concepts of a first forwarding probability and a second forwarding probability. Another part of our work quantifies how the chance of successful message transmission at each level is influenced by several factors, including the topology of the network, the receiving probability, the first-stage forwarding probability, the second-stage forwarding probability, and the length of the shortest path between the publisher and the relevant destination. The proposed model has been simulated on real networks, and the results prove the model's effectiveness.
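
    A minimal sketch of a two-stage forwarding simulation is shown below, with a first and a second forwarding probability and a receiving probability; the rule for switching a copy of the message into the second stage is an illustrative assumption rather than the authors' model.

```python
# Hedged sketch (not the authors' model): a minimal two-stage broadcast-message
# simulation on a network.  A message spreads with forwarding probability p1
# until it reaches a node "related" to the unknown destination, after which it
# spreads with probability p2.  All names and rules are illustrative.
import random
import networkx as nx

def broadcast_two_stage(G, publisher, destination, p_receive, p1, p2, seed=0):
    rng = random.Random(seed)
    stage = {publisher: 1}          # node -> stage at which it holds the message
    frontier = [publisher]
    while frontier:
        nxt = []
        for v in frontier:
            p_fwd = p1 if stage[v] == 1 else p2   # stage-dependent forwarding
            for u in G[v]:
                if u in stage:
                    continue
                if rng.random() < p_receive and rng.random() < p_fwd:
                    # a neighbour of the destination switches the message to stage 2
                    stage[u] = 2 if destination in G[u] or stage[v] == 2 else 1
                    if u == destination:
                        return True, len(stage)
                    nxt.append(u)
        frontier = nxt
    return destination in stage, len(stage)

if __name__ == "__main__":
    G = nx.watts_strogatz_graph(500, 6, 0.1, seed=42)
    ok, reached = broadcast_two_stage(G, publisher=0, destination=250,
                                      p_receive=0.8, p1=0.6, p2=0.9)
    print("delivered:", ok, "| nodes reached:", reached)
```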

  8. A two-stage method to determine optimal product sampling considering dynamic potential market.

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level, that the rate of change of the potential market has no significant influence, and that repeat purchase has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when they are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level.

  9. Two staged incentive contract focused on efficiency and innovation matching in critical chain project management

    Directory of Open Access Journals (Sweden)

    Min Zhang

    2014-09-01

    Full Text Available Purpose: The purpose of this paper is to define the relatively optimal incentive contract to effectively encourage employees to improve work efficiency while actively implementing innovative behavior. Design/methodology/approach: This paper analyzes a two-staged incentive contract coordinating efficiency and innovation in Critical Chain Project Management using learning real options, based on principal-agent theory. A situational experiment is used to analyze the validity of the basic model. Findings: The two-staged incentive scheme is more suitable for employees to create and implement learning real options, so that they engage efficiently in the innovation process in Critical Chain Project Management. We prove that the combination of tolerance for early failure and reward for long-term success is effective in motivating innovation. Research limitations/implications: We do not include the individual characteristics of uncertain perception, which might affect the consistency of external validity. The basic model and the experiment design need improvement. Practical implications: Project managers should pay closer attention to early innovation behavior and to monitoring feedback on competition time in the implementation of Critical Chain Project Management. Originality/value: The central contribution of this paper is the theoretical and experimental analysis of incentive schemes for innovation in Critical Chain Project Management using principal-agent theory, to encourage the completion of CCPM methods as well as imitative free-riding on the creative ideas of other members in the team.

  10. Selective capsulotomies of the expanded breast as a remodelling method in two-stage breast reconstruction.

    Science.gov (United States)

    Grimaldi, Luca; Campana, Matteo; Brandi, Cesare; Nisi, Giuseppe; Brafa, Anna; Calabrò, Massimiliano; D'Aniello, Carlo

    2013-06-01

    The two-stage breast reconstruction with tissue expander and prosthesis is nowadays a common method for achieving a satisfactory appearance in selected patients who had a mastectomy, but its most common aesthetic drawback is represented by an excessive volumetric increment of the superior half of the reconstructed breast, with a convexity of the profile in that area. A possible solution to limit this effect, and to fulfil the inferior pole, may be obtained by reducing the inferior tissue resistance by means of capsulotomies. This study reports the effects of various types of capsulotomies, performed in 72 patients after removal of the mammary expander, with the aim of emphasising the convexity of the inferior mammary aspect in the expanded breast. According to each kind of desired modification, possible solutions are described. On the basis of subjective and objective evaluations, an overall high degree of satisfaction has been evidenced. The described selective capsulotomies, when properly carried out, may significantly improve the aesthetic results in two-stage reconstructed breasts, with no additional scars, with minimal risks, and with little lengthening of the surgical time.

  11. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  12. Two-Stage Nerve Graft in Severe Scar: A Time-Course Study in a Rat Model

    Directory of Open Access Journals (Sweden)

    Shayan Zadegan

    2015-04-01

    According to the EPT and WRL, the two-stage nerve graft showed significant improvement (P=0.020 and P =0.017 respectively. The TOA showed no significant difference between the two groups. The total vascular index was significantly higher in the two-stage nerve graft group (P

  13. A New Approach for Proving or Generating Combinatorial Identities

    Science.gov (United States)

    Gonzalez, Luis

    2010-01-01

    A new method for proving, in an immediate way, many combinatorial identities is presented. The method is based on a simple recursive combinatorial formula involving n + 1 arbitrary real parameters. Moreover, this formula enables one not only to prove, but also generate many different combinatorial identities (not being required to know them "a…

  14. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis†

    Science.gov (United States)

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-01-01

    OBJECTIVES Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. METHODS From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140, two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). RESULTS The mean postoperative follow-up period was 12.5 (range: 1–24 months). After surgery, hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. CONCLUSIONS Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, improving permanently the quality of life in patients with palmar and axillary hyperhidrosis. PMID:23442937

  15. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis.

    Science.gov (United States)

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-06-01

    Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140, two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). The mean postoperative follow-up period was 12.5 (range: 1-24 months). After surgery, hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, improving permanently the quality of life in patients with palmar and axillary hyperhidrosis.

  16. Combinatorial biosynthesis of medicinal plant secondary metabolites

    NARCIS (Netherlands)

    Julsing, Mattijs K.; Koulman, Albert; Woerdenbag, Herman J.; Quax, Wim J.; Kayser, Oliver

    2006-01-01

    Combinatorial biosynthesis is a new tool in the generation of novel natural products and for the production of rare and expensive natural products. The basic concept is combining metabolic pathways in different organisms on a genetic level. As a consequence heterologous organisms provide precursors

  17. Polyhedral techniques in combinatorial optimization I: theory

    NARCIS (Netherlands)

    Aardal, K.; Hoesel, S. van

    2001-01-01

    Combinatorial optimization problems appear in many disciplines ranging from management and logistics to mathematics, physics, and chemistry. These problems are usually relatively easy to formulate mathematically, but most of them are computationally hard due to the restriction that a subset of the v

  18. Combinatorial optimization tolerances calculated in linear time

    NARCIS (Netherlands)

    Goldengorin, Boris; Sierksma, Gerard

    2003-01-01

    For a given optimal solution to a combinatorial optimization problem, we show, under very natural conditions, the equality of the minimal values of upper and lower tolerances, where the upper tolerances are calculated for the given optimal solution and the lower tolerances outside the optimal

  19. Grobner Basis Approach to Some Combinatorial Problems

    Directory of Open Access Journals (Sweden)

    Victor Ufnarovski

    2012-10-01

    Full Text Available We consider several simple combinatorial problems and discuss different ways to express them using polynomial equations, and we try to describe the Gröbner bases of the corresponding ideals. The main instruments are complete symmetric polynomials, which help to express different conditions in a rather compact way.

  20. Grobner Basis Approach to Some Combinatorial Problems

    OpenAIRE

    2012-01-01

    We consider several simple combinatorial problems and discuss different ways to express them using polynomial equations, and we try to describe the Gröbner bases of the corresponding ideals. The main instruments are complete symmetric polynomials, which help to express different conditions in a rather compact way.

  1. Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies

    NARCIS (Netherlands)

    Ketema, Jeroen; Simonsen, Jakob Grue

    2010-01-01

    We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in fi

  2. Erratum to Ordered Partial Combinatory Algebras

    NARCIS (Netherlands)

    Hofstra, P.; Oosten, J. van

    2003-01-01

    To our regret, the paper Ordered Partial Combinatory Algebras contains a mistake, which we correct here. The flaw concerns the definition of computational density (Definition 3.5), which appeared in Section 3.3, page 451. This definition is too rigid and, as a consequence, Lemma 3.6 on page 452

  3. A Model of Students' Combinatorial Thinking

    Science.gov (United States)

    Lockwood, Elise

    2013-01-01

    Combinatorial topics have become increasingly prevalent in K-12 and undergraduate curricula, yet research on combinatorics education indicates that students face difficulties when solving counting problems. The research community has not yet addressed students' ways of thinking at a level that facilitates deeper understanding of how students…

  4. A Model of Students' Combinatorial Thinking

    Science.gov (United States)

    Lockwood, Elise

    2013-01-01

    Combinatorial topics have become increasingly prevalent in K-12 and undergraduate curricula, yet research on combinatorics education indicates that students face difficulties when solving counting problems. The research community has not yet addressed students' ways of thinking at a level that facilitates deeper understanding of how students…

  5. Combinatorial optimization tolerances calculated in linear time

    NARCIS (Netherlands)

    Goldengorin, Boris; Sierksma, Gerard

    2003-01-01

    For a given optimal solution to a combinatorial optimization problem, we show, under very natural conditions, the equality of the minimal values of upper and lower tolerances, where the upper tolerances are calculated for the given optimal solution and the lower tolerances outside the optimal soluti

  6. Recent developments in dynamic combinatorial chemistry

    NARCIS (Netherlands)

    Otto, Sijbren; Furlan, Ricardo L.E.; Sanders, Jeremy K.M.

    2002-01-01

    Generating combinatorial libraries under equilibrium conditions has the important advantage that the libraries are adaptive (i.e. they can respond to exterior influences in the form of molecular recognition events). Thus, a ligand will direct and amplify the formation of its ideal receptor and vice

  7. Boltzmann Samplers for Colored Combinatorial Objects

    CERN Document Server

    Bodini, Olivier

    2009-01-01

    In this paper, we give a general framework for the Boltzmann generation of colored objects belonging to combinatorial constructible classes. We propose an intuitive notion called profiled objects which allows the sampling of size-colored objects (and also of k-colored objects) although the corresponding class cannot be described by an analytic ordinary generating function.

  8. Combinatorial biosynthesis of medicinal plant secondary metabolites

    NARCIS (Netherlands)

    Julsing, Mattijs K.; Koulman, Albert; Woerdenbag, Herman J.; Quax, Wim J.; Kayser, Oliver

    2006-01-01

    Combinatorial biosynthesis is a new tool in the generation of novel natural products and for the production of rare and expensive natural products. The basic concept is combining metabolic pathways in different organisms on a genetic level. As a consequence heterologous organisms provide precursors

  9. PIPERIDINE OLIGOMERS AND COMBINATORIAL LIBRARIES THEREOF

    DEFF Research Database (Denmark)

    1999-01-01

    The present invention relates to piperidine oligomers, methods for the preparation of piperidine oligomers and compound libraries thereof, and the use of piperidine oligomers as drug substances. The present invention also relates to the use of combinatorial libraries of piperidine oligomers ... in libraries (arrays) of compounds especially suitable for screening purposes.

  10. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
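
    The two-stage selection logic described above can be sketched generically as follows; `coarse_register`, `deformable_register` and `similarity` are assumed helper functions standing in for the low-cost and full-fledged registrations and the relevance metric, and the inference model linking the two metrics is not reproduced.

```python
# Hedged sketch (not the authors' implementation): generic two-stage atlas
# subset selection for multi-atlas segmentation.  Stage 1 ranks all atlases
# with a cheap similarity computed after a coarse alignment and keeps an
# augmented subset; stage 2 re-ranks only that subset with an expensive
# deformable registration and keeps the final fusion set.
def two_stage_atlas_selection(target, atlases, augmented_size, fusion_size,
                              coarse_register, deformable_register, similarity):
    # Stage 1: cheap preliminary relevance metric on every atlas
    prelim = []
    for atlas in atlases:
        warped = coarse_register(atlas, target)           # low-cost alignment
        prelim.append((similarity(warped, target), atlas))
    prelim.sort(key=lambda pair: pair[0], reverse=True)
    augmented = [atlas for _, atlas in prelim[:augmented_size]]

    # Stage 2: full-fledged registration only on the augmented subset
    refined = []
    for atlas in augmented:
        warped = deformable_register(atlas, target)       # expensive alignment
        refined.append((similarity(warped, target), warped))
    refined.sort(key=lambda pair: pair[0], reverse=True)
    return [warped for _, warped in refined[:fusion_size]]  # fusion set
```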

  11. Two-stage vs single-stage management for concomitant gallstones and common bile duct stones

    Institute of Scientific and Technical Information of China (English)

    Jiong Lu; Yao Cheng; Xian-Ze Xiong; Yi-Xin Lin; Si-Jia Wu; Nan-Sheng Cheng

    2012-01-01

    AIM: To evaluate the safety and effectiveness of two-stage vs single-stage management for concomitant gallstones and common bile duct stones. METHODS: Four databases, including PubMed, Embase, the Cochrane Central Register of Controlled Trials and the Science Citation Index up to September 2011, were searched to identify all randomized controlled trials (RCTs). Data were extracted from the studies by two independent reviewers. The primary outcomes were stone clearance from the common bile duct, postoperative morbidity and mortality. The secondary outcomes were conversion to other procedures, number of procedures per patient, length of hospital stay, total operative time, hospitalization charges, patient acceptance and quality of life scores. RESULTS: Seven eligible RCTs [five trials (n = 621) comparing preoperative endoscopic retrograde cholangiopancreatography (ERCP)/endoscopic sphincterotomy (EST) + laparoscopic cholecystectomy (LC) with LC + laparoscopic common bile duct exploration (LCBDE); two trials (n = 166) comparing postoperative ERCP/EST + LC with LC + LCBDE], composed of 787 patients in total, were included in the final analysis. The meta-analysis detected no statistically significant difference between the two groups in stone clearance from the common bile duct [risk ratios (RR) = -0.10, 95% confidence intervals (CI): -0.24 to 0.04, P = 0.17], postoperative morbidity (RR = 0.79, 95% CI: 0.58 to 1.10, P = 0.16), mortality (RR = 2.19, 95% CI: 0.33 to 14.67, P = 0.42), conversion to other procedures (RR = 1.21, 95% CI: 0.54 to 2.70, P = 0.39), length of hospital stay (MD = 0.99, 95% CI: -1.59 to 3.57, P = 0.45), total operative time (MD = 12.14, 95% CI: -1.83 to 26.10, P = 0.09). Two-stage (LC + ERCP/EST) management clearly required more procedures per patient than single-stage (LC + LCBDE) management. CONCLUSION: Single-stage management is equivalent to two-stage management but requires fewer procedures. However, patient's condition, operator's expertise and local resources should be taken into account in

  12. Free-Riding and Free-Labor in Combinatorial Agency

    Science.gov (United States)

    Babaioff, Moshe; Feldman, Michal; Nisan, Noam

    This paper studies a setting where a principal needs to motivate teams of agents whose efforts lead to an outcome that stochastically depends on the combination of agents’ actions, which are not directly observable by the principal. In [1] we suggest and study a basic “combinatorial agency” model for this setting. In this paper we expose a somewhat surprising phenomenon found in this setting: cases where the principal can gain by asking agents to reduce their effort level, even when this increased effort comes for free. This phenomenon cannot occur in a setting where the principal can observe the agents’ actions, but we show that it can occur in the hidden-actions setting. We prove that for the family of technologies that exhibit “increasing returns to scale” this phenomenon cannot happen, and that in some sense this is a maximal family of technologies for which the phenomenon cannot occur. Finally, we relate our results to a basic question in production design in firms.

  13. Combinatorial structures to modeling simple games and applications

    Science.gov (United States)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the basis for representing some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs; this lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms; this lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  14. Two-stage dilute-acid and organic-solvent lignocellulosic pretreatment for enhanced bioprocessing

    Energy Technology Data Exchange (ETDEWEB)

    Brodeur, G.; Telotte, J.; Stickel, J. J.; Ramakrishnan, S.

    2016-11-01

    A two-stage pretreatment approach for biomass is developed in the current work in which dilute acid (DA) pretreatment is followed by a solvent-based pretreatment (N-methyl morpholine N-oxide -- NMMO). When the combined pretreatment (DAWNT) is applied to sugarcane bagasse and corn stover, the rates of hydrolysis and overall yields (>90%) improve dramatically, and under certain conditions the additional NMMO step takes 48 h off the hydrolysis time needed to reach similar conversions. DAWNT shows a 2-fold increase in characteristic rates and also fractionates the different components of biomass -- DA treatment removes the hemicellulose, while the remaining cellulose is broken down to simple sugars by enzymatic hydrolysis after NMMO treatment. The remaining residual solid is high-purity lignin. Future work will focus on developing a full-scale economic analysis of DAWNT for use in biomass fractionation.

  15. Reconstruction of Gene Regulatory Networks Based on Two-Stage Bayesian Network Structure Learning Algorithm

    Institute of Scientific and Technical Information of China (English)

    Gui-xia Liu; Wei Feng; Han Wang; Lei Liu; Chun-guang Zhou

    2009-01-01

    In the post-genomic biology era, the reconstruction of gene regulatory networks from microarray gene expression data is very important for understanding the underlying biological system, and it has been a challenging task in bioinformatics. The Bayesian network model has been used to reconstruct gene regulatory networks because of its advantages, but how to determine the network structure and parameters still needs to be explored. This paper proposes a two-stage structure learning algorithm which integrates an immune evolution algorithm to build a Bayesian network. The new algorithm is evaluated with the use of both simulated and yeast cell cycle data. The experimental results indicate that the proposed algorithm can find many of the known real regulatory relationships from the literature and predict other unknown ones with high validity and accuracy.

  16. The Application of Two-stage Structure Decomposition Technique to the Study of Industrial Carbon Emissions

    Institute of Scientific and Technical Information of China (English)

    Yanqiu HE

    2015-01-01

    Control of total carbon emissions is the ultimate goal of carbon emission reduction, and industrial carbon emissions are the basic units of the total. Building on existing research, this paper proposes a two-stage input-output structure decomposition method that fully combines the input-output method with structure decomposition analysis. More comprehensive technical-progress indicators were chosen than in previous studies, including the utilization efficiency of all kinds of intermediate inputs such as energy and non-energy products, and these indicators were mapped to the factors affecting the carbon emissions of different industries. Through this analysis, the contribution rate of each factor to industrial carbon emissions was obtained. Thus, a theoretical basis and data support are provided for the control of China's total carbon emissions from the perspective of industrial emissions.

  17. A two-stage metal valorisation process from electric arc furnace dust (EAFD)

    Directory of Open Access Journals (Sweden)

    H. Issa

    2016-04-01

    Full Text Available This paper demonstrates the possibility of separate zinc and lead recovery from coal composite pellets, composed of EAFD together with other synergetic iron-bearing wastes and by-products (mill scale, pyrite cinder, magnetite concentrate), through a two-stage process. The results show that in the first, low-temperature stage, performed in an electro-resistant furnace, removal of lead is enabled due to the presence of chlorides in the system. In the second stage, performed at higher temperatures in a Direct Current (DC) plasma furnace, valorisation of zinc is carried out. Using this process, several final products were obtained, including a higher-purity zinc oxide which, by its properties, corresponds to washed Waelz oxide.

  18. A wavelet-based two-stage near-lossless coder.

    Science.gov (United States)

    Yea, Sehoon; Pearlman, William A

    2006-11-01

    In this paper, we present a two-stage near-lossless compression scheme. It belongs to the class of "lossy plus residual coding" and consists of a wavelet-based lossy layer followed by arithmetic coding of the quantized residual to guarantee a given L∞ error bound in the pixel domain. We focus on the selection of the optimum bit rate for the lossy layer to achieve the minimum total bit rate. Unlike other similar lossy plus lossless approaches using a wavelet-based lossy layer, the proposed method does not require iteration of decoding and inverse discrete wavelet transform in succession to locate the optimum bit rate. We propose a simple method to estimate the optimal bit rate, with a theoretical justification based on the critical rate argument from the rate-distortion theory and the independence of the residual error.
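
    The near-lossless guarantee in this class of coders comes from quantizing the pixel-domain residual with a step tied to the allowed error. The sketch below shows only that residual stage, assuming a generic lossy reconstruction is already available; the wavelet lossy layer, optimal rate selection, and arithmetic coder of the paper are not reproduced.

```python
# Minimal sketch of the residual-quantization stage of a "lossy plus residual"
# near-lossless coder: quantizing the residual with step (2*delta + 1)
# bounds the reconstruction error by delta in the pixel domain.
# The lossy layer here is a stand-in (random perturbation), not a wavelet coder.
import numpy as np

def encode_residual(original, lossy_recon, delta):
    residual = original.astype(np.int32) - lossy_recon.astype(np.int32)
    step = 2 * delta + 1
    # Symmetric (mid-tread) quantization of the residual.
    q = np.sign(residual) * ((np.abs(residual) + delta) // step)
    return q  # these indices would be entropy coded (e.g. arithmetic coding)

def decode(lossy_recon, q, delta):
    step = 2 * delta + 1
    return lossy_recon.astype(np.int32) + q * step

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(32, 32))
    lossy = np.clip(img + rng.integers(-10, 11, size=img.shape), 0, 255)
    delta = 2
    q = encode_residual(img, lossy, delta)
    rec = decode(lossy, q, delta)
    assert np.max(np.abs(rec - img)) <= delta  # L-infinity bound holds
    print("max abs error:", np.max(np.abs(rec - img)))
```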

  19. Two-Stage Over-the-Air (OTA) Test Method for LTE MIMO Device Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Ya Jing

    2012-01-01

    Full Text Available With MIMO technology being adopted by the wireless communication standards LTE and HSPA+, MIMO OTA research has attracted wide interest from both industry and academia. Parallel studies are underway in COST2100, CTIA, and 3GPP RAN WG4. The major test challenge for MIMO OTA is how to create a repeatable scenario which accurately reflects the MIMO antenna radiation performance in a realistic wireless propagation environment. Different MIMO OTA methods differ in the way to reproduce a specified MIMO channel model. This paper introduces a novel, flexible, and cost-effective method for measuring MIMO OTA using a two-stage approach. In the first stage, the antenna pattern is measured in an anechoic chamber using a nonintrusive approach, that is without cabled connections or modifying the device. In the second stage, the antenna pattern is convolved with the chosen channel model in a channel emulator to measure throughput using a cabled connection.

  20. Two stages of parafoveal processing during reading: Evidence from a display change detection task.

    Science.gov (United States)

    Angele, Bernhard; Slattery, Timothy J; Rayner, Keith

    2016-08-01

    We used a display change detection paradigm (Slattery, Angele, & Rayner, 2011, Human Perception and Performance, 37, 1924-1938) to investigate whether display change detection uses orthographic regularity and whether detection is affected by the processing difficulty of the word preceding the boundary that triggers the display change. Subjects were significantly more sensitive to display changes when the change was from a nonwordlike preview than when the change was from a wordlike preview, but the preview benefit effect on the target word was not affected by whether the preview was wordlike or nonwordlike. Additionally, we did not find any influence of preboundary word frequency on display change detection performance. Our results suggest that display change detection and lexical processing do not use the same cognitive mechanisms. We propose that parafoveal processing takes place in two stages: an early, orthography-based, preattentional stage, and a late, attention-dependent lexical access stage.

  1. Enhanced biodiesel production in Neochloris oleoabundans by a semi-continuous process in two stage photobioreactors.

    Science.gov (United States)

    Yoon, Se Young; Hong, Min Eui; Chang, Won Seok; Sim, Sang Jun

    2015-07-01

    Under autotrophic conditions, highly productive biodiesel production was achieved using a semi-continuous culture system in Neochloris oleoabundans. In particular, the flue gas generated by combustion of liquefied natural gas and natural solar radiation were used for cost-effective microalgal culture system. In semi-continuous culture, the greater part (~80%) of the culture volume containing vegetative cells grown under nitrogen-replete conditions in a first photobioreactor (PBR) was directly transferred to a second PBR and cultured sequentially under nitrogen-deplete conditions for accelerating oil accumulation. As a result, in semi-continuous culture, the productivities of biomass and biodiesel in the cells were increased by 58% (growth phase) and 51% (induction phase) compared to the cells in batch culture, respectively. The semi-continuous culture system using two stage photobioreactors is a very efficient strategy to further improve biodiesel production from microalgae under photoautotrophic conditions.

  2. The Sources of Efficiency of the Nigerian Banking Industry: A Two- Stage Approach

    Directory of Open Access Journals (Sweden)

    Frances Obafemi

    2013-11-01

    Full Text Available The paper employed a two-stage Data Envelopment Analysis (DEA) approach to examine the sources of technical efficiency in the Nigerian banking sub-sector. Using a cross section of commercial and merchant banks, the study showed that the Nigerian banking industry was not efficient in either the pre- or post-liberalization era. The study further revealed that market share was the strongest determinant of technical efficiency in the Nigerian banking industry. Thus, appropriate macroeconomic policy, institutional development and structural reforms must accompany financial liberalization to create the stable environment required for it to succeed. Hence, the present bank consolidation and reforms by the Central Bank of Nigeria, which started with Soludo and continued with Sanusi, are considered necessary, especially in the areas of e-banking and reorganizing the management of banks.
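
    As a rough illustration of what a two-stage DEA pipeline involves, the sketch below computes input-oriented CCR efficiency scores with one small linear program per bank and then regresses the scores on market share. The inputs, outputs, data, and the plain least-squares second stage are assumptions made for illustration; they are not the study's actual specification.

```python
# Two-stage DEA sketch: stage 1 computes input-oriented CCR efficiency scores
# with a small LP per bank; stage 2 regresses the scores on an environmental
# variable (here market share). Data and variable choices are illustrative.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """X: inputs (n_dmu, n_in), Y: outputs (n_dmu, n_out). Returns theta per DMU."""
    n = X.shape[0]
    scores = []
    for o in range(n):
        c = np.r_[1.0, np.zeros(n)]                      # minimize theta
        A_in = np.c_[-X[o][:, None], X.T]                # sum_j lam_j x_ij <= theta x_io
        A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]   # sum_j lam_j y_rj >= y_ro
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(1, 10, size=(12, 2))      # e.g. staff costs, fixed assets
    Y = rng.uniform(1, 10, size=(12, 1))      # e.g. loans and advances
    market_share = rng.uniform(0.01, 0.2, 12)
    theta = ccr_efficiency(X, Y)
    slope, intercept = np.polyfit(market_share, theta, 1)   # second-stage regression
    print("efficiency scores:", np.round(theta, 3))
    print("market-share effect (slope):", round(slope, 3))
```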

  3. Two-stage triolein breath test differentiates pancreatic insufficiency from other causes of malabsorption

    Energy Technology Data Exchange (ETDEWEB)

    Goff, J.S.

    1982-07-01

    In 24 patients with malabsorption, (¹⁴C)triolein breath tests were conducted before and together with the administration of pancreatic enzymes (Pancrease, Johnson and Johnson, Skillman, N.J.). Eleven patients with pancreatic insufficiency had a significant rise in peak percent dose per hour ¹⁴CO₂ excretion after Pancrease, whereas 13 patients with other causes of malabsorption had no increase in ¹⁴CO₂ excretion (2.61 +/- 0.96 vs. 0.15 +/- 0.45, p < 0.001). The two-stage (¹⁴C)triolein breath test appears to be an accurate and simple noninvasive test of fat malabsorption that differentiates steatorrhea secondary to pancreatic insufficiency from other causes of steatorrhea.

  4. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    Science.gov (United States)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach for lot streaming based on critical-machine consideration for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; the second-stage machine is considered critical for valid reasons, and such problems are known to be NP-hard. A mathematical model was developed for the selected problem, and simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining an optimal lot streaming schedule, and eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments in which all possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule consistently in all eleven cases. An identification procedure for selecting the best lot streaming strategy is also suggested.

  5. Product prioritization in a two-stage food production system with intermediate storage

    DEFF Research Database (Denmark)

    Akkerman, Renzo; van Donk, Dirk Pieter

    2007-01-01

    In the food-processing industry, usually a limited number of storage tanks for intermediate storage is available, which are used for different products. The market sometimes requires extremely short lead times for some products, leading to prioritization of these products, partly through the dedication of a storage tank. This type of situation has hardly been investigated, although planners struggle with it in practice. This paper aims at investigating the fundamental effect of prioritization and dedicated storage in a two-stage production system, for various product mixes. We show...

  6. Experimental and modeling study of a two-stage pilot scale high solid anaerobic digester system.

    Science.gov (United States)

    Yu, Liang; Zhao, Quanbao; Ma, Jingwei; Frear, Craig; Chen, Shulin

    2012-11-01

    This study established a comprehensive model to configure a new two-stage high solid anaerobic digester (HSAD) system designed for highly degradable organic fraction of municipal solid wastes (OFMSW). The HSAD reactor as the first stage was naturally separated into two zones due to biogas floatation and low specific gravity of solid waste. The solid waste was retained in the upper zone while only the liquid leachate resided in the lower zone of the HSAD reactor. Continuous stirred-tank reactor (CSTR) and advective-diffusive reactor (ADR) models were constructed in series to describe the whole system. Anaerobic digestion model No. 1 (ADM1) was used as reaction kinetics and incorporated into each reactor module. Compared with the experimental data, the simulation results indicated that the model was able to well predict the pH, volatile fatty acid (VFA) and biogas production.

  7. Study of a two-stage photobase generator for photolithography in microelectronics.

    Science.gov (United States)

    Turro, Nicholas J; Li, Yongjun; Jockusch, Steffen; Hagiwara, Yuji; Okazaki, Masahiro; Mesch, Ryan A; Schuster, David I; Willson, C Grant

    2013-03-01

    The investigation of the photochemistry of a two-stage photobase generator (PBG) is described. Absorption of a photon by a latent PBG (1) (first step) produces a PBG (2). Irradiation of 2 in the presence of water produces a base (second step). This two-photon sequence (1 + hν → 2 + hν → base) is an important component in the design of photoresists for pitch division technology, a method that doubles the resolution of projection photolithography for the production of microelectronic chips. In the present system, the excitation of 1 results in a Norrish type II intramolecular hydrogen abstraction to generate a 1,4-biradical that undergoes cleavage to form 2 and acetophenone (Φ ∼ 0.04). In the second step, excitation of 2 causes cleavage of the oxime ester (Φ = 0.56) followed by base generation after reaction with water.

  8. ADM1-based modeling of methane production from acidified sweet sorghum extractin a two stage process

    DEFF Research Database (Denmark)

    Antonopoulou, Georgia; Gavala, Hariklia N.; Skiadas, Ioannis

    2012-01-01

    The present study focused on the application of Anaerobic Digestion Model 1 (ADM1) to methane production from acidified sorghum extract generated from a hydrogen-producing bioreactor in a two-stage anaerobic process. The kinetic parameters for hydrogen and volatile fatty acid consumption were estimated through fitting of the model equations to data obtained from batch experiments. The simulation of the continuous reactor performance at all HRTs tested (20, 15 and 10 d) was very satisfactory. Specifically, the largest deviation of the theoretical predictions from the experimental data was 12%, for the methane production rate at the HRT of 20 d, while the deviation values for the 15 and 10 d HRTs were 1.9% and 1.1%, respectively. The model predictions regarding pH, methane percentage in the gas phase and COD removal were in very good agreement with the experimental data, with a deviation...

  9. A Two-Stage Diagnosis Framework for Wind Turbine Gearbox Condition Monitoring

    Directory of Open Access Journals (Sweden)

    Janet M. Twomey

    2013-01-01

    Full Text Available Advances in high-performance sensing technologies enable the development of wind turbine condition monitoring systems to diagnose and predict the system-wide effects of failure events. This paper presents a vibration-based two-stage fault detection framework for failure diagnosis of rotating components in wind turbines. The proposed framework integrates an analytical defect detection method and a graphical verification method to ensure diagnosis efficiency and accuracy. The efficacy of the proposed methodology is demonstrated in a case study using the gearbox condition monitoring Round Robin dataset provided by the National Renewable Energy Laboratory (NREL). The developed methodology successfully identified five of the seven faults, with accurate severity levels, without producing any false alarms in the blind analysis. The case study results indicate that the developed fault detection framework is effective for analyzing gear and bearing faults in wind turbine drive train systems based upon system vibration characteristics.

  10. Nitrification and microalgae cultivation for two-stage biological nutrient valorization from source separated urine.

    Science.gov (United States)

    Coppens, Joeri; Lindeboom, Ralph; Muys, Maarten; Coessens, Wout; Alloul, Abbas; Meerbergen, Ken; Lievens, Bart; Clauwaert, Peter; Boon, Nico; Vlaeminck, Siegfried E

    2016-07-01

    Urine contains the majority of nutrients in urban wastewaters and is an ideal nutrient recovery target. In this study, stabilization of real undiluted urine through nitrification and subsequent microalgae cultivation were explored as a strategy for biological nutrient recovery. A nitrifying inoculum screening revealed a commercial aquaculture inoculum to have the highest halotolerance. This inoculum was compared with municipal activated sludge for the start-up of two nitrification membrane bioreactors. Complete nitrification of undiluted urine was achieved in both systems at a conductivity of 75 mS cm(-1) and a loading rate above 450 mg N L(-1) d(-1). The halotolerant inoculum shortened the start-up time by 54%. Nitrite oxidizers showed faster salt adaptation and Nitrobacter spp. became the dominant nitrite oxidizers. Nitrified urine as growth medium for Arthrospira platensis demonstrated superior growth compared to untreated urine and resulted in a high protein content of 62%. This two-stage strategy is therefore a promising approach for biological nutrient recovery.

  11. Two-stage high frequency pulse tube cooler for refrigeration at 25 K

    CERN Document Server

    Dietrich, M

    2009-01-01

    A two-stage Stirling-type U-shape pulse tube cryocooler driven by a 10 kW-class linear compressor was designed, built and tested. A special feature of the cold head is the absence of a heat exchanger at the cold end of the first stage, since the intended application requires no cooling power at an intermediate temperature. Simulations were done using Sage software to find optimum operating conditions and cold head geometry. Flow-impedance matching was required to connect the compressor, designed for 60 Hz operation, to the 40 Hz cold head. A cooling power of 12.9 W at 25 K with an electrical input power of 4.6 kW has been achieved up to now. The lowest temperature reached is 13.7 K.

  12. Two-stage reflective optical system for achromatic 10 nm x-ray focusing

    Science.gov (United States)

    Motoyama, Hiroto; Mimura, Hidekazu

    2015-12-01

    Recently, coherent x-ray sources have promoted the development of optical systems for focusing, imaging, and interferometry. In this paper, we propose a two-stage focusing optical system with the goal of achromatically focusing pulses from an x-ray free-electron laser (XFEL) to a focal width of 10 nm. In this optical system, the x-ray beam is expanded by a grazing-incidence aspheric mirror and focused by a mirror shaped as a solid of revolution. We describe the design procedure and discuss the theoretical focusing performance. In theory, soft-XFEL light can be focused to a 10 nm area without chromatic aberration and with high reflectivity; this creates an unprecedented power density of 10²⁰ W cm⁻² in the soft-x-ray range.

  13. A Sensorless Power Reserve Control Strategy for Two-Stage Grid-Connected PV Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2017-01-01

    Due to the still increasing penetration of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A power reserve control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the power reserve for two-stage grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the solar irradiance and temperature measurements that have been used in conventional power reserve control schemes to estimate the available PV power are not required, thereby being a sensorless approach with reduced cost. Experimental tests have been...
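
    The control idea reduces to keeping the output a fixed margin below the estimated available power. The toy sketch below illustrates that logic only; the way the available power is obtained (here assumed given directly) and all numbers are illustrative assumptions, whereas the paper derives the estimate from routine MPPT operation without irradiance or temperature sensors.

```python
# Conceptual sketch of a power-reserve scheme that keeps a fixed headroom
# below the estimated available PV power and hands the limit to a constant
# power generation (CPG) stage. Numbers and the toy "available power" input
# are illustrative, not the paper's control implementation.
def power_reserve_setpoint(p_available_est, p_reserve):
    """Limit output so that p_reserve (W) is kept as headroom."""
    return max(p_available_est - p_reserve, 0.0)

def run_controller(available_power_profile, p_reserve=300.0):
    setpoints = []
    for p_avail in available_power_profile:
        # In the real scheme the available power is estimated by routinely
        # operating in MPPT; here it is assumed to be given directly.
        p_limit = power_reserve_setpoint(p_avail, p_reserve)
        setpoints.append(p_limit)   # the CPG stage regulates output to p_limit
    return setpoints

if __name__ == "__main__":
    profile = [2500.0, 2700.0, 1800.0, 900.0, 2200.0]  # W, made-up irradiance swing
    print(run_controller(profile))
```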

  14. Sensorless Reserved Power Control Strategy for Two-Stage Grid-Connected Photovoltaic Systems

    DEFF Research Database (Denmark)

    Sangwongwanich, Ariya; Yang, Yongheng; Blaabjerg, Frede

    2016-01-01

    Due to the still increasing penetration level of grid-connected Photovoltaic (PV) systems, advanced active power control functionalities have been introduced in grid regulations. A reserved power control, where the active power from the PV panels is reserved during operation, is required for grid support. In this paper, a cost-effective solution to realize the reserved power control for grid-connected PV systems is proposed. The proposed solution routinely employs a Maximum Power Point Tracking (MPPT) control to estimate the available PV power and a Constant Power Generation (CPG) control to achieve the power reserve. In this method, the irradiance measurements that have been used in conventional control schemes to estimate the available PV power are not required, thereby being a sensorless solution. Simulations and experimental tests have been performed on a 3-kW two-stage single...

  15. Prey-Predator Model with Two-Stage Infection in Prey: Concerning Pest Control

    Directory of Open Access Journals (Sweden)

    Swapan Kumar Nandi

    2015-01-01

    Full Text Available A prey-predator model system is developed in which disease is introduced into the prey population. Here the prey population is taken as the pest and the predators consume the selected pest. Moreover, we assume that the prey species is infected with a viral disease, splitting it into a susceptible class and two stages of infected classes, and that the early-stage infected prey is more vulnerable to predation; the later-stage infected pests are assumed not to be eaten by the predator. Different equilibria of the system are investigated, and their stability analysis and the Hopf bifurcation of the system around the interior equilibria are discussed. A modified model is also constructed by considering an alternative food source for the predator population, and the dynamical behavior of this modified model is investigated. The analytical results are demonstrated by numerical analysis using a simulated set of parameter values.
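
    A generic version of such a model can be written as four coupled ODEs for susceptible prey, early- and late-stage infected prey, and the predator. The sketch below is one illustrative formulation with made-up functional forms and parameter values, not the equations analyzed in the paper.

```python
# A generic sketch of a prey-predator system with a two-stage infection in the
# prey (susceptible S, early-infected I1, late-infected I2, predator P).
# Functional forms and parameters are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, r=1.0, K=100.0, lam=0.02, sigma=0.3,
        a=0.01, b=0.03, e=0.4, d1=0.1, d2=0.2, m=0.3):
    S, I1, I2, P = y
    infection = lam * S * (I1 + I2)
    dS = r * S * (1 - (S + I1 + I2) / K) - infection - a * S * P
    dI1 = infection - sigma * I1 - b * I1 * P - d1 * I1   # early stage: predated (b > a)
    dI2 = sigma * I1 - d2 * I2                            # late stage: not predated
    dP = e * (a * S * P + b * I1 * P) - m * P
    return [dS, dI1, dI2, dP]

if __name__ == "__main__":
    sol = solve_ivp(rhs, (0, 200), [50.0, 5.0, 0.0, 2.0])
    print("final state:", np.round(sol.y[:, -1], 3))
```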

  16. Lossless and near-lossless digital angiography coding using a two-stage motion compensation approach.

    Science.gov (United States)

    dos Santos, Rafael A P; Scharcanski, Jacob

    2008-07-01

    This paper presents a two-stage motion compensation coding scheme for image sequences in hemodynamics. The first stage of the proposed method implements motion compensation, and the second stage corrects local pixel intensity distortions with a context adaptive linear predictor. The proposed method is robust to the local intensity distortions and the noise that often degrades these image sequences, providing lossless and near-lossless quality. Our experiments with lossless compression of 12 bits/pixel studies indicate that, potentially, our approach can perform 3.8%, 2% and 1.6% better than JPEG-2000, JPEG-LS and the method proposed by Scharcanski [1], respectively. The performance tends to improve for near-lossless compression. Therefore, this work presents experimental evidence that for coding image sequences in hemodynamics, an adequate motion compensation scheme can be more efficient than the still-image coding methods often used nowadays.

  17. Quasi-estimation as a Basis for Two-stage Solving of Regression Problem

    CERN Document Server

    Gordinsky, Anatoly

    2010-01-01

    An effective two-stage method for estimating the parameters of a linear regression is considered. For this purpose we introduce a quasi-estimator that, in contrast to the usual estimator, produces two alternative estimates. It is proved that, in comparison to the least squares estimate, one alternative has a significantly smaller quadratic risk while retaining unbiasedness and consistency. These properties hold true for one-dimensional, multi-dimensional, orthogonal and non-orthogonal problems. Moreover, a Monte-Carlo simulation confirms the high robustness of the quasi-estimator to violations of the initial assumptions. Therefore, at the first stage of the estimation we calculate the two alternative estimates mentioned above; at the second stage we choose the better estimate of the two. In order to do so we use additional information, including but not limited to information of an a priori nature. In the case of two alternatives the volume of such information should be minimal. Furthermore, the additional ...

  18. A characteristics study on the performance of a two-stage light gas gun

    Institute of Scientific and Technical Information of China (English)

    吴应湘; 郑之初; P.Kupschus

    1995-01-01

    In order to obtain an overall and systematic understanding of the performance of a two-stage light gas gun (TLGG), a numerical code to simulate the processes occurring in a gun shot is developed, based on the quasi-one-dimensional unsteady equations of motion with real-gas effects, friction and heat transfer taken into account in a characteristic formulation for both the driver and propellant gases. Comparisons of projectile velocities and pressures along the barrel with experimental results from JET (Joint European Torus) and with computational data obtained by the Lagrangian method indicate that this code can provide results with good accuracy over a wide range of gun geometries and loading conditions.

  19. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2014-10-01

    Full Text Available We present a two-stage approach for the “helicopters and vehicles” intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and yielded several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not necessarily true that vehicles should load more and more medical supplies in emergency responses; (iii) the more contrasting the traveling speeds of helicopters and vehicles are, the more advantageous the intermodal transportation is.
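
    To make the two-stage structure concrete, the toy sketch below first picks TDC sites and assigns MAPs to them, then builds a simple delivery route per TDC. A greedy heuristic and nearest-neighbour routing stand in for the paper's fuzzy-based selection method and integer-programming routing model; all data are random placeholders.

```python
# Toy two-stage sketch: stage 1 picks temporary distribution centers (TDCs)
# and assigns medical aid points (MAPs) to them; stage 2 builds a delivery
# route per TDC. Greedy selection and nearest-neighbour routing are stand-ins.
import numpy as np

def assignment_cost(candidates, maps, chosen):
    d = np.linalg.norm(maps[:, None, :] - candidates[None, chosen, :], axis=2)
    return d.min(axis=1).sum()

def select_tdcs(candidates, maps, k):
    """Greedily pick k TDC sites minimizing total distance to assigned MAPs."""
    chosen = []
    for _ in range(k):
        best = min((c for c in range(len(candidates)) if c not in chosen),
                   key=lambda c: assignment_cost(candidates, maps, chosen + [c]))
        chosen.append(best)
    return chosen

def route_from(start, assigned_maps):
    """Stage 2 stand-in: nearest-neighbour ordering of a TDC's MAPs."""
    order, remaining, current = [], list(range(len(assigned_maps))), start
    while remaining:
        nxt = min(remaining, key=lambda i: np.linalg.norm(assigned_maps[i] - current))
        order.append(nxt)
        current = assigned_maps[nxt]
        remaining.remove(nxt)
    return order

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    candidates, maps = rng.random((6, 2)), rng.random((15, 2))
    tdcs = select_tdcs(candidates, maps, k=2)
    d = np.linalg.norm(maps[:, None, :] - candidates[None, tdcs, :], axis=2)
    nearest = d.argmin(axis=1)
    for j, t in enumerate(tdcs):
        assigned = maps[nearest == j]
        print("TDC", t, "visit order:", route_from(candidates[t], assigned))
```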

  20. A Two-Stage LGSM for Three-Point BVPs of Second-Order ODEs

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2008-08-01

    Full Text Available The study in this paper is a numerical integration of second-order three-point boundary value problems under two imposed nonlocal boundary conditions at t = t0, t = ξ, and t = t1 in a general setting, where t0 < ξ < t1. We construct a two-stage Lie-group shooting method for finding unknown initial conditions, which are obtained through an iterative solution of derived algebraic equations in terms of a weighting factor r ∈ (0,1). The best r is selected by matching the target with a minimal discrepancy. Numerical examples are examined to confirm that the new approach has high efficiency and accuracy with a fast speed of convergence. Even for multiple solutions, the present method is also effective to find them.

  1. A Two-Stage LGSM for Three-Point BVPs of Second-Order ODEs

    Directory of Open Access Journals (Sweden)

    Liu Chein-Shan

    2008-01-01

    Full Text Available The study in this paper is a numerical integration of second-order three-point boundary value problems under two imposed nonlocal boundary conditions at t = t0, t = ξ, and t = t1 in a general setting, where t0 < ξ < t1. We construct a two-stage Lie-group shooting method for finding unknown initial conditions, which are obtained through an iterative solution of derived algebraic equations in terms of a weighting factor r ∈ (0,1). The best r is selected by matching the target with a minimal discrepancy. Numerical examples are examined to confirm that the new approach has high efficiency and accuracy with a fast speed of convergence. Even for multiple solutions, the present method is also effective to find them.

  2. Shaft Position Influence on Technical Characteristics of Universal Two-Stages Helical Speed Reducers

    Directory of Open Access Journals (Sweden)

    Мilan Rackov

    2005-10-01

    Full Text Available Purchasers of speed reducers tend to buy those reducers that most closely satisfy their demands at much lower cost. The amount of material used, i.e. the mass and dimensions of the gear unit, influences its price. Mass and dimensions, besides output torque, gear unit ratio and efficiency, are the most important parameters of the technical characteristics of gear units and their quality. Centre distance and the position of the shafts have a significant influence on output torque, gear unit ratio and gear unit mass through the overall dimensions of the gear unit housing; these characteristics are therefore dependent on each other. This paper analyzes the influence of centre distance and shaft position on the output torque and ratio of universal two-stage gear units.

  3. A Two-stage Tuning Method of Servo Parameters for Feed Drives in Machine Tools

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the evaluation of dynamic performance for feed drives in machine tools, this paper presents a two-stage tuning method of servo parameters. In the first stage, the evaluation of dynamic performance, parameter tuning and optimization on a mechatronic integrated system simulation platform of feed drives are performed. As a result, a servo parameter combination is acquired. In the second stage, the servo parameter combination from the first stage is set and tuned further in a real machine tool whose dynamic performance is measured and evaluated using the cross grid encoder developed by Heidenhain GmbH. A case study shows that this method simplifies the test process effectively and results in a good dynamic performance in a real machine tool.

  4. Treatment of Domestic Sewage by Two-Stage-Bio-Contact Oxidation Process

    Institute of Scientific and Technical Information of China (English)

    LI Xiang-dong; FENG Qi-yan; LIU Zhong-wei; XIAO Xin; LIN Guo-hua

    2005-01-01

    Effects of hydraulic retention time (HRT) and gas volume on the efficiency of wastewater treatment are discussed based on a simulation experiment in which domestic sewage was treated by the two-stage bio-contact oxidation process. The results show that the average CODcr, BOD5, suspended solids (SS) and ammonia-nitrogen removal rates are 94.5%, 93.2%, 91.7% and 46.9%, respectively, under a total air/water ratio of 5:1, an air/water ratio of 3:1 for oxidation tank 1 and 2:1 for oxidation tank 2, and a hydraulic retention time of 1 h for each stage. This method is suitable for domestic sewage treatment in residential communities and small towns.

  5. Alignment and characterization of the two-stage time delay compensating XUV monochromator

    CERN Document Server

    Eckstein, Martin; Kubin, Markus; Yang, Chung-Hsin; Frassetto, Fabio; Poletto, Luca; Vrakking, Marc J J; Kornilov, Oleg

    2016-01-01

    We present the design, implementation and alignment procedure for a two-stage time-delay-compensating monochromator. The setup spectrally filters the radiation of a high-order harmonic generation source, providing wavelength-selected XUV pulses with a bandwidth of 300 to 600 meV in the photon energy range of 3 to 50 eV. XUV pulses as short as 12 ± 3 fs are demonstrated. Transmission of the 400 nm (3.1 eV) light facilitates precise alignment of the monochromator. This alignment strategy, together with the stable mechanical design of the motorized beamline components, enables us to automatically scan the XUV photon energy in pump-probe experiments that require XUV beam pointing stability. The performance of the beamline is demonstrated by the generation of IR-assisted sidebands in XUV photoionization of argon atoms.

  6. Final two-stage MOAO on-sky demonstration with CANARY

    Science.gov (United States)

    Gendron, E.; Morris, T.; Basden, A.; Vidal, F.; Atkinson, D.; Bitenc, U.; Buey, T.; Chemla, F.; Cohen, M.; Dickson, C.; Dipper, N.; Feautrier, P.; Gach, J.-L.; Gratadour, D.; Henry, D.; Huet, J.-M.; Morel, C.; Morris, S.; Myers, R.; Osborn, J.; Perret, D.; Reeves, A.; Rousset, G.; Sevin, A.; Stadler, E.; Talbot, G.; Todd, S.; Younger, E.

    2016-07-01

    CANARY is an on-sky Laser Guide Star (LGS) tomographic AO demonstrator in operation at the 4.2m William Herschel Telescope (WHT) in La Palma. From the early demonstration of open-loop tomography on a single deformable mirror using natural guide stars in 2010, CANARY has been progressively upgraded each year to reach its final goal in July 2015. It is now a two-stage system that mimics the future E-ELT: a GLAO-driven woofer based on 4 laser guide stars delivers a ground-layer compensated field to a figure sensor locked tweeter DM, that achieves the final on-axis tomographic compensation. We present the overall system, the control strategy and an overview of its on-sky performance.

  7. Performance of a highly loaded two stage axial-flow fan

    Science.gov (United States)

    Ruggeri, R. S.; Benser, W. A.

    1974-01-01

    A two-stage axial-flow fan with a tip speed of 1450 ft/sec (442 m/sec) and an overall pressure ratio of 2.8 was designed, built, and tested. At design speed and pressure ratio, the measured flow matched the design value of 184.2 lbm/sec (83.55kg/sec). The adiabatic efficiency at the design operating point was 85.7 percent. The stall margin at design speed was 10 percent. A first-bending-mode flutter of the second-stage rotor blades was encountered near stall at speeds between 77 and 93 percent of design, and also at high pressure ratios at speeds above 105 percent of design. A 5 deg closed reset of the first-stage stator eliminated second-stage flutter for all but a narrow speed range near 90 percent of design.

  8. A Two-stage Kalman Filter for Sensorless Direct Torque Controlled PM Synchronous Motor Drive

    Directory of Open Access Journals (Sweden)

    Boyu Yi

    2013-01-01

    Full Text Available This paper presents an optimal two-stage extended Kalman filter (OTSEKF) for closed-loop flux, torque, and speed estimation of a permanent magnet synchronous motor (PMSM) to achieve sensorless DTC-SVPWM operation of the drive system. The novel observer is obtained by using the same transformation as in a linear Kalman observer, which was proposed by C.-S. Hsieh and F.-C. Chen in 1999. The OTSEKF is an effective implementation of the extended Kalman filter (EKF) and provides recursive optimum state estimation for PMSMs using terminal signals that may be polluted by noise. Compared to a conventional EKF, the OTSEKF reduces the number of arithmetic operations. Simulation and experimental results verify the effectiveness of the proposed OTSEKF observer for DTC of PMSMs.
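
    As background, the sketch below shows a plain discrete-time Kalman predict/update loop, which is the recursive backbone that extended and two-stage variants such as the OTSEKF build on. The toy two-state model is not a PMSM model, and the paper's transformation and two-stage structure are not reproduced here.

```python
# A plain discrete-time Kalman filter loop, shown only as the recursive
# predict/update backbone underlying extended and two-stage variants.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    # Toy 2-state constant-velocity system, not a motor model.
    F = np.array([[1.0, 0.1], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
    x, P = np.zeros(2), np.eye(2)
    rng = np.random.default_rng(0)
    truth = np.array([0.0, 1.0])
    for _ in range(50):
        truth = F @ truth
        z = H @ truth + rng.normal(0, 0.1, size=1)   # noisy measurement
        x, P = kalman_step(x, P, z, F, H, Q, R)
    print("estimated state:", np.round(x, 2), "true state:", np.round(truth, 2))
```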

  9. Synchronous rapid start-up of the methanation and anammox processes in two-stage ASBRs

    Science.gov (United States)

    Duan, Y.; Li, W. R.; Zhao, Y.

    2017-01-01

    The “methanation + anaerobic ammonia oxidation autotrophic denitrification” method was adopted using anaerobic sequencing batch reactors (ASBRs) and realized a satisfactory synchronous removal of chemical oxygen demand (COD) and ammonia-nitrogen (NH4+-N) in wastewater after 75 days of operation. 90% of COD was removed at a COD load of 1.2 kg/(m3•d) and 90% of TN was removed at a TN load of 0.14 kg/(m3•d). The anammox reaction ratio was estimated to be 1:1.32:0.26. The results showed that synchronous rapid start-up of the methanation and anaerobic ammonia oxidation processes in two-stage ASBRs was feasible.

  10. a Remote Liquid Target Loading System for a Two-Stage Gas Gun

    Science.gov (United States)

    Gibson, L. L.; Bartram, B.; Dattelbaum, D. M.; Sheffield, S. A.; Stahl, D. B.

    2009-12-01

    A Remote Liquid Loading System (RLLS) was designed and tested for the application of loading high-hazard liquid materials into instrumented target cells for gas gun-driven plate impact experiments. These high-hazard liquids tend to react with confining materials in a short period of time, degrading target assemblies and potentially building up pressure through the evolution of gas in the reactions. Therefore, the ability to load a gas gun target immediately prior to gun firing provides the most stable and reliable target fielding approach. We present the design and evaluation of an RLLS built for the LANL two-stage gas gun. The system has been used successfully to interrogate the shock initiation behavior of ~98 wt% hydrogen peroxide (H2O2) solutions, using embedded electromagnetic gauges for measurement of shock wave profiles in situ.

  11. Two-Stage Surgery for a Large Cervical Dumbbell Tumour in Neurofibromatosis 1: A Case Report

    Directory of Open Access Journals (Sweden)

    Mohd Ariff S

    2011-11-01

    Full Text Available Spinal neurofibromas occur sporadically and typically in association with neurofibromatosis 1. Patients afflicted with neurofibromatosis 1 usually present with involvement of several nerve roots. This report describes the case of a 14-year-old child with a large intraspinal but extradural dumbbell neurofibroma with paraspinal extension in the cervical region, extending from the C2 to C4 vertebrae. The lesions were readily detected by MR imaging and were successfully resected in a two-stage surgery, with an interval of one month between the first and second operations. We provide a brief review of the literature regarding various surgical approaches, emphasising the utility of anterior and posterior approaches.

  12. Effect of a two-stage nursing assessment and intervention - a randomized intervention study

    DEFF Research Database (Denmark)

    Rosted, Elizabeth Emilie; Poulsen, Ingrid; Hendriksen, Carsten

    Background: Geriatric patients recently discharged from hospital are at risk of unplanned readmissions and admission to nursing home. When discharged directly from the Emergency Department (ED) the risk increases, as time pressure often requires focus on the presenting problem, although 80% of geriatric patients have complex and often unresolved caring needs. The objective was to examine the effect of a two-stage nursing assessment and intervention to address the patients' uncompensated problems, given just after discharge from the ED and one and six months after. Method: We conducted a prospective ... with referral to the geriatric outpatient clinic, community health centre, primary physician or arrangements with next-of-kin. Findings: Primary endpoints will be presented as unplanned readmission to the ED, admission to nursing home, and death. Secondary endpoints will be presented as physical function and depressive symptoms ...

  13. Colorimetric characterization of liquid crystal display using an improved two-stage model

    Institute of Scientific and Technical Information of China (English)

    Yong Wang; Haisong Xu

    2006-01-01

    An improved two-stage model of colorimetric characterization for liquid crystal displays (LCD) was proposed. The model includes an S-shaped nonlinear function with four coefficients for each channel to fit the tone reproduction curve (TRC), and a linear transfer matrix with black-level correction. For comparison with the simple model (SM), gain-offset-gain (GOG), S-curve and three one-dimensional look-up tables (3-1D LUTs) models, an identical LCD was characterized and the color differences were calculated and summarized using the set of 7 × 7 × 7 digital-to-analog converter (DAC) triplets as test data. The experimental results showed that the proposed model outperformed the GOG and SM models, and performed close to the S-curve model and the 3-1D LUTs method.
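
    The two-stage structure of such a characterization model can be illustrated as a per-channel nonlinearity followed by a linear matrix and black-level offset, as in the sketch below. The curve shape, matrix entries and black level are made-up placeholders, not coefficients fitted to the LCD in the study.

```python
# Sketch of a two-stage display characterization: a per-channel nonlinearity
# (a simple four-coefficient curve standing in for the S-shaped TRC) followed
# by a 3x3 linear transfer matrix with black-level correction.
import numpy as np

def s_curve(d, a=1.0, b=2.2, c=8.0, d0=0.5):
    """Map a normalized DAC value d in [0, 1] to normalized channel luminance."""
    return a * d**b / (d**c + d0) * (1 + d0)   # illustrative 4-coefficient form

M = np.array([[0.41, 0.36, 0.18],   # placeholder RGB-to-XYZ primaries matrix
              [0.21, 0.72, 0.07],
              [0.02, 0.12, 0.95]])
XYZ_BLACK = np.array([0.02, 0.02, 0.03])   # placeholder measured black level

def dac_to_xyz(rgb_dac):
    lin = np.array([s_curve(v / 255.0) for v in rgb_dac])   # stage 1: TRC
    return M @ lin + XYZ_BLACK                              # stage 2: matrix + black

if __name__ == "__main__":
    print(np.round(dac_to_xyz([128, 200, 64]), 4))
```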

  14. Fast Image Segmentation Based on a Two-Stage Geometrical Active Contour

    Institute of Scientific and Technical Information of China (English)

    肖昌炎; 张素; 陈亚珠

    2005-01-01

    A fast two-stage geometric active contour algorithm for image segmentation is developed. First, the Eikonal equation problem is quickly solved using an improved fast sweeping method, and a criterion of local minimum of area gradient (LMAG) is presented to extract the optimal arrival time. Then, the final time function is passed as an initial state to an area and length minimizing flow model, which adjusts the interface more accurately and prevents it from leaking. For objects with complete and salient edges, using the first stage alone is enough to obtain an ideal result, with a time complexity of O(M), where M is the number of points in each coordinate direction. Both stages are needed for convoluted shapes, but the computation cost can still be drastically reduced. The efficiency of the algorithm is verified in segmentation experiments on real images with different features.

  15. Parametric theoretical study of a two-stage solar organic Rankine cycle for RO desalination

    Energy Technology Data Exchange (ETDEWEB)

    Kosmadakis, G.; Manolakos, D.; Papadakis, G. [Department of Natural Resources and Agricultural Engineering, Agricultural University of Athens, 75 Iera Odos Street, 11855 Athens (Greece)

    2010-05-15

    The present work concerns the parametric study of an autonomous, two-stage solar organic Rankine cycle for RO desalination. The main goal of the simulation is to estimate the efficiency and to calculate the annual mechanical energy available for desalination in the considered cases, in order to evaluate the influence of various parameters on the performance of the system. The parametric study concerns the variation of different parameters without actually changing the baseline case. The effects of the collectors' slope and of the total number of evacuated tube collectors used have been extensively examined. The total cost is also taken into consideration and is calculated for the different cases examined, along with the specific fresh water cost (EUR/m³). (author)

  16. Removal of trichloroethylene (TCE) contaminated soil using a two-stage anaerobic-aerobic composting technique.

    Science.gov (United States)

    Ponza, Supat; Parkpian, Preeda; Polprasert, Chongrak; Shrestha, Rajendra P; Jugsujinda, Aroon

    2010-01-01

    The effect of organic carbon addition on remediation of trichloroethylene (TCE) contaminated clay soil was investigated using a two-stage anaerobic-aerobic composting system. The TCE removal rate and the processes involved were determined. Uncontaminated clay soil was treated with composting materials (dried cow manure, rice husk and cane molasses) to represent carbon-based treatments (5%, 10% and 20% OC). All treatments were spiked with TCE at 1,000 mg TCE/kg DW and incubated under anaerobic, mesophilic conditions (35 °C) for 8 weeks, followed by continuous aerobic conditions for another 6 weeks. TCE dissipation, its metabolites and biogas composition were measured throughout the experimental period. Results show that TCE degradation depended upon the amount of organic carbon (OC) contained within the composting treatments/matrices. The highest TCE removal percentage (97%) and rate (75.06 µmol/kg DW/day) were obtained from the treatment with 10% OC composting matrices, compared with 87% and 27.75 µmol/kg DW/day for 20% OC, and 83% and 38.08 µmol/kg DW/day for the soil control treatment. TCE removal followed first-order reaction kinetics. The highest degradation rate constant (k1 = 0.035 day⁻¹) was also obtained from the 10% OC treatment, followed by 20% OC (k1 = 0.026 day⁻¹) and the 5% OC or soil control treatment (k1 = 0.023 day⁻¹). The corresponding half-lives were 20, 27 and 30 days, respectively. The overall results suggest that the sequential two-stage anaerobic-aerobic composting technique has potential for remediation of TCE in heavy-textured soil, provided that an easily biodegradable source of organic carbon is present.
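
    The reported half-lives are consistent with the quoted first-order rate constants via the standard first-order relation (the relation itself is textbook material; only the rate constants come from the record above):

```latex
% First-order decay: C(t) = C_0 e^{-k_1 t}, so the half-life is t_{1/2} = \ln 2 / k_1.
\[
  t_{1/2} = \frac{\ln 2}{k_1}
  \quad\Rightarrow\quad
  \frac{0.693}{0.035\ \mathrm{day}^{-1}} \approx 20\ \mathrm{days},
  \qquad
  \frac{0.693}{0.026\ \mathrm{day}^{-1}} \approx 27\ \mathrm{days},
  \qquad
  \frac{0.693}{0.023\ \mathrm{day}^{-1}} \approx 30\ \mathrm{days}.
\]
```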

  17. Two-Stage Surgical Treatment for Non-Union of a Shortened Osteoporotic Femur

    Directory of Open Access Journals (Sweden)

    Galal Zaki Said

    2013-01-01

    Full Text Available Introduction: We report a case of non-union with severe shortening of the femur following diaphysectomy for chronic osteomyelitis. Case Presentation: A boy aged 16 years presented with a dangling and excessively short left lower limb. He was using an elbow crutch in his right hand to help him walk. He had a history of diaphysectomy for chronic osteomyelitis at the age of 9. Examination revealed a freely mobile non-union of the left femur. The femur was the seat of an 18 cm shortening and a 4 cm defect at the non-union site; the knee joint was ankylosed in extension. The tibia and fibula were 10 cm short. Considering the extensive shortening in the femur and tibia in addition to osteoporosis, he was treated in two stages. In Stage I, the femoral non-union was treated by open reduction, internal fixation and iliac bone grafting. The patient was then allowed to walk with full weight bearing in an extension brace for 7 months. In Stage II, equalization of the leg length discrepancy (LLD) was achieved by simultaneous distraction of the femur and tibia by unilateral frames. At the 6 month follow-up, he was fully weight bearing without any walking aid, with a heel lift to compensate for the 1.5 cm shortening. Three years later he reported that he was satisfied with the result of treatment and was leading a normal life as a university student. Conclusions: Two-stage treatment succeeded in restoring about 20 cm of the femoral shortening in a severely osteoporotic bone. It also succeeded in reducing the treatment time in the external fixator.

  18. Design and Characterization of two stage High-Speed CMOS Operational Amplifier

    Directory of Open Access Journals (Sweden)

    Rahul Chaudhari

    2014-03-01

    Full Text Available The method described in this paper is to design a two-stage CMOS operational amplifier and analyze the effect of various aspect ratios on the characteristics of this op-amp, which operates from a 1.8 V power supply in tsmc 0.18 μm CMOS technology. Trade-off curves are computed between characteristics such as gain, phase margin, GBW, ICMRR, CMRR and slew rate. The op-amp is designed to exhibit a unity-gain frequency of 14 MHz and shows a gain of 59.98 dB with a 61.235° phase margin. The design has been carried out in Mentor Graphics tools, and simulation results are verified using Model Sim Eldo and Design Architect IC. The task of CMOS operational amplifier (op-amp) design optimization is investigated in this work; the paper focuses on the optimization of various aspect ratios and the resulting parameter values. When this task is analyzed as a search problem, it can be translated into a multi-objective optimization application in which various op-amp specifications have to be taken into account, i.e., gain, GBW (gain-bandwidth product), phase margin and others. The results are compared with respect to the standard characteristics of the op-amp with the help of graphs and tables, and simulation results agree with theoretical predictions. Simulations confirm that the settling time can be further improved by increasing the GBW; a settling time of 19 ns is achieved. It has been demonstrated that as W/L increases, GBW increases and the settling time reduces.

  19. Anti-kindling Induced by Two-Stage Coordinated Reset Stimulation with Weak Onset Intensity

    Science.gov (United States)

    Zeitler, Magteld; Tass, Peter A.

    2016-01-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing–dependent plasticity CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof of concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, where two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first in man and proof of concept studies. PMID:27242500

  20. Mineral chemistry of the Tissint meteorite: Indications of two-stage crystallization in a closed system

    Science.gov (United States)

    Liu, Yang; Baziotis, Ioannis P.; Asimow, Paul D.; Bodnar, Robert J.; Taylor, Lawrence A.

    2016-12-01

    The Tissint meteorite is a geochemically depleted, olivine-phyric shergottite. Olivine megacrysts contain 300-600 μm cores with uniform Mg# ( 80 ± 1) followed by concentric zones of Fe-enrichment toward the rims. We applied a number of tests to distinguish the relationship of these megacrysts to the host rock. Major and trace element compositions of the Mg-rich core in olivine are in equilibrium with the bulk rock, within uncertainty, and rare earth element abundances of melt inclusions in Mg-rich olivines reported in the literature are similar to those of the bulk rock. Moreover, the P Kα intensity maps of two large olivine grains show no resorption between the uniform core and the rim. Taken together, these lines of evidence suggest the olivine megacrysts are phenocrysts. Among depleted olivine-phyric shergottites, Tissint is the first one that acts mostly as a closed system with olivine megacrysts being the phenocrysts. The texture and mineral chemistry of Tissint indicate a crystallization sequence of: olivine (Mg# 80 ± 1) → olivine (Mg# 76) + chromite → olivine (Mg# 74) + Ti-chromite → olivine (Mg# 74-63) + pyroxene (Mg# 76-65) + Cr-ulvöspinel → olivine (Mg# 63-35) + pyroxene (Mg# 65-60) + plagioclase, followed by late-stage ilmenite and phosphate. The crystallization of the Tissint meteorite likely occurred in two stages: uniform olivine cores likely crystallized under equilibrium conditions; and a fractional crystallization sequence that formed the rest of the rock. The two-stage crystallization without crystal settling is simulated using MELTS and the Tissint bulk composition, and can broadly reproduce the crystallization sequence and mineral chemistry measured in the Tissint samples. The transition between equilibrium and fractional crystallization is associated with a dramatic increase in cooling rate and might have been driven by an acceleration in the ascent rate or by encounter with a steep thermal gradient in the Martian crust.

  1. Focused ultrasound simultaneous irradiation/MRI imaging, and two-stage general kinetic model.

    Directory of Open Access Journals (Sweden)

    Sheng-Yao Huang

    Full Text Available Many studies have investigated how to use focused ultrasound (FUS) to temporarily disrupt the blood-brain barrier (BBB) in order to facilitate the delivery of medication into lesion sites in the brain. In this study, through the setup of a real-time system, FUS irradiation and injections of ultrasound contrast agent (UCA) and Gadodiamide (Gd, an MRI contrast agent) can be conducted simultaneously during MRI scanning. By using this real-time system, we were able to investigate in detail how the general kinetic model (GKM) is used to estimate Gd penetration in the FUS-irradiated area of a rat's brain resulting from UCA concentration changes after a single FUS irradiation. A two-stage GKM was proposed to estimate the Gd penetration in the FUS-irradiated area under experimental conditions with repeated FUS irradiation combined with different UCA concentrations. The results showed that the focal increase in the transfer rate constant Ktrans caused by BBB disruption was dependent on the dose of UCA. Moreover, the amount of in vivo penetration of Evans blue in the FUS-irradiated area under various FUS irradiation conditions was assessed and showed a positive correlation with the transfer rate constants. Compared to the GKM method, the two-stage GKM is more suitable for estimating the transfer rate constants of brain tissue treated with repeated FUS irradiations. This study demonstrated that the entire process of BBB disruption by FUS can be quantitatively monitored by real-time dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI).
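
    For reference, the general kinetic model mentioned here is commonly written in the standard Tofts form shown below, where Ktrans is the transfer constant, kep the efflux rate constant, Cp the plasma (arterial input) concentration and Ct the tissue concentration; this is a textbook formulation and not necessarily the exact variant or notation used by the authors.

```latex
% Standard (Tofts) general kinetic model for the tissue tracer concentration.
\[
  C_t(t) \;=\; K^{\mathrm{trans}} \int_0^{t} C_p(\tau)\,
               e^{-k_{ep}\,(t-\tau)}\, d\tau .
\]
```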

  2. Condition monitoring of distributed systems using two-stage Bayesian inference data fusion

    Science.gov (United States)

    Jaramillo, Víctor H.; Ottewill, James R.; Dudek, Rafał; Lepiarczyk, Dariusz; Pawlik, Paweł

    2017-03-01

    In industrial practice, condition monitoring is typically applied to critical machinery. A particular piece of machinery may have its own condition monitoring system that allows the health condition of said piece of equipment to be assessed independently of any connected assets. However, industrial machines are typically complex sets of components that continuously interact with one another. In some cases, dynamics resulting from the inception and development of a fault can propagate between individual components. For example, a fault in one component may lead to an increased vibration level in both the faulty component, as well as in connected healthy components. In such cases, a condition monitoring system focusing on a specific element in a connected set of components may either incorrectly indicate a fault, or conversely, a fault might be missed or masked due to the interaction of a piece of equipment with neighboring machines. In such cases, a more holistic condition monitoring approach that can not only account for such interactions, but utilize them to provide a more complete and definitive diagnostic picture of the health of the machinery is highly desirable. In this paper, a Two-Stage Bayesian Inference approach allowing data from separate condition monitoring systems to be combined is presented. Data from distributed condition monitoring systems are combined in two stages, the first data fusion occurring at a local, or component, level, and the second fusion combining data at a global level. Data obtained from an experimental rig consisting of an electric motor, two gearboxes, and a load, operating under a range of different fault conditions is used to illustrate the efficacy of the method at pinpointing the root cause of a problem. The obtained results suggest that the approach is adept at refining the diagnostic information obtained from each of the different machine components monitored, therefore improving the reliability of the health assessment of
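
    A minimal sketch of the two-stage idea, assuming a small discrete set of health states and made-up sensor likelihoods: each component first fuses its own sensors' evidence into a local posterior, and a global step then combines the local posteriors under a conditional-independence assumption. None of the states or numbers come from the paper's experimental rig.

      import numpy as np

      # Hypothetical machine-level health states and illustrative sensor likelihoods;
      # every number below is an assumption made up for this sketch.
      states = ["healthy", "gearbox fault", "motor fault"]
      prior = np.array([0.90, 0.05, 0.05])

      def bayes_update(belief, likelihood):
          posterior = belief * likelihood
          return posterior / posterior.sum()

      # Stage 1 (local fusion): each component fuses its own sensors' evidence.
      motor_likelihoods = [np.array([0.70, 0.20, 0.60]),   # P(observation | state)
                           np.array([0.80, 0.30, 0.50])]
      gearbox_likelihoods = [np.array([0.40, 0.85, 0.30])]

      local_motor = prior.copy()
      for lik in motor_likelihoods:
          local_motor = bayes_update(local_motor, lik)

      local_gearbox = prior.copy()
      for lik in gearbox_likelihoods:
          local_gearbox = bayes_update(local_gearbox, lik)

      # Stage 2 (global fusion): combine the local posteriors, assuming the components'
      # observations are conditionally independent; divide out the prior counted twice.
      global_posterior = local_motor * local_gearbox / prior
      global_posterior /= global_posterior.sum()
      print(dict(zip(states, np.round(global_posterior, 3))))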

  3. Novel two-stage piezoelectric-based ocean wave energy harvesters for moored or unmoored buoys

    Science.gov (United States)

    Murray, R.; Rastegar, J.

    2009-03-01

    Harvesting mechanical energy from ocean wave oscillations for conversion to electrical energy has long been pursued as an alternative or self-contained power source. The attraction to harvesting energy from ocean waves stems from the sheer power of the wave motion, which can easily exceed 50 kW per meter of wave front. The principal barrier to harvesting this power is the very low and varying frequency of ocean waves, which generally vary from 0.1Hz to 0.5Hz. In this paper the application of a novel class of two-stage electrical energy generators to buoyant structures is presented. The generators use the buoy's interaction with the ocean waves as a low-speed input to a primary system, which, in turn, successively excites an array of vibratory elements (secondary system) into resonance - like a musician strumming a guitar. The key advantage of the present system is that by having two decoupled systems, the low frequency and highly varying buoy motion is converted into constant and much higher frequency mechanical vibrations. Electrical energy may then be harvested from the vibrating elements of the secondary system with high efficiency using piezoelectric elements. The operating principles of the novel two-stage technique are presented, including analytical formulations describing the transfer of energy between the two systems. Also, prototypical design examples are offered, as well as an in-depth computer simulation of a prototypical heaving-based wave energy harvester which generates electrical energy from the up-and-down motion of a buoy riding on the ocean's surface.
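
    To make the frequency up-conversion idea concrete, here is a minimal numerical sketch in which a lightly damped high-frequency resonator (standing in for one secondary vibratory element) receives an impulsive "strum" from the slow primary stage every 5 s and rings at its own natural frequency in between; the dissipated energy serves as a rough proxy for harvestable power. All parameter values are assumptions, not the prototype's.

      import numpy as np

      # Secondary vibratory element: a lightly damped 50 Hz resonator, strummed by the
      # slow primary stage once every 5 s (a 0.2 Hz buoy motion). Values are assumed.
      fn, zeta = 50.0, 0.01                  # natural frequency [Hz], damping ratio
      wn = 2.0 * np.pi * fn
      dt, t_end = 1.0e-4, 15.0
      kick_every = int(5.0 / dt)             # one impulsive strum per primary-stage cycle

      x, v, energy = 0.0, 0.0, 0.0
      for k in range(int(t_end / dt)):
          if k % kick_every == 0:
              v += 0.5                       # velocity kick transferred from the primary stage
          a = -2.0 * zeta * wn * v - wn**2 * x
          v += a * dt                        # semi-implicit Euler integration
          x += v * dt
          energy += 2.0 * zeta * wn * v**2 * dt   # dissipated energy as a harvest proxy

      print(f"energy proxy dissipated over {t_end:.0f} s: {energy:.4f} (arbitrary units)")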

  4. Anti-kindling induced by two-stage coordinated reset stimulation with weak onset intensity

    Directory of Open Access Journals (Sweden)

    Magteld eZeitler

    2016-05-01

    Full Text Available Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing–dependent plasticity, CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof-of-concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, where two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first-in-man and proof-of-concept studies.

  5. Two-Stage Latissimus Dorsi Flap with Implant for Unilateral Breast Reconstruction: Getting the Size Right

    Directory of Open Access Journals (Sweden)

    Jiajun Feng

    2016-03-01

    Full Text Available Background: The aim of unilateral breast reconstruction after mastectomy is to craft a natural-looking breast with symmetry. The latissimus dorsi (LD) flap with implant is an established technique for this purpose. However, it is challenging to obtain adequate volume and satisfactory aesthetic results using a one-stage operation when considering factors such as muscle atrophy, wound dehiscence and excessive scarring. The two-stage reconstruction addresses these difficulties by using a tissue expander to gradually enlarge the skin pocket which eventually holds an appropriately sized implant. Methods: We analyzed nine patients who underwent unilateral two-stage LD reconstruction. In the first stage, an expander was placed along with the LD flap to reconstruct the mastectomy defect, followed by gradual tissue expansion to achieve overexpansion of the skin pocket. The final implant volume was determined by measuring the residual expander volume after aspirating the excess saline. Finally, the expander was replaced with the chosen implant. Results: The average volume of tissue expansion was 460 mL. The resultant expansion allowed an implant ranging in volume from 255 to 420 mL to be placed alongside the LD muscle. Seven patients scored less than six on the relative breast retraction assessment formula for breast symmetry, indicating excellent breast symmetry. The remaining two patients scored between six and eight, indicating good symmetry. Conclusions: This approach allows the size of the eventual implant to be estimated after the skin pocket has healed completely and the LD muscle has undergone natural atrophy. Optimal reconstruction results were achieved using this approach.

  6. Stochastic modeling

    CERN Document Server

    Lanchier, Nicolas

    2017-01-01

    Three coherent parts form the material covered in this text, portions of which have not been widely covered in traditional textbooks. In this coverage the reader is quickly introduced to several different topics enriched with 175 exercises which focus on real-world problems. Exercises range from the classics of probability theory to more exotic research-oriented problems based on numerical simulations. Intended for graduate students in mathematics and applied sciences, the text provides the tools and training needed to write and use programs for research purposes. The first part of the text begins with a brief review of measure theory and revisits the main concepts of probability theory, from random variables to the standard limit theorems. The second part covers traditional material on stochastic processes, including martingales, discrete-time Markov chains, Poisson processes, and continuous-time Markov chains. The theory developed is illustrated by a variety of examples surrounding applications such as the ...

  7. Stochastic Cooling

    Energy Technology Data Exchange (ETDEWEB)

    Blaskiewicz, M.

    2011-01-01

    Stochastic Cooling was invented by Simon van der Meer and was demonstrated at the CERN ISR and ICE (Initial Cooling Experiment). Operational systems were developed at Fermilab and CERN. A complete theory of cooling of unbunched beams was developed, and was applied at CERN and Fermilab. Several new and existing rings employ coasting beam cooling. Bunched beam cooling was demonstrated in ICE and has been observed in several rings designed for coasting beam cooling. High energy bunched beams have proven more difficult. Signal suppression was achieved in the Tevatron, though operational cooling was not pursued at Fermilab. Longitudinal cooling was achieved in the RHIC collider. More recently a vertical cooling system in RHIC cooled both transverse dimensions via betatron coupling.

  8. Machine learning meliorates computing and robustness in discrete combinatorial optimization problems.

    Directory of Open Access Journals (Sweden)

    Fushing Hsieh

    2016-11-01

    Full Text Available Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high-dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problems. Can machine learning algorithms extract such information content and make combinatorial optimizing tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that the answers to both questions are positive. One key reason is permutation invariance. That is, the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations on the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that helps the optimizing task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, which then reveals the robustness pertaining to the original optimal solution. Two simulated systems, the Assignment problem and the Traveling Salesman problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose brand-new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real-world settings.
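
    The Data Mechanics algorithm itself is not reproduced in this record, so the sketch below only illustrates the underlying idea: permute the rows and columns of a subject-vs-measurement matrix so that hidden block (deterministic) structure becomes visible, here using ordinary hierarchical-clustering leaf orders as the permutations on a synthetic matrix. The data, linkage choice, and block structure are assumptions.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, leaves_list

      # Synthetic subject-vs-measurement matrix with hidden two-block structure, then
      # scrambled; a stand-in for real data, since the Data Mechanics code is not public here.
      rng = np.random.default_rng(0)
      blocks = np.kron(np.eye(2), np.ones((10, 5)))      # 20 subjects x 10 measurements
      data = blocks + 0.3 * rng.standard_normal(blocks.shape)
      data = data[rng.permutation(20)][:, rng.permutation(10)]

      # Reorder rows and columns by hierarchical-clustering leaf order (average linkage);
      # the permuted matrix makes the deterministic block structure visible again.
      row_order = leaves_list(linkage(data, method="average"))
      col_order = leaves_list(linkage(data.T, method="average"))
      print(np.round(data[row_order][:, col_order], 1))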

  9. STOCHASTIC FLOWS OF MAPPINGS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow is the generalization of superprocesses of stochastic flows and stochastic diffeomorphism induced by the strong solutions of stochastic differential equations.

  10. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and by the aim of applying similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between the simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...
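
    As a minimal illustration of the perturbation-driven, gradient-based idea (not the book's algorithms or their convergence theory), the sketch below maximizes an unknown static map using a random-sign dither and a two-point gradient estimate; the map, gain, and dither amplitude are assumptions.

      import numpy as np

      # Unknown static map to be maximized; the optimizer only sees its measurements.
      def J(theta):
          return 5.0 - (theta - 2.0) ** 2

      rng = np.random.default_rng(1)
      theta_hat, step, amp = 0.0, 0.01, 0.1   # initial guess, gain, dither amplitude (assumed)

      for _ in range(500):
          eta = rng.choice([-1.0, 1.0])       # random-sign stochastic dither
          grad_est = (J(theta_hat + amp * eta) - J(theta_hat - amp * eta)) * eta / (2.0 * amp)
          theta_hat += step * grad_est        # gradient-ascent update

      print(f"estimated maximizer: {theta_hat:.3f} (true maximizer is 2.0)")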

  11. Stochastic wave propagation

    CERN Document Server

    Sobczyk, K

    1985-01-01

    This is a concise, unified exposition of the existing methods of analysis of linear stochastic waves with particular reference to the most recent results. Both scalar and vector waves are considered. Principal attention is concentrated on wave propagation in stochastic media and wave scattering at stochastic surfaces. However, discussion extends also to various mathematical aspects of stochastic wave equations and problems of modelling stochastic media.

  12. Stochastic homothetically revealed preference for tight stochastic demand functions

    OpenAIRE

    Jan Heufer

    2009-01-01

    This paper strengthens the framework of stochastic revealed preferences introduced by Bandyopadhyay et al. (1999, 2004) for stochastic homothetically revealed preferences for tight stochastic demand functions.

  13. Experimental and numerical studies on two-stage combustion of biomass

    Energy Technology Data Exchange (ETDEWEB)

    Houshfar, Eshan

    2012-07-01

    In this thesis, two-stage combustion of biomass was experimentally and numerically investigated in a multifuel reactor. The work focused on the following emission issues: (1) NOx and N2O; (2) unburnt species (CO and CxHy); and (3) corrosion-related emissions. The study had a focus on two-stage combustion in order to reduce pollutant emissions (primarily NOx emissions). It is well known that pollutant emissions are very dependent on process conditions such as temperature, reactant concentrations and residence times. On the other hand, emissions are also dependent on the fuel properties (moisture content, volatiles, alkali content, etc.). A detailed study of the important parameters with suitable biomass fuels was performed in order to optimize the various process conditions. Different experimental studies were carried out on biomass fuels in order to study the effect of fuel properties and combustion parameters on pollutant emissions. Process conditions typical for biomass combustion processes were studied, using advanced experimental equipment. The experiments clearly showed the effects of staged air combustion, compared to non-staged combustion, on the emission levels. A NOx reduction of up to 85% was reached with staged air combustion using demolition wood as fuel. An optimum primary excess air ratio of 0.8-0.95 was found to minimize the NOx emissions in staged air combustion. Air staging had, however, a negative effect on N2O emissions. Even though the trends showed a very small reduction in the NOx level as temperature increased for non-staged combustion, the effect of temperature was not significant for NOx and CxHy in either staged air or non-staged combustion, while it had a great influence on the N2O and CO emissions, with decreasing levels at increasing temperature. Furthermore, flue gas recirculation (FGR) was used in combination with staged combustion to obtain an enhanced NOx reduction. The

  14. Enhancement of bioenergy production from organic wastes by two-stage anaerobic hydrogen and methane production process

    DEFF Research Database (Denmark)

    Luo, Gang; Xie, Li; Zhou, Qi

    2011-01-01

    The present study investigated a two-stage anaerobic hydrogen and methane process for increasing bioenergy production from organic wastes. A two-stage process with a hydraulic retention time (HRT) of 3 d for the hydrogen reactor and 12 d for the methane reactor obtained 11% higher energy compared to a single-stage methanogenic process (HRT 15 d) at an organic loading rate (OLR) of 3 g VS/(L d). The two-stage process was still stable when the OLR was increased to 4.5 g VS/(L d), while the single-stage process failed. The study further revealed that by changing the HRT(hydrogen):HRT(methane) ratio of the two-stage process from 3...

  15. Differentiating the persistency and permanency of some two stages DNA splicing language via Yusof-Goode (Y-G) approach

    Science.gov (United States)

    Mudaber, M. H.; Yusof, Y.; Mohamad, M. S.

    2017-09-01

    Predicting the existence of restriction enzyme sequences on recombinant DNA fragments after the manipulation reaction is complete, via a mathematical approach, is considered a convenient route in DNA recombination. In mathematical terms, a recombinant DNA strand that retains the recognition sites of the restriction enzymes is called persistent and permanent. Differentiating the persistency and permanency of two-stage recombinant DNA strands by wet-lab experiment is normally expensive and time-consuming, since the experiment must be run in two stages and additional restriction enzymes must be added to the reaction. Therefore, in this research, the Yusof-Goode (Y-G) model is used to investigate the difference between the persistent and permanent splicing languages of some two-stage systems. Two theorems are provided, which show the persistency and non-permanency of two-stage DNA splicing languages.

  16. A two-stage logistic regression-ANN model for the prediction of distress banks: Evidence from 11 emerging countries

    National Research Council Canada - National Science Library

    Shu Ling Lin

    2010-01-01

    This paper proposes a new two-stage hybrid logistic regression-ANN model for the construction of a financial distress warning system for the banking industry in emerging markets during 1998-2006...
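
    The record does not spell out how the two stages are wired together, so the sketch below shows one plausible reading: a logistic regression produces a distress probability that is then fed, together with the original ratios, into a small neural network. The synthetic data, scikit-learn estimators, and feature wiring are all assumptions.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      # Synthetic stand-in for bank financial ratios and distress labels; the study's
      # 1998-2006 data from 11 emerging countries is not reproduced here.
      X, y = make_classification(n_samples=1000, n_features=12, n_informative=6, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

      # Stage 1: logistic regression supplies a distress probability for each bank.
      logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      p_tr = logit.predict_proba(X_tr)[:, [1]]
      p_te = logit.predict_proba(X_te)[:, [1]]

      # Stage 2: an ANN refines the decision using the ratios plus the stage-1 probability.
      ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      ann.fit(np.hstack([X_tr, p_tr]), y_tr)
      print("two-stage test accuracy:", round(ann.score(np.hstack([X_te, p_te]), y_te), 3))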

  17. Hydrogen and methane production from condensed molasses fermentation soluble by a two-stage anaerobic process

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Chiu-Yue; Liang, You-Chyuan; Lay, Chyi-How [Feng Chia Univ., Taichung, Taiwan (China). Dept. of Environmental Engineering and Science; Chen, Chin-Chao [Chungchou Institute of Technology, Taiwan (China). Environmental Resources Lab.; Chang, Feng-Yuan [Feng Chia Univ., Taichung, Taiwan (China). Research Center for Energy and Resources

    2010-07-01

    The treatment of condensed molasses fermentation soluble (CMS) is a troublesome problem for glutamate manufacturing factories. However, CMS contains high carbohydrate and nutrient contents and is an attractive and commercially potential feedstock for bioenergy production. The aim of this paper is to produce hydrogen and methane by a two-stage anaerobic fermentation process. The fermentative hydrogen production from CMS was conducted in a continuously stirred tank bioreactor (working volume 4 L) which was operated at a hydraulic retention time (HRT) of 8 h, an organic loading rate (OLR) of 120 kg COD/m³-d, a temperature of 35 °C and pH 5.5, with sewage sludge as seed. The anaerobic methane production was conducted in an up-flow bioreactor (working volume 11 L) which was operated at an HRT of 24-60 h, an OLR of 4.0-10 kg COD/m³-d, a temperature of 35 °C and pH 7.0, using anaerobic granule sludge from a fructose manufacturing factory as the seed and the effluent from the hydrogen production process as the substrate. These two reactors have been operated successfully for more than 400 days. The steady-state hydrogen content, hydrogen production rate and hydrogen production yield in the hydrogen fermentation system were 37%, 169 mmol-H₂/L-d and 93 mmol-H₂/g carbohydrate removed, respectively. In the methane fermentation system, the peak methane content and methane production rate were 66.5% and 86.8 mmol-CH₄/L-d, with a methane production yield of 189.3 mmol-CH₄/g COD removed at an OLR of 10 kg/m³-d. The energy production rate was used to elucidate the energy efficiency of this two-stage process. A total energy production rate of 133.3 kJ/L/d was obtained, with 5.5 kJ/L/d from hydrogen fermentation and 127.8 kJ/L/d from methane fermentation. (orig.)

  18. Hydrogen production from cellulose in a two-stage process combining fermentation and electrohydrogenesis

    KAUST Repository

    Lalaurette, Elodie

    2009-08-01

    A two-stage dark-fermentation and electrohydrogenesis process was used to convert the recalcitrant lignocellulosic materials into hydrogen gas at high yields and rates. Fermentation using Clostridium thermocellum produced 1.67 mol H2/mol-glucose at a rate of 0.25 L H2/L-d with a corn stover lignocellulose feed, and 1.64 mol H2/mol-glucose and 1.65 L H2/L-d with a cellobiose feed. The lignocellulose and cellobiose fermentation effluent consisted primarily of acetic, lactic, succinic, and formic acids and ethanol. An additional 800 ± 290 mL H2/g-COD was produced from a synthetic effluent with a wastewater inoculum (fermentation effluent inoculum; FEI) by electrohydrogenesis using microbial electrolysis cells (MECs). Hydrogen yields were increased to 980 ± 110 mL H2/g-COD with the synthetic effluent by combining in the inoculum samples from multiple microbial fuel cells (MFCs) each pre-acclimated to a single substrate (single substrate inocula; SSI). Hydrogen yields and production rates with SSI and the actual fermentation effluents were 980 ± 110 mL/g-COD and 1.11 ± 0.13 L/L-d (synthetic); 900 ± 140 mL/g-COD and 0.96 ± 0.16 L/L-d (cellobiose); and 750 ± 180 mL/g-COD and 1.00 ± 0.19 L/L-d (lignocellulose). A maximum hydrogen production rate of 1.11 ± 0.13 L H2/L reactor/d was produced with synthetic effluent. Energy efficiencies based on electricity needed for the MEC using SSI were 270 ± 20% for the synthetic effluent, 230 ± 50% for lignocellulose effluent and 220 ± 30% for the cellobiose effluent. COD removals were ∼90% for the synthetic effluents, and 70-85% based on VFA removal (65% COD removal) with the cellobiose and lignocellulose effluent. The overall hydrogen yield was 9.95 mol-H2/mol-glucose for the cellobiose. These results show that pre-acclimation of MFCs to single substrates improves performance with a complex mixture of substrates, and that high hydrogen yields and gas production rates can be achieved using a two-stage fermentation and MEC

  19. AREA DETERMINATION OF DIABETIC FOOT ULCER IMAGES USING A CASCADED TWO-STAGE SVM BASED CLASSIFICATION.

    Science.gov (United States)

    Wang, Lei; Pedersen, Peder; Agu, Emmanuel; Strong, Diane; Tulu, Bengisu

    2016-11-23

    It is standard practice for clinicians and nurses to primarily assess patients' wounds via visual examination. This subjective method can be inaccurate in wound assessment and also represents a significant clinical workload. Hence, computer-based systems, especially implemented on mobile devices, can provide automatic, quantitative wound assessment and can thus be valuable for accurately monitoring wound healing status. Out of all wound assessment parameters, the measurement of the wound area is the most suitable for automated analysis. Most of the current wound boundary determination methods only process the image of the wound area along with a small amount of surrounding healthy skin. In this paper, we present a novel approach that uses Support Vector Machine (SVM) to determine the wound boundary on a foot ulcer image captured with an image capture box, which provides controlled lighting, angle and range conditions. The Simple Linear Iterative Clustering (SLIC) method is applied for effective super-pixel segmentation. A cascaded two-stage classifier is trained as follows: in the first stage a set of k binary SVM classifiers are trained and applied to different subsets of the entire training images dataset, and a set of incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from super-pixels that are used as input for each stage in the classifier training. Specifically, we apply the color and Bag-of-Word (BoW) representation of local Dense SIFT features (DSIFT) as the descriptor for ruling out irrelevant regions (first stage), and apply color and wavelet based features as descriptors for distinguishing healthy tissue from wound regions (second stage). Finally, the detected wound boundary is refined by applying a Conditional Random Field (CRF) image processing technique. We have implemented the wound classification on a Nexus
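
    The sketch below mirrors the cascade training described above in stripped-down form: k binary SVMs are trained on subsets of the training data, the instances they misclassify are pooled, and a second-stage SVM is trained on that hard set, with disagreements at prediction time deferred to the second stage. The synthetic features stand in for the colour, DSIFT and wavelet descriptors, and the deferral rule is an assumption, since the record does not give the exact combination logic.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.svm import SVC

      # Synthetic stand-in for super-pixel descriptors (colour/DSIFT/wavelet features) and
      # wound vs. non-wound labels; the real pipeline extracts these from SLIC super-pixels.
      X, y = make_classification(n_samples=1500, n_features=20, n_informative=8, random_state=0)
      rng = np.random.default_rng(0)

      # Stage 1: k binary SVMs, each trained on a different subset of the training data;
      # the instances any of them misclassify are pooled into a "hard" set.
      k = 5
      subsets = np.array_split(rng.permutation(len(y)), k)
      stage1, hard = [], []
      for idx in subsets:
          clf = SVC(kernel="rbf", gamma="scale").fit(X[idx], y[idx])
          stage1.append(clf)
          hard.extend(np.where(clf.predict(X) != y)[0])
      hard = np.unique(hard)

      # Stage 2: a second SVM concentrates on the pooled hard instances.
      stage2 = SVC(kernel="rbf", gamma="scale").fit(X[hard], y[hard])

      def cascade_predict(x):
          x = x.reshape(1, -1)
          votes = {clf.predict(x)[0] for clf in stage1}
          # If the stage-1 classifiers disagree, defer to the stage-2 classifier.
          return votes.pop() if len(votes) == 1 else stage2.predict(x)[0]

      print("example prediction for the first super-pixel:", cascade_predict(X[0]))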

  20. A farm-scale pilot plant for biohydrogen and biomethane production by two-stage fermentation

    Directory of Open Access Journals (Sweden)

    R. Oberti

    2013-09-01

    Full Text Available Hydrogen is considered one of the possible main energy carriers for the future, thanks to its unique environmental properties. Indeed, its energy content (120 MJ/kg) can be exploited virtually without emitting any exhaust into the atmosphere except for water. Renewable production of hydrogen can be obtained through the common biological processes on which anaerobic digestion relies, a well-established technology in use at farm scale for treating different biomasses and residues. Although two-stage hydrogen- and methane-producing fermentation is a simple variant of traditional anaerobic digestion, it is a relatively new approach studied mainly at laboratory scale. It is based on biomass fermentation in two separate, sequential stages, each maintaining conditions optimized to promote specific bacterial consortia: in the first, acidophilic reactor hydrogen is produced, while the volatile fatty acid-rich effluent is sent to the second reactor, where traditional methane-rich biogas production is accomplished. A two-stage pilot-scale plant was designed, manufactured and installed at the experimental farm of the University of Milano and operated using a biomass mixture of livestock effluents mixed with sugar/starch-rich residues (rotten fruits and potatoes and expired fruit juices), a feedstock mixture based on waste biomasses directly available in the rural area where the plant is installed. The hydrogenic and the methanogenic reactors, both of CSTR type, had total volumes of 0.7 m³ and 3.8 m³ respectively, were operated under thermophilic conditions (55 ± 2 °C) without any external pH control, and were fully automated. After a brief description of the requirements of the system, this contribution gives a detailed description of its components and of the engineering solutions to the problems encountered during the plant realization and start-up. The paper also discusses the results obtained in a first experimental run which led to production in the range of previous

  1. Removal of cesium from simulated liquid waste with countercurrent two-stage adsorption followed by microfiltration

    Energy Technology Data Exchange (ETDEWEB)

    Han, Fei; Zhang, Guang-Hui [School of Environmental Science and Engineering, Tianjin University, Tianjin, 300072 (China); Gu, Ping, E-mail: guping@tju.edu.cn [School of Environmental Science and Engineering, Tianjin University, Tianjin, 300072 (China)

    2012-07-30

    Highlights: • The adsorption isotherm of cesium by copper ferrocyanide followed a Freundlich model. • The decontamination factor of cesium was higher in the lab-scale test than in the jar test. • A countercurrent two-stage adsorption-microfiltration process was achieved. • The cesium concentration in the effluent could be calculated. • It is a new cesium removal process with a higher decontamination factor. - Abstract: Copper ferrocyanide (CuFC) was used as an adsorbent to remove cesium. Jar test results showed that the adsorption capacity of CuFC was better than that of potassium zinc hexacyanoferrate. Lab-scale tests were performed by an adsorption-microfiltration process, and the mean decontamination factor (DF) was 463 when the initial cesium concentration was 101.3 µg/L, the dosage of CuFC was 40 mg/L and the adsorption time was 20 min. The cesium concentration in the effluent continuously decreased with the operation time, which indicated that the used adsorbent retained its adsorption capacity. To use this capacity, experiments on a countercurrent two-stage adsorption (CTA)-microfiltration (MF) process were carried out with CuFC adsorption combined with membrane separation. A calculation method for determining the cesium concentration in the effluent was given, and batch tests in a pressure cup were performed to verify the calculation method. The results showed that the experimental values fitted well with the calculated values in the CTA-MF process. The mean DF was 1123 when the dilution factor was 0.4, the initial cesium concentration was 98.75 µg/L and the dosage of CuFC and adsorption time were the same as those used in the lab-scale test. The DF obtained by the CTA-MF process was more than three times higher than that of the single-stage adsorption in the jar test.

  2. Two-stage induced differentiation of OCT4+/Nanog+ stem-like cells in lung adenocarcinoma.

    Science.gov (United States)

    Li, Rong; Huang, Jinsu; Ma, Meili; Lou, Yuqing; Zhang, Yanwei; Wu, Lixia; Chang, David W; Zhao, Picheng; Dong, Qianggang; Wu, Xifeng; Han, Baohui

    2016-10-18

    Stem-like cells in solid tumors are purported to contribute to cancer development and poor treatment outcome. The abilities to self-renew, differentiate, and resist anticancer therapies are hallmarks of these rare cells, and steering them into lineage commitment may be one strategy to curb cancer development or progression. Vitamin D is a prohormone that can alter cell growth and differentiation and may induce the differentiation of cancer stem-like cells. In this study, octamer-binding transcription factor 4 (OCT4)-positive/Nanog homeobox (Nanog)-positive lung adenocarcinoma stem-like cells (LACSCs) were enriched from spheroid-cultured SPC-A1 cells and differentiated by a two-stage induction (TSI) method, which involved knockdown of hypoxia-inducible factor 1-alpha (HIF1α) expression (first stage) followed by sequential induction with 1α,25-dihydroxyvitamin D3 (1,25(OH)2D3, VD3) and suberoylanilide hydroxamic acid (SAHA) treatment (second stage). The results showed that the HIF1α-knockdown cells displayed diminished cell invasion and clonogenic activities. Moreover, the TSI cells highly expressed tumor suppressor protein p63 (P63) and forkhead box J1 (FOXJ1) and lost stem cell characteristics, including absent expression of OCT4 and Nanog. These cells regained sensitivity to cisplatin in vitro while losing tumorigenic capacity and showing decreased tumor cell proliferation in vivo. Our results suggest that induced transdifferentiation of LACSCs by vitamin D and SAHA may become a novel therapeutic avenue to alter tumor cell phenotypes and improve patient outcome. The development and progression of lung cancer may involve a rare population of stem-like cells that have the ability to grow, differentiate, and resist drug treatment. However, current therapeutic strategies have mostly focused on tumor characteristics and neglected the potential source of cells that may contribute to poor clinical outcome. We generated lung adenocarcinoma stem-like cells from spheroid culture and

  3. First Law Analysis of a Two-stage Ejector-vapor Compression Refrigeration Cycle working with R404A

    OpenAIRE

    Feiza Memet; Daniela-Elena Mitu

    2011-01-01

    The traditional two-stage vapor compression refrigeration cycle might be replaced by a two-stage ejector-vapor compression refrigeration cycle if the aim is to decrease irreversibility during expansion. In this respect, the expansion valve is replaced with an ejector. The performance improvement is investigated for the case of R404A as the refrigerant. Using the ejector as an expansion device ensures a higher COP value compared to the traditional case. On the basis...

  4. Considerations Regarding Age at Surgery and Fistula Incidence Using One- and Two-stage Closure for Cleft Palate

    OpenAIRE

    Simona Stoicescu; Dm Enescu

    2013-01-01

    Introduction: Although cleft lip and palate (CLP) is one of the most common congenital malformations, occurring in 1 in 700 live births, there is still no generally accepted treatment protocol. Numerous surgical techniques have been described for cleft palate repair; these techniques can be divided into one-stage (one operation) cleft palate repair and two-stage cleft palate closure. The aim of this study is to present our cleft palate team experience in using the two-stage cleft palate closu...

  5. Gems of combinatorial optimization and graph algorithms

    CERN Document Server

    Skutella, Martin; Stiller, Sebastian; Wagner, Dorothea

    2015-01-01

    Are you looking for new lectures for your course on algorithms, combinatorial optimization, or algorithmic game theory?  Maybe you need a convenient source of relevant, current topics for a graduate student or advanced undergraduate student seminar?  Or perhaps you just want an enjoyable look at some beautiful mathematical and algorithmic results, ideas, proofs, concepts, and techniques in discrete mathematics and theoretical computer science?   Gems of Combinatorial Optimization and Graph Algorithms is a handpicked collection of up-to-date articles, carefully prepared by a select group of international experts, who have contributed some of their most mathematically or algorithmically elegant ideas.  Topics include longest tours and Steiner trees in geometric spaces, cartograms, resource buying games, congestion games, selfish routing, revenue equivalence and shortest paths, scheduling, linear structures in graphs, contraction hierarchies, budgeted matching problems, and motifs in networks.   This ...

  6. Three Syntactic Theories for Combinatory Graph Reduction

    DEFF Research Database (Denmark)

    Danvy, Olivier; Zerny, Ian

    2011-01-01

    We present a purely syntactic theory of graph reduction for the canonical combinators S, K, and I, where graph vertices are represented with evaluation contexts and let expressions. We express this syntactic theory as a reduction semantics, which we refocus into the first storeless abstract machine for combinatory graph reduction, which we refunctionalize into the first storeless natural semantics for combinatory graph reduction. We then factor out the introduction of let expressions to denote as many graph vertices as possible upfront instead of on demand, resulting in a second syntactic theory, this one of term graphs in the sense of Barendregt et al. The corresponding storeless abstract machine and natural semantics follow mutatis mutandis. We then interpret let expressions as operations over a global store (thus shifting, in Strachey's words, from denotable entities to storable entities), resulting...

  7. Three Syntactic Theories for Combinatory Graph Reduction

    DEFF Research Database (Denmark)

    Danvy, Olivier; Zerny, Ian

    2013-01-01

    We present a purely syntactic theory of graph reduction for the canonical combinators S, K, and I, where graph vertices are represented with evaluation contexts and let expressions. We express this first syntactic theory as a storeless reduction semantics of combinatory terms. We then factor out the introduction of let expressions to denote as many graph vertices as possible upfront instead of on demand. The factored terms can be interpreted as term graphs in the sense of Barendregt et al. We express this second syntactic theory, which we prove equivalent to the first, as a storeless reduction semantics of combinatory term graphs. We then recast let bindings as bindings in a global store, thus shifting, in Strachey's words, from denotable entities to storable entities. The store-based terms can still be interpreted as term graphs. We express this third syntactic theory, which we prove equivalent to the second...

  8. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods. Many of them are investigated analytically, and the costs of the solutions are compared numerically with those of solutions obtained by simulated annealing and with the costs of a global optimal solution. Using dynamical systems, a solution to the combinatorial optimization problem emerges in the limit of large times as an asymptotically stable point of the dynamics. The obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing. Because of the parallelization...
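
    As a hedged illustration of the first family mentioned above (dynamical systems derived from penalty methods), the sketch below relaxes a tiny assignment problem and integrates a projected gradient flow of the cost plus quadratic constraint penalties; the cost matrix, penalty weight, and Euler step are illustrative, and in general the rounded limit point is only an approximation of the global optimum.

      import numpy as np

      # Tiny assignment problem (illustrative cost matrix).
      C = np.array([[4., 2., 8., 5.],
                    [3., 7., 6., 4.],
                    [9., 5., 3., 2.],
                    [6., 8., 2., 7.]])
      n = C.shape[0]

      # Penalty-method gradient flow: relax the permutation matrix to X in [0,1]^{n x n}
      # and integrate dX/dt = -grad( <C, X> + mu * constraint penalties ) by Euler steps,
      # clipping to the box after each step.
      mu, dt = 10.0, 0.01
      X = np.full((n, n), 1.0 / n)
      for _ in range(5000):
          grad = C + 2.0 * mu * ((X.sum(axis=1, keepdims=True) - 1.0)    # row sums -> 1
                                 + (X.sum(axis=0, keepdims=True) - 1.0)) # column sums -> 1
          X = np.clip(X - dt * grad, 0.0, 1.0)

      # Round the asymptotic state to a discrete assignment (often a good approximation).
      assignment = X.argmax(axis=1)
      print("rounded assignment:", assignment, "cost:", C[np.arange(n), assignment].sum())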

  9. Dynamic combinatorial self-replicating systems.

    Science.gov (United States)

    Moulin, Emilie; Giuseppone, Nicolas

    2012-01-01

    Thanks to their intrinsic network topologies, dynamic combinatorial libraries (DCLs) represent new tools for investigating fundamental aspects related to self-organization and adaptation processes. Very recently the first examples integrating self-replication features within DCLs have pushed even further the idea of implementing dynamic combinatorial chemistry (DCC) towards minimal systems capable of self-construction and/or evolution. Indeed, feedback loop processes - in particular in the form of autocatalytic reactions - are keystones to build dynamic supersystems which could possibly approach the roots of "Darwinian" evolvability at mesoscale. This topic of current interest also shows significant potentialities beyond its fundamental character, because truly smart and autonomous materials for the future will have to respond to changes of their environment by selecting and by exponentially amplifying their fittest constituents.

  10. Assessment of structural diversity in combinatorial synthesis.

    Science.gov (United States)

    Fergus, Suzanne; Bender, Andreas; Spring, David R

    2005-06-01

    This article covers the combinatorial synthesis of small molecules with maximal structural diversity to generate a collection of pure compounds that are attractive for lead generation in a phenotypic, high-throughput screening approach. Nature synthesises diverse small molecules, but there are disadvantages with using natural product sources. The efficient chemical synthesis of structural diversity (and complexity) is the aim of diversity-oriented synthesis, and recent progress is reviewed. Specific highlights include a discussion of strategies to obtain structural diversity and an analysis of molecular descriptors used to classify compounds. The assessment of how successful one synthesis is versus another is subjective, therefore we test-drive software to assess structural diversity in combinatorial synthesis, which is freely available via a web interface.

  11. Memetic firefly algorithm for combinatorial optimization

    CERN Document Server

    Fister, Iztok; Fister, Iztok; Brest, Janez

    2012-01-01

    Firefly algorithms belong to the modern meta-heuristic algorithms inspired by nature that can be successfully applied to continuous optimization problems. In this paper, we have applied the firefly algorithm, hybridized with a local search heuristic, to combinatorial optimization problems, where we use graph 3-coloring problems as test benchmarks. The results of the proposed memetic firefly algorithm (MFFA) were compared with the results of the Hybrid Evolutionary Algorithm (HEA), Tabucol, and the evolutionary algorithm with SAW method (EA-SAW) by coloring a suite of medium-scale random graphs (graphs with 500 vertices) generated using the Culberson random graph generator. The results of the firefly algorithm were very promising and showed the potential for this algorithm to be successfully applied in the near future to other combinatorial optimization problems as well.
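
    The MFFA itself, and in particular its decoding of real-valued fireflies into colourings, is not reproduced in this record; the sketch below is a heavily simplified stand-in that keeps the two named ingredients: firefly-style attraction moves over real-valued vectors that decode to 3-colourings, plus a greedy local-search repair. The graph size, parameter values, and decoder are assumptions.

      import numpy as np

      rng = np.random.default_rng(0)

      # Small random graph as a stand-in for the paper's 500-vertex Culberson benchmarks.
      n, p = 24, 0.15
      edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

      def decode(x):
          # Map each real-valued gene in [0, 1] to one of three colours.
          return np.minimum((np.clip(x, 0.0, 1.0) * 3).astype(int), 2)

      def conflicts(colors):
          return sum(int(colors[i] == colors[j]) for i, j in edges)

      def local_search(colors):
          # Greedy repair: recolour the second endpoint of each conflicting edge if it helps.
          colors = colors.copy()
          for i, j in edges:
              if colors[i] == colors[j]:
                  best_c, best_f = colors[j], conflicts(colors)
                  for c in range(3):
                      colors[j] = c
                      f = conflicts(colors)
                      if f < best_f:
                          best_c, best_f = c, f
                  colors[j] = best_c
          return colors

      # Firefly positions are real vectors in [0, 1]^n; brightness = -conflicts after repair.
      m, beta0, gamma, alpha = 12, 1.0, 1.0, 0.05
      X = rng.random((m, n))
      best_f = conflicts(local_search(decode(X[0])))

      for _ in range(60):
          fit = np.array([conflicts(local_search(decode(x))) for x in X])
          best_f = min(best_f, int(fit.min()))
          for a in range(m):
              for b in range(m):
                  if fit[b] < fit[a]:        # firefly a is attracted to brighter firefly b
                      beta = beta0 * np.exp(-gamma * np.sum((X[a] - X[b]) ** 2))
                      X[a] += beta * (X[b] - X[a]) + alpha * (rng.random(n) - 0.5)

      print("best colouring found leaves", best_f, "conflicting edges")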

  12. DNA-Encoded Dynamic Combinatorial Chemical Libraries.

    Science.gov (United States)

    Reddavide, Francesco V; Lin, Weilin; Lehnert, Sarah; Zhang, Yixin

    2015-06-26

    Dynamic combinatorial chemistry (DCC) explores the thermodynamic equilibrium of reversible reactions. Its application in the discovery of protein binders is largely limited by difficulties in the analysis of complex reaction mixtures. DNA-encoded chemical library (DECL) technology allows the selection of binders from a mixture of up to billions of different compounds; however, experimental results often show a low signal-to-noise ratio and poor correlation between enrichment factor and binding affinity. Herein we describe the design and application of DNA-encoded dynamic combinatorial chemical libraries (EDCCLs). Our experiments have shown that the EDCCL approach can be used not only to convert monovalent binders into high-affinity bivalent binders, but also to cause remarkably enhanced enrichment of potent bivalent binders by driving their in situ synthesis. We also demonstrate the application of EDCCLs in DNA-templated chemical reactions.

  13. High throughput combinatorial screening of semiconductor materials

    Science.gov (United States)

    Mao, Samuel S.

    2011-11-01

    This article provides an overview of an advanced combinatorial material discovery platform developed recently for screening semiconductor materials with properties that may have applications ranging from radiation detectors to solar cells. Semiconductor thin-film libraries, each consisting of 256 materials of different composition arranged into a 16×16 matrix, were fabricated using laser-assisted evaporation process along with a combinatorial mechanism to achieve variations. The composition and microstructure of individual materials on each thin-film library were characterized with an integrated scanning micro-beam x-ray fluorescence and diffraction system, while the band gaps were determined by scanning optical reflection and transmission of the libraries. An ultrafast ultraviolet photon-induced charge probe was devised to measure the mobility and lifetime of individual thin-film materials on semiconductor libraries. Selected results on the discovery of semiconductors with desired band gaps and transport properties are illustrated.

  14. Reducing the risk of foaming and decreasing viscosity by two-stage anaerobic digestion of sugar beet pressed pulp.

    Science.gov (United States)

    Stoyanova, Elitza; Forsthuber, Boris; Pohn, Stefan; Schwarz, Christian; Fuchs, Werner; Bochmann, Günther

    2014-04-01

    Anaerobic digestion (AD) of sugar beet pressed pulp (SBPP) is a promising treatment concept. It produces biogas as a renewable energy source, making sugar production more energy efficient, and it turns SBPP from a residue into a valuable resource. In this study, one- and two-stage mono-fermentation under mesophilic conditions in a continuous stirred tank reactor were compared. The optimal incubation temperature for the pre-acidification stage was also studied. The fastest pre-acidification, with a hydraulic retention time (HRT) of 4 days, occurred at a temperature of 55 °C. In the methanogenic reactor of the two-stage system, stable fermentation at a loading rate of 7 kg VS/m³ d was demonstrated. No artificial pH adjustment was necessary to maintain optimum levels in both the pre-acidification and the methanogenic reactor. The total HRT of the two-stage AD was 36 days, which is considerably lower than that of the one-stage AD (50 days). The frequently observed problem of foaming at high loading rates was less severe in the two-stage reactor. Moreover, the viscosity of the digestate in the methanogenic stage of the two-stage fermentation was on average tenfold lower than in the one-stage fermentation. This decreases the energy input for reactor stirring by about 80%. The observed advantages make the two-stage process economically attractive, despite the higher investment for a two-reactor system.

  15. COMBINATORIAL DESIGN APPROACHES FOR TEST GENERATION

    Institute of Scientific and Technical Information of China (English)

    Shi Liang; Xu Baowen; Nie Changhai

    2005-01-01

    N-way combination testing is a specification-based testing criterion which requires that, for a system consisting of a number of parameters, every combination of valid values of any n (n ≥ 2) parameters be covered by at least one test. This letter proposes two different test generation algorithms based on combinatorial design for the n-way coverage criterion. The automatic test generators are implemented and some valuable empirical results are obtained.
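
    The letter's own algorithms are not included in this record; the sketch below only illustrates the coverage criterion for n = 2, using a simple greedy generator that repeatedly picks the full-factorial test covering the most still-uncovered value pairs. The parameter model is a made-up example.

      from itertools import combinations, product

      # Hypothetical system under test with four parameters; the values are made up.
      params = {
          "os":      ["linux", "windows", "macos"],
          "browser": ["firefox", "chrome"],
          "db":      ["mysql", "postgres"],
          "locale":  ["en", "de", "ja"],
      }
      names = list(params)

      # Every value pair that a 2-way (pairwise) test suite must cover at least once.
      uncovered = {((n1, v1), (n2, v2))
                   for n1, n2 in combinations(names, 2)
                   for v1, v2 in product(params[n1], params[n2])}

      def pairs_of(test):
          return {((n1, test[n1]), (n2, test[n2])) for n1, n2 in combinations(names, 2)}

      all_tests = [dict(zip(names, values)) for values in product(*params.values())]
      suite = []
      while uncovered:
          # Greedy step: pick the full-factorial test that covers the most uncovered pairs.
          best = max(all_tests, key=lambda test: len(pairs_of(test) & uncovered))
          suite.append(best)
          uncovered -= pairs_of(best)

      print(f"{len(suite)} tests cover every pairwise combination:")
      for test in suite:
          print(test)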

  16. Switched Systems and Motion Coordination: Combinatorial Challenges

    Science.gov (United States)

    Sadovsky, Alexander V.

    2016-01-01

    Problems of routing commercial air traffic in a terminal airspace encounter different constraints: separation assurance, aircraft performance limitations, regulations. The general setting of these problems is that of a switched control system. Such a system combines the differentiable motion of the aircraft with the combinatorial choices of choosing precedence when traffic routes merge and choosing branches when the routes diverge. This presentation gives an overview of the problem, the ATM context, related literature, and directions for future research.

  17. A combinatorial approach to metamaterials discovery

    CERN Document Server

    Plum, E; Chen, W T; Fedotov, V A; Tsai, D P; Zheludev, N I

    2010-01-01

    We report a high through-put combinatorial approach to photonic metamaterial optimization. The new approach is based on parallel synthesis and consecutive optical characterization of large numbers of spatially addressable nano-fabricated metamaterial samples (libraries) with quasi-continuous variation of design parameters under real manufacturing conditions. We illustrate this method for Fano-resonance plasmonic nanostructures arriving at explicit recipes for high quality factors needed for switching and sensing applications.

  18. One-parameter groups and combinatorial physics

    CERN Document Server

    Duchamp, G; Solomon, A I; Horzela, A; Blasiak, P; Duchamp, Gerard; Penson, Karol A.; Solomon, Allan I.; Horzela, Andrej; Blasiak, Pawel

    2004-01-01

    In this communication, we consider the normal ordering of sums of elements of the form (a*^r a a*^s), where a* and a are boson creation and annihilation operators. We discuss the integration of the associated one-parameter groups and their combinatorial by-products. In particular, we show how these groups can be realized as groups of substitutions with prefunctions.

  19. The Combinatorial Retention Auction Mechanism (CRAM)

    OpenAIRE

    Coughlan, Peter; Gates, William (Bill); Myung, Noah

    2013-01-01

    Approved for public release; distribution is unlimited. Revised version. We propose a reverse uniform price auction called the Combinatorial Retention Auction Mechanism (CRAM) that integrates both monetary and non-monetary incentives (NMIs). CRAM combines the cash bonus and NMIs into a single cost parameter, retains the lowest-cost employees, and provides them with compensation equal to the cost of the first excluded employee. CRAM is dominant-strategy incentive compatible. We provide optimal b...
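
    A small worked example of the retention rule as described above, under one plausible reading of how the payment is split between cash and NMIs, and with made-up bids and NMI costs:

      # Hypothetical bids: (employee, requested cash bonus, employer's cost of the
      # requested non-monetary incentives). All numbers are illustrative assumptions.
      bids = [("A", 12000, 1500), ("B", 9000, 4000), ("C", 15000, 0),
              ("D", 8000, 2500), ("E", 11000, 3000)]
      slots = 3                                   # how many employees to retain

      # Rank by the single cost parameter (cash + NMI cost), retain the cheapest bidders,
      # and set everyone's total compensation to the cost of the first excluded bidder.
      ranked = sorted(bids, key=lambda b: b[1] + b[2])
      retained, first_excluded = ranked[:slots], ranked[slots]
      clearing_cost = first_excluded[1] + first_excluded[2]

      for name, cash, nmi in retained:
          # One reading of the payment rule: keep the requested NMIs and top the cash up
          # so that total compensation equals the uniform clearing cost.
          print(f"{name}: NMIs worth {nmi}, cash {clearing_cost - nmi}, total {clearing_cost}")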

  20. Combinatorial Cis-regulation in Saccharomyces Species

    Directory of Open Access Journals (Sweden)

    Aaron T. Spivak

    2016-03-01

    Full Text Available Transcriptional control of gene expression requires interactions between the cis-regulatory elements (CREs) controlling gene promoters. We developed a sensitive computational method to identify CRE combinations with conserved spacing that does not require genome alignments. When applied to seven sensu stricto and sensu lato Saccharomyces species, 80% of the predicted interactions displayed some evidence of combinatorial transcriptional behavior in several existing datasets, including: (1) chromatin immunoprecipitation data for colocalization of transcription factors, (2) gene expression data for coexpression of predicted regulatory targets, and (3) gene ontology databases for common pathway membership of predicted regulatory targets. We tested several predicted CRE interactions with chromatin immunoprecipitation experiments in a wild-type strain and strains in which a predicted cofactor was deleted. Our experiments confirmed that transcription factor (TF) occupancy at the promoters of the CRE combination target genes depends on the predicted cofactor while occupancy of other promoters is independent of the predicted cofactor. Our method has the additional advantage of identifying regulatory differences between species. By analyzing the S. cerevisiae and S. bayanus genomes, we identified differences in combinatorial cis-regulation between the species and showed that the predicted changes in gene regulation explain several of the species-specific differences seen in gene expression datasets. In some instances, the same CRE combinations appear to regulate genes involved in distinct biological processes in the two different species. The results of this research demonstrate that (1) combinatorial cis-regulation can be inferred by multi-genome analysis and (2) combinatorial cis-regulation can explain differences in gene expression between species.

  1. Two stage enucleation and deflation of a large unicystic ameloblastoma with mural invasion in mandible.

    Science.gov (United States)

    Sasaki, Ryo; Watanabe, Yorikatsu; Ando, Tomohiro; Akizuki, Tanetaka

    2014-06-01

    A treatment strategy for unicystic ameloblastoma (UA) should be decided according to its pathological subtype, luminal or mural. The luminal type of UA can be treated by enucleation alone, but UA with mural invasion should be treated aggressively, like conventional ameloblastomas. However, it is difficult to diagnose the subtype of UA from an initial biopsy, and there is a possibility that the lesion is an ordinary cyst or a keratocystic odontogenic tumor, which could lead to overtreatment. Therefore, this study performed enucleation of the cyst wall and deflation first; the pathological findings confirmed mural invasion into the cystic wall, leading to a second surgery. The second surgery comprised enucleation of scar tissue, bone curettage, and deflation, and contributed to reducing the recurrence rate by removing tumor nests in scar tissue or new bone, enhancing new bone formation, and shrinking the mandibular expansion by fenestration. In this study, a large UA with mural invasion involving the condyle was treated by "two-stage enucleation and deflation" in a 20-year-old patient.

  2. Contextual Classification of Point Clouds Using a Two-Stage Crf

    Science.gov (United States)

    Niemeyer, J.; Rottensteiner, F.; Soergel, U.; Heipke, C.

    2015-03-01

    In this investigation, we address the task of airborne LiDAR point cloud labelling for urban areas by presenting a contextual classification methodology based on a Conditional Random Field (CRF). A two-stage CRF is set up: in a first step, a point-based CRF is applied. The resulting labellings are then used to generate a segmentation of the classified points using a Conditional Euclidean Clustering algorithm. This algorithm combines neighbouring points with the same object label into one segment. The second step comprises the classification of these segments, again with a CRF. As the number of the segments is much smaller than the number of points, it is computationally feasible to integrate long range interactions into this framework. Additionally, two different types of interactions are introduced: one for the local neighbourhood and another one operating on a coarser scale. This paper presents the entire processing chain. We show preliminary results achieved using the Vaihingen LiDAR dataset from the ISPRS Benchmark on Urban Classification and 3D Reconstruction, which consists of three test areas characterised by different and challenging conditions. The utilised classification features are described, and the advantages and remaining problems of our approach are discussed. We also compare our results to those generated by a point-based classification and show that a slight improvement is obtained with this first implementation.

  3. CO removal by two-stage methanation for polymer electrolyte fuel cell

    Institute of Scientific and Technical Information of China (English)

    Zhiyuan Li; Wanliang Mi; Juan Gong; Zhenlong Lu; Lihao Xu; Qingquan Su

    2008-01-01

    In order to reduce the CO content to below 10 ppm in the CO removal step of the reformer for polymer electrolyte fuel cell (PEFC) co-generation systems, CO preferential methanation under various conditions was studied in this paper. Results showed that, with a single kind of catalyst, it was difficult to achieve both the required CO removal depth and a CO2 conversion ratio of below 5%. Thus, a two-stage methanation process applying two kinds of catalysts is proposed in this study, that is, a catalyst with relatively low activity and high selectivity for the first stage at higher temperature, and a catalyst with relatively high activity and high selectivity for the second stage at lower temperature. Experimental results showed that in the first stage the CO content was decreased from 1% to below 0.1% at 250-300 ℃, and in the second stage to below 10 ppm at 150-185 ℃. The CO2 conversion was kept below 5%. The influence of inlet CO content and GHSV on the CO removal depth is also discussed in this paper.

  4. A CURRENT MIRROR BASED TWO STAGE CMOS CASCODE OP-AMP FOR HIGH FREQUENCY APPLICATION

    Directory of Open Access Journals (Sweden)

    RAMKRISHNA KUNDU

    2017-03-01

    Full Text Available This paper presents a low-power, high-slew-rate, high-gain, ultra-wide-band two-stage CMOS cascode operational amplifier for radio frequency applications. A current-mirror-based cascoding technique and a pole-zero cancellation technique are used to improve the gain and enhance the unity-gain bandwidth, respectively, which is the novelty of the circuit. In the cascode technique, a common-source transistor drives a common-gate transistor. Cascoding is used to enhance the output resistance and hence improve the overall gain of the operational amplifier with less complexity and less power dissipation. To bias the common-gate transistor, a current mirror is used in this paper. The proposed circuit is designed and simulated using Cadence analog and digital system design tools in 45 nm CMOS technology. The simulation results show a DC gain of 63.62 dB, a unity-gain bandwidth of 2.70 GHz, a slew rate of 1816 V/µs, and a phase margin of 59.53°; the power supply of the proposed operational amplifier is 1.4 V (rail-to-rail, ±700 mV) and the power consumption is 0.71 mW. These specifications meet the requirements of radio frequency applications.

  5. Ecotoxicity of materials from integrated two-stage liquefaction and Exxon Donor Solvent processes

    Energy Technology Data Exchange (ETDEWEB)

    Dauble, D.D.; Scott, A.J.; Lusty, E.W.; Thomas, B.L.; Hanf, R.W. Jr.

    1983-05-01

    Coal-derived materials from two coal conversion processes were screened for potential ecological toxicity. We examined the toxicity of materials from different engineering or process options to an aquatic invertebrate and also related potential hazard to the relative concentration, composition, and stability of water-soluble components. For materials tested from the Integrated Two-Stage Liquefaction (ITSL) process, only the LC finer (LCF) 650°F distillate was highly soluble in water at 20°C. The LCF feed and Total Liquid Product (TLP) were not in a liquid state at 20°C and were relatively insoluble in water. Relative hazard to daphnids from ITSL materials was as follows: LCF 650°F distillate ≥ LCF feed ≥ TLP. For Exxon Donor Solvent (EDS) materials, process solvent produced in the bottoms recycle mode was more soluble in water than once-through process solvent and, hence, slightly more acutely toxic to daphnids. When compared to other coal liquids or petroleum products, the ITSL and EDS liquids were intermediate in toxicity; relative hazard ranged from 1/7 to 1/13 of the Solvent Refined Coal (SRC)-II distillable blend, but was several times greater than the relative hazard for No. 2 diesel fuel oil or Prudhoe Bay crude oil. Although compositional differences in water-soluble fractions (WSF) were noted among materials, phenolics were the major compound class in all WSFs and probably the primary contributor to acute toxicity.

  6. Two-Stage Chaos Optimization Search Application in Maximum Power Point Tracking of PV Array

    Directory of Open Access Journals (Sweden)

    Lihua Wang

    2014-01-01

    In order to deliver the maximum available power to the load under varying solar irradiation and environment temperature, maximum power point tracking (MPPT) technologies have been widely used in PV systems. Among MPPT schemes, the chaos method has been one of the hot topics in recent years. In this paper, a novel two-stage chaos optimization method is presented which makes the search faster and more effective. In the proposed chaos search, an improved logistic map with better ergodicity is used as the first carrier process. After the current optimal solution has been located with a certain guarantee, a power-function carrier is used as the second carrier process to reduce the search space of the optimized variables and eventually find the maximum power point. Compared with the traditional chaos search method, the proposed method tracks changes quickly and accurately and also yields better optimization results. The proposed method provides a new and efficient way to track the maximum power point of a PV array.
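
    As an illustration of the two-stage search idea (global chaotic exploration followed by a contracted local search around the best point found), the sketch below maximises a toy power curve. For simplicity both stages reuse a plain logistic-map carrier rather than the improved logistic map and power-function carrier of the paper, and the carrier parameters, iteration counts and contraction factor are assumed values.

      import numpy as np

      def two_stage_chaos_search(power, v_min, v_max, n1=200, n2=100, shrink=0.1):
          """Illustrative two-stage chaotic search for the maximum of power(v).

          Stage 1: a logistic-map carrier explores the whole voltage range.
          Stage 2: the carrier searches a contracted window around the
          stage-1 optimum (the contraction factor `shrink` is an assumption).
          """
          x = 0.345678                      # chaotic seed in (0, 1)
          best_v, best_p = v_min, -np.inf

          # Stage 1: coarse global search over [v_min, v_max]
          for _ in range(n1):
              x = 4.0 * x * (1.0 - x)       # logistic map, chaotic at mu = 4
              v = v_min + x * (v_max - v_min)
              p = power(v)
              if p > best_p:
                  best_v, best_p = v, p

          # Stage 2: refined search in a shrunken window around best_v
          half = shrink * (v_max - v_min) / 2.0
          lo, hi = max(v_min, best_v - half), min(v_max, best_v + half)
          for _ in range(n2):
              x = 4.0 * x * (1.0 - x)
              v = lo + x * (hi - lo)
              p = power(v)
              if p > best_p:
                  best_v, best_p = v, p
          return best_v, best_p

      # Example with a toy single-peak P-V curve (purely illustrative)
      pv = lambda v: v * np.clip(5.0 - 0.25 * np.exp(0.25 * (v - 20.0)), 0.0, None)
      print(two_stage_chaos_search(pv, 0.0, 25.0))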

  7. Modified landfill gas generation rate model of first-order kinetics and two-stage reaction

    Institute of Scientific and Technical Information of China (English)

    Jiajun CHEN; Hao WANG; Na ZHANG

    2009-01-01

    This investigation was carried out to establish a new domestic landfill gas (LFG) generation rate model that takes into account the impact of leachate recirculation. The first-order kinetics and two-stage reaction (FKTSR) model of the LFG generation rate includes mechanisms of the nutrient balance for the biochemical reactions in two main stages. In this study, the FKTSR model was modified by introducing an outflow function and an organic acid conversion coefficient in order to represent the in-situ condition of nutrient loss through leachate. Laboratory experiments were carried out to simulate the impact of leachate recirculation and verify the modified FKTSR model, which was then calibrated using the experimental data. The results suggested that the new model was in line with the experimental data. The main parameters of the modified FKTSR model, namely the LFG production potential (L0), the reaction rate constant in the first stage (K1), and the reaction rate constant in the second stage (K2), were 64.746 L, 0.202 d-1, and 0.338 d-1, respectively, compared with 42.069 L, 0.231 d-1, and 0.231 d-1 for the original model. The new model is better able to explain the mechanisms involved in LFG generation.
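
    As a rough illustration of how the fitted parameters could be used, the sketch below evaluates a generic consecutive first-order two-stage rate expression (substrate to intermediate to gas). This functional form is a plain stand-in chosen for illustration; it is not the authors' modified FKTSR model with its outflow function and organic acid conversion coefficient.

      import numpy as np

      # Fitted parameters reported in the abstract
      L0 = 64.746   # LFG production potential, L
      K1 = 0.202    # first-stage rate constant, 1/d
      K2 = 0.338    # second-stage rate constant, 1/d

      def lfg_rate(t, L0=L0, K1=K1, K2=K2):
          """Gas generation rate (L/d) of a generic consecutive first-order
          two-stage scheme; an illustrative stand-in, not the FKTSR model."""
          intermediate = L0 * K1 / (K2 - K1) * (np.exp(-K1 * t) - np.exp(-K2 * t))
          return K2 * intermediate

      t = np.linspace(0.0, 30.0, 7)          # days
      print(np.round(lfg_rate(t), 2))        # rate peaks a few days in, then decays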

  8. Hypervelocity projectile acceleration with a railgun using a two-stage gas gun injector

    Science.gov (United States)

    Hawke, R. S.

    1989-04-01

    Unique potential applications of electromagnetic railguns [R.S. Hawke, IEEE Trans. Nucl. NS-28 (2) (1981) 1542] have motivated a decade of continuous development throughout the world. This effort has led to routine acceleration of projectiles from 1 g to about 1 kg to velocities of nearly 4 km/s. Attempts to reach higher velocities have met with problems in the 6- to 8-km/s range [J.V. Parker, Proc. 4th Symp. on Electromagnetic Launch Tech., Austin, TX, 1988, to be published in IEEE Trans. Mag.]. The principal problem is "restrike", which shunts the propulsive plasma armature through the formation of a second plasma short circuit in the breech region of the railgun. One means of impeding restrike is the use of a two-stage light-gas gun (2SLGG) as a projectile injector. A joint development project was initiated in early 1986 between Sandia National Laboratories Albuquerque (SNLA) and the Lawrence Livermore National Laboratory (LLNL). The project is based on the use of a 2SLGG to inject projectiles at about 7 km/s. The injection gas is hydrogen, which serves to inhibit formation of the secondary arc and to minimize barrel ablation and armature contamination. Results and status of this work are discussed.

  9. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    Science.gov (United States)

    Khramtsov, P. P.; Vasetskij, V. A.; Makhnach, A. I.; Grishenko, V. M.; Chernik, M. Yu; Shikh, I. A.; Doroshko, M. V.

    2016-11-01

    The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a pressing problem, owing to the increasing duration of space flights and the necessity of providing adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents the results of an experimental study of a two-stage light-gas magnetoplasma launcher for accelerating a macroparticle, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. The particles were launched in vacuum. A speed-measuring method was developed for projectile velocity control; its error does not exceed 5%. The flight of the projectile from the barrel and the collision of the particle with a target were recorded with a high-speed camera. The results of projectile collisions with elements of meteoroid shielding are presented. In order to increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, we used helium in our experiments for safety reasons. Therefore, we expect that the range of mass and velocity of the accelerated particles can be extended by using hydrogen as the accelerating gas.

  10. Biological sulfate removal from acrylic fiber manufacturing wastewater using a two-stage UASB reactor

    Institute of Scientific and Technical Information of China (English)

    Jin Li; Jun Wang; Zhaokun Luan; Zhongguang Ji; Lian Yu

    2012-01-01

    A two-stage UASB reactor was employed to remove sulfate from acrylic fiber manufacturing wastewater. Mesophilic operation (35±0.5℃) was performed with the hydraulic retention time (HRT) varied between 28 and 40 hr. Mixed liquor suspended solids (MLSS) in the reactor were maintained at about 8000 mg/L. The results indicated that sulfate removal was enhanced by increasing the COD/SO42- ratio. At low COD/SO42-, the growth of the sulfate-reducing bacteria (SRB) was carbon-limited. The optimal sulfate removal efficiency was 75% when the HRT was no less than 38 hr. Sulfidogenesis mainly happened in the sulfate-reducing stage, while methanogenesis occurred in the methane-producing stage. Microbes in the sulfate-reducing stage granulated better than those in the methane-producing stage. The higher extracellular polymeric substances (EPS) content in the sulfate-reducing stage helped to adhere and connect the flocculent sludge particles. SRB accounted for about 31% in both the sulfate-reducing and methane-producing stages at a COD/SO42- ratio of 0.5, while it dropped dramatically from 34% in the sulfate-reducing stage to 10% in the methane-producing stage at a COD/SO42- ratio of 4.7. SRB and MPA were predominant in the sulfate-reducing stage and the methane-producing stage, respectively.

  11. Human Cough as a Two-Stage Jet and Its Role in Particle Transport

    Science.gov (United States)

    Li, Yuguo

    2017-01-01

    The human cough is a significant vector in the transmission of respiratory diseases in indoor environments. The cough flow is characterized as a two-stage jet; specifically, the starting jet (when the cough starts and flow is released) and the interrupted jet (after the source supply is terminated). During the starting-jet stage, the flow rate is a function of time; three temporal profiles of the exit velocity (pulsation, sinusoidal and real-cough) were investigated in this study, and our results showed that the cough flow's maximum penetration distance was in the range of 50.6–85.5 opening diameters (D) under our experimental conditions. The real-cough and sinusoidal cases exhibited greater penetration ability than the pulsation cases under the same characteristic Reynolds number (Rec) and normalized cough expired volume (Q/AD, with Q as the cough expired volume and A as the opening area). However, the effects of Rec and Q/AD on the maximum penetration distance proved to be more significant; larger values of Rec and Q/AD reflected cough flows with greater penetration distances. A protocol was developed to scale the particle experiments between the prototype in air and the model in water. The water tank experiments revealed that although medium and large particles deposit readily, their maximum spread distance is similar to that of small particles. Moreover, the leading vortex plays an important role in enhancing particle transport. PMID:28046084
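
    To make the two dimensionless groups concrete, the short calculation below evaluates Rec and Q/AD for a hypothetical cough. The numerical inputs (opening diameter, characteristic exit velocity, expired volume, air viscosity) are illustrative assumptions, not measurements from the study.

      import math

      # Hypothetical cough parameters (illustrative only)
      D  = 0.02          # mouth-opening diameter, m
      U  = 10.0          # characteristic exit velocity, m/s
      Q  = 1.0e-3        # cough expired volume, m^3 (about 1 L)
      nu = 1.5e-5        # kinematic viscosity of air, m^2/s

      A = math.pi * D**2 / 4.0          # opening area
      Re_c = U * D / nu                 # characteristic Reynolds number
      Q_AD = Q / (A * D)                # normalized cough expired volume

      print(f"Re_c = {Re_c:.0f}, Q/AD = {Q_AD:.1f}")
      # With these assumed values: Re_c is about 13333 and Q/AD about 159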

  12. Quantum treatment of two-stage sub-Doppler laser cooling of magnesium atoms

    CERN Document Server

    Brazhnikov, D V; Taichenachev, A V; Yudin, V I; Bonert, A E; Il'enkov, R Ya; Goncharov, A N

    2015-01-01

    The problem of deep laser cooling of $^{24}$Mg atoms is theoretically studied. We propose a two-stage sub-Doppler cooling strategy using the electric dipole transition $3^3P_2 \to 3^3D_3$ ($\lambda = 383.9$ nm). The first stage employs a magneto-optical trap with $\sigma^+$ and $\sigma^-$ light beams, while the second uses a lin$\perp$lin molasses. We focus on achieving a large number of ultracold atoms ($T_{eff} < 10\,\mu$K) in a cold atomic cloud. The calculations go beyond many widely used approximations and are based on a quantum treatment that takes full account of the recoil effect. Steady-state average kinetic energies and linear momentum distributions of the cold atoms are analysed for various light-field intensities and frequency detunings. The results of the quantum analysis reveal noticeable differences from those of the semiclassical approach based on the Fokker-Planck equation. At certain conditions the second cooling stage can provide sufficiently lower kinetic energies of atom...

  13. Comparison of microalgae cultivation in photobioreactor, open raceway pond, and a two-stage hybrid system

    Directory of Open Access Journals (Sweden)

    Rakesh R Narala

    2016-08-01

    In the wake of intensive fossil fuel usage and CO2 accumulation in the environment, research is targeted towards sustainable alternative bioenergy that can meet the growing need for fuel while leaving a minimal carbon footprint. Oil production from microalgae can potentially be carried out more efficiently, leaving a smaller footprint and without competing for arable land or biodiverse landscapes. However, current algae cultivation systems and lipid induction processes must be significantly improved, and they are threatened by contamination with other algae or algal grazers. To address this issue, we have developed an efficient two-stage cultivation system using the marine microalga Tetraselmis sp. M8. This hybrid system combines exponential biomass production in positive-pressure, air-lift-driven bioreactors with a separate, synchronized high-lipid induction phase in nutrient-deplete open raceway ponds. A comparison with either the bioreactor or the open raceway pond cultivation system alone suggests that this process can lead to significantly higher productivity of algal lipids. Nutrients are added only to the closed bioreactors, while the open raceway ponds have turnovers of only a few days, thus reducing the issue of microalgal grazers.

  14. Dynamic two-stage mechanism of versatile DNA damage recognition by xeroderma pigmentosum group C protein

    Energy Technology Data Exchange (ETDEWEB)

    Clement, Flurina C.; Camenisch, Ulrike; Fei, Jia; Kaczmarek, Nina; Mathieu, Nadine [Institute of Pharmacology and Toxicology, University of Zuerich-Vetsuisse, Winterthurerstrasse 260, CH-8057 Zuerich (Switzerland); Naegeli, Hanspeter, E-mail: naegelih@vetpharm.uzh.ch [Institute of Pharmacology and Toxicology, University of Zuerich-Vetsuisse, Winterthurerstrasse 260, CH-8057 Zuerich (Switzerland)

    2010-03-01

    The recognition and subsequent repair of DNA damage are essential reactions for the maintenance of genome stability. A key general sensor of DNA lesions is xeroderma pigmentosum group C (XPC) protein, which recognizes a wide variety of helix-distorting DNA adducts arising from ultraviolet (UV) radiation, genotoxic chemicals and reactive metabolic byproducts. By detecting damaged DNA sites, this unique molecular sensor initiates the global genome repair (GGR) pathway, which allows for the removal of all the aforementioned lesions by a limited repertoire of excision factors. A faulty GGR activity causes the accumulation of DNA adducts leading to mutagenesis, carcinogenesis, neurological degeneration and other traits of premature aging. Recent findings indicate that XPC protein achieves its extraordinary substrate versatility by an entirely indirect readout strategy implemented in two clearly discernible stages. First, the XPC subunit uses a dynamic sensor interface to monitor the double helix for the presence of non-hydrogen-bonded bases. This initial screening generates a transient nucleoprotein intermediate that subsequently matures into the ultimate recognition complex by trapping undamaged nucleotides in the abnormally oscillating native strand, in a way that no direct contacts are made between XPC protein and the offending lesion itself. It remains to be elucidated how accessory factors like Rad23B, centrin-2 or the UV-damaged DNA-binding complex contribute to this dynamic two-stage quality control process.

  15. A two-stage storage routing model for green roof runoff detention.

    Science.gov (United States)

    Vesuviano, Gianni; Sonnenwald, Fred; Stovin, Virginia

    2014-01-01

    Green roofs have been adopted in urban drainage systems to control the total quantity and volumetric flow rate of runoff. Modern green roof designs are multi-layered, their main components being vegetation, substrate and, in almost all cases, a separate drainage layer. Most current hydrological models of green roofs combine the modelling of the separate layers into a single process; these models have limited predictive capability for roofs not sharing the same design. An adaptable, generic, two-stage model for a system consisting of a granular substrate over a hard plastic 'egg box'-style drainage layer and fibrous protection mat is presented. The substrate and drainage layer/protection mat are modelled separately by previously verified sub-models. Controlled storm events are applied to a green roof system in a rainfall simulator. The time-series modelled runoff is compared to the monitored runoff for each storm event. The modelled runoff profiles are accurate (mean Rt^2 = 0.971), but further characterization of the substrate component is required for the model to be generically applicable to other roof configurations with different substrates.
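
    The two-stage structure (rainfall routed first through the substrate, then through the drainage layer and protection mat) can be sketched as two reservoirs in series. The sketch below uses a simple nonlinear storage-discharge law q = k*S^n for each stage; both the storage law and the parameter values are illustrative assumptions, not the calibrated sub-models of the paper.

      def route_two_stage(rain, dt=60.0,
                          k1=5e-4, n1=2.0,    # substrate (assumed parameters)
                          k2=5e-3, n2=1.5):   # drainage layer / protection mat
          """Route a rainfall series (mm per step) through two reservoirs in series.

          Each stage drains according to q = k * S**n (S in mm); this is a
          generic storage-routing sketch, not the paper's calibrated model.
          Returns the runoff series in mm per time step.
          """
          s1 = s2 = 0.0
          runoff = []
          for r in rain:
              s1 += r                            # rainfall enters the substrate store
              q1 = min(s1, k1 * s1**n1 * dt)     # substrate outflow this step
              s1 -= q1
              s2 += q1                           # outflow enters the drainage layer
              q2 = min(s2, k2 * s2**n2 * dt)     # drainage-layer outflow = roof runoff
              s2 -= q2
              runoff.append(q2)
          return runoff

      # Example: a 15-minute, 20 mm design storm at 1-minute resolution
      storm = [20.0 / 15.0] * 15 + [0.0] * 45
      print([round(q, 2) for q in route_two_stage(storm)])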

  16. Two-stage collaborative global optimization design model of the CHPG microgrid

    Science.gov (United States)

    Liao, Qingfen; Xu, Yeyan; Tang, Fei; Peng, Sicheng; Yang, Zheng

    2017-06-01

    With the continuous development of technology and the reduction of investment costs, the proportion of renewable energy in the power grid is becoming higher and higher because of its clean and environmentally friendly characteristics, which may require larger-capacity energy storage devices and thus increase the cost. A two-stage collaborative global optimization design model of the combined-heat-power-and-gas (abbreviated as CHPG) microgrid is proposed in this paper to minimize the cost by using virtual storage without extending the existing storage system. P2G technology is used as virtual multi-energy storage in the CHPG microgrid, as it can coordinate the operation of the electric energy network and the natural gas network at the same time. Demand response is another kind of virtual storage, including economic guidance for the DGs and heat pumps on the demand side and priority scheduling of controllable loads. The two kinds of virtual storage coordinate to smooth the high-frequency and low-frequency fluctuations of renewable energy, respectively, and simultaneously achieve a lower-cost operation scheme. Finally, the feasibility and superiority of the proposed design model are demonstrated in a simulation of a CHPG microgrid.

  17. A gas-loading system for LANL two-stage gas guns

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Lloyd Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bartram, Brian Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dattelbaum, Dana Mcgraw [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lang, John Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Morris, John Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-01

    A novel gas-loading system was designed for the specific application of remotely loading high-purity gases into targets for gas-gun-driven plate impact experiments. The high-purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas-loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel, with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rates and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments are shown.

  18. An Enhanced Two-Stage Impulse Noise Removal Technique based on Fast ANFIS and Fuzzy Decision

    Directory of Open Access Journals (Sweden)

    V. Saradhadevi

    2011-09-01

    Image enhancement plays a vital role in various applications. Many techniques exist to remove noise from an image and produce a clear visual result, and several filters and image-smoothing techniques are available in the literature, but all of them have certain limitations. Recently, neural networks have been found to be a very efficient tool for image enhancement. A novel two-stage technique for impulse noise removal and image enhancement is proposed in this paper. In the noise removal stage, an Adaptive Neuro-Fuzzy Inference System (ANFIS) with a modified Levenberg-Marquardt training algorithm is used to eliminate the impulse noise; the modified training algorithm reduces the execution time. In the image enhancement stage, fuzzy decision rules inspired by the Human Visual System (HVS) are used to categorize the image pixels into a human-perception-sensitive class and a non-sensitive class and to enhance the quality of the image. A hyper-trapezoidal fuzzy membership function is used in the proposed technique. In order to render the sensitive regions with higher visual quality, a neural network (NN) is employed. The experiments were conducted on standard images. The experimental results show that the proposed FANFIS performs significantly better than existing methods.
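
    A drastically simplified stand-in for the two-stage idea (detect and replace impulse pixels first, then enhance perceptually important detail) is sketched below. It substitutes a median-based detector and an unsharp-style boost for the ANFIS and fuzzy-rule components of the paper, and the detection threshold and enhancement gain are assumed values.

      import numpy as np
      from scipy.ndimage import median_filter, gaussian_filter

      def two_stage_denoise_enhance(img, impulse_thresh=40.0, gain=0.6):
          """Simplified two-stage sketch: (1) impulse noise removal via a
          median-based detector, (2) enhancement of detail regions.
          `impulse_thresh` and `gain` are illustrative assumptions."""
          img = img.astype(float)

          # Stage 1: replace only pixels that deviate strongly from the local median
          med = median_filter(img, size=3)
          noisy = np.abs(img - med) > impulse_thresh
          cleaned = np.where(noisy, med, img)

          # Stage 2: unsharp-style boost of high-frequency (perceptually sensitive) detail
          blurred = gaussian_filter(cleaned, sigma=1.0)
          enhanced = cleaned + gain * (cleaned - blurred)
          return np.clip(enhanced, 0, 255).astype(np.uint8)

      # Example on a synthetic 8-bit ramp image corrupted with salt-and-pepper noise
      rng = np.random.default_rng(0)
      img = np.tile(np.linspace(0, 255, 64), (64, 1))
      mask = rng.random(img.shape) < 0.05
      img[mask] = rng.choice([0, 255], size=mask.sum())
      out = two_stage_denoise_enhance(img)
      print(out.shape, out.dtype)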

  19. Prediction of syngas quality for two-stage gasification of selected waste feedstocks.

    Science.gov (United States)

    De Filippis, Paolo; Borgianni, Carlo; Paolucci, Martino; Pochetti, Fausto

    2004-01-01

    This paper compares the syngas produced from methane with the syngas obtained from the gasification, in a two-stage reactor, of various waste feedstocks. The syngas composition and the gasification conditions were simulated using a simple thermodynamic model. The waste feedstocks considered are: landfill gas, waste oil, municipal solid waste (MSW) typical of a low-income country, the same MSW blended with landfill gas, refuse derived fuel (RDF) made from the same MSW, the same RDF blended with waste oil and a MSW typical of a high-income country. Energy content, the sum of H2 and CO gas percentages, and the ratio of H2 to CO are considered as measures of syngas quality. The simulation shows that landfill gas gives the best results in terms of both H2+CO and H2/CO, and that the MSW of low-income countries can be expected to provide inferior syngas on all three quality measures. Co-gasification of the MSW from low-income countries with landfill gas, and the mixture of waste oil with RDF from low-income MSW are considered as options to improve gas quality.
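
    The three quality measures named above are straightforward to compute from a syngas composition. The helper below does so for a hypothetical dry composition; the example mole fractions and the per-component heating values are typical literature-style assumptions used only for illustration, not outputs of the paper's thermodynamic model.

      # Lower heating values of combustible components, MJ per normal cubic metre
      # (typical literature values, used here only for illustration)
      LHV_MJ_PER_NM3 = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}

      def syngas_quality(mole_fractions):
          """Return (H2+CO in %, H2/CO ratio, energy content in MJ/Nm3)
          for a dry syngas composition given as mole fractions."""
          h2, co = mole_fractions.get("H2", 0.0), mole_fractions.get("CO", 0.0)
          h2_plus_co = 100.0 * (h2 + co)
          h2_over_co = h2 / co if co > 0 else float("inf")
          lhv = sum(LHV_MJ_PER_NM3.get(k, 0.0) * x for k, x in mole_fractions.items())
          return h2_plus_co, h2_over_co, lhv

      # Hypothetical two-stage gasifier output (dry basis, illustrative only)
      composition = {"H2": 0.38, "CO": 0.30, "CO2": 0.18, "CH4": 0.04, "N2": 0.10}
      print(syngas_quality(composition))
      # roughly (68.0 %, 1.27, 9.3 MJ/Nm3) for these assumed fractions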

  20. Two-stage Hydrolysis of Invasive Algal Feedstock for Ethanol Fermentation

    Institute of Scientific and Technical Information of China (English)

    Xin Wang; Xianhua Liu; Guangyi Wang

    2011-01-01

    The overall goal of this work was to develop a saccharification method for the production of third-generation biofuel (i.e., bioethanol) using feedstock of the invasive marine macroalga Gracilaria salicornia. Under optimum conditions (120℃ and 2% sulfuric acid for 30 min), dilute acid hydrolysis of the homogenized invasive plants yielded a low concentration of glucose (4.1 mM, or 4.3 g glucose/kg fresh algal biomass). However, two-stage hydrolysis of the homogenates (a combination of dilute acid hydrolysis and enzymatic hydrolysis) produced 13.8 g of glucose from one kilogram of fresh algal feedstock. Batch fermentation produced 79.1 g of EtOH from one kilogram of dried invasive algal feedstock using the ethanologenic strain Escherichia coli K011. Furthermore, the ethanol production kinetics indicated that the invasive algal feedstock contained different types of sugars, including C5 sugars. This study represents the first report of third-generation biofuel production from invasive macroalgae, suggesting that there is great potential for the production of renewable energy using marine invasive biomass.