WorldWideScience

Sample records for two-stage stochastic combinatorial

  1. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    Science.gov (United States)

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist throughout water resources systems, yet traditional two-stage stochastic programming is risk-neutral: it compares random outcomes (e.g., total benefit) only in expectation to identify the best decisions. To address risk, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, the conditional value-at-risk (CVaR) measure, and a general two-stage stochastic programming framework. It extends traditional two-stage stochastic programming by allowing uncertainties expressed as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
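
    The risk-aversion mechanism this record describes — adding a CVaR term on the second-stage penalty to an otherwise expectation-based two-stage program — can be sketched on a toy allocation problem. This is an illustrative sketch only: the benefit/penalty coefficients, the supply scenarios, and the grid search are hypothetical stand-ins, not the paper's model.

```python
# Toy risk-aversion two-stage sketch (all figures hypothetical).
# First stage: choose an allocation target x. Second stage: pay a
# penalty c per unit of shortage when realized supply q < x.
b, c = 5.0, 12.0                                       # unit benefit / penalty
scenarios = [(0.2, 60.0), (0.5, 80.0), (0.3, 100.0)]   # (probability, supply)

def cvar(losses_probs, alpha):
    """CVaR_alpha of a discrete loss distribution: mean loss in the
    worst (1 - alpha) probability tail."""
    tail = 1.0 - alpha
    acc = cv = 0.0
    for loss, p in sorted(losses_probs, reverse=True):
        take = min(p, tail - acc)
        cv += take * loss
        acc += take
        if acc >= tail:
            break
    return cv / tail

def objective(x, lam=1.0, alpha=0.9):
    """negative benefit + expected recourse penalty + lam * CVaR(penalty)"""
    losses = [(c * max(x - q, 0.0), p) for p, q in scenarios]
    expected_penalty = sum(p * loss for loss, p in losses)
    return -b * x + expected_penalty + lam * cvar(losses, alpha)

# first-stage decision by enumeration over a coarse allocation grid
best_x = min(range(50, 121, 5), key=objective)               # risk-averse
risk_neutral_x = min(range(50, 121, 5),
                     key=lambda x: objective(x, lam=0.0))    # lam = 0
```

    With these made-up numbers the risk-neutral objective picks the larger allocation (80), while the CVaR term moves the choice to the shortage-free allocation (60) — the kind of trade-off between economy and stability the abstract refers to.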

  2. A two-stage stochastic programming approach for operating multi-energy systems

    DEFF Research Database (Denmark)

    Zeng, Qing; Fang, Jiakun; Chen, Zhe

    2017-01-01

    This paper provides a two-stage stochastic programming approach for the joint operation of multi-energy systems under uncertainty. Simulation is carried out in a test system to demonstrate the feasibility and efficiency of the proposed approach. The test energy system includes a gas subsystem with a gas...

  3. Multiobjective Two-Stage Stochastic Programming Problems with Interval Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    S. K. Barik

    2012-01-01

    Full Text Available Most real-life decision-making problems have more than one conflicting and incommensurable objective function. In this paper, we present a multiobjective two-stage stochastic linear programming problem in which some parameters of the linear constraints are interval-type discrete random variables with known probability distributions. Randomness of the discrete intervals is considered for the model parameters. Further, the concepts of best-optimum and worst-optimum solutions are analyzed in two-stage stochastic programming. To solve the stated problem, we first remove the randomness of the problem and formulate an equivalent deterministic linear programming model with multiobjective interval coefficients. The deterministic multiobjective model is then solved using the weighting method, where we apply the solution procedure of the interval linear programming technique. We obtain the upper and lower bounds of the objective function as the best and the worst values, respectively, which highlights the possible risk involved in the decision making. A numerical example is presented to demonstrate the proposed solution procedure.
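
    The best-optimum/worst-optimum idea from this record can be illustrated on a deliberately tiny interval LP (one variable, made-up interval coefficients; not the paper's multiobjective model): the best value takes the most favourable endpoint of every interval and the worst value the least favourable one.

```python
# Best/worst optimum for a one-variable interval LP (illustrative data):
# maximize c*x subject to a*x <= b, x >= 0, with c in [3,4], a in [1,2],
# b in [8,10]; the optimum of this LP is x = b/a with value c*b/a.
c_lo, c_hi = 3.0, 4.0
a_lo, a_hi = 1.0, 2.0
b_lo, b_hi = 8.0, 10.0

def optimum(c, a, b):
    return c * b / a                      # closed form for this tiny LP

best_value = optimum(c_hi, a_lo, b_hi)    # most favourable endpoints
worst_value = optimum(c_lo, a_hi, b_lo)   # least favourable endpoints
```

    The resulting spread of optimal values, [12, 40] here, is exactly the risk information the interval formulation is meant to expose to the decision maker.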

  4. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    OpenAIRE

    Liu Yang; Yao Xiong; Xiao-jiao Tong

    2017-01-01

    We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying r...

  5. Adaptive Urban Stormwater Management Using a Two-stage Stochastic Optimization Model

    Science.gov (United States)

    Hung, F.; Hobbs, B. F.; McGarity, A. E.

    2014-12-01

    In many older cities, stormwater results in combined sewer overflows (CSOs) and consequent water quality impairments. Because of the expense of traditional approaches for controlling CSOs, cities are considering the use of green infrastructure (GI) to reduce runoff and pollutants. Examples of GI include tree trenches, rain gardens, green roofs, and rain barrels. However, the cost and effectiveness of GI are uncertain, especially at the watershed scale. We present a two-stage stochastic extension of the Stormwater Investment Strategy Evaluation (StormWISE) model (A. McGarity, JWRPM, 2012, 111-24) to explicitly model and optimize these uncertainties in an adaptive management framework. A two-stage model represents the immediate commitment of resources ("here & now") followed by later investment and adaptation decisions ("wait & see"). A case study is presented for Philadelphia, which intends to extensively deploy GI over the next two decades (PWD, "Green City, Clean Water - Implementation and Adaptive Management Plan," 2011). After first-stage decisions are made, the model updates the stochastic objective and constraints (learning). We model two types of "learning" about GI cost and performance. One assumes that learning occurs over time, is automatic, and does not depend on what has been done in stage one (basic model). The other considers learning resulting from active experimentation and learning-by-doing (advanced model). Both require expert probability elicitations, and learning from research and monitoring is modelled by Bayesian updating (as in S. Jacobi et al., JWRPM, 2013, 534-43). The model allocates limited financial resources to GI investments over time to achieve multiple objectives with a given reliability. Objectives include minimizing construction and O&M costs; achieving nutrient, sediment, and runoff volume targets; and addressing community concerns such as aesthetics, CO2 emissions, heat islands, and recreational values. CVaR (Conditional Value at Risk) and ...

  6. A two-stage stochastic programming model for the optimal design of distributed energy systems

    International Nuclear Information System (INIS)

    Zhou, Zhe; Zhang, Jianyun; Liu, Pei; Li, Zheng; Georgiadis, Michael C.; Pistikopoulos, Efstratios N.

    2013-01-01

    Highlights: ► The optimal design of distributed energy systems under uncertainty is studied. ► A stochastic model is developed using a genetic algorithm and a Monte Carlo method. ► The proposed system possesses inherent robustness under uncertainty. ► The inherent robustness is due to energy storage facilities and grid connection. -- Abstract: A distributed energy system is a multi-input and multi-output energy system with substantial energy, economic and environmental benefits. The optimal design of such a complex system under energy demand and supply uncertainty poses significant challenges in terms of both modelling and the corresponding solution strategies. This paper proposes a two-stage stochastic programming model for the optimal design of distributed energy systems. A two-stage decomposition based solution strategy is used to solve the optimization problem, with a genetic algorithm performing the search on the first-stage variables and a Monte Carlo method dealing with uncertainty in the second stage. The model is applied to the planning of a distributed energy system in a hotel. Detailed computational results are presented and compared with those generated by a deterministic model. The impacts of demand and supply uncertainty on the optimal design of distributed energy systems are systematically investigated using the proposed modelling framework and solution approach.
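
    The two-stage decomposition described in this record can be caricatured as an outer search over first-stage design variables wrapped around an inner Monte Carlo estimate of the expected second-stage operating cost. In this hedged sketch, exhaustive enumeration over a small capacity grid stands in for the genetic algorithm, and all cost figures and the demand distribution are invented:

```python
import random

random.seed(0)

# Hypothetical design problem: pick installed capacity (first stage),
# then meet uncertain demand either on site or from the grid (second
# stage).  Enumeration over a toy grid replaces the genetic algorithm.
CAP_COST = 2.0      # annualized cost per unit of installed capacity
GRID_PRICE = 8.0    # cost per unit bought from the grid
SELF_PRICE = 3.0    # cost per unit produced on site

def expected_operating_cost(cap, n_samples=2000):
    """inner Monte Carlo loop: average second-stage cost over demand draws"""
    total = 0.0
    for _ in range(n_samples):
        demand = max(random.gauss(10.0, 2.0), 0.0)   # uncertain demand
        onsite = min(cap, demand)
        shortfall = demand - onsite                  # bought from the grid
        total += SELF_PRICE * onsite + GRID_PRICE * shortfall
    return total / n_samples

def total_cost(cap):
    return CAP_COST * cap + expected_operating_cost(cap)

# outer loop over first-stage candidates (GA stand-in on this toy grid)
best_cap = min(range(0, 21), key=total_cost)
```

    The recovered design sizes capacity near, but slightly above, mean demand: extra capacity is worth buying only while the chance of a shortfall times the grid premium exceeds the capacity cost.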

  7. Two-stage stochastic programming model for the regional-scale electricity planning under demand uncertainty

    International Nuclear Information System (INIS)

    Huang, Yun-Hsun; Wu, Jung-Hua; Hsu, Yu-Ju

    2016-01-01

    Traditional electricity supply planning models regard electricity demand as a deterministic parameter and require the total power output to satisfy the aggregate electricity demand. Today, however, electric system planners face tremendously complex environments full of uncertainties, in which electricity demand is a key source of uncertainty. In addition, electricity demand patterns differ considerably across regions. This paper develops a multi-region optimization model based on a two-stage stochastic programming framework to incorporate the demand uncertainty. Furthermore, the decision tree method and Monte Carlo simulation are integrated into the model to represent electricity demands as nodes and to determine their values and probabilities. The proposed model was successfully applied to a real case study (i.e., Taiwan's electricity sector) to show its applicability. Detailed simulation results are presented and compared with those generated by a deterministic model. Finally, a long-term electricity development roadmap at the regional level can be provided on the basis of the simulation results. - Highlights: • A multi-region, two-stage stochastic programming model has been developed. • The decision tree and Monte Carlo simulation are integrated into the framework. • Taiwan's electricity sector is used to illustrate the applicability of the model. • The results under deterministic and stochastic cases are shown for comparison. • Optimal portfolios of regional generation technologies can be identified.

  8. A Two-Stage Maximum Entropy Prior of Location Parameter with a Stochastic Multivariate Interval Constraint and Its Properties

    Directory of Open Access Journals (Sweden)

    Hea-Jung Kim

    2016-05-01

    Full Text Available This paper proposes a two-stage maximum entropy prior to elicit uncertainty regarding a multivariate interval constraint on the location parameter of a scale mixture of normal model. Using Shannon's entropy, this study demonstrates how the prior, obtained through two stages of a prior hierarchy, appropriately accounts for the information regarding the stochastic constraint and suggests an objective measure of the degree of belief in the stochastic constraint. The study also verifies that the proposed prior bridges the gap between the canonical maximum entropy prior of the parameter with no interval constraint and that with a certain multivariate interval constraint. It is shown that the two-stage maximum entropy prior belongs to the family of rectangle screened normal distributions, which is conjugate for samples from a normal distribution. Some properties of the prior density, useful for developing a Bayesian inference of the parameter with the stochastic constraint, are provided. We also propose a hierarchical constrained scale mixture of normal model (HCSMN), which uses the prior density to estimate the constrained location parameter of a scale mixture of normal model, and demonstrate the scope of its applicability.

  9. Electricity price forecast using Combinatorial Neural Network trained by a new stochastic search method

    International Nuclear Information System (INIS)

    Abedinia, O.; Amjady, N.; Shafie-khah, M.; Catalão, J.P.S.

    2015-01-01

    Highlights: • Presenting a Combinatorial Neural Network. • Suggesting a new stochastic search method. • Adapting the suggested method as a training mechanism. • Proposing a new forecast strategy. • Testing the proposed strategy on real-world electricity markets. - Abstract: Accurate electricity price forecasts are key information for the successful operation of electricity market participants. However, the time series of electricity prices exhibits nonlinear, non-stationary and volatile behaviour, so a forecast method should have high learning capability to extract the complex input/output mapping function of electricity price. In this paper, a Combinatorial Neural Network (CNN) based forecasting engine is proposed to predict future values of price data. The CNN-based forecasting engine is equipped with a new training mechanism for optimizing the weights of the CNN. This training mechanism is based on an efficient stochastic search method, a modified version of the chemical reaction optimization algorithm, giving high learning ability to the CNN. The proposed price forecast strategy is tested on the real-world electricity markets of Pennsylvania–New Jersey–Maryland (PJM) and mainland Spain, and its results are extensively compared with those obtained from several other forecast methods. These comparisons illustrate the effectiveness of the proposed strategy.
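
    A weight-training mechanism based on stochastic search, as this record describes, can be illustrated in miniature: candidate weights are randomly perturbed and kept only when the loss improves. This greedy random search is a toy stand-in for the paper's modified chemical reaction optimization, and the one-neuron "network" and data are made up:

```python
import random

random.seed(5)

# Greedy stochastic search as a toy training mechanism.  The one-neuron
# model y = w*x + b and the data below are invented for illustration.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # y = 2x

def loss(w, b):
    """mean squared error of the one-neuron model on the toy data"""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

w, b = random.uniform(-1, 1), random.uniform(-1, 1)
best_loss = loss(w, b)
for _ in range(5000):
    cand_w = w + random.gauss(0.0, 0.1)   # random perturbation move
    cand_b = b + random.gauss(0.0, 0.1)
    cand_loss = loss(cand_w, cand_b)
    if cand_loss < best_loss:             # keep only improving moves
        w, b, best_loss = cand_w, cand_b, cand_loss
```

    Derivative-free searches of this kind trade convergence speed for robustness on rugged loss surfaces, which is why population-based variants such as chemical reaction optimization are attractive for training forecasting networks.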

  10. An inexact two-stage stochastic robust programming for residential micro-grid management-based on random demand

    International Nuclear Information System (INIS)

    Ji, L.; Niu, D.X.; Huang, G.H.

    2014-01-01

    In this paper a stochastic robust optimization problem for residential micro-grid energy management is presented. Combined cooling, heating and power (CCHP) technology is introduced to satisfy various energy demands. Two-stage programming is utilized to find the optimal installed capacity investment and operation control of the CCHP plant. Moreover, interval programming and robust stochastic optimization methods are exploited to obtain interval robust solutions under different robustness levels, which remain feasible for uncertain data. The obtained results can help micro-grid managers minimize investment and operation costs with lower system failure risk when facing a fluctuating energy market and uncertain technology parameters. The different robustness levels reflect the risk preference of the micro-grid manager. The proposed approach is applied to residential area energy management in North China. Detailed computational results under different robustness levels are presented and analyzed to support investment decisions and operation strategies. - Highlights: • An inexact two-stage stochastic robust programming model for CCHP management. • Energy market and technical parameter uncertainties were considered. • Investment decisions, operation cost, and system safety were analyzed. • Uncertainties expressed as discrete intervals and probability distributions.

  11. Optimal design of distributed energy resource systems based on two-stage stochastic programming

    International Nuclear Information System (INIS)

    Yang, Yun; Zhang, Shijie; Xiao, Yunhan

    2017-01-01

    Highlights: • A two-stage stochastic programming model is built to design DER systems under uncertainties. • Uncertain energy demands have a significant effect on the optimal design. • Uncertain energy prices and renewable energy intensity have little effect on the optimal design. • The economy is overestimated if the system is designed without considering the uncertainties. • The uncertainty in energy prices has the greatest effect on the economy. - Abstract: Multiple uncertainties exist in the optimal design of distributed energy resource (DER) systems. The expected energy, economic, and environmental benefits may not be achieved, and a deficit in energy supply may occur, if the uncertainties are not handled properly. This study focuses on the optimal design of DER systems with consideration of the uncertainties. A two-stage stochastic programming model is built that accounts for the discreteness of equipment capacities, equipment partial-load operation and output bounds, as well as the influence of ambient temperature on gas turbine performance. The stochastic model is then transformed into its deterministic equivalent and solved. As an illustrative example, the model is applied to a hospital in Lianyungang, China. Comparative studies evaluate the effect of the uncertainties in load demands, energy prices, and renewable energy intensity, separately and simultaneously, on the system's economy and optimal design. Results show that the uncertainties in load demands have a significant effect on the optimal system design, whereas the uncertainties in energy prices and renewable energy intensity have almost no effect. Results also show that the system's economy is clearly overestimated if it is designed without considering the uncertainties.

  12. Risk averse optimal operation of a virtual power plant using two stage stochastic programming

    International Nuclear Information System (INIS)

    Tajeddini, Mohammad Amin; Rahimi-Kian, Ashkan; Soroudi, Alireza

    2014-01-01

    A VPP (Virtual Power Plant) is defined as a cluster of energy conversion/storage units which are centrally operated in order to improve technical and economic performance. This paper addresses the optimal operation of a VPP considering the risk factors affecting its daily operation profits. The optimal operation is modelled in both the day-ahead and balancing markets as a two-stage stochastic mixed-integer linear program that maximizes the expected profit of a GenCo (generation company). Furthermore, CVaR (Conditional Value at Risk) is used as a risk measure in order to control the risk of low-profit scenarios. The uncertain parameters, including the PV power output, wind power output and day-ahead market prices, are modelled through scenarios. The proposed model is successfully applied to a real case study to show its applicability, and the results are presented and thoroughly discussed. - Highlights: • Virtual power plant modelling considering a set of energy generation and conversion units. • Uncertainty modelling using a two-stage stochastic programming technique. • Risk modelling using conditional value at risk. • Flexible operation of renewable energy resources. • Electricity price uncertainty in day-ahead energy markets.

  13. A Smoothing Algorithm for a New Two-Stage Stochastic Model of Supply Chain Based on Sample Average Approximation

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2017-01-01

    Full Text Available We construct a new two-stage stochastic model of a supply chain with multiple factories and distributors for a perishable product. By introducing a second-order stochastic dominance (SSD) constraint, we can describe the preference consistency of the risk taker while minimizing the expected cost of the company. To solve this problem, we convert it into an equivalent one-stage stochastic model; then we use the sample average approximation (SAA) method to approximate the expected values of the underlying random functions. A smoothing approach is proposed with which we can get the global solution and avoid introducing new variables and constraints. Meanwhile, we investigate the convergence of the optimal value of the transformed model and show that, with probability approaching one at an exponential rate, the optimal value converges to its counterpart as the sample size increases. Numerical results show the effectiveness of the proposed algorithm and analysis.
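
    The SAA step this record relies on — replacing the expected cost by an average over sampled scenarios and optimizing the resulting deterministic problem — can be sketched with a newsvendor-style cost for a perishable product. The cost coefficients and demand distribution are illustrative, and grid search replaces the paper's smoothing algorithm:

```python
import random
import statistics

random.seed(1)

# SAA sketch: approximate E[cost(x, demand)] by a sample average over N
# demand scenarios, then minimize the deterministic approximation.
ORDER_COST, SHORT_COST, WASTE_COST = 2.0, 6.0, 1.0   # made-up coefficients

def cost(x, demand):
    return (ORDER_COST * x
            + SHORT_COST * max(demand - x, 0.0)    # lost sales
            + WASTE_COST * max(x - demand, 0.0))   # perished stock

def saa_objective(x, samples):
    """sample-average approximation of the expected cost at order size x"""
    return statistics.fmean(cost(x, d) for d in samples)

samples = [random.gauss(100.0, 15.0) for _ in range(5000)]
x_star = min(range(60, 141), key=lambda x: saa_objective(x, samples))
```

    With a large enough sample, the SAA minimizer lands near the true critical-fractile solution (about 103 with these invented coefficients), consistent with the exponential-rate convergence the abstract mentions.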

  14. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMMING APPROACH

    OpenAIRE

    Cunha, P.S.A.; Oliveira, F.; Raupp, Fernanda M.P.

    2017-01-01

    ABSTRACT Here, we propose a novel methodology for replenishment and control systems for inventories of two-echelon logistics networks using two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer service, we introduce a variable rationing rule to address shortages of the item. The devised models are reformulated into their deterministic equivalents, resulting in nonlinear mixed-integer programming models, which a...

  15. A novel two-stage stochastic programming model for uncertainty characterization in short-term optimal strategy for a distribution company

    International Nuclear Information System (INIS)

    Ahmadi, Abdollah; Charwand, Mansour; Siano, Pierluigi; Nezhad, Ali Esmaeel; Sarno, Debora; Gitizadeh, Mohsen; Raeisi, Fatima

    2016-01-01

    To supply the demands of end users in a competitive market, a distribution company purchases energy from the wholesale market, while further options, such as distributed generation units and interruptible loads, are available if the company owns them. In this regard, this study presents a two-stage stochastic programming model for a distribution company's energy acquisition, managing the involvement of different electric energy resources characterized by uncertainties at minimum cost. In particular, the distribution company's operations planning over a day-ahead horizon is modeled as a stochastic mathematical optimization with the objective of minimizing costs. By this, the distribution company's decisions on grid purchases, owned distributed generation units, and interruptible load scheduling are determined. These decisions are then considered as boundary constraints in a second step, which deals with the distribution company's operations in the hour-ahead market with the objective of minimizing the short-term cost. The uncertainties in spot market prices and wind speed are modeled by means of probability distribution functions of their forecast errors, and the roulette wheel mechanism and lattice Monte Carlo simulation are used to generate scenarios. Numerical results show the capability of the proposed method. - Highlights: • Proposing a new stochastic two-stage operations framework in retail competitive markets. • Proposing a mixed-integer non-linear stochastic programming model. • Employing the roulette wheel mechanism and lattice Monte Carlo simulation.

  16. 2–stage stochastic Runge–Kutta for stochastic delay differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Rosli, Norhayati; Jusoh Awang, Rahimah [Faculty of Industrial Science and Technology, Universiti Malaysia Pahang, Lebuhraya Tun Razak, 26300, Gambang, Pahang (Malaysia); Bahar, Arifah; Yeak, S. H. [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)

    2015-05-15

    This paper proposes a newly developed one-step derivative-free method, the 2-stage stochastic Runge-Kutta (SRK2) method, to approximate the solution of stochastic delay differential equations (SDDEs) with a constant time lag, r > 0. A general formulation of stochastic Runge-Kutta methods for SDDEs is introduced, and the Stratonovich Taylor series expansion for the numerical solution of SRK2 is presented. The local truncation error of SRK2 is measured by comparing the Stratonovich Taylor expansion of the exact solution with that of the computed solution. A numerical experiment is performed to confirm the validity of the method in simulating the strong solution of SDDEs.
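
    A generic 2-stage (predictor-corrector, Heun-type) stochastic Runge-Kutta step for a scalar SDDE with constant delay can be sketched as follows. This is a standard construction under assumed coefficients, not necessarily the authors' SRK2 scheme, and it assumes the delay r is a multiple of the step size h:

```python
import math
import random

random.seed(2)

# Heun-type 2-stage stochastic Runge-Kutta sketch for a scalar SDDE
#   dX = f(X(t), X(t - r)) dt + g(X(t), X(t - r)) dW,  constant delay r.
def srk2_sdde(f, g, history, r, t_end, h):
    lag = round(r / h)                     # delay measured in steps
    # seed the grid on [-r, 0] from the history function
    xs = [history(-r + i * h) for i in range(lag)] + [history(0.0)]
    for _ in range(round(t_end / h)):
        x, x_del = xs[-1], xs[-1 - lag]    # current and delayed states
        dW = random.gauss(0.0, math.sqrt(h))
        # stage 1: Euler predictor
        x_pred = x + f(x, x_del) * h + g(x, x_del) * dW
        # stage 2: trapezoidal corrector averaging the two stages
        xs.append(x + 0.5 * (f(x, x_del) + f(x_pred, x_del)) * h
                    + 0.5 * (g(x, x_del) + g(x_pred, x_del)) * dW)
    return xs

# test problem: dX = -2 X(t - 0.1) dt + 0.1 dW, with X(t) = 1 for t <= 0
path = srk2_sdde(lambda x, xd: -2.0 * xd, lambda x, xd: 0.1,
                 lambda t: 1.0, r=0.1, t_end=2.0, h=0.01)
```

    On this stable linear test problem the simulated path decays from the constant history toward a small noise-driven fluctuation around zero, which is the qualitative behaviour a strong-solution scheme should reproduce.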

  17. Quantum fields and processes: a combinatorial approach

    CERN Document Server

    Gough, John

    2018-01-01

    Wick ordering of creation and annihilation operators is of fundamental importance for computing averages and correlations in quantum field theory and, by extension, in the Hudson-Parthasarathy theory of quantum stochastic processes, quantum mechanics, stochastic processes, and probability. This book develops the unified combinatorial framework behind these examples, starting with the simplest mathematically, and working up to the Fock space setting for quantum fields. Emphasizing ideas from combinatorics such as the role of lattice of partitions for multiple stochastic integrals by Wallstrom-Rota and combinatorial species by Joyal, it presents insights coming from quantum probability. It also introduces a 'field calculus' which acts as a succinct alternative to standard Feynman diagrams and formulates quantum field theory (cumulant moments, Dyson-Schwinger equation, tree expansions, 1-particle irreducibility) in this language. Featuring many worked examples, the book is aimed at mathematical physicists, quant...

  19. Decomposition and (importance) sampling techniques for multi-stage stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G.

    1993-11-01

    The difficulty of solving large-scale multi-stage stochastic linear programs arises from the sheer number of scenarios associated with numerous stochastic parameters. The number of scenarios grows exponentially with the number of stages, and problems easily get out of hand even for very moderate numbers of stochastic parameters per stage. Our method combines dual (Benders) decomposition with Monte Carlo sampling techniques. We employ importance sampling to efficiently obtain accurate estimates of both expected future costs and the gradients and right-hand sides of cuts. The method enables us to solve practical large-scale problems with many stages and numerous stochastic parameters per stage. We discuss the theory of sharing and adjusting cuts between different scenarios in a stage. We derive probabilistic lower and upper bounds, using importance path sampling for the upper bound estimation. Initial numerical results are promising.
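
    The importance-sampling ingredient of this record can be shown in isolation: to estimate an expected cost dominated by a rare, expensive scenario, sample from a shifted distribution and correct with the likelihood ratio. The cost function and densities below are hypothetical, not taken from the report:

```python
import math
import random

random.seed(3)

# Importance-sampling sketch: the expected cost is driven by a rare tail
# scenario, so we sample from a shifted normal and reweight.
def cost(xi):
    return 100.0 if xi > 3.0 else 1.0      # rare but expensive scenario

def normal_pdf(x, mu):
    """standard-deviation-1 normal density centred at mu"""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

def naive_estimate(n):
    """crude Monte Carlo under the true N(0, 1) scenario distribution"""
    return sum(cost(random.gauss(0.0, 1.0)) for _ in range(n)) / n

def importance_estimate(n, shift=3.0):
    """sample from N(shift, 1) and reweight by the likelihood ratio"""
    total = 0.0
    for _ in range(n):
        xi = random.gauss(shift, 1.0)                        # tail-heavy draw
        weight = normal_pdf(xi, 0.0) / normal_pdf(xi, shift)
        total += weight * cost(xi)
    return total / n
```

    Both estimators are unbiased for the same expectation; the shifted sampler hits the rare scenario far more often, which is what importance sampling exploits when estimating tail-sensitive costs and cut coefficients.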

  20. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    Science.gov (United States)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers when resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by a sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; and (v) as expected, the rule-based model holds more inventory than the optimization model.

  1. A review of simheuristics: Extending metaheuristics to deal with stochastic combinatorial optimization problems

    Directory of Open Access Journals (Sweden)

    Angel A. Juan

    2015-12-01

    Full Text Available Many combinatorial optimization problems (COPs) encountered in real-world logistics, transportation, production, healthcare, financial, telecommunication, and computing applications are NP-hard in nature. These real-life COPs are frequently characterized by their large-scale sizes and the need to obtain high-quality solutions in short computing times, thus requiring the use of metaheuristic algorithms. Metaheuristics benefit from different random-search and parallelization paradigms, but they frequently assume that the problem inputs, the underlying objective function, and the set of optimization constraints are deterministic. However, uncertainty is all around us, which often makes deterministic models oversimplified versions of real-life systems. After completing an extensive review of related work, this paper describes a general methodology that allows metaheuristics to be extended through simulation to solve stochastic COPs. 'Simheuristics' allow modelers to deal with real-life uncertainty in a natural way by integrating simulation (in any of its variants) into a metaheuristic-driven framework. These optimization-driven algorithms rely on the fact that efficient metaheuristics already exist for the deterministic version of the corresponding COP. Simheuristics also facilitate the introduction of risk and/or reliability analysis criteria during the assessment of alternative high-quality solutions to stochastic COPs. Several examples of applications in different fields illustrate the potential of the proposed methodology.
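
    A minimal simheuristic loop in the spirit of this record, under invented data: a search on the deterministic objective proposes candidates, and simulation re-evaluates the elite set under stochastic inputs before the final choice. Exhaustive enumeration stands in for the metaheuristic on this tiny scheduling instance:

```python
import itertools
import random

random.seed(4)

# Toy single-machine sequencing problem with made-up processing times:
# search the deterministic problem, then use simulation to re-rank the
# elite candidates under stochastic durations.
jobs = [4.0, 2.0, 7.0, 3.0, 5.0]          # nominal processing times

def det_cost(order):
    """total flow time of a sequence under deterministic durations"""
    t = total = 0.0
    for j in order:
        t += jobs[j]
        total += t
    return total

def sim_cost(order, n=500):
    """Monte Carlo estimate of expected flow time with noisy durations"""
    total = 0.0
    for _ in range(n):
        t = 0.0
        for j in order:
            t += jobs[j] * random.uniform(0.8, 1.4)   # stochastic duration
            total += t
    return total / n

# optimization stage: enumeration stands in for a metaheuristic here
ranked = sorted(itertools.permutations(range(len(jobs))), key=det_cost)
elite = ranked[:5]                        # best deterministic candidates

# simulation stage: choose among the elite under uncertainty
best = min(elite, key=sim_cost)
```

    With this multiplicative noise the expected simulated cost is proportional to the deterministic cost, so the two rankings agree; under asymmetric uncertainty or risk criteria the simulation stage can genuinely reorder the elite, which is the point of the methodology.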

  2. Approximation in two-stage stochastic integer programming

    NARCIS (Netherlands)

    W. Romeijnders; L. Stougie (Leen); M. van der Vlerk

    2014-01-01

    Approximation algorithms are the prevalent solution methods in the field of stochastic programming. Problems in this field are very hard to solve. Indeed, most of the research in this field has concentrated on designing solution methods that approximate the optimal solution value.

  4. A simulation-based interval two-stage stochastic model for agricultural nonpoint source pollution control through land retirement

    International Nuclear Information System (INIS)

    Luo, B.; Li, J.B.; Huang, G.H.; Li, H.L.

    2006-01-01

    This study presents a simulation-based interval two-stage stochastic programming (SITSP) model for agricultural nonpoint source (NPS) pollution control through land retirement under uncertain conditions. The modeling framework was established by developing an interval two-stage stochastic program whose random parameters are provided by statistical analysis of the simulation outcomes of a distributed water quality approach. The developed model can deal with the tradeoff between agricultural revenue and 'off-site' water quality concerns under random effluent discharge for a land retirement scheme by minimizing the expected value of the long-term total economic and environmental cost. In addition, the uncertainties presented as interval numbers in the agriculture-water system can be effectively quantified with interval programming. By subdividing the whole agricultural watershed into different zones, the most pollution-sensitive cropland can be identified and an optimal land retirement scheme obtained through the modeling approach. The developed method was applied to the Swift Current Creek watershed in Canada for soil erosion control through land retirement. The Hydrological Simulation Program-FORTRAN (HSPF) was used to simulate the sediment information for this case study. Obtained results indicate that the total economic and environmental cost of the entire agriculture-water system can be limited within an interval value for the optimal land retirement schemes. Meanwhile, best and worst land retirement schemes were obtained for the study watershed under the various uncertainties.

  5. Risk-Based Two-Stage Stochastic Optimization Problem of Micro-Grid Operation with Renewables and Incentive-Based Demand Response Programs

    Directory of Open Access Journals (Sweden)

    Pouria Sheikhahmadi

    2018-03-01

    The operation problem of a micro-grid (MG) in grid-connected mode is an optimization problem in which the main objective of the MG operator (MGO) is to minimize the operation cost with optimal scheduling of resources and optimal energy trading with the main grid. The MGO can use incentive-based demand response programs (DRPs) to pay an incentive to consumers to change their demands in the peak hours. Moreover, the MGO forecasts the output power of renewable energy resources (RERs) and models their uncertainties in its problem. In this paper, the operation problem of an MGO is modeled as a risk-based two-stage stochastic optimization problem. To model the uncertainties of RERs, two-stage stochastic programming is considered and the conditional value at risk (CVaR) index is used to manage the MGO’s risk level. Moreover, non-linear economic models of incentive-based DRPs are used by the MGO to change the peak load. Numerical studies are carried out to investigate the effect of incentive-based DRPs on the operation problem of the MGO. Moreover, to show the effect of the risk-averse parameter on MGO decisions, a sensitivity analysis is carried out.
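
    The CVaR index this record mentions can be computed directly for a discrete scenario set. The sketch below uses the common definition (expected loss inside the worst 1 − α probability tail); it is an illustration of the measure, not the paper's formulation.

```python
def cvar(losses, probs, alpha=0.95):
    """Conditional value-at-risk of a discrete loss distribution:
    the expected loss inside the worst (1 - alpha) probability tail."""
    tail = 1.0 - alpha
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    acc = 0.0   # tail probability mass collected so far
    cv = 0.0    # probability-weighted tail loss
    for i in reversed(order):          # walk scenarios from worst to best
        take = min(probs[i], tail - acc)
        if take <= 0.0:
            break
        cv += take * losses[i]
        acc += take
    return cv / tail
```

    For ten equiprobable losses 0..9, CVaR at α = 0.9 is 9 (the single worst scenario) and at α = 0.8 it is 8.5 (average of the two worst); a risk-averse operator would weight this term against expected cost in the objective.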

  6. Effects of Risk Aversion on Market Outcomes: A Stochastic Two-Stage Equilibrium Model

    DEFF Research Database (Denmark)

    Kazempour, Jalal; Pinson, Pierre

    2016-01-01

    This paper evaluates how different risk preferences of electricity producers alter the market-clearing outcomes. Toward this goal, we propose a stochastic equilibrium model for electricity markets with two settlements, i.e., day-ahead and balancing, in which a number of conventional and stochastic...... by its optimality conditions, resulting in a mixed complementarity problem. Numerical results from a case study based on the IEEE one-area reliability test system are derived and discussed....

  7. Capacity expansion of stochastic power generation under two-stage electricity markets

    DEFF Research Database (Denmark)

    Pineda, Salvador; Morales González, Juan Miguel

    2016-01-01

    are first formulated from the standpoint of a social planner to characterize a perfectly competitive market. We investigate the effect of two paradigmatic market designs on generation expansion planning: a day-ahead market that is cleared following a conventional cost merit-order principle, and an ideal...... of stochastic power generating units. This framework includes the explicit representation of a day-ahead and a balancing market-clearing mechanisms to properly capture the impact of forecast errors of power production on the short-term operation of a power system. The proposed generation expansion problems...... market-clearing procedure that determines day-ahead dispatch decisions accounting for their impact on balancing operation costs. Furthermore, we reformulate the proposed models to determine the optimal expansion decisions that maximize the profit of a collusion of stochastic power producers in order...

  8. The priming of basic combinatory responses in MEG.

    Science.gov (United States)

    Blanco-Elorrieta, Esti; Ferreira, Victor S; Del Prato, Paul; Pylkkänen, Liina

    2018-01-01

    Priming has been a powerful tool for the study of human memory and especially the memory representations relevant for language. However, although it is well established that lexical access can be primed, we do not know exactly what types of computations can be primed above the word level. This work took a neurobiological approach and assessed the ways in which the complex representation of a minimal combinatory phrase, such as red boat, can be primed, as evidenced by the spatiotemporal profiles of magnetoencephalography (MEG) signals. Specifically, we built upon recent progress on the neural signatures of phrasal composition and tested whether the brain activities implicated in the basic combination of two words could be primed. In two experiments, MEG was recorded during a picture naming task where the prime trials were designed to replicate previously reported combinatory effects and the target trials to test whether those combinatory effects could be primed. The manipulation of the primes was successful in eliciting larger activity for adjective-noun combinations than single nouns in left anterior temporal and ventromedial prefrontal cortices, replicating prior MEG studies on parallel contrasts. Priming of similarly timed activity was observed during target trials in anterior temporal cortex, but only when the prime and target shared an adjective. No priming in temporal cortex was observed for single word repetition, and two control tasks showed that the priming effect was not elicited if the prime pictures were simply viewed but not named. In sum, this work provides evidence that very basic combinatory operations can be primed, with the necessity for some lexical overlap between prime and target suggesting combinatory conceptual, as opposed to syntactic, processing. Both our combinatory and priming effects were early, onsetting between 100 and 150 ms after picture onset, and thus are likely to reflect the very earliest planning stages of a combinatory message.

  9. On A Two-Stage Supply Chain Model In The Manufacturing Industry ...

    African Journals Online (AJOL)

    We model a two-stage supply chain where the upstream stage (stage 2) always meets demand from the downstream stage (stage 1). Demand is stochastic; hence shortages will occasionally occur at stage 2. Stage 2 must fill these shortages by expediting using overtime production and/or backordering. We derive optimal ...

  10. Mortgage Loan Portfolio Optimization Using Multi-Stage Stochastic Programming

    DEFF Research Database (Denmark)

    Rasmussen, Kourosh Marjani; Clausen, Jens

    2007-01-01

    We consider the dynamics of the Danish mortgage loan system and propose several models to reflect the choices of a mortgagor as well as his attitude towards risk. The models are formulated as multi-stage stochastic integer programs, which are difficult to solve for more than 10 stages. Scenario...

  11. Machine learning meliorates computing and robustness in discrete combinatorial optimization problems.

    Directory of Open Access Journals (Sweden)

    Fushing Hsieh

    2016-11-01

    Discrete combinatorial optimization problems in the real world are typically defined via an ensemble of potentially high dimensional measurements pertaining to all subjects of a system under study. We point out that such a data ensemble in fact embeds the system's information content, which is not directly used in defining the combinatorial optimization problems. Can machine learning algorithms extract such information content and make combinatorial optimizing tasks more efficient? Would such algorithmic computations bring new perspectives into this classic topic of Applied Mathematics and Theoretical Computer Science? We show that the answers to both questions are positive. One key reason is permutation invariance: the data ensemble of subjects' measurement vectors is permutation invariant when it is represented through a subject-vs-measurement matrix. An unsupervised machine learning algorithm, called Data Mechanics (DM), is applied to find optimal permutations on the row and column axes such that the permuted matrix reveals coupled deterministic and stochastic structures as the system's information content. The deterministic structures are shown to facilitate a geometry-based divide-and-conquer scheme that helps the optimizing task, while the stochastic structures are used to generate an ensemble of mimicries retaining the deterministic structures, and then reveal the robustness pertaining to the original optimal solution. Two simulated systems, the Assignment problem and the Traveling Salesman problem, are considered. Beyond demonstrating computational advantages and intrinsic robustness in the two systems, we propose brand-new robust optimal solutions. We believe such robust versions of optimal solutions are potentially more realistic and practical in real-world settings.

  12. Combined Two-Stage Stochastic Programming and Receding Horizon Control Strategy for Microgrid Energy Management Considering Uncertainty

    Directory of Open Access Journals (Sweden)

    Zhongwen Li

    2016-06-01

    Microgrids (MGs) are presented as a cornerstone of smart grids. With the potential to integrate intermittent renewable energy sources (RES) in a flexible and environmentally friendly way, the MG concept has gained even more attention. Due to the randomness of RES, load, and electricity price in an MG, forecast errors will affect the performance of the power scheduling and the operating cost of an MG. In this paper, a combined stochastic programming and receding horizon control (SPRHC) strategy is proposed for microgrid energy management under uncertainty, which combines the advantages of two-stage stochastic programming (SP) and receding horizon control (RHC). With the SP strategy, a scheduling plan can be derived that minimizes the risk of uncertainty by involving the uncertainty of the MG in the optimization model. With the RHC strategy, the uncertainty within the MG can be further compensated through a feedback mechanism with the most recently updated forecast information. In our approach, a proper strategy is also proposed to keep the SP model a mixed integer linear constrained quadratic programming (MILCQP) problem, which is solvable without resorting to any heuristic algorithms. The results of numerical experiments demonstrate the superiority of the proposed strategy for both islanded and grid-connected operating modes of an MG.
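
    Stripped of the MG-specific model, the receding-horizon feedback idea is a generic loop: re-plan over the remaining horizon with the latest forecast, commit only the first action, observe the new state, and roll forward. A minimal sketch, in which the `plan` and `apply_first` callables are placeholders standing in for the stochastic program and the plant simulator:

```python
def receding_horizon(state, forecasts, plan, apply_first):
    """Generic receding-horizon loop: at every step, re-plan over the
    remaining horizon using the latest forecast, commit only the first
    planned action, and roll the state forward."""
    applied = []
    for forecast in forecasts:
        actions = plan(state, forecast)         # open-loop plan for the horizon
        state = apply_first(state, actions[0])  # commit the first action only
        applied.append(actions[0])
    return state, applied
```

    In the SPRHC setting, `plan` would be the two-stage SP solve over scenarios and `apply_first` the actual system response; re-planning at each step is what compensates forecast errors.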

  13. Stochastic optimization: beyond mathematical programming

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Stochastic optimization, which includes bio-inspired algorithms, is gaining momentum in areas where more classical optimization algorithms fail to deliver satisfactory results, or simply cannot be directly applied. This presentation will introduce baseline stochastic optimization algorithms and illustrate their efficiency in different domains, from continuous non-convex problems to combinatorial optimization problems, to problems for which a non-parametric formulation can help explore unforeseen possible solution spaces.

  14. PERIODIC REVIEW SYSTEM FOR INVENTORY REPLENISHMENT CONTROL FOR A TWO-ECHELON LOGISTICS NETWORK UNDER DEMAND UNCERTAINTY: A TWO-STAGE STOCHASTIC PROGRAMMING APPROACH

    Directory of Open Access Journals (Sweden)

    P.S.A. Cunha

    Here, we propose a novel methodology for replenishment and control of inventories in two-echelon logistics networks using two-stage stochastic programming, considering periodic review and uncertain demands. In addition, to achieve better customer service, we introduce a variable rationing rule to address shortages of the item. The devised models are reformulated into their deterministic equivalents, resulting in nonlinear mixed-integer programming models, which are then approximately linearized. To deal with the uncertain nature of item demand levels, we apply a Monte Carlo simulation-based method to generate finite, discrete sets of scenarios. Moreover, the proposed approach does not require restrictive assumptions about the behavior of the probabilistic phenomena, as do several existing methods in the literature. Numerical experiments with the proposed approach on randomly generated instances of the problem show results with errors around 1%.
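
    The Monte Carlo scenario-generation step described above can be sketched as follows. The quantile-block discretisation and the demand distribution used here are simplifications for illustration, not necessarily the authors' procedure:

```python
import random

def demand_scenarios(mean, sd, n_samples=1000, n_scen=5, seed=42):
    """Monte Carlo scenario generation: draw demand samples, then collapse
    them into a small equiprobable scenario set by averaging each
    quantile block (a crude but common discretisation)."""
    rng = random.Random(seed)
    samples = sorted(max(0.0, rng.gauss(mean, sd)) for _ in range(n_samples))
    size = n_samples // n_scen
    scen = [sum(samples[i * size:(i + 1) * size]) / size for i in range(n_scen)]
    prob = [1.0 / n_scen] * n_scen
    return scen, prob
```

    The resulting finite scenario set is what the deterministic-equivalent reformulation is built on: one copy of the second-stage variables per scenario, weighted by `prob`.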

  15. A two-factor, stochastic programming model of Danish mortgage-backed securities

    DEFF Research Database (Denmark)

    Nielsen, Søren S.; Poulsen, Rolf

    2004-01-01

    -trivial, both in terms of deciding on an initial mortgage, and in terms of managing (rebalancing) it optimally.We propose a two-factor, arbitrage-free interest-rate model, calibrated to observable security prices, and implement on top of it a multi-stage, stochastic optimization program with the purpose...

  16. An inexact fuzzy two-stage stochastic model for quantifying the efficiency of nonpoint source effluent trading under uncertainty

    International Nuclear Information System (INIS)

    Luo, B.; Maqsood, I.; Huang, G.H.; Yin, Y.Y.; Han, D.J.

    2005-01-01

    Reduction of nonpoint source (NPS) pollution from agricultural lands is a major concern in most countries. One method to reduce NPS pollution is through land retirement programs. This method, however, may result in enormous economic costs, especially when large sums of cropland need to be retired. To reduce the cost, effluent trading can be coupled with land retirement programs. However, the trading efforts can also become inefficient due to various uncertainties existing in stochastic, interval, and fuzzy formats in agricultural systems. Thus, it is desirable to develop improved methods to effectively quantify the efficiency of potential trading efforts by considering those uncertainties. In this respect, this paper presents an inexact fuzzy two-stage stochastic programming model to tackle such problems. The proposed model can facilitate decision-making to implement trading efforts for agricultural NPS pollution reduction through land retirement programs. The applicability of the model is demonstrated through a hypothetical effluent trading program within a subcatchment of the Lake Tai Basin in China. The study results indicate that the efficiency of the trading program is significantly influenced by precipitation amount, agricultural activities, and the level of discharge limits of pollutants. The results also show that the trading program will be more effective in low precipitation years and with stricter discharge limits.

  17. Combinatorial vector fields and the valley structure of fitness landscapes.

    Science.gov (United States)

    Stadler, Bärbel M R; Stadler, Peter F

    2010-12-01

    Adaptive (downhill) walks are a computationally convenient way of analyzing the geometric structure of fitness landscapes. Their inherently stochastic nature has limited their mathematical analysis, however. Here we develop a framework that interprets adaptive walks as deterministic trajectories in combinatorial vector fields and in turn associates these combinatorial vector fields with weights that measure their steepness across the landscape. We show that the combinatorial vector fields and their weights have a product structure that is governed by the neutrality of the landscape. This product structure makes practical computations feasible. The framework presented here also provides an alternative, and mathematically more convenient, way of defining notions of valleys, saddle points, and barriers in landscapes. As an application, we propose a refined approximation for transition rates between macrostates that are associated with the valleys of the landscape.

  18. A multi-stage stochastic transmission expansion planning method

    International Nuclear Information System (INIS)

    Akbari, Tohid; Rahimikian, Ashkan; Kazemi, Ahad

    2011-01-01

    Highlights: → We model a multi-stage stochastic transmission expansion planning problem. → We include available transfer capability (ATC) in our model. → Involving this criterion will increase the ATC between source and sink points. → Power system reliability will be increased and more money can be saved. - Abstract: This paper presents a multi-stage stochastic model for short-term transmission expansion planning considering the available transfer capability (ATC). The ATC can have a huge impact on the power market outcomes and the power system reliability. The transmission expansion planning (TEP) studies deal with many uncertainties, such as system load uncertainties that are considered in this paper. The Monte Carlo simulation method has been applied for generating different scenarios. A scenario reduction technique is used for reducing the number of scenarios. The objective is to minimize the sum of investment costs (IC) and the expected operation costs (OC). The solution technique is based on the benders decomposition algorithm. The N-1 contingency analysis is also done for the TEP problem. The proposed model is applied to the IEEE 24 bus reliability test system and the results are efficient and promising.
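
    The scenario-reduction step mentioned in the abstract is often done with greedy fast-forward selection. Below is a toy sketch for one-dimensional scenarios in the spirit of such methods; the distance measure, tie-breaking, and all data are illustrative, not the paper's algorithm:

```python
def fast_forward(values, probs, k):
    """Greedy fast-forward scenario reduction: keep k scenarios that
    minimize a transport-style weighted distance, then move each dropped
    scenario's probability to its nearest kept neighbour."""
    n = len(values)
    dist = [[abs(values[i] - values[j]) for j in range(n)] for i in range(n)]
    kept, remaining = [], set(range(n))

    def cost_if_kept(u):
        # total weighted distance of the still-dropped scenarios to {kept + u}
        return sum(probs[v] * min(dist[v][w] for w in kept + [u])
                   for v in remaining if v != u)

    for _ in range(k):
        best = min(sorted(remaining), key=cost_if_kept)
        kept.append(best)
        remaining.remove(best)

    # redistribute the probability mass of the dropped scenarios
    new_probs = {u: probs[u] for u in kept}
    for v in remaining:
        nearest = min(kept, key=lambda u: dist[v][u])
        new_probs[nearest] += probs[v]
    return sorted(kept), new_probs
```

    For two well-separated clusters of scenarios, the reduction keeps one representative per cluster and doubles its probability, which is exactly the behaviour a Monte Carlo-generated scenario tree needs before the Benders master problem becomes tractable.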

  19. A combinatorial framework to quantify peak/pit asymmetries in complex dynamics

    NARCIS (Netherlands)

    Hasson, Uri; Iacovacci, Jacopo; Davis, Ben; Flanagan, Ryan; Tagliazucchi, E.; Laufs, Helmut; Lacasa, Lucas

    2018-01-01

    We explore a combinatorial framework which efficiently quantifies the asymmetries between minima and maxima in local fluctuations of time series. We first showcase its performance by applying it to a battery of synthetic cases. We find rigorous results on some canonical dynamical models (stochastic

  20. Combinatorial application of two aldehyde oxidoreductases on isobutanol production in the presence of furfural.

    Science.gov (United States)

    Seo, Hyung-Min; Jeon, Jong-Min; Lee, Ju Hee; Song, Hun-Suk; Joo, Han-Byul; Park, Sung-Hee; Choi, Kwon-Young; Kim, Yong Hyun; Park, Kyungmoon; Ahn, Jungoh; Lee, Hongweon; Yang, Yung-Hun

    2016-01-01

    Furfural is a toxic by-product formed during pretreatment of lignocellulosic biomass. To utilize lignocellulosic biomass for isobutanol production, the inhibitory effect of furfural on isobutanol production was investigated, and combinatorial application of two oxidoreductases, FucO and YqhD, was suggested as an alternative strategy. Furfural decreased cell growth and isobutanol production when only YqhD or FucO was employed as the isobutyraldehyde oxidoreductase. However, combinatorial overexpression of FucO and YqhD overcame the inhibitory effect of furfural, increasing isobutanol production by 110% compared with overexpression of YqhD alone. The combinatorial oxidoreductases increased the furfural detoxification rate 2.1-fold and also accelerated glucose consumption 1.4-fold. Compared with another known system for increasing furfural tolerance, membrane-bound transhydrogenase (pntAB), the combinatorial aldehyde oxidoreductases performed better in terms of cell growth and production. Thus, controlling oxidoreductases is important for producing isobutanol from furfural-containing biomass, and combinatorial overexpression of FucO and YqhD can be an alternative strategy.

  1. An inexact two-stage stochastic energy systems planning model for managing greenhouse gas emission at a municipal level

    International Nuclear Information System (INIS)

    Lin, Q.G.; Huang, G.H.

    2010-01-01

    Energy management systems are highly complicated by greenhouse-gas emission reduction issues and a variety of social, economic, political, environmental and technical factors. To address such complexities, municipal energy systems planning models are desired, as they can take account of these factors and their interactions within municipal energy management systems. This research develops an interval-parameter two-stage stochastic municipal energy systems planning model (ITS-MEM) for supporting decisions on energy systems planning and GHG (greenhouse gas) emission management at a municipal level. ITS-MEM is then applied to a case study. The results indicated that the developed model was capable of supporting municipal energy systems planning and environmental management under uncertainty. Solutions of ITS-MEM would provide an effective linkage between pre-regulated environmental policies (GHG-emission reduction targets) and the associated economic implications (GHG-emission credit trading).

  2. Stochastic integer programming by dynamic programming

    NARCIS (Netherlands)

    Lageweg, B.J.; Lenstra, J.K.; Rinnooy Kan, A.H.G.; Stougie, L.; Ermoliev, Yu.; Wets, R.J.B.

    1988-01-01

    Stochastic integer programming is a suitable tool for modeling hierarchical decision situations with combinatorial features. In continuation of our work on the design and analysis of heuristics for such problems, we now try to find optimal solutions. Dynamic programming techniques can be used to

  3. Stochastic programming with integer recourse

    NARCIS (Netherlands)

    van der Vlerk, Maarten Hendrikus

    1995-01-01

    In this thesis we consider two-stage stochastic linear programming models with integer recourse. Such models are at the intersection of two different branches of mathematical programming. On the one hand some of the model parameters are random, which places the problem in the field of stochastic

  4. Combinatorial Clustering Algorithm of Quantum-Behaved Particle Swarm Optimization and Cloud Model

    Directory of Open Access Journals (Sweden)

    Mi-Yuan Shan

    2013-01-01

    We propose a combinatorial clustering algorithm of cloud model and quantum-behaved particle swarm optimization (COCQPSO) to solve the stochastic problem. The algorithm employs a novel probability model as well as a permutation-based local search method. We set the parameters of COCQPSO based on a design of experiments. In a comprehensive computational study, we scrutinize the performance of COCQPSO on a set of widely used benchmark instances. By benchmarking the combinatorial clustering algorithm against state-of-the-art algorithms, we show that its performance compares very favorably. The fuzzy combinatorial optimization algorithm of cloud model and quantum-behaved particle swarm optimization (FCOCQPSO) in vague sets (IVSs) is more expressive than the other fuzzy sets. Finally, numerical examples show the remarkable clustering effectiveness of the COCQPSO and FCOCQPSO algorithms.

  5. A heterogeneous stochastic FEM framework for elliptic PDEs

    International Nuclear Information System (INIS)

    Hou, Thomas Y.; Liu, Pengfei

    2015-01-01

    We introduce a new concept of sparsity for the stochastic elliptic operator −div(a(x,ω)∇(⋅)), which reflects the compactness of its inverse operator in the stochastic direction and allows for spatially heterogeneous stochastic structure. This new concept of sparsity motivates a heterogeneous stochastic finite element method (HSFEM) framework for linear elliptic equations, which discretizes the equations using the heterogeneous coupling of spatial basis with local stochastic basis to exploit the local stochastic structure of the solution space. We also provide a sampling method to construct the local stochastic basis for this framework using randomized range finding techniques. The resulting HSFEM involves two stages and suits the multi-query setting: in the offline stage, the local stochastic structure of the solution space is identified; in the online stage, the equation can be efficiently solved for multiple forcing functions. An online error estimation and correction procedure through Monte Carlo sampling is given. Numerical results for several problems with high dimensional stochastic input are presented to demonstrate the efficiency of the HSFEM in the online stage.

  6. Optimization of stochastic discrete systems and control on complex networks computational networks

    CERN Document Server

    Lozovanu, Dmitrii

    2014-01-01

    This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors' new findings on determining the optimal solution of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. First, the book studies the finite state space of Markov processes and reviews the existing methods and algorithms for determining the main characteristics in Markov chains, before proposing new approaches based on dynamic programming and combinatorial methods. Chapter two is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter three develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. In closing, the book's final chapter is devoted to finite horizon stochastic con...

  7. Combinatorial chemistry

    DEFF Research Database (Denmark)

    Nielsen, John

    1994-01-01

    An overview of combinatorial chemistry is presented. Combinatorial chemistry, sometimes referred to as `irrational drug design,' involves the generation of molecular diversity. The resulting chemical library is then screened for biologically active compounds.

  8. Planning under uncertainty solving large-scale stochastic linear programs

    Energy Technology Data Exchange (ETDEWEB)

    Infanger, G. [Stanford Univ., CA (United States). Dept. of Operations Research]|[Technische Univ., Vienna (Austria). Inst. fuer Energiewirtschaft

    1992-12-01

    For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
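
    The problem class this record addresses, two-stage stochastic programs with recourse, can be illustrated by a toy newsvendor instance solved by brute-force enumeration. All unit costs and demands below are made-up numbers; realistic instances need the decomposition and sampling machinery the abstract describes:

```python
def expected_cost(x, demands, probs, c=1.0, penalty=4.0, salvage=0.5):
    """Two-stage newsvendor: buy x units now at unit cost c; after demand
    is revealed, the recourse buys shortages at `penalty` per unit and
    recovers `salvage` per leftover unit."""
    cost = c * x
    for d, p in zip(demands, probs):
        cost += p * (penalty * max(d - x, 0) - salvage * max(x - d, 0))
    return cost

def solve_two_stage(demands, probs, candidates):
    """Pick the first-stage decision minimizing expected total cost by
    enumeration (only sensible for toy instances)."""
    return min(candidates, key=lambda x: expected_cost(x, demands, probs))
```

    With equiprobable demands of 10 and 20, the optimum is to buy 20 units: because the shortage penalty dwarfs the salvage loss, hedging against the high-demand scenario is worth carrying surplus in the low one. This "here-and-now decision plus scenario-dependent recourse" structure is exactly what the decomposition methods in the record exploit at scale.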

  9. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.
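
    The two stages described above can be caricatured in a few lines: stage 1 ranks class sequences by a throughput proxy, and stage 2 fills the slots with specific flights. The greedy fill below is a stand-in for the paper's branch-and-bound integer program, and the separation values and flight data are invented for illustration:

```python
from itertools import permutations

def rank_sequences(classes, sep):
    """Stage 1: enumerate the distinct orderings of departure-class slots
    and rank them by total inter-departure separation (lower separation
    means higher throughput)."""
    def occupancy(seq):
        return sum(sep[a, b] for a, b in zip(seq, seq[1:]))
    return sorted(set(permutations(classes)), key=occupancy)

def populate(sequence, flights):
    """Stage 2: fill each class slot with the earliest-ready unassigned
    flight of that class; flights are (flight_id, ready_time, class)."""
    pool = sorted(flights, key=lambda f: f[1])
    schedule = []
    for cls in sequence:
        chosen = next(f for f in pool if f[2] == cls)
        pool.remove(chosen)
        schedule.append(chosen[0])
    return schedule
```

    With wake-separation rules that make a heavy ('H') behind anything cheap but anything behind a heavy expensive, the ranking pushes the heavy to the back, after which the slots are filled by readiness.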

  10. River water quality management considering agricultural return flows: application of a nonlinear two-stage stochastic fuzzy programming.

    Science.gov (United States)

    Tavakoli, Ali; Nikoo, Mohammad Reza; Kerachian, Reza; Soltani, Maryam

    2015-04-01

    In this paper, a new fuzzy methodology is developed to optimize water and waste load allocation (WWLA) in rivers under uncertainty. An interactive two-stage stochastic fuzzy programming (ITSFP) method is utilized to handle parameter uncertainties, which are expressed as fuzzy boundary intervals. An iterative linear programming (ILP) approach is also used for solving the nonlinear optimization model. To accurately consider the impacts of the water and waste load allocation strategies on the river water quality, a calibrated QUAL2Kw model is linked with the WWLA optimization model. The soil, water, atmosphere, and plant (SWAP) simulation model is utilized to determine the quantity and quality of each agricultural return flow. To control pollution loads of agricultural networks, it is assumed that part of each agricultural return flow can be diverted to an evaporation pond and another part can be stored in a detention pond. In detention ponds, contaminated water is exposed to solar radiation to disinfect pathogens. Results of applying the proposed methodology to the Dez River system in the southwestern region of Iran illustrate its effectiveness and applicability for water and waste load allocation in rivers. In the planning phase, this methodology can be used for estimating the capacities of the return flow diversion system and the evaporation and detention ponds.

  11. A production planning model considering uncertain demand using two-stage stochastic programming in a fresh vegetable supply chain context.

    Science.gov (United States)

    Mateo, Jordi; Pla, Lluis M; Solsona, Francesc; Pagès, Adela

    2016-01-01

    Production planning models are attracting increasing interest for use in the primary sector of the economy. The proposed model relies on the formulation of a location model representing a set of farms eligible to be selected by a grocery shop brand to supply local fresh products under seasonal contracts. The main aim is to minimize overall procurement costs and meet future demand. This kind of problem is rather common in fresh vegetable supply chains where producers are located in proximity either to processing plants or retailers. The proposed two-stage stochastic model determines which suppliers should be selected for production contracts to ensure high-quality products and minimal time from farm to table. Moreover, Lagrangian relaxation and parallel computing algorithms are proposed to solve these instances efficiently in reasonable computational time. The results obtained show computational gains from our algorithmic proposals compared with using the plain CPLEX solver. Furthermore, the results confirm the competitive advantages of using the proposed model for purchase managers in the fresh vegetable industry.

  12. An Efficient Robust Solution to the Two-Stage Stochastic Unit Commitment Problem

    DEFF Research Database (Denmark)

    Blanco, Ignacio; Morales González, Juan Miguel

    2017-01-01

    This paper proposes a reformulation of the scenario-based two-stage unit commitment problem under uncertainty that allows finding unit-commitment plans that perform reasonably well both in expectation and for the worst-case realization of the uncertainties. The proposed reformulation is based on part...

  13. Stochastic modelling of two-phase flows including phase change

    International Nuclear Information System (INIS)

    Hurisse, O.; Minier, J.P.

    2011-01-01

    Stochastic modelling has already been developed and applied to single-phase flows and incompressible two-phase flows. In this article, we propose an extension of this modelling approach to two-phase flows including phase change (e.g. steam-water flows). Two aspects are emphasised: a stochastic model accounting for phase transition and a modelling constraint arising from volume conservation. To illustrate the whole approach, some remarks are finally proposed for two-fluid models. (authors)

  14. Solving stochastic multiobjective vehicle routing problem using probabilistic metaheuristic

    Directory of Open Access Journals (Sweden)

    Gannouni Asmae

    2017-01-01

    closed form expression. This novel approach is based on combinatorial probability and can be incorporated in a multiobjective evolutionary algorithm. (ii) Provide probabilistic approaches to elitism and diversification in multiobjective evolutionary algorithms. Finally, the behavior of the resulting Probabilistic Multi-objective Evolutionary Algorithms (PrMOEAs) is empirically investigated on the multi-objective stochastic VRP problem.

  15. Applications of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2013-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science, and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area, dealing with fundamental notions and approaches as well as with several classical applications of combinatorial optimization. "Applications of Combinatorial Optimization" presents a number of the most common and well-known applications of combinatorial optimization.

  16. Concepts of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2014-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science, and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area, dealing with fundamental notions and approaches as well as with several classical applications of combinatorial optimization. Concepts of Combinatorial Optimization is divided into three parts: on the complexity of combinatorial optimization problems, presenting basics about worst-case and randomi...

  17. Effect of the Implicit Combinatorial Model on Combinatorial Reasoning in Secondary School Pupils.

    Science.gov (United States)

    Batanero, Carmen; And Others

    1997-01-01

    Elementary combinatorial problems may be classified into three different combinatorial models: (1) selection; (2) partition; and (3) distribution. The main goal of this research was to determine the effect of the implicit combinatorial model on pupils' combinatorial reasoning before and after instruction. Gives an analysis of variance of the…
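    The three implicit models can be told apart with a small counting sketch; the sets and sizes below are arbitrary illustrations, not taken from the study:

```python
from itertools import combinations, product
from math import comb

# Selection model: choose 2 objects from 4, order irrelevant -> C(4,2) = 6.
selections = list(combinations("abcd", 2))

# Distribution model: place 3 distinct balls into 2 distinct boxes -> 2**3 = 8.
distributions = list(product("XY", repeat=3))

# Partition model: split {a,b,c} into 2 non-empty unlabeled blocks
# -> Stirling number S(3,2) = 3.
def partitions_into_two(items):
    n = len(items)
    seen = set()
    for mask in range(1, 2 ** n - 1):  # non-empty proper subsets
        block = frozenset(items[i] for i in range(n) if mask >> i & 1)
        rest = frozenset(items) - block
        seen.add(frozenset({block, rest}))  # unordered pair of blocks
    return seen

print(len(selections), len(distributions), len(partitions_into_two("abc")))
# → 6 8 3
```

The same task statement can thus map to three different counts depending on which model a pupil implicitly adopts, which is the ambiguity the study investigates.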

  18. Implementation of equity in resource allocation for regional earthquake risk mitigation using two-stage stochastic programming.

    Science.gov (United States)

    Zolfaghari, Mohammad R; Peyghaleh, Elnaz

    2015-03-01

    This article presents a new methodology to implement the concept of equity in regional earthquake risk mitigation programs using an optimization framework. It presents a framework that could be used by decision makers (government and authorities) to structure budget allocation strategy toward different seismic risk mitigation measures, i.e., structural retrofitting for different building structural types in different locations and planning horizons. A two-stage stochastic model is developed here to seek optimal mitigation measures based on minimizing mitigation expenditures, reconstruction expenditures, and especially large losses in highly seismically active countries. To consider fairness in the distribution of financial resources among different groups of people, the equity concept is incorporated using constraints in the model formulation. These constraints limit inequity to the user-defined level to achieve the equity-efficiency tradeoff in the decision-making process. To present practical application of the proposed model, it is applied to a pilot area in Tehran, the capital city of Iran. Building stocks, structural vulnerability functions, and regional seismic hazard characteristics are incorporated to compile a probabilistic seismic risk model for the pilot area. Results illustrate the variation of mitigation expenditures by location and structural type for buildings. These expenditures are sensitive to the amount of available budget and equity consideration for the constant risk aversion. Most significantly, equity is more easily achieved if the budget is unlimited. Conversely, increasing equity where the budget is limited decreases the efficiency. The risk-return tradeoff, equity-reconstruction expenditures tradeoff, and variation of per-capita expected earthquake loss in different income classes are also presented. © 2015 Society for Risk Analysis.

  19. Stochastic programming and market equilibrium analysis of microgrids energy management systems

    International Nuclear Information System (INIS)

    Hu, Ming-Che; Lu, Su-Ying; Chen, Yen-Haw

    2016-01-01

    Microgrids facilitate optimum utilization of distributed renewable energy, provide better local energy supply, and reduce transmission losses and greenhouse gas emissions. Because uncertainty in energy demand affects the energy demand and supply system, the aim of this research is to develop a stochastic optimization model and its market equilibrium for microgrids in the electricity market. Therefore, a two-stage stochastic programming model for microgrids and a market competition model are derived in this paper. In the stochastic model, energy demand and supply uncertainties are considered. Furthermore, a case study of the stochastic model is conducted to simulate the uncertainties on the INER microgrids in the Taiwanese market. The optimal investment in generator and battery installation and the operating strategies are determined under energy demand and supply uncertainties for the INER microgrids. Optimal investment and operating strategies for the current INER microgrids are thus determined by the proposed two-stage stochastic model in the market. In addition, the trade-off between battery capacity and microgrid performance is investigated. Battery usage and power trading between the microgrids and the main grid are functions of the battery capacity. - Highlights: • A two-stage stochastic programming model is developed for microgrids. • Market equilibrium analysis of microgrids is conducted. • A case study of the stochastic model is conducted for INER microgrids.

  20. Design of problem-specific evolutionary algorithm/mixed-integer programming hybrids: two-stage stochastic integer programming applied to chemical batch scheduling

    Science.gov (United States)

    Urselmann, Maren; Emmerich, Michael T. M.; Till, Jochen; Sand, Guido; Engell, Sebastian

    2007-07-01

    Engineering optimization often deals with large, mixed-integer search spaces with a rigid structure due to the presence of a large number of constraints. Metaheuristics, such as evolutionary algorithms (EAs), are frequently suggested as solution algorithms in such cases. In order to exploit the full potential of these algorithms, it is important to choose an adequate representation of the search space and to integrate expert knowledge into the stochastic search operators, without adding unnecessary bias to the search. Moreover, hybridisation with mathematical programming techniques such as mixed-integer programming (MIP) based on a problem decomposition can be considered for improving algorithmic performance. In order to design problem-specific EAs it is desirable to have a set of design guidelines that specify properties of search operators and representations. Recently, a set of guidelines has been proposed that gives rise to so-called Metric-based EAs (MBEAs). Extended by the minimal moves mutation, they allow for a generalization of EAs with self-adaptive mutation strength in discrete search spaces. In this article, a problem-specific EA for a process engineering task is designed, following the MBEA guidelines and the minimal moves mutation. Against the background of the application, the usefulness of the design framework is discussed, and further extensions and corrections are proposed. As a case study, a two-stage stochastic programming problem in chemical batch process scheduling is considered. The algorithm design problem can be viewed as the choice of a hierarchical decision structure, where on different layers of the decision process symmetries and similarities can be exploited for the design of minimal moves. After a discussion of the design approach and its instantiation for the case study, the resulting problem-specific EA/MIP is compared to a straightforward application of a canonical EA/MIP and to a monolithic mathematical programming algorithm. In view of the...

  1. Tumor-targeting peptides from combinatorial libraries*

    Science.gov (United States)

    Liu, Ruiwu; Li, Xiaocen; Xiao, Wenwu; Lam, Kit S.

    2018-01-01

    Cancer is one of the major and leading causes of death worldwide. Two of the greatest challenges in fighting cancer are early detection and effective treatments with no or minimal side effects. Widespread use of targeted therapies and molecular imaging in clinics requires high-affinity, tumor-specific agents as effective targeting vehicles to deliver therapeutics and imaging probes to primary or metastatic tumor sites. Combinatorial libraries such as phage-display and one-bead one-compound (OBOC) peptide libraries are powerful approaches for discovering tumor-targeting peptides. This review gives an overview of the different combinatorial library technologies that have been used for the discovery of tumor-targeting peptides. Examples of tumor-targeting peptides identified with each combinatorial library method are discussed. Published tumor-targeting peptide ligands and their applications are also summarized by combinatorial library method and their corresponding binding receptors. PMID:27210583

  2. Distributing the computation in combinatorial optimization experiments over the cloud

    Directory of Open Access Journals (Sweden)

    Mario Brcic

    2017-12-01

    Combinatorial optimization is an area of great importance, since many real-world problems have discrete parameters which are part of the objective function to be optimized. Development of combinatorial optimization algorithms is guided by empirical study of candidate ideas and their performance over a wide range of settings or scenarios to infer general conclusions. The number of scenarios can be overwhelming, especially when modeling uncertainty in some of the problem’s parameters. Since the process is also iterative and many ideas and hypotheses may be tested, the execution time of each experiment plays an important role in efficiency and success. The structure of such experiments allows for significant execution-time improvement by distributing the computation. We focus on cloud computing as a cost-efficient solution in these circumstances. In this paper we present a system for validating and comparing stochastic combinatorial optimization algorithms. The system also deals with the selection of optimal settings for computational nodes and the number of nodes in terms of the performance-cost tradeoff. We present applications of the system to a new class of project scheduling problem. We show that we can optimize the selection over cloud service providers as one of the settings and, according to the model, this resulted in substantial cost savings while meeting the deadline.

  3. A Three-Stage Optimization Algorithm for the Stochastic Parallel Machine Scheduling Problem with Adjustable Production Rates

    Directory of Open Access Journals (Sweden)

    Rui Zhang

    2013-01-01

    We consider a parallel machine scheduling problem with random processing/setup times and adjustable production rates. The objective functions to be minimized consist of two parts: the first part is related to the due date performance (i.e., the tardiness of the jobs), while the second part is related to the setting of machine speeds. Therefore, the decision variables include both the production schedule (sequences of jobs) and the production rate of each machine. The optimization process, however, is significantly complicated by the stochastic factors in the manufacturing system. To address the difficulty, a simulation-based three-stage optimization framework is presented in this paper for high-quality robust solutions to the integrated scheduling problem. The first stage (crude optimization) is based on ordinal optimization theory, the second stage (finer optimization) is implemented with a metaheuristic called differential evolution, and the third stage (fine-tuning) is characterized by a perturbation-based local search. Finally, computational experiments are conducted to verify the effectiveness of the proposed approach. Sensitivity analysis and practical implications are also discussed.
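    The second stage of such a framework, differential evolution, can be sketched in its basic DE/rand/1/bin form. The objective function, bounds, and control parameters below are illustrative stand-ins, not the paper's scheduling problem:

```python
import random

random.seed(0)

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200):
    """Minimal DE/rand/1/bin sketch for minimizing a continuous objective f."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: difference of two random members added to a third.
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            # Binomial crossover with rate CR, then clip to the bounds.
            trial = [
                a[d] + F * (b[d] - c[d]) if random.random() < CR else pop[i][d]
                for d in range(dim)
            ]
            trial = [min(max(x, lo), hi) for x, (lo, hi) in zip(trial, bounds)]
            # Greedy selection: keep the trial if it is no worse.
            if f(trial) <= f(pop[i]):
                pop[i] = trial
    return min(pop, key=f)

# Toy objective: the 3-D sphere function, minimum 0 at the origin.
best = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
print(best)
```

In a scheduling setting the continuous vector would encode machine speeds (and a decoding of job priorities), with f evaluated by simulation as in the paper's framework.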

  4. A primal-dual decomposition based interior point approach to two-stage stochastic linear programming

    NARCIS (Netherlands)

    A.B. Berkelaar (Arjan); C.L. Dert (Cees); K.P.B. Oldenkamp; S. Zhang (Shuzhong)

    1999-01-01

    Decision making under uncertainty is a challenge faced by many decision makers. Stochastic programming is a major tool developed to deal with optimization under uncertainty that has found applications in, e.g., finance, such as asset-liability and bond-portfolio management.

  5. Two Stochastic Resonances Induced by Two Different Multiplicative Telegraphic Noises for an Electric System

    International Nuclear Information System (INIS)

    Li Jinghui

    2008-01-01

    In this paper, an electric system with two dichotomous resistors is investigated. It is shown that this system can display two stochastic resonances, namely in the amplitude of the periodic response as a function of the strength of each of the two dichotomous resistors. In the limiting cases of Gaussian white noise and shot white noise (i.e., when the two noises are both Gaussian white noise or both shot white noise), no resonance phenomena appear. By further study, we find that when the system has three or more multiplicative telegraphic noises, there are three or more stochastic resonances.

  6. A multi-stage stochastic program for supply chain network redesign problem with price-dependent uncertain demands

    DEFF Research Database (Denmark)

    Fattahi, Mohammad; Govindan, Kannan; Keyvanshokooh, Esmaeil

    2018-01-01

    In this paper, we address a multi-period supply chain network redesign problem in which customer zones have price-dependent stochastic demand for multiple products. A novel multi-stage stochastic program is proposed to simultaneously make tactical decisions, including products' prices, and strategic redesign decisions. Existing uncertainty in potential demands of customer zones is modeled through a finite set of scenarios, described in the form of a scenario tree. The scenarios are generated using a Latin Hypercube Sampling method and then a forward scenario construction technique is employed...

  7. Stochastic Real-World Drive Cycle Generation Based on a Two Stage Markov Chain Approach

    NARCIS (Netherlands)

    Balau, A.E.; Kooijman, D.; Vazquez Rodarte, I.; Ligterink, N.

    2015-01-01

    This paper presents a methodology and tool that stochastically generates drive cycles based on measured data, with the purpose of testing and benchmarking light duty vehicles in a simulation environment or on a test-bench. The WLTP database, containing real world driving measurements, was used as
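    A first-order version of such a Markov-chain cycle generator can be sketched as follows. The "measured" speed trace and its states are toy stand-ins for the WLTP data, and the paper's actual method uses a two-stage chain:

```python
import random
from collections import defaultdict

random.seed(42)

# Toy "measured" speed trace (km/h), standing in for real WLTP-style data.
measured = [0, 0, 10, 20, 30, 30, 20, 10, 0, 0, 10, 30, 50, 50, 30, 10, 0]

# Estimate first-order Markov transition probabilities from the trace.
counts = defaultdict(lambda: defaultdict(int))
for s, t in zip(measured, measured[1:]):
    counts[s][t] += 1
transitions = {
    s: [(t, c / sum(nxt.values())) for t, c in nxt.items()]
    for s, nxt in counts.items()
}

def generate_cycle(start, length):
    """Sample a synthetic drive cycle by walking the estimated chain."""
    state, cycle = start, [start]
    for _ in range(length - 1):
        targets, weights = zip(*transitions[state])
        state = random.choices(targets, weights=weights)[0]
        cycle.append(state)
    return cycle

cycle = generate_cycle(0, 30)
print(cycle)
```

A two-stage variant, as in the paper, would first sample a driving mode (e.g. urban/rural/motorway) and then walk a mode-specific speed chain, which preserves longer-range structure that a single first-order chain misses.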

  8. Combinatorial Models for Assembly and Decomposition of Products

    OpenAIRE

    A. N. Bojko

    2015-01-01

    The paper discusses the most popular combinatorial models that are used for the synthesis of design solutions at the stage of assembly process flow preparation. It shows that, while assembling the product, the relations of parts can be represented as a structure of preferences, which is formed on the basis of objective design restrictions imposed at the product design stage. This structure is a binary preference relation, a pre-order. Its symmetrical part is an equivalence and describes the...

  9. Two-dimensional combinatorial screening enables the bottom-up design of a microRNA-10b inhibitor.

    Science.gov (United States)

    Velagapudi, Sai Pradeep; Disney, Matthew D

    2014-03-21

    The RNA motifs that bind guanidinylated kanamycin A (G Kan A) and guanidinylated neomycin B (G Neo B) were identified via two-dimensional combinatorial screening (2DCS). The results of these studies enabled the "bottom-up" design of a small molecule inhibitor of oncogenic microRNA-10b.

  10. Combinatorial Libraries of Bis-Heterocyclic Compounds with Skeletal Diversity

    OpenAIRE

    Soural, Miroslav; Bouillon, Isabelle; Krchňák, Viktor

    2008-01-01

    Combinatorial solid-phase synthesis of bis-heterocyclic compounds, characterized by the presence of two heterocyclic cores connected by a spacer of variable length/structure, provided structurally heterogeneous libraries with skeletal diversity. Both heterocyclic rings were assembled on resin in a combinatorial fashion.

  11. Combinatorial Libraries of Bis-Heterocyclic Compounds with Skeletal Diversity

    Science.gov (United States)

    Soural, Miroslav; Bouillon, Isabelle; Krchňák, Viktor

    2009-01-01

    Combinatorial solid-phase synthesis of bis-heterocyclic compounds, characterized by the presence of two heterocyclic cores connected by a spacer of variable length/structure, provided structurally heterogeneous libraries with skeletal diversity. Both heterocyclic rings were assembled on resin in a combinatorial fashion. PMID:18811208

  12. Combinatorial commutative algebra

    CERN Document Server

    Miller, Ezra

    2005-01-01

    Offers an introduction to combinatorial commutative algebra, focusing on combinatorial techniques for multigraded polynomial rings, semigroup algebras, and determinantal rings. The chapters in this work cover topics ranging from homological invariants of monomial ideals and their polyhedral resolutions to tools for studying algebraic varieties.

  13. Dynamic combinatorial chemistry

    NARCIS (Netherlands)

    Otto, Sijbren; Furlan, Ricardo L.E.; Sanders, Jeremy K.M.

    2002-01-01

    A combinatorial library that responds to its target by increasing the concentration of strong binders at the expense of weak binders sounds ideal. Dynamic combinatorial chemistry has the potential to achieve exactly this. In this review, we will highlight the unique features that distinguish dynamic

  14. Introduction to stochastic dynamic programming

    CERN Document Server

    Ross, Sheldon M; Lukacs, E

    1983-01-01

    Introduction to Stochastic Dynamic Programming presents the basic theory and examines the scope of applications of stochastic dynamic programming. The book begins with a chapter on various finite-stage models, illustrating the wide range of applications of stochastic dynamic programming. Subsequent chapters study infinite-stage models: discounting future returns, minimizing nonnegative costs, maximizing nonnegative returns, and maximizing the long-run average return. Each of these chapters first considers whether an optimal policy need exist (providing counterexamples where appropriate) and the...
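    The finite-stage backward-induction idea can be illustrated on a tiny optimal-stopping example: accept or reject i.i.d. offers over T stages. The offer distribution and horizon below are invented for illustration, not taken from the book:

```python
# Finite-stage stochastic dynamic programming by backward induction.
# At each stage an i.i.d. offer arrives; accept it or continue.
offers = [(0.5, 1.0), (0.3, 2.0), (0.2, 3.0)]  # (probability, offer value)
T = 5                                          # number of stages

# V[t] = expected value with t stages remaining, before seeing the offer.
# Bellman recursion: V[t] = E[max(offer, V[t-1])], with V[0] = 0.
V = [0.0] * (T + 1)
for t in range(1, T + 1):
    V[t] = sum(p * max(x, V[t - 1]) for p, x in offers)

print([round(v, 4) for v in V])  # → [0.0, 1.7, 2.05, 2.24, 2.392, 2.5136]
```

The induced policy is a threshold rule: with t stages remaining, accept any offer exceeding V[t-1], which is why the value is monotone in the remaining horizon.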

  15. Simultaneous Disulfide and Boronic Acid Ester Exchange in Dynamic Combinatorial Libraries

    Directory of Open Access Journals (Sweden)

    Sanna L. Diemer

    2015-09-01

    Dynamic combinatorial chemistry has emerged as a promising tool for the discovery of complex receptors in supramolecular chemistry. At the heart of dynamic combinatorial chemistry are the reversible reactions that enable the exchange of building blocks between library members in dynamic combinatorial libraries (DCLs), ensuring thermodynamic control over the system. If more than one reversible reaction operates in a single dynamic combinatorial library, the complexity of the system increases dramatically, and so do its possible applications. One can imagine two reversible reactions that operate simultaneously, or two reversible reactions that operate independently. Both these scenarios have advantages and disadvantages. In this contribution, we show how disulfide exchange and boronic ester transesterification can operate simultaneously in dynamic combinatorial libraries under appropriate conditions. We describe the detailed studies necessary to establish suitable reaction conditions and highlight the analytical techniques appropriate to study this type of system.

  16. The Combinatorial Rigidity Conjecture is False for Cubic Polynomials

    DEFF Research Database (Denmark)

    Henriksen, Christian

    2003-01-01

    We show that there exist two cubic polynomials with connected Julia sets which are combinatorially equivalent but not topologically conjugate on their Julia sets. This disproves a conjecture by McMullen from 1995.

  17. Relativity in Combinatorial Gravitational Fields

    Directory of Open Access Journals (Sweden)

    Mao Linfan

    2010-04-01

    A combinatorial spacetime $(\mathscr{C}_G|\overline{t})$ is a smoothly combinatorial manifold $\mathscr{C}$ underlying a graph $G$ evolving on a time vector $\overline{t}$. As we know, Einstein's general relativity is suitable for use only in one spacetime. What is its disguise in a combinatorial spacetime? Applying combinatorial Riemannian geometry enables us to present a combinatorial spacetime model for the Universe and to suggest a generalized Einstein gravitational equation in such a model. For finding its solutions, a generalized relativity principle, called the projective principle, is proposed, i.e., a physics law in a combinatorial spacetime is invariant under a projection onto a subspace; spherically symmetric multi-solutions of the generalized Einstein gravitational equations in vacuum or charged bodies are then found. We also consider the geometrical structure of such solutions with physical formations, and conclude that an ultimate theory for the Universe may be established if all such spacetimes lie in $\mathbf{R}^3$. Otherwise, our theory is only an approximate theory, and endless forever.

  18. Combinatorial Nano-Bio Interfaces.

    Science.gov (United States)

    Cai, Pingqiang; Zhang, Xiaoqian; Wang, Ming; Wu, Yun-Long; Chen, Xiaodong

    2018-06-08

    Nano-bio interfaces are emerging from the convergence of engineered nanomaterials and biological entities. Despite rapid growth, clinical translation of biomedical nanomaterials is heavily compromised by the lack of comprehensive understanding of biophysicochemical interactions at nano-bio interfaces. In the past decade, a few investigations have adopted a combinatorial approach toward decoding nano-bio interfaces. Combinatorial nano-bio interfaces comprise the design of nanocombinatorial libraries and high-throughput bioevaluation. In this Perspective, we address challenges in combinatorial nano-bio interfaces and call for multiparametric nanocombinatorics (composition, morphology, mechanics, surface chemistry), multiscale bioevaluation (biomolecules, organelles, cells, tissues/organs), and the recruitment of computational modeling and artificial intelligence. Leveraging combinatorial nano-bio interfaces will shed light on precision nanomedicine and its potential applications.

  19. Stochastic volatility and stochastic leverage

    DEFF Research Database (Denmark)

    Veraart, Almut; Veraart, Luitgard A. M.

    This paper proposes the new concept of stochastic leverage in stochastic volatility models. Stochastic leverage refers to a stochastic process which replaces the classical constant correlation parameter between the asset return and the stochastic volatility process. We provide a systematic treatment of stochastic leverage and propose to model the stochastic leverage effect explicitly, e.g. by means of a linear transformation of a Jacobi process. Such models are both analytically tractable and allow for a direct economic interpretation. In particular, we propose two new stochastic volatility models which allow for a stochastic leverage effect: the generalised Heston model and the generalised Barndorff-Nielsen & Shephard model. We investigate the impact of a stochastic leverage effect in the risk neutral world by focusing on implied volatilities generated by option prices derived from our new...

  20. Two-stage stochastic day-ahead optimal resource scheduling in a distribution network with intensive use of distributed energy resources

    DEFF Research Database (Denmark)

    Sousa, Tiago; Ghazvini, Mohammad Ali Fotouhi; Morais, Hugo

    2015-01-01

    The integration of renewable sources and electric vehicles will introduce new uncertainties into optimal resource scheduling, namely at the distribution level. These uncertainties originate mainly from the power generated by renewable sources and from the electric vehicles' charging requirements. This paper proposes a two-stage stochastic programming approach to solve the day-ahead optimal resource scheduling problem. The case study considers a 33-bus distribution network with 66 distributed generation units and 1000 electric vehicles.

  1. Threshold Dynamics of a Stochastic Chemostat Model with Two Nutrients and One Microorganism

    Directory of Open Access Journals (Sweden)

    Jian Zhang

    2017-01-01

    A new stochastic chemostat model with two substitutable nutrients and one microorganism is proposed and investigated. Firstly, for the corresponding deterministic model, the threshold for extinction and permanence of the microorganism is obtained by analyzing the stability of the equilibria. Then, for the stochastic model, the threshold of the stochastic chemostat for extinction and permanence of the microorganism is explored. The difference between the thresholds of the deterministic and stochastic models shows that a large stochastic disturbance can affect the persistence of the microorganism and is harmful to its cultivation. To illustrate this phenomenon, we give some computer simulations with different intensities of stochastic noise disturbance.

  2. Integer and combinatorial optimization

    CERN Document Server

    Nemhauser, George L

    1999-01-01

    Rave reviews for INTEGER AND COMBINATORIAL OPTIMIZATION: "This book provides an excellent introduction and survey of traditional fields of combinatorial optimization . . . It is indeed one of the best and most complete texts on combinatorial optimization . . . available. [And] with more than 700 entries, [it] has quite an exhaustive reference list." -Optima. "A unifying approach to optimization problems is to formulate them like linear programming problems, while restricting some or all of the variables to the integers. This book is an encyclopedic resource for such f...

  3. Digital hardware implementation of a stochastic two-dimensional neuron model.

    Science.gov (United States)

    Grassia, F; Kohno, T; Levi, T

    2016-11-01

    This study explores the feasibility of stochastic neuron simulation in digital systems (FPGA), which realizes an implementation of a two-dimensional neuron model. The stochasticity is added by a source of current noise in the silicon neuron using an Ornstein-Uhlenbeck process. This approach uses digital computation to emulate individual neuron behavior using fixed point arithmetic operation. The neuron model's computations are performed in arithmetic pipelines. It was designed in VHDL language and simulated prior to mapping in the FPGA. The experimental results confirmed the validity of the developed stochastic FPGA implementation, which makes the implementation of the silicon neuron more biologically plausible for future hybrid experiments. Copyright © 2017 Elsevier Ltd. All rights reserved.
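    The Ornstein-Uhlenbeck current-noise source can be sketched with a simple Euler-Maruyama update in floating point; the parameter values below are illustrative, and the paper's implementation uses fixed-point arithmetic pipelines on the FPGA:

```python
import math
import random

random.seed(1)

# Euler-Maruyama discretization of an Ornstein-Uhlenbeck noise current:
#   dI = theta * (mu - I) dt + sigma * dW
# (theta, mu, sigma, dt are illustrative, not taken from the paper).
theta, mu, sigma, dt = 1.0, 0.0, 0.5, 1e-3
n_steps = 100_000

I, path = mu, []
sq_dt = math.sqrt(dt)
for _ in range(n_steps):
    I += theta * (mu - I) * dt + sigma * sq_dt * random.gauss(0.0, 1.0)
    path.append(I)

mean = sum(path) / len(path)
var = sum((x - mean) ** 2 for x in path) / len(path)
# Stationary statistics: mean -> mu, variance -> sigma**2 / (2 * theta) = 0.125.
print(round(mean, 3), round(var, 3))
```

On hardware the Gaussian draw is typically replaced by a pseudo-random generator feeding the same update in fixed point, which is what makes the silicon neuron's noise biologically plausible yet cheap to compute.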

  4. Stochastic reactive power market with volatility of wind power considering voltage security

    International Nuclear Information System (INIS)

    Kargarian, A.; Raoofat, M.

    2011-01-01

    While wind power generation is growing rapidly around the globe, its stochastic nature affects system operation in many different aspects. In this paper, the impact of wind power volatility on the reactive power market is taken into account. The paper presents a novel stochastic method for optimal reactive power market clearing considering voltage security and the volatile nature of the wind. The proposed optimization algorithm uses a multiobjective nonlinear programming technique to minimize market payment and simultaneously maximize the voltage security margin. Considering a set of probable wind speeds, in the first stage the proposed algorithm seeks to minimize the expected system payment, which is the sum of the reactive power payment and the transmission loss cost. The objective of the second stage is maximization of the expected voltage security margin to increase system loadability and security. Finally, in the last stage, a multiobjective function is presented to schedule the stochastic reactive power market using the results of the two previous stages. The proposed algorithm is applied to the IEEE 14-bus test system. As a benchmark, the Monte Carlo simulation method is utilized to simulate the actual market over a given period of time to evaluate the results of the proposed algorithm, and satisfactory results are achieved. -- Highlights: • The paper proposes a new algorithm for stochastic reactive power market clearing. • The stochastic nature of the wind, which impacts the system operation and market clearing process, is taken into account. • The paper suggests an expected voltage stability margin and optimizes it in conjunction with expected total market payment. • To clear the market with the two mentioned objective functions, a three-stage multiobjective nonlinear programming approach is implemented. • Also, a simple method is suggested to determine a suitable priority coefficient between the two individual objective functions.

  5. Accessing Specific Peptide Recognition by Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Li, Ming

    Molecular recognition is at the basis of all processes of life and plays a central role in many biological processes, such as protein folding, the structural organization of cells and organelles, signal transduction, and the immune response. Hence, my PhD project is entitled “Accessing Specific Peptide Recognition by Combinatorial Chemistry”. Molecular recognition is a specific interaction between two or more molecules through noncovalent bonding, such as hydrogen bonding, metal coordination, van der Waals forces, π−π, hydrophobic, or electrostatic interactions. The association involves kinetic... Combinatorial chemistry was invented in the 1980s based on observation of functional aspects of the adaptive immune system. It was employed for drug development and optimization in conjunction with high-throughput synthesis and screening (chapter 2). Combinatorial chemistry is able to rapidly produce many thousands...

  6. Some experience of shielding calculations by combinatorial method

    International Nuclear Information System (INIS)

    Korobejnikov, V.V.; Oussanov, V.I.

    1996-01-01

    Some aspects of shielding calculations for compound systems by a combinatorial approach are discussed. The effectiveness of such an approach rests on a fundamental characteristic of a compound system: if some element of the system has mathematical or physical properties favorable for calculation, these properties can be exploited by a combinatorial approach but are lost when the system is calculated as a whole by a direct approach. The combinatorial technique applied is well known: the compound system is split into two or more auxiliary subsystems, so that the calculation of each of them is a simpler problem than the original one (or at least a soluble problem if the original is not). Each subsystem is calculated by a suitable method and code, with coupling made through boundary conditions or a boundary source. Special consideration is given in the paper to the combinatorial analysis of fast reactor shielding and to the testing of the results obtained. (author)

  7. Two is better than one; toward a rational design of combinatorial therapy.

    Science.gov (United States)

    Chen, Sheng-Hong; Lahav, Galit

    2016-12-01

    Drug combination is an appealing strategy for combating the heterogeneity of tumors and the evolution of drug resistance. However, the rationale underlying combinatorial therapy is often not well established, due to a lack of understanding of the specific pathways responding to the drugs and of their temporal dynamics following each treatment. Here we present several emerging trends in harnessing properties of biological systems for the optimal design of drug combinations, including the type of drugs, specific concentrations, sequence of addition and the temporal schedule of treatments. We highlight recent studies showing different approaches for efficient design of drug combinations, including single-cell signaling dynamics, adaptation and pathway crosstalk. Finally, we discuss novel and feasible approaches that can facilitate the optimal design of combinatorial therapy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. On an extension of a combinatorial identity

    Indian Academy of Sciences (India)

    to an infinite family of 4-way combinatorial identities. In some particular cases we get even 5-way combinatorial identities, which give us four new combinatorial versions of Göllnitz–Gordon identities. Keywords. n-Color partitions; lattice paths; Frobenius partitions; Göllnitz–Gordon identities; combinatorial interpretations.

  9. A stochastic programming approach to manufacturing flow control

    OpenAIRE

    Haurie, Alain; Moresino, Francesco

    2012-01-01

    This paper proposes and tests an approximation of the solution of a class of piecewise deterministic control problems, typically used in the modeling of manufacturing flow processes. This approximation uses a stochastic programming approach on a suitably discretized and sampled system. The method proceeds through two stages: (i) the Hamilton-Jacobi-Bellman (HJB) dynamic programming equations for the finite horizon continuous time stochastic control problem are discretized over a set of sample...

  10. Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.

    Science.gov (United States)

    Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi

    2016-06-01

    Cells accomplish the process of fate decisions and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether the contribution of combinatorial regulation to the state transition is better than that of a single perturbation and, if so, what the optimal combination strategy is, seems to be a significant issue from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from the known methods, the bifurcation-based approach depends only on stable state responses to stimuli because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.
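
    The bifurcation-based viewpoint can be illustrated by counting equilibria of a one-dimensional toy switch as a perturbation parameter s varies; the model below (saturating positive feedback minus linear decay) is an illustrative choice, not the paper's CREB model, and root counting by grid sign changes is only a sketch of proper numerical continuation.

```python
def stable_state_count(s, b=4.0, lo=-2.005, hi=12.005, step=0.01):
    """Count equilibria of the toy system dx/dt = s + b*x^2/(1+x^2) - x
    by locating sign changes of the right-hand side on a grid.  Three
    equilibria (two stable, one unstable) indicate bistability; one
    equilibrium indicates a single stable state."""
    def f(x):
        return s + b * x * x / (1.0 + x * x) - x
    count = 0
    x = lo
    prev = f(x)
    while x < hi:
        x += step
        cur = f(x)
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

# Sweeping the stimulus s: the jump from 3 equilibria to 1 marks a
# saddle-node bifurcation, i.e. the perturbation strength that forces
# a state transition between stable branches.
for s in (0.0, 5.0):
    print(s, stable_state_count(s))
```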

  11. Nonparametric combinatorial sequence models.

    Science.gov (United States)

    Wauthier, Fabian L; Jordan, Michael I; Jojic, Nebojsa

    2011-11-01

    This work considers biological sequences that exhibit combinatorial structures in their composition: groups of positions of the aligned sequences are "linked" and covary as one unit across sequences. If multiple such groups exist, complex interactions can emerge between them. Sequences of this kind arise frequently in biology but methodologies for analyzing them are still being developed. This article presents a nonparametric prior on sequences which allows combinatorial structures to emerge and which induces a posterior distribution over factorized sequence representations. We carry out experiments on three biological sequence families which indicate that combinatorial structures are indeed present and that combinatorial sequence models can more succinctly describe them than simpler mixture models. We conclude with an application to MHC binding prediction which highlights the utility of the posterior distribution over sequence representations induced by the prior. By integrating out the posterior, our method compares favorably to leading binding predictors.

  12. Combinatorial synthesis of ceramic materials

    Science.gov (United States)

    Lauf, Robert J.; Walls, Claudia A.; Boatner, Lynn A.

    2006-11-14

    A combinatorial library includes a gelcast substrate defining a plurality of cavities in at least one surface thereof; and a plurality of gelcast test materials in the cavities, at least two of the test materials differing from the substrate in at least one compositional characteristic, the two test materials differing from each other in at least one compositional characteristic.

  13. Stochastic optimization of a multi-feedstock lignocellulosic-based bioethanol supply chain under multiple uncertainties

    International Nuclear Information System (INIS)

    Osmani, Atif; Zhang, Jun

    2013-01-01

    An integrated multi-feedstock (i.e. switchgrass and crop residue) lignocellulosic-based bioethanol supply chain is studied under jointly occurring uncertainties in switchgrass yield, crop residue purchase price, bioethanol demand and sales price. A two-stage stochastic mathematical model is proposed to maximize expected profit by optimizing the strategic and tactical decisions. A case study based on ND (North Dakota) state in the U.S. demonstrates that in a stochastic environment it is cost effective to meet 100% of ND's annual gasoline demand from bioethanol by using switchgrass as a primary and crop residue as a secondary biomass feedstock. Although results show that the financial performance is degraded as variability of the uncertain parameters increases, the proposed stochastic model increasingly outperforms the deterministic model under uncertainties. The locations of biorefineries (i.e. first-stage integer variables) are insensitive to the uncertainties. Sensitivity analysis shows that “mean” value of stochastic parameters has a significant impact on the expected profit and optimal values of first-stage continuous variables. Increase in level of mean ethanol demand and mean sale price results in higher bioethanol production. When mean switchgrass yield is at low level and mean crop residue price is at high level, all the available marginal land is used for switchgrass cultivation. - Highlights: • Two-stage stochastic MILP model for maximizing profit of a multi-feedstock lignocellulosic-based bioethanol supply chain. • Multiple uncertainties in switchgrass yield, crop residue purchase price, bioethanol demand, and bioethanol sale price. • Proposed stochastic model outperforms the traditional deterministic model under uncertainties. • Stochastic parameters significantly affect marginal land allocation for switchgrass cultivation and bioethanol production. • Location of biorefineries is found to be insensitive to the stochastic environment
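
    The two-stage structure described above (scenario-independent strategic decision first, recourse after uncertainty is revealed) can be sketched with a toy capacity-then-sales model; all prices, costs and scenarios are illustrative and not taken from the North Dakota case study.

```python
# Toy two-stage stochastic program: choose capacity (first stage, fixed
# before uncertainty resolves), then observe demand and sell (second
# stage, recourse).  Numbers are illustrative only.
scenarios = [  # (probability, demand)
    (0.3, 60.0),
    (0.5, 100.0),
    (0.2, 140.0),
]
price, unit_capacity_cost = 5.0, 2.0

def expected_profit(capacity):
    # Second stage: in each scenario the recourse is simply to sell
    # min(capacity, demand); the first-stage cost is paid up front.
    revenue = sum(p * price * min(capacity, d) for p, d in scenarios)
    return revenue - unit_capacity_cost * capacity

# First stage: pick the capacity maximizing expected profit (here by
# enumeration; a real model would hand the deterministic equivalent
# to a MILP solver).
best = max(range(0, 201, 10), key=expected_profit)
print(best, expected_profit(best))  # 100 240.0
```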

  14. Application of computer assisted combinatorial chemistry in antiviral, antimalarial and anticancer agents design

    Science.gov (United States)

    Burello, E.; Bologa, C.; Frecer, V.; Miertus, S.

    Combinatorial chemistry and technologies have been developed to a stage where synthetic schemes are available for generation of a large variety of organic molecules. The innovative concept of combinatorial design assumes that screening of a large and diverse library of compounds will increase the probability of finding an active analogue among the compounds tested. Since the rate at which libraries are screened for activity currently constitutes a limitation to the use of combinatorial technologies, it is important to be selective about the number of compounds to be synthesized. Early experience with combinatorial chemistry indicated that chemical diversity alone did not result in a significant increase in the number of generated lead compounds. Emphasis has therefore been increasingly put on the use of computer assisted combinatorial chemical techniques. Computational methods are valuable in the design of virtual libraries of molecular models. Selection strategies based on computed physicochemical properties of the models or of a target compound are introduced to reduce the time and costs of library synthesis and screening. In addition, computational structure-based library focusing methods can be used to perform in silico screening of the activity of compounds against a target receptor by docking the ligands into the receptor model. Three case studies are discussed dealing with the design of targeted combinatorial libraries of inhibitors of HIV-1 protease, P. falciparum plasmepsin and human urokinase as potential antiviral, antimalarial and anticancer drugs. These illustrate library focusing strategies.

  15. Runway Operations Planning: A Two-Stage Solution Methodology

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, Runway Operations Planning (ROP) is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans was previously approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, Air Traffic Control (ATC) constraints) at the same time. Since, however, at any given point in time, there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces Runway Operations Planning (ROP) as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the Runway Operations Planning (ROP) problem. Focus is given specifically to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the

  16. An Empirical Application of a Two-Factor Model of Stochastic Volatility

    Czech Academy of Sciences Publication Activity Database

    Kuchyňka, Alexandr

    2008-01-01

    Roč. 17, č. 3 (2008), s. 243-253 ISSN 1210-0455 R&D Projects: GA ČR GA402/07/1113; GA MŠk(CZ) LC06075 Institutional research plan: CEZ:AV0Z10750506 Keywords : stochastic volatility * Kalman filter Subject RIV: AH - Economics http://library.utia.cas.cz/separaty/2008/E/kuchynka-an empirical application of a two-factor model of stochastic volatility.pdf

  17. An Investigation into Post-Secondary Students' Understanding of Combinatorial Questions

    Science.gov (United States)

    Bulone, Vincent William

    2017-01-01

    The purpose of this dissertation was to study aspects of how post-secondary students understand combinatorial problems. Within this dissertation, I considered understanding through two different lenses: i) student connections to previous problems; and ii) common combinatorial distinctions such as ordered versus unordered and repetitive versus…

  18. Two projects in theoretical neuroscience: A convolution-based metric for neural membrane potentials and a combinatorial connectionist semantic network method

    Science.gov (United States)

    Evans, Garrett Nolan

    In this work, I present two projects that both contribute to the aim of discovering how intelligence manifests in the brain. The first project is a method for analyzing recorded neural signals, which takes the form of a convolution-based metric on neural membrane potential recordings. Relying only on integral and algebraic operations, the metric compares the timing and number of spikes within recordings as well as the recordings' subthreshold features: summarizing differences in these with a single "distance" between the recordings. Like van Rossum's (2001) metric for spike trains, the metric is based on a convolution operation that it performs on the input data. The kernel used for the convolution is carefully chosen such that it produces a desirable frequency space response and, unlike van Rossum's kernel, causes the metric to be first order both in differences between nearby spike times and in differences between same-time membrane potential values: an important trait. The second project is a combinatorial syntax method for connectionist semantic network encoding. Combinatorial syntax has been a point on which those who support a symbol-processing view of intelligent processing and those who favor a connectionist view have had difficulty seeing eye-to-eye. Symbol-processing theorists have persuasively argued that combinatorial syntax is necessary for certain intelligent mental operations, such as reasoning by analogy. Connectionists have focused on the versatility and adaptability offered by self-organizing networks of simple processing units. With this project, I show that there is a way to reconcile the two perspectives and to ascribe a combinatorial syntax to a connectionist network. The critical principle is to interpret nodes, or units, in the connectionist network as bound integrations of the interpretations for nodes that they share links with. 
Nodes need not correspond exactly to neurons and may correspond instead to distributed sets, or assemblies, of
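
    The convolution-based metric described above generalizes van Rossum's spike-train distance; for a causal exponential kernel that distance has a convenient closed form, sketched below. The kernel choice and time constant are illustrative assumptions, not the thesis's exact kernel (which also handles subthreshold features).

```python
import math

def van_rossum_distance(train_a, train_b, tau=1.0):
    """Distance between two spike trains after convolving each with the
    causal exponential kernel exp(-t/tau), t >= 0.  For this kernel the
    L2 distance of the filtered signals reduces to sums over spike pairs."""
    def corr(u, v):
        return sum(math.exp(-abs(ti - tj) / tau) for ti in u for tj in v)
    d2 = (tau / 2.0) * (corr(train_a, train_a) + corr(train_b, train_b)
                        - 2.0 * corr(train_a, train_b))
    return math.sqrt(max(d2, 0.0))

# Identical trains are at distance 0; displacing one spike increases the
# distance monotonically (saturating for large shifts).
print(van_rossum_distance([1.0, 2.0], [1.0, 2.0]))  # 0.0
print(van_rossum_distance([1.0, 2.0], [1.0, 2.5]))
print(van_rossum_distance([1.0, 2.0], [1.0, 3.0]))
```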

  19. Stochastic search, optimization and regression with energy applications

    Science.gov (United States)

    Hannah, Lauren A.

    Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools to economically evaluate those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values (which depend on the selected portfolio) to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet Process mixtures of Generalized Linear Models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples for when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and categorical response. We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression
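
    The knapsack-linear-program idea can be illustrated by the classic greedy rounding of the knapsack LP relaxation: rank projects by value-to-cost ratio and take them until the budget is exhausted. This is a simplified stand-in; the thesis's heuristic recomputes portfolio-dependent marginal project values, which this sketch omits, and greedy selection is not optimal in general.

```python
def greedy_knapsack(projects, budget):
    """Greedy 0-1 selection by value/cost ratio, the rounding suggested
    by the knapsack LP relaxation.  'projects' is a list of
    (name, value, cost) tuples; names and numbers below are hypothetical."""
    chosen, spent, total = [], 0.0, 0.0
    for name, value, cost in sorted(projects, key=lambda p: p[1] / p[2],
                                    reverse=True):
        if spent + cost <= budget:  # take the project if it still fits
            chosen.append(name)
            spent += cost
            total += value
    return chosen, total

portfolio = [("fuel_cell", 60.0, 5.0), ("storage", 50.0, 5.0),
             ("solar", 30.0, 6.0)]
print(greedy_knapsack(portfolio, budget=10.0))  # (['fuel_cell', 'storage'], 110.0)
```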

  20. Domain decomposition method of stochastic PDEs: a two-level scalable preconditioner

    International Nuclear Information System (INIS)

    Subber, Waad; Sarkar, Abhijit

    2012-01-01

    For uncertainty quantification in many practical engineering problems, the stochastic finite element method (SFEM) may be computationally challenging. In SFEM, the size of the algebraic linear system grows rapidly with the spatial mesh resolution and the order of the stochastic dimension. In this paper, we describe a non-overlapping domain decomposition method, namely the iterative substructuring method, to tackle the large-scale linear system arising in the SFEM. The SFEM is based on domain decomposition in the geometric space and a polynomial chaos expansion in the probabilistic space. In particular, a two-level scalable preconditioner is proposed for the iterative solver of the interface problem for the stochastic systems. The preconditioner is equipped with a coarse problem which globally connects the subdomains both in the geometric and probabilistic spaces via their corner nodes. This coarse problem propagates the information quickly across the subdomains, leading to a scalable preconditioner. For numerical illustrations, a two-dimensional stochastic elliptic partial differential equation (SPDE) with spatially varying non-Gaussian random coefficients is considered. The numerical scalability of the preconditioner is investigated with respect to the mesh size, subdomain size, fixed problem size per subdomain and order of polynomial chaos expansion. The numerical experiments are performed on a Linux cluster using MPI and PETSc parallel libraries.
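
    The iterative substructuring idea (eliminate subdomain interiors, solve an interface problem, then back-substitute) can be sketched on a 1-D Poisson model problem with a single interface node. This is a direct Schur-complement solve in pure Python, without the paper's two-level preconditioner, iterative interface solver, or stochastic dimension.

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (teaching-size only)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 1-D Poisson model problem -u'' = 1 on five interior nodes: A = tridiag(-1,2,-1).
n = 5
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n

gamma = 2                 # interface node splitting the domain in two
interior = [0, 1, 3, 4]   # two decoupled subdomains: {0,1} and {3,4}

AII = [[A[i][j] for j in interior] for i in interior]
AIg = [A[i][gamma] for i in interior]
AgI = [A[gamma][j] for j in interior]
bI = [b[i] for i in interior]

y = solve(AII, AIg)       # subdomain solves (independent, parallelizable)
z = solve(AII, bI)
S = A[gamma][gamma] - sum(g * yi for g, yi in zip(AgI, y))  # Schur complement
u_gamma = (b[gamma] - sum(g * zi for g, zi in zip(AgI, z))) / S
uI = [zi - yi * u_gamma for zi, yi in zip(z, y)]            # back-substitution

u = [0.0] * n
u[gamma] = u_gamma
for idx, i in enumerate(interior):
    u[i] = uI[idx]
print([round(v, 6) for v in u])  # [2.5, 4.0, 4.5, 4.0, 2.5]
```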

  1. A Two Stage Solution Procedure for Production Planning System with Advance Demand Information

    Science.gov (United States)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji

    We model the ‘Naiji System’, a unique cooperation technique between a manufacturer and suppliers in Japan. We propose a two-stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Exploiting the convexity and the special structure of the correlation matrix in the problem, where inventories for different periods are not independent, we propose a solution procedure with two stages, named the Mass Customization Production Planning & Management System (MCPS) and Variable Mesh Neighborhood Search (VMNS), based on meta-heuristics. It is shown that the proposed solution procedure obtains a near-optimal solution efficiently and is practical for making a good master production schedule for the suppliers.

  2. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  3. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    Science.gov (United States)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well presented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were yielded for a 15-year planning horizon. Finally, the maximum net economic benefit with an interval value of [1.197, 6.311] × 10^9 $ was obtained as well as corresponding land use allocations in the three planning periods. Also, the resulting soil erosion amount was found to be decreased and controlled at a tolerable level over the watershed. Thus, results confirm that the developed model is a useful tool for implementing land use management as not only does it allow local decision makers to optimize land use allocation, but can also help to answer how to accomplish land use changes.

  4. Combinatorial Hybrid Systems

    DEFF Research Database (Denmark)

    Larsen, Jesper Abildgaard; Wisniewski, Rafal; Grunnet, Jacob Deleuran

    2008-01-01

    indicates for a given face the future simplex. In the suggested definition we allow nondeterminacy in the form of splitting and merging of solution trajectories. The combinatorial vector field gives rise to combinatorial counterparts of most concepts from dynamical systems, such as duals to vector fields, flow…, flow lines, fixed points and Lyapunov functions. Finally it will be shown how this theory extends to switched dynamical systems, and an algorithmic overview of how to do supervisory control will be shown towards the end…

  5. A Data-Driven Stochastic Reactive Power Optimization Considering Uncertainties in Active Distribution Networks and Decomposition Method

    DEFF Research Database (Denmark)

    Ding, Tao; Yang, Qingrun; Yang, Yongheng

    2018-01-01

    To address the uncertain output of distributed generators (DGs) for reactive power optimization in active distribution networks, the stochastic programming model is widely used. The model is employed to find an optimal control strategy with minimum expected network loss while satisfying all… In this paper, a data-driven modeling approach is introduced which assumes that the probability distribution from the historical data is uncertain within a confidence set. Furthermore, a data-driven stochastic programming model is formulated as a two-stage problem, where the first-stage variables find the optimal… control for discrete reactive power compensation equipment under the worst probability distribution of the second-stage recourse. The second-stage variables are adjusted to the uncertain probability distribution. In particular, this two-stage problem has a special structure so that the second-stage problem…

  6. Fleet Planning Decision-Making: Two-Stage Optimization with Slot Purchase

    Directory of Open Access Journals (Sweden)

    Lay Eng Teoh

    2016-01-01

    Essentially, strategic fleet planning is vital for airlines to yield a higher profit margin while providing a desired service frequency to meet stochastic demand. In contrast to most studies, which did not consider slot purchase even though it affects the service frequency determination of airlines, this paper proposes a novel approach to solve the fleet planning problem subject to various operational constraints. A two-stage fleet planning model is formulated in which the first stage selects the individual operating routes that require slot purchase for network expansion, while the second stage, in the form of a probabilistic dynamic programming model, determines the quantity and type of aircraft (with the corresponding service frequency) to meet the demand profitably. By analyzing an illustrative case study (with 38 international routes), the results show that the incorporation of slot purchase in fleet planning is beneficial to airlines in achieving economic and social sustainability. The developed model is practically viable for airlines, not only to provide a better service quality (via a higher service frequency) to meet more demand but also to obtain a higher revenue and profit margin, by making an optimal slot purchase and fleet planning decision throughout the long-term planning horizon.

  7. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    Science.gov (United States)

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
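
    A minimal sketch of the two-stage approach for Gaussian data: stage one produces a per-study estimate and standard error (taken as given here), and stage two pools them by inverse-variance weighting. This is the fixed-effect case only, with hypothetical numbers; the paper's point is that, under the model's assumptions, a one-stage fit gives approximately the same precision.

```python
import math

def two_stage_fixed_effect(estimates, std_errors):
    """Stage 2 of a two-stage meta-analysis: inverse-variance weighted
    pooling of per-study estimates (the classic fixed-effect model)."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Two hypothetical studies with equal precision: the pooled estimate is
# the simple average and the SE shrinks by a factor of sqrt(2).
est, se = two_stage_fixed_effect([1.0, 2.0], [0.5, 0.5])
print(round(est, 4), round(se, 4))  # 1.5 0.3536
```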

  8. cDREM: inferring dynamic combinatorial gene regulation.

    Science.gov (United States)

    Wise, Aaron; Bar-Joseph, Ziv

    2015-04-01

    Genes are often combinatorially regulated by multiple transcription factors (TFs). Such combinatorial regulation plays an important role in development and facilitates the ability of cells to respond to different stresses. While a number of approaches have utilized sequence and ChIP-based datasets to study combinatorial regulation, these have often ignored the combinatorial logic and the dynamics associated with such regulation. Here we present cDREM, a new method for reconstructing dynamic models of combinatorial regulation. cDREM integrates time series gene expression data with (static) protein interaction data. The method is based on a hidden Markov model and utilizes the sparse group Lasso to identify small subsets of combinatorially active TFs, their time of activation, and the logical function they implement. We tested cDREM on yeast and human data sets. Using yeast we show that the predicted combinatorial sets agree with other high throughput genomic datasets and improve upon prior methods developed to infer combinatorial regulation. Applying cDREM to study human response to flu, we were able to identify several combinatorial TF sets, some of which were known to regulate immune response while others represent novel combinations of important TFs.

  9. Coordinating two-period ordering and advertising policies in a dynamic market with stochastic demand

    Science.gov (United States)

    Wang, Junping; Wang, Shengdong; Min, Jie

    2015-03-01

    In this paper, we study the optimal two-stage advertising and ordering policies and the channel coordination issues in a supply chain composed of one manufacturer and one retailer. The manufacturer sells a short-life-cycle product through the retailer facing stochastic demand in dynamic markets characterised by price declines and product obsolescence. Following a two-period newsvendor framework, we develop two members' optimal ordering and advertising models under both the centralised and decentralised settings, and present the closed-form solutions to the developed models as well. Moreover, we design a two-period revenue-sharing contract, and develop sufficient conditions such that the channel coordination can be achieved and a win-win outcome can be guaranteed. Our analysis suggests that the centralised decision creates an incentive for the retailer to increase the advertising investments in two periods and put the purchase forward, but the decentralised decision mechanism forces the retailer to decrease the advertising investments in two periods and postpone/reduce its purchase in the first period. This phenomenon becomes more evident when demand variability is high.
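
    The single-period newsvendor subproblem underlying such two-period frameworks reduces to the critical-fractile rule: order up to the demand quantile (p - c)/(p - s). A sketch under illustrative numbers (discrete uniform demand, zero salvage value), cross-checked against brute-force expected-profit maximization; the paper's two-period model with advertising is considerably richer.

```python
# Single-period newsvendor: order quantity q, unit price p, unit cost c,
# unsold units worthless.  All numbers below are hypothetical.
price, cost = 10.0, 4.0
demands = range(101)  # demand is discrete uniform on {0, ..., 100}

def exp_profit(q):
    # Expected profit: average revenue over demand scenarios minus
    # purchasing cost.
    n = len(demands)
    return sum(price * min(d, q) for d in demands) / n - cost * q

best_q = max(demands, key=exp_profit)

# Critical-fractile check: smallest q with F(q) >= (p - c)/p = 0.6.
fractile_q = next(q for q in demands if (q + 1) / 101 >= (price - cost) / price)
print(best_q, fractile_q)  # 60 60
```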

  10. Combinatorial designs constructions and analysis

    CERN Document Server

    Stinson, Douglas R

    2004-01-01

    Created to teach students many of the most important techniques used for constructing combinatorial designs, this is an ideal textbook for advanced undergraduate and graduate courses in combinatorial design theory. The text features clear explanations of basic designs, such as Steiner and Kirkman triple systems, mutually orthogonal Latin squares, finite projective and affine planes, and Steiner quadruple systems. In these settings, the student will master various construction techniques, both classic and modern, and will be well-prepared to construct a vast array of combinatorial designs. Design theory offers a progressive approach to the subject, with carefully ordered results. It begins with simple constructions that gradually increase in complexity. Each design has a construction that contains new ideas or that reinforces and builds upon similar ideas previously introduced. A new text/reference covering all aspects of modern combinatorial design theory. Graduates and professionals in computer science, applie...

  11. Combinatorial methods for advanced materials research and development

    Energy Technology Data Exchange (ETDEWEB)

    Cremer, R.; Dondorf, S.; Hauck, M.; Horbach, D.; Kaiser, M.; Krysta, S.; Kyrylov, O.; Muenstermann, E.; Philipps, M.; Reichert, K.; Strauch, G. [Rheinisch-Westfaelische Technische Hochschule Aachen (Germany). Lehrstuhl fuer Theoretische Huettenkunde

    2001-10-01

    The applicability of combinatorial methods in developing advanced materials is illustrated by presenting four examples for the deposition and characterization of one- and two-dimensionally laterally graded coatings, which were deposited by means of (reactive) magnetron sputtering and plasma-enhanced chemical vapor deposition. To emphasize the advantages of combinatorial approaches, metastable hard coatings like (Ti,Al)N and (Ti,Al,Hf)N, respectively, as well as Ge-Sb-Te based films for rewritable optical data storage were investigated with respect to the relations between structure, composition, and the desired materials properties. (orig.)

  12. Markov's theorem and algorithmically non-recognizable combinatorial manifolds

    International Nuclear Information System (INIS)

    Shtan'ko, M A

    2004-01-01

    We prove the theorem of Markov on the existence of an algorithmically non-recognizable combinatorial n-dimensional manifold for every n≥4. We construct for the first time a concrete manifold which is algorithmically non-recognizable. A strengthened form of Markov's theorem is proved using the combinatorial methods of regular neighbourhoods and handle theory. The proofs coincide for all n≥4. We use Borisov's group with insoluble word problem. It has two generators and twelve relations. The use of this group forms the base for proving the strengthened form of Markov's theorem.

  13. Stochastic and non-stochastic effects - a conceptual analysis

    International Nuclear Information System (INIS)

    Karhausen, L.R.

    1980-01-01

    The attempt to divide radiation effects into stochastic and non-stochastic effects is discussed. It is argued that radiation or toxicological effects are contingently related to radiation or chemical exposure. Biological effects in general can be described by general laws but these laws never represent a necessary connection. Actually stochastic effects express contingent, or empirical, connections while non-stochastic effects represent semantic and non-factual connections. These two expressions stem from two different levels of discourse. The consequence of this analysis for radiation biology and radiation protection is discussed. (author)

  14. Expansion or extinction: deterministic and stochastic two-patch models with Allee effects.

    Science.gov (United States)

    Kang, Yun; Lanchier, Nicolas

    2011-06-01

    We investigate the impact of the Allee effect and dispersal on the long-term evolution of a population in a patchy environment. Our main focus is on whether a population already established in one patch either successfully invades an adjacent empty patch or undergoes a global extinction. Our study is based on the combination of analytical and numerical results for both a deterministic two-patch model and a stochastic counterpart. The deterministic model has either two, three or four attractors. The existence of a regime with exactly three attractors only appears when patches have distinct Allee thresholds. In the presence of weak dispersal, the analysis of the deterministic model shows that a high-density and a low-density population can coexist at equilibrium in nearby patches, whereas the analysis of the stochastic model indicates that this equilibrium is metastable, thus leading after a large random time to either a global expansion or a global extinction. Up to some critical dispersal, increasing the intensity of the interactions leads to an increase of both the basin of attraction of the global extinction and the basin of attraction of the global expansion. Above this threshold, for both the deterministic and the stochastic models, the patches tend to synchronize as the intensity of the dispersal increases. This results in either a global expansion or a global extinction. For the deterministic model, there are only two attractors, while the stochastic model no longer exhibits a metastable behavior. In the presence of strong dispersal, the limiting behavior is entirely determined by the value of the Allee thresholds as the global population size in the deterministic and the stochastic models evolves as dictated by their single-patch counterparts. For all values of the dispersal parameter, Allee effects promote global extinction in terms of an expansion of the basin of attraction of the extinction equilibrium for the deterministic model and an increase of the ...
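A rough numerical sketch of the deterministic side of such a model illustrates the weak-dispersal regime in which a high-density and a low-density patch coexist. The cubic Allee growth term and all parameter values here are illustrative assumptions, not taken from the paper:

```python
def simulate(n1, n2, allee=0.3, cap=1.0, d=0.01, dt=0.01, steps=20000):
    """Euler integration of a two-patch model with a cubic Allee growth
    term f(N) = N (N/A - 1)(1 - N/K) and symmetric dispersal rate d."""
    f = lambda n: n * (n / allee - 1.0) * (1.0 - n / cap)
    for _ in range(steps):
        dn1 = f(n1) + d * (n2 - n1)
        dn2 = f(n2) + d * (n1 - n2)
        n1, n2 = n1 + dt * dn1, n2 + dt * dn2
    return n1, n2

# One patch starts above the Allee threshold, the other empty: with weak
# dispersal they settle at high and low densities rather than synchronizing.
n1, n2 = simulate(0.8, 0.0)
print(round(n1, 3), round(n2, 3))
```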

  15. Combinatorial Cis-regulation in Saccharomyces Species

    Directory of Open Access Journals (Sweden)

    Aaron T. Spivak

    2016-03-01

    Transcriptional control of gene expression requires interactions between the cis-regulatory elements (CREs) controlling gene promoters. We developed a sensitive computational method to identify CRE combinations with conserved spacing that does not require genome alignments. When applied to seven sensu stricto and sensu lato Saccharomyces species, 80% of the predicted interactions displayed some evidence of combinatorial transcriptional behavior in several existing datasets including: (1) chromatin immunoprecipitation data for colocalization of transcription factors, (2) gene expression data for coexpression of predicted regulatory targets, and (3) gene ontology databases for common pathway membership of predicted regulatory targets. We tested several predicted CRE interactions with chromatin immunoprecipitation experiments in a wild-type strain and strains in which a predicted cofactor was deleted. Our experiments confirmed that transcription factor (TF) occupancy at the promoters of the CRE combination target genes depends on the predicted cofactor while occupancy of other promoters is independent of the predicted cofactor. Our method has the additional advantage of identifying regulatory differences between species. By analyzing the S. cerevisiae and S. bayanus genomes, we identified differences in combinatorial cis-regulation between the species and showed that the predicted changes in gene regulation explain several of the species-specific differences seen in gene expression datasets. In some instances, the same CRE combinations appear to regulate genes involved in distinct biological processes in the two different species. The results of this research demonstrate that (1) combinatorial cis-regulation can be inferred by multi-genome analysis and (2) combinatorial cis-regulation can explain differences in gene expression between species.

  16. LP formulation of asymmetric zero-sum stochastic games

    KAUST Repository

    Li, Lichun

    2014-12-15

    This paper provides an efficient linear programming (LP) formulation of asymmetric two-player zero-sum stochastic games with finite horizon. In these stochastic games, only one player is informed of the state at each stage, and the transition law is only controlled by the informed player. Compared with the LP formulation of extensive stochastic games, whose size grows polynomially with respect to the size of the state and the size of the uninformed player's actions, our proposed LP formulation has a size that is linear with respect to the size of the state and the size of the uninformed player's actions, and hence greatly reduces the computational complexity. A travelling inspector problem is used to demonstrate the efficiency of the proposed LP formulation.

  17. LP formulation of asymmetric zero-sum stochastic games

    KAUST Repository

    Li, Lichun; Shamma, Jeff S.

    2014-01-01

    This paper provides an efficient linear programming (LP) formulation of asymmetric two-player zero-sum stochastic games with finite horizon. In these stochastic games, only one player is informed of the state at each stage, and the transition law is only controlled by the informed player. Compared with the LP formulation of extensive stochastic games, whose size grows polynomially with respect to the size of the state and the size of the uninformed player's actions, our proposed LP formulation has a size that is linear with respect to the size of the state and the size of the uninformed player's actions, and hence greatly reduces the computational complexity. A travelling inspector problem is used to demonstrate the efficiency of the proposed LP formulation.

  18. INdAM conference "CoMeTA 2013 - Combinatorial Methods in Topology and Algebra"

    CERN Document Server

    Delucchi, Emanuele; Moci, Luca

    2015-01-01

    Combinatorics plays a prominent role in contemporary mathematics, due to the vibrant development it has experienced in the last two decades and its many interactions with other subjects. This book arises from the INdAM conference "CoMeTA 2013 - Combinatorial Methods in Topology and Algebra,'' which was held in Cortona in September 2013. The event brought together emerging and leading researchers at the crossroads of Combinatorics, Topology and Algebra, with a particular focus on new trends in subjects such as: hyperplane arrangements; discrete geometry and combinatorial topology; polytope theory and triangulations of manifolds; combinatorial algebraic geometry and commutative algebra; algebraic combinatorics; and combinatorial representation theory. The book is divided into two parts. The first expands on the topics discussed at the conference by providing additional background and explanations, while the second presents original contributions on new trends in the topics addressed by the conference.

  19. Scheduling stochastic two-machine flow shop problems to minimize expected makespan

    Directory of Open Access Journals (Sweden)

    Mehdi Heydari

    2013-07-01

    During the past few years, despite tremendous contributions on the deterministic flow shop problem, only a limited number of works have been dedicated to stochastic cases. This paper examines stochastic scheduling problems in a two-machine flow shop environment for expected makespan minimization, where processing times of jobs are normally distributed. Since jobs have stochastic processing times, to minimize the expected makespan, the expected sum of the second machine's free times is minimized. In other words, by minimizing waiting times for the second machine, it is possible to reach the minimum of the objective function. A mathematical method is proposed which utilizes the properties of the normal distribution. Furthermore, this method can be used as a heuristic method for other distributions, as long as the means and variances are available. The performance of the proposed method is explored using some numerical examples.
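For a fixed job sequence, the expected makespan that this kind of method approximates analytically can also be estimated by plain Monte Carlo simulation, which makes a convenient baseline. The recursion below is the standard two-machine flow-shop makespan; the job data are illustrative:

```python
import random

def makespan(seq, t1, t2):
    """Makespan of a permutation in a two-machine flow shop.
    t1[j], t2[j]: processing times of job j on machines 1 and 2."""
    c1 = c2 = 0.0
    for j in seq:
        c1 += t1[j]               # machine 1 processes jobs back to back
        c2 = max(c2, c1) + t2[j]  # machine 2 waits for machine 1 if idle
    return c2

def expected_makespan(seq, means1, sds1, means2, sds2, n=20000, seed=1):
    """Monte Carlo estimate, truncating at zero (a normal draw can go negative)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t1 = [max(0.0, rng.gauss(m, s)) for m, s in zip(means1, sds1)]
        t2 = [max(0.0, rng.gauss(m, s)) for m, s in zip(means2, sds2)]
        total += makespan(seq, t1, t2)
    return total / n

means1, sds1 = [4, 2, 6], [1, 0.5, 1.5]
means2, sds2 = [3, 5, 2], [1, 1, 0.5]
print(expected_makespan([1, 0, 2], means1, sds1, means2, sds2))
```

With all standard deviations set to zero the estimate collapses to the deterministic makespan, which is a handy sanity check.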

  20. Combinatorial structures to modeling simple games and applications

    Science.gov (United States)

    Molinero, Xavier

    2017-09-01

    We connect three different topics: combinatorial structures, game theory and chemistry. In particular, we establish the bases to represent some simple games, defined as influence games, and molecules, defined from atoms, by using combinatorial structures. First, we characterize simple games as influence games using influence graphs. This lets us model simple games as combinatorial structures (from the viewpoint of structures or graphs). Second, we formally define molecules as combinations of atoms. This lets us model molecules as combinatorial structures (from the viewpoint of combinations). It remains open to generate such combinatorial structures using specific techniques such as genetic algorithms, (meta-)heuristic algorithms and parallel programming, among others.

  1. One-stage and two-stage penile buccal mucosa urethroplasty

    Directory of Open Access Journals (Sweden)

    G. Barbagli

    2016-03-01

    The paper provides the reader with a detailed description of current techniques of one-stage and two-stage penile buccal mucosa urethroplasty. It also covers the preoperative patient evaluation, paying attention to the use of diagnostic tools. The one-stage penile urethroplasty using a buccal mucosa graft with the application of glue is first shown and discussed. Two-stage penile urethroplasty is then reported. A detailed description of first-stage urethroplasty according to the Johanson technique is reported. A second-stage urethroplasty using a buccal mucosa graft and glue is presented. Finally, the postoperative course and follow-up are addressed.

  2. Dynamic combinatorial libraries : new opportunities in systems chemistry

    NARCIS (Netherlands)

    Hunt, Rosemary A. R.; Otto, Sijbren; Hunt, Rosemary A.R.

    2011-01-01

    Combinatorial chemistry is a tool for selecting molecules with special properties. Dynamic combinatorial chemistry started off aiming to be just that. However, unlike ordinary combinatorial chemistry, the interconnectedness of dynamic libraries gives them an extra dimension. An understanding of ...

  3. Combinatorial techniques to efficiently investigate and optimize organic thin film processing and properties.

    Science.gov (United States)

    Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner

    2013-04-08

    In this article we present several developed and improved combinatorial techniques to optimize processing conditions and material properties of organic thin films. The combinatorial approach allows investigations of multi-variable dependencies and is the perfect tool to investigate organic thin films with regard to their high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore we demonstrate the smart application of combinations of composition and processing gradients to create combinatorial libraries. First a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is carried out in very small areas and arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow identifying precise trends for the optimization of multi-variable dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction and thus the interdependency of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to utilize and transfer the reported combinatorial techniques to other multi-variable dependent processes and to investigate and optimize thin film layers and devices for optical, electro-optical, and electronic applications.

  4. Combinatorial Techniques to Efficiently Investigate and Optimize Organic Thin Film Processing and Properties

    Directory of Open Access Journals (Sweden)

    Hans-Werner Schmidt

    2013-04-01

    In this article we present several developed and improved combinatorial techniques to optimize processing conditions and material properties of organic thin films. The combinatorial approach allows investigations of multi-variable dependencies and is the perfect tool to investigate organic thin films with regard to their high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore we demonstrate the smart application of combinations of composition and processing gradients to create combinatorial libraries. First a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is carried out in very small areas and arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow identifying precise trends for the optimization of multi-variable dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction and thus the interdependency of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to utilize and transfer the reported combinatorial techniques to other multi-variable dependent processes and to investigate and optimize thin film layers and devices for optical, electro-optical, and electronic applications.

  5. A combinatorial perspective of the protein inference problem.

    Science.gov (United States)

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2013-01-01

    In a shotgun proteomics experiment, proteins are the most biologically meaningful output. The success of proteomics studies depends on the ability to accurately and efficiently identify proteins. Many methods have been proposed to facilitate the identification of proteins from peptide identification results. However, the relationship between protein identification and peptide identification has not been thoroughly explained before. In this paper, we devote ourselves to a combinatorial perspective of the protein inference problem. We employ combinatorial mathematics to calculate the conditional protein probabilities (protein probability means the probability that a protein is correctly identified) under three assumptions, which lead to a lower bound, an upper bound, and an empirical estimation of protein probabilities, respectively. The combinatorial perspective enables us to obtain an analytical expression for protein inference. Our method achieves comparable results with ProteinProphet in a more efficient manner in experiments on two data sets of standard protein mixtures and two data sets of real samples. Based on our model, we study the impact of unique peptides and degenerate peptides (degenerate peptides are peptides shared by at least two proteins) on protein probabilities. Meanwhile, we also study the relationship between our model and ProteinProphet. We name our program ProteinInfer. Its Java source code, our supplementary document and experimental results are available at: http://bioinformatics.ust.hk/proteininfer.

  6. Boltzmann Oracle for Combinatorial Systems

    OpenAIRE

    Pivoteau , Carine; Salvy , Bruno; Soria , Michèle

    2008-01-01

    Boltzmann random generation applies to well-defined systems of recursive combinatorial equations. It relies on oracles giving values of the enumeration generating series inside their disk of convergence. We show that the combinatorial systems translate into numerical iteration schemes that provide such oracles. In particular, we give a fast oracle based on Newton iteration.

  7. Combinatorial Interpretation of General Eulerian Numbers

    OpenAIRE

    Tingyao Xiong; Jonathan I. Hall; Hung-Ping Tsao

    2014-01-01

    Since the 1950s, mathematicians have successfully interpreted the traditional Eulerian numbers and $q$-Eulerian numbers combinatorially. In this paper, the authors give a combinatorial interpretation to the general Eulerian numbers defined on general arithmetic progressions {a, a+d, a+2d, ...}.
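For reference, the classic Eulerian numbers that this work generalizes count permutations by descents and satisfy the well-known recurrence A(n,k) = (k+1)A(n-1,k) + (n-k)A(n-1,k-1). The sketch below covers only the classic case, not the arithmetic-progression generalization, and checks the combinatorial interpretation by brute force:

```python
from itertools import permutations

def eulerian(n, k):
    """Classic Eulerian number A(n, k): permutations of 1..n with k descents,
    via the recurrence A(n,k) = (k+1)A(n-1,k) + (n-k)A(n-1,k-1)."""
    if k < 0 or k >= n:
        return 0
    if n == 1:
        return 1 if k == 0 else 0
    return (k + 1) * eulerian(n - 1, k) + (n - k) * eulerian(n - 1, k - 1)

def descents(p):
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

# Combinatorial check: count permutations of {1..4} by number of descents.
counts = [0] * 4
for p in permutations(range(1, 5)):
    counts[descents(p)] += 1
print(counts)                                   # row n=4 of the Eulerian triangle
print([eulerian(4, k) for k in range(4)])
```

Both prints should agree, confirming that the recurrence and the descent-counting interpretation describe the same numbers.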

  8. Stochastic Sizing of Energy Storage Systems for Wind Integration

    Directory of Open Access Journals (Sweden)

    D. D. Le

    2018-06-01

    In this paper, we present an optimal capacity decision model for energy storage systems (ESSs) in combined operation with wind energy in power systems. We use a two-stage stochastic programming approach to take into account both wind and load uncertainties. The planning problem is formulated as an AC optimal power flow (OPF) model with the objective of minimizing ESS installation cost and system operation cost. Stochastic wind and load inputs for the model are generated from historical data using a clustering technique. The model is tested on the IEEE 39-bus system.
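Stripped of the AC OPF details, the core two-stage structure (a here-and-now capacity decision followed by scenario-dependent recourse) can be illustrated with a toy scenario-enumeration model; all probabilities, shortfalls and costs below are illustrative assumptions:

```python
# Toy two-stage decision: pick a storage capacity (first stage) to minimize
# capacity cost plus expected recourse cost over discrete net-load scenarios.
scenarios = [(0.3, 2.0), (0.5, 5.0), (0.2, 9.0)]  # (probability, shortfall in MWh)
CAP_COST = 3.0      # $/MWh of installed storage (illustrative)
PENALTY = 10.0      # $/MWh of shortfall not covered by storage (illustrative)

def total_cost(capacity):
    # Second stage: in each scenario, pay the penalty on uncovered shortfall.
    recourse = sum(p * PENALTY * max(0.0, shortfall - capacity)
                   for p, shortfall in scenarios)
    return CAP_COST * capacity + recourse

# First stage: brute-force search over a discrete capacity grid.
best = min((total_cost(c), c) for c in [x * 0.5 for x in range(0, 21)])
print(best)  # (expected total cost, chosen capacity)
```

A real instance replaces the brute-force search and penalty term with an optimization model over network constraints, but the expectation-over-scenarios structure is the same.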

  9. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  10. Two new algorithms to combine kriging with stochastic modelling

    Science.gov (United States)

    Venema, Victor; Lindau, Ralf; Varnai, Tamas; Simmer, Clemens

    2010-05-01

    Two main groups of statistical methods used in the Earth sciences are geostatistics and stochastic modelling. Geostatistical methods, such as various kriging algorithms, aim at estimating the mean value for every point as well as possible. In the case of sparse measurements, such fields have less variability at small scales and a narrower distribution than the true field. This can lead to biases if a nonlinear process is simulated driven by such a kriged field. Stochastic modelling aims at reproducing the statistical structure of the data in space and time. One of the stochastic modelling methods, the so-called surrogate data approach, replicates the value distribution and power spectrum of a certain data set. While stochastic methods reproduce the statistical properties of the data, the location of the measurement is not considered. This requires the use of so-called constrained stochastic models. Because radiative transfer through clouds is a highly nonlinear process, it is essential to model the distribution (e.g. of optical depth, extinction, liquid water content or liquid water path) accurately. In addition, the correlations within the cloud field are important, especially because of horizontal photon transport. This explains the success of surrogate cloud fields for use in 3D radiative transfer studies. Up to now, however, we could only achieve good results for the radiative properties averaged over the field, but not for a radiation measurement located at a certain position. Therefore we have developed a new algorithm that combines the accuracy of stochastic (surrogate) modelling with the positioning capabilities of kriging. In this way, we can automatically profit from the large geostatistical literature and software. This algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually ...

  11. A new evolutionary algorithm with LQV learning for combinatorial problems optimization

    International Nuclear Information System (INIS)

    Machado, Marcelo Dornellas; Schirru, Roberto

    2000-01-01

    Genetic algorithms are biologically motivated adaptive systems which have been used, with good results, for combinatorial problem optimization. In this work, a new learning mode, to be used by the population-based incremental learning algorithm, aims to build a new evolutionary algorithm for the optimization of numerical problems and combinatorial problems. This new learning mode uses a variable learning rate during the optimization process, constituting a process known as proportional reward. This new algorithm is developed with a view to its application in the optimization of the reload problem of PWR nuclear reactors, in order to increase the useful life of the nuclear fuel. For the test, two classes of problems are used: numerical problems and combinatorial problems. Because the reload problem is a combinatorial problem, the major interest lies in the last class. The results achieved with the tests indicate the applicability of the new learning mode, showing its potential as a developing tool in the solution of the reload problem. (author)
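A minimal sketch of the underlying population-based incremental learning (PBIL) loop, shown here on the OneMax toy problem with a fixed learning rate; the paper's variable-rate "proportional reward" scheme is not reproduced:

```python
import random

def pbil_onemax(n=20, pop=30, lr=0.1, iters=150, seed=0):
    """Population-based incremental learning on OneMax (maximize the number
    of ones). A probability vector is nudged toward the best sample each
    generation, with a fixed learning rate lr."""
    rng = random.Random(seed)
    prob = [0.5] * n                      # start with unbiased bit probabilities
    best, best_fit = None, -1
    for _ in range(iters):
        samples = [[1 if rng.random() < p else 0 for p in prob]
                   for _ in range(pop)]
        elite = max(samples, key=sum)     # best individual of this generation
        if sum(elite) > best_fit:
            best, best_fit = elite, sum(elite)
        # Move the probability vector toward the elite sample.
        prob = [(1 - lr) * p + lr * b for p, b in zip(prob, elite)]
    return best, best_fit

best, fit = pbil_onemax()
print(fit)
```

The paper's proportional-reward idea would replace the constant `lr` with a rate that varies over the run; this fixed-rate loop is only the baseline it modifies.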

  12. Combinatorial optimization on a Boltzmann machine

    NARCIS (Netherlands)

    Korst, J.H.M.; Aarts, E.H.L.

    1989-01-01

    We discuss the problem of solving (approximately) combinatorial optimization problems on a Boltzmann machine. It is shown for a number of combinatorial optimization problems how they can be mapped directly onto a Boltzmann machine by choosing appropriate connection patterns and connection strengths.
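The flavour of such a mapping can be sketched with Boltzmann-style stochastic unit updates applied to max-cut: a random unit flips with a sigmoid probability of the resulting gain in cut value while a temperature parameter is lowered. This is a simplified illustration, not the connection-strength construction described in the paper:

```python
import math, random

def boltzmann_maxcut(edges, n, T0=2.0, cooling=0.999, steps=5000, seed=42):
    """Boltzmann-style stochastic search for max-cut on n nodes."""
    rng = random.Random(seed)
    state = [rng.choice([0, 1]) for _ in range(n)]
    cut = lambda s: sum(1 for u, v in edges if s[u] != s[v])
    value, T = cut(state), T0
    for _ in range(steps):
        i = rng.randrange(n)
        state[i] ^= 1                      # propose flipping one unit
        new = cut(state)
        # Sigmoid (Boltzmann) acceptance of the proposed flip.
        if rng.random() < 1 / (1 + math.exp(-(new - value) / T)):
            value = new
        else:
            state[i] ^= 1                  # reject: flip back
        T *= cooling                       # gradually lower the temperature
    return state, value

# 4-cycle: the maximum cut contains all 4 edges (alternating bipartition).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
state, value = boltzmann_maxcut(edges, 4)
print(value)
```

At high temperature the acceptance probability is close to 1/2 regardless of the gain, and as the temperature drops the dynamics approach greedy hill-climbing, which is the usual annealing behaviour of Boltzmann machines used for optimization.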

  13. Combinatorial synthesis of natural products

    DEFF Research Database (Denmark)

    Nielsen, John

    2002-01-01

    Combinatorial syntheses allow production of compound libraries in an expeditious and organized manner immediately applicable for high-throughput screening. Natural products possess a pedigree to justify quality and appreciation in drug discovery and development. Currently, we are seeing a rapid increase in the application of natural products in combinatorial chemistry and vice versa. The therapeutic areas of infectious disease and oncology still dominate but many new areas are emerging. Several complex natural products have now been synthesised by solid-phase methods and have created the foundation for the preparation of combinatorial libraries. In other examples, natural products or intermediates have served as building blocks or scaffolds in the synthesis of complex natural products, bioactive analogues or designed hybrid molecules. Finally, structural motifs from the biologically active parent molecule have ...

  14. Neural Meta-Memes Framework for Combinatorial Optimization

    Science.gov (United States)

    Song, Li Qin; Lim, Meng Hiot; Ong, Yew Soon

    In this paper, we present a Neural Meta-Memes Framework (NMMF) for combinatorial optimization. NMMF is a framework which models basic optimization algorithms as memes and manages them dynamically when solving combinatorial problems. NMMF encompasses neural networks which serve as the overall planner/coordinator to balance the workload between memes. We show the efficacy of the proposed NMMF through empirical study on a class of combinatorial problems, the quadratic assignment problem (QAP).

  15. Identification and Construction of Combinatory Cancer Hallmark-Based Gene Signature Sets to Predict Recurrence and Chemotherapy Benefit in Stage II Colorectal Cancer.

    Science.gov (United States)

    Gao, Shanwu; Tibiche, Chabane; Zou, Jinfeng; Zaman, Naif; Trifiro, Mark; O'Connor-McCourt, Maureen; Wang, Edwin

    2016-01-01

    Decisions regarding adjuvant therapy in patients with stage II colorectal cancer (CRC) have been among the most challenging and controversial in oncology over the past 20 years. To develop robust combinatory cancer hallmark-based gene signature sets (CSS sets) that more accurately predict prognosis and identify a subset of patients with stage II CRC who could gain survival benefits from adjuvant chemotherapy. Thirteen retrospective studies of patients with stage II CRC who had clinical follow-up and adjuvant chemotherapy were analyzed. Respective totals of 162 and 843 patients from 2 and 11 independent cohorts were used as the discovery and validation cohorts, respectively. A total of 1005 patients with stage II CRC were included in the 13 cohorts. Among them, 84 of 416 patients in 3 independent cohorts received fluorouracil-based adjuvant chemotherapy. Identification of CSS sets to predict relapse-free survival and identify a subset of patients with stage II CRC who could gain substantial survival benefits from fluorouracil-based adjuvant chemotherapy. Eight cancer hallmark-based gene signatures (30 genes each) were identified and used to construct CSS sets for determining prognosis. The CSS sets were validated in 11 independent cohorts of 767 patients with stage II CRC who did not receive adjuvant chemotherapy. The CSS sets accurately stratified patients into low-, intermediate-, and high-risk groups. Five-year relapse-free survival rates were 94%, 78%, and 45%, respectively, representing 60%, 28%, and 12% of patients with stage II disease. The 416 patients with CSS set-defined high-risk stage II CRC who received fluorouracil-based adjuvant chemotherapy showed a substantial gain in survival benefits from the treatment (ie, recurrence reduced by 30%-40% in 5 years). The CSS sets substantially outperformed other prognostic predictors of stage II CRC. They are more accurate and robust for prognostic predictions and facilitate the identification of patients with stage ...

  16. Number systems and combinatorial problems

    OpenAIRE

    Yordzhev, Krasimir

    2014-01-01

    The present work has been designed for students in secondary school and their teachers in mathematics. We will show how, with the help of our knowledge of number systems, we can solve problems from other fields of mathematics, for example in combinatorial analysis and most of all when proving some combinatorial identities. To demonstrate the method discussed in this article, we have chosen several suitable mathematical tasks.

  17. Optics of two-stage photovoltaic concentrators with dielectric second stages

    Science.gov (United States)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  18. Optics of two-stage photovoltaic concentrators with dielectric second stages.

    Science.gov (United States)

    Ning, X; O'Gallagher, J; Winston, R

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  19. Introduction to combinatorial geometry

    International Nuclear Information System (INIS)

    Gabriel, T.A.; Emmett, M.B.

    1985-01-01

    The combinatorial geometry package as used in many three-dimensional multimedia Monte Carlo radiation transport codes, such as HETC, MORSE, and EGS, is becoming the preferred way to describe simple and complicated systems. Just about any system can be modeled using the package with relatively few input statements. This can be contrasted against the older style geometry packages in which the required input statements could be large even for relatively simple systems. However, with advancements come some difficulties. The users of combinatorial geometry must be able to visualize more, and, in some instances, all of the system at a time. Errors can be introduced into the modeling which, though slight, and at times hard to detect, can have devastating effects on the calculated results. As with all modeling packages, the best way to learn the combinatorial geometry is to use it, first on a simple system then on more complicated systems. The basic technique for the description of the geometry consists of defining the location and shape of the various zones in terms of the intersections and unions of geometric bodies. The geometric bodies which are generally included in most combinatorial geometry packages are: (1) box, (2) right parallelepiped, (3) sphere, (4) right circular cylinder, (5) right elliptic cylinder, (6) ellipsoid, (7) truncated right cone, (8) right angle wedge, and (9) arbitrary polyhedron. The data necessary to describe each of these bodies are given. As can be easily noted, there are some subsets included for simplicity
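
The zone-definition idea described above (intersections and unions of primitive bodies) can be sketched in a few lines. The class and function names below are illustrative, not taken from HETC, MORSE, or EGS.

```python
# Minimal sketch of combinatorial-geometry zone membership: a zone is a
# boolean combination (intersection/union) of primitive bodies.

class Sphere:
    def __init__(self, cx, cy, cz, r):
        self.c = (cx, cy, cz)
        self.r = r
    def inside(self, p):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, self.c)) <= self.r ** 2

class Box:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def inside(self, p):
        return all(l <= x <= h for x, l, h in zip(p, self.lo, self.hi))

def intersection(*bodies):
    return lambda p: all(b.inside(p) for b in bodies)

def union(*bodies):
    return lambda p: any(b.inside(p) for b in bodies)

# Zone: inside the unit box AND inside the unit sphere, i.e. a spherical
# octant clipped by a rectangular cell.
zone = intersection(Box((0, 0, 0), (1, 1, 1)), Sphere(0, 0, 0, 1))
print(zone((0.5, 0.5, 0.5)))  # → True  (inside both bodies)
print(zone((0.9, 0.9, 0.9)))  # → False (inside box, outside sphere)
```

More complicated zones are built by nesting `intersection` and `union` of such bodies, which is exactly the input style the geometry packages expect.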

  20. Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective

    Science.gov (United States)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  1. Introduction to combinatorial designs

    CERN Document Server

    Wallis, WD

    2007-01-01

    Combinatorial theory is one of the fastest growing areas of modern mathematics. Focusing on a major part of this subject, Introduction to Combinatorial Designs, Second Edition provides a solid foundation in the classical areas of design theory as well as in more contemporary designs based on applications in a variety of fields. After an overview of basic concepts, the text introduces balanced designs and finite geometries. The author then delves into balanced incomplete block designs, covering difference methods, residual and derived designs, and resolvability. Following a chapter on the e

  2. Use of combinatorial chemistry to speed drug discovery.

    Science.gov (United States)

    Rádl, S

    1998-10-01

    IBC's International Conference on Integrating Combinatorial Chemistry into the Discovery Pipeline was held September 14-15, 1998. The program started with a pre-conference workshop on High-Throughput Compound Characterization and Purification. The agenda of the main conference was divided into sessions of Synthesis, Automation and Unique Chemistries; Integrating Combinatorial Chemistry, Medicinal Chemistry and Screening; Combinatorial Chemistry Applications for Drug Discovery; and Information and Data Management. This meeting was an excellent opportunity to see how big pharma, biotech and service companies are addressing the current bottlenecks in combinatorial chemistry to speed drug discovery. (c) 1998 Prous Science. All rights reserved.

  3. A Convergent Solid-Phase Synthesis of Actinomycin Analogues - Towards Implementation of Double-Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Tong, Glenn; Nielsen, John

    1996-01-01

    The actinomycin antibiotics bind to nucleic acids via both intercalation and hydrogen bonding. We found this 'double-action attack' mechanism very attractive in our search for a novel class of nucleic acid binders. A highly convergent, solid-phase synthetic strategy has been developed for a class...... with the requirements for combinatorial synthesis and furthermore, the final segment condensation allows, for the first time, double-combinatorial chemistry to be performed where two combinatorial libraries can be reacted with each other. Copyright (C) 1996 Elsevier Science Ltd....

  4. Dynamic electricity pricing for electric vehicles using stochastic programming

    International Nuclear Information System (INIS)

    Soares, João; Ghazvini, Mohammad Ali Fotouhi; Borges, Nuno; Vale, Zita

    2017-01-01

    Electric Vehicles (EVs) are an important source of uncertainty, due to their variable demand, departure time and location. In smart grids, the electricity demand can be controlled via Demand Response (DR) programs. Smart charging and vehicle-to-grid seem highly promising methods for EV control. However, high capital costs remain a barrier to implementation. Meanwhile, incentive- and price-based schemes that do not require a high level of control can be implemented to influence the EVs' demand. Having effective tools to deal with the increasing level of uncertainty is increasingly important for players such as energy aggregators. This paper formulates a stochastic model for day-ahead energy resource scheduling, integrated with dynamic electricity pricing for EVs, to address the challenges brought by demand and renewable source uncertainty. The two-stage stochastic programming approach is used to obtain the optimal electricity pricing for EVs. A realistic case study projected for 2030 is presented based on the Zaragoza network. The results demonstrate that the stochastic model is more effective than the deterministic model and that the optimal pricing is preferable. This study indicates that adequate DR schemes like the proposed one are promising for increasing customer satisfaction in addition to improving the profitability of the energy aggregation business. - Highlights: • A stochastic model for energy scheduling tackling several sources of uncertainty. • A two-stage stochastic programming approach is used to solve the developed model. • Optimal EV electricity pricing seems to improve profits. • The proposed results suggest increased customer satisfaction.
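
The two-stage structure used in such models (commit a first-stage decision before uncertainty resolves, then pay an expected recourse cost) can be illustrated with a deliberately tiny example. The prices, demands, and probabilities below are invented, and a grid search stands in for a proper LP solver.

```python
# Toy two-stage stochastic program, solved by brute force over a grid.
# Stage 1: buy x MWh day-ahead at unit cost c, before demand is known.
# Stage 2 (recourse): cover any shortfall at penalty price q per MWh.

c, q = 40.0, 100.0                                      # day-ahead price, shortfall penalty
scenarios = [(0.3, 80.0), (0.5, 100.0), (0.2, 130.0)]   # (probability, demand)

def expected_cost(x):
    # first-stage cost plus probability-weighted second-stage penalty
    recourse = sum(p * q * max(d - x, 0.0) for p, d in scenarios)
    return c * x + recourse

# Enumerate first-stage decisions on a grid and keep the cheapest.
best_x = min((x * 1.0 for x in range(0, 151)), key=expected_cost)
print(best_x, expected_cost(best_x))
```

The optimum sits where the marginal day-ahead cost equals the expected marginal penalty, i.e. where `c = q * P(demand > x)`; here that is x = 100 MWh.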

  5. The critical domain size of stochastic population models.

    Science.gov (United States)

    Reimer, Jody R; Bonsall, Michael B; Maini, Philip K

    2017-02-01

    Identifying the critical domain size necessary for a population to persist is an important question in ecology. Both demographic and environmental stochasticity impact a population's ability to persist. Here we explore ways of including this variability. We study populations with distinct dispersal and sedentary stages, which have traditionally been modelled using a deterministic integrodifference equation (IDE) framework. Individual-based models (IBMs) are the most intuitive stochastic analogues to IDEs but yield few analytic insights. We explore two alternative approaches: one scales up to the population level using the Central Limit Theorem; the other is a variation on both Galton-Watson branching processes and branching processes in random environments. These branching process models closely approximate the IBM and yield insight into the factors determining the critical domain size for a given population subject to stochasticity.
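
The Galton-Watson machinery mentioned above rests on a classical fact: the extinction probability is the smallest fixed point of the offspring probability generating function, reachable by iterating from zero. A minimal sketch with an invented offspring distribution:

```python
# Extinction probability of a Galton-Watson branching process:
# the smallest root of q = f(q), where f is the offspring probability
# generating function, found by fixed-point iteration from q = 0.

p = [0.2, 0.3, 0.5]          # P(0), P(1), P(2) offspring; mean = 1.3 > 1

def pgf(q):
    return sum(pk * q ** k for k, pk in enumerate(p))

q = 0.0
for _ in range(200):          # iteration converges monotonically upward
    q = pgf(q)

print(round(q, 6))            # supercritical case: extinction prob < 1
```

For this distribution q solves 0.5q² − 0.7q + 0.2 = 0 in [0, 1), giving q = 0.4, so the lineage survives with probability 0.6.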

  6. Log-balanced combinatorial sequences

    Directory of Open Access Journals (Sweden)

    Tomislav Došlic

    2005-01-01

    Full Text Available We consider log-convex sequences that satisfy an additional constraint imposed on their rate of growth. We call such sequences log-balanced. It is shown that all such sequences satisfy a pair of double inequalities. Sufficient conditions for log-balancedness are given for the case when the sequence satisfies a two- (or more-) term linear recurrence. It is shown that many combinatorially interesting sequences belong to this class, and, as a consequence, that the above-mentioned double inequalities are valid for all of them.

  7. Assessing for Structural Understanding in Children's Combinatorial Problem Solving.

    Science.gov (United States)

    English, Lyn

    1999-01-01

    Assesses children's structural understanding of combinatorial problems when presented in a variety of task situations. Provides an explanatory model of students' combinatorial understandings that informs teaching and assessment. Addresses several components of children's structural understanding of elementary combinatorial problems. (Contains 50…

  8. Dynamic combinatorial chemistry with diselenides and disulfides in water

    DEFF Research Database (Denmark)

    Rasmussen, Brian; Sørensen, Anne; Gotfredsen, Henrik

    2014-01-01

    Diselenide exchange is introduced as a reversible reaction in dynamic combinatorial chemistry in water. At neutral pH, diselenides are found to mix with disulfides and form dynamic combinatorial libraries of diselenides, disulfides, and selenenylsulfides.

  9. ON 3-WAY COMBINATORIAL IDENTITIES A. K. AGARWAL MEGHA ...

    Indian Academy of Sciences (India)

    ∗Corresponding author: Department of Basic and Applied Sciences, University College of Engineering, Punjabi ... In this paper we provide combinatorial meanings to two generalized basic ... 2010 Mathematics Subject Classification: 05A15 ...

  10. On some interconnections between combinatorial optimization and extremal graph theory

    Directory of Open Access Journals (Sweden)

    Cvetković Dragoš M.

    2004-01-01

    Full Text Available The uniting feature of combinatorial optimization and extremal graph theory is that in both areas one seeks extrema of a function defined, in most cases, on a finite set. While combinatorial optimization focuses on developing efficient algorithms and heuristics for solving specified types of problems, extremal graph theory deals with finding bounds for various graph invariants under some constraints and with constructing extremal graphs. We analyze by examples some interconnections and interactions of the two theories and propose some conclusions.

  11. Functional completeness of the mixed λ-calculus and combinatory logic

    DEFF Research Database (Denmark)

    Nielson, Hanne Riis; Nielson, Flemming

    1990-01-01

    Functional completeness of the combinatory logic means that every lambda-expression may be translated into an equivalent combinator expression and this is the theoretical basis for the implementation of functional languages on combinator-based abstract machines. To obtain efficient implementations it is important to distinguish between early and late binding times, i.e. to distinguish between compile-time and run-time computations. The authors therefore introduce a two-level version of the lambda-calculus where this distinction is made in an explicit way. Turning to the combinatory logic they generate

  12. Fourier analysis in combinatorial number theory

    International Nuclear Information System (INIS)

    Shkredov, Il'ya D

    2010-01-01

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  13. Fourier analysis in combinatorial number theory

    Energy Technology Data Exchange (ETDEWEB)

    Shkredov, Il' ya D [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)

    2010-09-16

    In this survey applications of harmonic analysis to combinatorial number theory are considered. Discussion topics include classical problems of additive combinatorics, colouring problems, higher-order Fourier analysis, theorems about sets of large trigonometric sums, results on estimates for trigonometric sums over subgroups, and the connection between combinatorial and analytic number theory. Bibliography: 162 titles.

  14. Stochastic resonance in multi-stable coupled systems driven by two driving signals

    Science.gov (United States)

    Xu, Pengfei; Jin, Yanfei

    2018-02-01

    The stochastic resonance (SR) in multi-stable coupled systems subjected to Gaussian white noises and two different driving signals is investigated in this paper. Using the adiabatic approximation and the perturbation method, the coupled systems with four-well potential are transformed into the master equations and the amplitude of the response is obtained. The signal-to-noise ratio (SNR) is calculated numerically to demonstrate the occurrence of SR. For the case of two driving signals with different amplitudes, the interwell resonance between two wells S1 and S3 emerges for strong coupling. The SR can appear in the subsystem with weaker signal amplitude or even without driving signal with the help of coupling. For the case of two driving signals with different frequencies, the effects of SR in two subsystems driven by high and low frequency signals are both weakened with an increase in coupling strength. The stochastic multi-resonance phenomenon is observed in the subsystem subjected to the low frequency signal. Moreover, an effective scheme for phase suppressing SR is proposed by using a relative phase between two driving signals.

  15. Solid-Phase Synthesis of Small Molecule Libraries using Double Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Nielsen, John; Jensen, Flemming R.

    1997-01-01

    The first synthesis of a combinatorial library using double combinatorial chemistry is presented. Coupling of unprotected Fmoc-tyrosine to the solid support was followed by Mitsunobu O-alkylation. Introduction of a diacid linker yields a system in which the double combinatorial step can be demons...

  16. Local formulae for combinatorial Pontryagin classes

    International Nuclear Information System (INIS)

    Gaifullin, Alexander A

    2004-01-01

    Let p(|K|) be the characteristic class of a combinatorial manifold K given by a polynomial p in the rational Pontryagin classes of K. We prove that for any polynomial p there is a function taking each combinatorial manifold K to a cycle z_p(K) in its rational simplicial chains such that: 1) the Poincaré dual of z_p(K) represents the cohomology class p(|K|); 2) the coefficient of each simplex Δ in the cycle z_p(K) is determined solely by the combinatorial type of link Δ. We explicitly describe all such functions for the first Pontryagin class. We obtain estimates for the denominators of the coefficients of the simplices in the cycles z_p(K).

  17. Stochastic analysis of an ecosystem of two competing species

    Indian Academy of Sciences (India)

    Keywords: ecosystem; competing species; stochastic model; Monte Carlo. ... probability density p(g) of the grass density for the same system but for different initial states ... Li Q C, Lin Y K 1995 New stochastic theory for bridge stability in turbulent flow, II.

  18. Heterogenous phase as a mean in combinatorial chemistry

    International Nuclear Information System (INIS)

    Abdel-Hamid, S.G.

    2007-01-01

    Combinatorial chemistry is a rapid and inexpensive technique for the synthesis of hundreds of thousands of organic compounds of potential medicinal activity. In the past few decades a large number of combinatorial libraries have been constructed, significantly supplementing the chemical diversity of the traditional collections of potentially active medicinal compounds. Solid-phase synthesis was used to enrich combinatorial chemistry libraries, through the use of solid supports (resins) and their modified forms. Most of the new libraries of compounds that have appeared recently were synthesized on solid phase. Solid-phase combinatorial chemistry (SPCC) is now considered an outstanding branch of pharmaceutical chemistry research and is used extensively as a tool for drug discovery within the context of high-throughput chemical synthesis. The best pure libraries synthesized by SPCC may well be those of intermediate complexity that are free of artifact-causing nuisance compounds. (author)

  19. MIFT: GIFT Combinatorial Geometry Input to VCS Code

    Science.gov (United States)

    1977-03-01

    BRL Report No. 1967. MIFT: GIFT Combinatorial Geometry Input to VCS Code. Albert E... Type of report: Final. ... A code of the Vehicle Code System (VCS) called MORSE was modified to accept the GIFT combinatorial geometry package. GIFT, as opposed to the geometry package

  20. Comparisons of single-stage and two-stage approaches to genomic selection.

    Science.gov (United States)

    Schulz-Streeck, Torben; Ogutu, Joseph O; Piepho, Hans-Peter

    2013-01-01

    Genomic selection (GS) is a method for predicting breeding values of plants or animals using many molecular markers that is commonly implemented in two stages. In plant breeding the first stage usually involves computation of adjusted means for genotypes which are then used to predict genomic breeding values in the second stage. We compared two classical stage-wise approaches, which either ignore or approximate correlations among the means by a diagonal matrix, and a new method, to a single-stage analysis for GS using ridge regression best linear unbiased prediction (RR-BLUP). The new stage-wise method rotates (orthogonalizes) the adjusted means from the first stage before submitting them to the second stage. This makes the errors approximately independently and identically normally distributed, which is a prerequisite for many procedures that are potentially useful for GS such as machine learning methods (e.g. boosting) and regularized regression methods (e.g. lasso). This is illustrated in this paper using componentwise boosting. The componentwise boosting method minimizes squared error loss using least squares and iteratively and automatically selects markers that are most predictive of genomic breeding values. Results are compared with those of RR-BLUP using fivefold cross-validation. The new stage-wise approach with rotated means was slightly more similar to the single-stage analysis than the classical two-stage approaches based on non-rotated means for two unbalanced datasets. This suggests that rotation is a worthwhile pre-processing step in GS for the two-stage approaches for unbalanced datasets. Moreover, the predictive accuracy of stage-wise RR-BLUP was higher (5.0-6.1%) than that of componentwise boosting.
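
RR-BLUP as described above shrinks all marker effects with a common ridge penalty and predicts breeding values from the shrunken effects. A minimal single-stage sketch on simulated data (not the paper's datasets, stage-wise adjustment, or boosting pipeline):

```python
import numpy as np

# Toy ridge-regression BLUP (RR-BLUP): solve (X'X + lam*I) u = X'y for
# marker effects u, then predict genomic breeding values as X @ u.
# All data below are simulated; lam is an arbitrary ridge parameter.

rng = np.random.default_rng(0)
n, m = 50, 200                                   # genotypes, markers
X = rng.choice([-1.0, 0.0, 1.0], size=(n, m))    # marker codes
true_u = rng.normal(0, 0.1, size=m)              # simulated marker effects
y = X @ true_u + rng.normal(0, 0.5, size=n)      # phenotypes = genetics + noise

lam = 100.0                                      # ridge penalty (sets shrinkage)
u_hat = np.linalg.solve(X.T @ X + lam * np.eye(m), X.T @ y)
gebv = X @ u_hat                                 # genomic estimated breeding values

# In-sample correlation between predicted and simulated genetic values.
r = float(np.corrcoef(gebv, X @ true_u)[0, 1])
print(round(r, 2))
```

In practice the penalty is tied to variance components and accuracy is assessed by cross-validation, as in the paper's fivefold setup.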

  1. Public Transportation Hub Location with Stochastic Demand: An Improved Approach Based on Multiple Attribute Group Decision-Making

    Directory of Open Access Journals (Sweden)

    Sen Liu

    2015-01-01

    Full Text Available Urban public transportation hubs are the key nodes of the public transportation system. The location of such hubs is a combinatorial problem. Many factors can affect the decision-making of location, including both quantitative and qualitative factors; however, most current research focuses solely on either the quantitative or the qualitative factors. Little has been done to combine these two approaches. To fulfill this gap in the research, this paper proposes a novel approach to the public transportation hub location problem, which takes both quantitative and qualitative factors into account. In this paper, an improved multiple attribute group decision-making (MAGDM method based on TOPSIS (Technique for Order Preference by Similarity to Ideal Solution and deviation is proposed to convert the qualitative factors of each hub into quantitative evaluation values. A location model with stochastic passenger flows is then established based on the above evaluation values. Finally, stochastic programming theory is applied to solve the model and to determine the location result. A numerical study shows that this approach is applicable and effective.
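
The TOPSIS step of such methods ranks alternatives by relative closeness to ideal and anti-ideal solutions. A minimal single-decision-maker sketch with an invented decision matrix (the group-decision and deviation extensions of the paper are omitted):

```python
import math

# TOPSIS sketch: rows = candidate hub sites, cols = criteria.
# Matrix, weights, and benefit/cost senses are invented for illustration.
X = [[8.0, 200.0, 0.7],
     [6.0, 150.0, 0.9],
     [9.0, 300.0, 0.6]]
w = [0.5, 0.3, 0.2]
benefit = [True, False, True]     # second criterion is a cost: smaller is better

# Vector-normalize each column, then apply weights.
norms = [math.sqrt(sum(v * v for v in col)) for col in zip(*X)]
V = [[w[j] * X[i][j] / norms[j] for j in range(3)] for i in range(3)]

# Ideal (best) and anti-ideal (worst) points per criterion.
ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*V))]
worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*V))]

def dist(row, ref):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

# Relative closeness in [0, 1]; higher is better.
closeness = [dist(r, worst) / (dist(r, worst) + dist(r, ideal)) for r in V]
ranking = sorted(range(3), key=lambda i: -closeness[i])
print(ranking)
```

The closeness scores feed naturally into a downstream location model as the quantitative evaluation values of each candidate hub.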

  2. Combinatorial optimization theory and algorithms

    CERN Document Server

    Korte, Bernhard

    2018-01-01

    This comprehensive textbook on combinatorial optimization places special emphasis on theoretical results and algorithms with provably good performance, in contrast to heuristics. It is based on numerous courses on combinatorial optimization and specialized topics, mostly at graduate level. This book reviews the fundamentals, covers the classical topics (paths, flows, matching, matroids, NP-completeness, approximation algorithms) in detail, and proceeds to advanced and recent topics, some of which have not appeared in a textbook before. Throughout, it contains complete but concise proofs, and also provides numerous exercises and references. This sixth edition has again been updated, revised, and significantly extended. Among other additions, there are new sections on shallow-light trees, submodular function maximization, smoothed analysis of the knapsack problem, the (ln 4+ɛ)-approximation for Steiner trees, and the VPN theorem. Thus, this book continues to represent the state of the art of combinatorial opti...

  3. Combinatorial stresses kill pathogenic Candida species

    Science.gov (United States)

    Kaloriti, Despoina; Tillmann, Anna; Cook, Emily; Jacobsen, Mette; You, Tao; Lenardon, Megan; Ames, Lauren; Barahona, Mauricio; Chandrasekaran, Komelapriya; Coghill, George; Goodman, Daniel; Gow, Neil A. R.; Grebogi, Celso; Ho, Hsueh-Lui; Ingram, Piers; McDonagh, Andrew; De Moura, Alessandro P. S.; Pang, Wei; Puttnam, Melanie; Radmaneshfar, Elahe; Romano, Maria Carmen; Silk, Daniel; Stark, Jaroslav; Stumpf, Michael; Thiel, Marco; Thorne, Thomas; Usher, Jane; Yin, Zhikang; Haynes, Ken; Brown, Alistair J. P.

    2012-01-01

    Pathogenic microbes exist in dynamic niches and have evolved robust adaptive responses to promote survival in their hosts. The major fungal pathogens of humans, Candida albicans and Candida glabrata, are exposed to a range of environmental stresses in their hosts including osmotic, oxidative and nitrosative stresses. Significant efforts have been devoted to the characterization of the adaptive responses to each of these stresses. In the wild, cells are frequently exposed simultaneously to combinations of these stresses and yet the effects of such combinatorial stresses have not been explored. We have developed a common experimental platform to facilitate the comparison of combinatorial stress responses in C. glabrata and C. albicans. This platform is based on the growth of cells in buffered rich medium at 30°C, and was used to define relatively low, medium and high doses of osmotic (NaCl), oxidative (H2O2) and nitrosative stresses (e.g., dipropylenetriamine (DPTA)-NONOate). The effects of combinatorial stresses were compared with the corresponding individual stresses under these growth conditions. We show for the first time that certain combinations of combinatorial stress are especially potent in terms of their ability to kill C. albicans and C. glabrata and/or inhibit their growth. This was the case for combinations of osmotic plus oxidative stress and for oxidative plus nitrosative stress. We predict that combinatorial stresses may be highly significant in host defences against these pathogenic yeasts. PMID:22463109

  4. The economics of planning electricity transmission to accommodate renewables: Using two-stage optimisation to evaluate flexibility and the cost of disregarding uncertainty

    International Nuclear Information System (INIS)

    Weijde, Adriaan Hendrik van der; Hobbs, Benjamin F.

    2012-01-01

    Aggressive development of renewable electricity sources will require significant expansions in transmission infrastructure. We present a stochastic two-stage optimisation model that captures the multistage nature of transmission planning under uncertainty and use it to evaluate interregional grid reinforcements in Great Britain (GB). In our model, a proactive transmission planner makes investment decisions in two time periods, each time followed by a market response. Uncertainty is represented by economic, technology, and regulatory scenarios, and first-stage investments must be made before it is known which scenario will occur. The model allows us to identify expected cost-minimising first-stage investments, as well as estimate the value of information, the cost of ignoring uncertainty, and the value of flexibility. Our results show that ignoring risk in planning transmission for renewables has quantifiable economic consequences, and that considering uncertainty can yield decisions that have lower expected costs than traditional deterministic planning methods. In the GB case, the value of information and cost of disregarding uncertainty in transmission planning were of the same order of magnitude (approximately £100 M, in present worth terms). Further, the best plan under a risk-neutral decision criterion can differ from the best under risk-aversion. Finally, a traditional sensitivity analysis-based robustness analysis also yields different results than the stochastic model, although the former's expected cost is not much higher.

  5. Toward Chemical Implementation of Encoded Combinatorial Libraries

    DEFF Research Database (Denmark)

    Nielsen, John; Janda, Kim D.

    1994-01-01

    The recent application of "combinatorial libraries" to supplement existing drug screening processes might simplify and accelerate the search for new lead compounds or drugs. Recently, a scheme for encoded combinatorial chemistry was put forward to surmount a number of the limitations possessed...

  6. A New Approach for Proving or Generating Combinatorial Identities

    Science.gov (United States)

    Gonzalez, Luis

    2010-01-01

    A new method for proving, in an immediate way, many combinatorial identities is presented. The method is based on a simple recursive combinatorial formula involving n + 1 arbitrary real parameters. Moreover, this formula enables one not only to prove, but also generate many different combinatorial identities (not being required to know them "a…
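
The paper's recursive formula is not reproduced here, but the kind of identity such methods target can at least be checked numerically, for example the hockey-stick identity:

```python
# Numerical check of the hockey-stick identity
#   sum_{i=r}^{n} C(i, r) = C(n+1, r+1),
# a typical target for methods that prove combinatorial identities.

from math import comb

def hockey_stick_holds(n, r):
    return sum(comb(i, r) for i in range(r, n + 1)) == comb(n + 1, r + 1)

print(all(hockey_stick_holds(n, r) for n in range(20) for r in range(n + 1)))
# → True
```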

  7. Comparative effectiveness of one-stage versus two-stage basilic vein transposition arteriovenous fistulas.

    Science.gov (United States)

    Ghaffarian, Amir A; Griffin, Claire L; Kraiss, Larry W; Sarfati, Mark R; Brooke, Benjamin S

    2018-02-01

    Basilic vein transposition (BVT) fistulas may be performed as either a one-stage or two-stage operation, although there is debate as to which technique is superior. This study was designed to evaluate the comparative clinical efficacy and cost-effectiveness of one-stage vs two-stage BVT. We identified all patients at a single large academic hospital who had undergone creation of either a one-stage or two-stage BVT between January 2007 and January 2015. Data evaluated included patient demographics, comorbidities, medication use, reasons for abandonment, and interventions performed to maintain patency. Costs were derived from the literature, and effectiveness was expressed in quality-adjusted life-years (QALYs). We analyzed primary and secondary functional patency outcomes as well as survival during follow-up between one-stage and two-stage BVT procedures using multivariate Cox proportional hazards models and Kaplan-Meier analysis with log-rank tests. The incremental cost-effectiveness ratio was used to determine cost savings. We identified 131 patients in whom 57 (44%) one-stage BVT and 74 (56%) two-stage BVT fistulas were created during the study period by 8 different vascular surgeons, each of whom performed both procedures. There was no significant difference in the mean age, male gender, white race, diabetes, coronary disease, or medication profile among patients undergoing one- vs two-stage BVT. After fistula transposition, the median follow-up time was 8.3 months (interquartile range, 3-21 months). Primary patency rates of one-stage BVT were 56% at 12-month follow-up, whereas primary patency rates of two-stage BVT were 72% at 12-month follow-up. Patients undergoing two-stage BVT also had significantly higher rates of secondary functional patency at 12 months (57% for one-stage BVT vs 80% for two-stage BVT) and 24 months (44% for one-stage BVT vs 73% for two-stage BVT) of follow-up (P < .001 using log-rank test). However, there was no significant difference

  8. Combinatorial Aspects of the Generalized Euler's Totient

    Directory of Open Access Journals (Sweden)

    Nittiya Pabhapote

    2010-01-01

    Full Text Available A generalized Euler's totient is defined as a Dirichlet convolution of a power function and a product of the Souriau-Hsu-Möbius function with a completely multiplicative function. Two combinatorial aspects of the generalized Euler's totient, namely, its connections to other totients and its relations with counting formulae, are investigated.
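
The construction above generalizes the classical fact that Euler's totient is itself a Dirichlet convolution, φ = μ ∗ N (the Möbius function convolved with the identity function). A direct numerical check of that special case:

```python
from math import gcd

# Dirichlet convolution (f * g)(n) = sum over d | n of f(d) g(n/d),
# checked against the classical identity phi = mu * N.

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    if n == 1:
        return 1
    m, p, count = n, 2, 0
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:        # squared prime factor => mu = 0
                return 0
            count += 1
        else:
            p += 1
    if m > 1:                     # leftover prime factor
        count += 1
    return -1 if count % 2 else 1

def dirichlet(f, g, n):
    return sum(f(d) * g(n // d) for d in divisors(n))

def phi(n):                       # totient by its definition, for comparison
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(all(dirichlet(mobius, lambda m: m, n) == phi(n) for n in range(1, 60)))
# → True
```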

  9. Combinatorial methods with computer applications

    CERN Document Server

    Gross, Jonathan L

    2007-01-01

    Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp

  10. Stochastic resonance and noise delayed extinction in a model of two competing species

    Science.gov (United States)

    Valenti, D.; Fiasconaro, A.; Spagnolo, B.

    2004-01-01

    We study the role of the noise in the dynamics of two competing species. We consider generalized Lotka-Volterra equations in the presence of a multiplicative noise, which models the interaction between the species and the environment. The interaction parameter between the species is a random process which obeys a stochastic differential equation with a generalized bistable potential in the presence of a periodic driving term, which accounts for the environment temperature variation. We find noise-induced periodic oscillations of the species concentrations and stochastic resonance phenomenon. We find also a nonmonotonic behavior of the mean extinction time of one of the two competing species as a function of the additive noise intensity.
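
A generic stochastic Lotka-Volterra competition model with multiplicative noise can be simulated with the Euler-Maruyama scheme. The coefficients below are illustrative, and the driving process is simplified to plain Gaussian white noise (the paper's bistable interaction parameter and periodic forcing are omitted):

```python
import math
import random

# Euler-Maruyama integration of two competing species with
# multiplicative noise:
#   dx = mu*x*(1 - x/K - beta*y) dt + sqrt(eps)*x dW_x
#   dy = mu*y*(1 - y/K - beta*x) dt + sqrt(eps)*y dW_y

random.seed(1)
mu, K, beta = 1.0, 1.0, 0.8        # growth rate, capacity, competition
eps = 0.05                          # noise intensity
dt, steps = 1e-3, 5000
x, y = 0.5, 0.5                     # initial densities

for _ in range(steps):
    dWx = random.gauss(0.0, math.sqrt(dt))   # Wiener increments
    dWy = random.gauss(0.0, math.sqrt(dt))
    x += mu * x * (1 - x / K - beta * y) * dt + math.sqrt(eps) * x * dWx
    y += mu * y * (1 - y / K - beta * x) * dt + math.sqrt(eps) * y * dWy
    x, y = max(x, 0.0), max(y, 0.0)          # densities stay non-negative

print(round(x, 3), round(y, 3))
```

With beta < 1 the deterministic system has a coexistence equilibrium at x = y = 1/(1 + beta); the noise makes the densities fluctuate around it, and repeated runs over noise intensities would reproduce extinction-time statistics of the kind studied above.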

  11. View Discovery in OLAP Databases through Statistical Combinatorial Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Joslyn, Cliff A.; Burke, Edward J.; Critchlow, Terence J.

    2009-05-01

    The capability of OLAP database software systems to handle data complexity comes at a high price for analysts, presenting them with a combinatorially vast space of views of a relational database. We respond to the need to deploy technologies sufficient to allow users to guide themselves to areas of local structure by casting the space of "views" of an OLAP database as a combinatorial object of all projections and subsets, and "view discovery" as a search process over that lattice. We equip the view lattice with statistical information-theoretical measures sufficient to support a combinatorial optimization process. We outline "hop-chaining" as a particular view discovery algorithm over this object, wherein users are guided across a permutation of the dimensions by searching for successive two-dimensional views, pushing seen dimensions into an increasingly large background filter in a "spiraling" search process. We illustrate this work in the context of data cubes recording summary statistics for radiation portal monitors at US ports.
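    The core step of scoring candidate two-dimensional views with an information-theoretic measure can be sketched as follows. The data, field names, and the entropy criterion here are hypothetical stand-ins; the actual hop-chaining algorithm and measures are those described in the paper:

```python
from itertools import combinations
from collections import Counter
from math import log2

# Hypothetical flattened cube: (port, monitor, hour) tuples -- invented data
rows = [
    ("A", "m1", 0), ("A", "m1", 1), ("A", "m2", 0),
    ("B", "m1", 1), ("B", "m2", 1), ("B", "m2", 0),
    ("A", "m2", 1),
]
dims = {"port": 0, "monitor": 1, "hour": 2}

def view_entropy(dim_pair):
    # Shannon entropy of the projection of the cube onto two dimensions
    counts = Counter((r[dims[dim_pair[0]]], r[dims[dim_pair[1]]]) for r in rows)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Rank every two-dimensional view by entropy -- one greedy "hop"
ranked = sorted(combinations(dims, 2), key=view_entropy, reverse=True)
```

    Hop-chaining would then fix the winning pair's dimensions into the background filter and repeat the ranking over the remaining dimensions.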

  12. Design considerations for single-stage and two-stage pneumatic pellet injectors

    International Nuclear Information System (INIS)

    Gouge, M.J.; Combs, S.K.; Fisher, P.W.; Milora, S.L.

    1988-09-01

    Performance of single-stage pneumatic pellet injectors is compared with several models for one-dimensional, compressible fluid flow. Agreement is quite good for models that reflect actual breech chamber geometry and incorporate nonideal effects such as gas friction. Several methods of improving the performance of single-stage pneumatic pellet injectors in the near term are outlined. The design and performance of two-stage pneumatic pellet injectors are discussed, and initial data from the two-stage pneumatic pellet injector test facility at Oak Ridge National Laboratory are presented. Finally, a concept for a repeating two-stage pneumatic pellet injector is described. 27 refs., 8 figs., 3 tabs

  13. A two-stage inexact joint-probabilistic programming method for air quality management under uncertainty.

    Science.gov (United States)

    Lv, Y; Huang, G H; Li, Y P; Yang, Z F; Sun, W

    2011-03-01

    A two-stage inexact joint-probabilistic programming (TIJP) method is developed for planning a regional air quality management system with multiple pollutants and multiple sources. The TIJP method incorporates the techniques of two-stage stochastic programming, joint-probabilistic constraint programming and interval mathematical programming, where uncertainties expressed as probability distributions and interval values can be addressed. Moreover, it can not only examine the risk of violating joint-probability constraints, but also account for economic penalties as corrective measures against any infeasibility. The developed TIJP method is applied to a case study of a regional air pollution control problem, where the air quality index (AQI) is introduced for evaluation of the integrated air quality management system associated with multiple pollutants. The joint-probability exists in the environmental constraints for AQI, such that individual probabilistic constraints for each pollutant can be efficiently incorporated within the TIJP model. The results indicate that useful solutions for air quality management practices have been generated; they can help decision makers to identify desired pollution abatement strategies with minimized system cost and maximized environmental efficiency. Copyright © 2010 Elsevier Ltd. All rights reserved.
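    The recourse structure shared by this family of models (a first-stage commitment, then a scenario-dependent second-stage penalty for any violation) can be illustrated with a deliberately tiny example. All numbers below are invented; the real TIJP model additionally carries interval coefficients and joint-probabilistic constraints:

```python
# Scenarios: (probability, available resource) -- illustrative numbers
scenarios = [(0.2, 3.0), (0.6, 5.0), (0.2, 7.0)]
BENEFIT, PENALTY = 10.0, 18.0  # benefit per promised unit, penalty per unit short

def expected_net_benefit(x):
    """First stage: promise x units.  Second-stage recourse: pay a penalty
    on the shortfall x - w in each scenario where supply w falls short."""
    recourse = sum(p * PENALTY * max(0.0, x - w) for p, w in scenarios)
    return BENEFIT * x - recourse

# Brute-force search over a grid of first-stage decisions
best_x = max((i * 0.1 for i in range(0, 101)), key=expected_net_benefit)
```

    The optimum sits where the marginal benefit (10) is overtaken by the marginal expected penalty (18 times the shortfall probability), i.e. at the 5.0-unit supply level; real models replace the grid search with a (mixed-integer) linear program.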

  14. The Yoccoz Combinatorial Analytic Invariant

    DEFF Research Database (Denmark)

    Petersen, Carsten Lunde; Roesch, Pascale

    2008-01-01

    In this paper we develop a combinatorial analytic encoding of the Mandelbrot set M. The encoding is implicit in Yoccoz' proof of local connectivity of M at any Yoccoz parameter, i.e. any at most finitely renormalizable parameter for which all periodic orbits are repelling. Using this encoding we ...... to reprove that the dyadic veins of M are arcs and that more generally any two Yoccoz parameters are joined by a unique ruled (in the sense of Douady-Hubbard) arc in M....

  15. Two-Stage Series-Resonant Inverter

    Science.gov (United States)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  16. Gas-Foamed Scaffold Gradients for Combinatorial Screening in 3D

    Directory of Open Access Journals (Sweden)

    Joachim Kohn

    2012-03-01

    Full Text Available Current methods for screening cell-material interactions typically utilize a two-dimensional (2D culture format where cells are cultured on flat surfaces. However, there is a need for combinatorial and high-throughput screening methods to systematically screen cell-biomaterial interactions in three-dimensional (3D tissue scaffolds for tissue engineering. Previously, we developed a two-syringe pump approach for making 3D scaffold gradients for use in combinatorial screening of salt-leached scaffolds. Herein, we demonstrate that the two-syringe pump approach can also be used to create scaffold gradients using a gas-foaming approach. Macroporous foams prepared by a gas-foaming technique are commonly used for fabrication of tissue engineering scaffolds due to their high interconnectivity and good mechanical properties. Gas-foamed scaffold gradient libraries were fabricated from two biodegradable tyrosine-derived polycarbonates: poly(desaminotyrosyl-tyrosine ethyl ester carbonate (pDTEc and poly(desaminotyrosyl-tyrosine octyl ester carbonate (pDTOc. The composition of the libraries was assessed with Fourier transform infrared spectroscopy (FTIR and showed that pDTEc/pDTOc gas-foamed scaffold gradients could be repeatably fabricated. Scanning electron microscopy showed that scaffold morphology was similar between the pDTEc-rich ends and the pDTOc-rich ends of the gradient. These results introduce a method for fabricating gas-foamed polymer scaffold gradients that can be used for combinatorial screening of cell-material interactions in 3D.

  17. Stochastic local search foundations and applications

    CERN Document Server

    Hoos, Holger H; Stutzle, Thomas

    2004-01-01

    Stochastic local search (SLS) algorithms are among the most prominent and successful techniques for solving computationally difficult problems in many areas of computer science and operations research, including propositional satisfiability, constraint satisfaction, routing, and scheduling. SLS algorithms have also become increasingly popular for solving challenging combinatorial problems in many application areas, such as e-commerce and bioinformatics. Hoos and Stützle offer the first systematic and unified treatment of SLS algorithms. In this groundbreaking new book, they examine the general concepts and specific instances of SLS algorithms and carefully consider their development, analysis and application. The discussion focuses on the most successful SLS methods and explores their underlying principles, properties, and features. This book gives hands-on experience with some of the most widely used search techniques, and provides readers with the necessary understanding and skills to use this powerful too...
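    A minimal WalkSAT-style stochastic local search, of the kind this book analyzes for propositional satisfiability, can be sketched in a few lines. The instance, noise parameter, and flip budget are illustrative:

```python
import random

# Tiny CNF: each clause lists literals; positive i means x_i, negative means NOT x_i
clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]

def unsatisfied(assign, clauses):
    return [c for c in clauses
            if not any(assign[abs(l)] == (l > 0) for l in c)]

def walksat(clauses, n_vars, max_flips=1000, p_noise=0.5, seed=7):
    """Minimal WalkSAT-style search: repeatedly pick an unsatisfied clause
    and flip one of its variables (randomly with probability p_noise,
    otherwise greedily)."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = unsatisfied(assign, clauses)
        if not unsat:
            return assign  # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p_noise:
            var = abs(rng.choice(clause))  # random walk step
        else:
            # greedy step: flip the variable leaving the fewest unsatisfied clauses
            def cost(v):
                assign[v] = not assign[v]
                c = len(unsatisfied(assign, clauses))
                assign[v] = not assign[v]
                return c
            var = min((abs(l) for l in clause), key=cost)
        assign[var] = not assign[var]
    return None  # no model found within the flip budget
```

    The random-walk component is what prevents the greedy descent from cycling in local minima, the central theme of the book's runtime analyses.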

  18. Analysis and design of algorithms for combinatorial problems

    CERN Document Server

    Ausiello, G

    1985-01-01

    Combinatorial problems have been part of the history of mathematics from the very beginning. By the Sixties, the main classes of combinatorial problems had been defined. During that decade, a great number of research contributions in graph theory were produced, which laid the foundations for most of the research in graph optimization in the following years. During the Seventies, a large number of special-purpose models were developed. The impressive growth of this field since then has been strongly driven by the demand of applications and influenced by technological increases in computing power and the availability of data and software. The availability of such basic tools has made feasible the exact or well-approximated solution of large-scale realistic combinatorial optimization problems and has created a number of new combinatorial problems.

  19. Two-stage anaerobic digestion of cheese whey

    Energy Technology Data Exchange (ETDEWEB)

    Lo, K V; Liao, P H

    1986-01-01

    A two-stage digestion of cheese whey was studied using two anaerobic rotating biological contact reactors. The second-stage reactor receiving partially treated effluent from the first-stage reactor could be operated at a hydraulic retention time of one day. The results indicated that two-stage digestion is a feasible alternative for treating whey. 6 references.

  20. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can...

  1. Designing time-of-use program based on stochastic security constrained unit commitment considering reliability index

    International Nuclear Information System (INIS)

    Nikzad, Mehdi; Mozafari, Babak; Bashirvand, Mahdi; Solaymani, Soodabeh; Ranjbar, Ali Mohamad

    2012-01-01

    Recently in electricity markets, a massive focus has been placed on setting up opportunities for demand-side participation. Such opportunities, also known as demand response (DR) options, are triggered by either a grid reliability problem or high electricity prices. Two important challenges that market operators face are the appropriate design and reasonable pricing of DR options. In this paper, the time-of-use (TOU) program, a prevalent time-varying program, is modeled linearly based on own- and cross-elasticity definitions. In order to decide on TOU rates, a stochastic model is proposed in which the optimum TOU rates are determined based on a grid reliability index set by the operator. Expected Load Not Supplied (ELNS) is used to evaluate the reliability of the power system in each hour. The proposed stochastic model is formulated as a two-stage stochastic mixed-integer linear programming (SMILP) problem and solved using the CPLEX solver. The validity of the method is tested on the IEEE 24-bus test system. In this regard, the impact of the proposed pricing method on the system load profile, operational costs, and the required capacity of up- and down-spinning reserve, as well as the improvement of the load factor, is demonstrated. The sensitivity of the results to the elasticity coefficients is also investigated. -- Highlights: ► Time-of-use demand response program is linearly modeled. ► A stochastic model is proposed to determine the optimum TOU rates based on ELNS index set by the operator. ► The model is formulated as a short-term two-stage stochastic mixed-integer linear programming problem.
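    The linear own/cross-elasticity load model used in such TOU studies can be sketched as follows. The elasticity matrix, baseline loads, and prices below are illustrative placeholders, not data from the paper:

```python
# Baseline loads (MW) and flat price for three periods: valley, shoulder, peak
d0 = [60.0, 80.0, 100.0]
p0 = [50.0, 50.0, 50.0]
# Elasticity matrix: diagonal = own elasticity (negative), off-diagonal = cross
E = [[-0.10, 0.010, 0.012],
     [0.010, -0.10, 0.016],
     [0.012, 0.016, -0.10]]

def tou_response(p):
    """Linear own/cross-elasticity load model (illustrative coefficients):
    d_i = d0_i * (1 + sum_j E_ij * (p_j - p0_j) / p0_j)."""
    return [d0[i] * (1 + sum(E[i][j] * (p[j] - p0[j]) / p0[j]
                             for j in range(3)))
            for i in range(3)]

# TOU tariff: cheaper valley, pricier peak -> load shifts off-peak
d = tou_response([30.0, 50.0, 70.0])
```

    A negative diagonal makes consumption fall when its own price rises, while positive cross terms shift that consumption into cheaper periods, flattening the load profile.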

  2. Stochastic dynamics for two biological species and ecological niches

    Science.gov (United States)

    Ruziska, Flávia M.; Arashiro, Everaldo; Tomé, Tânia

    2018-01-01

    We consider an ecological system in which two species interact with two niches. To this end we introduce a stochastic model with four states. Our analysis is founded in three approaches: Monte Carlo simulations of the model on a square lattice, mean-field approximation, and birth and death master equation. From this last approach we obtain a description in terms of Langevin equations which show in an explicit way the role of noise in population biology. We focus mainly on the description of time oscillations of the species population and the alternating dominance between them. The model treated here may provide insights on these properties.

  3. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail

    2012-04-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general two-stage M-estimator, and provide their interpretations. We illustrate our results in the case of the two-stage maximum likelihood estimator and the two-stage least squares estimator. © 2011.
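    For the two-stage least squares estimator mentioned above, a self-contained numerical sketch (synthetic data with invented coefficients) shows the familiar contrast with naive OLS under an unobserved confounder:

```python
import random

def ols_slope(x, y):
    # Simple-regression slope: cov(x, y) / var(x)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def two_stage_ls(z, x, y):
    """Two-stage least squares with a single instrument z:
    stage 1 projects x onto z, stage 2 regresses y on the projection."""
    g = ols_slope(z, x)            # stage 1: x ~ g * z
    x_hat = [g * zi for zi in z]
    return ols_slope(x_hat, y)     # stage 2

# Synthetic data with an unobserved confounder u (true causal slope = 2)
rng = random.Random(0)
n = 20000
z = [rng.gauss(0, 1) for _ in range(n)]
u = [rng.gauss(0, 1) for _ in range(n)]
x = [zi + ui + rng.gauss(0, 1) for zi, ui in zip(z, u)]
y = [2 * xi + 3 * ui + rng.gauss(0, 1) for xi, ui in zip(x, u)]

naive = ols_slope(x, y)      # biased upward by the confounder (around 3)
iv = two_stage_ls(z, x, y)   # close to the true slope 2
```

    The robustness question studied in the note concerns how outliers in either stage propagate through exactly this kind of plug-in construction.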

  4. Some Combinatorial Interpretations and Applications of Fuss-Catalan Numbers

    OpenAIRE

    Lin, Chin-Hung

    2011-01-01

    The Fuss-Catalan numbers are a family of generalized Catalan numbers. We begin with two definitions of the Fuss-Catalan numbers and some basic properties, and we give some combinatorial interpretations different from those of the original Catalan numbers. Finally, we generalize Jonah's theorem as an application.
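    The standard closed form A_m(p, r) = r/(mp + r) · C(mp + r, m) makes the family easy to tabulate; for p = 2, r = 1 it reduces to the ordinary Catalan numbers:

```python
from math import comb

def fuss_catalan(m, p, r):
    """A_m(p, r) = r / (m*p + r) * C(m*p + r, m).
    The division is always exact, so integer division is safe."""
    return r * comb(m * p + r, m) // (m * p + r)
```

    For example, fuss_catalan(m, 3, 1) counts ternary trees with m internal nodes.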

  5. Morphological Constraints on Cerebellar Granule Cell Combinatorial Diversity.

    Science.gov (United States)

    Gilmer, Jesse I; Person, Abigail L

    2017-12-13

    Combinatorial expansion by the cerebellar granule cell layer (GCL) is fundamental to theories of cerebellar contributions to motor control and learning. Granule cells (GrCs) sample approximately four mossy fiber inputs and are thought to form a combinatorial code useful for pattern separation and learning. We constructed a spatially realistic model of the cerebellar GCL and examined how GCL architecture contributes to GrC combinatorial diversity. We found that GrC combinatorial diversity saturates quickly as mossy fiber input diversity increases, and that this saturation is in part a consequence of short dendrites, which limit access to diverse inputs and favor dense sampling of local inputs. This local sampling also produced GrCs that were combinatorially redundant, even when input diversity was extremely high. In addition, we found that mossy fiber clustering, which is a common anatomical pattern, also led to increased redundancy of GrC input combinations. We related this redundancy to hypothesized roles of temporal expansion of GrC information encoding in service of learned timing, and we show that GCL architecture produces GrC populations that support both temporal and combinatorial expansion. Finally, we used novel anatomical measurements from mice of either sex to inform modeling of sparse and filopodia-bearing mossy fibers, finding that these circuit features uniquely contribute to enhancing GrC diversification and redundancy. Our results complement information theoretic studies of granule layer structure and provide insight into the contributions of granule layer anatomical features to afferent mixing. SIGNIFICANCE STATEMENT Cerebellar granule cells are among the simplest neurons, with tiny somata and, on average, just four dendrites. These characteristics, along with their dense organization, inspired influential theoretical work on the granule cell layer as a combinatorial expander, where each granule cell represents a unique combination of inputs
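    The saturation-and-redundancy argument can be caricatured with a sampling simulation: each model granule cell draws four mossy fiber inputs at random, and combinatorial redundancy is read off from repeated input sets. This toy deliberately ignores the spatial and morphological constraints that are the paper's actual subject:

```python
import random

def unique_combination_fraction(n_grc, n_mf, dendrites=4, seed=1):
    """Each simulated granule cell samples `dendrites` distinct mossy fibers
    uniformly at random; returns the fraction of cells whose input
    combination is unique in the population (a crude diversity proxy)."""
    rng = random.Random(seed)
    combos = [frozenset(rng.sample(range(n_mf), dendrites))
              for _ in range(n_grc)]
    counts = {}
    for c in combos:
        counts[c] = counts.get(c, 0) + 1
    return sum(1 for c in combos if counts[c] == 1) / n_grc

# Redundancy falls as the pool of accessible inputs grows
low = unique_combination_fraction(1000, 20)
high = unique_combination_fraction(1000, 200)
```

    Short dendrites correspond to a small effective n_mf (dense local sampling), which is exactly the regime where duplicate combinations become common.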

  6. Sensitivity Analysis in Two-Stage DEA

    Directory of Open Access Journals (Sweden)

    Athena Forghani

    2015-07-01

    Full Text Available Data envelopment analysis (DEA) is a method for measuring the efficiency of peer decision making units (DMUs) which use a set of inputs to produce a set of outputs. In some cases, DMUs have a two-stage structure, in which the first stage utilizes inputs to produce outputs used as the inputs of the second stage to produce final outputs. One important issue in two-stage DEA is the sensitivity of the results of an analysis to perturbations in the data. The current paper looks into a combined model for two-stage DEA and applies sensitivity analysis to DMUs on the entire frontier. In fact, necessary and sufficient conditions for preserving a DMU's efficiency classification are developed when various data changes are applied to all DMUs.

  8. A complementarity model for solving stochastic natural gas market equilibria

    International Nuclear Information System (INIS)

    Jifang Zhuang; Gabriel, S.A.

    2008-01-01

    This paper presents a stochastic equilibrium model for deregulated natural gas markets. Each market participant (pipeline operators, producers, etc.) solves a stochastic optimization problem whose optimality conditions, when combined with market-clearing conditions, give rise to a certain mixed complementarity problem (MiCP). The stochastic aspects are captured by a recourse problem for each player in which the first-stage decisions relate to long-term contracts and the second-stage decisions relate to spot market activities for three seasons. Besides showing that such a market model is an instance of a MiCP, we provide theoretical results concerning long-term and spot market prices and solve the resulting MiCP for a small yet representative market. We also note an interesting observation on the value of the stochastic solution for non-optimization problems. (author)

  10. Concept of combinatorial de novo design of drug-like molecules by particle swarm optimization.

    Science.gov (United States)

    Hartenfeller, Markus; Proschak, Ewgenij; Schüller, Andreas; Schneider, Gisbert

    2008-07-01

    We present a fast stochastic optimization algorithm for fragment-based molecular de novo design (COLIBREE, Combinatorial Library Breeding). The search strategy is based on a discrete version of particle swarm optimization. Molecules are represented by a scaffold, which remains constant during optimization, and variable linkers and side chains. Different linkers represent virtual chemical reactions. Side-chain building blocks were obtained from pseudo-retrosynthetic dissection of large compound databases. Here, ligand-based design was performed using chemically advanced template search (CATS) topological pharmacophore similarity to reference ligands as fitness function. A weighting scheme was included for particle swarm optimization-based molecular design, which permits the use of many reference ligands and allows for positive and negative design to be performed simultaneously. In a case study, the approach was applied to the de novo design of potential peroxisome proliferator-activated receptor subtype-selective agonists. The results demonstrate the ability of the technique to cope with large combinatorial chemistry spaces and its applicability to focused library design. The technique was able to perform exploitation of a known scheme and at the same time explorative search for novel ligands within the framework of a given molecular core structure. It thereby represents a practical solution for compound screening in the early hit and lead finding phase of a drug discovery project.
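    A stripped-down discrete particle swarm of the kind COLIBREE builds on can be sketched as follows. The update rule here is a common textbook simplification (probabilistic copying from personal and global bests plus random exploration), not the paper's actual operator set, and the fitness is a toy stand-in for the CATS pharmacophore similarity:

```python
import random

def discrete_pso(n_positions, n_blocks, fitness, n_particles=12,
                 iters=60, seed=3):
    """Minimal discrete particle swarm: each particle is a vector of
    building-block indices; per-position probabilities of copying from
    the personal or global best play the role of velocities."""
    rng = random.Random(seed)
    swarm = [[rng.randrange(n_blocks) for _ in range(n_positions)]
             for _ in range(n_particles)]
    pbest = [list(p) for p in swarm]
    gbest = max(swarm, key=fitness)
    for _ in range(iters):
        for i, p in enumerate(swarm):
            for j in range(n_positions):
                r = rng.random()
                if r < 0.4:
                    p[j] = pbest[i][j]              # pull toward personal best
                elif r < 0.8:
                    p[j] = gbest[j]                 # pull toward global best
                elif r < 0.9:
                    p[j] = rng.randrange(n_blocks)  # random exploration
            if fitness(p) > fitness(pbest[i]):
                pbest[i] = list(p)
        gbest = max(pbest + [gbest], key=fitness)
    return gbest

# Toy fitness: similarity to a hidden "ideal" combination of fragments
target = [4, 1, 7, 2, 5]
fit = lambda mol: sum(a == b for a, b in zip(mol, target))
best = discrete_pso(5, 8, fit)
```

    In the real setting each position indexes a side-chain building block around a fixed scaffold, and the fitness mixes positive and negative reference-ligand similarities through the weighting scheme described in the paper.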

  11. Two stage-type railgun accelerator

    International Nuclear Information System (INIS)

    Ogino, Mutsuo; Azuma, Kingo.

    1995-01-01

    The present invention provides a two-stage railgun accelerator capable of injecting a flying body (an ice pellet formed by solidifying a gaseous hydrogen isotope fuel) into the central portion of the plasma of a thermonuclear reactor at higher speed. Namely, the two-stage railgun accelerator accelerates the flying body, injected from an initial-stage accelerator into the region between the rails, by the Lorentz force generated when electric current is supplied to the two rails by way of a plasma armature. In this case, two sets of solenoid coils are disposed for compressing the plasma armature in the longitudinal direction of the rails. The first and second sets of solenoid coils are supplied with electric current in advance. After passage of the flying body, the armature, formed into a plasma by a gas laser disposed behind the flying body, is compressed in the longitudinal direction of the rails by the magnetic force of the first and second sets of solenoid coils, increasing the plasma density and, simultaneously, the current density. Then, the current in the first solenoid coil is turned off to accelerate the flying body in two stages by the compressed plasma armature. (I.S.)

  12. Maximization of Tsallis entropy in the combinatorial formulation

    International Nuclear Information System (INIS)

    Suyari, Hiroki

    2010-01-01

    This paper presents the mathematical reformulation for maximization of the Tsallis entropy S_q in the combinatorial sense. More concretely, we generalize the original derivation of the Maxwell-Boltzmann distribution law to Tsallis statistics by means of the corresponding generalized multinomial coefficient. Our results reveal that maximization of S_{2-q} under the usual expectation, or of S_q under the q-average using the escort expectation, is naturally derived from the combinatorial formulations for Tsallis statistics with the respective combinatorial dualities, that is, one for the additive duality and the other for the multiplicative duality.
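    The object being maximized has the compact form S_q(p) = (1 − Σ_i p_i^q)/(q − 1), which recovers the Shannon entropy in the limit q → 1. A small sketch:

```python
from math import log

def tsallis_entropy(p, q):
    """S_q = (1 - sum_i p_i**q) / (q - 1); Shannon entropy in the q -> 1 limit."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Absent constraints, the uniform distribution maximizes S_q (q = 2 shown)
uniform = [0.25] * 4
skewed = [0.5, 0.3, 0.1, 0.1]
```

    The combinatorial derivation in the paper reaches the same maximizers by generalizing the multinomial-coefficient argument rather than by Lagrange multipliers.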

  13. Jack superpolynomials: physical and combinatorial definitions

    International Nuclear Information System (INIS)

    Desrosiers, P.; Mathieu, P.; Lapointe, L.

    2004-01-01

    Jack superpolynomials are eigenfunctions of the supersymmetric extension of the quantum trigonometric Calogero-Moser-Sutherland Hamiltonian. They are orthogonal with respect to the scalar product, dubbed physical, that is naturally induced by this quantum-mechanical problem. But Jack superpolynomials can also be defined more combinatorially, starting from the multiplicative bases of symmetric superpolynomials, enforcing orthogonality with respect to a one-parameter deformation of the combinatorial scalar product. Both constructions turn out to be equivalent. (author)

  14. Torus actions, combinatorial topology, and homological algebra

    International Nuclear Information System (INIS)

    Bukhshtaber, V M; Panov, T E

    2000-01-01

    This paper is a survey of new results and open problems connected with fundamental combinatorial concepts, including polytopes, simplicial complexes, cubical complexes, and arrangements of subspaces. Attention is concentrated on simplicial and cubical subdivisions of manifolds, and especially on spheres. Important constructions are described that enable one to study these combinatorial objects by using commutative and homological algebra. The proposed approach to combinatorial problems is based on the theory of moment-angle complexes recently developed by the authors. The crucial construction assigns to each simplicial complex K with m vertices a T^m-space Z_K with a special bigraded cellular decomposition. In the framework of this theory, well-known non-singular toric varieties arise as orbit spaces of maximally free actions of subtori on moment-angle complexes corresponding to simplicial spheres. It is shown that diverse invariants of simplicial complexes and related combinatorial-geometric objects can be expressed in terms of bigraded cohomology rings of the corresponding moment-angle complexes. Finally, it is shown that the new relationships between combinatorics, geometry, and topology lead to solutions of some well-known topological problems

  15. Use of combinatorial pharmacogenomic testing in two cases from community psychiatry

    Directory of Open Access Journals (Sweden)

    Fields ES

    2016-08-01

    Full Text Available Eve S Fields,1 Raymond A Lorenz,2 Joel G Winner2 1Northwest Center for Community Mental Health, Reston, VA, USA; 2Assurex Health, Mason, OH, USA Abstract: This report describes two cases in which pharmacogenomic testing was utilized to guide medication selection for difficult-to-treat patients. The first patient is a 29-year-old male with bipolar disorder who had severe akathisia due to his long-acting injectable antipsychotic. The second patient is a 59-year-old female with major depressive disorder who was not responding to her medication. In both cases, a proprietary combinatorial pharmacogenomic test was used to inform medication changes and improve patient outcomes. The first patient was switched to a long-acting injectable that was not affected by his genetic profile, and his adverse effects abated. The second patient had her medications discontinued due to the results of the genetic testing, and more intense psychotherapy was initiated. While pharmacogenomic testing may be helpful in cases such as those presented here, it should never serve as a proxy for a comprehensive biopsychosocial approach. Pharmacogenomic information may be selectively added to this comprehensive approach to support medication treatment. Keywords: pharmacogenomics, adverse effects, risperidone, nortriptyline, paliperidone

  16. A robust decision-making approach for p-hub median location problems based on two-stage stochastic programming and mean-variance theory : a real case study

    NARCIS (Netherlands)

    Ahmadi, T.; Karimi, H.; Davoudpour, H.

    2015-01-01

    The stochastic location-allocation p-hub median problems are related to long-term decisions made in risky situations. Due to the importance of this type of problems in real-world applications, the authors were motivated to propose an approach to obtain more reliable policies in stochastic

  17. Dynamic combinatorial libraries based on hydrogen-bonded molecular boxes

    NARCIS (Netherlands)

    Kerckhoffs, J.M.C.A.; Mateos timoneda, Miguel; Reinhoudt, David; Crego Calama, Mercedes

    2007-01-01

    This article describes two different types of dynamic combinatorial libraries of host and guest molecules. The first part of this article describes the encapsulation of alizarin trimer 2 a3 by dynamic mixtures of up to twenty different self-assembled molecular receptors together with the

  18. A Two-Stage Stochastic Mixed-Integer Programming Approach to the Smart House Scheduling Problem

    Science.gov (United States)

    Ozoe, Shunsuke; Tanaka, Yoichi; Fukushima, Masao

    A “Smart House” is a highly energy-optimized house equipped with photovoltaic systems (PV systems), electric battery systems, fuel cell cogeneration systems (FC systems), electric vehicles (EVs) and so on. Smart houses are attracting much attention recently thanks to their enhanced ability to save energy by making full use of renewable energy and by achieving power grid stability despite an increased power draw for installed PV systems. Yet running a smart house's power system, with its multiple power sources and power storages, is no simple task. In this paper, we consider the problem of power scheduling for a smart house with a PV system, an FC system and an EV. We formulate the problem as a mixed integer programming problem, and then extend it to a stochastic programming problem involving recourse costs to cope with uncertain electricity demand, heat demand and PV power generation. Using our method, we seek to achieve the optimal power schedule running at the minimum expected operation cost. We present some results of numerical experiments with data on real-life demands and PV power generation to show the effectiveness of our method.

  19. Two-stage implant systems.

    Science.gov (United States)

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  20. Two-step two-stage fission gas release model

    International Nuclear Information System (INIS)

    Kim, Yong-soo; Lee, Chan-bock

    2006-01-01

    Based on a recent theoretical model, a two-step two-stage model is developed that incorporates two-stage diffusion processes (grain-lattice and grain-boundary diffusion) coupled with a two-step burn-up factor for the low and high burn-up regimes. The FRAPCON-3 code and its in-pile data sets have been used for benchmarking and validation of this model. Results reveal that its prediction is in better agreement with the experimental measurements than that of any model contained in the FRAPCON-3 code, such as ANS 5.4, modified ANS 5.4, and the Forsberg-Massih model, over the whole burn-up range up to 70,000 MWd/MTU. (author)

  1. Two-stage revision of septic knee prosthesis with articulating knee spacers yields better infection eradication rate than one-stage or two-stage revision with static spacers.

    Science.gov (United States)

    Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L

    2012-12-01

    The best method for treating chronic periprosthetic knee infection remains controversial. Randomized, comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision and static versus articulating spacers in two-stage procedures. We reviewed full-text papers and those with an abstract in English published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with two different types of spacers. In all, 6 original articles reporting the results after one-stage knee exchange arthroplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate in the eradication of infection was 89.8% after a two-stage revision and 81.9% after a one-stage procedure at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2% versus 87%). The methodological limitations of this study and the heterogeneous material in the studies reviewed notwithstanding, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of eradication of infection than one-stage revision for septic knee prosthesis and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. Level of Evidence: IV.
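The size of the reported difference can be gauged with a simple two-proportion z-test. This is an illustrative calculation, not the review's own analysis: the success counts are back-calculated from the reported rates (89.8% of n = 1,421 two-stage revisions vs. 81.9% of n = 204 one-stage revisions), so the result is approximate.

```python
import math

# Back-calculated from the reported eradication rates; approximate counts.
n1, p1 = 1421, 0.898   # two-stage revisions
n2, p2 = 204, 0.819    # one-stage revisions

# Pooled proportion and standard error for a two-proportion z-test.
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
```

With these figures z comes out near 3.3, beyond the conventional 1.96 threshold, consistent with the review's conclusion that the difference is significant.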

  2. Two stages of economic development

    OpenAIRE

    Gong, Gang

    2016-01-01

    This study suggests that the development process of a less-developed country can be divided into two stages, which demonstrate significantly different properties in areas such as structural endowments, production modes, income distribution, and the forces that drive economic growth. The two stages of economic development have been indicated in the growth theory of macroeconomics and in the various "turning point" theories in development economics, including Lewis's dual economy theory, Kuznet...

  3. Combinatorial Optimization in Project Selection Using Genetic Algorithm

    Science.gov (United States)

    Dewi, Sari; Sawaluddin

    2018-01-01

    This paper discusses the problem of project selection in the presence of two objective functions, maximizing profit and minimizing cost, subject to limited resource availability and limited time, so that resources must be allocated to each project. These resources are human resources, machine resources, and raw-material resources; not exceeding the predetermined budget is treated as a further constraint. The problem can thus be formulated mathematically as a multi-objective function with constraints to be satisfied. To assist the project selection process, a multi-objective combinatorial optimization approach is used to obtain an optimal solution for selecting the right projects. A multi-objective genetic algorithm is then described as one such combinatorial optimization method, simplifying the project selection process at large scale.
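A minimal sketch of the idea, with invented data: five hypothetical projects with (profit, cost) pairs, a budget cap, and a genetic algorithm over binary select/reject chromosomes. The two objectives are collapsed into one weighted fitness with a budget penalty, a common simplification of the multi-objective setting the abstract describes; all numbers and GA parameters are illustrative assumptions.

```python
import random

random.seed(1)

# Hypothetical projects: (profit, cost); BUDGET caps total cost.
PROJECTS = [(12, 5), (9, 4), (15, 8), (7, 3), (11, 6)]
BUDGET = 15

def fitness(bits):
    profit = sum(p for b, (p, c) in zip(bits, PROJECTS) if b)
    cost = sum(c for b, (p, c) in zip(bits, PROJECTS) if b)
    if cost > BUDGET:                 # infeasible: heavy penalty
        return profit - 100 * (cost - BUDGET)
    return profit - 0.1 * cost        # maximize profit, lightly penalize cost

def evolve(pop_size=40, generations=100, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in PROJECTS] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]               # elitism: carry the best individual over
        while len(nxt) < pop_size:
            # Tournament selection of two parents.
            a, b = (max(random.sample(pop, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, len(PROJECTS))
            child = a[:cut] + b[cut:]                                   # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best

best = evolve()
```

On an instance this small the GA can be checked against brute-force enumeration of all 2^5 selections; the point of the sketch is the chromosome encoding, penalty, and selection/crossover/mutation loop, not performance.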

  4. Economic and environmental optimization of a large scale sustainable dual feedstock lignocellulosic-based bioethanol supply chain in a stochastic environment

    International Nuclear Information System (INIS)

    Osmani, Atif; Zhang, Jun

    2014-01-01

    Highlights: • 2-Stage stochastic MILP model for optimizing the performance of a sustainable lignocellulosic-based biofuel supply chain. • Multiple uncertainties in biomass supply, purchase price of biomass, bioethanol demand, and sale price of bioethanol. • Stochastic parameters significantly impact the allocation of biomass processing capacities of biorefineries. • Location of biorefineries and choice of conversion technology is found to be insensitive to the stochastic environment. • Use of the Sample Average Approximation (SAA) algorithm as a decomposition technique. - Abstract: This work proposes a two-stage stochastic optimization model to maximize the expected profit and simultaneously minimize carbon emissions of a dual-feedstock lignocellulosic-based bioethanol supply chain (LBSC) under uncertainties in supply, demand and prices. The model decides the optimal first-stage decisions and the expected values of the second-stage decisions. A case study based on a 4-state Midwestern region in the US demonstrates the effectiveness of the proposed stochastic model over a deterministic model under uncertainties. Two regional modes are considered for the geographic scale of the LBSC. Under co-operation mode the 4 states are considered as a combined region, while under stand-alone mode each of the 4 states is considered as an individual region. Each state under co-operation mode gives better financial and environmental outcomes when compared to stand-alone mode. Uncertainty has a significant impact on the biomass processing capacity of biorefineries, while the location of biorefineries and the choice of conversion technology (biochemical vs. thermochemical) are insensitive to the stochastic environment. As the variability of the stochastic parameters increases, the financial and environmental performance is degraded. Sensitivity analysis shows that levels of tax credit and carbon price have a major impact on the choice of conversion technology for a selected
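The Sample Average Approximation idea mentioned in the highlights can be shown in miniature: replace the true expectation over uncertainty with an average over drawn scenarios, then optimize the first-stage decision against that sample. The toy below is a single-facility capacity choice with an invented price, cost, and demand distribution, not the paper's MILP.

```python
import random

random.seed(0)

PRICE, CAP_COST = 10.0, 5.0            # hypothetical unit sale price and capacity cost

def profit(cap, demand):
    """Second-stage recourse: sell min(capacity, realized demand)."""
    return PRICE * min(cap, demand) - CAP_COST * cap

# SAA: draw N demand scenarios once, then pick the first-stage capacity
# maximizing the sample-average profit over those scenarios.
scenarios = [random.gauss(100, 20) for _ in range(5000)]
candidates = range(60, 145, 5)
best_cap = max(candidates,
               key=lambda c: sum(profit(c, d) for d in scenarios) / len(scenarios))
```

With these numbers the newsvendor critical fractile is 1 - CAP_COST/PRICE = 0.5, so the sample-optimal capacity lands near the median demand of 100; repeating the procedure with fresh scenario batches is how SAA estimates the optimality gap.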

  5. Combinatorial matrix theory

    CERN Document Server

    Mitjana, Margarida

    2018-01-01

    This book contains the notes of the lectures delivered at an Advanced Course on Combinatorial Matrix Theory held at the Centre de Recerca Matemàtica (CRM) in Barcelona. These notes correspond to five series of lectures. The first series is dedicated to the study of several matrix classes defined combinatorially, and was delivered by Richard A. Brualdi. The second one, given by Pauline van den Driessche, is concerned with the study of spectral properties of matrices with a given sign pattern. Dragan Stevanović delivered the third one, devoted to describing the spectral radius of a graph as a tool to provide bounds on parameters related to properties of a graph. The fourth lecture was delivered by Stephen Kirkland and is dedicated to the applications of the group inverse of the Laplacian matrix. The last one, given by Ángeles Carmona, focuses on boundary value problems on finite networks, with an in-depth treatment of the M-matrix inverse problem.

  6. Combinatorial identities for tenth order mock theta functions

    Indian Academy of Sciences (India)


    which lead us to one 4-way and one 3-way combinatorial identity. ... mock theta functions, partition identities and different combinatorial parameters, see for ... 3. Example 1.1. There are twelve (n + 1)–color partitions of 2: 21, 21 + 01, 11 + 11, ...

  7. On the combinatorial foundations of Regge-calculus

    International Nuclear Information System (INIS)

    Budach, L.

    1989-01-01

    Lipschitz-Killing curvatures of piecewise flat spaces are combinatorial analogues of Lipschitz-Killing curvatures of Riemannian manifolds. In the following paper rigorous combinatorial representations and proofs of all basic results for Lipschitz-Killing curvatures not using analytic arguments are given. The principal tools for an elementary representation of Regge calculus can be developed by means of basic properties of dihedral angles. (author)

  8. Combinatorial Speculations and the Combinatorial Conjecture for Mathematics

    OpenAIRE

    Mao, Linfan

    2006-01-01

    Combinatorics is a powerful tool for dealing with relations among objectives that mushroomed in the past century. However, a more important task for mathematicians is to apply combinatorics to other branches of mathematics and other sciences, not merely to find combinatorial behavior of objectives. Recently, such research works have appeared in journals for mathematics and theoretical physics on the cosmos. The main purpose of this paper is to survey these thinking and ideas for mathematics and cosmological physics, s...

  9. Balancing Two-Player Stochastic Games with Soft Q-Learning

    OpenAIRE

    Grau-Moya, Jordi; Leibfried, Felix; Bou-Ammar, Haitham

    2018-01-01

    Within the context of video games the notion of perfectly rational agents can be undesirable, as it leads to uninteresting situations where humans face tough adversarial decision makers. Current frameworks for stochastic games and reinforcement learning prohibit tuneable strategies as they seek optimal performance. In this paper, we enable such tuneable behaviour by generalising soft Q-learning to stochastic games, where more than one agent interacts strategically. We contribute both theoretic...

  10. Intrinsic information carriers in combinatorial dynamical systems

    Science.gov (United States)

    Harmer, Russ; Danos, Vincent; Feret, Jérôme; Krivine, Jean; Fontana, Walter

    2010-09-01

    Many proteins are composed of structural and chemical features—"sites" for short—characterized by definite interaction capabilities, such as noncovalent binding or covalent modification of other proteins. This modularity allows for varying degrees of independence, as the behavior of a site might be controlled by the state of some but not all sites of the ambient protein. Independence quickly generates a startling combinatorial complexity that shapes most biological networks, such as mammalian signaling systems, and effectively prevents their study in terms of kinetic equations—unless the complexity is radically trimmed. Yet, if combinatorial complexity is key to the system's behavior, eliminating it will prevent, not facilitate, understanding. A more adequate representation of a combinatorial system is provided by a graph-based framework of rewrite rules where each rule specifies only the information that an interaction mechanism depends on. Unlike reactions, which deal with molecular species, rules deal with patterns, i.e., multisets of molecular species. Although the stochastic dynamics induced by a collection of rules on a mixture of molecules can be simulated, it appears useful to capture the system's average or deterministic behavior by means of differential equations. However, expansion of the rules into kinetic equations at the level of molecular species is not only impractical, but conceptually indefensible. If rules describe bona fide patterns of interaction, molecular species are unlikely to constitute appropriate units of dynamics. Rather, we must seek aggregate variables reflective of the causal structure laid down by the rules. We call these variables "fragments" and the process of identifying them "fragmentation." Ideally, fragments are aspects of the system's microscopic population that the set of rules can actually distinguish on average; in practice, it may only be feasible to identify an approximation to this. Most importantly, fragments are

  12. Graphical-based construction of combinatorial geometries for radiation transport and shielding applications

    International Nuclear Information System (INIS)

    Burns, T.J.

    1992-01-01

    A graphical-based code system is being developed at ORNL to manipulate combinatorial geometries for radiation transport and shielding applications. The current version (basically a combinatorial geometry debugger) consists of two parts: a FORTRAN-based "view" generator and a Microsoft Windows application for displaying the geometry. Options and features of both modules are discussed. Examples illustrating the various options available are presented. The potential for utilizing the images produced using the debugger as a visualization tool for the output of the radiation transport codes is discussed, as is the future direction of the development

  13. Manipulating Combinatorial Structures.

    Science.gov (United States)

    Labelle, Gilbert

    This set of transparencies shows how the manipulation of combinatorial structures in the context of modern combinatorics can easily lead to interesting teaching and learning activities at every level of education from elementary school to university. The transparencies describe: (1) the importance and relations of combinatorics to science and…

  14. Cubical version of combinatorial differential forms

    DEFF Research Database (Denmark)

    Kock, Anders

    2010-01-01

    The theory of combinatorial differential forms is usually presented in simplicial terms. We present here a cubical version; it depends on the possibility of forming affine combinations of mutual neighbour points in a manifold, in the context of synthetic differential geometry.

  15. Homogenization of the stochastic Navier–Stokes equation with a stochastic slip boundary condition

    KAUST Repository

    Bessaih, Hakima

    2015-11-02

    The two-dimensional Navier–Stokes equation in a perforated domain with a dynamical slip boundary condition is considered. We assume that the dynamic is driven by a stochastic perturbation on the interior of the domain and another stochastic perturbation on the boundaries of the holes. We consider a scaling (ε for the viscosity and 1 for the density) that will lead to a time-dependent limit problem. However, the noncritical scaling (ε^β, β > 1) is considered in front of the nonlinear term. The homogenized system in the limit is obtained as a Darcy's law with memory with two permeabilities and an extra term that is due to the stochastic perturbation on the boundary of the holes. The nonhomogeneity on the boundary contains a stochastic part that yields in the limit an additional term in the Darcy's law. We use the two-scale convergence method after extending the solution with 0 inside the holes to pass to the limit. By Itô stochastic calculus, we get uniform estimates on the solution in appropriate spaces. Due to the stochastic integral, the pressure that appears in the variational formulation does not have enough regularity in time. This fact made us rely only on the variational formulation for the passage to the limit on the solution. We obtain a variational formulation for the limit that is the solution of a Stokes system with two pressures. This two-scale limit gives rise to three cell problems; two of them give the permeabilities, while the third one gives an extra term in the Darcy's law due to the stochastic perturbation on the boundary of the holes.

  16. Solid-Phase Synthesis of Small Molecule Libraries using Double Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Nielsen, John; Jensen, Flemming R.

    1997-01-01

    The first synthesis of a combinatorial library using double combinatorial chemistry is presented. Coupling of unprotected Fmoc-tyrosine to the solid support was followed by Mitsunobu O-alkylation. Introduction of a diacid linker yields a system in which the double combinatorial step can be demons...

  17. Two-boundary first exit time of Gauss-Markov processes for stochastic modeling of acto-myosin dynamics.

    Science.gov (United States)

    D'Onofrio, Giuseppe; Pirozzi, Enrica

    2017-05-01

    We consider a stochastic differential equation in a strip, with coefficients suitably chosen to describe the acto-myosin interaction subject to time-varying forces. By simulating trajectories of the stochastic dynamics via an Euler discretization-based algorithm, we fit experimental data and determine the values of the involved parameters. The steps of the myosin are represented by the exit events from the strip. Motivated by these results, we propose a specific stochastic model based on the corresponding time-inhomogeneous Gauss-Markov and diffusion process evolving between two absorbing boundaries. We specify the mean and covariance functions of the stochastic modeling process taking into account time-dependent forces including the effect of an external load. We accurately determine the probability density function (pdf) of the first exit time (FET) from the strip by solving a system of two nonsingular second-kind Volterra integral equations via a numerical quadrature. We provide numerical estimations of the mean of the FET as approximations of the dwell-time of the protein dynamics. The percentage of backward steps is given, in agreement with experimental data. Numerical and simulation results are compared and discussed.
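The Euler-discretization approach described above can be sketched in its simplest form: simulate paths of an SDE between two absorbing boundaries and average the recorded first-exit times. As a stand-in for the paper's time-inhomogeneous Gauss-Markov process, the sketch uses driftless unit-variance Brownian motion, for which the mean exit time from (a, b) started at x0 is known to be (b - x0)(x0 - a), giving a check on the estimate.

```python
import random

random.seed(42)

def mean_first_exit(x0=0.0, a=-1.0, b=1.0, mu=0.0, sigma=1.0,
                    dt=1e-3, n_paths=2000):
    """Monte Carlo mean first-exit time from the strip (a, b) via Euler-Maruyama."""
    sqdt = dt ** 0.5
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        while a < x < b:                          # absorb on either boundary
            x += mu * dt + sigma * sqdt * random.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_paths

m = mean_first_exit()
```

For these parameters the exact mean exit time is (1 - 0)(0 - (-1)) = 1; the Euler scheme overshoots the boundaries slightly, so the estimate carries a small upward discretization bias that shrinks with dt.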

  18. The mathematics of a successful deconvolution: a quantitative assessment of mixture-based combinatorial libraries screened against two formylpeptide receptors.

    Science.gov (United States)

    Santos, Radleigh G; Appel, Jon R; Giulianotti, Marc A; Edwards, Bruce S; Sklar, Larry A; Houghten, Richard A; Pinilla, Clemencia

    2013-05-30

    In the past 20 years, synthetic combinatorial methods have fundamentally advanced the ability to synthesize and screen large numbers of compounds for drug discovery and basic research. Mixture-based libraries and positional scanning deconvolution combine two approaches for the rapid identification of specific scaffolds and active ligands. Here we present a quantitative assessment of the screening of 32 positional scanning libraries in the identification of highly specific and selective ligands for two formylpeptide receptors. We also compare and contrast two mixture-based library approaches using a mathematical model to facilitate the selection of active scaffolds and libraries to be pursued for further evaluation. The flexibility demonstrated in the differently formatted mixture-based libraries allows for their screening in a wide range of assays.

  19. A combined stochastic programming and optimal control approach to personal finance and pensions

    DEFF Research Database (Denmark)

    Konicz, Agnieszka Karolina; Pisinger, David; Rasmussen, Kourosh Marjani

    2015-01-01

    The paper presents a model that combines a dynamic programming (stochastic optimal control) approach and a multi-stage stochastic linear programming approach (SLP), integrated into one SLP formulation. Stochastic optimal control produces an optimal policy that is easy to understand and implement....

  20. Stochastic layer scaling in the two-wire model for divertor tokamaks

    Science.gov (United States)

    Ali, Halima; Punjabi, Alkesh; Boozer, Allen

    2009-06-01

    The question of magnetic field structure in the vicinity of the separatrix in divertor tokamaks is studied. The authors have investigated this problem earlier in a series of papers, using various mathematical techniques. In the present paper, the two-wire model (TWM) [Reiman, A. 1996 Phys. Plasmas 3, 906] is considered. It is noted that, in the TWM, it is useful to consider an extra equation expressing magnetic flux conservation. This equation does not add any more information to the TWM, since the equation is derived from the TWM. This equation is useful for controlling the step size in the numerical integration of the TWM equations. The TWM with the extra equation is called the flux-preserving TWM. Nevertheless, the technique is apparently still plagued by numerical inaccuracies when the perturbation level is low, resulting in an incorrect scaling of the stochastic layer width. The stochastic broadening of the separatrix in the flux-preserving TWM is compared with that in the low mn (poloidal mode number m and toroidal mode number n) map (LMN) [Ali, H., Punjabi, A., Boozer, A. and Evans, T. 2004 Phys. Plasmas 11, 1908]. The flux-preserving TWM and LMN both give Boozer-Rechester 0.5 power scaling of the stochastic layer width with the amplitude of magnetic perturbation when the perturbation is sufficiently large [Boozer, A. and Rechester, A. 1978, Phys. Fluids 21, 682]. The flux-preserving TWM gives a larger stochastic layer width when the perturbation is low, while the LMN gives the correct scaling in the low perturbation region. Area-preserving maps such as the LMN respect the Hamiltonian structure of field line trajectories, and have the added advantage of computational efficiency. Also, for a 1½ degree-of-freedom Hamiltonian system such as field lines, maps do not give Arnold diffusion.
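Checking a power-law scaling like the quoted 0.5 exponent is a log-log least-squares fit. The sketch below generates synthetic layer widths obeying w = C·δ^0.5 (the constant and the δ values are invented for illustration) and recovers the exponent as the slope of log w versus log δ; the same fit would be applied to widths measured from the TWM or LMN runs.

```python
import math

# Synthetic widths obeying the Boozer-Rechester scaling w = C * delta**0.5
# (C = 2.0 and the delta grid are invented for illustration).
deltas = [1e-4, 3e-4, 1e-3, 3e-3, 1e-2, 3e-2]
widths = [2.0 * d ** 0.5 for d in deltas]

# Least-squares slope of log(w) vs. log(delta) recovers the exponent.
xs = [math.log(d) for d in deltas]
ys = [math.log(w) for w in widths]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```

On noiseless data the slope is exactly 0.5; with real measurements the deviation of the fitted slope from 0.5 at low perturbation amplitude is precisely the scaling failure the abstract describes.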

  1. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    Science.gov (United States)

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of plaster dies was vertically determined in mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.
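A comparison of two independent group means like the one reported can be carried out with Welch's unequal-variance t-statistic. The measurements below are hypothetical micrometre values invented to mirror the reported direction (one-stage gaps larger), not the study's data.

```python
import math

# Hypothetical marginal-gap measurements in micrometres (NOT the study's data).
one_stage = [110, 125, 118, 130, 122, 115, 128, 120]
two_stage = [85, 92, 88, 95, 90, 87, 93, 89]

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance
    return m, v

m1, v1 = mean_var(one_stage)
m2, v2 = mean_var(two_stage)
# Welch's t-statistic for two independent samples with unequal variances.
t = (m1 - m2) / math.sqrt(v1 / len(one_stage) + v2 / len(two_stage))
```

A large positive t (compared against the t distribution with the Welch-Satterthwaite degrees of freedom) corresponds to the study's finding that the one-stage technique produced significantly larger gaps.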

  2. Comparison of single-stage and temperature-phased two-stage anaerobic digestion of oily food waste

    International Nuclear Information System (INIS)

    Wu, Li-Jie; Kobayashi, Takuro; Li, Yu-You; Xu, Kai-Qin

    2015-01-01

    Highlights: • A single-stage and two two-stage anaerobic systems were synchronously operated. • Similar methane production of 0.44 L/g VS_added from oily food waste was achieved. • The first stage of the two-stage process became inefficient due to a serious pH drop. • Recycle favored hythane production in the two-stage digestion. • The conversion of unsaturated fatty acids was enhanced by recycle introduction. - Abstract: Anaerobic digestion is an effective technology to recover energy from oily food waste. A single-stage system and temperature-phased two-stage systems with and without recycle for anaerobic digestion of oily food waste were constructed to compare the operation performances. The synchronous operation indicated a similar ability to produce methane in the three systems, with a methane yield of 0.44 L/g VS_added. The pH drop to less than 4.0 in the first stage of the two-stage system without recycle resulted in poor hydrolysis, and methane or hydrogen was not produced in this stage. Alkalinity supplement from the second stage of the two-stage system with recycle improved the pH in the first stage to 5.4. Consequently, 35.3% of the particulate COD in the influent was reduced in the first stage of the two-stage system with recycle according to a COD mass balance, and hydrogen was produced at a percentage of 31.7%, accordingly. Similar solids and organic matter were removed in the single-stage system and the two-stage system without recycle. More lipid degradation and conversion of long-chain fatty acids were achieved in the single-stage system. Recycling was proved to be effective in promoting the conversion of unsaturated long-chain fatty acids into saturated fatty acids in the two-stage system.
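The reported figures combine in straightforward mass-balance arithmetic; the feed quantities below (500 g VS, 1200 g particulate COD) are hypothetical inputs chosen only to show the calculation, not values from the study.

```python
# Back-of-envelope use of the reported figures; feed quantities are hypothetical.
METHANE_YIELD = 0.44     # L CH4 per g VS added (reported for all three systems)
PCOD_REMOVAL = 0.353     # fraction of influent particulate COD removed in the
                         # first stage of the two-stage system with recycle

vs_fed = 500.0                          # g VS added (hypothetical feed)
methane_l = METHANE_YIELD * vs_fed      # expected methane volume, L

pcod_in = 1200.0                        # g particulate COD in influent (hypothetical)
pcod_removed = PCOD_REMOVAL * pcod_in   # g removed in the first stage
```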

  3. Adaptive Synchronization for Two Different Stochastic Chaotic Systems with Unknown Parameters via a Sliding Mode Controller

    Directory of Open Access Journals (Sweden)

    Zengyun Wang

    2013-01-01

    This paper investigates the problem of synchronization for two different stochastic chaotic systems with unknown parameters and uncertain terms. The main work of this paper consists of the following aspects. Firstly, based on the Lyapunov theory in stochastic differential equations and the theory of sliding mode control, we propose a simple sliding surface and discuss the occurrence of the sliding motion. Secondly, we design an adaptive sliding mode controller to realize the asymptotical synchronization in mean squares. Thirdly, we design an adaptive sliding mode controller to realize the almost sure synchronization. Finally, the designed adaptive sliding mode controllers are used to achieve synchronization between two pairs of different stochastic chaotic systems (Lorenz-Chen and Chen-Lu) in the presence of the uncertainties and unknown parameters. Numerical simulations are given to demonstrate the robustness and efficiency of the proposed robust adaptive sliding mode controller.
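A deterministic, known-parameter sketch of the Lorenz-Chen synchronization can illustrate the sliding-mode idea: the controller cancels the vector-field mismatch between drive and response and adds linear feedback plus a signum sliding term, so the error obeys e' = -K·e - η·sgn(e) and slides to (a chattering neighbourhood of) zero. The paper's adaptive estimation of unknown parameters and the stochastic perturbations are omitted; gains, step size, and initial conditions are illustrative.

```python
import math

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def chen(s, a=35.0, b=3.0, c=28.0):
    x, y, z = s
    return [a * (y - x), (c - a) * x - x * z + c * y, x * y - b * z]

K, ETA, DT, STEPS = 20.0, 0.5, 1e-3, 5000
x = [1.0, 1.0, 1.0]      # drive: Lorenz system
y = [5.0, -5.0, 10.0]    # response: Chen system

sign = lambda v: (v > 0) - (v < 0)
for _ in range(STEPS):
    fx, fy = lorenz(x), chen(y)
    e = [yi - xi for yi, xi in zip(y, x)]
    # Control: cancel the model mismatch, then slide the error to zero.
    u = [fx[i] - fy[i] - K * e[i] - ETA * sign(e[i]) for i in range(3)]
    x = [x[i] + DT * fx[i] for i in range(3)]             # Euler step, drive
    y = [y[i] + DT * (fy[i] + u[i]) for i in range(3)]    # Euler step, response

err = math.sqrt(sum((yi - xi) ** 2 for yi, xi in zip(y, x)))
```

After the transient decays, the residual error is the usual sliding-mode chatter of order η·DT per component, so the response tracks the Lorenz trajectory closely despite the structurally different Chen dynamics.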

  4. Immune-Stimulating Combinatorial Therapy for Prostate Cancer

    Science.gov (United States)

    2016-10-01

    Award Number: W81XWH-15-1-0667. Title: Immune-Stimulating Combinatorial Therapy for Prostate Cancer. Principal Investigator: Robert Ivkov.

  5. A comparative study of two stochastic mode reduction methods

    Energy Technology Data Exchange (ETDEWEB)

    Stinis, Panagiotis

    2005-09-01

    We present a comparative study of two methods for the reduction of the dimensionality of a system of ordinary differential equations that exhibits time-scale separation. Both methods lead to a reduced system of stochastic differential equations. The novel feature of these methods is that they allow the use, in the reduced system, of higher order terms in the resolved variables. The first method, proposed by Majda, Timofeyev and Vanden-Eijnden, is based on an asymptotic strategy developed by Kurtz. The second method is a short-memory approximation of the Mori-Zwanzig projection formalism of irreversible statistical mechanics, as proposed by Chorin, Hald and Kupferman. We present conditions under which the reduced models arising from the two methods should have similar predictive ability. We apply the two methods to test cases that satisfy these conditions. The form of the reduced models and the numerical simulations show that the two methods have similar predictive ability as expected.

  6. Approximate method for stochastic chemical kinetics with two-time scales by chemical Langevin equations

    International Nuclear Information System (INIS)

    Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George

    2016-01-01

    The frequently used reduction technique is based on the chemical master equation for stochastic chemical kinetics with two-time scales, which yields the modified stochastic simulation algorithm (SSA). For the chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include a large number of molecular species and reactions. Consequently, the SSA is still computationally expensive. Because the chemical Langevin equations (CLEs) can effectively work for a large number of molecular species and reactions, this paper develops a reduction method based on the CLE by the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766–1793 (1996); ibid. 56, 1794–1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in the stochastic chemical kinetics, the CLE is seen as the approximation of the SSA, the limit averaging system can be treated as the approximation of the slow reactions. As an application, we examine the reduction of computation complexity for the gene regulatory networks with two-time scales driven by intrinsic noise. For linear and nonlinear protein production functions, the simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA. It demonstrates that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of the weak convergence.
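The chemical Langevin equation itself is easy to exhibit on the simplest possible network, a birth-death process: dX = (k1 - k2·X)dt + sqrt(k1 + k2·X)dW, whose sample mean relaxes to the deterministic stationary value k1/k2. This sketch shows the CLE as a diffusion approximation only; the paper's two-time-scale averaging over fast reactions is not reproduced, and all rate constants are illustrative.

```python
import random

random.seed(7)

K1, K2 = 10.0, 1.0        # birth rate, per-molecule death rate
DT, T, N_PATHS = 1e-2, 8.0, 1000
steps = int(T / DT)

finals = []
for _ in range(N_PATHS):
    x = 10.0
    for _ in range(steps):
        drift = K1 - K2 * x
        # Diffusion term: sqrt of the summed propensities (abs() guards against
        # the small negative excursions the CLE permits).
        noise = (abs(K1 + K2 * x) * DT) ** 0.5 * random.gauss(0.0, 1.0)
        x += drift * DT + noise
    finals.append(x)

mean_x = sum(finals) / N_PATHS
```

The sample mean settles near K1/K2 = 10, with Poisson-like stationary fluctuations of variance roughly equal to the mean; this is the regime in which the CLE, and hence the averaged system built on it, is a faithful stand-in for the SSA.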

  7. Criticism of EFSA's scientific opinion on combinatorial effects of 'stacked' GM plants.

    Science.gov (United States)

    Bøhn, Thomas

    2018-01-01

    Recent genetically modified plants tend to include both insect resistance and herbicide tolerance traits. Some of these 'stacked' GM plants have multiple Cry-toxins expressed as well as tolerance to several herbicides. This means that non-target organisms in the environment (biodiversity) will be co-exposed to multiple stressors simultaneously. A similar co-exposure may happen to consumers through chemical residues in the food chain. EFSA, the body responsible for minimizing the risk of harm in European food chains, has expressed its scientific interest in combinatorial effects. However, when new data showed how two Cry-toxins acted in combination (added toxicity), and that the same Cry-toxins showed combinatorial effects when co-exposed with Roundup (Bøhn et al., 2016), EFSA dismissed these new peer-reviewed results. In effect, EFSA claimed that combinatorial effects are not relevant for itself. EFSA justified this by referring to a policy question and by making invalid assumptions, which could have been checked directly with the lead author. With such an approach, EFSA may miss the opportunity to improve its environmental and health risk assessment of toxins and pesticides in the food chain. Failure to follow its own published requests for combinatorial-effects research may also risk jeopardizing EFSA's scientific and public reputation. Copyright © 2017. Published by Elsevier Ltd.

  8. A Stochastic Programming Approach for a Multi-Site Supply Chain Planning in Textile and Apparel Industry under Demand Uncertainty

    Directory of Open Access Journals (Sweden)

    Houssem Felfel

    2015-11-01

    Full Text Available In this study, a new stochastic model is proposed to deal with a multi-product, multi-period, multi-stage, multi-site production and transportation supply chain planning problem under demand uncertainty. A two-stage stochastic linear programming approach is used to maximize the expected profit. Decisions such as the production amount, the inventory levels of finished and semi-finished products, the amount of backorder and the quantity of products to be transported between upstream and downstream plants in each period are considered. The robustness of the production supply chain plan is then evaluated using statistical and risk measures. A case study from a real textile and apparel industry is shown in order to compare the performances of the proposed stochastic programming model and the deterministic model.
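
    The two-stage structure used above can be sketched, under heavy simplification, as a scenario-based LP: a first-stage production decision made before demand is known, second-stage sales per demand scenario, and an expected-profit objective. All numbers below are illustrative, and the model is a newsvendor-style toy rather than the paper's multi-site formulation:

```python
from scipy.optimize import linprog

# First stage: choose production x before demand is known (unit cost 1).
# Second stage: in scenario s, sell y_s <= min(x, d_s) at unit price 3.
# Maximize expected profit = -cost * x + sum_s p_s * price * y_s.
demands = [50.0, 100.0, 150.0]
probs = [0.3, 0.5, 0.2]
cost, price = 1.0, 3.0

# Variables: [x, y1, y2, y3]; linprog minimizes, so negate the profit.
c = [cost] + [-p * price for p in probs]
# Coupling constraints y_s - x <= 0 (cannot sell more than was produced).
A_ub = [[-1.0, 1.0, 0.0, 0.0],
        [-1.0, 0.0, 1.0, 0.0],
        [-1.0, 0.0, 0.0, 1.0]]
b_ub = [0.0, 0.0, 0.0]
bounds = [(0, None)] + [(0, d) for d in demands]  # y_s <= d_s

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_opt, expected_profit = res.x[0], -res.fun
```

    For these numbers the LP produces 100 units, the newsvendor critical-fractile solution, with an expected profit of 155; the multi-site model in the record adds many more first-stage variables and scenario-coupled recourse constraints but has the same two-stage shape.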

  9. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    Directory of Open Access Journals (Sweden)

    Ladan Jamshidy

    2016-01-01

    Full Text Available Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, as the first molar, was prepared by a standard method for full crowns with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impression was made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of plaster dies was vertically determined in mid mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent t-test showed that the mean value of the marginal gap obtained by the one-stage impression technique was higher than that of the two-stage impression technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was reported between the two impression techniques in the MDL regions and in general. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage impression technique.

  10. On the Cut-off Point for Combinatorial Group Testing

    DEFF Research Database (Denmark)

    Fischer, Paul; Klasner, N.; Wegener, I.

    1999-01-01

    is answered by 1 if Q contains at least one essential object and by 0 otherwise. In the statistical setting the objects are essential, independently of each other, with a given probability p ... combinatorial setting the number k ... group testing is equal to p* = (3 − √5)/2 ≈ 0.382, i.e., the strategy of testing each object individually minimizes the average number of queries iff p >= p* or n = 1. In the combinatorial setting the worst case number of queries is of interest. It has been conjectured that the cut-off point of combinatorial...
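
    The economics behind a cut-off of this kind can be seen in a toy comparison (not the paper's analysis) of individual testing against the simplest pooled strategy: test a pair, and retest both members only if the pool is positive. This restricted strategy already breaks even at p = 1 − 1/√2 ≈ 0.293, below the cut-off p* = (3 − √5)/2 ≈ 0.382 that holds for optimal strategies:

```python
import math
import random

def expected_tests_pairwise(p):
    """Expected tests per pair of objects under the simplest pooled strategy:
    one pool test, plus two individual retests whenever the pool is positive."""
    q = 1.0 - p
    return 1.0 + 2.0 * (1.0 - q * q)

# Break-even against individual testing (2 tests per pair): q**2 = 1/2.
break_even = 1.0 - 1.0 / math.sqrt(2.0)   # ~0.293, below p* = (3 - sqrt(5))/2

# Monte Carlo check of the formula at p = 0.1.
rng = random.Random(42)
p, trials, total = 0.1, 200_000, 0
for _ in range(trials):
    pool_positive = any(rng.random() < p for _ in range(2))
    total += 3 if pool_positive else 1
mc_estimate = total / trials
```

    For p = 0.1 the pooled strategy needs 1.38 expected tests per pair versus 2 for individual testing; richer adaptive strategies push the break-even probability up to the cut-off p*.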

  11. Two-Stage Centrifugal Fan

    Science.gov (United States)

    Converse, David

    2011-01-01

    Fan designs are often constrained by envelope, rotational speed, weight, and power. Aerodynamic performance and motor electrical performance are heavily influenced by rotational speed. The fan used in this work is at a practical limit for rotational speed due to motor performance characteristics, and there is no more space available in the packaging for a larger fan. The pressure rise requirements keep growing. The ordinary way to accommodate a higher DP (pressure rise) is to spin faster or to grow the fan rotor diameter. The invention is to put two radially oriented stages on a single disk. Flow enters the first stage from the center; energy is imparted to the flow in the first-stage blades, the flow is redirected some amount opposite to the direction of rotation in the fixed stators, and more energy is imparted to the flow in the second-stage blades. Without increasing either rotational speed or disk diameter, it is believed that as much as 50 percent more DP can be achieved with this design than with an ordinary, single-stage centrifugal design. This invention is useful primarily for fans having relatively low flow rates with relatively high pressure rise requirements.

  12. A combinatorial approach to the design of vaccines.

    Science.gov (United States)

    Martínez, Luis; Milanič, Martin; Legarreta, Leire; Medvedev, Paul; Malaina, Iker; de la Fuente, Ildefonso M

    2015-05-01

    We present two new problems of combinatorial optimization and discuss their applications to the computational design of vaccines. In the shortest λ-superstring problem, given a family S1,...,S(k) of strings over a finite alphabet, a set T of "target" strings over that alphabet, and an integer λ, the task is to find a string of minimum length containing as substrings, for each i, at least λ of the target strings that occur in S(i). In the shortest λ-cover superstring problem, given a collection X1,...,X(n) of finite sets of strings over a finite alphabet and an integer λ, the task is to find a string of minimum length containing, for each i, at least λ elements of X(i) as substrings. The two problems are polynomially equivalent, and the shortest λ-cover superstring problem is a common generalization of two well known combinatorial optimization problems, the shortest common superstring problem and the set cover problem. We present two approaches to obtain exact or approximate solutions to the shortest λ-superstring and λ-cover superstring problems: one based on integer programming, and a hill-climbing algorithm. An application is given to the computational design of vaccines and the algorithms are applied to experimental data taken from patients infected by H5N1 and HIV-1.
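
    The shortest common superstring problem that these problems generalize is NP-hard; a standard greedy merge heuristic (a common baseline, not the integer-programming or hill-climbing methods of the paper) can be sketched as:

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_superstring(strings):
    """Repeatedly merge the pair with maximum overlap until one string remains."""
    # Drop strings already contained in another string.
    strings = [s for s in strings if not any(s != t and s in t for t in strings)]
    while len(strings) > 1:
        best = (-1, None, None)  # (overlap length, i, j)
        for i, a in enumerate(strings):
            for j, b in enumerate(strings):
                if i != j:
                    k = overlap(a, b)
                    if k > best[0]:
                        best = (k, i, j)
        k, i, j = best
        merged = strings[i] + strings[j][k:]
        strings = [s for idx, s in enumerate(strings) if idx not in (i, j)] + [merged]
    return strings[0]

result = greedy_superstring(["abc", "bcd", "cde"])
# result == "abcde": every input string occurs as a substring.
```

    The λ-variants additionally have to decide which λ targets to cover per strain, which is where the integer-programming and hill-climbing machinery of the paper comes in.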

  13. Infinitary Combinatory Reduction Systems

    DEFF Research Database (Denmark)

    Ketema, Jeroen; Simonsen, Jakob Grue

    2011-01-01

    We define infinitary Combinatory Reduction Systems (iCRSs), thus providing the first notion of infinitary higher-order rewriting. The systems defined are sufficiently general that ordinary infinitary term rewriting and infinitary λ-calculus are special cases. Furthermore, we generalise a number...

  14. Combinatorial Quantum Field Theory and Gluing Formula for Determinants

    NARCIS (Netherlands)

    Reshetikhin, N.; Vertman, B.

    2015-01-01

    We define the combinatorial Dirichlet-to-Neumann operator and establish a gluing formula for determinants of discrete Laplacians using a combinatorial Gaussian quantum field theory. In case of a diagonal inner product on cochains we provide an explicit local expression for the discrete

  15. Conferences on Combinatorial and Additive Number Theory

    CERN Document Server

    2014-01-01

    This proceedings volume is based on papers presented at the Workshops on Combinatorial and Additive Number Theory (CANT), which were held at the Graduate Center of the City University of New York in 2011 and 2012. The goal of the workshops is to survey recent progress in combinatorial number theory and related parts of mathematics. The workshops attract researchers and students who discuss the state-of-the-art, open problems, and future challenges in number theory.

  16. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
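
    The problem class in question, many observation vectors sharing a single design matrix, can be handled naively by calling an active-set NNLS solver once per column; this baseline is what the combinatorial algorithm accelerates by grouping columns that end up with the same passive set (the matrices below are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def nnls_multiple_rhs(A, B):
    """Solve min ||A x - b||_2 subject to x >= 0 for every column b of B,
    one column at a time (the slow baseline the combinatorial algorithm beats)."""
    X = np.zeros((A.shape[1], B.shape[1]))
    for j in range(B.shape[1]):
        X[:, j], _ = nnls(A, B[:, j])
    return X

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B = np.array([[2.0, 1.0],
              [1.0, -1.0],
              [3.0, 0.0]])
X = nnls_multiple_rhs(A, B)
# Column 0 is fit exactly by x = [2, 1]; column 1 hits the x >= 0 bound.
```

    Since many real data sets produce columns sharing identical active/passive sets at the optimum, batching those columns into a single pseudoinverse solve is the source of the reported speedup.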

  17. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.

    2011-10-19

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches to less detailed compartment-based simulations. Compartment-based approaches yield quick and accurate mesoscopic results, but lack the level of detail that is characteristic of the computationally intensive molecular-based models. Often microscopic detail is only required in a small region (e.g. close to the cell membrane). Currently, the best way to achieve microscopic detail is to use a resource-intensive simulation over the whole domain. We develop the two-regime method (TRM) in which a molecular-based algorithm is used where desired and a compartment-based approach is used elsewhere. We present easy-to-implement coupling conditions which ensure that the TRM results have the same accuracy as a detailed molecular-based model in the whole simulation domain. Therefore, the TRM combines strengths of previously developed stochastic reaction-diffusion software to efficiently explore the behaviour of biological models. Illustrative examples and the mathematical justification of the TRM are also presented.

  18. A time consistent risk averse three-stage stochastic mixed integer optimization model for power generation capacity expansion

    International Nuclear Information System (INIS)

    Pisciella, P.; Vespucci, M.T.; Bertocchi, M.; Zigrino, S.

    2016-01-01

    We propose a multi-stage stochastic optimization model for the generation capacity expansion problem of a price-taker power producer. Uncertainties regarding the evolution of electricity prices and fuel costs play a major role in long term investment decisions, therefore the objective function represents a trade-off between expected profit and risk. The Conditional Value at Risk is the risk measure used and is defined by a nested formulation that guarantees time consistency in the multi-stage model. The proposed model allows one to determine a long term expansion plan which takes into account uncertainty, while the LCoE approach, currently used by decision makers, only allows one to determine which technology should be chosen for the next power plant to be built. A sensitivity analysis is performed with respect to the risk weighting factor and budget amount. - Highlights: • We propose a time consistent risk averse multi-stage model for capacity expansion. • We introduce a case study with uncertainty on electricity prices and fuel costs. • Increased budget moves the investment from gas towards renewables and then coal. • Increased risk aversion moves the investment from coal towards renewables. • Time inconsistency leads to a profit gap between planned and implemented policies.
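
    The Conditional Value at Risk used as the risk measure above has, for a discrete scenario set, a direct interpretation as the expected loss in the worst (1 − α) probability tail, splitting a scenario at the VaR when needed. A minimal computation on sorted scenarios (the numbers are illustrative, not the paper's data) is:

```python
def cvar(losses, probs, alpha):
    """CVaR_alpha of a discrete loss distribution: the expected loss over the
    worst (1 - alpha) tail, splitting the boundary scenario if necessary."""
    scenarios = sorted(zip(losses, probs), key=lambda lp: lp[0], reverse=True)
    tail = 1.0 - alpha          # probability mass of the tail
    acc, remaining = 0.0, tail
    for loss, p in scenarios:
        take = min(p, remaining)   # how much of this scenario lies in the tail
        acc += take * loss
        remaining -= take
        if remaining <= 1e-15:
            break
    return acc / tail

losses = [0.0, 10.0, 20.0, 100.0]
probs = [0.4, 0.3, 0.2, 0.1]
# alpha = 0.9: the single worst scenario carries the whole tail -> CVaR = 100.
# alpha = 0.8: tail is half scenario-100 and half scenario-20 -> CVaR = 60.
```

    The nested (time-consistent) formulation in the record applies this operator recursively stage by stage rather than once over the whole horizon.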

  19. Dynamic combinatorial libraries: from exploring molecular recognition to systems chemistry.

    Science.gov (United States)

    Li, Jianwei; Nowak, Piotr; Otto, Sijbren

    2013-06-26

    Dynamic combinatorial chemistry (DCC) is a subset of combinatorial chemistry where the library members interconvert continuously by exchanging building blocks with each other. Dynamic combinatorial libraries (DCLs) are powerful tools for discovering the unexpected and have given rise to many fascinating molecules, ranging from interlocked structures to self-replicators. Furthermore, dynamic combinatorial molecular networks can produce emergent properties at systems level, which provide exciting new opportunities in systems chemistry. In this perspective we will highlight some new methodologies in this field and analyze selected examples of DCLs that are under thermodynamic control, leading to synthetic receptors, catalytic systems, and complex self-assembled supramolecular architectures. Also reviewed are extensions of the principles of DCC to systems that are not at equilibrium and may therefore harbor richer functional behavior. Examples include self-replication and molecular machines.

  20. View discovery in OLAP databases through statistical combinatorial optimization

    Energy Technology Data Exchange (ETDEWEB)

    Hengartner, Nick W [Los Alamos National Laboratory; Burke, John [PNNL; Critchlow, Terence [PNNL; Joslyn, Cliff [PNNL; Hogan, Emilie [PNNL

    2009-01-01

    OnLine Analytical Processing (OLAP) is a relational database technology providing users with rapid access to summary, aggregated views of a single large database, and is widely recognized for knowledge representation and discovery in high-dimensional relational databases. OLAP technologies provide intuitive and graphical access to the massively complex set of possible summary views available in large relational (SQL) structured data repositories. The capability of OLAP database software systems to handle data complexity comes at a high price for analysts, presenting them a combinatorially vast space of views of a relational database. We respond to the need to deploy technologies sufficient to allow users to guide themselves to areas of local structure by casting the space of 'views' of an OLAP database as a combinatorial object of all projections and subsets, and 'view discovery' as a search process over that lattice. We equip the view lattice with statistical information-theoretic measures sufficient to support a combinatorial optimization process. We outline 'hop-chaining' as a particular view discovery algorithm over this object, wherein users are guided across a permutation of the dimensions by searching for successive two-dimensional views, pushing seen dimensions into an increasingly large background filter in a 'spiraling' search process. We illustrate this work in the context of data cubes recording summary statistics for radiation portal monitors at US ports.
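
    The view-scoring ingredient of such a search can be illustrated, in a much-simplified form, by ranking the two-dimensional views of a small flat table by the joint entropy of their contingency counts. The measure, table, and function names here are illustrative; hop-chaining's dimension permutation and background filter are omitted:

```python
import math
from collections import Counter
from itertools import combinations

def joint_entropy(rows, i, j):
    """Shannon entropy (bits) of the empirical joint distribution of columns i and j."""
    counts = Counter((row[i], row[j]) for row in rows)
    n = sum(counts.values())
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def rank_views(rows, ncols):
    """Score every candidate 2-D view (pair of columns) and return them best-first."""
    scores = {(i, j): joint_entropy(rows, i, j)
              for i, j in combinations(range(ncols), 2)}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Tiny flat table standing in for a data cube: (site, cargo type, monitor result).
rows = [("US", "cargo", "clear"),
        ("US", "mail", "clear"),
        ("EU", "cargo", "clear"),
        ("EU", "mail", "alarm")]
ranking = rank_views(rows, 3)
```

    A hop-chaining search would take the winning pair, push one of its dimensions into the background filter, and repeat the scoring over the remaining dimensions.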

  1. Combinatorial Micro-Macro Dynamical Systems

    OpenAIRE

    Diaz, Rafael; Villamarin, Sergio

    2015-01-01

    The second law of thermodynamics states that the entropy of an isolated system is almost always increasing. We propose combinatorial formalizations of the second law and explore their conditions of possibilities.

  2. A stochastic algorithm for global optimization and for best populations: A test case of side chains in proteins

    Science.gov (United States)

    Glick, Meir; Rayan, Anwar; Goldblum, Amiram

    2002-01-01

    The problem of global optimization is pivotal in a variety of scientific fields. Here, we present a robust stochastic search method that is able to find the global minimum for a given cost function, as well as, in most cases, any number of best solutions for very large combinatorial “explosive” systems. The algorithm iteratively eliminates variable values that contribute consistently to the highest end of a cost function's spectrum of values for the full system. Values that have not been eliminated are retained for a full, exhaustive search, allowing the creation of an ordered population of best solutions, which includes the global minimum. We demonstrate the ability of the algorithm to explore the conformational space of side chains in eight proteins, with 54 to 263 residues, to reproduce a population of their low energy conformations. The 1,000 lowest energy solutions are identical in the stochastic (with two different seed numbers) and full, exhaustive searches for six of eight proteins. The others retain the lowest 141 and 213 (of 1,000) conformations, depending on the seed number, and the maximal difference between stochastic and exhaustive is only about 0.15 kcal/mol. The energy gap between the lowest and highest of the 1,000 low-energy conformers in eight proteins is between 0.55 and 3.64 kcal/mol. This algorithm offers real opportunities for solving problems of high complexity in structural biology and in other fields of science and technology. PMID:11792838
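
    The elimination strategy described can be rendered as a toy sketch (an illustration of the idea, not the authors' algorithm): sample random assignments, score each variable value by the mean cost of the samples that used it, discard the worst-scoring values, and finish with an exhaustive search over the survivors:

```python
import itertools
import random

def eliminate_and_search(domains, cost, n_samples=2000, keep_frac=0.5, rng=None):
    """Toy value-elimination search: score each (variable, value) pair by the
    mean cost of random full assignments using it, discard the worse half of
    every domain, then search the surviving space exhaustively."""
    rng = rng or random.Random(0)
    sums = [dict.fromkeys(dom, 0.0) for dom in domains]
    counts = [dict.fromkeys(dom, 0) for dom in domains]
    for _ in range(n_samples):
        assignment = [rng.choice(dom) for dom in domains]
        c = cost(assignment)
        for i, v in enumerate(assignment):
            sums[i][v] += c
            counts[i][v] += 1
    pruned = []
    for i, dom in enumerate(domains):
        scored = sorted(dom, key=lambda v: sums[i][v] / max(counts[i][v], 1))
        pruned.append(scored[:max(1, int(len(dom) * keep_frac))])
    # Exhaustive search over what survives the elimination rounds.
    return min(itertools.product(*pruned), key=lambda a: cost(list(a)))

# Separable toy cost whose global minimum is the all-zeros assignment.
domains = [list(range(6)) for _ in range(4)]
best = eliminate_and_search(domains, lambda a: sum(x * x for x in a))
```

    The side-chain application replaces this toy cost with a rotamer energy function, where pruning half of each residue's rotamers shrinks the exhaustive search space exponentially.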

  3. Disentangling mechanisms that mediate the balance between stochastic and deterministic processes in microbial succession.

    Science.gov (United States)

    Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão

    2015-03-17

    Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.

  4. Multiscale Hy3S: Hybrid stochastic simulation for supercomputers

    Directory of Open Access Journals (Sweden)

    Kaznessis Yiannis N

    2006-02-01

    Full Text Available Abstract Background Stochastic simulation has become a useful tool to both study natural biological systems and design new synthetic ones. By capturing the intrinsic molecular fluctuations of "small" systems, these simulations produce a more accurate picture of single cell dynamics, including interesting phenomena missed by deterministic methods, such as noise-induced oscillations and transitions between stable states. However, the computational cost of the original stochastic simulation algorithm can be high, motivating the use of hybrid stochastic methods. Hybrid stochastic methods partition the system into multiple subsets and describe each subset as a different representation, such as a jump Markov, Poisson, continuous Markov, or deterministic process. By applying valid approximations and self-consistently merging disparate descriptions, a method can be considerably faster, while retaining accuracy. In this paper, we describe Hy3S, a collection of multiscale simulation programs. Results Building on our previous work on developing novel hybrid stochastic algorithms, we have created the Hy3S software package to enable scientists and engineers to both study and design extremely large well-mixed biological systems with many thousands of reactions and chemical species. We have added adaptive stochastic numerical integrators to permit the robust simulation of dynamically stiff biological systems. 
In addition, Hy3S has many useful features, including embarrassingly parallelized simulations with MPI; special discrete events, such as transcriptional and translation elongation and cell division; mid-simulation perturbations in both the number of molecules of species and reaction kinetic parameters; combinatorial variation of both initial conditions and kinetic parameters to enable sensitivity analysis; use of NetCDF optimized binary format to quickly read and write large datasets; and a simple graphical user interface, written in Matlab, to help users

  5. On Stochastic Dependence

    Science.gov (United States)

    Meyer, Joerg M.

    2018-01-01

    The contrary of stochastic independence splits up into two cases: pairs of events being favourable or being unfavourable. Examples show that both notions have quite unexpected properties, some of them being opposite to intuition. For example, transitivity does not hold. Stochastic dependence is also useful to explain cases of Simpson's paradox.

  6. On Definitions and Existence of Combinatorial Entropy of 2d Fields

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto; Shtarkov, Yuri; Justesen, Jørn

    1998-01-01

    Different definitions of combinatorial entropy are presented and conditions for their existence are examined.

  7. New high-throughput material-exploration system based on combinatorial chemistry and electrostatic atomization

    International Nuclear Information System (INIS)

    Fujimoto, K.; Takahashi, H.; Ito, S.; Inoue, S.; Watanabe, M.

    2006-01-01

    As a tool to facilitate future material explorations, our group has developed a new combinatorial system for the high-throughput preparation of compounds made up of more than three components. The system works in two steps: the atomization of a liquid by a high electric field followed by deposition onto a grounded substrate. The combinatorial system based on this method has multiple syringe pumps. The starting materials are fed through the syringe pumps into a manifold, thoroughly mixed as they pass through the manifold, and atomized from the tip of a stainless steel nozzle onto a grounded substrate.

  8. Stochastic coordination of joint wind and photovoltaic systems with energy storage in day-ahead market

    International Nuclear Information System (INIS)

    Gomes, I.L.R.; Pousinho, H.M.I.; Melício, R.; Mendes, V.M.F.

    2017-01-01

    This paper presents optimal bid submission in a day-ahead electricity market for the joint operation of wind and photovoltaic power systems with an energy storage device. Uncertainty not only in the electricity market price, but also in the wind and photovoltaic powers, is one of the main characteristics of this submission. The problem is formulated as a two-stage stochastic programming problem. The optimal bids and the energy flow in the batteries are the first-stage variables and the energy deviation is the second-stage variable of the problem. Energy storage is a way to harness renewable energy conversion, allowing energy to be stored and discharged at convenient market prices. A case study with data from the Iberian day-ahead electricity market is presented and a comparison between joint and disjoint operations is discussed. - Highlights: • Joint wind and PV systems with energy storage. • Electricity markets. • Stochastic optimization. • Day-ahead market.

  9. Space-time-modulated stochastic processes

    Science.gov (United States)

    Giona, Massimiliano

    2017-10-01

    Starting from the physical problem associated with the Lorentzian transformation of a Poisson-Kac process in inertial frames, the concept of space-time-modulated stochastic processes is introduced for processes possessing finite propagation velocity. This class of stochastic processes provides a two-way coupling between the stochastic perturbation acting on a physical observable and the evolution of the physical observable itself, which in turn influences the statistical properties of the stochastic perturbation during its evolution. The definition of space-time-modulated processes requires the introduction of two functions: a nonlinear amplitude modulation, controlling the intensity of the stochastic perturbation, and a time-horizon function, which modulates its statistical properties, providing irreducible feedback between the stochastic perturbation and the physical observable influenced by it. The latter property is the peculiar fingerprint of this class of models that makes them suitable for extension to generic curved-space times. Considering Poisson-Kac processes as prototypical examples of stochastic processes possessing finite propagation velocity, the balance equations for the probability density functions associated with their space-time modulations are derived. Several examples highlighting the peculiarities of space-time-modulated processes are thoroughly analyzed.
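
    The Poisson-Kac process taken as the prototype has a simple unmodulated form, the telegraph process: motion at constant speed ±c with direction flips at the events of a Poisson process of rate a (in the space-time-modulated case, the amplitude and switching statistics would themselves depend on the evolving observable). A minimal simulation, with illustrative parameters, is:

```python
import random

def telegraph_path(c, a, t_end, rng):
    """Simulate the telegraph (Poisson-Kac) process x'(t) = c * s(t),
    where the sign s flips at the events of a Poisson process of rate a.
    Returns positions sampled at the flip times (and at t_end)."""
    t, x = 0.0, 0.0
    s = rng.choice([-1.0, 1.0])
    times, xs = [t], [x]
    while True:
        dt = rng.expovariate(a)       # waiting time to the next direction flip
        if t + dt >= t_end:           # final partial segment up to t_end
            x += c * s * (t_end - t)
            times.append(t_end)
            xs.append(x)
            return times, xs
        t += dt
        x += c * s * dt
        s = -s                        # direction reversal
        times.append(t)
        xs.append(x)

rng = random.Random(1)
times, xs = telegraph_path(c=1.0, a=2.0, t_end=10.0, rng=rng)
# Finite propagation velocity: |x(t)| <= c * t along the whole path.
```

    The bound |x(t)| ≤ c·t is the finite-propagation-velocity property that the space-time-modulated construction is designed to preserve.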

  10. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  11. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  12. Microscopic description of pair transfer between two superfluid Fermi systems: Combining phase-space averaging and combinatorial techniques

    Science.gov (United States)

    Regnier, David; Lacroix, Denis; Scamps, Guillaume; Hashimoto, Yukio

    2018-03-01

    In a mean-field description of superfluidity, particle number and gauge angle are treated as quasiclassical conjugated variables. This level of description was recently used to describe nuclear reactions around the Coulomb barrier. Important effects of the relative gauge angle between two identical superfluid nuclei (symmetric collisions) on transfer probabilities and fusion barrier have been uncovered. A theory making contact with experiments should at least average over different initial relative gauge angles. In the present work, we propose a new approach to obtain the multiple pair transfer probabilities between superfluid systems. This method, called phase-space combinatorial (PSC) technique, relies both on phase-space averaging and combinatorial arguments to infer the full pair transfer probability distribution at the cost of multiple mean-field calculations only. After benchmarking this approach in a schematic model, we apply it to the collision 20O+20O at various energies below the Coulomb barrier. The predictions for one-pair transfer are similar to results obtained with an approximated projection method, whereas significant differences are found for the transfer of two pairs. Finally, we investigated the applicability of the PSC method to the contact between nonidentical superfluid systems. A generalization of the method is proposed and applied to the schematic model, showing that the pair transfer probabilities are reasonably reproduced. The applicability of the PSC method to asymmetric nuclear collisions is investigated for the 14O+20O collision, and it turns out that unrealistically small single- and multiple-pair transfer probabilities are obtained. This is explained by the fact that the relative gauge angle plays a minor role in the particle transfer process in this case compared to other mechanisms, such as equilibration of the charge/mass ratio. 
We conclude that the best ground for probing gauge-angle effects in nuclear reaction and/or for applying the proposed

  13. Combinatorial Mathematics: Research into Practice

    Science.gov (United States)

    Sriraman, Bharath; English, Lyn D.

    2004-01-01

    Implications and suggestions for using combinatorial mathematics in the classroom through a survey and synthesis of numerous research studies are presented. The implications revolve around five major themes that emerge from analysis of these studies.

  14. Recent advances in combinatorial biosynthesis for drug discovery

    Directory of Open Access Journals (Sweden)

    Sun H

    2015-02-01

    Huihua Sun,* Zihe Liu,* Huimin Zhao, and Ee Lui Ang (Metabolic Engineering Research Laboratory, Institute of Chemical and Engineering Sciences, Agency for Science, Technology and Research, Singapore; Department of Chemical and Biomolecular Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA; *these authors contributed equally to this work). Abstract: Because of extraordinary structural diversity and broad biological activities, natural products have played a significant role in drug discovery. These therapeutically important secondary metabolites are assembled and modified by dedicated biosynthetic pathways in their host living organisms. Traditionally, chemists have attempted to synthesize natural product analogs that are important sources of new drugs. However, the extraordinary structural complexity of natural products sometimes makes it challenging for traditional chemical synthesis, which usually involves multiple steps, harsh conditions, toxic organic solvents, and byproduct wastes. In contrast, combinatorial biosynthesis exploits substrate promiscuity and employs engineered enzymes and pathways to produce novel “unnatural” natural products, substantially expanding the structural diversity of natural products with potential pharmaceutical value. Thus, combinatorial biosynthesis provides an environmentally friendly way to produce natural product analogs. Efficient expression of the combinatorial biosynthetic pathway in genetically tractable heterologous hosts can increase the titer of the compound, eventually resulting in less expensive drugs. In this review, we will discuss three major strategies for combinatorial biosynthesis: (1) precursor-directed biosynthesis; (2) enzyme-level modification, which includes swapping of entire domains, modules and subunits, site-specific mutagenesis, and directed evolution; (3) pathway-level recombination. Recent examples of combinatorial biosynthesis employing these

  15. FSILP: fuzzy-stochastic-interval linear programming for supporting municipal solid waste management.

    Science.gov (United States)

    Li, Pu; Chen, Bing

    2011-04-01

    Although many studies on municipal solid waste (MSW) management have been conducted under uncertain conditions of coexisting fuzzy, stochastic, and interval information, efficiently solving conventional linear programming problems that integrate the fuzzy method with the other two has remained difficult. In this study, a fuzzy-stochastic-interval linear programming (FSILP) method is developed by integrating Nguyen's method with conventional linear programming for supporting municipal solid waste management. Nguyen's method was used to convert the fuzzy and fuzzy-stochastic linear programming problems into conventional linear programs, by measuring the attainment values of fuzzy numbers and/or fuzzy random variables, as well as the superiority and inferiority between triangular fuzzy numbers/triangular fuzzy-stochastic variables. The developed method can effectively tackle uncertainties described in terms of probability density functions, fuzzy membership functions, and discrete intervals. Moreover, the method improves upon the conventional interval fuzzy programming and two-stage stochastic programming approaches, requiring fewer constraints and significantly less computation time. The developed model was applied to a case study of a municipal solid waste management system in a city. The results indicated that reasonable solutions had been generated. The solution can help quantify the relationship between the change of system cost and the uncertainties, which could support further analysis of tradeoffs between the waste management cost and the system failure risk. Copyright © 2010 Elsevier Ltd. All rights reserved.
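
A minimal, self-contained sketch of the two-stage stochastic programming idea that several of these records build on: a first-stage decision (e.g., a water or waste allocation) is fixed before the uncertainty resolves, and a second-stage recourse penalty is paid in each scenario. All scenario values, probabilities, and costs below are hypothetical, purely for illustration.

```python
# Discrete demand scenarios (value, probability) -- hypothetical numbers.
scenarios = [(10.0, 0.3), (20.0, 0.5), (30.0, 0.2)]
c_alloc, c_short = 1.0, 4.0   # first-stage unit cost, second-stage shortage penalty

def expected_cost(x):
    """First-stage cost plus expected second-stage shortage penalty."""
    recourse = sum(p * c_short * max(0.0, d - x) for d, p in scenarios)
    return c_alloc * x + recourse

# Brute-force the first-stage allocation on a grid (a real model would
# solve the deterministic-equivalent linear program instead).
best_x = min((0.5 * k for k in range(81)), key=expected_cost)
print(f"optimal first-stage allocation: {best_x}, "
      f"expected cost: {expected_cost(best_x):.2f}")
```

With a shortage penalty four times the allocation cost, the optimal first stage covers demand up to the scenario whose exceedance probability falls below 1/4, the classical newsvendor critical ratio.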

  16. Two-stage free electron laser research

    Science.gov (United States)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June, 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, attainable power output, and constraints on laser operation were not known. The primary reason for supporting this research at that time was that it had the potential for producing short-wavelength radiation using a relatively low voltage electron beam. One advantage of a low-voltage two-stage FEL would be that shielding requirements would be greatly reduced compared with single-stage short-wavelength FELs. If the electron energy were kept below about 10 MeV, X-rays, generated by electrons striking the beam line wall, would not excite neutron resonances in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  17. Stochastic resonance induced by the novel random transitions of two-dimensional weak damping bistable duffing oscillator and bifurcation of moment equation

    International Nuclear Information System (INIS)

    Zhang Guangjun; Xu Jianxue; Wang Jue; Yue Zhifeng; Zou Hailin

    2009-01-01

    In this paper, stochastic resonance induced by the novel random transitions of a two-dimensional weak damping bistable Duffing oscillator is analyzed by the moment method. This kind of novel transition refers to transitions among the three potential wells on the two sides of the bifurcation point of the original system in the presence of internal noise. Several conclusions are drawn. First, the semi-analytical result for stochastic resonance induced by these novel random transitions can be obtained, and it is qualitatively compatible with the result of Monte Carlo simulation. Second, a bifurcation of double-branch fixed point curves occurs in the moment equations with noise intensity as their bifurcation parameter. Third, the bifurcation of the moment equations corresponds to stochastic resonance of the original system. Finally, the mechanism of stochastic resonance is presented from another viewpoint through analyzing the energy transfer induced by the bifurcation of the moment equations.

  18. Combinatorial designs a tribute to Haim Hanani

    CERN Document Server

    Hartman, A

    1989-01-01

    Haim Hanani pioneered the techniques for constructing designs and the theory of pairwise balanced designs, leading directly to Wilson's Existence Theorem. He also led the way in the study of resolvable designs, covering and packing problems, latin squares, 3-designs and other combinatorial configurations. The Hanani volume is a collection of research and survey papers at the forefront of research in combinatorial design theory, including Professor Hanani's own latest work on Balanced Incomplete Block Designs. Other areas covered include Steiner systems, finite geometries, quasigroups, and t-designs.

  19. A quark interpretation of the combinatorial hierarchy

    International Nuclear Information System (INIS)

    Enqvist, Kari.

    1979-01-01

    We propose a physical interpretation of the second level of the combinatorial hierarchy in terms of three quarks, three antiquarks and the vacuum. This interpretation allows us to introduce a new quantum number, which measures electromagnetic mass splitting of the quarks. We extend our argument by analogue to baryons, and find some SU(3) and some new mass formulas for baryons. The generalization of our approach to other hierarchy levels is discussed. We present also an empirical mass formula for baryons, which seems to be loosely connected with the combinatorial hierarchy. (author)

  20. A combinatorial approach to diffeomorphism invariant quantum gauge theories

    International Nuclear Information System (INIS)

    Zapata, J.A.

    1997-01-01

    Quantum gauge theory in the connection representation uses functions of holonomies as configuration observables. Physical observables (gauge and diffeomorphism invariant) are represented in the Hilbert space of physical states; physical states are gauge and diffeomorphism invariant distributions on the space of functions of the holonomies of the edges of a certain family of graphs. Then a family of graphs embedded in the space manifold (satisfying certain properties) induces a representation of the algebra of physical observables. We construct a quantum model from the set of piecewise linear graphs on a piecewise linear manifold, and another manifestly combinatorial model from graphs defined on a sequence of increasingly refined simplicial complexes. Even though the two models are different at the kinematical level, they provide unitarily equivalent representations of the algebra of physical observables in separable Hilbert spaces of physical states (their s-knot basis is countable). Hence, the combinatorial framework is compatible with the usual interpretation of quantum field theory. copyright 1997 American Institute of Physics

  1. Random vs. Combinatorial Methods for Discrete Event Simulation of a Grid Computer Network

    Science.gov (United States)

    Kuhn, D. Richard; Kacker, Raghu; Lei, Yu

    2010-01-01

    This study compared random and t-way combinatorial inputs of a network simulator, to determine if these two approaches produce significantly different deadlock detection for varying network configurations. Modeling deadlock detection is important for analyzing configuration changes that could inadvertently degrade network operations, or to determine modifications that could be made by attackers to deliberately induce deadlock. Discrete event simulation of a network may be conducted using random generation of inputs. In this study, we compare random with combinatorial generation of inputs. Combinatorial (or t-way) testing requires every combination of any t parameter values to be covered by at least one test. Combinatorial methods can be highly effective because empirical data suggest that nearly all failures involve the interaction of a small number of parameters (1 to 6). Thus, for example, if all deadlocks involve at most 5-way interactions between n parameters, then exhaustive testing of all n-way interactions adds no additional information that would not be obtained by testing all 5-way interactions. While the maximum degree of interaction between parameters involved in the deadlocks clearly cannot be known in advance, covering all t-way interactions may be more efficient than using random generation of inputs. In this study we tested this hypothesis for t = 2, 3, and 4 for deadlock detection in a network simulation. Achieving the same degree of coverage provided by 4-way tests would have required approximately 3.2 times as many random tests; thus combinatorial methods were more efficient for detecting deadlocks involving a higher degree of interactions. The paper reviews explanations for these results and implications for modeling and simulation.
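
The t-way idea in this record can be made concrete with a small sketch: a greedy pairwise (t = 2) suite is built for a toy configuration space and its 2-way coverage is compared with a random suite of the same size. The four three-valued parameters are an illustrative assumption, not the simulator's actual configuration.

```python
from itertools import combinations, product
import random

def pairs_covered(tests, n_params):
    """Set of (i, j, v_i, v_j) parameter-value pairs exercised by a suite."""
    covered = set()
    for test in tests:
        for i, j in combinations(range(n_params), 2):
            covered.add((i, j, test[i], test[j]))
    return covered

# Four hypothetical configuration parameters, three values each.
domains = [range(3)] * 4
all_tests = list(product(*domains))           # exhaustive: 3^4 = 81 tests
target = pairs_covered(all_tests, 4)          # every 2-way interaction

# Greedy construction: repeatedly pick the test covering the most new pairs.
suite, covered = [], set()
while covered < target:
    best = max(all_tests, key=lambda t: len(pairs_covered([t], 4) - covered))
    suite.append(best)
    covered |= pairs_covered([best], 4)

random.seed(1)
rand_suite = random.sample(all_tests, len(suite))
rand_cov = pairs_covered(rand_suite, 4)

print(f"greedy suite: {len(suite)} tests, {len(covered)}/{len(target)} pairs")
print(f"random suite of equal size covers {len(rand_cov)}/{len(target)} pairs")
```

The greedy suite reaches full pairwise coverage with far fewer than the 81 exhaustive tests, while a random suite of equal size typically leaves some pairs uncovered, mirroring the efficiency comparison the study makes at larger t.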

  2. Optimal Control for Stochastic Delay Evolution Equations

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Qingxin, E-mail: mqx@hutc.zj.cn [Huzhou University, Department of Mathematical Sciences (China)]; Shen, Yang, E-mail: skyshen87@gmail.com [York University, Department of Mathematics and Statistics (Canada)]

    2016-08-15

    In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin’s maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.

  3. Stochastic Averaging and Stochastic Extremum Seeking

    CERN Document Server

    Liu, Shu-Jun

    2012-01-01

    Stochastic Averaging and Stochastic Extremum Seeking develops methods of mathematical analysis inspired by the interest in reverse engineering and analysis of bacterial convergence by chemotaxis, and applies similar stochastic optimization techniques in other environments. The first half of the text presents significant advances in stochastic averaging theory, necessitated by the fact that existing theorems are restricted to systems with linear growth, globally exponentially stable average models, and vanishing stochastic perturbations, and prevent analysis over an infinite time horizon. The second half of the text introduces stochastic extremum seeking algorithms for model-free optimization of systems in real time using stochastic perturbations for estimation of their gradients. Both gradient- and Newton-based algorithms are presented, offering the user the choice between simplicity of implementation (gradient) and the ability to achieve a known, arbitrary convergence rate (Newton). The design of algorithms...

  4. Stochastic linear programming models, theory, and computation

    CERN Document Server

    Kall, Peter

    2011-01-01

    This new edition of Stochastic Linear Programming: Models, Theory and Computation has been brought completely up to date, either dealing with or at least referring to new material on models and methods, including DEA with stochastic outputs modeled via constraints on special risk functions (generalizing chance constraints, ICC’s and CVaR constraints), material on Sharpe-ratio, and Asset Liability Management models involving CVaR in a multi-stage setup. To facilitate use as a text, exercises are included throughout the book, and web access is provided to a student version of the authors’ SLP-IOR software. Additionally, the authors have updated the Guide to Available Software, and they have included newer algorithms and modeling systems for SLP. The book is thus suitable as a text for advanced courses in stochastic optimization, and as a reference to the field. From Reviews of the First Edition: "The book presents a comprehensive study of stochastic linear optimization problems and their applications. … T...

  5. A stage is a stage is a stage: a direct comparison of two scoring systems.

    Science.gov (United States)

    Dawson, Theo L

    2003-09-01

    L. Kohlberg (1969) argued that his moral stages captured a developmental sequence specific to the moral domain. To explore that contention, the author compared stage assignments obtained with the Standard Issue Scoring System (A. Colby & L. Kohlberg, 1987a, 1987b) and those obtained with a generalized content-independent stage-scoring system called the Hierarchical Complexity Scoring System (T. L. Dawson, 2002a), on 637 moral judgment interviews (participants' ages ranged from 5 to 86 years). The correlation between stage scores produced with the 2 systems was .88. Although standard issue scoring and hierarchical complexity scoring often awarded different scores up to Kohlberg's Moral Stage 2/3, from his Moral Stage 3 onward, scores awarded with the two systems predominantly agreed. The author explores the implications for developmental research.

  6. Stochastic modeling of wetland-groundwater systems

    Science.gov (United States)

    Bertassello, Leonardo Enrico; Rao, P. Suresh C.; Park, Jeryang; Jawitz, James W.; Botter, Gianluca

    2018-02-01

    Modeling and data analyses were used in this study to examine the temporal hydrological variability in geographically isolated wetlands (GIWs), as influenced by hydrologic connectivity to shallow groundwater and wetland bathymetry, and subject to stochastic hydro-climatic forcing. We examined the general case of GIWs coupled to shallow groundwater through exfiltration or infiltration across the wetland bottom. We also examined the limiting case in which the wetland stage is the local expression of the shallow groundwater. We derive analytical expressions for the steady-state probability density functions (pdfs) of wetland water storage and stage using a few scaled, physically based parameters. In addition, we analyze the hydrologic crossing time properties of wetland stage, and the dependence of the mean hydroperiod on climatic and wetland morphologic attributes. Our analyses show that it is crucial to account for shallow groundwater connectivity to fully understand the hydrologic dynamics in wetlands. The application of the model to two different case studies in Florida, jointly with a detailed sensitivity analysis, allowed us to identify the main drivers of hydrologic dynamics in GIWs under different climate and morphologic conditions.

  7. Combinatorial Dyson-Schwinger equations and inductive data types

    Science.gov (United States)

    Kock, Joachim

    2016-06-01

    The goal of this contribution is to explain the analogy between combinatorial Dyson-Schwinger equations and inductive data types to a readership of mathematical physicists. The connection relies on an interpretation of combinatorial Dyson-Schwinger equations as fixpoint equations for polynomial functors (established elsewhere by the author, and summarised here), combined with the now-classical fact that polynomial functors provide semantics for inductive types. The paper is expository, and comprises also a brief introduction to type theory.

  8. Fluctuations induced extinction and stochastic resonance effect in a model of tumor growth with periodic treatment

    Energy Technology Data Exchange (ETDEWEB)

    Li Dongxi, E-mail: lidongxi@mail.nwpu.edu.c [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China)]; Xu Wei; Guo, Yongfeng; Xu Yong [Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an 710072 (China)]

    2011-01-31

    We investigate a stochastic model of tumor growth derived from the catalytic Michaelis-Menten reaction with positional and environmental fluctuations under subthreshold periodic treatment. Firstly, the influences of environmental fluctuations on the treatable stage are analyzed numerically. Applying the standard theory of stochastic resonance derived from the two-state approach, we derive the signal-to-noise ratio (SNR) analytically, which is used to measure the stochastic resonance phenomenon. It is found that weak environmental fluctuations could induce the extinction of tumor cells under the subthreshold periodic treatment. Positional stability is more favorable for the treatment of the tumor cells. Besides, an appropriate and feasible treatment intensity and treatment cycle should be carefully considered in the treatment of tumor cells.
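
The two-state switching mechanism behind stochastic resonance can be illustrated with a minimal Euler-Maruyama simulation of an overdamped particle in the bistable potential V(x) = -x^2/2 + x^4/4 with a weak periodic drive. This is a generic sketch, not the paper's tumor-growth model, and all parameters are illustrative.

```python
import math, random

def count_switches(D, A=0.1, omega=0.01, dt=0.01, steps=200_000, seed=0):
    """Count well-to-well transitions of an overdamped bistable particle
    dx = (x - x^3 + A*cos(omega*t)) dt + sqrt(2*D) dW, Euler-Maruyama."""
    rng = random.Random(seed)
    x, well, switches = 1.0, 1, 0
    for n in range(steps):
        drift = x - x**3 + A * math.cos(omega * n * dt)
        x += drift * dt + math.sqrt(2.0 * D * dt) * rng.gauss(0.0, 1.0)
        if x > 0.5 and well == -1:        # crossed into the right well
            well, switches = 1, switches + 1
        elif x < -0.5 and well == 1:      # crossed into the left well
            well, switches = -1, switches + 1
    return switches

weak, moderate = count_switches(0.01), count_switches(0.15)
print(f"switches at D=0.01: {weak}, at D=0.15: {moderate}")
```

At very weak noise the particle essentially never crosses the barrier, while at moderate noise intensity the drive-assisted hopping between wells becomes frequent; the SNR analyzed in the record peaks in this intermediate regime.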

  9. Fluctuations induced extinction and stochastic resonance effect in a model of tumor growth with periodic treatment

    International Nuclear Information System (INIS)

    Li Dongxi; Xu Wei; Guo, Yongfeng; Xu Yong

    2011-01-01

    We investigate a stochastic model of tumor growth derived from the catalytic Michaelis-Menten reaction with positional and environmental fluctuations under subthreshold periodic treatment. Firstly, the influences of environmental fluctuations on the treatable stage are analyzed numerically. Applying the standard theory of stochastic resonance derived from the two-state approach, we derive the signal-to-noise ratio (SNR) analytically, which is used to measure the stochastic resonance phenomenon. It is found that weak environmental fluctuations could induce the extinction of tumor cells under the subthreshold periodic treatment. Positional stability is more favorable for the treatment of the tumor cells. Besides, an appropriate and feasible treatment intensity and treatment cycle should be carefully considered in the treatment of tumor cells.

  10. Two-stage thermal/nonthermal waste treatment process

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Anderson, G.K.; Coogan, J.J.; Kang, M.; Tennant, R.A.; Wantuck, P.J.

    1993-01-01

    An innovative waste treatment technology is being developed in Los Alamos to address the destruction of hazardous organic wastes. The technology described in this report uses two stages: a packed bed reactor (PBR) in the first stage to volatilize and/or combust liquid organics and a silent discharge plasma (SDP) reactor to remove entrained hazardous compounds in the off-gas to even lower levels. We have constructed pre-pilot-scale PBR-SDP apparatus and tested the two stages separately and in combined modes. These tests are described in the report

  11. Path to Stochastic Stability: Comparative Analysis of Stochastic Learning Dynamics in Games

    KAUST Repository

    Jaleel, Hassan

    2018-04-08

    Stochastic stability is a popular solution concept for stochastic learning dynamics in games. However, a critical limitation of this solution concept is its inability to distinguish between different learning rules that lead to the same steady-state behavior. We address this limitation for the first time and develop a framework for the comparative analysis of stochastic learning dynamics with different update rules but same steady-state behavior. We present the framework in the context of two learning dynamics: Log-Linear Learning (LLL) and Metropolis Learning (ML). Although both of these dynamics have the same stochastically stable states, LLL and ML correspond to different behavioral models for decision making. Moreover, we demonstrate through an example setup of sensor coverage game that for each of these dynamics, the paths to stochastically stable states exhibit distinctive behaviors. Therefore, we propose multiple criteria to analyze and quantify the differences in the short and medium run behavior of stochastic learning dynamics. We derive and compare upper bounds on the expected hitting time to the set of Nash equilibria for both LLL and ML. For the medium to long-run behavior, we identify a set of tools from the theory of perturbed Markov chains that result in a hierarchical decomposition of the state space into collections of states called cycles. We compare LLL and ML based on the proposed criteria and develop invaluable insights into the comparative behavior of the two dynamics.
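
The behavior the record contrasts can be grounded in a toy example. Below is a minimal sketch of Log-Linear Learning in a 2x2 identical-interest coordination game (an illustrative setup, not the paper's sensor coverage game): the selected player responds noisily, choosing an action with probability proportional to exp(payoff/tau), and the code reports the empirical fraction of time spent in the payoff-dominant equilibrium, the stochastically stable state.

```python
import math, random

# Symmetric identical-interest game: both playing 0 pays 2, both playing 1
# pays 1, a mismatch pays 0 -- so (0, 0) is the potential maximizer.
def payoff(me, other):
    if me == other:
        return 2.0 if me == 0 else 1.0
    return 0.0

def lll_run(tau=0.2, steps=50_000, seed=0):
    """Log-Linear Learning; returns the fraction of steps spent at (0, 0)."""
    rng = random.Random(seed)
    a = [1, 1]                         # start in the inferior equilibrium
    time_in_best = 0
    for _ in range(steps):
        i = rng.randrange(2)           # pick a player at random
        # Log-linear (noisy best) response to the other player's action.
        w = [math.exp(payoff(act, a[1 - i]) / tau) for act in (0, 1)]
        a[i] = 0 if rng.random() < w[0] / (w[0] + w[1]) else 1
        time_in_best += (a == [0, 0])
    return time_in_best / steps

frac = lll_run()
print(f"fraction of time in the payoff-dominant equilibrium: {frac:.3f}")
```

Even when started in the inferior equilibrium, the chain escapes via rare noisy deviations and then concentrates on the potential maximizer; lowering tau sharpens this concentration but, as the record emphasizes, also lengthens the hitting time.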

  12. Solving the neutron diffusion equation on combinatorial geometry computational cells for reactor physics calculations

    International Nuclear Information System (INIS)

    Azmy, Y. Y.

    2004-01-01

    An approach is developed for solving the neutron diffusion equation on combinatorial geometry computational cells, that is, computational cells composed by combinatorial operations involving simple-shaped component cells. The only constraint on the component cells from which the combinatorial cells are assembled is that they possess a legitimate discretization of the underlying diffusion equation. We use the Finite Difference (FD) approximation of the x, y-geometry diffusion equation in this work. Performing the same combinatorial operations involved in composing the combinatorial cell on these discrete-variable equations yields equations that employ new discrete variables defined only on the combinatorial cell's volume and faces. The only approximation involved in this process, beyond the truncation error committed in discretizing the diffusion equation over each component cell, is a consistent-order Legendre series expansion. Preliminary results for simple configurations establish the accuracy of the combinatorial geometry solution compared to straight FD as the system dimensions decrease. Furthermore, numerical results validate the consistent Legendre-series expansion order by illustrating the second-order accuracy of the combinatorial geometry solution, the same as standard FD. Nevertheless, the magnitude of the error for the new approach is larger than FD's since it incorporates the additional truncated series approximation. (authors)

  13. Comparison of One-Stage Free Gracilis Muscle Flap With Two-Stage Method in Chronic Facial Palsy

    Directory of Open Access Journals (Sweden)

    J Ghaffari

    2007-08-01

    Background: Rehabilitation of facial paralysis is one of the greatest challenges faced by reconstructive surgeons today. The traditional method for treatment of patients with facial palsy is the two-stage free gracilis flap, which has a long latency period between the two stages of surgery. Methods: In this paper, we prospectively compared the results of the one-stage gracilis flap method with the two-stage technique. Results: Out of 41 patients with facial palsy referred to Hazrat-e-Fatemeh Hospital, 31 were selected, of whom 22 underwent the two-stage and 9 the one-stage method of treatment. The two groups were identical according to age, sex, intensity of illness, duration, and chronicity of illness. Mean duration of follow up was 37 months. There was no significant relation between the two groups regarding the symmetry of face in repose, smiling, whistling and nasolabial folds. Frequency of complications was equal in both groups. The postoperative surgeons' and patients' satisfaction were equal in both groups. There was no significant difference between the mean excursion of the muscle flap in the one-stage (9.8 mm) and two-stage (8.9 mm) groups. The ratio of contraction of the affected side compared to the normal side was similar in both groups. The mean time to the initial contraction of the muscle flap in the one-stage group (5.5 months) had a significant difference (P=0.001) with the two-stage one (6.5 months). The study revealed a highly significant difference (P=0.0001) between the mean waiting period from the first operation to the beginning of muscle contraction in the one-stage (5.5 months) and two-stage (17.1 months) groups. Conclusion: It seems that the results and complications of the two methods are the same, but the one-stage method requires less time for facial reanimation and is cost-effective because it saves time and decreases hospitalization costs.

  14. Stochastic aspects of two-dimensional vibration diagnostics

    International Nuclear Information System (INIS)

    Pazsit, I.; Antonopoulos-Domis, M.; Gloeckler, O.

    1985-01-01

    The aim of this paper is to investigate the stochastic features of two-dimensional lateral damped oscillations of PWR core internals, that are induced by random force components. It is also investigated how these vibrating components, or the forces giving rise to the vibrations could be diagnosed through the analysis of displacement or neutron noise signals. The approach pursued here is to select a realisation of the random force components, then the equations of the motion are integrated and the time history of displacement components is obtained. From here various statistical descriptors of the motion, such as trajectory pattern, spectra and PDF functions, etc. can be calculated. It was investigated how these statistical descriptors depend on the characteristics of the driving force for both stationary and non-stationary cases. A conclusion of possible diagnostical relevance is that, under certain circumstances, the PDF functions could be an indicator of whether a particular peak in the corresponding power spectra belongs to a resonance in system transfer or rather a resonance in the external driving force. (author)

  15. Stochastic aspects of two-dimensional vibration diagnostics

    International Nuclear Information System (INIS)

    Pazsit, I.; Antonopoulos-Domis, M.; Glockler, O.

    1984-01-01

    The aim of this paper is to investigate the stochastic features of two-dimensional lateral damped oscillations of PWR core internals that are induced by random force components. It is also investigated how these vibrating components, or the forces giving rise to the vibrations, could be diagnosed through the analysis of displacement or neutron noise signals. The approach pursued here is to select a realisation of the random force components, then the equations of the motion are integrated and the time history of displacement components is obtained. From here various statistical descriptors of the motion, such as trajectory pattern, spectra and PDF functions etc., can be calculated. It was investigated how these statistical descriptors depend on the characteristics of the driving force for both stationary and non-stationary cases. A conclusion of possible diagnostical relevance is that, under certain circumstances, the PDF functions could be an indicator of whether a particular peak in the corresponding power spectra belongs to a resonance in system transfer or rather a resonance in the external driving force

  16. Stochastic aspects of two-dimensional vibration diagnostics

    International Nuclear Information System (INIS)

    Pazsit, I.; Gloeckler, O.

    1984-01-01

    The aim of this paper is to investigate the stochastic features of two-dimensional lateral damped oscillations of PWR core internals that are induced by random force components. It is also investigated how these vibrating components, or the forces giving rise to the vibrations, could be diagnosed through the analysis of displacement or neutron noise signals. The approach pursued here is to select a realisation of the random force components, then the equations of the motion are integrated and the time history of displacement components is obtained. From here various statistical descriptors of the motion, such as trajectory pattern, spectra and PDF functions etc., can be calculated. It was investigated how these statistical descriptors depend on the characteristics of the driving force for both stationary and non-stationary cases. A conclusion of possible diagnostical relevance is that, under certain circumstances, the PDF functions could be an indicator of whether a particular peak in the corresponding power spectra belongs to a resonance in system transfer or rather a resonance in the external driving force. (author)

  17. Stochastic response of van der Pol oscillator with two kinds of fractional derivatives under Gaussian white noise excitation

    International Nuclear Information System (INIS)

    Yang Yong-Ge; Xu Wei; Sun Ya-Hui; Gu Xu-Dong

    2016-01-01

    This paper aims to investigate the stochastic response of the van der Pol (VDP) oscillator with two kinds of fractional derivatives under Gaussian white noise excitation. First, the fractional VDP oscillator is replaced by an equivalent VDP oscillator without fractional derivative terms by using the generalized harmonic balance technique. Then, the stochastic averaging method is applied to the equivalent VDP oscillator to obtain the analytical solution. Finally, the analytical solutions are validated by numerical results from the Monte Carlo simulation of the original fractional VDP oscillator. The numerical results not only demonstrate the accuracy of the proposed approach but also show that the fractional order, the fractional coefficient and the intensity of Gaussian white noise play important roles in the responses of the fractional VDP oscillator. An interesting phenomenon we found is that the effects of the fractional orders of the two kinds of fractional derivative terms on the fractional stochastic system are totally contrary. (paper)

  18. Stochastic line motion and stochastic flux conservation for nonideal hydromagnetic models

    International Nuclear Information System (INIS)

    Eyink, Gregory L.

    2009-01-01

    We prove that smooth solutions of nonideal (viscous and resistive) incompressible magnetohydrodynamic (MHD) equations satisfy a stochastic law of flux conservation. This property implies that the magnetic flux through a surface is equal to the average of the magnetic fluxes through an ensemble of surfaces advected backward in time by the plasma velocity perturbed with a random white noise. Our result is an analog of the well-known Alfven theorem of ideal MHD and is valid for any value of the magnetic Prandtl number. A second stochastic conservation law is shown to hold at unit Prandtl number, a random version of the generalized Kelvin theorem derived by Bekenstein and Oron for ideal MHD. These stochastic conservation laws are not only shown to be consequences of the nonideal MHD equations but are proved in fact to be equivalent to those equations. We derive similar results for two more refined hydromagnetic models, Hall MHD and the two-fluid plasma model, still assuming incompressible velocities and isotropic transport coefficients. Finally, we use these results to discuss briefly the infinite-Reynolds-number limit of hydromagnetic turbulence and to support the conjecture that flux conservation remains stochastic in that limit.

  19. Stem cell proliferation and differentiation and stochastic bistability in gene expression

    International Nuclear Information System (INIS)

    Zhdanov, V. P.

    2007-01-01

    The process of proliferation and differentiation of stem cells is inherently stochastic in the sense that the outcome of cell division is characterized by probabilities that depend on the intracellular properties, extracellular medium, and cell-cell communication. Despite four decades of intensive studies, the understanding of the physics behind this stochasticity is still limited, both in details and conceptually. Here, we suggest a simple scheme showing that the stochastic behavior of a single stem cell may be related to (i) the existence of a short stage of decision whether it will proliferate or differentiate and (ii) control of this stage by stochastic bistability in gene expression or, more specifically, by transcriptional 'bursts.' Our Monte Carlo simulations indicate that our proposed scheme may operate if the number of mRNA (or protein) molecules generated during the high-reactive periods of gene expression is below or about 50. The stochastic-burst window in the space of kinetic parameters is found to increase with decreasing the mRNA and/or regulatory-protein numbers and increasing the number of regulatory sites. For mRNA production with three regulatory sites, for example, the mRNA degradation rate constant may change in the range ±10%
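The transcriptional "bursts" invoked above are commonly modelled by a two-state promoter simulated with Gillespie's algorithm; the sketch below is a generic version of that scheme, not the authors' Monte Carlo model, and all rate values are assumptions chosen to produce strong bursting.

```python
import random

# Gillespie SSA for a two-state ("telegraph") gene: the promoter switches
# ON/OFF, transcribes only while ON, and mRNA decays; slow activation with
# fast deactivation yields bursty, super-Poissonian mRNA statistics.
random.seed(2)
k_on, k_off = 0.1, 1.0      # promoter activation / deactivation rates
k_tx, k_deg = 20.0, 1.0     # transcription (ON only) / mRNA degradation
t, t_end = 0.0, 5000.0
on, m = 0, 0
acc_t = acc_m = acc_m2 = 0.0
while t < t_end:
    rates = [k_on * (1 - on), k_off * on, k_tx * on, k_deg * m]
    total = sum(rates)
    dt = random.expovariate(total)
    # accumulate time-weighted moments of the mRNA copy number
    acc_t += dt; acc_m += m * dt; acc_m2 += m * m * dt
    t += dt
    u = random.uniform(0.0, total)
    if u < rates[0]:
        on = 1
    elif u < rates[0] + rates[1]:
        on = 0
    elif u < rates[0] + rates[1] + rates[2]:
        m += 1
    else:
        m -= 1
mean_m = acc_m / acc_t
fano = (acc_m2 / acc_t - mean_m**2) / mean_m   # > 1 signals bursting
```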

  20. Changing contributions of stochastic and deterministic processes in community assembly over a successional gradient.

    Science.gov (United States)

    Måren, Inger Elisabeth; Kapfer, Jutta; Aarrestad, Per Arild; Grytnes, John-Arvid; Vandvik, Vigdis

    2018-01-01

    Successional dynamics in plant community assembly may result from both deterministic and stochastic ecological processes. The relative importance of different ecological processes is expected to vary over the successional sequence, between different plant functional groups, and with the disturbance levels and land-use management regimes of the successional systems. We evaluate the relative importance of stochastic and deterministic processes in bryophyte and vascular plant community assembly after fire in grazed and ungrazed anthropogenic coastal heathlands in Northern Europe. A replicated series of post-fire successions (n = 12) were initiated under grazed and ungrazed conditions, and vegetation data were recorded in permanent plots over 13 years. We used redundancy analysis (RDA) to test for deterministic successional patterns in species composition repeated across the replicate successional series and analyses of co-occurrence to evaluate to what extent species respond synchronously along the successional gradient. Change in species co-occurrences over succession indicates stochastic successional dynamics at the species level (i.e., species equivalence), whereas constancy in co-occurrence indicates deterministic dynamics (successional niche differentiation). The RDA shows high and deterministic vascular plant community compositional change, especially early in succession. Co-occurrence analyses indicate stochastic species-level dynamics the first two years, which then give way to more deterministic replacements. Grazed and ungrazed successions are similar, but the early stage stochasticity is higher in ungrazed areas. Bryophyte communities in ungrazed successions resemble vascular plant communities. In contrast, bryophytes in grazed successions showed consistently high stochasticity and low determinism in both community composition and species co-occurrence. In conclusion, stochastic and individualistic species responses early in succession give way to more

  1. The construction of combinatorial manifolds with prescribed sets of links of vertices

    International Nuclear Information System (INIS)

    Gaifullin, A A

    2008-01-01

    To every oriented closed combinatorial manifold we assign the set (with repetitions) of isomorphism classes of links of its vertices. The resulting transformation L is the main object of study in this paper. We pose an inversion problem for L and show that this problem is closely related to Steenrod's problem on the realization of cycles and to the Rokhlin-Schwartz-Thom construction of combinatorial Pontryagin classes. We obtain a necessary condition for a set of isomorphism classes of combinatorial spheres to belong to the image of L. (Sets satisfying this condition are said to be balanced.) We give an explicit construction showing that every balanced set of isomorphism classes of combinatorial spheres falls into the image of L after passing to a multiple set and adding several pairs of the form (Z,-Z), where -Z is the sphere Z with the orientation reversed. Given any singular simplicial cycle ξ of a space X, this construction enables us to find explicitly a combinatorial manifold M and a map φ:M→X such that φ * [M]=r[ξ] for some positive integer r. The construction is based on resolving singularities of ξ. We give applications of the main construction to cobordisms of manifolds with singularities and cobordisms of simple cells. In particular, we prove that every rational additive invariant of cobordisms of manifolds with singularities admits a local formula. Another application is the construction of explicit (though inefficient) local combinatorial formulae for polynomials in the rational Pontryagin classes of combinatorial manifolds

  2. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    Science.gov (United States)

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95 % confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, and disparities in surgical technique and antibiotic regimes limit the recommendations that can be made. Recent studies suggest one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent optimum treatment; however, patient selection criteria and key components of surgical and post-operative anti-microbial management remain to be defined. Level of evidence: III.

  3. A Two-Stage Queue Model to Optimize Layout of Urban Drainage System considering Extreme Rainstorms

    Directory of Open Access Journals (Sweden)

    Xinhua He

    2017-01-01

Full Text Available Extreme rainstorms are a main cause of urban floods when the urban drainage system cannot discharge stormwater successfully. This paper investigates the distribution features of rainstorms and the draining process of urban drainage systems, and uses a two-stage single-counter queue method M/M/1→M/D/1 to model the urban drainage system. The model emphasizes the randomness of extreme rainstorms, the fuzziness of the draining process, and the construction and operation cost of the drainage system. Its two objectives are the total cost of construction and operation and the overall sojourn time of stormwater. An improved genetic algorithm is designed to solve this complex nondeterministic problem, incorporating the stochastic and fuzzy characteristics of the whole drainage process. A numerical example in Shanghai illustrates how to implement the model, and comparisons with alternative algorithms show its performance in computational flexibility and efficiency. Discussions on the sensitivity of four main parameters, that is, the number of pump stations, drainage pipe diameter, rainstorm precipitation intensity, and confidence levels, are also presented to provide guidance for designing urban drainage systems.
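A tandem queue of the M/M/1→M/D/1 type named above can be simulated compactly with Lindley-style recursions: Poisson arrivals, exponential service at stage 1, and a deterministic drain time at stage 2. The parameters below are illustrative, not the Shanghai data.

```python
import random

# FIFO tandem queue: departure recursions
#   D1_i = max(A_i, D1_{i-1}) + S1_i   (M/M/1 stage)
#   D2_i = max(D1_i, D2_{i-1}) + d2    (M/D/1 stage, constant service)
random.seed(3)
lam, mu1, d2 = 1.0, 2.0, 0.4     # arrival rate, stage-1 rate, stage-2 time (assumed)
n = 100_000
t_arr = 0.0
dep1 = dep2 = 0.0
total_sojourn = 0.0
for _ in range(n):
    t_arr += random.expovariate(lam)        # next Poisson arrival
    s1 = random.expovariate(mu1)            # exponential stage-1 service
    dep1 = max(t_arr, dep1) + s1            # leaves stage 1
    dep2 = max(dep1, dep2) + d2             # leaves stage 2
    total_sojourn += dep2 - t_arr
mean_sojourn = total_sojourn / n            # overall time in the system
```

Both utilizations (lam/mu1 = 0.5 and lam*d2 = 0.4) are below one, so the queue is stable and the mean sojourn time converges.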

  4. Dendrimer-based dynamic combinatorial libraries

    NARCIS (Netherlands)

    Chang, T.; Meijer, E.W.

    2005-01-01

The aim of this project is to create water-soluble dynamic combinatorial libraries based upon dendrimer-guest complexes. The guest molecules are designed to bind to dendrimers using multiple secondary interactions, such as electrostatics and hydrogen bonding. We have been able to incorporate various guest

  5. Computational Complexity of Combinatorial Surfaces

    NARCIS (Netherlands)

    Vegter, Gert; Yap, Chee K.

    1990-01-01

    We investigate the computational problems associated with combinatorial surfaces. Specifically, we present an algorithm (based on the Brahana-Dehn-Heegaard approach) for transforming the polygonal schema of a closed triangulated surface into its canonical form in O(n log n) time, where n is the

  6. Combinatorial auctions for electronic business

    Indian Academy of Sciences (India)

    (6) Information feedback: An auction protocol may be a direct mechanism or an .... transparency of allocation decisions arise in resolving these ties. .... bidding, however more recently, combinatorial bids are allowed [50] making ...... Also, truth revelation and other game theoretic considerations are not taken into account.

  7. Combinatorial Proofs and Algebraic Proofs

    Indian Academy of Sciences (India)

    Permanent link: https://www.ias.ac.in/article/fulltext/reso/018/07/0630-0645. Keywords. Combinatorial proof; algebraic proof; binomial identity; recurrence relation; composition; Fibonacci number; Fibonacci identity; Pascal triangle. Author Affiliations. Shailesh A Shirali1. Sahyadri School Tiwai Hill, Rajgurunagar Pune 410 ...

  8. Fuzzy stochastic analysis of serviceability and ultimate limit states of two-span pedestrian steel bridge

    Science.gov (United States)

Kala, Zdeněk; Sandovič, Giedrė

    2012-09-01

    The paper deals with non-linear analysis of ultimate and serviceability limit states of two-span pedestrian steel bridge. The effects of random material and geometrical characteristics on limit states are analyzed. The Monte Carlo method was applied to stochastic analysis. For the serviceability limit state, also influence of fuzzy uncertainty of the limit deflection value on random characteristics of load capacity of variable action was studied. The results prove that, for the type of structure studied, the serviceability limit state is decisive from the point of view of design. The present paper opens a discussion on the use of stochastic analysis to verify the limit deflections given in the standards EUROCODES.
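The Monte Carlo approach used above can be illustrated on a deliberately simplified serviceability check: the midspan deflection of one simply supported span, delta = 5*q*L^4/(384*E*I), with random load and bending stiffness. The distributions and the L/250 deflection limit below are assumptions for illustration, not the bridge model of the paper.

```python
import random

# Monte Carlo estimate of the probability that the midspan deflection of a
# simply supported beam exceeds a serviceability limit, with normally
# distributed line load q and bending stiffness EI (illustrative values).
random.seed(4)
L = 20.0                        # span [m]
limit = L / 250.0               # 80 mm deflection limit (assumed)
n, exceed = 100_000, 0
for _ in range(n):
    q = random.gauss(8.0e3, 1.5e3)     # line load [N/m]
    EI = random.gauss(2.1e8, 2.0e7)    # bending stiffness [N*m^2]
    delta = 5 * q * L**4 / (384 * EI)  # midspan deflection [m]
    if delta > limit:
        exceed += 1
p_exceed = exceed / n                  # exceedance probability estimate
```

Replacing the crisp limit with a fuzzy membership function over candidate limits would reproduce, in miniature, the fuzzy-stochastic combination the abstract describes.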

  9. Polyhedral and semidefinite programming methods in combinatorial optimization

    CERN Document Server

    Tunçel, Levent

    2010-01-01

    Since the early 1960s, polyhedral methods have played a central role in both the theory and practice of combinatorial optimization. Since the early 1990s, a new technique, semidefinite programming, has been increasingly applied to some combinatorial optimization problems. The semidefinite programming problem is the problem of optimizing a linear function of matrix variables, subject to finitely many linear inequalities and the positive semidefiniteness condition on some of the matrix variables. On certain problems, such as maximum cut, maximum satisfiability, maximum stable set and geometric r

  10. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    International Nuclear Information System (INIS)

    Ariunbaatar, Javkhlan; Scotto Di Perta, Ester; Panico, Antonio; Frunzo, Luigi; Esposito, Giovanni; Lens, Piet N.L.; Pirozzi, Francesco

    2015-01-01

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC 50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid

  11. Staging of gastric adenocarcinoma using two-phase spiral CT: correlation with pathologic staging

    International Nuclear Information System (INIS)

    Seo, Tae Seok; Lee, Dong Ho; Ko, Young Tae; Lim, Joo Won

    1998-01-01

To correlate the preoperative staging of gastric adenocarcinoma using two-phase spiral CT with pathologic staging. One hundred and eighty patients with gastric cancers confirmed during surgery underwent two-phase spiral CT, and were evaluated retrospectively. CT scans were obtained in the prone position after ingestion of water. Scans were performed 35 and 80 seconds after the start of infusion of 120 mL of non-ionic contrast material at a rate of 3 mL/sec. Five mm collimation, 7 mm/sec table feed and a 5 mm reconstruction interval were used. T- and N-stage were determined using spiral CT images, without knowledge of the pathologic results. Pathologic staging was later compared with CT staging. Pathologic T-stage was T1 in 70 cases (38.9%), T2 in 33 (18.3%), T3 in 73 (40.6%), and T4 in 4 (2.2%). Type-I or IIa elevated lesions accounted for 10 of 70 T1 cases (14.3%) and flat or depressed lesions (type IIb, IIc, or III) for 60 (85.7%). Pathologic N-stage was N0 in 85 cases (47.2%), N1 in 42 (23.3%), N2 in 31 (17.2%), and N3 in 22 (12.2%). The detection rate of early gastric cancer using two-phase spiral CT was 100.0% (10 of 10 cases) among elevated lesions and 78.3% (47 of 60 cases) among flat or depressed lesions. With regard to T-stage, there was good correlation between CT image and pathology in 86 of 180 cases (47.8%). Overstaging occurred in 23.3% (42 of 180 cases) and understaging in 28.9% (52 of 180 cases). With regard to N-stage, good correlation between CT image and pathology was noted in 94 of 180 cases (52.2%). The rate of understaging (31.7%, 57 of 180 cases) was higher than that of overstaging (16.1%, 29 of 180 cases) (p<0.001). The overall detection rate of early gastric cancer using two-phase spiral CT was 81.4%, and there was no significant difference in detectability between elevated and depressed lesions. Two-phase spiral CT for determining the T- and N-stage of gastric cancer was not effective; it was accurate in about 50% of cases, and understaging tended to occur.

  12. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    Science.gov (United States)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  13. Stochastic time series analysis of hydrology data for water resources

    Science.gov (United States)

    Sathish, S.; Khadar Babu, S. K.

    2017-11-01

This publication applies stochastic time series analysis to hydrology data at the seasonal stage, considering different statistical tests for predicting hydrologic time series with the Thomas-Fiering model. Hydrologic time series of flood flows have received a great deal of attention worldwide, and interest in stochastic time series methods is expanding with growing concerns about seasonal periods and global warming. A recent trend among researchers is to test for seasonal periods in hydrologic flow series using stochastic processes based on the Thomas-Fiering model. The present article proposes to predict seasonal periods in hydrology using the Thomas-Fiering model.
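The Thomas-Fiering recursion named above generates synthetic monthly flows that preserve monthly means, standard deviations and lag-1 correlations: q[j+1] = m[j+1] + b_j*(q[j] - m[j]) + e*s[j+1]*sqrt(1 - r_j^2), with b_j = r_j*s[j+1]/s[j]. The monthly statistics below are invented for illustration, not data from the paper.

```python
import math
import random

# Thomas-Fiering synthetic streamflow generator (monthly, lag-1 Markov).
random.seed(5)
m = [30, 45, 80, 120, 150, 110, 70, 50, 40, 35, 32, 30]   # monthly means (assumed)
s = [8, 10, 20, 30, 35, 25, 15, 12, 10, 9, 8, 8]          # monthly std devs (assumed)
r = [0.6] * 12                                            # lag-1 correlations (assumed)
n_years = 2000
flows = [[0.0] * 12 for _ in range(n_years)]
q = m[0]                                                  # starting flow
for y in range(n_years):
    for j in range(12):
        jn = (j + 1) % 12
        b = r[j] * s[jn] / s[j]
        q = m[jn] + b * (q - m[j]) + random.gauss(0.0, 1.0) * s[jn] * math.sqrt(1 - r[j]**2)
        flows[y][jn] = q

# Sanity check: the synthetic May mean should track the target m[4].
may_mean = sum(flows[y][4] for y in range(n_years)) / n_years
```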

  14. On Stochastic Fishery Games with Endogenous Stage-Payoffs and Transition Probabilities

    NARCIS (Netherlands)

    Joosten, Reinoud A.M.G.; Samuel, Llea; Li, Deng-Feng; Yang, Xiao-Guang; Uetz, Marc; Xu, Gen-Jiu

    2017-01-01

    We engineered a stochastic fishery game in which overfishing has a twofold effect: it gradually damages the fish stock inducing lower catches in states High and Low, and it gradually causes the system to spend more time in the latter state with lower landings. To analyze the effects of this ‘double

  15. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes.

    Science.gov (United States)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.
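The "basic two-stage model" referenced above (constitutive transcription, translation from each mRNA, first-order decay of both species) can be simulated directly with Gillespie's algorithm and checked against the standard analytic mean, protein mean = (k_m/g_m)*(k_p/g_p). This is a generic sketch with illustrative rates, not the paper's Poisson-partitioning derivation.

```python
import random

# Gillespie SSA for the two-stage gene expression model:
#   0 -> mRNA (k_m), mRNA -> 0 (g_m), mRNA -> mRNA + protein (k_p), protein -> 0 (g_p)
random.seed(6)
k_m, g_m = 1.0, 1.0        # mRNA birth / decay
k_p, g_p = 5.0, 0.1        # protein birth per mRNA / decay
t, t_end = 0.0, 10_000.0
mrna, prot = 1, 0
acc_t = acc_p = 0.0
while t < t_end:
    rates = [k_m, g_m * mrna, k_p * mrna, g_p * prot]
    total = sum(rates)
    dt = random.expovariate(total)
    acc_t += dt; acc_p += prot * dt        # time-weighted protein average
    t += dt
    u = random.uniform(0.0, total)
    if u < rates[0]:
        mrna += 1
    elif u < rates[0] + rates[1]:
        mrna -= 1
    elif u < rates[0] + rates[1] + rates[2]:
        prot += 1
    else:
        prot -= 1
mean_prot = acc_p / acc_t                  # analytic value: (1/1)*(5/0.1) = 50
```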

  16. Exact protein distributions for stochastic models of gene expression using partitioning of Poisson processes

    Science.gov (United States)

    Pendar, Hodjat; Platini, Thierry; Kulkarni, Rahul V.

    2013-04-01

    Stochasticity in gene expression gives rise to fluctuations in protein levels across a population of genetically identical cells. Such fluctuations can lead to phenotypic variation in clonal populations; hence, there is considerable interest in quantifying noise in gene expression using stochastic models. However, obtaining exact analytical results for protein distributions has been an intractable task for all but the simplest models. Here, we invoke the partitioning property of Poisson processes to develop a mapping that significantly simplifies the analysis of stochastic models of gene expression. The mapping leads to exact protein distributions using results for mRNA distributions in models with promoter-based regulation. Using this approach, we derive exact analytical results for steady-state and time-dependent distributions for the basic two-stage model of gene expression. Furthermore, we show how the mapping leads to exact protein distributions for extensions of the basic model that include the effects of posttranscriptional and posttranslational regulation. The approach developed in this work is widely applicable and can contribute to a quantitative understanding of stochasticity in gene expression and its regulation.

  17. Joint market clearing in a stochastic framework considering power system security

    International Nuclear Information System (INIS)

    Aghaei, J.; Shayanfar, H.A.; Amjady, N.

    2009-01-01

This paper presents a new stochastic framework for the provision of reserve requirements (spinning and non-spinning reserves) as well as energy in day-ahead simultaneous auctions under a pool-based aggregated market scheme. The uncertainties of generating units, in the form of system contingencies, are considered in the market clearing procedure through the stochastic model. The solution methodology consists of two stages: first, Monte-Carlo Simulation (MCS) is employed for random scenario generation; then, the stochastic market clearing procedure is implemented as a series of deterministic optimization problems (scenarios), including the non-contingent scenario and different post-contingency states. The objective function of each of these deterministic optimization problems consists of the offered cost function (including both energy and reserve offer costs), the Lost Opportunity Cost (LOC) and the Expected Interruption Cost (EIC). Each optimization problem is solved considering AC power flow and security constraints of the power system. The model is applied to the IEEE 24-bus Reliability Test System (IEEE 24-bus RTS) and simulation studies are carried out to examine the effectiveness of the proposed method.
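The scenario-based two-stage structure described above (commit a first-stage decision, then settle each scenario with recourse) can be illustrated on a deliberately tiny newsvendor problem solved by enumeration; the prices, costs and demand scenarios are invented and do not reproduce the paper's market-clearing model.

```python
# Two-stage stochastic program in miniature: stage 1 fixes quantity q before
# demand is known; stage 2 sells min(q, d) in each scenario.
price, cost = 5.0, 3.0
scenarios = [(10, 0.1), (12, 0.2), (14, 0.3), (16, 0.25), (18, 0.15)]  # (demand, prob)

def expected_profit(q):
    # expectation of the second-stage (recourse) value over scenarios
    return sum(p * (price * min(q, d) - cost * q) for d, p in scenarios)

best_q = max((d for d, _ in scenarios), key=expected_profit)

# Classical cross-check: with zero salvage the optimal q is the smallest
# demand whose cumulative probability reaches (price - cost) / price.
ratio = (price - cost) / price          # critical fractile, 0.4 here
cum, q_star = 0.0, None
for d, p in scenarios:
    cum += p
    if cum >= ratio and q_star is None:
        q_star = d
```

For this data both routes give q = 14: enumeration of the expected recourse value agrees with the critical-fractile formula.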

  18. A generic methodology for the optimisation of sewer systems using stochastic programming and self-optimizing control

    DEFF Research Database (Denmark)

    Maurico-Iglesias, Miguel; Castro, Ignacio Montero; Mollerup, Ane Loft

    2015-01-01

The design of sewer system control is a complex task given the large size of the sewer networks, the transient dynamics of the water flow and the stochastic nature of rainfall. This contribution presents a generic methodology for the design of a self-optimising controller in sewer systems. Such a controller is aimed at keeping the system close to the optimal performance, thanks to an optimal selection of controlled variables. The definition of an optimal performance was carried out by a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current…

  19. Two-stage discrete-continuous multi-objective load optimization: An industrial consumer utility approach to demand response

    International Nuclear Information System (INIS)

    Abdulaal, Ahmed; Moghaddass, Ramin; Asfour, Shihab

    2017-01-01

    Highlights: •Two-stage model links discrete-optimization to real-time system dynamics operation. •The solutions obtained are non-dominated Pareto optimal solutions. •Computationally efficient GA solver through customized chromosome coding. •Modest to considerable savings are achieved depending on the consumer’s preference. -- Abstract: In the wake of today’s highly dynamic and competitive energy markets, optimal dispatching of energy sources requires effective demand responsiveness. Suppliers have adopted a dynamic pricing strategy in efforts to control the downstream demand. This method however requires consumer awareness, flexibility, and timely responsiveness. While residential activities are more flexible and schedulable, larger commercial consumers remain an obstacle due to the impacts on industrial performance. This paper combines methods from quadratic, stochastic, and evolutionary programming with multi-objective optimization and continuous simulation, to propose a two-stage discrete-continuous multi-objective load optimization (DiCoMoLoOp) autonomous approach for industrial consumer demand response (DR). Stage 1 defines discrete-event load shifting targets. Accordingly, controllable loads are continuously optimized in stage 2 while considering the consumer’s utility. Utility functions, which measure the loads’ time value to the consumer, are derived and weights are assigned through an analytical hierarchy process (AHP). The method is demonstrated for an industrial building model using real data. The proposed method integrates with building energy management system and solves in real-time with autonomous and instantaneous load shifting in the hour-ahead energy price (HAP) market. The simulation shows the occasional existence of multiple load management options on the Pareto frontier. Finally, the computed savings, based on the simulation analysis with real consumption, climate, and price data, ranged from modest to considerable amounts
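The "multiple load management options on the Pareto frontier" noted above can be extracted with a simple non-dominated filter; here for two minimised objectives (say, cost and discomfort). The candidate points are invented for illustration, not results from the paper.

```python
# Non-dominated (Pareto) filter for two objectives, both minimised.
def pareto_front(points):
    """Return the points not dominated by any other point."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return front

# (cost, discomfort) of candidate load-shifting schedules (illustrative)
options = [(100, 9), (120, 5), (150, 4), (110, 7), (130, 6), (115, 5)]
front = pareto_front(options)
```

A decision maker (or the AHP weighting the abstract mentions) then picks one point from the surviving frontier.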

  20. Paths and partitions: Combinatorial descriptions of the parafermionic states

    Science.gov (United States)

    Mathieu, Pierre

    2009-09-01

    The Zk parafermionic conformal field theories, despite the relative complexity of their modes algebra, offer the simplest context for the study of the bases of states and their different combinatorial representations. Three bases are known. The classic one is given by strings of the fundamental parafermionic operators whose sequences of modes are in correspondence with restricted partitions with parts at distance k -1 differing at least by 2. Another basis is expressed in terms of the ordered modes of the k -1 different parafermionic fields, which are in correspondence with the so-called multiple partitions. Both types of partitions have a natural (Bressoud) path representation. Finally, a third basis, formulated in terms of different paths, is inherited from the solution of the restricted solid-on-solid model of Andrews-Baxter-Forrester. The aim of this work is to review, in a unified and pedagogical exposition, these four different combinatorial representations of the states of the Zk parafermionic models. The first part of this article presents the different paths and partitions and their bijective relations; it is purely combinatorial, self-contained, and elementary; it can be read independently of the conformal-field-theory applications. The second part links this combinatorial analysis with the bases of states of the Zk parafermionic theories. With the prototypical example of the parafermionic models worked out in detail, this analysis contributes to fix some foundations for the combinatorial study of more complicated theories. Indeed, as we briefly indicate in ending, generalized versions of both the Bressoud and the Andrews-Baxter-Forrester paths emerge naturally in the description of the minimal models.
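For k = 2 the partition condition quoted above ("parts ... differing at least by 2") is the classical Rogers-Ramanujan situation: partitions with gaps of at least 2 between parts are equinumerous with partitions into parts congruent to 1 or 4 mod 5. A brute-force check of that combinatorial statement (an aside, not the paper's path bijections):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def gap2(n, min_part=1):
    """Partitions of n into parts >= min_part with successive gaps >= 2."""
    if n == 0:
        return 1
    # choose the smallest part p, then the rest must use parts >= p + 2
    return sum(gap2(n - p, p + 2) for p in range(min_part, n + 1))

@lru_cache(maxsize=None)
def mod5(n, min_part=1):
    """Partitions of n into parts congruent to 1 or 4 (mod 5)."""
    if n == 0:
        return 1
    return sum(mod5(n - p, p) for p in range(min_part, n + 1) if p % 5 in (1, 4))

counts_match = all(gap2(n) == mod5(n) for n in range(1, 31))
```

For example gap2(9) counts {9}, {8,1}, {7,2}, {6,3}, {5,3,1}, matching the five partitions of 9 into parts 1, 4, 6, 9.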

  1. Combinatorial algebra syntax and semantics

    CERN Document Server

    Sapir, Mark V

    2014-01-01

    Combinatorial Algebra: Syntax and Semantics provides a comprehensive account of many areas of combinatorial algebra. It contains self-contained proofs of  more than 20 fundamental results, both classical and modern. This includes Golod–Shafarevich and Olshanskii's solutions of Burnside problems, Shirshov's solution of Kurosh's problem for PI rings, Belov's solution of Specht's problem for varieties of rings, Grigorchuk's solution of Milnor's problem, Bass–Guivarc'h theorem about the growth of nilpotent groups, Kleiman's solution of Hanna Neumann's problem for varieties of groups, Adian's solution of von Neumann-Day's problem, Trahtman's solution of the road coloring problem of Adler, Goodwyn and Weiss. The book emphasize several ``universal" tools, such as trees, subshifts, uniformly recurrent words, diagrams and automata.   With over 350 exercises at various levels of difficulty and with hints for the more difficult problems, this book can be used as a textbook, and aims to reach a wide and diversified...

  2. Stochastic quantization and gravity

    International Nuclear Information System (INIS)

    Rumpf, H.

    1984-01-01

    We give a preliminary account of the application of stochastic quantization to the gravitational field. We start in Section I from Nelson's formulation of quantum mechanics as Newtonian stochastic mechanics and only then introduce the Parisi-Wu stochastic quantization scheme on which all the later discussion will be based. In Section II we present a generalization of the scheme that is applicable to fields in physical (i.e. Lorentzian) space-time and treat the free linearized gravitational field in this manner. The most remarkable result of this is the noncausal propagation of conformal gravitons. Moreover the concept of stochastic gauge-fixing is introduced and a complete discussion of all the covariant gauges is given. A special symmetry relating two classes of covariant gauges is exhibited. Finally Section III contains some preliminary remarks on full nonlinear gravity. In particular we argue that in contrast to gauge fields the stochastic gravitational field cannot be transformed to a Gaussian process. (Author)

  3. Scenario-based stochastic optimal operation of wind, photovoltaic, pump-storage hybrid system in frequency- based pricing

    International Nuclear Information System (INIS)

    Zare Oskouei, Morteza; Sadeghi Yazdankhah, Ahmad

    2015-01-01

    Highlights: • A two-stage objective function is proposed for the optimization problem. • The hourly-based optimal contractual agreement is calculated. • A scenario-based stochastic optimization problem is solved. • System frequency is improved by utilizing the PSH unit. - Abstract: This paper proposes an operating strategy for a microgrid-connected wind, photovoltaic and pump-storage hybrid system. The strategy consists of two stages. In the first stage, the optimal hourly contractual agreement is determined. The second stage maximizes profit by adapting the energy management strategy of the wind and photovoltaic units in coordination with the optimal operating schedule of the storage device, under frequency-based pricing in a day-ahead electricity market. The pump-storage hydro plant is utilized to minimize unscheduled interchange flow and to maximize the system benefit by participating in frequency control based on the energy price. Because of uncertainties in the power generation of renewable sources and in market prices, generation scheduling is modeled as a stochastic optimization problem. Uncertain parameters are modeled by scenario generation and scenario reduction. The optimization problem is formulated and solved using the General Algebraic Modeling System (GAMS) with the CPLEX solver. To verify the efficiency of the method, the algorithm is applied to various scenarios with different wind and photovoltaic power productions in a day-ahead electricity market. The numerical results demonstrate the effectiveness of the proposed approach.
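As a toy illustration of the two-stage structure described above (commit to a contract first, take recourse after uncertainty is revealed), the sketch below enumerates first-stage contract levels for a single hypothetical hour. All scenario data, prices and the storage limit are invented; a full model like the paper's would be solved with GAMS/CPLEX rather than by grid search.

```python
# Minimal scenario-based two-stage sketch (illustrative numbers, not from the paper).
# Stage 1: choose the hourly contracted energy c (MWh).
# Stage 2 (recourse): the pump-storage unit covers up to STORAGE MWh of any
# shortfall; the remainder is bought back at a penalty price.

SCENARIOS = [  # (probability, wind + PV output in MWh)
    (0.3, 20.0),
    (0.5, 35.0),
    (0.2, 50.0),
]
PRICE, PENALTY, STORAGE = 50.0, 80.0, 10.0  # contract price, imbalance penalty, storage limit

def expected_profit(c):
    """Expected profit of contracting c MWh, averaged over scenarios."""
    total = 0.0
    for prob, output in SCENARIOS:
        shortfall = max(0.0, c - output - STORAGE)  # what storage cannot cover
        total += prob * (PRICE * c - PENALTY * shortfall)
    return total

# Enumerate first-stage candidates on a 0.5 MWh grid (a solver would be used in practice).
best_c = max((c / 2 for c in range(0, 161)), key=expected_profit)
print(best_c, round(expected_profit(best_c), 1))
```

The optimum balances the contract revenue against the expected imbalance penalty: contracting beyond what the medium scenario plus storage can deliver stops paying off.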

  4. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    Directory of Open Access Journals (Sweden)

    Yanju Chen

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of the risky security are characterized by possibility distributions. The objective of the proposed model is to maximize a utility defined in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of the fuzzy return, the optimal value expression of the second-stage programming problem is derived. As a result, the proposed two-stage model is equivalent to a single-stage model, and the analytical optimal solution of the two-stage model is obtained, which allows the properties of the optimal solution to be discussed. Finally, some numerical experiments are performed to demonstrate the new modeling idea and its effectiveness. The computational results provided by the proposed model show that the more risk-averse investor will invest more wealth in the risk-free security. They also show that the optimal amount invested in the risky security increases as the risk-free return decreases, and that the optimal utility increases as the risk-free return increases, whereas the optimal utility increases as the transaction costs decrease. In most instances the utilities provided by the proposed two-stage model are larger than those provided by the single-stage model.
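The monotonicity properties reported above can be illustrated with a crisp (non-fuzzy) single-period mean-variance analogue, for which the optimizer has a standard closed form. All numbers are invented for illustration and the fuzzy possibility distributions of the paper are replaced by a crisp mean and variance.

```python
# Single-period mean-variance sketch (crisp analogue of the paper's fuzzy model).
# Utility = E[wealth] - gamma * Var[wealth]; with amount x in the risky asset,
# utility = W0*(1+rf) + x*(mu - rf) - gamma * x**2 * sigma2.

def optimal_risky_amount(mu, sigma2, rf, gamma):
    """Closed-form maximiser of the quadratic utility above (no short sales)."""
    return max(0.0, (mu - rf) / (2.0 * gamma * sigma2))

x_low_rf  = optimal_risky_amount(mu=0.10, sigma2=0.04, rf=0.02, gamma=2.0)
x_high_rf = optimal_risky_amount(mu=0.10, sigma2=0.04, rf=0.04, gamma=2.0)
x_averse  = optimal_risky_amount(mu=0.10, sigma2=0.04, rf=0.02, gamma=4.0)

print(x_low_rf, x_high_rf, x_averse)
```

As in the abstract, the risky investment shrinks when the risk-free return rises and when the investor is more risk-averse (larger gamma).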

  5. Wide-bandwidth bilateral control using two-stage actuator system

    International Nuclear Information System (INIS)

    Kokuryu, Saori; Izutsu, Masaki; Kamamichi, Norihiro; Ishikawa, Jun

    2015-01-01

    This paper proposes a two-stage actuator system that consists of a coarse actuator driven by a ball screw with an AC motor (the first stage) and a fine actuator driven by a voice coil motor (the second stage). The proposed two-stage actuator system is applied to build a wide-bandwidth bilateral control system without needing expensive high-performance actuators. In the proposed system, the first stage has a wide moving range with a narrow control bandwidth, and the second stage has a narrow moving range with a wide control bandwidth. By consolidating these two inexpensive actuators with different control bandwidths in a complementary manner, a wide-bandwidth bilateral control system can be constructed based on mechanical impedance control. To show the validity of the proposed method, a prototype of the two-stage actuator system was developed and its basic performance was evaluated by experiment. The experimental results showed that a light mechanical impedance with a mass of 10 g and a damping coefficient of 2.5 N/(m/s), which is an important factor in establishing good transparency in bilateral control, has been successfully achieved, and also showed that better force and position responses between master and slave are achieved by using the proposed two-stage actuator system compared with a narrow-bandwidth case using a single ball-screw system. (author)

  6. Recognition of Equations Using a Two-Dimensional Stochastic Context-Free Grammar

    Science.gov (United States)

    Chou, Philip A.

    1989-11-01

    We propose using two-dimensional stochastic context-free grammars for image recognition, in a manner analogous to using hidden Markov models for speech recognition. The value of the approach is demonstrated in a system that recognizes printed, noisy equations. The system uses a two-dimensional probabilistic version of the Cocke-Younger-Kasami parsing algorithm to find the most likely parse of the observed image, and then traverses the corresponding parse tree in accordance with translation formats associated with each production rule, to produce eqn | troff commands for the imaged equation. In addition, it uses two-dimensional versions of the Inside/Outside and Baum re-estimation algorithms for learning the parameters of the grammar from a training set of examples. Parsing the image of a simple noisy equation currently takes about one second of CPU time on an Alliant FX/80.
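The parsing step described above can be sketched with the standard one-dimensional probabilistic (Viterbi) CYK algorithm, which the paper generalizes to two dimensions. The toy grammar below is invented and is in Chomsky normal form with rule probabilities.

```python
# One-dimensional probabilistic CYK sketch; the paper extends this idea to 2D.
# Computes the probability of the most likely parse rooted at "S".

GRAMMAR = {  # lhs -> list of (rhs, probability); rhs is 1 terminal or 2 nonterminals
    "S": [(("A", "B"), 1.0)],
    "A": [(("a",), 0.6), (("A", "A"), 0.4)],
    "B": [(("b",), 1.0)],
}

def viterbi_cyk(tokens):
    n = len(tokens)
    best = [[{} for _ in range(n + 1)] for _ in range(n)]  # best[i][length][X]
    for i, tok in enumerate(tokens):                 # length-1 spans (terminal rules)
        for lhs, rules in GRAMMAR.items():
            for rhs, p in rules:
                if rhs == (tok,):
                    best[i][1][lhs] = max(best[i][1].get(lhs, 0.0), p)
    for length in range(2, n + 1):                   # longer spans, shortest first
        for i in range(n - length + 1):
            for split in range(1, length):
                for lhs, rules in GRAMMAR.items():
                    for rhs, p in rules:
                        if len(rhs) == 2:
                            left = best[i][split].get(rhs[0], 0.0)
                            right = best[i + split][length - split].get(rhs[1], 0.0)
                            cand = p * left * right
                            if cand > best[i][length].get(lhs, 0.0):
                                best[i][length][lhs] = cand
    return best[0][n].get("S", 0.0)

print(viterbi_cyk(["a", "a", "b"]))
```

The two-dimensional version replaces the one split point per span with both horizontal and vertical subdivisions of an image region, but the dynamic-programming structure is the same.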

  7. Design and spectroscopic reflectometry characterization of pulsed laser deposition combinatorial libraries

    International Nuclear Information System (INIS)

    Schenck, Peter K.; Bassim, Nabil D.; Otani, Makoto; Oguchi, Hiroyuki; Green, Martin L.

    2007-01-01

    The goal of the design of pulsed laser deposition (PLD) combinatorial library films is to optimize the compositional coverage of the films while maintaining a uniform thickness. The deposition pattern of excimer laser PLD films can be modeled with a bimodal cosⁿ distribution. Deposited films were characterized using a spectroscopic reflectometer (250-1000 nm) to map the thickness of both single-composition calibration films and combinatorial library films. These distribution functions were used to simulate the composition and thickness of multiple-target combinatorial library films. The simulations were correlated with electron-probe microanalysis wavelength-dispersive spectroscopy (EPMA-WDS) composition maps. The composition and thickness of the library films can be fine-tuned by adjusting the laser spot size, fluence, background gas pressure, target geometry and other processing parameters which affect the deposition pattern. Results from compositionally graded combinatorial library films of the ternary system Al₂O₃-HfO₂-Y₂O₃ are discussed.

  8. Two-stage precipitation of neptunium (IV) oxalate

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1983-07-01

    Neptunium (IV) oxalate was precipitated using a two-stage precipitation system. A series of precipitation experiments was used to identify the significant process variables affecting precipitate characteristics. Process variables tested were input concentrations, solubility conditions in the first-stage precipitator, precipitation temperatures, and residence time in the first-stage precipitator. A procedure has been demonstrated that produces neptunium (IV) oxalate particles that filter well and readily calcine to the oxide.

  9. Sampling strategies and stopping criteria for stochastic dual dynamic programming: a case study in long-term hydrothermal scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Homem-de-Mello, Tito [University of Illinois at Chicago, Department of Mechanical and Industrial Engineering, Chicago, IL (United States); Matos, Vitor L. de; Finardi, Erlon C. [Universidade Federal de Santa Catarina, LabPlan - Laboratorio de Planejamento de Sistemas de Energia Eletrica, Florianopolis (Brazil)

    2011-03-15

    The long-term hydrothermal scheduling is one of the most important problems to be solved in the power systems area. This problem aims to obtain an optimal policy, under water (energy) resources uncertainty, for hydro and thermal plants over a multi-annual planning horizon. It is natural to model the problem as a multi-stage stochastic program, a class of models for which algorithms have been developed. The original stochastic process is represented by a finite scenario tree and, because of the large number of stages, a sampling-based method such as the Stochastic Dual Dynamic Programming (SDDP) algorithm is required. The purpose of this paper is two-fold. Firstly, we study the application of two alternative sampling strategies to the standard Monte Carlo - namely, Latin hypercube sampling and randomized quasi-Monte Carlo - for the generation of scenario trees, as well as for the sampling of scenarios that is part of the SDDP algorithm. Secondly, we discuss the formulation of stopping criteria for the optimization algorithm in terms of statistical hypothesis tests, which allows us to propose an alternative criterion that is more robust than that originally proposed for the SDDP. We test these ideas on a problem associated with the whole Brazilian power system, with a three-year planning horizon. (orig.)
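Of the two alternative sampling strategies mentioned, Latin hypercube sampling is straightforward to sketch: each coordinate is stratified into n equal-probability bins and every bin is hit exactly once, which typically reduces the variance of scenario-based estimates relative to plain Monte Carlo. The minimal version below has no connection to the authors' code.

```python
# Latin hypercube sampling sketch: n points in [0,1)^d with one point per
# stratum [k/n, (k+1)/n) in every dimension, strata shuffled independently.

import random

def latin_hypercube(n, d, rng):
    cols = []
    for _ in range(d):
        col = [(k + rng.random()) / n for k in range(n)]  # one point per stratum
        rng.shuffle(col)                                  # random stratum order
        cols.append(col)
    return [tuple(col[i] for col in cols) for i in range(n)]

rng = random.Random(0)
sample = latin_hypercube(8, 2, rng)
# stratification check: each coordinate hits each of the 8 strata exactly once
for dim in range(2):
    print(sorted(int(p[dim] * 8) for p in sample))
```

In an SDDP context these points would be mapped through the inverse CDFs of the inflow (energy) distributions to build the scenario tree.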

  10. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    Energy Technology Data Exchange (ETDEWEB)

    Ariunbaatar, Javkhlan, E-mail: jaka@unicas.it [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Scotto Di Perta, Ester [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy); Panico, Antonio [Telematic University PEGASO, Piazza Trieste e Trento, 48, 80132 Naples (Italy); Frunzo, Luigi [Department of Mathematics and Applications Renato Caccioppoli, University of Naples Federico II, Via Claudio, 21, 80125 Naples (Italy); Esposito, Giovanni [Department of Civil and Mechanical Engineering, University of Cassino and Southern Lazio, Via Di Biasio 43, 03043 Cassino, FR (Italy); Lens, Piet N.L. [UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft (Netherlands); Pirozzi, Francesco [Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Via Claudio 21, 80125 Naples (Italy)

    2015-04-15

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC₅₀ of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor, making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.

  11. Enumeration of Combinatorial Classes of Single Variable Complex Polynomial Vector Fields

    DEFF Research Database (Denmark)

    Dias, Kealey

    A vector field in the space of degree d monic, centered single variable complex polynomial vector fields has a combinatorial structure which can be fully described by a combinatorial data set consisting of an equivalence relation and a marked subset on the integers mod 2d-2, satisfying certain...

  12. Quantum stochastics

    CERN Document Server

    Chang, Mou-Hsiung

    2015-01-01

    The classical probability theory initiated by Kolmogorov and its quantum counterpart, pioneered by von Neumann, were created at about the same time in the 1930s, but development of the quantum theory has trailed far behind. Although highly appealing, the quantum theory has a steep learning curve, requiring tools from both probability and analysis and a facility for combining the two viewpoints. This book is a systematic, self-contained account of the core of quantum probability and quantum stochastic processes for graduate students and researchers. The only assumed background is knowledge of the basic theory of Hilbert spaces, bounded linear operators, and classical Markov processes. From there, the book introduces additional tools from analysis, and then builds the quantum probability framework needed to support applications to quantum control and quantum information and communication. These include quantum noise, quantum stochastic calculus, stochastic quantum differential equations, quantum Markov semigrou...

  13. Two-stage dental implants inserted in a one-stage procedure : a prospective comparative clinical study

    NARCIS (Netherlands)

    Heijdenrijk, Kees

    2002-01-01

    The results of this study indicate that dental implants designed for a submerged implantation procedure can be used in a single-stage procedure and may be as predictable as one-stage implants. Although one-stage implant systems and two-stage.

  14. An Optimization Model for Kardeh Reservoir Operation Using Interval-Parameter, Multi-stage, Stochastic Programming

    Directory of Open Access Journals (Sweden)

    Fatemeh Rastegaripour

    2010-09-01

    The present study investigates the allocation of water from Kardeh Reservoir to domestic and agricultural users using Interval-Parameter, Multi-stage, Stochastic Programming (IMSLP) under uncertainty. The advantages of the method include its dynamic nature, the use of a pre-defined policy in its optimization process, and the use of interval parameters and probabilities under uncertainty. Additionally, it offers different decision-making alternatives for different water-shortage scenarios. The required data were collected from the Khorasan Razavi Regional Water Organization and from the Water and Wastewater Co. for the period 1988-2007. Results showed that, under the worst conditions, the water deficits expected for each of the next 3 years will be 1.9, 2.55, and 3.11 million cubic meters for domestic use and 0.22, 0.32, and 0.75 million cubic meters for irrigation. Approximate reductions of 0.5, 0.7, and 1 million cubic meters in the monthly consumption of the urban community and enhanced irrigation efficiencies of about 6, 11, and 20% in the agricultural sector are recommended as approaches for combating the water shortage over the next 3 years.

  15. Stochastic resonance during a polymer translocation process

    International Nuclear Information System (INIS)

    Mondal, Debasish; Muthukumar, M.

    2016-01-01

    We have studied the occurrence of stochastic resonance when a flexible polymer chain undergoes a single-file translocation through a nano-pore separating two spherical cavities, under a time-periodic external driving force. The translocation of the chain is controlled by a free energy barrier determined by chain length, pore length, pore-polymer interaction, and confinement inside the donor and receiver cavities. The external driving force is characterized by a frequency and amplitude. By combining the Fokker-Planck formalism for polymer translocation with a two-state model for stochastic resonance, we have derived analytical criteria for the emergence of stochastic resonance during polymer translocation. We show that no stochastic resonance is possible if the free energy barrier for polymer translocation is purely entropic in nature. The polymer chain exhibits stochastic resonance only in the presence of an energy threshold in terms of polymer-pore interactions. Once stochastic resonance is feasible, the chain entropy controls the optimal synchronization conditions significantly.

  16. Using linear programming to analyze and optimize stochastic flow lines

    DEFF Research Database (Denmark)

    Helber, Stefan; Schimmelpfeng, Katja; Stolletz, Raik

    2011-01-01

    This paper presents a linear programming approach to analyze and optimize flow lines with limited buffer capacities and stochastic processing times. The basic idea is to solve a huge but simple linear program that models an entire simulation run of a multi-stage production process in discrete time … programming and hence allows us to solve buffer allocation problems. We show under which conditions our method works well by comparing its results to exact values for two-machine models and approximate simulation results for longer lines.
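The discrete-time sample path that such a linear program encodes can be illustrated with a plain simulation of a two-machine line with Bernoulli machines and a finite intermediate buffer. All parameters are invented and this is a simulation, not the authors' LP formulation.

```python
# Discrete-time sample-path sketch of a two-machine flow line with a finite
# buffer.  Machine 1 is blocked when the buffer is full; machine 2 is starved
# when it is empty.  Each machine completes a part per slot with its own
# probability (Bernoulli processing times).

import random

def simulate(p1, p2, buffer_cap, horizon, seed=1):
    rng = random.Random(seed)
    buf = 0   # parts waiting in the intermediate buffer
    out = 0   # parts finished by machine 2
    for _ in range(horizon):
        m1 = rng.random() < p1 and buf < buffer_cap   # M1 produces unless blocked
        m2 = rng.random() < p2 and buf > 0            # M2 produces unless starved
        if m1:
            buf += 1
        if m2:
            buf -= 1
            out += 1
    return out / horizon  # throughput estimate

print(round(simulate(0.9, 0.8, buffer_cap=3, horizon=20000), 3))
```

Blocking and starvation keep the throughput below the isolated rate of the slower machine; enlarging the buffer recovers part of the loss, which is the trade-off a buffer allocation model optimizes.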

  17. Stochastic volatility of volatility in continuous time

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Veraart, Almut

    This paper introduces the concept of stochastic volatility of volatility in continuous time and, hence, extends standard stochastic volatility (SV) models to allow for an additional source of randomness associated with greater variability in the data. We discuss how stochastic volatility of volatility can be defined both non-parametrically, where we link it to the quadratic variation of the stochastic variance process, and parametrically, where we propose two new SV models which allow for stochastic volatility of volatility. In addition, we show that volatility of volatility can be estimated...

  18. Comparative assessment of single-stage and two-stage anaerobic digestion for the treatment of thin stillage.

    Science.gov (United States)

    Nasr, Noha; Elbeshbishy, Elsayed; Hafez, Hisham; Nakhla, George; El Naggar, M Hesham

    2012-05-01

    A comparative evaluation of single-stage and two-stage anaerobic digestion processes for biomethane and biohydrogen production from thin stillage was performed to assess the impact of separating the acidogenic and methanogenic stages on anaerobic digestion. Thin stillage, the main by-product of ethanol production, was characterized by a high total chemical oxygen demand (TCOD) of 122 g/L and total volatile fatty acids (TVFAs) of 12 g/L. A maximum methane yield of 0.33 L CH₄/gCOD added (STP) was achieved in the two-stage process, while the single-stage process achieved a maximum yield of only 0.26 L CH₄/gCOD added (STP). The separation of the acidification stage increased the TVFAs-to-TCOD ratio from 10% in the raw thin stillage to 54%, due to the conversion of carbohydrates into hydrogen and VFAs. Comparison of the two processes based on energy outcome revealed that an increase of 18.5% in the total energy yield was achieved using two-stage anaerobic digestion. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Condensate from a two-stage gasifier

    DEFF Research Database (Denmark)

    Bentzen, Jens Dall; Henriksen, Ulrik Birk; Hindsgaul, Claus

    2000-01-01

    Condensate, produced when gas from a downdraft biomass gasifier is cooled, contains organic compounds that inhibit nitrifiers. Treatment with activated carbon removes most of the organics and makes the condensate far less inhibitory. The condensate from an optimised two-stage gasifier is so clean that the organic compounds and the inhibition effect are very low even before treatment with activated carbon. The moderate inhibition effect relates to a high content of ammonia in the condensate. The nitrifiers become tolerant to the condensate after a few weeks of exposure. The level of organic compounds and the level of inhibition are so low that condensate from the optimised two-stage gasifier can be led to the public sewer.

  20. Modelling Cow Behaviour Using Stochastic Automata

    DEFF Research Database (Denmark)

    Jónsson, Ragnar Ingi

    This report covers an initial study on the modelling of cow behaviour using stochastic automata with the aim of detecting lameness. Lameness in cows is a serious problem that needs to be dealt with, because it results in less profitable production units and in reduced quality of life for the affected livestock. Using training data consisting of measurements of cow activity, three different models are obtained, namely an autonomous stochastic automaton, a stochastic automaton with coinciding state and output, and an autonomous stochastic automaton with coinciding state and output, all of which describe the cows' activity in the two regarded behavioural scenarios, non-lame and lame. Using the experimental measurement data, the different behavioural relations for the two regarded behavioural scenarios are assessed. The three models comprise activity within last hour, activity within last...
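The model-based detection idea can be sketched with two simple stochastic automata (here plain Markov chains over activity states), one per behavioural scenario, classifying an observed activity sequence by log-likelihood. The transition probabilities below are invented and are not taken from the report.

```python
# Two hypothetical stochastic automata over activity states; a sequence is
# attributed to the model under which it is most likely.

import math

MODELS = {
    "non_lame": {"active": {"active": 0.7, "resting": 0.3},
                 "resting": {"active": 0.4, "resting": 0.6}},
    "lame":     {"active": {"active": 0.3, "resting": 0.7},
                 "resting": {"active": 0.1, "resting": 0.9}},
}

def log_likelihood(model, states):
    """Sum of log transition probabilities along the observed state sequence."""
    ll = 0.0
    for prev, cur in zip(states, states[1:]):
        ll += math.log(MODELS[model][prev][cur])
    return ll

obs = ["active", "resting", "resting", "resting", "active", "resting"]
best = max(MODELS, key=lambda m: log_likelihood(m, obs))
print(best)
```

A mostly-resting sequence is better explained by the "lame" chain, so the classifier flags it; the report's automata additionally distinguish state from output, which this sketch collapses.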

  1. Evidence of two-stage melting of Wigner solids

    Science.gov (United States)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

    Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10⁹ cm⁻² are realized. Remarkable insulating states are found below a critical density of pc = 4 × 10⁹ cm⁻², or rs ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition, supporting the melting of a Wigner solid as a two-stage first-order transition.

  2. Two-stage liquefaction of a Spanish subbituminous coal

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, M.T.; Fernandez, I.; Benito, A.M.; Cebolla, V.; Miranda, J.L.; Oelert, H.H. (Instituto de Carboquimica, Zaragoza (Spain))

    1993-05-01

    A Spanish subbituminous coal has been processed by two-stage liquefaction in a non-integrated process. The first-stage coal liquefaction was carried out in a continuous pilot plant in Germany at Clausthal Technical University at 400°C and 20 MPa hydrogen pressure, with anthracene oil as solvent. The second-stage coal liquefaction was performed in continuous operation in a hydroprocessing unit at the Instituto de Carboquimica at 450°C and 10 MPa hydrogen pressure, with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al₂O₃) and HT-500E (Ni-Mo/Al₂O₃). The total conversion for the first-stage coal liquefaction was 75.41 wt% (coal d.a.f.), comprising 3.79 wt% gases, 2.58 wt% primary condensate and 69.04 wt% heavy liquids. The heteroatom removal in the second-stage liquefaction was 97-99 wt% of S, 85-87 wt% of N and 93-100 wt% of O. The hydroprocessed liquids have about 70% of compounds with boiling points below 350°C, and meet the sulphur and nitrogen specifications for refinery feedstocks. Liquids from two-stage coal liquefaction have been distilled, and the naphtha, kerosene and diesel fractions obtained have been characterized. 39 refs., 3 figs., 8 tabs.

  3. Proceedings of the 8th Nordic Combinatorial Conference

    DEFF Research Database (Denmark)

    Geil, Olav; Andersen, Lars Døvling

    The Nordic Combinatorial Conferences were initiated in 1981 by mathematicians from Stavanger. Held approximately every three years since then, the conferences have been able to sustain the interest from combinatorialists all over the Nordic countries. In 2004 the 8th conference is held in Aalborg, Denmark. We are pleased that so many people have chosen to attend, and that lectures were offered by more participants than we had originally reserved time for. We asked two mathematicians to give special lectures and are happy that both accepted immediately. Andries Brouwer from the Technical...

  4. Stochastic Stability of Endogenous Growth: Theory and Applications

    OpenAIRE

    Boucekkine, Raouf; Pintus, Patrick; Zou, Benteng

    2015-01-01

    We examine the issue of stability of stochastic endogenous growth. First, stochastic stability concepts are introduced and applied to the stochastic linear homogeneous differential equations to which several stochastic endogenous growth models reduce. Second, we apply the mathematical theory to two models, starting with the stochastic AK model. It is shown that in this case exponential balanced paths, which characterize optimal trajectories in the absence of uncertainty, are not robust to uncerta...

  5. Logging to Facilitate Combinatorial System Testing

    NARCIS (Netherlands)

    Kruse, P.M.; Prasetya, I.S.W.B.; Hage, J; Elyasov, Alexander

    2014-01-01

    Testing a web application is typically very complicated. Imposing simple coverage criteria such as function or line coverage is often not sufficient to uncover bugs due to incorrect component integration. Combinatorial testing can enforce a stronger criterion, while still allowing the

  6. Combinatorial computational chemistry approach to the design of metal catalysts for deNOx

    International Nuclear Information System (INIS)

    Endou, Akira; Jung, Changho; Kusagaya, Tomonori; Kubo, Momoji; Selvam, Parasuraman; Miyamoto, Akira

    2004-01-01

    Combinatorial chemistry is an efficient technique for the synthesis and screening of a large number of compounds. Recently, we introduced the combinatorial approach to computational chemistry for catalyst design and proposed a new method called "combinatorial computational chemistry". In the present study, we have applied this combinatorial computational chemistry approach to the design of precious-metal catalysts for deNOx. As the first step of the screening of the metal catalysts, we studied Rh, Pd, Ag, Ir, Pt, and Au clusters with respect to their adsorption properties towards the NO molecule. It was demonstrated that the energetically most stable adsorption state of NO occurred on the Ir model cluster, irrespective of both the shape and the number of atoms in the model clusters.

  7. Balancing focused combinatorial libraries based on multiple GPCR ligands

    Science.gov (United States)

    Soltanshahi, Farhad; Mansley, Tamsin E.; Choi, Sun; Clark, Robert D.

    2006-08-01

    G-Protein coupled receptors (GPCRs) are important targets for drug discovery, and combinatorial chemistry is an important tool for pharmaceutical development. The absence of detailed structural information, however, limits the kinds of combinatorial design techniques that can be applied to GPCR targets. This is particularly problematic given the current emphasis on focused combinatorial libraries. By linking an incremental construction method (OptDesign) to the very fast shape-matching capability of ChemSpace, we have created an efficient method for designing targeted sublibraries that are topomerically similar to known actives. Multi-objective scoring allows consideration of multiple queries (actives) simultaneously. This can lead to a distribution of products skewed towards one particular query structure, however, particularly when the ligands of interest are quite dissimilar to one another. A novel pivoting technique is described which makes it possible to generate promising designs even under those circumstances. The approach is illustrated by application to some serotonergic agonists and chemokine antagonists.

  8. The combinatorial derivation

    Directory of Open Access Journals (Sweden)

    Igor V. Protasov

    2013-09-01

    $\Delta(A)=\{g\in G:|gA\cap A|=\infty\}$. The mapping $\Delta:\mathcal{P}_G\rightarrow\mathcal{P}_G$, $A\mapsto\Delta(A)$, is called a combinatorial derivation and can be considered as an analogue of the topological derivation $d:\mathcal{P}_X\rightarrow\mathcal{P}_X$, $A\mapsto A^d$, where $X$ is a topological space and $A^d$ is the set of all limit points of $A$. Content: elementary properties, thin and almost thin subsets, partitions, inverse construction and $\Delta$-trajectories, $\Delta$ and $d$.

  9. Dynamic Combinatorial Chemistry

    DEFF Research Database (Denmark)

    Lisbjerg, Micke

    This thesis is divided into seven chapters, which can all be read individually. The first chapter, however, contains a general introduction to the chemistry used in the remaining six chapters, and it is therefore recommended to read chapter one before reading the other chapters. Chapter 1 is a general introductory chapter for the whole thesis. The history and concepts of dynamic combinatorial chemistry are described, as are some of the new and intriguing results recently obtained. Finally, the properties of a broad range of hexameric macrocycles are described in detail. Chapter 2 gives...

  10. Exact Algorithms for Solving Stochastic Games

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Koucky, Michal; Lauritzen, Niels

    2012-01-01

    Shapley's discounted stochastic games, Everett's recursive games and Gillette's undiscounted stochastic games are classical models of game theory describing two-player zero-sum games of potentially infinite duration. We describe algorithms for exactly solving these games.

  11. Stochastic reservoir operation under drought with fuzzy objectives

    International Nuclear Information System (INIS)

    Parent, E.; Duckstein, L.

    1993-01-01

    Biobjective reservoir operation under drought conditions is investigated using stochastic dynamic programming. As both objectives (irrigation water supply, water quality) can only be defined imprecisely, a fuzzy set approach is used to encode the decision maker (DM)'s preferences. The nature-driven components are modeled by means of classical stage-state system analysis. The state is three-dimensional (inflow memory, drought irrigation index, reservoir level); the decision vector elements are release and irrigation allocation. Stochasticity stems from the random nature of inflows and irrigation demands. The transition function includes a lag-one inflow Markov model and mass balance equations. The human-driven component is designed as a confluence of fuzzy objectives and constraints, after Bellman and Zadeh. Fuzzy numbers are assessed to represent the DM's objectives by two different techniques: direct assessment and indirect pairwise comparison. The real case study of the Neste river system in southwestern France is used to illustrate the approach; the results are compared to a classical sequential decision-theoretic model derived earlier from the viewpoints of ease of modeling, computational effort, plausibility and robustness of results.
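The Bellman-Zadeh confluence used above can be sketched in a single stage: the degree of a decision is the minimum of its memberships in the fuzzy objectives, and the best decision maximizes that minimum. The linear membership functions and the release scale below are invented, and the full stochastic DP over inflow scenarios is omitted.

```python
# Bellman-Zadeh confluence sketch for two fuzzy objectives of a release decision.

def mu_supply(release):   # irrigation supply: satisfaction grows with release
    return max(0.0, min(1.0, release / 40.0))

def mu_quality(release):  # water quality: satisfaction shrinks with release
    return max(0.0, min(1.0, (60.0 - release) / 40.0))

def confluence(release):
    """Degree of the decision = min of the objective memberships."""
    return min(mu_supply(release), mu_quality(release))

# maximize the minimum membership over an integer release grid
best = max(range(0, 61), key=confluence)
print(best, confluence(best))
```

The maximizing release sits where the two memberships cross, which is the usual max-min compromise; in the paper this confluence is embedded inside a stochastic dynamic program over inflow states.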

  12. Systematic identification of combinatorial drivers and targets in cancer cell lines.

    Directory of Open Access Journals (Sweden)

    Adel Tabchy

    Full Text Available There is an urgent need to elicit and validate highly efficacious targets for combinatorial intervention from large scale ongoing molecular characterization efforts of tumors. We established an in silico bioinformatic platform in concert with a high throughput screening platform evaluating 37 novel targeted agents in 669 extensively characterized cancer cell lines reflecting the genomic and tissue-type diversity of human cancers, to systematically identify combinatorial biomarkers of response and co-actionable targets in cancer. Genomic biomarkers discovered in a 141 cell line training set were validated in an independent 359 cell line test set. We identified co-occurring and mutually exclusive genomic events that represent potential drivers and combinatorial targets in cancer. We demonstrate multiple cooperating genomic events that predict sensitivity to drug intervention independent of tumor lineage. The coupling of scalable in silico and biologic high throughput cancer cell line platforms for the identification of co-events in cancer delivers rational combinatorial targets for synthetic lethal approaches with a high potential to pre-empt the emergence of resistance.

  13. Systematic identification of combinatorial drivers and targets in cancer cell lines.

    Science.gov (United States)

    Tabchy, Adel; Eltonsy, Nevine; Housman, David E; Mills, Gordon B

    2013-01-01

    There is an urgent need to elicit and validate highly efficacious targets for combinatorial intervention from large scale ongoing molecular characterization efforts of tumors. We established an in silico bioinformatic platform in concert with a high throughput screening platform evaluating 37 novel targeted agents in 669 extensively characterized cancer cell lines reflecting the genomic and tissue-type diversity of human cancers, to systematically identify combinatorial biomarkers of response and co-actionable targets in cancer. Genomic biomarkers discovered in a 141 cell line training set were validated in an independent 359 cell line test set. We identified co-occurring and mutually exclusive genomic events that represent potential drivers and combinatorial targets in cancer. We demonstrate multiple cooperating genomic events that predict sensitivity to drug intervention independent of tumor lineage. The coupling of scalable in silico and biologic high throughput cancer cell line platforms for the identification of co-events in cancer delivers rational combinatorial targets for synthetic lethal approaches with a high potential to pre-empt the emergence of resistance.

  14. One-stage and two-stage penile buccal mucosa urethroplasty

    African Journals Online (AJOL)

    G. Barbagli

    2015-12-02

    Dec 2, 2015 ... there also seems to be a trend of decreasing urethritis and an increase of instrumentation- and catheter-related strictures in these countries as well [4–6]. The repair of penile urethral strictures may require one- or two-stage urethroplasty [7–10]. Certainly, sexual function can be placed at risk by any surgery ...

  15. Two-Stage Variable Sample-Rate Conversion System

    Science.gov (United States)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
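    The two-stage idea can be sketched in a few lines: a fixed integer decimation stage followed by a variable-ratio interpolation stage that handles arbitrary (not necessarily rational) output rates. This is a minimal illustration only; a real SRC design includes anti-aliasing filters before decimation and a better interpolator than the linear one used here.

```python
# Two-stage variable SRC sketch: stage 1 is a fixed integer decimation,
# stage 2 is a variable-ratio linear interpolation. Anti-aliasing
# filtering, required in practice, is deliberately omitted.

def decimate(x, m):
    """Stage 1: keep every m-th sample (no anti-alias filter here)."""
    return x[::m]

def resample_linear(x, ratio):
    """Stage 2: variable-ratio conversion by linear interpolation.
    ratio = output_rate / input_rate; need not be rational."""
    n_out = int(len(x) * ratio)
    out = []
    for k in range(n_out):
        pos = k / ratio                     # fractional position in input
        i = int(pos)
        frac = pos - i
        x_next = x[min(i + 1, len(x) - 1)]
        out.append(x[i] * (1 - frac) + x_next * frac)
    return out

x = [float(i) for i in range(32)]           # a ramp at the fixed input rate
y = resample_linear(decimate(x, 2), 0.75)   # 32 -> 16 -> 12 samples
print(len(y))                               # 12
```

    Splitting the conversion this way keeps the variable stage cheap: it runs at the already-reduced intermediate rate.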

  16. Some results from the combinatorial approach to quantum logic

    International Nuclear Information System (INIS)

    Greechie, R.J.

    1976-01-01

    The combinatorial approach to quantum logic focuses on certain interconnections between graphs, combinatorial designs, and convex sets as applied to a quantum logic. This article is concerned only with orthomodular lattices and associated structures. A class of complete atomic irreducible semimodular orthomodular lattices is derived which may not be represented as linear subspaces of a vector space over a division ring. Each of these lattices is a proposition system of dimension three. These proposition systems form orthocomplemented non-Desarguesian projective geometries. (B.R.H.)

  17. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-01-01

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.
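    One of the combinatorial optimization problems named in the abstract, matrix chain multiplication, has a compact classical dynamic-programming solution, shown here for orientation (this is the textbook algorithm, not the thesis's multi-stage extension):

```python
# Classic DP for matrix chain multiplication: cost[i][j] is the minimal
# number of scalar multiplications needed to compute the product of
# matrices i..j, where matrix i has shape dims[i] x dims[i+1].

def matrix_chain_cost(dims):
    n = len(dims) - 1                       # number of matrices
    cost = [[0] * n for _ in range(n)]
    for length in range(2, n + 1):          # chain lengths, shortest first
        for i in range(n - length + 1):
            j = i + length - 1
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i] * dims[k + 1] * dims[j + 1]
                for k in range(i, j)        # try every split point k
            )
    return cost[0][n - 1]

# (10x30)(30x5)(5x60): multiplying (AB) first costs 1500 + 3000 = 4500,
# versus 27000 for A(BC).
print(matrix_chain_cost([10, 30, 5, 60]))   # 4500
```

    The thesis's multi-stage optimization generalizes exactly this kind of recurrence, optimizing over a sequence of cost functions rather than a single one.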

  18. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-07-10

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.

  19. Combinatorial enzyme technology for the conversion of agricultural fibers to functional properties

    Science.gov (United States)

    The concept of combinatorial chemistry has received little attention in agriculture and food research, although its applications in this area were described more than fifteen years ago (1, 2). More recently, interest in the use of combinatorial chemistry in agrochemical discovery has been revitalize...

  20. DNA-Encoded Dynamic Combinatorial Chemical Libraries.

    Science.gov (United States)

    Reddavide, Francesco V; Lin, Weilin; Lehnert, Sarah; Zhang, Yixin

    2015-06-26

    Dynamic combinatorial chemistry (DCC) explores the thermodynamic equilibrium of reversible reactions. Its application in the discovery of protein binders is largely limited by difficulties in the analysis of complex reaction mixtures. DNA-encoded chemical library (DECL) technology allows the selection of binders from a mixture of up to billions of different compounds; however, experimental results often show a low signal-to-noise ratio and poor correlation between enrichment factor and binding affinity. Herein we describe the design and application of DNA-encoded dynamic combinatorial chemical libraries (EDCCLs). Our experiments have shown that the EDCCL approach can be used not only to convert monovalent binders into high-affinity bivalent binders, but also to cause remarkably enhanced enrichment of potent bivalent binders by driving their in situ synthesis. We also demonstrate the application of EDCCLs in DNA-templated chemical reactions. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. "One-sample concept" micro-combinatory for high throughput TEM of binary films.

    Science.gov (United States)

    Sáfrán, György

    2018-04-01

    Phases of thin films may differ remarkably from those of the bulk. Unlike the comprehensive data files of Binary Phase Diagrams [1] available for bulk materials, complete phase maps for thin binary layers do not exist. This is due to both the diverse metastable, non-equilibrium, or unstable phases feasible in thin films and the volume of characterization work required with analytical techniques like TEM, SAED, and EDS. The aim of the present work was to develop a method that remarkably facilitates the TEM study of the diverse binary phases of thin films and the creation of phase maps. A micro-combinatorial method was worked out that enables both preparation and study of a gradient two-component film within a single TEM specimen. For a demonstration of the technique, thin Mn(x)Al(1-x) binary samples with composition evolving from x = 0 to x = 1 were prepared so that the transition from pure Mn to pure Al covers a 1.5 mm long track within the 3 mm diameter TEM grid. The proposed method enables the preparation and study of thin combinatorial samples including all feasible phases as a function of composition or other deposition parameters. Contrary to known "combinatorial chemistry", in which a series of different samples are deposited in one run and investigated one at a time, the present micro-combinatorial method produces a single specimen condensing a complete library of a binary system that can be studied efficiently within a single TEM session. That provides extremely high throughput for TEM characterization of composition-dependent phases, exploration of new materials, and the construction of phase diagrams of binary films. Copyright © 2018 Elsevier B.V. All rights reserved.

  2. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan

    2016-11-02

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories, or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in its operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses, and two approaches to stochastic neural networks are investigated. Aside from size and area, the main points of comparison between the two approaches, and for deciding where the memristor best fits, are their impact on system performance in terms of accuracy, recognition rates, and learning.

  3. Quality control system response to stochastic growth of amyloid fibrils

    DEFF Research Database (Denmark)

    Pigolotti, S.; Lizana, L.; Sneppen, K.

    2013-01-01

    We introduce a stochastic model describing aggregation of misfolded proteins and their degradation by the protein quality control system in a single cell. Aggregate growth is countered by the cell's quality control system, which attacks aggregates at different stages of the growth process, with an efficiency that decreases with their size. Model parameters are estimated from experimental data. Two qualitatively different behaviors emerge: a homeostatic state, where the quality control system is stable and aggregates of large sizes are not formed, and an oscillatory state, where the quality control system......

  4. Combinatorial thin film materials science: From alloy discovery and optimization to alloy design

    Energy Technology Data Exchange (ETDEWEB)

    Gebhardt, Thomas, E-mail: gebhardt@mch.rwth-aachen.de; Music, Denis; Takahashi, Tetsuya; Schneider, Jochen M.

    2012-06-30

    This paper provides an overview of modern alloy development, from discovery and optimization towards alloy design, based on combinatorial thin film materials science. The combinatorial approach, combining combinatorial materials synthesis of thin film composition-spreads with high-throughput property characterization has proven to be a powerful tool to delineate composition-structure-property relationships, and hence to efficiently identify composition windows with enhanced properties. Furthermore, and most importantly for alloy design, theoretical models and hypotheses can be critically appraised. Examples for alloy discovery, optimization, and alloy design of functional as well as structural materials are presented. Using Fe-Mn based alloys as an example, we show that the combination of modern electronic-structure calculations with the highly efficient combinatorial thin film composition-spread method constitutes an effective tool for knowledge-based alloy design.

  5. Combinatorial thin film materials science: From alloy discovery and optimization to alloy design

    International Nuclear Information System (INIS)

    Gebhardt, Thomas; Music, Denis; Takahashi, Tetsuya; Schneider, Jochen M.

    2012-01-01

    This paper provides an overview of modern alloy development, from discovery and optimization towards alloy design, based on combinatorial thin film materials science. The combinatorial approach, combining combinatorial materials synthesis of thin film composition-spreads with high-throughput property characterization has proven to be a powerful tool to delineate composition–structure–property relationships, and hence to efficiently identify composition windows with enhanced properties. Furthermore, and most importantly for alloy design, theoretical models and hypotheses can be critically appraised. Examples for alloy discovery, optimization, and alloy design of functional as well as structural materials are presented. Using Fe-Mn based alloys as an example, we show that the combination of modern electronic-structure calculations with the highly efficient combinatorial thin film composition-spread method constitutes an effective tool for knowledge-based alloy design.

  6. The solution of the neutron point kinetics equation with stochastic extension: an analysis of two moments

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Milena Wollmann da; Vilhena, Marco Tullio M.B.; Bodmann, Bardo Ernst J.; Vasques, Richard, E-mail: milena.wollmann@ufrgs.br, E-mail: vilhena@mat.ufrgs.br, E-mail: bardobodmann@ufrgs.br, E-mail: richard.vasques@fulbrightmail.org [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica

    2015-07-01

    The neutron point kinetics equation, which models the time-dependent behavior of nuclear reactors, is often used to understand the dynamics of nuclear reactor operations. It consists of a system of coupled differential equations that models the interaction between (i) the neutron population; and (ii) the concentration of the delayed neutron precursors, which are radioactive isotopes formed in the fission process that decay through neutron emission. These equations are deterministic in nature, and therefore can provide only average values of the modeled populations. However, the actual dynamical process is stochastic: the neutron density and the delayed neutron precursor concentrations vary randomly with time. To address this stochastic behavior, Hayes and Allen have generalized the standard deterministic point kinetics equation. They derived a system of stochastic differential equations that can accurately model the random behavior of the neutron density and the precursor concentrations in a point reactor. Due to the stiffness of these equations, this system was numerically implemented using a stochastic piecewise constant approximation method (Stochastic PCA). Here, we present a study of the influence of stochastic fluctuations on the results of the neutron point kinetics equation. We reproduce the stochastic formulation introduced by Hayes and Allen and compute Monte Carlo numerical results for examples with constant and time-dependent reactivity, comparing these results with stochastic and deterministic methods found in the literature. Moreover, we introduce a modified version of the stochastic method to obtain a non-stiff solution, analogous to a previously derived deterministic approach. (author)
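    For orientation, the structure of a one-precursor-group point kinetics model with a stochastic extension can be sketched with a simple Euler-Maruyama step. The parameter values and the scalar noise term below are illustrative stand-ins only; the actual Hayes-Allen formulation uses a full 2x2 covariance matrix derived from the underlying birth-death processes.

```python
# Euler-Maruyama sketch of one-group point kinetics:
#   dn/dt = (rho - beta)/Lam * n + lam * C   (+ illustrative noise)
#   dC/dt = beta/Lam * n - lam * C
# rho: reactivity, beta: delayed fraction, lam: precursor decay
# constant, Lam: neutron generation time. Values are illustrative.
import math
import random

rho, beta, lam, Lam = 0.001, 0.007, 0.08, 1e-4
sigma = 0.0                        # set > 0 to see a stochastic path
n = 1.0
C = beta / (Lam * lam)             # start at precursor equilibrium
dt = 1e-5
rng = random.Random(0)
for _ in range(20000):             # integrate to t = 0.2 s
    dW = rng.gauss(0.0, math.sqrt(dt))
    dn = ((rho - beta) / Lam * n + lam * C) * dt \
         + sigma * math.sqrt(max(n, 0.0)) * dW
    dC = (beta / Lam * n - lam * C) * dt
    n, C = n + dn, C + dC
print(round(n, 3))   # rho > 0: prompt jump carries n above 1.0
```

    With `sigma = 0` this reproduces the deterministic prompt-jump behavior (here n rises to roughly beta/(beta - rho) ≈ 1.17); a positive `sigma` gives one sample path of the kind a Monte Carlo study averages over. The small `dt` reflects the stiffness the abstract mentions: the fast prompt mode forces tiny explicit steps.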

  7. The solution of the neutron point kinetics equation with stochastic extension: an analysis of two moments

    International Nuclear Information System (INIS)

    Silva, Milena Wollmann da; Vilhena, Marco Tullio M.B.; Bodmann, Bardo Ernst J.; Vasques, Richard

    2015-01-01

    The neutron point kinetics equation, which models the time-dependent behavior of nuclear reactors, is often used to understand the dynamics of nuclear reactor operations. It consists of a system of coupled differential equations that models the interaction between (i) the neutron population; and (ii) the concentration of the delayed neutron precursors, which are radioactive isotopes formed in the fission process that decay through neutron emission. These equations are deterministic in nature, and therefore can provide only average values of the modeled populations. However, the actual dynamical process is stochastic: the neutron density and the delayed neutron precursor concentrations vary randomly with time. To address this stochastic behavior, Hayes and Allen have generalized the standard deterministic point kinetics equation. They derived a system of stochastic differential equations that can accurately model the random behavior of the neutron density and the precursor concentrations in a point reactor. Due to the stiffness of these equations, this system was numerically implemented using a stochastic piecewise constant approximation method (Stochastic PCA). Here, we present a study of the influence of stochastic fluctuations on the results of the neutron point kinetics equation. We reproduce the stochastic formulation introduced by Hayes and Allen and compute Monte Carlo numerical results for examples with constant and time-dependent reactivity, comparing these results with stochastic and deterministic methods found in the literature. Moreover, we introduce a modified version of the stochastic method to obtain a non-stiff solution, analogous to a previously derived deterministic approach. (author)

  8. Laguerre-type derivatives: Dobinski relations and combinatorial identities

    International Nuclear Information System (INIS)

    Penson, K. A.; Blasiak, P.; Horzela, A.; Duchamp, G. H. E.; Solomon, A. I.

    2009-01-01

    We consider properties of the operators D(r,M) = a^r (a†a)^M (which we call generalized Laguerre-type derivatives), with r = 1,2,..., M = 0,1,..., where a and a† are boson annihilation and creation operators, respectively, satisfying [a, a†] = 1. We obtain explicit formulas for the normally ordered form of arbitrary Taylor-expandable functions of D(r,M) with the help of an operator relation that generalizes the Dobinski formula. Coherent-state expectation values of certain operator functions of D(r,M) turn out to be generating functions of combinatorial numbers. In many cases the corresponding combinatorial structures can be explicitly identified.
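    For orientation, the classical special case that the abstract's operator relation generalizes is the textbook Dobinski formula, reached by normally ordering powers of the number operator and taking a coherent-state expectation value (this is the standard result, not the paper's generalized form):

```latex
% Normal ordering of powers of the number operator a^\dagger a
% introduces Stirling numbers of the second kind S(n,k):
(a^\dagger a)^n \;=\; \sum_{k=1}^{n} S(n,k)\,(a^\dagger)^k a^k ,
% and the coherent-state expectation value at |z| = 1 recovers the
% classical Dobinski formula for the Bell numbers B(n):
B(n) \;=\; \langle z \vert (a^\dagger a)^n \vert z \rangle \big|_{|z|=1}
      \;=\; \frac{1}{e}\sum_{k=0}^{\infty}\frac{k^n}{k!} .
```

    The paper's D(r,M) operators play the same role with r extra annihilation operators, which is what turns the expectation values into generating functions of more general combinatorial numbers.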

  9. Distributing the computation in combinatorial optimization experiments over the cloud

    OpenAIRE

    Mario Brcic; Nikica Hlupic; Nenad Katanic

    2017-01-01

    Combinatorial optimization is an area of great importance, since many real-world problems have discrete parameters which are part of the objective function to be optimized. Development of combinatorial optimization algorithms is guided by the empirical study of candidate ideas and their performance over a wide range of settings or scenarios, from which general conclusions are inferred. The number of scenarios can be overwhelming, especially when modeling uncertainty in some of the problem's parameters....

  10. Quantum Resonance Approach to Combinatorial Optimization

    Science.gov (United States)

    Zak, Michail

    1997-01-01

    It is shown that quantum resonance can be used for combinatorial optimization. The advantage of the approach is that the computing time is independent of the dimensionality of the problem. As an example, the solution to a constraint satisfaction problem of exponential complexity is demonstrated.

  11. Combinatorial pretreatment and fermentation optimization enabled a record yield on lignin bioconversion.

    Science.gov (United States)

    Liu, Zhi-Hua; Xie, Shangxian; Lin, Furong; Jin, Mingjie; Yuan, Joshua S

    2018-01-01

    Lignin valorization has recently been considered to be an essential process for sustainable and cost-effective biorefineries. Lignin represents a potential new feedstock for value-added products. Oleaginous bacteria such as Rhodococcus opacus can produce intracellular lipids from biodegradation of aromatic substrates. These lipids can be used for biofuel production, which can potentially replace petroleum-derived chemicals. However, the low reactivity of lignin produced from pretreatment and the underdeveloped fermentation technology hindered lignin bioconversion to lipids. In this study, combinatorial pretreatment with an optimized fermentation strategy was evaluated to improve lignin valorization into lipids using R. opacus PD630. As opposed to single pretreatment, combinatorial pretreatment produced a 12.8-75.6% higher lipid concentration in fermentation using lignin as the carbon source. Gas chromatography-mass spectrometry analysis showed that combinatorial pretreatment released more aromatic monomers, which could be more readily utilized by lignin-degrading strains. Three detoxification strategies were used to remove potential inhibitors produced from pretreatment. After heating detoxification of the lignin stream, the lipid concentration further increased by 2.9-9.7%. Different fermentation strategies were evaluated in scale-up lipid fermentation using a 2.0-l fermenter. With laccase treatment of the lignin stream produced from combinatorial pretreatment, the highest cell dry weight and lipid concentration were 10.1 and 1.83 g/l, respectively, in fed-batch fermentation, with a total soluble substrate concentration of 40 g/l. The improvement of the lipid fermentation performance may have resulted from lignin depolymerization by the combinatorial pretreatment and laccase treatment, reduced inhibition effects by fed-batch fermentation, adequate oxygen supply, and an accurate pH control in the fermenter. Overall, these results demonstrate that combinatorial

  12. Combinatorial reasoning an introduction to the art of counting

    CERN Document Server

    DeTemple, Duane

    2014-01-01

    Written by well-known scholars in the field, this book introduces combinatorics alongside modern techniques, showcases the interdisciplinary aspects of the topic, and illustrates how to problem solve with a multitude of exercises throughout. The authors' approach is very reader-friendly and avoids the "scholarly tone" found in many books on this topic. Combinatorial Reasoning: An Introduction to the Art of Counting: focuses on enumeration and combinatorial thinking as a way to develop a variety of effective approaches to solving counting problems; includes brief summaries of basic concepts f...

  13. Combinatorial Models for Assembly and Decomposition of Products

    Directory of Open Access Journals (Sweden)

    A. N. Bojko

    2015-01-01

    Full Text Available The paper discusses the most popular combinatorial models used for the synthesis of design solutions at the stage of assembly process planning. It shows that, during product assembly, the relations between parts can be represented as a structure of preferences, formed on the basis of objective design restrictions imposed at the product design stage. This structure is a binary preference relation, a pre-order. Its symmetric part is an equivalence and describes the grouping of parts into assembly units. The asymmetric part is a partial order; it specifies the ordering of parts in the course of the assembly process. The structure of preferences is a minimal description of the restrictions and constraints in the assembly process. It can serve as a source for generating the multiple assembly sequences of a product and its components that are allowed by the design. This multiplicity increases the likelihood of rational choice under uncertainty and unpredictable changes in the properties of technological or industrial systems. An incomplete dominance relation gives grounds for further examination and better understanding of the design situation. The operational scope of the study is limited to the sets of incomparable elements of the partial order. Different strategies for processing the incomparable elements may be offered, e.g., selecting the most informative pairs, whose comparison most effectively linearizes the original partial order.

  14. Combinatorial interpretations of particular evaluations of complete and elementary symmetric functions

    OpenAIRE

    Mongelli, Pietro

    2011-01-01

    The Jacobi-Stirling numbers and the Legendre-Stirling numbers of the first and second kind were first introduced in [6], [7]. In this paper we note that Jacobi-Stirling numbers and Legendre-Stirling numbers are specializations of elementary and complete symmetric functions. We then study combinatorial interpretations of this specialization and obtain new combinatorial interpretations of the Jacobi-Stirling and Legendre-Stirling numbers.

  15. Stochastic Security and Risk-Constrained Scheduling for an Autonomous Microgrid with Demand Response and Renewable Energy Resources

    DEFF Research Database (Denmark)

    Vahedipour-Dahraie, Mostafa; Rashidizadeh-Kermani, Homa; Najafi, Hamid Reza

    2017-01-01

    Demand response of customers can be effectively applied to balance demand and supply in electricity networks. This study presents a novel stochastic model, from a microgrid (MG) operator perspective, for energy and reserve scheduling considering a risk management strategy. It is assumed that the MG operator can procure energy...... The aim is to determine the optimal scheduling, considering risk aversion and system frequency security, to maximise the expected profit of the operator. To deal with various uncertainties, a risk-constrained two-stage stochastic programming model is proposed where the risk aversion of the MG operator is modelled using the conditional value at risk method. Extensive numerical results are shown to demonstrate the effectiveness of the proposed framework....
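    The conditional value at risk (CVaR) measure used to model the operator's risk aversion has a simple interpretation for equiprobable scenarios: the expected profit over the worst (1 - alpha) fraction of outcomes. A minimal sketch, with illustrative scenario profits (full two-stage formulations instead embed CVaR as linear constraints inside the optimization):

```python
# CVaR for equiprobable profit scenarios: average of the worst
# (1 - alpha) tail. A risk-averse operator trades expected profit
# for a higher (less negative) CVaR.

def cvar(profits, alpha=0.95):
    """Mean of the worst (1 - alpha) fraction of scenario profits."""
    worst = sorted(profits)                         # ascending: worst first
    k = max(1, int(round(len(worst) * (1 - alpha))))
    return sum(worst[:k]) / k

scenario_profits = [120.0, 95.0, 110.0, -40.0, 100.0,
                    15.0, 105.0, 98.0, 130.0, 90.0]   # illustrative
print(cvar(scenario_profits, alpha=0.9))   # -40.0 (the single worst scenario)
```

    Maximizing a weighted sum of expected profit and CVaR, rather than expected profit alone, is what makes the schedule robust against the tail scenarios.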

  16. The two-regime method for optimizing stochastic reaction-diffusion simulations

    KAUST Repository

    Flegg, M. B.; Chapman, S. J.; Erban, R.

    2011-01-01

    Spatial organization and noise play an important role in molecular systems biology. In recent years, a number of software packages have been developed for stochastic spatio-temporal simulation, ranging from detailed molecular-based approaches

  17. Hospital daily outpatient visits forecasting using a combinatorial model based on ARIMA and SES models.

    Science.gov (United States)

    Luo, Li; Luo, Le; Zhang, Xinli; He, Xiaoli

    2017-07-10

    Accurate forecasting of hospital outpatient visits is beneficial for the reasonable planning and allocation of healthcare resources to meet medical demands. Given the multiple attributes of daily outpatient visits, such as randomness, cyclicity, and trend, time series methods such as ARIMA can be a good choice for outpatient visit forecasting. On the other hand, hospital outpatient visits are also affected by the doctors' scheduling, and these effects are not purely random. To account for this non-random component, this paper presents a new forecasting model that takes cyclicity and the day-of-the-week effect into consideration. We formulate a seasonal ARIMA (SARIMA) model on the daily time series and a single exponential smoothing (SES) model on the day-of-the-week time series, and finally establish a combinatorial model by modifying them. The models are applied to 1 year of daily visit data of urban outpatients in two internal medicine departments of a large hospital in Chengdu, for forecasting daily outpatient visits about 1 week ahead. The proposed model is applied to forecast the cross-sectional data for 7 consecutive days of daily outpatient visits over an 8-week period, based on 43 weeks of observation data collected during 1 year. The results show that the two single traditional models and the combinatorial model are simple to implement and computationally light, whilst being appropriate for short-term forecast horizons. Furthermore, the combinatorial model captures the comprehensive features of the time series data better and achieves better prediction performance than the single models, with lower residual variance and a small mean residual error, which will be optimized further in the next research step.
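    The spirit of the hybrid can be sketched without any statistics library: extract a day-of-the-week profile, smooth the deseasonalized level with SES, and recombine the two for a 1-week-ahead forecast. This is a simplified stand-in for the paper's SARIMA+SES combination; the visit counts are illustrative.

```python
# Hybrid sketch: day-of-week seasonal profile + simple exponential
# smoothing (SES) on the deseasonalized level, recombined to forecast
# the next 7 days. A stand-in for the SARIMA+SES combinatorial model.

def ses(series, alpha=0.3):
    """Simple exponential smoothing; returns the final smoothed level."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def forecast(visits, horizon=7, period=7, alpha=0.3):
    mean = sum(visits) / len(visits)
    # Seasonal profile: mean deviation of each weekday from the overall mean.
    profile = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(visits):
        profile[i % period] += v - mean
        counts[i % period] += 1
    profile = [s / c for s, c in zip(profile, counts)]
    # SES tracks the level of the deseasonalized series.
    level = ses([v - profile[i % period] for i, v in enumerate(visits)], alpha)
    n = len(visits)
    return [level + profile[(n + h) % period] for h in range(horizon)]

visits = [230, 215, 210, 205, 220, 160, 150] * 4   # four weeks, weekend dip
print([round(f) for f in forecast(visits)])        # reproduces the weekly shape
```

    On this perfectly periodic toy series the forecast reproduces the weekly pattern exactly; on real data the SES level absorbs slow drift while the profile keeps the weekday shape.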

  18. Novel modeling of combinatorial miRNA targeting identifies SNP with potential role in bone density.

    Directory of Open Access Journals (Sweden)

    Claudia Coronnello

    Full Text Available MicroRNAs (miRNAs) are post-transcriptional regulators that bind to their target mRNAs through base complementarity. Predicting miRNA targets is a challenging task, and various studies showed that existing algorithms suffer from a high number of false predictions and low-to-moderate overlap in their predictions. Until recently, very few algorithms considered the dynamic nature of the interactions, including the effect of less specific interactions, the miRNA expression level, and the effect of combinatorial miRNA binding. Addressing these issues can result in a more accurate miRNA:mRNA modeling with many applications, including efficient miRNA-related SNP evaluation. We present a novel thermodynamic model based on the Fermi-Dirac equation that incorporates miRNA expression in the prediction of target occupancy and we show that it improves the performance of two popular single miRNA target finders. Modeling combinatorial miRNA targeting is a natural extension of this model. Two other algorithms show improved prediction efficiency when combinatorial binding models were considered. ComiR (Combinatorial miRNA targeting, a novel algorithm we developed, incorporates the improved predictions of the four target finders into a single probabilistic score using ensemble learning. Combining target scores of multiple miRNAs using ComiR improves predictions over the naïve method for target combination. The ComiR scoring scheme can be used for identification of SNPs affecting miRNA binding. As proof of principle, ComiR identified rs17737058 as disruptive to the miR-488-5p:NCOA1 interaction, which we confirmed in vitro. We also found rs17737058 to be significantly associated with decreased bone mineral density (BMD) in two independent cohorts, indicating that the miR-488-5p/NCOA1 regulatory axis is likely critical in maintaining BMD in women. With increasing availability of comprehensive high-throughput datasets from patients, ComiR is expected to become an essential
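    The Fermi-Dirac functional form behind the occupancy model is simple to state: the probability that a site is bound depends on the binding free energy and a chemical potential set by miRNA abundance. The parameterization below is illustrative, not ComiR's fitted model.

```python
# Illustrative Fermi-Dirac occupancy: p = 1 / (1 + exp((dG - mu) / kT)).
# dG is the site's binding free energy (more negative = stronger site);
# mu is a chemical potential that rises with miRNA abundance.
# Values and units here are illustrative, not ComiR's parameters.
import math

def occupancy(dG, mu, kT=0.593):    # kT in kcal/mol near 298 K
    return 1.0 / (1.0 + math.exp((dG - mu) / kT))

# A stronger site, or a more abundant miRNA (higher mu), raises occupancy:
print(round(occupancy(-12.0, mu=-10.0), 3))   # strong site: near 1
print(round(occupancy(-8.0,  mu=-10.0), 3))   # weak site: near 0
```

    This is also why the model can score SNPs: a variant that weakens binding shifts dG upward and pushes the predicted occupancy of that site toward zero.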

  19. Separable quadratic stochastic operators

    International Nuclear Information System (INIS)

    Rozikov, U.A.; Nazir, S.

    2009-04-01

    We consider quadratic stochastic operators that are separable as a product of two linear operators. Depending on the properties of these linear operators, we classify the separable quadratic stochastic operators into three classes: a first class of constant operators, a second class of linear operators, and a third class of nonlinear (separable) quadratic stochastic operators. Since the properties of operators from the first and second classes are well known, we mainly study the properties of the operators of the third class. We describe some Lyapunov functions of the operators and apply them to study the ω-limit sets of the trajectories generated by the operators. We also compare our results with known results of the theory of quadratic operators and give some open problems. (author)
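
    A minimal numerical sketch of an operator in the third (nonlinear) class: with V(x)_k = (Ax)_k (Bx)_k and nonnegative A, B, the simplex is preserved whenever the symmetric part of AᵀB equals the all-ones matrix. The matrices below are an illustrative choice, not taken from the paper.

    ```python
    import numpy as np

    A = np.eye(2)
    B = np.array([[1.0, 2.0],
                  [0.0, 1.0]])
    # A^T B + (A^T B)^T = 2 * (all-ones matrix), so V maps the simplex to itself.

    def V(x):
        # Separable quadratic stochastic operator: product of two linear operators.
        return (A @ x) * (B @ x)

    # Iterate a trajectory from the center of the simplex.
    x = np.array([0.5, 0.5])
    for _ in range(30):
        x = V(x)
    ```

    Here V(x) = (x1² + 2·x1·x2, x2²), so x2 squares at every step and the trajectory converges to the fixed point (1, 0) while staying on the simplex.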

  20. Gian-Carlo Rota and Combinatorial Math.

    Science.gov (United States)

    Kolata, Gina Bari

    1979-01-01

    Presents the first of a series of occasional articles about mathematics as seen through the eyes of its prominent scholars. In an interview with Gian-Carlo Rota of the Massachusetts Institute of Technology, he discusses how combinatorial mathematics began as a field and what its future holds. (HM)

  1. A Model of Students' Combinatorial Thinking

    Science.gov (United States)

    Lockwood, Elise

    2013-01-01

    Combinatorial topics have become increasingly prevalent in K-12 and undergraduate curricula, yet research on combinatorics education indicates that students face difficulties when solving counting problems. The research community has not yet addressed students' ways of thinking at a level that facilitates deeper understanding of how students…

  2. Capability of focused Ar ion beam sputtering for combinatorial synthesis of metal films

    International Nuclear Information System (INIS)

    Nagata, T.; Haemori, M.; Chikyow, T.

    2009-01-01

    The authors examined the use of focused Ar ion beam sputtering (FAIS) for combinatorial synthesis. A Langmuir probe revealed that the electron temperature and density for FAIS of metal film deposition was lower than that of other major combinatorial thin film growth techniques such as pulsed laser deposition. Combining FAIS with the combinatorial method allowed the compositional fraction of the Pt-Ru binary alloy to be systematically controlled. Pt-Ru alloy metal film grew epitaxially on ZnO substrates, and crystal structures changed from the Pt phase (cubic structure) to the Ru phase (hexagonal structure) in the Pt-Ru alloy phase diagram. The alloy film has a smooth surface, with the Ru phase, in particular, showing a clear step-and-terrace structure. The combination of FAIS and the combinatorial method has major potential for the fabrication of high quality composition-spread metal film.

  4. Analytic solution of the two-dimensional Fokker-Planck equation governing stochastic ion heating by a lower hybrid wave

    International Nuclear Information System (INIS)

    Malescio, G.

    1981-04-01

    The two-dimensional Fokker-Planck equation describing the ion motion in a coherent lower hybrid wave above the stochasticity threshold is analytically solved. An expression is given for the steady-state power dissipation.

  5. Entropy Production in Stochastics

    Directory of Open Access Journals (Sweden)

    Demetris Koutsoyiannis

    2017-10-01

    Full Text Available While the modern definition of entropy is genuinely probabilistic, in entropy production the classical thermodynamic definition, as in heat transfer, is typically used. Here we explore the concept of entropy production within stochastics and, particularly, two forms of entropy production in logarithmic time, unconditionally (EPLT or conditionally on the past and present having been observed (CEPLT. We study the theoretical properties of both forms, in general and in application to a broad set of stochastic processes. A main question investigated, related to model identification and fitting from data, is how to estimate the entropy production from a time series. It turns out that there is a link of the EPLT with the climacogram, and of the CEPLT with two additional tools introduced here, namely the differenced climacogram and the climacospectrum. In particular, EPLT and CEPLT are related to slopes of log-log plots of these tools, with the asymptotic slopes at the tails being most important as they justify the emergence of scaling laws of second-order characteristics of stochastic processes. As a real-world application, we use an extraordinary long time series of turbulent velocity and show how a parsimonious stochastic model can be identified and fitted using the tools developed.
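
    The climacogram that the abstract links to entropy production (the variance of the locally averaged process as a function of averaging scale) can be estimated from a series in a few lines. The white-noise input below is an illustrative stand-in, not the paper's turbulence data; for white noise the log-log slope of the climacogram is -1.

    ```python
    import numpy as np

    def climacogram(x, scales):
        # Variance of the series averaged over non-overlapping blocks of size k,
        # computed for each averaging scale k.
        out = []
        for k in scales:
            n = len(x) // k
            block_means = x[: n * k].reshape(n, k).mean(axis=1)
            out.append(block_means.var(ddof=1))
        return np.array(out)

    rng = np.random.default_rng(42)
    x = rng.standard_normal(2 ** 14)          # synthetic white-noise series
    scales = np.array([1, 2, 4, 8, 16, 32, 64])
    gamma = climacogram(x, scales)

    # Log-log slope: for white noise gamma(k) ~ sigma^2 / k, i.e. slope -1.
    slope = np.polyfit(np.log(scales), np.log(gamma), 1)[0]
    ```

    For a long-range-dependent process the asymptotic slope would be shallower than -1, which is exactly the kind of scaling behavior the paper ties to entropy production.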

  6. Stochastic Analysis 2010

    CERN Document Server

    Crisan, Dan

    2011-01-01

    "Stochastic Analysis" aims to provide mathematical tools to describe and model high dimensional random systems. Such tools arise in the study of Stochastic Differential Equations and Stochastic Partial Differential Equations, Infinite Dimensional Stochastic Geometry, Random Media and Interacting Particle Systems, Super-processes, Stochastic Filtering, Mathematical Finance, etc. Stochastic Analysis has emerged as a core area of late 20th century Mathematics and is currently undergoing a rapid scientific development. The special volume "Stochastic Analysis 2010" provides a sa

  7. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  8. Combinatorial optimization networks and matroids

    CERN Document Server

    Lawler, Eugene

    2011-01-01

    Perceptively written text examines optimization problems that can be formulated in terms of networks and algebraic structures called matroids. Chapters cover shortest paths, network flows, bipartite matching, nonbipartite matching, matroids and the greedy algorithm, matroid intersections, and the matroid parity problems. A suitable text or reference for courses in combinatorial computing and concrete computational complexity in departments of computer science and mathematics.
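
    For the graphic matroid, the greedy algorithm covered in the book specializes to Kruskal's minimum spanning tree algorithm. A minimal sketch, with an illustrative edge list and union-find as the independence test:

    ```python
    def kruskal(n, edges):
        # Greedy over the graphic matroid: scan edges in order of weight and keep
        # an edge iff the result stays independent (creates no cycle), which is
        # tested with a union-find structure over the n vertices.
        parent = list(range(n))

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]  # path halving
                v = parent[v]
            return v

        tree = []
        for w, u, v in sorted(edges):
            ru, rv = find(u), find(v)
            if ru != rv:            # independent: endpoints in different components
                parent[ru] = rv
                tree.append((w, u, v))
        return tree

    # Illustrative weighted graph on 4 vertices.
    edges = [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3), (5, 1, 3)]
    tree = kruskal(4, edges)
    total = sum(w for w, _, _ in tree)
    ```

    The matroid exchange property is what guarantees this greedy scan is optimal; the same code shape works for any matroid once `find`/union is replaced by the matroid's independence oracle.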

  9. COGEDIF - automatic TORT and DORT input generation from MORSE combinatorial geometry models

    International Nuclear Information System (INIS)

    Castelli, R.A.; Barnett, D.A.

    1992-01-01

    COGEDIF is an interactive utility which was developed to automate the preparation of two and three dimensional geometrical inputs for the ORNL-TORT and DORT discrete ordinates programs from complex three dimensional models described using the MORSE combinatorial geometry input description. The program creates either continuous or disjoint mesh input based upon the intersections of user defined meshing planes and the MORSE body definitions. The composition overlay of the combinatorial geometry is used to create the composition mapping of the discretized geometry based upon the composition found at the centroid of each of the mesh cells. This program simplifies the process of using discrete orthogonal mesh cells to represent non-orthogonal geometries in large models which require mesh sizes of the order of a million cells or more. The program was specifically written to take advantage of the new TORT disjoint mesh option which was developed at ORNL

  10. Large-scale Metabolomic Analysis Reveals Potential Biomarkers for Early Stage Coronary Atherosclerosis.

    Science.gov (United States)

    Gao, Xueqin; Ke, Chaofu; Liu, Haixia; Liu, Wei; Li, Kang; Yu, Bo; Sun, Meng

    2017-09-18

    Coronary atherosclerosis (CAS) is the pathogenesis of coronary heart disease, which is a prevalent and chronic life-threatening disease. Initially, this disease is not always detected until a patient presents with serious vascular occlusion. Therefore, new biomarkers for appropriate and timely diagnosis of early CAS are needed for screening to initiate therapy on time. In this study, we used an untargeted metabolomics approach to identify potential biomarkers that could enable highly sensitive and specific CAS detection. Score plots from partial least-squares discriminant analysis clearly separated early-stage CAS patients from controls. Meanwhile, the levels of 24 metabolites increased greatly and those of 18 metabolites decreased markedly in early CAS patients compared with the controls, which suggested significant metabolic dysfunction in phospholipid, sphingolipid, and fatty acid metabolism in the patients. Furthermore, binary logistic regression showed that nine metabolites could be used as a combinatorial biomarker to distinguish early-stage CAS patients from controls. The panel of nine metabolites was then tested with an independent cohort of samples, which also yielded satisfactory diagnostic accuracy (AUC = 0.890). In conclusion, our findings provide insight into the pathological mechanism of early-stage CAS and also supply a combinatorial biomarker to aid clinical diagnosis of early-stage CAS.
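
    The combination step described above (binary logistic regression folding several markers into one combinatorial score, evaluated by AUC) can be sketched on synthetic data; the nine features below are random stand-ins, not the study's metabolites.

    ```python
    import numpy as np

    # Synthetic stand-in data: 9 "metabolite" features for 200 subjects.
    rng = np.random.default_rng(0)
    n, p = 200, 9
    X = rng.standard_normal((n, p))
    true_w = rng.standard_normal(p)
    y = (X @ true_w + 0.5 * rng.standard_normal(n) > 0).astype(float)

    # Fit logistic regression by plain gradient descent.
    w, b = np.zeros(p), 0.0
    for _ in range(2000):
        prob = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        grad = prob - y
        w -= 0.1 * (X.T @ grad) / n
        b -= 0.1 * grad.mean()

    score = X @ w + b          # the combinatorial biomarker score

    def auc(labels, s):
        # Probability that a random positive scores above a random negative.
        pos, neg = s[labels == 1], s[labels == 0]
        greater = (pos[:, None] > neg[None, :]).mean()
        ties = (pos[:, None] == neg[None, :]).mean()
        return greater + 0.5 * ties

    train_auc = auc(y, score)
    ```

    In a real study the AUC would of course be reported on a held-out or independent cohort, as the paper does, rather than on the training data used here.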

  11. Summation on the basis of combinatorial representation of equal powers

    Directory of Open Access Journals (Sweden)

    Alexander I. Nikonov

    2016-03-01

    Full Text Available The paper derives combinatorial expressions for the sums of the members of several sequences, based on a combinatorial representation of the sum of weighted equal powers. The weighted members of a geometric progression and of simple arithmetic-geometric and combined progressions are summed. A central role in the derivation is played by the representation of the members of each of these progressions as matrix elements; the rows of the matrix are formed from a set of equal powers with a given weight factor. The paper also presents formulas for combinatorial identities involving the free components of the sums of equal powers, as well as an individual power term of a sequence of equal powers or of a geometric progression. All the presented formulas have a common basis: the components of the sums of equal powers.

  12. Two combinatorial optimization problems for SNP discovery using base-specific cleavage and mass spectrometry.

    Science.gov (United States)

    Chen, Xin; Wu, Qiong; Sun, Ruimin; Zhang, Louxin

    2012-01-01

    The discovery of single-nucleotide polymorphisms (SNPs) has important implications in a variety of genetic studies on human diseases and biological functions. One valuable approach proposed for SNP discovery is based on base-specific cleavage and mass spectrometry. However, it is still very challenging to achieve the full potential of this SNP discovery approach. In this study, we formulate two new combinatorial optimization problems. While both problems are aimed at reconstructing the sample sequence that would attain the minimum number of SNPs, they search over different candidate sequence spaces. The first problem, denoted SNP-MSP, limits its search to sequences whose in silico predicted mass spectra have all their signals contained in the measured mass spectra. In contrast, the second problem, denoted SNP-MSQ, limits its search to sequences whose in silico predicted mass spectra instead contain all the signals of the measured mass spectra. We present an exact dynamic programming algorithm for solving the SNP-MSP problem and also show that the SNP-MSQ problem is NP-hard by a reduction from a restricted variation of the 3-partition problem. We believe that an efficient solution to either problem could offer a seamless integration of information in four complementary base-specific cleavage reactions, thereby improving the capability of the underlying biotechnology for sensitive and accurate SNP discovery.

  13. Stochastic Effects in Microstructure

    Directory of Open Access Journals (Sweden)

    Glicksman M.E.

    2002-01-01

    Full Text Available We are currently studying microstructural responses to diffusion-limited coarsening in two-phase materials. A mathematical solution to late-stage multiparticle diffusion in finite systems is formulated with account taken of particle-particle interactions and their microstructural correlations, or "locales". The transition from finite-system behavior to that of an infinite microstructure is established analytically. Large-scale simulations of late-stage phase coarsening dynamics show increased fluctuations, with increasing volume fraction Vv, of the mean flux entering or leaving particles of a given size class. Fluctuations about the mean flux were found to depend on the scaled particle size R/⟨R⟩, where R is the radius of a particle and ⟨R⟩ is the radius of the dispersoid averaged over the population within the microstructure. Specifically, small (shrinking) particles tend to display weak fluctuations about their mean flux, whereas particles of average or above-average size exhibit strong fluctuations. Remarkably, even in cases of microstructures with a relatively small volume fraction (Vv ~ 10^-4), the particle size distribution is broader than that of the well-known Lifshitz-Slyozov limit predicted at zero volume fraction. The simulation results reported here provide some additional surprising insights into the effect of diffusion interactions and stochastic effects during the evolution of a microstructure as it approaches its thermodynamic end-state.

  14. The Robustness of Stochastic Switching Networks

    OpenAIRE

    Loh, Po-Ling; Zhou, Hongchao; Bruck, Jehoshua

    2009-01-01

    Many natural systems, including chemical and biological systems, can be modeled using stochastic switching circuits. These circuits consist of stochastic switches, called pswitches, which operate with a fixed probability of being open or closed. We study the effect caused by introducing an error of size ε to each pswitch in a stochastic circuit. We analyze two constructions – simple series-parallel and general series-parallel circuits – and prove that simple series-parallel circuits are robus...
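
    The closure probabilities of series and parallel compositions of pswitches, and the effect of an ε-perturbation on a simple series-parallel circuit, can be computed directly; the circuit shape and numbers below are illustrative assumptions, not the paper's constructions.

    ```python
    def series(*ps):
        # A series connection is closed iff every pswitch is closed.
        out = 1.0
        for p in ps:
            out *= p
        return out

    def parallel(*ps):
        # A parallel connection is closed iff at least one pswitch is closed.
        out = 1.0
        for p in ps:
            out *= 1.0 - p
        return 1.0 - out

    def circuit(p):
        # An illustrative simple series-parallel circuit:
        # (switch in series with switch) in parallel with a third switch.
        return parallel(series(p, p), p)

    eps = 0.01
    err = abs(circuit(0.5 + eps) - circuit(0.5))  # output shift from perturbing p
    ```

    For this circuit the output probability at p = 0.5 is exactly 0.625, and a perturbation of size ε shifts it by only a small multiple of ε, the kind of robustness the paper proves for simple series-parallel constructions.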

  15. STAR CLUSTER PROPERTIES IN TWO LEGUS GALAXIES COMPUTED WITH STOCHASTIC STELLAR POPULATION SYNTHESIS MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Krumholz, Mark R. [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Adamo, Angela [Department of Astronomy, Oskar Klein Centre, Stockholm University, SE-10691 Stockholm (Sweden); Fumagalli, Michele [Institute for Computational Cosmology and Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE (United Kingdom); Wofford, Aida [Institut d’Astrophysique de Paris, 98bis Boulevard Arago, F-75014 Paris (France); Calzetti, Daniela; Grasha, Kathryn [Department of Astronomy, University of Massachusetts–Amherst, Amherst, MA (United States); Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Ubeda, Leonardo [Space Telescope Science Institute, Baltimore, MD (United States); Gouliermis, Dimitrios A. [Centre for Astronomy, Institute for Theoretical Astrophysics, University of Heidelberg, Heidelberg (Germany); Kim, Hwihyun [Korea Astronomy and Space Science Institute, Daejeon (Korea, Republic of); Nair, Preethi [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL (United States); Ryon, Jenna E. [Department of Astronomy, University of Wisconsin–Madison, Madison, WI (United States); Smith, Linda J. [European Space Agency/Space Telescope Science Institute, Baltimore, MD (United States); Thilker, David [Department of Physics and Astronomy, The Johns Hopkins University, Baltimore, MD (United States); Zackrisson, Erik, E-mail: mkrumhol@ucsc.edu, E-mail: adamo@astro.su.se [Department of Physics and Astronomy, Uppsala University, Uppsala (Sweden)

    2015-10-20

    We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.

  16. Stochastic-field cavitation model

    International Nuclear Information System (INIS)

    Dumond, J.; Magagnato, F.; Class, A.

    2013-01-01

    Nonlinear phenomena can often be well described using probability density functions (pdf) and pdf transport models. Traditionally, the simulation of pdf transport requires Monte-Carlo codes based on Lagrangian “particles” or prescribed pdf assumptions including binning techniques. Recently, in the field of combustion, a novel formulation called the stochastic-field method solving pdf transport based on Eulerian fields has been proposed which eliminates the necessity to mix Eulerian and Lagrangian techniques or prescribed pdf assumptions. In the present work, for the first time the stochastic-field method is applied to multi-phase flow and, in particular, to cavitating flow. To validate the proposed stochastic-field cavitation model, two applications are considered. First, sheet cavitation is simulated in a Venturi-type nozzle. The second application is an innovative fluidic diode which exhibits coolant flashing. Agreement with experimental results is obtained for both applications with a fixed set of model constants. The stochastic-field cavitation model captures the wide range of pdf shapes present at different locations.

  18. Validation of an Instrument and Testing Protocol for Measuring the Combinatorial Analysis Schema.

    Science.gov (United States)

    Staver, John R.; Harty, Harold

    1979-01-01

    Designs a testing situation to examine the presence of combinatorial analysis, to establish construct validity in the use of an instrument, Combinatorial Analysis Behavior Observation Scheme (CABOS), and to investigate the presence of the schema in young adolescents. (Author/GA)

  19. Dynamic-Programming Approaches to Single- and Multi-Stage Stochastic Knapsack Problems for Portfolio Optimization

    National Research Council Canada - National Science Library

    Khoo, Wai

    1999-01-01

    .... These problems model stochastic portfolio optimization problems (SPOPs) which assume deterministic unit weight, and normally distributed unit return with known mean and variance for each item type...

  20. MARCC (Matrix-Assisted Reader Chromatin Capture): an antibody-free method to enrich and analyze combinatorial nucleosome modifications

    Science.gov (United States)

    Su, Zhangli

    2016-01-01

    Combinatorial patterns of histone modifications are key indicators of different chromatin states. Most of the current approaches rely on the usage of antibodies to analyze combinatorial histone modifications. Here we detail an antibody-free method named MARCC (Matrix-Assisted Reader Chromatin Capture) to enrich combinatorial histone modifications. The combinatorial patterns are enriched on native nucleosomes extracted from cultured mammalian cells and prepared by micrococcal nuclease digestion. Such enrichment is achieved by recombinant chromatin-interacting protein modules, or so-called reader domains, which can bind in a combinatorial modification-dependent manner. The enriched chromatin can be quantified by western blotting or mass spectrometry for the co-existence of histone modifications, while the associated DNA content can be analyzed by qPCR or next-generation sequencing. Altogether, MARCC provides a reproducible, efficient and customizable solution to enrich and analyze combinatorial histone modifications. PMID:26131849

  1. A Dual-Stage Two-Phase Model of Selective Attention

    Science.gov (United States)

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  2. Symmetries of stochastic differential equations: A geometric approach

    Energy Technology Data Exchange (ETDEWEB)

    De Vecchi, Francesco C., E-mail: francesco.devecchi@unimi.it; Ugolini, Stefania, E-mail: stefania.ugolini@unimi.it [Dipartimento di Matematica, Università degli Studi di Milano, via Saldini 50, Milano (Italy); Morando, Paola, E-mail: paola.morando@unimi.it [DISAA, Università degli Studi di Milano, via Celoria 2, Milano (Italy)

    2016-06-15

    A new notion of stochastic transformation is proposed and applied to the study of both weak and strong symmetries of stochastic differential equations (SDEs). The correspondence between an algebra of weak symmetries for a given SDE and an algebra of strong symmetries for a modified SDE is proved under suitable regularity assumptions. This general approach is applied to a stochastic version of a two dimensional symmetric ordinary differential equation and to the case of two dimensional Brownian motion.

  3. [Comparison research on two-stage sequencing batch MBR and one-stage MBR].

    Science.gov (United States)

    Yuan, Xin-Yan; Shen, Heng-Gen; Sun, Lei; Wang, Lin; Li, Shi-Feng

    2011-01-01

    Aiming at resolving problems in MBR operation, such as low nitrogen and phosphorus removal efficiency and severe membrane fouling, a comparison between a two-stage sequencing batch MBR (TSBMBR) and a one-stage aerobic MBR was carried out in this paper. The results indicated that the TSBMBR combines the advantages of an SBR in removing nitrogen and phosphorus, making up for the deficiency of the traditional one-stage aerobic MBR in nitrogen and phosphorus removal. During steady operation, the average effluent NH4(+)-N, TN, and TP concentrations were 2.83, 12.20, and 0.42 mg/L, respectively, meeting the requirements for domestic scenic environment reuse. From the point of view of membrane fouling control, the TSBMBR had lower SMP in the supernatant, a lower specific trans-membrane flux decline rate, and lower membrane fouling resistance than the one-stage aerobic MBR. The sedimentation and gel layer resistances of the TSBMBR were only 6.5% and 33.12% of those of the one-stage aerobic MBR. Besides high nitrogen and phosphorus removal efficiency, the TSBMBR could effectively reduce sedimentation and gel layer fouling on the membrane surface. Compared with the one-stage MBR, the TSBMBR could operate with a higher trans-membrane flux, a lower membrane fouling rate, and better pollutant removal.

  4. Energy demand in Portuguese manufacturing: a two-stage model

    International Nuclear Information System (INIS)

    Borges, A.M.; Pereira, A.M.

    1992-01-01

    We use a two-stage model of factor demand to estimate the parameters determining energy demand in Portuguese manufacturing. In the first stage, a capital-labor-energy-materials framework is used to analyze the substitutability between energy as a whole and other factors of production. In the second stage, total energy demand is decomposed into oil, coal and electricity demands. The two stages are fully integrated since the energy composite used in the first stage and its price are obtained from the second stage energy sub-model. The estimates obtained indicate that energy demand in manufacturing responds significantly to price changes. In addition, estimation results suggest that there are important substitution possibilities among energy forms and between energy and other factors of production. The role of price changes in energy-demand forecasting, as well as in energy policy in general, is clearly established. (author)

  5. Combinatorial algebraic geometry selected papers from the 2016 apprenticeship program

    CERN Document Server

    Sturmfels, Bernd

    2017-01-01

    This volume consolidates selected articles from the 2016 Apprenticeship Program at the Fields Institute, part of the larger program on Combinatorial Algebraic Geometry that ran from July through December of 2016. Written primarily by junior mathematicians, the articles cover a range of topics in combinatorial algebraic geometry including curves, surfaces, Grassmannians, convexity, abelian varieties, and moduli spaces. This book bridges the gap between graduate courses and cutting-edge research by connecting historical sources, computation, explicit examples, and new results.

  6. Stochastic Effects; Application in Nuclear Physics

    International Nuclear Information System (INIS)

    Mazonka, O.

    2000-04-01

    Stochastic effects in nuclear physics refer to the study of the dynamics of nuclear systems evolving under stochastic equations of motion. In this dissertation we restrict our attention to classical scattering models. We begin with an introduction of the model of nuclear dynamics and the deterministic equations of evolution. We apply a Langevin approach - an additional property of the model which reflects the statistical nature of low-energy nuclear behaviour. We then concentrate our attention on the problem of calculating tails of distribution functions, which is in fact the problem of calculating probabilities of rare outcomes. Two general strategies are proposed. Results and discussion follow. Finally, in the appendix we consider stochastic effects in nonequilibrium systems. A few exactly solvable models are presented. For one model we show explicitly that stochastic behaviour in a microscopic description can lead to ordered collective effects on the macroscopic scale. Two others are solved to confirm the predictions of the fluctuation theorem. (author)

  7. The influence of magnetic field strength in ionization stage on ion transport between two stages of a double stage Hall thruster

    International Nuclear Information System (INIS)

    Yu Daren; Song Maojiang; Li Hong; Liu Hui; Han Ke

    2012-01-01

    Designing a special ionization stage for a double stage Hall thruster is futile if the ionized ions cannot enter the acceleration stage. Based on this viewpoint, the ion transport under different magnetic field strengths in the ionization stage is investigated, and the physical mechanisms affecting the ion transport are analyzed in this paper. With a combined experimental and particle-in-cell simulation study, it is found that the ion transport between the two stages is chiefly affected by the potential well, the potential barrier, and the potential drop at the bottom of the potential well. As the magnetic field strength in the ionization stage increases, the plasma density grows because of the deeper potential well. Furthermore, the potential barrier near the intermediate electrode first declines and then rises, while the potential drop at the bottom of the potential well first rises and then declines as the magnetic field strength in the ionization stage increases. Consequently, both the ion current entering the acceleration stage and the total ion current ejected from the thruster first rise and then decline as the magnetic field strength in the ionization stage increases. Therefore, there is an optimal magnetic field strength in the ionization stage to guide the ion transport between the two stages.

  8. PIPERIDINE OLIGOMERS AND COMBINATORIAL LIBRARIES THEREOF

    DEFF Research Database (Denmark)

    1999-01-01

    The present invention relates to piperidine oligomers, methods for the preparation of piperidine oligomers and compound libraries thereof, and the use of piperidine oligomers as drug substances. The present invention also relates to the use of combinatorial libraries of piperidine oligomers...... in libraries (arrays) of compounds especially suitable for screening purposes....

  9. Combinatorial therapy discovery using mixed integer linear programming.

    Science.gov (United States)

    Pang, Kaifang; Wan, Ying-Wooi; Choi, William T; Donehower, Lawrence A; Sun, Jingchun; Pant, Dhruv; Liu, Zhandong

    2014-05-15

    Combinatorial therapies play increasingly important roles in combating complex diseases. Owing to the huge cost associated with experimental methods for identifying optimal drug combinations, computational approaches can provide a guide to limit the search space and reduce cost. However, few computational approaches have been developed for this purpose, and thus there is a great need for new algorithms for drug combination prediction. Here we propose to formulate the optimal combinatorial therapy problem as two complementary mathematical problems, Balanced Target Set Cover (BTSC) and Minimum Off-Target Set Cover (MOTSC). Given a disease gene set, BTSC seeks a balanced solution that maximizes the coverage of the disease genes and minimizes the off-target hits at the same time. MOTSC seeks full coverage of the disease gene set while minimizing the off-target set. Through simulation, both BTSC and MOTSC demonstrated a much faster running time than exhaustive search with the same accuracy. When applied to real disease gene sets, our algorithms not only identified known drug combinations, but also predicted novel drug combinations that are worth further testing. In addition, we developed a web-based tool to allow users to iteratively search for optimal drug combinations given a user-defined gene set. Our tool is freely available for noncommercial use at http://www.drug.liuzlab.org/. zhandong.liu@bcm.edu Supplementary data are available at Bioinformatics online.
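
    The two set-cover variants above can be illustrated with a simple greedy heuristic for a MOTSC-style objective: cover every disease gene while keeping the accumulated off-target set small. The sketch below is not the paper's MILP formulation, and the drug names and target sets are hypothetical:

```python
# Greedy sketch of a MOTSC-style objective: cover all disease genes while
# limiting off-target hits. Illustrative only; the paper solves the exact
# problem with mixed integer linear programming.

def greedy_motsc(disease_genes, drug_targets):
    """Pick drugs until every disease gene is covered.

    disease_genes : set of genes to hit
    drug_targets  : dict mapping drug -> set of genes it hits
    Returns the chosen combination as a list, or None if infeasible.
    """
    uncovered = set(disease_genes)
    off_target = set()
    chosen = []
    while uncovered:
        # Rank remaining drugs by new disease genes covered per new
        # off-target gene introduced (+1 avoids division by zero).
        best = max(
            (d for d in drug_targets if d not in chosen),
            key=lambda d: len(drug_targets[d] & uncovered)
            / (1 + len(drug_targets[d] - disease_genes - off_target)),
            default=None,
        )
        if best is None or not drug_targets[best] & uncovered:
            return None  # no remaining drug covers any remaining gene
        chosen.append(best)
        uncovered -= drug_targets[best]
        off_target |= drug_targets[best] - disease_genes
    return chosen

drugs = {  # hypothetical drugs; the "x*" genes are off-target
    "A": {"g1", "g2", "x1"},
    "B": {"g3", "x2", "x3"},
    "C": {"g2", "g3"},
}
combo = greedy_motsc({"g1", "g2", "g3"}, drugs)  # -> ["C", "A"]
```

    Unlike the MILP, the greedy ratio carries no optimality guarantee, but it mirrors the classic set-cover heuristic and shows why maximizing disease-gene coverage and minimizing off-target hits pull in different directions.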

  10. Three Syntactic Theories for Combinatory Graph Reduction

    DEFF Research Database (Denmark)

    Danvy, Olivier; Zerny, Ian

    2011-01-01

    in a third syntactic theory. The structure of the store-based abstract machine corresponding to this third syntactic theory coincides with that of Turner's original reduction machine. The three syntactic theories presented here therefore have the following......We present a purely syntactic theory of graph reduction for the canonical combinators S, K, and I, where graph vertices are represented with evaluation contexts and let expressions. We express this syntactic theory as a reduction semantics, which we refocus into the first storeless abstract machine...... for combinatory graph reduction, which we refunctionalize into the first storeless natural semantics for combinatory graph reduction. We then factor out the introduction of let expressions to denote as many graph vertices as possible upfront instead of on demand, resulting in a second syntactic theory, this one...

  11. Three Syntactic Theories for Combinatory Graph Reduction

    DEFF Research Database (Denmark)

    Danvy, Olivier; Zerny, Ian

    2013-01-01

    , as a store-based reduction semantics of combinatory term graphs. We then refocus this store-based reduction semantics into a store-based abstract machine. The architecture of this store-based abstract machine coincides with that of Turner's original reduction machine. The three syntactic theories presented......We present a purely syntactic theory of graph reduction for the canonical combinators S, K, and I, where graph vertices are represented with evaluation contexts and let expressions. We express this first syntactic theory as a storeless reduction semantics of combinatory terms. We then factor out...... the introduction of let expressions to denote as many graph vertices as possible upfront instead of on demand. The factored terms can be interpreted as term graphs in the sense of Barendregt et al. We express this second syntactic theory, which we prove equivalent to the first, as a storeless reduction semantics...

  12. Numerical studies of the stochastic Korteweg-de Vries equation

    International Nuclear Information System (INIS)

    Lin Guang; Grinberg, Leopold; Karniadakis, George Em

    2006-01-01

    We present numerical solutions of the stochastic Korteweg-de Vries equation for three cases corresponding to additive time-dependent noise, multiplicative space-dependent noise and a combination of the two. We employ polynomial chaos for discretization in random space, and discontinuous Galerkin and finite difference for discretization in physical space. The accuracy of the stochastic solutions is investigated by comparing the first two moments against analytical and Monte Carlo simulation results. Of particular interest is the interplay of spatial discretization error with the stochastic approximation error, which is examined for different orders of spatial and stochastic approximation

  13. Stochastic quantization and topological theories

    International Nuclear Information System (INIS)

    Fainberg, V.Y.; Subbotin, A.V.; Kuznetsov, A.N.

    1992-01-01

    In the last two years topological quantum field theories (TQFT) have attracted much attention. This paper reports that from the very beginning it was realized that, due to a peculiar BRST-like symmetry, these models admitted a so-called Nicolai mapping: the Nicolai variables, in terms of which the actions of the theories become Gaussian, are nothing but (anti-)selfduality conditions or their generalizations. This fact became a starting point in the quest for a possible stochastic interpretation of topological field theories. The reasons behind this were quite simple and included, in particular, the well-known relations between stochastic processes and supersymmetry. The main goal would have been achieved if it were possible to construct stochastic processes governed by Langevin or Fokker-Planck equations in a real Euclidean time leading to TQFT's path integrals (equivalently: to reformulate TQFTs as non-equilibrium phase dynamics of stochastic processes). Further on, if it turned out that these processes correspond to the stochastic quantization of theories of some definite kind, one could expect (d + 1)-dimensional TQFTs to share some common properties with d-dimensional ones

  14. Single-stage Acetabular Revision During Two-stage THA Revision for Infection is Effective in Selected Patients.

    Science.gov (United States)

    Fink, Bernd; Schlumberger, Michael; Oremek, Damian

    2017-08-01

    The treatment of periprosthetic infections of hip arthroplasties typically involves use of either a single- or two-stage (with implantation of a temporary spacer) revision surgery. In patients with severe acetabular bone deficiencies, either already present or after component removal, spacers cannot be safely implanted. In such hips where it is impossible to use spacers and yet a two-stage revision of the prosthetic stem is recommended, we have combined a two-stage revision of the stem with a single revision of the cup. To our knowledge, this approach has not been reported before. (1) What proportion of patients treated with single-stage acetabular reconstruction as part of a two-stage revision for an infected THA remain free from infection at 2 or more years? (2) What are the Harris hip scores after the first stage and at 2 years or more after the definitive reimplantation? Between June 2009 and June 2014, we treated all patients undergoing surgical treatment for an infected THA using a single-stage acetabular revision as part of a two-stage THA exchange if the acetabular defect classification was Paprosky Types 2B, 2C, 3A, 3B, or pelvic discontinuity and a two-stage procedure was preferred for the femur. The procedure included removal of all components, joint débridement, definitive acetabular reconstruction (with a cage to bridge the defect, and a cemented socket), and a temporary cemented femoral component at the first stage; the second stage consisted of repeat joint and femoral débridement and exchange of the femoral component to a cementless device. During the period noted, 35 patients met those definitions and were treated with this approach. No patients were lost to followup before 2 years; mean followup was 42 months (range, 24-84 months). The clinical evaluation was performed with the Harris hip scores and resolution of infection was assessed by the absence of clinical signs of infection and a C-reactive protein level less than 10 mg/L. All

  15. Principles of Linguistic Composition Below and Beyond the Clause—Elements of a semantic combinatorial system

    DEFF Research Database (Denmark)

    Bundgaard, Peer

    2006-01-01

    beyond the scope of the clause. To this end it exposes two major principles of semantic combination that are active through all levels of linguistic composition: viz. frame-schematic structure and narrative structure. These principles are considered as being components of a semantic combinatorial system...

  16. Markov stochasticity coordinates

    International Nuclear Information System (INIS)

    Eliazar, Iddo

    2017-01-01

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  17. Markov stochasticity coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Eliazar, Iddo, E-mail: iddo.eliazar@intel.com

    2017-01-15

    Markov dynamics constitute one of the most fundamental models of random motion between the states of a system of interest. Markov dynamics have diverse applications in many fields of science and engineering, and are particularly applicable in the context of random motion in networks. In this paper we present a two-dimensional gauging method of the randomness of Markov dynamics. The method, termed Markov Stochasticity Coordinates, is established, discussed, and exemplified. Also, the method is tweaked to quantify the stochasticity of the first-passage-times of Markov dynamics, and the socioeconomic equality and mobility in human societies.

  18. Filtering and control of stochastic jump hybrid systems

    CERN Document Server

    Yao, Xiuming; Zheng, Wei Xing

    2016-01-01

    This book presents recent research work on stochastic jump hybrid systems. Specifically, the considered stochastic jump hybrid systems include Markovian jump Ito stochastic systems, Markovian jump linear-parameter-varying (LPV) systems, Markovian jump singular systems, Markovian jump two-dimensional (2-D) systems, and Markovian jump repeated scalar nonlinear systems. Some sufficient conditions are first established, respectively, for the stability and performance of these kinds of stochastic jump hybrid systems in terms of solutions of linear matrix inequalities (LMIs). Based on the derived analysis conditions, the filtering and control problems are addressed. The book presents up-to-date research developments and novel methodologies on stochastic jump hybrid systems. The contents can be divided into two parts: the first part focuses on the robust filter design problem, while the second part puts the emphasis on the robust control problem. These methodologies provide a framework for stability and performance analy...

  19. Guidelines for the formulation of Lagrangian stochastic models for particle simulations of single-phase and dispersed two-phase turbulent flows

    Science.gov (United States)

    Minier, Jean-Pierre; Chibbaro, Sergio; Pope, Stephen B.

    2014-11-01

    In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult and the present choice is to require that the single-phase situation be well-retrieved in the fluid-limit case, elementary predictive abilities be respected and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation since advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions. A second interest is to indicate how future

  20. Guidelines for the formulation of Lagrangian stochastic models for particle simulations of single-phase and dispersed two-phase turbulent flows

    International Nuclear Information System (INIS)

    Minier, Jean-Pierre; Chibbaro, Sergio; Pope, Stephen B.

    2014-01-01

    In this paper, we establish a set of criteria which are applied to discuss various formulations under which Lagrangian stochastic models can be found. These models are used for the simulation of fluid particles in single-phase turbulence as well as for the fluid seen by discrete particles in dispersed turbulent two-phase flows. The purpose of the present work is to provide guidelines, useful for experts and non-experts alike, which are shown to be helpful to clarify issues related to the form of Lagrangian stochastic models. A central issue is to put forward reliable requirements which must be met by Lagrangian stochastic models and a new element brought by the present analysis is to address the single- and two-phase flow situations from a unified point of view. For that purpose, we consider first the single-phase flow case and check whether models are fully consistent with the structure of the Reynolds-stress models. In the two-phase flow situation, coming up with clear-cut criteria is more difficult and the present choice is to require that the single-phase situation be well-retrieved in the fluid-limit case, elementary predictive abilities be respected and that some simple statistical features of homogeneous fluid turbulence be correctly reproduced. This analysis does not address the question of the relative predictive capacities of different models but concentrates on their formulation since advantages and disadvantages of different formulations are not always clear. Indeed, hidden in the changes from one structure to another are some possible pitfalls which can lead to flaws in the construction of practical models and to physically unsound numerical calculations. A first interest of the present approach is illustrated by considering some models proposed in the literature and by showing that these criteria help to assess whether these Lagrangian stochastic models can be regarded as acceptable descriptions. A second interest is to indicate how future

  1. Development of Combinatorial Methods for Alloy Design and Optimization

    International Nuclear Information System (INIS)

    Pharr, George M.; George, Easo P.; Santella, Michael L

    2005-01-01

    rapid structural and chemical characterization of alloy libraries was developed based on high intensity x-radiation available at synchrotron sources such as the Advanced Photon Source (APS) at Argonne National Laboratory (ANL). With the technique, structural and chemical characterization of up to 2500 discrete positions on a library can be made in a period of less than 4 hours. Among the parameters that can be measured are the chemical composition, crystal structure, lattice parameters, texture, and grain size. From these, one can also deduce isothermal sections of ternary phase diagrams. The equipment and techniques needed to do this are now in place for use in future combinatorial studies at the ORNL beam line at the APS. In conjunction with the chemical and structural investigations, nanoindentation techniques were developed to investigate the mechanical properties of the combinatorial libraries. The two primary mechanical properties of interest were the elastic modulus, E, and hardness, H, both of which were measured on alloy library surfaces with spatial resolutions of better than 1 μm. A nanoindentation testing system at ORNL was programmed to make a series of indentations at specified locations on the library surface and automatically collect and store all the data needed to obtain hardness and modulus as a function of position. Approximately 200 indentations can be made during an overnight run, which allows for mechanical property measurement over a wide range of chemical composition in a relatively short time. Since the materials based on the Fe-Ni-Cr system often find application in highly carburizing and harsh chemical environments, simple techniques were developed to assess the resistance of Fe-Ni-Cr alloy libraries to carburization and corrosion. Alloy libraries were carburized by standard techniques, and the effectiveness of the carburization at various points along the sample surface was assessed by nanoindentation hardness measurement.
Corrosion tests were

  2. Stochastic analysis of contaminant transport in porous media: analysis of a two-member radionuclide chain

    International Nuclear Information System (INIS)

    Bonano, E.J.; Shipers, L.R.

    1987-01-01

    In this study the authors extend previous stochastic analyses of contaminant transport in geologic media for a single species to a chain of two species. Their particular application is the quantification of uncertainties, due to the lack of characterization of the spatial variability of hydrologic parameters, in the transport of radionuclides from a high-level waste repository to the biosphere. Radionuclide chains can have a significant impact on demonstrating compliance with (or violation of) standards regulating the release to the environment accessible to humans. Two approaches for determining the cross-covariance terms in the mean concentration equations are presented. One uses a Taylor expansion to obtain the cross-covariance between the velocity and concentration fluctuations, while the other is based on a Fourier-Laplace double transform method. For the conditions of interest here, the differences between these two approaches are expected to be small. In addition, the variances are calculated in a unique way by solving another associated partial differential equation. A parametric study is carried out to examine the sensitivity of the mean concentrations of the two species and their corresponding variances and cross-covariance to the parameters associated with the structure of the stochastic velocity field. It is found that the dependent variables are most sensitive to the intensity and correlation length of the velocity fluctuations. The magnitudes of the variances and cross-covariance of the concentrations are proportional to the magnitude of the mean concentrations, which depend on the inlet concentration boundary conditions

  3. The role of demand response in single and multi-objective wind-thermal generation scheduling: A stochastic programming

    International Nuclear Information System (INIS)

    Falsafi, Hananeh; Zakariazadeh, Alireza; Jadid, Shahram

    2014-01-01

    This paper focuses on using DR (Demand Response) as a means to provide reserve in order to cover uncertainty in wind power forecasting in an SG (Smart Grid) environment. The proposed stochastic model schedules energy and reserves provided by both generating units and responsive loads in power systems with a high penetration of wind power. The model is formulated as a two-stage stochastic program, where the first stage is associated with the electricity market, its rules and constraints, and the second stage is related to the actual operation of the power system and its physical limitations in each scenario. The discrete retail-customer responses to incentive-based DR programs are aggregated by DRPs (Demand Response Providers) and are submitted as a load-change price-and-amount offer package to the ISO (Independent System Operator). Also, price-based DR program behavior and the random nature of wind power are modeled by the price-elasticity concept of demand and a normal probability distribution function, respectively. In the proposed model, DRPs can participate in the energy market as well as the reserve market and submit their offers to the wholesale electricity market. This approach is implemented on a modified IEEE 30-bus test system over a daily time horizon. The simulation results are analyzed in six different case studies. The cost, emission and multiobjective functions are optimized in cases both with and without DR. The multiobjective generation scheduling model is solved using the augmented epsilon-constraint method, and the best solution can be chosen by the Entropy and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods. The results indicate that demand-side participation in energy and reserve scheduling reduces total operation costs and emissions. - Highlights: • Simultaneous participation of loads in both energy and reserve scheduling. • Environmental/economical scheduling of energy and reserve. • Using demand response for covering wind generation forecast
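
    The two-stage structure described in the abstract can be boiled down to a one-variable toy: commit thermal energy before the wind outcome is known (first stage), then buy any shortfall at a penalty price once a wind scenario realizes (second-stage recourse). All numbers below are invented for illustration and are unrelated to the paper's IEEE 30-bus study:

```python
# Toy two-stage stochastic program: choose a thermal commitment x now,
# then pay a penalty for the expected shortfall after the wind is revealed.
# Prices, probabilities and demand are made-up illustration values.

DEMAND = 100.0          # MWh to serve
THERMAL_COST = 30.0     # $/MWh for the first-stage commitment
PENALTY = 80.0          # $/MWh for real-time shortfall purchases
SCENARIOS = [(0.3, 20.0), (0.4, 50.0), (0.3, 80.0)]  # (probability, wind MWh)

def expected_cost(x):
    """First-stage cost plus probability-weighted second-stage recourse."""
    recourse = sum(p * PENALTY * max(0.0, DEMAND - wind - x)
                   for p, wind in SCENARIOS)
    return THERMAL_COST * x + recourse

# The objective is piecewise linear and convex in x, so an optimum lies at
# x = 0 or at a scenario breakpoint x = DEMAND - wind; enumerate them.
candidates = [0.0] + [DEMAND - wind for _, wind in SCENARIOS]
best_x = min(candidates, key=expected_cost)  # -> 50.0, expected cost 2220.0
```

    Committing against the middle wind scenario beats both extremes here: full commitment wastes thermal energy when the wind is strong, while no commitment pays the penalty too often. A real market model adds reserve offers, network constraints and many more scenarios.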

  4. Novel Combinatorial Chemistry-Derived Inhibitors of Oncogenic Phosphatases

    National Research Council Canada - National Science Library

    Lazo, John

    1999-01-01

    Our overall goal of this US Army Breast Cancer Grant entitled "Novel Combinatorial Chemistry-Derived Inhibitors of Oncogenic Phosphatases" is to identity and develop novel therapeutic agents for human breast cancer...

  5. Particle Swarm Optimization applied to combinatorial problem aiming the fuel recharge problem solution in a nuclear reactor

    International Nuclear Information System (INIS)

    Meneses, Anderson Alvarenga de Moura; Schirru, Roberto

    2005-01-01

    This work focuses on the use of the Artificial Intelligence technique Particle Swarm Optimization (PSO) to optimize the fuel recharge of a nuclear reactor. This is a combinatorial problem, in which the search for the best feasible solution is done by minimizing a specific objective function. However, at this first stage it is possible to compare the fuel recharge problem with the Traveling Salesman Problem (TSP), since both of them are combinatorial, with one advantage: the evaluation of the TSP objective function is much simpler. Thus, the proposed methods have been applied to two TSPs: Oliver 30 and Rykel 48. In 1995, KENNEDY and EBERHART presented the PSO technique to optimize non-linear continuous functions. Recently, some PSO models for discrete search spaces have been developed for combinatorial optimization, although all of them have formulations different from the ones presented here. In this paper, we use the PSO theory associated with the Random Keys (RK) model, used in some optimizations with Genetic Algorithms. Particle Swarm Optimization with Random Keys (PSORK) results from this association, which combines PSO and RK. The adaptations and changes to the PSO aim to allow its use in the nuclear fuel recharge problem. This work shows PSORK being applied to the proposed combinatorial problem and the results obtained. (author)
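
    The Random Keys association mentioned in the abstract can be sketched as follows: each particle carries a vector of continuous keys, and sorting the keys decodes a city permutation, so the standard continuous PSO velocity update drives a combinatorial search. The 5-city distance matrix and parameter values below are illustrative, not those of Oliver 30 or Rykel 48:

```python
# Minimal PSORK-style sketch on a tiny symmetric TSP: continuous particles
# (random keys) are decoded into tours by sorting. Illustrative parameters.
import random

DIST = [  # symmetric 5-city distance matrix (made up)
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
N = len(DIST)

def decode(keys):
    """Random-keys decoding: visit cities in order of their key values."""
    return sorted(range(N), key=lambda i: keys[i])

def tour_length(tour):
    return sum(DIST[tour[i]][tour[(i + 1) % N]] for i in range(N))

def psork(n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    pos = [[rng.random() for _ in range(N)] for _ in range(n_particles)]
    vel = [[0.0] * N for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_len = [tour_length(decode(p)) for p in pos]
    g = min(range(n_particles), key=lambda k: pbest_len[k])
    gbest, gbest_len = pbest[g][:], pbest_len[g]
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(N):
                r1, r2 = rng.random(), rng.random()
                vel[k][d] = (w * vel[k][d]
                             + c1 * r1 * (pbest[k][d] - pos[k][d])
                             + c2 * r2 * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            length = tour_length(decode(pos[k]))
            if length < pbest_len[k]:
                pbest[k], pbest_len[k] = pos[k][:], length
                if length < gbest_len:
                    gbest, gbest_len = pos[k][:], length
    return decode(gbest), gbest_len

tour, length = psork()
```

    Because decoding always yields a valid permutation, no repair step is needed; this is exactly what makes random keys attractive for carrying a continuous optimizer over to problems like fuel reload.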

  6. A two-channel detection method for autofluorescence correction and efficient on-bead screening of one-bead one-compound combinatorial libraries using the COPAS fluorescence activated bead sorting system

    International Nuclear Information System (INIS)

    Hintersteiner, Martin; Auer, Manfred

    2013-01-01

    One-bead one-compound combinatorial library beads exhibit varying levels of autofluorescence after solid phase combinatorial synthesis. Very often this causes significant problems for automated on-bead screening using TentaGel beads and fluorescently labeled target proteins. Herein, we present a method to overcome this limitation when fluorescence activated bead sorting is used as the screening method. We have equipped the COPAS bead sorting instrument with a high-speed profiling unit and developed a spectral autofluorescence correction method. The correction method is based on a simple algebraic operation using the fluorescence data from two detection channels and is applied on-the-fly in order to reliably identify hit beads by COPAS bead sorting. Our method provides a practical tool for the fast and efficient isolation of hit beads from one-bead one-compound library screens using either fluorescently labeled target proteins or biotinylated target proteins. This method makes hit bead identification easier and more reliable. It reduces false positives and eliminates the need for time-consuming pre-sorting of library beads in order to remove autofluorescent beads. (technical note)

  7. Combinatorial Solid-Phase Synthesis of Balanol Analogues

    DEFF Research Database (Denmark)

    Nielsen, John; Lyngsø, Lars Ole

    1996-01-01

    The natural product balanol has served as a template for the design and synthesis of a combinatorial library using solid-phase chemistry. Using a retrosynthetic analysis, the structural analogues have been assembled from three relatively accessible building blocks. The solid-phase chemistry inclu...

  8. Infinitary Combinatory Reduction Systems: Normalising Reduction Strategies

    NARCIS (Netherlands)

    Ketema, J.; Simonsen, Jakob Grue

    2010-01-01

    We study normalising reduction strategies for infinitary Combinatory Reduction Systems (iCRSs). We prove that all fair, outermost-fair, and needed-fair strategies are normalising for orthogonal, fully-extended iCRSs. These facts properly generalise a number of results on normalising strategies in

  9. A Stochastic model for two-station hydraulics exhibiting transient impact

    DEFF Research Database (Denmark)

    Jacobsen, Judith L.; Madsen, Henrik; Harremoës, Poul

    1997-01-01

    The objective of the paper is to interpret data on water level variation in a river affected by overflow from a sewer system during rain. The simplest possible hydraulic description is combined with stochastic methods for data analysis and model parameter estimation. This combination...

  10. Mitigation of Control Channel Jamming via Combinatorial Key Distribution

    Science.gov (United States)

    Falahati, Abolfazl; Azarafrooz, Mahdi

    The problem of countering control channel jamming against internal adversaries in wireless ad hoc networks is addressed. Using combinatorial key distribution, a new method to secure control channel access is introduced. This method utilizes the keys established in the key-establishment phase to hide the location of control channels without the need for a secure base station (BS). This is obtained by the combination of a collision-free one-way function and a combinatorial key-establishment method. The proposed scheme can be considered a special case of the ALOHA random access schemes, one which uses the common established keys as seeds to generate the pattern of transmission.
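
    The channel-hiding step can be sketched with a keyed hash standing in for the collision-free one-way function: nodes that share a key compute the same hopping pattern, while a node without it cannot predict which channel carries control traffic in a given slot. The key material and slot scheme below are placeholders, and the paper's combinatorial key-distribution layer (which limits the damage from compromised nodes) is not modeled here:

```python
# Conceptual sketch of key-based control-channel hiding: derive the control
# channel for each time slot from a one-way function of (key, slot).
# HMAC-SHA256 stands in for the one-way function; key/slot values are
# placeholders, not part of the paper's scheme.
import hashlib
import hmac

NUM_CHANNELS = 16

def control_channel(key: bytes, slot: int) -> int:
    """Map (shared key, slot index) to a channel index via HMAC-SHA256."""
    digest = hmac.new(key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CHANNELS

key = b"group-key-established-earlier"  # placeholder key material
pattern = [control_channel(key, slot) for slot in range(5)]
```

    Any node holding the key reproduces `pattern` exactly, so legitimate nodes rendezvous on the control channel each slot without ever announcing it.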

  11. Gems of combinatorial optimization and graph algorithms

    CERN Document Server

    Skutella, Martin; Stiller, Sebastian; Wagner, Dorothea

    2015-01-01

    Are you looking for new lectures for your course on algorithms, combinatorial optimization, or algorithmic game theory?  Maybe you need a convenient source of relevant, current topics for a graduate student or advanced undergraduate student seminar?  Or perhaps you just want an enjoyable look at some beautiful mathematical and algorithmic results, ideas, proofs, concepts, and techniques in discrete mathematics and theoretical computer science?   Gems of Combinatorial Optimization and Graph Algorithms is a handpicked collection of up-to-date articles, carefully prepared by a select group of international experts, who have contributed some of their most mathematically or algorithmically elegant ideas.  Topics include longest tours and Steiner trees in geometric spaces, cartograms, resource buying games, congestion games, selfish routing, revenue equivalence and shortest paths, scheduling, linear structures in graphs, contraction hierarchies, budgeted matching problems, and motifs in networks.   This ...

  12. A stochastic-programming approach to integrated asset and liability ...

    African Journals Online (AJOL)

    This increase in complexity has provided an impetus for the investigation into integrated asset- and liability-management frameworks that could realistically address dynamic portfolio allocation in a risk-controlled way. In this paper the authors propose a multi-stage dynamic stochastic-programming model for the integrated ...

  13. Cryptographic Combinatorial Securities Exchanges

    Science.gov (United States)

    Thorpe, Christopher; Parkes, David C.

    We present a useful new mechanism that facilitates the atomic exchange of many large baskets of securities in a combinatorial exchange. Cryptography prevents information about the securities in the baskets from being exploited, enhancing trust. Our exchange offers institutions who wish to trade large positions a new alternative to existing methods of block trading: they can reduce transaction costs by taking advantage of other institutions’ available liquidity, while third party liquidity providers guarantee execution—preserving their desired portfolio composition at all times. In our exchange, institutions submit encrypted orders which are crossed, leaving a “remainder”. The exchange proves facts about the portfolio risk of this remainder to third party liquidity providers without revealing the securities in the remainder, the knowledge of which could also be exploited. The third parties learn either (depending on the setting) the portfolio risk parameters of the remainder itself, or how their own portfolio risk would change if they were to incorporate the remainder into a portfolio they submit. In one setting, these third parties submit bids on the commission, and the winner supplies necessary liquidity for the entire exchange to clear. This guaranteed clearing, coupled with external price discovery from the primary markets for the securities, sidesteps difficult combinatorial optimization problems. This latter method of proving how taking on the remainder would change risk parameters of one’s own portfolio, without revealing the remainder’s contents or its own risk parameters, is a useful protocol of independent interest.

  14. NLP model and stochastic multi-start optimization approach for heat exchanger networks

    International Nuclear Information System (INIS)

    Núñez-Serna, Rosa I.; Zamora, Juan M.

    2016-01-01

    Highlights: • An NLP model for the optimal design of heat exchanger networks is proposed. • The NLP model is developed from a stage-wise grid diagram representation. • A two-phase stochastic multi-start optimization methodology is utilized. • Improved network designs are obtained with different heat load distributions. • Structural changes and reductions in the number of heat exchangers are produced. - Abstract: Heat exchanger network synthesis methodologies frequently identify good network structures, which nevertheless, might be accompanied by suboptimal values of design variables. The objective of this work is to develop a nonlinear programming (NLP) model and an optimization approach that aim at identifying the best values for intermediate temperatures, sub-stream flow rate fractions, heat loads and areas for a given heat exchanger network topology. The NLP model that minimizes the total annual cost of the network is constructed based on a stage-wise grid diagram representation. To improve the possibilities of obtaining global optimal designs, a two-phase stochastic multi-start optimization algorithm is utilized for the solution of the developed model. The effectiveness of the proposed optimization approach is illustrated with the optimization of two network designs proposed in the literature for two well-known benchmark problems. Results show that from the addressed base network topologies it is possible to achieve improved network designs, with redistributions in exchanger heat loads that lead to reductions in total annual costs. The results also show that the optimization of a given network design sometimes leads to structural simplifications and reductions in the total number of heat exchangers of the network, thereby exposing alternative viable network topologies initially not anticipated.
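
    The two-phase stochastic multi-start idea highlighted above can be sketched on a toy objective: phase one samples many random starting points and keeps the most promising few, and phase two refines each survivor with a local search. The objective function, sample counts and pattern-search rule below are invented for illustration and are far simpler than the paper's NLP model of a heat exchanger network:

```python
# Two-phase stochastic multi-start sketch on a tilted double-well function.
# Illustrative only: the paper applies this idea to an NLP model of heat
# exchanger networks, not to this toy objective.
import random

def objective(x):
    # Tilted double well: a local minimum near x = +2, the global one near x = -2.
    return (x * x - 4.0) ** 2 + x

def local_search(f, x, step=0.5, tol=1e-6):
    """Simple pattern search: try both directions, halve the step on failure."""
    fx = f(x)
    while step > tol:
        improved = False
        for cand in (x - step, x + step):
            if f(cand) < fx:
                x, fx, improved = cand, f(cand), True
        if not improved:
            step *= 0.5
    return x, fx

def multi_start(f, n_samples=30, n_keep=3, lo=-5.0, hi=5.0, seed=7):
    rng = random.Random(seed)
    # Phase one: cheap global sampling; keep the most promising starts.
    starts = sorted((rng.uniform(lo, hi) for _ in range(n_samples)), key=f)[:n_keep]
    # Phase two: local refinement of each survivor; return the best result.
    return min((local_search(f, x) for x in starts), key=lambda r: r[1])

best_x, best_val = multi_start(objective)
```

    Refining several good starts rather than only the single best one is what gives multi-start its chance of escaping the shallower basin; the same trade-off between sampling breadth and refinement cost appears in the network-design setting.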

  15. Transport in stochastic multi-dimensional media

    International Nuclear Information System (INIS)

    Haran, O.; Shvarts, D.

    1996-01-01

    Many physical phenomena evolve according to known deterministic rules, but in stochastic media whose composition changes in space and time. Examples of such phenomena are heat transfer in a turbulent atmosphere with non-uniform diffraction coefficients, neutron transfer in the boiling coolant of a nuclear reactor, and radiation transfer through concrete shields. The results of measurements conducted on such media are stochastic by nature and depend on the specific realization of the media. In the last decade there has been a considerable effort to describe linear particle transport in one-dimensional stochastic media composed of several immiscible materials. However, transport in two- or three-dimensional stochastic media has rarely been addressed. The important effect in multi-dimensional transport that does not appear in one dimension is the ability to bypass obstacles. The current work is an attempt to quantify this effect. (authors)

  16. Transport in stochastic multi-dimensional media

    Energy Technology Data Exchange (ETDEWEB)

    Haran, O; Shvarts, D [Israel Atomic Energy Commission, Beersheba (Israel). Nuclear Research Center-Negev; Thiberger, R [Ben-Gurion Univ. of the Negev, Beersheba (Israel)

    1996-12-01

    Many physical phenomena evolve according to known deterministic rules, but in stochastic media whose composition changes in space and time. Examples of such phenomena are heat transfer in a turbulent atmosphere with non-uniform diffraction coefficients, neutron transfer in the boiling coolant of a nuclear reactor, and radiation transfer through concrete shields. The results of measurements conducted on such media are stochastic by nature and depend on the specific realization of the media. In the last decade there has been a considerable effort to describe linear particle transport in one-dimensional stochastic media composed of several immiscible materials. However, transport in two- or three-dimensional stochastic media has rarely been addressed. The important effect in multi-dimensional transport that does not appear in one dimension is the ability to bypass obstacles. The current work is an attempt to quantify this effect. (authors)

  17. Effects of Suboptimal Bidding in Combinatorial Auctions

    Science.gov (United States)

    Schneider, Stefan; Shabalin, Pasha; Bichler, Martin

    Though the VCG auction assumes a central place in the mechanism design literature, there are a number of reasons for favoring iterative combinatorial auction designs. Several promising ascending auction formats have been developed over the past few years based on primal-dual and subgradient algorithms and linear programming theory. Prices are interpreted as a feasible dual solution and the provisional allocation is interpreted as a feasible primal solution. iBundle(3) (Parkes and Ungar 2000), dVSV (de Vries et al. 2007) and the Ascending Proxy auction (Ausubel and Milgrom 2002) result in VCG payoffs when the coalitional value function satisfies the buyer submodularity condition and bidders bid straightforwardly, which is an ex post Nash equilibrium in that case. iBEA and CreditDebit auctions (Mishra and Parkes 2007) do not even require the buyer submodularity condition and achieve the same properties for general valuations. In many situations, however, one cannot assume bidders to bid straightforwardly, and it is not clear from the theory how these non-linear personalized price auctions (NLPPAs) perform in this case. Robustness of auctions with respect to different bidding behavior is therefore a critical issue for any application. We have conducted a large number of computational experiments to analyze the performance of NLPPA designs with respect to different bidding strategies and different valuation models. We compare the results of NLPPAs to those of the VCG auction and those of iterative combinatorial auctions with approximate linear prices, such as ALPS (Bichler et al. 2009) and the Combinatorial Clock auction (Porter et al. 2003).
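For readers unfamiliar with the VCG baseline the abstract compares against, a minimal brute-force VCG computation for a hypothetical two-item auction with single-minded bidders looks as follows (the bidders and values are illustrative, not taken from the paper):

```python
from itertools import product

items = ["A", "B"]
# Hypothetical single-minded bidders: each values exactly one bundle.
bids = {
    1: ({"A", "B"}, 10),   # bidder 1 wants the package A+B
    2: ({"A"}, 6),
    3: ({"B"}, 5),
}

def value(bidder, bundle):
    wanted, amount = bids[bidder]
    return amount if wanted <= bundle else 0

def best_allocation(bidders):
    """Brute force: assign each item to some bidder (or to nobody)."""
    best, best_welfare = None, -1
    for assign in product([None, *bidders], repeat=len(items)):
        bundles = {b: {it for it, a in zip(items, assign) if a == b}
                   for b in bidders}
        welfare = sum(value(b, bundles[b]) for b in bidders)
        if welfare > best_welfare:
            best, best_welfare = bundles, welfare
    return best, best_welfare

def vcg():
    all_bidders = list(bids)
    alloc, _ = best_allocation(all_bidders)
    payments = {}
    for b in all_bidders:
        others = [o for o in all_bidders if o != b]
        _, w_without = best_allocation(others)       # welfare if b is absent
        w_with = sum(value(o, alloc[o]) for o in others)
        payments[b] = w_without - w_with             # b's externality on others
    return alloc, payments

alloc, pay = vcg()
```

Here the efficient allocation gives A to bidder 2 and B to bidder 3 (welfare 11 beats the package bid of 10), and each winner pays the externality it imposes on the others.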

  18. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    International Nuclear Information System (INIS)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.; Park, T. K.; Deng, P.; Yang, G.; Jung, Y. S.; Kim, T. K.; Stauff, N. E.

    2016-01-01

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements.

  19. Two-stage electrolysis to enrich tritium in environmental water

    International Nuclear Information System (INIS)

    Shima, Nagayoshi; Muranaka, Takeshi

    2007-01-01

    We present a two-stage electrolysis procedure to enrich tritium in environmental waters. Tritium is first enriched rapidly in a commercially available electrolyser at a large 50 A current, and then in a newly designed electrolyser, which avoids the memory effect, at a 6 A current. The tritium recovery factor obtained by such two-stage electrolysis was greater than that obtained using the commercially available device alone. Water samples collected in 2006 in lakes and along the Pacific coast of Aomori prefecture, Japan, were electrolyzed using the two-stage method. Tritium concentrations in these samples ranged from 0.2 to 0.9 Bq/L and were half or less of those in samples collected at the same sites in 1992. (author)

  20. Combinatorial biosynthesis of medicinal plant secondary metabolites

    NARCIS (Netherlands)

    Julsing, Mattijs K.; Koulman, Albert; Woerdenbag, Herman J.; Quax, Wim J.; Kayser, Oliver

    2006-01-01

    Combinatorial biosynthesis is a new tool in the generation of novel natural products and for the production of rare and expensive natural products. The basic concept is combining metabolic pathways in different organisms on a genetic level. As a consequence heterologous organisms provide precursors

  1. Two-stage nonrecursive filter/decimator

    International Nuclear Information System (INIS)

    Yoder, J.R.; Richard, B.D.

    1980-08-01

    A two-stage digital filter/decimator has been designed and implemented to reduce the sampling rate associated with the long-term computer storage of certain digital waveforms. This report describes the design selection and implementation process and serves as documentation for the system actually installed. A filter design with finite impulse response (nonrecursive) was chosen for implementation via direct convolution. A newly developed system-test statistic validates the system under different computer operating environments.
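A two-stage nonrecursive (FIR) filter/decimator of the kind described can be sketched as follows. The 4-tap moving-average filter is a placeholder for the report's actual anti-alias filter design, which is not given in the abstract:

```python
def fir_filter(x, taps):
    """Nonrecursive (FIR) filter via direct convolution, zero-padded at edges."""
    n = len(taps)
    return [sum(taps[k] * x[i - k] for k in range(n) if 0 <= i - k < len(x))
            for i in range(len(x))]

def decimate(x, factor, taps):
    """One stage: anti-alias FIR filter, then keep every `factor`-th sample."""
    return fir_filter(x, taps)[::factor]

# Hypothetical 4-tap moving average; a real design would use a windowed-sinc
# or equiripple low-pass sized for each stage's transition band.
avg4 = [0.25] * 4

x = [float(i % 8) for i in range(64)]   # example input waveform
stage1 = decimate(x, 2, avg4)           # two stages of 2 give an overall
stage2 = decimate(stage1, 2, avg4)      # rate reduction of 4
```

Splitting the reduction into two stages keeps each filter short: each stage only has to reject the band that would alias at its own, milder downsampling factor.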

  2. Ambit processes and stochastic partial differential equations

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Benth, Fred Espen; Veraart, Almut

    Ambit processes are general stochastic processes based on stochastic integrals with respect to Lévy bases. Due to their flexible structure, they have great potential for providing realistic models for various applications such as turbulence and finance. This paper studies the connection between ambit processes and solutions to stochastic partial differential equations. We investigate this relationship from two angles: from the Walsh theory of martingale measures and from the viewpoint of Lévy noise analysis.

  3. Yield curve event tree construction for multi stage stochastic programming models

    DEFF Research Database (Denmark)

    Rasmussen, Kourosh Marjani; Poulsen, Rolf

    Dynamic stochastic programming (DSP) provides an intuitive framework for modelling of financial portfolio choice problems where market frictions are present and dynamic re--balancing has a significant effect on initial decisions. The application of these models in practice, however, is limited....... Indeed defining a universal and tractable framework for fully ``appropriate'' event trees is in our opinion an impossible task. A problem specific approach to designing such event trees is the way ahead. In this paper we propose a number of desirable properties which should be present in an event tree...

  4. Combinatorial aspects of covering arrays

    Directory of Open Access Journals (Sweden)

    Charles J. Colbourn

    2004-11-01

    Full Text Available Covering arrays generalize orthogonal arrays by requiring that t-tuples be covered, but not requiring that the appearance of t-tuples be balanced. Their use in screening experiments has found application in software testing, hardware testing, and a variety of fields in which interactions among factors are to be identified. Here a combinatorial view of covering arrays is adopted, encompassing basic bounds, direct constructions, recursive constructions, algorithmic methods, and applications.
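The defining property (every t columns jointly exhibit all symbol combinations at least once, without the balance requirement of an orthogonal array) is easy to check by brute force. The sketch below verifies a known 5-row, strength-2 binary covering array on 4 columns:

```python
from itertools import combinations

def is_covering_array(rows, t, levels):
    """Strength-t check: every t-subset of columns must exhibit every
    possible t-tuple of the `levels` symbols at least once."""
    k = len(rows[0])
    for cols in combinations(range(k), t):
        seen = {tuple(r[c] for c in cols) for r in rows}
        if len(seen) < levels ** t:
            return False
    return True

# A CA(5; 2, 4, 2): 5 rows, strength 2, 4 binary factors. Five rows is the
# minimum for this case, whereas an orthogonal array would need a multiple of 4.
ca = [
    (0, 0, 0, 0),
    (0, 1, 1, 1),
    (1, 0, 1, 1),
    (1, 1, 0, 1),
    (1, 1, 1, 0),
]
```

Dropping any row breaks coverage, which is what makes this array minimal.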

  5. Stochastic models: theory and simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2008-03-01

    Many problems in applied science and engineering involve physical phenomena that behave randomly in time and/or space. Examples are diverse and include turbulent flow over an aircraft wing, Earth climatology, material microstructure, and the financial markets. Mathematical models for these random phenomena are referred to as stochastic processes and/or random fields, and Monte Carlo simulation is the only general-purpose tool for solving problems of this type. The use of Monte Carlo simulation requires methods and algorithms to generate samples of the appropriate stochastic model; these samples then become inputs and/or boundary conditions to established deterministic simulation codes. While numerous algorithms and tools currently exist to generate samples of simple random variables and vectors, no cohesive simulation tool yet exists for generating samples of stochastic processes and/or random fields. There are two objectives of this report. First, we provide some theoretical background on stochastic processes and random fields that can be used to model phenomena that are random in space and/or time. Second, we provide simple algorithms that can be used to generate independent samples of general stochastic models. The theory and simulation of random variables and vectors is also reviewed for completeness.
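As a concrete instance of generating independent samples of a simple stochastic model, the sketch below draws Ornstein-Uhlenbeck paths with an Euler-Maruyama scheme and checks the empirical variance against the stationary value sigma^2/(2*theta). The process and parameters are illustrative, not taken from the report:

```python
import random

def ou_paths(n_paths, n_steps, dt=0.01, theta=1.0, sigma=1.0, seed=42):
    """Independent samples of dX = -theta*X dt + sigma dW (Euler-Maruyama)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        x, path = 0.0, []
        for _ in range(n_steps):
            x += -theta * x * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
            path.append(x)
        paths.append(path)
    return paths

paths = ou_paths(n_paths=500, n_steps=400)   # t = 4, essentially stationary
finals = [p[-1] for p in paths]
mean = sum(finals) / len(finals)
var = sum((v - mean) ** 2 for v in finals) / len(finals)
# Stationary variance of this OU process is sigma^2 / (2*theta) = 0.5.
```

Samples generated this way can then be fed as inputs or boundary conditions to a deterministic simulation code, which is the usage pattern the report describes.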

  6. Combinatorial Libraries on Rigid Scaffolds: Solid Phase Synthesis of Variably Substituted Pyrazoles and Isoxazoles

    Directory of Open Access Journals (Sweden)

    Eduard R. Felder

    1997-01-01

    Full Text Available The synthesis of combinatorial compound libraries has become a powerful lead-finding tool in modern drug discovery. The ability to synthesize rapidly, in high yield, new chemical entities with low molecular weight on a solid support has a recognized strategic relevance (“small molecule libraries”). We designed and validated a novel solid phase synthesis scheme, suitable to generate diversity on small heterocycles of the pyrazole and isoxazole type. Appropriate conditions were worked out for each reaction, and a variety of more or less reactive agents (building blocks) was utilized for discrete conversions, in order to exploit the system's breadth of applicability. Four sequential reaction steps were validated, including the loading of the support with an acetyl-bearing moiety, a Claisen condensation, an α-alkylation and a cyclization of a β-diketone with monosubstituted hydrazines. In a second stage, the reaction sequence was applied in a split-and-mix approach, in order to prepare a combinatorial library built up from 4 acetyl carboxylic acids (R1), 35 carboxylic esters (R2) and 41 hydrazines (R4) (and 1 hydroxylamine) to yield a total of 11,760 compounds divided into 41 pyrazole sublibraries with 140 pairs of regioisomers and 1 isoxazole sublibrary of equal size.
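Taking the abstract's counts at face value (and assuming each of the 4 x 35 acid/ester combinations yields one pair of regioisomers), the stated library size checks out:

```python
# Split-and-mix library arithmetic from the abstract (no chemistry here,
# just the combinatorial count).
acids, esters = 4, 35
hydrazines, hydroxylamines = 41, 1

pairs_per_sublibrary = acids * esters                # 140 regioisomer pairs
compounds_per_sublibrary = 2 * pairs_per_sublibrary  # both regioisomers counted
sublibraries = hydrazines + hydroxylamines           # 41 pyrazole + 1 isoxazole
total = sublibraries * compounds_per_sublibrary      # 11,760 compounds
```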

  7. KENO-IV/CG, the combinatorial geometry version of the KENO Monte Carlo criticality safety program

    International Nuclear Information System (INIS)

    West, J.T.; Petrie, L.M.; Fraley, S.K.

    1979-09-01

    KENO-IV/CG was developed to merge the simple geometry input description utilized by combinatorial geometry with the repeating lattice feature of the original KENO geometry package. The result is a criticality code with the ability to model a complex system of repeating rectangular lattices inside a complicated three-dimensional geometry system. Furthermore, combinatorial geometry was modified to differentiate between combinatorial zones describing a particular KENO BOX to be repeated in a KENO array and those combinatorial zones describing geometry external to an array. This allows the user to maintain a simple coordinate system without any geometric conflict due to spatial overlap. Several difficult criticality design problems have been solved with the new geometry package in KENO-IV/CG, thus illustrating the power of the code to model difficult geometries with a minimum of effort

  8. Multistage Stochastic Programming and its Applications in Energy Systems Modeling and Optimization

    Science.gov (United States)

    Golari, Mehdi

    Electric energy constitutes one of the most crucial elements to almost every aspect of life of people. The modern electric power systems face several challenges such as efficiency, economics, sustainability, and reliability. Increase in electrical energy demand, distributed generations, integration of uncertain renewable energy resources, and demand side management are among the main underlying reasons of such growing complexity. Additionally, the elements of power systems are often vulnerable to failures because of many reasons, such as system limits, weak conditions, unexpected events, hidden failures, human errors, terrorist attacks, and natural disasters. One common factor complicating the operation of electrical power systems is the underlying uncertainties from the demands, supplies and failures of system components. Stochastic programming provides a mathematical framework for decision making under uncertainty. It enables a decision maker to incorporate some knowledge of the intrinsic uncertainty into the decision making process. In this dissertation, we focus on application of two-stage and multistage stochastic programming approaches to electric energy systems modeling and optimization. Particularly, we develop models and algorithms addressing the sustainability and reliability issues in power systems. First, we consider how to improve the reliability of power systems under severe failures or contingencies prone to cascading blackouts by so-called islanding operations. We present a two-stage stochastic mixed-integer model to find optimal islanding operations as a powerful preventive action against cascading failures in case of extreme contingencies. Further, we study the properties of this problem and propose efficient solution methods to solve this problem for large-scale power systems. We present the numerical results showing the effectiveness of the model and investigate the performance of the solution methods. Next, we address the sustainability issue
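The here-and-now / recourse structure of a two-stage stochastic program can be illustrated with a tiny capacity-planning example solved by scenario enumeration; all numbers are hypothetical and the real models in the dissertation are mixed-integer programs solved with specialized methods:

```python
# Hypothetical two-stage problem: choose capacity x now (first stage, cost 1.0
# per unit); after demand d is revealed, buy any shortfall at a recourse
# price of 3.0 per unit (second stage).
scenarios = [(0.3, 50.0), (0.5, 80.0), (0.2, 120.0)]  # (probability, demand)

def expected_cost(x, build_cost=1.0, recourse_cost=3.0):
    recourse = sum(p * recourse_cost * max(d - x, 0.0) for p, d in scenarios)
    return build_cost * x + recourse

# Here-and-now decision: minimize expected total cost over a candidate grid.
x_star = min(range(0, 201), key=expected_cost)
```

The optimum sits at the 80-unit demand scenario: below 80 the expected marginal recourse saving (3.0 times the probability of a shortfall, at least 0.7) exceeds the 1.0 build cost, while above 80 that probability drops to 0.2 and extra capacity no longer pays for itself.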

  9. Fundamentals of stochastic nature sciences

    CERN Document Server

    Klyatskin, Valery I

    2017-01-01

    This book addresses the processes of stochastic structure formation in two-dimensional geophysical fluid dynamics based on statistical analysis of Gaussian random fields, as well as stochastic structure formation in dynamic systems with parametric excitation of positive random fields f(r,t) described by partial differential equations. Further, the book considers two examples of stochastic structure formation in dynamic systems with parametric excitation in the presence of Gaussian pumping. In dynamic systems with parametric excitation in space and time, this type of structure formation either happens – or doesn’t! However, if it occurs in space, then this almost always happens (exponentially quickly) in individual realizations with a unit probability. In the case considered, clustering of the field f(r,t) of any nature is a general feature of dynamic fields, and one may claim that structure formation is the Law of Nature for arbitrary random fields of such type. The study clarifies the conditions under wh...

  10. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    Science.gov (United States)

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD in two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors have been operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) was reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of two-stage digestion.

  11. Phase Transitions in Combinatorial Optimization Problems Basics, Algorithms and Statistical Mechanics

    CERN Document Server

    Hartmann, Alexander K

    2005-01-01

    A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and spread by word-of-mouth, providing all the necessary

  12. Exploiting Quantum Resonance to Solve Combinatorial Problems

    Science.gov (United States)

    Zak, Michail; Fijany, Amir

    2006-01-01

    Quantum resonance would be exploited in a proposed quantum-computing approach to the solution of combinatorial optimization problems. In quantum computing in general, one takes advantage of the fact that an algorithm cannot be decoupled from the physical effects available to implement it. Prior approaches to quantum computing have involved exploitation of only a subset of known quantum physical effects, notably including parallelism and entanglement, but not including resonance. In the proposed approach, one would utilize the combinatorial properties of tensor-product decomposability of unitary evolution of many-particle quantum systems for physically simulating solutions to NP-complete problems (a class of problems that are intractable with respect to classical methods of computation). In this approach, reinforcement and selection of a desired solution would be executed by means of quantum resonance. Classes of NP-complete problems that are important in practice and could be solved by the proposed approach include planning, scheduling, search, and optimal design.

  13. One-and two-dimensional topological charge distributions in stochastic optical fields

    CSIR Research Space (South Africa)

    Roux, FS

    2011-06-01

    Full Text Available The presentation on topological charge distributions in stochastic optical fields concludes that by using a combination of speckle fields one can produce inhomogeneous vortex distributions that allow both analytical calculations and numerical...

  14. Rapid Two-stage Versus One-stage Surgical Repair of Interrupted Aortic Arch with Ventricular Septal Defect in Neonates

    Directory of Open Access Journals (Sweden)

    Meng-Lin Lee

    2008-11-01

    Conclusion: The outcome of rapid two-stage repair is comparable to that of one-stage repair. Rapid two-stage repair has the advantages of significantly shorter cardiopulmonary bypass duration and AXC time, and avoids deep hypothermic circulatory arrest. LVOTO remains an unresolved issue, and postoperative aortic arch restenosis can be dilated effectively by percutaneous balloon angioplasty.

  15. Teaching basic life support with an automated external defibrillator using the two-stage or the four-stage teaching technique.

    Science.gov (United States)

    Bjørnshave, Katrine; Krogh, Lise Q; Hansen, Svend B; Nebsbjerg, Mette A; Thim, Troels; Løfgren, Bo

    2018-02-01

    Laypersons often hesitate to perform basic life support (BLS) and use an automated external defibrillator (AED) because of self-perceived lack of knowledge and skills. Training may reduce the barrier to intervene. Reduced training time and costs may allow training of more laypersons. The aim of this study was to compare BLS/AED skill acquisition and self-evaluated BLS/AED skills after instructor-led training with a two-stage versus a four-stage teaching technique. Laypersons were randomized to either two-stage or four-stage teaching technique courses. Immediately after training, the participants were tested in a simulated cardiac arrest scenario to assess their BLS/AED skills. Skills were assessed using the European Resuscitation Council BLS/AED assessment form. The primary endpoint was passing the test (17 of 17 skills adequately performed). A prespecified noninferiority margin of 20% was used. The two-stage teaching technique (n=72, pass rate 57%) was noninferior to the four-stage technique (n=70, pass rate 59%), with a difference in pass rates of -2%; 95% confidence interval: -18 to 15%. Nor were there significant differences between the two-stage and four-stage groups in the chest compression rate (114±12 vs. 115±14/min), chest compression depth (47±9 vs. 48±9 mm) or number of sufficient rescue breaths between compression cycles (1.7±0.5 vs. 1.6±0.7). In both groups, all participants believed that their training had improved their skills. Teaching laypersons BLS/AED using the two-stage teaching technique was noninferior to the four-stage teaching technique, although the pass rate was -2% (95% confidence interval: -18 to 15%) lower with the two-stage teaching technique.
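The reported noninferiority comparison can be reproduced approximately with a standard Wald confidence interval for a difference of two proportions; the abstract does not state which interval method the study used, so this is only a plausible reconstruction of the arithmetic:

```python
import math

def pass_rate_diff_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for a difference of two independent proportions."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Two-stage group: 57% of 72 passed; four-stage group: 59% of 70 passed.
diff, lo, hi = pass_rate_diff_ci(0.57, 72, 0.59, 70)

# Noninferiority holds if the lower CI bound stays above the -20% margin.
noninferior = lo > -0.20
```

This reproduces the abstract's figures to within rounding: a -2% difference with a lower bound near -18%, comfortably inside the prespecified 20% margin.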

  16. Combinatorial chemistry on solid support in the search for central nervous system agents.

    Science.gov (United States)

    Zajdel, Paweł; Pawłowski, Maciej; Martinez, Jean; Subra, Gilles

    2009-08-01

    The advent of combinatorial chemistry was one of the most important developments that has significantly contributed to the drug discovery process. Within just a few years, its initial concept, aimed at the production of libraries containing huge numbers of compounds (thousands to millions), so-called screening libraries, has shifted towards the preparation of small and medium-sized rationally designed libraries. When applicable, the use of solid supports for the generation of libraries has been a real breakthrough in enhancing productivity. With a limited amount of resin and simple manual workups, the split/mix procedure may generate thousands of bead-tethered compounds. Beads can be chemically or physically encoded to facilitate the identification of a hit after the biological assay. Compartmentalization of solid supports using small reactors like teabags or kans, or pellicular discrete supports like Lanterns, resulted in powerful sort-and-combine technologies, relying on codes 'written' on the reactor, and thus reducing the need for automation and improving the number of compounds synthesized. These methods of solid-phase combinatorial chemistry have recently been supported by the introduction of solid-supported reagents and scavenger resins. The first part of this review discusses the general premises of combinatorial chemistry and some methods used in the design of primary and focused combinatorial libraries. The aim of the second part is to present combinatorial chemistry methodologies aimed at discovering bioactive compounds acting on diverse GPCRs involved in central nervous system disorders.

  17. Control of stochastic resonance in bistable systems by using periodic signals

    International Nuclear Information System (INIS)

    Min, Lin; Li-Min, Fang; Yong-Jun, Zheng

    2009-01-01

    According to the characteristic structure of double wells in bistable systems, this paper analyses stochastic fluctuations in the single potential well and probability transitions between the two potential wells and proposes a method of controlling stochastic resonance by using a periodic signal. Results of theoretical analysis and numerical simulation show that the phenomenon of stochastic resonance happens when the time scales of the periodic signal and the noise-induced probability transitions between the two potential wells achieve stochastic synchronization. By adding a bistable system with a controllable periodic signal, fluctuations in the single potential well can be effectively controlled, thus affecting the probability transitions between the two potential wells. In this way, an effective control can be achieved which allows one to either enhance or realize stochastic resonance
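The mechanism described (noise-induced transitions between the two wells, with a subthreshold periodic signal setting the time scale) can be illustrated with an Euler-Maruyama simulation of the standard overdamped bistable system; the model and parameters are illustrative, not taken from the paper:

```python
import math
import random

def simulate(noise, amp=0.2, omega=0.1, dt=0.01, steps=200_000, seed=7):
    """Euler-Maruyama simulation of dx = (x - x^3 + amp*cos(omega*t)) dt
    + noise dW, counting transitions between the wells at x = -1 and x = +1
    (hysteresis thresholds at +/-0.5 debounce the crossing count)."""
    rng = random.Random(seed)
    x, well, crossings = 1.0, 1, 0
    for i in range(steps):
        t = i * dt
        x += (x - x ** 3 + amp * math.cos(omega * t)) * dt
        x += noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if well == 1 and x < -0.5:
            well, crossings = -1, crossings + 1
        elif well == -1 and x > 0.5:
            well, crossings = 1, crossings + 1
    return crossings

# amp = 0.2 is subthreshold (bistability is only destroyed for amp > 2/(3*sqrt(3))
# ~ 0.385), so without noise the periodic signal alone cannot switch wells.
quiet = simulate(noise=0.0)
noisy = simulate(noise=0.5)
```

Stochastic resonance corresponds to tuning the noise so that the Kramers transition rate matches the forcing period; the controllable periodic signal of the paper modulates the in-well fluctuations and hence that rate.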

  18. Long-time correlations in the stochastic regime

    International Nuclear Information System (INIS)

    Karney, C.F.F.

    1982-11-01

    The phase space for Hamiltonians of two degrees of freedom is usually divided into stochastic and integrable components. Even when well into the stochastic regime, integrable orbits may surround small stable regions or islands. The effect of these islands on the correlation function for the stochastic trajectories is examined. Depending on the value of the parameter describing the rotation number for the elliptic fixed point at the center of the island, the long-time correlation function may decay as t^(-5) or exponentially, but more commonly it decays much more slowly (roughly as t^(-1)). As a consequence these small islands may have a profound effect on properties, such as the diffusion coefficient, of the stochastic orbits.

  19. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Won Sik [Purdue Univ., West Lafayette, IN (United States); Lin, C. S. [Purdue Univ., West Lafayette, IN (United States); Hader, J. S. [Purdue Univ., West Lafayette, IN (United States); Park, T. K. [Purdue Univ., West Lafayette, IN (United States); Deng, P. [Purdue Univ., West Lafayette, IN (United States); Yang, G. [Purdue Univ., West Lafayette, IN (United States); Jung, Y. S. [Purdue Univ., West Lafayette, IN (United States); Kim, T. K. [Argonne National Lab. (ANL), Argonne, IL (United States); Stauff, N. E. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements.

  20. Implementation of a combinatorial cleavage and deprotection scheme

    DEFF Research Database (Denmark)

    Nielsen, John; Rasmussen, Palle H.

    1996-01-01

    Phthalhydrazide libraries are synthesized in solution from substituted hydrazines and phthalimides in several different library formats including single compounds, indexed sub-libraries and a full library. When carried out during solid-phase synthesis, this combinatorial cleavage and deprotection...

  1. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    International Nuclear Information System (INIS)

    Zhai, Jianliang; Zhang, Tusheng

    2017-01-01

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  2. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  3. Evaluation of the Optimum Composition of Low-Temperature Fuel Cell Electrocatalysts for Methanol Oxidation by Combinatorial Screening.

    Science.gov (United States)

    Antolini, Ermete

    2017-02-13

    Combinatorial chemistry and high-throughput screening represent an innovative and rapid tool to prepare and evaluate a large number of new materials, saving time and expense for research and development. Considering that the activity and selectivity of catalysts depend on complex kinetic phenomena, making their development largely empirical in practice, they are prime candidates for combinatorial discovery and optimization. This review presents an overview of recent results of combinatorial screening of low-temperature fuel cell electrocatalysts for methanol oxidation. Optimum catalyst compositions obtained by combinatorial screening were compared with those of bulk catalysts, and the effect of the library geometry on the screening of catalyst composition is highlighted.

  4. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling.

    Science.gov (United States)

    Terza, Joseph V; Basu, Anirban; Rathouz, Paul J

    2008-05-01

    The paper focuses on two estimation methods that have been widely used to address endogeneity in empirical research in health economics and health services research: two-stage predictor substitution (2SPS) and two-stage residual inclusion (2SRI). 2SPS is the rote extension (to nonlinear models) of the popular linear two-stage least squares estimator. The 2SRI estimator is similar except that in the second-stage regression, the endogenous variables are not replaced by first-stage predictors. Instead, first-stage residuals are included as additional regressors. In a generic parametric framework, we show that 2SRI is consistent and 2SPS is not. Results from a simulation study and an illustrative example also recommend against 2SPS and favor 2SRI. Our findings are important given that there are many prominent examples of the application of inconsistent 2SPS in the recent literature. This study can be used as a guide by future researchers in health economics who are confronted with endogeneity in their empirical work.
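
    As a rough illustration of the mechanics described above (not the authors' own code), the sketch below simulates an endogenous regressor and recovers the causal coefficient by including first-stage residuals as a second-stage regressor; all numbers and the linear setup are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)          # endogenous regressor
y = 2.0 * x + 1.5 * u + rng.normal(size=n)    # true effect of x is 2.0

def ols(X, target):
    # Least-squares coefficients for design matrix X.
    return np.linalg.lstsq(X, target, rcond=None)[0]

# Naive OLS is biased because x is correlated with the confounder u.
b_naive = ols(np.column_stack([np.ones(n), x]), y)[1]

# Stage 1: regress the endogenous variable on the instrument, keep residuals.
Z = np.column_stack([np.ones(n), z])
resid = x - Z @ ols(Z, x)

# 2SRI stage 2: keep x itself and add the first-stage residuals as a regressor.
b_2sri = ols(np.column_stack([np.ones(n), x, resid]), y)[1]
```

    In this linear toy problem 2SRI reproduces the instrumental-variables answer; the paper's point is that in nonlinear second stages the residual-inclusion form remains consistent while predictor substitution does not.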

  5. WORKSHOP ON NEW DEVELOPMENTS IN CHEMICAL SEPARATIONS FROM COMBINATORIAL CHEMISTRY AND RELATED SYNTHETIC STRATEGIES

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Stephen G. [University of Pittsburgh, Pittsburgh, Pennsylvania

    1998-08-22

    The power of combinatorial chemistry and related high throughput synthetic strategies is currently being pursued as a fruitful way to develop molecules and materials with new properties. The strategy is motivated, for example in the pharmaceutical industry, by the difficulty of designing molecules to bind to specific sites on target biomolecules. By synthesizing a variety of similar structures, and then finding the one that has the most potent activity, new so-called lead structures will be found rapidly. Existing lead structures can be optimized. This relatively new approach has many implications for separation science. The most obvious is the call for more separations power: higher resolution, lower concentrations, higher speed. This pressure buttresses the traditional directions of research into the development of more useful separations. The advent of chip-based, electroosmotically pumped systems will certainly accelerate progress in this traditional direction. The progress in combinatorial chemistry and related synthetic strategies gives rise to two other, broadly significant possibilities for large changes in separation science. One possibility results from the unique requirements of the synthesis of a huge number of products simultaneously. Can syntheses and separations be designed to work together to create strategies that lead to mixtures containing only desired products, without side products? The other possibility results from the need for molecular selectivity in separations. Can combinatorial syntheses and related strategies be used in the development of better separations media? A workshop in two parts was held. In one half-day session, pedagogical presentations educated across the barriers of discipline and scale. In the second half-day session, the participants broke into small groups to flesh out new ideas. A panel summarized the breakout discussions.

  6. Stochastic runaway of dynamical systems

    International Nuclear Information System (INIS)

    Pfirsch, D.; Graeff, P.

    1984-10-01

    One-dimensional, stochastic, dynamical systems are well studied with respect to their stability properties. Less is known for the higher dimensional case. This paper derives sufficient and necessary criteria for the asymptotic divergence of the entropy (runaway) and sufficient ones for the moments of n-dimensional, stochastic, dynamical systems. The crucial implication is the incompressibility of their flow defined by the equations of motion in configuration space. Two possible extensions to compressible flow systems are outlined. (orig.)

  7. Optimal Liquidation under Stochastic Liquidity

    OpenAIRE

    Becherer, Dirk; Bilarev, Todor; Frentrup, Peter

    2016-01-01

    We solve explicitly a two-dimensional singular control problem of finite fuel type for infinite time horizon. The problem stems from the optimal liquidation of an asset position in a financial market with multiplicative and transient price impact. Liquidity is stochastic in that the volume effect process, which determines the inter-temporal resilience of the market in spirit of Predoiu, Shaikhet and Shreve (2011), is taken to be stochastic, being driven by own random noise. The optimal contro...

  8. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    Science.gov (United States)

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

    The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. The participants were 99 conscripted soldiers with educational levels of senior high school or lower. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future.
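
    The cut-off selection step can be sketched with synthetic scores; the group means, group sizes, and the Youden-index criterion below are illustrative assumptions, not the study's data or its exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical screening scores: lower scores suggest impairment.
scores_id = rng.normal(50, 10, 300)   # simulated ID group
scores_ok = rng.normal(70, 10, 700)   # simulated non-ID group

scores = np.concatenate([scores_id, scores_ok])
is_id = np.concatenate([np.ones(300, bool), np.zeros(700, bool)])

def youden(cut):
    # Youden's J = sensitivity + specificity - 1 at this cut-off.
    pred = scores <= cut              # positive screen = score at or below cut
    sens = np.mean(pred[is_id])
    spec = np.mean(~pred[~is_id])
    return sens + spec - 1.0

# Sweep candidate cut-offs and keep the one maximizing J (one point on the ROC).
cuts = np.arange(scores.min(), scores.max())
best_cut = max(cuts, key=youden)
```

    With the two groups centered at 50 and 70, the optimum lands near 60; a window screen would instead retain two cut-offs bracketing an indeterminate zone.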

  9. Noncausal stochastic calculus

    CERN Document Server

    Ogawa, Shigeyoshi

    2017-01-01

    This book presents an elementary introduction to the theory of noncausal stochastic calculus that arises as a natural alternative to the standard theory of stochastic calculus founded in 1944 by Professor Kiyoshi Itô. As is generally known, Itô Calculus is essentially based on the "hypothesis of causality", requiring random functions to be adapted to a natural filtration generated by Brownian motion or, more generally, by a square integrable martingale. The intention in this book is to establish a stochastic calculus that is free from this "hypothesis of causality". To be more precise, a noncausal theory of stochastic calculus is developed in this book, based on the noncausal integral introduced by the author in 1979. After studying basic properties of the noncausal stochastic integral, various concrete problems of noncausal nature are considered, mostly concerning stochastic functional equations such as SDE, SIE, SPDE, and others, to show not only the necessity of such theory of noncausal stochastic calculus but ...

  10. The Two-Word Stage: Motivated by Linguistic or Cognitive Constraints?

    Science.gov (United States)

    Berk, Stephanie; Lillo-Martin, Diane

    2012-01-01

    Child development researchers often discuss a "two-word" stage during language acquisition. However, there is still debate over whether the existence of this stage reflects primarily cognitive or linguistic constraints. Analyses of longitudinal data from two Deaf children, Mei and Cal, not exposed to an accessible first language (American Sign…

  11. Transport in Stochastic Media

    International Nuclear Information System (INIS)

    Haran, O.; Shvarts, D.; Thieberger, R.

    1998-01-01

    Classical transport of neutral particles in a binary, scattering, stochastic medium is discussed. It is assumed that the cross-sections of the constituent materials and their volume fractions are known. The inner structure of the medium is stochastic, but statistical knowledge about the lump sizes, shapes, and arrangement exists. The transmission through the composite medium depends on the specific heterogeneous realization of the medium. The current research focuses on the averaged transmission through an ensemble of realizations, from which an effective cross-section for the medium can be derived. The problem of one-dimensional transport in stochastic media has been studied extensively [1]. In the one-dimensional description of the problem, particles are transported along a line populated with alternating material segments of random lengths. The current work discusses transport in two-dimensional stochastic media. The phenomenon that is unique to the multi-dimensional description of the problem is obstacle bypassing. Obstacle bypassing tends to reduce the opacity of the medium, thereby reducing its effective cross-section. The importance of this phenomenon depends on the manner in which the obstacles are arranged in the medium. Results of transport simulations in multi-dimensional stochastic media are presented. Effective cross-sections derived from the simulations are compared against those obtained for the one-dimensional problem, and against those obtained from effective multi-dimensional models, which are partially based on a Markovian assumption.
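
    A minimal one-dimensional version of the ensemble-averaged transmission described above can be simulated directly; the cross-sections, equal mean segment lengths, and Markovian (exponential) mixing below are assumed values for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
L = 5.0                   # slab thickness
sigma = (0.1, 2.0)        # total cross-sections of the two materials
mean_len = (1.0, 1.0)     # mean segment lengths (Markovian mixing)

def one_realization():
    # Fill the line with alternating exponential segments, accumulate
    # optical depth, and return the uncollided transmission exp(-tau).
    x, mat, tau = 0.0, int(rng.integers(2)), 0.0
    while x < L:
        seg = rng.exponential(mean_len[mat])
        tau += sigma[mat] * min(seg, L - x)
        x += seg
        mat = 1 - mat
    return np.exp(-tau)

T_ensemble = np.mean([one_realization() for _ in range(20000)])
# Atomic-mix benchmark: homogenize the cross-section before transporting.
T_atomic = np.exp(-0.5 * (sigma[0] + sigma[1]) * L)
```

    By Jensen's inequality the ensemble-averaged transmission exceeds the atomic-mix value, which is the basic reason an effective cross-section for a stochastic medium differs from the volume-averaged one.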

  12. A two-stage heating scheme for heat assisted magnetic recording

    Science.gov (United States)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in² for the next generation of magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the heating process into two individual stages, carried out by an optical waveguide and an NFT, respectively. As the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface to heat it up to a peak temperature somewhat lower than the Curie temperature of the magnetic material. The NFT then works as the second heating stage, heating a smaller area inside the waveguide-heated area further to reach the Curie point. The energy applied to the NFT in the second heating stage is reduced compared with a typical single-stage NFT heating system. With the thermal load on the NFT reduced by the two-stage heating scheme, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  13. A multiscale extension of the Margrabe formula under stochastic volatility

    International Nuclear Information System (INIS)

    Kim, Jeong-Hoon; Park, Chang-Rae

    2017-01-01

    Highlights: • Fast-mean-reverting stochastic volatility model is chosen to extend the classical Margrabe formula. • The resultant formula is explicitly given by the greeks of Margrabe price itself. • We show how the stochastic volatility corrects the Margrabe price behavior. - Abstract: The pricing of financial derivatives based on stochastic volatility models has been a popular subject in computational finance. Although exact or approximate closed form formulas of the prices of many options under stochastic volatility have been obtained so that the option prices can be easily computed, such formulas for exchange options leave much to be desired. In this paper, we consider two different risky assets with two different scales of mean-reversion rate of volatility and use asymptotic analysis to extend the classical Margrabe formula, which corresponds to a geometric Brownian motion model, and obtain a pricing formula under a stochastic volatility. The resultant formula can be computed easily, simply by taking derivatives of the Margrabe price itself. Based on the formula, we show how the stochastic volatility corrects the Margrabe price behavior depending on the moneyness and the correlation coefficient between the two asset prices.
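
    For reference, the classical Margrabe price that the paper's stochastic-volatility expansion corrects can be computed in closed form: with constant volatilities the exchange option behaves like a call on the ratio of the two assets, with effective volatility combining both. The inputs below are arbitrary illustration values.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def margrabe(s1, s2, sig1, sig2, rho, T):
    """Classical Margrabe price of the option to exchange asset 2 for asset 1."""
    sig = sqrt(sig1**2 + sig2**2 - 2.0 * rho * sig1 * sig2)  # effective vol
    d1 = (log(s1 / s2) + 0.5 * sig**2 * T) / (sig * sqrt(T))
    d2 = d1 - sig * sqrt(T)
    return s1 * norm_cdf(d1) - s2 * norm_cdf(d2)

price = margrabe(100.0, 95.0, 0.3, 0.2, 0.5, 1.0)
```

    Note how a higher correlation between the two assets lowers the effective volatility and hence the price, the dependence on moneyness and correlation that the paper's stochastic-volatility correction refines.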

  14. A stochastic discrete optimization model for designing container terminal facilities

    Science.gov (United States)

    Zukhruf, Febri; Frazila, Russ Bona; Burhani, Jzolanda Tsavalista

    2017-11-01

    Since uncertainty substantially affects the total transportation cost, it remains important in container terminals, which incorporate several modes and transshipment processes. This paper presents a stochastic discrete optimization model for designing a container terminal, which involves decisions on facility improvement actions. The container terminal operation model is constructed by accounting for the variation of demand and facility performance. In addition, to illustrate the conflicting issue that arises in practice in terminal operation, the model also takes into account the possible increment of facility delays due to the increasing amount of equipment, especially container trucks. These variations reflect the uncertainty in container terminal operation. A Monte Carlo simulation is invoked to propagate the variations by following the observed distribution. The problem is constructed within the framework of combinatorial optimization to investigate the optimal facility improvement decisions. A new variant of glow-worm swarm optimization (GSO), rarely explored in the transportation field, is thus proposed for solving the optimization. The model's applicability is tested by considering the actual characteristics of a container terminal.

  15. Efficacy of single-stage and two-stage Fowler–Stephens laparoscopic orchidopexy in the treatment of intraabdominal high testis

    Directory of Open Access Journals (Sweden)

    Chang-Yuan Wang

    2017-11-01

    Conclusion: In the case of a testis with good collateral circulation, single-stage F-S laparoscopic orchidopexy had the same safety and efficacy as the two-stage F-S procedure. Surgical options should be based on comprehensive consideration of intraoperative testicular location, the testicular ischemia test, and the collateral circulation surrounding the testes. Under the appropriate conditions, we propose that single-stage F-S laparoscopic orchidopexy be preferred. It may be appropriate to avoid unnecessary application of the two-stage procedure, which has a higher cost and causes more pain for patients.

  16. The Combinatorial Trace Method in Action

    Science.gov (United States)

    Krebs, Mike; Martinez, Natalie C.

    2013-01-01

    On any finite graph, the number of closed walks of length k is equal to the sum of the kth powers of the eigenvalues of any adjacency matrix. This simple observation is the basis for the combinatorial trace method, wherein we attempt to count (or bound) the number of closed walks of a given length so as to obtain information about the graph's…
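
    The identity underlying the method, that the number of closed walks of length k equals the trace of A^k (the sum of k-th powers of the eigenvalues), can be checked directly on a small graph, here a 4-cycle:

```python
import numpy as np
from itertools import product

# Adjacency matrix of the 4-cycle 0-1-2-3-0.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

k = 4
# Count closed walks of length k directly: a tuple of k vertices with
# consecutive (and wrap-around) pairs joined by edges.
walks = sum(
    1
    for path in product(range(4), repeat=k)
    if all(A[path[i], path[(i + 1) % k]] for i in range(k))
)

# ...and via the spectrum: sum of k-th powers of eigenvalues = trace(A^k).
count_by_trace = round((np.linalg.eigvalsh(A) ** k).sum())
```

    For the 4-cycle the eigenvalues are 2, 0, 0, -2, so both counts give 2^4 + (-2)^4 = 32; the trace method turns such counts into eigenvalue bounds.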

  17. Dynamic combinatorial chemistry at the phospholipid bilayer interface

    NARCIS (Netherlands)

    Mansfeld, Friederike M.; Au-Yeung, Ho Yu; Sanders, Jeremy K.M.; Otto, Sijbren

    2010-01-01

    Background: Molecular recognition in the environment provided by the phospholipid bilayer interface plays an important role in biology and is the subject of intense investigation. Dynamic combinatorial chemistry is a powerful approach for exploring molecular recognition, but has thus far not been

  18. Programme for test generation for combinatorial and sequential systems

    International Nuclear Information System (INIS)

    Tran Huy Hoan

    1973-01-01

    This research thesis reports the computer-assisted search for tests aimed at failure detection in combinatorial and sequential logic circuits. Since he wanted to deal with complex circuits with many modules, such as those found in large-scale integrated circuits (LSI), the author used propagation paths. He reports the development of a method that is valid for combinatorial systems and for several sequential circuits comprising elementary logic modules and JK and RS flip-flops. This method was developed on an IBM 360/91 computer in the PL/1 language. The memory space used is limited and adjustable with respect to circuit dimension. Computing time is short compared to that needed by other programmes. The solution is practical and efficient for failure testing and localisation.

  19. Combinatorial nuclear level density by a Monte Carlo method

    International Nuclear Information System (INIS)

    Cerf, N.

    1994-01-01

    We present a new combinatorial method for the calculation of the nuclear level density. It is based on a Monte Carlo technique, in order to avoid a direct counting procedure which is generally impracticable for high-A nuclei. The Monte Carlo simulation, making use of the Metropolis sampling scheme, allows a computationally fast estimate of the level density for many fermion systems in large shell model spaces. We emphasize the advantages of this Monte Carlo approach, particularly concerning the prediction of the spin and parity distributions of the excited states,and compare our results with those derived from a traditional combinatorial or a statistical method. Such a Monte Carlo technique seems very promising to determine accurate level densities in a large energy range for nuclear reaction calculations
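
    The Metropolis sampling scheme the abstract relies on can be illustrated on a toy discrete state space (not a shell-model space); the energies, temperature, and uniform proposal below are arbitrary assumptions, meant only to show the accept/reject step that makes direct counting unnecessary.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy state space: energies of four discrete configurations.
E = np.array([0.0, 1.0, 2.0, 3.0])
beta = 1.0
target = np.exp(-beta * E)
target /= target.sum()            # Boltzmann weights to be reproduced

state = 0
counts = np.zeros(4)
for _ in range(200_000):
    prop = int(rng.integers(4))   # symmetric (uniform) proposal
    # Metropolis rule: always accept downhill moves, uphill with exp(-beta*dE).
    if rng.random() < np.exp(-beta * (E[prop] - E[state])):
        state = prop
    counts[state] += 1
freq = counts / counts.sum()      # empirical occupation of each state
```

    The chain's visit frequencies converge to the target weights without ever normalizing over, or enumerating, the full state space, which is the property that makes the approach viable for high-A nuclei.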

  20. Exponential mean-square stability of two classes of theta Milstein methods for stochastic delay differential equations

    Science.gov (United States)

    Rouz, Omid Farkhondeh; Ahmadian, Davood; Milev, Mariyan

    2017-12-01

    This paper establishes the exponential mean-square stability of two classes of theta Milstein methods, namely the split-step theta Milstein (SSTM) method and the stochastic theta Milstein (STM) method, for stochastic differential delay equations (SDDEs). We consider the SDDE problem under a coupled monotone condition on the drift and diffusion coefficients, as well as a necessary linear growth condition on the last term of the theta Milstein method. It is proved that the SSTM method with θ ∈ [0, ½] can recover the exponential mean-square stability of the exact solution with some restrictive conditions on the stepsize, but for θ ∈ (½, 1], we prove that the stability results hold for any stepsize. Then, based on the stability results of the SSTM method, we examine the exponential mean-square stability of the STM method and obtain stability results similar to those of the SSTM method. In the numerical section, figures show the validity of our claims.
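
    A simplified sketch in the spirit of the STM method, here for a scalar linear test SDE without delay, shows the mean-square decay that the paper proves for the fully implicit case θ = 1; the coefficients, stepsize, and absence of a delay term are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = -2.0, 0.5           # test equation dX = a*X dt + b*X dW, 2a + b^2 < 0
theta, h, steps, paths = 1.0, 0.1, 200, 2000

X = np.full(paths, 1.0)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(h), paths)
    # Theta Milstein step: drift split between explicit and implicit parts,
    # Milstein correction 0.5*b^2*X*(dW^2 - h) kept explicit. For linear
    # coefficients the implicit part can be solved in closed form.
    rhs = X + (1 - theta) * h * a * X + b * X * dW + 0.5 * b**2 * X * (dW**2 - h)
    X = rhs / (1 - theta * h * a)

msq = np.mean(X**2)        # sample second moment after T = steps*h
```

    Since 2a + b² < 0 the exact solution is exponentially mean-square stable, and with θ = 1 the discrete second moment decays as well, consistent with the "any stepsize" regime the paper establishes for θ ∈ (½, 1].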

  1. Magnetic X-points, edge localized modes, and stochasticity

    International Nuclear Information System (INIS)

    Sugiyama, L. E.; Strauss, H. R.

    2010-01-01

    Edge localized modes (ELMs) near the boundary of a high temperature, magnetically confined toroidal plasma represent a new type of nonlinear magnetohydrodynamic (MHD) plasma instability that grows through a coherent plasma interaction with part of a chaotic magnetic field. Under perturbation, the freely moving magnetic boundary surface with an X-point splits into two different limiting asymptotic surfaces (manifolds), similar to the behavior of a hyperbolic saddle point in Hamiltonian dynamics. Numerical simulation using the extended MHD code M3D shows that field-aligned plasma instabilities, such as ballooning modes, can couple to the "unstable" manifold that forms helical, field-following lobes around the original surface. Large type I ELMs proceed in stages. Initially, a rapidly growing ballooning outburst involves the entire outboard side. Large plasma fingers grow well off the midplane, while low density regions penetrate deeply into the plasma. The magnetic field becomes superficially stochastic. A secondary inboard edge instability causes inboard plasma loss. The plasma gradually relaxes back toward axisymmetry, with diminishing cycles of edge instability. Poloidal rotation of the interior and edge plasma may be driven. The magnetic tangle constrains the early nonlinear ballooning, but may encourage the later inward penetration. Equilibrium toroidal rotation and two-fluid diamagnetic drifts have relatively small effects on a strong MHD instability. Intrinsic magnetic stochasticity may help explain the wide range of experimentally observed ELMs and ELM-free behavior in fusion plasmas, as well as properties of the H-mode and plasma edge.

  2. Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory

    KAUST Repository

    Richtarik, Peter; Takáč, Martin

    2017-01-01

    We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices. Our reformulation has several equivalent interpretations, allowing for researchers from various communities to leverage their domain specific insights. In particular, our reformulation can be equivalently seen as a stochastic optimization problem, stochastic linear system, stochastic fixed point problem and a probabilistic intersection problem. We prove sufficient, and necessary and sufficient conditions for the reformulation to be exact. Further, we propose and analyze three stochastic algorithms for solving the reformulated problem---basic, parallel and accelerated methods---with global linear convergence rates. The rates can be interpreted as condition numbers of a matrix which depends on the system matrix and on the reformulation parameters. This gives rise to a new phenomenon which we call stochastic preconditioning, and which refers to the problem of finding parameters (matrix and distribution) leading to a sufficiently small condition number. Our basic method can be equivalently interpreted as stochastic gradient descent, stochastic Newton method, stochastic proximal point method, stochastic fixed point method, and stochastic projection method, with fixed stepsize (relaxation parameter), applied to the reformulations.

  3. Stochastic Reformulations of Linear Systems: Algorithms and Convergence Theory

    KAUST Repository

    Richtarik, Peter

    2017-06-04

    We develop a family of reformulations of an arbitrary consistent linear system into a stochastic problem. The reformulations are governed by two user-defined parameters: a positive definite matrix defining a norm, and an arbitrary discrete or continuous distribution over random matrices. Our reformulation has several equivalent interpretations, allowing for researchers from various communities to leverage their domain specific insights. In particular, our reformulation can be equivalently seen as a stochastic optimization problem, stochastic linear system, stochastic fixed point problem and a probabilistic intersection problem. We prove sufficient, and necessary and sufficient conditions for the reformulation to be exact. Further, we propose and analyze three stochastic algorithms for solving the reformulated problem---basic, parallel and accelerated methods---with global linear convergence rates. The rates can be interpreted as condition numbers of a matrix which depends on the system matrix and on the reformulation parameters. This gives rise to a new phenomenon which we call stochastic preconditioning, and which refers to the problem of finding parameters (matrix and distribution) leading to a sufficiently small condition number. Our basic method can be equivalently interpreted as stochastic gradient descent, stochastic Newton method, stochastic proximal point method, stochastic fixed point method, and stochastic projection method, with fixed stepsize (relaxation parameter), applied to the reformulations.
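
    One well-known special case of the stochastic projection viewpoint described above is randomized Kaczmarz: sample one equation at a time and project the iterate onto its hyperplane. The sketch below is an illustration on a small consistent Gaussian system, not the paper's code; the sizes and iteration count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
# Consistent linear system A @ x_true = b.
A = rng.normal(size=(50, 10))
x_true = rng.normal(size=10)
b = A @ x_true

x = np.zeros(10)
for _ in range(3000):
    i = int(rng.integers(A.shape[0]))        # pick one equation at random
    a = A[i]
    # Stochastic projection step: project x onto {v : a @ v = b[i]}.
    x += (b[i] - a @ x) / (a @ a) * a

err = np.linalg.norm(x - x_true)
```

    The per-iteration contraction rate plays the role of the condition number the abstract mentions, and choosing the sampling distribution to shrink it is an instance of the "stochastic preconditioning" idea.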

  4. An introduction to stochastic processes with applications to biology

    CERN Document Server

    Allen, Linda J S

    2010-01-01

    An Introduction to Stochastic Processes with Applications to Biology, Second Edition presents the basic theory of stochastic processes necessary in understanding and applying stochastic methods to biological problems in areas such as population growth and extinction, drug kinetics, two-species competition and predation, the spread of epidemics, and the genetics of inbreeding. Because of their rich structure, the text focuses on discrete and continuous time Markov chains and continuous time and state Markov processes.New to the Second EditionA new chapter on stochastic differential equations th

  5. Distance covariance for stochastic processes

    DEFF Research Database (Denmark)

    Matsui, Muneya; Mikosch, Thomas Valentin; Samorodnitsky, Gennady

    2017-01-01

    The distance covariance of two random vectors is a measure of their dependence. The empirical distance covariance and correlation can be used as statistical tools for testing whether two random vectors are independent. We propose an analog of the distance covariance for two stochastic processes...
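
    The empirical distance covariance mentioned above has a short direct implementation via double-centered pairwise distance matrices; the one-dimensional synthetic data below are an illustration of the independence test, not the paper's process-valued setting.

```python
import numpy as np

def dcov2(x, y):
    """Squared empirical distance covariance of two paired 1-D samples."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])      # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])      # pairwise distances in y
    # Double-center each distance matrix.
    A = a - a.mean(axis=0) - a.mean(axis=1)[:, None] + a.mean()
    B = b - b.mean(axis=0) - b.mean(axis=1)[:, None] + b.mean()
    return (A * B).mean()

rng = np.random.default_rng(6)
x = rng.normal(size=500)
dep = dcov2(x, x**2)                 # dependent but uncorrelated pair
indep = dcov2(x, rng.normal(size=500))
```

    Unlike ordinary covariance, the statistic is non-negative and vanishes (in population) exactly under independence, so x versus x², which is uncorrelated but dependent, yields a clearly larger value than an independent pair.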

  6. Two-Stage Regularized Linear Discriminant Analysis for 2-D Data.

    Science.gov (United States)

    Zhao, Jianhua; Shi, Lei; Zhu, Ji

    2015-08-01

    Fisher linear discriminant analysis (LDA) involves within-class and between-class covariance matrices. For 2-D data such as images, regularized LDA (RLDA) can improve LDA due to the regularized eigenvalues of the estimated within-class matrix. However, it fails to consider the eigenvectors and the estimated between-class matrix. To improve these two matrices simultaneously, we propose in this paper a new two-stage method for 2-D data, namely a bidirectional LDA (BLDA) in the first stage and the RLDA in the second stage, where both BLDA and RLDA are based on the Fisher criterion that tackles correlation. BLDA performs the LDA under special separable covariance constraints that incorporate the row and column correlations inherent in 2-D data. The main novelty is that we propose a simple but effective statistical test to determine the subspace dimensionality in the first stage. As a result, the first stage reduces the dimensionality substantially while keeping the significant discriminant information in the data. This enables the second stage to perform RLDA in a much lower dimensional subspace, and thus improves the two estimated matrices simultaneously. Experiments on a number of 2-D synthetic and real-world data sets show that BLDA+RLDA outperforms several closely related competitors.

  7. Two-stage precipitation of plutonium trifluoride

    International Nuclear Information System (INIS)

    Luerkens, D.W.

    1984-04-01

    Plutonium trifluoride was precipitated using a two-stage precipitation system. A series of precipitation experiments identified the significant process variables affecting precipitate characteristics. A mathematical precipitation model was developed which was based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter that can be used to control particle characteristics

  8. Stochastic geometry for image analysis

    CERN Document Server

    Descombes, Xavier

    2013-01-01

    This book develops the stochastic geometry framework for image analysis purposes. Two main frameworks are described: marked point process and random closed set models. We derive the main issues for defining an appropriate model. The algorithms for sampling and optimizing the models, as well as for estimating parameters, are reviewed. Numerous applications, covering remote sensing images and biological and medical imaging, are detailed. This book provides all the necessary tools for developing an image analysis application based on modern stochastic modeling.

  9. Comprehensive human transcription factor binding site map for combinatory binding motifs discovery.

    Directory of Open Access Journals (Sweden)

    Arnoldo J Müller-Molina

    To know the map between transcription factors (TFs) and their binding sites is essential to reverse engineer the regulation process. Only about 10%-20% of the transcription factor binding motifs (TFBMs) have been reported. This lack of data hinders the understanding of gene regulation. To address this drawback, we propose a computational method that exploits never-used TF properties to discover the missing TFBMs and their sites in all human gene promoters. The method starts by predicting a dictionary of regulatory "DNA words." From this dictionary, it distills 4098 novel predictions. To disclose the crosstalk between motifs, an additional algorithm extracts TF combinatorial binding patterns, creating a collection of TF regulatory syntactic rules. Using these rules, we narrowed down a list of 504 novel motifs that appear frequently in syntax patterns. We tested the predictions against 509 known motifs, confirming that our system can reliably predict ab initio motifs with an accuracy of 81%, far higher than previous approaches. We found that on average, 90% of the discovered combinatorial binding patterns target at least 10 genes, suggesting that supplementary regulatory mechanisms are required to control smaller gene sets independently. Additionally, we discovered that the new TFBMs and their combinatorial patterns convey biological meaning, targeting TFs and genes related to developmental functions. Thus, among all the possible available targets in the genome, the TFs tend to regulate other TFs and genes involved in developmental functions. We provide a comprehensive resource for regulation analysis that includes a dictionary of "DNA words," newly predicted motifs, and their corresponding combinatorial patterns. Combinatorial patterns are a useful filter to discover TFBMs that play a major role in orchestrating other factors and thus are likely to lock/unlock cellular functional clusters.

  10. Stochastic Learning and the Intuitive Criterion in Simple Signaling Games

    DEFF Research Database (Denmark)

    Sloth, Birgitte; Whitta-Jacobsen, Hans Jørgen

    A stochastic learning process for signaling games with two types, two signals, and two responses gives rise to equilibrium selection which is in remarkable accordance with the selection obtained by the intuitive criterion.

  11. Stochastic Optimization in The Power Management of Bottled Water Production Planning

    Science.gov (United States)

    Antoro, Budi; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    This paper reviews a model developed to minimize production costs in bottled water production planning through stochastic optimization. Planning is a management activity aimed at achieving a stated goal, and every management level in an organization needs planning activities. The model built is a two-stage stochastic model that aims to minimize the cost of bottled water production while accounting for disruptions that may occur during the production process. The model was developed to minimize production cost, assuming that the availability of packaging raw materials is sufficient for each kind of bottle. The minimum cost for each kind of bottled water production is expressed as an expectation over scenarios, each with an associated probability. The uncertainty represents the number of production runs and the timing of power supply interruptions. This ensures that the number of interruptions that occur does not exceed the limit of the contract agreement the company has made with its power suppliers.
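
    The two-stage structure described above can be sketched as a tiny scenario model: a first-stage production quantity is chosen before demand is known, and second-stage recourse costs (shortage and holding) are charged once a scenario is realized. All costs, demands, and probabilities below are invented for illustration; the paper's model is richer.

```python
# Demand scenarios as (demand, probability) pairs; probabilities sum to 1.
scenarios = [(80, 0.3), (100, 0.5), (120, 0.2)]
c_prod, c_short, c_hold = 2.0, 8.0, 1.0   # unit production/shortage/holding costs

def expected_cost(x):
    # First-stage cost plus probability-weighted second-stage recourse cost.
    cost = c_prod * x
    for demand, p in scenarios:
        shortage = max(demand - x, 0)
        leftover = max(x - demand, 0)
        cost += p * (c_short * shortage + c_hold * leftover)
    return cost

# With one integer decision variable, full enumeration solves the program.
best_x = min(range(0, 201), key=expected_cost)
```

    Here the optimum is the newsvendor-style quantile of the demand distribution; with many decision variables and scenarios the same structure is solved by stochastic programming solvers rather than enumeration.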

  12. What Diagrams Argue in Late Imperial Chinese Combinatorial Texts.

    Science.gov (United States)

    Bréard, Andrea

    2015-01-01

    Attitudes towards diagrammatic reasoning and visualization in mathematics were seldom spelled out in texts from pre-modern China, although illustrations figure prominently in mathematical literature since the eleventh century. Taking the sums of finite series and their combinatorial interpretation as a case study, this article investigates the epistemological function of illustrations from the eleventh to the nineteenth century that encode either the mathematical objects themselves or represent their related algorithms. It particularly focuses on the two illustrations given in Wang Lai's (1768-1813) Mathematical Principles of Sequential Combinations, arguing that they reflect a specific mode of nineteenth-century mathematical argumentative practice and served as a heuristic model for later authors.

  13. Isocyanide based multi component reactions in combinatorial chemistry.

    NARCIS (Netherlands)

    Dömling, A.

    1998-01-01

    Although usually regarded as a recent development, the combinatorial approach to the synthesis of libraries of new drug candidates was first described as early as 1961 using the isocyanide-based one-pot multicomponent Ugi reaction. Isocyanide-based multi component reactions (MCR's) markedly differ

  14. A New Concept of Two-Stage Multi-Element Resonant-/Cyclo-Converter for Two-Phase IM/SM Motor

    Directory of Open Access Journals (Sweden)

    Mahmud Ali Rzig Abdalmula

    2013-01-01

    Full Text Available The paper deals with a new concept of a power electronic two-phase system with a two-stage DC/AC/AC converter and a two-phase IM/PMSM motor. The proposed two-stage converter comprises: an input resonant boost converter with AC output, a two-phase half-bridge cyclo-converter commutated by the HF AC input voltage, and an induction or synchronous motor. Such a system with an AC interlink, as a whole unit, has better properties than a 3-phase reference VSI inverter: higher efficiency due to soft switching of both converter stages, higher switching frequency, smaller dimensions and weight, a lower number of power semiconductor switches, and a better price. In comparison with currently used conventional system configurations, the proposed system features good efficiency of the electronic converters and good torque overload capability of the two-phase AC induction or synchronous motor. The design of the two-stage multi-element resonant converter and results of simulation experiments are presented in the paper.

  15. Two-Stage Fuzzy Portfolio Selection Problem with Transaction Costs

    OpenAIRE

    Chen, Yanju; Wang, Ye

    2015-01-01

    This paper studies a two-period portfolio selection problem. The problem is formulated as a two-stage fuzzy portfolio selection model with transaction costs, in which the future returns of risky security are characterized by possibility distributions. The objective of the proposed model is to achieve the maximum utility in terms of the expected value and variance of the final wealth. Given the first-stage decision vector and a realization of fuzzy return, the optimal value expression of the s...

  16. Stochastic Modelling of Hydrologic Systems

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa

    2007-01-01

    In this PhD project several stochastic modelling methods are studied and applied to various subjects in hydrology. The research was prepared at Informatics and Mathematical Modelling at the Technical University of Denmark. The thesis is divided into two parts. The first part contains an introduction and an overview of the papers published. Then an introduction to basic concepts in hydrology along with a description of hydrological data is given. Finally, an introduction to stochastic modelling is given. The second part contains the research papers, in which the stochastic methods are described; at the time of publication these methods represented new contributions to hydrology. The second part also contains an additional description of the software used and a brief introduction to stiff systems. The system in one of the papers is stiff.

  17. Stochastic stability and bifurcation in a macroeconomic model

    International Nuclear Information System (INIS)

    Li Wei; Xu Wei; Zhao Junfeng; Jin Yanfei

    2007-01-01

    On the basis of the work of Goodwin and Puu, a new business cycle model subject to a stochastically parametric excitation is derived in this paper. First, we reduce the model to a one-dimensional diffusion process by applying the stochastic averaging method of quasi-nonintegrable Hamiltonian systems. Secondly, we utilize the methods of the Lyapunov exponent and of boundary classification associated with diffusion processes to analyze the stochastic stability of the trivial solution of the system. The numerical results obtained illustrate that the trivial solution of the system must be globally stable if it is locally stable in the state space. Thirdly, we explore the stochastic Hopf bifurcation of the business cycle model according to the qualitative changes in the stationary probability density of the system response. It is concluded that the stochastic Hopf bifurcation occurs at two critical parametric values. Finally, some explanations are given in a simple way on the potential applications of stochastic stability and bifurcation analysis.

  18. Problems of Mathematical Finance by Stochastic Control Methods

    Science.gov (United States)

    Stettner, Łukasz

    The purpose of this paper is to present the main ideas of the mathematics of finance using stochastic control methods. There is an interplay between stochastic control and the mathematics of finance. On the one hand, stochastic control is a powerful tool for studying financial problems. On the other hand, financial applications have stimulated development in several research subareas of stochastic control over the last two decades. We start with the pricing of financial derivatives and the modeling of asset prices, studying the conditions for the absence of arbitrage. Then we consider the pricing of defaultable contingent claims. Investments in bonds lead us to term structure modeling problems. Special attention is devoted to the classical static portfolio analysis known as Markowitz theory. We also briefly sketch dynamic portfolio problems using viscosity solutions to the Hamilton-Jacobi-Bellman equation, the martingale-convex analysis method, or the stochastic maximum principle together with the backward stochastic differential equation. Finally, long-time portfolio analysis for both risk-neutral and risk-sensitive functionals is introduced.

  19. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  20. On the robustness of two-stage estimators

    KAUST Repository

    Zhelonkin, Mikhail; Genton, Marc G.; Ronchetti, Elvezio

    2012-01-01

    The aim of this note is to provide a general framework for the analysis of the robustness properties of a broad class of two-stage models. We derive the influence function, the change-of-variance function, and the asymptotic variance of a general

  1. Multi-level, Multi-stage and Stochastic Optimization Models for Energy Conservation in Buildings for Federal, State and Local Agencies

    Science.gov (United States)

    Champion, Billy Ray

    projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed. Time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriations framework.

  2. Exact model reduction of combinatorial reaction networks

    Directory of Open Access Journals (Sweden)

    Fey Dirk

    2008-08-01

    Full Text Available Abstract. Background: Receptors and scaffold proteins usually possess a high number of distinct binding domains, inducing the formation of large multiprotein signaling complexes. For combinatorial reasons the number of distinguishable species grows exponentially with the number of binding domains and can easily reach several million. Even when including only a limited number of components and binding domains, the resulting models are very large and hardly manageable. A novel model reduction technique allows the significant reduction and modularization of these models. Results: We introduce methods that extend and complete the previously introduced approach. For instance, we provide techniques to handle the formation of multi-scaffold complexes as well as receptor dimerization. Furthermore, we discuss a new modeling approach that allows the direct generation of exactly reduced model structures. The developed methods are used to reduce a model of EGF and insulin receptor crosstalk comprising 5,182 ordinary differential equations (ODEs) to a model with 87 ODEs. Conclusion: The methods presented in this contribution significantly enhance the available methods to exactly reduce models of combinatorial reaction networks.

  3. Structural factoring approach for analyzing stochastic networks

    Science.gov (United States)

    Hayhurst, Kelly J.; Shier, Douglas R.

    1991-01-01

    The problem of finding the distribution of the shortest path length through a stochastic network is investigated. A general algorithm for determining the exact distribution of the shortest path length is developed based on the concept of conditional factoring, in which a directed, stochastic network is decomposed into an equivalent set of smaller, generally less complex subnetworks. Several network constructs are identified and exploited to reduce significantly the computational effort required to solve a network problem relative to complete enumeration. This algorithm can be applied to two important classes of stochastic path problems: determining the critical path distribution for acyclic networks and the exact two-terminal reliability for probabilistic networks. Computational experience with the algorithm was encouraging and allowed the exact solution of networks that have been previously analyzed only by approximation techniques.
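
    As a baseline for what the factoring algorithm improves on, complete enumeration of a small stochastic network makes the "distribution of the shortest path length" concrete. The network, arc-length distributions, and path list below are illustrative assumptions, not the paper's examples.

```python
from itertools import product

# Exact distribution of the shortest s-t path length in a tiny acyclic
# network whose arc lengths take discrete values, by complete enumeration
# of all joint arc-length realizations.

# Each arc maps to a list of (length, probability) pairs.
arcs = {
    ('s', 'a'): [(1, 0.5), (3, 0.5)],
    ('s', 'b'): [(2, 1.0)],
    ('a', 't'): [(1, 0.5), (2, 0.5)],
    ('b', 't'): [(2, 0.5), (4, 0.5)],
}
paths = [[('s', 'a'), ('a', 't')], [('s', 'b'), ('b', 't')]]

dist = {}
names = list(arcs)
for combo in product(*(arcs[a] for a in names)):
    length = {a: lv for a, lv in zip(names, combo)}
    prob = 1.0
    for _, p in combo:
        prob *= p
    sp = min(sum(length[a][0] for a in path) for path in paths)
    dist[sp] = dist.get(sp, 0.0) + prob

print(sorted(dist.items()))  # -> [(2, 0.25), (3, 0.25), (4, 0.375), (5, 0.125)]
```

    The cost is exponential in the number of arcs, which is exactly why conditional factoring into smaller subnetworks pays off.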

  4. Stochastic reaction-diffusion algorithms for macromolecular crowding

    Science.gov (United States)

    Sturrock, Marc

    2016-06-01

    Compartment-based (lattice-based) reaction-diffusion algorithms are often used for studying complex stochastic spatio-temporal processes inside cells. In this paper the influence of macromolecular crowding on stochastic reaction-diffusion simulations is investigated. Reaction-diffusion processes are considered on two different kinds of compartmental lattice, a cubic lattice and a hexagonal close packed lattice, and solved using two different algorithms, the stochastic simulation algorithm and the spatiocyte algorithm (Arjunan and Tomita 2010 Syst. Synth. Biol. 4, 35-53). Obstacles (modelling macromolecular crowding) are shown to have substantial effects on the mean squared displacement and average number of molecules in the domain but the nature of these effects is dependent on the choice of lattice, with the cubic lattice being more susceptible to the effects of the obstacles. Finally, improvements for both algorithms are presented.

  5. Estimating meme fitness in adaptive memetic algorithms for combinatorial problems.

    Science.gov (United States)

    Smith, J E

    2012-01-01

    Among the most promising and active research areas in heuristic optimisation is the field of adaptive memetic algorithms (AMAs). These gain much of their reported robustness by adapting the probability with which each of a set of local improvement operators is applied, according to an estimate of their current value to the search process. This paper addresses the issue of how the current value should be estimated. Assuming the estimate occurs over several applications of a meme, we consider whether the extreme or mean improvements should be used, and whether this aggregation should be global, or local to some part of the solution space. To investigate these issues, we use the well-established COMA framework that coevolves the specification of a population of memes (representing different local search algorithms) alongside a population of candidate solutions to the problem at hand. Two very different memetic algorithms are considered: the first using adaptive operator pursuit to adjust the probabilities of applying a fixed set of memes, and a second which applies genetic operators to dynamically adapt and create memes and their functional definitions. For the latter, especially on combinatorial problems, credit assignment mechanisms based on historical records, or on notions of landscape locality, will have limited application, and it is necessary to estimate the value of a meme via some form of sampling. The results on a set of binary encoded combinatorial problems show that both methods are very effective, and that for some problems it is necessary to use thousands of variables in order to tease apart the differences between different reward schemes. However, for both memetic algorithms, a significant pattern emerges that reward based on mean improvement is better than that based on extreme improvement. This contradicts recent findings from adapting the parameters of operators involved in global evolutionary search. The results also show that local reward schemes

  6. Parameter-free resolution of the superposition of stochastic signals

    Energy Technology Data Exchange (ETDEWEB)

    Scholz, Teresa, E-mail: tascholz@fc.ul.pt [Center for Theoretical and Computational Physics, University of Lisbon (Portugal); Raischel, Frank [Center for Geophysics, IDL, University of Lisbon (Portugal); Closer Consulting, Av. Eng. Duarte Pacheco Torre 1 15º, 1070-101 Lisboa (Portugal); Lopes, Vitor V. [DEIO-CIO, University of Lisbon (Portugal); UTEC–Universidad de Ingeniería y Tecnología, Lima (Peru); Lehle, Bernd; Wächter, Matthias; Peinke, Joachim [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Lind, Pedro G. [Institute of Physics and ForWind, Carl-von-Ossietzky University of Oldenburg, Oldenburg (Germany); Institute of Physics, University of Osnabrück, Osnabrück (Germany)

    2017-01-30

    This paper presents a direct method to obtain the deterministic and stochastic contribution of the sum of two independent stochastic processes, one of which is an Ornstein–Uhlenbeck process and the other a general (non-linear) Langevin process. The method is able to distinguish between the stochastic processes, retrieving their corresponding stochastic evolution equations. This framework is based on a recent approach for the analysis of multidimensional Langevin-type stochastic processes in the presence of strong measurement (or observational) noise, which is here extended to impose neither constraints nor parameters and extract all coefficients directly from the empirical data sets. Using synthetic data, it is shown that the method yields satisfactory results.

  7. A chance-constrained stochastic approach to intermodal container routing problems.

    Science.gov (United States)

    Zhao, Yi; Liu, Ronghui; Zhang, Xi; Whiteing, Anthony

    2018-01-01

    We consider a container routing problem with stochastic time variables in a sea-rail intermodal transportation system. The problem is formulated as a binary integer chance-constrained programming model including stochastic travel times and stochastic transfer time, with the objective of minimising the expected total cost. Two chance constraints are proposed to ensure that the container service satisfies ship fulfilment and cargo on-time delivery with pre-specified probabilities. A hybrid heuristic algorithm is employed to solve the binary integer chance-constrained programming model. Two case studies are conducted to demonstrate the feasibility of the proposed model and to analyse the impact of the stochastic variables and chance constraints on the optimal solution and total cost.
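
    A single chance constraint of the kind described can be checked analytically when leg times are modelled as independent normal variables (an assumption made here for illustration; the paper's model is a binary integer program solved heuristically, and all parameters below are invented):

```python
import math

# Chance constraint: require P(total travel time <= deadline) >= alpha,
# where the total time is a sum of independent normal leg times.

def chance_constraint_ok(legs, deadline, alpha):
    """legs: list of (mean, std) per transport leg."""
    mu = sum(m for m, _ in legs)
    sigma = math.sqrt(sum(s * s for _, s in legs))
    # Normal CDF via the error function.
    prob = 0.5 * (1 + math.erf((deadline - mu) / (sigma * math.sqrt(2))))
    return prob >= alpha

legs = [(10.0, 1.0), (4.0, 0.5), (6.0, 1.0)]   # sea leg, transfer, rail leg
print(chance_constraint_ok(legs, 24.0, 0.95))  # -> True
print(chance_constraint_ok(legs, 20.5, 0.95))  # -> False
```

    With general (non-normal) distributions the probability is instead estimated by sampling, which is what pushes such models toward heuristic solution methods.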

  8. Bidirectional Classical Stochastic Processes with Measurements and Feedback

    Science.gov (United States)

    Hahne, G. E.

    2005-01-01

    A measurement on a quantum system is said to cause the "collapse" of the quantum state vector or density matrix. An analogous collapse occurs with measurements on a classical stochastic process. This paper addresses the question of describing the response of a classical stochastic process when there is feedback from the output of a measurement to the input, and is intended to give a model for quantum-mechanical processes that occur along a space-like reaction coordinate. The classical system can be thought of in physical terms as two counterflowing probability streams, which stochastically exchange probability currents in such a way that the net probability current, and hence the overall probability, suitably interpreted, is conserved. The proposed formalism extends the mathematics of those stochastic processes describable with linear, single-step, unidirectional transition probabilities, known as Markov chains and stochastic matrices. It is shown that a certain rearrangement and combination of the input and output of two stochastic matrices of the same order yields another matrix of the same type. Each measurement causes the partial collapse of the probability current distribution in the midst of such a process, giving rise to calculable, but non-Markov, values for the ensuing modification of the system's output probability distribution. The paper concludes with an analysis of a classical probabilistic version of the so-called grandfather paradox.
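
    The construction rests on the elementary closure property of stochastic matrices under composition; a minimal check of that property (not the paper's bidirectional rearrangement itself, and with invented matrices) looks like:

```python
# The product of two row-stochastic matrices is again row-stochastic:
# composing two Markov transition steps yields another valid transition
# matrix (non-negative entries, rows summing to 1).

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

P = [[0.9, 0.1], [0.4, 0.6]]
Q = [[0.5, 0.5], [0.2, 0.8]]
R = matmul(P, Q)
print([abs(sum(row) - 1.0) < 1e-12 for row in R])  # -> [True, True]
```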

  9. Stochastic two-delay differential model of delayed visual feedback effects on postural dynamics.

    Science.gov (United States)

    Boulet, Jason; Balasubramaniam, Ramesh; Daffertshofer, Andreas; Longtin, André

    2010-01-28

    We report on experiments and modelling involving the 'visuo-postural control loop' in the upright stance. We experimentally manipulated an artificial delay to the visual feedback during standing, presented at delays ranging from 0 to 1 s in increments of 250 ms. Using stochastic delay differential equations, we explicitly modelled the centre-of-pressure (COP) and centre-of-mass (COM) dynamics with two independent delay terms for vision and proprioception. A novel 'drifting fixed point' hypothesis was used to describe the fluctuations of the COM, with the COP being modelled as a faster, corrective process of the COM. The model was in good agreement with the data in terms of probability density functions, power spectral densities, short- and long-term correlations (Hurst exponents) as well as the critical time between the two ranges.

  10. Combinatorial structures and processing in neural blackboard architectures

    NARCIS (Netherlands)

    van der Velde, Frank; de Kamps, Marc; Besold, Tarek R.; d'Avila Garcez, Artur; Marcus, Gary F.; Miikkulainen, Risto

    2015-01-01

    We discuss and illustrate Neural Blackboard Architectures (NBAs) as the basis for variable binding and combinatorial processing in the brain. We focus on the NBA for sentence structure. NBAs are based on the notion that conceptual representations are in situ, and hence cannot be copied or transported.

  11. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    Science.gov (United States)

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  12. Combinatorial interpretations of binomial coefficient analogues related to Lucas sequences

    OpenAIRE

    Sagan, Bruce; Savage, Carla

    2009-01-01

    Let s and t be variables. Define polynomials {n} in s, t by {0}=0, {1}=1, and {n}=s{n-1}+t{n-2} for n >= 2. If s, t are integers then the corresponding sequence of integers is called a Lucas sequence. Define an analogue of the binomial coefficients by C{n,k}={n}!/({k}!{n-k}!) where {n}!={1}{2}...{n}. It is easy to see that C{n,k} is a polynomial in s and t. The purpose of this note is to give two combinatorial interpretations for this polynomial in terms of statistics on integer partitions in...
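
    For integer s and t the definitions can be checked directly; with s = t = 1 the {n} reduce to the Fibonacci numbers and C{n,k} to the "fibonomial" coefficients. The sketch below evaluates at integer s, t (the note itself treats s, t as variables, so this only spot-checks the integrality of the quotient):

```python
from math import prod

# {0}=0, {1}=1, {n}=s*{n-1}+t*{n-2}, and the Lucas-analogue binomial
# C{n,k} = {n}!/({k}!{n-k}!) with {m}! = {1}{2}...{m}.

def lucas_seq(n, s, t):
    vals = [0, 1]
    while len(vals) <= n:
        vals.append(s * vals[-1] + t * vals[-2])
    return vals[:n + 1]

def lucas_binomial(n, k, s, t):
    seq = lucas_seq(n, s, t)
    fact = lambda m: prod(seq[1:m + 1]) if m else 1
    num, den = fact(n), fact(k) * fact(n - k)
    assert num % den == 0  # the quotient is integral (it is a polynomial in s, t)
    return num // den

# With s = t = 1: row 5 of the fibonomial triangle.
print([lucas_binomial(5, k, 1, 1) for k in range(6)])  # -> [1, 5, 15, 15, 5, 1]
```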

  13. A stochastic asymptotic-preserving scheme for a kinetic-fluid model for disperse two-phase flows with uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Shi, E-mail: sjin@wisc.edu [Department of Mathematics, University of Wisconsin–Madison, Madison, WI 53706 (United States); Institute of Natural Sciences, School of Mathematical Science, MOELSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240 (China); Shu, Ruiwen, E-mail: rshu2@math.wisc.edu [Department of Mathematics, University of Wisconsin–Madison, Madison, WI 53706 (United States)

    2017-04-15

    In this paper we consider a kinetic-fluid model for disperse two-phase flows with uncertainty. We propose a stochastic asymptotic-preserving (s-AP) scheme in the generalized polynomial chaos stochastic Galerkin (gPC-sG) framework, which allows the efficient computation of the problem in both kinetic and hydrodynamic regimes. The s-AP property is proved by deriving the equilibrium of the gPC version of the Fokker–Planck operator. The coefficient matrices that arise in a Helmholtz equation and a Poisson equation, essential ingredients of the algorithms, are proved to be positive definite under reasonable and mild assumptions. The computation of the gPC version of a translation operator that arises in the inversion of the Fokker–Planck operator is accelerated by a spectrally accurate splitting method. Numerical examples illustrate the s-AP property and the efficiency of the gPC-sG method in various asymptotic regimes.

  14. Analytical results for a stochastic model of gene expression with arbitrary partitioning of proteins

    Science.gov (United States)

    Tschirhart, Hugo; Platini, Thierry

    2018-05-01

    In biophysics, the search for analytical solutions of stochastic models of cellular processes is often a challenging task. In recent work on models of gene expression, it was shown that a mapping based on partitioning of Poisson arrivals (PPA-mapping) can lead to exact solutions for previously unsolved problems. While the approach can be used in general when the model involves Poisson processes corresponding to creation or degradation, current applications of the method and new results derived using it have been limited to date. In this paper, we present the exact solution of a variation of the two-stage model of gene expression (with time dependent transition rates) describing the arbitrary partitioning of proteins. The methodology proposed makes full use of the PPA-mapping by transforming the original problem into a new process describing the evolution of three biological switches. Based on a succession of transformations, the method leads to a hierarchy of reduced models. We give an integral expression of the time dependent generating function as well as explicit results for the mean, variance, and correlation function. Finally, we discuss how results for time dependent parameters can be extended to the three-stage model and used to make inferences about models with parameter fluctuations induced by hidden stochastic variables.
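
    For orientation, the constant-rate version of the two-stage (mRNA, then protein) model has well-known closed-form steady-state moments; these are classical results (e.g. in the tradition of Thattai and van Oudenaarden), not the time-dependent PPA-mapping solution of the paper, and the rate names and parameter values below are illustrative:

```python
# Steady-state moments of the standard two-stage gene-expression model:
#   mRNA:    made at rate k_m, degraded at rate g_m
#   protein: made at rate k_p per mRNA, degraded at rate g_p

def two_stage_moments(k_m, g_m, k_p, g_p):
    mean_m = k_m / g_m                  # mean mRNA copy number
    mean_p = mean_m * k_p / g_p         # mean protein copy number
    fano_p = 1 + k_p / (g_m + g_p)      # protein variance / mean (Fano factor)
    return mean_m, mean_p, fano_p

mean_m, mean_p, fano_p = two_stage_moments(k_m=2.0, g_m=1.0, k_p=10.0, g_p=0.1)
print(mean_m, mean_p)  # -> 2.0 200.0
```

    The super-Poissonian Fano factor (here 1 + 10/1.1, about 10.1) reflects translational bursting, the noise feature the exact solutions in the paper generalize to time-dependent rates.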

  15. Transport fuels from two-stage coal liquefaction

    Energy Technology Data Exchange (ETDEWEB)

    Benito, A.; Cebolla, V.; Fernandez, I.; Martinez, M.T.; Miranda, J.L.; Oelert, H.; Prado, J.G. (Instituto de Carboquimica CSIC, Zaragoza (Spain))

    1994-03-01

    Four Spanish lignites and their vitrinite concentrates were evaluated for coal liquefaction. Correlations between vitrinite content and conversion in direct liquefaction were observed for the lignites but not for the vitrinite concentrates. The most reactive of the four coals was processed in two-stage liquefaction at a larger scale. First-stage coal liquefaction was carried out in a continuous unit at Clausthal University at a temperature of 400°C, at 20 MPa hydrogen pressure, and with anthracene oil as a solvent. The coal conversion obtained was 75.41%: 3.79% gases, 2.58% primary condensate and 69.04% heavy liquids. A hydroprocessing unit was built at the Instituto de Carboquimica for the second-stage coal liquefaction. Whole and deasphalted liquids from the first-stage liquefaction were processed at 450°C and 10 MPa hydrogen pressure with two commercial catalysts: Harshaw HT-400E (Co-Mo/Al2O3) and HT-500E (Ni-Mo/Al2O3). The effects of liquid hourly space velocity (LHSV), temperature, gas/liquid ratio and catalyst on heteroatom removal from the liquids were studied, and levels of 5 ppm nitrogen and 52 ppm sulphur were reached at 450°C, 10 MPa hydrogen pressure, 0.08 kg H2/kg feedstock and with the Harshaw HT-500E catalyst. The liquids obtained were hydroprocessed again at 420°C, 10 MPa hydrogen pressure and 0.06 kg H2/kg feedstock to hydrogenate the aromatic structures. Under these conditions the aromaticity was reduced considerably, and 39% naphtha and 35% kerosene fractions were obtained. 18 refs., 4 figs., 4 tabs.

  16. Fastest Rates for Stochastic Mirror Descent Methods

    KAUST Repository

    Hanzely, Filip; Richtarik, Peter

    2018-01-01

    Relative smoothness, a notion introduced by Birnbaum et al. (2011) and rediscovered by Bauschke et al. (2016) and Lu et al. (2016), generalizes the standard notion of smoothness typically used in the analysis of gradient-type methods. In this work we take ideas from the well-studied field of stochastic convex optimization and use them to obtain faster algorithms for minimizing relatively smooth functions. We propose and analyze two new algorithms: Relative Randomized Coordinate Descent (relRCD) and Relative Stochastic Gradient Descent (relSGD), both generalizing famous algorithms in the standard smooth setting. The methods we propose can in fact be seen as particular instances of stochastic mirror descent algorithms. One of them, relRCD, corresponds to the first stochastic variant of the mirror descent algorithm with a linear convergence rate.

  17. Fastest Rates for Stochastic Mirror Descent Methods

    KAUST Repository

    Hanzely, Filip

    2018-03-20

    Relative smoothness, a notion introduced by Birnbaum et al. (2011) and rediscovered by Bauschke et al. (2016) and Lu et al. (2016), generalizes the standard notion of smoothness typically used in the analysis of gradient-type methods. In this work we take ideas from the well-studied field of stochastic convex optimization and use them to obtain faster algorithms for minimizing relatively smooth functions. We propose and analyze two new algorithms: Relative Randomized Coordinate Descent (relRCD) and Relative Stochastic Gradient Descent (relSGD), both generalizing famous algorithms in the standard smooth setting. The methods we propose can in fact be seen as particular instances of stochastic mirror descent algorithms. One of them, relRCD, corresponds to the first stochastic variant of the mirror descent algorithm with a linear convergence rate.
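
    A minimal member of the mirror descent family these algorithms belong to is entropic mirror descent (exponentiated gradient) over the probability simplex. The deterministic sketch below is illustrative only, not the relRCD/relSGD algorithms themselves:

```python
import math

# Entropic mirror descent on the probability simplex: multiplicative
# update by exp(-eta * gradient), then renormalize (the Bregman
# projection under the entropy mirror map).

def mirror_descent_simplex(grad, x0, steps, eta):
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        z = sum(x)
        x = [xi / z for xi in x]
    return x

c = [3.0, 1.0, 2.0]  # minimize <c, x> over the simplex
x = mirror_descent_simplex(lambda x: c, [1 / 3] * 3, steps=200, eta=0.5)
print([round(xi, 3) for xi in x])  # mass concentrates on argmin c (index 1)
```

    Replacing the exact gradient with an unbiased estimate at each step yields the stochastic variant; replacing the entropy by another reference function adapts the method to other notions of relative smoothness.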

  18. Stochastic light-cone CTMRG: a new DMRG approach to stochastic models

    CERN Document Server

    Kemper, A; Nishino, T; Schadschneider, A; Zittartz, J

    2003-01-01

    We develop a new variant of the recently introduced stochastic transfer matrix DMRG which we call stochastic light-cone corner-transfer-matrix DMRG (LCTMRG). It is a numerical method to compute dynamic properties of one-dimensional stochastic processes. As suggested by its name, the LCTMRG is a modification of the corner-transfer-matrix DMRG, adjusted by an additional causality argument. As an example, two reaction-diffusion models, the diffusion-annihilation process and the branch-fusion process, are studied and compared with exact data and Monte Carlo simulations to estimate the capability and accuracy of the new method. The number of possible Trotter steps, more than 10^5, shows a considerable improvement over the old stochastic TMRG algorithm.

  19. STOCHASTIC FLOWS OF MAPPINGS

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, the stochastic flow of mappings generated by a Feller convolution semigroup on a compact metric space is studied. This kind of flow is the generalization of superprocesses of stochastic flows and stochastic diffeomorphism induced by the strong solutions of stochastic differential equations.

  20. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    Directory of Open Access Journals (Sweden)

    Chia-Chang Chien

    2009-01-01

    Full Text Available Chia-Chang Chien, Shu-Fen Huang, For-Wey Lung (Department of Psychiatry, Kaohsiung Armed Forces General Hospital, Kaohsiung, Taiwan; Graduate Institute of Behavioral Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan; Department of Psychiatry, National Defense Medical Center, Taipei, Taiwan; Calo Psychiatric Center, Pingtung County, Taiwan). Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We recruited 99 conscripted soldiers whose educational level was senior high school or lower. Every participant took the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R). Results: Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Compared with the two-stage positive screening, the two-stage window screening increased the area under the curve and the positive predictive value, while its cost decreased by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and suggest that the WCST could replace the WAIS-R in future large-scale screenings for ID. Keywords: intellectual disability, intelligence screening, two-stage positive screening, Wisconsin Card Sorting Test, Wechsler Adult Intelligence Scale-Revised
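
    The cut-off selection step can be illustrated with a toy ROC computation that maximizes the Youden index, one common criterion for an "optimum cut-off point". The scores below are invented, not the study's WCST data, and assume lower scores indicate disability:

```python
# Pick the screening cut-off that maximizes the Youden index
# J = sensitivity + specificity - 1 over all candidate thresholds.

def best_cutoff(scores_pos, scores_neg):
    """scores_pos: cases (flagged when score <= cutoff); scores_neg: controls."""
    best = None
    for c in sorted(set(scores_pos + scores_neg)):
        sens = sum(s <= c for s in scores_pos) / len(scores_pos)
        spec = sum(s > c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best

pos = [40, 45, 50, 55, 60, 62]   # toy CLR-like scores, ID group
neg = [58, 64, 68, 70, 75, 80]   # toy scores, non-ID group
j, cut, sens, spec = best_cutoff(pos, neg)
print(cut, round(j, 2))  # -> 62 0.83
```

    A two-cut-off "window" screening, as in the study, would instead send scores between a lower and an upper threshold to the second-stage test rather than classifying them at once.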