WorldWideScience

Sample records for optimal sampling policies

  1. Optimal patent policies: A survey

    DEFF Research Database (Denmark)

    Poulsen, Odile

    2002-01-01

This paper surveys some of the patent literature, focusing in particular on optimal patent policies. We compare two situations: the first, where the government has only a single policy tool with which to design the optimal patent policy, namely the optimal patent length. In the second situation..., the government uses two policy tools, the optimal breadth and length. We show that theoretical models give very different answers as to what the optimal patent policy is. In particular, we show that the optimal patent policy depends, among other things, on the price elasticity of demand, the intersectoral elasticity... of research outputs, as well as the degree of competition in the R&D sector. The actual law on intellectual property, which advocates a unique patent length of 20 years, is in general not supported by theoretical models....

  2. Optimal Temporal Policies in Fluid Milk Advertising

    OpenAIRE

    Vande Kamp, Philip R.; Kaiser, Harry M.

    1998-01-01

    This study develops an approach to obtain optimal temporal advertising strategies when consumers' response to advertising is asymmetric. Using this approach, optimal strategies for generic fluid milk advertising in New York City are determined. Results indicate that pulsed advertising policies are significantly more effective in increasing demand than a uniform advertising policy. Sensitivity analyses show that the optimal advertising policies are insensitive to reasonable variations in inter...

  3. An optimal maintenance policy for machine replacement problem using dynamic programming

    Directory of Open Access Journals (Sweden)

    Mohsen Sadegh Amalnik

    2017-06-01

Full Text Available In this article, we present an acceptance sampling plan for the machine replacement problem based on a backward dynamic programming model. Discounted dynamic programming is used to solve a two-state machine replacement problem. We design a model for maintenance by considering the quality of the item produced. The purpose of the proposed model is to determine the optimal threshold policy for maintenance over a finite time horizon. We create a decision tree based on sequential sampling with the actions renew, repair and do nothing, and seek an optimal threshold for choosing among renewing, repairing and continuing production so as to minimize the expected cost. Results show that the optimal policy is sensitive to the data, i.e., to the probability of defective machines and the parameters defined in the model, as can be clearly demonstrated by sensitivity analysis.
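The backward recursion described above can be sketched for a toy two-state machine (good/bad) with keep/renew actions. All costs, the degradation probability and the discount factor below are assumed example values, not the paper's model:

```python
# Sketch: backward dynamic programming for a two-state machine
# replacement problem (illustrative; parameters are assumed).
# States: 0 = good, 1 = bad. Actions: "keep" or "renew".

BETA = 0.9                  # discount factor (assumed)
P_DEGRADE = 0.3             # P(good -> bad) if we keep the machine
C_RUN = {0: 1.0, 1: 4.0}    # per-period operating cost by state
C_RENEW = 6.0               # cost of renewing (machine returns to good)

def solve(horizon):
    """Return value function V[t][s] and policy A[t][s] by backward induction."""
    V = [[0.0, 0.0] for _ in range(horizon + 1)]
    A = [[None, None] for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for s in (0, 1):
            # keep: pay running cost; a good machine may degrade
            if s == 0:
                ev = (1 - P_DEGRADE) * V[t + 1][0] + P_DEGRADE * V[t + 1][1]
            else:
                ev = V[t + 1][1]
            keep = C_RUN[s] + BETA * ev
            # renew: pay renewal cost, start next period in the good state
            renew = C_RENEW + C_RUN[0] + BETA * V[t + 1][0]
            V[t][s], A[t][s] = min((keep, "keep"), (renew, "renew"))
    return V, A

V, A = solve(10)
print(A[0])  # optimal first-period action in each state
```

With these assumed numbers the recursion produces the threshold structure the abstract describes: keep the machine while it is good and renew once it is bad.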

  4. Optimal Policy in OG Models

    DEFF Research Database (Denmark)

    Ghiglino, Christian; Tvede, Mich

for generations, through fiscal policy, i.e., monetary transfers and taxes. Both situations with and without time discounting are considered. It is shown that if the discount factor is sufficiently close to one then the optimal policy stabilizes the economy, i.e., the equilibrium path has the turnpike property...

  5. Optimal Policy in OG Models

    DEFF Research Database (Denmark)

    Ghiglino, Christian; Tvede, Mich

    2000-01-01

    for generations, through fiscal policy, i.e., monetary transfers and taxes. Situations both with and without time discounting are considered. It is shown that if the discount factor is sufficiently close to one then the optimal policy stabilizes the economy, i.e. the equilibrium path has the turnpike property...

  6. OPTIMIZATION OF THE RUSSIAN MACROECONOMIC POLICY FOR 2016-2020

    Directory of Open Access Journals (Sweden)

    Gilmundinov V. M.

    2016-12-01

Full Text Available This paper is concerned with methodological issues in the elaboration of economic policy and the optimization of the parameters of economic policy instruments. The relevance of this research stems from the growing complexity of social and economic systems, the important role of the state in their functioning, and the multiplicity of economic policy targets relative to the limited number of instruments. Given the wide variety of internal and external constraints on the social and economic development of modern Russia, the approach has a wide range of applications. The key purpose of this study is to extend a dynamic econometric general-equilibrium input-output model of the Russian economy with a sub-model of economic policy optimization. The sub-model of economic policy optimization allows the impact of economic policy measures on target indicators to be estimated and the optimal values of their parameters to be determined. For this purpose, we extend Robert Mundell's approach by considering dynamic optimization and a wider range of economic policy targets and measures. The use of a general-equilibrium input-output model makes it possible to account for the impact of economic policy on different aggregate markets and sectors. Applying the suggested approach, we develop a multi-variant forecast for the Russian economy for 2016-2020, define optimal values of monetary policy parameters and compare the considered variants by the value of social losses. The obtained results can be used in theoretical as well as applied research concerned with economic policy elaboration and the forecasting of social and economic development.

  7. Off-Policy Reinforcement Learning: Optimal Operational Control for Two-Time-Scale Industrial Processes.

    Science.gov (United States)

    Li, Jinna; Kiumarsi, Bahare; Chai, Tianyou; Lewis, Frank L; Fan, Jialu

    2017-12-01

Industrial flow lines are composed of unit processes operating on a fast time scale and performance measurements known as operational indices measured at a slower time scale. This paper presents a model-free optimal solution to a class of two-time-scale industrial processes using off-policy reinforcement learning (RL). First, the lower-layer unit process control loop with a fast sampling period and the upper-layer operational index dynamics at a slow time scale are modeled. Second, a general optimal operational control problem is formulated to optimally prescribe the set-points for the unit industrial process. Then, a zero-sum game off-policy RL algorithm is developed to find the optimal set-points by using data measured in real time. Finally, a simulation experiment is employed for an industrial flotation process to show the effectiveness of the proposed method.

  8. Endogenous price flexibility and optimal monetary policy

    OpenAIRE

    Ozge Senay; Alan Sutherland

    2014-01-01

    Much of the literature on optimal monetary policy uses models in which the degree of nominal price flexibility is exogenous. There are, however, good reasons to suppose that the degree of price flexibility adjusts endogenously to changes in monetary conditions. This article extends the standard new Keynesian model to incorporate an endogenous degree of price flexibility. The model shows that endogenizing the degree of price flexibility tends to shift optimal monetary policy towards complete i...

  9. An optimal maintenance policy for machine replacement problem using dynamic programming

    OpenAIRE

    Mohsen Sadegh Amalnik; Morteza Pourgharibshahi

    2017-01-01

In this article, we present an acceptance sampling plan for machine replacement problem based on the backward dynamic programming model. Discount dynamic programming is used to solve a two-state machine replacement problem. We plan to design a model for maintenance by considering the quality of the item produced. The purpose of the proposed model is to determine the optimal threshold policy for maintenance in a finite time horizon. We create a decision tree based on a sequential sampling inc...

  10. Extreme Trust Region Policy Optimization for Active Object Recognition.

    Science.gov (United States)

Liu, Huaping; Wu, Yupei; Sun, Fuchun

    2018-06-01

In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is realized by an extreme learning machine and, therefore, leads to an efficient optimization algorithm. The experimental results on the publicly available data set show the advantages of the developed extreme trust region optimization method.

  11. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

In order to improve quality and cost, sampling with c = 0 has been introduced to the inspection of outsourced product. According to the current quality level (p = 0.4%), we confirmed the optimal sampling plan, namely: Ac = 0; if N ≤ 3000, n = 55; if 3001 ≤ N ≤ 10000, n = 86; if N ≥ 10001, n = 108. Through analyzing the OC curve, we came to the conclusion that when N ≤ 3000, the protective ability of the optimal sampling plan for product quality is stronger than that of the current one. For the same 'consumer's risk', the product quality under the optimal sampling plan is superior to that under the current one. (authors)
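The OC-curve reasoning behind a c = 0 plan is easy to reproduce: with acceptance number Ac = 0, a lot is accepted only if the sample contains no defectives, so under binomial sampling the acceptance probability is (1 - p)^n. A short sketch using the sample sizes quoted above:

```python
# Sketch: operating-characteristic (OC) values for a c = 0 single
# sampling plan.  With Ac = 0 a lot passes only when the sample has
# zero defectives, so P(accept) = (1 - p)**n under binomial sampling.

def p_accept(p, n):
    """Probability of accepting a lot with defect rate p, sample size n, c = 0."""
    return (1.0 - p) ** n

# Sample sizes quoted in the record for the three lot-size bands.
for n in (55, 86, 108):
    print(n, round(p_accept(0.004, n), 3))
```

At the stated quality level p = 0.4%, the acceptance probabilities come out near 0.80, 0.71 and 0.65 for n = 55, 86 and 108, so the larger samples give the consumer stronger protection.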

  12. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well established in solid-state physics and has only recently been introduced for applications in biochemistry and the life sciences. The β-NMR collaboration will be applying to the INTC committee in September for beam time for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme; therefore, sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques for studying Cu and Zn complexes under native conditions, to search for relevant binding candidates for Cu and Zn applicable to β-NMR, and eventually to evaluate selected binding candidates using UV-VIS spectrometry.

  13. Optimization of Overflow Policies in Call Centers

    DEFF Research Database (Denmark)

    Koole, G.M.; Nielsen, B.F.; Nielsen, T.B.

    2015-01-01

    . A Markov decision chain is used to determine the optimal policy. This policy outperforms considerably the ones used most often in practice, which use a fixed threshold. The present method can be used also for other call-center models and other situations where performance is based on actual waiting times...

  14. A bivariate optimal replacement policy for a multistate repairable system

    International Nuclear Information System (INIS)

    Zhang Yuanlin; Yam, Richard C.M.; Zuo, Ming J.

    2007-01-01

    In this paper, a deteriorating simple repairable system with k+1 states, including k failure states and one working state, is studied. It is assumed that the system after repair is not 'as good as new' and the deterioration of the system is stochastic. We consider a bivariate replacement policy, denoted by (T,N), in which the system is replaced when its working age has reached T or the number of failures it has experienced has reached N, whichever occurs first. The objective is to determine the optimal replacement policy (T,N)* such that the long-run expected profit per unit time is maximized. The explicit expression of the long-run expected profit per unit time is derived and the corresponding optimal replacement policy can be determined analytically or numerically. We prove that the optimal policy (T,N)* is better than the optimal policy N* for a multistate simple repairable system. We also show that a general monotone process model for a multistate simple repairable system is equivalent to a geometric process model for a two-state simple repairable system in the sense that they have the same structure for the long-run expected profit (or cost) per unit time and the same optimal policy. Finally, a numerical example is given to illustrate the theoretical results
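A (T, N) policy of this kind can be evaluated numerically by renewal-reward simulation. The sketch below uses a simplified geometric-process model (exponential up and repair times with geometrically changing means) and assumed example rates, costs and reward, not the paper's explicit expressions:

```python
# Sketch: Monte Carlo evaluation of a bivariate (T, N) replacement
# policy: replace at working age T or at the N-th failure, whichever
# occurs first.  All parameters below are assumed example values.
import random

MU, NU = 10.0, 1.0               # initial mean up / repair time (assumed)
A, B = 0.8, 1.2                  # geometric decay / growth per failure
REWARD, C_REP, C_REPL = 1.0, 0.5, 20.0   # reward rate, repair cost rate, replacement cost

def avg_profit(T, N, cycles=20000, seed=1):
    """Estimate long-run profit per unit time via renewal-reward simulation."""
    rng = random.Random(seed)
    total_profit = total_time = 0.0
    for _ in range(cycles):
        work = down = 0.0
        for i in range(N):
            up = rng.expovariate(1.0 / (MU * A ** i))
            if work + up >= T:       # working age reaches T first: replace
                work = T
                break
            work += up               # i-th failure occurs
            if i < N - 1:            # repair unless this was the N-th failure
                down += rng.expovariate(1.0 / (NU * B ** i))
        total_profit += REWARD * work - C_REP * down - C_REPL
        total_time += work + down
    return total_profit / total_time

print(round(avg_profit(T=30.0, N=4), 3))
```

Scanning avg_profit over a grid of (T, N) pairs would locate the bivariate optimum (T, N)* that the paper derives analytically.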

  15. Optimal Replacement Policies for Non-Uniform Cache Objects with Optional Eviction

    National Research Council Canada - National Science Library

    Bahat, Omri; Makowski, Armand M

    2002-01-01

    .... However, since the introduction of optimal replacement policies for conventional caching, the problem of finding optimal replacement policies under the factors indicated has not been studied in any systematic manner...

  16. Financing and funding health care: Optimal policy and political implementability.

    Science.gov (United States)

    Nuscheler, Robert; Roeder, Kerstin

    2015-07-01

    Health care financing and funding are usually analyzed in isolation. This paper combines the corresponding strands of the literature and thereby advances our understanding of the important interaction between them. We investigate the impact of three modes of health care financing, namely, optimal income taxation, proportional income taxation, and insurance premiums, on optimal provider payment and on the political implementability of optimal policies under majority voting. Considering a standard multi-task agency framework we show that optimal health care policies will generally differ across financing regimes when the health authority has redistributive concerns. We show that health care financing also has a bearing on the political implementability of optimal health care policies. Our results demonstrate that an isolated analysis of (optimal) provider payment rests on very strong assumptions regarding both the financing of health care and the redistributive preferences of the health authority. Copyright © 2015 Elsevier B.V. All rights reserved.

  17. State dependent optimization of measurement policy

    Science.gov (United States)

    Konkarikoski, K.

    2010-07-01

Measurements are the key to rational decision making. Measurement information generates value when it is applied in decision making. An investment cost and maintenance costs are associated with each component of the measurement system. Clearly, under a given set of scenarios, there is a measurement setup that is optimal in expected (discounted) utility. This paper deals with how the measurement policy optimization is affected by different system states and how this problem can be tackled.

  18. State dependent optimization of measurement policy

    International Nuclear Information System (INIS)

    Konkarikoski, K

    2010-01-01

Measurements are the key to rational decision making. Measurement information generates value when it is applied in decision making. An investment cost and maintenance costs are associated with each component of the measurement system. Clearly, under a given set of scenarios, there is a measurement setup that is optimal in expected (discounted) utility. This paper deals with how the measurement policy optimization is affected by different system states and how this problem can be tackled.

  19. On Optimal Policies for Network-Coded Cooperation

    DEFF Research Database (Denmark)

    Khamfroush, Hana; Roetter, Daniel Enrique Lucani; Pahlevani, Peyman

    2015-01-01

    Network-coded cooperative communication (NC-CC) has been proposed and evaluated as a powerful technology that can provide a better quality of service in the next-generation wireless systems, e.g., D2D communications. Previous contributions have focused on performance evaluation of NC-CC scenarios...... rather than searching for optimal policies that can minimize the total cost of reliable packet transmission. We break from this trend by initially analyzing the optimal design of NC-CC for a wireless network with one source, two receivers, and half-duplex erasure channels. The problem is modeled...... as a special case of Markov decision process (MDP), which is called stochastic shortest path (SSP), and is solved for any field size, arbitrary number of packets, and arbitrary erasure probabilities of the channels. The proposed MDP solution results in an optimal transmission policy per time slot, and we use...
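The SSP formulation can be illustrated on a toy single-packet instance of the one-source, two-receiver broadcast problem: the expected number of transmissions satisfies a small fixed-point system solvable by value iteration. This is not the paper's full MDP (which covers arbitrary packets and field sizes), and the erasure probabilities are assumed example values:

```python
# Sketch: value iteration for a tiny stochastic-shortest-path (SSP)
# model of broadcasting one (coded) packet from a source to two
# receivers over independent erasure channels.  Toy single-packet
# instance with assumed erasure probabilities.

E1, E2 = 0.2, 0.3          # erasure probability of each channel (assumed)
A, B = 1 - E1, 1 - E2      # per-slot delivery probabilities

def expected_slots(iters=2000):
    # state = (r1_done, r2_done); absorbing state is (1, 1) with cost 0
    V = {(0, 0): 0.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.0}
    for _ in range(iters):
        V[(1, 0)] = 1 + (1 - B) * V[(1, 0)]            # only r2 missing
        V[(0, 1)] = 1 + (1 - A) * V[(0, 1)]            # only r1 missing
        V[(0, 0)] = 1 + (1 - A) * (1 - B) * V[(0, 0)] \
                      + A * (1 - B) * V[(1, 0)] \
                      + (1 - A) * B * V[(0, 1)]
    return V

V = expected_slots()
print(round(V[(0, 0)], 3))  # expected broadcasts until both receivers are done
```

The single-receiver values converge to the familiar geometric means 1/(1 - e_i); the joint state is cheaper than serving the receivers one after the other because one broadcast can serve both.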

  20. Optimal government policies in models with heterogeneous agents

    Czech Academy of Sciences Publication Activity Database

    Boháček, Radim; Kejak, Michal

    -, č. 272 (2005), s. 1-55 ISSN 1211-3298 Institutional research plan: CEZ:AV0Z70850503 Keywords : optimal macroeconomic policy * optimal taxation * distribution of wealth and income Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp272.pdf

  1. The determination of optimal climate policy

    International Nuclear Information System (INIS)

    Aaheim, Asbjoern

    2010-01-01

Analyses of the costs and benefits of climate policy, such as the Stern Review, evaluate alternative strategies to reduce greenhouse gas emissions by requiring that the cost of emission cuts in each and every year be covered by the associated value of avoided damage, discounted at an exogenously chosen rate. An alternative is to optimize abatement programmes towards a stationary state, where the concentrations of greenhouse gases are stabilized and shadow prices, including the rate of discount, are determined endogenously. This paper examines the properties of optimized stabilization. It turns out that the implications for the evaluation of climate policy are substantial when compared with evaluations of the present value of costs and benefits based on exogenously chosen shadow prices. Comparisons of discounted costs and benefits tend to exaggerate the importance of the choice of discount rate, while ignoring the importance of future abatement costs, which turn out to be essential for the optimal abatement path. Numerical examples suggest that early action may be more beneficial than indicated by comparisons of costs and benefits discounted at a rate chosen on the basis of current observations. (author)

  2. Optimal Tax-Transfer Policies, Life-Cycle Labour Supply and Present-Biased Preferences

    DEFF Research Database (Denmark)

    Gunnersen, Lasse Frisgaard; Rasmussen, Bo Sandemann

Using a two-period model with two types of agents that are characterized by present-biased preferences, second-best optimal tax-transfer policies are considered. The paternalistic optimal tax-transfer policy has two main concerns: income redistribution from high- to low-ability households...... consequences not only for optimal subsidies to savings but also for optimal marginal income taxes....

  3. Optimal Repair And Replacement Policy For A System With Multiple Components

    Science.gov (United States)

    2016-06-17

increases. A future research direction is to develop efficient heuristic methods that can produce near-optimal policy with much less computational...decision variables represent the long-run fraction of time for each state-action pair. The objective function is the linear combination of long-run...exists an optimal policy. To find this policy we solve the linear program. The solution shows that for each state only one state-action pair, represented

  4. The impact of uncertainty on optimal emission policies

    Science.gov (United States)

    Botta, Nicola; Jansson, Patrik; Ionescu, Cezar

    2018-05-01

We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainties on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of trespassing critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies will become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.

  5. Dynamic mobility applications policy analysis : policy and institutional issues for intelligent network flow optimization (INFLO).

    Science.gov (United States)

    2014-12-01

The report documents policy considerations for the Intelligent Network Flow Optimization (INFLO) connected vehicle applications bundle. INFLO aims to optimize network flow on freeways and arterials by informing motorists of existing and impen...

  6. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Roč. 52, č. 2 (2015), s. 419-440 ISSN 0021-9002 Grant - others:GA AV ČR(CZ) 171396 Institutional support: RVO:67985556 Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality Subject RIV: BC - Control Systems Theory Impact factor: 0.665, year: 2015 http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  7. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging, in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. fMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  8. Optimization of ACC system spacing policy on curved highway

    Science.gov (United States)

    Ma, Jun; Qian, Kun; Gong, Zaiyan

    2017-05-01

The paper optimizes the original spacing policy based on variable time headway (VTH) by introducing the road curvature K into the spacing policy, in order to cope with following the wrong vehicle, or failing to follow a vehicle, caused by the radar's limitations on curves in an ACC system. A longitudinal vehicle dynamics model is established in MATLAB/Simulink. Finally, three common cases are set up, in which the vehicle ahead runs at a uniform velocity, runs at an accelerated velocity, or brakes suddenly; these cases are simulated on curves with different curvatures, the curve spacing policy is analyzed from the perspectives of safety and vehicle-following efficiency, and a conclusion is drawn as to whether the optimization scheme is effective.

  9. Trade Liberalization and Optimal Environmental Policies in Vertical Related Markets

    Directory of Open Access Journals (Sweden)

    Yan-Shu Lin

    2012-12-01

Full Text Available This paper establishes a symmetric two-country model with vertically related markets. In the downstream market, there is one firm in each country selling a homogeneous good, whose production generates pollution, to its home and foreign markets à la Brander (1981). In the intermediate-good market, there is also one upstream firm in each country, supplying the intermediate good only to its own country's downstream market. The upstream firms can choose either price or quantity to maximize their profits. With this setting, the paper examines the optimal environmental policy and how it is affected by the tariff on the final good. It is found that, under free trade, the optimal final-good output with an imperfect intermediate-good market will reach the same level as that with a perfect intermediate-good market after the optimal emission tax is imposed. The optimal environmental tax is smaller, and the optimal environmental policy is less likely to be a green strategy under trade liberalization, if the intermediate-good market is imperfectly rather than perfectly competitive. On the other hand, the optimal environmental tax is necessarily higher if the upstream firm chooses price rather than quantity. Moreover, the optimal environmental policy is less likely to be a green strategy under trade liberalization if the upstream firms choose quantity rather than price to maximize their profits.

  10. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

Full Text Available We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding the optimal solutions. The availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.
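A minimal version of such a GA is sketched below: integer chromosomes encode the replacement threshold N, the fitness is the long-run average cost rate of a geometric-process cycle, and the availability requirement is enforced with a penalty. All parameters are assumed example values, not the paper's calibrated model:

```python
# Sketch: genetic algorithm searching the replacement threshold N
# that minimizes the average cost rate under an availability
# constraint.  Cost model and parameters are assumed examples.
import random

MU_UP, MU_REP = 100.0, 5.0   # initial mean operating / repair time
A_DECAY, B_GROW = 0.95, 1.1  # geometric factors per failure
C_REP, C_REPL = 20.0, 2000.0 # repair cost per unit repair time, replacement cost
A_MIN = 0.90                 # required availability

def cost_rate(n):
    """Average cost rate of a cycle: n operating periods, n-1 repairs, then replace."""
    up = sum(MU_UP * A_DECAY ** k for k in range(n))
    down = sum(MU_REP * B_GROW ** k for k in range(n - 1))
    avail = up / (up + down)
    rate = (C_REP * down + C_REPL) / (up + down)
    return rate + (1e6 if avail < A_MIN else 0.0)   # penalize infeasible N

def ga(pop_size=20, gens=50, n_max=60):
    random.seed(0)
    pop = [random.randint(1, n_max) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost_rate)
        elite = pop[: pop_size // 2]
        # offspring: mutate each elite by a small random integer step
        pop = elite + [max(1, e + random.randint(-3, 3)) for e in elite]
    return min(pop, key=cost_rate)

best = ga()
print(best, round(cost_rate(best), 3))
```

For a single integer decision variable an exhaustive scan would of course suffice; the GA form is shown because it extends directly to richer policy encodings like the paper's.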

  11. Input-output interactions and optimal monetary policy

    DEFF Research Database (Denmark)

    Petrella, Ivan; Santoro, Emiliano

    2011-01-01

    This paper deals with the implications of factor demand linkages for monetary policy design in a two-sector dynamic general equilibrium model. Part of the output of each sector serves as a production input in both sectors, in accordance with a realistic input–output structure. Strategic...... complementarities induced by factor demand linkages significantly alter the transmission of shocks and amplify the loss of social welfare under optimal monetary policy, compared to what is observed in standard two-sector models. The distinction between value added and gross output that naturally arises...... in this context is of key importance to explore the welfare properties of the model economy. A flexible inflation targeting regime is close to optimal only if the central bank balances inflation and value added variability. Otherwise, targeting gross output variability entails a substantial increase in the loss...

  12. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.
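The direct-search idea can be sketched with a generic compass (pattern) search, shown here minimizing a toy 2-D objective; in the printer-characterization setting the objective would instead be the colorimetric error of the profile built from the candidate samples:

```python
# Sketch: derivative-free compass (pattern) search.  Polls each
# coordinate direction, keeps improvements, and halves the step
# when no direction improves.  The quadratic objective is a toy
# stand-in for a real characterization-error function.

def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):          # poll each coordinate direction
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                  # shrink the stencil on failure
            if step < tol:
                break
    return x, fx

# toy objective with minimum at (3, -1)
f = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2
x, fx = compass_search(f, [0.0, 0.0])
print([round(c, 3) for c in x], round(fx, 6))
```

The same loop applies unchanged when the decision vector encodes sample positions and the objective is evaluated by building and testing a profile.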

  13. Transparency and corruption: an optimal taxation policy

    OpenAIRE

    Jellal, Mohamed; Bouzahzah, Mohamed

    2013-01-01

Under the Principal-Agent-Supervisor paradigm, we examine in this paper how a tax collection agency changes optimal schemes in order to lessen the occurrence of corruption between the tax collector and the taxpayer. Indeed, the Principal, who maximizes the expected net fiscal revenue, reacts by decreasing tax rates when the supervisor is likely to engage in a corrupt transaction with the taxpayer. Therefore, the optimal policy against collusion and corruption may explain the rationale of the greater rel...

  14. Optimal Monetary Policy Cooperation through State-Independent Contracts with Targets

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2000-01-01

Simple state-independent monetary institutions are shown to secure optimal cooperative policies in a stochastic, linear-quadratic two-country world with international policy spill-overs and national credibility problems. Institutions characterize delegation to independent central bankers facing...... quadratic performance-related contracts punishing or rewarding deviations from primary and intermediate policy targets...

  15. Optimal policies for cumulative damage models with maintenance last and first

    International Nuclear Information System (INIS)

    Zhao, Xufeng; Qian, Cunhua; Nakagawa, Toshio

    2013-01-01

From the economic viewpoint of several combined PM policies in reliability theory, this paper takes up a standard cumulative damage model to which the notion of maintenance last is applied, i.e., the unit undergoes preventive maintenance before failure at a planned time T, at a damage level Z, or at a shock number N, whichever occurs last. Expected cost rates are formulated in detail, and optimization problems for two alternative policies which combine time-based with condition-based preventive maintenance are discussed, i.e., the optimal T_L* for given N, Z_L* for given T, and N_L* for given T are rigorously obtained. Comparison methods between such maintenance last and conventional maintenance first are explored. It is determined theoretically and numerically which policy should be adopted, according to the different methods in different cases, when the time-based or the condition-based PM policy is optimized.

  16. Optimal pricing policies for services with consideration of facility maintenance costs

    Science.gov (United States)

    Yeh, Ruey Huei; Lin, Yi-Fang

    2012-06-01

    For survival and success, pricing is an essential issue for service firms. This article deals with the pricing strategies for services with substantial facility maintenance costs. For this purpose, a mathematical framework that incorporates service demand and facility deterioration is proposed to address the problem. The facility and customers constitute a service system driven by Poisson arrivals and exponential service times. A service demand with increasing price elasticity and a facility lifetime with strictly increasing failure rate are also adopted in modelling. By examining the bidirectional relationship between customer demand and facility deterioration in the profit model, the pricing policies of the service are investigated. Then analytical conditions of customer demand and facility lifetime are derived to achieve a unique optimal pricing policy. The comparative statics properties of the optimal policy are also explored. Finally, numerical examples are presented to illustrate the effects of parameter variations on the optimal pricing policy.
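The trade-off the abstract describes (a higher price cuts demand, but lower demand also reduces utilization-driven maintenance cost) can be sketched with a simple grid search. The exponential demand curve, whose price elasticity K*p increases in p, and all parameters are assumed example values, not the paper's M/M/1 formulation:

```python
# Sketch: grid search for a profit-maximizing service price when
# demand is price-elastic and maintenance cost grows with facility
# utilization.  Demand form and parameters are assumed examples.

from math import exp

LAM0, K = 8.0, 0.5       # demand lam(p) = LAM0 * exp(-K * p); elasticity K*p grows with p
MU = 10.0                # service rate
C_MAINT = 30.0           # maintenance cost per unit of utilization

def profit(p):
    lam = LAM0 * exp(-K * p)
    if lam >= MU:                        # the queue must be stable
        return float("-inf")
    return lam * p - C_MAINT * lam / MU  # revenue rate minus maintenance rate

best_p = max((round(p * 0.01, 2) for p in range(1, 2001)), key=profit)
print(best_p)  # analytically p* = C_MAINT/MU + 1/K = 5.0 for this demand form
```

With this demand form the first-order condition gives p* = C_MAINT/MU + 1/K, so the grid search simply recovers the closed-form optimum; in the paper's richer model the comparable uniqueness conditions are derived analytically.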

  17. Prescription drug samples--does this marketing strategy counteract policies for quality use of medicines?

    Science.gov (United States)

    Groves, K E M; Sketris, I; Tett, S E

    2003-08-01

    Prescription drug samples, as used by the pharmaceutical industry to market their products, are of current interest because of their influence on prescribing, and their potential impact on consumer safety. Very little research has been conducted into the use and misuse of prescription drug samples, and the influence of samples on health policies designed to improve the rational use of medicines. This is a topical issue in the prescription drug debate, with increasing costs and increasing concerns about optimizing use of medicines. This manuscript critically evaluates the research that has been conducted to date about prescription drug samples, discusses the issues raised in the context of traditional marketing theory, and suggests possible alternatives for the future.

  18. Chaotic dynamics in optimal monetary policy

    Science.gov (United States)

    Gomes, O.; Mendes, V. M.; Mendes, D. A.; Sousa Ramos, J.

    2007-05-01

    There is by now a large consensus in modern monetary policy. This consensus has been built upon a dynamic general equilibrium model of optimal monetary policy as developed by, e.g., Goodfriend and King [NBER Macroeconomics Annual 1997, edited by B. Bernanke and J. Rotemberg (Cambridge, Mass.: MIT Press, 1997), pp. 231-282], Clarida et al. [J. Econ. Lit. 37, 1661 (1999)], Svensson [J. Mon. Econ. 43, 607 (1999)] and Woodford [Interest and Prices: Foundations of a Theory of Monetary Policy (Princeton, New Jersey: Princeton University Press, 2003)]. In this paper we extend the standard optimal monetary policy model by introducing nonlinearity into the Phillips curve. Under the specific form of nonlinearity proposed in our paper (which allows for convexity and concavity and secures closed-form solutions), we show that the introduction of a nonlinear Phillips curve into the structure of the standard model in a discrete-time and deterministic framework produces radical changes to the major conclusions regarding stability and the efficiency of monetary policy. We emphasize the following main results: (i) instead of a unique fixed point we end up with multiple equilibria; (ii) instead of saddle-path stability, for different sets of parameter values we may have saddle stability, totally unstable equilibria and chaotic attractors; (iii) for certain degrees of convexity and/or concavity of the Phillips curve, where endogenous fluctuations arise, one is able to encounter various results that seem intuitively correct. Firstly, when the Central Bank pays attention essentially to inflation targeting, the inflation rate has a lower mean and is less volatile; secondly, when the degree of price stickiness is high, the inflation rate displays a larger mean and higher volatility (but this is sensitive to the values given to the parameters of the model); and thirdly, the higher the target value of the output gap chosen by the Central Bank, the higher are the inflation rate and its volatility.

  19. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  20. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

    An analytical optimization method for preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T_0 and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM actions and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for periodic PM policy performs in response to changes in the distribution of the failure data and the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM are also analytically determined. • The model is validated by several numerical examples
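    The maximum likelihood step mentioned above can be sketched for the simplest case. The code below is an assumed, simplified version: complete, uncensored failure data, with the paper's imperfect-PM adjustment involving the current interval T_0 omitted; the failure times are invented for illustration.

```python
# Minimal sketch: maximum likelihood estimation of Weibull shape (beta) and
# scale (eta) from complete failure-time data. The shape estimate solves the
# standard stationarity condition of the log-likelihood; the scale then
# follows in closed form.
import math

def weibull_mle(times, lo=0.01, hi=20.0, tol=1e-9):
    n = len(times)
    mean_log = sum(math.log(t) for t in times) / n

    def g(beta):
        # Stationarity condition of the Weibull log-likelihood in beta.
        s0 = sum(t ** beta for t in times)
        s1 = sum((t ** beta) * math.log(t) for t in times)
        return s1 / s0 - 1.0 / beta - mean_log

    # g is increasing in beta, so bisection is a robust root finder here.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in times) / n) ** (1.0 / beta)
    return beta, eta

# Hypothetical times between failures, in hours.
data = [120.0, 340.0, 410.0, 560.0, 700.0, 910.0]
beta_hat, eta_hat = weibull_mle(data)
```

    A shape estimate above 1 indicates an increasing failure rate, which is what makes periodic PM worthwhile in the first place.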

  1. Modeling and optimizing periodically inspected software rejuvenation policy based on geometric sequences

    International Nuclear Information System (INIS)

    Meng, Haining; Liu, Jianjun; Hei, Xinhong

    2015-01-01

    Software aging is characterized by an increasing failure rate, progressive performance degradation and even a sudden crash in a long-running software system. Software rejuvenation is an effective method to counteract software aging. A periodically inspected rejuvenation policy for software systems is studied. The consecutive inspection intervals are assumed to form a decreasing geometric sequence, and based on the inspection times of the software system and its failure features, software rejuvenation or system recovery is performed. The system availability function and cost rate function are obtained, and the optimal inspection time and rejuvenation interval are both derived to maximize system availability and minimize the cost rate. Then, boundary conditions of the optimal rejuvenation policy are deduced. Finally, numerical experiment results show the effectiveness of the proposed policy. Furthermore, compared with the existing software rejuvenation policy, the new policy achieves higher system availability. - Highlights: • A periodically inspected rejuvenation policy for software systems is studied. • A decreasing geometric sequence is used to denote the consecutive inspection intervals. • The optimal inspection times and rejuvenation interval are found. • The new policy is capable of reducing average cost and improving system availability
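    The inspection schedule assumed in this paper is easy to make concrete. A small sketch, with a hypothetical first interval tau and ratio q:

```python
# Consecutive inspection intervals form a decreasing geometric sequence
# (the paper's assumption); tau and q are hypothetical values.

def inspection_times(tau=10.0, q=0.8, n=8):
    """Return the first n inspection epochs t_1 < t_2 < ... where the
    k-th interval is tau * q**(k-1), with 0 < q < 1."""
    times, t = [], 0.0
    for k in range(n):
        t += tau * q ** k
        times.append(t)
    return times

schedule = inspection_times()
intervals = [schedule[0]] + [b - a for a, b in zip(schedule, schedule[1:])]
# Intervals shrink geometrically, so inspections become more frequent as the
# software ages; the total horizon is bounded by tau / (1 - q) = 50 here.
```

    The shrinking intervals match the increasing failure rate of an aging system: cheap, sparse checks early, frequent checks as a crash becomes more likely.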

  2. China's optimal stockpiling policies in the context of new oil price trend

    International Nuclear Information System (INIS)

    Xie, Nan; Yan, Zhijun; Zhou, Yi; Huang, Wenjun

    2017-01-01

    Optimizing the size of oil stockpiling plays a fundamental role in the process of making national strategic petroleum reserve (SPR) policies. There have been extensive studies on the operating strategies of SPR. However, the previous literature has paid more attention to a booming or stable international oil market, while few studies have analyzed the impact of a long-term low oil price on SPR policy. As a supplement, this paper extends a static model to study China's optimal stockpiling policy under different oil price trends, and in response to different current oil prices. A new variable, “FC”, which captures the appreciation and depreciation of the economic value of the reserved oil, has been taken into account to assess the optimal size of the SPR. In this paper, a broader, multi-perspective view is provided of the policies of China's SPR, especially under different trends of international oil price fluctuations. - Highlights: • We extended a static model to study the optimal stockpiling size of China's SPR. • A new variable “FC” was applied to illustrate the shifting financial value of the SPR. • We analyzed how the current oil price and varied predictions influence the optimal size. • Operational measures could be adjusted at the end of each decision-making period. • A more multifaceted view might be provided for China's SPR policy-making.

  3. Fear of Floating: An optimal discretionary monetary policy analysis

    OpenAIRE

    Madhavi Bokil

    2005-01-01

    This paper explores the idea that “Fear of Floating” and accompanying pro-cyclical interest rate policies observed in the case of some emerging market economies may be justified as an optimal discretionary monetary policy response to shocks. The paper also examines how the differences in monetary policies may lead to different degrees of this fear. These questions are addressed with a small open economy, new-Keynesian model with endogenous capital accumulation and sticky prices. The economy ...

  4. Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model

    OpenAIRE

    Shulgin, A.

    2015-01-01

    Optimization of the coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow the volatility estimated on Russian data for 2001-2012 to be decreased by about 20%. The degree of exchange rate flexibility parameter was found to be low...

  5. Optimal Replacement and Management Policies for Beef Cows

    OpenAIRE

    W. Marshall Frasier; George H. Pfeiffer

    1994-01-01

    Beef cow replacement studies have not reflected the interaction between herd management and the culling decision. We demonstrate techniques for modeling optimal beef cow replacement intervals and discrete management policies by incorporating the dynamic effects of management on future productivity when biological response is uncertain. Markovian decision analysis is used to identify optimal beef cow management on a ranch typical of the Sandhills region of Nebraska. Issues of breeding season l...

  6. On the Optimal Design of Distributed Generation Policies: Is Net Metering Ever Optimal?

    OpenAIRE

    Brown, David; Sappington, David

    2014-01-01

    Electricity customers who install solar panels often are paid the prevailing retail price for the electricity they generate. We show that this "net metering" policy typically is not optimal. A payment for distributed generation (w) that is below the retail price of electricity (r) will induce the welfare-maximizing level of distributed generation (DG) when centralized generation and DG produce similar (pollution) externalities. However, w can optimally exceed r when DG entails a substantial r...

  7. Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis

    OpenAIRE

    Arato, Hiroki

    2009-01-01

    This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and the optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than that in a standard exogenous growth New Keynesian model with physical capital accumulation. The optimal operational monetary policy rule makes the nominal interest rate respond s...

  8. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available The available full text consists of presentation outline slides covering two studies on optimal sampling schemes applied in geology using hyperspectral remote sensing (objectives, study areas, data, methodology, results, and conclusions).

  9. Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.

    Science.gov (United States)

    Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L

    2017-10-01

    The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal control method. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.
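    As a rough, self-contained illustration of data-based Q-function policy iteration — a much simplified cousin of the PGADP algorithm, not the authors' method — consider a scalar linear system with quadratic cost. The Q-function is fit to sampled transitions by least squares and the policy is improved greedily; all dynamics and parameter values are invented.

```python
# Data-based policy iteration sketch on a scalar system x' = a*x + b*u with
# stage cost x^2 + u^2, quadratic Q-function features, and a policy improved
# purely from sampled transitions (no explicit model used in the fit).

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 4):
                M[r][c] -= f * M[i][c]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (M[i][3] - sum(M[i][c] * x[c] for c in range(i + 1, 3))) / M[i][i]
    return x

a, b_dyn = 0.9, 0.5                     # assumed scalar dynamics x' = a*x + b*u

def phi(x, u):
    """Quadratic Q-function features: Q = w1*x^2 + w2*x*u + w3*u^2."""
    return [x * x, x * u, u * u]

# Sampled (state, exploratory input) pairs on a grid.
data = [(i * 0.5, j * 0.5) for i in range(-4, 5) for j in range(-4, 5)]

k = 0.0                                 # initial stabilizing gain, policy u = -k*x
for _ in range(12):
    # Policy evaluation: least squares on the Bellman equation
    #   w . (phi(x,u) - phi(x', -k*x')) = x^2 + u^2   with x' = a*x + b*u
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for x, u in data:
        xn = a * x + b_dyn * u
        d = [p - q for p, q in zip(phi(x, u), phi(xn, -k * xn))]
        cost = x * x + u * u
        for i in range(3):
            rhs[i] += d[i] * cost
            for j in range(3):
                A[i][j] += d[i] * d[j]
    w = solve3(A, rhs)
    k = w[1] / (2.0 * w[2])             # greedy improvement: u = -w2/(2*w3) * x
```

    The learned gain converges to the Riccati-optimal feedback for this system (about 0.62), although the fit never touches the model directly, only the sampled transitions.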

  10. Optimal fleet conversion policy from a life cycle perspective

    International Nuclear Information System (INIS)

    Hyung Chul Kim; Ross, M.H.; Keoleian, G.A.

    2004-01-01

    Vehicles typically deteriorate with accumulating mileage and emit more tailpipe air pollutants per mile. Although incentive programs for scrapping old, high-emitting vehicles have been implemented to reduce urban air pollutants and greenhouse gases, these policies may create additional sales of new vehicles as well. From a life cycle perspective, the emissions from both the additional vehicle production and scrapping need to be addressed when evaluating the benefits of scrapping older vehicles. This study explores an optimal fleet conversion policy based on mid-sized internal combustion engine vehicles in the US, defined as one that minimizes total life cycle emissions from the entire fleet of new and used vehicles. To describe vehicles' lifetime emission profiles as functions of accumulated mileage, a series of life cycle inventories characterizing environmental performance for vehicle production, use, and retirement was developed for each model year between 1981 and 2020. A simulation program is developed to investigate ideal and practical fleet conversion policies separately for three regulated pollutants (CO, NMHC, and NOx) and for CO2. According to the simulation results, accelerated scrapping policies are generally recommended to reduce regulated emissions, but they may increase greenhouse gases. Multi-objective analysis based on economic valuation methods was used to investigate trade-offs among emissions of different pollutants for optimal fleet conversion policies. (author)
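    The core life-cycle tradeoff can be reduced to a one-line calculation. This is an illustrative toy, not the paper's inventory data: producing a new vehicle emits a fixed amount up front, while per-mile use emissions rise linearly with accumulated mileage, so average emissions per mile are minimized at a finite scrappage mileage.

```python
# Toy scrappage tradeoff with hypothetical numbers: average emissions per
# mile over a vehicle produced, driven M miles, then scrapped is
#   g(M) = (E_p + e0*M + c*M**2 / 2) / M,
# minimized at M* = sqrt(2*E_p / c).
import math

E_p = 8.0e6    # life-cycle production emissions per vehicle, grams (assumed)
e0 = 250.0     # per-mile use emissions when new, g/mile (assumed)
c = 1.0e-3     # growth of per-mile emissions per mile driven (assumed)

def avg_emissions(M):
    """Average g/mile for a vehicle kept for M miles."""
    return (E_p + e0 * M + 0.5 * c * M * M) / M

# Setting d(avg)/dM = 0 gives the emissions-minimizing scrappage mileage.
M_star = math.sqrt(2.0 * E_p / c)
```

    Scrapping earlier than M* wastes production emissions over too few miles; scrapping later keeps an increasingly dirty vehicle on the road — the same tension the paper's accelerated-scrappage results reflect.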

  11. The optimal time path of clean energy R&D policy when patents have finite lifetime

    NARCIS (Netherlands)

    Gerlagh, R.; Kverndokk, S.; Rosendahl, K.E.

    We study the optimal time path for clean energy innovation policy. In a model with emission reduction through clean energy deployment, and with R&D increasing the overall productivity of clean energy, we describe optimal R&D policies jointly with emission pricing policies. We find that while

  12. Realistic nurse-led policy implementation, optimization and evaluation: novel methodological exemplar.

    Science.gov (United States)

    Noyes, Jane; Lewis, Mary; Bennett, Virginia; Widdas, David; Brombley, Karen

    2014-01-01

    To report the first large-scale realistic nurse-led implementation, optimization and evaluation of a complex children's continuing-care policy. Health policies are increasingly complex, involve multiple Government departments and frequently fail to translate into better patient outcomes. Realist methods have not yet been adapted for policy implementation. Research methodology - Evaluation using theory-based realist methods for policy implementation. An expert group developed the policy and supporting tools. Implementation and evaluation design integrated diffusion of innovation theory with multiple case study and adapted realist principles. Practitioners in 12 English sites worked with Consultant Nurse implementers to manipulate the programme theory and logic of new decision-support tools and care pathway to optimize local implementation. Methods included key-stakeholder interviews, developing practical diffusion of innovation processes using key-opinion leaders and active facilitation strategies and a mini-community of practice. New and existing processes and outcomes were compared for 137 children during 2007-2008. Realist principles were successfully adapted to a shorter policy implementation and evaluation time frame. Important new implementation success factors included facilitated implementation that enabled 'real-time' manipulation of programme logic and local context to best-fit evolving theories of what worked; using local experiential opinion to change supporting tools to more realistically align with local context and what worked; and having sufficient existing local infrastructure to support implementation. Ten mechanisms explained implementation success and differences in outcomes between new and existing processes. Realistic policy implementation methods have advantages over top-down approaches, especially where clinical expertise is low and unlikely to diffuse innovations 'naturally' without facilitated implementation and local optimization. 

  13. Optimization of series-parallel multi-state systems under maintenance policies

    International Nuclear Information System (INIS)

    Nourelfath, Mustapha; Ait-Kadi, Daoud

    2007-01-01

    In the redundancy optimization problem, the design goal is achieved by discrete choices made from components available in the market. In this paper, the problem is to find, under reliability constraints, the minimal cost configuration of a multi-state series-parallel system, which is subject to a specified maintenance policy. The number of maintenance teams is less than the number of repairable components, and a maintenance policy specifies the priorities between the system components. To take into account the dependencies resulting from the sharing of maintenance teams, the universal generating function approach is coupled with a Markov model. The resulting optimization approach has the advantage of being mainly analytical

  14. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as the related parameter estimation. However, few studies give consideration to the issue of optimal sampling time selection for parameter estimation. Time course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time consuming and expensive. Therefore, approximating parameters for models with only a few available sampling data points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of the parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the morass of selecting good initial values or of getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
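    The variance-minimization idea can be shown on a toy problem, not using the authors' quantum-inspired algorithm: minimizing estimator variance is equivalent to maximizing Fisher information, and for a one-parameter model the best sampling time can be found by direct search. The model and all numbers below are assumed.

```python
# Toy D-optimal sampling: observations of y(t) = exp(-k*t) with unit-variance
# noise; sensitivity dy/dk = -t*exp(-k*t), so the Fisher information for k
# from a set of sampling times is the sum of squared sensitivities.
import math

def information(times, k=0.5):
    """Fisher information for k from unit-noise observations of exp(-k*t)."""
    return sum((t * math.exp(-k * t)) ** 2 for t in times)

candidates = [0.25 * i for i in range(1, 41)]       # candidate times 0.25 .. 10.0
best_t = max(candidates, key=lambda t: information([t]))
# The sensitivity t*exp(-k*t) peaks at t = 1/k, so best_t is 2.0 here:
# very early samples barely move, very late samples have decayed to noise.
```

    With several parameters the criterion becomes the determinant of the information matrix and the search space grows combinatorially, which is where a global heuristic such as the paper's evolutionary algorithm earns its keep.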

  15. Optimal Overhaul-Replacement Policies for Repairable Machine Sold with Warranty

    Directory of Open Access Journals (Sweden)

    Kusmaningrum Soemadi

    2014-12-01

    Full Text Available This research deals with an overhaul-replacement policy for a repairable machine sold with a Free Replacement Warranty (FRW). The machine will be used for a finite horizon, T (T < ∞), and evaluated at a fixed interval, s (s < T). At each evaluation point, the buyer considers three alternative decisions, i.e., keep the machine, overhaul it, or replace it with a new identical one. An overhaul can reduce the machine's virtual age, but not to the point that the machine is as good as new. If the machine fails during the warranty period, it is rectified at no cost to the buyer. Any failure occurring before and after the expiry of the warranty is restored by minimal repair. An overhaul-replacement policy is formulated for such machines using a dynamic programming approach to obtain the buyer's optimal policy. The results show that a significant rejuvenation effect due to overhaul may extend the length of the machine's life cycle and delay the replacement decision. In contrast, the warranty stimulates early machine replacement and thereby increases the replacement frequency for a certain range of replacement costs. This demonstrates that to minimize the total ownership cost over T, the buyer needs to consider the minimal repair cost reduction due to the rejuvenation effect of overhaul as well as the warranty benefit due to replacement. Numerical examples are presented both to illustrate the optimal policy and to describe the behavior of the optimal solution.
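    The backward dynamic program behind such a keep/overhaul/replace policy can be sketched compactly. The version below omits the warranty term for brevity, uses entirely hypothetical costs and Weibull parameters, and discretizes virtual age on a coarse grid; it is an illustration of the recursion, not the paper's model.

```python
# Finite-horizon DP: at each evaluation epoch the buyer keeps, overhauls
# (rejuvenating the machine to a fraction `delta` of its virtual age), or
# replaces. Failures between epochs get minimal repair, so their expected
# cost over one epoch from virtual age a is c_m * (H(a+s) - H(a)), with
# Weibull cumulative hazard H(t) = (t/eta)**beta.
from functools import lru_cache

s, beta, eta = 1.0, 2.0, 2.5        # epoch length and Weibull parameters (assumed)
c_m, c_o, c_r = 1.0, 1.5, 4.0       # minimal-repair, overhaul, replacement costs (assumed)
delta = 0.4                          # overhaul rejuvenation factor on virtual age

def H(t):
    """Weibull cumulative hazard: expected number of minimal repairs in [0, t]."""
    return (t / eta) ** beta

def repairs(age):
    """Expected minimal-repair cost over one epoch from virtual age `age`."""
    return c_m * (H(age + s) - H(age))

@lru_cache(maxsize=None)
def V(age, periods_left):
    """Minimal expected cost-to-go and the first action achieving it."""
    if periods_left == 0:
        return 0.0, "stop"
    nxt = lambda a: round(a, 1)      # keep the cached age state on a coarse grid
    keep = repairs(age) + V(nxt(age + s), periods_left - 1)[0]
    over = c_o + repairs(delta * age) + V(nxt(delta * age + s), periods_left - 1)[0]
    repl = c_r + repairs(0.0) + V(nxt(s), periods_left - 1)[0]
    action, cost = min(("keep", keep), ("overhaul", over), ("replace", repl),
                       key=lambda pair: pair[1])
    return cost, action

cost, first_action = V(0.0, 8)       # a brand-new machine is kept at the first epoch
```

    As the horizon and parameters vary, the recursion reproduces the qualitative findings quoted above: a strong rejuvenation effect (small delta, cheap c_o) postpones replacement, while cheap replacement crowds out overhauls.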

  16. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  17. Identification of Optimal Policies in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    Roč. 46, č. 3 (2010), s. 558-570 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords: finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf

  18. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  19. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  20. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.
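    The switching behavior this abstract describes is easy to reproduce in a toy linearized host-vector model; the code below is in the spirit of the paper, not the authors' implementation, and all rates, costs, and population sizes are hypothetical.

```python
# One infected vector seeds infected hosts; both sub-populations grow
# roughly exponentially after introduction. The better population to sample
# at time t is the one with more expected positives per unit sampling cost,
# and the optimum switches from vectors to hosts as the epidemic develops.

bv, bh, g = 0.9, 0.6, 0.3      # host->vector rate, vector->host rate, removal rate
Nv, Nh = 10000.0, 1000.0       # vector and host population sizes (assumed)
cv, ch = 1.0, 5.0              # cost per vector sample vs per host sample (assumed)

def prevalences(t_end, dt=0.01):
    """Euler-integrate the linearized dynamics from one infected vector."""
    iv, ih, out = 1.0, 0.0, []
    for step in range(int(t_end / dt)):
        div = bv * ih - g * iv
        dih = bh * iv - g * ih
        iv, ih = iv + dt * div, ih + dt * dih
        out.append(((step + 1) * dt, iv / Nv, ih / Nh))
    return out

# Find when host sampling first beats vector sampling per unit cost.
switch_time = None
for t, pv, ph in prevalences(15.0):
    if ph / ch > pv / cv:
        switch_time = t
        break
```

    Early on only vectors carry detectable prevalence, so sampling them is optimal despite hosts being cheaper to reach per positive later; once host prevalence catches up, the rule flips, matching the abrupt switch described above.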

  1. Optimal climate policy is a utopia. From quantitative to qualitative cost-benefit analysis

    International Nuclear Information System (INIS)

    Van den Bergh, Jeroen C.J.M.

    2004-01-01

    The dominance of quantitative cost-benefit analysis (CBA) and optimality concepts in the economic analysis of climate policy is criticised. Among other things, it is argued to be based on a misplaced interpretation of policy for a complex climate-economy system as being analogous to individual inter-temporal welfare optimisation. The transfer of quantitative CBA and optimality concepts reflects an overly ambitious approach that does more harm than good. An alternative approach is to focus attention on extreme events, structural change and complexity. It is argued that a qualitative rather than a quantitative CBA that takes account of these aspects can support the adoption of a minimax regret approach or precautionary principle in climate policy. This means: implement stringent GHG reduction policies as soon as possible
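    The minimax regret rule advocated here has a simple mechanical form. A minimal numerical illustration with entirely hypothetical loss figures (policy rows by climate scenario columns, in arbitrary welfare units):

```python
# Minimax regret: for each policy, compute its worst-case regret (loss minus
# the best achievable loss in that scenario), then pick the policy whose
# worst-case regret is smallest. All loss figures are hypothetical.

losses = {
    "no_action":        {"mild": 0.0,  "severe": 100.0},
    "moderate_policy":  {"mild": 10.0, "severe": 40.0},
    "stringent_policy": {"mild": 20.0, "severe": 15.0},
}
scenarios = ("mild", "severe")
best = {sc: min(row[sc] for row in losses.values()) for sc in scenarios}

def max_regret(policy):
    """Worst-case regret of a policy across climate scenarios."""
    return max(losses[policy][sc] - best[sc] for sc in scenarios)

choice = min(losses, key=max_regret)   # -> "stringent_policy" with these numbers
```

    Unlike expected-loss CBA, the rule needs no scenario probabilities, which is precisely its appeal when extreme climate outcomes resist quantification.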

  2. Optimal Control via Reinforcement Learning with Symbolic Policy Approximation

    NARCIS (Netherlands)

    Kubalìk, Jiřì; Alibekov, Eduard; Babuska, R.; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    Model-based reinforcement learning (RL) algorithms can be used to derive optimal control laws for nonlinear dynamic systems. With continuous-valued state and input variables, RL algorithms have to rely on function approximators to represent the value function and policy mappings. This paper

  3. Optimal post-warranty maintenance policy with repair time threshold for minimal repair

    International Nuclear Information System (INIS)

    Park, Minjae; Mun Jung, Ki; Park, Dong Ho

    2013-01-01

    In this paper, we consider a renewable minimal repair–replacement warranty policy and propose an optimal maintenance model after the warranty is expired. Such model adopts the repair time threshold during the warranty period and follows with a certain type of system maintenance policy during the post-warranty period. As for the criteria for optimality, we utilize the expected cost rate per unit time during the life cycle of the system, which has been frequently used in many existing maintenance models. Based on the cost structure defined for each failure of the system, we formulate the expected cost rate during the life cycle of the system, assuming that a renewable minimal repair–replacement warranty policy with the repair time threshold is provided to the user during the warranty period. Once the warranty is expired, the maintenance of the system is the user's sole responsibility. The life cycle of the system is defined on the perspective of the user and the expected cost rate per unit time is derived in this context. We obtain the optimal maintenance policy during the maintenance period following the expiration of the warranty period by minimizing such a cost rate. Numerical examples using actual failure data are presented to exemplify the applicability of the methodologies proposed in this paper.

  4. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.

    Science.gov (United States)

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-10-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and gain further insights through a dedicated sensitivity analysis. The model is then used in a sequential optimization framework determining preventive maintenance intervals that improve on the key performance indicators. We show the potential of data-driven modelling to determine the optimal maintenance policy: the same system availability and reliability can be achieved with a 30% maintenance cost reduction, by prolonging the intervals and re-grouping maintenance actions.
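    The reliability-versus-cost tradeoff underlying such interval prolongation can be sketched with a textbook periodic-PM model; the numbers below are hypothetical, not the tram data.

```python
# Weibull-degrading subsystem with minimal repair between preventive
# actions. The long-run cost per day of doing PM every T days is
#   C(T) = (c_p + c_m * (T/eta)**beta) / T,
# since the expected number of minimal repairs in [0, T] is (T/eta)**beta.

beta, eta = 2.5, 180.0    # Weibull shape and scale, days (assumed)
c_p, c_m = 200.0, 500.0   # preventive action cost vs minimal-repair cost (assumed)

def cost_rate(T):
    return (c_p + c_m * (T / eta) ** beta) / T

# For beta > 1 the cost rate has the closed-form minimizer:
T_star = eta * (c_p / ((beta - 1.0) * c_m)) ** (1.0 / beta)
```

    If the fitted Weibull says the current interval sits left of T_star, prolonging it cuts cost at essentially unchanged availability, which is the mechanism behind the 30% saving reported above.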

  5. People adopt optimal policies in simple decision-making, after practice and guidance.

    Science.gov (United States)

    Evans, Nathan J; Brown, Scott D

    2017-04-01

    Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.

  6. Storage Policies and Optimal Shape of a Storage System

    NARCIS (Netherlands)

    Zaerpour, N.; De Koster, René; Yu, Yugang

    2013-01-01

    The response time of a storage system is mainly influenced by its shape (configuration), the storage assignment and retrieval policies, and the location of the input/output (I/O) points. In this paper, we show that the optimal shape of a storage system, which minimises the response time for single

  7. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for several management tasks. Planning the spatial distribution and number of pressure observations is called sampling design, and it has traditionally been addressed with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakage, detect anomalies and bursts, guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management, has been addressed through optimal network segmentation and the modularity index, using a multiobjective strategy. Optimal network segmentation is the basis for identifying network modules by means of optimal conceptual cuts, which are the candidate locations of the closed gates or flow meters that create the DMAs. Starting from the WDN-oriented modularity index as a metric for WDN segmentation, this paper proposes a new way to perform sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly on the basis of network topology and of weights assigned to pipes according to the specific technical task. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.
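    Modularity-based segmentation of this kind builds on Newman's modularity index, which scores a partition of a network by comparing within-module edges to the expectation under random wiring. A toy sketch on an abstract graph (the WDN-oriented and sampling-oriented variants in the paper add pipe weights and task-specific terms on top of this basic quantity):

```python
def modularity(edges, partition):
    """Newman's modularity Q = sum_c (e_c/m - (d_c/(2m))^2), where e_c is
    the number of edges inside module c and d_c its total degree."""
    m = len(edges)
    q = 0.0
    for module in partition:
        members = set(module)
        e_c = sum(1 for u, v in edges if u in members and v in members)
        d_c = sum(1 for u, v in edges if u in members) + \
              sum(1 for u, v in edges if v in members)
        q += e_c / m - (d_c / (2.0 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge; cutting at the bridge
# (the "conceptual cut") gives the natural two-module partition.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
q = modularity(edges, [(0, 1, 2), (3, 4, 5)])  # 5/14, about 0.357
```

    A partition that crosses the bridge scores much lower, which is why maximizing modularity recovers the natural cut.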

  8. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
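    The single-bin updating scheme with the inverse-time schedule can be sketched on a toy discrete system: the bias is accumulated at the visited state with magnitude that decays as 1/t, and the sampled distribution flattens as the bias converges to the log-density of states. A minimal illustration (the five-state system and its weights are invented for the example; the cap at the initial magnitude stands in for the usual early Wang-Landau stage):

```python
import math
import random

random.seed(1)

weights = [1.0, 2.0, 4.0, 8.0, 16.0]   # unnormalized "density of states"
n = len(weights)
bias = [0.0] * n                        # bias potential, in log units
visits = [0] * n
state = 0
steps = 200_000

for t in range(1, steps + 1):
    # Metropolis step targeting pi(k) proportional to weights[k]*exp(-bias[k]);
    # once bias approximates log(weights), the sampling is flat.
    proposal = state + random.choice((-1, 1))
    if 0 <= proposal < n:
        log_ratio = (math.log(weights[proposal]) - bias[proposal]) \
                  - (math.log(weights[state]) - bias[state])
        if log_ratio >= 0 or random.random() < math.exp(log_ratio):
            state = proposal
    visits[state] += 1
    # Single-bin update with the asymptotically optimal 1/t magnitude.
    bias[state] += min(1.0, n / t)
```

    After the run, bias[k] - bias[0] approximates log(weights[k]/weights[0]), and the visit histogram is close to flat.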

  9. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  10. Optimal relaxed causal sampler using sampled-date system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal

  11. Joint Optimization of Preventive Maintenance and Spare Parts Inventory with Appointment Policy

    Directory of Open Access Journals (Sweden)

    Jing Cai

    2017-01-01

    Full Text Available Against the background of the wide application of condition-based maintenance (CBM) in maintenance practice, the joint optimization of maintenance and spare parts inventory has become a hot research topic, aiming to take full advantage of CBM and reduce operational cost. In order to avoid both a high inventory level and shortages of spare parts, an appointment policy for spare parts is first proposed based on the prediction of remaining useful lifetime, and a corresponding joint optimization model of preventive maintenance and spare parts inventory is then established. Due to the complexity of the model, a combined genetic algorithm and Monte Carlo method is presented to obtain the optimal maximum inventory level, safety inventory level, potential failure threshold, and appointment threshold that minimize the cost rate. Finally, the proposed model is examined through a case study and compared with both separate optimization and joint optimization without the appointment policy; the results show that the proposed model is more effective. In addition, a sensitivity analysis shows that the proposed model is consistent with the actual situation of maintenance practice and inventory management.

  12. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    Full Text Available As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopts many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by providing offsets to pixels in the in-loop filter. In SAO, pixels in a Largest Coding Unit (LCU) are classified into several categories, and categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU undergo the same SAO process; however, the transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since they are not appropriate in many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations achieve gains of -0.13 and -0.2, respectively, compared with the SAO in HEVC. The proposed algorithm combining both optimizations achieves a -0.23 BD-rate gain compared with the SAO in HEVC, which is a 47% increase, with nearly no increase in coding time.

  13. Optimal dynamic pricing and replenishment policy for perishable items with inventory-level-dependent demand

    Science.gov (United States)

    Lu, Lihao; Zhang, Jianxiong; Tang, Wansheng

    2016-04-01

    An inventory system for perishable items with limited replenishment capacity is introduced in this paper. The demand rate depends on the stock quantity displayed in the store as well as on the sales price. With the goal of profit maximisation, an optimisation problem is formulated to find the optimal joint dynamic pricing and replenishment policy, which is obtained by solving the problem with Pontryagin's maximum principle. A joint mixed policy, in which the sales price is a static decision variable and the replenishment rate remains a dynamic decision variable, is presented for comparison with the joint dynamic policy. Numerical results demonstrate the advantages of the joint dynamic policy, and further show the effects of different system parameters on the optimal joint dynamic policy and the maximal total profit.

  14. Loss-Averse Retailer’s Optimal Ordering Policies for Perishable Products with Customer Returns

    Directory of Open Access Journals (Sweden)

    Xu Chen

    2014-01-01

    Full Text Available We investigate the loss-averse retailer's ordering policies for a perishable product with customer returns. Introducing a segmented loss utility function, we capture the retailer's loss-aversion decision bias and establish the loss-averse retailer's ordering policy model. We show that the loss-averse retailer's optimal order quantity with customer returns exists and is unique. By comparison, we find that both the risk-neutral and the loss-averse retailer's optimal order quantities depend on the inventory holding cost and the marginal shortage cost. Through a sensitivity analysis, we also discuss the effects of the loss-aversion coefficient and the return ratio on the loss-averse retailer's optimal order quantity with customer returns.

  15. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available; a few have only been presented in scientific articles and textbooks. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method in widespread use for solving optimization problems in the soil and geosciences, mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimization criteria: for variogram estimation (number of points or point-pairs per lag-distance class, PPL), for trend estimation (association/correlation and marginal distribution of the covariates, ACDC), and for spatial interpolation (mean squared shortest distance, MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimization criterion, used when the model of spatial variation is known. PPL, ACDC and MSSD are combined (PAN) for sampling when the model of spatial variation is unknown. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples; scaled values are aggregated using the weighted-sum method. A graphical display allows the user to follow how the sample pattern is perturbed during the optimization, as well as the evolution of its energy state. It is possible to start by perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance decreases linearly with the number of iterations, and the acceptance probability decreases exponentially with the number of iterations.
R is memory hungry and spatial simulated annealing is a
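    The annealing moves and the MSSD criterion described above can be sketched in a few lines: one point is jittered per iteration, worse configurations are accepted with a probability that decays with temperature, and the maximum jitter shrinks over time. A minimal sketch in Python rather than R (grid size, point count, and schedule constants are arbitrary choices for the example):

```python
import math
import random

random.seed(42)

# Prediction grid over the unit square and an initial random sample pattern.
grid = [(i / 9.0, j / 9.0) for i in range(10) for j in range(10)]
points = [(random.random(), random.random()) for _ in range(5)]

def mssd(pts):
    """Mean squared shortest distance from grid nodes to the sample pattern."""
    return sum(min((gx - px) ** 2 + (gy - py) ** 2 for px, py in pts)
               for gx, gy in grid) / len(grid)

temp, cooling, iters = 0.01, 0.998, 3000
initial = mssd(points)
current = initial
best_pts, best = list(points), initial

for it in range(iters):
    # Maximum perturbation distance shrinks linearly with iteration count.
    step = 0.3 * (1.0 - it / iters)
    k = random.randrange(len(points))
    old = points[k]
    points[k] = (min(1.0, max(0.0, old[0] + random.uniform(-step, step))),
                 min(1.0, max(0.0, old[1] + random.uniform(-step, step))))
    cand = mssd(points)
    # Metropolis acceptance: worse patterns are accepted with a
    # probability that decays exponentially as the temperature cools.
    if cand < current or random.random() < math.exp((current - cand) / temp):
        current = cand
        if cand < best:
            best_pts, best = list(points), cand
    else:
        points[k] = old
    temp *= cooling
```

    Tracking the best pattern separately, as here, guarantees the final design is never worse than the starting one even if the chain wanders late in the run.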

  16. Handling Practicalities in Agricultural Policy Optimization for Water Quality Improvements

    Science.gov (United States)

    Bilevel and multi-objective optimization methods are often useful to spatially target agri-environmental policy throughout a watershed. This type of problem is complex and is comprised of a number of practicalities: (i) a large number of decision variables, (ii) at least two inte...

  17. Optimal environmental policy and the dynamic property in LDCs

    Directory of Open Access Journals (Sweden)

    Masahiro Yabuta

    2002-01-01

    Full Text Available This paper provides a model framework for foreign assistance policy in the context of dynamic optimal control and investigates environmental policies in LDCs that receive financial support from abroad. The framework features the behavior of a social planner who determines the level of voluntary expenditure for the preservation of the natural environment. Because greater financial needs for environmental protection mean less allowance for growth-oriented investment, the social planner confronts a trade-off between economic growth and environmental preservation. To tackle this clearly, we build a dynamic model with two control variables: per-capita consumption and voluntary expenditure on the natural environment.

  18. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF...

  19. Optimal reservoir operation policies using novel nested algorithms

    Science.gov (United States)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    optimization algorithm into the state transition, which lowers the dimension of the original problem and alleviates the curse of dimensionality. The algorithms can solve multi-objective optimization problems without significantly increasing complexity or computational expense, can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested on the multipurpose reservoir Knezevo of the Zletovica hydro-system, located in the Republic of Macedonia, with eight objectives including urban water supply, agriculture, ensuring ecological flow, and generation of hydropower. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL algorithms derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The resulting policies were tested on 10 years (1995-2005) of historical data and compared with the nDP optimal reservoir operation over the same period. The nested algorithms and the optimal reservoir operation results are analysed and explained.

  20. Optimal constant time injection policy for enhanced oil recovery and characterization of optimal viscous profiles

    Science.gov (United States)

    Daripa, Prabir

    2011-11-01

    We numerically investigate the optimal viscous profile under a constant-time-injection policy for enhanced oil recovery. In particular, we investigate the effect of a combination of interfacial and layer instabilities in three-layer porous media flow on the overall growth of instabilities, and thereby characterize the optimal viscous profile. Results based on monotonic and non-monotonic viscous profiles will be presented. Time permitting, we will also present results on multi-layer porous media flows for Newtonian and non-Newtonian fluids and compare the results. The support of the Qatar National Fund under a QNRF grant is acknowledged.

  1. Optimal policy for mitigating emissions in the European transport sector

    Science.gov (United States)

    Leduc, Sylvain; Piera, Patrizio; Sennai, Mesfun; Igor, Staritsky; Berien, Elbersen; Tijs, Lammens; Florian, Kraxner

    2017-04-01

    A geographically explicit techno-economic model, BeWhere (www.iiasa.ac.at/bewhere), has been developed at the European scale (Europe 28, the Balkan countries, Turkey, Moldova and Ukraine) at a 40 km grid size to assess the potential of bioenergy from non-food feedstock. Based on minimization of the supply chain cost from feedstock collection to final energy product distribution, the model identifies the optimal bioenergy production plants in terms of spatial location, technology and capacity. The feedstocks of interest are woody biomass (divided into eight types of conifers and non-conifers) and five different crop residues. For each type of feedstock, one or multiple technologies can be applied for heat, electricity or biofuel production. The model is run for different policy tools, such as a carbon cost, biofuel support, or subsidies, and the mix of technologies and biomass is optimized to reach a production cost competitive with the actual, fossil-fuel-based reference system. From this approach, the optimal mix of policy tools that can be applied country-wide in Europe is identified. Preliminary results show that a high carbon tax and biofuel support contribute to the development of large-scale biofuel production based on woody biomass, with plants mainly located in northern Europe. Finally, the highest emission reduction is reached with low biofuel support and a high carbon tax evenly distributed across Europe.

  2. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

    It is difficult to find the optimal solution of the sequential age replacement policy for a finite time horizon. This paper presents an accurate approximation for finding an approximately optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate of the total cost is derived.
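    The underlying optimization can be made concrete for the classic infinite-horizon age replacement policy, which the sequential finite-horizon policy refines: the long-run cost rate is the expected cycle cost, cp*R(T) + cf*F(T), divided by the expected cycle length, minimized over the replacement age T. A sketch with an illustrative Weibull failure law (the cost and shape parameters are invented for the example):

```python
import math

def cost_rate(T, beta=2.0, eta=100.0, cp=1.0, cf=10.0, n=2000):
    """Long-run cost per unit time of age replacement at age T:
    (cp*R(T) + cf*F(T)) / E[cycle length], with Weibull reliability
    R(t) = exp(-(t/eta)**beta) and the expectation integrated numerically."""
    R = lambda t: math.exp(-((t / eta) ** beta))
    # Trapezoidal integral of R over [0, T] = expected cycle length.
    h = T / n
    cycle = h * (0.5 * R(0) + sum(R(k * h) for k in range(1, n)) + 0.5 * R(T))
    return (cp * R(T) + cf * (1.0 - R(T))) / cycle

ages = range(5, 301)
T_opt = min(ages, key=cost_rate)
```

    With an increasing hazard (beta > 1) and cf > cp, a finite optimum exists; here preventive replacement is cheapest well before the characteristic life eta.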

  3. Optimal subsidy policy for accelerating the diffusion of green products

    Directory of Open Access Journals (Sweden)

    Hongguang Peng

    2013-06-01

    Full Text Available Purpose: We consider a dynamic duopoly market in which two firms respectively produce green products and conventional products. The two types of product can substitute each other to some degree. Their demand rates depend not only on prices but also on consumers' increasing environmental awareness. An initial cost that is too high relative to conventional products is one of the major obstacles hindering the adoption of green products, so the government employs a subsidy policy to trigger their adoption. The purpose of the paper is to explore the optimal subsidy strategy for fulfilling the government's objective. Design/methodology/approach: We suppose the players in the game employ open-loop strategies, which makes sense since the government generally cannot alter its policy for political and economic reasons. We take a differential game approach and use backward induction to analyze the firms' pricing strategy under Cournot competition, and then focus on a Stackelberg equilibrium to find the optimal subsidy strategy of the government. Findings: The results show that the more remarkable the energy or environmental performance, or the larger the initial cost of the green product, the higher the subsidy level should be. Due to increasing environmental awareness and the learning curve, the optimal subsidy level decreases over time. Research limitations/implications: In our model, several simplifying assumptions are made to keep the analysis tractable. In particular, we have assumed only one type of green product, whereas in reality several types of product with different energy or environmental performances exist. Our research can be extended in future work to take into account product differentiation in energy or environmental performance and to devise a discriminatory subsidy policy accordingly. Originality/value: In the paper we set the objective of the government as minimizing the total social cost induced by the energy consumption or

  4. Optimized Policies for Improving Fairness of Location-based Relay Selection

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2013-01-01

    For WLAN systems in which relaying is used to improve throughput performance for nodes located at the cell edge, node mobility and information collection delays can have a significant impact on the performance of a relay selection scheme. In this paper we extend our existing Markov Chain modeling framework for relay selection to allow for efficient calculation of relay policies given either mean throughput or kth throughput percentile as the optimization criterion. In a scenario with a static access point, a static relay, and a mobile destination node, the kth throughput percentile optimization...

  5. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub- Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate- distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.

  6. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  7. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  8. Optimal Inventory Policy under Permissible Payment Delay in Fashion Supply Chains

    OpenAIRE

    Guo Li; Yuchen Kang; Mengqi Liu; Zhaohua Wang

    2014-01-01

    This paper investigates a retailer's optimal inventory cycle and the corresponding time of payment in fashion supply chains where a supplier allows payment delay. Based on the established model, we first analyze the retailer's reaction, and then derive the retailer's optimal inventory policy and time of payment to maximize its total profit. Our result shows that it is not always the best choice for retailers in fashion supply chains to choose the discount option to replenish stocks...

  9. A policy iteration approach to online optimal control of continuous-time constrained-input systems.

    Science.gov (United States)

    Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L

    2013-09-01

    This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. Copyright © 2013 ISA. All rights reserved.
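    Policy iteration itself, which the actor-critic structure above approximates for continuous-time constrained-input systems, alternates policy evaluation with greedy improvement. A minimal discrete-state sketch (a two-state MDP invented for illustration, not the paper's continuous-time setting):

```python
# Two states, two actions: action 0 = "stay" (reward equal to the state
# label), action 1 = "switch" (cost 0.5, moves to the other state).
GAMMA = 0.9
reward = {(s, a): (s if a == 0 else -0.5) for s in (0, 1) for a in (0, 1)}
nxt = {(s, a): (s if a == 0 else 1 - s) for s in (0, 1) for a in (0, 1)}

policy = [0, 0]                       # start with "stay everywhere"
while True:
    # Policy evaluation: iterate the Bellman equation for the fixed
    # policy until the value function converges (the "critic" step).
    V = [0.0, 0.0]
    while True:
        delta = 0.0
        for s in (0, 1):
            v = reward[(s, policy[s])] + GAMMA * V[nxt[(s, policy[s])]]
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < 1e-12:
            break
    # Policy improvement: act greedily w.r.t. V (the "actor" step).
    new_policy = [max((0, 1),
                      key=lambda a: reward[(s, a)] + GAMMA * V[nxt[(s, a)]])
                  for s in (0, 1)]
    if new_policy == policy:
        break
    policy = new_policy
```

    Here the loop terminates at the policy "switch out of state 0, stay in state 1"; the online actor-critic scheme in the paper performs the same two steps simultaneously with neural network approximators instead of exact evaluation.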

  10. Optimal pricing and replenishment policies for instantaneous deteriorating items with backlogging and trade credit under inflation

    Science.gov (United States)

    Sundara Rajan, R.; Uthayakumar, R.

    2017-12-01

    In this paper we develop an economic order quantity model to investigate optimal replenishment policies for instantaneously deteriorating items under inflation and trade credit. The demand rate is a linear function of the selling price and decreases exponentially with time over a finite planning horizon. Shortages are allowed and partially backlogged. Under these conditions, we model the retailer's inventory system as a profit maximization problem to determine the optimal selling price, order quantity and replenishment time. An easy-to-use algorithm is developed to determine the retailer's optimal replenishment policies. We also provide the optimal present value of profit for the special case in which shortages are completely backlogged. Numerical examples are presented to illustrate the algorithm, and managerial implications are drawn from them to substantiate the model. The results show that total profit improves under complete backlogging rather than partial backlogging.

  11. Social Optimization and Pricing Policy in Cognitive Radio Networks with an Energy Saving Strategy

    Directory of Open Access Journals (Sweden)

    Shunfu Jin

    2016-01-01

    Full Text Available The rapid growth of wireless applications results in increased demand for spectrum resources and communication energy. In this paper, we first introduce a novel energy saving strategy in cognitive radio networks (CRNs) and then propose an appropriate pricing policy for secondary user (SU) packets. We analyze the behavior of data packets in a discrete-time single-server priority queue under a multiple-vacation discipline. With the help of a Quasi-Birth-Death (QBD) process model, we obtain the joint distribution of the number of SU packets and the state of the base station (BS) via the matrix-geometric solution method. We assess the average latency of SU packets and the energy saving ratio of the system. According to a natural reward-cost structure, we study the individually optimal and socially optimal behavior of the energy saving strategy, and use an optimization algorithm based on the standard particle swarm optimization (SPSO) method to search for the socially optimal arrival rate of SU packets. By comparing the individually optimal and socially optimal behavior, we impose an appropriate admission fee on SU packets. Finally, we present numerical results to show the impacts of system parameters on system performance and the pricing policy.

  12. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    The Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway to establish soil moisture monitoring networks for both pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6-week period for soil moisture and several other parameters, simultaneous with remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling-site selection, they do not guarantee optimal site placement that provides the most efficient representation of the studied area. In this analysis a method for optimizing sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (the non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum value. The first constraint is included in the proposed model to keep the approach practical.
The second and third constraints are considered to guarantee that the collected samples from each soil texture categories
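
As an illustration of the two-objective trade-off the abstract describes (fewer sampling sites versus a more faithful interpolated map), the sketch below scores random site subsets against a synthetic moisture field and keeps the non-dominated set. It is a hedged stand-in only: inverse-distance weighting replaces kriging, random search replaces NSGA-II, and the field, grid, and subset sizes are invented for the example.

```python
import math
import random

random.seed(0)

# Synthetic "true" soil-moisture field on a 10 x 10 grid (stand-in for the
# radar-derived map the paper compares against).
def true_moisture(x, y):
    return 0.2 + 0.1 * math.sin(x / 3.0) + 0.05 * math.cos(y / 4.0)

grid = [(x, y) for x in range(10) for y in range(10)]

def interp_error(sites):
    """RMSE of inverse-distance interpolation from `sites` against the true
    field (a cheap stand-in for the kriged surface)."""
    total = 0.0
    for x, y in grid:
        num = den = 0.0
        for sx, sy in sites:
            d2 = (x - sx) ** 2 + (y - sy) ** 2
            if d2 == 0:
                num, den = true_moisture(sx, sy), 1.0
                break
            num += true_moisture(sx, sy) / d2
            den += 1.0 / d2
        total += (num / den - true_moisture(x, y)) ** 2
    return math.sqrt(total / len(grid))

# Random search over site subsets: objective 1 = number of sites,
# objective 2 = interpolation error.  NSGA-II would evolve and refine
# this non-dominated front instead of sampling it blindly.
population = [random.sample(grid, random.randint(3, 20)) for _ in range(60)]
scored = [(len(s), interp_error(s)) for s in population]
front = [p for p in scored
         if not any((q[0] <= p[0] and q[1] < p[1]) or
                    (q[0] < p[0] and q[1] <= p[1]) for q in scored)]
print(sorted(front))
```

A full implementation would also enforce the accessibility, soil-texture, and vegetation-cover constraints during candidate generation.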

  13. Asymptotically optimal production policies in dynamic stochastic jobshops with limited buffers

    Science.gov (United States)

    Hou, Yumei; Sethi, Suresh P.; Zhang, Hanqin; Zhang, Qing

    2006-05-01

    We consider a production planning problem for a jobshop with unreliable machines producing a number of products. There are upper and lower bounds on intermediate parts and an upper bound on finished parts. The machine capacities are modelled as finite state Markov chains. The objective is to choose the rate of production so as to minimize the total discounted cost of inventory and production. Finding an optimal control policy for this problem is difficult. Instead, we derive an asymptotic approximation by letting the rates of change of the machine states approach infinity. The asymptotic analysis leads to a limiting problem in which the stochastic machine capacities are replaced by their equilibrium mean capacities. The value function for the original problem is shown to converge to the value function of the limiting problem. The convergence rate of the value function together with the error estimate for the constructed asymptotic optimal production policies are established.

  14. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on an error-compensation method that exploits error similarity, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the proposed sampling-point planning method effectively optimizes the sampling grid. After error compensation, the position accuracy of the robot meets the accuracy requirements.

  15. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
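
The filtering idea in the abstract, train a Bayesian classifier on already-evaluated designs and spend expensive evaluations only on candidates with a high posterior probability of being promising, can be sketched with a naive Bayes classifier, the simplest Bayesian-network structure. The 8-bit design space, the weights in the toy objective, and the 0.5 posterior threshold are all invented for illustration; the real CGS algorithm embeds this filter inside an evolutionary loop.

```python
import random

random.seed(1)

# Toy "expensive" objective over 8 binary design variables (to be maximized).
WEIGHTS = [3, 1, 4, 1, 5, 2, 6, 2]

def objective(bits):
    return sum(w * b for w, b in zip(WEIGHTS, bits))

# Evaluate an initial random population and label the top half "promising".
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(40)]
scores = [objective(x) for x in pop]
cut = sorted(scores)[len(scores) // 2]
labels = [s >= cut for s in scores]

def fit(pop, labels):
    """Naive Bayes: class priors and P(bit_i = 1 | class), Laplace-smoothed."""
    model = {}
    for c in (True, False):
        rows = [x for x, lab in zip(pop, labels) if lab == c]
        prior = (len(rows) + 1) / (len(pop) + 2)
        p1 = [(sum(r[i] for r in rows) + 1) / (len(rows) + 2) for i in range(8)]
        model[c] = (prior, p1)
    return model

def posterior_promising(model, bits):
    like = {}
    for c, (prior, p1) in model.items():
        p = prior
        for pi, b in zip(p1, bits):
            p *= pi if b else 1.0 - pi
        like[c] = p
    return like[True] / (like[True] + like[False])

model = fit(pop, labels)

# Filter step: only candidates with a high posterior would receive the
# expensive objective evaluation in a real CGS run.
cands = [[random.randint(0, 1) for _ in range(8)] for _ in range(200)]
keep = [x for x in cands if posterior_promising(model, x) > 0.5]
print(len(keep), "of", len(cands), "candidates pass the filter")
```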

  16. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extrema points of the metamodels and the minimum points of the density function. Repeating this procedure yields progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
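
A minimal 1-D sketch of the sequential idea: fit a Gaussian-RBF metamodel, add a new sample where coverage is worst, and refit. The farthest-point rule below stands in for the paper's density-function minima, and the test function, kernel width, and candidate grid are assumptions for the example (the metamodel-extrema criterion would add points at the fitted surface's optima in the same loop).

```python
import math

def f(x):
    """Stand-in for an expensive simulation."""
    return math.sin(3 * x) + 0.5 * x

EPS = 3.0  # Gaussian kernel width parameter (assumed for the example)

def rbf_fit(xs, ys):
    """Solve for Gaussian-RBF weights by Gaussian elimination (tiny systems)."""
    n = len(xs)
    A = [[math.exp(-(EPS * (xs[i] - xs[j])) ** 2) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):                 # forward elimination, partial pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= m * A[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):       # back substitution
        w[r] = (A[r][n] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

def rbf_eval(xs, w, x):
    return sum(wi * math.exp(-(EPS * (x - xi)) ** 2) for wi, xi in zip(w, xs))

# Sequential loop: refit, then add the candidate farthest from the existing
# samples (a coverage rule standing in for the density-function minima).
xs = [0.0, 0.5, 1.0]
ys = [f(x) for x in xs]
cands = [i / 50 for i in range(51)]
for _ in range(5):
    w = rbf_fit(xs, ys)
    new = max(cands, key=lambda c: min(abs(c - x) for x in xs))
    xs.append(new)
    ys.append(f(new))

w = rbf_fit(xs, ys)
max_err = max(abs(rbf_eval(xs, w, c) - f(c)) for c in cands)
print(f"{len(xs)} samples, max abs error {max_err:.4f}")
```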

  17. Optimal harvesting policy of a stochastic two-species competitive model with Lévy noise in a polluted environment

    Science.gov (United States)

    Zhao, Yu; Yuan, Sanling

    2017-07-01

    Since sudden environmental shocks and toxicants are known to affect the population dynamics of fish species, a mechanistic understanding of how sudden environmental change and toxicants influence the optimal harvesting policy requires development. This paper presents the optimal harvesting of a stochastic two-species competitive model with Lévy noise in a polluted environment, where the Lévy noise is used to describe the sudden climate change. Due to the discontinuity of the Lévy noise, the classical optimal harvesting methods based on the explicit solution of the corresponding Fokker-Planck equation are invalid. The object of this paper is to fill this gap and establish the optimal harvesting policy. Using aggregation and ergodic methods, approximations of the optimal harvesting effort and the maximum expected sustainable yield are obtained. Numerical simulations are carried out to support these theoretical results. Our analysis shows that the Lévy noise and the mean stress measure of toxicant in organisms may affect the optimal harvesting policy significantly.

  18. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    Science.gov (United States)

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  19. Off-Policy Actor-Critic Structure for Optimal Control of Unknown Systems With Disturbances.

    Science.gov (United States)

    Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai; Zhang, Huaguang

    2016-05-01

    An optimal control method is developed for unknown continuous-time systems with unknown disturbances in this paper. The integral reinforcement learning (IRL) algorithm is presented to obtain the iterative control. Off-policy learning is used to allow the dynamics to be completely unknown. Neural networks are used to construct critic and action networks. It is shown that if there are unknown disturbances, off-policy IRL may not converge or may be biased. To reduce the influence of unknown disturbances, a disturbance compensation controller is added. It is proven that the weight errors are uniformly ultimately bounded based on Lyapunov techniques. Convergence of the Hamiltonian function is also proven. The simulation study demonstrates the effectiveness of the proposed optimal control method for unknown systems with disturbances.

  20. Tax policy can change the production path: A model of optimal oil extraction in Alaska

    International Nuclear Information System (INIS)

    Leighty, Wayne; Lin, C.-Y. Cynthia

    2012-01-01

    We model the economically optimal dynamic oil production decisions for seven production units (fields) on Alaska's North Slope. We use adjustment cost and discount rate to calibrate the model against historical production data, and use the calibrated model to simulate the impact of tax policy on production rate. We construct field-specific cost functions from average cost data and an estimated inverse production function, which incorporates engineering aspects of oil production into our economic modeling. Producers appear to have approximated dynamic optimality. Consistent with prior research, we find that changing the tax rate alone does not change the economically optimal oil production path, except for marginal fields that may cease production. Contrary to prior research, we find that the structure of tax policy can be designed to affect the economically optimal production path, but at a cost in net social benefit. - Highlights: ► We model economically optimal dynamic oil production decisions for 7 Alaska fields. ► Changing tax rate alone does not alter the economically optimal oil production path. ► But change in tax structure can affect the economically optimal oil production path. ► Tax structures that modify the optimal production path reduce net social benefit. ► Field-specific cost functions and inverse production functions are estimated

  1. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  2. Vertical integration and optimal reimbursement policy.

    Science.gov (United States)

    Afendulis, Christopher C; Kessler, Daniel P

    2011-09-01

    Health care providers may vertically integrate not only to facilitate coordination of care, but also for strategic reasons that may not be in patients' best interests. Optimal Medicare reimbursement policy depends upon the extent to which each of these explanations is correct. To investigate, we compare the consequences of the 1997 adoption of prospective payment for skilled nursing facilities (SNF PPS) in geographic areas with high versus low levels of hospital/SNF integration. We find that SNF PPS decreased spending more in high integration areas, with no measurable consequences for patient health outcomes. Our findings suggest that integrated providers should face higher-powered reimbursement incentives, i.e., less cost-sharing. More generally, we conclude that purchasers of health services (and other services subject to agency problems) should consider the organizational form of their suppliers when choosing a reimbursement mechanism.

  3. Joint Optimal Production Planning for Complex Supply Chains Constrained by Carbon Emission Abatement Policies

    Directory of Open Access Journals (Sweden)

    Longfei He

    2014-01-01

    We focus on the joint production planning of complex supply chains facing stochastic demands and constrained by carbon emission reduction policies. We pick two typical carbon emission reduction policies to study how emission regulation influences the profit and carbon footprint of a typical supply chain. We use the input-output model to capture the interrelated demand links between arbitrary pairs of nodes in scenarios with and without carbon emission constraints. We design an optimization algorithm to obtain the joint optimal production-quantity combination that maximizes overall profit under each regulatory policy. Furthermore, numerical studies featuring exponentially distributed demand compare systemwide performance across the scenarios. We build the “carbon emission elasticity of profit (CEEP)” index as a metric to evaluate the impact of regulatory policies on both chainwide emissions and profit. Our results show that a mandatory emission cap, properly installed within the network, can balance effective emission reduction against an acceptable profit loss. The finding that the CEEP index is elastic when a carbon emission tax is implemented means that the scale of the profit loss is greater than that of the emission reduction, indicating that, at least from an industry standpoint, this policy is less effective than a mandatory cap.

  4. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Case studies of optimized sampling schemes: optimized field sampling representing the overall distribution of a particular mineral, and deriving optimal exploration target zones. Continuum removal for vegetation [13, 27, 46]: the convex hull transform is a method of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...

  5. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model and adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. The comparison results show that the proposed method is more efficient because it requires only a small number of sample points.

  6. An integrated DEA-COLS-SFA algorithm for optimization and policy making of electricity distribution units

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.; Eivazy, H.

    2009-01-01

    This paper presents an integrated data envelopment analysis (DEA)-corrected ordinary least squares (COLS)-stochastic frontier analysis (SFA)-principal component analysis (PCA)-numerical taxonomy (NT) algorithm for performance assessment, optimization and policy making of electricity distribution units. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study proposes an integrated flexible approach to rank the units and choose the best version of the DEA method for optimization and policy making purposes. It covers both static and dynamic aspects of the information environment through the involvement of SFA, which is finally compared with the best DEA model through the Spearman correlation technique. The integrated approach yields improved ranking and optimization of electricity distribution systems. To illustrate the usability and reliability of the proposed algorithm, 38 electricity distribution units in Iran have been considered, ranked and optimized by the proposed algorithm of this study.

  7. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a

  8. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects, and then compared this map with the ones obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary

  9. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    Science.gov (United States)

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study the effects of chemotherapy on tumor cells. In particular, in 1979, Goldie and Coldman proposed the first mathematical model to relate the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance. Its original idea has also been extended and further investigated in massive follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model with a simulation approach to explain why an alternating non-cross-resistant chemotherapy is optimal. Subsequently in 1983, Goldie and Coldman proposed an extended stochastic based model and provided a rigorous mathematical proof to their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments mainly focused on a process with symmetrical parameter settings, and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to consider some optimal policies on the model analytically. In addition, Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof to Goldie and Coldman's work. In addition to the theoretical derivation, numerical results are included to justify the correctness of our work. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. An Optimization of (Q, r) Inventory Policy Based on Health Care Apparel Products with Compound Poisson Demands

    Directory of Open Access Journals (Sweden)

    An Pan

    2014-01-01

    Addressing the problems of a health care center which produces tailor-made clothes for specific people, the paper proposes a single-product continuous review model and establishes an optimal policy for the center, based on the (Q, r) control policy, to minimize the expected average cost over an order cycle. A generic mathematical model to compute cost from the real-time inventory level is developed to generate the optimal order quantity under stochastic stock variation. The customer demands are described as a compound Poisson process. Comparisons of cost between the optimization method and experience-based decisions on Q are made through numerical studies conducted for the inventory system of the center.
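
The policy in the abstract can be explored numerically with a small simulation: compound Poisson daily demand, a reorder point r on the inventory position, and batch orders of size Q arriving after a fixed lead time. All parameters below (costs, demand rates, lead time, batch-size distribution) are invented for illustration, and daily review approximates continuous review.

```python
import math
import random

random.seed(2)

def poisson(lam):
    """Knuth's method; adequate for small lam."""
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

def daily_demand(lam=1.0):
    # Compound Poisson: Poisson-many customers, each buying 1-5 units.
    return sum(random.randint(1, 5) for _ in range(poisson(lam)))

def avg_cost(Q, r, days=4000, lead=4, h=1.0, b=8.0, K=30.0):
    """Average daily cost of a (Q, r) policy under daily review.
    h: holding cost/unit/day, b: backorder cost/unit/day, K: fixed order cost."""
    inv = Q + r                      # on hand (negative means backorders)
    pipeline = []                    # (arrival_day, quantity) of open orders
    total = 0.0
    for day in range(days):
        inv += sum(q for d, q in pipeline if d <= day)   # receive orders
        pipeline = [(d, q) for d, q in pipeline if d > day]
        inv -= daily_demand()
        if inv + sum(q for _, q in pipeline) <= r:       # inventory position
            pipeline.append((day + lead, Q))
            total += K
        total += h * max(inv, 0) + b * max(-inv, 0)
    return total / days

# Compare a few order quantities at a fixed reorder point, in the spirit of
# the paper's comparison of optimized vs. experience-based choices of Q.
costs = {Q: avg_cost(Q, r=12) for Q in (5, 10, 20, 40, 80)}
best = min(costs, key=costs.get)
print({q: round(c, 1) for q, c in costs.items()}, "best Q:", best)
```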

  11. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  12. Optimization of protein samples for NMR using thermal shift assays

    International Nuclear Information System (INIS)

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-01-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  13. Optimization of protein samples for NMR using thermal shift assays

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)

    2016-04-15

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor{sup ®} provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  14. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique, and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing could be achieved without resorting to the typical approach by trials
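
Component-level biasing of the kind the abstract discusses can be illustrated on a tiny independent-component system: sample each component's failure from a biased distribution, and weight every system-failure indicator by the likelihood ratio of the true measure to the biased one. The system structure and all probabilities below are invented for the example; variance-minimal biasing would tune the q values per component rather than fix them by hand.

```python
import random

random.seed(3)

# Independent components: the system fails if (A and B) fail, or C fails.
p = {"A": 0.05, "B": 0.04, "C": 0.001}   # true failure probabilities
q = {"A": 0.5, "B": 0.5, "C": 0.1}       # biased (importance) probabilities

def system_failed(state):
    return (state["A"] and state["B"]) or state["C"]

def is_estimate(n=20000):
    total = 0.0
    for _ in range(n):
        state, weight = {}, 1.0
        for comp in p:
            fail = random.random() < q[comp]
            state[comp] = fail
            # Likelihood ratio: true measure over biased measure.
            weight *= p[comp] / q[comp] if fail else (1 - p[comp]) / (1 - q[comp])
        if system_failed(state):
            total += weight
    return total / n

# Exact failure probability by inclusion-exclusion, for comparison.
exact = p["A"] * p["B"] + p["C"] - p["A"] * p["B"] * p["C"]
est = is_estimate()
print(f"exact {exact:.6f}  importance-sampling estimate {est:.6f}")
```

Because failures are forced to occur often under q, the weighted estimator reaches a small relative error with far fewer samples than crude Monte Carlo would need for events this rare.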

  15. A comparison of alternative medicare reimbursement policies under optimal hospital pricing.

    Science.gov (United States)

    Dittman, D A; Morey, R C

    1983-01-01

    This paper applies and extends the use of a nonlinear hospital pricing model, recently posited in the literature by Dittman and Morey [1]. That model applied a hospital profit-maximizing behavior and studied the effects of optimal pricing of hospital ancillary services on the incidence of payment by private insurance companies and the Medicare trust fund. Here, we examine variations of the above model where both hospital profit-maximizing and profit-satisficing postures are of interest. We apply the model to three types of Medicare reimbursement policies currently in use or under legislative mandate to implement. The policies differ according to hospital size and whether cross-subsidies are allowed. We are interested in determining the effects of profit-maximizing and -satisficing behaviors of these three reimbursement policies on the levels of profits received, and on the respective implications for private payors and the Medicare trust fund. PMID:6347973

  16. Off-policy integral reinforcement learning optimal tracking control for continuous-time chaotic systems

    International Nuclear Information System (INIS)

    Wei Qing-Lai; Song Rui-Zhuo; Xiao Wen-Dong; Sun Qiu-Ye

    2015-01-01

    This paper presents an off-policy integral reinforcement learning (IRL) algorithm to obtain the optimal tracking control of unknown chaotic systems. Off-policy IRL can learn the solution of the Hamilton–Jacobi–Bellman (HJB) equation from system data generated by an arbitrary control. Moreover, off-policy IRL can be regarded as a direct learning method, which avoids the identification of the system dynamics. In this paper, the performance index function is first given based on the system tracking error and control error. For solving the HJB equation, an off-policy IRL algorithm is proposed. It is proven that the iterative control makes the tracking error system asymptotically stable, and that the iterative performance index function is convergent. A simulation study demonstrates the effectiveness of the developed tracking control method. (paper)

  17. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI such as digital hemispherical photography (DHP) or using a LI-COR 2200 Plant Canopy Analyzer. These LAI estimations can then be used as a proxy for biomass. The biomass estimates calculated can then inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four, 300 meter transects, with clip harvests plots spaced every 50m, and LAI sub-transects spaced every 10m. LAI was measured at four points along 6m sub-transects running perpendicular to the 300m transect. Clip harvest plots were co-located 4m from corresponding LAI transects, and had dimensions of 0.1m by 2m. 
We conducted regression analyses

  18. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2001-01-01

    We present an algorithm for computing a near-optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server...... be off. The systems' semi-Markov decision model has infinite action sets for the positive states. We design a new tailor-made policy-iteration algorithm for computing a policy for which the long-run average cost is at most a positive tolerance above the minimum average cost. For any positive tolerance...

  19. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2000-01-01

    We present an algorithm for computing a near optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server...... be off. The system's semi-Markov decision model has infinite action sets for the positive states. We design a new tailor-made policy iteration algorithm for computing a policy for which the long-run average cost is at most a positive tolerance above the minimum average cost. For any positive tolerance...

  20. Is there room for geoengineering in the optimal climate policy mix?

    International Nuclear Information System (INIS)

    Bahn, Olivier; Chesney, Marc; Gheyssens, Jonathan; Knutti, Reto; Pana, Anca Claudia

    2015-01-01

    Highlights: • We investigate the optimal policy mix for dealing with climate change. • We consider jointly mitigation, adaptation, and solar radiation management (SRM). • SRM can control temperature, but brings environmental side-effects. • SRM is not robust due to uncertainty in magnitude and persistency of side-effects. • Implementing SRM with wrong assumptions about side-effects largely decreases welfare. - Abstract: We investigate geoengineering as a possible substitute for mitigation and adaptation measures to address climate change. Relying on an integrated assessment model, we distinguish between the effects of solar radiation management (SRM) on atmospheric temperature levels and its side-effects on the environment. The optimal climate portfolio is a mix of mitigation, adaptation, and SRM. When accounting for uncertainty in the magnitude of SRM side-effects and their persistency over time, we show that the SRM option lacks robustness. We then analyse the welfare consequences of basing the SRM decision on wrong assumptions about its side-effects, and show that total output losses are considerable and increase with the error horizon. This reinforces the need to balance the policy portfolio in favour of mitigation

  1. Welfare-based optimal monetary policy in a two-sector small open economy

    Czech Academy of Sciences Publication Activity Database

    Rychalovska, Yuliya

    -, č. 16 (2007), s. 1-46 ISSN 1803-2397 Institutional research plan: CEZ:AV0Z70850503 Keywords : DSGE models * optimal monetary policy * non-traded goods Subject RIV: AH - Economics http://www.cnb.cz/m2export/sites/www.cnb.cz/en/research/research_publications/cnb_wp/download/cnbwp_2007_16.pdf

  2. Monetary policy and dynamic adjustment of corporate investment: A policy transmission channel perspective

    Directory of Open Access Journals (Sweden)

    Qiang Fu

    2015-06-01

Full Text Available We investigate monetary policy effects on corporate investment adjustment, using a sample of China’s A-share listed firms (2005–2012), under an asymmetric framework and from a monetary policy transmission channel perspective. We find that corporate investment adjustment is faster in expansionary than contractionary monetary policy periods. Monetary policy has a significant effect on adjustment speed through monetary and credit channels. An increase in the growth rate of money supply or credit accelerates adjustment. Both effects are significantly greater during tightening than expansionary periods. The monetary channel has significant asymmetry, whereas the credit channel has none. Leverage moderates the relationship between monetary policy and adjustment, with a greater effect in expansionary periods. This study enriches the corporate investment behavior literature and can help governments develop and optimize macro-control policies.

  3. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as
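The 90/10 criterion in this record maps onto a standard survey-sampling calculation. As a hedged illustration (the group sizes, CV values, and meter costs below are invented, not taken from the project), the required sample size per lamp group, with a finite-population correction, can be sketched as:

```python
import math

def sample_size_90_10(population: int, cv: float, z: float = 1.645, precision: float = 0.10) -> int:
    """Sample size meeting 90% confidence / 10% relative precision,
    with a finite-population correction (standard survey-sampling formula)."""
    n0 = (z * cv / precision) ** 2                # infinite-population size
    n = n0 * population / (n0 + population - 1)   # finite-population correction
    return math.ceil(n)

# Hypothetical lamp groups: (population, CV, meter cost per sampled lamp)
groups = [(5000, 0.5, 25.0), (2000, 0.3, 25.0), (800, 0.8, 60.0)]
total_cost = 0.0
for N, cv, meter_cost in groups:
    n = sample_size_90_10(N, cv)
    total_cost += n * meter_cost
    print(f"group N={N:5d}, CV={cv:.1f}: sample {n} lamps")
print(f"total metering cost: {total_cost:.2f}")
```

In the paper's framing the group sample sizes become decision variables of a cost minimisation under the 90/10 constraint; the formula above is the per-group feasibility check that such an optimiser would enforce.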

  4. Financial stability, wealth effects and optimal macroeconomic policy combination in the United Kingdom: A new-Keynesian dynamic stochastic general equilibrium framework

    Directory of Open Access Journals (Sweden)

    Muhammad Ali Nasir

    2016-12-01

Full Text Available This study derives an optimal macroeconomic policy combination for financial sector stability in the United Kingdom by employing a New Keynesian Dynamic Stochastic General Equilibrium (NK-DSGE framework. The empirical results obtained show that a disciplined fiscal and an accommodative monetary policy stance is optimal for financial sector stability. Furthermore, fiscal indiscipline countered by a contractionary monetary stance adversely affects financial sector stability. Financial markets, e.g. stocks and Gilts, show a short-term asymmetric response to macroeconomic policy interaction and to each other. The asymmetry is a reflection of portfolio adjustment. However, in the long run, the responses to the suggested optimal policy combination had homogeneous effects and there was evidence of co-movement in the stock and Gilt markets.

  5. Optimal preventive maintenance and repair policies for multi-state systems

    International Nuclear Information System (INIS)

    Sheu, Shey-Huei; Chang, Chin-Chih; Chen, Yen-Luan; George Zhang, Zhe

    2015-01-01

This paper studies the optimal preventive maintenance (PM) policies for multi-state systems. The scheduled PMs can be of either imperfect or perfect type. The improved effective age is utilized to model the effect of an imperfect PM. The system is considered to be in a failure state (unacceptable state) once its performance level falls below a given customer demand level. If the system fails before a scheduled PM, it is repaired and becomes operational again. We consider three types of repair action: major, minimal, and imperfect. The deterioration of the system is assumed to follow a non-homogeneous continuous time Markov process (NHCTMP) with finite state space. A recursive approach is proposed to efficiently compute the time-dependent distribution of the multi-state system. For each repair type, we find the optimal PM schedule that minimizes the average cost rate. The main implication of our results is that in determining the optimal scheduled PM, choosing the right repair type will significantly improve the efficiency of the system maintenance. Thus PM and repair decisions must be made jointly to achieve the best performance
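The time-dependent state distribution of such a continuous-time Markov deterioration process can also be computed by uniformization (a standard alternative to the paper's recursive approach). The sketch below uses an illustrative four-state pure-degradation generator; all rates, the demand level, and the horizon are assumptions for the example:

```python
import math

# Generator matrix Q[i][j] = rate of jumping i -> j; state 3 = best, 0 = failed.
Q = [[0.0, 0.0, 0.0, 0.0],        # state 0: failed, absorbing here
     [0.3, -0.3, 0.0, 0.0],       # state 1 degrades to 0 at rate 0.3
     [0.0, 0.4, -0.4, 0.0],       # state 2 degrades to 1 at rate 0.4
     [0.0, 0.0, 0.5, -0.5]]       # state 3 degrades to 2 at rate 0.5

def uniformization(Q, p0, t, terms=60):
    """State distribution at time t via uniformization:
    pi(t) = sum_k Poisson(k; lam*t) * p0 * P^k, with P = I + Q/lam."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) or 1.0
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)] for i in range(n)]
    pi = [0.0] * n
    v = p0[:]                      # holds p0 * P^k
    for k in range(terms):
        w = math.exp(-lam * t) * (lam * t) ** k / math.factorial(k)
        pi = [pi[j] + w * v[j] for j in range(n)]
        v = [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

p = uniformization(Q, [0.0, 0.0, 0.0, 1.0], t=2.0)
demand_level = 2                   # states below this count as failure
print("P(failure by t=2):", sum(p[:demand_level]))
```

A PM scheduler of the kind described above would evaluate such distributions at each candidate PM epoch to compute the average cost rate.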

  6. Optimal Pricing and Advertising Policies for New Product Oligopoly Models

    OpenAIRE

    Gerald L. Thompson; Jinn-Tsair Teng

    1984-01-01

    In this paper our previous work on monopoly and oligopoly new product models is extended by the addition of pricing as well as advertising control variables. These models contain Bass's demand growth model, and the Vidale-Wolfe and Ozga advertising models, as well as the production learning curve model and an exponential demand function. The problem of characterizing an optimal pricing and advertising policy over time is an important question in the field of marketing as well as in the areas ...

  7. Joint Optimal Production Planning for Complex Supply Chains Constrained by Carbon Emission Abatement Policies

    OpenAIRE

    He, Longfei; Xu, Zhaoguang; Niu, Zhanwen

    2014-01-01

    We focus on the joint production planning of complex supply chains facing stochastic demands and being constrained by carbon emission reduction policies. We pick two typical carbon emission reduction policies to research how emission regulation influences the profit and carbon footprint of a typical supply chain. We use the input-output model to capture the interrelated demand link between an arbitrary pair of two nodes in scenarios without or with carbon emission constraints. We design optim...

  8. Optimum equipment maintenance/replacement policy. Part 2: Markov decision approach

    Science.gov (United States)

    Charng, T.

    1982-01-01

    Dynamic programming was utilized as an alternative optimization technique to determine an optimal policy over a given time period. According to a joint effect of the probabilistic transition of states and the sequence of decision making, the optimal policy is sought such that a set of decisions optimizes the long-run expected average cost (or profit) per unit time. Provision of an alternative measure for the expected long-run total discounted costs is also considered. A computer program based on the concept of the Markov Decision Process was developed and tested. The program code listing, the statement of a sample problem, and the computed results are presented.
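As a minimal illustration of the Markov decision approach described above, the following sketch runs value iteration on a hypothetical two-state keep/replace problem; all transition probabilities, costs, and the discount factor are invented for the example, not taken from the report:

```python
GOOD, BAD = 0, 1
actions = ("keep", "replace")
# P[action][state] -> (prob of next state GOOD, prob of next state BAD)
P = {"keep":    {GOOD: (0.8, 0.2), BAD: (0.0, 1.0)},
     "replace": {GOOD: (0.8, 0.2), BAD: (0.8, 0.2)}}
# immediate cost of taking an action in a state
cost = {"keep": {GOOD: 0.0, BAD: 30.0},
        "replace": {GOOD: 20.0, BAD: 20.0}}
gamma = 0.95   # discount factor for long-run discounted cost

V = {GOOD: 0.0, BAD: 0.0}
for _ in range(1000):   # value iteration to (near-)convergence
    V = {s: min(cost[a][s] + gamma * sum(p * V[s2] for p, s2 in zip(P[a][s], (GOOD, BAD)))
                for a in actions)
         for s in (GOOD, BAD)}

# greedy policy with respect to the converged value function
policy = {s: min(actions, key=lambda a: cost[a][s] + gamma *
                 sum(p * V[s2] for p, s2 in zip(P[a][s], (GOOD, BAD))))
          for s in (GOOD, BAD)}
print("optimal policy:", policy)   # keep while good, replace once bad
```

With these numbers the fixed point is V(GOOD) = 76 and V(BAD) = 96, so replacement pays off only in the degraded state; the long-run average-cost variant in the report differs only in the optimality equation used.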

  9. Optimal replenishment policy for fuzzy inventory model with deteriorating items and allowable shortages under inflationary conditions

    Directory of Open Access Journals (Sweden)

    Jaggi Chandra K.

    2016-01-01

Full Text Available This study develops an inventory model to determine the ordering policy for deteriorating items with a constant demand rate under inflationary conditions over a fixed planning horizon. Shortages are allowed and are partially backlogged. In today’s wobbling economy, especially for long-term investment, the effects of inflation cannot be disregarded, as uncertainty about future inflation may influence the ordering policy. Therefore, in this paper a fuzzy model is developed that fuzzifies the inflation rate, discount rate, deterioration rate, and backlogging parameter by using triangular fuzzy numbers to represent the uncertainty. For defuzzification, the well-known signed distance method is employed to find the total profit over the planning horizon. The objective of the study is to derive the optimal number of cycles and their optimal length so as to maximize the net present value of the total profit over a fixed planning horizon. The necessary and sufficient conditions for an optimal solution are characterized. An algorithm is proposed to find the optimal solution. Finally, the proposed model has been validated with a numerical example. Sensitivity analysis has been performed to study the impact of various parameters on the optimal solution, and some important managerial implications are presented.

  10. Optimal Control and Optimization of Stochastic Supply Chain Systems

    CERN Document Server

    Song, Dong-Ping

    2013-01-01

    Optimal Control and Optimization of Stochastic Supply Chain Systems examines its subject in the context of the presence of a variety of uncertainties. Numerous examples with intuitive illustrations and tables are provided, to demonstrate the structural characteristics of the optimal control policies in various stochastic supply chains and to show how to make use of these characteristics to construct easy-to-operate sub-optimal policies.                 In Part I, a general introduction to stochastic supply chain systems is provided. Analytical models for various stochastic supply chain systems are formulated and analysed in Part II. In Part III the structural knowledge of the optimal control policies obtained in Part II is utilized to construct easy-to-operate sub-optimal control policies for various stochastic supply chain systems accordingly. Finally, Part IV discusses the optimisation of threshold-type control policies and their robustness. A key feature of the book is its tying together of ...

  11. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    Full Text Available The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. Analytical FAAS method of determining cobalt, chromium, copper, nickel, lead and zinc content in gabbro sample and geochemical standard AGV-1 has been applied for verification. Dissolution in mixtures of various inorganic acids has been tested, as well as Na2CO3 fusion technique. The results obtained by different methods have been compared and dissolution in the mixture of HNO3 + HF has been recommended as optimal.

  12. Optimal combination of energy crops under different policy scenarios; The case of Northern Greece

    International Nuclear Information System (INIS)

    Zafeiriou, Eleni; Petridis, Konstantinos; Karelakis, Christos; Arabatzis, Garyfallos

    2016-01-01

Energy crop production is considered environmentally benign and socially acceptable, offering ecological benefits over fossil fuels through its contribution to the reduction of greenhouse gases and acidifying emissions. Energy crops are subject to persistent policy support by the EU, despite their limited or even marginally negative impact on the greenhouse effect. The present study endeavors to optimize the agricultural income generated by energy crops in a remote and disadvantaged region, with the assistance of linear programming. The optimization concerns the income created from soybean, sunflower (a proxy for energy crops), and corn. Different policy scenarios imposed restrictions on the value of the subsidies as a proxy for EU policy tools, the value of inputs (costs of capital and labor) and different irrigation conditions. The results indicate that the area and the imports per energy crop remain unchanged, independently of the policy scenario enacted. Furthermore, corn cultivation contributes the most to income maximization, whereas the implemented CAP policy plays an incremental role in the uptake of an energy crop. A key implication is that alternative forms of motivation should be provided to the farmers, beyond the financial ones, in order for the extensive use of energy crops to be achieved. - Highlights: •A stochastic and a deterministic LP model are formulated. •The role of the CAP is vital in the generated income. •Imports and cultivated areas are subsidy neutral. •The regime of free market results in lower income acquired from the potential crop mix. •Non-financial motivation is a key determinant of the farmers’ attitude towards energy crops.

  13. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  14. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single degree of- freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  15. Energy efficiency optimization in distribution transformers considering Spanish distribution regulation policy

    International Nuclear Information System (INIS)

    Pezzini, Paola; Gomis-Bellmunt, Oriol; Frau-Valenti, Joan; Sudria-Andreu, Antoni

    2010-01-01

    In transmission and distribution systems, the high number of installed transformers, a loss source in networks, suggests a good potential for energy savings. This paper presents how the Spanish Distribution regulation policy, Royal Decree 222/2008, affects the overall energy efficiency in distribution transformers. The objective of a utility is the maximization of the benefit, and in case of failures, to install a chosen transformer in order to maximize the profit. Here, a novel method to optimize energy efficiency, considering the constraints set by the Spanish Distribution regulation policy, is presented; its aim is to achieve the objectives of the utility when installing new transformers. The overall energy efficiency increase is a clear result that can help in meeting the requirements of European environmental plans, such as the '20-20-20' action plan.

  16. Examination of energy price policies in Iran for optimal configuration of CHP and CCHP systems based on particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Tichi, S.G.; Ardehali, M.M.; Nazari, M.E.

    2010-01-01

The current subsidized energy prices in Iran are proposed to be gradually eliminated over the next few years. The objective of this study is to examine the effects of current and future energy price policies on optimal configuration of combined heat and power (CHP) and combined cooling, heating, and power (CCHP) systems in Iran, under the conditions of selling and not-selling electricity to the utility. The particle swarm optimization algorithm is used for minimizing the cost function for owning and operating various CHP and CCHP systems in an industrial dairy unit. The results show that with the estimated future unsubsidized utility prices, CHP and CCHP systems operating with a reciprocating-engine prime mover have total costs of $5.6 × 10^6 and $2.9 × 10^6 over a useful life of 20 years, respectively, while both systems have the same capital recovery period of 1.3 years. However, for the same prime mover and with current subsidized prices, CHP and CCHP systems require 4.9 and 5.2 years for capital recovery, respectively. It is concluded that the current energy price policies hinder the promotion of installing CHP and CCHP systems, and that the policy of selling electricity to the utility, as well as eliminating subsidies, is a prerequisite to successful widespread utilization of such systems.

  17. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    Science.gov (United States)

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.

  18. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  19. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy of the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors could be effectively used to direct soil sampling design for assessing spatial variability of soil moisture. A protocol, using a field-scale bulk ECa survey, has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used as a method to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used: the first (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expected distance between an arbitrarily chosen point and its nearest observation; the second (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as the weighting function; and the third (mean of average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach has found the optimal solution in a reasonable computation time.
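The MMSD criterion described above lends itself to a compact spatial-simulated-annealing sketch. The field geometry, perturbation size, and cooling schedule below are illustrative assumptions, not the MSANOS implementation:

```python
import math
import random

random.seed(0)

def mmsd(samples, eval_points):
    """Mean of the shortest distances from each evaluation point to a sample."""
    return sum(min(math.dist(p, s) for s in samples) for p in eval_points) / len(eval_points)

# Illustrative field: the unit square, evaluated on a coarse grid
grid = [(x / 10, y / 10) for x in range(11) for y in range(11)]
samples = [(random.random(), random.random()) for _ in range(15)]

temperature = 0.1
cur = mmsd(samples, grid)
initial = cur
for _ in range(2000):
    i = random.randrange(len(samples))
    old = samples[i]
    # perturb one sampling location, clipped to the field boundary
    samples[i] = (min(1.0, max(0.0, old[0] + random.gauss(0, 0.05))),
                  min(1.0, max(0.0, old[1] + random.gauss(0, 0.05))))
    cand = mmsd(samples, grid)
    # accept improvements always; accept worse schemes with Metropolis probability
    if cand < cur or random.random() < math.exp((cur - cand) / temperature):
        cur = cand
    else:
        samples[i] = old
    temperature *= 0.998   # geometric cooling law
print(f"MMSD: {initial:.3f} -> {cur:.3f}")
```

The MWMSD and MAOKV criteria of the study would replace the `mmsd` objective with a gradient-weighted mean or a kriging-variance evaluation, leaving the annealing loop unchanged.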

  20. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)

  1. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

Full Text Available The use of standardized and optimized protocols has been recently advocated for different arthropod taxa instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied for spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimations of true richness; and (3) meaningful comparisons between undersampled areas.

  2. Joint cost of energy under an optimal economic policy of hybrid power systems subject to uncertainty

    International Nuclear Information System (INIS)

    Díaz, Guzmán; Planas, Estefanía; Andreu, Jon; Kortabarria, Iñigo

    2015-01-01

Economic optimization of hybrid systems is usually performed by means of LCoE (levelized cost of energy) calculation. Previous works deal with the LCoE calculation of the whole hybrid system while disregarding an important issue: the stochastic components of the system units must be jointly considered. This paper deals with this issue and proposes a new fast optimal policy that properly calculates the LCoE of a hybrid system and finds the lowest LCoE. This proposed policy also considers the implied competition among power sources when variability of gas and electricity prices is taken into account. Additionally, it presents a comparison between the LCoE of the hybrid system and that of its individual generation technologies by means of a fast and robust algorithm based on vector logical computation. Numerical case analyses based on realistic data are presented that evaluate the contribution of technologies in a hybrid power system to the joint LCoE. - Highlights: • We perform the LCoE calculation with the stochastic component jointly considered. • We propose a fast and optimal policy that minimizes the LCoE. • We compare the obtained LCoEs by means of a fast and robust algorithm. • We take into account the competition among gas prices and electricity prices

  3. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distribution in the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected a fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence-matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing with different sampling rates, we found that including 10% of interior voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2–3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling algorithm can be developed that reduces optimization time by more than a factor of 2 without significantly degrading the dose quality
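The boundary-plus-clustered-interior selection can be sketched as follows. This is a hedged illustration: the scalar "signature" stands in for a voxel's influence-matrix row, and the voxel counts, boundary rule, and cluster key are all invented:

```python
import random

random.seed(1)

# Hypothetical voxels: a boundary flag and a coarse influence signature.
# Rounding the signature to one decimal acts as a crude clustering key,
# standing in for the clustering algorithm in the abstract.
voxels = [{"id": i,
           "boundary": i % 10 == 0,
           "sig": round(random.random(), 1)}
          for i in range(1000)]

SAMPLING_RATE = 0.10   # keep 10% of interior voxels, per the abstract's finding

selected = [v for v in voxels if v["boundary"]]   # all boundary voxels kept
clusters = {}
for v in voxels:
    if not v["boundary"]:
        clusters.setdefault(v["sig"], []).append(v)
for members in clusters.values():
    k = max(1, round(SAMPLING_RATE * len(members)))
    selected.extend(random.sample(members, k))   # a few voxels per cluster
print(len(voxels), "->", len(selected), "voxels enter the optimization")
```

The reduced voxel set then replaces the full set in the dose-constraint matrix, which is where the factor 2–3 speed-up reported above comes from.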

  4. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval of two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and the repair time is exponentially distributed with a geometric increasing mean. Our objective is to minimize the expected average cost under an availability requi...

  5. Automated procedure for selection of optimal refueling policies for light water reactors

    International Nuclear Information System (INIS)

    Lin, B.I.; Zolotar, B.; Weisman, J.

    1979-01-01

    An automated procedure determining a minimum cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low-power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function

  6. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

Full Text Available The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for physical and chemical property sets and investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, in a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configuration before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimate variance at these locations. The addition of optimal samples, for specific regions, increased the accuracy up to 2 % for chemical and 1 % for physical properties. The use of a sample grid and medium-scale variogram, as previous information for the conception of additional sampling schemes, was very promising to determine the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.

  7. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
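The core of a DCDS readout, differencing the averaged reference and signal plateaus, can be sketched as follows. The plain-mean filter, sample counts, waveform levels and noise figure are illustrative, not the optimized filter derived in the paper:

```python
import random

random.seed(42)

def dcds_read(reference_level, signal_level, n_samples=64, noise_sigma=2.0):
    """DCDS estimate of one pixel's charge: oversample the reset (reference)
    and signal plateaus of the CCD video waveform, then difference the two
    averages. Averaging N white-noise samples reduces noise by sqrt(N) on
    each plateau; the difference suppresses the common reset offset."""
    ref = [reference_level + random.gauss(0, noise_sigma) for _ in range(n_samples)]
    sig = [signal_level + random.gauss(0, noise_sigma) for _ in range(n_samples)]
    return sum(sig) / n_samples - sum(ref) / n_samples

pixel = dcds_read(reference_level=1000.0, signal_level=1350.0)
print(f"estimated pixel value: {pixel:.1f} ADU")   # close to the true 350 ADU
```

The paper's contribution is essentially the noise-optimal replacement for the uniform weights used here, derived from a discrete-time model of the signal chain and ADC.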

  8. Energy efficiency optimization in distribution transformers considering Spanish distribution regulation policy

    Energy Technology Data Exchange (ETDEWEB)

    Pezzini, Paola [Centre d' Innovacio en Convertidors Estatics i Accionaments (CITCEA-UPC), E.T.S. Enginyeria Industrial Barcelona, Universitat Politecnica Catalunya, Diagonal, 647, Pl. 2, 08028 Barcelona (Spain); Gomis-Bellmunt, Oriol; Sudria-Andreu, Antoni [Centre d' Innovacio en Convertidors Estatics i Accionaments (CITCEA-UPC), E.T.S. Enginyeria Industrial Barcelona, Universitat Politecnica Catalunya, Diagonal, 647, Pl. 2, 08028 Barcelona (Spain); IREC Catalonia Institute for Energy Research, Josep Pla, B2, Pl. Baixa, 08019 Barcelona (Spain); Frau-Valenti, Joan [ENDESA, Carrer Joan Maragall, 16 07006 Palma (Spain)

    2010-12-15

    In transmission and distribution systems, the high number of installed transformers, a loss source in networks, suggests a good potential for energy savings. This paper presents how the Spanish Distribution regulation policy, Royal Decree 222/2008, affects the overall energy efficiency of distribution transformers. A utility's objective is to maximize its profit; in the case of a failure, this means choosing which replacement transformer to install. Here, a novel method to optimize energy efficiency, considering the constraints set by the Spanish Distribution regulation policy, is presented; its aim is to achieve the utility's objectives when installing new transformers. The resulting increase in overall energy efficiency can help in meeting the requirements of European environmental plans, such as the '20-20-20' action plan. (author)

  9. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
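As a hedged illustration of such a normative benchmark (not the paper's own model), a Monte Carlo sketch can trade off the accuracy gained from larger samples against an assumed per-draw cost for two Bernoulli payoff distributions; all parameters here are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
p_a, p_b = 0.6, 0.4        # latent win probabilities of the two options
cost = 0.002               # assumed per-draw sampling cost (illustrative)

def expected_net(n, reps=4000):
    """Expected payoff of picking the higher sample mean after n draws each,
    net of the total sampling effort 2n * cost."""
    a = rng.binomial(n, p_a, reps) / n
    b = rng.binomial(n, p_b, reps) / n
    pick_a = np.where(a == b, rng.random(reps) < 0.5, a > b)
    payoff = np.where(pick_a, p_a, p_b)       # expected payoff of the chosen option
    return payoff.mean() - cost * 2 * n

ns = np.arange(1, 41)
net = [expected_net(n) for n in ns]
best_n = ns[int(np.argmax(net))]
print(best_n)
```

Beyond the optimum, extra draws buy almost no additional choice accuracy while the sampling cost keeps growing linearly, which is the basic shape the normative analysis formalizes.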

  10. Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy

    Science.gov (United States)

    Magee, T. M.; Clement, M. A.; Zagona, E. A.

    2012-12-01

    Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified the non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example, target environmental flow levels may not be satisfied if doing so would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before power generation is optimized. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, but also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent in increased wind penetration levels require an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation depends on a number of factors that can vary by location. Wind factors include wind penetration level

  11. Optimal Policy under Restricted Government Spending

    DEFF Research Database (Denmark)

    Sørensen, Anders

    2006-01-01

    Welfare ranking of policy instruments is addressed in a two-sector Ramsey model with monopoly pricing in one sector as the only distortion. When government spending is restricted, i.e. when a government is unable or unwilling to finance the required costs for implementing the optimum policy...... effectiveness can exceed the welfare loss from introducing new distortions. Moreover, it is found that the investment subsidy is gradually phased out of the welfare maximizing policy, which may be a policy combining the two subsidies, when the level of government spending is increased.Keywords: welfare ranking......, indirect and direct policy instruments, restricted government spending JEL: E61, O21, O41...

  12. INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS.

    Science.gov (United States)

    Villar, Sofía S

    2016-01-01

    Motivated by a class of Partially Observable Markov Decision Processes with application in surveillance systems, in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon subject to the resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that this class of reinitializing bandits, in which a project's state deteriorates while active and resets to its initial state when passive until its completion, possesses the structural property of indexability, and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogeneous arms under the expected total criterion, and it is further recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of another widely used heuristic, the Myopic index rule, by computing its suboptimality gap in closed form. We present numerical studies which illustrate, for the more general instances, the performance advantages of the Whittle index rule over other simple heuristics.
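The 1-limited Round Robin rule that recovers the optimal index policy is, as a pure scheduling pattern, very simple: activate exactly one arm per slot, cycling through the arms in a fixed order. A minimal sketch of that activation pattern (the completion/reset dynamics of the actual bandit are omitted here):

```python
from collections import deque

def one_limited_round_robin(n_arms, horizon):
    """Activate exactly one arm per slot, cycling in fixed order (1-limited RR)."""
    order = deque(range(n_arms))
    schedule = []
    for _ in range(horizon):
        arm = order[0]
        order.rotate(-1)        # move the served arm to the back of the cycle
        schedule.append(arm)
    return schedule

sched = one_limited_round_robin(4, 12)
print(sched)  # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3]
```

Every arm is activated exactly once per cycle, which is what makes the rule both tractable and, for the reinitializing class studied here, optimal.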

  13. Optimal repairable spare-parts procurement policy under total business volume discount environment

    International Nuclear Information System (INIS)

    Pascual, Rodrigo; Santelices, Gabriel; Lüer-Villagra, Armin; Vera, Jorge; Cawley, Alejandro Mac

    2017-01-01

    In asset intensive fields, where components are expensive and high system availability is required, spare parts procurement is often a critical issue. To gain competitiveness and market share, it is common for vendors to offer Total Business Volume Discounts (TBVD). Accordingly, companies must define the procurement and stocking policy for their spare parts in order to reduce procurement costs and increase asset availability. In response to those needs, this work presents an optimization model that maximizes the availability of the equipment under a TBVD environment, subject to a budget constraint. The model uses a single-echelon structure where parts can be repaired. It determines the optimal number of repairable spare parts to be stocked, placing emphasis on asset availability, procurement costs and service levels as the main decision criteria. A heuristic procedure that achieves high-quality solutions in a fast and time-consistent way was implemented to reduce the time required to obtain the model solution. Results show that using an optimal procurement policy for spare parts and accounting for TBVD produces better overall results and yields better availability performance. - Highlights: • We propose a model for procurement of repairable components in single-echelon and business volume discount environments. • We used a mathematical model to develop a competitive heuristic that provides high quality solutions in very short times. • Our model places emphasis on using system availability, procurement costs and service levels as leading decision criteria. • The model can be used as an engine for a multi-criteria Decision Support System.

  14. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid are ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  15. Optimal Monetary Policy and Exchange Rate in a Small Open Economy with Unemployment

    Directory of Open Access Journals (Sweden)

    Hyuk-Jae Rhee

    2014-09-01

    Full Text Available In this paper, we consider a small open economy under the New Keynesian model with unemployment of Gali (2011a, b) to discuss the design of monetary policy. Our findings can be summarized in three parts. First, even with the existence of unemployment, the optimal policy is to minimize the variance of domestic price inflation, wage inflation, and the output gap when both domestic prices and wages are sticky. Second, stabilizing the unemployment rate is important in reducing the welfare loss incurred by both technology and labor supply shocks. Therefore, introducing the unemployment rate as another argument into the Taylor-rule type interest rate rule will be welfare-enhancing. Lastly, controlling CPI inflation is the best option when the policy is not allowed to respond to the unemployment rate. Once the unemployment rate is controlled, however, the stabilizing power of the CPI inflation-based Taylor rule is diminished.

  16. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques as well as to ensure the accuracy of optimization. However, earlier studies have drawbacks: their optimization loops involve three phases and rely on empirical parameters. We propose a unified sampling criterion to simplify the algorithm and to achieve the global optimum of constrained problems without any empirical parameters. The criterion selects points located in the feasible region with high model uncertainty as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant between the infill sampling criterion and the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because the sample points are not confined to extremely small regions, as in super-EGO. The performance of the proposed method, in terms of solvability, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.

  17. A structural model for electricity prices with spikes: measurement of spike risk and optimal policies for hydropower plant operation

    International Nuclear Information System (INIS)

    Kanamura, Takashi

    2007-01-01

    This paper proposes a new model for electricity prices based on demand and supply, which we call a structural model. We show that the structural model can generate price spikes that fit the observed data better than those generated by other preceding models such as the jump diffusion model and the Box-Cox transformation model. We apply the structural model to obtain the optimal operation policy for a pumped-storage hydropower generator, and show that the structural model can provide more realistic optimal policies than the jump diffusion model. (author)

  18. A structural model for electricity prices with spikes: measurement of spike risk and optimal policies for hydropower plant operation

    Energy Technology Data Exchange (ETDEWEB)

    Kanamura, Takashi [Hitotsubashi University, Tokyo (Japan). Graduate School of International Corporate Strategy; Ohashi, Azuhiko [J-Power, Tokyo (Japan)

    2007-09-15

    This paper proposes a new model for electricity prices based on demand and supply, which we call a structural model. We show that the structural model can generate price spikes that fit the observed data better than those generated by other preceding models such as the jump diffusion model and the Box-Cox transformation model. We apply the structural model to obtain the optimal operation policy for a pumped-storage hydropower generator, and show that the structural model can provide more realistic optimal policies than the jump diffusion model. (author)

  19. Optimal operation and forecasting policy for pump storage plants in day-ahead markets

    International Nuclear Information System (INIS)

    Muche, Thomas

    2014-01-01

    Highlights: • We investigate unit commitment deploying stochastic and deterministic approaches. • We consider day-ahead markets, their forecasts and weekly price-based unit commitment. • Stochastic and deterministic unit commitment are identical for the first planning day. • Unit commitment and bidding policy can be based on the deterministic approach. • Robust forecasting models should be estimated based on the whole planning horizon. - Abstract: Pump storage plants are an important electricity storage technology at present. Investments in this technology are expected to increase. The necessary investment valuation often includes expected cash flows from future price-based unit commitment policies. A price-based unit commitment policy has to consider market price uncertainty and the information-revealing nature of electricity markets. For this environment, stochastic programming models are suggested to derive the optimal unit commitment policy. For the considered day-ahead electricity market, stochastic and deterministic unit commitment policies are comparable, suggesting the application of easier-to-implement deterministic models. In order to identify suitable unit commitment and forecasting policies, deterministic unit commitment models are applied to actual day-ahead electricity prices of a whole year. As a result, a robust forecasting model should consider the unit commitment planning period. These robust forecasting models result in expected cash flows similar to realized ones, allowing a reliable investment valuation.
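In its simplest linear form, deterministic price-based unit commitment for a pump storage plant reduces to scheduling pumping and generation against known day-ahead prices subject to reservoir dynamics. A minimal LP sketch (the prices, efficiency and plant limits are invented, and commitment details such as start-up costs are omitted):

```python
import numpy as np
from scipy.optimize import linprog

prices = np.array([30, 25, 20, 22, 60, 80, 70, 40], dtype=float)  # day-ahead prices
T, eta = len(prices), 0.75          # round-trip pumping efficiency (assumed)
g_max = u_max = 10.0                # MW turbine / pump limits
s0, s_cap = 20.0, 40.0              # MWh initial storage and reservoir capacity

# decision vector x = [g_0..g_{T-1}, u_0..u_{T-1}]; maximize revenue sum p_t (g_t - u_t)
c = np.concatenate([-prices, prices])           # linprog minimizes, so negate revenue
L = np.tril(np.ones((T, T)))                    # cumulative-sum operator
A_ub = np.block([[L, -eta * L],                 # storage >= 0:  sum(g) - eta*sum(u) <= s0
                 [-L, eta * L]])                # storage <= cap
b_ub = np.concatenate([np.full(T, s0), np.full(T, s_cap - s0)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, g_max)] * T + [(0, u_max)] * T)
g, u = res.x[:T], res.x[T:]
print(round(-res.fun, 1))                       # revenue of the optimal schedule
```

The solution pumps in the cheap hours and generates in the expensive ones; re-solving this LP each day against the next day's price forecast is the deterministic policy the record compares against its stochastic counterpart.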

  20. Optimization of the two-sample rank Neyman-Pearson detector

    Science.gov (United States)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal algorithms based on rank considerations in the case of finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ..., xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, which represents interference. Attention is given to conditions in the absence of a signal, the probability of detecting an arriving signal, details regarding the use of the Neyman-Pearson criterion, the scheme of an optimal rank, multichannel, incoherent detector, and an analysis of the detector.
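A hedged sketch of the idea: a two-sample rank-sum detector whose threshold is set by Monte Carlo to meet a Neyman-Pearson false-alarm rate (the sample sizes, Gaussian noise model and signal amplitude are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def rank_sum(x, y):
    """Sum of the ranks of the x-observations in the pooled x-and-y sample."""
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1
    return ranks[: len(x)].sum()

n, m, alpha = 16, 64, 0.05   # observations, noise-reference size, false-alarm rate

# Neyman-Pearson threshold: (1 - alpha)-quantile of the statistic under noise only
h0 = [rank_sum(rng.standard_normal(n), rng.standard_normal(m)) for _ in range(5000)]
thr = np.quantile(h0, 1 - alpha)

# detection probability for a weak signal of amplitude 0.5 in the observations
h1 = [rank_sum(0.5 + rng.standard_normal(n), rng.standard_normal(m)) for _ in range(5000)]
p_fa = np.mean(np.array(h0) > thr)
p_d = np.mean(np.array(h1) > thr)
print(round(p_fa, 2), round(p_d, 2))
```

Because the statistic depends only on ranks, the threshold (and hence the false-alarm rate) is distribution-free with respect to the interference, which is the appeal of rank detectors.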

  1. Optimal Policies for Deteriorating Items with Maximum Lifetime and Two-Level Trade Credits

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2014-01-01

    Full Text Available The retailer's optimal policies are developed for a product that has a fixed lifetime and whose units in inventory deteriorate at a constant rate. This study is mainly applicable to pharmaceuticals, drugs, beverages, dairy products, and so forth. To boost demand, offering a credit period is considered as the promotional tool. The retailer passes on to the buyers the credit period received from the supplier. The objective is to maximize the retailer's total profit per unit time with respect to the optimal retail price of an item and the purchase quantity during the optimal cycle time. The concavity of the total profit per unit time is demonstrated using parametric inventory values. A sensitivity analysis is carried out to advise the decision maker which critical inventory parameters to keep an eye on.

  2. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    Science.gov (United States)

    He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian

    2010-12-01

    In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both the centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint by taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by the standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze the corresponding characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case that the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.

  3. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    Directory of Open Access Journals (Sweden)

    Jiang Zhu

    2010-01-01

    Full Text Available In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both the centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint by taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by the standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze the corresponding characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case that the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.

  4. Optimal replacement policy of products with repair-cost threshold after the extended warranty

    Institute of Scientific and Technical Information of China (English)

    Lijun Shang; Zhiqiang Cai

    2017-01-01

    The reliability of the product sold under a warranty is usually maintained by the manufacturer during the warranty period. After the expiry of the warranty, however, the consumer confronts the problem of how to maintain the reliability of the product. This paper proposes, from the consumer's perspective, a replacement policy after the extended warranty, under the assumption that the product is sold under the renewable free replacement warranty (RFRW) policy in which the replacement is dependent on the repair-cost threshold. Under the proposed policy, replacement after the extended warranty is performed by the consumer based on the repair-cost threshold or the preventive replacement (PR) age, which are decision variables. The expected cost rate model is derived from the consumer's perspective. The existence and uniqueness of the optimal solution that minimizes the expected cost rate per unit time are proved. Finally, a numerical example is presented to exemplify the proposed model.

  5. An Optimization Model for Expired Drug Recycling Logistics Networks and Government Subsidy Policy Design Based on Tri-level Programming.

    Science.gov (United States)

    Huang, Hui; Li, Yuyu; Huang, Bo; Pi, Xing

    2015-07-09

    In order to recycle and dispose of all of the public's expired drugs, the government should design a subsidy policy to stimulate users to return their expired drugs, and drugstores should take on the responsibility of recycling expired drugs, in other words, serve as recycling stations. For this purpose it is necessary for the government to select the right recycling stations and treatment stations so as to optimize the expired drug recycling logistics network and minimize the total costs of recycling and disposal. This paper establishes a tri-level programming model to study how the government can optimize an expired drug recycling logistics network and the appropriate subsidy policies. Furthermore, a Hybrid Genetic Simulated Annealing Algorithm (HGSAA) is proposed to search for the optimal solution of the model. An experiment is discussed to illustrate the good quality of the recycling logistics network and government subsidies obtained by the HGSAA. The HGSAA is proven to have the ability to converge on the global optimal solution, and to act as an effective algorithm for solving the optimization problem of expired drug recycling logistics networks and government subsidies.
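The station-selection core of such a model can be sketched with plain simulated annealing; the HGSAA additionally hybridizes in a genetic crossover step that this sketch omits, and the locations, counts and cooling schedule below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
users = rng.uniform(0, 10, size=(200, 2))   # user locations (toy data)
stores = rng.uniform(0, 10, size=(25, 2))   # candidate drugstore sites
k = 5                                       # stations to subsidize

def total_dist(chosen):
    """Sum of each user's distance to the nearest chosen station."""
    d = np.linalg.norm(users[:, None] - stores[chosen][None, :], axis=-1)
    return d.min(axis=1).sum()

cur = list(rng.choice(len(stores), k, replace=False))
cur_cost = total_dist(cur)
best, best_cost, temp = cur[:], cur_cost, 50.0
for _ in range(3000):
    cand = cur[:]
    cand[rng.integers(k)] = int(rng.integers(len(stores)))  # swap one station
    if len(set(cand)) < k:
        continue
    c = total_dist(cand)
    if c < cur_cost or rng.random() < np.exp((cur_cost - c) / temp):
        cur, cur_cost = cand, c                # accept improving / some worse moves
        if c < best_cost:
            best, best_cost = cand[:], c
    temp *= 0.998                              # geometric cooling
print(round(best_cost, 1))
```

Accepting occasional cost-increasing swaps at high temperature is what lets the search escape local optima; the genetic component of the HGSAA then recombines good station sets across a population.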

  6. Optimizing Multireservoir System Operating Policies Using Exogenous Hydrologic Variables

    Science.gov (United States)

    Pina, Jasson; Tilmant, Amaury; Côté, Pascal

    2017-11-01

    Stochastic dual dynamic programming (SDDP) is one of the few available algorithms to optimize the operating policies of large-scale hydropower systems. This paper presents a variant, called SDDPX, in which exogenous hydrologic variables, such as snow water equivalent and/or sea surface temperature, are included in the state space vector together with the traditional (endogenous) variables, i.e., past inflows. A reoptimization procedure is also proposed in which SDDPX-derived benefit-to-go functions are employed within a simulation carried out over the historical record of both the endogenous and exogenous hydrologic variables. In SDDPX, release policies are now a function of storages, past inflows, and relevant exogenous variables that potentially capture more complex hydrological processes than those found in traditional SDDP formulations. To illustrate the potential gain associated with the use of exogenous variables when operating a multireservoir system, the 3,137 MW hydropower system of Rio Tinto (RT) located in the Saguenay-Lac-St-Jean River Basin in Quebec (Canada) is used as a case study. The performance of the system is assessed for various combinations of hydrologic state variables, ranging from the simple lag-one autoregressive model to more complex formulations involving past inflows, snow water equivalent, and winter precipitation.

  7. Suboptimal and optimal order policies for fixed and varying replenishment interval with declining market

    Science.gov (United States)

    Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon

    2016-06-01

    One of the supply chain risks for hi-tech products is the result of rapid technological innovation, which causes a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and consumer communication products. From a practical standpoint, a more realistic replenishment policy needs to consider the impact of such risks, especially when some portion of the shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one yields a suboptimal solution with a fixed replenishment interval and a simpler computational process; the other yields the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. A sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.

  8. 76 FR 41186 - Salmonella Verification Sampling Program: Response to Comments on New Agency Policies and...

    Science.gov (United States)

    2011-07-13

    ... Service [Docket No. FSIS-2008-0008] Salmonella Verification Sampling Program: Response to Comments on New Agency Policies and Clarification of Timeline for the Salmonella Initiative Program (SIP) AGENCY: Food... Federal Register notice (73 FR 4767- 4774), which described upcoming policy changes in the FSIS Salmonella...

  9. US fiscal regimes and optimal monetary policy

    NARCIS (Netherlands)

    Mavromatis, K.

    2014-01-01

    Fiscal policy in the US has been documented to have been the leading authority in the ‘60s and the ‘70s (active fiscal policy), while committing to make the necessary fiscal adjustments following Volcker’s appointment (passive fiscal policy). Moreover, while passive, US fiscal policy has at times

  10. A multi-period optimization model for planning of China's power sector with consideration of carbon dioxide mitigation—The importance of continuous and stable carbon mitigation policy

    International Nuclear Information System (INIS)

    Zhang, Dongjie; Liu, Pei; Ma, Linwei; Li, Zheng

    2013-01-01

    A great challenge China's power sector faces is to mitigate its carbon emissions whilst satisfying the ever-increasing power demand. Optimal planning of the power sector with consideration of carbon mitigation over a long-term future remains a complex task, involving many technical alternatives and an infinite number of possible plant installations, retrofits, and decommissionings over the planning horizon. The authors previously built a multi-period optimization model for the planning of China's power sector during 2010–2050. Based on that model, this paper computes optimal pathways for China's power sector under two typical decision-making modes, based on “full-information” and “limited-information” hypotheses, and analyzes how two typical types of carbon tax policy, a “continuous and stable” one and a “loose first and tight later” one, affect the optimal planning results. The results showed that setting carbon tax policy for the long-term future, and improving continuity and stability in policy execution, can effectively reduce both the accumulated total carbon emissions and the power sector's cost of carbon mitigation. The conclusion of this study is of great significance for policy makers designing carbon mitigation policies in China and in other countries as well. - Highlights: • A multi-stage optimization model for planning the power sector is applied as a basis. • Differences between ideal and actual decision-making processes are proposed and analyzed. • A “continuous and stable” policy and a “loose first and tight later” one are designed. • 4 policy scenarios are studied applying the optimal planning model and compared. • The importance of a “continuous and stable” policy for the long term is well demonstrated.

  11. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied so far. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. The theoretical considerations are integrated with a number of simulation studies, based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
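The optimal-allocation idea the record describes can be illustrated with a small sketch. This is the classical Neyman rule for stratified sampling (allocate in proportion to stratum size times stratum standard deviation), not the IST-specific derivation of the paper, and the strata figures below are hypothetical:

```python
def neyman_allocation(total_n, strata):
    """Allocate a fixed total sample size across strata in proportion to
    N_h * S_h (stratum size times stratum standard deviation)."""
    weights = [N * S for N, S in strata]
    total = sum(weights)
    raw = [total_n * w / total for w in weights]
    alloc = [int(r) for r in raw]
    # largest-remainder rounding so the allocations sum exactly to total_n
    order = sorted(range(len(raw)), key=lambda i: raw[i] - alloc[i], reverse=True)
    for i in order[: total_n - sum(alloc)]:
        alloc[i] += 1
    return alloc

# Hypothetical strata: (size N_h, standard deviation S_h)
print(neyman_allocation(100, [(500, 2.0), (300, 4.0), (200, 1.0)]))  # → [42, 50, 8]
```

The high-variance middle stratum receives half the sample even though it is not the largest, which is exactly the efficiency gain over proportional allocation.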

  12. Continuous Linguistic Rhetorical Education as a Means of Optimizing Language Policy in Russian Multinational Regions

    Science.gov (United States)

    Vorozhbitova, Alexandra A.; Konovalova, Galina M.; Ogneva, Tatiana N.; Chekulaeva, Natalia Y.

    2017-01-01

    Drawing on the function of Russian as a state language, the paper proposes a concept of continuous linguistic rhetorical (LR) education, perceived as a means of optimizing language policy in Russian multinational regions. LR education as an innovative pedagogical system shapes a learner's readiness for self-projection as a strong linguistic…

  13. An Optimization Model for Expired Drug Recycling Logistics Networks and Government Subsidy Policy Design Based on Tri-level Programming

    Directory of Open Access Journals (Sweden)

    Hui Huang

    2015-07-01

    Full Text Available In order to recycle and dispose of all expired drugs, the government should design a subsidy policy to stimulate users to return their expired drugs, and drugstores should take responsibility for recycling expired drugs, in other words, act as recycling stations. For this purpose it is necessary for the government to select the right recycling stations and treatment stations so as to optimize the expired drug recycling logistics network and minimize the total costs of recycling and disposal. This paper establishes a tri-level programming model to study how the government can optimize an expired drug recycling logistics network and the appropriate subsidy policies. Furthermore, a Hybrid Genetic Simulated Annealing Algorithm (HGSAA) is proposed to search for the optimal solution of the model. An experiment is discussed to illustrate the good quality of the recycling logistics network and government subsidies obtained by the HGSAA. The HGSAA is proven to have the ability to converge on the global optimal solution, and to act as an effective algorithm for solving the optimization problem of expired drug recycling logistics networks and government subsidies.
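The flavour of a hybrid genetic/simulated-annealing search can be sketched on a toy, single-level station-selection problem. This is only an illustration under simplified assumptions, not the paper's tri-level model: the instance (three candidate drugstores, three user zones, hypothetical costs) and all parameters are invented, and the hybridization shown is a Metropolis acceptance test applied to genetically generated children:

```python
import math
import random

def total_cost(selection, fixed_costs, assign_costs):
    """Opening cost of the selected stations plus, for each user zone,
    the cheapest assignment to an open station."""
    open_ids = [i for i, bit in enumerate(selection) if bit]
    if not open_ids:                        # at least one station must open
        return float("inf")
    cost = sum(fixed_costs[i] for i in open_ids)
    for zone in assign_costs:
        cost += min(zone[i] for i in open_ids)
    return cost

def hgsaa(fixed_costs, assign_costs, pop_size=20, gens=80,
          t0=50.0, cooling=0.95, seed=1):
    """Hybrid search: crossover and mutation propose children, and a
    simulated-annealing test decides whether a child replaces its parent."""
    rng = random.Random(seed)
    n = len(fixed_costs)
    cost = lambda s: total_cost(s, fixed_costs, assign_costs)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best = min(pop, key=cost)
    temp = t0
    for _ in range(gens):
        next_pop = []
        for parent in pop:
            mate = rng.choice(pop)
            cut = rng.randrange(1, n)       # one-point crossover
            child = parent[:cut] + mate[cut:]
            flip = rng.randrange(n)         # bit-flip mutation
            child[flip] = 1 - child[flip]
            delta = cost(child) - cost(parent)
            accept = delta <= 0 or (delta < float("inf")
                                    and rng.random() < math.exp(-delta / temp))
            next_pop.append(child if accept else parent)
        pop = next_pop
        gen_best = min(pop, key=cost)
        if cost(gen_best) < cost(best):
            best = gen_best
        temp *= cooling                     # anneal the temperature
    return best

# Hypothetical instance: opening costs and zone-to-station transport costs.
fixed = [4, 6, 3]
zones = [[2, 9, 7], [8, 1, 6], [5, 4, 2]]
sel = hgsaa(fixed, zones)
print(sel, total_cost(sel, fixed, zones))
```

On this tiny instance the optimum (cost 17) can be checked by enumerating all eight subsets; the annealing acceptance is what lets the search escape locally good but globally poor station subsets.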

  14. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by a simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the results of the optimization, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimal configuration was capable of capturing soil-landscape knowledge accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying the soil attribute distribution by referring to the spatial layout of the road network, historical samples, and digital elevation data, which provides an effective means, as well as a theoretical basis, for determining sampling configurations and mapping the spatial distribution of soil organic matter with low cost and high efficiency.
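The multiple-linear-regression step can be sketched with a stdlib-only ordinary least squares fit via the normal equations. The data below are hypothetical stand-ins for the topographic predictors (slope, curvature, wetness index, and so on), chosen to lie exactly on a plane so the recovered coefficients are easy to check:

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination; an intercept column is prepended."""
    rows = [[1.0] + list(x) for x in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # forward elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    coef = [0.0] * k
    for r in range(k - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]
    return coef  # [intercept, b1, b2, ...]

# Hypothetical data: organic matter = 2 + 3*slope - 1*wetness (exact plane)
coef = ols_fit([(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)], [2, 5, 1, 4, 5])
print([round(c, 6) for c in coef])  # → [2.0, 3.0, -1.0]
```

In the study's setting the fitted coefficients would then be used to map organic matter across the county from the terrain-derived predictor grids.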

  15. Optimization Policy of Inventory Spare Parts Stocking and Provisioning

    International Nuclear Information System (INIS)

    Yun, Tae Sik; Park, Jong Hyuk; Hwang, Eui Youp; Yoo, Sung Soo; Kim, In Hwan

    2005-01-01

    Spare parts, especially safety related items, being used in Korea Nuclear Power Plants are largely from the United States, Canada, France and the like, meaning the inventory policy, stocking and provision, should be influenced by those countries' nuclear industry situation in a direct or indirect manner. As a result of nuclear industry downturn practices, lots of spare supply corporations have gone broke, which gave immediate signals we have to resolve inventory purchases in need. It is known for that nuclear maintenance spare items are particularly composed of many kinds with small quantities, which makes matters worse to Korea nuclear operation company (KHNP) to purchase them. Hence, Korea nuclear business is trying to change its exinventory purchasing paradigm into innovative schemes it did not have to consider in the past. In order to implement a new stocking policy, it should be kept in mind the factors such as not only how much to stock for a smooth operation but also economic point of view. Even though it has done a lot of studies to optimize the inventory stocking level in an academic curiosity, it is not easy to apply the researches in a real world. Since it is so tough job to anticipate when and how large scale even occurs. Hence, it would be thought that the nuclear inventory should be dealt in a different manner from the general manufacturing industry

  16. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of hydrological simulation at large scale and high precision has elaborated the spatial descriptions and hydrological behaviors. Meanwhile, this trend is accompanied by increases in model complexity and in the number of parameters, which bring new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples the Monte Carlo method with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms based on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted the genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain parameter sets with large likelihoods. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted in the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
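The baseline GLUE framework that the study builds on can be sketched compactly. This sketch uses plain Monte Carlo sampling of the priors (the very step the paper replaces with GA/DE/SCE samplers) and a toy linear-reservoir "model"; all names, priors, and the behavioral threshold are hypothetical:

```python
import math
import random

def nse(sim, obs):
    """Nash–Sutcliffe efficiency, used here as the GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((s - o) ** 2 for s, o in zip(sim, obs))
    var = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - sse / var

def glue(model, obs, priors, n_samples=2000, threshold=0.0, seed=0):
    """Monte Carlo GLUE: draw parameter sets from uniform priors, keep the
    'behavioral' sets whose likelihood exceeds the threshold, and return
    them with likelihood weights normalized to sum to one."""
    rng = random.Random(seed)
    behavioral = []
    for _ in range(n_samples):
        theta = [rng.uniform(lo, hi) for lo, hi in priors]
        score = nse(model(theta), obs)
        if score > threshold:
            behavioral.append((score, theta))
    total = sum(s for s, _ in behavioral)
    return [(s / total, t) for s, t in behavioral] if behavioral else []

# Toy 'hydrological' model: linear-reservoir recession q(t) = a * exp(-b t)
times = range(10)
obs = [2.0 * math.exp(-0.3 * t) for t in times]
model = lambda th: [th[0] * math.exp(-th[1] * t) for t in times]
weighted = glue(model, obs, priors=[(0.5, 4.0), (0.05, 1.0)])
```

The weighted behavioral sets would then yield prediction bounds; the paper's contribution is to replace the uniform draws with evolutionary samplers that concentrate effort in the high-likelihood region.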

  17. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, namely smoothness parameters (s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate of the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single 10 ns RE-EDS simulation. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  18. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds the new stations during the algorithm if beneficial. For each update resulted from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamlessly integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. 
The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.

  19. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds the new stations during the algorithm if beneficial. For each update resulted from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamlessly integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. 
The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.

  20. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds the new stations during the algorithm if beneficial. For each update resulted from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamlessly integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. 
The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.

  1. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Full Text Available Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's position in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled in terms of its components: product, process and resource; and by automatically configuring a sampling-based motion-planning problem and the transition-based rapidly-exploring random tree (T-RRT) algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, revealing its functionality and outlining future potential for optimal motion generation for robotic machining processes.

  2. Optimal maintenance policy incorporating system level and unit level for mechanical systems

    Science.gov (United States)

    Duan, Chaoqun; Deng, Chao; Wang, Bingran

    2018-04-01

    This study develops a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system experiences system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each single unit undergoes a two-level maintenance: one level is initiated when the unit exceeds its preventive maintenance (PM) threshold, and the other is performed opportunistically whenever any other unit is undergoing maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, since the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.

  3. Policy Iteration for H∞ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    Science.gov (United States)

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. It can also act as an approximator in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest L2-gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.
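The relaxation the abstract describes can be written out compactly. For a polynomial system ẋ = f(x) + g(x)u + k(x)w with penalized output h(x), a standard textbook formulation of the Hamilton-Jacobi-Isaacs equation at attenuation level γ reads (generic notation, not necessarily the paper's):

```latex
\nabla V^{\top} f + h^{\top} h
  - \tfrac{1}{4}\,\nabla V^{\top} g\, g^{\top} \nabla V
  + \tfrac{1}{4\gamma^{2}}\,\nabla V^{\top} k\, k^{\top} \nabla V = 0
```

Relaxing the equality to "≤ 0" and requiring the negated left-hand side (and a candidate V) to be SOS turns each policy-iteration step into a semidefinite feasibility problem; decreasing γ between converged iterations is what drives the procedure toward the smallest achievable L2-gain.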

  4. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed were performed on intact cocaine contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  5. Optimal dynamic pricing and replenishment policies for deteriorating items

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2014-08-01

    Full Text Available Marketing strategies and proper inventory replenishment policies are often incorporated by enterprises to stimulate demand and maximize profit. The aim of this paper is to present an integrated model for dynamic pricing and inventory control of deteriorating items. To reflect the dynamic character of the problem, the selling price is defined as a time-dependent function of the initial selling price and the discount rate. In this regard, the price is discounted exponentially to compensate for the negative impact of deterioration. The planning horizon is assumed to be infinite and the deterioration rate is time-dependent. In addition to price, the demand rate depends on advertisement as a powerful marketing tool. Several theoretical results and an iterative solution algorithm are developed to provide the optimal solution. Finally, to show the validity of the model and illustrate the solution procedure, numerical results are presented.

  6. A note on optimal (s,S) and (R,nQ) policies under a stuttering Poisson demand process

    DEFF Research Database (Denmark)

    Larsen, Christian

    2015-01-01

    In this note, a new efficient algorithm is proposed to find an optimal (s, S) replenishment policy for inventory systems with continuous reviews and where the demand follows a stuttering Poisson process (the compound element is geometrically distributed). We also derive three upper bounds...
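While the note above derives an exact algorithm, the cost of any candidate (s, S) policy under stuttering Poisson demand can also be estimated by simulation. The sketch below is a hypothetical illustration, not the paper's method: it assumes periodic review with zero lead time, and all cost parameters are invented:

```python
import math
import random

def poisson(rng, lam):
    """Knuth's method for a Poisson(lam) draw."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def stuttering_demand(rng, lam, p_geo):
    """Poisson number of customers, each demanding a geometric batch
    (success probability p_geo, so mean batch size 1/p_geo)."""
    total = 0
    for _ in range(poisson(rng, lam)):
        batch = 1
        while rng.random() > p_geo:
            batch += 1
        total += batch
    return total

def average_cost(s, S, lam=1.0, p_geo=0.5, h=1.0, b=9.0, K=30.0,
                 periods=20000, seed=7):
    """Estimate the long-run average cost per period of an (s,S) policy:
    h = holding, b = backorder, K = fixed ordering cost; whenever the
    post-demand inventory position falls to s or below, order up to S."""
    rng = random.Random(seed)
    inv, cost = S, 0.0
    for _ in range(periods):
        inv -= stuttering_demand(rng, lam, p_geo)
        cost += h * max(inv, 0) + b * max(-inv, 0)   # end-of-period cost
        if inv <= s:                                 # replenish (zero lead time)
            cost += K
            inv = S
    return cost / periods
```

Sweeping such a simulator over a grid of (s, S) pairs gives a brute-force check on the exact policies the note computes.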

  7. AN OPTIMAL REPLENISHMENT POLICY FOR DETERIORATING ITEMS WITH RAMP TYPE DEMAND UNDER PERMISSIBLE DELAY IN PAYMENTS

    Directory of Open Access Journals (Sweden)

    Dr. Sanjay Jain

    2010-10-01

    Full Text Available The aim of this paper is to develop an optimal replenishment policy for inventory models of deteriorating items with ramp type demand under permissible delay in payments. Deterioration of items begins on their arrival in stock.  An example is also presented to illustrate the application of developed model.

  8. Optimal back-to-front airplane boarding

    Science.gov (United States)

    Bachmat, Eitan; Khachaturov, Vassilii; Kuperman, Ran

    2013-06-01

    The problem of finding an optimal back-to-front airplane boarding policy is explored, using a mathematical model that is related to the 1+1 polynuclear growth model with concave boundary conditions and to causal sets in gravity. We study all airplane configurations and boarding group sizes. Optimal boarding policies for various airplane configurations are presented. Detailed calculations are provided along with simulations that support the main conclusions of the theory. We show that the effectiveness of back-to-front policies undergoes a phase transition when passing from lightly congested airplanes to heavily congested airplanes. The phase transition also affects the nature of the optimal or near-optimal policies. Under what we consider to be realistic conditions, optimal back-to-front policies lead to a modest 8-12% improvement in boarding time over random (no policy) boarding, using two boarding groups. Having more than two groups is not effective.

  9. Optimal back-to-front airplane boarding.

    Science.gov (United States)

    Bachmat, Eitan; Khachaturov, Vassilii; Kuperman, Ran

    2013-06-01

    The problem of finding an optimal back-to-front airplane boarding policy is explored, using a mathematical model that is related to the 1+1 polynuclear growth model with concave boundary conditions and to causal sets in gravity. We study all airplane configurations and boarding group sizes. Optimal boarding policies for various airplane configurations are presented. Detailed calculations are provided along with simulations that support the main conclusions of the theory. We show that the effectiveness of back-to-front policies undergoes a phase transition when passing from lightly congested airplanes to heavily congested airplanes. The phase transition also affects the nature of the optimal or near-optimal policies. Under what we consider to be realistic conditions, optimal back-to-front policies lead to a modest 8-12% improvement in boarding time over random (no policy) boarding, using two boarding groups. Having more than two groups is not effective.

  10. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted...... to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated...... characterization of phosphoproteins in functional phosphoproteomics research projects....

  11. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup, improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to that achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits, ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  12. Optimal Policies for Random and Periodic Garbage Collections with Tenuring Threshold

    Science.gov (United States)

    Zhao, Xufeng; Nakamura, Syouji; Nakagawa, Toshio

    It is an important problem to determine the tenuring threshold that meets the pause time goal for a generational garbage collector. From this viewpoint, this paper proposes two stochastic models based on the working schemes of a generational garbage collector: one is random collection, which occurs according to a nonhomogeneous Poisson process, and the other is periodic collection, which occurs at periodic times. Since the cost of a minor collection increases as the amount of surviving objects accumulates, a tenuring minor collection should be made at some tenuring threshold. Using the techniques of cumulative processes and reliability theory, expected cost rates with a tenuring threshold are obtained, and optimal policies which minimize them are discussed analytically and computed numerically.
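The structure of such a threshold optimization can be sketched with a deliberately simple, hypothetical cost model (not the paper's cumulative-process formulation): minor-collection cost grows linearly as survivors accumulate, a tenuring collection resets the count, and we numerically minimize the resulting cost rate over the threshold:

```python
def cost_rate(k, c_minor, c_grow, c_tenure, tau):
    """Hypothetical expected cost per unit time when tenuring after k minor
    collections: the i-th minor collection costs c_minor + c_grow * i
    (surviving objects accumulate), the tenuring collection costs c_tenure,
    and collections are tau time units apart."""
    minor_total = sum(c_minor + c_grow * i for i in range(1, k + 1))
    return (minor_total + c_tenure) / (k * tau)

def best_threshold(c_minor, c_grow, c_tenure, tau, k_max=100):
    """Numerically minimize the cost rate over candidate thresholds."""
    return min(range(1, k_max + 1),
               key=lambda k: cost_rate(k, c_minor, c_grow, c_tenure, tau))

print(best_threshold(1.0, 0.5, 20.0, 1.0))  # → 9
```

The trade-off is the expected one: tenuring too early pays the expensive tenuring cost too often, tenuring too late pays ever-growing minor-collection costs; the optimum balances the two.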

  13. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield and quality were not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies.
Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  14. Rational optimization of reliability and safety policies

    International Nuclear Information System (INIS)

    Melchers, Robert E.

    2001-01-01

    Optimization of structures for design has a long history, including optimization using numerical methods and optimality criteria. Much of this work has considered a subset of the complete design optimization problem: the technical issues alone. The more general problem must also consider non-technical issues and, importantly, the interplay between them and the parameters which influence them. Optimization involves the optimal setting of design or acceptance criteria and, separately, optimal design within those criteria. In the modern context of probability-based design codes this requires probabilistic acceptance criteria. The determination of such criteria involves more than the nominal code failure probability approach used for design code formulation. A more general view must be taken, and a clear distinction must be made between those matters covered by technical reliability and those covered by non-technical reliability. The present paper considers this issue and outlines a framework for rational optimization of structural and other systems given the socio-economic and political systems within which optimization must be performed.

  15. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and reproducibility. Centrifugation speeds, water volumes and use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10⁶ to 10⁰ leptospires/mL, with low lower limits of detection. The optimized method for the quantification of Leptospira in environmental waters (river, pond and sewage) consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.

  16. Optimal human capital policies

    Czech Academy of Sciences Publication Activity Database

    Boháček, Radim; Kapička, M.

    2008-01-01

    Roč. 55, č. 1 (2008), s. 1-16 ISSN 0304-3932 Institutional research plan: CEZ:AV0Z70850503 Keywords : dynamic optimal taxation * income taxation Subject RIV: AH - Economics Impact factor: 1.429, year: 2008

  17. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to a more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when an optimization of the measurement method might be favorable, such as: situations requiring rapid results in order to make urgent decisions or, on the other hand, the need to maximize the throughput of samples in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the time of ingrowth as well as individual measurement times for n samples in a series. This work is focused on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, in which it is assumed that no other radioactive strontium isotopes are present. By using a fixed minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time will be less compared to using the same measurement time for all samples. It was found that by optimization, the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, when assuming a MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization gives a more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
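
    The iterate-the-count-time idea can be sketched numerically. The toy below is not the authors' exact model: the detector efficiency, sample volume and initial ingrowth time are assumed values. It fixes a target MDA via the Currie detection limit and solves for the count time of each successive sample, which benefits from a longer 90Y ingrowth:

```python
import math

# 90Y decay constant (half-life ~ 64 h)
LAMBDA_Y = math.log(2) / (64.0 * 3600.0)

def required_count_time(mda_bq_l, ingrowth_s, eff=0.4, vol_l=1.0, bkg_cps=0.8 / 60.0):
    """Shortest count time (s) keeping the Currie detection limit, scaled by
    the 90Y ingrowth fraction, below the target MDA. Solved by fixed-point
    iteration because the detection limit itself depends on the count time."""
    f = 1.0 - math.exp(-LAMBDA_Y * ingrowth_s)  # fraction of 90Y grown in
    t = 60.0
    for _ in range(200):
        ld_counts = 2.71 + 4.65 * math.sqrt(bkg_cps * t)  # Currie L_D (counts)
        t_new = ld_counts / (eff * vol_l * f * mda_bq_l)  # counts -> seconds
        if abs(t_new - t) < 1e-6:
            break
        t = t_new
    return t

def schedule(n_samples, mda_bq_l=1.0, first_ingrowth_s=4.0 * 3600.0):
    """Measure samples back to back: each later sample has had more ingrowth
    time while the earlier ones were counted, so it needs a shorter count."""
    times, clock = [], first_ingrowth_s
    for _ in range(n_samples):
        t = required_count_time(mda_bq_l, clock)
        times.append(t)
        clock += t  # the remaining samples keep growing in meanwhile
    return times

times = schedule(10)
```

    With these assumed parameters the required count times shrink monotonically along the series, which is the source of the reported reduction in total analysis time.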

  18. Optimal Ordering Policy of a Risk-Averse Retailer Subject to Inventory Inaccuracy

    Directory of Open Access Journals (Sweden)

    Lijing Zhu

    2013-01-01

    Full Text Available Inventory inaccuracy refers to the discrepancy between the actual inventory and the recorded inventory information. Inventory inaccuracy is prevalent in retail stores. It may result in a higher inventory level or poor customer service. Earlier studies of inventory inaccuracy have traditionally assumed risk-neutral retailers whose objective is to maximize expected profits. We investigate a risk-averse retailer within a newsvendor framework. The risk-aversion attitude is measured by conditional value-at-risk (CVaR). We consider inventory inaccuracy stemming both from permanent shrinkage and temporary shrinkage. Two scenarios for reducing inventory shrinkage are presented. In the first scenario, the retailer conducts physical inventory audits to identify the discrepancy. In the second scenario, the retailer deploys an automatic tracking technology, radiofrequency identification (RFID), to reduce inventory shrinkage. With the CVaR criterion, we propose optimal policies for the two scenarios. We show that the retailer's ordering policy is monotone in his degree of risk aversion. A numerical analysis provides managerial insights for risk-averse retailers considering investing in RFID technology.
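
    The qualitative effect described here (a CVaR-minimising retailer orders more conservatively than a risk-neutral one) can be reproduced in a small scenario-based newsvendor sketch; the price, cost, salvage value and demand distribution below are illustrative assumptions, not values from the paper:

```python
import random

random.seed(3)

def profit(q, demand, price=10.0, cost=6.0, salvage=2.0):
    """Single-period newsvendor profit for order quantity q."""
    sold = min(q, demand)
    return price * sold + salvage * max(q - demand, 0.0) - cost * q

def cvar_of_loss(q, demands, beta=0.9):
    """Average of the worst (1 - beta) fraction of losses (loss = -profit).
    beta = 0 recovers the plain expected loss, i.e. the risk-neutral case."""
    losses = sorted(-profit(q, d) for d in demands)
    k = max(1, int(len(losses) * (1.0 - beta)))
    return sum(losses[-k:]) / k

# illustrative demand scenarios (truncated normal), not data from the paper
demands = [max(0.0, random.gauss(100.0, 20.0)) for _ in range(2000)]
candidates = range(40, 161)
q_risk_averse = min(candidates, key=lambda q: cvar_of_loss(q, demands))
q_risk_neutral = min(candidates, key=lambda q: cvar_of_loss(q, demands, beta=0.0))
```

    In this toy setting the CVaR-minimising order quantity sits well below the risk-neutral newsvendor quantity, mirroring the monotonicity result described in the abstract.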

  19. Optimal pricing and promotional effort control policies for a new product growth in segmented market

    Directory of Open Access Journals (Sweden)

    Jha P.C.

    2015-01-01

    Full Text Available Market segmentation enables marketers to understand and serve customers more effectively, thereby improving a company's competitive position. In this paper, we study the impact of price and promotion efforts on the evolution of sales intensity in a segmented market to obtain the optimal price and promotion effort policies. The evolution of the sales rate for each segment is developed under the assumption that the marketer may choose both differentiated as well as mass-market promotion effort to influence the uncaptured market potential. An optimal control model is formulated and a solution method using the Maximum Principle is discussed. The model is extended to incorporate a budget constraint. Model applicability is illustrated by a numerical example. Since discrete-time data is available, the formulated model is discretized. For solving the discrete model, a differential evolution algorithm is used.

  20. Optimal inventory policy in a closed loop supply chain system with multiple periods

    International Nuclear Information System (INIS)

    Sasi Kumar, A.; Natarajan, K.; Ramasubramaniam, Muthu Rathna Sapabathy.; Deepaknallasamy, K.K.

    2017-01-01

    Purpose: This paper aims to model and optimize the closed loop supply chain to maximize profit by considering a fixed order quantity inventory policy at various sites over multiple periods. Design/methodology/approach: In a forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer, distributor, retailer and customer, but inventory in the reverse supply chain of the product is very difficult to manage with the same standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs (commercial returns, end-of-use returns and end-of-life returns) and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved by IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, of which two units are required, B and C) over 12 periods. The results of the analysis show that, by adopting the FOQ inventory policy at different sites and considering its capacity constraints, the manufacturer can determine how much should be manufactured in each period based on variations of the demand. In addition, the model determines how many parts should be purchased from the supplier in the given 12 periods. Originality/value: A sensitivity analysis is performed to validate the proposed model in two parts: the first part focuses on the inventory of the product and its parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.

  1. Optimal inventory policy in a closed loop supply chain system with multiple periods

    Energy Technology Data Exchange (ETDEWEB)

    Sasi Kumar, A.; Natarajan, K.; Ramasubramaniam, Muthu Rathna Sapabathy.; Deepaknallasamy, K.K.

    2017-07-01

    Purpose: This paper aims to model and optimize the closed loop supply chain to maximize profit by considering a fixed order quantity inventory policy at various sites over multiple periods. Design/methodology/approach: In a forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer, distributor, retailer and customer, but inventory in the reverse supply chain of the product is very difficult to manage with the same standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs (commercial returns, end-of-use returns and end-of-life returns) and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved by IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, of which two units are required, B and C) over 12 periods. The results of the analysis show that, by adopting the FOQ inventory policy at different sites and considering its capacity constraints, the manufacturer can determine how much should be manufactured in each period based on variations of the demand. In addition, the model determines how many parts should be purchased from the supplier in the given 12 periods. Originality/value: A sensitivity analysis is performed to validate the proposed model in two parts: the first part focuses on the inventory of the product and its parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.

  2. Optimal inventory policy in a closed loop supply chain system with multiple periods

    Directory of Open Access Journals (Sweden)

    SasiKumar A.

    2017-05-01

    Full Text Available Purpose: This paper aims to model and optimize the closed loop supply chain to maximize profit by considering a fixed order quantity inventory policy at various sites over multiple periods. Design/methodology/approach: In a forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer, distributor, retailer and customer, but inventory in the reverse supply chain of the product is very difficult to manage with the same standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs (commercial returns, end-of-use returns and end-of-life returns) and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved by IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, of which two units are required, B and C) over 12 periods. The results of the analysis show that, by adopting the FOQ inventory policy at different sites and considering its capacity constraints, the manufacturer can determine how much should be manufactured in each period based on variations of the demand. In addition, the model determines how many parts should be purchased from the supplier in the given 12 periods. Originality/value: A sensitivity analysis is performed to validate the proposed model in two parts: the first part focuses on the inventory of the product and its parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.

  3. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain large sampling sizes. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.
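
    The racing idea (grant competitive individuals larger sample sizes and eliminate clearly inferior ones early) can be sketched on a toy stochastic objective; the elimination margin and the noise model below are illustrative assumptions, not the paper's scheme:

```python
import random
import statistics

random.seed(1)

def noisy_value(x):
    """Stochastic objective: true value x**2 observed with Gaussian noise."""
    return x * x + random.gauss(0.0, 0.5)

def racing_select(candidates, budget=400, batch=5, width=0.3):
    """Adaptive racing: sample all survivors in small batches and eliminate
    any candidate whose sample mean trails the best mean by more than a
    margin, so competitive candidates end up with the larger sample sizes."""
    obs = {x: [noisy_value(x) for _ in range(batch)] for x in candidates}
    spent = batch * len(candidates)
    alive = list(candidates)
    while spent < budget and len(alive) > 1:
        means = {x: statistics.mean(obs[x]) for x in alive}
        best = min(means.values())
        # crude elimination margin that shrinks as sample sizes grow
        alive = [x for x in alive
                 if means[x] - best <= width + 2.0 / len(obs[x])]
        for x in alive:
            obs[x].extend(noisy_value(x) for _ in range(batch))
            spent += batch
    means = {x: statistics.mean(obs[x]) for x in alive}
    return min(alive, key=means.get), obs

winner, obs = racing_select([0.0, 0.5, 1.0, 2.0, 3.0])
```

    Clearly inferior candidates are dropped after the first few batches, so most of the fixed sampling budget is spent distinguishing the close contenders.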

  4. Event-Triggered Distributed Approximate Optimal State and Output Control of Affine Nonlinear Interconnected Systems.

    Science.gov (United States)

    Narayanan, Vignesh; Jagannathan, Sarangapani

    2017-06-08

    This paper presents an approximate optimal distributed control scheme for a known interconnected system composed of input-affine nonlinear subsystems using event-triggered state and output feedback via a novel hybrid learning scheme. First, the cost function for the overall system is redefined as the sum of the cost functions of the individual subsystems. A distributed optimal control policy for the interconnected system is developed using the optimal value function of each subsystem. To generate the optimal control policy, forward-in-time neural networks are employed to reconstruct the unknown optimal value function at each subsystem online. In order to retain the advantages of event-triggered feedback for an adaptive optimal controller, a novel hybrid learning scheme is proposed to reduce the convergence time of the learning algorithm. The development is based on the observation that, in event-triggered feedback, the sampling instants are dynamic, resulting in variable inter-event times. To relax the requirement of entire state measurements, an extended nonlinear observer is designed at each subsystem to recover the system internal states from the measurable feedback. Using a Lyapunov-based analysis, it is demonstrated that the system states and the observer errors remain locally uniformly ultimately bounded and that the control policy converges to a neighborhood of the optimal policy. Simulation results are presented to demonstrate the performance of the developed controller.

  5. Measuring public opinion on alcohol policy: a factor analytic study of a US probability sample.

    Science.gov (United States)

    Latimer, William W; Harwood, Eileen M; Newcomb, Michael D; Wagenaar, Alexander C

    2003-03-01

    Public opinion has been one factor affecting change in policies designed to reduce underage alcohol use. Extant research, however, has been criticized for using single survey items of unknown reliability to define adult attitudes on alcohol policy issues. The present investigation addresses a critical gap in the literature by deriving scales on public attitudes, knowledge, and concerns pertinent to alcohol policies designed to reduce underage drinking using a US probability sample survey of 7021 adults. Five attitudinal scales were derived from exploratory and confirmatory factor analyses addressing policies to: (1) regulate alcohol marketing, (2) regulate alcohol consumption in public places, (3) regulate alcohol distribution, (4) increase alcohol taxes, and (5) regulate youth access. The scales exhibited acceptable psychometric properties and were largely consistent with a rational framework which guided the survey construction.

  6. The Optimal Replenishment Policy under Trade Credit Financing with Ramp Type Demand and Demand Dependent Production Rate

    Directory of Open Access Journals (Sweden)

    Juanjuan Qin

    2014-01-01

    Full Text Available This paper investigates the optimal replenishment policy for a retailer with ramp-type demand and a demand-dependent production rate involving trade credit financing, which has not been reported in the literature. First, two inventory models are developed for the above situation. Second, algorithms are given to optimize the replenishment cycle time and the order quantity for the retailer. Finally, numerical examples are carried out to illustrate the optimal solutions, and a sensitivity analysis is performed. The results show that if the production rate is low, the retailer will lower the frequency of placing orders to cut down the order cost; if the production rate is high, the demand-dependent production rate has no effect on the optimal decisions. When the trade credit period is shorter than the growth-stage time, the retailer will shorten the replenishment cycle; when it is longer than the breakpoint of the demand, within the maturity stage of the products, the trade credit has no effect on the optimal order cycle and the optimal order quantity.

  7. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
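
    Of the population-based algorithms mentioned, differential evolution is the simplest to sketch. Here the Monte-Carlo ray-tracing figure of merit is replaced by a smooth stand-in function; the variable names and the optimum are invented for illustration:

```python
import random

random.seed(0)

def neutron_loss(params):
    """Toy stand-in for the ray-tracing figure of merit: a smooth function of
    (channel width, curvature, coating m-value) minimised at (2, 5, 3).
    A real study would run a Monte-Carlo ray-tracing simulation here."""
    w, c, m = params
    return (w - 2.0) ** 2 + 0.5 * (c - 5.0) ** 2 + (m - 3.0) ** 2

def differential_evolution(f, bounds, np_=20, f_w=0.6, cr=0.9, gens=150):
    """Plain DE/rand/1/bin: scaled difference-vector mutation, binomial
    crossover, greedy one-to-one selection."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            a, b, c = random.sample([j for j in range(np_) if j != i], 3)
            jr = random.randrange(dim)  # force at least one mutated coordinate
            trial = [
                min(max(pop[a][d] + f_w * (pop[b][d] - pop[c][d]),
                        bounds[d][0]), bounds[d][1])
                if (random.random() < cr or d == jr) else pop[i][d]
                for d in range(dim)
            ]
            tc = f(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    best = min(range(np_), key=cost.__getitem__)
    return pop[best], cost[best]

best_x, best_f = differential_evolution(
    neutron_loss, bounds=[(0.0, 5.0), (0.0, 10.0), (1.0, 6.0)])
```

    The same loop structure applies when the objective is an expensive simulation; the population size and generation count then become the main cost drivers.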

  8. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely, and subsequently reweighting the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  9. Optimal sampling in damage detection of flexural beams by continuous wavelet transform

    International Nuclear Information System (INIS)

    Basu, B; Broderick, B M; Montanari, L; Spagnoli, A

    2015-01-01

    Modern measurement techniques are improving in capability to capture spatial displacement fields occurring in deformed structures with high precision and in a quasi-continuous manner. This in turn has made the use of vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial Continuous Wavelet Transform (CWT) damage identification methods in beam structures, with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free vibration modes of a cantilever and a simply supported beam with an edge open crack are numerically simulated. A thorough parametric study is carried out by taking into account the key parameters governing the problem, including the level of noise, crack depth and location, and the mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection. (paper)
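
    A minimal numerical illustration of the single-scale spatial CWT idea: the toy deflection field below is linear away from the defect, so the two vanishing moments of the Mexican-hat wavelet annihilate the undamaged part and the coefficient peak isolates the kink. The crack position and amplitudes are invented for illustration:

```python
import math

CRACK_AT = 0.6  # assumed defect location along the unit-length beam

def deflection(x):
    """Toy measured field: linear away from the defect, with a small slope
    discontinuity (kink) at the crack."""
    return 0.3 * x + 0.01 * max(x - CRACK_AT, 0.0)

def mexican_hat(t):
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def cwt_peak(n_samples, scale=0.05):
    """Sample the field at n_samples points, take a single-scale spatial CWT,
    and return the interior location of the largest |coefficient|."""
    h = 1.0 / (n_samples - 1)
    xs = [i * h for i in range(n_samples)]
    ys = [deflection(x) for x in xs]
    best_x, best_c = None, -1.0
    for xi in xs:
        if not 0.25 <= xi <= 0.75:
            continue  # keep clear of wavelet edge effects at the supports
        c = h * sum(y * mexican_hat((x - xi) / scale) for x, y in zip(xs, ys))
        if abs(c) > best_c:
            best_c, best_x = abs(c), xi
    return best_x

coarse, fine = cwt_peak(41), cwt_peak(161)
```

    Both the coarse and the finer sampling localise the kink, which is the sense in which down-sampling can cut measurement cost without losing the damage signature.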

  10. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, CA (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage.
Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves
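
    Of the three techniques combined in this record, pattern search is the easiest to show in isolation. The following is a generic coordinate pattern search on a toy non-smooth objective, not the SPORT implementation:

```python
def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=10_000):
    """Coordinate pattern search: poll +/- step along each axis, move to the
    first improving point, otherwise halve the step. Derivative-free, so it
    can make progress where a gradient method stalls."""
    x = list(x0)
    fx = f(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in range(len(x)):
            for s in (step, -step):
                cand = list(x)
                cand[d] += s
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5  # refine the mesh when no poll point improves
    return x, fx

# toy objective with a non-differentiable term, minimised at (3, -1)
best, val = pattern_search(lambda v: (v[0] - 3) ** 2 + abs(v[1] + 1), [0.0, 0.0])
```

    In a planning context, f would be the plan-quality objective and the coordinates would be station parameters such as couch or gantry angles.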

  11. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    International Nuclear Information System (INIS)

    Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S

    2014-01-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage.
Conclusion: The combined use of column generation, gradient search and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves

  12. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, these methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information generated to select lower-energy conformations as representative conformation structure replicas can facilitate the convergence of the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower-energy conformations, for use as structure templates in the REMC sampling method. These methods were validated through a thorough analysis of a data set containing 16 benchmark test cases. An in-depth comparison between the MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
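
    The Pareto-front replica-selection machinery is specific to the paper, but the replica exchange step it builds on is easy to sketch. Below is plain REMC on a toy one-dimensional "energy" with two minima; this is not RosettaLigand and omits the multi-objective extension:

```python
import math
import random

random.seed(7)

def energy(x):
    """Toy 1-D double well standing in for a docking energy landscape."""
    return (x * x - 1.0) ** 2

def remc(temps=(0.05, 0.2, 0.8), steps=4000, step_size=0.25):
    """Replica exchange MC: one Metropolis walker per temperature, with
    periodic swap attempts between neighbouring temperatures so the cold
    replica can escape local minima via the hot ones."""
    xs = [2.0 for _ in temps]  # all replicas start away from both minima
    for sweep in range(steps):
        for i, t in enumerate(temps):
            prop = xs[i] + random.gauss(0.0, step_size)
            de = energy(prop) - energy(xs[i])
            if de <= 0 or random.random() < math.exp(-de / t):
                xs[i] = prop
        if sweep % 10 == 0:  # attempt a swap between a random neighbour pair
            i = random.randrange(len(temps) - 1)
            d = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (
                energy(xs[i + 1]) - energy(xs[i]))
            if d <= 0 or random.random() < math.exp(-d):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

final = remc()
```

    The paper's variant replaces the neighbour-pair selection with choices guided by Pareto non-domination over several objectives; the swap acceptance step itself is the same Metropolis criterion shown here.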

  13. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min⁻¹ and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)
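
    The central composite design used in the optimization step has a simple coded structure; the sketch below generates such a design matrix. The factor count and the number of centre-point replicates are illustrative, not the paper's actual design:

```python
import itertools

def central_composite(k=2, alpha=None, center_pts=3):
    """Coded design points for a k-factor CCD: full factorial corners (+/-1),
    axial (star) points at +/-alpha, and replicated centre points.
    Defaults to the rotatable choice alpha = (2**k)**0.25."""
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = [list(p) for p in itertools.product((-1.0, 1.0), repeat=k)]
    axial = []
    for d in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[d] = s
            axial.append(pt)
    centers = [[0.0] * k for _ in range(center_pts)]
    return corners + axial + centers

design = central_composite(k=2)
# 2-factor CCD: 4 corners + 4 star points + 3 centre replicates = 11 runs
```

    In practice each coded level is then mapped to an actual factor setting (for example sonication temperature and time) before the runs are executed and a quadratic response surface is fitted.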

  14. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized, and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for determination of RA in sage, using factorial design of experiments for optimizing sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using central composite design (CCD). The analytical method was then fully validated, and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected in a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min⁻¹ and detection at 330 nm. Under these conditions, RA concentrations were 50% higher when compared to extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  15. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one-parameter exponential family form that optimizes a utility function quantifying the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N∗ in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N∗^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with Bernoulli- and Poisson-distributed responses, showing that the asymptotic approximations can be reasonable even at relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
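    The idea of letting the trial size depend on the population size can be illustrated with a toy decision-theoretic model. The utility function, effect size, and cost figures below are illustrative assumptions, not the paper's actual specification:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def expected_utility(n, population, effect=0.5, sigma=1.0, gain=1.0, cost=1.0):
    """Toy utility: the gain accrues to the remaining population if a
    two-arm trial with n patients per arm detects the effect; each
    enrolled patient costs `cost`. All parameters are illustrative."""
    z_alpha = 1.96  # one-sided 2.5% significance level
    power = normal_cdf(math.sqrt(n / 2.0) * effect / sigma - z_alpha)
    return gain * (population - 2 * n) * power - cost * 2 * n

def optimal_trial_size(population):
    """Exhaustive search for the per-arm size maximizing the utility."""
    return max(range(1, population // 2),
               key=lambda n: expected_utility(n, population))
```

    Exhaustive search is adequate here because the utility is cheap to evaluate; the optimal per-arm size grows with the population size, consistent with the qualitative message of the asymptotic result.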

  16. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used for anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV. Quantities such as the conformity index COIN and COIN integrals are derived from the DVHs. This is achieved by using sampling points distributed uniformly within each region, with a per-region density obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and the shape and size of the implant. For the application of this method a single preprocessing step is necessary, which requires only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points.
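    A rough sketch of variance-driven stratified sampling in one dimension: a pilot pass estimates the spread of the DVH indicator per stratum, and the main budget is allocated in proportion (Neyman-style allocation). The dose profile and all parameters are invented for illustration, not taken from the paper:

```python
import random

def dose(x):
    """Synthetic 1-D dose profile with a sharp falloff around x = 0.2."""
    return 100.0 / (1.0 + 50.0 * abs(x - 0.2))

def stratified_volume_fraction(threshold, n_total, n_strata=20, pilot=30, seed=1):
    """Estimate the fraction of the volume receiving at least `threshold`,
    allocating points to strata proportionally to the pilot standard
    deviation of the indicator (Neyman-style allocation)."""
    rng = random.Random(seed)
    width = 1.0 / n_strata
    # Pilot pass: estimate the per-stratum spread of the DVH indicator.
    sds = []
    for k in range(n_strata):
        hits = [dose(k * width + rng.random() * width) >= threshold
                for _ in range(pilot)]
        p = sum(hits) / pilot
        sds.append((p * (1.0 - p)) ** 0.5 + 1e-6)  # floor keeps every stratum sampled
    total_sd = sum(sds)
    # Main pass: sample each stratum with its allocated budget.
    estimate = 0.0
    for k, sd in enumerate(sds):
        n_k = max(1, round(n_total * sd / total_sd))
        hits = sum(dose(k * width + rng.random() * width) >= threshold
                   for _ in range(n_k))
        estimate += width * hits / n_k
    return estimate
```

    With this profile, almost the entire budget ends up in the two strata containing the dose boundary, which is where the uniform estimator wastes points; strata that lie entirely above or below the threshold need only a token sample.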

  17. A bi-objective model for optimizing replacement time of age and block policies with consideration of spare parts’ availability

    Science.gov (United States)

    Alsyouf, Imad

    2018-05-01

    Reliability and availability of critical systems play an important role in achieving the stated objectives of engineering assets. The preventive replacement time affects the reliability of the components, and thus the number of system failures encountered and the associated downtime expenses. On the other hand, the spare parts inventory level is a very critical factor that affects the availability of the system. Usually, the decision maker has many conflicting objectives that should be considered simultaneously when selecting the optimal maintenance policy. The purpose of this research was to develop a bi-objective model for determining the preventive replacement time for three maintenance policies (age, block good-as-new, block bad-as-old) with consideration of spare parts' availability. A weighted comprehensive criterion method with two objectives, cost and availability, was used. The model was tested with a typical numerical example. The results demonstrated its effectiveness in enabling the decision maker to select the optimal maintenance policy under different scenarios, taking into account preferences with respect to conflicting objectives such as cost and availability.

  18. A model based on stochastic dynamic programming for determining China's optimal strategic petroleum reserve policy

    International Nuclear Information System (INIS)

    Zhang Xiaobing; Fan Ying; Wei Yiming

    2009-01-01

    China's Strategic Petroleum Reserve (SPR) is currently being prepared. But how large the optimal stockpile size for China should be, what the best acquisition strategies are, how to release the reserve if a disruption occurs, and other related issues still need to be studied in detail. In this paper, we develop a stochastic dynamic programming model based on a total potential cost function of establishing SPRs to evaluate the optimal SPR policy for China. Using this model, empirical results are presented for the optimal size of China's SPR and the best acquisition and drawdown strategies for a few specific cases. The results show that, with comprehensive consideration, the optimal SPR size for China is around 320 million barrels. This size is equivalent to about 90 days of net oil imports in 2006 and should be reached in the year 2017, three years earlier than the national goal, which implies that the need for China to fill the SPR is probably more pressing. The best stockpile release action in a disruption depends on the disruption level and the expected continuation probability. The information provided by the results will be useful for decision makers.
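    The structure of such a model can be sketched as a small stochastic dynamic program with a two-state (normal/disrupted) Markov process, solved by value iteration. All stock levels, prices, probabilities, and damage figures below are illustrative assumptions, not the paper's calibration for China:

```python
# Toy stochastic DP for a strategic reserve: states are (stock level,
# disruption flag); each period the planner may buy or release at most
# one unit. All costs and probabilities are illustrative.
MAX_STOCK = 3
PRICE = {0: 50.0, 1: 150.0}        # purchase price, normal vs disrupted
HOLDING = 2.0                       # per-unit holding cost per period
DAMAGE, MITIGATION = 200.0, 120.0   # disruption loss; saving per released unit
P_ENTER, P_STAY = 0.15, 0.5         # disruption entry / persistence probabilities
BETA = 0.95                         # discount factor

def value_iteration(iters=500):
    """Minimize expected discounted cost; returns values and policy
    (policy maps a state to the chosen next-period stock level)."""
    V = {(s, d): 0.0 for s in range(MAX_STOCK + 1) for d in (0, 1)}
    policy = {}
    for _ in range(iters):
        V_new = {}
        for (s, d) in V:
            best = None
            for s2 in range(max(0, s - 1), min(MAX_STOCK, s + 1) + 1):
                cost = PRICE[d] * max(s2 - s, 0) + HOLDING * s2
                if d:
                    cost += DAMAGE - MITIGATION * max(s - s2, 0)
                p_dis = P_STAY if d else P_ENTER
                cost += BETA * (p_dis * V[(s2, 1)] + (1 - p_dis) * V[(s2, 0)])
                if best is None or cost < best[0]:
                    best = (cost, s2)
            V_new[(s, d)] = best[0]
            policy[(s, d)] = best[1]
        V = V_new
    return V, policy
```

    With these numbers the computed policy acquires stock in normal periods and draws down during disruptions, mirroring the qualitative behaviour described in the abstract.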

  19. Optimal dividend policies with transaction costs for a class of jump-diffusion processes

    DEFF Research Database (Denmark)

    Hunting, Martin; Paulsen, Jostein

    2013-01-01

    This paper addresses the problem of finding an optimal dividend policy for a class of jump-diffusion processes. The jump component is a compound Poisson process with negative jumps, and the drift and diffusion components are assumed to satisfy some regularity and growth restrictions. Each dividend...... payment is reduced by a fixed and a proportional cost, meaning that if ξ is paid out by the company, the shareholders receive kξ−K, where k and K are positive. The aim is to maximize expected discounted dividends until ruin. It is proved that when the jumps belong to a certain class of light...

  20. Emergency Diesel Generation System Surveillance Test Policy Optimization Through Genetic Algorithms Using Non-Periodic Intervention Frequencies and Seasonal Constraints

    International Nuclear Information System (INIS)

    Lapa, Celso M.F.; Pereira, Claudio M.N.A.; Frutuoso e Melo, P.F.

    2002-01-01

    Nuclear standby safety systems must frequently be submitted to periodic surveillance tests, mainly to detect, as soon as possible, the occurrence of unrevealed failure states. Such interventions may, however, affect the overall system availability due to component outages. Besides, as the components are demanded, deterioration by aging may occur, again penalizing system performance. For these reasons, planning a good surveillance test policy implies a trade-off between the gains and the overheads of the test interventions. In order to maximize the system's average availability during a given period of time, a non-periodic surveillance test optimization methodology based on genetic algorithms (GA) has recently been developed. Allowing non-periodic tests makes the solution space much more flexible, and schedules can be better adjusted, providing gains in overall system average availability compared to those obtained by an optimized periodic test scheme. The optimization problem becomes, however, more complex; hence the use of a powerful optimization technique, such as GAs, is required. Some particular features of certain systems can make it advisable to introduce other specific constraints in the optimization problem. The Emergency Diesel Generation System (EDGS) of a Nuclear Power Plant (NPP) is a good example for demonstrating the introduction of seasonal constraints in the optimization problem. This system is responsible for power supply during an external blackout. Therefore, during periods of high blackout probability it is desirable to maintain the system availability as high as possible. Previous applications have demonstrated the robustness and effectiveness of the methodology, but no seasonal constraints have ever been imposed. 
    This work investigates the application of such a methodology to the Angra-II Brazilian NPP EDGS surveillance test policy optimization, considering the blackout probability
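    The GA approach can be sketched for a simplified standby component with exponentially distributed unrevealed failures, a fixed test downtime, and a no-test seasonal window enforced by a penalty. All rates, horizons, and GA settings below are illustrative assumptions, not the methodology's actual parameters:

```python
import math
import random

T_HORIZON = 365.0           # days
LAMBDA = 1.0 / 1000.0       # standby failure rate (illustrative)
TEST_DOWNTIME = 0.25        # days out of service per test
SEASON = (180.0, 240.0)     # high blackout-probability window: no tests allowed

def mean_unavailability(times):
    """Time-averaged unavailability for a test schedule `times`: the
    unrevealed-failure probability grows as 1 - exp(-lambda*t) since the
    last test, plus downtime for each test performed."""
    bounds = [0.0] + sorted(times) + [T_HORIZON]
    unrevealed = 0.0
    for a, b in zip(bounds, bounds[1:]):
        dt = b - a
        unrevealed += dt - (1 - math.exp(-LAMBDA * dt)) / LAMBDA
    return (unrevealed + TEST_DOWNTIME * len(times)) / T_HORIZON

def fitness(times):
    penalty = sum(1.0 for t in times if SEASON[0] <= t <= SEASON[1])
    return -(mean_unavailability(times) + penalty)

def genetic_search(n_tests=6, pop_size=40, generations=200, seed=7):
    rng = random.Random(seed)
    pop = [sorted(rng.uniform(0, T_HORIZON) for _ in range(n_tests))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # elitist truncation selection
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = rng.sample(survivors, 2)
            child = sorted(rng.choice(pair) for pair in zip(a, b))  # uniform crossover
            i = rng.randrange(n_tests)                              # gaussian mutation
            child[i] = min(max(child[i] + rng.gauss(0, 15), 0.0), T_HORIZON)
            children.append(sorted(child))
        pop = survivors + children
    return max(pop, key=fitness)
```

    Because the seasonal penalty dominates the availability term, selection quickly pushes all test times out of the forbidden window while still spreading them to limit unrevealed failure exposure.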

  1. Tax optimization and the firm's value: Evidence from the Tunisian context

    Directory of Open Access Journals (Sweden)

    Soufiene Assidi

    2016-09-01

    Full Text Available The paper investigated the relationship between corporate tax optimization and the firm's value in the Tunisian context over an 11-year period. The empirical results revealed that tax optimization, accruals, and investment increased the firm's value. After dividing the sample between listed and non-listed firms, we concluded that, compared to non-listed firms, the listed firms were better able to optimize tax through adopting a tax policy. Our findings help decision makers, researchers, and practitioners better understand the role of tax optimization in the management of firms and in their performance.

  2. Optimizing Input/Output Using Adaptive File System Policies

    Science.gov (United States)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.
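    A minimal sketch of classification-based policy selection: classify an access trace by its stride distribution and pick caching and prefetching policies accordingly. The classifier thresholds and the policy table are invented placeholders, not the paper's framework:

```python
from collections import Counter

def classify_access_pattern(offsets, block=4096):
    """Classify a trace of file offsets as 'sequential', 'strided', or
    'random' from the distribution of inter-access strides."""
    strides = [b - a for a, b in zip(offsets, offsets[1:])]
    if not strides:
        return "random"
    most_common, count = Counter(strides).most_common(1)[0]
    if count / len(strides) < 0.75:   # no dominant stride => random access
        return "random"
    return "sequential" if most_common == block else "strided"

# Hypothetical policy table mapping an access class to file system policies.
POLICY = {
    "sequential": {"prefetch": "read-ahead", "cache": "drop-behind"},
    "strided":    {"prefetch": "stride-predict", "cache": "lru"},
    "random":     {"prefetch": "none", "cache": "lru"},
}

def select_policy(offsets):
    return POLICY[classify_access_pattern(offsets)]
```

    In the adaptive scheme described above, performance sensors would then tune the parameters of the selected policy (e.g. read-ahead depth) rather than the classification itself.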

  3. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  4. A Global Optimizing Policy for Decaying Items with Ramp-Type Demand Rate under Two-Level Trade Credit Financing Taking Account of Preservation Technology

    Directory of Open Access Journals (Sweden)

    S. R. Singh

    2013-01-01

    Full Text Available An inventory system for deteriorating items, with ramp-type demand rate, under a two-level trade credit policy taking account of preservation technology is considered. The objective of this study is to develop a deteriorating-inventory policy when the supplier provides the retailer a permissible delay in payments; during this credit period, the retailer accumulates revenue and earns interest on it, and the retailer also invests in preservation technology to reduce the rate of product deterioration. Shortages are allowed and partially backlogged. Sufficient conditions for the existence and uniqueness of the optimal replenishment policy are provided, and an algorithm for its determination is proposed. Numerical examples illustrate the obtained results, and a sensitivity analysis of the optimal solution with respect to the leading parameters of the system is carried out.

  5. School Board Policies on Leaves and Absences. Educational Policies Development Kit.

    Science.gov (United States)

    National School Boards Association, Waterford, CT. Educational Policies Service.

    This report provides board policy samples and other policy resources on leaves and absences. The intent in providing policy samples is to encourage thinking in policy terms and to provide working papers that can be edited, modified, or adapted to meet local requirements. Topics covered in the samples include (1) sick leave, (2) maternity leave,…

  6. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For both ETAAS and FAAS determinations, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most disturbing results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  7. Optimal household refrigerator replacement policy for life cycle energy, greenhouse gas emissions, and cost

    International Nuclear Information System (INIS)

    Kim, Hyung Chul; Keoleian, Gregory A.; Horie, Yuhta A.

    2006-01-01

    Although the last decade witnessed dramatic progress in refrigerator efficiencies, inefficient, outdated refrigerators are still in operation, sometimes consuming more than twice as much electricity per year compared with modern, efficient models. Replacing old refrigerators before their designed lifetime could be a useful policy to conserve electric energy and greenhouse gas emissions. However, from a life cycle perspective, product replacement decisions also induce additional economic and environmental burdens associated with disposal of old models and production of new models. This paper discusses optimal lifetimes of mid-sized refrigerator models in the US, using a life cycle optimization model based on dynamic programming. Model runs were conducted to find optimal lifetimes that minimize energy, global warming potential (GWP), and cost objectives over a time horizon between 1985 and 2020. The baseline results show that depending on model years, optimal lifetimes range 2-7 years for the energy objective, and 2-11 years for the GWP objective. On the other hand, an 18-year lifetime minimizes the economic cost incurred during the time horizon. Model runs with a time horizon between 2004 and 2020 show that current owners should replace refrigerators that consume more than 1000 kWh/year of electricity (typical mid-sized 1994 models and older) as an efficient strategy from both cost and energy perspectives.
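    The life cycle optimization can be sketched as a small dynamic program over (year, model year of the unit currently owned). The efficiency trend and the embodied production energy below are invented placeholders, not the paper's data:

```python
# Toy life cycle optimization: each year, keep the current refrigerator or
# replace it with that year's model; minimize cumulative life cycle energy.
START, END = 1985, 2020
PRODUCTION_ENERGY = 3000.0  # kWh-equivalent embodied in manufacturing a unit

def annual_use(model_year):
    """Assumed efficiency trend: 1400 kWh/yr in 1985, falling 3% per year."""
    return 1400.0 * 0.97 ** (model_year - START)

def min_total_energy():
    """DP over (year, model year of the unit currently owned)."""
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best(year, owned):
        if year == END:
            return 0.0
        keep = annual_use(owned) + best(year + 1, owned)
        replace = PRODUCTION_ENERGY + annual_use(year) + best(year + 1, year)
        return min(keep, replace)

    # The household must own some unit, so the 1985 purchase is sunk cost.
    return PRODUCTION_ENERGY + annual_use(START) + best(START + 1, START)

def never_replace_energy():
    return PRODUCTION_ENERGY + annual_use(START) * (END - START)
```

    With a steadily improving efficiency trend, early replacement pays back its production energy within a few years, which is why the optimal lifetimes for the energy objective come out much shorter than the cost-optimal ones.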

  8. Optimal maintenance policy for a system subject to damage in a discrete time process

    International Nuclear Information System (INIS)

    Chien, Yu-Hung; Sheu, Shey-Huei; Zhang, Zhe George

    2012-01-01

    Consider a system operating over n discrete time periods (n=1, 2, …). Each operation period causes a random amount of damage to the system, which accumulates over time. The system fails when the cumulative damage exceeds a failure level ζ, and a corrective maintenance (CM) action is immediately taken. To prevent such a failure, a preventive maintenance (PM) may be performed. In an operation period without a CM or PM, a regular maintenance (RM) is conducted at the end of that period to maintain the operation of the system. We propose a maintenance policy which prescribes a PM either when the accumulated damage exceeds a pre-specified level δ (< ζ) or after a pre-specified number of periods N, derive the optimal δ⁎ and N⁎, and discuss some useful properties about them. It has been shown that a δ-based PM outperforms an N-based PM in terms of cost minimization. Numerical examples are presented to demonstrate the optimization of this class of maintenance policies.
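    A Monte Carlo sketch comparing the two policy classes under renewal-reward costing. The damage distribution and all cost figures are illustrative assumptions, not the paper's model parameters:

```python
import random

ZETA = 10.0                                # failure level
COST_CM, COST_PM, COST_RM = 100.0, 20.0, 1.0

def cycle_cost(policy, param, rng):
    """Simulate one renewal cycle with per-period damage ~ Exp(mean 1).
    Returns (cost, length); an RM is charged for every period that ends
    without a CM or PM."""
    damage, period = 0.0, 0
    while True:
        period += 1
        damage += rng.expovariate(1.0)
        if damage >= ZETA:
            return COST_CM + COST_RM * (period - 1), period
        if (policy == "delta" and damage >= param) or \
           (policy == "N" and period >= param):
            return COST_PM + COST_RM * (period - 1), period

def long_run_cost(policy, param, cycles=4000, seed=3):
    """Renewal-reward estimate of the expected cost per period."""
    rng = random.Random(seed)
    cost = 0.0
    length = 0.0
    for _ in range(cycles):
        c, l = cycle_cost(policy, param, rng)
        cost += c
        length += l
    return cost / length

def best(policy, grid):
    return min((long_run_cost(policy, p), p) for p in grid)
```

    The δ-based policy observes the damage state directly, so for the same PM cost it triggers fewer unnecessary CM events than any calendar-based N rule, which is the dominance result the abstract reports.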

  9. Optimal strategies for pricing general insurance

    OpenAIRE

    Emms, P.; Haberman, S.; Savoulli, I.

    2006-01-01

    Optimal premium pricing policies in a competitive insurance environment are investigated using approximation methods and simulation of sample paths. The market average premium is modelled as a diffusion process, with the premium as the control function and the maximization of the expected total utility of wealth, over a finite time horizon, as the objective. In order to simplify the optimisation problem, a linear utility function is considered and two particular premium strategies are adopted...

  10. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are widely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, and DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well modeled in quantitative terms, so the search for the set of parameters that optimizes the samples' sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase, and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Fiscal Policy and the Implementation of the Walsh Contract for Central Bankers

    OpenAIRE

    Haizhou Huang; A. Jorge Padilla

    2002-01-01

    We develop a simple macroeconomic model where the time inconsistency of optimal monetary policy is due to tax distortions. If fiscal policy is exogenously fixed at its optimal level, a Walsh contract (Walsh, 1995) offered to an independent central bank implements the optimal monetary policy. When fiscal policy is determined endogenously, however, this contract is subject to strategic manipulation by the government, which results in a suboptimal policy mix. Implementing the optimal policy mix ...

  12. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    Science.gov (United States)

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-12-01

    The purpose of the study was to optimize the sample preparation and then use the improved sample preparation to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea-containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls and analysed them by LC-MS/MS. We examined the acquired data to identify differences between the states. We improved the protein extraction and the number of protein identifications by utilizing a urea- and sodium deoxycholate-containing buffer. Comparing ulcerative colitis and healthy tissue, we found 168 of 2366 identified proteins differentially abundant. Inflammatory proteins are more abundant in ulcerative colitis, while proteins related to anion transport and mucus production are less abundant. A high proportion of S100 proteins is differentially abundant, notably with both up-regulated and down-regulated proteins. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis on the protein level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. On the Optimal Policy for the Single-product Inventory Problem with Set-up Cost and a Restricted Production Capacity

    NARCIS (Netherlands)

    Foreest, N. D. van; Wijngaard, J.

    2010-01-01

    The single-product, stationary inventory problem with set-up cost is one of the classical problems in stochastic operations research. Theories have been developed to cope with finite production capacity in periodic review systems, and it has been proved that optimal policies for these cases are not

  14. Optimal pricing of non-utility generated electric power

    International Nuclear Information System (INIS)

    Siddiqi, S.N.; Baughman, M.L.

    1994-01-01

    The importance of an optimal pricing policy for pricing non-utility generated power is pointed out in this paper. An optimal pricing policy leads to benefits for all concerned: the utility, industry, and the utility's other customers. In this paper, it is shown that reliability differentiated real-time pricing provides an optimal non-utility generated power pricing policy, from a societal welfare point of view. Firm capacity purchase, and hence an optimal price for purchasing firm capacity, are an integral part of this pricing policy. A case study shows that real-time pricing without firm capacity purchase results in improper investment decisions and higher costs for the system as a whole. Without explicit firm capacity purchase, the utility makes greater investment in capacity addition in order to meet its reliability criteria than is socially optimal. It is concluded that the non-utility generated power pricing policy presented in this paper and implied by reliability differentiated pricing policy results in social welfare-maximizing investment and operation decisions

  15. Economy of climate policy. Criticism and alternatives

    International Nuclear Information System (INIS)

    Van den Bergh, J.C.J.M.

    2002-01-01

    The economics of climate policy is characterized by notions such as cost-benefit analysis, optimal policy, and optimal timing. It is argued that the use of such notions reflects an unjustified optimism with respect to the contribution of economic science to the discussion on climate policy. The complexity of the biosphere and the uncertainty about climatic change, as well as their socio-economic consequences, are extensive. Another economic approach to the climate problem is suggested, based on complexity and historical justice. 12 refs

  16. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized alias-free. Bandpass sampling theory singles out separate ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the concurrence of the obtained central frequency with the expected one
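    The admissible sample-rate ranges of classical bandpass sampling can be computed directly. This sketch implements the standard constraint 2·f_high/n ≤ fs ≤ 2·f_low/(n−1) for integer n, and deliberately ignores the guard-band and clock-instability refinements the paper adds:

```python
def valid_bandpass_rates(f_low, f_high):
    """Admissible uniform sample-rate ranges for alias-free sampling of a
    bandpass signal occupying [f_low, f_high] (classic bandpass sampling
    theorem): 2*f_high/n <= fs <= 2*f_low/(n-1) for integer n."""
    bandwidth = f_high - f_low
    n_max = int(f_high // bandwidth)
    ranges = []
    for n in range(n_max, 0, -1):
        lo = 2.0 * f_high / n
        hi = 2.0 * f_low / (n - 1) if n > 1 else float("inf")
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges  # sorted from the lowest admissible rates upward

def is_valid_rate(fs, f_low, f_high):
    return any(lo <= fs <= hi for lo, hi in valid_bandpass_rates(f_low, f_high))
```

    For a band at 20-25 MHz, for example, the function reports five disjoint admissible ranges, the lowest well below the 50 MHz a baseband Nyquist argument would demand; the paper's method can be seen as picking one point inside such a range subject to guard-band and clock-stability margins.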

  17. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

  18. REGULATORY POLICY AND OPTIMIZATION OF INVESTMENT RESOURCE ALLOCATION IN THE MODEL OF FUNCTIONING OF RECREATION INDUSTRY

    Directory of Open Access Journals (Sweden)

    Hanna Shevchenko

    2017-11-01

    Full Text Available The research objective is to provide the rationale for a theoretical and methodical approach to improving regulatory policy and the distribution of financial investments, using a model of the functioning of the recreational sector of the national economy. The methodology of the study includes the use of optimal control theory to formulate the model of the functioning of the recreational industry, as well as to determine the behaviour of regulatory authorities and the possibilities for optimizing the allocation of investment resources in the recreational sector of the national economy. Results. The issue of balancing regulatory policy in the recreational sector of the national economy is actualized, including the question of the targeted distribution of state and external financial investments. It is also argued that regulatory policy should establish frameworks that, on the one hand, do not allow public authorities to exert undue influence on the economy of recreation and, on the other hand, keep the behaviour of recreational business entities within the limits of normal socio-economic activity, based on an analysis of the continuum “recreation – work” by means of a modified Brennan-Buchanan model. It is revealed that even with reduced taxes, a situation is observed in which the population rests less and works more than it would in a developed economy. However, according to the optimistic forecast, as the economy emerges from the shadow, an official mode of work will eventually be established in which, while taxes are maintained at the level proposed as more advantageous for the population, the ratio of leisure and work ultimately corresponds to the principles of sustainable development. Practical value. On the basis of the methodical principles of the theory of optimal control, the model of the functioning of the recreational industry under the

  19. Optimal management strategies in variable environments: Stochastic optimal control methods

    Science.gov (United States)

    Williams, B.K.

    1985-01-01

    Dynamic optimization was used to investigate the optimal defoliation of salt desert shrubs in north-western Utah. Management was formulated in the context of optimal stochastic control theory, with objective functions composed of discounted or time-averaged biomass yields. Climatic variability and community patterns of salt desert shrublands make the application of stochastic optimal control both feasible and necessary. A primary production model was used to simulate shrub responses and harvest yields under a variety of climatic regimes and defoliation patterns. The simulation results then were used in an optimization model to determine optimal defoliation strategies. The latter model encodes an algorithm for finite state, finite action, infinite discrete time horizon Markov decision processes. Three questions were addressed: (i) What effect do changes in weather patterns have on optimal management strategies? (ii) What effect does the discounting of future returns have? (iii) How do the optimal strategies perform relative to certain fixed defoliation strategies? An analysis was performed for the three shrub species, winterfat (Ceratoides lanata), shadscale (Atriplex confertifolia) and big sagebrush (Artemisia tridentata). In general, the results indicate substantial differences among species in optimal control strategies, which are associated with differences in physiological and morphological characteristics. Optimal policies for big sagebrush varied less with variation in climate, reserve levels and discount rates than did either shadscale or winterfat. This was attributed primarily to the overwintering of photosynthetically active tissue and to metabolic activity early in the growing season. Optimal defoliation of shadscale and winterfat generally was more responsive to differences in plant vigor and climate, reflecting the sensitivity of these species to utilization and replenishment of carbohydrate reserves. 
Similarities could be seen in the influence of both
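
The optimization model described above is a finite state, finite action, infinite-horizon discounted Markov decision process, which can be solved by standard value iteration. A minimal sketch follows; the three vigor states, transition matrices, and yield rewards are invented for illustration and are not the study's data.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a]: transition matrix for action a; R[a]: expected reward vector."""
    n_actions, n_states = len(P), P[0].shape[0]
    V = np.zeros(n_states)
    while True:
        Q = np.array([R[a] + gamma * (P[a] @ V) for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)  # values and greedy policy
        V = V_new

# Hypothetical 3-state "plant vigor" chain: states low/medium/high.
# Action 0 = rest (no yield, vigor tends to recover);
# action 1 = defoliate (yield grows with vigor, vigor tends to decline).
P = [np.array([[0.6, 0.4, 0.0],
               [0.2, 0.5, 0.3],
               [0.0, 0.3, 0.7]]),
     np.array([[0.9, 0.1, 0.0],
               [0.5, 0.4, 0.1],
               [0.1, 0.5, 0.4]])]
R = [np.zeros(3), np.array([0.5, 2.0, 4.0])]

V, policy = value_iteration(P, R)
```

The returned policy maps each vigor state to rest or defoliate; discounting (gamma) plays the role of the discounted-yield objective in the abstract.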

  20. A Jackson network model and threshold policy for joint optimization of energy and delay in multi-hop wireless networks

    KAUST Repository

    Xia, Li; Shihada, Basem

    2014-01-01

    This paper studies the joint optimization problem of energy and delay in a multi-hop wireless network. The optimization variables are the transmission rates, which are adjustable according to the packet queueing length in the buffer. The optimization goal is to minimize the energy consumption of energy-critical nodes and the packet transmission delay throughout the network. In this paper, we aim to understand the well-known threshold-based decentralized algorithms from a different research angle. By using a simplified network model, we show that we can adopt the semi-open Jackson network model and study this optimization problem in closed form. This simplified network model further allows us to establish some significant optimality properties. We prove that the system performance is monotonic with respect to (w.r.t.) the transmission rate. We also prove that the threshold-type policy is optimal, i.e., when the number of packets in the buffer is larger than a threshold, transmit at the maximal rate (power); otherwise, do not transmit. With these optimality properties, we develop a heuristic algorithm to iteratively find the optimal threshold. Finally, we conduct simulation experiments to demonstrate the main idea of this paper.
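
The threshold-type policy proven optimal above can be illustrated with a toy single-queue simulation: transmit at the maximal rate when the buffer exceeds the threshold, otherwise stay idle, and search for the threshold minimizing a weighted energy-plus-delay cost. All rates and cost weights below are assumed values, and the paper's semi-open Jackson network is reduced here to a single node.

```python
import random

def average_cost(threshold, p_arrival=0.3, p_serve=0.8,
                 power_cost=1.0, holding_cost=0.2,
                 steps=100_000, seed=0):
    """Simulate a discrete-time single queue under a threshold policy."""
    rng = random.Random(seed)
    q, total = 0, 0.0
    for _ in range(steps):
        if rng.random() < p_arrival:          # packet arrival
            q += 1
        if q > threshold:                     # transmit at the maximal rate
            total += power_cost               # energy spent this slot
            if rng.random() < p_serve:
                q -= 1
        total += holding_cost * q             # delay proxy: queue length
    return total / steps

# Iteratively search for the optimal threshold, as the paper's heuristic does.
best = min(range(10), key=average_cost)
```

Raising the threshold saves transmission energy but lengthens the queue; the grid search exposes the trade-off the paper resolves analytically.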

  2. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, featured by low cost, high timeliness, and moderate/low spatial resolution, were used in the North China Plain (NCP) study region to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter (RIP) (i.e., the fraction of winter wheat planting area in each pixel, used as a regionalized indicator variable (RIV) for spatial sampling) from the initially selected indicators. The RIV values were then analyzed spatially to characterize the spatial structure (i.e., spatial correlation and variation) of the NCP, which was further processed to obtain scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing this a priori knowledge, spatial sampling models and design schemes were developed, together with their optimization and optimal selection, providing a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, optimal local spatial prediction and a gridded system of extrapolation results implemented an adaptive reporting pattern of spatial sampling aligned with report-covering units, so as to satisfy the actual needs of sampling surveys.

  3. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.
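
The global-buffer/local-pointer separation relies on lifetime-based sharing: two data samples may occupy the same global buffer slot whenever their lifetimes in the schedule do not overlap. A greedy first-fit sketch of this idea follows, with invented lifetimes and sizes rather than the paper's JPEG/H.263 benchmarks.

```python
import heapq

def shared_memory(lifetimes):
    """Greedy first-fit buffer sharing.

    lifetimes: list of (start, end, size) intervals in schedule order;
    returns total bytes allocated when non-overlapping samples share slots.
    """
    free, total, in_use = [], 0, []              # in_use: heap of (end, size)
    for start, end, size in sorted(lifetimes):
        while in_use and in_use[0][0] < start:   # lifetime over: free the slot
            _, s = heapq.heappop(in_use)
            free.append(s)
        fit = next((s for s in free if s >= size), None)
        if fit is not None:                      # reuse a freed slot
            free.remove(fit)
            heapq.heappush(in_use, (end, fit))
        else:                                    # allocate a new slot
            total += size
            heapq.heappush(in_use, (end, size))
    return total

samples = [(0, 2, 100), (1, 3, 50), (4, 6, 100)]
unshared = sum(size for _, _, size in samples)   # 250: one buffer per sample
shared = shared_memory(samples)                  # 150: third reuses the first
```

The first two samples overlap and need separate slots; the third starts after both end and reuses the 100-byte slot, which is the source of the savings the paper reports.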

  4. Optimizing two-dimensional renewable warranty policies for sensor embedded remanufactured products

    Directory of Open Access Journals (Sweden)

    Ammar Alqahtani

    2017-05-01

    Full Text Available Purpose: Remanufactured products, in addition to being environmentally friendly, are popular with consumers because they can offer the latest technology at lower prices in comparison to brand new products. However, some consumers are hesitant to buy remanufactured products because they are skeptical about the quality of the remanufactured product and thus are unsure of the extent to which the product will render services when compared to a new product. A strategy that remanufacturers may employ to entice customers is to offer warranties on remanufactured products. To that end, this paper studies and scrutinizes the impact of offering renewing warranties on remanufactured products. Specifically, the paper suggests a methodology which simultaneously minimizes the cost incurred by the remanufacturers and maximizes the confidence of the consumers towards buying remanufactured products. Design/methodology/approach: This study uses discrete-event simulation to optimize the implementation of a two-dimensional renewing warranty policy for remanufactured products. The implementation is illustrated using a specific product recovery system called the Advanced Remanufacturing-To-Order (ARTO) system. The experiments used in the study were designed using Taguchi’s Orthogonal Arrays to represent the entire domain of the recovery system so as to observe the system behavior under various experimental conditions. In order to determine the optimum strategy offered by the remanufacturer, various warranty and preventive maintenance scenarios were analyzed using pairwise t-tests along with one-way analysis of variance (ANOVA) and Tukey pairwise comparisons tests for every scenario. Findings: The proposed methodology is able to simultaneously minimize the cost incurred by the remanufacturer, optimize the warranty price and period, and optimize the preventive maintenance strategy, resulting in increased consumer confidence. 
Originality/value: This is the first study that

  5. Optimizing two-dimensional renewable warranty policies for sensor embedded remanufactured products

    International Nuclear Information System (INIS)

    Alqahtani, Ammar; Gupta, Surendra M.

    2017-01-01

    Remanufactured products, in addition to being environmentally friendly, are popular with consumers because they can offer the latest technology at lower prices in comparison to brand new products. However, some consumers are hesitant to buy remanufactured products because they are skeptical about the quality of the remanufactured product and thus are unsure of the extent to which the product will render services when compared to a new product. A strategy that remanufacturers may employ to entice customers is to offer warranties on remanufactured products. To that end, this paper studies and scrutinizes the impact of offering renewing warranties on remanufactured products. Specifically, the paper suggests a methodology which simultaneously minimizes the cost incurred by the remanufacturers and maximizes the confidence of the consumers towards buying remanufactured products. Design/methodology/approach: This study uses discrete-event simulation to optimize the implementation of a two-dimensional renewing warranty policy for remanufactured products. The implementation is illustrated using a specific product recovery system called the Advanced Remanufacturing-To-Order (ARTO) system. The experiments used in the study were designed using Taguchi’s Orthogonal Arrays to represent the entire domain of the recovery system so as to observe the system behavior under various experimental conditions. In order to determine the optimum strategy offered by the remanufacturer, various warranty and preventive maintenance scenarios were analyzed using pairwise t-tests along with one-way analysis of variance (ANOVA) and Tukey pairwise comparisons tests for every scenario. Findings: The proposed methodology is able to simultaneously minimize the cost incurred by the remanufacturer, optimize the warranty price and period, and optimize the preventive maintenance strategy, resulting in increased consumer confidence. Originality/value: This is the first study that evaluates in a

  6. Optimizing two-dimensional renewable warranty policies for sensor embedded remanufactured products

    Energy Technology Data Exchange (ETDEWEB)

    Alqahtani, Ammar; Gupta, Surendra M.

    2017-07-01

    Remanufactured products, in addition to being environmentally friendly, are popular with consumers because they can offer the latest technology at lower prices in comparison to brand new products. However, some consumers are hesitant to buy remanufactured products because they are skeptical about the quality of the remanufactured product and thus are unsure of the extent to which the product will render services when compared to a new product. A strategy that remanufacturers may employ to entice customers is to offer warranties on remanufactured products. To that end, this paper studies and scrutinizes the impact of offering renewing warranties on remanufactured products. Specifically, the paper suggests a methodology which simultaneously minimizes the cost incurred by the remanufacturers and maximizes the confidence of the consumers towards buying remanufactured products. Design/methodology/approach: This study uses discrete-event simulation to optimize the implementation of a two-dimensional renewing warranty policy for remanufactured products. The implementation is illustrated using a specific product recovery system called the Advanced Remanufacturing-To-Order (ARTO) system. The experiments used in the study were designed using Taguchi’s Orthogonal Arrays to represent the entire domain of the recovery system so as to observe the system behavior under various experimental conditions. In order to determine the optimum strategy offered by the remanufacturer, various warranty and preventive maintenance scenarios were analyzed using pairwise t-tests along with one-way analysis of variance (ANOVA) and Tukey pairwise comparisons tests for every scenario. Findings: The proposed methodology is able to simultaneously minimize the cost incurred by the remanufacturer, optimize the warranty price and period, and optimize the preventive maintenance strategy, resulting in increased consumer confidence. Originality/value: This is the first study that evaluates in a

  7. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C

  8. Determination of optimal environmental policy for reclamation of land unearthed in lignite mines - Strategy and tactics

    Science.gov (United States)

    Batzias, Dimitris F.; Pollalis, Yannis A.

    2012-12-01

    In this paper, the optimal environmental policy for reclamation of land unearthed in lignite mines is defined as a strategic target. The tactics for achieving this target include estimating the optimal time lag between the complete exploitation of each lignite site (a segment of the whole lignite field) and its reclamation. Subsidizing of reclamation has been determined as a function of this time lag, and a relevant implementation is presented for parameter values valid for the Greek economy. We show that the methodology we have developed gives reasonable quantitative results within the norms imposed by legislation. Moreover, the interconnection between strategy and tactics becomes evident, since the former causes the latter by deduction and the latter revises the former by induction in the time course of land reclamation.

  9. Optimal Replacement Policy of Jet Engine Modules from the Aircarrier's Point of View

    Directory of Open Access Journals (Sweden)

    Anita Domitrović

    2008-01-01

    Full Text Available A mathematical model for optimising preventive maintenance of aircraft jet engines was developed by dynamic programming. Replacement planning for jet engine modules is regarded as a multistage decision process, while optimum module replacement is considered as a problem of equipment replacement. The goal of the optimal replacement policy of jet engine modules is a defined series of decisions resulting in minimum maintenance costs. The model was programmed in the C++ programming language and tested using CFM56 jet engine data. The costs of the optimum maintenance strategy were compared to the costs of simpler, experience-based maintenance strategies. The results of the comparison justify further development and usage of the model in order to achieve significant cost reduction for airline carriers.
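
The equipment-replacement formulation can be sketched as a simple dynamic program: at each stage, either keep the aging module and pay an age-dependent maintenance cost, or replace it at a fixed cost. The cost figures below are illustrative, not CFM56 data.

```python
from functools import lru_cache

REPLACE_COST = 10.0   # fixed cost of a module replacement (illustrative)

def maintain_cost(age):
    """Per-stage maintenance cost, growing with module age (illustrative)."""
    return 1.0 + 0.8 * age

@lru_cache(maxsize=None)
def optimal_cost(age, stages_left):
    """Minimum total maintenance cost over the remaining stages."""
    if stages_left == 0:
        return 0.0
    keep = maintain_cost(age) + optimal_cost(age + 1, stages_left - 1)
    replace = (REPLACE_COST + maintain_cost(0)
               + optimal_cost(1, stages_left - 1))
    return min(keep, replace)

def plan(age, stages):
    """Recover the optimal series of decisions from the value function."""
    decisions = []
    for s in range(stages, 0, -1):
        keep = maintain_cost(age) + optimal_cost(age + 1, s - 1)
        replace = REPLACE_COST + maintain_cost(0) + optimal_cost(1, s - 1)
        if keep <= replace:
            decisions.append("keep")
            age += 1
        else:
            decisions.append("replace")
            age = 1
    return decisions

total = optimal_cost(0, 8)
decisions = plan(0, 8)
```

With these numbers, never replacing over 8 stages would cost 30.4, so the DP schedules at least one replacement where the age-driven maintenance cost overtakes the replacement cost.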

  10. Optimal replenishment and credit policy in supply chain inventory model under two levels of trade credit with time- and credit-sensitive demand involving default risk

    Science.gov (United States)

    Mahata, Puspita; Mahata, Gour Chandra; Kumar De, Sujit

    2018-03-01

    Traditional supply chain inventory models with trade credit usually assumed only that the upstream suppliers offered the downstream retailers a fixed credit period. In practice, however, retailers will also provide a credit period to customers to promote market competition. In this paper, we formulate an optimal supply chain inventory model under two levels of trade credit policy with default risk consideration. Here, the demand is assumed to be credit-sensitive and an increasing function of time. The major objective is to determine the retailer's optimal credit period and cycle time such that the total profit per unit time is maximized. The existence and uniqueness of the optimal solution to the presented model are examined, and an easy method is also shown to find the optimal inventory policies of the considered problem. Finally, numerical examples and a sensitivity analysis are presented to illustrate the developed model and to provide some managerial insights.
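
A toy numerical version of this optimization: choose the customer credit period N and cycle time T to maximize average profit, where demand rises with the credit period but so does default risk. The functional forms and constants are invented for illustration and are not the paper's model, which solves the problem analytically rather than by grid search.

```python
def profit_rate(N, T, margin=4.0, order_cost=50.0, hold=0.5,
                base_demand=100.0, credit_boost=0.8, default_rate=0.05):
    """Average profit per unit time for credit period N and cycle time T."""
    demand = base_demand * (1.0 + credit_boost * N)       # credit-sensitive
    revenue = margin * demand * (1.0 - default_rate * N)  # default risk grows
    holding = hold * demand * T / 2.0                     # average inventory
    ordering = order_cost / T                             # setup per cycle
    return revenue - ordering - holding

# Grid search over credit periods N in [0, 2] and cycle times T in (0, 2].
grid = [(n / 10.0, t / 10.0) for n in range(21) for t in range(1, 21)]
best_N, best_T = max(grid, key=lambda p: profit_rate(*p))
```

The cycle time trades ordering cost against holding cost (an EOQ-like balance), while the credit period trades demand stimulation against default losses.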

  11. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized near-optimal as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally-different alternatives from the near-optimal region. Subsequent work applied Markov Chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems or select portions for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until generating the desired number of alternatives. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
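
The hit-and-run iteration described above can be sketched for a toy problem: sample points whose objective value stays within a tolerance of the optimum, moving a random distance along a random direction at each step. The quadratic objective and tolerance are illustrative; the rejection step below stands in for the paper's slice sampling, and the null-space transformation for equality constraints is omitted.

```python
import math, random

def f(x):                       # objective to be minimized (illustrative)
    return sum(v * v for v in x)

def hit_and_run(n_samples, dim=2, tol=1.0, lo=-2.0, hi=2.0, seed=1):
    """Sample the near-optimal region {x : f(x) <= tol} inside box bounds."""
    rng = random.Random(seed)
    x = [0.0] * dim             # feasible starting hit point (the optimum)
    samples = []
    while len(samples) < n_samples:
        d = [rng.gauss(0.0, 1.0) for _ in range(dim)]
        norm = math.sqrt(sum(v * v for v in d))
        d = [v / norm for v in d]           # random unit direction
        t = rng.uniform(lo, hi)             # run a random distance
        cand = [xi + t * di for xi, di in zip(x, d)]
        # keep the new hit point only if it stays in bounds and near-optimal
        if all(lo <= v <= hi for v in cand) and f(cand) <= tol:
            x = cand
            samples.append(x)
    return samples

pts = hit_and_run(500)
```

Because each move is confined to a single line through the current point, the per-iteration cost stays low regardless of how irregular the near-optimal region is.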

  12. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and the extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges in extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and interday precision with RSDs below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with PopED software. Individual area under the curve estimates were generated by Bayesian estimation using full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling strategies (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed sampling strategy promises to achieve the target area under the curve as part of precision dosing.

  14. Robustness of climate metrics under climate policy ambiguity

    International Nuclear Information System (INIS)

    Ekholm, Tommi; Lindroos, Tomi J.; Savolainen, Ilkka

    2013-01-01

    Highlights: • We assess the economic impacts of using different climate metrics. • The setting is cost-efficient scenarios for three interpretations of the 2 °C target. • With each target setting, the optimal metric is different. • Therefore policy ambiguity prevents the selection of an optimal metric. • Robust metric values that perform well with multiple policy targets however exist. -- Abstract: A wide array of alternatives has been proposed as the common metrics with which to compare the climate impacts of different emission types. Different physical and economic metrics and their parameterizations give diverse weights between e.g. CH₄ and CO₂, and fixing the metric from one perspective makes it sub-optimal from another. As the aims of global climate policy involve some degree of ambiguity, it is not possible to determine a metric that would be optimal and consistent with all policy aims. This paper evaluates the cost implications of using predetermined metrics in cost-efficient mitigation scenarios. Three formulations of the 2 °C target, including both deterministic and stochastic approaches, shared a wide range of metric values for CH₄ with which the mitigation costs are only slightly above the cost-optimal levels. Therefore, although ambiguity in current policy might prevent us from selecting an optimal metric, it can be possible to select robust metric values that perform well with multiple policy targets.

  15. Corporate Accounting Policy Efficiency Improvement

    Directory of Open Access Journals (Sweden)

    Elena K. Vorobei

    2013-01-01

    Full Text Available The article is focused on the issues of efficient use of different methods of tax accounting for the optimization of income tax expenses and their consolidation in corporate accounting policy. The article makes reasoned conclusions concerning the optimal selection of depreciation methods for tax and bookkeeping accounting and their consolidation in corporate accounting policy, as well as the consolidation of optimal methods of cost recovery in production, considering the business environment. The impact of the selected methods on corporate income tax rates and corporate property tax rates was traced and the tax recovery was estimated.

  16. Replacement policy of residential lighting optimized for cost, energy, and greenhouse gas emissions

    Science.gov (United States)

    Liu, Lixi; Keoleian, Gregory A.; Saitou, Kazuhiro

    2017-11-01

    Accounting for 10% of the electricity consumption in the US, artificial lighting represents one of the easiest ways to cut household energy bills and greenhouse gas (GHG) emissions by upgrading to energy-efficient technologies such as compact fluorescent lamps (CFL) and light emitting diodes (LED). However, given the high initial cost and rapidly improving trajectory of solid-state lighting today, estimating the right time to switch over to LEDs from a cost, primary energy, and GHG emissions perspective is not a straightforward problem. This is an optimal replacement problem that depends on many determinants, including how often the lamp is used, the state of the initial lamp, and the trajectories of lighting technology and of electricity generation. In this paper, multiple replacement scenarios of a 60 watt-equivalent A19 lamp are analyzed and for each scenario a few replacement policies are recommended. For example, at an average use of 3 hr day⁻¹ (the US average), it may be optimal both economically and energetically to delay the adoption of LEDs until 2020 with the use of CFLs, whereas purchasing LEDs today may be optimal in terms of GHG emissions. In contrast, incandescent and halogen lamps should be replaced immediately. Based on expected LED improvement, upgrading LED lamps before the end of their rated lifetime may provide cost and environmental savings over time by taking advantage of the higher energy efficiency of newer models.
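
The replacement timing question reduces to comparing cumulative lifecycle costs of candidate policies. A back-of-envelope sketch follows, with assumed prices, wattages, and an assumed decline in LED prices; lamp lifetimes, discounting, and the paper's energy and GHG accounting are all ignored.

```python
HOURS_PER_YEAR = 3 * 365        # 3 hours per day, the US average cited
PRICE_KWH = 0.13                # assumed electricity price, $/kWh

def led_price(year):
    """Assumed LED purchase price, falling over time ($)."""
    return max(2.0, 8.0 - 2.0 * year)

def policy_cost(switch_year, years=10, cfl_watts=14, led_watts=9,
                cfl_price=1.5):
    """Cumulative cost of running a CFL until switch_year, then an LED."""
    cost, have_led = 0.0, False
    for year in range(years):
        if not have_led and year >= switch_year:
            cost += led_price(year)             # buy the LED now
            have_led = True
        elif not have_led and year == 0:
            cost += cfl_price                   # start with a CFL
        watts = led_watts if have_led else cfl_watts
        cost += watts / 1000.0 * HOURS_PER_YEAR * PRICE_KWH
    return cost

now = policy_cost(switch_year=0)    # adopt the LED immediately
wait = policy_cost(switch_year=3)   # run a CFL until LED prices fall
```

Under these invented numbers, waiting narrowly wins on cost, echoing the abstract's delay-with-CFLs finding; with GHG accounting the ranking can differ.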

  17. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that only depend on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M_* ≤ 3 × 10^11 M_⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  18. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
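
A POD curve for hit/miss inspection data is commonly modeled as a logistic function of log flaw size and fitted by maximum likelihood; the sample-size question above is how the reliability of such a fit degrades as the number of test pieces shrinks. A sketch on simulated data follows; all parameters are invented.

```python
import math, random

def pod(a, b0, b1):
    """Probability of detecting a flaw of size a (logistic in log a)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(a))))

def fit(data, lr=0.1, iters=5000):
    """Maximum-likelihood fit by gradient ascent on hit/miss data."""
    b0, b1 = 0.0, 1.0
    for _ in range(iters):
        g0 = g1 = 0.0
        for a, hit in data:
            resid = hit - pod(a, b0, b1)   # gradient of the log-likelihood
            g0 += resid
            g1 += resid * math.log(a)
        b0 += lr * g0 / len(data)
        b1 += lr * g1 / len(data)
    return b0, b1

# Simulate 400 hit/miss inspections from an assumed "true" POD curve.
rng = random.Random(0)
true_b0, true_b1 = -2.0, 2.0
sizes = [rng.uniform(0.2, 10.0) for _ in range(400)]
data = [(a, int(rng.random() < pod(a, true_b0, true_b1))) for a in sizes]

b0, b1 = fit(data)
a90 = math.exp((math.log(9.0) - b0) / b1)   # size detected with POD = 0.9
```

Refitting with far fewer simulated test pieces makes the estimated a90 scatter widely from run to run, which is the sample-size effect the ENIQ work quantifies.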

  19. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  20. Optimal policy for value-based decision-making.

    Science.gov (United States)

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
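    As a concrete illustration of a drift diffusion process with a collapsing boundary, the sketch below simulates single trials in which evidence accumulates noisily until it crosses a bound that shrinks linearly over time. The drift, noise, bound and collapse-rate values are arbitrary assumptions chosen for illustration; the paper derives the optimal (generally non-linear) boundary shape rather than assuming one.

```python
import random

random.seed(0)

def ddm_trial(drift, noise=1.0, b0=1.5, collapse=0.05, dt=0.001, t_max=10.0):
    """Simulate one drift-diffusion trial with a linearly collapsing bound.

    Returns (choice, reaction_time); choice is +1 or -1.
    """
    x, t = 0.0, 0.0
    while t < t_max:
        bound = max(b0 - collapse * t, 0.0)   # bound shrinks over time
        if x >= bound:
            return +1, t
        if x <= -bound:
            return -1, t
        x += drift * dt + noise * (dt ** 0.5) * random.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else -1), t_max

trials = [ddm_trial(drift=0.8) for _ in range(200)]
accuracy = sum(1 for c, _ in trials if c == +1) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
print(round(accuracy, 2), round(mean_rt, 2))
```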

  1. Off-Policy Reinforcement Learning for Synchronization in Multiagent Graphical Games.

    Science.gov (United States)

    Li, Jinna; Modares, Hamidreza; Chai, Tianyou; Lewis, Frank L; Xie, Lihua

    2017-10-01

    This paper develops an off-policy reinforcement learning (RL) algorithm to solve optimal synchronization of multiagent systems. This is accomplished by using the framework of graphical games. In contrast to traditional control protocols, which require complete knowledge of agent dynamics, the proposed off-policy RL algorithm is a model-free approach, in that it solves the optimal synchronization problem without requiring any knowledge of the agent dynamics. A prescribed control policy, called the behavior policy, is applied to each agent to generate and collect data for learning. An off-policy Bellman equation is derived for each agent to learn the value function for the policy under evaluation, called the target policy, and simultaneously find an improved policy. Actor and critic neural networks, along with a least-squares approach, are employed to approximate target control policies and value functions using the data generated by applying the prescribed behavior policies. Finally, an off-policy RL algorithm is presented that is implemented in real time and gives the approximate optimal control policy for each agent using only measured data. It is shown that the optimal distributed policies found by the proposed algorithm satisfy the global Nash equilibrium and synchronize all agents to the leader. Simulation results illustrate the effectiveness of the proposed method.
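    The essence of off-policy learning, evaluating and improving a target policy using data generated by a different behavior policy, can be illustrated with tabular Q-learning on a toy chain problem. This is a deliberate simplification: the paper itself uses actor-critic neural networks and a graphical-game formulation, and the environment below is invented purely for illustration.

```python
import random

random.seed(42)

# a 5-state chain: move left/right, reward 1 for reaching the right end
N_STATES, ACTIONS = 5, [0, 1]  # 0 = left, 1 = right

def step(s, a):
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, r

Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma = 0.1, 0.9

# behavior policy: uniformly random; target policy: greedy in Q
s = 0
for _ in range(20000):
    a = random.choice(ACTIONS)            # data generated by behavior policy
    s2, r = step(s, a)
    Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])  # off-policy update
    s = 0 if s2 == N_STATES - 1 else s2   # restart episodes at the goal

greedy = [max(ACTIONS, key=lambda a: Q[st][a]) for st in range(N_STATES)]
print(greedy)
```

Although the data came from a purely random behavior policy, the learned greedy target policy moves right from every non-terminal state, which is the off-policy property the paper exploits at much larger scale.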

  2. Effect of different economic support policies on the optimal synthesis and operation of a distributed energy supply system with renewable energy sources for an industrial area

    International Nuclear Information System (INIS)

    Casisi, Melchiorre; De Nardi, Alberto; Pinamonti, Piero; Reini, Mauro

    2015-01-01

    Highlights: • MILP model optimization identifies best structure and operation of an energy system. • Total cost of the system is minimized according to industrial stakeholders' interests. • Effects of the adoption of economic support policies on the system are evaluated. • Social cost of incentives is compared with the corresponding CO2 emission reduction. • Support schemes that promote an actual environmental benefit are highlighted. - Abstract: Economic support policies are widely adopted in European countries in order to promote more efficient energy usage and the growth of renewable energy technologies. On the one hand, these schemes allow the overall pollutant emissions and the total cost of the energy systems to be reduced, but on the other hand their social impact in terms of economic investment needs to be evaluated. The aim of this paper is to compare the social cost of the application of each incentive with the corresponding CO2 emission reduction and overall energy saving. A Mixed Integer Linear Programming optimization procedure is used to evaluate the effect of different economic support policies on the optimal configuration and operation of a distributed energy supply system of an industrial area located in the north-east of Italy. The minimized objective function is the total annual cost for owning, operating and maintaining the whole energy system. The expectation is that a proper mix of renewable energy technologies and cogeneration systems will be included in the optimal solution, depending on the amount and nature of the supporting policies, highlighting the incentives that promote a real environmental benefit.
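    The structure of such a model can be sketched at toy scale: binary install/don't-install decisions form the integer part of the MILP, dispatch of the installed units forms the continuous part, and an incentive per renewable MWh shifts which configuration minimizes total annual cost. All technology data and the incentive level below are invented for illustration; the paper's model is far more detailed and is solved with a MILP solver rather than by enumeration.

```python
from itertools import product

# Illustrative technology data (NOT the paper's): name, annualized capital
# cost, operating cost per MWh, CO2 kg per MWh, max annual output in MWh.
TECHS = [("boiler", 4000.0, 60.0, 250.0, 1200.0),
         ("chp",    9000.0, 40.0, 180.0, 1200.0),
         ("pv",    20000.0,  5.0,   0.0,  400.0)]
DEMAND = 1000.0  # MWh per year

def best_config(incentive=0.0):
    """Enumerate the binary install decisions (the integer part of the MILP)
    and dispatch installed units cheapest-first (the continuous part)."""
    best = None
    for on in product([0, 1], repeat=len(TECHS)):
        units = [t for t, u in zip(TECHS, on) if u]
        if sum(t[4] for t in units) < DEMAND:
            continue  # infeasible: cannot cover the annual demand
        cost = sum(t[1] for t in units)  # capital costs of installed units
        co2, left = 0.0, DEMAND
        # zero-emission output earns the incentive, lowering its marginal cost
        for _, _, op, em, cap in sorted(
                units, key=lambda t: t[2] - (incentive if t[3] == 0 else 0.0)):
            q = min(cap, left)
            cost += q * op - (incentive * q if em == 0 else 0.0)
            co2 += q * em
            left -= q
        if best is None or (cost, co2) < (best[1], best[2]):
            best = (tuple(t[0] for t in units), cost, co2)
    return best

base = best_config(0.0)      # no support policy
feedin = best_config(30.0)   # e.g. a feed-in style premium per renewable MWh
print(base, feedin)
```

Even at this toy scale the paper's trade-off is visible: the incentive changes the cost-optimal configuration to one that includes the renewable unit and emits less CO2, and the subsidy paid for that switch is the social cost to weigh against the emission reduction.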

  3. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, which is referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
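    The core idea, reusing a single set of reliability samples to express the failure probability as an explicit function of the design variables via weights, can be sketched for a one-dimensional Gaussian case. The performance function, distributions and thresholds below are illustrative assumptions: one Monte Carlo run at a nominal design d0 is reweighted by the density ratio to estimate the failure probability at any other design d, with no further sampling.

```python
import math
import random

random.seed(7)

C = 3.0    # failure threshold: failure when x > C
D0 = 1.0   # nominal design (mean of the Gaussian input)
N = 200000

def phi(x, mu):  # normal density with standard deviation 1
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

# single reliability analysis at the nominal design
xs = [random.gauss(D0, 1.0) for _ in range(N)]
fails = [x for x in xs if x > C]

def pf_hat(d):
    """Failure probability at design d as a weighted sum of the SAME samples."""
    return sum(phi(x, d) / phi(x, D0) for x in fails) / N

def pf_exact(d):
    return 0.5 * math.erfc((C - d) / math.sqrt(2.0))

for d in (1.0, 1.5, 2.0):
    print(d, round(pf_hat(d), 5), round(pf_exact(d), 5))
```

Because the weights are density ratios, the estimate at d = D0 reduces to the plain Monte Carlo estimate, and its accuracy degrades gracefully as d moves away from the nominal design.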

  4. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    Directory of Open Access Journals (Sweden)

    Arnaud Chapon

    Full Text Available Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands careful control of every step, from sampling to the measurement itself. Thus, in this paper, we focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, optimization of counting parameters and definition of energy windows, using the maximization of a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except those relative to some parameters specific to PerkinElmer, most of the results presented here can be extended to other counters. Keywords: Liquid Scintillation Counting (LSC, PerkinElmer, Tri-Carb, Smear, Swipe

  5. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase of the external coefficient or internal coefficient has a negative influence on the sampling level. The changing rate of the potential market has no significant influence on the sampling level, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when they are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
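    A minimal sketch of the diffusion effect of free samples can be given with a discrete Bass-type model in which the sampling level seeds the initial adopter pool. The paper's two-stage formulation, with a dynamic potential market and repeat purchases, is considerably richer; the coefficients below are conventional illustrative values, not estimates from the paper.

```python
def diffuse(sample_level, p=0.03, q=0.4, market=100000.0, periods=20):
    """Discrete Bass-style diffusion.

    p: external (innovation) coefficient, q: internal (imitation) coefficient,
    sample_level: fraction of the potential market given a free sample.
    Returns cumulative adopters after the given number of periods.
    """
    adopters = sample_level * market   # free samples seed initial adopters
    for _ in range(periods):
        adopters += (p + q * adopters / market) * (market - adopters)
    return adopters

for s in (0.0, 0.05, 0.10):
    print(s, round(diffuse(s)))
```

Comparing the adoption curves with and without sampling gives the incremental demand against which the cost of the free samples would be traded off when choosing the sampling level.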

  6. The Theory of Optimal Taxation

    DEFF Research Database (Denmark)

    Sørensen, Peter Birch

    The theory of optimal taxation has often been criticized for being of little practical policy relevance, due to a lack of robust theoretical results. This paper argues that recent advances in optimal tax theory have made that theory easier to apply and may help to explain some current trends in international tax policy. Covering the taxation of labour income and capital income as well as indirect taxation, the paper also illustrates how some of the key results in optimal tax theory may be derived in a simple, heuristic manner.

  7. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling proved the most robust against undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  8. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    Science.gov (United States)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv levels, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require an overhead in space, money and time that is often prohibitive for primarily undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) from an ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to undergraduate researchers. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature and laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  9. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore, the scanning pattern is a uniform grid in the Fourier space. Such a uniform sampling scheme leads to 3 major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold in that: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shortened acquisition time and improved the quality of FP reconstructions. It may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
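    The contrast between uniform and non-uniform Fourier sampling can be sketched by generating the two LED layouts. The ring geometry below, denser near DC and sparser at high spatial frequency, is an invented illustration of the idea, not the authors' actual 3D-printed illuminator design.

```python
import math

def uniform_grid(n=15, pitch=4.0):
    """Conventional periodic LED array: a uniform grid in Fourier space."""
    half = n // 2
    return [(i * pitch, j * pitch)
            for i in range(-half, half + 1) for j in range(-half, half + 1)]

def nonuniform_rings(n_rings=5):
    """Concentric rings, denser near DC where most biological samples
    concentrate their signal energy (ring spacing grows with radius)."""
    pts = [(0.0, 0.0)]
    for k in range(1, n_rings + 1):
        r = 3.0 * k ** 1.5          # ring radius grows super-linearly
        m = 1 + 5 * k               # LEDs on ring k
        pts += [(r * math.cos(2 * math.pi * i / m),
                 r * math.sin(2 * math.pi * i / m)) for i in range(m)]
    return pts

grid, rings = uniform_grid(), nonuniform_rings()
print(len(grid), len(rings))   # the ring pattern needs far fewer acquisitions
```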

  10. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using acidic extraction as in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  11. Optimal trade-credit policy for perishable items deeming imperfect production and stock dependent demand

    Directory of Open Access Journals (Sweden)

    S. R. Singh

    2014-01-01

    Full Text Available Trade credit is a highly successful economic mechanism used by suppliers to encourage retailers to buy in larger quantities. In this article, a mathematical model with stock-dependent demand and deterioration is developed to investigate the retailer’s optimal inventory policy under the scheme of permissible delay in payment. It is assumed that defective items are produced during the production process and that the delay period is progressive. The objective is to minimize the total average cost of the system. To illustrate the proposed model, numerical examples and a sensitivity analysis are provided. Finally, the convexities of the cost functions and the effects of changing parameters are represented through graphs.

  12. Simulation of an integrated age replacement and spare provisioning policy using SLAM

    International Nuclear Information System (INIS)

    Zohrul Kabir, A.B.M.; Farrash, S.H.A.

    1996-01-01

    This paper presents a SLAM simulation model for determining a jointly optimal age replacement and spare part provisioning policy. The policy, referred to as a stocking policy, is formulated by combining an age replacement policy with a continuous-review (s, S) inventory policy, where s is the stock reorder level and S is the maximum stock level. The optimal values of the decision variables are obtained by minimizing the total cost of replacement and inventory. The simulation procedure outlined in the paper can be used to model any operating situation having either a single item or a number of identical items. Results from a number of case problems specifically constructed using a 5-factor second-order rotatable design are presented, and the effects of different cost elements, item failure characteristics and lead time characteristics are highlighted. For all case problems, optimal (s, S) policies to support the Barlow-Proschan age policy have also been determined. Simulation results clearly indicate that separate optimization of the replacement and spare provisioning policies does not ensure global optimality when the total system cost is to be minimized.
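    The age replacement half of the stocking policy can be sketched with renewal-reward Monte Carlo: replace preventively at age T at low cost, or correctively on earlier failure at high cost, and pick the T that minimizes long-run cost per unit time. The Weibull parameters and costs below are illustrative assumptions, and the (s, S) spare-inventory side of the SLAM model is omitted entirely.

```python
import random

random.seed(3)

CP, CF = 1.0, 5.0          # preventive vs corrective (failure) replacement cost
SCALE, SHAPE = 10.0, 2.0   # Weibull wear-out life distribution

def cost_rate(T, n=20000):
    """Long-run cost per unit time under age replacement at age T,
    estimated by renewal-reward Monte Carlo over n replacement cycles."""
    cost = time = 0.0
    for _ in range(n):
        life = random.weibullvariate(SCALE, SHAPE)
        if life < T:
            cost += CF; time += life   # item failed before planned replacement
        else:
            cost += CP; time += T      # planned (cheaper) replacement at age T
    return cost / time

ages = [2.0, 4.0, 6.0, 8.0, 10.0, 15.0, float("inf")]  # inf = run to failure
rates = {T: cost_rate(T) for T in ages}
best_T = min(rates, key=rates.get)
print(best_T, round(rates[best_T], 3))
```

An intermediate replacement age beats both very frequent replacement and run-to-failure here, which is the classic age replacement result; the paper's point is that this T should be optimized jointly with the spare stocking levels, not separately.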

  13. Parallel island genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Lapa, Celso M.F.

    2003-01-01

    In this work, we focus on the application of an Island Genetic Algorithm (IGA), a coarse-grained parallel genetic algorithm (PGA) model, to a Nuclear Power Plant (NPP) Auxiliary Feedwater System (AFWS) surveillance tests policy optimization. Here, the main objective is to outline, by means of comparisons, the advantages of the IGA over the simple (non-parallel) genetic algorithm (GA), which has been successfully applied to this kind of problem. The goal of the optimization is to maximize the system's average availability for a given period of time, considering realistic features such as: i) aging effects on standby components during the tests; ii) failures revealed by the tests imply corrective maintenance, increasing outage times; iii) components have distinct test parameters (outage time, aging factors, etc.) and iv) tests are not necessarily periodic. In our experiments, which were run on a cluster of eight 1-GHz personal computers, we clearly observed gains not only in computational time, which decreased linearly with the number of computers, but also in the optimization outcome.

  14. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design in OLHS: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt has been previously made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then, by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this
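    The two initial designs compared in the study can be sketched directly: a Latin hypercube places exactly one point in each of n strata per dimension, either at the stratum midpoint or at a uniformly random position within it. The sketch below generates both variants and reports a simple space-filling measure (minimum pairwise distance); the study's OLHS algorithms then optimize such criteria starting from one of these designs.

```python
import math
import random

random.seed(5)

def lhs(n, dims, midpoint=True):
    """Latin hypercube sample on [0, 1]^dims: one point per stratum in each
    dimension. midpoint=True places points at stratum centers (midpoint LHS),
    otherwise uniformly within each stratum (random LHS)."""
    cols = []
    for _ in range(dims):
        perm = list(range(n))
        random.shuffle(perm)
        cols.append([(k + (0.5 if midpoint else random.random())) / n
                     for k in perm])
    return list(zip(*cols))

def min_dist(pts):
    """Minimum pairwise distance: a crude space-filling criterion."""
    return min(math.dist(a, b)
               for i, a in enumerate(pts) for b in pts[i + 1:])

mid = lhs(20, 2, midpoint=True)
rnd = lhs(20, 2, midpoint=False)
print(round(min_dist(mid), 3), round(min_dist(rnd), 3))
```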

  15. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. The increase in sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between the whole and lower airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are a 500 ml sample volume

  16. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. To test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age as well as potential confounding of the association by depressive disorder was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration (<6 h) was related to low optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  17. Optimal sample to tracer ratio for isotope dilution mass spectrometry: the polyisotopic case

    International Nuclear Information System (INIS)

    Laszlo, G.; Ridder, P. de; Goldman, A.; Cappis, J.; Bievre, P. de

    1991-01-01

    The Isotope Dilution Mass Spectrometry (IDMS) measurement technique provides a means for determining the unknown amount of various isotopes of an element in a sample solution of known mass. The sample solution is mixed with an auxiliary solution, or tracer, containing a known amount of the same element having the same isotopes but of different relative abundances or isotopic composition, and the induced change in the isotopic composition is measured by isotope mass spectrometry. The technique involves the measurement of the abundance ratio of each isotope to the same reference isotope in the sample solution, in the tracer solution and in the blend of the sample and tracer solutions. These isotope ratio measurements, the known element amount in the tracer and the known mass of sample solution are used to calculate the unknown amount of one isotope in the sample solution. Subsequently the unknown amount of element is determined. The purpose of this paper is to examine the optimization of the ratio of the estimated unknown amount of element in the sample solution to the known amount of element in the tracer solution, in order to minimize the relative uncertainty in the determination of the unknown amount of element.
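    For a single ratio pair, the IDMS algebra reduces to a well-known inversion, and the optimal mixing can be scanned numerically. Writing R for the ratio of a spike isotope to the reference isotope in the sample (R_X), tracer (R_T) and blend (R_b), mass balance on the reference isotope gives n_x = n_t (R_T − R_b)/(R_b − R_X). The sketch below uses assumed uranium-like ratios (not values from the paper) and propagates a fixed 0.1% relative uncertainty on the blend-ratio measurement to find the best sample-to-tracer ratio on a coarse grid.

```python
# Illustrative isotope ratios R = n(235U)/n(238U): a natural-like sample
# and an enriched tracer (values assumed for illustration)
R_X, R_T = 0.0073, 10.0

def blend_ratio(n_x, n_t):
    """Ratio measured in the blend; n_x, n_t are the amounts of the
    reference isotope contributed by sample and tracer."""
    return (R_X * n_x + R_T * n_t) / (n_x + n_t)

def unknown_amount(n_t, r_b):
    # classic IDMS inversion: n_x = n_t * (R_T - R_b) / (R_b - R_X)
    return n_t * (R_T - r_b) / (r_b - R_X)

def rel_uncertainty(q, sigma_rb_rel=0.001):
    """Relative uncertainty of n_x caused by a 0.1% relative uncertainty
    on the blend ratio alone, for sample/tracer amount ratio q = n_x/n_t."""
    n_t = 1.0
    r_b = blend_ratio(q * n_t, n_t)
    dndr = n_t * (R_X - R_T) / (r_b - R_X) ** 2   # d n_x / d R_b
    return abs(dndr) * sigma_rb_rel * r_b / (q * n_t)

qs = [2.0 ** k for k in range(-6, 10)]
best_q = min(qs, key=rel_uncertainty)
print(best_q, round(blend_ratio(best_q, 1.0), 3))
```

The scan lands near the classical result that error magnification is minimized when the blend ratio approaches the geometric mean of the sample and tracer ratios (here sqrt(R_X * R_T) ≈ 0.27, i.e. q ≈ 37).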

  18. Carbon Sequestration and Optimal Climate Policy

    International Nuclear Information System (INIS)

    Grimaud, Andre; Rouge, Luc

    2009-01-01

    We present an endogenous growth model in which the use of a non-renewable natural resource generates carbon-dioxide emissions that can be partly sequestered. This approach breaks with the systematic link between resource use and pollution emission. The accumulated stock of remaining emissions has a negative impact on household utility and corporate productivity. While sequestration quickens the optimal extraction rate, it can also generate higher emissions in the short run. It also has an adverse effect on economic growth. We study the impact of a carbon tax: the level of the tax has an effect in our model, its optimal level is positive, and it can be interpreted ex post as a decreasing ad valorem tax on the resource

  19. Inequality and Optimal Redistributive Tax and Transfer Policies

    OpenAIRE

    Howell H Zee

    1999-01-01

    This paper explores the revenue-raising aspect of progressive taxation and derives, on the basis of a simple model, the optimal degree of tax progressivity where the tax revenue is used exclusively to finance (perfectly) targeted transfers to the poor. The paper shows that not only would it be optimal to finance the targeted transfers with progressive taxation, but that the optimal progressivity increases unambiguously with growing income inequality. This conclusion holds up under different a...

  20. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation

    International Nuclear Information System (INIS)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A.; Bouquerel, Hélène

    2016-01-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L⁻¹ and 10% for 10 mBq L⁻¹. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L⁻¹, a conservative experimental estimate is rather 5 mBq L⁻¹, corresponding to 0.14 fg g⁻¹. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. - Highlights: • Radium-226 concentration measured with optimized accumulation in a container. • Radon-222 in air measured precisely with scintillation flasks and long counting times. • Method tested by repetition tests, dilution experiments, and successful blind tests. • Estimated conservative detection limit without pre-concentration is 5 mBq L⁻¹. • Method is portable, cost

  1. Endogenous Quality Effects of Trade Policy

    NARCIS (Netherlands)

    J.L. Moraga-Gonzalez (José Luis); J.M.A. Viaene (Jean-Marie)

    1999-01-01

We study the optimal trade policy against a foreign oligopoly with endogenous quality. We show that, under the Most Favoured Nation (MFN) clause, a uniform tariff policy is always welfare improving over the free trade equilibrium. However, a nonuniform tariff policy is always desirable

  2. Isolation and identification of phytase-producing strains from soil samples and optimization of production parameters

    Directory of Open Access Journals (Sweden)

    Masoud Mohammadi

    2017-09-01

    Discussion and conclusion: Penicillium sp. isolated from a soil sample near Qazvin, was able to produce highly active phytase in optimized environmental conditions, which could be a suitable candidate for commercial production of phytase to be used as complement in poultry feeding industries.

  3. Value maximizing maintenance policies under general repair

    International Nuclear Information System (INIS)

    Marais, Karen B.

    2013-01-01

    One class of maintenance optimization problems considers the notion of general repair maintenance policies where systems are repaired or replaced on failure. In each case the optimality is based on minimizing the total maintenance cost of the system. These cost-centric optimizations ignore the value dimension of maintenance and can lead to maintenance strategies that do not maximize system value. This paper applies these ideas to the general repair optimization problem using a semi-Markov decision process, discounted cash flow techniques, and dynamic programming to identify the value-optimal actions for any given time and system condition. The impact of several parameters on maintenance strategy, such as operating cost and revenue, system failure characteristics, repair and replacement costs, and the planning time horizon, is explored. This approach provides a quantitative basis on which to base maintenance strategy decisions that contribute to system value. These decisions are different from those suggested by traditional cost-based approaches. The results show (1) how the optimal action for a given time and condition changes as replacement and repair costs change, and identifies the point at which these costs become too high for profitable system operation; (2) that for shorter planning horizons it is better to repair, since there is no time to reap the benefits of increased operating profit and reliability; (3) how the value-optimal maintenance policy is affected by the system's failure characteristics, and hence whether it is worthwhile to invest in higher reliability; and (4) the impact of the repair level on the optimal maintenance policy. -- Highlights: •Provides a quantitative basis for maintenance strategy decisions that contribute to system value. •Shows how the optimal action for a given condition changes as replacement and repair costs change. •Shows how the optimal policy is affected by the system's failure characteristics. •Shows when it is
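The backward-induction structure described in this record can be sketched in a few lines. Everything below (states, revenues, costs, transition probabilities, discount factor) is an illustrative assumption for the sake of the sketch, not the paper's data or model:

```python
# Hedged sketch: value-optimal maintenance via backward dynamic programming
# over a finite horizon, choosing among continue / repair / replace.
GAMMA = 0.95                  # per-period discount factor (assumed)
STATES = (0, 1, 2)            # 0 = good, 1 = degraded, 2 = failed
ACTIONS = ("continue", "repair", "replace")

REVENUE = {0: 100.0, 1: 60.0, 2: 0.0}                       # operating profit per period
COST = {"continue": 0.0, "repair": 40.0, "replace": 120.0}  # action costs

# P[action][state] -> probability distribution over next states
P = {
    "continue": {0: {0: 0.80, 1: 0.15, 2: 0.05},
                 1: {1: 0.70, 2: 0.30},
                 2: {2: 1.00}},
    "repair":   {s: {0: 0.6, 1: 0.4} for s in STATES},  # general (imperfect) repair
    "replace":  {s: {0: 1.0} for s in STATES},          # good as new
}

def optimal_policy(horizon):
    """Backward induction; returns value and action tables V[t][state], A[t][state]."""
    V = [{s: 0.0 for s in STATES} for _ in range(horizon + 1)]
    A = [{s: None for s in STATES} for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):
        for s in STATES:
            for a in ACTIONS:
                if a == "continue" and s == 2:
                    continue                     # a failed system cannot keep running
                ev = sum(p * V[t + 1][s2] for s2, p in P[a][s].items())
                v = REVENUE[s] - COST[a] + GAMMA * ev
                if A[t][s] is None or v > V[t][s]:
                    V[t][s], A[t][s] = v, a
    return V, A

V, A = optimal_policy(horizon=20)
print(A[0])  # value-optimal action for each starting condition
```

Rerunning the induction with different repair/replacement costs or a shorter horizon shows the policy switches the abstract describes, e.g. short horizons favouring repair over replacement.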

  4. Optimal ordering and pricing policy for price sensitive stock–dependent demand under progressive payment scheme

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2011-01-01

Full Text Available The terminal condition of inventory level to be zero at the end of the cycle time adopted by Soni and Shah (2008, 2009) is not viable when demand is stock-dependent. To rectify this assumption, we extend their model for (1) an ending inventory to be non-zero; (2) limited floor space; (3) a profit maximization model; (4) selling price to be a decision variable; and (5) units in inventory deteriorating at a constant rate. An algorithm is developed to search for the optimal decision policy. The working of the proposed model is supported with a numerical example. Sensitivity analysis is carried out to investigate critical parameters.

  5. Entrepreneurship Policies: Principles, Problems and Opportunities

    OpenAIRE

    Karlsson, Charlie; Andersson, Martin

    2009-01-01

    In this paper, we discuss the current status of the literature on entrepreneurship policy. The purpose is to discuss and assess several fundamental questions pertaining to entrepreneurship policies, such as “What is the optimal rate of entrepreneurship?” and “What entrepreneurship policies to pursue to remedy market failures and to avoid policy failures?”. In the entrepreneurship policies literature several contributors make distinctions between five types of entrepreneurship policy: governme...

  6. How do local governments decide on public policy in fiscal federalism?

    DEFF Research Database (Denmark)

    Köthenbürger, Marko

    2011-01-01

Previous literature widely assumes that taxes are optimized in local public finance while expenditures adjust residually. This paper endogenizes the choice of the optimization variable. In particular, it analyzes how federal policy toward local governments influences the way local governments decide on public policy. Unlike the usual presumption, the paper shows that local governments may choose to optimize over expenditures. The result holds when federal policy subsidizes local taxation. The results offer a new perspective of the efficiency implications of federal policy toward local

  7. How Do Local Governments Decide on Public Policy in Fiscal Federalism?

    DEFF Research Database (Denmark)

    Köthenbürger, Marko

    2008-01-01

Previous literature widely assumes that taxes are optimized in local public finance while expenditures adjust residually. This paper endogenizes the choice of the optimization variable. In particular, it analyzes how federal policy toward local governments influences the way local governments decide on public policy. Unlike the presumption, the paper shows that local governments may choose to optimize over expenditures. The result most notably prevails when federal policy subsidizes local fiscal effort. The results offer a new perspective of the efficiency implications of federal policy

  8. Dynamic optimization in environmental economics

    International Nuclear Information System (INIS)

    Moser, Elke; Tragler, Gernot; Veliov, Vladimir M.; Semmler, Willi

    2014-01-01

This book contains two chapters with the topics: Chapter 1: INTERACTIONS BETWEEN ECONOMY AND CLIMATE: (a) Climate Change and Technical Progress: Impact of Informational Constraints. (b) Environmental Policy in a Dynamic Model with Heterogeneous Agents and Voting. (c) Optimal Environmental Policy in the Presence of Multiple Equilibria and Reversible Hysteresis. (d) Modeling the Dynamics of the Transition to a Green Economy. (e) One-Parameter GHG Emission Policy With R&D-Based Growth. (f) Pollution, Public Health Care, and Life Expectancy when Inequality Matters. (g) Uncertain Climate Policy and the Green Paradox. (h) Uniqueness Versus Indeterminacy in the Tragedy of the Commons - A "Geometric" Approach. Chapter 2: OPTIMAL EXTRACTION OF RESOURCES: (j) Dynamic Behavior of Oil Importers and Exporters Under Uncertainty. (k) Robust Control of a Spatially Distributed Commercial Fishery. (l) On the Effect of Resource Exploitation on Growth: Domestic Innovation vs. Technological Diffusion Through Trade. (m) Forest Management and Biodiversity in Size-Structured Forests Under Climate Change. (n) Carbon Taxes and Comparison of Trading Regimes in Fossil Fuels. (o) Landowning, Status and Population Growth. (p) Optimal Harvesting of Size-Structured Biological Populations.

  9. A summary of maintenance policies for a finite interval

    International Nuclear Information System (INIS)

    Nakagawa, T.; Mizutani, S.

    2009-01-01

Considering maintenance policies over a finite time span is a practically important problem, because the working times of most units in actual fields are finite. This paper converts the usual maintenance models to finite maintenance models. It is more difficult to derive theoretically optimal policies for a finite time span than for an infinite one. Three usual models of periodic replacement with minimal repair, block replacement and simple replacement are transformed into finite replacement models. Further, optimal periodic and sequential policies for an imperfect preventive maintenance and an inspection model for a finite time span are considered. Optimal policies for each model are analytically derived and numerically computed.
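To make the finite-span idea concrete, here is a toy version of one such model (not the paper's formulation): a unit operating over a finite span S receives minimal repairs when it fails according to a Weibull process, and is replaced at N equally spaced times. The expected number of minimal repairs per interval of length T is H(T) = (T/eta)**beta, so the total cost is N·c_replace + N·H(S/N)·c_repair, and since N is a small integer we can search it directly. All parameter values are illustrative assumptions.

```python
# Toy finite-span periodic replacement with minimal repair (Weibull failures).
def expected_cost(N, S=100.0, eta=10.0, beta=2.0, c_replace=50.0, c_repair=10.0):
    """Expected total cost of N replacements over span S, with minimal
    repairs in between; H(T) = (T/eta)**beta repairs per interval."""
    T = S / N
    repairs_per_interval = (T / eta) ** beta
    return N * c_replace + N * repairs_per_interval * c_repair

def optimal_replacements(S=100.0, **kw):
    """Direct search over the (finite) number of replacements."""
    costs = {N: expected_cost(N, S=S, **kw) for N in range(1, 201)}
    return min(costs, key=costs.get)

N_star = optimal_replacements()
print(N_star, expected_cost(N_star))
```

With these numbers the cost N·50 + 1000/N is minimized near N ≈ 4.5, so N = 4 and N = 5 tie at a total cost of 450; over an infinite horizon one would instead minimize the cost *rate*, which is the contrast the paper draws.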

  10. An Economic and Environmental Assessment Model for Selecting the Optimal Implementation Strategy of Fuel Cell Systems—A Focus on Building Energy Policy

    Directory of Open Access Journals (Sweden)

    Daeho Kim

    2014-08-01

Full Text Available Considerable effort is being made to reduce the primary energy consumption in buildings. As part of this effort, fuel cell systems are attracting attention as new/renewable energy systems for several reasons: (i) distributed generation system; (ii) combined heat and power system; and (iii) availability of various sources of hydrogen in the future. Therefore, this study aimed to develop an economic and environmental assessment model for selecting the optimal implementation strategy of the fuel cell system, focusing on building energy policy. This study selected two types of buildings (i.e., residential buildings and non-residential buildings) as the target buildings and considered two types of building energy policies (i.e., the standard of energy cost calculation and the standard of a government subsidy). This study established the optimal implementation strategy of the fuel cell system in terms of the life cycle cost and life cycle CO2 emissions. For the residential building, it is recommended that the subsidy level and the system marginal price level be increased. For the non-residential building, it is recommended that the gas energy cost be decreased and the system marginal price level be increased. The developed model could be applied to any other country or any other type of building according to building energy policy.

  11. Optimal Acquisition and Production Policy for End-of-Life Engineering Machinery Recovering in a Joint Manufacturing/Remanufacturing System under Uncertainties in Procurement and Demand

    Directory of Open Access Journals (Sweden)

    Haolan Liao

    2017-02-01

Full Text Available The intense shortage of natural resources and the inchoate phase of automobile remanufacturing in a closed-loop supply chain (CLSC) are driving people to take cyclic manufacturing seriously. Aiming to maximize resource utilization and profits, we apply an optimizing mathematical analysis to the modeling of automobile engine remanufacturing in a joint manufacturing system, in which the quantity and quality of procurement, and the demand of the market, are both uncertain. The manufacturer can either produce new products with raw materials or remanufacture the returned products taken back from customers; the raw materials are bought from two suppliers, each with a certain probability of supply disruption. The returned products are classified into different quality levels according to the testing results after sorting. By considering the remanufacture-up-to strategy, we obtain the optimal remanufacturing ratio; the manufacturing quantity and corresponding maximized total profit of this joint system are then determined. We also investigated a real-life case of auto engine remanufacturing, comparing it with the theory of optimal remanufacturing policy, and the results indicate that a material savings of more than 45% and a cost improvement of more than 40% could be achieved when the optimal remanufacturing policy of our model is implemented.

  12. Optimization of China's generating portfolio and policy implications based on portfolio theory

    International Nuclear Information System (INIS)

    Zhu, Lei; Fan, Ying

    2010-01-01

This paper applies portfolio theory to evaluate China's 2020-medium-term plans for generating technologies and its generating portfolio. With reference to the risk of relevant generating-cost streams, the paper discusses China's future development of efficient (Pareto optimal) generating portfolios that enhance energy security in different scenarios, including CO2-emission-constrained scenarios. This research has found that the future adjustment of China's planned 2020 generating portfolio can reduce the portfolio's cost risk through appropriate diversification of generating technologies, but a price will be paid in the form of increased generating cost. In the CO2-emission-constrained scenarios, the generating-cost risk of China's planned 2020 portfolio is even greater than that of the 2005 portfolio, but increasing the proportion of nuclear power in the generating portfolio can reduce the cost risk effectively. For renewable-power generation, because of relatively high generating costs, it will be necessary to obtain stronger policy support to promote renewable-power development.
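The portfolio-theory idea in this record treats generating technologies as assets, with expected generating cost in place of return and the standard deviation of the cost stream as risk. A minimal sketch, with entirely made-up costs and covariances (not China's data):

```python
# Mean-variance view of a generating mix: cost vs. cost risk.
import numpy as np

techs = ["coal", "hydro", "nuclear", "wind"]
cost = np.array([40.0, 30.0, 55.0, 70.0])   # expected generating cost (assumed units)
# covariance of the cost streams; fuel-price exposure makes coal volatile here
cov = np.array([
    [90.0, 10.0,  5.0,  0.0],
    [10.0, 25.0,  2.0,  0.0],
    [ 5.0,  2.0, 15.0,  1.0],
    [ 0.0,  0.0,  1.0, 30.0],
])

def portfolio(w):
    """Expected cost and cost risk (std dev) of a generating mix w (sums to 1)."""
    w = np.asarray(w)
    return float(cost @ w), float(np.sqrt(w @ cov @ w))

all_coal = portfolio([1.0, 0.0, 0.0, 0.0])
diversified = portfolio([0.25, 0.25, 0.25, 0.25])
print(all_coal, diversified)  # diversification trades higher cost for lower risk
```

Sweeping feasible mixes and keeping, for each risk level, the cheapest mix traces the Pareto (efficient) frontier the paper analyzes; adding a CO2 constraint simply restricts the feasible set of weights.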

  13. Designing evaluation studies to optimally inform policy: what factors do policy-makers in China consider when making resource allocation decisions on healthcare worker training programmes?

    Science.gov (United States)

    Wu, Shishi; Legido-Quigley, Helena; Spencer, Julia; Coker, Richard James; Khan, Mishal Sameer

    2018-02-23

    In light of the gap in evidence to inform future resource allocation decisions about healthcare provider (HCP) training in low- and middle-income countries (LMICs), and the considerable donor investments being made towards training interventions, evaluation studies that are optimally designed to inform local policy-makers are needed. The aim of our study is to understand what features of HCP training evaluation studies are important for decision-making by policy-makers in LMICs. We investigate the extent to which evaluations based on the widely used Kirkpatrick model - focusing on direct outcomes of training, namely reaction of trainees, learning, behaviour change and improvements in programmatic health indicators - align with policy-makers' evidence needs for resource allocation decisions. We use China as a case study where resource allocation decisions about potential scale-up (using domestic funding) are being made about an externally funded pilot HCP training programme. Qualitative data were collected from high-level officials involved in resource allocation at the national and provincial level in China through ten face-to-face, in-depth interviews and two focus group discussions consisting of ten participants each. Data were analysed manually using an interpretive thematic analysis approach. Our study indicates that Chinese officials not only consider information about the direct outcomes of a training programme, as captured in the Kirkpatrick model, but also need information on the resources required to implement the training, the wider or indirect impacts of training, and the sustainability and scalability to other settings within the country. In addition to considering findings presented in evaluation studies, we found that Chinese policy-makers pay close attention to whether the evaluations were robust and to the composition of the evaluation team. Our qualitative study indicates that training programme evaluations that focus narrowly on direct training

  14. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included identifying an ideal extraction diluent, varying the number of wash steps and the initial centrifugation speed, and testing sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol, with an approximate matrix limit of detection of 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  15. Policy Analysis Screening System (PASS) demonstration: sample queries and terminal instructions

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-10-16

This document contains the input and output for the Policy Analysis Screening System (PASS) demonstration. This demonstration is stored on a portable disk at the Environmental Impacts Division. Sample queries presented here include: (1) how to use PASS; (2) estimated 1995 energy consumption from the Mid-Range Energy-Forecasting System (MEFS) data base; (3) pollution projections from the Strategic Environmental Assessment System (SEAS) data base; (4) diesel auto regulations; (5) diesel auto health effects; (6) oil shale health and safety measures; (7) water pollution effects of SRC; (8) acid rainfall from the Energy Environmental Statistics (EES) data base; (9) 1990 EIA electric generation by fuel type; (10) sulfate concentrations by Federal region; (11) forecast of 1995 SO2 emissions in Region III; and (12) estimated electrical generating capacity in California to 1990. The file name for each query is included.

  16. Optimized sample preparation for two-dimensional gel electrophoresis of soluble proteins from chicken bursa of Fabricius

    Directory of Open Access Journals (Sweden)

    Zheng Xiaojuan

    2009-10-01

Full Text Available Abstract Background Two-dimensional gel electrophoresis (2-DE) is a powerful method to study protein expression and function in living organisms and diseases. This technique, however, has not been applied to the avian bursa of Fabricius (BF), a central immune organ. Here, optimized 2-DE sample preparation methodologies were constructed for the chicken BF tissue. Using the optimized protocol, we performed further 2-DE analysis on a soluble protein extract from the BF of chickens infected with virulent avibirnavirus. To demonstrate the quality of the extracted proteins, several differentially expressed protein spots were selected, cut from 2-DE gels, and identified by matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF MS). Results An extraction buffer containing 7 M urea, 2 M thiourea, 2% (w/v) 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS), 50 mM dithiothreitol (DTT), 0.2% Bio-Lyte 3/10, 1 mM phenylmethylsulfonyl fluoride (PMSF), 20 U/ml Deoxyribonuclease I (DNase I), and 0.25 mg/ml Ribonuclease A (RNase A), combined with sonication and vortexing, yielded the best 2-DE data. Relative to non-frozen immobilized pH gradient (IPG) strips, frozen IPG strips did not result in significant changes in the 2-DE patterns after isoelectric focusing (IEF). When the optimized protocol was used to analyze the spleen and thymus, as well as avibirnavirus-infected bursa, high quality 2-DE protein expression profiles were obtained. 2-DE maps of the BF of chickens infected with virulent avibirnavirus were visibly different and many differentially expressed proteins were found. Conclusion These results showed that method C, in concert with extraction buffer IV, was the most favorable for preparing samples for IEF and subsequent protein separation and yielded the best quality 2-DE patterns. The optimized protocol is a useful sample preparation method for comparative proteomics analysis of chicken BF tissues.

  17. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. In order to achieve this, both the analytical conditions for HPLC with diode detection and the extraction step for a selected sample were studied. (Author)

  18. A niching genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Sacco, W.F.; Lapa, Celso M.F.; Pereira, C.M.N.A.; Oliveira, C.R.E. de

    2006-01-01

This article extends previous efforts on genetic algorithms (GAs) applied to a nuclear power plant (NPP) auxiliary feedwater system (AFWS) surveillance tests policy optimization. We introduce the application of a niching genetic algorithm (NGA) to this problem and compare its performance to previous results. The NGA maintains populational diversity during the search process, thus promoting a greater exploration of the search space. The optimization problem consists in maximizing the system's average availability for a given period of time, considering realistic features such as: (i) aging effects on standby components during the tests; (ii) failures revealed by the tests imply corrective maintenance, increasing outage times; (iii) components have distinct test parameters (outage time, aging factors, etc.); and (iv) tests are not necessarily periodic. We find that the NGA performs better than the conventional GA and the island GA due to a greater exploration of the search space.
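The diversity-preserving mechanism behind a niching GA is commonly fitness sharing: an individual's raw fitness is divided by a niche count, so crowded regions of the search space are penalized. A minimal sketch of that mechanism alone (the paper's surveillance-test problem and encoding are far richer, and its exact niching scheme may differ):

```python
# Fitness sharing: penalize individuals that sit in crowded niches.
def shared_fitness(pop, raw, sigma_share=0.1):
    """pop: candidate solutions encoded as floats in [0, 1];
    raw: their raw fitness values. Returns shared fitnesses."""
    shared = []
    for i, xi in enumerate(pop):
        niche = 0.0
        for xj in pop:
            d = abs(xi - xj)
            if d < sigma_share:
                niche += 1.0 - d / sigma_share   # triangular sharing kernel
        shared.append(raw[i] / niche)            # niche >= 1 (self-distance is 0)
    return shared

# two clustered individuals are penalized relative to an isolated one
print(shared_fitness([0.10, 0.11, 0.90], [1.0, 1.0, 1.0]))
```

Selection then acts on the shared values, so a single good basin cannot absorb the whole population, which is the "greater exploration of the search space" the abstract credits for the NGA's advantage.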

  19. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

Headspace-gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV...

  20. Shadow Cost of Public Funds and Privatization Policies

    OpenAIRE

    Sato, Susumu; Matsumura, Toshihiro

    2017-01-01

We investigate the optimal privatization policy in mixed oligopolies with a shadow cost of public funds (excess burden of taxation). The government is concerned with both the total social surplus and the revenue obtained by the privatization of a public firm. We find that the relationship between the shadow cost of public funds and the optimal privatization policy is non-monotone. When the cost is moderate, the higher the cost, the lower the optimal degree of privatization. ...

  1. Welfare effects of deterrence-motivated activation policy

    DEFF Research Database (Denmark)

    Rasmussen, Martin

We investigate whether activation policy is part of the optimal policy of a benevolent government, when the motivation for introducing activation is to deter some people from collecting benefits. The government offers a pure benefit programme and an activation programme, and individuals self-select into programmes. Individuals differ with respect to disutility and wage. Activation programmes are relatively costly and favour individuals who are relatively well off. Hence, for activation policy to be used, labour supply effects have to be relatively small. We discuss how labour supply effects depend

  2. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

This report describes a field experiment that identifies an optimal method for determination of compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements in predicting the average surface radium concentration and in estimating the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected on each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post-hole samples; 10-composite sampling is the method of choice because it yields a given level of accuracy and precision for the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs
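Why compositing is the cheap way to estimate a site mean can be shown in a few lines: homogenizing k cores into one composite needs only one lab analysis, and with equal-size composites the mean of the composite values equals the mean of the individual cores. The field values below are synthetic (a lognormal with made-up parameters), not the report's data:

```python
# Composite vs. individual sampling for estimating a site mean.
import random, statistics

random.seed(42)
# synthetic surface-soil Ra-226 concentrations for 200 cores
field = [random.lognormvariate(1.5, 0.6) for _ in range(200)]

def composite_estimate(samples, k):
    """Homogenize consecutive groups of k cores into composites; each
    composite needs one lab analysis and its value is the mean of its
    members. Returns (estimated site mean, number of analyses)."""
    comps = [statistics.mean(samples[i:i + k]) for i in range(0, len(samples), k)]
    return statistics.mean(comps), len(comps)

m10, n10 = composite_estimate(field, 10)   # 20 analyses
m1, n1 = composite_estimate(field, 1)      # 200 analyses
print(m10, n10, m1, n1)
```

The two estimates coincide while the 10-composite scheme uses a tenth of the analyses; in practice the tradeoff is between analytical cost and the within-composite information lost about spatial variability, which is what the field experiment quantified.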

  3. Balancing Exploration, Uncertainty Representation and Computational Time in Many-Objective Reservoir Policy Optimization

    Science.gov (United States)

    Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.

    2016-12-01

As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g. climate change, changing energy markets, population pressures, ecosystem services, etc.), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search with limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization, and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system, where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS

  4. Dynamic optimization in environmental economics

    Energy Technology Data Exchange (ETDEWEB)

    Moser, Elke; Tragler, Gernot; Veliov, Vladimir M. (eds.) [Vienna Univ. of Technology (Austria). Inst. of Mathematical Methods in Economics; Semmler, Willi [The New School for Social Research, New York, NY (United States). Dept. of Economics

    2014-11-01

This book contains two chapters with the topics: Chapter 1: INTERACTIONS BETWEEN ECONOMY AND CLIMATE: (a) Climate Change and Technical Progress: Impact of Informational Constraints. (b) Environmental Policy in a Dynamic Model with Heterogeneous Agents and Voting. (c) Optimal Environmental Policy in the Presence of Multiple Equilibria and Reversible Hysteresis. (d) Modeling the Dynamics of the Transition to a Green Economy. (e) One-Parameter GHG Emission Policy With R&D-Based Growth. (f) Pollution, Public Health Care, and Life Expectancy when Inequality Matters. (g) Uncertain Climate Policy and the Green Paradox. (h) Uniqueness Versus Indeterminacy in the Tragedy of the Commons - A "Geometric" Approach. Chapter 2: OPTIMAL EXTRACTION OF RESOURCES: (j) Dynamic Behavior of Oil Importers and Exporters Under Uncertainty. (k) Robust Control of a Spatially Distributed Commercial Fishery. (l) On the Effect of Resource Exploitation on Growth: Domestic Innovation vs. Technological Diffusion Through Trade. (m) Forest Management and Biodiversity in Size-Structured Forests Under Climate Change. (n) Carbon Taxes and Comparison of Trading Regimes in Fossil Fuels. (o) Landowning, Status and Population Growth. (p) Optimal Harvesting of Size-Structured Biological Populations.

  5. Optimal policy of energy innovation in developing countries: Development of solar PV in Iran

    International Nuclear Information System (INIS)

    Shafiei, Ehsan; Saboohi, Yadollah; Ghofrani, Mohammad B.

    2009-01-01

The purpose of this study is to apply managerial economics and methods of decision analysis to study the optimal pattern of innovation activities for the development of new energy technologies in developing countries. For this purpose, a model of energy research and development (R&D) planning is developed and then linked to a bottom-up energy-systems model. The set of interlinked models provides a comprehensive analytical tool for the assessment of energy technologies and innovation planning, taking into account the specific conditions of developing countries. An energy-system model is used as a tool for the assessment and prioritization of new energy technologies. Based on the results of the technology assessment model, the optimal allocation of R&D resources to new energy technologies is estimated with the help of the R&D planning model. The R&D planning model is based on maximization of the total net present value of resulting R&D benefits, taking into account the dynamics of technological progress, knowledge and experience spillovers from advanced economies, technology adoption, and R&D constraints. Application of the set of interlinked models is explained through the analysis of the development of solar PV in the Iranian electricity supply system, and some important policy insights are then drawn

  6. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.
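The maximum-information, minimum-redundancy criterion in this record can be sketched with a greedy joint-entropy heuristic: at each step, add the candidate cross section whose measurements increase the joint entropy of the chosen set the most, so duplicated (redundant) sections contribute nothing. This is an illustrative simplification; the paper's exact criterion and its bridge-specific handling are richer.

```python
# Greedy max-information / min-redundancy selection via joint entropy.
import math
from collections import Counter

def discretize(series, bins=8):
    """Quantize a series into integer labels for entropy estimation."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    return tuple(min(int((x - lo) / span * bins), bins - 1) for x in series)

def joint_entropy(labelled):
    """Entropy (bits) of the joint symbol formed by several label tuples."""
    n = len(labelled[0])
    counts = Counter(zip(*labelled))
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def greedy_select(candidates, k):
    """Pick k series maximizing joint entropy: high information content,
    low redundancy with what is already chosen."""
    labelled = {name: discretize(s) for name, s in candidates.items()}
    chosen = []
    while len(chosen) < k:
        best = max((n for n in labelled if n not in chosen),
                   key=lambda n: joint_entropy(
                       [labelled[m] for m in chosen] + [labelled[n]]))
        chosen.append(best)
    return chosen
```

Given two identical candidate series and one independent one, the greedy pass takes one copy and the independent series, skipping the redundant duplicate, which is exactly the behavior wanted when nearby cross sections carry the same hydraulic information.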

  7. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  8. National Policy on Nuclear Fuel Cycle

    International Nuclear Information System (INIS)

    Soedyartomo, S.

    1996-01-01

    National policy on the nuclear fuel cycle is aimed at attaining the expected condition, i.e. being able to optimally support the national energy policy and other related Government policies, taking into account the current domestic nuclear fuel cycle condition and the trend of international nuclear fuel cycle development, as well as the national strengths, weaknesses, threats and opportunities in the field of energy. This policy has to be followed by a strategy for its accomplishment, covering the optimization of domestic efforts, cooperation with other countries, and/or the purchase of licences. These policy and strategy have to be broken down into various nuclear fuel cycle programmes, covering basically the assessment of the whole cycle, research and development on the whole cycle excluding enrichment and reprocessing capabilities usable for weapons, as well as programmes for the stepwise industrialization of the fuel cycle, commencing with the middle part of the cycle and ending with the back end of the cycle.

  9. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods to control work exposure have not evolved at the same pace, remaining one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated; those which gave the highest yield and feasibility were selected. The method involves: 1) sample concentration (coprecipitation); 2) plutonium purification; and 3) source preparation by electrodeposition. In the coprecipitation phase, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the hydroxylamine elution solution (concentration and volume), and column length and recycling were evaluated. Finally, in the electrodeposition phase, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 °C with 2 ml of CaHPO4), 71% for ion-exchange (AG 1x8 Cl- resin, 100-200 mesh, hydroxylamine 0.1 N in HCl 0.2 N as eluent, column length between 4.5 and 8 cm), and 93% for electrodeposition (H2SO4-NH4OH, 100 minutes, pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)

  10. Performance optimization of queueing systems with perturbation realization

    KAUST Repository

    Xia, Li

    2012-04-01

    After the intensive study of queueing theory in the past decades, many excellent results in performance analysis have been obtained, and successful examples abound. However, exploring special features of queueing systems directly in performance optimization still seems to be a territory not very well cultivated. Recent progress in perturbation analysis (PA) and sensitivity-based optimization provides a new perspective on performance optimization of queueing systems. PA utilizes the structural information of queueing systems to efficiently extract the performance sensitivity information from a sample path of the system. This paper gives a brief review of PA and performance optimization of queueing systems, focusing on a fundamental concept called perturbation realization factors, which captures the special dynamic features of a queueing system. With the perturbation realization factors as building blocks, the performance derivative formula and the performance difference formula can be obtained. From performance derivatives, gradient-based optimization can be derived, while from the performance difference, policy iteration and optimality equations can be derived. These two fundamental formulas provide a foundation for performance optimization of queueing systems from a sensitivity-based point of view. We hope this survey may provide some inspiration on this promising research topic. © 2011 Elsevier B.V. All rights reserved.

  11. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Full Text Available Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards into toxic mussel extracts. The results were within a z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  12. Policy Development Fosters Collaborative Practice

    DEFF Research Database (Denmark)

    Meyer, Daniel M; Kaste, Linda M; Lituri, Kathy M

    2016-01-01

    This article provides an example of interprofessional collaboration for policy development regarding environmental global health vis-à-vis the Minamata Convention on Mercury. It presents an overview of mercury and mercury-related environmental health issues; public policy processes and stakeholders; and specifics including organized dentistry's efforts to create global policy to restrict environmental contamination by mercury. Dentistry must participate in interprofessional collaborations and build on such experiences to be optimally placed for ongoing interprofessional policy development. Current areas requiring dental engagement for interprofessional policy development include education, disaster response, HPV vaccination, pain management, research priorities, and antibiotic resistance.

  13. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Modern technologies such as the Internet, corporate intranets, data warehouses, ERPs, satellites, digital sensors, embedded systems and mobile networks are all generating such a massive amount of data that it is getting very difficult to analyze and understand it all, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of the computing resources, sampling can often provide the same level of accuracy. The sampling process requires much care because many factors are involved in determining the correct sample size. The approach proposed in this paper tries to solve this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the sufficient sample size, which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
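
    The record does not reproduce the paper's statistical formula. As a hedged sketch, Cochran's classic sample-size formula with a finite-population correction is one standard way such a "sufficient sample size" can be computed (the confidence level, margin of error, and proportion below are assumptions, not the paper's parameters):

```python
import math

def sufficient_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with a finite-population correction: a standard
    stand-in for the paper's unspecified sample-size formula."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    # The correction shrinks the requirement for small populations.
    return math.ceil(n0 / (1 + (n0 - 1) / population))

full_dataset = sufficient_sample_size(1_000_000)
small_dataset = sufficient_sample_size(2_000)
```

    The required size grows only modestly with population, which is why sampling pays off so well on huge datasets.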

  14. Political economy constraints on carbon pricing policies: What are the implications for economic efficiency, environmental efficacy, and climate policy design?

    International Nuclear Information System (INIS)

    Jenkins, Jesse D.

    2014-01-01

    Economists traditionally view a Pigouvian fee on carbon dioxide and other greenhouse gas emissions, either via carbon taxes or emissions caps and permit trading (“cap-and-trade”), as the economically optimal or “first-best” policy to address climate change-related externalities. Yet several political economy factors can severely constrain the implementation of these carbon pricing policies, including opposition from industrial sectors with a concentration of assets that would lose considerable value under such policies; the collective action nature of climate mitigation efforts; principal-agent failures; and a low willingness-to-pay for climate mitigation by citizens. Real-world implementations of carbon pricing policies can thus fall short of the economically optimal outcomes envisioned in theory. Consistent with the general theory of the second best, the presence of binding political economy constraints opens a significant “opportunity space” for the design of creative climate policy instruments with superior political feasibility, economic efficiency, and environmental efficacy relative to the constrained implementation of carbon pricing policies. This paper presents theoretical political economy frameworks relevant to climate policy design and provides corroborating evidence from the United States context. It concludes with a series of implications for climate policy making and argues for the creative pursuit of a mix of second-best policy instruments. - Highlights: • Political economy constraints can bind carbon pricing policies. • These constraints can prevent implementation of theoretically optimal carbon prices. • U.S. household willingness-to-pay for climate policy likely falls in the range of $80–$200 per year. • U.S. carbon prices may be politically constrained to as low as $2–$8 per ton of CO2. • An opportunity space exists for improvements in climate policy design and outcomes.

  15. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    Science.gov (United States)

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional survey descriptive design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. Findings underscore the importance for teachers and other school staff to provide health education. Results can be used as the basis for education to improve optimism, self-esteem, social supports and, thus, depression symptoms of teens.

  16. OPTIMAL TRAINING POLICY FOR PROMOTION - STOCHASTIC MODELS OF MANPOWER SYSTEMS

    Directory of Open Access Journals (Sweden)

    V.S.S. Yadavalli

    2012-01-01

    Full Text Available In this paper, the optimal planning of manpower training programmes in a manpower system with two grades is discussed. The planning of manpower training within a given organization involves a trade-off between training costs and expected return. These planning problems are examined through models that reflect the random nature of manpower movement in two grades. To be specific, the system consists of two grades, grade 1 and grade 2. Any number of persons in grade 2 can be sent for training and after the completion of training, they will stay in grade 2 and will be given promotion as and when vacancies arise in grade 1. Vacancies arise in grade 1 only by wastage. A person in grade 1 can leave the system with probability p. Vacancies are filled with persons in grade 2 who have completed the training. It is assumed that there is a perfect passing rate and that the sizes of both grades are fixed. Assuming that the planning horizon is finite and is T, the underlying stochastic process is identified as a finite state Markov chain and using dynamic programming, a policy is evolved to determine how many persons should be sent for training at any time k so as to minimize the total expected cost for the entire planning period T.
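
    The dynamic-programming idea in this record can be sketched with a much-simplified finite-horizon backward induction (a single pool of trained staff with hypothetical costs and vacancy probability; these numbers and the one-vacancy-per-period simplification are assumptions, not the paper's two-grade model):

```python
# Toy backward induction for a training policy (all numbers are illustrative
# assumptions, not the paper's model). State s = trained persons waiting for
# promotion; action a = how many to send for training now.
T = 5                  # planning horizon
MAX_POOL = 3           # largest pool of trained persons tracked
TRAIN_COST = 4.0       # cost per person sent for training
SHORTAGE_COST = 10.0   # cost of a vacancy with nobody trained to fill it
P_VACANCY = 0.3        # chance a grade-1 vacancy opens in any period

def solve():
    # V[k][s]: minimal expected cost from period k onward with pool s.
    V = [[0.0] * (MAX_POOL + 1) for _ in range(T + 1)]
    policy = [[0] * (MAX_POOL + 1) for _ in range(T)]
    for k in range(T - 1, -1, -1):
        for s in range(MAX_POOL + 1):
            best_cost, best_a = float("inf"), 0
            for a in range(MAX_POOL - s + 1):
                pool = s + a
                # A vacancy is filled from the pool when possible; otherwise
                # the shortage cost is incurred.
                if_vacancy = V[k + 1][pool - 1] if pool else V[k + 1][0] + SHORTAGE_COST
                cost = (TRAIN_COST * a + P_VACANCY * if_vacancy
                        + (1 - P_VACANCY) * V[k + 1][pool])
                if cost < best_cost:
                    best_cost, best_a = cost, a
            V[k][s], policy[k][s] = best_cost, best_a
    return V, policy

V, policy = solve()
```

    As in the paper, the policy trades training cost against expected shortage cost, and a larger pool of trained staff never increases the expected cost-to-go.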

  17. Optimization of a method based on micro-matrix solid-phase dispersion (micro-MSPD) for the determination of PCBs in mussel samples

    Directory of Open Access Journals (Sweden)

    Nieves Carro

    2017-03-01

    Full Text Available This paper reports the development and optimization of micro-matrix solid-phase dispersion (micro-MSPD) of nine polychlorinated biphenyls (PCBs) in mussel samples (Mytilus galloprovincialis) by using a two-level factorial design. Four variables (amount of sample, anhydrous sodium sulphate, Florisil and solvent volume) were considered as factors in the optimization process. The results suggested that only the interaction between the amount of anhydrous sodium sulphate and the solvent volume was statistically significant for the overall recovery of a trichlorinated compound, CB 28. Generally, most of the considered species exhibited similar behaviour: the sample and Florisil amounts had a positive effect on PCB extraction, while the solvent volume and sulphate amount had a negative effect. The analytical determination and confirmation of PCBs were carried out by GC-ECD and GC-MS/MS, respectively. The method was validated, showing satisfactory precision and accuracy with RSD values below 6% and recoveries between 81 and 116% for all congeners. The optimized method was applied to the extraction of real mussel samples from two Galician Rías.

  18. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    International Nuclear Information System (INIS)

    Carpy, R; Picker, G; Amann, B; Ranebo, H; Vincent-Bonnieu, S; Minster, O; Winter, J; Dettmann, J; Castiglione, L; Höhler, R; Langevin, D

    2011-01-01

    At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of 'wet foams' have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities for the Experiment Container FOAM-C, for operation in the ISS Fluid Science Laboratory. The sample cell supports multiple observation methods, such as diffusing-wave and diffuse-transmission spectrometry, time-resolved correlation spectroscopy and microscope observation, all applied in the cell within a relatively small experiment volume. These units will be on-orbit replaceable sets that will allow the processing of multiple sample compositions (more than 40).

  19. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of the optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. Particularly, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with SFR first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided, allowing the calculation of the IGIMF and OSGIMF as functions of the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
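
    The optimal-sampling idea, discretizing the IMF deterministically so that each star occupies exactly one unit of the number integral, can be sketched for a simplified single-slope power-law IMF (the slope, mass limits and star count below are assumptions for illustration, not the GalIMF implementation):

```python
# Hedged sketch of optimal sampling: cut a power-law IMF xi(m) ∝ m**-ALPHA
# into N_STARS segments of equal number content (one star each) and assign
# each star the mean mass of its segment. Deterministic, no random draws.
ALPHA = 2.35               # Salpeter-like slope (assumption)
M_LO, M_HI = 0.08, 120.0   # stellar mass limits in solar masses (assumption)
N_STARS = 100

def number_integral(a, b):
    e = 1.0 - ALPHA
    return (b ** e - a ** e) / e

def mass_integral(a, b):
    e = 2.0 - ALPHA
    return (b ** e - a ** e) / e

# Normalize so the IMF contains exactly N_STARS stars, then place segment
# edges by inverting the cumulative number integral.
k = N_STARS / number_integral(M_LO, M_HI)
e = 1.0 - ALPHA
edges = [M_LO]
for i in range(1, N_STARS + 1):
    edges.append((M_LO ** e + e * i / k) ** (1.0 / e))
stars = [k * mass_integral(a, b) for a, b in zip(edges, edges[1:])]
```

    Because the discretization is deterministic, repeated "samplings" give identical star masses, which is the source of the reduced scatter the record describes.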

  20. Analysis and Optimization of Distributed Real-Time Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2006-01-01

    and scheduling policies. In this context, the task of designing such systems is becoming increasingly difficult. The success of new adequate design methods depends on the availability of efficient analysis as well as optimization techniques. In this paper, we present both analysis and optimization approaches...... characteristic to this class of systems: mapping of functionality, the optimization of the access to the communication channel, and the assignment of scheduling policies to processes. Optimization heuristics aiming at producing a schedulable system, with a given amount of resources, are presented....

  1. Optimal Network QoS over the Internet of Vehicles for E-Health Applications

    Directory of Open Access Journals (Sweden)

    Di Lin

    2016-01-01

    Full Text Available Wireless technologies are pervasive in supporting ubiquitous healthcare applications. However, a critical issue of using wireless communications in a healthcare scenario is the electromagnetic interference (EMI) caused by RF transmission: a high level of EMI may lead to a critical malfunction of medical sensors. In consideration of EMI on medical sensors, we study the optimization of quality of service (QoS) within the whole Internet of vehicles for E-health and propose a novel model to optimize the QoS by allocating the transmit power of each user. Our results show that the optimal power control policy depends on the objective of the optimization problem: a greedy policy is optimal to maximize the sum of the QoS of all users, whereas a fair policy is optimal to maximize the product of the QoS of all users. Algorithms are given to derive the optimal policies, and numerical results of optimizing QoS are presented for both objectives and QoS constraints.
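
    The greedy-versus-fair distinction can be illustrated with a hedged two-user toy model (a log-rate QoS and the channel gains below are assumptions, not the paper's formulation): maximizing the sum of QoS pushes power toward the strong user, while maximizing the product yields a more even QoS split.

```python
import math

# Toy two-user power allocation. QoS_i = log2(1 + g_i * p_i);
# user 1 has the better channel gain. All constants are assumptions.
G = (2.0, 0.5)
P_TOTAL = 10.0

def qos(p):
    return [math.log2(1 + g * x) for g, x in zip(G, p)]

def best_split(objective, steps=10_000):
    # Brute-force search over splits of the power budget between two users.
    best_p, best_val = None, -math.inf
    for i in range(steps + 1):
        p = (P_TOTAL * i / steps, P_TOTAL * (steps - i) / steps)
        val = objective(qos(p))
        if val > best_val:
            best_p, best_val = p, val
    return best_p

sum_p = best_split(lambda q: q[0] + q[1])    # "greedy" objective: sum of QoS
prod_p = best_split(lambda q: q[0] * q[1])   # "fair" objective: product of QoS
```

    With these gains, the sum objective allocates more power to the strong user, and the product objective leaves a smaller gap between the two users' QoS values.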

  2. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present a further improvement, the Triangular Geometrized RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.

  3. An optimal replacement policy for a repairable system based on its repairman having vacations

    Energy Technology Data Exchange (ETDEWEB)

    Yuan Li [School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai 200092 (China); Xu Jian, E-mail: xujian@tongji.edu.c [School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai 200092 (China)

    2011-07-15

    This paper studies a cold standby repairable system with two different components and one repairman who can take multiple vacations. If a component fails while the repairman is on vacation, the failed component waits for repair until the repairman is available. In the system, component 1 is assumed to have priority in use. After repair, component 1 follows a geometric-process repair, while component 2 can be repaired as good as new after each failure. Under these assumptions, a replacement policy N based on the number of failures of component 1 is studied: the system is replaced when the failure count of component 1 reaches N. The explicit expression for the expected cost rate is given, from which the optimal replacement time N* is determined. Finally, a numerical example is given to illustrate the theoretical results of the model.
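
    The expected-cost-rate search for N* can be illustrated with a hedged renewal-reward sketch (a deterministic geometric decay of expected uptimes stands in for the full stochastic model; all constants are assumptions, not the paper's example):

```python
# Toy search for the optimal replacement threshold N*. Component 1's k-th
# expected uptime follows a geometric process: MU / A**(k - 1) with A > 1,
# so uptimes shrink after each repair. Illustrative constants only.
MU, A = 100.0, 1.2          # first expected uptime; degradation ratio
REPAIR_COST = 20.0          # expected cost of one repair
REPLACE_COST = 300.0        # cost of replacing the whole system
REPAIR_TIME = 5.0           # expected downtime per repair

def cost_rate(n):
    # Renewal-reward ratio: expected cycle cost over expected cycle length
    # when the system is replaced at the n-th failure.
    uptime = sum(MU / A ** (k - 1) for k in range(1, n + 1))
    downtime = REPAIR_TIME * (n - 1)
    return (REPAIR_COST * (n - 1) + REPLACE_COST) / (uptime + downtime)

n_star = min(range(1, 50), key=cost_rate)
```

    Early replacement wastes the replacement cost over short cycles, while late replacement accumulates ever-shorter uptimes; the cost rate is minimized in between.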

  4. An Optimization Model for Expired Drug Recycling Logistics Networks and Government Subsidy Policy Design Based on Tri-level Programming

    OpenAIRE

    Huang, Hui; Li, Yuyu; Huang, Bo; Pi, Xing

    2015-01-01

    In order to recycle and dispose of all people’s expired drugs, the government should design a subsidy policy to stimulate users to return their expired drugs, and drug-stores should take the responsibility of recycling expired drugs, in other words, to be recycling stations. For this purpose it is necessary for the government to select the right recycling stations and treatment stations to optimize the expired drug recycling logistics network and minimize the total costs of recycling and disp...

  5. AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange

    International Nuclear Information System (INIS)

    Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua

    2009-01-01

    The Cartesian-sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by use of the radial HNCO experiment, which employs optimized sampling angles. The significant practical limitation of three-dimensional data, namely the large data storage and processing requirements, is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selective frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of the measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach.

  6. 'Green' preferences as regulatory policy instrument

    International Nuclear Information System (INIS)

    Brennan, Timothy J.

    2006-01-01

    We examine here the suggestion that if consumers in sufficient numbers are willing to pay the premium to have power generated using low-emission technologies, tax or permit policies become less necessary or stringent. While there are implementation difficulties with this proposal, our purpose is more fundamental: Can economics make sense of using preferences as a regulatory instrument? If 'green' preferences are exogenously given, to what extent can or should they be regarded as a substitute for other policies? Even with 'green' preferences, production and consumption of polluting goods continue to impose social costs not borne in the market. Moreover, if green preferences are regarded as a policy instrument, the 'no policy' baseline would require a problematic specification of counterfactual 'non-green' preferences. Viewing green preferences as a regulatory policy instrument is conceptually sensible if the benchmark for optimal emissions is based on value judgments apart from the preferences consumers happen to have. If so, optimal environmental protection would be defined by reference to ethical theory, or, even less favorably, by prescriptions from policy advocates who give their own preferences great weight while giving those of the public at large (and the costs they bear) very little consideration. (author)

  7. Pavement maintenance optimization model using Markov Decision Processes

    Science.gov (United States)

    Mandiartha, P.; Duffield, C. F.; Razelan, I. S. b. M.; Ismail, A. b. H.

    2017-09-01

    This paper presents an optimization model for the selection of pavement maintenance interventions using the theory of Markov Decision Processes (MDP). Some particular characteristics of the MDP developed in this paper distinguish it from similar studies and optimization models intended for pavement maintenance policy development. These unique characteristics include the direct inclusion of constraints in the formulation of the MDP, the use of an average-cost MDP, and a policy development process based on the dual linear programming solution. The limited information and discussion available on these matters in stochastic optimization models for road network management motivates this study. This paper uses a data set acquired from the road authority of the state of Victoria, Australia, to test the model and recommends steps in the computation of an MDP-based stochastic optimization model, leading to the development of an optimum pavement maintenance policy.
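
    The MDP policy-development idea can be sketched on a toy pavement model. The paper uses an average-cost MDP solved via its dual linear program; for a self-contained illustration the sketch below uses a discounted criterion and policy iteration instead, and every number is an assumption:

```python
# Minimal policy iteration on a toy pavement model (illustrative numbers).
# States: 0=Good, 1=Fair, 2=Poor. Actions: 0=do nothing, 1=maintain.
P = {  # P[action][state] = transition probabilities over next states
    0: [[0.7, 0.3, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]],
    1: [[0.9, 0.1, 0.0], [0.7, 0.3, 0.0], [0.3, 0.6, 0.1]],
}
COST = {0: [0.0, 5.0, 20.0], 1: [3.0, 8.0, 25.0]}  # COST[action][state]
GAMMA = 0.95

def policy_iteration():
    policy = [0, 0, 0]
    while True:
        # Evaluate the current policy by value iteration (tiny model).
        V = [0.0, 0.0, 0.0]
        for _ in range(2000):
            V = [COST[policy[s]][s]
                 + GAMMA * sum(pr * V[t] for t, pr in enumerate(P[policy[s]][s]))
                 for s in range(3)]
        # Greedy improvement step.
        new_policy = []
        for s in range(3):
            q = {a: COST[a][s] + GAMMA * sum(pr * V[t] for t, pr in enumerate(P[a][s]))
                 for a in (0, 1)}
            new_policy.append(min(q, key=q.get))
        if new_policy == policy:
            return policy, V
        policy = new_policy

policy, V = policy_iteration()
```

    With these costs, the converged policy maintains Fair and Poor pavement and leaves Good pavement alone, the kind of threshold structure such models typically produce.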

  8. Optimizing Plutonium stock management

    International Nuclear Information System (INIS)

    Niquil, Y.; Guillot, J.

    1997-01-01

    Plutonium from spent fuel reprocessing is reused in new MOX assemblies. Since plutonium's isotopic composition deteriorates with time, it is necessary to optimize plutonium stock management over a long period, to guarantee safe procurement and contribute to a nuclear fuel cycle policy at the lowest cost. This optimization is provided by the prototype software POMAR.

  9. Optimal sampling plan for clean development mechanism lighting projects with lamp population decay

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2014-01-01

    Highlights: • A metering cost minimisation model incorporating lamp population decay is built to optimise the sampling plans of CDM lighting projects. • The model minimises the total metering cost and optimises the annual sample size over the crediting period. • The required 90/10 criterion sampling accuracy is satisfied for each CDM monitoring report. - Abstract: This paper proposes a metering cost minimisation model that minimises metering cost under the constraint of the sampling accuracy requirement for clean development mechanism (CDM) energy efficiency (EE) lighting projects. Small-scale (SSC) CDM EE lighting projects usually expect a crediting period of 10 years, over which the lighting population decays. The SSC CDM sampling guideline requires that the monitored key parameters for the quantification of carbon emission reductions satisfy a sampling accuracy of 90% confidence and 10% precision, known as the 90/10 criterion. For the existing registered CDM lighting projects, sample sizes are decided either by professional judgment or by rule of thumb, without any optimisation. Lighting samples are randomly selected and their energy consumption is monitored continuously by power meters. In this study, the sample size determination problem is formulated as a metering cost minimisation model by incorporating the linear lighting decay model given in the CDM guideline AMS-II.J. The 90/10 criterion is formulated as a constraint of the metering cost minimisation problem. Optimal solutions to the problem minimise the metering cost whilst satisfying the 90/10 criterion for each reporting period. The proposed metering cost minimisation model is applicable to other CDM lighting projects with different population decay characteristics as well.
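
    A hedged sketch of the 90/10 constraint under linear lamp-population decay (the decay rate and coefficient of variation below are assumed values, and the finite-population correction is a common textbook form, not the paper's model):

```python
import math

# Annual 90/10 sample sizes for a linearly decaying lamp population.
Z_90 = 1.645       # 90% confidence
PRECISION = 0.10   # 10% relative precision
CV = 0.5           # assumed coefficient of variation of lamp energy use

def sample_size_9010(population):
    n0 = (Z_90 * CV / PRECISION) ** 2
    # Finite-population correction for the shrinking lamp population.
    return math.ceil(n0 / (1 + n0 / population))

initial_pop = 100_000
annual_decay = 0.08  # linear decay fraction per year (assumption)
plan = []
for year in range(10):
    surviving = round(initial_pop * (1 - annual_decay * year))
    plan.append((year, surviving, sample_size_9010(surviving)))
```

    With these values the 90/10 sample size stays essentially flat as the population decays, which is why the optimisation target in the paper is the total metering cost rather than the bare counts.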

  10. Exponential Lower Bounds For Policy Iteration

    OpenAIRE

    Fearnley, John

    2010-01-01

    We study policy iteration for infinite-horizon Markov decision processes. It has recently been shown that policy iteration style algorithms have exponential lower bounds in a two-player game setting. We extend these lower bounds to Markov decision processes with the total-reward and average-reward optimality criteria.

  11. Biocapacity optimization in regional planning

    Science.gov (United States)

    Guo, Jianjun; Yue, Dongxia; Li, Kai; Hui, Cang

    2017-01-01

    Ecological overshoot has been accelerating across the globe. Optimizing biocapacity has become key to resolving the overshoot of ecological demand in regional sustainable development. However, most literature has focused on reducing the ecological footprint and has ignored the potential for spatial optimization of biocapacity through regional land use planning. Here we develop a spatial probability model and present four scenarios for optimizing the biocapacity of a river basin in Northwest China. The potential for enhanced biocapacity and its effects on ecological overshoot and water consumption in the region were explored. Two scenarios with no restrictions on croplands and water use reduced the overshoot by 29 to 53%, while the two scenarios that did not allow croplands and water use to increase worsened the overshoot by 11 to 15%. More spatially flexible land use transition rules led to a higher magnitude of change after optimization. However, biocapacity optimization required a large amount of additional water resources, placing considerable pressure on the already water-scarce socio-ecological system. Our results highlight the potential for policy makers to manage and optimize regional land use to address ecological overshoot. Investigating the feasibility of such spatial optimization complies with forward-looking policies for sustainable development and deserves further attention.

  12. Nationwide survey of policies and practices related to capillary blood sampling in medical laboratories in Croatia.

    Science.gov (United States)

    Krleza, Jasna Lenicek

    2014-01-01

    Capillary sampling is increasingly used to obtain blood for laboratory tests in volumes as small as necessary and as non-invasively as possible. Whether capillary blood sampling is also frequent in Croatia, and whether it is performed according to international laboratory standards is unclear. All medical laboratories that participate in the Croatian National External Quality Assessment Program (N = 204) were surveyed on-line to collect information about the laboratory's parent institution, patient population, types and frequencies of laboratory tests based on capillary blood samples, choice of reference intervals, and policies and procedures specifically related to capillary sampling. Sampling practices were compared with guidelines from the Clinical and Laboratory Standards Institute (CLSI) and the World Health Organization (WHO). Of the 204 laboratories surveyed, 174 (85%) responded with complete questionnaires. Among the 174 respondents, 155 (89%) reported that they routinely perform capillary sampling, which is carried out by laboratory staff in 118 laboratories (76%). Nearly half of respondent laboratories (48%) do not have a written protocol including order of draw for multiple sampling. A single puncture site is used to provide capillary blood for up to two samples at 43% of laboratories that occasionally or regularly perform such sampling. Most respondents (88%) never perform arterialisation prior to capillary blood sampling. Capillary blood sampling is highly prevalent in Croatia across different types of clinical facilities and patient populations. Capillary sampling procedures are not standardised in the country, and the rate of laboratory compliance with CLSI and WHO guidelines is low.

  13. Shipment Consolidation Policy under Uncertainty of Customer Order for Sustainable Supply Chain Management

    Directory of Open Access Journals (Sweden)

    Kyunghoon Kang

    2017-09-01

    Full Text Available With increasing concern over the environment, shipment consolidation has become one of the main initiatives for reducing CO2 emissions and transportation costs among logistics service providers. The increased delivery time caused by shipment consolidation, however, may lead customers to cancel orders. Thus, order cancellation should be considered as a source of order uncertainty when determining the optimal shipment consolidation policy. We develop mathematical models for quantity-based and time-based policies and obtain optimality properties for the models. Efficient algorithms using these optimality properties are provided to compute the optimal parameters for ordering and shipment decisions. To compare the performance of the quantity-based policy with that of the time-based policy, extensive numerical experiments are conducted and the total costs are compared.
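The two policy families compared in this record can be illustrated on a toy instance: a quantity-based policy dispatches once a fixed number of orders has accumulated, while a time-based policy dispatches at fixed epochs. A sketch with made-up arrival times and cost parameters (this is not the paper's model, which also incorporates order cancellation):

```python
# Toy comparison of quantity-based vs time-based shipment consolidation.
# Each dispatch costs K; every waiting order incurs holding cost h per
# unit time. All parameters are illustrative.
K, h = 100.0, 1.0
arrivals = [1, 2, 4, 5, 7, 8, 10, 11, 13, 14]  # order arrival times

def quantity_policy(arrivals, q):
    """Dispatch as soon as q orders have accumulated."""
    cost, batch = 0.0, []
    for t in arrivals:
        batch.append(t)
        if len(batch) == q:
            cost += K + h * sum(t - s for s in batch)  # wait until dispatch at t
            batch = []
    return cost

def time_policy(arrivals, T):
    """Dispatch at fixed epochs T, 2T, 3T, ..."""
    cost, batch, next_dispatch = 0.0, [], T
    for t in arrivals:
        while t > next_dispatch:
            if batch:
                cost += K + h * sum(next_dispatch - s for s in batch)
                batch = []
            next_dispatch += T
        batch.append(t)
    if batch:
        cost += K + h * sum(next_dispatch - s for s in batch)
    return cost

print(quantity_policy(arrivals, q=5), time_policy(arrivals, T=7))
```

On this particular arrival stream the two policies happen to tie; the paper's numerical experiments explore when one dominates the other.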

  14. Replenishment policies for Empty Containers in an Inland Multi-depot System

    DEFF Research Database (Denmark)

    Dang, Quang-Vinh; Nielsen, Izabela Ewa; Yun, W. Y.

    2013-01-01

    The objective is to obtain the optimal policy that minimizes the expected total cost, consisting of inventory holding, overseas positioning, inland positioning and leasing costs. A simulation-based genetic algorithm is developed to find near-optimal policies. Numerical examples are given to demonstrate...

  15. Optimizing pricing and ordering strategies in a three-level supply chain under return policy

    Science.gov (United States)

    Noori-daryan, Mahsa; Taleizadeh, Ata Allah

    2018-03-01

    This paper develops an economic production quantity model in a three-echelon supply chain comprising a supplier, a manufacturer and a wholesaler, under two scenarios. In the first scenario, we consider a return contract between the outside supplier and the supplier and also between the manufacturer and the wholesaler; in the second, the return policy between the manufacturer and the wholesaler is not applied. Here, it is assumed that shortage is permitted and demand is price-sensitive. The principal goal of the research is to maximize the total profit of the chain by optimizing the order quantity of the supplier and the selling prices of the manufacturer and the wholesaler. A Nash equilibrium approach is considered among the chain members. Finally, a numerical example is presented to clarify the applicability of the introduced model and to compare the profit of the chain under the two scenarios.

  16. Interactions Among Insider Ownership, Dividend Policy, Debt Policy, Investment Decision, and Business Risk

    OpenAIRE

    F., Indri Erkaningrum

    2013-01-01

    The study of the interaction among insider ownership, dividend policy, debt policy, investment decision, and business risk is still being conducted. This research aims at investigating the influencing factors of insider ownership, dividend policy, debt policy, investment decision, business risk, and the interaction among insider ownership, dividend policy, debt policy, investment decision, and business risk. The samples of the research are 137 manufacturing companies listed in the Indonesia Stock Exchan...

  17. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study was to develop and optimize supercritical carbon dioxide assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. Response surface methodology was employed to optimize sample preparation, investigating the quantitative effects of the parameters operating pressure, temperature, modifier concentration and extraction time on the yield of wedelolactone using a Box-Behnken design. The wedelolactone content was determined using a validated HPLC method. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis and analyzed using appropriate statistical methods. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44%; and extraction time, 60 min. These optimum conditions gave a wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, in good agreement with the predicted value. Temperature and modifier concentration showed significant effects on the wedelolactone yield. Supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet-assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
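A Box-Behnken design for k factors places each pair of factors at the four (±1, ±1) corners while holding the remaining factors at their centre level, plus replicated centre runs. A sketch of generating the coded design (this generator is generic, not the authors' software; the centre-run count is an assumption):

```python
from itertools import combinations, product

def box_behnken(k, center_points=3):
    """Coded Box-Behnken design for k factors: for each pair of factors,
    all (+/-1, +/-1) sign combinations with the rest held at 0,
    followed by replicated centre runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a, b in product((-1, 1), repeat=2):
            run = [0] * k
            run[i], run[j] = a, b
            runs.append(run)
    runs.extend([[0] * k for _ in range(center_points)])
    return runs

# Four factors: pressure, temperature, modifier concentration, time.
design = box_behnken(4)
print(len(design))  # 6 factor pairs x 4 sign combinations + 3 centre runs = 27
```

Each coded run is then mapped to real factor levels (e.g. -1/0/+1 pressure levels) before the extraction experiments, and the yields are regressed on the second-order polynomial.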

  18. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    Science.gov (United States)

    Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.

    2011-12-01

    At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. Using this cell, different sample compositions of "wet foams" were optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development was performed in the frame of the breadboarding development activities for the Experiment Container FOAM-C, intended for operation in the Fluid Science Laboratory of the ISS. The sample cell supports multiple observation methods: Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy [1] and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume.

  19. The State Fiscal Policy: Determinants and Optimization of Financial Flows

    Directory of Open Access Journals (Sweden)

    Sitash Tetiana D.

    2017-03-01

    Full Text Available The article outlines the determinants of state fiscal policy at the present stage of global transformations. Using the principles of financial science, it is determined that regulation of financial flows within the fiscal sphere, namely the centralization and redistribution of GDP, which shapes the financial capacity of economic agents, is of key importance. It is emphasized that an urgent measure for improving the tax model is reconsidering the provision of fiscal incentives, which are used to stimulate the accumulation of capital, investment activity, innovation, the competitiveness of national products, the expansion of exports, and the level of employment. The necessity of applying instruments of fiscal regulation of financial flows is substantiated; such regulation should rest on institutional economics, which emphasizes the analysis of institutional changes, the evolution of institutions, and their impact on the behavior of participants in economic relations. At the same time, it is determined that the maximum effect of fiscal regulation of financial flows is ensured when fiscal instruments are aimed not only at achieving the target values of financial flow parameters but also at overcoming institutional deformations. It is determined that the optimal movement of financial flows creates favorable conditions for development, maintains financial balance in society, and achieves the necessary level of competitiveness of the national economy.

  20. Social Preferences and Labor Market Policy

    DEFF Research Database (Denmark)

    Filges, Trine; Kennes, John; Larsen, Birthe

    2006-01-01

    We find that the main features of labor policy across OECD countries can be explained by a simple general equilibrium search model with risk-neutral agents and a government that chooses policy to maximize a social welfare function. In equilibrium, policies are chosen to optimally redistribute income. The model also explains why countries that appear to pursue equity spend more on both active and passive labor market programs.

  1. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

    Graphical abstract: -- Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  2. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

    Graphical abstract: -- Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  3. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food, mates or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb, and sniffing rates, we find that sniffing but not LFPs differ between tracking and non-tracking conditions. Rats can track odours within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap and detection of reappearance of the trail in 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  4. Optimal Aquifer Pumping Policy to Reduce Contaminant Concentration

    Directory of Open Access Journals (Sweden)

    Ali Abaei

    2012-01-01

    Full Text Available Different sources of groundwater contamination lead to a non-uniform distribution of contaminant concentration in the aquifer. If elimination or containment of pollution sources is not possible, the distribution of contaminant concentrations can be modified to eliminate peak concentrations using an optimal pumping discharge plan. In the present investigation, the Visual MODFLOW model was used to simulate flow and transport in a hypothetical aquifer. A genetic algorithm (GA) was also applied to optimize the locations and pumping rates of wells in order to reduce peak contaminant concentrations in the aquifer.
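The GA loop described above can be sketched with a stand-in objective in place of the Visual MODFLOW simulation. The function, bounds, and GA parameters below are all hypothetical, chosen only to show selection, crossover, and mutation converging on a minimum:

```python
import random

random.seed(42)

# Hypothetical stand-in for the groundwater simulation: peak contaminant
# concentration as a smooth function of well position x and pumping rate q.
def peak_concentration(x, q):
    return (x - 3.0) ** 2 + 0.5 * (q - 20.0) ** 2 + 1.0

def genetic_algorithm(pop_size=30, generations=60):
    # Random initial population of (position, rate) candidates.
    pop = [(random.uniform(0, 10), random.uniform(0, 50))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: peak_concentration(*ind))
        survivors = pop[:pop_size // 2]              # selection
        children = []
        while len(children) < pop_size - len(survivors):
            (x1, q1), (x2, q2) = random.sample(survivors, 2)
            x, q = (x1 + x2) / 2, (q1 + q2) / 2      # crossover
            x += random.gauss(0, 0.1)                # mutation
            q += random.gauss(0, 0.5)
            children.append((x, q))
        pop = survivors + children
    return min(pop, key=lambda ind: peak_concentration(*ind))

x_best, q_best = genetic_algorithm()
```

In the paper's setup each fitness evaluation would instead run a MODFLOW flow-and-transport simulation, so population size and generation count become the dominant computational cost.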

  5. INTERACTIONS AMONG INSIDER OWNERSHIP, DIVIDEND POLICY, DEBT POLICY, INVESTMENT DECISION, AND BUSINESS RISK

    OpenAIRE

    F., Indri Erkaningrum

    2015-01-01

    The study of the interaction among insider ownership, dividend policy, debt policy, investment decision, and business risk is still being conducted. This research aims at investigating the influencing factors of insider ownership, dividend policy, debt policy, investment decision, business risk, and the interaction among insider ownership, dividend policy, debt policy, investment decision, and business risk. The samples of the research are 137 manufacturing companies listed in the Indonesia Stock Exchan...

  6. Integrated dynamic policy management methodology and system for strategic environmental assessment of golf course installation policy in Taiwan

    International Nuclear Information System (INIS)

    Chen, Ching-Ho; Liu, Wei-Lin; Liaw, Shu-Liang

    2011-01-01

    Strategic environmental assessment (SEA) focuses primarily on assessing how policies, plans, and programs (PPPs) influence the sustainability of the regions involved. However, the processes of assessing policies and of developing management strategies for pollution load and resource use are usually separate in the current SEA system. This study developed a policy management methodology to overcome the shortcomings of these separate processes. This work first devised a dynamic management framework using the methods of systems thinking, system dynamics, and Managing for Results (MFRs). Furthermore, a driving force-pressure-state-impact-response (DPSIR) indicator system was developed. The golf course installation policy was applied as a case study. Taiwan, the counties of Taiwan, and the golf courses within those individual counties were identified as a system, subsystems, and objects, respectively. This study identified an object-linked double-layer framework with multi-stage options to simultaneously quantify golf courses in each subsystem and determine ratios of abatement and allocation for pollution load and resource use of each golf course. The DPSIR indicator values for each item of each golf course in each subsystem are calculated based on the options taken in the two decision layers. The summation of indicator values for all items of all golf courses in all subsystems according to various options is defined as the sustainability value of the policy. An optimization model and a system (IDPMS) were developed to obtain the greatest sustainability value of the policy, while golf course quantity, human activity intensity, and total quantities of pollution load and resource use are simultaneously obtained.
A solution method based on enumeration of multiple bounds for objectives and constraints (EMBOC) was developed for the problem, which has 1.95 × 10^128 combinations of possible options, and found the optimal solution within ten minutes on a personal computer with a 3.0 GHz CPU.

  7. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    Science.gov (United States)

    Drusano, George L.

    1991-01-01

    The optimal sampling theory is evaluated in applications to studies related to the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with NONMEM (Sheiner et al., 1977) approach, in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both the single-dose and the multiple-dose environments. The ability to study real patients made it possible to show that there was a bimodal distribution in ciprofloxacin nonrenal clearance.

  8. Optimal sampling period of the digital control system for the nuclear power plant steam generator water level control

    International Nuclear Information System (INIS)

    Hur, Woo Sung; Seong, Poong Hyun

    1995-01-01

    A great effort has been made to improve nuclear plant control systems through digital technologies, and a long-term schedule for the control system upgrade has been prepared with an aim to implementation in the next generation of nuclear plants. In the case of a digital control system, it is important to decide the sampling period for analysis and design of the system, because the performance and the stability of a digital control system depend on the value of its sampling period. There is, however, currently no systematic method used universally for determining the sampling period of a digital control system. A traditional way to select the sampling frequency is to use 20 to 30 times the bandwidth of the analog control system that has the same configuration and parameters as the digital one. In this paper, a new method to select the sampling period is suggested which takes into account both the performance and the stability of the digital control system. Using Irving's model of the steam generator, the optimal sampling period of a hypothetical digital control system for steam generator level control is estimated and then verified in the digital control simulation system for Kori-2 nuclear power plant steam generator level control. Consequently, we conclude that the optimal sampling period of the digital control system for Kori-2 nuclear power plant steam generator level control is 1 second for all power ranges. 7 figs., 3 tabs., 8 refs. (Author)
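The 20-to-30-times-bandwidth rule of thumb mentioned above is a one-line calculation. A sketch with an assumed closed-loop bandwidth (the 0.05 Hz figure is illustrative, not taken from the Kori-2 study, though it is consistent with a ~1 s sampling period):

```python
def sampling_period_range(bandwidth_hz, low=20, high=30):
    """Shortest and longest sampling periods (s) from the rule of thumb:
    sample at 20 to 30 times the analog loop's bandwidth."""
    return 1.0 / (high * bandwidth_hz), 1.0 / (low * bandwidth_hz)

# A slow level-control loop with an assumed ~0.05 Hz closed-loop bandwidth
# tolerates sampling periods between roughly 0.67 s and 1 s:
t_min, t_max = sampling_period_range(0.05)
print(round(t_min, 3), round(t_max, 3))
```

The paper's contribution is to replace this heuristic with a criterion that checks performance and stability of the resulting discrete-time loop directly.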

  9. Sleep and optimism: A longitudinal study of bidirectional causal relationship and its mediating and moderating variables in a Chinese student sample.

    Science.gov (United States)

    Lau, Esther Yuet Ying; Hui, C Harry; Lam, Jasmine; Cheung, Shu-Fai

    2017-01-01

    While both sleep and optimism have been found to be predictive of well-being, few studies have examined their relationship with each other. Neither do we know much about the mediators and moderators of the relationship. This study investigated (1) the causal relationship between sleep quality and optimism in a college student sample, (2) the role of symptoms of depression, anxiety, and stress as mediators, and (3) how circadian preference might moderate the relationship. Internet survey data were collected from 1,684 full-time university students (67.6% female, mean age = 20.9 years, SD = 2.66) at three time-points, spanning about 19 months. Measures included the Attributional Style Questionnaire, the Pittsburgh Sleep Quality Index, the Composite Scale of Morningness, and the Depression Anxiety Stress Scale-21. Moderate correlations were found among sleep quality, depressive mood, stress symptoms, anxiety symptoms, and optimism. Cross-lagged analyses showed a bidirectional effect between optimism and sleep quality. Moreover, path analyses demonstrated that anxiety and stress symptoms partially mediated the influence of optimism on sleep quality, while depressive mood partially mediated the influence of sleep quality on optimism. In support of our hypothesis, sleep quality affects mood symptoms and optimism differently for different circadian preferences. Poor sleep results in depressive mood and thus pessimism in non-morning persons only. In contrast, the aggregated (direct and indirect) effects of optimism on sleep quality were invariant of circadian preference. Taken together, people who are pessimistic generally have more anxious mood and stress symptoms, which adversely affect sleep while morningness seems to have a specific protective effect countering the potential damage poor sleep has on optimism. In conclusion, optimism and sleep quality were both cause and effect of each other. Depressive mood partially explained the effect of sleep quality on optimism

  10. Nationwide survey of policies and practices related to capillary blood sampling in medical laboratories in Croatia

    Science.gov (United States)

    Krleza, Jasna Lenicek

    2014-01-01

    Introduction: Capillary sampling is increasingly used to obtain blood for laboratory tests in volumes as small as necessary and as non-invasively as possible. Whether capillary blood sampling is also frequent in Croatia, and whether it is performed according to international laboratory standards is unclear. Materials and methods: All medical laboratories that participate in the Croatian National External Quality Assessment Program (N = 204) were surveyed on-line to collect information about the laboratory’s parent institution, patient population, types and frequencies of laboratory tests based on capillary blood samples, choice of reference intervals, and policies and procedures specifically related to capillary sampling. Sampling practices were compared with guidelines from the Clinical and Laboratory Standards Institute (CLSI) and the World Health Organization (WHO). Results: Of the 204 laboratories surveyed, 174 (85%) responded with complete questionnaires. Among the 174 respondents, 155 (89%) reported that they routinely perform capillary sampling, which is carried out by laboratory staff in 118 laboratories (76%). Nearly half of respondent laboratories (48%) do not have a written protocol including order of draw for multiple sampling. A single puncture site is used to provide capillary blood for up to two samples at 43% of laboratories that occasionally or regularly perform such sampling. Most respondents (88%) never perform arterialisation prior to capillary blood sampling. Conclusions: Capillary blood sampling is highly prevalent in Croatia across different types of clinical facilities and patient populations. Capillary sampling procedures are not standardised in the country, and the rate of laboratory compliance with CLSI and WHO guidelines is low. PMID:25351353

  11. A linear programming model to optimize diets in environmental policy scenarios.

    Science.gov (United States)

    Moraes, L E; Wilen, J E; Robinson, P H; Fadel, J G

    2012-03-01

    The objective was to develop a linear programming model to formulate diets for dairy cattle when environmental policies are present and to examine effects of these policies on diet formulation and dairy cattle nitrogen and mineral excretions as well as methane emissions. The model was developed as a minimum cost diet model. Two types of environmental policies were examined: a tax and a constraint on methane emissions. A tax was incorporated to simulate a greenhouse gas emissions tax policy, and prices of carbon credits in the current carbon markets were attributed to the methane production variable. Three independent runs were made, using carbon dioxide equivalent prices of $5, $17, and $250/t. A constraint was incorporated into the model to simulate the second type of environmental policy, reducing methane emissions by predetermined amounts. The linear programming formulation of this second alternative enabled the calculation of marginal costs of reducing methane emissions. Methane emission and manure production by dairy cows were calculated according to published equations, and nitrogen and mineral excretions were calculated by mass conservation laws. Results were compared with respect to the values generated by a base least-cost model. Current prices of the carbon credit market did not appear onerous enough to have a substantive incentive effect in reducing methane emissions and altering diet costs of our hypothetical dairy herd. However, when emissions of methane were assumed to be reduced by 5, 10, and 13.5% from the base model, total diet costs increased by 5, 19.1, and 48.5%, respectively. Either these increased costs would be passed onto the consumer or dairy producers would go out of business. Nitrogen and potassium excretions were increased by 16.5 and 16.7% with a 13.5% reduction in methane emissions from the base model. Imposing methane restrictions would further increase the demand for grains and other human-edible crops, which is not a progressive
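The effect of a methane cap on a least-cost diet can be illustrated with a deliberately tiny two-feed version of the model. A brute-force grid search stands in for the LP solver, and all feed coefficients are invented for illustration (not the paper's data):

```python
# Tiny two-feed least-cost diet with a methane cap. Per kg of dry matter:
# (cost $, net energy Mcal, methane g). Coefficients are illustrative only.
feeds = {
    "corn":    (0.25, 2.0, 18.0),
    "alfalfa": (0.18, 1.3, 10.0),
}

def least_cost_diet(energy_req, methane_cap, step=0.01, max_kg=30.0):
    """Brute-force grid search standing in for the LP solver.
    Returns (cost, kg corn, kg alfalfa) of the cheapest feasible diet."""
    best = None
    for i in range(int(round(max_kg / step)) + 1):
        corn = i * step
        # Smallest alfalfa amount meeting the energy requirement.
        need = energy_req - feeds["corn"][1] * corn
        alfalfa = max(0.0, need / feeds["alfalfa"][1])
        methane = feeds["corn"][2] * corn + feeds["alfalfa"][2] * alfalfa
        if methane > methane_cap:
            continue
        cost = feeds["corn"][0] * corn + feeds["alfalfa"][0] * alfalfa
        if best is None or cost < best[0]:
            best = (cost, corn, alfalfa)
    return best

unconstrained = least_cost_diet(30.0, float("inf"))
capped = least_cost_diet(30.0, 250.0)
print(unconstrained, capped)
```

As in the paper, tightening the emission constraint forces the diet away from the cheapest feed mix and raises total cost; the cost difference per gram of methane avoided is exactly the marginal abatement cost the LP formulation yields as a dual value.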

  12. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG-sensor-based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased severalfold without affecting the system's operating performance in either local data acquisition or remote process control.

  13. Evaluation and optimization of feed-in tariffs

    International Nuclear Information System (INIS)

    Kim, Kyoung-Kuk; Lee, Chi-Guhn

    2012-01-01

    A feed-in tariff (FIT) program is an incentive plan that provides investors with a set payment for electricity generated from renewable energy sources and fed into the power grid. As of today, FIT programs are used by over 75 jurisdictions around the world and offer a number of design options to achieve policy goals. The objective of this paper is to propose a quantitative model by which a specific FIT program can be evaluated and hence optimized. We focus on the payoff structure, which has a direct impact on the net present value of the investment, and on other parameters relevant to investor reaction and electricity prices. We combine cost modeling, option valuation, and consumer choice so as to simulate the performance of a FIT program of interest in various scenarios. The model is used to define an optimization problem from the perspective of a policy maker who wants to increase the contribution of renewable energy to the overall energy supply while keeping the total burden on ratepayers under control. Numerical studies shed light on the interactions among design options, program parameters, and the performance of a FIT program. - Highlights: ► A quantitative model to evaluate and optimize feed-in tariff policies. ► Net present value of investment on renewable energy under a given feed-in tariff policy. ► Analysis of the interactions of policy options and relevant parameters. ► Recommendations for how to set policy options for feed-in tariff program.

  14. Contract portfolio optimization for a gasoline supply chain

    Science.gov (United States)

    Wang, Shanshan

    this model, we characterize a simple and easily implementable dynamic contract portfolio policy that would enable the company to dynamically rebalance its supply contract portfolio over time in anticipation of the future market conditions in each individual channel while satisfying the contractual obligations. The optimal policy is a state-dependent base-share contract portfolio policy characterized by a branded base-share level and an unbranded contract commitment combination, given as a function of the initial information state. Using real-world market data, we estimate the model parameters. We also apply an efficient modified policy iteration method to compute the optimal contract portfolio strategies and corresponding profit value. We present computational results in order to obtain insights into the structure of optimal policies, capture the value of the dynamic contract portfolio policy by comparing it with static policies, and illustrate the sensitivity of the optimal contract portfolio and corresponding profit value in terms of the different parameters. Considering the geographic dispersion of different market areas and the pipeline network together with the dynamic contract portfolio optimization problem, we formulate a forward-looking operational model, which could be used by gasoline suppliers for lower-level planning. Finally, we discuss the generalization of the framework to other problems and applications, as well as further research.

  15. Modeling and Optimization of M/G/1-Type Queueing Networks: An Efficient Sensitivity Analysis Approach

    Directory of Open Access Journals (Sweden)

    Liang Tang

    2010-01-01

    Full Text Available A mathematical model for M/G/1-type queueing networks with multiple user applications and limited resources is established. The goal is to develop a dynamic distributed algorithm for this model, which supports all data traffic as efficiently as possible and makes optimally fair decisions about how to minimize the network performance cost. An online policy gradient optimization algorithm based on a single sample path is provided to avoid suffering from a “curse of dimensionality”. The asymptotic convergence properties of this algorithm are proved. Numerical examples provide valuable insights for bridging mathematical theory with engineering practice.

  16. Neutron activation analysis for the optimal sampling and extraction of extractable organohalogens in human hair

    International Nuclear Information System (INIS)

    Zhang, H.; Chai, Z.F.; Sun, H.B.; Xu, H.F.

    2005-01-01

    Many persistent organohalogen compounds such as DDTs and polychlorinated biphenyls have caused serious environmental pollution problems that now affect all life. Neutron activation analysis (NAA) is a very convenient method for halogen analysis and is currently the only method available for simultaneously determining organic chlorine, bromine and iodine in one extract. Human hair is a convenient material for evaluating the burden of such compounds in the human body and can be easily collected from people over wide ranges of age, sex, residential area, eating habits and working environment. To effectively extract organohalogen compounds from human hair, in the present work the optimal Soxhlet-extraction times of extractable organohalogen (EOX) and extractable persistent organohalogen (EPOX) from hair of different lengths were studied by NAA. The results indicated that the optimal Soxhlet-extraction time of EOX and EPOX from human hair was 8-11 h, and the highest EOX and EPOX contents were observed in the hair-powder extract. The concentrations of both EOX and EPOX in different hair sections were in the order hair powder ≥ 2 mm > 5 mm, which indicates that milling hair samples into powder or cutting them into very short sections ensures not only a homogeneous hair sample but also the best extraction efficiency.

  17. Fuel demand elasticities for energy and environmental policies: Indian sample survey evidence

    International Nuclear Information System (INIS)

    Gundimeda, Haripriya; Koehlin, Gunnar

    2008-01-01

    India has been running large-scale interventions in the energy sector over the last decades. Still, there is a dearth of reliable and readily available price and income elasticities of demand on which to base these interventions, especially for domestic use of traditional fuels. This study estimates the linear approximate Almost Ideal Demand System (LA-AIDS) using micro data from more than 100,000 households sampled across India. The LA-AIDS model is expanded by specifying the intercept as a linear function of household characteristics. Marshallian and Hicksian price and expenditure elasticities of demand for four main fuels are estimated for both urban and rural areas by different income groups. These can be used to evaluate recent and current energy policies. The results can also be used for energy projections and carbon dioxide simulations given different growth rates for different segments of the Indian population. (author)

  18. Optimal Cash Management Under Uncertainty

    OpenAIRE

    Bensoussan, Alain; Chutani, Anshuman; Sethi, Suresh

    2009-01-01

    We solve an agent's optimization problem of meeting demands for cash over time with cash deposited in bank or invested in stock. The stock pays dividends and uncertain capital gains, and a commission is incurred in buying and selling of stock. We use a stochastic maximum principle to obtain explicitly the optimal transaction policy.

  19. International climate policy : consequences for shipping

    OpenAIRE

    Mæstad, Ottar; Evensen, Annika Jaersen; Mathiesen, Lars; Olsen, Kristian

    2000-01-01

    This report summarises the main results from the project Norwegian and international climate policy consequences for shipping. The aim of the project has been to shed light on how climate policies might affect shipping, both from the cost side and from the demand side. The project has been divided into three sub-projects, investigating the consequences of climate policies for 1. Optimal shipping operations and management 2. The competitiveness of shipping relative to land transport 3. The tra...

  20. The Theory of Optimal Taxation: What is the Policy Relevance?

    OpenAIRE

    Birch Sørensen, Peter

    2006-01-01

    The paper discusses the implications of optimal tax theory for the debates on uniform commodity taxation and neutral capital income taxation. While strong administrative and political economy arguments in favor of uniform and neutral taxation remain, recent advances in optimal tax theory suggest that the information needed to implement the differentiated taxation prescribed by optimal tax theory may be easier to obtain than previously believed. The paper also points to the strong similarity b...

  1. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    International Nuclear Information System (INIS)

    Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.

    2011-01-01

    The present paper reports the optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as a dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure, and recovery rates in the ranges of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.

  2. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    Energy Technology Data Exchange (ETDEWEB)

    Brum, Daniel M.; Lima, Claudio F. [Departamento de Quimica, Universidade Federal de Vicosa, A. Peter Henry Rolfs s/n, Vicosa/MG, 36570-000 (Brazil); Robaina, Nicolle F. [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil); Fonseca, Teresa Cristina O. [Petrobras, Cenpes/PDEDS/QM, Av. Horacio Macedo 950, Ilha do Fundao, Rio de Janeiro/RJ, 21941-915 (Brazil); Cassella, Ricardo J., E-mail: cassella@vm.uff.br [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil)

    2011-05-15

    The present paper reports the optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS) employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized in relation to the experimental conditions for the emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as a dependent variable. The three factors related to the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of solution. At optimum conditions, it was possible to obtain satisfactory results with an emulsion formed by mixing 4 mL of the sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure, and recovery rates in the ranges of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.

  3. Focusing light through dynamical samples using fast continuous wavefront optimization.

    Science.gov (United States)

    Blochet, B; Bourdieu, L; Gigan, S

    2017-12-01

    We describe a fast continuous-optimization wavefront-shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical-system-based spatial light modulator, a fast photodetector, and field-programmable gate array electronics are combined to implement continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performance is demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.

  4. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Science.gov (United States)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost
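
For very small equal-size sample sets, the discrete L2 optimal transport problem the thesis addresses can be solved exactly by enumerating all assignments. The brute-force baseline below (a standard reference method, not the thesis's feature-function algorithm; the point clouds are invented) illustrates a known property of the quadratic cost: under a pure translation, the optimal map matches each point to its own translated copy.

```python
import itertools
import random

def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def brute_force_transport(xs, ys):
    """Exact discrete L2 optimal transport by enumerating all assignments."""
    best_perm, best_cost = None, float("inf")
    for perm in itertools.permutations(range(len(ys))):
        c = sum(sq_dist(xs[i], ys[j]) for i, j in enumerate(perm))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

random.seed(1)
xs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(6)]
ys = [(p + 3.0, q + 3.0) for p, q in xs]   # a pure translation of xs
perm, cost = brute_force_transport(xs, ys)
# For quadratic cost the translation itself is the optimal map, so the
# identity pairing is recovered and the cost is n * ||shift||^2.
print(perm, round(cost, 6))
```

Enumeration is factorial in the sample size, which is precisely why scalable sample-based algorithms such as those in the thesis are needed for realistic datasets.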

  5. THE EFFECTS OF DIVIDEND POLICY AND OWNERSHIP STRUCTURE TOWARDS DEBT POLICY

    Directory of Open Access Journals (Sweden)

    Marcella Fransisca Santosa

    2014-07-01

    Full Text Available This research used multiple regression methods to examine the relationship between the dividend policy, institutional ownership, and insider ownership and the debt policy. The hypothesis tests of this research used as samples 64 manufacturing companies listed on the Indonesian Stock Exchange (IDX) from 2007 until 2010. The results of this research show that the dividend policies and the insider ownership had no effect on the debt policy, while the institutional ownership had a significant negative effect on the debt policy.

  6. State-age-dependent maintenance policies for deteriorating systems with Erlang sojourn time distributions

    International Nuclear Information System (INIS)

    Yeh, R.H.

    1997-01-01

    This paper investigates state-age-dependent maintenance policies for multistate deteriorating systems with Erlang sojourn time distributions. Since Erlang distributions are serial combinations of exponential phases, the deterioration process can be modeled by a multi-phase Markovian model and hence easily analyzed. Based on the Markovian model, the optimal phase-dependent inspection and replacement policy can be obtained by using a policy improvement algorithm. However, since phases are fictitious and cannot be identified by inspections, two procedures are developed to construct state-age-dependent policies based on the optimal phase-dependent policy. The properties of the constructed state-age-dependent policies are further investigated and the performance of the policies is evaluated through a numerical example.
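
The policy improvement algorithm mentioned above can be sketched on a toy discounted Markov deterioration model. All states, costs, transition probabilities and the discount factor below are invented for the illustration; the paper's Erlang-phase model is not reproduced.

```python
BETA = 0.9                      # discount factor (invented)
REPLACE_COST = 10.0             # cost of installing a new unit (invented)
RUNNING_COST = {0: 0.0, 1: 1.0, 2: 3.0, 3: 20.0}   # per-period cost by state

def transition(state, action):
    """Next-state distribution {state: probability} (invented dynamics)."""
    if action == "replace":
        return {0: 1.0}                      # replacement renews the system
    if state == 3:
        return {3: 1.0}                      # a failed unit stays failed
    return {state: 0.5, state + 1: 0.5}      # deteriorates with prob. 1/2

def cost(state, action):
    return REPLACE_COST if action == "replace" else RUNNING_COST[state]

def expect(v, state, action):
    return sum(p * v[s] for s, p in transition(state, action).items())

def policy_iteration(states=(0, 1, 2, 3), actions=("keep", "replace")):
    policy = {s: "keep" for s in states}
    while True:
        v = {s: 0.0 for s in states}         # policy evaluation (fixed point)
        for _ in range(2000):
            v = {s: cost(s, policy[s]) + BETA * expect(v, s, policy[s])
                 for s in states}
        improved = {                          # policy improvement step
            s: min(actions, key=lambda a: cost(s, a) + BETA * expect(v, s, a))
            for s in states
        }
        if improved == policy:
            return policy, v
        policy = improved

policy, v = policy_iteration()
print(policy)
```

On this toy model the algorithm converges to a threshold ("control-limit") policy: keep the unit in good states and replace it once deterioration passes a critical state, mirroring the structure of the phase-dependent policies discussed in the abstract.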

  7. Optimal medication dosing from suboptimal clinical examples: a deep reinforcement learning approach.

    Science.gov (United States)

    Nemati, Shamim; Ghassemi, Mohammad M; Clifford, Gari D

    2016-08-01

    Misdosing medications with sensitive therapeutic windows, such as heparin, can place patients at unnecessary risk, increase length of hospital stay, and lead to wasted hospital resources. In this work, we present a clinician-in-the-loop sequential decision making framework, which provides an individualized dosing policy adapted to each patient's evolving clinical phenotype. We employed retrospective data from the publicly available MIMIC II intensive care unit database, and developed a deep reinforcement learning algorithm that learns an optimal heparin dosing policy from sample dosing trials and their associated outcomes in large electronic medical records. Using separate training and testing datasets, our model was observed to be effective in proposing heparin doses that resulted in better expected outcomes than the clinical guidelines. Our results demonstrate that a sequential modeling approach, learned from retrospective data, could potentially be used at the bedside to derive individualized patient dosing policies.

  8. Simultaneously learning and optimizing using controlled variance pricing

    NARCIS (Netherlands)

    Boer, den A.V.; Zwart, B.

    2014-01-01

    Price experimentation is an important tool for firms to find the optimal selling price of their products. It should be conducted properly, since experimenting with selling prices can be costly. A firm, therefore, needs to find a pricing policy that optimally balances between learning the optimal

  9. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    Science.gov (United States)

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating in interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of the personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates these natural phenomena to the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
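
The flavor of an OCBA-style allocation can be sketched with the classic single-stage rule from the OCBA literature (not the paper's PSO-specific derivation): non-best designs receive replications proportional to (sigma_i / delta_i)^2, where delta_i is the gap to the current best, and the best design receives sigma_b * sqrt(sum of (N_i / sigma_i)^2). The means, standard deviations and budget below are invented.

```python
import math

def ocba_allocation(means, stds, budget):
    """One-stage OCBA-style split of `budget` replications (minimization)."""
    best = min(range(len(means)), key=lambda i: means[i])
    ratios = [0.0] * len(means)
    ref = None
    for i, (m, s) in enumerate(zip(means, stds)):
        if i == best:
            continue
        r = (s / (m - means[best])) ** 2   # N_i proportional to (s_i/delta_i)^2
        if ref is None:
            ref = r                         # normalize against first non-best
        ratios[i] = r / ref
    ratios[best] = stds[best] * math.sqrt(
        sum((ratios[i] / stds[i]) ** 2
            for i in range(len(means)) if i != best))
    total = sum(ratios)
    return [budget * r / total for r in ratios]

# Invented example: four designs; design 0 looks best, design 1 is close.
alloc = ocba_allocation(means=[1.0, 1.2, 2.0, 3.0],
                        stds=[1.0, 1.0, 1.0, 1.0],
                        budget=100)
print([round(a, 1) for a in alloc])
```

Most of the budget goes to the best design and its close competitor, while clearly inferior designs receive almost nothing, which is exactly the behaviour the abstract exploits when selecting personal and global bests under noise.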

  10. Replacement and inspection policies for products with random life cycle

    International Nuclear Information System (INIS)

    Yun, Won Young; Nakagawa, Toshio

    2010-01-01

    In this paper, we consider maintenance policies for products whose economical life cycle is a random variable. First, we study a periodic replacement policy with minimal repair: the system is minimally repaired at failure and is replaced by a new one at age T (the periodic replacement policy with minimal repair of Barlow and Hunter). The expected present value of the total maintenance cost of products with a random life cycle is obtained and the optimal replacement interval minimizing the cost is found. Second, we consider an inspection policy for products with a random life cycle to detect system failure. The expected total cost is obtained and the optimal inspection interval is found. Numerical examples are also included.
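
In the basic (non-random-life-cycle) Barlow-Hunter setting, the long-run cost rate with a power-law (Weibull) failure intensity is C(T) = (c_replace + c_repair * (T/eta)^beta) / T, which has a closed-form minimizer. The sketch below (with invented cost figures) checks that closed form against a grid search; the paper's random-life-cycle extension is not reproduced here.

```python
def cost_rate(T, c_replace, c_repair, beta, eta):
    """Expected cost per unit time for replacement at age T."""
    repairs = (T / eta) ** beta          # expected minimal repairs in [0, T]
    return (c_replace + c_repair * repairs) / T

def optimal_T(c_replace, c_repair, beta, eta):
    """Closed-form minimizer of cost_rate for beta > 1."""
    return eta * (c_replace / (c_repair * (beta - 1))) ** (1.0 / beta)

# Invented parameters: replacement 5x the repair cost, Weibull shape 2.
c_replace, c_repair, beta, eta = 50.0, 10.0, 2.0, 1.0
T_star = optimal_T(c_replace, c_repair, beta, eta)

# Sanity check: the closed form agrees with a fine grid search.
grid = [0.01 * k for k in range(1, 1000)]
T_grid = min(grid, key=lambda T: cost_rate(T, c_replace, c_repair, beta, eta))
print(round(T_star, 3), round(T_grid, 3))
```

With these numbers the optimal replacement age is sqrt(5) ≈ 2.24 time units; increasing the repair cost or the shape parameter shortens it, as the abstract's trade-off suggests.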

  11. Model for determining the completion and production policy in oil wells

    Energy Technology Data Exchange (ETDEWEB)

    Acurero S, L A

    1983-12-01

    An optimization scheme for reservoir development was examined considering the value of the resource, choice of completion and production techniques, and boundary conditions for the reservoir. A 3-phase semi-analytic single-well model was formulated to determine the reservoir response for any completion and production policy. Second, an optimization scheme based on the discrete version of the maximum principle of Pontryagin and the Fibonacci search method was formulated to determine the optimal production and completion policy. Both models are combined in a general algorithm of solution proposed to solve the optimization problem, and a computer code was developed and tested.
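
The Fibonacci search method cited in the abstract is a derivative-free line search for unimodal objectives. A minimal sketch follows; the quadratic objective is a stand-in, and the reservoir model itself is not reproduced. For clarity this version re-evaluates both probe points at every step, whereas the classical method reuses one evaluation per iteration.

```python
def fibonacci_search(f, a, b, n=25):
    """Shrink [a, b] around the minimum of a unimodal f using Fibonacci
    ratios; after the loop the interval width is (b - a)/F_n."""
    fib = [1, 1]
    while len(fib) <= n + 1:
        fib.append(fib[-1] + fib[-2])
    for k in range(n, 1, -1):
        ratio = fib[k - 1] / fib[k]          # tends to the golden ratio
        x1 = b - ratio * (b - a)             # interior probe points, x1 < x2
        x2 = a + ratio * (b - a)
        if f(x1) <= f(x2):
            b = x2                           # minimum lies in [a, x2]
        else:
            a = x1                           # minimum lies in [x1, b]
    return 0.5 * (a + b)

# Invented stand-in objective: unimodal with its minimum at x = 2.
x_min = fibonacci_search(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
print(round(x_min, 4))
```

Because each iteration needs only function values, the same loop works when `f` wraps a full reservoir simulation run, which is how such searches are combined with simulation models in practice.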

  12. Optimal taxation and public provision for poverty reduction

    OpenAIRE

    Kanbur, Ravi; Paukkeri, Tuuli; Pirttilä, Jukka; Tuomala, Matti

    2018-01-01

    The existing literature on optimal taxation typically assumes there exists a capacity to implement complex tax schemes, which is not necessarily the case for many developing countries. We examine the determinants of optimal redistributive policies in the context of a developing country that can only implement linear tax policies due to administrative reasons. Further, the reduction of poverty is typically the expressed goal of such countries, and this feature is also taken into account in our...

  13. Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming

    National Research Council Canada - National Science Library

    Fu, Michael C; Jin, Xing

    2005-01-01

    .... These results have practical implications for Monte Carlo simulation-based solution approaches to stochastic dynamic programming problems where it is impractical to extract the explicit transition...

  14. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    International Nuclear Information System (INIS)

    Saavedra, Y.; Gonzalez, A.; Fernandez, P.; Blanco, J.

    2004-01-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. It was found that the optimal conditions were 0.5 g of freeze-dried and triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. It was possible to carry out the atomic absorption determinations using calibrations with aqueous standards and matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels, and its performance has been found to be good.

  15. Simulation-based optimization of sustainable national energy systems

    International Nuclear Information System (INIS)

    Batas Bjelić, Ilija; Rajaković, Nikola

    2015-01-01

    The goals of the EU2030 energy policy should be achieved cost-effectively by employing the optimal mix of supply- and demand-side technical measures, including energy efficiency, renewable energy and structural measures. In this paper, the achievement of these goals is modeled by introducing an innovative method of soft-linking EnergyPLAN with the generic optimization program GenOpt. This soft-link enables simulation-based optimization, guided by the chosen optimization algorithm, rather than manual adjustment of the decision vectors. In order to run EnergyPLAN simulations within the optimization loop of GenOpt, the decision vectors should be chosen and defined in GenOpt for scenarios created in EnergyPLAN. The result of the optimization loop is an optimal national energy master plan (energy policy in Serbia was taken as a case study), followed by a sensitivity analysis of the exogenous assumptions and a focus on the contribution of the smart electricity grid to the achievement of the EU2030 goals. It is shown that the resulting increase in policy-induced total costs of less than 3% is not significant. This general method could be further improved and used worldwide in the optimal planning of sustainable national energy systems. - Highlights: • An innovative method of soft-linking EnergyPLAN with GenOpt has been introduced. • An optimal national energy master plan has been developed (case study for Serbia). • Sensitivity analysis on the exogenous world energy and emission price development outlook. • Focus on the contribution of smart energy systems to the EU2030 goals. • The soft-linking methodology could be further improved and used worldwide.
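
The soft-link pattern described above is an optimizer repeatedly calling a black-box simulator, reading back a scalar cost, and adjusting the decision vector. The sketch below uses a stand-in quadratic "simulator" and a simple derivative-free coordinate search; in the paper's setup the simulator would be an EnergyPLAN run and the loop would be driven by GenOpt, neither of which is reproduced here.

```python
def simulate(decision):
    """Stand-in for a simulator run returning a scalar total cost.
    The cost surface (minimum at shares 0.4 and 0.3) is invented."""
    pv_share, wind_share = decision
    return (pv_share - 0.4) ** 2 + (wind_share - 0.3) ** 2 + 1.0

def coordinate_search(cost, x, step=0.25, tol=1e-4):
    """Derivative-free loop: probe each decision variable, halve the step."""
    best = cost(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = list(x)
                trial[i] += delta
                c = cost(trial)
                if c < best:
                    x, best, improved = trial, c, True
        if not improved:
            step /= 2.0
    return x, best

x_opt, c_opt = coordinate_search(simulate, [0.0, 0.0])
print([round(xi, 3) for xi in x_opt], round(c_opt, 4))
```

The key design point is that the optimizer never sees the simulator's internals, only its cost output, which is exactly what makes the soft-linking of two independent tools possible.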

  16. Optimal (R, Q) policy and pricing for two-echelon supply chain with lead time and retailer's service-level incomplete information

    Science.gov (United States)

    Esmaeili, M.; Naghavi, M. S.; Ghahghaei, A.

    2018-03-01

    Many studies focus on inventory systems to analyze different real-world situations. This paper considers a two-echelon supply chain that includes one warehouse and one retailer with stochastic demand and an order-up-to-level policy. The retailer's lead time includes the transportation time from the warehouse to the retailer, which is unknown to the retailer. On the other hand, the warehouse is unaware of the retailer's service level. The relationship between the retailer and the warehouse is modeled as a Stackelberg game with incomplete information. Moreover, their relationship is presented for the case where the warehouse and the retailer reveal their private information using incentive strategies. The optimal inventory and pricing policies are obtained using an algorithm based on bi-level programming. Numerical examples, including a sensitivity analysis of some key parameters, compare the results across the Stackelberg models. The results show that information sharing is more beneficial to the warehouse than to the retailer.

  17. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...... patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid...... and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  18. The Dividend Payment Policies of Selected Listed Companies in ...

    African Journals Online (AJOL)

    The Dividend Payment Policies of Selected Listed Companies in Botswana. ... an optimal dividend policy is important because of the effect of its information on outsiders ... on the firm's capital structure, investment opportunities and stock price.

  19. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Full Text Available Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  20. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resultant dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improvised Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  1. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico

    Science.gov (United States)

    Haer, Toon; Botzen, W. J. Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M.; Ward, Philip J.

    2018-06-01

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue `Advances in risk assessment for climate change adaptation policy'.
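
The cost-benefit test used to rank protection strategies can be illustrated with a toy calculation: choose the option whose discounted reduction in expected annual damage (EAD) most exceeds its investment cost. All figures below (in millions of USD, for one hypothetical coastal state) are invented and are not taken from the study.

```python
def npv_of_protection(ead_without, ead_with, invest, years=50, rate=0.04):
    """Discounted benefit of reduced expected annual damage minus cost."""
    annuity = (1 - (1 + rate) ** -years) / rate   # present value of 1/yr
    return (ead_without - ead_with) * annuity - invest

# Invented options for upgrading flood-protection standards.
options = {
    "no upgrade": 0.0,
    "protect to 1/100 per year": npv_of_protection(80.0, 30.0, 600.0),
    "protect to 1/500 per year": npv_of_protection(80.0, 12.0, 1200.0),
}
best = max(options, key=options.get)
print(best, round(options[best], 1))
```

Note how the highest protection standard is not automatically optimal: its extra damage reduction must still outweigh its extra investment cost, which is the economic logic behind the state-by-state prioritization in the abstract.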

  2. Optimal capital stock and financing constraints

    OpenAIRE

    Saltari, Enrico; Giuseppe, Travaglini

    2011-01-01

    In this paper we show that financing constraints affect the optimal level of capital stock even when the financing constraint is ineffective. This happens when the firm rationally anticipates that access to external financing resources may be rationed in the future. We will show that with these expectations, the optimal investment policy is to invest less in any given period, thereby lowering the desired optimal capital stock in the long run.

  3. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    Science.gov (United States)

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in materials science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from materials science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Transmission characteristics and optimal diagnostic samples to detect an FMDV infection in vaccinated and non-vaccinated sheep

    NARCIS (Netherlands)

    Eble, P.L.; Orsel, K.; Kluitenberg-van Hemert, F.; Dekker, A.

    2015-01-01

    We wanted to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group 2 sheep were inoculated and contact exposed to 2

  5. An Overlay Architecture for Throughput Optimal Multipath Routing

    Science.gov (United States)

    2017-01-14

    maximum throughput. Finally, we propose a threshold-based policy (BP-T) and a heuristic policy (OBP), which dynamically control traffic bifurcations...network stability region is available. Second, given any subset of nodes that are controllable, we also wish to develop an optimal routing policy that...case when tunnels do not overlap. We also develop a heuristic overlay control policy for use on general topologies, and show through simulation that

  6. Liquidity Constraints and Fiscal Stabilization Policy

    DEFF Research Database (Denmark)

    Kristoffersen, Mark Strøm

    It is often claimed that the presence of liquidity constrained households enhances the need for and the effects of fiscal stabilization policies. This paper studies this in a model of a small open economy with liquidity constrained households. The results show that the consequences of liquidity...... constraints are more complex than previously thought: The optimal stabilization policy in case of productivity shocks is independent of the liquidity constraints, and the presence of liquidity constraints tends to reduce the need for an active policy stabilizing productivity shocks....

  7. An analysis of the feasibility of carbon management policies as a mechanism to influence water conservation using optimization methods.

    Science.gov (United States)

    Wright, Andrew; Hudson, Darren

    2014-10-01

    Studies of how carbon reduction policies would affect agricultural production have found that there is a connection between carbon emissions and irrigation. Using county level data we develop an optimization model that accounts for the gross carbon emitted during the production process to evaluate how carbon reducing policies applied to agriculture would affect the choices of what to plant and how much to irrigate by producers on the Texas High Plains. Carbon emissions were calculated using carbon equivalent (CE) calculations developed by researchers at the University of Arkansas. Carbon reduction was achieved in the model through a constraint, a tax, or a subsidy. Reducing carbon emissions by 15% resulted in a significant reduction in the amount of water applied to a crop; however, planted acreage changed very little due to a lack of feasible alternative crops. The results show that applying carbon restrictions to agriculture may have important implications for production choices in areas that depend on groundwater resources for agricultural production. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Optimal Mobile Sensing and Actuation Policies in Cyber-physical Systems

    CERN Document Server

    Tricaud, Christophe

    2012-01-01

    A successful cyber-physical system, a complex interweaving of hardware and software in direct interaction with some parts of the physical environment, relies heavily on proper identification of the, often pre-existing, physical elements. Based on information from that process, a bespoke “cyber” part of the system may then be designed for a specific purpose. Optimal Mobile Sensing and Actuation Strategies in Cyber-physical Systems focuses on distributed-parameter systems the dynamics of which can be modelled with partial differential equations. Such systems are very challenging to measure, their states being distributed throughout a spatial domain. Consequently, optimal strategies are needed and systematic approaches to the optimization of sensor locations have to be devised for parameter estimation. The text begins by reviewing the newer field of cyber-physical systems and introducing background notions of distributed parameter systems and optimal observation theory. New research opportunities are then de...

  9. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
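    The ℓ1-recovery step described above can be sketched in a few lines. The toy example below (all sizes and coefficients are illustrative, not from the paper) builds a sampled Legendre basis under its natural uniform distribution and estimates a sparse coefficient vector with ISTA, a simple proximal solver for the ℓ1-regularized least-squares problem:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)
P, N = 40, 20                        # basis size > number of samples (compressive)
xs = rng.uniform(-1.0, 1.0, N)       # natural (uniform) sampling for Legendre

# Measurement matrix of orthonormal Legendre polynomials at the sample points
A = np.stack([np.sqrt(2 * k + 1) * legendre.legval(xs, np.eye(P)[k])
              for k in range(P)], axis=1)

c_true = np.zeros(P)
c_true[[1, 4, 9]] = [1.0, -0.5, 0.25]  # planted sparse PC coefficients
y = A @ c_true

# ISTA: iterative soft-thresholding for min 0.5*||Ac - y||^2 + lam*||c||_1
lam, step = 1e-3, 1.0 / np.linalg.norm(A, 2) ** 2
c = np.zeros(P)
for _ in range(50000):
    g = c - step * (A.T @ (A @ c - y))
    c = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

print(np.round(c[[1, 4, 9]], 2))     # estimates at the planted indices
```

In practice one would use a dedicated basis-pursuit solver; ISTA is used here only to keep the sketch dependency-free.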

  10. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  11. Oil Trade and Climate Policy

    OpenAIRE

    Malik Curuk; Suphi Sen

    2015-01-01

    It has been argued that a depletable resource owner might optimally increase near-term supply in response to environmental policies promoting the development of alternative resources, which might render climate policy ineffective or even counterproductive. This paper empirically confirms this prediction using data on crude oil exports from OPEC to OECD countries between 2001-2010 in a gravity framework. It documents that oil exporters decrease prices and increase quantity of oil exports in re...

  12. Intermittently Connected Cloudlet System to Obtain an Optimal Offloading Policy

    Directory of Open Access Journals (Sweden)

    Nadim Akhtar

    2016-09-01

    Full Text Available Offloading the computation-intensive parts of mobile applications to the cloud has shown great potential for enhancing the performance of mobile devices. Fully realizing this potential is complicated by the mismatch between the computing resources mobile devices demand and what remote services can offer: offloading requests travel over variable networks, so cloud services reached through infrequent, intermittent connectivity can incur long setup times, whereas quick response times are required. Because mobile applications increasingly need more resources than a single device can provide, despite the enhanced capabilities of modern devices, several works have addressed offloading computation to remote cloud services; placing computing resources in nearby cloudlets mitigates this latency problem. This paper takes an experimental approach to highlight these offloading trade-offs. The proposed architecture integrates mobile cloud computing with automatic offloading to improve application response time while minimizing the energy consumption of the mobile device, recognizing that offloading a task to a remote machine is not always better than executing it locally. We develop an optimal offloading algorithm for a mobile user that accounts for cloudlet availability and the user's local load, formulating and solving a Markov Decision Process (MDP) model to obtain a policy that minimizes the combined offloading and computation cost.
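    As a concrete illustration, the MDP formulation the abstract alludes to can be reduced to a two-state toy problem and solved by value iteration. All transition probabilities and costs below are hypothetical, and connectivity is assumed to evolve independently of the chosen action:

```python
import numpy as np

# Toy offloading MDP (parameters hypothetical): state 0 = cloudlet reachable,
# state 1 = disconnected; action 0 = execute locally, action 1 = offload.
P = np.array([[[0.8, 0.2],     # P[a, s, s']: connectivity evolves on its own,
               [0.4, 0.6]],    # independently of the action taken
              [[0.8, 0.2],
               [0.4, 0.6]]])
cost = np.array([[5.0, 5.0],   # cost[a, s]: local execution burns energy
                 [1.0, 9.0]])  # offloading is cheap if connected, costly if not
gamma = 0.9

V = np.zeros(2)
for _ in range(500):            # value iteration for the discounted-cost MDP
    Q = cost + gamma * P @ V    # Q[a, s]
    V = Q.min(axis=0)
policy = Q.argmin(axis=0)       # 1 = offload, 0 = run locally
print(policy)                   # → [1 0]: offload only while the cloudlet is reachable
```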

  13. DYNAMIC OPTIMAL BUDGET ALLOCATION FOR INTEGRATED MARKETING CONSIDERING PERSISTENCE

    OpenAIRE

    SHIZHONG AI; RONG DU; QIYING HU

    2010-01-01

    Aiming at forming dynamic optimal integrated marketing policies, we build a budget allocation model considering both current effects and sustained ones. The model includes multiple time periods and multiple marketing tools which interact through a common resource pool as well as through delayed cross influences on each other's sales, reflecting the nature of "integrated marketing" and its dynamics. In our study, marginal analysis is used to illuminate the structure of optimal policy. We deriv...

  14. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner-radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is described too. 7 pictures

  15. Optimal unemployment insurance with monitoring and sanctions

    NARCIS (Netherlands)

    Boone, J.; Fredriksson, P.; Holmlund, B.; van Ours, J.C.

    2007-01-01

    This article analyses the design of optimal unemployment insurance in a search equilibrium framework where search effort among the unemployed is not perfectly observable. We examine to what extent the optimal policy involves monitoring of search effort and benefit sanctions if observed search is

  16. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or to a lesser extent an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reverse phase high performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag-time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. A two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model, AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
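    A two-compartment model with first-order absorption of the kind fitted here can be simulated directly; the sketch below integrates the mass-balance ODEs with explicit Euler and reads off concentrations at the seven OSS times. All rate constants and the volume are hypothetical placeholders, not the NONMEM estimates, and the enterohepatic-recycling component is omitted for brevity:

```python
import numpy as np

# Illustrative two-compartment PK model with first-order absorption:
# gut -> central <-> peripheral, elimination from central (parameters assumed).
ka, ke, k12, k21 = 1.5, 0.35, 0.5, 0.3   # rate constants, 1/h
dose, V_c = 75.0, 200.0                  # mg, L
dt = 0.001
t = np.arange(0, 24 + dt, dt)
A_gut = np.empty_like(t); A_c = np.empty_like(t); A_p = np.empty_like(t)
A_gut[0], A_c[0], A_p[0] = dose, 0.0, 0.0
for i in range(1, t.size):               # explicit Euler integration
    dg = -ka * A_gut[i-1]
    dc = ka * A_gut[i-1] - (ke + k12) * A_c[i-1] + k21 * A_p[i-1]
    dp = k12 * A_c[i-1] - k21 * A_p[i-1]
    A_gut[i] = A_gut[i-1] + dt * dg
    A_c[i] = A_c[i-1] + dt * dc
    A_p[i] = A_p[i-1] + dt * dp

conc = A_c / V_c                          # mg/L in the central compartment
oss = [0.25, 1, 2, 4, 5, 6, 12]           # the 7-sample design from the abstract
idx = [int(round(h / dt)) for h in oss]
print(np.round(conc[idx], 4))
```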

  17. Distributed Optimal Consensus Control for Nonlinear Multiagent System With Unknown Dynamic.

    Science.gov (United States)

    Zhang, Jilie; Zhang, Huaguang; Feng, Tao

    2017-08-01

    This paper focuses on the distributed optimal cooperative control for continuous-time nonlinear multiagent systems (MASs) with completely unknown dynamics via adaptive dynamic programming (ADP) technology. By introducing predesigned extra compensators, the augmented neighborhood error systems are derived, which successfully circumvents the system knowledge requirement for ADP. It is revealed that the optimal consensus protocols actually work as the solutions of the MAS differential game. Policy iteration algorithm is adopted, and it is theoretically proved that the iterative value function sequence strictly converges to the solution of the coupled Hamilton-Jacobi-Bellman equation. Based on this point, a novel online iterative scheme is proposed, which runs based on the data sampled from the augmented system and the gradient of the value function. Neural networks are employed to implement the algorithm and the weights are updated, in the least-square sense, to the ideal value, which yields approximated optimal consensus protocols. Finally, a numerical example is given to illustrate the effectiveness of the proposed scheme.

  18. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
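    The EnKF analysis step that such a design loop repeatedly invokes can be written in a few lines of linear algebra. The sketch below updates a parameter ensemble with one observation through a hypothetical linear forward model (the observation operator `H` and all numbers are assumptions; the SEOD design criteria themselves are not implemented here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal EnKF analysis step with perturbed observations.
Ne, Np = 200, 3                          # ensemble size, number of parameters
theta = rng.normal(0.0, 1.0, (Ne, Np))   # prior parameter ensemble

def forward(th):                         # hypothetical linear observation model
    H = np.array([[1.0, 0.5, -0.2]])
    return th @ H.T                      # shape (Ne, 1)

y_obs, sigma = np.array([2.0]), 0.1
Y = forward(theta)
perturbed = y_obs + rng.normal(0.0, sigma, (Ne, 1))

# Kalman gain from ensemble (cross-)covariances
dth = theta - theta.mean(axis=0)
dY = Y - Y.mean(axis=0)
C_ty = dth.T @ dY / (Ne - 1)             # cross-covariance, (Np, 1)
C_yy = dY.T @ dY / (Ne - 1) + sigma**2   # innovation covariance, (1, 1)
K = C_ty / C_yy                          # gain, (Np, 1)

theta_post = theta + (perturbed - Y) @ K.T
print(float(forward(theta_post).mean())) # posterior mean moves toward y_obs
```

The same update, run sequentially as new measurements arrive, is the estimation backbone on which the sampling design is optimized.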

  19. Simple Macroeconomic Policies and Welfare: A Quantitative Assessment

    Directory of Open Access Journals (Sweden)

    Eurilton Araújo

    2014-09-01

    Full Text Available We quantitatively compare three macroeconomic policies in a cash-credit goods framework. The policies are: the optimal one; another one that fully smoothes out oscillations in output; and a simple one that prescribes constant values for tax and monetary growth rates. As often found in the related literature, the welfare gains or losses from changing from a given policy to another are small. We also show that the simple policy dominates the one that leads to constant output.

  20. Optimal transfer, ordering and payment policies for joint supplier-buyer inventory model with price-sensitive trapezoidal demand and net credit

    Science.gov (United States)

    Shah, Nita H.; Shah, Digeshkumar B.; Patel, Dushyantkumar G.

    2015-07-01

    This study aims at formulating an integrated supplier-buyer inventory model when market demand is variable price-sensitive trapezoidal and the supplier offers a choice between discount in unit price and permissible delay period for settling the accounts due against the purchases made. This type of trade credit is termed as 'net credit'. In this policy, if the buyer pays within offered time M1, then the buyer is entitled for a cash discount; otherwise the full account must be settled by the time M2; where M2 > M1 ⩾ 0. The goal is to determine the optimal selling price, procurement quantity, number of transfers from the supplier to the buyer and payment time to maximise the joint profit per unit time. An algorithm is worked out to obtain the optimal solution. A numerical example is given to validate the proposed model. The managerial insights based on sensitivity analysis are deduced.

  1. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher-computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data is often invisible for linearized methods that confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. 
Finally, we extend our example to specifically

  2. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    International Nuclear Information System (INIS)

    Bontha, J.R.; Golcar, G.R.; Hannigan, N.

    2000-01-01

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes them very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft ID and 15-ft height dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: Demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; Minimize/optimize air usage by changing sequencing of the Pulsed Jet mixers or by altering cycle times; and Demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%

  3. Optimized pre-thinning procedures of ion-beam thinning for TEM sample preparation by magnetorheological polishing.

    Science.gov (United States)

    Luo, Hu; Yin, Shaohui; Zhang, Guanhua; Liu, Chunhui; Tang, Qingchun; Guo, Meijian

    2017-10-01

    Ion-beam thinning is a well-established sample preparation technique for transmission electron microscopy (TEM), but tedious procedures and labor-intensive pre-thinning can seriously reduce its efficiency. In this work, we present a simple pre-thinning technique by using magnetorheological (MR) polishing to replace manual lapping and dimpling, and demonstrate the successful preparation of electron-transparent single crystal silicon samples after MR polishing and single-sided ion milling. Dimples pre-thinned to less than 30 microns and with little mechanical surface damage were repeatedly produced under optimized MR polishing conditions. Samples pre-thinned by both MR polishing and traditional technique were ion-beam thinned from the rear side until perforation, and then observed by optical microscopy and TEM. The results show that the specimen pre-thinned by MR technique was free from dimpling related defects, which were still residual in sample pre-thinned by conventional technique. Nice high-resolution TEM images could be acquired after MR polishing and one side ion-thinning. MR polishing promises to be an adaptable and efficient method for pre-thinning in preparation of TEM specimens, especially for brittle ceramics. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries with filtering against both the chemical energy, using a modern effective potential, and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest R-free values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates

  5. Dopaminergic balance between reward maximization and policy complexity

    Directory of Open Access Journals (Sweden)

    Naama Parush

    2011-05-01

    Full Text Available Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement-learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamic-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, the probabilities of actions are selected according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in modulating the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
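    A minimal sketch of such an experience-modulated softmax policy is given below. The abstract does not specify the functional form, so the linear penalty on past choice frequency (`nu * counts`) is an illustrative assumption; the pseudo-temperature plays the dopamine-like role described above:

```python
import numpy as np

def softmax_policy(values, counts, temperature, nu=0.1):
    """Hypothetical experience-modulated softmax: action probabilities depend
    on estimated values, a dopamine-like pseudo-temperature, and how often
    each action was chosen before (counts)."""
    pref = (values - nu * counts) / temperature  # frequent past choices are discounted
    p = np.exp(pref - pref.max())                # subtract max for numerical stability
    return p / p.sum()

values = np.array([1.0, 0.8, 0.2])   # estimated action values (illustrative)
counts = np.array([10.0, 2.0, 0.0])  # past choice frequencies (illustrative)

print(softmax_policy(values, counts, temperature=0.5))  # low temperature: near-greedy
print(softmax_policy(values, counts, temperature=5.0))  # high temperature: exploratory
```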

  6. Markdown Optimization via Approximate Dynamic Programming

    Directory of Open Access Journals (Sweden)

    Coşgun

    2013-02-01

    Full Text Available We consider the markdown optimization problem faced by a leading apparel retail chain. Because of substitution among products, the markdown policy of one product affects the sales of other products. Therefore, markdown policies for product groups having a significant cross-price elasticity among each other should be jointly determined. Since the state space of the problem is very large, we use Approximate Dynamic Programming. Finally, we provide insights on how each product's price affects the markdown policy.

  7. Dynamic optimization of maintenance and improvement planning for water main system: Periodic replacement approach

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jong Woo; Choi, Go Bong; Lee, Jong Min [Seoul National University, Seoul (Korea, Republic of); Suh, Jung Chul [Samchully Corporation, Seoul (Korea, Republic of)

    2016-01-15

    This paper proposes a Markov decision process (MDP) based approach to derive an optimal schedule of maintenance, rehabilitation and replacement of the water main system. The scheduling problem utilizes auxiliary information of a pipe such as the current state, cost, and deterioration model. The objective function and detailed algorithm of dynamic programming are modified to solve the periodic replacement problem. The optimal policy evaluated by the proposed algorithm is compared to several existing policies via Monte Carlo simulations. The proposed decision framework provides a systematic way to obtain an optimal policy.
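    The backward-induction core of such an MDP scheduler can be sketched as follows; the four condition states, the transition matrix and all costs are invented for illustration and are not the paper's calibrated deterioration model:

```python
import numpy as np

# Hypothetical finite-horizon pipe-maintenance MDP (states 0=good .. 3=failed).
S, T = 4, 20
P_wait = np.array([[0.7, 0.2, 0.1, 0.0],   # deterioration if we do nothing
                   [0.0, 0.6, 0.3, 0.1],
                   [0.0, 0.0, 0.5, 0.5],
                   [0.0, 0.0, 0.0, 1.0]])
actions = {                                # action -> (cost by state, transition)
    "wait":    (np.array([0.0, 1.0, 4.0, 20.0]), P_wait),
    "repair":  (np.array([3.0] * S), np.eye(S)[np.maximum(np.arange(S) - 1, 0)]),
    "replace": (np.array([8.0] * S), np.tile(np.eye(S)[0], (S, 1))),
}

V = np.zeros(S)                            # terminal value
policy = []
for t in range(T):                         # backward induction over the horizon
    Q = {a: c + P @ V for a, (c, P) in actions.items()}
    names = list(Q)
    Qm = np.stack([Q[a] for a in names])
    V = Qm.min(axis=0)
    policy.append([names[i] for i in Qm.argmin(axis=0)])
print(policy[-1])                          # decision rule for the first period
```

The same loop extends naturally to a periodic-replacement variant by resetting the terminal value at each replacement epoch.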

  8. Optimal Pricing Strategies for New Products in Dynamic Oligopolies

    OpenAIRE

    Engelbert Dockner; Steffen Jørgensen

    1988-01-01

    This paper deals with the determination of optimal pricing policies for firms in oligopolistic markets. The problem is studied as a differential game and optimal pricing policies are established as Nash open-loop controls. Cost learning effects are assumed such that unit costs are decreasing with cumulative output. Discounting of future profits is also taken into consideration. Initially, the problem is addressed in a general framework, and we proceed to study some specific cases that are rel...

  9. Policy Implications Analysis: A Methodological Advancement for Policy Research and Evaluation.

    Science.gov (United States)

    Madey, Doren L.; Stenner, A. Jackson

    Policy Implications Analysis (PIA) is a tool designed to maximize the likelihood that an evaluation report will have an impact on decision-making. PIA was designed to help people planning and conducting evaluations tailor their information so that it has optimal potential for being used and acted upon. This paper describes the development and…

  10. Road maintenance optimization through a discrete-time semi-Markov decision process

    International Nuclear Information System (INIS)

    Zhang Xueqing; Gao Hui

    2012-01-01

    Optimization models are necessary for efficient and cost-effective maintenance of a road network. In this regard, road deterioration is commonly modeled either as a discrete-time Markov process, so that an optimal maintenance policy can be obtained via the Markov decision process, or as a renewal process, so that an optimal maintenance policy can be obtained from renewal theory. However, the discrete-time Markov process cannot capture the actual times at which state transitions occur, while the renewal process considers only one state and one maintenance action. In this paper, road deterioration is modeled as a semi-Markov process in which the state transitions have the Markov property and the holding time in each state is assumed to follow a discrete Weibull distribution. Based on this semi-Markov process, linear programming models are formulated for both infinite and finite planning horizons in order to derive optimal maintenance policies that minimize the life-cycle cost of a road network. A hypothetical road network is used to illustrate the application of the proposed optimization models. The results indicate that these linear programming models are practical for the maintenance of a road network with a large number of road segments, and that various constraints on the decision process, for example performance requirements and available budgets, can be conveniently incorporated. Although the optimal maintenance policies obtained for the road network are randomized stationary policies, the extent of this randomness in decision making is limited. The maintenance actions are deterministic for most states, and randomness in selecting actions occurs only for a few states.
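
    The linear-programming formulation used in such work can be sketched for a plain average-cost MDP over occupancy measures x[s, a]; a full semi-Markov model would additionally weight each state-action pair by its expected holding time. All states, costs, and transition probabilities below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Invented 2-state road condition (0 = good, 1 = poor), 2 actions
# (0 = routine maintenance, 1 = rehabilitate).
# P[s, a, s2] = transition probability, c[s, a] = cost per period.
P = np.array([
    [[0.9, 0.1], [1.0, 0.0]],
    [[0.0, 1.0], [0.8, 0.2]],
])
c = np.array([[1.0, 6.0],
              [4.0, 8.0]])

nS, nA = 2, 2
# Decision variable x[s, a] = long-run fraction of time in (s, a).
# Flow balance: sum_a x[j, a] - sum_{s,a} P[s, a, j] x[s, a] = 0 for each j,
# plus normalization sum x = 1.
A_eq = np.zeros((nS + 1, nS * nA))
for j in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[j, s * nA + a] = (1.0 if s == j else 0.0) - P[s, a, j]
A_eq[nS, :] = 1.0
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

res = linprog(c.ravel(), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (nS * nA))
x = res.x.reshape(nS, nA)
avg_cost = res.fun
print(x.round(3), avg_cost)
```

    A randomized stationary policy is recovered as pi(a|s) = x[s, a] / sum_a x[s, a]; here the optimum is deterministic (routine maintenance when good, rehabilitate when poor) with average cost 16/9 per period.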

  11. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that can be used in the retrieval of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our ongoing research on the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  12. Power Optimization of Multimode Mobile Embedded Systems with Workload-Delay Dependency

    Directory of Open Access Journals (Sweden)

    Hoeseok Yang

    2016-01-01

    Full Text Available This paper proposes to take the relationship between delay and workload into account in the power optimization of microprocessors in mobile embedded systems. Since the components outside a device continuously change their values or properties, the workload to be handled by the system becomes dynamic and variable. In this paper, this variable workload is formulated as a staircase function of the delay incurred at the previous iteration and applied to the power optimization of DVFS (dynamic voltage-frequency scaling). In doing so, a graph representation of all possible workload/mode changes during the lifetime of a device, the Workload Transition Graph (WTG), is proposed. The power optimization problem is then transformed into finding a cycle (closed walk) in the WTG which minimizes the average power consumption over it. From the optimal cycle of the WTG, one can derive the optimal power management policy of the target device. It is shown that the proposed policy is valid for both continuous and discrete DVFS models. The effectiveness of the proposed power optimization policy is demonstrated with simulation results of synthetic and real-life examples.
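
    Finding the cycle of a graph such as the WTG that minimizes average weight per edge is the classic minimum mean cycle problem. A compact sketch of Karp's algorithm on an invented three-node graph (nodes and edge weights are illustrative, not from the paper):

```python
# Karp's minimum mean cycle algorithm on a small directed graph.
# Nodes stand for workload/DVFS modes; edge weights for energy per transition
# (all values invented for illustration).
edges = [(0, 1, 3.0), (1, 2, 1.0), (2, 0, 2.0), (1, 0, 4.0)]
n = 3

INF = float("inf")
# d[k][v] = minimum weight of a walk with exactly k edges ending at v,
# starting anywhere (all-zero initialization handles general graphs).
d = [[INF] * n for _ in range(n + 1)]
for v in range(n):
    d[0][v] = 0.0
for k in range(1, n + 1):
    for u, v, w in edges:
        if d[k - 1][u] + w < d[k][v]:
            d[k][v] = d[k - 1][u] + w

# Karp's formula: min over v of max over k of (d[n][v] - d[k][v]) / (n - k).
best = INF
for v in range(n):
    if d[n][v] == INF:
        continue
    worst = -INF
    for k in range(n):
        if d[k][v] < INF:
            worst = max(worst, (d[n][v] - d[k][v]) / (n - k))
    best = min(best, worst)
print(best)  # → 2.0 (the cycle 0→1→2→0 has weight 6 over 3 edges)
```

    The same idea scales to a real WTG: the returned cycle value is the minimum achievable average power, and tracing the argmin walk back recovers the cycle itself.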

  13. Discounted cost model for condition-based maintenance optimization

    International Nuclear Information System (INIS)

    Weide, J.A.M. van der; Pandey, M.D.; Noortwijk, J.M. van

    2010-01-01

    This paper presents methods to evaluate the reliability and optimize the maintenance of engineering systems that are damaged by shocks or transients arriving randomly in time and overall degradation is modeled as a cumulative stochastic point process. The paper presents a conceptually clear and comprehensive derivation of formulas for computing the discounted cost associated with a maintenance policy combining both condition-based and age-based criteria for preventive maintenance. The proposed discounted cost model provides a more realistic basis for optimizing the maintenance policies than those based on the asymptotic, non-discounted cost rate criterion.

  14. Optimization of maintenance policy using the proportional hazard model

    Energy Technology Data Exchange (ETDEWEB)

    Samrout, M. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: mohamad.el_samrout@utt.fr; Chatelet, E. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: chatelt@utt.fr; Kouta, R. [M3M Laboratory, University of Technology of Belfort Montbeliard (France); Chebbo, N. [Industrial Systems Laboratory, IUT, Lebanese University (Lebanon)

    2009-01-15

    The evolution of system reliability depends on its structure as well as on the evolution of its components' reliability. The latter is a function of component age over a system's operating life. Component aging is strongly affected by maintenance activities performed on the system. In this work, we consider two categories of maintenance activities: corrective maintenance (CM) and preventive maintenance (PM). Maintenance actions are characterized by their ability to reduce this age. PM consists of actions applied to components while they are operating, whereas CM actions occur when the component breaks down. In this paper, we present a new method to integrate the effect of CM when planning the PM policy. The proportional hazard function is used as the modeling tool for this purpose. Interesting results are obtained when policies that take the CM effect into consideration are compared with those that do not.

  15. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales for the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006-2008 with the aim of developing a uniform sampling design that allows species richness, abundance and composition to be estimated confidently at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant to capture the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) get samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, (3) set traps in groups of at least 10 traps to

  16. A condition-based maintenance policy for stochastically deteriorating systems

    International Nuclear Information System (INIS)

    Grall, A.; Berenguer, C.; Dieulle, L.

    2002-01-01

    We focus on the analytical modeling of a condition-based inspection/replacement policy for a stochastically and continuously deteriorating single-unit system. We consider both the replacement threshold and the inspection schedule as decision variables for this maintenance problem and we propose to implement the maintenance policy using a multi-level control-limit rule. In order to assess the performance of the proposed maintenance policy and to minimize the long run expected maintenance cost per unit time, a mathematical model for the maintained system cost is derived, supported by the existence of a stationary law for the maintained system state. Numerical experiments illustrate the performance of the proposed policy and confirm that the maintenance cost rate on an infinite horizon can be minimized by a joint optimization of the maintenance structure thresholds, or equivalently by a joint optimization of a system replacement threshold and the aperiodic inspection schedule.

  17. Sterile Reverse Osmosis Water Combined with Friction Are Optimal for Channel and Lever Cavity Sample Collection of Flexible Duodenoscopes

    Directory of Open Access Journals (Sweden)

    Michelle J. Alfa

    2017-11-01

    Full Text Available Introduction: A simulated-use buildup biofilm (BBF) model was used to assess various extraction fluids and friction methods to determine the optimal sample collection method for polytetrafluorethylene channels. In addition, simulated-use testing was performed for the channel and lever cavity of duodenoscopes. Materials and methods: BBF was formed in polytetrafluorethylene channels using Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa. Sterile reverse osmosis (RO) water and phosphate-buffered saline with and without Tween80, as well as two neutralizing broths (Letheen and Dey–Engley), were each assessed with and without friction. Neutralizer was added immediately after sample collection and samples were concentrated using centrifugation. Simulated-use testing was done using TJF-Q180V and JF-140F Olympus duodenoscopes. Results: Despite variability in the bacterial CFU in the BBF model, none of the extraction fluids tested was significantly better than RO. Borescope examination showed far less residual material when friction was part of the extraction protocol. RO for flush-brush-flush (FBF) extraction provided significantly better recovery of E. coli (p = 0.02) from duodenoscope lever cavities compared to the CDC flush method. Discussion and conclusion: We recommend RO with friction for FBF extraction of the channel and lever cavity of duodenoscopes. Neutralizer and sample concentration optimize recovery of viable bacteria on culture.

  18. Anticipating the uncertain: economic modeling and climate change policy

    Energy Technology Data Exchange (ETDEWEB)

    Jensen, Svenn

    2012-11-01

    With this thesis I wish to contribute to the understanding of how uncertainty and the anticipation of future events by economic actors affect climate policies. The thesis consists of four papers. Two papers are analytical models which explicitly consider that emissions are caused by extracting scarce fossil fuels which in the future must be replaced by clean technologies. The other two are so-called numerical integrated assessment models. Such models represent the world economy, the climate system and the interactions between the two quantitatively, complementing more abstract theoretical work. Should policy makers discriminate between subsidizing renewable energy sources such as wind or solar power, and technologies such as carbon capture and storage (CCS)? Focusing only on the dynamic supply of fossil fuels and hence CO{sub 2}, we find that cheaper future renewables cause extraction to speed up, while lower costs of CCS may delay it. CCS may hence dampen the dynamic inefficiency caused by the absence of comprehensive climate policies today. Does it matter whether uncertainty about future damage assessment is due to scientific complexities or stems from the political process? In paper two, I find that political and scientific uncertainties have opposing effects on the incentives to invest in renewables and to extract fossil fuels: the prospect of scientific learning about the climate system increases investment incentives and, ceteris paribus, slows extraction down; uncertainty about future political constellations does the opposite. The optimal carbon tax under scientific uncertainty equals expected marginal damages, whereas political uncertainty demands a tax below marginal damages that decreases over time. Does uncertainty about economic growth impact optimal climate policy today? Here we are the first to consistently analyze how uncertainty about future economic growth affects optimal emission reductions and the optimal social cost of carbon. We

  19. Plasma treatment of bulk niobium surface for superconducting rf cavities: Optimization of the experimental conditions on flat samples

    Directory of Open Access Journals (Sweden)

    M. Rašković

    2010-11-01

    Full Text Available Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma based surface modification provides an excellent opportunity to eliminate nonsuperconductive pollutants in the penetration depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and studied their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single cell cavities, pursuing the improvement of their rf performance.

  20. Special nuclear material inventory sampling plans

    International Nuclear Information System (INIS)

    Vaccaro, H.S.; Goldman, A.S.

    1987-01-01

    This paper presents improved procedures for obtaining statistically valid sampling plans for nuclear facilities. The double sampling concept and methods for developing optimal double sampling plans are described. An algorithm for finding optimal double sampling plans and choosing appropriate detection and false-alarm probabilities is described
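
    The acceptance probability of a double sampling plan can be sketched directly from binomial probabilities. The plan parameters below (n1, c1, n2, c2) are invented for illustration, not taken from the paper.

```python
from math import comb

def accept_prob(p, n1=50, c1=1, n2=50, c2=4):
    """Probability of acceptance for a double sampling plan at defect rate p.

    Accept on the first sample if defects d1 <= c1; take a second sample if
    c1 < d1 <= c2; accept overall if d1 + d2 <= c2; otherwise reject.
    """
    def binom_pmf(k, n):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    pa = sum(binom_pmf(d1, n1) for d1 in range(c1 + 1))   # accept outright
    for d1 in range(c1 + 1, c2 + 1):                      # second sample needed
        pa += binom_pmf(d1, n1) * sum(binom_pmf(d2, n2)
                                      for d2 in range(c2 - d1 + 1))
    return pa

# Operating-characteristic values at a few defect rates.
print(round(accept_prob(0.01), 4), round(accept_prob(0.10), 4))
```

    Sweeping p traces the OC curve; an optimal plan search would pick (n1, c1, n2, c2) so that the acceptance probability meets target detection and false-alarm probabilities at the chosen quality levels.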

  1. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the Chipilapa geothermal field, El Salvador, geo-statistical parameters were obtained from the variogram calculated from the field data. The maximum correlation distance of the 'radon' samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which defines the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the spacing of the field samples was derived by means of geo-statistical techniques, without losing the detection of the anomaly. (Author)
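
    The variogram analysis behind such a spacing decision can be sketched as an empirical semivariogram computed on synthetic 2-D sample data; the coordinates, field function, and bin edges below are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
coords = rng.uniform(0, 500, size=(80, 2))   # sample locations, metres
# Smooth spatial field plus measurement noise (purely synthetic).
values = (np.sin(coords[:, 0] / 150.0) + np.sin(coords[:, 1] / 150.0)
          + 0.1 * rng.standard_normal(80))

def semivariogram(coords, values, bins):
    """Empirical semivariogram: gamma(h) = mean of 0.5*(z_i - z_j)^2
    over point pairs whose separation falls in each distance bin."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    dz2 = 0.5 * (values[:, None] - values[None, :]) ** 2
    i, j = np.triu_indices(len(values), k=1)   # count each pair once
    d, g = dist[i, j], dz2[i, j]
    return np.array([g[(d >= lo) & (d < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])

bins = np.linspace(0, 250, 6)
gamma = semivariogram(coords, values, bins)
print(gamma.round(3))
```

    The lag at which gamma levels off (the range) plays the role of the 121 m correlation distance in the record above: sample spacing wider than the range loses the anomaly, spacing much tighter wastes cost.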

  2. An optimization methodology for identifying robust process integration investments under uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Elin; Berntsson, Thore [Department of Energy and Environment, Division of Heat and Power Technology, Chalmers University of Technology, SE-412 96 Goeteborg (Sweden); Stroemberg, Ann-Brith [Fraunhofer-Chalmers Research Centre for Industrial Mathematics, Chalmers Science Park, SE-412 88 Gothenburg (Sweden); Patriksson, Michael [Department of Mathematical Sciences, Chalmers University of Technology and Department of Mathematical Sciences, University of Gothenburg, SE-412 96 Goeteborg (Sweden)

    2009-02-15

    Uncertainties in future energy prices and policies strongly affect decisions on investments in process integration measures in industry. In this paper, we present a five-step methodology for the identification of robust investment alternatives incorporating explicitly such uncertainties in the optimization model. Methods for optimization under uncertainty (or, stochastic programming) are thus combined with a deep understanding of process integration and process technology in order to achieve a framework for decision-making concerning the investment planning of process integration measures under uncertainty. The proposed methodology enables the optimization of investments in energy efficiency with respect to their net present value or an environmental objective. In particular, as a result of the optimization approach, complex investment alternatives, allowing for combinations of energy efficiency measures, can be analyzed. Uncertainties as well as time-dependent parameters, such as energy prices and policies, are modelled using a scenario-based approach, enabling the identification of robust investment solutions. The methodology is primarily an aid for decision-makers in industry, but it will also provide insight for policy-makers into how uncertainties regarding future price levels and policy instruments affect the decisions on investments in energy efficiency measures. (author)

  4. Optimal Monetary Policy with Durable Consumption Goods and Factor Demand Linkages

    DEFF Research Database (Denmark)

    Petrella, Ivan; Santoro, Emiliano

    This paper deals with the implications of factor demand linkages for monetary policy design. We develop a dynamic general equilibrium model with two sectors that produce durable and non-durable goods, respectively. Part of the output produced in each sector is used as an intermediate input of production in both sectors, according to an input-output matrix calibrated on the US economy. As shown in a number of recent contributions, this roundabout technology allows us to reconcile standard two-sector New Keynesian models with the empirical evidence showing co-movement between durable and non-durable spending in response to a monetary policy shock. A main result of our monetary policy analysis is that strategic complementarities generated by factor demand linkages amplify social welfare loss. As the degree of interconnection between sectors increases, the cost of misperceiving the correct production…

  5. Assessing groundwater policy with coupled economic-groundwater hydrologic modeling

    Science.gov (United States)

    Mulligan, Kevin B.; Brown, Casey; Yang, Yi-Chen E.; Ahlfeld, David P.

    2014-03-01

    This study explores groundwater management policies and the effect of modeling assumptions on the projected performance of those policies. The study compares an optimal economic allocation for groundwater use subject to streamflow constraints, achieved by a central planner with perfect foresight, with a uniform tax on groundwater use and a uniform quota on groundwater use. The policies are compared with two modeling approaches, the Optimal Control Model (OCM) and the Multi-Agent System Simulation (MASS). The economic decision models are coupled with a physically based representation of the aquifer using a calibrated MODFLOW groundwater model. The results indicate that uniformly applied policies perform poorly when simulated with more realistic, heterogeneous, myopic, and self-interested agents. In particular, the effects of the physical heterogeneity of the basin and the agents undercut the perceived benefits of policy instruments assessed with simple, single-cell groundwater modeling. This study demonstrates the results of coupling realistic hydrogeology and human behavior models to assess groundwater management policies. The Republican River Basin, which overlies a portion of the Ogallala aquifer in the High Plains of the United States, is used as a case study for this analysis.

  6. Are optimal CO2 emissions really optimal? Four critical issues for economists in the greenhouse

    International Nuclear Information System (INIS)

    Azar, C.

    1998-01-01

    Although the greenhouse effect is considered by many to be one of the most serious environmental problems, several economic studies of the greenhouse effect, most notably Nordhaus's DICE model, suggest that it is optimal to allow the emissions of greenhouse gases (GHG) to increase by a factor of three over the next century. Other studies have found that substantial reductions can be justified on economic grounds. This paper explores the reasons for these differences and identifies four (partly overlapping) crucial issues that have to be dealt with when analysing the economics of the greenhouse effect: low-probability but catastrophic events; cost evaluation methods; the choice of discount rate; and the choice of decision criterion. The paper shows that (1) these aspects are crucial for the policy conclusions drawn from models of the economics of climate change, and that (2) ethical choices have to be made for each of these issues. This fact needs wider recognition since economics is very often perceived as a value-neutral tool that can be used to provide policy makers with 'optimal' policies. 62 refs

  7. Public and policy maker support for point-of-sale tobacco policies in New York.

    Science.gov (United States)

    Schmitt, Carol L; Juster, Harlan R; Dench, Daniel; Willett, Jeffrey; Curry, Laurel E

    2014-01-01

    To compare public and policy maker support for three point-of-sale tobacco policies. Two cross-sectional surveys--one of the public from the New York Adult Tobacco Survey and one of policy makers from the Local Opinion Leader Survey--both collected and analyzed in 2011. Tobacco control programs focus on educating the public and policy makers about tobacco control policy solutions. Six hundred seventy-six county-level legislators in New York's 62 counties and New York City's five boroughs (response rate: 59%); 7439 New York residents aged 18 or older. Landline response rates: 20.2% to 22%. Cell phone response rates: 9.2% to 11.1%. Gender, age, smoking status, presence of a child aged 18 years or younger in the household, county of residence, and policy maker and public support for three potential policy solutions to point-of-sale tobacco marketing. t-tests to compare the demographic makeup of the two samples. Adjusted Wald tests to test for differences in policy support between samples. The public was significantly more supportive of point-of-sale policy solutions than were policy makers: cap on retailers (48.0% vs. 19.2%, respectively); ban on sales at pharmacies (49.1% vs. 38.8%); and ban on retailers near schools (53.3% vs. 42.5%). Limitations include cross-sectional data, sociodemographic differences, and variations in item wording. Tobacco control programs need to include information about implementation, enforcement, and potential effects on multiple constituencies (including businesses) in their efforts to educate policy makers about point-of-sale policy solutions.

  8. The capital structure impact on forming company’s accounting policy

    OpenAIRE

    Česnavičiūtė, Giedrė

    2011-01-01

    KEYWORDS: Accounting policy, accounting policy choice, disclosure of accounting policies, capital structure, financial leverage, legitimacy theory, agency theory, signal theory, stakeholder theory. The optimal capital structure has a major impact on achieving a company's goals and ensuring its financial stability. A company's financial condition also depends on how its accounting policy is formed. This paper investigates the correlation between a company's capital struc...

  9. Optimal utilization of energy resources

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, E. A.

    1977-10-15

    General principles that should guide the extraction of New Zealand's energy resources are presented. These principles are based on the objective of promoting the general economic and social benefit obtained from the use of the extracted fuel. For a single resource, the central question to be answered is, simply, what quantity of energy should be extracted in each year of the resource's lifetime. For the energy system as a whole the additional question must be answered of what mix of fuels should be used in any year. The analysis of optimal management of a single energy resource is specifically discussed. The general principles for optimal resource extraction are derived, and then applied to the examination of the characteristics of the optimal time paths of energy quantity and price; to the appraisal of the efficiency, in resource management, of various market structures; to the evaluation of various energy pricing policies; and to the examination of circumstances in which market organization is inefficient and the guidelines for corrective government policy in such cases.

  11. Optimal inflation for the U.S.

    OpenAIRE

    Roberto M. Billi

    2007-01-01

    What is the correctly measured inflation rate that monetary policy should aim for in the long run? This paper characterizes the optimal inflation rate for the U.S. economy in a New Keynesian sticky-price model with an occasionally binding zero lower bound on the nominal interest rate. Real-rate and mark-up shocks jointly determine the optimal inflation rate to be positive but not large. Even allowing for the possibility of extreme model misspecification, the optimal inflation rate is robustly...

  12. Optimizing Chemical Reactions with Deep Reinforcement Learning.

    Science.gov (United States)

    Zhou, Zhenpeng; Li, Xiaocheng; Zare, Richard N

    2017-12-27

    Deep reinforcement learning was employed to optimize chemical reactions. Our model iteratively records the results of a chemical reaction and chooses new experimental conditions to improve the reaction outcome. This model outperformed a state-of-the-art blackbox optimization algorithm by using 71% fewer steps on both simulations and real reactions. Furthermore, we introduced an efficient exploration strategy by drawing the reaction conditions from certain probability distributions, which resulted in an improvement on regret from 0.062 to 0.039 compared with a deterministic policy. Combining the efficient exploration policy with accelerated microdroplet reactions, optimal reaction conditions were determined in 30 min for the four reactions considered, and a better understanding of the factors that control microdroplet reactions was reached. Moreover, our model showed a better performance after training on reactions with similar or even dissimilar underlying mechanisms, which demonstrates its learning ability.
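
    The stochastic exploration idea described above, drawing conditions from probability distributions rather than acting deterministically, can be sketched as Thompson sampling over a discretized set of reaction conditions. The candidate conditions and their true success probabilities below are invented; the paper's actual model is a deep RL agent, not this bandit.

```python
import random

random.seed(1)

# Discretized candidate reaction conditions (e.g. temperatures, invented),
# each with a hidden true success probability the optimizer cannot see.
true_p = {20: 0.2, 40: 0.5, 60: 0.8, 80: 0.4}

# Beta(1, 1) prior per condition: [successes + 1, failures + 1].
posterior = {c: [1, 1] for c in true_p}

for _ in range(2000):
    # Thompson sampling: draw a plausible success rate from each posterior
    # and run the condition whose draw is highest.
    cond = max(posterior, key=lambda c: random.betavariate(*posterior[c]))
    success = random.random() < true_p[cond]
    posterior[cond][0 if success else 1] += 1

best = max(posterior, key=lambda c: posterior[c][0] / sum(posterior[c]))
print(best)
```

    Because draws from uncertain posteriors occasionally win, the policy keeps exploring while concentrating trials on the best-looking condition, which is the mechanism behind the regret improvement over a deterministic policy.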

  13. An inventory model with a new credit drift: Flexible trade credit policy

    Directory of Open Access Journals (Sweden)

    Ankit Prakash Tyagi

    2016-01-01

    Full Text Available In most published articles dealing with optimal order quantity models under permissible delay in payments, it is assumed that the supplier offers a fully permissible delay in payments only if the retailer orders a sufficiently large quantity; otherwise, the delay is not permitted. In practice, in competitive market environments and recession phases of business, every supplier wants to attract more retailers by providing good trading facilities, and a minimum-order-quantity requirement may put negative pressure on the supplier's demand. So, within the economic order quantity (EOQ) framework, the main purpose of this paper is to relax this extreme case by introducing a new credit policy, the Flexible Trade Credit Policy (FTCP), which lets the supplier offer retailers more freedom in trading. This policy, once adopted by suppliers, not only provides an attractive trading environment for retailers but also enhances the supplier's demand through a larger number of new retailers. Herein, under this policy, an inventory system is investigated as a cost minimization problem to establish the retailer's optimal inventory cycle time and optimal order quantity. Three theorems are established to characterize the retailer's optimal replenishment policies. Finally, numerical examples are considered to illustrate these theorems, and managerial insights are drawn from them.

  14. Sample preparation optimization in fecal metabolic profiling.

    Science.gov (United States)

    Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Νicolaos; Theodoridis, Georgios A; Gika, Helen G

    2017-03-15

    Metabolomic analysis of feces can provide useful insight into the metabolic status, the health/disease state of the human or animal, and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces, as a matrix, vary in their physicochemical characteristics and molecular content. However, there is still little information in the literature and no universal approach to sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of acquiring comprehensive metabolic profiles, either untargeted, by NMR spectroscopy and GC-MS, or targeted, by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent, and the sample weight to solvent volume ratio. It was found that the 1/2 (w_f/v_s) ratio provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. Control-limit preventive maintenance policies for components subject to imperfect preventive maintenance and variable operational conditions

    International Nuclear Information System (INIS)

    You Mingyi; Li Hongguang; Meng Guang

    2011-01-01

    This paper develops two component-level control-limit preventive maintenance (PM) policies for systems subject to the joint effect of partial recovery PM acts (imperfect PM acts) and variable operational conditions, and investigates the properties of the proposed policies. The extended proportional hazards model (EPHM) is used to model the system failure likelihood influenced by both factors. Several numerical experiments are conducted for policy property analysis, using real lifetime and operational condition data and typical characterization of imperfect PM acts and maintenance durations. The experimental results demonstrate the necessity of considering both factors when they do exist, characterize the joint effect of the two factors on the performance of an optimized PM policy, and explore the influence of the loading sequence of time-varying operational conditions on the performance of an optimized PM policy. The proposed policies extend the applicability of PM optimization techniques.

  16. Optimization Models for Petroleum Field Exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Jonsbraaten, Tore Wiig

    1998-12-31

    This thesis presents and discusses various models for optimal development of a petroleum field. The objective of these optimization models is to maximize, under many uncertain parameters, the project's expected net present value. First, an overview of petroleum field optimization is given from the point of view of operations research. Reservoir equations for a simple reservoir system are derived and discretized and included in optimization models. Linear programming models for optimizing production decisions are discussed and extended to mixed integer programming models where decisions concerning platform, wells and production strategy are optimized. Then, optimal development decisions under uncertain oil prices are discussed. The uncertain oil price is estimated by a finite set of price scenarios with associated probabilities. The problem is one of stochastic mixed integer programming, and the solution approach is to use a scenario and policy aggregation technique developed by Rockafellar and Wets, although this technique was developed for continuous variables. Stochastic optimization problems with focus on problems with decision dependent information discoveries are also discussed. A class of "manageable" problems is identified and an implicit enumeration algorithm for finding optimal decision policy is proposed. Problems involving uncertain reservoir properties but with a known initial probability distribution over possible reservoir realizations are discussed. Finally, a section on Nash-equilibrium and bargaining in an oil reservoir management game discusses the pool problem arising when two lease owners have access to the same underlying oil reservoir. Because the oil tends to migrate, both lease owners have an incentive to drain oil from the competitor's part of the reservoir. The discussion is based on a numerical example. 107 refs., 31 figs., 14 tabs.

  17. Fire assisted pastoralism vs. sustainable forestry--the implications of missing markets for carbon in determining optimal land use in the wet-dry tropics of Australia.

    Science.gov (United States)

    Ockwell, David; Lovett, Jon C

    2005-04-01

    Using Cape York Peninsula, Queensland, Australia as a case study, this paper combines field sampling of woody vegetation with cost-benefit analysis to compare the social optimality of fire-assisted pastoralism with sustainable forestry. Carbon sequestration is estimated to be significantly higher in the absence of fire. Integration of carbon sequestration benefits for mitigating future costs of climate change into cost-benefit analysis demonstrates that sustainable forestry is a more socially optimal land use than fire-assisted pastoralism. Missing markets for carbon, however, imply that fire-assisted pastoralism will continue to be pursued in the absence of policy intervention. Creation of markets for carbon represents a policy solution that has the potential to drive land use away from fire-assisted pastoralism towards sustainable forestry and environmental conservation.

  18. Reinforcement learning solution for HJB equation arising in constrained optimal control problem.

    Science.gov (United States)

    Luo, Biao; Wu, Huai-Ning; Huang, Tingwen; Liu, Derong

    2015-11-01

    The constrained optimal control problem depends on the solution of the complicated Hamilton-Jacobi-Bellman equation (HJBE). In this paper, a data-based off-policy reinforcement learning (RL) method is proposed, which learns the solution of the HJBE and the optimal control policy from real system data. One important feature of the off-policy RL is that its policy evaluation can be realized with data generated by other behavior policies, not necessarily the target policy, which solves the insufficient exploration problem. The convergence of the off-policy RL is proved by demonstrating its equivalence to the successive approximation approach. Its implementation procedure is based on the actor-critic neural networks structure, where the function approximation is conducted with linearly independent basis functions. Subsequently, the convergence of the implementation procedure with function approximation is also proved. Finally, its effectiveness is verified through computer simulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
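The alternation of policy evaluation and policy improvement underlying the successive approximation argument can be illustrated on the simplest possible case, a scalar discrete-time linear-quadratic regulator. This is a sketch for intuition only, with invented system constants, and is not the paper's actor-critic implementation:

```python
# Policy iteration on a scalar discrete-time LQR: x_{k+1} = a*x_k + b*u_k,
# cost sum_k (q*x_k^2 + r*u_k^2), linear policy u = -K*x.
# Constants below are illustrative only.
a, b, q, r = 0.9, 0.5, 1.0, 1.0

K = 0.0  # initial stabilizing policy (|a - b*K| < 1)
for _ in range(50):
    acl = a - b * K                           # closed-loop dynamics
    P = (q + r * K * K) / (1 - acl * acl)     # policy evaluation: cost-to-go is P*x^2
    K = a * b * P / (r + b * b * P)           # policy improvement (greedy in u)

# Cross-check against the discrete algebraic Riccati fixed point
P_r = q
for _ in range(1000):
    P_r = q + a * a * P_r - (a * b * P_r) ** 2 / (r + b * b * P_r)
K_opt = a * b * P_r / (r + b * b * P_r)
```

Each pass evaluates the current policy exactly and then improves it, mirroring the evaluation/improvement cycle that the off-policy RL method performs from data.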

  19. 'Are we there yet?' - operationalizing the concept of Integrated Public Health Policies.

    Science.gov (United States)

    Hendriks, Anna-Marie; Habraken, Jolanda; Jansen, Maria W J; Gubbels, Jessica S; De Vries, Nanne K; van Oers, Hans; Michie, Susan; Atkins, L; Kremers, Stef P J

    2014-02-01

    Although 'integrated' public health policies are assumed to be the ideal way to optimize public health, it remains hard to determine how far removed we are from this ideal, since clear operational criteria and defining characteristics are lacking. A literature review identified gaps in previous operationalizations of integrated public health policies. We searched for an approach that could fill these gaps. We propose the following defining characteristics of an integrated policy: (1) the combination of policies includes an appropriate mix of interventions that optimizes the functioning of the behavioral system, thus ensuring that motivation, capability and opportunity interact in such a way that they promote the preferred (health-promoting) behavior of the target population, and (2) the policies are implemented by the relevant policy sectors from different policy domains. Our criteria should offer added value since they describe pathways in the process towards formulating integrated policy. The aim of introducing our operationalization is to assist policy makers and researchers in identifying truly integrated cases. The Behavior Change Wheel proved to be a useful framework to develop operational criteria to assess the current state of integrated public health policies in practice. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. Radiological optimization

    International Nuclear Information System (INIS)

    Zeevaert, T.

    1998-01-01

    Radiological optimization is one of the basic principles in each radiation-protection system and it is a basic requirement in the safety standards for radiation protection in the European Communities. The objectives of the research, performed in this field at the Belgian Nuclear Research Centre SCK-CEN, are: (1) to implement the ALARA principles in activities with radiological consequences; (2) to develop methodologies for optimization techniques in decision-aiding; (3) to optimize radiological assessment models by validation and intercomparison; (4) to improve methods to assess in real time the radiological hazards in the environment in case of an accident; (5) to develop methods and programmes to assist decision-makers during a nuclear emergency; (6) to support the policy of radioactive waste management authorities in the field of radiation protection; (7) to investigate existing software programmes in the domain of multi-criteria analysis. The main achievements for 1997 are given.

  1. Determining energy and climate market policy using multiobjective programs with equilibrium constraints

    International Nuclear Information System (INIS)

    Siddiqui, Sauleh; Christensen, Adam

    2016-01-01

    Energy and climate market policy is inherently multiobjective and multilevel, in that desired choices often conflict and are made at a higher level than the actors they influence. Analyzing the tradeoff between reducing emissions and keeping fuel prices low, while seeking compromise among producers, traders, and consumers, is the crux of the policy problem. This paper addresses this issue by combining multiobjective optimization problems, which allow the study of tradeoffs between choices, with equilibrium problems that model the networks and players over which these policies are chosen, to produce a formulation called a Multiobjective Program with Equilibrium Constraints (MOPEC). We apply this formulation to the United States renewable fuel market to help understand why releasing the 2014 mandate for the RFS (Renewable Fuel Standard) was so difficult. The RFS ensures that a minimum volume of renewable fuel is included in transportation fuel sold in the United States. Determining the RFS volume requirements involves anticipating market reaction as well as balancing policy objectives. We provide policy alternatives to aid in setting these volume obligations that are applicable to a wide variety of climate and energy market settings, and we explain why the RFS is not an optimal policy for reducing emissions. - Highlights: • First time a MOPEC has been used to model energy markets and climate policy. • A method to endogenously determine energy policy along with the associated tradeoffs. • A computationally efficient algorithm for MOPECs, compared with existing methods. • An explanation of why the RFS is not an optimal policy for emission reduction.

  2. Strong profiling is not mathematically optimal for discovering rare malfeasors

    Energy Technology Data Exchange (ETDEWEB)

    Press, William H [Los Alamos National Laboratory

    2008-01-01

    In a large population of individuals labeled j = 1,2,...,N, governments attempt to find the rare malfeasor j = j* (terrorist, for example) by making use of priors p_j that estimate the probability of individual j being a malfeasor. Societal resources for secondary random screening, such as airport search or police investigation, are concentrated against individuals with the largest priors. We call this 'strong profiling' if the concentration is at least proportional to p_j for the largest values. Strong profiling often results in higher-probability, but otherwise innocent, individuals being repeatedly subjected to screening. We show here that, entirely apart from considerations of social policy, strong profiling is not mathematically optimal at finding malfeasors. Even if prior probabilities were accurate, their optimal use would be only as roughly the geometric mean between strong profiling and a completely uniform sampling of the population.
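The abstract's claim can be checked with a few lines of arithmetic: if each screening round samples individual j with probability q_j and the malfeasor is drawn from the prior p_j, the expected number of rounds until discovery is the sum over j of p_j/q_j, which Cauchy-Schwarz shows is minimized by q_j proportional to the square root of p_j, the geometric mean between strong profiling (q_j proportional to p_j) and uniform sampling (q_j = 1/N). A minimal numerical sketch, with random priors invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random(1000)
p /= p.sum()                     # priors over N = 1000 individuals

def expected_screenings(q, p):
    # Each round samples individual j with probability q_j, so the
    # malfeasor j* is found after 1/q_{j*} rounds on average.
    return float(np.sum(p / q))

uniform = np.full_like(p, 1.0 / p.size)
strong = p.copy()                          # screening effort proportional to the prior
sqrt_rule = np.sqrt(p) / np.sqrt(p).sum()  # square-root ("geometric mean") rule
```

Under this model, strong profiling and uniform sampling both take exactly N rounds in expectation, while the square-root rule does strictly better whenever the priors are non-uniform.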

  3. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Full Text Available Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique to study the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposition of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be taken into consideration at the same time, preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations about the choice of the best internal standards were also drawn. The whole optimized procedure was then applied to the analysis of a multiflower honey sample, and more than 100 compounds were identified.

  4. Optimal climate change: economics and climate science policy histories (from heuristic to normative).

    Science.gov (United States)

    Randalls, Samuel

    2011-01-01

    Historical accounts of climate change science and policy have reflected rather infrequently upon the debates, discussions, and policy advice proffered by economists in the 1980s. While there are many forms of economic analysis, this article focuses upon cost-benefit analysis, especially as adopted in the work of William Nordhaus. The article addresses the way in which climate change economics subtly altered debates about climate policy from the late 1970s through the 1990s. These debates are often technical and complex, but the argument in this article is that the development of a philosophy of climate change as an issue for cost-benefit analysis has had consequences for how climate policy is made today.

  5. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    Full Text Available When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of the partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows the determination of the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.
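The inner step of such an optimization — finding, for one candidate stratification, the minimum sample size and its allocation across strata that meets a precision constraint — has a closed form under Neyman allocation. A sketch with invented stratum sizes and standard deviations; this is the textbook formula for the univariate case, not the package's actual code:

```python
# Hypothetical stratum population sizes and within-stratum std devs
N_h = [5000, 3000, 2000]
S_h = [12.0, 30.0, 80.0]
N = sum(N_h)
W = [n / N for n in N_h]        # stratum weights

target_var = 4.0                # required variance of the stratified mean estimator

# Neyman allocation: minimal n = (sum_h W_h*S_h)^2 / V, with n_h proportional to W_h*S_h
WS = sum(w * s for w, s in zip(W, S_h))
n = WS ** 2 / target_var
n_h = [n * w * s / WS for w, s in zip(W, S_h)]

# Achieved variance of the stratified mean (no finite population correction)
var = sum((w * s) ** 2 / nh for w, s, nh in zip(W, S_h, n_h))
```

Evaluating this minimal n for every candidate partition is exactly the fitness computation a genetic search over stratifications would repeat many times.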

  6. Modeling the optimal management of spent nuclear fuel

    International Nuclear Information System (INIS)

    Nachlas, J.A.; Kurstedt, H.A. Jr.; Swindle, D.W. Jr.; Korcz, K.O.

    1977-01-01

    Recent governmental policy decisions dictate that strategies for managing spent nuclear fuel be developed. Two models are constructed to investigate the optimum residence time and the optimal inventory withdrawal policy for fuel material that presently must be stored. The mutual utility of the models is demonstrated through a reference case application.

  7. Evaluation and optimization of DNA extraction and purification procedures for soil and sediment samples.

    Science.gov (United States)

    Miller, D N; Bryant, J E; Madsen, E L; Ghiorse, W C

    1999-11-01

    We compared and statistically evaluated the effectiveness of nine DNA extraction procedures by using frozen and dried samples of two silt loam soils and a silt loam wetland sediment with different organic matter contents. The effects of different chemical extractants (sodium dodecyl sulfate [SDS], chloroform, phenol, Chelex 100, and guanidine isothiocyanate), different physical disruption methods (bead mill homogenization and freeze-thaw lysis), and lysozyme digestion were evaluated based on the yield and molecular size of the recovered DNA. Pairwise comparisons of the nine extraction procedures revealed that bead mill homogenization with SDS combined with either chloroform or phenol optimized both the amount of DNA extracted and the molecular size of the DNA (maximum size, 16 to 20 kb). Neither lysozyme digestion before SDS treatment nor guanidine isothiocyanate treatment nor addition of Chelex 100 resin improved the DNA yields. Bead mill homogenization in a lysis mixture containing chloroform, SDS, NaCl, and phosphate-Tris buffer (pH 8) was found to be the best physical lysis technique when DNA yield and cell lysis efficiency were used as criteria. The bead mill homogenization conditions were also optimized for speed and duration with two different homogenizers. Recovery of high-molecular-weight DNA was greatest when we used lower speeds and shorter times (30 to 120 s). We evaluated four different DNA purification methods (silica-based DNA binding, agarose gel electrophoresis, ammonium acetate precipitation, and Sephadex G-200 gel filtration) for DNA recovery and removal of PCR inhibitors from crude extracts. Sephadex G-200 spin column purification was found to be the best method for removing PCR-inhibiting substances while minimizing DNA loss during purification. Our results indicate that for these types of samples, optimum DNA recovery requires brief, low-speed bead mill homogenization in the presence of a phosphate-buffered SDS-chloroform mixture, followed by Sephadex G-200 spin column purification.

  8. Open space preservation, property value, and optimal spatial configuration

    Science.gov (United States)

    Yong Jiang; Stephen K. Swallow

    2007-01-01

    The public has increasingly demonstrated a strong support for open space preservation. How to finance the socially efficient level of open space with the optimal spatial structure is of high policy relevance to local governments. In this study, we developed a spatially explicit open space model to help identify the socially optimal amount and optimal spatial...

  9. Evaluation and improvement of dynamic optimality in electrochemical reactors

    International Nuclear Information System (INIS)

    Vijayasekaran, B.; Basha, C. Ahmed

    2005-01-01

    A systematic approach to stating the dynamic optimization problem, aimed at improving dynamic optimality in electrochemical reactors, is presented in this paper. The formulation takes account of the diffusion phenomenon at the electrode/electrolyte interface. To demonstrate the present methodology, the optimal time-varying electrode potential for a coupled chemical-electrochemical reaction scheme, which maximizes the production of the desired product in a batch electrochemical reactor with/without recirculation, is determined. The dynamic optimization problem statement based upon this approach is a nonlinear differential algebraic system, and its solution provides information about the optimal policy. The optimal control policy at different conditions is evaluated using the well-known Pontryagin's maximum principle. The two-point boundary value problem resulting from the application of the maximum principle is then solved using the control vector iteration technique. These optimal time-varying profiles of electrode potential are then compared to the best uniform operation through the relative improvement of the performance index. The application of the proposed approach to two electrochemical systems, described by ordinary differential equations, shows that the existing electrochemical process control strategy could be improved considerably when the proposed method is incorporated.

  10. Bayesian policy reuse

    CSIR Research Space (South Africa)

    Rosman, Benjamin

    2016-02-01

    Full Text Available Keywords: Policy Reuse · Reinforcement Learning · Online Learning · Online Bandits · Transfer Learning · Bayesian Optimisation · Bayesian Decision Theory. As robots and software agents are becoming more ubiquitous in many applications... The agent has access to a library of policies (pi1, pi2 and pi3), and has previously experienced a set of task instances (τ1, τ2, τ3, τ4), as well as samples of the utilities of the library policies on these instances (the black dots indicate the means...)

  11. Optimization on replacement and inspection period of plant equipment

    International Nuclear Information System (INIS)

    Takase, Kentaro; Kasai, Masao

    2004-01-01

    Rationalization of plant maintenance is one of the main topics being investigated by the Japanese nuclear power industry. Optimizing the inspection and replacement periods of equipment is effective for maintenance cost reduction. A more realistic model of the replacement policy is proposed in this study; it is based on the classical replacement policy model, and its cost is estimated. Then, to incorporate inspection into the maintenance, a formulation that includes the risk concept is discussed. Based on it, two variations of the combination of inspection and replacement are discussed and their costs are estimated. In this study, the effect of equipment degradation is important: the optimized maintenance policy depends on the existence of significant degradation. (author)

  12. Determination of production-shipment policy using a two-phase algebraic approach

    Directory of Open Access Journals (Sweden)

    Huei-Hsin Chang

    2012-04-01

    Full Text Available The optimal production-shipment policy for end products using mathematical modeling and a two-phase algebraic approach is investigated. A manufacturing system with a random defective rate, a rework process, and multiple deliveries is studied with the purpose of deriving the optimal replenishment lot size and shipment policy that minimises total production-delivery costs. The conventional method uses differential calculus on the system cost function to determine the economic lot size and optimal number of shipments for such an integrated vendor-buyer system, whereas the proposed two-phase algebraic approach is a straightforward method that enables practitioners who may not have sufficient knowledge of calculus to manage real-world systems more effectively.
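The algebraic approach generalizes the classic trick of minimizing an EOQ-type cost without calculus: completing the square shows TC(Q) = KD/Q + hQ/2 = (√(hQ/2) − √(KD/Q))² + √(2hKD), so the minimum value √(2hKD) is attained where the squared term vanishes, at Q* = √(2KD/h). A sketch with hypothetical cost parameters (the paper's own cost function has more terms):

```python
import math

K, D, h = 200.0, 1000.0, 4.0   # setup cost, annual demand, holding cost (hypothetical)

def total_cost(Q):
    # TC(Q) = ordering cost + holding cost
    return K * D / Q + h * Q / 2

# Completing the square:
#   TC(Q) = (sqrt(h*Q/2) - sqrt(K*D/Q))**2 + sqrt(2*h*K*D)
# so the algebraic optimum is where the squared term is zero:
Q_star = math.sqrt(2 * K * D / h)
min_cost = math.sqrt(2 * h * K * D)
```

The identity lets the optimum be read off directly, which is the spirit of the calculus-free derivation the abstract describes.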

  13. Mathematical Model of (R,Q) Inventory Policy under Limited Storage Space for Continuous and Periodic Review Policies with Backlog and Lost Sales

    Directory of Open Access Journals (Sweden)

    Kanokwan Singha

    2017-01-01

    Full Text Available This paper involves developing new mathematical expressions to find reorder point and order quantity for inventory management policies that explicitly consider storage space capacity. Both continuous and periodic reviews, as well as backlogged and lost demand during stockout, are considered. With storage space capacity, when on-hand inventory exceeds the capacity, the over-ordering cost of storage at an external warehouse is charged on a per-unit-period basis. The objective is to minimize the total cost, consisting of ordering, shortage, holding, and over-ordering costs. Demand and lead time are stochastic and discrete in nature. Demand during varying lead time is modeled using an empirical distribution so that the findings are not subject to assumptions of demand and lead time probability distributions. Due to the complexity of the developed mathematical expressions, the problems are solved using an iterative method. The method is tested with problem instances that use real data from industry. Optimal solutions of the problem instance are determined by performing exhaustive search. The proposed method can effectively find optimal solutions for continuous review policies and near optimal solutions for periodic review policies. Fundamental insights about the inventory policies are reported from a comparison between continuous review and periodic review solutions, as well as a comparison between backlog and lost sales cases.
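The paper's expressions are more elaborate (storage capacity, over-ordering cost, backlog versus lost sales), but the flavor of searching over (r, Q) against an empirical lead-time-demand distribution can be sketched with the standard approximate backorder cost model; every number below is invented for illustration:

```python
import numpy as np

# Hypothetical empirical lead-time demand sample (units)
ltd = np.array([4, 6, 7, 7, 8, 9, 10, 12, 13, 15])
A, h, pi, D = 50.0, 2.0, 25.0, 1200.0   # order, holding, backorder costs; annual demand

def expected_shortage(r):
    # E[(X - r)+] under the empirical lead-time-demand distribution
    return float(np.mean(np.maximum(ltd - r, 0)))

def cost(r, Q):
    # Classic approximate (r, Q) backorder model: ordering + holding + shortage
    return A * D / Q + h * (Q / 2 + r - ltd.mean()) + pi * D / Q * expected_shortage(r)

# Grid search over reorder point r and order quantity Q
best = min(((r, Q) for r in range(4, 16) for Q in range(50, 401, 10)),
           key=lambda x: cost(*x))
```

Using the empirical sample directly, as the paper advocates, avoids committing to any parametric form for demand during the lead time.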

  14. When Environmental Policy is Superfluous: Growth and Polluting Resources

    International Nuclear Information System (INIS)

    Schou, Poul

    2002-01-01

    In a research-driven endogenous growth model, a non-renewable resource gives rise to pollution. Consumption may either grow or decline along the optimal balanced growth path, but the (flow) pollution level necessarily diminishes continuously. Any positive balanced growth path is sustainable. Utility may improve, even though consumption declines. Although positive growth is optimal, the market economy may nevertheless result in permanently declining consumption possibilities. At the same time, a growth-enhancing government policy may improve long-run environmental conditions. The pollution externality does not distort the decisions of the market economy, so that a specific environmental policy is superfluous

  15. Age replacement policy based on imperfect repair with random probability

    International Nuclear Information System (INIS)

    Lim, J.H.; Qu, Jian; Zuo, Ming J.

    2016-01-01

    In most of the literature on age replacement policies, failures before the planned replacement age can be either minimally repaired or perfectly repaired, depending on the type of failure, the cost of repair, and so on. In this paper, we propose an age replacement policy based on imperfect repair with random probability. The proposed policy incorporates the case in which a failure can be either minimally repaired or perfectly repaired with random probabilities. Mathematical formulas for the expected cost rate per unit time are derived for both the infinite-horizon case and the one-replacement-cycle case. For each case, we show that the optimal replacement age exists and is finite. - Highlights: • We propose a new age replacement policy with a random probability of perfect repair. • We develop the expected cost per unit time. • We discuss the optimal replacement age minimizing the expected cost rate.
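The classical expected-cost-rate construction that such policies extend is easy to state and optimize numerically: for replacement age T, C(T) = [c_p R(T) + c_f (1 − R(T))] / ∫₀ᵀ R(t) dt, where R is the survival function, c_p the preventive and c_f the failure replacement cost. A sketch with Weibull lifetimes and invented costs; this is the baseline age replacement model, not the paper's imperfect-repair formulas:

```python
import math

# Baseline age replacement cost rate with Weibull lifetimes.
beta, eta = 2.5, 100.0          # shape > 1 (wear-out) and scale (hypothetical)
c_f, c_p = 50.0, 10.0           # failure vs preventive replacement cost (hypothetical)

def R(t):
    # Weibull survival function
    return math.exp(-((t / eta) ** beta))

def cost_rate(T, n=1000):
    # C(T) = [c_p*R(T) + c_f*(1 - R(T))] / integral_0^T R(t) dt  (trapezoid rule)
    dt = T / n
    integral = sum((R(i * dt) + R((i + 1) * dt)) / 2 * dt for i in range(n))
    return (c_p * R(T) + c_f * (1 - R(T))) / integral

# Discrete search for the cost-rate-minimizing replacement age
T_star = min(range(10, 300), key=cost_rate)
```

With an increasing hazard (shape > 1) and c_f > c_p, the minimizer is interior, which is the finiteness property the abstract establishes for its generalized model.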

  16. Application of Chitosan-Zinc Oxide Nanoparticles for Lead Extraction From Water Samples by Combining Ant Colony Optimization with Artificial Neural Network

    Science.gov (United States)

    Khajeh, M.; Pourkarami, A.; Arefnejad, E.; Bohlooli, M.; Khatibi, A.; Ghaffari-Moghaddam, M.; Zareian-Jahromi, S.

    2017-09-01

    Chitosan-zinc oxide nanoparticles (CZPs) were developed for solid-phase extraction. Combined artificial neural network-ant colony optimization (ANN-ACO) was used for the simultaneous preconcentration and determination of lead (Pb2+) ions in water samples prior to graphite furnace atomic absorption spectrometry (GF AAS). The solution pH, mass of adsorbent CZPs, amount of 1-(2-pyridylazo)-2-naphthol (PAN), which was used as a complexing agent, eluent volume, eluent concentration, and flow rates of sample and eluent were used as input parameters of the ANN model, and the percentage of extracted Pb2+ ions was used as the output variable of the model. A multilayer perception network with a back-propagation learning algorithm was used to fit the experimental data. The optimum conditions were obtained based on the ACO. Under the optimized conditions, the limit of detection for Pb2+ ions was found to be 0.078 μg/L. This procedure was also successfully used to determine the amounts of Pb2+ ions in various natural water samples.

  17. The Impact of Transport Mode and Carbon Policy on Low-Carbon Retailer

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    2015-01-01

    Full Text Available Low-carbon retail has become a strategic target for many developed and developing economies. This study discusses the impact of transport mode and carbon policy on achieving this objective. We investigated the retailer's transportation mode, pricing, and ordering strategy, all of which consider carbon-sensitive demand under the carbon cap-and-trade policy. We analyzed the retailer's optimal decision and maximum profit as affected by transport mode and cap-and-trade policy parameters. Results show that the two elements (cap-and-trade policy and consumer low-carbon awareness) can encourage the retailer to choose low-carbon transportation, and both influence the retailer's profit and optimal decision. Finally, a numerical example is presented to illustrate the applicability of the model.
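
    A toy version of the retailer's decision makes the trade-off concrete: demand rises with consumer low-carbon awareness, and unused emission permits are sold under cap-and-trade. All coefficients below are illustrative assumptions, not the paper's model.

```python
# Transport modes: unit emissions and unit transport cost per item (assumed).
MODES = {"road": {"unit_em": 2.0, "unit_cost": 1.0},
         "rail": {"unit_em": 0.8, "unit_cost": 1.4}}   # low-carbon but pricier

def profit(price, mode, cap=50.0, carbon_price=0.5, awareness=3.0):
    m = MODES[mode]
    # demand falls with price and rises with how "green" the chosen mode is
    demand = max(0.0, 100.0 - 8.0 * price + awareness * (2.0 - m["unit_em"]))
    emissions = m["unit_em"] * demand
    permit_cash = (cap - emissions) * carbon_price     # sell surplus / buy shortfall
    return (price - m["unit_cost"]) * demand + permit_cash

def best_decision():
    # crude grid search over price (1.0 to 11.9) and transport mode
    grid = [(profit(p / 10.0, mode), p / 10.0, mode)
            for p in range(10, 120) for mode in MODES]
    return max(grid, key=lambda t: t[0])
```

With these numbers the low-carbon mode wins despite its higher unit cost, because awareness raises demand and surplus permits are monetized.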

  18. [Cost-effectiveness of breast cancer screening policies in Mexico].

    Science.gov (United States)

    Valencia-Mendoza, Atanacio; Sánchez-González, Gilberto; Bautista-Arredondo, Sergio; Torres-Mejía, Gabriela; Bertozzi, Stefano M

    2009-01-01

    To generate cost-effectiveness information that allows policy makers to optimize breast cancer (BC) policy in Mexico, we constructed a Markov model that incorporates four interrelated processes of the disease: the natural history; detection using mammography; treatment; and mortality from other, competing causes. Thirteen different strategies were modeled. The strategies (starting age, % coverage, frequency in years) = (48, 25, 2), (40, 50, 2) and (40, 50, 1) constituted the optimal method for expanding the BC program, yielding 75.3, 116.4 and 171.1 thousand pesos per life-year saved, respectively. The strategies included in the optimal expansion method produce a cost per life-year saved of less than two times the GNP per capita and hence are cost-effective according to the criteria of the WHO Commission on Macroeconomics and Health.
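
    The structure of such an analysis can be illustrated with a deliberately tiny Markov cohort model: annual cycles, a screening lever that changes one transition probability, and an incremental cost-effectiveness ratio (ICER) at the end. Every number below is an illustrative assumption, not an estimate from the Mexican data.

```python
# Toy 3-state Markov cohort model (well -> cancer -> dead), annual cycles.
# All transition probabilities and costs are illustrative, not from the paper.
def run_cohort(screening, years=40, cohort=100_000):
    p_onset = 0.004                              # yearly probability of developing BC
    p_die_cancer = 0.10 if screening else 0.18   # earlier detection lowers mortality
    p_die_other = 0.008                          # competing-causes mortality
    cost_screen = 15.0                           # yearly per-capita screening cost
    well, sick = float(cohort), 0.0
    life_years, cost = 0.0, 0.0
    for _ in range(years):
        life_years += well + sick
        if screening:
            cost += cost_screen * (well + sick)
        new_sick = well * p_onset
        well_deaths = well * p_die_other
        sick_deaths = sick * (p_die_cancer + p_die_other)
        well += -new_sick - well_deaths
        sick += new_sick - sick_deaths
    return life_years, cost

def icer():
    ly0, c0 = run_cohort(False)
    ly1, c1 = run_cohort(True)
    return (c1 - c0) / (ly1 - ly0)   # incremental cost per life-year gained
```

The real model adds states for detection stage and treatment, but the ICER computation at the end has the same shape.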

  19. Optimal maintenance of a multi-unit system under dependencies

    Science.gov (United States)

    Sung, Ho-Joon

    The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, reliance on past empirical data has been the industry-standard practice for developing maintenance policies that provide the minimum level of system reliability. Because such empirically derived policies are vulnerable to unforeseen or fast-changing external factors, recent advancements in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from work concerned with identifying maintenance policies that provide required system availability at minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solving for optimal maintenance policies must be treated with caution when considered for broader applications, as they rely on specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraints of a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) often do not possess closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetimes or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting all sources of dependency is another commonly practiced technique intended to increase the mathematical tractability of a particular model. 
This dissertation presents a proposal for an alternative methodology to solve optimal maintenance problems by aiming to achieve the

  20. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Science.gov (United States)

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups who correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted on the aggregate data and by macroregions. Finally, model-based classifications (latent classes and the groupings identified thereafter) were contrasted with the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). The following cutoffs were identified in the aggregate data for households with children and/or adolescents: between 1 and 2 (1/2), 5 and 6 (5/6), and 10 and 11 (10/11); this category emerged consistently in all analyses. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guide experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  1. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. Application of MAES to this group of contaminants in aquaculture samples, to which it had not previously been applied, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron-capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g⁻¹ (except for BB-15, which was 1.43 ng g⁻¹). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of an appropriate certified reference material (CRM), WMF-01.

  2. Policy and systems analysis for nuclear installation decommissioning

    International Nuclear Information System (INIS)

    Gu Jiande

    1995-01-01

    After introducing the principal concepts of nuclear installation decommissioning, the author analyses, from a policy-science point of view, present problems in the policy, administration and programmes for decommissioning work in China. Following the physical process of decommissioning, the author applies engineering economics to derive methods and formulas for estimating decommissioning cost. It is pointed out that, based on the optimization principle for radiation protection and a cost-benefit analysis of the decommissioning engineering, the corresponding policy decisions can be made.

  3. Indicator Accuracy and Monetary Policy: Is Ignorance Bliss?

    OpenAIRE

    Nimark, Kristoffer P.

    2003-01-01

    This paper argues that assuming a common information set shared by the public and the central bank may be inappropriate when one is concerned with the value of information itself. Specifically, we argue that it may lead one to conclude that monetary policy does not benefit from accurate real-time data. This paper sets up a New-Keynesian model with optimal discretionary monetary policy, where we allow for partial and diverse information. The model is used to show that monetary policy ...

  4. Optimization and Customer Utilities under Dynamic Lead Time Quotation in an M/M Type Base Stock System

    Directory of Open Access Journals (Sweden)

    Koichi Nakade

    2017-01-01

    Full Text Available In a manufacturing and inventory system, information on production and order lead times helps customers decide whether to accept finished products, given their own impatience with waiting. Savaşaneril et al. (2010) discuss the optimal dynamic lead time quotation policy, and its properties, in a one-stage production and inventory system with a base stock policy, maximizing the system's profit. In this system, each arriving customer decides whether to enter the system based on the quoted lead time. On the other hand, the customer's utility may be small under the optimal quoted lead time policy because the actual lead time may be longer than the quoted one. We use a utility function defined over the benefit of receiving products and the waiting time, and propose several kinds of heuristic lead time quotation policies. These are compared with the optimal policy with respect to both profits and customer utilities. Numerical examples show that some heuristic policies achieve better expected customer utilities than the optimal quoted lead time policy that maximizes the system's profit.
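
    The gap the record points at, between the utility a customer anticipates from the quoted lead time and the utility realized under the actual lead time, can be sketched for a single M/M/1-like station. The linear utility form and all parameters are illustrative assumptions, not the model of Savaşaneril et al.

```python
def utilities(quote, mu=1.0, lam=0.5, benefit=10.0, c_wait=1.5):
    """Return (anticipated, realized) customer utility for a quoted lead time."""
    mean_sojourn = 1.0 / (mu - lam)         # mean sojourn time in an M/M/1 queue
    anticipated = benefit - c_wait * quote  # what the quote promises the customer
    realized = benefit - c_wait * mean_sojourn
    if anticipated <= 0.0:                  # customer balks at an unattractive quote
        return 0.0, 0.0
    return anticipated, realized
```

With these numbers the true mean sojourn time is 2.0, so an optimistic quote of 1.0 attracts the customer with an anticipated utility of 8.5, while the realized utility is only 7.0 — the asymmetry the heuristic policies in the record try to address.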

  5. The Q(s,S) control policy for the joint replenishment problem extended to the case of correlation among item-demands

    DEFF Research Database (Denmark)

    Larsen, Christian

  We develop an algorithm to compute an optimal Q(s,S) policy for the joint replenishment problem when demands follow a compound correlated Poisson process. It is a non-trivial generalization of the work by Nielsen and Larsen (2005). We make some numerical analyses on two-item problems where we compare the optimal Q(s,S) policy to the optimal uncoordinated (s,S) policies. The results indicate that the more negative the correlation the less advantageous it is to coordinate. Therefore, in some cases the degree of correlation determines whether to apply the coordinated Q(s,S) policy or the uncoordinated (s,S) policies. Finally, we compare the Q(s,S) policy and the closely connected P(s,S) policy. Here we explain why the Q(s,S) policy is a better choice if item-demands are correlated.
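
    The baseline the record compares against, independent (s,S) control per item, can be sketched with a small periodic-review simulation under Poisson demand (inter-item correlation is omitted here). The demand rate, cost parameters, and zero replenishment lead time are illustrative assumptions.

```python
import math
import random

def simulate_sS(s, S, rate=3.0, h=1.0, K=25.0, p=10.0, periods=4000, seed=7):
    """Average per-period cost of a periodic-review (s,S) policy for one item."""
    rng = random.Random(seed)
    inv, cost = S, 0.0
    for _ in range(periods):
        u, d, term = rng.random(), 0, math.exp(-rate)   # Poisson demand by inversion
        cum = term
        while u > cum:
            d += 1
            term *= rate / d
            cum += term
        inv -= d
        cost += h * max(inv, 0) + p * max(-inv, 0)      # holding + backorder cost
        if inv <= s:                                    # reorder up to S
            cost += K                                   # zero lead time assumed
            inv = S
    return cost / periods

# Crude grid search for a good (s, S) pair for this single item.
best = min(((simulate_sS(s, S), s, S)
            for s in range(0, 8) for S in range(8, 25)), key=lambda t: t[0])
```

The coordinated Q(s,S) policy of the record adds a joint trigger across items on top of this per-item logic.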

  6. Toward a better union: improving the effectiveness of foreign policies

    OpenAIRE

    Bradshaw, Daniel J.

    2014-01-01

    Approved for public release; distribution is unlimited A fundamental characteristic of state-state interaction in a globalized system is the explicitness with which states communicate their foreign policies to each other. In order to understand the role and the importance of foreign policy explicitness in the global foreign policy system, I first created a simple mesh model of the actors and institutions that form the U.S. foreign policy system. By optimizing this model with various system...

  7. Gas chromatographic-mass spectrometric analysis of urinary volatile organic metabolites: Optimization of the HS-SPME procedure and sample storage conditions.

    Science.gov (United States)

    Živković Semren, Tanja; Brčić Karačonji, Irena; Safner, Toni; Brajenović, Nataša; Tariba Lovaković, Blanka; Pizent, Alica

    2018-01-01

    Non-targeted metabolomics research of the human volatile urinary metabolome can be used to identify potential biomarkers associated with changes in metabolism related to various health disorders. To ensure reliable analysis of urinary volatile organic metabolites (VOMs) by gas chromatography-mass spectrometry (GC-MS), parameters affecting the headspace solid-phase microextraction (HS-SPME) procedure have been evaluated and optimized. The influence of incubation and extraction temperatures and times, fibre coating material, and salt addition on SPME efficiency was investigated by multivariate optimization methods using reduced factorial and Doehlert matrix designs. The results showed optimum values of 60°C for temperature, 50 min for extraction time, and 35 min for incubation time. The proposed conditions were applied to investigate the stability of urine samples under different storage conditions and freeze-thaw processes. The sum of peak areas of urine samples stored at 4°C, -20°C, and -80°C for up to six months showed a time-dependent decrease, although storage at -80°C resulted in only a slight, non-significant reduction compared to the fresh sample. However, due to the volatile nature of the analysed compounds, more than two freeze/thaw cycles for a sample stored for six months at -80°C should be avoided whenever possible. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Analyzing personalized policies for online biometric verification.

    Science.gov (United States)

    Sadhwani, Apaar; Yang, Yan; Wein, Lawrence M

    2014-01-01

    Motivated by India's nationwide biometric program for social inclusion, we analyze verification (i.e., one-to-one matching) in the case where we possess similarity scores for 10 fingerprints and two irises between a resident's biometric images at enrollment and his biometric images during his first verification. At subsequent verifications, we allow individualized strategies based on these 12 scores: we acquire a subset of the 12 images, get new scores for this subset that quantify the similarity to the corresponding enrollment images, and use the likelihood ratio (i.e., the likelihood of observing these scores if the resident is genuine divided by the corresponding likelihood if the resident is an imposter) to decide whether a resident is genuine or an imposter. We also consider two-stage policies, where additional images are acquired in a second stage if the first-stage results are inconclusive. Using performance data from India's program, we develop a new probabilistic model for the joint distribution of the 12 similarity scores and find near-optimal individualized strategies that minimize the false reject rate (FRR) subject to constraints on the false accept rate (FAR) and mean verification delay for each resident. Our individualized policies achieve the same FRR as a policy that acquires (and optimally fuses) 12 biometrics for each resident, which represents a five (four, respectively) log reduction in FRR relative to fingerprint (iris, respectively) policies previously proposed for India's biometric program. The mean delay is [Formula: see text] sec for our proposed policy, compared to 30 sec for a policy that acquires one fingerprint and 107 sec for a policy that acquires all 12 biometrics. This policy acquires iris scans from 32-41% of residents (depending on the FAR) and acquires an average of 1.3 fingerprints per resident.
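
    The decision rule itself, accept when the likelihood ratio of the observed scores exceeds a threshold, is easy to sketch. The Gaussian score model below, with conditionally independent scores, is an illustrative assumption; the paper fits a joint distribution over all 12 similarity scores.

```python
import math

# Illustrative score model: genuine and imposter similarity scores modeled as
# Gaussians. The (mean, std) parameters are assumptions, not fitted values
# from India's program data.
GENUINE = (0.8, 0.1)
IMPOSTER = (0.3, 0.15)

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

def log_likelihood_ratio(scores):
    # Scores treated as conditionally independent given genuine/imposter
    # status; the paper's joint model drops this naive-product assumption.
    return sum(math.log(normal_pdf(s, *GENUINE)) -
               math.log(normal_pdf(s, *IMPOSTER)) for s in scores)

def verify(scores, threshold=0.0):
    """True -> accept the resident as genuine."""
    return log_likelihood_ratio(scores) > threshold
```

Moving the threshold trades false accepts against false rejects; the two-stage policies in the record acquire more images only when the ratio lands near the threshold.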

  9. Competing intelligent search agents in global optimization

    Energy Technology Data Exchange (ETDEWEB)

    Streltsov, S.; Vakili, P. [Boston Univ., MA (United States); Muchnik, I. [Rutgers Univ., Piscataway, NJ (United States)

    1996-12-31

    In this paper we present a new search methodology that we view as a development of the intelligent-agent approach to the analysis of complex systems. The main idea is to treat the search process as a competition between concurrent adaptive intelligent agents. Agents cooperate in achieving a common search goal and at the same time compete with each other for computational resources. We propose a statistical selection approach to resource allocation between agents that leads to simple index allocation policies that are efficient on average. We use global optimization as the most general setting that encompasses many types of search problems, and show how the proposed selection policies can be used to improve and combine various global optimization methods.
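
    In the spirit of, though not identical to, the paper's statistical selection policy, the competition can be sketched as an index rule: each agent is a local search, and each unit of computation goes to the agent with the best optimistic index. The objective and agent parameters are illustrative assumptions.

```python
import math
import random

def objective(x):
    return -(x - 3.0) ** 2            # toy global-optimization target, maximum 0 at x = 3

class Agent:
    """A hill-climbing search agent with its own step size."""
    def __init__(self, step, rng):
        self.step, self.rng = step, rng
        self.x = rng.uniform(-10.0, 10.0)
        self.best = objective(self.x)
        self.pulls = 0

    def work(self):                   # spend one unit of computation
        cand = self.x + self.rng.gauss(0.0, self.step)
        if objective(cand) > objective(self.x):
            self.x = cand
        self.best = max(self.best, objective(self.x))
        self.pulls += 1

def search(budget=600, seed=3):
    rng = random.Random(seed)
    agents = [Agent(step, rng) for step in (0.1, 1.0, 5.0)]
    for a in agents:
        a.work()                      # initialize each agent's index
    for t in range(budget):
        # optimistic (UCB-style) index: observed best plus exploration bonus
        a = max(agents, key=lambda g: g.best + math.sqrt(2.0 * math.log(t + 2) / g.pulls))
        a.work()
    return max(a.best for a in agents)
```

Agents that stop improving lose the competition for pulls, so most of the budget flows to whichever step size suits the landscape.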

  10. Assessing Screening Policies for Childhood Obesity

    Science.gov (United States)

    Wein, Lawrence M.; Yang, Yan; Goldhaber-Fiebert, Jeremy D.

    2014-01-01

    To address growing concerns over childhood obesity, the United States Preventive Services Task Force (USPSTF) recently recommended that children undergo obesity screening beginning at age 6 [1]. An Expert Committee recommends starting at age 2 [2]. Analysis is needed to assess these recommendations and investigate whether there are better alternatives. We model the age- and sex-specific population-wide distribution of body mass index (BMI) through age 18 using National Longitudinal Survey of Youth data [3]. The impact of treatment on BMI is estimated using the targeted systematic review performed to aid the USPSTF [4]. The prevalence of hypertension and diabetes at age 40 is estimated from the Panel Study of Income Dynamics [5]. We fix the screening interval at 2 years, and derive the age- and sex-dependent BMI thresholds that minimize adult disease prevalence, subject to referring a specified percentage of children for treatment yearly. We compare this optimal biennial policy to biennial versions of the USPSTF and Expert Committee recommendations. Compared to the USPSTF recommendation, the optimal policy reduces adult disease prevalence by 3% in relative terms while requiring a 28% lower treatment referral rate. Compared to the Expert Committee recommendation, the reductions are 6% and 40%, respectively. The optimal policy treats mostly 16-year-olds and few children under age 14. Our results suggest that adult disease is minimized by focusing childhood obesity screening and treatment on older adolescents. PMID:22240724
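
    The optimization step, choosing a referral threshold to minimize modeled adult disease prevalence under a referral budget, can be sketched with a toy risk model. The functional form and all numbers are illustrative assumptions, not estimates from the NLSY or PSID data.

```python
# Toy threshold search: pick the BMI-percentile cutoff that minimizes modeled
# adult disease prevalence subject to a yearly referral budget.
def disease_prevalence(cutoff_pct, treat_effect=0.3):
    referred = 1.0 - cutoff_pct / 100.0      # fraction of children referred
    baseline = 0.20                          # assumed untreated adult prevalence
    # assumed: risk concentrates in the top percentiles, so marginal benefit
    # of referring more children diminishes (square-root shape)
    averted = treat_effect * 0.15 * referred ** 0.5
    return baseline - averted, referred

def best_cutoff(max_referral=0.10):
    feasible = []
    for cutoff in range(80, 100):            # candidate percentile cutoffs
        prev, referred = disease_prevalence(cutoff)
        if referred <= max_referral:
            feasible.append((prev, cutoff))
    return min(feasible)                     # lowest modeled prevalence
```

With diminishing returns, the constrained optimum simply spends the whole referral budget, i.e. the lowest feasible cutoff; the paper's richer model makes the cutoff depend on age and sex as well.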

  11. Base-stock policies with reservations

    NARCIS (Netherlands)

    van Foreest, Nicky D.; Teunter, Ruud H.; Syntetos, Aris A.

    2018-01-01

    All intensively studied and widely applied inventory control policies satisfy demand in accordance with the First-Come-First-Served (FCFS) rule, whether this demand is in backorder or not. Interestingly, this rule is sub-optimal when the fill-rate is constrained or when the backorder cost structure

  12. Predictive Feature Selection for Genetic Policy Search

    Science.gov (United States)

    2014-05-22

    limited manual intervention are becoming increasingly desirable as more complex tasks in dynamic and high-tempo environments are explored. Reinforcement...states in many domains causes features relevant to the reward variations to be overlooked, which hinders the policy search. 3.4 Parameter Selection PFS...the current feature subset. This local minimum may be "deceptive," meaning that it does not clearly lead to the global optimal policy (Goldberg and

  13. Privacy, Time Consistent Optimal Labour Income Taxation and Education Policy

    OpenAIRE

    Konrad, Kai A.

    1999-01-01

    Incomplete information is a commitment device for time consistency problems. In the context of time consistent labour income taxation privacy reduces welfare losses and increases the effectiveness of public education as a second best policy.

  14. Uncertainty and endogenous technical change in climate policy models

    International Nuclear Information System (INIS)

    Baker, Erin; Shittu, Ekundayo

    2008-01-01

    Until recently endogenous technical change and uncertainty have been modeled separately in climate policy models. In this paper, we review the emerging literature that considers both these elements together. Taken as a whole the literature indicates that explicitly including uncertainty has important quantitative and qualitative impacts on optimal climate change technology policy. (author)

  15. Optimal Drug Policy in Low-Income Neighborhoods

    Science.gov (United States)

    Chang, Sheng-Wen; Coulson, N. Edward; Wang, Ping

    2015-01-01

    The control of drug activity currently favors supply-side policies: drug suppliers in the U.S. face a higher arrest rate and longer sentences than demanders. We construct a simple model of drug activity with search and entry frictions in labor and drug markets. Our calibration analysis suggests a strong “dealer replacement effect.” As a result, given a variety of community objectives, it is beneficial to lower supplier arrests and raise the demand arrest rate from current values. A 10% shift from supply-side to demand-side arrests can reduce the population of potential drug dealers by 22–25,000 and raise aggregate local income by $380–400 million, at 2002 prices. (JEL Classification: D60, J60, K42, H70) PMID:27616878

  16. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

    Full Text Available Many steroid hormones can be considered as potential biomarkers and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop new, relatively low-cost and rapid methodologies for their determination in biological samples. Because there is little literature data on concentrations of steroid hormones in urine samples, we have attempted the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid-phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed, using a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed by the analysis of urine samples collected from volunteers, both men and women (students and amateur bodybuilders, using and not using steroid doping). The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  17. Optimal Maintenance for Stochastically Degrading Staellite Constellations

    National Research Council Canada - National Science Library

    Cook, Timothy J

    2005-01-01

    .... Previous work has developed a methodology to compute an optimal replacement policy for a satellite constellation in which satellites were viewed as binary entities, either operational or failed...

  18. Local Approximation and Hierarchical Methods for Stochastic Optimization

    Science.gov (United States)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application in which we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state spaces. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. 
We test these methods on historical price data from the
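
    The backward-dynamic-programming piece of the record above can be sketched on a deliberately small battery arbitrage problem: state of charge is the state, and the value function is filled in backward over a known price sequence. Prices and battery limits are illustrative assumptions, not the historical data used in the thesis.

```python
# Backward dynamic programming for battery energy arbitrage on a toy grid:
# V[t][e] = best profit from hour t onward with charge level e.
def solve(prices, capacity=4, rate=1):
    T = len(prices)
    V = [[0.0] * (capacity + 1) for _ in range(T + 1)]
    policy = [[0] * (capacity + 1) for _ in range(T)]
    for t in range(T - 1, -1, -1):
        for e in range(capacity + 1):
            best, best_a = float("-inf"), 0
            for a in (-rate, 0, rate):          # discharge, idle, charge
                e2 = e + a
                if 0 <= e2 <= capacity:
                    val = -a * prices[t] + V[t + 1][e2]   # sell when a < 0
                    if val > best:
                        best, best_a = val, a
            V[t][e] = best
            policy[t][e] = best_a
    return V[0][0], policy
```

For prices [1, 5] the policy charges at the low price and discharges at the high one for a profit of 4; the low-rank approximation in the thesis replaces the exhaustive sweep over `V` when the state space is too large for this exact recursion.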

  19. Optimizing detectability

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    HPLC is useful for trace and ultratrace analyses of a variety of compounds. For most applications, HPLC is useful for determinations in the nanogram-to-microgram range; however, detection limits of a picogram or less have been demonstrated in certain cases. These determinations require state-of-the-art capability; several examples of such determinations are provided in this chapter. As mentioned before, to detect and/or analyze low quantities of a given analyte at submicrogram or ultratrace levels, it is necessary to optimize the whole separation system, including the quantity and type of sample, sample preparation, HPLC equipment, chromatographic conditions (including column), choice of detector, and quantitation techniques. A limited discussion is provided here for optimization based on theoretical considerations, chromatographic conditions, detector selection, and miscellaneous approaches to detectability optimization. 59 refs

  20. New Economy - New Policy Rules?

    NARCIS (Netherlands)

    Bullard, J.; Schaling, E.

    2000-01-01

    The U.S. economy appears to have experienced a pronounced shift toward higher productivity over the last five years or so. We wish to understand the implications of such shifts for the structure of optimal monetary policy rules in simple dynamic economies. Accordingly, we begin with a standard

  1. Optimal Investment Control of Macroeconomic Systems

    Institute of Scientific and Technical Information of China (English)

    ZHAO Ke-jie; LIU Chuan-zhe

    2006-01-01

    Economic growth is always accompanied by economic fluctuation. The target of macroeconomic control is to maintain a basic balance of economic growth, accelerate the optimization of economic structures, and lead the national economy toward rapid, sustainable and healthy development, in order to propel society forward. To realize this goal, investment control must be regarded as the most important policy instrument for economic stability. Readjustment and control of investment includes not only control of aggregate investment, but also structural control, which depends on the economic-technological relationships between the various industries of a national economy. On the basis of generalized-system theory, an optimal investment control model for government has been developed. To provide a scientific basis for government to formulate a macroeconomic control policy, the model investigates the balance of total supply and aggregate demand through adjustments in investment decisions, realizing sustainable and stable growth of the national economy. The optimal investment decision function proposed by this study has a unique and specific expression, high regulating precision, and computable characteristics.

  2. Optimal Price Skimming by a Monopolist Facing Rational Consumers

    OpenAIRE

    David Besanko; Wayne L. Winston

    1990-01-01

    This paper considers the intertemporal pricing problem for a monopolist marketing a new product. The key feature differentiating this paper from the extant management science literature on intertemporal pricing is the assumption that consumers are intertemporal utility maximizers. A subgame perfect Nash equilibrium pricing policy is characterized and shown to involve intertemporal price discrimination. We compare this policy to the optimal policy for a monopolist facing myopic consumers and f...

  3. Renewable resource policy when distributional impacts matter

    International Nuclear Information System (INIS)

    Horan, R.D.; Shortle, J.S.; Bulte, E.H.

    1999-01-01

    The standard assumption in bioeconomic resource models is that optimal policies maximize the present value of economic surplus to society. This assumption implies that regulatory agencies should not be concerned with the distributional consequences of management strategies. Both contemporary welfare-theoretic and rent-seeking approaches suggest that distributional issues are important in designing resource management policies. This paper explores resource management when the managing agency has preferences defined over the economic welfare of various groups with a direct economic interest in the use of the resources. Policy schemes consistent with this approach are derived and compared with standard results. 42 refs

  4. Optimal Long-Term Financial Contracting

    OpenAIRE

    Peter M. DeMarzo; Michael J. Fishman

    2007-01-01

    We develop an agency model of financial contracting. We derive long-term debt, a line of credit, and equity as optimal securities, capturing the debt coupon and maturity; the interest rate and limits on the credit line; inside versus outside equity; dividend policy; and capital structure dynamics. The optimal debt-equity ratio is history dependent, but debt and credit line terms are independent of the amount financed and, in some cases, the severity of the agency problem. In our model, the ag...

  5. Optimization of cooling strategy and seeding by FBRM analysis of batch crystallization

    Science.gov (United States)

    Zhang, Dejiang; Liu, Lande; Xu, Shijie; Du, Shichao; Dong, Weibing; Gong, Junbo

    2018-03-01

    A method is presented for optimizing the cooling strategy and seed loading simultaneously. Focused beam reflectance measurement (FBRM) was used to determine an approximation to the optimal cooling profile. Using these results together with a constant-growth-rate assumption, a modified Mullin-Nyvlt trajectory could be calculated. This trajectory can suppress secondary nucleation and has the potential to control the product's polymorph distribution. Compared with linear and two-step cooling, the modified Mullin-Nyvlt trajectory gives a larger size distribution and a better morphology. Based on the calculated results, an optimized seed loading policy was also developed. This policy could be useful for guiding the batch crystallization process.
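
    The shape of such a programmed-cooling trajectory can be sketched with the common cubic textbook approximation, which cools slowly at first to keep supersaturation, and hence secondary nucleation, low. This is an illustrative form, not the FBRM-fitted profile from the study.

```python
# Programmed ("Mullin-Nyvlt"-type) cooling profile: a cubic in normalized
# batch time, so most of the cooling happens late in the batch.
def cooling_profile(t0_c, tf_c, batch_time_h, steps=8):
    pts = []
    for i in range(steps + 1):
        tau = i / steps                          # normalized batch time in [0, 1]
        temp = t0_c - (t0_c - tf_c) * tau ** 3   # slow initial cooling
        pts.append((round(tau * batch_time_h, 2), round(temp, 2)))
    return pts
```

Cooling from 60°C to 20°C over 4 h, the profile is still at 55°C at the halfway point, whereas linear cooling would already be at 40°C.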

  6. Energy policies in the European Union. Germany's ecological tax reform

    International Nuclear Information System (INIS)

    Welfens, P.J.J.; Jungmittag, A.; Meyer, B.; Jasinski, P.

    2001-01-01

    The chapters discuss the following aspects: 1. Energy policy as a strategic element of economic policy in dynamic open economies. 2. Phasing out nuclear energy and core elements of sustainable energy strategy. 3. Ecological tax reform: Theory, modified double dividend and international aspects. 4. The policy framework in Europe and Germany. 5. Optimal ecological tax reform: Options and recommendations for an EU-action plan. 6. Conclusions. (orig./CB)

  7. The Q(s,S) control policy for the joint replenishment problem extended to the case of correlation among item-demands

    DEFF Research Database (Denmark)

    Larsen, Christian

    2009-01-01

    We develop an algorithm to compute an optimal Q(s,S) policy for the joint replenishment problem when demands follow a compound correlated Poisson process. It is a non-trivial generalization of the work by Nielsen and Larsen [2005. An analytical study of the Q(s,S) policy applied to the joint...... replenishment problem. European Journal of Operational Research 163, 721-732]. We make some numerical analyses on two-item problems where we compare the optimal Q(s,S) policy to the optimal uncoordinated (s,S) policies. The results indicate that the more negative the correlation the less advantageous...
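
    For contrast with the coordinated Q(s,S) policy, the uncoordinated benchmark reviews each item independently under a classic (s,S) rule. A minimal sketch of that benchmark (all inventory positions and parameters hypothetical):

```python
def ss_order(position, s, S):
    """Classic (s,S) rule: when the inventory position drops to s or below,
    order up to S; otherwise order nothing."""
    return S - position if position <= s else 0

# Two items reviewed independently -- the uncoordinated benchmark in the
# paper.  A Q(s,S) policy would instead trigger a *joint* review of all
# items' (s,S) limits whenever aggregate demand since the last replenishment
# reaches Q.  Positions and parameters below are made up.
positions = [3, 9]
params = [(4, 12), (5, 10)]                 # (s, S) per item
orders = [ss_order(x, s, S) for x, (s, S) in zip(positions, params)]
# item 1: 3 <= 4, so order up to 12 (order 9); item 2: 9 > 5, so order 0
```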

  8. A Multiple-objective Optimization of Whey Fermentation in Stirred Tank Bioreactors

    Directory of Open Access Journals (Sweden)

    Mitko Petrov

    2006-12-01

    Full Text Available A multiple-objective optimization is applied to find an optimal policy for a fed-batch fermentation process for lactose oxidation from a natural substratum by the strain Kluyveromyces marxianus var. lactis MC5. The optimal policy consists of the feed flow rate, agitation speed, and gas flow rate. The multiple-objective problem includes three criteria: the total price of the biomass production, the separation cost in downstream processing, and the oxygen mass-transfer in the bioreactor. The multiple-objective optimization is transformed into a standard single-objective problem by combining the local criteria into a utility function with different weights. A fuzzy-sets method is applied to solve the resulting maximizing-decision problem, and a simple combined algorithm guides the search for a satisfactory solution to the general multiple-objective optimization problem. The obtained optimal control results show an increase in process productivity and a decrease in the residual substrate concentration.
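
    The fuzzy-sets maximizing-decision step described in the abstract (in the spirit of Bellman-Zadeh) can be sketched as: map each objective to a satisfaction degree in [0, 1], then choose the policy whose weakest objective is best. All candidate policies and numbers below are hypothetical:

```python
def membership(value, worst, best):
    """Linear satisfaction degree in [0, 1]: 0 at the worst observed value
    of an objective, 1 at the best (Bellman-Zadeh style)."""
    if best == worst:
        return 1.0
    mu = (value - worst) / (best - worst)
    return max(0.0, min(1.0, mu))

def maximizing_decision(candidates):
    """Max-min rule: pick the candidate whose weakest per-objective
    satisfaction degree is largest."""
    return max(candidates, key=lambda k: min(candidates[k]))

cands = {
    "policy A": [0.9, 0.4, 0.7],   # strong on cost, weak on separation
    "policy B": [0.6, 0.6, 0.65],  # balanced across all three criteria
}
chosen = maximizing_decision(cands)  # "policy B": its minimum 0.6 beats A's 0.4
```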

  9. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The three optimal sampling times were determined using KineticPro and the population PK analysis was performed on Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities correlated with PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using the Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable for estimating population and individual PK parameters in patients, which can then be used to individualize the dose for an optimized benefit-to-risk ratio.
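
    The compartmental structure can be illustrated with a crude parent-to-metabolite simulation. The clearances and volumes echo the abstract (V1 = 45 L, CL1 = 4.03 L/min; V2 = 36 L, CL2 = 0.226 L/min), but the dose, time horizon, and the assumption that all eliminated parent converts to metabolite are simplifications for illustration, not the published model:

```python
def simulate_parent_metabolite(dose, V1, CL1, V2, CL2, t_end, dt=0.01):
    """Forward-Euler integration of a minimal parent -> metabolite model.

    Assumes all eliminated parent drug is converted to the metabolite,
    a strong simplification of the published two-compartment models.
    Returns (parent concentration, metabolite concentration) in mg/L.
    """
    A1, A2 = dose, 0.0                 # amounts (mg) in each compartment
    t = 0.0
    while t < t_end:
        k_out = CL1 / V1 * A1          # parent elimination rate (mg/min)
        A1 -= k_out * dt
        A2 += (k_out - CL2 / V2 * A2) * dt
        t += dt
    return A1 / V1, A2 / V2

# Dose (1000 mg) and 60-minute horizon are hypothetical.
c_parent, c_metab = simulate_parent_metabolite(1000.0, 45.0, 4.03, 36.0, 0.226, 60.0)
```

The fast gemcitabine clearance relative to dFdU (CL1/V1 is roughly 14 times CL2/V2) is what makes the metabolite dominate the exposure profile in this sketch.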

  10. Performance of an Optimized Paper-Based Test for Rapid Visual Measurement of Alanine Aminotransferase (ALT) in Fingerstick and Venipuncture Samples.

    Directory of Open Access Journals (Sweden)

    Sidhartha Jain

    Full Text Available A paper-based, multiplexed, microfluidic assay has been developed to visually measure alanine aminotransferase (ALT) in a fingerstick sample, generating rapid, semi-quantitative results. Prior studies indicated a need for improved accuracy; the device was subsequently optimized using an FDA-approved automated platform (Abaxis Piccolo Xpress) as a comparator. Here, we evaluated the performance of the optimized paper test for measurement of ALT in fingerstick blood and serum, as compared to Abaxis and Roche/Hitachi platforms. To evaluate feasibility of remote results interpretation, we also compared reading cell phone camera images of completed tests to reading the device in real time. 96 ambulatory patients with varied baseline ALT concentration underwent fingerstick testing using the paper device; cell phone images of completed devices were taken and texted to a blinded off-site reader. Venipuncture serum was obtained from 93/96 participants for routine clinical testing (Roche/Hitachi); subsequently, 88/93 serum samples were captured and applied to paper and Abaxis platforms. Paper test and reference standard results were compared by Bland-Altman analysis. For serum, there was excellent agreement between paper test and Abaxis results, with negligible bias (+4.5 U/L). Abaxis results were systematically 8.6% lower than Roche/Hitachi results. ALT values in fingerstick samples tested on paper were systematically lower than values in paired serum tested on paper (bias -23.6 U/L) or Abaxis (bias -18.4 U/L); a correction factor was developed for the paper device to match fingerstick blood to serum. Visual reads of cell phone images closely matched reads made in real time (bias +5.5 U/L). The paper ALT test is highly accurate for serum testing, matching the reference method against which it was optimized better than the reference methods matched each other. A systematic difference exists between ALT values in fingerstick and paired serum samples, and can be
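
    The Bland-Altman comparison used above reduces to a mean bias and 95% limits of agreement over the paired differences. A minimal sketch with made-up paired ALT readings:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    """Bland-Altman agreement statistics between two paired measurement
    methods: mean bias and 95% limits of agreement (bias +/- 1.96 SD of
    the paired differences)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ALT readings (U/L): paper device vs. reference method.
paper = [32, 45, 80, 120, 61]
ref = [30, 41, 72, 110, 58]
bias, (lo, hi) = bland_altman(paper, ref)   # positive bias: paper reads higher
```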

  11. Is climate change-centrism an optimal policy making strategy to set national electricity mixes?

    International Nuclear Information System (INIS)

    Vázquez-Rowe, Ian; Reyna, Janet L.; García-Torres, Samy; Kahhat, Ramzy

    2015-01-01

    Highlights: • The impact of climate-centric policies on other environmental impacts is uncertain. • Analysis of changing electricity grids of Peru and Spain in the period 1989–2013. • Life Cycle Assessment was the selected sustainability method to conduct the study. • Policies targeting GHG reductions also reduce air pollution and toxicity. • Resource usage, especially water, does not show the same trends as GHG emissions. - Abstract: In order to combat the threat of climate change, countries have begun to implement policies which restrict GHG emissions in the electricity sector. However, the development of national electricity mixes should also be sensitive to resource availability, geo-political forces, human health impacts, and social equity concerns. Policy focused on GHG goals could potentially lead to adverse consequences in other areas. To explore the impact of “climate-centric” policy making on long-term electricity mix changes, we develop two cases for Peru and Spain analyzing their changing electricity grids in the period 1989–2013. We perform a Life Cycle Assessment of annual electricity production to catalogue the improvements in GHG emissions relative to other environmental impacts. We conclude that policies targeting GHG reductions might have the co-benefit of also reducing air pollution and toxicity at the expense of other important environmental performance indicators such as water depletion. Moreover, as of 2013, both countries generate approximately equal GHG emissions per kWh, and relatively low emission rates of other pollutants compared to nations of similar development levels. Although climate-centric policy can lead to some positive environmental outcomes in certain areas, energy policy-making should be holistic and include other aspects of sustainability and vulnerability.

  12. Sampling bee communities using pan traps: alternative methods increase sample size

    Science.gov (United States)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...

  13. An integrated assessment of climate change, air pollution, and energy security policy

    International Nuclear Information System (INIS)

    Bollen, Johannes; Hers, Sebastiaan; Van der Zwaan, Bob

    2010-01-01

    This article presents an integrated assessment of climate change, air pollution, and energy security policy. Basis of our analysis is the MERGE model, designed to study the interaction between the global economy, energy use, and the impacts of climate change. For our purposes we expanded MERGE with expressions that quantify damages incurred to regional economies as a result of air pollution and lack of energy security. One of the main findings of our cost-benefit analysis is that energy security policy alone does not decrease the use of oil: global oil consumption is only delayed by several decades and oil reserves are still practically depleted before the end of the 21st century. If, on the other hand, energy security policy is integrated with optimal climate change and air pollution policy, the world's oil reserves will not be depleted, at least not before our modeling horizon well into the 22nd century: total cumulative demand for oil decreases by about 24%. More generally, we demonstrate that there are multiple other benefits of combining climate change, air pollution, and energy security policies and exploiting the possible synergies between them. These benefits can be large: for Europe the achievable CO2 emission abatement and oil consumption reduction levels are significantly deeper for integrated policy than when a strategy is adopted in which one of the three policies is omitted. Integrated optimal energy policy can reduce the number of premature deaths from air pollution by about 14,000 annually in Europe and over 3 million per year globally, by lowering the chronic exposure to ambient particulate matter. Only the optimal strategy combining the three types of energy policy can constrain the global average atmospheric temperature increase to a limit of 3 °C with respect to the pre-industrial level. (author)

  14. A Quasi-Feed-In-Tariff policy formulation in micro-grids: A bi-level multi-period approach

    International Nuclear Information System (INIS)

    Taha, Ahmad F.; Hachem, Nadim A.; Panchal, Jitesh H.

    2014-01-01

    A Quasi-Feed-In-Tariff (QFIT) policy formulation is presented for micro-grids that integrates renewable energy generation considering Policy Makers' and Generation Companies' (GENCOs) objectives assuming a bi-level multi-period formulation that integrates physical characteristics of the power-grid. The upper-level problem corresponds to the PM, whereas the lower-level decisions are made by GENCOs. We consider that some GENCOs are green energy producers, while others are black energy producers. Policy makers incentivize green energy producers to generate energy through the payment of optimal time-varying subsidy price. The policy maker's main objective is to maximize an overall social welfare that includes factors such as demand surplus, energy cost, renewable energy subsidy price, and environmental standards. The lower-level problem corresponding to the GENCOs is based on maximizing the players' profits. The proposed QFIT policy differs from the FIT policy in the sense that the subsidy price-based contracts offered to green energy producers dynamically change over time, depending on the physical properties of the grid, demand, and energy price fluctuations. The integrated problem solves for time-varying subsidy price and equilibrium energy quantities that optimize the system welfare under different grid and system conditions. - Highlights: • We present a bi-level optimization problem formulation for Quasi-Feed-In-Tariff (QFIT) policy. • QFIT dictates that subsidy prices dynamically vary over time depending on conditions. • Power grid's physical characteristics affect optimal subsidy prices and energy generation. • To maximize welfare, policy makers ought to increase subsidy prices during the peak-load

  15. Reducing Environmental Allergic Triggers: Policy Issues.

    Science.gov (United States)

    Abramson, Stuart L

    The implementation of policies to reduce environmental allergic triggers can be an important adjunct to optimal patient care for allergic rhinitis and allergic asthma. Policies at the local level in schools and other public as well as private buildings can make an impact on disease morbidity. Occupational exposures for allergens have not yet been met with the same rigorous policy standards applied for exposures to toxicants by Occupational Safety and Health Administration. Further benefit may be obtained through policies by local, county, state, and national governments, and possibly through international cooperative agreements. The reduction of allergenic exposures can and should be affected by policies with strong scientific, evidence-based derivation. However, a judicious application of the precautionary principle may be needed in circumstances where the health effect of inaction could lead to more serious threats to vulnerable populations with allergic disease. This commentary covers the scientific basis, current implementation, knowledge gaps, and pro/con views on policy issues in reducing environmental allergic triggers. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  16. Assessing views about gun violence reduction policy: a look at type of violence and expected effectiveness.

    Science.gov (United States)

    Sorenson, Susan B

    2015-10-01

    Public opinion polling about gun policy is routinely conducted and often disregarded. The purpose of this research is to explore ways in which surveys can be made more useful to policy makers, researchers, and the general public. A stratified random sample of 1000 undergraduates at a private, urban university was recruited for an online survey about proposed gun policies. A total of 51.7% answered the questions analyzed herein. Including but going beyond typical assessments of agreement, the survey elicited respondent evaluations of the effectiveness of seven gun policies under two randomly assigned conditions: the type of gun violence (e.g., homicide, suicide, violent crime) and its magnitude. Participants were asked to estimate the effectiveness of each policy, including the possibility of making things worse. Participants indicated strong support for all policies and expected each to be effective, with one exception - a policy designed to increase the number of guns on the scene, that is, putting armed police in schools. Persons who did not support other policies, on average, did not expect them to make things worse. The type of gun violence affected effectiveness ratings, but telling participants about the scope of the violence did not. Asking about the expected effectiveness of (vs. general support for) a policy might identify some optimism: even people who don't support a policy sometimes think it will be effective. Findings suggest that surveys about the effectiveness of gun violence policies likely assess views that exclude suicide, the most common form of gun-related mortality. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Designing Efficient College and Tax Policies

    OpenAIRE

    Findeisen, Sebastian; Sachs, Dominik

    2015-01-01

    The total social benefits of college education exceed the private benefits because the government receives a share of the monetary returns in the form of income taxes. We study the policy implications of this fiscal externality in an optimal dynamic tax framework. Using a variational approach we derive a formula for the revenue effect of an increase in college education subsidies and for the excess burden of income taxation caused by the college margin. We also show how the optimal nonlinear ...

  18. A trivariate optimal replacement policy for a deteriorating system based on cumulative damage and inspections

    International Nuclear Information System (INIS)

    Tsai, Hsin-Nan; Sheu, Shey-Huei; Zhang, Zhe George

    2017-01-01

    In this article, we study a trivariate replacement model for a deteriorating system consisting of two units. Failures of unit 1 can be classified into two types. Type I failure (minor failure) is fixed by a minimal repair and type II failure (catastrophic failure) is removed by a replacement. Both types of failures can only be detected through inspection. Each type I failure of unit 1 will result in a random amount of damage to unit 2 and the damages are cumulative. The probability of type I failure or type II failure is assumed to depend on the number of failures since the last replacement. We formulate a replacement policy based on the number of type I failures, the occurrence of the first type II failure, and the amount of accumulated damage. The system is thus replaced either preventively or correctively under any of the following four conditions, whichever occurs first: preventively (a) at the Nth type I failure, or (b) when the total damage of unit 2 exceeds a pre-specified level Z (but is less than the failure level l); and correctively (c) at the first type II failure, or (d) when the total damage of unit 2 exceeds the failure level l, where Z and l represent the thresholds of total damage for preventive and corrective replacement of unit 2, respectively. Although a type I failure can be fixed by a minimal repair, the operating period is stochastically decreasing and the repair time stochastically increasing as time goes on. The minimal total expected long-run net cost per unit time of the system is derived and a computational algorithm for determining the optimal policy is developed. A real-world application from the electric power industry is provided. Several past studies are shown to be special cases of our model. Finally, a numerical example is presented. - Highlights: • A trivariate replacement policy for a deteriorating system with two units is proposed. • A real-world application from the electric power industry is provided. • The
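
    The four replacement conditions can be sketched as a simple decision rule, with the corrective triggers checked first since they dominate. Threshold values and system states below are hypothetical:

```python
def replacement_action(n_type1, N, damage, Z, l_fail, type2_failure):
    """Decision rule implied by the abstract's four conditions: corrective
    triggers (c)/(d) dominate preventive triggers (a)/(b); otherwise the
    system continues after minimal repair.  Values used in the examples
    below are made up for illustration."""
    if type2_failure or damage >= l_fail:
        return "corrective replacement"     # (c) first type II failure / (d) damage >= l
    if n_type1 >= N or damage >= Z:
        return "preventive replacement"     # (a) Nth type I failure / (b) damage >= Z
    return "minimal repair / continue"

a1 = replacement_action(2, 5, 40.0, 60.0, 100.0, False)    # neither threshold hit
a2 = replacement_action(5, 5, 40.0, 60.0, 100.0, False)    # condition (a)
a3 = replacement_action(2, 5, 120.0, 60.0, 100.0, False)   # condition (d)
```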

  19. Policy Gradient SMDP for Resource Allocation and Routing in Integrated Services Networks

    Science.gov (United States)

    Vien, Ngo Anh; Viet, Nguyen Hoang; Lee, Seunggwan; Chung, Taechoong

    In this paper, we solve the call admission control (CAC) and routing problem in an integrated network that handles several classes of calls of different values and with different resource requirements. The problem of maximizing the average reward (or cost) of admitted calls per unit time is naturally formulated as a semi-Markov Decision Process (SMDP) problem, but is too complex to allow for an exact solution. Thus in this paper, a policy gradient algorithm, together with a decomposition approach, is proposed to find the dynamic (state-dependent) optimal CAC and routing policy within a parameterized policy space. To implement that gradient algorithm, we approximate the gradient of the average reward. Then, we present a simulation-based algorithm to estimate the approximate gradient of the average reward (called the GSMDP algorithm), using only a single sample path of the underlying Markov chain for the SMDP of the CAC and routing problem. The algorithm enhances performance in terms of convergence speed, rejection probability, robustness to changing arrival statistics, and overall received average revenue. Experimental simulations compare our method's performance with that of existing methods and show its robustness.
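
    The score-function (likelihood-ratio) idea behind such simulation-based policy-gradient estimators can be sketched on a one-step toy admission problem. This is a stand-in illustration with made-up rewards and a Bernoulli-logistic policy, not the paper's GSMDP algorithm:

```python
import math
import random

def admit_prob(theta):
    """Logistic parameterization of the probability of admitting a call."""
    return 1.0 / (1.0 + math.exp(-theta))

def gradient_estimate(theta, n_episodes=2000, seed=1):
    """Score-function (REINFORCE-style) estimate of d/dtheta E[reward]
    for a one-step toy CAC problem: admitting a call earns +1 with
    probability 0.7 and -1 otherwise; rejecting earns 0."""
    rng = random.Random(seed)
    p = admit_prob(theta)
    total = 0.0
    for _ in range(n_episodes):
        admit = rng.random() < p
        reward = (1.0 if rng.random() < 0.7 else -1.0) if admit else 0.0
        score = (1.0 - p) if admit else -p   # d log pi(action) / d theta
        total += reward * score
    return total / n_episodes

# At theta = 0 the true gradient is p*(1-p)*E[reward | admit] = 0.25 * 0.4 = 0.1,
# so the noisy estimate should come out positive: admitting pays off on average.
g = gradient_estimate(0.0)
```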

  20. Impacts of subsidy policies on vaccination decisions in contact networks

    Science.gov (United States)

    Zhang, Hai-Feng; Wu, Zhi-Xi; Xu, Xiao-Ke; Small, Michael; Wang, Lin; Wang, Bing-Hong

    2013-07-01

    To motivate more people to participate in vaccination campaigns, various subsidy policies are often supplied by government and the health sectors. However, these external incentives may also alter the vaccination decisions of the broader public, and hence the choice of incentive needs to be carefully considered. Since human behavior and the network-constrained interactions among individuals significantly impact the evolution of an epidemic, here we consider voluntary vaccination on human contact networks. To this end, two categories of typical subsidy policies are considered: (1) under the free subsidy policy, the total subsidy is distributed to a certain fraction of individuals, who are vaccinated at no personal cost, and (2) under the partial-offset subsidy policy, each vaccinated person is offset by a certain amount of subsidy. A vaccination decision model based on evolutionary game theory is established to study the effects of these different subsidy policies on disease control. Simulations suggest that, because the partial-offset subsidy policy encourages more people to take vaccination, its performance is significantly better than that of the free subsidy policy. However, an interesting phenomenon emerges in the partial-offset scenario: with a limited amount of total subsidy, a moderate subsidy rate for each vaccinated individual can guarantee the group-optimal vaccination, leading to the maximal social benefits, while such an optimal phenomenon is not evident for the free subsidy scenario.
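
    The effect of a partial-offset subsidy in such evolutionary-game models can be sketched with a Fermi imitation rule: a larger offset makes a vaccinated neighbor's payoff less negative, which raises the probability that an unvaccinated individual imitates the vaccination strategy. All payoffs, costs, and the noise parameter below are hypothetical:

```python
import math

def vaccinate_payoff(cost, subsidy_rate):
    """Payoff of vaccinating under a partial-offset subsidy: the vaccination
    cost is reduced by subsidy_rate * cost."""
    return -(1.0 - subsidy_rate) * cost

def fermi_adopt_prob(payoff_self, payoff_neighbor, kappa=0.1):
    """Fermi rule common in network vaccination games: the probability of
    imitating a neighbor's strategy grows with their payoff advantage;
    kappa sets the noise in the decision."""
    return 1.0 / (1.0 + math.exp((payoff_self - payoff_neighbor) / kappa))

# An infected, unvaccinated individual (payoff -1) compares against a
# vaccinated neighbor under a 10% vs. a 50% offset of a 0.8 vaccination cost.
p_low = fermi_adopt_prob(-1.0, vaccinate_payoff(0.8, 0.1))
p_high = fermi_adopt_prob(-1.0, vaccinate_payoff(0.8, 0.5))
# p_high > p_low: the larger offset makes imitation (vaccination) more likely.
```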