WorldWideScience

Sample records for optimal sampling policies

  1. Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2015-01-01

    Vol. 52, No. 2 (2015), pp. 419-440, ISSN 0021-9002. Grant - others: GA AV ČR (CZ) 171396. Institutional support: RVO:67985556. Keywords: Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality. Subject RIV: BC - Control Systems Theory. Impact factor: 0.665, year: 2015. http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

  2. Optimal patent policies: A survey

    DEFF Research Database (Denmark)

    Poulsen, Odile

    2002-01-01

    This paper surveys some of the patent literature; in particular, it focuses on optimal patent policies. We compare two situations: the first, where the government has only a single policy tool for designing the optimal patent policy, namely the optimal patent length; and the second, where the government uses two policy tools, the optimal breadth and length. We show that theoretical models give very different answers to what the optimal patent policy is. In particular, we show that the optimal patent policy depends, among other things, on the price elasticity of demand, the intersectoral elasticity of research outputs, as well as the degree of competition in the R&D sector. The actual law on intellectual property, which advocates a unique patent length of 20 years, is in general not supported by theoretical models.

  3. Optimal Policy in OG Models

    DEFF Research Database (Denmark)

    Ghiglino, Christian; Tvede, Mich

    for generations, through fiscal policy, i.e. monetary transfers and taxes. Both situations with and without time discounting are considered. It is shown that if the discount factor is sufficiently close to one then the optimal policy stabilizes the economy, i.e. the equilibrium path has the turnpike property...

  4. Optimal Policy in OG Models

    DEFF Research Database (Denmark)

    Ghiglino, Christian; Tvede, Mich

    2000-01-01

    for generations, through fiscal policy, i.e., monetary transfers and taxes. Situations both with and without time discounting are considered. It is shown that if the discount factor is sufficiently close to one then the optimal policy stabilizes the economy, i.e. the equilibrium path has the turnpike property...

  5. Optimal human capital policies

    Czech Academy of Sciences Publication Activity Database

    Boháček, Radim; Kapička, M.

    2008-01-01

    Vol. 55, No. 1 (2008), pp. 1-16, ISSN 0304-3932. Institutional research plan: CEZ:AV0Z70850503. Keywords: dynamic optimal taxation * income taxation. Subject RIV: AH - Economics. Impact factor: 1.429, year: 2008

  6. β-NMR sample optimization

    CERN Document Server

    Zakoucka, Eva

    2013-01-01

    During my summer student programme I was working on sample optimization for a new β-NMR project at the ISOLDE facility. The β-NMR technique is well established in solid-state physics and has only recently been introduced to applications in biochemistry and the life sciences. The β-NMR collaboration will be applying to the INTC committee in September for beam time for three nuclei: Cu, Zn and Mg. Sample optimization for Mg was already performed last year during the summer student programme; therefore, sample optimization for Cu and Zn had to be completed as well for the project proposal. My part in the project was to perform thorough literature research on techniques for studying Cu and Zn complexes under native conditions, to search for relevant binding candidates for Cu and Zn applicable to β-NMR, and eventually to evaluate selected binding candidates using UV-VIS spectrometry.

  7. Rollout sampling approximate policy iteration

    NARCIS (Netherlands)

    Dimitrakakis, C.; Lagoudakis, M.G.

    2008-01-01

    Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a...

  8. Optimal Temporal Policies in Fluid Milk Advertising

    OpenAIRE

    Vande Kamp, Philip R.; Kaiser, Harry M.

    1998-01-01

    This study develops an approach to obtain optimal temporal advertising strategies when consumers' response to advertising is asymmetric. Using this approach, optimal strategies for generic fluid milk advertising in New York City are determined. Results indicate that pulsed advertising policies are significantly more effective in increasing demand than a uniform advertising policy. Sensitivity analyses show that the optimal advertising policies are insensitive to reasonable variations in inter...

  9. Endogenous price flexibility and optimal monetary policy

    OpenAIRE

    Ozge Senay; Alan Sutherland

    2014-01-01

    Much of the literature on optimal monetary policy uses models in which the degree of nominal price flexibility is exogenous. There are, however, good reasons to suppose that the degree of price flexibility adjusts endogenously to changes in monetary conditions. This article extends the standard new Keynesian model to incorporate an endogenous degree of price flexibility. The model shows that endogenizing the degree of price flexibility tends to shift optimal monetary policy towards complete i...

  10. Optimization of Overflow Policies in Call Centers

    DEFF Research Database (Denmark)

    Koole, G.M.; Nielsen, B.F.; Nielsen, T.B.

    2015-01-01

    A Markov decision chain is used to determine the optimal policy. This policy considerably outperforms the ones used most often in practice, which use a fixed threshold. The present method can also be used for other call-center models and other situations where performance is based on actual waiting times...

  11. The optimal sampling of outsourcing product

    International Nuclear Information System (INIS)

    Yang Chao; Pei Jiacheng

    2014-01-01

    In order to improve quality and reduce cost, c = 0 sampling has been introduced for the inspection of outsourced product. According to the current quality level (p = 0.4%), we confirmed the optimal sampling plan, which is: Ac = 0; if N ≤ 3000, n = 55; if 3001 ≤ N ≤ 10000, n = 86; if N ≥ 10001, n = 108. Through analyzing the OC curve, we came to the conclusion that when N ≤ 3000, the protective ability of the optimal sampling plan for product quality is stronger than that of the current sampling plan. For the same 'consumer risk', the product quality under the optimal sampling plan is superior to that under the current plan. (authors)
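
    For a c = 0 plan the operating-characteristic (OC) curve has a particularly simple form: a lot is accepted only when the sample contains no defectives, so P(accept) = (1 - p)^n under the binomial approximation. A minimal sketch in Python (the sample sizes are the ones quoted above; the function names are ours):

        def oc_curve_c0(n, p):
            """Acceptance probability of a c = 0 plan: the lot passes only
            if none of the n sampled items is defective (binomial approx.)."""
            return (1 - p) ** n

        def sample_size(N):
            """Lot-size-dependent sample sizes quoted in the record above."""
            if N <= 3000:
                return 55
            if N <= 10000:
                return 86
            return 108

        # OC curve values at the stated current quality level p = 0.4%
        for N in (2000, 5000, 20000):
            n = sample_size(N)
            print(f"N={N:6d}  n={n:4d}  P(accept)={oc_curve_c0(n, 0.004):.3f}")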

  12. State dependent optimization of measurement policy

    Science.gov (United States)

    Konkarikoski, K.

    2010-07-01

    Measurements are the key to rational decision making. Measurement information generates value when it is applied in decision making. An investment cost and maintenance costs are associated with each component of the measurement system. Clearly, there is - under a given set of scenarios - a measurement setup that is optimal in expected (discounted) utility. This paper deals with how measurement policy optimization is affected by different system states and how this problem can be tackled.

  13. Transparency and corruption: an optimal taxation policy

    OpenAIRE

    Jellal, Mohamed; Bouzahzah, Mohamed

    2013-01-01

    Under Principal-Agent-Supervisor paradigm, we examine in this paper how a tax collection agency changes optimal schemes in order to lessen the occurrence of corruption between the tax collector and the taxpayer. Indeed, the Principal, who maximizes the expected net fiscal revenue, reacts by decreasing tax rates when the supervisor is likely to engage in corrupt transaction with taxpayer. Therefore, the optimal policy against collusion and corruption may explain the rational of the greater rel...

  14. State dependent optimization of measurement policy

    International Nuclear Information System (INIS)

    Konkarikoski, K

    2010-01-01

    Measurements are the key to rational decision making. Measurement information generates value when it is applied in decision making. An investment cost and maintenance costs are associated with each component of the measurement system. Clearly, there is - under a given set of scenarios - a measurement setup that is optimal in expected (discounted) utility. This paper deals with how measurement policy optimization is affected by different system states and how this problem can be tackled.

  15. Designing optimal sampling schemes for field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-10-01

    Full Text Available This is a presentation of a statistical method for deriving optimal spatial sampling schemes. The research focuses on ground verification of minerals derived from hyperspectral data. Spectral angle mapper (SAM) and spectral feature fitting (SFF...

  16. Optimal sampling schemes applied in geology

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2010-05-01

    Full Text Available [Presentation outline, "Optimal Sampling Schemes applied in Geology", Debba (CSIR), UP 2010: introduction to hyperspectral remote sensing; objective of Study 1; study area; data used; methodology; results; background and research question for Study 2; study area and data; methodology; results; conclusions.]

  17. Chaotic dynamics in optimal monetary policy

    Science.gov (United States)

    Gomes, O.; Mendes, V. M.; Mendes, D. A.; Sousa Ramos, J.

    2007-05-01

    There is by now a large consensus in modern monetary policy. This consensus has been built upon a dynamic general equilibrium model of optimal monetary policy as developed by, e.g., Goodfriend and King [NBER Macroeconomics Annual 1997, edited by B. Bernanke and J. Rotemberg (Cambridge, Mass.: MIT Press, 1997), pp. 231-282], Clarida et al. [J. Econ. Lit. 37, 1661 (1999)], Svensson [J. Mon. Econ. 43, 607 (1999)] and Woodford [Interest and Prices: Foundations of a Theory of Monetary Policy (Princeton, New Jersey: Princeton University Press, 2003)]. In this paper we extend the standard optimal monetary policy model by introducing nonlinearity into the Phillips curve. Under the specific form of nonlinearity proposed in our paper (which allows for convexity and concavity and secures closed-form solutions), we show that the introduction of a nonlinear Phillips curve into the structure of the standard model in a discrete-time and deterministic framework produces radical changes to the major conclusions regarding stability and the efficiency of monetary policy. We emphasize the following main results: (i) instead of a unique fixed point we end up with multiple equilibria; (ii) instead of saddle-path stability, for different sets of parameter values we may have saddle stability, totally unstable equilibria and chaotic attractors; (iii) for certain degrees of convexity and/or concavity of the Phillips curve, where endogenous fluctuations arise, one is able to encounter various results that seem intuitively correct. Firstly, when the Central Bank pays attention essentially to inflation targeting, the inflation rate has a lower mean and is less volatile; secondly, when the degree of price stickiness is high, the inflation rate displays a larger mean and higher volatility (but this is sensitive to the values given to the parameters of the model); and thirdly, the higher the target value of the output gap chosen by the Central Bank, the higher is the inflation rate and its...

  18. Optimal time points sampling in pathway modelling.

    Science.gov (United States)

    Hu, Shiyan

    2004-01-01

    Modelling cellular dynamics based on experimental data is at the heart of systems biology. Considerable progress has been made in dynamic pathway modelling as well as in the related parameter estimation. However, few of these efforts give consideration to the issue of optimal sampling-time selection for parameter estimation. Time-course experiments in molecular biology rarely produce large and accurate data sets, and the experiments involved are usually time-consuming and expensive. Therefore, approximating parameters for models from only a few available sampling points is of significant practical value. For signal transduction, the sampling intervals are usually not evenly distributed and are based on heuristics. In this paper, we investigate an approach to guide the process of selecting time points in an optimal way so as to minimize the variance of parameter estimates. In the method, we first formulate the problem as a nonlinear constrained optimization problem via maximum likelihood estimation. We then modify and apply a quantum-inspired evolutionary algorithm, which combines the advantages of both quantum computing and evolutionary computing, to solve the optimization problem. The new algorithm does not suffer from the difficulty of selecting good initial values or from getting stuck in local optima, problems that usually accompany conventional numerical optimization techniques. The simulation results indicate the soundness of the new method.
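
    The record does not reproduce the authors' formulation (maximum likelihood estimation solved with a quantum-inspired evolutionary algorithm), but the underlying idea of choosing sampling times to minimize the variance of parameter estimates can be illustrated with the classical D-optimal criterion: pick the time points that maximize the determinant of the Fisher information matrix. A hedged sketch for an illustrative exponential-decay model y(t) = a*exp(-b*t), not the paper's pathway model:

        import numpy as np
        from itertools import combinations

        def fisher_information(times, a, b, sigma=1.0):
            """Fisher information for y(t) = a*exp(-b*t) with i.i.d. Gaussian
            noise; rows of J are the sensitivities (dy/da, dy/db)."""
            J = np.array([[np.exp(-b * t), -a * t * np.exp(-b * t)] for t in times])
            return J.T @ J / sigma**2

        def d_optimal_times(candidates, k, a=1.0, b=0.5):
            """Exhaustively pick the k candidate times maximizing det(FIM)."""
            return max(combinations(candidates, k),
                       key=lambda ts: np.linalg.det(fisher_information(ts, a, b)))

        print(d_optimal_times(np.linspace(0.1, 10, 20), k=4))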

  19. The determination of optimal climate policy

    International Nuclear Information System (INIS)

    Aaheim, Asbjoern

    2010-01-01

    Analyses of the costs and benefits of climate policy, such as the Stern Review, evaluate alternative strategies to reduce greenhouse gas emissions by requiring that the cost of emission cuts in each and every year be covered by the associated value of avoided damage, discounted at an exogenously chosen rate. An alternative is to optimize abatement programmes towards a stationary state, where the concentrations of greenhouse gases are stabilized and shadow prices, including the rate of discount, are determined endogenously. This paper examines the properties of optimized stabilization. It turns out that the implications for the evaluation of climate policy are substantial when compared with evaluations of the present value of costs and benefits based on exogenously chosen shadow prices. Comparisons of discounted costs and benefits tend to exaggerate the importance of the choice of discount rate, while ignoring the importance of future abatement costs, which turn out to be essential for the optimal abatement path. Numerical examples suggest that early action may be more beneficial than indicated by comparisons of costs and benefits discounted at a rate chosen on the basis of current observations. (author)

  20. Vertical integration and optimal reimbursement policy.

    Science.gov (United States)

    Afendulis, Christopher C; Kessler, Daniel P

    2011-09-01

    Health care providers may vertically integrate not only to facilitate coordination of care, but also for strategic reasons that may not be in patients' best interests. Optimal Medicare reimbursement policy depends upon the extent to which each of these explanations is correct. To investigate, we compare the consequences of the 1997 adoption of prospective payment for skilled nursing facilities (SNF PPS) in geographic areas with high versus low levels of hospital/SNF integration. We find that SNF PPS decreased spending more in high integration areas, with no measurable consequences for patient health outcomes. Our findings suggest that integrated providers should face higher-powered reimbursement incentives, i.e., less cost-sharing. More generally, we conclude that purchasers of health services (and other services subject to agency problems) should consider the organizational form of their suppliers when choosing a reimbursement mechanism.

  1. Sample Adaptive Offset Optimization in HEVC

    Directory of Open Access Journals (Sweden)

    Yang Zhang

    2014-11-01

    Full Text Available As the next-generation video coding standard, High Efficiency Video Coding (HEVC) adopted many useful tools to improve coding efficiency. Sample Adaptive Offset (SAO) is a technique that reduces sample distortion by providing offsets to pixels in the in-loop filter. In SAO, the pixels in a Largest Coding Unit (LCU) are classified into several categories, and then categories and offsets are assigned based on Rate-Distortion Optimization (RDO) of the reconstructed pixels in the LCU. All pixels in an LCU undergo the same SAO process; however, transform and inverse transform make the distortion of pixels at Transform Unit (TU) edges larger than the distortion inside the TU, even after deblocking filtering (DF) and SAO. The SAO categories can also be refined, since the existing classification is not appropriate for many cases. This paper proposes a TU edge offset mode and a category refinement for SAO in HEVC. Experimental results show that these two optimizations achieve BD-rate gains of -0.13 and -0.2, respectively, compared with the SAO in HEVC. The proposed algorithm using both optimizations achieves a -0.23 BD-rate gain compared with the SAO in HEVC, a 47 % increase, with nearly no increase in coding time.
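
    For context, SAO's edge-offset mode classifies each pixel by comparing it with two neighbours along a chosen direction and then adds a category-dependent offset. A simplified sketch of that classification for a single horizontal row (real HEVC SAO also has a band-offset mode, four edge directions and RDO-derived offsets; the values below are illustrative):

        def eo_category(a, c, b):
            """HEVC SAO edge-offset category of pixel c with neighbours a, b."""
            if c < a and c < b:
                return 1  # local minimum
            if (c < a and c == b) or (c == a and c < b):
                return 2  # concave corner
            if (c > a and c == b) or (c == a and c > b):
                return 3  # convex corner
            if c > a and c > b:
                return 4  # local maximum
            return 0      # monotone region: no offset applied

        def apply_sao_eo(row, offsets):
            """Apply edge offsets along a row; offsets maps category -> offset."""
            out = list(row)
            for i in range(1, len(row) - 1):
                cat = eo_category(row[i - 1], row[i], row[i + 1])
                if cat:
                    out[i] = row[i] + offsets[cat]
            return out

        print(apply_sao_eo([10, 8, 10, 10, 12, 11], {1: 2, 2: 1, 3: -1, 4: -2}))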

  2. Resolution optimization with irregularly sampled Fourier data

    International Nuclear Information System (INIS)

    Ferrara, Matthew; Parker, Jason T; Cheney, Margaret

    2013-01-01

    Image acquisition systems such as synthetic aperture radar (SAR) and magnetic resonance imaging often measure irregularly spaced Fourier samples of the desired image. In this paper we show the relationship between sample locations, their associated backprojection weights, and image resolution as characterized by the resulting point spread function (PSF). Two new methods for computing data weights, based on different optimization criteria, are proposed. The first method, which solves a maximal-eigenvector problem, optimizes a PSF-derived resolution metric which is shown to be equivalent to the volume of the Cramer–Rao (positional) error ellipsoid in the uniform-weight case. The second approach utilizes as its performance metric the Frobenius error between the PSF operator and the ideal delta function, and is an extension of a previously reported algorithm. Our proposed extension appropriately regularizes the weight estimates in the presence of noisy data and eliminates the superfluous issue of image discretization in the choice of data weights. The Frobenius-error approach results in a Tikhonov-regularized inverse problem whose Tikhonov weights are dependent on the locations of the Fourier data as well as the noise variance. The two new methods are compared against several state-of-the-art weighting strategies for synthetic multistatic point-scatterer data, as well as an ‘interrupted SAR’ dataset representative of in-band interference commonly encountered in very high frequency radar applications. (paper)

  3. US fiscal regimes and optimal monetary policy

    NARCIS (Netherlands)

    Mavromatis, K.

    2014-01-01

    Fiscal policy in the US has been documented to have been the leading authority in the ‘60s and the ‘70s (active fiscal policy), while committing to make the necessary fiscal adjustments following Volcker’s appointment (passive fiscal policy). Moreover, while passive, US fiscal policy has at times...

  4. OPTIMIZATION OF THE RUSSIAN MACROECONOMIC POLICY FOR 2016-2020

    Directory of Open Access Journals (Sweden)

    Gilmundinov V. M.

    2016-12-01

    Full Text Available This paper is concerned with the methodological issues of economic policy elaboration and the optimization of economic policy instruments' parameters. The relevance of this research stems from the growing complexity of social and economic systems, the important role of the state in their functioning, and the multiplicity of economic policy targets relative to the limited number of instruments. Considering the great variety of internal and external restrictions on the social and economic development of modern Russia, it has a wide range of applications. The key purpose of this study is to extend the dynamic econometric general equilibrium input-output model of the Russian economy with a sub-model of economic policy optimization. The sub-model of economic policy optimization allows estimating the impact of economic policy measures on target indicators as well as defining optimal values of their parameters. For this purpose, we extend Robert Mundell's approach by considering dynamic optimization and a wider range of economic policy targets and measures. Use of a general equilibrium input-output model allows considering the impact of economic policy on different aggregate markets and sectors. Application of the suggested approach allows us to develop a multi-variant forecast for the Russian economy for 2016-2020, define optimal values of monetary policy parameters, and compare the considered variants by the size of social losses. The obtained results can be further used in theoretical as well as applied research concerned with economic policy elaboration and the forecasting of social and economic development.

  5. Optimal Policy under Restricted Government Spending

    DEFF Research Database (Denmark)

    Sørensen, Anders

    2006-01-01

    Welfare ranking of policy instruments is addressed in a two-sector Ramsey model with monopoly pricing in one sector as the only distortion. When government spending is restricted, i.e. when a government is unable or unwilling to finance the required costs of implementing the optimum policy... effectiveness can exceed the welfare loss from introducing new distortions. Moreover, it is found that the investment subsidy is gradually phased out of the welfare-maximizing policy, which may be a policy combining the two subsidies, when the level of government spending is increased. Keywords: welfare ranking, indirect and direct policy instruments, restricted government spending. JEL: E61, O21, O41

  6. Extreme Trust Region Policy Optimization for Active Object Recognition.

    Science.gov (United States)

    Liu, Huaping; Wu, Yupei; Sun, Fuchun

    2018-06-01

    In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is represented by an extreme learning machine and therefore leads to an efficient optimization algorithm. The experimental results on a publicly available data set show the advantages of the developed extreme trust region optimization method.

  7. Dynamic mobility applications policy analysis : policy and institutional issues for intelligent network flow optimization (INFLO).

    Science.gov (United States)

    2014-12-01

    The report documents policy considerations for the Intelligent Network Flow Optimization (INFLO) connected vehicle applications bundle. INFLO aims to optimize network flow on freeways and arterials by informing motorists of existing and impen...

  8. Financing and funding health care: Optimal policy and political implementability.

    Science.gov (United States)

    Nuscheler, Robert; Roeder, Kerstin

    2015-07-01

    Health care financing and funding are usually analyzed in isolation. This paper combines the corresponding strands of the literature and thereby advances our understanding of the important interaction between them. We investigate the impact of three modes of health care financing, namely, optimal income taxation, proportional income taxation, and insurance premiums, on optimal provider payment and on the political implementability of optimal policies under majority voting. Considering a standard multi-task agency framework we show that optimal health care policies will generally differ across financing regimes when the health authority has redistributive concerns. We show that health care financing also has a bearing on the political implementability of optimal health care policies. Our results demonstrate that an isolated analysis of (optimal) provider payment rests on very strong assumptions regarding both the financing of health care and the redistributive preferences of the health authority. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Rational optimization of reliability and safety policies

    International Nuclear Information System (INIS)

    Melchers, Robert E.

    2001-01-01

    Optimization of structures for design has a long history, including optimization using numerical methods and optimality criteria. Much of this work has considered a subset of the complete design optimization problem--that of the technical issues alone. The more general problem must consider also non-technical issues and, importantly, the interplay between them and the parameters which influence them. Optimization involves optimal setting of design or acceptance criteria and, separately, optimal design within the criteria. In the modern context of probability based design codes this requires probabilistic acceptance criteria. The determination of such criteria involves more than the nominal code failure probability approach used for design code formulation. A more general view must be taken and a clear distinction must be made between those matters covered by technical reliability and non-technical reliability. The present paper considers this issue and outlines a framework for rational optimization of structural and other systems given the socio-economic and political systems within which optimization must be performed

  10. Fear of Floating: An optimal discretionary monetary policy analysis

    OpenAIRE

    Madhavi Bokil

    2005-01-01

    This paper explores the idea that “Fear of Floating” and the accompanying pro-cyclical interest rate policies observed in the case of some emerging market economies may be justified as an optimal discretionary monetary policy response to shocks. The paper also examines how differences in monetary policies may lead to different degrees of this fear. These questions are addressed with a small open economy, new-Keynesian model with endogenous capital accumulation and sticky prices. The economy ...

  11. Optimization of sampling parameters for standardized exhaled breath sampling.

    Science.gov (United States)

    Doran, Sophie; Romano, Andrea; Hanna, George B

    2017-09-05

    The lack of standardization of breath sampling is a major contributing factor to the poor repeatability of results and hence represents a barrier to the adoption of breath tests in clinical practice. On-line and bag breath sampling have advantages but do not suit multicentre clinical studies, whereas storage and robust transport are essential for the conduct of wide-scale studies. Several devices have been developed to control sampling parameters and to concentrate volatile organic compounds (VOCs) onto thermal desorption (TD) tubes and subsequently transport those tubes for laboratory analysis. We conducted three experiments to investigate (i) the fraction of breath sampled (whole vs. lower expiratory exhaled breath); (ii) breath sample volume (125, 250, 500 and 1000 ml) and (iii) breath sample flow rate (400, 200, 100 and 50 ml/min). The target VOCs were acetone and potential volatile biomarkers for oesophago-gastric cancer belonging to the aldehyde, fatty acid and phenol chemical classes. We also examined the collection execution time and the impact of environmental contamination. The experiments showed that the use of exhaled breath-sampling devices requires the selection of optimum sampling parameters. The increase in sample volume improved the levels of VOCs detected. However, the influence of the fraction of exhaled breath and the flow rate depends on the target VOCs measured. The concentration of potential volatile biomarkers for oesophago-gastric cancer was not significantly different between whole and lower-airway exhaled breath. While the recovery of phenols and acetone from TD tubes was lower when breath sampling was performed at a higher flow rate, other VOCs were not affected. A dedicated 'clean air supply' overcomes the contamination from ambient air, but the breath collection device itself can be a source of contaminants. In clinical studies using VOCs to diagnose gastro-oesophageal cancer, the optimum parameters are 500 ml sample volume...

  12. Optimal government policies in models with heterogeneous agents

    Czech Academy of Sciences Publication Activity Database

    Boháček, Radim; Kejak, Michal

    No. 272 (2005), pp. 1-55, ISSN 1211-3298. Institutional research plan: CEZ:AV0Z70850503. Keywords: optimal macroeconomic policy * optimal taxation * distribution of wealth and income. Subject RIV: AH - Economics. http://www.cerge-ei.cz/pdf/wp/Wp272.pdf

  13. Optimal sampling designs for large-scale fishery sample surveys in Greece

    Directory of Open Access Journals (Sweden)

    G. BAZIGOS

    2007-12-01

    The paper deals with the optimization of the following three large-scale sample surveys: the biological sample survey of commercial landings (BSCL), the experimental fishing sample survey (EFSS), and the commercial landings and effort sample survey (CLES).

  14. A bivariate optimal replacement policy for a multistate repairable system

    International Nuclear Information System (INIS)

    Zhang Yuanlin; Yam, Richard C.M.; Zuo, Ming J.

    2007-01-01

    In this paper, a deteriorating simple repairable system with k+1 states, including k failure states and one working state, is studied. It is assumed that the system after repair is not 'as good as new' and the deterioration of the system is stochastic. We consider a bivariate replacement policy, denoted by (T,N), in which the system is replaced when its working age has reached T or the number of failures it has experienced has reached N, whichever occurs first. The objective is to determine the optimal replacement policy (T,N)* such that the long-run expected profit per unit time is maximized. The explicit expression of the long-run expected profit per unit time is derived and the corresponding optimal replacement policy can be determined analytically or numerically. We prove that the optimal policy (T,N)* is better than the optimal policy N* for a multistate simple repairable system. We also show that a general monotone process model for a multistate simple repairable system is equivalent to a geometric process model for a two-state simple repairable system in the sense that they have the same structure for the long-run expected profit (or cost) per unit time and the same optimal policy. Finally, a numerical example is given to illustrate the theoretical results
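
    The explicit profit expression is not reproduced in the record, but any candidate (T, N) pair can be evaluated numerically by renewal-reward simulation: run one cycle until the working age reaches T or the N-th failure occurs, whichever comes first, then replace. A sketch under made-up assumptions (exponential lifetimes whose failure rate grows geometrically after each repair, standing in for the 'not as good as new' deterioration; all rates and costs are illustrative):

        import random

        def cycle(T, N, rate=0.1, ratio=1.2, reward=10.0,
                  c_repair=20.0, c_replace=200.0):
            """One replacement cycle under policy (T, N); repairs are
            instantaneous and make the next lifetime stochastically shorter."""
            age, failures = 0.0, 0
            while failures < N:
                life = random.expovariate(rate * ratio**failures)
                if age + life >= T:      # working-age limit T reached first
                    age = T
                    break
                age += life
                failures += 1
            return reward * age - c_repair * failures - c_replace, age

        def long_run_profit_rate(T, N, n_cycles=100_000):
            """Renewal-reward estimate of expected profit per unit time."""
            profit = time = 0.0
            for _ in range(n_cycles):
                p, t = cycle(T, N)
                profit += p
                time += t
            return profit / time

        print(long_run_profit_rate(T=30.0, N=3))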

  15. Carbon Sequestration and Optimal Climate Policy

    International Nuclear Information System (INIS)

    Grimaud, Andre; Rouge, Luc

    2009-01-01

    We present an endogenous growth model in which the use of a non-renewable natural resource generates carbon-dioxide emissions that can be partly sequestered. This approach breaks with the systematic link between resource use and pollution emission. The accumulated stock of remaining emissions has a negative impact on household utility and corporate productivity. While sequestration quickens the optimal extraction rate, it can also generate higher emissions in the short run. It also has an adverse effect on economic growth. We study the impact of a carbon tax: the level of the tax has an effect in our model, its optimal level is positive, and it can be interpreted ex post as a decreasing ad valorem tax on the resource

  16. Optimal sampling strategy for data mining

    International Nuclear Information System (INIS)

    Ghaffar, A.; Shahbaz, M.; Mahmood, W.

    2013-01-01

    Latest technologies like the Internet, corporate intranets, data warehouses, ERP systems, satellites, digital sensors, embedded systems and mobile networks are all generating such a massive amount of data that it is getting very difficult to analyze and understand it all, even using data mining tools. Huge datasets are becoming a difficult challenge for classification algorithms. With increasing amounts of data, data mining algorithms are getting slower and analysis is getting less interactive. Sampling can be a solution: using a fraction of computing resources, sampling can often provide the same level of accuracy. The process of sampling requires much care because there are many factors involved in the determination of the correct sample size. The approach proposed in this paper tries to find a solution to this problem. Based on a statistical formula, after setting some parameters, it returns a sample size called the 'sufficient sample size', which is then selected through probability sampling. Results indicate the usefulness of this technique in coping with the problem of huge datasets. (author)
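
    The abstract does not state which statistical formula is used, so the sketch below substitutes a common choice with the same shape: Cochran's sample size for estimating a proportion, with a finite-population correction. Set a few parameters, get back a 'sufficient sample size':

        from math import ceil

        def sufficient_sample_size(N, z=1.96, p=0.5, e=0.05):
            """Cochran's formula with finite-population correction.
            N: population (dataset) size
            z: normal quantile for the confidence level (1.96 ~ 95%)
            p: anticipated proportion (0.5 is the conservative worst case)
            e: tolerated margin of error"""
            n0 = z**2 * p * (1 - p) / e**2          # infinite-population size
            return ceil(n0 / (1 + (n0 - 1) / N))    # finite-population correction

        print(sufficient_sample_size(1_000_000))    # -> 385 records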

  17. Sample preparation optimization in fecal metabolic profiling.

    Science.gov (United States)

    Deda, Olga; Chatziioannou, Anastasia Chrysovalantou; Fasoula, Stella; Palachanis, Dimitris; Raikos, Νicolaos; Theodoridis, Georgios A; Gika, Helen G

    2017-03-15

    Metabolomic analysis of feces can provide useful insight into the metabolic status, the health/disease state of the human/animal, and the symbiosis with the gut microbiome. As a result, there has recently been increased interest in the application of holistic analysis of feces for biomarker discovery. For metabolomics applications, the sample preparation process used prior to the analysis of fecal samples is of high importance, as it greatly affects the obtained metabolic profile, especially since feces as a matrix vary in their physicochemical characteristics and molecular content. However, there is still little information in the literature and a lack of a universal approach to sample treatment for fecal metabolic profiling. The scope of the present work was to study the conditions for sample preparation of rat feces with the ultimate goal of acquiring comprehensive metabolic profiles, either untargeted by NMR spectroscopy and GC-MS or targeted by HILIC-MS/MS. A fecal sample pooled from male and female Wistar rats was extracted under various conditions by modifying the pH value, the nature of the organic solvent, and the sample weight to solvent volume ratio. It was found that a 1/2 (w_f/v_s) ratio provided the highest number of metabolites under neutral and basic conditions in both untargeted profiling techniques. Concerning LC-MS profiles, neutral acetonitrile and propanol provided higher signals and wide metabolite coverage, though extraction efficiency is metabolite dependent. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Optimal relaxed causal sampler using sampled-data system theory

    NARCIS (Netherlands)

    Shekhawat, Hanumant; Meinsma, Gjerrit

    This paper studies the design of an optimal relaxed causal sampler using sampled-data system theory. A lifted frequency domain approach is used to obtain the existence conditions and the optimal sampler. A state space formulation of the results is also provided. The resulting optimal relaxed causal...

  19. Optimizing sampling approaches along ecological gradients

    DEFF Research Database (Denmark)

    Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel

    2016-01-01

    1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...

  20. Trade Liberalization and Optimal Environmental Policies in Vertical Related Markets

    Directory of Open Access Journals (Sweden)

    Yan-Shu Lin

    2012-12-01

    Full Text Available This paper establishes a symmetric two-country model with vertically related markets. In the downstream market, there is one firm in each country selling a homogeneous good, whose production generates pollution, to its home and the foreign markets à la Brander (1981). In the intermediate-good market, there is also one upstream firm in each country, supplying the intermediate good only to its own country's downstream market. The upstream firms can choose either price or quantity to maximize their profits. With this setting, the paper examines the optimal environmental policy and how it is affected by the tariff on the final good. It is found that, under free trade, the optimal final-good output with an imperfect intermediate-good market will be at the same level as that with a perfect intermediate-good market once the optimal emission tax is imposed. The optimal environmental tax is smaller, and the optimal environmental policy is less likely to be a green strategy under trade liberalization, if the intermediate-good market is imperfectly rather than perfectly competitive. On the other hand, the optimal environmental tax is necessarily higher if the upstream firm chooses price rather than quantity. Moreover, the optimal environmental policy is less likely to be a green strategy under trade liberalization if the upstream firms choose quantity rather than price to maximize their profits.

  1. Optimal Replacement and Management Policies for Beef Cows

    OpenAIRE

    W. Marshall Frasier; George H. Pfeiffer

    1994-01-01

    Beef cow replacement studies have not reflected the interaction between herd management and the culling decision. We demonstrate techniques for modeling optimal beef cow replacement intervals and discrete management policies by incorporating the dynamic effects of management on future productivity when biological response is uncertain. Markovian decision analysis is used to identify optimal beef cow management on a ranch typical of the Sandhills region of Nebraska. Issues of breeding season l...

  2. A proposal of optimal sampling design using a modularity strategy

    Science.gov (United States)

    Simone, A.; Giustolisi, O.; Laucelli, D. B.

    2016-08-01

    Real water distribution networks (WDNs) contain thousands of nodes, and the optimal placement of pressure and flow observations is a relevant issue for different management tasks. The planning of pressure observations in terms of spatial distribution and number is named sampling design, and it has historically been faced with model calibration in mind. Nowadays, the design of system monitoring is a relevant issue for water utilities, e.g., in order to manage background leakages, to detect anomalies and bursts, to guarantee service quality, etc. In recent years, the optimal location of flow observations, related to the design of optimal district metering areas (DMAs) and to leakage management purposes, has been faced considering optimal network segmentation and the modularity index using a multiobjective strategy. Optimal network segmentation is the basis to identify network modules by means of optimal conceptual cuts, which are the candidate locations of closed gates or flow meters creating the DMAs. Starting from the WDN-oriented modularity index, as a metric for WDN segmentation, this paper proposes a new way to perform the sampling design, i.e., the optimal location of pressure meters, using a newly developed sampling-oriented modularity index. The strategy optimizes the pressure monitoring system mainly based on network topology and on weights assigned to pipes according to the specific technical tasks. A multiobjective optimization minimizes the cost of pressure meters while maximizing the sampling-oriented modularity index. The methodology is presented and discussed using the Apulian and Exnet networks.

  3. Storage Policies and Optimal Shape of a Storage System

    NARCIS (Netherlands)

    Zaerpour, N.; De Koster, René; Yu, Yugang

    2013-01-01

    The response time of a storage system is mainly influenced by its shape (configuration), the storage assignment and retrieval policies, and the location of the input/output (I/O) points. In this paper, we show that the optimal shape of a storage system, which minimises the response time for single

  4. Handling Practicalities in Agricultural Policy Optimization for Water Quality Improvements

    Science.gov (United States)

    Bilevel and multi-objective optimization methods are often useful for spatially targeting agri-environmental policy throughout a watershed. This type of problem is complex and involves a number of practicalities: (i) a large number of decision variables, (ii) at least two inte...

  5. Optimal Control via Reinforcement Learning with Symbolic Policy Approximation

    NARCIS (Netherlands)

    Kubalík, Jiří; Alibekov, Eduard; Babuska, R.; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    Model-based reinforcement learning (RL) algorithms can be used to derive optimal control laws for nonlinear dynamic systems. With continuous-valued state and input variables, RL algorithms have to rely on function approximators to represent the value function and policy mappings. This paper...

  6. Using remotely-sensed data for optimal field sampling

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-09-01

    Full Text Available Statistics is the science pertaining to the collection, summary, analysis, interpretation and presentation of data. It is often impractical... studies are: where to sample, what to sample and how many samples to obtain. Conventional sampling techniques are not always suitable in environmental studies, and scientists have explored the use of remotely-sensed data as ancillary information to aid...

  7. Optimization of ACC system spacing policy on curved highway

    Science.gov (United States)

    Ma, Jun; Qian, Kun; Gong, Zaiyan

    2017-05-01

    The paper optimizes the original spacing policy adopting a variable time headway (VTH) and proposes introducing the road curvature K into the spacing policy, to cope with following the wrong vehicle, or failing to follow the target vehicle, owing to radar limitations on curves in an ACC system. By utilizing MATLAB/Simulink, an automobile longitudinal dynamics model is established. Finally, the paper sets up three common cases, in which the vehicle ahead runs at a uniform velocity, runs at an accelerated velocity, or brakes suddenly; simulates these cases on curves with different curvatures; analyzes the curve spacing policy from the perspective of safety and vehicle-following efficiency; and draws a conclusion on whether the optimization scheme is effective.
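
    The exact curvature-aware spacing law is not given in the record, so the sketch below simply appends a hypothetical curvature term to a standard variable-time-headway policy; the functional form and every coefficient are illustrative assumptions:

        def desired_spacing(v, dv, K, d0=2.0, th0=1.5, c_v=0.05, c_K=50.0):
            """Hypothetical curvature-aware VTH spacing policy.
            v:  ego speed [m/s]; dv: closing speed to the leader [m/s]
            K:  road curvature [1/m] (0 on a straight road)
            th0 - c_v*dv is the usual VTH term; the c_K*K*v term widens
            the gap on tight curves to offset radar field-of-view limits."""
            th = max(0.5, th0 - c_v * dv)   # clamp the headway to stay positive
            return d0 + th * v + c_K * K * v

        print(desired_spacing(v=25.0, dv=2.0, K=0.0))    # straight road
        print(desired_spacing(v=25.0, dv=2.0, K=0.002))  # 500 m radius curve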

  8. The impact of uncertainty on optimal emission policies

    Science.gov (United States)

    Botta, Nicola; Jansson, Patrik; Ionescu, Cezar

    2018-05-01

    We apply a computational framework for specifying and solving sequential decision problems to study the impact of three kinds of uncertainties on optimal emission policies in a stylized sequential emission problem. We find that uncertainties about the implementability of decisions on emission reductions (or increases) have a greater impact on optimal policies than uncertainties about the availability of effective emission reduction technologies and uncertainties about the implications of trespassing critical cumulated emission thresholds. The results show that uncertainties about the implementability of decisions on emission reductions (or increases) call for more precautionary policies. In other words, delaying emission reductions to the point in time when effective technologies will become available is suboptimal when these uncertainties are accounted for rigorously. By contrast, uncertainties about the implications of exceeding critical cumulated emission thresholds tend to make early emission reductions less rewarding.

  9. On Optimal Policies for Network-Coded Cooperation

    DEFF Research Database (Denmark)

    Khamfroush, Hana; Roetter, Daniel Enrique Lucani; Pahlevani, Peyman

    2015-01-01

    Network-coded cooperative communication (NC-CC) has been proposed and evaluated as a powerful technology that can provide a better quality of service in next-generation wireless systems, e.g., D2D communications. Previous contributions have focused on performance evaluation of NC-CC scenarios rather than on searching for optimal policies that can minimize the total cost of reliable packet transmission. We break from this trend by initially analyzing the optimal design of NC-CC for a wireless network with one source, two receivers, and half-duplex erasure channels. The problem is modeled as a special case of a Markov decision process (MDP), called a stochastic shortest path (SSP) problem, and is solved for any field size, arbitrary number of packets, and arbitrary erasure probabilities of the channels. The proposed MDP solution results in an optimal transmission policy per time slot, and we use...
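
    The paper's state space and costs are not reproduced here, but the SSP machinery itself is standard: value iteration over an MDP with a zero-cost absorbing goal state. A generic sketch with a toy two-packet transmission model (the transition model and costs are invented for illustration, not taken from the paper):

        def ssp_value_iteration(states, actions, P, cost, goal, eps=1e-9):
            """Value iteration for a stochastic shortest path MDP.
            P[s][a]: list of (next_state, prob); cost[s][a]: immediate cost;
            goal: absorbing state with zero cost."""
            V = {s: 0.0 for s in states}
            while True:
                delta = 0.0
                for s in states:
                    if s == goal:
                        continue
                    best = min(cost[s][a] + sum(pr * V[s2] for s2, pr in P[s][a])
                               for a in actions[s])
                    delta = max(delta, abs(best - V[s]))
                    V[s] = best
                if delta < eps:
                    return V

        # Toy model: deliver 2 packets; each 'send' succeeds w.p. 0.8
        states = [2, 1, 0]                      # packets still to deliver
        actions = {2: ['send'], 1: ['send'], 0: []}
        P = {2: {'send': [(1, 0.8), (2, 0.2)]},
             1: {'send': [(0, 0.8), (1, 0.2)]}}
        cost = {2: {'send': 1.0}, 1: {'send': 1.0}}
        print(ssp_value_iteration(states, actions, P, cost, goal=0))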

  10. Optimal sampling schemes for vegetation and geological field visits

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2012-07-01

    Full Text Available The presentation made to Wits Statistics Department was on common classification methods used in the field of remote sensing, and the use of remote sensing to design optimal sampling schemes for field visits with applications in vegetation...

  11. Using remote sensing images to design optimal field sampling schemes

    CSIR Research Space (South Africa)

    Debba, Pravesh

    2008-08-01

    Full Text Available [Slide headings: sampling schemes case studies; optimized field sampling representing the overall distribution of a particular mineral; deriving optimal exploration target zones.] Continuum removal for vegetation [13, 27, 46]: the convex hull transform is a method... of normalizing spectra [16, 41]. The convex hull technique is analogous to fitting a rubber band over a spectrum to form a continuum. Figure 5 shows the concept of the convex hull transform. The difference between the hull and the original spectrum...

  12. Sampling optimization for printer characterization by direct search.

    Science.gov (United States)

    Bianco, Simone; Schettini, Raimondo

    2012-12-01

    Printer characterization usually requires many printer inputs and corresponding color measurements of the printed outputs. In this brief, a sampling optimization for printer characterization on the basis of direct search is proposed to maintain high color accuracy with a reduction in the number of characterization samples required. The proposed method is able to match a given level of color accuracy requiring, on average, a characterization set cardinality which is almost one-fourth of that required by the uniform sampling, while the best method in the state of the art needs almost one-third. The number of characterization samples required can be further reduced if the proposed algorithm is coupled with a sequential optimization method that refines the sample values in the device-independent color space. The proposed sampling optimization method is extended to deal with multiple substrates simultaneously, giving statistically better colorimetric accuracy (at the α = 0.05 significance level) than sampling optimization techniques in the state of the art optimized for each individual substrate, thus allowing use of a single set of characterization samples for multiple substrates.

  13. An optimal maintenance policy for machine replacement problem using dynamic programming

    OpenAIRE

    Mohsen Sadegh Amalnik; Morteza Pourgharibshahi

    2017-01-01

    In this article, we present an acceptance sampling plan for the machine replacement problem based on a backward dynamic programming model. Discounted dynamic programming is used to solve a two-state machine replacement problem. We plan to design a model for maintenance by considering the quality of the item produced. The purpose of the proposed model is to determine the optimal threshold policy for maintenance in a finite time horizon. We create a decision tree based on a sequential sampling inc...

  14. Off-Policy Reinforcement Learning: Optimal Operational Control for Two-Time-Scale Industrial Processes.

    Science.gov (United States)

    Li, Jinna; Kiumarsi, Bahare; Chai, Tianyou; Lewis, Frank L; Fan, Jialu

    2017-12-01

    Industrial flow lines are composed of unit processes operating on a fast time scale and performance measurements known as operational indices measured at a slower time scale. This paper presents a model-free optimal solution to a class of two time-scale industrial processes using off-policy reinforcement learning (RL). First, the lower-layer unit process control loop with a fast sampling period and the upper-layer operational index dynamics at a slow time scale are modeled. Second, a general optimal operational control problem is formulated to optimally prescribe the set-points for the unit industrial process. Then, a zero-sum game off-policy RL algorithm is developed to find the optimal set-points by using data measured in real-time. Finally, a simulation experiment is employed for an industrial flotation process to show the effectiveness of the proposed method.

  15. An optimal maintenance policy for machine replacement problem using dynamic programming

    Directory of Open Access Journals (Sweden)

    Mohsen Sadegh Amalnik

    2017-06-01

    Full Text Available In this article, we present an acceptance sampling plan for the machine replacement problem based on a backward dynamic programming model. Discounted dynamic programming is used to solve a two-state machine replacement problem. We plan to design a model for maintenance by considering the quality of the item produced. The purpose of the proposed model is to determine the optimal threshold policy for maintenance in a finite time horizon. We create a decision tree based on sequential sampling, with the actions renew, repair and do nothing, and wish to achieve an optimal threshold for making decisions, i.e., to renew, repair or continue production, in order to minimize the expected cost. Results show that the optimal policy is sensitive to the data, namely to the probability of defective machines and to the parameters defined in the model. This can be clearly demonstrated by a sensitivity analysis technique.
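
    A hedged sketch of the backward recursion described above, with two machine states and the three actions named in the abstract (do nothing, repair, renew); the transition probabilities and costs are placeholders, not the paper's data:

        # Illustrative two-state machine: 0 = good, 1 = deteriorated.
        P_DET = 0.3                    # P(good -> deteriorated) under 'nothing'
        ACT_COST = {'nothing': 0.0, 'repair': 40.0, 'renew': 100.0}
        RUN_COST = {0: 10.0, 1: 60.0}  # per-period operating cost by state

        def transitions(s, a):
            """Next-state distribution: renew is perfect, repair imperfect."""
            if a == 'renew':
                return [(0, 1.0)]
            if a == 'repair':
                return [(0, 0.8), (1, 0.2)]
            return [(0, 1 - P_DET), (1, P_DET)] if s == 0 else [(1, 1.0)]

        def backward_dp(horizon, beta=0.95):
            """Finite-horizon backward DP; returns action table policy[t][s]."""
            V = {0: 0.0, 1: 0.0}       # terminal value
            policy = {}
            for t in reversed(range(horizon)):
                q = {s: {a: ACT_COST[a] + RUN_COST[s]
                         + beta * sum(pr * V[s2] for s2, pr in transitions(s, a))
                         for a in ACT_COST} for s in (0, 1)}
                policy[t] = {s: min(q[s], key=q[s].get) for s in (0, 1)}
                V = {s: q[s][policy[t][s]] for s in (0, 1)}
            return policy

        print(backward_dp(horizon=5)[0])  # first-period action for each state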

  16. Optimal fleet conversion policy from a life cycle perspective

    International Nuclear Information System (INIS)

    Hyung Chul Kim; Ross, M.H.; Keoleian, G.A.

    2004-01-01

    Vehicles typically deteriorate with accumulating mileage and emit more tailpipe air pollutants per mile. Although incentive programs for scrapping old, high-emitting vehicles have been implemented to reduce urban air pollutants and greenhouse gases, these policies may create additional sales of new vehicles as well. From a life cycle perspective, the emissions from both the additional vehicle production and scrapping need to be addressed when evaluating the benefits of scrapping older vehicles. This study explores an optimal fleet conversion policy based on mid-sized internal combustion engine vehicles in the US, defined as one that minimizes total life cycle emissions from the entire fleet of new and used vehicles. To describe vehicles' lifetime emission profiles as functions of accumulated mileage, a series of life cycle inventories characterizing environmental performance for vehicle production, use, and retirement was developed for each model year between 1981 and 2020. A simulation program is developed to investigate ideal and practical fleet conversion policies separately for three regulated pollutants (CO, NMHC, and NOx) and for CO2. According to the simulation results, accelerated scrapping policies are generally recommended to reduce regulated emissions, but they may increase greenhouse gases. Multi-objective analysis based on economic valuation methods was used to investigate trade-offs among emissions of different pollutants for optimal fleet conversion policies. (author)

  17. Input-output interactions and optimal monetary policy

    DEFF Research Database (Denmark)

    Petrella, Ivan; Santoro, Emiliano

    2011-01-01

    This paper deals with the implications of factor demand linkages for monetary policy design in a two-sector dynamic general equilibrium model. Part of the output of each sector serves as a production input in both sectors, in accordance with a realistic input–output structure. Strategic complementarities induced by factor demand linkages significantly alter the transmission of shocks and amplify the loss of social welfare under optimal monetary policy, compared to what is observed in standard two-sector models. The distinction between value added and gross output that naturally arises in this context is of key importance to explore the welfare properties of the model economy. A flexible inflation targeting regime is close to optimal only if the central bank balances inflation and value added variability. Otherwise, targeting gross output variability entails a substantial increase in the loss...

  18. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Directory of Open Access Journals (Sweden)

    Jake M Ferguson

    2014-06-01

    Full Text Available The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  19. Optimal sampling strategies for detecting zoonotic disease epidemics.

    Science.gov (United States)

    Ferguson, Jake M; Langebrake, Jessica B; Cannataro, Vincent L; Garcia, Andres J; Hamman, Elizabeth A; Martcheva, Maia; Osenberg, Craig W

    2014-06-01

    The early detection of disease epidemics reduces the chance of successful introductions into new locales, minimizes the number of infections, and reduces the financial impact. We develop a framework to determine the optimal sampling strategy for disease detection in zoonotic host-vector epidemiological systems when a disease goes from below detectable levels to an epidemic. We find that if the time of disease introduction is known then the optimal sampling strategy can switch abruptly between sampling only from the vector population to sampling only from the host population. We also construct time-independent optimal sampling strategies when conducting periodic sampling that can involve sampling both the host and the vector populations simultaneously. Both time-dependent and -independent solutions can be useful for sampling design, depending on whether the time of introduction of the disease is known or not. We illustrate the approach with West Nile virus, a globally-spreading zoonotic arbovirus. Though our analytical results are based on a linearization of the dynamical systems, the sampling rules appear robust over a wide range of parameter space when compared to nonlinear simulation models. Our results suggest some simple rules that can be used by practitioners when developing surveillance programs. These rules require knowledge of transition rates between epidemiological compartments, which population was initially infected, and of the cost per sample for serological tests.

  20. Optimal Pricing and Advertising Policies for New Product Oligopoly Models

    OpenAIRE

    Gerald L. Thompson; Jinn-Tsair Teng

    1984-01-01

    In this paper our previous work on monopoly and oligopoly new product models is extended by the addition of pricing as well as advertising control variables. These models contain Bass's demand growth model, and the Vidale-Wolfe and Ozga advertising models, as well as the production learning curve model and an exponential demand function. The problem of characterizing an optimal pricing and advertising policy over time is an important question in the field of marketing as well as in the areas ...

  1. On the Optimal Design of Distributed Generation Policies: Is Net Metering Ever Optimal?

    OpenAIRE

    Brown, David; Sappington, David

    2014-01-01

    Electricity customers who install solar panels often are paid the prevailing retail price for the electricity they generate. We show that this "net metering" policy typically is not optimal. A payment for distributed generation (w) that is below the retail price of electricity (r) will induce the welfare-maximizing level of distributed generation (DG) when centralized generation and DG produce similar (pollution) externalities. However, w can optimally exceed r when DG entails a substantial r...

  2. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  3. Optimal policy for mitigating emissions in the European transport sector

    Science.gov (United States)

    Leduc, Sylvain; Piera, Patrizio; Sennai, Mesfun; Igor, Staritsky; Berien, Elbersen; Tijs, Lammens; Florian, Kraxner

    2017-04-01

    A geographically explicit techno-economic model, BeWhere (www.iiasa.ac.at/bewhere), has been developed at the European scale (Europe 28, the Balkan countries, Turkey, Moldova and Ukraine) at a 40 km grid size, to assess the potential of bioenergy from non-food feedstock. Based on the minimization of the supply chain cost from feedstock collection to final energy product distribution, the model identifies the optimal bioenergy production plants in terms of spatial location, technology and capacity. The feedstocks of interest are woody biomass (divided into eight types from conifers and non-conifers) and five different crop residues. For each type of feedstock, one or multiple technologies can be applied for heat, electricity or biofuel production. The model is run for different policy tools, such as a carbon cost, biofuel support, or subsidies, and the mix of technologies and biomass needed is optimized to reach a production cost competitive with the actual reference system, which is fossil-fuel based. From this approach, the optimal mix of policy tools that can be applied country-wide in Europe will be identified. The preliminary results show that a high carbon tax and biofuel support contribute to the development of large-scale biofuel production based on woody biomass plants mainly located in the northern part of Europe. Finally, the highest emission reduction is reached with low biofuel support and a high carbon tax evenly distributed in Europe.

  4. Monte Carlo importance sampling optimization for system reliability applications

    International Nuclear Information System (INIS)

    Campioni, Luca; Vestrucci, Paolo

    2004-01-01

    This paper focuses on the reliability analysis of multicomponent systems by the importance sampling technique, and, in particular, it tackles the optimization aspect. A methodology based on the minimization of the variance at the component level is proposed for the class of systems consisting of independent components. The claim is that, by means of such a methodology, the optimal biasing can be achieved without resorting to the typical trial-and-error approach
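    As a rough illustration of the biasing idea (not the authors' variance-minimization methodology itself), the Python sketch below estimates the failure probability of a series system of independent components by importance sampling, biasing each component's failure probability upward and correcting with a likelihood ratio; all probabilities are invented.

```python
# Sketch of component-level importance sampling for a series system
# of independent components; the biasing values are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p = np.array([1e-3, 5e-4, 2e-3])   # true component failure probabilities
q = np.array([0.1, 0.1, 0.1])      # biased sampling probabilities
n = 100_000

fails = rng.random((n, 3)) < q      # sample component states under the bias
# Likelihood ratio: product over components of p/q (failed) or (1-p)/(1-q).
w = np.prod(np.where(fails, p / q, (1 - p) / (1 - q)), axis=1)
system_fail = fails.any(axis=1)     # series system: any component failure

est = np.mean(system_fail * w)
err = np.std(system_fail * w) / np.sqrt(n)
exact = 1 - np.prod(1 - p)
print(f"IS estimate {est:.3e} +/- {err:.1e}  (exact {exact:.3e})")
```

    With the unbiased probabilities, almost no simulated histories would contain a failure; the biased sampler observes failures constantly and the weights restore an unbiased estimate.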

  5. Optimal Replacement Policies for Non-Uniform Cache Objects with Optional Eviction

    National Research Council Canada - National Science Library

    Bahat, Omri; Makowski, Armand M

    2002-01-01

    .... However, since the introduction of optimal replacement policies for conventional caching, the problem of finding optimal replacement policies under the factors indicated has not been studied in any systematic manner...

  6. spsann - optimization of sample patterns using spatial simulated annealing

    Science.gov (United States)

    Samuel-Rosa, Alessandro; Heuvelink, Gerard; Vasques, Gustavo; Anjos, Lúcia

    2015-04-01

    There are many algorithms and computer programs to optimize sample patterns, some private and others publicly available. A few have only been presented in scientific articles and text books. This dispersion and somewhat poor availability holds back their wider adoption and further development. We introduce spsann, a new R-package for the optimization of sample patterns using spatial simulated annealing. R is the most popular environment for data processing and analysis. Spatial simulated annealing is a well-known method with widespread use to solve optimization problems in the soil and geo-sciences. This is mainly due to its robustness against local optima and its ease of implementation. spsann offers many optimizing criteria for sampling for variogram estimation (number of points or point-pairs per lag distance class - PPL), trend estimation (association/correlation and marginal distribution of the covariates - ACDC), and spatial interpolation (mean squared shortest distance - MSSD). spsann also includes the mean or maximum universal kriging variance (MUKV) as an optimizing criterion, which is used when the model of spatial variation is known. PPL, ACDC and MSSD were combined (PAN) for sampling when we are ignorant about the model of spatial variation. spsann solves this multi-objective optimization problem by scaling the objective function values using their maximum absolute value or the mean value computed over 1000 random samples. Scaled values are aggregated using the weighted sum method. A graphical display allows the user to follow how the sample pattern is being perturbed during the optimization, as well as the evolution of its energy state. It is possible to start perturbing many points and exponentially reduce the number of perturbed points. The maximum perturbation distance reduces linearly with the number of iterations. The acceptance probability also reduces exponentially with the number of iterations. R is memory hungry and spatial simulated annealing is a
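    spsann itself is an R package; the Python sketch below only illustrates the generic spatial simulated annealing loop with the MSSD criterion, a linearly shrinking perturbation distance and an exponentially decaying acceptance temperature. The schedules and constants are placeholders, not those used by spsann.

```python
# Minimal spatial simulated annealing sketch with the MSSD criterion
# (mean squared shortest distance); schedules are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
grid = np.stack(np.meshgrid(np.linspace(0, 1, 40),
                            np.linspace(0, 1, 40)), -1).reshape(-1, 2)

def mssd(pts):
    """Mean squared distance from every grid node to its nearest sample."""
    d2 = ((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean()

pts = rng.random((15, 2))           # initial sample pattern
energy = mssd(pts)
n_iter = 3000
for it in range(n_iter):
    max_shift = 0.3 * (1 - it / n_iter)      # linearly shrinking perturbation
    temp = 0.01 * np.exp(-5 * it / n_iter)   # exponentially cooling temperature
    cand = pts.copy()
    i = rng.integers(len(pts))
    cand[i] = np.clip(cand[i] + rng.uniform(-max_shift, max_shift, 2), 0, 1)
    e_new = mssd(cand)
    # Accept improvements always; accept worse states with Metropolis probability.
    if e_new < energy or rng.random() < np.exp((energy - e_new) / temp):
        pts, energy = cand, e_new
print(f"final MSSD: {energy:.5f}")
```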

  7. Optimization of protein samples for NMR using thermal shift assays

    International Nuclear Information System (INIS)

    Kozak, Sandra; Lercher, Lukas; Karanth, Megha N.; Meijers, Rob; Carlomagno, Teresa; Boivin, Stephane

    2016-01-01

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  8. Optimization of protein samples for NMR using thermal shift assays

    Energy Technology Data Exchange (ETDEWEB)

    Kozak, Sandra [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Lercher, Lukas; Karanth, Megha N. [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Meijers, Rob [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany); Carlomagno, Teresa, E-mail: teresa.carlomagno@oci.uni-hannover.de [European Molecular Biology Laboratory (EMBL), SCB Unit (Germany); Boivin, Stephane, E-mail: sboivin77@hotmail.com, E-mail: s.boivin@embl-hamburg.de [European Molecular Biology Laboratory (EMBL), Hamburg Outstation, SPC Facility (Germany)

    2016-04-15

    Maintaining a stable fold for recombinant proteins is challenging, especially when working with highly purified and concentrated samples at temperatures >20 °C. Therefore, it is worthwhile to screen for different buffer components that can stabilize protein samples. Thermal shift assays or ThermoFluor® provide a high-throughput screening method to assess the thermal stability of a sample under several conditions simultaneously. Here, we describe a thermal shift assay that is designed to optimize conditions for nuclear magnetic resonance studies, which typically require stable samples at high concentration and ambient (or higher) temperature. We demonstrate that for two challenging proteins, the multicomponent screen helped to identify ingredients that increased protein stability, leading to clear improvements in the quality of the spectra. Thermal shift assays provide an economic and time-efficient method to find optimal conditions for NMR structural studies.

  9. On Optimal, Minimal BRDF Sampling for Reflectance Acquisition

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Jensen, Henrik Wann; Ramamoorthi, Ravi

    2015-01-01

    The bidirectional reflectance distribution function (BRDF) is critical for rendering, and accurate material representation requires data-driven reflectance models. However, isotropic BRDFs are 3D functions, and measuring the reflectance of a flat sample can require a million incident and outgoing...... direction pairs, making the use of measured BRDFs impractical. In this paper, we address the problem of reconstructing a measured BRDF from a limited number of samples. We present a novel mapping of the BRDF space, allowing for extraction of descriptive principal components from measured databases......, such as the MERL BRDF database. We optimize for the best sampling directions, and explicitly provide the optimal set of incident and outgoing directions in the Rusinkiewicz parameterization for n = {1, 2, 5, 10, 20} samples. Based on the principal components, we describe a method for accurately reconstructing BRDF...

  10. Optimal updating magnitude in adaptive flat-distribution sampling.

    Science.gov (United States)

    Zhang, Cheng; Drake, Justin A; Ma, Jianpeng; Pettitt, B Montgomery

    2017-11-07

    We present a study on the optimization of the updating magnitude for a class of free energy methods based on flat-distribution sampling, including the Wang-Landau (WL) algorithm and metadynamics. These methods rely on adaptive construction of a bias potential that offsets the potential of mean force by histogram-based updates. The convergence of the bias potential can be improved by decreasing the updating magnitude with an optimal schedule. We show that while the asymptotically optimal schedule for the single-bin updating scheme (commonly used in the WL algorithm) is given by the known inverse-time formula, that for the Gaussian updating scheme (commonly used in metadynamics) is often more complex. We further show that the single-bin updating scheme is optimal for very long simulations, and it can be generalized to a class of bandpass updating schemes that are similarly optimal. These bandpass updating schemes target only a few long-range distribution modes and their optimal schedule is also given by the inverse-time formula. Constructed from orthogonal polynomials, the bandpass updating schemes generalize the WL and Langfeld-Lucini-Rago algorithms as an automatic parameter tuning scheme for umbrella sampling.
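    A minimal sketch of the single-bin updating scheme with an inverse-time schedule is given below, applied to a toy discrete energy ladder; the switching rule and all constants are illustrative and do not reproduce the paper's bandpass schemes.

```python
# Wang-Landau flat-histogram sketch with the inverse-time (1/t) schedule
# for the updating magnitude, on a toy discrete energy ladder.
import numpy as np

rng = np.random.default_rng(1)
n_bins = 20
E = np.linspace(0.0, 4.0, n_bins)      # energies of the toy states
beta = 1.0
bias = np.zeros(n_bins)                # adaptive bias potential (log scale)
lnf = 1.0                              # initial updating magnitude
s = 0
for t in range(1, 200_001):
    s_new = (s + rng.choice([-1, 1])) % n_bins
    # Metropolis step on the biased potential beta*E + bias.
    dE = beta * (E[s_new] - E[s]) + (bias[s_new] - bias[s])
    if dE <= 0 or rng.random() < np.exp(-dE):
        s = s_new
    bias[s] += lnf                     # single-bin histogram update
    if n_bins / t < lnf:               # switch to the 1/t schedule once it
        lnf = n_bins / t               # overtakes the conventional schedule
    elif t % 10_000 == 0:
        lnf /= 2                       # conventional halving before the switch

# At convergence, bias ~ const - beta*E, i.e. the biased potential is flat.
print(np.round(bias.max() - bias, 2))  # should approximate beta*E
```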

  11. Optimizing Soil Moisture Sampling Locations for Validation Networks for SMAP

    Science.gov (United States)

    Roshani, E.; Berg, A. A.; Lindsay, J.

    2013-12-01

    Soil Moisture Active Passive satellite (SMAP) is scheduled for launch in October 2014. Global efforts are underway for the establishment of soil moisture monitoring networks for both the pre- and post-launch validation and calibration of the SMAP products. In 2012 the SMAP Validation Experiment, SMAPVEX12, took place near Carman, Manitoba, Canada, where nearly 60 fields were sampled continuously over a 6 week period for soil moisture and several other parameters, simultaneous to remotely sensed images of the sampling region. The locations of these sampling sites were mainly selected on the basis of accessibility, soil texture, and vegetation cover. Although these criteria are necessary to consider during sampling site selection, they do not guarantee optimal site placement to provide the most efficient representation of the studied area. In this analysis a method for the optimization of sampling locations is presented which combines a state-of-the-art multi-objective optimization engine (non-dominated sorting genetic algorithm, NSGA-II) with the kriging interpolation technique to minimize the number of sampling sites while simultaneously minimizing the differences between the soil moisture map resulting from the kriging interpolation and the soil moisture map from radar imaging. The algorithm is implemented in Whitebox Geospatial Analysis Tools, which is a multi-platform open-source GIS. The optimization framework is subject to the following three constraints: A) sampling sites should be accessible to the crew on the ground, B) the number of sites located in a specific soil texture should be greater than or equal to a minimum value, and finally C) the number of sampling sites with a specific vegetation cover should be greater than or equal to a minimum constraint. The first constraint is implemented into the proposed model to keep the approach practical. The second and third constraints are considered to guarantee that the collected samples from each soil texture categories

  12. Optimizing incomplete sample designs for item response model parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.

    Several models for optimizing incomplete sample designs with respect to information on the item parameters are presented. The following cases are considered: (1) known ability parameters; (2) unknown ability parameters; (3) item sets with multiple ability scales; and (4) response models with

  13. Identification of Optimal Policies in Markov Decision Processes

    Czech Academy of Sciences Publication Activity Database

    Sladký, Karel

    46 2010, č. 3 (2010), s. 558-570 ISSN 0023-5954. [International Conference on Mathematical Methods in Economy and Industry. České Budějovice, 15.06.2009-18.06.2009] R&D Projects: GA ČR(CZ) GA402/08/0107; GA ČR GA402/07/1113 Institutional research plan: CEZ:AV0Z10750506 Keywords : finite state Markov decision processes * discounted and average costs * elimination of suboptimal policies Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/E/sladky-identification of optimal policies in markov decision processes.pdf

  14. Optimal environmental policy and the dynamic property in LDCs

    Directory of Open Access Journals (Sweden)

    Masahiro Yabuta

    2002-01-01

    Full Text Available This paper has provided a model framework of foreign assistance policy in the context of dynamic optimal control and investigated the environmental policies in LDCs that received financial support from abroad. The model framework features a specific behavior of the social planner, who determines the level of voluntary expenditure for the preservation of the natural environment. Because greater financial needs for natural environmental protection mean less allowance for growth-oriented investment, the social planner confronts a trade-off between economic growth and environmental preservation. To tackle this clearly, we have built a dynamic model with two control variables: per-capita consumption and voluntary expenditure for the natural environment.

  15. Optimal Tax-Transfer Policies, Life-Cycle Labour Supply and Present-Biased Preferences

    DEFF Research Database (Denmark)

    Gunnersen, Lasse Frisgaard; Rasmussen, Bo Sandemann

    Using a two-period model with two types of agents that are characterized by present-biased preferences, second-best optimal tax-transfer policies are considered. The paternalistic optimal tax-transfer policy has two main concerns: income redistribution from high to low ability households...... consequences not only for optimal subsidies to savings but also for optimal marginal income taxes....

  16. Optimizing Sampling Efficiency for Biomass Estimation Across NEON Domains

    Science.gov (United States)

    Abercrombie, H. H.; Meier, C. L.; Spencer, J. J.

    2013-12-01

    Over the course of 30 years, the National Ecological Observatory Network (NEON) will measure plant biomass and productivity across the U.S. to enable an understanding of terrestrial carbon cycle responses to ecosystem change drivers. Over the next several years, prior to operational sampling at a site, NEON will complete construction and characterization phases during which a limited amount of sampling will be done at each site to inform sampling designs, and guide standardization of data collection across all sites. Sampling biomass in 60+ sites distributed among 20 different eco-climatic domains poses major logistical and budgetary challenges. Traditional biomass sampling methods such as clip harvesting and direct measurements of Leaf Area Index (LAI) involve collecting and processing plant samples, and are time and labor intensive. Possible alternatives include using indirect sampling methods for estimating LAI, such as digital hemispherical photography (DHP) or a LI-COR 2200 Plant Canopy Analyzer. These LAI estimates can then be used as a proxy for biomass, and they can inform the clip harvest sampling design during NEON operations, optimizing both sample size and number so that standardized uncertainty limits can be achieved with a minimum amount of sampling effort. In 2011, LAI and clip harvest data were collected from co-located sampling points at the Central Plains Experimental Range located in northern Colorado, a short grass steppe ecosystem that is the NEON Domain 10 core site. LAI was measured with a LI-COR 2200 Plant Canopy Analyzer. The layout of the sampling design included four 300-meter transects, with clip harvest plots spaced every 50 m and LAI sub-transects spaced every 10 m. LAI was measured at four points along 6 m sub-transects running perpendicular to the 300 m transect. Clip harvest plots were co-located 4 m from the corresponding LAI transects, and had dimensions of 0.1 m by 2 m. We conducted regression analyses

  17. Optimized preparation of urine samples for two-dimensional electrophoresis and initial application to patient samples

    DEFF Research Database (Denmark)

    Lafitte, Daniel; Dussol, Bertrand; Andersen, Søren

    2002-01-01

    OBJECTIVE: We optimized the preparation of urinary samples to obtain a comprehensive map of the urinary proteins of healthy subjects and then compared this map with those obtained from patient samples to show that the pattern was specific to their kidney disease. DESIGN AND METHODS: The urinary

  18. Ad-Hoc vs. Standardized and Optimized Arthropod Diversity Sampling

    Directory of Open Access Journals (Sweden)

    Pedro Cardoso

    2009-09-01

    Full Text Available The use of standardized and optimized protocols has been recently advocated for different arthropod taxa instead of ad-hoc sampling or sampling with protocols defined on a case-by-case basis. We present a comparison of both sampling approaches applied for spiders in a natural area of Portugal. Tests were made of their efficiency, over-collection of common species, singleton proportions, species abundance distributions, average specimen size, average taxonomic distinctness and behavior of richness estimators. The standardized protocol revealed three main advantages: (1) higher efficiency; (2) more reliable estimations of true richness; and (3) meaningful comparisons between undersampled areas.

  19. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  20. Optimal constant time injection policy for enhanced oil recovery and characterization of optimal viscous profiles

    Science.gov (United States)

    Daripa, Prabir

    2011-11-01

    We numerically investigate the optimal viscous profile in the constant time injection policy of enhanced oil recovery. In particular, we investigate the effect of a combination of interfacial and layer instabilities in three-layer porous media flow on the overall growth of instabilities and thereby characterize the optimal viscous profile. Results based on monotonic and non-monotonic viscous profiles will be presented. Time permitting, we will also present results on multi-layer porous media flows for Newtonian and non-Newtonian fluids and compare the results. The support of Qatar National Fund under a QNRF Grant is acknowledged.

  1. Optimal subsidy policy for accelerating the diffusion of green products

    Directory of Open Access Journals (Sweden)

    Hongguang Peng

    2013-06-01

    Full Text Available Purpose: We consider a dynamic duopoly market in which two firms respectively produce green products and conventional products. The two types of product can substitute each other to some degree. Their demand rates depend not only on prices but also on consumers' increasing environmental awareness. An initial cost that is too high relative to conventional products is one of the major obstacles hindering the adoption of green products. The government employs a subsidy policy to trigger the adoption of green products. The purpose of the paper is to explore the optimal subsidy strategy to fulfill the government's objective. Design/methodology/approach: We suppose the players in the game employ open-loop strategies, which makes sense since the government generally cannot alter its policy for political and economic purposes. We take a differential game approach and use backward induction to analyze the firms' pricing strategy under Cournot competition, and then focus upon a Stackelberg equilibrium to find the optimal subsidy strategy of the government. Findings: The results show that the more remarkable the energy or environmental performance, or the bigger the initial cost of green products, the higher the subsidy level should be. Due to the increasing environmental awareness and the learning curve, the optimal subsidy level decreases over time. Research limitations/implications: In our model several simplifying assumptions are made to keep the analysis tractable. In particular, we have assumed only one type of green product. In reality several types of product with different energy or environmental performances exist. Our research can be extended in future work to take into account product differentiation on energy or environmental performance and devise a discriminatory subsidy policy accordingly. Originality/value: In the paper we set the objective of the government as minimizing the total social cost induced by the energy consumption or

  2. Optimal reservoir operation policies using novel nested algorithms

    Science.gov (United States)

    Delipetrev, Blagoj; Jonoski, Andreja; Solomatine, Dimitri

    2015-04-01

    optimization algorithm into the state transition that lowers the starting problem dimension and alleviates the curse of dimensionality. The algorithms can solve multi-objective optimization problems without significantly increasing the complexity and the computational expense. The algorithms can handle dense and irregular variable discretization, and are coded in Java as prototype applications. The three algorithms were tested at the multipurpose reservoir Knezevo of the Zletovica hydro-system located in the Republic of Macedonia, with eight objectives, including urban water supply, agriculture, ensuring ecological flow, and generation of hydropower. Because the Zletovica hydro-system is relatively complex, the novel algorithms were pushed to their limits, demonstrating their capabilities and limitations. The nSDP and nRL derived/learned the optimal reservoir policy using 45 years (1951-1995) of historical data. The nSDP and nRL optimal reservoir policy was then tested on 10 years (1995-2005) of historical data and compared with the nDP optimal reservoir operation over the same period. The nested algorithms and optimal reservoir operation results are analysed and explained.

  3. Classifier-Guided Sampling for Complex Energy System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Backlund, Peter B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Eddy, John P. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    This report documents the results of a Laboratory Directed Research and Development (LDRD) effort entitled "Classifier-Guided Sampling for Complex Energy System Optimization" that was conducted during FY 2014 and FY 2015. The goal of this project was to develop, implement, and test major improvements to the classifier-guided sampling (CGS) algorithm. CGS is a type of evolutionary algorithm for performing search and optimization over a set of discrete design variables in the face of one or more objective functions. Existing evolutionary algorithms, such as genetic algorithms, may require a large number of objective function evaluations to identify optimal or near-optimal solutions. Reducing the number of evaluations can result in significant time savings, especially if the objective function is computationally expensive. CGS reduces the evaluation count by using a Bayesian network classifier to filter out non-promising candidate designs, prior to evaluation, based on their posterior probabilities. In this project, both the single-objective and multi-objective versions of CGS are developed and tested on a set of benchmark problems. As a domain-specific case study, CGS is used to design a microgrid for use in islanded mode during an extended bulk power grid outage.
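    The filtering idea can be sketched as follows; the snippet approximates the Bayesian network classifier with a naive Bayes model from scikit-learn, and the objective, mutation scheme and thresholds are placeholders rather than the report's actual benchmark or microgrid problems.

```python
# Sketch of classifier-guided sampling: a (naive) Bayes classifier filters
# unpromising candidates before expensive evaluation. Objective and
# thresholds are placeholders, not the Sandia microgrid problem.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
n_var = 8

def expensive_objective(x):            # stand-in for a costly simulation
    return np.sum((x - 1) ** 2) + 0.1 * rng.normal()

pop = rng.integers(0, 3, (20, n_var)).astype(float)   # discrete designs
scores = np.array([expensive_objective(x) for x in pop])
evals = len(pop)

for gen in range(30):
    # Label designs "promising" if better than the current median score.
    labels = (scores < np.median(scores)).astype(int)
    clf = GaussianNB().fit(pop, labels)
    # Generate offspring by mutation, then filter by posterior probability.
    children = pop[rng.integers(0, len(pop), 40)].copy()
    mask = rng.random(children.shape) < 0.2
    children[mask] = rng.integers(0, 3, mask.sum())
    keep = clf.predict_proba(children)[:, 1] > 0.5
    for x in children[keep]:
        s = expensive_objective(x)
        evals += 1
        worst = np.argmax(scores)
        if s < scores[worst]:          # steady-state replacement
            pop[worst], scores[worst] = x, s

print(f"best score {scores.min():.3f} after {evals} evaluations")
```

    The classifier discards most offspring before they reach the objective function, which is where the evaluation-count savings come from.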

  4. Simultaneous beam sampling and aperture shape optimization for SPORT

    International Nuclear Information System (INIS)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu

    2015-01-01

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  5. Simultaneous beam sampling and aperture shape optimization for SPORT

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Ye, Yinyu [Department of Management Science and Engineering, Stanford University, Stanford, California 94305 (United States)

    2015-02-15

    Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and

  6. Simultaneous beam sampling and aperture shape optimization for SPORT.

    Science.gov (United States)

    Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei

    2015-02-01

    Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and neck and a prostate case
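    The column-generation step can be caricatured as follows. The Python sketch greedily "prices in" the candidate aperture most aligned with the current dose residual and re-fits nonnegative station weights with NNLS; it omits the subgradient reshaping and pattern-search stages, and the target dose and candidate apertures are random stand-ins.

```python
# Toy column-generation flavour of SPORT-style planning: add the candidate
# aperture that best reduces the dose residual, then re-fit nonnegative
# aperture weights. A crude illustration, not the authors' full algorithm.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
n_vox, n_cand = 60, 200
target = rng.uniform(0.8, 1.2, n_vox)            # prescribed dose per voxel
apertures = rng.integers(0, 2, (n_vox, n_cand))  # candidate aperture columns

selected = []
residual = target.copy()
for _ in range(12):
    # "Pricing": pick the unused column most aligned with the residual.
    gains = apertures.T @ residual
    gains[selected] = -np.inf
    j = int(np.argmax(gains))
    selected.append(j)
    A = apertures[:, selected].astype(float)
    w, _ = nnls(A, target)                       # restricted master problem
    new_residual = target - A @ w
    if np.linalg.norm(residual) - np.linalg.norm(new_residual) < 1e-4:
        break                                    # improvement has saturated
    residual = new_residual

print(f"{len(selected)} stations, residual norm {np.linalg.norm(residual):.3f}")
```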

  7. The optimal time path of clean energy R&D policy when patents have finite lifetime

    NARCIS (Netherlands)

    Gerlagh, R.; Kverndokk, S.; Rosendahl, K.E.

    We study the optimal time path for clean energy innovation policy. In a model with emission reduction through clean energy deployment, and with R&D increasing the overall productivity of clean energy, we describe optimal R&D policies jointly with emission pricing policies. We find that while

  8. Prescription drug samples--does this marketing strategy counteract policies for quality use of medicines?

    Science.gov (United States)

    Groves, K E M; Sketris, I; Tett, S E

    2003-08-01

    Prescription drug samples, as used by the pharmaceutical industry to market their products, are of current interest because of their influence on prescribing, and their potential impact on consumer safety. Very little research has been conducted into the use and misuse of prescription drug samples, and the influence of samples on health policies designed to improve the rational use of medicines. This is a topical issue in the prescription drug debate, with increasing costs and increasing concerns about optimizing use of medicines. This manuscript critically evaluates the research that has been conducted to date about prescription drug samples, discusses the issues raised in the context of traditional marketing theory, and suggests possible alternatives for the future.

  9. Optimizing Wind And Hydropower Generation Within Realistic Reservoir Operating Policy

    Science.gov (United States)

    Magee, T. M.; Clement, M. A.; Zagona, E. A.

    2012-12-01

    Previous studies have evaluated the benefits of utilizing the flexibility of hydropower systems to balance the variability and uncertainty of wind generation. However, previous hydropower and wind coordination studies have simplified non-power constraints on reservoir systems. For example, some studies have only included hydropower constraints on minimum and maximum storage volumes and minimum and maximum plant discharges. The methodology presented here utilizes the pre-emptive linear goal programming optimization solver in RiverWare to model hydropower operations with a set of prioritized policy constraints and objectives based on realistic policies that govern the operation of actual hydropower systems, including licensing constraints, environmental constraints, water management and power objectives. This approach accounts for the fact that not all policy constraints are of equal importance. For example target environmental flow levels may not be satisfied if it would require violating license minimum or maximum storages (pool elevations), but environmental flow constraints will be satisfied before optimizing power generation. Additionally, this work not only models the economic value of energy from the combined hydropower and wind system, it also captures the economic value of ancillary services provided by the hydropower resources. It is recognized that the increased variability and uncertainty inherent with increased wind penetration levels requires an increase in ancillary services. In regions with liberalized markets for ancillary services, a significant portion of hydropower revenue can result from providing ancillary services. Thus, ancillary services should be accounted for when determining the total value of a hydropower system integrated with wind generation. This research shows that the end value of integrated hydropower and wind generation is dependent on a number of factors that can vary by location. Wind factors include wind penetration level

  10. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
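    The block-level model lends itself to a short computation: fit the quadratic PSNR model and take the vertex, where the first derivative vanishes. The sample points below are invented for illustration.

```python
# Sketch of the block-level RDO idea: model PSNR as a quadratic function
# of quantization bit-depth and take the vertex as the optimum. The data
# points are made up for illustration.
import numpy as np

depths = np.array([4, 5, 6, 7, 8, 9, 10], dtype=float)
psnr = np.array([28.1, 31.9, 34.8, 36.9, 38.2, 38.8, 38.7])  # hypothetical

a, b, c = np.polyfit(depths, psnr, 2)   # PSNR(d) ~ a*d**2 + b*d + c, a < 0
d_opt = -b / (2 * a)                    # zero of the first derivative
print(f"model optimum at {d_opt:.2f} bits -> quantize to {round(d_opt)} bits")
```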

  11. Optimal dynamic pricing and replenishment policies for deteriorating items

    Directory of Open Access Journals (Sweden)

    Masoud Rabbani

    2014-08-01

    Full Text Available Marketing strategies and proper inventory replenishment policies are often incorporated by enterprises to stimulate demand and maximize profit. The aim of this paper is to present an integrated model for dynamic pricing and inventory control of deteriorating items. To reflect the dynamic characteristic of the problem, the selling price is defined as a time-dependent function of the initial selling price and the discount rate. In this regard, the price is exponentially discounted to compensate for the negative impact of the deterioration. The planning horizon is assumed to be infinite and the deterioration rate is time-dependent. In addition to price, the demand rate depends on advertisement as a powerful marketing tool. Several theoretical results and an iterative solution algorithm are developed to provide the optimal solution. Finally, to show the validity of the model and illustrate the solution procedure, numerical results are presented.

  12. Convergence of Sample Path Optimal Policies for Stochastic Dynamic Programming

    National Research Council Canada - National Science Library

    Fu, Michael C; Jin, Xing

    2005-01-01

    .... These results have practical implications for Monte Carlo simulation-based solution approaches to stochastic dynamic programming problems where it is impractical to extract the explicit transition...

  13. Optimization Policy of Inventory Spare Parts Stocking and Provisioning

    International Nuclear Information System (INIS)

    Yun, Tae Sik; Park, Jong Hyuk; Hwang, Eui Youp; Yoo, Sung Soo; Kim, In Hwan

    2005-01-01

    Spare parts, especially safety-related items, used in Korean nuclear power plants come largely from the United States, Canada, France and the like, meaning that the stocking and provisioning policy is influenced, directly or indirectly, by the state of the nuclear industry in those countries. As a consequence of the nuclear industry downturn, many spare part suppliers have gone out of business, a clear signal that needed inventory purchases must be secured in advance. Nuclear maintenance spares typically comprise many kinds of items in small quantities, which makes them particularly difficult for the Korean nuclear operating company (KHNP) to purchase. Hence, the Korean nuclear business is trying to change its existing inventory purchasing paradigm into innovative schemes it did not have to consider in the past. In implementing a new stocking policy, the factors to keep in mind include not only how much to stock for smooth operation but also the economic point of view. Although many studies have sought to optimize inventory stocking levels out of academic interest, it is not easy to apply this research in the real world, since it is very difficult to anticipate when, and on what scale, demand events will occur. Hence, nuclear inventory should be dealt with in a different manner from that of the general manufacturing industry

  14. Optimizing Multireservoir System Operating Policies Using Exogenous Hydrologic Variables

    Science.gov (United States)

    Pina, Jasson; Tilmant, Amaury; Côté, Pascal

    2017-11-01

    Stochastic dual dynamic programming (SDDP) is one of the few available algorithms to optimize the operating policies of large-scale hydropower systems. This paper presents a variant, called SDDPX, in which exogenous hydrologic variables, such as snow water equivalent and/or sea surface temperature, are included in the state space vector together with the traditional (endogenous) variables, i.e., past inflows. A reoptimization procedure is also proposed in which SDDPX-derived benefit-to-go functions are employed within a simulation carried out over the historical record of both the endogenous and exogenous hydrologic variables. In SDDPX, release policies are now a function of storages, past inflows, and relevant exogenous variables that potentially capture more complex hydrological processes than those found in traditional SDDP formulations. To illustrate the potential gain associated with the use of exogenous variables when operating a multireservoir system, the 3,137 MW hydropower system of Rio Tinto (RT) located in the Saguenay-Lac-St-Jean River Basin in Quebec (Canada) is used as a case study. The performance of the system is assessed for various combinations of hydrologic state variables, ranging from the simple lag-one autoregressive model to more complex formulations involving past inflows, snow water equivalent, and winter precipitation.

  15. Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model

    OpenAIRE

    Shulgin, A.

    2015-01-01

    Optimization of the coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to inadequate results and pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow the volatility estimated on Russian data for 2001-2012 to be decreased by about 20%. The degree of exchange rate flexibility parameter was found to be low...

  16. Robust Estimation of Diffusion-Optimized Ensembles for Enhanced Sampling

    DEFF Research Database (Denmark)

    Tian, Pengfei; Jónsson, Sigurdur Æ.; Ferkinghoff-Borg, Jesper

    2014-01-01

    The multicanonical, or flat-histogram, method is a common technique to improve the sampling efficiency of molecular simulations. The idea is that free-energy barriers in a simulation can be removed by simulating from a distribution where all values of a reaction coordinate are equally likely......, and subsequently reweight the obtained statistics to recover the Boltzmann distribution at the temperature of interest. While this method has been successful in practice, the choice of a flat distribution is not necessarily optimal. Recently, it was proposed that additional performance gains could be obtained...

  17. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  18. Optimal CCD readout by digital correlated double sampling

    Science.gov (United States)

    Alessandri, C.; Abusleme, A.; Guzman, D.; Passalacqua, I.; Alvarez-Fontecilla, E.; Guarini, M.

    2016-01-01

    Digital correlated double sampling (DCDS), a readout technique for charge-coupled devices (CCD), is gaining popularity in astronomical applications. By using an oversampling ADC and a digital filter, a DCDS system can achieve a better performance than traditional analogue readout techniques at the expense of a more complex system analysis. Several attempts to analyse and optimize a DCDS system have been reported, but most of the work presented in the literature has been experimental. Some approximate analytical tools have been presented for independent parameters of the system, but the overall performance and trade-offs have not been yet modelled. Furthermore, there is disagreement among experimental results that cannot be explained by the analytical tools available. In this work, a theoretical analysis of a generic DCDS readout system is presented, including key aspects such as the signal conditioning stage, the ADC resolution, the sampling frequency and the digital filter implementation. By using a time-domain noise model, the effect of the digital filter is properly modelled as a discrete-time process, thus avoiding the imprecision of continuous-time approximations that have been used so far. As a result, an accurate, closed-form expression for the signal-to-noise ratio at the output of the readout system is reached. This expression can be easily optimized in order to meet a set of specifications for a given CCD, thus providing a systematic design methodology for an optimal readout system. Simulated results are presented to validate the theory, obtained with both time- and frequency-domain noise generation models for completeness.
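    A minimal numerical sketch of the DCDS principle is given below, using a flat-average digital filter (the simplest choice; the paper's point is precisely that the filter should be optimized against the actual noise model). All noise figures are arbitrary, chosen only to show the variance reduction.

```python
# Digital correlated double sampling sketch: average many ADC samples of
# the reset and signal levels and difference the means.
import numpy as np

rng = np.random.default_rng(7)
n_pix, n_samp = 10_000, 32          # pixels, oversampling factor per level
signal_e = 500.0                    # true pixel signal (arbitrary units)
white = 12.0                        # white-noise sigma per ADC sample

reset_off = rng.normal(0, 30.0, n_pix)            # kTC-like reset offset
reset = reset_off[:, None] + rng.normal(0, white, (n_pix, n_samp))
sig = (reset_off + signal_e)[:, None] + rng.normal(0, white, (n_pix, n_samp))

dcds = sig.mean(axis=1) - reset.mean(axis=1)      # flat-average digital filter
single = sig[:, 0] - reset[:, 0]                  # conventional 1-sample CDS

print(f"DCDS   noise: {dcds.std():6.2f}")   # ~ white * sqrt(2 / n_samp)
print(f"1-samp noise: {single.std():6.2f}") # ~ white * sqrt(2)
```

    Both estimators cancel the correlated reset offset; oversampling additionally averages down the white noise, which is the gain DCDS trades against readout time.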

  19. Neuro-genetic system for optimization of GMI samples sensitivity.

    Science.gov (United States)

    Pitta Botelho, A C O; Vellasco, M M B R; Hall Barbosa, C R; Costa Silva, E

    2016-03-01

    Magnetic sensors are largely used in several engineering areas. Among them, magnetic sensors based on the Giant Magnetoimpedance (GMI) effect are a new family of magnetic sensing devices that have a huge potential for applications involving measurements of ultra-weak magnetic fields. The sensitivity of magnetometers is directly associated with the sensitivity of their sensing elements. The GMI effect is characterized by a large variation of the impedance (magnitude and phase) of a ferromagnetic sample, when subjected to a magnetic field. Recent studies have shown that phase-based GMI magnetometers have the potential to increase the sensitivity by about 100 times. The sensitivity of GMI samples depends on several parameters, such as sample length, external magnetic field, DC level and frequency of the excitation current. However, this dependency is yet to be sufficiently well-modeled in quantitative terms. So, the search for the set of parameters that optimizes the samples sensitivity is usually empirical and very time consuming. This paper deals with this problem by proposing a new neuro-genetic system aimed at maximizing the impedance phase sensitivity of GMI samples. A Multi-Layer Perceptron (MLP) Neural Network is used to model the impedance phase and a Genetic Algorithm uses the information provided by the neural network to determine which set of parameters maximizes the impedance phase sensitivity. The results obtained with a data set composed of four different GMI sample lengths demonstrate that the neuro-genetic system is able to correctly and automatically determine the set of conditioning parameters responsible for maximizing their phase sensitivities. Copyright © 2015 Elsevier Ltd. All rights reserved.
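    The neuro-genetic loop can be sketched as follows: an MLP surrogate stands in for measured phase data, and a simple truncation-selection GA searches for the conditioning parameters with the largest predicted phase sensitivity. The synthetic phase function, parameter ranges and GA settings are all invented, not taken from the paper.

```python
# Neuro-genetic sketch: an MLP surrogate predicts impedance phase from
# conditioning parameters; a simple GA maximizes the finite-difference
# phase sensitivity d(phase)/d(field) predicted by the surrogate.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
lo, hi = [1, -3, 10, 1], [15, 3, 100, 10]   # (length, field, DC level, freq)

def fake_phase(X):                  # invented stand-in for measured phase
    L, H, I, f = X.T
    return 30 * np.tanh(0.5 * L) * np.sin(H) * np.exp(-((I - 40) / 30) ** 2) \
           + 0.1 * f

X = rng.uniform(lo, hi, (2000, 4))
model = MLPRegressor((32, 32), max_iter=2000, random_state=0)
model.fit(X, fake_phase(X))

def sensitivity(P, dH=1e-2):        # |d(phase)/dH| via the surrogate
    Pp, Pm = P.copy(), P.copy()
    Pp[:, 1] += dH
    Pm[:, 1] -= dH
    return np.abs(model.predict(Pp) - model.predict(Pm)) / (2 * dH)

pop = rng.uniform(lo, hi, (40, 4))
for gen in range(30):               # GA: truncation selection + mutation
    fit = sensitivity(pop)
    parents = pop[np.argsort(fit)[-10:]]
    children = np.clip(parents[rng.integers(0, 10, 30)]
                       + rng.normal(0, [0.3, 0.1, 2.0, 0.3], (30, 4)), lo, hi)
    pop = np.vstack([parents, children])

best = pop[np.argmax(sensitivity(pop))]
print("best parameters (L, H, I_dc, f):", np.round(best, 2))
```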

  20. Heuristic and optimal policy computations in the human brain during sequential decision-making.

    Science.gov (United States)

    Korn, Christoph W; Bach, Dominik R

    2018-01-23

    Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.

  1. OPTIMAL TRAINING POLICY FOR PROMOTION - STOCHASTIC MODELS OF MANPOWER SYSTEMS

    Directory of Open Access Journals (Sweden)

    V.S.S. Yadavalli

    2012-01-01

    Full Text Available In this paper, the optimal planning of manpower training programmes in a manpower system with two grades is discussed. The planning of manpower training within a given organization involves a trade-off between training costs and expected return. These planning problems are examined through models that reflect the random nature of manpower movement in two grades. To be specific, the system consists of two grades, grade 1 and grade 2. Any number of persons in grade 2 can be sent for training and, after the completion of training, they will stay in grade 2 and will be given promotion as and when vacancies arise in grade 1. Vacancies arise in grade 1 only by wastage. A person in grade 1 can leave the system with probability p. Vacancies are filled with persons in grade 2 who have completed the training. It is assumed that there is a perfect passing rate and that the sizes of both grades are fixed. Assuming that the planning horizon is finite, of length T, the underlying stochastic process is identified as a finite state Markov chain and, using dynamic programming, a policy is evolved to determine how many persons should be sent for training at any time k so as to minimize the total expected cost over the entire planning period T.
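    A simplified version of the resulting dynamic program can be written down directly. The sketch below assumes at most one grade-1 vacancy per period and one-period training, with illustrative costs, so it is a caricature of the paper's Markov chain model rather than a reproduction of it.

```python
# Backward-induction sketch of the two-grade training problem: the state
# is the number of trained persons waiting in grade 2, the action is how
# many to send for training this period. All costs are illustrative.
import numpy as np

T, max_wait = 12, 6          # planning horizon, cap on the trained pool
p = 0.3                      # probability a grade-1 vacancy arises
c_train, c_short = 1.0, 10.0 # cost per trainee, cost of an unfilled vacancy

V = np.zeros(max_wait + 1)                     # terminal values
policy = np.zeros((T, max_wait + 1), dtype=int)
for k in range(T - 1, -1, -1):                 # backward induction over time
    V_new = np.full(max_wait + 1, np.inf)
    for s in range(max_wait + 1):
        for a in range(max_wait + 1 - s):      # trainees sent this period
            pool = s + a
            # A vacancy occurs w.p. p and is filled from the pool if possible.
            cost_vac = c_short if pool == 0 else 0.0
            nxt_vac = max(pool - 1, 0)
            exp_cost = (c_train * a
                        + p * (cost_vac + V[nxt_vac])
                        + (1 - p) * V[pool])
            if exp_cost < V_new[s]:
                V_new[s], policy[k, s] = exp_cost, a
    V = V_new

print("optimal trainees to send, by period (rows) and pool size (cols):")
print(policy)
```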

  2. The State Fiscal Policy: Determinants and Optimization of Financial Flows

    Directory of Open Access Journals (Sweden)

    Sitash Tetiana D.

    2017-03-01

    Full Text Available The article outlines the determinants of the state fiscal policy at the present stage of global transformations. Using the principles of financial science, it is determined that the regulation of financial flows within the fiscal sphere, namely the centralization and redistribution of GDP, is important because it shapes the financial capacity of economic agents. It is emphasized that an urgent measure for improving the tax model is to re-consider the provision of fiscal incentives, which are used to stimulate capital accumulation, investment activity, innovation, the competitiveness of national products, the expansion of exports, and the level of employment. The instruments of fiscal regulation of financial flows should be applied on the basis of institutional economics, which emphasizes the analysis of institutional changes, the evolution of institutions, and their impact on the behavior of participants in economic relations. At the same time, the maximum effect of fiscal regulation of financial flows is achieved when fiscal instruments are aimed not only at reaching the target values of financial flow parameters but also at overcoming institutional deformations. It is determined that the optimal movement of financial flows creates favorable conditions for development, maintains financial balance in society, and helps achieve the necessary level of competitiveness of the national economy.

  3. Optimal Monetary Policy Cooperation through State-Independent Contracts with Targets

    DEFF Research Database (Denmark)

    Jensen, Henrik

    2000-01-01

    Simple state-independent monetary institutions are shown to secure optimal cooperative policies in a stochastic, linear-quadratic two-country world with international policy spill-overs and national credibility problems. Institutions characterize delegation to independent central bankers facing...... quadratic performance related contracts punishing or rewarding deviations from primary and intermediate policy targets...

  4. Optimal Repair And Replacement Policy For A System With Multiple Components

    Science.gov (United States)

    2016-06-17

    increases. A future research direction is to develop efficient heuristic methods that can produce near-optimal policy with much less computational...decision variables represent the long-run fraction of time for each state-action pair. The objective function is the linear combination of long-run...exists an optimal policy. To find this policy we solve the linear program. The solution shows that for each state only one state-action pair, represented
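
    The snippets above refer to the standard linear-programming formulation of an average-cost MDP, in which the decision variables are the long-run fractions of time spent in each state-action pair. A minimal sketch on a toy two-state repair model is given below (all transition probabilities and costs are invented); as the record notes, the LP solution places positive mass on a single action per state, i.e., it recovers a deterministic optimal policy.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-state, 2-action MDP: P[a, s, s2] transition probabilities,
# c[s, a] one-step costs (all numbers hypothetical).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0: "do nothing"
              [[0.7, 0.3], [0.9, 0.1]]])   # action 1: "repair"
c = np.array([[1.0, 5.0], [8.0, 2.0]])
nS, nA = 2, 2

# Variables x[s, a]: long-run fraction of time in (state, action).
obj = c.reshape(-1)

# Flow balance: sum_a x[s2, a] = sum_{s, a} P(s2 | s, a) x[s, a], plus sum(x) = 1.
A_eq = np.zeros((nS + 1, nS * nA))
for s2 in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[s2, s * nA + a] = (s == s2) - P[a, s, s2]
A_eq[nS, :] = 1.0
b_eq = np.zeros(nS + 1)
b_eq[nS] = 1.0

res = linprog(obj, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (nS * nA))
x = res.x.reshape(nS, nA)
print("long-run average cost:", res.fun)
print("optimal action per state:", x.argmax(axis=1))   # one active pair per state
```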

  5. Optimal Operational Monetary Policy Rules in an Endogenous Growth Model: a calibrated analysis

    OpenAIRE

    Arato, Hiroki

    2009-01-01

    This paper constructs an endogenous growth New Keynesian model and considers the growth and welfare effects of Taylor-type (operational) monetary policy rules. The Ramsey equilibrium and the optimal operational monetary policy rule are also computed. In the calibrated model, the Ramsey-optimal volatility of the inflation rate is smaller than that in a standard exogenous growth New Keynesian model with physical capital accumulation. The optimal operational monetary policy rule makes the nominal interest rate respond s...

  6. Intermittently Connected Cloudlet System to Obtain an Optimal Offloading Policy

    Directory of Open Access Journals (Sweden)

    Nadim Akhtar

    2016-09-01

    Full Text Available Offloading computation-intensive parts of mobile applications to the cloud has shown great potential for enhancing the performance of mobile devices. Realizing this potential is complicated by the mismatch between the resources that mobile devices demand and those that the cloud offers: cloud services must be reached over networks with intermittent, variable connectivity, and long setup times and long scheduling quanta sit poorly with applications that require quick response times. Because modern mobile applications often need more resources than a single device can provide, several approaches have addressed offloading computation to remote cloud services or to computing resources located in nearby cloudlets. This work takes an experimental approach to highlighting the trade-offs of offloading. The proposed generic architecture integrates mobile cloud computing with automatic offloading so as to improve application response time while minimizing the energy consumption of the mobile device. Offloading a task to a remote machine is not always preferable to executing it locally; accordingly, the proposed system develops an optimal offloading algorithm for the mobile user that takes into account cloudlet availability and the user's local load. The problem is formulated and solved as a Markov Decision Process (MDP) to obtain a policy that minimizes the combined offloading and computation costs.
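
    As a rough illustration of the MDP formulation mentioned above, the sketch below solves a hypothetical two-state offloading model (cloudlet reachable vs. disconnected) by value iteration; every transition probability and cost is invented, and the paper's actual state space, which tracks both cloudlet availability and the user's local load, is richer.

```python
import numpy as np

# P[a, s, s']: action 0 = run locally, action 1 = offload;
# state 0 = cloudlet reachable, state 1 = disconnected (assumed dynamics).
P = np.array([[[0.8, 0.2],
               [0.3, 0.7]],
              [[0.9, 0.1],
               [0.3, 0.7]]])
cost = np.array([[3.0, 1.0],    # cost[s, a]: energy + latency per task;
                 [3.0, 9.0]])   # offloading while disconnected is penalized
gamma = 0.95

V = np.zeros(2)
for _ in range(1000):           # value iteration to (practical) convergence
    Q = cost + gamma * np.einsum('ast,t->sa', P, V)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new

print("optimal action per state (0 = local, 1 = offload):", Q.argmin(axis=1))
```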

  7. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

    The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of a qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
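
    For orientation, the sketch below fits the standard hit/miss POD model, a logistic curve in log flaw size, by maximum likelihood. The ten inspection outcomes are fabricated, and a sample this small is exactly the situation the report warns about: the fitted curve and derived quantities such as a90 then carry large statistical uncertainty.

```python
import numpy as np
from scipy.optimize import minimize

# Hit/miss POD model: POD(a) = 1 / (1 + exp(-(ln a - mu) / sigma)).
# Flaw sizes (mm) and detection outcomes below are made-up data.
sizes = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
hits  = np.array([0,   0,   0,   1,   0,   1,   1,   1,   1,   1  ])

def neg_log_lik(theta):
    mu, log_sigma = theta
    z = (np.log(sizes) - mu) / np.exp(log_sigma)
    pod = np.clip(1.0 / (1.0 + np.exp(-z)), 1e-12, 1 - 1e-12)
    return -np.sum(hits * np.log(pod) + (1 - hits) * np.log(1 - pod))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu, sigma = fit.x[0], np.exp(fit.x[1])
a90 = np.exp(mu + sigma * np.log(0.9 / 0.1))   # size detected with 90% POD
print(f"a90 = {a90:.2f} mm")
```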

  8. Optimizing Input/Output Using Adaptive File System Policies

    Science.gov (United States)

    Madhyastha, Tara M.; Elford, Christopher L.; Reed, Daniel A.

    1996-01-01

    Parallel input/output characterization studies and experiments with flexible resource management algorithms indicate that adaptivity is crucial to file system performance. In this paper we propose an automatic technique for selecting and refining file system policies based on application access patterns and execution environment. An automatic classification framework allows the file system to select appropriate caching and pre-fetching policies, while performance sensors provide feedback used to tune policy parameters for specific system environments. To illustrate the potential performance improvements possible using adaptive file system policies, we present results from experiments involving classification-based and performance-based steering.

  9. Martian Radiative Transfer Modeling Using the Optimal Spectral Sampling Method

    Science.gov (United States)

    Eluszkiewicz, J.; Cady-Pereira, K.; Uymin, G.; Moncet, J.-L.

    2005-01-01

    The large volume of existing and planned infrared observations of Mars has prompted the development of a new martian radiative transfer model that could be used in the retrievals of atmospheric and surface properties. The model is based on the Optimal Spectral Sampling (OSS) method [1]. The method is a fast and accurate monochromatic technique applicable to a wide range of remote sensing platforms (from microwave to UV) and was originally developed for the real-time processing of infrared and microwave data acquired by instruments aboard the satellites forming part of the next-generation global weather satellite system NPOESS (National Polar-orbiting Operational Environmental Satellite System) [2]. As part of our on-going research related to the radiative properties of the martian polar caps, we have begun the development of a martian OSS model with the goal of using it to perform the self-consistent atmospheric corrections necessary to retrieve cap emissivity from Thermal Emission Spectrometer (TES) spectra. While the caps will provide the initial focus area for applying the new model, it is hoped that the model will be of interest to the wider Mars remote sensing community.

  10. Inequality and Optimal Redistributive Tax and Transfer Policies

    OpenAIRE

    Howell H Zee

    1999-01-01

    This paper explores the revenue-raising aspect of progressive taxation and derives, on the basis of a simple model, the optimal degree of tax progressivity where the tax revenue is used exclusively to finance (perfectly) targeted transfers to the poor. The paper shows that not only would it be optimal to finance the targeted transfers with progressive taxation, but that the optimal progressivity increases unambiguously with growing income inequality. This conclusion holds up under different a...

  11. The Theory of Optimal Taxation: What is the Policy Relevance?

    OpenAIRE

    Birch Sørensen, Peter

    2006-01-01

    The paper discusses the implications of optimal tax theory for the debates on uniform commodity taxation and neutral capital income taxation. While strong administrative and political economy arguments in favor of uniform and neutral taxation remain, recent advances in optimal tax theory suggest that the information needed to implement the differentiated taxation prescribed by optimal tax theory may be easier to obtain than previously believed. The paper also points to the strong similarity b...

  12. Privacy, Time Consistent Optimal Labour Income Taxation and Education Policy

    OpenAIRE

    Konrad, Kai A.

    1999-01-01

    Incomplete information is a commitment device for time consistency problems. In the context of time consistent labour income taxation privacy reduces welfare losses and increases the effectiveness of public education as a second best policy.

  13. Optimal pricing and replenishment policies for instantaneous deteriorating items with backlogging and trade credit under inflation

    Science.gov (United States)

    Sundara Rajan, R.; Uthayakumar, R.

    2017-12-01

    In this paper we develop an economic order quantity model to investigate the optimal replenishment policies for instantaneously deteriorating items under inflation and trade credit. The demand rate is a linear function of the selling price and decreases exponentially with time over a finite planning horizon. Shortages are allowed and partially backlogged. Under these conditions, we model the retailer's inventory system as a profit maximization problem to determine the optimal selling price, optimal order quantity and optimal replenishment time. An easy-to-use algorithm is developed to determine the optimal replenishment policies for the retailer. We also provide the optimal present value of profit when shortages are completely backlogged as a special case. Numerical examples are presented to illustrate the algorithm, and managerial implications are drawn from them to substantiate the model. The results show an improvement in total profit under complete backlogging rather than partial backlogging.

  14. A Counterexample on Sample-Path Optimality in Stable Markov Decision Chains with the Average Reward Criterion

    Czech Academy of Sciences Publication Activity Database

    Cavazos-Cadena, R.; Montes-de-Oca, R.; Sladký, Karel

    2014-01-01

    Roč. 163, č. 2 (2014), s. 674-684 ISSN 0022-3239 Grant - others:PSF Organization(US) 012/300/02; CONACYT (México) and ASCR (Czech Republic)(MX) 171396 Institutional support: RVO:67985556 Keywords : Strong sample-path optimality * Lyapunov function condition * Stationary policy * Expected average reward criterion Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 1.509, year: 2014 http://library.utia.cas.cz/separaty/2014/E/sladky-0432661.pdf

  15. An accurate approximate solution of optimal sequential age replacement policy for a finite-time horizon

    International Nuclear Information System (INIS)

    Jiang, R.

    2009-01-01

    It is difficult to find the optimal solution of the sequential age replacement policy for a finite-time horizon. This paper presents an accurate approximation for finding an approximately optimal solution of the sequential replacement policy. The proposed approximation is computationally simple and suitable for any failure distribution. Its accuracy is illustrated by two examples. Based on the approximate solution, an approximate estimate for the total cost is derived.
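
    The classical infinite-horizon age replacement policy, which the paper's finite-horizon sequential version refines, is straightforward to compute and makes a useful reference point. The sketch below minimizes the textbook long-run cost rate for a Weibull lifetime; the shape, scale and cost parameters are hypothetical.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Age replacement: replace preventively at age T (cost cp) or on failure
# (cost cf > cp). Long-run cost rate = expected cycle cost / cycle length.
beta, eta = 2.5, 100.0          # Weibull shape / scale (hypothetical)
cp, cf = 1.0, 10.0

R = lambda t: np.exp(-(t / eta) ** beta)      # survival function

def cost_rate(T):
    mean_cycle, _ = quad(R, 0.0, T)           # E[min(lifetime, T)]
    return (cp * R(T) + cf * (1.0 - R(T))) / mean_cycle

opt = minimize_scalar(cost_rate, bounds=(1.0, 300.0), method="bounded")
print(f"T* = {opt.x:.1f}, minimal cost rate = {opt.fun:.4f}")
```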

  16. Focusing light through dynamical samples using fast continuous wavefront optimization.

    Science.gov (United States)

    Blochet, B; Bourdieu, L; Gigan, S

    2017-12-01

    We describe a fast continuous optimization wavefront shaping system able to focus light through dynamic scattering media. A micro-electro-mechanical system-based spatial light modulator, a fast photodetector, and field programmable gate array electronics are combined to implement a continuous optimization of a wavefront with a single-mode optimization rate of 4.1 kHz. The system performance is demonstrated by focusing light through colloidal solutions of TiO2 particles in glycerol with tunable temporal stability.

  17. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white-noise-loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  18. Optimal policy for value-based decision-making.

    Science.gov (United States)

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.

  19. Optimal grade control sampling practice in open-pit mining

    DEFF Research Database (Denmark)

    Engström, Karin; Esbensen, Kim Harry

    2017-01-01

    Misclassification of ore grades results in lost revenues, and the need for representative sampling procedures in open pit mining is increasingly important in all mining industries. This study evaluated possible improvements in sampling representativity with the use of Reverse Circulation (RC) drill...... sampling compared to manual Blast Hole (BH) sampling in the Leveäniemi open pit mine, northern Sweden. The variographic experiment results showed that sampling variability was lower for RC than for BH sampling. However, the total costs for RC drill sampling are significantly exceeding current costs...... for manual BH sampling, which needs to be compensated for by other benefits to motivate introduction of RC drilling. The main conclusion is that manual BH sampling can be fit-for-purpose in the studied open pit mine. However, with so many mineral commodities and mining methods in use globally...

  20. Triangular Geometrized Sampling Heuristics for Fast Optimal Motion Planning

    Directory of Open Access Journals (Sweden)

    Ahmed Hussain Qureshi

    2015-02-01

    Full Text Available Rapidly-exploring Random Tree (RRT)-based algorithms have become increasingly popular due to their lower computational complexity as compared with other path planning algorithms. The recently presented RRT* motion planning algorithm improves upon the original RRT algorithm by providing optimal path solutions. While RRT determines an initial collision-free path fairly quickly, RRT* guarantees almost certain convergence to an optimal, obstacle-free path from the start to the goal points for any given geometrical environment. However, the main limitations of RRT* include its slow processing rate and high memory consumption, due to the large number of iterations required for calculating the optimal path. In order to overcome these limitations, we present another improvement, i.e., the Triangular Geometrized-RRT* (TG-RRT*) algorithm, which utilizes triangular geometrical methods to improve the performance of the RRT* algorithm in terms of the processing time and a decreased number of iterations required for an optimal path solution. Simulations comparing the performance results of the improved TG-RRT* with RRT* are presented to demonstrate the overall improvement in performance and optimal path detection.
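
    To make the baseline concrete, the sketch below implements the plain RRT core (uniform sampling with goal bias, nearest-node extension, point-wise collision checks) that RRT* and the TG-RRT* variant build upon. The workspace, obstacle, and tuning constants are arbitrary, and both the rewiring step that gives RRT* its optimality guarantee and the paper's triangular sampling heuristic are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
start, goal = np.array([0.1, 0.1]), np.array([0.9, 0.9])
obstacles = [(np.array([0.5, 0.5]), 0.2)]      # (center, radius) discs
step, goal_bias, max_iters = 0.05, 0.05, 5000

def collision_free(p):                          # endpoint check only (sketch)
    return all(np.linalg.norm(p - c) > r for c, r in obstacles)

nodes, parent = [start], {0: None}
for _ in range(max_iters):
    target = goal if rng.random() < goal_bias else rng.random(2)
    i = min(range(len(nodes)), key=lambda j: np.linalg.norm(nodes[j] - target))
    d = target - nodes[i]
    new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
    if not collision_free(new):
        continue
    parent[len(nodes)] = i
    nodes.append(new)
    if np.linalg.norm(new - goal) < step:       # goal region reached
        path, k = [], len(nodes) - 1
        while k is not None:
            path.append(nodes[k])
            k = parent[k]
        print(f"path found with {len(path)} nodes")
        break
```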

  1. Optimal experiment design in a filtering context with application to sampled network data

    OpenAIRE

    Singhal, Harsh; Michailidis, George

    2010-01-01

    We examine the problem of optimal design in the context of filtering multiple random walks. Specifically, we define the steady state E-optimal design criterion and show that the underlying optimization problem leads to a second order cone program. The developed methodology is applied to tracking network flow volumes using sampled data, where the design variable corresponds to controlling the sampling rate. The optimal design is numerically compared to a myopic and a naive strategy. Finally, w...

  2. Sampled-data and discrete-time H2 optimal control

    NARCIS (Netherlands)

    Trentelman, Harry L.; Stoorvogel, Anton A.

    1993-01-01

    This paper deals with the sampled-data H2 optimal control problem. Given a linear time-invariant continuous-time system, the problem of minimizing the H2 performance over all sampled-data controllers with a fixed sampling period can be reduced to a pure discrete-time H2 optimal control problem. This

  3. Optimal Aquifer Pumping Policy to Reduce Contaminant Concentration

    Directory of Open Access Journals (Sweden)

    Ali Abaei

    2012-01-01

    Full Text Available Different sources of groundwater contamination lead to a non-uniform distribution of contaminant concentration in the aquifer. If elimination or containment of pollution sources is not possible, the distribution of contaminant concentrations can be modified in order to eliminate peak concentrations using an optimal water pumping discharge plan. In the present investigation the Visual MODFLOW model was used to simulate flow and transport in a hypothetical aquifer. A Genetic Algorithm (GA) was also applied to optimize the location and pumping flow rate of wells in order to reduce contaminant peak concentrations in the aquifer.

  4. Optimal policies for cumulative damage models with maintenance last and first

    International Nuclear Information System (INIS)

    Zhao, Xufeng; Qian, Cunhua; Nakagawa, Toshio

    2013-01-01

    From the economic viewpoint of several combined PM policies in reliability theory, this paper takes up a standard cumulative damage model in which the notion of maintenance last is applied, i.e., the unit undergoes preventive maintenance before failure at a planned time T, at a damage level Z, or at a shock number N, whichever occurs last. Expected cost rates are formulated in detail, and optimal problems of two alternative policies which combine time-based with condition-based preventive maintenance are discussed, i.e., optimal T_L* for N, Z_L* for T, and N_L* for T are rigorously obtained. Comparison methods between such maintenance last and conventional maintenance first are explored. It is determined theoretically and numerically which policy should be adopted, according to the different methods in different cases when the time-based or the condition-based PM policy is optimized.

  5. Optimal pricing policies for services with consideration of facility maintenance costs

    Science.gov (United States)

    Yeh, Ruey Huei; Lin, Yi-Fang

    2012-06-01

    For survival and success, pricing is an essential issue for service firms. This article deals with the pricing strategies for services with substantial facility maintenance costs. For this purpose, a mathematical framework that incorporates service demand and facility deterioration is proposed to address the problem. The facility and customers constitute a service system driven by Poisson arrivals and exponential service times. A service demand with increasing price elasticity and a facility lifetime with strictly increasing failure rate are also adopted in modelling. By examining the bidirectional relationship between customer demand and facility deterioration in the profit model, the pricing policies of the service are investigated. Then analytical conditions of customer demand and facility lifetime are derived to achieve a unique optimal pricing policy. The comparative statics properties of the optimal policy are also explored. Finally, numerical examples are presented to illustrate the effects of parameter variations on the optimal pricing policy.

  6. Optimization of maintenance policy using the proportional hazard model

    Energy Technology Data Exchange (ETDEWEB)

    Samrout, M. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: mohamad.el_samrout@utt.fr; Chatelet, E. [Information Sciences and Technologies Institute, University of Technology of Troyes, 10000 Troyes (France)], E-mail: chatelt@utt.fr; Kouta, R. [M3M Laboratory, University of Technology of Belfort Montbeliard (France); Chebbo, N. [Industrial Systems Laboratory, IUT, Lebanese University (Lebanon)

    2009-01-15

    The evolution of system reliability depends on its structure as well as on the evolution of its components' reliability. The latter is a function of component age during a system's operating life. Component aging is strongly affected by maintenance activities performed on the system. In this work, we consider two categories of maintenance activities: corrective maintenance (CM) and preventive maintenance (PM). Maintenance actions are characterized by their ability to reduce this age. PM consists of actions applied to components while they are operating, whereas CM actions occur when the component breaks down. In this paper, we expound a new method to integrate the effect of CM while planning the PM policy. The proportional hazard function was used as a modeling tool for that purpose. Interesting results were obtained when policies that take the CM effect into consideration were compared with those that do not.

  7. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

    Full Text Available We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding optimal solutions, and that the availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.
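
    A toy version of such a GA search is sketched below. The cost and availability curves are invented stand-ins, and with a single integer decision variable a direct search would of course suffice; the fragment only illustrates the penalty-based handling of the availability constraint inside a selection-and-mutation loop.

```python
import numpy as np

rng = np.random.default_rng(1)

def avg_cost(N):                 # replace after N repairs (hypothetical shape)
    return 50.0 / N + 2.0 * 1.15 ** N

def availability(N):             # availability degrades as repairs accumulate
    return 0.99 - 0.004 * N

A_min, penalty = 0.95, 1e3

def fitness(N):                  # penalize availability violations
    return avg_cost(N) + penalty * max(0.0, A_min - availability(N))

pop = rng.integers(1, 21, size=20)              # candidate N values
for _ in range(50):                             # generations
    a, b = rng.integers(0, len(pop), (2, len(pop)))
    keep_a = [fitness(pop[i]) < fitness(pop[j]) for i, j in zip(a, b)]
    pop = np.where(keep_a, pop[a], pop[b])      # tournament selection
    pop = np.clip(pop + rng.integers(-1, 2, size=len(pop)), 1, 20)  # mutation

best = min(pop, key=fitness)
print(f"N* = {best}, cost = {avg_cost(best):.2f}, A = {availability(best):.3f}")
```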

  8. Optimization of the Advertising Policy for a Recreation Park

    NARCIS (Netherlands)

    B. Wierenga (Berend)

    1979-01-01

    textabstractThis paper deals with the problem of the desirable level of advertising expenditure, the optimal distribution of this expenditure in time, and the allocation over the media -- TV, radio, and newspaper -- for a recreation park in the Netherlands. First, a model is specified and estimated,

  9. Optimal time policy for deteriorating items of two-warehouse

    Indian Academy of Sciences (India)

    ... goods in which the first is a rented warehouse and the second is an own warehouse, with items deteriorating at two different rates. The aim of this study is to determine the optimal order quantity that maximizes the profit of the projected model. Finally, some numerical examples and a sensitivity analysis of parameters are presented to validate ...

  10. Analytical method for optimization of maintenance policy based on available system failure data

    International Nuclear Information System (INIS)

    Coria, V.H.; Maximov, S.; Rivas-Dávalos, F.; Melchor, C.L.; Guardado, J.L.

    2015-01-01

    An analytical optimization method for a preventive maintenance (PM) policy with minimal repair at failure, periodic maintenance, and replacement is proposed for systems with historical failure time data influenced by a current PM policy. The method includes a new imperfect PM model based on the Weibull distribution and incorporates the current maintenance interval T_0 and the optimal maintenance interval T to be found. The Weibull parameters are analytically estimated using maximum likelihood estimation. Based on this model, the optimal number of PM actions and the optimal maintenance interval for minimizing the expected cost over an infinite time horizon are also analytically determined. A number of examples are presented involving different failure time data and current maintenance intervals to analyze how the proposed analytical optimization method for the periodic PM policy performs in response to changes in the distribution of the failure data and the current maintenance interval. - Highlights: • An analytical optimization method for preventive maintenance (PM) policy is proposed. • A new imperfect PM model is developed. • The Weibull parameters are analytically estimated using maximum likelihood. • The optimal maintenance interval and number of PM are also analytically determined. • The model is validated by several numerical examples
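
    The paper's imperfect-PM model cannot be reproduced from the abstract alone, but the classical special case it generalizes, periodic PM with minimal repair at failures under a Weibull hazard, has a closed-form optimal interval. The sketch below (hypothetical costs and Weibull parameters) checks that closed form against direct numerical minimization.

```python
import numpy as np
from scipy.optimize import minimize_scalar

beta, eta = 2.2, 500.0          # Weibull shape / scale (hypothetical)
cm, cp = 3.0, 10.0              # minimal-repair cost, planned-PM cost

H = lambda T: (T / eta) ** beta               # expected failures in (0, T]
cost_rate = lambda T: (cp + cm * H(T)) / T    # long-run cost per unit time

# Closed form for beta > 1: T* = eta * (cp / (cm * (beta - 1)))**(1 / beta)
T_star = eta * (cp / (cm * (beta - 1))) ** (1.0 / beta)

num = minimize_scalar(cost_rate, bounds=(1.0, 5 * eta), method="bounded")
print(f"analytic T* = {T_star:.1f}, numeric T* = {num.x:.1f}")
```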

  11. 76 FR 41186 - Salmonella Verification Sampling Program: Response to Comments on New Agency Policies and...

    Science.gov (United States)

    2011-07-13

    ... Service [Docket No. FSIS-2008-0008] Salmonella Verification Sampling Program: Response to Comments on New Agency Policies and Clarification of Timeline for the Salmonella Initiative Program (SIP) AGENCY: Food... Federal Register notice (73 FR 4767- 4774), which described upcoming policy changes in the FSIS Salmonella...

  12. Optimized preventive replacement policy for large cascade systems

    International Nuclear Information System (INIS)

    Kretzen, H.H.

    1986-01-01

    The repair-bottleneck problem as a limiting factor for system reliability can be overcome. Design need only cover the steady state, with wearout-induced accumulations of failures precluded by preventive replacements with subsequent recycling. As a result, a reliable system appears to be feasible on an economic basis, with detailed optimization left to more precise cost-benefit studies. As a reference system, the radio-frequency-generator cascade of a single-cell linear accelerator is considered. (DG)

  13. Optimal Bail Out Policy, Conditionality and Creative Ambiguity

    OpenAIRE

    Xavier Freixas

    1999-01-01

    This paper addresses the issue of the optimal behaviour of the Lender of Last Resort (LOLR) in its microeconomic role regarding individual financial institutions in distress. It has been argued that the LOLR should not intervene at the microeconomic level and should let any defaulting institution face the market discipline, as it will be confronted with the consequences of the risks it has taken. By considering a simple cost-benefit analysis we show that this position may lack a sufficient foundation...

  14. Optimal bail out policy, conditionality and constructive ambiguity

    OpenAIRE

    Freixas, Xavier

    1999-01-01

    This paper addresses the issue of the optimal behaviour of the Lender of Last Resort (LOLR) in its microeconomic role regarding individual financial institutions in distress. It has been argued that the LOLR should not intervene at the microeconomic level and let any defaulting institution face the market discipline, as it will be confronted with the consequences of the risks it has taken. By considering a simple cost benefit analysis we show that this position may lac...

  15. China's optimal stockpiling policies in the context of new oil price trend

    International Nuclear Information System (INIS)

    Xie, Nan; Yan, Zhijun; Zhou, Yi; Huang, Wenjun

    2017-01-01

    Optimizing the size of oil stockpiles plays a fundamental role in the process of making national strategic petroleum reserve (SPR) policies. There have been extensive studies on the operating strategies of SPR. However, the previous literature has paid more attention to a booming or stable international oil market, while few studies have analyzed the impact of a long-term low oil price on SPR policy. As a supplement, this paper extends a static model to study China's optimal stockpiling policy under different oil price trends, and in response to different current oil prices. A new variable, “FC”, which captures the appreciation and depreciation of the economic value of the reserved oil, is taken into account to assess the optimal size of the SPR. In this paper, a more multifaceted view is provided on the policies of China's SPR, especially under different trends of international oil price fluctuations. - Highlights: • We extended a static model to study the optimal stockpiling size of China's SPR. • A new variable “FC” was applied to illustrate the shifting financial value of the SPR. • We analyzed how the current oil price and varied predictions influence the optimal size. • Operational measures could be adjusted at the end of each decision-making period. • A more multifaceted view might be provided for China's SPR policy-making.

  16. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  17. Modeling and optimizing periodically inspected software rejuvenation policy based on geometric sequences

    International Nuclear Information System (INIS)

    Meng, Haining; Liu, Jianjun; Hei, Xinhong

    2015-01-01

    Software aging is characterized by an increasing failure rate, progressive performance degradation and even a sudden crash in a long-running software system. Software rejuvenation is an effective method to counteract software aging. A periodically inspected rejuvenation policy for software systems is studied. The consecutive inspection intervals are assumed to form a decreasing geometric sequence, and based on the inspection times of the software system and its failure features, software rejuvenation or system recovery is performed. The system availability function and cost rate function are obtained, and the optimal inspection time and rejuvenation interval are both derived to maximize system availability and minimize the cost rate. Then, boundary conditions of the optimal rejuvenation policy are deduced. Finally, a numerical experiment shows the effectiveness of the proposed policy. Compared with an existing software rejuvenation policy, the new policy achieves higher system availability. - Highlights: • A periodically inspected rejuvenation policy for software systems is studied. • A decreasing geometric sequence is used to denote the consecutive inspection intervals. • The optimal inspection times and rejuvenation interval are found. • The new policy is capable of reducing average cost and improving system availability

  18. Optimism is universal: exploring the presence and benefits of optimism in a representative sample of the world.

    Science.gov (United States)

    Gallagher, Matthew W; Lopez, Shane J; Pressman, Sarah D

    2013-10-01

    Current theories of optimism suggest that the tendency to maintain positive expectations for the future is an adaptive psychological resource associated with improved well-being and physical health, but the majority of previous optimism research has been conducted in industrialized nations. The present study examined (a) whether optimism is universal, (b) what demographic factors predict optimism, and (c) whether optimism is consistently associated with improved subjective well-being and perceived health worldwide. The present study used representative samples of 142 countries that together represent 95% of the world's population. The total sample of 150,048 individuals had a mean age of 38.28 (SD = 16.85) and approximately equal sex distribution (51.2% female). The relationships between optimism, subjective well-being, and perceived health were examined using hierarchical linear modeling. Results indicated that most individuals and most countries worldwide are optimistic and that higher levels of optimism are associated with improved subjective well-being and perceived health worldwide. The present study provides compelling evidence that optimism is a universal phenomenon and that the associations between optimism and improved psychological functioning are not limited to industrialized nations. © 2012 Wiley Periodicals, Inc.

  19. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Full Text Available Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer’s risk and consumer’s risk. Implementation of the algorithm has been illustrated numerically for different choices of quantities involved in a double sampling plan.
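
    The sketch below illustrates the underlying design problem with a small exhaustive search in place of the paper's genetic algorithm: the operating characteristic of a double plan is computed from binomial probabilities, and candidate plans are screened against assumed producer's and consumer's risk points (the AQL, LTPD, alpha and beta values are hypothetical, and n2 = 2·n1 is a simplifying restriction).

```python
from scipy.stats import binom

# Screen double sampling plans (n1, c1, n2, c2): accept the plan if
# Pa(AQL) >= 1 - alpha and Pa(LTPD) <= beta (all risk points assumed).
AQL, LTPD, alpha, beta_risk = 0.01, 0.05, 0.05, 0.10

def accept_prob(p, n1, c1, n2, c2):
    pa = binom.cdf(c1, n1, p)                     # accept on the first sample
    for d1 in range(c1 + 1, c2 + 1):              # borderline: draw second sample
        pa += binom.pmf(d1, n1, p) * binom.cdf(c2 - d1, n2, p)
    return pa

best = None
for n1 in range(20, 301, 5):
    n2 = 2 * n1                                   # common design restriction
    for c1 in range(0, 7):
        for c2 in range(c1 + 1, c1 + 5):
            if (accept_prob(AQL, n1, c1, n2, c2) >= 1 - alpha and
                    accept_prob(LTPD, n1, c1, n2, c2) <= beta_risk):
                if best is None or n1 + n2 < best[0]:
                    best = (n1 + n2, n1, c1, n2, c2)

print("smallest feasible plan (n1, c1, n2, c2):", best[1:])
```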

  1. Optimal climate policy is a utopia. From quantitative to qualitative cost-benefit analysis

    International Nuclear Information System (INIS)

    Van den Bergh, Jeroen C.J.M.

    2004-01-01

    The dominance of quantitative cost-benefit analysis (CBA) and optimality concepts in the economic analysis of climate policy is criticised. Among other things, it is argued to be based on a misplaced interpretation of policy for a complex climate-economy system as being analogous to individual inter-temporal welfare optimisation. The transfer of quantitative CBA and optimality concepts reflects an overly ambitious approach that does more harm than good. An alternative approach is to focus attention on extreme events, structural change and complexity. It is argued that a qualitative rather than a quantitative CBA that takes account of these aspects can support the adoption of a minimax regret approach or precautionary principle in climate policy. This means: implement stringent GHG reduction policies as soon as possible

  2. Optimal dynamic pricing and replenishment policy for perishable items with inventory-level-dependent demand

    Science.gov (United States)

    Lu, Lihao; Zhang, Jianxiong; Tang, Wansheng

    2016-04-01

    An inventory system for perishable items with limited replenishment capacity is introduced in this paper. The demand rate depends on the stock quantity displayed in the store as well as on the sales price. With the goal of profit maximisation, an optimisation problem is formulated to seek the optimal joint dynamic pricing and replenishment policy, and it is solved using Pontryagin's maximum principle. A joint mixed policy, in which the sales price is a static decision variable and the replenishment rate remains a dynamic decision variable, is presented for comparison with the joint dynamic policy. Numerical results demonstrate the advantages of the joint dynamic policy, and further show the effects of different system parameters on the optimal joint dynamic policy and the maximal total profit.

  3. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements for developing probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in numbers sufficient to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of the NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre (Petten, The Netherlands) supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size. Not

  4. Joint Optimal Production Planning for Complex Supply Chains Constrained by Carbon Emission Abatement Policies

    OpenAIRE

    He, Longfei; Xu, Zhaoguang; Niu, Zhanwen

    2014-01-01

    We focus on the joint production planning of complex supply chains facing stochastic demands and constrained by carbon emission reduction policies. We pick two typical carbon emission reduction policies to study how emission regulation influences the profit and carbon footprint of a typical supply chain. We use the input-output model to capture the interrelated demand links between an arbitrary pair of nodes in scenarios without or with carbon emission constraints. We design optim...

  5. Determination and optimization of spatial samples for distributed measurements.

    Energy Technology Data Exchange (ETDEWEB)

    Huo, Xiaoming (Georgia Institute of Technology, Atlanta, GA); Tran, Hy D.; Shilling, Katherine Meghan; Kim, Heeyong (Georgia Institute of Technology, Atlanta, GA)

    2010-10-01

    There are no accepted standards for determining how many measurements to take during part inspection or where to take them, or for assessing confidence in the evaluation of acceptance based on these measurements. The goal of this work was to develop a standard method for determining the number of measurements, together with the spatial distribution of measurements and the associated risks for false acceptance and false rejection. Two paths have been taken to create a standard method for selecting sampling points. A wavelet-based model has been developed to select measurement points and to determine confidence in the measurement after the points are taken. An adaptive sampling strategy has been studied to determine implementation feasibility on commercial measurement equipment. Results using both real and simulated data are presented for each of the paths.

  6. Optimization conditions of samples saponification for tocopherol analysis.

    Science.gov (United States)

    Souza, Aloisio Henrique Pereira; Gohara, Aline Kirie; Rodrigues, Ângela Claudia; Ströher, Gisely Luzia; Silva, Danielle Cristina; Visentainer, Jesuí Vergílio; Souza, Nilson Evelázio; Matsushita, Makoto

    2014-09-01

    A full 2² factorial design (two factors at two levels) with duplicates was performed to investigate the influence of the factors agitation time (2 and 4 h) and percentage of KOH (60% and 80% w/v) in the saponification of samples for the determination of α-, β- and γ+δ-tocopherols. The study used samples of peanuts (cultivar armadillo), produced and marketed in Maringá, PR. The factors % KOH and agitation time were significant, and an increase in their values contributed negatively to the responses. The interaction effect was not significant for the response δ-tocopherol, and the contribution of this effect to the other responses was positive, but less than 10%. The ANOVA and response surface analysis showed that the most efficient saponification procedure was obtained using a 60% (w/v) solution of KOH and an agitation time of 2 h. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Optimization of series-parallel multi-state systems under maintenance policies

    International Nuclear Information System (INIS)

    Nourelfath, Mustapha; Ait-Kadi, Daoud

    2007-01-01

    In the redundancy optimization problem, the design goal is achieved by discrete choices made from components available in the market. In this paper, the problem is to find, under reliability constraints, the minimal cost configuration of a multi-state series-parallel system, which is subject to a specified maintenance policy. The number of maintenance teams is less than the number of repairable components, and a maintenance policy specifies the priorities between the system components. To take into account the dependencies resulting from the sharing of maintenance teams, the universal generating function approach is coupled with a Markov model. The resulting optimization approach has the advantage of being mainly analytical

  8. An integrated DEA-COLS-SFA algorithm for optimization and policy making of electricity distribution units

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.; Eivazy, H.

    2009-01-01

    This paper presents an integrated data envelopment analysis (DEA)-corrected ordinary least squares (COLS)-stochastic frontier analysis (SFA)-principal component analysis (PCA)-numerical taxonomy (NT) algorithm for performance assessment, optimization and policy making of electricity distribution units. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. However, this study proposes an integrated flexible approach to rank the units and choose the best version of the DEA method for optimization and policy-making purposes. It covers both static and dynamic aspects of the information environment owing to the involvement of SFA, which is finally compared with the best DEA model through the Spearman correlation technique. The integrated approach yields improved ranking and optimization of electricity distribution systems. To illustrate the usability and reliability of the proposed algorithm, 38 electricity distribution units in Iran have been considered, ranked and optimized by the proposed algorithm of this study.

  9. Optimal purification and sensitive quantification of DNA from fecal samples

    DEFF Research Database (Denmark)

    Jensen, Annette Nygaard; Hoorfar, Jeffrey

    2002-01-01

    Application of reliable, rapid and sensitive methods to the laboratory diagnosis of zoonotic infections continues to challenge microbiological laboratories. The recovery of DNA from a swine fecal sample and a bacterial culture extracted by a conventional phenol-chloroform extraction method was compared... = 0.99 and R2 = 1.00). In conclusion, silica-membrane columns can provide a more convenient and less hazardous alternative to the conventional phenol-based method. The results have implications for further improvement of sensitive amplification methods for laboratory diagnosis.

  10. Optimal policies for aggregate recycling from decommissioned forest roads.

    Science.gov (United States)

    Thompson, Matthew; Sessions, John

    2008-08-01

    To mitigate the adverse environmental impact of forest roads, especially degradation of endangered salmonid habitat, many public and private land managers in the western United States are actively decommissioning roads where practical and affordable. Road decommissioning is associated with reduced long-term environmental impact. When decommissioning a road, it may be possible to recover some aggregate (crushed rock) from the road surface. Aggregate is used on many low volume forest roads to reduce wheel stresses transferred to the subgrade, reduce erosion, reduce maintenance costs, and improve driver comfort. Previous studies have demonstrated the potential for aggregate to be recovered and used elsewhere on the road network, at a reduced cost compared to purchasing aggregate from a quarry. This article investigates the potential for aggregate recycling to provide an economic incentive to decommission additional roads by reducing transport distance and aggregate procurement costs for other actively used roads. Decommissioning additional roads may, in turn, result in improved aquatic habitat. We present real-world examples of aggregate recycling and discuss the advantages of doing so. Further, we present mixed integer formulations to determine optimal levels of aggregate recycling under economic and environmental objectives. Tested on an example road network, incorporation of aggregate recycling demonstrates substantial cost-savings relative to a baseline scenario without recycling, increasing the likelihood of road decommissioning and reduced habitat degradation. We find that aggregate recycling can result in up to 24% in cost savings (economic objective) and up to 890% in additional length of roads decommissioned (environmental objective).

  11. Tax policy can change the production path: A model of optimal oil extraction in Alaska

    International Nuclear Information System (INIS)

    Leighty, Wayne; Lin, C.-Y. Cynthia

    2012-01-01

    We model the economically optimal dynamic oil production decisions for seven production units (fields) on Alaska's North Slope. We use adjustment cost and discount rate to calibrate the model against historical production data, and use the calibrated model to simulate the impact of tax policy on production rate. We construct field-specific cost functions from average cost data and an estimated inverse production function, which incorporates engineering aspects of oil production into our economic modeling. Producers appear to have approximated dynamic optimality. Consistent with prior research, we find that changing the tax rate alone does not change the economically optimal oil production path, except for marginal fields that may cease production. Contrary to prior research, we find that the structure of tax policy can be designed to affect the economically optimal production path, but at a cost in net social benefit. - Highlights: ► We model economically optimal dynamic oil production decisions for 7 Alaska fields. ► Changing tax rate alone does not alter the economically optimal oil production path. ► But change in tax structure can affect the economically optimal oil production path. ► Tax structures that modify the optimal production path reduce net social benefit. ► Field-specific cost functions and inverse production functions are estimated

  12. Continuous Linguistic Rhetorical Education as a Means of Optimizing Language Policy in Russian Multinational Regions

    Science.gov (United States)

    Vorozhbitova, Alexandra A.; Konovalova, Galina M.; Ogneva, Tatiana N.; Chekulaeva, Natalia Y.

    2017-01-01

    Drawing on the function of Russian as a state language, the paper proposes a concept of continuous linguistic rhetorical (LR) education perceived as a means of optimizing language policy in Russian multinational regions. LR education as an innovative pedagogical system shapes a learner's readiness for self-projection as a strong linguistic…

  13. AN OPTIMAL REPLENISHMENT POLICY FOR DETERIORATING ITEMS WITH RAMP TYPE DEMAND UNDER PERMISSIBLE DELAY IN PAYMENTS

    Directory of Open Access Journals (Sweden)

    Dr. Sanjay Jain

    2010-10-01

    Full Text Available The aim of this paper is to develop an optimal replenishment policy for inventory models of deteriorating items with ramp type demand under permissible delay in payments. Deterioration of items begins on their arrival in stock.  An example is also presented to illustrate the application of developed model.

  14. Welfare-based optimal monetary policy in a two-sector small open economy

    Czech Academy of Sciences Publication Activity Database

    Rychalovska, Yuliya

    -, č. 16 (2007), s. 1-46 ISSN 1803-2397 Institutional research plan: CEZ:AV0Z70850503 Keywords : DSGE models * optimal monetary policy * non-traded goods Subject RIV: AH - Economics http://www.cnb.cz/m2export/sites/www.cnb.cz/en/research/research_publications/cnb_wp/download/cnbwp_2007_16.pdf

  15. A policy iteration approach to online optimal control of continuous-time constrained-input systems.

    Science.gov (United States)

    Modares, Hamidreza; Naghibi Sistani, Mohammad-Bagher; Lewis, Frank L

    2013-09-01

    This paper is an effort towards developing an online learning algorithm to find the optimal control solution for continuous-time (CT) systems subject to input constraints. The proposed method is based on the policy iteration (PI) technique which has recently evolved as a major technique for solving optimal control problems. Although a number of online PI algorithms have been developed for CT systems, none of them take into account the input constraints caused by actuator saturation. In practice, however, ignoring these constraints leads to performance degradation or even system instability. In this paper, to deal with the input constraints, a suitable nonquadratic functional is employed to encode the constraints into the optimization formulation. Then, the proposed PI algorithm is implemented on an actor-critic structure to solve the Hamilton-Jacobi-Bellman (HJB) equation associated with this nonquadratic cost functional in an online fashion. That is, two coupled neural network (NN) approximators, namely an actor and a critic are tuned online and simultaneously for approximating the associated HJB solution and computing the optimal control policy. The critic is used to evaluate the cost associated with the current policy, while the actor is used to find an improved policy based on information provided by the critic. Convergence to a close approximation of the HJB solution as well as stability of the proposed feedback control law are shown. Simulation results of the proposed method on a nonlinear CT system illustrate the effectiveness of the proposed approach. Copyright © 2013 ISA. All rights reserved.
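
    The paper's constrained-input, online setting does not reduce to a few lines, but the PI loop it builds on does. The sketch below runs Kleinman's classical model-based policy iteration for an unconstrained continuous-time LQR problem: each iteration evaluates the current gain through a Lyapunov equation (the critic's role) and then improves it (the actor's role). The system matrices and the initial stabilizing gain are arbitrary.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-1.0, 2.0]])   # open-loop unstable (hypothetical)
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)

K = np.array([[0.0, 5.0]])                 # initial stabilizing gain
for _ in range(20):
    Ak = A - B @ K
    # policy evaluation: solve Ak' P + P Ak + Q + K' R K = 0
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    K_new = np.linalg.solve(R, B.T @ P)    # policy improvement
    if np.linalg.norm(K_new - K) < 1e-10:
        break
    K = K_new

print("converged LQR gain K =", K)
```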

  16. Social Optimization and Pricing Policy in Cognitive Radio Networks with an Energy Saving Strategy

    Directory of Open Access Journals (Sweden)

    Shunfu Jin

    2016-01-01

    Full Text Available The rapid growth of wireless applications results in an increasing demand for spectrum resources and communication energy. In this paper, we first introduce a novel energy saving strategy in cognitive radio networks (CRNs) and then propose an appropriate pricing policy for secondary user (SU) packets. We analyze the behavior of data packets in a discrete-time single-server priority queue under a multiple-vacation discipline. With the help of a Quasi-Birth-Death (QBD) process model, we obtain the joint distribution of the number of SU packets and the state of the base station (BS) via the Matrix-Geometric Solution method. We assess the average latency of SU packets and the energy saving ratio of the system. According to a natural reward-cost structure, we study the individually optimal behavior and the socially optimal behavior of the energy saving strategy and use an optimization algorithm based on the standard particle swarm optimization (SPSO) method to search for the socially optimal arrival rate of SU packets. By comparing the individually optimal and socially optimal behavior, we impose an appropriate admission fee on SU packets. Finally, we present numerical results to show the impacts of system parameters on the system performance and the pricing policy.

  17. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method used. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, the extremum points of the metamodel and the minimum points of a density function. In this way, increasingly accurate metamodels are constructed. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
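
    The sketch below mimics one cycle of such a sequential scheme using scipy's RBFInterpolator: each round adds the metamodel's current minimizer (an extremum point) and the point farthest from the existing design, a crude stand-in for the paper's density criterion. The test function and all settings are illustrative only.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize

f = lambda x: (x[..., 0] - 0.3) ** 2 + np.sin(5 * x[..., 1])   # "expensive" model

rng = np.random.default_rng(0)
X = rng.random((8, 2))               # initial design in [0, 1]^2
y = f(X)

for _ in range(10):
    model = RBFInterpolator(X, y)    # rebuild the metamodel
    # exploit: minimize the metamodel starting from the best sample so far
    res = minimize(lambda x: model(x[None])[0], X[y.argmin()],
                   bounds=[(0, 1), (0, 1)])
    # explore: random candidate farthest from the current design
    cand = rng.random((2000, 2))
    dist = np.linalg.norm(cand[:, None] - X[None], axis=-1).min(axis=1)
    for xn in (res.x, cand[dist.argmax()]):
        if np.min(np.linalg.norm(X - xn, axis=1)) > 1e-6:   # skip duplicates
            X = np.vstack([X, xn])
            y = np.append(y, f(xn))

print("best sample found:", X[y.argmin()], "value:", y.min())
```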

  18. Loss-Averse Retailer’s Optimal Ordering Policies for Perishable Products with Customer Returns

    Directory of Open Access Journals (Sweden)

    Xu Chen

    2014-01-01

    Full Text Available We investigate a loss-averse retailer's ordering policies for a perishable product with customer returns. Introducing a segmented loss utility function to capture the retailer's loss-aversion decision bias, we establish the loss-averse retailer's ordering policy model. We show that the loss-averse retailer's optimal order quantity with customer returns exists and is unique. By comparison, we find that both the risk-neutral and the loss-averse retailer's optimal order quantities depend on the inventory holding cost and the marginal shortage cost. Through a sensitivity analysis, we also discuss the effect of the loss-aversion coefficient and the return ratio on the loss-averse retailer's optimal order quantity with customer returns.
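
    As a numerical illustration of how loss aversion shifts the order quantity, the sketch below maximizes a kinked (piecewise-linear) utility over simulated demand for a newsvendor with proportional customer returns; every parameter, including the loss-aversion coefficient, is hypothetical rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
demand = rng.gamma(4.0, 25.0, size=200_000)   # simulated demand (mean 100)
p, c, s = 10.0, 6.0, 2.0                      # price, unit cost, salvage value
r_rate, refund = 0.08, 10.0                   # return rate, refund per unit
lam = 2.25                                    # loss-aversion coefficient

def expected_utility(q):
    sales = np.minimum(demand, q)
    returns = r_rate * sales                  # returned units are refunded
    profit = p * sales - refund * returns + s * (q - sales + returns) - c * q
    return np.where(profit >= 0, profit, lam * profit).mean()   # kinked utility

qs = np.arange(20, 301)
q_star = qs[np.argmax([expected_utility(q) for q in qs])]
print("loss-averse optimal order quantity:", q_star)
```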

  19. Policy Gradient Adaptive Dynamic Programming for Data-Based Optimal Control.

    Science.gov (United States)

    Luo, Biao; Liu, Derong; Wu, Huai-Ning; Wang, Ding; Lewis, Frank L

    2017-10-01

    The model-free optimal control problem of general discrete-time nonlinear systems is considered in this paper, and a data-based policy gradient adaptive dynamic programming (PGADP) algorithm is developed to design an adaptive optimal controller. By using offline and online data rather than a mathematical system model, the PGADP algorithm improves the control policy with a gradient descent scheme. The convergence of the PGADP algorithm is proved by demonstrating that the constructed Q-function sequence converges to the optimal Q-function. Based on the PGADP algorithm, the adaptive control method is developed with an actor-critic structure and the method of weighted residuals. Its convergence properties are analyzed, where the approximate Q-function converges to its optimum. Computer simulation results demonstrate the effectiveness of the PGADP-based adaptive control method.

  20. A hybrid reliability algorithm using PSO-optimized Kriging model and adaptive importance sampling

    Science.gov (United States)

    Tong, Cao; Gong, Haili

    2018-03-01

    This paper aims to reduce the computational cost of reliability analysis. A new hybrid algorithm is proposed based on a PSO-optimized Kriging model and an adaptive importance sampling method. Firstly, the particle swarm optimization (PSO) algorithm is used to optimize the parameters of the Kriging model. A typical function is fitted to validate the improvement by comparing results of the PSO-optimized Kriging model with those of the original Kriging model. Secondly, a hybrid algorithm for reliability analysis combining the optimized Kriging model with adaptive importance sampling is proposed. Two cases from the literature are given to validate its efficiency and correctness. Comparison results show that the proposed method is more efficient because it requires only a small number of sample points.
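
    Leaving the Kriging and PSO machinery aside, the importance-sampling half of such a hybrid can be sketched in a few lines: locate the most probable failure point by constrained optimization, then sample from a normal density recentred there and reweight. The linear limit-state function is a hypothetical stand-in for the surrogate model.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm, multivariate_normal as mvn

        def g(u):                        # limit state: failure when g(u) <= 0
            return 3.0 - (u[..., 0] + u[..., 1]) / np.sqrt(2.0)

        # design point: the point on the failure boundary closest to the origin
        u_star = minimize(lambda u: u @ u, x0=[1.0, 1.0],
                          constraints={"type": "eq", "fun": g}).x

        rng = np.random.default_rng(0)
        u = rng.normal(u_star, 1.0, size=(100_000, 2))   # proposal centred at u_star
        w = mvn.pdf(u, mean=[0.0, 0.0]) / mvn.pdf(u, mean=u_star)
        pf = np.mean((g(u) <= 0) * w)                    # reweighted failure indicator
        print("P_f estimate:", pf, " exact:", norm.sf(3.0))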

  1. Realistic nurse-led policy implementation, optimization and evaluation: novel methodological exemplar.

    Science.gov (United States)

    Noyes, Jane; Lewis, Mary; Bennett, Virginia; Widdas, David; Brombley, Karen

    2014-01-01

    To report the first large-scale realistic nurse-led implementation, optimization and evaluation of a complex children's continuing-care policy. Health policies are increasingly complex, involve multiple Government departments and frequently fail to translate into better patient outcomes. Realist methods have not yet been adapted for policy implementation. Research methodology - Evaluation using theory-based realist methods for policy implementation. An expert group developed the policy and supporting tools. Implementation and evaluation design integrated diffusion of innovation theory with multiple case study and adapted realist principles. Practitioners in 12 English sites worked with Consultant Nurse implementers to manipulate the programme theory and logic of new decision-support tools and care pathway to optimize local implementation. Methods included key-stakeholder interviews, developing practical diffusion of innovation processes using key-opinion leaders and active facilitation strategies and a mini-community of practice. New and existing processes and outcomes were compared for 137 children during 2007-2008. Realist principles were successfully adapted to a shorter policy implementation and evaluation time frame. Important new implementation success factors included facilitated implementation that enabled 'real-time' manipulation of programme logic and local context to best-fit evolving theories of what worked; using local experiential opinion to change supporting tools to more realistically align with local context and what worked; and having sufficient existing local infrastructure to support implementation. Ten mechanisms explained implementation success and differences in outcomes between new and existing processes. Realistic policy implementation methods have advantages over top-down approaches, especially where clinical expertise is low and unlikely to diffuse innovations 'naturally' without facilitated implementation and local optimization.

  2. SamplingStrata: An R Package for the Optimization of Stratified Sampling

    Directory of Open Access Journals (Sweden)

    Giulio Barcaroli

    2014-11-01

    When designing a sampling survey, constraints are usually set on the desired precision levels regarding one or more target estimates (the Ys). If a sampling frame is available, containing auxiliary information related to each unit (the Xs), it is possible to adopt a stratified sample design. For any given stratification of the frame, in the multivariate case it is possible to solve the problem of the best allocation of units in strata, by minimizing a cost function subject to precision constraints (or, conversely, by maximizing the precision of the estimates under a given budget). The problem is to determine the best stratification in the frame, i.e., the one that ensures the overall minimal cost of the sample necessary to satisfy precision constraints. The Xs can be categorical or continuous; continuous ones can be transformed into categorical ones. The most detailed stratification is given by the Cartesian product of the Xs (the atomic strata). A way to determine the best stratification is to explore exhaustively the set of all possible partitions derivable from the set of atomic strata, evaluating each one by calculating the corresponding cost in terms of the sample required to satisfy precision constraints. This is unaffordable in practical situations, where the dimension of the space of partitions can be very high. Another possible way is to explore the space of partitions with an algorithm that is particularly suitable in such situations: the genetic algorithm. The R package SamplingStrata, based on the use of a genetic algorithm, allows one to determine the best stratification for a population frame, i.e., the one that ensures the minimum sample cost necessary to satisfy precision constraints, in a multivariate and multi-domain case.

  3. Off-policy integral reinforcement learning optimal tracking control for continuous-time chaotic systems

    International Nuclear Information System (INIS)

    Wei Qing-Lai; Song Rui-Zhuo; Xiao Wen-Dong; Sun Qiu-Ye

    2015-01-01

    This paper presents an off-policy integral reinforcement learning (IRL) algorithm to obtain the optimal tracking control of unknown chaotic systems. Off-policy IRL can learn the solution of the Hamilton–Jacobi–Bellman (HJB) equation from system data generated by an arbitrary control. Moreover, off-policy IRL can be regarded as a direct learning method, which avoids the identification of system dynamics. In this paper, the performance index function is first given based on the system tracking error and control error. To solve the HJB equation, an off-policy IRL algorithm is proposed. It is proven that the iterative control makes the tracking error system asymptotically stable, and that the iterative performance index function is convergent. A simulation study demonstrates the effectiveness of the developed tracking control method. (paper)

  4. SU-E-T-295: Simultaneous Beam Sampling and Aperture Shape Optimization for Station Parameter Optimized Radiation Therapy (SPORT)

    Energy Technology Data Exchange (ETDEWEB)

    Zarepisheh, M; Li, R; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ye, Y [Stanford Univ, Management Science and Engineering, Stanford, CA (United States); Boyd, S [Stanford University, Electrical Engineering, Stanford, CA (United States)

    2014-06-01

    Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles along the gradient. The algorithm then continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provides an effective way to optimize simultaneously the large collection of station parameters and significantly improves the plan quality.

  6. Optimizing preventive maintenance policy: A data-driven application for a light rail braking system.

    Science.gov (United States)

    Corman, Francesco; Kraijema, Sander; Godjevac, Milinko; Lodewijks, Gabriel

    2017-10-01

    This article presents a case study determining the optimal preventive maintenance policy for a light rail rolling stock system in terms of reliability, availability, and maintenance costs. The maintenance policy defines one of three predefined preventive maintenance actions at fixed time-based intervals for each of the subsystems of the braking system. Based on work, maintenance, and failure data, we model the reliability degradation of the system and its subsystems under the current maintenance policy by a Weibull distribution. We then analytically determine the relation between reliability, availability, and maintenance costs. We validate the model against recorded reliability and availability and gain further insights from a dedicated sensitivity analysis. The model is then used in a sequential optimization framework that determines preventive maintenance intervals to improve the key performance indicators. We show the potential of data-driven modelling to determine the optimal maintenance policy: the same system availability and reliability can be achieved with a 30% reduction in maintenance costs, by prolonging the intervals and re-grouping maintenance actions.
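
    The reliability/cost trade-off described here can be illustrated with the classical age-replacement model under a Weibull law; the shape, scale and cost figures below are hypothetical, and the article's re-grouping of actions across subsystems is omitted.

        import numpy as np
        from scipy.integrate import quad
        from scipy.optimize import minimize_scalar

        beta, eta = 2.5, 1000.0      # hypothetical Weibull shape and scale (hours)
        c_p, c_f = 1.0, 10.0         # preventive vs. corrective maintenance cost

        R = lambda t: np.exp(-(t / eta) ** beta)   # Weibull reliability function

        def cost_rate(T):
            # expected cost per unit time when replacing preventively at age T
            mean_cycle_length = quad(R, 0, T)[0]
            return (c_p * R(T) + c_f * (1 - R(T))) / mean_cycle_length

        res = minimize_scalar(cost_rate, bounds=(10, 5000), method="bounded")
        print("optimal preventive interval (h):", round(res.x), "cost rate:", round(res.fun, 4))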

  7. Optimal post-warranty maintenance policy with repair time threshold for minimal repair

    International Nuclear Information System (INIS)

    Park, Minjae; Mun Jung, Ki; Park, Dong Ho

    2013-01-01

    In this paper, we consider a renewable minimal repair–replacement warranty policy and propose an optimal maintenance model after the warranty expires. The model adopts a repair time threshold during the warranty period and follows a certain type of system maintenance policy during the post-warranty period. As the criterion for optimality, we utilize the expected cost rate per unit time during the life cycle of the system, which has been frequently used in many existing maintenance models. Based on the cost structure defined for each failure of the system, we formulate the expected cost rate during the life cycle of the system, assuming that a renewable minimal repair–replacement warranty policy with a repair time threshold is provided to the user during the warranty period. Once the warranty has expired, maintenance of the system is the user's sole responsibility. The life cycle of the system is defined from the perspective of the user, and the expected cost rate per unit time is derived in this context. We obtain the optimal maintenance policy during the period following the expiration of the warranty by minimizing this cost rate. Numerical examples using actual failure data are presented to exemplify the applicability of the methodologies proposed in this paper.

  8. Optimal Overhaul-Replacement Policies for Repairable Machine Sold with Warranty

    Directory of Open Access Journals (Sweden)

    Kusmaningrum Soemadi

    2014-12-01

    This research deals with an overhaul-replacement policy for a repairable machine sold with a Free Replacement Warranty (FRW). The machine will be used for a finite horizon T (T < ∞) and evaluated at a fixed interval s (s < T). At each evaluation point, the buyer considers three alternative decisions, i.e., keep the machine, overhaul it, or replace it with a new identical one. An overhaul can reduce the machine's virtual age, but not to the point that the machine is as good as new. If the machine fails during the warranty period, it is rectified at no cost to the buyer. Any failure occurring before and after the expiry of the warranty is restored by minimal repair. An overhaul-replacement policy is formulated for such machines using a dynamic programming approach to obtain the buyer's optimal policy. The results show that a significant rejuvenation effect due to overhaul may extend the length of the machine's life cycle and delay the replacement decision. In contrast, the warranty stimulates early machine replacement and thereby increases the replacement frequency for a certain range of replacement costs. This demonstrates that to minimize the total ownership cost over T the buyer needs to consider the minimal repair cost reduction due to the rejuvenation effect of overhaul as well as the warranty benefit due to replacement. Numerical examples are presented both to illustrate the optimal policy and to describe the behavior of the optimal solution.
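
    A minimal sketch of the backward-induction idea, assuming a discretized virtual age, invented cost figures and a fixed rejuvenation effect of overhaul; the warranty terms of the paper are omitted for brevity.

        import numpy as np

        S = 6                     # discretized virtual ages 0..5
        N = 8                     # number of evaluation points in the horizon T = N*s
        c_rep, c_ov = 50.0, 15.0  # replacement and overhaul costs (hypothetical)
        m = lambda age: 2.0 * age # expected minimal-repair cost per period at a given age
        delta = 2                 # overhaul rejuvenates the machine by 2 age units

        V = np.zeros(S)           # terminal value at the end of the horizon
        for _ in range(N):        # backward induction over evaluation points
            Vn = np.empty(S)
            for a in range(S):
                keep = m(a) + V[min(a + 1, S - 1)]
                a_ov = max(a - delta, 0)
                overhaul = c_ov + m(a_ov) + V[min(a_ov + 1, S - 1)]
                replace = c_rep + m(0) + V[1]
                Vn[a] = min(keep, overhaul, replace)
            V = Vn
        print("expected cost-to-go from a new machine:", V[0])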

  9. Measuring public opinion on alcohol policy: a factor analytic study of a US probability sample.

    Science.gov (United States)

    Latimer, William W; Harwood, Eileen M; Newcomb, Michael D; Wagenaar, Alexander C

    2003-03-01

    Public opinion has been one factor affecting change in policies designed to reduce underage alcohol use. Extant research, however, has been criticized for using single survey items of unknown reliability to define adult attitudes on alcohol policy issues. The present investigation addresses a critical gap in the literature by deriving scales on public attitudes, knowledge, and concerns pertinent to alcohol policies designed to reduce underage drinking using a US probability sample survey of 7021 adults. Five attitudinal scales were derived from exploratory and confirmatory factor analyses addressing policies to: (1) regulate alcohol marketing, (2) regulate alcohol consumption in public places, (3) regulate alcohol distribution, (4) increase alcohol taxes, and (5) regulate youth access. The scales exhibited acceptable psychometric properties and were largely consistent with a rational framework which guided the survey construction.

  10. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup improving the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25 times smaller spot size is achieved, associated with a factor of 2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  11. Joint Optimal Production Planning for Complex Supply Chains Constrained by Carbon Emission Abatement Policies

    Directory of Open Access Journals (Sweden)

    Longfei He

    2014-01-01

    We focus on the joint production planning of complex supply chains facing stochastic demands and constrained by carbon emission reduction policies. We pick two typical carbon emission reduction policies to study how emission regulation influences the profit and carbon footprint of a typical supply chain. We use the input-output model to capture the interrelated demand link between an arbitrary pair of nodes in scenarios without or with carbon emission constraints. We design an optimization algorithm to obtain the joint optimal production quantities that maximize overall profit under each regulatory policy. Furthermore, numerical studies featuring exponentially distributed demand compare systemwide performances in the various scenarios. We build the "carbon emission elasticity of profit" (CEEP) index as a metric to evaluate the impact of regulatory policies on both chainwide emissions and profit. Our results show that a mandatory emission cap, properly installed within the network, can balance effective emission reduction against an acceptable profit loss. The finding that the CEEP index is elastic when a carbon emission tax is implemented means that the scale of profit loss is greater than that of emission reduction, which shows that this policy is less effective than a mandatory cap, at least from an industry standpoint.

  12. Evaluation of optimized bronchoalveolar lavage sampling designs for characterization of pulmonary drug distribution.

    Science.gov (United States)

    Clewe, Oskar; Karlsson, Mats O; Simonsson, Ulrika S H

    2015-12-01

    Bronchoalveolar lavage (BAL) is a pulmonary sampling technique for characterization of drug concentrations in epithelial lining fluid and alveolar cells. Two hypothetical drugs with different pulmonary distribution rates (fast and slow) were considered. An optimized BAL sampling design was generated assuming no previous information regarding the pulmonary distribution (rate and extent) and with a maximum of two samples per subject. Simulations were performed to evaluate the impact of the number of samples per subject (1 or 2) and the sample size on the relative bias and relative root mean square error of the parameter estimates (rate and extent of pulmonary distribution). The optimized BAL sampling design depends on a characterized plasma concentration time profile, a population plasma pharmacokinetic model, the limit of quantification (LOQ) of the BAL method and involves only two BAL sample time points, one early and one late. The early sample should be taken as early as possible, where concentrations in the BAL fluid ≥ LOQ. The second sample should be taken at a time point in the declining part of the plasma curve, where the plasma concentration is equivalent to the plasma concentration in the early sample. Using a previously described general pulmonary distribution model linked to a plasma population pharmacokinetic model, simulated data using the final BAL sampling design enabled characterization of both the rate and extent of pulmonary distribution. The optimized BAL sampling design enables characterization of both the rate and extent of the pulmonary distribution for both fast and slowly equilibrating drugs.

  13. Optimization of the sampling scheme for maps of physical and chemical properties estimated by kriging

    Directory of Open Access Journals (Sweden)

    Gener Tadeu Pereira

    2013-10-01

    The sampling scheme is essential in the investigation of the spatial variability of soil properties in Soil Science studies. The high costs of sampling schemes optimized with additional sampling points for each physical and chemical soil property prevent their use in precision agriculture. The purpose of this study was to obtain an optimal sampling scheme for sets of physical and chemical properties and to investigate its effect on the quality of soil sampling. Soil was sampled on a 42-ha area, with 206 geo-referenced points arranged in a regular grid spaced 50 m from each other, at a depth range of 0.00-0.20 m. In order to obtain an optimal sampling scheme for every physical and chemical property, a sample grid, a medium-scale variogram and the extended Spatial Simulated Annealing (SSA) method were used to minimize the kriging variance. The optimization procedure was validated by constructing maps of relative improvement comparing the sample configurations before and after the process. A greater concentration of recommended points in specific areas (NW-SE direction) was observed, which also reflects a greater estimation variance at these locations. The addition of optimal samples for specific regions increased the accuracy by up to 2% for chemical and 1% for physical properties. The use of a sample grid and a medium-scale variogram as prior information for the design of additional sampling schemes proved very promising for determining the locations of these additional points for all physical and chemical soil properties, enhancing the accuracy of kriging estimates of the physical-chemical properties.

  14. Optimal replenishment policy for fuzzy inventory model with deteriorating items and allowable shortages under inflationary conditions

    Directory of Open Access Journals (Sweden)

    Jaggi Chandra K.

    2016-01-01

    This study develops an inventory model to determine the ordering policy for deteriorating items with a constant demand rate under inflationary conditions over a fixed planning horizon. Shortages are allowed and are partially backlogged. In today's volatile economy, especially for long-term investment, the effects of inflation cannot be disregarded, as uncertainty about future inflation may influence the ordering policy. Therefore, in this paper a fuzzy model is developed that fuzzifies the inflation rate, discount rate, deterioration rate, and backlogging parameter by using triangular fuzzy numbers to represent the uncertainty. For defuzzification, the well-known signed distance method is employed to find the total profit over the planning horizon. The objective of the study is to derive the optimal number of cycles and their optimal length so as to maximize the net present value of the total profit over a fixed planning horizon. The necessary and sufficient conditions for an optimal solution are characterized. An algorithm is proposed to find the optimal solution. Finally, the proposed model is validated with a numerical example. A sensitivity analysis is performed to study the impact of the various parameters on the optimal solution, and some important managerial implications are presented.
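
    For reference, the signed distance of a triangular fuzzy number A = (a, b, c) from zero has the closed form (a + 2b + c)/4, the average of the midpoints of its alpha-cuts, which is what the defuzzification step computes:

        # Signed distance of a triangular fuzzy number A = (a, b, c) from zero:
        # d(A, 0) = (a + 2b + c) / 4
        def signed_distance(a: float, b: float, c: float) -> float:
            return (a + 2 * b + c) / 4.0

        # Hypothetical fuzzy inflation rate of "about 5%":
        print(signed_distance(0.04, 0.05, 0.07))  # crisp value used in the profit function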

  15. Joint Optimization of Preventive Maintenance and Spare Parts Inventory with Appointment Policy

    Directory of Open Access Journals (Sweden)

    Jing Cai

    2017-01-01

    Against the background of the wide application of condition-based maintenance (CBM) in maintenance practice, the joint optimization of maintenance and spare parts inventory has become a hot research topic, aiming to take full advantage of CBM and reduce operational cost. In order to avoid both a high inventory level and shortages of spare parts, an appointment policy for spare parts is first proposed based on the prediction of remaining useful lifetime, and a corresponding joint optimization model of preventive maintenance and spare parts inventory is then established. Due to the complexity of the model, a method combining a genetic algorithm with Monte Carlo simulation is presented to find the optimal maximum inventory level, safety inventory level, potential failure threshold, and appointment threshold that minimize the cost rate. Finally, the proposed model is examined through a case study and compared with both separate optimization and joint optimization without the appointment policy; the results show that the proposed model is more effective. In addition, the sensitivity analysis shows that the proposed model is consistent with the actual situation of maintenance practice and inventory management.

  16. Optimized Policies for Improving Fairness of Location-based Relay Selection

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Olsen, Rasmus Løvenstein; Madsen, Tatiana Kozlova

    2013-01-01

    For WLAN systems in which relaying is used to improve throughput performance for nodes located at the cell edge, node mobility and information collection delays can have a significant impact on the performance of a relay selection scheme. In this paper we extend our existing Markov Chain modelling framework for relay selection to allow for efficient calculation of relay policies given either the mean throughput or the kth throughput percentile as optimization criterion. In a scenario with a static access point, a static relay, and a mobile destination node, the kth throughput percentile optimization...

  17. Optimal Inventory Policy under Permissible Payment Delay in Fashion Supply Chains

    OpenAIRE

    Guo Li; Yuchen Kang; Mengqi Liu; Zhaohua Wang

    2014-01-01

    This paper investigates a retailer’s optimal inventory cycle and the corresponding time of payment in fashion supply chains where a supplier allows payment delay. Based on the established model, we first analyze the retailer’s reaction and then derive the retailer’s optimal inventory policy and time of payment to maximize its total profit. Our result shows that it is not always the best choice for retailers in fashion supply chains to use the discount option to replenish stocks...

  18. Optimal operation and forecasting policy for pump storage plants in day-ahead markets

    International Nuclear Information System (INIS)

    Muche, Thomas

    2014-01-01

    Highlights: • We investigate unit commitment deploying stochastic and deterministic approaches. • We consider day-ahead markets, their forecasts and weekly price-based unit commitment. • Stochastic and deterministic unit commitment are identical for the first planning day. • Unit commitment and bidding policy can be based on the deterministic approach. • Robust forecasting models should be estimated based on the whole planning horizon. - Abstract: Pump storage plants are an important electricity storage technology at present, and investments in this technology are expected to increase. The necessary investment valuation often includes expected cash flows from future price-based unit commitment policies. A price-based unit commitment policy has to consider market price uncertainty and the information-revealing nature of electricity markets. For this environment, stochastic programming models are suggested to derive the optimal unit commitment policy. For the day-ahead price electricity market considered here, stochastic and deterministic unit commitment policies are comparable, suggesting that the more easily implementable deterministic models can be applied. In order to identify suitable unit commitment and forecasting policies, deterministic unit commitment models are applied to actual day-ahead electricity prices of a whole year. As a result, a robust forecasting model should consider the unit commitment planning period. Such robust forecasting models result in expected cash flows similar to realized ones, allowing a reliable investment valuation.
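
    A deterministic price-based commitment of the kind compared here can be sketched as a small linear program: choose hourly pumping and generation to maximize day-ahead revenue subject to reservoir limits. The prices, efficiency and capacities below are hypothetical.

        import numpy as np
        from scipy.optimize import linprog

        prices = 30 + 20 * np.sin(np.arange(24) / 24 * 2 * np.pi)  # invented day-ahead prices
        T, eff, p_max, cap, s0 = 24, 0.75, 1.0, 6.0, 3.0  # hours, pump efficiency, MW, MWh

        # decision variables x = [generation(0..23), pumping(0..23)]
        c = np.concatenate([-prices, prices])   # minimize the negative revenue
        L = np.tril(np.ones((T, T)))            # cumulative-sum operator
        # reservoir level at hour t: s0 - L @ gen + eff * L @ pump, kept in [0, cap]
        A_ub = np.vstack([np.hstack([-L,  eff * L]),    # level <= cap
                          np.hstack([ L, -eff * L])])   # level >= 0
        b_ub = np.concatenate([np.full(T, cap - s0), np.full(T, s0)])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, p_max)] * (2 * T))
        print("day-ahead profit:", round(-res.fun, 2))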

  19. Asymptotically optimal production policies in dynamic stochastic jobshops with limited buffers

    Science.gov (United States)

    Hou, Yumei; Sethi, Suresh P.; Zhang, Hanqin; Zhang, Qing

    2006-05-01

    We consider a production planning problem for a jobshop with unreliable machines producing a number of products. There are upper and lower bounds on intermediate parts and an upper bound on finished parts. The machine capacities are modelled as finite state Markov chains. The objective is to choose the rate of production so as to minimize the total discounted cost of inventory and production. Finding an optimal control policy for this problem is difficult. Instead, we derive an asymptotic approximation by letting the rates of change of the machine states approach infinity. The asymptotic analysis leads to a limiting problem in which the stochastic machine capacities are replaced by their equilibrium mean capacities. The value function for the original problem is shown to converge to the value function of the limiting problem. The convergence rate of the value function together with the error estimate for the constructed asymptotic optimal production policies are established.

  20. Off-Policy Actor-Critic Structure for Optimal Control of Unknown Systems With Disturbances.

    Science.gov (United States)

    Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai; Zhang, Huaguang

    2016-05-01

    An optimal control method is developed for unknown continuous-time systems with unknown disturbances in this paper. The integral reinforcement learning (IRL) algorithm is presented to obtain the iterative control. Off-policy learning is used to allow the dynamics to be completely unknown. Neural networks are used to construct the critic and action networks. It is shown that if there are unknown disturbances, off-policy IRL may not converge or may be biased. To reduce the influence of unknown disturbances, a disturbance compensation controller is added. It is proven that the weight errors are uniformly ultimately bounded based on Lyapunov techniques. Convergence of the Hamiltonian function is also proven. A simulation study demonstrates the effectiveness of the proposed optimal control method for unknown systems with disturbances.

  1. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement...

  2. Automated procedure for selection of optimal refueling policies for light water reactors

    International Nuclear Information System (INIS)

    Lin, B.I.; Zolotar, B.; Weisman, J.

    1979-01-01

    An automated procedure determining a minimum cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low-power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function

  3. Optimal replacement policy of products with repair-cost threshold after the extended warranty

    Institute of Scientific and Technical Information of China (English)

    Lijun Shang; Zhiqiang Cai

    2017-01-01

    The reliability of a product sold under a warranty is usually maintained by the manufacturer during the warranty period. After the expiry of the warranty, however, the consumer faces the problem of how to maintain the reliability of the product. This paper proposes, from the consumer's perspective, a replacement policy after the extended warranty, under the assumption that the product is sold under a renewable free replacement warranty (RFRW) policy in which replacement depends on the repair-cost threshold. Under the proposed policy, replacement after the extended warranty is performed by the consumer based on a repair-cost threshold or a preventive replacement (PR) age, which are decision variables. The expected cost rate model is derived from the consumer's perspective. The existence and uniqueness of the optimal solution that minimizes the expected cost rate per unit time are established. Finally, a numerical example is presented to exemplify the proposed model.

  4. A comparison of alternative medicare reimbursement policies under optimal hospital pricing.

    Science.gov (United States)

    Dittman, D A; Morey, R C

    1983-01-01

    This paper applies and extends the use of a nonlinear hospital pricing model recently posited in the literature by Dittman and Morey [1]. That model assumed hospital profit-maximizing behavior and studied the effects of optimal pricing of hospital ancillary services on the incidence of payment by private insurance companies and the Medicare trust fund. Here, we examine variations of the above model where both hospital profit-maximizing and profit-satisficing postures are of interest. We apply the model to three types of Medicare reimbursement policies currently in use or under legislative mandate to implement. The policies differ according to hospital size and whether cross-subsidies are allowed. We are interested in determining the effects of profit-maximizing and -satisficing behaviors under these three reimbursement policies on the levels of profits received, and the respective implications for private payors and the Medicare trust fund. PMID:6347973

  5. Optimal Monetary Policy and Exchange Rate in a Small Open Economy with Unemployment

    Directory of Open Access Journals (Sweden)

    Hyuk-Jae Rhee

    2014-09-01

    In this paper, we consider a small open economy under the New Keynesian model with unemployment of Gali (2011a, b) to discuss the design of monetary policy. Our findings can be summarized in three parts. First, even with the existence of unemployment, the optimal policy is to minimize the variance of domestic price inflation, wage inflation, and the output gap when both domestic prices and wages are sticky. Second, stabilizing the unemployment rate is important in reducing the welfare loss incurred by both technology and labor supply shocks; therefore, introducing the unemployment rate as another argument in a Taylor-rule type interest rate rule will be welfare-enhancing. Lastly, controlling CPI inflation is the best option when the policy is not allowed to respond to the unemployment rate. Once the unemployment rate is controlled, however, the stabilizing power of the CPI inflation-based Taylor rule is diminished.

  6. Joint cost of energy under an optimal economic policy of hybrid power systems subject to uncertainty

    International Nuclear Information System (INIS)

    Díaz, Guzmán; Planas, Estefanía; Andreu, Jon; Kortabarria, Iñigo

    2015-01-01

    Economic optimization of hybrid systems is usually performed by means of LCoE (levelized cost of energy) calculation (a minimal computational sketch is given below). Previous works deal with the LCoE calculation of the whole hybrid system while disregarding an important issue: the stochastic components of the system units must be jointly considered. This paper deals with this issue and proposes a new, fast optimal policy that properly calculates the LCoE of a hybrid system and finds the lowest LCoE. The proposed policy also considers the implied competition among power sources when the variability of gas and electricity prices is taken into account. Additionally, it presents a comparison between the LCoE of the hybrid system and that of its individual generation technologies by means of a fast and robust algorithm based on vector logical computation. Numerical case analyses based on realistic data evaluate the contribution of the technologies in a hybrid power system to the joint LCoE. - Highlights: • We perform the LCoE calculation with the stochastic components jointly considered. • We propose a fast optimal policy that minimizes the LCoE. • We compare the obtained LCoEs by means of a fast and robust algorithm. • We take into account the competition between gas prices and electricity prices.
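
    The basic LCoE computation behind such comparisons discounts lifetime costs and energy to a common base year; a minimal sketch with invented cost and yield figures:

        import numpy as np

        def lcoe(capex, opex, energy, rate):
            """Levelized cost of energy: discounted lifetime costs / discounted energy."""
            years = np.arange(1, len(energy) + 1)
            disc = (1 + rate) ** -years
            return (capex + np.sum(opex * disc)) / np.sum(energy * disc)

        # hypothetical 20-year unit: capex 1500, opex 30/yr, 2500 MWh/yr, 7% discount rate
        print(lcoe(1500.0, np.full(20, 30.0), np.full(20, 2500.0), 0.07))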

  7. Nationwide survey of policies and practices related to capillary blood sampling in medical laboratories in Croatia.

    Science.gov (United States)

    Krleza, Jasna Lenicek

    2014-01-01

    Capillary sampling is increasingly used to obtain blood for laboratory tests in volumes as small as necessary and as non-invasively as possible. Whether capillary blood sampling is also frequent in Croatia, and whether it is performed according to international laboratory standards is unclear. All medical laboratories that participate in the Croatian National External Quality Assessment Program (N = 204) were surveyed on-line to collect information about the laboratory's parent institution, patient population, types and frequencies of laboratory tests based on capillary blood samples, choice of reference intervals, and policies and procedures specifically related to capillary sampling. Sampling practices were compared with guidelines from the Clinical and Laboratory Standards Institute (CLSI) and the World Health Organization (WHO). Of the 204 laboratories surveyed, 174 (85%) responded with complete questionnaires. Among the 174 respondents, 155 (89%) reported that they routinely perform capillary sampling, which is carried out by laboratory staff in 118 laboratories (76%). Nearly half of respondent laboratories (48%) do not have a written protocol including order of draw for multiple sampling. A single puncture site is used to provide capillary blood for up to two samples at 43% of laboratories that occasionally or regularly perform such sampling. Most respondents (88%) never perform arterialisation prior to capillary blood sampling. Capillary blood sampling is highly prevalent in Croatia across different types of clinical facilities and patient populations. Capillary sampling procedures are not standardised in the country, and the rate of laboratory compliance with CLSI and WHO guidelines is low.

  8. People adopt optimal policies in simple decision-making, after practice and guidance.

    Science.gov (United States)

    Evans, Nathan J; Brown, Scott D

    2017-04-01

    Organisms making repeated simple decisions are faced with a tradeoff between urgent and cautious strategies. While animals can adopt a statistically optimal policy for this tradeoff, findings about human decision-makers have been mixed. Some studies have shown that people can optimize this "speed-accuracy tradeoff", while others have identified a systematic bias towards excessive caution. These issues have driven theoretical development and spurred debate about the nature of human decision-making. We investigated a potential resolution to the debate, based on two factors that routinely differ between human and animal studies of decision-making: the effects of practice, and of longer-term feedback. Our study replicated the finding that most people, by default, are overly cautious. When given both practice and detailed feedback, people moved rapidly towards the optimal policy, with many participants reaching optimality with less than 1 h of practice. Our findings have theoretical implications for cognitive and neural models of simple decision-making, as well as methodological implications.
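
    The statistically optimal policy referred to here is commonly formalized with the drift-diffusion model, for which the error rate and mean decision time have closed forms (as in Bogacz et al., 2006). The sketch below numerically maximizes reward rate over the decision threshold; the drift, non-decision time and inter-trial interval are hypothetical.

        import numpy as np
        from scipy.optimize import minimize_scalar

        v, T0, ITI = 0.2, 0.3, 1.0   # drift rate, non-decision time, inter-trial interval (s)

        def reward_rate(a):          # symmetric DDM with unit noise and thresholds +/- a
            er = 1.0 / (1.0 + np.exp(2 * a * v))   # error rate
            dt = (a / v) * np.tanh(a * v)          # mean decision time
            return (1 - er) / (dt + T0 + ITI)      # correct responses per second

        res = minimize_scalar(lambda a: -reward_rate(a), bounds=(0.01, 10), method="bounded")
        print("optimal threshold:", round(res.x, 2), "reward rate:", round(-res.fun, 3))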

  9. Automatic Motion Generation for Robotic Milling Optimizing Stiffness with Sample-Based Planning

    Directory of Open Access Journals (Sweden)

    Julian Ricardo Diaz Posada

    2017-01-01

    Optimal and intuitive robotic machining is still a challenge. One of the main reasons for this is the lack of robot stiffness, which also depends on the robot's position in Cartesian space. To make up for this deficiency, and with the aim of increasing robot machining accuracy, this contribution describes a solution approach for optimizing the stiffness over a desired milling path using the free degree of freedom of the machining process. The optimal motion is computed based on the semantic and mathematical interpretation of the manufacturing process, modeled on its components: product, process and resource; and by automatically configuring a sample-based motion planning problem and the transition-based rapidly-exploring random tree (T-RRT) algorithm for computing an optimal motion. The approach is simulated in CAM software for a machining path, demonstrating its functionality and outlining future potential for optimal motion generation in robotic machining processes.

  10. Optimal pricing and promotional effort control policies for a new product growth in segmented market

    Directory of Open Access Journals (Sweden)

    Jha P.C.

    2015-01-01

    Market segmentation enables marketers to understand and serve customers more effectively, thereby improving a company's competitive position. In this paper, we study the impact of price and promotion effort on the evolution of sales intensity in a segmented market to obtain the optimal price and promotion effort policies. The evolution of the sales rate for each segment is developed under the assumption that the marketer may choose both differentiated and mass-market promotion effort to influence the uncaptured market potential. An optimal control model is formulated and a solution method using the Maximum Principle is discussed. The model is extended to incorporate a budget constraint. Model applicability is illustrated by a numerical example. Since discrete-time data are available, the formulated model is discretized. For solving the discrete model, a differential evolution algorithm is used.

  11. Optimal Policies for Deteriorating Items with Maximum Lifetime and Two-Level Trade Credits

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2014-01-01

    The retailer’s optimal policies are developed for the case where the product has a fixed lifetime and the units in inventory are subject to deterioration at a constant rate. The study is mainly applicable to pharmaceuticals, drugs, beverages, dairy products, and the like. To boost demand, offering a credit period is considered as the promotional tool. The retailer passes on to the buyers the credit period received from the supplier. The objective is to maximize the retailer’s total profit per unit time with respect to the optimal retail price of an item and the purchase quantity during the optimal cycle time. The concavity of the total profit per unit time is exhibited using inventory parameter values. A sensitivity analysis is carried out to advise the decision maker which critical inventory parameters to keep an eye on.

  12. Gamut Volume Index: a color preference metric based on meta-analysis and optimized colour samples.

    Science.gov (United States)

    Liu, Qiang; Huang, Zheng; Xiao, Kaida; Pointer, Michael R; Westland, Stephen; Luo, M Ronnier

    2017-07-10

    A novel metric named Gamut Volume Index (GVI) is proposed for evaluating the colour preference of lighting. This metric is based on the absolute gamut volume of optimized colour samples. The optimal colour set of the proposed metric was obtained by optimizing the weighted average correlation between the metric predictions and the subjective ratings for 8 psychophysical studies. The performance of 20 typical colour metrics was also investigated, which included colour difference based metrics, gamut based metrics, memory based metrics as well as combined metrics. It was found that the proposed GVI outperformed the existing counterparts, especially for the conditions where correlated colour temperatures differed.

  13. Optimal policies of non-cross-resistant chemotherapy on Goldie and Coldman's cancer model.

    Science.gov (United States)

    Chen, Jeng-Huei; Kuo, Ya-Hui; Luh, Hsing Paul

    2013-10-01

    Mathematical models can be used to study chemotherapy on tumor cells. Notably, in 1979, Goldie and Coldman proposed the first mathematical model to relate the drug sensitivity of tumors to their mutation rates. Many scientists have since referred to this pioneering work because of its simplicity and elegance. Its original idea has also been extended and further investigated in massive follow-up studies of cancer modeling and optimal treatment. Goldie and Coldman, together with Guaduskas, later used their model to explain why an alternating non-cross-resistant chemotherapy is optimal, using a simulation approach. Subsequently, in 1983, Goldie and Coldman proposed an extended stochastic model and provided a rigorous mathematical proof of their earlier simulation work when the extended model is approximated by its quasi-approximation. However, Goldie and Coldman's analytic study of optimal treatments focused mainly on a process with symmetrical parameter settings and presented few theoretical results for asymmetrical settings. In this paper, we recast and restate Goldie, Coldman, and Guaduskas' model as a multi-stage optimization problem. Under an asymmetrical assumption, the conditions under which a treatment policy can be optimal are derived. The proposed framework enables us to analyze some optimal policies for the model analytically. In addition, Goldie, Coldman and Guaduskas' work with symmetrical settings can be treated as a special case of our framework. Based on the derived conditions, this study provides an alternative proof of Goldie and Coldman's work. In addition to the theoretical derivation, numerical results are included to justify the correctness of our work.

  14. OPTIMAL METHOD FOR PREPARATION OF SILICATE ROCK SAMPLES FOR ANALYTICAL PURPOSES

    Directory of Open Access Journals (Sweden)

    Maja Vrkljan

    2004-12-01

    The purpose of this study was to determine an optimal dissolution method for silicate rock samples for further analytical purposes. An analytical FAAS method determining the cobalt, chromium, copper, nickel, lead and zinc content in a gabbro sample and the geochemical standard AGV-1 was applied for verification. Dissolution in mixtures of various inorganic acids was tested, as well as the Na2CO3 fusion technique. The results obtained by the different methods were compared, and dissolution in the HNO3 + HF mixture is recommended as optimal.

  15. Optimal combination of energy crops under different policy scenarios; The case of Northern Greece

    International Nuclear Information System (INIS)

    Zafeiriou, Eleni; Petridis, Konstantinos; Karelakis, Christos; Arabatzis, Garyfallos

    2016-01-01

    Energy crop production is considered environmentally benign and socially acceptable, offering ecological benefits over fossil fuels through its contribution to the reduction of greenhouse gases and acidifying emissions. Energy crops are subject to persistent policy support by the EU, despite their limited or even marginally negative impact on the greenhouse effect. The present study endeavors to optimize the agricultural income generated by energy crops in a remote and disadvantaged region with the assistance of linear programming (a minimal sketch of such a program is given after the highlights). The optimization concerns the income created from soybean, sunflower (a proxy for energy crops), and corn. Different policy scenarios imposed restrictions on the value of the subsidies as a proxy for EU policy tools, the value of inputs (costs of capital and labor) and different irrigation conditions. The results indicate that the area and the imports per energy crop remain unchanged, independently of the policy scenario enacted. Furthermore, corn cultivation contributes the most to income maximization, whereas the implemented CAP policy plays an incremental role in the uptake of an energy crop. A key implication is that alternative forms of motivation beyond financial ones should be provided to farmers in order to achieve extensive use of energy crops. - Highlights: •A stochastic and a deterministic LP model are formulated. •The role of the CAP is vital to the income generated. •Imports and cultivated areas are subsidy-neutral. •The free-market regime results in lower income from the potential crop mix. •Non-financial motivation is a key determinant of farmers' attitude towards energy crops.
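
    The kind of linear program referred to in the abstract can be sketched in a few lines; the gross margins, water requirements and resource limits below are invented placeholders, not the study's data.

        import numpy as np
        from scipy.optimize import linprog

        # gross margin per hectare (EUR): market revenue plus subsidy minus input cost
        crops = ["corn", "sunflower", "soybean"]
        margin = np.array([900.0, 450.0, 600.0])     # hypothetical values
        water = np.array([5000.0, 2500.0, 3500.0])   # m3 of irrigation per ha
        land, water_cap = 100.0, 380_000.0           # available ha and m3

        res = linprog(-margin,                       # maximize total income
                      A_ub=[np.ones(3), water],      # land and water constraints
                      b_ub=[land, water_cap],
                      bounds=[(0, None)] * 3)
        print(dict(zip(crops, np.round(res.x, 1))), "income:", round(-res.fun))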

  16. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
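
    Beyond the analytical results, the optimal sample size can be evaluated numerically along the lines the authors describe: simulate the final one-shot choice after n draws per option and trade expected payoff against a per-draw cost. The payoff distributions and cost below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        mu, sigma, cost = (1.0, 1.2), 1.0, 0.01   # invented payoff means, noise, draw cost

        def expected_value(n, reps=20_000):
            a = rng.normal(mu[0], sigma, (reps, n)).mean(axis=1)
            b = rng.normal(mu[1], sigma, (reps, n)).mean(axis=1)
            chosen = np.where(a > b, mu[0], mu[1])  # pick the option with higher sample mean
            return chosen.mean() - cost * 2 * n     # expected payoff minus sampling cost

        n_star = max(range(1, 40), key=expected_value)
        print("optimal sample size per option:", n_star)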

  17. Integration of electromagnetic induction sensor data in soil sampling scheme optimization using simulated annealing.

    Science.gov (United States)

    Barca, E; Castrignanò, A; Buttafuoco, G; De Benedetto, D; Passarella, G

    2015-07-01

    Soil survey is generally time-consuming, labor-intensive, and costly. Optimization of the sampling scheme allows one to reduce the number of sampling points without decreasing, or even while increasing, the accuracy for the investigated attribute. Maps of bulk soil electrical conductivity (ECa) recorded with electromagnetic induction (EMI) sensors can be used effectively to direct soil sampling design for assessing the spatial variability of soil moisture. A protocol using a field-scale bulk ECa survey has been applied in an agricultural field in the Apulia region (southeastern Italy). Spatial simulated annealing was used to optimize the spatial soil sampling scheme, taking into account sampling constraints, field boundaries, and preliminary observations. Three optimization criteria were used. The first criterion (minimization of the mean of the shortest distances, MMSD) optimizes the spreading of the point observations over the entire field by minimizing the expectation of the distance between an arbitrarily chosen point and its nearest observation; the second criterion (minimization of the weighted mean of the shortest distances, MWMSD) is a weighted version of the MMSD, which uses the digital gradient of the gridded ECa data as a weighting function; and the third criterion (mean of the average ordinary kriging variance, MAOKV) minimizes the mean kriging estimation variance of the target variable. The last criterion utilizes the variogram model of soil water content estimated in a previous trial. The procedures, or a combination of them, were tested and compared in a real case. Simulated annealing was implemented with the software MSANOS, which is able to define or redesign any sampling scheme by increasing or decreasing the original sampling locations. The output consists of the computed sampling scheme, the convergence time, and the cooling law, which can be an invaluable support to the process of sampling design. The proposed approach found the optimal solution in a reasonable computation time.
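
    A minimal sketch of spatial simulated annealing under the MMSD criterion: one sampling point is moved at a time, and worse designs are occasionally accepted according to a geometric cooling schedule. The field geometry and parameters are hypothetical; the EMI-based weighting (MWMSD) and kriging-variance (MAOKV) criteria are omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        cand = rng.uniform(0, 100, (2000, 2))    # fine grid of feasible field locations
        design = cand[rng.choice(2000, 20, replace=False)].copy()

        def mmsd(pts):                           # mean of the shortest distances criterion
            d = np.linalg.norm(cand[:, None, :] - pts[None, :, :], axis=2)
            return d.min(axis=1).mean()

        T = 5.0
        for _ in range(4000):
            new = design.copy()
            new[rng.integers(20)] = cand[rng.integers(2000)]   # perturb one sampling point
            delta = mmsd(new) - mmsd(design)
            if delta < 0 or rng.random() < np.exp(-delta / T): # Metropolis acceptance
                design = new
            T *= 0.999                                         # geometric cooling law
        print("optimized MMSD:", round(mmsd(design), 2))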

  18. Adaptive optimal control of unknown constrained-input systems using policy iteration and neural networks.

    Science.gov (United States)

    Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher

    2013-10-01

    This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.

  19. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    International Nuclear Information System (INIS)

    Oliveira, Karina B. de; Oliveira, Bras H. de

    2013-01-01

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using a central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min⁻¹ and detection at 330 nm. Under these conditions, RA concentrations were 50% higher than in extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)
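
    The central composite design used in the optimization step is straightforward to construct. The sketch below builds a two-factor rotatable CCD; the factor names and coded ranges (methanol fraction, sonication time) are loosely inspired by the abstract and are illustrative guesses, not the paper's actual design settings.

        # Build a two-factor rotatable central composite design (CCD).
        import itertools
        import numpy as np

        alpha = np.sqrt(2.0)                  # rotatable axial distance for 2 factors
        factorial = list(itertools.product([-1.0, 1.0], repeat=2))
        axial = [(-alpha, 0.0), (alpha, 0.0), (0.0, -alpha), (0.0, alpha)]
        center = [(0.0, 0.0)] * 3             # replicated center points

        design = np.array(factorial + axial + center)

        # decode coded levels into real settings (center values and steps assumed)
        def decode(coded, center_val, step):
            return center_val + coded * step

        for x1, x2 in design:
            print(f"methanol {decode(x1, 40, 10):5.1f}%  time {decode(x2, 20, 5):5.1f} min")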

  20. HPLC/DAD determination of rosmarinic acid in Salvia officinalis: sample preparation optimization by factorial design

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Karina B. de [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Farmacia; Oliveira, Bras H. de, E-mail: bho@ufpr.br [Universidade Federal do Parana (UFPR), Curitiba, PR (Brazil). Dept. de Quimica

    2013-01-15

    Sage (Salvia officinalis) contains high amounts of the biologically active rosmarinic acid (RA) and other polyphenolic compounds. RA is easily oxidized and may undergo degradation during sample preparation for analysis. The objective of this work was to develop and validate an analytical procedure for the determination of RA in sage, using factorial design of experiments to optimize sample preparation. The statistically significant variables for improving RA extraction yield were determined initially and then used in the optimization step, using a central composite design (CCD). The analytical method was then fully validated and used for the analysis of commercial samples of sage. The optimized procedure involved extraction with aqueous methanol (40%) containing an antioxidant mixture (ascorbic acid and ethylenediaminetetraacetic acid (EDTA)), with sonication at 45 °C for 20 min. The samples were then injected into a system containing a C18 column, using methanol (A) and 0.1% phosphoric acid in water (B) in step gradient mode (45A:55B, 0-5 min; 80A:20B, 5-10 min) with a flow rate of 1.0 mL min⁻¹ and detection at 330 nm. Under these conditions, RA concentrations were 50% higher than in extractions without antioxidants (98.94 ± 1.07% recovery). Auto-oxidation of RA during sample extraction was prevented by the use of antioxidants, resulting in more reliable analytical results. The method was then used for the analysis of commercial samples of sage. (author)

  1. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute positional accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and it can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the proposed sampling planning method can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  2. Optimizing headspace sampling temperature and time for analysis of volatile oxidation products in fish oil

    DEFF Research Database (Denmark)

    Rørbæk, Karen; Jensen, Benny

    1997-01-01

    Headspace gas chromatography (HS-GC), based on adsorption to Tenax GR(R), thermal desorption and GC, has been used for the analysis of volatiles in fish oil. To optimize sampling conditions, the effect of heating the fish oil at various temperatures and times was evaluated from anisidine values (AV

  3. Isolation and identification of phytase-producing strains from soil samples and optimization of production parameters

    Directory of Open Access Journals (Sweden)

    Masoud Mohammadi

    2017-09-01

    Discussion and conclusion: Penicillium sp., isolated from a soil sample near Qazvin, was able to produce highly active phytase under optimized environmental conditions, making it a suitable candidate for the commercial production of phytase to be used as a supplement in the poultry feed industry.

  4. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method derived from the item count technique that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, have not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
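
    The flavor of the allocation result can be conveyed with the textbook cost-weighted Neyman allocation for a stratified design, sketched below; the stratum sizes, standard deviations, and costs are invented for illustration, and the paper's IST-specific variance terms are not reproduced.

        # Cost-weighted Neyman allocation: n_h proportional to N_h * S_h / sqrt(c_h)
        # minimizes the variance of the stratified mean estimator at fixed total cost.
        import numpy as np

        N_h = np.array([500, 300, 200])   # stratum population sizes (assumed)
        S_h = np.array([4.0, 2.5, 6.0])   # stratum standard deviations (assumed)
        c_h = np.array([1.0, 1.5, 2.0])   # per-interview cost per stratum (assumed)
        n_total = 120                     # overall sample size budget

        w = N_h * S_h / np.sqrt(c_h)
        n_h = np.round(n_total * w / w.sum()).astype(int)
        print("optimal allocation:", n_h)

    Rounding can make the allocations sum to slightly more or less than the budget; in practice the remainder is assigned to the largest strata.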

  5. Statistical surrogate model based sampling criterion for stochastic global optimization of problems with constraints

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Su Gil; Jang, Jun Yong; Kim, Ji Hoon; Lee, Tae Hee [Hanyang University, Seoul (Korea, Republic of); Lee, Min Uk [Romax Technology Ltd., Seoul (Korea, Republic of); Choi, Jong Su; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Daejeon (Korea, Republic of)

    2015-04-15

    Sequential surrogate-model-based global optimization algorithms, such as super-EGO, have been developed to increase the efficiency of commonly used global optimization techniques and to ensure the accuracy of the optimization. However, earlier algorithms have drawbacks: the optimization loop comprises three phases and relies on empirical parameters. We propose a united sampling criterion to simplify the algorithm and to reach the global optimum of constrained problems without any empirical parameters. The criterion is able to select points located in the feasible region with high model uncertainty, as well as points along the constraint boundary at the lowest objective value. The mean squared error determines which criterion is more dominant, the infill sampling criterion or the boundary sampling criterion. The method also guarantees the accuracy of the surrogate model because, unlike in super-EGO, the sample points are not clustered within extremely small regions. The performance of the proposed method, in terms of the solvability of a problem, convergence properties, and efficiency, is validated through nonlinear numerical examples with disconnected feasible regions.
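
    The infill part of such criteria is typically a variant of expected improvement computed from the surrogate's predictive mean and standard deviation. The sketch below shows that standard building block only; the paper's united criterion additionally mixes in a constraint-boundary term via the mean squared error, which is not reproduced here.

        # Expected improvement (EI) for minimization, from surrogate predictions.
        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, f_best):
            # mu, sigma: surrogate mean/std at candidate points; f_best: incumbent
            sigma = np.maximum(sigma, 1e-12)
            z = (f_best - mu) / sigma
            return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        mu = np.array([0.2, 0.5, 0.1])       # illustrative predictions
        sigma = np.array([0.05, 0.30, 0.01])
        print(expected_improvement(mu, sigma, f_best=0.15))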

  6. Nationwide survey of policies and practices related to capillary blood sampling in medical laboratories in Croatia

    Science.gov (United States)

    Krleza, Jasna Lenicek

    2014-01-01

    Introduction: Capillary sampling is increasingly used to obtain blood for laboratory tests in volumes as small as necessary and as non-invasively as possible. Whether capillary blood sampling is also frequent in Croatia, and whether it is performed according to international laboratory standards is unclear. Materials and methods: All medical laboratories that participate in the Croatian National External Quality Assessment Program (N = 204) were surveyed on-line to collect information about the laboratory’s parent institution, patient population, types and frequencies of laboratory tests based on capillary blood samples, choice of reference intervals, and policies and procedures specifically related to capillary sampling. Sampling practices were compared with guidelines from the Clinical and Laboratory Standards Institute (CLSI) and the World Health Organization (WHO). Results: Of the 204 laboratories surveyed, 174 (85%) responded with complete questionnaires. Among the 174 respondents, 155 (89%) reported that they routinely perform capillary sampling, which is carried out by laboratory staff in 118 laboratories (76%). Nearly half of respondent laboratories (48%) do not have a written protocol including order of draw for multiple sampling. A single puncture site is used to provide capillary blood for up to two samples at 43% of laboratories that occasionally or regularly perform such sampling. Most respondents (88%) never perform arterialisation prior to capillary blood sampling. Conclusions: Capillary blood sampling is highly prevalent in Croatia across different types of clinical facilities and patient populations. Capillary sampling procedures are not standardised in the country, and the rate of laboratory compliance with CLSI and WHO guidelines is low. PMID:25351353

  7. Energy efficiency optimization in distribution transformers considering Spanish distribution regulation policy

    International Nuclear Information System (INIS)

    Pezzini, Paola; Gomis-Bellmunt, Oriol; Frau-Valenti, Joan; Sudria-Andreu, Antoni

    2010-01-01

    In transmission and distribution systems, the high number of installed transformers, a source of losses in networks, suggests a good potential for energy savings. This paper examines how the Spanish distribution regulation policy, Royal Decree 222/2008, affects the overall energy efficiency of distribution transformers. The objective of a utility is to maximize its benefit and, in case of failures, to install a suitably chosen replacement transformer so as to maximize profit. Here, a novel method to optimize energy efficiency under the constraints set by the Spanish distribution regulation policy is presented; its aim is to achieve the objectives of the utility when installing new transformers. The resulting increase in overall energy efficiency can help in meeting the requirements of European environmental plans, such as the '20-20-20' action plan.

  8. Energy efficiency optimization in distribution transformers considering Spanish distribution regulation policy

    Energy Technology Data Exchange (ETDEWEB)

    Pezzini, Paola [Centre d' Innovacio en Convertidors Estatics i Accionaments (CITCEA-UPC), E.T.S. Enginyeria Industrial Barcelona, Universitat Politecnica Catalunya, Diagonal, 647, Pl. 2, 08028 Barcelona (Spain); Gomis-Bellmunt, Oriol; Sudria-Andreu, Antoni [Centre d' Innovacio en Convertidors Estatics i Accionaments (CITCEA-UPC), E.T.S. Enginyeria Industrial Barcelona, Universitat Politecnica Catalunya, Diagonal, 647, Pl. 2, 08028 Barcelona (Spain); IREC Catalonia Institute for Energy Research, Josep Pla, B2, Pl. Baixa, 08019 Barcelona (Spain); Frau-Valenti, Joan [ENDESA, Carrer Joan Maragall, 16 07006 Palma (Spain)

    2010-12-15

    In transmission and distribution systems, the high number of installed transformers, a source of losses in networks, suggests a good potential for energy savings. This paper examines how the Spanish distribution regulation policy, Royal Decree 222/2008, affects the overall energy efficiency of distribution transformers. The objective of a utility is to maximize its benefit and, in case of failures, to install a suitably chosen replacement transformer so as to maximize profit. Here, a novel method to optimize energy efficiency under the constraints set by the Spanish distribution regulation policy is presented; its aim is to achieve the objectives of the utility when installing new transformers. The resulting increase in overall energy efficiency can help in meeting the requirements of European environmental plans, such as the '20-20-20' action plan. (author)

  9. Optimal maintenance policy for a system subject to damage in a discrete time process

    International Nuclear Information System (INIS)

    Chien, Yu-Hung; Sheu, Shey-Huei; Zhang, Zhe George

    2012-01-01

    Consider a system operating over n discrete time periods (n=1, 2, …). Each operation period causes a random amount of damage to the system, which accumulates over time periods. The system fails when the cumulative damage exceeds a failure level ζ, upon which a corrective maintenance (CM) action is immediately taken. To prevent such a failure, a preventive maintenance (PM) may be performed. In an operation period without a CM or PM, a regular maintenance (RM) is conducted at the end of that period to maintain the operation of the system. We propose a maintenance policy which prescribes a PM when the accumulated damage exceeds a pre-specified level δ (below the failure level ζ) or when the system has operated for N periods, whichever occurs first. We derive the optimal δ* and N* and discuss some useful properties of them. It has been shown that a δ-based PM outperforms an N-based PM in terms of cost minimization. Numerical examples are presented to demonstrate the optimization of this class of maintenance policies.
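
    Such a policy is easy to evaluate numerically by renewal-reward simulation. The sketch below estimates the long-run average cost rate of a (δ, N) rule; the damage distribution and all cost figures are invented for illustration, not taken from the paper.

        # Monte Carlo evaluation of a (delta, N) preventive maintenance rule.
        import numpy as np

        rng = np.random.default_rng(2)
        zeta, c_cm, c_pm, c_rm = 10.0, 50.0, 12.0, 1.0   # failure level and costs

        def avg_cost_rate(delta, N, n_cycles=20000):
            total_cost = total_periods = 0.0
            for _ in range(n_cycles):
                damage, t = 0.0, 0
                while True:
                    t += 1
                    damage += rng.exponential(1.5)   # random damage this period
                    if damage > zeta:                # failure -> corrective maintenance
                        total_cost += c_cm
                        break
                    if damage > delta or t >= N:     # preventive maintenance rule
                        total_cost += c_pm
                        break
                    total_cost += c_rm               # regular maintenance otherwise
                total_periods += t
            return total_cost / total_periods        # renewal-reward cost rate

        for delta in (4.0, 6.0, 8.0):
            print(delta, round(avg_cost_rate(delta, N=8), 3))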

  10. Time optimization of 90Sr measurements: Sequential measurement of multiple samples during ingrowth of 90Y

    International Nuclear Information System (INIS)

    Holmgren, Stina; Tovedal, Annika; Björnham, Oscar; Ramebäck, Henrik

    2016-01-01

    The aim of this paper is to contribute to more rapid determination of a series of samples containing 90Sr by making the Cherenkov measurement of the daughter nuclide 90Y more time efficient. There are many instances when optimization of the measurement method is favorable, such as situations requiring rapid results for urgent decisions or, conversely, situations calling for maximal sample throughput in a limited available time span. In order to minimize the total analysis time, a mathematical model was developed which calculates the ingrowth time as well as the individual measurement times for n samples in a series. This work focuses on the measurement of 90Y during ingrowth, after an initial chemical separation of strontium, under the assumption that no other radioactive strontium isotopes are present. By fixing the minimum detectable activity (MDA) and iterating the measurement time for each consecutive sample, the total analysis time is reduced compared with using the same measurement time for all samples. It was found that through optimization the total analysis time for 10 samples can be decreased greatly, from 21 h to 6.5 h, assuming an MDA of 1 Bq/L and a background count rate of approximately 0.8 cpm. - Highlights: • An approach roughly a factor of three more efficient than an un-optimized method. • The optimization makes more efficient use of instrument time. • The efficiency increase ranges from a factor of three to 10, for 10 to 40 samples.
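
    The trade-off exploited here, longer 90Y ingrowth allowing shorter counting times for a fixed MDA, can be sketched with the standard Currie detection-limit formula, as below; the counting efficiency, background rate, and target MDA are illustrative values, not the paper's, and sample volume is omitted for simplicity.

        # Counting time needed to reach a fixed MDA as a function of 90Y ingrowth.
        import numpy as np

        lam_Y = np.log(2) / (64.0 * 3600)     # 90Y decay constant (T1/2 ~ 64 h), 1/s
        eff, bkg_cps, mda_target = 0.4, 0.8 / 60, 1.0   # efficiency, cps, target Bq

        def counting_time(t_ingrowth, t_max=48 * 3600):
            f = 1.0 - np.exp(-lam_Y * t_ingrowth)        # 90Y ingrowth fraction
            for t in np.arange(60, t_max, 60):           # scan counting times (s)
                ld = 2.71 + 4.65 * np.sqrt(bkg_cps * t)  # Currie limit, counts
                if ld / (eff * t * f) <= mda_target:     # achievable MDA at time t
                    return t
            return None

        for hours in (2, 6, 12, 24):
            t = counting_time(hours * 3600)
            print(f"ingrowth {hours:2d} h -> counting time {t/60:.0f} min")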

  11. Optimal sampling plan for clean development mechanism energy efficiency lighting projects

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2013-01-01

    Highlights: • A metering cost minimisation model is built to assist the sampling plan for CDM projects. • The model minimises the total metering cost by the determination of optimal sample size. • The required 90/10 criterion sampling accuracy is maintained. • The proposed metering cost minimisation model is applicable to other CDM projects as well. - Abstract: Clean development mechanism (CDM) project developers are always interested in achieving required measurement accuracies with the least metering cost. In this paper, a metering cost minimisation model is proposed for the sampling plan of a specific CDM energy efficiency lighting project. The problem arises from the particular CDM sampling requirement of 90% confidence and 10% precision for the small-scale CDM energy efficiency projects, which is known as the 90/10 criterion. The 90/10 criterion can be met through solving the metering cost minimisation problem. All the lights in the project are classified into different groups according to uncertainties of the lighting energy consumption, which are characterised by their statistical coefficient of variance (CV). Samples from each group are randomly selected to install power meters. These meters include less expensive ones with less functionality and more expensive ones with greater functionality. The metering cost minimisation model will minimise the total metering cost through the determination of the optimal sample size at each group. The 90/10 criterion is formulated as constraints to the metering cost objective. The optimal solution to the minimisation problem will therefore minimise the metering cost whilst meeting the 90/10 criterion, and this is verified by a case study. Relationships between the optimal metering cost and the population sizes of the groups, CV values and the meter equipment cost are further explored in three simulations. The metering cost minimisation model proposed for lighting systems is applicable to other CDM projects as

  12. Optimal dividend policies with transaction costs for a class of jump-diffusion processes

    DEFF Research Database (Denmark)

    Hunting, Martin; Paulsen, Jostein

    2013-01-01

    This paper addresses the problem of finding an optimal dividend policy for a class of jump-diffusion processes. The jump component is a compound Poisson process with negative jumps, and the drift and diffusion components are assumed to satisfy some regularity and growth restrictions. Each dividend payment incurs a fixed and a proportional cost, meaning that if ξ is paid out by the company, the shareholders receive kξ−K, where k and K are positive. The aim is to maximize expected discounted dividends until ruin. It is proved that when the jumps belong to a certain class of light

  13. Optimal ordering and pricing policy for price sensitive stock–dependent demand under progressive payment scheme

    Directory of Open Access Journals (Sweden)

    Nita H. Shah

    2011-01-01

    Full Text Available The terminal condition of zero inventory level at the end of the cycle time, adopted by Soni and Shah (2008, 2009), is not viable when demand is stock-dependent. To rectify this assumption, we extend their model to allow for (1) a non-zero ending inventory; (2) limited floor space; (3) a profit maximization model; (4) selling price as a decision variable; and (5) units in inventory deteriorating at a constant rate. An algorithm is developed to search for the optimal decision policy. The working of the proposed model is supported with a numerical example. Sensitivity analysis is carried out to investigate critical parameters.

  14. Determination of optimal environmental policy for reclamation of land unearthed in lignite mines - Strategy and tactics

    Science.gov (United States)

    Batzias, Dimitris F.; Pollalis, Yannis A.

    2012-12-01

    In this paper, an optimal environmental policy for the reclamation of land unearthed in lignite mines is defined as a strategic target. The tactics for achieving this target include estimating the optimal time lag between the complete exploitation of each lignite site (a segment of the whole lignite field) and its reclamation. Subsidization of reclamation is determined as a function of this time lag, and an implementation is presented for parameter values valid for the Greek economy. We show that the methodology we have developed gives reasonable quantitative results within the norms imposed by legislation. Moreover, the interconnection between strategy and tactics becomes evident, since the former drives the latter by deduction and the latter revises the former by induction over the course of land reclamation.

  15. REGULATORY POLICY AND OPTIMIZATION OF INVESTMENT RESOURCE ALLOCATION IN THE MODEL OF FUNCTIONING OF RECREATION INDUSTRY

    Directory of Open Access Journals (Sweden)

    Hanna Shevchenko

    2017-11-01

    Full Text Available The research objective is to provide a theoretical and methodological basis for improving regulatory policy and the distribution of financial investments, using a model of the functioning of the recreational sector of the national economy. The methodology employs optimal control theory to model the functioning of the recreational industry, to determine the behaviour of regulatory authorities, and to optimize the allocation of investment resources in the recreational sector of the national economy. Results. The issue of balancing regulatory policy in the recreational sector is addressed, including the targeted distribution of state and external financial investments. It is argued that regulatory policy should establish a framework that, on the one hand, prevents public authorities from exerting undue influence on the recreation economy and, on the other hand, keeps the behaviour of recreational business entities within the bounds of normal socio-economic activity, based on an analysis of the continuum "recreation - work" by means of a modified Brennan-Buchanan model. It is revealed that even with reduced taxes, the population rests less and works more than it would in a developed economy. However, under an optimistic forecast, as the economy emerges from the shadow, an official mode of work is eventually obtained in which, while keeping taxes at the level most advantageous for the population, the ratio of leisure to work ultimately corresponds to the principles of sustainable development. Practical value. On the basis of the methodical principles of optimal control theory, the model of the functioning of the recreational industry under the

  16. Optimized IMAC-IMAC protocol for phosphopeptide recovery from complex biological samples

    DEFF Research Database (Denmark)

    Ye, Juanying; Zhang, Xumin; Young, Clifford

    2010-01-01

    using Fe(III)-NTA IMAC resin and it proved to be highly selective in the phosphopeptide enrichment of a highly diluted standard sample (1:1000) prior to MALDI MS analysis. We also observed that a higher iron purity led to an increased IMAC enrichment efficiency. The optimized method was then adapted...... to phosphoproteome analyses of cell lysates of high protein complexity. From either 20 microg of mouse sample or 50 microg of Drosophila melanogaster sample, more than 1000 phosphorylation sites were identified in each study using IMAC-IMAC and LC-MS/MS. We demonstrate efficient separation of multiply phosphorylated...... characterization of phosphoproteins in functional phosphoproteomics research projects....

  17. Optimal household refrigerator replacement policy for life cycle energy, greenhouse gas emissions, and cost

    International Nuclear Information System (INIS)

    Kim, Hyung Chul; Keoleian, Gregory A.; Horie, Yuhta A.

    2006-01-01

    Although the last decade witnessed dramatic progress in refrigerator efficiencies, inefficient, outdated refrigerators are still in operation, sometimes consuming more than twice as much electricity per year as modern, efficient models. Replacing old refrigerators before the end of their design lifetime could be a useful policy to conserve electric energy and reduce greenhouse gas emissions. However, from a life cycle perspective, product replacement decisions also induce additional economic and environmental burdens associated with the disposal of old models and the production of new ones. This paper discusses optimal lifetimes of mid-sized refrigerator models in the US, using a life cycle optimization model based on dynamic programming. Model runs were conducted to find optimal lifetimes that minimize energy, global warming potential (GWP), and cost objectives over a time horizon between 1985 and 2020. The baseline results show that, depending on the model year, optimal lifetimes range from 2 to 7 years for the energy objective and from 2 to 11 years for the GWP objective. On the other hand, an 18-year lifetime minimizes the economic cost incurred during the time horizon. Model runs with a time horizon between 2004 and 2020 show that current owners should replace refrigerators that consume more than 1000 kWh/year of electricity (typical mid-sized 1994 models and older) as an efficient strategy from both cost and energy perspectives.
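
    The underlying dynamic program is a classic equipment-replacement recursion. The sketch below minimizes cumulative energy over the 2004-2020 horizon by deciding, each year, whether to keep or replace the unit; the energy-use trend and the replacement burden are invented placeholders, not the paper's data.

        # Dynamic programming over (year, model year owned): keep or replace.
        from functools import lru_cache

        YEARS = list(range(2004, 2021))
        REPLACE_BURDEN = 700.0              # production + disposal, kWh-equivalent

        def annual_use(model_year):
            # newer models consume less electricity (illustrative decline, kWh/yr)
            return max(400.0, 1400.0 - 25.0 * (model_year - 1985))

        @lru_cache(maxsize=None)
        def best(year_idx, owned):
            if year_idx == len(YEARS):
                return 0.0
            keep = annual_use(owned) + best(year_idx + 1, owned)
            replace = (REPLACE_BURDEN + annual_use(YEARS[year_idx])
                       + best(year_idx + 1, YEARS[year_idx]))
            return min(keep, replace)

        # minimum life cycle energy 2004-2020 when starting from a 1994 model
        print(round(best(0, 1994), 1), "kWh-equivalent")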

  18. A model based on stochastic dynamic programming for determining China's optimal strategic petroleum reserve policy

    International Nuclear Information System (INIS)

    Zhang Xiaobing; Fan Ying; Wei Yiming

    2009-01-01

    China's Strategic Petroleum Reserve (SPR) is currently being prepared. But how large the optimal stockpile size for China should be, what the best acquisition strategies are, how to release the reserve if a disruption occurs, and other related issues still need to be studied in detail. In this paper, we develop a stochastic dynamic programming model based on a total potential cost function of establishing SPRs to evaluate the optimal SPR policy for China. Using this model, empirical results are presented for the optimal size of China's SPR and the best acquisition and drawdown strategies for a few specific cases. The results show that, with comprehensive consideration, the optimal SPR size for China is around 320 million barrels. This size is equivalent to about 90 days of net oil imports at the 2006 level and should be reached in the year 2017, three years earlier than the national goal, which implies that the need for China to fill the SPR is probably more pressing; the best stockpile release action in a disruption depends on the disruption level and the expected continuation probabilities. The information provided by these results will be useful for decision makers.

  19. Optimal inventory policy in a closed loop supply chain system with multiple periods

    International Nuclear Information System (INIS)

    Sasi Kumar, A.; Natarajan, K.; Ramasubramaniam, Muthu Rathna Sapabathy.; Deepaknallasamy, K.K.

    2017-01-01

    Purpose: This paper aims to model and optimize the closed loop supply chain for maximizing the profit by considering a fixed order quantity inventory policy at various sites over multiple periods. Design/methodology/approach: In a forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer, distributor, retailer and customer, but inventory in the reverse supply chain of the product is very difficult to manage with a similar standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs, namely commercial returns, end-of-use returns and end-of-life returns, and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved with IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, required in a quantity of two, B, and C) over 12 periods. The results of the analysis show that the manufacturer can determine how much should be manufactured in each period, based on variations of the demand, by adopting the FOQ inventory policy at different sites while considering its capacity constraints. In addition, the model determines how many parts should be purchased from the supplier in each of the 12 periods. Originality/value: A sensitivity analysis in two parts is performed to validate the proposed model. The first part of the analysis focuses on the inventory of product and parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.

  20. Optimal inventory policy in a closed loop supply chain system with multiple periods

    Energy Technology Data Exchange (ETDEWEB)

    Sasi Kumar, A.; Natarajan, K.; Ramasubramaniam, Muthu Rathna Sapabathy.; Deepaknallasamy, K.K.

    2017-07-01

    Purpose: This paper aims to model and optimize the closed loop supply chain for maximizing the profit by considering a fixed order quantity inventory policy at various sites over multiple periods. Design/methodology/approach: In a forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer, distributor, retailer and customer, but inventory in the reverse supply chain of the product is very difficult to manage with a similar standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs, namely commercial returns, end-of-use returns and end-of-life returns, and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved with IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, required in a quantity of two, B, and C) over 12 periods. The results of the analysis show that the manufacturer can determine how much should be manufactured in each period, based on variations of the demand, by adopting the FOQ inventory policy at different sites while considering its capacity constraints. In addition, the model determines how many parts should be purchased from the supplier in each of the 12 periods. Originality/value: A sensitivity analysis in two parts is performed to validate the proposed model. The first part of the analysis focuses on the inventory of product and parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.

  1. Optimal inventory policy in a closed loop supply chain system with multiple periods

    Directory of Open Access Journals (Sweden)

    SasiKumar A.

    2017-05-01

    Full Text Available Purpose: This paper aims to model and optimize the closed loop supply chain for maximizing the profit by considering a fixed order quantity inventory policy at various sites over multiple periods. Design/methodology/approach: In a forward supply chain, a standard inventory policy can be followed as the product moves from manufacturer, distributor, retailer and customer, but inventory in the reverse supply chain of the product is very difficult to manage with a similar standard policy. This model investigates the standard policy of fixed order quantity by considering the three major types of return-recovery pairs, namely commercial returns, end-of-use returns and end-of-life returns, and their inventory positioning at multiple periods. The model is configured as a mixed integer linear program and solved with IBM ILOG CPLEX OPL Studio. Findings: To assess the performance of the model, a numerical example is considered for a product with three parts (A, required in a quantity of two, B, and C) over 12 periods. The results of the analysis show that the manufacturer can determine how much should be manufactured in each period, based on variations of the demand, by adopting the FOQ inventory policy at different sites while considering its capacity constraints. In addition, the model determines how many parts should be purchased from the supplier in each of the 12 periods. Originality/value: A sensitivity analysis in two parts is performed to validate the proposed model. The first part of the analysis focuses on the inventory of product and parts, and the second part focuses on the profit of the company. The analysis provides some insights into the structure of the model.

  2. An Optimization of (Q,r) Inventory Policy Based on Health Care Apparel Products with Compound Poisson Demands

    Directory of Open Access Journals (Sweden)

    An Pan

    2014-01-01

    Full Text Available Addressing the problems of a health care center which produces tailor-made clothes for specific people, the paper proposes a single-product continuous review model and establishes an optimal policy for the center based on the (Q,r) control policy to minimize the expected average cost over an order cycle. A generic mathematical model to compute the cost from the real-time inventory level is developed to generate the optimal order quantity under stochastic stock variation. The customer demands are described as a compound Poisson process. Comparisons of cost between the optimization method and experience-based decisions on Q are made through numerical studies conducted for the inventory system of the center.
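
    The expected average cost of a (Q,r) rule under compound Poisson demand is easy to estimate by simulation. The sketch below is a simplified lost-sales variant with invented cost figures and rates; it is not the paper's generic real-time-inventory cost model.

        # Simulation of a continuous-review (Q, r) policy under compound Poisson
        # demand: Poisson customer arrivals, random batch sizes, fixed lead time.
        import numpy as np

        rng = np.random.default_rng(3)
        lam, lead_time = 2.0, 1.0                    # arrivals/day, lead time (days)
        c_hold, c_short, c_order = 0.5, 5.0, 30.0    # unit-day, lost unit, per order

        def average_cost(Q, r, horizon=5000.0):
            t, on_hand, position = 0.0, Q, Q
            pipeline, cost = [], 0.0                 # outstanding order arrival times
            while t < horizon:
                dt = rng.exponential(1.0 / lam)      # time to next customer
                for ta in [a for a in pipeline if a <= t + dt]:
                    on_hand += Q                     # receive replenishment orders
                    pipeline.remove(ta)
                cost += c_hold * on_hand * dt        # holding cost, event granularity
                t += dt
                size = rng.poisson(1.5) + 1          # compound demand: batch >= 1
                fulfilled = min(size, on_hand)
                cost += c_short * (size - fulfilled) # unmet demand treated as lost
                on_hand -= fulfilled
                position -= fulfilled
                if position <= r:                    # reorder point reached
                    pipeline.append(t + lead_time)
                    position += Q
                    cost += c_order
            return cost / horizon

        for Q, r in [(10, 5), (20, 5), (20, 10)]:
            print(f"Q={Q:2d} r={r:2d}  avg cost/day = {average_cost(Q, r):.2f}")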

  3. Suboptimal and optimal order policies for fixed and varying replenishment interval with declining market

    Science.gov (United States)

    Yu, Jonas C. P.; Wee, H. M.; Yang, P. C.; Wu, Simon

    2016-06-01

    One of the supply chain risks for hi-tech products stems from rapid technological innovation, which results in a significant decline in the selling price and demand after the initial launch period. Hi-tech products include computers and consumer communication products. From a practical standpoint, a more realistic replenishment policy needs to consider the impact of such risks, especially when some portion of shortages is lost. In this paper, suboptimal and optimal order policies with partial backordering are developed for a buyer when the component cost, the selling price, and the demand rate decline at a continuous rate. Two mathematical models are derived and discussed: one model yields the suboptimal solution with a fixed replenishment interval and a simpler computational process; the other yields the optimal solution with a varying replenishment interval and a more complicated computational process. The second model results in more profit. Numerical examples are provided to illustrate the two replenishment models. Sensitivity analysis is carried out to investigate the relationship between the parameters and the net profit.

  4. Is there room for geoengineering in the optimal climate policy mix?

    International Nuclear Information System (INIS)

    Bahn, Olivier; Chesney, Marc; Gheyssens, Jonathan; Knutti, Reto; Pana, Anca Claudia

    2015-01-01

    Highlights: • We investigate the optimal policy mix for dealing with climate change. • We consider jointly mitigation, adaptation, and solar radiation management (SRM). • SRM can control temperature, but brings environmental side-effects. • SRM is not robust due to uncertainty in magnitude and persistency of side-effects. • Implementing SRM with wrong assumptions about side-effects largely decreases welfare. - Abstract: We investigate geoengineering as a possible substitute for mitigation and adaptation measures to address climate change. Relying on an integrated assessment model, we distinguish between the effects of solar radiation management (SRM) on atmospheric temperature levels and its side-effects on the environment. The optimal climate portfolio is a mix of mitigation, adaptation, and SRM. When accounting for uncertainty in the magnitude of SRM side-effects and their persistency over time, we show that the SRM option lacks robustness. We then analyse the welfare consequences of basing the SRM decision on wrong assumptions about its side-effects, and show that total output losses are considerable and increase with the error horizon. This reinforces the need to balance the policy portfolio in favour of mitigation

  5. Optimization of sampling for the determination of the mean Radium-226 concentration in surface soil

    International Nuclear Information System (INIS)

    Williams, L.R.; Leggett, R.W.; Espegren, M.L.; Little, C.A.

    1987-08-01

    This report describes a field experiment that identifies an optimal method for determining compliance with the US Environmental Protection Agency's Ra-226 guidelines for soil. The primary goals were to establish practical levels of accuracy and precision in estimating the mean Ra-226 concentration of surface soil in a small contaminated region; to obtain empirical information on composite vs. individual soil sampling and on random vs. uniformly spaced sampling; and to examine the practicality of using gamma measurements to predict the average surface radium concentration and to estimate the number of soil samples required to obtain a given level of accuracy and precision. Numerous soil samples were collected on each of six sites known to be contaminated with uranium mill tailings. Three types of samples were collected on each site: 10-composite samples, 20-composite samples, and individual or post hole samples. Of these, 10-composite sampling is the method of choice because it yields a given level of accuracy and precision at the least cost. Gamma measurements can be used to reduce surface soil sampling on some sites. 2 refs., 5 figs., 7 tabs
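
    The advantage of composite sampling at equal laboratory cost can be shown in a few lines of simulation, as below; the spatial and analytical variability figures are invented, not taken from the report.

        # Standard error of the estimated mean from k composites of m cores each
        # versus k individual samples (same number of laboratory analyses).
        import numpy as np

        rng = np.random.default_rng(4)
        true_mean, field_sd, lab_sd = 5.0, 2.0, 0.5   # pCi/g; spatial and analysis sd
        k, m, n_rep = 8, 10, 20000                    # analyses, cores per composite

        est_comp, est_ind = [], []
        for _ in range(n_rep):
            cores = rng.normal(true_mean, field_sd, (k, m))
            comp = cores.mean(axis=1) + rng.normal(0, lab_sd, k)   # composite analyses
            ind = rng.normal(true_mean, field_sd, k) + rng.normal(0, lab_sd, k)
            est_comp.append(comp.mean())
            est_ind.append(ind.mean())

        print("SE, 10-composite:", round(np.std(est_comp), 3))
        print("SE, individual:  ", round(np.std(est_ind), 3))

    Compositing averages out the spatial variability within each analysis, so the composite estimator's standard error is markedly smaller for the same analytical cost.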

  6. Efficient Round-Trip Time Optimization for Replica-Exchange Enveloping Distribution Sampling (RE-EDS).

    Science.gov (United States)

    Sidler, Dominik; Cristòfol-Clough, Michael; Riniker, Sereina

    2017-06-13

    Replica-exchange enveloping distribution sampling (RE-EDS) allows the efficient estimation of free-energy differences between multiple end-states from a single molecular dynamics (MD) simulation. In EDS, a reference state is sampled, which can be tuned by two types of parameters, i.e., smoothness parameter(s) and energy offsets, such that all end-states are sufficiently sampled. However, the choice of these parameters is not trivial. Replica exchange (RE), or parallel tempering, is a widely applied technique to enhance sampling. By combining EDS with the RE technique, the parameter-choice problem can be simplified and the challenge shifted toward an optimal distribution of the replicas in the smoothness-parameter space. The choice of a certain replica distribution can alter the sampling efficiency significantly. In this work, global round-trip time optimization (GRTO) algorithms are tested for use in RE-EDS simulations. In addition, a local round-trip time optimization (LRTO) algorithm is proposed for systems with slowly adapting environments, where a reliable estimate of the round-trip time is challenging to obtain. The optimization algorithms were applied to RE-EDS simulations of a system of nine small-molecule inhibitors of phenylethanolamine N-methyltransferase (PNMT). The energy offsets were determined using our recently proposed parallel energy-offset (PEOE) estimation scheme. While the multistate GRTO algorithm yielded the best replica distribution for the ligands in water, the multistate LRTO algorithm was found to be the method of choice for the ligands in complex with PNMT. With this, the 36 alchemical free-energy differences between the nine ligands were calculated successfully from a single RE-EDS simulation 10 ns in length. Thus, RE-EDS presents an efficient method for the estimation of relative binding free energies.

  7. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach, either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single- and two-arm clinical trials in the general case of a primary endpoint whose distribution is of one-parameter exponential family form, optimizing a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or its expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
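
    The O(N^(1/2)) scaling can be illustrated numerically in a deliberately simplified one-arm Bernoulli setting, sketched below: a trial of size n is run, the treatment is used for the remaining N - n patients only if its observed rate beats a known control rate, and n is chosen to maximize expected successes under a flat prior on the treatment rate. The gain function and prior are invented stand-ins for the paper's utility.

        # Brute-force optimal trial size for a one-arm Bernoulli decision problem.
        import numpy as np
        from scipy.stats import binom

        p_control = 0.50
        prior = np.linspace(0.40, 0.60, 41)     # equally weighted candidate rates

        def expected_successes(n, N):
            k = np.floor(n * p_control)
            p_pick = binom.sf(k, n, prior)      # P(observed rate beats control)
            p_rest = p_pick * prior + (1.0 - p_pick) * p_control
            return np.mean(n * prior + (N - n) * p_rest)

        for N in (10_000, 40_000, 160_000):
            ns = np.arange(10, N // 4, 10)
            best = max(ns, key=lambda n: expected_successes(n, N))
            print(f"N = {N:6d}: optimal n ~ {best}")

    As N grows by a factor of four, the brute-force optimum grows roughly by a factor of two, consistent with the square-root scaling.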

  8. Optimal maintenance policy incorporating system level and unit level for mechanical systems

    Science.gov (United States)

    Duan, Chaoqun; Deng, Chao; Wang, Bingran

    2018-04-01

    This study considers a multi-level maintenance policy combining the system level and the unit level under soft and hard failure modes. The system undergoes system-level preventive maintenance (SLPM) when the conditional reliability of the entire system exceeds the SLPM threshold, and each single unit undergoes a two-level maintenance: one action is initiated when a unit exceeds its own preventive maintenance (PM) threshold, and the other is performed opportunistically whenever any other unit goes in for maintenance. The units experience both periodic inspections and aperiodic inspections triggered by failures of hard-type units. To model practical situations, two types of economic dependence are taken into account: set-up cost dependence and maintenance expertise dependence, which arises because the same technology and tools/equipment can be utilised. The optimisation problem is formulated and solved in a semi-Markov decision process framework. The objective is to find the optimal system-level threshold and unit-level thresholds by minimising the long-run expected average cost per unit time. A formula for the mean residual life is derived for the proposed multi-level maintenance policy. The method is illustrated by a real case study of the feed subsystem of a boring machine, and a comparison with other policies demonstrates the effectiveness of our approach.

  9. Optimization of the two-sample rank Neyman-Pearson detector

    Science.gov (United States)

    Akimov, P. S.; Barashkov, V. M.

    1984-10-01

    The development of optimal rank-based algorithms for finite sample sizes involves considerable mathematical difficulties. The present investigation provides results related to the design and analysis of an optimal rank detector based on the Neyman-Pearson criterion. The detection of a signal in the presence of background noise is considered, taking into account n observations (readings) x1, x2, ... xn in the experimental communications channel. The rank of an observation is computed on the basis of relations between x and the variable y, which represents the interference. Attention is given to conditions in the absence of a signal, the probability of detecting an arriving signal, details regarding the use of the Neyman-Pearson criterion, the scheme of an optimal multichannel incoherent rank detector, and an analysis of the detector.
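
    A two-sample rank detector of this kind can be prototyped with a Wilcoxon/Mann-Whitney type statistic, with the threshold set to a target false-alarm probability in Neyman-Pearson fashion. In the sketch below the threshold and detection probability are obtained by Monte Carlo; the sample sizes, signal level, and noise law are arbitrary choices, not taken from the paper.

        # Rank-sum detector: threshold fixed for a target false-alarm rate.
        import numpy as np

        rng = np.random.default_rng(6)
        n, m, alpha = 16, 64, 0.05        # observations, reference noise, Pfa

        def rank_sum(x, y):
            # sum of ranks of x within the pooled sample (ranks start at 1)
            pooled = np.concatenate([x, y])
            ranks = pooled.argsort().argsort() + 1
            return ranks[: len(x)].sum()

        # threshold from the statistic's distribution under noise only (H0)
        h0 = [rank_sum(rng.standard_normal(n), rng.standard_normal(m))
              for _ in range(20000)]
        threshold = np.quantile(h0, 1 - alpha)

        # detection probability for a constant signal in Gaussian noise
        h1 = [rank_sum(rng.standard_normal(n) + 0.8, rng.standard_normal(m))
              for _ in range(20000)]
        print("Pd at Pfa=0.05:", np.mean(np.array(h1) > threshold))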

  10. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2001-01-01

    We present an algorithm for computing a near-optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server...... be off. The systems' semi-Markov decision model has infinite action sets for the positive states. We design a new tailor-made policy-iteration algorithm for computing a policy for which the long-run average cost is at most a positive tolerance above the minimum average cost. For any positive tolerance...

  11. Computation of a near-optimal service policy for a single-server queue with homogeneous jobs

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Larsen, Christian

    2000-01-01

    We present an algorithm for computing a near optimal service policy for a single-server queueing system when the service cost is a convex function of the service time. The policy has state-dependent service times, and it includes the options to remove jobs from the system and to let the server...... be off. The system's semi-Markov decision model has infinite action sets for the positive states. We design a new tailor-made policy iteration algorithm for computing a policy for which the long-run average cost is at most a positive tolerance above the minimum average cost. For any positive tolerance...

  12. Optimal repairable spare-parts procurement policy under total business volume discount environment

    International Nuclear Information System (INIS)

    Pascual, Rodrigo; Santelices, Gabriel; Lüer-Villagra, Armin; Vera, Jorge; Cawley, Alejandro Mac

    2017-01-01

    In asset-intensive fields, where components are expensive and high system availability is required, spare parts procurement is often a critical issue. To gain competitiveness and market share, it is common for vendors to offer Total Business Volume Discounts (TBVD). Accordingly, companies must define the procurement and stocking policy for their spare parts in order to reduce procurement costs and increase asset availability. In response to these needs, this work presents an optimization model that maximizes equipment availability in a TBVD environment, subject to a budget constraint. The model uses a single-echelon structure in which parts can be repaired. It determines the optimal number of repairable spare parts to be stocked, with emphasis on asset availability, procurement costs and service levels as the main decision criteria. A heuristic procedure that achieves high-quality solutions in a fast and time-consistent way was implemented to reduce the time required to obtain the model solution. Results show that using an optimal procurement policy for spare parts while accounting for TBVD produces better overall results and yields better availability performance. - Highlights: • We propose a model for the procurement of repairable components in single-echelon and business volume discount environments. • We use the mathematical model to develop a competitive heuristic that provides high-quality solutions in very short times. • Our model places emphasis on system availability, procurement costs and service levels as the leading decision criteria. • The model can be used as an engine for a multi-criteria Decision Support System.

  13. INDEXABILITY AND OPTIMAL INDEX POLICIES FOR A CLASS OF REINITIALISING RESTLESS BANDITS.

    Science.gov (United States)

    Villar, Sofía S

    2016-01-01

    Motivated by a class of Partially Observable Markov Decision Processes with application to surveillance systems, in which a set of imperfectly observed state processes is to be inferred from a subset of available observations through a Bayesian approach, we formulate and analyze a special family of multi-armed restless bandit problems. We consider the problem of finding an optimal policy for observing the processes that maximizes the total expected net rewards over an infinite time horizon, subject to the resource availability. From the Lagrangian relaxation of the original problem, an index policy can be derived, as long as the existence of the Whittle index is ensured. We demonstrate that this class of reinitializing bandits, in which a project's state deteriorates while active and resets to its initial state when passive until its completion, possesses the structural property of indexability, and we further show how to compute the index in closed form. In general, the Whittle index rule for restless bandit problems does not achieve optimality. However, we show that the proposed Whittle index rule is optimal for the problem under study in the case of stochastically heterogeneous arms under the expected total criterion, and that it is further recovered by a simple tractable rule referred to as the 1-limited Round Robin rule. Moreover, we illustrate the significant suboptimality of another widely used heuristic, the myopic index rule, by computing its suboptimality gap in closed form. We present numerical studies which illustrate, for more general instances, the performance advantages of the Whittle index rule over other simple heuristics.

  14. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space in each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
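
    The WBMS step can be sketched compactly: draw binary rows with per-variable inclusion weights, score each sub-model on held-out data, and move weight toward variables appearing in the better half of the sub-models. The data, model count, and scoring rule below are placeholders, not the VISSA defaults.

        # Weighted binary matrix sampling with iterative weight updates.
        import numpy as np

        rng = np.random.default_rng(7)
        n, p = 80, 30
        X = rng.standard_normal((n, p))
        y = X[:, :5] @ np.ones(5) + 0.3 * rng.standard_normal(n)   # 5 true variables
        train, test = slice(0, 40), slice(40, 80)

        w = np.full(p, 0.5)                        # per-variable inclusion weights
        for step in range(15):
            B = rng.random((500, p)) < w           # weighted binary sampling matrix
            rmse = np.full(500, np.inf)
            for i, row in enumerate(B):
                idx = np.flatnonzero(row)
                if idx.size == 0:
                    continue
                beta, *_ = np.linalg.lstsq(X[train][:, idx], y[train], rcond=None)
                pred = X[test][:, idx] @ beta
                rmse[i] = np.sqrt(np.mean((y[test] - pred) ** 2))
            top = B[np.argsort(rmse)[:250]]        # better half of the sub-models
            w = top.mean(axis=0)                   # new inclusion frequencies

        print("final weights of first 10 variables:", np.round(w[:10], 2))

    Variables whose weights climb toward 1 over the iterations are the ones retained; in this toy run the five true variables should dominate.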

  15. Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples

    Directory of Open Access Journals (Sweden)

    Hyunok Oh

    2003-05-01

    Full Text Available In multimedia and graphics applications, data samples of non-primitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model, with the execution order of nodes given. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and by 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirements.

  16. Optimization of liquid scintillation measurements applied to smears and aqueous samples collected in industrial environments

    Directory of Open Access Journals (Sweden)

    Arnaud Chapon

    Full Text Available Searching for low-energy β contamination in industrial environments requires Liquid Scintillation Counting. This indirect measurement method demands fine control of the whole chain, from sampling to the measurement itself. In this paper, we therefore focus on the definition of a measurement method, as generic as possible, for the characterization of both smears and aqueous samples. That includes the choice of consumables, sampling methods, the optimization of counting parameters, and the definition of energy windows by maximizing a Figure of Merit. Detection limits are then calculated considering these optimized parameters. For this purpose, we used PerkinElmer Tri-Carb counters. Nevertheless, except for those relative to parameters specific to PerkinElmer instruments, most of the results presented here can be extended to other counters. Keywords: Liquid Scintillation Counting (LSC), PerkinElmer, Tri-Carb, Smear, Swipe
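
    The energy-window optimization step can be sketched as a scan over window bounds that maximizes the Figure of Merit FoM = E²/B (E the counting efficiency in the window, B the background counts in the window); the spectra below are synthetic placeholders for real Tri-Carb exports.

        # Energy-window optimization by Figure of Merit maximization.
        import numpy as np

        channels = np.arange(200)
        source = np.exp(-0.5 * ((channels - 60) / 25) ** 2) * 5000   # beta-like hump
        blank = np.full(200, 20.0) + 30 * np.exp(-channels / 15)     # background shape

        best = (0.0, None)
        total = source.sum()
        for lo in range(0, 200, 5):
            for hi in range(lo + 5, 200, 5):
                eff = 100.0 * source[lo:hi].sum() / total    # efficiency in window, %
                bkg = blank[lo:hi].sum()                     # background in window
                fom = eff ** 2 / bkg
                if fom > best[0]:
                    best = (fom, (lo, hi))

        print("best window:", best[1], "FoM:", round(best[0], 1))

    Maximizing E²/B rather than E alone trades a little efficiency for a much lower background, which is what drives the detection limit down.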

  17. The SDSS-IV MaNGA Sample: Design, Optimization, and Usage Considerations

    Science.gov (United States)

    Wake, David A.; Bundy, Kevin; Diamond-Stanic, Aleksandar M.; Yan, Renbin; Blanton, Michael R.; Bershady, Matthew A.; Sánchez-Gallego, José R.; Drory, Niv; Jones, Amy; Kauffmann, Guinevere; Law, David R.; Li, Cheng; MacDonald, Nicholas; Masters, Karen; Thomas, Daniel; Tinker, Jeremy; Weijmans, Anne-Marie; Brownstein, Joel R.

    2017-09-01

    We describe the sample design for the SDSS-IV MaNGA survey and present the final properties of the main samples along with important considerations for using these samples for science. Our target selection criteria were developed while simultaneously optimizing the size distribution of the MaNGA integral field units (IFUs), the IFU allocation strategy, and the target density to produce a survey defined in terms of maximizing signal-to-noise ratio, spatial resolution, and sample size. Our selection strategy makes use of redshift limits that depend only on I-band absolute magnitude (M_I), or, for a small subset of our sample, M_I and color (NUV - I). Such a strategy ensures that all galaxies span the same range in angular size irrespective of luminosity and are therefore covered evenly by the adopted range of IFU sizes. We define three samples: the Primary and Secondary samples are selected to have a flat number density with respect to M_I and are targeted to have spectroscopic coverage to 1.5 and 2.5 effective radii (R_e), respectively. The Color-Enhanced supplement increases the number of galaxies in the low-density regions of color-magnitude space by extending the redshift limits of the Primary sample in the appropriate color bins. The samples cover the stellar mass range 5 × 10^8 ≤ M* ≤ 3 × 10^11 M⊙ h^-2 and are sampled at median physical resolutions of 1.37 and 2.5 kpc for the Primary and Secondary samples, respectively. We provide weights that will statistically correct for our luminosity- and color-dependent selection function and IFU allocation strategy, thus correcting the observed sample to a volume-limited sample.

  18. Replacement policy of residential lighting optimized for cost, energy, and greenhouse gas emissions

    Science.gov (United States)

    Liu, Lixi; Keoleian, Gregory A.; Saitou, Kazuhiro

    2017-11-01

    Accounting for 10% of the electricity consumption in the US, artificial lighting represents one of the easiest ways to cut household energy bills and greenhouse gas (GHG) emissions by upgrading to energy-efficient technologies such as compact fluorescent lamps (CFL) and light emitting diodes (LED). However, given the high initial cost and rapidly improving trajectory of solid-state lighting today, estimating the right time to switch over to LEDs from a cost, primary energy, and GHG emissions perspective is not a straightforward problem. This is an optimal replacement problem that depends on many determinants, including how often the lamp is used, the state of the initial lamp, and the trajectories of lighting technology and of electricity generation. In this paper, multiple replacement scenarios of a 60 watt-equivalent A19 lamp are analyzed, and for each scenario a few replacement policies are recommended. For example, at an average use of 3 hr day⁻¹ (US average), it may be optimal both economically and energetically to delay the adoption of LEDs until 2020 with the use of CFLs, whereas purchasing LEDs today may be optimal in terms of GHG emissions. In contrast, incandescent and halogen lamps should be replaced immediately. Based on expected LED improvement, upgrading LED lamps before the end of their rated lifetime may provide cost and environmental savings over time by taking advantage of the higher energy efficiency of newer models.
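
    The flavour of this replacement-timing problem can be captured with a far simpler enumeration than the paper's model: for each candidate switch year, accumulate the discounted cost of lamp purchases and electricity and pick the cheapest year. All prices, wattages and the LED price trajectory below are illustrative assumptions (lamp lifetimes and GHG accounting are ignored).

```python
# Simplified sketch of the replacement-timing question (not the paper's
# model): enumerate the year of switching from a CFL to an LED and pick the
# year with the lowest discounted cost. All numbers are assumptions.

HOURS_PER_YEAR = 3 * 365          # 3 h/day, the US-average use cited above
ELEC_PRICE = 0.13                 # $/kWh, assumed flat
DISCOUNT = 0.05
HORIZON = 10                      # years

def led_price(year):              # assumed declining LED price trajectory
    return max(2.0, 8.0 * 0.8 ** year)

def annual_cost(watts):
    return watts / 1000.0 * HOURS_PER_YEAR * ELEC_PRICE

def total_cost(switch_year):
    """CFL (13 W, $1.5) until switch_year, then an LED (9 W) to the horizon."""
    cost = 1.5                    # buy the CFL in year 0 (kept for simplicity)
    for y in range(HORIZON):
        df = (1 + DISCOUNT) ** -y
        if y == switch_year:
            cost += led_price(y) * df
        cost += annual_cost(13 if y < switch_year else 9) * df
    return cost

best = min(range(HORIZON), key=total_cost)
print("cheapest switch year:", best, " cost: $%.2f" % total_cost(best))
```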

  19. Optimal preventive maintenance and repair policies for multi-state systems

    International Nuclear Information System (INIS)

    Sheu, Shey-Huei; Chang, Chin-Chih; Chen, Yen-Luan; George Zhang, Zhe

    2015-01-01

    This paper studies the optimal preventive maintenance (PM) policies for multi-state systems. The scheduled PMs can be either of imperfect or perfect type. The improved effective age is utilized to model the effect of an imperfect PM. The system is considered to be in a failure state (unacceptable state) once its performance level falls below a given customer demand level. If the system fails before a scheduled PM, it is repaired and becomes operational again. We consider three types of repair actions: major, minimal, and imperfect repair. The deterioration of the system is assumed to follow a non-homogeneous continuous-time Markov process (NHCTMP) with finite state space. A recursive approach is proposed to efficiently compute the time-dependent distribution of the multi-state system. For each repair type, we find the optimal PM schedule that minimizes the average cost rate. The main implication of our results is that, in determining the optimal scheduled PM, choosing the right repair type significantly improves the efficiency of system maintenance. Thus PM and repair decisions must be made jointly to achieve the best performance.
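
    While the paper's NHCTMP machinery is beyond a short sketch, the cost-rate minimization at its core can be illustrated with the classic single-unit age-replacement analogue: a renewal-reward cost rate C(T) minimized over the PM age T. The Weibull lifetime and the cost figures below are assumptions, not values from the paper.

```python
# Much-simplified single-unit analogue (the paper treats multi-state NHCTMP
# systems): age replacement under an assumed Weibull lifetime. The cost rate
#   C(T) = (c_p * R(T) + c_f * F(T)) / E[min(X, T)],  E[min(X,T)] = int_0^T R,
# is minimised by grid search over the PM age T.
import numpy as np

k, lam = 2.5, 1000.0              # Weibull shape / scale in hours (assumed)
c_p, c_f = 50.0, 500.0            # preventive PM cost vs failure cost (assumed)

def cost_rate(T, n=4000):
    t = np.linspace(0.0, T, n)
    R = np.exp(-(t / lam) ** k)                     # survival function
    mean_cycle = np.sum((R[1:] + R[:-1]) / 2) * (t[1] - t[0])  # trapezoid rule
    F_T = 1.0 - R[-1]
    return (c_p * (1.0 - F_T) + c_f * F_T) / mean_cycle

Ts = np.linspace(50, 3000, 200)
T_star = min(Ts, key=cost_rate)
print("optimal PM age: %.0f h, cost rate: %.4f per hour" % (T_star, cost_rate(T_star)))
```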

  20. Unit Stratified Sampling as a Tool for Approximation of Stochastic Optimization Problems

    Czech Academy of Sciences Publication Activity Database

    Šmíd, Martin

    2012-01-01

    Roč. 19, č. 30 (2012), s. 153-169 ISSN 1212-074X R&D Projects: GA ČR GAP402/11/0150; GA ČR GAP402/10/0956; GA ČR GA402/09/0965 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords : Stochastic programming * approximation * stratified sampling Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/E/smid-unit stratified sampling as a tool for approximation of stochastic optimization problems.pdf

  1. Generalized Likelihood Uncertainty Estimation (GLUE) Using Multi-Optimization Algorithm as Sampling Method

    Science.gov (United States)

    Wang, Z.

    2015-12-01

    For decades, distributed and lumped hydrological models have furthered our understanding of hydrological systems. The development of large-scale, high-precision hydrological simulation has refined spatial descriptions of hydrological behavior. This trend, however, has been accompanied by growth in model complexity and in the number of parameters, which brings new challenges for uncertainty quantification. Generalized Likelihood Uncertainty Estimation (GLUE), which couples Monte Carlo sampling with Bayesian estimation, has been widely used in uncertainty analysis for hydrological models. However, the stochastic sampling of prior parameters adopted by GLUE is inefficient, especially in high-dimensional parameter spaces. Heuristic optimization algorithms that rely on iterative evolution show better convergence speed and optimality-searching performance. In light of these features, this study adopted a genetic algorithm, differential evolution, and the shuffled complex evolution algorithm to search the parameter space and obtain the parameter sets of large likelihood. Based on this multi-algorithm sampling, hydrological model uncertainty analysis is conducted within the typical GLUE framework. To demonstrate the superiority of the new method, two hydrological models of different complexity are examined. The results show that the adaptive method tends to be efficient in sampling and effective in uncertainty analysis, providing an alternative path for uncertainty quantification.
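
    A compact way to see the idea, sketched below under toy assumptions, is to let a small differential-evolution loop act as the sampler, archive every parameter set it evaluates, and then run the standard GLUE weighting on the archive; the two-parameter "model" and the behavioural threshold are illustrative, not from the paper.

```python
# Conceptual sketch: a heuristic optimiser (minimal differential evolution)
# generates the samples; GLUE weighting is then applied to every evaluated
# parameter set. The "hydrological model" is a toy exponential recession.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
obs = 2.0 * np.exp(-x / 3.0) + rng.normal(0, 0.05, x.size)   # synthetic data

def model(theta):
    a, tau = theta
    return a * np.exp(-x / tau)

def likelihood(theta):            # Nash-Sutcliffe efficiency as the GLUE measure
    e = obs - model(theta)
    return 1.0 - np.sum(e ** 2) / np.sum((obs - obs.mean()) ** 2)

lo, hi = np.array([0.1, 0.5]), np.array([5.0, 10.0])
pop = rng.uniform(lo, hi, size=(20, 2))
fit = np.array([likelihood(p) for p in pop])
archive = [(p.copy(), f) for p, f in zip(pop, fit)]
for _ in range(100):              # minimal DE: mutate, clip, greedy replace
    for i in range(len(pop)):
        r1, r2, r3 = pop[rng.choice(len(pop), 3, replace=False)]
        trial = np.clip(r1 + 0.7 * (r2 - r3), lo, hi)
        f = likelihood(trial)
        archive.append((trial, f))
        if f > fit[i]:
            pop[i], fit[i] = trial, f

thetas = np.array([t for t, _ in archive])
L = np.array([f for _, f in archive])
behavioural = L > 0.7             # behavioural threshold (assumed)
w = L[behavioural] / L[behavioural].sum()        # GLUE likelihood weights
preds = np.array([model(t) for t in thetas[behavioural]])

def wquantile(v, weights, q):     # weighted quantile for the uncertainty band
    order = np.argsort(v)
    cw = np.cumsum(weights[order])
    return v[order][min(np.searchsorted(cw, q), v.size - 1)]

band = [(wquantile(preds[:, j], w, 0.05), wquantile(preds[:, j], w, 0.95))
        for j in range(x.size)]
print("behavioural sets:", int(behavioural.sum()), "90% band at x=0:", band[0])
```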

  2. SU-E-T-21: A Novel Sampling Algorithm to Reduce Intensity-Modulated Radiation Therapy (IMRT) Optimization Time

    International Nuclear Information System (INIS)

    Tiwari, P; Xie, Y; Chen, Y; Deasy, J

    2014-01-01

    Purpose: The IMRT optimization problem requires substantial computer time to find optimal dose distributions because of the large number of variables and constraints. Voxel sampling reduces the number of constraints and accelerates the optimization process, but usually deteriorates the quality of the dose distributions to the organs. We propose a novel sampling algorithm that accelerates the IMRT optimization process without significantly deteriorating the quality of the dose distribution. Methods: We included all boundary voxels, as well as a sampled fraction of interior voxels of organs, in the optimization. We selected the fraction of interior voxels using a clustering algorithm that creates clusters of voxels with similar influence matrix signatures. A few voxels are selected from each cluster based on the pre-set sampling rate. Results: We ran sampling and no-sampling IMRT plans for de-identified head and neck treatment plans. Testing different sampling rates, we found that including 10% of inner voxels produced good dose distributions. For this optimal sampling rate, the algorithm accelerated IMRT optimization by a factor of 2-3 with a negligible loss of accuracy that was, on average, 0.3% for common dosimetric planning criteria. Conclusion: We demonstrated that a sampling algorithm can be developed that reduces optimization time by more than a factor of 2 without significantly degrading dose quality
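
    A rough reconstruction of the voxel-sampling step, under synthetic data and with scikit-learn's k-means standing in for whatever clustering the authors used, is given below; the matrix sizes, cluster count and the 10% rate are assumptions for illustration.

```python
# Hedged reconstruction of the sampling idea in this abstract: keep all
# boundary voxels, cluster interior voxels by their influence-matrix rows,
# and sample a few voxels per cluster at a preset rate. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_interior, n_beamlets = 2000, 40
D = rng.random((n_interior, n_beamlets))      # influence matrix rows (synthetic)
boundary = rng.random((300, n_beamlets))      # boundary voxels: always kept

rate = 0.10                                   # 10% interior sampling, as above
k = 50                                        # number of clusters (assumed)
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(D)

picked = []
for c in range(k):
    members = np.flatnonzero(labels == c)
    n_pick = max(1, int(rate * members.size))
    picked.extend(rng.choice(members, n_pick, replace=False))

sampled = np.vstack([boundary, D[picked]])
print("constraints: %d instead of %d" % (sampled.shape[0], n_interior + len(boundary)))
```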

  3. Evaluation of sample preparation methods and optimization of nickel determination in vegetable tissues

    Directory of Open Access Journals (Sweden)

    Rodrigo Fernando dos Santos Salazar

    2011-02-01

    Full Text Available Nickel, although essential to plants, may be toxic to plants and animals. It is mainly assimilated by food ingestion. However, information about the average levels of elements (including Ni) in edible vegetables from different regions is still scarce in Brazil. The objectives of this study were to: (a) evaluate and optimize a method for preparation of vegetable tissue samples for Ni determination; (b) optimize the analytical procedures for determination by Flame Atomic Absorption Spectrometry (FAAS) and by Electrothermal Atomic Absorption Spectrometry (ETAAS) in vegetable samples; and (c) determine the Ni concentration in vegetables consumed in the cities of Lorena and Taubaté in the Vale do Paraíba, State of São Paulo, Brazil. For the analytical determinations by ETAAS or FAAS, the results were validated by analyte addition and recovery tests. The most viable method tested for quantification of this element was HClO4-HNO3 wet digestion. All samples but the carrot tissue collected in Lorena contained Ni levels above those permitted by the Brazilian Ministry of Health. The most concerning results, requiring more detailed studies, were the Ni concentrations measured in carrot samples from Taubaté, where levels were five times higher than permitted by Brazilian regulations.

  4. Demonstration and Optimization of BNFL's Pulsed Jet Mixing and RFD Sampling Systems Using NCAW Simulant

    International Nuclear Information System (INIS)

    Bontha, J.R.; Golcar, G.R.; Hannigan, N.

    2000-01-01

    The BNFL Inc. flowsheet for the pretreatment and vitrification of the Hanford High Level Tank waste includes the use of several hundred Reverse Flow Diverters (RFDs) for sampling and transferring the radioactive slurries and Pulsed Jet mixers to homogenize or suspend the tank contents. The Pulsed Jet mixing and the RFD sampling devices represent very simple and efficient methods to mix and sample slurries, respectively, using compressed air to achieve the desired operation. The equipment has no moving parts, which makes it very suitable for mixing and sampling highly radioactive wastes. However, the effectiveness of the mixing and sampling systems is yet to be demonstrated when dealing with Hanford slurries, which exhibit a wide range of physical and rheological properties. This report describes the results of the testing of BNFL's Pulsed Jet mixing and RFD sampling systems in a 13-ft ID, 15-ft high dish-bottomed tank at Battelle's 336 building high-bay facility using AZ-101/102 simulants containing up to 36-wt% insoluble solids. The specific objectives of the work were to: demonstrate the effectiveness of the Pulsed Jet mixing system to thoroughly homogenize Hanford-type slurries over a range of solids loading; minimize/optimize air usage by changing the sequencing of the Pulsed Jet mixers or by altering cycle times; and demonstrate that the RFD sampler can obtain representative samples of the slurry up to the maximum RPP-WTP baseline concentration of 25-wt%.

  5. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    Science.gov (United States)

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level, that the changing rate of the potential market has no significant influence, and that the repeat-purchase rate has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when the parameter values are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  6. On the optimal sampling of bandpass measurement signals through data acquisition systems

    International Nuclear Information System (INIS)

    Angrisani, L; Vadursi, M

    2008-01-01

    Data acquisition systems (DAS) play a fundamental role in many modern measurement solutions. One of the parameters characterizing a DAS is its maximum sample rate, which imposes constraints on the signals that can be digitized free of aliasing. Bandpass sampling theory singles out separate ranges of admissible sample rates, which can be significantly lower than the carrier frequency. But how should the most convenient sample rate be chosen for the purpose at hand? The paper proposes a method for the automatic selection of the optimal sample rate in measurement applications involving bandpass signals; the effects of sample clock instability and limited resolution are also taken into account. The method allows the user to choose the location of the spectral replicas of the sampled signal in terms of normalized frequency, and the minimum guard band between replicas, thus introducing a feature that no DAS currently available on the market seems to offer. A number of experimental tests on bandpass digitally modulated signals are carried out to assess the concurrence of the obtained central frequency with the expected one
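
    The admissible-rate computation underlying any such selection method follows from the classical bandpass sampling condition; the sketch below enumerates the valid ranges for an example signal (the guard band, clock jitter and resolution effects that the paper also considers are omitted here).

```python
# Classical bandpass sampling condition: for a signal occupying [f_L, f_H],
# uniform sampling at f_s avoids aliasing iff
#     2*f_H / n  <=  f_s  <=  2*f_L / (n - 1)
# for some integer n in 1 .. floor(f_H / (f_H - f_L)). n = 1 is ordinary
# Nyquist sampling (f_s >= 2*f_H, unbounded above).
def valid_sample_rates(f_lo, f_hi):
    bw = f_hi - f_lo
    ranges = []
    for n in range(1, int(f_hi // bw) + 1):
        lo = 2.0 * f_hi / n
        hi = float("inf") if n == 1 else 2.0 * f_lo / (n - 1)
        if lo <= hi:
            ranges.append((n, lo, hi))
    return ranges

# Example: a 2 MHz-wide signal centred at 10 MHz (f_L = 9 MHz, f_H = 11 MHz).
for n, lo, hi in valid_sample_rates(9e6, 11e6):
    top = "inf" if hi == float("inf") else "%.2f MHz" % (hi / 1e6)
    print("n=%d: %.2f MHz <= fs <= %s" % (n, lo / 1e6, top))
```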

  7. Optimal sample to tracer ratio for isotope dilution mass spectrometry: the polyisotopic case

    International Nuclear Information System (INIS)

    Laszlo, G.; Ridder, P. de; Goldman, A.; Cappis, J.; Bievre, P. de

    1991-01-01

    The Isotope Dilution Mass Spectrometry (IDMS) measurement technique provides a means for determining the unknown amount of various isotopes of an element in a sample solution of known mass. The sample solution is mixed with an auxiliary solution, or tracer, containing a known amount of the same element with the same isotopes but of different relative abundances, or isotopic composition, and the induced change in the isotopic composition is measured by isotope mass spectrometry. The technique involves the measurement of the abundance ratio of each isotope to a common reference isotope in the sample solution, in the tracer solution, and in the blend of the sample and tracer solutions. These isotope ratio measurements, the known element amount in the tracer, and the known mass of the sample solution are used to calculate the unknown amount of one isotope in the sample solution, from which the unknown amount of element is determined. The purpose of this paper is to examine the optimization of the ratio of the estimated unknown amount of element in the sample solution to the known amount of element in the tracer solution, in order to minimize the relative uncertainty in the determination of the unknown amount of element
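
    For the two-isotope case, the optimization described above has a well-known structure: the unknown amount is proportional to (R_Y - R_B)/(R_B - R_X), where R_X, R_Y and R_B are the sample, tracer and blend isotope ratios, so the relative error in the measured blend ratio R_B is magnified by |d ln N / d ln R_B|, which is minimized at R_B = sqrt(R_X·R_Y). The short scan below verifies this numerically with arbitrary example ratios; the polyisotopic case treated in the paper is more involved.

```python
# Numerical check of the textbook two-isotope optimum (example ratios only):
# N ∝ (R_Y - R_B)/(R_B - R_X), so the error magnification of the blend-ratio
# measurement is M(R_B) = |R_B * d ln N / dR_B|, minimised at sqrt(R_X * R_Y).
import numpy as np

R_X, R_Y = 10.0, 0.1                    # sample and tracer ratios (example)
grid = np.linspace(R_Y * 1.05, R_X * 0.95, 5000)
M = np.abs(grid * (-1.0 / (R_Y - grid) - 1.0 / (grid - R_X)))
best = grid[np.argmin(M)]
print("optimal blend ratio: %.3f  (sqrt(R_X*R_Y) = %.3f)" % (best, np.sqrt(R_X * R_Y)))
```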

  8. An optimal replacement policy for a repairable system based on its repairman having vacations

    Energy Technology Data Exchange (ETDEWEB)

    Yuan Li [School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai 200092 (China); Xu Jian, E-mail: xujian@tongji.edu.c [School of Aerospace Engineering and Applied Mechanics, Tongji University, Shanghai 200092 (China)

    2011-07-15

    This paper studies a cold standby repairable system with two different components and one repairman who can take multiple vacations. If a component fails while the repairman is on vacation, the failed component waits for repair until the repairman is available. In the system, component 1 is assumed to have priority in use. After repair, component 1 follows a geometric-process repair, while component 2 is repaired as good as new after each failure. Under these assumptions, a replacement policy N based on the number of failures of component 1 is studied: the system is replaced when the number of failures of component 1 reaches N. The explicit expression of the expected cost rate is given, so that the optimal replacement policy N* can be determined. Finally, a numerical example is given to illustrate the theoretical results of the model.
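
    Stripped of the cold-standby and repairman-vacation structure, the core of such a policy can be sketched as a geometric-process replacement model: operating times shrink and repair times grow geometrically with each failure, and the renewal-reward cost rate C(N) is minimized over N. All parameter values below are illustrative, not from the paper.

```python
# Bare-bones sketch of choosing N in a geometric-process replacement model.
# Successive operating times have mean lam / a**(k-1) (a > 1, so they shrink),
# repair times have mean mu * b**(k-1) (b > 1, so they grow); replace the
# system at the N-th failure. Renewal-reward cost rate, minimised over N.
def cost_rate(N, lam=100.0, a=1.15, mu=5.0, b=1.1, c_rep=20.0, c_replace=600.0):
    up = sum(lam / a ** (k - 1) for k in range(1, N + 1))    # E[total uptime]
    down = sum(mu * b ** (k - 1) for k in range(1, N))       # E[total repair time]
    return (c_rep * down + c_replace) / (up + down)

best_N = min(range(1, 50), key=cost_rate)
print("optimal N* =", best_N, " cost rate = %.4f" % cost_rate(best_N))
```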

  9. Optimal Policies for Random and Periodic Garbage Collections with Tenuring Threshold

    Science.gov (United States)

    Zhao, Xufeng; Nakamura, Syouji; Nakagawa, Toshio

    It is an important problem to determine the tenuring threshold that meets the pause time goal of a generational garbage collector. From this viewpoint, this paper proposes two stochastic models based on the working schemes of a generational garbage collector: one in which collections occur according to a nonhomogeneous Poisson process (random collection) and one in which they occur at periodic times (periodic collection). Since the cost of a minor collection increases as the amount of surviving objects accumulates, a tenuring minor collection should be made at some tenuring threshold. Using the techniques of cumulative processes and reliability theory, expected cost rates with tenuring threshold are obtained, and optimal policies which minimize them are discussed analytically and computed numerically.

  10. Optimal Replacement Policy of Jet Engine Modules from the Aircarrier's Point of View

    Directory of Open Access Journals (Sweden)

    Anita Domitrović

    2008-01-01

    Full Text Available A mathematical model for optimising preventive maintenance of aircraft jet engines was developed by dynamic programming. Replacement planning for jet engine modules is regarded as a multistage decision process, while optimum module replacement is considered as a problem of equipment replacement. The goal of the optimal replacement policy of jet engine modules is a defined series of decisions resulting in minimum maintenance costs. The model was programmed in the C++ programming language and tested using CFM56 jet engine data. The costs of the optimum maintenance strategy were compared to the costs of simpler experience-based maintenance strategies. The results of the comparison justify further development and usage of the model in order to achieve significant cost reduction for airline carriers.

  11. Optimizing pricing and ordering strategies in a three-level supply chain under return policy

    Science.gov (United States)

    Noori-daryan, Mahsa; Taleizadeh, Ata Allah

    2018-03-01

    This paper develops an economic production quantity model in a three-echelon supply chain composed of a supplier, a manufacturer and a wholesaler under two scenarios. In the first scenario, we consider a return contract between the outside supplier and the supplier and also between the manufacturer and the wholesaler; in the second, the return policy between the manufacturer and the wholesaler is not applied. Here, it is assumed that shortage is permitted and demand is price-sensitive. The principal goal of the research is to maximize the total profit of the chain by optimizing the order quantity of the supplier and the selling prices of the manufacturer and the wholesaler. A Nash equilibrium approach is considered among the chain members. In the end, a numerical example is presented to clarify the applicability of the introduced model and to compare the profit of the chain under the two scenarios.

  12. Optimal trade-credit policy for perishable items deeming imperfect production and stock dependent demand

    Directory of Open Access Journals (Sweden)

    S. R. Singh

    2014-01-01

    Full Text Available Trade credit is a widely used economic mechanism by which a supplier encourages retailers to buy in larger quantities. In this article, a mathematical model with stock-dependent demand and deterioration is developed to investigate the retailer's optimal inventory policy under the scheme of permissible delay in payment. It is assumed that defective items are produced during the production process and that the delay period is progressive. The objective is to minimize the total average cost of the system. To illustrate the hypotheses of the proposed model, numerical examples and a sensitivity analysis are provided. Finally, the convexity of the cost functions and the effects of changing parameters are represented through graphs.

  13. [Sampling optimization for tropical invertebrates: an example using dung beetles (Coleoptera: Scarabaeinae) in Venezuela].

    Science.gov (United States)

    Ferrer-Paris, José Rafael; Sánchez-Mercado, Ada; Rodríguez, Jon Paul

    2013-03-01

    The development of efficient sampling protocols is an essential prerequisite to evaluate and identify priority conservation areas. There are few protocols for fauna inventory and monitoring at wide geographical scales in the tropics, where the complexity of communities and high biodiversity levels make the implementation of efficient protocols more difficult. We propose here a simple strategy to optimize the capture of dung beetles, applied to sampling with baited traps and generalizable to other sampling methods. We analyzed data from eight transects sampled between 2006 and 2008 with the aim of developing a uniform sampling design that allows confident estimation of species richness, abundance and composition at wide geographical scales. We examined four characteristics of any sampling design that affect the effectiveness of the sampling effort: the number of traps, sampling duration, type and proportion of bait, and spatial arrangement of the traps along transects. We used species accumulation curves, rank-abundance plots, indicator species analysis, and multivariate correlograms. We captured 40 337 individuals (115 species/morphospecies of 23 genera). Most species were attracted by both dung and carrion, but two thirds had greater relative abundance in traps baited with human dung. Different aspects of the sampling design influenced each diversity attribute in different ways. To obtain reliable richness estimates, the number of traps was the most important aspect. Accurate abundance estimates were obtained when the sampling period was increased, while the spatial arrangement of traps was determinant to capture the species composition pattern. An optimum sampling strategy for accurate estimates of richness, abundance and diversity should: (1) set 50-70 traps to maximize the number of species detected, (2) collect samples during 48-72 hours and set trap groups along the transect to reliably estimate species abundance, and (3) set traps in groups of at least 10 traps to capture the species composition pattern.

  14. Optimization of a sample processing protocol for recovery of Bacillus anthracis spores from soil

    Science.gov (United States)

    Silvestri, Erin E.; Feldhake, David; Griffin, Dale; Lisle, John T.; Nichols, Tonya L.; Shah, Sanjiv; Pemberton, A; Schaefer III, Frank W

    2016-01-01

    Following a release of Bacillus anthracis spores into the environment, there is a potential for lasting environmental contamination in soils. There is a need for detection protocols for B. anthracis in environmental matrices. However, identification of B. anthracis within a soil is a difficult task. Processing soil samples helps to remove debris, chemical components, and biological impurities that can interfere with microbiological detection. This study aimed to optimize a previously used indirect processing protocol, which included a series of washing and centrifugation steps. Optimization of the protocol included identifying an ideal extraction diluent, varying the number of wash steps, varying the initial centrifugation speed, and comparing sonication and shaking mechanisms. The optimized protocol was demonstrated at two laboratories in order to evaluate the recovery of spores from loamy and sandy soils. The new protocol demonstrated an improved limit of detection for loamy and sandy soils over the non-optimized protocol, with an approximate matrix limit of detection of 14 spores/g of soil. There were no significant differences overall between the two laboratories for either soil type, suggesting that the processing protocol will be robust enough to use at multiple laboratories while achieving comparable recoveries.

  15. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    International Nuclear Information System (INIS)

    Carpy, R; Picker, G; Amann, B; Ranebo, H; Vincent-Bonnieu, S; Minster, O; Winter, J; Dettmann, J; Castiglione, L; Höhler, R; Langevin, D

    2011-01-01

    At the end of 2009 and in early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With the use of this cell, different sample compositions of 'wet foams' have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development is performed in the frame of the breadboarding development activities of the Experiment Container FOAM-C for operation in the ISS Fluid Science Laboratory. The sample cell supports multiple observation methods such as Diffusing-Wave and Diffuse Transmission Spectrometry, Time Resolved Correlation Spectroscopy and microscope observation; all of these methods are applied in the cell with a relatively small experiment volume. These units will be on-orbit replaceable sets that will allow the processing of multiple sample compositions (in the range of >40).

  16. Racing Sampling Based Microimmune Optimization Approach Solving Constrained Expected Value Programming

    Directory of Open Access Journals (Sweden)

    Kai Yang

    2016-01-01

    Full Text Available This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the design of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals obtain large sampling sizes. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and shows potential for further applications.

  17. Rats track odour trails accurately using a multi-layered strategy with near-optimal sampling.

    Science.gov (United States)

    Khan, Adil Ghani; Sarangi, Manaswini; Bhalla, Upinder Singh

    2012-02-28

    Tracking odour trails is a crucial behaviour for many animals, often leading to food or mates, or away from danger. It is an excellent example of active sampling, where the animal itself controls how to sense the environment. Here we show that rats can track odour trails accurately with near-optimal sampling. We trained rats to follow odour trails drawn on paper spooled through a treadmill. By recording local field potentials (LFPs) from the olfactory bulb and sniffing rates, we find that sniffing, but not LFPs, differs between tracking and non-tracking conditions. Rats can track odours to within ~1 cm, and this accuracy is degraded when one nostril is closed. Moreover, they show path prediction on encountering a fork, wide 'casting' sweeps on encountering a gap, and detection of the reappearance of the trail within 1-2 sniffs. We suggest that rats use a multi-layered strategy, and achieve efficient sampling and high accuracy in this complex task.

  18. Optimal Ordering Policy of a Risk-Averse Retailer Subject to Inventory Inaccuracy

    Directory of Open Access Journals (Sweden)

    Lijing Zhu

    2013-01-01

    Full Text Available Inventory inaccuracy refers to the discrepancy between the actual inventory and the recorded inventory information. Inventory inaccuracy is prevalent in retail stores and may result in higher inventory levels or poor customer service. Earlier studies of inventory inaccuracy have traditionally assumed risk-neutral retailers whose objective is to maximize expected profit. We investigate a risk-averse retailer within a newsvendor framework, with the risk-aversion attitude measured by conditional value-at-risk (CVaR). We consider inventory inaccuracy stemming both from permanent shrinkage and from temporary shrinkage. Two scenarios for reducing inventory shrinkage are presented. In the first scenario, the retailer conducts physical inventory audits to identify the discrepancy. In the second scenario, the retailer deploys an automatic tracking technology, radio-frequency identification (RFID), to reduce inventory shrinkage. Under the CVaR criterion, we propose optimal policies for the two scenarios and show monotonicity between the retailer's ordering policy and his degree of risk aversion. A numerical analysis provides managerial insights for risk-averse retailers considering investing in RFID technology.
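
    The CVaR criterion itself is easy to demonstrate in a plain newsvendor setting (the paper's shrinkage and RFID features are omitted): for each candidate order quantity, the mean of the worst β-fraction of simulated profits is the objective. The prices and demand distribution below are assumptions for illustration.

```python
# Monte Carlo sketch of a CVaR newsvendor (inventory inaccuracy and RFID,
# central to the paper, are omitted). CVaR at level beta is the mean of the
# worst beta-fraction of profit outcomes; a risk-averse retailer maximises it.
import numpy as np

rng = np.random.default_rng(0)
price, cost, salvage = 10.0, 6.0, 2.0
beta = 0.2                                            # tail fraction for CVaR
demand = rng.gamma(shape=4.0, scale=25.0, size=20000) # assumed demand (mean 100)

def profit_samples(q):
    sold = np.minimum(demand, q)
    return price * sold + salvage * (q - sold) - cost * q

def cvar_profit(q):
    p = np.sort(profit_samples(q))
    return p[: int(beta * p.size)].mean()             # mean of the worst 20%

qs = range(10, 251)
q_cvar = max(qs, key=cvar_profit)
q_neutral = max(qs, key=lambda q: profit_samples(q).mean())
print("CVaR-optimal order:", q_cvar, " risk-neutral order:", q_neutral)
```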

  19. An Optimized Method for Quantification of Pathogenic Leptospira in Environmental Water Samples.

    Science.gov (United States)

    Riediger, Irina N; Hoffmaster, Alex R; Casanovas-Massana, Arnau; Biondo, Alexander W; Ko, Albert I; Stoddard, Robyn A

    2016-01-01

    Leptospirosis is a zoonotic disease usually acquired by contact with water contaminated with the urine of infected animals. However, few molecular methods have been used to monitor or quantify pathogenic Leptospira in environmental water samples. Here we optimized a DNA extraction method for the quantification of leptospires using a previously described Taqman-based qPCR method targeting lipL32, a gene unique to and highly conserved in pathogenic Leptospira. QIAamp DNA mini, MO BIO PowerWater DNA and PowerSoil DNA Isolation kits were evaluated to extract DNA from sewage, pond, river and ultrapure water samples spiked with leptospires. Performance of each kit varied with sample type. Sample processing methods were further evaluated and optimized using the PowerSoil DNA kit due to its performance on turbid water samples and its reproducibility. Centrifugation speeds, water volumes and the use of Escherichia coli as a carrier were compared to improve DNA recovery. All matrices showed strong linearity over a range of concentrations from 10^6 to 10^0 leptospires/mL, with low limits of detection. We propose an optimized method for the quantification of pathogenic Leptospira in environmental waters (river, pond and sewage), which consists of the concentration of 40 mL samples by centrifugation at 15,000×g for 20 minutes at 4°C, followed by DNA extraction with the PowerSoil DNA Isolation kit. Although the method described herein needs to be validated in environmental studies, it potentially provides the opportunity for effective, timely and sensitive assessment of environmental leptospiral burden.

  20. Fuel demand elasticities for energy and environmental policies: Indian sample survey evidence

    International Nuclear Information System (INIS)

    Gundimeda, Haripriya; Koehlin, Gunnar

    2008-01-01

    India has been running large-scale interventions in the energy sector over the last decades. Still, there is a dearth of reliable and readily available price and income elasticities of demand to base these on, especially for domestic use of traditional fuels. This study uses the linear approximate Almost Ideal Demand System (LA-AIDS) using micro data of more than 100,000 households sampled across India. The LA-AIDS model is expanded by specifying the intercept as a linear function of household characteristics. Marshallian and Hicksian price and expenditure elasticities of demand for four main fuels are estimated for both urban and rural areas by different income groups. These can be used to evaluate recent and current energy policies. The results can also be used for energy projections and carbon dioxide simulations given different growth rates for different segments of the Indian population. (author)

  1. Policy Analysis Screening System (PASS) demonstration: sample queries and terminal instructions

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-10-16

    This document contains the input and output for the Policy Analysis Screening System (PASS) demonstration. This demonstration is stored on a portable disk at the Environmental Impacts Division. Sample queries presented here include: (1) how to use PASS; (2) estimated 1995 energy consumption from the Mid-Range Energy-Forecasting System (MEFS) data base; (3) pollution projections from the Strategic Environmental Assessment System (SEAS) data base; (4) diesel auto regulations; (5) diesel auto health effects; (6) oil shale health and safety measures; (7) water pollution effects of SRC; (8) acid rainfall from the Energy Environmental Statistics (EES) data base; (9) 1990 EIA electric generation by fuel type; (10) sulfate concentrations by Federal region; (11) forecast of 1995 SO2 emissions in Region III; and (12) estimated electrical generating capacity in California to 1990. The file name for each query is included.

  2. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    International Nuclear Information System (INIS)

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. The FPF is expressed as a weighted sum of sample values obtained in the simulation-based reliability analysis; the computational effort required for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. The proposed weighted approach is also combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology
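
    The weighted-sum construction of an FPF can be reproduced in a few lines for a toy limit state: one Monte Carlo run at a reference design is re-used for any nearby design through density-ratio weights. The Gaussian design variable and the limit state g(x) = x below are assumptions for illustration, not the paper's examples.

```python
# Minimal sketch of the FPF as a weighted sum of samples from a single run.
# One simulation at reference design d0 serves every design d via the
# density ratio f(x; d) / f(x; d0). Toy limit state: failure when x < 0.
import math
import numpy as np

rng = np.random.default_rng(0)
sigma, d0 = 1.0, 3.0
x = rng.normal(d0, sigma, 200000)        # the single reliability analysis
fail = x < 0.0

def normal_pdf(v, mu, s):
    return np.exp(-0.5 * ((v - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

def p_fail(d):                           # failure probability function (FPF)
    w = normal_pdf(x, d, sigma) / normal_pdf(x, d0, sigma)
    return float(np.mean(w * fail))

for d in (2.0, 2.5, 3.0, 3.5):
    exact = 0.5 * math.erfc(d / (sigma * math.sqrt(2.0)))
    print("d=%.1f  FPF estimate=%.5f  exact=%.5f" % (d, p_fail(d), exact))
```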

  3. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    Energy Technology Data Exchange (ETDEWEB)

    Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it [Sapienza Università di Roma, Dipartimento di Ingegneria Civile, Edile e Ambientale (Italy); Alfonso, L. [Hydroinformatics Chair Group, UNESCO-IHE, Delft (Netherlands); Di Baldassarre, G. [Department of Earth Sciences, Program for Air, Water and Landscape Sciences, Uppsala University (Sweden)

    2016-06-08

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  4. Optimal cross-sectional sampling for river modelling with bridges: An information theory-based method

    International Nuclear Information System (INIS)

    Ridolfi, E.; Napolitano, F.; Alfonso, L.; Di Baldassarre, G.

    2016-01-01

    The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers’ cross-sectional spacing.

  5. Brachytherapy dose-volume histogram computations using optimized stratified sampling methods

    International Nuclear Information System (INIS)

    Karouzakis, K.; Lahanas, M.; Milickovic, N.; Giannouli, S.; Baltas, D.; Zamboglou, N.

    2002-01-01

    A stratified sampling method for the efficient repeated computation of dose-volume histograms (DVHs) in brachytherapy is presented, as used in anatomy-based brachytherapy optimization methods. The aim of the method is to reduce the number of sampling points required for the calculation of DVHs for the body and the PTV; quantities such as the conformity index COIN and COIN integrals are then derived from the DVHs. This is achieved by using partially uniform distributed sampling points, with a density in each region obtained from a survey of the gradients or the variance of the dose distribution in that region. The shape of the sampling regions is adapted to the patient anatomy and to the shape and size of the implant. The application of this method requires a single preprocessing step that takes only a few seconds. Ten clinical implants were used to study the appropriate number of sampling points, given a required accuracy for quantities such as cumulative DVHs, COIN indices and COIN integrals. We found that DVHs of very large tissue volumes surrounding the PTV, and also COIN distributions, can be obtained using 5-10 times fewer sampling points than with uniformly distributed points
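
    A schematic version of the idea, with a synthetic dose grid and per-block sampling densities driven by the local dose variability, is given below; the block size, point budget and variability measure are assumptions rather than the authors' choices.

```python
# Schematic gradient-guided stratified sampling for a cumulative DVH on a
# synthetic 2D dose grid: blocks with steeper dose variation receive more
# sampling points, and each sampled point carries a weight equal to the
# number of grid voxels it represents.
import numpy as np

rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
dose = 100.0 / (1.0 + 0.01 * ((x - 64) ** 2 + (y - 64) ** 2))  # synthetic grid

block, total_pts = 16, 2000
blocks, spread = [], []
for i in range(0, 128, block):
    for j in range(0, 128, block):
        sub = dose[i:i + block, j:j + block].ravel()
        blocks.append(sub)
        spread.append(sub.std() + 1e-6)          # sampling density driver

alloc = np.maximum(1, (total_pts * np.array(spread) / np.sum(spread)).astype(int))

samples, weights = [], []
for sub, n in zip(blocks, alloc):
    idx = rng.integers(0, sub.size, n)
    samples.extend(sub[idx])
    weights.extend([sub.size / n] * n)           # voxels represented per point

samples, weights = np.array(samples), np.array(weights)
edges = np.linspace(0.0, dose.max(), 51)
hist, _ = np.histogram(samples, bins=edges, weights=weights)
dvh = 1.0 - np.cumsum(hist) / np.sum(weights)    # fraction of volume >= dose
print("fraction of volume receiving >= %.1f Gy: %.3f" % (edges[10], dvh[9]))
```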

  6. A structural model for electricity prices with spikes: measurement of spike risk and optimal policies for hydropower plant operation

    International Nuclear Information System (INIS)

    Kanamura, Takashi

    2007-01-01

    This paper proposes a new model for electricity prices based on demand and supply, which we call a structural model. We show that the structural model can generate price spikes that fit the observed data better than those generated by other preceding models such as the jump diffusion model and the Box-Cox transformation model. We apply the structural model to obtain the optimal operation policy for a pumped-storage hydropower generator, and show that the structural model can provide more realistic optimal policies than the jump diffusion model. (author)

  7. A structural model for electricity prices with spikes: measurement of spike risk and optimal policies for hydropower plant operation

    Energy Technology Data Exchange (ETDEWEB)

    Kanamura, Takashi [Hitotsubashi University, Tokyo (Japan). Graduate School of International Corporate Strategy; Ohashi, Azuhiko [J-Power, Tokyo (Japan)

    2007-09-15

    This paper proposes a new model for electricity prices based on demand and supply, which we call a structural model. We show that the structural model can generate price spikes that fit the observed data better than those generated by other preceding models such as the jump diffusion model and the Box-Cox transformation model. We apply the structural model to obtain the optimal operation policy for a pumped-storage hydropower generator, and show that the structural model can provide more realistic optimal policies than the jump diffusion model. (author)

  8. Optimal policy of energy innovation in developing countries: Development of solar PV in Iran

    International Nuclear Information System (INIS)

    Shafiei, Ehsan; Saboohi, Yadollah; Ghofrani, Mohammad B.

    2009-01-01

    The purpose of this study is to apply managerial economics and methods of decision analysis to study the optimal pattern of innovation activities for the development of new energy technologies in developing countries. For this purpose, a model of energy research and development (R and D) planning is developed and then linked to a bottom-up energy-systems model. The set of interlinked models provides a comprehensive analytical tool for the assessment of energy technologies and innovation planning, taking into account the specific conditions of developing countries. The energy-systems model is used as a tool for the assessment and prioritization of new energy technologies. Based on the results of the technology assessment model, the optimal allocation of R and D resources to new energy technologies is estimated with the help of the R and D planning model. The R and D planning model is based on maximization of the total net present value of the resulting R and D benefits, taking into account the dynamics of technological progress, knowledge and experience spillovers from advanced economies, technology adoption, and R and D constraints. The application of the set of interlinked models is illustrated through an analysis of the development of solar PV in the Iranian electricity supply system, from which some important policy insights are drawn.

  9. Optimal sampling plan for clean development mechanism lighting projects with lamp population decay

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua; Zhang, Jiangfeng

    2014-01-01

    Highlights: • A metering cost minimisation model that accounts for lamp population decay is built to optimise the sampling plan of CDM lighting projects. • The model minimises the total metering cost and optimises the annual sample size during the crediting period. • The required 90/10 criterion sampling accuracy is satisfied for each CDM monitoring report. - Abstract: This paper proposes a metering cost minimisation model that minimises metering cost under the constraints of the sampling accuracy requirement for clean development mechanism (CDM) energy efficiency (EE) lighting projects. Small-scale (SSC) CDM EE lighting projects usually expect a crediting period of 10 years, over which the lighting population decays. The SSC CDM sampling guideline requires that the monitored key parameters for the quantification of carbon emission reductions satisfy a sampling accuracy of 90% confidence and 10% precision, known as the 90/10 criterion. For the existing registered CDM lighting projects, sample sizes are decided either by professional judgment or by rule of thumb, without any optimisation. Lighting samples are randomly selected and their energy consumption is monitored continuously by power meters. In this study, the sample size determination problem is formulated as a metering cost minimisation model incorporating the linear lighting decay model given by the CDM guideline AMS-II.J. The 90/10 criterion is formulated as a set of constraints to the metering cost minimisation problem. Optimal solutions to the problem minimise the metering cost whilst satisfying the 90/10 criterion for each reporting period. The proposed metering cost minimisation model is applicable to other CDM lighting projects with different population decay characteristics as well
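
    The interplay between the 90/10 criterion and lamp decay is easy to see in a back-of-envelope sketch: the criterion fixes the number of working meters needed in each reporting year, and a survival model tells how many must be installed up front. The coefficient of variation and the linear decay below are assumptions, not values from AMS-II.J.

```python
# Back-of-envelope sketch: the 90/10 criterion needs roughly
# n >= (z * CV / 0.10)^2 metered lamps in every reporting year, but metered
# lamps fail along with the population, so meters installed in year 0 must
# be inflated by 1 / s(t) to still satisfy the criterion in year t.
import math

z, precision, cv = 1.645, 0.10, 0.5          # 90% confidence, 10% precision
n_req = math.ceil((z * cv / precision) ** 2) # lamps needed per report (68 here)

def survival(t, life=10.0):                  # assumed linear decay over 'life'
    return max(0.0, 1.0 - t / life)

for t in range(1, 10):
    n0 = math.ceil(n_req / survival(t))
    print("year %d: %d working meters needed -> install %d at year 0"
          % (t, n_req, n0))
```

    The real model trades this up-front inflation against the cost of installing additional meter batches later in the crediting period, which is what makes it a minimisation problem rather than a single formula.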

  10. Optimizing 4-Dimensional Magnetic Resonance Imaging Data Sampling for Respiratory Motion Analysis of Pancreatic Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Stemkens, Bjorn, E-mail: b.stemkens@umcutrecht.nl [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Tijssen, Rob H.N. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands); Senneville, Baudouin D. de [Imaging Division, University Medical Center Utrecht, Utrecht (Netherlands); L' Institut de Mathématiques de Bordeaux, Unité Mixte de Recherche 5251, Centre National de la Recherche Scientifique/University of Bordeaux, Bordeaux (France); Heerkens, Hanne D.; Vulpen, Marco van; Lagendijk, Jan J.W.; Berg, Cornelis A.T. van den [Department of Radiotherapy, University Medical Center Utrecht, Utrecht (Netherlands)

    2015-03-01

    Purpose: To determine the optimum sampling strategy for retrospective reconstruction of 4-dimensional (4D) MR data for nonrigid motion characterization of tumor and organs at risk for radiation therapy purposes. Methods and Materials: For optimization, we compared 2 surrogate signals (external respiratory bellows and internal MRI navigators) and 2 MR sampling strategies (Cartesian and radial) in terms of image quality and robustness. Using the optimized protocol, 6 pancreatic cancer patients were scanned to calculate the 4D motion. Region of interest analysis was performed to characterize the respiratory-induced motion of the tumor and organs at risk simultaneously. Results: The MRI navigator was found to be a more reliable surrogate for pancreatic motion than the respiratory bellows signal. Radial sampling is most benign for undersampling artifacts and intraview motion. Motion characterization revealed interorgan and interpatient variation, as well as heterogeneity within the tumor. Conclusions: A robust 4D-MRI method, based on clinically available protocols, is presented and successfully applied to characterize the abdominal motion in a small number of pancreatic cancer patients.

  11. Optimal sampling in damage detection of flexural beams by continuous wavelet transform

    International Nuclear Information System (INIS)

    Basu, B; Broderick, B M; Montanari, L; Spagnoli, A

    2015-01-01

    Modern measurement techniques are improving in their capability to capture spatial displacement fields occurring in deformed structures with high precision and in a quasi-continuous manner. This in turn has made the use of vibration-based damage identification methods more effective and reliable for real applications. However, practical measurement and data processing issues still present barriers to the application of these methods in identifying several types of structural damage. This paper deals with spatial Continuous Wavelet Transform (CWT) damage identification methods in beam structures, with the aim of addressing the following key questions: (i) can the cost of damage detection be reduced by down-sampling? (ii) what is the minimum number of sampling intervals required for optimal damage detection? The first three free-vibration modes of a cantilever and a simply supported beam with an edge open crack are numerically simulated. A thorough parametric study is carried out, taking into account the key parameters governing the problem, including the level of noise, crack depth and location, and the mechanical and geometrical parameters of the beam. The results are employed to assess the optimal number of sampling intervals for effective damage detection. (paper)

  12. Optimizing two-dimensional renewable warranty policies for sensor embedded remanufactured products

    International Nuclear Information System (INIS)

    Alqahtani, Ammar; Gupta, Surendra M.

    2017-01-01

    Remanufactured products, in addition to being environment friendly, are popular with consumers because they can offer the latest technology at lower prices in comparison to brand new products. However, some consumers are hesitant to buy remanufactured products because they are skeptical about the quality of the remanufactured product and thus are unsure of the extent to which the product will render services when compared to a new product. A strategy that remanufacturers may employ to entice customers is to offer warranties on remanufactured products. To that end, this paper studies and scrutinizes the impact of offering renewing warranties on remanufactured products. Specifically, the paper suggests a methodology which simultaneously minimizes the cost incurred by the remanufacturers and maximizes the confidence of the consumers towards buying remanufactured products. Design/methodology/approach: This study uses discrete-event simulation to optimize the implementation of a two-dimensional renewing warranty policy for remanufactured products. The implementation is illustrated using a specific product recovery system called the Advanced Remanufacturing-To-Order (ARTO) system. The experiments used in the study were designed using Taguchi’s Orthogonal Arrays to represent the entire domain of the recovery system so as to observe the system behavior under various experimental conditions. In order to determine the optimum strategy offered by the remanufacturer, various warranty and preventive maintenance scenarios were analyzed using pairwise t-tests along with one-way analysis of variance (ANOVA) and Tukey pairwise comparison tests for every scenario. Findings: The proposed methodology is able to simultaneously minimize the cost incurred by the remanufacturer, optimize the warranty price and period, and optimize the preventive maintenance strategy, resulting in increased consumer confidence. Originality/value: This is the first study that evaluates in a

  13. Balancing Exploration, Uncertainty Representation and Computational Time in Many-Objective Reservoir Policy Optimization

    Science.gov (United States)

    Zatarain-Salazar, J.; Reed, P. M.; Quinn, J.; Giuliani, M.; Castelletti, A.

    2016-12-01

    As we confront the challenges of managing river basin systems with a large number of reservoirs and increasingly uncertain tradeoffs impacting their operations (due to, e.g., climate change, changing energy markets, population pressures, ecosystem services), evolutionary many-objective direct policy search (EMODPS) solution strategies will need to address the computational demands associated with simulating more uncertainties and therefore optimizing over increasingly noisy objective evaluations. Diagnostic assessments of state-of-the-art many-objective evolutionary algorithms (MOEAs) to support EMODPS have highlighted that search time (or number of function evaluations) and auto-adaptive search are key features for successful optimization. Furthermore, auto-adaptive MOEA search operators are themselves sensitive to having a sufficient number of function evaluations to learn successful strategies for exploring complex spaces and for escaping from local optima when stagnation is detected. Fortunately, recent parallel developments allow coordinated runs that enhance auto-adaptive algorithmic learning and can handle scalable and reliable search within limited wall-clock time, but at the expense of the total number of function evaluations. In this study, we analyze this tradeoff between parallel coordination and depth of search using different parallelization schemes of the Multi-Master Borg on a many-objective stochastic control problem. We also consider the tradeoff between better representing uncertainty in the stochastic optimization and simplifying this representation to shorten the function evaluation time and allow for greater search. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system, where multiple competing objectives for hydropower production, urban water supply, recreation and environmental flows need to be balanced. Our results provide guidance for balancing exploration, uncertainty, and computational demands when using the EMODPS framework.

  14. Optimizing two-dimensional renewable warranty policies for sensor embedded remanufactured products

    Directory of Open Access Journals (Sweden)

    Ammar Alqahtani

    2017-05-01

    Full Text Available Purpose: Remanufactured products, in addition to being environment friendly, are popular with consumers because they can offer the latest technology at lower prices in comparison to brand new products. However, some consumers are hesitant to buy remanufactured products because they are skeptical about the quality of the remanufactured product and thus are unsure of the extent to which the product will render services when compared to a new product. A strategy that remanufacturers may employ to entice customers is to offer warranties on remanufactured products. To that end, this paper studies and scrutinizes the impact of offering renewing warranties on remanufactured products. Specifically, the paper suggests a methodology which simultaneously minimizes the cost incurred by the remanufacturers and maximizes the confidence of the consumers towards buying remanufactured products. Design/methodology/approach: This study uses discrete-event simulation to optimize the implementation of a two-dimensional renewing warranty policy for remanufactured products. The implementation is illustrated using a specific product recovery system called the Advanced Remanufacturing-To-Order (ARTO) system. The experiments used in the study were designed using Taguchi’s Orthogonal Arrays to represent the entire domain of the recovery system so as to observe the system behavior under various experimental conditions. In order to determine the optimum strategy offered by the remanufacturer, various warranty and preventive maintenance scenarios were analyzed using pairwise t-tests along with one-way analysis of variance (ANOVA) and Tukey pairwise comparison tests for every scenario. Findings: The proposed methodology is able to simultaneously minimize the cost incurred by the remanufacturer, optimize the warranty price and period, and optimize the preventive maintenance strategy, resulting in increased consumer confidence. Originality/value: This is the first study that

  15. Optimizing two-dimensional renewable warranty policies for sensor embedded remanufactured products

    Energy Technology Data Exchange (ETDEWEB)

    Alqahtani, Ammar; Gupta, Surendra M.

    2017-07-01

    Remanufactured products, in addition to being environment friendly, are popular with consumers because they can offer the latest technology at lower prices in comparison to brand new products. However, some consumers are hesitant to buy remanufactured products because they are skeptical about the quality of the remanufactured product and thus are unsure of the extent to which the product will render services when compared to a new product. A strategy that remanufacturers may employ to entice customers is to offer warranties on remanufactured products. To that end, this paper studies and scrutinizes the impact of offering renewing warranties on remanufactured products. Specifically, the paper suggests a methodology which simultaneously minimizes the cost incurred by the remanufacturers and maximizes the confidence of the consumers towards buying remanufactured products. Design/methodology/approach: This study uses discrete-event simulation to optimize the implementation of a two-dimensional renewing warranty policy for remanufactured products. The implementation is illustrated using a specific product recovery system called the Advanced Remanufacturing-To-Order (ARTO) system. The experiments used in the study were designed using Taguchi’s Orthogonal Arrays to represent the entire domain of the recovery system so as to observe the system behavior under various experimental conditions. In order to determine the optimum strategy offered by the remanufacturer, various warranty and preventive maintenance scenarios were analyzed using pairwise t-tests along with one-way analysis of variance (ANOVA) and Tukey pairwise comparison tests for every scenario. Findings: The proposed methodology is able to simultaneously minimize the cost incurred by the remanufacturer, optimize the warranty price and period, and optimize the preventive maintenance strategy, resulting in increased consumer confidence. Originality/value: This is the first study that evaluates in a

  16. Optimization of sampling pattern and the design of Fourier ptychographic illuminator.

    Science.gov (United States)

    Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan

    2015-03-09

    Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artefacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure; conventional ptychography, in contrast, performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
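
    One simple way to realize such a non-uniform Fourier sampling pattern, sketched below purely for illustration (the ring counts and spacing are not the authors' design), is to place LEDs on concentric rings whose radii grow super-linearly, concentrating samples at the low spatial frequencies where most biological signal energy resides.

```python
# Toy construction of a non-uniform Fourier sampling pattern in the spirit
# of this abstract: concentric LED rings with super-linearly growing radii,
# so sampling is densest near the Fourier-space origin. All counts and
# spacings are illustrative assumptions.
import numpy as np

def nonuniform_led_pattern(n_rings=5, r_max=1.0, gamma=1.8, leds_scale=6):
    pts = [(0.0, 0.0)]                           # centre LED (DC term)
    for k in range(1, n_rings + 1):
        r = r_max * (k / n_rings) ** gamma       # rings denser near the centre
        n_leds = leds_scale * k                  # sparser coverage farther out
        for m in range(n_leds):
            a = 2 * np.pi * m / n_leds
            pts.append((r * np.cos(a), r * np.sin(a)))
    return np.array(pts)

pattern = nonuniform_led_pattern()
print("LEDs used:", len(pattern), "(vs 11 x 11 = 121 for a uniform grid)")
```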

  17. Optimal harvesting policy of a stochastic two-species competitive model with Lévy noise in a polluted environment

    Science.gov (United States)

    Zhao, Yu; Yuan, Sanling

    2017-07-01

    It is well known that sudden environmental shocks and toxicants can affect the population dynamics of fish species, yet a mechanistic understanding of how sudden environmental change and toxicants influence the optimal harvesting policy remains to be developed. This paper presents the optimal harvesting of a stochastic two-species competitive model with Lévy noise in a polluted environment, where the Lévy noise is used to describe sudden climate changes. Due to the discontinuity of the Lévy noise, the classical optimal harvesting methods based on the explicit solution of the corresponding Fokker-Planck equation are invalid. The object of this paper is to fill this gap and establish the optimal harvesting policy. By using aggregation and ergodic methods, approximations of the optimal harvesting effort and the maximum expected sustainable yield are obtained. Numerical simulations are carried out to support these theoretical results. Our analysis shows that the Lévy noise and the mean stress measure of toxicant in organisms may affect the optimal harvesting policy significantly.
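
    For readers who want to experiment with the qualitative behaviour, here is a minimal Monte-Carlo sketch of a two-species competitive model with multiplicative Brownian noise and compound-Poisson (Lévy-type) jumps, plus a brute-force search over harvesting efforts. It is not the paper's model or its aggregation/ergodic analysis, and every parameter value is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_yield(E, T=100.0, dt=0.01):
    """Time-averaged harvesting yield sum_i E_i * x_i(t) for a two-species
    competitive model with Brownian noise and compound-Poisson jumps
    standing in for sudden environmental shocks. Parameters are invented."""
    r = np.array([0.8, 0.6])                  # intrinsic growth rates
    A = np.array([[0.5, 0.2],
                  [0.3, 0.4]])                # competition coefficients
    sigma = np.array([0.1, 0.1])              # Brownian intensities
    lam, gamma = 0.2, -0.3                    # shock rate and relative shock size
    x = np.array([1.0, 1.0])
    harvested = 0.0
    for _ in range(int(T / dt)):              # Euler-Maruyama step
        drift = x * (r - A @ x - E)
        diffusion = sigma * x * rng.normal(size=2) * np.sqrt(dt)
        x = np.maximum(x + drift * dt + diffusion, 1e-8)
        if rng.random() < lam * dt:           # a jump: sudden population loss
            x = np.maximum(x * (1.0 + gamma), 1e-8)
        harvested += float(E @ x) * dt
    return harvested / T

# brute-force search over harvesting efforts for the maximum average yield
grid = [np.array([e1, e2]) for e1 in np.linspace(0, 0.4, 5)
        for e2 in np.linspace(0, 0.4, 5)]
print("best effort:", max(grid, key=mean_yield))
```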

  18. Foam generation and sample composition optimization for the FOAM-C experiment of the ISS

    Science.gov (United States)

    Carpy, R.; Picker, G.; Amann, B.; Ranebo, H.; Vincent-Bonnieu, S.; Minster, O.; Winter, J.; Dettmann, J.; Castiglione, L.; Höhler, R.; Langevin, D.

    2011-12-01

    In late 2009 and early 2010, a sealed cell for foam generation and observation was designed and manufactured at the Astrium Friedrichshafen facilities. With this cell, different sample compositions of "wet foams" have been optimized for mixtures of chemicals such as water, dodecanol, pluronic, aethoxisclerol, glycerol, CTAB, SDS, as well as glass beads. This development was performed in the frame of the breadboarding development activities for the Experiment Container FOAM-C, intended for operation in the Fluid Science Laboratory on the ISS. The sample cell supports multiple observation methods, such as diffusing-wave and diffuse-transmission spectrometry, time-resolved correlation spectroscopy [1] and microscope observation; all of these methods are applied in the cell despite its relatively small experiment volume.

  19. AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange

    International Nuclear Information System (INIS)

    Gledhill, John M.; Walters, Benjamin T.; Wand, A. Joshua

    2009-01-01

    The Cartesian-sampled three-dimensional HNCO experiment is inherently limited in time resolution and sensitivity for the real-time measurement of protein hydrogen exchange. This is largely overcome by the radial HNCO experiment, which employs optimized sampling angles. The significant practical limitation of three-dimensional data, namely the large data storage and processing requirements, is largely overcome by taking advantage of the inherent capability of the 2D-FT to process selected regions of frequency space without artifact or limitation. Decomposition of angle spectra into positive and negative ridge components provides increased resolution and allows statistical averaging of intensity and therefore increased precision. Strategies for averaging ridge cross sections within and between angle spectra are developed to allow further statistical approaches for increasing the precision of the measured hydrogen occupancy. Intensity artifacts potentially introduced by over-pulsing are effectively eliminated by use of the BEST approach.

  20. Population pharmacokinetic analysis of clopidogrel in healthy Jordanian subjects with emphasis on optimal sampling strategy.

    Science.gov (United States)

    Yousef, A M; Melhem, M; Xue, B; Arafat, T; Reynolds, D K; Van Wart, S A

    2013-05-01

    Clopidogrel is metabolized primarily into an inactive carboxyl metabolite (clopidogrel-IM) or, to a lesser extent, an active thiol metabolite. A population pharmacokinetic (PK) model was developed using NONMEM(®) to describe the time course of clopidogrel-IM in plasma and to design a sparse-sampling strategy to predict clopidogrel-IM exposures for use in characterizing anti-platelet activity. Serial blood samples from 76 healthy Jordanian subjects administered a single 75 mg oral dose of clopidogrel were collected and assayed for clopidogrel-IM using reversed-phase high-performance liquid chromatography. A two-compartment (2-CMT) PK model with first-order absorption and elimination plus an absorption lag time was evaluated, as well as a variation of this model designed to mimic enterohepatic recycling (EHC). Optimal PK sampling strategies (OSS) were determined using WinPOPT based upon collection of 3-12 post-dose samples. The two-compartment model with EHC provided the best fit and reduced bias in C(max) (median prediction error (PE%) of 9.58% versus 12.2%) relative to the basic two-compartment model; AUC(0-24) was similar for both models (median PE% = 1.39%). The OSS for fitting the two-compartment model with EHC required the collection of seven samples (0.25, 1, 2, 4, 5, 6 and 12 h). Reasonably unbiased and precise exposures were obtained when re-fitting this model to a reduced dataset considering only these sampling times. A two-compartment model considering EHC best characterized the time course of clopidogrel-IM in plasma. Use of the suggested OSS will allow for the collection of fewer PK samples when assessing clopidogrel-IM exposures. Copyright © 2013 John Wiley & Sons, Ltd.
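
    A minimal sketch of the underlying structural model (without the enterohepatic-recycling extension) can clarify how the sparse design is used; the PK parameter values below are invented placeholders, not the fitted NONMEM estimates, and only the sampling times 0.25, 1, 2, 4, 5, 6 and 12 h come from the abstract.

```python
import numpy as np

def two_compartment_oral(t, dose=75.0, ka=1.5, ke=0.3, k12=0.5, k21=0.4,
                         V=50.0, tlag=0.25, dt=0.001):
    """Forward-Euler simulation of a two-compartment model with first-order
    absorption and an absorption lag time (no enterohepatic recycling).
    Returns plasma concentration at the requested times t (hours)."""
    grid = np.arange(0.0, max(t) + dt, dt)
    gut, c1, c2 = dose, 0.0, 0.0          # amounts in gut/central/peripheral
    conc = np.zeros_like(grid)
    for i, ti in enumerate(grid):
        conc[i] = c1 / V                  # plasma concentration in central cmt
        absorb = ka * gut if ti >= tlag else 0.0
        gut += -absorb * dt
        c1 += (absorb - (ke + k12) * c1 + k21 * c2) * dt
        c2 += (k12 * c1 - k21 * c2) * dt
    return np.interp(t, grid, conc)

oss = np.array([0.25, 1, 2, 4, 5, 6, 12])  # the paper's seven-sample design
print(dict(zip(oss.tolist(), np.round(two_compartment_oral(oss), 4))))
```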

  1. Optimization of multi-channel neutron focusing guides for extreme sample environments

    International Nuclear Information System (INIS)

    Di Julio, D D; Lelièvre-Berna, E; Andersen, K H; Bentley, P M; Courtois, P

    2014-01-01

    In this work, we present and discuss simulation results for the design of multichannel neutron focusing guides for extreme sample environments. A single focusing guide consists of any number of supermirror-coated curved outer channels surrounding a central channel. Furthermore, a guide is separated into two sections in order to allow for extension into a sample environment. The performance of a guide is evaluated through a Monte-Carlo ray tracing simulation which is further coupled to an optimization algorithm in order to find the best possible guide for a given situation. A number of population-based algorithms have been investigated for this purpose. These include particle-swarm optimization, artificial bee colony, and differential evolution. The performance of each algorithm and preliminary results of the design of a multi-channel neutron focusing guide using these methods are described. We found that a three-channel focusing guide offered the best performance, with a gain factor of 2.4 compared to no focusing guide, for the design scenario investigated in this work.
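
    Of the population-based optimizers mentioned, differential evolution is the easiest to sketch. The toy code below implements a standard DE/rand/1/bin minimizer; in the guide-design setting the objective would wrap a Monte-Carlo ray-tracing run (returning, say, minus the neutron gain factor), which is replaced here by a cheap stand-in function.

```python
import numpy as np

rng = np.random.default_rng(1)

def differential_evolution(f, bounds, pop=20, gens=100, F=0.8, CR=0.9):
    """Minimal DE/rand/1/bin minimizer; bounds is an (n, 2) array of
    per-parameter lower/upper limits."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    X = lo + rng.random((pop, len(lo))) * (hi - lo)
    fX = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # rand/1 mutation
            cross = rng.random(len(lo)) < CR            # binomial crossover
            cross[rng.integers(len(lo))] = True         # force one mutated gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fX[i]:                             # greedy selection
                X[i], fX[i] = trial, ft
    return X[np.argmin(fX)], fX.min()

# toy stand-in objective: a shifted sphere instead of a ray-tracing run
best_x, best_f = differential_evolution(lambda x: np.sum((x - 0.3) ** 2),
                                        np.array([[0.0, 1.0]] * 4))
print(best_x, best_f)
```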

  2. Neutron activation analysis for the optimal sampling and extraction of extractable organohalogens in human hair

    International Nuclear Information System (INIS)

    Zhang, H.; Chai, Z.F.; Sun, H.B.; Xu, H.F.

    2005-01-01

    Many persistent organohalogen compounds such as DDTs and polychlorinated biphenyls have caused serious environmental pollution problems that now affect all life. Neutron activation analysis (NAA) is a very convenient method for halogen analysis and is also the only method currently available for simultaneously determining organic chlorine, bromine and iodine in one extract. Human hair is a convenient material for evaluating the burden of such compounds in the human body and can be easily collected from people over wide ranges of age, sex, residential area, eating habits and working environments. To effectively extract organohalogen compounds from human hair, in the present work the optimal Soxhlet extraction times for extractable organohalogens (EOX) and extractable persistent organohalogens (EPOX) from hair of different lengths were studied by NAA. The results indicated that the optimal Soxhlet extraction time for EOX and EPOX from human hair was 8-11 h, and the highest EOX and EPOX contents were observed in hair powder extract. The concentrations of both EOX and EPOX in different hair sections were in the order hair powder ≥ 2 mm > 5 mm, indicating that hair samples milled into powder or cut into very short sections gave not only a more homogeneous hair sample but also the best extraction efficiency.

  3. Bayesian assessment of the expected data impact on prediction confidence in optimal sampling design

    Science.gov (United States)

    Leube, P. C.; Geiges, A.; Nowak, W.

    2012-02-01

    Incorporating hydro(geo)logical data, such as head and tracer data, into stochastic models of (subsurface) flow and transport helps to reduce prediction uncertainty. Because of financial limitations for investigation campaigns, information needs toward modeling or prediction goals should be satisfied efficiently and rationally. Optimal design techniques find the best one among a set of investigation strategies. They optimize the expected impact of data on prediction confidence or related objectives prior to data collection. We introduce a new optimal design method, called PreDIA(gnosis) (Preposterior Data Impact Assessor). PreDIA derives the relevant probability distributions and measures of data utility within a fully Bayesian, generalized, flexible, and accurate framework. It extends the bootstrap filter (BF) and related frameworks to optimal design by marginalizing utility measures over the yet unknown data values. PreDIA is a strictly formal information-processing scheme free of linearizations. It works with arbitrary simulation tools, provides full flexibility concerning measurement types (linear, nonlinear, direct, indirect), allows for any desired task-driven formulations, and can account for various sources of uncertainty (e.g., heterogeneity, geostatistical assumptions, boundary conditions, measurement values, model structure uncertainty, a large class of model errors) via Bayesian geostatistics and model averaging. Existing methods fail to simultaneously provide these crucial advantages, which our method buys at relatively higher computational costs. We demonstrate the applicability and advantages of PreDIA over conventional linearized methods in a synthetic example of subsurface transport. In the example, we show that informative data are often invisible to linearized methods, which confuse zero correlation with statistical independence. Hence, PreDIA will often lead to substantially better sampling designs. Finally, we extend our example to specifically
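
    The core preposterior idea, marginalizing a utility measure over the still-unknown data values with bootstrap-filter weights, can be sketched in a few lines. The toy model, prior, and noise level below are all invented; the real PreDIA wraps arbitrary simulation tools and handles many more sources of uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)

def expected_posterior_var(design, theta, predict, forward, noise_sd, n_draws=200):
    """PreDIA-style preposterior analysis (simplified to one scalar datum):
    average the weighted posterior variance of the prediction over data sets
    simulated from the prior, i.e. marginalize over the unknown data values."""
    g = predict(theta)                    # prediction for each prior sample
    sim = forward(theta, design)          # simulated measurement per sample
    vals = []
    for i in rng.integers(len(theta), size=n_draws):
        y = sim[i] + rng.normal(0.0, noise_sd)            # sample i plays "reality"
        w = np.exp(-0.5 * ((y - sim) / noise_sd) ** 2)    # bootstrap-filter weights
        w /= w.sum()
        mean = np.sum(w * g)
        vals.append(np.sum(w * (g - mean) ** 2))          # posterior variance
    return float(np.mean(vals))

# toy example: pick the measurement location x that best informs theta**2
theta = rng.normal(1.0, 0.5, size=5000)                   # prior ensemble
forward = lambda th, x: np.sin(x) * th                    # hypothetical model
designs = np.linspace(0.1, 3.0, 15)
scores = [expected_posterior_var(x, theta, lambda t: t ** 2, forward, 0.2)
          for x in designs]
print("most informative design:", designs[int(np.argmin(scores))])
```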

  4. A note on optimal (s,S) and (R,nQ) policies under a stuttering Poisson demand process

    DEFF Research Database (Denmark)

    Larsen, Christian

    2015-01-01

    In this note, a new efficient algorithm is proposed to find an optimal (s, S) replenishment policy for inventory systems under continuous review where the demand follows a stuttering Poisson process (the compound element is geometrically distributed). We also derive three upper bounds
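
    The note's analytical algorithm is not reproduced here, but a simulation sketch shows how a given (s, S) pair can be scored under stuttering Poisson demand (Poisson epochs with geometric batch sizes); the cost parameters and the zero lead time are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)

def avg_cost(s, S, T=20_000.0, lam=1.0, p=0.4, h=1.0, K=32.0):
    """Long-run average cost per unit time of a continuous-review (s, S)
    policy under stuttering Poisson demand: Poisson(lam) demand epochs with
    geometric(p) batch sizes. Zero lead time, holding cost h, fixed order
    cost K (backorder costs would enter once a positive lead time is added)."""
    t, inv, cost = 0.0, S, 0.0
    while t < T:
        dt = rng.exponential(1.0 / lam)   # time until next demand epoch
        cost += h * inv * dt              # holding cost accrues between epochs
        t += dt
        inv -= rng.geometric(p)           # geometric batch size >= 1
        if inv <= s:                      # order up to S, delivered at once
            cost += K
            inv = S
    return cost / t

# coarse search over the policy parameters
grid = [(s, S) for s in range(0, 5) for S in range(s + 2, 30, 2)]
print("best (s, S):", min(grid, key=lambda z: avg_cost(*z)))
```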

  5. Optimization of a Pre-MEKC Separation SPE Procedure for Steroid Molecules in Human Urine Samples

    Directory of Open Access Journals (Sweden)

    Ilona Olędzka

    2013-11-01

    Many steroid hormones can be considered as potential biomarkers and their determination in body fluids can create opportunities for the rapid diagnosis of many diseases and disorders of the human body. Most existing methods for the determination of steroids are usually time- and labor-consuming and quite costly. Therefore, the aim of analytical laboratories is to develop new, relatively low-cost and rapid methodologies for their determination in biological samples. Because there is little literature data on concentrations of steroid hormones in urine samples, we have made attempts at the electrophoretic determination of these compounds. For this purpose, an extraction procedure for the optimized separation and simultaneous determination of seven steroid hormones in urine samples has been investigated. The isolation of analytes from biological samples was performed by liquid-liquid extraction (LLE) with dichloromethane and compared to solid-phase extraction (SPE) with C18 and hydrophilic-lipophilic balance (HLB) columns. To separate all the analytes, a micellar electrokinetic capillary chromatography (MEKC) technique was employed. For full separation of all the analytes, a running buffer (pH 9.2) composed of 10 mM sodium tetraborate decahydrate (borax), 50 mM sodium dodecyl sulfate (SDS), and 10% methanol was selected. The methodology developed in this work for the determination of steroid hormones meets all the requirements of analytical methods. The applicability of the method has been confirmed for the analysis of urine samples collected from volunteers—both men and women (students, amateur bodybuilders), using and not using steroid doping. The data obtained during this work can be successfully used for further research on the determination of steroid hormones in urine samples.

  6. A trivariate optimal replacement policy for a deteriorating system based on cumulative damage and inspections

    International Nuclear Information System (INIS)

    Tsai, Hsin-Nan; Sheu, Shey-Huei; Zhang, Zhe George

    2017-01-01

    In this article, we study a trivariate replacement model for a deteriorating system consisting of two units. Failures of unit 1 can be classified into two types: type I failures (minor) are fixed by a minimal repair, and type II failures (catastrophic) are removed by a replacement. Both types of failure can only be detected through inspection. Each type I failure of unit 1 results in a random amount of damage to unit 2, and the damages are cumulative. The probability of a type I or type II failure is assumed to depend on the number of failures since the last replacement. We formulate a replacement policy based on the number of type I failures, the occurrence of the first type II failure, and the amount of accumulated damage. The system is replaced under whichever of the following four conditions occurs first: preventively (a) at the Nth type I failure, or (b) when the total damage of unit 2 exceeds a pre-specified level Z (but is less than the failure level l); and correctively (c) at the first type II failure, or (d) when the total damage of unit 2 exceeds the failure level l, where Z and l represent the thresholds of total damage for preventive and corrective replacement of unit 2, respectively. Although a type I failure can be fixed by a minimal repair, the operating period is stochastically decreasing and the repair time stochastically increasing as time goes on. The minimal total expected long-run net cost per unit time of the system is derived, and a computational algorithm for determining the optimal policy is developed. A real-world application from the electric power industry is provided. Several past studies are shown to be special cases of our model. Finally, a numerical example is presented. - Highlights: • A trivariate replacement policy for a deteriorating system with two units is proposed. • A real-world application from the electric power industry is provided. • The

  7. Modeling Optimal Cutoffs for the Brazilian Household Food Insecurity Measurement Scale in a Nationwide Representative Sample.

    Science.gov (United States)

    Interlenghi, Gabriela S; Reichenheim, Michael E; Segall-Corrêa, Ana M; Pérez-Escamilla, Rafael; Moraes, Claudia L; Salles-Costa, Rosana

    2017-07-01

    Background: This is the second part of a model-based approach to examine the suitability of the current cutoffs applied to the raw score of the Brazilian Household Food Insecurity Measurement Scale [Escala Brasileira de Insegurança Alimentar (EBIA)]. The approach allows identification of homogeneous groups who correspond to severity levels of food insecurity (FI) and, by extension, discriminant cutoffs able to accurately distinguish these groups. Objective: This study aims to examine whether the model-based approach for identifying optimal cutoffs first implemented in a local sample is replicated in a countrywide representative sample. Methods: Data were derived from the Brazilian National Household Sample Survey of 2013 (n = 116,543 households). Latent class factor analysis (LCFA) models from 2 to 5 classes were applied to the scale's items to identify the number of underlying FI latent classes. Next, identification of optimal cutoffs on the overall raw score was ascertained from these identified classes. Analyses were conducted on the aggregate data and by macroregions. Finally, model-based classifications (latent classes and groupings identified thereafter) were contrasted with the traditionally used classification. Results: LCFA identified 4 homogeneous groups with a very high degree of class separation (entropy = 0.934-0.975). The following cutoffs were identified in the aggregate data: between 1 and 2 (1/2), 5 and 6 (5/6), and 10 and 11 (10/11) in households with children and/or adolescents; this pattern emerged consistently in all analyses. Conclusions: Nationwide findings corroborate previous local evidence that households with an overall score of 1 are more akin to those scoring negative on all items. These results may contribute to guide experts' and policymakers' decisions on the most appropriate EBIA cutoffs. © 2017 American Society for Nutrition.

  8. Optimization of China's generating portfolio and policy implications based on portfolio theory

    International Nuclear Information System (INIS)

    Zhu, Lei; Fan, Ying

    2010-01-01

    This paper applies portfolio theory to evaluate China's 2020 medium-term plans for generating technologies and its generating portfolio. With reference to the risk of relevant generating-cost streams, the paper discusses China's future development of efficient (Pareto optimal) generating portfolios that enhance energy security in different scenarios, including CO₂-emission-constrained scenarios. This research has found that the future adjustment of China's planned 2020 generating portfolio can reduce the portfolio's cost risk through appropriate diversification of generating technologies, but a price will be paid in the form of increased generating cost. In the CO₂-emission-constrained scenarios, the generating-cost risk of China's planned 2020 portfolio is even greater than that of the 2005 portfolio, but increasing the proportion of nuclear power in the generating portfolio can reduce the cost risk effectively. For renewable-power generation, because of relatively high generating costs, it will be necessary to obtain stronger policy support to promote renewable-power development.
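
    The mean-variance mechanics behind such generating portfolios can be illustrated with a crude Monte-Carlo frontier: sample random generating shares, compute portfolio cost and cost risk, and keep the Pareto-optimal set. All cost and covariance numbers below are invented placeholders, not China's data.

```python
import numpy as np

rng = np.random.default_rng(4)

# illustrative technologies; means are levelized generating costs and the
# (diagonal) covariance captures fuel-price-driven cost risk -- numbers invented
techs = ["coal", "gas", "hydro", "nuclear", "wind"]
mean_cost = np.array([0.040, 0.055, 0.045, 0.050, 0.070])   # $/kWh
cov = np.diag([0.008, 0.012, 0.006, 0.004, 0.010]) ** 2

portfolios = []
for _ in range(50_000):
    w = rng.dirichlet(np.ones(len(techs)))    # random generating shares, sum to 1
    portfolios.append((float(np.sqrt(w @ cov @ w)), float(w @ mean_cost), w))

# the efficient (Pareto-optimal) frontier: as risk rises, cost must fall
portfolios.sort(key=lambda t: t[0])
frontier, best = [], np.inf
for risk, cost, w in portfolios:
    if cost < best:
        frontier.append((round(risk, 4), round(cost, 4), np.round(w, 2)))
        best = cost
print(frontier[:5])
```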

  9. Barriers to optimizing investments in the built environment to reduce youth obesity: policy-maker perspectives.

    Science.gov (United States)

    Grant, Jill L; MacKay, Kathryn C; Manuel, Patricia M; McHugh, Tara-Leigh F

    2010-01-01

    To identify factors which limit the ability of local governments to make appropriate investments in the built environment to promote youth health and reduce obesity outcomes in Atlantic Canada. Policy-makers and professionals participated in focus groups to discuss the receptiveness of local governments to introducing health considerations into decision-making. Seven facilitated focus groups involved 44 participants from Atlantic Canada. Thematic discourse analysis of the meeting transcripts identified systemic barriers to creating a built environment that fosters health for youth aged 12-15 years. Participants consistently identified four categories of barriers. Financial barriers limit the capacities of local government to build, maintain and operate appropriate facilities. Legacy issues mean that communities inherit a built environment designed to facilitate car use, with inadequate zoning authority to control fast food outlets, and without the means to determine where schools are built or how they are used. Governance barriers derive from government departments with distinct and competing mandates, with a professional structure that privileges engineering, and with funding programs that encourage competition between municipalities. Cultural factors and values affect outcomes: people have adapted to car-oriented living; poverty reduces options for many families; parental fears limit children's mobility; youth receive limited priority in built environment investments. Participants indicated that health issues have increasing profile within local government, making this an opportune time to discuss strategies for optimizing investments in the built environment. The focus group method can foster mutual learning among professionals within government in ways that could advance health promotion.

  10. Optimized measurement of radium-226 concentration in liquid samples with radon-222 emanation

    International Nuclear Information System (INIS)

    Perrier, Frédéric; Aupiais, Jean; Girault, Frédéric; Przylibski, Tadeusz A.; Bouquerel, Hélène

    2016-01-01

    Measuring radium-226 concentration in liquid samples using radon-222 emanation remains competitive with techniques such as liquid scintillation, alpha or mass spectrometry. Indeed, we show that high precision can be obtained without air circulation, using an optimal air to liquid volume ratio and moderate heating. Cost-effective and efficient measurement of radon concentration is achieved by scintillation flasks and sufficiently long counting times for signal and background. More than 400 such measurements were performed, including 39 dilution experiments, a successful blind measurement of six reference test solutions, and more than 110 repeated measurements. Under optimal conditions, uncertainties reach 5% for an activity concentration of 100 mBq L⁻¹ and 10% for 10 mBq L⁻¹. While the theoretical detection limit predicted by Monte Carlo simulation is around 3 mBq L⁻¹, a conservative experimental estimate is rather 5 mBq L⁻¹, corresponding to 0.14 fg g⁻¹. The method was applied to 47 natural waters, 51 commercial waters, and 17 wine samples, illustrating that it could be an option for liquids that cannot be easily measured by other methods. Counting of scintillation flasks can be done in remote locations in the absence of an electricity supply, using a solar panel. Thus, this portable method, which has demonstrated sufficient accuracy for numerous natural liquids, could be useful in geological and environmental problems, with the additional benefit that it can be applied in isolated locations and in circumstances when samples cannot be transported. - Highlights: • Radium-226 concentration measured with optimized accumulation in a container. • Radon-222 in air measured precisely with scintillation flasks and long counting times. • Method tested by repetition tests, dilution experiments, and successful blind tests. • Estimated conservative detection limit without pre-concentration is 5 mBq L⁻¹. • Method is portable, cost

  11. Evaluation and optimization of DNA extraction and purification procedures for soil and sediment samples.

    Science.gov (United States)

    Miller, D N; Bryant, J E; Madsen, E L; Ghiorse, W C

    1999-11-01

    We compared and statistically evaluated the effectiveness of nine DNA extraction procedures by using frozen and dried samples of two silt loam soils and a silt loam wetland sediment with different organic matter contents. The effects of different chemical extractants (sodium dodecyl sulfate [SDS], chloroform, phenol, Chelex 100, and guanidinium isothiocyanate), different physical disruption methods (bead mill homogenization and freeze-thaw lysis), and lysozyme digestion were evaluated based on the yield and molecular size of the recovered DNA. Pairwise comparisons of the nine extraction procedures revealed that bead mill homogenization with SDS combined with either chloroform or phenol optimized both the amount of DNA extracted and the molecular size of the DNA (maximum size, 16 to 20 kb). Neither lysozyme digestion before SDS treatment, nor guanidinium isothiocyanate treatment, nor addition of Chelex 100 resin improved the DNA yields. Bead mill homogenization in a lysis mixture containing chloroform, SDS, NaCl, and phosphate-Tris buffer (pH 8) was found to be the best physical lysis technique when DNA yield and cell lysis efficiency were used as criteria. The bead mill homogenization conditions were also optimized for speed and duration with two different homogenizers. Recovery of high-molecular-weight DNA was greatest when we used lower speeds and shorter times (30 to 120 s). We evaluated four different DNA purification methods (silica-based DNA binding, agarose gel electrophoresis, ammonium acetate precipitation, and Sephadex G-200 gel filtration) for DNA recovery and removal of PCR inhibitors from crude extracts. Sephadex G-200 spin column purification was found to be the best method for removing PCR-inhibiting substances while minimizing DNA loss during purification. Our results indicate that for these types of samples, optimum DNA recovery requires brief, low-speed bead mill homogenization in the presence of a phosphate-buffered SDS-chloroform mixture, followed

  12. Optimization of a radiochemistry method for plutonium determination in biological samples

    International Nuclear Information System (INIS)

    Cerchetti, Maria L.; Arguelles, Maria G.

    2005-01-01

    Plutonium has been widely used for civilian and military activities. Nevertheless, the methods to monitor occupational exposure have not evolved at the same pace, and remain one of the major challenges for radiological protection practice. Due to the low acceptable incorporation limit, the usual determination is based on indirect methods in urine samples. Our main objective was to optimize a technique used to monitor internal contamination of workers exposed to plutonium isotopes. Different parameters were modified and their influence on the three steps of the method was evaluated; those which gave the highest yield and feasibility were selected. The method involves: 1) sample concentration (coprecipitation); 2) plutonium purification; and 3) source preparation by electrodeposition. In the coprecipitation phase, changes in temperature and carrier concentration were evaluated. In the ion-exchange separation, changes in the type of resin, the hydroxylamine elution solution (concentration and volume), column length and column recycling were evaluated. Finally, in the electrodeposition phase, we modified the electrolytic solution, pH and time. Measurements were made by liquid scintillation counting and alpha spectrometry (PIPS). We obtained the following yields: 88% for coprecipitation (at 60 °C with 2 ml of CaHPO₄), 71% for ion exchange (AG 1x8 Cl⁻ resin, 100-200 mesh, hydroxylamine 0.1 N in HCl 0.2 N as eluent, column length between 4.5 and 8 cm), and 93% for electrodeposition (H₂SO₄-NH₄OH, 100 minutes and pH from 2 to 2.8). The expanded uncertainty was 30% (95% confidence level), the decision threshold (Lc) was 0.102 Bq/L and the minimum detectable activity was 0.218 Bq/L of urine. We obtained an optimized method to screen workers exposed to plutonium. (author)

  14. A Jackson network model and threshold policy for joint optimization of energy and delay in multi-hop wireless networks

    KAUST Repository

    Xia, Li; Shihada, Basem

    2014-01-01

    This paper studies the joint optimization problem of energy and delay in a multi-hop wireless network. The optimization variables are the transmission rates, which are adjustable according to the packet queueing length in the buffer. The optimization goal is to minimize the energy consumption of energy-critical nodes and the packet transmission delay throughout the network. In this paper, we aim at understanding the well-known decentralized algorithms which are threshold based from a different research angle. By using a simplified network model, we show that we can adopt the semi-open Jackson network model and study this optimization problem in closed form. This simplified network model further allows us to establish some significant optimality properties. We prove that the system performance is monotonic with respect to (w.r.t.) the transmission rate. We also prove that the threshold-type policy is optimal, i.e., when the number of packets in the buffer is larger than a threshold, transmit with the maximal rate (power); otherwise, no transmission. With these optimality properties, we develop a heuristic algorithm to iteratively find the optimal threshold. Finally, we conduct some simulation experiments to demonstrate the main idea of this paper.
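
    The threshold structure can be reproduced on a much smaller toy problem than the semi-open Jackson network of the paper. The sketch below runs relative value iteration on a single-queue MDP in which transmitting costs a fixed activation energy plus a convex power term; the optimal policy it prints is monotone (threshold-like) in the queue length. All rates, costs and weights are invented.

```python
import numpy as np

def optimal_rate_policy(B=30, p=0.6, rates=(0, 1, 2), K=4.0,
                        w_e=1.0, w_d=0.5, iters=2000):
    """Relative value iteration for a single transmitting node: each slot a
    packet arrives w.p. p; choosing rate a serves min(a, n) packets at energy
    cost K*(a > 0) + a**2 (activation cost plus convex power). Per-slot cost
    is w_e * energy + w_d * queue length; we minimize the long-run average."""
    V = np.zeros(B + 1)
    Q = np.zeros((B + 1, len(rates)))
    for _ in range(iters):
        for n in range(B + 1):
            for j, a in enumerate(rates):
                m = n - min(a, n)                   # queue after service
                energy = K * (a > 0) + a * a
                nxt = p * V[min(m + 1, B)] + (1 - p) * V[m]
                Q[n, j] = w_e * energy + w_d * n + nxt
        V = Q.min(axis=1)
        V = V - V[0]                                # keep relative values bounded
    return [rates[j] for j in Q.argmin(axis=1)]

print(optimal_rate_policy()[:10])  # e.g. idle for short queues, max rate above a threshold
```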

  15. Policy Iteration for $H_\\infty $ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    Science.gov (United States)

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. They can also act as approximators in the framework of adaptive dynamic programming. In this paper, an approximate solution to the optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest L₂-gain and the associated optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.

  16. [Application of simulated annealing method and neural network on optimizing soil sampling schemes based on road distribution].

    Science.gov (United States)

    Han, Zong-wei; Huang, Wei; Luo, Yun; Zhang, Chun-di; Qi, Da-cheng

    2015-03-01

    Taking the soil organic matter in eastern Zhongxiang County, Hubei Province, as the research object, thirteen sample sets from different regions were arranged around the road network, and their spatial configuration was optimized by the simulated annealing approach. The topographic factors of these thirteen sample sets, including slope, plane curvature, profile curvature, topographic wetness index, stream power index and sediment transport index, were extracted by terrain analysis. Based on the optimization results, a multiple linear regression model with the topographic factors as independent variables was built. At the same time, a multilayer perceptron model based on the neural network approach was implemented, and the two models were then compared. The results revealed that the proposed approach is practicable for optimizing soil sampling schemes. The optimized configuration captured soil-landscape relationships accurately, and its accuracy was better than that of the original samples. This study designed a sampling configuration for studying soil attribute distributions by drawing on the spatial layout of the road network, historical samples, and digital elevation data, providing an effective means as well as a theoretical basis for determining sampling configurations and mapping the spatial distribution of soil organic matter at low cost and with high efficiency.
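
    As an illustration of the combinatorial step (not the study's actual criterion, which involved the road network and topographic factors), the sketch below uses simulated annealing to pick k sites from a hypothetical candidate pool so as to minimize the mean distance from a prediction grid to its nearest selected site.

```python
import numpy as np

rng = np.random.default_rng(5)

candidates = rng.random((300, 2))              # hypothetical road-accessible sites
grid = np.stack(np.meshgrid(np.linspace(0, 1, 25),
                            np.linspace(0, 1, 25)), -1).reshape(-1, 2)

def mean_gap(idx):
    """Mean distance from grid points to their nearest selected site
    (smaller = better spatial coverage of the study area)."""
    d = np.linalg.norm(grid[:, None, :] - candidates[idx][None, :, :], axis=2)
    return d.min(axis=1).mean()

def anneal(k=20, T=0.05, cool=0.998, steps=3000):
    idx = rng.choice(len(candidates), k, replace=False).tolist()
    cost = mean_gap(idx)
    for _ in range(steps):
        trial = idx.copy()
        pool = np.setdiff1d(np.arange(len(candidates)), idx)
        trial[rng.integers(k)] = int(rng.choice(pool))   # swap one selected site
        c = mean_gap(trial)
        if c < cost or rng.random() < np.exp((cost - c) / T):
            idx, cost = trial, c                          # accept (possibly worse) move
        T *= cool                                         # geometric cooling
    return idx, cost

sites, score = anneal()
print("mean nearest-site distance:", round(score, 4))
```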

  17. A simple optimized microwave digestion method for multielement monitoring in mussel samples

    International Nuclear Information System (INIS)

    Saavedra, Y.; Gonzalez, A.; Fernandez, P.; Blanco, J.

    2004-01-01

    With the aim of obtaining a set of common decomposition conditions allowing the determination of several metals in mussel tissue (Hg by cold vapour atomic absorption spectrometry; Cu and Zn by flame atomic absorption spectrometry; and Cd, Pb, Cr, Ni, As and Ag by electrothermal atomic absorption spectrometry), a factorial experiment was carried out using as factors the sample weight, digestion time and acid addition. The optimal conditions were found to be 0.5 g of freeze-dried, triturated sample with 6 ml of nitric acid, subjected to microwave heating for 20 min at 180 psi. This pre-treatment, using only one step and one oxidative reagent, was suitable for determining the nine metals studied with no subsequent handling of the digest. Atomic absorption determinations could be carried out using calibrations with aqueous standards, with matrix modifiers for cadmium, lead, chromium, arsenic and silver. The accuracy of the procedure was checked using oyster tissue (SRM 1566b) and mussel tissue (CRM 278R) certified reference materials. The method is now used routinely to monitor these metals in wild and cultivated mussels and has given good results.

  19. An Optimization Model for Expired Drug Recycling Logistics Networks and Government Subsidy Policy Design Based on Tri-level Programming.

    Science.gov (United States)

    Huang, Hui; Li, Yuyu; Huang, Bo; Pi, Xing

    2015-07-09

    In order to recycle and dispose of all people's expired drugs, the government should design a subsidy policy to stimulate users to return their expired drugs, and drug-stores should take the responsibility of recycling expired drugs, in other words, to be recycling stations. For this purpose it is necessary for the government to select the right recycling stations and treatment stations to optimize the expired drug recycling logistics network and minimize the total costs of recycling and disposal. This paper establishes a tri-level programming model to study how the government can optimize an expired drug recycling logistics network and the appropriate subsidy policies. Furthermore, a Hybrid Genetic Simulated Annealing Algorithm (HGSAA) is proposed to search for the optimal solution of the model. An experiment is discussed to illustrate the good quality of the recycling logistics network and government subsidies obtained by the HGSAA. The HGSAA is proven to have the ability to converge on the global optimal solution, and to act as an effective algorithm for solving the optimization problem of expired drug recycling logistics network and government subsidies.

  20. Examination of energy price policies in Iran for optimal configuration of CHP and CCHP systems based on particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Tichi, S.G.; Ardehali, M.M.; Nazari, M.E.

    2010-01-01

    The current subsidized energy prices in Iran are proposed to be gradually eliminated over the next few years. The objective of this study is to examine the effects of current and future energy price policies on the optimal configuration of combined heat and power (CHP) and combined cooling, heating, and power (CCHP) systems in Iran, under the conditions of selling and not selling electricity to the utility. The particle swarm optimization algorithm is used to minimize the cost function for owning and operating various CHP and CCHP systems in an industrial dairy unit. The results show that with the estimated future unsubsidized utility prices, CHP and CCHP systems operating with a reciprocating-engine prime mover have total costs of $5.6×10⁶ and $2.9×10⁶ over a useful life of 20 years, respectively, while both systems have the same capital recovery period of 1.3 years. However, for the same prime mover and with the current subsidized prices, CHP and CCHP systems require 4.9 and 5.2 years for capital recovery, respectively. It is concluded that the current energy price policies hinder the adoption of CHP and CCHP systems, and that the policy of selling electricity to the utility, as well as the elimination of subsidies, is a prerequisite to the successful widespread utilization of such systems.
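
    A bare-bones particle swarm minimizer is easy to write down; in the paper's setting the objective would be the life-cycle cost of a candidate CHP/CCHP configuration under a given price scenario, for which a cheap quadratic stand-in is used here.

```python
import numpy as np

rng = np.random.default_rng(6)

def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm minimizer; bounds is an (n, 2) array of
    per-variable lower/upper limits."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = lo + rng.random((n, len(lo))) * (hi - lo)   # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pval)]                      # global best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[np.argmin(pval)]
    return g, pval.min()

# toy stand-in for a life-cycle cost surface over two sizing variables
g, cost = pso(lambda z: (z[0] - 2) ** 2 + (z[1] - 1) ** 2 + 3,
              np.array([[0.0, 5.0], [0.0, 5.0]]))
print(g, cost)
```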

  1. Optimized Analytical Method to Determine Gallic and Picric Acids in Pyrotechnic Samples by Using HPLC/UV (Reverse Phase)

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the optimization and development of a chromatographic method for the determination of gallic and picric acids in pyrotechnic samples is presented. To achieve this, both the analytical conditions for HPLC with diode detection and the extraction step for a selected sample were studied. (Author)

  3. Optimal Policy of Cross-Layer Design for Channel Access and Transmission Rate Adaptation in Cognitive Radio Networks

    Science.gov (United States)

    He, Hao; Wang, Jun; Zhu, Jiang; Li, Shaoqian

    2010-12-01

    In this paper, we investigate the cross-layer design of joint channel access and transmission rate adaptation in CR networks with multiple channels for both centralized and decentralized cases. Our target is to maximize the throughput of the CR network under a transmission power constraint while taking spectrum sensing errors into account. In the centralized case, this problem is formulated as a special constrained Markov decision process (CMDP), which can be solved by the standard linear programming (LP) method. As the complexity of finding the optimal policy by LP increases exponentially with the size of the action space and state space, we further apply action set reduction and state aggregation to reduce the complexity without loss of optimality. Meanwhile, for the convenience of implementation, we also consider the pure policy design and analyze the corresponding characteristics. In the decentralized case, where only local information is available and there is no coordination among the CR users, we prove the existence of the constrained Nash equilibrium and obtain the optimal decentralized policy. Finally, in the case that the traffic load parameters of the licensed users are unknown to the CR users, we propose two methods to estimate the parameters for two different cases. Numerical results validate the theoretical analysis.
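
    The LP formulation referred to is the standard occupation-measure linear program for average-reward constrained MDPs. The sketch below solves a made-up two-state, two-action channel-access CMDP with an average-power constraint; note that the resulting optimal policy may be randomized in one state, which is characteristic of CMDPs.

```python
import numpy as np
from scipy.optimize import linprog

# tiny CMDP: 2 channel states x 2 actions (0 = stay idle, 1 = access channel)
S, A = 2, 2
P = np.zeros((S, A, S))            # P[s, a, s'] transition probabilities (made up)
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.7, 0.3]
P[1, 0] = [0.2, 0.8]; P[1, 1] = [0.4, 0.6]
R = np.array([[0.0, 0.8],          # expected throughput of (state, action)
              [0.0, 0.3]])
C = np.array([[0.0, 1.0],          # transmission power cost of (state, action)
              [0.0, 1.0]])
power_budget = 0.5                 # average-power constraint

# occupation-measure LP: maximize sum x(s,a) R(s,a) over x >= 0 subject to
# flow balance, normalization, and the average-cost constraint
A_eq = np.zeros((S + 1, S * A)); b_eq = np.zeros(S + 1)
for sp in range(S):
    for s in range(S):
        for a in range(A):
            A_eq[sp, s * A + a] = float(sp == s) - P[s, a, sp]
A_eq[S, :] = 1.0; b_eq[S] = 1.0    # occupation measure sums to one
res = linprog(c=-R.ravel(), A_ub=[C.ravel()], b_ub=[power_budget],
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (S * A))
x = res.x.reshape(S, A)
policy = x / x.sum(axis=1, keepdims=True)   # possibly randomized in one state
print("throughput:", -res.fun)
print("policy:\n", policy)
```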

  5. Optimization of Sample Preparation and Instrumental Parameters for the Rapid Analysis of Drugs of Abuse in Hair samples by MALDI-MS/MS Imaging

    Science.gov (United States)

    Flinders, Bryn; Beasley, Emma; Verlaan, Ricky M.; Cuypers, Eva; Francese, Simona; Bassindale, Tom; Clench, Malcolm R.; Heeren, Ron M. A.

    2017-08-01

    Matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) has been employed to rapidly screen longitudinally sectioned drug user hair samples for cocaine and its metabolites using continuous raster imaging. Optimization of the spatial resolution and raster speed was performed on intact cocaine-contaminated hair samples. The optimized settings (100 × 150 μm at 0.24 mm/s) were subsequently used to examine longitudinally sectioned drug user hair samples. The MALDI-MS/MS images showed the distribution of the most abundant cocaine product ion at m/z 182. Using the optimized settings, multiple hair samples obtained from two users were analyzed in approximately 3 h: six times faster than the standard spot-to-spot acquisition method. Quantitation was achieved using longitudinally sectioned control hair samples sprayed with a cocaine dilution series. A multiple reaction monitoring (MRM) experiment was also performed using the 'dynamic pixel' imaging method to screen for cocaine and a range of its metabolites, in order to differentiate between contaminated hairs and drug users. Cocaine, benzoylecgonine, and cocaethylene were detectable, in agreement with analyses carried out using the standard LC-MS/MS method.

  6. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
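
    The published method combines progressive sampling with Bayesian optimization; the sketch below shows only the progressive-sampling half, in the spirit of successive halving: all candidate configurations are scored on a small sample, the worse half is dropped, and the sample size doubles each round. The error surface is a synthetic stand-in, and candidate proposal by Bayesian optimization is omitted.

```python
import numpy as np

rng = np.random.default_rng(7)

def progressive_selection(configs, train_eval, n0=100):
    """Score all candidate configurations on a small data sample, keep the
    better half, double the sample size, and repeat until one survives, so
    full-data training effort is spent on few candidates."""
    n, alive = n0, list(configs)
    while len(alive) > 1:
        scores = [train_eval(cfg, n) for cfg in alive]
        order = np.argsort(scores)                    # lower error is better
        alive = [alive[i] for i in order[: max(1, len(alive) // 2)]]
        n *= 2                                        # progressively larger sample
    return alive[0]

# synthetic error surface: error falls with sample size, differently per config
def train_eval(cfg, n):
    bias = (cfg - 0.3) ** 2                           # pretend 0.3 is ideal
    return bias + 1.0 / np.sqrt(n) + rng.normal(0, 0.01)

best = progressive_selection(list(np.linspace(0, 1, 16)), train_eval)
print("selected config:", round(best, 3))
```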

  7. Identifying optimal postmarket surveillance strategies for medical and surgical devices: implications for policy, practice and research.

    Science.gov (United States)

    Gagliardi, Anna R; Umoquit, Muriah; Lehoux, Pascale; Ross, Sue; Ducey, Ariel; Urbach, David R

    2013-03-01

    Non-drug technologies offer many benefits, but have been associated with adverse events, prompting calls for improved postmarket surveillance. There is little empirical research to guide the development of such a system. The purpose of this study was to identify optimal postmarket surveillance strategies for medical and surgical devices. Qualitative methods were used for sampling, data collection and analysis. Stakeholders from Canada and the USA representing different roles and perspectives were first interviewed to identify examples and characteristics of different surveillance strategies. These stakeholders and others they recommended were then assembled at a 1-day nominal group meeting to discuss and prioritise the components of a postmarket device surveillance system, and research needed to achieve such a system. Consultations were held with 37 participants, and 47 participants attended the 1-day meeting. They recommended a multicomponent system including reporting by facilities, clinicians and patients, supported with some external surveillance for validation and real-time trials for high-risk devices. Many considerations were identified that constitute desirable characteristics of, and means by which to implement such a system. An overarching network was envisioned to broker linkages, establish a shared minimum dataset, and support communication and decision making. Numerous research questions were identified, which could be pursued in tandem with phased implementation of the system. These findings provide unique guidance for establishing a device safety network that is based on existing initiatives, and could be expanded and evaluated in a prospective, phased fashion as it was developed.

  8. Testing of Alignment Parameters for Ancient Samples: Evaluating and Optimizing Mapping Parameters for Ancient Samples Using the TAPAS Tool

    Directory of Open Access Journals (Sweden)

    Ulrike H. Taron

    2018-03-01

    High-throughput sequence data retrieved from ancient or other degraded samples have led to unprecedented insights into the evolutionary history of many species, but the analysis of such sequences also poses specific computational challenges. The most commonly used approach involves mapping sequence reads to a reference genome. However, this process becomes increasingly challenging with an elevated genetic distance between target and reference or with the presence of contaminant sequences with high sequence similarity to the target species. The evaluation and testing of mapping efficiency and stringency are thus paramount for the reliable identification and analysis of ancient sequences. In this paper, we present 'TAPAS' (Testing of Alignment Parameters for Ancient Samples), a computational tool that enables the systematic testing of mapping tools for ancient data by simulating sequence data reflecting the properties of an ancient dataset and performing test runs using the mapping software and parameter settings of interest. We showcase TAPAS by using it to assess and improve the mapping strategy for a degraded sample from a banded linsang (Prionodon linsang), for which no closely related reference is currently available. This enables a 1.8-fold increase in the number of mapped reads without sacrificing mapping specificity. The increase of mapped reads effectively reduces the need for additional sequencing, thus making more economical use of time, resources, and sample material.

  9. The Optimal Replenishment Policy under Trade Credit Financing with Ramp Type Demand and Demand Dependent Production Rate

    Directory of Open Access Journals (Sweden)

    Juanjuan Qin

    2014-01-01

    This paper investigates the optimal replenishment policy for a retailer facing ramp-type demand and a demand-dependent production rate under trade credit financing, which has not been reported in the literature. First, two inventory models are developed for this situation. Second, algorithms are given to optimize the replenishment cycle time and the order quantity for the retailer. Finally, numerical examples are carried out to illustrate the optimal solutions and a sensitivity analysis is performed. The results show that if the production rate is small, the retailer will lower the ordering frequency to cut down the ordering cost; if the production rate is high, the demand-dependent production rate has no effect on the optimal decisions. When the trade credit period is shorter than the growth-stage time, the retailer will shorten the replenishment cycle; when it is larger than the breakpoint of the demand, within the maturity stage of the products, the trade credit has no effect on the optimal order cycle and the optimal order quantity.

  10. Designing evaluation studies to optimally inform policy: what factors do policy-makers in China consider when making resource allocation decisions on healthcare worker training programmes?

    Science.gov (United States)

    Wu, Shishi; Legido-Quigley, Helena; Spencer, Julia; Coker, Richard James; Khan, Mishal Sameer

    2018-02-23

    In light of the gap in evidence to inform future resource allocation decisions about healthcare provider (HCP) training in low- and middle-income countries (LMICs), and the considerable donor investments being made towards training interventions, evaluation studies that are optimally designed to inform local policy-makers are needed. The aim of our study is to understand what features of HCP training evaluation studies are important for decision-making by policy-makers in LMICs. We investigate the extent to which evaluations based on the widely used Kirkpatrick model - focusing on direct outcomes of training, namely reaction of trainees, learning, behaviour change and improvements in programmatic health indicators - align with policy-makers' evidence needs for resource allocation decisions. We use China as a case study where resource allocation decisions about potential scale-up (using domestic funding) are being made about an externally funded pilot HCP training programme. Qualitative data were collected from high-level officials involved in resource allocation at the national and provincial level in China through ten face-to-face, in-depth interviews and two focus group discussions consisting of ten participants each. Data were analysed manually using an interpretive thematic analysis approach. Our study indicates that Chinese officials not only consider information about the direct outcomes of a training programme, as captured in the Kirkpatrick model, but also need information on the resources required to implement the training, the wider or indirect impacts of training, and the sustainability and scalability to other settings within the country. In addition to considering findings presented in evaluation studies, we found that Chinese policy-makers pay close attention to whether the evaluations were robust and to the composition of the evaluation team. Our qualitative study indicates that training programme evaluations that focus narrowly on direct training

  11. Determination of total concentration of chemically labeled metabolites as a means of metabolome sample normalization and sample loading optimization in mass spectrometry-based metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2012-12-18

    For mass spectrometry (MS)-based metabolomics, it is important to use the same amount of starting materials from each sample to compare the metabolome changes in two or more comparative samples. Unfortunately, for biological samples, the total amount or concentration of metabolites is difficult to determine. In this work, we report a general approach of determining the total concentration of metabolites based on the use of chemical labeling to attach a UV absorbent to the metabolites to be analyzed, followed by rapid step-gradient liquid chromatography (LC) UV detection of the labeled metabolites. It is shown that quantification of the total labeled analytes in a biological sample facilitates the preparation of an appropriate amount of starting materials for MS analysis as well as the optimization of the sample loading amount to a mass spectrometer for achieving optimal detectability. As an example, dansylation chemistry was used to label the amine- and phenol-containing metabolites in human urine samples. LC-UV quantification of the labeled metabolites could be optimally performed at the detection wavelength of 338 nm. A calibration curve established from the analysis of a mixture of 17 labeled amino acid standards was found to have the same slope as that from the analysis of the labeled urinary metabolites, suggesting that the labeled amino acid standard calibration curve could be used to determine the total concentration of the labeled urinary metabolites. A workflow incorporating this LC-UV metabolite quantification strategy was then developed in which all individual urine samples were first labeled with (12)C-dansylation and the concentration of each sample was determined by LC-UV. The volumes of urine samples taken for producing the pooled urine standard were adjusted to ensure an equal amount of labeled urine metabolites from each sample was used for the pooling. The pooled urine standard was then labeled with (13)C-dansylation. Equal amounts of the (12)C

  12. Near-optimal alternative generation using modified hit-and-run sampling for non-linear, non-convex problems

    Science.gov (United States)

    Rosenberg, D. E.; Alafifi, A.

    2016-12-01

    Water resources systems analysis often focuses on finding optimal solutions. Yet an optimal solution is optimal only for the modelled issues, and managers often seek near-optimal alternatives that address un-modelled objectives, preferences, limits, uncertainties, and other issues. Early on, Modelling to Generate Alternatives (MGA) formalized the near-optimal region as the region comprising the original problem constraints plus a new constraint that allowed performance within a specified tolerance of the optimal objective function value. MGA identified a few maximally different alternatives from the near-optimal region. Subsequent work applied Markov chain Monte Carlo (MCMC) sampling to generate a larger number of alternatives that span the near-optimal region of linear problems, or selected portions of it for non-linear problems. We extend the MCMC Hit-And-Run method to generate alternatives that span the full extent of the near-optimal region for non-linear, non-convex problems. First, start at a feasible hit point within the near-optimal region, then run a random distance in a random direction to a new hit point. Next, repeat until the desired number of alternatives has been generated. The key step at each iterate is to run a random distance along the line in the specified direction to a new hit point. If linear equality constraints exist, we construct an orthogonal basis and use a null-space transformation to confine hits and runs to a lower-dimensional space. Linear inequality constraints define the convex bounds on the line that runs through the current hit point in the specified direction. We then use slice sampling to identify a new hit point along the line within bounds defined by the non-linear inequality constraints. This technique is computationally efficient compared to prior near-optimal alternative generation techniques such as MGA, MCMC Metropolis-Hastings, evolutionary, or firefly algorithms because search at each iteration is confined to the hit line, the algorithm can move in one
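
    For linear constraints the hit-and-run step described above takes only a few lines: draw a random direction, intersect it with the constraint rows to get the feasible chord, and jump to a uniform point on that chord. The sketch below samples a toy near-optimal region (a box plus a tolerance constraint on the objective); the slice-sampling extension for non-linear constraints is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)

def hit_and_run(A, b, x0, n_samples=1000):
    """Sample points from the polytope {x : A x <= b} (here, the original
    constraints plus the near-optimal tolerance row) starting from a feasible
    interior point x0. Each iterate picks a random direction, bounds the
    feasible chord through the current point, and jumps to a uniform point."""
    x, out = x0.astype(float), []
    for _ in range(n_samples):
        d = rng.normal(size=len(x))
        d /= np.linalg.norm(d)                   # random direction on the sphere
        ad, slack = A @ d, b - A @ x             # per-row: a_i.(x + t d) <= b_i
        t_hi = np.min(slack[ad > 1e-12] / ad[ad > 1e-12])
        t_lo = np.max(slack[ad < -1e-12] / ad[ad < -1e-12])
        x = x + rng.uniform(t_lo, t_hi) * d      # uniform point on the chord
        out.append(x.copy())
    return np.array(out)

# toy LP: maximize x + y on the unit box; optimum is 2, and the near-optimal
# region keeps alternatives within 90% of it (x + y >= 1.8)
A = np.array([[1., 0.], [0., 1.], [-1., 0.], [0., -1.], [-1., -1.]])
b = np.array([1., 1., 0., 0., -1.8])
alternatives = hit_and_run(A, b, np.array([0.92, 0.92]))
print(alternatives.mean(axis=0))
```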

  13. Emergency Diesel Generation System Surveillance Test Policy Optimization Through Genetic Algorithms Using Non-Periodic Intervention Frequencies and Seasonal Constraints

    International Nuclear Information System (INIS)

    Lapa, Celso M.F.; Pereira, Claudio M.N.A.; Frutuoso e Melo, P.F.

    2002-01-01

    Nuclear standby safety systems must frequently be submitted to periodic surveillance tests. The main reason is to detect, as soon as possible, the occurrence of unrevealed failure states. Such interventions may, however, affect the overall system availability due to component outages. Besides, as the components are demanded, deterioration by aging may occur, again penalizing the system performance. For these reasons, planning a good surveillance test policy implies a trade-off between gains and overheads due to the surveillance test interventions. In order to maximize the system's average availability during a given period of time, a non-periodic surveillance test optimization methodology based on genetic algorithms (GA) has recently been developed. Allowing non-periodic tests makes the solution space much more flexible, so schedules can be better adjusted, providing gains in the overall system average availability when compared to those obtained by an optimized periodic test scheme. The optimization problem becomes, however, more complex. Hence, the use of a powerful optimization technique, such as GAs, is required. Particular features of certain systems can make it advisable to introduce other specific constraints in the optimization problem. The Emergency Diesel Generation System (EDGS) of a Nuclear Power Plant (NPP) is a good example for demonstrating the introduction of seasonal constraints in the optimization problem. This system is responsible for power supply during an external blackout. Therefore, it is desirable during periods of high blackout probability to keep the system availability as high as possible. Previous applications have demonstrated the robustness and effectiveness of the methodology. However, no seasonal constraints have ever been imposed. This work investigates the application of such a methodology to the surveillance test policy optimization of the EDGS of the Angra-II Brazilian NPP, considering the blackout probability

  14. The Proteome of Ulcerative Colitis in Colon Biopsies from Adults - Optimized Sample Preparation and Comparison with Healthy Controls.

    Science.gov (United States)

    Schniers, Armin; Anderssen, Endre; Fenton, Christopher Graham; Goll, Rasmus; Pasing, Yvonne; Paulssen, Ruth Hracky; Florholmen, Jon; Hansen, Terkel

    2017-12-01

    The purpose of the study was to optimize the sample preparation and to use the improved sample preparation to identify proteome differences between inflamed ulcerative colitis tissue from untreated adults and healthy controls. To optimize the sample preparation, we studied the effect of adding different detergents to a urea-containing lysis buffer for a Lys-C/trypsin tandem digestion. With the optimized method, we prepared clinical samples from six ulcerative colitis patients and six healthy controls and analysed them by LC-MS/MS. We examined the acquired data to identify differences between the states. We improved the protein extraction and the number of protein identifications by utilizing a buffer containing urea and sodium deoxycholate. Comparing ulcerative colitis and healthy tissue, we found 168 of 2366 identified proteins differentially abundant. Inflammatory proteins are more abundant in ulcerative colitis, while proteins related to anion transport and mucus production are less abundant. A high proportion of S100 proteins is differentially abundant, notably with both up-regulated and down-regulated proteins. The optimized sample preparation method will improve future proteomic studies on colon mucosa. The observed protein abundance changes and their enrichment in various groups improve our understanding of ulcerative colitis at the protein level. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Development and optimization of the determination of pharmaceuticals in water samples by SPE and HPLC with diode-array detection.

    Science.gov (United States)

    Pavlović, Dragana Mutavdžić; Ašperger, Danijela; Tolić, Dijana; Babić, Sandra

    2013-09-01

    This paper describes the development, optimization, and validation of a method for the determination of five pharmaceuticals from different therapeutic classes (antibiotics, anthelmintics, glucocorticoids) in water samples. Water samples were prepared using SPE and extracts were analyzed by HPLC with diode-array detection. The efficiency of 11 different SPE cartridges in extracting the investigated compounds from water was tested in preliminary experiments. Then, the pH of the water sample, the elution solvent, and the sorbent mass were optimized. In addition to the optimization of the SPE procedure, the optimal HPLC column was selected from columns with different stationary phases from different manufacturers. The developed method was validated using spring water samples spiked with appropriate concentrations of pharmaceuticals. Good linearity was obtained in the range of 2.4-200 μg/L, depending on the pharmaceutical, with correlation coefficients >0.9930 in all cases except for ciprofloxacin (0.9866). The method also showed low LODs (0.7-3.9 μg/L), good intra- and interday precision with RSD below 17%, and recoveries above 98% for all pharmaceuticals. The method has been successfully applied to the analysis of production wastewater samples from the pharmaceutical industry. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Using the multi-objective optimization replica exchange Monte Carlo enhanced sampling method for protein-small molecule docking.

    Science.gov (United States)

    Wang, Hongrui; Liu, Hongwei; Cai, Leixin; Wang, Caixia; Lv, Qiang

    2017-07-10

    In this study, we extended the replica exchange Monte Carlo (REMC) sampling method to protein-small molecule docking conformational prediction using RosettaLigand. In contrast to the traditional Monte Carlo (MC) and REMC sampling methods, the extended methods use multi-objective optimization Pareto front information to facilitate the selection of replicas for exchange. The Pareto front information, generated to select lower energy conformations as representative conformation structure replicas, can facilitate convergence over the available conformational space, including available near-native structures. Furthermore, our approach directly provides min-min scenario Pareto optimal solutions, as well as a hybrid of the min-min and max-min scenario Pareto optimal solutions with lower energy conformations, for use as structure templates in the REMC sampling method. These methods were validated based on a thorough analysis of a benchmark data set containing 16 benchmark test cases. An in-depth comparison between MC, REMC, multi-objective optimization-REMC (MO-REMC), and hybrid MO-REMC (HMO-REMC) sampling methods was performed to illustrate the differences between the four conformational search strategies. Our findings demonstrate that the MO-REMC and HMO-REMC conformational sampling methods are powerful approaches for obtaining protein-small molecule docking conformational predictions based on the binding energy of complexes in RosettaLigand.
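
    The core of the replica-selection idea is a non-dominated (Pareto) filter over replica scores. A small illustrative sketch for the min-min scenario, with invented objective values (these are not RosettaLigand's actual score terms):

        # Keep the non-dominated replicas (both objectives minimized) as
        # candidates for replica exchange.
        def pareto_front(points):
            front = []
            for i, p in enumerate(points):
                dominated = any(
                    all(q[k] <= p[k] for k in range(len(p))) and q != p
                    for j, q in enumerate(points) if j != i
                )
                if not dominated:
                    front.append(i)
            return front

        # (interface energy, total energy) per replica -- invented values.
        replicas = [(-12.3, 450.1), (-10.0, 430.5), (-12.5, 460.0), (-9.0, 470.2)]
        print(pareto_front(replicas))   # indices of exchange candidates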

  17. Subdivision, Sampling, and Initialization Strategies for Simplical Branch and Bound in Global Optimization

    DEFF Research Database (Denmark)

    Clausen, Jens; Zilinskas, A.

    2002-01-01

    We consider the problem of optimizing a Lipschitzian function. The branch and bound technique is a well-known solution method, and its key components are the subdivision scheme, the bound calculation scheme, and the initialization. For Lipschitzian optimization, the bound calculations are

  18. Hyphenation of optimized microfluidic sample preparation with nano liquid chromatography for faster and greener alkaloid analysis

    NARCIS (Netherlands)

    Shen, Y.; Beek, van T.A.; Zuilhof, H.; Chen, B.

    2013-01-01

    A glass liquid–liquid extraction (LLE) microchip with three parallel 3.5 cm long and 100 µm wide interconnecting channels was optimized in terms of more environmentally friendly (greener) solvents and extraction efficiency. In addition, the optimized chip was successfully hyphenated with nano-liquid

  19. The optimal amount and allocation of sampling effort for plant health inspection

    NARCIS (Netherlands)

    Surkov, I.; Oude Lansink, A.G.J.M.; Werf, van der W.

    2009-01-01

    Plant import inspection can prevent the introduction of exotic pests and diseases, thereby averting economic losses. We explore the optimal allocation of a fixed budget, taking into account risk differentials, and the optimal-sized budget to minimise total pest costs. A partial-equilibrium market

  20. Optimal climate change: economics and climate science policy histories (from heuristic to normative).

    Science.gov (United States)

    Randalls, Samuel

    2011-01-01

    Historical accounts of climate change science and policy have reflected rather infrequently upon the debates, discussions, and policy advice proffered by economists in the 1980s. While there are many forms of economic analysis, this article focuses upon cost-benefit analysis, especially as adopted in the work of William Nordhaus. The article addresses the way in which climate change economics subtly altered debates about climate policy from the late 1970s through the 1990s. These debates are often technical and complex, but the argument in this article is that the development of a philosophy of climate change as an issue for cost-benefit analysis has had consequences for how climate policy is made today.

  1. Is climate change-centrism an optimal policy making strategy to set national electricity mixes?

    International Nuclear Information System (INIS)

    Vázquez-Rowe, Ian; Reyna, Janet L.; García-Torres, Samy; Kahhat, Ramzy

    2015-01-01

    Highlights: • The impact of climate-centric policies on other environmental impacts is uncertain. • Analysis of changing electricity grids of Peru and Spain in the period 1989–2013. • Life Cycle Assessment was the selected sustainability method to conduct the study. • Policies targeting GHG reductions also reduce air pollution and toxicity. • Resource usage, especially water, does not show the same trends as GHG emissions. - Abstract: In order to combat the threat of climate change, countries have begun to implement policies which restrict GHG emissions in the electricity sector. However, the development of national electricity mixes should also be sensitive to resource availability, geo-political forces, human health impacts, and social equity concerns. Policy focused on GHG goals could potentially lead to adverse consequences in other areas. To explore the impact of “climate-centric” policy making on long-term electricity mix changes, we develop two cases for Peru and Spain analyzing their changing electricity grids in the period 1989–2013. We perform a Life Cycle Assessment of annual electricity production to catalogue the improvements in GHG emissions relative to other environmental impacts. We conclude that policies targeting GHG reductions might have the co-benefit of also reducing air pollution and toxicity at the expense of other important environmental performance indicators such as water depletion. Moreover, as of 2013, both countries generate approximately equal GHG emissions per kWh, and relatively low emission rates of other pollutants compared to nations of similar development levels. Although climate-centric policy can lead to some positive environmental outcomes in certain areas, energy policy-making should be holistic and include other aspects of sustainability and vulnerability.

  2. A bi-objective model for optimizing replacement time of age and block policies with consideration of spare parts’ availability

    Science.gov (United States)

    Alsyouf, Imad

    2018-05-01

    Reliability and availability of critical systems play an important role in achieving the stated objectives of engineering assets. The preventive replacement time affects the reliability of the components, and thus the number of system failures encountered and the associated downtime expenses. On the other hand, the spare parts inventory level is a critical factor affecting the availability of the system. Usually, the decision maker has many conflicting objectives that should be considered simultaneously when selecting the optimal maintenance policy. The purpose of this research was to develop a bi-objective model to determine the preventive replacement time for three maintenance policies (age, block good-as-new, block bad-as-old) with consideration of spare parts' availability. A weighted comprehensive criterion method with two objectives, cost and availability, was used. The model was tested with a typical numerical example. The results demonstrated its effectiveness in enabling the decision maker to select the optimal maintenance policy under different scenarios, taking into account preferences with respect to conflicting objectives such as cost and availability.
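
    As a hedged illustration of the weighted comprehensive criterion, the sketch below evaluates a classical age-replacement policy over a grid of replacement ages, normalizes the cost-rate and unavailability objectives, and minimizes their weighted sum. The Weibull lifetime and all cost/downtime parameters are invented, not the paper's data:

        import numpy as np

        beta, eta = 2.5, 1000.0        # Weibull shape / scale (hours)
        cp, cf = 100.0, 900.0          # preventive / failure replacement cost
        tp, tf = 5.0, 40.0             # preventive / failure downtime (hours)

        t = np.linspace(1.0, 3000.0, 3000)
        R = np.exp(-(t / eta) ** beta)              # survival function
        cumR = np.cumsum(R) * (t[1] - t[0])         # integral of R up to T

        cost_rate = (cp * R + cf * (1 - R)) / cumR  # cost per unit uptime
        avail = cumR / (cumR + tp * R + tf * (1 - R))
        unavail = 1 - avail

        # Weighted comprehensive criterion on normalized objectives.
        w_cost, w_avail = 0.5, 0.5
        f = (w_cost * (cost_rate - cost_rate.min()) / np.ptp(cost_rate)
             + w_avail * (unavail - unavail.min()) / np.ptp(unavail))
        print(f"optimal replacement age ~ {t[np.argmin(f)]:.0f} h")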

  3. A linear programming model to optimize diets in environmental policy scenarios.

    Science.gov (United States)

    Moraes, L E; Wilen, J E; Robinson, P H; Fadel, J G

    2012-03-01

    The objective was to develop a linear programming model to formulate diets for dairy cattle when environmental policies are present, and to examine the effects of these policies on diet formulation and on dairy cattle nitrogen and mineral excretions as well as methane emissions. The model was developed as a minimum-cost diet model. Two types of environmental policies were examined: a tax and a constraint on methane emissions. A tax was incorporated to simulate a greenhouse gas emissions tax policy, and prices of carbon credits in the current carbon markets were assigned to the methane production variable. Three independent runs were made, using carbon dioxide equivalent prices of $5, $17, and $250/t. A constraint was incorporated into the model to simulate the second type of environmental policy, reducing methane emissions by predetermined amounts. The linear programming formulation of this second alternative enabled the calculation of the marginal costs of reducing methane emissions. Methane emission and manure production by dairy cows were calculated according to published equations, and nitrogen and mineral excretions were calculated by mass conservation laws. Results were compared with the values generated by a base least-cost model. Current prices in the carbon credit market did not appear onerous enough to have a substantive incentive effect in reducing methane emissions and altering diet costs of our hypothetical dairy herd. However, when emissions of methane were reduced by 5, 10, and 13.5% from the base model, total diet costs increased by 5, 19.1, and 48.5%, respectively. Either these increased costs would be passed on to the consumer or dairy producers would go out of business. Nitrogen and potassium excretions increased by 16.5 and 16.7% with a 13.5% reduction in methane emissions from the base model. Imposing methane restrictions would further increase the demand for grains and other human-edible crops, which is not a progressive
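
    A toy least-cost diet LP with a methane cap, in the spirit of the model described above. Feed names, nutrient contents, and emission factors are invented; with SciPy's HiGHS backend, the dual value of the methane constraint (res.ineqlin.marginals) would correspond to the marginal cost of methane reduction:

        from scipy.optimize import linprog

        feeds = ["corn", "alfalfa", "soymeal"]
        cost = [0.12, 0.09, 0.30]        # $ per kg dry matter (assumed)
        energy = [8.0, 5.5, 9.0]         # MJ per kg DM (assumed)
        protein = [90.0, 180.0, 480.0]   # g crude protein per kg DM (assumed)
        methane = [18.0, 24.0, 15.0]     # g CH4 per kg DM (assumed)

        # Requirements: >= 140 MJ energy and >= 2500 g protein per day,
        # written as <= constraints by negation; cap methane at 380 g/day.
        A_ub = [[-e for e in energy], [-p for p in protein], methane]
        b_ub = [-140.0, -2500.0, 380.0]

        res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 25)] * 3)
        print(dict(zip(feeds, res.x.round(2))), "cost:", round(res.fun, 2))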

  4. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat

  5. Transmission characteristics and optimal diagnostic samples to detect an FMDV infection in vaccinated and non-vaccinated sheep

    NARCIS (Netherlands)

    Eble, P.L.; Orsel, K.; Kluitenberg-van Hemert, F.; Dekker, A.

    2015-01-01

    We wanted to quantify transmission of FMDV Asia-1 in sheep and to evaluate which samples would be optimal for detection of an FMDV infection in sheep. For this, we used 6 groups of 4 non-vaccinated and 6 groups of 4 vaccinated sheep. In each group 2 sheep were inoculated and contact exposed to 2

  6. Optimal Mobile Sensing and Actuation Policies in Cyber-physical Systems

    CERN Document Server

    Tricaud, Christophe

    2012-01-01

    A successful cyber-physical system, a complex interweaving of hardware and software in direct interaction with some parts of the physical environment, relies heavily on proper identification of the, often pre-existing, physical elements. Based on information from that process, a bespoke “cyber” part of the system may then be designed for a specific purpose. Optimal Mobile Sensing and Actuation Strategies in Cyber-physical Systems focuses on distributed-parameter systems the dynamics of which can be modelled with partial differential equations. Such systems are very challenging to measure, their states being distributed throughout a spatial domain. Consequently, optimal strategies are needed and systematic approaches to the optimization of sensor locations have to be devised for parameter estimation. The text begins by reviewing the newer field of cyber-physical systems and introducing background notions of distributed parameter systems and optimal observation theory. New research opportunities are then de...

  7. Global warming and carbon taxation. Optimal policy and the role of administration costs

    International Nuclear Information System (INIS)

    Williams, M.

    1995-01-01

    This paper develops a model relating CO2 emissions to atmospheric concentrations, global temperature change and economic damages. For a variety of parameter assumptions, the model provides estimates of the marginal cost of emissions in various years. The optimal carbon tax is a function of the marginal emission cost and the costs of administering the tax. This paper demonstrates that under any reasonable assumptions, the optimal carbon tax is zero for at least several decades. (author)

  8. Optimal Monetary Policy with Durable Consumption Goods and Factor Demand Linkages

    DEFF Research Database (Denmark)

    Petrella, Ivan; Santoro, Emiliano

    This paper deals with the implications of factor demand linkages for monetary policy design. We develop a dynamic general equilibrium model with two sectors that produce durable and non-durable goods, respectively. Part of the output produced in each sector is used as an intermediate input of production in both sectors, according to an input-output matrix calibrated on the US economy. As shown in a number of recent contributions, this roundabout technology allows us to reconcile standard two-sector New Keynesian models with the empirical evidence showing co-movement between durable and non-durable spending in response to a monetary policy shock. A main result of our monetary policy analysis is that strategic complementarities generated by factor demand linkages amplify social welfare loss. As the degree of interconnection between sectors increases, the cost of misperceiving the correct production

  9. A niching genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Sacco, W.F.; Lapa, Celso M.F.; Pereira, C.M.N.A.; Oliveira, C.R.E. de

    2006-01-01

    This article extends previous efforts on genetic algorithms (GAs) applied to a nuclear power plant (NPP) auxiliary feedwater system (AFWS) surveillance tests policy optimization. We introduce the application of a niching genetic algorithm (NGA) to this problem and compare its performance to previous results. The NGA maintains populational diversity during the search process, thus promoting a greater exploration of the search space. The optimization problem consists in maximizing the system's average availability for a given period of time, considering realistic features such as: (i) aging effects on standby components during the tests; (ii) revealed failures in the tests imply corrective maintenance, increasing outage times; (iii) components have distinct test parameters (outage time, aging factors, etc.); and (iv) tests are not necessarily periodic. We find that the NGA performs better than the conventional GA and the island GA due to a greater exploration of the search space
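
    The diversity-preserving mechanism of a niching GA is typically fitness sharing: raw fitness is divided by a niche count, so crowded regions of the search space are penalized. A minimal sketch with illustrative parameters (not the article's settings):

        import numpy as np

        def shared_fitness(pop, raw_fitness, sigma_share=0.1, alpha=1.0):
            pop = np.asarray(pop, dtype=float)
            d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
            sh = np.where(d < sigma_share, 1 - (d / sigma_share) ** alpha, 0.0)
            niche_count = sh.sum(axis=1)      # includes self (d = 0 -> sh = 1)
            return np.asarray(raw_fitness) / niche_count

        pop = [[0.10, 0.20], [0.11, 0.21], [0.80, 0.75]]  # two crowded, one apart
        raw = [1.0, 1.0, 0.9]
        print(shared_fitness(pop, raw))  # the isolated individual gains rank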

  10. Archival policies and collections database for the Woods Hole Science Center's marine sediment samples

    Science.gov (United States)

    Buczkowski, Brian J.; Kelsey, Sarah A.

    2007-01-01

    The Woods Hole Science Center of the U.S. Geological Survey (USGS) has been an active member of the Woods Hole research community, Woods Hole, Massachusetts, for over 40 years. In that time there have been many projects involving the collection of sediment samples, conducted by USGS scientists and technicians for the research and study of seabed environments and processes. These samples were collected at sea or near shore and then brought back to the Woods Hole Science Center (WHSC) for analysis. While at the center, samples are stored under ambient, refrigerated, and freezing conditions ranging from +2 °C to -18 °C, depending on the best mode of preparation for the study being conducted or the planned duration of storage. Recently, storage methods and available storage space have become a major concern at the WHSC. The core and sediment archive program described herein has been initiated to set standards for the management, methods, and duration of sample storage, as a need has arisen to maintain organizational consistency and define storage protocols. This handbook serves as a reference and guide to all parties interested in using and accessing the WHSC's sample archive, and also defines all the steps necessary to construct and maintain an organized collection of geological samples. It answers many questions as to the way in which the archive functions.

  11. Nationwide survey of policies and practices related to capillary blood sampling in medical laboratories in Croatia

    OpenAIRE

    Lenicek Krleza, Jasna

    2014-01-01

    Introduction: Capillary sampling is increasingly used to obtain blood for laboratory tests in volumes as small as necessary and as non-invasively as possible. Whether capillary blood sampling is also frequent in Croatia, and whether it is performed according to international laboratory standards is unclear. Materials and methods: All medical laboratories that participate in the Croatian National External Quality Assessment Program (N = 204) were surveyed on-line to collect information about t...

  12. Optimal monetary policy rules: the problem of stability under heterogeneous learning

    Czech Academy of Sciences Publication Activity Database

    Bogomolova, Anna; Kolyuzhnov, Dmitri

    -, č. 379 (2008), s. 1-34 ISSN 1211-3298 R&D Projects: GA MŠk LC542 Institutional research plan: CEZ:AV0Z70850503 Keywords : monetary policy rules * New Keynesian model * adaptive learning Subject RIV: AH - Economics http://www.cerge-ei.cz/pdf/wp/Wp379.pdf

  13. Unravelling the concept of consumer preference: implications for health policy and optimal planning in primary care.

    Science.gov (United States)

    Foster, Michele M; Earl, Peter E; Haines, Terry P; Mitchell, Geoffrey K

    2010-10-01

    Accounting for consumer preference in health policy and delivery system design makes good economic sense since this is linked to outcomes, quality of care and cost control. Probability trade-off methods are commonly used in policy evaluation, marketing and economics. Increasingly applied to health matters, the trade-off preference model has indicated that consumers of health care discriminate between different attributes of care. However, the complexities of the health decision-making environment raise questions about the inherent assumptions concerning choice and decision-making behavior which frame this view of consumer preference. In this article, we use the example of primary care in Australia as a vehicle to examine the concept of 'consumer preference' from different perspectives within economics and discuss the significance of how we model preferences for health policy makers. In doing so, we question whether mainstream thinking, namely that consumers are capable of deliberating between rival strategies and are willing to make trade-offs, is a reliable way of thinking about preferences given the complexities of the health decision-making environment. Alternative perspectives on preference can assist health policy makers and health providers by generating more precise information about the important attributes of care that are likely to enhance consumer engagement and optimise acceptability of health care. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  14. Interactions between optimal replacement policies and feeding strategies in dairy herds

    NARCIS (Netherlands)

    Vargas, B.; Herrero, M.; Arendonk, van J.A.M.

    2001-01-01

    A dynamic performance model was integrated with a model that optimised culling and insemination policies in dairy herds using dynamic programming. The performance model estimated daily feed intake, milk yield and body weight change of dairy cows on the basis of availability and quality of feed and

  15. Ionizing radiation as optimization method for aluminum detection from drinking water samples

    International Nuclear Information System (INIS)

    Bazante-Yamguish, Renata; Geraldo, Aurea Beatriz C.; Moura, Eduardo; Manzoli, Jose Eduardo

    2013-01-01

    The presence of organic compounds in water samples is often responsible for metal complexation; depending on the analytic method, the organic fraction may mask the true metal concentration. Pre-treatment of the samples is advised when organic compounds are interfering agents, and sample mineralization may be accomplished by several chemical and/or physical methods. Here, ionizing radiation was used as an advanced oxidation process (AOP) for sample pre-treatment before the analytic determination of total and dissolved aluminum by ICP-OES in drinking water samples from wells and a spring source located in the Billings dam region. Before irradiation, the spring source and well samples showed aluminum levels of 0.020 mg/l and 0.2 mg/l, respectively; after irradiation, both samples showed an 8-fold increase in aluminum concentration. These results are discussed considering other physical and chemical parameters and peculiarities of the sample sources. (author)

  16. Using Multi-Objective Optimization to Explore Robust Policies in the Colorado River Basin

    Science.gov (United States)

    Alexander, E.; Kasprzyk, J. R.; Zagona, E. A.; Prairie, J. R.; Jerla, C.; Butler, A.

    2017-12-01

    The long term reliability of water deliveries in the Colorado River Basin has degraded due to the imbalance of growing demand and dwindling supply. The Colorado River meanders 1,450 miles across a watershed that covers seven US states and Mexico and is an important cultural, economic, and natural resource for nearly 40 million people. Its complex operating policy is based on the "Law of the River," which has evolved since the Colorado River Compact in 1922. Recent (2007) refinements to address shortage reductions and coordinated operations of Lakes Powell and Mead were negotiated with stakeholders in which thousands of scenarios were explored to identify operating guidelines that could ultimately be agreed on. This study explores a different approach to searching for robust operating policies to inform the policy making process. The Colorado River Simulation System (CRSS), a long-term water management simulation model implemented in RiverWare, is combined with the Borg multi-objective evolutionary algorithm (MOEA) to solve an eight objective problem formulation. Basin-wide performance metrics are closely tied to system health through incorporating critical reservoir pool elevations, duration, frequency and quantity of shortage reductions in the objective set. For example, an objective to minimize the frequency that Lake Powell falls below the minimum power pool elevation of 3,490 feet for Glen Canyon Dam protects a vital economic and renewable energy source for the southwestern US. The decision variables correspond to operating tiers in Lakes Powell and Mead that drive the implementation of various shortage and release policies, thus affecting system performance. The result will be a set of non-dominated solutions that can be compared with respect to their trade-offs based on the various objectives. These could inform policy making processes by eliminating dominated solutions and revealing robust solutions that could remain hidden under conventional analysis.

  17. Optimal sampling period of the digital control system for the nuclear power plant steam generator water level control

    International Nuclear Information System (INIS)

    Hur, Woo Sung; Seong, Poong Hyun

    1995-01-01

    A great effort has been made to improve nuclear plant control systems by use of digital technologies, and a long-term schedule for the control system upgrade has been prepared with an aim to implementation in the next generation of nuclear plants. In the case of a digital control system, it is important to decide the sampling period for analysis and design of the system, because the performance and the stability of a digital control system depend on the value of its sampling period. There is, however, currently no systematic method in universal use for determining the sampling period of a digital control system. A traditional way to select the sampling frequency is to use 20 to 30 times the bandwidth of the analog control system that has the same system configuration and parameters as the digital one. In this paper, a new method to select the sampling period is suggested which takes into account the performance as well as the stability of the digital control system. Using Irving's steam generator model, the optimal sampling period of an assumed digital control system for steam generator level control is estimated and then verified in the digital control simulation system for the Kori-2 nuclear power plant steam generator level control. Consequently, we conclude that the optimal sampling period of the digital control system for Kori-2 nuclear power plant steam generator level control is 1 second for all power ranges. 7 figs., 3 tabs., 8 refs. (Author)
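
    The traditional rule of thumb quoted above is easy to make concrete. In the sketch below the closed-loop bandwidth value is an assumption chosen only to show how a period of about 1 second could arise for a slow steam generator level loop:

        def sampling_period(bandwidth_hz, factor=25):
            # 20-30x bandwidth rule of thumb; factor = 25 is mid-range.
            return 1.0 / (factor * bandwidth_hz)

        bw = 0.04   # Hz, assumed closed-loop bandwidth of the level loop
        print(f"T_s ~ {sampling_period(bw):.1f} s")   # -> T_s ~ 1.0 s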

  18. Empirical data and optimal monitoring policies: the case of four Russian sea harbours

    Energy Technology Data Exchange (ETDEWEB)

    Deissenberg, C. [CEFI-CNRS, Les Milles (France); Gurman, V.; Shevchuk, E. [RAS, Program Systems Inst., Pereslavl-Zalessky (Russian Federation); Ryumina, E. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Economic Market Problems; Shevlyagin, K. [State Committee of the Environment Protection of the Russian Federation, Moscow (Russian Federation). Marine Environment Dept.

    2001-07-01

    In this paper, we describe the present state of empirical information about oil spills and oil monitoring activities in Russian harbours. We explain how we gathered, organized, and estimated the data needed to run the monitoring efforts optimization model of Deissenberg et al. (2001). We present, analyse, and discuss the results of the optimizations carried out with this model on the basis of the empirical data. These results show, in particular, that the economic efficiency of the monitoring activities decreases rapidly as the corresponding budget increases. This suggests that, rather urgently, measures other than monitoring should be initiated to control sea harbour pollution. (Author)

  19. Financial stability, wealth effects and optimal macroeconomic policy combination in the United Kingdom: A new-Keynesian dynamic stochastic general equilibrium framework

    Directory of Open Access Journals (Sweden)

    Muhammad Ali Nasir

    2016-12-01

    This study derives an optimal macroeconomic policy combination for financial sector stability in the United Kingdom by employing a New Keynesian Dynamic Stochastic General Equilibrium (NK-DSGE) framework. The empirical results show that a disciplined fiscal and accommodative monetary policy stance is optimal for financial sector stability. Furthermore, fiscal indiscipline countered by a contractionary monetary stance adversely affects financial sector stability. Financial markets, e.g. stocks and Gilts, show a short-term asymmetric response to macroeconomic policy interaction and to each other. The asymmetry is a reflection of portfolio adjustment. In the long run, however, the responses to the suggested optimal policy combination had homogeneous effects, and there was evidence of co-movement in the stock and Gilt markets.

  20. Optimization of sample preparation variables for wedelolactone from Eclipta alba using Box-Behnken experimental design followed by HPLC identification.

    Science.gov (United States)

    Patil, A A; Sachin, B S; Shinde, D B; Wakte, P S

    2013-07-01

    Coumestan wedelolactone is an important phytocomponent from Eclipta alba (L.) Hassk. It possesses diverse pharmacological activities, which have prompted the development of various extraction techniques and strategies for its better utilization. The aim of the present study was to develop and optimize supercritical carbon dioxide-assisted sample preparation and HPLC identification of wedelolactone from E. alba (L.) Hassk. Response surface methodology was employed to optimize the supercritical carbon dioxide sample preparation, investigating the quantitative effects of the preparation parameters, viz. operating pressure, temperature, modifier concentration, and time, on the yield of wedelolactone using a Box-Behnken design. The wedelolactone content was determined using a validated HPLC methodology. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis and analyzed using appropriate statistical methods. By solving the regression equation and analyzing 3D plots, the optimum extraction conditions were found to be: extraction pressure, 25 MPa; temperature, 56 °C; modifier concentration, 9.44%; and extraction time, 60 min. The optimum extraction conditions gave a wedelolactone yield of 15.37 ± 0.63 mg/100 g E. alba (L.) Hassk, which was in good agreement with the predicted values. Temperature and modifier concentration showed significant effects on the wedelolactone yield. Supercritical carbon dioxide extraction showed higher selectivity than the conventional Soxhlet-assisted extraction method. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
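
    The response-surface step amounts to fitting a second-order polynomial to designed-experiment data and solving for the stationary point. A sketch with two coded factors and invented yields (the study itself optimized four factors):

        import numpy as np

        # Coded levels (-1, 0, +1) for two factors plus invented yields.
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                      [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]])
        y = np.array([10.1, 12.0, 11.5, 14.8, 11.0, 13.9, 11.2, 13.5, 15.0])

        x1, x2 = X[:, 0], X[:, 1]
        D = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        b = np.linalg.lstsq(D, y, rcond=None)[0]   # b0..b5

        # Stationary point: set the gradient of the quadratic surface to zero.
        H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
        s = np.linalg.solve(H, -np.array([b[1], b[2]]))
        print("stationary point (coded units):", s.round(2))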

  1. Sampling optimization trade-offs for long-term monitoring of gamma dose rates

    NARCIS (Netherlands)

    Melles, S.J.; Heuvelink, G.B.M.; Twenhöfel, C.J.W.; Stöhlker, U.

    2008-01-01

    This paper applies a recently developed optimization method to examine the design of networks that monitor radiation under routine conditions. Annual gamma dose rates were modelled by combining regression with interpolation of the regression residuals using spatially exhaustive predictors and an

  2. Counting, enumerating and sampling of execution plans in a cost-based query optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    1999-01-01

    Testing an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on

  3. Counting, Enumerating and Sampling of Execution Plans in a Cost-Based Query Optimizer

    NARCIS (Netherlands)

    F. Waas; C.A. Galindo-Legaria

    2000-01-01

    Testing an SQL database system by running large sets of deterministic or stochastic SQL statements is common practice in commercial database development. However, code defects often remain undetected as the query optimizer's choice of an execution plan is not only depending on the query

  4. Evaluation of dynamically dimensioned search algorithm for optimizing SWAT by altering sampling distributions and searching range

    Science.gov (United States)

    The primary advantage of Dynamically Dimensioned Search algorithm (DDS) is that it outperforms many other optimization techniques in both convergence speed and the ability in searching for parameter sets that satisfy statistical guidelines while requiring only one algorithm parameter (perturbation f...

  5. Multi-Factor Policy Evaluation and Selection in the One-Sample Situation

    NARCIS (Netherlands)

    C.M. Chen (Chien-Ming)

    2008-01-01

    Firms nowadays need to make decisions under rapid information obsolescence. In this paper I deal with one class of decision problems in this situation, called the "one-sample" problems: we have finite options and one sample of the multiple criteria with which we evaluate those options.

  6. Efficient optimization of the dual-index policy using Markov chains

    NARCIS (Netherlands)

    Arts, J.J.; Vuuren, van M.; Kiesmüller, G.P.

    2009-01-01

    We consider the inventory control of a single product in one location with two supply sources facing stochastic demand. A premium is paid for each product ordered from the faster 'emergency' supply source. Unsatisfied demand is backordered and ordering decisions are made periodically. The optimal

  7. Modelling impact of advertising and optimizing advertising policy: an application in recreation

    NARCIS (Netherlands)

    B. Wierenga (Berend)

    1981-01-01

    This paper deals with the problem of the desirable level of advertising expenditure, the optimal distribution of this expenditure over time, and the allocation over the media TV, radio, and newspaper for a recreation park in the Netherlands. Although the model is developed for the specific

  8. Optimal core acquisition and remanufacturing policies under uncertain core quality fractions

    NARCIS (Netherlands)

    Teunter, R.H.; Flapper, S.D.P.

    2011-01-01

    Cores acquired by a remanufacturer are typically highly variable in quality. Even if the expected fractions of the various quality levels are known, the exact fractions when acquiring cores are still uncertain. Our model incorporates this uncertainty in determining optimal acquisition decisions

  9. Optimal Dynamic Investment Policy under Different Rates for Tax Depreciation and Economic Depreciation

    NARCIS (Netherlands)

    Wielhouwer, J.L.; De Waegenaere, A.M.B.; Kort, P.M.

    1999-01-01

    This paper analyzes the consequences of incorporating a different rate for tax depreciation than for economic depreciation. Firms most often choose their tax depreciation rate in a strategic way. It would therefore be a coincidence if this optimization process leads to a tax depreciation rate that

  10. Spare parts sharing with joint optimization of maintenance and inventory policies

    DEFF Research Database (Denmark)

    Larsen, Christian; Wong, Hartanto Wijaya; Nielsen, Lars Relund

    We consider a collaborative arrangement where a number of companies are willing to share expensive spare parts, required for both failure replacement and preventive maintenance purposes. We develop a discrete-time Markov decision model for the joint optimization of maintenance and spare parts...

  11. Optimal Design for Study-Abroad Scholarship: The Effect of Payback Policy

    Science.gov (United States)

    Lien, Donald; Wang, Yaqin

    2010-01-01

    This paper examines the optimal design for a study-abroad scholarship. A student is awarded a fixed-amount scholarship to participate in the program but will have to pay back the scholarship if his/her performance fails to meet a target level. When the program is highly productive, the scholarship is low and the target performance is high. The…

  12. A multi-period optimization model for planning of China's power sector with consideration of carbon dioxide mitigation—The importance of continuous and stable carbon mitigation policy

    International Nuclear Information System (INIS)

    Zhang, Dongjie; Liu, Pei; Ma, Linwei; Li, Zheng

    2013-01-01

    A great challenge China's power sector faces is to mitigate its carbon emissions whilst satisfying ever-increasing power demand. Optimal planning of the power sector with consideration of carbon mitigation over a long-term future remains a complex task, involving many technical alternatives and an infinite number of possible plant installations, retrofits, and decommissionings over the planning horizon. The authors previously built a multi-period optimization model for the planning of China's power sector during 2010–2050. Based on that model, this paper computes optimal pathways for China's power sector under two typical decision-making modes, based on "full-information" and "limited-information" hypotheses, and analyzes the impact on the optimal plans of two typical carbon tax policies: a "continuous and stable" one and a "loose first and tight later" one. The results showed that setting carbon tax policy for the long-term future, and improving continuity and stability in policy execution, can effectively reduce accumulated total carbon emissions as well as the power sector's cost of carbon mitigation. This conclusion is of great significance for policy makers designing carbon mitigation policies in China and other countries as well. - Highlights: • A multi-stage optimization model for planning the power sector is applied as the basis. • Differences between ideal and actual decision-making processes are proposed and analyzed. • A "continuous and stable" policy and a "loose first and tight later" one are designed. • 4 policy scenarios are studied applying the optimal planning model and compared. • The importance of a "continuous and stable" long-term policy is well demonstrated
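
    A toy flavour of the policy comparison can be given with a tiny two-period planning LP in which a carbon tax is loaded onto the variable cost of coal generation; all technologies, costs, and emission factors are invented for illustration:

        from scipy.optimize import linprog

        def plan(tax):                       # tax in $ per tCO2 (assumed)
            coal, ren = 40.0, 55.0           # variable cost, $ per MWh
            ef = 0.9                         # tCO2 per MWh of coal
            c = [coal + tax * ef, ren, coal + tax * ef, ren]
            A_eq = [[1, 1, 0, 0], [0, 0, 1, 1]]   # demand balance per period
            b_eq = [100.0, 120.0]                 # demand per period
            return linprog(c, A_eq=A_eq, b_eq=b_eq,
                           bounds=[(0, 90)] * 4).x

        for tax in (0.0, 17.0, 50.0):
            print(tax, plan(tax).round(1))   # generation switches past ~17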

  13. Article Review on World Bank Report, Optimal Design for a Minimum Wage Policy in Malaysia

    OpenAIRE

    Nurrachmi, Rininta; Mad-Ahin, Ashanee; Waeowanjit, Phimpaporn; Kareemarif Arif, Naz Abdul

    2012-01-01

    The implementation of a minimum wage in Malaysia has many pros and cons, since this is the first time one has been introduced. This article reviews the World Bank report on the Malaysian minimum wage policy to be implemented in 2013. The report has both strengths and weaknesses. Moreover, the review also analyzes the policy from an Islamic perspective, since the majority of Malaysia's population is Muslim.

  14. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico

    Science.gov (United States)

    Haer, Toon; Botzen, W. J. Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M.; Ward, Philip J.

    2018-06-01

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'.

  15. Compromises in energy policy-Using fuzzy optimization in an energy systems model

    International Nuclear Information System (INIS)

    Martinsen, Dag; Krey, Volker

    2008-01-01

    Over the last year in Germany, a great many political discussions have centered on the future direction of energy and climate policy. Due to a number of events related to energy prices, security of supply and climate change, it has been necessary to develop cornerstones for a new integrated energy and climate policy. To support this decision process, model-based scenarios were used. In this paper we introduce fuzzy constraints to obtain a better representation of political decision processes, in particular to find compromises between often contradictory targets (e.g. an economic, environmentally friendly and secure energy supply). A number of policy aims derived from a review of the ongoing political discussions were formulated as fuzzy constraints to explicitly include trade-offs between the various targets. The result is an overall satisfaction level of about 60%, contingent upon the following restrictions: share of energy imports, share of biofuels, share of CHP electricity, CO2 reduction target and use of domestic hard coal. The restrictions for the share of renewable electricity, share of renewable heat, energy efficiency and postponement of the nuclear phase-out have higher membership function values, i.e. they are not binding and are therefore satisfied as a by-product.

  16. Relationships between depressive symptoms and perceived social support, self-esteem, & optimism in a sample of rural adolescents.

    Science.gov (United States)

    Weber, Scott; Puskar, Kathryn Rose; Ren, Dianxu

    2010-09-01

    Stress, developmental changes and social adjustment problems can be significant in rural teens. Screening for psychosocial problems by teachers and other school personnel is infrequent but can be a useful health promotion strategy. We used a cross-sectional, descriptive survey design to examine the inter-relationships between depressive symptoms and perceived social support, self-esteem, and optimism in a sample of rural school-based adolescents. Depressive symptoms were negatively correlated with peer social support, family social support, self-esteem, and optimism. The findings underscore the importance of teachers and other school staff providing health education. The results can be used as the basis for education to improve optimism, self-esteem, social support and, thus, depressive symptoms in teens.

  17. Optimization of Sample Preparation for the Identification and Quantification of Saxitoxin in Proficiency Test Mussel Sample using Liquid Chromatography-Tandem Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    Saxitoxin (STX) and some selected paralytic shellfish poisoning (PSP) analogues in mussel samples were identified and quantified with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Sample extraction and purification methods for mussel samples were optimized for LC-MS/MS analysis. The developed method was applied to the analysis of homogenized mussel samples in the proficiency test (PT) within the EQuATox project (Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk). Ten laboratories from eight countries participated in the STX PT. Identification of PSP toxins in naturally contaminated mussel samples was performed by comparison of product ion spectra and retention times with those of reference standards. The quantitative results were obtained with LC-MS/MS by spiking reference standards into toxic mussel extracts. The results were within a z-score of ±1 when compared to the results measured with the official AOAC (Association of Official Analytical Chemists) method 2005.06, pre-column oxidation high-performance liquid chromatography with fluorescence detection (HPLC-FLD).

  18. Optimizing detection of noble gas emission at a former UNE site: sample strategy, collection, and analysis

    Science.gov (United States)

    Kirkham, R.; Olsen, K.; Hayes, J. C.; Emer, D. F.

    2013-12-01

    Underground nuclear tests may be first detected by seismic or air samplers operated by the CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organization). After initial detection of a suspicious event, member nations may call for an On-Site Inspection (OSI) that, in part, will sample for localized releases of radioactive noble gases and particles. Although much of the commercially available equipment and methods used for surface and subsurface environmental sampling of gases can be used in an OSI scenario, on-site sampling conditions, required sampling volumes and establishment of background concentrations of noble gases require the development of specialized methodologies. To facilitate development of sampling equipment and methodologies that address OSI sampling volume and detection objectives, and to collect information required for model development, a field test site was created at a former underground nuclear explosion (UNE) site located in welded volcanic tuff. A mixture of SF6, Xe-127 and Ar-37 was metered into 4400 m3 of air as it was injected into the top region of the UNE cavity. These tracers were expected to move towards the surface primarily in response to barometric pumping or through delayed cavity pressurization (accelerated transport to minimize source decay time). Sampling approaches compared during the field exercise included sampling at the soil surface, inside surface fractures, and at soil vapor extraction points at depths down to 2 m. The effectiveness of the various sampling approaches and the results of tracer gas measurements will be presented.

  19. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants such as white clover, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess of replicates. Simulations revealed that the number of samples substantially influenced the genetic diversity parameters. With fewer than 15 plants per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of the total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-plant sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.
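
    The simulation logic behind the sample-size recommendation can be sketched by subsampling individuals and tracking how a diversity estimate saturates. The sketch below uses simulated binary AFLP-like data and a deliberately simplified He estimator (band frequencies treated as allele frequencies), so it illustrates the procedure rather than the study's exact genetics:

        import numpy as np

        rng = np.random.default_rng(1)

        def expected_heterozygosity(binary_matrix):
            p = binary_matrix.mean(axis=0)     # band frequency per locus
            return np.mean(2 * p * (1 - p))    # simplified He over loci

        # Simulated population: 200 plants scored at 283 binary loci.
        population = rng.random((200, 283)) < rng.uniform(0.1, 0.9, 283)

        for n in (5, 10, 15, 20, 40):          # candidate sample sizes
            reps = [expected_heterozygosity(population[rng.choice(200, n, False)])
                    for _ in range(100)]
            print(n, round(float(np.mean(reps)), 3))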

  20. Optimal sample preparation for nanoparticle metrology (statistical size measurements) using atomic force microscopy

    International Nuclear Information System (INIS)

    Hoo, Christopher M.; Doan, Trang; Starostin, Natasha; West, Paul E.; Mecartney, Martha L.

    2010-01-01

    Optimal deposition procedures are determined for nanoparticle size characterization by atomic force microscopy (AFM). Accurate nanoparticle size distribution analysis with AFM requires non-agglomerated nanoparticles on a flat substrate. The deposition of polystyrene (100 nm), silica (300 and 100 nm), gold (100 nm), and CdSe quantum dot (2-5 nm) nanoparticles by spin coating was optimized for size distribution measurements by AFM. Factors influencing deposition include spin speed, concentration, solvent, and pH. A comparison using spin coating, static evaporation, and a new fluid cell deposition method for depositing nanoparticles is also made. The fluid cell allows for a more uniform and higher density deposition of nanoparticles on a substrate at laminar flow rates, making nanoparticle size analysis via AFM more efficient and also offers the potential for nanoparticle analysis in liquid environments.

  1. Optimizing human semen cryopreservation by reducing test vial volume and repetitive test vial sampling

    DEFF Research Database (Denmark)

    Jensen, Christian F S; Ohl, Dana A; Parker, Walter R

    2015-01-01

    OBJECTIVE: To investigate optimal test vial (TV) volume, utility and reliability of TVs, intermediate temperature exposure (-88°C to -93°C) before cryostorage, cryostorage in nitrogen vapor (VN2) and liquid nitrogen (LN2), and long-term stability of VN2 cryostorage of human semen. DESIGN: Prospective clinical laboratory study. SETTING: University assisted reproductive technology (ART) laboratory. PATIENT(S): A total of 594 patients undergoing semen analysis and cryopreservation. INTERVENTION(S): Semen analysis, cryopreservation with different intermediate steps and in different volumes (50-1,000 μL), and long-term storage in LN2 or VN2. MAIN OUTCOME MEASURE(S): Optimal TV volume, prediction of cryosurvival (CS) in ART procedure vials (ARTVs) with pre-freeze semen parameters and TV CS, post-thaw motility after two- or three-step semen cryopreservation and cryostorage in VN2 and LN2. RESULT

  2. Optimization of Sample Preparation Processes of Bone Material for Raman Spectroscopy.

    Science.gov (United States)

    Chikhani, Madelen; Wuhrer, Richard; Green, Hayley

    2018-03-30

    Raman spectroscopy has recently been investigated for use in the calculation of postmortem interval from skeletal material. The fluorescence generated by samples, which affects the interpretation of Raman data, is a major limitation. This study compares the effectiveness of two sample preparation techniques, chemical bleaching and scraping, in the reduction of fluorescence from bone samples during testing with Raman spectroscopy. Visual assessment of Raman spectra obtained at 1064 nm excitation following the preparation protocols indicates an overall reduction in fluorescence. Results demonstrate that scraping is more effective at resolving fluorescence than chemical bleaching. The scraping of skeletonized remains prior to Raman analysis is a less destructive method and allows for the preservation of a bone sample in a state closest to its original form, which is beneficial in forensic investigations. It is recommended that bone scraping supersedes chemical bleaching as the preferred method for sample preparation prior to Raman spectroscopy. © 2018 American Academy of Forensic Sciences.

  3. Optimism and self-esteem are related to sleep. Results from a large community-based sample.

    Science.gov (United States)

    Lemola, Sakari; Räikkönen, Katri; Gomez, Veronica; Allemand, Mathias

    2013-12-01

    There is evidence that positive personality characteristics, such as optimism and self-esteem, are important for health. Less is known about possible determinants of positive personality characteristics. The aim was to test the relationship of optimism and self-esteem with insomnia symptoms and sleep duration. Sleep parameters, optimism, and self-esteem were assessed by self-report in a community-based sample of 1,805 adults aged between 30 and 84 years in the USA. Moderation of the relation between sleep and positive characteristics by gender and age, as well as potential confounding of the association by depressive disorder, was tested. Individuals with insomnia symptoms scored lower on optimism and self-esteem largely independent of age and sex, controlling for symptoms of depression and sleep duration. Short sleep duration (below the 7-8 h reference range) was related to lower optimism and self-esteem when compared to individuals sleeping 7-8 h, controlling for depressive symptoms. Long sleep duration (>9 h) was also related to low optimism and self-esteem independent of age and sex. Good and sufficient sleep is associated with positive personality characteristics. This relationship is independent of the association between poor sleep and depression.

  4. Parallel island genetic algorithm applied to a nuclear power plant auxiliary feedwater system surveillance tests policy optimization

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Lapa, Celso M.F.

    2003-01-01

    In this work, we focus on the application of an Island Genetic Algorithm (IGA), a coarse-grained parallel genetic algorithm (PGA) model, to the surveillance test policy optimization of a Nuclear Power Plant (NPP) Auxiliary Feedwater System (AFWS). Here, the main objective is to outline, by means of comparisons, the advantages of the IGA over the simple (non-parallel) genetic algorithm (GA), which has been successfully applied to this kind of problem. The goal of the optimization is to maximize the system's average availability for a given period of time, considering realistic features such as: i) aging effects on standby components during the tests; ii) failures revealed by the tests imply corrective maintenance, increasing outage times; iii) components have distinct test parameters (outage time, aging factors, etc.); and iv) tests are not necessarily periodic. In our experiments, which were made on a cluster of eight 1-GHz personal computers, we could clearly observe gains not only in the computational time, which decreased linearly with the number of computers, but also in the optimization outcome.
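
    For readers unfamiliar with the island model, the sketch below illustrates the coarse-grained scheme the abstract describes: several sub-populations evolve independently and periodically exchange their best individuals in a ring. Everything here is an illustrative assumption (a toy real-valued fitness stands in for the plant availability model; population sizes, migration interval and operators are arbitrary):

        import random

        def fitness(x):
            # Toy objective; the paper's fitness is the AFWS average availability.
            return -sum((xi - 0.5) ** 2 for xi in x)

        def evolve(pop, sigma=0.1):
            # One generation: tournament selection, averaging crossover, Gaussian mutation.
            new = []
            for _ in range(len(pop)):
                a = max(random.sample(pop, 3), key=fitness)
                b = max(random.sample(pop, 3), key=fitness)
                new.append([(ai + bi) / 2 + random.gauss(0, sigma) for ai, bi in zip(a, b)])
            return new

        def island_ga(n_islands=8, pop_size=20, dim=5, gens=100, migrate_every=10):
            islands = [[[random.random() for _ in range(dim)] for _ in range(pop_size)]
                       for _ in range(n_islands)]
            for g in range(gens):
                islands = [evolve(pop) for pop in islands]
                if (g + 1) % migrate_every == 0:
                    # Ring migration: best of island i-1 replaces worst of island i.
                    bests = [max(pop, key=fitness) for pop in islands]
                    for i, pop in enumerate(islands):
                        pop[pop.index(min(pop, key=fitness))] = bests[i - 1]
            return max((max(pop, key=fitness) for pop in islands), key=fitness)

        print(island_ga())

    Because the islands communicate only at migration points, they map naturally onto separate machines, which is why the speed-up reported above scales with the number of computers.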

  5. Optimization Extracting Technology of Cynomorium songaricum Rupr. Saponins by Ultrasonic and Determination of Saponins Content in Samples with Different Source

    OpenAIRE

    Xiaoli Wang; Qingwei Wei; Xinqiang Zhu; Chunmei Wang; Yonggang Wang; Peng Lin; Lin Yang

    2015-01-01

    The extraction process was optimized by single-factor and orthogonal experiments (L9(3^4)). Moreover, the content determination method was validated methodologically. The optimum ultrasonic extraction conditions were: ethanol concentration of 75%, ultrasonic power of 420 W, solid-liquid ratio of 1:15, extraction duration of 45 min, extraction temperature of 90°C, and two extraction cycles. Saponins content in Guazhou samples was significantly higher than those in Xinjiang and Inner Mongolia. Meanwhile, G...

  6. Improved detection of multiple environmental antibiotics through an optimized sample extraction strategy in liquid chromatography-mass spectrometry analysis.

    Science.gov (United States)

    Yi, Xinzhu; Bayen, Stéphane; Kelly, Barry C; Li, Xu; Zhou, Zhi

    2015-12-01

    A solid-phase extraction/liquid chromatography/electrospray ionization/multi-stage mass spectrometry (SPE-LC-ESI-MS/MS) method was optimized in this study for sensitive and simultaneous detection of multiple antibiotics in urban surface waters and soils. Among the seven classes of tested antibiotics, extraction efficiencies of macrolides, lincosamide, chloramphenicol, and polyether antibiotics were significantly improved under optimized sample extraction pH. Instead of only using the acidic extraction common in many existing studies, the results indicated that antibiotics with low pKa values (<7) were extracted more efficiently under acidic conditions, whereas antibiotics with high pKa values (>7) were extracted more efficiently under neutral conditions. The effects of pH were more obvious on polar compounds than on non-polar compounds. Optimization of extraction pH resulted in significantly improved sample recovery and better detection limits. Compared with reported values in the literature, the average reduction of minimal detection limits obtained in this study was 87.6% in surface waters (0.06-2.28 ng/L) and 67.1% in soils (0.01-18.16 ng/g dry wt). This method was subsequently applied to detect antibiotics in environmental samples in a heavily populated urban city, and macrolides, sulfonamides, and lincomycin were frequently detected. The antibiotics with the highest detected concentrations were sulfamethazine (82.5 ng/L) in surface waters and erythromycin (6.6 ng/g dry wt) in soils. The optimized sample extraction strategy can be used to improve the detection of a variety of antibiotics in environmental surface waters and soils.

  7. Multi - party Game Analysis of Coal Industry and Industry Regulation Policy Optimization

    Science.gov (United States)

    Jiang, Tianqi

    2018-01-01

    In the face of frequent coal mine safety accidents, this paper analyses the relationships among central and local governments, coal mining enterprises and miners from the perspective of a multi-group game. In actual production, the decision of any one of the three groups can affect the strategies of the other two, so a corresponding order of play must be assumed. Under this order, the payoffs and decisions of the three parties are analysed, and the equilibrium strategies of the government, the enterprises and the workers are derived through the construction of payoff matrices. On this basis, practical recommendations are proposed for optimizing the regulation of the coal industry under the existing system, so as to reduce the frequency of safety accidents and improve the industry's production environment.

  8. Optimizing maintenance and repair policies via a combination of genetic algorithms and Monte Carlo simulation

    International Nuclear Information System (INIS)

    Marseguerra, M.; Zio, E.

    2000-01-01

    In this paper we present an optimization approach based on the combination of a Genetic Algorithm maximization procedure with Monte Carlo simulation. The approach is applied within the context of plant logistic management, as regards the choice of maintenance and repair strategies. A stochastic model of plant operation is developed from the standpoint of its reliability/availability behavior, i.e. of the failure/repair/maintenance processes of its components. The model is evaluated by Monte Carlo simulation in terms of the economic costs and revenues of operation. The flexibility of the Monte Carlo method allows us to include several practical aspects such as stand-by operation modes, deteriorating repairs, aging, sequences of periodic maintenances, the number of repair teams available for different kinds of repair interventions (mechanical, electronic, hydraulic, etc.), and component priority rankings. A genetic algorithm is then utilized to optimize the component maintenance periods and the number of repair teams. The fitness function being optimized is a profit function which inherently accounts for the safety and economic performance of the plant and whose value is computed by the above Monte Carlo simulation model. For an efficient combination of Genetic Algorithms and Monte Carlo simulation, only a few hundred Monte Carlo histories are performed for each potential solution proposed by the genetic algorithm. Statistical significance of the results for the solutions of interest (i.e. the best ones) is then attained by exploiting the fact that, during the population evolution, fit chromosomes appear repeatedly many times. The proposed optimization approach is applied to two case studies of increasing complexity.
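
    The coupling described above (a genetic algorithm whose fitness values are cheap, low-precision Monte Carlo estimates) can be sketched in a few lines. The model below is a deliberately crude stand-in, with one component subject to exponential failures, a single decision variable for the maintenance period, and invented cost figures; it is not the paper's plant model:

        import random

        HORIZON, N_HISTORIES = 200.0, 100   # only a few hundred histories per candidate

        def mc_profit(period, rate=0.01, c_maint=1.0, c_fail=50.0, revenue=0.5):
            # Monte Carlo estimate of operating profit for a maintenance period:
            # exponential failures; maintenance renews the component.
            total = 0.0
            for _ in range(N_HISTORIES):
                t, cost = 0.0, 0.0
                while t < HORIZON:
                    life = random.expovariate(rate)
                    if life < period:        # failure occurs before maintenance
                        cost += c_fail
                        t += life
                    else:                    # preventive maintenance reached first
                        cost += c_maint
                        t += period
                total += revenue * HORIZON - cost
            return total / N_HISTORIES

        def ga(pop_size=12, gens=15):
            pop = [random.uniform(5, 100) for _ in range(pop_size)]
            for _ in range(gens):
                elite = sorted(pop, key=mc_profit, reverse=True)[: pop_size // 2]
                pop = elite + [max(5.0, e + random.gauss(0, 5)) for e in elite]
            return max(pop, key=mc_profit)

        print("best maintenance period ~", round(ga(), 1))

    The noisy fitness is tolerable for exactly the reason the abstract gives: good candidates persist in the population and are re-evaluated many times, so their estimates are effectively refined.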

  9. GROUP-BUYING ONLINE AUCTION AND OPTIMAL INVENTORY POLICY IN UNCERTAIN MARKET

    Institute of Scientific and Technical Information of China (English)

    Jian CHEN; Yunhui LIU; Xiping SONG

    2004-01-01

    In this paper we consider a group-buying online auction (GBA) model for a monopolistic manufacturer selling novel products in an uncertain market. We first introduce the bidders' dominant strategy, after which we jointly optimize the GBA price curve and the production volume. Finally, we compare the GBA with the traditional posted-pricing mechanism and find that the GBA is likely to outperform posted pricing in appropriate market environments.

  10. Optimal Return Service Charging Policy for a Fashion Mass Customization Program

    OpenAIRE

    Tsan-Ming Choi

    2013-01-01

    Mass customization (MC) service is a pertinent industrial practice in the fashion industry. To foster trust and enhance demand, some brands now allow dissatisfied customers to return the MC fashion product for a full refund minus a service charge. The service charge is a measure to avoid the abuse of the return right and to subsidize the operations cost (e.g., shipping) and loss from the return. Motivated by this observed industrial practice, this paper analytically examines the optimal retur...

  11. Externalities, Border Trade and Illegal Production: An Optimal Tax Approach to Alcohol Policy

    OpenAIRE

    Aronsson, Thomas; Sjögren, Tomas

    2005-01-01

    This paper deals with optimal income and commodity taxation in an economy, where alcohol is an externality-generating consumption good. In our model, alcohol can be bought domestically, imported (via border trade) or produced illegally. Border trade implies an incentive to set the domestic alcohol tax below the marginal social damage of alcohol, and to tax (subsidize) commodities which are complementary with (substitutable for) alcohol. In addition, since leisure and alcohol consumption are g...

  12. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification

    Directory of Open Access Journals (Sweden)

    D. Ramyachitra

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resulting dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improved Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.

  13. Interval-value Based Particle Swarm Optimization algorithm for cancer-type specific gene selection and sample classification.

    Science.gov (United States)

    Ramyachitra, D; Sofia, M; Manikandan, P

    2015-09-01

    Microarray technology allows simultaneous measurement of the expression levels of thousands of genes within a biological tissue sample. The fundamental power of microarrays lies in the ability to conduct parallel surveys of gene expression using microarray data. The classification of tissue samples based on gene expression data is an important problem in the medical diagnosis of diseases such as cancer. In gene expression data, the number of genes is usually very high compared to the number of data samples; the difficulty is thus that the data are of high dimensionality while the sample size is small. This research work addresses the problem by classifying the resulting dataset using the existing algorithms Support Vector Machine (SVM), K-nearest neighbor (KNN) and Interval Valued Classification (IVC), and the improved Interval Value based Particle Swarm Optimization (IVPSO) algorithm. The results show that the IVPSO algorithm outperformed the other algorithms under several performance evaluation functions.
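
    As an orientation for the PSO component, the sketch below shows plain binary particle swarm optimization used for gene (feature) selection, scoring each candidate gene mask with leave-one-out 1-nearest-neighbour accuracy on synthetic data. Note that this is the standard binary PSO, not the authors' interval-valued variant, and the data, swarm size and coefficients are all illustrative assumptions:

        import math, random

        random.seed(1)
        # Toy data: 40 samples, 30 "genes", only genes 0-2 informative.
        X = [[random.gauss(cls, 1) if g < 3 else random.gauss(0, 1) for g in range(30)]
             for cls in (0, 2) for _ in range(20)]
        y = [cls for cls in (0, 1) for _ in range(20)]

        def accuracy(mask):
            # Leave-one-out 1-nearest-neighbour accuracy on the selected genes.
            feats = [g for g, m in enumerate(mask) if m]
            if not feats:
                return 0.0
            hits = 0
            for i in range(len(X)):
                d, best = None, None
                for j in range(len(X)):
                    if i == j:
                        continue
                    dij = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
                    if d is None or dij < d:
                        d, best = dij, y[j]
                hits += best == y[i]
            return hits / len(X)

        def binary_pso(n_particles=10, iters=20, dim=30):
            pos = [[random.random() < 0.5 for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]
            gbest = max(pbest, key=accuracy)
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        vel[i][d] = (0.7 * vel[i][d]
                                     + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                                     + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                        # Sigmoid transfer turns the velocity into a bit probability.
                        pos[i][d] = random.random() < 1 / (1 + math.exp(-vel[i][d]))
                    if accuracy(pos[i]) > accuracy(pbest[i]):
                        pbest[i] = pos[i][:]
                gbest = max(pbest + [gbest], key=accuracy)
            return gbest

        sel = binary_pso()
        print("selected genes:", [g for g, m in enumerate(sel) if m])

    On data of this shape the swarm tends to concentrate on the informative genes; real microarray data of course require the refinements the paper introduces.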

  14. Optimal Overhaul-Replacement Policies for Repairable Machine Sold with Warranty

    OpenAIRE

    Soemadi, Kusmaningrum; Iskandar, Bermawi P; Taroepratjeka, Harsono

    2014-01-01

    This research deals with an overhaul-replacement policy for a repairable machine sold with a Free Replacement Warranty (FRW). The machine will be used over a finite horizon T (T < ∞) and evaluated at a fixed interval s (s < T). At each evaluation point, the buyer considers three alternative decisions, i.e. keep the machine, overhaul it, or replace it with a new identical one. An overhaul can virtually reduce the machine's age, but not to the point that the machine is as good as new. If the mac...

  15. Optimized Clinical Use of RNALater and FFPE Samples for Quantitative Proteomics

    DEFF Research Database (Denmark)

    Bennike, Tue Bjerg; Kastaniegaard, Kenneth; Padurariu, Simona

    2015-01-01

    Introduction and Objectives: The availability of patient samples is essential for clinical proteomic research. Biobanks worldwide store mainly samples stabilized in RNAlater as well as formalin-fixed and paraffin-embedded (FFPE) biopsies. Biobank material is a potential source for clinical ... we compare to FFPE and frozen samples as controls. Methods: Twenty-four biopsies were extracted endoscopically from the sigmoideum of two healthy participants. The biopsies were stabilized either by direct freezing, RNAlater or FFPE, or by incubation for 30 min at room temperature prior to FFPE ... information. Conclusion: We have demonstrated that quantitative proteome analysis and pathway mapping of samples stabilized in RNAlater as well as by FFPE is feasible with minimal impact on the quality of protein quantification and post-translational modifications.

  16. COARSE: Convex Optimization based autonomous control for Asteroid Rendezvous and Sample Exploration, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Sample return missions, by nature, require high levels of spacecraft autonomy. Developments in hardware avionics have led to more capable real-time onboard computing...

  17. Optimal Ordering and Pricing Policies for Seasonal Products: Impacts of Demand Uncertainty and Capital Constraint

    Directory of Open Access Journals (Sweden)

    Jinzhao Shi

    2016-01-01

    With a stochastic price-dependent market demand, this paper investigates how demand uncertainty and capital constraints affect a retailer's integrated ordering and pricing policies for seasonal products. The capital-constrained retailer is normalized to have zero capital endowment but can be financed by an external bank. The problem is studied under low and high demand uncertainty scenarios, respectively. Results show that when the demand uncertainty level is relatively low, the retailer faced with demand uncertainty always sets a lower price than the riskless one, while its order quantity may be smaller or larger than the riskless retailer's, depending on the market size. When a capital constraint is added, the retailer strictly prefers a higher-price, smaller-quantity policy. In a high demand uncertainty scenario, however, the impacts are more intricate. The retailer faced with demand uncertainty will always order a larger quantity than the riskless one if the demand uncertainty level is high enough (above a critical value), while the capital-constrained retailer is likely to set a lower price than the well-funded one when the demand uncertainty level falls within a specific interval. It can therefore be concluded that the impact of the capital constraint on the retailer's pricing decision is influenced by the demand uncertainty level.
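
    A numerical feel for such joint price-quantity decisions can be had with a coarse Monte Carlo grid search over a single-period model. The demand form, the lognormal uncertainty, and the cost and interest figures below are all invented for illustration and are far simpler than the paper's model:

        import random

        random.seed(0)

        def demand(p, eps):
            return max(0.0, (100 - 8 * p) * eps)   # assumed price-dependent form

        def expected_profit(p, q, c=2.0, interest=0.1, n=500):
            # Retailer with zero capital borrows c*q at rate `interest`.
            total = 0.0
            for _ in range(n):
                eps = random.lognormvariate(0, 0.3)  # demand uncertainty level
                total += p * min(q, demand(p, eps)) - c * q * (1 + interest)
            return total / n

        grid = ((p / 10, q) for p in range(25, 100, 5) for q in range(5, 80, 5))
        best = max(grid, key=lambda pq: expected_profit(*pq))
        print("approximately optimal (price, quantity):", best)

    Re-running with a larger sigma in the lognormal term is a quick way to see the qualitative effect the abstract describes: the preferred price and quantity both shift as uncertainty grows.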

  18. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
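
    The core idea, iteratively adding the candidate site most dissimilar in environmental space to the sites already chosen, can be emulated without the MaxEnt machinery by farthest-point selection on standardized environmental variables. This is a simplified stand-in for the authors' procedure, with randomly generated candidate sites and four assumed factors:

        import random

        random.seed(2)
        # Candidate sites described by 4 standardized environmental factors
        # (e.g. temperature, precipitation, elevation, vegetation score).
        sites = [[random.gauss(0, 1) for _ in range(4)] for _ in range(500)]

        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        def pick_dissimilar(sites, k=8):
            # Start from an environmental extreme, then greedily add the site
            # farthest from its nearest already-chosen site.
            chosen = [max(sites, key=lambda s: dist(s, [0, 0, 0, 0]))]
            while len(chosen) < k:
                nxt = max(sites, key=lambda s: min(dist(s, c) for c in chosen))
                chosen.append(nxt)
            return chosen

        for s in pick_dissimilar(sites):
            print([round(v, 2) for v in s])

    Greedy selection of this kind is what lets a handful of sites (eight, in the study) span most of the environmental envelope.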

  19. Efficiency enhancement of optimized Latin hypercube sampling strategies: Application to Monte Carlo uncertainty analysis and meta-modeling

    Science.gov (United States)

    Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad; Janssen, Hans

    2015-02-01

    The majority of the literature regarding optimized Latin hypercube sampling (OLHS) is devoted to increasing the efficiency of these sampling strategies through the development of new algorithms based on the combination of innovative space-filling criteria and specialized optimization schemes. However, little attention has been given to the impact of the initial design that is fed into the optimization algorithm on the efficiency of OLHS strategies. Previous studies, as well as codes developed for OLHS, have relied on one of the following two approaches for the selection of the initial design: (1) the use of random points in the hypercube intervals (random LHS), and (2) the use of midpoints in the hypercube intervals (midpoint LHS). Both approaches have been extensively used, but no attempt had previously been made to compare the efficiency and robustness of their resulting sample designs. In this study we compare the two approaches and show that the space-filling characteristics of OLHS designs are sensitive to the initial design that is fed into the optimization algorithm. It is also illustrated that the space-filling characteristics of OLHS designs based on midpoint LHS are significantly better than those based on random LHS. The two approaches are compared by incorporating their resulting sample designs in Monte Carlo simulation (MCS) for uncertainty propagation analysis, and then by employing the sample designs in the selection of the training set for constructing non-intrusive polynomial chaos expansion (NIPCE) meta-models which subsequently replace the original full model in MCSs. The analysis is based on two case studies involving numerical simulation of density-dependent flow and solute transport in porous media within the context of seawater intrusion in coastal aquifers. We show that the use of midpoint LHS as the initial design increases the efficiency and robustness of the resulting MCSs and NIPCE meta-models. The study also illustrates that this ...
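
    The random-versus-midpoint distinction is easy to reproduce. The sketch below generates both kinds of (unoptimized) LHS initial designs and scores them with a maximin-distance space-filling criterion; the sizes and the criterion are illustrative choices, not the paper's full OLHS pipeline:

        import random

        def lhs(n, dim, midpoint=False):
            # Latin hypercube sample: one point per stratum in each dimension.
            cols = []
            for _ in range(dim):
                perm = list(range(n))
                random.shuffle(perm)
                cols.append([(i + (0.5 if midpoint else random.random())) / n for i in perm])
            return list(zip(*cols))

        def maximin(design):
            # Space-filling criterion: minimum pairwise distance (larger is better).
            return min(sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
                       for i, p in enumerate(design) for q in design[i + 1:])

        random.seed(3)
        for kind, mid in (("random LHS", False), ("midpoint LHS", True)):
            scores = [maximin(lhs(20, 2, mid)) for _ in range(200)]
            print(kind, "mean maximin:", round(sum(scores) / len(scores), 4))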

  20. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. Particularly, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF also results in a relation between the SFR of a galaxy and the mass of its most massive star such that, if there were no binaries, galaxies with sufficiently low SFRs would not host very massive stars. For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMF) that mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided allowing the calculation of the IGIMF and OSGIMF dependent on the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126

  1. Road traffic related mortality in Vietnam: Evidence for policy from a national sample mortality surveillance system

    Directory of Open Access Journals (Sweden)

    Ngo Anh D

    2012-07-01

    Background: Road traffic injuries (RTIs) are among the leading causes of mortality in Vietnam. However, mortality data collection systems in Vietnam in general, and for RTIs in particular, remain inconsistent and incomplete. Underlying distributions of external causes and body injuries are not available from routine data collection systems or from studies to date. This paper presents the characteristics, user type pattern, seasonal distribution, and causes of 1,061 deaths attributable to road crashes, ascertained from a national sample mortality surveillance system in Vietnam over a two-year period (2008 and 2009). Methods: A sample mortality surveillance system was designed for Vietnam, comprising 192 communes in 16 provinces and accounting for approximately 3% of the Vietnamese population. Deaths were identified from commune-level data sources and followed up with verbal autopsy (VA) based ascertainment of the cause of death. Age-standardised mortality rates from RTIs were computed. VA questionnaires were analysed in depth to derive descriptive characteristics of RTI deaths in the sample. Results: The age-standardized mortality rates from RTIs were 33.5 and 8.5 per 100,000 for males and females, respectively. The majority of deaths were males (79%). Seventy-three percent of all deaths were aged 15 to 49 years and 58% were motorcycle users. As many as 80% of deaths occurred on the day of injury, 42% occurred prior to arrival at hospital, and a further 29% occurred on-site. Direct causes of death were identified for 446 deaths (42%), with head injuries being the most common cause attributable to road traffic injuries overall (79%) and to motorcycle crashes in particular (78%). Conclusion: The VA method can provide a useful data source to analyse RTI mortality. The observed considerable mortality from head injuries among motorcycle users highlights the need to evaluate current practice and effectiveness of motorcycle helmet use in Vietnam. The high number of ...

  2. MCMC-ODPR: Primer design optimization using Markov Chain Monte Carlo sampling

    Directory of Open Access Journals (Sweden)

    Kitchen James L

    2012-11-01

    Background: Next-generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage and providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. Results: After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. Conclusions: MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.

  3. MCMC-ODPR: primer design optimization using Markov Chain Monte Carlo sampling.

    Science.gov (United States)

    Kitchen, James L; Moore, Jonathan D; Palmer, Sarah A; Allaby, Robin G

    2012-11-05

    Next-generation sequencing technologies often require numerous primer designs with good target coverage, which can be financially costly. We aimed to develop a system that would implement primer reuse to design degenerate primers that could be designed around SNPs, thus finding the fewest necessary primers at the lowest cost whilst maintaining acceptable coverage and providing a cost-effective solution. We have implemented Metropolis-Hastings Markov Chain Monte Carlo for optimizing primer reuse. We call it the Markov Chain Monte Carlo Optimized Degenerate Primer Reuse (MCMC-ODPR) algorithm. After repeating the program 1020 times to assess the variance, an average of 17.14% fewer primers were found to be necessary using MCMC-ODPR for an equivalent coverage without implementing primer reuse. The algorithm was able to reuse primers up to five times. We compared MCMC-ODPR with the single-sequence primer design programs Primer3 and Primer-BLAST and achieved lower primer costs per amplicon base covered of 0.21, 0.19 and 0.18 primer nucleotides on three separate gene sequences, respectively. With multiple sequences, MCMC-ODPR achieved a lower cost per base covered of 0.19 than the programs BatchPrimer3 and PAMPS, which achieved 0.25 and 0.64 primer nucleotides, respectively. MCMC-ODPR is a useful tool for designing primers at various melting temperatures with good target coverage. By combining degeneracy with optimal primer reuse the user may increase coverage of sequences amplified by the designed primers at significantly lower costs. Our analyses showed that overall MCMC-ODPR outperformed the other primer-design programs in our study in terms of cost per covered base.
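
    The Metropolis-Hastings machinery itself is compact enough to sketch. Below it minimizes a toy primer-set cost (a nucleotide cost plus an uncovered-target penalty) by randomly toggling primers in and out of the set; the candidate pool, coverage sets, costs and temperature are all invented, and the real MCMC-ODPR state space is much richer:

        import math, random

        random.seed(4)
        # Toy stand-in: choose a subset of 40 candidate primers covering 100 targets.
        covers = [set(random.sample(range(100), 12)) for _ in range(40)]

        def cost(subset):
            covered = set().union(*(covers[i] for i in subset)) if subset else set()
            return len(subset) * 20 + (100 - len(covered)) * 5   # nucleotides + penalty

        def metropolis(iters=20000, temp=3.0):
            state = set(random.sample(range(40), 10))
            best = set(state)
            for _ in range(iters):
                prop = set(state)
                prop.symmetric_difference_update({random.randrange(40)})  # flip one primer
                d = cost(prop) - cost(state)
                if d <= 0 or random.random() < math.exp(-d / temp):
                    state = prop
                if cost(state) < cost(best):
                    best = set(state)
            return best

        b = metropolis()
        print(len(b), "primers, cost", cost(b))

    Accepting some uphill moves (the exp(-d/temp) branch) is what lets the chain escape locally optimal primer sets, which is the advantage of MCMC over a purely greedy reuse strategy.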

  4. Spatio-temporal optimization of sampling for bluetongue vectors (Culicoides) near grazing livestock

    DEFF Research Database (Denmark)

    Kirkeby, Carsten; Stockmarr, Anders; Bødker, Rene

    2013-01-01

    BACKGROUND: Estimating the abundance of Culicoides using light traps is influenced by a large variation in abundance in time and place. This study investigates the optimal trapping strategy to estimate the abundance or presence/absence of Culicoides on a field with grazing animals. We used 45 light ... absence of vectors on the field. The variation in the estimated abundance decreased steeply when using up to six traps, and was less pronounced when using more traps, although no clear cutoff was found. CONCLUSIONS: Despite spatial clustering in vector abundance, we found no effect of increasing ... monitoring programmes on fields with grazing animals.

  5. Optimized sample preparation for two-dimensional gel electrophoresis of soluble proteins from chicken bursa of Fabricius

    Directory of Open Access Journals (Sweden)

    Zheng Xiaojuan

    2009-10-01

    Background: Two-dimensional gel electrophoresis (2-DE) is a powerful method to study protein expression and function in living organisms and diseases. This technique, however, has not been applied to the avian bursa of Fabricius (BF), a central immune organ. Here, optimized 2-DE sample preparation methodologies were established for chicken BF tissue. Using the optimized protocol, we performed further 2-DE analysis on a soluble protein extract from the BF of chickens infected with virulent avibirnavirus. To demonstrate the quality of the extracted proteins, several differentially expressed protein spots were selected, cut from 2-DE gels, and identified by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). Results: An extraction buffer containing 7 M urea, 2 M thiourea, 2% (w/v) 3-[(3-cholamidopropyl)dimethylammonio]-1-propanesulfonate (CHAPS), 50 mM dithiothreitol (DTT), 0.2% Bio-Lyte 3/10, 1 mM phenylmethylsulfonyl fluoride (PMSF), 20 U/ml Deoxyribonuclease I (DNase I), and 0.25 mg/ml Ribonuclease A (RNase A), combined with sonication and vortexing, yielded the best 2-DE data. Relative to non-frozen immobilized pH gradient (IPG) strips, frozen IPG strips did not result in significant changes in the 2-DE patterns after isoelectric focusing (IEF). When the optimized protocol was used to analyze the spleen and thymus, as well as avibirnavirus-infected bursa, high-quality 2-DE protein expression profiles were obtained. 2-DE maps of the BF of chickens infected with virulent avibirnavirus were visibly different, and many differentially expressed proteins were found. Conclusion: These results showed that method C, in concert with extraction buffer IV, was the most favorable for preparing samples for IEF and subsequent protein separation, and yielded the best-quality 2-DE patterns. The optimized protocol is a useful sample preparation method for comparative proteomics analysis of chicken BF tissues.

  6. Optimizing sampling strategy for radiocarbon dating of Holocene fluvial systems in a vertically aggrading setting

    International Nuclear Information System (INIS)

    Toernqvist, T.E.; Dijk, G.J. Van

    1993-01-01

    The authors address the question of how to determine the period of activity (sedimentation) of fossil (Holocene) fluvial systems in vertically aggrading environments. The available database consists of almost 100 14C ages from the Rhine-Meuse delta. Radiocarbon samples from the tops of lithostratigraphically correlative organic beds underneath overbank deposits (sample type 1) yield consistent ages, indicating a synchronous onset of overbank deposition over distances of at least up to 20 km along channel belts. Similarly, 14C ages from the base of organic residual channel fills (sample type 3) generally indicate a clear termination of within-channel sedimentation. In contrast, 14C ages from the base of organic beds overlying overbank deposits (sample type 2), commonly assumed to represent the end of fluvial sedimentation, show a large scatter reaching up to 1000 14C years. It is concluded that a combination of sample types 1 and 3 generally yields a satisfactory delimitation of the period of activity of a fossil fluvial system. 30 refs., 11 figs., 4 tabs

  7. The optimal replenishment policy for time-varying stochastic demand under vendor managed inventory

    DEFF Research Database (Denmark)

    Govindan, Kannan

    2015-01-01

    A Vendor Managed Inventory (VMI) partnership places the responsibility on the vendor (rather than on buyers) to schedule purchase orders for inventory replenishment in the supply chain system. In this research, the supply chain network considers the Silver-Meal heuristic with an augmentation ... quantity replenishment policy in both traditional and VMI systems. We consider time-varying stochastic demand in two-echelon (one vendor, multiple retailers) supply chains. This paper seeks to find the supply chain configuration that minimizes system cost by comparing the performance of traditional and VMI systems. A mathematical model is developed, and total supply chain cost is used as the measure of comparison. The models are applied to both traditional and VMI supply chains using pharmaceutical industry data, and we focus on the total cost difference compared through the use of the Adjusted Silver-Meal (ASM ...
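
    The Silver-Meal heuristic at the heart of this comparison is short enough to state exactly: starting from each order period, keep absorbing future periods' demand into the lot while the average cost per period (setup plus accumulated holding) keeps falling. A minimal sketch with invented demand and cost figures:

        def silver_meal(demand, setup_cost, holding_cost):
            # Classic Silver-Meal lot sizing: extend each order to cover future
            # periods while the average cost per period keeps decreasing.
            orders, t = [], 0
            while t < len(demand):
                best_avg, k, hold = None, 0, 0.0
                while t + k < len(demand):
                    hold += k * holding_cost * demand[t + k]
                    avg = (setup_cost + hold) / (k + 1)
                    if best_avg is not None and avg > best_avg:
                        break
                    best_avg, k = avg, k + 1
                orders.append((t, sum(demand[t:t + k])))   # (period, lot size)
                t += k
            return orders

        print(silver_meal([20, 50, 10, 50, 50, 10, 20, 40, 20, 30],
                          setup_cost=100, holding_cost=1))

    The paper's Adjusted Silver-Meal variant builds on this basic rule; the abstract is truncated before the details of the adjustment.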

  8. Climate policy and the optimal extraction of high- and low-carbon fossil fuels

    International Nuclear Information System (INIS)

    Smulders, S.; Van der Werf, E.

    2005-01-01

    We study how restricting CO2 emissions affects resource prices and depletion over time. We use a Hotelling-style model with two non- renewable fossil fuels that differ in their carbon content (e.g. coal and natural gas) and that are imperfect substitutes in final good production. We study both an unexpected constraint and an anticipated constraint. Both shocks induce intertemporal substitution of resource use. When emissions are unexpectedly restricted, it is cost-effective to use high-carbon resources relatively more (less) intensively on impact if this resource is relatively scarce (abundant). If the emission constraint is anticipated, it is cost-effective to use relatively more (less) of the low-carbon input before the constraint becomes binding, in order to conserve relatively more (less) of the high-carbon input for the period when climate policy is active in case the high-carbon resource is relatively scarce (abundant)

  9. Optimal sampling designs for estimation of Plasmodium falciparum clearance rates in patients treated with artemisinin derivatives

    Science.gov (United States)

    2013-01-01

    Background: The emergence of Plasmodium falciparum resistance to artemisinins in Southeast Asia threatens the control of malaria worldwide. The pharmacodynamic hallmark of artemisinin derivatives is rapid parasite clearance (a short parasite half-life); therefore, the in vivo phenotype of slow clearance defines reduced susceptibility to the drug. Measurement of parasite counts every six hours during the first three days after treatment has been recommended to measure the parasite clearance half-life, but it remains unclear whether simpler sampling intervals and frequencies might also be sufficient to reliably estimate this parameter. Methods: A total of 2,746 parasite density-time profiles were selected from 13 clinical trials in Thailand, Cambodia, Mali, Vietnam, and Kenya. In these studies, parasite densities were measured every six hours until negative after treatment with an artemisinin derivative (alone or in combination with a partner drug). The WWARN Parasite Clearance Estimator (PCE) tool was used to estimate "reference" half-lives from these six-hourly measurements. The effect of four alternative sampling schedules on half-life estimation was investigated and compared to the reference half-life (time zero, 6, 12, 24 (A1); zero, 6, 18, 24 (A2); zero, 12, 18, 24 (A3); or zero, 12, 24 (A4) hours, and then every 12 hours). Statistical bootstrap methods were used to estimate the sampling distribution of half-lives for parasite populations with different geometric mean half-lives. A simulation study was performed to investigate a suite of 16 potential alternative schedules, and half-life estimates generated by each of the schedules were compared to the "true" half-life. The candidate schedules in the simulation study included (among others) six-hourly sampling, schedule A1, schedule A4, and a convenience sampling schedule at six, seven, 24, 25, 48 and 49 hours. Results: The median (range) parasite half-life for all clinical studies combined was 3.1 (0 ...

  10. Sterile Reverse Osmosis Water Combined with Friction Are Optimal for Channel and Lever Cavity Sample Collection of Flexible Duodenoscopes

    Directory of Open Access Journals (Sweden)

    Michelle J. Alfa

    2017-11-01

    Introduction: A simulated-use buildup biofilm (BBF) model was used to assess various extraction fluids and friction methods to determine the optimal sample collection method for polytetrafluoroethylene channels. In addition, simulated-use testing was performed for the channel and lever cavity of duodenoscopes. Materials and methods: BBF was formed in polytetrafluoroethylene channels using Enterococcus faecalis, Escherichia coli, and Pseudomonas aeruginosa. Sterile reverse osmosis (RO) water, phosphate-buffered saline with and without Tween80, and two neutralizing broths (Letheen and Dey-Engley) were each assessed with and without friction. Neutralizer was added immediately after sample collection and samples were concentrated using centrifugation. Simulated-use testing was done using TJF-Q180V and JF-140F Olympus duodenoscopes. Results: Despite variability in the bacterial CFU in the BBF model, none of the extraction fluids tested were significantly better than RO. Borescope examination showed far less residual material when friction was part of the extraction protocol. RO with flush-brush-flush (FBF) extraction provided significantly better recovery of E. coli (p = 0.02) from duodenoscope lever cavities compared to the CDC flush method. Discussion and conclusion: We recommend RO with friction for FBF extraction of the channel and lever cavity of duodenoscopes. Neutralizer and sample concentration optimize recovery of viable bacteria on culture.

  11. Controlling measles using supplemental immunization activities: a mathematical model to inform optimal policy.

    Science.gov (United States)

    Verguet, Stéphane; Johri, Mira; Morris, Shaun K; Gauvreau, Cindy L; Jha, Prabhat; Jit, Mark

    2015-03-03

    The Measles & Rubella Initiative, a broad consortium of global health agencies, has provided support to measles-burdened countries, focusing on sustaining high coverage of routine immunization of children and supplementing it with a second dose opportunity for measles vaccine through supplemental immunization activities (SIAs). We estimate optimal scheduling of SIAs in countries with the highest measles burden. We develop an age-stratified dynamic compartmental model of measles transmission. We explore the frequency of SIAs in order to achieve measles control in selected countries and two Indian states with high measles burden. Specifically, we compute the maximum allowable time period between two consecutive SIAs to achieve measles control. Our analysis indicates that a single SIA will not control measles transmission in any of the countries with high measles burden. However, regular SIAs at high coverage levels are a viable strategy to prevent measles outbreaks. The periodicity of SIAs differs between countries and even within a single country, and is determined by population demographics and existing routine immunization coverage. Our analysis can guide country policymakers deciding on the optimal scheduling of SIA campaigns and the best combination of routine and SIA vaccination to control measles. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
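
    To make the scheduling question concrete, a toy (non-age-structured) SIR model with routine infant vaccination and periodic SIA pulses can show how the interval between campaigns governs outbreak risk. All parameter values below are rough, assumed figures, and the model omits the age stratification that drives the paper's country-specific results:

        def simulate(years=20, sia_interval=4.0, routine=0.8, sia_cov=0.9,
                     beta=500.0, gamma=365 / 14, mu=0.03):
            # Population fractions; explicit Euler steps of one day.
            dt, S, I, R = 1 / 365.0, 0.06, 1e-4, 0.94
            next_sia, peak = sia_interval, 0.0
            for k in range(int(years / dt)):
                t = k * dt
                if t >= next_sia:            # SIA pulse immunizes susceptibles
                    R += sia_cov * S
                    S *= 1 - sia_cov
                    next_sia += sia_interval
                dS = mu * (1 - routine) - mu * S - beta * S * I
                dI = beta * S * I - gamma * I - mu * I
                dR = mu * routine + gamma * I - mu * R
                S, I, R = S + dS * dt, I + dI * dt, R + dR * dt
                if t > sia_interval:         # track outbreaks after the first SIA
                    peak = max(peak, I)
            return peak

        for gap in (2, 4, 6):
            print(f"SIA every {gap} yr -> peak infectious fraction {simulate(sia_interval=gap):.2e}")

    The longer the gap, the more susceptibles accumulate through births that routine coverage misses, which is the mechanism behind the paper's maximum allowable time between SIAs.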

  12. Optimizing sampling design to deal with mist-net avoidance in Amazonian birds and bats.

    Directory of Open Access Journals (Sweden)

    João Tiago Marques

    Mist netting is a widely used technique to sample bird and bat assemblages. However, captures often decline with time because animals learn and avoid the locations of nets. This avoidance, or net shyness, can substantially decrease sampling efficiency. We quantified the day-to-day decline in captures of Amazonian birds and bats with mist nets set at the same location for four consecutive days. We also evaluated how net avoidance influences the efficiency of surveys under different logistic scenarios using re-sampling techniques. Net avoidance caused substantial declines in bird and bat captures, although more accentuated in the latter. Most of the decline occurred between the first and second days of netting: 28% in birds and 47% in bats. Captures of commoner species were more affected. The numbers of species detected also declined. Moving nets daily to minimize the avoidance effect increased captures by 30% in birds and 70% in bats. However, moving the location of nets may cause a reduction in netting time and captures. When moving the nets caused the loss of one netting day, it was no longer advantageous to move them frequently; in bird surveys, doing so could even decrease the number of individuals captured and species detected. Net avoidance can greatly affect sampling efficiency, but adjustments in survey design can minimize this. Whenever nets can be moved without losing netting time and the objective is to capture many individuals, they should be moved daily. If the main objective is to survey the species present, then nets should still be moved for bats, but not for birds. However, if relocating nets causes a significant loss of netting time, moving them to reduce the effects of shyness will not improve sampling efficiency in either group. Overall, our findings can improve the design of mist netting sampling strategies in other tropical areas.

  13. The optimal retailer's ordering policies with trade credit financing and limited storage capacity in the supply chain system

    Science.gov (United States)

    Yen, Ghi-Feng; Chung, Kun-Jen; Chen, Tzung-Ching

    2012-11-01

    The traditional economic order quantity model assumes that the retailer's storage capacity is unlimited. However, the capacity of any warehouse is in fact limited. In practice, various factors often induce the decision-maker of an inventory system to order more items than can be held in his/her own warehouse, so it is very practical for the decision-maker to determine whether or not to rent additional warehouses. In this article, we incorporate two levels of trade credit and two separate warehouses (an own warehouse and a rented warehouse) to establish a new inventory model that supports this decision. Four theorems are provided to determine the optimal cycle time, generalising some existing articles. Finally, a sensitivity analysis is executed to investigate the effects of the various parameters on the ordering policies and annual costs of the inventory system.

  14. An optimized Line Sampling method for the estimation of the failure probability of nuclear passive systems

    International Nuclear Information System (INIS)

    Zio, E.; Pedroni, N.

    2010-01-01

    The quantitative reliability assessment of a thermal-hydraulic (T-H) passive safety system of a nuclear power plant can be obtained by (i) Monte Carlo (MC) sampling the uncertainties of the system model and parameters; (ii) computing, for each sample, the system response with a mechanistic T-H code; and (iii) comparing the system response with pre-established safety thresholds, which define the success or failure of the safety function. The computational effort involved can be prohibitive because of the large number of (typically long) T-H code simulations that must be performed (one for each sample) for the statistical estimation of the probability of success or failure. In this work, Line Sampling (LS) is adopted for efficient MC sampling. In the LS method, an 'important direction' pointing towards the failure domain of interest is determined, and a number of conditional one-dimensional problems are solved along that direction; this allows for a significant reduction of the variance of the failure probability estimator with respect to, for example, standard random sampling. Two issues are still open with respect to LS: first, the method relies on the determination of the 'important direction', which requires additional runs of the T-H code; second, although the method has been shown to improve computational efficiency by reducing the variance of the failure probability estimator, no evidence has yet been given that accurate and precise failure probability estimates can be obtained with a number of samples reduced below a few hundred, as may be required in the case of long-running models. The work presented in this paper addresses the first issue by (i) quantitatively comparing the efficiency of the methods proposed in the literature to determine the LS important direction; (ii) employing artificial neural network (ANN) regression models as fast-running surrogates of the original, long-running T-H code to reduce the computational cost associated with the ...
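
    The geometry of Line Sampling is easy to demonstrate on a toy standard-normal problem: each random point is projected onto the hyperplane orthogonal to the important direction, the one-dimensional failure point along that direction is found by bisection, and its conditional failure probability Phi(-c) is averaged over the lines. The linear limit-state function and the important direction below are assumptions chosen so the exact answer is known; in the paper, evaluating g means running the T-H code:

        import math, random

        random.seed(5)

        def g(x):
            # Stand-in limit state (failure when g < 0).
            return 4.0 - x[0] - x[1]

        def phi_cdf(u):
            return 0.5 * (1 + math.erf(u / math.sqrt(2)))

        ALPHA = [1 / math.sqrt(2), 1 / math.sqrt(2)]   # assumed important direction

        def line_sampling(n_lines=50):
            total = 0.0
            for _ in range(n_lines):
                z = [random.gauss(0, 1), random.gauss(0, 1)]
                dot = sum(zi * ai for zi, ai in zip(z, ALPHA))
                perp = [zi - dot * ai for zi, ai in zip(z, ALPHA)]
                lo, hi = 0.0, 10.0           # bisect g(perp + c*alpha) = 0 for c
                for _ in range(60):
                    mid = (lo + hi) / 2
                    x = [pi + mid * ai for pi, ai in zip(perp, ALPHA)]
                    lo, hi = (mid, hi) if g(x) > 0 else (lo, mid)
                total += phi_cdf(-hi)        # conditional 1-D failure probability
            return total / n_lines

        print("LS estimate:", line_sampling(), "exact:", phi_cdf(-4 / math.sqrt(2)))

    For this linear g every line returns the same conditional probability, so the estimator has zero variance, an idealized illustration of why LS needs far fewer samples than standard MC when the important direction is well chosen.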

  15. Cadmium and lead determination by ICPMS: Method optimization and application in carabao milk samples

    Directory of Open Access Journals (Sweden)

    Riza A. Magbitang

    2012-06-01

    A method utilizing inductively coupled plasma mass spectrometry (ICPMS) as the element-selective detector, with microwave-assisted nitric acid digestion as the sample pre-treatment technique, was developed for the simultaneous determination of cadmium (Cd) and lead (Pb) in milk samples. The estimated detection limits were 0.09 μg kg-1 and 0.33 μg kg-1 for Cd and Pb, respectively. The method was linear in the concentration range 0.01 to 500 μg kg-1 with correlation coefficients of 0.999 for both analytes. The method was validated using certified reference material BCR 150, and the determined values for Cd and Pb were 18.24 ± 0.18 μg kg-1 and 807.57 ± 7.07 μg kg-1, respectively. Further validation using another certified reference material, NIST 1643e, resulted in determined concentrations of 6.48 ± 0.10 μg L-1 for Cd and 21.96 ± 0.87 μg L-1 for Pb. These determined values agree well with the certified values of the reference materials. The method was applied to processed and raw carabao milk samples collected in Nueva Ecija, Philippines. The Cd levels determined in the samples were in the range 0.11 ± 0.07 to 5.17 ± 0.13 μg kg-1 for the processed milk samples, and 0.11 ± 0.07 to 0.45 ± 0.09 μg kg-1 for the raw milk samples. The concentrations of Pb were in the range 0.49 ± 0.21 to 5.82 ± 0.17 μg kg-1 for the processed milk samples, and 0.72 ± 0.18 to 6.79 ± 0.20 μg kg-1 for the raw milk samples.

  16. Method optimization for non-equilibrium solid phase microextraction sampling of HAPs for GC/MS analysis

    Science.gov (United States)

    Zawadowicz, M. A.; Del Negro, L. A.

    2010-12-01

    Hazardous air pollutants (HAPs) are usually present in the atmosphere at pptv levels, requiring measurements with high sensitivity and minimal contamination. Commonly used evacuated canister methods require overheads of space, money and time that are often prohibitive for primarily-undergraduate institutions. This study optimized an analytical method based on solid-phase microextraction (SPME) of the ambient gaseous matrix, a cost-effective technique for selective VOC extraction that is accessible to an unskilled undergraduate. Several approaches to SPME extraction and sample analysis were characterized and several extraction parameters optimized. Extraction time, temperature, and laminar air flow velocity around the fiber were optimized to give the highest signal and efficiency. Direct, dynamic extraction of benzene from a moving air stream produced better precision (±10%) than sampling of stagnant air collected in a polymeric bag (±24%). Using a low-polarity chromatographic column in place of a standard (5%-Phenyl)-methylpolysiloxane phase decreased the benzene detection limit from 2 ppbv to 100 pptv. The developed method is simple and fast, requiring 15-20 minutes per extraction and analysis. It will be field-validated and used as a field laboratory component of various undergraduate Chemistry and Environmental Studies courses.

  17. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

    Highlights: •We optimized sample preparation for MALDI-TOF analysis of poly(styrene-co-pentafluorostyrene) copolymers. •The influence of matrix choice was investigated. •The influence of the matrix/analyte ratio was examined. •The influence of the analyte/salt ratio (for the Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions of the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  18. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

    Highlights: •We optimized sample preparation for MALDI-TOF analysis of poly(styrene-co-pentafluorostyrene) copolymers. •The influence of matrix choice was investigated. •The influence of the matrix/analyte ratio was examined. •The influence of the analyte/salt ratio (for the Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions of the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  19. Geochemical sampling scheme optimization on mine wastes based on hyperspectral data

    CSIR Research Space (South Africa)

    Zhao, T

    2008-07-01

    ... decontamination, for example, acid-generating minerals. Acid rock drainage can adversely impact the quality of drinking water and the health of riparian ecosystems. To assess or monitor the environmental impact of mining, sampling of mine waste is required ...

  20. Robust, Sensitive, and Automated Phosphopeptide Enrichment Optimized for Low Sample Amounts Applied to Primary Hippocampal Neurons

    NARCIS (Netherlands)

    Post, Harm; Penning, Renske; Fitzpatrick, Martin; Garrigues, L.B.; Wu, W.; Mac Gillavry, H.D.; Hoogenraad, C.C.; Heck, A.J.R.; Altelaar, A.F.M.

    2017-01-01

    Because of the low stoichiometry of protein phosphorylation, targeted enrichment prior to LC–MS/MS analysis is still essential. The trend in phosphoproteome analysis is shifting toward an increasing number of biological replicates per experiment, ideally starting from very low sample amounts,

  1. Optimal sampling strategies to assess inulin clearance in children by the inulin single-injection method

    NARCIS (Netherlands)

    van Rossum, Lyonne K.; Mathot, Ron A. A.; Cransberg, Karlien; Vulto, Arnold G.

    2003-01-01

    Glomerular filtration rate in patients can be determined by estimating the plasma clearance of inulin with the single-injection method. In this method, a single bolus injection of inulin is administered and several blood samples are collected. For practical and convenient application of this method

  2. Optimization of deconvolution software used in the study of spectra of soil samples from Madagascar

    International Nuclear Information System (INIS)

    ANDRIAMADY NARIMANANA, S.F.

    2005-01-01

    The aim of this work is to perform the deconvolution of gamma spectra using the peak deconvolution program. Synthetic spectra, reference materials, and ten soil samples with various U-238 activities from three regions of Madagascar were used. This work concerns: soil sample spectra with low activities of about (47 ± 2) Bq kg-1 from Ankatso, soil sample spectra with average activities of about (125 ± 2) Bq kg-1 from Antsirabe, and soil sample spectra with high activities of about (21,100 ± 120) Bq kg-1 from Vinaninkarena. Singlet and multiplet peaks with various intensities were found in each soil spectrum. The Interactive Peak Fit (IPF) program in Genie-PC from Canberra Industries allows the deconvolution of many multiplet regions: a quartet within 235-242 keV; Pb-214 and Pb-212 within 294-301 keV; Th-232 daughters within 582-584 keV; Ac-228 within 904-911 keV and within 964-970 keV; and Bi-214 within 1401-1408 keV. Those peaks were used to quantify the radionuclides considered. However, IPF cannot resolve the Ra-226 peak at 186.1 keV.

  3. A systematic random sampling scheme optimized to detect the proportion of rare synapses in the neuropil.

    Science.gov (United States)

    da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C

    2009-05-30

    Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
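
    A quick binomial calculation shows why on the order of a thousand sampling sites are needed for pathways this rare. The sketch below (standard binomial standard-error algebra, not the authors' disector protocol) computes how many synapses must be examined to reach a target relative standard error:

        import math

        def synapses_needed(p, rel_se):
            # SE of a binomial proportion is sqrt(p(1-p)/n); setting SE = rel_se * p
            # and solving for n gives the required sample size.
            return math.ceil((1 - p) / (p * rel_se ** 2))

        for p in (0.002, 0.01):        # e.g. 0.2% labeled synapses, as in the paper
            for rel in (0.5, 0.25):
                print(f"p = {p:.3f}, relative SE {rel:.0%}: examine n = {synapses_needed(p, rel)} synapses")

    For p = 0.2% even a 50% relative standard error already demands roughly two thousand synapses, which is why the authors borrow rare-event counting strategies from particle physics.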

  4. Optimization of fecal cytology in the dog: comparison of three sampling methods.

    Science.gov (United States)

    Frezoulis, Petros S; Angelidou, Elisavet; Diakou, Anastasia; Rallis, Timoleon S; Mylonakis, Mathios E

    2017-09-01

    Dry-mount fecal cytology (FC) is a component of the diagnostic evaluation of gastrointestinal diseases. There is limited information on the possible effect of the sampling method on the cytologic findings of healthy dogs or dogs admitted with diarrhea. We aimed to: (1) establish sampling method-specific expected values of selected cytologic parameters (isolated or clustered epithelial cells, neutrophils, lymphocytes, macrophages, spore-forming rods) in clinically healthy dogs; (2) investigate if the detection of cytologic abnormalities differs among methods in dogs admitted with diarrhea; and (3) investigate if there is any association between FC abnormalities and the anatomic origin (small- or large-bowel diarrhea) or the chronicity of diarrhea. Sampling with digital examination (DE), rectal scraping (RS), and rectal lavage (RL) was prospectively assessed in 37 healthy and 34 diarrheic dogs. The median numbers of isolated (p = 0.000) or clustered (p = 0.002) epithelial cells, and of lymphocytes (p = 0.000), differed among the 3 methods in healthy dogs. In the diarrheic dogs, the RL method was the least sensitive in detecting neutrophils and isolated or clustered epithelial cells. Cytologic abnormalities were not associated with the origin or the chronicity of diarrhea. Sampling methods differed in their sensitivity to detect abnormalities in FC; DE or RS may be of higher sensitivity compared to RL. Anatomic origin or chronicity of diarrhea do not seem to affect the detection of cytologic abnormalities.

  5. An analysis of the feasibility of carbon management policies as a mechanism to influence water conservation using optimization methods.

    Science.gov (United States)

    Wright, Andrew; Hudson, Darren

    2014-10-01

    Studies of how carbon reduction policies would affect agricultural production have found that there is a connection between carbon emissions and irrigation. Using county-level data, we develop an optimization model that accounts for the gross carbon emitted during the production process to evaluate how carbon-reducing policies applied to agriculture would affect producers' choices of what to plant and how much to irrigate on the Texas High Plains. Carbon emissions were calculated using carbon equivalent (CE) calculations developed by researchers at the University of Arkansas. Carbon reduction was achieved in the model through a constraint, a tax, or a subsidy. Reducing carbon emissions by 15% resulted in a significant reduction in the amount of water applied to a crop; however, planted acreage changed very little due to a lack of feasible alternative crops. The results show that applying carbon restrictions to agriculture may have important implications for production choices in areas that depend on groundwater resources for agricultural production. Copyright © 2014 Elsevier Ltd. All rights reserved.
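
    A toy linear program conveys the structure of such a model: maximize net returns over acreage and irrigation choices subject to land, water, and an emissions cap. Every coefficient below is assumed for illustration; the paper's county-level model is far richer.

```python
# Hypothetical sketch of the modeling idea: choose irrigated vs dryland acres
# to maximize returns under land, water and a 15% carbon-emissions cap.
from scipy.optimize import linprog

# Decision variables: [irrigated_cotton, dryland_cotton, irrigated_sorghum] acres
profit = [320.0, 110.0, 180.0]     # $/acre net returns (assumed)
carbon = [0.45, 0.10, 0.30]        # t CE/acre; irrigation is carbon-intensive
water  = [14.0, 0.0, 9.0]          # acre-inches applied per acre (assumed)

land_total  = 1000.0
water_total = 9000.0
carbon_base = 325.0                # emissions of the unconstrained optimum
carbon_cap  = 0.85 * carbon_base   # 15% reduction policy

res = linprog(
    c=[-p for p in profit],        # linprog minimizes, so negate profit
    A_ub=[[1, 1, 1], water, carbon],
    b_ub=[land_total, water_total, carbon_cap],
    bounds=[(0, None)] * 3,
)
print("acres:", res.x, " water used:",
      sum(w * a for w, a in zip(water, res.x)))
```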

  6. Optimal energy efficiency policies and regulatory demand-side management tests: How well do they match?

    International Nuclear Information System (INIS)

    Brennan, Timothy J.

    2010-01-01

    Under conventional models, subsidizing energy efficiency requires electricity to be priced below marginal cost. Its benefits increase when electricity prices increase to finance the subsidy. With high prices, subsidies are counterproductive unless consumers fail to make efficiency investments when private benefits exceed costs. If the gain from adopting efficiency is only reduced electricity spending, capping revenues from energy sales may induce a utility to substitute efficiency for generation when the former is less costly. This goes beyond standard 'decoupling' of distribution revenues from sales, requiring complex energy price regulation. The models' results are used to evaluate tests in the 2002 California Standard Practice Manual for assessing demand-side management programs. Its 'Ratepayer Impact Measure' test best conforms to the condition that electricity price is too low. Its 'Total Resource Cost' and 'Societal Cost' tests resemble the condition for expanded decoupling. No test incorporates optimality conditions apart from consumer choice failure.

  7. Optimal replacement policy for safety-related multi-component multi-state systems

    International Nuclear Information System (INIS)

    Xu Ming; Chen Tao; Yang Xianhui

    2012-01-01

    This paper investigates replacement scheduling for non-repairable safety-related systems (SRS) with multiple components and states. The aim is to determine the cost-minimizing time for replacing SRS while meeting the required safety. Traditionally, such scheduling decisions are made without considering the interaction between the SRS and the production system under protection, the interaction being essential to formulate the expected cost to be minimized. In this paper, the SRS is represented by a non-homogeneous continuous time Markov model, and its state distribution is evaluated with the aid of the universal generating function. Moreover, a structure function of SRS with recursive property is developed to evaluate the state distribution efficiently. These methods form the basis to derive an explicit expression of the expected system cost per unit time, and to determine the optimal time to replace the SRS. The proposed methodology is demonstrated through an illustrative example.
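
    The paper's cost-per-unit-time objective rests on a multi-state Markov model, but the shape of the optimization can be seen in the classical single-unit age-replacement benchmark sketched below (a simplification for intuition, not the authors' method):

```python
# Classical age-replacement cost rate: replace preventively at age T (cost c_p)
# or on failure (cost c_f); minimize expected cost per unit time.
import numpy as np

def cost_rate(T, c_p=1.0, c_f=8.0, shape=2.5, scale=10.0, n=4000):
    """Expected cost per unit time for a Weibull(shape, scale) lifetime."""
    t = np.linspace(0.0, T, n)
    R = np.exp(-((t / scale) ** shape))                      # survival R(t)
    mean_cycle = np.sum((R[:-1] + R[1:]) / 2 * np.diff(t))   # E[cycle length]
    F_T = 1.0 - R[-1]                                        # P(fail before T)
    return (c_p * (1.0 - F_T) + c_f * F_T) / mean_cycle

Ts = np.linspace(1.0, 30.0, 200)
best = min(Ts, key=cost_rate)
print(f"optimal replacement age ~ {best:.1f}, cost rate {cost_rate(best):.4f}")
```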

  8. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optical absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength errors and the photometric error are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable.

  9. Centrifugation protocols: tests to determine optimal lithium heparin and citrate plasma sample quality.

    Science.gov (United States)

    Dimeski, Goce; Solano, Connie; Petroff, Mark K; Hynd, Matthew

    2011-05-01

    Currently, no clear guidelines exist for the most appropriate tests to determine sample quality from centrifugation protocols for plasma sample types with both lithium heparin in gel barrier tubes for biochemistry testing and citrate tubes for coagulation testing. Blood was collected from 14 participants in four lithium heparin and one serum tube with gel barrier. The plasma tubes were centrifuged at four different centrifuge settings and analysed for potassium (K+), lactate dehydrogenase (LD), glucose and phosphorus (Pi) at zero time, poststorage at six hours at 21 °C and six days at 2-8 °C. At the same time, three citrate tubes were collected and centrifuged at three different centrifuge settings and analysed immediately for prothrombin time/international normalized ratio, activated partial thromboplastin time, derived fibrinogen and surface-activated clotting time (SACT). The biochemistry analytes indicate plasma is less stable than serum. Plasma sample quality is higher with longer centrifugation time, and much higher g force. Blood cells present in the plasma lyse with time or are damaged when transferred in the reaction vessels, causing an increase in the K+, LD and Pi above outlined limits. The cells remain active and consume glucose even in cold storage. The SACT is the only coagulation parameter that was affected by platelets >10 × 10^9/L in the citrate plasma. In addition to the platelet count, a limited but sensitive number of assays (K+, LD, glucose and Pi for biochemistry, and SACT for coagulation) can be used to determine appropriate centrifuge settings to consistently obtain the highest quality lithium heparin and citrate plasma samples. The findings will aid laboratories to balance the need to provide the most accurate results in the best turnaround time.

  10. Secondary water treatment optimization in French PWRs: Recent ways of investigation and policy

    Energy Technology Data Exchange (ETDEWEB)

    Millet, L.; Serres, F. [Electricite de France, Group des Laboratoires (France); Vermeeren, D. [Electricite de France, Groupe Ingenierie Process (France); Moreaux, D. [Electricite de France, Groupe Environnement (France)

    2002-07-01

    In French nuclear power plants, the secondary water conditioning is essentially based on the use of a volatile amine and a reducing reagent. The additional use of a corrosion inhibitor is limited to units with secondary side corrosion of Alloy 600 MA SG tubes. The main aim of secondary water treatment optimisation is to achieve the best compromise as follows: to minimize the different types of corrosion of the different PWRs materials (copper corrosion, flow assisted corrosion, SG fouling and secondary side corrosion), to reduce operation and maintenance costs (short term and long term), to minimise the impacts on the environment, and to protect workers' health. In a first part, this paper describes the studies recently carried out to try to optimise the secondary water treatment in French PWRs. They concern the possibility to use ethanolamine (ETA) in replacement of morpholine and ammonia and the possibility to use carbohydrazide (CBH) in replacement of hydrazine. In a second part, this paper presents the French secondary water treatment policy established in 2000, which depends on the presence or absence of copper alloys. (authors)

  11. Optimal Mission Abort Policy for Systems Operating in a Random Environment.

    Science.gov (United States)

    Levitin, Gregory; Finkelstein, Maxim

    2018-04-01

    Many real-world critical systems, e.g., aircraft, manned space flight systems, and submarines, utilize mission aborts to enhance their survivability. Specifically, a mission can be aborted when a certain malfunction condition is met and a rescue or recovery procedure is then initiated. For systems exposed to external impacts, the malfunctions are often caused by the consequences of these impacts. Traditional system reliability models typically cannot address the possibility of mission aborts. Therefore, in this article, we first develop the corresponding methodology for modeling and evaluation of the mission success probability and survivability of systems experiencing both internal failures and external shocks. We consider a policy when a mission is aborted and a rescue procedure is activated upon occurrence of the mth shock. We demonstrate the tradeoff between the system survivability and the mission success probability that should be balanced by the proper choice of the decision variable m. A detailed illustrative example of a mission performed by an unmanned aerial vehicle is presented. © 2017 Society for Risk Analysis.
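
    A toy Monte Carlo version of this tradeoff (my own simplification with invented parameters, not the authors' analytical model): shocks arrive as a Poisson process, each destroys the system with some probability, and aborting on the m-th shock trades mission completion against exposure during the rescue leg.

```python
# Simplified simulation of the m-th-shock abort policy.
import random

def simulate(m, lam=0.5, q=0.08, T=10.0, tau=3.0, runs=100_000):
    """Return (mission success prob., survivability) for abort threshold m."""
    success = survive = 0
    for _ in range(runs):
        t, shocks, alive, aborted = 0.0, 0, True, False
        horizon = T                                 # mission end (or rescue end)
        while alive:
            t += random.expovariate(lam)            # time of next shock
            if t > horizon:
                break                               # survived mission/rescue leg
            shocks += 1
            if random.random() < q:
                alive = False                       # shock destroyed the system
            elif not aborted and shocks >= m:
                aborted, horizon = True, t + tau    # abort: survive rescue of tau
        if alive:
            survive += 1
            if not aborted:
                success += 1                        # completed without aborting
    return success / runs, survive / runs

for m in (1, 2, 3, 5):
    s, v = simulate(m)
    print(f"m={m}: mission success ~{s:.3f}, survivability ~{v:.3f}")
```

    Small m aborts early (high survivability, low mission success); large m does the opposite, which is the balance the decision variable m controls.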

  12. Secondary water treatment optimization in French PWRs: Recent ways of investigation and policy

    International Nuclear Information System (INIS)

    Millet, L.; Serres, F.; Vermeeren, D.; Moreaux, D.

    2002-01-01

    In French nuclear power plants, the secondary water conditioning is essentially based on the use of a volatile amine and a reducing reagent. The additional use of a corrosion inhibitor is limited to units with secondary side corrosion of Alloy 600 MA SG tubes. The main aim of secondary water treatment optimisation is to achieve the best compromise as follows: to minimize the different types of corrosion of the different PWRs materials (copper corrosion, flow assisted corrosion, SG fouling and secondary side corrosion), to reduce operation and maintenance costs (short term and long term), to minimise the impacts on the environment, and to protect workers' health. In a first part, this paper describes the studies recently carried out to try to optimise the secondary water treatment in French PWRs. They concern the possibility to use ethanolamine (ETA) in replacement of morpholine and ammonia and the possibility to use carbohydrazide (CBH) in replacement of hydrazine. In a second part, this paper presents the French secondary water treatment policy established in 2000, which depends on the presence or absence of copper alloys. (authors)

  13. [Optimization of solid-phase extraction for enrichment of toxic organic compounds in water samples].

    Science.gov (United States)

    Zhang, Ming-quan; Li, Feng-min; Wu, Qian-yuan; Hu, Hong-ying

    2013-05-01

    A concentration method for enrichment of toxic organic compounds in water samples has been developed based on combined solid-phase extraction (SPE) to reduce impurities and improve recoveries of target compounds. This SPE method was evaluated at every stage to identify the source of impurities. Blank analysis of Waters Oasis HLB (without water samples) showed that elution of the SPE sorbent with dichloromethane and acetone contributed 85% of the impurities in the SPE process. In order to reduce the impurities from the SPE sorbent, Soxhlet extraction with dichloromethane, followed by acetone and lastly methanol, was applied to the sorbents for 24 hours, and the results proved that impurities were reduced significantly. In addition to Soxhlet extraction, six types of prevalent SPE sorbents were used to adsorb 40 target compounds, whose lg Kow values were within the range of 1.46 to 8.1, and recovery rates were compared. Waters Oasis HLB showed the best recovery results for most of the common test compounds among the three styrene-divinylbenzene (SDB) polymer sorbents, with an average of 77%. Furthermore, Waters Sep-Pak AC-2 provided good recovery results for pesticides among the three types of activated carbon sorbents, with average recovery rates reaching 74%. Therefore, Waters Oasis HLB and Waters Sep-Pak AC-2 were combined to obtain better recovery, and the average recovery rate for the 40 tested compounds with this new SPE method was 87%.

  14. Optimal replenishment and credit policy in supply chain inventory model under two levels of trade credit with time- and credit-sensitive demand involving default risk

    Science.gov (United States)

    Mahata, Puspita; Mahata, Gour Chandra; Kumar De, Sujit

    2018-03-01

    Traditional supply chain inventory models with trade credit usually assumed only that the up-stream suppliers offered the down-stream retailers a fixed credit period. However, in practice, retailers also provide a credit period to customers to promote market competition. In this paper, we formulate an optimal supply chain inventory model under a two-level trade credit policy with default risk consideration. Here, the demand is assumed to be credit-sensitive and an increasing function of time. The major objective is to determine the retailer's optimal credit period and cycle time such that the total profit per unit time is maximized. The existence and uniqueness of the optimal solution to the presented model are examined, and an easy method is shown to find the optimal inventory policies of the considered problem. Finally, numerical examples and a sensitivity analysis are presented to illustrate the developed model and to provide some managerial insights.

  15. Optimizing the data acquisition rate for a remotely controllable structural monitoring system with parallel operation and self-adaptive sampling

    International Nuclear Information System (INIS)

    Sheng, Wenjuan; Guo, Aihuang; Liu, Yang; Azmi, Asrul Izam; Peng, Gang-Ding

    2011-01-01

    We present a novel technique that optimizes the real-time remote monitoring and control of dispersed civil infrastructures. The monitoring system is based on fiber Bragg grating (FBG) sensors, and transfers data via Ethernet. This technique combines parallel operation and self-adaptive sampling to increase the data acquisition rate in remotely controllable structural monitoring systems. The compact parallel operation mode is highly efficient at achieving the highest possible data acquisition rate for the FBG sensor based local data acquisition system. Self-adaptive sampling is introduced to continuously coordinate local acquisition and remote control for data acquisition rate optimization. Key issues which impact the operation of the whole system, such as the real-time data acquisition rate, data processing capability, and buffer usage, are investigated. The results show that, by introducing parallel operation and self-adaptive sampling, the data acquisition rate can be increased by several times without affecting the system operating performance on both local data acquisition and remote process control.

  16. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Science.gov (United States)

    Adu-Brimpong, Joel; Coffey, Nathan; Ayers, Colby; Berrigan, David; Yingling, Leah R.; Thomas, Samantha; Mitchell, Valerie; Ahuja, Chaarushi; Rivers, Joshua; Hartz, Jacob; Powell-Wiley, Tiffany M.

    2017-01-01

    Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes. PMID:28282878

  17. Population Pharmacokinetics of Gemcitabine and dFdU in Pancreatic Cancer Patients Using an Optimal Design, Sparse Sampling Approach.

    Science.gov (United States)

    Serdjebi, Cindy; Gattacceca, Florence; Seitz, Jean-François; Fein, Francine; Gagnière, Johan; François, Eric; Abakar-Mahamat, Abakar; Deplanque, Gael; Rachid, Madani; Lacarelle, Bruno; Ciccolini, Joseph; Dahan, Laetitia

    2017-06-01

    Gemcitabine remains a pillar in pancreatic cancer treatment. However, toxicities are frequently observed. Dose adjustment based on therapeutic drug monitoring might help decrease the occurrence of toxicities. In this context, this work aims at describing the pharmacokinetics (PK) of gemcitabine and its metabolite dFdU in pancreatic cancer patients and at identifying the main sources of their PK variability using a population PK approach, despite a sparsely sampled population and heterogeneous administration and sampling protocols. Data from 38 patients were included in the analysis. The 3 optimal sampling times were determined using KineticPro and the population PK analysis was performed on Monolix. Available patient characteristics, including cytidine deaminase (CDA) status, were tested as covariates. Correlation between PK parameters and occurrence of severe hematological toxicities was also investigated. A two-compartment model best fitted the gemcitabine and dFdU PK data (volume of distribution and clearance for gemcitabine: V1 = 45 L and CL1 = 4.03 L/min; for dFdU: V2 = 36 L and CL2 = 0.226 L/min). Renal function was found to influence gemcitabine clearance, and body surface area to impact the volume of distribution of dFdU. However, neither CDA status nor the occurrence of toxicities was correlated to PK parameters. Despite sparse sampling and heterogeneous administration and sampling protocols, population and individual PK parameters of gemcitabine and dFdU were successfully estimated using Monolix population PK software. The estimated parameters were consistent with previously published results. Surprisingly, CDA activity did not influence gemcitabine PK, which was explained by the absence of CDA-deficient patients enrolled in the study. This work suggests that even sparse data are valuable to estimate population and individual PK parameters in patients, which will be usable to individualize the dose for an optimized benefit to risk ratio.
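
    For intuition, the sketch below integrates a minimal parent-metabolite system using the quoted population values (V1 = 45 L, CL1 = 4.03 L/min; V2 = 36 L, CL2 = 0.226 L/min). The compartment structure, the dose, and complete conversion of gemcitabine to dFdU are my simplifying assumptions for illustration, not the published model.

```python
# Exploratory sketch: gemcitabine infused over 30 min, first-order elimination,
# with eliminated drug assumed fully converted to dFdU (an assumption).
import numpy as np
from scipy.integrate import solve_ivp

V1, CL1 = 45.0, 4.03          # L, L/min (gemcitabine, values quoted above)
V2, CL2 = 36.0, 0.226         # L, L/min (dFdU, values quoted above)
dose, t_inf = 1800.0, 30.0    # mg, min (dose and infusion time assumed)

def rhs(t, a):
    a1, a2 = a                                   # amounts (mg) in each compartment
    rate_in = dose / t_inf if t <= t_inf else 0.0
    da1 = rate_in - (CL1 / V1) * a1
    da2 = (CL1 / V1) * a1 - (CL2 / V2) * a2      # complete conversion assumed
    return [da1, da2]

sol = solve_ivp(rhs, (0, 360), [0.0, 0.0], dense_output=True, max_step=1.0)
for ti in np.linspace(0, 360, 7):
    a1, a2 = sol.sol(ti)
    print(f"t={ti:5.0f} min  gemcitabine {a1 / V1:6.2f}  dFdU {a2 / V2:6.2f} mg/L")
```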

  18. Tracking a changing environment: optimal sampling, adaptive memory and overnight effects.

    Science.gov (United States)

    Dunlap, Aimee S; Stephens, David W

    2012-02-01

    Foraging in a variable environment presents a classic problem of decision making with incomplete information. Animals must track the changing environment, remember the best options and make choices accordingly. While several experimental studies have explored the idea that sampling behavior reflects the amount of environmental change, we take the next logical step in asking how change influences memory. We explore the hypothesis that memory length should be tied to the ecological relevance and the value of the information learned, and that environmental change is a key determinant of the value of memory. We use a dynamic programming model to confirm our predictions and then test memory length in a factorial experiment. In our experimental situation we manipulate rates of change in a simple foraging task for blue jays over a 36 h period. After jays experienced an experimentally determined change regime, we tested them at a range of retention intervals, from 1 to 72 h. Manipulated rates of change influenced learning and sampling rates: subjects sampled more and learned more quickly in the high change condition. Tests of retention revealed significant interactions between retention interval and the experienced rate of change. We observed a striking and surprising difference between the high and low change treatments at the 24 h retention interval. In agreement with earlier work we find that a circadian retention interval is special, and that the extent of this 'specialness' depends on the subject's prior experience of environmental change. Specifically, experienced rates of change seem to influence how subjects balance recent information against past experience in a way that interacts with the passage of time. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. The economics of medicines optimization: policy developments, remaining challenges and research priorities

    Science.gov (United States)

    Faria, Rita; Barbieri, Marco; Light, Kate; Elliott, Rachel A.; Sculpher, Mark

    2014-01-01

    Background: This review scopes the evidence on the effectiveness and cost-effectiveness of interventions to improve suboptimal use of medicines in order to determine the evidence gaps and help inform research priorities. Sources of data: Systematic searches of the National Health Service (NHS) Economic Evaluation Database, the Cochrane Database of Systematic Reviews and the Database of Abstracts of Reviews of Effects. Areas of agreement: The majority of the studies evaluated interventions to improve adherence, inappropriate prescribing and prescribing errors. Areas of controversy: Interventions tend to be specific to a particular stage of the pathway and/or to a particular disease and have mostly been evaluated for their effect on intermediate or process outcomes. Growing points: Medicines optimization offers an opportunity to improve health outcomes and efficiency of healthcare. Areas timely for developing research: The available evidence is insufficient to assess the effectiveness and cost-effectiveness of interventions to address suboptimal medicine use in the UK NHS. Decision modelling, evidence synthesis and elicitation have the potential to address the evidence gaps and help prioritize research. PMID:25190760

  20. A Robust Bayesian Approach to an Optimal Replacement Policy for Gas Pipelines

    Directory of Open Access Journals (Sweden)

    José Pablo Arias-Nicolás

    2015-06-01

    Full Text Available In the paper, we address Bayesian sensitivity issues when integrating experts’ judgments with available historical data in a case study about strategies for the preventive maintenance of low-pressure cast iron pipelines in an urban gas distribution network. We are interested in replacement priorities, as determined by the failure rates of pipelines deployed under different conditions. We relax the assumptions, made in previous papers, about the prior distributions on the failure rates and study changes in replacement priorities under different choices of generalized moment-constrained classes of priors. We focus on the set of non-dominated actions, and among them, we propose the least sensitive action as the optimal choice to rank different classes of pipelines, providing a sound approach to the sensitivity problem. Moreover, we are also interested in determining which classes have a failure rate exceeding a given acceptable value, considered as the threshold determining no need for replacement. Graphical tools are introduced to help decision-makers to determine if pipelines are to be replaced and the corresponding priorities.

  1. Optimal policies for activated sludge treatment systems with multi effluent stream generation

    Directory of Open Access Journals (Sweden)

    Gouveia R.

    2000-01-01

    Full Text Available Most industrial processes generate liquid waste, which requires treatment prior to disposal. These processes are divided into sectors that generate effluents with time-dependent characteristics. Each sector sends the effluent to wastewater treatment plants through pumping-stations. In general, activated sludge is the most suitable treatment and consists of equalization, aeration and settling tanks. During the treatment, there is an increase in the mass of microorganisms, which needs to be removed. Sludge removal represents the major operating cost for wastewater treatment plants. The objective of this work is to propose an optimization model to minimize sludge generation using a superstructure in which the streams from pumping-stations can be sent to the equalization tank. In addition, the aeration tank is divided into cells that can be fed in series and parallel. The model relies on mass balances and kinetic equations, and the resulting nonlinear programming problem generates the best operational strategy for the system feed streams with high substrate removal. Reductions of up to 30% can be achieved with the proposed strategy while maintaining BOD removal efficiency above 98%.

  2. Determination of Ergot Alkaloids: Purity and Stability Assessment of Standards and Optimization of Extraction Conditions for Cereal Samples

    DEFF Research Database (Denmark)

    Krska, R.; Berthiller, F.; Schuhmacher, R.

    2008-01-01

    as those that are the most common and physiologically active. The purity of the standards was investigated by means of liquid chromatography with diode array detection, electrospray ionization, and time-of-flight mass spectrometry (LC-DAD-ESI-TOF-MS). All of the standards assessed showed purity levels...... (PSA) before LC/MS/MS. Based on the results obtained from these optimization studies, a mixture of acetonitrile with ammonium carbonate buffer was used as extraction solvent, as recoveries for all analyzed ergot alkaloids were significantly higher than those with the other solvents. Different sample...

  3. Optimizing Scoring and Sampling Methods for Assessing Built Neighborhood Environment Quality in Residential Areas

    Directory of Open Access Journals (Sweden)

    Joel Adu-Brimpong

    2017-03-01

    Full Text Available Optimization of existing measurement tools is necessary to explore links between aspects of the neighborhood built environment and health behaviors or outcomes. We evaluate a scoring method for virtual neighborhood audits utilizing the Active Neighborhood Checklist (the Checklist), a neighborhood audit measure, and assess street segment representativeness in low-income neighborhoods. Eighty-two home neighborhoods of Washington, D.C. Cardiovascular Health/Needs Assessment (NCT01927783) participants were audited using Google Street View imagery and the Checklist (five sections with 89 total questions). Twelve street segments per home address were assessed for (1) Land-Use Type; (2) Public Transportation Availability; (3) Street Characteristics; (4) Environment Quality and (5) Sidewalks/Walking/Biking features. Checklist items were scored 0–2 points/question. A combinations algorithm was developed to assess street segments’ representativeness. Spearman correlations were calculated between built environment quality scores and Walk Score®, a validated neighborhood walkability measure. Street segment quality scores ranged 10–47 (Mean = 29.4 ± 6.9) and overall neighborhood quality scores, 172–475 (Mean = 352.3 ± 63.6). Walk scores® ranged 0–91 (Mean = 46.7 ± 26.3). Street segment combinations’ correlation coefficients ranged 0.75–1.0. Significant positive correlations were found between overall neighborhood quality scores, four of the five Checklist subsection scores, and Walk Scores® (r = 0.62, p < 0.001). This scoring method adequately captures neighborhood features in low-income, residential areas and may aid in delineating impact of specific built environment features on health behaviors and outcomes.

  4. Two Topics in Data Analysis: Sample-based Optimal Transport and Analysis of Turbulent Spectra from Ship Track Data

    Science.gov (United States)

    Kuang, Simeng Max

    This thesis contains two topics in data analysis. The first topic consists of the introduction of algorithms for sample-based optimal transport and barycenter problems. In chapter 1, a family of algorithms is introduced to solve both the L2 optimal transport problem and the Wasserstein barycenter problem. Starting from a theoretical perspective, the new algorithms are motivated from a key characterization of the barycenter measure, which suggests an update that reduces the total transportation cost and stops only when the barycenter is reached. A series of general theorems is given to prove the convergence of all the algorithms. We then extend the algorithms to solve sample-based optimal transport and barycenter problems, in which only finite sample sets are available instead of underlying probability distributions. A unique feature of the new approach is that it compares sample sets in terms of the expected values of a set of feature functions, which at the same time induce the function space of optimal maps and can be chosen by users to incorporate their prior knowledge of the data. All the algorithms are implemented and applied to various synthetic examples and practical applications. On synthetic examples it is found that both the SOT algorithm and the SCB algorithm are able to find the true solution and often converge in a handful of iterations. On more challenging applications including Gaussian mixture models, color transfer and shape transform problems, the algorithms give very good results throughout despite the very different nature of the corresponding datasets. In chapter 2, a preconditioning procedure is developed for the L2 and more general optimal transport problems. The procedure is based on a family of affine map pairs, which transforms the original measures into two new measures that are closer to each other, while preserving the optimality of solutions. It is proved that the preconditioning procedure minimizes the remaining transportation cost
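
    For contrast with the feature-function approach described in the thesis, here is the generic sample-based L2 optimal transport problem posed directly as a linear program over the discrete coupling (a standard textbook formulation, practical only for small sample sets; this is not the thesis' algorithm):

```python
# Discrete L2 optimal transport between two small sample sets via linprog.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(5, 2))     # source sample set
y = rng.normal(2.0, 1.0, size=(5, 2))     # target sample set
n, m = len(x), len(y)
C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1).ravel()  # squared distances

# Marginal constraints: each row of the coupling sums to 1/n, each column to 1/m.
A_eq = np.zeros((n + m, n * m))
for i in range(n):
    A_eq[i, i * m:(i + 1) * m] = 1.0      # row-sum constraint for source point i
for j in range(m):
    A_eq[n + j, j::m] = 1.0               # column-sum constraint for target j
b_eq = np.concatenate([np.full(n, 1.0 / n), np.full(m, 1.0 / m)])

res = linprog(C, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("transport cost:", res.fun)
print("coupling:\n", res.x.reshape(n, m).round(3))
```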

  5. Optimized cryo-focused ion beam sample preparation aimed at in situ structural studies of membrane proteins.

    Science.gov (United States)

    Schaffer, Miroslava; Mahamid, Julia; Engel, Benjamin D; Laugks, Tim; Baumeister, Wolfgang; Plitzko, Jürgen M

    2017-02-01

    While cryo-electron tomography (cryo-ET) can reveal biological structures in their native state within the cellular environment, it requires the production of high-quality frozen-hydrated sections that are thinner than 300 nm. Sample requirements are even more stringent for the visualization of membrane-bound protein complexes within dense cellular regions. Focused ion beam (FIB) sample preparation for transmission electron microscopy (TEM) is a well-established technique in material science, but there are only a few examples of biological samples exhibiting sufficient quality for high-resolution in situ investigation by cryo-ET. In this work, we present a comprehensive description of a cryo-sample preparation workflow incorporating additional conductive-coating procedures. These coating steps eliminate the adverse effects of sample charging on imaging with the Volta phase plate, allowing data acquisition with improved contrast. We discuss optimized FIB milling strategies adapted from material science and each critical step required to produce homogeneously thin, non-charging FIB lamellas that make large areas of unperturbed HeLa and Chlamydomonas cells accessible for cryo-ET at molecular resolution. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Optimization of the solvent-based dissolution method to sample volatile organic compound vapors for compound-specific isotope analysis.

    Science.gov (United States)

    Bouchard, Daniel; Wanner, Philipp; Luo, Hong; McLoughlin, Patrick W; Henderson, James K; Pirkle, Robert J; Hunkeler, Daniel

    2017-10-20

    The methodology of the solvent-based dissolution method used to sample gas phase volatile organic compounds (VOC) for compound-specific isotope analysis (CSIA) was optimized to lower the method detection limits for TCE and benzene. The sampling methodology previously evaluated by [1] consists in pulling the air through a solvent to dissolve and accumulate the gaseous VOC. After the sampling process, the solvent can then be treated similarly as groundwater samples to perform routine CSIA by diluting an aliquot of the solvent into water to reach the required concentration of the targeted contaminant. Among solvents tested, tetraethylene glycol dimethyl ether (TGDE) showed the best aptitude for the method. TGDE has a great affinity with TCE and benzene, hence efficiently dissolving the compounds during their transition through the solvent. The method detection limit for TCE (5 ± 1 μg/m³) and benzene (1.7 ± 0.5 μg/m³) is lower when using TGDE compared to methanol, which was previously used (385 μg/m³ for TCE and 130 μg/m³ for benzene) [2]. The method detection limit refers to the minimal gas phase concentration in ambient air required to load sufficient VOC mass into TGDE to perform δ13C analysis. Due to a different analytical procedure, the method detection limit associated with δ37Cl analysis was found to be 156 ± 6 μg/m³ for TCE. Furthermore, the experimental results validated the relationship between the gas phase TCE and the progressive accumulation of dissolved TCE in the solvent during the sampling process. Accordingly, based on the air-solvent partitioning coefficient, the sampling methodology (e.g. sampling rate, sampling duration, amount of solvent) and the final TCE concentration in the solvent, the concentration of TCE in the gas phase prevailing during the sampling event can be determined. Moreover, the possibility to analyse for TCE concentration in the solvent after sampling (or other targeted VOCs) allows the field deployment of the sampling
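
    The mass-balance relationship described above can be made concrete in a few lines. Assuming near-complete trapping of the VOC in the solvent (which the air-solvent partitioning discussion supports), the average gas-phase concentration during sampling follows from the solvent concentration, solvent volume, flow rate and duration; the function name and all example numbers below are hypothetical:

```python
# Mass-balance arithmetic implied by the abstract (complete trapping assumed).
def gas_phase_concentration(c_solvent_ug_per_L, v_solvent_L,
                            flow_L_per_min, duration_min):
    """Average gas-phase concentration (ug/m3) during the sampling event."""
    mass_ug = c_solvent_ug_per_L * v_solvent_L        # VOC mass trapped in TGDE
    air_m3 = flow_L_per_min * duration_min / 1000.0   # air volume pulled through
    return mass_ug / air_m3

# e.g. 25 mL of TGDE at 40 ug/L after pulling 0.2 L/min for 8 h (hypothetical):
print(gas_phase_concentration(40.0, 0.025, 0.2, 480), "ug/m3")   # ~10.4 ug/m3
```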

  7. Optimal design of sampling and mapping schemes in the radiometric exploration of Chipilapa, El Salvador (Geo-statistics)

    International Nuclear Information System (INIS)

    Balcazar G, M.; Flores R, J.H.

    1992-01-01

    As part of the radiometric surface exploration carried out in the geothermal field of Chipilapa, El Salvador, geo-statistical parameters were derived from the variogram calculated from the field data. The maximum correlation distance of the 'radon' samples in the different observation directions (N-S, E-W, NW-SE, NE-SW) was 121 m, which sets the monitoring grid for future prospecting in the same area. From this, an optimization (minimum cost) of the spacing of the field samples was derived by means of geo-statistical techniques, without losing detection of the anomaly. (Author)
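
    The step from variogram to sampling design can be sketched as follows: estimate an empirical semivariogram, read off the range (the analogue of the 121 m correlation distance above), and take it as the widest admissible grid spacing. All data below are synthetic stand-ins:

```python
# Empirical semivariogram on a synthetic 'radon' field, then read off the range.
import numpy as np

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1000, size=(200, 2))        # station coordinates (m)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Synthetic field with an exponential covariance (length scale 50 m, assumed).
cov = np.exp(-d / 50.0) + 1e-8 * np.eye(len(pts))
vals = np.linalg.cholesky(cov) @ rng.normal(size=len(pts))

g = 0.5 * (vals[:, None] - vals[None, :]) ** 2   # pairwise semivariances
bins = np.arange(0, 500, 25)
gamma = np.array([g[(d >= lo) & (d < lo + 25)].mean() for lo in bins])

sill = gamma[-5:].mean()                         # plateau of the variogram
idx = np.argmax(gamma >= 0.95 * sill)            # first bin reaching the sill
print(f"approx. correlation range: {bins[idx]} m -> widest useful grid spacing")
```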

  8. Optimal sample size for predicting viability of cabbage and radish seeds based on near infrared spectra of single seeds

    DEFF Research Database (Denmark)

    Shetty, Nisha; Min, Tai-Gi; Gislum, René

    2011-01-01

    The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub......-sets of different sizes were chosen randomly with several iterations and using the spectral-based sample selection algorithms DUPLEX and CADEX. An independent test set was used to validate the developed classification models. The results showed that 200 seeds were optimal in a calibration set for both cabbage...... using all 600 seeds in the calibration set. Thus, the number of seeds in the calibration set can be reduced by up to 67% without significant loss of classification accuracy, which will effectively enhance the cost-effectiveness of NIR spectral analysis. Wavelength regions important...
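
    The CADEX (Kennard-Stone) selection mentioned above is easy to state: seed with the two most mutually distant samples, then repeatedly add the sample whose nearest already-selected neighbour is farthest away. A sketch on synthetic 'spectra' (the study's data are single-seed NIR spectra; the feature matrix here is invented):

```python
# Kennard-Stone (CADEX) calibration-set selection on synthetic spectra.
import numpy as np

def kennard_stone(X, k):
    """Return indices of k samples chosen to be maximally spread in X-space."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    picked = list(np.unravel_index(np.argmax(d), d.shape))  # two farthest points
    while len(picked) < k:
        remaining = [i for i in range(len(X)) if i not in picked]
        # add the point whose nearest picked neighbour is farthest away
        nxt = max(remaining, key=lambda i: d[i, picked].min())
        picked.append(nxt)
    return picked

X = np.random.default_rng(5).normal(size=(600, 20))  # 600 seeds x 20 'bands'
subset = kennard_stone(X, 200)                        # 200-seed calibration set
print(subset[:10])
```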

  9. Immunosuppressant therapeutic drug monitoring by LC-MS/MS: workflow optimization through automated processing of whole blood samples.

    Science.gov (United States)

    Marinova, Mariela; Artusi, Carlo; Brugnolo, Laura; Antonelli, Giorgia; Zaninotto, Martina; Plebani, Mario

    2013-11-01

    Although, due to its high specificity and sensitivity, LC-MS/MS is an efficient technique for the routine determination of immunosuppressants in whole blood, it involves time-consuming manual sample preparation. The aim of the present study was therefore to develop an automated sample-preparation protocol for the quantification of sirolimus, everolimus and tacrolimus by LC-MS/MS using a liquid handling platform. Six-level commercially available blood calibrators were used for assay development, while four quality control materials and three blood samples from patients under immunosuppressant treatment were employed for the evaluation of imprecision. Barcode reading, sample re-suspension, transfer of whole blood samples into 96-well plates, addition of internal standard solution, mixing, and protein precipitation were performed with a liquid handling platform. After plate filtration, the deproteinised supernatants were submitted for SPE on-line. The only manual steps in the entire process were de-capping of the tubes, and transfer of the well plates to the HPLC autosampler. Calibration curves were linear throughout the selected ranges. The imprecision and accuracy data for all analytes were highly satisfactory. The agreement between the results obtained with manual and those obtained with automated sample preparation was optimal (n=390, r=0.96). In daily routine (100 patient samples) the typical overall total turnaround time was less than 6 h. Our findings indicate that the proposed analytical system is suitable for routine analysis, since it is straightforward and precise. Furthermore, it incurs less manual workload and less risk of error in the quantification of whole blood immunosuppressant concentrations than conventional methods. © 2013.

  10. Optimized Field Sampling and Monitoring of Airborne Hazardous Transport Plumes; A Geostatistical Simulation Approach

    International Nuclear Information System (INIS)

    Chen, DI-WEN

    2001-01-01

    Airborne hazardous plumes inadvertently released during nuclear/chemical/biological incidents are mostly of unknown composition and concentration until measurements are taken of post-accident ground concentrations from plume-ground deposition of constituents. Unfortunately, measurements often are days post-incident and rely on hazardous manned air-vehicle measurements. Before this happens, computational plume migration models are the only source of information on the plume characteristics, constituents, concentrations, directions of travel, ground deposition, etc. A mobile "lighter than air" (LTA) system is being developed at Oak Ridge National Laboratory that will be part of the first response in emergency conditions. These interactive and remote unmanned air vehicles will carry light-weight detectors and weather instrumentation to measure the conditions during and after plume release. This requires a cooperative computationally organized, GPS-controlled set of LTAs that self-coordinate around the objectives in an emergency situation in restricted time frames. A critical step before an optimum and cost-effective field sampling and monitoring program proceeds is the collection of data that provides statistically significant information, collected in a reliable and expeditious manner. Efficient aerial arrangements of the detectors taking the data (for active airborne release conditions) are necessary for plume identification, computational 3-dimensional reconstruction, and source distribution functions. This report describes the application of stochastic or geostatistical simulations to delineate the plume for guiding subsequent sampling and monitoring designs. A case study is presented of building digital plume images, based on existing "hard" experimental data and "soft" preliminary transport modeling results of Prairie Grass Trials Site. Markov Bayes Simulation, a coupled Bayesian/geostatistical methodology, quantitatively combines soft information

  11. Optimization of Region-of-Interest Sampling Strategies for Hepatic MRI Proton Density Fat Fraction Quantification

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z.; Schlein, Alexandra N.; Hooker, Jonathan C.; Dehkordy, Soudabeh Fazeli; Hamilton, Gavin; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.

    2017-01-01

    BACKGROUND Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. PURPOSE To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. STUDY TYPE Retrospective secondary analysis of prospectively acquired clinical research data. POPULATION A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. FIELD STRENGTH/SEQUENCE Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. ASSESSMENT An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. STATISTICAL TESTING Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland–Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland–Altman analyses. RESULTS The study population’s mean whole-liver PDFF was 10.1±8.9% (range: 1.1–44.1%). Although there was no significant difference in average segmental (P=0.452) or lobar (P=0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995 and, in 115 of 126 four-ROI strategies (91%), limits of agreement (LOA) <1.5%; only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. DATA CONCLUSION Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. Level of

  12. Optimization of region-of-interest sampling strategies for hepatic MRI proton density fat fraction quantification.

    Science.gov (United States)

    Hong, Cheng William; Wolfson, Tanya; Sy, Ethan Z; Schlein, Alexandra N; Hooker, Jonathan C; Fazeli Dehkordy, Soudabeh; Hamilton, Gavin; Reeder, Scott B; Loomba, Rohit; Sirlin, Claude B

    2018-04-01

    Clinical trials utilizing proton density fat fraction (PDFF) as an imaging biomarker for hepatic steatosis have used a laborious region-of-interest (ROI) sampling strategy of placing an ROI in each hepatic segment. To identify a strategy with the fewest ROIs that consistently achieves close agreement with the nine-ROI strategy. Retrospective secondary analysis of prospectively acquired clinical research data. A total of 391 adults (173 men, 218 women) with known or suspected NAFLD. Confounder-corrected chemical-shift-encoded 3T MRI using a 2D multiecho gradient-recalled echo technique. An ROI was placed in each hepatic segment. Mean nine-ROI PDFF and segmental PDFF standard deviation were computed. Segmental and lobar PDFF were compared. PDFF was estimated using every combinatorial subset of ROIs and compared to the nine-ROI average. Mean nine-ROI PDFF and segmental PDFF standard deviation were summarized descriptively. Segmental PDFF was compared using a one-way analysis of variance, and lobar PDFF was compared using a paired t-test and a Bland-Altman analysis. The PDFF estimated by every subset of ROIs was informally compared to the nine-ROI average using median intraclass correlation coefficients (ICCs) and Bland-Altman analyses. The study population's mean whole-liver PDFF was 10.1 ± 8.9% (range: 1.1-44.1%). Although there was no significant difference in average segmental (P = 0.452) or lobar (P = 0.154) PDFF, left and right lobe PDFF differed by at least 1.5 percentage points in 25.1% (98/391) of patients. Any strategy with ≥4 ROIs had ICC >0.995 and, in 115 of 126 four-ROI strategies (91%), limits of agreement (LOA) <1.5%; only 2/36 (6%) of two-ROI strategies and 46/84 (55%) of three-ROI strategies had LOA <1.5%. Four-ROI sampling strategies with two ROIs in the left and right lobes achieve close agreement with nine-ROI PDFF. 3 Technical Efficacy: Stage 2 J. Magn. Reson. Imaging 2018;47:988-994. © 2017 International Society for Magnetic Resonance
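
    The combinatorial subset evaluation is straightforward to reproduce in outline. The sketch below generates synthetic segmental PDFF values, then scores every k-ROI subset against the nine-ROI average by Bland-Altman limits of agreement; the data model and noise level are illustrative stand-ins, so the counts will not match the study's.

```python
# Schematic subset search: every k-ROI average vs the nine-ROI reference.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n_patients = 391
liver = rng.gamma(2.0, 5.0, size=(n_patients, 1))           # patient-level PDFF
pdff = liver + rng.normal(0.0, 2.0, size=(n_patients, 9))   # 9 segmental ROIs
ref = pdff.mean(axis=1)                                     # nine-ROI average

for k in (2, 3, 4):
    halfwidths = []
    for subset in combinations(range(9), k):
        diff = pdff[:, subset].mean(axis=1) - ref
        halfwidths.append(1.96 * diff.std(ddof=1))          # Bland-Altman LOA
    ok = sum(h < 1.5 for h in halfwidths)
    print(f"{k}-ROI strategies with LOA half-width < 1.5 pp: {ok}/{len(halfwidths)}")
```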

  13. Optimization of microwave-assisted extraction with saponification (MAES) for the determination of polybrominated flame retardants in aquaculture samples.

    Science.gov (United States)

    Fajar, N M; Carro, A M; Lorenzo, R A; Fernandez, F; Cela, R

    2008-08-01

    The efficiency of microwave-assisted extraction with saponification (MAES) for the determination of seven polybrominated flame retardants (polybrominated biphenyls, PBBs; and polybrominated diphenyl ethers, PBDEs) in aquaculture samples is described and compared with microwave-assisted extraction (MAE). Chemometric techniques based on experimental designs and desirability functions were used for simultaneous optimization of the operational parameters used in both MAES and MAE processes. Application of MAES to this group of contaminants in aquaculture samples, which had not been previously applied to this type of analytes, was shown to be superior to MAE in terms of extraction efficiency, extraction time and lipid content extracted from complex matrices (0.7% as against 18.0% for MAE extracts). PBBs and PBDEs were determined by gas chromatography with micro-electron capture detection (GC-μECD). The quantification limits for the analytes were 40-750 pg g⁻¹ (except for BB-15, which was 1.43 ng g⁻¹). Precision for MAES-GC-μECD (%RSD < 11%) was significantly better than for MAE-GC-μECD (%RSD < 20%). The accuracy of both optimized methods was satisfactorily demonstrated by analysis of appropriate certified reference material (CRM), WMF-01.

  14. Plasma treatment of bulk niobium surface for superconducting rf cavities: Optimization of the experimental conditions on flat samples

    Directory of Open Access Journals (Sweden)

    M. Rašković

    2010-11-01

    Full Text Available Accelerator performance, in particular the average accelerating field and the cavity quality factor, depends on the physical and chemical characteristics of the superconducting radio-frequency (SRF) cavity surface. Plasma based surface modification provides an excellent opportunity to eliminate nonsuperconductive pollutants in the penetration depth region and to remove the mechanically damaged surface layer, which improves the surface roughness. Here we show that the plasma treatment of bulk niobium (Nb) presents an alternative surface preparation method to the commonly used buffered chemical polishing and electropolishing methods. We have optimized the experimental conditions in the microwave glow discharge system and their influence on the Nb removal rate on flat samples. We have achieved an etching rate of 1.7 μm/min using only 3% chlorine in the reactive mixture. Combining a fast etching step with a moderate one, we have improved the surface roughness without exposing the sample surface to the environment. We intend to apply the optimized experimental conditions to the preparation of single cell cavities, pursuing the improvement of their rf performance.

  15. Optimal sample size of signs for classification of radiational and oily soils

    International Nuclear Information System (INIS)

    Babayev, M.P.; Iskenderov, S.M.; Aghayev, R.A.

    2012-01-01

    Full text: This article argues that the classification of radiational and oily soils should be, in essence, a compact intelligence system containing maximum information on the classes of soil objects in the accepted feature space. Accumulated experience shows that the set of the most informative soil signs comprises at most 7-8 indexes. In our opinion, a more correct approach to selecting the most informative (most important) indexes is the method of trial and error, that is, the experimental method, which draws on the wide experience and intuition of the researcher, or group of researchers, engaged for many years in the field of soil science. At this operational stage of the formal soil-classification device, more concretely in the section assessing the informativeness of soil signs, the procedure is, in our opinion, overly mathematized and in some cases does not even reflect the true picture. In this case, 21 pairs of correlation elements between the selected soil signs are calculated as a measure of linear association. The size of the correlation row is set equal to 6, since increasing it would sharply increase the volume of calculation. It is pertinent to note that this is the first attempt to create correlation matrices of the most important signs of radiational and oily soils.

  16. On the Optimal Policy for the Single-product Inventory Problem with Set-up Cost and a Restricted Production Capacity

    NARCIS (Netherlands)

    Foreest, N. D. van; Wijngaard, J.

    2010-01-01

    The single-product, stationary inventory problem with set-up cost is one of the classical problems in stochastic operations research. Theories have been developed to cope with finite production capacity in periodic review systems, and it has been proved that optimal policies for these cases are not

  17. A boundary-optimized rejection region test for the two-sample binomial problem.

    Science.gov (United States)

    Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A

    2018-03-30

    Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
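
    The key constraint in this construction, that the type I error may not exceed α for any true common event rate, is easy to check numerically for a candidate rejection region. A minimal illustration for 4 controls and 4 treated subjects (the region shown is just the natural first candidate the sequential construction would consider, not the paper's final test):

```python
# Size (sup over the nuisance parameter) of a candidate rejection region for the
# two-sample binomial problem, with 4 subjects per arm.
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def size(region, n_c=4, n_t=4, grid=2001):
    """Supremum over p in [0, 1] of P(reject) when both arms share event rate p."""
    worst = 0.0
    for i in range(grid):
        p = i / (grid - 1)
        prob = sum(binom_pmf(xc, n_c, p) * binom_pmf(xt, n_t, p)
                   for xc, xt in region)
        worst = max(worst, prob)
    return worst

# Reject only when every control has the event and no treated subject does:
R = {(4, 0)}
print("sup type I error of R:", round(size(R), 4))   # stays well below 0.05
```

    The sequential construction described above then keeps enlarging the region (first within outcomes where all controls have the event, then where one control avoids it, and so on) while this supremum stays below α.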

  18. Optimized pre-thinning procedures of ion-beam thinning for TEM sample preparation by magnetorheological polishing.

    Science.gov (United States)

    Luo, Hu; Yin, Shaohui; Zhang, Guanhua; Liu, Chunhui; Tang, Qingchun; Guo, Meijian

    2017-10-01

    Ion-beam thinning is a well-established sample preparation technique for transmission electron microscopy (TEM), but its tedious procedures and labor-consuming pre-thinning can seriously reduce its efficiency. In this work, we present a simple pre-thinning technique that uses magnetorheological (MR) polishing to replace manual lapping and dimpling, and demonstrate the successful preparation of electron-transparent single-crystal silicon samples after MR polishing and single-sided ion milling. Dimples pre-thinned to less than 30 microns and with little mechanical surface damage were repeatedly produced under optimized MR polishing conditions. Samples pre-thinned by both the MR polishing and the traditional technique were ion-beam thinned from the rear side until perforation, and then observed by optical microscopy and TEM. The results show that the specimens pre-thinned by the MR technique were free from dimpling-related defects, which were still residual in samples pre-thinned by the conventional technique. High-resolution TEM images could be acquired after MR polishing and single-sided ion thinning. MR polishing promises to be an adaptable and efficient method for pre-thinning in the preparation of TEM specimens, especially for brittle ceramics. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. An Optimized DNA Analysis Workflow for the Sampling, Extraction, and Concentration of DNA obtained from Archived Latent Fingerprints.

    Science.gov (United States)

    Solomon, April D; Hytinen, Madison E; McClain, Aryn M; Miller, Marilyn T; Dawson Cruz, Tracey

    2018-01-01

    DNA profiles have been obtained from fingerprints, but there is limited knowledge regarding DNA analysis from archived latent fingerprints: touch DNA "sandwiched" between adhesive and paper. Thus, this study sought to comparatively analyze a variety of collection and analytical methods in an effort to seek an optimized workflow for this specific sample type. Untreated and treated archived latent fingerprints were utilized to compare different biological sampling techniques, swab diluents, DNA extraction systems, DNA concentration practices, and post-amplification purification methods. Archived latent fingerprints disassembled and sampled via direct cutting, followed by DNA extracted using the QIAamp® DNA Investigator Kit, and concentration with Centri-Sep™ columns increased the odds of obtaining an STR profile. Using the recommended DNA workflow, 9 of the 10 samples provided STR profiles, which included 7-100% of the expected STR alleles and two full profiles. Thus, with carefully selected procedures, archived latent fingerprints can be a viable DNA source for criminal investigations including cold/postconviction cases. © 2017 American Academy of Forensic Sciences.

  20. A Global Optimizing Policy for Decaying Items with Ramp-Type Demand Rate under Two-Level Trade Credit Financing Taking Account of Preservation Technology

    Directory of Open Access Journals (Sweden)

    S. R. Singh

    2013-01-01

    Full Text Available An inventory system for deteriorating items, with a ramp-type demand rate, under a two-level trade credit policy taking account of preservation technology is considered. The objective of this study is to develop a deteriorating-inventory policy for the case in which the supplier provides the retailer a permissible delay in payments; during this credit period, the retailer accumulates revenue and earns interest on it, and also invests in preservation technology to reduce the rate of product deterioration. Shortages are allowed and partially backlogged. Sufficient conditions for the existence and uniqueness of the optimal replenishment policy are provided, and an algorithm for its determination is proposed. Numerical examples illustrate the obtained results, and a sensitivity analysis of the optimal solution with respect to the leading parameters of the system is carried out.

  1. Novel synthesis of nanocomposite for the extraction of Sildenafil Citrate (Viagra) from water and urine samples: Process screening and optimization.

    Science.gov (United States)

    Asfaram, Arash; Ghaedi, Mehrorang; Purkait, Mihir Kumar

    2017-09-01

    A sensitive analytical method is investigated to concentrate and determine trace levels of Sildenafil Citrate (SLC) present in water and urine samples. The method is based on sample treatment by dispersive solid-phase micro-extraction (DSPME) with a laboratory-made Mn@CuS/ZnS nanocomposite loaded on activated carbon (Mn@CuS/ZnS-NCs-AC) as a sorbent for the target analyte. The efficiency was enhanced by combining ultrasound assistance (UA) with dispersive nanocomposite solid-phase micro-extraction (UA-DNSPME). Four significant variables affecting SLC recovery, namely pH, eluent volume, sonication time and adsorbent mass, were selected by Plackett-Burman design (PBD) experiments. These selected factors were then optimized by a central composite design (CCD) to maximize the extraction of SLC. The results showed that the optimum conditions for maximizing the extraction of SLC were pH 6.0, 300 µL of eluent (acetonitrile), 10 mg of adsorbent and 6 min of sonication. Under optimized conditions, good linearity was obtained from 30 to 4000 ng mL⁻¹ with an R² of 0.99. The limit of detection (LOD) was 2.50 ng mL⁻¹, and the recoveries at two spiked levels ranged from 97.37 to 103.21% with a relative standard deviation (RSD) of less than 4.50% (n = 15). The enhancement factor (EF) was 81.91. The results show that the combination of UA with DNSPME is a suitable method for the determination of SLC in water and urine samples. Copyright © 2017 Elsevier B.V. All rights reserved.
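
    The CCD step described above can be sketched numerically. The fragment below builds a rotatable central composite design for two of the four factors, fits a quadratic response surface and locates the coded optimum; the "true" response used to simulate recoveries is a made-up stand-in, not the reported assay:

      # Illustrative CCD sketch for two factors (e.g., pH and sonication time):
      # generate the design, fit a quadratic model, and find its optimum.
      import numpy as np
      from scipy.optimize import minimize

      alpha = np.sqrt(2.0)                       # rotatable axial distance for k = 2
      factorial = [(-1, -1), (1, -1), (-1, 1), (1, 1)]
      axial = [(-alpha, 0), (alpha, 0), (0, -alpha), (0, alpha)]
      center = [(0, 0)] * 3
      X = np.array(factorial + axial + center)   # coded design matrix

      def true_response(x):                      # assumed, for demonstration only
          return 95 - 4*(x[0] - 0.3)**2 - 6*(x[1] - 0.5)**2 + 1.5*x[0]*x[1]

      rng = np.random.default_rng(1)
      y = np.array([true_response(x) for x in X]) + rng.normal(0, 0.5, len(X))

      # Quadratic model terms: 1, x1, x2, x1^2, x2^2, x1*x2
      D = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                           X[:, 0]**2, X[:, 1]**2, X[:, 0]*X[:, 1]])
      beta, *_ = np.linalg.lstsq(D, y, rcond=None)

      def fitted(x):
          return beta @ np.array([1, x[0], x[1], x[0]**2, x[1]**2, x[0]*x[1]])

      opt = minimize(lambda x: -fitted(x), x0=[0, 0])
      print("coded optimum:", opt.x, "predicted recovery:", fitted(opt.x))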

  2. Exploring structural variability in X-ray crystallographic models using protein local optimization by torsion-angle sampling

    International Nuclear Information System (INIS)

    Knight, Jennifer L.; Zhou, Zhiyong; Gallicchio, Emilio; Himmel, Daniel M.; Friesner, Richard A.; Arnold, Eddy; Levy, Ronald M.

    2008-01-01

    Torsion-angle sampling, as implemented in the Protein Local Optimization Program (PLOP), is used to generate multiple structurally variable single-conformer models which are in good agreement with X-ray data. An ensemble-refinement approach to differentiate between positional uncertainty and conformational heterogeneity is proposed. Modeling structural variability is critical for understanding protein function and for modeling reliable targets for in silico docking experiments. Because of the time-intensive nature of manual X-ray crystallographic refinement, automated refinement methods that thoroughly explore conformational space are essential for the systematic construction of structurally variable models. Using five proteins spanning resolutions of 1.0–2.8 Å, it is demonstrated how torsion-angle sampling of backbone and side-chain libraries with filtering against both the chemical energy, using a modern effective potential, and the electron density, coupled with minimization of a reciprocal-space X-ray target function, can generate multiple structurally variable models which fit the X-ray data well. Torsion-angle sampling as implemented in the Protein Local Optimization Program (PLOP) has been used in this work. Models with the lowest R free values are obtained when electrostatic and implicit solvation terms are included in the effective potential. HIV-1 protease, calmodulin and SUMO-conjugating enzyme illustrate how variability in the ensemble of structures captures structural variability that is observed across multiple crystal structures and is linked to functional flexibility at hinge regions and binding interfaces. An ensemble-refinement procedure is proposed to differentiate between variability that is a consequence of physical conformational heterogeneity and that which reflects uncertainty in the atomic coordinates.

  3. Optimal selective renewal policy for systems subject to propagated failures with global effect and failure isolation phenomena

    International Nuclear Information System (INIS)

    Maaroufi, Ghofrane; Chelbi, Anis; Rezg, Nidhal

    2013-01-01

    This paper considers a selective maintenance policy for multi-component systems for which a minimum level of reliability is required for each mission. Such systems need to be maintained between consecutive missions. The proposed strategy aims at selecting the components to be maintained (renewed) after the completion of each mission such that a required reliability level is guaranteed up to the next stop at minimum cost, taking into account the time period allotted for maintenance between missions and the possibility of extending it while paying a penalty cost. This strategy is applied to binary-state systems subject to propagated failures with global effect and failure isolation phenomena. A set of rules to reduce the solution space for such complex systems is developed. A numerical example is presented to illustrate the modeling approach and the use of the reduction rules. Finally, Monte Carlo simulation is used in combination with the selective maintenance optimization model to deal with a number of successive missions.

  4. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will grow when Sentinel 2-3 data become available. Another challenge researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two features of Random Forests: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transfer could make supervised modelling possible for applications lacking sample data.
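
    The variable-importance step maps directly onto standard tooling. A minimal scikit-learn sketch, using synthetic placeholders for the MODIS composites and land-cover labels, ranks the variables and compares a reduced subset against the full set:

      # Rank time-series variables by Random Forest importance and test a
      # reduced subset. Data are synthetic stand-ins for MODIS composites.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_pixels, n_vars = 2000, 36          # e.g. 36 composite/band variables (assumed)
      X = rng.normal(size=(n_pixels, n_vars))
      y = (X[:, 3] + 0.8*X[:, 7] - X[:, 20] + rng.normal(0, 0.5, n_pixels) > 0).astype(int)

      rf = RandomForestClassifier(n_estimators=300, random_state=0)
      rf.fit(X, y)

      ranking = np.argsort(rf.feature_importances_)[::-1]
      top_half = ranking[:n_vars // 2]     # keep about half the variables, as in the study

      full = cross_val_score(rf, X, y, cv=5).mean()
      half = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                             X[:, top_half], y, cv=5).mean()
      print(f"accuracy with all variables: {full:.3f}, with top half: {half:.3f}")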

  5. Optimization and application of octadecyl-modified monolithic silica for solid-phase extraction of drugs in whole blood samples.

    Science.gov (United States)

    Namera, Akira; Saito, Takeshi; Ota, Shigenori; Miyazaki, Shota; Oikawa, Hiroshi; Murata, Kazuhiro; Nagao, Masataka

    2017-09-29

    Monolithic silica in MonoSpin for solid-phase extraction of drugs from whole blood samples was developed to facilitate high-throughput analysis. Monolithic silicas with various pore sizes and octadecyl contents were synthesized, and their effects on recovery rates were evaluated. The silica monolith M18-200 (20 µm through-pore size, 10.4 nm mesopore size, and 17.3% carbon content) achieved the best recovery of the target analytes in whole blood samples. The extraction proceeded under centrifugal force at 1000 rpm for 2 min, and the eluate was directly injected into the liquid chromatography-mass spectrometry system without any tedious steps such as evaporation of extraction solvents. Under the optimized conditions, low detection limits of 0.5-2.0 ng mL⁻¹ and calibration ranges up to 1000 ng mL⁻¹ were obtained. The recoveries of the target drugs in whole blood were 76-108% with relative standard deviations of less than 14.3%. These results indicate that the developed method based on monolithic silica is convenient, highly efficient, and applicable to detecting drugs in whole blood samples. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Active SAmpling Protocol (ASAP) to Optimize Individual Neurocognitive Hypothesis Testing: A BCI-Inspired Dynamic Experimental Design.

    Science.gov (United States)

    Sanchez, Gaëtan; Lecaignard, Françoise; Otman, Anatole; Maby, Emmanuel; Mattout, Jérémie

    2016-01-01

    The relatively young field of Brain-Computer Interfaces has promoted the use of electrophysiology and neuroimaging in real-time. In the meantime, cognitive neuroscience studies, which make extensive use of functional exploration techniques, have evolved toward model-based experiments and fine hypothesis testing protocols. Although these two developments are mostly unrelated, we argue that, brought together, they may trigger an important shift in the way experimental paradigms are being designed, which should prove fruitful to both endeavors. This change simply consists in using real-time neuroimaging in order to optimize advanced neurocognitive hypothesis testing. We refer to this new approach as the instantiation of an Active SAmpling Protocol (ASAP). As opposed to classical (static) experimental protocols, ASAP implements online model comparison, enabling the optimization of design parameters (e.g., stimuli) during the course of data acquisition. This follows the well-known principle of sequential hypothesis testing. What is radically new, however, is our ability to perform online processing of the huge amount of complex data that brain imaging techniques provide. This is all the more relevant at a time when physiological and psychological processes are beginning to be approached using more realistic, generative models which may be difficult to tease apart empirically. Based upon Bayesian inference, ASAP proposes a generic and principled way to optimize experimental design adaptively. In this perspective paper, we summarize the main steps in ASAP. Using synthetic data we illustrate its superiority in selecting the right perceptual model compared to a classical design. Finally, we briefly discuss its future potential for basic and clinical neuroscience as well as some remaining challenges.
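
    The core loop of such an adaptive design can be condensed to a toy example. The Python sketch below maintains a posterior over two assumed competing response models and picks, before each simulated trial, the stimulus with the highest expected information gain; the two psychometric models are illustrative assumptions, not those used in the paper:

      # Toy ASAP-style loop: online Bayesian model comparison with the next
      # stimulus chosen to maximize expected information gain about the model.
      import numpy as np

      stimuli = np.linspace(-3, 3, 61)                 # candidate design points

      def p_resp(model, s):                            # P(response = 1 | model, stimulus)
          if model == 0:
              return 1.0 / (1.0 + np.exp(-2.0 * s))    # steep logistic (assumed)
          return 1.0 / (1.0 + np.exp(-0.7 * (s - 1)))  # shallow, shifted logistic (assumed)

      def entropy(p):
          p = np.clip(p, 1e-12, 1 - 1e-12)
          return -(p * np.log(p) + (1 - p) * np.log(1 - p))

      rng = np.random.default_rng(0)
      post = np.array([0.5, 0.5])                      # prior over the two models
      true_model = 0

      for trial in range(30):
          # expected information gain for every candidate stimulus
          p0, p1 = p_resp(0, stimuli), p_resp(1, stimuli)
          p_mix = post[0] * p0 + post[1] * p1
          eig = entropy(p_mix) - (post[0] * entropy(p0) + post[1] * entropy(p1))
          s = stimuli[np.argmax(eig)]                  # optimal next stimulus

          r = rng.random() < p_resp(true_model, s)     # simulated participant response
          like = np.array([p_resp(m, s) if r else 1 - p_resp(m, s) for m in (0, 1)])
          post = post * like
          post /= post.sum()

      print("posterior over models after 30 adaptive trials:", post.round(3))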

  7. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers.

    Science.gov (United States)

    Tisdale, Evgenia; Kennedy, Devin; Xu, Xiaodong; Wilkins, Charles

    2014-01-15

    The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of small amount of CuBr2) than is the pentafluorostyrene component distribution. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Increasing the sampling efficiency of protein conformational transition using velocity-scaling optimized hybrid explicit/implicit solvent REMD simulation

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Yuqi; Wang, Jinan; Shao, Qiang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn; Zhu, Weiliang, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn [ACS Key Laboratory of Receptor Research, Drug Discovery and Design Center, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 555 Zuchongzhi Road, Shanghai 201203 (China); Shi, Jiye, E-mail: qshao@mail.shcnc.ac.cn, E-mail: Jiye.Shi@ucb.com, E-mail: wlzhu@mail.shcnc.ac.cn [UCB Pharma, 216 Bath Road, Slough SL1 4EN (United Kingdom)

    2015-03-28

    The application of temperature replica exchange molecular dynamics (REMD) simulation to protein motion is limited by its huge requirement of computational resources, particularly when an explicit solvent model is implemented. In a previous study, we developed a velocity-scaling optimized hybrid explicit/implicit solvent REMD method with the aim of reducing the number of temperatures (replicas) while maintaining high sampling efficiency. In this study, we utilized this method to characterize and energetically identify the conformational transition pathway of a protein model, the N-terminal domain of calmodulin. In comparison to the standard explicit solvent REMD simulation, the hybrid REMD is much less computationally expensive but, meanwhile, gives an accurate evaluation of the structural and thermodynamic properties of the conformational transition, in good agreement with the standard REMD simulation. Therefore, the hybrid REMD greatly increases computational efficiency and thus expands the application of REMD simulation to larger protein systems.
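
    Independent of the solvent treatment, the exchange step common to all REMD variants is compact enough to sketch. The fragment below applies the standard Metropolis swap criterion between neighbouring temperature replicas, with placeholder energies standing in for the simulation output:

      # Replica-exchange swap attempt: accept with min(1, exp[(b_i-b_j)(E_i-E_j)]).
      # The temperature ladder and energies are illustrative placeholders.
      import numpy as np

      kB = 0.0019872041                                # kcal/(mol K)
      temps = np.array([300.0, 320.0, 342.0, 366.0])   # replica ladder (assumed)
      betas = 1.0 / (kB * temps)
      rng = np.random.default_rng(0)
      energies = rng.normal(-5000, 50, size=temps.size)  # placeholder potential energies

      for i in range(temps.size - 1):
          j = i + 1
          delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
          accept = rng.random() < np.exp(min(0.0, delta))
          if accept:
              energies[[i, j]] = energies[[j, i]]      # exchange configurations
          print(f"swap {i}<->{j}: delta={delta:+.2f}, accepted={accept}")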

  9. Design and sampling plan optimization for RT-qPCR experiments in plants: a case study in blueberry

    Directory of Open Access Journals (Sweden)

    Jose V Die

    2016-03-01

    Full Text Available The qPCR assay has become a routine technology in plant biotechnology and agricultural research. It is unlikely to be improved much further technically, but challenges remain in minimizing the variability of results and in transparency when reporting technical data in support of the conclusions of a study. A number of aspects of the pre- and post-assay workflow contribute to variability of results. Here, through the study of how error is introduced into qPCR measurements at different stages of the workflow, we describe the most important causes of technical variability in a case study using blueberry. We found that the stage for which increasing the number of replicates would be most beneficial depends on the tissue used. For example, we would recommend using more RT replicates when working with leaf tissue, while using more sampling (RNA extraction) replicates would be recommended when working with stems or fruits to obtain optimal results. Using more qPCR replicates provides the least benefit, as it is the most reproducible step. By knowing the distribution of error over an entire experiment and the costs at each step, we have developed a script to identify the optimal sampling plan within the limits of a given budget. These findings should help plant scientists improve the design of qPCR experiments and refine their laboratory practices in order to conduct qPCR assays in a more reliable manner and produce more consistent and reproducible data.
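
    A much-simplified version of such a budget-constrained search is sketched below: it enumerates nested replicate allocations (sampling/RNA extraction, RT, qPCR) and keeps the plan with the smallest variance of the mean that fits the budget. The variance components and per-replicate costs are invented placeholders, not the blueberry estimates:

      # Enumerate nested replicate plans and minimize variance of the grand
      # mean subject to a budget. All numeric inputs are assumptions.
      import itertools

      var_bio, var_rt, var_qpcr = 0.40, 0.15, 0.05   # error at each stage (assumed)
      cost_bio, cost_rt, cost_qpcr = 10.0, 4.0, 1.0  # cost per replicate (assumed)
      budget = 120.0

      best = None
      for n1, n2, n3 in itertools.product(range(1, 11), repeat=3):
          cost = n1*cost_bio + n1*n2*cost_rt + n1*n2*n3*cost_qpcr
          if cost > budget:
              continue
          # variance of the grand mean in a fully nested design
          var = var_bio/n1 + var_rt/(n1*n2) + var_qpcr/(n1*n2*n3)
          if best is None or var < best[0]:
              best = (var, (n1, n2, n3), cost)

      var, plan, cost = best
      print(f"optimal plan (extraction, RT, qPCR) = {plan}, "
            f"variance = {var:.4f}, cost = {cost:.0f}")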

  10. Optimizing sample pretreatment for compound-specific stable carbon isotopic analysis of amino sugars in marine sediment

    Science.gov (United States)

    Zhu, R.; Lin, Y.-S.; Lipp, J. S.; Meador, T. B.; Hinrichs, K.-U.

    2014-09-01

    Amino sugars are quantitatively significant constituents of soil and marine sediment, but their sources and turnover in environmental samples remain poorly understood. The stable carbon isotopic composition of amino sugars can provide information on the lifestyles of their source organisms and can be monitored during incubations with labeled substrates to estimate the turnover rates of microbial populations. However, until now, such investigation has been carried out only with soil samples, partly because of the much lower abundance of amino sugars in marine environments. We therefore optimized a procedure for compound-specific isotopic analysis of amino sugars in marine sediment, employing gas chromatography-isotope ratio mass spectrometry. The whole procedure consisted of hydrolysis, neutralization, enrichment, and derivatization of amino sugars. Except for the derivatization step, the protocol introduced negligible isotopic fractionation, and the minimum requirement of amino sugar for isotopic analysis was 20 ng, i.e., equivalent to ~8 ng of amino sugar carbon. Compound-specific stable carbon isotopic analysis of amino sugars obtained from marine sediment extracts indicated that glucosamine and galactosamine were mainly derived from organic detritus, whereas muramic acid showed isotopic imprints from indigenous bacterial activities. The δ13C analysis of amino sugars provides a valuable addition to the biomarker-based characterization of microbial metabolism in the deep marine biosphere, which so far has been lipid oriented and biased towards the detection of archaeal signals.

  11. Optimization of pressurized liquid extraction (PLE) of dioxin-furans and dioxin-like PCBs from environmental samples.

    Science.gov (United States)

    Antunes, Pedro; Viana, Paula; Vinhas, Tereza; Capelo, J L; Rivera, J; Gaspar, Elvira M S M

    2008-05-30

    Pressurized liquid extraction (PLE, also known as accelerated solvent extraction, ASE), applying three extraction cycles at elevated temperature and pressure, improved the efficiency of solvent extraction compared with classical Soxhlet extraction. Polychlorinated dibenzo-p-dioxins (PCDDs), polychlorinated dibenzofurans (PCDFs) and dioxin-like PCBs (coplanar polychlorinated biphenyls, Co-PCBs) in two Certified Reference Materials [DX-1 (sediment) and BCR 529 (soil)] and in two contaminated environmental samples (sediment and soil) were extracted by the ASE and Soxhlet methods. Unlike data previously reported by other authors, the results demonstrated that ASE using n-hexane as solvent with three extraction cycles, 12.4 MPa (1800 psi) and 150 °C achieves recoveries similar to classical Soxhlet extraction for PCDFs and Co-PCBs, and better recoveries for PCDDs. ASE extraction, performed in less time and with less solvent, proved to be, under optimized conditions, an excellent extraction technique for the simultaneous analysis of PCDD/PCDFs and Co-PCBs in environmental samples. Such a fast analytical methodology, having the best cost-efficiency ratio, will improve monitoring, provide more information about the occurrence of dioxins and their toxicity levels, and thereby contribute to protecting human health.

  12. Optimization of loop-mediated isothermal amplification (LAMP) assays for the detection of Leishmania DNA in human blood samples.

    Science.gov (United States)

    Abbasi, Ibrahim; Kirstein, Oscar D; Hailu, Asrat; Warburg, Alon

    2016-10-01

    Visceral leishmaniasis (VL), one of the most important neglected tropical diseases, is caused by Leishmania donovani, a eukaryotic protozoan parasite of the genus Leishmania. The disease is prevalent mainly in the Indian sub-continent, East Africa and Brazil. VL can be diagnosed by PCR amplifying the ITS1 and/or kDNA genes. The current study involved the optimization of loop-mediated isothermal amplification (LAMP) for the detection of Leishmania DNA in human blood or tissue samples. Three LAMP systems were developed; in two of these, the primers were designed based on regions of the ITS1 gene shared among different Leishmania species, while the primers for the third LAMP system were derived from a newly identified repeated region in the Leishmania genome. The LAMP tests were shown to be sufficiently sensitive to detect 0.1 pg of DNA from most Leishmania species. The green nucleic acid stain SYTO 16 was used here for the first time to allow real-time monitoring of LAMP amplification. The advantage of real-time LAMP using SYTO 16 over end-point LAMP product detection is discussed. The efficacy of the real-time LAMP tests for detecting Leishmania DNA in dried blood samples from volunteers living in endemic areas was compared with that of qRT-kDNA PCR. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Performance of two liquid scintillators and optimization of a Wallac 1411 counter for tritium quantification in aqueous samples

    International Nuclear Information System (INIS)

    Contreras de la Cruz, E. de J.; Lopez del Rio, H.; Davila R, J. I.; Mireles G, F.; Pinedo V, J. L.

    2014-10-01

    The optimization of a Wallac 1411 liquid scintillation counter is presented, together with the performance of the water-miscible scintillation cocktails OptiPhase HiSafe 3 and Ultima Gold AB for tritium quantification in aqueous samples. The effects of luminescence, quenching, solution pH and the pulse amplitude comparator (PAC) level on the response of both scintillation cocktails in tritium measurement were evaluated. Quenching and luminescence modify the scintillator response: the former decreases the counting efficiency and increases the minimum detectable activity, while the latter interferes with tritium quantification in the window of interest, although the effect disappears after keeping the samples in darkness for 4 hours. The maximum counting efficiency was 24% for OptiPhase HiSafe 3 and 31% for Ultima Gold AB, diminishing with quenching to values of 8 and 11%, respectively. For a counting time of 6 hours and low quenching, the minimum detectable concentration was 13.4 ± 0.2 Bq/L for OptiPhase HiSafe 3 and 9.9 ± 0.1 Bq/L for Ultima Gold AB. Both scintillation cocktails responded appropriately to acidic and basic solutions, chemiluminescence appearing only in Ultima Gold AB at highly basic pH. Varying the PAC setting between 1 and 256 had no effect on the tritium measurement until values above 90. (Author)
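
    For orientation, the two reported quantities can be reproduced from first principles. The short sketch below computes the counting efficiency from a standard and a Currie-type minimum detectable concentration; all numerical inputs are illustrative assumptions, not the measured values:

      # Counting efficiency and Currie minimum detectable concentration (MDC).
      # All numeric inputs below are invented for illustration.
      import math

      def efficiency(net_cpm, activity_bq):
          """Counting efficiency = net count rate / disintegration rate."""
          return (net_cpm / 60.0) / activity_bq

      def mdc_bq_per_l(bkg_cpm, eff, t_min, sample_l):
          """Currie minimum detectable concentration (Bq/L) for paired counting."""
          ld_counts = 2.71 + 4.65 * math.sqrt(bkg_cpm * t_min)   # detection limit, counts
          return ld_counts / (60.0 * t_min * eff * sample_l)

      eff = efficiency(net_cpm=465.0, activity_bq=25.0)          # ~31% (assumed inputs)
      print(f"efficiency = {eff:.2%}")
      print(f"MDC = {mdc_bq_per_l(bkg_cpm=15.0, eff=eff, t_min=360, sample_l=0.008):.1f} Bq/L")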

  14. An Optimization Study on Listening Experiments to Improve the Comparability of Annoyance Ratings of Noise Samples from Different Experimental Sample Sets.

    Science.gov (United States)

    Di, Guoqing; Lu, Kuanguang; Shi, Xiaofan

    2018-03-08

    Annoyance ratings obtained from listening experiments are widely used in studies on the health effects of environmental noise. In listening experiments, participants usually rate the annoyance of each noise sample according to its relative degree of annoyance among all samples in the experimental sample set if there are no reference sound samples, which leads to poor comparability between experimental results obtained from different experimental sample sets. To solve this problem, this study proposed adding several pink noise samples with certain loudness levels to experimental sample sets as reference sound samples. On this basis, the standard curve between the logarithmic mean annoyance and the loudness level of pink noise was used to calibrate the experimental results, and the calibration procedures were described in detail. Furthermore, as a case study, six different types of noise sample sets were selected for listening experiments using this method to examine its applicability. Results showed that the differences in the annoyance ratings of each identical noise sample from different experimental sample sets were markedly decreased after calibration. The determination coefficient (R²) of the linear fitting functions between psychoacoustic annoyance (PA) and mean annoyance (MA) of noise samples from different experimental sample sets increased clearly after calibration. The case study indicated that the method above is applicable to calibrating annoyance ratings obtained from different types of noise sample sets. After calibration, the comparability of annoyance ratings of noise samples from different experimental sample sets can be distinctly improved.

  15. Effect of different economic support policies on the optimal synthesis and operation of a distributed energy supply system with renewable energy sources for an industrial area

    International Nuclear Information System (INIS)

    Casisi, Melchiorre; De Nardi, Alberto; Pinamonti, Piero; Reini, Mauro

    2015-01-01

    Highlights: • MILP model optimization identifies the best structure and operation of an energy system. • The total cost of the system is minimized according to the industrial stakeholders' interests. • The effects of adopting economic support policies on the system are evaluated. • The social cost of incentives is compared with the corresponding CO2 emission reduction. • Support schemes that promote an actual environmental benefit are highlighted. - Abstract: Economic support policies are widely adopted in European countries in order to promote more efficient energy usage and the growth of renewable energy technologies. On the one hand, these schemes reduce overall pollutant emissions and the total cost from the point of view of the energy systems, but on the other hand their social impact in terms of economic investment needs to be evaluated. The aim of this paper is to compare the social cost of applying each incentive with the corresponding CO2 emission reduction and overall energy saving. A Mixed Integer Linear Programming (MILP) optimization procedure is used to evaluate the effect of different economic support policies on the optimal configuration and operation of a distributed energy supply system for an industrial area located in the north-east of Italy. The minimized objective function is the total annual cost of owning, operating and maintaining the whole energy system. The expectation is that a proper mix of renewable energy technologies and cogeneration systems will be included in the optimal solution, depending on the amount and nature of the supporting policies, highlighting the incentives that promote a real environmental benefit
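
    A toy MILP in the same spirit can be written with an off-the-shelf solver. The sketch below (using the PuLP library) chooses whether to install two candidate units and how to dispatch them over a few representative hours, minimizing annual cost with an optional feed-in incentive; all technology data, prices and loads are invented:

      # Toy synthesis-and-dispatch MILP with a PV incentive (PuLP + CBC).
      import pulp

      hours = range(4)
      load = [300, 500, 700, 400]          # kW electrical demand (assumed)
      pv_avail = [0, 150, 250, 100]        # kW available PV output (assumed)
      incentive = 0.05                     # EUR/kWh support for PV generation (assumed)

      prob = pulp.LpProblem("energy_system", pulp.LpMinimize)
      buy_chp = pulp.LpVariable("install_chp", cat="Binary")
      buy_pv = pulp.LpVariable("install_pv", cat="Binary")
      chp = {h: pulp.LpVariable(f"chp_{h}", 0, 600) for h in hours}
      pv = {h: pulp.LpVariable(f"pv_{h}", 0) for h in hours}
      grid = {h: pulp.LpVariable(f"grid_{h}", 0) for h in hours}

      for h in hours:
          prob += chp[h] + pv[h] + grid[h] >= load[h]      # meet demand
          prob += chp[h] <= 600 * buy_chp                  # only if installed
          prob += pv[h] <= pv_avail[h] * buy_pv

      annualized_capex = 30000 * buy_chp + 20000 * buy_pv
      opex = pulp.lpSum(0.09 * chp[h] + 0.20 * grid[h] - incentive * pv[h]
                        for h in hours) * 2190             # scale 4 h to a year (rough)
      prob += annualized_capex + opex                      # objective
      prob.solve(pulp.PULP_CBC_CMD(msg=False))
      print("install CHP:", buy_chp.value(), "install PV:", buy_pv.value(),
            "total cost:", pulp.value(prob.objective))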

  16. Optimization of PMAxx pretreatment to distinguish between human norovirus with intact and altered capsids in shellfish and sewage samples.

    Science.gov (United States)

    Randazzo, Walter; Khezri, Mohammad; Ollivier, Joanna; Le Guyader, Françoise S; Rodríguez-Díaz, Jesús; Aznar, Rosa; Sánchez, Gloria

    2018-02-02

    Shellfish contamination by human noroviruses (HuNoVs) is a serious health and economic problem. Recently an ISO procedure based on RT-qPCR for the quantitative detection of HuNoVs in shellfish has been issued, but such procedures cannot discriminate between inactivated and potentially infectious viruses. The aim of the present study was to optimize a pretreatment using PMAxx to better discriminate between intact and heat-treated HuNoVs in shellfish and sewage. To this end, the optimal conditions (30 min incubation with 100 µM PMAxx and 0.5% Triton, and double photoactivation) were applied to mussels, oysters and cockles artificially inoculated with thermally inactivated (99 °C for 5 min) HuNoV GI and GII. This pretreatment reduced the signal of thermally inactivated HuNoV GI in cockles and HuNoV GII in mussels by >3 log. Additionally, this pretreatment reduced the signal of thermally inactivated HuNoV GI and GII in oysters by 1-1.5 log. Thermal inactivation of HuNoV GI and GII in PBS, sewage and bioaccumulated oysters was also evaluated with the PMAxx-Triton pretreatment. Results showed significant differences between the reductions observed in the control and PMAxx-treated samples in PBS following treatment at 72 and 95 °C for 15 min. In sewage, the RT-qPCR signal of HuNoV GI was completely removed by the PMAxx pretreatment after heating at 72 and 95 °C, while the RT-qPCR signal for HuNoV GII was completely eliminated only at 95 °C. Finally, the PMAxx-Triton pretreatment was applied to naturally contaminated sewage and oysters, with most of the HuNoV genomes quantified in the sewage and oyster samples (12 out of 17) corresponding to undamaged capsids. Although this procedure may still overestimate infectivity, the PMAxx-Triton pretreatment represents a step forward in interpreting the quantification of intact HuNoVs in complex matrices, such as sewage and shellfish, and it could certainly be included in procedures based on RT-qPCR.

  17. Optimization of Decision-Making for Spatial Sampling in the North China Plain, Based on Remote-Sensing a Priori Knowledge

    Science.gov (United States)

    Feng, J.; Bai, L.; Liu, S.; Su, X.; Hu, H.

    2012-07-01

    In this paper, MODIS remote sensing data, which feature low cost, high timeliness and moderate-to-low spatial resolution, were used in the North China Plain (NCP) study region to carry out mixed-pixel spectral decomposition and extract a useful regionalized indicator parameter, namely the fraction (percentage) of winter wheat planting area in each pixel, as the regionalized indicator variable (RIV) for spatial sampling. The RIV values were then analyzed spatially to obtain the spatial structure characteristics (i.e., spatial correlation and variation) of the NCP, which were further processed into scale-fitting, valid a priori knowledge for spatial sampling. Subsequently, based on the idea of rationally integrating probability-based and model-based sampling techniques and effectively utilizing the obtained a priori knowledge, spatial sampling models and design schemes were developed, together with their optimization and optimal selection, as a scientific basis for improving and optimizing existing spatial sampling schemes for large-scale cropland remote sensing monitoring. Additionally, through an adaptive analysis and decision strategy, the optimal local spatial prediction and the gridded system of extrapolation results implemented an adaptive reporting pattern for spatial sampling in accordance with the report-covering units, in order to satisfy the actual needs of sampling surveys.

  18. Optimizing Frozen Sample Preparation for Laser Microdissection: Assessment of CryoJane Tape-Transfer System®.

    Directory of Open Access Journals (Sweden)

    Yelena G Golubeva

    Full Text Available Laser microdissection is an invaluable tool in medical research that facilitates collecting specific cell populations for molecular analysis. The diversity of research targets (e.g., cancerous and precancerous lesions in clinical and animal research, cell pellets, rodent embryos, etc.) and varied scientific objectives, however, present challenges to establishing standard laser microdissection protocols. Sample preparation is crucial for quality RNA, DNA and protein retrieval, and it often determines the feasibility of a laser microdissection project. The majority of microdissection studies in clinical and animal model research are conducted on frozen tissues containing native nucleic acids, unmodified by fixation. However, the variable morphological quality of frozen sections from tissues containing fat, collagen or delicate cell structures can limit or prevent successful harvest of the desired cell population via laser dissection. The CryoJane Tape-Transfer System®, a commercial device that improves cryosectioning outcomes on glass slides, has been reported superior for slide preparation and isolation of high-quality osteocyte RNA (frozen bone) during laser dissection. Considering the reported advantages of CryoJane for laser dissection on glass slides, we asked whether the system could also work with the plastic membrane slides used by UV-laser-based microdissection instruments, as these are better suited for collection of larger target areas. In an attempt to optimize laser microdissection slide preparation for tissues of different RNA stability and cryosectioning difficulty, we evaluated the CryoJane system for use with both glass (laser capture microdissection) and membrane (laser cutting microdissection) slides. We have established a sample preparation protocol for glass and membrane slides including manual coating of membrane slides with CryoJane solutions, cryosectioning, slide staining and dissection procedure, lysis and RNA extraction

  19. Optimization of a high-performance liquid chromatography method to determine ethylenethiourea residues in tomato samples

    International Nuclear Information System (INIS)

    Mora, D.; Rodriguez, O.M.

    2002-01-01

    A method was optimized to determine ethylenethiourea (ETU) residues in tomato samples. The method consisted of three stages: extraction in an ultrasonic bath with methanol; clean-up of the extract on a glass column of 11 mm diameter packed with 2.5 g of a mixture of neutral alumina and activated carbon (97.5:2.5) and 2.5 g of pure neutral alumina, eluted with 250 mL of methanol; and quantification by HPLC on a C18 column with a methanol-water mixture (90:10) as mobile phase at a flow rate of 2.0 mL/min and UV detection at 232 nm. The retention time under these conditions was 2.15 minutes. The figures of merit of the method were determined: a linear range between 1.0 and 28.0 µg/mL of ETU was verified; detection and quantification limits of 0.153 and 0.306 µg/mL, respectively, were calculated by the method of Hubaux and Vos (22); and a recovery of 84% was obtained. (Author) [es

  20. Optimal Acquisition and Production Policy for End-of-Life Engineering Machinery Recovering in a Joint Manufacturing/Remanufacturing System under Uncertainties in Procurement and Demand

    Directory of Open Access Journals (Sweden)

    Haolan Liao

    2017-02-01

    Full Text Available The intense shortage of natural resources and the inchoate phase of automobile remanufacturing in a closed-loop supply chain (CLSC) are driving people to take cyclic manufacturing seriously. Aiming at maximizing resource utilization and production profits, we apply an optimizing mathematical analysis to the modeling of automobile engine remanufacturing in a joint manufacturing system in which the quantity and quality of procurement, as well as market demand, are uncertain. The manufacturer can either produce new products from raw materials or remanufacture returned products taken back from customers; the raw materials are bought from two suppliers, each with a certain probability of supply disruption. The returned products are classified into different quality levels according to the testing results after sorting. By considering the remanufacture-up-to strategy, we obtain the optimal remanufacturing ratio, from which the manufacturing quantity and the corresponding maximized total profit of this joint system are determined. We also investigated a real-life case of auto engine remanufacturing, comparing it with the optimal remanufacturing policy, and the results indicate that material savings of more than 45% and a cost improvement of more than 40% could be achieved when the optimal remanufacturing policy of our model is implemented.
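
    A stochastic toy version of the remanufacture-up-to idea is sketched below: with random core returns and random demand, it searches for the remanufacture-up-to level that maximizes expected profit when remanufacturing is cheaper than new production. Prices, costs and distributions are illustrative assumptions, not the case-study data:

      # Monte-Carlo search for a remanufacture-up-to level S. All inputs assumed.
      import numpy as np

      rng = np.random.default_rng(42)
      n_sim = 20000
      price, c_new, c_reman = 100.0, 70.0, 40.0   # selling price, unit costs (assumed)
      returns = rng.poisson(60, n_sim)            # usable cores recovered per period
      demand = rng.poisson(100, n_sim)

      def expected_profit(S):
          reman = np.minimum(returns, S)              # remanufacture up to S
          new = np.maximum(demand - reman, 0)         # cover the shortfall with new units
          sold = np.minimum(reman + new, demand)
          return np.mean(price*sold - c_reman*reman - c_new*new)

      levels = np.arange(0, 121, 5)
      profits = [expected_profit(S) for S in levels]
      best = levels[int(np.argmax(profits))]
      print(f"best remanufacture-up-to level ~ {best}, "
            f"expected profit ~ {max(profits):.0f}")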

  1. National US public policy on global warming derived from optimization of energy use and environmental impact studies

    International Nuclear Information System (INIS)

    Reck, R.

    1993-01-01

    This paper will discuss possible United States policy responses to global warming. The components of a voluntary program for emissions control will be presented as well as regulatory options, including a carbon tax and tradeable permits. The advantages and disadvantages of both options will be discussed as well as the need for a consistent overall policy response to climate change

  2. National US public policy on global warming derived from optimization of energy use and environmental impact studies

    Energy Technology Data Exchange (ETDEWEB)

    Reck, R.

    1993-12-31

    This paper will discuss possible United States policy responses to global warming. The components of a voluntary program for emissions control will be presented as well as regulatory options, including a carbon tax and tradeable permits. The advantages and disadvantages of both options will be discussed as well as the need for a consistent overall policy response to climate change.

  3. The Relationship between Sun Protection Policy and Associated Practices in a National Sample of Early Childhood Services in Australia

    Science.gov (United States)

    Ettridge, Kerry A.; Bowden, Jacqueline A.; Rayner, Joanne M.; Wilson, Carlene J.

    2011-01-01

    Limiting exposure to sunlight during childhood can significantly reduce the risk of skin cancer. This was the first national study to assess the sun protection policies and practices of early childhood services across Australia. It also examined the key predictors of services' sun protection practices. In 2007, 1017 respondents completed a…

  4. Morphometric and immunocytochemical analysis of melanoma samples for individual optimization of boron neutron capture therapy (BNCT)

    International Nuclear Information System (INIS)

    Carpano, M; Dagrosa, A; Brandizzi, D; Nievas, S; Olivera, M S; Perona, M; Rodriguez, C; Cabrini, R; Juvenal, G; Pisarev, M

    2012-01-01

    Introduction: Tumors from different patients with the same histological diagnosis can show different responses to ionizing radiation, including BNCT. Further knowledge about individual tumor characteristics is needed in order to optimize the individual application of this therapy. In previous studies we have shown different patterns of intracellular boron concentration in three human melanoma cell lines. When we performed xenografts with these cell lines in nude mice, a wide range of boron concentrations in tumor was observed. We also evaluated the tumor temperature obtained by thermography. Objectives: The aim of this study was to evaluate the differences in BPA uptake related to the different histological and thermal characteristics of each tumor in nude mice bearing human melanoma. We also studied proliferation and vasculature in tumors by immunohistochemical studies and their relationship with BPA uptake. Materials and Methods: NIH nude mice of 6-8 weeks were implanted (s.c.) into the back right flank with 3×10⁶ human melanoma cells (MELJ). To evaluate BPA uptake, animals were injected at a dose of 350 mg/kg b.w. (i.p.) and sacrificed 2 h post administration. Each tumor sample was divided into two equal parts, one for boron uptake measurement and another for histological studies. Boron measurements in tissues were performed by ICP-OES. For the histological studies, samples from the tumors were fixed in buffered 10% formaldehyde, embedded in paraffin and stained with hematoxylin and eosin (HE). Infrared imaging studies were performed the day before the biodistribution, measuring the tumor and body temperatures. Immunohistochemical studies were performed with the antibodies Ki-67 and CD31; the first is a marker of proliferative rate and the second is a specific marker of endothelial cells that allows the vasculature to be identified. Formaldehyde-fixed, paraffin-embedded tissues and avidin-biotin complex immunostaining were used. Results: Tumor BPA uptake showed

  5. Sample-interpolation timing: an optimized technique for the digital measurement of time of flight for γ rays and neutrons at relatively low sampling rates

    International Nuclear Information System (INIS)

    Aspinall, M D; Joyce, M J; Mackin, R O; Jarrah, Z; Boston, A J; Nolan, P J; Peyton, A J; Hawkes, N P

    2009-01-01

    A unique digital time pick-off method, known as sample-interpolation timing (SIT), is described. This method demonstrates the possibility of improved timing resolution for the digital measurement of time of flight compared with digital replica-analogue time pick-off methods for signals sampled at relatively low rates. Three analogue timing methods have been replicated in the digital domain (leading-edge, crossover and constant-fraction timing) for pulse data sampled at 8 GSa s⁻¹. Events arising from the ⁷Li(p,n)⁷Be reaction have been detected with an EJ-301 organic liquid scintillator and recorded with a fast digital sampling oscilloscope. Sample-interpolation timing was developed solely for the digital domain and thus performs more efficiently on digital signals compared with analogue time pick-off methods replicated digitally, especially for fast signals sampled at rates that current affordable and portable devices can achieve. Sample interpolation can be applied to any analogue timing method replicated digitally and thus also has the potential to exploit the generic capabilities of analogue techniques with the benefits of operating in the digital domain. A threshold in sampling rate with respect to the signal pulse width is observed beyond which further improvements in timing resolution are not attained. This advance is relevant to many applications in which time-of-flight measurement is essential.
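
    The essence of sample interpolation on a leading edge can be shown in a few lines: locate the first pair of samples bracketing the threshold and interpolate linearly between them to obtain a sub-sample crossing time. The synthetic pulse and sampling rate below are assumptions for demonstration:

      # Sub-sample leading-edge timing via linear interpolation between samples.
      import numpy as np

      fs = 500e6                                  # 500 MSa/s sampling rate (assumed)
      t = np.arange(0, 200e-9, 1.0 / fs)
      t0 = 63.7e-9                                # true arrival time of the pulse
      pulse = np.where(t > t0,
                       np.exp(-(t - t0) / 30e-9) - np.exp(-(t - t0) / 3e-9), 0.0)

      def interpolated_crossing(samples, times, threshold):
          """Return the threshold-crossing time via linear sample interpolation."""
          idx = np.argmax(samples >= threshold)           # first sample above threshold
          y0, y1 = samples[idx - 1], samples[idx]
          frac = (threshold - y0) / (y1 - y0)             # fractional sample position
          return times[idx - 1] + frac * (times[idx] - times[idx - 1])

      thr = 0.3 * pulse.max()
      coarse = t[np.argmax(pulse >= thr)]                 # nearest-sample pick-off
      fine = interpolated_crossing(pulse, t, thr)
      print(f"nearest-sample estimate: {coarse*1e9:.2f} ns, "
            f"interpolated estimate: {fine*1e9:.2f} ns")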

  6. Population Pharmacokinetics and Optimal Sampling Strategy for Model-Based Precision Dosing of Melphalan in Patients Undergoing Hematopoietic Stem Cell Transplantation.

    Science.gov (United States)

    Mizuno, Kana; Dong, Min; Fukuda, Tsuyoshi; Chandra, Sharat; Mehta, Parinda A; McConnell, Scott; Anaissie, Elias J; Vinks, Alexander A

    2018-05-01

    High-dose melphalan is an important component of conditioning regimens for patients undergoing hematopoietic stem cell transplantation. The current dosing strategy based on body surface area results in a high incidence of oral mucositis and gastrointestinal and liver toxicity. Pharmacokinetically guided dosing will individualize exposure and help minimize overexposure-related toxicity. The purpose of this study was to develop a population pharmacokinetic model and an optimal sampling strategy. A population pharmacokinetic model was developed with NONMEM using 98 observations collected from 15 adult patients given the standard dose of 140 or 200 mg/m² by intravenous infusion. The determinant-optimal sampling strategy was explored with the PopED software. Individual area under the curve estimates were generated by Bayesian estimation using the full and the proposed sparse sampling data. The predictive performance of the optimal sampling strategy was evaluated based on bias and precision estimates. The feasibility of the optimal sampling strategy was tested using pharmacokinetic data from five pediatric patients. A two-compartment model best described the data. The final model included body weight and creatinine clearance as predictors of clearance. The determinant-optimal sampling times (and windows) were identified at 0.08 (0.08-0.19), 0.61 (0.33-0.90), 2.0 (1.3-2.7), and 4.0 (3.6-4.0) h post-infusion. An excellent correlation was observed between area under the curve estimates obtained with the full and the proposed four-sample strategy (R² = 0.98). The proposed strategy promises to achieve the target area under the curve as part of precision dosing.
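
    To illustrate why the four sparse times can still recover exposure, the sketch below simulates a generic two-compartment infusion model and compares the dense-grid AUC with a simple trapezoidal estimate at the four reported times (the study itself used Bayesian estimation, which performs better than a trapezoid). The PK parameter values are invented, not the published estimates:

      # Two-compartment infusion model: dense AUC vs. four-sample trapezoid.
      import numpy as np
      from scipy.integrate import odeint

      CL, V1, Q, V2 = 25.0, 15.0, 20.0, 20.0     # L/h, L (assumed values)
      k10, k12, k21 = CL / V1, Q / V1, Q / V2
      dose, t_inf = 200.0, 0.5                   # mg, infusion duration in hours

      def model(a, t):
          rate = dose / t_inf if t < t_inf else 0.0
          a1, a2 = a
          return [rate - (k10 + k12) * a1 + k21 * a2, k12 * a1 - k21 * a2]

      t_dense = np.linspace(0, 4.5, 2001)
      conc_dense = odeint(model, [0.0, 0.0], t_dense)[:, 0] / V1

      t_sparse = np.array([0.08, 0.61, 2.0, 4.0]) + t_inf   # reported post-infusion times
      conc_sparse = np.interp(t_sparse, t_dense, conc_dense)

      auc_full = np.trapz(conc_dense, t_dense)
      auc_sparse = np.trapz(np.concatenate(([0.0], conc_sparse)),
                            np.concatenate(([0.0], t_sparse)))
      print(f"AUC(0-4.5h) dense: {auc_full:.1f} mg*h/L, "
            f"4-sample trapezoid: {auc_sparse:.1f}")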

  7. An Economic and Environmental Assessment Model for Selecting the Optimal Implementation Strategy of Fuel Cell Systems—A Focus on Building Energy Policy

    Directory of Open Access Journals (Sweden)

    Daeho Kim

    2014-08-01

    Full Text Available Considerable effort is being made to reduce the primary energy consumption in buildings. As part of this effort, fuel cell systems are attracting attention as new/renewable energy systems for several reasons: (i) they are distributed generation systems; (ii) they are combined heat and power systems; and (iii) various sources of hydrogen will be available in the future. Therefore, this study aimed to develop an economic and environmental assessment model for selecting the optimal implementation strategy of the fuel cell system, focusing on building energy policy. This study selected two types of buildings (i.e., residential buildings and non-residential buildings) as the target buildings and considered two types of building energy policies (i.e., the standard of energy cost calculation and the standard of a government subsidy). This study established the optimal implementation strategy of the fuel cell system in terms of the life cycle cost and life cycle CO2 emissions. For the residential building, it is recommended that the subsidy level and the system marginal price level be increased. For the non-residential building, it is recommended that the gas energy cost be decreased and the system marginal price level be increased. The developed model could be applied to any other country or any other type of building according to building energy policy.

  8. Sleep and optimism: A longitudinal study of bidirectional causal relationship and its mediating and moderating variables in a Chinese student sample.

    Science.gov (United States)

    Lau, Esther Yuet Ying; Hui, C Harry; Lam, Jasmine; Cheung, Shu-Fai

    2017-01-01

    While both sleep and optimism have been found to be predictive of well-being, few studies have examined their relationship with each other. Neither do we know much about the mediators and moderators of the relationship. This study investigated (1) the causal relationship between sleep quality and optimism in a college student sample, (2) the role of symptoms of depression, anxiety, and stress as mediators, and (3) how circadian preference might moderate the relationship. Internet survey data were collected from 1,684 full-time university students (67.6% female, mean age = 20.9 years, SD = 2.66) at three time-points, spanning about 19 months. Measures included the Attributional Style Questionnaire, the Pittsburgh Sleep Quality Index, the Composite Scale of Morningness, and the Depression Anxiety Stress Scale-21. Moderate correlations were found among sleep quality, depressive mood, stress symptoms, anxiety symptoms, and optimism. Cross-lagged analyses showed a bidirectional effect between optimism and sleep quality. Moreover, path analyses demonstrated that anxiety and stress symptoms partially mediated the influence of optimism on sleep quality, while depressive mood partially mediated the influence of sleep quality on optimism. In support of our hypothesis, sleep quality affects mood symptoms and optimism differently for different circadian preferences. Poor sleep results in depressive mood and thus pessimism in non-morning persons only. In contrast, the aggregated (direct and indirect) effects of optimism on sleep quality were invariant of circadian preference. Taken together, people who are pessimistic generally have more anxious mood and stress symptoms, which adversely affect sleep while morningness seems to have a specific protective effect countering the potential damage poor sleep has on optimism. In conclusion, optimism and sleep quality were both cause and effect of each other. Depressive mood partially explained the effect of sleep quality on optimism

  9. Optimal sampling theory and population modelling - Application to determination of the influence of the microgravity environment on drug distribution and elimination

    Science.gov (United States)

    Drusano, George L.

    1991-01-01

    Optimal sampling theory is evaluated in application to studies of the distribution and elimination of several drugs (including ceftazidime, piperacillin, and ciprofloxacin), using the SAMPLE module of the ADAPT II package of programs developed by D'Argenio and Schumitzky (1979, 1988) and comparing the pharmacokinetic parameter values with results obtained by a traditional ten-sample design. The impact of the use of optimal sampling was demonstrated in conjunction with the NONMEM approach (Sheiner et al., 1977), in which the population is taken as the unit of analysis, allowing even fragmentary patient data sets to contribute to population parameter estimates. It is shown that this technique is applicable in both single-dose and multiple-dose settings. The ability to study real patients made it possible to show that there is a bimodal distribution in ciprofloxacin nonrenal clearance.
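
    The determinant criterion behind such designs is easy to demonstrate on a one-compartment model: enumerate candidate four-point schedules and keep the one maximizing det(Fisher information). Everything numeric below (model, parameters, candidate grid) is an illustrative assumption:

      # D-optimal sampling-time selection for a one-compartment bolus model.
      import itertools
      import numpy as np

      dose, CL, V, sigma = 100.0, 5.0, 20.0, 0.5      # illustrative values

      def conc(t, cl, v):
          return (dose / v) * np.exp(-(cl / v) * t)

      def sensitivities(t):
          """Numerical partial derivatives of C(t) w.r.t. (CL, V)."""
          eps = 1e-5
          dC_dCL = (conc(t, CL + eps, V) - conc(t, CL - eps, V)) / (2 * eps)
          dC_dV = (conc(t, CL, V + eps) - conc(t, CL, V - eps)) / (2 * eps)
          return np.array([dC_dCL, dC_dV])

      candidates = np.array([0.1, 0.25, 0.5, 1, 2, 4, 6, 8, 12, 24])  # hours
      best_det, best_times = -np.inf, None
      for times in itertools.combinations(candidates, 4):
          J = np.array([sensitivities(t) for t in times])   # 4 x 2 Jacobian
          fim = J.T @ J / sigma**2                          # Fisher information
          d = np.linalg.det(fim)
          if d > best_det:
              best_det, best_times = d, times
      print("D-optimal 4-point schedule (h):", best_times)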

  10. A policy of routine umbilical cord blood gas analysis decreased missing samples from high-risk births.

    Science.gov (United States)

    Ahlberg, M; Elvander, C; Johansson, S; Cnattingius, S; Stephansson, O

    2017-01-01

    This study compared obstetric units practicing routine or selective umbilical cord blood gas analysis with respect to the risk of missing samples in high-risk deliveries and in infants with birth asphyxia. This was a Swedish population-based cohort study using register data for 155 235 deliveries of live singleton infants between 2008 and 2014. Risk ratios and 95% confidence intervals were calculated to estimate the association between the routine and selective umbilical cord blood gas sampling strategies and the risk of missing samples. With routine sampling as the reference (risk ratio 1.0), selective sampling significantly increased the risk of missing samples in high-risk deliveries and in birth asphyxia. The risk ratios for selective sampling were: large-for-gestational age (9.07), preterm delivery at up to 36 weeks of gestation (8.24), small-for-gestational age (7.94), two or more foetal scalp blood samples (5.96), an Apgar score of less than seven at one minute (2.36), emergency Caesarean section (1.67) and instrumental vaginal delivery (1.24). Compared with routine sampling, selective umbilical cord blood gas sampling significantly increased the risk of missing samples in high-risk deliveries and in infants with birth asphyxia. ©2016 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.

  11. Optimal (R, Q) policy and pricing for two-echelon supply chain with lead time and retailer's service-level incomplete information

    Science.gov (United States)

    Esmaeili, M.; Naghavi, M. S.; Ghahghaei, A.

    2018-03-01

    Many studies focus on inventory systems to analyze different real-world situations. This paper considers a two-echelon supply chain that includes one warehouse and one retailer with stochastic demand and an order-up-to-level policy. The retailer's lead time includes the transportation time from the warehouse to the retailer, which is unknown to the retailer. On the other hand, the warehouse is unaware of the retailer's service level. The relationship between the retailer and the warehouse is modeled as a Stackelberg game with incomplete information. Moreover, their relationship is also examined when the warehouse and the retailer reveal their private information using incentive strategies. The optimal inventory and pricing policies are obtained using an algorithm based on bi-level programming. Numerical examples, including a sensitivity analysis of some key parameters, compare the results of the Stackelberg models. The results show that information sharing is more beneficial to the warehouse than to the retailer.

  12. A flexible Bayesian assessment for the expected impact of data on prediction confidence for optimal sampling designs

    Science.gov (United States)

    Leube, Philipp; Geiges, Andreas; Nowak, Wolfgang

    2010-05-01

    Incorporating hydrogeological data, such as head and tracer data, into stochastic models of subsurface flow and transport helps to reduce prediction uncertainty. Given the limited financial resources available for a data acquisition campaign, information needs towards the prediction goal should be satisfied in an efficient and task-specific manner. To find the best among a set of design candidates, an objective function is commonly evaluated which measures the expected impact of data on prediction confidence prior to their collection. An appropriate approach to this task should be stochastically rigorous, master non-linear dependencies between data, parameters and model predictions, and allow for a wide variety of different data types. Existing methods fail to fulfill all these requirements simultaneously. For this reason, we introduce a new method, denoted as CLUE (Cross-bred Likelihood Uncertainty Estimator), that derives the essential distributions and measures of data utility within a generalized, flexible and accurate framework. The method makes use of Bayesian GLUE (Generalized Likelihood Uncertainty Estimator) and extends it to an optimal design method by marginalizing over the yet unknown data values. Operating in a purely Bayesian Monte-Carlo framework, CLUE is a strictly formal information processing scheme free of linearizations. It provides full flexibility with respect to the type of measurements (linear, non-linear, direct, indirect) and accounts for almost arbitrary sources of uncertainty (e.g. heterogeneity, geostatistical assumptions, boundary conditions, model concepts) via stochastic simulation and Bayesian model averaging. This helps to minimize the strength and impact of subjective prior assumptions that would be hard to defend prior to data collection. Our study focuses on evaluating two different uncertainty measures: (i) the expected conditional variance and (ii) the expected relative entropy of a given prediction goal.
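
    The first of these measures lends itself to a compact Monte-Carlo sketch: for a candidate measurement, average the likelihood-weighted (GLUE-style) posterior variance of the prediction over possible data realizations. The toy forward models below stand in for a real flow and transport simulator:

      # Preposterior "expected conditional variance" of a prediction goal.
      import numpy as np

      rng = np.random.default_rng(7)
      n_ens = 2000
      theta = rng.normal(0.0, 1.0, n_ens)          # ensemble of uncertain parameters

      def measurement(th):                          # what the candidate data would see
          return 0.8 * th

      def prediction(th):                           # the prediction goal
          return th**2

      sigma_e = 0.3                                 # measurement noise std (assumed)
      g = prediction(theta)
      prior_var = g.var()

      # preposterior loop: ensemble members take turns playing the "true" system
      truths = range(0, n_ens, 20)                  # subsample truths for speed
      exp_cond_var = 0.0
      for k in truths:
          y_obs = measurement(theta[k]) + rng.normal(0, sigma_e)
          w = np.exp(-0.5 * ((y_obs - measurement(theta)) / sigma_e) ** 2)
          w /= w.sum()                              # GLUE-style likelihood weights
          mean_g = w @ g
          exp_cond_var += w @ (g - mean_g) ** 2
      exp_cond_var /= len(truths)

      print(f"prior variance: {prior_var:.3f}, "
            f"expected conditional variance: {exp_cond_var:.3f}")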

  13. Optimal transfer, ordering and payment policies for joint supplier-buyer inventory model with price-sensitive trapezoidal demand and net credit

    Science.gov (United States)

    Shah, Nita H.; Shah, Digeshkumar B.; Patel, Dushyantkumar G.

    2015-07-01

    This study aims at formulating an integrated supplier-buyer inventory model when market demand is price-sensitive and trapezoidal, and the supplier offers a choice between a discount in unit price and a permissible delay period for settling the accounts due against the purchases made. This type of trade credit is termed 'net credit'. Under this policy, if the buyer pays within the offered time M1, then the buyer is entitled to a cash discount; otherwise the full account must be settled by time M2, where M2 > M1 ⩾ 0. The goal is to determine the optimal selling price, procurement quantity, number of transfers from the supplier to the buyer and payment time so as to maximize the joint profit per unit time. An algorithm is worked out to obtain the optimal solution. A numerical example is given to validate the proposed model, and managerial insights based on sensitivity analysis are deduced.

  14. [Optimizing antibiotics policy in the Netherlands. VI. SWAB advice: no selective decontamination of intensive care patients on mechanical ventilation]

    NARCIS (Netherlands)

    Bonten, M.J.; Kullberg, B.J.; Filius, P.M.

    2001-01-01

    The Working Party on Antibiotic Policy (Dutch acronym: SWAB) has issued a guideline in which the pros and cons of the routine use of selective decontamination (SD) in intensive care (IC) patients on mechanical ventilation are compared in order to decide whether SD is indicated. The effectiveness

  15. Boat sampling

    International Nuclear Information System (INIS)

    Citanovic, M.; Bezlaj, H.

    1994-01-01

    This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting and water clarity. The equipment used for boat sampling is also described. 7 pictures

  16. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    International Nuclear Information System (INIS)

    Brum, Daniel M.; Lima, Claudio F.; Robaina, Nicolle F.; Fonseca, Teresa Cristina O.; Cassella, Ricardo J.

    2011-01-01

    The present paper reports the optimization of Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS), employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized with respect to the experimental conditions for emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as the dependent variable. The three factors in the optimization process were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of that solution. At optimum conditions, satisfactory results were obtained with an emulsion formed by mixing 4 mL of sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure, and recovery rates in the ranges of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.

  17. Multiple response optimization for Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry with sample injection as detergent emulsion

    Energy Technology Data Exchange (ETDEWEB)

    Brum, Daniel M.; Lima, Claudio F. [Departamento de Quimica, Universidade Federal de Vicosa, A. Peter Henry Rolfs s/n, Vicosa/MG, 36570-000 (Brazil); Robaina, Nicolle F. [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil); Fonseca, Teresa Cristina O. [Petrobras, Cenpes/PDEDS/QM, Av. Horacio Macedo 950, Ilha do Fundao, Rio de Janeiro/RJ, 21941-915 (Brazil); Cassella, Ricardo J., E-mail: cassella@vm.uff.br [Departamento de Quimica Analitica, Universidade Federal Fluminense, Outeiro de S.J. Batista s/n, Centro, Niteroi/RJ, 24020-141 (Brazil)

    2011-05-15

    The present paper reports the optimization of Cu, Fe and Pb determination in naphtha by graphite furnace atomic absorption spectrometry (GF AAS), employing a strategy based on the injection of the samples as detergent emulsions. The method was optimized with respect to the experimental conditions for emulsion formation, taking into account that the three analytes (Cu, Fe and Pb) should be measured in the same emulsion. The optimization was performed in a multivariate way by employing a three-variable Doehlert design and a multiple response strategy. For this purpose, the individual responses of the three analytes were combined, yielding a global response that was employed as the dependent variable. The three factors in the optimization were: the concentration of HNO3, the concentration of the emulsifier agent (Triton X-100 or Triton X-114) in the aqueous solution used to emulsify the sample, and the volume of this solution. At optimum conditions, satisfactory results were obtained with an emulsion formed by mixing 4 mL of sample with 1 mL of a 4.7% w/v Triton X-100 solution prepared in 10% v/v HNO3 medium. The resulting emulsion was stable for at least 250 min and provided enough sensitivity to determine the three analytes in the five samples tested. A recovery test was performed to evaluate the accuracy of the optimized procedure, and recovery rates in the ranges of 88-105%, 94-118% and 95-120% were verified for Cu, Fe and Pb, respectively.

  18. Optimization of a method based on micro-matrix solid-phase dispersion (micro-MSPD) for the determination of PCBs in mussel samples

    Directory of Open Access Journals (Sweden)

    Nieves Carro

    2017-03-01

    Full Text Available This paper reports the development and optimization of micro-matrix solid-phase dispersion (micro-MSPD) of nine polychlorinated biphenyls (PCBs) in mussel samples (Mytilus galloprovincialis) by using a two-level factorial design. Four variables (amount of sample, anhydrous sodium sulphate, Florisil and solvent volume) were considered as factors in the optimization process. The results suggested that only the interaction between the amount of anhydrous sodium sulphate and the solvent volume was statistically significant for the overall recovery of a trichlorinated compound, CB 28. Generally, most of the considered congeners exhibited similar behaviour: the sample and Florisil amounts had a positive effect on PCB extraction, while the solvent volume and sulphate amount had a negative effect. The analytical determination and confirmation of PCBs were carried out by GC-ECD and GC-MS/MS, respectively. The method was validated, showing satisfactory precision and accuracy, with RSD values below 6% and recoveries between 81 and 116% for all congeners. The optimized method was applied to the extraction of real mussel samples from two Galician Rías.
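
    A minimal sketch of how main and interaction effects are read off a two-level factorial design of this kind. The 16 recoveries below are synthetic, generated so that the sulphate x solvent interaction dominates, merely to mirror the paper's qualitative finding:

        import itertools
        import numpy as np

        # Full 2^4 factorial in coded units (-1/+1) for the four factors:
        # sample amount, Na2SO4 amount, Florisil amount, solvent volume.
        X = np.array(list(itertools.product([-1, 1], repeat=4)), dtype=float)

        # Hypothetical CB 28 recoveries (%) for the 16 runs -- invented data
        # with a built-in sulphate x solvent interaction plus noise.
        rng = np.random.default_rng(1)
        y = 95 + 3 * X[:, 1] * X[:, 3] - 2 * X[:, 3] + rng.normal(0, 1, 16)

        # Effect of a factor (or interaction) = mean(y at +1) - mean(y at -1).
        def effect(contrast):
            return y[contrast > 0].mean() - y[contrast < 0].mean()

        print("solvent volume main effect:     %+.2f" % effect(X[:, 3]))
        print("sulphate x solvent interaction: %+.2f" % effect(X[:, 1] * X[:, 3]))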

  19. Optimization of basic parameters in temperature-programmed gas chromatographic separations of multi-component samples within a given time

    NARCIS (Netherlands)

    Repka, D.; Krupcik, J.; Brunovska, A.; Leclercq, P.A.; Rijks, J.A.

    1989-01-01

    A new procedure is introduced for the optimization of column peak capacity in a given time. The optimization focuses on temperature-programmed operating conditions, notably the initial temperature and hold time, and the programming rate. Based conceptually upon Lagrange functions, experiments were

  20. Ensuring an optimal environment for peer education in South African schools: Goals, systems, standards and policy options for effective learning.

    Science.gov (United States)

    Swartz, Sharlene; Deutsch, Charles; Moolman, Benita; Arogundade, Emma; Isaacs, Dane; Michel, Barbara

    2016-12-01

    Peer education has long been seen as a key health promotion strategy and an important tool in preventing HIV infection, and in South African schools it is currently one of the strategies employed to do so. Based on a recent research study of peer education across 35 schools, and drawing on multiple previous studies in South Africa, this paper examines the key elements of peer education that contribute to its effectiveness and asks how these align with current educational and health policies. From this research it proposes shared goals and aims and minimum standards of implementation, and reflects on the infrastructure required for peer education to be effective. In light of these findings, it offers policy recommendations regarding who should be doing peer education and the status peer education should have in a school's formal programme.

  1. Expected frontiers: Incorporating weather uncertainty into a policy analysis using an integrated bi-level multi-objective optimization framework

    Science.gov (United States)

    Weather is the main driver of both plant nutrient use and the fate and transport of nutrients in the environment. In previous work, we evaluated a green tax for control of agricultural nutrients in a bi-level optimization framework that linked deterministic models. In this study,...

  2. Optimal base-stock policy for the inventory system with periodic review, backorders and sequential lead times

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    2008-01-01

    We extend well-known formulae for the optimal base stock of the inventory system with continuous review and constant lead time to the case with periodic review and stochastic, sequential lead times. Our extension uses the notion of the 'extended lead time'. The derived performance measures...
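
    A minimal sketch of the classical computation being extended, assuming (purely for illustration) i.i.d. normal daily demand and a fixed lead time, so that the 'extended lead time' reduces to review period plus lead time; the paper's contribution is precisely to handle stochastic, sequential lead times instead of this fixed-T shortcut:

        from math import sqrt
        from statistics import NormalDist

        # Hypothetical data: daily demand N(mu, sigma), review period R days,
        # fixed lead time L days; the 'extended lead time' is then T = R + L.
        mu, sigma = 40.0, 12.0
        R, L      = 7, 5
        h, b      = 1.0, 9.0        # holding and backorder cost per unit

        ratio = b / (b + h)         # critical fractile of the backorder model
        T = R + L                   # extended lead time (days)
        S = NormalDist(mu * T, sigma * sqrt(T)).inv_cdf(ratio)
        print("critical fractile = %.2f, base stock S = %.0f units" % (ratio, S))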

  3. Note: Optimal base-stock policy for the inventory system with periodic review, backorders and sequential lead times

    DEFF Research Database (Denmark)

    Johansen, Søren Glud; Thorstenson, Anders

    We show that well-known textbook formulae for determining the optimal base stock of the inventory system with continuous review and constant lead time can easily be extended to the case with periodic review and stochastic, sequential lead times. The provided performance measures and conditions...

  4. Optimization of the state information policy of Ukraine in the conditions of contemporary modernization processes

    Directory of Open Access Journals (Sweden)

    E. O. Romanenko

    2014-10-01

    Ukraine today lacks a strategic document that would, at the national level, set out the main priorities, directions, principles and ways of realizing state policy on its information and communication functions. Moreover, the state's communicative component is not clearly separated from its informational one and therefore lacks proper conceptual, technological and functional support, which also has a destabilizing effect on the public sector.

  5. Specific amplification of bacterial DNA by optimized so-called universal bacterial primers in samples rich of plant DNA.

    Science.gov (United States)

    Dorn-In, Samart; Bassitta, Rupert; Schwaiger, Karin; Bauer, Johann; Hölzel, Christina S

    2015-06-01

    Universal primers targeting the bacterial 16S-rRNA gene allow quantification of the total bacterial load in variable sample types by qPCR. However, many universal primer pairs also amplify DNA of plants or even of archaea and other eukaryotic cells. With such primers, the total bacterial load might be misevaluated whenever samples contain high amounts of non-target DNA. This study therefore aimed to provide primer pairs that are suitable for the quantification and identification of bacterial DNA in samples such as feed, spices and sample material from digesters. For 42 primers, mismatches to the sequences of plant chloroplasts and mitochondria were evaluated. Six primer pairs were further analyzed with regard to whether they anneal to DNA of archaea, animal tissue and fungi. Subsequently, they were tested on sample matrices such as plants, feed, feces, soil and environmental samples. For this purpose, the target DNA in the samples was quantified by qPCR. The PCR products of plant and feed samples were further processed for the single-strand conformation polymorphism (SSCP) method, followed by sequence analysis. The sequencing results revealed that primer pair 335F/769R amplified only bacterial DNA in samples such as plants and animal feed, in which plant DNA prevailed.
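
    The primer-screening step boils down to counting mismatches between each (possibly degenerate) primer and the off-target sequences. A minimal sketch with IUPAC-aware matching; the two sequences below are invented examples, not the paper's primers:

        # IUPAC nucleotide codes and the bases each one matches.
        IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
                 "R": "AG", "Y": "CT", "S": "GC", "W": "AT",
                 "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
                 "H": "ACT", "V": "ACG", "N": "ACGT"}

        def mismatches(primer, target):
            """Count positions where the degenerate primer cannot pair."""
            return sum(t not in IUPAC[p]
                       for p, t in zip(primer.upper(), target.upper()))

        # Hypothetical primer region vs. a chloroplast-derived sequence.
        print(mismatches("CADACTCCTACGGGAGGC", "CAGACTCCTACGGGAGGT"))  # -> 1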

  6. Optimization of the Extraction of the Volatile Fraction from Honey Samples by SPME-GC-MS, Experimental Design, and Multivariate Target Functions

    Directory of Open Access Journals (Sweden)

    Elisa Robotti

    2017-01-01

    Full Text Available Head space (HS) solid phase microextraction (SPME) followed by gas chromatography with mass spectrometry detection (GC-MS) is the most widespread technique to study the volatile profile of honey samples. In this paper, the experimental SPME conditions were optimized by a multivariate strategy. Both sensitivity and repeatability were optimized by experimental design techniques considering three factors: extraction temperature (from 50°C to 70°C), time of exposition of the fiber (from 20 min to 60 min), and amount of salt added (from 0 to 27.50%). Each experiment was evaluated by Principal Component Analysis (PCA), which allows all the analytes to be considered simultaneously while preserving the information about their different characteristics. Optimal extraction conditions were identified independently for signal intensity (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) and repeatability (extraction temperature: 50°C; extraction time: 60 min; salt percentage: 27.50% w/w), and a final global compromise (extraction temperature: 70°C; extraction time: 60 min; salt percentage: 27.50% w/w) was also reached. Considerations on the choice of the best internal standards are also presented. The whole optimized procedure was then applied to the analysis of a multiflower honey sample, and more than 100 compounds were identified.
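
    A minimal sketch of using PCA as the multivariate target function for a design like this: each run's peak areas are autoscaled and projected onto the first principal component, whose score then serves as a single response covering all analytes at once. The peak-area matrix is invented (the real study tracked far more compounds):

        import numpy as np

        # Hypothetical peak-area matrix: rows = design runs, columns = analytes.
        A = np.array([[1.2, 0.8, 3.1, 0.4],
                      [2.0, 1.1, 3.9, 0.7],
                      [1.6, 0.9, 3.3, 0.5],
                      [2.4, 1.5, 4.6, 0.9],
                      [1.1, 0.7, 2.8, 0.3]])

        Z = (A - A.mean(axis=0)) / A.std(axis=0, ddof=1)  # autoscale columns
        U, s, Vt = np.linalg.svd(Z, full_matrices=False)  # PCA via SVD
        scores = U[:, 0] * s[0]                           # PC1 score per run

        # If all loadings share one sign, PC1 acts as an overall-intensity
        # axis, so the extreme score marks the jointly most intense run.
        print("PC1 loadings:", np.round(Vt[0], 2))
        print("best run:", int(np.argmax(scores * np.sign(Vt[0].sum()))))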

  7. Wireless Powered Relaying Networks Under Imperfect Channel State Information: System Performance and Optimal Policy for Instantaneous Rate

    Directory of Open Access Journals (Sweden)

    D. T. Do

    2017-09-01

    Full Text Available In this investigation we consider wireless powered relaying systems, where energy is scavenged by a relay via radio frequency (RF) signals. We explore a hybrid time-switching and power-splitting relaying protocol (HTPSR) and compare the performance of the Amplify-and-Forward (AF) and Decode-and-Forward (DF) schemes under imperfect channel state information (CSI). Most importantly, the instantaneous rate and achievable bit error rate (BER) are determined in closed-form expressions under the impact of imperfect CSI. Through numerical analysis, we evaluate system insights via different parameters, such as the power splitting (PS) and time switching (TS) ratios of the considered HTPSR, which affect outage performance and BER. It is noted that DF relaying networks outperform AF relaying networks. Besides that, numerical results are given for the optimization of the PS and TS ratios to obtain the optimal instantaneous rate.
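
    A minimal stand-in model for the PS side of such a protocol (DF only, toy channel numbers, half-duplex, and a crude treatment of the CSI estimation error as extra interference; the paper's closed-form expressions are not reproduced here). The grid search exposes the characteristic interior optimum of the power-splitting ratio: too little harvested power starves the relay, too much starves the information signal.

        import numpy as np

        # Hypothetical link parameters for a PS-based, DF, wireless-powered relay.
        P, eta, sigma2 = 1.0, 0.7, 1e-2   # source power, harvester efficiency, noise
        g1, g2 = 0.8, 0.6                 # estimated channel gains |h1|^2, |h2|^2
        err = 0.05                        # CSI estimation-error variance

        def rate(rho):
            # First hop: fraction (1-rho) of the received power carries data;
            # the CSI error is modelled crudely as additional interference.
            snr1 = (1 - rho) * P * g1 / (sigma2 + (1 - rho) * P * err)
            # Harvested energy eta*rho*P*g1 powers the relay's retransmission.
            Pr = eta * rho * P * g1
            snr2 = Pr * g2 / (sigma2 + Pr * err)
            return 0.5 * np.log2(1 + min(snr1, snr2))  # 1/2: half-duplex factor

        rhos = np.linspace(0.01, 0.99, 99)
        best = max(rhos, key=rate)
        print("optimal PS ratio ~ %.2f, rate = %.2f bit/s/Hz" % (best, rate(best)))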

  8. Optimal Ordering Policy and Coordination Mechanism of a Supply Chain with Controllable Lead-Time-Dependent Demand Forecast

    Directory of Open Access Journals (Sweden)

    Hua-Ming Song

    2011-01-01

    Full Text Available This paper investigates the ordering decisions and coordination mechanism for a distributed short-life-cycle supply chain. The objective is to maximize the whole supply chain's expected profit while letting the supply chain participants achieve a Pareto improvement. We treat lead time as a controllable variable, so the demand forecast depends on lead time: the shorter the lead time, the better the forecast. Optimal decision-making models for lead time and order quantity are formulated and compared for the decentralized and centralized cases. Besides, a three-parameter contract is proposed to coordinate the supply chain and alleviate double marginalization in the decentralized scenario. In addition, based on the analysis of the models, we develop an algorithmic procedure to find the optimal ordering decisions. Finally, a numerical example is presented to illustrate the results.
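
    A minimal sketch of the centralized trade-off, under stated assumptions: normal demand whose forecast error shrinks as lead time is compressed, set against a convex lead-time crashing cost, with the order quantity chosen at the newsvendor fractile for each candidate lead time. All numbers are hypothetical, and the paper's contract design is not modeled.

        from statistics import NormalDist

        # Hypothetical short-life-cycle product: demand N(mu, sigma(L)), where
        # the forecast error grows with lead time L and shortening L costs money.
        mu, p, c, s = 1000.0, 10.0, 4.0, 1.0     # mean demand, price, cost, salvage
        def sigma(L): return 50.0 + 15.0 * L     # better forecast at shorter L
        def crash(L): return 400.0 / (L + 1)     # cost of compressing lead time

        def expected_profit(L):
            dist = NormalDist(mu, sigma(L))
            q = dist.inv_cdf((p - c) / (p - s))  # newsvendor quantity at this L
            # E[min(D, q)] = mu - sigma * loss(z) for normal demand.
            z = (q - mu) / sigma(L)
            loss = NormalDist().pdf(z) - z * (1 - NormalDist().cdf(z))
            exp_sales = mu - sigma(L) * loss
            return p * exp_sales + s * (q - exp_sales) - c * q - crash(L)

        best_L = max(range(0, 13), key=expected_profit)
        print("best lead time = %d, profit = %.0f" % (best_L, expected_profit(best_L)))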

  9. Optimal oxygen feeding policy to maximize the production of Maleic anhydride in unsteady state fixed bed catalytic reactors

    Directory of Open Access Journals (Sweden)

    E. Ali

    2017-07-01

    Full Text Available The effect of different oxygen feeding scenarios in a fixed bed reactor for the production of maleic anhydride (MA) is studied. Two reactor configurations were examined: a cross-flow reactor (CFR) with 4 discrete feeding points, and the conventional packed-bed reactor (PBR) with a single feed. Nonlinear Model Predictive Control (NLMPC) was used as the optimal controller to operate the CFR in dynamic mode and to optimize the multiple feed dosages in order to enhance the MA yield. The simulation results indicated that different combinations of the four feed ratios can operate the reactor at the best yield provided the first feeding point is kept as low as possible. For the packed-bed configuration, the single oxygen feed is optimized transiently by NLMPC. The simulation outcomes showed that the reactor performance, in terms of the produced MA mole fraction, can be enhanced to the same magnitude obtained with the CFR configuration; this improvement requires decreasing the oxygen ratio in the reactor's single feed by 70%.
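
    A minimal sketch of the kind of constrained feed-ratio optimization an NLMPC layer would solve at each step. The reactor model is replaced here by a toy surrogate objective that merely encodes the paper's qualitative finding (distribute the oxygen along the bed, keep the first feed low); in a real NLMPC the objective would evaluate a dynamic reactor model over a prediction horizon, and all numbers below are invented.

        import numpy as np
        from scipy.optimize import minimize

        # Toy surrogate for MA yield as a function of the four oxygen feed
        # fractions u1..u4 (which must sum to 1).
        def neg_yield(u):
            spread = -np.sum((u - 0.25) ** 2)    # reward distributed dosing
            first_penalty = -2.0 * u[0] ** 2     # keep the first feed point low
            return -(0.6 + 0.3 * spread + 0.2 * first_penalty)

        cons = ({"type": "eq", "fun": lambda u: u.sum() - 1.0},)
        res = minimize(neg_yield, x0=np.full(4, 0.25), bounds=[(0.0, 1.0)] * 4,
                       constraints=cons, method="SLSQP")
        print("feed ratios:", np.round(res.x, 3), " yield ~ %.3f" % -res.fun)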

  10. Robust economic optimization and environmental policy analysis for microgrid planning: An application to Taichung Industrial Park, Taiwan

    International Nuclear Information System (INIS)

    Yu, Nan; Kang, Jin-Su; Chang, Chung-Chuan; Lee, Tai-Yong; Lee, Dong-Yup

    2016-01-01

    This study aims to provide economical and environmentally friendly solutions for a microgrid system with distributed energy resources at the design stage, considering multiple uncertainties during operation and conflicting interests among diverse microgrid stakeholders. For this purpose, we develop a multi-objective optimization model for robust microgrid planning on the basis of an economic robustness measure, the worst-case cost among possible scenarios, in order to reduce the variability among scenario costs caused by uncertainties. The efficacy of the model is demonstrated by applying it to Taichung Industrial Park in Taiwan, an industrial complex where significant amounts of greenhouse gases are emitted. Our findings show that the most robust (but most expensive) solution mainly comprises 45% (26.8 MW) gas engines and 47% (28 MW) photovoltaic panels, with the highest system capacity (59 MW). Further analyses reveal the environmental benefits of reducing the expected annual CO2 emissions and carbon tax by about half relative to the current utility facilities in the region. In conclusion, the developed model provides an efficient decision-making tool for robust microgrid planning at the preliminary stage. - Highlights: • Developed a robust economic and environmental optimization model for microgrid planning. • Provided Pareto-optimal planning solutions for Taichung Industrial Park, Taiwan. • Suggested a microgrid configuration with significant economic and environmental benefits. • Identified gas engines and photovoltaic panels as two promising energy sources.
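
    A minimal sketch of the worst-case-cost (minimax) idea at the core of such robust planning, with a toy technology/scenario cost table and brute-force enumeration of the capacity mix; all numbers are invented and the real model's operational constraints are omitted:

        import numpy as np

        # Hypothetical per-MW annualized cost of each technology under three
        # operating scenarios (e.g. fuel price / irradiance variations).
        techs = ["gas engine", "PV", "battery"]
        scenario_cost = np.array([[1.0, 1.4, 0.9],   # gas engine, per MW
                                  [1.2, 0.8, 1.1],   # PV, per MW
                                  [0.9, 1.0, 1.3]])  # battery, per MW

        demand = 59.0  # MW of capacity to install

        best = None
        for g in np.arange(0, demand + 1, 1.0):            # gas-engine MW
            for pv in np.arange(0, demand - g + 1, 1.0):   # PV MW; battery fills rest
                x = np.array([g, pv, demand - g - pv])
                worst = (x @ scenario_cost).max()          # worst scenario cost
                if best is None or worst < best[0]:
                    best = (worst, x)

        print("min worst-case cost %.1f with capacities (MW):" % best[0],
              dict(zip(techs, best[1])))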

  11. Optimization of the Analytical Method Using HPLC with Fluorescence Detection to Determine Selected Polycyclic Aromatic Compounds in Clean Water Samples

    International Nuclear Information System (INIS)

    Garcia Alonso, S.; Perez Pastor, R. M.

    2013-01-01

    A study on the comparison and evaluation of three miniaturized extraction methods for the determination of selected PACs in clean waters is presented. Three types of liquid-liquid extraction were used for chromatographic analysis by HPLC with fluorescence detection. The main objective was the development and optimization of simple, rapid and low-cost methods that minimize the volume of extraction solvent used. The work also includes a study of the scope of the developed methods at low and high concentration levels, as well as of their intermediate precision. (Author)

  12. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Directory of Open Access Journals (Sweden)

    Thadeous J Kacmarczyk

    Full Text Available Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing, including false discovery rates; the size, position and statistical significance of peak detection; and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, no variations are introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well-characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  13. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection

    Science.gov (United States)

    Kacmarczyk, Thadeous J.; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing, including false discovery rates; the size, position and statistical significance of peak detection; and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, no variations are introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well-characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal. PMID:26066343

  14. Multiplexing of ChIP-Seq Samples in an Optimized Experimental Condition Has Minimal Impact on Peak Detection.

    Science.gov (United States)

    Kacmarczyk, Thadeous J; Bourque, Caitlin; Zhang, Xihui; Jiang, Yanwen; Houvras, Yariv; Alonso, Alicia; Betel, Doron

    2015-01-01

    Multiplexing samples in sequencing experiments is a common approach to maximize information yield while minimizing cost. In most cases the number of samples that are multiplexed is determined by financial considerations or experimental convenience, with limited understanding of the effects on the experimental results. Here we set out to examine the impact of multiplexing ChIP-seq experiments on the ability to identify a specific epigenetic modification. We performed peak detection analyses to determine the effects of multiplexing, including false discovery rates; the size, position and statistical significance of peak detection; and changes in gene annotation. We found that, for the histone marker H3K4me3, one can multiplex up to 8 samples (7 IP + 1 input) at ~21 million single-end reads each and still detect over 90% of all peaks found when using a full lane per sample (~181 million reads). Furthermore, no variations are introduced by indexing or lane batch effects and, importantly, there is no significant reduction in the number of genes with neighboring H3K4me3 peaks. We conclude that, for a well-characterized antibody and, therefore, model IP condition, multiplexing 8 samples per lane is sufficient to capture most of the biological signal.

  15. Determination of As, Cd, and Pb in Tap Water and Bottled Water Samples by Using Optimized GFAAS System with Pd-Mg and Ni as Matrix Modifiers

    Directory of Open Access Journals (Sweden)

    Sezgin Bakırdere

    2013-01-01

    Full Text Available Arsenic, lead, and cadmium were determined at trace levels in tap and bottled water samples consumed in the western part of Turkey. Graphite furnace atomic absorption spectrometry (GFAAS) was used for all detections, and all system parameters were optimized for each element to increase sensitivity. A Pd-Mg mixture was selected as the best matrix modifier for As, while the highest signals for Pb and Cd were obtained with Ni as the matrix modifier. Detection limits for As, Cd, and Pb were found to be 2.0, 0.036, and 0.25 ng/mL, respectively. 78 tap water samples and 17 different brands of bottled water were analyzed for their As, Cd, and Pb contents under the optimized conditions. In all water samples, the concentration of cadmium was below the detection limit. Lead concentrations varied between N.D. and 12.66 ± 0.68 ng/mL, and the highest arsenic concentration was determined as 11.54 ± 2.79 ng/mL. The accuracy of the method was verified by using a certified reference material, Trace Elements in Water (1643e). The results found for As, Cd, and Pb in the reference material were in satisfactory agreement with the certified values.
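
    Detection limits like those quoted are conventionally derived from the scatter of blank measurements and the calibration slope (LOD = 3·s_blank/slope). A minimal sketch with invented blank readings and slope, not the study's data:

        import statistics

        # Hypothetical GFAAS calibration: replicate blank absorbances and the
        # slope of the calibration curve in absorbance per (ng/mL).
        blank_abs = [0.0021, 0.0018, 0.0025, 0.0019, 0.0022,
                     0.0020, 0.0023, 0.0017, 0.0021, 0.0024]
        slope = 0.020

        s_blank = statistics.stdev(blank_abs)
        lod = 3 * s_blank / slope    # common 3-sigma detection limit
        loq = 10 * s_blank / slope   # 10-sigma quantification limit
        print("LOD = %.3f ng/mL, LOQ = %.3f ng/mL" % (lod, loq))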

  16. Gas chromatographic-mass spectrometric analysis of urinary volatile organic metabolites: Optimization of the HS-SPME procedure and sample storage conditions.

    Science.gov (United States)

    Živković Semren, Tanja; Brčić Karačonji, Irena; Safner, Toni; Brajenović, Nataša; Tariba Lovaković, Blanka; Pizent, Alica

    2018-01-01

    Non-targeted metabolomics research of the human volatile urinary metabolome can be used to identify potential biomarkers associated with changes in metabolism related to various health disorders. To ensure reliable analysis of urinary volatile organic metabolites (VOMs) by gas chromatography-mass spectrometry (GC-MS), the parameters affecting the headspace solid-phase microextraction (HS-SPME) procedure were evaluated and optimized. The influence of incubation and extraction temperatures and times, fibre coating material and salt addition on SPME efficiency was investigated by multivariate optimization methods using reduced factorial and Doehlert matrix designs. The results showed the optimum extraction temperature to be 60°C, extraction time 50 min, and incubation time 35 min. The proposed conditions were applied to investigate the stability of urine samples under different storage conditions and freeze-thaw processes. The sum of peak areas of urine samples stored at 4°C, -20°C, and -80°C for up to six months showed a time-dependent decrease, although storage at -80°C resulted in only a slight, non-significant reduction compared to the fresh sample. However, due to the volatile nature of the analysed compounds, more than two freeze-thaw cycles should be avoided whenever possible for samples stored for six months at -80°C.
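
    A minimal sketch of the Doehlert idea behind such designs, using the classical two-factor hexagonal matrix (the study used a higher-dimensional analogue) with invented responses and an ordinary least-squares fit of the full quadratic surface:

        import numpy as np

        # Coded two-factor Doehlert design (hexagon + centre point), e.g.
        # extraction temperature vs. extraction time in coded units.
        D = np.array([[ 0.0,  0.0], [ 1.0,  0.0], [ 0.5,  0.866],
                      [-0.5,  0.866], [-1.0,  0.0], [-0.5, -0.866],
                      [ 0.5, -0.866]])

        # Hypothetical total-peak-area responses for the seven runs.
        y = np.array([8.1, 9.0, 9.6, 8.8, 7.2, 7.0, 8.4])

        # Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
        x1, x2 = D[:, 0], D[:, 1]
        X = np.column_stack([np.ones(7), x1, x2, x1**2, x2**2, x1 * x2])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("quadratic-surface coefficients:", np.round(b, 2))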

  17. An Optimized Set of Fluorescence In Situ Hybridization Probes for Detection of Pancreatobiliary Tract Cancer in Cytology Brush Samples.

    Science.gov (United States)

    Barr Fritcher, Emily G; Voss, Jesse S; Brankley, Shannon M; Campion, Michael B; Jenkins, Sarah M; Keeney, Matthew E; Henry, Michael R; Kerr, Sarah M; Chaiteerakij, Roongruedee; Pestova, Ekaterina V; Clayton, Amy C; Zhang, Jun; Roberts, Lewis R; Gores, Gregory J; Halling, Kevin C; Kipp, Benjamin R

    2015-12-01

    Pancreatobiliary cancer is detected by fluorescence in situ hybridization (FISH) of pancreatobiliary brush samples with UroVysion probes, originally designed to detect bladder cancer. We designed a set of new probes to detect pancreatobiliary cancer and compared its performance with that of UroVysion and routine cytology analysis. We tested a set of FISH probes on tumor tissues (cholangiocarcinoma or pancreatic carcinoma) and non-tumor tissues from 29 patients. We identified 4 probes that had high specificity for tumor vs non-tumor tissues; we called this set of probes pancreatobiliary FISH. We performed a retrospective analysis of brush samples from 272 patients who underwent endoscopic retrograde cholangiopancreatography for evaluation of malignancy at the Mayo Clinic; results were available from routine cytology and FISH with UroVysion probes. Archived residual specimens were retrieved and used to evaluate the pancreatobiliary FISH probes. Cutoff values for FISH with the pancreatobiliary probes were determined using 89 samples and validated in the remaining 183 samples. Clinical and pathologic evidence of malignancy in the pancreatobiliary tract within 2 years of brush sample collection was used as the standard; samples from patients without malignancies were used as negative controls. The validation cohort included 85 patients with malignancies (46.4%) and 114 patients with primary sclerosing cholangitis (62.3%). Samples containing cells above the cutoff for polysomy (copy number gain of ≥2 probes) were classified as positive in FISH with the UroVysion and pancreatobiliary probes. Multivariable logistic regression was used to estimate associations between clinical and pathology findings and results from FISH. The combination of FISH probes 1q21, 7p12, 8q24, and 9p21 identified cancer cells with 93% sensitivity and 100% specificity in pancreatobiliary tissue samples and were therefore included in the pancreatobiliary probe set. In the validation cohort of
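
    The study's readout is a cutoff rule (a sample is polysomic, hence FISH-positive, when at least two probes show copy-number gains) scored against clinical follow-up. A minimal sketch of that classification and of computing sensitivity and specificity from it, on invented per-sample counts:

        # Hypothetical per-sample results: number of probes showing copy-number
        # gain, plus ground truth (malignancy within 2 years). Toy data only.
        samples = [  # (probes_gained, malignant)
            (3, True), (2, True), (4, True), (1, True), (0, False),
            (2, False), (0, False), (1, False), (3, True), (0, False),
        ]

        # Polysomy rule from the study: positive if >= 2 probes show gains.
        def positive(probes_gained):
            return probes_gained >= 2

        tp = sum(positive(g) and m for g, m in samples)
        fn = sum(not positive(g) and m for g, m in samples)
        tn = sum(not positive(g) and not m for g, m in samples)
        fp = sum(positive(g) and not m for g, m in samples)
        print("sensitivity = %.0f%%" % (100 * tp / (tp + fn)),
              " specificity = %.0f%%" % (100 * tn / (tn + fp)))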

  18. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    International Nuclear Information System (INIS)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-01-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real time. However, with existing systems, vehicles must exit the traffic stream and slow down to match the sensors' capabilities, so agencies need devices capable of higher vehicle passing speeds to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented, and deployment configurations and settings depend mainly on the experience of operating engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing; the larger memory requirements of higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition for high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors. (paper)
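
    A back-of-the-envelope sketch of why the required sampling rate scales with vehicle speed: a tire loads the sensor only while its footprint passes over it, and enough samples must fall inside that window to resolve the strain-pulse peak. The footprint length and samples-per-pulse target below are assumed values, not the paper's calibrated model:

        def required_rate_hz(speed_m_s, footprint_m=0.25, samples_per_pulse=20):
            pulse_s = footprint_m / speed_m_s   # time the tire loads the sensor
            return samples_per_pulse / pulse_s  # samples needed in that window

        for kmh in (30, 60, 90, 120):
            v = kmh / 3.6                       # km/h -> m/s
            print("%3d km/h -> %5.0f Hz" % (kmh, required_rate_hz(v)))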

  19. Cost-constrained optimal sampling for system identification in pharmacokinetics applications with population priors and nuisance parameters.

    Science.gov (United States)

    Sorzano, Carlos Oscar S; Pérez-De-La-Cruz Moreno, Maria Angeles; Burguet-Castell, Jordi; Montejo, Consuelo; Ros, Antonio Aguilar

    2015-06-01

    Pharmacokinetics (PK) applications can be seen as a special case of nonlinear, causal systems with memory. There are cases in which prior knowledge exists about the distribution of the system parameters in a population. However, for a specific patient in a clinical setting, we need to determine her system parameters so that the therapy can be personalized. This system identification is often performed by measuring drug concentrations in plasma. The objective of this work is to provide an irregular sampling strategy that minimizes the uncertainty about the system parameters for a fixed number of samples (cost constrained). We use Monte Carlo simulations to estimate the average Fisher information matrix associated with the PK problem, and then estimate the sampling points that minimize the maximum uncertainty associated with the system parameters (a minimax criterion). The minimization is performed with a genetic algorithm. We show that such a sampling scheme can be designed in a way that is adapted to a particular patient, that it can accommodate any dosing regimen, and that it allows flexible therapeutic strategies.
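
    A minimal sketch of the whole pipeline under stated assumptions: a one-compartment oral-absorption model stands in for the PK system, the population prior is a set of lognormal draws, the Fisher information is built from finite-difference sensitivities, and a crude evolutionary loop (a simplified stand-in for the paper's genetic algorithm) searches for four sampling times within a 24-hour window. All parameter values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)

        # One-compartment oral-absorption model; theta = (ka, ke, V),
        # dose D and bioavailability F fixed.
        D, F = 100.0, 1.0

        def conc(t, ka, ke, V):
            return F * D * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        def avg_worst_uncertainty(times, n_mc=100, sigma=0.1):
            """Monte Carlo average, over population draws of theta, of the
            largest parameter variance from the inverse Fisher information."""
            worst = 0.0
            for _ in range(n_mc):
                theta = np.array([rng.lognormal(0.0, 0.2),    # ka, ~1.0 /h
                                  rng.lognormal(-1.6, 0.2),   # ke, ~0.2 /h
                                  rng.lognormal(2.3, 0.2)])   # V,  ~10 L
                J = np.empty((len(times), 3))                 # dC/dtheta
                for j in range(3):
                    h = 1e-5 * theta[j]
                    tp, tm = theta.copy(), theta.copy()
                    tp[j] += h
                    tm[j] -= h
                    J[:, j] = (conc(times, *tp) - conc(times, *tm)) / (2 * h)
                fim = J.T @ J / sigma**2                      # Fisher information
                cov = np.linalg.inv(fim + 1e-9 * np.eye(3))   # Cramer-Rao bound
                worst += cov.diagonal().max()
            return worst / n_mc

        # Crude evolutionary search for 4 sampling times in (0.1, 24] hours.
        pop = [np.sort(rng.uniform(0.1, 24.0, 4)) for _ in range(20)]
        for gen in range(20):
            pop.sort(key=avg_worst_uncertainty)               # fittest first
            pop = pop[:10] + [np.sort(np.clip(p + rng.normal(0, 0.5, 4),
                                              0.1, 24.0))
                              for p in pop[:10]]              # elite + mutants
        print("suggested sampling times (h):", np.round(pop[0], 2))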

  20. Sampling optimization for high-speed weigh-in-motion measurements using in-pavement strain-based sensors

    Science.gov (United States)

    Zhang, Zhiming; Huang, Ying; Bridgelall, Raj; Palek, Leonard; Strommen, Robert

    2015-06-01

    Weigh-in-motion (WIM) measurement has been widely used for weight enforcement, pavement design, freight management, and intelligent transportation systems to monitor traffic in real time. However, with existing systems, vehicles must exit the traffic stream and slow down to match the sensors' capabilities, so agencies need devices capable of higher vehicle passing speeds to enable continuous weight measurements at mainline speeds. The current practices for data acquisition at such high speeds are fragmented, and deployment configurations and settings depend mainly on the experience of operating engineers. To assure adequate data, most practitioners use very high frequency measurements that result in redundant samples, thereby diminishing the potential for real-time processing; the larger memory requirements of higher sample rates also increase storage and processing costs. The field lacks a sampling design or standard to guide appropriate data acquisition for high-speed WIM measurements. This study develops the appropriate sample rate requirements as a function of vehicle speed. Simulations and field experiments validate the methods developed. The results will serve as guidelines for future high-speed WIM measurements using in-pavement strain-based sensors.