WorldWideScience

Sample records for optimal worst case

  1. Worst-Case Portfolio Optimization under Stochastic Interest Rate Risk

    Directory of Open Access Journals (Sweden)

    Tina Engler

    2014-12-01

    Full Text Available We investigate a portfolio optimization problem under the threat of a market crash, where the interest rate of the bond is modeled as a Vasicek process, which is correlated with the stock price process. We adopt a non-probabilistic worst-case approach for the height and time of the market crash. On a given time horizon [0, T], we then maximize the investor’s expected utility of terminal wealth in the worst-case crash scenario. Our main result is an explicit characterization of the worst-case optimal portfolio strategy for the class of HARA (hyperbolic absolute risk aversion) utility functions.
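
    For orientation, the optimization described in this record can be stated schematically as a max-min problem. The notation below is an illustrative reconstruction (crash time τ, crash height l, and the HARA parameterization are assumed), not the paper's exact formulation:

```latex
% Schematic worst-case portfolio problem (notation assumed, not from the paper):
% maximize over strategies \pi the expected HARA utility of terminal wealth
% under the least favorable crash time \tau and crash height l.
\[
  \sup_{\pi}\;
  \inf_{(\tau,\,l)\,\in\,[0,T]\times[0,\,l_{\max}]}
  \mathbb{E}\!\left[U\!\left(X_T^{\pi,\tau,l}\right)\right],
  \qquad
  U(x) \;=\; \frac{1-\gamma}{\gamma}\left(\frac{\beta x}{1-\gamma}+\eta\right)^{\!\gamma}.
\]
```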

  2. Worst-case tolerance optimization of antenna systems

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    1980-01-01

    The application of recently developed algorithms to antenna systems design is demonstrated by the worst-case tolerance optimization of linear broadside arrays, using both spacings and excitation coefficients as design parameters. The resulting arrays are optimally immunized against deviations of the design parameters from their nominal values.

  3. A critical evaluation of worst case optimization methods for robust intensity-modulated proton therapy planning

    International Nuclear Information System (INIS)

    Fredriksson, Albin; Bokrantz, Rasmus

    2014-01-01

    Purpose: To critically evaluate and compare three worst case optimization methods that have been previously employed to generate intensity-modulated proton therapy treatment plans that are robust against systematic errors. The goal of the evaluation is to identify circumstances when the methods behave differently and to describe the mechanism behind the differences when they occur. Methods: The worst case methods optimize plans to perform as well as possible under the worst case scenario that can physically occur (composite worst case), the combination of the worst case scenarios for each objective constituent considered independently (objectivewise worst case), and the combination of the worst case scenarios for each voxel considered independently (voxelwise worst case). These three methods were assessed with respect to treatment planning for prostate under systematic setup uncertainty. An equivalence with probabilistic optimization was used to identify the scenarios that determine the outcome of the optimization. Results: If the conflict between target coverage and normal tissue sparing is small and no dose-volume histogram (DVH) constraints are present, then all three methods yield robust plans. Otherwise, they all have their shortcomings: Composite worst case led to unnecessarily low plan quality in boundary scenarios that were less difficult than the worst case ones. Objectivewise worst case generally led to nonrobust plans. Voxelwise worst case led to overly conservative plans with respect to DVH constraints, which resulted in excessive dose to normal tissue, and less sharp dose fall-off than the other two methods. Conclusions: The three worst case methods have clearly different behaviors. These behaviors can be understood from which scenarios are active in the optimization. No particular method is superior to the others under all circumstances: composite worst case is suitable if the conflicts are not very severe or there are DVH constraints, whereas …
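
    To make the three notions concrete, here is a minimal numerical sketch of how the three worst-case objectives differ for a fixed plan; the scenario doses, penalty functions, and structure indices are hypothetical placeholders, not the authors' implementation:

```python
import numpy as np

# Hypothetical setup: dose[s, v] = dose to voxel v under error scenario s,
# for a fixed set of spot weights. Two toy objective constituents:
# f_target penalizes target underdose, f_oar penalizes normal-tissue overdose.
rng = np.random.default_rng(0)
dose = rng.uniform(0.8, 1.2, size=(9, 1000))   # 9 scenarios, 1000 voxels
target = np.arange(600)                         # first 600 voxels: target
oar = np.arange(600, 1000)                      # remaining voxels: organ at risk

def f_target(d):  # quadratic underdose penalty w.r.t. prescription 1.0
    return np.mean(np.maximum(1.0 - d[target], 0.0) ** 2)

def f_oar(d):     # quadratic overdose penalty w.r.t. tolerance 0.9
    return np.mean(np.maximum(d[oar] - 0.9, 0.0) ** 2)

# Composite worst case: the single physically realizable worst scenario.
composite = max(f_target(d) + f_oar(d) for d in dose)

# Objectivewise worst case: each objective takes its own worst scenario.
objectivewise = max(f_target(d) for d in dose) + max(f_oar(d) for d in dose)

# Voxelwise worst case: each voxel takes its own worst scenario (minimum dose
# in the target, maximum dose in the OAR), then evaluate the objectives.
d_wc = dose[0].copy()
d_wc[target] = dose[:, target].min(axis=0)
d_wc[oar] = dose[:, oar].max(axis=0)
voxelwise = f_target(d_wc) + f_oar(d_wc)

print(composite, objectivewise, voxelwise)
```

    For monotone penalties like these, the printed values satisfy composite <= objectivewise <= voxelwise, which is one way to see why the voxelwise method tends to be the most conservative of the three.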

  4. Worst-Case-Optimal Dynamic Reinsurance for Large Claims

    DEFF Research Database (Denmark)

    Korn, Ralf; Menkens, Olaf; Steffensen, Mogens

    2012-01-01

    We control the surplus process of a non-life insurance company by dynamic proportional reinsurance. The objective is to maximize expected (utility of the) surplus under the worst-case claim development. In the large claim case with a worst-case upper limit on claim numbers and claim sizes, we find …

  5. Algorithms for worst-case tolerance optimization

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans; Madsen, Kaj

    1979-01-01

    New algorithms are presented for the solution of optimum tolerance assignment problems. The problems considered are defined mathematically as a worst-case problem (WCP), a fixed tolerance problem (FTP), and a variable tolerance problem (VTP). The basic optimization problem without tolerances is denoted the zero tolerance problem (ZTP). For solution of the WCP we suggest application of interval arithmetic and also alternative methods. For solution of the FTP an algorithm is suggested which is conceptually similar to algorithms previously developed by the authors for the ZTP. Finally, the VTP is solved by a double-iterative algorithm in which the inner iteration is performed by the FTP algorithm. The application of the algorithm is demonstrated by means of relatively simple numerical examples. Basic properties, such as convergence properties, are displayed based on the examples.
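
    As a toy illustration of the worst-case problem (WCP) itself, the sketch below estimates the worst-case performance over a tolerance box by evaluating its 2^k vertices; this vertex check is a simple alternative to the interval-arithmetic enclosure the record suggests, and the performance function here is invented for the example:

```python
import itertools
import numpy as np

# Worst-case performance of a design over a tolerance box (a sketch).
# For a monotone or mildly nonlinear response, checking the 2^k vertices of
# the box [x - t, x + t] gives a practical worst-case estimate; interval
# arithmetic, as suggested in the record, would give a guaranteed enclosure.
def worst_case(f, x_nom, tol):
    corners = itertools.product(*[(xi - ti, xi + ti) for xi, ti in zip(x_nom, tol)])
    return max(f(np.array(c)) for c in corners)

# Hypothetical performance function and nominal design with tolerances:
f = lambda x: abs(x[0] * x[1] - 1.0)   # deviation from a design goal
print(worst_case(f, x_nom=[1.0, 1.0], tol=[0.05, 0.05]))
```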

  6. Comparison of linear and nonlinear programming approaches for "worst case dose" and "minmax" robust optimization of intensity-modulated proton therapy dose distributions.

    Science.gov (United States)

    Zaghian, Maryam; Cao, Wenhua; Liu, Wei; Kardar, Laleh; Randeniya, Sharmalee; Mohan, Radhe; Lim, Gino

    2017-03-01

    Robust optimization of intensity-modulated proton therapy (IMPT) takes uncertainties into account during spot weight optimization and leads to dose distributions that are resilient to uncertainties. Previous studies demonstrated benefits of linear programming (LP) for IMPT in terms of delivery efficiency by considerably reducing the number of spots required for the same quality of plans. However, a reduction in the number of spots may lead to loss of robustness. The purpose of this study was to evaluate and compare the performance in terms of plan quality and robustness of two robust optimization approaches using LP and nonlinear programming (NLP) models. The so-called "worst case dose" and "minmax" robust optimization approaches and the conventional planning target volume (PTV)-based optimization approach were applied to designing IMPT plans for five patients: two with prostate cancer, one with skull base cancer, and two with head and neck cancer. For each approach, both LP and NLP models were used. Thus, for each case, six sets of IMPT plans were generated and assessed: LP-PTV-based, NLP-PTV-based, LP-worst case dose, NLP-worst case dose, LP-minmax, and NLP-minmax. The four robust optimization methods behaved differently from patient to patient, and no method emerged as superior to the others in terms of nominal plan quality and robustness against uncertainties. The plans generated using LP-based robust optimization were more robust regarding patient setup and range uncertainties than were those generated using NLP-based robust optimization for the prostate cancer patients. However, the robustness of plans generated using NLP-based methods was superior for the skull base and head and neck cancer patients. Overall, LP-based methods were suitable for the less challenging cancer cases in which all uncertainty scenarios were able to satisfy tight dose constraints, while NLP performed better in more difficult cases in which most uncertainty scenarios were hard to meet.

  7. Worst-case analysis of heap allocations

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Huber, Benedikt; Schoeberl, Martin

    2010-01-01

    … the worst-case heap allocations of tasks. The analysis builds upon techniques that are well established for worst-case execution time analysis. The difference is that the cost function is not the execution time of instructions in clock cycles, but the allocation in bytes. In contrast to worst-case execution time analysis, worst-case heap allocation analysis is not processor dependent. However, the cost function depends on the object layout of the runtime system. The analysis is evaluated with several real-time benchmarks to establish the usefulness of the analysis, and to compare the memory consumption …

  8. Worst-Case Investment and Reinsurance Optimization for an Insurer under Model Uncertainty

    Directory of Open Access Journals (Sweden)

    Xiangbo Meng

    2016-01-01

    Full Text Available In this paper, we study optimal investment-reinsurance strategies for an insurer who faces model uncertainty. The insurer is allowed to acquire new business and invest in a financial market which consists of one risk-free asset and one risky asset whose price process is modeled by a geometric Brownian motion. Minimizing the expected quadratic distance of the terminal wealth to a given benchmark under the “worst-case” scenario, we obtain closed-form expressions for the optimal strategies and the corresponding value function by solving the Hamilton-Jacobi-Bellman (HJB) equation. Numerical examples are presented to show the impact of model parameters on the optimal strategies.

  9. 40 CFR 68.25 - Worst-case release scenario analysis.

    Science.gov (United States)

    2010-07-01

    ... PROGRAMS (CONTINUED) CHEMICAL ACCIDENT PREVENTION PROVISIONS Hazard Assessment § 68.25 Worst-case release... processes, one worst-case release scenario for each Program 1 process; (2) For Program 2 and 3 processes: (i... toxic substances from covered processes under worst-case conditions defined in § 68.22; (ii) One worst...

  10. Worst Case Efficient Data Structures

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    We study the design of efficient data structures. In particular we focus on the design of data structures where each operation has a worst-case efficient implementation. The concrete problems we consider are partial persistence, implementation of priority queues, and implementation of dictionaries. … The first problem we consider is how to make bounded in-degree and out-degree data structures partially persistent, i.e., how to remember old versions of a data structure for later access. A node copying technique of Driscoll et al. supports update steps in amortized constant time and access steps in worst-case … FINDMIN, FINDMAX and PRED (predecessor query) on a unit-cost RAM with word size w bits. The RAM operations used are addition, left and right bit shifts, and bit-wise boolean operations. For any function f(n) satisfying …, we present a data structure supporting FINDMIN and FINDMAX in worst-case constant …

  11. Worst case bioethics: death, disaster, and public health

    National Research Council Canada - National Science Library

    Annas, George J

    2010-01-01

    … Introduction: Scared to Death. Death is almost everyone's personal worst case scenario. Society's worst case scenario, at least in America, is the disaster of la…

  12. Improving the Earthquake Resilience of Buildings The worst case approach

    CERN Document Server

    Takewaki, Izuru; Fujita, Kohei

    2013-01-01

    Engineers are always interested in the worst-case scenario. One of the most important and challenging missions of structural engineers may be to narrow the range of unexpected incidents in building structural design. Redundancy, robustness and resilience play an important role in such circumstances. Improving the Earthquake Resilience of Buildings: The worst case approach discusses the importance of the worst-case scenario approach for improved earthquake resilience of buildings and nuclear reactor facilities. The book consists of two parts. The first part deals with the characterization and modeling of worst or critical ground motions on inelastic structures and the related worst-case scenario in the structural design of ordinary simple building structures. The second part focuses on investigating the worst-case scenario for passively controlled and base-isolated buildings. This allows for detailed consideration of a range of topics including …

  13. The worst case complexity of maximum parsimony.

    Science.gov (United States)

    Carmel, Amir; Musa-Lempel, Noa; Tsur, Dekel; Ziv-Ukelson, Michal

    2014-11-01

    One of the core classical problems in computational biology is that of constructing the most parsimonious phylogenetic tree interpreting an input set of sequences from the genomes of evolutionarily related organisms. We reexamine the classical maximum parsimony (MP) optimization problem for the general (asymmetric) scoring matrix case, where rooted phylogenies are implied, and analyze the worst case bounds of three approaches to MP: The approach of Cavalli-Sforza and Edwards, the approach of Hendy and Penny, and a new agglomerative, "bottom-up" approach we present in this article. We show that the second and third approaches are faster than the first one by a factor of Θ(√n) and Θ(n), respectively, where n is the number of species.

  14. Specifying design conservatism: Worst case versus probabilistic analysis

    Science.gov (United States)

    Miles, Ralph F., Jr.

    1993-01-01

    Design conservatism is the difference between specified and required performance, and is introduced when uncertainty is present. The classical approach of worst-case analysis for specifying design conservatism is presented, along with the modern approach of probabilistic analysis. The appropriate degree of design conservatism is a tradeoff between the required resources and the probability and consequences of a failure. A probabilistic analysis properly models this tradeoff, while a worst-case analysis reveals nothing about the probability of failure, and can significantly overstate the consequences of failure. Two aerospace examples will be presented that illustrate problems that can arise with a worst-case analysis.
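
    A small simulation makes the contrast concrete. The load and capacity distributions below are invented for illustration; the point is that stacking 3-sigma worst-case bounds rejects a design whose actual failure probability is small:

```python
import numpy as np

# Worst-case vs. probabilistic conservatism (illustrative numbers, not from
# the paper). A design "fails" when load exceeds capacity.
rng = np.random.default_rng(1)
load = rng.normal(100.0, 5.0, 100_000)       # hypothetical load distribution
capacity = rng.normal(120.0, 5.0, 100_000)   # hypothetical capacity distribution

# Worst-case analysis: stack 3-sigma bounds on both quantities.
worst_margin = (120.0 - 3 * 5.0) - (100.0 + 3 * 5.0)   # = -10.0, design "fails"

# Probabilistic analysis: estimate the actual probability of failure.
p_fail = np.mean(load > capacity)            # ~0.002
print(worst_margin, p_fail)
```

    The worst-case margin is negative, so the design would be rejected, even though the estimated failure probability is only about 0.2%; whether that tradeoff is acceptable is exactly the resources-versus-risk question the record describes.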

  15. Smoothed analysis: analysis of algorithms beyond worst case

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, Heiko

    2011-01-01

    Many algorithms perform very well in practice, but have a poor worst-case performance. The reason for this discrepancy is that worst-case analysis is often far too pessimistic a measure of the performance of an algorithm. In order to provide a more realistic performance measure that can explain the …

  16. Reevaluating the worst-case radiation response of MOS transistors

    Science.gov (United States)

    Fleetwood, D. M.

    Predicting the worst-case response of a semiconductor device to ionizing radiation is a formidable challenge. As processes change and MOS gate insulators become thinner in advanced VLSI and VHSIC technologies, failure mechanisms must be constantly re-examined. Results are presented of a recent study in which more than 100 MOS transistors were monitored for up to 300 days after Co-60 exposure. Based on these results, a reevaluation of worst-case n-channel transistor response (most positive threshold voltage shift) in low-dose-rate and postirradiation environments is required in many cases. It is shown, for Sandia hardened n-channel transistors with a 32 nm gate oxide, that switching from zero-volt bias, held during the entire radiation period, to positive bias during anneal clearly leads to a more positive threshold voltage shift (and thus the slowest circuit response) after Co-60 exposure than the standard case of maintaining positive bias during irradiation and anneal. It is concluded that irradiating these kinds of transistors with zero-volt bias, and annealing with positive bias, leads to worst-case postirradiation response. For commercial devices (with few interface states at doses of interest), on the other hand, device response only improves postirradiation, and worst-case response (in terms of device leakage) is for devices irradiated under positive bias and annealed with zero-volt bias.

  17. Properties of Worst-Case GMRES

    Czech Academy of Sciences Publication Activity Database

    Faber, V.; Liesen, J.; Tichý, Petr

    2013-01-01

    Roč. 34, č. 4 (2013), s. 1500-1519 ISSN 0895-4798 R&D Projects: GA ČR GA13-06684S Grant - others:GA AV ČR(CZ) M10041090 Institutional support: RVO:67985807 Keywords : GMRES method * worst-case convergence * ideal GMRES * matrix approximation problems * minmax Subject RIV: BA - General Mathematics Impact factor: 1.806, year: 2013

  18. Worst-case-efficient dynamic arrays in practice

    DEFF Research Database (Denmark)

    Katajainen, Jyrki

    2016-01-01

    The basic operations of a dynamic array are operator[], push_back, and pop_back. This study is an examination of variations of dynamic arrays that support these operations at O(1) worst-case cost. In the literature, many solutions have been proposed, but little information is available on their mutual superiority. Most library implementations only guarantee O(1) amortized cost per operation. Four variations with good worst-case performance were benchmarked: (1) resizable array relying on doubling, halving, and incremental copying; (2) level-wise allocated pile; (3) sliced array with fixed-capacity slices; and (4) blockwise-allocated pile. Let |V| denote the size of the values of type V and |V*| the size of the pointers to values of type V, both measured in bytes. For an array of n values and a slice of S values, the space requirements of the considered variations were at most 12|V|n+O(|V*|), 2|V…
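
    A minimal sketch of variation (1), a resizable array with worst-case O(1) push_back obtained by doubling plus incremental copying, is shown below. Class and field names are mine, and pop_back with halving is omitted for brevity; this illustrates the technique and is not the benchmarked implementation:

```python
# Worst-case O(1) push_back: grow into a twice-as-large slab and move a
# constant number of old elements per operation (incremental copying).
class WorstCaseArray:
    def __init__(self):
        self.cur = [None]          # active slab
        self.new = None            # larger slab being filled incrementally
        self.copied = 0            # prefix of cur already mirrored into new
        self.n = 0                 # logical size

    def push_back(self, x):
        if self.n == len(self.cur) and self.new is None:
            self.new = [None] * (2 * len(self.cur))   # start a migration
        if self.new is not None:
            self.new[self.n] = x                      # new items go to new slab
        else:
            self.cur[self.n] = x
        self.n += 1
        # Incremental copying: move at most two old elements per operation,
        # so the migration finishes before the new slab can overflow.
        if self.new is not None:
            for _ in range(2):
                if self.copied < len(self.cur):
                    self.new[self.copied] = self.cur[self.copied]
                    self.copied += 1
            if self.copied == len(self.cur):          # migration finished
                self.cur, self.new, self.copied = self.new, None, 0

    def __getitem__(self, i):
        assert 0 <= i < self.n
        # During a migration, already-copied and freshly-appended items live
        # in the new slab; the not-yet-copied middle still lives in cur.
        if self.new is not None and (i < self.copied or i >= len(self.cur)):
            return self.new[i]
        return self.cur[i]

a = WorstCaseArray()
for i in range(100):
    a.push_back(i)
assert all(a[i] == i for i in range(100))
```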

  19. A New Look at Worst Case Complexity: A Statistical Approach

    Directory of Open Access Journals (Sweden)

    Niraj Kumar Singh

    2014-01-01

    Full Text Available We present a new and improved worst-case complexity model for quick sort as y_worst(n, t_d) = b0 + b1 n^2 + g(n, t_d) + ε, where the LHS gives the worst-case time complexity, n is the input size, t_d is the frequency of sample elements, and g(n, t_d) is a function of both the input size n and the parameter t_d. The rest of the terms arising due to linear regression have their usual meanings. We claim this to be an improvement over the conventional model, namely y_worst(n) = b0 + b1 n + b2 n^2 + ε, which stems from the worst-case O(n^2) complexity of this algorithm.
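
    Since the model is fitted by linear regression, it can be reproduced with ordinary least squares. The sketch below uses synthetic timings and an assumed interaction term g(n, t_d) = b2 n t_d; both are placeholders rather than the paper's data or its exact g:

```python
import numpy as np

# Fit y = b0 + b1*n^2 + g(n, t_d) + eps by ordinary least squares, with the
# assumed form g(n, t_d) = b2 * n * t_d. Timings below are synthetic.
n = np.array([1000, 2000, 4000, 8000, 16000], dtype=float)
t_d = np.array([10, 10, 20, 20, 40], dtype=float)      # sample-element frequency
y = 2.0 + 3e-6 * n**2 + 1e-4 * n * t_d                  # synthetic worst-case times

X = np.column_stack([np.ones_like(n), n**2, n * t_d])   # regression design matrix
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)   # recovers approximately [2.0, 3e-6, 1e-4]
```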

  20. Worst-Case Memory Consumption Analysis for SCJ

    DEFF Research Database (Denmark)

    Andersen, Jeppe Lunde; Todberg, Mikkel; Dalsgaard, Andreas Engelbredt

    2013-01-01

    Safety-Critical Java is designed to be used for safety-critical and hard real-time systems. To ensure predictable behaviour, garbage collection has been replaced by a scope-based memory model. This model currently requires bounds on the memory usage of scopes to be specified by developers. These bounds have to be strict worst-case memory bounds to ensure correct behaviour of these systems. Currently, common methods are measurement based or rely on careful inspection of the application's Java bytecode. Not only is this a cumbersome approach, it is also potentially unsafe. In this paper we present a worst-case memory consumption tool for Safety-Critical Java and evaluate it on existing use cases and a new use case building on the Cubesat Space Protocol.

  1. Sensitivity analysis for matched pair analysis of binary data: From worst case to average case analysis.

    Science.gov (United States)

    Hasegawa, Raiden; Small, Dylan

    2017-12-01

    In matched observational studies where treatment assignment is not randomized, sensitivity analysis helps investigators determine how sensitive their estimated treatment effect is to some unmeasured confounder. The standard approach calibrates the sensitivity analysis according to the worst case bias in a pair. This approach will result in a conservative sensitivity analysis if the worst case bias does not hold in every pair. In this paper, we show that for binary data, the standard approach can be calibrated in terms of the average bias in a pair rather than worst case bias. When the worst case bias and average bias differ, the average bias interpretation results in a less conservative sensitivity analysis and more power. In many studies, the average case calibration may also carry a more natural interpretation than the worst case calibration and may also allow researchers to incorporate additional data to establish an empirical basis with which to calibrate a sensitivity analysis. We illustrate this with a study of the effects of cellphone use on the incidence of automobile accidents. Finally, we extend the average case calibration to the sensitivity analysis of confidence intervals for attributable effects. © 2017, The International Biometric Society.

  2. SU-E-T-551: PTV Is the Worst-Case of CTV in Photon Therapy

    International Nuclear Information System (INIS)

    Harrington, D; Liu, W; Park, P; Mohan, R

    2014-01-01

    Purpose: To examine the supposition of the static dose cloud and adequacy of the planning target volume (PTV) dose distribution as the worst-case representation of clinical target volume (CTV) dose distribution for photon therapy in head and neck (H and N) plans. Methods: Five diverse H and N plans clinically delivered at our institution were selected. Isocenter for each plan was shifted positively and negatively in the three cardinal directions by a displacement equal to the PTV expansion on the CTV (3 mm) for a total of six shifted plans per original plan. The perturbed plan dose was recalculated in Eclipse (AAA v11.0.30) using the same, fixed fluence map as the original plan. The dose distributions for all plans were exported from the treatment planning system to determine the worst-case CTV dose distributions for each nominal plan. Two worst-case distributions, cold and hot, were defined by selecting the minimum or maximum dose per voxel from all the perturbed plans. The resulting dose volume histograms (DVH) were examined to evaluate the worst-case CTV and nominal PTV dose distributions. Results: Inspection demonstrates that the CTV DVH in the nominal dose distribution is indeed bounded by the CTV DVHs in the worst-case dose distributions. Furthermore, comparison of the D95% for the worst-case (cold) CTV and nominal PTV distributions by Pearson's chi-square test shows excellent agreement for all plans. Conclusion: The assumption that the nominal dose distribution for PTV represents the worst-case dose distribution for CTV appears valid for the five plans under examination. Although the worst-case dose distributions are unphysical since the dose per voxel is chosen independently, the cold worst-case distribution serves as a lower bound for the worst-case possible CTV coverage. Minor discrepancies between the nominal PTV dose distribution and worst-case CTV dose distribution are expected since the dose cloud is not strictly static. This research was …
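
    The worst-case construction described in the Methods is straightforward to express with arrays; the sketch below uses synthetic dose values in place of the exported treatment-planning-system doses:

```python
import numpy as np

# Build voxel-wise worst-case (cold/hot) CTV dose distributions from a set of
# isocenter-shifted recalculations, as in the Methods. Doses are synthetic.
rng = np.random.default_rng(2)
nominal = rng.normal(60.0, 0.5, size=5000)                # Gy, CTV voxels
shifted = nominal + rng.normal(0.0, 1.0, size=(6, 5000))  # six shifted plans

cold = np.minimum(nominal, shifted.min(axis=0))  # per-voxel minimum dose
hot = np.maximum(nominal, shifted.max(axis=0))   # per-voxel maximum dose

def dvh(dose, grid):
    """Fraction of voxels receiving at least each dose level."""
    return np.array([(dose >= g).mean() for g in grid])

grid = np.linspace(50.0, 70.0, 201)
d95_cold = grid[dvh(cold, grid) >= 0.95][-1]     # D95% of the cold curve
d95_nom = grid[dvh(nominal, grid) >= 0.95][-1]   # D95% of the nominal curve
print(d95_cold, d95_nom)                         # cold D95% <= nominal D95%
```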

  3. Probability model for worst case solar proton event fluences

    International Nuclear Information System (INIS)

    Xapsos, M.A.; Summers, G.P.; Barth, J.L.; Stassinopoulos, E.G.; Burke, E.A.

    1999-01-01

    The effects that solar proton events have on microelectronics and solar arrays are important considerations for spacecraft in geostationary orbits, polar orbits and on interplanetary missions. A predictive model of worst case solar proton event fluences is presented. It allows the expected worst case event fluence to be calculated for a given confidence level and for periods of time corresponding to space missions. The proton energy range is from >1 to >300 MeV, so that the model is useful for a variety of radiation effects applications. For each proton energy threshold, the maximum entropy principle is used to select the initial distribution of solar proton event fluences. This turns out to be a truncated power law, i.e., a power law for smaller event fluences that smoothly approaches zero at a maximum fluence. The strong agreement of the distribution with satellite data for the last three solar cycles indicates this description captures the essential features of a solar proton event fluence distribution. Extreme value theory is then applied to the initial distribution of events to obtain the model of worst case fluences

  4. Type-A Worst-Case Uncertainty for Gaussian noise instruments

    International Nuclear Information System (INIS)

    Arpaia, P.; Baccigalupi, C.; Martino, M.

    2015-01-01

    An analytical type-A approach is proposed for predicting the Worst-Case Uncertainty of a measurement system. In a set of independent observations of the same measurand, modelled as independent- and identically-distributed random variables, the upcoming extreme values (e.g. peaks) can be forecast by only characterizing the measurement system noise level, assumed to be white and Gaussian. Simulation and experimental results are presented to validate the model for a case study on the worst-case repeatability of a pulsed power supply for the klystron modulators of the Compact LInear Collider at CERN. The experimental validation highlights satisfying results for an acquisition system repeatable in the order of ±25 ppm over a bandwidth of 5 MHz
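
    The flavor of this approach can be reproduced with the standard extreme-value asymptotic for Gaussian samples; the sqrt(2 ln N) formula below is a textbook approximation used here for illustration and is not necessarily the paper's exact estimator:

```python
import numpy as np

# Expected worst-case peak of N i.i.d. zero-mean Gaussian noise samples:
# E[max] ~ sigma * sqrt(2 ln N), a leading-order extreme-value asymptotic
# (illustrative; the record's estimator may differ in its correction terms).
sigma, N = 1.0, 10_000
analytic = sigma * np.sqrt(2 * np.log(N))

rng = np.random.default_rng(3)
empirical = rng.normal(0.0, sigma, size=(200, N)).max(axis=1).mean()
print(analytic, empirical)   # ~4.29 vs ~3.9; the leading-order term
                             # overestimates somewhat at finite N
```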

  5. Type-A Worst-Case Uncertainty for Gaussian noise instruments

    CERN Document Server

    AUTHOR|(CDS)2087245; Arpaia, Pasquale; Martino, Michele

    2015-01-01

    An analytical type-A approach is proposed for predicting the Worst-Case Uncertainty of a measurement system. In a set of independent observations of the same measurand, modelled as independent- and identically-distributed random variables, the upcoming extreme values (e.g. peaks) can be forecast by only characterizing the measurement system noise level, assumed to be white and Gaussian. Simulation and experimental results are presented to validate the model for a case study on the worst-case repeatability of a pulsed power supply for the klystron modulators of the Compact LInear Collider at CERN. The experimental validation highlights satisfying results for an acquisition system repeatable in the order of ±25 ppm over a bandwidth of 5 MHz.

  6. Conceptual modeling for identification of worst case conditions in environmental risk assessment of nanomaterials using nZVI and C60 as case studies

    DEFF Research Database (Denmark)

    Grieger, Khara Deanne; Hansen, Steffen Foss; Sørensen, Peter B.

    2011-01-01

    Conducting environmental risk assessment of engineered nanomaterials has been an extremely challenging endeavor thus far. Moreover, recent findings from the nano-risk scientific community indicate that it is unlikely that many of these challenges will be easily resolved in the near future, especially given the vast variety and complexity of nanomaterials and their applications. As an approach to help optimize environmental risk assessments of nanomaterials, we apply the Worst-Case Definition (WCD) model to identify best estimates for worst-case conditions of environmental risks of two case studies which use engineered nanoparticles, namely nZVI in soil and groundwater remediation and C60 in an engine oil lubricant. Results generated from this analysis may ultimately help prioritize research areas for environmental risk assessments of nZVI and C60 in these applications as well as demonstrate …

  7. Optimisation-based worst-case analysis and anti-windup synthesis for uncertain nonlinear systems

    Science.gov (United States)

    Menon, Prathyush Purushothama

    This thesis describes the development and application of optimisation-based methods for worst-case analysis and anti-windup synthesis for uncertain nonlinear systems. The worst-case analysis methods developed in the thesis are applied to the problem of nonlinear flight control law clearance for highly augmented aircraft. Local, global and hybrid optimisation algorithms are employed to evaluate worst-case violations of a nonlinear response clearance criterion, for a highly realistic aircraft simulation model and flight control law. The reliability and computational overheads associated with different optimisation algorithms are compared, and the capability of optimisation-based approaches to clear flight control laws over continuous regions of the flight envelope is demonstrated. An optimisation-based method for computing worst-case pilot inputs is also developed, and compared with current industrial approaches for this problem. The importance of explicitly considering uncertainty in aircraft parameters when computing worst-case pilot demands is clearly demonstrated. Preliminary results on extending the proposed framework to the problems of limit-cycle analysis and robustness analysis in the presence of time-varying uncertainties are also included. A new method for the design of anti-windup compensators for nonlinear constrained systems controlled using nonlinear dynamics inversion control schemes is presented and successfully applied to some simple examples. An algorithm based on the use of global optimisation is proposed to design the anti-windup compensator. Some conclusions are drawn from the results of the research presented in the thesis, and directions for future work are identified.

  8. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  9. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    Science.gov (United States)

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We firstly employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.

  10. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    Directory of Open Access Journals (Sweden)

    Sunil Chinnadurai

    2017-09-01

    Full Text Available In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter-user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We firstly employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach’s algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.

  11. The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.

    Science.gov (United States)

    Qu, Shaojian; Ji, Ying

    2016-01-01

    In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model where each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of players all given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). For an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be more efficiently used for real-world applications.
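
    In symbols, each player's worst-case weighted objective has roughly the following form (an illustrative reconstruction; the strategy sets, weight sets, and indexing are assumed rather than taken from the paper):

```latex
% Player i's worst-case weighted objective (schematic notation).
% Player i picks a strategy x_i to minimize the maximum, over admissible
% weight vectors w_i in W_i, of the weighted sum of its m objectives.
\[
  \min_{x_i \in X_i} \; \max_{w_i \in W_i} \; \sum_{k=1}^{m} w_{ik}\, f_{ik}(x_i, x_{-i})
\]
```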

  12. Worst-case prediction of normal operating containment temperatures for environmentally qualified equipment

    International Nuclear Information System (INIS)

    Krasnopoler, M.J.; Sundergill, J.E.

    1991-01-01

    Due to issues raised in NRC Information Notice No. 87-65, a southern US nuclear plant was concerned about thermal aging of environmentally qualified (EQ) equipment located in areas of elevated containment temperatures. A method to predict the worst-case monthly temperatures at various zones in the containment and calculate the qualified life using this monthly temperature was developed. Temperatures were predicted for twenty locations inside the containment. Concern about the qualified life of EQ equipment resulted from normal operating temperatures above 120F in several areas of the containment, especially during the summer. At a few locations, the temperature exceeded 140F. Also, NRC Information Notice No. 89-30 reported high containment temperatures at three other nuclear power plants. The predicted temperatures were based on a one-year containment temperature monitoring program. The monitors included permanent temperature monitors required by the Technical Specifications and temporary monitors installed specifically for this program. The temporary monitors were installed near EQ equipment in the expected worst-case areas based on design and operating experience. A semi-empirical model that combined physical and statistical models was developed. The physical model was an overall energy balance for the containment. The statistical model consists of several linear regressions that conservatively relate the monitor temperatures to the bulk containment temperature. The resulting semi-empirical model predicts the worst-case monthly service temperatures at the location of each of the containment temperature monitors. The monthly temperatures are the maximum expected because they are based on the historically worst-case atmospheric data

  13. The Worst-Case Weighted Multi-Objective Game with an Application to Supply Chain Competitions.

    Directory of Open Access Journals (Sweden)

    Shaojian Qu

    Full Text Available In this paper, we propose a worst-case weighted approach to the multi-objective n-person non-zero-sum game model where each player has more than one competing objective. Our "worst-case weighted multi-objective game" model supposes that each player has a set of weights to its objectives and wishes to minimize its maximum weighted sum of objectives, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto Nash equilibrium concept, which we call "robust-weighted Nash equilibrium". We prove that robust-weighted Nash equilibria are guaranteed to exist even when the weight sets are unbounded. For the worst-case weighted multi-objective game with the weight sets of players all given as polytopes, we show that a robust-weighted Nash equilibrium can be obtained by solving a mathematical program with equilibrium constraints (MPEC). For an application, we illustrate the usefulness of the worst-case weighted multi-objective game on a supply chain risk management problem under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be more efficiently used for real-world applications.

  14. On the estimation of the worst-case implant-induced RF-heating in multi-channel MRI

    Science.gov (United States)

    Córcoles, Juan; Zastrow, Earl; Kuster, Niels

    2017-06-01

    The increasing use of multiple radiofrequency (RF) transmit channels in magnetic resonance imaging (MRI) systems makes it necessary to rigorously assess the risk of RF-induced heating. This risk is especially aggravated with inclusions of medical implants within the body. The worst-case RF-heating scenario is achieved when the local tissue deposition in the at-risk region (generally in the vicinity of the implant electrodes) reaches its maximum value while MRI exposure is compliant with predefined general specific absorption rate (SAR) limits or power requirements. This work first reviews the common approach to estimate the worst-case RF-induced heating in multi-channel MRI environment, based on the maximization of the ratio of two Hermitian forms by solving a generalized eigenvalue problem. It is then shown that the common approach is not rigorous and may lead to an underestimation of the worst-case RF-heating scenario when there is a large number of RF transmit channels and there exist multiple SAR or power constraints to be satisfied. Finally, this work derives a rigorous SAR-based formulation to estimate a preferable worst-case scenario, which is solved by casting a semidefinite programming relaxation of this original non-convex problem, whose solution closely approximates the true worst-case including all SAR constraints. Numerical results for 2, 4, 8, 16, and 32 RF channels in a 3T-MRI volume coil for a patient with a deep-brain stimulator under a head imaging exposure are provided as illustrative examples.
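
    The "common approach" that the paper reviews reduces to maximizing a ratio of two Hermitian forms, which is a generalized eigenvalue problem; the sketch below uses random Hermitian placeholder matrices rather than dosimetric data:

```python
import numpy as np
from scipy.linalg import eigh

# Common approach (as reviewed in the record): maximize v^H A v / v^H B v over
# RF shim vectors v, where A maps channel weights to local SAR near the implant
# electrodes and B encodes a single global SAR/power constraint. The matrices
# here are random Hermitian positive-definite placeholders.
rng = np.random.default_rng(4)
n = 8                                   # number of RF transmit channels
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M @ M.conj().T                      # local SAR matrix (placeholder)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = M @ M.conj().T + n * np.eye(n)      # constraint matrix (placeholder)

w, V = eigh(A, B)                       # generalized eigenproblem A v = w B v
worst_ratio, v_worst = w[-1], V[:, -1]  # largest eigenvalue = worst-case ratio
print(worst_ratio)
```

    As the record argues, when several SAR or power constraints must hold simultaneously, this single-constraint eigenvalue bound can underestimate the true worst case, which is what motivates the semidefinite-relaxation formulation.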

  15. Generalized rank weights of reducible codes, optimal cases and related properties

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto

    2018-01-01

    … in network coding. In this paper, we study their security behavior against information leakage on networks when applied as coset coding schemes, giving the following main results: 1) we give lower and upper bounds on their generalized rank weights (GRWs), which measure worst-case information leakage to the wire tapper; 2) we find new parameters for which these codes are MRD (meaning that their first GRW is optimal) and use the previous bounds to estimate their higher GRWs; 3) we show that all linear (over the extension field) codes whose GRWs are all optimal for fixed packet and code sizes but varying length are reducible codes up to rank equivalence; and 4) we show that the information leaked to a wire tapper when using reducible codes is often much less than the worst case given by their (optimal in some cases) GRWs. We conclude with some secondary related properties: conditions to be rank …

  16. Worst-Case Bias During Total Dose Irradiation of SOI Transistors

    International Nuclear Information System (INIS)

    Ferlet-Cavrois, V.; Colladant, T.; Paillet, P.; Leray, J.-L; Musseau, O.; Schwank, James R.; Shaneyfelt, Marty R.; Pelloie, J.L.; Du Port de Poncharra, J.

    2000-01-01

    The worst case bias during total dose irradiation of partially depleted SOI transistors (from SNL and from CEA/LETI) is correlated to the device architecture. Experiments and simulations are used to analyze SOI back transistor threshold voltage shift and charge trapping in the buried oxide

  17. An Approach for Generating Precipitation Input for Worst-Case Flood Modelling

    Science.gov (United States)

    Felder, Guido; Weingartner, Rolf

    2015-04-01

    There is a lack of suitable methods for creating precipitation scenarios that can be used to realistically estimate peak discharges with very low probabilities. On the one hand, existing methods are methodically questionable when it comes to physical system boundaries. On the other hand, the spatio-temporal representativeness of precipitation patterns as system input is limited. In response, this study proposes a method of deriving representative spatio-temporal precipitation patterns and presents a step towards making methodically correct estimations of infrequent floods by using a worst-case approach. A Monte-Carlo rainfall-runoff model allows for the testing of a wide range of different spatio-temporal distributions of an extreme precipitation event and therefore for the generation of a hydrograph for each of these distributions. Out of these numerous hydrographs and their corresponding peak discharges, the worst-case catchment reactions to the system input can be derived. The spatio-temporal distributions leading to the highest peak discharges are identified and can eventually be used for further investigations.
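
    The Monte Carlo idea can be sketched in a few lines: fix the event's total precipitation volume, randomize its spatio-temporal distribution, route each pattern through a rainfall-runoff model, and keep the worst peak discharge. The linear-reservoir "model" below is a deliberately crude placeholder for the study's actual model:

```python
import numpy as np

# Monte Carlo search for the worst-case spatio-temporal rain pattern (a toy
# sketch). The total event volume is fixed; only its distribution over
# subcatchments and time steps varies.
rng = np.random.default_rng(5)
n_sub, n_steps, total_volume = 4, 48, 100.0

def peak_discharge(pattern, k=0.15):
    """Route a rain pattern through per-subcatchment linear reservoirs."""
    storage, peak = np.zeros(n_sub), 0.0
    for t in range(n_steps):
        storage += pattern[:, t]         # rain input per subcatchment
        outflow = k * storage            # linear-reservoir release
        storage -= outflow
        peak = max(peak, outflow.sum())  # discharge at the outlet
    return peak

best = 0.0
for _ in range(10_000):
    p = rng.dirichlet(np.ones(n_sub * n_steps)).reshape(n_sub, n_steps)
    best = max(best, peak_discharge(total_volume * p))
print(best)   # worst-case peak over the sampled patterns
```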

  18. Average Case vs. Worst Case-Margins of Safety in System Design

    DEFF Research Database (Denmark)

    Probst, Christian; Gal, Andreas; Franz, Michael

    2005-01-01

    … allocator by sending it a particularly difficult-to-solve graph-coloring puzzle. The same vulnerability can be exploited if the attacker has intimate knowledge of the data structures used in the attacked system. Similar problems occur in hardware, e.g. with respect to power variability or the heat dissipation of processors. Malicious programs can exploit knowledge of which parts of computer chips dissipate power, thereby overheating regions of the chip that are known to contain no temperature sensors. This attack could be used to affect battery life or cause early chip aging. Unfortunately, worst case-based attacks …

  19. Static Profiling of the Worst-Case in Real-Time Programs

    DEFF Research Database (Denmark)

    Brandner, Florian; Hepp, Stefan; Jordan, Alexander

    2012-01-01

    With the increasing performance demand in real-time systems it becomes more and more relevant to provide feedback to engineers and programmers, but also to software development tools, on the performance-relevant code parts of a real-time program. So far, the information provided to programmers through … other code parts. To give an accurate view covering the entire code base, tools in the spirit of standard program profiling tools are required. This work proposes an efficient approach to compute worst-case timing information for all code parts of a program using a complementary metric, called criticality. Every statement of a real-time program is assigned a criticality value, expressing how critical the respective code is with respect to the global WCET. This gives programmers an accurate view of how close the worst execution path passing through a specific part of a real-time program …

  20. Static analysis of worst-case stack cache behavior

    DEFF Research Database (Denmark)

    Jordan, Alexander; Brandner, Florian; Schoeberl, Martin

    2013-01-01

    Utilizing a stack cache in a real-time system can aid predictability by avoiding interference that heap memory traffic causes on the data cache. While loads and stores are guaranteed cache hits, explicit operations are responsible for managing the stack cache. The behavior of these operations can … graph, the worst-case bounds can be efficiently yet precisely determined. Our evaluation using the MiBench benchmark suite shows that only 37% and 21% of potential stack cache operations actually store to and load from memory, respectively. Analysis times are modest, on average running between 0.46s and 1.30s per …

  1. Dynamic Minimum Spanning Forest with Subpolynomial Worst-case Update Time

    DEFF Research Database (Denmark)

    Nanongkai, Danupon; Saranurak, Thatchaphol; Wulff-Nilsen, Christian

    2017-01-01

    Abstract: We present a Las Vegas algorithm for dynamically maintaining a minimum spanning forest of an n-node graph undergoing edge insertions and deletions. Our algorithm guarantees an O(n^o(1)) worst-case update time with high probability. This significantly improves the two recent Las Vegas algorithms … the previous approach in [2], [3], which is based on Frederickson's 2-dimensional topology tree [6], and illustrates a new application of this old technique.

  2. SU-E-T-642: PTV Is the Voxel-Wise Worst-Case of CTV in Prostate Photon Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, D; Schild, S; Wong, W; Vora, S; Liu, W [Mayo Clinic Arizona, Phoenix, AZ (United States)

    2015-06-15

    Purpose: To examine the adequacy of the planning target volume (PTV) dose distribution as the worst-case representation of clinical target volume (CTV) dose distribution in prostate volumetric-modulated arc therapy (VMAT) plans. Methods: Ten intact prostate cancer cases treated by VMAT at our institution were randomly selected. Isocenter was shifted in the three cardinal directions by a displacement equal to the PTV expansion on the CTV (±3 mm) for a total of six shifted plans per original plan. Rotationally-perturbed plans were generated with a couch rotation of ±1° to simulate patient yaw. The eight perturbed dose distributions were recalculated in the treatment planning system using the same, fixed fluence map as the original plan. The voxel-wise worst-case CTV dose distribution was constructed from the minimum value per voxel from the eight perturbed doses. The resulting dose volume histograms (DVH) were evaluated for statistical correlation between the worst-case CTV and nominal PTV dose distributions based on D95% by Wilcoxon signed-rank test with significance level p ≤ 0.05. Results: Inspection demonstrates the PTV DVH in the nominal dose distribution is bounded by the CTV DVH in the worst-case dose distribution. Comparison of D95% for the two dose distributions by Wilcoxon signed-rank test gives p = 0.131. Therefore the null hypothesis cannot be rejected since the difference in median values is not statistically significant. Conclusion: The assumption that the nominal dose distribution for PTV represents the worst-case dose distribution for CTV appears valid for the ten plans under examination. Although the worst-case dose distribution is unphysical since the dose per voxel is chosen independently, it serves as a lower bound for the possible CTV coverage. Furthermore, this is consistent with the unphysical nature of the PTV. Minor discrepancies between the two dose distributions are expected since the dose cloud is not strictly static. Funding Support …

  3. SU-E-T-642: PTV Is the Voxel-Wise Worst-Case of CTV in Prostate Photon Therapy

    International Nuclear Information System (INIS)

    Harrington, D; Schild, S; Wong, W; Vora, S; Liu, W

    2015-01-01

    Purpose: To examine the adequacy of the planning target volume (PTV) dose distribution as the worst-case representation of clinical target volume (CTV) dose distribution in prostate volumetric-modulated arc therapy (VMAT) plans. Methods: Ten intact prostate cancer cases treated by VMAT at our institution were randomly selected. Isocenter was shifted in the three cardinal directions by a displacement equal to the PTV expansion on the CTV (±3 mm) for a total of six shifted plans per original plan. Rotationally-perturbed plans were generated with a couch rotation of ±1° to simulate patient yaw. The eight perturbed dose distributions were recalculated in the treatment planning system using the same, fixed fluence map as the original plan. The voxel-wise worst-case CTV dose distribution was constructed from the minimum value per voxel from the eight perturbed doses. The resulting dose volume histograms (DVH) were evaluated for statistical correlation between the worst-case CTV and nominal PTV dose distributions based on D95% by Wilcoxon signed-rank test with significance level p ≤ 0.05. Results: Inspection demonstrates the PTV DVH in the nominal dose distribution is bounded by the CTV DVH in the worst-case dose distribution. Comparison of D95% for the two dose distributions by Wilcoxon signed-rank test gives p = 0.131. Therefore the null hypothesis cannot be rejected since the difference in median values is not statistically significant. Conclusion: The assumption that the nominal dose distribution for PTV represents the worst-case dose distribution for CTV appears valid for the ten plans under examination. Although the worst-case dose distribution is unphysical since the dose per voxel is chosen independently, it serves as a lower bound for the possible CTV coverage. Furthermore, this is consistent with the unphysical nature of the PTV. Minor discrepancies between the two dose distributions are expected since the dose cloud is not strictly static. Funding Support …

  4. Worst-case and smoothed analysis of k-means clustering with Bregman divergences

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, H.

    2013-01-01

    The $k$-means method is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice despite its exponential worst-case running-time. To narrow the gap between theory and practice, $k$-means has been studied in the semi-random input model of smoothed analysis …

  5. Multi-master profibus dp modelling and worst case analysis-based evaluation

    OpenAIRE

    Salvatore Monforte; Eduardo Tovar; Francisco Vasques; Salvatore Cavalieri

    2002-01-01

    This paper provides an analysis of the real-time behaviour of the multi-master Profibus DP network. The analysis is based on the evaluation of the worst-case message response time, and the results obtained are compared with those in the literature, pointing out the capability of the proposed analysis to perform a more accurate evaluation of the performance of the Profibus network. Copyright © 2002 IFAC.

  6. Worst-case residual clipping noise power model for bit loading in LACO-OFDM

    KAUST Repository

    Zhang, Zhenyu; Chaaban, Anas; Shen, Chao; Elgala, Hany; Ng, Tien Khee; Ooi, Boon S.; Alouini, Mohamed-Slim

    2018-01-01

    Layered ACO-OFDM enjoys better spectral efficiency than ACO-OFDM, but its performance is challenged by residual clipping noise (RCN). In this paper, the power of RCN of LACO-OFDM is analyzed and modeled. As RCN is data-dependent, the worst-case situation is considered. A worst-case indicator is defined for relating the power of RCN and the power of noise at the receiver, wherein a linear relation is shown to be a practical approximation. An LACO-OFDM bit-loading experiment is performed to examine the proposed RCN power model for data rates of 6 to 7 Gbps. The experiment's results show that accounting for RCN has two advantages. First, it leads to better bit loading and achieves up to 59% lower overall bit-error rate (BER) than when the RCN is ignored. Second, it balances the BER across layers, which is a desired property from a channel coding perspective.

  7. Worst-case residual clipping noise power model for bit loading in LACO-OFDM

    KAUST Repository

    Zhang, Zhenyu

    2018-03-19

    Layered ACO-OFDM enjoys better spectral efficiency than ACO-OFDM, but its performance is challenged by residual clipping noise (RCN). In this paper, the power of RCN of LACO-OFDM is analyzed and modeled. As RCN is data-dependent, the worst-case situation is considered. A worst-case indicator is defined for relating the power of RCN and the power of noise at the receiver, wherein a linear relation is shown to be a practical approximation. An LACO-OFDM bit-loading experiment is performed to examine the proposed RCN power model for data rates of 6 to 7 Gbps. The experiment's results show that accounting for RCN has two advantages. First, it leads to better bit loading and achieves up to 59% lower overall bit-error rate (BER) than when the RCN is ignored. Second, it balances the BER across layers, which is a desired property from a channel coding perspective.

  8. Discussions On Worst-Case Test Condition For Single Event Burnout

    Science.gov (United States)

    Liu, Sandra; Zafrani, Max; Sherman, Phillip

    2011-10-01

    This paper discusses the failure characteristics of single-event burnout (SEB) on power MOSFETs based on analyzing the quasi-stationary avalanche simulation curves. The analyses show the worst-case test condition for SEB would be using the ion that has the highest mass that would result in the highest transient current due to charge deposition and displacement damage. The analyses also show it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, which have been verified by heavy ion test data on SEB sensitive and SEB immune devices.

  9. Performance Analysis of Capacity of MIMO Systems under Multiuser Interference Based on Worst-Case Noise Behavior

    Directory of Open Access Journals (Sweden)

    Jorswieck E. A.

    2004-01-01

    Full Text Available The capacity of a cellular multiuser MIMO system depends on various parameters, for example, the system structure, the transmit and receive strategies, the channel state information at the transmitter and the receiver, and the channel properties. Recently, the main focus of research was on single-user MIMO systems, their channel capacity, and their error performance with space-time coding. In general, the capacity of a cellular multiuser MIMO system is limited by additive white Gaussian noise, intracell interference from other users within the cell, and intercell interference from users outside the considered cell. We study one point-to-point link, on which interference acts. The interference models the different system scenarios and various parameters. Therefore, we consider three scenarios in which the noise is subject to different constraints. A general trace constraint is used in the first scenario. The noise covariance matrix eigenvalues are kept fixed in the second scenario, and in the third scenario the entries on the diagonal of the noise covariance matrix are kept fixed. We assume that the receiver as well as the transmitter have perfect channel state information. We solve the corresponding minimax programming problems and characterize the worst-case noise and the optimal transmit strategy. In all scenarios, the achievable capacity of the MIMO system with worst-case noise is equal to the capacity of some MIMO system in which either the channels are orthogonal or the transmit antennas are not allowed to cooperate or in which no channel state information is available at the transmitter. Furthermore, the minimax expressions fulfill a saddle point property. All theoretical results are illustrated by examples and numerical simulations.
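
    The saddle-point problems solved in the three scenarios share the following schematic form (notation assumed for illustration: H is the channel, Q the transmit covariance, Z the noise covariance, and S the scenario's constraint set):

```latex
% Worst-case-noise capacity as a min-max problem (schematic notation).
% The transmit covariance Q is chosen under a power constraint, while the
% noise covariance Z ranges over the constraint set S of the given scenario
% (trace-constrained, fixed eigenvalues, or fixed diagonal entries).
\[
  C \;=\; \min_{Z \in S} \; \max_{Q \succeq 0,\ \operatorname{tr}(Q) \le P}
      \log \det \!\left( I + Z^{-1/2} H Q H^{H} Z^{-1/2} \right)
\]
```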

  10. Effect of pesticide fate parameters and their uncertainty on the selection of 'worst-case' scenarios of pesticide leaching to groundwater.

    Science.gov (United States)

    Vanderborght, Jan; Tiktak, Aaldrik; Boesten, Jos J T I; Vereecken, Harry

    2011-03-01

    For the registration of pesticides in the European Union, model simulations for worst-case scenarios are used to demonstrate that leaching concentrations to groundwater do not exceed a critical threshold. A worst-case scenario is a combination of soil and climate properties for which predicted leaching concentrations are higher than a certain percentile of the spatial concentration distribution within a region. The derivation of scenarios is complicated by uncertainty about soil and pesticide fate parameters. As the ranking of climate and soil property combinations according to predicted leaching concentrations is different for different pesticides, the worst-case scenario for one pesticide may misrepresent the worst case for another pesticide, which leads to 'scenario uncertainty'. Pesticide fate parameter uncertainty led to higher concentrations in the higher percentiles of spatial concentration distributions, especially for distributions in smaller and more homogeneous regions. The effect of pesticide fate parameter uncertainty on the spatial concentration distribution was small when compared with the uncertainty of local concentration predictions and with the scenario uncertainty. Uncertainty in pesticide fate parameters and scenario uncertainty can be accounted for using higher percentiles of spatial concentration distributions and considering a range of pesticides for the scenario selection. Copyright © 2010 Society of Chemical Industry.

  11. Combining loop unrolling strategies and code predication to reduce the worst-case execution time of real-time software

    Directory of Open Access Journals (Sweden)

    Andreu Carminati

    2017-07-01

    Worst-case execution time (WCET) is a parameter necessary to guarantee timing constraints on real-time systems. The higher the worst-case execution time of tasks, the higher the resource demand of the associated system. The goal of this paper is to propose a different way to perform loop unrolling on data-dependent loops using code predication targeting WCET reduction, because existing techniques only consider loops with fixed execution counts. We also combine our technique with existing unrolling approaches. Results showed that this combination can produce aggressive WCET reductions when compared with the original code.

  12. Healthcare Worker Preferences for Active Tuberculosis Case Finding Programs in South Africa: A Best-Worst Scaling Choice Experiment.

    Directory of Open Access Journals (Sweden)

    Nathan N O'Hara

    Healthcare workers (HCWs) in South Africa are at high risk of developing active tuberculosis (TB) due to their occupational exposures. This study aimed to systematically quantify and compare the preferred attributes of an active TB case-finding program for HCWs in South Africa. A Best-Worst Scaling choice experiment estimated HCWs' preferences using a random-effects conditional logit model. Latent class analysis (LCA) was used to explore heterogeneity in preferences. "No cost", "the assurance of confidentiality", "no wait" and testing at the occupational health unit at one's hospital were the most preferred attributes. LCA identified a four-class model with consistent differences in preference strength. Sex, occupation, and the time since a previous TB test were statistically significant predictors of class membership. The findings support the strengthening of occupational health units in South Africa to offer free and confidential active TB case-finding programs for HCWs with minimal wait times. There is considerable variation in active TB case-finding preferences amongst HCWs of different gender, occupation, and testing history. Attention to heterogeneity in preferences should optimize screening utilization in target HCW populations.

  13. Worst-case study for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents: Methodology validation of total organic carbon

    International Nuclear Information System (INIS)

    Porto, Luciana Valeria Ferrari Machado

    2015-01-01

    Radiopharmaceuticals are defined as pharmaceutical preparations containing a radionuclide in their composition; they are mostly administered intravenously, so compliance with the principles of Good Manufacturing Practices (GMP) is essential. Cleaning validation is a requirement of current GMP and consists of documented evidence demonstrating that cleaning procedures are able to remove residues to pre-determined acceptance levels, ensuring that no cross-contamination occurs. A simplification of cleaning process validation is accepted that consists in choosing a product, called the 'worst case', to represent the cleaning processes of all equipment in the same production area. One of the steps of cleaning validation is the establishment and validation of the analytical method used to quantify the residue. The aim of this study was to establish the worst case for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents (LRs) for labeling with 99mTc, to evaluate the use of Total Organic Carbon (TOC) content as an indicator of the cleanliness of equipment used in LR manufacture, to validate the Non-Purgeable Organic Carbon (NPOC) method, and to perform recovery tests with the product chosen as the worst case. The choice of the worst-case product was based on the calculation of an index called the 'Worst Case Index' (WCI), using information about drug solubility, difficulty of cleaning the equipment, and occupancy rate of the products in the production line. The product indicated as 'worst case' was the LR MIBI-TEC. The method validation assays were performed using a carbon analyser model TOC-Vwp coupled to an autosampler model ASI-V, both from Shimadzu®, controlled by TOC Control-V software. The direct method was used for NPOC quantification. The parameters evaluated in the method validation were: system suitability, robustness, linearity, detection limit (DL) and quantification limit (QL), precision
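
    The abstract describes the 'Worst Case Index' only qualitatively (solubility, cleaning difficulty, occupancy rate). A plausible weighted-score sketch in Python, with entirely hypothetical weights, scales and a made-up third product, might look like this:

      def worst_case_index(solubility_score, cleaning_difficulty, occupancy_rate,
                           weights=(0.4, 0.4, 0.2)):
          # Hypothetical WCI: higher score = less soluble, harder to clean,
          # more frequently produced. Scales and weights are illustrative,
          # not the validated index from the study.
          w_sol, w_clean, w_occ = weights
          return (w_sol * solubility_score         # 1 (freely soluble) .. 5 (insoluble)
                  + w_clean * cleaning_difficulty  # 1 (easy) .. 5 (hard)
                  + w_occ * occupancy_rate * 5)    # fraction of production slots, rescaled

      # MIBI-TEC and PUL-TEC appear in the study; DTPA-TEC and all numbers are invented.
      products = {"MIBI-TEC": (5, 4, 0.30), "PUL-TEC": (4, 5, 0.25), "DTPA-TEC": (2, 2, 0.10)}
      print(max(products, key=lambda p: worst_case_index(*products[p])))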

  14. TMsim: An Algorithmic Tool for the Parametric and Worst-Case Simulation of Systems with Uncertainties

    Directory of Open Access Journals (Sweden)

    Riccardo Trinchero

    2017-01-01

    This paper presents a general-purpose, algebraic tool—named TMsim—for the combined parametric and worst-case analysis of systems with bounded uncertain parameters. The tool is based on the theory of Taylor models and represents uncertain variables on a bounded domain in terms of a Taylor polynomial plus an interval remainder accounting for truncation and round-off errors. This representation is propagated from inputs to outputs by means of a suitable redefinition of the involved calculations, in both scalar and matrix form. The polynomial provides a parametric approximation of the variable, while the remainder gives a conservative bound of the associated error. The combination of the bound of the polynomial and the interval remainder provides an estimate of the overall (worst-case) bound of the variable. After a preliminary theoretical background, the tool (freely available online) is introduced step by step along with the necessary theoretical notions. As a validation, it is applied to illustrative examples as well as to real-life problems of relevance in electrical engineering applications, specifically a quarter-car model and a continuous-time linear equalizer.
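
    As a rough illustration of the Taylor-model idea (a polynomial part plus a conservative interval remainder), here is a minimal degree-1, one-variable sketch in Python; TMsim itself is a far more complete tool, and nothing below reproduces its actual interface.

      class TM1:
          # Degree-1 Taylor model in one variable x in [-1, 1]:
          # f(x) ~ c0 + c1*x, with an interval remainder [lo, hi] that
          # conservatively bounds everything the polynomial does not capture.
          def __init__(self, c0, c1, lo=0.0, hi=0.0):
              self.c0, self.c1, self.lo, self.hi = c0, c1, lo, hi

          def __add__(self, o):   # remainders add interval-wise
              return TM1(self.c0 + o.c0, self.c1 + o.c1, self.lo + o.lo, self.hi + o.hi)

          def scale(self, a):     # multiplication by an exact scalar
              lo, hi = sorted((a * self.lo, a * self.hi))
              return TM1(a * self.c0, a * self.c1, lo, hi)

          def bound(self):        # worst-case enclosure over x in [-1, 1]
              r = abs(self.c1)
              return (self.c0 - r + self.lo, self.c0 + r + self.hi)

      # y = 2*u + v, with u = 1 + x (no remainder) and v known only to +/- 0.05
      u, v = TM1(1.0, 1.0), TM1(0.5, 0.0, -0.05, 0.05)
      y = u.scale(2.0) + v
      print(y.bound())   # conservative worst-case interval for y: (0.45, 4.55)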

  15. Determination of the worst case for cleaning validation of equipment used in the radiopharmaceutical production of lyophilized reagents for 99mTc labelling

    International Nuclear Information System (INIS)

    Porto, Luciana Valeria Ferrari Machado; Fukumori, Neuza Taeko Okasaki; Matsuda, Margareth Mie Nakamura

    2016-01-01

    Cleaning validation, a requirement of the current Good Manufacturing Practices (cGMP) for Drugs, consists of documented evidence that cleaning procedures are capable of removing residues to predetermined acceptance levels. This report describes a strategy for the selection of the worst case product for the production of lyophilized reagents (LRs) for labeling with 99mTc from the Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/Sao Paulo). The strategy is based on the calculation of a 'worst case index' that incorporates information about drug solubility, cleaning difficulty, and occupancy rate in the production line. It allowed a reduction in the required number of validations considering the possible manufacturing flow of a given product and the subsequent flow, thus facilitating the process by reducing operation time and cost. The products identified as 'worst case' were LRs PUL-TEC and MIBI-TEC. (author)

  16. On robust multi-period pre-commitment and time-consistent mean-variance portfolio optimization

    NARCIS (Netherlands)

    F. Cong (Fei); C.W. Oosterlee (Kees)

    2017-01-01

    We consider robust pre-commitment and time-consistent mean-variance optimal asset allocation strategies, that are required to perform well also in a worst-case scenario regarding the development of the asset price. We show that worst-case scenarios for both strategies can be found by

  17. Determination of the worst case for cleaning validation of equipment used in the radiopharmaceutical production of lyophilized reagents for 99mTc labelling

    Energy Technology Data Exchange (ETDEWEB)

    Porto, Luciana Valeria Ferrari Machado; Fukumori, Neuza Taeko Okasaki; Matsuda, Margareth Mie Nakamura, E-mail: luciana.porto@anvisa.gov.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil). Centro de Radiofarmacia

    2016-01-15

    Cleaning validation, a requirement of the current Good Manufacturing Practices (cGMP) for Drugs, consists of documented evidence that cleaning procedures are capable of removing residues to predetermined acceptance levels. This report describes a strategy for the selection of the worst case product for the production of lyophilized reagents (LRs) for labeling with 99mTc from the Instituto de Pesquisas Energeticas e Nucleares (IPEN-CNEN/Sao Paulo). The strategy is based on the calculation of a 'worst case index' that incorporates information about drug solubility, cleaning difficulty, and occupancy rate in the production line. It allowed a reduction in the required number of validations considering the possible manufacturing flow of a given product and the subsequent flow, thus facilitating the process by reducing operation time and cost. The products identified as 'worst case' were LRs PUL-TEC and MIBI-TEC. (author)

  18. Worst-Case Execution Time Based Optimization of Real-Time Java Programs

    DEFF Research Database (Denmark)

    Hepp, Stefan; Schoeberl, Martin

    2012-01-01

    optimization is method inlining. It is especially important for languages, like Java, where small setter and getter methods are considered good programming style. In this paper we present and explore WCET-driven inlining of Java methods. We use the WCET analysis tool for the Java processor JOP to guide

  19. Simulation of worst-case operating conditions for integrated circuits operating in a total dose environment

    International Nuclear Information System (INIS)

    Bhuva, B.L.

    1987-01-01

    Degradation in circuit performance created by the radiation exposure of integrated circuits is so unique and abnormal that thorough simulation and testing of VLSI circuits is almost impossible, and new ways to estimate operating performance in a radiation environment must be developed. The principal goal of this work was the development of simulation techniques for radiation effects on semiconductor devices. The mixed-mode simulation approach proved to be the most promising. The switch-level approach is used to identify the failure mechanisms and critical subcircuits responsible for operational failure, along with worst-case operating conditions during and after irradiation. For precise simulation of critical subcircuits, SPICE is used. The identification of failure mechanisms enables the circuit designer to improve the circuit's performance and failure-exposure level. Identification of worst-case operating conditions during and after irradiation reduces the complexity of testing VLSI circuits for radiation environments. Failure simulations of test circuits showed significant time savings for the new simulator compared with a conventional one. The savings proved to be dependent on circuit topology; for large circuits, however, the simulation time was orders of magnitude smaller than that of conventional simulators

  20. Worst-Case Cooperative Jamming for Secure Communications in CIoT Networks.

    Science.gov (United States)

    Li, Zhen; Jing, Tao; Ma, Liran; Huo, Yan; Qian, Jin

    2016-03-07

    The Internet of Things (IoT) is a significant branch of the ongoing advances in the Internet and mobile communications. The use of a large number of IoT devices makes the spectrum scarcity problem even more serious. The usable spectrum resources are almost entirely occupied, and thus the increasing radio access demands of IoT devices cannot be met. To tackle this problem, the Cognitive Internet of Things (CIoT) has been proposed. In a CIoT network, secondary users, i.e., sensors and actuators, can access the licensed spectrum bands provided by licensed primary users (such as telephones). Security is a major concern in CIoT networks. However, the traditional encryption methods at upper layers (such as symmetric and asymmetric cryptography) may be compromised in CIoT networks, since these types of networks are heterogeneous. In this paper, we address the security issue in spectrum-leasing-based CIoT networks using physical layer methods. Considering that CIoT networks are cooperative networks, we propose to employ cooperative jamming to achieve secure transmission. In the cooperative jamming scheme, a certain secondary user is employed as the helper to harvest energy transmitted by the source and then uses the harvested energy to generate an artificial noise that jams the eavesdropper without interfering with the legitimate receivers. The goal is to minimize the signal to interference plus noise ratio (SINR) at the eavesdropper subject to the quality of service (QoS) constraints of the primary traffic and the secondary traffic. We formulate the considered minimization problem into a two-stage robust optimization problem based on the worst-case Channel State Information of the Eavesdropper. By using semi-definite programming (SDP), the optimal solutions of the transmit covariance matrices can be obtained. Moreover, in order to build an incentive mechanism for the secondary users, we propose an auction framework based on the cooperative jamming scheme.

  1. Worst-Case Cooperative Jamming for Secure Communications in CIoT Networks

    Directory of Open Access Journals (Sweden)

    Zhen Li

    2016-03-01

    The Internet of Things (IoT) is a significant branch of the ongoing advances in the Internet and mobile communications. Yet, the use of a large number of IoT devices can severely worsen the spectrum scarcity problem. The usable spectrum resources are almost entirely occupied, and thus, the increasing demands of radio access from IoT devices cannot be met. To tackle this problem, the Cognitive Internet of Things (CIoT) has been proposed. In a CIoT network, secondary users, i.e., sensors and actuators, can access the licensed spectrum bands provided by licensed primary users (such as cellular telephones). Security is a major concern in CIoT networks. However, the traditional encryption methods at upper layers (such as symmetric and asymmetric ciphers) may not be suitable for CIoT networks since these networks are composed of low-profile devices. In this paper, we address the security issues in spectrum-leasing-based CIoT networks using physical layer methods. Considering that CIoT networks are cooperative in nature, we propose to employ cooperative jamming to achieve secure transmission. In our proposed cooperative jamming scheme, a certain secondary user is employed as the helper to harvest energy transmitted by the source and then uses the harvested energy to generate an artificial noise that jams the eavesdropper without interfering with the legitimate receivers. The goal is to minimize the Signal to Interference plus Noise Ratio (SINR) at the eavesdropper subject to the Quality of Service (QoS) constraints of the primary traffic and the secondary traffic. We formulate the minimization problem into a two-stage robust optimization problem based on the worst-case Channel State Information of the Eavesdropper (ECSI). By using Semi-Definite Programming (SDP), the optimal solutions of the transmit covariance matrices can be obtained. Moreover, in order to build an incentive mechanism for the secondary users, we propose an auction framework based on the
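
    The two-stage robust formulation above is involved; as a rough, hedged illustration of the SDP machinery such schemes rely on, the Python sketch below (using the cvxpy modeling package) solves a simplified single-stage surrogate: choose the helper's artificial-noise covariance W to maximize the interference at a presumed worst-case eavesdropper channel while capping the noise leaked to the primary and secondary receivers. All channels, the power budget and the leakage caps are invented, and this is not the paper's actual algorithm.

      import numpy as np
      import cvxpy as cp

      def outer(g):
          return g @ g.T

      rng = np.random.default_rng(0)
      n = 3                                    # helper antennas (hypothetical)
      Ge = outer(rng.normal(size=(n, 1)))      # worst-case eavesdropper channel
      Gp = outer(rng.normal(size=(n, 1)))      # primary-receiver channel
      Gs = outer(rng.normal(size=(n, 1)))      # secondary-receiver channel

      W = cp.Variable((n, n), PSD=True)        # artificial-noise covariance
      # More jamming power at the eavesdropper lowers its SINR for a fixed signal.
      objective = cp.Maximize(cp.trace(Ge @ W))
      constraints = [cp.trace(W) <= 1.0,         # harvested-energy power budget
                     cp.trace(Gp @ W) <= 0.05,   # QoS cap: leakage to primary
                     cp.trace(Gs @ W) <= 0.05]   # QoS cap: leakage to secondary
      cp.Problem(objective, constraints).solve()
      print(np.round(W.value, 3))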

  2. Conscious worst case definition for risk assessment, part I: a knowledge mapping approach for defining most critical risk factors in integrative risk management of chemicals and nanomaterials.

    Science.gov (United States)

    Sørensen, Peter B; Thomsen, Marianne; Assmuth, Timo; Grieger, Khara D; Baun, Anders

    2010-08-15

    This paper helps bridge the gap between scientists and other stakeholders in the areas of human and environmental risk management of chemicals and engineered nanomaterials. This connection is needed due to the evolution of stakeholder awareness and scientific progress related to human and environmental health which involves complex methodological demands on risk management. At the same time, the available scientific knowledge is also becoming more scattered across multiple scientific disciplines. Hence, the understanding of potentially risky situations is increasingly multifaceted, which again challenges risk assessors in terms of giving the 'right' relative priority to the multitude of contributing risk factors. A critical issue is therefore to develop procedures that can identify and evaluate worst case risk conditions which may be input to risk level predictions. Therefore, this paper suggests a conceptual modelling procedure that is able to define appropriate worst case conditions in complex risk management. The result of the analysis is an assembly of system models, denoted the Worst Case Definition (WCD) model, to set up and evaluate the conditions of multi-dimensional risk identification and risk quantification. The model can help optimize risk assessment planning by initial screening level analyses and guiding quantitative assessment in relation to knowledge needs for better decision support concerning environmental and human health protection or risk reduction. The WCD model facilitates the evaluation of fundamental uncertainty using knowledge mapping principles and techniques in a way that can improve a complete uncertainty analysis. Ultimately, the WCD is applicable for describing risk contributing factors in relation to many different types of risk management problems since it transparently and effectively handles assumptions and definitions and allows the integration of different forms of knowledge, thereby supporting the inclusion of multifaceted risk

  3. Stochastic Robust Mathematical Programming Model for Power System Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay

    2016-01-01

    This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.

  4. Real-Time Optimization under Uncertainty Applied to a Gas Lifted Well Network

    Directory of Open Access Journals (Sweden)

    Dinesh Krishnamoorthy

    2016-12-01

    In this work, we consider the problem of daily production optimization in the upstream oil and gas domain. The objective is to find the optimal decision variables that utilize the production systems efficiently and maximize the revenue. Typically, mathematical models are used to find the optimal operation in such processes. However, such prediction models are subject to uncertainty that has often been overlooked, and an optimal solution based on nominal models can thus be useless and may lead to infeasibility when implemented. To ensure robust feasibility, worst-case optimization may be employed; however, the solution may be rather conservative. Alternatively, we propose the use of scenario-based optimization to reduce the conservativeness. The results of the nominal, worst-case and scenario-based optimization are compared and discussed.
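
    As a toy numeric illustration of the trade-off described here (nominal vs worst-case vs scenario-based decisions), the Python sketch below optimizes a single gas-lift rate under an uncertain productivity parameter; the response model and all numbers are invented.

      import numpy as np

      def revenue(u, theta):
          # Toy gas-lift response: concave production minus lift-gas cost.
          return theta * np.sqrt(u) - 0.8 * u

      u_grid = np.linspace(0.01, 4.0, 400)          # candidate lift-gas rates
      scenarios = np.array([0.8, 1.0, 1.2])         # uncertain productivity theta

      nominal = u_grid[np.argmax(revenue(u_grid, 1.0))]
      worst   = u_grid[np.argmax(np.min([revenue(u_grid, t) for t in scenarios], axis=0))]
      expect  = u_grid[np.argmax(np.mean([revenue(u_grid, t) for t in scenarios], axis=0))]
      print(f"nominal u*={nominal:.2f}, worst-case u*={worst:.2f}, scenario u*={expect:.2f}")

    In this toy model the worst-case decision collapses to the most pessimistic scenario, illustrating the conservativeness the authors set out to reduce.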

  5. Reduced computational cost in the calculation of worst case response time for real time systems

    OpenAIRE

    Urriza, José M.; Schorb, Lucas; Orozco, Javier D.; Cayssials, Ricardo

    2009-01-01

    Modern Real Time Operating Systems require reducing computational costs even though microprocessors become more powerful each day. It is usual that Real Time Operating Systems for embedded systems have advanced features to administer the resources of the applications that they support. In order to guarantee either the schedulability of the system or the schedulability of a new task in a dynamic Real Time System, it is necessary to know the Worst Case Response Time of the Real Time tasks ...
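
    Analyses of this kind build on the classic fixed-priority response-time recurrence R_i = C_i + sum over higher-priority tasks j of ceil(R_i / T_j) * C_j, iterated to a fixed point. The Python sketch below is that textbook computation, not the cost-reduction technique of the cited work; the task set is invented.

      import math

      def wcrt(tasks, i):
          # Worst-case response time of task i; tasks are sorted by priority
          # (highest first) and given as (C, T) = (WCET, period = deadline).
          C_i, T_i = tasks[i]
          R = C_i
          while True:
              interference = sum(math.ceil(R / T_j) * C_j for C_j, T_j in tasks[:i])
              R_next = C_i + interference
              if R_next == R:
                  return R            # fixed point reached: WCRT found
              if R_next > T_i:
                  return None         # deadline missed, task set unschedulable
              R = R_next

      tasks = [(1, 4), (2, 6), (3, 13)]             # (C, T) in priority order
      print([wcrt(tasks, i) for i in range(len(tasks))])   # -> [1, 3, 10]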

  6. Concepts of combinatorial optimization

    CERN Document Server

    Paschos, Vangelis Th

    2014-01-01

    Combinatorial optimization is a multidisciplinary scientific area, lying at the interface of three major scientific domains: mathematics, theoretical computer science and management. The three volumes of the Combinatorial Optimization series aim to cover a wide range of topics in this area. These topics deal with fundamental notions and approaches as well as with several classical applications of combinatorial optimization. Concepts of Combinatorial Optimization is divided into three parts: - On the complexity of combinatorial optimization problems, presenting basics about worst-case and randomi

  7. Robust Design Optimization of an Aerospace Vehicle Propulsion System

    Directory of Open Access Journals (Sweden)

    Muhammad Aamir Raza

    2011-01-01

    This paper proposes a robust design optimization methodology for an aerospace vehicle propulsion system under design uncertainties. The approach consists of 3D geometric design coupled with complex internal ballistics, hybrid optimization, worst-case deviation, and an efficient statistical approach. The uncertainties are propagated through worst-case deviation using first-order orthogonal design matrices. The robustness assessment is measured using the framework of the mean-variance and percentile difference approach. A parametric sensitivity analysis is carried out to analyze the effects of design variable variation on performance parameters. A hybrid simulated annealing and pattern search approach is used as the optimizer. The results show that the objective of optimizing the mean performance and minimizing the variation of performance parameters, in terms of thrust ratio and total impulse, could be achieved while adhering to the system constraints.

  8. Housing for the "Worst of the Worst" Inmates: Public Support for Supermax Prisons

    Science.gov (United States)

    Mears, Daniel P.; Mancini, Christina; Beaver, Kevin M.; Gertz, Marc

    2013-01-01

    Despite concerns about whether supermaximum security prisons violate human rights or prove effective, these facilities have proliferated in America over the past 25 years. This punishment--aimed at the "worst of the worst" inmates and involving 23-hr-per-day single-cell confinement with few privileges or services--has emerged despite little…

  9. On the relation between flexibility analysis and robust optimization for linear systems

    KAUST Repository

    Zhang, Qi; Grossmann, Ignacio E.; Lima, Ricardo

    2016-01-01

    Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee

  10. A Closed-Form Solution for Robust Portfolio Selection with Worst-Case CVaR Risk Measure

    Directory of Open Access Journals (Sweden)

    Le Tang

    2014-01-01

    Under uncertainty in the probability distribution, we establish the worst-case CVaR (WCCVaR) risk measure and discuss a robust portfolio selection problem with a WCCVaR constraint. An explicit solution, instead of a numerical solution, is found, and two-fund separation is proved. The comparison of the efficient frontier with that of the mean-variance model is discussed, and finally we give a numerical comparison with the VaR model and an equally weighted strategy. The numerical findings indicate that the proposed WCCVaR model has relatively smaller risk, greater return, and relatively higher accumulated wealth than the VaR model and the equally weighted strategy.
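
    For intuition, a worst-case CVaR over a small family of candidate scenario distributions can be evaluated directly from the Rockafellar-Uryasev representation; the closed form derived in the paper is not reproduced here, and the scenario data below are invented.

      import numpy as np

      def cvar(losses, probs, alpha=0.95):
          # CVaR_alpha via Rockafellar-Uryasev: min_t t + E[(L - t)+]/(1 - alpha),
          # evaluated here on a dense grid of candidate thresholds t.
          ts = np.linspace(losses.min(), losses.max(), 2001)
          excess = np.maximum(losses[None, :] - ts[:, None], 0.0)   # (t, scenario)
          return (ts + (excess @ probs) / (1.0 - alpha)).min()

      losses = np.array([-0.02, 0.01, 0.05, 0.12])        # portfolio loss scenarios
      candidate_probs = [np.array([0.40, 0.30, 0.20, 0.10]),   # ambiguity set of
                         np.array([0.25, 0.25, 0.25, 0.25]),   # scenario probabilities
                         np.array([0.10, 0.20, 0.30, 0.40])]
      print(max(cvar(losses, p) for p in candidate_probs))  # worst-case CVaR (WCCVaR)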

  11. SU-E-T-452: Impact of Respiratory Motion On Robustly-Optimized Intensity-Modulated Proton Therapy to Treat Lung Cancers

    International Nuclear Information System (INIS)

    Liu, W; Schild, S; Bues, M; Liao, Z; Sahoo, N; Park, P; Li, H; Li, Y; Li, X; Shen, J; Anand, A; Dong, L; Zhu, X; Mohan, R

    2014-01-01

    Purpose: We compared conventionally optimized intensity-modulated proton therapy (IMPT) treatment plans against worst-case robustly optimized treatment plans for lung cancer. The comparison of the two IMPT optimization strategies focused on the resulting plans' ability to retain dose objectives under the influence of patient set-up, inherent proton range uncertainty, and dose perturbation caused by respiratory motion. Methods: For each of the 9 lung cancer cases, two treatment plans were created accounting for treatment uncertainties in two different ways: the first used the conventional method, delivery of the prescribed dose to the planning target volume (PTV) that is geometrically expanded from the internal target volume (ITV); the second employed the worst-case robust optimization scheme that addresses set-up and range uncertainties through beamlet optimization. The plan optimality and plan robustness were calculated and compared. Furthermore, the effects on dose distributions of changes in patient anatomy due to respiratory motion were investigated for both strategies by comparing the corresponding plan evaluation metrics at the end-inspiration and end-expiration phases and the absolute differences between these phases. The mean plan evaluation metrics of the two groups were compared using two-sided paired t-tests. Results: Without respiratory motion considered, we affirmed that worst-case robust optimization is superior to PTV-based conventional optimization in terms of plan robustness and optimality. With respiratory motion considered, robust optimization still leads to dose distributions more robust to respiratory motion for targets and comparable or even better plan optimality [D95% ITV: 96.6% versus 96.1% (p=0.26), D5% - D95% ITV: 10.0% versus 12.3% (p=0.082), D1% spinal cord: 31.8% versus 36.5% (p=0.035)]. Conclusion: Worst-case robust optimization led to superior solutions for lung IMPT. Despite the fact that robust optimization did not explicitly

  12. Worst Asymmetrical Short-Circuit Current

    DEFF Research Database (Denmark)

    Arana Aristi, Iván; Holmstrøm, O; Grastrup, L

    2010-01-01

    In a typical power plant, the production scenario and the short-circuit time were found for the worst asymmetrical short-circuit current. Then, a sensitivity analysis on the missing generator values was performed in order to minimize the uncertainty of the results. Afterward the worst asymmetrical

  13. Economically optimized electricity trade modeling. Iran-Turkey case

    International Nuclear Information System (INIS)

    Shakouri G, H.; Eghlimi, M.; Manzoor, D.

    2009-01-01

    The advantages of power trade between countries are now well recognized. Daily differences in the peak-load times of neighboring countries commonly occur because of differences in longitude; seasonal differences are caused by differences in latitude, which lead to different climates. Consequently, the differing load curves allow a production schedule that reduces blackouts and investment in power generation, given a properly planned trade between countries in a region. This paper first describes the methodology and framework for power trade and then presents the results of an optimal power trade model between Iran and Turkey, which shows a potential benefit to both countries from peak shaving. In the worst-case design, the results indicate an optimal export of about 1500 MW of electricity from Iran to Turkey at Turkish peak times, as well as an import of 447 MW from Turkey at Iranian peak times. In addition, results from a long-run model show even greater potential for power export from Iran to Turkey, providing a guideline for a future energy conservation strategy in both countries. (author)

  14. Uncertain input data problems and the worst scenario method

    Czech Academy of Sciences Publication Activity Database

    Hlaváček, Ivan

    2007-01-01

    Roč. 52, č. 3 (2007), s. 187-196 ISSN 0862-7940 R&D Projects: GA ČR GA201/04/1503 Institutional research plan: CEZ:AV0Z10190503 Keywords : uncertain input data * the worst-case approach * fuzzy sets Subject RIV: BA - General Mathematics

  15. Probabilistic risk assessment for the Los Alamos Meson Physics Facility worst-case design-basis accident

    International Nuclear Information System (INIS)

    Sharirli, M.; Butner, J.M.; Rand, J.L.; Macek, R.J.; McKinney, S.J.; Roush, M.L.

    1992-01-01

    This paper presents results from a Los Alamos National Laboratory Engineering and Safety Analysis Group assessment of the worst-case design-basis accident associated with the Clinton P. Anderson Meson Physics Facility (LAMPF)/Weapons Neutron Research (WNR) Facility. The primary goal of the analysis was to quantify the accident sequences that result in personnel radiation exposure in the WNR Experimental Hall following the worst-case design-basis accident, a complete spill of the LAMPF accelerator 1L beam. This study also provides information regarding the roles of hardware systems and operators in these sequences, and insights into the areas where improvements can increase facility-operation safety. Results also include confidence ranges that incorporate the combined effects of uncertainties in probability estimates, and importance measures that determine how variations in individual events affect accident sequence frequencies

  16. Improved worst-case and likely accident definition in complex facilities for 40 CFR 68 compliance

    International Nuclear Information System (INIS)

    O'Kula, K.R., Taylor, Robert P., Jr; Hang, P.

    1997-04-01

    Many DOE facilities potentially subject to compliance with offsite consequence criteria under the 40 CFR 68 Risk Management Program house significant inventories of toxic and flammable chemicals. The accident progression event tree methodology is suggested as a useful technical basis to define Worst-Case and Alternative Release Scenarios in facilities performing operations beyond simple storage and/or having several barriers between the chemical hazard and the environment. For multiple chemical release scenarios, a chemical mixture methodology should be applied to conservatively define concentration isopleths. In some instances, the region requiring emergency response planning is larger under this approach than if chemicals are treated individually

  17. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL; Djouadi, Seddik M [ORNL; Olama, Mohammed M [ORNL

    2013-01-01

    In this paper, the optimal model (structure) selection and input design that minimize the worst-case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst-case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, such as in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  18. The lionfish Pterois sp. invasion: Has the worst-case scenario come to pass?

    Science.gov (United States)

    Côté, I M; Smith, N S

    2018-03-01

    This review revisits the traits thought to have contributed to the success of Indo-Pacific lionfish Pterois sp. as an invader in the western Atlantic Ocean and the worst-case scenario about their potential ecological effects in light of the more than 150 studies conducted in the past 5 years. Fast somatic growth, resistance to parasites, effective anti-predator defences and an ability to circumvent predator recognition mechanisms by prey have probably contributed to rapid population increases of lionfish in the invaded range. However, evidence that lionfish are strong competitors is still ambiguous, in part because demonstrating competition is challenging. Geographic spread has likely been facilitated by the remarkable capacity of lionfish for prolonged fasting in combination with other broad physiological tolerances. Lionfish have had a large detrimental effect on native reef-fish populations in the northern part of the invaded range, but similar effects have yet to be seen in the southern Caribbean. Most other envisaged direct and indirect consequences of lionfish predation and competition, even those that might have been expected to occur rapidly, such as shifts in benthic composition, have yet to be realized. Lionfish populations in some of the first areas invaded have started to decline, perhaps as a result of resource depletion or ongoing fishing and culling, so there is hope that these areas have already experienced the worst of the invasion. In closing, we place lionfish in a broader context and argue that it can serve as a new model to test some fundamental questions in invasion ecology. © 2018 The Fisheries Society of the British Isles.

  19. Optimal angle reduction - a behavioral approach to linear system appromixation

    NARCIS (Netherlands)

    Roorda, B.; Weiland, S.

    2001-01-01

    We investigate the problem of optimal state reduction under minimization of the angle between system behaviors. The angle is defined in a worst-case sense, as the largest angle that can occur between a system trajectory and its optimal approximation in the reduced-order model. This problem is

  20. Radiological and toxicological consequences of a worst-case spray leak related to project W-320

    International Nuclear Information System (INIS)

    Himes, D.A.

    1997-01-01

    An analysis was performed of radiological and toxicological consequences of a worst-case leak from a 2-inch diameter flush connection in a pit over tank AY-102. The unmitigated (without controls) flush line spray leak assumes that the blank connector and the removable plug in the pit cover block have been removed so that the maximum system flow is directed out of the open 2-inch line vertically into the air above the pit. The mitigated (with controls) spray scenario assumes the removable plug is in place and the flow is directed against the underside of the pit cover block. The unmitigated scenario exceeded both onsite and offsite risk guidelines for an anticipated accident. For the mitigated case all consequences are well within guidelines and so no additional controls are needed beyond the existing control of having all pit covers and removable plugs in place during any waste transfer

  1. Efficient reanalysis techniques for robust topology optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Sigmund, Ole; Lazarov, Boyan Stefanov

    2012-01-01

    efficient robust topology optimization procedures based on reanalysis techniques. The approach is demonstrated on two compliant mechanism design problems where robust design is achieved by employing either a worst case formulation or a stochastic formulation. It is shown that the time spent on finite...

  2. Assessing oral bioaccessibility of trace elements in soils under worst-case scenarios by automated in-line dynamic extraction as a front end to inductively coupled plasma atomic emission spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Rosende, María [FI-TRACE group, Department of Chemistry, University of the Balearic Islands, Carretera de Valldemossa, km 7.5, Palma de Mallorca, Illes Balears E-07122 (Spain); Magalhães, Luis M.; Segundo, Marcela A. [REQUIMTE, Department of Chemistry, Faculty of Pharmacy, University of Porto, R. de Jorge Viterbo Ferreira, 228, Porto 4050-313 (Portugal); Miró, Manuel, E-mail: manuel.miro@uib.es [FI-TRACE group, Department of Chemistry, University of the Balearic Islands, Carretera de Valldemossa, km 7.5, Palma de Mallorca, Illes Balears E-07122 (Spain)

    2014-09-09

    Highlights: • Automatic oral bioaccessibility tests of trace metals under worst-case scenarios. • Use of intricate and realistic digestive fluids (UBM method). • Analysis of large amounts of soils (≥400 mg) in a flow-based configuration. • Smart interface to inductively coupled plasma atomic emission spectrometry. • Comparison of distinct flow systems mimicking physiological conditions. - Abstract: A novel biomimetic extraction procedure that allows for the in-line handling of ≥400 mg solid substrates is herein proposed for automatic ascertainment of trace element (TE) bioaccessibility in soils under worst-case conditions as per recommendations of ISO norms. A unified bioaccessibility/BARGE method (UBM)-like physiologically based extraction test is evaluated for the first time in a dynamic format for accurate assessment of the in-vitro bioaccessibility of Cr, Cu, Ni, Pb and Zn in forest and residential-garden soils by on-line coupling of a hybrid flow set-up to inductively coupled plasma atomic emission spectrometry. Three biologically relevant operational extraction modes mimicking (i) gastric juice extraction alone; (ii) a saliva and gastric juice composite in a unidirectional flow extraction format and (iii) a saliva and gastric juice composite in a recirculation mode were thoroughly investigated. The extraction profiles of the three configurations using digestive fluids were proven to fit a first-order reaction kinetic model for estimating the maximum TE bioaccessibility, that is, the actual worst-case scenario in human risk assessment protocols. A full factorial design, in which the sample amount (400–800 mg), the extractant flow rate (0.5–1.5 mL min⁻¹) and the extraction temperature (27–37 °C) were selected as variables, was used for the multivariate optimization studies in order to obtain the maximum TE extractability. Two soils of varied physicochemical properties were analysed and no significant differences were found at the 0.05 significance level

  3. Assessing oral bioaccessibility of trace elements in soils under worst-case scenarios by automated in-line dynamic extraction as a front end to inductively coupled plasma atomic emission spectrometry

    International Nuclear Information System (INIS)

    Rosende, María; Magalhães, Luis M.; Segundo, Marcela A.; Miró, Manuel

    2014-01-01

    Highlights: • Automatic oral bioaccessibility tests of trace metals under worst-case scenarios. • Use of intricate and realistic digestive fluids (UBM method). • Analysis of large amounts of soils (≥400 mg) in a flow-based configuration. • Smart interface to inductively coupled plasma atomic emission spectrometry. • Comparison of distinct flow systems mimicking physiological conditions. - Abstract: A novel biomimetic extraction procedure that allows for the in-line handling of ≥400 mg solid substrates is herein proposed for automatic ascertainment of trace element (TE) bioaccessibility in soils under worst-case conditions as per recommendations of ISO norms. A unified bioaccessibility/BARGE method (UBM)-like physiologically based extraction test is evaluated for the first time in a dynamic format for accurate assessment of the in-vitro bioaccessibility of Cr, Cu, Ni, Pb and Zn in forest and residential-garden soils by on-line coupling of a hybrid flow set-up to inductively coupled plasma atomic emission spectrometry. Three biologically relevant operational extraction modes mimicking (i) gastric juice extraction alone; (ii) a saliva and gastric juice composite in a unidirectional flow extraction format and (iii) a saliva and gastric juice composite in a recirculation mode were thoroughly investigated. The extraction profiles of the three configurations using digestive fluids were proven to fit a first-order reaction kinetic model for estimating the maximum TE bioaccessibility, that is, the actual worst-case scenario in human risk assessment protocols. A full factorial design, in which the sample amount (400–800 mg), the extractant flow rate (0.5–1.5 mL min⁻¹) and the extraction temperature (27–37 °C) were selected as variables, was used for the multivariate optimization studies in order to obtain the maximum TE extractability. Two soils of varied physicochemical properties were analysed and no significant differences were found at the 0.05 significance level
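
    The first-order kinetics mentioned here correspond to the standard extraction model C(t) = C_max(1 - e^(-kt)), whose asymptote estimates the maximum (worst-case) bioaccessible fraction; a minimal Python fitting sketch with invented leachate data:

      import numpy as np
      from scipy.optimize import curve_fit

      def first_order(t, c_max, k):
          # First-order extraction kinetics; c_max estimates the maximum
          # (worst-case) bioaccessible pool as t grows large.
          return c_max * (1.0 - np.exp(-k * t))

      t = np.array([2, 5, 10, 20, 40, 60.0])        # min (hypothetical sampling times)
      c = np.array([0.8, 1.7, 2.6, 3.4, 3.8, 3.9])  # mg/kg leached (invented data)
      (c_max, k), _ = curve_fit(first_order, t, c, p0=(4.0, 0.1))
      print(f"worst-case bioaccessible pool ~ {c_max:.2f} mg/kg, k = {k:.3f} 1/min")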

  4. Surface disinfection tests with Salmonella and a putative indicator bacterium, mimicking worst-case scenarios in poultry houses

    DEFF Research Database (Denmark)

    Gradel, K.O.; Sayers, A.R.; Davies, R.H.

    2004-01-01

    Surface disinfection studies mimicking worst-case scenarios in badly cleaned poultry houses were made with 3 bacterial isolates (Salmonella enteritidis, Salmonella senftenberg, and Enterococcus faecalis) and three 1% disinfectant solutions: formaldehyde (F; 24.5% vol/vol), glutaraldehyde... hard water, except when feed chain links with fats were disinfected at 30°C before and after disinfection, for which the peroxygen compound seemed more effective. Enterococcus faecalis was equally or less susceptible than S. enteritidis and S. senftenberg, indicating its suitability... as an indicator bacterium. For the peroxygen compound, S. senftenberg was more susceptible than S. enteritidis in spite of higher minimum inhibitory concentrations to this disinfectant for the former....

  5. Scoring best-worst data in unbalanced many-item designs, with applications to crowdsourcing semantic judgments.

    Science.gov (United States)

    Hollis, Geoff

    2018-04-01

    Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property of best-worst scaling makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of different scoring algorithms have been devised in the previous literature on best-worst scaling. However, due to problems of computational efficiency, these scoring algorithms cannot be applied efficiently to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations related to noise, the underlying distribution of true values, and trial design are identified that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
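
    The simplest scheme in this family is a best-minus-worst count normalized by the number of appearances; the iterative algorithms the article discusses refine this idea. A count-based Python sketch (trial format and items invented):

      from collections import defaultdict

      def best_worst_scores(trials):
          # Each trial is (items_shown, best_choice, worst_choice). The naive
          # score is (#best - #worst) / #appearances; fast, but noisier than
          # the iterative many-item algorithms discussed in the article.
          best, worst, seen = defaultdict(int), defaultdict(int), defaultdict(int)
          for items, b, w in trials:
              for it in items:
                  seen[it] += 1
              best[b] += 1
              worst[w] += 1
          return {it: (best[it] - worst[it]) / seen[it] for it in seen}

      trials = [(("dog", "idea", "stone", "joy"), "joy", "stone"),
                (("dog", "war", "stone", "tree"), "dog", "war"),
                (("idea", "war", "tree", "joy"), "joy", "war")]
      print(best_worst_scores(trials))   # valence-style item scores in [-1, 1]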

  6. Worst-Case Efficient Range Searching

    DEFF Research Database (Denmark)

    Arge, Lars Allan

    2009-01-01

    accesses - or I/Os - as possible). We first quickly discuss the well-known and optimal structure for the one-dimensional version of the problem, the B-tree [10, 12], along with its variants weight-balanced B-trees [9], multi-version (or persistent) B-trees [6, 11, 13, 22] and buffer-trees [4]. Then we...

  7. "Best Case/Worst Case": Qualitative Evaluation of a Novel Communication Tool for Difficult in-the-Moment Surgical Decisions.

    Science.gov (United States)

    Kruser, Jacqueline M; Nabozny, Michael J; Steffens, Nicole M; Brasel, Karen J; Campbell, Toby C; Gaines, Martha E; Schwarze, Margaret L

    2015-09-01

    To evaluate a communication tool called "Best Case/Worst Case" (BC/WC) based on an established conceptual model of shared decision-making. Focus group study. Older adults (four focus groups) and surgeons (two focus groups) using modified questions from the Decision Aid Acceptability Scale and the Decisional Conflict Scale to evaluate and revise the communication tool. Individuals aged 60 and older recruited from senior centers (n = 37) and surgeons from academic and private practices in Wisconsin (n = 17). Qualitative content analysis was used to explore themes and concepts that focus group respondents identified. Seniors and surgeons praised the tool for the unambiguous illustration of multiple treatment options and the clarity gained from presentation of an array of treatment outcomes. Participants noted that the tool provides an opportunity for in-the-moment, preference-based deliberation about options and a platform for further discussion with other clinicians and loved ones. Older adults worried that the format of the tool was not universally accessible for people with different educational backgrounds, and surgeons had concerns that the tool was vulnerable to physicians' subjective biases. The BC/WC tool is a novel decision support intervention that may help facilitate difficult decision-making for older adults and their physicians when considering invasive, acute medical treatments such as surgery. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.

  8. Optimal purely functional priority queues

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Okasaki, Chris

    1996-01-01

    Brodal recently introduced the first implementation of imperative priority queues to support findMin, insert and meld in O(1) worst-case time, and deleteMin in O(log n) worst-case time. These bounds are asymptotically optimal among all comparison-based priority queues. In this paper, we adapt...... Brodal's data structure to a purely functional setting. In doing so, we both simplify the data structure and clarify its relationship to the binomial queues of Vuillemin, which support all four operations in O(log n) time. Specifically, we derive our implementation from binomial queues in three steps......: first, we reduce the running time of insert to O(1) by eliminating the possibility of cascading links; second, we reduce the running time of findMin to O(1) by adding a global root to hold the minimum element; and finally, we reduce the running time of meld to O(1) by allowing priority queues to contain...
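
    For context, the binomial-queue baseline that the adaptation starts from can be written persistently (no mutation) in a few lines of Python; this is the O(log n) starting point, not Brodal and Okasaki's O(1) data structure.

      # Persistent binomial queue: a queue is a tuple of binomial trees in
      # increasing rank order; a tree is (rank, root_value, children), with
      # children a tuple of trees of ranks rank-1 .. 0 (decreasing).

      def link(t1, t2):                     # combine two trees of equal rank
          r, x, xs = t1
          _, y, ys = t2
          return (r + 1, x, (t2,) + xs) if x <= y else (r + 1, y, (t1,) + ys)

      def ins_tree(t, ts):                  # carry-propagating tree insertion
          if not ts or t[0] < ts[0][0]:
              return (t,) + ts
          return ins_tree(link(t, ts[0]), ts[1:])

      def insert(x, q):
          return ins_tree((0, x, ()), q)

      def meld(q1, q2):
          if not q1: return q2
          if not q2: return q1
          if q1[0][0] < q2[0][0]: return (q1[0],) + meld(q1[1:], q2)
          if q2[0][0] < q1[0][0]: return (q2[0],) + meld(q1, q2[1:])
          return ins_tree(link(q1[0], q2[0]), meld(q1[1:], q2[1:]))

      def find_min(q):
          return min(t[1] for t in q)

      def delete_min(q):
          t = min(q, key=lambda tr: tr[1])
          rest = tuple(u for u in q if u is not t)
          return meld(tuple(reversed(t[2])), rest)   # children, rank-ascending

      q = ()
      for v in (5, 3, 8, 1):
          q = insert(v, q)
      print(find_min(q), find_min(delete_min(q)))    # -> 1 3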

  9. Prediction of flow recirculation in a blanket assembly under worst-case natural-convection conditions

    International Nuclear Information System (INIS)

    Khan, E.U.; Rector, D.R.

    1982-01-01

    Reactor fuel and blanket assemblies within a Liquid Metal Fast Breeder Reactor (LMFBR) can be subjected to severe radial heat flux gradients. At low-flow conditions, with power-to-flow ratios of nearly the same magnitude as design conditions, buoyancy forces cause flow redistribution to the side of a bundle with the higher heat generation rate. Recirculation of fluid within a rod bundle can occur during a natural convection transient because of the combined effect of flow coastdown and buoyancy-induced redistribution. An important concern is whether recirculation leads to high coolant temperatures. For this reason, the COBRA-WC code was developed with the capability of modeling recirculating flows. Experiments have been conducted in a 2 x 6 rod bundle for flow and power transients to study recirculation in the mixed-convection (forced cooled) and natural-convection regimes. The data base developed was used to validate the recirculation module in the COBRA-WC code. COBRA-WC code calculations were made to predict flow and temperature distributions in a typical LMFBR blanket assembly for the worst-case, natural-circulation transient

  10. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries

    International Nuclear Information System (INIS)

    Mahfouz, Z.; Verloock, L.; Joseph, W.; Tanghe, E.; Gati, A.; Wiart, J.; Lautru, D.; Hanna, V. F.; Martens, L.

    2013-01-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems with high-speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those for weekend days. (authors)

  11. Comparison of temporal realistic telecommunication base station exposure with worst-case estimation in two countries.

    Science.gov (United States)

    Mahfouz, Zaher; Verloock, Leen; Joseph, Wout; Tanghe, Emmeric; Gati, Azeddine; Wiart, Joe; Lautru, David; Hanna, Victor Fouad; Martens, Luc

    2013-12-01

    The influence of temporal daily exposure to global system for mobile communications (GSM) and universal mobile telecommunications systems with high-speed downlink packet access (UMTS-HSDPA) is investigated using spectrum analyser measurements in two countries, France and Belgium. Temporal variations and traffic distributions are investigated. Three different methods to estimate maximal electric-field exposure are compared. The maximal realistic (99 %) and the maximal theoretical extrapolation factors used to extrapolate the measured broadcast control channel (BCCH) for GSM and the common pilot channel (CPICH) for UMTS are presented and compared for the first time in the two countries. Similar conclusions are found in the two countries for both urban and rural areas: worst-case exposure assessment overestimates realistic maximal exposure by up to 5.7 dB for the considered example. In France, the values are the highest because of the higher population density. The results for the maximal realistic extrapolation factor on weekdays are similar to those for weekend days.
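
    Schematically, the worst-case (power-domain) extrapolation assumes every traffic channel transmits at full power all the time, while the realistic factor replaces that by a high percentile of the observed load. The numpy sketch below uses invented traffic statistics, so the resulting figure is illustrative only and does not reproduce the studies' measured factors.

      import numpy as np

      rng = np.random.default_rng(1)
      n_ch = 8                                 # max simultaneous traffic channels (assumed)
      # invented daily traffic: fraction of the n_ch channels active per minute
      load = rng.beta(2, 5, size=24 * 60)

      worst_case_factor = n_ch                 # power-domain: all channels always on
      realistic_factor = 1 + (n_ch - 1) * np.quantile(load, 0.99)   # 99th percentile
      print(f"overestimation: {10 * np.log10(worst_case_factor / realistic_factor):.1f} dB")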

  12. Non-convex multi-objective optimization

    CERN Document Server

    Pardalos, Panos M; Žilinskas, Julius

    2017-01-01

    Recent results on non-convex multi-objective optimization problems and methods are presented in this book, with particular attention to expensive black-box objective functions. Multi-objective optimization methods help designers, engineers, and researchers make decisions on appropriate trade-offs between various conflicting goals. A variety of deterministic and stochastic multi-objective optimization methods are developed in this book. Beginning with basic concepts and a review of non-convex single-objective optimization problems, this book moves on to cover multi-objective branch and bound algorithms, worst-case optimal algorithms (for Lipschitz functions and bi-objective problems), statistical models based algorithms, and the probabilistic branch and bound approach. Detailed descriptions of new algorithms for non-convex multi-objective optimization, their theoretical substantiation, and examples for practical applications to the cell formation problem in manufacturing engineering, the process design in...

  13. A simple method to identify radiation and annealing biases that lead to worst-case CMOS static RAM postirradiation response

    International Nuclear Information System (INIS)

    Fleetwood, D.M.; Dressendorfer, P.V.

    1987-01-01

    The authors illustrate a simple method to identify bias conditions that lead to worst-case postirradiation speed and timing response for SRAMs. Switching cell states between radiation and anneal should lead to maximum speed and timing degradation for many hardened designs and technologies. The greatest SRAM cell imbalance is also established by these radiation and annealing conditions for the hardened and commercial parts that we have examined. These results should provide insight into the behavior of SRAMs during and after irradiation. The results should also be useful for establishing guidelines for integrated-circuit functionality testing, and for SEU and dose-rate upset testing, after total-dose irradiation

  14. A new method for evaluating worst- and best-case (WBC) economic consequences of technological development

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    1996-01-01

    This paper addresses the problem of evaluating the economic worst- and best-case (WBC) consequences of technological development in industrial companies, taking into account uncertainties and the lack of exact cost and market information. In the theoretical part of the paper, the mathematical concepts
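
    In the spirit of WBC evaluation, uncertain cost and market figures can be carried as intervals, with the worst and best case read off the resulting bounds; a minimal Python sketch with invented figures:

      class Interval:
          # Closed interval [lo, hi]; enough arithmetic for a WBC cash-flow sum.
          def __init__(self, lo, hi): self.lo, self.hi = lo, hi
          def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
          def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
          def __mul__(self, o):
              ps = [a * b for a in (self.lo, self.hi) for b in (o.lo, o.hi)]
              return Interval(min(ps), max(ps))

      # invented figures: uncertain unit price, sales volume and development cost
      price, volume, dev_cost = Interval(9, 11), Interval(80, 120), Interval(400, 600)
      profit = price * volume - dev_cost
      print(f"worst case {profit.lo:.0f}, best case {profit.hi:.0f}")   # -> 120, 920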

  15. Patterning 45nm flash/DRAM contact hole mask with hyper-NA immersion lithography and optimized illumination

    Science.gov (United States)

    Chen, Ting; Van Den Broeke, Doug; Hsu, Stephen; Park, Sangbong; Berger, Gabriel; Coskun, Tamer; de Vocht, Joep; Corcoran, Noel; Chen, Fung; van der Heijden, Eddy; Finders, Jo; Engelen, Andre; Socha, Robert

    2006-03-01

    Patterning a contact-hole mask for Flash/DRAM is probably one of the most challenging tasks for design rules below 50nm due to the extreme low-k1 printing conditions common in memory designs. When combined with optical proximity corrections (OPC) to the mask, using optimized illumination has become a viable part of the production lithography process for the 65nm node at low-k1 pitch design rules. Here we use a 6% attPSM mask for simulation and actual exposure on the ASML XT 1400i (NA=0.93) and 1700i (NA=1.2), respectively. We begin with illumination source optimization using a full vector high-NA calculation (VHNA) with the production resist stack; all manufacturability requirements for the source-shaping diffractive optical element (DOE) are accounted for during the source optimization. Using the optimized source, IML™ technology-based scattering bar (SB) placement together with model-based OPC (MOPC) is applied to the original contact-hole design. In-focus printing and process latitude simulations are used to gauge the performance and manufacturability of the final optimized process, which includes the optimized mask, optimized source and required imaging settings. Our results show that for the 130nm-pitch Flash contact-hole patterns on the ASML XT 1400i at NA=0.93, both an optimized illumination source and immersion lithography are necessary in order to achieve manufacturability. The worst-case depth of focus (DOF) before SB placement and MOPC is 100-130nm at 6% EL without a common process window (PW); with MOPC, the worst-case DOF is >200nm at 6% EL. The latter is in excellent agreement with the wafer results from the ASML XT 1400i, and the predicted CDs match the measured values at isolated, medium- and dense-pitch contact holes to within 5nm. For the 120nm-pitch Flash contact patterns, the ASML XT 1700i at NA=1.2 must be used, together with an optimized illumination source, to achieve the same or better process latitude (worst-case DOF at 6% EL), and for the Flash pattern used, further

  16. When is best-worst best? A comparison of best-worst scaling, numeric estimation, and rating scales for collection of semantic norms.

    Science.gov (United States)

    Hollis, Geoff; Westbury, Chris

    2018-02-01

    Large-scale semantic norms have become both prevalent and influential in recent psycholinguistic research. However, little attention has been directed towards understanding the methodological best practices of such norm collection efforts. We compared the quality of semantic norms obtained through rating scales, numeric estimation, and a less commonly used judgment format called best-worst scaling. We found that best-worst scaling usually produces norms with higher predictive validities than other response formats, and does so while requiring less data to be collected overall. We also found evidence that the various response formats may be producing qualitatively, rather than just quantitatively, different data. This raises the issue of potential response format bias, which has not been addressed by previous efforts to collect semantic norms, likely because of previous reliance on a single type of response format for a single type of semantic judgment. We have made available software for creating best-worst stimuli and scoring best-worst data. We have also made available new norms for age of acquisition, valence, arousal, and concreteness collected using best-worst scaling. These norms include entries for 1,040 words, of which 1,034 are also contained in the ANEW norms (Bradley & Lang, Affective Norms for English Words (ANEW): Instruction Manual and Affective Ratings (pp. 1-45). Technical Report C-1, Center for Research in Psychophysiology, University of Florida, 1999).
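
    The count-based scoring typically used for best-worst data can be written in a few lines. The sketch below is illustrative only, not the authors' released software, and assumes each trial presents a small subset from which one best and one worst item are chosen; more elaborate value-based scoring schemes also exist.

```python
from collections import defaultdict

def best_worst_scores(trials):
    """Count-based best-worst scores: (times best - times worst) / appearances."""
    best, worst, seen = defaultdict(int), defaultdict(int), defaultdict(int)
    for items, b, w in trials:
        for item in items:
            seen[item] += 1
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

# Toy usage: three 4-tuples judged for, e.g., valence.
trials = [
    (("joy", "mud", "war", "tea"), "joy", "war"),
    (("joy", "war", "ice", "tea"), "joy", "war"),
    (("mud", "ice", "tea", "war"), "tea", "war"),
]
print(best_worst_scores(trials))  # 'joy' scores highest, 'war' lowest
```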

  17. Manufacturing tolerant topology optimization

    DEFF Research Database (Denmark)

    Sigmund, Ole

    2009-01-01

    In this paper we present an extension of the topology optimization method to include uncertainties during the fabrication of macro, micro and nano structures. More specifically, we consider devices that are manufactured using processes which may result in (uniformly) too thin (eroded) or too thick...... (dilated) structures compared to the intended topology. Examples are MEMS devices manufactured using etching processes, nano-devices manufactured using e-beam lithography or laser micro-machining and macro structures manufactured using milling processes. In the suggested robust topology optimization...... approach, under- and over-etching is modelled by image processing-based "erode" and "dilate" operators and the optimization problem is formulated as a worst case design problem. Applications of the method to the design of macro structures for minimum compliance and micro compliant mechanisms show...
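
    The erode/dilate modelling of under- and over-etching can be mimicked with standard image-processing operators. A minimal sketch on a binary pixel design follows; the paper's robust formulation applies smooth erode/dilate filters to continuous densities inside the optimization loop, so this is only a conceptual stand-in.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def design_variants(design, steps=1):
    """Return (eroded, nominal, dilated) variants of a 0/1 design,
    mimicking uniform under- and over-etching by `steps` pixels."""
    s = np.ones((3, 3), dtype=bool)  # 8-connected structuring element
    eroded = binary_erosion(design, s, iterations=steps)
    dilated = binary_dilation(design, s, iterations=steps)
    return eroded, design.astype(bool), dilated

# Toy design: a 3x5 solid bar on a 7x7 grid.
design = np.zeros((7, 7), dtype=bool)
design[2:5, 1:6] = True
eroded, nominal, dilated = design_variants(design)
print(eroded.sum(), nominal.sum(), dilated.sum())  # 3, 15, 35

# A worst-case robust objective would then minimize
#   max(compliance(eroded), compliance(nominal), compliance(dilated)).
```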

  18. EPHECT I: European household survey on domestic use of consumer products and development of worst-case scenarios for daily use.

    Science.gov (United States)

    Dimitroulopoulou, C; Lucica, E; Johnson, A; Ashmore, M R; Sakellaris, I; Stranger, M; Goelen, E

    2015-12-01

    Consumer products are frequently and regularly used in the domestic environment. Realistic estimates for product use are required for exposure modelling and health risk assessment. This paper provides significant data that can be used as input for such modelling studies. A European survey was conducted, within the framework of the DG Sanco-funded EPHECT project, on the household use of 15 consumer products. These products are all-purpose cleaners, kitchen cleaners, floor cleaners, glass and window cleaners, bathroom cleaners, furniture and floor polish products, combustible air fresheners, spray air fresheners, electric air fresheners, passive air fresheners, coating products for leather and textiles, hair styling products, spray deodorants and perfumes. The analysis of the results from the household survey (1st phase) focused on identifying consumer behaviour patterns (selection criteria, frequency of use, quantities, period of use and ventilation conditions during product use). This can provide valuable input to modelling studies, as this information is not reported in the open literature. The above results were further analysed (2nd phase), to provide the basis for the development of 'most representative worst-case scenarios' regarding the use of the 15 products by home-based population groups (housekeepers and retired people), in four geographical regions in Europe. These scenarios will be used for the exposure and health risk assessment within the EPHECT project. To the best of our knowledge, it is the first time that daily worst-case scenarios are presented in the scientific published literature concerning the use of a wide range of 15 consumer products across Europe. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.

  19. Average-case analysis of numerical problems

    CERN Document Server

    2000-01-01

    The average-case analysis of numerical problems is the counterpart of the more traditional worst-case approach. The analysis of average error and cost leads to new insight on numerical problems as well as to new algorithms. The book provides a survey of results that were mainly obtained during the last 10 years and also contains new results. The problems under consideration include approximation/optimal recovery and numerical integration of univariate and multivariate functions as well as zero-finding and global optimization. Background material, e.g. on reproducing kernel Hilbert spaces and random fields, is provided.

  20. Testing the robustness of best worst scaling for cross-national segmentation with different numbers of choice sets

    DEFF Research Database (Denmark)

    Mueller Loose, Simone; Lockshin, Larry

    2013-01-01

    The aim of the study is to showcase how cross-cultural research can take advantage of the measurement invariance of best-worst scales. The study utilises best-worst scaling (BWS) to assess the importance of environmental sustainability among other experience and credence product attributes...... for the purchase of wine across five countries. Three consumer segments with distinct product preferences were identified across all five countries, which differ in their relative size in each market. This case study demonstrates different analysis methods suitable for the analysis of BWS data on aggregated...

  1. "Best Case/Worst Case": Training Surgeons to Use a Novel Communication Tool for High-Risk Acute Surgical Problems.

    Science.gov (United States)

    Kruser, Jacqueline M; Taylor, Lauren J; Campbell, Toby C; Zelenski, Amy; Johnson, Sara K; Nabozny, Michael J; Steffens, Nicole M; Tucholka, Jennifer L; Kwekkeboom, Kris L; Schwarze, Margaret L

    2017-04-01

    Older adults often have surgery in the months preceding death, which can initiate postoperative treatments inconsistent with end-of-life values. "Best Case/Worst Case" (BC/WC) is a communication tool designed to promote goal-concordant care during discussions about high-risk surgery. The objective of this study was to evaluate a structured training program designed to teach surgeons how to use BC/WC. Twenty-five surgeons from one tertiary care hospital completed a two-hour training session followed by individual coaching. We audio-recorded surgeons using BC/WC with standardized patients and 20 hospitalized patients. Hospitalized patients and their families participated in an open-ended interview 30 to 120 days after enrollment. We used a checklist of 11 BC/WC elements to measure tool fidelity and surgeons completed the Practitioner Opinion Survey to measure acceptability of the tool. We used qualitative analysis to evaluate variability in tool content and to characterize patient and family perceptions of the tool. Surgeons completed a median of 10 of 11 BC/WC elements with both standardized and hospitalized patients (range 5-11). We found moderate variability in presentation of treatment options and description of outcomes. Three months after training, 79% of surgeons reported BC/WC is better than their usual approach and 71% endorsed active use of BC/WC in clinical practice. Patients and families found that BC/WC established expectations, provided clarity, and facilitated deliberation. Surgeons can learn to use BC/WC with older patients considering acute high-risk surgical interventions. Surgeons, patients, and family members endorse BC/WC as a strategy to support complex decision making. Copyright © 2017 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  2. Totally optimal decision trees for Boolean functions

    KAUST Repository

    Chikalov, Igor

    2016-07-28

    We study decision trees which are totally optimal relative to different sets of complexity parameters for Boolean functions. A totally optimal tree is an optimal tree relative to each parameter from the set simultaneously. We consider parameters characterizing both time (in the worst and average case) and space complexity of decision trees, i.e., depth, total path length (average depth), and number of nodes. We have created tools based on extensions of dynamic programming to study totally optimal trees. These tools are applicable to both exact and approximate decision trees, and allow us to perform multi-stage optimization of decision trees relative to different parameters and to count the number of optimal trees. Based on the experimental results we have formulated the following hypotheses (and subsequently proved them): for almost all Boolean functions there exist totally optimal decision trees (i) relative to the depth and number of nodes, and (ii) relative to the depth and average depth.
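
    The dynamic-programming idea can be illustrated for a single parameter: a memoized recursion over subfunctions yields the minimum decision-tree depth of a Boolean function given as a truth table. This is an illustrative reimplementation, not the authors' tool, and is practical only for small numbers of variables.

```python
from functools import lru_cache

def min_depth(table):
    """Minimum depth of a decision tree computing a Boolean function given
    as a truth table of length 2**n (variable 0 is the most significant bit)."""
    @lru_cache(maxsize=None)
    def depth(tbl):
        if len(set(tbl)) == 1:         # constant subfunction -> leaf
            return 0
        m = len(tbl).bit_length() - 1  # number of remaining variables
        best = m                       # querying every variable always works
        for v in range(m):             # try each variable at the root
            block = 1 << (m - 1 - v)
            lo = tuple(x for i, x in enumerate(tbl) if (i // block) % 2 == 0)
            hi = tuple(x for i, x in enumerate(tbl) if (i // block) % 2 == 1)
            best = min(best, 1 + max(depth(lo), depth(hi)))
        return best
    return depth(tuple(table))

print(min_depth([0, 1, 1, 0, 1, 0, 0, 1]))  # XOR of 3 variables -> 3
print(min_depth([0, 0, 0, 0, 1, 1, 1, 1]))  # single variable    -> 1
```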

  3. Pareto-Optimal Evaluation of Ultimate Limit States in Offshore Wind Turbine Structural Analysis

    Directory of Open Access Journals (Sweden)

    Michael Muskulus

    2015-12-01

    Full Text Available The ultimate capacity of support structures is checked with extreme loads. This is straightforward when the limit state equations depend on a single load component, and it has become common to report maxima for each load component. However, if more than one load component is influential, e.g., both axial force and bending moments, it is not straightforward how to define an extreme load. The combination of univariate maxima can be too conservative, and many different combinations of load components can result in the worst value of the limit state equations. The use of contemporaneous load vectors is typically non-conservative. Therefore, in practice, limit state checks are done for each possible load vector, from each time step of a simulation. This is not feasible when performing reliability assessments and structural optimization, where additional, time-consuming computations are involved for each load vector. We therefore propose to use Pareto-optimal loads, which are a small set of loads that together represent all possible worst case scenarios. Simulations with two reference wind turbines show that this approach can be very useful for jacket structures, whereas the design of monopiles is often governed by the bending moment only. Even in this case, the approach might be useful when approaching the structural limits during optimization.
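
    The proposed Pareto-optimal load set amounts to non-dominated filtering of the contemporaneous load vectors. A minimal sketch, assuming larger components are always more severe; in practice, signs and normalization must follow the actual limit state equations.

```python
import numpy as np

def pareto_optimal_loads(loads):
    """Keep only load vectors not dominated by any other vector.

    loads: (n_timesteps, n_components) array; larger = more severe.
    A vector x is dominated if some y >= x componentwise with y > x somewhere.
    """
    loads = np.asarray(loads, dtype=float)
    keep = []
    for i, x in enumerate(loads):
        dominated = np.any(
            np.all(loads >= x, axis=1) & np.any(loads > x, axis=1)
        )
        if not dominated:
            keep.append(i)
    return loads[keep]

# Toy example: five (axial force, bending moment) samples from a time series.
samples = [(1.0, 0.2), (0.9, 0.9), (0.3, 1.1), (0.8, 0.8), (1.0, 0.1)]
print(pareto_optimal_loads(samples))
# keeps (1.0, 0.2), (0.9, 0.9), (0.3, 1.1); the other two are dominated
```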

  4. Query Optimization in Distributed Databases.

    Science.gov (United States)

    1982-10-01

    In general, the strategy a31 a11 a 3 is more time consuming than the strategy a, a, and usually we do not use it. Since the semijoin of R.XJ> RS requires... is the study of the analytic behavior of those heuristic algorithms. Although some analytic results of worst-case and average-case analysis are difficult to obtain, some...

  5. Preservice Teacher Perceptions of Their Best and Worst K-12 Teachers.

    Science.gov (United States)

    Aagaard, Lola; Skidmore, Ronald

    This study investigated student teachers' views on their best and worst teachers' characteristics. Students in four sections of a sophomore-level teacher education program prerequisite course were required to write half-page descriptions of their best and worst teachers from elementary and high school, focusing on the behaviors and attitudes that…

  6. Worst-case Throughput Analysis for Parametric Rate and Parametric Actor Execution Time Scenario-Aware Dataflow Graphs

    Directory of Open Access Journals (Sweden)

    Mladen Skelin

    2014-03-01

    Full Text Available Scenario-aware dataflow (SADF) is a prominent tool for modeling and analysis of dynamic embedded dataflow applications. In SADF the application is represented as a finite collection of synchronous dataflow (SDF) graphs, each of which represents one possible application behaviour or scenario. A finite state machine (FSM) specifies the possible orders of scenario occurrences. The SADF model renders the tightest possible performance guarantees, but is limited by its finiteness. This means that from a practical point of view, it can only handle dynamic dataflow applications that are characterized by a reasonably sized set of possible behaviours or scenarios. In this paper we remove this limitation for a class of SADF graphs by means of SADF model parametrization in terms of graph port rates and actor execution times. First, we formally define the semantics of the model relevant for throughput analysis based on (max,+) linear system theory and (max,+) automata. Second, by generalizing some of the existing results, we give the algorithms for worst-case throughput analysis of parametric rate and parametric actor execution time acyclic SADF graphs with a fully connected, possibly infinite state transition system. Third, we demonstrate our approach on a few realistic applications from the digital signal processing (DSP) domain mapped onto an embedded multi-processor architecture.
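
    For a single, fully specified scenario, the worst-case iteration period in (max,+) semantics is the max-plus eigenvalue of the timing matrix, and throughput is its inverse. A minimal power-iteration sketch for a fixed matrix follows; the paper's parametric analysis goes well beyond this fixed-matrix case.

```python
import numpy as np

NEG_INF = -np.inf  # max-plus "zero": no dependency between the two actors

def maxplus_matvec(A, x):
    """y_i = max_j (A_ij + x_j): one synchronous iteration of the graph."""
    return np.max(A + x[None, :], axis=1)

def cycle_time(A, iters=200):
    """Estimate the max-plus eigenvalue (asymptotic time increase per
    iteration); worst-case throughput is its inverse."""
    x = np.zeros(A.shape[0])
    for _ in range(iters):
        x = maxplus_matvec(A, x)
    return np.max(x) / iters  # growth-rate estimate

# Two actors with execution times 2 and 3 feeding each other in a cycle:
A = np.array([[NEG_INF, 2.0],
              [3.0, NEG_INF]])
print(cycle_time(A))  # ~2.5: one firing of each actor every 5 time units
```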

  7. Specific exercises reduce brace prescription in adolescent idiopathic scoliosis: a prospective controlled cohort study with worst-case analysis.

    Science.gov (United States)

    Negrini, Stefano; Zaina, Fabio; Romano, Michele; Negrini, Alessandra; Parzini, Silvana

    2008-06-01

    To compare the effect of Scientific Exercises Approach to Scoliosis (SEAS) exercises with "usual care" rehabilitation programmes in terms of the avoidance of brace prescription and prevention of curve progression in adolescent idiopathic scoliosis. Prospective controlled cohort observational study. Seventy-four consecutive outpatients with adolescent idiopathic scoliosis, mean 15 degrees (standard deviation 6) Cobb angle, 12.4 (standard deviation 2.2) years old, at risk of bracing who had not been treated previously. Thirty-five patients were included in the SEAS exercises group and 39 in the usual physiotherapy group. The primary outcome included the number of braced patients, Cobb angle and the angle of trunk rotation. There were 6.1% braced patients in the SEAS exercises group vs 25.0% in the usual physiotherapy group. Failures of treatment in the worst-case analysis were 11.5% and 30.8%, respectively. In both cases the differences were statistically significant. Cobb angle improved in the SEAS exercises group, but worsened in the usual physiotherapy group. In the SEAS exercises group, 23.5% of patients improved and 11.8% worsened, while in the usual physiotherapy group 11.1% improved and 13.9% worsened. These data confirm the effectiveness of exercises in patients with scoliosis who are at high risk of progression. Compared with non-adapted exercises, a specific and personalized treatment (SEAS) appears to be more effective.

  8. Factor analysis in optimization of formulation of high content uniformity tablets containing low dose active substance

    Czech Academy of Sciences Publication Activity Database

    Lukášová, I.; Muselík, J.; Franc, A.; Goněc, R.; Mika, Filip; Vetchý, D.

    2017-01-01

    Roč. 109, NOV (2017), s. 541-547 ISSN 0928-0987 R&D Projects: GA MŠk(CZ) LO1212; GA MŠk ED0017/01/01 Institutional support: RVO:68081731 Keywords: factor analysis * process optimization * sampling error * worst case Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering OBOR OECD: Medical laboratory technology (including laboratory samples analysis) Impact factor: 3.756, year: 2016

  9. Optimization of Algorithms Using Extensions of Dynamic Programming

    KAUST Repository

    AbouEisha, Hassan M.

    2017-04-09

    reflects the worst-case time complexity and average depth indicates the average-case time complexity. Non-adaptive algorithms are represented as decision tests whose size expresses the worst-case time complexity. Finally, we present a dynamic programming algorithm that finds a minimum decision test (minimum reduct) for a given decision table.

  10. Framework for Combined Diagnostics, Prognostics and Optimal Operation of a Subsea Gas Compression System

    OpenAIRE

    Verheyleweghen, Adriaen; Jaeschke, Johannes

    2017-01-01

    The efficient and safe operation of subsea gas and oil production systems sets strict requirements to equipment reliability to avoid unplanned breakdowns and costly maintenance interventions. Because of this, condition monitoring is employed to assess the status of the system in real-time. However, the condition of the system is usually not considered explicitly when finding the optimal operation strategy. Instead, operational constraints on flow rates, pressures etc., based on worst-case sce...

  11. Tsunami hazard for the city of Catania, eastern Sicily, Italy, assessed by means of Worst-case Credible Tsunami Scenario Analysis (WCTSA)

    Directory of Open Access Journals (Sweden)

    R. Tonini

    2011-05-01

    Full Text Available Eastern Sicily is one of the coastal areas most exposed to earthquakes and tsunamis in Italy. The city of Catania, which developed between the eastern base of Etna volcano and the Ionian Sea, is, together with the neighbouring coastal belt, under the strong menace of tsunamis. This paper addresses the estimation of the tsunami hazard for the city of Catania by using the technique of the Worst-case Credible Tsunami Scenario Analysis (WCTSA) and is focused on a target area including the Catania harbour and the beach called La Plaia, where many human activities take place and many important structures are present. The aim of the work is to provide a detailed tsunami hazard analysis, firstly by building scenarios that are proposed on the basis of tectonic considerations and of the largest historical events that hit the city in the past, and then by combining all the information deriving from single scenarios into a unique aggregated scenario that can be viewed as the worst virtual scenario. Scenarios have been calculated by means of numerical simulations on computational grids of different resolutions, passing from 3 km on a regional scale to 40 m in the target area. La Plaia beach proves to be the area most exposed to tsunami inundation, with inland penetration up to hundreds of meters. The harbour turns out to be more exposed to tsunami waves with low frequencies: in particular, it is found that the major contribution to the hazard in the harbour is due to a tsunami from a remote source, which propagates with much longer periods than tsunamis from local sources. This work has been performed in the framework of the EU-funded project SCHEMA.

  12. [Unfolding item response model using best-worst scaling].

    Science.gov (United States)

    Ikehara, Kazuya

    2015-02-01

    In attitude measurement and sensory tests, the unfolding model is typically used. In this model, response probability is formulated in terms of the distance between the person and the stimulus. In this study, we proposed an unfolding item response model using best-worst scaling (BWU model), in which a person chooses the best and worst stimuli among repeatedly presented subsets of stimuli. We also formulated an unfolding model using best scaling (BU model), and compared the accuracy of estimates between the BU and BWU models. A simulation experiment showed that the BWU model performed much better than the BU model in terms of bias and root mean square errors of estimates. With reference to Usami (2011), the proposed models were applied to actual data to measure attitudes toward tardiness. Results indicated high similarity between stimulus estimates generated with the proposed models and those of Usami (2011).
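
    The distance-based response probability mentioned above is commonly written, in generic squared-distance unfolding form, roughly as follows; this is a textbook-style illustration, not necessarily the exact BWU specification.

```latex
% Generic unfolding choice model for a presented subset S: person location
% \theta, stimulus locations b_j.  The "best" choice favours small
% person-stimulus distance, the "worst" choice large distance.
P(\mathrm{best}=j \mid S)  = \frac{\exp\{-(\theta-b_j)^2\}}{\sum_{k\in S}\exp\{-(\theta-b_k)^2\}},
\qquad
P(\mathrm{worst}=j \mid S) = \frac{\exp\{+(\theta-b_j)^2\}}{\sum_{k\in S}\exp\{+(\theta-b_k)^2\}}
```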

  13. Five-Yearly Review: TREF avoids the worst!

    CERN Multimedia

    Staff Association

    2006-01-01

    In our last edition, we informed you about the Staff Council decision to reject the entire set of proposals for the five-yearly review, on the basis of the Director-General's revised proposals 2 and 10. We also indicated that a total failure could still be avoided at TREF on 4 and 5 October. To our relief, TREF avoided the worst!

  14. Post Pareto optimization-A case

    Science.gov (United States)

    Popov, Stoyan; Baeva, Silvia; Marinova, Daniela

    2017-12-01

    Simulation performance may be evaluated according to multiple quality measures that are in competition and their simultaneous consideration poses a conflict. In the current study we propose a practical framework for investigating such simulation performance criteria, exploring the inherent conflicts amongst them and identifying the best available tradeoffs, based upon multi-objective Pareto optimization. This approach necessitates the rigorous derivation of performance criteria to serve as objective functions and undergo vector optimization. We demonstrate the effectiveness of our proposed approach by applying it with multiple stochastic quality measures. We formulate performance criteria of this use-case, pose an optimization problem, and solve it by means of a simulation-based Pareto approach. Upon attainment of the underlying Pareto Frontier, we analyze it and prescribe preference-dependent configurations for the optimal simulation training.

  15. Heterogenous Agents Model with the Worst Out Algorithm

    Czech Academy of Sciences Publication Activity Database

    Vácha, Lukáš; Vošvrda, Miloslav

    -, č. 8 (2006), s. 3-19 ISSN 1801-5999 Institutional research plan: CEZ:AV0Z10750506 Keywords : efficient market hypothesis * fractal market hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst out algorithm Subject RIV: AH - Economics

  16. The relative worst order ratio applied to paging

    DEFF Research Database (Denmark)

    Boyar, Joan; Favrholdt, Lene Monrad; Larsen, Kim Skak

    2007-01-01

    The relative worst order ratio, a new measure for the quality of on-line algorithms, was recently defined and applied to two bin packing problems. Here, we apply it to the paging problem and obtain the following results: We devise a new deterministic paging algorithm, Retrospective-LRU, and show...

  17. PMS49 – Empirical comparison of discrete choice experiment and best-worst scaling to estimate stakeholders' risk tolerance for hip replacement surgery

    NARCIS (Netherlands)

    van Dijk, J.D.; Groothuis-Oudshoorn, Karin; Marshall, D.; IJzerman, Maarten Joost

    2013-01-01

    Objectives Empirical comparison of two preference elicitation methods, discrete choice experiment (DCE) and profile case best-worst scaling (BWS), regarding the estimation of the risk tolerance for hip replacement surgery (total hip arthroplasty and total hip resurfacing arthroplasty). Methods An

  18. Design optimization of anisotropic pressure vessels with manufacturing uncertainties accounted for

    International Nuclear Information System (INIS)

    Walker, M.; Tabakov, P.Y.

    2013-01-01

    Accurate optimal design solutions for most engineering structures present considerable difficulties due to the complexity and multi-modality of the functional design space. The situation is made even more complex when potential manufacturing tolerances must be accounted for in the optimizing process. The present study provides an original in-depth analysis of the problem and then a new technique for determining the optimal design of engineering structures, with manufacturing tolerances accounted for, is proposed and demonstrated. The numerical examples used to demonstrate the technique involve the design optimization of anisotropic fibre-reinforced laminated pressure vessels. It is assumed that the probability of any tolerance value occurring within the tolerance band, compared with any other, is equal, and thus it is a worst-case scenario approach. A genetic algorithm with fitness sharing, including a micro-genetic algorithm, has been found to be very suitable to use, and implemented in the technique

  19. Preliminary Analysis of Aircraft Loss of Control Accidents: Worst Case Precursor Combinations and Temporal Sequencing

    Science.gov (United States)

    Belcastro, Christine M.; Groff, Loren; Newman, Richard L.; Foster, John V.; Crider, Dennis H.; Klyde, David H.; Huston, A. McCall

    2014-01-01

    Aircraft loss of control (LOC) is a leading cause of fatal accidents across all transport airplane and operational classes, and can result from a wide spectrum of hazards, often occurring in combination. Technologies developed for LOC prevention and recovery must therefore be effective under a wide variety of conditions and uncertainties, including multiple hazards, and their validation must provide a means of assessing system effectiveness and coverage of these hazards. This requires the definition of a comprehensive set of LOC test scenarios based on accident and incident data as well as future risks. This paper defines a comprehensive set of accidents and incidents over a recent 15 year period, and presents preliminary analysis results to identify worst-case combinations of causal and contributing factors (i.e., accident precursors) and how they sequence in time. Such analyses can provide insight in developing effective solutions for LOC, and form the basis for developing test scenarios that can be used in evaluating them. Preliminary findings based on the results of this paper indicate that system failures or malfunctions, crew actions or inactions, vehicle impairment conditions, and vehicle upsets contributed the most to accidents and fatalities, followed by inclement weather or atmospheric disturbances and poor visibility. Follow-on research will include finalizing the analysis through a team consensus process, defining future risks, and developing a comprehensive set of test scenarios with correlation to the accidents, incidents, and future risks. Since enhanced engineering simulations are required for batch and piloted evaluations under realistic LOC precursor conditions, these test scenarios can also serve as a high-level requirement for defining the engineering simulation enhancements needed for generating them.

  20. On the Optimality of Repetition Coding among Rate-1 DC-offset STBCs for MIMO Optical Wireless Communications

    KAUST Repository

    Sapenov, Yerzhan

    2017-07-06

    In this paper, an optical wireless multiple-input multiple-output communication system employing intensity-modulation direct-detection is considered. The performance of direct current offset space-time block codes (DC-STBC) is studied in terms of pairwise error probability (PEP). It is shown that among the class of DC-STBCs, the worst case PEP corresponding to the minimum distance between two codewords is minimized by repetition coding (RC), under both electrical and optical individual power constraints. It follows that among all DC-STBCs, RC is optimal in terms of worst-case PEP for static channels and also for varying channels under any turbulence statistics. This result agrees with previously published numerical results showing the superiority of RC in such systems. It also agrees with previously published analytic results on this topic under log-normal turbulence and further extends it to arbitrary turbulence statistics. This shows the redundancy of the time-dimension of the DC-STBC in this system. This result is further extended to sum power constraints with static and turbulent channels, where it is also shown that the time dimension is redundant, and the optimal DC-STBC has a spatial beamforming structure. Numerical results are provided to demonstrate the difference in performance for systems with different numbers of receiving apertures and different throughput.

  1. Schedulability Analysis and Optimization for the Synthesis of Multi-Cluster Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2003-01-01

    We present an approach to schedulability analysis for the synthesis of multi-cluster distributed embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways. We have also proposed a buffer size and worst case queuing delay analysis for the gateways......, responsible for routing inter-cluster traffic. Optimization heuristics for the priority assignment and synthesis of bus access parameters aimed at producing a schedulable system with minimal buffer needs have been proposed. Extensive experiments and a real-life example show the efficiency of our approaches....

  2. On meeting capital requirements with a chance-constrained optimization model.

    Science.gov (United States)

    Atta Mills, Ebenezer Fiifi Emire; Yu, Bo; Gu, Lanlan

    2016-01-01

    This paper deals with a capital to risk asset ratio chance-constrained optimization model in the presence of loans, treasury bills, fixed assets and non-interest-earning assets. To model the dynamics of loans, we introduce a modified CreditMetrics approach. This leads to the development of a deterministic convex counterpart of the capital to risk asset ratio chance constraint. We analyze our model under the worst-case scenario, i.e. loan default. The theoretical model is analyzed by applying numerical procedures, in order to derive valuable insights from a financial outlook. Our results suggest that our capital to risk asset ratio chance-constrained optimization model guarantees that banks meet the capital requirements of Basel III with a likelihood of 95%, irrespective of changes in the future market value of assets.
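
    The key step, turning a chance constraint into a deterministic convex constraint, has a simple closed form in the Gaussian case; a toy sketch for illustration (the paper's CreditMetrics-based counterpart is more involved).

```python
from scipy.stats import norm

# Require P(loss <= capital) >= 0.95 with loss ~ Normal(mu, sigma).
# Deterministic equivalent: capital >= mu + z_{0.95} * sigma.
mu, sigma, alpha = 100.0, 20.0, 0.95
z = norm.ppf(alpha)                # ~1.645
required_capital = mu + z * sigma
print(required_capital)            # ~132.9

# Inside an optimization model this becomes the linear (hence convex)
# constraint: capital - mu - z * sigma >= 0, usable in any LP/QP solver.
```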

  3. Min-max optimal public service system design

    Directory of Open Access Journals (Sweden)

    Marek Kvet

    2015-03-01

    Full Text Available This paper deals with designing a fair public service system. To achieve fairness, various schemes can be applied. The strongest criterion in the process is minimization of the disutility of the worst situated users, followed by optimization of the disutility of the better situated users under the condition that the disutility of the worst situated users does not worsen, otherwise called lexicographical minimization. Focusing on the first step, this paper endeavours to find an effective solution to the weighted p-median problem based on a radial formulation. Attempts at solving real instances using a location-allocation model often fail due to enormous computational time or huge memory demands. The radial formulation can be implemented using commercial optimisation software. The main goal of this study is to show that suitably solving the min-max optimal public service system design problem can save computational time.

  4. Time-Optimal Real-Time Test Case Generation using UPPAAL

    DEFF Research Database (Denmark)

    Hessel, Anders; Larsen, Kim Guldstrand; Nielsen, Brian

    2004-01-01

    Testing is the primary software validation technique used by industry today, but remains ad hoc, error prone, and very expensive. A promising improvement is to automatically generate test cases from formal models of the system under test. We demonstrate how to automatically generate real-time conformance test cases from timed automata specifications. Specifically, we demonstrate how to efficiently generate real-time test cases with optimal execution time, i.e. test cases that are the fastest possible to execute. Our technique allows time-optimal test cases to be generated using manually formulated test purposes or generated automatically from various coverage criteria of the model.

  5. Scheduling with Bus Access Optimization for Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Eles, Petru; Doboli, Alex; Pop, Paul

    2000-01-01

    In this paper, we concentrate on aspects related to the synthesis of distributed embedded systems consisting of programmable processors and application-specific hardware components. The approach is based on an abstract graph representation that captures, at process level, both dataflow and the flow of control. Our goal is to derive a worst-case delay by which the system completes execution, such that this delay is as small as possible; to generate a logically and temporally deterministic schedule; and to optimize parameters of the communication protocol such that this delay is guaranteed. We have...... generates an efficient bus access scheme as well as the schedule tables for activation of processes and communications.

  6. Space engineering modeling and optimization with case studies

    CERN Document Server

    Pintér, János

    2016-01-01

    This book presents a selection of advanced case studies that cover a substantial range of issues and real-world challenges and applications in space engineering. Vital mathematical modeling, optimization methodologies and numerical solution aspects of each application case study are presented in detail, with discussions of a range of advanced model development and solution techniques and tools. Space engineering challenges are discussed in the following contexts: •Advanced Space Vehicle Design •Computation of Optimal Low Thrust Transfers •Indirect Optimization of Spacecraft Trajectories •Resource-Constrained Scheduling •Packing Problems in Space •Design of Complex Interplanetary Trajectories •Satellite Constellation Image Acquisition •Re-entry Test Vehicle Configuration Selection •Collision Risk Assessment on Perturbed Orbits •Optimal Robust Design of Hybrid Rocket Engines •Nonlinear Regression Analysis in Space Engineering •Regression-Based Sensitivity Analysis and Robust Design ...

  7. On the relation between flexibility analysis and robust optimization for linear systems

    KAUST Repository

    Zhang, Qi

    2016-03-05

    Flexibility analysis and robust optimization are two approaches to solving optimization problems under uncertainty that share some fundamental concepts, such as the use of polyhedral uncertainty sets and the worst-case approach to guarantee feasibility. The connection between these two approaches has not been sufficiently acknowledged and examined in the literature. In this context, the contributions of this work are fourfold: (1) a comparison between flexibility analysis and robust optimization from a historical perspective is presented; (2) for linear systems, new formulations for the three classical flexibility analysis problems—flexibility test, flexibility index, and design under uncertainty—based on duality theory and the affinely adjustable robust optimization (AARO) approach are proposed; (3) the AARO approach is shown to be generally more restrictive such that it may lead to overly conservative solutions; (4) numerical examples show the improved computational performance from the proposed formulations compared to the traditional flexibility analysis models. © 2016 American Institute of Chemical Engineers AIChE J, 62: 3109–3123, 2016

  8. Optimizing the hydraulic program of cementing casing strings

    Energy Technology Data Exchange (ETDEWEB)

    Novakovic, M

    1984-01-01

    A technique is described for calculating the optimal parameters of the flow of plugging mud which takes into consideration the geometry of the annular space and the rheological characteristics of the muds. The optimization algorithm was illustrated by a block diagram. Examples are given for practical application of the optimization programs in production conditions. It is stressed that optimizing the hydraulic cementing program is effective if other technical-technological problems in cementing casing strings have been resolved.

  9. The effect of a loss of model structural detail due to network skeletonization on contamination warning system design: case studies

    Science.gov (United States)

    Davis, Michael J.; Janke, Robert

    2018-05-01

    The effect of limitations in the structural detail available in a network model on contamination warning system (CWS) design was examined in case studies using the original and skeletonized network models for two water distribution systems (WDSs). The skeletonized models were used as proxies for incomplete network models. CWS designs were developed by optimizing sensor placements for worst-case and mean-case contamination events. Designs developed using the skeletonized network models were transplanted into the original network model for evaluation. CWS performance was defined as the number of people who ingest more than some quantity of a contaminant in tap water before the CWS detects the presence of contamination. Lack of structural detail in a network model can result in CWS designs that (1) provide considerably less protection against worst-case contamination events than that obtained when a more complete network model is available and (2) yield substantial underestimates of the consequences associated with a contamination event. Nevertheless, CWSs developed using skeletonized network models can provide useful reductions in consequences for contaminants whose effects are not localized near the injection location. Mean-case designs can yield worst-case performances similar to those for worst-case designs when there is uncertainty in the network model. Improvements in network models for WDSs have the potential to yield significant improvements in CWS designs as well as more realistic evaluations of those designs. Although such improvements would be expected to yield improved CWS performance, the expected improvements in CWS performance have not been quantified previously. The results presented here should be useful to those responsible for the design or implementation of CWSs, particularly managers and engineers in water utilities, and encourage the development of improved network models.
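
    Sensor placement for CWS design is commonly posed as choosing a limited number of sensor locations to minimize mean-case or worst-case impact over an ensemble of contamination events. A minimal greedy sketch over a precomputed impact matrix follows, with hypothetical numbers; the optimization method actually used in the study is not detailed in this record.

```python
import numpy as np

def greedy_placement(impact, budget, worst_case=False):
    """Greedily pick `budget` sensor nodes minimizing mean or worst-case impact.

    impact[e, n] is the consequence of contamination event e when node n is
    the first sensor to detect it; a 'no detection' penalty column can be
    appended to model events that no sensor catches.
    """
    n_events, n_nodes = impact.shape
    achieved = np.full(n_events, np.inf)   # best impact per event so far
    chosen = []
    for _ in range(budget):
        def score(n):
            covered = np.minimum(achieved, impact[:, n])
            return covered.max() if worst_case else covered.mean()
        pick = min((n for n in range(n_nodes) if n not in chosen), key=score)
        chosen.append(pick)
        achieved = np.minimum(achieved, impact[:, pick])
    return chosen

# Hypothetical data: 3 contamination events, 4 candidate sensor nodes.
impact = np.array([[9.0, 2.0, 5.0, 7.0],
                   [8.0, 8.0, 1.0, 7.0],
                   [3.0, 9.0, 9.0, 2.0]])
print(greedy_placement(impact, budget=2))                   # mean-case design
print(greedy_placement(impact, budget=2, worst_case=True))  # worst-case design
```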

  10. The Treeterbi and Parallel Treeterbi algorithms: efficient, optimal decoding for ordinary, generalized and pair HMMs

    DEFF Research Database (Denmark)

    Keibler, Evan; Arumugam, Manimozhiyan; Brent, Michael R

    2007-01-01

    MOTIVATION: Hidden Markov models (HMMs) and generalized HMMs have been successfully applied to many problems, but the standard Viterbi algorithm for computing the most probable interpretation of an input sequence (known as decoding) requires memory proportional to the length of the sequence, which can...... be prohibitive. Existing approaches to reducing memory usage either sacrifice optimality or trade increased running time for reduced memory. RESULTS: We developed two novel decoding algorithms, Treeterbi and Parallel Treeterbi, and implemented them in the TWINSCAN/N-SCAN gene-prediction system. The worst case...... asymptotic space and time are the same as for standard Viterbi, but in practice, Treeterbi optimally decodes arbitrarily long sequences with generalized HMMs in bounded memory without increasing running time. Parallel Treeterbi uses the same ideas to split optimal decoding across processors, dividing latency...
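
    The memory cost that motivates Treeterbi is visible in the textbook Viterbi recursion, which keeps a backpointer for every sequence position and state. Below is a standard log-space sketch of that baseline (not the Treeterbi algorithm itself) with a toy two-state HMM.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, obs):
    """Standard Viterbi decoding; the backpointer table is O(L * n_states)
    memory, which is what Treeterbi avoids for long sequences."""
    L, S = len(obs), len(log_init)
    delta = log_init + log_emit[:, obs[0]]
    back = np.zeros((L, S), dtype=int)
    for t in range(1, L):
        scores = delta[:, None] + log_trans            # (from_state, to_state)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(S)] + log_emit[:, obs[t]]
    path = [int(np.argmax(delta))]
    for t in range(L - 1, 0, -1):                      # trace backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Tiny 2-state example with sticky transitions.
li = np.log([0.5, 0.5])
lt = np.log([[0.9, 0.1], [0.1, 0.9]])
le = np.log([[0.8, 0.2], [0.2, 0.8]])
print(viterbi(li, lt, le, [0, 0, 1, 1, 1]))  # -> [0, 0, 1, 1, 1]
```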

  11. Worst-case study for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents: Methodology validation of total organic carbon

    Energy Technology Data Exchange (ETDEWEB)

    Porto, Luciana Valeria Ferrari Machado

    2015-07-01

    Radiopharmaceuticals are defined as pharmaceutical preparations containing a radionuclide in their composition, mostly intravenously administered, and therefore compliance with the principles of Good Manufacturing Practices (GMP) is essential and indispensable. Cleaning validation is a requirement of current GMP and consists of documented evidence demonstrating that cleaning procedures are able to remove residues to pre-determined acceptance levels, ensuring that no cross-contamination occurs. A simplification of cleaning process validation is accepted, and consists of choosing a product, called the 'worst case', to represent the cleaning processes of all equipment in the same production area. One of the steps of cleaning validation is the establishment and validation of the analytical method to quantify the residue. The aim of this study was to establish the worst case for cleaning validation of equipment in the radiopharmaceutical production of lyophilized reagents (LR) for labeling with 99mTc, to evaluate the use of Total Organic Carbon (TOC) content as an indicator of the cleaning of equipment used in LR manufacture, to validate the method of Non-Purgeable Organic Carbon (NPOC), and to perform recovery tests with the product chosen as the worst case. The choice of the worst-case product was based on the calculation of an index called the 'Worst Case Index' (WCI), using information about drug solubility, the difficulty of cleaning the equipment and the occupancy rate of the products in the production line. The product indicated as the 'worst case' was the LR MIBI-TEC. The method validation assays were performed using a carbon analyser model TOC-Vwp coupled to an autosampler model ASI-V, both from Shimadzu®, controlled by TOC Control-V software. The direct method was used for NPOC quantification. The parameters evaluated in the method validation were: system suitability, robustness, linearity, detection limit (DL) and quantification limit (QL), precision
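
    The 'Worst Case Index' aggregates solubility, cleaning difficulty and production-line occupancy into a single score. The exact formula and weights used in the study are not reproduced in this record, so the sketch below uses hypothetical weights and a hypothetical comparison product purely to illustrate the ranking step.

```python
def worst_case_index(solubility, cleaning_difficulty, occupancy,
                     weights=(0.4, 0.4, 0.2)):
    """Hypothetical WCI: higher = harder to clean and more frequently run.
    All inputs are normalized to [0, 1]; solubility enters inverted, since
    poorly soluble residues are harder to remove."""
    ws, wc, wo = weights
    return ws * (1.0 - solubility) + wc * cleaning_difficulty + wo * occupancy

products = {
    "MIBI-TEC": worst_case_index(0.1, 0.9, 0.5),   # named in the study
    "OTHER-LR": worst_case_index(0.8, 0.3, 0.3),   # hypothetical comparator
}
print(max(products, key=products.get))  # product to adopt as the worst case
```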

  12. Coverage-based constraints for IMRT optimization

    Science.gov (United States)

    Mescher, H.; Ulrich, S.; Bangert, M.

    2017-09-01

    Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(\hat{d}, \hat{v}) of covering a specific target volume fraction \hat{v} with a certain dose \hat{d}. Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study, based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins, illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
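
    Schematically, with discrete error scenarios s weighted by p_s, the coverage probability and its use as a hard constraint (instead of a weighted objective) can be written as follows; the notation is generic, and f_OAR stands for any competing normal-tissue objective.

```latex
% Scenario-based coverage probability: V_s(x) is the target volume fraction
% receiving at least dose \hat{d} under error scenario s for plan variables x.
q(\hat{d},\hat{v}; x) = \sum_{s} p_s \,\mathbf{1}\big[ V_s(x) \ge \hat{v} \big]
% Coverage as a hard constraint rather than a weighted objective:
\min_{x} \; f_{\mathrm{OAR}}(x) \quad \text{s.t.} \quad q(\hat{d},\hat{v}; x) \ge p
```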

  13. Supplier Risk Assessment Based on Best-Worst Method and K-Means Clustering: A Case Study

    Directory of Open Access Journals (Sweden)

    Merve Er Kara

    2018-04-01

    Full Text Available Supplier evaluation and selection is one of the most critical strategic decisions for developing a competitive and sustainable organization. Companies have to consider supplier related risks and threats in their purchasing decisions. In today’s competitive and risky business environment, it is very important to work with reliable suppliers. This study proposes a clustering based approach to group suppliers based on their risk profile. Suppliers of a company in the heavy-machinery sector are assessed based on 17 qualitative and quantitative risk types. The weights of the criteria are determined by using the Best-Worst method. Four factors are extracted by applying Factor Analysis to the supplier risk data. Then k-means clustering algorithm is applied to group core suppliers of the company based on the four risk factors. Three clusters are created with different risk exposure levels. The interpretation of the results provides insights for risk management actions and supplier development programs to mitigate supplier risk.
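
    The Best-Worst method referenced above derives criteria weights from a best-to-others and an others-to-worst comparison vector by solving a small min-max problem. The sketch below implements the common linearized variant with scipy; the paper may use the original nonlinear formulation.

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(a_best, a_worst, best, worst):
    """Linearized Best-Worst Method (BWM).

    a_best[j]  : preference of the best criterion over criterion j (1-9).
    a_worst[j] : preference of criterion j over the worst criterion (1-9).
    Returns (weights, xi); xi measures the inconsistency of the comparisons.
    """
    n = len(a_best)
    A_ub, b_ub = [], []

    def abs_leq_xi(pos, neg, coef):
        # Encode |w_pos - coef * w_neg| <= xi as two inequality rows.
        for sign in (1.0, -1.0):
            row = np.zeros(n + 1)
            row[pos] += sign
            row[neg] -= sign * coef
            row[n] = -1.0               # ... - xi <= 0
            A_ub.append(row)
            b_ub.append(0.0)

    for j in range(n):
        abs_leq_xi(best, j, a_best[j])   # |w_B - a_Bj * w_j| <= xi
        abs_leq_xi(j, worst, a_worst[j]) # |w_j - a_jW * w_W| <= xi

    c = np.zeros(n + 1)
    c[n] = 1.0                           # minimize xi
    A_eq = [np.append(np.ones(n), 0.0)]  # weights sum to one
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[n]

# Toy: 4 risk criteria; criterion 0 is the best, criterion 3 the worst.
w, xi = bwm_weights(a_best=[1, 2, 4, 8], a_worst=[8, 4, 2, 1], best=0, worst=3)
print(np.round(w, 3), round(xi, 4))  # ~[0.533 0.267 0.133 0.067], xi ~ 0
```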

  14. Heterogeneous Agents Model with the Worst Out Algorithm

    Czech Academy of Sciences Publication Activity Database

    Vošvrda, Miloslav; Vácha, Lukáš

    I, č. 1 (2007), s. 54-66 ISSN 1802-4696 R&D Projects: GA MŠk(CZ) LC06075; GA ČR(CZ) GA402/06/0990 Grant - others: GA UK(CZ) 454/2004/A-EK/FSV Institutional research plan: CEZ:AV0Z10750506 Keywords: Efficient Markets Hypothesis * Fractal Market Hypothesis * agents' investment horizons * agents' trading strategies * technical trading rules * heterogeneous agent model with stochastic memory * Worst Out Algorithm Subject RIV: AH - Economics

  15. Java Processor Optimized for RTSJ

    Directory of Open Access Journals (Sweden)

    Tu Shiliang

    2007-01-01

    Full Text Available Due to the preeminent work on the Real-Time Specification for Java (RTSJ), Java is increasingly expected to become the leading programming language in real-time systems. To provide a Java platform suitable for real-time applications, a Java processor which can execute Java bytecode directly is proposed in this paper. It provides efficient hardware support for some mechanisms specified in the RTSJ and offers a simpler programming model through ameliorating the scoped memory of the RTSJ. The worst-case execution time (WCET) of the bytecodes implemented in this processor is predictable by employing the optimization method proposed in our previous work, in which all processing interfering with predictability is handled before bytecode execution. A further advantage of this method is that it makes the implementation of the processor simpler and suited to a low-cost FPGA chip.

  16. The determination of risk areas for muddy floods based on a worst-case erosion modelling

    Science.gov (United States)

    Saathoff, Ulfert; Schindewolf, Marcus; Annika Arévalo, Sarah

    2013-04-01

    Soil erosion and muddy floods are a frequently occurring hazard in the German state of Saxony because of the topography and high relief energy together with the high proportion of arable land. Still, the events are rather heterogeneously distributed and we do not know where damage is likely to occur. The goal of this study is to locate hot spots for the risk of muddy floods, with the objective of preventing high economic damage in the future. We applied a soil erosion and deposition map of Saxony, calculated with the process-based soil erosion model EROSION 3D. This map shows the potential soil erosion and transported sediment for worst-case soil conditions and a 10-year rain storm event. Furthermore, a map of the current landuse in the state is used. From the landuse map, we extracted those areas that are especially vulnerable to muddy floods, like residential and industrial areas, infrastructural facilities (e.g. power plants, hospitals) and highways. In combination with the output of the soil erosion model, the amount of sediment that enters each single landuse entity is calculated. Based on these data, a state-wide map with classified risks is created. The results are furthermore used to identify the risk of muddy floods for each single municipality in Saxony. The results are evaluated against data from muddy flood events with documented locations that occurred between 2000 and 2010. Additionally, plausibility tests are performed for selected areas (examination of landuse, topography and soil). The results prove to be plausible and most of the documented events can be explained by the modelled risk map. The created map can be used by different institutions, like city and traffic planners, to estimate the risk of muddy flood occurrence at specific locations. Furthermore, the risk map can help insurance companies to evaluate the insurance risk of a building. To make it easily accessible, the risk map will be published online via a web GIS.

  17. Evacuation planning for plausible worst case inundation scenarios in Honolulu, Hawaii.

    Science.gov (United States)

    Kim, Karl; Pant, Pradip; Yamashita, Eric

    2015-01-01

    Honolulu is susceptible to coastal flooding hazards. Like that of other coastal cities, Honolulu's long-term economic viability and sustainability depends on how well it can adapt to changes in the natural and built environment. While there is disagreement over the magnitude and extent of localized impacts associated with climate change, it is widely accepted that by 2100 there will be at least a meter of sea level rise (SLR) and an increase in extreme weather events. Increased exposure and vulnerabilities associated with urbanization and the location of human activities in coastal areas warrant serious consideration by planners and policy makers. This article has three objectives. First, flooding due to the combined effects of SLR and episodic hydro-meteorological and geophysical events in Honolulu is investigated and the risks to the community are quantified. Second, the risks and vulnerabilities of critical infrastructure and the surface transportation system are described. Third, using travel demand software, travel distances and travel times for evacuation from inundated areas are modeled. Data from three inundation models were used. The first model simulated storm surge from a category 4 hurricane similar to Hurricane Iniki, which devastated the island of Kauai in 1992. The second model estimates inundation based on five tsunamis that struck Hawaii. A 1-m increase in sea level was included in both the hurricane storm surge and tsunami flooding models. The third model used in this article generated a 500-year flood event due to riverine flooding. Using a uniform grid cell structure, the three inundation maps were used to assess the worst-case flooding scenario. Based on the flood depths, the ruling hazard (hurricane, tsunami, or riverine flooding) for each grid cell was determined. The hazard layer was analyzed with socioeconomic data layers to determine the impact on vulnerable populations, economic activity, and critical infrastructure. The analysis focused both

  18. Sequential optimization and reliability assessment method for metal forming processes

    International Nuclear Information System (INIS)

    Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.

    2004-01-01

    Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part or in material properties, or due to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case scenario assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions under the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment. This results in a high computational demand, which can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment; the two are decoupled within each cycle. This leads to quick improvement of the design from one cycle to the next and increased computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to the design of a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.
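
    The decoupled cycle structure of SORA can be illustrated on a deliberately tiny reliability-based design problem with a single normal random variable, where the reliability assessment reduces to an analytic percentile shift and the loop converges after one cycle; real SORA replaces step (2) with a most-probable-point search.

```python
from scipy.stats import norm

# Toy problem: minimize f(x) = x subject to P[g(x, u) = x - u >= 0] >= R,
# with u ~ Normal(mu, sigma) and reliability target R (beta = 3 here).
mu, sigma, R = 1.0, 0.2, norm.cdf(3.0)

shift = 0.0
for cycle in range(5):
    # (1) Deterministic optimization with the shifted constraint
    #     x >= mu + shift; for this toy cost the optimum sits on the boundary.
    x_opt = mu + shift
    # (2) Reliability assessment: the u-percentile that g must still satisfy
    #     (analytic here; a most-probable-point search in general).
    new_shift = norm.ppf(R, loc=mu, scale=sigma) - mu
    if abs(new_shift - shift) < 1e-12:
        break
    shift = new_shift

print(x_opt)  # mu + 3 * sigma = 1.6
```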

  19. Multi-point optimization of recirculation flow type casing treatment in centrifugal compressors

    Science.gov (United States)

    Tun, Min Thaw; Sakaguchi, Daisaku

    2016-06-01

    A high pressure ratio and a wide operating range are strongly required of turbochargers in diesel engines. A recirculation-flow-type casing treatment is effective for flow range enhancement of centrifugal compressors. Two ring grooves, one on the suction pipe and one on the shroud casing wall, are connected by means of an annular passage, and a stable recirculation flow is formed at small flow rates from the downstream groove toward the upstream groove through the annular bypass. The shape of the baseline recirculation-flow-type casing is modified and optimized by using a multi-point optimization code with a metamodel-assisted evolutionary algorithm embedding the commercial CFD code CFX from ANSYS. The numerical optimization yields an optimized casing design with improved adiabatic efficiency over a wide operating flow rate range. A sensitivity analysis of efficiency with respect to the design parameters has been performed. It is found that the optimized casing design provides an optimized recirculation flow rate at which the increment in entropy rise is minimized at the grooves and passages of the rotating impeller.

  20. Geochemical modelling of worst-case leakage scenarios at potential CO2-storage sites - CO2 and saline water contamination of drinking water aquifers

    Science.gov (United States)

    Szabó, Zsuzsanna; Edit Gál, Nóra; Kun, Éva; Szőcs, Teodóra; Falus, György

    2017-04-01

    Carbon Capture and Storage is a transitional technology to reduce greenhouse gas emissions and to mitigate climate change. Following the implementation and enforcement of the 2009/31/EC Directive in Hungarian legislation, the Geological and Geophysical Institute of Hungary is required to evaluate the potential CO2 geological storage structures of the country. A basic assessment of these saline water formations has already been performed, and the present goal is to extend the studies to the whole of the storage complex and to consider the protection of fresh water aquifers of the neighbouring area even in unlikely scenarios when CO2 injection has a much more regional effect than planned. In this work, worst-case scenarios are modelled to understand the effects of CO2 or saline water leaks into drinking water aquifers. The dissolution of CO2 may significantly change the pH of fresh water, which induces mineral dissolution and precipitation in the aquifer and therefore changes in solution composition and even rock porosity. Mobilization of heavy metals may also be of concern. Brine migration from the CO2 reservoir and replacement of fresh water in the shallower aquifer may happen due to pressure increase as a consequence of CO2 injection. The saline water causes changes in solution composition which may also induce mineral reactions. The modelling of the above scenarios has been carried out at several methodological levels, such as equilibrium batch, kinetic batch and kinetic reactive transport simulations. All of these have been performed with PHREEQC using the PHREEQC.DAT thermodynamic database. Kinetic models use equations and kinetic rate parameters from the USGS report of Palandri and Kharaka (2004). Reactive transport modelling also considers the estimated fluid flow and dispersivity of the studied formation. Further input parameters are the rock and original ground water compositions of the aquifers and a range of gas-phase CO2 or brine replacement ratios. Worst-case scenarios

  1. Design optimization for active twist rotor blades

    Science.gov (United States)

    Mok, Ji Won

    This dissertation introduces the process of optimizing active twist rotor blades in the presence of embedded anisotropic piezo-composite actuators. Optimum design of active twist blades is a complex task, since it involves a rich design space with tightly coupled design variables. The study presents the development of an optimization framework for active helicopter rotor blade cross-sectional design. This optimization framework allows for exploring a rich and highly nonlinear design space in order to optimize the active twist rotor blades. Different analytical components are combined in the framework: cross-sectional analysis (UM/VABS), an automated mesh generator, a beam solver (DYMORE), a three-dimensional local strain recovery module, and a gradient-based optimizer within MATLAB. Through the mathematical optimization problem, the static twist actuation performance of a blade is maximized while satisfying a series of blade constraints. These constraints are associated with the locations of the center of gravity and elastic axis, the blade mass per unit span, the fundamental rotating blade frequencies, and the blade strength based on local three-dimensional strain fields under worst loading conditions. Through pre-processing, limitations of the proposed process have been studied. When limitations were detected, resolution strategies were proposed. These include mesh overlapping, element distortion, trailing edge tab modeling, electrode modeling and foam implementation in the mesh generator, and the initial point sensitivity of the current optimization scheme. Examples demonstrate the effectiveness of this process. Optimization studies were performed on the NASA/Army/MIT ATR blade case. Even though that design was built and showed significant impact in vibration reduction, the proposed optimization process showed that the design could be improved significantly. The second example, based on a model scale of the AH-64D Apache blade, emphasized the capability of this framework to
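
    The outer loop of such a framework is a constrained maximization driven by a gradient-based optimizer. A minimal sketch of that loop follows, with `twist_actuation`, `blade_mass`, and `rotating_frequency` as hypothetical stand-ins for the UM/VABS and DYMORE analyses; the bounds and constraint limits are invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical stand-ins for the cross-sectional and beam analyses.
    def twist_actuation(x):      # static twist actuation (to be maximized)
        return x[0] * np.cos(x[1]) + 0.5 * x[2]

    def blade_mass(x):           # mass per unit span
        return 1.0 + x[0] ** 2 + 0.2 * x[2] ** 2

    def rotating_frequency(x):   # fundamental rotating blade frequency
        return 2.0 + x[1] - 0.3 * x[0]

    x0 = np.array([0.5, 0.2, 0.5])
    res = minimize(
        lambda x: -twist_actuation(x),            # maximize via the negative
        x0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * 3,
        constraints=[
            {"type": "ineq", "fun": lambda x: 1.6 - blade_mass(x)},         # mass cap
            {"type": "ineq", "fun": lambda x: rotating_frequency(x) - 2.1},  # frequency floor
        ],
    )
    print("design:", res.x, "twist actuation:", -res.fun)
    ```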

  2. Optimized PID control of depth of hypnosis in anesthesia.

    Science.gov (United States)

    Padula, Fabrizio; Ionescu, Clara; Latronico, Nicola; Paltenghi, Massimiliano; Visioli, Antonio; Vivacqua, Giulio

    2017-06-01

    This paper addresses the use of proportional-integral-derivative (PID) controllers for regulating the depth of hypnosis in anesthesia by using propofol administration and the bispectral index as the controlled variable. In fact, introducing an automatic control system might provide significant benefits for the patient by reducing the risk of under- and over-dosing. In this study, the controller parameters are obtained through genetic algorithms by solving a min-max optimization problem. A set of 12 patient models representative of a large population variance is used to test controller robustness. The worst-case performance in the considered population is minimized considering two different scenarios: the induction case and the maintenance case. Our results indicate that including a gain scheduling strategy enables optimal performance for the induction and maintenance phases separately. Using a single tuning to address both tasks may result in a loss of performance of up to 102% in the induction phase and up to 31% in the maintenance phase. Further, it is shown that a suitably designed low-pass filter on the controller output can handle the trade-off between performance and the effect of noise on the control variable. Optimally tuned PID controllers provide a fast induction time with an acceptable overshoot and a satisfactory disturbance rejection performance during maintenance. These features make them a very good tool for comparison when other control algorithms are developed. Copyright © 2017 Elsevier B.V. All rights reserved.
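
    The min-max tuning idea above, minimizing the worst cost across a population of patient models, can be sketched as follows. The toy first-order patient models and the evolutionary solver are illustrative assumptions; the paper uses far richer PK/PD models and its own genetic algorithm.

    ```python
    from scipy.optimize import differential_evolution

    # Toy first-order patient models (gain, time constant); illustrative only.
    PATIENTS = [(1.0, 8.0), (0.7, 12.0), (1.4, 6.0), (0.9, 15.0)]

    def step_cost(pid, gain, tau, dt=0.1, t_end=60.0):
        """Integrated squared error of a discrete PID loop on one patient model."""
        kp, ki, kd = pid
        y = integ = prev_err = 0.0
        cost = 0.0
        for _ in range(int(t_end / dt)):
            err = 1.0 - y                      # unit setpoint step
            integ += err * dt
            deriv = (err - prev_err) / dt
            u = kp * err + ki * integ + kd * deriv
            prev_err = err
            y += dt * (-y + gain * u) / tau    # first-order plant, Euler step
            if abs(y) > 1e6:                   # penalize unstable tunings early
                return 1e9
            cost += err ** 2 * dt
        return cost

    def worst_case_cost(pid):
        # Min-max criterion: a tuning is scored by its worst patient.
        return max(step_cost(pid, g, t) for g, t in PATIENTS)

    res = differential_evolution(worst_case_cost,
                                 bounds=[(0, 5), (0, 1), (0, 5)],
                                 seed=1, maxiter=40)
    print("robust PID gains:", res.x, "worst-case cost:", res.fun)
    ```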

  3. Worst-case optimal approximation algorithms for maximizing triplet consistency within phylogenetic networks

    NARCIS (Netherlands)

    J. Byrka (Jaroslaw); K.T. Huber; S.M. Kelk (Steven); P. Gawrychowski

    2009-01-01

    The study of phylogenetic networks is of great interest to computational evolutionary biology and numerous different types of such structures are known. This article addresses the following question concerning rooted versions of phylogenetic networks. What is the maximum value of pset

  4. Discrete Material Buckling Optimization of Laminated Composite Structures considering "Worst" Shape Imperfections

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik

    2015-01-01

    Robust design of laminated composite structures is considered in this work. Because laminated composite structures are often thin walled, buckling failure can occur prior to material failure, making it desirable to maximize the buckling load. However, as a structure always contains imperfections...... and “worst” shape imperfection optimizations to design robust composite structures. The approach is demonstrated on a U-profile where the imperfection sensitivity is monitored, and based on the example it can be concluded that robust designs can be obtained....

  5. Worst-case execution time analysis-driven object cache design

    DEFF Research Database (Denmark)

    Huber, Benedikt; Puffitsch, Wolfgang; Schoeberl, Martin

    2012-01-01

    Hard real‐time systems need a time‐predictable computing platform to enable static worst‐case execution time (WCET) analysis. All performance‐enhancing features need to be WCET analyzable. However, standard data caches containing heap‐allocated data are very hard to analyze statically. In this paper we explore a new object cache design, which is driven by the capabilities of static WCET analysis. Simulations of standard benchmarks estimating the expected average case performance usually drive computer architecture design. The design decisions derived from this methodology do not necessarily result in a WCET analysis‐friendly design. Aiming for a time‐predictable design, we therefore propose to employ WCET analysis techniques for the design space exploration of processor architectures. We evaluated different object cache configurations using static analysis techniques. The number of field...

  6. Fast meldable priority queues

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting

    1995-01-01

    We present priority queues that support the operations Find-Min, Insert, MakeQueue and Meld in worst case time O(1) and Delete and DeleteMin in worst case time O(log n). They can be implemented on the pointer machine and require linear space. The time bounds are optimal for all implementations wh...
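
    The record's structure is Brodal's pointer-machine queue with worst-case bounds. As a rough illustration of the meldable-heap interface it supports, here is a much simpler pairing heap in Python: its Meld, Insert and Find-Min are O(1), but its DeleteMin is only amortized O(log n), so it does not match the paper's worst-case guarantees.

    ```python
    class PairingHeap:
        """Simple meldable min-heap: Meld/Insert/FindMin O(1), DeleteMin amortized O(log n)."""
        def __init__(self, key=None, children=None):
            self.key, self.children = key, children or []

        def find_min(self):
            return self.key

        def meld(self, other):
            # Attach the larger root under the smaller one; constant time.
            if self.key is None: return other
            if other is None or other.key is None: return self
            if other.key < self.key:
                self, other = other, self
            self.children.append(other)
            return self

        def insert(self, key):
            return self.meld(PairingHeap(key))

        def delete_min(self):
            # Two-pass pairing of the root's children.
            kids = self.children
            paired = [kids[i].meld(kids[i + 1]) for i in range(0, len(kids) - 1, 2)]
            if len(kids) % 2: paired.append(kids[-1])
            root = PairingHeap()
            for h in reversed(paired):
                root = root.meld(h)
            return self.key, root

    h = PairingHeap()
    for x in [5, 1, 4, 2]:
        h = h.insert(x)
    m, h = h.delete_min()
    print(m, h.find_min())   # 1 2
    ```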

  7. An Elite Decision Making Harmony Search Algorithm for Optimization Problem

    Directory of Open Access Journals (Sweden)

    Lipu Zhang

    2012-01-01

    Full Text Available This paper describes a new variant of the harmony search algorithm inspired by the well-known notion of “elite decision making.” In the new algorithm, the good information captured in the current global best and second best solutions is utilized to generate new solutions, following some probability rule. The generated new solution vector replaces the worst solution in the solution set only if its fitness is better than that of the worst solution. The generating and updating steps are repeated until the near-optimal solution vector is obtained. Extensive computational comparisons are carried out by employing various standard benchmark optimization problems, including continuous design variable and integer variable minimization problems from the literature. The computational results show that the proposed new algorithm is competitive in finding solutions with the state-of-the-art harmony search variants.
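
    A minimal sketch of the update rule described above, assuming a simple continuous test function; the mixing probabilities and perturbation size are illustrative guesses, not the paper's calibrated values.

    ```python
    import numpy as np

    def sphere(x):                      # standard benchmark objective
        return float(np.sum(x ** 2))

    rng = np.random.default_rng(0)
    dim, hms, iters = 5, 10, 2000
    memory = rng.uniform(-5, 5, (hms, dim))          # harmony memory
    fitness = np.array([sphere(x) for x in memory])

    for _ in range(iters):
        order = np.argsort(fitness)
        best, second = memory[order[0]], memory[order[1]]
        new = np.empty(dim)
        for j in range(dim):
            r = rng.random()
            if r < 0.45:                # draw from the global best ("elite")
                new[j] = best[j]
            elif r < 0.9:               # draw from the second best solution
                new[j] = second[j]
            else:                       # random exploration
                new[j] = rng.uniform(-5, 5)
            new[j] += 0.01 * rng.standard_normal()   # small pitch adjustment
        worst = order[-1]
        if sphere(new) < fitness[worst]:   # replace the worst member only if better
            memory[worst], fitness[worst] = new, sphere(new)

    print("best solution:", memory[np.argmin(fitness)], "fitness:", fitness.min())
    ```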

  8. Financial appraisal of efficiency investments. Why the good may be the worst enemy of the best

    Energy Technology Data Exchange (ETDEWEB)

    Verbruggen, A. [University of Antwerp, Prinsstraat 13, 2000 Antwerp (Belgium)

    2012-11-15

    This methodological paper has a didactic goal: improving our understanding of what 'cost optimal energy performance of buildings' means and how the financial appraisal of efficiency investments must be set up. Three items merit improvement. First, a focus on the endowment character of the energy performance of long-living assets like buildings. Second, defining cost optimal requires more than a comparative static trade-off scheme; cost optimal refers to dynamic efficiency, which results from technology dynamics induced by changes in society and policy. Third, financial appraisal is a more complex issue than simple net present value and life cycle cost calculations. It must reflect the time-sequential dynamics of real-life processes, including real-life decision making. Financial appraisal is embedded in a complex framework made up of three dimensions: future time, doubt and irrevocability. The latter dimension connects with issues like lock-in and path dependency that are generally overlooked in net present value calculations. This may lead to very erroneous recommendations regarding efficiency investments, in particular regarding the energy performance endowment of buildings. Irrevocability is mostly used as an argument to 'wait and learn', which has, for example, blocked the pace of climate policy. But the opposite, 'choose or lose', is the logical outcome when the methodology is fed with evidenced expectations. The latter boosts energy efficiency to its boundaries, saving it from the middle-of-the-river quagmire where incomplete appraisals drop it too often (making the good the worst enemy of the best).

  9. Value chain management for commodities: a case study from the chemical industry

    DEFF Research Database (Denmark)

    Kannegiesser, M.; Gunther, H.O.; van Beek, P.

    2009-01-01

    We present a planning model for chemical commodities related to an industry case. Commodities are standard chemicals characterized by sales and supply volatility in volume and value. Increasing and volatile prices of crude oil-dependent raw materials require coordination of sales and supply decisions by volume and value throughout the value chain to ensure profitability. Contract and spot demand differentiation with volatile and uncertain spot prices, spot sales quantity flexibility, spot sales price-quantity functions and variable raw material consumption rates in production are problem...... quantity, price and supply decisions throughout the value chain. A two-phase optimization approach supports robust planning ensuring minimum profitability even in case of worst-case spot sales price scenarios. Model evaluations with industry case data demonstrate the impact of elasticities, variable raw...

  10. SU-E-T-07: 4DCT Robust Optimization for Esophageal Cancer Using Intensity Modulated Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Liao, L [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Department of Industrial Engineering, University of Houston, Houston, TX (United States); Yu, J; Zhu, X; Li, H; Zhang, X [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Li, Y [Proton Therapy Center, UT MD Anderson Cancer Center, Houston, TX (United States); Varian Medical Systems, Houston, TX (United States); Lim, G [Department of Industrial Engineering, University of Houston, Houston, TX (United States)

    2015-06-15

    Purpose: To develop a 4DCT robust optimization method to reduce the dosimetric impact of respiratory motion in intensity modulated proton therapy (IMPT) for esophageal cancer. Methods: Four esophageal cancer patients were selected for this study. The different phases of CT from a set of 4DCT were incorporated into the worst-case dose distribution robust optimization algorithm. 4DCT robust treatment plans were designed and compared with the conventional non-robust plans. The resulting doses were calculated on the average and the maximum inhale/exhale phases of the 4DCT. Dose volume histogram (DVH) band graphics and the ΔD95%, ΔD98%, ΔD5%, ΔD2% of the CTV between different phases were used to evaluate the robustness of the plans. Results: Compared to IMPT plans optimized using conventional methods, the 4DCT robust IMPT plans achieve the same quality in nominal cases while yielding better robustness to breathing motion. The mean ΔD95%, ΔD98%, ΔD5% and ΔD2% of the CTV are 6%, 3.2%, 0.9% and 1% for the robustly optimized plans vs. 16.2%, 11.8%, 1.6% and 3.3% for the conventional non-robust plans. Conclusion: A 4DCT robust optimization method was proposed for esophageal cancer using IMPT. We demonstrate that the 4DCT robust optimization can mitigate the dose deviation caused by diaphragm motion.
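
    The worst-case dose distribution method referenced here optimizes spot weights against the most pessimistic breathing phase. A schematic of that idea, assuming random influence matrices in place of real dose calculations, is a projected-subgradient loop on the currently worst scenario:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vox, n_spots, n_phases = 50, 20, 4
    # One dose-influence matrix per breathing phase (random stand-ins here).
    D = [np.abs(rng.standard_normal((n_vox, n_spots))) for _ in range(n_phases)]
    d_presc = np.ones(n_vox)                 # prescription dose per target voxel

    def scenario_obj(w, Dk):
        return float(np.mean((Dk @ w - d_presc) ** 2))

    w = np.full(n_spots, 0.05)               # initial spot weights
    for it in range(500):
        objs = [scenario_obj(w, Dk) for Dk in D]
        k = int(np.argmax(objs))             # active worst-case phase
        grad = 2 * D[k].T @ (D[k] @ w - d_presc) / n_vox
        w = np.maximum(w - 0.01 / np.sqrt(it + 1) * grad, 0.0)  # project to w >= 0

    print("worst-case objective:", max(scenario_obj(w, Dk) for Dk in D))
    ```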

  11.  Optimizing relational algebra operations using discrimination-based joins and lazy products

    DEFF Research Database (Denmark)

    Henglein, Fritz

    We show how to implement in-memory execution of the core relational algebra operations of projection, selection and cross-product efficiently, using discrimination-based joins and lazy products. We introduce the notion of (partitioning) discriminator, which partitions a list of values according...... to a specified equivalence relation on keys the values are associated with. We show how discriminators can be defined generically, purely functionally, and efficiently (worst-case linear time) on top of the array-based basic multiset discrimination algorithm of Cai and Paige (1995). Discriminators provide the basis...... the selection operation to recognize on the fly whenever it is applied to a cross-product, in which case it can choose an efficient discrimination-based equijoin implementation. The techniques subsume most of the optimization techniques based on relational algebra equalities, without need for a query preprocessing...
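
    As a rough Python rendering of the idea (the paper works in a purely functional setting on top of Cai and Paige's array-based multiset discrimination, which is not reproduced here), a discriminator groups values by key, and an equijoin then becomes a product of matching groups rather than a quadratic cross-product:

    ```python
    from itertools import product

    def discriminate(pairs):
        """Partition values by key: [(k, v), ...] -> groups of v with equal k.
        A dict-based stand-in for the array-based multiset discrimination of
        Cai and Paige (1995), which achieves worst-case linear time."""
        groups = {}
        for k, v in pairs:
            groups.setdefault(k, []).append(v)
        return list(groups.values())

    def equijoin(left, right):
        """Join [(k, a)] with [(k, b)] on equal keys via discrimination."""
        tagged = [(k, (0, a)) for k, a in left] + [(k, (1, b)) for k, b in right]
        out = []
        for group in discriminate(tagged):
            ls = [x for side, x in group if side == 0]
            rs = [x for side, x in group if side == 1]
            out.extend(product(ls, rs))
        return out

    print(equijoin([(1, "a"), (2, "b"), (1, "c")], [(1, "x"), (3, "y")]))
    # [('a', 'x'), ('c', 'x')]
    ```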

  12. Tractable Pareto Optimization of Temporal Preferences

    Science.gov (United States)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
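
    Re-applying WLO iteratively amounts to a leximin comparison: after maximizing the minimum value, the worst component is fixed and the next-worst is improved, and so on. For a finite candidate set this reduces to comparing sorted value vectors, as in this small sketch (the candidate solutions are invented for illustration):

    ```python
    def leximin_best(solutions):
        """Pick the solution whose sorted preference vector is lexicographically
        largest: maximize the worst value, then the second worst, and so on."""
        return max(solutions, key=lambda vals: sorted(vals))

    candidates = [
        (0.6, 0.9, 0.6),   # worst value 0.6, second-worst 0.6
        (0.6, 0.7, 1.0),   # worst value 0.6, second-worst 0.7 -> wins the tie-break
        (0.5, 1.0, 1.0),   # eliminated at the first WLO stage: worst value 0.5
    ]
    print(leximin_best(candidates))   # (0.6, 0.7, 1.0)
    ```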

  13. Minimizing the health and climate impacts of emissions from heavy-duty public transportation bus fleets through operational optimization.

    Science.gov (United States)

    Gouge, Brian; Dowlatabadi, Hadi; Ries, Francis J

    2013-04-16

    In contrast to capital control strategies (i.e., investments in new technology), the potential of operational control strategies (e.g., vehicle scheduling optimization) to reduce the health and climate impacts of the emissions from public transportation bus fleets has not been widely considered. This case study demonstrates that heterogeneity in the emission levels of different bus technologies and the exposure potential of bus routes can be exploited through optimization (e.g., of how vehicles are assigned to routes) to minimize these impacts as well as operating costs, as sketched below. The magnitude of the benefits of the optimization depends on the specific transit system and region. Health impacts were found to be particularly sensitive to different vehicle assignments and ranged from the worst to the best case assignment by more than a factor of 2, suggesting there is significant potential to reduce health impacts. Trade-offs between climate, health, and cost objectives were also found. Transit agencies that do not consider these objectives in an integrated framework and, for example, optimize for costs and/or climate impacts alone, risk inadvertently increasing health impacts by as much as 49%. Cost-benefit analysis was used to evaluate trade-offs between objectives, but large uncertainties make identifying an optimal solution challenging.
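
    Once a combined impact score per bus-route pair is estimated, the assignment question at the heart of the study can be posed as a linear assignment problem; the matrix below is invented for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # impact[i, j]: monetized health + climate + operating cost of assigning
    # bus i (rows: e.g. old diesel, new diesel, hybrid) to route j
    # (columns: e.g. dense urban, suburban, highway). Values are illustrative.
    impact = np.array([
        [9.0, 6.5, 4.0],
        [7.0, 5.0, 3.5],
        [5.5, 4.5, 3.0],
    ])

    rows, cols = linear_sum_assignment(impact)   # minimizes total impact
    for i, j in zip(rows, cols):
        print(f"bus {i} -> route {j} (impact {impact[i, j]})")
    print("total impact:", impact[rows, cols].sum())
    ```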

  14. When does treatment plan optimization require inverse planning?

    International Nuclear Information System (INIS)

    Sherouse, George W.

    1995-01-01

    Increasing maturity of image-based computer-aided design of three-dimensional conformal radiotherapy has recently sparked a great deal of work in the area of treatment plan optimization. Optimization of a conformal photon beam treatment plan is that exercise through which a set of intensity-modulated static beams or arcs is specified such that, when the plan is executed, 1) a region of homogeneous dose is produced in the patient with a shape which geometrically conforms (within a specified tolerance) to the three-dimensional shape of a designated target volume and 2) acceptably low incidental dose is delivered to non-target tissues. Interest in conformal radiotherapy arises from a fundamental assumption that there is significant value to be gained from aggressive customization of the treatment for each individual patient. In our efforts to design optimal treatments, however, it is important to remember that, given the biological and economic realities of clinical radiotherapy, mathematical optimization of dose distribution metrics with respect to some minimal constraint set is not a necessary or even sufficient condition for the design of a clinically optimal treatment. There is wide variation in the complexity of the clinical situations encountered in practice, and there are a number of non-physical criteria to be considered in planning. There is also a complementary variety of computational and engineering means for achieving optimization. To date, the scientific dialogue regarding these techniques has concentrated on the development of solutions to worst-case scenarios, largely without consideration of appropriately matching solution complexity to problem complexity. It is the aim of this presentation to propose a provisional stratification of treatment planning problems by relative complexity, and to identify a corresponding stratification of necessary treatment planning techniques. It is asserted that the subset of clinical radiotherapy cases for

  15. Numerical simulation and optimized design of cased telescoped ammunition interior ballistic

    Directory of Open Access Journals (Sweden)

    Jia-gang Wang

    2018-04-01

    Full Text Available In order to achieve an optimized design of cased telescoped ammunition (CTA) interior ballistics, a genetic algorithm was introduced into the optimal design and coupled with the CTA interior ballistic model. Given the interior ballistic characteristics of a CTA gun, the goal of CTA interior ballistic design is to obtain as large a projectile muzzle velocity as possible. The optimal design of CTA interior ballistics is carried out using a genetic algorithm by fixing the peak pressure and varying the chamber volume and gunpowder charge density. A numerical simulation of interior ballistics based on a 35 mm CTA firing experimental scheme was conducted, and the genetic algorithm was then used for numerical optimization. The projectile muzzle velocity of the optimized scheme increased from 1168 m/s for the initial experimental scheme to 1182 m/s. Four optimization schemes were then obtained from several independent optimization runs. The schemes were compared with each other, and the differences between them are small: the peak pressure and muzzle velocity of these schemes are almost the same. The result shows that the genetic algorithm is effective for the optimal design of CTA interior ballistics. This work lays the foundation for further CTA interior ballistic design. Keywords: Cased telescoped ammunition, Interior ballistics, Gunpowder, Optimization genetic algorithm

  16. Impact of the Worst School Experiences in Students: A Retrospective Study on Trauma

    Directory of Open Access Journals (Sweden)

    Paloma Pegolo de Albuquerque

    2015-12-01

    Full Text Available The literature indicates damage to students' mental health in cases of school violence. The aim of this retrospective study was to evaluate the psychological impact of school victimization in university students, and to analyze the association between PTSD symptoms and variables related to school victimization. In total, 691 university students responded to the Portuguese version of the Student Alienation and Trauma Survey (SATS). Clinically significant scores in the subscales ranged from 4.7% (somatic symptoms) to 20% (hypervigilance), with frequent symptoms described in the literature resulting from school victimization, such as depression, hopelessness, cognitive difficulties, and traumatic event recollection. Additionally, 7.8% of participants presented PTSD symptoms after suffering their "worst school experience". Associations were found between PTSD symptoms and the level of distress after the experience, as well as the perceived benefits after the event, and its duration. The results confirm the potential detrimental effects of school victimization, and may be useful for further investigations on this topic.

  17. Verification and synthesis of optimal decision strategies for complex systems

    International Nuclear Information System (INIS)

    Summers, S. J.

    2013-01-01

    that quantifies the probability of hitting a target set at some point during a finite time horizon, while avoiding an obstacle set during each time step preceding the target hitting time. In contrast with the general reach-avoid formulation, which assumes that the target and obstacle sets are constant and deterministic, we allow these sets to be both time-varying and probabilistic. An optimal reach-avoid control policy is derived as the solution to an optimal control problem via dynamic programming. A framework for analyzing probabilistic safety and reachability problems for discrete time stochastic hybrid systems in scenarios where system dynamics are affected by rational competing agents follows. We consider a zero sum game formulation of the probabilistic reach-avoid problem, in which the control objective is to maximize the probability of reaching a desired subset of the hybrid state space, while avoiding an unsafe set, subject to the worst case behavior of a rational adversary. Theoretical results are provided on a dynamic programming algorithm for computing the maximal reach-avoid probability under the worst-case adversary strategy, as well as the existence of a maxmin control policy that achieves this probability. Probabilistic Computation Tree Logic (PCTL) is a well-known modal logic that has become a standard for expressing temporal properties of finite state Markov chains in the context of automated model checking. Here we consider PCTL for non countable-space Markov chains, and we show that there is a substantial affinity between certain of its operators and problems of dynamic programming. We prove some basic properties of the solutions to the latter. The dissertation concludes with a collection of computational examples in the areas of ecology, robotics, aerospace, and finance. (author)

  19. Feasibility and robustness of dose painting by numbers in proton therapy with contour-driven plan optimization

    International Nuclear Information System (INIS)

    Barragán, A. M.; Differding, S.; Lee, J. A.; Sterpin, E.; Janssens, G.

    2015-01-01

    Purpose: To prove the ability of protons to reproduce a dose gradient that matches a dose painting by numbers (DPBN) prescription in the presence of setup and range errors, by using contours and structure-based optimization in a commercial treatment planning system. Methods: For two patients with head and neck cancer, a voxel-by-voxel prescription to the target volume (GTV-PET) was calculated from 18FDG-PET images and approximated with several discrete prescription subcontours. Treatments were planned with proton pencil beam scanning. In order to determine the optimal plan parameters to approach the DPBN prescription, the effects of the scanning pattern, number of fields, number of subcontours, and use of a range shifter were separately tested on each patient. Different constant scanning grids (i.e., spot spacing = Δx = Δy = 3.5, 4, and 5 mm) and uniform energy layer separations [4 and 5 mm WED (water equivalent distance)] were analyzed versus a dynamic and automatic selection of the spot grid. The number of subcontours was increased from 3 to 11 while the number of beams was set to 3, 5, or 7. Conventional PTV-based and robust clinical target volume (CTV)-based optimization strategies were considered and their robustness against range and setup errors assessed. Because of the nonuniform prescription, ensuring robustness for coverage of the GTV-PET inevitably leads to overdosing, which was compared for both optimization schemes. Results: The optimal number of subcontours ranged from 5 to 7 for both patients. All considered scanning grids achieved accurate dose painting (1% average difference between the prescribed and planned doses). PTV-based plans led to nonrobust target coverage while robust-optimized plans improved it considerably (the difference between the worst-case CTV dose and the clinical constraint was up to 3 Gy for PTV-based plans and did not exceed 1 Gy for robust CTV-based plans). Also, only 15% of the points in the GTV-PET (worst case) were above 5% of DPBN

  20. Worst-case Analysis of Strategy Iteration and the Simplex Method

    DEFF Research Database (Denmark)

    Hansen, Thomas Dueholm

    In this dissertation we study strategy iteration (also known as policy iteration) algorithms for solving Markov decision processes (MDPs) and two-player turn-based stochastic games (2TBSGs). MDPs provide a mathematical model for sequential decision making under uncertainty. They are widely used...... to model stochastic optimization problems in various areas ranging from operations research, machine learning, artificial intelligence, economics and game theory. The class of two-player turn-based stochastic games is a natural generalization of Markov decision processes that is obtained by introducing...... in the size of the problem (the bounds have subexponential form). Utilizing a tight connection between MDPs and linear programming, it is shown that the same bounds apply to the corresponding pivoting rules for the simplex method for solving linear programs. Prior to this result no super-polynomial lower...
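
    Strategy (policy) iteration for an MDP alternates exact policy evaluation with greedy improvement; a compact sketch of the textbook algorithm on a random MDP follows (the transition data are synthetic, and nothing here reproduces the dissertation's lower-bound constructions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions, gamma = 6, 3, 0.9
    P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] is a distribution
    R = rng.uniform(0, 1, (n_states, n_actions))                      # immediate rewards

    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[np.arange(n_states), policy]
        r_pi = R[np.arange(n_states), policy]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        q = R + gamma * np.einsum("sat,t->sa", P, v)
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy

    print("optimal policy:", policy, "values:", np.round(v, 3))
    ```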

  1. Australian Public Preferences for the Funding of New Health Technologies: A Comparison of Discrete Choice and Profile Case Best-Worst Scaling Methods.

    Science.gov (United States)

    Whitty, Jennifer A; Ratcliffe, Julie; Chen, Gang; Scuffham, Paul A

    2014-07-01

    Ethical, economic, political, and legitimacy arguments support the consideration of public preferences in health technology decision making. The objective was to assess public preferences for funding new health technologies and to compare profile case best-worst scaling (BWS) with the traditional discrete choice experiment (DCE) method. An online survey consisting of a DCE and a BWS task was completed by 930 adults recruited via an Internet panel. Respondents traded off between seven technology attributes. Participation quotas broadly reflected the population of Queensland, Australia, by gender and age. Choice data were analyzed using a generalized multinomial logit model. The findings from both the BWS and DCE were generally consistent in that respondents exhibited stronger preferences for technologies offering prevention or early diagnosis over other benefit types. Respondents also prioritized technologies that benefit younger people, larger numbers of people, those in rural areas, or indigenous Australians; that provide value for money; that have no available alternative; or that upgrade an existing technology. However, the relative preference weights and consequent preference orderings differed between the DCE and BWS models. Further, poor correlation between the DCE and BWS weights was observed. While only a minority of respondents reported difficulty completing either task (22.2% DCE, 31.9% BWS), the majority (72.6%) preferred the DCE over the BWS task. This study provides reassurance that many criteria routinely used for technology decision making are considered to be relevant by the public. The findings clearly indicate the perceived importance of prevention and early diagnosis. The dissimilarity observed between DCE and profile case BWS weights is contrary to the findings of previous comparisons and raises uncertainty regarding the comparative merits of these stated preference methods in a priority-setting context. © The Author(s) 2014.

  2. Guaranteed Discrete Energy Optimization on Large Protein Design Problems.

    Science.gov (United States)

    Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas

    2015-12-08

    In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding an optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, leading to design sequences that could differ from the optimal sequence by more than 30% of their amino acids.

  3. Data-Driven Zero-Sum Neuro-Optimal Control for a Class of Continuous-Time Unknown Nonlinear Systems With Disturbance Using ADP.

    Science.gov (United States)

    Wei, Qinglai; Song, Ruizhuo; Yan, Pengfei

    2016-02-01

    This paper is concerned with a new data-driven zero-sum neuro-optimal control problem for continuous-time unknown nonlinear systems with disturbance. According to the input-output data of the nonlinear system, an effective recurrent neural network is introduced to reconstruct the dynamics of the nonlinear system. Considering the system disturbance as a control input, a two-player zero-sum optimal control problem is established. Adaptive dynamic programming (ADP) is developed to obtain the optimal control under the worst case of the disturbance. Three single-layer neural networks, including one critic and two action networks, are employed to approximate the performance index function, the optimal control law, and the disturbance, respectively, for facilitating the implementation of the ADP method. Convergence properties of the ADP method are developed to show that the system state will converge to a finite neighborhood of the equilibrium. The weight matrices of the critic and the two action networks are also convergent to finite neighborhoods of their optimal ones. Finally, the simulation results will show the effectiveness of the developed data-driven ADP methods.
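
    Treating the disturbance as a second, adversarial player leads to a saddle-point problem; a generic zero-sum performance index of the kind implied here (an H-infinity-style form given for orientation, not quoted from the paper) is

    $$
    J(x_0; u, d) \;=\; \int_0^{\infty} \left( x^{\top} Q x + u^{\top} R u - \gamma^2 d^{\top} d \right) \mathrm{d}t,
    \qquad
    J^{*}(x_0) \;=\; \min_{u} \max_{d} J(x_0; u, d),
    $$

    with the critic network approximating $J^{*}$ and the two action networks approximating the saddle-point control and disturbance policies.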

  4. Is Time Predictability Quantifiable?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2012-01-01

    Computer architects and researchers in the real-time domain have started to investigate processors and architectures optimized for real-time systems. Optimized for real-time systems means time predictable, i.e., architectures where it is possible to statically derive a tight bound of the worst-case execution time. To compare different approaches we would like to quantify time predictability. That means we need to measure time predictability. In this paper we discuss the different approaches for these measurements and conclude that time predictability is practically not quantifiable. We can only compare the worst-case execution time bounds of different architectures.

  5. Optimization of the Case Based Reasoning Systems

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2014-01-01

    Intrusion Detection Systems (IDS) are of great importance in protecting the integrity of information spread all over the world through networks. Many case-based systems have addressed the different methods of unauthorized users/hackers that confront IDS developers. The proposed system introduces a new hybrid system that uses a genetic algorithm to optimize an IDS case-based system. It can detect new anomalies appearing in the network and use the cases in the case library to determine the suitable solution for their behavior. The suggested system can solve the problem either by reusing an identical old solution or by adapting the optimum one until the targeted solution is reached. The proposed system has been applied to block unauthorized users/hackers from attacking medical images for radiotherapy of cancer diseases during their transmission over the web. The proposed system demonstrated acceptable performance in this application.

  6. Solar cooking in Senegalese villages: An application of best–worst scaling

    International Nuclear Information System (INIS)

    Vanschoenwinkel, Janka; Lizin, Sebastien; Swinnen, Gilbert; Azadi, Hossein; Van Passel, Steven

    2014-01-01

    Dissemination programs for nontraditional cookstoves often fail. Nontraditional cookstoves aim to solve problems associated with biomass fuel usage in developing countries. Recent studies do not explain what drives users' cookstove choice. This study therefore builds a holistic framework that centralizes product-specific preferences or needs. The case study identifies product-specific factors that influence rural Senegalese inhabitants to switch to solar cooking, using best–worst scaling. Looking at the preferences, the case study classified the 126 respondents into three distinct market segments with different solar cooking expectations. The paper identifies socio-demographic characteristics that explain these differences in the respondents' preferences. Finally, the respondent sample is divided into two groups: solar cooker owners and non-owners. When studied with regard to the same issue, solar cooker owners appear to value the benefits of the solar cooker lower than non-owners. This is due to program factors (such as training and the after-sales network) and miscommunication (such as a wrong image of the solar cooker) that highly influenced the respondents' cookstove choice. In conclusion, solar cookers and solar cooking programs are not always adapted to the needs and requirements of the end-users. Needs-oriented and end-user-adapted strategies are necessary in order to successfully implement nontraditional cookstove programs. - Highlights: • Current solar cookers and their programs do not sufficiently fit end-users' needs. • We centralize product-specific preferences in a framework integrating all variables. • Looking at these preferences, three distinct market segments are identified. • Preferences are influenced by both socio-demographic and program characteristics

  7. Intelligent and robust optimization frameworks for smart grids

    Science.gov (United States)

    Dhansri, Naren Reddy

    A smart grid implies a cyberspace real-time distributed power control system to optimally deliver electricity based on varying consumer characteristics. Although smart grids solve many contemporary problems, they give rise to new control and optimization problems with the growing role of renewable energy sources such as wind or solar energy. Given the highly dynamic nature of distributed power generation and the varying consumer demand and cost requirements, the total power output of the grid should be controlled such that the load demand is met by giving a higher priority to renewable energy sources. Hence, the power generated from renewable energy sources should be maximized while minimizing the generation from non-renewable energy sources. This research develops a demand-based automatic generation control and optimization framework for real-time smart grid operations by integrating conventional and renewable energy sources under varying consumer demand and cost requirements. Focusing on the renewable energy sources, the intelligent and robust control frameworks optimize the power generation by tracking the consumer demand in a closed-loop control framework, yielding superior economic and ecological benefits, circumventing nonlinear model complexities, and handling uncertainties for superior real-time operations. The proposed intelligent system framework optimizes the smart grid power generation for maximum economical and ecological benefits under an uncertain renewable wind energy source. The numerical results demonstrate that the proposed framework is a viable approach to integrate various energy sources for real-time smart grid implementations. The robust optimization framework results demonstrate the effectiveness of the robust controllers under bounded power plant model uncertainties and exogenous wind input excitation while maximizing economical and ecological performance objectives. Therefore, the proposed framework offers a new worst-case deterministic

  8. Optimal river monitoring network using optimal partition analysis: a case study of Hun River, Northeast China.

    Science.gov (United States)

    Wang, Hui; Liu, Chunyue; Rong, Luge; Wang, Xiaoxu; Sun, Lina; Luo, Qing; Wu, Hao

    2018-01-09

    River monitoring networks play an important role in water environmental management and assessment, and it is critical to develop an appropriate method to optimize the monitoring network. In this study, an effective method was proposed based on the attainment rate of National Grade III water quality, optimal partition analysis, and Euclidean distance, and the Hun River was taken as a method validation case. There were 7 sampling sites in the monitoring network of the Hun River, and 17 monitoring items were analyzed once a month from January 2009 to December 2010. The results showed that the main monitoring items in the surface water of the Hun River were ammonia nitrogen (NH4+-N), chemical oxygen demand, and biochemical oxygen demand. After optimization, the required number of monitoring sites was reduced from seven to three, and 57% of the cost was saved. In addition, there were no significant differences between the non-optimized and optimized monitoring networks, and the optimized network could correctly represent the original monitoring network. The duplicate setting degree of monitoring sites decreased after optimization, and the rationality of the monitoring network was improved. Therefore, the optimal method was identified as feasible, efficient, and economic.

  10. The worst case scenario: Locomotor and collision demands of the longest periods of gameplay in professional rugby union

    Science.gov (United States)

    Reardon, Cillian; Tobin, Daniel P.; Tierney, Peter; Delahunt, Eamonn

    2017-01-01

    A number of studies have used global positioning systems (GPS) to report on positional differences in the physical game demands of rugby union both on an average and singular bout basis. However, the ability of these studies to report quantitative data is limited by a lack of validation of certain aspects of measurement by GPS micro-technology. Furthermore, no study has analyzed the positional physical demands of the longest bouts of ball-in-play time in rugby union. The aim of the present study is to compare the demands of the single longest period of ball-in-play, termed “worst case scenario” (WCS), between positional groups, which have previously been reported to have distinguishable game demands. The results of this study indicate that WCS periods follow a similar sporadic pattern as average demands but are played at a far higher pace than previously reported for average game demands, with an average of 116.8 m·min-1 covered. The positional differences in running and collision activity previously reported are perpetuated within WCS periods. Backs covered greater total distances than forwards (318 m vs 289 m), carried out more high-speed running (11.1 m·min-1 vs 5.5 m·min-1) and achieved higher maximum velocities (MaxVel). Outside Backs achieved the highest MaxVel values (6.84 m·sec-1). Tight Five and Back Row forwards underwent significantly more collisions than Inside Backs and Outside Backs (0.73 & 0.89 collisions·min-1 vs 0.28 & 0.41 collisions·min-1, respectively). The results of the present study provide information on the positional physical requirements of performance in prolonged periods involving multiple high intensity bursts of effort. Although the current state of GPS micro-technology as a measurement tool does not permit reporting of collision intensity or acceleration data, the combined use of video and GPS provides valuable information to the practitioner. This can be used to match and replicate game demands in training. PMID:28510582

  11. On precision of optimization in the case of incomplete information

    Czech Academy of Sciences Publication Activity Database

    Volf, Petr

    2012-01-01

    Roč. 19, č. 30 (2012), s. 170-184 ISSN 1212-074X R&D Projects: GA ČR GAP402/10/0956 Institutional support: RVO:67985556 Keywords : stochastic optimization * censored data * Fisher information * product-limit estimator Subject RIV: BB - Applied Statistics, Operational Research http://library.utia.cas.cz/separaty/2013/SI/volf-on precision of optimization in the case of incomplete information.pdf

  12. The Treeterbi and Parallel Treeterbi algorithms: efficient, optimal decoding for ordinary, generalized and pair HMMs.

    Science.gov (United States)

    Keibler, Evan; Arumugam, Manimozhiyan; Brent, Michael R

    2007-03-01

    Hidden Markov models (HMMs) and generalized HMMs have been successfully applied to many problems, but the standard Viterbi algorithm for computing the most probable interpretation of an input sequence (known as decoding) requires memory proportional to the length of the sequence, which can be prohibitive. Existing approaches to reducing memory usage either sacrifice optimality or trade increased running time for reduced memory. We developed two novel decoding algorithms, Treeterbi and Parallel Treeterbi, and implemented them in the TWINSCAN/N-SCAN gene-prediction system. The worst-case asymptotic space and time are the same as for standard Viterbi, but in practice, Treeterbi optimally decodes arbitrarily long sequences with generalized HMMs in bounded memory without increasing running time. Parallel Treeterbi uses the same ideas to split optimal decoding across processors, dividing latency to completion by approximately the number of available processors with constant average overhead per processor. Using these algorithms, we were able to optimally decode all human chromosomes with N-SCAN, which increased its accuracy relative to heuristic solutions. We also implemented Treeterbi for Pairagon, our pair HMM based cDNA-to-genome aligner. The TWINSCAN/N-SCAN/PAIRAGON open source software package is available from http://genes.cse.wustl.edu.
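
    For contrast with the Treeterbi variants, here is the standard Viterbi recursion the record refers to, whose T-by-N backpointer table is exactly the linear-in-sequence-length memory cost (a toy two-state HMM; the Treeterbi algorithms themselves are not reproduced here):

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most probable state path; the T-by-N backpointer table is the
        memory cost that Treeterbi avoids in practice."""
        T, N = len(obs), len(pi)
        logd = np.log(pi) + np.log(B[:, obs[0]])
        back = np.zeros((T, N), dtype=int)
        for t in range(1, T):
            scores = logd[:, None] + np.log(A)          # scores[i, j]: i -> j
            back[t] = scores.argmax(axis=0)
            logd = scores.max(axis=0) + np.log(B[:, obs[t]])
        path = [int(logd.argmax())]
        for t in range(T - 1, 0, -1):                   # trace backpointers
            path.append(int(back[t][path[-1]]))
        return path[::-1]

    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])              # state transitions
    B = np.array([[0.9, 0.1], [0.2, 0.8]])              # emission probabilities
    print(viterbi([0, 0, 1, 1, 0], pi, A, B))
    ```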

  13. Dynamic Planar Convex Hull with Optimal Query Time and O(log n · log log n ) Update Time

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jakob, Riko

    2000-01-01

    The dynamic maintenance of the convex hull of a set of points in the plane is one of the most important problems in computational geometry. We present a data structure supporting point insertions in amortized O(log n · log log log n) time, point deletions in amortized O(log n · log log n) time......, and various queries about the convex hull in optimal O(log n) worst-case time. The data structure requires O(n) space. Applications of the new dynamic convex hull data structure are improved deterministic algorithms for the k-level problem and the red-blue segment intersection problem where all red and all...

  14. Best-worst scaling to assess the most important barriers and facilitators for the use of health technology assessment in Austria.

    Science.gov (United States)

    Feig, Chiara; Cheung, Kei Long; Hiligsmann, Mickaël; Evers, Silvia M A A; Simon, Judit; Mayer, Susanne

    2018-04-01

    Although Health Technology Assessment (HTA) is increasingly used to support evidence-based decision-making in health care, several barriers and facilitators for the use of HTA have been identified. This best-worst scaling (BWS) study aims to assess the relative importance of selected barriers and facilitators of the uptake of HTA studies in Austria. A BWS object case survey was conducted among 37 experts in Austria to assess the relative importance of HTA barriers and facilitators. Hierarchical Bayes estimation was applied, with the best-worst count analysis as sensitivity analysis. Subgroup analyses were also performed on professional role and HTA experience. The most important barriers were 'lack of transparency in the decision-making process', 'fragmentation', 'absence of appropriate incentives', 'no explicit framework for decision-making process', and 'insufficient legal support'. The most important facilitators were 'transparency in the decision-making process', 'availability of relevant HTA research for policy makers', 'availability of explicit framework for decision-making process', 'sufficient legal support', and 'appropriate incentives'. This study suggests that HTA barriers and facilitators related to the context of decision makers, especially 'policy characteristics' and 'organization and resources' are the most important in Austria. A transparent and participatory decision-making process could improve the adoption of HTA evidence.

  15. A New Method for Solving Multiobjective Bilevel Programs

    Directory of Open Access Journals (Sweden)

    Ying Ji

    2017-01-01

    Full Text Available We study a class of multiobjective bilevel programs with the weights of objectives being uncertain and assumed to belong to a convex and compact set. To the best of our knowledge, there is no study about this class of problems. We use a worst-case weighted approach to solve this class of problems. Our “worst-case weighted multiobjective bilevel programs” model supposes that each player (leader or follower) has a set of weights to their objectives and wishes to minimize their maximum weighted sum objective, where the maximization is with respect to the set of weights. This new model gives rise to a new Pareto optimum concept, which we call “robust-weighted Pareto optimum”; for the worst-case weighted multiobjective optimization with the weight set of each player given as a polytope, we show that a robust-weighted Pareto optimum can be obtained by solving a mathematical program with equilibrium constraints (MPEC). As an application, we illustrate the usefulness of the worst-case weighted multiobjective optimization for supply chain risk management under demand uncertainty. By comparison with the existing weighted approach, we show that our method is more robust and can be more efficiently applied to real-world problems.
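
    In symbols, the worst-case weighted criterion for a single player with objectives $f_1,\dots,f_m$ and weight set $W$ reads as follows (a generic rendering consistent with the abstract rather than the paper's exact notation):

    $$
    \min_{x \in X} \; \max_{w \in W} \; \sum_{i=1}^{m} w_i f_i(x),
    \qquad W \subseteq \Delta^{m-1} \text{ convex and compact},
    $$

    where $\Delta^{m-1}$ is the probability simplex; in the bilevel setting both the leader and the follower solve a problem of this form, and a minimizer corresponds to a robust-weighted Pareto optimum.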

  16. Optimization guidance to post-accident intervention: a specific case

    International Nuclear Information System (INIS)

    Garcia-Ramirez, J.E.; Reyes-Sanchez, M.A.

    1996-01-01

    ICRP recommends the application of the system of protection to intervention situations, i.e. those in which exposure pathways are already present, e.g., the public exposure following an accident. This implies that intervention must be justified and optimized, optimization being the process of deciding the nature of the protective action in order to obtain the maximum net benefit. This paper provides an example of an optimization model to guide a decision-making process in a specific case of post-accident intervention. The scenario postulates the contamination of large quantities of reinforcing steel bars used in the construction industry, many of them present in the structure of several dwellings. The inhabitants of these dwellings must be protected, and the proposed action is to demolish those homes exceeding some intervention criterion. The objective of this study is to derive such an intervention level through an economically focused optimization process. (author)

  17. Optimizing the construction of devices to control inaccessible surfaces - case study

    Science.gov (United States)

    Niţu, E. L.; Costea, A.; Iordache, M. D.; Rizea, A. D.; Babă, Al

    2017-10-01

    The modern concept for the evolution of manufacturing systems requires multi-criteria optimization of technological processes and equipment, prioritizing the associated criteria according to their importance. Technological preparation of manufacturing can be developed, depending on the volume of production, up to the limit of favourable economic effects related to the recovery of the costs of designing and executing the technological equipment. Devices, as subsystems of the technological system, in the general context of the modernization and diversification of machines, tools, semi-finished products and drives, are made in a multitude of constructive variants, which in many cases do not allow their identification, study and improvement. This paper presents a case study in which the multi-criteria analysis of some structures, based on a general optimization method of novel character, is used to determine the optimal construction variant of a control device. The rational construction of the control device confirms that the optimization method and the proposed calculation methods are correct and determine a different system configuration, new features and functions, and a specific method of working to control inaccessible surfaces.

  18. Return Persistence and Fund Flows in the Worst Performing Mutual Funds

    OpenAIRE

    Jonathan B. Berk; Ian Tonks

    2007-01-01

    We document that the observed persistence amongst the worst performing actively managed mutual funds is attributable to funds that have performed poorly both in the current and prior year. We demonstrate that this persistence results from an unwillingness of investors in these funds to respond to bad performance by withdrawing their capital. In contrast, funds that only performed poorly in the current year have a significantly larger (out)flow of funds/return sensitivity and consequently show...

  19. Optimal wireless receiver structure for omnidirectional inductive power transmission to biomedical implants.

    Science.gov (United States)

    Gougheri, Hesam Sadeghi; Kiani, Mehdi

    2016-08-01

    In order to achieve omnidirectional inductive power transmission to biomedical implants, the use of several orthogonal coils on the receiver side (Rx) has been proposed in the past. In this paper, the optimal Rx structure for connecting three orthogonal Rx coils and the power management is found to achieve the maximum power delivered to the load (PDL) in the presence of any Rx coil tilting. Unlike previous works, in which a separate power management circuit has been used for each coil to deliver power to the load, different resonant Rx structures for connecting three Rx coils to a single power management circuit are studied. In simulations, connecting three Rx coils with diameters of 3 mm, 3.3 mm, and 3.6 mm in series and resonating them with a single capacitor at the operation frequency of 100 MHz led to the maximum PDL for large loads when the implant was tilted by 45°. This optimal Rx structure achieves a higher PDL in worst-case scenarios and reduces the number of power management circuits to only one.
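
    The series-resonant condition described here fixes the single capacitor from the total series inductance at the 100 MHz carrier. A small back-of-the-envelope sketch follows; the coil inductance values are invented, and mutual inductance between the orthogonal coils is neglected as near zero by symmetry.

    ```python
    import math

    f = 100e6                                # operation frequency, Hz
    coil_L = [18e-9, 20e-9, 22e-9]           # hypothetical inductances of the 3 Rx coils, H

    # Orthogonal coils have near-zero mutual coupling, so the series inductance
    # is approximately the plain sum of the self-inductances.
    L_total = sum(coil_L)

    # Series resonance: omega^2 * L * C = 1  =>  C = 1 / ((2*pi*f)^2 * L)
    C = 1.0 / ((2 * math.pi * f) ** 2 * L_total)
    print(f"resonant capacitor: {C * 1e12:.1f} pF")   # about 42 pF for these values
    ```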

  20. Wine consumers’ preferences in Spain: an analysis using the best-worst scaling approach

    Directory of Open Access Journals (Sweden)

    Tiziana de-Magistris

    2014-06-01

    Full Text Available Research on wine consumers' preferences has largely been explored in the academic literature, and the importance of wine attributes has been measured by rating or ranking scales. However, the most recent literature on wine preferences has applied the best-worst scaling approach to avoid the biased outcomes derived from using rating or ranking scales in surveys. This study investigates premium red wine consumers' preferences in Spain by applying the best-worst scaling approach. To achieve this goal, a random parameter logit model is applied to assess the impacts of wine attributes on the probability of choosing premium quality red wine, using data from an ad-hoc survey conducted in a medium-sized Spanish city. The results suggest that some wine attributes related to past experience (i.e. it matches food), followed by some related to personal knowledge (i.e. the designation of origin), are valued as the most important, whereas other attributes related to the image of the New World (i.e. label or brand name) are perceived as the least important or indifferent.

  1. Including robustness in multi-criteria optimization for intensity-modulated proton therapy

    Science.gov (United States)

    Chen, Wei; Unkelbach, Jan; Trofimov, Alexei; Madden, Thomas; Kooy, Hanne; Bortfeld, Thomas; Craft, David

    2012-02-01

    We present a method to include robustness in a multi-criteria optimization (MCO) framework for intensity-modulated proton therapy (IMPT). The approach allows one to simultaneously explore the trade-off between different objectives as well as the trade-off between robustness and nominal plan quality. In MCO, a database of plans, each emphasizing different treatment planning objectives, is pre-computed to approximate the Pareto surface. An IMPT treatment plan that strikes the best balance between the different objectives can be selected by navigating on the Pareto surface. In our approach, robustness is integrated into MCO by adding robustified objectives and constraints to the MCO problem. Uncertainties (or errors) of the robust problem are modeled by pre-calculated dose-influence matrices for a nominal scenario and a number of pre-defined error scenarios (shifted patient positions, proton beam undershoot and overshoot). Objectives and constraints can be defined for the nominal scenario, thus characterizing nominal plan quality. A robustified objective represents the worst objective function value that can be realized for any of the error scenarios and thus provides a measure of plan robustness. The optimization method is based on a linear projection solver and is capable of handling large problem sizes resulting from a fine dose grid resolution, many scenarios, and a large number of proton pencil beams. A base-of-skull case is used to demonstrate the robust optimization method. It is demonstrated that the robust optimization method reduces the sensitivity of the treatment plan to setup and range errors to a degree that is not achieved by a safety margin approach. A chordoma case is analyzed in more detail to demonstrate the involved trade-offs between target underdose and brainstem sparing as well as robustness and nominal plan quality. The latter illustrates the advantage of MCO in the context of robust planning. For all cases examined, the robust optimization for
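
    As a rough illustration of the robustified-objective idea described above, the sketch below evaluates a worst-case objective over a nominal and several error scenarios. It is a minimal toy example, not the authors' implementation: the dose-influence matrices, scenario count, prescription values, and penalty function are all invented.

```python
import numpy as np

# Hypothetical dose-influence matrices: dose[s] = D[s] @ w for scenario s,
# where w are pencil-beam weights. Shapes and values are made up.
rng = np.random.default_rng(0)
n_voxels, n_beams, n_scenarios = 200, 50, 9   # nominal + 8 error scenarios
D = rng.uniform(0.0, 0.1, size=(n_scenarios, n_voxels, n_beams))
w = rng.uniform(0.0, 1.0, size=n_beams)       # current beam weights
target = np.zeros(n_voxels)
target[:80] = 60.0                            # prescribed dose (Gy), toy values

def quadratic_deviation(dose, prescription):
    """A generic penalty; here simply the mean squared deviation."""
    return float(np.mean((dose - prescription) ** 2))

# The nominal objective characterizes nominal plan quality ...
nominal = quadratic_deviation(D[0] @ w, target)
# ... while the robustified objective is the worst value over all scenarios.
robustified = max(quadratic_deviation(D[s] @ w, target) for s in range(n_scenarios))
print(f"nominal objective:   {nominal:.2f}")
print(f"robustified (worst): {robustified:.2f}")
```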

  2. Optimal Portfolio of Corporate Investment and Consumption Problem under Market Closure: Inflation Case

    Directory of Open Access Journals (Sweden)

    Zongyuan Huang

    2013-01-01

    Full Text Available We present a model of corporate optimal investment that considers the influence of inflation and the difference between market opening and market closure. In our model, the investor has three market activities of his or her choice: investment in project A, investment in project B, and consumption. The optimal strategy for the investor is obtained using the Hamilton-Jacobi-Bellman equation, which is derived using the dynamic programming principle. Furthermore, a specific case, the hyperbolic absolute risk aversion (HARA) case, is discussed in detail, where the explicit optimal strategy can be obtained using a very simple and direct method. At the very end, we present some simulation results along with a brief analysis of the relationship between the optimal strategy and other factors.

  3. Robust Active Portfolio Management

    National Research Council Canada - National Science Library

    Erdogan, E; Goldfarb, D; Iyengar, G

    2006-01-01

    ... on the portfolio beta, and limits on cash and industry exposure. We show that the optimal portfolios can be computed by solving second-order cone programs -- a class of optimization problems with a worst case complexity (i.e...

  4. Optimization of the radiation protection in industrial field: study of some practical cases

    International Nuclear Information System (INIS)

    Muglioni, P.

    1998-01-01

    Two situations are studied: the case of stationary gauges, where the situation is safe and little action is needed to optimize radiation protection, and the case of mobile sources, which can lead to significant exposure. In these conditions, the best way to optimize radiation protection is to integrate the constraints, to put dosimetry into operation and to maintain an adequate level of radiation protection information. (N.C.)

  5. On the optimality of separation cascade for a binary and a multi-component case

    International Nuclear Information System (INIS)

    Song, T.M.; Zeng, S.

    2006-01-01

    The optimality discussed in this article refers to the minimum total interstage flow, which is studied for two cases, a binary and a multi-component case, using direct numerical optimization of countercurrent symmetric cascades with the concentrations of the target component specified in the feed flow and in the product and waste withdrawals. In binary separation, the ideal cascade, in which there are no mixing losses and whose stages work under symmetric separation, is the optimum cascade with the minimum total flow. However, when the separation factor is large, an ideal cascade may not exist for certain prescribed external parameters. Cascades are optimized numerically to minimize mixing losses and total flows, respectively. The results are compared for the minimum mixing losses and the minimum total flow, and analyzed with theoretically derived formulas. For the multi-component case, satisfying the non-mixing condition is impossible. There is a counterpart of the binary ideal cascade, named MARC, which matches the abundance ratios at the mixing points. An optimization example for a four-component mixture separation cascade is analyzed with the first and the last components as the targets, respectively. The results show that MARC is not the optimum cascade for the separation of one particular isotope. The separation power of each stage in the optimized cascades is calculated using several different definitions, and the rationality of these definitions is discussed. The Q-iteration method is used to calculate the concentration distribution in both the binary and the multi-component cases. Ns-2 stage cuts out of the Ns stages of the cascade are the optimization variables, and a combination of simulated annealing and the Hooke-Jeeves method is applied as the optimization technique to find the minimum. (authors)

  6. Medical Optimization Network for Space Telemedicine Resources

    Science.gov (United States)

    Shah, R. V.; Mulcahy, R.; Rubin, D.; Antonsen, E. L.; Kerstman, E. L.; Reyes, D.

    2017-01-01

    INTRODUCTION: Long-duration missions beyond low Earth orbit introduce new constraints to the space medical system such as the inability to evacuate to Earth, communication delays, and limitations in clinical skillsets. NASA recognizes the need to improve capabilities for autonomous care on such missions. As the medical system is developed, it is important to have an ability to evaluate the trade space of what resources will be most important. The Medical Optimization Network for Space Telemedicine Resources was developed for this reason, and is now a system to gauge the relative importance of medical resources in addressing medical conditions. METHODS: A list of medical conditions of potential concern for an exploration mission was referenced from the Integrated Medical Model, a probabilistic model designed to quantify in-flight medical risk. The diagnostic and treatment modalities required to address best and worst-case scenarios of each medical condition, at the terrestrial standard of care, were entered into a database. This list included tangible assets (e.g. medications) and intangible assets (e.g. clinical skills to perform a procedure). A team of physicians working within the Exploration Medical Capability Element of NASA's Human Research Program ranked each of the items listed according to its criticality. Data was then obtained from the IMM for the probability of occurrence of the medical conditions, including a breakdown of best case and worst case, during a Mars reference mission. The probability of occurrence information and criticality for each resource were taken into account during analytics performed using Tableau software. RESULTS: A database and weighting system to evaluate all the diagnostic and treatment modalities was created by combining the probability of condition occurrence data with the criticalities assigned by the physician team. DISCUSSION: Exploration Medical Capabilities research at NASA is focused on providing a medical system to
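
    As a rough sketch of the weighting described above (combining condition occurrence probabilities with physician-assigned resource criticalities), the toy example below invents a condition list, probabilities, criticality values, and the aggregation rule; the system's actual scoring is not specified in the abstract.

```python
# Hypothetical data: conditions with occurrence probabilities (per mission)
# and the resources needed to address them, each with a criticality rank
# (here 1 = low .. 5 = high). All numbers are illustrative only.
conditions = {
    "kidney stone":   {"probability": 0.04, "resources": {"ultrasound": 5, "analgesics": 4}},
    "skin infection": {"probability": 0.20, "resources": {"antibiotics": 4, "exam skills": 3}},
    "back pain":      {"probability": 0.35, "resources": {"analgesics": 3, "exercise device": 2}},
}

# Aggregate a relative importance score per resource: sum over conditions of
# P(condition) * criticality of the resource for that condition.
scores: dict[str, float] = {}
for cond in conditions.values():
    for resource, criticality in cond["resources"].items():
        scores[resource] = scores.get(resource, 0.0) + cond["probability"] * criticality

for resource, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{resource:16s} {score:.2f}")
```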

  7. Worst-Case Scenario Tsunami Hazard Assessment in Two Historically and Economically Important Districts in Eastern Sicily (Italy)

    Science.gov (United States)

    Armigliato, A.; Tinti, S.; Pagnoni, G.; Zaniboni, F.; Paparo, M. A.

    2015-12-01

    The portion of the eastern Sicily coastline (southern Italy), ranging from the southern part of the Catania Gulf (to the north) down to the south-eastern end of the island, represents a very important geographical domain from the industrial, commercial, military, historical and cultural points of view. Here the two major cities of Augusta and Siracusa are found. In particular, the Augusta bay hosts one of the largest petrochemical poles in the Mediterranean, and Siracusa has been listed among the UNESCO World Heritage Sites since 2005. This area was hit by at least seven tsunamis in the approximate time interval from 1600 BC to the present, the most famous being the 365, 1169, 1693 and 1908 tsunamis. The choice of this area as one of the sites for testing innovative methods for tsunami hazard, vulnerability and risk assessment and reduction is then fully justified. This work is being developed in the framework of the EU project ASTARTE - Assessment, STrategy And Risk Reduction for Tsunamis in Europe (Grant 603839, 7th FP, ENV.2013.6.4-3). We assess the tsunami hazard for the Augusta-Siracusa area through the worst-case credible scenario technique, which can be schematically divided into the following steps: 1) selection of five main source areas, both in the near- and in the far-field (Hyblaean-Malta escarpment, Messina Straits, Ionian subduction zone, Calabria offshore, western Hellenic Trench); 2) choice of potential and credible tsunamigenic faults in each area: 38 faults were selected, with properly assigned magnitude, geometry and focal mechanism; 3) computation of the maximum tsunami wave elevations along the eastern Sicily coast on a coarse grid (by means of the in-house code UBO-TSUFD) and extraction of the 9 scenarios that produce the largest effects in the target areas of Augusta and Siracusa; 4) for each of the 9 scenarios we run numerical UBO-TSUFD simulations over a set of five nested grids, with grid cell size decreasing from 3 km in the open Ionian

  8. Satellite-Based Evaluation of the Post-Fire Recovery Process from the Worst Forest Fire Case in South Korea

    Directory of Open Access Journals (Sweden)

    Jae-Hyun Ryu

    2018-06-01

    Full Text Available The worst forest fire in South Korea occurred in April 2000 on the eastern coast. Forest recovery works were conducted until 2005, and the forest has been monitored since the fire. Remote sensing techniques have been used to detect the burned areas and to evaluate the recovery-time point of the post-fire processes during the past 18 years. We used three indices, the Normalized Burn Ratio (NBR), the Normalized Difference Vegetation Index (NDVI), and Gross Primary Production (GPP), to temporally monitor a burned area in terms of its moisture condition, vegetation biomass, and photosynthetic activity, respectively. The change of those three indices through the forest recovery processes was analyzed relative to an unburned reference area. The selected unburned area had characteristics similar to the burned area prior to the forest fire. The temporal patterns of NBR and NDVI not only showed the forest recovery process as a result of forest management, but also statistically distinguished the recovery periods in the regions of low, moderate, and high fire severity. The NBR2.1 for all areas, calculated using the 2.1 μm wavelength, reached the unburned state in 2008. The NDVI for areas with low and moderate fire severity levels became significantly equal to the unburned state in 2009 (p > 0.05), but areas with high severity levels did not reach the unburned state until 2017. This indicated that the surface and vegetation moisture conditions recovered to the unburned state about 8 years after the fire event, while vegetation biomass and health required a longer time to recover, particularly in high severity regions. In the case of GPP, it rapidly recovered after about 3 years. Then, the steady increase in GPP surpassed the GPP of the reference area in 2015 because of the rapid growth and high photosynthetic activity of young forests. Therefore, the concluding scientific message is that, because the recovery-time point for each component of the forest ecosystem is
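
    For reference, the two spectral indices named above are standard band ratios, NDVI = (NIR - Red)/(NIR + Red) and NBR = (NIR - SWIR)/(NIR + SWIR); the small sketch below computes them with these textbook definitions on invented reflectance values.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio: (NIR - SWIR) / (NIR + SWIR); the abstract's
    NBR2.1 variant uses the ~2.1 um SWIR band."""
    return (nir - swir) / (nir + swir)

# Toy surface reflectances for a burned and an unburned pixel.
nir  = np.array([0.25, 0.45])   # burned, unburned
red  = np.array([0.12, 0.08])
swir = np.array([0.30, 0.20])
print("NDVI:", ndvi(nir, red))   # lower for the burned pixel
print("NBR: ", nbr(nir, swir))   # low/negative values indicate burn scars
```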

  9. The effect of acutely administered MDMA on subjective and BOLD-fMRI responses to favourite and worst autobiographical memories.

    Science.gov (United States)

    Carhart-Harris, R L; Wall, M B; Erritzoe, D; Kaelen, M; Ferguson, B; De Meer, I; Tanner, M; Bloomfield, M; Williams, T M; Bolstridge, M; Stewart, L; Morgan, C J; Newbould, R D; Feilding, A; Curran, H V; Nutt, D J

    2014-04-01

    3,4-methylenedioxymethamphetamine (MDMA) is a potent monoamine-releaser that is widely used as a recreational drug. Preliminary work has supported the potential of MDMA in psychotherapy for post-traumatic stress disorder (PTSD). The neurobiological mechanisms underlying its putative efficacy are, however, poorly understood. Psychotherapy for PTSD usually requires that patients revisit traumatic memories, and it has been argued that this is easier to do under MDMA. Functional magnetic resonance imaging (fMRI) was used to investigate the effect of MDMA on recollection of favourite and worst autobiographical memories (AMs). Nineteen participants (five females) with previous experience with MDMA performed a blocked AM recollection (AMR) paradigm after ingestion of 100 mg of MDMA-HCl or ascorbic acid (placebo) in a double-blind, repeated-measures design. Memory cues describing participants' AMs were read by them in the scanner. Favourite memories were rated as significantly more vivid, emotionally intense and positive after MDMA than placebo and worst memories were rated as less negative. Functional MRI data from 17 participants showed robust activations to AMs in regions known to be involved in AMR. There was also a significant effect of memory valence: hippocampal regions showed preferential activations to favourite memories and executive regions to worst memories. MDMA augmented activations to favourite memories in the bilateral fusiform gyrus and somatosensory cortex and attenuated activations to worst memories in the left anterior temporal cortex. These findings are consistent with a positive emotional-bias likely mediated by MDMA's pro-monoaminergic pharmacology.

  10. On the Performance Optimization of Two-Level Three-Phase Grid-Feeding Voltage-Source Inverters

    Directory of Open Access Journals (Sweden)

    Issam A. Smadi

    2018-02-01

    Full Text Available The performance optimization of the two-level, three-phase, grid-feeding, voltage-source inverter (VSI) is studied in this paper, adopting an online adaptive switching frequency algorithm (OASF). A new degree of freedom has been added to the employed OASF algorithm for the optimal selection of the weighting factor and overall system design optimization. Toward that end, a full mathematical formulation, including the impact of the coupling inductor and the controller response time, is presented. At first, the weighting factor is selected to favor the switching losses, and the controller gains are optimized by minimizing the integral time-weighted absolute error (ITAE) of the output active and reactive power. Different loading and ambient temperature conditions are considered to validate the optimized controller and its fast response through online field-programmable gate array (FPGA)-in-the-loop testing. Then, the weighting factor is optimally selected to reduce the cost of the L-filter and the heat-sink. An optimization problem to minimize the design cost at the worst-case loading condition for the grid-feeding VSI is formulated. The results of this optimization problem are the filter inductance, the thermal resistance of the heat-sink, and the optimal switching frequency with the optimal weighting factor. A VSI test-bed using the optimized parameters is used to verify the proposed work experimentally. Adopting the OASF algorithm with the optimal weighting factor for the grid-feeding VSI, the reductions in the slope of the steady-state junction temperature profile compared to fixed frequencies of 10 kHz, 14.434 kHz, and 20 kHz are about 6%, 30%, and 18%, respectively.
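
    The ITAE criterion used for the controller tuning above has the standard definition ITAE = ∫ t·|e(t)| dt. A minimal numerical sketch (the error signal and time grid are invented):

```python
import numpy as np

def itae(t: np.ndarray, error: np.ndarray) -> float:
    """Integral of Time-weighted Absolute Error, integral of t*|e(t)| dt,
    approximated here with the trapezoidal rule."""
    return float(np.trapz(t * np.abs(error), t))

# Toy step-response error of a power loop: a decaying oscillation.
t = np.linspace(0.0, 2.0, 2001)
error = np.exp(-3.0 * t) * np.cos(20.0 * t)
print(f"ITAE = {itae(t, error):.5f}")
# Controller gains can then be tuned by minimizing this value, as done
# in the paper for the output active and reactive power.
```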

  11. Coupling atmospheric, hydrological and hydraulic models to develop a catalogue of worst-case scenarios for extreme flooding in Switzerland

    Science.gov (United States)

    José Gómez-Navarro, Juan; Felder, Guido; Raible, Christoph C.; Martius, Olivia; Rössler, Ole

    2015-04-01

    the high-resolution simulation (as it is driven by the boundary conditions provided by the GCM), the spatial structure of the precipitation is refined, producing stronger precipitation gradients that allow the main orographic barriers to be identified. Furthermore, much higher precipitation rates occur in some river catchments, which are indicative of potentially disastrous situations in very localised regions. In a next step, the results of the atmosphere-only RCM simulations will be used to drive the hydrological model PREVAH. This model produces event hydrographs that represent plausible catchment reactions to the simulated precipitation produced by the RCM. The event hydrographs will then be routed with the 1D/2D hydraulic model BASEMENT-ETH, which accounts for the retention effects of lakes and inundated areas. Hence, the described model chain will eventually simulate a number of physically plausible peak discharges in Switzerland that are determined by the most extreme situations occurring in the GCM simulation. This will enable the analysis and characterisation of worst-case floods in Switzerland whose return period exceeds several centuries.

  12. Case studies on age-management in organisations: report on organisational case studies

    NARCIS (Netherlands)

    Punte, E.; Conen, W.S.; Schippers, J.; Henkens, C.J.I.M.

    2011-01-01

    The acquisition of case studies was thwarted by the economic crisis and the feeling of being 'over-researched' among potential organisations. Although organisations in some sectors (e.g. chemical manufacturing) reported that the worst part of the economic crisis was behind them, many organisations indicated

  13. Self-optimizing Uplink Outer Loop Power Control for WCDMA Network

    Directory of Open Access Journals (Sweden)

    A. G. Markoc

    2015-06-01

    Full Text Available The increasing demand for high data rates drives efforts toward more efficient usage of the finite natural radio spectrum resources. The existing wideband code division multiple access (WCDMA) uplink outer loop power control has difficulty coping with the new load on the air interface. The main reason is that the maximum allowed noise rise per single user is a fixed value. In the worst case, the uplink load can be so high that all services, including the conversational service, could be blocked. In this paper, an investigation is performed to present the correlation of the main system parameters used by the uplink outer loop power control with the uplink load. A simulation was created and executed to compare the current implementation of the uplink outer loop power control against the proposed changes. The proposed solution is a self-optimizing uplink outer loop power control in which the maximum allowed noise rise per single user is dynamically changed based on the current uplink load of the cell.
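
    A toy sketch of the self-optimizing idea: cap the per-user noise rise as a function of current cell load. The thresholds, limits, and linear interpolation rule are invented for illustration; the paper's actual adaptation law is not given in the abstract.

```python
def max_noise_rise_db(cell_load: float,
                      floor_db: float = 0.5,
                      ceiling_db: float = 3.0,
                      low_load: float = 0.3,
                      high_load: float = 0.9) -> float:
    """Dynamically cap the per-user noise rise: generous at low uplink load,
    tight near congestion. All thresholds here are illustrative."""
    if cell_load <= low_load:
        return ceiling_db
    if cell_load >= high_load:
        return floor_db
    # Linear interpolation between the two operating points.
    frac = (cell_load - low_load) / (high_load - low_load)
    return ceiling_db - frac * (ceiling_db - floor_db)

for load in (0.2, 0.5, 0.8, 0.95):
    print(f"load={load:.2f} -> max noise rise {max_noise_rise_db(load):.2f} dB")
```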

  14. Optimal Finger Search Trees in the Pointer Machine

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lagogiannis, George; Makris, Christos

    2003-01-01

    We develop a new finger search tree with worst-case constant update time in the Pointer Machine (PM) model of computation. This was a major problem in the field of Data Structures and was tantalizingly open for over twenty years while many attempts by researchers were made to solve it. The result...

  15. Similar-Case-Based Optimization of Beam Arrangements in Stereotactic Body Radiotherapy for Assisting Treatment Planners

    Directory of Open Access Journals (Sweden)

    Taiki Magome

    2013-01-01

    Full Text Available Objective. To develop a similar-case-based optimization method for beam arrangements in lung stereotactic body radiotherapy (SBRT) to assist treatment planners. Methods. First, cases that are similar to an objective case were automatically selected based on geometrical features related to the planning target volume (PTV) location, PTV shape, lung size, and spinal cord position. Second, initial beam arrangements were determined by registration of the similar cases with the objective case using a linear registration technique. Finally, the beam directions of the objective case were locally optimized based on a cost function which takes into account the radiation absorption in normal tissues and organs at risk. The proposed method was evaluated with 10 test cases and a treatment planning database including 81 cases, using 11 planning evaluation indices such as tumor control probability and normal tissue complication probability (NTCP). Results. The procedure for the local optimization of beam arrangements improved the quality of treatment plans, with significant differences (P<0.05) in the homogeneity index and conformity index for the PTV, and in V10, V20, mean dose, and NTCP for the lung. Conclusion. The proposed method could be usable as a computer-aided treatment planning tool for the determination of beam arrangements in SBRT.
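
    A minimal sketch of the first step, retrieving similar cases by geometric features. The feature set, standardization, and Euclidean distance are illustrative assumptions, not the paper's exact similarity measure.

```python
import numpy as np

# Hypothetical feature vectors per case: [PTV x, PTV y, PTV z (cm),
# PTV volume (cc), lung volume (l), cord-to-PTV distance (cm)].
database = np.array([
    [4.0, -2.0, 10.0, 35.0, 4.1, 6.0],
    [3.5, -1.0,  8.0, 20.0, 3.8, 7.5],
    [5.2, -2.5, 11.0, 50.0, 4.5, 5.0],
])
objective_case = np.array([4.2, -1.8, 9.5, 30.0, 4.0, 6.2])

# Standardize features so no single unit dominates, then rank by distance.
mu, sigma = database.mean(axis=0), database.std(axis=0)
dist = np.linalg.norm((database - objective_case) / sigma, axis=1)
ranking = np.argsort(dist)
print("cases ordered by similarity:", ranking)  # most similar first
```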

  16. Integer Representations towards Efficient Counting in the Bit Probe Model

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Greve, Mark; Pandey, Vineet

    2011-01-01

    We consider the problem of representing numbers in close to optimal space and supporting increment, decrement, addition and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both...... in the worst-case and in the average-case. A counter is space-optimal if it represents any number in the range [0,...,2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst......-case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly less than n bits. For redundant counters where we only need to represent numbers in the range [0,...,L] for some integer L bits, we define the efficiency...
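
    To make the bit-probe cost model concrete: with the standard binary representation, an increment must in the worst case read and write all n bits (e.g. going from 0111...1 to 1000...0), which is the baseline the space-optimal counter above improves on. A small sketch counting probes for the standard encoding (not the paper's representation):

```python
def increment_with_probe_counts(bits: list[int]) -> tuple[int, int]:
    """Increment an LSB-first binary counter in place; return (reads, writes).
    Standard ripple-carry: worst case reads and writes all n bits."""
    reads = writes = 0
    for i in range(len(bits)):
        reads += 1
        if bits[i] == 0:
            bits[i] = 1
            writes += 1
            return reads, writes
        bits[i] = 0          # carry propagates
        writes += 1
    return reads, writes      # wrapped around from 2^n - 1 to 0

counter = [1, 1, 1, 0]        # value 7 in 4 bits, LSB first
print(increment_with_probe_counts(counter), counter)  # (4, 4) -> value 8
```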

  17. Case series: Two cases of eyeball tattoos with short-term complications

    Directory of Open Access Journals (Sweden)

    Gonzalo Duarte

    2017-04-01

    Conclusions and importance: Eyeball tattoos are performed by personnel without ophthalmic training. There are a substantial number of short-term risks associated with this procedure. Long-term effects on the eyes and vision are still unknown, but in a worst-case scenario could include loss of vision or permanent damage to the eyes.

  18. An optimal algorithm for preemptive on-line scheduling

    NARCIS (Netherlands)

    Chen, B.; Vliet, van A.; Woeginger, G.J.

    1995-01-01

    We investigate the problem of on-line scheduling jobs on m identical parallel machines where preemption is allowed. The goal is to minimize the makespan. We derive an approximation algorithm with worst-case guarantee m^m/(m^m - (m-1)^m) for every m ≥ 2, which increasingly tends to e/(e-1) ≈ 1.58 as m → ∞.
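
    A two-line numerical check of the reconstructed guarantee, showing the ratio m^m/(m^m - (m-1)^m), rewritten in the numerically safer form 1/(1 - (1 - 1/m)^m), approaching e/(e-1):

```python
import math

# guarantee = m^m / (m^m - (m-1)^m) = 1 / (1 - (1 - 1/m)^m)
for m in (2, 3, 5, 10, 100, 1000):
    ratio = 1.0 / (1.0 - (1.0 - 1.0 / m) ** m)
    print(f"m={m:5d}  worst-case guarantee = {ratio:.6f}")
print(f"limit e/(e-1) = {math.e / (math.e - 1):.6f}")
```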

  19. Aerodynamic reconfiguration and multicriterial optimization of centrifugal compressors – a case study

    Directory of Open Access Journals (Sweden)

    Valeriu DRAGAN

    2014-12-01

    Full Text Available This paper continues the author's recent research, with application to the 3D computational fluid dynamics multi-criteria optimization of turbomachinery parts. Computational fluid dynamics has been a ubiquitous tool for compressor design for decades, helping designers test the aerodynamic parameters of their machines with great accuracy. Due to advances in multigrid methods and the improved robustness of structured solvers, CFD can nowadays be part of an optimization loop with artificial neural networks or evolutionary algorithms. This paper presents a case study of an air centrifugal compressor rotor optimized using Numeca's Design 3D CFD suite. The turbulence model used for the database generation and the optimization stage is Spalart-Allmaras. Results indicate a fairly quick convergence time per individual as well as a good convergence of the artificial neural network optimizer.

  20. Prioritizing Parental Worry Associated with Duchenne Muscular Dystrophy Using Best-Worst Scaling.

    Science.gov (United States)

    Peay, Holly Landrum; Hollin, I L; Bridges, J F P

    2016-04-01

    Duchenne muscular dystrophy (DMD) is a progressive, fatal pediatric disorder with significant burden on parents. Assessing disease impact can inform clinical interventions. Best-worst scaling (BWS) was used to elicit parental priorities among 16 short-term, DMD-related worries identified through community engagement. Respondents viewed 16 subsets of worries, identified using a balanced, incomplete block design, and identified the most and least worrying items. Priorities were assessed using best-worst scores (spanning +1 to -1) representing the relative number of times items were endorsed as most and least worrying. Independent-sample t-tests compared prioritization of parents with ambulatory and non-ambulatory children. Participants (n = 119) most prioritized worries about weakness progression (BW score = 0.64) and getting the right care over time (BW = 0.25). Compared to parents of non-ambulatory children, parents of ambulatory children more highly prioritized missing treatments (BW = 0.31 vs. 0.13, p < 0.001) and being a good enough parent (BW = 0.06 vs. -0.08, p = 0.010), and less prioritized child feeling like a burden (BW = -0.24 vs. -0.07, p < 0.001). Regardless of child's disease stage, caregiver interventions should address the emotional impact of caring for a child with a progressive, fatal disease. We demonstrate an accessible, clinically-relevant approach to prioritize disease impact using BWS, which offers an alternative to the use of traditional rating/ranking scales.
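
    A minimal sketch of the best-worst scoring described above. The counts are invented, and the normalization (net "most worrying" endorsements divided by times shown) is the standard count-based BWS score spanning +1 to -1, assumed here to match the paper's.

```python
# For each item: (times chosen as most worrying, times chosen as least
# worrying, times the item appeared across all choice tasks). Toy counts.
counts = {
    "weakness progression":  (620, 15, 950),
    "getting right care":    (380, 90, 950),
    "feeling like a burden": (60, 400, 950),
}

for item, (best, worst, shown) in counts.items():
    bw_score = (best - worst) / shown   # -1 (always worst) .. +1 (always best)
    print(f"{item:22s} BW = {bw_score:+.2f}")
```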

  1. Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization

    Science.gov (United States)

    2016-09-01

    Managing the Public Sector Research and Development Portfolio Selection Process: A Case Study of Quantitative Selection and Optimization, by Jason A. Schwartz, describing how public sector organizations can implement a research and development (R&D) portfolio optimization strategy to maximize the cost

  2. Reliability evaluation of high-performance, low-power FinFET standard cells based on mixed RBB/FBB technique

    Science.gov (United States)

    Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole

    2017-04-01

    With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distribution of the device parameters and 10000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reduction of 39.1% and 30.7% at most in worst-case delay and input-dependent leakage respectively while the normalized deviation shrinking in worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit more reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
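
    A toy Monte Carlo sketch of the kind of variability analysis described above: Gaussian device-parameter perturbations propagated through a delay model, with worst-case delay statistics. The distribution parameters and the linear delay model are invented; the paper's simulations use full circuit models.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sweeps = 10_000

# Hypothetical cell-delay model: nominal delay plus sensitivities to
# threshold-voltage and channel-length variation (all numbers invented).
nominal_delay_ps = 25.0
dvth = rng.normal(0.0, 0.02, n_sweeps)    # Vth variation (V)
dlen = rng.normal(0.0, 0.5,  n_sweeps)    # length variation (nm)
delay = nominal_delay_ps + 180.0 * dvth + 1.2 * dlen

print(f"mean  = {delay.mean():.2f} ps")
print(f"sigma = {delay.std():.2f} ps")     # the standard deviation compared in the paper
print(f"3-sigma worst-case = {delay.mean() + 3 * delay.std():.2f} ps")
```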

  3. Optimizing the NPP refurbishment environmental qualification design process for an in-service installation

    International Nuclear Information System (INIS)

    MacBeth, M.J.; Hemmings, R.L.

    2002-01-01

    This paper describes the Environmental Qualification (EQ) modification design work required to upgrade the Reactor Building (RB) airlocks at the Pickering Nuclear Generating Station B facility. The RB airlocks provide a containment boundary function and are designed to prevent a breach of containment from occurring for all analysed station conditions. Recent, more stringent Canadian nuclear regulatory actions have imposed EQ requirements for the RB airlocks in Canadian nuclear generating stations. The airlocks are required to function under the worst-case design basis accident (DBA) conditions for the assigned mission duration, and the design must demonstrate that a spurious door opening cannot be initiated by any accident conditions. This paper reviews key project design activities while providing detailed insights into the potential solution or elimination of some problematic aspects of these types of design activities. General recommendations for the optimal technical management of such project implementation issues are presented. (author)

  4. Optimization of revenues from a distributed generation portfolio: a case study

    NARCIS (Netherlands)

    Geysen, D.; Kessels, K.; Hommelberg, M.; Ghijsen, M.; Tielemans, Y.; Vinck, K.

    2011-01-01

    Many companies are investing in energy production from renewable energy sources and are looking at ways to optimize their portfolio performance. The case study under consideration aims at maximizing the revenues from such a distributed energy generation portfolio, consisting of gas engines and a PV

  5. Permutation flow-shop scheduling problem to optimize a quadratic objective function

    Science.gov (United States)

    Ren, Tao; Zhao, Peng; Zhang, Da; Liu, Bingqian; Yuan, Huawei; Bai, Danyu

    2017-09-01

    A flow-shop scheduling model enables appropriate sequencing for each job and for processing on a set of machines in compliance with identical processing orders. The objective is to achieve a feasible schedule that optimizes a given criterion. Permutation is a special setting of the model in which the processing order of the jobs on the machines is identical for each subsequent step of processing. This article addresses the permutation flow-shop scheduling problem with the criterion of minimizing the total weighted quadratic completion time. Under a probabilistic hypothesis, the asymptotic optimality of the weighted shortest processing time schedule under a consistency condition (WSPT-CC) is proven for sufficiently large-scale problems. However, the worst-case performance ratio of the WSPT-CC schedule is the square of the number of machines in certain situations. A discrete differential evolution algorithm, in which a new crossover method with multiple-point insertion is used to improve the final outcome, is presented to obtain high-quality solutions for moderate-scale problems. A sequence-independent lower bound is designed for pruning in a branch-and-bound algorithm for small-scale problems. A set of random experiments demonstrates the performance of the lower bound and the effectiveness of the proposed algorithms.
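
    A sketch of the WSPT ordering underlying the WSPT-CC schedule: the standard weighted-shortest-processing-time rule sequences jobs by nondecreasing p_j/w_j. The job data are invented, the example is shown on a single machine for simplicity, and the paper's consistency condition is not reproduced here.

```python
# Jobs as (name, processing_time, weight); toy data.
jobs = [("A", 4.0, 2.0), ("B", 1.0, 1.0), ("C", 3.0, 6.0), ("D", 2.0, 1.0)]

# WSPT: sort by processing time / weight, smallest ratio first.
schedule = sorted(jobs, key=lambda j: j[1] / j[2])

t = 0.0
objective = 0.0
for name, p, w in schedule:
    t += p                       # completion time of this job
    objective += w * t ** 2      # total weighted *quadratic* completion time
print([name for name, _, _ in schedule], f"objective = {objective:.1f}")
```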

  6. Shortening Delivery Times of Intensity Modulated Proton Therapy by Reducing Proton Energy Layers During Treatment Plan Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Water, Steven van de, E-mail: s.vandewater@erasmusmc.nl [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Kooy, Hanne M. [F. H. Burr Proton Therapy Center, Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts (United States); Heijmen, Ben J.M.; Hoogeman, Mischa S. [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands)

    2015-06-01

    Purpose: To shorten delivery times of intensity modulated proton therapy by reducing the number of energy layers in the treatment plan. Methods and Materials: We have developed an energy layer reduction method, which was implemented into our in-house-developed multicriteria treatment planning system “Erasmus-iCycle.” The method consisted of 2 components: (1) minimizing the logarithm of the total spot weight per energy layer; and (2) iteratively excluding low-weighted energy layers. The method was benchmarked by comparing a robust “time-efficient plan” (with energy layer reduction) with a robust “standard clinical plan” (without energy layer reduction) for 5 oropharyngeal cases and 5 prostate cases. Both plans of each patient had equal robust plan quality, because the worst-case dose parameters of the standard clinical plan were used as dose constraints for the time-efficient plan. Worst-case robust optimization was performed, accounting for setup errors of 3 mm and range errors of 3% + 1 mm. We evaluated the number of energy layers and the expected delivery time per fraction, assuming 30 seconds per beam direction, 10 ms per spot, and 400 Giga-protons per minute. The energy switching time was varied from 0.1 to 5 seconds. Results: The number of energy layers was on average reduced by 45% (range, 30%-56%) for the oropharyngeal cases and by 28% (range, 25%-32%) for the prostate cases. When assuming 1, 2, or 5 seconds energy switching time, the average delivery time was shortened from 3.9 to 3.0 minutes (25%), 6.0 to 4.2 minutes (32%), or 12.3 to 7.7 minutes (38%) for the oropharyngeal cases, and from 3.4 to 2.9 minutes (16%), 5.2 to 4.2 minutes (20%), or 10.6 to 8.0 minutes (24%) for the prostate cases. Conclusions: Delivery times of intensity modulated proton therapy can be reduced substantially without compromising robust plan quality. Shorter delivery times are likely to reduce treatment uncertainties and costs.
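
    A minimal sketch of component (2) of the method, iterative exclusion of low-weighted energy layers. The spot-weight data, the threshold rule, and the stand-in "reoptimize" step are invented; the real method re-runs the full robust optimization after each exclusion.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical total spot weight per energy layer after an optimization pass.
layer_weights = rng.gamma(shape=0.8, scale=10.0, size=20)

def reoptimize(weights: np.ndarray, keep: np.ndarray) -> np.ndarray:
    """Stand-in for re-running plan optimization on the remaining layers:
    here we simply renormalize so the total weight is preserved."""
    w = np.where(keep, weights, 0.0)
    return w * weights.sum() / w.sum()

keep = np.ones_like(layer_weights, dtype=bool)
for _ in range(10):                      # iterative exclusion
    active = layer_weights[keep]
    cutoff = 0.05 * active.mean()        # invented low-weight threshold
    drop = keep & (layer_weights < cutoff)
    if not drop.any():
        break
    keep &= ~drop
    layer_weights = reoptimize(layer_weights, keep)

print(f"layers kept: {keep.sum()} of {keep.size}")
```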

  7. On the worst scenario method: Application to a quasilinear elliptic 2D-problem with uncertain coefficients

    Czech Academy of Sciences Publication Activity Database

    Harasim, Petr

    2011-01-01

    Roč. 56, č. 5 (2011), s. 459-480 ISSN 0862-7940 Institutional research plan: CEZ:AV0Z30860518 Keywords : Worst scenario method * nonlinear differential equation * Kachanov method Subject RIV: BA - General Mathematics Impact factor: 0.480, year: 2011 http://am.math.cas.cz/am56-5/2.html

  8. Identification of Swallowing Tasks from a Modified Barium Swallow Study That Optimize the Detection of Physiological Impairment

    Science.gov (United States)

    Hazelwood, R. Jordan; Armeson, Kent E.; Hill, Elizabeth G.; Bonilha, Heather Shaw; Martin-Harris, Bonnie

    2017-01-01

    Purpose: The purpose of this study was to identify which swallowing task(s) yielded the worst performance during a standardized modified barium swallow study (MBSS) in order to optimize the detection of swallowing impairment. Method: This secondary data analysis of adult MBSSs estimated the probability of each swallowing task yielding the derived…

  9. Analytical optimization of active bandwidth and quality factor for TOCSY experiments in NMR spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Coote, Paul, E-mail: paul-coote@hms.harvard.edu [Harvard Medical School (United States); Bermel, Wolfgang [Bruker BioSpin GmbH (Germany); Wagner, Gerhard; Arthanari, Haribabu, E-mail: hari@hms.harvard.edu [Harvard Medical School (United States)

    2016-09-15

    Active bandwidth and global quality factor are the two main metrics used to quantitatively compare the performance of TOCSY mixing sequences. Active bandwidth refers to the spectral region over which at least 50 % of the magnetization is transferred via a coupling. Global quality factor scores mixing sequences according to the worst-case transfer over a range of possible mixing times and chemical shifts. Both metrics reward high transfer efficiency away from the main diagonal of a two-dimensional spectrum. They can therefore be used to design mixing sequences that will function favorably in experiments. Here, we develop optimization methods tailored to these two metrics, including precise control of off-diagonal cross peak buildup rates. These methods produce square shaped transfer efficiency profiles, directly matching the desirable properties that the metrics are intended to measure. The optimization methods are analytical, rather than numerical. The two resultant shaped pulses have significantly higher active bandwidth and quality factor, respectively, than all other known sequences. They are therefore highly suitable for use in NMR spectroscopy. We include experimental verification of these improved waveforms on small molecule and protein samples.

  10. Constructing Delaunay triangulations along space-filling curves

    NARCIS (Netherlands)

    Buchin, K.; Fiat, A.; Sanders, P.

    2009-01-01

    Incremental construction con BRIO using a space-filling curve order for insertion is a popular algorithm for constructing Delaunay triangulations. So far, it has only been analyzed for the case that a worst-case optimal point location data structure is used which is often avoided in implementations.

  11. Accommodating new ways of working: lessons from best practices and worst cases

    NARCIS (Netherlands)

    Brunia, S.; de Been, I.; van der Voordt, Theo

    2016-01-01

    Purpose – The purpose of this study is to explore which factors may explain the high or low percentages of satisfied employees in offices with shared activity-based workplaces. Design/methodology/approach – The paper compares data on employee satisfaction from two cases with remarkably high

  12. Worst-case and smoothed analysis of $k$-means clustering with Bregman divergences

    NARCIS (Netherlands)

    Manthey, Bodo; Röglin, Heiko; Dong, Yingfei; Du, Dingzhu; Ibarra, Oscar

    2009-01-01

    The $k$-means algorithm is the method of choice for clustering large-scale data sets and it performs exceedingly well in practice. Most of the theoretical work is restricted to the case that squared Euclidean distances are used as similarity measure. In many applications, however, data is to be

  13. Factor analysis in optimization of formulation of high content uniformity tablets containing low dose active substance.

    Science.gov (United States)

    Lukášová, Ivana; Muselík, Jan; Franc, Aleš; Goněc, Roman; Mika, Filip; Vetchý, David

    2017-11-15

    Warfarin is an intensively discussed drug with a narrow therapeutic range. There have been cases of bleeding attributed to varying content or altered quality of the active substance. Factor analysis is useful for finding suitable technological parameters leading to high content uniformity of tablets containing a low amount of active substance. The composition of the tabletting blend and the technological procedure were set with respect to factor analysis of previously published results. The correctness of the set parameters was checked by manufacturing and evaluating tablets containing 1-10 mg of warfarin sodium. The robustness of the suggested technology was checked using a "worst case scenario" and statistical evaluation of European Pharmacopoeia (EP) content uniformity limits with respect to the Bergum division and the process capability index (Cpk). To evaluate the quality of the active substance and tablets, a dissolution method was developed (water; EP apparatus II; 25 rpm), allowing for statistical comparison of dissolution profiles. The obtained results prove the suitability of factor analysis to optimize the composition with respect to previously manufactured batches, and thus the use of meta-analysis under industrial conditions is feasible.

  14. Case series: Two cases of eyeball tattoos with short-term complications.

    Science.gov (United States)

    Duarte, Gonzalo; Cheja, Rashel; Pachón, Diana; Ramírez, Carolina; Arellanes, Lourdes

    2017-04-01

    To report two cases of eyeball tattoos with short-term post-procedural complications. Case 1 is a 26-year-old Mexican man who developed orbital cellulitis and posterior scleritis 2 h after an eyeball tattoo. The patient responded satisfactorily to systemic antibiotic and corticosteroid treatment. Case 2 is a 17-year-old Mexican man who developed two sub-episcleral nodules at the ink injection sites immediately after the procedure. Eyeball tattoos are performed by personnel without ophthalmic training. There are a substantial number of short-term risks associated with this procedure. Long-term effects on the eyes and vision are still unknown, but in a worst-case scenario could include loss of vision or permanent damage to the eyes.

  15. A case study of optimization in the decision process: Siting groundwater monitoring wells

    International Nuclear Information System (INIS)

    Cardwell, H.; Huff, D.; Douthitt, J.; Sale, M.

    1993-12-01

    Optimization is one of the tools available to assist decision makers in balancing multiple objectives and concerns. In a case study of the siting decision for groundwater monitoring wells, we look at the influence of the optimization models on the decisions made by the responsible groundwater specialist. This paper presents a multi-objective integer programming model for determining the locations of monitoring wells associated with a groundwater pump-and-treat remediation. After presenting the initial optimization results, we analyze the actual decision and revise the model to incorporate elements of the problem that were later identified as important in the decision-making process. The results of the revised model are compared to the actual siting plans, the recommendations from the initial optimization runs, and the initial monitoring network proposed by the decision maker.
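
    A toy sketch of a multi-objective integer program of the kind described: choose k monitoring wells from candidate sites to trade off plume-detection coverage against cost. The candidate data, coverage sets, and objective weights are all invented, and the paper's actual formulation is richer.

```python
from itertools import combinations

# Candidate well sites: cost (k$) and the set of plume zones each monitors.
sites = {
    "W1": (12.0, {1, 2}),
    "W2": ( 8.0, {2, 3}),
    "W3": (15.0, {1, 3, 4}),
    "W4": ( 9.0, {4, 5}),
    "W5": (11.0, {2, 5}),
}
k = 3                 # number of wells we can afford to drill
weight_cost = 1.0     # relative importance of cost vs. coverage
weight_cover = 10.0

best = None
for combo in combinations(sites, k):    # brute force; an IP solver scales better
    cost = sum(sites[s][0] for s in combo)
    covered = set().union(*(sites[s][1] for s in combo))
    score = weight_cover * len(covered) - weight_cost * cost
    if best is None or score > best[0]:
        best = (score, combo, cost, covered)

score, combo, cost, covered = best
print(f"choose {combo}: cost={cost} k$, zones covered={sorted(covered)}")
```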

  16. Optimization of resistively hardened latches

    International Nuclear Information System (INIS)

    Gagne, G.; Savaria, Y.

    1990-01-01

    The design of digital circuits tolerant to single-event upsets is considered. The results are presented of a study in which an analytical model was used to predict the behavior of a standard resistively hardened latch. It is shown that a worst-case analysis for all possible single-event upset situations (on the latch or in the logic) can be derived from studying the effects of a transiently disturbed write cycle. The existence of an intrinsic minimum write period to tolerate a transient of a given duration is also demonstrated.

  17. Optimization of a space based radiator

    International Nuclear Information System (INIS)

    Sam, Kien Fan Cesar Hung; Deng Zhongmin

    2011-01-01

    Nowadays there is an increased demand for satellite weight reduction in order to reduce costs. Thermal control system designers face the challenge of reducing both the weight of the system and the required heater power while maintaining component temperatures within their design ranges. The main purpose of this paper is to present the optimization of a heat pipe radiator applied to a practical engineering design application. For this study, a communications satellite payload panel was considered. Four radiator areas were defined instead of a centralized one in order to improve heat rejection into space; the radiator's dimensions were determined considering the worst-case hot scenario, solar fluxes, heat dissipation and the components' upper design temperature limit. Dimensions, thermal properties of the structural panel, optical properties and degradation/contamination of thermal control coatings were also considered. A thermal model was constructed for thermal analysis, and two heat pipe network designs were evaluated and compared. The model that allowed better radiator efficiency was selected for parametric thermal analysis and optimization. The aim is to find the minimum size of the heat pipe network that still complies with thermal control requirements without increasing power consumption. - Highlights: • Heat pipe radiator optimization applied to a practical engineering design application. • The heat pipe radiator of a communications satellite panel is optimized. • A thermal model was built for parametric thermal analysis and optimization. • The optimal heat pipe network size is determined for the optimal weight solution. • Thermal compliance was verified by transient thermal analysis.

  18. Efficiency in the Worst Production Situation Using Data Envelopment Analysis

    Directory of Open Access Journals (Sweden)

    Md. Kamrul Hossain

    2013-01-01

    Full Text Available Data envelopment analysis (DEA) measures relative efficiency among decision making units (DMUs) without considering noise in the data. The least efficient DMU is the one in the worst situation. In this paper, we measure the efficiency of an individual DMU when it loses the maximum output, while the efficiency of the other DMUs is measured in the observed situation. This efficiency is the minimum efficiency of a DMU. Stochastic data envelopment analysis (SDEA), a DEA method that considers noise in the data, is proposed in this study. Using the bounded Pareto distribution, we estimate the DEA efficiency from the efficiency interval. A small value of the shape parameter allows the efficiency to be estimated more accurately using the Pareto distribution. Rank correlations were estimated between observed efficiencies and minimum efficiency, as well as between observed and estimated efficiency. The correlations indicate the effectiveness of this SDEA model.
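
    The paper's SDEA with bounded-Pareto noise is more involved; the sketch below shows only the standard deterministic input-oriented CCR envelopment model that DEA efficiency scores are based on, with invented data, solved via scipy's linprog.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs, 2 inputs (rows of X), 1 output (row of Y).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.5, 2.0]])

def ccr_efficiency(o: int) -> float:
    """Input-oriented CCR model for DMU o:
       min theta  s.t.  X @ lam <= theta * X[:, o],  Y @ lam >= Y[:, o],  lam >= 0."""
    m, n = X.shape
    c = np.r_[1.0, np.zeros(n)]                         # variables: [theta, lam]
    A_in = np.hstack([-X[:, [o]], X])                   # X lam - theta * x_o <= 0
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])  # -Y lam <= -y_o
    A = np.vstack([A_in, A_out])
    b = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(X.shape[1]):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```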

  19. An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller......-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point....

  20. An Extended Quadratic Frobenius Primality Test with Average- and Worst-Case Error Estimate

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2006-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller......-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point....

  1. Best-Worst scaling…reflections on presentation, analysis, and lessons learnt from case 3 BWS experiments

    DEFF Research Database (Denmark)

    Adamsen, Jannie Mia; Rundle-Thiele, Sharyn; Whitty, Jennifer

    2013-01-01

    Surveys based on Likert scales and similar ratings-based scales continue to dominate market research practice despite their many and well-documented limitations. Key issues of concern for Likert scales include over- or under-reporting depending on the context, and variation in responses based......,600 respondents. One case 3 BW experiment investigating consumer preferences for organic apples is featured and evaluated using two approaches. The first analysis treats the data as a case 1 BW experiment to outline the simplicity of case 1 analysis. Case 3 BW analysis involving multinomial logit and latent class...... do believe the BWS method has a significant potential to improve predictability in market research – the response rate and positive participant feedback speaks for itself....

  2. Surrogate Assisted Design Optimization of an Air Turbine

    Directory of Open Access Journals (Sweden)

    Rameez Badhurshah

    2014-01-01

    Full Text Available Surrogates are cheaper to evaluate and assist in designing systems in less time. On the other hand, surrogates are problem dependent and need to be evaluated for each problem to find a suitable one. The Kriging variants, ordinary, universal, and blind, along with the commonly used response surface approximation (RSA) model, were used in the present problem to optimize the performance of an air impulse turbine used for ocean wave energy harvesting by CFD analysis. A three-level full factorial design was employed to find sample points in the design space for two design variables. A Reynolds-averaged Navier-Stokes solver was used to evaluate the objective function responses, and these responses along with the design variables were used to construct the Kriging variants and RSA functions. A hybrid genetic algorithm was used to find the optimal point in the design space. It was found that the best optimal design was produced by universal Kriging while blind Kriging produced the worst. The present approach is suggested for renewable energy applications.

  3. Optimization of maintenance strategies in case of data uncertainties; Optimierung von Instandhaltungsstrategien bei unscharfen Eingangsdaten

    Energy Technology Data Exchange (ETDEWEB)

    Aha, Ulrich

    2013-07-01

    Maintenance strategies aim to keep a technical facility functioning despite damaging processes (wear, corrosion, fatigue) while simultaneously controlling these processes. The project 'Optimization of maintenance strategies in case of data uncertainties' aims to optimize maintenance measures such as preventive measures (lubrication etc.), inspections and replacements in order to keep the facility/plant operating while minimizing financial costs. The report covers the following topics: modeling assumptions, model development and optimization procedure, and results for a conventional power plant and an oxyfuel plant.

  4. On Quantification of Flexibility in Power Systems

    DEFF Research Database (Denmark)

    Bucher, Matthias A.; Delikaraoglou, Stefanos; Heussen, Kai

    2015-01-01

    practice of the TSO using a robust reserve procurement strategy which guarantees optimal system response in the worst-case realization of the uncertainty. An illustrative three-node system is used to investigate the procurement method. Finally, the locational flexibility for a larger test system is presented.

  5. An Improved Fruit Fly Optimization Algorithm Inspired from Cell Communication Mechanism

    Directory of Open Access Journals (Sweden)

    Chuncai Xiao

    2015-01-01

    Full Text Available The fruit fly optimization algorithm (FOA), invented recently, is a new swarm intelligence method based on the foraging behavior of fruit flies and has been shown to be competitive with existing evolutionary algorithms, such as the particle swarm optimization (PSO) algorithm. However, there are still some disadvantages in the FOA, such as low convergence precision and being easily trapped in a local optimum at the later stage of evolution. This paper presents an improved FOA based on the cell communication mechanism (CFOA), which incorporates information from the global worst, mean, and best solutions into the search strategy to improve exploitation. The results on a set of numerical benchmark functions show that the CFOA outperforms the FOA and the PSO in most of the experiments. Further, the CFOA is applied to optimize the controller for preoxidation furnaces in carbon fiber production. Simulation results demonstrate the effectiveness of the CFOA.

  6. Integrated Circuit Conception: A Wire Optimization Technic Reducing Interconnection Delay in Advanced Technology Nodes

    Directory of Open Access Journals (Sweden)

    Mohammed Darmi

    2017-10-01

    Full Text Available As we increasingly use advanced technology nodes to design integrated circuits (ICs), physical designers and electronic design automation (EDA) providers face multiple challenges: first, to honor all physical constraints that come with cutting-edge technologies and, second, to achieve the expected quality of results (QoR). An advanced technology should be able to bring better performance at minimum cost, whatever the complexity; a high effort to develop out-of-the-box optimization techniques is therefore more than needed. In this paper, we introduce a new routing technique with the objective of optimizing timing by acting only on the routing topology, and without impacting IC area. In fact, self-aligned double patterning (SADP) technology offers an important difference in layer resistance between SADP and No-SADP layers; this property is taken as an advantage to drive the global router to use the less resistive No-SADP layers for critical nets. To prove the benefit on real test cases, we use Mentor Graphics' physical design EDA tool Nitro-SoC™ and several 7 nm technology node designs. The experiments show that worst negative slack (WNS) and total negative slack (TNS) improved by up to 13% and 56%, respectively, compared to the baseline flow.

  7. Atmospheric transport of radioactive debris to Norway in case of a hypothetical accident related to the recovery of the Russian submarine K-27

    International Nuclear Information System (INIS)

    Bartnicki, Jerzy; Amundsen, Ingar; Brown, Justin; Hosseini, Ali; Hov, Øystein; Haakenstad, Hilde; Klein, Heiko; Lind, Ole Christian; Salbu, Brit; Szacinski Wendel, Cato C.; Ytre-Eide, Martin Album

    2016-01-01

    The Russian nuclear submarine K-27 suffered a loss-of-coolant accident in 1968 and, with nuclear fuel in both reactors, was scuttled in 1981 in the outer part of Stepovogo Bay on the eastern coast of Novaya Zemlya. The inventory of spent nuclear fuel on board the submarine is of concern because it represents a potential source of radioactive contamination of the Kara Sea, and a criticality accident with potential for long-range atmospheric transport of radioactive particles cannot be ruled out. To address these concerns and to provide a better basis for evaluating possible radiological impacts of potential releases in case a salvage operation is initiated, we assessed the atmospheric transport of radionuclides and deposition in Norway from a hypothetical criticality accident on board the K-27. To achieve this, a long-term (33-year) meteorological database was prepared and used to select the worst-case meteorological scenarios for each of three selected locations of the potential accident. Next, the dispersion model SNAP was run with the source term for the worst-case accident scenario and the selected meteorological scenarios. The results showed predictions to be very sensitive to the estimation of the source term for the worst-case accident, and especially to the sizes and densities of the released radioactive particles. The results indicated that a large area of Norway could be affected, but that the deposition in Northern Norway would be considerably higher than in other areas of the country. The simulations showed that deposition from the worst-case scenario of a hypothetical K-27 accident would be at least two orders of magnitude lower than the deposition observed in Norway following the Chernobyl accident. - Highlights: • A long-term meteorological database has been developed for atmospheric dispersion. • Using this database, the worst-case meteorological scenarios have been selected. • Mainly northern parts of Norwegian territory will be

  8. SU-E-T-625: Robustness Evaluation and Robust Optimization of IMPT Plans Based on Per-Voxel Standard Deviation of Dose Distributions.

    Science.gov (United States)

    Liu, W; Mohan, R

    2012-06-01

    Proton dose distributions, IMPT in particular, are highly sensitive to setup and range uncertainties. We report a novel method, based on the per-voxel standard deviation (SD) of dose distributions, to evaluate the robustness of proton plans and to robustly optimize IMPT plans to render them less sensitive to uncertainties. For each optimization iteration, nine dose distributions are computed - the nominal one, and one each for ± setup uncertainties along the x, y and z axes and for ± range uncertainty. The SD of dose in each voxel is used to create an SD-volume histogram (SVH) for each structure. The SVH may be considered a quantitative representation of the robustness of the dose distribution. For optimization, the desired robustness may be specified in terms of an SD-volume (SV) constraint on the CTV and incorporated as a term in the objective function. Results of optimization with and without this constraint were compared in terms of plan optimality and robustness using the so-called 'worst-case' dose distributions, which are obtained by assigning the lowest among the nine doses to each voxel in the clinical target volume (CTV) and the highest to normal tissue voxels outside the CTV. The SVH curve and the area under it for each structure were used as quantitative measures of robustness. The penalty parameter of the SV constraint may be varied to control the tradeoff between robustness and plan optimality. We applied these methods to one case each of H&N and lung. In both cases, we found that imposing the SV constraint improved plan robustness, but at the cost of normal tissue sparing. SVH-based optimization and evaluation is an effective tool for robustness evaluation and robust optimization of IMPT plans. Studies need to be conducted to test the methods for larger cohorts of patients and for other sites. This research is supported by National Cancer Institute (NCI) grant P01CA021239, the University Cancer Foundation via the Institutional Research Grant program at the University of Texas MD
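
    The per-voxel bookkeeping behind this evaluation is compact; a sketch under the convention stated in the abstract (nine scenario doses per voxel, worst case taken as the lowest dose inside the CTV and the highest outside, robustness summarized by the per-voxel SD). All numbers are synthetic stand-ins.

        import numpy as np

        rng = np.random.default_rng(1)
        n_voxels, n_scenarios = 1000, 9     # nominal + 8 setup/range perturbations
        dose = rng.normal(60.0, 2.0, size=(n_scenarios, n_voxels))  # Gy, toy data
        in_ctv = np.zeros(n_voxels, dtype=bool)
        in_ctv[:400] = True                 # first 400 voxels belong to the CTV

        # Worst-case dose: min over scenarios in the CTV, max outside it.
        worst_case = np.where(in_ctv, dose.min(axis=0), dose.max(axis=0))

        # Per-voxel SD over scenarios; sweeping a threshold gives the SVH curve.
        sd = dose.std(axis=0)
        thresholds = np.linspace(0.0, sd.max(), 50)
        svh = [(sd[in_ctv] >= t).mean() * 100.0 for t in thresholds]

        print(f"CTV worst-case min dose: {worst_case[in_ctv].min():.1f} Gy")
        print(f"CTV volume with SD >= 1 Gy: {(sd[in_ctv] >= 1.0).mean() * 100:.0f}%")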

  9. OPTIMIZATION METHOD AND SOFTWARE FOR FUEL COST REDUCTION IN CASE OF ROAD TRANSPORT ACTIVITY

    Directory of Open Access Journals (Sweden)

    György Kovács

    2017-06-01

    Full Text Available The transport activity is one of the most expensive processes in the supply chain, and the fuel cost is the highest among the cost components of transportation. The goal of the research is to optimize the transport costs for a given transport task, both by selecting the optimal petrol station and by determining the optimal amount of fuel to refill. Recently, in practice, these two decisions have not been made centrally at the forwarding company, but have depended on the individual decision of the driver. The aim of this study is to elaborate a precise and reliable mathematical method for selecting the optimal refuelling stations and determining the optimal amount of refilled fuel to fulfil the transport demands. Based on the elaborated model, new decision-supporting software is developed for the economical fulfilment of transport trips.

  10. Relay Precoder Optimization in MIMO-Relay Networks With Imperfect CSI

    KAUST Repository

    Pandarakkottilil, Ubaidulla

    2011-11-01

    In this paper, we consider robust joint designs of relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO-relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first model is a stochastic error (SE) model, where the probability distribution of the CSI error is Gaussian. This model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The next model for the CSI error is a norm-bounded error (NBE) model, where the CSI error can be specified by an uncertainty set. This model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach. For this model, we propose a robust precoder design that minimizes total relay transmit power under constraints on MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed design through simulations. © 2011 IEEE.

  11. Optimal control of an invasive species using a reaction-diffusion model and linear programming

    Science.gov (United States)

    Bonneau, Mathieu; Johnson, Fred A.; Smith, Brian J.; Romagosa, Christina M.; Martin, Julien; Mazzotti, Frank J.

    2017-01-01

    Managing an invasive species is particularly challenging as little is generally known about the species’ biological characteristics in its new habitat. In practice, removal of individuals often starts before the species is studied to provide the information that will later improve control. Therefore, the locations and the amount of control have to be determined in the face of great uncertainty about the species characteristics and with a limited amount of resources. We propose framing spatial control as a linear programming optimization problem. This formulation, paired with a discrete reaction-diffusion model, permits calculation of an optimal control strategy that minimizes the remaining number of invaders for a fixed cost or that minimizes the control cost for containment or protecting specific areas from invasion. We propose computing the optimal strategy for a range of possible model parameters, representing current uncertainty on the possible invasion scenarios. Then, a best strategy can be identified depending on the risk attitude of the decision-maker. We use this framework to study the spatial control of the Argentine black and white tegus (Salvator merianae) in South Florida. There is uncertainty about tegu demography and we considered several combinations of model parameters, exhibiting various dynamics of invasion. For a fixed one-year budget, we show that the risk-averse strategy, which optimizes the worst-case scenario of tegus’ dynamics, and the risk-neutral strategy, which optimizes the expected scenario, both concentrated control close to the point of introduction. A risk-seeking strategy, which optimizes the best-case scenario, focuses more on models where eradication of the species in a cell is possible and consists of spreading control as much as possible. For the establishment of a containment area, assuming an exponential growth we show that with current control methods it might not be possible to implement such a strategy for some of the
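
    The one-step allocation can be written as a very small linear program; a toy sketch with scipy, ignoring the reaction-diffusion dynamics and using invented removal costs and efficiency (not the tegu parameters):

        import numpy as np
        from scipy.optimize import linprog

        p = np.array([50.0, 30.0, 10.0, 5.0])   # invaders per cell (invented)
        cost = np.array([1.0, 1.2, 2.0, 2.5])   # cost per unit of removal effort
        kill = 0.8                               # invaders removed per unit effort
        budget = 40.0

        # Minimizing remaining invaders sum(p) - kill*sum(x) is equivalent to
        # maximizing total effort sum(x) under the budget and per-cell caps.
        res = linprog(c=-np.ones_like(p),
                      A_ub=[cost], b_ub=[budget],
                      bounds=[(0.0, pi / kill) for pi in p])
        x = res.x
        print("effort per cell:", np.round(x, 1))
        print(f"remaining invaders: {p.sum() - kill * x.sum():.1f}")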

  12. Optimal auxiliary Hamiltonians for truncated boson-space calculations by means of a maximal-decoupling variational principle

    International Nuclear Information System (INIS)

    Li, C.

    1991-01-01

    A new method based on a maximal-decoupling variational principle is proposed to treat the Pauli-principle constraints for calculations of nuclear collective motion in a truncated boson space. The viability of the method is demonstrated through an application to the multipole form of boson Hamiltonians for the single-j and nondegenerate multi-j pairing interactions. While these boson Hamiltonians are Hermitian and contain only one- and two-boson terms, they are also the worst case for truncated boson-space calculations because they are not amenable to any boson truncations at all. By using auxiliary Hamiltonians optimally determined by the maximal-decoupling variational principle, however, truncations in the boson space become feasible and even yield reasonably accurate results. The method proposed here may thus be useful for doing realistic calculations of nuclear collective motion as well as for obtaining a viable interacting-boson-model type of boson Hamiltonian from the shell model

  13. Virtual sensors for active noise control in acoustic-structural coupled enclosures using structural sensing: robust virtual sensor design.

    Science.gov (United States)

    Halim, Dunant; Cheng, Li; Su, Zhongqing

    2011-03-01

    The work aimed to develop a robust virtual sensing design methodology for sensing and active control applications in vibro-acoustic systems. The proposed virtual sensor was designed to estimate a broadband acoustic interior sound pressure using structural sensors, with robustness against certain dynamic uncertainties occurring in an acoustic-structural coupled enclosure. A convex combination of Kalman sub-filters was used during the design, accommodating different sets of perturbed dynamic models of the vibro-acoustic enclosure. A minimax optimization problem was set up to determine an optimal convex combination of Kalman sub-filters, ensuring an optimal worst-case virtual sensing performance. The virtual sensing and active noise control performance was numerically investigated on a rectangular panel-cavity system. It was demonstrated that the proposed virtual sensor could accurately estimate the interior sound pressure, particularly that dominated by cavity-controlled modes, by using a structural sensor. With such a virtual sensing technique, effective active noise control performance was also obtained even for the worst-case dynamics. © 2011 Acoustical Society of America
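
    The minimax weighting step can be isolated and illustrated on its own: given an error measure for each Kalman sub-filter under each perturbed model, choose convex combination weights that minimize the worst-case combined error. The linear-programming reformulation and the error matrix below are schematic stand-ins for the paper's design, assuming the combined error is (approximately) linear in the weights.

        import numpy as np
        from scipy.optimize import linprog

        # err[j, m]: error of sub-filter j under perturbed model m (toy data).
        err = np.array([[1.0, 3.0, 2.0],
                        [2.5, 1.2, 2.2],
                        [2.0, 2.0, 1.1]])
        J, M = err.shape

        # Variables z = (w_1..w_J, t): minimize t s.t. err^T w <= t,
        # sum(w) = 1, w >= 0.
        c = np.r_[np.zeros(J), 1.0]
        A_ub = np.c_[err.T, -np.ones(M)]
        b_ub = np.zeros(M)
        A_eq = np.r_[np.ones(J), 0.0].reshape(1, -1)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * J + [(None, None)])
        w, t = res.x[:J], res.x[J]
        print("convex weights:", np.round(w, 3), "worst-case error:", round(t, 3))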

  14. A Systematic Review Comparing the Acceptability, Validity and Concordance of Discrete Choice Experiments and Best-Worst Scaling for Eliciting Preferences in Healthcare.

    Science.gov (United States)

    Whitty, Jennifer A; Oliveira Gonçalves, Ana Sofia

    2018-06-01

    The aim of this study was to compare the acceptability, validity and concordance of discrete choice experiment (DCE) and best-worst scaling (BWS) stated preference approaches in health. A systematic search of EMBASE, Medline, AMED, PubMed, CINAHL, Cochrane Library and EconLit databases was undertaken in October to December 2016 without date restriction. Studies were included if they were published in English, presented empirical data related to the administration or findings of traditional format DCE and object-, profile- or multiprofile-case BWS, and were related to health. Study quality was assessed using the PREFS checklist. Fourteen articles describing 12 studies were included, comparing DCE with profile-case BWS (9 studies), DCE and multiprofile-case BWS (1 study), and profile- and multiprofile-case BWS (2 studies). Although limited and inconsistent, the balance of evidence suggests that preferences derived from DCE and profile-case BWS may not be concordant, regardless of the decision context. Preferences estimated from DCE and multiprofile-case BWS may be concordant (single study). Profile- and multiprofile-case BWS appear more statistically efficient than DCE, but no evidence is available to suggest they have a greater response efficiency. Little evidence suggests superior validity for one format over another. Participant acceptability may favour DCE, which had a lower self-reported task difficulty and was preferred over profile-case BWS in a priority setting but not necessarily in other decision contexts. DCE and profile-case BWS may be of equal validity but give different preference estimates regardless of the health context; thus, they may be measuring different constructs. Therefore, choice between methods is likely to be based on normative considerations related to coherence with theoretical frameworks and on pragmatic considerations related to ease of data collection.

  15. Minimizing worst-case and average-case makespan over scenarios

    NARCIS (Netherlands)

    Feuerstein, Esteban; Marchetti-Spaccamela, Alberto; Schalekamp, Frans; Sitters, R.A.; van der Ster, Suzanne; Stougie, Leen; van Zuylen, Anke

    2017-01-01

    We consider scheduling problems over scenarios where the goal is to find a single assignment of the jobs to the machines which performs well over all scenarios in an explicitly given set. Each scenario is a subset of jobs that must be executed in that scenario. The two objectives that we consider

  16. Experimental design: Case studies of diagnostics optimization for W7-X

    International Nuclear Information System (INIS)

    Dreier, H.; Dinklage, A.; Fischer, R.; Hartfuss, H.-J.; Hirsch, M.; Kornejew, P.; Pasch, E.; Turkin, Yu.

    2005-01-01

    The preparation of diagnostics for Wendelstein 7-X is accompanied by diagnostics simulations and optimization. Starting from the physical objectives, the design of diagnostics should incorporate predictive modelling (e.g. transport modelling) and simulations of the respective measurements. Although technical constraints govern design considerations, it appears that several design parameters of different diagnostics can be optimized. However, a general formulation of fusion diagnostics design in terms of optimization has been lacking. In this paper, first case studies of Bayesian experimental design aiming at applications to W7-X diagnostics preparation are presented. The information gain of a measurement is formulated as a utility function expressed in terms of the Kullback-Leibler divergence. The expected range of data is then included, and the resulting expected utility represents the objective for optimization. Bayesian probability theory provides a framework that allows an appropriate formulation of the design problem in terms of probability distribution functions. Results are obtained for the information gain from interferometry and for the design of polychromators for Thomson scattering. For interferometry, studies of the choice of lines of sight for optimum signal and for the reproduction of gradient positions are presented for circular, elliptical and W7-X geometries. For Thomson scattering, the design of filter transmissions for density and temperature measurements is discussed. (author)
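
    For a linear-Gaussian measurement model the expected Kullback-Leibler gain has a closed form, which makes the expected utility easy to scan over a design parameter; a one-dimensional sketch with a generic, invented sensitivity function (not a W7-X diagnostic model):

        import numpy as np

        # Prior N(0, s0^2) on the parameter; measurement y = a(d)*theta + N(0, sn^2).
        # Expected KL gain (mutual information) = 0.5 * ln(1 + a(d)^2 * s0^2 / sn^2).
        s0, sn = 2.0, 0.5
        a = lambda d: np.sin(d)        # toy design-dependent sensitivity

        designs = np.linspace(0.0, np.pi, 181)
        gain = 0.5 * np.log(1.0 + (a(designs) * s0 / sn) ** 2)
        best = designs[np.argmax(gain)]
        print(f"best design: {np.degrees(best):.0f} deg, "
              f"expected gain {gain.max():.2f} nats")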

  17. Optimal Stabilization of Social Welfare under Small Variation of Operating Condition with Bifurcation Analysis

    Science.gov (United States)

    Chanda, Sandip; De, Abhinandan

    2016-12-01

    A social welfare optimization technique is proposed in this paper, built on a state-space-based model and bifurcation analysis, to offer a substantial stability margin even in the most inadvertent states of power system networks. The restoration of the power market's dynamic price equilibrium is addressed by forming the Jacobian of the sensitivity matrix to regulate the state variables, so as to standardize the quality of the solution in the worst possible contingencies of the network, even with the co-option of intermittent renewable energy sources. The model has been tested on the IEEE 30-bus system, and particle swarm optimization has been used to integrate the proposed model and methodology.

  18. A QFD-based optimization method for a scalable product platform

    Science.gov (United States)

    Luo, Xinggang; Tang, Jiafu; Kwong, C. K.

    2010-02-01

    In order to incorporate the customer into the early phase of the product development cycle and to better satisfy customers' requirements, this article adopts quality function deployment (QFD) for the optimal design of a scalable product platform. A five-step QFD-based method is proposed to determine the optimal values for the platform engineering characteristics (ECs) and non-platform ECs of the products within a product family. First, the houses of quality (HoQs) for all product variants are developed and a QFD-based optimization approach is used to determine the optimal ECs for each product variant. Sensitivity analysis is performed for each EC with respect to overall customer satisfaction (OCS). Based on the obtained sensitivity indices of the ECs, a mathematical model is established to simultaneously optimize the values of the platform and non-platform ECs. Finally, by comparing and analysing the optimal solutions with different numbers of platform ECs, the ECs with which the worst OCS loss can be avoided are selected as platform ECs. An illustrative example is used to demonstrate the feasibility of this method, and a comparison between the proposed method and a two-step approach is conducted on the example. The comparison shows that, as a single-stage approach, the proposed method yields a better average degree of customer satisfaction due to the simultaneous optimization of platform and non-platform ECs.

  19. Mechanical Design Optimization Using Advanced Optimization Techniques

    CERN Document Server

    Rao, R Venkata

    2012-01-01

    Mechanical design includes an optimization process in which designers always consider objectives such as strength, deflection, weight, wear, corrosion, etc. depending on the requirements. However, design optimization for a complete mechanical assembly leads to a complicated objective function with a large number of design variables. It is a good practice to apply optimization techniques for individual components or intermediate assemblies than a complete assembly. Analytical or numerical methods for calculating the extreme values of a function may perform well in many practical cases, but may fail in more complex design situations. In real design problems, the number of design parameters can be very large and their influence on the value to be optimized (the goal function) can be very complicated, having nonlinear character. In these complex cases, advanced optimization algorithms offer solutions to the problems, because they find a solution near to the global optimum within reasonable time and computational ...

  20. Valuing Treatments for Parkinson Disease Incorporating Process Utility: Performance of Best-Worst Scaling, Time Trade-Off, and Visual Analogue Scales

    NARCIS (Netherlands)

    Weernink, Marieke Geertruida Maria; Groothuis-Oudshoorn, Catharina Gerarda Maria; IJzerman, Maarten Joost; van Til, Janine Astrid

    2016-01-01

    Objective The objective of this study was to compare treatment profiles including both health outcomes and process characteristics in Parkinson disease using best-worst scaling (BWS), time trade-off (TTO), and visual analogue scales (VAS). Methods From the model comprising seven attributes with

  1. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    OpenAIRE

    Zhigang Zuo; Shuhong Liu; Yizhang Fan; Yulin Wu

    2014-01-01

    It is important to evaluate the economic efficiency of boiler circulating pumps in manufacturing process from the manufacturers' point of view. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different pump casing sizes were firstly carried out for the hydraulic performance evaluation. The effects of the working temperature and the sealing ring on the hydraulic efficiency were...

  2. Optimal Sparse Matrix Dense Vector Multiplication in the I/O-Model

    DEFF Research Database (Denmark)

    Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf

    2010-01-01

    of nonzero entries is kN, i.e., where the average number of nonzero entries per column is k. We investigate the external worst-case complexity, i.e., the best possible upper bound on the number of I/Os, as a function of k and N. We determine this complexity up to a constant factor for all meaningful...

  3. New opportunities for natural gas

    International Nuclear Information System (INIS)

    Newcomb, J.

    1991-01-01

    This paper reports that the prospect of extremely low gas prices - approaching $1.00 per million Btu (MMBtu) on a seasonal basis - is frightening many producers. The presence of large gas inventories only serves to intensify these fears. Threats of declining market conditions stir the question: how should producers react to these prices? On that score, the experts advise: one of the first rules of playing the power game is that all bad news must be accepted calmly, as if one already knew and didn't much care. Although stated jokingly, there is a kernel of truth to the suggestion. Having thought through the adversities involved in the worst-case scenario - and for natural gas producers and other industry participants, those adversities are formidable - companies may be better prepared to adapt to the worst case, should it happen to materialize. Here, the bad news is that CERA foresees serious near-term perils that could route the industry toward that worst case. The good news is that long-term prospects provide a cause for optimism

  4. A Data-Driven Stochastic Reactive Power Optimization Considering Uncertainties in Active Distribution Networks and Decomposition Method

    DEFF Research Database (Denmark)

    Ding, Tao; Yang, Qingrun; Yang, Yongheng

    2018-01-01

    To address the uncertain output of distributed generators (DGs) for reactive power optimization in active distribution networks, the stochastic programming model is widely used. The model is employed to find an optimal control strategy with minimum expected network loss while satisfying all......, in this paper, a data-driven modeling approach is introduced to assume that the probability distribution from the historical data is uncertain within a confidence set. Furthermore, a data-driven stochastic programming model is formulated as a two-stage problem, where the first-stage variables find the optimal...... control for discrete reactive power compensation equipment under the worst probability distribution of the second stage recourse. The second-stage variables are adjusted to uncertain probability distribution. In particular, this two-stage problem has a special structure so that the second-stage problem...

  5. JPL Thermal Design Modeling Philosophy and NASA-STD-7009 Standard for Models and Simulations - A Case Study

    Science.gov (United States)

    Avila, Arturo

    2011-01-01

    Standard JPL thermal engineering practice prescribes worst-case methodologies for design. In this process, environmental and key uncertain thermal parameters (e.g., thermal blanket performance, interface conductance, optical properties) are stacked in a worst-case fashion to yield the most hot- or cold-biased temperature, so that the resulting simulations represent the upper and lower bounds. This effectively constitutes JPL's thermal design margin philosophy. Uncertainty in the margins and the absolute temperatures is usually estimated by sensitivity analyses and/or by comparing the worst-case results with "expected" results. Applicability of the analytical model for specific design purposes, along with any temperature requirement violations, is documented in peer and project design review material. In 2008, NASA released NASA-STD-7009, Standard for Models and Simulations. The scope of this standard covers the development and maintenance of models, the operation of simulations, the analysis of the results, training, recommended practices, the assessment of Modeling and Simulation (M&S) credibility, and the reporting of M&S results. The Mars Exploration Rover (MER) project thermal control system M&S activity was chosen as a case study to determine whether JPL practice is in line with the standard and to identify areas of non-compliance. This paper summarizes the results and makes recommendations regarding the application of this standard to JPL thermal M&S practices.
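
    The stacking itself is simple arithmetic; a sketch contrasting hot-biased worst-case stacking with a root-sum-square (RSS) combination, using invented sensitivities (not values from a JPL model):

        # Hot-biased temperature deltas (degC) for the extreme of each uncertain
        # parameter; all values are illustrative.
        hot_bias_deltas = {
            "blanket effective emissivity (low)": 4.0,
            "interface conductance (low)":        3.5,
            "solar absorptance (high)":           5.0,
            "environmental flux (high)":          2.5,
        }

        nominal_temp = 45.0                       # degC, nominal model run
        stacked = nominal_temp + sum(hot_bias_deltas.values())
        rss = nominal_temp + sum(d * d for d in hot_bias_deltas.values()) ** 0.5

        print(f"nominal:        {nominal_temp:.1f} degC")
        print(f"worst-case hot: {stacked:.1f} degC (all parameters stacked)")
        print(f"RSS estimate:   {rss:.1f} degC (statistical combination)")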

  6. Safe to the super GAU (worst case scenario); Sicher bis zum Super-Gau

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Armin

    2017-07-15

    The NPP Philippsburg-2 has violated existing safety requirements for 32 years: due to bolt breaks in ventilation channels, four safety injection systems could fail in case of an incident. The NPP was also built with deviations from its design with respect to protection against strong vibrations, so an earthquake or a military aircraft crash could trigger a severe accident. Further safety deficiencies concern insulating materials in the sump, insufficient cooling water levels during start-up, and fire prevention.

  7. Transaction fees and optimal rebalancing in the growth-optimal portfolio

    Science.gov (United States)

    Feng, Yu; Medo, Matúš; Zhang, Liang; Zhang, Yi-Cheng

    2011-05-01

    The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically verified for broad return distributions and returns generated by a GARCH process. Finally we study the case when investment is rebalanced only partially and show that this strategy can improve the investment long-term growth rate more than optimization of the rebalancing period.
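
    The effect is easy to reproduce numerically; a sketch for the binary-return asset, assuming a proportional fee charged on the amount traded at each rebalancing (all parameter values are illustrative):

        import numpy as np

        def growth_rate(period, f=0.5, p=0.6, r=0.1, fee=0.002,
                        steps=100_000, seed=0):
            """Average log-growth per step when rebalancing every `period` steps."""
            rng = np.random.default_rng(seed)
            cash, risky = 1.0 - f, f
            for t in range(1, steps + 1):
                risky *= 1.0 + (r if rng.random() < p else -r)
                if t % period == 0:               # rebalance back to fraction f
                    total = cash + risky
                    trade = f * total - risky     # buy if positive, sell if negative
                    cash -= trade + fee * abs(trade)
                    risky += trade
            return np.log(cash + risky) / steps

        for T in (1, 5, 10, 20, 50):
            print(f"rebalance every {T:3d} steps -> "
                  f"growth {growth_rate(T):+.6f} per step")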

  8. A framework for model-based optimization of bioprocesses under uncertainty: Lignocellulosic ethanol production case

    DEFF Research Database (Denmark)

    Morales Rodriguez, Ricardo; Meyer, Anne S.; Gernaey, Krist

    2012-01-01

    of up to 0.13 USD/gal-ethanol. Further stochastic optimization demonstrated the options for further reduction of the production costs with different processing configurations, reaching a reduction of up to 28% in the production cost in the SHCF configuration compared to the base case operation. Further...

  9. Robust approximate optimal guidance strategies for aeroassisted orbital transfer missions

    Science.gov (United States)

    Ilgen, Marc R.

    This thesis presents the application of game theoretic and regular perturbation methods to the problem of determining robust approximate optimal guidance laws for aeroassisted orbital transfer missions with atmospheric density and navigated state uncertainties. The optimal guidance problem is reformulated as a differential game problem with the guidance law designer and Nature as opposing players. The resulting equations comprise the necessary conditions for the optimal closed loop guidance strategy in the presence of worst case parameter variations. While these equations are nonlinear and cannot be solved analytically, the presence of a small parameter in the equations of motion allows the method of regular perturbations to be used to solve the equations approximately. This thesis is divided into five parts. The first part introduces the class of problems to be considered and presents results of previous research. The second part then presents explicit semianalytical guidance law techniques for the aerodynamically dominated region of flight. These guidance techniques are applied to unconstrained and control constrained aeroassisted plane change missions and Mars aerocapture missions, all subject to significant atmospheric density variations. The third part presents a guidance technique for aeroassisted orbital transfer problems in the gravitationally dominated region of flight. Regular perturbations are used to design an implicit guidance technique similar to the second variation technique but that removes the need for numerically computing an optimal trajectory prior to flight. This methodology is then applied to a set of aeroassisted inclination change missions. In the fourth part, the explicit regular perturbation solution technique is extended to include the class of guidance laws with partial state information. This methodology is then applied to an aeroassisted plane change mission using inertial measurements and subject to uncertainties in the initial value

  10. Making optimal investment decisions for energy service companies under uncertainty: A case study

    International Nuclear Information System (INIS)

    Deng, Qianli; Jiang, Xianglin; Zhang, Limao; Cui, Qingbin

    2015-01-01

    Varied initial energy efficiency investments would result in different annual energy savings achievements. In order to balance the savings revenue and the potential capital loss through EPC (Energy Performance Contracting), a cost-effective investment decision is needed when selecting energy efficiency technologies. In this research, an approach is developed for the ESCO (Energy Service Company) to evaluate the potential energy savings profit, and thus make the optimal investment decisions. The energy savings revenue under uncertainties, which are derived from energy efficiency performance variation and energy price fluctuation, are first modeled as stochastic processes. Then, the derived energy savings profit is shared by the owner and the ESCO according to the contract specification. A simulation-based model is thus built to maximize the owner's profit, and at the same time, satisfy the ESCO's expected rate of return. In order to demonstrate the applicability of the proposed approach, the University of Maryland campus case is also presented. The proposed method could not only help the ESCO determine the optimal energy efficiency investments, but also assist the owner's decision in the bidding selection. - Highlights: • An optimization model is built for determining energy efficiency investment for ESCO. • Evolution of the energy savings revenue is modeled as a stochastic process. • Simulation is adopted to calculate investment balancing the owner and the ESCO's profit. • A campus case is presented to demonstrate applicability of the proposed approach

  11. Robust economic optimization and environmental policy analysis for microgrid planning: An application to Taichung Industrial Park, Taiwan

    International Nuclear Information System (INIS)

    Yu, Nan; Kang, Jin-Su; Chang, Chung-Chuan; Lee, Tai-Yong; Lee, Dong-Yup

    2016-01-01

    This study aims to provide economical and environmentally friendly solutions for a microgrid system with distributed energy resources in the design stage, considering multiple uncertainties during operation and conflicting interests among diverse microgrid stakeholders. For the purpose, we develop a multi-objective optimization model for robust microgrid planning, on the basis of an economic robustness measure, i.e. the worst-case cost among possible scenarios, to reduce the variability among scenario costs caused by uncertainties. The efficacy of the model is successfully demonstrated by applying it to Taichung Industrial Park in Taiwan, an industrial complex, where significant amount of greenhouse gases are emitted. Our findings show that the most robust solution, but the highest cost, mainly includes 45% (26.8 MW) of gas engine and 47% (28 MW) of photovoltaic panel with the highest system capacity (59 MW). Further analyses reveal the environmental benefits from the significant reduction of the expected annual CO_2 emission and carbon tax by about half of the current utility facilities in the region. In conclusion, the developed model provides an efficient decision-making tool for robust microgrid planning at the preliminary stage. - Highlights: • Developed robust economic and environmental optimization model for microgrid planning. • Provided Pareto optimal planning solutions for Taichung Industrial Park, Taiwan. • Suggested microgrid configuration with significant economic and environmental benefits. • Identified gas engine and photovoltaic panel as two promising energy sources.

  12. Worst-case efficient external-memory priority queues

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Katajainen, Jyrki

    1998-01-01

    A priority queue Q is a data structure that maintains a collection of elements, each element having an associated priority drawn from a totally ordered universe, under the operations Insert, which inserts an element into Q, and DeleteMin, which deletes an element with the minimum priority from Q. In this paper a priority-queue implementation is given which is efficient with respect to the number of block transfers or I/Os performed between the internal and external memories of a computer. Let B and M denote the respective capacity of a block and the internal memory measured in elements. The developed data structure handles any intermixed sequence of Insert and DeleteMin operations such that in every disjoint interval of B consecutive priority-queue operations at most c·log_{M/B}(N/M) I/Os are performed, for some positive constant c. These I/Os are divided evenly among the operations: if B ≥ c·log_{M/B} N...

  13. Finding Worst-Case Flexible Schedules using Coevolution

    DEFF Research Database (Denmark)

    Jensen, Mikkel Thomas

    2001-01-01

    Finding flexible schedules is important to industry, since in many environments changes such as machine breakdowns or the appearance of new jobs can happen at short notice.

  14. 49 CFR 194.105 - Worst case discharge.

    Science.gov (United States)

    2010-10-01

    ... capacity of the pipeline), plus the largest line drainage volume after shutdown of the line section(s) in... credits for breakout tank secondary containment and other specific spill prevention measures as follows:

        Prevention measure               Standard    Credit (percent)
        Secondary containment > 100%     NFPA 30     50
        Built/repaired to API...

  15. 33 CFR 154.1029 - Worst case discharge.

    Science.gov (United States)

    2010-07-01

    ... facility. The discharge from each pipe is calculated as follows: The maximum time to discover the release from the pipe in hours, plus the maximum time to shut down flow from the pipe in hours (based on... vessel regardless of the presence of secondary containment; plus (2) The discharge from all piping...
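
    Stripped of the regulatory detail, the first component of such a calculation is (discovery time + shutdown time) x maximum flow rate, plus line-fill volumes; a sketch with invented figures:

        # Worst-case discharge following the structure of 33 CFR 154.1029
        # (all numbers are invented for illustration).
        discovery_time_h = 0.5       # max time to discover the release, hours
        shutdown_time_h  = 0.25      # max time to shut down flow, hours
        max_flow_bbl_h   = 2_000.0   # maximum flow rate, barrels/hour
        line_fill_bbl    = 350.0     # volume of all piping to the vessel

        worst_case_bbl = ((discovery_time_h + shutdown_time_h) * max_flow_bbl_h
                          + line_fill_bbl)
        print(f"worst-case discharge: {worst_case_bbl:,.0f} bbl")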

  16. Warpage optimization on a mobile phone case using response surface methodology (RSM)

    Science.gov (United States)

    Lee, X. N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Shazzuan, S.

    2017-09-01

    Plastic injection moulding is a popular manufacturing method not only because it is reliable, but also because it is efficient and cost-saving, and it is able to produce plastic parts with detailed features and complex geometry. However, defects arising in the injection moulding process degrade the quality and aesthetics of the injection moulded product. The most common defect in the process is warpage, and inappropriate process parameter settings of the injection moulding machine are one of the reasons warpage occurs. The aims of this study were to improve the quality of the injection moulded part by finding the optimal parameters that minimize warpage using response surface methodology (RSM), to identify the most significant parameter, and to compare the recommended parameter setting with the optimized setting obtained by RSM. A mobile phone case was selected as the case study. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as variables, and warpage in the y-direction was selected as the response. The simulation was carried out using Autodesk Moldflow Insight 2012, and the RSM was performed using Design Expert 7.0. The warpage in the y-direction recommended by RSM was reduced by 70%. RSM performed well in solving the warpage issue.
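
    The RSM step amounts to fitting a second-order polynomial to designed simulation runs and minimizing the fitted surface; a two-factor sketch (the design points and warpage values are invented, not the study's five-parameter model):

        import numpy as np
        from scipy.optimize import minimize

        # Coded factors from a small central composite design (toy data):
        # x1 = melt temperature, x2 = packing pressure; y = warpage in mm.
        X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0],
                      [1.41, 0], [-1.41, 0], [0, 1.41], [0, -1.41]])
        y = np.array([0.42, 0.35, 0.30, 0.28, 0.25, 0.31, 0.40, 0.27, 0.36])

        def quad_terms(x):
            x1, x2 = x
            return np.array([1.0, x1, x2, x1 * x2, x1 ** 2, x2 ** 2])

        A = np.array([quad_terms(x) for x in X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)   # fit the response surface

        surface = lambda x: float(quad_terms(x) @ beta)
        res = minimize(surface, x0=[0.0, 0.0], bounds=[(-1.41, 1.41)] * 2)
        print("coded optimum:", np.round(res.x, 2),
              "predicted warpage:", round(res.fun, 3))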

  17. Search Trees with Relaxed Balance and Near-Optimal Height

    DEFF Research Database (Denmark)

    Fagerberg, Rolf; Jensen, Rune E.; Larsen, Kim Skak

    2001-01-01

    We introduce a relaxed k-tree, a search tree with relaxed balance and a height bound, when in balance, of (1+epsilon)log_2 n + 1, for any epsilon > 0. The number of nodes involved in rebalancing is O(1/epsilon) per update in the amortized sense, and O(log n/epsilon) in the worst case sense. This ...... constant rebalancing, which is an improvement over the current definition. World Wide Web search engines are possible applications for this line of work....

  18. Optimization of field homogeneity of Helmholtz-like coils for measuring the balance of planar gradiometers

    International Nuclear Information System (INIS)

    Nordahn, M.A.; Holst, T.; Shen, Y.Q.

    1999-01-01

    Measuring the balance of planar SQUID gradiometers using a relatively small Helmholtz-like coil system requires a careful design of the coils in order to have a high degree of field uniformity along the radial direction. The level to which planar gradiometers can be balanced will be affected by any misalignment of the gradiometer relative to the ideal central position. Therefore, the maximum degree of balancing possible is calculated numerically for the Helmholtz geometry under various perturbations, including misalignment of the gradiometer along the cylindrical and the radial axis, and angular tilting relative to the normal plane. Furthermore, if the ratio between the coil separation and coil radius is chosen to be less than unity, calculations show that the expected radial uniformity of the field can be improved considerably compared to the traditional Helmholtz geometry. The optimized coil geometry is compared to the Helmholtz geometry and is found to yield up to an order of magnitude improvement of the worst case error signal within a volume spanned by the uncertainty in the alignment. (author)

  19. [Diagnosis and the technology for optimizing the medical support of a troop unit].

    Science.gov (United States)

    Korshever, N G; Polkovov, S V; Lavrinenko, O V; Krupnov, P A; Anastasov, K N

    2000-05-01

    The work investigates the system of military unit medical support using the principles and concepts of organizational diagnosis, develops a method for assessing its functional activity, and determines directions for its optimization. Based on the conducted organizational diagnosis and an expert inquiry, informative criteria were determined which characterize the stages of functioning of the military unit medical support system. To evaluate the success of military unit medical support, a complex multi-criteria pattern was developed and an algorithm for optimizing this process was substantiated. Using the results obtained, in particular the realization of principles and concepts of decision theory in a computer program, it is possible to solve the more complex problem of comparing any number of military units: to rank them in order of decreasing priority, to select a prescribed number of the best and worst, and to determine directions for optimizing the activity of the corresponding medical service personnel.

  20. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    Directory of Open Access Journals (Sweden)

    Zhigang Zuo

    2014-04-01

    Full Text Available It is important to evaluate the economic efficiency of boiler circulating pumps in manufacturing process from the manufacturers' point of view. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different pump casing sizes were firstly carried out for the hydraulic performance evaluation. The effects of the working temperature and the sealing ring on the hydraulic efficiency were discussed. A model with casing diameter of 0.875D40 was selected for further analyses. FEM analyses were then carried out on different combinations of casing sizes, casing wall thickness, and materials, to evaluate its safety related to pressure integrity, with respect to both static and fatigue strength analyses. Two models with forging and cast materials were selected as final results.

  1. Hybrid Genetic Algorithm Optimization for Case Based Reasoning Systems

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2008-01-01

    The success of a CBR system largely depends on an effective retrieval of useful prior cases for the problem at hand. Nearest neighbor and induction are the main CBR retrieval algorithms, and each can be more suitable in different situations. Integrating the two retrieval algorithms can capture the advantages of both. However, the induction retrieval algorithm still has some limitations when dealing with noisy data, a large number of irrelevant features, and different types of data. This research utilizes a hybrid approach using genetic algorithms (GAs) for the case-based induction retrieval of the integrated nearest neighbor - induction algorithm, in an attempt to overcome these limitations and increase the overall classification accuracy. GAs can be used to optimize the search space of all possible subsets of the feature set; they can deal with irrelevant and noisy features while still achieving a significant improvement in retrieval accuracy. Therefore, the proposed CBR-GA introduces an effective general-purpose retrieval algorithm that can improve the performance of CBR systems and can be applied in many application areas. CBR-GA has proven its success when applied to different real-life problems.

  2. Average-case analysis of incremental topological ordering

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Friedrich, Tobias

    2010-01-01

    Many applications like pointer analysis and incremental compilation require maintaining a topological ordering of the nodes of a directed acyclic graph (DAG) under dynamic updates. All known algorithms for this problem are either only analyzed for worst-case insertion sequences or only evaluated experimentally on random DAGs. We present the first average-case analysis of incremental topological ordering algorithms. We prove an expected runtime of ... under insertion of the edges of a complete DAG in a random order for the algorithms of Alpern et al. (1990) [4], Katriel and Bodlaender (2006) [18], and Pearce

  3. "It Was the Best of Times, It Was the Worst of Times …": Philosophy of Education in the Contemporary World

    Science.gov (United States)

    Roberts, Peter

    2015-01-01

    This article considers the state of philosophy of education in our current age and assesses prospects for the future of the field. I argue that as philosophers of education, we live in both the best of times and the worst of times. Developments in one key organisation, the Philosophy of Education Society of Australasia, are examined in relation to…

  4. The Integrated Medical Model - Optimizing In-flight Space Medical Systems to Reduce Crew Health Risk and Mission Impacts

    Science.gov (United States)

    Kerstman, Eric; Walton, Marlei; Minard, Charles; Saile, Lynn; Myers, Jerry; Butler, Doug; Lyengar, Sriram; Fitts, Mary; Johnson-Throop, Kathy

    2009-01-01

    The Integrated Medical Model (IMM) is a decision support tool used by medical system planners and designers as they prepare for exploration planning activities of the Constellation program (CxP). IMM provides an evidence-based approach to help optimize the allocation of in-flight medical resources for a specified level of risk within spacecraft operational constraints. Eighty medical conditions and associated resources are represented in IMM; nine conditions are due to Space Adaptation Syndrome. The IMM helps answer fundamental medical mission planning questions such as "What medical conditions can be expected?", "What type and quantity of medical resources are most likely to be used?", and "What is the probability of crew death or evacuation due to medical events?" For a specified mission and crew profile, the IMM effectively characterizes the sequence of events that could potentially occur should a medical condition happen. The mathematical relationships among mission and crew attributes, medical conditions and incidence data, in-flight medical resources, and potential clinical and crew health end states are established to generate end state probabilities. A Monte Carlo computational method is used to determine the probable outcomes and requires up to 25,000 mission trials to reach convergence. For each mission trial, the pharmaceuticals and supplies required to diagnose and treat prevalent medical conditions are tracked and decremented. The uncertainty of patient response to treatment is bounded via a best-case, worst-case, untreated-case algorithm. A Crew Health Index (CHI) metric, developed to account for functional impairment due to a medical condition, provides a quantified measure of risk and enables risk comparisons across mission scenarios. The use of historical in-flight medical data, terrestrial surrogate data as appropriate, and space medicine subject matter expertise has enabled the development of a probabilistic, stochastic decision support tool capable of
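
    At its core this is a resource-decrementing Monte Carlo loop; a heavily simplified sketch (the conditions, incidence probabilities, and supply counts are invented, and a real model tracks many resources and end states):

        import random

        random.seed(4)
        CONDITIONS = {   # per-mission incidence probability, units of supplies used
            "space adaptation syndrome": (0.60, 2),
            "minor trauma":              (0.25, 3),
            "infection":                 (0.10, 4),
        }
        SUPPLY, TRIALS = 8, 25_000

        untreated = 0
        for _ in range(TRIALS):
            stock = SUPPLY
            for prob, units in CONDITIONS.values():
                if random.random() < prob:
                    if stock >= units:
                        stock -= units     # condition treated, supplies decremented
                    else:
                        untreated += 1     # resources exhausted: worst-case branch
                        break

        print(f"P(at least one untreated condition) ~= {untreated / TRIALS:.3f}")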

  5. Optimal Elbow Angle for Extracting sEMG Signals During Fatiguing Dynamic Contraction

    Directory of Open Access Journals (Sweden)

    Mohamed R. Al-Mulla

    2015-09-01

    Full Text Available Surface electromyographic (sEMG) activity of the biceps muscle was recorded from 13 subjects. Data were recorded while subjects performed dynamic contraction until fatigue, and the signals were segmented into two parts (Non-Fatigue and Fatigue). An evolutionary algorithm was used to determine the elbow angles that best separate (using the Davies-Bouldin Index, DBI) the Non-Fatigue and Fatigue segments of the sEMG signal. Establishing the optimal elbow angle for feature extraction used in the evolutionary process was based on 70% of the conducted sEMG trials. After completing 26 independent evolution runs, the best run containing the optimal elbow angles for separation (Non-Fatigue and Fatigue) was selected and then tested on the remaining 30% of the data to measure the classification performance. Testing the performance of the optimal angle was undertaken on nine features extracted from each of the two classes (Non-Fatigue and Fatigue) to quantify the performance. Results showed that the optimal elbow angles can be used for fatigue classification, showing 87.90% correct classification at best for one of the features and an average of 78.45% over all eight features (including the worst-performing features).
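
    The separability criterion driving the evolutionary search is standard and easy to compute; a sketch scoring a single candidate feature on labelled Non-Fatigue/Fatigue segments, with synthetic stand-in data:

        import numpy as np
        from sklearn.metrics import davies_bouldin_score

        rng = np.random.default_rng(2)
        # Toy 1-D feature (e.g., a spectral feature) for the two segment classes.
        non_fatigue = rng.normal(80.0, 5.0, size=100)
        fatigue     = rng.normal(65.0, 5.0, size=100)

        X = np.r_[non_fatigue, fatigue].reshape(-1, 1)
        labels = np.r_[np.zeros(100), np.ones(100)]

        # Lower DBI = better-separated classes; an evolutionary search over elbow
        # angles would keep the angle whose features minimize this score.
        print(f"Davies-Bouldin index: {davies_bouldin_score(X, labels):.3f}")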

  6. Bayesian Network Constraint-Based Structure Learning Algorithms: Parallel and Optimized Implementations in the bnlearn R Package

    Directory of Open Access Journals (Sweden)

    Marco Scutari

    2017-03-01

    Full Text Available It is well known in the literature that the problem of learning the structure of Bayesian networks is very hard to tackle: its computational complexity is super-exponential in the number of nodes in the worst case and polynomial in most real-world scenarios. Efficient implementations of score-based structure learning benefit from past and current research in optimization theory, which can be adapted to the task by using the network score as the objective function to maximize. This is not true for approaches based on conditional independence tests, called constraint-based learning algorithms. The only optimization in widespread use, backtracking, leverages the symmetries implied by the definitions of neighborhood and Markov blanket. In this paper we illustrate how backtracking is implemented in recent versions of the bnlearn R package, and how it degrades the stability of Bayesian network structure learning for little gain in terms of speed. As an alternative, we describe a software architecture and framework that can be used to parallelize constraint-based structure learning algorithms (also implemented in bnlearn), and we demonstrate its performance using four reference networks and two real-world data sets from genetics and systems biology. We show that on modern multi-core or multiprocessor hardware parallel implementations are preferable over backtracking, which was developed when single-processor machines were the norm.

  7. A Casting Yield Optimization Case Study: Forging Ram

    DEFF Research Database (Denmark)

    Kotas, Petr; Tutum, Cem Celal; Hattel, Jesper Henri

    2010-01-01

    This work summarizes the findings of a multi-objective optimization of a gravity sand-cast steel part for which an increase of the casting yield via riser optimization was considered. This was accomplished by coupling a casting simulation software package with an optimization module. The benefits of this approach, recently adopted in the foundry industry worldwide and based on fully automated computer optimization, were demonstrated. First, analyses of filling and solidification of the original casting design were conducted in the standard simulation environment to determine potential flaws and inadequacies...

  8. Patients with the worst outcomes after paracetamol (acetaminophen)-induced liver failure have an early monocytopenia.

    Science.gov (United States)

    Moore, J K; MacKinnon, A C; Man, T Y; Manning, J R; Forbes, S J; Simpson, K J

    2017-02-01

    Acute liver failure (ALF) is associated with significant morbidity and mortality. Studies have implicated the immune response, especially monocytes/macrophages, as being important in dictating outcome. To investigate changes in circulating monocytes and other immune cells serially in patients with ALF, and to relate these to cytokine concentrations, monocyte gene expression and patient outcome. In a prospective case-control study in the Scottish Liver Transplant Unit, Royal Infirmary Edinburgh, 35 consecutive patients admitted with paracetamol-induced liver failure (POD-ALF), 10 patients with non-paracetamol causes of ALF and 16 controls were recruited. The peripheral blood monocyte phenotype was analysed by flow cytometry, circulating cytokines were quantified by protein array, and monocyte gene expression arrays were performed and related to outcome. On admission, patients with the worst outcomes after POD-ALF had a significant monocytopenia, characterised by a reduced classical and an expanded intermediate monocyte population. This was associated with reduced circulating lymphocytes and natural killer cells, peripheral cytokine patterns suggestive of a 'cytokine storm', and increased concentrations of cytokines associated with monocyte egress from the bone marrow. Gene expression array did not differentiate patient outcome. At day 4, there was no significant difference in monocyte, lymphocyte or natural killer cells between survivors and the patients with adverse outcomes. Severe paracetamol-induced liver failure is associated with profound changes in the peripheral blood compartment, particularly in monocytes, that are associated with worse outcomes. This is not seen in patients with non-paracetamol-induced liver failure. Significant monocytopenia on admission may allow earlier clarification of prognosis, and it highlights a potential target for therapeutic intervention. © 2016 John Wiley & Sons Ltd.

  9. Evaluating firms' R&D performance using best worst method.

    Science.gov (United States)

    Salimi, Negin; Rezaei, Jafar

    2018-02-01

    Since research and development (R&D) is the most critical determinant of the productivity, growth and competitive advantage of firms, measuring R&D performance has become the core of attention of R&D managers, and an extensive body of literature has examined and identified different R&D measurements and determinants of R&D performance. However, measuring R&D performance while assigning the same level of importance to different R&D measures, which is the common approach in existing studies, can oversimplify the R&D measuring process, which may result in misinterpretation of the performance and, consequently, flawed R&D strategies. The aim of this study is to measure R&D performance taking into account the different levels of importance of R&D measures, using a multi-criteria decision-making method called the Best Worst Method (BWM) to identify the weights (importance) of R&D measures and to measure the R&D performance of 50 high-tech SMEs in the Netherlands, using data gathered in a survey among SMEs and from R&D experts. The results show how assigning different weights to different R&D measures (in contrast to a simple mean) results in a different ranking of the firms, and allow R&D managers to formulate more effective strategies to improve their firm's R&D performance by applying knowledge regarding the importance of different R&D measures. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
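
    The weighting step of BWM can be posed as a small min-max program; a sketch of the linearized variant with invented best-to-others and others-to-worst comparison vectors for four criteria:

        import numpy as np
        from scipy.optimize import linprog

        a_b = np.array([1.0, 2.0, 4.0, 8.0])   # best criterion (#0) vs. each j
        a_w = np.array([8.0, 4.0, 2.0, 1.0])   # each j vs. worst criterion (#3)
        n, best, worst = 4, 0, 3

        # Variables z = (w_0..w_3, xi): minimize xi subject to
        # |w_best - a_b[j]*w_j| <= xi and |w_j - a_w[j]*w_worst| <= xi.
        rows, rhs = [], []
        for j in range(n):
            for s in (+1.0, -1.0):
                r1 = np.zeros(n + 1); r1[best] += s; r1[j] -= s * a_b[j]; r1[-1] = -1
                r2 = np.zeros(n + 1); r2[j] += s; r2[worst] -= s * a_w[j]; r2[-1] = -1
                rows += [r1, r2]; rhs += [0.0, 0.0]
        res = linprog(c=[0.0] * n + [1.0], A_ub=rows, b_ub=rhs,
                      A_eq=[[1.0] * n + [0.0]], b_eq=[1.0],
                      bounds=[(0, None)] * (n + 1))
        print("criterion weights:", np.round(res.x[:n], 3),
              "consistency xi:", round(res.x[-1], 4))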

  10. Oil Reservoir Production Optimization using Optimal Control

    DEFF Research Database (Denmark)

    Völcker, Carsten; Jørgensen, John Bagterp; Stenby, Erling Halfdan

    2011-01-01

    Practical oil reservoir management involves the solution of large-scale constrained optimal control problems. In this paper we present a numerical method for the solution of such problems. The method is a single-shooting method that computes the gradients using the adjoint... reservoir using water flooding and smart well technology. Compared to the uncontrolled case, the optimal operation increases the Net Present Value of the oil field by 10%.

  11. Real parameter optimization by an effective differential evolution algorithm

    Directory of Open Access Journals (Sweden)

    Ali Wagdy Mohamed

    2013-03-01

    Full Text Available This paper introduces an Effective Differential Evolution (EDE) algorithm for solving real-parameter optimization problems over a continuous domain. The proposed algorithm introduces a new mutation rule based on the best and the worst individuals among the entire population of a particular generation. The mutation rule is combined with the basic mutation strategy through a linearly decreasing probability rule. The proposed mutation rule is shown to promote the local search capability of the basic DE and to make it faster. Furthermore, a random mutation scheme and a modified Breeder Genetic Algorithm (BGA) mutation scheme are merged to avoid stagnation and/or premature convergence. Additionally, the scaling factor and crossover rate of DE are introduced as uniform random numbers to enrich the search behavior and to enhance the diversity of the population. The effectiveness and benefits of the proposed modifications used in EDE have been experimentally investigated. Numerical experiments on a set of bound-constrained problems have shown that the new approach is efficient, effective and robust. The comparison results between EDE and several classical differential evolution methods and state-of-the-art parameter-adaptive differential evolution variants indicate that the proposed EDE algorithm is competitive with, and in some cases superior to, other algorithms in terms of final solution quality, efficiency, convergence rate, and robustness.
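
    A sketch of this kind of scheme: a best/worst-guided mutation mixed with the classic DE/rand/1 rule under a linearly decreasing probability. The objective, schedule direction, and control parameters below are illustrative assumptions, not the paper's exact settings.

        import numpy as np

        rng = np.random.default_rng(3)
        sphere = lambda x: float(np.sum(x ** 2))

        def ede_step(pop, fit, t, T, F=0.5, CR=0.9):
            """One DE generation mixing two mutation rules."""
            n, d = pop.shape
            best, worst = pop[np.argmin(fit)], pop[np.argmax(fit)]
            for i in range(n):
                r1, r2, r3 = rng.choice([k for k in range(n) if k != i], 3,
                                        replace=False)
                if rng.random() < 1.0 - t / T:   # illustrative decreasing schedule
                    v = pop[r1] + F * (best - worst)        # best/worst-guided rule
                else:
                    v = pop[r1] + F * (pop[r2] - pop[r3])   # classic DE/rand/1
                cross = rng.random(d) < CR                  # binomial crossover
                cross[rng.integers(d)] = True               # keep >= 1 mutant gene
                u = np.where(cross, v, pop[i])
                fu = sphere(u)
                if fu < fit[i]:                             # greedy selection
                    pop[i], fit[i] = u, fu
            return pop, fit

        pop = rng.uniform(-5, 5, size=(30, 10))
        fit = np.array([sphere(x) for x in pop])
        for t in range(200):
            pop, fit = ede_step(pop, fit, t, 200)
        print(f"best objective: {fit.min():.3e}")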

  12. Battery management systems (BMS) optimization for electric vehicles (EVs) in Malaysia

    Science.gov (United States)

    Salehen, P. M. W.; Su'ait, M. S.; Razali, H.; Sopian, K.

    2017-04-01

    Following the UN Climate Change Conference 2009 in Copenhagen, Denmark, Malaysia made a serious commitment to its "Go Green" campaign, with the aim of reducing GHG emissions by 40% by the year 2020. To this end, the National Green Technology Policy was enacted in 2009 with transportation as one of its focus sectors, covering hybrid vehicles (HEVs), electric vehicles (EVs) and fuel cell vehicles, in order to stave off the worst-case scenario. The number of registered cars has been increasing by 1 million yearly and has doubled over the last two decades. Consequently, CO2 emissions in Malaysia have reached up to 97.1% and will continue to increase, mainly due to activities in the transportation sector. Malaysia is now moving towards green cars, in particular battery-based EVs. This type of transportation requires power performance optimization, which is controlled by the battery management system (BMS). The BMS is an essential module for reliable power management, optimal power performance and vehicle safety in EVs. Thus, this paper proposes power performance optimization for various setups of lithium-ion cathodes with a graphene anode, using MATLAB/SIMULINK software, for better management performance and extended EV driving range.

  13. OnlineMin

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Moruz, Gabriel; Negoescu, Andrei

    2015-01-01

    We present a randomized paging algorithm, OnlineMin, that has optimal competitiveness and allows fast implementations. In fact, if k pages fit in internal memory, the best previous solution required O(k^2) time per request and O(k) space. We present two implementations of OnlineMin which use O(k) space, but only O(log k) worst-case time and O...

  14. Automation of POST Cases via External Optimizer and "Artificial p2" Calculation

    Science.gov (United States)

    Dees, Patrick D.; Zwack, Mathew R.

    2017-01-01

    During early conceptual design of complex systems, speed and accuracy are often at odds with one another. While many characteristics of the design fluctuate rapidly during this phase, there is nonetheless a need for accurate data from which to down-select designs, as these decisions will have a large impact upon program life-cycle cost. Enabling the conceptual designer to produce accurate data in a timely manner is therefore critical to program viability. For conceptual design of launch vehicles, trajectory analysis and optimization is a large hurdle. Tools such as the industry-standard Program to Optimize Simulated Trajectories (POST) have traditionally required an expert in the loop for setting up inputs, running the program, and analyzing the output. The solution space for trajectory analysis is in general non-linear and multi-modal, requiring an experienced analyst to weed out sub-optimal designs in pursuit of the global optimum. While an experienced analyst presented with a vehicle similar to one they have already worked on can likely produce optimal performance figures in a timely manner, as soon as the "experienced" or "similar" adjectives are invalid the process can become lengthy. In addition, an experienced analyst working on a similar vehicle may go into the analysis with preconceived ideas about what the vehicle's trajectory should look like, which can result in sub-optimal performance being recorded. Thus, in any case but the ideal, either time or accuracy is sacrificed. In the authors' previous work a tool called multiPOST was created which captures the heuristics of a human analyst over the process of executing trajectory analysis with POST. However, without the instincts of a human in the loop, this method relied upon Monte Carlo simulation to find successful trajectories. Overall the method has mixed results, and in the context of optimizing multiple vehicles it is inefficient in comparison to the method presented here. POST's internal ...

  15. DCT-based iris recognition.

    Science.gov (United States)

    Monro, Donald M; Rakshit, Soumyadip; Zhang, Dexin

    2007-04-01

    This paper presents a novel iris coding method based on differences of discrete cosine transform (DCT) coefficients of overlapped angular patches from normalized iris images. The feature extraction capabilities of the DCT are optimized on the two largest publicly available iris image data sets, 2,156 images of 308 eyes from the CASIA database and 2,955 images of 150 eyes from the Bath database. On this data, we achieve 100 percent Correct Recognition Rate (CRR) and perfect Receiver-Operating Characteristic (ROC) Curves with no registered false accepts or rejects. Individual feature bit and patch position parameters are optimized for matching through a product-of-sum approach to Hamming distance calculation. For verification, a variable threshold is applied to the distance metric and the False Acceptance Rate (FAR) and False Rejection Rate (FRR) are recorded. A new worst-case metric is proposed for predicting practical system performance in the absence of matching failures, and the worst case theoretical Equal Error Rate (EER) is predicted to be as low as 2.59 × 10^-4 on the available data sets.
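
    The coding idea can be sketched compactly. The parameters below (patch width, overlap, number of coefficients) are illustrative stand-ins, not the optimized values from the paper, and the matching step here is a plain normalized Hamming distance rather than the product-of-sum variant.

```python
# Sketch of a DCT-difference iris code on a synthetic normalized iris strip.
import numpy as np
from scipy.fft import dct

def iris_code(strip, patch_w=12, n_coeffs=6):
    # strip: 2-D normalized iris image (rows = radius, cols = angle)
    patches = [strip[:, c:c + patch_w].mean(axis=0)   # average radially
               for c in range(0, strip.shape[1] - patch_w, patch_w // 2)]  # 50% overlap
    coeffs = np.array([dct(p, norm='ortho')[:n_coeffs] for p in patches])
    # binarize the sign of coefficient differences between neighbouring patches
    return (np.diff(coeffs, axis=0) > 0).astype(np.uint8)

def hamming(code_a, code_b):
    return float(np.mean(code_a != code_b))   # normalized Hamming distance

rng = np.random.default_rng(1)
a = rng.random((32, 256))                # synthetic "iris" strip
b = a + 0.01 * rng.random((32, 256))     # same eye, slight acquisition noise
print(hamming(iris_code(a), iris_code(a)))   # 0.0 for identical codes
print(hamming(iris_code(a), iris_code(b)))   # small for the same eye
```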

  16. The cophylogeny reconstruction problem is NP-complete.

    Science.gov (United States)

    Ovadia, Y; Fielder, D; Conow, C; Libeskind-Hadas, R

    2011-01-01

    The cophylogeny reconstruction problem is that of finding minimum-cost explanations of differences between historical associations. The problem arises in parasitology, molecular systematics, and biogeography. Existing software tools for this problem either have worst-case exponential time or use heuristics that do not guarantee optimal solutions. To date, no polynomial-time optimal algorithms have been found for this problem. In this article, we prove that the problem is NP-complete, suggesting that future research on algorithms for this problem should seek better polynomial-time approximation algorithms and heuristics rather than optimal solutions.

  17. The Improvement of Particle Swarm Optimization: a Case Study of Optimal Operation in Goupitan Reservoir

    Science.gov (United States)

    Li, Haichen; Qin, Tao; Wang, Weiping; Lei, Xiaohui; Wu, Wenhui

    2018-02-01

    Due to its weakness in maintaining diversity and reaching the global optimum, standard particle swarm optimization has not performed well in reservoir optimal operation. To solve this problem, this paper introduces the downhill simplex method to work together with standard particle swarm optimization. The application of this approach to the optimal operation of Goupitan reservoir shows that the improved method has better accuracy and higher reliability with a small investment.
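
    A minimal sketch of the hybrid idea, assuming the simplex step is used to polish the global best every few iterations; the inertia and acceleration constants are generic textbook values, not the paper's settings.

```python
# PSO with periodic Nelder-Mead (downhill simplex) refinement of the global best.
import numpy as np
from scipy.optimize import minimize

def pso_simplex(f, bounds, n_particles=20, iters=100, polish_every=20, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    for t in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
        if (t + 1) % polish_every == 0:          # downhill simplex refinement
            res = minimize(f, g, method='Nelder-Mead')
            if res.fun < pbest_f.min():          # keep the polished point
                i_best = pbest_f.argmin()
                pbest[i_best], pbest_f[i_best] = res.x, res.fun
                g = res.x.copy()
    return pbest[pbest_f.argmin()], float(pbest_f.min())

print(pso_simplex(lambda z: float(np.sum(z**2)), [(-10, 10)] * 5))
```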

  18. Eliciting preferences for priority setting in genetic testing: a pilot study comparing best-worst scaling and discrete-choice experiments

    OpenAIRE

    Severin, Franziska; Schmidtke, Jörg; Mühlbacher, Axel; Rogowski, Wolf H

    2013-01-01

    Given the increasing number of genetic tests available, decisions have to be made on how to allocate limited health-care resources to them. Different criteria have been proposed to guide priority setting. However, their relative importance is unclear. Discrete-choice experiments (DCEs) and best-worst scaling experiments (BWSs) are methods used to identify and weight various criteria that influence orders of priority. This study tests whether these preference eliciting techniques can be used f...

  19. Emergency Management Span of Control: Optimizing Organizational Structures to Better Prepare Vermont for the Next Major or Catastrophic Disaster

    Science.gov (United States)

    2008-12-01

    ... full glare of media and public scrutiny, they are expected to perform flawlessly like a goalie in hockey or soccer, or a conversion kicker in ... among all levels of government, not a plan that is pulled off the shelf only during worst-case disasters. The lifecycle of disasters entails ...

  20. Pulmonary strongyloidiasis. Presentation of two cases

    International Nuclear Information System (INIS)

    Munive, Abraham Ali; Torres D, Carlos A Lasso A Javier J; Ojeda Leon, Paulina; Acosta R, Nohora

    2002-01-01

    We describe two immunosuppressed patients receiving oral steroids who presented with pulmonary strongyloidiasis; both later evolved toward respiratory failure, with different clinical courses. One developed severe hypoxaemia, hemodynamic instability and death; the worse prognosis in this patient was determined by diffuse infiltrates and the resulting lung injury. The other had a stable clinical course and evolved to full recovery; this case presented a cavern on the chest X-ray that could represent a preliminary phase of pulmonary extension of the infection. The difference in the evolution of these two patients is determined by the different presentation of the lung damage

  1. Worst case prediction of additives migration from polystyrene for food safety purposes: a model update

    DEFF Research Database (Denmark)

    Martinez Lopez, Brais; Gontard, Nathalie; Peyron, Stephane

    2018-01-01

    ... These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general purpose polystyrene, the diffusivity data published since then show that the use of the equation with the original...

  2. OnlineMin: A Fast Strongly Competitive Randomized Paging Algorithm

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Moruz, Gabriel; Negoescu, Andrei

    2012-01-01

    ... an approach that both has optimal competitiveness and selects victim pages in subquadratic time. In fact, if k pages fit in internal memory, the best previous solution required O(k^2) time per request and O(k) space, whereas our approach also takes O(k) space, but only O(log k) time in the worst case per page...

  3. A technical practice of affiliate marketing : case study: coLanguage and OptimalNachhilfe

    OpenAIRE

    Phan, Giang

    2015-01-01

    This study aims to introduce a new marketing method: Affiliate marketing. In addition, this study explains and explores many types of affiliate marketing. The study focuses on defining affiliate marketing methods and the technologies used to develop it. To clarify and study this new business and marketing model, the study introduces two case studies: coLanguage and OptimalNachhilfe. In addition, various online businesses such as Amazon, Udemy, and Google are discussed to give a broader v...

  4. Planning Education for Regional Economic Integration: The Case of Paraguay and MERCOSUR.

    Science.gov (United States)

    McGinn, Noel

    This paper examines the possible impact of MERCOSUR on Paraguay's economic and educational systems. MERCOSUR is a trade agreement among Argentina, Brazil, Paraguay, and Uruguay, under which all import tariffs among the countries will be eliminated by 1994. The countries will then enter into a common economic market. The worst-case scenario…

  5. Regionalized LCA-based optimization of building energy supply: method and case study for a Swiss municipality.

    Science.gov (United States)

    Saner, Dominik; Vadenbo, Carl; Steubing, Bernhard; Hellweg, Stefanie

    2014-07-01

    This paper presents a regionalized LCA-based multiobjective optimization model of building energy demand and supply for the case of a Swiss municipality for the minimization of greenhouse gas emissions and particulate matter formation. The results show that the environmental improvement potential is very large: in the optimal case, greenhouse gas emissions from energy supply could be reduced by more than 75% and particulate emissions by over 50% in the municipality. This scenario supposes a drastic shift of heat supply systems from a fossil fuel dominated portfolio to a portfolio consisting of mainly heat pump and woodchip incineration systems. In addition to a change in heat supply technologies, roofs, windows and walls would need to be refurbished in more than 65% of the municipality's buildings. The full potential of the environmental impact reductions will hardly be achieved in reality, particularly in the short term, for example, because of financial constraints and social acceptance, which were not taken into account in this study. Nevertheless, the results of the optimization model can help policy makers to identify the most effective measures for improvement at the decision making level, for example, at the building level for refurbishment and selection of heating systems or at the municipal level for designing district heating networks. Therefore, this work represents a starting point for designing effective incentives to reduce the environmental impact of buildings. While the results of the optimization model are specific to the municipality studied, the model could readily be adapted to other regions.

  6. In-hospital cardiopulmonary resuscitation: Trainees' worst and most memorable experiences.

    Science.gov (United States)

    Myint, P K; Rivas, C A; Bowker, L K

    2010-11-01

    To examine the personal experiences of higher specialist trainees in Geriatric Medicine (GM) with regard to cardiopulmonary resuscitation (CPR) and do-not-attempt-resuscitation (DNAR) decision making. UK. Two hundred and thirty-five higher trainee members of the British Geriatrics Society (BGS) at the Specialist Registrar (SpR) level. Postal questionnaire survey. We distributed a questionnaire examining the various issues around DNAR decision making among the trainee members of the BGS in November 2003. In one of the questions, we asked the participants, 'Briefly describe your worst or most memorable experience of DNAR'. Responses to this question were analysed by thematic schema and are presented. Overall, the response rate was 62% (251/408) after a second mailing, and 235 of the respondents were at SpR grade. One hundred and ninety-eight participants provided an answer to the above question, giving diverse and often detailed accounts, most of which were negative experiences that appeared to have had a powerful influence on their ongoing clinical practice. The emerging themes demonstrated areas of conflict between trainees and other doctors, as well as patients and relatives. SpR-grade geriatricians are exposed to extreme and varied experiences of DNAR decision making in the UK. Efforts to improve support and training in this area should embrace the complexity of the subject.

  7. Power optimization in body sensor networks: the case of an autonomous wireless EMG sensor powered by PV-cells.

    Science.gov (United States)

    Penders, J; Pop, V; Caballero, L; van de Molengraft, J; van Schaijk, R; Vullers, R; Van Hoof, C

    2010-01-01

    Recent advances in ultra-low-power circuits and energy harvesters are making self-powered body sensor nodes a reality. Power optimization at the system and application level is crucial in achieving ultra-low power consumption for the entire system. This paper reviews system-level power optimization techniques and illustrates their impact on the case of autonomous wireless EMG monitoring. The resulting prototype, an autonomous wireless EMG sensor powered by PV cells, is presented.

  8. Transaction fees and optimal rebalancing in the growth-optimal portfolio

    OpenAIRE

    Yu Feng; Matus Medo; Liang Zhang; Yi-Cheng Zhang

    2010-01-01

    The growth-optimal portfolio optimization strategy pioneered by Kelly is based on constant portfolio rebalancing which makes it sensitive to transaction fees. We examine the effect of fees on an example of a risky asset with a binary return distribution and show that the fees may give rise to an optimal period of portfolio rebalancing. The optimal period is found analytically in the case of lognormal returns. This result is consequently generalized and numerically verified for broad return di...
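
    The effect is easy to reproduce numerically. The simulation below (illustrative parameters, not the paper's lognormal analysis) applies a proportional fee to every rebalancing trade and scans the rebalancing period; very frequent rebalancing pays too much in fees, while very rare rebalancing drifts far from the growth-optimal fraction.

```python
# Kelly betting on a binary-return asset with proportional transaction fees.
import numpy as np

def growth_rate(period, p=0.55, up=1.0, down=-0.5, frac=0.65, fee=0.002,
                steps=100_000, seed=0):
    """Average log-growth per step when rebalancing every `period` steps.
    frac = 0.65 is the Kelly fraction for this binary return distribution."""
    rng = np.random.default_rng(seed)
    cash, risky = 1.0 - frac, frac            # start at the target fraction
    for t in range(steps):
        risky *= 1.0 + (up if rng.random() < p else down)
        if (t + 1) % period == 0:             # rebalance back to the target
            wealth = cash + risky
            trade = frac * wealth - risky     # amount bought (<0 means sold)
            cash -= trade + fee * abs(trade)  # proportional fee on the trade
            risky += trade
    return np.log(cash + risky) / steps

for period in (1, 2, 5, 10, 50):              # scan rebalancing periods
    print(period, round(growth_rate(period), 6))
```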

  9. A Case Study of Some Issues in the Optimization of Fortran 90 Array Notation

    Directory of Open Access Journals (Sweden)

    John D. McCalpin

    1996-01-01

    Full Text Available Some issues in the relationship of coding style and compiler optimization are discussed with regard to Fortran 90 array notation. A review of several important Fortran 90 array constructs and their performance on vector and scalar hardware sets the stage for a more detailed example based on the kernel of a finite difference computational fluid dynamics model, specifically the nonlinear shallow water equations. Special attention is paid to the optimization of memory use and memory traffic. It is shown that the style of coding interacts with the rules of Fortran 90 and the current state of the art of Fortran 90 compilers to produce a fairly wide range of performance levels. Although performance degradations are typically small, a few cases of more serious loss of efficiency are identified and discussed.

  10. Guillain–Barré Syndrome in Postpartum Period: Rehabilitation Issues and Outcome – Three Case Reports

    OpenAIRE

    Gupta, Anupam; Patil, Maitreyi; Khanna, Meeka; Krishnan, Rashmi; Taly, Arun B.

    2017-01-01

    We report three females who developed Guillain–Barré Syndrome in the postpartum period (within 6 weeks of delivery) and were admitted to the Neurological Rehabilitation Department for rehabilitation after initial diagnosis and treatment in the Department of Neurology. The first case, an axonal variant (acute motor axonal neuropathy [AMAN]), had the worst presentation at the time of admission but recovered well by the time of discharge. The second case, an acute motor sensory axonal neuropathy variant, and t...

  11. Optimization of BWR fuel lattice enrichment and gadolinia distribution using genetic algorithms and knowledge

    International Nuclear Information System (INIS)

    Martin-del-Campo, Cecilia; Francois, Juan Luis; Carmona, Roberto; Oropeza, Ivonne P.

    2007-01-01

    An optimization methodology based on the Genetic Algorithm (GA) method was developed for the design of radial enrichment and gadolinia distributions for boiling water reactor (BWR) fuel lattices. The optimization algorithm was linked to the HELIOS code to evaluate the neutronic parameters included in the objective function. The goal is to search for the fuel lattice with the lowest average enrichment that satisfies a reactivity target, a local power peaking factor (PPF) lower than a limit value, and an average gadolinia concentration target. The methodology was applied to the design of a 10 x 10 fuel lattice, which can be used in fuel assemblies currently loaded in the two BWRs operating in Mexico. The optimization process showed excellent performance: it found forty lattice designs whose worst member has better neutronic performance than the reference lattice design. The main contribution of this study is the development of an efficient procedure for BWR fuel lattice design, using GA with an objective function (OF) that saves computing time because it does not require lattice burnup calculations.

  12. Lifecycle-Based Swarm Optimization Method for Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Hai Shen

    2014-01-01

    Full Text Available Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though individual organisms die, the species does not perish; rather, the species develops a stronger ability to adapt to its environment and achieves better evolution. LSO simulates the biological lifecycle process through six optimization operators: chemotactic, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmark problems include both unimodal and multimodal cases to demonstrate optimal performance and stability, while the mechanical design problem tests the algorithm's practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.

  13. Development of a codon optimization strategy using the efor RED reporter gene as a test case

    Science.gov (United States)

    Yip, Chee-Hoo; Yarkoni, Orr; Ajioka, James; Wan, Kiew-Lian; Nathan, Sheila

    2018-04-01

    Synthetic biology is a platform that enables high-level synthesis of useful products such as pharmaceutically relevant drugs, bioplastics and green fuels from synthetic DNA constructs. Large-scale expression of these products can be achieved in an industrially compliant host such as Escherichia coli. To maximise the production of recombinant proteins in a heterologous host, the genes of interest are usually codon optimized based on the codon usage of the host. However, the bioinformatics freeware available for standard codon optimization might not be ideal for determining the best sequence for the synthesis of synthetic DNA. Synthesis of incorrect sequences can prove to be a costly error; to avoid this, a codon optimization strategy was developed based on E. coli codon usage, with the efor RED reporter gene as a test case. This strategy replaces codons encoding serine, leucine, proline and threonine with the most frequently used codons in E. coli, while codons encoding valine and glycine are substituted with the second most highly used codons in E. coli. Both the optimized and original efor RED genes were ligated to the pJS209 plasmid backbone using Gibson Assembly, and the recombinant DNAs were transformed into the E. coli E. cloni 10G strain. The fluorescence intensity per cell density of the optimized sequence was improved by 20% compared to the original sequence. Hence, the developed codon optimization strategy is proposed for designing an optimal sequence for heterologous protein production in E. coli.
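
    A sketch of the substitution rule as described, using an abbreviated codon table; the specific codon choices below reflect commonly cited E. coli usage rankings but should be treated as illustrative rather than the authors' exact table.

```python
# Codon substitution sketch: Ser/Leu/Pro/Thr -> most-used E. coli codon,
# Val/Gly -> second most-used codon; all other codons are left unchanged.
FIRST_CHOICE  = {'S': 'AGC', 'L': 'CTG', 'P': 'CCG', 'T': 'ACC'}   # illustrative
SECOND_CHOICE = {'V': 'GTT', 'G': 'GGT'}                            # illustrative

# minimal codon -> amino-acid map covering only the residues the rule touches
CODON_TO_AA = {c: 'S' for c in ('TCT', 'TCC', 'TCA', 'TCG', 'AGT', 'AGC')}
CODON_TO_AA.update({c: 'L' for c in ('TTA', 'TTG', 'CTT', 'CTC', 'CTA', 'CTG')})
CODON_TO_AA.update({c: 'P' for c in ('CCT', 'CCC', 'CCA', 'CCG')})
CODON_TO_AA.update({c: 'T' for c in ('ACT', 'ACC', 'ACA', 'ACG')})
CODON_TO_AA.update({c: 'V' for c in ('GTT', 'GTC', 'GTA', 'GTG')})
CODON_TO_AA.update({c: 'G' for c in ('GGT', 'GGC', 'GGA', 'GGG')})

def optimize_cds(seq):
    out = []
    for i in range(0, len(seq) - len(seq) % 3, 3):
        codon = seq[i:i + 3].upper()
        aa = CODON_TO_AA.get(codon)           # None for untouched residues
        out.append(FIRST_CHOICE.get(aa, SECOND_CHOICE.get(aa, codon)))
    return ''.join(out)

# Met-Ser-Leu-Gly-Val -> ATG AGC CTG GGT GTT
print(optimize_cds('ATGTCACTTGGAGTC'))
```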

  14. Vector-model-supported approach in prostate plan optimization

    International Nuclear Information System (INIS)

    Liu, Eva Sau Fan; Wu, Vincent Wing Cheung; Harris, Benjamin; Lehman, Margot; Pryor, David; Chan, Lawrence Wing Chi

    2017-01-01

    The lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector-model-based method for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and fewer iterations
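
    The retrieval step can be sketched generically: represent each previously treated case by a feature vector and reuse the planning parameters of the most similar case. The feature names, values, and the cosine-similarity choice below are assumptions for illustration, not the paper's feature set.

```python
# Nearest-case retrieval by cosine similarity over plan feature vectors.
import numpy as np

def most_similar_case(query, library):
    q = np.asarray(query, dtype=float)
    m = np.asarray([c['features'] for c in library], dtype=float)
    # cosine similarity between the query case and every reference case
    sims = (m @ q) / (np.linalg.norm(m, axis=1) * np.linalg.norm(q) + 1e-12)
    return library[int(sims.argmax())]

# hypothetical features: target volume, OAR overlap, target-OAR distance
library = [
    {'features': [62.1, 11.5, 0.8], 'params': {'rectum_max_dose': 75.0}},
    {'features': [48.9,  9.2, 1.3], 'params': {'rectum_max_dose': 72.0}},
]
ref = most_similar_case([60.0, 11.0, 0.9], library)
print(ref['params'])   # starting optimization parameters for the new plan
```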

  15. Vector-model-supported approach in prostate plan optimization

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Eva Sau Fan [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Wu, Vincent Wing Cheung [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong); Harris, Benjamin [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); Lehman, Margot; Pryor, David [Department of Radiation Oncology, Princess Alexandra Hospital, Brisbane (Australia); School of Medicine, University of Queensland (Australia); Chan, Lawrence Wing Chi, E-mail: wing.chi.chan@polyu.edu.hk [Department of Health Technology and Informatics, The Hong Kong Polytechnic University (Hong Kong)

    2017-07-01

    The lengthy time consumed in traditional manual plan optimization can limit the use of step-and-shoot intensity-modulated radiotherapy/volumetric-modulated radiotherapy (S&S IMRT/VMAT). A vector-model-based method for retrieving similar radiotherapy cases was developed with respect to the structural and physiologic features extracted from the Digital Imaging and Communications in Medicine (DICOM) files. Planning parameters were retrieved from the selected similar reference case and applied to the test case to bypass the gradual adjustment of planning parameters. Therefore, the planning time spent on the traditional trial-and-error manual optimization approach in the beginning of optimization could be reduced. Each S&S IMRT/VMAT prostate reference database comprised 100 previously treated cases. Prostate cases were replanned with both traditional optimization and vector-model-supported optimization based on the oncologists' clinical dose prescriptions. A total of 360 plans, which consisted of 30 cases of S&S IMRT, 30 cases of 1-arc VMAT, and 30 cases of 2-arc VMAT plans including first optimization and final optimization with/without vector-model-supported optimization, were compared using the 2-sided t-test and paired Wilcoxon signed rank test, with a significance level of 0.05 and a false discovery rate of less than 0.05. For S&S IMRT, 1-arc VMAT, and 2-arc VMAT prostate plans, there was a significant reduction in the planning time and iterations with vector-model-supported optimization, by almost 50%. When the first optimization plans were compared, 2-arc VMAT prostate plans had better plan quality than 1-arc VMAT plans. The volume receiving 35 Gy in the femoral head for 2-arc VMAT plans was reduced with the vector-model-supported optimization compared with the traditional manual optimization approach. Otherwise, the quality of plans from both approaches was comparable. Vector-model-supported optimization was shown to offer much shortened planning time and fewer iterations

  16. Determination of the optimal case definition for the diagnosis of end-stage renal disease from administrative claims data in Manitoba, Canada.

    Science.gov (United States)

    Komenda, Paul; Yu, Nancy; Leung, Stella; Bernstein, Keevin; Blanchard, James; Sood, Manish; Rigatto, Claudio; Tangri, Navdeep

    2015-01-01

    End-stage renal disease (ESRD) is a major public health problem with increasing prevalence and costs. An understanding of the long-term trends in dialysis rates and outcomes can help inform health policy. We determined the optimal case definition for the diagnosis of ESRD using administrative claims data in the province of Manitoba over a 7-year period. We determined the sensitivity, specificity, predictive value and overall accuracy of 4 administrative case definitions for the diagnosis of ESRD requiring chronic dialysis over different time horizons from Jan. 1, 2004, to Mar. 31, 2011. The Manitoba Renal Program Database served as the gold standard for confirming dialysis status. During the study period, 2562 patients were registered as recipients of chronic dialysis in the Manitoba Renal Program Database. Over a 1-year period (2010), the optimal case definition was any 2 claims for outpatient dialysis, and it was 74.6% sensitive (95% confidence interval [CI] 72.3%-76.9%) and 94.4% specific (95% CI 93.6%-95.2%) for the diagnosis of ESRD. In contrast, a case definition of at least 2 claims for dialysis treatment more than 90 days apart was 64.8% sensitive (95% CI 62.2%-67.3%) and 97.1% specific (95% CI 96.5%-97.7%). Extending the period to 5 years greatly improved sensitivity for all case definitions, with minimal change to specificity; for example, for the optimal case definition of any 2 claims for dialysis treatment, sensitivity increased to 86.0% (95% CI 84.7%-87.4%) at 5 years. Accurate case definitions for the diagnosis of ESRD requiring dialysis can be derived from administrative claims data. The optimal definition required any 2 claims for outpatient dialysis. Extending the claims period to 5 years greatly improved sensitivity with minimal effects on specificity for all case definitions.
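
    The validation arithmetic behind these figures is a simple confusion-matrix computation. The counts below are illustrative, chosen only to roughly reproduce the reported 74.6% sensitivity and 94.4% specificity against the 2562 registry patients.

```python
# Sensitivity/specificity/predictive values for a claims-based case definition
# evaluated against a registry gold standard. Counts are made up for illustration.
def diagnostics(tp, fp, fn, tn):
    return {
        'sensitivity': tp / (tp + fn),
        'specificity': tn / (tn + fp),
        'ppv':         tp / (tp + fp),
        'npv':         tn / (tn + fn),
        'accuracy':    (tp + tn) / (tp + fp + fn + tn),
    }

# e.g. "any 2 outpatient dialysis claims" over one year (tp + fn = 2562 registry cases)
print(diagnostics(tp=1911, fp=310, fn=651, tn=5214))
```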

  17. Optimal Tracking of Distributed Heavy Hitters and Quantiles

    DEFF Research Database (Denmark)

    Yi, Ke; Zhang, Qin

    2013-01-01

    We consider the problem of tracking heavy hitters and quantiles in the distributed streaming model. The heavy hitters and quantiles are two important statistics for characterizing a data distribution. Let A be a multiset of elements, drawn from the universe U={1,…,u}. For a given 0 ≤ ϕ ≤ 1, the ϕ-... Each of the sites has a two-way communication channel to a designated coordinator, whose goal is to track the set of ϕ-heavy hitters and the ϕ-quantile of A approximately at all times with minimum communication. We give tracking algorithms with worst-case communication cost O(k/ϵ⋅log n) for both problems, where n...
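
    The distributed protocol itself is not reproduced here; as a single-stream point of reference, the classic Misra-Gries summary below finds a superset of the heavy hitters of a stream of n items using m-1 counters (every item with frequency greater than n/m survives).

```python
# Misra-Gries heavy-hitter summary: a sequential baseline, not the paper's
# distributed tracking protocol.
def misra_gries(stream, m):
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < m - 1:
            counters[x] = 1
        else:                         # decrement every counter; drop zeros
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters                   # superset of items with count > n/m

print(misra_gries([1, 1, 2, 1, 3, 1, 4, 1, 5], m=3))   # item 1 survives
```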

  18. Thermal Analysis of MIRIS Space Observation Camera for Verification of Passive Cooling

    Directory of Open Access Journals (Sweden)

    Duk-Hang Lee

    2012-09-01

    Full Text Available We conducted thermal analyses and cooling tests of the space observation camera (SOC) of the multi-purpose infrared imaging system (MIRIS) to verify its passive cooling. The thermal analyses were conducted with NX 7.0 TMG for two attitude cases of the MIRIS: the worst hot case and the normal case. Through the thermal analyses of the flight model, it was found that even in the worst case the telescope could be cooled to less than 206 K. This is similar to the results of the passive cooling test (~200.2 K). For the normal attitude case of the analysis, on the other hand, the SOC telescope was cooled to about 160 K in 10 days. Based on the results of these analyses and the test, it was determined that the telescope of the MIRIS SOC could be successfully cooled to below 200 K by passive cooling. The SOC is therefore expected to have optimal performance under cooled conditions in orbit.

  19. Load flow optimization and optimal power flow

    CERN Document Server

    Das, J C

    2017-01-01

    This book discusses the major aspects of load flow, optimization, optimal load flow, and culminates in modern heuristic optimization techniques and evolutionary programming. In the deregulated environment, the economic provision of electrical power to consumers requires knowledge of maintaining a certain power quality and load flow. Many case studies and practical examples are included to emphasize real-world applications. The problems at the end of each chapter can be solved by hand calculations without having to use computer software. The appendices are devoted to calculations of line and cable constants, and solutions to the problems are included throughout the book.

  20. Energy-optimal electrical excitation of nerve fibers.

    Science.gov (United States)

    Jezernik, Saso; Morari, Manfred

    2005-04-01

    We derive, based on an analytical nerve membrane model and the optimal control theory of dynamical systems, an energy-optimal stimulation current waveform for electrical excitation of nerve fibers. Optimal stimulation waveforms for nonleaky and leaky membranes are calculated; the leaky membrane is the realistic case. Finally, we compare the waveforms and energies necessary for excitation of a leaky membrane when the stimulation waveform is a square-wave current pulse and under energy-optimal stimulation. The optimal stimulation waveform is an exponentially rising waveform and necessitates considerably less energy to excite the nerve than a square-wave pulse (especially for larger pulse durations). The described theoretical results can lead to drastically increased battery lifetime and/or decreased energy transmission requirements for implanted biomedical systems.
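
    The structure of the result can be sketched with a passive (RC) membrane model, a simplifying assumption relative to the full analysis in the paper:

```latex
% Leaky RC membrane driven by a stimulation current i(t), with tau_m = R_m C_m:
%   C_m dV/dt = -V/R_m + i(t),   V(0) = 0,   threshold constraint V(T) = V_th.
\[
  V(T) = \frac{1}{C_m}\int_0^T e^{-(T-t)/\tau_m}\, i(t)\,\mathrm{d}t .
\]
% Minimizing the stimulation energy E = R_s \int_0^T i(t)^2 dt subject to this
% linear constraint is, by the Cauchy--Schwarz inequality, achieved when i(t)
% is proportional to the exponential kernel:
\[
  i^{*}(t) = i_0\, e^{(t-T)/\tau_m},
\]
% an exponentially rising current; a rectangular pulse reaching the same
% threshold over the same duration requires strictly more energy.
```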

  1. Full-order optimal compensators for flow control: the multiple inputs case

    Science.gov (United States)

    Semeraro, Onofrio; Pralits, Jan O.

    2018-03-01

    Flow control has been the subject of numerous experimental and theoretical works. We analyze full-order, optimal controllers for large dynamical systems in the presence of multiple actuators and sensors. The full-order controllers do not require any preliminary model reduction or low-order approximation: this feature allows us to assess the optimal performance of an actuated flow without relying on any estimation process or further hypothesis on the disturbances. We start from the original technique proposed by Bewley et al. (Meccanica 51(12):2997-3014, 2016. https://doi.org/10.1007/s11012-016-0547-3), the adjoint of the direct-adjoint (ADA) algorithm. The algorithm is iterative and allows bypassing the solution of the algebraic Riccati equation associated with the optimal control problem, typically infeasible for large systems. In this numerical work, we extend the ADA iteration into a more general framework that includes the design of controllers with multiple, coupled inputs and robust controllers (H_{∞} methods). First, we demonstrate our results by showing the analytical equivalence between the full Riccati solutions and the ADA approximations in the multiple inputs case. In the second part of the article, we analyze the performance of the algorithm in terms of convergence of the solution, by comparing it with analogous techniques. We find excellent scalability with the number of inputs (actuators), making the method a viable way for full-order control design in complex settings. Finally, the applicability of the algorithm to fluid mechanics problems is shown using the linearized Kuramoto-Sivashinsky equation and the Kármán vortex street past a two-dimensional cylinder.

  2. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
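
    In symbols, the compression described is the score evaluated at a fiducial parameter point θ*:

```latex
% Compression of N data d to n = dim(theta) numbers via the score:
\[
  t = \nabla_{\theta} \ln \mathcal{L}(d \mid \theta)\,\big|_{\theta_*},
\]
% which preserves the Fisher information of the data. For Gaussian data with
% mean mu(theta) and covariance C(theta),
\[
  \ln \mathcal{L} = -\tfrac{1}{2}(d-\mu)^{\mathsf T} C^{-1} (d-\mu)
                    -\tfrac{1}{2}\ln\det C + \text{const},
\]
% the score is linear in d when only mu depends on theta (recovering
% MOPED-style linear compression) and quadratic when C does.
```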

  3. Intermittent random walks for an optimal search strategy: one-dimensional case

    International Nuclear Information System (INIS)

    Oshanin, G; Wio, H S; Lindenberg, K; Burlatsky, S F

    2007-01-01

    We study the search kinetics of an immobile target by a concentration of randomly moving searchers. The object of the study is to optimize the probability of detection within the constraints of our model. The target is hidden on a one-dimensional lattice, in the sense that searchers have no a priori information about where it is and may detect it only upon encounter. The searchers perform random walks in discrete time n = 0,1,2,...,N, where N is the maximal time the search process is allowed to run. With probability α a searcher steps to a nearest-neighbour site, and with probability (1-α) it leaves the lattice and stays off until it lands back on the lattice at a fixed distance L away from the departure point. The random walk is thus intermittent. We calculate the probability P_N that the target remains undetected up to the maximal search time N, and seek to minimize this probability. We find that P_N is a non-monotonic function of α, and show that there is an optimal choice α_opt(N) of α well within the intermittent regime, 0 < α_opt(N) < 1, for which P_N can be orders of magnitude smaller than in the 'pure' random walk cases α = 0 and α = 1
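
    A Monte Carlo sketch of the model makes the non-monotonicity easy to probe; the lattice size, jump length L, horizon N and trial count below are illustrative choices, not the paper's parameters.

```python
# Estimate the non-detection probability P_N for an intermittent 1-D searcher.
import numpy as np

def p_not_found(alpha, L=10, N=200, size=201, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    target = size // 2
    misses = 0
    for _ in range(trials):
        pos = int(rng.integers(size))      # searcher starts anywhere
        found = pos == target
        for _ in range(N):
            if found:
                break
            if rng.random() < alpha:       # local nearest-neighbour step
                pos = (pos + rng.choice((-1, 1))) % size
            else:                          # off-lattice jump of fixed length L
                pos = (pos + rng.choice((-L, L))) % size
            found = pos == target
        misses += not found
    return misses / trials

for a in (0.0, 0.25, 0.5, 0.75, 1.0):      # scan the intermittency parameter
    print(a, p_not_found(a))
```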

  4. Resource Allocation and Resident Outcomes in Nursing Homes: Comparisons between the Best and Worst

    Science.gov (United States)

    Anderson, Ruth A.; Hsieh, Pi-Ching; Su, Hui-Fang

    2005-01-01

    The purpose of this study was to identify patterns of resource allocation that related to resident outcomes in nursing homes. Data on structure, staffing levels, salaries, cost, casemix, and resident outcomes were obtained from state-level, administrative databases on 494 nursing homes. We identified two sets of comparison groups and showed that the group of homes with the greatest percentage of improvement in resident outcomes had higher levels of RN staffing and higher costs. However, comparison groups based on best/worst average outcomes did not differ in resource allocation patterns. Additional analysis demonstrated that when controlling for RN staffing, resident outcomes in high and low cost homes did not differ. The results suggest that, although RN staffing is more expensive, it is key to improving resident outcomes. PMID:9679807

  5. Optimizing UML Class Diagrams

    Directory of Open Access Journals (Sweden)

    Sergievskiy Maxim

    2018-01-01

    Full Text Available Most object-oriented development technologies rely on the use of the Unified Modeling Language (UML); class diagrams play a very important role in the design process, as they are used to build a software system model. Modern CASE tools, which are the basic tools for object-oriented development, cannot be used to optimize UML diagrams. In this manuscript we explain how, based on the use of design patterns and anti-patterns, class diagrams can be verified and optimized. Certain transformations can be carried out automatically; in other cases, potential inefficiencies are indicated and recommendations given. This study also discusses additional CASE tools for validating and optimizing UML class diagrams. For this purpose, a plugin has been developed that analyzes an XMI file containing a description of class diagrams.

  6. Understanding trends in the worst forms of child labour and the state’s legal responses: a descriptive analysis

    Directory of Open Access Journals (Sweden)

    Mashele Rapatsa

    2017-10-01

    Full Text Available This article discusses trends in the worst forms of child labour. It also examines the state's legal responses designed to eradicate child economic exploitation, premised on the Constitution's transformative ideal of accelerating social transformation and human development. The exploitative nature of the worst forms of child labour is amongst the most disconcerting aspects of social, educational and economic reality. The most repugnant forms include children being subjected to commercial sexual exploitation, children being used to commit illicit activities, bonded labour and other hazardous economic activities. Such activities often result in unalterable physical and psychological harm or, even worse, threaten children's lives. It is thus a human rights issue, infringing children's core rights such as the rights to dignity, life, social security and freedom. Widespread anecdotal evidence suggests that no country in the world is immune from this scourge, and South Africa is no exception. Hence the need to highlight the nature and extent of its prevalence, and the efficacy of the rights-based legal instruments adopted against child economic exploitation. It is asserted that the factors that proliferate child economic exploitation manifest in the form of primary factors (those with direct impact, such as social deprivations, e.g. poverty) and secondary factors (those that relate to the action or inaction of governments, e.g. corruption and lack of state capacity). It is argued that legal instruments will be of no effect unless these direct and indirect causes are interrupted. Widespread awareness campaigns also remain indispensable in order to conscientise society regarding the urgency of the problem.

  7. A novel comprehensive learning artificial bee colony optimizer for dynamic optimization biological problems.

    Science.gov (United States)

    Su, Weixing; Chen, Hanning; Liu, Fang; Lin, Na; Jing, Shikai; Liang, Xiaodan; Liu, Wei

    2017-03-01

    Many real-world problems are dynamic optimization problems, which differ from static optimization cases in the convergence and searching ability they demand: an algorithm must adaptively seek the changing optima over dynamic environments, instead of only finding the global optimal solution in a static environment. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments, which employs a pool of optimal foraging strategies to balance the exploration and exploitation tradeoff. The main motive of CLABC is to enrich the artificial bee foraging behaviors in the ABC model by combining Powell's pattern search method, a life-cycle mechanism, and a crossover-based social learning strategy. The proposed CLABC is a more realistic bee-colony model in that bees can reproduce and die dynamically throughout the foraging process and the population size varies as the algorithm runs. The experiments for evaluating CLABC are conducted on dynamic moving-peak benchmarks. Furthermore, the proposed algorithm is applied to a real-world application of dynamic RFID network optimization. Statistical analysis of all these cases highlights the significant performance improvement due to the beneficial combination and demonstrates the performance superiority of the proposed algorithm.

  8. Strict fibonacci heaps

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lagogiannis, George; Tarjan, Robert E.

    2012-01-01

    We present the first pointer-based heap implementation with time bounds matching those of Fibonacci heaps in the worst case. We support make-heap, insert, find-min, meld and decrease-key in worst-case O(1) time, and delete and delete-min in worst-case O(lg n) time, where n is the size of the heap ... of the smaller heap when doing a meld. We use the pigeonhole principle in place of the redundant counter mechanism.

  9. Valuing Treatments for Parkinson Disease Incorporating Process Utility: Performance of Best-Worst Scaling, Time Trade-Off, and Visual Analogue Scales.

    Science.gov (United States)

    Weernink, Marieke G M; Groothuis-Oudshoorn, Catharina G M; IJzerman, Maarten J; van Til, Janine A

    2016-01-01

    The objective of this study was to compare treatment profiles including both health outcomes and process characteristics in Parkinson disease using best-worst scaling (BWS), time trade-off (TTO), and visual analogue scales (VAS). From a model comprising seven attributes with three levels each, six unique profiles were selected, representing process-related factors and health outcomes in Parkinson disease. A Web-based survey (N = 613) was conducted in a general population to estimate process-related utilities using profile-based BWS (case 2), multiprofile-based BWS (case 3), TTO, and VAS. The rank order of the six profiles was compared, convergent validity among the methods was assessed, and individual analysis focused on the differentiation between pairs of profiles with the methods used. The aggregated health-state utilities for the six treatment profiles were highly comparable across methods, and no rank reversals were identified. On the individual level, the convergent validity between all methods was strong; however, respondents differentiated less between the utilities of closely related treatment profiles with VAS or TTO than with BWS. For TTO and VAS, this resulted in nonsignificant differences in mean utilities for closely related treatment profiles. This study suggests that all methods are equally able to measure process-related utility when the aim is to estimate the overall value of treatments. On an individual level, such as in shared decision making, BWS allows for better prioritization of treatment alternatives, especially if they are closely related. The decision-making problem and the need for explicit trade-offs between attributes should determine the choice of method. Copyright © 2016. Published by Elsevier Inc.

  10. Population Modeling Approach to Optimize Crop Harvest Strategy. The Case of Field Tomato.

    Science.gov (United States)

    Tran, Dinh T; Hertog, Maarten L A T M; Tran, Thi L H; Quyen, Nguyen T; Van de Poel, Bram; Mata, Clara I; Nicolaï, Bart M

    2017-01-01

    In this study, the aim is to develop a population-model-based approach to optimize fruit harvesting strategies with regard to fruit quality and its derived economic value. The approach was applied to the case of tomato fruit harvesting under Vietnamese conditions. Fruit growth and development of tomato (cv. "Savior") was monitored in terms of fruit size and color during both the Vietnamese winter and summer growing seasons. A kinetic tomato fruit growth model was applied to quantify biological fruit-to-fruit variation in terms of physiological maturation, and was successfully calibrated. Finally, the model was extended to translate the fruit-to-fruit variation at harvest into the economic value of the harvested crop. It can be concluded that a model-based approach to the optimization of harvest date and harvest frequency with regard to the economic value of the crop is feasible. This approach allows growers to optimize their harvesting strategy by harvesting the crop at more uniform maturity stages, meeting the stringent retail demands for a homogeneous high-quality product. The total farm profit would still depend on the impact a change in harvesting strategy might have on related expenditures. This model-based harvest optimization approach can easily be transferred to other fruit and vegetable crops, improving the homogeneity of postharvest product streams.

  11. A novel optimization method, Gravitational Search Algorithm (GSA), for PWR core optimization

    International Nuclear Information System (INIS)

    Mahmoudi, S.M.; Aghaie, M.; Bahonar, M.; Poursalehi, N.

    2016-01-01

    Highlights: • The Gravitational Search Algorithm (GSA) is introduced. • The advantage of GSA is verified on Shekel's Foxholes. • Reload optimization for WWER-1000 and WWER-440 cases is performed. • Maximizing K_eff, minimizing PPFs and flattening the power density are considered. - Abstract: In-core fuel management optimization (ICFMO) is one of the most challenging concepts of nuclear engineering. In recent decades several meta-heuristic algorithms or computational intelligence methods have been developed to optimize reactor core loading patterns. This paper presents a new method of using the Gravitational Search Algorithm (GSA) for in-core fuel management optimization. The GSA is constructed based on the law of gravity and the notion of mass interactions: it uses the theory of Newtonian physics, and its searcher agents are a collection of masses. In this work, as a first step, the GSA method is compared with other meta-heuristic algorithms on Shekel's Foxholes problem. In the second step, to find the best core, the GSA algorithm has been applied to three PWR test cases, including WWER-1000 and WWER-440 reactors. In these cases, multi-objective optimization with the following goals is considered: increasing the multiplication factor (K_eff), decreasing the power peaking factor (PPF) and flattening the power density. It is notable that for the neutronic calculations, the PARCS (Purdue Advanced Reactor Core Simulator) code is used. The results demonstrate that the GSA algorithm has promising performance and could be proposed for other optimization problems in the nuclear engineering field.
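
    For orientation, here is a compact sketch of the published GSA update rules (Rashedi et al., 2009) on a toy objective; the paper replaces the objective with PARCS-based core evaluations, and the Kbest schedule of the full algorithm is omitted here for brevity.

```python
# Minimal Gravitational Search Algorithm (GSA) sketch for minimization.
import numpy as np

def gsa(f, bounds, n_agents=30, iters=200, g0=100.0, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_agents, dim))
    v = np.zeros_like(x)
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, x)
        best, worst = fit.min(), fit.max()
        m = (fit - worst) / (best - worst + 1e-12)   # better fitness -> larger mass
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-20.0 * t / iters)           # decaying gravitational constant
        acc = np.zeros_like(x)
        for i in range(n_agents):                    # pairwise attraction forces
            for j in range(n_agents):
                if i == j:
                    continue
                diff = x[j] - x[i]
                r = np.linalg.norm(diff) + 1e-12
                acc[i] += rng.random() * G * M[j] * diff / r
        v = rng.random((n_agents, dim)) * v + acc    # stochastic velocity update
        x = np.clip(x + v, lo, hi)
    fit = np.apply_along_axis(f, 1, x)
    return x[fit.argmin()], fit.min()

# e.g. minimize the 5-D sphere function
print(gsa(lambda z: float(np.sum(z**2)), [(-5, 5)] * 5))
```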

  12. Flowshop Scheduling Problems with a Position-Dependent Exponential Learning Effect

    Directory of Open Access Journals (Sweden)

    Mingbao Cheng

    2013-01-01

    Full Text Available We consider a permutation flowshop scheduling problem with a position-dependent exponential learning effect. The objective is to minimize the performance criteria of makespan and total flow time. For the two-machine flowshop scheduling case, we show that Johnson's rule is not an optimal algorithm for minimizing the makespan given the exponential learning effect. Furthermore, by using the shortest total processing time first (STPT) rule, we construct worst-case performance ratios for both criteria. Finally, a polynomial-time algorithm is proposed for special cases of the studied problem.
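
    A small brute-force check illustrates the negative result, assuming the common position-based form p_{j,r} = p_j·γ^r for the actual processing time of a job in position r (the paper's exact learning model may differ):

```python
# Two-machine flowshop with a position-based learning factor gamma**r.
from itertools import permutations

def makespan(seq, p1, p2, gamma):
    c1 = c2 = 0.0
    for r, j in enumerate(seq):
        c1 += p1[j] * gamma**r                 # completion on machine 1
        c2 = max(c1, c2) + p2[j] * gamma**r    # completion on machine 2
    return c2

def johnson(p1, p2):
    # classic Johnson order: p1 <= p2 first (increasing p1), rest decreasing p2
    front = sorted((j for j in range(len(p1)) if p1[j] <= p2[j]), key=lambda j: p1[j])
    back = sorted((j for j in range(len(p1)) if p1[j] > p2[j]), key=lambda j: -p2[j])
    return front + back

p1, p2 = [3, 5, 1, 7], [6, 2, 2, 5]
for gamma in (1.0, 0.6):
    js = johnson(p1, p2)
    opt = min(permutations(range(4)), key=lambda s: makespan(s, p1, p2, gamma))
    print(gamma, makespan(js, p1, p2, gamma), makespan(opt, p1, p2, gamma))
    # gamma = 1.0: Johnson's sequence is optimal; gamma = 0.6: it is beaten
```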

  13. Modeling and operation optimization of a proton exchange membrane fuel cell system for maximum efficiency

    International Nuclear Information System (INIS)

    Han, In-Su; Park, Sang-Kyun; Chung, Chang-Bock

    2016-01-01

    Highlights: • A proton exchange membrane fuel cell system is operationally optimized. • A constrained optimization problem is formulated to maximize fuel cell efficiency. • Empirical and semi-empirical models for most system components are developed. • Sensitivity analysis is performed to elucidate the effects of major operating variables. • The optimization results are verified by comparison with actual operation data. - Abstract: This paper presents an operation optimization method and demonstrates its application to a proton exchange membrane fuel cell system. A constrained optimization problem was formulated to maximize the efficiency of a fuel cell system by incorporating practical models derived from actual operations of the system. Empirical and semi-empirical models for most of the system components were developed based on artificial neural networks and semi-empirical equations. Prior to system optimizations, the developed models were validated by comparing simulation results with the measured ones. Moreover, sensitivity analyses were performed to elucidate the effects of major operating variables on the system efficiency under practical operating constraints. Then, the optimal operating conditions were sought at various system power loads. The optimization results revealed that the efficiency gaps between the worst and best operation conditions of the system could reach 1.2–5.5% depending on the power output range. To verify the optimization results, the optimal operating conditions were applied to the fuel cell system, and the measured results were compared with the expected optimal values. The discrepancies between the measured and expected values were found to be trivial, indicating that the proposed operation optimization method was quite successful for a substantial increase in the efficiency of the fuel cell system.

  14. Optimization of well field operation: case study of søndersø waterworks, Denmark

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine; Madsen, Henrik; Bauer-Gottwein, Peter

    2013-01-01

    An integrated hydrological well field model (WELLNES) that predicts the water level and energy consumption in the production wells of a waterworks is used to optimize the management of a waterworks with the speed of the pumps as decision variables. The two-objective optimization problem of minimizing the risk of contamination from a nearby contaminated site and minimizing the energy consumption of the waterworks is solved by genetic algorithms. In comparison with historical values, significant improvements in both objectives can be obtained. If the existing on/off pumps are changed to new variable-speed pumps, it is possible to save 42% of the specific energy consumption and at the same time improve the risk objective function. The payback period of investing in new variable-speed pumps is only 3.1 years, due to the large savings in electricity. The case study illustrates the efficiency ...

  15. Optimal metering plan for measurement and verification on a lighting case study

    International Nuclear Information System (INIS)

    Ye, Xianming; Xia, Xiaohua

    2016-01-01

    M&V (Measurement and Verification) has become an indispensable process in various incentive EEDSM (energy efficiency and demand side management) programmes to accurately and reliably measure and verify the project performance in terms of energy and/or cost savings. Due to the uncertain nature of the unmeasurable savings, there is an inherent trade-off between the M&V accuracy and M&V cost. In order to achieve the required M&V accuracy cost-effectively, we propose a combined spatial and longitudinal MCM (metering cost minimisation) model to assist the design of optimal M&V metering plans, which minimises the metering cost whilst satisfying the required measurement and sampling accuracy of M&V. The objective function of the proposed MCM model is the M&V metering cost that covers the procurement, installation and maintenance of the metering system whereas the M&V accuracy requirements are formulated as the constraints. Optimal solutions to the proposed MCM model offer useful information in designing the optimal M&V metering plan. The advantages of the proposed MCM model are demonstrated by a case study of an EE lighting retrofit project and the model is widely applicable to other M&V lighting projects with different population sizes and sampling accuracy requirements. - Highlights: • A combined spatial and longitudinal optimisation model is proposed to reduce M&V cost. • The combined optimisation model handles M&V sampling uncertainty cost-effectively. • The model exhibits a better performance than the separate spatial or longitudinal models. • The required 90/10 criterion sampling accuracy is satisfied for each M&V report.

  16. Optimization-based methodology for wastewater treatment plant synthesis – a full scale retrofitting case study

    DEFF Research Database (Denmark)

    Bozkurt, Hande; Gernaey, Krist; Sin, Gürkan

    2015-01-01

    Existing wastewater treatment plants (WWTP) need retrofitting in order to better handle changes in the wastewater flow and composition, reduce operational costs, and meet newer and stricter regulatory standards on effluent discharge limits. In this study, we use an optimization-based framework to manage the multi-criteria WWTP design/retrofit problem for domestic wastewater treatment. The design space (i.e. alternative treatment technologies) is represented in a superstructure, which is coupled with a database containing data for both performance and economics of the novel alternative technologies. The superstructure optimization problem is formulated as a Mixed Integer (non)Linear Programming problem and solved for different scenarios - represented by different objective functions and constraint definitions. A full-scale domestic wastewater treatment plant (265,000 PE) is used as a case ...

  17. Asymptotically Optimal Agents

    OpenAIRE

    Lattimore, Tor; Hutter, Marcus

    2011-01-01

    Artificial general intelligence aims to create agents capable of learning to solve arbitrary interesting problems. We define two versions of asymptotic optimality and prove that no agent can satisfy the strong version while in some cases, depending on discounting, there does exist a non-computable weak asymptotically optimal agent.

  18. WE-AB-209-09: Optimization of Rotational Arc Station Parameter Optimized Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Dong, P; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Ungun, B [Stanford University School of Medicine, Stanford, CA (United States); Stanford University School of Engineering, Stanford, CA (United States); Boyd, S [Stanford University School of Engineering, Stanford, CA (United States)

    2016-06-15

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of improving VMAT in both plan quality and delivery efficiency. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based Proximal Operator Graph Solver (POGS) within seconds. Apertures with zero or low weight were thrown out. To avoid being trapped in a local minimum, a stochastic gradient descent method was employed, which also greatly increased the convergence rate of the objective function. The above procedure was repeated until the plan could not be improved any further. A weighting factor associated with the total plan MU also indirectly controlled the complexity of the aperture shapes. The number of apertures for VMAT and SPORT was confined to 180. SPORT allowed the coexistence of multiple apertures in a single SP. The optimization technique was assessed by using three clinical cases (prostate, H&N and brain). Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. Prostate case: the volume of the 50% prescription dose was decreased by 22% for the rectum. H&N case: SPORT improved the mean dose for the left and right parotids by 15% each. Brain case: the doses to the eyes, chiasm and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the H&N case. Conclusion: The superior dosimetric quality and delivery efficiency presented here indicate that SPORT is an intriguing alternative treatment modality.

  19. WE-AB-209-09: Optimization of Rotational Arc Station Parameter Optimized Radiation Therapy

    International Nuclear Information System (INIS)

    Dong, P; Xing, L; Ungun, B; Boyd, S

    2016-01-01

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of improving VMAT in both plan quality and delivery efficiency. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based Proximal Operator Graph Solver (POGS) within seconds. Apertures with zero or low weight were thrown out. To avoid being trapped in a local minimum, a stochastic gradient descent method was employed, which also greatly increased the convergence rate of the objective function. The above procedure was repeated until the plan could not be improved any further. A weighting factor associated with the total plan MU also indirectly controlled the complexity of the aperture shapes. The number of apertures for VMAT and SPORT was confined to 180. SPORT allowed the coexistence of multiple apertures in a single SP. The optimization technique was assessed by using three clinical cases (prostate, H&N and brain). Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. Prostate case: the volume of the 50% prescription dose was decreased by 22% for the rectum. H&N case: SPORT improved the mean dose for the left and right parotids by 15% each. Brain case: the doses to the eyes, chiasm and inner ears were all improved. SPORT shortened the treatment time by ∼1 min for the prostate case, ∼0.5 min for the brain case, and ∼0.2 min for the H&N case. Conclusion: The superior dosimetric quality and delivery efficiency presented here indicate that SPORT is an intriguing alternative treatment modality.
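
    The aperture-weighting step in these two records (optimize nonnegative weights against a dose objective, then discard zero- or low-weight apertures) can be sketched with plain projected gradient descent; the actual work used the GPU-based POGS solver, and the dose-influence matrix and prescription below are random stand-ins for real beam data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_aper = 200, 30
A = rng.random((n_vox, n_aper))          # stand-in: dose per unit aperture weight
d = rng.random(n_vox) * n_aper / 2       # stand-in: prescribed voxel doses

w = np.zeros(n_aper)
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for the quadratic
for _ in range(2000):
    grad = A.T @ (A @ w - d)             # gradient of 0.5*||A w - d||^2
    w = np.maximum(w - step * grad, 0.0) # project onto the constraint w >= 0

keep = w > 0.01 * w.max()                # throw out zero/low-weight apertures
print(f"kept {keep.sum()} of {n_aper} apertures, "
      f"residual {np.linalg.norm(A @ w - d):.2f}")
```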

  20. Quantifying policy options for reducing future coronary heart disease mortality in England: a modelling study.

    Directory of Open Access Journals (Sweden)

    Shaun Scholes

    To estimate the number of coronary heart disease (CHD) deaths potentially preventable in England in 2020 comparing four risk factor change scenarios. Using 2007 as baseline, the IMPACTSEC model was extended to estimate the potential number of CHD deaths preventable in England in 2020 by age, gender and Index of Multiple Deprivation 2007 quintiles given four risk factor change scenarios: (a) assuming recent trends will continue; (b) assuming optimal but feasible levels already achieved elsewhere; (c) an intermediate point, halfway between current and optimal levels; and (d) assuming plateauing or worsening levels, the worst case scenario. These four scenarios were compared to the baseline scenario with both risk factors and CHD mortality rates remaining at 2007 levels. This would result in approximately 97,000 CHD deaths in 2020. Assuming recent trends will continue would avert approximately 22,640 deaths (95% uncertainty interval: 20,390-24,980). There would be some 39,720 (37,120-41,900) fewer deaths in 2020 with optimal risk factor levels and 22,330 fewer (19,850-24,300) in the intermediate scenario. In the worst case scenario, 16,170 additional deaths (13,880-18,420) would occur. If optimal risk factor levels were achieved, the gap in CHD rates between the most and least deprived areas would halve, with falls in systolic blood pressure, physical inactivity and total cholesterol providing the largest contributions to mortality gains. CHD mortality reductions of up to 45%, accompanied by significant reductions in area deprivation mortality disparities, would be possible by implementing optimal preventive policies.

  1. Quantifying policy options for reducing future coronary heart disease mortality in England: a modelling study.

    Science.gov (United States)

    Scholes, Shaun; Bajekal, Madhavi; Norman, Paul; O'Flaherty, Martin; Hawkins, Nathaniel; Kivimäki, Mika; Capewell, Simon; Raine, Rosalind

    2013-01-01

    To estimate the number of coronary heart disease (CHD) deaths potentially preventable in England in 2020 comparing four risk factor change scenarios. Using 2007 as baseline, the IMPACTSEC model was extended to estimate the potential number of CHD deaths preventable in England in 2020 by age, gender and Index of Multiple Deprivation 2007 quintiles given four risk factor change scenarios: (a) assuming recent trends will continue; (b) assuming optimal but feasible levels already achieved elsewhere; (c) an intermediate point, halfway between current and optimal levels; and (d) assuming plateauing or worsening levels, the worst case scenario. These four scenarios were compared to the baseline scenario with both risk factors and CHD mortality rates remaining at 2007 levels. This would result in approximately 97,000 CHD deaths in 2020. Assuming recent trends will continue would avert approximately 22,640 deaths (95% uncertainty interval: 20,390-24,980). There would be some 39,720 (37,120-41,900) fewer deaths in 2020 with optimal risk factor levels and 22,330 fewer (19,850-24,300) in the intermediate scenario. In the worst case scenario, 16,170 additional deaths (13,880-18,420) would occur. If optimal risk factor levels were achieved, the gap in CHD rates between the most and least deprived areas would halve with falls in systolic blood pressure, physical inactivity and total cholesterol providing the largest contributions to mortality gains. CHD mortality reductions of up to 45%, accompanied by significant reductions in area deprivation mortality disparities, would be possible by implementing optimal preventive policies.

  2. Topology optimization for biocatalytic microreactor configurations

    DEFF Research Database (Denmark)

    Pereira Rosinha, Ines; Gernaey, Krist; Woodley, John

    2015-01-01

    ... as a case study. The Evolutionary Structure Optimization (ESO) method is applied using an interface between Matlab® and the computational fluid dynamics simulation software ANSYS CFX®. In the case study, the ESO method is applied to optimize the spatial distribution of immobilized enzyme inside a microreactor...

  3. A novel comprehensive learning artificial bee colony optimizer for dynamic optimization biological problems

    Directory of Open Access Journals (Sweden)

    Weixing Su

    2017-03-01

    Many real-world optimization problems are dynamic, and their demands on convergence and searching ability are quite different from those of static optimization cases. Such problems require an optimization algorithm to adaptively seek the changing optima over dynamic environments, instead of only finding the global optimal solution in a static environment. This paper proposes a novel comprehensive learning artificial bee colony optimizer (CLABC) for optimization in dynamic environments, which employs a pool of optimal foraging strategies to balance the exploration and exploitation tradeoff. The main motive of CLABC is to enrich artificial bee foraging behaviors in the ABC model by combining Powell's pattern search method, a life-cycle mechanism, and a crossover-based social learning strategy. The proposed CLABC is a more bee-colony-realistic model in that bees can reproduce and die dynamically throughout the foraging process and the population size varies as the algorithm runs. The experiments for evaluating CLABC are conducted on the dynamic moving peak benchmarks. Furthermore, the proposed algorithm is applied to a real-world application of dynamic RFID network optimization. Statistical analysis of all these cases highlights the significant performance improvement due to the beneficial combination and demonstrates the performance superiority of the proposed algorithm.

  4. Analysis and Optimization of Spiral Circular Inductive Coupling Link for Bio-Implanted Applications on Air and within Human Tissue

    Directory of Open Access Journals (Sweden)

    Saad Mutashar

    2014-06-01

    The use of wireless communication using inductive links to transfer data and power to implantable microsystems to stimulate and monitor nerves and muscles is increasing. This paper deals with the development of the theoretical analysis and optimization of an inductive link based on coupling and on spiral circular coil geometry. The coil dimensions offer 22 mm of mutual distance in air. At 6 mm of distance, the coils offer a power transmission efficiency of 80% in the optimum case and 73% in the worst case via low input impedance, whereas transmission efficiency is 45% and 32%, respectively, via high input impedance. The simulations were performed in air and with two types of simulated human biological tissue, dry and wet skin, using a depth of 6 mm. The performance results show that the combined magnitude of the electric field components surrounding the external coil is approximately 98% of that in air, and for the internal coil it is approximately 50%. It can be seen that the gain surrounding the coils is almost constant and confirms the omnidirectional pattern associated with such loop antennas, which reduces the effect of non-alignment between the two coils. The results also show that the specific absorption rate (SAR) and power loss within the tissue are below the standard limits. Thus, the tissue will not be damaged.
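
    For a two-coil resonant inductive link of the kind analysed here, the maximum achievable power-transfer efficiency is commonly written in terms of the coupling coefficient k and the coil quality factors as eta_max = k^2*Q1*Q2 / (1 + sqrt(1 + k^2*Q1*Q2))^2. A sketch follows; the k and Q values below are illustrative assumptions, not numbers from the paper.

```python
import math

def link_efficiency(k, q1, q2):
    """Optimum-load efficiency of a two-coil resonant inductive link."""
    u = (k ** 2) * q1 * q2  # figure of merit k^2 * Q1 * Q2
    return u / (1 + math.sqrt(1 + u)) ** 2

# Illustrative values only: coupling k falls as coil separation grows.
for k in (0.30, 0.15, 0.05):
    print(f"k = {k:.2f}  efficiency = {link_efficiency(k, 100, 40):.1%}")
```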

  5. Schedulability Analysis and Optimization for the Synthesis of Multi-Cluster Distributed Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2003-01-01

    An approach to schedulability analysis for the synthesis of multi-cluster distributed embedded systems consisting of time-triggered and event-triggered clusters, interconnected via gateways, is presented. A buffer size and worst case queuing delay analysis for the gateways, responsible for routing inter-cluster traffic, is also proposed. Optimisation heuristics for the priority assignment and synthesis of bus access parameters aimed at producing a schedulable system with minimal buffer needs have been proposed. Extensive experiments and a real-life example show the efficiency of the approaches.

  6. Combined Turbine and Cycle Optimization for Organic Rankine Cycle Power Systems—Part B: Application on a Case Study

    Directory of Open Access Journals (Sweden)

    Angelo La Seta

    2016-05-01

    Organic Rankine cycle (ORC) power systems have recently emerged as promising solutions for waste heat recovery in low- and medium-size power plants. Their performance and economic feasibility strongly depend on the expander. The design process and efficiency estimation are particularly challenging due to the peculiar physical properties of the working fluid and the gas-dynamic phenomena occurring in the machine. Unlike steam Rankine and Brayton engines, organic Rankine cycle expanders combine small enthalpy drops with large expansion ratios. These features yield turbine designs with few highly-loaded stages in supersonic flow regimes. Part A of this two-part paper has presented the implementation and validation of the simulation tool TURAX, which provides the optimal preliminary design of single-stage axial-flow turbines. The authors have also presented a sensitivity analysis on the decision variables affecting the turbine design. Part B of this two-part paper presents the first application of a design method where the thermodynamic cycle optimization is combined with calculations of the maximum expander performance using the mean-line design tool described in part A. The high computational cost of the turbine optimization is tackled by building a model which gives the optimal preliminary design of an axial-flow turbine as a function of the cycle conditions. This allows for estimating the optimal expander performance for each operating condition of interest. The test case is the preliminary design of an organic Rankine cycle turbogenerator to increase the overall energy efficiency of an offshore platform. For an increase in expander pressure ratio from 10 to 35, the results indicate up to a 10%-point reduction in expander performance. This corresponds to a relative reduction in net power output of 8.3% compared to the case when the turbine efficiency is assumed to be 80%. This work also demonstrates that this approach can support the plant designer...

  7. Synthesis of robust nonlinear autopilots using differential game theory

    Science.gov (United States)

    Menon, P. K. A.

    1991-01-01

    A synthesis technique for handling unmodeled disturbances in nonlinear control law synthesis was advanced using differential game theory. Two types of modeling inaccuracies can be included in the formulation. The first is a bias-type error, while the second is the scale-factor-type error in the control variables. The disturbances were assumed to satisfy an integral inequality constraint. Additionally, it was assumed that they act in such a way as to maximize a quadratic performance index. Expressions for optimal control and worst-case disturbance were then obtained using optimal control theory.

  8. Optimization of rotational arc station parameter optimized radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Dong, P.; Ungun, B. [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Boyd, S. [Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Xing, L., E-mail: lei@stanford.edu [Department of Radiation Oncology, Stanford University, Stanford, California 94305 and Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States)

    2016-09-15

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency by using three clinical cases of different disease sites. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based proximal operator graph solver. To avoid being trapped in a local minimum in beamlet-based aperture selection using the gradient descent algorithm, a stochastic gradient descent was employed here. Apertures with zero or low weight were thrown out. To find out whether there was room to further improve the plan by adding more apertures or SPs, the authors repeated the above procedure with consideration of the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures were reoptimized, including those of the first iteration. The above procedure was repeated until the plan could not be improved any further. The optimization technique was assessed by using three clinical cases (prostate, head and neck, and brain) with the results compared to those obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU. Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord. The mean dose for the pharynx and larynx was...

  9. Optimization of rotational arc station parameter optimized radiation therapy

    International Nuclear Information System (INIS)

    Dong, P.; Ungun, B.; Boyd, S.; Xing, L.

    2016-01-01

    Purpose: To develop a fast optimization method for station parameter optimized radiation therapy (SPORT) and show that SPORT is capable of matching VMAT in both plan quality and delivery efficiency by using three clinical cases of different disease sites. Methods: The angular space from 0° to 360° was divided into 180 station points (SPs). A candidate aperture was assigned to each of the SPs based on the calculation results using a column generation algorithm. The weights of the apertures were then obtained by optimizing the objective function using a state-of-the-art GPU based proximal operator graph solver. To avoid being trapped in a local minimum in beamlet-based aperture selection using the gradient descent algorithm, a stochastic gradient descent was employed here. Apertures with zero or low weight were thrown out. To find out whether there was room to further improve the plan by adding more apertures or SPs, the authors repeated the above procedure with consideration of the existing dose distribution from the last iteration. At the end of the second iteration, the weights of all the apertures were reoptimized, including those of the first iteration. The above procedure was repeated until the plan could not be improved any further. The optimization technique was assessed by using three clinical cases (prostate, head and neck, and brain) with the results compared to those obtained using conventional VMAT in terms of dosimetric properties, treatment time, and total MU. Results: Marked dosimetric quality improvement was demonstrated in the SPORT plans for all three studied cases. For the prostate case, the volume of the 50% prescription dose was decreased by 22% for the rectum and 6% for the bladder. For the head and neck case, SPORT improved the mean dose for the left and right parotids by 15% each. The maximum dose was lowered from 72.7 to 71.7 Gy for the mandible, and from 30.7 to 27.3 Gy for the spinal cord. The mean dose for the pharynx and larynx was...

  10. Uncovering Voter Preference Structures Using a Best-Worst Scaling Procedure: Method and Empirical Example in the British General Election of 2010

    DEFF Research Database (Denmark)

    Ormrod, Robert P.; Savigny, Heather

    Best-Worst scaling (BWS) is a method that can provide insights into the preference structures of voters. By asking voters to select the 'best' and 'worst' option ('most important' and 'least important' media in our investigation) from a short list of alternatives it is possible to uncover the relative importance of each alternative ... the least information. We furthermore investigate group differences using an ANOVA procedure to demonstrate how contextual variables can enrich our empirical investigations using the BWS method.
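
    The simplest BWS analysis is a counting one: each item's score is the number of times it was chosen 'best' minus the number of times 'worst', normalised by how often it appeared. A minimal sketch with made-up choice data (the media labels and responses are invented for illustration):

```python
from collections import Counter

# Hypothetical BWS tasks: (items shown, chosen best, chosen worst)
choices = [
    (("TV", "press", "radio", "online"), "TV", "radio"),
    (("TV", "press", "online", "social"), "online", "press"),
    (("radio", "press", "online", "social"), "online", "radio"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in choices:
    shown.update(items)
    best[b] += 1
    worst[w] += 1

# Best-minus-worst score per item, normalised by number of appearances.
for item in sorted(shown, key=lambda i: -(best[i] - worst[i]) / shown[i]):
    score = (best[item] - worst[item]) / shown[item]
    print(f"{item:8s} {score:+.2f}")
```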

  11. A Two-Stage Robust Optimization for Centralized-Optimal Dispatch of Photovoltaic Inverters in Active Distribution Networks

    DEFF Research Database (Denmark)

    Ding, Tao; Li, Cheng; Yang, Yongheng

    2017-01-01

    Optimally dispatching Photovoltaic (PV) inverters is an efficient way to avoid overvoltage in active distribution networks, which may occur when PV generation exceeds the load demand. Typically, the dispatching optimization objective is to identify critical PV inverters that have the most significant impact on the network voltage. However, the intermittent nature of solar PV energy may affect the selection of the critical PV inverters and also the final optimal objective value. In order to address this issue, a two-stage robust optimization model is proposed in this paper to achieve a robust optimal solution to the PV inverter dispatch, which can hedge against any possible realization within the uncertain PV outputs. In addition, the conic relaxation-based branch flow formulation and a second-order cone programming based column-and-constraint generation algorithm are employed to deal with the proposed robust optimization model. Case studies on a 33-bus...

  12. Optimization of well field management

    DEFF Research Database (Denmark)

    Hansen, Annette Kirstine

    Groundwater is a limited but important resource for fresh water supply. Different conflicting objectives are important when operating a well field. This study investigates how the management of a well field can be improved with respect to different objectives simultaneously. A framework for optimizing well field management using multi-objective optimization is developed. The optimization uses the Strength Pareto Evolutionary Algorithm 2 (SPEA2) to find the Pareto front between the conflicting objectives. The Pareto front is a set of non-inferior optimal points and provides an important tool for the decision-makers. The optimization framework is tested on two case studies. Both abstract around 20,000 cubic meters of water per day, but are otherwise rather different. The first case study concerns the management of Hardhof waterworks, Switzerland, where artificial infiltration of river water...

  13. Improving thermal performance of an existing UK district heat network: a case for temperature optimization

    DEFF Research Database (Denmark)

    Tunzi, Michele; Boukhanouf, Rabah; Li, Hongwei

    2018-01-01

    This paper presents results of a research study into improving the energy performance of a small-scale district heat network through a water supply and return temperature optimization technique. The case study involves establishing the baseline heat demand of the estate's buildings, benchmarking the existing heat network operating parameters, and defining the optimum supply and return temperature. A stepwise temperature optimization technique of plate radiator heat emitters was applied to control the buildings' indoor thermal comfort using a night set-back temperature strategy of 21/18 °C. It was established that the heat network return temperature could be lowered from the current measured average of 55 °C to 35.6 °C, resulting in overall reductions of heat distribution losses and fuel consumption of 10% and 9%, respectively. Hence, the study demonstrates the potential of operating existing heat...
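
    The reported drop in return temperature works because radiator heat output scales nonlinearly with the excess of mean water temperature over room temperature, roughly Q = Q0*(dT/dT0)^n with n around 1.3 for plate radiators. A back-of-envelope sketch; the nominal 75/65/20 rating and the 65 °C supply temperature below are illustrative assumptions, not values from the study.

```python
def radiator_output_ratio(t_sup, t_ret, t_room, t_sup0, t_ret0, t_room0, n=1.3):
    """Radiator output relative to nominal, arithmetic-mean over-temperature."""
    dt = (t_sup + t_ret) / 2 - t_room
    dt0 = (t_sup0 + t_ret0) / 2 - t_room0
    return (dt / dt0) ** n

# Nominal 75/65/20 radiator operated at 65 C supply, 35.6 C return, 21 C room:
ratio = radiator_output_ratio(65, 35.6, 21, 75, 65, 20)
print(f"output falls to {ratio:.0%} of nominal -> low return temperatures need"
      " oversized emitters or longer heating hours on design days")
```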

  14. Two-Step Optimization for Spatial Accessibility Improvement: A Case Study of Health Care Planning in Rural China

    Directory of Open Access Journals (Sweden)

    Jing Luo

    2017-01-01

    A recent advancement in location-allocation modeling formulates a two-step approach to a new problem of minimizing disparity of spatial accessibility. Our field work in a health care planning project in a rural county in China indicated that residents valued distance or travel time from the nearest hospital foremost and then considered quality of care, including less waiting time, as a secondary desirability. Based on the case study, this paper further clarifies the sequential decision-making approach, termed "two-step optimization for spatial accessibility improvement" (2SO4SAI). The first step is to find the best locations to site new facilities by emphasizing accessibility as proximity to the nearest facilities, with several alternative objectives under consideration. The second step adjusts the capacities of facilities for minimal inequality in accessibility, where the measure of accessibility accounts for the match ratio of supply and demand and complex spatial interaction between them. The case study illustrates how the two-step optimization method improves both aspects of spatial accessibility for health care access in rural China.
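
    The accessibility measure described in the second step, accounting for the supply-demand match ratio and spatial interaction, is in the spirit of the two-step floating catchment area (2SFCA) method. A compact sketch with toy locations; the distances, capacities, and catchment threshold are invented.

```python
# Two-step floating catchment area (2SFCA), minimal version.
# Step 1: each facility's supply/demand ratio within its catchment.
# Step 2: each village sums the ratios of the facilities it can reach.

dist = {  # travel time (min) village -> hospital, invented numbers
    ("v1", "h1"): 10, ("v1", "h2"): 45,
    ("v2", "h1"): 25, ("v2", "h2"): 20,
    ("v3", "h1"): 50, ("v3", "h2"): 15,
}
population = {"v1": 5000, "v2": 8000, "v3": 3000}
capacity = {"h1": 40, "h2": 25}   # e.g. beds or physicians
d0 = 30                           # catchment threshold in minutes

ratio = {}
for h, cap in capacity.items():
    demand = sum(population[v] for v in population if dist[(v, h)] <= d0)
    ratio[h] = cap / demand if demand else 0.0

access = {v: sum(ratio[h] for h in capacity if dist[(v, h)] <= d0)
          for v in population}
for v, a in access.items():
    print(f"{v}: accessibility {a * 1000:.2f} per 1000 residents")
```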

  15. Method optimization for drug impurity profiling in supercritical fluid chromatography: Application to a pharmaceutical mixture.

    Science.gov (United States)

    Muscat Galea, Charlene; Didion, David; Clicq, David; Mangelings, Debby; Vander Heyden, Yvan

    2017-12-01

    A supercritical fluid chromatographic method for the separation of a drug and its impurities has been developed and optimized applying an experimental design approach and chromatogram simulations. Stationary phase screening was followed by optimization of the modifier and injection solvent composition. A design-of-experiments (DoE) approach was then used to optimize column temperature, back-pressure and the gradient slope simultaneously. Regression models for the retention times and peak widths of all mixture components were built. The factor levels at different grid points were then used to predict the retention times and peak widths of the mixture components using the regression models, and the best separation for the worst separated peak pair in the experimental domain was identified. A plot of the minimal resolutions was used to help identify the factor levels leading to the highest resolution between consecutive peaks. The effects of the DoE factors were visualized in a way that is familiar to the analytical chemist, i.e. by simulating the resulting chromatogram. The mixture of an active ingredient and seven impurities was separated in less than eight minutes. The approach discussed in this paper demonstrates how SFC methods can be developed and optimized efficiently using simple concepts and tools.
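
    The optimization step described here, predicting retention times and peak widths at every factor-grid point and maximising the resolution of the worst-separated pair, can be sketched as below. The quadratic-free response models and factor grids are placeholders for the fitted DoE models of the paper.

```python
import itertools

def resolution(t1, w1, t2, w2):
    """Chromatographic resolution between two adjacent peaks."""
    return 2 * abs(t2 - t1) / (w1 + w2)

# Placeholder response models: retention time and peak width per component
# as functions of temperature T (C), back-pressure P (bar), gradient slope g.
models = [
    (lambda T, P, g: 2.0 + 0.020*T - 0.004*P + 1.5*g, lambda T, P, g: 0.12 + 0.002*g),
    (lambda T, P, g: 2.4 + 0.015*T - 0.003*P + 1.2*g, lambda T, P, g: 0.14 + 0.002*g),
    (lambda T, P, g: 3.1 + 0.010*T - 0.005*P + 0.8*g, lambda T, P, g: 0.16 + 0.003*g),
]

best = None
for T, P, g in itertools.product(range(25, 46, 5), range(105, 156, 10), (1, 2, 3)):
    peaks = sorted((tm(T, P, g), wm(T, P, g)) for tm, wm in models)
    rs_min = min(resolution(t1, w1, t2, w2)                  # worst pair
                 for (t1, w1), (t2, w2) in zip(peaks, peaks[1:]))
    if best is None or rs_min > best[0]:
        best = (rs_min, (T, P, g))

print(f"best worst-pair resolution {best[0]:.2f} at (T, P, slope) = {best[1]}")
```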

  16. Application of best-worst method in evaluation of medical tourism development strategy

    Directory of Open Access Journals (Sweden)

    Farzaneh Abouhashem Abadi

    2018-01-01

    Medical tourism is an international phenomenon in which medical tourists, for reasons such as high treatment costs, long waiting queues, lack of insurance, or lack of access to health care in their origin country, travel long distances to benefit from the health care services of a destination country. Given the competitive nature of this industry, most countries are designing practical and legal services and planning for their development. For this purpose, this study was conducted to develop a strategic planning framework for the development of the medical tourism industry in Yazd province of Iran, because in recent years Yazd has been recognized as a health pole by patients in developing countries. In sum, the emphasis on servicing, enhancing and developing specialized treatment centers has attracted patients from the center, south and east of the country as well as from Middle East and Central Asian countries. The dominant approach of this study is developmental-practical, and the research method is descriptive, analytical and survey-based. In order to analyze the data, the SWOT model and best-worst techniques have been used. After identifying the strategic position of Yazd province in terms of the medical tourism industry, the related strategies were formulated and practical results were presented.

  17. Multicriteria optimization informed VMAT planning

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Huixiao; Craft, David L.; Gierga, David P., E-mail: dgierga@partners.org

    2014-04-01

    We developed a patient-specific volumetric-modulated arc therapy (VMAT) optimization procedure using dose-volume histogram (DVH) information from multicriteria optimization (MCO) of intensity-modulated radiotherapy (IMRT) plans. The study included 10 patients with prostate cancer undergoing standard fractionation treatment, 10 patients with prostate cancer undergoing hypofractionation treatment, and 5 patients with head/neck cancer. MCO-IMRT plans using 20 and 7 treatment fields were generated for each patient on the RayStation treatment planning system (clinical version 2.5, RaySearch Laboratories, Stockholm, Sweden). The resulting DVH of the 20-field MCO-IMRT plan for each patient was used as the reference DVH, and the extracted point values of the resulting DVH of the MCO-IMRT plan were used as objectives and constraints for VMAT optimization. Weights of objectives or constraints of the VMAT optimization, or both, were further tuned to generate the best match with the reference DVH of the MCO-IMRT plan. The final optimal VMAT plan quality was evaluated by comparison with MCO-IMRT plans based on homogeneity index, conformity number of planning target volume, and organ at risk sparing. The influence of gantry spacing, arc number, and delivery time on VMAT plan quality for different tumor sites was also evaluated. The resulting VMAT plan quality essentially matched the 20-field MCO-IMRT plan but with a shorter delivery time and fewer monitor units. VMAT plan quality of the head/neck cancer cases improved using dual arcs whereas the prostate cases did not. VMAT plan quality was improved by a fine gantry spacing of 2° for the head/neck cancer cases and the hypofractionation-treated prostate cancer cases, but not for the standard fractionation-treated prostate cancer cases. MCO-informed VMAT optimization is a useful and valuable way to generate patient-specific optimal VMAT plans, though modification of the weights of objectives or constraints extracted from the resulting DVH of MCO...

  18. Optimization and optimal control in automotive systems

    CERN Document Server

    Kolmanovsky, Ilya; Steinbuch, Maarten; Re, Luigi

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier approaches, based on some degree of heuristics, to the use of more and more common systematic methods. Even systematic methods can be developed and applied in a large number of forms, so the text collects contributions from across the theory, methods and real-world automotive applications of optimization. Greater fuel economy, significant reductions in permissible emissions, new drivability requirements and the generally increasing complexity of automotive systems are among the criteria that the contributing authors set themselves to meet. In many cases multiple and often conflicting requirements give rise to multi-objective constrained optimization problems which are also considered. Some of these problems fall into the domain of the traditional multi-disciplinary optimization applied...

  19. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2015-09-01

    Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem that aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, with the optimization considered from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by gaining its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support, and it is gained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, a sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality have good performance in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of the variation of demand and supply on the multi-modal transportation organization. Therefore, this method can be further promoted to practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem, and a Pareto frontier based sensitivity analysis of demand and supply in the multi-modal transportation organization is performed based on the designed case.
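
    The Pareto frontier the authors obtain with the normalized normal constraint method can be approximated, for illustration, with a simpler epsilon-constraint scan: minimise cost subject to a time budget, then sweep the budget. The routes and their (cost, time) values below are invented.

```python
# Hypothetical multi-modal routes with (cost, time in hours) per consignment.
routes = {
    "rail-road":      (420, 56.0),
    "road only":      (610, 38.0),
    "water-rail":     (350, 84.0),
    "air-road":       (980, 21.0),
    "road-rail-road": (650, 60.0),   # dominated by rail-road
}

def pareto(points):
    """Keep routes not dominated in both cost and time (minimisation)."""
    return sorted(n for n, (c, t) in points.items()
                  if not any(c2 <= c and t2 <= t and (c2, t2) != (c, t)
                             for c2, t2 in points.values()))

# Epsilon-constraint scan: cheapest feasible route within each time budget.
for budget in (90, 60, 40, 25):
    feasible = {n: v for n, v in routes.items() if v[1] <= budget}
    if feasible:
        name = min(feasible, key=lambda n: feasible[n][0])
        print(f"time <= {budget:3.0f} h -> {name}  {feasible[name]}")

print("Pareto-optimal routes:", pareto(routes))
```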

  20. Decision optimization of case-based computer-aided decision systems using genetic algorithms with application to mammography

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Habas, Piotr A; Zurada, Jacek M; Tourassi, Georgia D

    2008-01-01

    This paper presents an optimization framework for improving case-based computer-aided decision (CB-CAD) systems. The underlying hypothesis of the study is that each example in the knowledge database of a medical decision support system has different importance in the decision making process. A new decision algorithm incorporating an importance weight for each example is proposed to account for these differences. The search for the best set of importance weights is defined as an optimization problem and a genetic algorithm is employed to solve it. The optimization process is tailored to maximize the system's performance according to clinically relevant evaluation criteria. The study was performed using a CAD system developed for the classification of regions of interest (ROIs) in mammograms as depicting masses or normal tissue. The system was constructed and evaluated using a dataset of ROIs extracted from the Digital Database for Screening Mammography (DDSM). Experimental results show that, according to receiver operating characteristic (ROC) analysis, the proposed method significantly improves the overall performance of the CAD system as well as its average specificity for high breast mass detection rates.
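
    The decision algorithm described, in which each knowledge-base case carries an importance weight tuned by an evolutionary search against a performance criterion, can be sketched as a weighted nearest-neighbour query. Everything below (features, fitness, and the mutation-only evolutionary loop standing in for the full GA) is a schematic stand-in for the mammography CAD system, not its actual method.

```python
import math
import random

random.seed(3)
# Knowledge base: (feature vector, label 1 = mass / 0 = normal), invented data.
kb = [([random.gauss(lab, 1.0), random.gauss(-lab, 1.0)], lab)
      for lab in (0, 1) for _ in range(30)]

def score(query, weights, k=7):
    """Importance-weighted k-NN vote: weights scale each case's influence."""
    nearest = sorted((math.dist(query, x), w, y)
                     for (x, y), w in zip(kb, weights))[:k]
    return sum(w * y for _, w, y in nearest) / (sum(w for _, w, _ in nearest) or 1)

def fitness(weights, queries):
    """Fraction of held-out queries classified correctly at threshold 0.5."""
    return sum((score(q, weights) > 0.5) == bool(y) for q, y in queries) / len(queries)

queries = [([random.gauss(lab, 1.0), random.gauss(-lab, 1.0)], lab)
           for lab in (0, 1) for _ in range(20)]

# Mutation-only, elitist evolutionary loop over the per-case weights.
best = [1.0] * len(kb)
for _ in range(200):
    cand = [max(0.0, w + random.gauss(0, 0.2)) for w in best]
    if fitness(cand, queries) >= fitness(best, queries):
        best = cand

print(f"accuracy with tuned case weights: {fitness(best, queries):.2f}")
```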

  1. Using best-worst scaling choice experiments to elicit the most important domains of health for health-related quality of life in Singapore

    OpenAIRE

    Uy, Elenore Judy B.; Bautista, Dianne Carrol; Xin, Xiaohui; Cheung, Yin Bun; Thio, Szu-Tien; Thumboo, Julian

    2018-01-01

    Health-related quality of life (HRQOL) instruments are sometimes used without explicit understanding of which HRQOL domains are important to a given population. In this study, we sought to elicit an importance hierarchy among 27 HRQOL domains (derived from the general population) via a best-worst scaling survey of the population in Singapore, and to determine whether these domains were consistently valued across gender, age, ethnicity, and presence of chronic illnesses. We conducted a communi...

  2. Solid Rocket Motor Design Using Hybrid Optimization

    Directory of Open Access Journals (Sweden)

    Kevin Albarado

    2012-01-01

    A particle swarm/pattern search hybrid optimizer was used to drive a solid rocket motor modeling code to an optimal solution. The solid motor code models tapered motor geometries using analytical burn back methods by slicing the grain into thin sections along the axial direction. Grains with circular perforated stars, wagon wheels, and dog bones can be considered, and multiple tapered sections can be constructed. The hybrid approach to optimization is capable of exploring large areas of the solution space through particle swarming, but is also able to climb "hills" of optimality through gradient-based pattern searching. A preliminary method for designing tapered internal geometry as well as tapered outer mold-line geometry is presented. A total of four optimization cases were performed. The first two case studies examine designing motors to match a given regressive-progressive-regressive burn profile. The third case study examines designing a neutrally burning right circular perforated grain (utilizing inner and external geometry tapering). The final case study examines designing a linearly regressive burning profile for right circular perforated (tapered) grains.

  3. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for the market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.
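
    For a long-only portfolio under interval uncertainty, the worst case is realised when every asset return sits at its interval's lower bound, so the inner minimisation collapses and the bare max-min solution concentrates in a single asset; regularisers such as the paper's entropy term are what push the solution back toward a diversified mix. A sketch of that reduction, with invented return intervals and the entropy machinery and transaction costs omitted:

```python
# Each asset return r_i lies in [lo_i, hi_i]; for weights w_i >= 0 the
# worst-case portfolio return is attained at the lower bounds.
intervals = {"A": (0.01, 0.09), "B": (0.03, 0.05), "C": (-0.02, 0.15)}

def worst_case_return(weights):
    return sum(w * intervals[a][0] for a, w in weights.items())

eq = {a: 1 / len(intervals) for a in intervals}            # naive 1/n mix
robust = {a: float(a == max(intervals, key=lambda x: intervals[x][0]))
          for a in intervals}                              # all-in on best lower bound

print("1/n mix worst case :", round(worst_case_return(eq), 4))
print("max-min worst case :", round(worst_case_return(robust), 4))
```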

  4. Optimization of CO2 Storage in Saline Aquifers Using Water-Alternating Gas (WAG) Scheme - Case Study for Utsira Formation

    Science.gov (United States)

    Agarwal, R. K.; Zhang, Z.; Zhu, C.

    2013-12-01

    For optimization of CO2 storage and reduced CO2 plume migration in saline aquifers, a genetic algorithm (GA) based optimizer has been developed and combined with the DOE multi-phase flow and heat transfer numerical simulation code TOUGH2. Designated as GA-TOUGH2, this combined solver/optimizer has been verified by performing optimization studies on a number of model problems and comparing the results with brute-force optimization, which requires a large number of simulations. Using GA-TOUGH2, an innovative reservoir engineering technique known as water-alternating-gas (WAG) injection has been investigated to determine the optimal WAG operation for enhanced CO2 storage capacity. The topmost layer (layer #9) of the Utsira formation at the Sleipner Project, Norway, is considered as a case study. A cylindrical domain, which possesses identical characteristics to the detailed 3D Utsira Layer #9 model except for the absence of 3D topography, was used. Topographical details are known to be important in determining the CO2 migration at Sleipner, and are considered in our companion model for history matching of the CO2 plume migration. However, this simplification of the topography, without compromising accuracy, is necessary to analyze the effectiveness of the WAG operation on CO2 migration without incurring excessive computational cost. The selected WAG operation can then be simulated with full topography details later. We consider a cylindrical domain with a thickness of 35 m and a horizontal flat caprock. All hydrogeological properties are retained from the detailed 3D Utsira Layer #9 model, the most important being the horizontal-to-vertical permeability ratio of 10. Constant Gas Injection (CGI) operation with a nine-year average CO2 injection rate of 2.7 kg/s is considered as the baseline case for comparison. The 30-day, 15-day, and 5-day WAG cycle durations are considered for the WAG optimization design. Our computations show that for the simplified Utsira Layer #9 model, the...

  5. Optimal antiretroviral therapy adherence as evaluated by CASE index score tool is associated with virological suppression in HIV-infected adults in Dakar, Senegal.

    Science.gov (United States)

    Byabene, A K; Fortes-Déguénonvo, L; Niang, K; Manga, M N; Bulabula, A N H; Nachega, J B; Seydi, M

    2017-06-01

    To determine the prevalence and factors associated with optimal antiretroviral therapy (ART) adherence and virological failure (VLF) among HIV-infected adults enrolled in the national ART programme at the teaching hospital of Fann, Dakar, Senegal. Cross-sectional study from 1 September 2013 to 30 January 2014. Outcomes were (1) optimal ART adherence by the Center for Adherence Support Evaluation (CASE) Index Score (>10) and (2) VLF (HIV RNA > 1000 copies/ml). Diagnostic accuracy of the CASE Index Score was assessed using sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV) and corresponding 95% confidence intervals (CIs). Multivariate logistic regression analysis was performed to identify independent factors associated with optimal adherence and VLF. Of 98 HIV-infected patients on ART, 68% were female. The median (IQR) age was 42 (20-50) years. A total of 57 of 98 (60%) were on ART for more than 3 years, and the majority (88%) were on an NNRTI-based first-line ART regimen. A total of 79 of 98 (80%) patients reported optimal ART adherence, and only five of 84 (5.9%) had documented VLF. Patients with VLF were significantly more likely to have suboptimal ART adherence (17.7% vs. 2.9%; P = 0.02). The CASE Index Score showed the best trade-off in Se (78.9%, 95% CI: 54.4-93.9%), Sp (20.0%, 95% CI: 11.1-31.7), PPV (22.4%, 95% CI: 13.1-34.2%) and NPV (76.5%, 95% CI: 50.1-93.2) when a VLF threshold of HIV RNA >50 copies/ml was used. Factors independently associated with VLF included a low CASE Index Score. The CASE Index Score was independently associated with virological outcomes, supporting the usefulness of this low-cost ART adherence monitoring tool in this setting.
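
    The Se/Sp/PPV/NPV figures quoted above come from a standard 2x2 confusion-table calculation, sketched below. The counts are back-calculated so that they reproduce the quoted percentages; they are illustrative, not taken from the study.

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity and predictive values from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts (test = suboptimal adherence, condition = VLF) chosen
# to reproduce the reported 78.9% / 20.0% / 22.4% / 76.5% on n = 84 patients.
for name, value in diagnostic_accuracy(tp=15, fp=52, fn=4, tn=13).items():
    print(f"{name:12s} {value:.1%}")
```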

  6. TOPFARM wind farm optimization tool

    Energy Technology Data Exchange (ETDEWEB)

    Rethore, P.-E.; Fuglsang, P.; Larsen, Torben J.; Buhl, T.; Larsen, Gunner C.

    2011-02-15

    A wind farm optimization framework is presented in detail and demonstrated on two test cases: 1) Middelgrunden and 2) Stags Holt/Coldham. A detailed flow model describing the instationary flow within a wind farm is used together with an aeroelastic model to determine production and fatigue loading of wind farm wind turbines. Based on generic load cases, the wind farm production and fatigue evaluations are subsequently condensed in a large pre-calculated database for rapid calculation of lifetime equivalent loads and energy production in the optimization loop. The objective function defining the optimization problem includes elements such as energy production, turbine degradation, operation and maintenance costs, electrical grid costs and foundation costs. The objective function is optimized using a dedicated multi-fidelity approach, with the locations of the individual turbines in the wind farm spanning the design space. The results are overall satisfactory and give some interesting insights into the pros and cons of the design choices. They show in particular that the inclusion of fatigue load costs gives rise to some additional details in comparison with pure power-based optimization. The Middelgrunden test case resulted in an improvement of the financial balance of 2.1 M Euro, originating from a very large increase in the energy production value of 9.3 M Euro mainly counterbalanced by increased electrical grid costs. The Stags Holt/Coldham test case resulted in an improvement of the financial balance of 3.1 M Euro. (Author)

  7. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, and parameter tuning.

  8. Tsunami hazard in the Caribbean: Regional exposure derived from credible worst case scenarios

    Science.gov (United States)

    Harbitz, C. B.; Glimsdal, S.; Bazin, S.; Zamora, N.; Løvholt, F.; Bungum, H.; Smebye, H.; Gauer, P.; Kjekstad, O.

    2012-04-01

    The present study documents a high tsunami hazard in the Caribbean region, with several thousand lives lost in tsunamis and associated earthquakes since the 19th century. Since then, the coastal population of the Caribbean and the Central West Atlantic region has grown significantly and is still growing. Understanding this hazard is therefore essential for the development of efficient mitigation measures. To this end, we report a regional tsunami exposure assessment based on potential and credible seismic and non-seismic tsunamigenic sources. Regional tsunami databases have been compiled and reviewed, and on this basis five main scenarios have been selected to estimate the exposure. The scenarios comprise two Mw8 earthquake tsunamis (north of Hispaniola and east of the Lesser Antilles), two subaerial/submarine volcano flank collapse tsunamis (Montserrat and Saint Lucia), and one tsunami resulting from a landslide on the flanks of the Kick'em Jenny submarine volcano (north of Grenada). Offshore tsunami water surface elevations as well as maximum water level distributions along the shorelines are computed and discussed for each of the scenarios. The number of exposed people has been estimated in each case, together with a summary of the tsunami exposure for the earthquake and landslide tsunami scenarios. For the earthquake scenarios, the highest tsunami exposure relative to the population is found for Guadeloupe (6.5%) and Antigua (7.5%), while Saint Lucia (4.5%) and Antigua (5%) have been found to have the highest tsunami exposure relative to the population for the landslide scenarios. Such high exposure levels clearly warrant more attention on dedicated mitigation measures in the Caribbean region.

  9. Role of beam orientation optimization in intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Pugachev, Andrei; Li, Jonathan G.; Boyer, Arthur L.; Hancock, Steven L.; Le, Quynh-Thu; Donaldson, Sarah S.; Lei Xing

    2001-01-01

    Purpose: To investigate the role of beam orientation optimization in intensity-modulated radiation therapy (IMRT) and to examine the potential benefits of noncoplanar intensity-modulated beams. Methods and Materials: A beam orientation optimization algorithm was implemented. For this purpose, system variables were divided into two groups: beam position (gantry and table angles) and beam profile (beamlet weights). Simulated annealing was used for beam orientation optimization and the simultaneous iterative inverse treatment planning algorithm (SIITP) for beam intensity profile optimization. Three clinical cases were studied: a localized prostate cancer, a nasopharyngeal cancer, and a paraspinal tumor. Nine fields were used for all treatments. For each case, 3 types of treatment plan optimization were performed: (1) beam intensity profiles were optimized for 9 equiangular spaced coplanar beams; (2) orientations and intensity profiles were optimized for 9 coplanar beams; (3) orientations and intensity profiles were optimized for 9 noncoplanar beams. Results: For the localized prostate case, all 3 types of optimization described above resulted in dose distributions of a similar quality. For the nasopharynx case, optimized noncoplanar beams provided a significant gain in the gross tumor volume coverage. For the paraspinal case, orientation optimization using noncoplanar beams resulted in better kidney sparing and improved gross tumor volume coverage. Conclusion: The sensitivity of an IMRT treatment plan with respect to the selection of beam orientations varies from site to site. For some cases, the choice of beam orientations is important even when the number of beams is as large as 9. Noncoplanar beams provide an additional degree of freedom for IMRT treatment optimization and may allow for notable improvement in the quality of some complicated plans

  10. Determining the optimal mix of federal and contract fire crews: a case study from the Pacific Northwest.

    Science.gov (United States)

    Geoffrey H. Donovan

    2006-01-01

    Federal land management agencies in the United States are increasingly relying on contract crews as opposed to agency fire crews. Despite this increasing reliance on contractors, there have been no studies to determine what the optimal mix of contract and agency fire crews should be. A mathematical model is presented to address this question and is applied to a case...

  11. Optimal quantum learning of a unitary transformation

    International Nuclear Information System (INIS)

    Bisio, Alessandro; Chiribella, Giulio; D'Ariano, Giacomo Mauro; Facchini, Stefano; Perinotti, Paolo

    2010-01-01

    We address the problem of learning an unknown unitary transformation from a finite number of examples. The problem consists in finding the learning machine that optimally emulates the examples, thus reproducing the unknown unitary with maximum fidelity. Learning a unitary is equivalent to storing it in the state of a quantum memory (the memory of the learning machine) and subsequently retrieving it. We prove that, whenever the unknown unitary is drawn from a group, the optimal strategy consists in a parallel call of the available uses followed by a 'measure-and-rotate' retrieving. Differing from the case of quantum cloning, where the incoherent 'measure-and-prepare' strategies are typically suboptimal, in the case of learning the 'measure-and-rotate' strategy is optimal even when the learning machine is asked to reproduce a single copy of the unknown unitary. We finally address the problem of the optimal inversion of an unknown unitary evolution, showing also in this case the optimality of the 'measure-and-rotate' strategies and applying our result to the optimal approximate realignment of reference frames for quantum communication.

  12. Assessing worst case scenarios in movement demands derived from global positioning systems during international rugby union matches: Rolling averages versus fixed length epochs

    Science.gov (United States)

    Cunningham, Daniel J.; Shearer, David A.; Carter, Neil; Drawer, Scott; Pollard, Ben; Bennett, Mark; Eager, Robin; Cook, Christian J.; Farrell, John; Russell, Mark

    2018-01-01

    ... significantly different to FR. For relative distance covered, all other position groups were greater than the FR (p < 0.05). The FIXED method underestimated both relative distance (~11%) and HSR values (up to ~20%) compared to the ROLL method. These differences were exaggerated for the HSR variable in the backs positions, who covered the greatest HSR distance, highlighting an important consideration for those implementing the FIXED method of analysis. The data provide coaches with a worst-case scenario reference on the running demands required for periods of 60-300 s in length. This information offers novel insight into game demands and can be used to inform the design of training games to increase specificity of preparation for the most demanding phases of matches.
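
    The FIXED-versus-ROLL discrepancy has a simple mechanical cause: a fixed 60 s grid can split the most intense passage of play across two epochs, while a rolling window slides to capture it whole. A toy demonstration with synthetic per-second distances (the burst timing and magnitudes are invented):

```python
import random

random.seed(7)
# Synthetic per-second distance (m) with one intense 60 s burst that
# deliberately straddles a fixed-epoch boundary, starting at t = 90 s.
dist = [random.uniform(1.0, 2.0) for _ in range(300)]
for t in range(90, 150):
    dist[t] += 3.0

window = 60
rolling_max = max(sum(dist[i:i + window])
                  for i in range(len(dist) - window + 1))
fixed_max = max(sum(dist[i:i + window])
                for i in range(0, len(dist), window))

print(f"worst 60 s (rolling): {rolling_max:.0f} m")
print(f"worst 60 s (fixed)  : {fixed_max:.0f} m "
      f"({1 - fixed_max / rolling_max:.0%} underestimate)")
```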

  13. Singularities in Structural Optimization of the Ziegler Pendulum

    Directory of Open Access Journals (Sweden)

    O. N. Kirillov

    2011-01-01

    Structural optimization of non-conservative systems with respect to stability criteria is a research area with important applications in fluid-structure interactions, friction-induced instabilities, and civil engineering. In contrast to optimization of conservative systems where rigorously proven optimal solutions in buckling problems have been found, for nonconservative optimization problems only numerically optimized designs have been reported. The proof of optimality in non-conservative optimization problems is a mathematical challenge related to multiple eigenvalues, singularities in the stability domain, and non-convexity of the merit functional. We present here a study of optimal mass distribution in a classical Ziegler pendulum where local and global extrema can be found explicitly. In particular, for the undamped case, the two maxima of the critical flutter load correspond to a vanishing mass either in a joint or at the free end of the pendulum; in the minimum, the ratio of the masses is equal to the ratio of the stiffness coefficients. The role of the singularities on the stability boundary in the optimization is highlighted, and an extension to the damped case as well as to the case of higher degrees of freedom is discussed.

  14. BER-3.2 report: Methodology for justification and optimization of protective measures including a case study

    International Nuclear Information System (INIS)

    Hedemann Jensen, P.; Sinkko, K.; Walmod-Larsen, O.; Gjoerup, H.L.; Salo, A.

    1992-07-01

    This report is part of the Nordic BER-3 project's work to propose and harmonize Nordic intervention levels for countermeasures in case of nuclear accidents. This report focuses on the methodology for justification and optimization of protective measures in case of a reactor accident situation with a large release of fission products to the environment. The down-wind situation is very complicated, and the dose to the exposed society is almost unpredictable. The task of the radiation protection experts - to give advice to the decision makers on the doses averted by the different actions at hand in the situation - is complicated. That of the decision makers is certainly more so: on behalf of the society they represent, they must decide whether to follow the advice of their radiation protection experts or to add further arguments - economical or political (or personal) - into their considerations before their decisions are taken. Two analysis methods available for handling such situations, cost-benefit analysis and multi-attribute utility analysis, are described in principle and are utilized in a case study: the impacts of a Chernobyl-like accident on the Swedish island of Gotland in the Baltic Sea are analyzed with regard to the acute consequences. The use of the intervention principles found in international guidance (IAEA 91, ICRP 91), which can be summarized as the principles of justification, optimization and avoidance of unacceptable doses, is described. How to handle more intangible factors of a psychological or political character is indicated. (au) (6 tabs., 3 ills., 17 refs.)

  15. The development of multi-objective optimization model for excess bagasse utilization: A case study for Thailand

    International Nuclear Information System (INIS)

    Buddadee, Bancha; Wirojanagud, Wanpen; Watts, Daniel J.; Pitakaso, Rapeepan

    2008-01-01

    In this paper, a multi-objective optimization model is proposed as a tool to assist in deciding on the proper utilization scheme for the excess bagasse produced by the sugarcane industry. Two major scenarios for excess bagasse utilization are considered in the optimization. The first scenario is the typical situation in which excess bagasse is used for onsite electricity production. In the second scenario, excess bagasse is processed for offsite ethanol production; the ethanol is then blended with 91-octane gasoline at 10% and 90% by volume, respectively, and the mixture is used as an alternative fuel for gasoline vehicles in Thailand. The model proposed in this paper, called 'Environmental System Optimization', comprises a life cycle impact assessment of global warming potential (GWP) and the associated cost, followed by a multi-objective optimization that finds the optimal proportion of excess bagasse processed under each scenario. Basic mathematical expressions for the GWP and cost of the entire process of excess bagasse utilization are taken into account in the model formulation and optimization. The outcome of this study is a methodology for decision-making concerning the excess bagasse available in Thailand in view of the GWP and economic effects. A demonstration example is presented to illustrate the advantage of the methodology, which may be used by policy makers. The methodology is successfully performed to satisfy both environmental and economic objectives over the whole life cycle of the system. It is shown in the demonstration example that the first scenario results in positive GWP while the second results in negative GWP. The combination of these two scenarios results in positive or negative GWP depending on the weighting given to each objective. The results on the economics of all scenarios show satisfactory outcomes
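
    The weighted-sum step described above is easy to sketch. In the fragment below (Python), the per-tonne GWP and cost figures are invented placeholders rather than values from the study; only the structure - normalize both objectives, then minimize a weighted combination over the split of excess bagasse between the two scenarios - follows the approach described in the abstract.

        import numpy as np

        # Hypothetical per-tonne figures for excess bagasse (NOT from the study):
        # scenario 1 = onsite electricity, scenario 2 = offsite ethanol for blending.
        gwp = np.array([0.12, -0.35])    # t CO2-eq per t bagasse (signs as in the abstract)
        cost = np.array([18.0, 25.0])    # monetary units per t bagasse

        def weighted_objective(x, w_env):
            # x = fraction of excess bagasse sent to scenario 1; the rest to scenario 2.
            split = np.array([x, 1.0 - x])
            g = split @ gwp
            c = split @ cost
            # Normalize each objective by its largest attainable magnitude.
            return w_env * g / np.abs(gwp).max() + (1.0 - w_env) * c / cost.max()

        # Scan the split for a few environment-vs-economy weightings.
        xs = np.linspace(0.0, 1.0, 101)
        for w in (0.0, 0.5, 1.0):
            best = min(xs, key=lambda x: weighted_objective(x, w))
            print(f"w_env={w:.1f}: optimal fraction to onsite electricity = {best:.2f}")

    Because the toy objectives are linear in the split, the optimum jumps between the endpoints as the weighting changes, which mirrors the abstract's point that the combined GWP can be positive or negative depending on the weights.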

  16. Passivity-based model predictive control for mobile vehicle motion planning

    CERN Document Server

    Tahirovic, Adnan

    2013-01-01

    Passivity-based Model Predictive Control for Mobile Vehicle Navigation represents a complete theoretical approach to the adoption of passivity-based model predictive control (MPC) for autonomous vehicle navigation in both indoor and outdoor environments. The brief also introduces analysis of the worst-case scenario that might occur during the task execution. Some of the questions answered in the text include: • how to use an MPC optimization framework for the mobile vehicle navigation approach; • how to guarantee safe task completion even in complex environments including obstacle avoidance and sideslip and rollover avoidance; and • what to expect in the worst-case scenario in which the roughness of the terrain leads the algorithm to generate the longest possible path to the goal. The passivity-based MPC approach provides a framework in which a wide range of complex vehicles can be accommodated to obtain a safer and more realizable tool during the path-planning stage. During task execution, the optimi...

  17. Recriticality calculations for uranium dioxide-water systems with MCNP

    International Nuclear Information System (INIS)

    Kumpf, H.

    1998-01-01

    Investigations of severe accidents in power reactors will hardly produce data on the geometry, composition and density distributions of fuel mixtures in such detail as demanded for criticality calculations. In view of this rather sloppy formulation of the task, one might consider as an objective the search for the 'worst case', i.e. the composition and structure of the arrangement with maximum multiplication. The fuel geometry with maximum k∞ is a hexagonal close packing of spheres of a certain radius, immersed in water. But this arrangement is mechanically unstable. Furthermore, the collapsed hexagonal close packing with touching spheres is by no means optimal with respect to k∞. Thus, mechanical stability is a necessary additional condition in the search for the worst case. The main part of the report deals with the determination of such a structure. In view of the complexity of the task, rigorous mathematical demonstration is not expected to be successful; instead, one adheres to heuristic reasoning. (orig.)

  18. Recriticality calculations for uranium dioxide-water systems with MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Kumpf, H

    1998-10-01

    Investigations of severe accidents in power reactors will hardly produce data on the geometry, composition and density distributions of fuel mixtures in such detail as demanded for criticality calculations. In view of this rather sloppy formulation of the task, one might consider as an objective the search for the 'worst case', i.e. the composition and structure of the arrangement with maximum multiplication. The fuel geometry with maximum k∞ is a hexagonal close packing of spheres of a certain radius, immersed in water. But this arrangement is mechanically unstable. Furthermore, the collapsed hexagonal close packing with touching spheres is by no means optimal with respect to k∞. Thus, mechanical stability is a necessary additional condition in the search for the worst case. The main part of the report deals with the determination of such a structure. In view of the complexity of the task, rigorous mathematical demonstration is not expected to be successful; instead, one adheres to heuristic reasoning. (orig.)

  19. Quantum systems as embarrassed colleagues: what do tax evasion and state tomography have in common?

    Science.gov (United States)

    Ferrie, Chris; Blume-Kohout, Robin

    2011-03-01

    Quantum state estimation (a.k.a. ``tomography'') plays a key role in designing quantum information processors. As a problem, it resembles probability estimation - e.g. for classical coins or dice - but with some subtle and important discrepancies. We demonstrate an improved classical analogue that captures many of these differences: the ``noisy coin.'' Observations on noisy coins are unreliable - much like soliciting sensitive information such as one's tax preparation habits. So, like a quantum system, a noisy coin cannot be sampled directly. Unlike standard coins or dice, whose worst-case estimation risk scales as 1/N for all states, noisy coins (and quantum states) have a worst-case risk that scales as 1/√N and is overwhelmingly dominated by nearly-pure states. The resulting optimal estimation strategies for noisy coins are surprising and counterintuitive. We demonstrate some important consequences for quantum state estimation - in particular, that adaptive tomography can recover the 1/N risk scaling of classical probability estimation.
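
    The scaling claim can be checked with a small simulation (our illustration, not the authors' code). The sketch below uses a relative-entropy loss and a simple smoothed inversion estimator for a coin whose flips are misreported with probability eps; the estimator choice and all names are ours.

        import numpy as np

        rng = np.random.default_rng(1)

        def kl(p, p_hat):
            # Relative-entropy loss KL(p || p_hat); p is a scalar, p_hat an array.
            out = np.zeros_like(p_hat)
            if p > 0:
                out += p * np.log(p / p_hat)
            if p < 1:
                out += (1 - p) * np.log((1 - p) / (1 - p_hat))
            return out

        def worst_case_risk(N, eps, trials=20000):
            ps = [0.0, 0.001, 0.003, 0.01, 0.03, 0.1, 0.3, 0.5]
            worst = 0.0
            for p in ps:
                q = p * (1 - eps) + (1 - p) * eps          # observed heads probability
                heads = rng.binomial(N, q, size=trials)
                q_hat = (heads + 0.5) / (N + 1.0)          # smoothed frequency
                p_raw = (q_hat - eps) / (1.0 - 2.0 * eps)  # invert the noise map
                p_hat = np.clip(p_raw, 0.5 / (N + 1), 1 - 0.5 / (N + 1))
                worst = max(worst, kl(p, p_hat).mean())
            return worst

        for N in (100, 400, 1600):
            print(f"N={N:5d}  ideal coin: {worst_case_risk(N, 0.0):.5f}  "
                  f"noisy coin (eps=0.1): {worst_case_risk(N, 0.1):.5f}")
        # Each 4x increase in N cuts the ideal coin's worst-case risk roughly 4x
        # (the 1/N rate), but the noisy coin's only about 2x: the slow ~1/sqrt(N)
        # behaviour described above, driven by biases near p = 0 and p = 1.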

  20. Probabilistic Solar Energetic Particle Models

    Science.gov (United States)

    Adams, James H., Jr.; Dietrich, William F.; Xapsos, Michael A.

    2011-01-01

    To plan and design safe and reliable space missions, it is necessary to take into account the effects of the space radiation environment. This is done by setting the goal of achieving safety and reliability with some desired level of confidence. To achieve this goal, a worst-case space radiation environment at the required confidence level must be obtained. Planning and designing then proceed, taking into account the effects of this worst-case environment. The result will be a mission that is reliable against the effects of the space radiation environment at the desired confidence level. In this paper we will describe progress toward developing a model that provides worst-case space radiation environments at user-specified confidence levels. We will present a model for worst-case event-integrated solar proton environments that provides the worst-case differential proton spectrum. This model is based on data from the IMP-8 and GOES spacecraft, which provide a database extending from 1974 to the present. We will discuss extending this work to create worst-case models for peak flux and mission-integrated fluence for protons. We will also describe plans for similar models for helium and heavier ions.
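
    In the simplest reading, a worst-case environment at a user-specified confidence level is a quantile of a fitted distribution of event sizes. The sketch below illustrates just that step with an invented lognormal fit; the parameters are placeholders, not the IMP-8/GOES-based model.

        import numpy as np
        from scipy import stats

        # Hypothetical lognormal fit to event-integrated solar proton fluences
        # (parameters invented for illustration; the real model is fitted to
        # the IMP-8/GOES record).
        mu, sigma = np.log(3e8), 1.6          # log-mean and log-sd, protons/cm^2

        for conf in (0.90, 0.95, 0.99):
            worst = stats.lognorm.ppf(conf, s=sigma, scale=np.exp(mu))
            print(f"{conf:.0%} confidence worst-case event fluence: {worst:.2e} p/cm^2")
        # Design then proceeds against the quantile matching the mission's required
        # confidence; a mission-integrated version would combine event-size quantiles
        # with the expected number of events (e.g., via a Poisson model).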

  1. Rovibrational controlled-NOT gates using optimized stimulated Raman adiabatic passage techniques and optimal control theory

    International Nuclear Information System (INIS)

    Sugny, D.; Bomble, L.; Ribeyre, T.; Dulieu, O.; Desouter-Lecomte, M.

    2009-01-01

    Implementation of quantum controlled-NOT (CNOT) gates in realistic molecular systems is studied using stimulated Raman adiabatic passage (STIRAP) techniques optimized in the time domain by genetic algorithms or coupled with optimal control theory. In the first case, with an adiabatic solution (a series of STIRAP processes) as the starting point, we optimize different parameters of the pulses in the time domain to obtain a high fidelity in the two realistic cases under consideration. A two-qubit CNOT gate constructed from different assignments of rovibrational states is considered in diatomic (NaCs) or polyatomic (SCCl₂) molecules. The difficulty of encoding logical states in pure rotational states with STIRAP processes is illustrated. In such circumstances, the gate can be implemented by optimal control theory, and the STIRAP sequence can then be used as an interesting trial field. We discuss the relative merits of the two methods for rovibrational computing (structure of the control field, duration of the control, and efficiency of the optimization).

  2. Ocean disposal option for bulk wastes containing naturally occurring radionuclides: an assessment case history

    International Nuclear Information System (INIS)

    Stull, E.A.; Merry-Libby, P.

    1985-01-01

    There are 180,000 m³ of slightly contaminated radioactive wastes (36 pCi/g radium-226) currently stored at the US Department of Energy's Niagara Falls Storage Site (NFSS), near Lewiston, New York. These wastes resulted from the cleanup of soils that were contaminated above the guidelines for unrestricted use of property. An alternative to long-term management of these wastes on land is dispersal in the ocean. A scenario for ocean disposal is presented for excavation, transport, and emplacement of these wastes in an ocean disposal site. The potential fate of the wastes and impacts on the ocean environment are analyzed, and uncertainties in the development of two worst-case scenarios for dispersion and pathway analyses are discussed. Based on analysis of a worst-case pathway back to man, the incremental dose from ingesting fish containing naturally occurring radionuclides from ocean disposal of the NFSS wastes is insignificant. Ocean disposal of this type of waste appears to be a technically promising alternative to the long-term maintenance costs and eventual loss of containment associated with management in a near-surface land burial facility

  3. Robustness Recipes for Minimax Robust Optimization in Intensity Modulated Proton Therapy for Oropharyngeal Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Voort, Sebastian van der [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Water, Steven van de [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Perkó, Zoltán [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Heijmen, Ben [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands); Lathouwers, Danny [Section of Nuclear Energy and Radiation Applications, Department of Radiation, Science and Technology, Delft University of Technology, Delft (Netherlands); Hoogeman, Mischa, E-mail: m.hoogeman@erasmusmc.nl [Department of Radiation Oncology, Erasmus MC Cancer Institute, Rotterdam (Netherlands)

    2016-05-01

    Purpose: We aimed to derive a “robustness recipe” giving the range robustness (RR) and setup robustness (SR) settings (ie, the error values) that ensure adequate clinical target volume (CTV) coverage in oropharyngeal cancer patients for given Gaussian distributions of systematic setup, random setup, and range errors (characterized by standard deviations Σ, σ, and ρ, respectively) when used in minimax worst-case robust intensity modulated proton therapy (IMPT) optimization. Methods and Materials: For the analysis, contoured computed tomography (CT) scans of 9 unilateral and 9 bilateral patients were used. An IMPT plan was considered robust if, for at least 98% of the simulated fractionated treatments, 98% of the CTV received 95% or more of the prescribed dose. For fast assessment of the CTV coverage for given error distributions (ie, different values of Σ, σ, and ρ), polynomial chaos methods were used. Separate recipes were derived for the unilateral and bilateral cases using one patient from each group, and all 18 patients were included in the validation of the recipes. Results: Treatment plans for bilateral cases are intrinsically more robust than those for unilateral cases. The required RR depends only on ρ, and SR can be fitted by second-order polynomials in Σ and σ. The formulas for the derived robustness recipes are as follows: Unilateral patients need SR = −0.15Σ² + 0.27σ² + 1.85Σ − 0.06σ + 1.22 and RR = 3% for ρ = 1% and ρ = 2%; bilateral patients need SR = −0.07Σ² + 0.19σ² + 1.34Σ − 0.07σ + 1.17 and RR = 3% and 4% for ρ = 1% and 2%, respectively. For the recipe validation, 2 plans were generated for each of the 18 patients, corresponding to Σ = σ = 1.5 mm and ρ = 0% and 2%. Thirty-four plans had adequate CTV coverage in 98% or more of the simulated fractionated treatments; the remaining 2 had adequate coverage in 97.8% and 97.9%. Conclusions: Robustness recipes were derived that can
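
    The recipes are closed-form and straightforward to apply. The following transcription evaluates them (the function name and interface are ours; Σ and σ are in millimeters, ρ in percent, as in the text):

        def robustness_recipe(site, Sigma, sigma, rho):
            # Setup robustness SR [mm] and range robustness RR [%] for minimax
            # worst-case IMPT optimization, per the fitted recipes above.
            # site: 'unilateral' or 'bilateral'; Sigma/sigma: systematic/random
            # setup error SDs [mm]; rho: range error SD [%] (1 or 2).
            if site == "unilateral":
                SR = -0.15 * Sigma**2 + 0.27 * sigma**2 + 1.85 * Sigma - 0.06 * sigma + 1.22
                RR = 3.0                      # 3% for both rho = 1% and rho = 2%
            elif site == "bilateral":
                SR = -0.07 * Sigma**2 + 0.19 * sigma**2 + 1.34 * Sigma - 0.07 * sigma + 1.17
                RR = 3.0 if rho <= 1.0 else 4.0
            else:
                raise ValueError("site must be 'unilateral' or 'bilateral'")
            return SR, RR

        # Example matching the validation setting: Sigma = sigma = 1.5 mm, rho = 2%.
        print(robustness_recipe("bilateral", 1.5, 1.5, 2.0))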

  4. Patient- and Family-Identified Problems of Traumatic Brain Injury: Value and Utility of a Target Outcome Approach to Identifying the Worst Problems

    Directory of Open Access Journals (Sweden)

    Laraine Winter

    2016-01-01

    Full Text Available Purpose: This study aimed to identify the sequelae of traumatic brain injury (TBI that are most troubling to veterans with TBI and their families and identify veteran-family differences in content and ranking. Instead of standardized measures of symptom frequency or severity, which may be insensitive to change or intervention effects, we used a target outcome measure for veterans with TBI and their key family members, which elicited open-ended reports concerning the three most serious TBI-related problems. This was followed by Likert-scaled ratings of difficulty in managing the problem. Methods: In this cross-sectional study, interviews were conducted in veterans’ homes. Participants included 83 veterans with TBI diagnosed at a Veterans Affairs medical rehabilitation service and a key family member of each veteran. We utilized open-ended questions to determine the problems caused by TBI within the last month. Sociodemographic characteristics of veterans and family members, and veterans’ military and medical characteristics were collected. A coding scheme was developed to categorize open-ended responses. Results: Families identified nearly twice as many categories of problems as did veterans, and veterans and families ranked problem categories very differently. Veterans ranked cognitive and physical problems worst; families ranked emotional and interpersonal problems worst. Conclusions: Easily administered open-ended questions about the most troubling TBI-related problems yield novel insights and reveal important veteran-family discrepancies.

  5. Digital marketing in travel industry. Case: Hotel landing page optimization

    OpenAIRE

    Bitkulova, Renata

    2017-01-01

    Landing page optimization is the implementation of the principles of digital service design to improve a website’s user experience. Well-executed landing page optimization can have a significant positive effect on the usability and profitability of the website. The objective of the study was to optimize the Russian-language version of the Vuokatti landing page in order to increase conversion, defined as the number of clicks on the accommodation search button. A literature survey was made to determine ...

  6. Effectiveness of noncoplanar IMRT planning using a parallelized multiresolution beam angle optimization method for paranasal sinus carcinoma

    International Nuclear Information System (INIS)

    Wang Xiaochun; Zhang Xiaodong; Dong Lei; Liu, Helen; Gillin, Michael; Ahamad, Anesa; Ang Kian; Mohan, Radhe

    2005-01-01

    Purpose: To determine the effectiveness of noncoplanar beam configurations and the benefit of plans using fewer but optimally placed beams designed by a parallelized multiple-resolution beam angle optimization (PMBAO) approach. Methods and Materials: The PMBAO approach uses a combination of coplanar and noncoplanar beam configurations for intensity-modulated radiation therapy (IMRT) treatment planning of paranasal sinus cancers. A smaller number of beams (e.g. 3) are first used to explore the solution space to determine the best and worst beam directions. The results of this exploration are then used as a starting point for determining an optimum beam orientation configuration with more beams (e.g. 5). This process is parallelized using a message passing interface, which greatly reduces the overall computation time for routine clinical practice. To test this approach, treatment for 10 patients with paranasal sinus cancer was planned using a total of 5 beams from a pool of 46 possible beam angles. The PMBAO treatment plans were also compared with IMRT plans designed using 9 equally spaced coplanar beams, which is the standard approach in our clinic. Plans with these two different beam configurations were compared with respect to dose conformity, dose heterogeneity, dose-volume histograms, and doses to organs at risk (i.e., eyes, optic nerve, optic chiasm, and brain). Results: The noncoplanar beam configuration was superior in most paranasal sinus carcinoma cases. The target dose homogeneity was better using a PMBAO 5-beam configuration. However, the dose conformity using PMBAO was not improved and was case dependent. Compared with the 9-beam configuration, the PMBAO configuration significantly reduced the mean dose to the eyes and optic nerves and the maximum dose to the contralateral optical path (e.g. the contralateral eye and optic nerve). The maximum dose to the ipsilateral eye and optic nerve was also lower using the PMBAO configuration than using the 9-beam

  7. Detailed clinicopathological characterization of progressive alopecia areata patients treated with i.v. corticosteroid pulse therapy toward optimization of inclusion criteria.

    Science.gov (United States)

    Sato, Misato; Amagai, Masayuki; Ohyama, Manabu

    2014-11-01

    The management of progressive alopecia areata (AA) is often challenging. Recently, i.v. corticosteroid pulse therapy has been reported to be effective for acute and severe AA; however, the inclusion criteria have not been sufficiently precise, leaving the possibility that its efficacy could be further improved by optimizing the therapeutic indications. In an attempt to delineate the factors that correlate with favorable outcomes, we minutely evaluated the clinicopathological findings and prognoses of progressive AA cases treated with a single round of steroid pulse therapy, for which full sets of image and pathology records were available throughout the course. Almost complete hair regrowth was achieved and maintained for up to 2 years in five out of seven AA patients with varying degrees of clinical severity. Interestingly, the worst clinical presentation observed during the course correlated with the size of the area from which hairs with dystrophic roots could be pulled, rather than with the extent of visible hair loss at the first visit. Dermoscopy detected disease spread but contributed little to assessing prognosis. Dense perifollicular cell infiltration was detected in all cases treated within 4 weeks of onset and in those treated later but with excellent response. Importantly, the cases with poor or incomplete hair regrowth were treated 6-8 weeks after onset and showed moderate inflammatory change with a high telogen conversion rate. These findings mandate global dermoscopy and a hair pull test for judging the treatment indication, and suggest that early administration of high-dose corticosteroid, ideally within 4 weeks of onset, enables efficient suppression of active inflammation and maximizes the effectiveness of the remedy. © 2014 Japanese Dermatological Association.

  8. Decision making with epistemic uncertainty under safety constraints: An application to seismic design

    Science.gov (United States)

    Veneziano, D.; Agarwal, A.; Karaca, E.

    2009-01-01

    The problem of accounting for epistemic uncertainty in risk management decisions is conceptually straightforward, but is riddled with practical difficulties. Simple approximations are often used whereby future variations in epistemic uncertainty are ignored or worst-case scenarios are postulated. These strategies tend to produce sub-optimal decisions. We develop a general framework based on Bayesian decision theory and exemplify it for the case of seismic design of buildings. When temporal fluctuations of the epistemic uncertainties and regulatory safety constraints are included, the optimal level of seismic protection exceeds the normative level at the time of construction. Optimal Bayesian decisions do not depend on the aleatory or epistemic nature of the uncertainties, but only on the total (epistemic plus aleatory) uncertainty and how that total uncertainty varies randomly during the lifetime of the project. © 2009 Elsevier Ltd. All rights reserved.

  9. Performance Analysis of DC-offset STBCs for MIMO Optical Wireless Communications

    KAUST Repository

    Sapenov, Yerzhan

    2017-04-01

    In this report, an optical wireless multiple-input multiple-output communication system employing intensity-modulation direct-detection is considered. The performance of direct current offset space-time block codes (DC-STBC) is studied in terms of pairwise error probability (PEP). It is shown that among the class of DC-STBCs, the worst-case PEP corresponding to the minimum distance between two codewords is minimized by repetition coding (RC), under both electrical and optical individual power constraints. It follows that among all DC-STBCs, RC is optimal in terms of worst-case PEP for static channels and also for varying channels under any turbulence statistics. This result agrees with previously published numerical results showing the superiority of RC in such systems. It also agrees with previously published analytic results on this topic under log-normal turbulence and further extends them to arbitrary turbulence statistics. This shows the redundancy of the time dimension of the DC-STBC in this system. This result is further extended to sum power constraints with static and turbulent channels, where it is also shown that the time dimension is redundant, and the optimal DC-STBC has a spatial beamforming structure. Numerical results are provided to demonstrate the difference in performance for systems with different numbers of receiving apertures and different throughputs.

  10. Multiobjective optimization model of intersection signal timing considering emissions based on field data: A case study of Beijing.

    Science.gov (United States)

    Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo

    2018-04-18

    Most existing signal timing models aim to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiency on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field emissions and Global Positioning System (GPS) data are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, a multiobjective signal timing optimization model is established based on a genetic algorithm to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing. Nine scenarios are designed considering different weights for emissions and traffic efficiency. The results, compared with those obtained using the Highway Capacity Manual (HCM) 2010, show that signal timing optimized by the proposed model decreases vehicle delay and emissions more significantly. The optimization model can be applied in different cities, providing support for eco-signal design and development. Vehicle emissions are heavy at signalized intersections in urban areas, and the proposed model addresses exactly this trade-off between emissions and traffic efficiency on the basis of field data.
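
    As a sketch of the optimization core - not the authors' implementation, and with invented delay/stop/emission functions standing in for the field-data models - a genetic search over the green split of a two-phase signal with a weighted objective can be written compactly:

        import numpy as np

        rng = np.random.default_rng(7)
        CYCLE = 90.0                     # s, fixed cycle length of a 2-phase signal

        def objectives(g1):
            # Toy stand-ins for the field-data delay/stops/emission models.
            g2 = CYCLE - g1
            delay = 900.0 / g1 + 1400.0 / g2          # s (hypothetical)
            stops = 30.0 / g1 + 55.0 / g2             # per cycle (hypothetical)
            emis = 0.4 * delay + 2.0 * stops          # g CO (hypothetical)
            return delay, stops, emis

        def fitness(g1, w=(0.4, 0.2, 0.4)):
            d, s, e = objectives(g1)
            return w[0] * d + w[1] * s + w[2] * e     # weighted sum to minimize

        pop = rng.uniform(15.0, CYCLE - 15.0, size=40)     # initial green splits
        for _ in range(60):                                # generations
            fit = np.array([fitness(g) for g in pop])
            parents = pop[np.argsort(fit)][:20]            # truncation selection
            kids = rng.choice(parents, 20) * 0.5 + rng.choice(parents, 20) * 0.5
            kids += rng.normal(0.0, 2.0, 20)               # mutation
            pop = np.clip(np.concatenate([parents, kids]), 15.0, CYCLE - 15.0)
        print("best phase-1 green time:", pop[np.argmin([fitness(g) for g in pop])])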

  11. Worst case prediction of additives migration from polystyrene for food safety purposes: a model update.

    Science.gov (United States)

    Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane

    2018-03-01

    A reliable prediction of the migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU Commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at the time. In the specific case of general purpose polystyrene, the diffusivity data published since then show that using the equation with the original parameters results in a systematic underestimation of diffusivity. The goal of this study was therefore to propose an update of the aforementioned parameters for PS on the basis of up-to-date diffusivity data, so that the equation can be used for a reasoned overestimation of diffusivity.
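
    For context, the semi-empirical equation in question is commonly written as D_P = 10^4 · exp(A_P' − τ/T − 0.1351·M^(2/3) + 0.003·M − 10454/T) cm²/s, with polymer-dependent parameters A_P' and τ. The sketch below implements this commonly cited form; the PS values shown are the often-tabulated pre-update ones, quoted from general literature rather than from this paper, and the updated parameters proposed by the study should be substituted when applying it.

        import math

        def worst_case_diffusivity(M, T, Ap_prime, tau):
            # Semi-empirical upper-bound diffusion coefficient D_P [cm^2/s].
            # M: migrant molecular mass [g/mol]; T: temperature [K];
            # Ap_prime, tau: polymer-dependent parameters.
            Ap = Ap_prime - tau / T
            return 1e4 * math.exp(Ap - 0.1351 * M ** (2.0 / 3.0)
                                  + 0.003 * M - 10454.0 / T)

        # Commonly tabulated (pre-update) parameters for general purpose PS are
        # around Ap' = -1.0, tau = 0; per the study above these underestimate D.
        D = worst_case_diffusivity(M=268.0, T=313.0, Ap_prime=-1.0, tau=0.0)
        print(f"D_P ≈ {D:.3e} cm^2/s")   # hypothetical 268 g/mol additive at 40 °C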

  12. Optimal design of pressurized irrigation systems. Application cases (Ecuador)

    Directory of Open Access Journals (Sweden)

    Carmen Mireya Lapo Pauta

    2013-05-01

    Full Text Available This paper presents research completed with the intention of finding the most economical solution in the design of pressurized irrigation networks while efficiently meeting the service requirements. A systematic methodology is proposed that combines two optimization techniques in a 'hybrid method' in which linear programming, nonlinear programming and genetic algorithms are fused. The overall formulation of the optimal dimensioning problem consists of minimizing an objective function given by the associated cost of the pipes that form the network. This methodology was implemented in three networks: a fictitious irrigation network and two real irrigation networks (Tuncarta and Cariyacu), located in Loja and Chimborazo, which yielded optimal design solutions. Finally, different scenarios were simulated in both models to obtain an overview of the operation of the hydraulic variables

  13. Architectural Optimization of Digital Libraries

    Science.gov (United States)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although useful information is gained to aid these designers and other researchers with insights to performance and scaling issues, the broader issues relevant to very large scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues. Specifically, the calculation of Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.

  14. Enabling optimization in LCA: from “ad hoc” to “structural” LCA approach—based on a biodiesel well-to-wheel case study

    DEFF Research Database (Denmark)

    Herrmann, Ivan Tengbjerg; Lundberg-Jensen, Martin; Jørgensen, Andreas

    2014-01-01

    Through a biodiesel well-to-wheel study, we demonstrate a generic approach of applying explanatory variables and corresponding impact categories within the LCA methodology. Explanatory variables are product system variables that can influence the environmental impacts from the system ... for searching or screening product systems for environmental optimization potentials. In the presented case, the design has been a rather simple full factorial design. More complicated problems or designs, such as fractional designs, nested designs, split plot designs, and/or unbalanced data, in the context ... (Montgomery 2005). Furthermore, using the structural approach enables two different possibilities for optimization: (1) single-objective optimization (SO) based on response surface methodology (Montgomery 2005) and (2) multiobjective optimization (MO) by the hypervolume estimation taboo search (HETS) method. HETS enables MO...

  15. Efficacy of training optimism on general health

    Directory of Open Access Journals (Sweden)

    Mojgan Behrad

    2012-09-01

    Full Text Available Background: The purpose of this study was to investigate the relationship of optimism with mental health and the effectiveness of optimism training on mental health and its components among Yazd University students. Materials and Methods: Fifty new students of the 2008-2009 academic year were randomly selected. The General Health Questionnaire (GHQ-28) and the optimism scale were completed by them. Thirty of these students, who had the highest psychological problem scores based on the general health questionnaire, were divided into case and control groups through random assignment. The case group was trained for one month, in two 90-minute sessions per week. Pre-tests and follow-up tests were performed in both groups. Results: Pearson correlation coefficients showed that optimism had a negative and significant relationship with mental health, anxiety, social function, and depression scores (p < 0.005). Multivariate analysis of covariance showed that optimism training had a significant impact on mental health and its components in the case group compared with the control group (p < 0.0001). Conclusion: In general, the findings of this research suggest a relationship between optimism and mental health and the effectiveness of optimism training on mental health. This method can be used to treat and prevent mental health problems.

  16. Ergodic optimization in the expanding case concepts, tools and applications

    CERN Document Server

    Garibaldi, Eduardo

    2017-01-01

    This book focuses on the interpretation of ergodic optimization problems as questions of variational dynamics, employing an approach comparable to that of the Aubry-Mather theory for Lagrangian systems. Ergodic optimization is primarily concerned with the study of optimizing probability measures. This work presents and discusses the fundamental concepts of the theory, including the use and relevance of sub-actions as analogues of subsolutions of the Hamilton-Jacobi equation. Further, it provides evidence for the impressively broad applicability of the tools inspired by weak KAM theory.

  17. Transmission tariffs based on optimal power flow

    International Nuclear Information System (INIS)

    Wangensteen, Ivar; Gjelsvik, Anders

    1998-01-01

    This report discusses transmission pricing as a means of obtaining optimal scheduling and dispatch in a power system, where optimality includes consumption as well as generation. The report concentrates on how prices can be used as signals for the operational decisions of market participants (generators, consumers). The main focus is on deregulated systems with open access to the network. The optimal power flow theory, with demand side modelling included, is briefly reviewed. It turns out that the marginal costs obtained from the optimal power flow give the optimal transmission tariff for the particular load flow in question. There is also a correspondence between losses and optimal prices. Emphasis is on simple examples that demonstrate the connection between optimal power flow results and tariffs. Various cases, such as open access and single owner, are discussed. A key result is that the location of the ''marketplace'' in the open access case does not influence the net economic result for any of the parties involved (generators, network owner, consumers). The optimal power flow is instantaneous and in its standard form cannot deal with energy-constrained systems that are coupled in time, such as hydropower systems with reservoirs. A simplified example of how the theory can be extended to such a system is discussed. An example of the influence of security constraints on prices is also given. 4 refs., 24 figs., 7 tabs
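
    The correspondence between OPF marginal costs and tariffs is easy to reproduce on a toy network. The two-bus DC example below is ours (arbitrary costs, load and line limit, not from the report): once the line congests, the duals of the nodal balance constraints - the nodal marginal prices - differ across the buses.

        import numpy as np
        from scipy.optimize import linprog

        # Variables: [g1, g2, f] = generation at buses 1 and 2 (MW), flow 1->2 (MW).
        cost = np.array([10.0, 30.0, 0.0])   # $/MWh; transporting power is free
        load = np.array([0.0, 100.0])        # MW demand at each bus

        # Nodal balance: g1 - f = load1 ;  g2 + f = load2.
        A_eq = np.array([[1.0, 0.0, -1.0],
                         [0.0, 1.0,  1.0]])
        bounds = [(0, 200), (0, 200), (-60, 60)]   # the 60 MW line limit congests

        res = linprog(cost, A_eq=A_eq, b_eq=load, bounds=bounds, method="highs")
        print("dispatch [g1, g2, flow]:", res.x)
        # The duals of the nodal balance constraints are the nodal marginal prices,
        # i.e. the optimal transmission tariffs (the sign convention depends on the
        # scipy/HiGHS version; the magnitudes here are 10 and 30 $/MWh).
        print("nodal prices:", res.eqlin.marginals)

    On this instance the line congests at 60 MW, so the cheap bus clears at 10 $/MWh and the constrained bus at 30 $/MWh; with the line limit relaxed, the two prices collapse to a single system price, which is the signalling role of tariffs discussed above.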

  18. A Study of the Optimal Planning Model for Reservoir Sustainable Management- A Case Study of Shihmen Reservoir

    Science.gov (United States)

    Chen, Y. Y.; Ho, C. C.; Chang, L. C.

    2017-12-01

    Reservoir management in Taiwan faces many challenges. Massive sediment caused by landslides is flushed into reservoirs, decreasing capacity, raising turbidity, and increasing supply risk. The sediment is usually accompanied by nutrients that cause eutrophication problems. Moreover, the uneven distribution of rainfall causes water supply instability. Hence, ensuring the sustainable use of reservoirs has become an important task in reservoir management. The purpose of this study is to develop an optimal planning model for sustainable reservoir management that finds optimal operation rules for reservoir flood control and sediment sluicing. The model combines Genetic Algorithms with artificial neural networks for hydraulic analysis and reservoir sediment movement. The main objectives of the operation rules in this study are to prevent reservoir outflow from causing downstream overflow, to minimize the gap between the initial and final water levels of the reservoir, and to maximize sediment sluicing efficiency. A case study of Shihmen reservoir was used to explore the differences between the optimal operating rules and the current operation of the reservoir. The results indicate that the optimal operating rules tend to open the desilting tunnel early and extend its open duration during the flood discharge period. The results also show that the sediment sluicing efficiency of the optimal operating rules is 36%, 44%, and 54% during Typhoon Jangmi, Typhoon Fung-Wong, and Typhoon Sinlaku, respectively. The results demonstrate that the optimal operation rules can play a role in extending the service life of Shihmen reservoir and protecting the safety of the downstream area. The study introduces a low-cost strategy, alteration of reservoir operation rules, into sustainable reservoir management instead of pump dredging, in order to address the problem of reservoir sediment removal and its high cost.

  19. An evolutionary algorithm for port-of-entry security optimization considering sensor thresholds

    International Nuclear Information System (INIS)

    Concho, Ana Lisbeth; Ramirez-Marquez, Jose Emmanuel

    2010-01-01

    According to the US Customs and Border Protection (CBP), the number of offloaded ship cargo containers arriving at US seaports each year amounts to more than 11 million. The cost of locating an undetonated terrorist weapon at one US port, or even worse, the cost caused by a detonated weapon of mass destruction, would amount to billions of dollars. These costs do not yet account for the devastating consequences for the ability to keep the supply chain operating, nor for the sociological and psychological effects. As such, this paper is concerned with developing a container inspection strategy that minimizes the total cost of inspection while maintaining a user-specified detection rate for 'suspicious' containers. In this respect, and based on a general decision-tree model, this paper presents a holistic evolutionary algorithm for finding: (1) optimal threshold values for every sensor and (2) the optimal configuration of the inspection strategy. The algorithm works under the assumption that different sensors with different reliability and cost characteristics can be used. Testing and experimentation show that the proposed approach consistently finds high quality solutions in a reduced computational time.

  20. Optimized packings with applications

    CERN Document Server

    Pintér, János

    2015-01-01

    This volume presents a selection of case studies that address a substantial range of optimized object packings (OOP) and their applications. The contributing authors are well-recognized researchers and practitioners. The mathematical modelling and numerical solution aspects of each application case study are presented in sufficient detail. A broad range of OOP problems are discussed: these include various specific and non-standard container loading and object packing problems, as well as the stowing of hazardous and other materials on container ships, data centre resource management, automotive engineering design, space station logistic support, cutting and packing problems with placement constraints, the optimal design of LED street lighting, robust sensor deployment strategies, spatial scheduling problems, and graph coloring models and metaheuristics for packing applications. Novel points of view related to model development and to computational nonlinear, global, mixed integer optimization and heuristic st...

  1. Thermal design of an electric motor using Particle Swarm Optimization

    International Nuclear Information System (INIS)

    Jandaud, P-O; Harmand, S; Fakes, M

    2012-01-01

    In this paper, the flow inside an electric machine called a starter-alternator is studied parametrically with CFD, in order to be used by a lumped thermal model coupled to an optimization algorithm using Particle Swarm Optimization (PSO). In the first case, the geometrical parameters are symmetric, allowing us to model only one side of the machine; the optimized thermal results are not conclusive. In the second case, all the parameters are independent and the flow is strongly influenced by the dissymmetry. This time, the optimization results are a clear improvement over the original machine.
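
    A minimal PSO loop (a generic sketch with a stand-in objective, not the coupled CFD/lumped thermal model of the paper) shows the structure of the optimizer used:

        import numpy as np

        rng = np.random.default_rng(3)

        def cost(x):
            # Stand-in for the lumped thermal model (e.g., a peak temperature as a
            # function of two geometric parameters); replace with the real model.
            return (x[..., 0] - 0.3) ** 2 + 2.0 * (x[..., 1] + 0.5) ** 2

        n, dim, iters = 30, 2, 100
        lo, hi = -1.0, 1.0
        x = rng.uniform(lo, hi, (n, dim))        # particle positions
        v = np.zeros_like(x)                     # particle velocities
        pbest, pbest_f = x.copy(), cost(x)       # personal bests
        gbest = pbest[pbest_f.argmin()].copy()   # global best

        w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration weights
        for _ in range(iters):
            r1, r2 = rng.random((2, n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            f = cost(x)
            better = f < pbest_f
            pbest[better], pbest_f[better] = x[better], f[better]
            gbest = pbest[pbest_f.argmin()].copy()
        print("optimum found:", gbest, "with cost", pbest_f.min())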

  2. Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf

    2012-01-01

    (n log n) in the algebraic computation tree model and describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ ... time algorithm to d... time. We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L₁ plane.

  3. Numerical optimization of Combined Heat and Power Organic Rankine Cycles – Part A: Design optimization

    International Nuclear Information System (INIS)

    Martelli, Emanuele; Capra, Federico; Consonni, Stefano

    2015-01-01

    This two-part paper proposes an approach based on state-of-the-art numerical optimization methods for simultaneously determining the most profitable design and part-load operation of Combined Heat and Power Organic Rankine Cycles. Compared to the usual design practice, the important advantages of the proposed approach are (i) considering the part-load performance of the ORC at the design stage, and (ii) optimizing not only the cycle variables but also the main turbine design variables (number of stages, stage loads, rotational speed). In this first part (Part A), the design model and the optimization algorithm are presented and tested on a real-world test case. PGS-COM, a recently proposed hybrid derivative-free algorithm, allows the challenging non-smooth black-box problem to be tackled efficiently. - Highlights: • Algorithm for the simultaneous optimization of the Organic Rankine Cycle and turbine. • Thermodynamic and economic models of the boiler, cycle and turbine are developed. • The non-smooth black-box optimization problem is successfully tackled with PGS-COM. • Test cases show that the algorithm returns optimal solutions within 4 min. • Toluene outperforms MDM (a siloxane) in terms of efficiency and costs.

  4. Development of a coupled tendon-driven 3D multi-joint manipulator. Investigation of tension transfer efficiency, optimal reel arrangement and tip positioning accuracy

    International Nuclear Information System (INIS)

    Horigome, Atsushi; Yamada, Hiroya; Hirose, Shigeo; Sen, Shin; Endo, Gen

    2017-01-01

    Long-reach robotic manipulators are expected to be used in spaces where humans cannot work, such as nuclear power plant disaster areas. We previously proposed a coupled tendon-driven articulated manipulator, '3D CT-Arm', and developed a preliminary prototype, 'Mini 3D CT-Arm', whose arm is 2.4 m long and 0.3 m wide. To advance the development of '3D CT-Arm', we investigated the tension transfer efficiency of a tendon through pulleys, the arrangement of the maximum number of reels in a limited space, and the tip positioning accuracy. Through extensive transfer efficiency experiments, we conclude that the tension transfer efficiency of '3D CT-Arm' can exceed 88% in the worst case. We investigated non-interfering reel arrangements in the base by exhaustive search for up to 10 reels. In all simulations, a V-shaped or W-shaped arrangement supported the most reels in the limited space; we therefore conclude that this is the optimal reel arrangement. Finally, we carried out a positioning accuracy experiment with 'Mini 3D CT-Arm' using a motion capture system. Although the tip position had a 2 to 41 mm error between the desired value and the value measured by potentiometer, an error of 29 to 95 mm was measured between the desired value and the value measured by the motion capture system. (author)

  5. An optimal algorithm for configuring delivery options of a one-dimensional intensity-modulated beam

    International Nuclear Information System (INIS)

    Luan Shuang; Chen, Danny Z; Zhang, Li; Wu Xiaodong; Yu, Cedric X

    2003-01-01

    The problem of generating delivery options for one-dimensional intensity-modulated beams (1D IMBs) arises in intensity-modulated radiation therapy. In this paper, we present an algorithm with the optimal running time, based on the 'rightmost-preference' method, for generating all distinct delivery options for an arbitrary 1D IMB. The previously best known method for generating delivery options for a 1D IMB with N left leaf positions and N right leaf positions is a 'brute-force' solution, which first generates all N! possible combinations of the left and right leaf positions and then removes combinations that are not physically allowed delivery options. Compared with the brute-force method, our algorithm has several advantages: (1) our algorithm runs in an optimal time that is linearly proportional to the total number of distinct delivery options that it actually produces. Note that for a 1D IMB with multiple peaks, the total number of distinct delivery options in general tends to be considerably smaller than the worst case N!. (2) Our algorithm can be adapted to generating delivery options subject to additional constraints such as the 'minimum leaf separation' constraint. (3) Our algorithm can also be used to generate random subsets of delivery options; this feature is especially useful when the 1D IMBs in question have too many delivery options for a computer to store and process. The key idea of our method is that we impose an order on how left leaf positions should be paired with right leaf positions. Experiments indicated that our rightmost-preference algorithm runs dramatically faster than the brute-force algorithm. This implies that our algorithm can handle 1D IMBs whose sizes are substantially larger than those handled by the brute-force method. Applications of our algorithm in therapeutic techniques such as intensity-modulated arc therapy and 2D modulations are also discussed
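
    The brute-force baseline the authors compare against is easy to state in code. The sketch below is that N! filter method - not the optimal rightmost-preference algorithm - with the physical constraint taken, for illustration, as each left leaf sitting strictly to the left of its paired right leaf:

        from itertools import permutations

        def brute_force_options(lefts, rights):
            # All distinct physically allowed pairings of left/right leaf positions:
            # every permutation of the right positions is matched index-wise against
            # the sorted left positions and kept only if each window is open (l < r).
            options = set()
            lefts = sorted(lefts)
            for perm in permutations(rights):
                if all(l < r for l, r in zip(lefts, perm)):
                    options.add(tuple(zip(lefts, perm)))
            return options

        # Toy 1D IMB with 3 segments (positions in cm, invented for illustration):
        lefts, rights = [0.0, 1.0, 4.0], [2.0, 3.0, 6.0]
        for opt in sorted(brute_force_options(lefts, rights)):
            print(opt)
        # Here 2 of the 3! = 6 permutations survive the filter. The
        # rightmost-preference algorithm enumerates the same set in time linear
        # in its size, avoiding the N! sweep.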

  6. Incentive Compatible and Globally Efficient Position Based Routing for Selfish Reverse Multicast in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Sarvesh Varatharajan

    2009-10-01

    Full Text Available We consider the problem of all-to-one selfish routing in the absence of a payment scheme in wireless sensor networks, where a natural model for cost is the power required to forward; we refer to the resulting game as Locally Minimum Cost Forwarding (LMCF). Our objective is to characterize equilibria and their global costs in terms of stretch and diameter, and in particular to find incentive compatible algorithms that are also close to globally optimal. We find that although the social costs of LMCF equilibria exhibit arbitrarily bad worst-case bounds, and optimal equilibria are computationally infeasible to reach, there exist greedy and local incentive compatible heuristics that achieve near-optimal global costs.

  7. Criticality: static profiling for real-time programs

    DEFF Research Database (Denmark)

    Brandner, Florian; Hepp, Stefan; Jordan, Alexander

    2014-01-01

    With the increasing performance demand in real-time systems it becomes more and more important to provide feedback to programmers and software development tools on the performance-relevant code parts of a real-time program. So far, this information was limited to an estimation of the worst-case execution time; for a more detailed view covering the entire code base, tools in the spirit of program profiling are required. This work proposes an efficient approach to compute worst-case timing information for all code parts of a program using a complementary metric, called criticality. Every statement of a program is assigned ... Experiments using well-established real-time benchmark programs show an interesting distribution of the criticality values, revealing considerable amounts of highly critical as well as uncritical code. The metric thus provides ideal information to programmers and software development tools to optimize ...

  8. Standardization and optimization of arthropod inventories-the case of Iberian spiders

    DEFF Research Database (Denmark)

    Bondoso Cardoso, Pedro Miguel

    2009-01-01

    and optimization of sampling protocols, especially for mega-diverse arthropod taxa. This study had two objectives: (1) to propose guidelines and statistical methods to improve the standardization and optimization of arthropod inventories, and (2) to propose a standardized and optimized protocol for Iberian spiders......, by finding common results between the optimal options for the different sites. The steps listed were successfully followed in the determination of a sampling protocol for Iberian spiders. A protocol with three sub-protocols of varying degrees of effort (24, 96 and 320 h of sampling) is proposed. I also...

  9. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.

  10. A Distributed Routing Scheme for Energy Management in Solar Powered Sensor Networks

    KAUST Repository

    Dehwah, Ahmad H.

    2017-10-11

    Energy management is critical for solar-powered sensor networks. In this article, we consider data routing policies to optimize the energy in solar powered networks. Motivated by multipurpose sensor networks, the objective is to find the best network policy that maximizes the minimal energy among nodes in a sensor network, over a finite time horizon, given uncertain energy input forecasts. First, we derive the optimal policy in certain special cases using forward dynamic programming. We then introduce a greedy policy that is distributed and exhibits significantly lower complexity. When computationally feasible, we compare the performance of the optimal policy with the greedy policy. We also demonstrate the performance and computational complexity of the greedy policy over randomly simulated networks, and show that it yields results that are almost identical to the optimal policy, for greatly reduced worst-case computational costs and memory requirements. Finally, we demonstrate the implementation of the greedy policy on an experimental sensor network.
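
    The greedy idea can be illustrated on a toy instance (ours, not the article's model): each source picks, among its precomputed routes to the sink, the one that keeps the minimum residual energy in the network highest. Energy-input forecasts and multi-hop scheduling details are omitted for brevity.

        # Each source uses one of several precomputed routes to the sink; a route
        # drains TX from every node along it (invented topology and numbers).
        routes = {
            "A": [["A", "C"], ["A", "D"]],
            "B": [["B", "C"], ["B", "D"]],
        }
        energy = {"A": 4.0, "B": 4.0, "C": 3.0, "D": 5.0}
        TX = 1.0

        def pick_route(src):
            # Greedy rule: choose the route maximizing the post-transmission
            # minimum energy over all nodes.
            def min_after(route):
                trial = dict(energy)
                for n in route:
                    trial[n] -= TX
                return min(trial.values())
            return max(routes[src], key=min_after)

        for step in range(3):
            for src in routes:
                for n in pick_route(src):
                    energy[n] -= TX
            print(f"after step {step + 1}: min residual energy = "
                  f"{min(energy.values()):.1f}")
        # The drain stays balanced across relays C and D; no node is starved,
        # which is the max-min behaviour the greedy policy aims for.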

  11. Sparking-out optimization while surface grinding aluminum alloy 1933T2 parts using fuzzy logic

    Science.gov (United States)

    Soler, Ya I.; Salov, V. M.; Kien Nguyen, Chi

    2018-03-01

    The article presents the results of a search for optimal sparking-out strokes when surface grinding aluminum parts with highly porous Norton wheels of black silicon carbide 37C80K12VP, using fuzzy logic. The topography of the ground surface is evaluated according to the following parameters: roughness – Ra, Rmax, Sm; indicators of flatness deviation – EFEmax, EFEa, EFEq; and microhardness HV; each of these parameters is represented by two measures, of position and of dispersion. The simulation results of fuzzy logic in the MATLAB environment establish that, when grinding alloy 1933T2, the best integral performance evaluation of sparking-out was given to two double strokes (d = 0.827) and the worst to three (d = 0.405).

  12. Interactively exploring optimized treatment plans

    International Nuclear Information System (INIS)

    Rosen, Isaac; Liu, H. Helen; Childress, Nathan; Liao Zhongxing

    2005-01-01

    Purpose: A new paradigm for treatment planning is proposed that embodies the concept of interactively exploring the space of optimized plans. In this approach, treatment planning ignores the details of individual plans and instead presents the physician with clinical summaries of sets of solutions to well-defined clinical goals in which every solution has been optimized in advance by computer algorithms. Methods and materials: Before interactive planning, sets of optimized plans are created for a variety of treatment delivery options and critical structure dose-volume constraints. Then, the dose-volume parameters of the optimized plans are fit to linear functions. These linear functions are used to show in real time how the target dose-volume histogram (DVH) changes as the DVHs of the critical structures are changed interactively. A bitmap of the space of optimized plans is used to restrict the feasible solutions. The physician selects the critical structure dose-volume constraints that give the desired dose to the planning target volume (PTV) and then those constraints are used to create the corresponding optimized plan. Results: The method is demonstrated using prototype software, Treatment Plan Explorer (TPEx), and a clinical example of a patient with a tumor in the right lung. For this example, the delivery options included 4 open beams, 12 open beams, 4 wedged beams, and 12 wedged beams. Beam directions and relative weights were optimized for a range of critical structure dose-volume constraints for the lungs and esophagus. Cord dose was restricted to 45 Gy. Using the interactive interface, the physician explored how the tumor dose changed as critical structure dose-volume constraints were tightened or relaxed and selected the best compromise for each delivery option. The corresponding treatment plans were calculated and compared with the linear parameterization presented to the physician in TPEx. The linear fits were best for the maximum PTV dose and worst
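
    The parameterization step lends itself to a few lines of code: fit the target metric as a linear function of the critical-structure constraint levels over a set of precomputed optimized plans, then evaluate the fit in real time as the physician moves the constraints. The numbers below are invented placeholders, not the paper's data.

        import numpy as np

        # Hypothetical precomputed optimized plans: columns are the lung and
        # esophagus dose-volume constraint levels (Gy); y is the resulting
        # PTV D95 (Gy) of each optimized plan.
        X = np.array([[20.0, 55.0], [25.0, 55.0], [20.0, 60.0],
                      [25.0, 60.0], [30.0, 65.0], [30.0, 55.0]])
        y = np.array([60.1, 62.0, 61.0, 63.2, 65.9, 64.0])

        # Fit PTV D95 ~ a*lung + b*esophagus + c by least squares.
        A = np.column_stack([X, np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        def predicted_d95(lung, eso):
            # Real-time surrogate used while the constraints are dragged.
            return coef[0] * lung + coef[1] * eso + coef[2]

        print(f"predicted PTV D95 at (lung=22 Gy, eso=58 Gy): "
              f"{predicted_d95(22, 58):.1f} Gy")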

  13. On generalized semi-infinite optimization and bilevel optimization

    NARCIS (Netherlands)

    Stein, O.; Still, Georg J.

    2000-01-01

    The paper studies the connections and differences between bilevel problems (BL) and generalized semi-infinite problems (GSIP). Under natural assumptions (GSIP) can be seen as a special case of a (BL). We consider the so-called reduction approach for (BL) and (GSIP) leading to optimality conditions

  14. Multilevel geometry optimization

    Science.gov (United States)

    Rodgers, Jocelyn M.; Fast, Patton L.; Truhlar, Donald G.

    2000-02-01

    Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol.

  15. Deterministic mean-variance-optimal consumption and investment

    DEFF Research Database (Denmark)

    Christiansen, Marcus; Steffensen, Mogens

    2013-01-01

    In dynamic optimal consumption–investment problems one typically aims to find an optimal control from the set of adapted processes. This is also the natural starting point in case of a mean-variance objective. In contrast, we solve the optimization problem with the special feature that the consumption rate and the investment proportion are constrained to be deterministic processes. As a result we get rid of a series of unwanted features of the stochastic solution including diffusive consumption, satisfaction points and consistency problems. Deterministic strategies typically appear in unit-linked life insurance contracts, where the life-cycle investment strategy is age dependent but wealth independent. We explain how optimal deterministic strategies can be found numerically and present an example from life insurance where we compare the optimal solution with suboptimal deterministic strategies...

  16. The prevalence of mutations in KCNQ1, KCNH2, and SCN5A in an unselected national cohort of young sudden unexplained death cases

    DEFF Research Database (Denmark)

    Winkel, Bo Gregers; Larsen, Maiken Kudahl; Berge, Knut Erik

    2012-01-01

    INTRODUCTION: Sudden unexplained deaths account for one-third of all sudden natural deaths in the young (1-35 years). Hitherto, the prevalence of genopositive cases has primarily been based on deceased persons referred for postmortem genetic testing. These deaths potentially may represent the worst...

  17. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    Energy Technology Data Exchange (ETDEWEB)

    Dall'Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zhao, Changhong [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Zamzam, Ahmed S. [University of Minnesota]; Sidiropoulos, Nicholas D. [University of Minnesota]; Taylor, Josh A. [University of Toronto]

    2018-01-12

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and the coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.
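
    The distributed coordination in this record rests on consensus ADMM: each operator minimizes its own cost in a shared coupling variable, and dual updates drive the copies to agreement. The following is a minimal two-agent sketch under toy quadratic costs, not NREL's OWPF solver; the cost functions and the interpretation of the variable as pump power are assumptions for illustration.

    ```python
    # Minimal two-agent consensus ADMM sketch (illustrative, not the paper's
    # implementation). The "water" agent minimizes 2*(p-3)^2 and the "power"
    # agent minimizes (p-1)^2 over a shared coupling variable p (e.g., pump
    # power); ADMM drives their local copies to agreement.
    rho = 1.0

    def local_update(a, target, z, lam):
        # closed-form argmin of a*(p-target)^2 + lam*p + (rho/2)*(p-z)^2
        return (2 * a * target - lam + rho * z) / (2 * a + rho)

    z = 0.0               # consensus value of the coupling variable
    lam_w = lam_p = 0.0   # dual variables for the two agents
    for _ in range(100):
        p_w = local_update(2.0, 3.0, z, lam_w)               # water-side update
        p_p = local_update(1.0, 1.0, z, lam_p)               # power-side update
        z = 0.5 * (p_w + lam_w / rho + p_p + lam_p / rho)    # consensus step
        lam_w += rho * (p_w - z)                             # dual ascent
        lam_p += rho * (p_p - z)

    print(f"agreed coupling variable ~ {z:.3f}")  # analytic optimum is 7/3
    ```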

  18. Solar Collector Design Optimization: A Hands-on Project Case Study

    Science.gov (United States)

    Birnie, Dunbar P., III; Kaz, David M.; Berman, Elena A.

    2012-01-01

    A solar power collector optimization design project has been developed for use in undergraduate classrooms and/or laboratories. The design optimization depends on understanding the current-voltage characteristics of the starting photovoltaic cells as well as how the cell's electrical response changes with increased light illumination. Students…

  19. Guillain-Barre Syndrome in Postpartum Period: Rehabilitation Issues and Outcome - Three Case Reports.

    Science.gov (United States)

    Gupta, Anupam; Patil, Maitreyi; Khanna, Meeka; Krishnan, Rashmi; Taly, Arun B

    2017-01-01

    We report three females who developed Guillain-Barre Syndrome in postpartum period (within 6 weeks of delivery) and were admitted in the Neurological Rehabilitation Department for rehabilitation after the initial diagnosis and treatment in the Department of Neurology. The first case, axonal variant (acute motor axonal neuropathy [AMAN]) had worst presentation at the time of admission, recovered well by the time of discharge. The second case, acute motor sensory axonal neuropathy variant and the third case, AMAN variant presented at the late postpartum period. Medical treatment was sought much later due to various reasons and both the patients had an incomplete recovery at discharge. Apart from their presentations, rehabilitation management is also discussed in some detail.

  20. Measuring efficiency of university-industry Ph.D. projects using best worst method.

    Science.gov (United States)

    Salimi, Negin; Rezaei, Jafar

    A collaborative Ph.D. project, carried out by a doctoral candidate, is a type of collaboration between university and industry. Due to the importance of such projects, researchers have considered different ways to evaluate their success, with a focus on the outputs of these projects. However, what has been neglected is the other side of the coin: the inputs. The main aim of this study is to incorporate both the inputs and outputs of these projects into a more meaningful measure called efficiency. A ratio of the weighted sum of outputs over the weighted sum of inputs identifies the efficiency of a Ph.D. project. The weights of the inputs and outputs can be identified using a multi-criteria decision-making (MCDM) method. Data on inputs and outputs are collected from 51 Ph.D. candidates who graduated from Eindhoven University of Technology. The weights are identified using a new MCDM method called the Best Worst Method (BWM). Because there may be differences between the opinions of Ph.D. candidates and supervisors on weighting the inputs and outputs, data for BWM are collected from both groups. It is interesting to see that there are differences in the level of efficiency from the two perspectives, because of the weight differences. Moreover, a comparison between the efficiency scores of these projects and their success scores reveals differences that may have significant implications. A sensitivity analysis reveals the inputs and outputs that contribute most to efficiency.
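
    The efficiency measure itself is simple to compute once weights are in hand. A minimal sketch follows; the criteria names, weights, and project scores are hypothetical stand-ins for the BWM-elicited values used in the study.

    ```python
    # Sketch of the efficiency measure: weighted sum of outputs over weighted
    # sum of inputs. Criteria names, weights, and values are hypothetical; in
    # the study the weights come from the Best Worst Method (BWM).
    input_weights  = {"candidate_effort": 0.5, "supervision_hours": 0.3, "funding": 0.2}
    output_weights = {"papers": 0.6, "patents": 0.1, "industry_uptake": 0.3}

    def efficiency(inputs, outputs):
        """Weighted-output over weighted-input ratio (values normalized to [0, 1])."""
        num = sum(output_weights[k] * v for k, v in outputs.items())
        den = sum(input_weights[k] * v for k, v in inputs.items())
        return num / den

    # One hypothetical Ph.D. project:
    score = efficiency(
        {"candidate_effort": 0.8, "supervision_hours": 0.5, "funding": 0.6},
        {"papers": 0.7, "patents": 0.2, "industry_uptake": 0.5},
    )
    print(f"efficiency: {score:.2f}")
    ```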

  1. Necessary optimality conditions of the second order in a stochastic optimal control problem with delay argument

    Directory of Open Access Journals (Sweden)

    Rashad O. Mastaliev

    2016-12-01

    Full Text Available The optimal control problem for nonlinear stochastic systems whose mathematical model is given by an Ito stochastic differential equation with delayed argument is considered. Assuming that the control region is open, we obtain first- and second-order necessary optimality conditions by means of the first and second (classical) variations of the quality functional. In the particular case we derive the stochastic analog of the Legendre–Clebsch condition and some constructively verifiable conclusions from the second-order necessary condition. We investigate the Legendre–Clebsch conditions in the degenerate case and obtain necessary optimality conditions for a singular control, in the classical sense.

  2. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning

    International Nuclear Information System (INIS)

    Chen Wei; Craft, David; Madden, Thomas M.; Zhang, Kewu; Kooy, Hanne M.; Herman, Gabor T.

    2010-01-01

    Purpose: To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. Methods: The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. Results: The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK's interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. Conclusions: The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.
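
    The paper's solver is a specialized projection method for convex IMPT formulations; as a generic stand-in (not the authors' algorithm), the sketch below runs projected gradient descent on a box-constrained least-squares problem, where the influence matrix, prescription, and bounds are all synthetic.

    ```python
    import numpy as np

    # Generic projected-gradient sketch, for illustration only; the record's
    # solver is a dedicated projection method for IMPT, not this routine.
    # Minimize ||A x - d||^2 subject to 0 <= x <= x_max (beamlet intensities).
    rng = np.random.default_rng(0)
    A = rng.random((40, 20))      # hypothetical dose-influence matrix
    d = rng.random(40)            # hypothetical prescribed dose vector
    x_max = 1.0

    x = np.zeros(20)
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = gradient Lipschitz constant
    for _ in range(500):
        grad = A.T @ (A @ x - d)
        x = np.clip(x - step * grad, 0.0, x_max)   # projection onto the box
    print("residual:", np.linalg.norm(A @ x - d))
    ```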

  3. A fast optimization algorithm for multicriteria intensity modulated proton therapy planning.

    Science.gov (United States)

    Chen, Wei; Craft, David; Madden, Thomas M; Zhang, Kewu; Kooy, Hanne M; Herman, Gabor T

    2010-09-01

    To describe a fast projection algorithm for optimizing intensity modulated proton therapy (IMPT) plans and to describe and demonstrate the use of this algorithm in multicriteria IMPT planning. The authors develop a projection-based solver for a class of convex optimization problems and apply it to IMPT treatment planning. The speed of the solver permits its use in multicriteria optimization, where several optimizations are performed which span the space of possible treatment plans. The authors describe a plan database generation procedure which is customized to the requirements of the solver. The optimality precision of the solver can be specified by the user. The authors apply the algorithm to three clinical cases: A pancreas case, an esophagus case, and a tumor along the rib cage case. Detailed analysis of the pancreas case shows that the algorithm is orders of magnitude faster than industry-standard general purpose algorithms (MOSEK'S interior point optimizer, primal simplex optimizer, and dual simplex optimizer). Additionally, the projection solver has almost no memory overhead. The speed and guaranteed accuracy of the algorithm make it suitable for use in multicriteria treatment planning, which requires the computation of several diverse treatment plans. Additionally, given the low memory overhead of the algorithm, the method can be extended to include multiple geometric instances and proton range possibilities, for robust optimization.

  4. Optimal Design of Porous Materials

    DEFF Research Database (Denmark)

    Andreassen, Erik

    The focus of this thesis is topology optimization of material microstructures. That is, creating new materials, with attractive properties, by combining classic materials in periodic patterns. First, large-scale topology optimization is used to design complicated three-dimensional materials. ... Throughout the thesis, extra attention is given to obtaining structures that can be manufactured. That is also the case in the final part, where a simple multiscale method for the optimization of structural damping is presented. The method can be used to obtain an optimized component with structural details...

  5. Aero-structural optimization of wind turbine blades using a reduced set of design load cases including turbulence

    DEFF Research Database (Denmark)

    Sessarego, Matias; Shen, Wen Zhong

    2018-01-01

    Modern wind turbine aero-structural blade design codes generally use a smaller fraction of the full design load base (DLB) or neglect turbulent inflow as defined by the International Electrotechnical Commission standards. The current article describes an automated blade design optimization method based on surrogate modeling that includes a very large number of design load cases (DLCs) including turbulence. In the present work, 325 DLCs representative of the full DLB are selected based on the message-passing-interface (MPI) limitations in Matlab. Other methods are currently being investigated, e.g. a Python MPI implementation, to overcome the limitations in Matlab MPI and ultimately achieve a full DLB optimization framework. The reduced DLB and the annual energy production are computed using the state-of-the-art aero-servo-elastic tool HAWC2. Furthermore, some of the interior dimensions of the blade...

  6. TOPFARM wind farm optimization tool

    DEFF Research Database (Denmark)

    Réthoré, Pierre-Elouan; Fuglsang, Peter; Larsen, Torben J.

    A wind farm optimization framework is presented in detail and demonstrated on two test cases: 1) Middelgrunden and 2) Stags Holt/Coldham. A detailed flow model describing the instationary flow within a wind farm is used together with an aeroelastic model to determine production and fatigue loading of wind farm wind turbines. Based on generic load cases, the wind farm production and fatigue evaluations are subsequently condensed in a large pre-calculated database for rapid calculation of lifetime equivalent loads and energy production in the optimization loop. The objective function defining ... The Middelgrunden test case resulted in an improvement of the financial balance of 2.1 M€ originating from a very large increase in the energy production value of 9.3 M€ mainly counterbalanced by increased electrical grid costs. The Stags Holt/Coldham test case resulted in an improvement of the financial balance...

  7. Optimizing detectability

    International Nuclear Information System (INIS)

    Anon.

    1992-01-01

    HPLC is useful for trace and ultratrace analyses of a variety of compounds. For most applications, HPLC is useful for determinations in the nanogram-to-microgram range; however, detection limits of a picogram or less have been demonstrated in certain cases. These determinations require state-of-the-art capability; several examples of such determinations are provided in this chapter. As mentioned before, to detect and/or analyze low quantities of a given analyte at submicrogram or ultratrace levels, it is necessary to optimize the whole separation system, including the quantity and type of sample, sample preparation, HPLC equipment, chromatographic conditions (including column), choice of detector, and quantitation techniques. A limited discussion is provided here for optimization based on theoretical considerations, chromatographic conditions, detector selection, and miscellaneous approaches to detectability optimization. 59 refs

  8. Multilevel geometry optimization

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, Jocelyn M. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States); Fast, Patton L. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States); Truhlar, Donald G. [Department of Chemistry and Supercomputer Institute, University of Minnesota, Minneapolis, Minnesota 55455-0431 (United States)

    2000-02-15

    Geometry optimization has been carried out for three test molecules using six multilevel electronic structure methods, in particular Gaussian-2, Gaussian-3, multicoefficient G2, multicoefficient G3, and two multicoefficient correlation methods based on correlation-consistent basis sets. In the Gaussian-2 and Gaussian-3 methods, various levels are added and subtracted with unit coefficients, whereas the multicoefficient Gaussian-x methods involve noninteger parameters as coefficients. The multilevel optimizations drop the average error in the geometry (averaged over the 18 cases) by a factor of about two when compared to the single most expensive component of a given multilevel calculation, and in all 18 cases the accuracy of the atomization energy for the three test molecules improves, with an average improvement of 16.7 kcal/mol. (c) 2000 American Institute of Physics.

  9. Optimal estimation of spatially variable recharge and transmissivity fields under steady-state groundwater flow. Part 2. Case study

    Science.gov (United States)

    Graham, Wendy D.; Neff, Christina R.

    1994-05-01

    The first-order analytical solution of the inverse problem for estimating spatially variable recharge and transmissivity under steady-state groundwater flow, developed in Part 1, is applied to the Upper Floridan Aquifer in NE Florida. Parameters characterizing the statistical structure of the log-transmissivity and head fields are estimated from 152 measurements of transmissivity and 146 measurements of hydraulic head available in the study region. Optimal estimates of the recharge, transmissivity and head fields are produced throughout the study region by conditioning on the nearest 10 available transmissivity measurements and the nearest 10 available head measurements. Head observations are shown to provide valuable information for estimating both the transmissivity and the recharge fields. Accurate numerical groundwater model predictions of the aquifer flow system are obtained using the optimal transmissivity and recharge fields as input parameters, and the optimal head field to define boundary conditions. For this case study, both the transmissivity field and the uncertainty of the transmissivity field prediction are poorly estimated when the effects of random recharge are neglected.

  10. An Empirical Comparison of Discrete Choice Experiment and Best-Worst Scaling to Estimate Stakeholders' Risk Tolerance for Hip Replacement Surgery.

    Science.gov (United States)

    van Dijk, Joris D; Groothuis-Oudshoorn, Catharina G M; Marshall, Deborah A; IJzerman, Maarten J

    2016-06-01

    Previous studies have been inconclusive regarding the validity and reliability of preference elicitation methods. The aim of this study was to compare the metrics obtained from a discrete choice experiment (DCE) and profile-case best-worst scaling (BWS) with respect to hip replacement. We surveyed the general US population of men aged 45 to 65 years, and potentially eligible for hip replacement surgery. The survey included sociodemographic questions, eight DCE questions, and twelve BWS questions. Attributes were the probability of a first and second revision, pain relief, ability to participate in sports and perform daily activities, and length of hospital stay. Conditional logit analysis was used to estimate attribute weights, level preferences, and the maximum acceptable risk (MAR) for undergoing revision surgery in six hypothetical treatment scenarios with different attribute levels. A total of 429 (96%) respondents were included. Comparable attribute weights and level preferences were found for both BWS and DCE. Preferences were greatest for hip replacement surgery with high pain relief and the ability to participate in sports and perform daily activities. Although the estimated MARs for revision surgery followed the same trend, the MARs were systematically higher in five of the six scenarios using DCE. This study confirms previous findings that BWS or DCEs are comparable in estimating attribute weights and level preferences. However, the risk tolerance threshold based on the estimation of MAR differs between these methods, possibly leading to inconsistency in comparing treatment scenarios. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  11. Socially Optimal Taxation of Alcohol: The Case of Czech Beer

    OpenAIRE

    Janda, Karel; Mikolasek, Jakub; Netuka, Martin

    2010-01-01

    The proposed paper belongs to the literature on food demand and optimal taxation and to the literature dealing with the economics of alcohol production and consumption. We investigate the question of optimal taxation for a commodity whose consumption has positive and negative features both for the individual consumer and for society. The commodity we analyze is Czech beer.

  12. Finding Multiple Optimal Solutions to Optimal Load Distribution Problem in Hydropower Plant

    Directory of Open Access Journals (Sweden)

    Xinhao Jiang

    2012-05-01

    Full Text Available Optimal load distribution (OLD) among the generator units of a hydropower plant is a vital task for hydropower generation scheduling and management. Traditional optimization methods for solving this problem focus on finding a single optimal solution. However, many practical constraints on hydropower plant operation are very difficult, if not impossible, to model, and the optimal solution found by those models might be of limited practical use. This motivates us to find multiple optimal solutions to the OLD problem, which can provide more flexible choices for decision-making. Based on a special dynamic programming model, we use a modified shortest path algorithm to produce multiple solutions to the problem. It is shown that multiple optimal solutions exist for the case study of China’s Geheyan hydropower plant, and they are valuable for assessing the stability of generator units, showing the potential to reduce the number of times the units pass through their vibration areas.
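
    The key trick in such stagewise dynamic programs is to keep every optimal predecessor, not just one, and backtrack all tied paths. The sketch below enumerates all minimum-cost load distributions over a tiny hypothetical cost table; it mirrors the spirit of the paper's modified shortest-path algorithm rather than its exact implementation.

    ```python
    # Enumerate all optimal solutions of a stagewise DP: stages are generator
    # units, states are discretized remaining load. The cost table is hypothetical.
    units = 3
    total_load = 4                      # in discrete load steps
    cost = [[0, 2, 3, 5, 8],            # cost[u][q]: cost of q load steps on unit u
            [0, 2, 4, 5, 9],
            [0, 1, 3, 6, 9]]

    memo = {}
    def solve(u, remaining):
        """Return (min cost, list of all optimal assignments) for units u.. ."""
        if u == units:
            return (0, [[]]) if remaining == 0 else (float("inf"), [])
        if (u, remaining) in memo:
            return memo[(u, remaining)]
        value, solutions = float("inf"), []
        for q in range(remaining + 1):
            sub_value, sub_solutions = solve(u + 1, remaining - q)
            cand = cost[u][q] + sub_value
            if cand < value:
                value, solutions = cand, [[q] + s for s in sub_solutions]
            elif cand == value:                       # tie: keep this branch too
                solutions += [[q] + s for s in sub_solutions]
        memo[(u, remaining)] = (value, solutions)
        return memo[(u, remaining)]

    v, sols = solve(0, total_load)
    print("minimum cost:", v, "optimal distributions:", sols)
    ```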

  13. Monitoring Churn in Wireless Networks

    Science.gov (United States)

    Holzer, Stephan; Pignolet, Yvonne Anne; Smula, Jasmin; Wattenhofer, Roger

    Wireless networks often experience a significant amount of churn, the arrival and departure of nodes. In this paper we propose a distributed algorithm for single-hop networks that detects churn and is resilient to a worst-case adversary. The nodes of the network are notified about changes quickly, in asymptotically optimal time up to an additive logarithmic overhead. We establish a trade-off between saving energy and minimizing the delay until notification for single- and multi-channel networks.

  14. Approximation and online algorithms

    OpenAIRE

    Tichý, Tomáš

    2008-01-01

    This thesis presents results of our research in the area of optimization problems with incomplete information; our research is focused on online scheduling problems. It is based on worst-case analysis of the studied problems and algorithms; thus we use methods of competitive analysis. Although there are many "real-world" industrial and theoretical applications of online scheduling problems, there are still many open problems with very simple descripti...

  15. Dynamic Planar Range Maxima Queries

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Tsakalidis, Konstantinos

    2011-01-01

    We consider the dynamic two-dimensional maxima query problem. Let P be a set of n points in the plane. A point is maximal if it is not dominated by any other point in P. We describe two data structures that support the reporting of the t maximal points that dominate a given query point, and allow for insertions and deletions of points in P. In the pointer machine model we present a linear space data structure with O(log n + t) worst case query time and O(log n) worst case update time. This is the first dynamic data structure for the planar maxima dominance query problem that achieves these bounds. In the RAM model, where the coordinates of the points are integers in the range U = {0, …, 2^w − 1}, we present a linear space data structure that supports 3-sided range maxima queries in O(log n/log log n + t) worst case time and updates in O(log n/log log n) worst case time. These are the first sublogarithmic worst case bounds for all operations in the RAM model.
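
    To make the query semantics concrete, here is a static sketch (not the dynamic data structure from the record): first compute the maximal points of P (the "staircase"), then report the maximal points dominating a query point q, i.e. those with x >= q.x and y >= q.y. Point set and query are made up.

    ```python
    # Static sketch of the maxima dominance query, for intuition only.
    def maximal_points(points):
        """Staircase of points not dominated by any other point in the set."""
        staircase, best_y = [], float("-inf")
        for x, y in sorted(points, reverse=True):   # decreasing x
            if y > best_y:                          # not dominated by anything seen
                staircase.append((x, y))
                best_y = y
        return staircase                            # y increases as x decreases

    def dominating_maxima(staircase, q):
        qx, qy = q
        return [(x, y) for x, y in staircase if x >= qx and y >= qy]

    P = [(1, 5), (2, 4), (3, 4), (4, 1), (2, 2), (5, 0)]
    S = maximal_points(P)                  # [(5, 0), (4, 1), (3, 4), (1, 5)]
    print(dominating_maxima(S, (2, 3)))    # -> [(3, 4)]
    ```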

  16. Guillain–Barre Syndrome in Postpartum Period: Rehabilitation Issues and Outcome – Three Case Reports

    Science.gov (United States)

    Gupta, Anupam; Patil, Maitreyi; Khanna, Meeka; Krishnan, Rashmi; Taly, Arun B.

    2017-01-01

    We report three females who developed Guillain–Barre Syndrome in the postpartum period (within 6 weeks of delivery) and were admitted to the Neurological Rehabilitation Department for rehabilitation after the initial diagnosis and treatment in the Department of Neurology. The first case, an axonal variant (acute motor axonal neuropathy [AMAN]), had the worst presentation at the time of admission but recovered well by the time of discharge. The second case, an acute motor sensory axonal neuropathy variant, and the third case, an AMAN variant, presented in the late postpartum period. Medical treatment was sought much later due to various reasons, and both patients had an incomplete recovery at discharge. Apart from their presentations, rehabilitation management is also discussed in some detail. PMID:28694640

  17. Guillain–Barre syndrome in postpartum period: Rehabilitation issues and outcome – Three case reports

    Directory of Open Access Journals (Sweden)

    Anupam Gupta

    2017-01-01

    Full Text Available We report three females who developed Guillain–Barre Syndrome in the postpartum period (within 6 weeks of delivery) and were admitted to the Neurological Rehabilitation Department for rehabilitation after the initial diagnosis and treatment in the Department of Neurology. The first case, an axonal variant (acute motor axonal neuropathy [AMAN]), had the worst presentation at the time of admission but recovered well by the time of discharge. The second case, an acute motor sensory axonal neuropathy variant, and the third case, an AMAN variant, presented in the late postpartum period. Medical treatment was sought much later due to various reasons, and both patients had an incomplete recovery at discharge. Apart from their presentations, rehabilitation management is also discussed in some detail.

  18. Stiffened Composite Fuselage Barrel Optimization

    Science.gov (United States)

    Movva, R. G.; Mittal, A.; Agrawal, K.; Upadhyay, C. S.

    2012-07-01

    In a typical commercial transport aircraft, stiffened skin panels and frames contribute around 40% of the fuselage weight. In the current study, a stiffened composite fuselage skin panel optimization engine is developed to optimize the layups of composite panels and stringers using a Genetic Algorithm (GA). The skin and stringers of the fuselage section are optimized for strength and stability requirements. The GA parameters used for the optimization were selected by performing case studies on selected problems. The optimization engine facilitates trade studies for selecting the optimum ply layup and material combination for the configuration being analyzed. The optimization process is applied to a sample model and the results are presented.

  19. Portfolio Optimization in a Semi-Markov Modulated Market

    International Nuclear Information System (INIS)

    Ghosh, Mrinal K.; Goswami, Anindya; Kumar, Suresh K.

    2009-01-01

    We address a portfolio optimization problem in a semi-Markov modulated market. We study both terminal expected utility optimization on a finite time horizon and risk-sensitive portfolio optimization on finite and infinite time horizons. We obtain optimal portfolios in the relevant cases. A numerical procedure is also developed to compute the optimal expected terminal utility for the finite horizon problem.

  20. Sequential Optimization Methods for Augmentation of Marine Enzymes Production in Solid-State Fermentation: l-Glutaminase Production a Case Study.

    Science.gov (United States)

    Sathish, T; Uppuluri, K B; Veera Bramha Chari, P; Kezia, D

    There is a growing worldwide market for l-glutaminase due to its relevant industrial applications. Salt-tolerant l-glutaminases play a vital role in enhancing the flavor of different types of foods, such as soya sauce and tofu. This chapter presents economically viable l-glutaminase production in solid-state fermentation (SSF) by Aspergillus flavus MTCC 9972 as a case study. The enzyme production was improved following a three-step optimization process. Initially, mixture design (MD) (augmented simplex lattice design) was employed to optimize the solid substrate mixture. A solid substrate mixture consisting of 59:41 wheat bran and Bengal gram husk gave higher amounts of l-glutaminase. Glucose and l-glutamine were screened as the best additional carbon and nitrogen sources for l-glutaminase production with the help of a Plackett-Burman design (PBD). l-Glutamine also acts as a nitrogen source as well as an inducer of l-glutaminase secretion from A. flavus MTCC 9972. In the final step of optimization, various environmental and nutritive parameters such as pH, temperature, moisture content, inoculum concentration, and glucose and l-glutamine levels were optimized through the use of hybrid feed-forward neural networks (FFNNs) and a genetic algorithm (GA). Through the sequential optimization methods MD-PBD-FFNN-GA, l-glutaminase production in SSF could be improved 2.7-fold (453 to 1690 U/g). © 2016 Elsevier Inc. All rights reserved.

  1. Are you your own worst enemy?

    International Nuclear Information System (INIS)

    Herle, D.; Swann, A.

    2008-01-01

    Heightened attention has been placed on the need for long term sustainable energy as a result of energy cost volatility, concern over the environment, and adequate production capability. This presentation discussed opinions on renewable energy sources such as solar energy and provided information about a national proportionate quantitative online survey among 1500 Canadians that categorized consumers into three groups when it came to making purchasing or behavior decisions based on the environment. These groups included the strong environmentalist; the moderates; and the overwhelmed and unconvinced. The presentation also provided information on an international literature review on residential solar applications and the segment of early adopters in each of the three categories. Topics that were discussed under the strong environmentalist group included cost savers with environmental leanings; energy independence for the technologically minded; and new home construction solar ordinances. The presentation also addressed barriers, a value proposition, and messaging. Sales and marketing issues were also discussed along with optimal media and marketing channels for early adopters. It was concluded that while radio, print, and television are recommended media, out of home advertising and general television advertisement beyond specialty shows and channels is not

  2. Are you your own worst enemy?

    Energy Technology Data Exchange (ETDEWEB)

    Herle, D.; Swann, A. [Gandalf Group, Toronto, ON (Canada)

    2008-07-01

    Heightened attention has been placed on the need for long term sustainable energy as a result of energy cost volatility, concern over the environment, and adequate production capability. This presentation discussed opinions on renewable energy sources such as solar energy and provided information about a national proportionate quantitative online survey among 1500 Canadians that categorized consumers into three groups when it came to making purchasing or behavior decisions based on the environment. These groups included the strong environmentalist; the moderates; and the overwhelmed and unconvinced. The presentation also provided information on an international literature review on residential solar applications and the segment of early adopters in each of the three categories. Topics that were discussed under the strong environmentalist group included cost savers with environmental leanings; energy independence for the technologically minded; and new home construction solar ordinances. The presentation also addressed barriers, a value proposition, and messaging. Sales and marketing issues were also discussed along with optimal media and marketing channels for early adopters. It was concluded that while radio, print, and television are recommended media, out of home advertising and general television advertisement beyond specialty shows and channels is not.

  3. Topology optimization

    DEFF Research Database (Denmark)

    Bendsøe, Martin P.; Sigmund, Ole

    2007-01-01

    Taking as a starting point a design case for a compliant mechanism (a force inverter), the fundamental elements of topology optimization are described. The basis for the developments is a FEM format for this design problem and emphasis is given to the parameterization of design as a raster image...

  4. Performance-based Pareto optimal design

    NARCIS (Netherlands)

    Sariyildiz, I.S.; Bittermann, M.S.; Ciftcioglu, O.

    2008-01-01

    A novel approach for performance-based design is presented, where Pareto optimality is pursued. Design requirements may contain linguistic information, which is difficult to bring into computation or to estimate consistently from case to case. Fuzzy logic and soft computing are...

  5. Genetic-evolution-based optimization methods for engineering design

    Science.gov (United States)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.

  6. Equipment cost optimization

    International Nuclear Information System (INIS)

    Ribeiro, E.M.; Farias, M.A.; Dreyer, S.R.B.

    1995-01-01

    Considering the importance of the cost of material and equipment in the overall cost profile of an oil company, which in the case of Petrobras represents approximately 23% of the total operational cost or 10% of sales, an organization for the optimization of such costs has been established within Petrobras. Programs are developed aiming at: optimization of the life-cycle cost of material and equipment; optimization of industrial process costs through material development. This paper describes the methodology used in the management of the development programs and presents some examples of concluded and ongoing programs, which are conducted in permanent cooperation with suppliers, technical laboratories and research institutions and have shown relevant results.

  7. Optimal health insurance: the case of observable, severe illness.

    Science.gov (United States)

    Chernew, M E; Encinosa, W E; Hirth, R A

    2000-09-01

    We explore optimal cost-sharing provisions for insurance contracts when individuals have observable, severe diseases with a discrete number of medically appropriate treatment options. Variation in preferences for alternative treatments is unobserved by the insurer and non-contractible. Interest in such situations is increasingly common, exemplified by disease carve-out programs and shared decision-making (SDM) tools. We demonstrate that optimal insurance charges a copay to patients choosing the high-cost treatment and provides consumers of the low-cost treatment a cash payment. A simulation of the effect of such a policy, based on prostate cancer, indicates a substantial reduction in moral hazard.

  8. Timing Constraints Based High Performance Des Design And Implementation On 28nm FPGA

    DEFF Research Database (Denmark)

    Thind, Vandana; Pandey, Sujeet; Hussain, Dil muhammed Akbar

    2018-01-01

    In this work, we implement the DES algorithm on a 28nm Artix-7 FPGA. To achieve the high-performance design goal, we use minimum period, maximum frequency, minimum low pulse, and minimum high pulse for different cases of worst-case slack, maximum delay, setup time, hold time, and data skew path. The cases on which the analysis is done are worst-case slack, best-case achievable, timing error, and timing score, which help in differentiating the amount of timing constraint at two different frequencies. We observe that in the timing analysis there is a maximum of 19.56% variation in worst-case slack, 0...

  9. Coordinated Direct and Relay Transmission with Linear Non-Regenerative Relay Beamforming

    DEFF Research Database (Denmark)

    Sun, Fan; De Carvalho, Elisabeth; Popovski, Petar

    2012-01-01

    Joint processing of multiple communication flows in wireless systems has given rise to a number of novel transmission techniques, notably two-way relaying, but also more general traffic scenarios, such as coordinated direct and relay (CDR) transmissions. In a CDR scheme the relay has a central role in managing the interference and boosting the overall system performance. In this letter we consider the case in which an amplify-and-forward relay has multiple antennas and can use beamforming to support the coordinated transmissions. We focus on one representative traffic type with one uplink user and one downlink user. Two different criteria for relay beamforming are analyzed: maximal weighted sum-rate and maximization of the worst-case weighted SNR. We propose iterative optimal solutions, as well as low-complexity near-optimal solutions.

  10. Optimal Discount Rates for Government Projects

    OpenAIRE

    Park, Sangkyun

    2012-01-01

    Project selection based on the net present value can be optimal only if the discount rate is optimal. The optimal discount rate for a government project can be a risk-free rate, a comparable market rate (market interest rate corresponding to the risk of cash flows to the government), or an adjusted market rate, depending on circumstances. This paper clarifies the conditions for each case. Provided that the optimal discount rate is the comparable market rate, it varies across intervention meth...

  11. Optimal perturbations for nonlinear systems using graph-based optimal transport

    Science.gov (United States)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
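
    The transport primitive underlying this record is discrete Monge-Kantorovich optimal transport with quadratic ground cost. As a tiny stand-in for the paper's graph-based pseudo-time flow formulation, the sketch below solves the classical OT linear program between two small discrete measures with SciPy; the supports and masses are made up.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Tiny discrete Monge-Kantorovich problem with quadratic ground cost
    # (illustrative; the paper's graph-flow formulation is more elaborate).
    x_src = np.array([0.0, 1.0, 2.0])   # support of the initial measure
    x_dst = np.array([0.5, 1.5])        # support of the final measure
    mu = np.array([0.3, 0.4, 0.3])      # initial masses
    nu = np.array([0.6, 0.4])           # final masses

    C = (x_src[:, None] - x_dst[None, :]) ** 2   # quadratic cost, shape (3, 2)
    n, m = C.shape

    # Equality constraints on the flattened plan T.ravel(): row sums = mu,
    # column sums = nu, with T_ij >= 0.
    A_eq = np.zeros((n + m, n * m))
    for i in range(n):
        A_eq[i, i * m:(i + 1) * m] = 1.0   # mass leaving source i
    for j in range(m):
        A_eq[n + j, j::m] = 1.0            # mass arriving at target j
    b_eq = np.concatenate([mu, nu])

    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    print("optimal cost:", res.fun)
    print("transport plan:\n", res.x.reshape(n, m))
    ```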

  12. Optimal control of quantum measurement

    Energy Technology Data Exchange (ETDEWEB)

    Egger, Daniel; Wilhelm, Frank [Theoretical Physics, Saarland University, 66123 Saarbruecken (Germany)

    2015-07-01

    Pulses to steer the time evolution of quantum systems can be designed with optimal control theory. In most cases it is the coherent processes that can be controlled and one optimizes the time evolution towards a target unitary process, sometimes also in the presence of non-controllable incoherent processes. Here we show how to extend the GRAPE algorithm in the case where the incoherent processes are controllable and the target time evolution is a non-unitary quantum channel. We perform a gradient search on a fidelity measure based on Choi matrices. We illustrate our algorithm by optimizing a measurement pulse for superconducting phase qubits. We show how this technique can lead to large measurement contrast close to 99%. We also show, within the validity of our model, that this algorithm can produce short 1.4 ns pulses with 98.2% contrast.

  13. Truss topology optimization with discrete design variables — Guaranteed global optimality and benchmark examples

    DEFF Research Database (Denmark)

    Achtziger, Wolfgang; Stolpe, Mathias

    2007-01-01

    This problem is well-studied for continuous bar areas; we consider in this study the case of discrete areas. This problem is of major practical relevance if the truss must be built from pre-produced bars with given areas. As a special case, we consider the design problem for a single available bar area, i.e., a 0/1 problem. In contrast to the heuristic methods considered in many other approaches, our goal is to compute guaranteed globally optimal structures. This is done by a branch-and-bound method for which convergence can be proven. In this branch-and-bound framework, lower bounds of the optimal objective function values are calculated from relaxations of the original mixed-integer problems. The main intention of this paper is to provide optimal solutions for single and multiple load benchmark examples, which can be used for testing and validating other methods or heuristics for the treatment of this discrete topology design problem.

  14. Gingival recontouring by provisional implant restoration for optimal emergence profile: report of two cases.

    Science.gov (United States)

    Son, Mee-Kyoung; Jang, Hyun-Seon

    2011-12-01

    The emergence profile concept of an implant restoration is one of the most important factors for the esthetics and health of peri-implant soft tissue. This paper reports on two cases of gingival recontouring by the fabrication of a provisional implant restoration to produce an optimal emergence profile of the definitive implant restoration. After the second surgery, a preliminary impression was taken to make a soft tissue working cast. A provisional crown was fabricated on the model. The soft tissue around the implant fixture on the model was trimmed with a laboratory scalpel to produce the scalloped gingival form. Light-curing composite resin was added to fill the space between the provisional crown base and the trimmed gingiva. After 4 to 6 weeks, the final impression was taken to make a definitive implant restoration in which the soft tissue and tooth form were in harmony with the adjacent tooth. At the first insertion of the provisional restoration, blanching of the gum revealed gingival pressure. Four to six weeks after placing the provisional restoration, the gum had reformed, with harmony between the peri-implant gingiva and the adjacent dentition. Gingival recontouring with a provisional implant restoration is a non-surgical and non-procedure-sensitive method. An implant restoration with the optimal emergence profile is expected to provide superior esthetic and functional results.

  15. Optimization model using Markowitz model approach for reducing the number of dengue cases in Bandung

    Science.gov (United States)

    Yong, Benny; Chin, Liem

    2017-05-01

    Dengue fever is one of the most serious diseases, and it can cause death. Currently, Indonesia is the country with the highest number of dengue cases in Southeast Asia. Bandung is one of the cities in Indonesia that is vulnerable to dengue disease. The sub-districts in Bandung have different levels of relative risk of dengue disease. Dengue is transmitted to people by the bite of an Aedes aegypti mosquito that is infected with a dengue virus. Prevention of dengue disease relies on controlling the vector mosquito. This can be done by various methods, one of which is fogging. The efforts made by the Health Department of Bandung through fogging have been constrained by limited funds. This constraint forces the Health Department to be selective in fogging, which is only done for certain locations. As a result, many sub-districts are not handled properly by the Health Department because of the unequal distribution of activities to prevent the spread of dengue disease. Thus, a proper allocation of funds to each sub-district in Bandung is needed to prevent dengue transmission optimally. In this research, an optimization model using the Markowitz model approach is applied to determine the allocation of funds that should be given to each sub-district in Bandung. Some constraints are added to this model, and the numerical solution is solved with the generalized reduced gradient method using Solver software. The expected result of this research is that the proportion of funds given to each sub-district in Bandung corresponds to the level of risk of dengue disease in that sub-district, so that the number of dengue cases in the city can be reduced significantly.
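
    A Markowitz-style allocation of this kind treats sub-districts like assets: risk scores play the role of returns and a variance term keeps the budget from piling onto one area. The sketch below uses hypothetical risk scores and covariance, and substitutes SciPy's SLSQP for the generalized reduced gradient solver named in the record.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Markowitz-style budget allocation sketch (all data hypothetical; the
    # paper solves a similar program with the GRG method in Solver software).
    risk = np.array([0.8, 0.5, 0.3, 0.6])       # relative dengue risk per sub-district
    Sigma = np.diag([0.04, 0.02, 0.01, 0.03])   # hypothetical "risk variance"
    lam = 0.5                                   # trade-off parameter

    def objective(w):
        # Markowitz trade-off: variance-type term minus weighted risk coverage
        return w @ Sigma @ w - lam * (risk @ w)

    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]   # spend whole budget
    bounds = [(0.0, 1.0)] * len(risk)                          # no negative shares
    w0 = np.full(len(risk), 0.25)
    res = minimize(objective, w0, bounds=bounds, constraints=cons, method="SLSQP")
    print("budget share per sub-district:", np.round(res.x, 3))
    ```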

  16. Targeting the worst-off for free health care: a process evaluation in Burkina Faso.

    Science.gov (United States)

    Ridde, Valéry; Yaogo, Maurice; Kafando, Yamba; Kadio, Kadidiatou; Ouedraogo, Moctar; Bicaba, Abel; Haddad, Slim

    2011-11-01

    Effective mechanisms to exempt the indigent from user fees at health care facilities are rare in Africa. A State-led intervention (2004-2005) and two action research projects (2007-2010) were implemented in a health district in Burkina Faso to exempt the indigent from user fees. This article presents the results of the process evaluation of these three interventions. Individual and group interviews were organized with the key stakeholders (health staff, community members) to document the strengths and weaknesses of key components of the interventions (relevance and uptake of the intervention, worst-off selection and information, financial arrangements). Data was subjected to content analysis and thematic analysis. The results show that all three intervention processes can be improved. Community-based targeting was better accepted by the stakeholders than was the State-led intervention. The strengths of the community-based approach were in clearly defining the selection criteria, informing the waiver beneficiaries, using a participative process and using endogenous funding. A weakness was that using endogenous funding led to restrictive selection by the community. The community-based approach appears to be the most effective, but it needs to be improved and retested to generate more knowledge before scaling up. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Development of discrete-time H∞ filtering method for time-delay compensation of rhodium incore detectors

    International Nuclear Information System (INIS)

    Park, Moon Kyu; Kim, Yong Hee; Cha, Kune Ho; Kim, Myung Ki

    1998-01-01

    A method is described for developing an H∞ filter for the dynamic compensation of self-powered neutron detectors normally used as fixed incore instruments. An H∞ norm of the filter transfer matrix is used as the optimization criterion in the worst-case estimation error sense. Filter modeling is performed for a discrete-time model. The filter gains are optimized in the sense of the noise attenuation level of the H∞ setting. By introducing the Bounded Real Lemma, the conventional algebraic Riccati inequalities are converted into linear matrix inequalities (LMIs). Finally, the filter design problem is solved via a convex optimization framework using LMIs. The simulation results show that remarkable improvements are achieved in the filter response time and the filter design efficiency.
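
    The Bounded Real Lemma step can be sketched as a semidefinite program: for a stable discrete-time system the H∞ norm is below γ exactly when a certain block LMI in a matrix P and γ² is feasible. The CVXPY sketch below only *verifies* an attenuation level for a fixed toy system; the record's gain-synthesis LMIs are more involved, and the system matrices here are hypothetical.

    ```python
    import numpy as np
    import cvxpy as cp

    # Discrete-time bounded real lemma as an SDP (analysis, not synthesis):
    # minimize gamma^2 s.t. P > 0 and
    # [[A'PA - P + C'C, A'PB + C'D], [B'PA + D'C, B'PB + D'D - g2*I]] < 0.
    A = np.array([[0.5, 0.1], [0.0, 0.8]])   # hypothetical error dynamics
    B = np.array([[1.0], [0.5]])             # noise input
    C = np.array([[1.0, 0.0]])               # estimation error output
    D = np.array([[0.0]])
    n, m = 2, 1

    P = cp.Variable((n, n), symmetric=True)
    g2 = cp.Variable()                       # gamma^2
    lmi = cp.bmat([
        [A.T @ P @ A - P + C.T @ C, A.T @ P @ B + C.T @ D],
        [B.T @ P @ A + D.T @ C,     B.T @ P @ B + D.T @ D - g2 * np.eye(m)],
    ])
    lmi = (lmi + lmi.T) / 2                  # symmetrize for the PSD constraint
    prob = cp.Problem(cp.Minimize(g2),
                      [P >> 1e-8 * np.eye(n), lmi << -1e-8 * np.eye(n + m)])
    prob.solve(solver=cp.SCS)
    print("H-infinity norm bound:", float(np.sqrt(g2.value)))
    ```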

  18. Optimizing towing processes at airports

    OpenAIRE

    Du, Jia Yan

    2015-01-01

    This work addresses the optimization of push-back and towing processes at airports as an important part of the turnaround process. A vehicle routing based scheduling model is introduced to find a cost-optimal assignment of jobs to towing tractors in daily operations. A second model derives an investment strategy to optimize tractor fleet size and mix in the long run. Column generation heuristics are proposed as solution procedures. The thesis concludes with a case study of a major European ...

  19. Optimal observables and phase-space ambiguities

    International Nuclear Information System (INIS)

    Nachtmann, O.; Nagel, F.

    2005-01-01

    Optimal observables are known to lead to minimal statistical errors on parameters for a given normalised event distribution of a physics reaction. Thereby all statistical correlations are taken into account. Therefore, on the one hand, they are a useful tool to extract values of a set of parameters from measured data. On the other hand, one can calculate the minimal constraints on these parameters achievable by any data-analysis method for the specific reaction. In case the final states can be reconstructed without ambiguities, optimal observables have a particularly simple form. We give explicit formulae for the optimal observables for generic reactions in the case of ambiguities in the reconstruction of the final state and for a general parameterisation of the final-state phase space. (orig.)

  20. A new hybrid optimization algorithm CRO-DE for optimal coordination of overcurrent relays in complex power systems

    Directory of Open Access Journals (Sweden)

    Mohamed Zellagui

    2017-09-01

    Full Text Available The paper presents a new hybrid global optimization algorithm based on Chemical Reaction based Optimization (CRO) and Differential Evolution (DE) for nonlinear constrained optimization problems. This approach is proposed for the optimal coordination and setting of directional overcurrent relays in complex power systems. In the protection coordination problem, the objective function to be minimized is the sum of the operating times of all main relays. The optimization problem is subject to a number of constraints which are mainly focused on the operation of the backup relay, which should operate if a primary relay fails to respond to the fault near it, the Time Dial Setting (TDS), the Plug Setting (PS), and the minimum operating time of a relay. The proposed hybrid global optimization algorithm aims to minimize the total operating time of each protection relay. Two systems, the IEEE 4-bus and IEEE 6-bus models, are used as case studies to check the efficiency of the optimization algorithm. Results are obtained and presented for the CRO, DE, and hybrid CRO-DE algorithms. The results for the studied cases are compared with those obtained using other optimization algorithms: Teaching Learning-Based Optimization (TLBO), the Chaotic Differential Evolution Algorithm (CDEA), the Modified Differential Evolution Algorithm (MDEA), and hybrid optimization algorithms (PSO-DE, IA-PSO, and BFOA-PSO). From the analysis of the obtained results, it is concluded that the hybrid CRO-DE algorithm provides the best solution with the best convergence rate.
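
    The structure of the coordination problem can be illustrated with plain DE from SciPy standing in for the hybrid CRO-DE: minimize the sum of primary operating times under the IEC standard inverse-time curve, with the coordination time interval enforced by a penalty. The relay data, fault currents, and backup topology below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    # Relay coordination sketch with plain DE (a stand-in for CRO-DE).
    # IEC standard inverse-time curve: t = TDS * 0.14 / ((I_f/PS)**0.02 - 1).
    PS = np.array([1.0, 1.0])          # plug settings, fixed here for simplicity
    I_primary = np.array([8.0, 6.0])   # fault current multiples at primary relays
    I_backup = np.array([4.0, 3.0])    # same faults as seen by the backup relays
    CTI = 0.3                          # coordination time interval, seconds

    def op_time(tds, current, ps):
        return tds * 0.14 / ((current / ps) ** 0.02 - 1.0)

    def total_time(tds):
        t_prim = op_time(tds, I_primary, PS)
        # hypothetical topology: relay 2 backs up relay 1 and vice versa
        t_back = op_time(tds[::-1], I_backup, PS)
        penalty = np.sum(np.maximum(0.0, CTI - (t_back - t_prim)) ** 2) * 1e3
        return t_prim.sum() + penalty

    bounds = [(0.05, 1.1)] * 2         # typical TDS range
    res = differential_evolution(total_time, bounds, seed=1)
    print("optimal TDS:", np.round(res.x, 3), "objective:", round(res.fun, 3))
    ```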

  1. Measurement methods and optimization of radiation protection: the case of internal exposure by inhalation to natural uranium compounds

    International Nuclear Information System (INIS)

    Degrange, J.P.; Gibert, B.

    1998-01-01

    The aim of this presentation is to discuss the ability of different measurement methods (air sampling and biological examinations) to meet monitoring demands in the particular case of internal exposure by inhalation to natural uranium compounds. The realism and the sensitivity of each method are studied on the basis of the new dosimetric models of the ICRP. The ability of these methods to support the optimization of radiation protection is then discussed. (N.C.)

  2. Transportation and Production Lot-size for Sugarcane under Uncertainty of Machine Capacity

    Directory of Open Access Journals (Sweden)

    Sudtachat Kanchala

    2018-01-01

    Full Text Available The integrated transportation and production lot-size problem has an important effect on the total cost of the operation system for sugar factories. In this research, we formulate a mathematical model that combines these two problems as a two-stage stochastic programming model. In the first stage, we determine the lot size of the transportation problem and allocate a fixed number of vehicles to transport sugarcane to the mill factory. Moreover, we consider uncertainty in the machine (mill) capacities. After the machine (mill) capacities are realized, in the second stage we determine the production lot size and decide how many units of sugarcane to hold in front of the mills, based on discrete random variables for the machine (mill) capacities. We investigate the model using a small-size problem. The results show that the optimal solutions tend to choose the closest fields and the lower holding cost per unit (at fields) when transporting sugarcane to the mill factory. We compare our model with the worst-case model (full capacity); the results show that our model provides better efficiency than the worst-case model.

  3. Minimax robust relay selection based on uncertain long-term CSI

    KAUST Repository

    Nisar, Muhammad Danish

    2014-02-01

    Cooperative communications via multiple relay nodes is known to provide the benefits of increased diversity and coverage. Simultaneous transmission via multiple relays, however, requires strong coordination between nodes, either in terms of slot-based transmission or distributed space-time (ST) code implementation. Dynamically selecting a single best relay out of multiple relays and then using it alone for cooperative transmission alleviates the need for this strong coordination while still reaping the benefits of increased diversity and coverage. In this paper, we consider the design of relay selection (RS) under imperfect knowledge of long-term channel state information (CSI) at the relay nodes, and we pursue minimax optimization to arrive at a robust RS approach that promises the best guarantee on the worst-case end-to-end signal-to-noise ratio (SNR). We provide some intuitive examples and extensive simulation results, not only in terms of worst-case SNR performance but also in terms of average bit-error-rate (BER) performance, to demonstrate the benefits of the proposed minimax robust RS scheme. © 2013 IEEE.
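
    The minimax logic is: for each relay, evaluate the worst end-to-end SNR over the CSI uncertainty set, then select the relay whose worst case is best. The sketch below illustrates this with made-up nominal gains, a box uncertainty set explored by sampling, and the standard amplify-and-forward SNR expression; the record derives the worst case analytically rather than by sampling.

    ```python
    import numpy as np

    # Minimax relay selection sketch; gains, uncertainty radius, and sampling
    # are illustrative, not the paper's closed-form robust design.
    rng = np.random.default_rng(2)
    n_relays, n_samples = 4, 200
    g1 = np.array([10.0, 8.0, 12.0, 9.0])   # nominal source->relay SNRs
    g2 = np.array([6.0, 9.0, 5.0, 8.0])     # nominal relay->destination SNRs
    radius = 2.0                             # box uncertainty around nominal CSI

    def af_snr(s1, s2):
        # standard amplify-and-forward end-to-end SNR expression
        return s1 * s2 / (s1 + s2 + 1.0)

    worst = np.empty(n_relays)
    for k in range(n_relays):
        s1 = g1[k] + rng.uniform(-radius, radius, n_samples)
        s2 = g2[k] + rng.uniform(-radius, radius, n_samples)
        worst[k] = af_snr(s1, s2).min()      # worst case over the sampled set

    print("worst-case SNR per relay:", np.round(worst, 2))
    print("minimax selection: relay", int(np.argmax(worst)))
    ```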

  4. TU-E-213-03: Tools for Ensuring Quality and Safety

    Energy Technology Data Exchange (ETDEWEB)

    Price, M. [Rhode Island Hospital / Warren Alpert Medical School of Brown University (United States)

    2015-06-15

    Purpose: To develop the first 4D robust optimization (RO) method accounting for respiratory motion and evaluate its potential to improve plan robustness and optimality compared to 3D RO and PTV-based optimization. Methods: A set of 4D CT images is used to track respiratory motion and deformation of tumors and organs. For each of 10 respiration phases, dose distributions are calculated for nine different uncertainty scenarios, including the nominal one, those incorporating ±5 mm setup uncertainties along the x, y and z directions, and ±3.5% range uncertainties. All 90 dose distributions are simultaneously optimized to achieve full dose coverage of 10 CTVs and sparing of normal structures. ITV-based 3D RO and PTV-based optimization based on the average CT are also carried out for the same patient using the same dose-volume constraints. After optimization, 4D robustness evaluation was performed for all resulting plans. The CTV coverage and the sparing of normal tissue in the 10 phases are evaluated and compared among the three methods. The widths of the DVH bands represent the robustness of the dose distributions in the structures. Results: For the one patient studied so far, the worst-case CTV coverage by the prescription dose among all 90 scenarios is: 99% for 4D RO, 88.9% for 3D RO, and 85.2% for PTV-based optimization. 4D RO also results in the best robustness, with the narrowest DVH bandwidths for the CTV. 4D and 3D RO have similar organ sparing, while PTV-based optimization results in the worst organ sparing. Conclusion: 4D robust optimization, which accounts for anatomy motion and deformation in the optimization process, significantly improves plan robustness and achieves higher quality treatment plans for lung cancer patients. The method is being evaluated for multiple patients with different tumor and motion characteristics.
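
    The "simultaneously optimize all scenario doses" step amounts to a composite worst-case (minimax) objective: the plan is scored by its worst scenario. The toy sketch below minimizes the worst scenario's quadratic deviation from a prescription; the matrices are synthetic and there are 3 scenarios instead of a clinical problem's 90.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy composite worst-case optimization over uncertainty scenarios.
    # All data are synthetic; this is an illustration of the minimax idea,
    # not a treatment planning system.
    rng = np.random.default_rng(3)
    scenarios = [rng.random((30, 10)) for _ in range(3)]   # dose matrix per scenario
    d_presc = np.ones(30)                                  # prescribed dose

    def composite_worst_case(x):
        # objective = worst scenario's squared deviation from the prescription
        return max(np.sum((A @ x - d_presc) ** 2) for A in scenarios)

    x0 = np.full(10, 0.2)
    res = minimize(composite_worst_case, x0, bounds=[(0, None)] * 10,
                   method="Powell")   # derivative-free, since max() is nonsmooth
    print("worst-case objective:", round(res.fun, 4))
    ```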

  5. Models and formal verification of multiprocessor system-on-chips

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid; Hansen, Michael Reichhardt; Madsen, Jan

    2008-01-01

    present experimental results on rather small systems with high complexity, primarily due to differences between best-case and worst-case execution times. Considering worst-case execution times only, the system becomes deterministic and, using a special version of {Uppaal} where no history is saved, we...

  6. Optimal mixture experiments

    CERN Document Server

    Sinha, B K; Pal, Manisha; Das, P

    2014-01-01

    The book dwells mainly on the optimality aspects of mixture designs. As mixture models are a special case of regression models, a general discussion on regression designs is presented, covering topics such as continuous designs, the de la Garza phenomenon, Loewner order domination, equivalence theorems for different optimality criteria, and standard optimality results for single-variable polynomial regression and multivariate linear and quadratic regression models. This is followed by a review of the available literature on the estimation of parameters in mixture models. Based on recent research findings, the volume also introduces optimal mixture designs for the estimation of optimum mixing proportions in different mixture models, including Scheffé's quadratic model, the Darroch-Waller model, the log-contrast model, mixture-amount models, random coefficient models and the multi-response model. Robust mixture designs and mixture designs in blocks are also reviewed. Moreover, some applications of mixture desig...

  7. Is Best-Worst Scaling Suitable for Health State Valuation? A Comparison with Discrete Choice Experiments.

    Science.gov (United States)

    Krucien, Nicolas; Watson, Verity; Ryan, Mandy

    2017-12-01

    Health utility indices (HUIs) are widely used in economic evaluation. The best-worst scaling (BWS) method is being used to value the dimensions of HUIs. However, little is known about the properties of this method. This paper investigates the validity of the BWS method for developing HUIs, comparing it to another ordinal valuation method, the discrete choice experiment (DCE). Using a parametric approach, we find a low level of concordance between the two methods, with evidence of preference reversals. BWS responses are subject to decision biases, with significant effects on individuals' preferences. Non-parametric tests indicate that BWS data have lower stability, monotonicity and continuity compared to DCE data, suggesting that the BWS provides lower-quality data. As a consequence, for both theoretical and technical reasons, practitioners should be cautious both about using the BWS method to measure health-related preferences and about using HUIs based on BWS data. Given existing evidence, it seems that the DCE is the better method, at least because its limitations (and measurement properties) have been extensively researched. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Impact Analysis of Demand Response Intensity and Energy Storage Size on Operation of Networked Microgrids

    Directory of Open Access Journals (Sweden)

    Akhtar Hussain

    2017-06-01

    Full Text Available Integration of demand response (DR) programs and battery energy storage systems (BESS) in microgrids is beneficial for both microgrid owners and consumers. The intensity of DR programs and the BESS size can alter the operation of microgrids. Meanwhile, the optimal size of BESS units is linked to the uncertainties associated with renewable energy sources and load variations. Similarly, the participation of enrolled customers in DR programs is also uncertain and, among various other factors, uncertainty in market prices is a major cause. Therefore, in this paper, the impact of DR program intensity and BESS size on the operation of networked microgrids is analyzed while considering the prevailing uncertainties. The uncertainties associated with forecast load values, the output of renewable generators, and the market price are handled via the robust optimization method. Robust optimization provides immunity against the worst-case scenario, provided the uncertainties lie within the specified bounds. The worst-case scenario of the prevailing uncertainties is considered for evaluating the feasibility of the proposed method. Two representative categories of DR programs, i.e., price-based and incentive-based DR programs, are considered. The impact of changes in DR intensity and BESS size on the operation cost of the microgrid network, external power trading, internal power transfer, the load profile of the network, and the state-of-charge (SOC) of the BESS units is analyzed. Simulation results are analyzed to determine the integration of favorable DR programs and/or BESS units for different microgrid networks with diverse objectives.
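
    The worst-case reasoning behind box-bounded robust optimization can be sketched in a few lines. The toy dispatch below (all loads, prices, and bounds hypothetical) evaluates a candidate BESS schedule at the adverse corner of the uncertainty box, where load and price both sit at their upper bounds.

```python
import numpy as np

# Toy illustration of the robust idea above (all numbers hypothetical): net load and
# market price are known only within box bounds. For a candidate BESS discharge
# schedule, the adverse corner of the box (high load, high price) gives the
# worst-case purchase cost that robust optimization guards against.
net_load = np.array([3.0, 4.0, 5.0, 4.0])   # forecast net load per hour (kW)
load_dev = 0.5                              # +/- bound on the load forecast error
price = np.array([0.10, 0.12, 0.20, 0.15])  # forecast market price ($/kWh)
price_dev = 0.03                            # +/- bound on the price forecast error

def worst_case_cost(bess_discharge):
    """Cost of buying the residual load at the adverse corner of the box."""
    grid = np.maximum(net_load + load_dev - bess_discharge, 0.0)  # load at its bound
    return float(np.sum((price + price_dev) * grid))              # price at its bound

flat = np.zeros(4)
peak_shave = np.array([0.0, 0.0, 1.0, 0.5])  # discharge 1 kWh at the price peak
print("no BESS   :", round(worst_case_cost(flat), 3))
print("peak shave:", round(worst_case_cost(peak_shave), 3))
```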

  9. Analysis of twisted tape solutions for cooling of the residual ion dump of the ITER HNB

    Energy Technology Data Exchange (ETDEWEB)

    Ochoa Guamán, Santiago, E-mail: santiago.ochoa@kit.edu [Karlsruhe Institute of Technology (KIT), Institute for Technical Physics, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Hanke, Stefan [Karlsruhe Institute of Technology (KIT), Institute for Technical Physics, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Sartori, Emanuele; Palma, Mauro Dalla [Consorzio RFX (CNR, ENEA, INFN, Università di Padova, Acciaierie Venete SpA), Corso Stati Uniti 4, 35127 Padua (Italy)

    2016-11-01

    Highlights: • Due to manufacturing deviations, the cooling channels are made by double-side drilling. • Twisted tapes with two different thicknesses are necessary for better cooling performance. • The manufacturing of cooling channels and twisted tapes was demonstrated to be feasible. • The water critical heat flux safety margin is higher than 1.5 over the total channel length. • Geometry optimization showed better cooling performance and higher CHF safety margins. - Abstract: The ITER HNB residual ion dump is exposed to a heat load of about 17 MW on the dump panels, with a peak power density of 7 MW/m². Water flows through cooling channels, 2 m long and 14 mm in diameter, realized by double-side deep drilling. Unavoidable manufacturing deviations could generate a discontinuity at the mid-length of the channel. It is necessary to verify the influence of issues such as cavitation, fluid stagnation, and low boiling margins, among others, on the cooling performance. Assuming worst-case conditions, analytical and CFD methods showed subcooled boiling operation with high safety margins to the water critical heat flux. Additionally, by analysing several thermo-hydraulic parameters, the twisted tape cross sections were optimized. In each cooling channel, two twisted tapes are inserted from the sides of the panels, and a study of the separation gap between them at the channel mid-length identified an optimal distance. This paper demonstrates that common machining techniques and drilling tolerances allow the manufacturing of panels able to safely withstand the required beam operation heat loads, even under worst-case operation scenarios.

  10. Approach to Multi-Criteria Group Decision-Making Problems Based on the Best-Worst-Method and ELECTRE Method

    Directory of Open Access Journals (Sweden)

    Xinshang You

    2016-09-01

    Full Text Available This paper proposes a novel approach to cope with multi-criteria group decision-making problems. We give the pairwise comparisons based on the best-worst method (BWM), which can decrease the number of comparisons. Additionally, our comparison results are determined with both positive and negative aspects. In order to deal with the decision matrices effectively, we consider the elimination and choice translating reality (ELECTRE) III method under the intuitionistic multiplicative preference relation environment. The ELECTRE III method is designed as a double-automatic system: within certain limits, and without requiring the decision-makers to re-evaluate the alternatives, this system can adjust the special elements that have the most influence on the group's satisfaction degree. Moreover, the proposed method is suitable both for intuitionistic multiplicative preference relations and for interval-valued fuzzy preference relations, through a transformation formula. An illustrative example demonstrates the rationality and applicability of the novel method.

  11. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    Science.gov (United States)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2014-06-01

    Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk values computed from historical data to generate an optimal portfolio. The resulting portfolio may not truly be optimal in reality, because extreme values in the data can strongly influence the expected return and volatility risk estimates. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.

  12. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    Energy Technology Data Exchange (ETDEWEB)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita [School of Mathematical Sciences, Faculty of Science and Technology, Universiti Kebangsaan Malaysia, 43600 Bangi, Selangor (Malaysia)

    2014-06-19

    Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk values computed from historical data to generate an optimal portfolio. The resulting portfolio may not truly be optimal in reality, because extreme values in the data can strongly influence the expected return and volatility risk estimates. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.

  13. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    International Nuclear Information System (INIS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-01-01

    Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk values computed from historical data to generate an optimal portfolio. The resulting portfolio may not truly be optimal in reality, because extreme values in the data can strongly influence the expected return and volatility risk estimates. This paper considers the distributions of assets' returns and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.
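
    The static-versus-stochastic contrast is easy to reproduce on synthetic data. The sketch below (not the FTSE Bursa Malaysia data) compares weights from a single historical mean/covariance estimate against weights averaged over bootstrap resamples, which damps the influence of extreme observations.

```python
import numpy as np

# Sketch of the static-vs-stochastic contrast on synthetic returns: the "static"
# portfolio uses one historical mean/covariance estimate, while the "stochastic"
# one averages optimal weights over bootstrap resamples of the history.
rng = np.random.default_rng(2)
T, n = 250, 4
returns = rng.normal(0.0004, 0.01, size=(T, n)) + rng.normal(0, 0.002, n)

def mv_weights(r):
    """Unconstrained mean-variance direction, normalized to unit gross exposure."""
    w = np.linalg.solve(np.cov(r.T), r.mean(axis=0))
    return w / np.abs(w).sum()

static_w = mv_weights(returns)
boot_w = np.mean([mv_weights(returns[rng.integers(0, T, T)]) for _ in range(200)], axis=0)
print("static weights    :", np.round(static_w, 2))
print("resampled weights :", np.round(boot_w, 2))
```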

  14. Genotype by environment interaction in sunflower (Helianthus annus L.) to optimize trial network efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Barrios, P.; Castro, M.; Pérez, O.; Vilaró, D.; Gutiérrez, L.

    2017-07-01

    Modeling genotype by environment interaction (GEI) is one of the most challenging aspects of plant breeding programs. The use of efficient trial networks is an effective way to evaluate GEI and define selection strategies. Furthermore, the experimental design and the number of locations, replications, and years are crucial aspects of multi-environment trial (MET) network optimization. The objective of this study was to evaluate the efficiency and performance of a MET network for sunflower (Helianthus annuus L.). Specifically, we evaluated GEI in the network by delineating mega-environments, estimating genotypic stability, and identifying relevant environmental covariates. Additionally, we optimized the network by comparing experimental design efficiencies. We used data from the National Evaluation Network of Sunflower Cultivars of Uruguay (NENSU) over a period of 20 years. MET plot yield and flowering time information was used to evaluate GEI, and meteorological information was studied for each sunflower physiological stage. An optimal network under these conditions should have three replications, two years of evaluation, and at least three locations. The use of an incomplete randomized block experimental design showed reasonable performance. Three mega-environments were defined, explained mainly by different management of sowing dates. Late sowing dates had the worst performance in grain yield and oil production, associated with higher temperatures before anthesis and fewer days allocated to grain filling. The optimization of MET networks through the analysis of experimental design efficiency, the presence of GEI, and appropriate management strategies has a positive impact on the expression of yield potential and the selection of superior cultivars.

  15. Simplex-based optimization of numerical and categorical inputs in early bioprocess development: Case studies in HT chromatography.

    Science.gov (United States)

    Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy

    2017-08-01

    Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant, which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies whose search spaces comprise both categorical and numerical inputs, a situation intractable for traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature-column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the Simplex method from becoming stranded at local optima through arbitrary handling of the categorical inputs, and allowed the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method combined with dummy variables was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. The latter approach failed, however, both to capture trends and to identify optima, and it led to poor operating conditions. It is suggested that the Simplex variant is well suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
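
    As a rough stand-in for the paper's grid-compatible Simplex (whose details are not reproduced here), the sketch below one-hot encodes a categorical input as dummy variables, relaxes them during a Nelder-Mead search over an invented response surface, and decodes the dominant level at each evaluation. Naive decoding of this kind is exactly what can strand a simplex at a local optimum, the failure mode the authors guard against.

```python
import numpy as np
from scipy.optimize import minimize

# Simplified stand-in for the dummy-variable idea above (not the authors'
# grid-compatible Simplex): one categorical input (resin type) is one-hot encoded,
# relaxed to continuous values during the search, and decoded to its dominant
# level at each evaluation. The objective surface and data are invented.
LEVELS = ["resin_A", "resin_B", "resin_C"]
yield_by_resin = {"resin_A": 0.7, "resin_B": 1.0, "resin_C": 0.85}  # hypothetical

def objective(x):
    ph, salt = x[0], x[1]
    resin = LEVELS[int(np.argmax(x[2:5]))]   # decode dummy variables -> category
    # Hypothetical response: peak at pH 7.2, 0.3 M salt, best with resin_B.
    return -(yield_by_resin[resin] - (ph - 7.2) ** 2 - (salt - 0.3) ** 2)

x0 = np.array([6.0, 0.1, 1.0, 0.0, 0.0])     # start at resin_A
res = minimize(objective, x0, method="Nelder-Mead")
best = res.x
print("pH %.2f, salt %.2f M, resin %s"
      % (best[0], best[1], LEVELS[int(np.argmax(best[2:5]))]))
```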

  16. Guided randomness in optimization

    CERN Document Server

    Clerc, Maurice

    2015-01-01

    The performance of an algorithm depends on the GNA used. This book focuses on the comparison of optimizers; it defines a stress-outcome approach from which all the classic criteria (median, average, etc.) and other more sophisticated ones can be derived. The source codes used for the examples are also presented; this allows a reflection on "superfluous chance," succinctly explaining why and how the stochastic aspect of optimization could be avoided in some cases.

  17. Optimization of automation: III. Development of optimization method for determining automation rate in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Seung Min; Kim, Jong Hyun; Kim, Man Cheol; Seong, Poong Hyun

    2016-01-01

    Highlights: • We propose an appropriate automation rate that enables the best human performance. • We analyze the shortest working time considering Situation Awareness Recovery (SAR). • The optimized automation rate is estimated by integrating the automation and ostracism rate estimation methods. • The process to derive the optimized automation rate is demonstrated through case studies. - Abstract: Automation has been introduced in various industries, including the nuclear field, because it is commonly believed to promise greater efficiency, lower workloads, and fewer operator errors by enhancing operator and system performance. However, the excessive introduction of automation has deteriorated operator performance due to its side effects, referred to as Out-of-the-Loop (OOTL) effects, and this is a critical issue that must be resolved. Thus, in order to determine the optimal level of automation that assures the best human operator performance, a quantitative method of optimizing automation is proposed in this paper. To derive appropriate automation levels, the automation rate and the ostracism rate, which quantitatively capture the positive and negative effects of automation, respectively, are integrated. The integration derives the shortest working time through the concept of situation awareness recovery (SAR), under which the automation rate with the shortest working time assures the best human performance. The process to derive the optimized automation rate is demonstrated through an emergency operation scenario-based case study. In this case study, four types of procedures are assumed through redesigning the original emergency operating procedure according to the introduced automation and ostracism levels. Using the

  18. A two-domain real-time algorithm for optimal data reduction: A case study on accelerator magnet measurements

    CERN Document Server

    Arpaia, P; Inglese, V

    2010-01-01

    A real-time data reduction algorithm, based on the combination of two lossy techniques specifically optimized for high-rate magnetic measurements in two domains (e.g., time and space), is proposed. The first technique exploits an adaptive sampling rule based on the power estimation of the flux increments in order to optimize the information to be gathered for magnetic field analysis in real time. The tracking condition is defined by the target noise level in the Nyquist band required by the post-processing procedure of magnetic analysis. The second technique uses a data reduction algorithm in order to improve the compression ratio while preserving the consistency of the measured signal. The allowed loss is set equal to the random noise level in the signal in order to force the loss and the noise to cancel rather than to add, improving the signal-to-noise ratio. Numerical analysis and experimental results of on-field performance characterization and validation for two case studies of magnetic measurement syste...

  19. Radiological optimization

    International Nuclear Information System (INIS)

    Zeevaert, T.

    1998-01-01

    Radiological optimization is one of the basic principles in each radiation-protection system and it is a basic requirement in the safety standards for radiation protection in the European Communities. The objectives of the research, performed in this field at the Belgian Nuclear Research Centre SCK-CEN, are: (1) to implement the ALARA principles in activities with radiological consequences; (2) to develop methodologies for optimization techniques in decision-aiding; (3) to optimize radiological assessment models by validation and intercomparison; (4) to improve methods to assess in real time the radiological hazards in the environment in case of an accident; (5) to develop methods and programmes to assist decision-makers during a nuclear emergency; (6) to support the policy of radioactive waste management authorities in the field of radiation protection; (7) to investigate existing software programmes in the domain of multi-criteria analysis. The main achievements for 1997 are given

  20. Large-scale hydropower system optimization using dynamic programming and object-oriented programming: the case of the Northeast China Power Grid.

    Science.gov (United States)

    Li, Ji-Qing; Zhang, Yu-Shan; Ji, Chang-Ming; Wang, Ai-Jing; Lund, Jay R

    2013-01-01

    This paper examines long-term optimal operation using dynamic programming for a large hydropower system of 10 reservoirs in Northeast China. Besides considering flow and hydraulic head, the optimization explicitly includes time-varying electricity market prices to maximize benefit. Two techniques are used to reduce the 'curse of dimensionality' of dynamic programming with many reservoirs. Discrete differential dynamic programming (DDDP) reduces the search space and computer memory needed. Object-oriented programming (OOP) and the ability to dynamically allocate and release memory with the C++ language greatly reduces the cumulative effect of computer memory for solving multi-dimensional dynamic programming models. The case study shows that the model can reduce the 'curse of dimensionality' and achieve satisfactory results.
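
    The backward recursion at the heart of such models fits in a few lines for a single reservoir. Below is a minimal sketch with invented inflows and prices; the study's 10-reservoir DDDP/OOP machinery is far richer.

```python
import numpy as np

# Minimal single-reservoir dynamic program in the spirit of the study above
# (all data invented): storage is discretized, and each stage chooses a release
# that maximizes price-weighted generation plus the optimal value of the
# resulting storage, working backwards in time.
levels = np.arange(0, 11)               # discrete storage states (units)
inflow = [3, 2, 4, 3]                   # inflow per stage
price = [1.0, 1.5, 0.8, 1.2]            # electricity price per stage
T = len(inflow)
value = np.zeros((T + 1, len(levels)))  # terminal value = 0

for t in range(T - 1, -1, -1):
    for i, s in enumerate(levels):
        best = -np.inf
        for release in range(0, s + inflow[t] + 1):
            s_next = s + inflow[t] - release
            if 0 <= s_next <= levels[-1]:   # spill is disallowed in this toy model
                best = max(best, price[t] * release + value[t + 1][s_next])
        value[t][i] = best

print("optimal benefit starting from a full reservoir:", value[0][-1])
```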

  1. A new optimization algotithm with application to nonlinear MPC

    Directory of Open Access Journals (Sweden)

    Frode Martinsen

    2005-01-01

    Full Text Available This paper investigates the application of an SQP optimization algorithm to nonlinear model predictive control. It considers feasible vs. infeasible path methods, sequential vs. simultaneous methods, and reduced vs. full space methods. A new optimization algorithm coined rFOPT, which remains feasible with respect to inequality constraints, is introduced. The suitable choices between these various strategies are assessed informally through a small CSTR case study. The case study also considers the effect various discretization methods have on the optimization problem.

  2. Case study on implementation of the dose constraint concept in optimization in working environment

    Energy Technology Data Exchange (ETDEWEB)

    Krajewska, Grazyna; Krajewski, Pawel [Central Laboratory for Radiological Protection, PL-03194, Warsaw (Poland)

    2014-07-01

    A case study of dose constraint values already fixed in the nuclear medicine sector indicated that the practical implementation of the ICRP principle of optimization (Publication 103, ICRP, 2007) still faces methodological problems, due to the lack of sufficiently numerous monitoring data on internal contamination and to the complicated mathematical formalism. In practice, to ensure that 'the likelihood of incurring exposure, the number of people exposed, and the magnitude of their individual doses are kept as low as reasonably achievable', a baseline of effective doses together with their statistical distribution is required. Furthermore, as revealed in this study, the dose PHPs generated with MC methods had irregular shapes, depending on random operations rather than routine procedures. The role of dose constraints for occupational exposures was further elaborated in Publication 101 (ICRP, 2006): 'the dose constraint is a value of individual dose used to limit the range of options considered in the process of optimization'. The revisions of the International Basic Safety Standards as well as the Euratom Basic Safety Standards Directive both aim to implement the new ICRP recommendations and require the use of dose constraints, defined broadly along the lines provided by the ICRP, suggesting that values be selected from the bands the ICRP recommends. These will be obligatorily adopted in national regulations by the regulatory authorities of EU countries. However, owing to the accidental character of the monitoring data, the 95% confidence tail of the doses for the most highly exposed individuals is near the limit of 20 mSv per year. This is particularly observed in endocrinology units dealing with I-131 therapy. One might conclude that dose limitation and optimization are viewed as sufficient for the management of occupational exposures and can reasonably be achieved. (authors)

  3. Sedation in a Patient with Prader-Willi Syndrome: A Case Report

    Directory of Open Access Journals (Sweden)

    Hayrettin Daşkaya

    2014-12-01

    Full Text Available Prader-Willi syndrome (PWS) is a rare disorder characterized by hypotonia, growth retardation, characteristic facial features, hypogonadism, hyperphagia and related morbid obesity. Anesthesia in these patients carries an increased risk of perioperative complications due to central hypotonia, abnormal dentition, and limited neck mobility. Therefore, clinicians should be prepared for the worst-case scenario before administering anesthesia to patients with PWS during general or outpatient surgery. Here, in this case report, outpatient anesthesia performed in a patient with PWS for diagnostic electromyography is presented together with a literature review.

  4. Robust Algorithms for Detecting a Change in a Stochastic Process with Infinite Memory

    Science.gov (United States)

    1988-03-01

    … breakdown point and the influence function. The structure of the optimal algorithm … the additional assumption of φ-mixing on the nominal measures. Then Huber's … are i.i.d. sequences of Gaussian random variables with identical variance σ². … For the breakdown point and the influence function we will use … algebraic sign for i = 0, 1. Here z will be chosen such that it leads to worst-case or earliest breakdown. Next, the influence function measures …

  5. Well-posed optimization problems

    CERN Document Server

    Dontchev, Asen L

    1993-01-01

    This book presents in a unified way the mathematical theory of well-posedness in optimization. The basic concepts of well-posedness and the links among them are studied, in particular Hadamard and Tykhonov well-posedness. Abstract optimization problems as well as applications to optimal control, calculus of variations and mathematical programming are considered. Both the pure and applied side of these topics are presented. The main subject is often introduced by heuristics, particular cases and examples. Complete proofs are provided. The expected knowledge of the reader does not extend beyond textbook (real and functional) analysis, some topology and differential equations and basic optimization. References are provided for more advanced topics. The book is addressed to mathematicians interested in optimization and related topics, and also to engineers, control theorists, economists and applied scientists who can find here a mathematical justification of practical procedures they encounter.

  6. Existence theory in optimal control

    International Nuclear Information System (INIS)

    Olech, C.

    1976-01-01

    This paper treats the existence problem in two main cases. One is that of linear systems, where existence is based on closedness or compactness of the reachable set; the other, the non-linear case, refers to a situation where closedness of the set of admissible solutions is needed for the existence of optimal solutions. Some results from convex analysis are included in the paper. (author)

  7. Comments on `A discrete optimal control problem for descriptor systems'

    DEFF Research Database (Denmark)

    Ravn, Hans

    1990-01-01

    In the above-mentioned work (see ibid., vol.34, p.177-81 (1989)), necessary and sufficient optimality conditions are derived for a discrete-time optimal problem, as well as other specific cases of implicit and explicit dynamic systems. The commenter corrects a mistake and demonstrates that there...

  8. Medical exposure and optimization of radiological protection

    International Nuclear Information System (INIS)

    Drexler, Gunter

    1997-01-01

    Full text. In the context of occupational and population exposure, the concepts of optimization are implemented widely, at least conceptually, by the relevant authorities and those responsible for radiation protection. In the case of medical exposures this is less common, since the patient is exposed deliberately and cannot be isolated from his environment. The concepts and instruments of optimization in these cases are discussed, with emphasis on the ICRP recommendations in Publication 73. (author)

  9. An occasional diagnosis of myasthenia gravis - a focus on thymus during cardiac surgery: a case report

    Directory of Open Access Journals (Sweden)

    Dainese Luca

    2009-10-01

    Full Text Available Abstract Background Myasthenia gravis, an uncommon autoimmune syndrome, is commonly associated with thymus abnormalities. Thymomatous myasthenia gravis is considered to have a worse prognosis, and thymectomy can reverse symptoms if performed early. Case report We describe the case of a patient who underwent mitral valve repair and was found to have an incidental thymomatous mass during the surgery. A total thymectomy was performed concomitantly with the mitral valve repair. Conclusion The diagnosis of thymomatous myasthenia gravis was confirmed postoperatively. Following the surgery this patient was strictly monitored, and at 1-year follow-up a complete stable remission had been successfully achieved.

  10. Applications of polynomial optimization in financial risk investment

    Science.gov (United States)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has found many important applications in optimization, financial economics, tensor eigenvalue problems, etc. This paper studies applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve specific cases. The results show that polynomial optimization is effective for some financial optimization problems.
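
    For orientation, the standard mean-variance model mentioned above is already a convex quadratic program when transaction costs are absent; Lasserre's SDP relaxations become relevant for the nonconvex variants. Below is a minimal sketch with invented data, using the cvxpy modeling library rather than the paper's machinery.

```python
import numpy as np
import cvxpy as cp

# Standard mean-variance portfolio selection as a convex QP (data invented):
# maximize expected return minus a risk-aversion-weighted variance penalty,
# subject to a fully invested, long-only portfolio.
mu = np.array([0.08, 0.10, 0.12])            # expected returns
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.08, 0.03],
                  [0.01, 0.03, 0.12]])       # covariance matrix
gamma = 2.0                                  # risk-aversion weight

w = cp.Variable(3)
prob = cp.Problem(cp.Maximize(mu @ w - gamma * cp.quad_form(w, Sigma)),
                  [cp.sum(w) == 1, w >= 0])
prob.solve()
print("weights:", np.round(w.value, 3))
```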

  11. An SEU resistant 256K SOI SRAM

    Science.gov (United States)

    Hite, L. R.; Lu, H.; Houston, T. W.; Hurta, D. S.; Bailey, W. E.

    1992-12-01

    A novel SEU (single event upset) resistant SRAM (static random access memory) cell has been implemented in a 256K SOI (silicon on insulator) SRAM that has attractive performance characteristics over the military temperature range of -55 to +125 C. These include worst-case access time of 40 ns with an active power of only 150 mW at 25 MHz, and a worst-case minimum WRITE pulse width of 20 ns. Measured SEU performance gives an Adams 10 percent worst-case error rate of 3.4 × 10⁻¹¹ errors/bit-day using the CRUP code with a conservative first-upset LET threshold. Modeling does show that higher bipolar gain than that measured on a sample from the SRAM lot would produce a lower error rate. Measurements show the worst-case supply voltage for SEU to be 5.5 V. Analysis has shown this to be primarily caused by the drain voltage dependence of the beta of the SOI parasitic bipolar transistor. Based on this, SEU experiments with SOI devices should include measurements as a function of supply voltage, rather than the traditional 4.5 V, to determine the worst-case condition.

  12. Optimizing radiology peer review: a mathematical model for selecting future cases based on prior errors.

    Science.gov (United States)

    Sheu, Yun Robert; Feder, Elie; Balsim, Igor; Levin, Victor F; Bleicher, Andrew G; Branstetter, Barton F

    2010-06-01

    Peer review is an essential process for physicians because it facilitates improved quality of patient care and continuing physician learning and improvement. However, peer review often is not well received by radiologists, who note that it is time intensive, is subjective, and lacks a demonstrable impact on patient care. Current advances in peer review include the RADPEER system, with its standardization of discrepancies and incorporation of the peer-review process into the PACS itself. The purpose of this study was to build on RADPEER and similar systems by using a mathematical model to optimally select the types of cases to be reviewed, for each radiologist undergoing review, on the basis of the past frequency of interpretive error, the likelihood of morbidity from an error, the financial cost of an error, and the time required for the reviewing radiologist to interpret the study. The investigators compiled 612,890 preliminary radiology reports authored by residents and attending radiologists at a large tertiary care medical center from 1999 to 2004. Discrepancies between preliminary and final interpretations were classified by severity and validated by repeat review of major discrepancies. A mathematical model was then used to calculate, for each author of a preliminary report, the combined morbidity and financial costs of expected errors across 3 modalities (MRI, CT, and conventional radiography) and 4 departmental divisions (neuroradiology, abdominal imaging, musculoskeletal imaging, and thoracic imaging). A customized report was generated for each on-call radiologist that determined the category (modality and body part) with the highest total cost function. A universal total cost based on probability data from all radiologists was also compiled. The use of mathematical models to guide case selection could optimize the efficiency and effectiveness of physician time spent on peer review and produce more concrete and meaningful feedback to radiologists.
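
    The selection rule reduces to scoring each (modality, division) category by expected error cost per unit of review time. Below is a minimal sketch with hypothetical rates and costs, not the study's fitted values.

```python
# Sketch of the case-selection idea above (all rates and costs hypothetical):
# for each (modality, division) category, combine a radiologist's past error
# frequency with the cost of an error, normalize by review time, and direct
# peer review toward the highest-scoring category.
categories = {
    # (modality, division): (error_rate, morbidity_cost, financial_cost, review_min)
    ("MRI", "neuro"): (0.020, 9.0, 4.0, 12.0),
    ("CT", "abdominal"): (0.035, 6.0, 3.0, 8.0),
    ("XR", "musculoskeletal"): (0.050, 2.0, 1.0, 3.0),
}

def score(stats):
    error_rate, morbidity, financial, minutes = stats
    # Expected error cost per minute of reviewer time.
    return error_rate * (morbidity + financial) / minutes

target = max(categories, key=lambda c: score(categories[c]))
print("review focus:", target, "score:", round(score(categories[target]), 4))
```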

  13. Spatio-temporal optimization of agricultural practices to achieve a sustainable development at basin level; framework of a case study in Colombia

    Science.gov (United States)

    Uribe, Natalia; corzo, Gerald; Solomatine, Dimitri

    2016-04-01

    The flood events that occurred during recent years in different basins of the Colombian territory have raised questions about the sensitivity of these regions and whether they share common features. Previous studies suggest that important features in the sensitivity of the flood process were land cover change and precipitation anomalies, related to the impacts of agricultural management and water management deficiencies, among others. A significant government investment in outreach activities for adopting and promoting the Colombia National Action Plan on Climate Change (NAPCC) is being carried out in different sectors and regions, with the agriculture sector as a priority. However, more local information is still needed to assess where the regions have this sensitivity. The continuous change in a region with seasonal agricultural practices has also been pointed out as critical information for optimal sustainable development. This combined spatio-temporal dynamics of the crop cycle in relation to climate change (or climate variations) has an important impact on flooding events at the basin scale. This research develops the assessment and optimization of the aggregated impact of flood events driven by the spatio-temporal dynamics of changes in agricultural management practices. A number of common best agricultural practices have been identified in order to explore their effect in a spatial hydrological model that evaluates overall changes. The optimization process consists of evaluating the best performance of agricultural production, without having to change crop activities or move to other regions. To achieve these objectives, a deep analysis of different models combined with current and future climate scenarios has been planned. An algorithm has been formulated to cover the parametric updates such that the optimal temporal identification can be evaluated in different regions of the case study area. Different hydroinformatics

  14. Legionnaires’ Disease: Clinicoradiological Comparison of Sporadic Versus Outbreak Cases

    Directory of Open Access Journals (Sweden)

    Hafiz Rizwan Talib Hashmi

    2017-06-01

    Full Text Available Background: In 2015, New York City experienced the worst outbreak of Legionnaires' disease in the history of the city. We compare patients seen during the 2015 outbreak with sporadic cases of Legionella seen during the previous 5 years. Methods: We conducted a retrospective chart review of 90 patients with Legionnaires' disease, including sporadic cases of Legionella infection admitted from 2010 to 2015 (n = 55) and cases admitted during the 2015 outbreak (n = 35). Results: We saw no significant differences between the 2 groups regarding demographics, smoking habits, alcohol intake, underlying medical disease, or residence type. Univariate and multivariate analyses showed that patients with sporadic cases of Legionella had longer hospital and intensive care unit stays, as well as longer durations of mechanical ventilation. Short-term mortality, discharge disposition, and most clinical parameters did not differ significantly between the 2 groups. Conclusions: We found no specific clinicoradiological characteristics that could differentiate sporadic from epidemic cases of Legionella. Early recognition and a high index of suspicion for Legionnaires' disease are critical to providing appropriate treatment. A cluster of cases should raise suspicion of an outbreak.

  15. Managerial optimism and the impact of cash flow sensitivity on corporate investment: The case of Greece

    Directory of Open Access Journals (Sweden)

    Dimitrios I. Maditinos

    2015-10-01

    , managerial optimism is never linked to corporate investment for firms with a middle degree of closely held shares. Additionally, it is shown that acquisition decisions are not affected by a manager's optimism regarding the prospects of his or her firm. This result is not consistent with previous literature such as Malmendier and Tate (2008) and Glaser et al. (2008), who found that cash flow, Tobin's Q and firm size mainly drive the probability of an acquisition. Finally, it is confirmed that the investment of firms with optimistic managers is more sensitive to cash flow than the investment of firms with managers who are not optimistic; optimism proved to be a strong driver of investment. Research limitations/implications - A possible direction for further research is testing each year separately. In this study we ran the regressions for the whole 6-year period 2007 to 2012. Testing each year individually would allow researchers to compare results, find out whether any specific year was statistically distinctive, and examine the period after 2010, when the Greek crisis had started to appear on the horizon. The impact of the Greek financial crisis on managerial behaviour and on personal characteristics of managers, such as optimism, could constitute a field for further research. Originality/value - Since research in this specific field of finance is quite limited, this study aims to add to the existing knowledge on the Greek case. The investigation of managerial optimism as a personal, psychological and mental characteristic captures the effort of Greek managers to come out of the economic crisis and consequently achieve greater outcomes for their firms.

  16. Full space device optimization for solar cells.

    Science.gov (United States)

    Baloch, Ahmer A B; Aly, Shahzada P; Hossain, Mohammad I; El-Mellouhi, Fedwa; Tabet, Nouar; Alharbi, Fahhad H

    2017-09-20

    Advances in computational materials science have paved the way to design efficient solar cells by identifying the optimal properties of the device layers. Conventionally, device optimization has been governed by single or double descriptors for an individual layer, mostly the absorbing layer. However, the performance of the device depends collectively on all the material properties and the geometry of each layer in the cell. To address this issue of multi-property optimization, and to avoid the paradigm of reoccurring materials in the solar cell field, a full-space, material-independent optimization approach is developed and presented in this paper. The method is employed to obtain an optimized material data set for maximum efficiency and for targeted functionality of each layer. To ensure the robustness of the method, two cases are studied, namely perovskite solar cell device optimization and a cadmium-free CIGS solar cell. The implementation determines the desirable optoelectronic properties of transport media and contacts that maximize the efficiency in both cases. The resulting data sets of material properties can be matched against materials databases or pursued through further microscopic material design. Moreover, the presented multi-property optimization framework can be extended to the design of any solid-state device.
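
    The full-space idea, treating every layer property as a coordinate of one search vector rather than fixing a material first, can be caricatured with a random search over a made-up efficiency surrogate; the real work optimizes detailed device physics.

```python
import numpy as np

# Toy version of a full-space property search (the "efficiency" surrogate below is
# invented): every layer property is a coordinate of one search vector, and random
# search looks for the property set maximizing the surrogate, without presupposing
# any particular material.
rng = np.random.default_rng(3)
bounds = {"bandgap_eV": (1.0, 2.0), "affinity_eV": (3.5, 4.5), "thickness_nm": (100, 600)}

def surrogate_efficiency(x):
    eg, chi, t = x
    # Hypothetical smooth surrogate peaking at Eg=1.4 eV, chi=4.0 eV, t=350 nm.
    return -(eg - 1.4) ** 2 - 0.5 * (chi - 4.0) ** 2 - ((t - 350) / 500) ** 2

lo = np.array([b[0] for b in bounds.values()])
hi = np.array([b[1] for b in bounds.values()])
samples = rng.uniform(lo, hi, size=(5000, 3))
best = samples[np.argmax([surrogate_efficiency(s) for s in samples])]
print(dict(zip(bounds, np.round(best, 2))))
```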

  17. Battery Simulation Tool for Worst Case Analysis and Mission Evaluations

    Directory of Open Access Journals (Sweden)

    Lefeuvre Stéphane

    2017-01-01

    The first part of this paper presents the PSpice models, including their respective variable parameters at SBS and cell level. The second part introduces the reader to the model parameters that were chosen and identified to perform Monte Carlo Analysis (MCA) simulations. The third part presents some MCA results for a VES16 battery module. Finally, the reader will see some further simulations that were performed by re-using the battery model for another Saft battery cell type (MP XTD) for a specific space application at high temperature.

  18. Fast, Interactive Worst-Case Execution Time Analysis With Back-Annotation

    DEFF Research Database (Denmark)

    Harmon, Trevor; Schoeberl, Martin; Kirner, Raimund

    2012-01-01

    into the development cycle, requiring WCET analysis to be postponed until a final verification phase. In this paper, we propose interactive WCET analysis as a new method to provide near-instantaneous WCET feedback to the developer during software programming. We show that interactive WCET analysis is feasible using...

  19. Totally optimal decision rules

    KAUST Repository

    Amin, Talha

    2017-11-22

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is optimally minimized. Another, coverage represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.

  20. Totally optimal decision rules

    KAUST Repository

    Amin, Talha M.; Moshkov, Mikhail

    2017-01-01

    Optimality of decision rules (patterns) can be measured in many ways. One of these is referred to as length. Length signifies the number of terms in a decision rule and is optimally minimized. Another, coverage represents the width of a rule’s applicability and generality. As such, it is desirable to maximize coverage. A totally optimal decision rule is a decision rule that has the minimum possible length and the maximum possible coverage. This paper presents a method for determining the presence of totally optimal decision rules for “complete” decision tables (representations of total functions in which different variables can have domains of differing values). Depending on the cardinalities of the domains, we can either guarantee for each tuple of values of the function that totally optimal rules exist for each row of the table (as in the case of total Boolean functions where the cardinalities are equal to 2) or, for each row, we can find a tuple of values of the function for which totally optimal rules do not exist for this row.
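
    The definition above can be checked by brute force on a tiny complete decision table. The sketch below (invented table) enumerates, for each row, all consistent rules built from that row's attribute values and tests whether minimum length and maximum coverage are attained by a single rule.

```python
from itertools import combinations

# Brute-force check for totally optimal rules on a tiny decision table (invented):
# a rule for a row uses a subset of that row's attribute values and must match only
# rows with the same decision. Length = number of terms used; coverage = number of
# rows matched. A totally optimal rule attains the minimum length and the maximum
# coverage simultaneously.
rows = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]  # ((x1, x2), decision)

def rules_for(row_idx):
    vals, dec = rows[row_idx]
    out = []
    for k in range(len(vals) + 1):
        for attrs in combinations(range(len(vals)), k):
            matched = [r for r, (v, _) in enumerate(rows)
                       if all(v[a] == vals[a] for a in attrs)]
            if all(rows[m][1] == dec for m in matched):   # rule must be consistent
                out.append((k, len(matched)))             # (length, coverage)
    return out

for i in range(len(rows)):
    rs = rules_for(i)
    min_len, max_cov = min(l for l, _ in rs), max(c for _, c in rs)
    print(f"row {i}: totally optimal rule exists: {(min_len, max_cov) in rs}")
```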

  1. A Multicriteria GIS-Based Assessment to Optimize Biomass Facility Sites with Parallel Environment—A Case Study in Spain

    Directory of Open Access Journals (Sweden)

    Jin Su Jeong

    2017-12-01

    Full Text Available Optimizing a biomass facility site is a critical concern that is currently receiving increased attention because biomass feedstock is geographically dispersed. This research presents a multicriteria GIS assessment with Weighted Linear Combination (WLC) (most suitable areas) and a sensitivity analysis (implementation strategies), applied across various disciplines with suitable criteria, to optimize a biomass facility location in the context of renewable energies respecting the environment. The analysis of results with twelve criteria shows the most suitable areas (9.25%) and the constraints in a case study in Extremadura (Spain), where forest and agriculture are the typical land uses. The sensitivity analysis then identifies the most influential criteria for supporting energy planning decisions. Therefore, this assessment could be used in studies to verify suitable biomass plant sites given the corresponding geographical and spatial circumstances and the available spatial data needed in various governmental and industrial sectors.

  2. Automated IMRT planning with regional optimization using planning scripts.

    Science.gov (United States)

    Xhaferllari, Ilma; Wong, Eugene; Bzdusek, Karl; Lock, Michael; Chen, Jeff

    2013-01-07

    Intensity-modulated radiation therapy (IMRT) has become a standard technique in radiation therapy for treating different types of cancers. Various class solutions have been developed to generate IMRT plans efficiently for simple cases (e.g., localized prostate, whole breast). However, for more complex cases (e.g., head and neck, pelvic nodes), it can be time-consuming for a planner to generate optimized IMRT plans. To generate optimal plans in these more complex cases, which generally have multiple target volumes and organs at risk, it is often necessary to add IMRT optimization structures such as dose-limiting ring structures, adjust the beam geometry, select inverse planning objectives and their associated weights, and add further IMRT objectives to reduce cold and hot spots in the dose distribution. These parameters are generally adjusted manually, in a repeated trial-and-error approach, during the optimization process. To improve IMRT planning efficiency in these more complex cases, an iterative method that incorporates some of these adjustment processes automatically in a planning script was designed, implemented, and validated. In particular, regional optimization is implemented iteratively to reduce hot or cold spots during the optimization process: it begins with the definition and automatic segmentation of hot and cold spots, introduces new objectives and their relative weights into the inverse planning, and repeats the process until termination criteria are met. The method has been applied to three clinical sites (prostate with pelvic nodes, head and neck, and anal canal cancers) and has been shown to reduce IMRT planning time significantly for clinical applications while improving plan quality. The IMRT planning scripts have been used for more than 500 clinical cases.

  3. Optimization of Evacuation Warnings Prior to a Hurricane Disaster

    Directory of Open Access Journals (Sweden)

    Dian Sun

    2017-11-01

    Full Text Available The key purpose of this paper is to demonstrate that optimizing evacuation warnings by time period and impacted zone is crucial for the efficient evacuation of an area impacted by a hurricane. We assume that people behave in a manner consistent with the warnings they receive. By optimizing the issuance of hurricane evacuation warnings, one can control the number of evacuees in different time intervals to avoid congestion during the evacuation. The warning optimization model is applied to a case study of Hurricane Sandy in the study region of Brooklyn. We first develop a model for shelter assignment and then use this outcome to model hurricane evacuation warning optimization, which prescribes an evacuation plan that maximizes the number of evacuees. A significant technical contribution is the development of an iterative greedy heuristic procedure for the nonlinear formulation, which is shown to be optimal in the case of a single evacuation zone with a single evacuee type, while it does not guarantee optimality for multiple zones under unusual circumstances. A significant applied contribution is the demonstration of an interface between the evacuation warning method and a public transportation scheme to facilitate the evacuation of a car-less population. The heuristic we employ can be readily adapted to the case where the response rate is a function of the number of evacuees in prior periods and other variable factors. This element is also explored in the context of our experiment.
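
    One plausible greedy shape for such a warning schedule, with invented populations and capacities rather than the paper's formulation, warns the largest remaining zone each period without exceeding the road network's absorption capacity.

```python
# Toy greedy warning schedule (all numbers hypothetical): each zone has a population
# to evacuate; each period has a road-network absorption capacity. The greedy rule
# warns, in every period, the largest-population remaining zone first, never
# releasing more evacuees than the period's capacity, to avoid congestion.
zones = {"A": 500, "B": 300, "C": 200}      # people remaining per zone
capacity = [250, 250, 300, 200]             # evacuees absorbed per period

schedule = []
for t, cap in enumerate(capacity):
    moved = {}
    while cap > 0 and any(p > 0 for p in zones.values()):
        z = max(zones, key=zones.get)        # most-populated remaining zone first
        take = min(zones[z], cap)
        zones[z] -= take
        cap -= take
        moved[z] = moved.get(z, 0) + take
    schedule.append((t, moved))

for t, moved in schedule:
    print(f"period {t}: warn {moved}")
print("left behind:", {z: p for z, p in zones.items() if p > 0})
```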

  4. Optimization-based topology identification of complex networks

    International Nuclear Information System (INIS)

    Tang Sheng-Xue; Chen Li; He Yi-Gang

    2011-01-01

    In many cases, the topological structure of a complex network is unknown or uncertain, and it is important to identify the exact topological structure. An optimization-based method of identifying the topological structure of a complex network is proposed in this paper. Identification of the exact network topology is converted into a minimization problem by using the estimated network. Then, an improved quantum-behaved particle swarm optimization algorithm is used to solve the optimization problem. Compared with the previous adaptive-synchronization-based method, the proposed method is simple and effective, and it is particularly well suited to identifying the topological structure of synchronized complex networks. In some cases where the states of a complex network are only partially observable, the exact topological structure can also be identified by the proposed method. Finally, numerical simulations are provided to show the effectiveness of the proposed method. (general)

  5. Topology Optimization of Passive Micromixers Based on Lagrangian Mapping Method

    Directory of Open Access Journals (Sweden)

    Yuchen Guo

    2018-03-01

    Full Text Available This paper presents an optimization-based design method for passive micromixers for immiscible fluids, which means that the Peclet number is infinitely large. Based on the topology optimization method, an optimization model is constructed to find the optimal layout of the passive micromixers. Differing from topology optimization methods that use an Eulerian description of the convection-diffusion dynamics, the proposed method considers the extreme case, where mixing is dominated completely by convection with negligible diffusion. In this method, the mixing dynamics is modeled by the mapping method, a Lagrangian description that can deal with the convection-dominated case. Several numerical examples are presented to demonstrate the validity of the proposed method.

  6. Optimal trajectories of aircraft and spacecraft

    Science.gov (United States)

    Miele, A.

    1990-01-01

    Work done on algorithms for the numerical solutions of optimal control problems and their application to the computation of optimal flight trajectories of aircraft and spacecraft is summarized. General considerations on calculus of variations, optimal control, numerical algorithms, and applications of these algorithms to real-world problems are presented. The sequential gradient-restoration algorithm (SGRA) is examined for the numerical solution of optimal control problems of the Bolza type. Both the primal formulation and the dual formulation are discussed. Aircraft trajectories, in particular, the application of the dual sequential gradient-restoration algorithm (DSGRA) to the determination of optimal flight trajectories in the presence of windshear are described. Both take-off trajectories and abort landing trajectories are discussed. Take-off trajectories are optimized by minimizing the peak deviation of the absolute path inclination from a reference value. Abort landing trajectories are optimized by minimizing the peak drop of altitude from a reference value. The survival capability of an aircraft in a severe windshear is discussed, and the optimal trajectories are found to be superior to both constant pitch trajectories and maximum angle of attack trajectories. Spacecraft trajectories, in particular, the application of the primal sequential gradient-restoration algorithm (PSGRA) to the determination of optimal flight trajectories for aeroassisted orbital transfer are examined. Both the coplanar case and the noncoplanar case are discussed within the frame of three problems: minimization of the total characteristic velocity; minimization of the time integral of the square of the path inclination; and minimization of the peak heating rate. The solution of the second problem is called nearly-grazing solution, and its merits are pointed out as a useful

  7. A decision model to allocate protective safety barriers and mitigate domino effects

    International Nuclear Information System (INIS)

    Janssens, Jochen; Talarico, Luca; Reniers, Genserik; Sörensen, Kenneth

    2015-01-01

    In this paper, we present a model to support decision-makers about where to locate safety barriers and mitigate the consequences of an accident triggering domino effects. Based on the features of an industrial area that may be affected by domino accidents, and knowing the characteristics of the safety barriers that can be installed to stall the fire propagation between installations, the decision model can help practitioners in their decision-making. The model can be effectively used to decide how to allocate a limited budget in terms of safety barriers. The goal is to maximize the time-to-failure of a chemical installation under a worst-case scenario approach. The model is mathematically stated, and a flexible and effective solution approach, based on metaheuristics, is developed and tested on an illustrative case study representing a tank storage area of a chemical company. We show that a myopic optimization approach, which does not take into account knock-on effects possibly triggered by an accident, can lead to a distribution of safety barriers that is not effective in mitigating the consequences of a domino accident. Moreover, the optimal allocation of safety barriers, when domino effects are considered, may depend on the so-called cardinality of the domino effects. - Highlights: • A model to allocate safety barriers and mitigate domino effects is proposed. • The goal is to maximize the escalation time of the worst-case scenario. • The model provides useful recommendations for decision makers. • A fast metaheuristic approach is proposed to solve such a complex problem. • Numerical simulations on a realistic case study are shown
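
    The budget-allocation idea can be illustrated by brute force on a toy tank farm (all costs, delays, and times-to-failure invented): choose the barrier set within budget that maximizes the minimum time-to-failure across tanks, which is precisely the non-myopic, worst-case-aware choice the authors advocate.

```python
from itertools import combinations

# Toy version of the allocation problem above (all numbers invented): each candidate
# barrier has a cost and adds escalation delay for some tanks. Within the budget,
# pick the barrier set maximizing the *minimum* time-to-failure across tanks,
# i.e., protecting against the worst-case escalation path.
barriers = [  # (name, cost, {tank: added delay in minutes})
    ("foam_T1", 4, {"T1": 30}),
    ("wall_T1T2", 6, {"T1": 15, "T2": 20}),
    ("spray_T2", 3, {"T2": 25}),
    ("spray_T3", 3, {"T3": 25}),
]
base_ttf = {"T1": 20, "T2": 15, "T3": 25}   # unprotected time-to-failure
budget = 9

def worst_case_ttf(chosen):
    ttf = dict(base_ttf)
    for _, _, gains in chosen:
        for tank, d in gains.items():
            ttf[tank] += d
    return min(ttf.values())

best = max((subset for k in range(len(barriers) + 1)
            for subset in combinations(barriers, k)
            if sum(b[1] for b in subset) <= budget),
           key=worst_case_ttf)
print("install:", [b[0] for b in best], "worst-case TTF:", worst_case_ttf(best))
```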

  8. Thermodynamic optimization opportunities for the recovery and utilization of residual energy and heat in China's iron and steel industry: A case study

    International Nuclear Information System (INIS)

    Chen, Lingen; Yang, Bo; Shen, Xun; Xie, Zhihui; Sun, Fengrui

    2015-01-01

    Analyses and optimizations of material flows and energy flows in the iron and steel industry worldwide are introduced in this paper. It is found that the recovery and utilization of residual energy and heat (RUREH) plays an important role in energy saving and CO2 emission reduction no matter what method is used. Although the energy cascade utilization principle is applied, the efficiency of RUREH in China's iron and steel industry (CISI) is only about 30%–50%, while the international advanced level, in countries such as the USA, Japan, and Sweden, is higher than 90%. An important reason for the low efficiency of RUREH in CISI is that thermodynamic optimization opportunities for the energy recovery and utilization equipment are often ignored, for example in electricity production via waste heat boilers, sintering ore sensible heat recovery, and heat transfer through heat exchangers. A case study of hot blast stove flue gas sensible heat recovery and utilization is presented to illustrate this point. The results show that before the heat conductance distribution optimization, the system can realize an energy saving of 76.2 kgce/h, a profit of 68.9 yuan/h, and a CO2 emission reduction of 187.2 kg/h. After the heat conductance distribution optimization, the system can realize an energy saving of 88.8 kgce/h, a profit of 92.5 yuan/h, and a CO2 emission reduction of 218.2 kg/h, which are improvements of 16.5%, 34.2%, and 16.5%, respectively, over the values before optimization. Thermodynamic optimization from the single equipment to the whole RUREH system is a vital part of future energy conservation work in CISI. - Highlights: • Material flows and energy flows in the iron and steel industry are introduced. • Recovery and utilization of residual energy and heat plays an important role. • A case study of hot blast stove flue gas sensible heat recovery is presented. • Thermodynamic optimization for the system is performed. • Energy saving, profit, and CO 2 emission reduction improvements
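
    The reported percentage gains can be checked directly from the before/after figures; the short script below does the arithmetic. Any small deviation from the quoted 16.5%/34.2%/16.5% is rounding in the source.

```python
# Before/after values from the hot blast stove case study:
# energy saving (kgce/h), profit (yuan/h), CO2 emission reduction (kg/h).
before = {"energy": 76.2, "profit": 68.9, "co2": 187.2}
after = {"energy": 88.8, "profit": 92.5, "co2": 218.2}

for key in before:
    gain = (after[key] - before[key]) / before[key] * 100.0
    print(f"{key}: {gain:.1f}% improvement")
# Output: energy 16.5%, profit 34.3%, co2 16.6% -- matching the paper's
# quoted 16.5%, 34.2%, and 16.5% up to rounding.
```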

  9. A Stack Cache for Real-Time Systems

    DEFF Research Database (Denmark)

    Schoeberl, Martin; Nielsen, Carsten

    2016-01-01

    Real-time systems need time-predictable computing platforms to allow for static analysis of the worst-case execution time. Caches are important for good performance, but data caches are hard to analyze for the worst-case execution time. Stack allocated data has different properties related...

  10. Verification of Memory Performance Contracts with KeY

    OpenAIRE

    Engel, Christian

    2007-01-01

    Determining the worst-case memory consumption is an important issue for real-time Java applications. This work describes a methodology for formally verifying worst-case memory performance constraints and proposes extensions to the Java Modeling Language (JML) facilitating better verifiability of JML performance specifications.

  11. Parallel strategy for optimal learning in perceptrons

    International Nuclear Information System (INIS)

    Neirotti, J P

    2010-01-01

    We developed a parallel strategy for optimally learning specific realizable rules with perceptrons in an online learning scenario. Our result is a generalization of the Caticha–Kinouchi (CK) algorithm developed for learning a perceptron with a synaptic vector drawn from a uniform distribution over the N-dimensional sphere, the so-called typical case. Our method outperforms the CK algorithm in almost all possible situations, failing only in a denumerable set of cases. The algorithm is optimal in the sense that it saturates Bayesian bounds when it succeeds.

  12. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of integrated relay protection models and the expanding scale of power systems, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time overcurrent protection as an example. Reliability, selectivity, speed of operation, and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting values obtained by the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and is suitable for optimizing setting values for relay protection across the whole power system.
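
    For orientation, the sketch below implements a plain quantum-behaved PSO (QPSO) loop on a stand-in objective (the sphere function). It shows only the generic QPSO update with a mean-best attractor and contraction-expansion coefficient; the paper's improvements and its relay-protection objective are not reproduced here.

```python
import numpy as np

def sphere(x):
    # Stand-in objective; the paper optimizes relay setting values instead.
    return float(np.sum(x ** 2))

def qpso(obj, dim=4, n_particles=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-10.0, 10.0, (n_particles, dim))
    pbest = x.copy()
    pbest_val = np.array([obj(p) for p in pbest])
    gbest = pbest[pbest_val.argmin()].copy()
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters          # contraction-expansion coefficient
        mbest = pbest.mean(axis=0)            # mean of the personal bests
        for i in range(n_particles):
            phi = rng.uniform(size=dim)
            p = phi * pbest[i] + (1.0 - phi) * gbest   # local attractor
            u = np.maximum(rng.uniform(size=dim), 1e-12)
            sign = np.where(rng.uniform(size=dim) < 0.5, -1.0, 1.0)
            # Quantum-behaved position update around the attractor.
            x[i] = p + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u)
            val = obj(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), val
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(obj(gbest))

best_x, best_val = qpso(sphere)
print(best_x, best_val)
```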

  13. Optimal design of a gas transmission network: A case study of the Turkish natural gas pipeline network system

    Science.gov (United States)

    Gunes, Ersin Fatih

    Turkey is located between Europe, which has an increasing demand for natural gas, and the Middle East, Asia, and Russia, which have rich and strong natural gas supplies. Because of this geographical location, Turkey has strategic importance with respect to energy sources. To supply this demand, a pipeline network configuration with optimal and efficient lengths, pressures, diameters, and number of compressor stations is needed. Because Turkey already has a constructed and operating network topology, obtaining an optimal configuration of the pipelines, including an optimal number of compressor stations at optimal locations, is the focus of this study. Identifying a network design with the lowest cost is important because of the high maintenance and set-up costs. The number of compressor stations, the lengths of the pipeline segments, the diameter sizes, and the pressures at compressor stations are considered the decision variables in this study. Two existing optimization models were selected and applied to the case study of Turkey. Because of the fixed cost of investment, both models are formulated as mixed integer nonlinear programs, which require branch and bound combined with nonlinear programming solution methods. The differences between the two models concern factors that can affect the natural gas network system, such as wall thickness, material balance, compressor isentropic head, and the amount of gas to be delivered. The results obtained by these two techniques are compared with each other and with the current system. The major differences between the results are in costs, pressures, and flow rates. Both solution techniques find a minimum-cost solution for each model, in both cases below the current cost of the system, while satisfying all the constraints on diameter, length, flow rate, and pressure. These results give the big picture of an ideal configuration for the future network of Turkey.
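
    To make the problem structure concrete, the toy Python sketch below picks a diameter per pipeline segment to minimize cost subject to a minimum delivery pressure. The pressure-drop formula and every number are simplified placeholders; the actual models use Weymouth-type flow equations and compressor siting, solved by branch and bound over the integer choices.

```python
from itertools import product

segments_km = [120.0, 80.0, 150.0]              # hypothetical segment lengths
catalog = {0.6: 400.0, 0.8: 550.0, 1.0: 720.0}  # diameter (m) -> cost per km
p_inlet, p_min, flow = 70.0, 40.0, 1.0          # bar, bar, normalized flow

def pressure_drop(length_km, diameter_m, q):
    # Simplified stand-in: drop grows with length and flow squared,
    # and shrinks steeply with diameter (real models use Weymouth-type laws).
    return 0.02 * length_km * q ** 2 / diameter_m ** 5

best = None
# Enumerate the integer choices that branch and bound would explore.
for diam in product(catalog, repeat=len(segments_km)):
    cost = sum(catalog[d] * l for d, l in zip(diam, segments_km))
    p_out = p_inlet - sum(pressure_drop(l, d, flow) for d, l in zip(diam, segments_km))
    if p_out >= p_min and (best is None or cost < best[0]):
        best = (cost, diam, p_out)

print(best)  # (cost, chosen diameters, delivery pressure)
```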

  14. Do People Want Optimal Deterrence?

    OpenAIRE

    Sunstein, Cass Robert; Schkade, David; Kahneman, Daniel

    2014-01-01

    Two studies test whether people believe in optimal deterrence. The first provides people with personal injury cases that are identical except for variations in the probability of detection and explores whether lower probability cases produce higher punitive damage awards and whether higher probability cases produce lower awards. No such effect is observed. The second asks people whether they agree or disagree with administrative and judicial policies that increase penalties when the probabili...

  15. Tracing the transition path between optimal strategies combinations within a competitive market of innovative industrial products

    Science.gov (United States)

    Batzias, Dimitris F.; Pollalis, Yannis A.

    2012-12-01

    In several cases, a competitive market can be simulated by a game, where each company/opponent is referred to as a player. In order to accommodate the fact that each player (alone or with alliances) is working against the others' interests, the rather conservative maximin criterion is frequently used for selecting the strategy or the combination of strategies that yields the best of the worst possible outcomes for each one of the players. Under this criterion, an optimal solution is obtained when neither player finds it beneficial to alter his strategy, which means that an equilibrium has been achieved, giving also the value of the game. If conditions change for a player, e.g., because he either achieves an unexpectedly successful result in developing an innovative industrial product or obtains higher liquidity that permits him to increase advertisement in order to acquire a larger market share, then a new equilibrium is reached. The identification of the path between the old and the new equilibrium points may prove valuable for investigating the robustness of the solution by means of sensitivity analysis, since uncertainty plays a critical role in this situation, where evaluation of the payoff matrix is usually based on experts' estimates. In this work, the development of a standard methodology (including 16 activity stages and 7 decision nodes) for tracing this path is presented, followed by a numerical implementation that demonstrates its functionality.
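
    The maximin criterion and the equilibrium condition mentioned above can be illustrated on a small payoff matrix; the numbers below are made up purely for illustration and have no connection to the paper's case.

```python
import numpy as np

# Rows: our strategies; columns: opponent's strategies; entries: our payoff.
A = np.array([[4.0, 2.0, 3.0],
              [1.0, 0.0, 2.0],
              [5.0, 1.0, 2.0]])

row_security = A.min(axis=1)          # worst outcome of each of our strategies
maximin_value = row_security.max()    # best of the worst outcomes (maximin)
best_row = int(row_security.argmax())

col_worst = A.max(axis=0)             # opponent's worst case per column
minimax_value = col_worst.min()

print("maximin:", maximin_value, "via row", best_row)
# When maximin == minimax the game has a saddle point: neither player
# benefits from unilaterally changing strategy, i.e., the equilibrium
# (and the value of the game) described in the abstract.
print("saddle point:", maximin_value == minimax_value)  # True here, value 2.0
```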

  16. Optimal crowd evacuation

    NARCIS (Netherlands)

    Hoogendoorn, S.P.; Daamen, W.; Duives, D.C.; Van Wageningen-Kessels, F.L.M.

    2013-01-01

    This paper deals with the optimal allocation of routes, destination, and departure times to members of a crowd, for instance in case of an evacuation or another hazardous situation in which the people need to leave the area as quickly as possible. The generic approach minimizes the evacuation times,

  17. Time-Predictable Computer Architecture

    Directory of Open Access Journals (Sweden)

    Schoeberl Martin

    2009-01-01

    Full Text Available Today's general-purpose processors are optimized for maximum throughput. Real-time systems need a processor with both a reasonable and a known worst-case execution time (WCET). Features such as pipelines with instruction dependencies, caches, branch prediction, and out-of-order execution complicate WCET analysis and lead to very conservative estimates. In this paper, we evaluate the issues of current architectures with respect to WCET analysis. Then, we propose solutions for a time-predictable computer architecture. The proposed architecture is evaluated with an implementation of some features in a Java processor. The resulting processor is a good target for WCET analysis and still performs well in the average case.

  18. OPTIMAL PRICING OF A PERSONALIZED PRODUCT

    Institute of Scientific and Technical Information of China (English)

    Suresh P. SETHI

    2008-01-01

    This paper deals with optimal pricing of a personalized product such as a personal portrait or photo. A new model of the pricing structure inspired by two real-life cases is introduced to the literature and solved to obtain optimal photo sitting fees and the final product price. A sensitivity analysis with respect to the problem parameters is performed.

  19. Joint Pricing and Purchasing Decisions for the Dual-Channel Newsvendor Model with Partial Information

    Directory of Open Access Journals (Sweden)

    Jixiang Zhou

    2014-01-01

    Full Text Available We investigate a joint pricing and purchasing problem for the dual-channel newsvendor model under the assumption that only the mean and variance of demand are known. The newsvendor in our model simultaneously distributes a single product through traditional retail and Internet channels. A robust optimization approach that maximizes the worst-case profit is adopted under the aforementioned conditions to model demand uncertainty, with linear clearing functions characterizing the relationship between demand and prices. We obtain a closed-form expression for the robust optimal policy. Illustrative simulations and numerical experiments show the effects of several parameters on the optimal policy and on newsvendor performance. Finally, we find that the gap between newsvendor performance under demand certainty and under uncertainty is minimal, which shows that the robust approach can significantly improve performance.
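
    The worst-case, mean-variance-only setting has a classic single-channel counterpart in Scarf's distribution-free newsvendor rule, sketched below as a flavor of the approach. The paper's dual-channel, price-setting model is richer; this is only the textbook special case, with illustrative parameter values.

```python
import math

def scarf_quantity(mu, sigma, price, cost):
    """Scarf's distribution-free order quantity, using only the demand
    mean mu and standard deviation sigma."""
    cu = price - cost   # underage cost: margin lost per unit of unmet demand
    co = cost           # overage cost: cost sunk per unsold unit
    return mu + (sigma / 2.0) * (math.sqrt(cu / co) - math.sqrt(co / cu))

q = scarf_quantity(mu=100.0, sigma=20.0, price=15.0, cost=5.0)
print(q)  # ~107.07: orders above the mean because the margin is high (cu > co)
```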

  20. H∞ Filtering for Dynamic Compensation of Self-Powered Neutron Detectors - A Linear Matrix Inequality Based Method -

    Energy Technology Data Exchange (ETDEWEB)

    Park, M.G.; Kim, Y.H.; Cha, K.H.; Kim, M.K. [Korea Electric Power Research Institute, Taejon (Korea)

    1999-07-01

    A method is described to develop an H∞ filtering method for the dynamic compensation of self-powered neutron detectors normally used as fixed incore instruments. An H∞ norm of the filter transfer matrix is used as the optimization criterion in the worst-case estimation error sense. Filter modeling is performed for both continuous- and discrete-time models. The filter gains are optimized with respect to the noise attenuation level of the H∞ setting. By introducing the Bounded Real Lemma, the conventional algebraic Riccati inequalities are converted into Linear Matrix Inequalities (LMIs). Finally, the filter design problem is solved via a convex optimization framework using LMIs. The simulation results show that remarkable improvements are achieved in terms of the filter response time and the filter design efficiency. (author). 15 refs., 4 figs., 3 tabs.
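
    For reference, the continuous-time Bounded Real Lemma that underlies the conversion from Riccati inequalities to LMIs can be stated as follows. This is the standard textbook form with generic state-space notation; the paper's specific filter matrices are not reproduced here.

```latex
% For \dot{x} = A x + B w,\; z = C x + D w, with transfer matrix
% G(s) = C (sI - A)^{-1} B + D, the following are equivalent:
%   (i)  A is Hurwitz and \|G\|_{\infty} < \gamma;
%   (ii) there exists P = P^{T} \succ 0 such that
\begin{equation*}
\begin{bmatrix}
A^{T}P + PA & PB & C^{T} \\
B^{T}P & -\gamma I & D^{T} \\
C & D & -\gamma I
\end{bmatrix} \prec 0 .
\end{equation*}
```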