WorldWideScience

Sample records for ordinal optimization based

  1. Ordinal optimization and its application to complex deterministic problems

    Yang, Mike Shang-Yu

    1998-10-01

    We present in this thesis a new perspective on a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models that simulate the evaluation process numerically, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model into a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization, which utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study in the turbine blade manufacturing process. The problem involves the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success: empirical results indicate a saving of nearly 95% in computing cost.
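
    The core idea above, that ranking designs with a cheap noisy surrogate and softening the goal to a top-s set is far more robust than estimating values precisely, can be sketched as follows. The objective, noise level, and design grid are all illustrative assumptions, not the thesis's actual turbine-blade model:

```python
import random

def true_cost(design):
    # Hypothetical expensive objective, standing in for a full simulation run.
    return (design - 0.3) ** 2

def crude_cost(design, noise=0.1, rng=random):
    # Cheap surrogate: the true cost plus a white-noise-like error term,
    # mirroring the stochastic pseudo-model described above.
    return true_cost(design) + rng.gauss(0.0, noise)

def ordinal_select(designs, s, rng):
    # Ordinal comparison: rank designs by the crude model only, then apply
    # goal softening by keeping the top-s set instead of the single best.
    return sorted(designs, key=lambda d: crude_cost(d, rng=rng))[:s]

rng = random.Random(42)
designs = [i / 100 for i in range(100)]
selected = ordinal_select(designs, s=10, rng=rng)
true_top = set(sorted(designs, key=true_cost)[:10])
print(len(set(selected) & true_top))  # overlap with the true top-10
```

Even with substantial noise, the softened selection typically retains several of the truly best designs, which is what makes the initial screening cheap.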

  2. Ordinal-Measure Based Shape Correspondence

    Faouzi Alaya Cheikh

    2002-04-01

    We present a novel approach to shape similarity estimation based on distance transformation and ordinal correlation. The proposed method operates in three steps: object alignment, contour-to-multilevel-image transformation, and similarity evaluation. This approach is suitable for use in shape classification, content-based image retrieval, and performance evaluation of segmentation algorithms; the latter two applications are addressed in this paper. Simulation results show that in both applications the proposed measure performs quite well in quantifying shape similarity. The scores obtained using this technique reflect well the correspondence between object contours as humans perceive it.

  3. Statistical Optimality in Multipartite Ranking and Ordinal Regression.

    Uematsu, Kazuki; Lee, Yoonkyung

    2015-05-01

    Statistical optimality in multipartite ranking is investigated as an extension of bipartite ranking. We consider the optimality of ranking algorithms through minimization of the theoretical risk, which combines pairwise ranking errors of ordinal categories with differential ranking costs. The extension shows that for a certain class of convex loss functions, including exponential loss, the optimal ranking function can be represented as a ratio of the weighted conditional probability of upper categories to that of lower categories, where the weights are given by the misranking costs. This result also bridges traditional ranking methods, such as the proportional odds model in statistics, with various ranking algorithms in machine learning. Further, the analysis of multipartite ranking with different costs provides a new perspective on non-smooth, list-wise ranking measures such as the discounted cumulative gain, and on preference learning. We illustrate our findings with a simulation study and real data analysis.
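
    The upper-to-lower ratio form described above can be sketched directly. The category probabilities, costs, and cut point below are illustrative assumptions, not the paper's estimator:

```python
def upper_lower_ratio(probs, weights, cut):
    # probs[j]  : conditional probability P(Y = j | x) for categories 0..K-1
    # weights[j]: misranking cost attached to category j (assumed values)
    # Score = cost-weighted probability mass above the cut, divided by the
    # mass below it -- the ratio form stated in the abstract.
    upper = sum(weights[j] * probs[j] for j in range(cut, len(probs)))
    lower = sum(weights[j] * probs[j] for j in range(cut))
    return upper / lower

item_a = [0.1, 0.2, 0.7]  # hypothetical item concentrated in the top category
item_b = [0.6, 0.3, 0.1]  # hypothetical item concentrated in the bottom one
w = [1.0, 1.0, 1.0]       # equal misranking costs for illustration
print(upper_lower_ratio(item_a, w, 1), upper_lower_ratio(item_b, w, 1))
```

Ranking items by this score places items likely to belong to higher ordinal categories above those likely to belong to lower ones.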

  4. New ordinances

    Reuter, H.

    1980-01-01

    Based on extensive preliminary work by the responsible Federal Minister of Labour and Social Affairs, the 'Ordinance to Replace Ordinances under Article 24 of the Trading and Industrial Code' was issued by the Federal Government on February 27, 1980. This new ordinance contains the new versions of the Steam Boiler Ordinance, the Pressure Gas Ordinance, the Lift Ordinance, the Ordinance on Electrical Installations in Rooms with High Explosion Hazards, the Acetylene Ordinance, and the Ordinance on Combustible Liquids. Accordingly, these new ordinances all have the same date of issue. Coming into force on July 1, 1980, they will replace six ordinances for plants requiring licensing. The same applies to the pertinent general administrative regulations. (orig.) [de

  5. A Hybrid Heuristic Optimization Approach for Leak Detection in Pipe Networks Using Ordinal Optimization Approach and the Symbiotic Organism Search

    Chao-Chih Lin

    2017-10-01

    A new transient-based hybrid heuristic approach is developed to optimize a transient generation process and to detect leaks in pipe networks. The approach couples the ordinal optimization approach (OOA) and the symbiotic organism search (SOS) to solve the optimization problem iteratively. A pipe network analysis model (PNSOS) is first used to determine the steady-state head distribution and pipe flow rates. The best transient generation point and its relevant valve operation parameters are optimized by maximizing the objective function of transient energy. The transient event is created at the chosen point, and the method of characteristics (MOC) is used to analyze the transient flow. The OOA is applied to sift through the candidate pipes and the initial organisms with leak information. The SOS is employed to determine the leaks by minimizing the sum of differences between simulated and computed heads at the observation points. Two synthetic leaking scenarios, a simple pipe network and a water distribution network (WDN), are chosen to test the performance of the leak detection ordinal symbiotic organism search (LDOSOS). Leak information can be accurately identified by the proposed approach in both scenarios. The presented technique makes a remarkable contribution to the success of leak detection in pipe networks.

  6. Prediction of spectral acceleration response ordinates based on PGA attenuation

    Graizer, V.; Kalkan, E.

    2009-01-01

    Developed herein is a new peak ground acceleration (PGA)-based predictive model for 5% damped pseudospectral acceleration (SA) ordinates of the free-field horizontal component of ground motion from shallow-crustal earthquakes. The predictive model of ground motion spectral shape (i.e., the normalized spectrum) is generated as a continuous function of a few parameters. The proposed model eliminates the classical exhaustive matrix of estimator coefficients and is therefore significantly easier to implement. It is built on the Next Generation Attenuation (NGA) database with a number of additions from recent Californian events, including the 2003 San Simeon and 2004 Parkfield earthquakes. A unique feature of the model is its new functional form explicitly integrating PGA as a scaling factor. The spectral shape model is parameterized within an approximation function using moment magnitude, closest distance to the fault (fault distance), and VS30 (average shear-wave velocity in the upper 30 m) as independent variables. Mean values of its estimator coefficients were computed by fitting the approximation function to the spectral shape of each record using robust nonlinear optimization. The proposed spectral shape model is independent of the PGA attenuation relation, allowing various PGA attenuation relations to be used to estimate the response spectrum of earthquake recordings.

  7. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes, but little research on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed using the Gini coefficient to optimize the maximum reported cluster size; however, the method has been developed and evaluated only for the Poisson model. We adapt the Gini coefficient to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model: it most often picked optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes, with very high accuracy. A more refined collection of clusters can thus be obtained by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
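
    For reference, the standard Gini coefficient underlying this approach can be computed from sorted values (the Lorenz-curve form). In the method above it would be applied to quantities derived from candidate clusters; the inputs here are purely illustrative:

```python
def gini(values):
    # Gini coefficient of a non-negative sequence:
    # G = 2 * sum_i (i * x_(i)) / (n * sum(x)) - (n + 1) / n,
    # where x_(i) are the values sorted ascending (1-indexed).
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return 2.0 * cum / (n * total) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))  # perfectly even collection -> 0.0
print(gini([0, 0, 0, 1]))  # all mass in one element  -> 0.75
```

A coefficient near zero indicates evenly contributing clusters; larger values indicate that a few clusters dominate, which is the signal used to pick the maximum reported cluster size.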

  8. Memory-Based Specification of Verbal Features for Classifying Animals into Super-Ordinate and Sub-Ordinate Categories

    Takahiro Soshi

    2017-09-01

    Accumulating evidence suggests that category representations are based on features. Distinguishing features are considered to define categories, because of all-or-none responses for objects in different categories; however, it is unclear how distinguishing features actually classify objects at various category levels. The present study included 75 animals within three classes (mammal, bird, and fish), along with 195 verbal features. Healthy adults participated in memory-based feature-animal matching verification tests. Analyses included hierarchical clustering, a support vector machine, and independent component analysis to identify features effective for classification. Quantitative and qualitative comparisons of significant features were conducted between the super-ordinate and sub-ordinate levels. The number of significant features was larger at the super-ordinate than at the sub-ordinate level. Qualitatively, the proportion of biological features was larger than that of cultural/affective features at both levels, while the proportion of affective features increased at the sub-ordinate level. In summary, the two types of features function differentially to establish category representations.

  9. An Integrated Model of Co-ordinated Community-Based Care.

    Scharlach, Andrew E; Graham, Carrie L; Berridge, Clara

    2015-08-01

    Co-ordinated approaches to community-based care are a central component of current and proposed efforts to help vulnerable older adults obtain needed services and supports and reduce unnecessary use of health care resources. This study examines ElderHelp Concierge Club, an integrated community-based care model that includes comprehensive personal and environmental assessment, multilevel care co-ordination, a mix of professional and volunteer service providers, and a capitated, income-adjusted fee model. Evaluation includes a retrospective study (n = 96) of service use and perceived program impact, and a prospective study (n = 21) of changes in participant physical and social well-being and health services utilization. Over the period of this study, participants showed greater mobility, greater ability to meet household needs, greater access to health care, reduced social isolation, reduced home hazards, fewer falls, and greater perceived ability to obtain assistance needed to age in place. This study provides preliminary evidence that an integrated multilevel care co-ordination approach may be an effective and efficient model for serving vulnerable community-based elders, especially low- and moderate-income elders who otherwise could not afford the cost of care. The findings suggest the need for multisite controlled studies to more rigorously evaluate program impacts and the optimal mix of various program components.

  10. 2D co-ordinate transformation based on a spike timing-dependent plasticity learning mechanism.

    Wu, QingXiang; McGinnity, Thomas Martin; Maguire, Liam; Belatreche, Ammar; Glackin, Brendan

    2008-11-01

    In order to plan accurate motor actions, the brain needs to build an integrated spatial representation associated with visual and haptic stimuli. Since visual stimuli are represented in retina-centered co-ordinates and haptic stimuli in body-centered co-ordinates, co-ordinate transformations must occur between the two. A spiking neural network (SNN) model, trained with spike-timing-dependent plasticity (STDP), is proposed to perform a 2D co-ordinate transformation from the polar representation of an arm position to a Cartesian representation, creating a virtual image map of a haptic input. Through the visual pathway, a position signal corresponding to the haptic input is used to train the SNN with STDP synapses such that, after learning, the SNN can perform the co-ordinate transformation to generate a representation of the haptic input with the same co-ordinates as a visual image. The model can be applied to explain co-ordinate transformation in spiking-neuron-based systems, and the principle can be used in artificial intelligence systems to process complex co-ordinate transformations represented by biological stimuli.
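
    For reference, the closed-form mapping the network is trained to approximate is the ordinary polar-to-Cartesian conversion; the example values are illustrative:

```python
import math

def polar_to_cartesian(r, theta):
    # Target of the learned transformation: an arm position given as
    # (radius, angle) in polar co-ordinates, converted to Cartesian (x, y).
    return r * math.cos(theta), r * math.sin(theta)

print(polar_to_cartesian(1.0, math.pi / 2))  # approximately (0.0, 1.0)
```

The SNN learns this mapping from examples rather than computing it analytically, which is the point of the STDP training scheme.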

  11. A Rational Decision Maker with Ordinal Utility under Uncertainty: Optimism and Pessimism

    Han, Ji

    2009-01-01

    In game theory and artificial intelligence, decision-making models often involve maximizing expected utility, which does not respect ordinal invariance. In this paper, the author discusses the possibility of preserving ordinal invariance while still making a rational decision under uncertainty.
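
    The non-invariance is easy to exhibit: two utility functions that order the outcomes identically can disagree about which lottery maximizes expected utility. The outcomes and numbers below are made up for illustration:

```python
def expected_utility(lottery, utility):
    # lottery: list of (probability, outcome) pairs
    return sum(p * utility[o] for p, o in lottery)

# Both assignments rank the outcomes worst < middle < best, so they are
# ordinally equivalent, but the spacing between values differs.
u1 = {"worst": 0.0, "middle": 0.4, "best": 1.0}
u2 = {"worst": 0.0, "middle": 0.9, "best": 1.0}

safe = [(1.0, "middle")]
risky = [(0.5, "worst"), (0.5, "best")]

print(expected_utility(risky, u1) > expected_utility(safe, u1))  # True
print(expected_utility(risky, u2) > expected_utility(safe, u2))  # False
```

Since only the ordering of outcomes is common to the two utilities, any decision rule that is ordinally invariant must treat them identically, which expected-utility maximization does not.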

  12. Efficient iris texture analysis method based on Gabor ordinal measures

    Tajouri, Imen; Aydi, Walid; Ghorbel, Ahmed; Masmoudi, Nouri

    2017-07-01

    With the growing interest in security, iris recognition stands as one of the most versatile techniques for biometric identification and authentication, owing to the uniqueness of every individual's iris texture. We propose an efficient approach to feature extraction. First, the iris zigzag "collarette" is extracted from the rest of the image by means of the circular Hough transform, as it contains the most significant regions of the iris texture. Second, the linear Hough transform is used to detect the eyelids, and a median filter is applied to remove the eyelashes. Then, a technique combining the richness of Gabor features with the compactness of ordinal measures is implemented for feature extraction, so that a discriminative feature representation for every individual can be achieved. Finally, a modified Hamming distance is used for matching. The proposed procedure proves reliable compared with state-of-the-art approaches, achieving recognition rates of 99.98%, 98.12%, and 95.02% on the CASIA V1.0, CASIA V3.0, and IIT Delhi V1 iris databases, respectively.
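
    The abstract does not specify the modification, but the standard masked fractional Hamming distance that iris matchers build on can be sketched as follows; the bit patterns are illustrative:

```python
def masked_hamming(code_a, code_b, mask_a, mask_b):
    # Fraction of disagreeing bits, counted only where both masks mark the
    # bit as valid (eyelid/eyelash regions contribute zeros to the masks).
    valid = [ma & mb for ma, mb in zip(mask_a, mask_b)]
    disagree = sum(v & (a ^ b) for a, b, v in zip(code_a, code_b, valid))
    return disagree / sum(valid)

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 0, 1, 0, 1, 1, 0]
m = [1, 1, 1, 1, 1, 1, 0, 0]  # last two bits occluded in both images
print(masked_hamming(a, b, m, m))  # 2 disagreements over 6 valid bits
```

A distance near zero indicates the same iris; distances around 0.5 are what independent irises produce on average.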

  13. Optimizing nitrogen fertilizer application to irrigated wheat. Results of a co-ordinated research project. 1994-1998

    2000-07-01

    This TECDOC summarizes the results of a Co-ordinated Research Project (CRP) on the Use of Nuclear Techniques for Optimizing Fertilizer Application under Irrigated Wheat to Increase the Efficient Use of Nitrogen Fertilizer and Consequently Reduce Environmental Pollution. The project was carried out between 1994 and 1998 under the technical co-ordination of the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture. Fourteen Member States of the IAEA and FAO carried out a series of field experiments aimed at improving irrigation water and fertilizer-N uptake efficiencies through integrated management of the complex interactions involving inputs, soils, climate, and wheat cultivars. Its goals were: to investigate various aspects of fertilizer-N uptake efficiency of wheat crops under irrigation through an interregional research network involving countries growing large areas of irrigated wheat; to use 15N and the soil-moisture neutron probe to determine the fate of applied N, to follow water and nitrate movement in the soil, and to determine water balance and water-use efficiency in irrigated wheat cropping systems; to use the data generated to further develop and refine various relationships in the Ceres-Wheat computer simulation model; and to use the knowledge generated to produce an N-rate-recommendation package to refine specific management strategies with respect to fertilizer applications and expected yields.

  14. Memory-Based Specification of Verbal Features for Classifying Animals into Super-Ordinate and Sub-Ordinate Categories

    Takahiro Soshi; Norio Fujimaki; Atsushi Matsumoto; Aya S. Ihara

    2017-01-01

    Accumulating evidence suggests that category representations are based on features. Distinguishing features are considered to define categories, because of all-or-none responses for objects in different categories; however, it is unclear how distinguishing features actually classify objects at various category levels. The present study included 75 animals within three classes (mammal, bird, and fish), along with 195 verbal features. Healthy adults participated in memory-based feature-animal m...

  15. Wormhole Detection Based on Ordinal MDS Using RTT in Wireless Sensor Network

    Saswati Mukherjee

    2016-01-01

    In wireless communication, a wormhole attack is a crucial threat that deteriorates the normal functionality of the network; the invasion of wormholes destroys the network topology completely. However, most existing solutions require special hardware, synchronized clocks, or long processing times to defend against long-path wormhole attacks. In this work, we propose a wormhole detection method using range-based topology comparison that exploits the local neighbourhood subgraph. The round trip time (RTT) for each node pair is gathered to generate neighbour information. Then, the network is reconstructed by ordinal multidimensional scaling (MDS), followed by a suspicion phase that lists the suspected wormholes based on the spatial reconstruction. Iterative computation of MDS helps to visualize the topology changes and can localize the potential wormholes. Finally, a verification phase is used to remove falsely accused nodes and identify real adversaries. The novelty of our algorithm is that it can detect both short-path and long-path wormhole links. Extensive simulations demonstrate the efficacy of our approach compared to existing ones.
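
    The reconstruction step can be illustrated with classical (metric) MDS, a simplification of the ordinal variant the paper uses, treating RTT-derived values as Euclidean distances. The four-node layout is hypothetical:

```python
import numpy as np

def classical_mds(dist, dim=2):
    # Classical (Torgerson) MDS: double-center the squared distance matrix
    # and embed the nodes along the top eigenvectors. A metric stand-in for
    # the ordinal MDS step described above.
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    idx = np.argsort(vals)[::-1][:dim]            # largest eigenvalues first
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

# Hypothetical nodes on a unit square; dist stands in for RTT-derived ranges.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
emb = classical_mds(dist)
rec = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
print(np.allclose(dist, rec))  # distances recovered up to rotation/reflection
```

A wormhole link shortens some RTTs inconsistently with any low-dimensional layout, so nodes whose reconstructed positions fit the measured RTTs poorly become the suspects.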

  16. Development of three-dimensional program based on Monte Carlo and discrete ordinates bidirectional coupling method

    Han Jingru; Chen Yixue; Yuan Longjun

    2013-01-01

    The Monte Carlo (MC) and discrete ordinates (SN) methods are commonly used in the design of radiation shielding. The Monte Carlo method treats geometry exactly but is time-consuming for deep-penetration problems. The discrete ordinates method is computationally efficient but costly in computer memory, and it suffers from ray effects. Neither the discrete ordinates method nor the Monte Carlo method alone suffices for shielding calculations of large, complex nuclear facilities. To solve this problem, a Monte Carlo and discrete ordinates bidirectional coupling method is developed. The bidirectional coupling is implemented in an interface program that transfers the particle probability distribution of MC and the angular flux of discrete ordinates, combining the advantages of both methods. Test problems in Cartesian and cylindrical coordinates have been calculated with the coupling method; the results show satisfactory agreement with MCNP and TORT, demonstrating the correctness of the program. (authors)

  17. Agent-Based Optimization

    Jędrzejowicz, Piotr; Kacprzyk, Janusz

    2013-01-01

    This volume presents a collection of original research works by leading specialists focusing on novel and promising approaches in which the multi-agent system paradigm is used to support, enhance or replace traditional approaches to solving difficult optimization problems. The editors have invited several well-known specialists to present their solutions, tools, and models falling under the common denominator of agent-based optimization. The book consists of eight chapters covering examples of application of the multi-agent paradigm and respective customized tools to solve difficult optimization problems arising in different areas such as machine learning, scheduling, transportation and, more generally, distributed and cooperative problem solving.

  18. The neural correlates of visual imagery: A co-ordinate-based meta-analysis.

    Winlove, Crawford I P; Milton, Fraser; Ranson, Jake; Fulford, Jon; MacKisack, Matthew; Macpherson, Fiona; Zeman, Adam

    2018-01-02

    Visual imagery is a form of sensory imagination, involving subjective experiences typically described as similar to perception, but which occur in the absence of corresponding external stimuli. We used the Activation Likelihood Estimation algorithm (ALE) to identify regions consistently activated by visual imagery across 40 neuroimaging studies, the first such meta-analysis. We also employed a recently developed multi-modal parcellation of the human brain to attribute stereotactic co-ordinates to one of 180 anatomical regions, the first time this approach has been combined with the ALE algorithm. We identified a total of 634 foci, based on measurements from 464 participants. Our overall comparison identified activation in the superior parietal lobule, particularly in the left hemisphere, consistent with the proposed 'top-down' role for this brain region in imagery. Inferior premotor areas and the inferior frontal sulcus were reliably activated, a finding consistent with the prominent semantic demands made by many visual imagery tasks. We observed bilateral activation in several areas associated with the integration of eye movements and visual information, including the supplementary and cingulate eye fields (SCEFs) and the frontal eye fields (FEFs), suggesting that enactive processes are important in visual imagery. V1 was typically activated during visual imagery, even when participants have their eyes closed, consistent with influential depictive theories of visual imagery. Temporal lobe activation was restricted to area PH and regions of the fusiform gyrus, adjacent to the fusiform face complex (FFC). These results provide a secure foundation for future work to characterise in greater detail the functional contributions of specific areas to visual imagery.

  19. Tree Ordination as Invented Tradition

    Avery Morrow

    2012-01-01

    The symbolic ordination of trees as monks in Thailand is widely perceived in Western scholarship to be proof of the power of Buddhism to spur ecological thought. However, a closer analysis of tree ordination demonstrates that it is not primarily about Buddhist teaching, but rather is an invented tradition based on the sanctity of Thai Buddhist symbols as well as those of spirit worship and the monarchy. Tree ordinations performed by non-Buddhist minorities in Thailand do not demonstrate a religious commitment but rather a political one.

  20. Clearance of material with negligible levels of radioactivity based on the amended German radiation protection ordinance

    Schaller, G.; Bayer, A.

    2002-01-01

    For the first time, the modalities for the clearance of relevant materials have been laid down comprehensively in the amended Radiation Protection Ordinance. A distinction is made between unconditional clearance and cases of clearance in which disposal, recycling, or re-use is prescribed. In the case of unconditional clearance, special consideration is given to large volumes of building rubble and excavated soil. Material can only be released from supervision under the Atomic Energy Act when the radiation-related risk, and correspondingly the dose from material issued with clearance, are at negligible levels. To facilitate the practical application of clearance, values have been derived on the basis of scenarios covering all radiation exposures that can reasonably be considered. (orig.) [de

  1. The impact of tourists on Antarctic tardigrades: an ordination-based model

    Sandra J. McInnes

    2013-05-01

    Tardigrades are important members of the Antarctic biota, yet little is known about their role in the soil fauna or whether they are affected by anthropogenic factors. The German Federal Environment Agency commissioned research to assess the impact of human activities on soil meiofauna at 14 localities along the Antarctic Peninsula during the 2009/2010 and 2010/2011 austral summers. We used ordination techniques to re-assess the block-sampling design used to compare areas of high and low human impact, and to identify which of the sampled variables were biologically relevant and/or demonstrated anthropogenic significance. The most significant differences were between locations, reflecting local habitat and vegetation factors, rather than within-location anthropogenic impact. We noted no evidence of exotic imports, and report on new maritime Antarctic sample sites and habitats.

  2. An approach to solve group-decision-making problems with ordinal interval numbers.

    Fan, Zhi-Ping; Liu, Yang

    2010-10-01

    The ordinal interval number is a form of uncertain preference information in group decision making (GDM), yet it is seldom discussed in the existing research. This paper investigates how the ranking order of alternatives is determined based on preference information in the form of ordinal interval numbers in GDM problems. When ranking a large quantity of ordinal interval numbers, the efficiency and accuracy of the ranking process are critical. A new approach is proposed to rank alternatives using ordinal interval numbers when every ranking ordinal in an ordinal interval number is assumed to be uniformly and independently distributed in its interval. First, we give the definition of the possibility degree for comparing two ordinal interval numbers and the related theoretical analysis. Then, to rank alternatives by comparing multiple ordinal interval numbers, a collective expectation possibility degree matrix on pairwise comparisons of alternatives is built, and an optimization model based on this matrix is constructed. Furthermore, an algorithm is presented to rank alternatives by solving the model. Finally, two examples illustrate the use of the proposed approach.
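
    Under the stated uniform-and-independent assumption, the possibility degree of one ordinal interval number dominating another can be estimated by sampling; the paper derives it in closed form, so this is only an illustrative sketch:

```python
import random

def possibility_degree(a, b, samples=100_000, seed=0):
    # Monte Carlo estimate of P(A >= B), with A and B drawn uniformly and
    # independently from the interval pairs a = (aL, aU) and b = (bL, bU).
    rng = random.Random(seed)
    hits = sum(rng.uniform(*a) >= rng.uniform(*b) for _ in range(samples))
    return hits / samples

print(possibility_degree((3, 4), (1, 2)))  # a entirely above b -> 1.0
print(possibility_degree((0, 1), (0, 1)))  # identical intervals -> about 0.5
```

Note that the degrees are complementary, p(a, b) + p(b, a) = 1 up to sampling error, which is why the pairwise possibility matrix over all alternatives has the reciprocal structure the abstract exploits.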

  3. Nuclear based technologies for estimating microbial protein supply in ruminant livestock. Proceedings of the second research co-ordination meeting of a co-ordinated research project (phase 1)

    1999-06-01

    The Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture, through its Co-ordinated Research Projects (CRPs), has been assisting national agricultural research systems in Member States to develop and apply nuclear and related techniques for improving livestock productivity. The programmes have focused on animal nutrition, animal reproduction and, more recently, on animal nutrition/reproduction interactions, with emphasis on smallholder farming systems. The measurement of microbial protein supply to ruminant livestock has been an important area of research in ruminant nutrition. An estimate of the microbial protein contribution to the intestinal protein flow is important for estimating the protein requirement of ruminant animals. Understanding the process of microbial protein synthesis has been difficult, however, and due to the lack of simple and accurate methods for measuring microbial protein production in vivo, the methods used are based on complex microbial markers which require surgically prepared animals. As a result of a consultants meeting held in May 1995 to advise the Joint FAO/IAEA Division on the feasibility of using nuclear and related techniques for the development and validation of techniques for measuring microbial protein supply in ruminant animals, an FAO/IAEA Co-ordinated Research Project on Development, Standardization and Validation of Nuclear Based Technologies for Measuring Microbial Protein Supply in Ruminant Livestock for Improving Productivity was initiated in 1996, with a view to validating and adapting this technology for use in developing countries. To assist scientists participating in the CRP, a laboratory manual containing experimental protocols and methodologies for standardization and validation of the urine purine derivative technique and the development of models to suit local conditions was published as IAEA-TECDOC-945. The present publication contains the final reports from participants in Phase 1 of the project.

  5. Risk Based Optimal Fatigue Testing

    Sørensen, John Dalsgaard; Faber, M.H.; Kroon, I.B.

    1992-01-01

    Optimal fatigue life testing of materials is considered. Based on minimization of the total expected costs of a mechanical component, a strategy is suggested for determining the optimal stress range levels at which additional experiments are to be performed, together with an optimal value...

  6. Report of the 2. research co-ordination meeting of the co-ordinated research programme on the development of computer-based troubleshooting tools and instruments

    1998-11-01

    The Research coordination meeting reviewed current results on the Development of Computer-Based Troubleshooting Tools and Instruments. Presentations at the meeting were made by the participants, and the project summary reports include: PC based software for troubleshooting microprocessor-based instruments; technical data base software; design and construction of a random pulser for maintenance and quality control of a nuclear counting system; microprocessor-based power conditioner; in-circuit emulator for microprocessor-based nuclear instruments; PC-based analog signal generator for simulated detector signals and arbitrary test waveforms for testing of nuclear instruments; expert system for nuclear instrument troubleshooting; development and application of versatile computer-based measurement and diagnostic tools; and development of a programmable signal generator for troubleshooting of nuclear instrumentation

  7. Study on the method of ranking in group decision making based on ordinal interval preference information

    陈侠; 陈岩

    2011-01-01

    It is a new and important research topic to study the problem of ranking in group decision making based on ordinal interval preference information. In this paper, an analytic method is proposed to solve this ranking problem. Firstly, some concepts and properties of ordinal interval preference information are introduced. Then, after introducing the concepts of the possibility degree and the possibility matrix, it is shown that the possibility matrices of all experts are fuzzy reciprocal matrices and that they are weakly consistent. Furthermore, an optimization model of group consensus is constructed to calculate the optimal weight vector, and a method of ranking in group decision making based on ordinal interval preference information is proposed. Finally, a numerical example is given to illustrate the use of the proposed method.
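The possibility-degree construction this abstract relies on can be sketched as follows. The interval-comparison formula below is the standard one for comparing two intervals and is an assumption here, not taken from the paper itself; the complementarity property (p_ij + p_ji = 1) is exactly the fuzzy reciprocal structure the abstract mentions.

```python
def possibility(a, b):
    """Possibility degree that interval a = (lo, hi) is ranked at least as high as b."""
    al, au = a
    bl, bu = b
    la, lb = au - al, bu - bl
    if la + lb == 0:                         # both intervals degenerate
        return 0.5 if al == bl else float(al > bl)
    return min(max((au - bl) / (la + lb), 0.0), 1.0)

def possibility_matrix(intervals):
    """Pairwise possibility matrix over one expert's ordinal-interval judgements."""
    n = len(intervals)
    return [[possibility(intervals[i], intervals[j]) for j in range(n)]
            for i in range(n)]

# Three alternatives ranked by one expert as ordinal intervals (lower, upper):
P = possibility_matrix([(1, 2), (2, 3), (1, 3)])
```

One such matrix per expert is what the group-consensus optimization model would then aggregate into a single weight vector.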

  8. A hierarchical model for ordinal matrix factorization

    Paquet, Ulrich; Thomson, Blaise; Winther, Ole

    2012-01-01

    This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based...

  9. Second Ordinance amending the Radiation Protection Ordinance

    1989-01-01

    The amendment of the Radiation Protection Ordinance brings about the following changes: (1) Introduction of the concept of effective dose, reduction of limits for partial body dose, adoption of the radiotoxicity values of radionuclides as established by the EC Basis Standards; (2) introduction of a working-life-related dose limit of 400 mSv; (3) supplementing provisions for the protection of the population, particularly by the standard procedure for radioecological impact assessment and determination of dose factors; (4) supplementing provisions on the use of radioactive substances in medicine and medical research; (5) supplementing provisions on health physics monitoring; (6) provisions for improving the supervision and controls in the transport of radioactive substances; (7) definition of activities and their assignment to the provisions of the Radiation Protection Ordinance; (8) revision of the waste management provisions of the Radiation Protection Ordinance. (HP) [de

  10. Ordinal Regression Based Subpixel Shift Estimation for Video Super-Resolution

    Petrovic Nemanja

    2007-01-01

    We present a supervised learning-based approach for subpixel motion estimation which is then used to perform video super-resolution. The novelty of this work is the formulation of the problem of subpixel motion estimation in a ranking framework. The ranking formulation is a variant of the classification and regression formulations, in which the ordering present in the class labels, namely the shift between patches, is explicitly taken into account. Finally, we demonstrate the applicability of our approach by super-resolving synthetically generated images with global subpixel shifts and enhancing real video frames by accounting for both local integer and subpixel shifts.

  11. Simulating Ordinal Data

    Ferrari, Pier Alda; Barbiero, Alessandro

    2012-01-01

    The increasing use of ordinal variables in different fields has led to the introduction of new statistical methods for their analysis. The performance of these methods needs to be investigated under a number of experimental conditions. Procedures to simulate from ordinal variables are then required. In this article, we deal with simulation from…
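As a minimal illustration of what simulating from an ordinal variable involves (a generic inverse-CDF sketch, not the authors' procedure), a single ordinal variable with prescribed category probabilities can be drawn as follows; correlated ordinal variables are typically obtained by discretizing a correlated latent continuous variable instead.

```python
import random

def simulate_ordinal(probs, n, seed=0):
    """Draw n observations of an ordinal variable with P(X = k) = probs[k]."""
    rng = random.Random(seed)
    cum, total = [], 0.0
    for p in probs:                          # build the cumulative distribution
        total += p
        cum.append(total)
    out = []
    for _ in range(n):
        u = rng.random()
        # default to the last category guards against floating-point round-off
        out.append(next((k for k, c in enumerate(cum) if u <= c), len(probs) - 1))
    return out

sample = simulate_ordinal([0.2, 0.5, 0.3], 10000)
```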

  12. Neutron distribution modeling based on integro-probabilistic approach of discrete ordinates method

    Khromov, V.V.; Kryuchkov, E.F.; Tikhomirov, G.V.

    1992-01-01

    This paper describes a universal nodal method for calculating neutron distributions in reactor and shielding problems, based on the use of influence functions and factors of locally integrated volume and surface neutron sources in phase subregions. The method avoids the limitations of the collision-probability method with respect to the detailed calculation of the angular neutron flux dependence, scattering anisotropy and empty channels. The proposed method may be considered a modification of the Sn method with the advantage of eliminating ray effects. The theory and algorithm of the method are described, followed by examples of its application to the calculation of neutron distributions in a three-dimensional model of a fusion reactor blanket and in a highly heterogeneous reactor with an empty channel

  13. Optimization of production and quality control of therapeutic radionuclides and radiopharmaceuticals. Final report of a co-ordinated research project 1994-1998

    1999-09-01

    The 'renaissance' of the therapeutic applications of radiopharmaceuticals during the last few years was in part due to a greater availability of radionuclides with appropriate nuclear decay properties, as well as to the development of carrier molecules with improved characteristics. Although radionuclides such as 32P, 89Sr and 131I were used from the early days of nuclear medicine in the late 1930s and early 1940s, the inclusion of other particle emitting radionuclides into the nuclear medicine armamentarium was rather late. Only in the early 1980s did the specialized scientific literature start to show the potential for using other beta emitting nuclear reactor produced radionuclides such as 153Sm, 166Ho, 165Dy and 186-188Re. Bone seeking agents radiolabelled with the above mentioned beta emitting radionuclides demonstrated clear clinical potential in relieving intense bone pain resulting from metastases of the breast, prostate and lung of cancer patients. Therefore, upon the recommendation of a consultants meeting held in Vienna in 1993, the Co-ordinated Research Project (CRP) on Optimization of the Production and Quality Control of Radiotherapeutic Radionuclides and Radiopharmaceuticals was established in 1994. The CRP aimed at developing and improving existing laboratory protocols for the production of therapeutic radionuclides using existing nuclear research reactors, including the corresponding radiolabelling and quality control procedures, and validation in experimental animals. With the participation of ten scientists from IAEA Member States, several laboratory procedures for preparation and quality control were developed, tested and assessed as potential therapeutic radiopharmaceuticals for bone pain palliation. In particular, the CRP optimised the reactor production of 153Sm and the preparation of the radiopharmaceutical 153Sm-EDTMP (ethylene diamine tetramethylene phosphonate), as well as radiolabelling techniques and quality control methods for

  15. Radiation protection Ordinance

    1976-06-01

    This Ordinance lays down the licensing system for activities in Switzerland involving possible exposure to radiation, with the exception of nuclear installations, fuels and radioactive waste which, under the 1959 Atomic Energy Act, are subject to licensing. The Ordinance applies to the production, handling, use, storage, transport, disposal, import and export of radioactive substances and devices and articles containing them; and generally to any activity involving hazards caused by ionizing radiation. The Federal Public Health Office is the competent authority for granting licences. Provision is also made for the administrative conditions to be complied with for obtaining such licences as well as for technical measures required when engaged in work covered by the Ordinance. This consolidated version of the Ordinance contains all the successive amendments up to 26 September 1988. (NEA) [fr

  16. Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data - A Distance-based Approach

    2016-09-01

    412TW-PA-16437. Technical paper, dated 09-19-2016. Title: Ridit Analysis for Cooper-Harper and Other Ordinal Ratings for Sparse Data. ... opinion rankings, are common in many areas of application. In the Air Force, Cooper-Harper ratings are used extensively for the assessment of Flying

  17. Simulation-based optimization parametric optimization techniques and reinforcement learning

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...
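The parametric (static) branch the book describes can be illustrated with a toy sketch: the objective is observable only through a noisy simulation, so each candidate parameter is scored by a sample average over replications. The quadratic cost curve and candidate grid below are invented purely for illustration.

```python
import random

def simulate(theta, rng):
    """One noisy replication of an unknown cost curve (toy stand-in)."""
    return (theta - 3.0) ** 2 + rng.gauss(0.0, 1.0)

def sample_average_search(candidates, reps=200, seed=0):
    """Score each candidate by a sample average of simulation replications
    and return the candidate with the lowest estimated cost."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for theta in candidates:
        cost = sum(simulate(theta, rng) for _ in range(reps)) / reps
        if cost < best_cost:
            best, best_cost = theta, cost
    return best

best = sample_average_search([1.0, 2.0, 3.0, 4.0, 5.0])
```

With 200 replications the standard error of each estimate is far smaller than the gaps between candidates, so the search reliably identifies the minimizer of the underlying curve.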

  18. X-ray Ordinance

    Kramer, R.; Zerlett, G.

    1983-01-01

    This commentary, presented as volume 2 of the Deutsches Strahlenschutzrecht (German legislation on radiation protection) deals with the legal provisions of the ordinance on the protection against harmful effects of X-radiation (X-ray Ordinance - RoeV), of March 1, 1973 (announced in BGBl.I, page 173), as amended by the ordinance on the protection against harmful effects of ionizing radiation, of October 13, 1976 (announced in BGBl. I, page 2905). Thus volume 2 completes the task started with volume 1, namely to present a comprehensive view and account of the body of laws governing radiation protection, a task which was thought useful as developments in the FRG led to regulations being split up into the X-ray Ordinance, and the Radiation Protection Ordinance. In order to present a well-balanced commentary on the X-ray Ordinance, it was necessary to discuss the provisions both from the legal and the medical point of view. This edition takes into account the Fourth Public Notice of the BMA (Fed. Min. of Labour and Social Affairs) concerning the implementation of the X-ray Ordinance of January 4, 1982, as well as court decisions and literature published in this field, until September 1982. In addition, the judgment of the Federal Constitutional Court, dated October 19, 1982, concerning the voidness of the law on government liability, and two decisions by the Federal High Court, dated November 23, 1982, concerning the right to have insight into medical reports - of great significance in practice - have been considered. This commentary therefore is up to date with current developments. (orig.) [de

  19. Ordinal bivariate inequality

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2x2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique.

  20. Ordinal Bivariate Inequality

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter Raahave

    2016-01-01

    This paper introduces a concept of inequality comparisons with ordinal bivariate categorical data. In our model, one population is more unequal than another when they have common arithmetic median outcomes and the first can be obtained from the second by correlation-increasing switches and/or median-preserving spreads. For the canonical 2 × 2 case (with two binary indicators), we derive a simple operational procedure for checking ordinal inequality relations in practice. As an illustration, we apply the model to childhood deprivation in Mozambique.

  1. Amendment of Atomic Ordinance

    1987-10-01

    This amendment to the 1984 Ordinance on definitions and licences in the atomic energy field aims essentially to ensure that the commitments under the Treaty on the Non-Proliferation of Nuclear Weapons are complied with in Switzerland. The goods and articles involving uranium enrichment by the gas centrifuge process and nuclear fuel reprocessing as specified by the competent international bodies, are henceforth included in the goods subject to notification or licensing listed in the Annex to the Ordinance. Also, it is provided that a construction and an operating licence for a nuclear installation may be granted simultaneously in cases where safe operating conditions can be fully assessed. (NEA) [fr

  2. Lifecycle-Based Swarm Optimization Method for Numerical Optimization

    Hai Shen

    2014-01-01

    Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though individual organisms die, the species does not perish; moreover, the species develops a stronger ability to adapt to the environment and continues to evolve. LSO simulates the biological lifecycle through six optimization operators: chemotaxis, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems: the unconstrained benchmarks, comprising both unimodal and multimodal cases, demonstrate optimization performance and stability, while the mechanical design problem tests the algorithm's practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.

  3. Revision without ordinals

    Rivello, Edoardo

    2013-01-01

    We show that Herzberger's and Gupta's revision theories of truth can be recast in purely inductive terms, without any appeal either to the transfinite ordinal numbers or to the axiom of choice. The result is presented in an abstract and general setting, emphasising both its validity for a wide

  4. Reliability-Based Optimization in Structural Engineering

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1994-01-01

    In this paper reliability-based optimization problems in structural engineering are formulated on the basis of the classical decision theory. Several formulations are presented: Reliability-based optimal design of structural systems with component or systems reliability constraints, reliability...

  5. Reliability-based optimization of engineering structures

    Sørensen, John Dalsgaard

    2008-01-01

    The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost-benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization problems are generalized to the following extensions: interactive optimization, inspection and repair costs, systematic reconstruction, re-assessment of existing structures. Illustrative examples are presented including a simple introductory example, a decision problem related to bridge re...

  6. The Ontology of Knowledge Based Optimization

    Nasution, Mahyuddin K. M.

    2012-01-01

    Optimization has become a central topic of study in mathematics and has many areas with different applications. However, many optimization themes that come from different areas have no close ties to the original concepts. This paper addresses some variants of optimization problems using an ontology, in order to build a base of knowledge about optimization, and then uses it to enhance strategies for achieving knowledge-based optimization.

  7. Isotope based assessment of groundwater renewal in water scarce regions. Proceedings of a final research co-ordination meeting

    2001-10-01

    The isotopic composition and chemical constituents of water infiltrating through the soil zone (unsaturated zone, or zone of aeration) into groundwater can be employed to determine the moisture transport in the unsaturated zone, thus enabling estimation of the water infiltration rate to the underlying aquifer. This was the basis on which this CRP was initiated in 1996. The overall results obtained from three years of applied field research related to the study of moisture transport dynamics and estimation of natural recharge through use of isotope/hydrochemical depth profiles of the soil moisture in the unsaturated zone were presented and discussed at the final Research Co-ordination Meeting held in Vienna from 18 to 21 October 1999. A total of 44 sites were involved in the project, on which detailed information on physiography, lithology, rainfall, unsaturated moisture content and a variety of chemical and isotopic determinants is now available. This publication contains 11 individual reports presented by CRP participants at the Meeting. Each of the reports has been indexed separately

  8. First research co-ordination meeting on development of reference charged particle cross section data base for medical radioisotope production. Summary report

    Oblozinsky, P.

    1996-03-01

    The present report contains the summary of the First Research Co-ordination Meeting on 'Development of Reference Charged Particle Cross Section Data Base for Medical Radioisotope Production', held at the IAEA Headquarters, Vienna, from 15 to 17 November 1995. The project focuses on monitor reactions and production reactions for gamma emitters and positron emitters induced with light charged particles of incident energies up to about 100 MeV. Summarized are the technical discussions and the resulting work plan of the Co-ordinated Research Programme, including actions and deadlines. Attached are an information sheet on the project, the agenda and a list of participants of the meeting. Also attached is brief information on the adjacent Consultants' Meeting on 'Automated Synthesis Systems for the Cyclotron Production of 18F and 123I and their Labeled Radiopharmaceuticals'. (author)

  9. Comparison of two ordinal prediction models

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models.
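The concordance index at the heart of the algorithm can be sketched as follows. This is a generic pairwise implementation assumed for illustration (tie-handling conventions vary); it counts, among all pairs with different outcomes, the fraction that the staging system ranks in the same order as the outcome.

```python
def concordance_index(stages, outcomes):
    """Fraction of usable pairs ranked concordantly; ties in stage count 1/2."""
    concordant, usable = 0.0, 0
    n = len(stages)
    for i in range(n):
        for j in range(i + 1, n):
            if outcomes[i] == outcomes[j]:
                continue                      # pair carries no ordering information
            usable += 1
            si, sj = stages[i], stages[j]
            if si == sj:
                concordant += 0.5
            elif (si < sj) == (outcomes[i] < outcomes[j]):
                concordant += 1.0
    return concordant / usable

c = concordance_index([1, 2, 2, 3], [0, 1, 1, 2])
```

Computing this index for each rival staging system on the same data, and preferring the system with the higher value, is the comparison the abstract argues for.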

  10. Application of DNA based marker mutations for improvement of cereals and other sexually reproduced crop plants. Proceedings of a final research co-ordination meeting

    1998-03-01

    The Co-ordinated Research Programme (CRP) on the Application of DNA Based Marker Mutations for Improvement of Cereals and Other Sexually Reproduced Crop Plants represents the first of three CRPs dealing with the application of molecular markers to mutations and plant breeding and was implemented between 1992 and 1996. A second companion CRP entitled Use of Novel DNA Fingerprinting Techniques for the Detection and Characterization of Genetic Variation in Vegetatively Propagated Crops devoted to the application of molecular markers in vegetatively propagated crops species was implemented between 1993 and 1997. One positive consequence of these two CRPs has been the implementation of a third CRP entitled Radioactively Labeled DNA Probes for Crop Improvement, which began in 1995 and aims to provide enabling technologies, in the form of probes and primers, to laboratories in developing countries. The rapid development of molecular marker technologies has also resulted in a dramatic increase in request from developing Member States for technical co-operation projects utilizing molecular markers to improve local varieties for biotic and abiotic stresses and other traits of relevance. With the intensified use of induced mutations in genetic studies, it will be important to continue the important work of understanding induced mutations at the molecular level. Evidence of the progress made in implementing molecular marker technologies in laboratories around the world is presented in this publication, which contains the results presented by the participants at the fourth and final Research Co-ordination Meeting of the CRP held in Vienna, 4-8 November 1996. The FAO and IAEA wish to express their sincere appreciation to the participants of the meeting for their work during the project period resulting in the summary and scientific reports presented in this publication. Refs, figs, tabs.

  12. Radiation (Safety Control) Ordinance 1978

    1978-01-01

    This Ordinance provides for the control, regulation, possession, use and transport of radioactive substances and irradiating apparatus. The Director of Health is responsible for administration of the Ordinance, which contains detailed provisions concerning the terms and conditions of licences, duties of licensees, medical examinations, maximum radiation doses, and precautions to be taken to avoid exceeding such doses. The Ordinance also lays down a system of record-keeping and registration as well as packaging specifications for the transport of radioactive substances. (NEA) [fr

  13. Classifiers based on optimal decision rules

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for the sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).
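As a hedged illustration of the length and coverage criteria (not the paper's dynamic-programming construction), decision rules over attribute-value conditions can be scored and used for classification like this; the rules, rows, and labels below are invented toy data.

```python
def matches(rule, row):
    """rule: dict attribute -> required value; row: dict attribute -> value."""
    return all(row.get(a) == v for a, v in rule.items())

def coverage(rule, rows):
    """Coverage = number of training rows the rule matches."""
    return sum(matches(rule, r) for r in rows)

def classify(rules, rows, row):
    """Among rules matching `row`, pick the best by (max coverage, min length)
    and return its label; length is the number of conditions in the rule."""
    hits = [(coverage(r, rows), -len(r), label)
            for r, label in rules if matches(r, row)]
    return max(hits)[2] if hits else None

rows = [{"a": 1, "b": 0}, {"a": 1, "b": 1}, {"a": 0, "b": 1}]
rules = [({"a": 1}, "pos"), ({"a": 0, "b": 1}, "neg")]
label = classify(rules, rows, {"a": 1, "b": 1})
```

Swapping the tuple order in `hits` to (-len, coverage, label) would instead prefer shorter rules first, mirroring the length-first versus coverage-first orderings the paper compares.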

  15. A hybrid bird mating optimizer algorithm with teaching-learning-based optimization for global numerical optimization

    Qingyang Zhang

    2015-02-01

    Bird Mating Optimizer (BMO) is a novel meta-heuristic optimization algorithm inspired by the intelligent mating behavior of birds. However, it is still deficient in convergence speed and solution quality. To overcome these drawbacks, this paper proposes a hybrid algorithm (TLBMO), established by combining the advantages of Teaching-Learning-Based Optimization (TLBO) and Bird Mating Optimizer (BMO). The performance of TLBMO is evaluated on 23 benchmark functions and compared with seven state-of-the-art approaches, namely BMO, TLBO, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Fast Evolutionary Programming (FEP), Differential Evolution (DE), and Group Search Optimization (GSO). Experimental results indicate that the proposed method performs better than the other existing algorithms for global numerical optimization.
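Of the two components combined in TLBMO, the TLBO teacher phase is the more standard; a sketch of that phase alone is given below (the BMO mating operators are omitted, and the sphere objective is just a common test function, not one of the paper's 23 benchmarks).

```python
import random

def tlbo_teacher_phase(population, objective, rng):
    """Move each learner toward the current best solution (the 'teacher'),
    scaled by its distance from the class mean; keep only improving moves."""
    dim = len(population[0])
    teacher = min(population, key=objective)
    mean = [sum(x[d] for x in population) / len(population) for d in range(dim)]
    new_pop = []
    for x in population:
        tf = rng.choice((1, 2))                      # teaching factor
        cand = [x[d] + rng.random() * (teacher[d] - tf * mean[d])
                for d in range(dim)]
        new_pop.append(cand if objective(cand) < objective(x) else x)
    return new_pop

sphere = lambda x: sum(v * v for v in x)
rng = random.Random(0)
pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(50):
    pop = tlbo_teacher_phase(pop, sphere, rng)
```

Because moves are accepted greedily, each learner's objective value is non-increasing across iterations; the full TLBO adds a learner phase, and TLBMO interleaves BMO's mating operators on top.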

  16. The Second Ordinance for Amendment of the Radiation Protection Ordinance

    Czajka, D.

    1989-01-01

    This Second Ordinance for Amendment of the Radiation Protection Ordinance has modified the most important legal provisions supplementing the Atomic Energy Act. But looking closer at the revised version of the Ordinance, many an amendment turns out to be just a new facade on the old brickwork. The article critically reviews the most important amendments, stating that the main principles have remained untouched, and discussing the modification of limiting values, the definition of regulatory scopes, the new meaning of the term 'wastes containing nuclear fuel', and the regulatory scope of provisions governing radioactive substances and their medical applications. (orig./RST) [de

  17. Reliability Based Optimization of Structural Systems

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements, or such that the system reliability satisfies a given requirement. For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally…
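
    In the simplest FORM setting the reliability estimate has a closed form. The sketch below is an assumption-laden illustration, not the paper's procedure: for a linear limit state g = R - S with independent normal resistance R and load S, the Hasofer-Lind reliability index is beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2), and the failure probability is Pf = Phi(-beta).

```python
import math

def form_beta(mu_r, sig_r, mu_s, sig_s):
    """Reliability index for the linear limit state g = R - S."""
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def failure_probability(beta):
    """Pf = Phi(-beta), using the standard normal CDF via erf."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))
```

A reliability constraint in the optimization then simply reads beta(design) >= beta_target for each element or for the system.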

  18. Ordinal measures for iris recognition.

    Sun, Zhenan; Tan, Tieniu

    2009-12-01

    Images of a human iris contain rich texture information useful for identity authentication. A key and still open issue in iris recognition is how best to represent such textural information using a compact set of features (iris features). In this paper, we propose using ordinal measures for iris feature representation with the objective of characterizing qualitative relationships between iris regions rather than precise measurements of iris image structures. Such a representation may lose some image-specific information, but it achieves a good trade-off between distinctiveness and robustness. We show that ordinal measures are intrinsic features of iris patterns and largely invariant to illumination changes. Moreover, compactness and low computational complexity of ordinal measures enable highly efficient iris recognition. Ordinal measures are a general concept useful for image analysis and many variants can be derived for ordinal feature extraction. In this paper, we develop multilobe differential filters to compute ordinal measures with flexible intralobe and interlobe parameters such as location, scale, orientation, and distance. Experimental results on three public iris image databases demonstrate the effectiveness of the proposed ordinal feature models.
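
    The core idea of an ordinal measure can be shown with a toy example, far simpler than the paper's multilobe differential filters: encode only the qualitative relationship between the average intensities of paired image regions. The resulting binary code is invariant to any monotonic (e.g., illumination) change applied to the whole image.

```python
# Illustrative ordinal feature: for each pair of rectangular regions,
# emit 1 if the first region is brighter on average than the second,
# else 0. Only the ordering matters, not the precise intensities.

def region_mean(image, r0, r1, c0, c1):
    vals = [image[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def ordinal_code(image, region_pairs):
    """region_pairs: list of ((r0,r1,c0,c1), (r0,r1,c0,c1)) tuples."""
    return [1 if region_mean(image, *a) > region_mean(image, *b) else 0
            for a, b in region_pairs]
```

Brightening the image by any monotonic transform leaves the code unchanged, which is the robustness property the abstract describes.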

  19. Reliability Based Optimization of Fire Protection

    Thoft-Christensen, Palle

    It is well known that fire is one of the major risks of serious damage or total loss of several types of structures such as nuclear installations, buildings, offshore platforms/topsides etc. This paper presents a methodology and software for reliability based optimization of the layout of passive fire protection (PFP) of firewalls and structural members. The paper is partly based on research performed within the EU supported research project B/E-4359 "Optimized Fire Safety of Offshore Structures" and partly on research supported by the Danish Technical Research Council (see Thoft-Christensen [1]). Special emphasis is put on the optimization software developed within the project.

  20. Genetic algorithm based separation cascade optimization

    Mahendra, A.K.; Sanyal, A.; Gouthaman, G.; Bera, T.K.

    2008-01-01

    The conventional separation cascade design procedure does not give an optimum design because of squaring-off and the variation of flow rates and of the element's separation factor with stage location. Multi-component isotope separation further complicates the design procedure. Cascade design can be stated as a constrained multi-objective optimization: the expectations from the separating element are themselves multi-objective, i.e. overall separation factor, cut, optimum feed and separative power. The decision maker may aspire to more comprehensive multi-objective goals in which optimization of the cascade is coupled with exploration of the separating element's optimization vector space. In real life there are many issues which make it important to understand the decision maker's perception of the cost-quality-speed trade-off and the consistency of preferences. The genetic algorithm (GA) is one such evolutionary technique that can be used for cascade design optimization. This paper addresses various issues involved in GA-based multi-objective optimization of the separation cascade. A reference-point-based optimization methodology with a GA-based Pareto optimality concept for the separation cascade was found pragmatic and promising. This method should be explored, tested, examined and further developed for binary as well as multi-component separations. (author)
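
    The Pareto-optimality concept at the heart of such multi-objective GA work is easy to state in code. This is a generic sketch, not the authors' cascade model: a design a dominates b when a is no worse in every objective and strictly better in at least one (all objectives minimized); the Pareto front is the set of non-dominated designs.

```python
# Generic Pareto-front extraction for multi-objective minimization.

def dominates(a, b):
    """True if a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the points that no other point dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

A GA such as NSGA-II ranks its population with exactly this dominance test before applying selection.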

  1. Ordinance on protection from the harmful effects of X-radiation (X-ray Ordinance). As of January 8, 1987. 3. ed.

    Hinrichs, O.

    1992-01-01

    The German X-ray Ordinance (Roentgenverordnung) contains the main protective provisions applying to the field of X-ray equipment and sources of unwanted X radiation. It thus forms a complement to the German Radiation Protection Ordinance (Strahlenschutzverordnung). The X-ray Ordinance is based, as is the Radiation Protection Ordinance, on the German Nuclear Energy Act (Atomgesetz). It transposes the same Euratom Directives into national law, through which above all the limit values are defined. The current state of the X-ray Ordinance is that of the text promulgated on 8.01.1987 with the subsequent amendments, the last of which was adopted on 19.12.1990. The brochure also reproduces the Official Memorandum to the X-ray Ordinance, as this gives important indications for the legal construction of the Ordinance. (orig./HSCH) [de

  2. Development of GPT-based optimization algorithm

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  3. Interactive Reliability-Based Optimal Design

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Siemaszko, A.

    1994-01-01

    Interactive design/optimization of large, complex structural systems is considered. The objective function is assumed to model the expected costs. The constraints are reliability-based and/or related to deterministic code requirements. Solution of this optimization problem is divided into four main tasks, namely finite element analyses, sensitivity analyses, reliability analyses and application of an optimization algorithm. In the paper it is shown how these four tasks can be linked effectively and how existing information on design variables, Lagrange multipliers and the Hessian matrix can…

  4. Simulation-based optimization of thermal systems

    Jaluria, Yogesh

    2009-01-01

    This paper considers the design and optimization of thermal systems on the basis of the mathematical and numerical modeling of the system. Many complexities are often encountered in practical thermal processes and systems, making the modeling challenging and involved. These include property variations, complicated regions, combined transport mechanisms, chemical reactions, and intricate boundary conditions. The paper briefly presents approaches that may be used to accurately simulate these systems. Validation of the numerical model is a particularly critical aspect and is discussed. It is important to couple the modeling with the system performance, design, control and optimization. This aspect, which has often been ignored in the literature, is considered in this paper. Design of thermal systems based on concurrent simulation and experimentation is also discussed in terms of dynamic data-driven optimization methods. Optimization of the system and of the operating conditions is needed to minimize costs and improve product quality and system performance. Different optimization strategies that are currently used for thermal systems are outlined, focusing on new and emerging strategies. Of particular interest is multi-objective optimization, since most thermal systems involve several important objective functions, such as heat transfer rate and pressure in electronic cooling systems. A few practical thermal systems are considered in greater detail to illustrate these approaches and to present typical simulation, design and optimization results

  5. Optimization of the radiological protection of patients: Image quality and dose in mammography (co-ordinated research in Europe). Results of the coordinated research project on optimization of protection mammography in some eastern European States

    2005-05-01

    Mammography is an extremely useful non-invasive imaging technique with unparalleled advantages for the detection of breast cancer. It has played an immense role in the screening of women above a certain age or with a family history of breast cancer. The IAEA has a statutory responsibility to establish standards for the protection of people against exposure to ionizing radiation and to provide for the worldwide application of those standards. A fundamental requirement of the International Basic Safety Standards for Protection Against Ionizing Radiation (BSS) and for the Safety of Radiation Sources, issued by the IAEA and co-sponsored by FAO, ILO, WHO, PAHO and NEA, is the optimization of radiological protection of patients undergoing medical exposure. In keeping with its responsibility on the application of standards, the IAEA programme on Radiological Protection of Patients attempts to reduce radiation doses to patients while balancing quality assurance considerations. IAEA-TECDOC-796, Radiation Doses in Diagnostic Radiology and Methods for Dose Reduction (1995), addresses this aspect. The related IAEA-TECDOC-1423 on Optimization of the Radiological Protection of Patients undergoing Radiography, Fluoroscopy and Computed Tomography, (2004) constitutes the final report of the coordinated research in Africa, Asia and eastern Europe. The preceding publications do not explicitly consider mammography. Mindful of the importance of this imaging technique, the IAEA launched a Coordinated Research Project on Optimization of Protection in Mammography in some eastern European States. The present publication is the outcome of this project: it is aimed at evaluating the situation in a number of countries, identifying variations in the technique, examining the status of the equipment and comparing performance in the light of the norms established by the European Commission. 
A number of important aspects are covered, including quality control of mammography equipment and imaging…

  6. Coverage-based constraints for IMRT optimization

    Mescher, H.; Ulrich, S.; Bangert, M.

    2017-09-01

    Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning that considers explicit error scenarios to calculate and optimize patient-specific probabilities q(\\hat{d}, \\hat{v}) of covering a specific target volume fraction \\hat{v} with a certain dose \\hat{d} . Using a constraint-based reformulation of coverage-based objectives we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. 
A sensitivity analysis regarding penalty variations within this planning study, based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins, illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
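
    The coverage probability q(d̂, v̂) itself is straightforward to estimate from explicit error scenarios. The following is a sketch of the concept only, not the authors' treatment-planning code: q is the fraction of scenarios in which at least a fraction v of the target voxels receives at least dose d.

```python
# Scenario-based coverage probability: each scenario is a list of voxel
# doses for the target volume under one realization of the errors.

def coverage_probability(scenario_doses, d, v):
    """Fraction of scenarios where >= v of the voxels get dose >= d."""
    covered = 0
    for doses in scenario_doses:
        frac = sum(1 for x in doses if x >= d) / len(doses)
        if frac >= v:
            covered += 1
    return covered / len(scenario_doses)
```

A coverage-based constraint then requires q(d̂, v̂) >= q_min during optimization instead of weighting a coverage objective against competing terms.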

  7. CFD-based optimization in plastics extrusion

    Eusterholz, Sebastian; Elgeti, Stefanie

    2018-05-01

    This paper presents novel ideas in numerical design of mixing elements in single-screw extruders. The actual design process is reformulated as a shape optimization problem, given some functional, but possibly inefficient initial design. Thereby automatic optimization can be incorporated and the design process is advanced, beyond the simulation-supported, but still experience-based approach. This paper proposes concepts to extend a method which has been developed and validated for die design to the design of mixing-elements. For simplicity, it focuses on single-phase flows only. The developed method conducts forward-simulations to predict the quasi-steady melt behavior in the relevant part of the extruder. The result of each simulation is used in a black-box optimization procedure based on an efficient low-order parameterization of the geometry. To minimize user interaction, an objective function is formulated that quantifies the products' quality based on the forward simulation. This paper covers two aspects: (1) It reviews the set-up of the optimization framework as discussed in [1], and (2) it details the necessary extensions for the optimization of mixing elements in single-screw extruders. It concludes with a presentation of first advances in the unsteady flow simulation of a metering and mixing section with the SSMUM [2] using the Carreau material model.

  8. Reliability-Based Optimization of Wind Turbines

    Sørensen, John Dalsgaard; Tarp-Johansen, N.J.

    2004-01-01

    Reliability-based optimization of the main tower and monopile foundation of an offshore wind turbine is considered. Different formulations are considered of the objective function including benefits and building and failure costs of the wind turbine. Also different reconstruction policies in case...

  9. Performance-based Pareto optimal design

    Sariyildiz, I.S.; Bittermann, M.S.; Ciftcioglu, O.

    2008-01-01

    A novel approach for performance-based design is presented, where Pareto optimality is pursued. Design requirements may contain linguistic information, which is difficult to bring into computation or make consistent their impartial estimations from case to case. Fuzzy logic and soft computing are…

  10. Parameter optimization toward optimal microneedle-based dermal vaccination.

    van der Maaden, Koen; Varypataki, Eleni Maria; Yu, Huixin; Romeijn, Stefan; Jiskoot, Wim; Bouwstra, Joke

    2014-11-20

    Microneedle-based vaccination has several advantages over vaccination with conventional hypodermic needles. Microneedles are used to deliver a drug into the skin in a minimally invasive and potentially pain-free manner. Moreover, the skin is a potent immune organ that is highly suitable for vaccination. However, there are several factors that influence the penetration ability of the skin by microneedles and the immune responses upon microneedle-based immunization. In this study we assessed several different microneedle arrays for their ability to penetrate ex vivo human skin by using trypan blue and (fluorescently or radioactively labeled) ovalbumin. Next, these different microneedles and several factors, including the dose of ovalbumin, the effect of using an impact-insertion applicator, skin location of microneedle application, and the area of microneedle application, were tested in vivo in mice. The penetration ability and the dose of ovalbumin that is delivered into the skin were shown to be dependent on the use of an applicator and on the microneedle geometry and size of the array. Beyond microneedle penetration, the above-described factors influenced the immune responses upon microneedle-based vaccination in vivo. It was shown that the ovalbumin-specific antibody responses upon microneedle-based vaccination could be increased up to 12-fold when an impact-insertion applicator was used, up to 8-fold when microneedles were applied over a larger surface area, and up to 36-fold depending on the location of microneedle application. Therefore, these influencing factors should be considered to optimize microneedle-based dermal immunization technologies. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Atomic Energy Act and ordinances. 8. ed.

    Anon.

    1978-01-01

    The new issue of the text contains the Atomic Energy Act (AtG) in its new wording of the announcement of 31 Oct 76, the new wording of the ordinances put in effect in 1977: Atomic procedure ordinance (AtVfV), radiation protection ordinance (SSU), and atomic financial security ordinance (AtDeckV); furthermore the x-ray ordinance (RoeV) of 1978 in its wording which has been changed by the radiation protection ordinance. Also printed are the cost ordinance (AtKostV) of 1971, the food irradiation ordinance (LebensmBestrV) in the wording of 1975 and the medicine ordinance (ArzneimV) in the wording of 1971. An addition was made by adding to the liability laws the Paris agreement (PUE) on the liability towards third persons in the field of nuclear energy in the wording of the announcement of 5 Feb 76. (orig./HP) [de

  12. Biogeography-Based Optimization with Orthogonal Crossover

    Quanxi Feng

    2013-01-01

    Full Text Available Biogeography-based optimization (BBO) is a new biogeography-inspired, population-based algorithm which mainly uses a migration operator to share information among solutions. Similar to the crossover operator in genetic algorithms, the migration operator is a probabilistic operator and only generates vertices of the hyperrectangle defined by the emigration and immigration vectors. Therefore, the exploration ability of BBO may be limited. The orthogonal crossover operator with quantization technique (QOX) is based on orthogonal design and can generate representative solutions in the solution space. In this paper, a BBO variant is presented that embeds the QOX operator in the BBO algorithm. Additionally, a modified migration equation is used to improve population diversity. Several experiments are conducted on 23 benchmark functions. Experimental results show that the proposed algorithm is capable of locating the optimal or close-to-optimal solution. Comparisons with other BBO variants and state-of-the-art orthogonal-based evolutionary algorithms demonstrate that the proposed algorithm possesses a faster global convergence rate, higher-precision solutions, and stronger robustness. Finally, the analysis of the performance of QOX indicates that QOX plays a key role in the proposed algorithm.
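
    The migration operator that BBO relies on can be sketched in a few lines, under illustrative assumptions (linear rates, roulette-wheel emigrant selection) rather than the paper's modified equation: habitats ranked by fitness get an immigration rate lambda and an emigration rate mu, and with probability lambda a solution feature is replaced by the corresponding feature of an emigrating habitat.

```python
import random

def migration_rates(num_habitats):
    """Linear BBO rates; rank 0 = best habitat (low immigration, high emigration)."""
    rates = []
    for k in range(num_habitats):
        mu = 1.0 - k / (num_habitats - 1)   # emigration rate
        lam = 1.0 - mu                      # immigration rate
        rates.append((lam, mu))
    return rates

def migrate(population, rng=random.Random(1)):
    """population sorted best-first; returns the population after migration."""
    rates = migration_rates(len(population))
    mus = [m for _, m in rates]
    new_pop = []
    for i, x in enumerate(population):
        lam = rates[i][0]
        y = list(x)
        for d in range(len(x)):
            if rng.random() < lam:
                # roulette-wheel selection of an emigrating habitat by mu
                j = rng.choices(range(len(population)), weights=mus)[0]
                y[d] = population[j][d]
        new_pop.append(y)
    return new_pop
```

As the abstract notes, every migrated solution is a vertex of the hyperrectangle spanned by the parent vectors, which is exactly the exploration limit QOX is meant to address.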

  13. Spatial Representation of Ordinal Information

    Meng Zhang

    2016-04-01

    Full Text Available Right hand responds faster than left hand when shown larger numbers and vice-versa when shown smaller numbers (the SNARC effect). Accumulating evidence suggests that the SNARC effect may not be exclusive for numbers and can be extended to other ordinal sequences (e.g., months or letters in the alphabet) as well. In this study, we tested the SNARC effect with a non-numerically ordered sequence: the Chinese notations for the color spectrum (Red, Orange, Yellow, Green, Blue, Indigo, and Violet). Chinese color word sequence reserves relatively weak ordinal information, because each element color in the sequence normally appears in non-sequential contexts, making it ideal to test the spatial organization of sequential information that was stored in the long-term memory. This study found a reliable SNARC-like effect for Chinese color words (deciding whether the presented color word was before or after the reference color word "green"), suggesting that, without access to any quantitative information or exposure to any previous training, ordinal representation can still activate a sense of space. The results support that weak ordinal information without quantitative magnitude encoded in the long-term memory can activate spatial representation in a comparison task.

  14. Spatial Representation of Ordinal Information.

    Zhang, Meng; Gao, Xuefei; Li, Baichen; Yu, Shuyuan; Gong, Tianwei; Jiang, Ting; Hu, Qingfen; Chen, Yinghe

    2016-01-01

    Right hand responds faster than left hand when shown larger numbers and vice-versa when shown smaller numbers (the SNARC effect). Accumulating evidence suggests that the SNARC effect may not be exclusive for numbers and can be extended to other ordinal sequences (e.g., months or letters in the alphabet) as well. In this study, we tested the SNARC effect with a non-numerically ordered sequence: the Chinese notations for the color spectrum (Red, Orange, Yellow, Green, Blue, Indigo, and Violet). Chinese color word sequence reserves relatively weak ordinal information, because each element color in the sequence normally appears in non-sequential contexts, making it ideal to test the spatial organization of sequential information that was stored in the long-term memory. This study found a reliable SNARC-like effect for Chinese color words (deciding whether the presented color word was before or after the reference color word "green"), suggesting that, without access to any quantitative information or exposure to any previous training, ordinal representation can still activate a sense of space. The results support that weak ordinal information without quantitative magnitude encoded in the long-term memory can activate spatial representation in a comparison task.

  15. Optimal depth-based regional frequency analysis

    H. Wazneh

    2013-06-01

    Full Text Available Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region, which can lead to a loss of some information, and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures for φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.

  16. Optimal depth-based regional frequency analysis

    Wazneh, H.; Chebana, F.; Ouarda, T. B. M. J.

    2013-06-01

    Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region which can lead to a loss of some information and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function ϕ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures of φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of ϕ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.
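
    The depth notion behind DBRFA has a very compact one-dimensional form. The following is a hedged illustration, not the authors' procedure: the Tukey (halfspace) depth of a point x within a sample is min(#{xi <= x}, #{xi >= x}) / n, so central sites get high depth and sites near the region's border get low depth; a weight function of the depth can then down-weight border stations.

```python
# One-dimensional Tukey (halfspace) depth: the minimum fraction of the
# sample lying on either side of x.

def tukey_depth_1d(x, sample):
    n = len(sample)
    below = sum(1 for s in sample if s <= x)
    above = sum(1 for s in sample if s >= x)
    return min(below, above) / n
```

In the actual DBRFA setting the depth is computed in a multivariate site-characteristics space, but the border-effect intuition is the same.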

  17. Optimal interference code based on machine learning

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as a case study. Drawing on coding theory, we introduce jamming methods and consolidate them by simulating the interference effect and its probability model in MATLAB. Based on the length of time the adversary spends decoding, we derive the optimal formula and optimal coefficients through machine learning, and thus obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the time the laser seeker takes over the decoding period. Then, we use laser active deception jamming to simulate the interference process in the tracking phase. In order to improve the performance of the interference, this paper simulates the model in MATLAB. We find the least number of pulse intervals that must be received, from which we determine the precise interval number of the laser pointer for m-sequence encoding. To find the shortest interval, we use the greatest-common-divisor method. Then, combining this with the coding regularity identified earlier, we restore the pulse intervals of the pseudo-random code already received. Finally, we can control the time period of laser interference, obtain the optimal interference code, and increase the probability of interference.
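
    An m-sequence of the kind analyzed above is generated by a maximal-length linear-feedback shift register. This sketch illustrates the pseudo-random code itself, not the paper's jamming model: a 3-stage Fibonacci LFSR with a primitive feedback polynomial (taps on stages 1 and 3) yields a sequence of period 2^3 - 1 = 7.

```python
# Fibonacci LFSR: output the last stage, feed back the XOR of the tapped
# stages, shift right. With primitive taps the period is 2^n - 1.

def m_sequence(taps, state, length):
    """taps: 1-based tapped stage indices; state: list of 0/1 bits."""
    out = []
    for _ in range(length):
        out.append(state[-1])              # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]             # XOR of tapped stages
        state = [fb] + state[:-1]          # shift right, insert feedback
    return out
```

The period-7 output visits every nonzero register state once, and contains four 1s and three 0s, the balance property characteristic of m-sequences.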

  18. Social Host Ordinances and Policies. Prevention Update

    Higher Education Center for Alcohol, Drug Abuse, and Violence Prevention, 2011

    2011-01-01

    Social host liability laws (also known as teen party ordinances, loud or unruly gathering ordinances, or response costs ordinances) target the location in which underage drinking takes place. Social host liability laws hold noncommercial individuals responsible for underage drinking events on property they own, lease, or otherwise control. They…

  19. Risk-based optimization of land reclamation

    Lendering, K.T.; Jonkman, S.N.; Gelder, P.H.A.J.M. van; Peters, D.J.

    2015-01-01

    Large-scale land reclamations are generally constructed by means of a landfill well above mean sea level. This can be costly in areas where good quality fill material is scarce. An alternative to save materials and costs is a ‘polder terminal’. The quay wall acts as a flood defense and the terminal level is well below the level of the quay wall. Compared with a conventional terminal, the costs are lower, but an additional flood risk is introduced. In this paper, a risk-based optimization is developed for a conventional and a polder terminal. It considers the investment and residual flood risk. The method takes into account both the quay wall and terminal level, which determine the probability and damage of flooding. The optimal quay wall level is found by solving a Lambert function numerically. The terminal level is bounded by engineering boundary conditions, i.e. piping and uplift of the cover layer of the terminal yard. It is found that, for a representative case study, the saving of reclamation costs for a polder terminal is larger than the increase of flood risk. The model is applicable to other cases of land reclamation and to similar optimization problems in flood risk management. - Highlights: • A polder terminal can be an attractive alternative for a conventional terminal. • A polder terminal is feasible at locations with high reclamation cost. • A risk-based approach is required to determine the optimal protection levels. • The depth of the polder terminal yard is bounded by uplifting of the cover layer. • This paper can support decisions regarding alternatives for port expansions.
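
    The investment-versus-risk trade-off described above can be sketched numerically with entirely hypothetical numbers (they are not the paper's case-study values), and with a simple grid scan in place of the paper's Lambert-function solution: total cost is the investment in the quay wall, which grows with its level h, plus the residual flood risk, an exponentially decreasing failure probability times the flood damage.

```python
import math

def total_cost(h, unit_cost=2.0e6, A=2.0, B=0.5, damage=5.0e8):
    """Investment plus residual risk for quay-wall level h (hypothetical values)."""
    investment = unit_cost * h                 # cost per metre of wall level
    p_fail = math.exp(-(h - A) / B)            # annual exceedance probability
    return investment + p_fail * damage        # expected total cost

def optimal_level(levels):
    """Pick the level minimizing total cost over a candidate grid."""
    return min(levels, key=total_cost)
```

For these numbers the analytic optimum h* = A + B*ln(damage / (unit_cost * B)) ≈ 5.11 m, which the grid scan recovers.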

  20. Pixel-based OPC optimization based on conjugate gradients.

    Ma, Xu; Arce, Gonzalo R

    2011-01-31

    Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
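
    The convergence advantage of CG over steepest descent is clearest in its simplest setting, shown here as an illustrative sketch rather than the PBOPC algorithm: minimizing the quadratic 0.5·xᵀAx - bᵀx, i.e. solving Ax = b for symmetric positive-definite A, CG terminates in at most n steps, whereas steepest descent only converges asymptotically.

```python
# Plain conjugate-gradient solver for a symmetric positive-definite
# system Ax = b (A as a list of rows, b as a list).

def conjugate_gradient(A, b, tol=1e-10):
    n = len(b)
    x = [0.0] * n
    r = b[:]                              # residual b - A x for x = 0
    p = r[:]                              # first search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(n):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]  # conjugate direction
        rs = rs_new
    return x
```

In PBOPC the objective is not quadratic, so nonlinear CG variants are used, but the conjugate-direction update is the same mechanism.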

  1. ITER co-ordinated technical activities

    2001-01-01

As agreed upon between the ITER Engineering Design Activities (EDA) Parties, 'Co-ordinated Technical Activities' (CTA) means technical activities which are deemed necessary to maintain the integrity of the international project, so as to prepare for the ITER joint implementation. The scope of these activities includes design adaptation to the specific site conditions, safety analysis and licensing preparation that are based on specific site offers, evaluation of cost and construction schedule, preparation of procurement documents, and other issues raised by the Parties collectively, whilst assuring the coherence of the ITER project, including design control.

  2. Sequential ensemble-based optimal design for parameter estimation: SEQUENTIAL ENSEMBLE-BASED OPTIMAL DESIGN

Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Li, Weixuan [Pacific Northwest National Laboratory, Richland, Washington, USA]; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou, China]; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside, California, USA]

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
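
A minimal sketch of the design idea: predict, from the prior ensemble, how much each candidate measurement would shrink the parameter variance under a scalar Kalman update, and pick the most informative one. The candidate sites, sensitivities and noise levels below are invented for illustration; the paper's actual flow models and information metrics (SD, DFS, RE) are not reproduced.

```python
import random

# Toy ensemble-based optimal design: among candidate measurements (each with
# a sensitivity H to the parameter and an error variance R), choose the one
# whose predicted Kalman update reduces the parameter variance the most.
# A stand-in for the paper's SEOD method, not its actual model.
random.seed(0)

ensemble = [random.gauss(1.0, 0.5) for _ in range(500)]   # prior samples of theta

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

P = variance(ensemble)   # prior (ensemble) variance

# Each hypothetical candidate: (sensitivity H, observation noise variance R)
candidates = {"site_A": (0.2, 0.04), "site_B": (1.0, 0.04), "site_C": (1.0, 0.25)}

def posterior_variance(H, R, P):
    # Scalar Kalman update: P_post = P - P H (H P H + R)^-1 H P
    return P - (P * H) ** 2 / (H * H * P + R)

scores = {name: posterior_variance(H, R, P) for name, (H, R) in candidates.items()}
best = min(scores, key=scores.get)   # most informative design
```

The sensitive, low-noise candidate wins, which is the intuition behind ranking designs by expected information gain.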

  3. PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS

    Constantin D. STANESCU

    2016-05-01

The paper presents an original method of optimizing products based on the analysis of the optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, an optimal product or material can be obtained easily and quickly.

  4. Robust optimization based upon statistical theory.

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed the outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as the basis for the algorithm.
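
The outcome-distribution idea can be sketched as follows: evaluate a dose metric over sampled geometry shifts and score each candidate plan by the mean and spread of the resulting distribution rather than by a single nominal value. The dose model, margin parameter and penalty weights below are entirely hypothetical.

```python
import statistics

# Sketch of optimizing the *outcome distribution* of a dose metric:
# sample organ displacements, evaluate the metric per sample, and score a
# candidate plan (here parametrized by a margin) by mean - penalty * std,
# plus an organ-at-risk cost.  All models and numbers are made up.
SHIFTS = [-0.6, -0.3, 0.0, 0.3, 0.6]   # sampled organ displacements (cm)

def target_dose(margin, shift):
    # Hypothetical metric: target dose drops once the shift exceeds the margin.
    miss = max(0.0, abs(shift) - margin)
    return 60.0 - 20.0 * miss          # Gy

def oar_dose(margin):
    return 10.0 + 8.0 * margin         # larger margins irradiate more OAR

def score(margin, penalty=2.0):
    doses = [target_dose(margin, s) for s in SHIFTS]
    # Penalize a low expected target dose, a wide outcome distribution,
    # and the OAR cost of enlarging the margin (lower score is better).
    return -(statistics.mean(doses) - penalty * statistics.pstdev(doses)) \
           + 0.5 * oar_dose(margin)

margins = [i * 0.1 for i in range(0, 11)]
best_margin = min(margins, key=score)
```

The optimum is the smallest margin that already covers the sampled motion, i.e. the point where further robustness no longer justifies extra OAR dose.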

  5. Analysis of Ordinal Categorical Data

    Agresti, Alan

    2012-01-01

Statistical science's first coordinated manual of methods for analyzing ordered categorical data, now fully revised and updated, continues to present applications and case studies in fields as diverse as sociology, public health, ecology, marketing, and pharmacy. Analysis of Ordinal Categorical Data, Second Edition provides an introduction to basic descriptive and inferential methods for categorical data, giving thorough coverage of new developments and recent methods. Special emphasis is placed on interpretation and application of methods, including an integrated comparison of the available …

  6. Ordinance 463/2009: new challenges and opportunities in the viability optimization of SHP projects; Portaria 463/2009: novos desafios e oportunidades na otimizacao da viabilidade de projetos de PCHs

    Tessaro, Cristiano [Lindner Engenharia, Florianopolis, SC (Brazil)], E-mail: cristiano@camerge.com.br

    2011-07-15

This study discusses and reviews Ordinance No. 463/2009 of the Ministry of Mines and Energy (MME), which sets out the methodology for calculating the physical energy amounts of hydroelectric plants not centrally dispatched by the National System Operator (ONS) for participation in the Clean Energy Reallocation, including participation in electricity sale auctions. In addition, this paper presents a case study that simulates the calculation of the energy provided by an SHP (PCH) under different pre-established conditions. For this, series of monthly average stream flows and hydrologic series of daily average flows for the same location were used. Potential gains from changes in river inflows, and further gains for the system under the proposed new methodology, were also identified. The results show the importance of the MME Ordinance and the possible gains and losses for the entrepreneur. (author)

  7. Location based Network Optimizations for Mobile Wireless Networks

    Nielsen, Jimmy Jessen

    selection in Wi-Fi networks and predictive handover optimization in heterogeneous wireless networks. The investigations in this work have indicated that location based network optimizations are beneficial compared to typical link measurement based approaches. Especially the knowledge of geographical...

  8. Optimization of the Case Based Reasoning Systems

    Mohamed, A.H.

    2014-01-01

Intrusion Detection Systems (IDS) play an important role in protecting the security of information spread all over the world through networks. Many case-based systems have addressed the different methods of unauthorized users/hackers that confront IDS developers. The proposed system introduces a new hybrid system that uses a genetic algorithm to optimize an IDS case-based system. It can detect new anomalies appearing in the network and use the cases in the case library to determine the suitable solution for their behavior. The suggested system can solve the problem either by reusing an identical old solution or by adapting the optimum one until the target solution is reached. The proposed system has been applied to block unauthorized users/hackers from attacking medical images for radiotherapy of cancer diseases during their transmission over the web. The proposed system demonstrated acceptable performance in this application.

  9. Performance-based shape optimization of continuum structures

    Liang Qingquan

    2010-01-01

    This paper presents a performance-based optimization (PBO) method for optimal shape design of continuum structures with stiffness constraints. Performance-based design concepts are incorporated in the shape optimization theory to achieve optimal designs. In the PBO method, the traditional shape optimization problem of minimizing the weight of a continuum structure with displacement or mean compliance constraints is transformed to the problem of maximizing the performance of the structure. The optimal shape of a continuum structure is obtained by gradually eliminating inefficient finite elements from the structure until its performance is maximized. Performance indices are employed to monitor the performance of optimized shapes in an optimization process. Performance-based optimality criteria are incorporated in the PBO method to identify the optimum from the optimization process. The PBO method is used to produce optimal shapes of plane stress continuum structures and plates in bending. Benchmark numerical results are provided to demonstrate the effectiveness of the PBO method for generating the maximum stiffness shape design of continuum structures. It is shown that the PBO method developed overcomes the limitations of traditional shape optimization methods in optimal design of continuum structures. Performance-based optimality criteria presented can be incorporated in any shape and topology optimization methods to obtain optimal designs of continuum structures.

  10. Ordinance on the body responsible for taking measures in case of increased radioactivity (OROIR)

    1987-04-01

This Ordinance, based on atomic energy legislation, public safety, military organisation and the defense council, replaced a previous ordinance of 1966 on alert in case of increased radioactivity. It sets up the body responsible for this work and describes the tasks to be performed in case of an occurrence which could create hazards for the population due to increased radioactivity. If a Swiss nuclear installation creates such a hazard, the 1982 Ordinance on emergency measures in the neighbourhood of nuclear installations also applies. The Ordinance entered into force on 1 May 1987. (NEA)

  11. ASOP, Shield Calculation, 1-D, Discrete Ordinates Transport

    1993-01-01

1 - Nature of physical problem solved: ASOP is a shield optimization calculational system based on the one-dimensional discrete ordinates transport program ANISN. It has been used to design optimum shields for space applications of SNAP zirconium-hydride-uranium-fueled reactors and uranium-oxide fueled thermionic reactors and to design beam stops for the ORELA facility. 2 - Method of solution: ASOP generates coefficients of linear equations describing the logarithm of the dose and dose-weight derivatives as functions of position from data obtained in an automated sequence of ANISN calculations. With the dose constrained to a design value and all dose-weight derivatives required to be equal, the linear equations may be solved for a new set of shield dimensions. Since changes in the shield dimensions may cause the linear functions to change, the entire procedure is repeated until convergence is obtained. The detailed calculations of the radiation transport through shield configurations for every step in the procedure distinguish ASOP from other shield optimization computer code systems which rely on multiple component sources and attenuation coefficients to describe the transport. 3 - Restrictions on the complexity of the problem: Problem size is limited only by machine size.

  12. Local zoning ordinances -- how they limit or restrict mining

    Ingram, H.

    1991-01-01

Local regulation of mining by zoning has taken place for a long period of time. The delegation by state governments to local municipalities of land use planning, zoning and nuisance abatement authority which may affect mining has been consistently upheld by appellate courts as a valid exercise of the police power. Recently, mine operators and mineral owners have been confronted by efforts of local municipalities, often initiated by anti-mining citizens' groups, to impose more stringent restrictions on mining activities within their borders. In some situations, existing ordinances are being enforced for the first time; in others, new ordinances have been adopted without much awareness or involvement by the public. Enforced to the letter, these ordinances can sterilize large blocks of mineable reserves or impose "operating" or performance standards in excess of SMCRA-based regulatory requirements. It is fair to say that investigation of the potential impacts of local zoning and other related ordinances is essential in planning for the expansion of existing operations or for new operations. There may be new rules in the game. This paper identifies problem areas in typical "modern" ordinances and discusses legal and constitutional issues which may arise from their enforcement in coal producing regions.

  13. Reliability-Based Optimization of Structural Elements

    Sørensen, John Dalsgaard

In this paper, structural elements are considered from an optimization point of view, i.e. only the geometry of a structural element is optimized. Reliability modelling of the structural element is discussed both from an element point of view and from a system point of view. The optimization...

  14. ZZ MATXSLIBJ33, JENDL-3.3 based, 175 N-42 photon groups (VITAMIN-J) MATXS library for discrete ordinates multi-group

    Kosako, K.; Yamano, N.; Fukahori, T.; Shibata, K.; Hasegawa, A.

    2006-01-01

    1 - Description of program or function: JENDL-3.3 based, 175 neutron-42 photon groups (VITAMIN-J) MATXS library for discrete ordinates multi-group transport codes. Format: MATXS. Number of groups: 175 neutron, 42 gamma-ray. Nuclides: 337 nuclides contained in JENDL-3.3: H-1, H-2, He-3, He-4, Li-6, Li-7, Be-9, B-10, B-11, C-Nat, N-14, N-15, O-16, F-19, Na-23, Mg-24, Mg-25, Mg-26, Al-27, Si-28, Si-29, Si-30, P-31, S-32, S-33, S-34, S-36, Cl-35, Cl-37, Ar-40, K-39, K-40, K-41, Ca-40, Ca-42, Ca-43, Ca-44, Ca-46, Ca-48, Sc-45, Ti-46, Ti-47, Ti-48, Ti-49, Ti-50, V-Nat, Cr-50, Cr-52, Cr-53, Cr-54, Mn-55, Fe-54, Fe-56, Fe-57, Fe-58, Co-59, Ni-58, Ni-60, Ni-61, Ni-62, Ni-64, Cu-63, Cu-65, Ga-69, Ga-71, Ge-70, Ge-72, Ge-73, Ge-74, Ge-76, As-75, Se-74, Se-76, Se-77, Se-78, Se-79, Se-80, Se-82, Br-79, Br-81, Kr-78, Kr-80, Kr-82, Kr-83, Kr-84, Kr-85, Kr-86, Rb-85, Rb-87, Sr-86, Sr-87, Sr-88, Sr-89, Sr-90, Y-89, Y-91, Zr-90, Zr-91, Zr-92, Zr-93, Zr-94, Zr-95, Zr-96, Nb-93, Nb-94, Nb-95, Mo-92, Mo-94, Mo-95, Mo-96, Mo-97, Mo-98, Mo-99, Mo-100, Tc-99, Ru-96, Ru-98, Ru-99, Ru-100, Ru-101, Ru-102, Ru-103, Ru-104, Ru-106, Rh-103, Rh-105, Pd-102, Pd-104, Pd-105, Pd-106, Pd-107, Pd-108, Pd-110, Ag-107, Ag-109, Ag-110m, Cd-106, Cd-108, Cd-110, Cd-111, Cd-112, Cd-113, Cd-114, Cd-116, In-113, In-115, Sn-112, Sn-114, Sn-115, Sn-116, Sn-117, Sn-118, Sn-119, Sn-120, Sn-122, Sn-123, Sn-124, Sn-126, Sb-121, Sb-123, Sb-124, Sb-125, Te-120, Te-122, Te-123, Te-124, Te-125, Te-126, Te-127m, Te-128, Te-129m, Te-130, I-127, I-129, I-131, Xe-124, Xe-126, Xe-128, Xe-129, Xe-130, Xe-131, Xe-132, Xe-133, Xe-134, Xe-135, Xe-136, Cs-133, Cs-134, Cs-135, Cs-136, Cs-137, Ba-130, Ba-132, Ba-134, Ba-135, Ba-136, Ba-137, Ba-138, Ba-140, La-138, La-139, Ce-140, Ce-141, Ce-142, Ce-144, Pr-141, Pr-143, Nd-142, Nd-143, Nd-144, Nd-145, Nd-146, Nd-147, Nd-148, Nd-150, Pm-147, Pm-148, Pm-148m, Pm-149, Sm-144, Sm-147, Sm-148, Sm-149, Sm-150, Sm-151, Sm-152, Sm-153, Sm-154, Eu-151, Eu-152, Eu-153, Eu-154, Eu-155, Eu

  15. Ordinal Comparison of Multidimensional Deprivation

    Sonne-Schmidt, Christoffer Scavenius; Tarp, Finn; Østerdal, Lars Peter

This paper develops an ordinal method of comparison of multidimensional inequality. In our model, population distribution g is more unequal than f when the distributions have common median and can be obtained from f by one or more shifts in population density that increase inequality. For our benchmark 2x2 case (i.e. the case of two binary outcome variables), we derive an empirical method for making inequality comparisons. As an illustration, we apply the model to childhood poverty in Mozambique.

  16. Optimal truss and frame design from projected homogenization-based topology optimization

    Larsen, S. D.; Sigmund, O.; Groen, J. P.

    2018-01-01

In this article, we propose a novel method to obtain a near-optimal frame structure, based on the solution of a homogenization-based topology optimization model. The presented approach exploits the equivalence between Michell’s problem of least-weight trusses and a compliance minimization problem using optimal rank-2 laminates in the low volume fraction limit. In a fully automated procedure, a discrete structure is extracted from the homogenization-based continuum model. This near-optimal structure is post-optimized as a frame, where the bending stiffness is continuously decreased, to allow...

  17. GA based CNC turning center exploitation process parameters optimization

    Z. Car

    2009-01-01

This paper presents machining parameter (turning process) optimization based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be obtained. In order to find optimal cutting parameters, the genetic algorithm (GA) has been used as an optimal solution finder. The optimization has to yield minimum machining time and minimum production cost, while considering technological and material constraints.
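
A minimal GA sketch in the spirit of this record: evolve (cutting speed, feed) pairs against a made-up machining-time-plus-tool-wear objective under simple box constraints. The cost model and bounds are illustrative assumptions, not the paper's.

```python
import random

# Toy genetic algorithm for turning parameters (cutting speed v, feed f)
# minimizing a hypothetical machining-time + tool-wear objective.
random.seed(42)

V_MIN, V_MAX = 50.0, 300.0    # cutting speed bounds (m/min)
F_MIN, F_MAX = 0.05, 0.5      # feed bounds (mm/rev)

def cost(ind):
    v, f = ind
    machining_time = 1000.0 / (v * f)                   # faster cutting -> shorter time
    tool_wear_cost = (v / 100.0) ** 3 + 40.0 * f ** 2   # but more tool wear
    return machining_time + tool_wear_cost

def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def mutate(ind):
    v, f = ind
    return (clamp(v + random.gauss(0, 10.0), V_MIN, V_MAX),
            clamp(f + random.gauss(0, 0.02), F_MIN, F_MAX))

def crossover(a, b):
    return (random.choice([a[0], b[0]]), random.choice([a[1], b[1]]))

pop = [(random.uniform(V_MIN, V_MAX), random.uniform(F_MIN, F_MAX))
       for _ in range(30)]
for _ in range(60):                               # generations
    pop.sort(key=cost)
    parents = pop[:10]                            # truncation selection (elitist)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = min(pop, key=cost)
```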

  18. An ordinal model of the McGurk illusion

    Andersen, Tobias

    2011-01-01

Audiovisual information is integrated in speech perception. One manifestation of this is the McGurk illusion in which watching the articulating face alters the auditory phonetic percept. Understanding this phenomenon fully requires a computational model with predictive power. Here, we describe … model it also employed 30 free parameters where the ordinal model needed only 14. Testing the predictive power of the models using a form of cross-validation, we found that, although both models performed rather poorly, the ordinal model performed better than the FLMP. Based on these findings, we suggest that ordinal models generally have greater predictive power because they are constrained by a priori information about the adjacency of phonetic categories.

  19. Co-ordination Action on Ocean Energy (CA-OE)

    Tedd, James; Frigaard, Peter

In October 2004, the Co-ordination Action on Ocean Energy (CA-OE) was launched, co-financed by the European Commission, under the Renewable Energy Technologies priority within the 6th Framework programme, contract number 502701, chaired by Kim Nielsen, Rambøll, Denmark. The project involves 41 partners. In general the public is not aware of the development of ocean energy and its exploitation. There is a need to make a united effort from the developers and research community to present the various principles and results in a coordinated manner with public appeal. The main objectives of the Co-ordination Action on Ocean Energy are: to develop a common knowledge base necessary for coherent research and development policies; to bring a co-ordinated approach within key areas of ocean energy research and development; and to provide a forum for the longer-term marketing of promising research developments …

  20. CFD based draft tube hydraulic design optimization

    McNabb, J; Murry, N; Mullins, B F; Devals, C; Kyriacou, S A

    2014-01-01

    The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis

  2. Data visualization for ONEDANT and TWODANT discrete ordinates codes

    Lee, C.L.

    1993-01-01

Effective graphical display of code calculations allows for efficient analysis of results. This is especially true in the case of discrete ordinates transport codes, which can generate thousands of flux or reaction rate data points per calculation. For this reason, a package of portable interface programs called OTTUI (ONEDANT-TWODANT-Tecplot Unix-Based Interface) has been developed at Los Alamos National Laboratory to permit rapid visualization of ONEDANT and TWODANT discrete ordinates results using the graphics package Tecplot. This paper describes the various uses of OTTUI for display of ONEDANT and TWODANT problem geometries and calculational results.

  3. Cultural Consensus Theory for the ordinal data case.

    Anders, Royce; Batchelder, William H

    2015-03-01

    A Cultural Consensus Theory approach for ordinal data is developed, leading to a new model for ordered polytomous data. The model introduces a novel way of measuring response biases and also measures consensus item values, a consensus response scale, item difficulty, and informant knowledge. The model is extended as a finite mixture model to fit both simulated and real multicultural data, in which subgroups of informants have different sets of consensus item values. The extension is thus a form of model-based clustering for ordinal data. The hierarchical Bayesian framework is utilized for inference, and two posterior predictive checks are developed to verify the central assumptions of the model.

  4. Logic-based methods for optimization combining optimization and constraint satisfaction

    Hooker, John

    2011-01-01

A pioneering look at the fundamental role of logic in optimization and constraint satisfaction. While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible …

  5. Optimizing a Water Simulation based on Wavefront Parameter Optimization

    Lundgren, Martin

    2017-01-01

DICE, a Swedish game company, wanted a more realistic water simulation. Currently, most large scale water simulations used in games are based upon ocean simulation technology. These techniques falter when used in other scenarios, such as coastlines. In order to produce a more realistic simulation, a new one was created based upon the water simulation technique "Wavefront Parameter Interpolation". This technique involves a rather extensive preprocess that enables ocean simulations to have …

  6. Quantization in rotating co-ordinates revisited

    Hussain, F.; Qadir, A.

    1982-07-01

    Recent work on quantization in rotating co-ordinates showed that no radiation would be seen by an observer rotating with a constant angular speed. This work used a Galilean-type co-ordinate transformation. We show that the same result holds for a Lorentz-type co-ordinate system, in spite of the fact that the metric has a co-ordinate singularity at rΩ = 1. Further, we are able to define positive and negative energy modes for a particular case of a non-static, non-stationary metric. (author)

  7. Economically optimal thermal insulation

    Berber, J.

    1978-10-01

Exemplary calculations show that exact adherence to the requirements of the thermal insulation ordinance does not lead to an economically optimal solution. This holds regardless of the mode of financing. Optimal thermal insulation exceeds the values given in the thermal insulation ordinance.
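
The underlying trade-off can be made concrete with a toy lifetime-cost model: insulation cost rises linearly with thickness while heating cost falls with it, so the economic optimum can indeed exceed a code-mandated minimum. All constants, including the ordinance value, are hypothetical.

```python
import math

# Toy economics of insulation thickness t (cm):
# total cost = insulation investment (linear in t) + lifetime heating cost
# (inversely proportional to total thermal resistance).  Illustrative numbers.
PRICE_PER_CM = 1.5       # insulation cost per cm of thickness (EUR/m^2)
HEATING_CONST = 240.0    # lifetime heating-cost factor (EUR*cm/m^2)
T_BASE = 2.0             # resistance of the bare wall, in cm-equivalent

def lifetime_cost(t):
    return PRICE_PER_CM * t + HEATING_CONST / (t + T_BASE)

# Analytic optimum: d/dt = 0  =>  t* = sqrt(HEATING_CONST / PRICE_PER_CM) - T_BASE
t_opt = math.sqrt(HEATING_CONST / PRICE_PER_CM) - T_BASE

ORDINANCE_MIN = 6.0      # hypothetical code-mandated thickness (cm)
```

With these (made-up) numbers the cost-optimal thickness is well above the mandated minimum, mirroring the record's conclusion.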

  8. Optimization Strategies for Hardware-Based Cofactorization

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  9. An ordinal classification approach for CTG categorization.

    Georgoulas, George; Karvelis, Petros; Gavrilis, Dimitris; Stylios, Chrysostomos D; Nikolakopoulos, George

    2017-07-01

Evaluation of the cardiotocogram (CTG) is a standard approach employed during pregnancy and delivery. However, its interpretation requires high-level expertise to decide whether the recording is Normal, Suspicious or Pathological. Therefore, a number of attempts have been made over the past three decades to develop automated, sophisticated systems. These systems are usually (multiclass) classification systems that assign a category to the respective CTG. However, most of these systems do not take into consideration the natural ordering of the categories associated with CTG recordings. In this work, an algorithm that explicitly takes into consideration the ordering of CTG categories, based on a binary decomposition method, is investigated. Results achieved using the C4.5 decision tree as the base classifier show that the ordinal classification approach is marginally better than the traditional multiclass classification approach, which utilizes the standard C4.5 algorithm, for several performance criteria.
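
The binary-decomposition idea (in the style of Frank and Hall) can be sketched directly: a K-class ordinal problem (Normal < Suspicious < Pathological) becomes K-1 binary questions "is y > k?", whose outputs are recombined into class probabilities. Crude one-dimensional threshold rules stand in for the C4.5 base classifiers, and the training data are invented.

```python
# Ordinal classification by binary decomposition: K ordered classes ->
# K-1 binary problems "is y > k?".  Threshold rules replace C4.5 here.
CLASSES = ["Normal", "Suspicious", "Pathological"]

def binary_classifiers(train):
    # train: list of (feature, class_index).  For each cut k, fit the crudest
    # possible model of P(y > k | x): a step at the midpoint between groups.
    thresholds = []
    for k in range(len(CLASSES) - 1):
        below = [x for x, y in train if y <= k]
        above = [x for x, y in train if y > k]
        thresholds.append((max(below) + min(above)) / 2.0)
    return thresholds

def predict(thresholds, x):
    # Hard 0/1 estimates of P(y > k), recombined into per-class probabilities:
    # P(y=0) = 1 - P(y>0); P(y=k) = P(y>k-1) - P(y>k); P(y=K-1) = P(y>K-2)
    gt = [1.0 if x > t else 0.0 for t in thresholds]
    probs = ([1.0 - gt[0]]
             + [gt[k - 1] - gt[k] for k in range(1, len(gt))]
             + [gt[-1]])
    return CLASSES[max(range(len(probs)), key=probs.__getitem__)]

# Hypothetical 1-D training data: (feature value, ordinal class index).
train = [(0.1, 0), (0.2, 0), (0.4, 1), (0.5, 1), (0.8, 2), (0.9, 2)]
thresholds = binary_classifiers(train)
```

Unlike a flat multiclass scheme, every binary subproblem here respects the class ordering, which is the property the paper exploits.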

  10. Multiobjective optimization framework for landmark measurement error correction in three-dimensional cephalometric tomography.

    DeCesare, A; Secanell, M; Lagravère, M O; Carey, J

    2013-01-01

    The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.
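
The correction step, reduced to two dimensions, is essentially a least-squares rigid alignment of landmark sets: centre both sets, solve for the optimal rotation, and translate. The landmark coordinates below are made up; this is an illustration of the minimized-distance idea, not the authors' exact 3-D algorithm.

```python
import math

# Least-squares rigid alignment of 2-D landmark sets (closed-form rotation),
# a 2-D analogue of the paper's landmark-based co-ordinate correction.
ref = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0), (5.0, 4.0), (2.0, 6.0)]

def centroid(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def align(src, dst):
    # Centre both point sets, then rotate by theta = atan2(sum cross, sum dot),
    # which minimizes the summed squared inter-point distances.
    cs, cd = centroid(src), centroid(dst)
    s = [(x - cs[0], y - cs[1]) for x, y in src]
    d = [(x - cd[0], y - cd[1]) for x, y in dst]
    cross = sum(sx * dy - sy * dx for (sx, sy), (dx, dy) in zip(s, d))
    dot = sum(sx * dx + sy * dy for (sx, sy), (dx, dy) in zip(s, d))
    theta = math.atan2(cross, dot)
    c, si = math.cos(theta), math.sin(theta)
    return [(c * sx - si * sy + cd[0], si * sx + c * sy + cd[1]) for sx, sy in s]

# Simulated "second image": the same landmarks rotated by 0.2 rad and shifted.
t = 0.2
moved = [(math.cos(t) * x - math.sin(t) * y + 3.0,
          math.sin(t) * x + math.cos(t) * y - 1.5) for x, y in ref]
recovered = align(moved, ref)   # should reproduce the reference landmarks
```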

  11. Transmission tariffs based on optimal power flow

    Wangensteen, Ivar; Gjelsvik, Anders

    1998-01-01

    This report discusses transmission pricing as a means of obtaining optimal scheduling and dispatch in a power system. This optimality includes consumption as well as generation. The report concentrates on how prices can be used as signals towards operational decisions of market participants (generators, consumers). The main focus is on deregulated systems with open access to the network. The optimal power flow theory, with demand side modelling included, is briefly reviewed. It turns out that the marginal costs obtained from the optimal power flow give the optimal transmission tariff for the load flow in question. There is also a correspondence between losses and optimal prices. Emphasis is on simple examples that demonstrate the connection between optimal power flow results and tariffs. Various cases, such as open access and single owner, are discussed. A key result is that the location of the ''marketplace'' in the open access case does not influence the net economic result for any of the parties involved (generators, network owner, consumer). The optimal power flow is instantaneous, and in its standard form cannot deal with energy-constrained systems that are coupled in time, such as hydropower systems with reservoirs. A simplified example of how the theory can be extended to such a system is discussed. An example of the influence of security constraints on prices is also given. 4 refs., 24 figs., 7 tabs

  12. Validating the cross-cultural factor structure and invariance property of the Insomnia Severity Index: evidence based on ordinal EFA and CFA.

    Chen, Po-Yi; Yang, Chien-Ming; Morin, Charles M

    2015-05-01

    The purpose of this study is to examine the factor structure of the Insomnia Severity Index (ISI) across samples recruited from different countries. We tried to identify the most appropriate factor model for the ISI and further examined the measurement invariance property of the ISI across samples from different countries. Our analyses included one data set collected from a Taiwanese sample and two data sets obtained from samples in Hong Kong and Canada. The data set collected in Taiwan was analyzed with ordinal exploratory factor analysis (EFA) to obtain the appropriate factor model for the ISI. After that, we conducted a series of confirmatory factor analyses (CFAs), a special case of the structural equation model (SEM) concerned with the parameters of the measurement model, on the data sets collected in Canada and Hong Kong. The purpose of these CFAs was to cross-validate the result obtained from the EFA and further examine the cross-cultural measurement invariance of the ISI. The three-factor model outperforms other models in terms of global fit indices in the Taiwanese population. Its external validity is also supported by the confirmatory factor analyses. Furthermore, the measurement invariance analyses show that the strong invariance property holds between the samples from different cultures, providing evidence that ISI results obtained in different cultures are comparable. The factorial validity of the ISI is stable in different populations. More importantly, its invariance property across cultures suggests that the ISI is a valid measure of the insomnia severity construct across countries. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Global stability-based design optimization of truss structures using ...

    Furthermore, a pure Pareto-ranking-based multi-objective optimization model is employed for the design optimization of the truss structure with multiple objectives. The computational performance of the optimization model is increased by implementing an island model into its evolutionary search mechanism. The proposed ...

  14. Cable Television Report and Suggested Ordinance.

    League of California Cities, Sacramento.

    Guidelines and suggested ordinances for cable television regulation by local governments are comprehensively discussed in this report. The emphasis is placed on franchising the cable operator. Seventeen legal aspects of franchising are reviewed, and an exemplary ordinance is presented. In addition, current statistics about cable franchising in…

  15. An SPSS R-Menu for Ordinal Factor Analysis

    Mario Basto

    2012-01-01

    Full Text Available Exploratory factor analysis is a widely used statistical technique in the social sciences. It attempts to identify underlying factors that explain the pattern of correlations within a set of observed variables. A statistical software package is needed to perform the calculations. However, there are some limitations with popular statistical software packages, like SPSS. The R programming language is a free software package for statistical and graphical computing. It offers many packages written by contributors from all over the world and programming resources that allow it to overcome the dialog limitations of SPSS. This paper offers an SPSS dialog written in the R programming language with the help of some packages, so that researchers with little or no knowledge of programming, or those who are accustomed to making their calculations based on statistical dialogs, have more options when applying factor analysis to their data and hence can adopt a better approach when dealing with ordinal, Likert-type data.

  16. Process optimization of friction stir welding based on thermal models

    Larsen, Anders Astrup

    2010-01-01

    This thesis investigates how to apply optimization methods to numerical models of a friction stir welding process. The work is intended as a proof-of-concept using different methods that are applicable to models of high complexity, possibly with high computational cost, and without the possibility...... information of the high-fidelity model. The optimization schemes are applied to stationary thermal models of differing complexity of the friction stir welding process. The optimization problems considered are based on optimizing the temperature field in the workpiece by finding optimal translational speed....... Also an optimization problem based on a microstructure model is solved, allowing the hardness distribution in the plate to be optimized. The use of purely thermal models represents a simplification of the real process; nonetheless, it shows the applicability of the optimization methods considered...

  17. PARTICLE SWARM OPTIMIZATION BASED OF THE MAXIMUM ...

    2010-06-30

    Jun 30, 2010 ... Keywords: Particle Swarm Optimization (PSO), photovoltaic system, MPOP, ... systems from one hand and because of the instantaneous change of ..... Because of the P-V characteristics this heuristic method is used to seek ...

  18. Product portfolio optimization based on substitution

    Myrodia, Anna; Moseley, A.; Hvam, Lars

    2017-01-01

    The development of production capabilities has led to proliferation of the product variety offered to the customer. Yet this fact does not directly imply an increase of manufacturers' profitability, nor of customers' satisfaction. Consequently, recent research focuses on portfolio optimization through...... substitution and standardization techniques. However, when re-defining the strategic market, decisions are characterized by uncertainty due to several parameters. In this study, by using a GAMS optimization model, we present a method for supporting strategic decisions on substitution, by quantifying the impact...

  19. Atomic Energy Law with ordinances. 9. ed.

    Anon.

    1982-01-01

    The revised edition of the text is due to a variety of major changes in, and amendments to, the German Atomic Energy Law. This book includes the current version of the Atomic Energy Law, which has been changed several times, the 1982 version of the ordinance on procedures laid down in the Atomic Energy Law, the 1976 radiation protection ordinance together with recent amendments, the 1973 X-ray ordinance, the 1977 financial security ordinance laid down in the Atomic Energy Law, the 1981 ordinance concerning costs, and the ordinance concerning performance in anticipation of ultimate disposal. The book is a compilation of the basic Atomic Energy Law, needed mostly for imminent practical requirements. (orig./HSCH) [de

  20. Optimization of a space based radiator

    Sam, Kien Fan Cesar Hung; Deng Zhongmin

    2011-01-01

    Nowadays there is an increasing demand for satellite weight reduction in order to reduce costs. Thermal control system designers face the challenge of reducing both the weight of the system and the required heater power while maintaining the component temperatures within their design ranges. The main purpose of this paper is to present the optimization of a heat pipe radiator applied to a practical engineering design application. For this study, a communications satellite payload panel was considered. Four radiator areas were defined instead of a centralized one in order to improve the heat rejection into space; the radiator's dimensions were determined considering the worst hot scenario, solar fluxes, heat dissipation and the components' upper design temperature limit. Dimensions, thermal properties of the structural panel, optical properties and degradation/contamination of thermal control coatings were also considered. A thermal model was constructed for thermal analysis, and two heat pipe network designs were evaluated and compared. The model that allowed better radiator efficiency was selected for parametric thermal analysis and optimization. The aim is to find the minimum size of the heat pipe network that still complies with the thermal control requirements without increasing power consumption. - Highlights: →Heat pipe radiator optimization applied to a practical engineering design application. →The heat pipe radiator of a communications satellite panel is optimized. →A thermal model was built for parametric thermal analysis and optimization. →Optimal heat pipe network size is determined for the optimal weight solution. →The thermal compliance was verified by transient thermal analysis.

  1. Practical mathematical optimization basic optimization theory and gradient-based algorithms

    Snyman, Jan A

    2018-01-01

    This textbook presents a wide range of tools for a course in mathematical optimization for upper undergraduate and graduate students in mathematics, engineering, computer science, and other applied sciences. Basic optimization principles are presented with emphasis on gradient-based numerical optimization strategies and algorithms for solving both smooth and noisy discontinuous optimization problems. Attention is also paid to the difficulties of expense of function evaluations and the existence of multiple minima that often unnecessarily inhibit the use of gradient-based methods. This second edition addresses further advancements of gradient-only optimization strategies to handle discontinuities in objective functions. New chapters discuss the construction of surrogate models as well as new gradient-only solution strategies and numerical optimization using Python. A special Python module is electronically available (via springerlink) that makes the new algorithms featured in the text easily accessible and dir...

  2. Topology optimization based on the harmony search method

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
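
    The HS loop with the three parameters the abstract tunes (HMCR, PAR, BW) can be sketched generically. A simple sphere function stands in here for compliance minimization, since the structural model itself is not reproduced; all parameter values are illustrative:

```python
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=1):
    """Minimize f over [lo, hi]^dim with a basic harmony search."""
    rng = random.Random(seed)
    # harmony memory: hms candidate solutions
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(dim):
            if rng.random() < hmcr:                    # memory consideration
                x = hm[rng.randrange(hms)][j]
                if rng.random() < par:                 # pitch adjustment
                    x += bw * rng.uniform(-1, 1)
            else:                                      # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):                      # replace worst harmony
            hm[worst] = new
    return min(hm, key=f)

sphere = lambda v: sum(x * x for x in v)
best = harmony_search(sphere, dim=3, lo=-5, hi=5)
print(round(sphere(best), 4))
```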

  4. Three-dimensional discrete ordinates reactor assembly calculations on GPUs

    Evans, Thomas M [ORNL; Joubert, Wayne [ORNL; Hamilton, Steven P [ORNL; Johnson, Seth R [ORNL; Turner, John A [ORNL; Davidson, Gregory G [ORNL; Pandya, Tara M [ORNL

    2015-01-01

    In this paper we describe and demonstrate a discrete ordinates sweep algorithm on GPUs. This sweep algorithm is nested within a multilevel communication-based decomposition based on energy. We demonstrate the effectiveness of this algorithm on detailed three-dimensional critical experiments and PWR lattice problems. For these problems we show improvement factors of 4 to 6 over conventional communication-based, CPU-only sweeps. These sweep kernel speedups resulted in a factor of 2 total time-to-solution improvement.

  5. Reliability-Based Optimization of Series Systems of Parallel Systems

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1993-01-01

    Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are treated. A sensitivity analysis for this class of problems is presented. Optimization problems with series systems of parallel systems...... optimization of series systems of parallel systems, but it is also efficient in reliability-based optimization of series systems in general....

  6. Geometrically based optimization for extracranial radiosurgery

    Liu Ruiguo; Wagner, Thomas H; Buatti, John M; Modrick, Joseph; Dill, John; Meeks, Sanford L

    2004-01-01

    For static beam conformal intracranial radiosurgery, geometry of the beam arrangement dominates overall dose distribution. Maximizing beam separation in three dimensions decreases beam overlap, thus maximizing dose conformality and gradient outside of the target volume. Webb proposed arrangements of isotropically convergent beams that could be used as the starting point for a radiotherapy optimization process. We have developed an extracranial radiosurgery optimization method by extending Webb's isotropic beam arrangements to deliverable beam arrangements. This method uses an arrangement of N maximally separated converging vectors within the space available for beam delivery. Each bouquet of isotropic beam vectors is generated by a random sampling process that iteratively maximizes beam separation. Next, beam arrangement is optimized for critical structure avoidance while maintaining minimal overlap between beam entrance and exit pathways. This geometrically optimized beam set can then be used as a template for either conformal beam or intensity modulated extracranial radiosurgery. Preliminary results suggest that using this technique with conformal beam planning provides high plan conformality, a steep dose gradient outside of the tumour volume and acceptable critical structure avoidance in the majority of clinical cases

  7. Optimal separable bases and molecular collisions

    Poirier, L.W.

    1997-12-01

    A new methodology is proposed for the efficient determination of Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, are problems of reduced dimensionality for most systems of physical interest. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. These distorted waves give rise to a Born series with optimized convergence properties. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic oscillator system. The primary interest, however, is quantum reactive scattering in molecular systems. For numerical calculations, the use of distorted waves corresponds to numerical preconditioning. The new methodology therefore gives rise to an optimized preconditioning scheme for the efficient calculation of reactive and inelastic scattering amplitudes, especially at intermediate energies. This scheme is particularly suited to discrete variable representations (DVRs) and iterative sparse matrix methods commonly employed in such calculations. State-to-state and cumulative reactive scattering results obtained via the optimized preconditioner are presented for the two-dimensional collinear H + H2 → H2 + H system. Computational time and memory requirements for this system are drastically reduced in comparison with other methods, and results are obtained for previously prohibitive energy regimes.

  8. Lanthanide co-ordination frameworks: Opportunities and diversity

    Hill, Robert J.; Long, De-Liang; Hubberstey, Peter; Schroeder, Martin; Champness, Neil R.

    2005-01-01

    Significant successes have been achieved over recent years in preparing co-ordination framework polymers that show macroscopic material properties, but in the vast majority of cases this has been achieved with d-block metal-based systems. Lanthanide co-ordination frameworks also offer attractive properties in terms of their potential applications as luminescent, non-linear optical and porous materials. However, lanthanide-based systems have been far less studied to date than their d-block counterparts. One possible reason for this is that the co-ordination spheres of lanthanide cations are more difficult to control and, in the absence of design strategies for lanthanide co-ordination frameworks, it is significantly more difficult to target materials with specific properties. However, this article highlights some of the exciting possibilities that have emerged from the earliest investigations in this field, with new topological families of compounds being discovered from relatively simple framework components, including unusual eight-, seven- and five-connected framework systems. Our own research, as well as that of others, is leading to a much greater appreciation of the factors that control framework formation and the resultant observed topologies of these polymers. As this understanding develops, targeting particular framework types will become more straightforward and the development of designed polyfunctional materials more accessible. Thus, it can be seen that lanthanide co-ordination frameworks have the potential to open up previously unexplored directions for materials chemistry. This article focuses on the underlying concepts for the construction of these enticing and potentially highly important materials

  9. The Optimal Wavelengths for Light Absorption Spectroscopy Measurements Based on Genetic Algorithm-Particle Swarm Optimization

    Tang, Ge; Wei, Biao; Wu, Decao; Feng, Peng; Liu, Juan; Tang, Yuan; Xiong, Shuangfei; Zhang, Zheng

    2018-03-01

    To select the optimal wavelengths in the light extinction spectroscopy measurement, genetic algorithm-particle swarm optimization (GAPSO) based on genetic algorithm (GA) and particle swarm optimization (PSO) is adopted. The change of the optimal wavelength positions in different feature size parameters and distribution parameters is evaluated. Moreover, the Monte Carlo method based on random probability is used to identify the number of optimal wavelengths, and good inversion effects of the particle size distribution are obtained. The method proved to have the advantage of resisting noise. In order to verify the feasibility of the algorithm, spectra with bands ranging from 200 to 1000 nm are computed. Based on this, the measured data of standard particles are used to verify the algorithm.

  10. Physical bases for diffusion welding processes optimization

    Bulygina, S.M.; Berber, N.N.; Mukhambetov, D.G.

    1999-01-01

    One widespread method of joining different materials is diffusion welding. It is achieved by mutual diffusion of atoms across the contacting surfaces during long-duration holding under heating and compression. The welding regime, depending on the properties of the parts to be welded, is defined by three parameters: temperature, pressure and time. The problem of diffusion welding optimization consists in determining the lowest values of these parameters that still comply with the quality requirements for the welded joint. In this work, experiments on diffusion welding were carried out at the calculated temperature and for a given surface roughness. Tests were conducted on samples of iron and an iron-nickel alloy with size 1·1·1 cm³. The optimal regime for diffusion welding of the examined samples in vacuum is defined. It includes compression of the welded samples, heating, and isothermal holding at a temperature of 650 deg C for 0.5 h, and affords the required homogeneity of the joint

  11. Interleaver Optimization using Population-Based Metaheuristics

    Snášel, V.; Platoš, J.; Krömer, P.; Abraham, A.; Ouddane, N.; Húsek, Dušan

    2010-01-01

    Roč. 20, č. 5 (2010), s. 591-608 ISSN 1210-0552 R&D Projects: GA ČR GA205/09/1079 Grant - others:GA ČR(CZ) GA102/09/1494 Institutional research plan: CEZ:AV0Z10300504 Keywords : turbo codes * global optimization * genetic algorithms * differential evolution * noisy communication channel Subject RIV: IN - Informatics, Computer Science Impact factor: 0.511, year: 2010

  12. Defining a region of optimization based on engine usage data

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-08-04

    Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.

  13. Optimal design of the heat pipe using TLBO (teaching–learning-based optimization) algorithm

    Rao, R.V.; More, K.C.

    2015-01-01

    Heat pipe is a highly efficient and reliable heat transfer component. It is a closed container designed to transfer a large amount of heat in a system. Since the heat pipe operates on a closed two-phase cycle, the heat transfer capacity is greater than for solid conductors. Also, the thermal response time is less than with solid conductors. The three major elemental parts of the rotating heat pipe are: a cylindrical evaporator, a truncated cone condenser, and a fixed amount of working fluid. In this paper, a recently proposed stochastic advanced optimization algorithm called TLBO (Teaching–Learning-Based Optimization) is used for single-objective as well as multi-objective design optimization of a heat pipe. It is easy to implement, does not make use of derivatives, and can be applied to unconstrained or constrained problems. Two examples of heat pipe design are presented in this paper. The results of applying the TLBO algorithm to the design optimization of a heat pipe are compared with those of the NPGA (Niched Pareto Genetic Algorithm), GEM (Grenade Explosion Method) and GEO (Generalized Extremal Optimization). It is found that the TLBO algorithm produces better results than those obtained using the NPGA, GEM and GEO algorithms. - Highlights: • The TLBO (Teaching–Learning-Based Optimization) algorithm is used for the design and optimization of a heat pipe. • Two examples of heat pipe design and optimization are presented. • The TLBO algorithm proved better than the other optimization algorithms in terms of results and convergence
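
    TLBO's two phases (teacher and learner) can be sketched compactly. A sphere function stands in for the heat-pipe design objective, and population size, iteration count and bounds are illustrative:

```python
import random

def tlbo(f, dim, lo, hi, pop_size=20, iters=100, seed=3):
    """Minimize f over [lo, hi]^dim with basic TLBO (no tuning parameters
    beyond population size and iterations)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    clip = lambda v: [min(hi, max(lo, x)) for x in v]
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution ("teacher")
        teacher = min(pop, key=f)
        mean = [sum(p[j] for p in pop) / pop_size for j in range(dim)]
        for i, p in enumerate(pop):
            tf = rng.choice([1, 2])                      # teaching factor
            cand = clip([p[j] + rng.random() * (teacher[j] - tf * mean[j])
                         for j in range(dim)])
            if f(cand) < f(p):                           # greedy acceptance
                pop[i] = cand
        # Learner phase: learn pairwise from a random classmate
        for i, p in enumerate(pop):
            q = pop[rng.randrange(pop_size)]
            sign = 1 if f(p) < f(q) else -1              # move toward the better
            cand = clip([p[j] + sign * rng.random() * (p[j] - q[j])
                         for j in range(dim)])
            if f(cand) < f(p):
                pop[i] = cand
    return min(pop, key=f)

sphere = lambda v: sum(x * x for x in v)
best = tlbo(sphere, dim=4, lo=-10, hi=10)
print(round(sphere(best), 6))
```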

  14. A Novel Optimal Control Method for Impulsive-Correction Projectile Based on Particle Swarm Optimization

    Ruisheng Sun

    2016-01-01

    Full Text Available This paper presents a new parametric optimization approach based on a modified particle swarm optimization (PSO) to design a class of impulsive-correction projectiles with discrete, flexible-time-interval, and finite-energy control. In terms of optimal control theory, the task is formulated as minimizing the number of working impulses and the control error, which involves reference model linearization, boundary conditions, and a discontinuous objective function. These features make it difficult to find the global optimum by directly applying other optimization approaches, for example, the hp-adaptive pseudospectral method. Consequently, the PSO mechanism is employed for optimal setting of the impulsive control, with the time intervals between two neighboring lateral impulses taken as design variables, which keeps the optimization process brief. A modification of the basic PSO algorithm is developed to improve the convergence speed of the optimization by linearly decreasing the inertia weight. In addition, a suboptimal control and guidance law based on the PSO technique is put forward for the real-time consideration of online design in practice. Finally, a simulation case coupled with a nonlinear flight dynamic model is used to validate the modified PSO control algorithm. The results of the comparative study illustrate that the proposed optimal control algorithm obtains the optimal control efficiently and accurately and provides a reference approach to handling such impulsive-correction problems.
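
    The linearly decreasing inertia weight adopted here is a standard PSO refinement: w decays from w_max to w_min over the run, trading early exploration for late exploitation. A minimal sketch on a toy objective (the flight-dynamics objective is not reproduced; all values are illustrative):

```python
import random

def pso(f, dim, lo, hi, n=20, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, seed=7):
    """Minimize f over [lo, hi]^dim with PSO and linearly decaying inertia."""
    rng = random.Random(seed)
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]                            # personal bests
    gbest = min(pbest, key=f)[:]                         # global best
    for it in range(iters):
        w = w_max - (w_max - w_min) * it / (iters - 1)   # linear decay
        for i in range(n):
            for j in range(dim):
                v[i][j] = (w * v[i][j]
                           + c1 * rng.random() * (pbest[i][j] - x[i][j])
                           + c2 * rng.random() * (gbest[j] - x[i][j]))
                x[i][j] = min(hi, max(lo, x[i][j] + v[i][j]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(x[i]) < f(gbest):
                    gbest = x[i][:]
    return gbest

sphere = lambda v: sum(t * t for t in v)
best = pso(sphere, dim=3, lo=-5, hi=5)
print(round(sphere(best), 6))
```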

  15. Empty tracks optimization based on Z-Map model

    Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao

    2017-12-01

    For parts with many features, there are many empty tracks during machining. If these tracks are not optimized, machining efficiency is seriously affected. In this paper, the characteristics of the empty tracks are studied in detail. Combining with an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and Z-Map model simulation is used to analyze the order constraints between the unit segments. The empty-track optimization problem is transformed into a TSP with sequential constraints, which is then solved with a genetic algorithm. This optimization method can handle not only simple structural parts but also complex ones, effectively planning the empty tracks and greatly improving processing efficiency.
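
    The core sequencing problem can be reduced to a few lines: order machining segments to shorten empty travel while honoring precedence constraints between segments. A greedy nearest-feasible-neighbor sketch (the paper uses a genetic algorithm; the coordinates and constraint below are made up for illustration):

```python
import math

def order_segments(points, precedes, start=(0.0, 0.0)):
    """Visit every point, never visiting j before all of precedes[j]."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    done, tour, cur = set(), [], start
    while len(done) < len(points):
        # candidates whose predecessors are all machined already
        feasible = [i for i in range(len(points))
                    if i not in done and precedes.get(i, set()) <= done]
        nxt = min(feasible, key=lambda i: dist(cur, points[i]))
        done.add(nxt)
        tour.append(nxt)
        cur = points[nxt]
    return tour

pts = [(0, 1), (5, 5), (1, 0), (4, 4)]
# segment 3 must be machined before segment 1 (e.g. roughing before finishing)
print(order_segments(pts, {1: {3}}))  # → [0, 2, 3, 1]
```

    A GA, as in the paper, would search over full permutations (repaired to respect the constraints) instead of committing greedily at each step.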

  16. Elitism set based particle swarm optimization and its application

    Yanxia Sun

    2017-01-01

    Full Text Available Topology plays an important role in enabling Particle Swarm Optimization (PSO) to achieve good optimization performance. It is difficult to find one topology structure that lets the particles achieve better optimization performance than all others, since performance depends not only on the searching abilities of the particles but also on the type of optimization problem. Three elitist-set-based PSO algorithms that use no explicit topology structure are proposed in this paper. An elitist set, based on the individual best experiences, is used to communicate among the particles. Moreover, to avoid premature convergence of the particles, different statistical methods are used in the three proposed methods. The performance of the proposed PSOs is compared with the results of standard PSO 2011 and several PSOs with different topologies, and the simulation results and comparisons demonstrate that the proposed PSO with adaptive probabilistic preference can achieve good optimization performance.

  17. Shape signature based on Ricci flow and optimal mass transportation

    Luo, Wei; Su, Zengyu; Zhang, Min; Zeng, Wei; Dai, Junfei; Gu, Xianfeng

    2014-11-01

    A shape signature based on surface Ricci flow and optimal mass transportation is introduced for the purpose of surface comparison. First, the surface is conformally mapped onto the plane by Ricci flow, which induces a measure on the planar domain. Second, the unique optimal mass transport map is computed that transports the new measure to the canonical measure on the plane. The map is obtained by a convex optimization process. This optimal transport map encodes all the information of the Riemannian metric on the surface. The shape signature consists of the optimal transport map together with the mean curvature, which can fully recover the original surface. The discrete theories of surface Ricci flow and optimal mass transportation are explained thoroughly. The algorithms are given in detail. The signature is tested on human facial surfaces with different expressions acquired by a structured-light 3-D scanner based on the phase-shifting method. The experimental results demonstrate the efficiency and efficacy of the method.

  18. A Characterization of Ordinal Potential Games

    Voorneveld, M.; Norde, H.W.

    1996-01-01

    This note characterizes ordinal potential games by the absence of weak improvement cycles and an order condition on the strategy space. This order condition is automatically satisfied if the strategy space is countable.

  19. Overview of Existing Wind Energy Ordinances

    Oteri, F.

    2008-12-01

    Due to increased energy demand in the United States, rural communities with limited or no experience with wind energy now have the opportunity to become involved in this industry. Communities with good wind resources may be approached by entities with plans to develop the resource. Although these opportunities can create new revenue in the form of construction jobs and land lease payments, they also create a new responsibility on the part of local governments to ensure that ordinances will be established to aid the development of safe facilities that will be embraced by the community. The purpose of this report is to educate and engage state and local governments, as well as policymakers, about existing large wind energy ordinances. These groups will have a collection of examples to utilize when they attempt to draft a new large wind energy ordinance in a town or county without existing ordinances.

  20. Extracts from the Ordinance on foodstuffs

    1936-05-01

    This Ordinance which regulates the consumption and treatment of foodstuffs also contains provisions on irradiated food, providing in particular that treatment of food by irradiation is subject to a prior licence. (NEA) [fr

  1. Allegheny County Municipal Land Use Ordinances

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Many municipalities have their own land use ordinances and establish standards and requirements for land use and development in that municipality. This dataset is...

  2. Ordinance on nuclear third party liability (ORCN)

    1983-12-01

    The Ordinance exempts from the application of the 1983 Act on Nuclear Third Party Liability some substances with low radiation effects. It determines the amount of private insurance cover and defines the risks that insurers may exclude from cover. It establishes a special fund for nuclear damage made up of contributions from the nuclear operators. Specifications are given on the amount of the contributions and their conditions, as well as on administration of the fund. The Ordinance repeals the Ordinance of 13 June 1960 on funds for delayed atomic damage, the Order of 19 December 1960 on contributions to the fund for delayed atomic damage and the Ordinance of 30 November 1981 on cover for third party liability resulting from nuclear power plant operation [fr

  3. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually in the shape of a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To match the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure; problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. In addition, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.

  4. Optimal, Reliability-Based Code Calibration

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....

  5. Support vector machines optimization based theory, algorithms, and extensions

    Deng, Naiyang; Zhang, Chunhua

    2013-01-01

    Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs)-classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs since optimization is one of the pillars on which SVMs are built.The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twi

  6. Ordinance on the Protection against X-Radiation Hazards (X-Ray Ordinance)

    1987-01-01

    The ordinance refers to X-ray equipment and to stray radiation sources which generate X-radiation of at least 5 keV by means of accelerated electrons, and for this purpose apply an acceleration energy not exceeding 3 MeV. The ordinance does not apply to stray radiation sources which are used for the generation of ionizing particle radiation and thus are subject to the provisions of the Radiation Protection Ordinance. (orig./PW) [de

  7. Optimal design of RTCs in digital circuit fault self-repair based on global signal optimization

    Zhang Junbin; Cai Jinyan; Meng Yafeng

    2016-01-01

    Since digital circuits are widely applied in various fields, electronic systems are increasingly complicated and require greater reliability. Faults may occur in electronic systems operating in complicated environments, and if immediate field repairs are not made, the systems will not run normally, leading to serious losses. The traditional method of improving system reliability through redundant fault-tolerant techniques can no longer meet the requirements. Therefore, building on the evolvable-hardware-based and reparation-balance-technology-based electronic circuit fault self-repair strategy proposed in our preliminary work, the optimal design of rectification circuits (RTCs) in electronic circuit fault self-repair based on global signal optimization is studied in depth in this paper. First, the basic theory of RTC optimal design based on global signal optimization is proposed. Second, relevant considerations and suitable ranges are analyzed. Then, the basic flow of RTC optimal design is described. Finally, a typical circuit is selected for simulation verification, and a detailed simulated analysis is made of five circumstances that occur during RTC evolution. The simulation results prove that, compared with an RTC designed by the conventional method, an RTC designed by the global signal optimization method has lower hardware cost, faster circuit evolution, higher convergence precision, and a higher circuit evolution success rate. Therefore, the global signal optimization based RTC optimal design method applied to electronic circuit fault self-repair technology is proven to be feasible, effective, and advantageous.

  8. Atomic Ordinance - Amendment of 28 October 1987

    1987-10-01

    This Ordinance amends certain provisions of the 1984 Ordinance on licences for the construction and operation of nuclear installations, import, export and transit of nuclear fuel, as well as the export of nuclear reactors, equipment and technical data. The Order also amends the provisions on the delivery procedure for these licences and makes minor amendments to the 1983 Order on nuclear third party liability [fr

  9. Co-ordinating Product Developing Activities

    Terkelsen, Søren Bendix

    1996-01-01

    The paper contains a presentation of research methods to be used in case studies in product development and a presentation of how to deal with Design Co-ordination according to the literature.

  10. Recent developments in discrete ordinates electron transport

    Morel, J.E.; Lorence, L.J. Jr.

    1986-01-01

    The discrete ordinates method is a deterministic method for numerically solving the Boltzmann equation. It was originally developed for neutron transport calculations, but is routinely used for photon and coupled neutron-photon transport calculations as well. The computational state of the art for coupled electron-photon transport (CEPT) calculations is not as developed as that for neutron transport calculations. The only production codes currently available for CEPT calculations are condensed-history Monte Carlo codes such as the ETRAN and ITS codes. A deterministic capability for production calculations is clearly needed. In response to this need, we have begun the development of a production discrete ordinates code for CEPT calculations. The purpose of this paper is to describe the basic approach we are taking, discuss the current status of the project, and present some new computational results. Although further characterization of the coupled electron-photon discrete ordinates method remains to be done, the results to date indicate that the discrete ordinates method can be just as accurate and from 10 to 100 times faster than the Monte Carlo method for a wide variety of problems. We stress that these results are obtained with standard discrete ordinates codes such as ONETRAN. It is clear that even greater efficiency can be obtained by developing a new generation of production discrete ordinates codes specifically designed to solve the Boltzmann-Fokker-Planck equation. However, the prospects for such development in the near future appear to be remote.
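
As an illustration of the discrete ordinates idea (not of the coupled electron-photon codes the record discusses), the sketch below performs a single transport sweep through a purely absorbing 1-D slab using one S2 quadrature direction and diamond differencing; with no scattering, one sweep is the exact discrete solution, and it can be checked against the analytic attenuation. All parameters are illustrative.

```python
import math

def sweep_slab(sigma_t=1.0, length=1.0, n_cells=100, inflow=1.0):
    """One transport sweep for a purely absorbing slab, discretised with
    diamond differencing: mu*(out - in)/dx + sigma_t*(in + out)/2 = 0."""
    mu = 1.0 / math.sqrt(3.0)     # positive S2 Gauss quadrature direction
    dx = length / n_cells
    psi = inflow                  # angular flux entering at the left face
    for _ in range(n_cells):
        psi = psi * (mu / dx - sigma_t / 2) / (mu / dx + sigma_t / 2)
    return psi                    # angular flux exiting at the right face

exiting = sweep_slab()
analytic = math.exp(-1.0 / (1.0 / math.sqrt(3.0)))  # exp(-sigma_t * L / mu)
```

With scattering present, the same sweep would be repeated inside a source iteration until the scattering source converges.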

  11. portfolio optimization based on nonparametric estimation methods

    Mahsa Ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is deciding which stock to invest in and selecting an optimal portfolio. This process is carried out by assessing risk and expected return. In the portfolio selection problem, if asset returns are normally distributed, variance and standard deviation can be used as the risk measure. But expected asset returns are not necessarily normal and sometimes differ dramatically from the normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; this method is compared with the linear programming method. The data used in this study consist of the monthly returns of 15 companies selected from the top 50 companies in the Tehran Stock Exchange during the winter of 1392, with returns considered from April of 1388 to June of 1393. The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is also much faster.
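
The nonparametric (historical) CVaR the abstract refers to is the average loss in the worst tail of the empirical return distribution, with no distributional assumption. A minimal estimator with an illustrative return series (the numbers are hypothetical, not the Tehran Stock Exchange data):

```python
def historical_cvar(returns, alpha=0.95):
    """Historical CVaR: the average loss over the worst (1 - alpha)
    fraction of the observed returns."""
    losses = sorted((-r for r in returns), reverse=True)  # worst losses first
    k = max(1, int(round(len(losses) * (1 - alpha))))
    return sum(losses[:k]) / k

# Monthly returns of a hypothetical portfolio (illustrative numbers).
rets = [0.02, -0.05, 0.01, 0.03, -0.10, 0.015, -0.02, 0.04, 0.0, -0.01] * 2
cvar = historical_cvar(rets, alpha=0.95)
```

In a portfolio optimization, this quantity is evaluated on the weighted sum of the assets' historical returns for each candidate weight vector.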

  12. Optimal separable bases and series expansions

    Poirier, B.

    1997-01-01

    A method is proposed for the efficient calculation of the Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert-space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, is a problem of reduced dimensionality. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. The full solution is obtained from the approximation via iterative expansion. In the time-independent perturbation expansion for instance, all of the first-order energy corrections are zero. In the Green's function case, we have a distorted-wave Born series with optimized convergence properties. This series may converge even when the usual Born series diverges. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic-oscillator system, in the course of which the quantum tanh² potential problem is solved exactly. The universal presence of bound states in the latter is shown to imply long-lived resonances in the former. In a comparison with other theoretical methods, we find that the reaction path Hamiltonian fails to predict such resonances. copyright 1997 The American Physical Society

  13. An integrated reliability-based design optimization of offshore towers

    Karadeniz, Halil; Togan, Vedat; Vrouwenvelder, Ton

    2009-01-01

    After recognizing the uncertainty in the parameters such as material, loading, geometry and so on in contrast with the conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful to perform an economical design implementation, which includes a reliability analysis and an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis both for optimization and for reliability. The efficiency of the RBDO system depends on the mentioned numerical algorithms. In this work, an integrated algorithms system is proposed to implement the RBDO of the offshore towers, which are subjected to the extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of towers are as follows: (a) a structural analysis program, SAPOS, (b) an optimization program, SQP and (c) a reliability analysis program based on FORM. A demonstration of an example tripod tower under the reliability constraints based on limit states of the critical stress, buckling and the natural frequency is presented.

  14. An integrated reliability-based design optimization of offshore towers

    Karadeniz, Halil [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)], E-mail: h.karadeniz@tudelft.nl; Togan, Vedat [Department of Civil Engineering, Karadeniz Technical University, Trabzon (Turkey); Vrouwenvelder, Ton [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)

    2009-10-15

    After recognizing the uncertainty in the parameters such as material, loading, geometry and so on in contrast with the conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful to perform an economical design implementation, which includes a reliability analysis and an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis both for optimization and for reliability. The efficiency of the RBDO system depends on the mentioned numerical algorithms. In this work, an integrated algorithms system is proposed to implement the RBDO of the offshore towers, which are subjected to the extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of towers are as follows: (a) a structural analysis program, SAPOS, (b) an optimization program, SQP and (c) a reliability analysis program based on FORM. A demonstration of an example tripod tower under the reliability constraints based on limit states of the critical stress, buckling and the natural frequency is presented.

  15. OPF-Based Optimal Location of Two Systems Two Terminal HVDC to Power System Optimal Operation

    Mehdi Abolfazli

    2013-04-01

    Full Text Available In this paper a suitable mathematical model of the two terminal HVDC system is provided for optimal power flow (OPF) and OPF-based optimal location, using a power injection model. The ability of voltage source converter (VSC)-based HVDC to independently control active and reactive power is well represented by the model. The model is used to develop an OPF-based optimal location algorithm for two systems two terminal HVDC, minimizing the total fuel cost and active power losses as the objective function. The optimization framework is modeled as non-linear programming (NLP) and solved with the Matlab and GAMS software. The proposed algorithm is implemented on the IEEE 14- and 30-bus test systems. The simulation results show the ability of two systems two terminal HVDC to improve power system operation. Furthermore, two systems two terminal HVDC is compared with PST and OUPFC in power system operation from economical and technical aspects.

  16. Optimizing block-based maintenance under random machine usage

    de Jonge, Bram; Jakobsons, Edgars

    Existing studies on maintenance optimization generally assume that machines are either used continuously, or that times until failure do not depend on the actual usage. In practice, however, these assumptions are often not realistic. In this paper, we consider block-based maintenance optimization

  17. Reliability-Based Optimization of Series Systems of Parallel Systems

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  18. Optimization of microgrids based on controller designing for ...

    The power quality of microgrid during islanded operation is strongly related with the controller performance of DGs. Therefore a new optimal control strategy for distributed generation based inverter to connect to the generalized microgrid is proposed. This work shows developing optimal control algorithms for the DG ...

  19. Energy dependent mesh adaptivity of discontinuous isogeometric discrete ordinate methods with dual weighted residual error estimators

    Owens, A. R.; Kópházi, J.; Welch, J. A.; Eaton, M. D.

    2017-04-01

    In this paper a hanging-node, discontinuous Galerkin, isogeometric discretisation of the multigroup, discrete ordinates (SN) equations is presented in which each energy group has its own mesh. The equations are discretised using Non-Uniform Rational B-Splines (NURBS), which allows the coarsest mesh to exactly represent the geometry for a wide range of engineering problems of interest; this would not be the case using straight-sided finite elements. Information is transferred between meshes via the construction of a supermesh. This is a non-trivial task for two arbitrary meshes, but is significantly simplified here by deriving every mesh from a common coarsest initial mesh. In order to take full advantage of this flexible discretisation, goal-based error estimators are derived for the multigroup, discrete ordinates equations with both fixed (extraneous) and fission sources, and these estimators are used to drive an adaptive mesh refinement (AMR) procedure. The method is applied to a variety of test cases for both fixed and fission source problems. The error estimators are found to be extremely accurate for linear NURBS discretisations, with degraded performance for quadratic discretisations owing to a reduction in relative accuracy of the "exact" adjoint solution required to calculate the estimators. Nevertheless, the method seems to produce optimal meshes in the AMR process for both linear and quadratic discretisations, and is ≈100× more accurate than uniform refinement for the same amount of computational effort for a 67-group deep penetration shielding problem.

  20. Optimization algorithm based on densification and dynamic canonical descent

    Bousson, K.; Correia, S. D.

    2006-07-01

    Stochastic methods have gained some popularity in global optimization in that most of them do not assume the cost functions to be differentiable. They have capabilities to avoid being trapped by local optima, and may converge even faster than gradient-based optimization methods on some problems. The present paper proposes an optimization method, which reduces the search space by means of densification curves, coupled with the dynamic canonical descent algorithm. The performances of the new method are shown on several known problems classically used for testing optimization algorithms, and proved to outperform competitive algorithms such as simulated annealing and genetic algorithms.

  1. Optimization Research of Generation Investment Based on Linear Programming Model

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operational research and a mathematical method that assists people in scientific management. GAMS is an advanced simulation and optimization modeling language that combines a large number of complex mathematical programming formulations, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, the optimal investment decision-making for generation is simulated and analyzed. Finally, the optimal installed capacity of the power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investments.
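
The record does not reproduce the GAMS formulation, but the shape of such a generation-investment LP can be sketched with hypothetical numbers: minimize capacity cost subject to a demand constraint and per-plant capacity limits. For two variables, the optimum can be found by enumerating constraint-intersection vertices (a production study would use an LP solver instead).

```python
from itertools import combinations

# Toy generation-investment LP (hypothetical costs and limits):
#   minimize 50*x + 80*y          (unit capacity costs of plants x and y)
#   subject to x + y >= 100 (demand), x <= 70, y <= 60, x >= 0, y >= 0
# Each constraint is written as a*x + b*y <= c.
cons = [(-1, -1, -100), (1, 0, 70), (0, 1, 60), (-1, 0, 0), (0, -1, 0)]

def feasible(p, tol=1e-9):
    x, y = p
    return all(a * x + b * y <= c + tol for a, b, c in cons)

def vertices():
    # An LP optimum lies at a vertex: the intersection of two constraint lines.
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel constraints, no intersection point
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if feasible((x, y)):
            yield x, y

best = min(vertices(), key=lambda p: 50 * p[0] + 80 * p[1])
cost = 50 * best[0] + 80 * best[1]
# the cheap plant is built to its 70 MW limit; the rest comes from plant y
```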

  2. Radiation Ordinance 1983 (No. 58 of 1983) (Australian Capital Territory)

    1983-01-01

    This Ordinance provides for the safe use, transportation and disposal of radioactive materials and irradiating apparatus. It repeals the Fluoroscopes Ordinance of 1958. Radioactive materials whose radioactivity does not exceed levels as set out in a Schedule to the Ordinance are exempted from application of the Ordinance. (NEA) [fr

  3. Novel Verification Method for Timing Optimization Based on DPSO

    Chuandong Chen

    2018-01-01

    Full Text Available Timing optimization for logic circuits is one of the key steps in logic synthesis. Existing results are mainly based on various intelligence algorithms; hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms over the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of the mixed polarity Reed-Muller (MPRM) logic circuit. Second, the Design Compiler (DC) algorithm was used to optimize the timing of the same MPRM logic circuit through special settings and constraints. Finally, the timing optimization results of the two algorithms were compared based on MCNC benchmark circuits. The timing optimization results obtained using DPSO are compared with those obtained from DC, and DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths for a number of MCNC benchmark circuits. The proposed verification method directly ascertains whether an intelligence algorithm has better timing optimization ability than DC.
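
The record does not give the paper's MPRM-specific DPSO, but the generic binary (discrete) PSO of Kennedy and Eberhart illustrates the discrete mechanism: velocities are real-valued, and a sigmoid of the velocity gives the probability of each bit being 1. The one-max objective and all parameters below are illustrative assumptions.

```python
import math
import random

def onemax(bits):
    """Toy objective to maximize: the number of 1-bits."""
    return sum(bits)

def binary_pso(f, n_bits=20, n_particles=15, iters=100, seed=3):
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_bits):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                # Sigmoid of the velocity = probability that the bit is 1.
                pos[i][d] = 1 if rng.random() < 1 / (1 + math.exp(-vel[i][d])) else 0
            fi = f(pos[i])
            if fi > pbest_f[i]:
                pbest_f[i], pbest[i] = fi, pos[i][:]
                if fi > gbest_f:
                    gbest_f, gbest = fi, pos[i][:]
    return gbest, gbest_f

best_bits, best_val = binary_pso(onemax)
```

For a timing-optimization application, the bit string would instead encode polarity choices of the MPRM circuit, and `f` would be the (negated) critical-path delay.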

  4. A comparison of physically and radiobiologically based optimization for IMRT

    Jones, Lois; Hoban, Peter

    2002-01-01

    Many optimization techniques for intensity modulated radiotherapy have now been developed. The majority of these techniques, including all the commercially available systems, are based on physical dose methods of assessment. Some techniques have also been based on radiobiological models. None of the radiobiological optimization techniques, however, has assessed the clinically realistic situation of considering both tumor and normal cells within the target volume. This study uses a ratio-based fluence optimization technique to compare a dose-based optimization method described previously with two biologically based models. The biologically based methods use the values of equivalent uniform dose calculated for the tumor cells and integral biological effective dose for normal cells. The first biologically based method includes only tumor cells in the target volume, while the second considers both tumor and normal cells in the target volume. All three methods achieve good conformation to the target volume. The biologically based optimization without normal tissue in the target volume shows a high-dose region in the center of the target volume, which is reduced when the normal tissues are also considered. This effect occurs because the normal tissues in the target volume require the optimization to reduce the dose and therefore limit the maximum dose to that volume
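
The equivalent uniform dose used in such biologically based models is commonly computed with the generalized EUD (power-mean) formula over a dose-volume histogram. A minimal sketch with illustrative DVH numbers (not from the study):

```python
def gEUD(dose_bins, a):
    """Generalized equivalent uniform dose from a differential DVH given as
    (volume fraction, dose) pairs: gEUD = (sum_i v_i * D_i**a) ** (1/a).
    Large positive a approaches the maximum dose (serial organs); large
    negative a approaches the minimum dose (tumor cold spots); a = 1
    gives the mean dose."""
    assert abs(sum(v for v, _ in dose_bins) - 1.0) < 1e-9  # fractions sum to 1
    return sum(v * d ** a for v, d in dose_bins) ** (1.0 / a)

dvh = [(0.5, 60.0), (0.5, 70.0)]   # half the volume at 60 Gy, half at 70 Gy
mean_like = gEUD(dvh, a=1)         # with a = 1 this is simply the mean dose
```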

  5. Intelligent fault recognition strategy based on adaptive optimized multiple centers

    Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong

    2018-06-01

    For recognition based on a single optimized center, one important issue is that data with a nonlinear separatrix cannot be recognized accurately. To solve this problem, a novel recognition strategy based on adaptive optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix using multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, covering recognition accuracy, the number of optimized centers, and the distance relationship. According to the characteristics of the data, the priority levels are adjusted to control the number of optimized centers adaptively while keeping the original accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural network, and Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability on different distribution characteristics of data.

  6. Integrated Reliability-Based Optimal Design of Structures

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1987-01-01

    In conventional optimal design of structural systems the weight or the initial cost of the structure is usually used as objective function. Further, the constraints require that the stresses and/or strains at some critical points have to be less than some given values. Finally, all variables......-based optimal design is discussed. Next, an optimal inspection and repair strategy for existing structural systems is presented. An optimization problem is formulated , where the objective is to minimize the expected total future cost of inspection and repair subject to the constraint that the reliability...... value. The reliability can be measured from an element and/or a systems point of view. A number of methods to solve reliability-based optimization problems has been suggested, see e.g. Frangopol [I]. Murotsu et al. (2], Thoft-Christensen & Sørensen (3] and Sørensen (4). For structures where...

  7. Topology Optimization of Passive Micromixers Based on Lagrangian Mapping Method

    Yuchen Guo

    2018-03-01

    Full Text Available This paper presents an optimization-based design method for passive micromixers for immiscible fluids, i.e., the case in which the Peclet number is infinitely large. Based on the topology optimization method, an optimization model is constructed to find the optimal layout of the passive micromixers. Unlike topology optimization methods with an Eulerian description of the convection-diffusion dynamics, the proposed method considers the extreme case, where mixing is dominated completely by convection with negligible diffusion. In this method, the mixing dynamics is modeled by the mapping method, a Lagrangian description that can handle the convection-dominated case. Several numerical examples are presented to demonstrate the validity of the proposed method.

  8. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Yan Sun

    2015-09-01

    Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, optimizing from two viewpoints: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by obtaining its Pareto frontier. The Pareto frontier of the model can provide the multi-modal transportation operator (MTO) and customers with better decision support, and it is obtained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and Pareto optimality by using the mathematical programming software Lingo. Finally, a sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality perform well in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of variations in demand and supply on the multi-modal transportation organization. Therefore, this method can be further promoted in practice. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem, and a Pareto frontier based sensitivity analysis of demand and supply in the multi-modal transportation organization is performed on the designed case.
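
The Pareto frontier the abstract relies on is the set of non-dominated solutions: no other solution is at least as good in both objectives and strictly better in one. A minimal filter over illustrative (cost, time) route candidates (the numbers are hypothetical):

```python
def pareto_frontier(points):
    """Return the non-dominated (cost, time) points, sorted by cost."""
    def dominates(p, q):
        # p dominates q: no worse in both objectives, and p differs from q,
        # which (given both <=) implies strictly better in at least one.
        return p[0] <= q[0] and p[1] <= q[1] and p != q
    return sorted(p for p in points if not any(dominates(q, p) for q in points))

# Candidate routes as (total cost, total time) pairs -- illustrative numbers.
routes = [(100, 48), (120, 36), (150, 30), (130, 40), (160, 30)]
frontier = pareto_frontier(routes)
# (130, 40) is dominated by (120, 36); (160, 30) is dominated by (150, 30)
```

Each point on the frontier represents a different cost/time trade-off, which is exactly what such a frontier offers the MTO and its customers for decision support.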

  9. optimization of object tracking based on enhanced imperialist ...

    Damuut and Dogara

    A typical example is the Roman Empire which had influence or control over ... the Enhance Imperialist Competitive Algorithm (EICA) in optimizing the generated ... segment the video frame into a number of regions based on visual features like ...

  10. Optimizing ring-based CSR sources

    Byrd, J.M.; De Santis, S.; Hao, Z.; Martin, M.C.; Munson, D.V.; Li, D.; Nishimura, H.; Robin, D.S.; Sannibale, F.; Schlueter, R.D.; Schoenlein, R.; Jung, J.Y.; Venturini, M.; Wan, W.; Zholents, A.A.; Zolotorev, M.

    2004-01-01

    Coherent synchrotron radiation (CSR) is a fascinating phenomenon recently observed in electron storage rings and shows tremendous promise as a high power source of radiation at terahertz frequencies. However, because of the properties of the radiation and the electron beams needed to produce it, there are a number of interesting features of the storage ring that can be optimized for CSR. Furthermore, CSR has been observed in three distinct forms: as steady pulses from short bunches, bursts from growth of spontaneous modulations in high current bunches, and from micro modulations imposed on a bunch from laser slicing. These processes have their relative merits as sources and can be improved via the ring design. The terahertz (THz) and sub-THz region of the electromagnetic spectrum lies between the infrared and the microwave. This boundary region is beyond the normal reach of optical and electronic measurement techniques and sources associated with these better-known neighbors. Recent research has demonstrated a relatively high power source of THz radiation from electron storage rings: coherent synchrotron radiation (CSR). Besides offering high power, CSR enables broadband optical techniques to be extended to nearly the microwave region, and has inherently sub-picosecond pulses. As a result, new opportunities for scientific research and applications are enabled across a diverse array of disciplines: condensed matter physics, medicine, manufacturing, and space and defense industries. CSR will have a strong impact on THz imaging, spectroscopy, femtosecond dynamics, and driving novel non-linear processes. CSR is emitted by bunches of accelerated charged particles when the bunch length is shorter than the wavelength being emitted. When this criterion is met, all the particles emit in phase, and a single-cycle electromagnetic pulse results with an intensity proportional to the square of the number of particles in the bunch. It is this quadratic dependence that can

  11. Optimal portfolio model based on WVAR

    Hao, Tianyu

    2012-01-01

    This article focuses on using a new measurement of risk, Weighted Value at Risk (WVAR), to develop a new method of portfolio construction. Starting from the TVAR solving problem, and based on MATLAB software, the historical simulation method is used (avoiding the assumption that the income distribution is normal). Building on the results of previous studies, the U.S. Nasdaq composite index is studied, combining the Simpson formula for the solution of TVAR and its further study; then, through the representation of WVAR for...

  12. Reliability-Based Structural Optimization of Wave Energy Converters

    Ambühl, Simon; Kramer, Morten; Sørensen, John Dalsgaard

    2014-01-01

    More and more wave energy converter (WEC) concepts are reaching prototype level. Once the prototype level is reached, the next step in order to further decrease the levelized cost of energy (LCOE) is optimizing the overall system with a focus on structural and maintenance (inspection) costs, as well as on the harvested power from the waves. The target of a fully-developed WEC technology is not maximizing its power output, but minimizing the resulting LCOE. This paper presents a methodology to optimize the structural design of WECs based on a reliability-based optimization problem...

  13. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Guo-Qiang Zeng

    2014-01-01

    Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, the applications of EO in continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to power-law probability distribution, generation of new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with the dimension N=30 have shown that IRPEO is competitive or even better than the recently reported various genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO to other evolutionary algorithms such as original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
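
    The key operations listed above can be sketched in a few lines. The following is a minimal, illustrative Python version of such a population-based EO loop (power-law selection biased toward poorly ranked individuals, uniform random mutation of one component, unconditional acceptance, with the best-so-far solution tracked separately); the function names, test function, and parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def sphere(x):
    # simple benchmark objective; not one of the paper's 10 test functions
    return float(np.sum(x**2))

def irpeo_sketch(f, dim=5, pop_size=20, bounds=(-5.0, 5.0), tau=1.5,
                 iters=2000, seed=0):
    """Illustrative population-based EO loop; parameters are assumptions."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, (pop_size, dim))   # real-coded random population
    best = min(pop, key=f).copy()
    best_val = f(best)
    # power-law probability over ranks, rank 1 = worst individual
    ranks = np.arange(1, pop_size + 1, dtype=float)
    probs = ranks**(-tau)
    probs /= probs.sum()
    for _ in range(iters):
        fitness = np.array([f(ind) for ind in pop])
        worst_first = np.argsort(-fitness)       # sort from worst to best
        k = worst_first[rng.choice(pop_size, p=probs)]
        # uniform random mutation of one component of the selected "bad"
        # individual; the mutated population is accepted unconditionally
        j = rng.integers(dim)
        pop[k, j] = rng.uniform(lo, hi)
        v = f(pop[k])
        if v < best_val:                         # track best-so-far solution
            best, best_val = pop[k].copy(), v
    return best, best_val

best, val = irpeo_sketch(sphere)
```

    The unconditional-acceptance step is what distinguishes EO from greedy local search: the population is free to move uphill, while the best configuration ever visited is reported.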

  14. Optimal perturbations for nonlinear systems using graph-based optimal transport

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
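
    The core building block above, Monge-Kantorovich optimal transport between two discrete measures with quadratic cost, can be posed directly as a linear program. The sketch below uses plain SciPy `linprog` rather than the paper's graph-based pseudo-time flow; the supports and masses are made-up illustrative data.

```python
import numpy as np
from scipy.optimize import linprog

# discrete source and target measures on the line (illustrative data)
x = np.array([0.0, 1.0, 2.0])        # source support points
y = np.array([0.5, 1.5])             # target support points
mu = np.array([0.25, 0.5, 0.25])     # source masses (sum to 1)
nu = np.array([0.5, 0.5])            # target masses (sum to 1)

# quadratic transport cost c_ij = (x_i - y_j)^2
C = (x[:, None] - y[None, :])**2
m, n = C.shape

# marginal constraints on the transport plan: row sums = mu, column sums = nu
A_eq = []
for i in range(m):                    # row-marginal (source) constraints
    row = np.zeros(m * n); row[i*n:(i+1)*n] = 1.0; A_eq.append(row)
for j in range(n):                    # column-marginal (target) constraints
    col = np.zeros(m * n); col[j::n] = 1.0; A_eq.append(col)
b_eq = np.concatenate([mu, nu])

# minimize total quadratic cost over nonnegative plans
res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq,
              bounds=(0, None), method="highs")
plan = res.x.reshape(m, n)           # optimal transport plan
```

    For quadratic cost on the line the optimal plan is the monotone one, which here moves four chunks of mass 0.25 a distance of 0.5 each.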

  15. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near surface geophysical applications multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local search optimization methods used to find an optimal model in the vicinity of a user-given starting model. The final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently from the starting model. Additionally, they can be used to find sets of optimal models allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching food in a search space. In PSO, a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e. the quality of the currently found model by a particle, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is not possible. Instead, only statements about the Pareto
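
    The swarm dynamics described above, with each particle tracking its own best position and the position of the current swarm leader, reduce to a short update loop. Below is a minimal global-best PSO sketch on a toy objective; the inertia and acceleration coefficients are common illustrative defaults, not values from this study.

```python
import numpy as np

def pso_sketch(f, dim=2, n_particles=30, iters=200, bounds=(-5.0, 5.0),
               w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal global-best PSO; all parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                         # each particle's best position
    pbest_val = np.array([f(p) for p in pos])
    leader = pbest[pbest_val.argmin()].copy()  # currently most successful particle
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # particles direct their movement toward their own best position
        # and toward the position of the swarm leader
        vel = w*vel + c1*r1*(pbest - pos) + c2*r2*(leader - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        leader = pbest[pbest_val.argmin()].copy()
    return leader, float(pbest_val.min())

best, val = pso_sketch(lambda x: float(np.sum(x**2)))
```

    A joint inversion of disparate data sets would replace the scalar objective with several misfit functions, which is exactly where the unique-leader problem discussed above arises.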

  16. Ordinance on the Implementation of Atomic Safety and Radiation Protection

    1984-01-01

    In execution of the new Atomic Energy Act the Ordinance on the Implementation of Atomic Safety and Radiation Protection was put into force on 1 February 1985. It takes into account all forms of peaceful nuclear energy and ionizing radiation uses in nuclear installations, irradiation facilities and devices in research, industries, and health services, and in radioactive isotope production and laboratories. It covers all aspects of safety and protection and defines atomic safety as nuclear safety and nuclear safeguards and physical protection of nuclear materials and facilities, whereas radiation protection includes the total of requirements, measures, means and methods necessary to protect man and the environment from the detrimental effects of ionizing radiation. It has been based on ICRP Recommendation No. 26 and the IAEA's Basic Safety Standards and supersedes the Radiation Protection Ordinance of 1969

  17. Parameters Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    Zuhdi, Shaifudin; Retno Sari Saputro, Dewi; Widyaningsih, Purnami

    2017-06-01

    A regression model represents the relationship between independent and dependent variables. In the logistic regression model, the dependent variable is categorical and the model is used to calculate odds. When the dependent variable has ordered levels, the logistic regression model is ordinal. The GWOLR model is an ordinal logistic regression model influenced by the geographical location of the observation site. Parameter estimation in the model is needed to determine the value of a population based on a sample. The purpose of this research is to estimate the parameters of the GWOLR model using R software. The parameter estimation uses data on the number of dengue fever patients in Semarang City. The observation units are 144 villages in Semarang City. The results of the research give a local GWOLR model for each village and the probability of each category of the number of dengue fever patients.

  18. Optimal policy for value-based decision-making.

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on the a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.
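
    A drift diffusion model whose decision boundaries collapse over time, as discussed above, is straightforward to simulate. In this sketch the hyperbolic collapse shape and all parameter values are illustrative assumptions, not the paper's derived optimal policy.

```python
import numpy as np

def ddm_trial(drift, noise=1.0, dt=1e-3, b0=1.0, tau=2.0, t_max=5.0, rng=None):
    """One trial of a drift diffusion model with a collapsing boundary.
    The hyperbolic collapse b0/(1 + t/tau) is an illustrative choice."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = 0.0, 0.0
    while t < t_max:
        # accumulate noisy evidence
        x += drift*dt + noise*np.sqrt(dt)*rng.standard_normal()
        t += dt
        bound = b0 / (1.0 + t/tau)           # collapsing decision boundary
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t   # choice and decision time
    return 0, t_max                          # no decision within t_max

rng = np.random.default_rng(0)
trials = [ddm_trial(1.0, rng=rng) for _ in range(200)]
accuracy = np.mean([c == 1 for c, _ in trials])
```

    With a positive drift (the option with higher value), most trials terminate at the upper boundary, and the collapsing bound forces late, uncertain trials to a decision rather than letting them run on.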

  19. The new German radiation protection ordinance

    Pfeffer, W.; Weimer, G.

    2003-01-01

    According to European law, the Basic Safety Standards (BSS) published by the European Council in 1996 and the Council Directive on health protection of individuals against dangers of ionising radiation in relation to medical exposure had to be transferred into national law within due time. In 2001 the new Ordinance for the Implementation of the Euratom Guidelines on Radiation Protection was published, which replaces the old Radiation Protection Ordinance. The new German Ordinance adapts the European Directive to German law, covering the general principles but even giving more details in many fields of radiation protection. The BSS scope certainly is much broader than the prescriptions important for the field of radiation protection in nuclear power plants. According to the scope of this workshop on occupational exposure in nuclear power plants - and as the BSS most probably will be quite familiar to all of you - after a short general overview on relevant contents of the German Ordinance, this presentation will focus on the main issues important in the operation of NPP and especially on some areas which may give rise to necessary changes caused by the new Ordinance. (A.L.B.)

  20. Segment-based dose optimization using a genetic algorithm

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than that of beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the position of the left-bank leaves of each segment, the second for the position of the right-bank and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning

  1. Modified Chaos Particle Swarm Optimization-Based Optimized Operation Model for Stand-Alone CCHP Microgrid

    Fei Wang

    2017-07-01

    Full Text Available The optimized dispatch of different distributed generations (DGs) in stand-alone microgrid (MG) is of great significance to the operation’s reliability and economy, especially for energy crisis and environmental pollution. Based on controllable load (CL) and combined cooling-heating-power (CCHP) model of micro-gas turbine (MT), a multi-objective optimization model with relevant constraints to optimize the generation cost, load cut compensation and environmental benefit is proposed in this paper. The MG studied in this paper consists of photovoltaic (PV), wind turbine (WT), fuel cell (FC), diesel engine (DE), MT and energy storage (ES). Four typical scenarios were designed according to different day types (work day or weekend) and weather conditions (sunny or rainy) in view of the uncertainty of renewable energy in variable situations and load fluctuation. A modified dispatch strategy for CCHP is presented to further improve the operation economy without reducing the consumers’ comfort feeling. Chaotic optimization and elite retention strategy are introduced into basic particle swarm optimization (PSO) to propose modified chaos particle swarm optimization (MCPSO), whose search capability and convergence speed are improved greatly. Simulation results validate the correctness of the proposed model and the effectiveness of MCPSO algorithm in the optimized operation application of stand-alone MG.

  2. GENETIC ALGORITHM BASED CONCEPT DESIGN TO OPTIMIZE NETWORK LOAD BALANCE

    Ashish Jain

    2012-07-01

    Full Text Available Multi-constraint optimal network load balancing is an NP-hard problem and an important part of traffic engineering. In this research we balance the network load using classical methods: a brute force approach and dynamic programming. The results show the limitation of these methods: the optimization of balanced network load with an increased number of nodes and demands is intractable using the classical methods because the solution set increases exponentially. In such cases, optimization techniques such as evolutionary techniques can be employed for optimizing network load balance. In this paper the proposed classical algorithm is analyzed, and an evolutionary genetic approach is devised and proposed for optimizing the balanced network load.

  3. Interactive Reliability-Based Optimization of Structural Systems

    Pedersen, Claus

    In order to introduce the basic concepts within the field of reliability-based structural optimization problems, this chapter is devoted to a brief outline of the basic theories. Therefore, this chapter is of a more formal nature and used as a basis for the remaining parts of the thesis. In section 2.2 a general non-linear optimization problem and corresponding terminology are presented whereupon optimality conditions and the standard form of an iterative optimization algorithm are outlined. Subsequently, the special properties and characteristics concerning structural optimization problems are treated in section 2.3. With respect to the reliability evaluation, the basic theory behind a reliability analysis and estimation of probability of failure by the First-Order Reliability Method (FORM) and the iterative Rackwitz-Fiessler (RF) algorithm are considered in section 2.5 in which...

  4. Robust optimization-based DC optimal power flow for managing wind generation uncertainty

    Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn

    2012-11-01

    Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Primary test results show that the proposed DCOPF model can provide lower dispatch cost than the SNP approach.

  5. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems

    Vivek Patel

    2012-08-01

    Full Text Available Nature-inspired population-based algorithms constitute a research field which simulates different natural phenomena to solve a wide range of problems. Researchers have proposed several algorithms considering different natural phenomena. Teaching-learning-based optimization (TLBO) is one of the recently proposed population-based algorithms, which simulates the teaching-learning process of the classroom. This algorithm does not require any algorithm-specific control parameters. In this paper, the elitism concept is introduced into the TLBO algorithm and its effect on the performance of the algorithm is investigated. The effects of common controlling parameters such as the population size and the number of generations on the performance of the algorithm are also investigated. The proposed algorithm is tested on 35 constrained benchmark functions with different characteristics and the performance of the algorithm is compared with that of other well-known optimization algorithms. The proposed algorithm can be applied to various optimization problems of the industrial environment.
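
    The teaching-learning process that TLBO simulates can be sketched compactly. The following minimal Python version follows the commonly published formulation (a teacher phase that moves learners toward the best solution, then a learner phase of pairwise interactions, with greedy acceptance); it is a plain TLBO sketch, not the elitist variant proposed in the paper, and the test function and sizes are illustrative.

```python
import numpy as np

def tlbo_sketch(f, dim=5, pop=20, iters=100, bounds=(-10.0, 10.0), seed=2):
    """Minimal TLBO loop. TLBO needs no algorithm-specific control
    parameters, only the population size and number of generations."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))
    vals = np.array([f(x) for x in X])
    for _ in range(iters):
        # teacher phase: shift learners toward the best solution (the teacher)
        teacher = X[vals.argmin()]
        TF = int(rng.integers(1, 3))            # teaching factor in {1, 2}
        mean = X.mean(axis=0)
        Xnew = np.clip(X + rng.random((pop, dim))*(teacher - TF*mean), lo, hi)
        newvals = np.array([f(x) for x in Xnew])
        better = newvals < vals
        X[better], vals[better] = Xnew[better], newvals[better]
        # learner phase: each learner interacts with a random peer
        for i in range(pop):
            j = int(rng.integers(pop))
            if j == i:
                continue
            step = X[i] - X[j] if vals[i] < vals[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(dim)*step, lo, hi)
            cv = f(cand)
            if cv < vals[i]:                    # greedy acceptance
                X[i], vals[i] = cand, cv
    return X[vals.argmin()], float(vals.min())

best, val = tlbo_sketch(lambda x: float(np.sum(x**2)))
```

    The elitism studied in the paper would additionally copy the best solutions over the worst ones at the end of each generation.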

  6. Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm

    Chaudhary, Kailash; Chaudhary, Himanshu

    2015-01-01

    In this paper, a two-stage optimization technique is presented for optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibrations and noise in the engine and to improve the vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved by finding the optimum mass distribution of the crank and connecting rod using the equimomental system of point-masses in the first stage of the optimization. In the second stage, their shapes are synthesized systematically by a closed parametric curve, i.e., a cubic B-spline curve, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem to minimize both the shaking force and the shaking moment is solved using the Teaching-learning-based optimization algorithm (TLBO) and its computational performance is compared with that of a Genetic algorithm (GA).

  7. Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm

    Chaudhary, Kailash; Chaudhary, Himanshu [Malaviya National Institute of Technology, Jaipur (India)]

    2015-11-15

    In this paper, a two-stage optimization technique is presented for optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibrations and noise in the engine and to improve the vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved by finding the optimum mass distribution of the crank and connecting rod using the equimomental system of point-masses in the first stage of the optimization. In the second stage, their shapes are synthesized systematically by a closed parametric curve, i.e., a cubic B-spline curve, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem to minimize both the shaking force and the shaking moment is solved using the Teaching-learning-based optimization algorithm (TLBO) and its computational performance is compared with that of a Genetic algorithm (GA).

  8. Sputtering calculations with the discrete ordinates method

    Hoffman, T.J.; Dodds, H.L. Jr.; Robinson, M.T.; Holmes, D.K.

    1977-01-01

    The purpose of this work is to investigate the applicability of the discrete ordinates (SN) method to light ion sputtering problems. In particular, the neutral particle discrete ordinates computer code, ANISN, was used to calculate sputtering yields. No modifications to this code were necessary to treat charged particle transport. However, a cross section processing code was written for the generation of multigroup cross sections; these cross sections include a modification to the total macroscopic cross section to account for electronic interactions and small-scattering-angle elastic interactions. The discrete ordinates approach enables calculation of the sputtering yield as functions of incident energy and angle and of many related quantities such as ion reflection coefficients, angular and energy distributions of sputtering particles, the behavior of beams penetrating thin foils, etc. The results of several sputtering problems as calculated with ANISN are presented

  9. Acceleration techniques for the discrete ordinate method

    Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego; Trautmann, Thomas

    2013-01-01

    In this paper we analyze several acceleration techniques for the discrete ordinate method with matrix exponential and the small-angle modification of the radiative transfer equation. These techniques include the left eigenvectors matrix approach for computing the inverse of the right eigenvectors matrix, the telescoping technique, and the method of false discrete ordinate. The numerical simulations have shown that on average, the relative speedups of the left eigenvector matrix approach and the telescoping technique are about 15% and 30%, respectively. -- Highlights: ► We presented the left eigenvector matrix approach. ► We analyzed the method of false discrete ordinate. ► The telescoping technique is applied to the matrix operator method. ► The considered techniques accelerate the computations by 20% on average.

  10. Benchmarking the cad-based attila discrete ordinates code with experimental data of fusion experiments and to the results of MCNP code in simulating ITER

    Youssef, M. Z.

    2007-01-01

    Attila is a newly developed finite element code based on Sn neutron, gamma, and charged particle transport in 3-D geometry in which unstructured tetrahedral meshes are generated to describe complex geometry that is based on CAD input (Solid Works, Pro/Engineer, etc). In the present work we benchmark its calculation accuracy by comparing its prediction to the measured data inside two experimental mock-ups bombarded with 14 MeV neutrons. The results are also compared to those based on MCNP calculations. The experimental mock-ups simulate parts of the International Thermonuclear Experimental Reactor (ITER) in-vessel components, namely: (1) the Tungsten mockup configuration (54.3 cm x 46.8 cm x 45 cm), and (2) the ITER shielding blanket followed by the SCM region (simulated by alternating layers of SS316 and copper). In the latter configuration, a high aspect ratio rectangular streaming channel was introduced (to simulate streaming paths between ITER blanket modules) which ends with a rectangular cavity. The experiments on these two fusion-oriented integral experiments were performed at the Fusion Neutron Generator (FNG) facility, Frascati, Italy. In addition, the nuclear performance of the ITER MCNP 'Benchmark' CAD model has been evaluated with Attila to compare its results to those obtained with the CAD-based MCNP approach developed by several ITER participants. The objective of this paper is to compare results based on two distinctive 3-D calculation tools using the same nuclear data, FENDL2.1, and the same response functions of several reaction rates measured in ITER mock-ups and to enhance confidence from the international neutronics community in the Attila code and how it can precisely quantify the nuclear field in large and complex systems, such as ITER. Attila has the advantage of providing a full flux mapping visualization everywhere in one run where components subjected to excessive radiation level and strong streaming paths can be identified. In addition, the

  11. Optimization of a Fuzzy-Logic-Control-Based MPPT Algorithm Using the Particle Swarm Optimization Technique

    Po-Chen Cheng

    2015-06-01

    Full Text Available In this paper, an asymmetrical fuzzy-logic-control (FLC)-based maximum power point tracking (MPPT) algorithm for photovoltaic (PV) systems is presented. Two membership function (MF) design methodologies that can improve the effectiveness of the proposed asymmetrical FLC-based MPPT methods are then proposed. The first method can quickly determine the input MF setting values via the power–voltage (P–V) curve of solar cells under standard test conditions (STC). The second method uses the particle swarm optimization (PSO) technique to optimize the input MF setting values. Because the PSO approach must target and optimize a cost function, a cost function design methodology that meets the performance requirements of practical photovoltaic generation systems (PGSs) is also proposed. According to the simulated and experimental results, the proposed asymmetrical FLC-based MPPT method has the highest fitness value, therefore, it can successfully address the tracking speed/tracking accuracy dilemma compared with the traditional perturb and observe (P&O) and symmetrical FLC-based MPPT algorithms. Compared to the conventional FLC-based MPPT method, the obtained optimal asymmetrical FLC-based MPPT can improve the transient time and the MPPT tracking accuracy by 25.8% and 0.98% under STC, respectively.

  12. Evaluation of a new preconditioning algorithm based on the 3-D even-parity simplified SN equations for discrete ordinates in parallel environments

    Longoni, G.; Haghighat, A.; Sjoden, G.

    2005-01-01

    This paper discusses a new preconditioned Sn algorithm referred to as FAST (Flux Acceleration Sn Transport). This algorithm uses the PENSSn code as the preconditioner, and the PENTRAN-SSn code system as the transport solver. PENSSn is developed based on the even-parity simplified Sn formulation in a parallel environment, and PENTRAN-SSn is a version of PENTRAN that uses PENSSn as the preconditioner with the FAST system. The paper briefly discusses the EP-SSn formulation and important numerical features of PENSSn. The FAST algorithm is discussed and tested for the C5G7 MOX eigenvalue benchmark problem. It is demonstrated that FAST leads to significant speedups (∼7) over the standard PENTRAN code. Moreover, FAST shows closer agreement with a reference Monte Carlo simulation. (authors)

  13. Optimal Dispatching of Active Distribution Networks Based on Load Equilibrium

    Xiao Han

    2017-12-01

    Full Text Available This paper focuses on the optimal intraday scheduling of a distribution system that includes renewable energy (RE) generation, energy storage systems (ESSs), and thermostatically controlled loads (TCLs). This system also provides time-of-use pricing to customers. Unlike previous studies, this study attempts to examine how to optimize the allocation of electric energy and to improve the equilibrium of the load curve. Accordingly, we propose a concept of load equilibrium entropy to quantify the overall equilibrium of the load curve and reflect the allocation optimization of electric energy. Based on this entropy, we built a novel multi-objective optimal dispatching model to minimize the operational cost and maximize the load curve equilibrium. To aggregate TCLs into the optimization objective, we introduced the concept of a virtual power plant (VPP) and proposed a calculation method for VPP operating characteristics based on the equivalent thermal parameter model and the state-queue control method. The Particle Swarm Optimization algorithm was employed to solve the optimization problems. The simulation results illustrated that the proposed dispatching model can achieve cost reductions of system operations, peak load curtailment, and efficiency improvements, and also verified that the load equilibrium entropy can be used as a novel index of load characteristics.
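
    One plausible way to quantify "load equilibrium" with an entropy, in the spirit of the abstract above, is a normalized Shannon entropy of the per-period energy shares: a perfectly flat load curve scores 1 and a peaky curve scores lower. This exact form is an assumption for illustration; the paper's definition of load equilibrium entropy may differ.

```python
import numpy as np

def load_equilibrium_entropy(load):
    """Normalized Shannon-style entropy of a load curve: higher means a
    flatter (more equilibrated) allocation of energy across periods.
    Illustrative form, not necessarily the paper's exact definition."""
    p = np.asarray(load, dtype=float)
    p = p / p.sum()                  # share of total energy in each period
    p = p[p > 0]                     # 0*log(0) is taken as 0
    return float(-(p * np.log(p)).sum() / np.log(len(load)))  # in [0, 1]

flat = load_equilibrium_entropy([1.0] * 24)         # perfectly flat daily curve
peaky = load_equilibrium_entropy([1.0] * 23 + [50.0])  # one heavy evening peak
```

    An optimal dispatch that shifts TCL and ESS energy away from the peak would raise this index toward 1 while the cost objective is minimized.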

  14. Orthogonal Analysis Based Performance Optimization for Vertical Axis Wind Turbine

    Lei Song

    2016-01-01

    Full Text Available Geometrical shape of a vertical axis wind turbine (VAWT) is composed of multiple structural parameters. Since there are interactions among the structural parameters, traditional research approaches, which usually focus on one parameter at a time, cannot obtain performance of the wind turbine accurately. In order to exploit overall effect of a novel VAWT, we firstly use a single parameter optimization method to obtain optimal values of the structural parameters, respectively, by Computational Fluid Dynamics (CFD) method; based on the results, we then use an orthogonal analysis method to investigate the influence of interactions of the structural parameters on performance of the wind turbine and to obtain optimization combination of the structural parameters considering the interactions. Results of analysis of variance indicate that interactions among the structural parameters have influence on performance of the wind turbine, and optimization results based on orthogonal analysis have higher wind energy utilization than that of traditional research approaches.

  15. Optimization-based topology identification of complex networks

    Tang Sheng-Xue; Chen Li; He Yi-Gang

    2011-01-01

    In many cases, the topological structures of a complex network are unknown or uncertain, and it is of significance to identify the exact topological structure. An optimization-based method of identifying the topological structure of a complex network is proposed in this paper. Identification of the exact network topological structure is converted into a minimal optimization problem by using the estimated network. Then, an improved quantum-behaved particle swarm optimization algorithm is used to solve the optimization problem. Compared with the previous adaptive synchronization-based method, the proposed method is simple and effective, and is particularly well suited to identifying the topological structure of synchronization complex networks. In some cases where the states of a complex network are only partially observable, the exact topological structure of a network can also be identified by using the proposed method. Finally, numerical simulations are provided to show the effectiveness of the proposed method. (general)

  16. Length scale and manufacturability in density-based topology optimization

    Lazarov, Boyan Stefanov; Wang, Fengwen; Sigmund, Ole

    2016-01-01

    Since its original introduction in structural design, density-based topology optimization has been applied to a number of other fields such as microelectromechanical systems, photonics, acoustics and fluid mechanics. The methodology has been well accepted in industrial design processes, where it can provide competitive designs in terms of cost, materials and functionality under a wide set of constraints. However, the optimized topologies are often considered conceptual due to loosely defined topologies and the need for postprocessing. Subsequent amendments can affect the optimized design...

  17. Cooperative Game Study of Airlines Based on Flight Frequency Optimization

    Wanming Liu

    2014-01-01

    Full Text Available By applying game theory, the relationship between airline ticket price and optimal flight frequency is analyzed. The paper establishes the payoff matrix of flight frequency in the noncooperation scenario and a flight frequency optimization model in the cooperation scenario. The airline alliance profit distribution is converted into a profit distribution game based on cooperative game theory. The profit distribution game is proved to be convex, and there exists an optimal distribution strategy. The results show that joining the airline alliance can increase the airlines' overall profit, that changes in negotiated prices and costs benefit the profit distribution of large airlines, and that the distribution result is in accordance with aviation development.
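For a convex profit-distribution game like the one described in this abstract, the Shapley value is a standard allocation rule that is guaranteed to lie in the core. The sketch below uses a hypothetical three-airline characteristic function for illustration; the numbers are not taken from the paper.

```python
from itertools import permutations

def shapley_value(players, v):
    """Shapley value of a cooperative game with characteristic function v."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)  # marginal contribution
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in players}

# Hypothetical alliance profits; the game is convex (marginal contributions
# grow with coalition size), so the Shapley allocation lies in the core.
profits = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 12, frozenset('C'): 8,
           frozenset('AB'): 30, frozenset('AC'): 24, frozenset('BC'): 26,
           frozenset('ABC'): 50}
alloc = shapley_value(['A', 'B', 'C'], lambda s: profits[frozenset(s)])
print(alloc)  # allocations sum to the grand-coalition profit of 50
```

Because the game is convex, this allocation leaves no coalition with an incentive to break away from the alliance.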

  18. Genetic-evolution-based optimization methods for engineering design

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, to engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. The design of a two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
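The reproduction, crossover, and mutation loop described in the abstract can be sketched on a toy bitstring objective; the objective and all parameter values below are illustrative stand-ins, not the paper's structural problems.

```python
import random

random.seed(0)

def genetic_algorithm(fitness, n_bits=16, pop_size=30, generations=60,
                      p_cross=0.9, p_mut=0.02):
    """Minimal GA: tournament reproduction, one-point crossover, bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # reproduction: tournament selection of parents
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for i in range(0, pop_size, 2):
            a, b = parents[i][:], parents[(i + 1) % pop_size][:]
            if random.random() < p_cross:        # one-point crossover
                cut = random.randint(1, n_bits - 1)
                a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):                 # bit-flip mutation
                for j in range(n_bits):
                    if random.random() < p_mut:
                        child[j] ^= 1
                children.append(child)
        pop = children[:pop_size]
        best = max(pop + [best], key=fitness)    # keep track of the best so far
    return best

# Toy objective: maximize the number of ones in the bitstring ("OneMax").
best = genetic_algorithm(fitness=sum)
print(sum(best))
```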

  19. Maximum length scale in density based topology optimization

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing a maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low-pass filter applied to the design parametrization. The main idea...

  20. Augmenting Ordinal Methods of Attribute Weight Approximation

    Danielson, Mats; Ekenberg, Love; He, Ying

    2014-01-01

    of the obstacles and methods for introducing so-called surrogate weights have proliferated in the form of ordinal ranking methods for criteria weights. Considering the decision quality, one main problem is that the input information allowed in ordinal methods is sometimes too restricted. At the same time, decision makers often possess more background information, for example, regarding the relative strengths of the criteria, and might want to use that. We propose combined methods for facilitating the elicitation process and show how this provides a way to use partial information from the strength of preference...
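One widely used family of surrogate weights derived from a pure criteria ranking is the rank-order centroid (ROC); the sketch below is background for the ordinal methods this abstract discusses, not the authors' combined method, which additionally uses strength-of-preference information.

```python
def roc_weights(n):
    """Rank-order centroid weights for n criteria ranked 1 (most important) to n."""
    return [sum(1.0 / j for j in range(i, n + 1)) / n for i in range(1, n + 1)]

w = roc_weights(3)
print([round(x, 4) for x in w])  # [0.6111, 0.2778, 0.1111]; weights sum to 1
```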

  1. Multiband discrete ordinates method: formalism and results

    Luneville, L.

    1998-06-01

    The multigroup discrete ordinates method is a classical way to solve the transport (Boltzmann) equation for neutral particles. Self-shielding effects are not correctly treated due to large variations of cross sections within a group (in the resonance range). To treat the resonance domain, the multiband method is introduced. The main idea is to divide the cross-section domain into bands. We obtain the multiband parameters using the moment method; the code CALENDF provides probability tables for these parameters. We present our implementation in an existing discrete ordinates code, SN1D. We study deep penetration benchmarks and show the improvement of the method in the treatment of self-shielding effects. (author)

  2. Compliance to two city convenience store ordinance requirements

    Menéndez, Cammie K Chaumont; Amandus, Harlan E; Wu, Nan; Hendricks, Scott A

    2015-01-01

    Background Robbery-related homicides and assaults are the leading cause of death in retail businesses. Robbery reduction approaches focus on compliance with Crime Prevention Through Environmental Design (CPTED) guidelines. Purpose We evaluated the level of compliance with CPTED guidelines specified by convenience store safety ordinances effective in 2010 in Dallas and Houston, Texas, USA. Methods Convenience stores were defined as businesses less than 10 000 square feet that sell grocery items. Store managers were interviewed about store ordinance requirements from August to November 2011, in a random sample of 594 (289 in Dallas, 305 in Houston) convenience stores that were open before and after the effective dates of their city’s ordinance. Data were collected in 2011 and analysed in 2012–2014. Results Overall, 9% of stores were in full compliance, although 79% reported being registered with the police departments as compliant. Compliance was consistently significantly higher in Dallas than in Houston for many requirements and by store type. Compliance was lower among single owner-operator stores compared with corporate/franchise stores. Compliance with individual requirements was lowest for signage and visibility. Conclusions Full compliance with the required safety measures is consistent with industry ‘best practices’ and evidence-based workplace violence prevention research findings. In Houston and Dallas, compliance was higher for some CPTED requirements but not for the less costly approaches, which are also the more straightforward to adopt. PMID:26337569

  3. An opinion formation based binary optimization approach for feature selection

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed technique mimics human interaction mechanisms based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact through an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets show that the proposed algorithm outperforms the others.

  4. Discounted cost model for condition-based maintenance optimization

    Weide, J.A.M. van der; Pandey, M.D.; Noortwijk, J.M. van

    2010-01-01

    This paper presents methods to evaluate the reliability and optimize the maintenance of engineering systems that are damaged by shocks or transients arriving randomly in time, with overall degradation modeled as a cumulative stochastic point process. The paper presents a conceptually clear and comprehensive derivation of formulas for computing the discounted cost associated with a maintenance policy combining both condition-based and age-based criteria for preventive maintenance. The proposed discounted cost model provides a more realistic basis for optimizing maintenance policies than those based on the asymptotic, non-discounted cost rate criterion.
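A minimal Monte Carlo sketch of a discounted-cost criterion for an age-based replacement policy; the Weibull lifetime model, cost figures, and discount rate below are illustrative assumptions, and the paper itself derives analytical (not simulated) formulas for a combined condition- and age-based policy.

```python
import math
import random

random.seed(1)

def discounted_cost(T, horizon=50.0, rate=0.05, c_p=1.0, c_f=5.0,
                    scale=4.0, shape=2.0, n_runs=5000):
    """Monte Carlo estimate of the expected discounted cost of an age-based
    policy: replace preventively at age T (cost c_p) or on failure (cost c_f).
    Lifetimes follow an illustrative Weibull degradation model."""
    total = 0.0
    for _ in range(n_runs):
        t, cost = 0.0, 0.0
        while t < horizon:
            life = random.weibullvariate(scale, shape)
            if life < T:                      # corrective replacement at failure
                t += life
                cost += c_f * math.exp(-rate * t)
            else:                             # preventive replacement at age T
                t += T
                cost += c_p * math.exp(-rate * t)
        total += cost
    return total / n_runs

# Discounting weights early expenditures more heavily; compare two policies.
c_early, c_late = discounted_cost(T=2.0), discounted_cost(T=6.0)
print(c_early, c_late)
```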

  5. Cancer Classification Based on Support Vector Machine Optimized by Particle Swarm Optimization and Artificial Bee Colony.

    Gao, Lingyun; Ye, Mingquan; Wu, Changrong

    2017-11-29

    Intelligent optimization algorithms have advantages in dealing with complex nonlinear problems accompanied by good flexibility and adaptability. In this paper, the FCBF (Fast Correlation-Based Feature selection) method is used to filter irrelevant and redundant features in order to improve the quality of cancer classification. Then, we perform classification based on SVM (Support Vector Machine) optimized by PSO (Particle Swarm Optimization) combined with ABC (Artificial Bee Colony) approaches, which is represented as PA-SVM. The proposed PA-SVM method is applied to nine cancer datasets, including five datasets of outcome prediction and a protein dataset of ovarian cancer. By comparison with other classification methods, the results demonstrate the effectiveness and the robustness of the proposed PA-SVM method in handling various types of data for cancer classification.

  6. Gradient-based methods for production optimization of oil reservoirs

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic of this thesis. The emphasis has been on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output-constraint problems. These kinds of constraints are natural in production optimization; limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraint gradients (Jacobian), which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. To speed up the convergence rate of the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is on surrogate optimization for water flooding in the presence of the output constraints. Two kinds of model order reduction techniques are applied to build surrogate models: proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM).
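The Hessian-times-vector strategy mentioned in the abstract can be illustrated with a conjugate-gradient solve of the Newton system that never forms the Hessian explicitly; the quadratic test problem below is illustrative, not a reservoir model.

```python
import numpy as np

def newton_step_cg(hessvec, grad, tol=1e-10, max_iter=50):
    """Solve H p = -g with conjugate gradients, using only Hessian-times-vector
    products so the Hessian itself is never assembled."""
    p = np.zeros_like(grad)
    r = -grad - hessvec(p)        # residual of H p + g = 0
    d = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Hd = hessvec(d)
        alpha = rs / (d @ Hd)
        p += alpha * d
        r -= alpha * Hd
        rs_new = r @ r
        if rs_new < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return p

# Illustrative quadratic f(x) = 0.5 x^T A x - b^T x with SPD A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x0 = np.zeros(2)
step = newton_step_cg(lambda v: A @ v, A @ x0 - b)
print(step)  # the exact Newton step A^{-1} b for this quadratic
```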

  7. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic search algorithms for global optimization which in various publications are called behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc. We use the last term. Experience in using population algorithms to solve challenging global optimization problems shows that the application of a single such algorithm may not always be effective. Therefore great attention is now paid to the hybridization of population algorithms of global optimization. Hybrid algorithms unite various algorithms, or identical algorithms with different values of their free parameters, so that the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid algorithm of global optimization based on the known algorithms of harmony search (HS) and particle swarm optimization (PSO), software implementation of the algorithm, and a study of its efficiency on a number of known benchmark problems and a problem of dimensional optimization of a truss structure. We set the problem of global optimization, consider the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present results of computing experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.
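As background for the PSO component of the hybrid, a canonical PSO sketch on a toy objective; the PSO-HS hybrid of the abstract additionally maintains a harmony memory, which is omitted here, and all parameter values are illustrative.

```python
import random

random.seed(0)

def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Canonical particle swarm optimization (minimization)."""
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in x]                 # personal best positions
    gbest = min(pbest, key=f)                 # global best position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d] + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] = min(hi, max(lo, x[i][d] + v[i][d]))
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
        gbest = min(pbest, key=f)
    return gbest

sphere = lambda p: sum(t * t for t in p)
best = pso(sphere)
print(sphere(best))  # close to the global minimum 0
```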

  8. Multi-Objective Optimization of a Hybrid ESS Based on Optimal Energy Management Strategy for LHDs

    Jiajun Liu

    2017-10-01

    Full Text Available Energy storage systems (ESS) play an important role in the performance of mining vehicles. A hybrid ESS combining both batteries (BTs) and supercapacitors (SCs) is one of the most promising solutions. As a case study, this paper discusses the optimal hybrid ESS sizing and energy management strategy (EMS) of 14-ton underground load-haul-dump vehicles (LHDs). Three novel contributions are added to the relevant literature. First, a multi-objective optimization is formulated regarding energy consumption and the total cost of a hybrid ESS, which are the key factors of LHDs, and a battery capacity degradation model is used. During the process, a dynamic programming (DP)-based EMS is employed to obtain the optimal energy consumption and hybrid ESS power profiles. Second, a 10-year life-cycle cost model of a hybrid ESS for LHDs is established to calculate the total cost, including capital cost, operating cost, and replacement cost. According to the optimization results, three solutions chosen from the Pareto front are compared comprehensively, and the optimal one is selected. Finally, the optimal and battery-only options are compared quantitatively using the same objectives, and the hybrid ESS is found to be a more economical and efficient option.

  9. Contributory fault and level of personal injury to drivers involved in head-on collisions: Application of copula-based bivariate ordinal models.

    Wali, Behram; Khattak, Asad J; Xu, Jingjing

    2018-01-01

    models, the study provides evidence that copula-based bivariate models can provide more reliable estimates and richer insights. Practical implications of the results are discussed.

  10. Cover crop-based ecological weed management: exploration and optimization

    Kruidhof, H.M.

    2008-01-01

    Keywords: organic farming, ecologically-based weed management, cover crops, green manure, allelopathy, Secale cereale, Brassica napus, Medicago sativa

    Cover crop-based ecological weed management: exploration and optimization. In organic farming systems, weed control is recognized as one

  11. An Optimal-Estimation-Based Aerosol Retrieval Algorithm Using OMI Near-UV Observations

    Jeong, U; Kim, J.; Ahn, C.; Torres, O.; Liu, X.; Bhartia, P. K.; Spurr, R. J. D.; Haffner, D.; Chance, K.; Holben, B. N.

    2016-01-01

    An optimal-estimation (OE)-based aerosol retrieval algorithm using OMI (Ozone Monitoring Instrument) near-ultraviolet observations was developed in this study. The OE-based algorithm has the merit of providing useful error estimates simultaneously with the inversion products. Furthermore, instead of using traditional lookup tables for inversion, it performs online radiative transfer calculations with VLIDORT (linearized pseudo-spherical vector discrete ordinate radiative transfer code) to eliminate interpolation errors and improve stability. The measurements and inversion products of the Distributed Regional Aerosol Gridded Observation Network campaign in northeast Asia (DRAGON NE-Asia 2012) were used to validate the retrieved aerosol optical thickness (AOT) and single scattering albedo (SSA). The retrieved AOT and SSA at 388 nm have a correlation with the Aerosol Robotic Network (AERONET) products that is comparable to or better than the correlation with the operational product during the campaign. The OE-based estimated error represented the variance of actual biases of AOT at 388 nm between the retrieval and AERONET measurements better than the operational error estimates. The forward model parameter errors were analyzed separately for both AOT and SSA retrievals. The surface reflectance at 388 nm, the imaginary part of the refractive index at 354 nm, and the number fine-mode fraction (FMF) were found to be the most important parameters affecting the retrieval accuracy of AOT, while FMF was the most important parameter for the SSA retrieval. The additional information provided with the retrievals, including the estimated error and degrees of freedom, is expected to be valuable for relevant studies. Detailed advantages of using the OE method are described and discussed in this paper.
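The linear-Gaussian core of an optimal-estimation retrieval, including the posterior covariance that supplies the "estimated error" the abstract mentions, can be sketched as follows; the two-parameter state and forward model below are hypothetical, not the OMI algorithm's.

```python
import numpy as np

def oe_retrieval(y, K, x_a, S_a, S_e):
    """Linear optimal-estimation (maximum a posteriori) retrieval:
    x_hat = x_a + (K^T S_e^-1 K + S_a^-1)^-1 K^T S_e^-1 (y - K x_a),
    with posterior covariance S_hat quantifying the retrieval error."""
    Se_inv = np.linalg.inv(S_e)
    Sa_inv = np.linalg.inv(S_a)
    S_hat = np.linalg.inv(K.T @ Se_inv @ K + Sa_inv)
    x_hat = x_a + S_hat @ K.T @ Se_inv @ (y - K @ x_a)
    return x_hat, S_hat

# Hypothetical two-parameter state (think AOT and SSA) seen through a linear forward model.
K = np.array([[1.0, 0.5], [0.2, 1.0], [0.8, 0.3]])
x_true = np.array([0.6, 0.9])
x_a = np.array([0.5, 0.8])              # a priori state
S_a = np.diag([0.2**2, 0.2**2])         # a priori covariance
S_e = np.diag([0.01**2] * 3)            # measurement-noise covariance
y = K @ x_true                          # noise-free synthetic measurement
x_hat, S_hat = oe_retrieval(y, K, x_a, S_a, S_e)
print(x_hat)  # close to x_true; diag(S_hat) gives the estimated error variances
```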

  12. GPU-Monte Carlo based fast IMRT plan optimization

    Yongbao Li

    2014-03-01

    Full Text Available Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and hinder the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time of repeated dose calculations for a large number of beamlets has prevented this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet from which the source particle originates. Deposited doses are stored separately for each beamlet based on the index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet. Plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second-round optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow is developed. The high efficiency allows the use of MC for IMRT optimizations.

  13. [Physician versus 'off-label" ordinance].

    Kordus, Katarzyna; Spiewak, Radosław

    2015-01-01

    Polish physicians are obliged by legislation to prescribe drugs authorized for sale in the Republic of Poland, based on registration documentation, including the Summaries of Product Characteristics (SPC). So-called 'off-label' treatment occurs in case of a conflict between the prescription and the information contained in the SPC, which may be considered a 'medical experiment'. In case of adverse drug reactions, such classification excludes the responsibility of the marketing authorization holders, and damages are not covered by obligatory third-party insurance, which can have financial and legal consequences for the doctor. Deviations from SPC-compliant prescription may include a different route of drug administration, drug administration compliant with the indications but in patients for whom there is no specified drug dosage, or use for an indication not contained in the SPC. Medicinal products with an equivalent active component, form and dose can have different registration indications in their SPCs, and the actively promoted dispensation of less expensive substitutes by pharmacists may, against the doctor's will, result in use that is non-compliant with the registration of the substitute drug. Pharmacotherapy of 'orphan diseases', for which there are no registered medicinal products, also becomes an essential issue, as it forces doctors into 'off-label' prescriptions. Moreover, the reimbursement of drugs is in most cases limited to medicinal products prescribed according to the registration indications. The pleas by medical professionals to make the ordination and reimbursement of drugs depend on the active pharmaceutical ingredient and evidence of efficacy, as well as to introduce Evidence Based Medicine (EBM) standards for the treatment of diseases, did not receive proper attention from the governing bodies. Therefore, a fundamental question remains unanswered as to what should be the principal gauge for physicians' therapeutic decisions: the ethics, conscience

  14. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, these data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, firstly, the Fisher-Markov selector is used to choose a fixed number of gene data. Secondly, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation ability. Then, discrete biogeography based optimization (DBBO) is constructed by integrating the discrete migration model and the discrete mutation model. Finally, the DBBO method is used for feature selection, with three classifiers evaluated under 10-fold cross-validation. In order to show the effectiveness and efficiency of the algorithm, the proposed algorithm is tested on four breast cancer dataset benchmarks. Compared with a genetic algorithm, particle swarm optimization, a differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature when considering the quality of the solutions obtained.

  15. Trust regions in Kriging-based optimization with expected improvement

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
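The Expected Improvement criterion maximized inside EGO, TRIKE and CYCLONE has a closed form for a Gaussian predictive distribution; a minimal sketch (minimization convention, illustrative inputs):

```python
import math

def expected_improvement(mu, sigma, f_min):
    """Closed-form EI of a Gaussian prediction N(mu, sigma^2) against the best
    observed value f_min (minimization convention)."""
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # standard normal cdf
    return (f_min - mu) * cdf + sigma * pdf

# An uncertain point predicted slightly worse than f_min can score a larger EI
# than a confident point predicted marginally better.
ei_uncertain = expected_improvement(mu=1.1, sigma=0.5, f_min=1.0)
ei_confident = expected_improvement(mu=0.99, sigma=0.01, f_min=1.0)
print(ei_uncertain, ei_confident)
```

The comparison illustrates the exploration/exploitation trade-off EI encodes, which is what a trust-region scheme like TRIKE then constrains spatially.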

  16. APPROACH ON INTELLIGENT OPTIMIZATION DESIGN BASED ON COMPOUND KNOWLEDGE

    Yao Jianchu; Zhou Ji; Yu Jun

    2003-01-01

    A concept of an intelligent optimal design approach is proposed, organized around a compound knowledge model. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge and case-based sample knowledge. Using this compound knowledge model, the abundant quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united in one model. The intelligent optimal design model based on this compound knowledge, and the decomposition principles automatically generated from it, are also presented. In practice, the approach has been applied to production planning, process scheduling and optimization of the production process of a refining and chemical works, and a great profit was achieved. Notably, the methods and principles are adaptable not only to continuous process industries but also to discrete manufacturing ones.

  17. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  18. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    Mohammed Hasan Abdulameer

    2014-01-01

    Full Text Available Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented.

  19. A systematic optimization for graphene-based supercapacitors

    Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho

    2017-08-01

    Increasing the energy-storage density of supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but investigations into systematically optimizing the mixing ratio that maximizes the performance of each candidate material have been insufficient, which hinders technological progress. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of the three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). By using the extreme-vertices design, the optimized proportion is determined to be rGO : AB : PVDF = 0.95 : 0.00 : 0.05. The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.

  20. Teaching learning based optimization algorithm and its engineering applications

    Rao, R Venkata

    2016-01-01

    Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
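The teacher and learner phases that the book describes can be sketched for a continuous minimization problem; this is a minimal illustrative implementation with assumed parameter values, not the book's code.

```python
import random

random.seed(0)

def tlbo(f, dim=2, n=20, iters=100, lo=-5.0, hi=5.0):
    """Minimal TLBO (minimization): a teacher phase pulls learners toward the
    best solution, then a learner phase lets pairs of learners teach each other."""
    clip = lambda v: min(hi, max(lo, v))
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        teacher = min(pop, key=f)
        mean = [sum(p[d] for p in pop) / n for d in range(dim)]
        for i in range(n):
            # teacher phase: move toward the teacher, away from the class mean
            tf = random.choice([1, 2])               # teaching factor
            cand = [clip(pop[i][d] + random.random() * (teacher[d] - tf * mean[d]))
                    for d in range(dim)]
            if f(cand) < f(pop[i]):                  # greedy acceptance
                pop[i] = cand
            # learner phase: learn from a random peer
            j = random.randrange(n)
            if j != i:
                sign = 1.0 if f(pop[j]) < f(pop[i]) else -1.0
                cand = [clip(pop[i][d] + sign * random.random() * (pop[j][d] - pop[i][d]))
                        for d in range(dim)]
                if f(cand) < f(pop[i]):
                    pop[i] = cand
    return min(pop, key=f)

sphere = lambda p: sum(t * t for t in p)
best = tlbo(sphere)
print(sphere(best))  # close to the global minimum 0
```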

  1. EUD-based biological optimization for carbon ion therapy

    Brüningk, Sarah C.; Kamp, Florian; Wilkens, Jan J.

    2015-01-01

    Purpose: Treatment planning for carbon ion therapy requires an accurate modeling of the biological response of each tissue to estimate the clinical outcome of a treatment. The relative biological effectiveness (RBE) accounts for this biological response on a cellular level but does not refer to the actual impact on the organ as a whole. For photon therapy, the concept of equivalent uniform dose (EUD) represents a simple model to take the organ response into account, yet so far no formulation of EUD has been reported that is suitable to carbon ion therapy. The authors introduce the concept of an equivalent uniform effect (EUE) that is directly applicable to both ion and photon therapies and exemplarily implemented it as a basis for biological treatment plan optimization for carbon ion therapy. Methods: In addition to a classical EUD concept, which calculates a generalized mean over the RBE-weighted dose distribution, the authors propose the EUE to simplify the optimization process of carbon ion therapy plans. The EUE is defined as the biologically equivalent uniform effect that yields the same probability of injury as the inhomogeneous effect distribution in an organ. Its mathematical formulation is based on the generalized mean effect using an effect-volume parameter to account for different organ architectures and is thus independent of a reference radiation. For both EUD concepts, quadratic and logistic objective functions are implemented into a research treatment planning system. A flexible implementation allows choosing for each structure between biological effect constraints per voxel and EUD constraints per structure. Exemplary treatment plans are calculated for a head-and-neck patient for multiple combinations of objective functions and optimization parameters. Results: Treatment plans optimized using an EUE-based objective function were comparable to those optimized with an RBE-weighted EUD-based approach. In agreement with previous results from photon
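The generalized-mean construction underlying both the classical EUD and the proposed EUE can be sketched directly; the voxel values and volume-effect parameters below are illustrative, not from the paper.

```python
def generalized_eud(values, a):
    """Generalized mean over a voxel distribution with volume-effect parameter a:
    EUD = ((1/N) * sum(v_i^a))^(1/a). Applied to RBE-weighted doses this is the
    classical EUD; applied to a biological-effect distribution, the paper's EUE."""
    n = len(values)
    return (sum(v ** a for v in values) / n) ** (1.0 / a)

doses = [50.0, 60.0, 70.0]  # hypothetical RBE-weighted voxel doses in Gy (RBE)
print(generalized_eud(doses, a=1.0))   # a = 1: arithmetic mean (parallel organ)
print(generalized_eud(doses, a=10.0))  # large a: approaches the maximum (serial organ)
```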

  2. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.

  3. Multivariate ordination statistics workshop with R slides

    Strack, Michael

    2015-01-01

    2-hour workshop given at Macquarie University Department of Biological Sciences, 4 November 2015. The workshop was an introduction to the family of techniques falling under multivariate ordination, using the R language and drawing heavily from the book "Numerical Ecology with R" by Borcard et al. (2012).

  4. Ordinal Welfare Comparisons with Multiple Discrete Indicators

    Arndt, Channing; Distante, Roberta; Hussain, M. Azhar

    We develop an ordinal method for making welfare comparisons between populations with multidimensional discrete well-being indicators observed at the micro level. The approach assumes that, for each well-being indicator, the levels can be ranked from worse to better; however, no assumptions are made...

  5. INFCE technical co-ordinating committee documents

    None

    1980-07-01

    A collection of the documents covering the period December 1977 through February 1980 submitted to or generated by the Technical Co-ordinating Committee is presented. The documents cover primarily the organizational aspects of INFCE, but conclusions from the various Working Groups are summarized.

  7. A test for ordinal measurement invariance

    Ligtvoet, R.; Millsap, R.E.; Bolt, D.M.; van der Ark, L.A.; Wang, W.-C.

    2015-01-01

    One problem with the analysis of measurement invariance is the reliance of the analysis on having a parametric model that accurately describes the data. In this paper an ordinal version of the property of measurement invariance is proposed, which relies only on nonparametric restrictions.

  8. Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization

    Na Tian

    2015-01-01

    Full Text Available A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. The comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order performs comparably to QPSO with sigma value, depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
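    The external-repository bookkeeping described above can be sketched as a generic nondominated-archive update: a candidate is admitted only if no archived solution dominates it, and any archived solutions it dominates are evicted. This assumes minimization and is not the paper's exact QPSO implementation; all names are illustrative.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Admit a candidate to the external repository of nondominated solutions."""
    if any(dominates(s, candidate) for s in archive):
        return archive                    # candidate is dominated: reject it
    # evict any archived solutions the candidate dominates, then add it
    return [s for s in archive if not dominates(candidate, s)] + [candidate]

archive = []
for point in [(2, 3), (1, 4), (3, 1), (2, 2)]:
    archive = update_archive(archive, point)
print(archive)  # → [(1, 4), (3, 1), (2, 2)]
```

    The global best position would then be drawn from this archive by one of the elitist strategies the abstract compares (preference order, sigma value, or random selection).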

  9. Simulation-based optimization of sustainable national energy systems

    Batas Bjelić, Ilija; Rajaković, Nikola

    2015-01-01

    The goals of the EU2030 energy policy should be achieved cost-effectively by employing the optimal mix of supply- and demand-side technical measures, including energy efficiency, renewable energy and structural measures. In this paper, the achievement of these goals is modeled by introducing an innovative method of soft-linking EnergyPLAN with the generic optimization program (GenOpt). This soft-link enables simulation-based optimization, guided by the chosen optimization algorithm, rather than manual adjustment of the decision vectors. In order to run EnergyPLAN simulations within the optimization loop of GenOpt, the decision vectors should be chosen and declared in GenOpt for scenarios created in EnergyPLAN. The result of the optimization loop is an optimal national energy master plan (as a case study, energy policy in Serbia was taken), followed by a sensitivity analysis of the exogenous assumptions and a focus on the contribution of the smart electricity grid to the achievement of the EU2030 goals. It is shown that the increase in the policy-induced total costs of less than 3% is not significant. This general method could be further improved and used worldwide in the optimal planning of sustainable national energy systems. - Highlights: • Innovative method of soft-linking EnergyPLAN with GenOpt has been introduced. • Optimal national energy master plan has been developed (case study for Serbia). • Sensitivity analysis on the exogenous world energy and emission price development outlook. • Focus on the contribution of smart energy systems to the EU2030 goals. • Innovative soft-linking methodology could be further improved and used worldwide.

  10. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  11. A Framework for Constrained Optimization Problems Based on a Modified Particle Swarm Optimization

    Biwei Tang

    2016-01-01

    Full Text Available This paper develops a particle swarm optimization (PSO)-based framework for constrained optimization problems (COPs). Aiming at enhancing the performance of PSO, a modified PSO algorithm, named SASPSO 2011, is proposed by adding a newly developed self-adaptive strategy to the standard particle swarm optimization 2011 (SPSO 2011) algorithm. Since the convergence of PSO is of great importance and significantly influences its performance, this paper first theoretically investigates the convergence of SASPSO 2011. Then, a parameter selection principle guaranteeing the convergence of SASPSO 2011 is provided. Subsequently, a SASPSO 2011-based framework is established to solve COPs. Attempting to increase the diversity of solutions and decrease optimization difficulties, the adaptive relaxation method, combined with the feasibility-based rule, is applied to handle the constraints of COPs and evaluate candidate solutions in the developed framework. Finally, the proposed method is verified on 4 benchmark test functions and 2 real-world engineering problems against six PSO variants and some well-known methods proposed in the literature. Simulation results confirm that the proposed method is highly competitive in terms of solution quality and can be considered a viable alternative for solving COPs.
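    The feasibility-based rule mentioned above is commonly implemented as a three-way comparison (in the spirit of Deb's rules): a feasible candidate beats an infeasible one, two feasible candidates compare by objective value, and two infeasible ones compare by total constraint violation. A minimal sketch under those assumptions, representing each candidate as a (fitness, violation) pair with minimization assumed; names are illustrative:

```python
def better(a, b):
    """Return the preferred of two candidates, each a (fitness, violation) pair."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:
        return a if fa <= fb else b   # both feasible: lower fitness wins
    if va == 0 or vb == 0:
        return a if va == 0 else b    # feasibility always wins
    return a if va <= vb else b       # both infeasible: smaller violation wins

print(better((5.0, 0.0), (1.0, 2.0)))  # → (5.0, 0.0): the feasible candidate wins
```

    In a PSO framework this comparison would replace the plain fitness comparison when updating personal and global best positions.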

  12. Optimization of DNA Sensor Model Based Nanostructured Graphene Using Particle Swarm Optimization Technique

    Hediyeh Karimi

    2013-01-01

    Full Text Available It has been predicted that graphene nanomaterials will be among the candidate materials for post-silicon electronics due to their astonishing properties such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a zero-gap semimetal nanomaterial with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption in order to examine the DNA concentration in an analyte solution. In particular, there is an essential need to develop cost-effective DNA sensors, given their suitability for the diagnosis of genetic or pathogenic diseases. In this paper, the particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.

  13. Rapid Optimal Generation Algorithm for Terrain Following Trajectory Based on Optimal Control

    杨剑影; 张海; 谢邦荣; 尹健

    2004-01-01

    Based on optimal control theory, a 3-dimensional direct generation algorithm is proposed for anti-ground low-altitude penetration tasks over complex terrain. By optimizing the terrain following (TF) objective function, terrain coordinate system, missile dynamic model and control vector, the TF problem is turned into an improved optimal control problem whose mathematical model is simple and which does not require solving for the second-order terrain derivative. Simulation results prove that this method is reasonable and feasible. The TF precision is in the range of 0.3 m to 3.0 m, and the planning time is less than 30 min. The method has such strong points as rapidness and precision, and has great application value.

  14. Trafficability Analysis at Traffic Crossing and Parameters Optimization Based on Particle Swarm Optimization Method

    Bin He

    2014-01-01

    Full Text Available In city traffic, it is important to improve transportation efficiency, and the spacing of the platoon should be shortened when crossing an intersection. The best method to deal with this problem is automatic control of vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement, and a systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the platoon model is complex and the parameters are coupled with each other. In this paper, the particle swarm optimization method is introduced to effectively optimize the parameters of the platoon. The proposed method finds the optimal parameters based on simulations and makes the spacing of the platoon shorter.

  15. Sizing optimization of skeletal structures using teaching-learning based optimization

    Vedat Toğan

    2017-03-01

    Full Text Available Teaching-Learning Based Optimization (TLBO) is one of the non-traditional techniques that simulate natural phenomena in a numerical algorithm. TLBO mimics the teaching-learning process occurring between a teacher and students in a classroom. A parameter named the teaching factor, TF, seems to be the only tuning parameter in TLBO. Although the value of the teaching factor, TF, is determined by an equation, the value of 1 or 2 has been used by researchers for TF. This study intends to explore the effect of the variation of the teaching factor TF on the performance of TLBO. This effect is demonstrated by solving structural optimization problems including truss and frame structures under stress and displacement constraints. The results indicate that the variation of TF in the TLBO process does not change the results obtained at the end of the optimization procedure when the computational cost of TLBO is ignored.
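    The role of the teaching factor can be illustrated with a minimal sketch of the TLBO teacher phase, where the usual equation TF = round(1 + rand(0, 1)) yields 1 or 2 at random. This is a generic textbook formulation (learner phase omitted), not the paper's exact code; the toy objective and names are illustrative.

```python
import random

def teacher_phase(population, objective):
    """One TLBO teacher phase with greedy acceptance (learner phase omitted)."""
    teacher = min(population, key=objective)
    mean = [sum(col) / len(population) for col in zip(*population)]
    new_population = []
    for x in population:
        tf = round(1 + random.random())  # teaching factor: randomly 1 or 2
        r = random.random()
        candidate = [xi + r * (ti - tf * mi)
                     for xi, ti, mi in zip(x, teacher, mean)]
        # keep the better of the old and new positions
        new_population.append(min(x, candidate, key=objective))
    return new_population

random.seed(0)
sphere = lambda v: sum(t * t for t in v)  # toy objective
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(6)]
before = min(sphere(p) for p in pop)
for _ in range(50):
    pop = teacher_phase(pop, sphere)
after = min(sphere(p) for p in pop)
print(after <= before)  # greedy acceptance never worsens the best → True
```

    Fixing tf = 1 or tf = 2 instead of drawing it randomly is the variation whose effect the study investigates.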

  16. Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis

    Sen Zhang

    2015-01-01

    Full Text Available One recently proposed heuristic evolutionary algorithm is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on the Powell local optimization method, called PGWO. The PGWO algorithm significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; hence, PGWO can be applied to clustering problems. In this study, the PGWO algorithm is first tested on seven benchmark functions. Second, the PGWO algorithm is used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the benchmark and data clustering results demonstrate the superior performance of the PGWO algorithm.
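    The leadership-hierarchy mechanism of the baseline GWO (without the Powell refinement the paper adds) can be sketched as follows: each wolf moves toward the average of three pulls steered by the alpha, beta, and delta leaders, with coefficient a decaying linearly from 2 to 0. This is a generic textbook formulation with illustrative names and a toy objective.

```python
import random

def gwo_step(wolves, objective, a):
    """One GWO iteration: wolves move toward the alpha, beta, and delta leaders."""
    alpha, beta, delta = sorted(wolves, key=objective)[:3]
    new_positions = []
    for x in wolves:
        position = []
        for d in range(len(x)):
            guided = []
            for leader in (alpha, beta, delta):
                A = 2 * a * random.random() - a   # exploration/exploitation balance
                C = 2 * random.random()
                guided.append(leader[d] - A * abs(C * leader[d] - x[d]))
            position.append(sum(guided) / 3)      # average of the three pulls
        new_positions.append(position)
    return new_positions

random.seed(1)
sphere = lambda v: sum(t * t for t in v)  # toy objective
wolves = [[random.uniform(-10, 10) for _ in range(2)] for _ in range(8)]
best = min(sphere(w) for w in wolves)
for it in range(100):
    wolves = gwo_step(wolves, sphere, a=2 * (1 - it / 100))  # a decays 2 → 0
    best = min(best, min(sphere(w) for w in wolves))
print(best >= 0)
```

    PGWO, as described, would interleave such steps with Powell's derivative-free local search around promising positions.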

  17. Reliability-based performance simulation for optimized pavement maintenance

    Chou, Jui-Sheng; Le, Thanh-Son

    2011-01-01

    Roadway pavement maintenance is essential for driver safety and highway infrastructure efficiency. However, regular preventive maintenance and rehabilitation (M and R) activities are extremely costly. Unfortunately, the funds available for the M and R of highway pavement are often given lower priority compared to other national development policies, therefore, available funds must be allocated wisely. Maintenance strategies are typically implemented by optimizing only the cost whilst the reliability of facility performance is neglected. This study proposes a novel algorithm using multi-objective particle swarm optimization (MOPSO) technique to evaluate the cost-reliability tradeoff in a flexible maintenance strategy based on non-dominant solutions. Moreover, a probabilistic model for regression parameters is employed to assess reliability-based performance. A numerical example of a highway pavement project is illustrated to demonstrate the efficacy of the proposed MOPSO algorithms. The analytical results show that the proposed approach can help decision makers to optimize roadway maintenance plans. - Highlights: →A novel algorithm using multi-objective particle swarm optimization technique. → Evaluation of the cost-reliability tradeoff in a flexible maintenance strategy. → A probabilistic model for regression parameters is employed to assess reliability-based performance. → The proposed approach can help decision makers to optimize roadway maintenance plans.

  19. Visual grading characteristics and ordinal regression analysis during optimisation of CT head examinations.

    Zarb, Francis; McEntee, Mark F; Rainford, Louise

    2015-06-01

    To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that optimised protocols had image quality similar to that of current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.

  20. Urban Runoff: Model Ordinances for Erosion and Sediment Control

    The model ordinance in this section borrows language from erosion and sediment control ordinances, highlighting features that might help prevent erosion and sedimentation and protect natural resources more fully.

  1. Optimization for PET imaging based on phantom study and NEC density

    Daisaki, Hiromitsu; Shimada, Naoki; Shinohara, Hiroyuki

    2012-01-01

    In consideration of the requirement for global standardization and quality control of PET imaging, the present study gives an outline of a phantom study to decide both scan and reconstruction parameters based on the FDG-PET/CT procedure guideline in Japan; optimization of scan duration based on NEC density was performed subsequently. In the phantom study, scan and reconstruction parameters were decided by visual assessment and physical indexes (N_10mm, NEC_phantom, Q_H,10mm/N_10mm) so as to explicitly visualize a hot spot of 10 mm diameter with standardized uptake value (SUV)=4. Simultaneously, the Recovery Coefficient (RC) was evaluated to confirm that the PET images were sufficiently quantitative. Scan durations were optimized by Body Mass Index (BMI) based on retrospective analysis of NEC density. Correlation between visual score in clinical FDG-PET images and NEC density fell after the optimization of scan duration. Both inter-institution and inter-patient variability were decreased by performing the phantom study based on the procedure guideline and the optimization of scan duration based on NEC density, which seems ultimately useful for practicing highly precise examinations and promoting high-quality controlled studies. (author)

  2. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    General strategic bidding procedure has been formulated in the literature as a bi-level searching problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex and hence, researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem becomes even more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14 as well as IEEE 30 bus systems and the performance is compared against differential evolution-based strategic bidding, genetic algorithm-based strategic bidding and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  3. Radiation protection optimization using a knowledge based methodology

    Reyes-Jimenez, J.; Tsoukalas, L.H.

    1991-01-01

    This paper presents a knowledge based methodology for radiological planning and radiation protection optimization. The cost-benefit methodology described in International Commission on Radiological Protection Report No. 37 is employed within a knowledge based framework for the purpose of planning maintenance activities while optimizing radiation protection. 1, 2 The methodology is demonstrated through an application to a heating, ventilation and air conditioning (HVAC) system. HVAC is used to reduce radioactivity concentration levels in selected contaminated multi-compartment models at nuclear power plants when higher than normal radiation levels are detected. The overall objective is to reduce personnel exposure resulting from airborne radioactivity when routine or maintenance access is required in contaminated areas. 2 figs, 15 refs

  4. ENERGY OPTIMIZATION IN CLUSTER BASED WIRELESS SENSOR NETWORKS

    T. SHANKAR

    2014-04-01

    Full Text Available Wireless sensor networks (WSN) are made up of sensor nodes which are usually battery-operated devices, and hence energy saving of sensor nodes is a major design issue. To prolong the network's lifetime, minimization of energy consumption should be implemented at all layers of the network protocol stack, from the physical layer up to the application layer, including cross-layer optimization. Optimizing energy consumption is the main concern when designing and planning the operation of a WSN. Clustering is one of the techniques used to extend the lifetime of the network by applying data aggregation and balancing energy consumption among the sensor nodes. This paper proposes new versions of the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, called Advanced Optimized Low Energy Adaptive Clustering Hierarchy (AOLEACH), Optimal Deterministic Low Energy Adaptive Clustering Hierarchy (ODLEACH), and Varying Probability Distance Low Energy Adaptive Clustering Hierarchy (VPDL), in combination with the Shuffled Frog Leap Algorithm (SFLA), which enables selecting the best adaptive cluster heads using an improved threshold energy distribution compared to the LEACH protocol, and rotating the cluster-head position for uniform energy dissipation based on energy levels. The proposed algorithms optimize the lifetime of the network by increasing the first node death (FND) time and the number of alive nodes, thereby extending the lifetime of the network.
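    The threshold that these LEACH variants modify is the classic rotating-election rule: node n becomes a cluster head in round r if a uniform draw falls below T(n) = P / (1 - P * (r mod 1/P)), where P is the desired cluster-head fraction, and nodes that have already served in the current epoch get T(n) = 0. A minimal sketch of that baseline only (not of AOLEACH/ODLEACH/VPDL themselves), with illustrative names:

```python
import random

def leach_threshold(p, r, served_this_epoch):
    """LEACH election threshold T(n) for round r (epoch length 1/p rounds)."""
    if served_this_epoch:
        return 0.0               # already served this epoch: ineligible
    return p / (1 - p * (r % int(1 / p)))

def elects_itself(p, r, served_this_epoch):
    """A node elects itself cluster head if a uniform draw is below T(n)."""
    return random.random() < leach_threshold(p, r, served_this_epoch)

# In the last round of an epoch the threshold reaches 1, forcing every
# remaining node to serve exactly once per epoch (here p = 0.25, epoch = 4):
print(leach_threshold(0.25, r=3, served_this_epoch=False))  # → 1.0
```

    The proposed variants replace this purely probabilistic rule with energy-aware thresholds so that residual-energy levels steer which nodes serve as cluster heads.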

  5. Enhancing product robustness in reliability-based design optimization

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. The reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model that is derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • Implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • The comparison with traditional RBDO is provided

  6. Design Optimization of Mechanical Components Using an Enhanced Teaching-Learning Based Optimization Algorithm with Differential Operator

    B. Thamaraikannan

    2014-01-01

    Full Text Available This paper studies in detail the background and implementation of a teaching-learning based optimization (TLBO) algorithm with a differential operator for the optimization of a few mechanical components, which are essential for most mechanical engineering applications. Like most other heuristic techniques, TLBO is a population-based method and uses a population of solutions to proceed to the global solution. A differential operator is incorporated into TLBO for an effective search for better solutions. To validate the effectiveness of the proposed method, three typical optimization problems are considered in this research: first, to optimize the weight in a belt-pulley drive; second, to optimize the volume in a closed coil helical spring; and finally, to optimize the weight in a hollow shaft. Simulation results on these mechanical-component optimization problems reveal the ability of the proposed methodology to find better optimal solutions compared to other optimization algorithms.

  7. Ordinance of 8 February 1984 on the radioactivity of timepieces

    1984-01-01

    This Ordinance regulates the approval of radioluminescent timepieces (wristwatches, fob-watches, alarm-clocks, clocks, etc.) imported or made in Switzerland. Such timepieces must comply with conditions in particular regarding their maximum radioactivity as laid down by the Ordinance and are subject to controls by the Federal Office of Public Health. The Ordinance, which came into force on 1 March 1984, replaces a similar Ordinance of 18 April 1968. (NEA)

  8. A modified teaching–learning based optimization for multi-objective optimal power flow problem

    Shabanpour-Haghighi, Amin; Seifi, Ali Reza; Niknam, Taher

    2014-01-01

    Highlights: • A new modified teaching–learning based algorithm is proposed. • A self-adaptive wavelet mutation strategy is used to enhance the performance. • To avoid reaching a large repository size, a fuzzy clustering technique is used. • An efficiently smart population selection is utilized. • Simulations show the superiority of this algorithm compared with other ones. - Abstract: In this paper, a modified teaching–learning based optimization algorithm is analyzed to solve the multi-objective optimal power flow problem considering the total fuel cost and total emission of the units. The modified phase of the optimization algorithm utilizes a self-adapting wavelet mutation strategy. Moreover, a fuzzy clustering technique is proposed to avoid an extremely large repository size, besides a smart population selection for the next iteration. These techniques make the algorithm search a larger space to find the optimal solutions while the speed of convergence remains good. The IEEE 30-Bus and 57-Bus systems are used to illustrate the performance of the proposed algorithm, and results are compared with those in the literature. It is verified that the proposed approach has better performance than other techniques.

  9. MVMO-based approach for optimal placement and tuning of ...

    DR OKE

    differential evolution DE algorithm with adaptive crossover operator, .... x are assigned by using a sequential scheme which accounts for mean and ... the representative scenarios from probabilistic model based Monte Carlo ... Comparison of average convergence of MVMO-S with other metaheuristic optimization methods.

  10. Runtime Optimizations for Tree-Based Machine Learning Models

    N. Asadi; J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)

    2014-01-01

    Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression

  11. Optimal energy management for a flywheel-based hybrid vehicle

    Berkel, van K.; Hofman, T.; Vroemen, B.G.; Steinbuch, M.

    2011-01-01

    This paper presents the modeling and design of an optimal Energy Management Strategy (EMS) for a flywheel-based hybrid vehicle, that does not use any electrical motor/generator, or a battery, for its hybrid functionalities. The hybrid drive train consists of only low-cost components, such as a

  12. Reality based optimization of steel monopod offshore-towers

    Vrouwenvelder, A.C.W.M.

    2008-01-01

    In this work, the implementation of reliability-based optimization (RBO) of a circular steel monopod offshore tower with constant and variable diameters (represented by segmentations) and thicknesses is presented. The tower is subjected to extreme wave loading. For this purpose, the

  13. Economics-based optimal control of greenhouse tomato crop production

    Tap, F.

    2000-01-01

    The design and testing of an optimal control algorithm, based on scientific models of greenhouse and tomato crop and an economic criterion (goal function), to control greenhouse climate, is described. An important characteristic of this control is that it aims at maximising an economic

  14. Optimization-based Method for Automated Road Network Extraction

    Xiong, D

    2001-01-01

    Automated road information extraction has significant applicability in transportation. It provides a means for creating, maintaining, and updating transportation network databases that are needed for purposes ranging from traffic management to automated vehicle navigation and guidance. This paper reviews the literature on road extraction and describes a study of an optimization-based method for automated road network extraction.

  15. Optimal Sequential Rules for Computer-Based Instruction.

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  16. Processing ordinality and quantity: the case of developmental dyscalculia.

    Orly Rubinsten

    In contrast to quantity processing, up to date, the nature of ordinality has received little attention from researchers despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask if there are two separate core systems that lie at the foundations of numerical cognition: (1) the traditionally and well accepted numerical magnitude system but also (2) core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made "ordered" or "non-ordered" judgments about 3 groups of dots (non-symbolic numerical stimuli; in Experiment 1) and 3 numbers (symbolic task: Experiment 2). In contrast to previous findings and arguments about quantity deficit in DD participants, when quantity and ordinality are dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not show, however, the ordinality effect. Ordinality effect in DD appeared only when area and density were randomized, but only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information.

  17. Processing ordinality and quantity: the case of developmental dyscalculia.

    Rubinsten, Orly; Sury, Dana

    2011-01-01

    In contrast to quantity processing, up to date, the nature of ordinality has received little attention from researchers despite the fact that both quantity and ordinality are embodied in numerical information. Here we ask if there are two separate core systems that lie at the foundations of numerical cognition: (1) the traditionally and well accepted numerical magnitude system but also (2) core system for representing ordinal information. We report two novel experiments of ordinal processing that explored the relation between ordinal and numerical information processing in typically developing adults and adults with developmental dyscalculia (DD). Participants made "ordered" or "non-ordered" judgments about 3 groups of dots (non-symbolic numerical stimuli; in Experiment 1) and 3 numbers (symbolic task: Experiment 2). In contrast to previous findings and arguments about quantity deficit in DD participants, when quantity and ordinality are dissociated (as in the current tasks), DD participants exhibited a normal ratio effect in the non-symbolic ordinal task. They did not show, however, the ordinality effect. Ordinality effect in DD appeared only when area and density were randomized, but only in the descending direction. In the symbolic task, the ordinality effect was modulated by ratio and direction in both groups. These findings suggest that there might be two separate cognitive representations of ordinal and quantity information and that linguistic knowledge may facilitate estimation of ordinal information.

  18. Comparative performance of an elitist teaching-learning-based optimization algorithm for solving unconstrained optimization problems

    R. Venkata Rao

    2013-01-01

    Teaching-Learning-based optimization (TLBO) is a recently proposed population-based algorithm which simulates the teaching-learning process of the classroom. This algorithm requires only the common control parameters and does not require any algorithm-specific control parameters. In this paper, the effect of elitism on the performance of the TLBO algorithm is investigated while solving unconstrained benchmark problems. The effects of common control parameters such as the population size and the number of generations on the performance of the algorithm are also investigated. The proposed algorithm is tested on 76 unconstrained benchmark functions with different characteristics and the performance of the algorithm is compared with that of other well-known optimization algorithms. A statistical test is also performed to investigate the results obtained using different algorithms. The results have proved the effectiveness of the proposed elitist TLBO algorithm.
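
    The two phases of the standard TLBO update can be sketched compactly. This is a minimal illustration of plain TLBO (not the paper's elitist variant or benchmark setup); the sphere test function, bounds, population size, and generation count below are arbitrary choices for demonstration.

```python
import random

def tlbo_step(population, fitness, bounds):
    """One generation of teaching-learning-based optimization (minimization)."""
    low, high = bounds
    dim = len(population[0])
    # Teacher phase: pull each learner toward the best solution, away from the mean.
    teacher = min(population, key=fitness)
    mean = [sum(x[d] for x in population) / len(population) for d in range(dim)]
    new_pop = []
    for x in population:
        tf = random.choice((1, 2))  # teaching factor
        cand = [min(high, max(low, x[d] + random.random() * (teacher[d] - tf * mean[d])))
                for d in range(dim)]
        new_pop.append(cand if fitness(cand) < fitness(x) else x)  # greedy acceptance
    # Learner phase: each learner interacts with a random partner.
    result = []
    for x in new_pop:
        partner = random.choice(new_pop)
        sign = 1 if fitness(x) < fitness(partner) else -1
        cand = [min(high, max(low, x[d] + random.random() * sign * (x[d] - partner[d])))
                for d in range(dim)]
        result.append(cand if fitness(cand) < fitness(x) else x)
    return result

# Usage: minimize the sphere function in 2-D.
random.seed(1)
sphere = lambda v: sum(c * c for c in v)
pop = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
for _ in range(100):
    pop = tlbo_step(pop, sphere, (-5.0, 5.0))
best = min(map(sphere, pop))
```

    Note that, as the abstract emphasizes, only the common control parameters (population size, number of generations) appear; there is no algorithm-specific tuning parameter.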

  19. Attitude Optimal Backstepping Controller Based Quaternion for a UAV

    Djamel, Kaddouri; Abdellah, Mokhtari; Benallegue, Abdelaziz

    2016-01-01

    A hierarchical controller design based on nonlinear H∞ theory and backstepping technique is developed for a nonlinear and coupled dynamic attitude system using conventional quaternion based method. The derived controller combines the attractive features of H∞ optimal controller and the advantages of the backstepping technique leading to a control law which avoids winding phenomena. Performance issues of the controller are illustrated in a simulation study made for a four-rotor vertical take-o...

  20. Mesh Denoising based on Normal Voting Tensor and Binary Optimization

    Yadav, S. K.; Reitebuch, U.; Polthier, K.

    2016-01-01

    This paper presents a tensor multiplication based smoothing algorithm that follows a two step denoising method. Unlike other traditional averaging approaches, our approach uses an element based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stoc...

  1. Optimize Etching Based Single Mode Fiber Optic Temperature Sensor

    Ajay Kumar; Dr. Pramod Kumar

    2014-01-01

    This paper presents a description of etching process for fabrication single mode optical fiber sensors. The process of fabrication demonstrates an optimized etching based method to fabricate single mode fiber (SMF) optic sensors in specified constant time and temperature. We propose a single mode optical fiber based temperature sensor, where the temperature sensing region is obtained by etching its cladding diameter over small length to a critical value. It is observed that th...

  2. A Rapid Aeroelasticity Optimization Method Based on the Stiffness characteristics

    Yuan, Zhe; Huo, Shihui; Ren, Jianting

    2018-01-01

    A rapid aeroelasticity optimization method based on the stiffness characteristics was proposed in the present study. Large time expense in static aeroelasticity analysis based on traditional time domain aeroelasticity method is solved. Elastic axis location and torsional stiffness are discussed firstly. Both torsional stiffness and the distance between stiffness center and aerodynamic center have a direct impact on divergent velocity. The divergent velocity can be adjusted by changing the cor...

  3. Parallel Harmony Search Based Distributed Energy Resource Optimization

    Ceylan, Oguzhan [ORNL]; Liu, Guodong [ORNL]; Tomsovic, Kevin [University of Tennessee, Knoxville (UTK)]

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile during a day as photovoltaics (PVs) output or electric vehicles (EVs) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.
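
    As a rough illustration of the heuristic involved (though not of the paper's parallel, three-phase power-flow setting), a serial harmony search for an unconstrained minimization problem can be sketched as follows; the sphere test function and all parameter values are assumptions for demonstration.

```python
import random

def harmony_search(cost, dim, bounds, iters=2000, hms=10, hmcr=0.9, par=0.3, bw=0.1):
    """Minimal harmony-search sketch (minimization). Parameter names follow the
    usual convention: harmony memory size (hms), harmony memory considering
    rate (hmcr), pitch adjusting rate (par), bandwidth (bw)."""
    low, high = bounds
    memory = [[random.uniform(low, high) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:            # take the value from memory...
                v = random.choice(memory)[d]
                if random.random() < par:         # ...optionally pitch-adjusted
                    v += random.uniform(-bw, bw)
            else:                                 # or improvise a random value
                v = random.uniform(low, high)
            new.append(min(high, max(low, v)))
        worst = max(range(hms), key=lambda i: cost(memory[i]))
        if cost(new) < cost(memory[worst]):       # replace the worst harmony
            memory[worst] = new
    return min(memory, key=cost)

random.seed(0)
best = harmony_search(lambda v: sum(c * c for c in v), dim=3, bounds=(-10.0, 10.0))
```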

  4. Resource-based optimization of electric power production (in Iran)

    Sadeghzadeh, Mohammad

    1999-01-01

    This paper is about electric power production optimization and chiefly discusses the types of resources available in Iran. The modeling has been based on the marginal cost of different energy resources and the types of technologies used. The computed costs are the basic standards for optimization of the energy production system. The costs associated with environmental pollution and also with pollution control are considered. The paper also studies gas fossil fuel, hydro, nuclear, renewable and co-generation of heat and power. The results are discussed and reported at the end of the paper

  5. Proposal optimization in nuclear accident emergency decision based on IAHP

    Xin Jing

    2007-01-01

    On the basis of establishing the multi-layer structure of nuclear accident emergency decision, several decision objectives are synthetically analyzed, and an optimization model of decision proposals for nuclear accident emergency based on the interval analytic hierarchy process is proposed in the paper. The model makes quantified comparisons among several emergency decision proposals, and the optimum proposal is selected, which solves the uncertain and fuzzy decision problem of judgments by experts' experiences in nuclear accident emergency decisions. A case study shows that the optimization result is much more reasonable, objective and reliable than subjective judgments, and it could serve as a decision reference for nuclear accident emergencies. (authors)

  6. Optimization of morphing flaps based on fluid structure interaction modeling

    Barlas, Athanasios; Akay, Busra

    2018-01-01

    This article describes the design optimization of morphing trailing edge flaps for wind turbines with ‘smart blades’. A high fidelity Fluid Structure Interaction (FSI) simulation framework is utilized, comprised of 2D Finite Element Analysis (FEA) and Computational Fluid Dynamics (CFD) models. A coupled aero-structural simulation of a 10% chordwise length morphing trailing edge flap for a 4 MW wind turbine rotor is carried out and response surfaces are produced with respect to the flap internal geometry design parameters for the design conditions. Surrogate model based optimization is applied...

  7. Investment Strategies Optimization based on a SAX-GA Methodology

    Canelas, António M L; Horta, Nuno C G

    2013-01-01

    This book presents a new computational finance approach combining a Symbolic Aggregate approXimation (SAX) technique with an optimization kernel based on genetic algorithms (GA). While the SAX representation is used to describe the financial time series, the evolutionary optimization kernel is used in order to identify the most relevant patterns and generate investment rules. The proposed approach considers several different chromosomes structures in order to achieve better results on the trading platform The methodology presented in this book has great potential on investment markets.

  8. Reliability-Based Structural Optimization of Wave Energy Converters

    Simon Ambühl

    2014-12-01

    More and more wave energy converter (WEC) concepts are reaching prototype level. Once the prototype level is reached, the next step in order to further decrease the levelized cost of energy (LCOE) is optimizing the overall system with a focus on structural and maintenance (inspection) costs, as well as on the harvested power from the waves. The target of a fully-developed WEC technology is not maximizing its power output, but minimizing the resulting LCOE. This paper presents a methodology to optimize the structural design of WECs based on a reliability-based optimization problem and the intent to maximize the investor’s benefits by maximizing the difference between income (e.g., from selling electricity) and the expected expenses (e.g., structural building costs or failure costs). Furthermore, different development levels, like prototype or commercial devices, may have different main objectives and will be located at different locations, as well as receive various subsidies. These points should be accounted for when performing structural optimizations of WECs. An illustrative example on the gravity-based foundation of the Wavestar device is performed showing how structural design can be optimized taking target reliability levels and different structural failure modes due to extreme loads into account.

  9. Bare-Bones Teaching-Learning-Based Optimization

    Feng Zou

    2014-01-01

    Teaching-learning-based optimization (TLBO), which simulates the teaching-learning process of the classroom, is one of the recently proposed swarm intelligence (SI) algorithms. In this paper, a new TLBO variant called bare-bones teaching-learning-based optimization (BBTLBO) is presented to solve global optimization problems. In this method, each learner of the teacher phase employs an interactive learning strategy, which is the hybridization of the learning strategy of the teacher phase in the standard TLBO and Gaussian sampling learning based on neighborhood search, and each learner of the learner phase employs the learning strategy of the learner phase in the standard TLBO or the new neighborhood search strategy. To verify the performance of our approaches, 20 benchmark functions and two real-world problems are utilized. From the conducted experiments it can be observed that BBTLBO performs significantly better than, or at least comparably to, TLBO and some existing bare-bones algorithms. The results indicate that the proposed algorithm is competitive with some other optimization algorithms.

  10. Optimal diabatic dynamics of Majorana-based quantum gates

    Rahmani, Armin; Seradjeh, Babak; Franz, Marcel

    2017-08-01

    In topological quantum computing, unitary operations on qubits are performed by adiabatic braiding of non-Abelian quasiparticles, such as Majorana zero modes, and are protected from local environmental perturbations. In the adiabatic regime, with timescales set by the inverse gap of the system, the errors can be made arbitrarily small by performing the process more slowly. To enhance the performance of quantum information processing with Majorana zero modes, we apply the theory of optimal control to the diabatic dynamics of Majorana-based qubits. While we sacrifice complete topological protection, we impose constraints on the optimal protocol to take advantage of the nonlocal nature of topological information and increase the robustness of our gates. By using Pontryagin's maximum principle, we show that robust equivalent gates to perfect adiabatic braiding can be implemented in finite times through optimal pulses. In our implementation, modifications to the device Hamiltonian are avoided. Focusing on thermally isolated systems, we study the effects of calibration errors and external white and 1/f (pink) noise on Majorana-based gates. While a noise-induced antiadiabatic behavior, where a slower process creates more diabatic excitations, prohibits indefinite enhancement of the robustness of the adiabatic scheme, our fast optimal protocols exhibit remarkable stability to noise and have the potential to significantly enhance the practical performance of Majorana-based information processing.

  11. Adaptive surrogate model based multiobjective optimization for coastal aquifer management

    Song, Jian; Yang, Yun; Wu, Jianfeng; Wu, Jichun; Sun, Xiaomin; Lin, Jin

    2018-06-01

    In this study, a novel surrogate model assisted multiobjective memetic algorithm (SMOMA) is developed for optimal pumping strategies of large-scale coastal groundwater problems. The proposed SMOMA integrates an efficient data-driven surrogate model with an improved non-dominated sorted genetic algorithm-II (NSGAII) that employs a local search operator to accelerate its convergence in optimization. The surrogate model based on Kernel Extreme Learning Machine (KELM) is developed and evaluated as an approximate simulator to generate the patterns of regional groundwater flow and salinity levels in coastal aquifers for reducing huge computational burden. The KELM model is adaptively trained during evolutionary search to satisfy the desired fidelity level of the surrogate so that it inhibits error accumulation of forecasting and results in correctly converging to the true Pareto-optimal front. The proposed methodology is then applied to a large-scale coastal aquifer management in Baldwin County, Alabama. Objectives of minimizing the saltwater mass increase and maximizing the total pumping rate in the coastal aquifers are considered. The optimal solutions achieved by the proposed adaptive surrogate model are compared against those solutions obtained from the one-shot surrogate model and the original simulation model. The adaptive surrogate model not only improves the prediction accuracy of Pareto-optimal solutions compared with those by the one-shot surrogate model, but also maintains the equivalent quality of Pareto-optimal solutions compared with those by NSGAII coupled with the original simulation model, while retaining the advantage of surrogate models in reducing computational burden up to 94% of time-saving. This study shows that the proposed methodology is a computationally efficient and promising tool for multiobjective optimization of coastal aquifer management.

  12. Optimal Model-Based Control in HVAC Systems

    Komareji, Mohammad; Stoustrup, Jakob; Rasmussen, Henrik

    2008-01-01

    This paper presents optimal model-based control of a heating, ventilating, and air-conditioning (HVAC) system. This HVAC system is made of two heat exchangers: an air-to-air heat exchanger (a rotary wheel heat recovery) and a water-to-air heat exchanger. First, a dynamic model of the HVAC system is developed. Then the optimal control structure is designed and implemented. The HVAC system is split into two subsystems. By selecting the right set-points and appropriate cost functions for each subsystem controller, the optimal control strategy is respected to guarantee the minimum thermal and electrical energy consumption. Finally, the controller is applied to control the mentioned HVAC system and the results show that the expected goals are fulfilled.

  13. Visibility-based optimal path and motion planning

    Wang, Paul Keng-Chieh

    2015-01-01

    This monograph deals with various visibility-based path and motion planning problems motivated by real-world applications such as exploration and mapping planetary surfaces, environmental surveillance using stationary or mobile robots, and imaging of global air/pollutant circulation. The formulation and solution of these problems call for concepts and methods from many areas of applied mathematics including computational geometry, set-covering, non-smooth optimization, combinatorial optimization and optimal control. Emphasis is placed on the formulation of new problems and methods of approach to these problems. Since geometry and visualization play important roles in the understanding of these problems, intuitive interpretations of the basic concepts are presented before detailed mathematical development. The development of a particular topic begins with simple cases illustrated by specific examples, and then progresses forward to more complex cases. The intended readers of this monograph are primarily studen...

  14. Optimization of Classical Hydraulic Engine Mounts Based on RMS Method

    J. Christopherson

    2005-01-01

    Based on RMS averaging of the frequency response functions of the absolute acceleration and relative displacement transmissibility, optimal parameters describing the hydraulic engine mount are determined to explain the internal mount geometry. More specifically, it is shown that a line of minima exists to define a relationship between the absolute acceleration and relative displacement transmissibility of a sprung mass using a hydraulic mount as a means of suspension. This line of minima is used to determine several optimal systems developed on the basis of different clearance requirements, hence different relative displacement requirements, and compare them by means of their respective acceleration and displacement transmissibility functions. In addition, the transient response of the mount to a step input is also investigated to show the effects of the optimization upon the time domain response of the hydraulic mount.
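
    The RMS-averaging criterion can be illustrated on a generic base-excited single-degree-of-freedom mass-spring-damper, used here only as a stand-in for the paper's hydraulic mount model; the frequency band, grid resolution, and damping values below are arbitrary assumptions.

```python
import math

def transmissibility(r, zeta):
    """Absolute displacement transmissibility |T| of a base-excited 1-DOF
    mass-spring-damper at frequency ratio r = w/wn and damping ratio zeta."""
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

def rms_over_band(zeta, r_lo=0.0, r_hi=3.0, n=600):
    """RMS-average the transmissibility FRF over a frequency band,
    mirroring the paper's RMS-based criterion."""
    rs = [r_lo + (r_hi - r_lo) * i / (n - 1) for i in range(n)]
    return math.sqrt(sum(transmissibility(r, zeta) ** 2 for r in rs) / n)

# Scan a grid of damping ratios and pick the one minimizing the RMS measure:
# too little damping is punished at resonance, too much at higher frequencies.
zetas = [0.05 * k for k in range(1, 21)]   # 0.05 .. 1.0
best_zeta = min(zetas, key=rms_over_band)
```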

  15. Analog Circuit Design Optimization Based on Evolutionary Algorithms

    Mansour Barari

    2014-01-01

    This paper investigates an evolutionary-based design system for automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, the genetic algorithm and the particle swarm optimization (PSO) algorithm, are proposed to design analog ICs with practical user-defined specifications. On the basis of the combination of HSPICE and MATLAB, the system links circuit performances, evaluated through specific electrical simulation, to the optimization system in the MATLAB environment, for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods like genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.
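
    A generic PSO kernel of the kind that drives such sizing loops can be sketched as below. This omits the HSPICE/MATLAB simulation link entirely: the cost function, bounds, and all coefficient values are assumptions for illustration, with the simulator replaced by a cheap analytic test function.

```python
import random

def pso(cost, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization (minimization): inertia w,
    cognitive weight c1, social weight c2."""
    low, high = bounds
    pos = [[random.uniform(low, high) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(high, max(low, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Usage: in a real sizing flow, `cost` would invoke the circuit simulator;
# here a shifted sphere function stands in for it.
random.seed(2)
sol, val = pso(lambda v: sum((c - 1.0) ** 2 for c in v), dim=4, bounds=(-5.0, 5.0))
```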

  16. Joint Optimization in UMTS-Based Video Transmission

    Attila Zsiros

    2007-01-01

    A software platform, developed to enable demonstration and capacity testing, is presented. The platform simulates a jointly optimized wireless video transmission. The development took place within the frame of the IST-PHOENIX project and is based on the system optimization model of the project. One of the constitutive parts of the model, the wireless network segment, is changed to a detailed, standard UTRA network simulation module. This paper consists of (1) a brief description of the project's simulation chain, (2) a brief description of the UTRAN system, and (3) the integration of the two segments. The role of the UTRAN part in the joint optimization is described, together with the configuration and control of this element. Finally, some simulation results are shown. In the conclusion, we show how our simulation results translate into real-world performance gains.

  17. Rain Scattering and Co-ordinate Distance Calculation

    M. Hajny

    1998-12-01

    Calculations of the field scattered by rain objects are based on the Multiple MultiPole (MMP) numerical method. Both the bi-static scattering function and the bi-static scattering cross section are calculated in the plane parallel to the Earth's surface. The co-ordination area was determined using the simple model of scattering volume [1]. Calculations were performed for a frequency of 9.595 GHz and an antenna elevation of 25°. The obtained results are compared with calculations in accordance with the ITU-R recommendation.

  18. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting method, from the point of view of robot kinematics and validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumptions of different nozzle mounting methods were also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, thus achieving a constant nozzle speed. In this way, it is possible to optimize robot performance and to economize robot energy.

  19. Optimization Model for Web Based Multimodal Interactive Simulations.

    Halic, Tansel; Ahn, Woojin; De, Suvranu

    2015-07-15

    This paper presents a technique for optimizing the performance of web based multimodal interactive simulations. For such applications, where visual quality and the performance of simulations directly influence user experience, overloading of hardware resources may result in an unsatisfactory reduction in the quality of the simulation and user satisfaction. However, optimization of simulation performance on individual hardware platforms is not practical. Hence, we present a mixed integer programming model to optimize the performance of graphical rendering and simulation performance while satisfying application specific constraints. Our approach includes three distinct phases: identification, optimization and update. In the identification phase, the computing and rendering capabilities of the client device are evaluated using an exploratory proxy code. This data is utilized in conjunction with user specified design requirements in the optimization phase to ensure the best possible computational resource allocation. The optimum solution is used to set rendering parameters (e.g. texture size, canvas resolution) and simulation parameters (e.g. simulation domain) in the update phase. Test results are presented on multiple hardware platforms with diverse computing and graphics capabilities to demonstrate the effectiveness of our approach.
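
    The identification/optimization/update idea can be illustrated with a toy resource-allocation problem. The parameter names, quality scores, and load budget below are invented for illustration, and a brute-force search over discrete levels stands in for the paper's mixed integer program (a real system would use an integer-programming solver).

```python
from itertools import product

def pick_settings(levels, quality, load, budget):
    """Choose one level per rendering/simulation parameter so as to maximize
    total quality while keeping the total hardware load within the budget
    measured in the identification phase."""
    best, best_q = None, -1.0
    for combo in product(*levels.values()):
        q = sum(quality[k][v] for k, v in zip(levels, combo))
        l = sum(load[k][v] for k, v in zip(levels, combo))
        if l <= budget and q > best_q:
            best, best_q = dict(zip(levels, combo)), q
    return best, best_q

# Hypothetical per-level quality scores and hardware-load costs.
levels = {"texture": [256, 512, 1024], "resolution": [480, 720, 1080]}
quality = {"texture": {256: 1, 512: 2, 1024: 4}, "resolution": {480: 1, 720: 3, 1080: 5}}
load = {"texture": {256: 1, 512: 2, 1024: 5}, "resolution": {480: 1, 720: 2, 1080: 4}}
choice, q = pick_settings(levels, quality, load, budget=6)
```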

  20. Model county ordinance for wind projects

    Bain, D.A. [Oregon Office of Energy, Portland, OR (United States)

    1997-12-31

    Permitting is a crucial step in the development cycle of a wind project, and permits affect the timing, cost, location, feasibility, layout, and impacts of wind projects. Counties often have the lead responsibility for permitting, yet few have appropriate siting regulations for wind projects. A model ordinance allows a county to quickly adopt appropriate permitting procedures. The model county wind ordinance developed for use by northwest states is generally applicable across the country, and counties seeking to adopt siting or zoning regulations for wind will find it a good starting place. The model includes permitting procedures for wind measurement devices and two types of wind systems. Both discretionary and nondiscretionary standards apply to wind systems, and a conditional use permit would be issued. The standards, criteria, conditions for approval, and process procedures are defined for each. Adaptation examples for the four northwest states are provided along with a model Wind Resource Overlay Zone.

  1. The revised German radiation protection ordinance

    Palm, M.

    2002-01-01

    Since August 2001, German radiation protection law has been governed by a new Radiation Protection Ordinance, implementing two new Euratom Directives and taking into account new scientific developments, which provides a comprehensive basis for the protection of man and the environment. The Ordinance has been completely restructured; however, it is still a very complex piece of legislation comprising 118 provisions and 14 annexes, some of them highly technical. Reduced dose limits for occupationally exposed persons and members of the public, a detailed provision on clearance of radioactive substances, a new part aiming at the protection of man and the environment against ionising radiation emanating from natural sources, and regulations dealing with the protection of consumers in connection with the addition of radioactive substances to consumer goods are some of the centre pieces of the new legislation, which shall contribute significantly to the further prevention or at least minimisation of the adverse effects of radiation exposure. (orig.)

  2. Optimal difference-based estimation for partially linear models

    Zhou, Yuejin; Cheng, Yebin; Dai, Wenlin; Tong, Tiejun

    2017-01-01

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.
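
    The first-order version of such a difference-based residual-variance estimator is easy to state. The sketch below, with an arbitrary smooth trend and a known noise level, illustrates why ordered first differences nearly cancel the nonparametric component while preserving the noise variance; it is a generic illustration, not the paper's optimal-sequence estimator.

```python
import math
import random

def diff_variance(y):
    """First-order difference-based estimator of the residual variance in
    y_i = f(x_i) + e_i with smooth f and ordered x_i:
        sigma^2 ~ sum_i (y_{i+1} - y_i)^2 / (2 (n - 1)).
    The smooth part contributes O(1/n^2) per difference, so the estimator
    is dominated by the noise term."""
    n = len(y)
    return sum((y[i + 1] - y[i]) ** 2 for i in range(n - 1)) / (2.0 * (n - 1))

# Usage: a smooth signal plus Gaussian noise with true variance 0.25.
random.seed(3)
n = 5000
x = [i / n for i in range(n)]
y = [math.sin(2 * math.pi * t) + random.gauss(0.0, 0.5) for t in x]
est = diff_variance(y)
```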

  3. Modification of species-based differential evolution for multimodal optimization

    Idrus, Said Iskandar Al; Syahputra, Hermawan; Firdaus, Muliawan

    2015-12-01

    Optimization now plays an important role in various fields, among them operations research, industry, finance and management. An optimization problem is the problem of maximizing or minimizing a function of one or many variables; such functions may be unimodal or multimodal. Differential Evolution (DE) is a random search technique that uses vectors as candidate solutions in the search for the optimum. To locate all local maxima and minima of a multimodal function, the function's domain can be divided into several fitness subdomains using a niching method. The species-based niching method is one such method, building sub-populations, or species, over the domain of the function. This paper describes a modification of the earlier species-based approach that reduces its computational complexity and runs more efficiently. The results on the test functions show that the species-based modification is able to locate all the local optima in a single run of the program.
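
    The species-construction step used in species-based niching (scan individuals from best to worst; an individual founds a new species unless an existing species seed lies within the niche radius) can be sketched as follows. The toy bimodal landscape, population, and radius are assumptions for illustration.

```python
def species_seeds(population, fitness, radius):
    """Partition a population into species for niching: individuals are
    scanned in order of fitness (best first, minimization), and each either
    joins the species of a seed within `radius` or founds a new species."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    seeds = []
    for ind in sorted(population, key=fitness):
        if not any(dist(ind, s) <= radius for s in seeds):
            seeds.append(ind)  # this individual becomes a new species seed
    return seeds

# Usage on a 1-D bimodal landscape with minima near x = -2 and x = 2:
# the seeds found are the best individual in each niche, which a species-based
# DE would then evolve as separate sub-populations.
f = lambda v: (v[0] ** 2 - 4.0) ** 2
pop = [[-2.1], [-1.9], [0.5], [1.8], [2.2], [3.5]]
seeds = species_seeds(pop, f, radius=2.0)
```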

  4. Optimal difference-based estimation for partially linear models

    Zhou, Yuejin

    2017-12-16

    Difference-based methods have attracted increasing attention for analyzing partially linear models in the recent literature. In this paper, we first propose to solve the optimal sequence selection problem in difference-based estimation for the linear component. To achieve the goal, a family of new sequences and a cross-validation method for selecting the adaptive sequence are proposed. We demonstrate that the existing sequences are only extreme cases in the proposed family. Secondly, we propose a new estimator for the residual variance by fitting a linear regression method to some difference-based estimators. Our proposed estimator achieves the asymptotic optimal rate of mean squared error. Simulation studies also demonstrate that our proposed estimator performs better than the existing estimator, especially when the sample size is small and the nonparametric function is rough.

  5. Adjoint current-based approaches to prostate brachytherapy optimization

    Roberts, J. A.; Henderson, D. L.

    2009-01-01

    This paper builds on previous work done at the Univ. of Wisconsin - Madison to employ the adjoint concept of nuclear reactor physics in the so-called greedy heuristic of brachytherapy optimization. Whereas that previous work focused on the adjoint flux, i.e. the importance, this work has included use of the adjoint current to increase the amount of information available for optimization. Two current-based approaches were developed for 2-D problems, and each was compared to the most recent form of the flux-based methodology. The first method aimed to take a treatment plan from the flux-based greedy heuristic and adjust it via application of the current-displacement, i.e. a vector displacement based on a combination of tissue (adjoint) and seed (forward) currents acting as forces on a seed. This method showed promise in improving key urethral and rectal dosimetric quantities. The second method uses the normed current-displacement as the greedy criterion, such that seeds are placed in regions of least force. This method, coupled with the dose-update scheme, generated treatment plans with better target irradiation and sparing of the urethra and normal tissues than the flux-based approach. Tables of these parameters are given for both approaches. In summary, these preliminary results indicate that adjoint current methods are useful in optimization and that further work in 3-D should be performed. (authors)

  6. Nuclear data preparation and discrete ordinates calculation

    Carmignani, B.

    1980-01-01

    These lectures deal with the use of the GAM-GATHER and GAM-THERMOS chains for the calculation of lattice cross sections and with the use of the one-dimensional discrete ordinates ANISN code for the calculation of criticality and of the flux distribution in the cell and in the whole reactor. As an example, the codes are applied to the calculation of a PWR. Results of different approximations are compared. (author)

  7. A variational synthesis nodal discrete ordinates method

    Favorite, J.A.; Stacey, W.M.

    1999-01-01

    A self-consistent nodal approximation method for computing discrete ordinates neutron flux distributions has been developed from a variational functional for neutron transport theory. The advantage of the new nodal method formulation is that it is self-consistent in its definition of the homogenized nodal parameters, the construction of the global nodal equations, and the reconstruction of the detailed flux distribution. The efficacy of the method is demonstrated by two-dimensional test problems

  8. The Trend Odds Model for Ordinal Data

    Capuano, Ana W.; Dawson, Jeffrey D.

    2013-01-01

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520

  9. The trend odds model for ordinal data.

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated.
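
    The latent-variable picture described in the abstract can be checked numerically: a location shift of a latent logistic variable gives a constant cumulative log-odds difference at every cut-point (proportional odds), while a scale change makes the difference drift monotonically across cut-points (trend odds). The cut-points, shift, and scale below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def cum_logits(z, cuts):
    # Empirical cumulative log-odds log[P(Z <= c) / P(Z > c)] at each cut-point.
    p = np.array([(z <= c).mean() for c in cuts])
    return np.log(p / (1.0 - p))

n = 200_000
cuts = np.array([-1.0, 0.0, 1.0])
base = rng.logistic(0.0, 1.0, n)

# A location shift of the latent logistic keeps the log-odds difference
# constant across cut-points (proportional odds) ...
d_shift = cum_logits(base, cuts) - cum_logits(rng.logistic(0.7, 1.0, n), cuts)

# ... while a scale change makes it drift monotonically (trend odds).
d_scale = cum_logits(base, cuts) - cum_logits(rng.logistic(0.0, 1.5, n), cuts)
```

For the standard logistic, the cumulative logit at cut-point c is exactly c, so d_shift is 0.7 at every cut while d_scale equals c - c/1.5 = c/3, increasing linearly across the cuts.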

  10. Group Elevator Peak Scheduling Based on Robust Optimization Model

    ZHANG, J.

    2013-08-01

    Scheduling of an Elevator Group Control System (EGCS) is a typical combinatorial optimization problem, and uncertain group scheduling under peak traffic flows has recently become a research focus and difficulty. Robust Optimization (RO) is a novel and effective way to deal with uncertain scheduling problems. In this paper, a peak scheduling method based on an RO model for a multi-elevator system is proposed. The method is immune to the uncertainty of peak traffic flows: optimal scheduling is realized without knowing the exact number of waiting passengers on each calling floor. Specifically, an energy-saving oriented multi-objective scheduling price is proposed, and an RO uncertain peak scheduling model is built to minimize this price. Because the RO uncertain model cannot be solved directly, it is transformed into an RO certain model by means of elevator scheduling robust counterparts. Because the solution space of elevator scheduling is enormous, an ant colony algorithm for elevator scheduling is proposed to solve the RO certain model in a short time. Based on this algorithm, optimal scheduling solutions are found quickly, and the group elevators are scheduled accordingly. Simulation results show that the method effectively improves scheduling performance in the peak pattern, realizing efficient operation of the group elevators.

  11. Computer Based Porosity Design by Multi Phase Topology Optimization

    Burblies, Andreas; Busse, Matthias

    2008-02-01

    A numerical simulation technique called Multi Phase Topology Optimization (MPTO), based on the finite element method, has been developed and refined by Fraunhofer IFAM during the last five years. MPTO is able to determine the optimum distribution of two or more different materials in components under thermal and mechanical loads. The objective of optimization is to minimize the component's elastic energy. Conventional topology optimization methods which simulate adaptive bone mineralization have the disadvantage that growth processes continuously change the mass. MPTO keeps all initial material concentrations and uses methods adapted from molecular dynamics to find the energy minimum. When MPTO is applied to mechanically loaded components with a high number of different material densities, the optimization results show graded and sometimes anisotropic porosity distributions which are very similar to natural bone structures. It is now possible to design the macro- and microstructure of a mechanical component in one step. Computer-based porosity design structures can be manufactured by new Rapid Prototyping technologies. Fraunhofer IFAM has successfully applied 3D-Printing and Selective Laser Sintering methods in order to produce very stiff, lightweight components with graded porosities calculated by MPTO.

  12. Surrogate-Based Optimization of Biogeochemical Transport Models

    Prieß, Malte; Slawig, Thomas

    2010-09-01

    First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost, avoiding expensive function and derivative evaluations by using a surrogate model to replace the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening in the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete stepsize in time and space and the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method, we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.

  13. Chaos Time Series Prediction Based on Membrane Optimization Algorithms

    Meng Li

    2015-01-01

    This paper puts forward a prediction model for chaotic time series based on a membrane computing optimization algorithm; the model simultaneously optimizes the parameters of the phase space reconstruction (τ, m) and of the least squares support vector machine (LS-SVM) (γ, σ) by using the membrane computing optimization algorithm. Accurately predicting the change trend of parameters in the electromagnetic environment is an important basis for spectrum management and can help decision makers adopt an optimal action. The model presented in this paper is then used to forecast the band occupancy rate of the frequency modulation (FM) broadcasting band and the interphone band. To show the applicability and superiority of the proposed model, it is compared with conventional similar models. The experimental results show that, for both single-step and multistep prediction, the proposed model performs best on three error measures, namely, normalized mean square error (NMSE), root mean square error (RMSE), and mean absolute percentage error (MAPE).
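
    Phase space reconstruction with parameters (τ, m) is time-delay embedding. A minimal sketch follows, with a nearest-neighbour predictor standing in for the LS-SVM; the logistic map used as the chaotic series and the values of τ and m are illustrative assumptions.

```python
import numpy as np

def embed(x, m, tau):
    """Time-delay embedding: row t is [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}]."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

def predict_next(x, m=2, tau=1):
    # Nearest-neighbour one-step prediction in the reconstructed space:
    # find the past state closest to the current one, return its successor.
    X = embed(np.asarray(x), m, tau)
    query, history = X[-1], X[:-1]
    j = np.argmin(np.linalg.norm(history - query, axis=1))
    return x[j + (m - 1) * tau + 1]
```

An LS-SVM trained on the embedded states plays the same role as the nearest-neighbour lookup here, mapping reconstructed states to their successors.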

  14. Otsu Based Optimal Multilevel Image Thresholding Using Firefly Algorithm

    N. Sri Madhava Raja

    2014-01-01

    A histogram-based multilevel thresholding approach is proposed using a Brownian distribution (BD) guided firefly algorithm (FA). A bounded search technique is also presented to improve the optimization accuracy with fewer search iterations. Otsu's between-class variance function is maximized to obtain optimal threshold levels for gray scale images. The performance of the proposed algorithm is demonstrated on twelve benchmark images and compared with existing FA variants such as the Lévy flight (LF) guided FA and the random operator guided FA. The performance assessment between the proposed and existing firefly algorithms uses prevailing parameters such as the objective function, standard deviation, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and CPU search time. The results show that the BD guided FA provides a better objective function, PSNR, and SSIM, whereas the LF based FA provides faster convergence with relatively lower CPU time.
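
    For a single threshold, the between-class variance that the firefly algorithm maximizes can be evaluated exhaustively; the sketch below shows the objective itself (the multilevel FA search is not reproduced, and the synthetic bimodal image in the usage example is an assumption for the demo).

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive single-level Otsu: maximize the between-class variance
    w0 * w1 * (mu0 - mu1)^2 over all 8-bit gray-level thresholds."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    levels = np.arange(256, dtype=float)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue  # one class empty: variance undefined, skip
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

For multilevel thresholding the search space grows combinatorially, which is why metaheuristics such as the firefly algorithm replace the exhaustive scan.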

  15. Optimization based tuning approach for offset free MPC

    Olesen, Daniel Haugård; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp

    2012-01-01

    We present an optimization based tuning procedure with certain robustness properties for an offset free Model Predictive Controller (MPC). The MPC is designed for multivariate processes that can be represented by an ARX model. The advantage of ARX model representations is that standard system identification techniques using convex optimization can be used to identify such models from input-output data. The stochastic part of the ARX model identified from input-output data is modified with an ARMA model designed as part of the MPC design procedure to ensure offset-free control. The ARMAX model description resulting from the extension can be realized as a state space model in innovation form, and the MPC is designed and implemented based on this state space model. Expressions for the closed-loop dynamics of the unconstrained system are used to derive the sensitivity...
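
    ARX identification from input-output data is indeed a convex problem: it reduces to linear least squares. A minimal sketch on a simulated first-order ARX process follows; the model order, coefficients, and noise level are invented for the demo and are not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a first-order ARX process: y_t = a*y_{t-1} + b*u_{t-1} + e_t.
a_true, b_true = 0.8, 0.5
N = 5000
u = rng.normal(size=N)
e = 0.1 * rng.normal(size=N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + b_true * u[t - 1] + e[t]

# Least-squares ARX identification: regress y_t on the regressor
# vector [y_{t-1}, u_{t-1}]; this is an unconstrained convex problem.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
```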

  16. Compliance to two city convenience store ordinance requirements.

    Chaumont Menéndez, Cammie K; Amandus, Harlan E; Wu, Nan; Hendricks, Scott A

    2016-04-01

    Robbery-related homicides and assaults are the leading cause of death in retail businesses. Robbery reduction approaches focus on compliance with Crime Prevention Through Environmental Design (CPTED) guidelines. We evaluated the level of compliance with CPTED guidelines specified by convenience store safety ordinances effective in 2010 in Dallas and Houston, Texas, USA. Convenience stores were defined as businesses less than 10 000 square feet that sell grocery items. Store managers were interviewed about store ordinance requirements from August to November 2011, in a random sample of 594 convenience stores (289 in Dallas, 305 in Houston) that were open before and after the effective dates of their city's ordinance. Data were collected in 2011 and analysed in 2012-2014. Overall, 9% of stores were in full compliance, although 79% reported being registered with the police departments as compliant. Compliance was consistently significantly higher in Dallas than in Houston for many requirements and by store type. Compliance was lower among single owner-operator stores compared with corporate/franchise stores. Compliance with individual requirements was lowest for signage and visibility. Full compliance with the required safety measures is consistent with industry 'best practices' and evidence-based workplace violence prevention research findings. In Houston and Dallas, compliance was higher for some CPTED requirements, but not for the less costly approaches that are also the more straightforward to adopt.

  17. A surrogate based multistage-multilevel optimization procedure for multidisciplinary design optimization

    Yao, W.; Chen, X.; Ouyang, Q.; Van Tooren, M.

    2011-01-01

    Optimization procedure is one of the key techniques to address the computational and organizational complexities of multidisciplinary design optimization (MDO). Motivated by the idea of synthetically exploiting the advantage of multiple existing optimization procedures and meanwhile complying with

  18. Optimized data evaluation for k0-based NAA

    Van Sluijs, R.; Bossus, D.A.W.

    1999-01-01

    k0-NAA allows the simultaneous analysis of up to 67 elements. The k0 method is based on calculations using a special library instead of measuring standards. For efficient use of the method, the calculations and the resulting raw data require optimized evaluation procedures. In this paper two efficient procedures for nuclide identification and gamma interference correction are outlined. For fast computation of the source-detector efficiency and of the coincidence correction factors, the matrix interpolation technique is introduced. (author)

  19. Simulation based optimization on automated fibre placement process

    Lei, Shi

    2018-02-01

    In this paper, a software simulation (Autodesk TruPlan & TruFiber) based method is proposed to optimize the automated fibre placement (AFP) process. Different types of manufacturability analysis are introduced to predict potential defects. Advanced fibre path generation algorithms are compared across geometrically different parts. Major manufacturing data are taken into consideration prior to tool path generation to achieve a high manufacturing success rate.

  20. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
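
    The boundary value problem mentioned in the abstract has the standard structure of the maximum principle: for dynamics $\dot{x} = f(x,u)$ and objective $J = \int_0^T L(x,u)\,dt \to \max$, a forward state equation is coupled with a backward adjoint equation (the notation here is generic, for a free terminal state, and is not the authors'):

```latex
H(x,u,\psi) = L(x,u) + \psi^{\top} f(x,u), \qquad
\dot{x} = \frac{\partial H}{\partial \psi}, \quad x(0) = x_0, \qquad
\dot{\psi} = -\frac{\partial H}{\partial x}, \quad \psi(T) = 0,
```

    with the optimal control maximizing the Hamiltonian pointwise, $u^{*}(t) = \arg\max_{u \in U} H(x(t), u, \psi(t))$. The split boundary conditions, $x$ fixed at $t=0$ and $\psi$ fixed at $t=T$, are what make the optimal trajectories a two-point boundary value problem.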

  1. Tour Route Multiobjective Optimization Design Based on the Tourist Satisfaction

    Yan Han

    2014-01-01

    The question addressed is how to design a tour route that gives tourists maximum satisfaction given their demands. The influence factors in tourists' tour route choices were analyzed, and tourists' behavioral characteristics and psychological preferences were treated as the important influence factors, based on tourist behavioral theories. A questionnaire on tourists' tour route information and satisfaction degree was carried out, yielding information about the scenic spots, tourist demand, and tour behavior characteristics such as visit frequency and the number of attractions visited. Based on the survey data, multiobjective optimization functions for tour route design were proposed, taking maximum satisfaction and minimum tour distance as the optimization objectives. The available routes are listed and categorized. Based on a particle swarm optimization model, the priorities of the tour routes are calculated, and finally suggested in-depth and quick tour routes are given for the different tour demands of tourists. The results can offer constructive suggestions on how to design tour routes on the part of tourism enterprises and how to choose a proper tour route on the part of tourists.

  2. Genetic Algorithm (GA)-Based Inclinometer Layout Optimization.

    Liang, Weijie; Zhang, Ping; Chen, Xianping; Cai, Miao; Yang, Daoguo

    2015-04-17

    This paper presents numerical simulation results for an airflow inclinometer, with sensitivity studies and thermal optimization of the printed circuit board (PCB) layout based on a genetic algorithm (GA). Due to the working principle of the gas sensor, changes in the ambient temperature may cause dramatic voltage drifts of the sensors. Therefore, eliminating the influence of the external environment on the airflow is essential for the performance and reliability of an airflow inclinometer. In this paper, the mechanism of an airflow inclinometer and the influence of different ambient temperatures on its sensitivity are examined with the ANSYS-FLOTRAN CFD program. The results show that the sensitivity of the airflow inclinometer is inversely related to the ambient temperature at the sensing element, decreasing as the ambient temperature increases. A GA is used to optimize the PCB thermal layout of the inclinometer. The finite-element simulation method (ANSYS) is introduced to simulate and verify the results of our optimal thermal layout, and the results indicate that the optimal PCB layout greatly improves (by more than 50%) the sensitivity of the inclinometer. The study may be useful in the design of PCB layouts related to sensitivity improvement of gas sensors.
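
    A generic real-coded GA of the kind used for such layout optimization can be sketched as follows. The operators and parameters are textbook choices rather than the paper's, and a toy sphere objective stands in for the thermal model.

```python
import numpy as np

rng = np.random.default_rng(4)

def ga_minimize(f, dim, pop_size=40, gens=120, pc=0.9, pm=0.1, lo=-5.0, hi=5.0):
    """Minimal real-coded GA: tournament selection, arithmetic crossover,
    Gaussian mutation, and one-elite survival."""
    pop = rng.uniform(lo, hi, (pop_size, dim))
    for _ in range(gens):
        fit = np.array([f(ind) for ind in pop])
        elite = pop[np.argmin(fit)].copy()

        def tournament():
            i, j = rng.choice(pop_size, 2, replace=False)
            return pop[i] if fit[i] < fit[j] else pop[j]

        children = np.array([tournament() for _ in range(pop_size)])
        for i in range(0, pop_size - 1, 2):        # arithmetic crossover
            if rng.random() < pc:
                a = rng.random()
                p, q = children[i].copy(), children[i + 1].copy()
                children[i] = a * p + (1 - a) * q
                children[i + 1] = (1 - a) * p + a * q
        mask = rng.random(children.shape) < pm     # Gaussian mutation
        children[mask] += rng.normal(0.0, 0.3, mask.sum())
        pop = np.clip(children, lo, hi)
        pop[0] = elite                             # elitism: keep the best
    fit = np.array([f(ind) for ind in pop])
    return pop[np.argmin(fit)], float(fit.min())
```

In the paper's setting, the chromosome would encode component positions on the PCB and the objective would be evaluated by the thermal simulation instead of the toy function.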

  3. Simulated Annealing-Based Krill Herd Algorithm for Global Optimization

    Gai-Ge Wang

    2013-01-01

    Recently, Gandomi and Alavi proposed a novel swarm intelligence method, called krill herd (KH), for global optimization. To enhance the performance of the KH method, a new improved meta-heuristic, the simulated annealing-based krill herd (SKH) method, is proposed in this paper for optimization tasks. A new krill selecting (KS) operator is used to refine krill behavior when updating each krill's position, so as to enhance reliability and robustness in dealing with optimization problems. The introduced KS operator combines a greedy strategy with accepting a few not-so-good solutions with a low probability, as originally used in simulated annealing (SA). In addition, an elitism scheme is used to save the best individuals in the population during the krill updating process. The merits of these improvements are verified on fourteen standard benchmark functions, and experimental results show that, in most cases, the performance of the improved meta-heuristic SKH method is superior to, or at least highly competitive with, the standard KH and other optimization methods.
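
    The "accepting a few not-so-good solutions with a low probability" rule borrowed from SA is the Metropolis criterion. A minimal stand-alone SA sketch on a hypothetical double-well objective follows; the step size, cooling schedule, and objective are illustrative assumptions, not the SKH algorithm itself.

```python
import math
import random

random.seed(5)

def simulated_annealing(f, x0, step=0.5, T0=1.0, cooling=0.997, iters=4000):
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        # Metropolis rule: always accept improvements, and accept a worse
        # candidate with probability exp(-(fc - fx)/T). This is the mechanism
        # the KS operator borrows to escape local optima.
        if fc <= fx or random.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
        T *= cooling  # geometric cooling: acceptance of worse moves decays
    return best, fbest
```

Started in the shallower basin of a double well, the chain crosses the barrier while the temperature is high and settles into the deeper minimum as it cools.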

  4. Fog computing job scheduling optimization based on bees swarm

    Bitam, Salim; Zeadally, Sherali; Mellouk, Abdelhamid

    2018-04-01

    Fog computing is a new computing architecture, composed of a set of near-user edge devices called fog nodes, which collaborate in order to perform computational services such as running applications, storing significant amounts of data, and transmitting messages. Fog computing extends cloud computing by deploying digital resources at the premises of mobile users. In this new paradigm, management and operating functions such as job scheduling aim at providing high-performance, cost-effective services requested by mobile users and executed by fog nodes. We propose a new bio-inspired optimization approach called the Bees Life Algorithm (BLA) to address the job scheduling problem in the fog computing environment. Our proposed approach is based on the optimized distribution of a set of tasks among all the fog computing nodes. The objective is to find an optimal tradeoff between the CPU execution time and the allocated memory required by the fog computing services established by mobile users. Our empirical performance evaluation results demonstrate that the proposal outperforms traditional particle swarm optimization and the genetic algorithm in terms of CPU execution time and allocated memory.

  5. Kriging-based algorithm for nuclear reactor neutronic design optimization

    Kempf, Stephanie; Forget, Benoit; Hu, Lin-Wen

    2012-01-01

    Highlights: ► A Kriging-based algorithm was selected to guide research reactor optimization. ► We examined impacts of parameter values upon the algorithm. ► The best parameter values were incorporated into a set of best practices. ► Algorithm with best practices used to optimize thermal flux of concept. ► Final design produces thermal flux 30% higher than other 5 MW reactors. - Abstract: Kriging, a geospatial interpolation technique, has been used in the present work to drive a search-and-optimization algorithm which produces the optimum geometric parameters for a 5 MW research reactor design. The technique has been demonstrated to produce an optimal neutronic solution after a relatively small number of core calculations. It has additionally been successful in producing a design which significantly improves thermal neutron fluxes by 30% over existing reactors of the same power rating. Best practices for use of this algorithm in reactor design were identified, indicating the importance of selecting proper correlation functions.
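
    The surrogate idea — interpolate a few expensive evaluations, then search the cheap interpolant — can be sketched with a zero-mean Kriging predictor and a Gaussian correlation function. The 1-D objective, correlation parameter theta, and nugget below are illustrative assumptions, not the reactor model.

```python
import numpy as np

def kriging_fit(X, y, theta=50.0, nugget=1e-8):
    """Zero-mean Kriging predictor in 1-D with Gaussian correlation
    R_ij = exp(-theta * (x_i - x_j)^2); the tiny nugget stabilizes R."""
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)
    w = np.linalg.solve(R + nugget * np.eye(len(X)), y)
    return lambda x: float(np.exp(-theta * (x - X) ** 2) @ w)

# Cheap emulator of an "expensive" objective: sample it sparsely, then
# search the Kriging surrogate on a dense grid instead of the true function.
f = lambda x: np.sin(2 * np.pi * x)
X = np.linspace(0.0, 1.0, 21)
surrogate = kriging_fit(X, f(X))
grid = np.linspace(0.0, 1.0, 1001)
x_best = grid[np.argmin([surrogate(g) for g in grid])]
```

In the reactor setting each sample would be a full core calculation, so replacing the dense search with surrogate evaluations is what keeps the number of core calculations small.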

  6. An optimization-based framework for anisotropic simplex mesh adaptation

    Yano, Masayuki; Darmofal, David L.

    2012-09-01

    We present a general framework for anisotropic h-adaptation of simplex meshes. Given a discretization and any element-wise, localizable error estimate, our adaptive method iterates toward a mesh that minimizes error for a given number of degrees of freedom. Utilizing mesh-metric duality, we consider a continuous optimization problem of the Riemannian metric tensor field that provides an anisotropic description of element sizes. First, our method performs a series of local solves to survey the behavior of the local error function. This information is then synthesized using an affine-invariant tensor manipulation framework to reconstruct an approximate gradient of the error function with respect to the metric tensor field. Finally, we perform gradient descent in the metric space to drive the mesh toward optimality. The method is first demonstrated to produce optimal anisotropic meshes minimizing the L2 projection error for a pair of canonical problems containing a singularity and a singular perturbation. The effectiveness of the framework is then demonstrated in the context of output-based adaptation for the advection-diffusion equation using a high-order discontinuous Galerkin discretization and the dual-weighted residual (DWR) error estimate. The method presented provides a unified framework for optimizing both the element size and anisotropy distribution using an a posteriori error estimate and enables efficient adaptation of anisotropic simplex meshes for high-order discretizations.

  7. Optimal configuration of power grid sources based on optimal particle swarm algorithm

    Wen, Yuanhua

    2018-04-01

    In order to optimize the distribution of power grid sources, an improved particle swarm optimization algorithm is proposed. First, the concepts of multi-objective optimization and the Pareto solution set are reviewed. Then, the performance of the classical genetic algorithm, the classical particle swarm optimization algorithm, and the improved particle swarm optimization algorithm is analyzed, and the three algorithms are simulated. Comparison of the test results proves the superiority of the improved algorithm in convergence and optimization performance, which lays the foundation for the subsequent solution of the micro-grid power optimization configuration.
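
    For reference, the classical global-best PSO that such improved variants build on can be sketched as follows. The inertia and acceleration coefficients are common textbook values, and the sphere objective is a stand-in for the grid-source cost function.

```python
import numpy as np

rng = np.random.default_rng(7)

def pso_minimize(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Minimal global-best PSO: inertia weight w, cognitive pull c1 toward
    each particle's personal best, social pull c2 toward the swarm best."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    g_f = pbest_f.min()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f              # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        if pbest_f.min() < g_f:              # update the global best
            g_f = pbest_f.min()
            g = pbest[np.argmin(pbest_f)].copy()
    return g, float(g_f)
```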

  8. Cat swarm optimization based evolutionary framework for multi document summarization

    Rautray, Rasmita; Balabantaray, Rakesh Chandra

    2017-07-01

    Today, the World Wide Web has brought us an enormous quantity of on-line information, and as a result, extracting relevant information from massive data has become a challenging issue. In the recent past, text summarization has been recognized as one solution for extracting useful information from vast numbers of documents. Based on the number of documents considered for summarization, it is categorized as single-document or multi-document summarization. Multi-document summarization is more challenging than single-document summarization for researchers seeking an accurate summary from multiple documents. Hence, in this study, a novel Cat Swarm Optimization (CSO) based multi-document summarizer is proposed to address the problem of multi-document summarization. The proposed CSO based model is also compared with two other nature-inspired summarizers, a Harmony Search (HS) based summarizer and a Particle Swarm Optimization (PSO) based summarizer. On the benchmark Document Understanding Conference (DUC) datasets, the performance of all algorithms is compared in terms of different evaluation metrics, such as ROUGE score, F score, sensitivity, positive predictive value, summary accuracy, inter-sentence similarity, and a readability metric, to validate the non-redundancy, cohesiveness, and readability of the summaries. The experimental analysis clearly reveals that the proposed approach outperforms the other summarizers included in the study.

  9. Analyze the optimal solutions of optimization problems by means of fractional gradient based system using VIM

    Firat Evirgen

    2016-04-01

    In this paper, a class of Nonlinear Programming (NLP) problems is modeled with a gradient based system of fractional order differential equations in Caputo's sense. To see the overlap between the equilibrium point of the fractional order dynamic system and the optimal solution of the NLP problem over a longer timespan, the Multistage Variational Iteration Method is applied. The comparisons among the multistage variational iteration method, the variational iteration method, and the fourth order Runge-Kutta method in fractional and integer order show that the fractional order model and techniques can be seen as an effective and reliable tool for finding the optimal solutions of Nonlinear Programming problems.

  10. Mesh Denoising based on Normal Voting Tensor and Binary Optimization.

    Yadav, Sunil Kumar; Reitebuch, Ulrich; Polthier, Konrad

    2017-08-17

    This paper presents a two-stage mesh denoising algorithm. Unlike other traditional averaging approaches, our approach uses an element-based normal voting tensor to compute smooth surfaces. By introducing a binary optimization on the proposed tensor together with a local binary neighborhood concept, our algorithm better retains sharp features and produces smoother umbilical regions than previous approaches. On top of that, we provide a stochastic analysis on the different kinds of noise based on the average edge length. The quantitative results demonstrate that the performance of our method is better compared to state-of-the-art smoothing approaches.

  11. Attitude Optimal Backstepping Controller Based Quaternion for a UAV

    Kaddouri Djamel

    2016-01-01

    A hierarchical controller design based on nonlinear H∞ theory and the backstepping technique is developed for a nonlinear and coupled dynamic attitude system using the conventional quaternion based method. The derived controller combines the attractive features of the H∞ optimal controller and the advantages of the backstepping technique, leading to a control law which avoids the unwinding phenomenon. Performance of the controller is illustrated in a simulation study made for a four-rotor vertical take-off and landing (VTOL) aerial robot prototype known as the quadrotor aircraft.

  12. Comparison of Discriminant Analysis and Ordinal Logistic Regression Analysis in Predicting the Classification of Bank Health Conditions

    Fajri Zufa

    2017-12-01

    The purpose of this research is to compare the accuracy of bank classification predictions based on the Capital Adequacy Ratio (CAR), Earning Asset Quality (EAQ), Non Performing Loan (NPL), Return on Assets (ROA), Net Interest Margin (NIM), Short Term Mismatch (STM) and Loan to Deposit Ratio (LDR). Discriminant analysis and ordinal logistic regression analysis are compared for the classification prediction. The data used are secondary data, namely the 2014 classification of bank conditions in Indonesia obtained from the research institute PT Infovesta Utama. Based on the Apparent Error Rate (APER) scores obtained, discriminant analysis is better at predicting the classification of bank conditions in Indonesia than ordinal logistic regression analysis: discriminant analysis has an average prediction accuracy of 80%, while ordinal logistic regression analysis has an average prediction accuracy of 74.38%.

  13. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    Cserkaszky, Á; Légrády, D.; Wirth, A.; Bükki, T.; Patay, G.

    2011-01-01

    This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required by the iterative reconstruction algorithm, so to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for effective partitioning of computational effort into the iterations in limited-time reconstructions. (author)

  14. RISK LOAN PORTFOLIO OPTIMIZATION MODEL BASED ON CVAR RISK MEASURE

    Ming-Chang LEE

    2015-07-01

    Full Text Available In order to meet commercial banks' objectives of liquidity, safety and profitability, loan portfolio optimization decisions based on risk analysis amount to a rational allocation of assets. Risk analysis and asset allocation are key technologies of banking and risk management. The aim of this paper is to build a loan portfolio optimization model based on risk analysis. Constraining the loan portfolio rate of return with Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) reflects the bank's risk tolerance and gives direct control over the bank's potential loss. The paper analyzes a general risk management model applied to portfolio problems with VaR and CVaR risk measures using the Lagrangian algorithm, and solves this difficult problem by matrix operations. With the proposed method, it is easy to see that the efficient frontier of the portfolio problem with VaR and CVaR risk measures is a hyperbola in mean-standard deviation space, and the calculation is straightforward.

  15. Optimal alignment of mirror based pentaprisms for scanning deflectometric devices

    Barber, Samuel K.; Geckeler, Ralf D.; Yashchuk, Valeriy V.; Gubarev, Mikhail V.; Buchheim, Jana; Siewert, Frank; Zeschke, Thomas

    2011-03-04

    In recent work [Proc. SPIE 7801, 7801-2/1-12 (2010); Opt. Eng. 50(5) (2011), in press], we reported on an improvement of the Developmental Long Trace Profiler (DLTP), a slope-measuring profiler available at the Advanced Light Source Optical Metrology Laboratory, achieved by replacing the bulk pentaprism with a mirror based pentaprism (MBPP). An original experimental procedure for optimal mutual alignment of the MBPP mirrors was suggested and verified with numerical ray-tracing simulations. It was experimentally shown that the optimally aligned MBPP eliminates the systematic errors introduced by inhomogeneity of the optical material and fabrication imperfections of the bulk pentaprism. In the present article, we provide the analytical derivation and verification of easily executed optimal alignment algorithms for two different designs of mirror based pentaprisms. We also provide an analytical description of the mechanism by which the systematic errors introduced by a typical high-quality bulk pentaprism are reduced. It is also shown that residual misalignments of an MBPP introduce entirely negligible systematic errors in surface slope measurements with scanning deflectometric devices.

  16. Optimization on Trajectory of Stanford Manipulator based on Genetic Algorithm

    Han Xi

    2017-01-01

    Full Text Available The optimization of robot manipulator trajectories has become a hot topic in academic and industrial fields. In this paper, a method for minimizing the moving distance of robot manipulators is presented. The Stanford Manipulator is used as the research object and the inverse kinematics model is established with the Denavit-Hartenberg method. Based on the initial posture matrix, the inverse kinematics model is used to find the initial state of each joint. Given the start time of each joint, cubic polynomial interpolation is applied to each joint variable and the forward kinematics model is used to calculate the moving distance of the end effector. A genetic algorithm is used to optimize the sequential order of the joints and the time differences between their starting times. Numerical applications involving a Stanford manipulator are presented.

  17. Gradient-based optimization in nonlinear structural dynamics

    Dou, Suguang

    The intrinsic nonlinearity of mechanical structures can give rise to rich nonlinear dynamics. Recently, nonlinear dynamics of micro-mechanical structures have contributed to developing new Micro-Electro-Mechanical Systems (MEMS), for example, atomic force microscopes, passive frequency dividers, frequency stabilization, and disk resonator gyroscopes. For advanced design of these structures, it is of considerable value to extend current optimization in linear structural dynamics into nonlinear structural dynamics. In this thesis, we present a framework for modelling, analysis, characterization, and optimization of nonlinear structural dynamics. In the modelling, nonlinear finite elements are used. In the analysis, nonlinear frequency response and nonlinear normal modes are calculated based on a harmonic balance method with higher-order harmonics. In the characterization, nonlinear modal coupling…

  18. Structural Optimization based on the Concept of First Order Analysis

    Shinji, Nishiwaki; Hidekazu, Nishigaki; Yasuaki, Tsurumi; Yoshio, Kojima; Noboru, Kikuchi

    2002-01-01

    Computer Aided Engineering (CAE) has been successfully utilized in mechanical industries such as the automotive industry. It is, however, difficult for most mechanical design engineers to use CAE directly due to the sophisticated nature of the operations involved. To mitigate this problem, a new type of CAE, First Order Analysis (FOA), has been proposed. This paper presents the outcome of research concerning the development of a structural topology optimization methodology within FOA. The optimization method is constructed based on discrete and function-oriented elements, such as beam and panel elements, and sequential convex programming. In addition, examples are provided to show the utility of the methodology for mechanical design engineers.

  19. Celestial Navigation Fix Based on Particle Swarm Optimization

    Tsou Ming-Cheng

    2015-09-01

    Full Text Available A technique for solving celestial fix problems is proposed in this study. This method is based on Particle Swarm Optimization from the field of swarm intelligence, utilizing its superior optimization and searching abilities to obtain the most probable astronomical vessel position. In addition to being applicable to two-body fix, multi-body fix, and high-altitude observation problems, it is also less reliant on the initial dead reckoning position. Moreover, by introducing spatial data processing and display functions in a Geographical Information System, calculation results and chart work used in Circle of Position graphical positioning can both be integrated. As a result, in addition to avoiding tedious and complicated computational and graphical procedures, this work has more flexibility and is more robust when compared to other analytical approaches.
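
    As an illustration of the idea (not the authors' implementation; the circles, numbers and parameter choices below are invented), a plain inertia-weight PSO can minimise the sum of squared residuals of three circles of position, recovering the most probable position at their common intersection.

```python
import math
import random

random.seed(42)

# Three synthetic circles of position (center, radius); they intersect at (3, 4).
circles = [((0.0, 0.0), 5.0), ((6.0, 0.0), 5.0), ((3.0, 9.0), 5.0)]

def cost(p):
    """Sum of squared residuals between observed and computed radii."""
    return sum((math.hypot(p[0] - cx, p[1] - cy) - r) ** 2
               for (cx, cy), r in circles)

def pso(cost, n=30, iters=200, lo=-10.0, hi=10.0, w=0.7, c1=1.5, c2=1.5):
    pts = [[random.uniform(lo, hi), random.uniform(lo, hi)] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pts]                       # personal bests
    gbest = min(pbest, key=cost)[:]                   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pts[i][d])
                             + c2 * random.random() * (gbest[d] - pts[i][d]))
                pts[i][d] += vel[i][d]
            if cost(pts[i]) < cost(pbest[i]):
                pbest[i] = pts[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

fix = pso(cost)   # most probable position, no dead-reckoning seed supplied
```

    Here the swarm settles near (3, 4) without any initial dead-reckoning guess, which mirrors the paper's point about low reliance on the initial position.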

  20. Restaurant volatility and the Iowa City, Iowa, smoke-free restaurant ordinance.

    Sheffer, Megan A; Squier, Christopher A; Gilmore, Gary D

    2013-01-01

    To determine the economic impact of the Iowa City, Iowa, smoke-free restaurant ordinance (IC-SFRO) using an immediate and novel approach. In this retrospective study, food permit licensure served as the measure to assess the IC-SFRO impact. The Iowa City experience provided an excellent experimental setting, as the ordinance was enacted March 1, 2002, and repealed May 7, 2003, because of preemption. The city of Coralville served as a natural control, as it is contiguous to Iowa City, has similar population demographics, and has never enacted a smoke-free restaurant ordinance. Food permit licensure data for all Iowa City and Coralville restaurants were obtained from the Johnson County Health Department. Differences in restaurant volatility were assessed using Fisher's exact probability test. The number of restaurants increased in both Iowa City and Coralville throughout the ordinance period. The ratio of the total number of restaurants in Iowa City to the total number of restaurants in the Iowa City-Coralville metropolitan area remained stable. The proportion of restaurants for each city did not differ significantly during the preordinance, ordinance, and postordinance periods. The IC-SFRO did not adversely impact the restaurant industry in terms of restaurant closures. The Iowa legislature was urged to draft evidence-based legislation, such as amending preemption of the IC-SFRO, to protect and promote the health of its communities.

  1. Anatomical localization of electrophysiological recording sites by co-ordinate transformation.

    Sinex, D G

    1997-07-18

    A method for estimating the anatomical locations of the units recorded in electrophysiological mapping experiments is described. A total of three locations must be marked by dye injections or electrolytic lesions and identified in tissue sections. From those locations, equations are derived to translate, scale, and rotate the three-dimensional co-ordinates of the recording sites, so that they are correct for a second, three-dimensional co-ordinate system based on the anatomy of the mapped structure. There is no limit to the number of recording sites that can be localized. This differs from methods that require a dye injection or lesion to be made at the exact location at which a particular unit was recorded. The accuracy of the transformed co-ordinates is limited by the accuracy with which the co-ordinates can be measured: in test measurements and in the experiments for which this algorithm was developed, the computed co-ordinates were typically accurate to within 100 microns or less.
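
    A minimal sketch of the underlying computation (not the author's code; the names, numbers, and the use of a least-squares similarity transform are our assumptions) estimates scale, rotation and translation from the three marked locations and then maps any number of recording sites.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t with dst ~ s*R*src + t
    (Umeyama/Kabsch method) from matched 3-D landmark co-ordinates."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                 # centred landmarks
    U, S, Vt = np.linalg.svd(B.T @ A)             # SVD of the covariance
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (A ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Three dye-injection sites in recording co-ordinates and in tissue-section
# (anatomical) co-ordinates; values are purely illustrative.
marks_rec = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 1.0]])
c, q = np.cos(0.5), np.sin(0.5)
R_true = np.array([[c, -q, 0.0], [q, c, 0.0], [0.0, 0.0, 1.0]])
marks_anat = 1.5 * marks_rec @ R_true.T + np.array([1.0, 2.0, 3.0])

s, R, t = similarity_transform(marks_rec, marks_anat)
site_anat = s * R @ np.array([0.3, 0.4, 0.2]) + t   # localize a recording site
```

    Once the transform is fixed by the three markers, every recording site is mapped by the same closed-form expression, which is why the number of localizable sites is unlimited.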

  2. Detecting DIF in Polytomous Items Using MACS, IRT and Ordinal Logistic Regression

    Elosua, Paula; Wells, Craig

    2013-01-01

    The purpose of the present study was to compare the Type I error rate and power of two model-based procedures, the mean and covariance structure model (MACS) and the item response theory (IRT), and an observed-score based procedure, ordinal logistic regression, for detecting differential item functioning (DIF) in polytomous items. A simulation…

  3. Co-ordinating innate and adaptive immunity to viral infection: mobility is the key

    Wern, Jeanette Erbo; Thomsen, Allan Randrup

    2009-01-01

    The host counters a viral infection through a complex response made up of components belonging to both the innate and the adaptive immune system. In this report, we review the mechanisms underlying this response, how it is induced and how it is co-ordinated. As cell-cell communication represents the very essence of immune system physiology, a key to a rapid, efficient and optimally regulated immune response is the ability of the involved cells to rapidly shift between a stationary and a mobile state, combined with stringent regulation of cell migration during the mobile state. Through the co-ordinated recruitment of different cell types intended to work in concert, cellular co-operation is optimized, particularly under conditions that may involve rare cells. Consequently, a major focus is placed on presenting an overview of the co-operative events and the associated cell migration, which is essential…

  4. The Pakistan atomic energy commission ordinance, 1965 ordinance no. XVII of 1965

    1983-01-01

    This act, entitled the Pakistan Atomic Energy Commission Ordinance 1965, incorporates the amendments made under the PAEC (Amendment) Act 1974 up to August 1983. The amendments relate to regulations concerning the composition and functions of the commission and some miscellaneous rules. (A.B.)

  5. The adaptive collision source method for discrete ordinates radiation transport

    Walters, William J.; Haghighat, Alireza

    2017-01-01

    Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into n’th collided components. • Uncollided flux requires high quadrature; this is lowered with the number of collisions. • ACS automatically applies an appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5–4 times more efficient than uniform quadrature. - Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high-order quadrature for the first iterations that need it, before shifting to lower-order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. This code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5–4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.
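
    The effect the ACS method exploits can be illustrated with a small sketch (ours, not the TITAN implementation): a forward-peaked, uncollided-like angular flux needs a high-order Gauss-Legendre set, while a nearly isotropic, multiply collided flux is integrated accurately even at low order.

```python
import numpy as np

def gl_integral(f, order):
    """Integrate f(mu) over [-1, 1] with an order-point Gauss-Legendre set."""
    mu, w = np.polynomial.legendre.leggauss(order)
    return float(np.sum(w * f(mu)))

peaked = lambda mu: np.exp(8.0 * mu)      # uncollided-like, forward peaked
flat = lambda mu: 1.0 + 0.1 * mu          # multiply collided, almost isotropic

exact_peaked = (np.exp(8.0) - np.exp(-8.0)) / 8.0
exact_flat = 2.0

# Relative quadrature error for each angular-flux shape at several orders.
errs = {order: (abs(gl_integral(peaked, order) - exact_peaked) / exact_peaked,
                abs(gl_integral(flat, order) - exact_flat) / exact_flat)
        for order in (2, 4, 8, 16)}
```

    A 2-point set already integrates the near-isotropic flux to machine precision, while the peaked flux needs the 16-point set; applying high order only to the early collided components is the source of the saving reported above.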

  6. Discrete Ordinates Method-Like Computation with Group Condensation and Angle Collapsing in Transport Theory

    Won, Jong Hyuck; Cho, Nam Zin

    2010-01-01

    In group condensation for transport methods, it is well known that an angle-dependent total cross section is generated. To remove this difficulty, the group condensation of the total cross section is normally performed using scalar-flux weighting, as in the neutron diffusion method. In this study, the angle-dependent total cross section is directly applied to the discrete ordinates method. In addition, an angle collapsing concept based on equivalence is introduced to reduce the computational burden of the transport calculation. We also show numerical results for a heterogeneous 1-D slab problem with local/global iteration, in which a fine-group discrete ordinates calculation is used in the local problem while a few-group angle-collapsed discrete ordinates calculation is used iteratively in the global problem.
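
    The scalar-flux-weighted condensation mentioned above can be written in a few lines (illustrative numbers, not data from the study); the condensed constant preserves the fine-group reaction rate by construction.

```python
import numpy as np

# Fine-group total cross sections (1/cm) and scalar fluxes (illustrative values).
sigma_fine = np.array([2.1, 1.4, 0.9, 0.6])
phi_fine = np.array([0.2, 0.5, 1.0, 1.3])

# Scalar-flux-weighted condensation of four fine groups into one coarse group,
# as done for the total cross section in the diffusion-like treatment.
phi_coarse = phi_fine.sum()
sigma_coarse = (sigma_fine * phi_fine).sum() / phi_coarse
```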

  7. Optimal Output of Distributed Generation Based On Complex Power Increment

    Wu, D.; Bao, H.

    2017-12-01

    In order to meet the growing demand for electricity and improve the cleanliness of power generation, new energy generation, represented by wind power and photovoltaic generation, has been widely adopted. New energy sources access the distribution network in the form of distributed generation and are consumed by local load. However, as the scale of distributed generation connected to the network increases, the optimization of its power output becomes more and more prominent and needs further study. Classical optimization methods often use the extended sensitivity method to obtain the relationship between different generators, but they ignore the coupling parameters between nodes, which makes the results inaccurate; heuristic algorithms also have defects such as slow calculation speed and uncertain outcomes. This article proposes a method called complex power increment; its essence is the analysis of the power grid under steady power flow. After analyzing the results we can obtain the complex scaling function equation between the power supplies, whose coefficients are based on the impedance parameters of the network, so the description of the relation between the variables and the coefficients is more precise. Thus, the method can accurately describe the power increment relationship, and can obtain the power optimization scheme more accurately and quickly than the extended sensitivity method and heuristic methods.

  8. Simulation-based optimal Bayesian experimental design for nonlinear systems

    Huan, Xun

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical framework and an algorithmic approach for optimal experimental design with nonlinear simulation-based models; in particular, we focus on finding sets of experiments that provide the most information about targeted sets of parameters. Our framework employs a Bayesian statistical setting, which provides a foundation for inference from noisy, indirect, and incomplete data, and a natural mechanism for incorporating heterogeneous sources of information. An objective function is constructed from information theoretic measures, reflecting expected information gain from proposed combinations of experiments. Polynomial chaos approximations and a two-stage Monte Carlo sampling method are used to evaluate the expected information gain. Stochastic approximation algorithms are then used to make optimization feasible in computationally intensive and high-dimensional settings. These algorithms are demonstrated on model problems and on nonlinear parameter inference problems arising in detailed combustion kinetics. © 2012 Elsevier Inc.
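
    A toy version of the estimator (a linear-Gaussian model of our own choosing, without the polynomial chaos and stochastic approximation machinery of the thesis) shows the two-stage Monte Carlo structure: an outer average over simulated experiments and an inner average for the evidence.

```python
import numpy as np

rng = np.random.default_rng(0)
SIG = 0.4                                # observation noise std (assumed)

def log_lik(y, theta, d):
    """Log-likelihood of observation y given parameter theta and design d."""
    return -0.5 * ((y - theta * d) / SIG) ** 2 - np.log(SIG * np.sqrt(2 * np.pi))

def expected_information_gain(d, n_outer=2000, n_inner=500):
    """Nested MC estimate of EIG(d) = E[ln p(y|theta,d) - ln p(y|d)]."""
    total = 0.0
    for _ in range(n_outer):
        theta = rng.standard_normal()                  # draw from N(0,1) prior
        y = theta * d + SIG * rng.standard_normal()    # simulate the experiment
        thetas = rng.standard_normal(n_inner)          # inner prior samples
        log_evidence = np.log(np.mean(np.exp(log_lik(y, thetas, d))))
        total += log_lik(y, theta, d) - log_evidence
    return total / n_outer
```

    For this model the gain is known in closed form, EIG(d) = ½ ln(1 + d²/SIG²), so the estimator can be checked directly, and the larger design is correctly ranked as more informative.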

  9. Reliability-Based Optimal Design for Very Large Floating Structure

    ZHANG Shu-hua(张淑华); FUJIKUBO Masahiko

    2003-01-01

    Costs and losses induced by possible future extreme environmental conditions and difficulties in repairing post-yielding damage strongly suggest the need for proper consideration in design rather than just life loss prevention. This can be addressed through the development of design methodology that balances the initial cost of the very large floating structure (VLFS) against the expected potential losses resulting from future extreme wave-induced structural damage. Here, the development of a methodology for determining optimal, cost-effective design will be presented and applied to a VLFS located in the Tokyo bay. Optimal design criteria are determined based on the total expected life-cycle cost and acceptable damage probability and curvature of the structure, and a set of sizes of the structure are obtained. The methodology and applications require expressions of the initial cost and the expected life-cycle damage cost as functions of the optimal design variables. This study includes the methodology, total life-cycle cost function, structural damage modeling, and reliability analysis.

  10. Optimization of heat transfer utilizing graph based evolutionary algorithms

    Bryden, Kenneth M.; Ashlock, Daniel A.; McCorkle, Douglas S.; Urban, Gregory L.

    2003-01-01

    This paper examines the use of graph based evolutionary algorithms (GBEAs) for optimization of heat transfer in a complex system. The specific case examined in this paper is the optimization of heat transfer in a biomass cookstove utilizing three-dimensional computational fluid dynamics to generate the fitness function. In this stove hot combustion gases are used to heat a cooking surface. The goal is to provide an even spatial temperature distribution on the cooking surface by redirecting the flow of combustion gases with baffles. The variables in the optimization are the position and size of the baffles, which are described by integer values. GBEAs are a novel type of EA in which a topology or geography is imposed on an evolving population of solutions. The choice of graph controls the rate at which solutions can spread within the population, impacting the diversity of solutions and convergence rate of the EAs. In this study, the choice of graph in the GBEAs changes the number of mating events required for convergence by a factor of approximately 2.25 and the diversity of the population by a factor of 2. These results confirm that by tuning the graph and parameters in GBEAs, computational time can be significantly reduced.
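
    A minimal sketch of the idea (ours; a one-max toy problem rather than the CFD fitness) restricts mating events to graph neighbours, here a cycle, so good genes can only spread one hop per event.

```python
import random

random.seed(1)
L, POP, GENS = 30, 16, 400

def fitness(g):
    return sum(g)                     # one-max: count of 1-bits

def crossover(a, b):
    cut = random.randrange(1, L)
    return a[:cut] + b[cut:]

def mutate(g, rate=1.0 / L):
    return [bit ^ (random.random() < rate) for bit in g]

# Cycle graph: individual i may only mate with its two ring neighbours.
neighbours = {i: [(i - 1) % POP, (i + 1) % POP] for i in range(POP)}

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
init_best = max(fitness(g) for g in pop)

for _ in range(GENS):                 # mating events
    i = random.randrange(POP)
    j = random.choice(neighbours[i])  # partner restricted to the graph
    child = mutate(crossover(pop[i], pop[j]))
    k = i if fitness(pop[i]) <= fitness(pop[j]) else j
    if fitness(child) >= fitness(pop[k]):
        pop[k] = child                # local elitist replacement

final_best = max(fitness(g) for g in pop)
```

    Swapping `neighbours` for the complete graph recovers a panmictic EA; counting the mating events needed to converge under different graphs is the experiment the paper runs.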

  11. Energy Optimal Control Strategy of PHEV Based on PMP Algorithm

    Tiezhou Wu

    2017-01-01

    Full Text Available Under the global call for “energy saving” and the current boom in the development of energy storage technology at home and abroad, energy-optimal control of the whole hybrid electric vehicle power system, as one of the core technologies of electric vehicles, is bound to become a hot target of “clean energy” vehicle development and research. This paper considers the constraints on the performance of the energy storage system in a Parallel Hybrid Electric Vehicle (PHEV): the lithium-ion battery charges and discharges frequently, the PHEV consumes a large amount of fuel, and energy recovery within a single cycle is difficult. The research therefore uses a lithium-ion battery combined with a super-capacitor (SC) as a hybrid energy storage system (Li-SC HESS), working together with an internal combustion engine (ICE) to drive the PHEV. Combined with a PSO-PI controller and an internal power-limited management approach for the Li-SC HESS, the research proposes a PHEV energy-optimal control strategy based on a revised Pontryagin’s minimum principle (PMP) algorithm, establishes the PHEV vehicle simulation model in the ADVISOR software, and verifies its effectiveness and feasibility. Finally, the results show that the energy-optimal control strategy can improve the instantaneity of tracking the PHEV minimum fuel consumption trajectory, save energy, and prolong the life of the lithium-ion batteries, thereby improving the performance of the hybrid energy storage system.
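
    The core of a PMP-based strategy is instantaneous: at each moment the power split minimises a Hamiltonian that prices battery energy with the costate λ. A toy sketch (our convex fuel map and numbers, not the paper's ADVISOR model):

```python
import numpy as np

def optimal_split(p_dem, lam, n=201):
    """Engine power u in [0, p_dem] (kW) minimising the Hamiltonian
    H(u) = fuel_rate(u) + lam * p_batt(u), battery covering the remainder."""
    u = np.linspace(0.0, p_dem, n)
    fuel = 0.08 * u + 0.004 * u ** 2   # illustrative convex fuel-rate map (g/s)
    p_batt = p_dem - u                 # battery power (kW)
    H = fuel + lam * p_batt
    return float(u[np.argmin(H)])
```

    A small λ makes battery energy cheap and keeps the engine off; a large λ prices it above fuel and the engine covers the whole demand. PMP's job is to find the λ trajectory that also respects the state-of-charge constraint over the drive cycle.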

  12. A seismic fault recognition method based on ant colony optimization

    Chen, Lei; Xiao, Chuangbai; Li, Xueliang; Wang, Zhenli; Huo, Shoudong

    2018-05-01

    Fault recognition is an important part of seismic interpretation, and although many methods exist for this task, none can recognize faults accurately enough. To address this problem, we propose a new fault recognition method based on ant colony optimization which can locate faults precisely and extract them from the seismic section. Firstly, seismic horizons are extracted by the connected component labeling algorithm; secondly, the fault locations are decided according to the horizontal endpoints of each horizon; thirdly, the whole seismic section is divided into several rectangular blocks, and the top and bottom endpoints of each rectangular block are considered as the nest and the food, respectively, for the ant colony optimization algorithm. In addition, the positive section is treated as an actual three-dimensional terrain by using the seismic amplitude as height. After that, the optimal route from nest to food calculated by the ant colony in each block is judged to be a fault. Finally, extensive comparative tests were performed on real seismic data. The availability and advancement of the proposed method were validated by the experimental results.
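
    The nest-to-food search can be sketched on a toy graph (ours; the edge costs stand in for the amplitude-derived terrain, and all names are invented):

```python
import random

random.seed(7)

# Toy block: two candidate routes from nest to food with different total cost.
edges = {("nest", "a"): 1.0, ("a", "food"): 1.0,    # short route, cost 2
         ("nest", "b"): 2.0, ("b", "food"): 2.0}    # long route, cost 4
succ = {"nest": ["a", "b"], "a": ["food"], "b": ["food"]}
tau = {e: 1.0 for e in edges}                       # pheromone levels

def walk():
    """One ant builds a path, preferring high pheromone and low cost."""
    node, path, cost = "nest", [], 0.0
    while node != "food":
        nxt = random.choices(
            succ[node],
            weights=[tau[(node, n)] / edges[(node, n)] for n in succ[node]])[0]
        path.append((node, nxt))
        cost += edges[(node, nxt)]
        node = nxt
    return path, cost

best_path, best_cost = None, float("inf")
for _ in range(30):                                  # colony iterations
    for _ in range(10):                              # ants per iteration
        path, cost = walk()
        if cost < best_cost:
            best_path, best_cost = path, cost
        for e in path:
            tau[e] += 1.0 / cost                     # quality-weighted deposit
    for e in tau:
        tau[e] *= 0.9                                # evaporation
```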

  13. Optimization of Microemulsion Based Transdermal Gel of Triamcinolone.

    Jagdale, Swati; Chaudhari, Bhagyashree

    2017-01-01

    Triamcinolone is a long-acting corticosteroid used in the treatment of arthritis, eczema, psoriasis and similar inflammatory conditions. Triamcinolone has a half-life of 88 min. Prolonged oral use is associated with gastrointestinal adverse effects such as peptic ulcer, abdominal distention and ulcerative esophagitis, as described in various patents. A microemulgel offers the advantages of better stability, better loading capacity and controlled release, especially for a drug with a short half-life. The objective of the present study was to optimize a microemulgel-based transdermal delivery of triamcinolone. The saturated solubility of triamcinolone in various oils, surfactants and co-surfactants was estimated. Pseudo-ternary phase diagrams were constructed to determine the region of transparent microemulsion. The microemulsion was evaluated for globule size (FE-SEM, zetasizer), % transmittance, pH, viscosity, conductivity, etc. Design of experiments was used to optimize the microemulsion-based gel, with Carbopol 971P and HPMC K100M as independent variables. The microemulsion-based gel was evaluated for in-vitro as well as ex-vivo parameters. The microemulsion was formulated with oleic acid, lauroglycol FCC and propylene glycol. A PDI of 0.197 indicated that the microemulsion is monodisperse. A 3² factorial design gave batch F8 as optimized. Design Expert suggested drug release, gel viscosity and bio-adhesive strength as the three significant dependent factors affecting the transdermal delivery. F8 showed drug release of 92.62±1.22% through egg membrane and 95.23±1.44% through goat skin after 8 h, and the Korsmeyer-Peppas release model was followed. It can be concluded that a stable, effective controlled-release transdermal microemulgel was optimized for triamcinolone. This would be a promising tool to deliver triamcinolone with enhanced bioavailability and reduced dosing frequency. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
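
    The 3² design-of-experiments step can be sketched numerically (entirely invented response values; in the study the two factors were the Carbopol 971P and HPMC K100M levels): fit a full quadratic response surface by least squares and search it for the optimum.

```python
import numpy as np

levels = np.array([-1.0, 0.0, 1.0])           # coded factor levels
X1, X2 = np.meshgrid(levels, levels)
x1, x2 = X1.ravel(), X2.ravel()               # the nine runs of a 3^2 design

# Illustrative measured response at each run (e.g. % drug release).
y = np.array([70, 78, 74, 80, 92, 85, 76, 88, 81], float)

# Full quadratic model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones(9), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(u, v):
    return float(b @ [1.0, u, v, u * v, u * u, v * v])

# Grid search the fitted surface for the optimum within the design space.
grid = np.linspace(-1.0, 1.0, 41)
best = max(((u, v) for u in grid for v in grid), key=lambda p: predict(*p))
```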

  14. An assessment of the effects of a cadmium discharge ordinance

    Moser, J.H.; Schultz, J.L.

    1982-01-01

    The problem facing the MMSD was high levels of cadmium in Milorganite fertilizer. The cause was determined to be discharges from industry, primarily electroplaters. The solution was the cooperative development of an ordinance to limit the discharge of cadmium. Because the dischargers acted responsibly to comply with the ordinance, the ordinance succeeded in achieving its objective of significantly reducing the cadmium loading to the municipal sewerage system and subsequently reducing the cadmium concentration in Milorganite fertilizer

  15. Price-based Optimal Control of Electrical Power Systems

    Jokic, A.

    2007-09-10

    The research presented in this thesis is motivated by the following issue of concern for the operation of future power systems: future power systems will be characterized by significantly increased uncertainties at all time scales and, consequently, their behavior in time will be difficult to predict. In Chapter 2 we present a novel explicit, dynamic, distributed feedback control scheme that utilizes nodal prices for real-time optimal power balance and network congestion control. The term explicit means that the controller is not based on solving an optimization problem on-line; instead, the nodal price updates are based on simple, explicitly defined and easily comprehensible rules. We prove that the developed control scheme, which acts on measurements of the current state of the system, always provides the correct nodal prices. In Chapter 3 we develop a novel, robust, hybrid MPC (model predictive control) scheme for power balance control with hard constraints on line power flows and network frequency deviations. The developed MPC controller acts in parallel with the explicit controller from Chapter 2, and its task is to enforce the constraints during the transient periods following suddenly occurring power imbalances in the system. In Chapter 4 the concept of autonomous power networks is presented as a concise formulation to deal with economic, technical and reliability issues in power systems with a large penetration of distributed generating units. With autonomous power networks as new market entities, we propose a novel operational structure of ancillary service markets. In Chapter 5 we consider the problem of controlling a general linear time-invariant dynamical system to an economically optimal operating point, which is defined by a multiparametric constrained convex optimization problem related to the steady-state operation of the system. The parameters in the optimization problem are values of the exogenous inputs to…

  16. Optimal attacks on qubit-based Quantum Key Recycling

    Leermakers, Daan; Škorić, Boris

    2018-03-01

    Quantum Key Recycling (QKR) is a quantum cryptographic primitive that allows one to reuse keys in an unconditionally secure way. By removing the need to repeatedly generate new keys, it improves communication efficiency. Škorić and de Vries recently proposed a QKR scheme based on 8-state encoding (four bases). It does not require quantum computers for encryption/decryption but only single-qubit operations. We provide a missing ingredient in the security analysis of this scheme in the case of noisy channels: accurate upper bounds on the required amount of privacy amplification. We determine optimal attacks against the message and against the key, for 8-state encoding as well as 4-state and 6-state conjugate coding. We provide results in terms of min-entropy loss as well as accessible (Shannon) information. We show that the Shannon entropy analysis for 8-state encoding reduces to the analysis of quantum key distribution, whereas 4-state and 6-state suffer from additional leaks that make them less effective. From the optimal attacks we compute the required amount of privacy amplification and hence the achievable communication rate (useful information per qubit) of qubit-based QKR. Overall, 8-state encoding yields the highest communication rates.

  17. Analysis of the morphology of oral structures from 3-D co-ordinate data.

    Jovanovski, V; Lynch, E

    2000-01-01

    A non-intrusive method is described which can be used to determine the forms of oral structures. It is based on the digitising of standard replicas with a co-ordinate-measuring machine. Supporting software permits a mathematical model of the surface to be reconstructed and visualised from captured three-dimensional co-ordinates. A series of surface data sets can be superposed into a common reference frame without the use of extrinsic markers, allowing changes in the shapes of oral structures to be quantified accurately over an extended period of time. The system has found numerous applications.

  18. An internet graph model based on trade-off optimization

    Alvarez-Hamelin, J. I.; Schabanel, N.

    2004-03-01

    This paper presents a new model for the Internet graph (AS graph) based on the concept of heuristic trade-off optimization, introduced by Fabrikant, Koutsoupias and Papadimitriou to grow a random tree with a heavily tailed degree distribution. We propose a generalization of this approach to generate a general graph, as a candidate for modeling the Internet. We present the results of our simulations and an analysis of the standard parameters measured in our model, compared with measurements from the physical Internet graph.
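
    The underlying tree construction is compact; a sketch of the original FKP tree (our instantiation choices: Euclidean distance, hop-count centrality, invented parameters):

```python
import random

random.seed(3)

def fkp_tree(n, alpha):
    """Each new node i attaches to the existing node j minimising
    alpha * dist(i, j) + hops(j), where hops(j) is j's hop count to the root."""
    pts = [(random.random(), random.random()) for _ in range(n)]
    parent, hops = {0: None}, {0: 0}
    for i in range(1, n):
        def cost(j):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            return alpha * (dx * dx + dy * dy) ** 0.5 + hops[j]
        j = min(range(i), key=cost)
        parent[i], hops[i] = j, hops[j] + 1
    return parent

parent = fkp_tree(200, alpha=4.0)
degree = {}
for j in parent.values():                 # in-degree of each attachment target
    if j is not None:
        degree[j] = degree.get(j, 0) + 1
```

    At alpha = 0 only centrality matters and the tree degenerates to a star on the root; at very large alpha it approaches a Euclidean nearest-neighbour tree; the heavily tailed degrees appear between the extremes.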

  19. Simulation Based Optimization for World Line Card Production System

    Sinan APAK

    2012-07-01

    Full Text Available Simulation based decision support is one of the most commonly used tools for examining complex production systems. The simulation approach provides process modules that can be adjusted with certain parameters, using data relatively easily obtainable from the production process. A simulation of the World Line Card production system is developed to evaluate the optimality of the existing production line, using a discrete event simulation model with a variety of alternative proposals. The current production system is analysed by a simulation model highlighting the bottlenecks and the poorly utilized production line. Our analysis identified several improvements and efficient solutions for the existing system.

  20. Optimal model-based sensorless adaptive optics for epifluorescence microscopy.

    Pozzi, Paolo; Soloviev, Oleg; Wilding, Dean; Vdovin, Gleb; Verhaegen, Michel

    2018-01-01

    We report on a universal sample-independent sensorless adaptive optics method, based on modal optimization of the second moment of the fluorescence emission from a point-like excitation. Our method employs a sample-independent precalibration, performed only once for the particular system, to establish the direct relation between the image quality and the aberration. The method is potentially applicable to any form of microscopy with epifluorescence detection, including the practically important case of incoherent fluorescence emission from a three-dimensional object, through minor hardware modifications. We have applied the technique successfully to a widefield epifluorescence microscope and to a multiaperture confocal microscope.

  1. Utilization-Based Modeling and Optimization for Cognitive Radio Networks

    Liu, Yanbing; Huang, Jun; Liu, Zhangxiong

    The cognitive radio technique promises to manage and allocate the scarce radio spectrum in the highly varying and disparate modern environments. This paper considers a cognitive radio scenario composed of two queues for the primary (licensed) users and cognitive (unlicensed) users. According to the Markov process, the system state equations are derived and an optimization model for the system is proposed. Next, the system performance is evaluated by calculations which show the rationality of our system model. Furthermore, the effects of different system parameters are discussed based on the experimental results.

  2. Morphing-Based Shape Optimization in Computational Fluid Dynamics

    Rousseau, Yannick; Men'Shov, Igor; Nakamura, Yoshiaki

    In this paper, a Morphing-based Shape Optimization (MbSO) technique is presented for solving Optimum-Shape Design (OSD) problems in Computational Fluid Dynamics (CFD). The proposed method couples Free-Form Deformation (FFD) and Evolutionary Computation, and, as its name suggests, relies on the morphing of shape and computational domain, rather than direct shape parameterization. Advantages of the FFD approach compared to traditional parameterization are first discussed. Then, examples of shape and grid deformations by FFD are presented. Finally, the MbSO approach is illustrated and applied through an example: the design of an airfoil for a future Mars exploration airplane.

  3. An Improved Teaching-Learning-Based Optimization with the Social Character of PSO for Global Optimization

    Feng Zou

    2016-01-01

    Full Text Available An improved teaching-learning-based optimization combining the social character of PSO (TLBO-PSO), which considers the teacher’s behavior influence on the students and the mean grade of the class, is proposed in this paper to find the global solutions of function optimization problems. In this method, the teacher phase of TLBO is modified; the new position of an individual is determined by its old position, the mean position, and the best position of the current generation. The method overcomes the disadvantage that the evolution of the original TLBO might stop when the mean position of the students equals the position of the teacher. To decrease the computational cost of the algorithm, the process of removing duplicate individuals in the original TLBO is not adopted in the improved algorithm. Moreover, the probability of local convergence of the improved method is decreased by a mutation operator. The effectiveness of the proposed method is tested on some benchmark functions, and the results are competitive with respect to some other methods.
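    The modified teacher phase described above can be sketched for a single learner. This is a hedged illustration only: the exact weighting and teaching-factor handling in the paper may differ, and the PSO-style attraction toward the generation best is written here as an extra random-weighted term:

```python
import random

def teacher_phase_step(x, mean, teacher, best, tf=1, rng=random):
    """One sketch of the modified teacher-phase update for a single learner.

    x       -- current position of the learner (list of floats)
    mean    -- mean position of the class
    teacher -- position of the teacher (the best individual acting as teacher)
    best    -- best position found in the current generation (PSO-style term)
    tf      -- teaching factor, 1 or 2 as in standard TLBO

    Standard TLBO moves the learner toward (teacher - tf * mean); the
    PSO-inspired variant additionally pulls the learner toward the
    generation best via a second random-weighted term.
    """
    r1, r2 = rng.random(), rng.random()
    return [
        xi + r1 * (ti - tf * mi) + r2 * (bi - xi)
        for xi, mi, ti, bi in zip(x, mean, teacher, best)
    ]

# When teacher == tf * mean and best == x, the update is a fixed point.
unchanged = teacher_phase_step([1.0, 2.0], [1.0, 2.0], [1.0, 2.0], [1.0, 2.0])
```

    Note the fixed point: when the mean equals the teacher and no better generation-best exists, the learner does not move, which is exactly the stagnation the mutation operator is meant to break.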

  4. Ordinance on the transport of dangerous goods by road (SDR)

    1985-04-01

    This Ordinance regulates the transport of dangerous goods by road and replaces a similar Ordinance of 1972. The dangerous goods are listed in Annex A and the special provisions to be complied with for their transport are contained in Annex B. Radioactive materials, categorized as Class IVb, are included in the goods covered by the Ordinance. The Ordinance which entered into force on 1 May 1985 was amended on 9 April 1987 on a minor point and on 27 November 1989 so as to provide for special training for drivers of vehicles carrying such goods. This latter amendment entered into force on 1 January 1990. (NEA) [fr

  5. Optimization of multi-objective micro-grid based on improved particle swarm optimization algorithm

    Zhang, Jian; Gan, Yang

    2018-04-01

    The paper presents a multi-objective optimal configuration model for independent micro-grid with the aim of economy and environmental protection. The Pareto solution set can be obtained by solving the multi-objective optimization configuration model of micro-grid with the improved particle swarm algorithm. The feasibility of the improved particle swarm optimization algorithm for multi-objective model is verified, which provides an important reference for multi-objective optimization of independent micro-grid.

  6. A surrogate based multistage-multilevel optimization procedure for multidisciplinary design optimization

    Yao, W.; Chen, X.; Ouyang, Q.; Van Tooren, M.

    2011-01-01

    Optimization procedure is one of the key techniques to address the computational and organizational complexities of multidisciplinary design optimization (MDO). Motivated by the idea of synthetically exploiting the advantage of multiple existing optimization procedures and meanwhile complying with the general process of satellite system design optimization in conceptual design phase, a multistage-multilevel MDO procedure is proposed in this paper by integrating multiple-discipline-feasible (M...

  7. Optimal Power Allocation Algorithm for Radar Network Systems Based on Low Probability of Intercept Optimization (in English)

    Shi Chen-guang

    2014-08-01

    Full Text Available A novel optimal power allocation algorithm for radar network systems is proposed for Low Probability of Intercept (LPI) technology in modern electronic warfare. The algorithm is based on LPI optimization. First, the Schleher intercept factor for a radar network is derived; then the Schleher intercept factor is minimized by optimizing the transmission power allocation among netted radars in the network while guaranteeing target-tracking performance. Furthermore, the Nonlinear Programming Genetic Algorithm (NPGA) is used to solve the resulting nonconvex, nonlinear, and constrained optimization problem. Numerical simulation results show the effectiveness of the proposed algorithm.

  8. Optimizing DNA assembly based on statistical language modelling.

    Fang, Gang; Zhang, Shemin; Dong, Yafei

    2017-12-15

    By successively assembling genetic parts such as BioBricks according to grammatical models, complex genetic constructs composed of dozens of functional blocks can be built. However, each category of genetic parts usually includes anywhere from a few to many parts. With an increasing quantity of genetic parts, the process of assembling more than a few sets of these parts can be expensive, time consuming and error prone. At the last step of assembly it is difficult to decide which part should be selected. Based on a statistical language model, which is a probability distribution P(S) over strings S that attempts to reflect how frequently a string S occurs as a sentence, the most commonly used parts are selected. Then, a dynamic programming algorithm was designed to find the solution of maximum probability. The algorithm optimizes the results of a genetic design based on a grammatical model and finds an optimal solution. In this way, redundant operations can be reduced and the time and cost required for conducting biological experiments can be minimized. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
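    A maximum-probability selection of this kind can be sketched as a Viterbi-style dynamic program. The part names and transition probabilities below are invented toy data, and the paper's grammar and trained language model are not reproduced; this only illustrates the shape of the computation:

```python
import math

def best_assembly(slots, trans):
    """Pick one part per grammar slot maximizing the product of
    transition probabilities (a bigram language model over parts).

    slots -- list of candidate-part lists, one list per slot
    trans -- dict mapping (prev_part, part) -> probability
    Returns (best_probability, best_sequence).
    """
    # score[part] = best log-probability of any assembly ending in `part`
    score = {p: 0.0 for p in slots[0]}
    back = {p: [p] for p in slots[0]}
    for cands in slots[1:]:
        new_score, new_back = {}, {}
        for p in cands:
            # best predecessor for p (unseen transitions get a tiny floor)
            prev = max(score,
                       key=lambda q: score[q] + math.log(trans.get((q, p), 1e-9)))
            new_score[p] = score[prev] + math.log(trans.get((prev, p), 1e-9))
            new_back[p] = back[prev] + [p]
        score, back = new_score, new_back
    best = max(score, key=score.get)
    return math.exp(score[best]), back[best]

# Hypothetical three-slot design: promoter, RBS, reporter gene.
slots = [["promA", "promB"], ["rbs1"], ["gfp", "rfp"]]
trans = {("promA", "rbs1"): 0.7, ("promB", "rbs1"): 0.2,
         ("rbs1", "gfp"): 0.6, ("rbs1", "rfp"): 0.3}
prob, seq = best_assembly(slots, trans)
```

    Working in log space keeps the product of many probabilities numerically stable, which matters for constructs with dozens of blocks.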

  9. Design optimization of PVDF-based piezoelectric energy harvesters

    Jundong Song

    2017-09-01

    Full Text Available Energy harvesting is a promising technology that powers electronic devices by scavenging ambient energy. Piezoelectric energy harvesters have attracted considerable interest for their high conversion efficiency and easy fabrication in miniaturized sensors and transducers. To improve the output capability of energy harvesters, the properties of the piezoelectric material are an influential factor, but the potential of the material is unlikely to be fully exploited without an optimized configuration. In this paper, an optimization strategy for PVDF-based cantilever-type energy harvesters is proposed to achieve the highest output power density for a given frequency and acceleration of the vibration source. It is shown that the maximum output power density depends only on the maximum allowable stress of the beam and the working frequency of the device, and that these two factors can be obtained by adjusting the geometry of the piezoelectric layers. The strategy is validated by coupled finite-element-circuit simulation and a practical device. The fabricated device, within a volume of 13.1 mm3, shows an output power of 112.8 μW, which is comparable to that of the best-performing piezoceramic-based energy harvesters of similar volume reported so far.

  10. [Study on the land use optimization based on PPI].

    Wu, Xiao-Feng; Li, Ting

    2012-03-01

    Land use type and management method, which are greatly influenced by human activities, are among the most important factors in non-point pollution. Based on the collection and analysis of non-point pollution control methods and the concept of the three ecological fronts, 9 optimized land use scenarios were designed according to a rationality analysis of the current land use situation in 3 typical small watersheds in the Miyun reservoir basin. Taking the Caojialu watershed as an example, the environmental influence of the different scenarios was analyzed and compared on the basis of the potential pollution index (PPI) and the river section potential pollution index (R-PPI), and the best combination scenario was found. Scenario design and comparison based on PPI and R-PPI can help to find the best combination of land use type and management method, to optimize the spatial distribution and management of land use in a basin, to reduce soil erosion, and to provide powerful support for the formulation of land use planning and pollution control projects.

  11. Coherent Network Optimizing of Rail-Based Urban Mass Transit

    Ying Zhang

    2012-01-01

    Full Text Available An efficient public transport system is more than ever a crucial factor for the quality of life and competitiveness of many cities and regions in Asia. In recent years, rail-based urban mass transit has been regarded as one of the key means of overcoming the great challenges in Chinese megacities. The purpose of this study is to develop a coherent network optimization for rail-based urban mass transit that finds the best alternatives for the user and demonstrates how to meet sustainable development needs while matching the enormous capacity requirements. This paper presents an introduction to the current situation of the important lines and transfer points in the Shanghai metro system. The insufficient aspects are analyzed and evaluated, while optimization ideas and measures are developed and made concrete. A group of examples is used to illustrate the approach. The whole study can serve as a current reference for other megacities confronted with similar situations and processes involving enormous, dynamic travel and transport demands.

  12. Market-Based and System-Wide Fuel Cycle Optimization

    Wilson, Paul Philip Hood [Univ. of Wisconsin, Madison, WI (United States); Scopatz, Anthony [Univ. of South Carolina, Columbia, SC (United States); Gidden, Matthew [Univ. of Wisconsin, Madison, WI (United States); Carlsen, Robert [Univ. of Wisconsin, Madison, WI (United States); Mouginot, Baptiste [Univ. of Wisconsin, Madison, WI (United States); Flanagan, Robert [Univ. of South Carolina, Columbia, SC (United States)

    2017-06-13

    This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.

  13. Market-Based and System-Wide Fuel Cycle Optimization

    Wilson, Paul Philip Hood; Scopatz, Anthony; Gidden, Matthew; Carlsen, Robert; Mouginot, Baptiste; Flanagan, Robert

    2017-01-01

    This work introduces automated optimization into fuel cycle simulations in the Cyclus platform. This includes system-level optimizations, seeking a deployment plan that optimizes the performance over the entire transition, and market-level optimization, seeking an optimal set of material trades at each time step. These concepts were introduced in a way that preserves the flexibility of the Cyclus fuel cycle framework, one of its most important design principles.

  14. Some Problems of Rocket-Space Vehicles' Characteristics co- ordination

    Sergienko, Alexander A.

    2002-01-01

    of the XX century suffered a reverse. The designers of the United States' firms and enterprises of the aviation and rocket-space industry (Boeing, Rocketdyne, Lockheed Martin, McDonnell Douglas, Rockwell, etc.) and NASA (Marshall Space Flight Center, Johnson Space Center, Langley Research Center, Lewis Research Center and others) could not correctly co-ordinate the characteristics of a propulsion system and a space vehicle to elaborate the "Single-Stage-To-Orbit" reusable vehicle (SSTO) as an integral whole system that would be able to inject a payload into orbit and return back to Earth. Jet nozzle design, as well as the choice of propulsion system characteristics ensuring high ballistic efficiency, is considered in the present report. The efficiency criteria for the optimization of engine and launch system parameters are discussed. New methods for choosing the optimal parameters of the nozzle block to satisfy the object task of flight are suggested. A family of SSTOs with a payload mass from 5 to 20 tons and an initial weight under 800 tons is considered.

  15. Stochastic Optimized Relevance Feedback Particle Swarm Optimization for Content Based Image Retrieval

    Muhammad Imran

    2014-01-01

    Full Text Available One of the major challenges for CBIR is to bridge the gap between low level features and high level semantics according to the need of the user. To overcome this gap, relevance feedback (RF) coupled with support vector machines (SVM) has been applied successfully. However, when the feedback sample is small, the performance of SVM based RF is often poor. To improve the performance of RF, this paper proposes a new technique, namely PSO-SVM-RF, which combines SVM based RF with particle swarm optimization (PSO). The aims of this proposed technique are to enhance the performance of SVM based RF and also to minimize the user interaction with the system by minimizing the number of RF iterations. PSO-SVM-RF was tested on the Corel photo gallery containing 10908 images. The results obtained from the experiments showed that the proposed PSO-SVM-RF achieved 100% accuracy in 8 feedback iterations for the top 10 retrievals and 80% accuracy in 6 iterations for the top 100 retrievals. This implies that with the PSO-SVM-RF technique a high accuracy rate is achieved in a small number of iterations.
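    The particle swarm component can be sketched independently of the SVM relevance-feedback loop. The following is a minimal, hedged PSO implementation: the inertia and acceleration constants are conventional defaults rather than values from the paper, and the sphere function merely stands in for the actual retrieval objective:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the global best with random weights."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Minimize the sphere function as a stand-in objective.
best, val = pso_minimize(lambda x: sum(xi * xi for xi in x), dim=3)
```

    In PSO-SVM-RF the objective evaluated per particle would instead involve training the SVM on the fed-back samples, which is why minimizing the number of RF iterations matters.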

  16. Vision-based coaching: optimizing resources for leader development

    Passarelli, Angela M.

    2015-01-01

    Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the benefits of the individual’s personal vision. Drawing on intentional change theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity) evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion–orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed. PMID:25926803

  17. Optimal Risk-Based Inspection Planning for Offshore Wind Turbines

    Rangel-Ramirez, Jose G.; Sørensen, John Dalsgaard

    2008-01-01

    Wind turbines for electricity production have increased significantly in recent years, both in production capability and size, and this development is expected to continue in the coming years. The support structure for offshore wind turbines is typically a steel structure consisting of a tower ... inspection and maintenance activities are developed. This paper considers aspects of inspection and maintenance planning of fatigue prone details in jacket and tripod types of wind turbine support structures. Based on risk-based inspection planning methods used for oil & gas installations, a framework for optimal inspection and maintenance planning of offshore wind turbines is presented. Special aspects for offshore wind turbines are considered: usually the wind loading dominates the wave loading, wake effects in wind farms are important, and the reliability level is typically significantly lower than ...

  18. Optimization of an Image-Based Talking Head System

    Kang Liu

    2009-01-01

    Full Text Available This paper presents an image-based talking head system, which includes two parts: analysis and synthesis. The audiovisual analysis part creates a face model of a recorded human subject, which is composed of a personalized 3D mask as well as a large database of mouth images and their related information. The synthesis part generates natural looking facial animations from phonetic transcripts of text. A critical issue of the synthesis is the unit selection which selects and concatenates these appropriate mouth images from the database such that they match the spoken words of the talking head. Selection is based on lip synchronization and the similarity of consecutive images. The unit selection is refined in this paper, and Pareto optimization is used to train the unit selection. Experimental results of subjective tests show that most people cannot distinguish our facial animations from real videos.

  19. Vision-based coaching: Optimizing resources for leader development

    Angela M. Passarelli

    2015-04-01

    Full Text Available Leaders develop in the direction of their dreams, not in the direction of their deficits. Yet many coaching interactions intended to promote a leader’s development fail to leverage the developmental benefits of the individual’s personal vision. Drawing on Intentional Change Theory, this article postulates that coaching interactions that emphasize a leader’s personal vision (future aspirations and core identity evoke a psychophysiological state characterized by positive emotions, cognitive openness, and optimal neurobiological functioning for complex goal pursuit. Vision-based coaching, via this psychophysiological state, generates a host of relational and motivational resources critical to the developmental process. These resources include: formation of a positive coaching relationship, expansion of the leader’s identity, increased vitality, activation of learning goals, and a promotion-orientation. Organizational outcomes as well as limitations to vision-based coaching are discussed.

  20. Model-based dynamic control and optimization of gas networks

    Hofsten, Kai

    2001-07-01

    by a structured sequential quadratic programming algorithm of Newton type. Each open loop problem is specified using a nonlinear prediction model. For each iteration of the quadratic programming procedure, a linear time variant prediction model is formulated. The suggested controller also handles time varying source capacity. Potential problems, such as infeasibility and the security of the supply when facing a change in the status of the infrastructure of the transmission system under a transient customer load, are treated. Comments on infeasibility due to errors such as load forecast error, model error and state estimation error are also given. A simplified nonlinear model called the creep flow model is used to describe the fluid dynamics inside a natural gas transmission line. Different assumptions and reformulations of this model yield the different control, simulation and optimization models used in this thesis. The control of a single gas transmission line is investigated using linear model predictive control based on instantaneous linearization of the nonlinear model. Model predictive control using a biquadratic optimization model formulated from the creep flow model is also investigated. A distributed parameter control model of the gas dynamics for a transmission line is formulated. An analytic solution of this model is given with both Neumann boundary conditions and distributed supplies and loads. A transfer function model is developed expressing the dynamics between the defined output and the control and disturbance inputs of the transmission line. Based on the qualitative behaviour observed from the step responses of the solutions of the distributed parameter model formulated in this thesis, simplified transfer function models were developed. These control models express the dynamics of a natural gas transmission line with Neumann boundary control and load. Further, these models were used to design a control law, which is a combination of a Smith

  1. Order-constrained linear optimization.

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.
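    The ordinal criterion at the heart of OCLO is Kendall's τ between the linear predictor and the observed outcomes. A minimal sketch of that criterion follows; the full OCLO algorithm, which maximizes the least-squares fit subject to the ordinal fit being maximal, is not reproduced, and `ordinal_fit` is an illustrative helper name:

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau-a between two equal-length sequences:
    (concordant pairs - discordant pairs) / total pairs."""
    n = len(x)
    s = sum(
        (1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else
         -1 if (x[i] - x[j]) * (y[i] - y[j]) < 0 else 0)
        for i, j in combinations(range(n), 2)
    )
    return s / (n * (n - 1) / 2)

def ordinal_fit(weights, X, y):
    """Ordinal fit of a linear predictor Xw against y (higher is better).
    This is only the criterion OCLO conditions on, not the estimator."""
    pred = [sum(w * xi for w, xi in zip(weights, row)) for row in X]
    return kendall_tau(pred, y)
```

    Because τ depends only on pairwise orderings, extreme scores cannot dominate it the way they dominate squared error, which is the intuition behind OCLO's robustness to fat-tailed data.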

  2. Optimization of modal filters based on arrays of piezoelectric sensors

    Pagani, Carlos C Jr; Trindade, Marcelo A

    2009-01-01

    Modal filters may be obtained by a properly designed weighted sum of the output signals of an array of sensors distributed on the host structure. Although several research groups have been interested in techniques for designing and implementing modal filters based on a given array of sensors, the effect of the array topology on the effectiveness of the modal filter has received much less attention. In particular, it is known that some parameters, such as size, shape and location of a sensor, are very important in determining the observability of a vibration mode. Hence, this paper presents a methodology for the topological optimization of an array of sensors in order to maximize the effectiveness of a set of selected modal filters. This is done using a genetic algorithm optimization technique for the selection of 12 piezoceramic sensors from an array of 36 piezoceramic sensors regularly distributed on an aluminum plate, which maximize the filtering performance, over a given frequency range, of a set of modal filters, each one aiming to isolate one of the first vibration modes. The vectors of the weighting coefficients for each modal filter are evaluated using QR decomposition of the complex frequency response function matrix. Results show that the array topology is not very important for lower frequencies but it greatly affects the filter effectiveness for higher frequencies. Therefore, it is possible to improve the effectiveness and frequency range of a set of modal filters by optimizing the topology of an array of sensors. Indeed, using 12 properly located piezoceramic sensors bonded on an aluminum plate, it is shown that the frequency range of a set of modal filters may be enlarged by 25–50%.

  3. Quaternion error-based optimal control applied to pinpoint landing

    Ghiglino, Pablo

    Accurate control techniques for pinpoint planetary landing - i.e., the goal of achieving landing errors on the order of 100 m for unmanned missions - constitute a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is also affected by the well known trade-off in UAV control that for complex underlying models the control is sub-optimal, while optimal control is applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the result is two novel optimal control algorithms, namely OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with other industry standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was also developed as part of this research in order to tune, in an online manner, the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, the adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrix tuning, reduced computation time required, and no prior knowledge of the navigation or guidance strategies needed. Further simulations on real planetary landing trajectories have shown that these controllers are capable of achieving landing errors on the order of pinpoint landing requirements, making them not only very precise UAV controllers, but also potential candidates for pinpoint landing unmanned missions.

  4. Topological Optimization of Continuum Structure based on ANSYS

    Li Xue-ping

    2017-01-01

    Full Text Available Topology optimization takes place at the structural concept design phase, and its result is the foundation for the succeeding design; structural topology optimization is therefore important to engineering design. In this thesis, in order to seek the optimal structural shape of the winch’s mounting bracket of a ROV simulator, a topology optimization design was carried out with the finite element analysis software ANSYS. The results show that topology optimization is an effective optimization method; they indicate that the method is correct and effective and that it has a certain prospect for engineering application.

  5. Warehouse stocking optimization based on dynamic ant colony genetic algorithm

    Xiao, Xiaoxu

    2018-04-01

    In view of the various orders handled by FAW (First Automotive Works) International Logistics Co., Ltd., the SLP method is used to optimize the layout of the warehousing units in the enterprise; the warehouse logistics is thus optimized and the external processing speed of orders is improved. In addition, the relevant intelligent algorithms for optimizing the stocking route problem are analyzed. The ant colony algorithm and the genetic algorithm, which have good applicability, are studied in depth. The parameters of the ant colony algorithm are optimized by the genetic algorithm, which improves the performance of the ant colony algorithm. A typical path optimization problem model is taken as an example to prove the effectiveness of the parameter optimization.

  6. Optimization-Based Approaches to Control of Probabilistic Boolean Networks

    Koichi Kobayashi

    2017-02-01

    Full Text Available Control of gene regulatory networks is one of the fundamental topics in systems biology. In the last decade, control theory of Boolean networks (BNs), which are well known as a model of gene regulatory networks, has been widely studied. In this review paper, our previously proposed methods for optimal control of probabilistic Boolean networks (PBNs) are introduced. First, the outline of PBNs is explained. Next, an optimal control method using polynomial optimization is explained, in which the finite-time optimal control problem is reduced to a polynomial optimization problem. Furthermore, another finite-time optimal control problem, which can be reduced to an integer programming problem, is also explained.

  7. Simplified discrete ordinates method in spherical geometry

    Elsawi, M.A.; Abdurrahman, N.M.; Yavuz, M.

    1999-01-01

    The authors extend the method of simplified discrete ordinates (SSN) to spherical geometry. The motivation for such an extension is that the appearance of the angular derivative (redistribution) term in the spherical geometry transport equation makes it difficult to decide which differencing scheme best approximates this term. In the present method, the angular derivative term is treated implicitly, which avoids the need to approximate it. The method can be considered analytic in nature, with the advantage of being free from the spatial truncation errors from which most existing transport codes suffer. The method can also handle scattering in a very general manner, with the advantage of spending almost the same computational effort for all scattering modes. Moreover, the method can easily be applied to higher-order SN calculations.

  8. Energy-pointwise discrete ordinates transport methods

    Williams, M.L.; Asgari, M.; Tashakorri, R.

    1997-01-01

    A very brief description is given of a one-dimensional code, CENTRM, which computes a detailed, space-dependent flux spectrum in a pointwise-energy representation within the resolved resonance range. The code will become a component in the SCALE system to improve computation of self-shielded cross sections, thereby enhancing the accuracy of codes such as KENO. CENTRM uses discrete-ordinates transport theory with an arbitrary angular quadrature order and a Legendre expansion of scattering anisotropy for moderator materials and heavy nuclides. The CENTRM program provides the capability to deterministically compute full-energy-range, space-dependent angular flux spectra, rigorously accounting for resonance fine-structure and scattering anisotropy effects.

  9. Robust sawtooth period control based on adaptive online optimization

    Bolder, J.J.; Witvoet, G.; De Baar, M.R.; Steinbuch, M.; Van de Wouw, N.; Haring, M.A.M.; Westerhof, E.; Doelman, N.J.

    2012-01-01

    The systematic design of a robust adaptive control strategy for the sawtooth period using electron cyclotron current drive (ECCD) is presented. Recent developments in extremum seeking control (ESC) are employed to derive an optimized controller structure and offer practical tuning guidelines for its parameters. In this technique a cost function in terms of the desired sawtooth period is optimized online by changing the ECCD deposition location based on online estimations of the gradient of the cost function. The controller design does not require a detailed model of the sawtooth instability. Therefore, the proposed ESC is widely applicable to any sawtoothing plasma or plasma simulation and is inherently robust against uncertainties or plasma variations. Moreover, it can handle a broad class of disturbances. This is demonstrated by time-domain simulations, which show successful tracking of time-varying sawtooth period references throughout the whole operating space, even in the presence of variations in plasma parameters, disturbances and slow launcher mirror dynamics. Due to its simplicity and robustness the proposed ESC is a valuable sawtooth control candidate for any experimental tokamak plasma, and may even be applicable to other fusion-related control problems. (paper)
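
The extremum seeking idea summarized above (perturb, demodulate, descend) can be sketched generically. The quadratic cost standing in for the sawtooth-period response, and all gains, are illustrative assumptions rather than the authors' controller.

```python
import math

# Minimal extremum seeking control (ESC) loop: add a small sinusoidal dither,
# high-pass the measured cost, demodulate against the dither to estimate the
# gradient, and integrate against that estimate. Cost and gains are assumed.
def cost(u):
    return (u - 2.0) ** 2 + 1.0   # unknown to the controller; optimum at u = 2

def extremum_seek(u0, steps=20000, dt=0.01, a=0.1, omega=5.0, gain=0.8):
    u = u0
    y_avg = cost(u0)                  # washout (high-pass) filter state
    for k in range(steps):
        t = k * dt
        dither = a * math.sin(omega * t)
        y = cost(u + dither)                           # measured cost with dither
        y_avg += 0.01 * (y - y_avg)                    # remove slow DC component
        grad_est = (y - y_avg) * math.sin(omega * t)   # demodulation
        u -= gain * grad_est * dt                      # descend estimated gradient
    return u

u_final = extremum_seek(0.0)
```

Note the key property the abstract emphasizes: the loop never uses the form of `cost` directly, only its measured values, which is what makes ESC model-free and robust to plasma variations.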

  10. Vanpool trip planning based on evolutionary multiple objective optimization

    Zhao, Ming; Yang, Disheng; Feng, Shibing; Liu, Hengchang

    2017-08-01

    Carpool and vanpool services have drawn much research attention and are the focus of this paper. A concrete definition of vanpool operation is given, and based on it vanpool operation is optimized using a user experience decline index (UEDI). The aim is for each user to have an identical UEDI while the system minimizes the sum of all users' UEDIs. Three contributions are made. The first is a vanpool operation scheme diagram, with each component of the scheme explained in detail. The second treats all customers' UEDIs as a set and uses the standard deviation and the sum of that set as objectives in a multiple-objective optimization that decides the trip start address, trip start time and trip destination address. The third is a trip planning algorithm that minimizes the sum of all users' UEDIs. The geographical distribution and utilization rate of charging stations are considered in the trip planning process.

  11. Bifurcation-based approach reveals synergism and optimal combinatorial perturbation.

    Liu, Yanwei; Li, Shanshan; Liu, Zengrong; Wang, Ruiqi

    2016-06-01

    Cells accomplish the process of fate decisions and form terminal lineages through a series of binary choices in which cells switch stable states from one branch to another as the interacting strengths of regulatory factors continuously vary. Various combinatorial effects may occur because almost all regulatory processes are managed in a combinatorial fashion. Combinatorial regulation is crucial for cell fate decisions because it may effectively integrate many different signaling pathways to meet the higher regulation demand during cell development. However, whether the contribution of combinatorial regulation to the state transition is better than that of a single regulation and, if so, what the optimal combination strategy is are significant issues from the point of view of both biology and mathematics. Using the approaches of combinatorial perturbations and bifurcation analysis, we provide a general framework for the quantitative analysis of synergism in molecular networks. Different from the known methods, the bifurcation-based approach depends only on stable state responses to stimuli because the state transition induced by combinatorial perturbations occurs between stable states. More importantly, an optimal combinatorial perturbation strategy can be determined by investigating the relationship between the bifurcation curve of a synergistic perturbation pair and the level set of a specific objective function. The approach is applied to two models, i.e., a theoretical multistable decision model and a biologically realistic CREB model, to show its validity, although the approach holds for a general class of biological systems.

  12. How to Plan an Ordinance: An Outline and Some Examples.

    Cable Television Information Center, Washington, DC.

    Designed for public officials who must make policy decisions concerning cable television, this booklet forms a checklist to ensure that all basic questions have been considered in drafting an ordinance. The purpose of a cable television ordinance is to develop a law listing the specifications and obligations that will govern the franchising of a…

  13. Proposed Ordinance for the Regulation of Cable Television. Working Draft.

    Chicago City Council, IL.

    A model ordinance is proposed for the regulation of cable television in the city of Chicago. It defines the language of the ordinance, sets forth the method of granting franchises, and describes the terms of the franchises. The duties of a commission to regulate cable television are listed and the method of selecting commission members is…

  14. Rite of Ordination of Fr Karol Wojtyła

    Szymon Fedorowicz

    2014-06-01

    Full Text Available The article contains the source text of the Polish translation of the rite of ordination of a bishop prepared for the episcopal ordination of Fr. Karol Wojtyła by Franciszek Małaczyński OSB. The text was found in the archives of Jacek Fedorowicz and prepared for publication by his son Szymon Fedorowicz.

  15. Using GIS to check co-ordinates of genebank accessions

    Hijmans, R.J.; Schreuder, M.; Cruz, de la J.; Guarino, L.

    1999-01-01

    The geographic co-ordinates of the locations where germplasm accessions have been collected are usually documented in genebank databases. However, the co-ordinate data are often incomplete and may contain errors. This paper describes procedures to check for errors, to determine the cause of these
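
The kinds of checks the paper describes can be sketched as simple rules: flag out-of-range values, co-ordinates outside the stated country, and suspected latitude/longitude swaps. The bounding boxes below are rough illustrative values, not an authoritative gazetteer.

```python
# Simplified co-ordinate checks in the spirit of the paper. The country
# bounding boxes are approximate illustrative values only.
COUNTRY_BBOX = {  # country: (min_lat, max_lat, min_lon, max_lon)
    "Peru": (-18.5, 0.1, -81.5, -68.6),
    "Netherlands": (50.7, 53.6, 3.3, 7.3),
}

def check_accession(country, lat, lon):
    """Return a list of suspected problems for one genebank record."""
    problems = []
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        problems.append("out-of-range")
        return problems
    bbox = COUNTRY_BBOX.get(country)
    if bbox is None:
        problems.append("unknown-country")
        return problems
    min_lat, max_lat, min_lon, max_lon = bbox
    inside = min_lat <= lat <= max_lat and min_lon <= lon <= max_lon
    if not inside:
        # swapped latitude/longitude is a common data-entry error
        if min_lat <= lon <= max_lat and min_lon <= lat <= max_lon:
            problems.append("lat-lon-swapped?")
        else:
            problems.append("outside-country-bbox")
    return problems
```

A real implementation, as the paper notes, would use GIS country boundaries rather than rectangles, but the logic of cross-checking co-ordinates against the documented collecting country is the same.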

  16. High order discrete ordinates transport in two dimensions

    Arkuszewski, J.J.

    1980-01-01

    A two-dimensional neutron transport equation in (x,y) geometry is solved by the subdomain version of the weighted residual method. The weight functions are chosen to be characteristic functions of computational boxes (subdomains). In the case of a bilinear interpolant the conventional diamond relations are obtained, while a quadratic one produces generalized diamond relations containing first derivatives of the solution. The balance equation remains the same. The derivation also yields additional relations for extrapolating boundary values of derivatives and leaves room for supplementing the interpolant with specially curtailed higher-order polynomials. The method requires only slight modifications to the inner iteration process used by conventional discrete ordinates programs, and has been introduced as an option into the program DOT2. The paper contains comparisons of the proposed method with the conventional one, based on calculations of the IAEA-CRP transport theory benchmarks. (author)

  17. A Laplace transform method for energy multigroup hybrid discrete ordinates

    Segatto, C.F.; Vilhena, M.T.; Barros, R.C.

    2010-01-01

    In typical lattice cells, where a highly absorbing, small fuel element is embedded in the moderator, a large weakly absorbing medium, high-order transport methods become unnecessary. In this work we describe a hybrid discrete ordinates (S_N) method for energy multigroup slab lattice calculations. This hybrid S_N method combines the convenience of a low-order S_N method in the moderator with a high-order S_N method in the fuel. The idea is based on the fact that in weakly absorbing media whose physical size is several neutron mean free paths in extent, even the S_2 method (P_1 approximation) leads to an accurate result. We use special fuel-moderator interface conditions and the Laplace transform (LTS_N) analytical numerical method to calculate the two-energy-group neutron flux distributions and the thermal disadvantage factor. We present numerical results for a range of typical model problems.

  18. First ordinance amending the Ordinance on Rail Transport of Dangerous Goods (1. Amendment Ordinance Rail Transport of Dangerous Goods). As of June 22, 1983

    1983-01-01

    This Amendment, which came into force on September 1, 1983, modifies many items of the original Ordinance on Rail Transport of Dangerous Goods and its supplement of August 29, 1979. (HSCH)

  19. Solution of optimal power flow using evolutionary-based algorithms

    It aims to estimate the optimal settings of real generator output power, bus voltage, ...... Lansey, K. E., 2003, Optimization of water distribution network design using ... Pandit, M., 2016, Economic load dispatch of wind-solar-thermal system using ...

  20. Binary cuckoo search based optimal PMU placement scheme for ...

    without including zero-injection effect, an Optimal PMU Placement strategy considering ..... in Indian power grid — A case study, Frontiers in Energy, Vol. ... optimization approach, Proceedings: International Conference on Intelligent Systems ...

  1. MVMO-based approach for optimal placement and tuning of ...

    bus (New England) test system. Numerical results include performance comparisons with other metaheuristic optimization techniques, namely, comprehensive learning particle swarm optimization (CLPSO), genetic algorithm with multi-parent ...

  2. Optimizing Maintenance of Constraint-Based Database Caches

    Klein, Joachim; Braun, Susanne

    Caching data reduces user-perceived latency and often enhances availability in case of server crashes or network failures. DB caching aims at local processing of declarative queries in a DBMS-managed cache close to the application. Query evaluation must produce the same results as if done at the remote database backend, which implies that all data records needed to process such a query must be present and controlled by the cache, i.e., it must achieve “predicate-specific” loading and unloading of such record sets. Hence, cache maintenance must be based on cache constraints such that “predicate completeness” of the caching units currently present can be guaranteed at any point in time. We explore how cache groups can be maintained to provide the data currently needed. Moreover, we design and optimize loading and unloading algorithms for sets of records keeping the caching units complete, before we empirically identify the costs involved in cache maintenance.
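
The notion of "predicate completeness" can be illustrated with a toy cache that loads and evicts complete equality-predicate extensions; the class and method names are invented for illustration and do not reflect the authors' system.

```python
# Sketch of predicate completeness: a value v of column c is cached only as a
# complete unit, i.e. *all* backend records with c == v are loaded together,
# so equality queries on cached values can be answered locally with the same
# result as at the backend. Names are illustrative.
class PredicateCache:
    def __init__(self, backend_rows, column):
        self.backend = backend_rows      # stand-in for the remote database
        self.column = column
        self.complete_values = set()     # values whose extension is fully cached
        self.rows = []

    def load_unit(self, value):
        """Load the complete extension of column == value (a caching unit)."""
        if value in self.complete_values:
            return
        self.rows.extend(r for r in self.backend if r[self.column] == value)
        self.complete_values.add(value)

    def unload_unit(self, value):
        """Evict a caching unit, keeping the remaining units complete."""
        self.complete_values.discard(value)
        self.rows = [r for r in self.rows if r[self.column] != value]

    def query(self, value):
        """Answer locally if complete, otherwise fall back to the backend."""
        if value in self.complete_values:
            return [r for r in self.rows if r[self.column] == value]
        return [r for r in self.backend if r[self.column] == value]

backend = [{"id": 1, "color": "red"},
           {"id": 2, "color": "blue"},
           {"id": 3, "color": "red"}]
cache = PredicateCache(backend, "color")
cache.load_unit("red")
```

The invariant is that `query` never returns a partial extension: either the unit is complete and served locally, or the backend is consulted.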

  3. Density-based penalty parameter optimization on C-SVM.

    Liu, Yun; Lian, Jie; Bartolacci, Michael R; Zeng, Qing-An

    2014-01-01

    The support vector machine (SVM) is one of the most widely used approaches for data classification and regression. SVM achieves the largest distance between the positive and negative support vectors, which neglects the remote instances away from the SVM interface. In order to avoid a position change of the SVM interface caused by system outliers, C-SVM was introduced to decrease their influence. Traditional C-SVM holds a uniform penalty parameter C for both positive and negative instances; however, according to the different number proportions and the data distribution, positive and negative instances should be given different weights for the penalty parameter of the error terms. Therefore, in this paper, we propose density-based penalty parameter optimization of C-SVM. The experimental results indicated that our proposed algorithm has outstanding performance with respect to both precision and recall.
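
The asymmetric-penalty idea can be sketched with a linear SVM trained by subgradient descent on the hinge loss, using separate penalties for positive and negative instances. This is a generic sketch, not the authors' density-based method; the data and parameters are illustrative.

```python
# Linear C-SVM by subgradient descent on the regularized hinge loss, with
# separate penalty parameters c_pos / c_neg for the two classes (the asymmetry
# the paper motivates). Toy data; all settings are illustrative.
def train_csvm(X, y, c_pos, c_neg, epochs=200, lr=0.01):
    n_feat = len(X[0])
    w = [0.0] * n_feat
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            C = c_pos if yi > 0 else c_neg
            if margin < 1:                       # hinge-loss subgradient active
                w = [wj - lr * (wj - C * yi * xj) for wj, xj in zip(w, xi)]
                b += lr * C * yi
            else:                                # only the regularizer pulls
                w = [wj - lr * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Linearly separable toy data: positives near (2, 2), negatives near (-2, -2).
X = [[2, 2], [2.5, 1.5], [1.5, 2.5], [-2, -2], [-2.5, -1.5], [-1.5, -2.5]]
y = [1, 1, 1, -1, -1, -1]
w, b = train_csvm(X, y, c_pos=2.0, c_neg=1.0)
```

The paper's contribution is precisely how `c_pos` and `c_neg` are chosen (from local density); here they are fixed by hand to show the mechanism only.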

  4. SVM-based glioma grading. Optimization by feature reduction analysis

    Zoellner, Frank G.; Schad, Lothar R.; Emblem, Kyrre E.; Harvard Medical School, Boston, MA; Oslo Univ. Hospital

    2012-01-01

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features; (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. Best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 (100-bin rCBV histogram + age) to 3 principal components. In comparison, classification accuracy by PCC was 82% (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided classification accuracy similar to literature values (≈87%) while reducing the number of features by up to 98%. (orig.)

  5. Multilevel Thresholding Segmentation Based on Harmony Search Optimization

    Diego Oliva

    2013-01-01

    Full Text Available In this paper, a multilevel thresholding (MT) algorithm based on the harmony search algorithm (HSA) is introduced. HSA is an evolutionary method inspired by musicians improvising new harmonies while playing. Unlike other evolutionary algorithms, HSA exhibits interesting search capabilities while keeping a low computational overhead. The proposed algorithm encodes random samples from a feasible search space inside the image histogram as candidate solutions, whose quality is evaluated using the objective functions employed by Otsu's or Kapur's methods. Guided by these objective values, the set of candidate solutions is evolved through the HSA operators until an optimal solution is found. Experimental results demonstrate the high performance of the proposed method for the segmentation of digital images.
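
A minimal version of the approach, using Otsu's between-class variance as the objective and a small harmony search over two thresholds, might look as follows. The synthetic trimodal histogram and all HSA parameters are illustrative assumptions.

```python
import random

# Multilevel Otsu thresholding via a small harmony search (HS). The objective
# is the between-class variance of the classes induced by the thresholds.
def otsu_variance(hist, thresholds):
    """Between-class variance for classes [0,t1), [t1,t2), [t2,len)."""
    bounds = [0] + sorted(thresholds) + [len(hist)]
    total = sum(hist)
    mu_total = sum(i * h for i, h in enumerate(hist)) / total
    var = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        w = sum(hist[lo:hi])
        if w == 0:
            continue
        mu = sum(i * hist[i] for i in range(lo, hi)) / w
        var += (w / total) * (mu - mu_total) ** 2
    return var

def harmony_search(hist, n_thresh=2, hms=10, hmcr=0.9, par=0.3, iters=500, seed=1):
    rng = random.Random(seed)
    new_vec = lambda: sorted(rng.sample(range(1, len(hist)), n_thresh))
    memory = [new_vec() for _ in range(hms)]           # harmony memory
    score = lambda v: otsu_variance(hist, v)
    for _ in range(iters):
        cand = []
        for j in range(n_thresh):
            if rng.random() < hmcr:                    # draw from memory ...
                x = rng.choice(memory)[j]
                if rng.random() < par:                 # ... with pitch adjustment
                    x = min(len(hist) - 1, max(1, x + rng.randint(-5, 5)))
            else:                                      # or improvise randomly
                x = rng.randint(1, len(hist) - 1)
            cand.append(x)
        cand = sorted(set(cand))
        if len(cand) < n_thresh:
            continue
        worst = min(memory, key=score)
        if score(cand) > score(worst):                 # replace worst harmony
            memory[memory.index(worst)] = cand
    return max(memory, key=score)

# Synthetic trimodal histogram: modes around 30-39, 120-129 and 200-209.
hist = [0] * 256
for base in (30, 120, 200):
    for i in range(base, base + 10):
        hist[i] = 10

t1, t2 = harmony_search(hist)
```

For a real image the histogram would come from pixel counts; everything else stays the same.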

  6. Optimization of arterial age prediction models based in pulse wave

    Scandurra, A G [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Meschino, G J [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Passoni, L I [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Dai Pra, A L [Engineering Aplied Artificial Intelligence Group, Mathematics Department, Mar del Plata University (Argentina); Introzzi, A R [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina); Clara, F M [Bioengineering Laboratory, Electronic Department, Mar del Plata University (Argentina)

    2007-11-15

    We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection on the main factors of the Principal Components Analysis. The model performance was tested using the bootstrap error type .632E. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  7. Efficacy of Code Optimization on Cache-Based Processors

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software is presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses. But they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.

  8. Optimization-based particle filter for state and parameter estimation

    Li Fu; Qi Fei; Shi Guangming; Zhang Li

    2009-01-01

    In recent years, the theory of the particle filter has been developed and widely used for state and parameter estimation in nonlinear/non-Gaussian systems. Choosing a good importance density is a critical issue in particle filter design. In order to improve the approximation of the posterior distribution, this paper provides an optimization-based algorithm (the steepest descent method) to generate the proposal distribution and then sample particles from it. The algorithm is applied in a 1-D case, and the simulation results show that the proposed particle filter performs better than the extended Kalman filter (EKF), the standard particle filter (PF), the extended Kalman particle filter (PF-EKF) and the unscented particle filter (UPF) in both efficiency and estimation precision.
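
The idea of an optimization-based proposal can be sketched in one dimension: each predicted particle takes a few steepest-descent steps on the negative log-likelihood of the current measurement before weighting and resampling. The model, noise levels and step sizes below are illustrative, not the authors' settings.

```python
import math, random

# 1-D particle filter with a steepest-descent-refined proposal: particles are
# nudged toward the measurement before weighting, which concentrates them in
# the high-likelihood region. Random-walk state model; values are illustrative.
def particle_filter(observations, n_particles=300, meas_std=0.5,
                    proc_std=0.3, descent_steps=3, lr=0.2, seed=0):
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # predict: x_k = x_{k-1} + process noise (assumed random-walk model)
        particles = [x + rng.gauss(0.0, proc_std) for x in particles]
        # descend -log p(z|x) = (z - x)^2 / (2 s^2) + const, a few steps
        for _ in range(descent_steps):
            particles = [x + lr * (z - x) / meas_std ** 2 for x in particles]
        # weight by measurement likelihood, estimate, resample
        weights = [math.exp(-(z - x) ** 2 / (2 * meas_std ** 2)) for x in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        estimates.append(sum(w * x for w, x in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Track a slowly drifting state observed in noise.
true_states = [0.1 * k for k in range(30)]
rng = random.Random(42)
obs = [x + rng.gauss(0.0, 0.5) for x in true_states]
est = particle_filter(obs)
```

Strictly, moving particles by optimization changes the proposal density and the weights should be corrected accordingly; this sketch omits that correction for brevity.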

  9. Regularized Regression and Density Estimation based on Optimal Transport

    Burger, M.

    2012-03-11

    The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).

  10. Optimization of arterial age prediction models based in pulse wave

    Scandurra, A G; Meschino, G J; Passoni, L I; Dai Pra, A L; Introzzi, A R; Clara, F M

    2007-01-01

    We propose the detection of early arterial ageing through a prediction model of arterial age based on the assumed coherence between pulse wave morphology and the patient's chronological age. After evaluating several methods, a Sugeno fuzzy inference system was selected. Model optimization is approached using hybrid methods: parameter adaptation with Artificial Neural Networks and Genetic Algorithms. Feature selection was performed according to the features' projection on the main factors of the Principal Components Analysis. The model performance was tested using the bootstrap error type .632E. The model presented an error smaller than 8.5%. This result encourages including this process as a diagnosis module in the device for pulse analysis that has been developed by the Bioengineering Laboratory staff.

  11. Optimizing Human Diet Problem Based on Price and Taste Using

    Hossein EGHBALI

    2012-07-01

    Full Text Available Low price and good taste of foods are regarded as the two major factors for optimal human nutrition. Due to price fluctuations and taste diversity, these two factors cannot be evaluated with certainty. The problem must also be viewed from another perspective because of the uncertainty about the amount of nutrients per unit of food and the diversity of people's daily requirements. This paper discusses the human diet problem in a fuzzy environment. The approach formulates a multi-objective fuzzy linear programming problem and solves it using a fuzzy programming technique. Prescribing a diet merely based on crisp data neglects some of the realities; for the same reason, we dealt with the human diet problem through a fuzzy approach. Results indicated that uncertainty about the factors of a nutrition diet, including taste, price, amount of nutrients and their intake, would affect diet quality, making the proposed diet more realistic.

  12. Oracle-based online robust optimization via online learning

    Ben-Tal, A.; Hazan, E.; Koren, T.; Shie, M.

    2015-01-01

    Robust optimization is a common optimization framework under uncertainty when problem parameters are unknown, but it is known that they belong to some given uncertainty set. In the robust optimization framework, a min-max problem is solved wherein a solution is evaluated according to its performance

  13. Optimization of conditions for gene delivery system based on PEI

    Roya Cheraghi

    2017-01-01

    Full Text Available Objective(s): PEI-based nanoparticles (NPs), with the dual capabilities of the proton sponge effect and DNA binding, are known as a powerful tool for nucleic acid delivery to cells. However, serious cytotoxicity and the complicated conditions that govern NP properties and their interactions with cells have in practice hindered the achievement of high transfection efficiency. Here, we have tried to optimize the properties of PEI/firefly luciferase plasmid complexes and the cellular conditions to improve transfection efficiency. Materials and Methods: For this purpose, firefly luciferase, as a robust reporter gene, was complexed with PEI to prepare NPs of different size and charge. The physicochemical properties of the nanoparticles were evaluated using agarose gel retardation and dynamic light scattering. MCF7 and BT474 cells at different confluency were also transfected with the prepared nanoparticles at various concentrations for short and long times. Results: Branched PEI can instantaneously bind DNA and form cationic NPs. The results demonstrated the production of nanoparticles with sizes of about 100-500 nm depending on the N/P ratio. Moreover, increasing the nanoparticle concentration at the cell surface drastically improved the transfection rate, with the highest transfection efficiency achieved at a concentration of 30 ng/µl. Likewise, the maximum efficiency was obtained at a confluency of 40-60%. The results demonstrated that an N/P ratio of 12 establishes an optimal balance between the transfection efficiency and cytotoxicity of PEI/plasmid nanoparticles; further increases in the N/P ratio led to significant cytotoxicity. Conclusion: The obtained results verified the optimum conditions for PEI-based gene delivery in different cell lines.

  14. Multivariate ordination identifies vegetation types associated with spider conservation in brassica crops

    Hafiz Sohaib Ahmed Saqib

    2017-10-01

    Full Text Available Conservation biological control emphasizes natural and other non-crop vegetation as a source of natural enemies to focal crops. There is an unmet need for better methods to identify the types of vegetation that are optimal for supporting the specific natural enemies that may colonize the crops. Here we explore the commonality of the spider assemblage, considering abundance and diversity, in brassica crops with that of adjacent non-crop and non-brassica crop vegetation. We employ spatial-based multivariate ordination approaches, hierarchical clustering and spatial eigenvector analysis. The small-scale mixed cropping and high disturbance frequency of southern Chinese vegetable farming offered a setting to test the role of alternate vegetation for spider conservation. Our findings indicate that spider families differ markedly in occurrence with respect to vegetation type. Grassy field margins, non-crop vegetation, taro and sweetpotato harbour spider morphospecies and functional groups that are also present in brassica crops. In contrast, pumpkin and litchi contain spiders not found in brassicas, and so may have little benefit for conservation biological control services for brassicas. Our findings also illustrate the utility of advanced statistical approaches for identifying spatial relationships between natural enemies and the land uses most likely to offer alternative habitats, generating testable hypotheses for future conservation biological control studies.

  15. Optimal power flow based on glow worm-swarm optimization for three-phase islanded microgrids

    Quang, Ninh Nguyen; Sanseverino, Eleonora Riva; Di Silvestre, Maria Luisa

    2014-01-01

    This paper presents an application of the Glowworm Swarm Optimization method (GSO) to solve the optimal power flow problem in three-phase islanded microgrids equipped with power electronics dc-ac inverter interfaced distributed generation units. In this system, the power injected by the distribut...

  16. Sensor Calibration Design Based on D-Optimality Criterion

    Hajiyev Chingiz

    2016-09-01

    Full Text Available In this study, a procedure for the optimal selection of measurement points using the D-optimality criterion to find the best calibration curves of measurement sensors is proposed. The coefficients of the calibration curve are evaluated by applying the classical Least Squares Method (LSM). As an example, the problem of optimally selecting standard pressure setters when calibrating a differential pressure sensor is solved. The values obtained from the D-optimum measurement points for calibration of the differential pressure sensor are compared with those from actual experiments. A comparison of the calibration errors corresponding to the D-optimal, A-optimal and equidistant calibration curves is also given.
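
For a straight-line calibration model y = a + b·x, D-optimal selection reduces to choosing the candidate set-points that maximize det(XᵀX). A brute-force sketch follows; the candidate grid and subset size are illustrative, not the paper's pressure-setter values.

```python
from itertools import combinations

# D-optimal selection of calibration points for a straight-line model
# y = a + b*x: choose the subset of candidate set-points maximizing
# det(X^T X), the D-optimality criterion.
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def information_matrix(points):
    """X^T X for design-matrix rows [1, x]."""
    n = len(points)
    sx = sum(points)
    sxx = sum(x * x for x in points)
    return [[n, sx], [sx, sxx]]

def d_optimal_design(candidates, size):
    best, best_det = None, float("-inf")
    for subset in combinations(candidates, size):
        d = det2(information_matrix(subset))
        if d > best_det:
            best, best_det = subset, d
    return best, best_det

candidates = [0.0, 0.25, 0.5, 0.75, 1.0]
design, crit = d_optimal_design(candidates, size=3)
```

As expected for a linear model, the D-optimal design pushes points to the ends of the calibration range; for higher-order calibration curves the information matrix simply grows with the extra polynomial terms.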

  17. Topology Optimization in Electric Car Body Frame Based on Optistruct

    Ge Dongdong

    2017-01-01

    Full Text Available In order to optimize the structure of the electric car body frame, a static analysis of the frame was carried out. With minimum frame weight as the goal, OptiStruct software was used for topology optimization design, and the optimal material distribution scheme of the frame structure was obtained. Static strength before and after optimization was comprehensively compared through stress and deformation. The results showed that the weight of the frame after optimization was reduced by 18.96%, while the strength and stiffness requirements were still met.

  18. Swarm Optimization-Based Magnetometer Calibration for Personal Handheld Devices

    Naser El-Sheimy

    2012-09-01

    Full Text Available Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a processor that generates position and orientation solutions by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are usually corrupted by several errors, including manufacturing defects and external electro-magnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO)-based calibration algorithm is presented to estimate the values of the bias and scale factor of low cost magnetometers. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. Furthermore, the proposed algorithm can help in the development of Pedestrian Navigation Devices (PNDs) when combined with inertial sensors and GPS/Wi-Fi for indoor navigation and Location Based Services (LBS) applications.
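
A stripped-down version of the idea, for a 2-axis magnetometer, is to let PSO search for the bias and scale factors that map corrected readings onto a circle of known field magnitude. Everything below (synthetic data, swarm settings) is an illustrative assumption, not the paper's algorithm.

```python
import math, random

# PSO estimation of bias and scale factor for a 2-axis magnetometer: find
# (bx, by, sx, sy) such that corrected readings (m - b) / s lie on a circle
# of radius equal to the assumed local field magnitude.
F = 1.0                      # assumed field magnitude (normalized units)
TRUE_BIAS = (0.3, -0.2)
TRUE_SCALE = (1.1, 0.9)

def make_readings(n=24, seed=3):
    rng = random.Random(seed)
    out = []
    for k in range(n):
        th = 2 * math.pi * k / n
        out.append((TRUE_SCALE[0] * F * math.cos(th) + TRUE_BIAS[0] + rng.gauss(0, 0.005),
                    TRUE_SCALE[1] * F * math.sin(th) + TRUE_BIAS[1] + rng.gauss(0, 0.005)))
    return out

def cost(p, readings):
    bx, by, sx, sy = p
    if abs(sx) < 1e-6 or abs(sy) < 1e-6:     # guard against division by ~0
        return float("inf")
    return sum((math.hypot((mx - bx) / sx, (my - by) / sy) - F) ** 2
               for mx, my in readings)

def pso(readings, n_part=30, iters=300, seed=7):
    rng = random.Random(seed)
    lo, hi = [-1.0, -1.0, 0.5, 0.5], [1.0, 1.0, 2.0, 2.0]
    pos = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(n_part)]
    vel = [[0.0] * 4 for _ in range(n_part)]
    pbest = [p[:] for p in pos]
    pcost = [cost(p, readings) for p in pos]
    gi = min(range(n_part), key=lambda i: pcost[i])
    gbest, gcost = pbest[gi][:], pcost[gi]
    for _ in range(iters):
        for i in range(n_part):
            for d in range(4):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i], readings)
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest

readings = make_readings()
bx, by, sx, sy = pso(readings)
```

Note that magnitude-only calibration leaves the sign of each scale factor unidentifiable, so only |sx| and |sy| are meaningful; a full 3-axis version simply extends the parameter vector.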

  19. Multispecies Coevolution Particle Swarm Optimization Based on Previous Search History

    Danping Wang

    2017-01-01

    Full Text Available A hybrid coevolution particle swarm optimization algorithm with a dynamic multispecies strategy based on K-means clustering and a nonrevisit strategy based on a Binary Space Partitioning fitness tree (called MCPSO-PSH) is proposed. Previous search history, memorized in the Binary Space Partitioning fitness tree, can effectively restrain the individuals' revisit phenomenon. The whole population is partitioned into several subspecies, and cooperative coevolution is realized by an information communication mechanism between subspecies, which can enhance the global search ability of particles and avoid premature convergence to a local optimum. To demonstrate the power of the method, comparisons between the proposed algorithm and state-of-the-art algorithms are grouped into two categories: 20 benchmark functions (10 basic functions in 10 and 30 dimensions, and 10 CEC2005 functions in 30 dimensions), and a real-world problem (multilevel image segmentation). Experimental results show that MCPSO-PSH displays competitive performance compared to other swarm-based or evolutionary algorithms in terms of solution accuracy and statistical tests.

  20. Parallel Performance Optimizations on Unstructured Mesh-based Simulations

    Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid

    2015-01-01

    © The Authors. Published by Elsevier B.V. This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data from runs on thousands of cores of the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.

  1. Estimating Ordinal Reliability for Likert-Type and Ordinal Item Response Data: A Conceptual, Empirical, and Practical Guide

    Gadermann, Anne M.; Guhn, Martin; Zumbo, Bruno D.

    2012-01-01

    This paper provides a conceptual, empirical, and practical guide for estimating ordinal reliability coefficients for ordinal item response data (also referred to as Likert, Likert-type, ordered categorical, or rating scale item responses). Conventionally, reliability coefficients, such as Cronbach's alpha, are calculated using a Pearson…
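
    The distinction the authors draw can be made concrete: the standardized alpha formula is the same either way, and the ordinal variant simply feeds it a polychoric rather than a Pearson correlation matrix. A minimal sketch, with an invented 4-item correlation matrix for illustration:

```python
import numpy as np

def alpha_from_corr(R):
    """Standardized alpha from a k x k inter-item correlation matrix R.

    With a Pearson correlation matrix this is standardized Cronbach's
    alpha; feeding a polychoric correlation matrix instead yields the
    ordinal alpha discussed in the guide.
    """
    k = R.shape[0]
    r_bar = (R.sum() - k) / (k * (k - 1))   # mean off-diagonal correlation
    return k * r_bar / (1 + (k - 1) * r_bar)

# Hypothetical 4-item scale with uniform inter-item correlation 0.5
R = np.full((4, 4), 0.5)
np.fill_diagonal(R, 1.0)
print(round(alpha_from_corr(R), 3))   # 0.8
```

    Because polychoric correlations of ordinal items are typically larger than the corresponding Pearson correlations, ordinal alpha is usually at least as large as the conventional estimate.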

  2. CASTING IMPROVEMENT BASED ON METAHEURISTIC OPTIMIZATION AND NUMERICAL SIMULATION

    Radomir Radiša

    2017-12-01

    Full Text Available This paper presents the use of metaheuristic optimization techniques to support the improvement of the casting process. Genetic Algorithm (GA), Ant Colony Optimization (ACO), Simulated Annealing (SA) and Particle Swarm Optimization (PSO) have been considered as optimization tools to define the geometry of the casting part’s feeder. The proposed methodology has been demonstrated in the design of the feeder for casting a Pelton turbine bucket. The results of the optimization are the dimensional characteristics of the feeder, and the best result from all the implemented optimization processes has been adopted. Numerical simulation has been used to verify the validity of the presented design methodology and the feeding system optimization in the casting system of the Pelton turbine bucket.

  3. $H_2$ optimal controllers with observer based architecture for continuous-time systems: separation principle

    Saberi, A.; Sannuti, P.; Stoorvogel, A.A.

    1994-01-01

    For a general H2 optimal control problem, at first all H2 optimal measurement feedback controllers are characterized and parameterized, and then attention is focused on controllers with observer based architecture. Both full order as well as reduced order observer based H2 optimal controllers are…

  4. Ordinal classification of vegetation along Mangla Dam, Mirpur, AJK

    Urooj, R.; Ahmad, S.S.

    2015-01-01

    Vegetation plays an important role in ecosystem maintenance, but the construction of dams transforms riparian vegetation into an impoundment region. The present study was conducted to identify and quantify the herbaceous flora in the vicinity of the Mangla dam. The study area was divided into two zones on the basis of distance from the dam boundary. Patterns of vegetation distribution and association in the area were grouped into different communities using two ordination techniques, TWINSPAN and DECORANA. A total of 37 species belonging to 17 families were identified from fifty quadrats. Random sampling was done using a 1 × 1 m quadrat, and percentage vegetation cover was assessed using the Domin cover scale. TWINSPAN classified two groups and four communities in Zone-I, while in Zone-II two groups and six communities were formed. Dominance curves showed that Cynodon dactylon, Desmostachya bipinnata and Rhynchosia minima were dominant species in Zone-I, and Croton bonplandianus, C. dactylon, D. bipinnata and Brachiaria decumbens were frequent species in Zone-II. DCA, an indirect multivariate technique based on reciprocal averaging, determined the environmental gradients that affect species richness, verified the groups of species, and indicated four communities in both zones. A Monte Carlo test of significance was used to analyze stress in relation to the number of axes/dimensionality under Non-metric Multidimensional Scaling (NMS) through the p-value. This study provided significant results on the least abundant and most abundant herbaceous species around the dam, which will be helpful for biodiversity conservation and in decision making for further land planning.

  5. Ordinal convolutional neural networks for predicting RDoC positive valence psychiatric symptom severity scores.

    Rios, Anthony; Kavuluru, Ramakanth

    2017-11-01

    The CEGS N-GRID 2016 Shared Task in Clinical Natural Language Processing (NLP) provided a set of 1000 neuropsychiatric notes to participants as part of a competition to predict psychiatric symptom severity scores. This paper summarizes our methods, results, and experiences based on our participation in the second track of the shared task. Classical methods of text classification usually fall into one of three problem types: binary, multi-class, and multi-label classification. In this effort, we study ordinal regression problems with text data where misclassifications are penalized differently based on how far apart the ground truth and model predictions are on the ordinal scale. Specifically, we present our entries (methods and results) in the N-GRID shared task in predicting research domain criteria (RDoC) positive valence ordinal symptom severity scores (absent, mild, moderate, and severe) from psychiatric notes. We propose a novel convolutional neural network (CNN) model designed to handle ordinal regression tasks on psychiatric notes. Broadly speaking, our model combines an ordinal loss function, a CNN, and conventional feature engineering (wide features) into a single model which is learned end-to-end. Given that interpretability is an important concern with nonlinear models, we apply a recent approach called Local Interpretable Model-agnostic Explanations (LIME) to identify important words that lead to instance-specific predictions. Our best model entered into the shared task placed third among 24 teams and scored a macro mean absolute error (MMAE) based normalized score (100·(1−MMAE)) of 83.86. Since the competition, we improved our score (using basic ensembling) to 85.55, comparable with the winning shared task entry. Applying LIME to model predictions, we demonstrate the feasibility of instance-specific prediction interpretation by identifying words that led to a particular decision. In this paper, we present a method that successfully uses wide features and…
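
    The shared-task metric quoted above can be sketched as follows. The per-class averaging and the division by the maximum possible error (3, on the 4-point absent-to-severe scale) reflect our reading of the scoring rule, not code from the task organizers.

```python
import numpy as np

def mmae_score(y_true, y_pred, n_classes=4):
    """Macro MAE over severity classes, normalized as 100*(1 - MMAE).

    MMAE averages the mean absolute error within each true class, so rare
    severities weigh as much as common ones. Labels are assumed to be
    integer-coded 0..3 (absent, mild, moderate, severe); dividing by
    n_classes - 1 rescales the error to [0, 1] (our assumption about the
    normalization).
    """
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.abs(y_pred[y_true == c] - c).mean()
                 for c in range(n_classes) if np.any(y_true == c)]
    mmae = float(np.mean(per_class)) / (n_classes - 1)
    return 100.0 * (1.0 - mmae)

print(mmae_score([0, 1, 2, 3], [0, 1, 2, 3]))   # 100.0 for perfect predictions
print(mmae_score([0, 1, 2, 3], [1, 2, 3, 3]))   # lower when classes are off by one
```

    The penalty grows with the ordinal distance between truth and prediction, which is exactly what distinguishes this task from plain multi-class accuracy.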

  6. Optimal Allocation of Water Resources Based on Water Supply Security

    Jianhua Wang

    2016-06-01

    Full Text Available Under the combined impacts of climate change and human activities, a series of water issues, such as water shortages, have arisen all over the world. According to current studies in Science and Nature, water security has become a critical frontier topic. Water supply security (WSS), which is the state of water resources and their capacity to meet the demand of water users through water supply systems, is an important part of water security. Currently, WSS is affected by the amount of water resources, water supply projects, water quality and water management. Water shortages have also led to water supply insecurity. WSS is now evaluated based on the balance of supply and demand under a single water resources condition, without considering the dynamics of the varying conditions of water resources each year. This paper developed an optimal allocation model for water resources that can realize the optimal allocation of regional water resources and comprehensively evaluate WSS. The objective of this model is to minimize the duration of water shortages in the long term, as characterized by the Water Supply Security Index (WSSI), the assessment value of WSS; a larger WSSI value indicates better results. In addition, the simulation results of the model can determine the change process and dynamic evolution of the WSS. Quanzhou, a city in China with serious water shortage problems, was selected as a case study. The allocation results of the current year and the target year of planning demonstrated that the level of regional comprehensive WSS was significantly influenced by the capacity of water supply projects and the conditions of the natural water resources. The varying water resources allocation results in the same year demonstrated that the allocation results and WSSI were significantly affected by reductions in precipitation, decreases in the water yield coefficient, and changes in the underlying surface.

  7. Efficacy of Code Optimization on Cache-based Processors

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important…
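
    The stride discussion above can be made concrete in NumPy, whose arrays expose their byte strides directly (row-major layout is assumed, as is NumPy's default):

```python
import numpy as np

# A row-major (C-order) 2-D array of 8-byte floats: moving along a row
# advances 8 bytes (unit stride, cache-friendly), while moving down a
# column advances a whole row of 8000 bytes per step -- the kind of
# large, potentially pathological stride described above.
a = np.zeros((1000, 1000), dtype=np.float64)
row_stride, col_stride = a.strides
print((row_stride, col_stride))      # (8000, 8)
print(col_stride == a.itemsize)      # True: unit stride along a row
```

    Summing such an array row by row touches each cache line once, whereas a column-by-column traversal touches a new cache line on every access.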

  8. Particle swarm optimization algorithm based low cost magnetometer calibration

    Ali, A. S.; Siddharth, S.; Syed, Z.; El-Sheimy, N.

    2011-12-01

    Inertial Navigation Systems (INS) consist of accelerometers, gyroscopes and a microprocessor that provide inertial digital data, from which position and orientation are obtained by integrating the specific forces and rotation rates. In addition to the accelerometers and gyroscopes, magnetometers can be used to derive the absolute user heading based on Earth's magnetic field. Unfortunately, the measurements of the magnetic field obtained with low cost sensors are corrupted by several errors, including manufacturing defects and external electromagnetic fields. Consequently, proper calibration of the magnetometer is required to achieve high accuracy heading measurements. In this paper, a Particle Swarm Optimization (PSO) based calibration algorithm is presented to estimate the bias and scale factor values of a low cost magnetometer. The main advantage of this technique is the use of artificial intelligence, which does not need any error modeling or awareness of the nonlinearity. The estimated bias and scale factor errors from the proposed algorithm improve the heading accuracy, and the results are also statistically significant. The technique can also help in the development of Pedestrian Navigation Devices (PNDs) when combined with INS and GPS/Wi-Fi, especially in indoor environments.
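
    The bias/scale model estimated here can be sketched as the fitness function a PSO particle would minimize: after correction, every reading should have the magnitude of the local field regardless of orientation. The field strength, error values, and correction form below are illustrative assumptions, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic raw readings: a true field of magnitude 50 uT in random
# orientations, corrupted by a per-axis scale factor and bias (values
# chosen for illustration only).
true_scale = np.array([1.1, 0.9, 1.05])
true_bias = np.array([3.0, -2.0, 1.5])
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
raw = dirs * 50.0 * true_scale + true_bias

def fitness(params):
    """Cost a PSO particle would minimize: after correcting each reading
    with the candidate bias/scale, its magnitude should equal the local
    field strength (50 uT here) regardless of sensor orientation."""
    bias, scale = params[:3], params[3:]
    cal = (raw - bias) / scale
    return np.mean((np.linalg.norm(cal, axis=1) - 50.0) ** 2)

print(fitness(np.concatenate([true_bias, true_scale])))   # essentially 0
```

    No error model is needed beyond this magnitude constraint, which matches the paper's point that the optimizer only requires a fitness value, not an analytic model of the nonlinearity.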

  9. Credibilistic multi-period portfolio optimization based on scenario tree

    Mohebbi, Negin; Najafi, Amir Abbas

    2018-02-01

    In this paper, we consider a multi-period fuzzy portfolio optimization model that accounts for transaction costs and the possibility of risk-free investment. We formulate a bi-objective mean-VaR portfolio selection model based on the integration of fuzzy credibility theory and a scenario tree in order to deal with market uncertainty. The scenario tree is also a proper method for modeling multi-period portfolio problems, given the length and continuity of their horizon. We take return and risk as well as cardinality, threshold, class, and liquidity constraints into consideration for further compliance of the model with reality. Then, an interactive dynamic programming method, based on a two-phase fuzzy interactive approach, is employed to solve the proposed model. In order to verify the proposed model, we present an empirical application in the NYSE under different circumstances. The results show that the consideration of data uncertainty and other real-world assumptions leads to more practical and efficient solutions.

  10. Task-based optimization of image reconstruction in breast CT

    Sanchez, Adrian A.; Sidky, Emil Y.; Pan, Xiaochuan

    2014-03-01

    We demonstrate a task-based assessment of image quality in dedicated breast CT in order to optimize the number of projection views acquired. The methodology we employ is based on the Hotelling Observer (HO) and its associated metrics. We consider two tasks: the Rayleigh task of discerning between two resolvable objects and a single larger object, and the signal detection task of classifying an image as belonging to either a signal-present or signal-absent hypothesis. HO SNR values are computed for 50, 100, 200, 500, and 1000 projection view images, with the total imaging radiation dose held constant. We use the conventional fan-beam FBP algorithm and investigate the effect of varying the width of a Hanning window used in the reconstruction, since this affects both the noise properties of the image and the under-sampling artifacts which can arise in the case of sparse-view acquisitions. Our results demonstrate that fewer projection views should be used in order to increase HO performance, which in this case constitutes an upper bound on human observer performance. However, the impact on HO SNR of using fewer projection views, each with a higher dose, is not as significant as the impact of employing regularization in the FBP reconstruction through a Hanning filter.
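
    The HO SNR used above has a compact textbook form: the difference of the class mean images, whitened by the image covariance. A toy two-pixel sketch (not the paper's breast-CT data) makes the computation explicit:

```python
import numpy as np

def hotelling_snr(delta_s, K):
    """Hotelling observer SNR for a signal-detection task.

    delta_s is the difference of class mean images and K the image
    covariance matrix; SNR^2 = delta_s^T K^{-1} delta_s. This is the
    standard textbook definition, not code from the paper.
    """
    return float(np.sqrt(delta_s @ np.linalg.solve(K, delta_s)))

# Toy 2-pixel example: uncorrelated noise with unit variance
delta_s = np.array([3.0, 4.0])
K = np.eye(2)
print(hotelling_snr(delta_s, K))   # 5.0
```

    With correlated noise, off-diagonal terms in K reduce or raise the SNR, which is why the reconstruction filter (through its effect on noise correlations) matters as much as the view count.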

  11. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which influence the performance of the ensemble system dramatically. The random vector functional link network (RVFL) without direct input-to-output links is a suitable base classifier for ensemble systems because of its fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFLs based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFLs. When using ARPSO to select the optimal base RVFLs, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFLs, the ensemble weights corresponding to the base RVFLs are initialized by the minimum norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFLs are pruned, and thus a more compact ensemble of RVFLs is obtained. Moreover, this paper presents theoretical analysis and justification of how to prune the base classifiers for classification problems, and proposes a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFLs built by the proposed method outperforms that built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reducing the complexity of the ensemble system.
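
    A minimal RVFL-style learner matching the description above: a random fixed hidden layer and output weights obtained by the minimum-norm least-squares solution (the same initialization the ensemble weights receive before ARPSO refines them). Sizes and the toy regression target are our choices.

```python
import numpy as np

rng = np.random.default_rng(42)

def rvfl_fit(X, y, n_hidden=50):
    """RVFL sketch without direct input-to-output links: hidden weights
    are random and fixed; only the output weights are learned, via the
    minimum-norm least-squares solution (pseudoinverse)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)          # random nonlinear features
    beta = np.linalg.pinv(H) @ y    # minimum-norm least-squares fit
    return W, b, beta

def rvfl_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy regression: approximate sin on [0, pi]
X = np.linspace(0, np.pi, 100)[:, None]
y = np.sin(X[:, 0])
model = rvfl_fit(X, y)
err = np.max(np.abs(rvfl_predict(model, X) - y))
print(float(err))                   # small training error
```

    The appeal for ensembles is visible here: fitting is a single linear solve, so dozens of candidate RVFLs can be trained cheaply before the swarm-based selection step.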

  12. Application of surrogate-based global optimization to aerodynamic design

    Pérez, Esther

    2016-01-01

    Aerodynamic design, like many other engineering applications, is increasingly relying on computational power. The growing need for multi-disciplinarity and high fidelity in design optimization for industrial applications requires a huge number of repeated simulations in order to find an optimal design candidate. The main drawback is that each simulation can be computationally expensive – this becomes an even bigger issue when used within parametric studies, automated search or optimization loops, which typically may require thousands of analysis evaluations. The core issue of a design-optimization problem is the search process involved. However, when facing complex problems, the high dimensionality of the design space and the high multi-modality of the target functions cannot be tackled with standard techniques. In recent years, global optimization using meta-models has been widely applied to design exploration in order to rapidly investigate the design space and find sub-optimal solutions. Indeed, surrogate…

  13. RJMCMC based Text Placement to Optimize Label Placement and Quantity

    Touya, Guillaume; Chassin, Thibaud

    2018-05-01

    Label placement is a tedious task in map design, and its automation has long been a goal for researchers in cartography as well as in computational geometry. Methods that search for an optimal or nearly optimal solution satisfying a set of constraints, such as avoiding label overlaps, have been proposed in the literature. Most of these methods focus on finding the optimal position for a given set of labels, but rarely allow the removal of labels as part of the optimization. This paper proposes to apply an optimization technique called Reversible-Jump Markov Chain Monte Carlo that makes it easy to model the removal or addition of labels during the optimization iterations. The method, quite preliminary for now, is tested on a real dataset, and the first results are encouraging.
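
    A stripped-down sketch of the dimension-changing move the method relies on: the state is a subset of candidate labels, and each proposal toggles one label in or out. The greedy accept rule below stands in for the full reversible-jump acceptance ratio, and the boxes and energy weights are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical candidate labels as (x, y, w, h) boxes on a 10 x 10 map;
# the goal is to keep as many as possible with no pairwise overlap.
cands = [(random.uniform(0, 10), random.uniform(0, 10), 1.5, 0.6)
         for _ in range(20)]

def overlap(a, b):
    return (abs(a[0] - b[0]) < (a[2] + b[2]) / 2
            and abs(a[1] - b[1]) < (a[3] + b[3]) / 2)

def energy(sel):
    """Invented energy: heavy penalty per overlap, reward per placed label."""
    over = sum(overlap(cands[i], cands[j]) for i in sel for j in sel if i < j)
    return 10 * over - len(sel)

sel = set()
for _ in range(1000):
    i = random.randrange(len(cands))
    prop = sel ^ {i}                   # dimension-changing move: toggle label i
    if energy(prop) <= energy(sel):    # greedy stand-in for RJMCMC acceptance
        sel = prop

placed = [cands[i] for i in sel]
print(len(placed))                     # number of labels kept, overlap-free
```

    The point of the toggle move is that label quantity and label position live in the same search, rather than fixing the label set up front.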

  14. The First Women’s ordination in the Episcopal Church of the 1970s

    Posternak Andrei, priest

    2015-02-01

    Full Text Available The Episcopal Church of the USA in 1976 adopted a positive resolution (1976-B300) regarding women’s ordination to the priesthood and episcopacy. The Church thus legalized the experience of the Anglican communities of the East Coast, Philadelphia and Washington, where women’s ordinations took place in July 1974 and September 1975. The article is devoted to the history of these ordinations, public reactions to them, and theological discussions concerning the permissibility of female ordination in the Episcopal community of the first half of the 1970s. The research is based on the official reports of the Episcopal Church. Belief in the Divine will regarding the vocation of women to the priesthood was associated with the transformation of Western society: women’s struggle for their rights and the public struggle against racial discrimination. The Anglican bishops were concerned about the problem of adapting the new ministry to modern conditions: the 1970s became a period of transition from a traditional to a post-Christian society in which gender was considered a new social function. This would transform the Anglican community, where the priesthood would become a form of ministry to the parish, and in these conditions women could be ordained.

  15. Optimal Sensor Placement for Latticed Shell Structure Based on an Improved Particle Swarm Optimization Algorithm

    Xun Zhang

    2014-01-01

    Full Text Available Optimal sensor placement is a key issue in the structural health monitoring of large-scale structures. However, some aspects of existing approaches require improvement, such as the empirical and unreliable selection of mode and sensor numbers and time-consuming computation. A novel improved particle swarm optimization (IPSO) algorithm is proposed to address these problems. The approach first employs the cumulative effective modal mass participation ratio to select the mode number. Three strategies are then adopted to improve the PSO algorithm. Finally, the IPSO algorithm is utilized to determine the optimal number and configuration of sensors. A case study of a latticed shell model is implemented to verify the feasibility of the proposed algorithm and four different PSO algorithms. The effective independence method is also taken as a contrast experiment. The comparison results show that the optimal placement schemes obtained by the PSO algorithms are valid, and the proposed IPSO algorithm offers better convergence speed and precision.
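
    The effective independence (EfI) contrast method mentioned above has a compact form: iteratively delete the candidate degree of freedom (DOF) that contributes least to the linear independence of the retained mode shapes. A sketch using the analytic mode shapes of a 10-DOF pinned-pinned string, not the latticed-shell model:

```python
import numpy as np

def effective_independence(Phi, n_sensors):
    """Classical EfI deletion scheme: rank candidate DOFs by the diagonal
    of the projection matrix Phi (Phi^T Phi)^{-1} Phi^T and repeatedly
    drop the least informative one until n_sensors remain."""
    idx = list(range(Phi.shape[0]))
    P = Phi.copy()
    while len(idx) > n_sensors:
        E = np.einsum('ij,ji->i', P, np.linalg.pinv(P))   # EfI values
        drop = int(np.argmin(E))
        idx.pop(drop)
        P = np.delete(P, drop, axis=0)
    return idx

# First three mode shapes of a pinned-pinned string sampled at 10 points
n, modes = 10, 3
Phi = np.sin(np.outer(np.arange(1, n + 1),
                      np.arange(1, modes + 1)) * np.pi / (n + 1))
sensors = effective_independence(Phi, 3)
print(sensors)   # indices of the 3 DOFs retained
```

    The PSO-based schemes in the paper search the same combinatorial space globally, whereas EfI commits to one greedy deletion per step.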

  16. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Multiphase Return Trajectory Optimization Based on Hybrid Algorithm

    Yi Yang

    2016-01-01

    Full Text Available A hybrid trajectory optimization method consisting of the Gauss pseudospectral method (GPM) and a natural computation algorithm has been developed and utilized to solve the multiphase return trajectory optimization problem, where a phase is defined as a subinterval in which the right-hand side of the differential equation is continuous. GPM converts the optimal control problem to a nonlinear programming problem (NLP), which helps to improve the calculation accuracy and speed of the natural computation algorithm. Through numerical simulations, it is found that the multiphase optimal control problem can be solved accurately and efficiently.
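
    The collocation grid GPM relies on consists of Legendre-Gauss (LG) points, at which the dynamics are enforced as NLP constraints. Their defining property, exact integration of low-order polynomials, is easy to verify with NumPy:

```python
import numpy as np

# Legendre-Gauss nodes and weights on [-1, 1]; an n-point rule integrates
# polynomials up to degree 2n - 1 exactly, which is what makes the
# pseudospectral transcription of the dynamics accurate.
nodes, weights = np.polynomial.legendre.leggauss(5)
approx = np.sum(weights * nodes**8)    # 5 points handle degree <= 9
exact = 2.0 / 9.0                      # integral of x^8 over [-1, 1]
print(abs(approx - exact) < 1e-12)     # True
```

    In a GPM transcription each phase gets its own LG grid, so the discontinuities at phase boundaries never fall inside a collocation interval.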

  18. OPTIMIZED PARTICLE SWARM OPTIMIZATION BASED DEADLINE CONSTRAINED TASK SCHEDULING IN HYBRID CLOUD

    Dhananjay Kumar

    2016-01-01

    Full Text Available Cloud computing is a dominant way of sharing computing resources that can be configured and provisioned easily. Task scheduling in a hybrid cloud is a challenge, as it suffers in producing the best QoS (Quality of Service) when there is high demand. In this paper, a new resource allocation algorithm is proposed to find the best external cloud provider when the intermediate provider’s resources are not enough to satisfy the customer’s demand. The proposed algorithm, called Optimized Particle Swarm Optimization (OPSO), combines two metaheuristic algorithms, namely Particle Swarm Optimization and Ant Colony Optimization (ACO). These metaheuristic algorithms are used to search the solution space for the best resource from the pool of resources and to obtain maximum profit even when the number of tasks submitted for execution is very high. This optimization is performed to allocate job requests to internal and external cloud providers so as to obtain maximum profit, and it helps to improve system performance by improving CPU utilization and handling multiple requests at the same time. The simulation results show that OPSO yields 0.1% to 5% more profit to the intermediate cloud provider than the standard PSO and ACO algorithms, and it also increases CPU utilization by 0.1%.

  19. Recent Progress on Data-Based Optimization for Mineral Processing Plants

    Jinliang Ding

    2017-04-01

    Full Text Available In the globalized market environment, increasingly significant economic and environmental factors within complex industrial plants make the optimization of global production indices important; such optimization includes improvements in production efficiency, product quality, and yield, along with reductions in energy and resource usage. This paper briefly overviews recent progress in data-driven hybrid intelligence optimization methods and technologies for improving the performance of global production indices in mineral processing. First, we provide the problem description. Next, we summarize recent progress in data-based optimization for mineral processing plants. This optimization consists of four layers: optimization of the target values for monthly global production indices, optimization of the target values for daily global production indices, optimization of the target values for operational indices, and automation systems for unit processes. We briefly overview recent progress in each of these layers. Finally, we point out opportunities for future work in data-based optimization for mineral processing plants.

  20. Market-Based and System-Wide Fuel Cycle Optimization

    Wilson, Paul

    2016-01-01

    The Dynamic Resource Exchange (DRE) gives agency to consumer facilities to determine the preference of any particular trade that is offered by suppliers to satisfy its requests. This provides a natural balance of power in the relationship between consumers and suppliers. However, in situations in which suppliers have flexibility surrounding the way that they respond to individual requests, they have no mechanism to assess how different bids will be received by the consumer. Theoretically, a supplier can offer multiple bids in response to a given request in an attempt to cover their bases, but this introduces more arcs into the underlying network flow problem, increasing the cost to solve the problem. In the extreme, when a supplier can continuously vary the characteristics of the bid, this can represent a large number of additional arcs and have real performance consequences. To remedy this inefficiency in the implementation of the market-level optimization, the definition of a request has been extended to include a function that the supplier can use to query the preference a consumer would assign to a potential bid. The supplier is then free to implement arbitrarily complex algorithms to revise and optimize its bid based on responses to this function. A supplier can choose not to invoke the function at all, mimicking the original DRE behavior; can use it to select among a small set of discrete choices; or can implement an internal algorithm to seek an optimal bid on a continuous parameter space. This capability was demonstrated with a storage facility that preferred material whose specific decay heat was as close as possible to the maximum allowable decay heat, while requiring the specific decay heat to fall between a minimum and maximum level. This archetype was used to fill multiple storage roles in a simulation that also included a standard recipe reactor: wet storage with no maximum allowable specific decay heat, dry storage with a modest…

  1. Adaptive discrete-ordinates algorithms and strategies

    Stone, J.C.; Adams, M.L.

    2005-01-01

    We present our latest algorithms and strategies for adaptively refined discrete-ordinates quadrature sets. In our basic strategy, which we apply here in two-dimensional Cartesian geometry, the spatial domain is divided into regions. Each region has its own quadrature set, which is adapted to the region's angular flux. Our algorithms add a 'test' direction to the quadrature set if the angular flux calculated at that direction differs by more than a user-specified tolerance from the angular flux interpolated from other directions. Different algorithms have different prescriptions for the method of interpolation and/or choice of test directions and/or prescriptions for quadrature weights. We discuss three different algorithms of different interpolation orders. We demonstrate through numerical results that each algorithm is capable of generating solutions with negligible angular discretization error. This includes elimination of ray effects. We demonstrate that all of our algorithms achieve a given level of error with far fewer unknowns than does a standard quadrature set applied to an entire problem. To address a potential issue with other algorithms, we present one algorithm that retains exact integration of high-order spherical-harmonics functions, no matter how much local refinement takes place. To address another potential issue, we demonstrate that all of our methods conserve partial currents across interfaces where quadrature sets change. We conclude that our approach is extremely promising for solving the long-standing problem of angular discretization error in multidimensional transport problems. (authors)

  2. Expert knowledge as defined by the X-Ray Ordinance

    1987-01-01

    The radiation protection officer or any person responsible for radiation safety has to give proof of their expert knowledge in accordance with sections 3 and 4 of the X-Ray Ordinance. Proof of expert knowledge has to be furnished within the appointment procedure (sec. 13, sub-sec. (3) X-Ray Ordinance). The directive defines the scope of the expert knowledge required, and the scope of expert knowledge that persons responsible for radiation protection within the purview of sec. 23, no. 2, 4 and sec. 29, sub-sec. 1, no. 3 of the X-Ray Ordinance must have or acquire. (orig./HP)

  3. Ordinance on the Finnish Centre of Radiation and Nuclear Safety

    1990-01-01

    This Ordinance was adopted in implementation of the 1983 Act setting up the Finnish Centre for Radiation and Nuclear Safety and the 1987 Nuclear Energy Act, and entered into force on 1 November 1990. The Ordinance specifies the tasks of the Centre, as provided under both Acts, and gives it several supplementary responsibilities. In addition to its overall competence in respect of radiation safety, the Centre will carry out research into and supervise the health effects of radiation and maintain a laboratory for national measurements in that field. The Ordinance also sets out the Centre's organisation chart and the staff duties.

  4. Multidimensional electron-photon transport with standard discrete ordinates codes

    Drumm, C.R.

    1995-01-01

    A method is described for generating electron cross sections that are compatible with standard discrete ordinates codes without modification. There are many advantages to using an established discrete ordinates solver, e.g. immediately available adjoint capability. Coupled electron-photon transport capability is needed for many applications, including the modeling of the response of electronics components to space and man-made radiation environments. The cross sections have been successfully used in the DORT, TWODANT and TORT discrete ordinates codes, and are shown to provide accurate and efficient solutions to certain multidimensional electron-photon transport problems.

  5. Open Method of Co-Ordination for Demoi-Cracy?

    Borrás, Susana; Radaelli, Claudio

    2014-01-01

    Under which conditions does the open method of co-ordination match the standards for demoi-cracy? To answer this question, we need some explicit standards about demoi-cracy. In fact, open co-ordination serves three different but interrelated purposes in European Union policy: to facilitate convergence; to support learning processes; and to encourage exploration of policy innovation. By intersecting standards and purposes, we find open co-ordination is neither inherently ‘good’ nor ‘bad’ for demoi-cracy, as it depends on how it has been put into practice. Therefore, we qualify the answer.

  6. Optimal control of stretching process of flexible solar arrays on spacecraft based on a hybrid optimization strategy

    Qijia Yao

    2017-07-01

    The optimal control of multibody spacecraft during the stretching process of solar arrays is investigated, and a hybrid optimization strategy based on the Gauss pseudospectral method (GPM) and the direct shooting method (DSM) is presented. First, the elastic deformation of the flexible solar arrays is described approximately by the assumed mode method, and a dynamic model is established by the second Lagrangian equation. Then, the nonholonomic motion planning problem is transformed into a nonlinear programming problem by using GPM. By using fewer LG points, initial values of the state variables and control variables are obtained. A serial optimization framework is adopted to obtain the approximate optimal solution from a feasible solution. Finally, the control variables are discretized at the LG points, and the precise optimal control inputs are obtained by DSM. The optimal trajectory of the system can then be obtained through numerical integration. Numerical simulation shows that the stretching process of the solar arrays is stable with no detours, and that the control inputs satisfy the various constraints of actual conditions. The results indicate that the method is effective and robust. Keywords: Motion planning, Multibody spacecraft, Optimal control, Gauss pseudospectral method, Direct shooting method

  7. Trajectory Optimization Based on Multi-Interval Mesh Refinement Method

    Ningbo Li

    2017-01-01

    In order to improve the optimization accuracy and convergence rate for trajectory optimization of the air-to-air missile, a multi-interval mesh refinement Radau pseudospectral method was introduced. This method makes the mesh endpoints converge to the practical nonsmooth points and decreases the overall number of collocation points to improve the convergence rate and computational efficiency. The trajectory was divided into four phases according to the working time of the engine and the handover of midcourse and terminal guidance, and then the optimization model was built. The multi-interval mesh refinement Radau pseudospectral method, with different collocation points in each mesh interval, was used to solve the trajectory optimization model. Moreover, this method was compared with the traditional h method. Simulation results show that this method can decrease the dimensionality of the nonlinear programming (NLP) problem and therefore improve the efficiency of pseudospectral methods for solving trajectory optimization problems.

  8. PSO Based Optimization of Testing and Maintenance Cost in NPPs

    Qiang Chou

    2014-01-01

    Testing and maintenance activities of safety equipment have drawn much attention in Nuclear Power Plants (NPPs) with respect to risk and cost control. The testing and maintenance activities are often implemented in compliance with the technical specification and maintenance requirements. Technical specification and maintenance-related parameters in NPPs, that is, allowed outage time (AOT), maintenance period and duration, and so forth, are associated with controlling the risk level and the operating cost, which need to be minimized. The above problem can be formulated as a constrained multiobjective optimization model, which is widely used in many other engineering problems. Particle swarm optimization (PSO) has proved its capability to solve these kinds of problems. In this paper, we adopt PSO as an optimizer for the multiobjective optimization problem, iteratively trying to improve a candidate solution with regard to a given measure of quality. Numerical results demonstrate the efficiency of our proposed algorithm.
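
    As a rough illustration of how a particle swarm optimizer trades off testing cost against risk, the sketch below minimizes a hypothetical surveillance-test-interval cost model (a fixed per-test cost amortized over the interval plus a risk term that grows with it). The cost model, bounds, and PSO parameters are illustrative assumptions, not values from the paper.

```python
import random

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO minimizing `objective` over box `bounds`."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

def tm_cost(x):
    # Hypothetical cost vs. surveillance test interval T (hours): fixed per-test
    # cost amortized over T, plus an unavailability (risk) term growing with T.
    T = x[0]
    return 100.0 / T + 0.005 * T
```

    The analytic optimum of this toy model is T = sqrt(100/0.005) ≈ 141 h; a real application would replace `tm_cost` with the plant-specific risk/cost model and add the AOT-related constraints.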

  9. Chinese National Condition Based Power Dispatching Optimization in Microgrids

    Gang Chen

    2018-01-01

    This paper proposes a study on power dispatching optimization in the microgrid aimed at Chinese national conditions, based on the PSO algorithm. The work rests on varying the weighting factors of the objective function under different weather conditions. Three cases are considered: good contamination-diffusing weather, smog weather, and normal conditions. In the case of smog weather, the new-energy generation and the battery system go all out to draw as little power as possible from the primary grid, so that the pollution produced by coal consumption in thermal power plants is reduced to the utmost. In the case of good contamination-diffusing weather, however, the battery is not used, to preserve its lifetime, while a large amount of power exchanged with the primary grid is used to obtain the most economically efficient effect. In the normal condition, the power dispatching is performed in a balanced way, considering not only the cost but also environmental management. A case study of Suzhou Industrial Park confirms the effectiveness of the proposed method.
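
    The weighted-factor idea can be sketched as follows: the objective weights switch with the weather condition, so the same dispatch search prefers battery power under smog and cheap grid power under good diffusion. All cost coefficients and weights below are illustrative assumptions (the abstract does not give the paper's actual values), and a brute-force scan of the battery/grid split stands in for the PSO search.

```python
# Hypothetical condition-dependent weighting factors for the two objectives.
WEIGHTS = {
    "good_diffusion": {"econ": 0.8, "env": 0.2},
    "smog":           {"econ": 0.2, "env": 0.8},
    "normal":         {"econ": 0.5, "env": 0.5},
}

def weighted_cost(grid_kw, battery_kw, condition):
    econ = 0.08 * grid_kw + 0.15 * battery_kw  # operating-cost proxy (assumed prices)
    env = 0.12 * grid_kw                       # coal-emission proxy for grid power
    w = WEIGHTS[condition]
    return w["econ"] * econ + w["env"] * env

def best_battery_share(demand_kw, battery_max_kw, condition):
    # Brute-force scan of the battery/grid split; a PSO would search this space
    # in the paper's setting, where the dispatch problem is higher-dimensional.
    return min(range(0, battery_max_kw + 1),
               key=lambda b: weighted_cost(demand_kw - b, b, condition))
```

    With these numbers the smog weights push the dispatch to the full battery capacity, while the good-diffusion weights push it to zero battery use, mirroring the behavior described in the abstract.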

  10. A DE-Based Scatter Search for Global Optimization Problems

    Kun Li

    2015-01-01

    This paper proposes a hybrid scatter search (SS) algorithm for continuous global optimization problems, incorporating the evolution mechanism of differential evolution (DE) into the reference set update procedure of SS to act as the new solution generation method. This hybrid algorithm is called a DE-based SS (SSDE) algorithm. Since different kinds of DE mutation operators have been proposed in the literature, showing different search abilities for different kinds of problems, four traditional mutation operators are adopted in the hybrid SSDE algorithm. To adaptively select the mutation operator most appropriate to the current problem, an adaptive mechanism for the candidate mutation operators is developed. In addition, to enhance the exploration ability of SSDE, a reinitialization method is adopted to create a new population and subsequently construct a new reference set whenever the search process of SSDE is trapped in a local optimum. Computational experiments on benchmark problems show that the proposed SSDE is competitive with or superior to some state-of-the-art algorithms in the literature.
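
    The core mechanics, DE mutation operators generating trial solutions with an adaptive preference for whichever operator has recently succeeded, can be sketched as follows. This is a simplified stand-in for SSDE (two operators instead of four, and no reference set or reinitialization), with all parameter values assumed.

```python
import random

def de_mutation(pop, best, i, F=0.5, op="rand/1"):
    # Two of the classical DE mutation operators used as candidates.
    a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
    if op == "rand/1":
        return [a[d] + F * (b[d] - c[d]) for d in range(len(a))]
    return [best[d] + F * (a[d] - b[d]) for d in range(len(best))]  # "best/1"

def ssde_like(objective, dim=5, pop_size=30, gens=200, cr=0.9):
    ops = ["rand/1", "best/1"]
    success = {o: 1.0 for o in ops}  # adaptive credit for each operator
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    fit = [objective(p) for p in pop]
    for _ in range(gens):
        best = pop[min(range(pop_size), key=lambda i: fit[i])]
        for i in range(pop_size):
            total = sum(success.values())
            op = random.choices(ops, [success[o] / total for o in ops])[0]
            mutant = de_mutation(pop, best, i, op=op)
            trial = [mutant[d] if random.random() < cr else pop[i][d]
                     for d in range(dim)]
            f = objective(trial)
            if f < fit[i]:
                pop[i], fit[i] = trial, f
                success[op] += 1.0  # reward the operator that produced an improvement
    return min(fit)

def sphere(x):
    return sum(v * v for v in x)
```

    The success counters make the operator mix drift toward whichever mutation scheme is currently producing improvements, which is the spirit of the adaptive mechanism described in the abstract.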

  11. Application-Oriented Chemical Optimization of a Metakaolin Based Geopolymer.

    Ferone, Claudio; Colangelo, Francesco; Roviello, Giuseppina; Asprone, Domenico; Menna, Costantino; Balsamo, Alberto; Prota, Andrea; Cioffi, Raffaele; Manfredi, Gaetano

    2013-05-10

    In this study the development of a metakaolin based geopolymeric mortar to be used as a bonding matrix for external strengthening of reinforced concrete beams is reported. Four geopolymer formulations have been obtained by varying the composition of the activating solution in terms of the SiO₂/Na₂O ratio. The obtained samples have been characterized from a structural, microstructural and mechanical point of view, and the differences in structure and microstructure have been correlated to the mechanical properties. A major issue of drying shrinkage has been encountered in the high Si/Al ratio samples. In the light of the characterization results, the optimal geopolymer composition was then applied to fasten steel fibers to reinforced concrete beams. The mechanical behavior of the strengthened reinforced beams was evaluated by four-point bending tests, which were also performed on unstrengthened reinforced concrete beams for comparison. The preliminary results of the bending tests point out an excellent behavior of the geopolymeric mixture tested, with the failure load of the reinforced beams roughly twice that of the control beam.

  12. Application-Oriented Chemical Optimization of a Metakaolin Based Geopolymer

    Raffaele Cioffi

    2013-05-01

    In this study the development of a metakaolin based geopolymeric mortar to be used as a bonding matrix for external strengthening of reinforced concrete beams is reported. Four geopolymer formulations have been obtained by varying the composition of the activating solution in terms of the SiO2/Na2O ratio. The obtained samples have been characterized from a structural, microstructural and mechanical point of view, and the differences in structure and microstructure have been correlated to the mechanical properties. A major issue of drying shrinkage has been encountered in the high Si/Al ratio samples. In the light of the characterization results, the optimal geopolymer composition was then applied to fasten steel fibers to reinforced concrete beams. The mechanical behavior of the strengthened reinforced beams was evaluated by four-point bending tests, which were also performed on unstrengthened reinforced concrete beams for comparison. The preliminary results of the bending tests point out an excellent behavior of the geopolymeric mixture tested, with the failure load of the reinforced beams roughly twice that of the control beam.

  13. Hybrid Genetic Algorithm Optimization for Case Based Reasoning Systems

    Mohamed, A.H.

    2008-01-01

    The success of a CBR system largely depends on an effective retrieval of useful prior cases for the problem. Nearest neighbor and induction are the main CBR retrieval algorithms, and each can be more suitable in different situations. Integrating the two retrieval algorithms can combine the advantages of both, but limitations remain for the induction retrieval algorithm when dealing with noisy data, a large number of irrelevant features, and different types of data. This research utilizes a hybrid approach using genetic algorithms (GAs) for case-based induction retrieval in the integrated nearest neighbor - induction algorithm, in an attempt to overcome these limitations and increase the overall classification accuracy. GAs can be used to optimize the search space of all possible subsets of the feature set. They can deal with irrelevant and noisy features while still achieving a significant improvement in retrieval accuracy. Therefore, the proposed CBR-GA introduces an effective general-purpose retrieval algorithm that can improve the performance of CBR systems and can be applied in many application areas. CBR-GA has proven successful when applied to different real-life problems.

  14. Optimal Scheduling of Doctors Outpatient Departments Based on Patients’ Behavior

    Zongwei Ren

    2016-01-01

    Low operational efficiency in the field of medical and health care has become a serious problem in China; the long time that patients have to wait is the most visible symptom of poor access to medical service. The medical industry is service-oriented and aims to make profits, so the benefits and competitiveness of a hospital depend on patient satisfaction. This paper surveys a large hospital in Harbin, China, and collects relevant data, then uses prospect theory to analyze patients’ and doctors’ behavioral characteristics; a model of patient satisfaction is established based on fuzzy theory with a triplet α/β/γ. The optimal scheduling of the clinic is described as a problem under the rule of first come, first served, which maximizes patient satisfaction as the main goal and minimizes operating costs as the secondary goal, and the corresponding mathematical model is established. Finally, a solution method named the plant growth simulation algorithm (PGSA) is presented. By solving the example and comparing with a genetic algorithm, the results show that the optimum can be reached, and that the efficiency of the presented algorithm is better than that of the genetic algorithm.

  15. On the Optimization of GLite-Based Job Submission

    Misurelli, Giuseppe; Veronesi, Paolo; Palmieri, Francesco; Pardi, Silvio

    2011-01-01

    A Grid is a very dynamic, complex and heterogeneous system, whose reliability can be adversely affected by several different factors such as communication and hardware faults, middleware bugs or wrong configurations due to human errors. As the infrastructure scales, spanning a large number of sites, each hosting hundreds or thousands of hosts/resources, the occurrence of runtime faults following job submission becomes a very frequent phenomenon. Therefore, fault avoidance becomes a fundamental aim in modern Grids, since the dependability of individual resources, spread upon widely distributed computing infrastructures and often used outside of their native organizational boundaries, cannot be guaranteed in any systematic way. Accordingly, we propose a simple job optimization solution based on a user-driven fault avoidance strategy. This strategy starts from the introduction within the grid information system of several on-line service-monitoring metrics that can be used as specific hints to the workload management system for driving resource discovery operations according to a fault-free resource-scheduling plan. This solution, whose main goal is to minimize the execution time by avoiding execution failures, proved very effective in increasing both the user-perceivable quality and the overall grid performance.

  16. SVM-based glioma grading. Optimization by feature reduction analysis

    Zoellner, Frank G.; Schad, Lothar R. [University Medical Center Mannheim, Heidelberg Univ., Mannheim (Germany). Computer Assisted Clinical Medicine; Emblem, Kyrre E. [Massachusetts General Hospital, Charlestown, A.A. Martinos Center for Biomedical Imaging, Boston MA (United States). Dept. of Radiology; Harvard Medical School, Boston, MA (United States); Oslo Univ. Hospital (Norway). The Intervention Center

    2012-11-01

    We investigated the predictive power of feature reduction analysis approaches in support vector machine (SVM)-based classification of glioma grade. In 101 untreated glioma patients, three analytic approaches were evaluated to derive an optimal reduction in features: (i) Pearson's correlation coefficients (PCC), (ii) principal component analysis (PCA) and (iii) independent component analysis (ICA). Tumor grading was performed using a previously reported SVM approach including whole-tumor cerebral blood volume (CBV) histograms and patient age. The best classification accuracy was found using PCA at 85% (sensitivity = 89%, specificity = 84%) when reducing the feature vector from 101 dimensions (100-bin rCBV histogram + age) to 3 principal components. In comparison, classification accuracy was 82% by PCC (89%, 77%, 2 dimensions) and 79% by ICA (87%, 75%, 9 dimensions). For improved speed (up to 30%) and simplicity, feature reduction by all three methods provided classification accuracy similar to literature values (≈87%) while reducing the number of features by up to 98%. (orig.)
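
    The PCA step can be illustrated with a pure-Python power iteration that extracts the dominant principal component of a feature matrix and reports the fraction of total variance it explains, a minimal sketch of compressing a high-dimensional feature vector into a few components. The data and dimensions below are synthetic, not the rCBV features from the study.

```python
import math, random

def first_principal_component(data):
    """Dominant PCA direction of `data` (rows = samples) via power iteration,
    plus the fraction of total variance it explains."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    # Sample covariance matrix of the centered data.
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
          for b in range(d)] for a in range(d)]
    # Power iteration converges to the eigenvector of the largest eigenvalue.
    v = [1.0] * d
    for _ in range(200):
        w = [sum(C[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigval = sum(v[a] * sum(C[a][b] * v[b] for b in range(d)) for a in range(d))
    explained = eigval / sum(C[a][a] for a in range(d))
    return v, explained
```

    Projecting each sample onto the leading directions then yields the reduced feature vector fed to the classifier; repeating the deflated iteration gives further components.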

  17. Research on Optimal Observation Scale for Damaged Buildings after Earthquake Based on Optimal Feature Space

    Chen, J.; Chen, W.; Dou, A.; Li, W.; Sun, Y.

    2018-04-01

    A new information extraction method for damaged buildings, rooted in the optimal feature space, is put forward on the basis of the traditional object-oriented method. In this new method, the ESP (estimate of scale parameter) tool is used to optimize the segmentation of the image. Then the distance matrix and minimum separation distance of all kinds of surface features are calculated through sample selection to find the optimal feature space, which is finally applied to extract the image of damaged buildings after an earthquake. The overall extraction accuracy reaches 83.1 %, with a kappa coefficient of 0.813. The new information extraction method greatly improves the extraction accuracy and efficiency compared with the traditional object-oriented method, and shows good potential for wider use in the information extraction of damaged buildings. In addition, the new method can be applied to images of damaged buildings at different resolutions, and then used to seek the optimal observation scale of damaged buildings through accuracy evaluation. The results suggest that the optimal observation scale of damaged buildings is between 1 m and 1.2 m, which provides a reference for future information extraction of damaged buildings.

  18. Projector primary-based optimization for superimposed projection mappings

    Ahmed, Bilal; Lee, Jong Hun; Lee, Yong Yi; Lee, Kwan H.

    2018-01-01

    Recently, many researchers have focused on fully overlapping projections for three-dimensional (3-D) projection mapping systems, but reproducing a high-quality appearance with this technology still remains a challenge. On top of existing color compensation-based methods, much effort is still required to faithfully reproduce an appearance that is free from artifacts, colorimetric inconsistencies, and inappropriate illuminance over the 3-D projection surface. This is because overlapping projections are commonly treated as an additive-linear mixture of color which, according to our elaborated observations, is not the case. We propose a method that enables us to use high-quality appearance data measured from original objects and regenerate the same appearance by projecting optimized images using multiple projectors, ensuring that the projection-rendered results look visually close to the real object. We prepare our target appearances by photographing original objects. Then, using calibrated projector-camera pairs, we compensate for missing geometric correspondences to make our method robust against noise. The heart of our method is a target appearance-driven adaptive sampling of the projection surface, followed by a representation of overlapping projections in terms of the projector-primary response. This yields projector-primary weights to facilitate blending, and constraints are applied to the system. These samples are used to populate a light transport-based system, which is then solved by minimizing the error to obtain the projection images in a noise-free manner, utilizing intersample overlaps. We ensure the best utilization of available hardware resources to recreate projection-mapped appearances that look as close to the original object as possible. Our experiments show compelling results in terms of visual similarity and colorimetric error.

  19. Amendments to ordinances in Radiation Protection Law; Novellierung der strahlenschutzrechtlichen Verordnungen

    Heller, W.

    2007-05-15

    The last major reform of the German Radiation Protection Ordinance took place on July 26, 2001. The 'First Ordinance Amending Ordinances in Radiation Protection Law' now proposed is to cover primarily the necessary changes and supplements resulting from experience in the execution of the ordinances. They mainly relate to these issues: (1) the scope of application of the Radiation Protection Ordinance and of the X-Ray Ordinance in medical research (2) the scope of application of the Radiation Protection Ordinance and the X-Ray Ordinance in unjustified types of activities (3) electronic communication ('e-government') (4) changes in the provisions about permits and announcements in the Radiation Protection Ordinance (5) new clearance levels in the Radiation Protection Ordinance (6) cross-border transports of 'NORM' materials (7) other changes in the scope of application of the Radiation Protection Ordinance (8) other changes in the X-ray area. (orig.)

  20. Market-Based and System-Wide Fuel Cycle Optimization

    Wilson, Paul [Univ. of Wisconsin, Madison, WI (United States)

    2016-06-02

    The Dynamic Resource Exchange (DRE) gives agency to consumer facilities to determine the preference of any particular trade that is offered by suppliers to satisfy its requests. This provides a natural balance of power in the relationship between consumers and suppliers. However, in situations in which suppliers have flexibility surrounding the way that they respond to individual requests, they have no mechanism to assess how different bids will be received by the consumer. Theoretically, a supplier can offer multiple bids in response to a given request in an attempt to “cover their bases”, but this introduces more arcs into the underlying network flow problem, increasing the cost to solve it. In the extreme, when a supplier can continuously vary the characteristics of the bid, this can represent a large number of additional arcs and have real performance consequences. To remedy this inefficiency in the implementation of the market-level optimization, the definition of a request has been extended to include a function that the supplier can use to query the preference a consumer would assign to a potential bid. The supplier is then free to implement arbitrarily complex algorithms to revise/optimize its bid based on responses to this function. A supplier can choose not to invoke the function at all, mimicking the original DRE behavior, can use it to select among a small set of discrete choices, or can implement an internal algorithm to seek an optimum bid on a continuous parameter space. This capability was demonstrated with a storage facility that preferred material with a specific decay heat as close as possible to the maximum allowable decay heat, while requiring the specific decay heat to fall between a minimum and maximum level. This archetype was used to fill multiple storage roles in a simulation that also included a standard recipe reactor: wet storage with no maximum allowable specific decay heat, dry storage with a

  1. Pipeline heating method based on optimal control and state estimation

    Vianna, F.L.V. [Dept. of Subsea Technology. Petrobras Research and Development Center - CENPES, Rio de Janeiro, RJ (Brazil)], e-mail: fvianna@petrobras.com.br; Orlande, H.R.B. [Dept. of Mechanical Engineering. POLI/COPPE, Federal University of Rio de Janeiro - UFRJ, Rio de Janeiro, RJ (Brazil)], e-mail: helcio@mecanica.ufrj.br; Dulikravich, G.S. [Dept. of Mechanical and Materials Engineering. Florida International University - FIU, Miami, FL (United States)], e-mail: dulikrav@fiu.edu

    2010-07-01

    considered here, we used the Particle Filter. The optimal control was based on a linear quadratic controller and an associated quadratic cost functional, which was minimized through the solution of Riccati's equation. (author)

  2. A Swarm Optimization Genetic Algorithm Based on Quantum-Behaved Particle Swarm Optimization.

    Sun, Tao; Xu, Ming-Hai

    2017-01-01

    The quantum-behaved particle swarm optimization (QPSO) algorithm is a variant of the traditional particle swarm optimization (PSO). QPSO, originally developed for continuous search spaces, outperforms traditional PSO in search ability. This paper analyzes the main factors that impact the search ability of QPSO and converts the particle movement formula into a mutation condition by introducing a rejection region, thus proposing a new binary algorithm, named the swarm optimization genetic algorithm (SOGA) because it is closer in form to a genetic algorithm (GA) than to PSO. SOGA has crossover and mutation operators like GA, but does not need crossover and mutation probabilities to be set, so it has fewer parameters to control. The proposed algorithm was tested on several nonlinear high-dimensional functions in the binary search space, and the results were compared with those from BPSO, BQPSO, and GA. The experimental results show that SOGA is distinctly superior to the other three algorithms in terms of solution accuracy and convergence.

  3. Efficient approach for reliability-based optimization based on weighted importance sampling approach

    Yuan, Xiukai; Lu, Zhenzhou

    2014-01-01

    An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. The approach expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The computational effort required for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
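
    The flavor of the weighted approach can be sketched as follows: a single batch of samples drawn from an instrumental density is reweighted by the ratio of the true density to the instrumental one, and the same weighted batch is then reused to approximate the failure probability at every design value. The one-dimensional performance function g(x, d) = d - x and the densities below are illustrative assumptions, not the formulation in the paper.

```python
import math, random

def norm_pdf(x, sigma=1.0):
    return math.exp(-0.5 * (x / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fpf_from_single_run(designs, n=20000, sigma_h=2.0):
    """Estimate p_f(d) for every d in `designs` from ONE weighted sample set."""
    # One simulation run drawn from a wider instrumental density h(x) = N(0, sigma_h^2).
    samples = [random.gauss(0.0, sigma_h) for _ in range(n)]
    # Importance weights: true density N(0, 1) over instrumental density.
    weights = [norm_pdf(x) / norm_pdf(x, sigma_h) for x in samples]
    def p_f(d):
        # Failure event: g(x, d) = d - x < 0, i.e. x > d.
        return sum(w for x, w in zip(samples, weights) if x > d) / n
    return {d: p_f(d) for d in designs}
```

    Because the samples are reused, evaluating the failure probability at a new design value costs only a reweighted sum, not a fresh reliability analysis, which is the decoupling idea the abstract describes.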

  4. Energy based optimization of viscous–friction dampers on cables

    Weber, F; Boston, C

    2010-01-01

    This investigation numerically optimizes a viscous–friction damper connected to a cable close to one cable anchor for the fastest reduction of the total mechanical cable energy during a free-vibration decay test. The optimization parameters are the viscous coefficient of the viscous part and the ratio between the friction force and displacement amplitude of the friction part of the transverse damper. Results demonstrate that an almost pure friction damper with negligibly small viscous damping generates the fastest cable energy reduction over the entire decay. The ratio between the friction force and displacement amplitude of the optimal friction damper differs from that derived from the energy-equivalent optimal viscous damper. The reason is that the nonlinearity of the friction damper causes energy spillover from the excited mode to higher modes of the order of 10%, i.e. cables with attached friction dampers vibrate at several frequencies. This explains why the energy-equivalent approach does not yield the optimal friction damper. Analysis of the simulation data demonstrates that the optimally tuned friction damper dissipates the same energy per cycle as if each modal component of the cable were damped by its corresponding optimal linear viscous damper.

  5. Economic dispatch optimization algorithm based on particle diffusion

    Han, Li; Romero, Carlos E.; Yao, Zheng

    2015-01-01

    Highlights: • A dispatch model that considers fuel, emissions control and wind power cost is built. • An optimization algorithm named diffusion particle optimization (DPO) is proposed. • DPO was used to analyze the impact of wind power risk and emissions on dispatch. - Abstract: Due to the widespread installation of emissions control equipment in fossil fuel-fired power plants, the cost of emissions control needs to be considered, together with the plant fuel cost, in providing economic power dispatch of those units to the grid. On the other hand, while using wind power decreases the overall power generation cost for the power grid, it poses a risk to a traditional grid because of its inherent stochastic characteristics. Therefore, an economic dispatch optimization model needs to consider the fuel cost, emissions control cost and wind power cost for each generating unit in the fleet that meets the required grid power demand. In this study, an optimization algorithm referred to as diffusion particle optimization (DPO) is proposed to solve this complex optimization problem. In this algorithm, Brownian motion theory is used to guide the movement of particles so that the particles can search for an optimal solution over the entire definition region. Several benchmark functions and power grid system data were used to test the performance of DPO, which was compared to traditional algorithms used for economic dispatch optimization, such as particle swarm optimization and the artificial bee colony algorithm. It was found that DPO is less likely to be trapped in local optima. According to the results for different power systems, DPO was able to find economic dispatch solutions with lower costs. DPO was also used to analyze the impact of wind power risk and fossil unit emissions coefficients on power dispatch. The results are encouraging for the use of DPO as a dynamic tool for economic dispatch of the power grid.
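
    A minimal reading of "Brownian motion guiding the particles" is a population of solutions taking Gaussian random steps whose scale shrinks over time, keeping only improving moves. The sketch below is that interpretation applied to a toy sphere function; it is not the authors' actual DPO algorithm, and all parameters are assumed.

```python
import random

def dpo_like(objective, dim=3, n_particles=20, iters=300, step0=1.0):
    """Greedy random-walk search with Brownian (Gaussian) increments."""
    pts = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vals = [objective(p) for p in pts]
    for t in range(iters):
        # Diffusion scale shrinks over time: broad exploration early,
        # fine local search late.
        step = step0 * (1.0 - t / iters) + 1e-3
        for i in range(n_particles):
            cand = [x + random.gauss(0.0, step) for x in pts[i]]  # Brownian increment
            v = objective(cand)
            if v < vals[i]:  # keep only improving moves
                pts[i], vals[i] = cand, v
    return min(vals)
```

    In a dispatch setting, `objective` would be the combined fuel, emissions-control and wind-power cost of a candidate unit loading, with constraint handling added on top.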

  6. Optimal pattern synthesis for speech recognition based on principal component analysis

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern forming is based on the decomposition of an initial pattern into principal components, which makes it possible to reduce the dimension of the multi-parameter optimization problem. At the next step the training samples are introduced, and the optimal estimates for the principal component decomposition coefficients are obtained by a numerical parameter optimization algorithm. Finally, we consider experimental results that show the improvement in speech recognition introduced by the proposed optimization algorithm.

  7. Stochastic Finite Elements in Reliability-Based Structural Optimization

    Sørensen, John Dalsgaard; Engelund, S.

    1995-01-01

    Application of stochastic finite elements in structural optimization is considered. It is shown how stochastic fields modelling e.g. the modulus of elasticity can be discretized in stochastic variables, and how a sensitivity analysis of the reliability of a structural system with respect to the optimization variables can be performed. A computer implementation is described and an illustrative example is given.

  8. Optimization of hydraulic turbine governor parameters based on WPA

    Gao, Chunyang; Yu, Xiangyang; Zhu, Yong; Feng, Baohao

    2018-01-01

    The parameters of the hydraulic turbine governor directly affect the dynamic characteristics of the hydraulic unit, and thus the regulation capacity and the power quality of the power grid. The governor of a conventional hydropower unit is mainly a PID governor with three adjustable parameters, which are difficult to tune. In order to optimize the hydraulic turbine governor, this paper proposes the wolf pack algorithm (WPA) for intelligent tuning, given the good global optimization capability of WPA. Compared with the traditional optimization method and the PSO algorithm, the results show that the PID controller designed by WPA achieves good dynamic quality of the hydraulic system and suppresses overshoot.
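
    The tuning loop can be sketched as follows: simulate the closed loop for candidate PID gains, score each candidate by integral squared error, and keep the best. A plain random search stands in here for WPA, and the first-order plant is an illustrative stand-in for a hydraulic turbine model; all values are assumed.

```python
import random

def simulate_pid(kp, ki, kd, dt=0.01, steps=500):
    """ISE of a PID loop on a toy first-order plant dy/dt = -y + u, unit step setpoint."""
    y, integ, cost = 0.0, 0.0, 0.0
    prev_err = 1.0  # equals the initial error, avoiding a derivative kick at t = 0
    for _ in range(steps):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt          # explicit Euler step of the plant
        cost += err * err * dt      # integral squared error criterion
        prev_err = err
    return cost

def random_search(n=300):
    # Placeholder for the WPA search over the three governor gains.
    best, best_cost = None, float("inf")
    for _ in range(n):
        gains = (random.uniform(0, 20), random.uniform(0, 20), random.uniform(0, 1))
        c = simulate_pid(*gains)
        if c < best_cost:
            best, best_cost = gains, c
    return best, best_cost
```

    WPA, PSO, or any other metaheuristic slots in by replacing the uniform sampling with its own candidate-generation rule; the fitness evaluation stays the same closed-loop simulation.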

  9. A multi-objective improved teaching-learning based optimization algorithm for unconstrained and constrained optimization problems

    R. Venkata Rao

    2014-01-01

    The present work proposes a multi-objective improved teaching-learning based optimization (MO-ITLBO) algorithm for unconstrained and constrained multi-objective function optimization. The MO-ITLBO algorithm is an improved version of the basic teaching-learning based optimization (TLBO) algorithm adapted for multi-objective problems. The basic TLBO algorithm is improved to enhance its exploration and exploitation capacities by introducing the concepts of number of teachers, adaptive teaching factor, tutorial training and self-motivated learning. The MO-ITLBO algorithm uses a grid-based approach to adaptively assess the non-dominated solutions (i.e. the Pareto front) maintained in an external archive. The performance of the MO-ITLBO algorithm is assessed by implementing it on the unconstrained and constrained test problems proposed for the Congress on Evolutionary Computation 2009 (CEC 2009) competition. The performance assessment is done by using the inverted generational distance (IGD) measure. The IGD measures obtained by the MO-ITLBO algorithm are compared with those of other state-of-the-art algorithms available in the literature. Finally, lexicographic ordering is used to assess the overall performance of the competitive algorithms. Results show that the proposed MO-ITLBO algorithm obtained the 1st rank in the optimization of unconstrained test functions and the 3rd rank in the optimization of constrained test functions.
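
    The IGD measure used for the assessment is simple to state: the average distance from each point of a reference Pareto front to the nearest point of the approximation front (0 when the reference front is matched exactly). A minimal implementation:

```python
import math

def igd(reference_front, approx_front):
    """Inverted generational distance: mean distance from each reference point
    to its nearest point on the approximation front. Lower is better."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return (sum(min(dist(r, a) for a in approx_front) for r in reference_front)
            / len(reference_front))
```

    Because every reference point contributes, a low IGD requires the approximation front to be both close to and well spread along the reference front, which is why it is a common single-number score in multi-objective benchmarking.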

  10. Multi-objective optimization of Stirling engine systems using Front-based Yin-Yang-Pair Optimization

    Punnathanam, Varun; Kotecha, Prakash

    2017-01-01

    Highlights: • Efficient multi-objective optimization algorithm F-YYPO demonstrated. • Three Stirling engine applications with a total of eight cases. • Improvements in the objective function values of up to 30%. • Superior to the popularly used gamultiobj of MATLAB. • F-YYPO has extremely low time complexity. - Abstract: In this work, we demonstrate the performance of Front-based Yin-Yang-Pair Optimization (F-YYPO) to solve multi-objective problems related to Stirling engine systems. The performance of F-YYPO is compared with that of (i) a recently proposed multi-objective optimization algorithm (Multi-Objective Grey Wolf Optimizer) and (ii) an algorithm popularly employed in literature due to its easy accessibility (MATLAB’s inbuilt multi-objective Genetic Algorithm function: gamultiobj). We consider three Stirling engine based optimization problems: (i) the solar-dish Stirling engine system which considers objectives of output power, thermal efficiency and rate of entropy generation; (ii) Stirling engine thermal model which considers the associated irreversibility of the cycle with objectives of output power, thermal efficiency and pressure drop; and finally (iii) an experimentally validated polytropic finite speed thermodynamics based Stirling engine model also with objectives of output power and pressure drop. We observe F-YYPO to be significantly more effective as compared to its competitors in solving the problems, while requiring only a fraction of the computational time required by the other algorithms.

  11. Ordination and classification of wetland vegetation

    USER

    classes of the Phragmitetea Tüxen & Pressing 1942 and of the Potametea Tuxen et ... ORDINATION AND CLASSIFICATION OF WETLAND VEGETATION IN ... Principal Component Analysis, Cluster .... as a whole; which made it possible to obtain an overview.

  12. Constructing ordinal partition transition networks from multivariate time series.

    Zhang, Jiayang; Zhou, Jie; Tang, Ming; Guo, Heng; Small, Michael; Zou, Yong

    2017-08-10

    A growing number of algorithms have been proposed to map a scalar time series into an ordinal partition transition network. However, most observable phenomena in the empirical sciences are of a multivariate nature. We construct ordinal partition transition networks for multivariate time series. This approach yields weighted directed networks representing the pattern transition properties of time series in velocity space, which hence provides dynamic insights into the underlying system. Furthermore, we propose a measure of entropy to characterize ordinal partition transition dynamics, which is sensitive to possible local geometric changes of phase-space trajectories. We demonstrate the applicability of pattern transition networks to capture phase coherence to non-coherence transitions, and to characterize paths to phase synchronization. Therefore, we conclude that the ordinal partition transition network approach provides complementary insight to the traditional symbolic analysis of nonlinear multivariate time series.
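
For the scalar case the construction can be illustrated directly: map each length-d window to its rank-order (ordinal) pattern, count transitions between successive patterns to obtain edge weights, and take the Shannon entropy of those weights. A minimal sketch, not the authors' code:

```python
import math
from collections import Counter

def ordinal_pattern(window):
    """Rank-order pattern of a window, e.g. (1.2, 3.4, 0.5) -> (2, 0, 1)."""
    return tuple(sorted(range(len(window)), key=window.__getitem__))

def transition_network(series, order=3):
    """Edge weights of the transition network between successive ordinal patterns."""
    pats = [ordinal_pattern(series[i:i + order])
            for i in range(len(series) - order + 1)]
    edges = Counter(zip(pats, pats[1:]))
    total = sum(edges.values())
    return {e: c / total for e, c in edges.items()}

def transition_entropy(net):
    """Shannon entropy of the edge-weight distribution."""
    return -sum(p * math.log(p) for p in net.values())
```

The multivariate construction in the record applies the same idea to patterns built from the component-wise increments (velocity space) rather than a single scalar channel.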

  13. Optimization Route of Food Logistics Distribution Based on Genetic and Graph Cluster Scheme Algorithm

    Jing Chen

    2015-01-01

    Taking the concept of food logistics distribution as its starting point, this study analyzes the optimization model of food logistics distribution routes and interprets the genetic algorithm, and on this basis discusses the optimization of food logistics distribution routes based on a combined genetic and graph cluster scheme algorithm.

  14. Charging cost optimization for EV buses using neural network based energy predictor

    Nageshrao, S.P.; Jacob, J.; Wilkins, S.

    2017-01-01

    For conventional buses, based on the decades of their operational knowledge, public transport companies are able to optimize their cost of operation. However, with recent trend in the usage of electric buses, cost optimal operation can become challenging. In this paper an offline optimal charging

  15. Design of SVC Controller Based on Improved Biogeography-Based Optimization Algorithm

    Feifei Dong

    2014-01-01

    Full Text Available Considering that common subsynchronous resonance controllers cannot adapt to the time-varying and nonlinear behavior of a power system, the cosine migration model, the improved migration operator, and the mutative scale of chaos and Cauchy mutation strategy are introduced into an improved biogeography-based optimization (IBBO) algorithm in order to design an optimal subsynchronous damping controller based on the mechanism of suppressing SSR by a static var compensator (SVC). The effectiveness of the improved controller is verified by eigenvalue analysis and electromagnetic simulations. The simulation results for the Jinjie plant indicate that the subsynchronous damping controller optimized by the IBBO algorithm can remarkably improve the damping of torsional modes, thus effectively suppressing SSR and ensuring the safety and stability of the units and power grid operation. Moreover, the IBBO algorithm has the merits of faster searching speed and higher searching accuracy in seeking the optimal control parameters, compared with traditional algorithms such as BBO, PSO, and GA.

  16. 1984 Ordinance on nuclear activities (1984:14)

    1984-01-01

    This Supplementary Ordinance on Nuclear Activities (1984:14) sets out a regulatory regime for the conveyance out of Sweden of equipment or material that has been specially designed or prepared for the processing, use or production of nuclear substances or which is otherwise of essential importance for the production of nuclear devices. The Annex to the Ordinance sets out the list of such equipment or material whose export is subject to Government authorisation. (NEA) [fr

  17. SPANDOM - source projection analytic nodal discrete ordinates method

    Kim, Tae Hyeong; Cho, Nam Zin

    1994-01-01

    We describe a new discrete ordinates nodal method for the two-dimensional transport equation. We solve the discrete ordinates equation analytically after the source term is projected and represented in polynomials. The method is applied to two fast reactor benchmark problems and compared with the TWOHEX code. The results indicate that the present method accurately predicts not only the multiplication factor but also the flux distribution.

  18. Ordinance on distribution of iodine tablets to the population

    1992-01-01

    This Ordinance provides for the organization of supplies of iodine tablets to the population. The tablets will be held in case of occurrences that might endanger the population following an accident provoking the emission of radioactive iodine. The Federal Health Ministry is responsible for organizing the supply to the appropriate bodies for distribution to the population. The Ordinance entered into force on 1 August 1992. (NEA)

  19. Simulation-based robust optimization for signal timing and setting.

    2009-12-30

    The performance of signal timing plans obtained from traditional approaches for pre-timed (fixed-time or actuated) control systems is often unstable under fluctuating traffic conditions. This report develops a general approach for optimizing the ...

  20. Regularized Regression and Density Estimation based on Optimal Transport

    Burger, M.; Franek, M.; Schonlieb, C.-B.

    2012-01-01

    for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations

  1. Reliability-based optimal structural design by the decoupling approach

    Royset, J.O.; Der Kiureghian, A.; Polak, E.

    2001-01-01

    A decoupling approach for solving optimal structural design problems involving reliability terms in the objective function, the constraint set or both is discussed and extended. The approach employs a reformulation of each problem, in which reliability terms are replaced by deterministic functions. The reformulated problems can be solved by existing semi-infinite optimization algorithms and computational reliability methods. It is shown that the reformulated problems produce solutions that are identical to those of the original problems when the limit-state functions defining the reliability problem are affine. For nonaffine limit-state functions, approximate solutions are obtained by solving series of reformulated problems. An important advantage of the approach is that the required reliability and optimization calculations are completely decoupled, thus allowing flexibility in the choice of the optimization algorithm and the reliability computation method

  2. Optimal scheduling of micro grids based on single objective programming

    Chen, Yue

    2018-04-01

    Faced with the growing demand for electricity and the shortage of fossil fuels, how to optimally schedule the micro-grid has become an important research topic for maximizing its economic, technological and environmental benefits. This paper considers the role of the battery under the precondition that the power exchanged between the micro-grid and the main grid does not exceed 150 kW, and takes economic operation with respect to the load as the goal, i.e. minimizing the electricity cost (including wind curtailment). An optimization model is established and solved by a genetic algorithm. The optimal scheduling scheme is obtained, and the utilization of renewable energy and the impact of battery participation in regulation are analyzed.

  3. Simulation-based optimal Bayesian experimental design for nonlinear systems

    Huan, Xun; Marzouk, Youssef M.

    2013-01-01

    The optimal selection of experimental conditions is essential to maximizing the value of data for inference and prediction, particularly in situations where experiments are time-consuming and expensive to conduct. We propose a general mathematical

  4. Optimal Control of Polymer Flooding Based on Maximum Principle

    Yang Lei

    2012-01-01

    Full Text Available Polymer flooding is one of the most important technologies for enhanced oil recovery (EOR). In this paper, an optimal control model of distributed parameter systems (DPSs) for polymer injection strategies is established, which takes the maximization of profit as the performance index, the fluid flow equations of polymer flooding as the governing equations, and the polymer concentration limitation as an inequality constraint. To cope with the optimal control problem (OCP) of this DPS, the necessary conditions for optimality are obtained through application of the calculus of variations and Pontryagin’s weak maximum principle. A gradient method is proposed for the computation of optimal injection strategies. The numerical results of an example illustrate the effectiveness of the proposed method.

  5. Microservice scaling optimization based on metric collection in Kubernetes

    Blažej, Aljaž

    2017-01-01

    As web applications become more complex and the number of internet users rises, so does the need to optimize the use of hardware supporting these applications. Optimization can be achieved with microservices, as they offer several advantages compared to the monolithic approach, such as better utilization of resources, scalability and isolation of different parts of an application. Another important part is collecting metrics, since they can be used for analysis and debugging as well as the ba...

  6. Target distribution in cooperative combat based on Bayesian optimization algorithm

    Shi Zhifu; Zhang An; Wang Anli

    2006-01-01

    Target distribution in cooperative combat is a difficult and important problem. We build an optimization model according to the rules of fire distribution and study it with the Bayesian optimization algorithm (BOA). The BOA estimates the joint probability distribution of the variables with a Bayesian network, and new candidate solutions are generated from this joint distribution. A simulation example verifies that the method can solve this complex problem: the computation is fast and a high-quality solution is obtained.

  7. Extended Kalman Filter Modifications Based on an Optimization View Point

    Skoglund, Martin; Hendeby, Gustaf; Axehill, Daniel

    2015-01-01

    The extended Kalman filter (EKF) has been an important tool for state estimation of nonlinear systems since its introduction. However, the EKF does not possess the same optimality properties as the Kalman filter, and may perform poorly. By viewing the EKF as an optimization problem it is possible, in many cases, to improve its performance and robustness. The paper derives three variations of the EKF by applying different optimisation algorithms to the EKF cost function and relates these to the it...

  8. The Adjoint Method for Gradient-based Dynamic Optimization of UV Flash Processes

    Ritschel, Tobias Kasper Skovborg; Capolei, Andrea; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a novel single-shooting algorithm for gradient-based solution of optimal control problems with vapor-liquid equilibrium constraints. Dynamic optimization of UV flash processes is relevant in nonlinear model predictive control of distillation columns, certain two-phase flow pro......-component flash process which demonstrate the importance of the optimization solver, the compiler, and the linear algebra software for the efficiency of dynamic optimization of UV flash processes....

  9. Optimal Control of Micro Grid Operation Mode Seamless Switching Based on Radau Allocation Method

    Chen, Xiaomin; Wang, Gang

    2017-05-01

    The seamless switching process of the micro grid operation mode directly affects the safety and stability of its operation. For the switching process from island mode to grid-connected mode of the micro grid, we establish a dynamic optimization model based on two grid-connected inverters. We use the Radau allocation method to discretize the model, and the Newton iteration method to obtain the optimal solution. Finally, we implement the optimization model in MATLAB and obtain the optimal control trajectory of the inverters.

  10. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-01-01

    Ant colony optimization (ACO) algorithms often fall into local optima and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new strategy takes advantage of the critical paths reserved in the process of evolving adaptive networks of the Physarum-inspired mathematical model (PMM). The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote the exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms, and are adaptable to solving the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP. (paper)
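
The pheromone update that this strategy modifies can be sketched as standard evaporation and deposit plus an extra boost on critical edges; here `critical` and `boost` are hypothetical placeholders for what the Physarum model would supply:

```python
def update_pheromone(tau, tours, tour_length, rho=0.5, q=1.0, critical=(), boost=1.5):
    """Evaporate, deposit q/length along each ant's closed tour, then amplify
    pheromone on externally supplied critical edges (PMACO-style boost)."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)                      # evaporation
    for tour in tours:
        deposit = q / tour_length(tour)
        for edge in zip(tour, tour[1:] + tour[:1]):   # wrap around to close the tour
            key = frozenset(edge)
            tau[key] = tau.get(key, 0.0) + deposit
    for edge in critical:                             # Physarum-inspired amplification
        key = frozenset(edge)
        tau[key] = tau.get(key, 0.0) * boost
    return tau
```

In a full ACO run this update is called once per iteration, after all ants have built their tours.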

  11. Optimization of natural lipstick formulation based on pitaya (Hylocereus polyrhizus) seed oil using D-optimal mixture experimental design.

    Kamairudin, Norsuhaili; Gani, Siti Salwa Abd; Masoumi, Hamid Reza Fard; Hashim, Puziah

    2014-10-16

    The D-optimal mixture experimental design was employed to optimize the melting point of natural lipstick based on pitaya (Hylocereus polyrhizus) seed oil. The influence of the main lipstick components, namely pitaya seed oil (10%-25% w/w), virgin coconut oil (25%-45% w/w), beeswax (5%-25% w/w), candelilla wax (1%-5% w/w) and carnauba wax (1%-5% w/w), was investigated with respect to the melting point of the lipstick formulation. The D-optimal mixture experimental design was applied to optimize the properties of the lipstick by focusing on the melting point with respect to the above components. The D-optimal mixture design analysis showed that the variation in the response (melting point) could be depicted as a quadratic function of the main components of the lipstick. The best combination of the significant factors determined by the D-optimal mixture design was established to be pitaya seed oil (25% w/w), virgin coconut oil (37% w/w), beeswax (17% w/w), candelilla wax (2% w/w) and carnauba wax (2% w/w). With these factors, a melting point of 46.0 °C was observed experimentally, close to the theoretical prediction of 46.5 °C. Carnauba wax is the most influential factor on this response (melting point), owing to its role in heat endurance. The quadratic polynomial model sufficiently fit the experimental data.

  12. Advanced Gradient Based Optimization Techniques Applied on Sheet Metal Forming

    Endelt, Benny; Nielsen, Karl Brian

    2005-01-01

    The computational costs of finite element simulations of general sheet metal forming processes are considerable, especially measured in time. In combination with optimization, the performance of the optimization algorithm is crucial for the overall performance of the system, i.e. the optimization algorithm should gain as much information about the system in each iteration as possible. Least-square formulation of the objective function is widely applied for the solution of inverse problems, due to the superior performance of this formulation. In this work the focus is on small problems, defined as problems with fewer than 1000 design parameters, as the majority of real-life optimization and inverse problems represented in the literature can be characterized as small problems, typically with fewer than 20 design parameters. We show that the least-square formulation is well suited for two classes of inverse problems: identification of constitutive parameters and process optimization. The scalability and robustness of the approach are illustrated through a number of process optimizations and inverse material characterization problems: tube hydroforming, two-step hydroforming, flexible aluminium tubes, and inverse identification of material parameters

  13. Optimizing agent-based transmission models for infectious diseases.

    Willem, Lander; Stijven, Sean; Tijskens, Engelbert; Beutels, Philippe; Hens, Niel; Broeckhove, Jan

    2015-06-02

    Infectious disease modeling and computational power have evolved such that large-scale agent-based models (ABMs) have become feasible. However, the increasing hardware complexity requires adapted software designs to achieve the full potential of current high-performance workstations. We have found large performance differences with a discrete-time ABM for close-contact disease transmission due to data locality. Sorting the population according to the social contact clusters reduced simulation time by a factor of two. Data locality and model performance can also be improved by storing person attributes separately instead of using person objects. Next, decreasing the number of operations by sorting people by health status before processing disease transmission also has a large impact on model performance. Depending on the clinical attack rate, target population and computer hardware, the introduction of the sort phase decreased the run time by 26% up to more than 70%. We have investigated the application of parallel programming techniques and found that the speedup is significant but drops quickly with the number of cores. We observed that the effect of scheduling and workload chunk size is model specific and can make a large difference. Investment in performance optimization of ABM simulator code can lead to significant run time reductions. The key steps are straightforward: choosing a suitable data structure for the population and sorting people on health status before effecting disease propagation. We believe these conclusions to be valid for a wide range of infectious disease ABMs. We recommend that future studies evaluate the impact of data management, algorithmic procedures and parallelization on model performance.
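
The sort-before-transmission step is semantics-preserving: sorting by health status only groups the infectious contiguously, improving branch behavior and data locality. A toy discrete-time transmission step, assuming homogeneous mixing (our simplification, not the paper's contact model):

```python
import random

def step_transmission(population, beta, rng):
    """Advance one time step: each susceptible escapes every infectious
    contact independently with probability (1 - beta)."""
    population.sort(key=lambda p: p["state"])   # 'I' sorts before 'S'
    n_inf = sum(1 for p in population if p["state"] == "I")
    p_infect = 1.0 - (1.0 - beta) ** n_inf      # prob. of at least one infection
    for person in population:
        if person["state"] == "S" and rng.random() < p_infect:
            person["state"] = "I"
    return sum(1 for p in population if p["state"] == "I")
```

Because the sort does not change who infects whom, it can be added or removed purely as a performance tuning knob, which is what makes the study's run-time comparison clean.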

  14. Knowledge of the ordinal position of list items in pigeons.

    Scarf, Damian; Colombo, Michael

    2011-10-01

    Ordinal knowledge is a fundamental aspect of advanced cognition. It is self-evident that humans represent ordinal knowledge, and over the past 20 years it has become clear that nonhuman primates share this ability. In contrast, evidence that nonprimate species represent ordinal knowledge is missing from the comparative literature. To address this issue, in the present experiment we trained pigeons on three 4-item lists and then tested them with derived lists in which, relative to the training lists, the ordinal position of the items was either maintained or changed. Similar to the findings with human and nonhuman primates, our pigeons performed markedly better on the maintained lists compared to the changed lists, and displayed errors consistent with the view that they used their knowledge of ordinal position to guide responding on the derived lists. These findings demonstrate that the ability to acquire ordinal knowledge is not unique to the primate lineage. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  15. Overstatement in happiness reporting with ordinal, bounded scale.

    Tanaka, Saori C; Yamada, Katsunori; Kitada, Ryo; Tanaka, Satoshi; Sugawara, Sho K; Ohtake, Fumio; Sadato, Norihiro

    2016-02-18

    There are various methods by which people can express subjective evaluations quantitatively. For example, happiness can be measured on a scale from 1 to 10, and has been suggested as a measure of economic policy. However, there is resistance to these types of measurement from economists, who often regard welfare to be a cardinal, unbounded quantity. It is unclear whether there are differences between subjective evaluation reported on ordinal, bounded scales and on cardinal, unbounded scales. To answer this question, we developed functional magnetic resonance imaging experimental tasks for reporting happiness from monetary gain and the perception of visual stimulus. Subjects tended to report higher values when they used ordinal scales instead of cardinal scales. There were differences in neural activation between ordinal and cardinal reporting scales. The posterior parietal area showed greater activation when subjects used an ordinal scale instead of a cardinal scale. Importantly, the striatum exhibited greater activation when asked to report happiness on an ordinal scale than when asked to report on a cardinal scale. The finding that ordinal (bounded) scales are associated with higher reported happiness and greater activation in the reward system shows that overstatement bias in happiness data must be considered.

  16. Case management for high-intensity service users: towards a relational approach to care co-ordination.

    McEvoy, Phil; Escott, Diane; Bee, Penny

    2011-01-01

    This study is based on a formative evaluation of a case management service for high-intensity service users in Northern England. The evaluation had three main purposes: (i) to assess the quality of the organisational infrastructure; (ii) to obtain a better understanding of the key influences that played a role in shaping the development of the service; and (iii) to identify potential changes in practice that may help to improve the quality of service provision. The evaluation was informed by Gittell's relational co-ordination theory, which focuses upon cross-boundary working practices that facilitate task integration. The Assessment of Chronic Illness Care Survey was used to assess the organisational infrastructure and qualitative interviews with front line staff were conducted to explore the key influences that shaped the development of the service. A high level of strategic commitment and political support for integrated working was identified. However, the quality of care co-ordination was variable. The most prominent operational factor that appeared to influence the scope and quality of care co-ordination was the pattern of interaction between the case managers and their co-workers. The co-ordination of patient care was much more effective in integrated co-ordination networks. Key features included clearly defined, task focussed, relational workspaces with interactive forums where case managers could engage with co-workers in discussions about the management of interdependent care activities. In dispersed co-ordination networks with fewer relational workspaces, the case managers struggled to work as effectively. The evaluation concluded that the creation of flexible and efficient task focused relational workspaces that are systemically managed and adequately resourced could help to improve the quality of care co-ordination, particularly in dispersed networks. © 2010 Blackwell Publishing Ltd.

  17. Optimal design and selection of magneto-rheological brake types based on braking torque and mass

    Nguyen, Q H; Lang, V T; Choi, S B

    2015-01-01

    In developing magnetorheological brakes (MRBs), it is well known that the braking torque and the mass of the MRBs are important factors that should be considered in the product’s design. This research focuses on the optimal design of different types of MRBs, from which we identify an optimal selection of MRB types, considering braking torque and mass. In the optimization, common types of MRBs such as disc-type, drum-type, hybrid-type, and T-shape types are considered. The optimization problem is to find an optimal MRB structure that can produce the required braking torque while minimizing its mass. After a brief description of the configuration of the MRBs, the MRBs’ braking torque is derived based on the Herschel-Bulkley rheological model of the magnetorheological fluid. Then, the optimal designs of the MRBs are analyzed. The optimization objective is to minimize the mass of the brake while the braking torque is constrained to be greater than a required value. In addition, the power consumption of the MRBs is also considered as a reference parameter in the optimization. A finite element analysis integrated with an optimization tool is used to obtain optimal solutions for the MRBs. Optimal solutions of MRBs with different required braking torque values are obtained based on the proposed optimization procedure. From the results, we discuss the optimal selection of MRB types, considering braking torque and mass. (technical note)

  18. An optimal design of cluster spacing intervals for staged fracturing in horizontal shale gas wells based on the optimal SRVs

    Lan Ren

    2017-09-01

    Full Text Available When horizontal well staged cluster fracturing is applied in shale gas reservoirs, the cluster spacing is essential to fracturing performance. If the cluster spacing is too small, the stimulated areas between major fractures will overlap and the efficiency of fracturing stimulation will decrease. If the cluster spacing is too large, the area between major fractures cannot be stimulated completely and the reservoir recovery extent will be adversely impacted. At present, cluster spacing design is mainly based on a static model with the potential reservoir stimulation area as the target, and there is no cluster spacing design method that follows the actual fracturing process and targets the dynamic stimulated reservoir volume (SRV). In this paper, a dynamic SRV calculation model for cluster fracture propagation was established by analyzing the coupling mechanisms among fracture propagation, fracturing fluid loss and stress. Then, the cluster spacing was optimized to reach the target of the optimal SRVs. This model was validated on site in the Jiaoshiba shale gasfield in the Fuling area of the Sichuan Basin. The key geological engineering parameters influencing the optimal cluster spacing intervals were analyzed. Reference charts for optimal cluster spacing design were prepared based on the geological characteristics of the south and north blocks of the Jiaoshiba shale gasfield. It is concluded that the optimal cluster spacing design method proposed in this paper is of great significance in overcoming the blindness of current cluster perforation design and in guiding the optimal design of volume fracturing in shale gas reservoirs. Keywords: Shale gas, Horizontal well, Staged fracturing, Cluster spacing, Reservoir, Stimulated reservoir volume (SRV), Mathematical model, Optimal method, Sichuan basin, Jiaoshiba shale gasfield

  19. Discrete and Continuous Optimization Based on Hierarchical Artificial Bee Colony Optimizer

    Lianbo Ma

    2014-01-01

    Full Text Available This paper presents a novel optimization algorithm, namely, hierarchical artificial bee colony optimization (HABC), to tackle complex high-dimensional problems. In the proposed multilevel model, the higher-level species are aggregated from the subpopulations of the lower level. In the bottom level, each subpopulation employing the canonical ABC method searches the part-dimensional optimum in parallel, and these partial solutions are combined into a complete solution for the upper level. At the same time, a comprehensive learning method with crossover and mutation operators is applied to enhance the global search ability between species. Experiments are conducted on a set of 20 continuous and discrete benchmark problems. The experimental results demonstrate the remarkable performance of the HABC algorithm when compared with six other evolutionary algorithms.

  20. Optimization of C4.5 algorithm-based particle swarm optimization for breast cancer diagnosis

    Muslim, M. A.; Rukmana, S. H.; Sugiharti, E.; Prasetiyo, B.; Alimah, S.

    2018-03-01

    Data mining has become a basic methodology for computational applications in medical domains. Data mining can be applied in the health field, for example to the diagnosis of breast cancer, heart disease and diabetes. Breast cancer is the most common cancer in women, with more than one million cases and nearly 600,000 deaths occurring worldwide each year. The most effective way to reduce breast cancer deaths is early diagnosis. This study aims to determine the level of breast cancer diagnosis. The research data use the Wisconsin Breast Cancer (WBC) dataset from the UCI machine learning repository. The method used in this research is the C4.5 algorithm with Particle Swarm Optimization (PSO) for feature selection and for optimizing the C4.5 algorithm. Ten-fold cross-validation is used as the validation method, together with a confusion matrix. The result of this research is that the C4.5 algorithm optimized by particle swarm optimization improves accuracy by 0.88%.
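
Ten-fold cross-validation with a confusion matrix, as used for validation here, is independent of the C4.5/PSO details; in the sketch below `train_fn` stands in for any routine that fits a classifier on the training split and returns a predict function (binary labels 0/1 assumed):

```python
import random

def k_fold_indices(n, k=10, seed=0):
    """Shuffle 0..n-1 and deal the indices into k roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, train_fn, k=10):
    """Accuracy and 2x2 confusion matrix (cm[true][pred]) over k held-out folds."""
    cm = [[0, 0], [0, 0]]
    for fold in k_fold_indices(len(X), k):
        held_out = set(fold)
        tr_X = [X[i] for i in range(len(X)) if i not in held_out]
        tr_y = [y[i] for i in range(len(X)) if i not in held_out]
        predict = train_fn(tr_X, tr_y)       # fit on the k-1 training folds
        for i in fold:                       # score on the held-out fold
            cm[y[i]][predict(X[i])] += 1
    accuracy = (cm[0][0] + cm[1][1]) / len(X)
    return accuracy, cm
```

Every sample is scored exactly once as held-out data, so the accuracy and confusion matrix summarize generalization rather than training fit.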

  1. Ordinance on measures for preparation of a radioactive waste repository (Ordinance on preparatory measures) of 24 October 1979

    1981-01-01

    This Ordinance contains details concerning the special procedure provided for under Section 10(2) of the Federal Order of 6th October 1978 concerning the Atomic Energy Act whereby the Federal Council must grant permission before preparations for the construction of radioactive waste repositories may be undertaken. The Ordinance defines the preparatory measures, which include maps and plans of the area, a geological report, etc. (NEA) [fr

  2. Optimal colour quality of LED clusters based on memory colours.

    Smet, Kevin; Ryckaert, Wouter R; Pointer, Michael R; Deconinck, Geert; Hanselaer, Peter

    2011-03-28

    The spectral power distributions of tri- and tetrachromatic clusters of light-emitting diodes, composed of simulated and commercially available LEDs, were optimized with a genetic algorithm to maximize the luminous efficacy of radiation and the colour quality as assessed by the memory colour quality metric developed by the authors. The trade-off between the colour quality, as assessed by the memory colour metric, and the luminous efficacy of radiation was investigated by calculating the Pareto optimal front using the NSGA-II genetic algorithm. Optimal peak wavelengths and spectral widths of the LEDs were derived, and over half of them were found to be close to Thornton's prime colours. The Pareto optimal fronts of real LED clusters were always found to be smaller than those of the simulated clusters. The effect of binning on designing a real LED cluster was investigated and was found to be quite large. Finally, a real LED cluster of commercially available AlGaInP, InGaN and phosphor white LEDs was optimized to obtain a higher score on the memory colour quality scale than its corresponding CIE reference illuminant.
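At the core of NSGA-II's non-dominated sorting is a pairwise dominance test. As an illustrative sketch (not the authors' code), a brute-force O(n²) extraction of the Pareto front for two maximized objectives, such as luminous efficacy and memory colour quality, can look like:

```python
def pareto_front(points):
    """Return the non-dominated subset of (f1, f2) pairs, both maximized.

    A point q dominates p if q >= p in both objectives and q != p.
    """
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front
```

For example, `pareto_front([(1, 5), (2, 4), (3, 3), (2, 2), (4, 1)])` drops only `(2, 2)`, which is dominated by `(3, 3)`. NSGA-II extends this test with fast non-dominated sorting and crowding-distance selection to evolve the whole front at once.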

  3. Density based topology optimization of turbulent flow heat transfer systems

    Dilgen, Sümer Bartug; Dilgen, Cetin Batur; Fuhrman, David R.

    2018-01-01

    The focus of this article is on topology optimization of heat sinks with turbulent forced convection. The goal is to demonstrate the extendibility and the scalability of a previously developed fluid solver to coupled multi-physics and large 3D problems. The gradients of the objective and the constraints ... in the optimization process, while also demonstrating extension of the methodology to include coupling of heat transfer with turbulent flows.

  4. Optimization of Pressurizer Based on Genetic-Simplex Algorithm

    Wang, Cheng; Yan, Chang Qi; Wang, Jian Jun

    2014-01-01

    The pressurizer is one of the key components in a nuclear power system, and it is important to control its dimensions in the design through optimization techniques. In this work, a mathematical model of a vertical electrically heated pressurizer was established. A new Genetic-Simplex Algorithm (GSA), combining the genetic algorithm and the simplex algorithm, was developed to enhance the searching ability, and the modified and original algorithms were compared on benchmark functions. Furthermore, the optimization design of the pressurizer, taking minimization of volume and net weight as objectives, was carried out with GSA under thermal-hydraulic and geometric constraints. The results indicate that the mathematical model is suitable for the pressurizer and that the new algorithm is more effective than the traditional genetic algorithm. The optimized design shows obvious validity and can provide guidance for real engineering design.
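The GSA idea of pairing global genetic search with local simplex refinement can be sketched generically. The code below is an illustrative reconstruction, not the authors' implementation; the operators, parameters, and the hand-off point (polishing the GA's best individual with Nelder-Mead) are our own choices:

```python
import random

def nelder_mead(f, x0, step=0.5, iters=100):
    """Bare-bones Nelder-Mead simplex: reflect, expand, contract, shrink."""
    n = len(x0)
    simplex = [x0[:]] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                         for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(p[i] for p in simplex[:-1]) / n for i in range(n)]
        refl = [centroid[i] + (centroid[i] - worst[i]) for i in range(n)]
        if f(refl) < f(best):
            exp = [centroid[i] + 2 * (centroid[i] - worst[i]) for i in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            contr = [centroid[i] + 0.5 * (worst[i] - centroid[i]) for i in range(n)]
            if f(contr) < f(worst):
                simplex[-1] = contr
            else:  # shrink all points toward the best vertex
                simplex = [best] + [[(p[i] + best[i]) / 2 for i in range(n)]
                                    for p in simplex[1:]]
    simplex.sort(key=f)
    return simplex[0]

def ga_simplex(f, dim, bounds, pop=20, gens=30, seed=2):
    """Hybrid sketch: a crude GA explores globally, then the simplex
    polishes the best individual found."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        elite = P[: pop // 2]
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            children.append([min(max(v, lo), hi) for v in child])
        P = elite + children
    return nelder_mead(f, min(P, key=f))
```

On a smooth test function the GA alone stalls near the optimum at the mutation scale, while the simplex stage drives the residual error down by orders of magnitude, which is the motivation for hybridizing the two.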

  5. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Pei-Yuan Li

    2015-05-01

    This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low pressure stage centrifugal compressor in a MW-level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in the low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned. The outlet diameter of the vaneless diffuser has been reduced, and the original single stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  6. Optimization of Investment Planning Based on Game-Theoretic Approach

    Elena Vladimirovna Butsenko

    2018-03-01

    The game-theoretic approach has vast potential for solving economic problems; conversely, the theory of games itself can be enriched by studies of real decision-making problems. Hence, this study aims to develop and test a game-theoretic technique to optimize the management of investment planning. The technique makes it possible to forecast the results and manage the processes of investment planning, and the proposed method allows choosing the best development strategy for an enterprise. It uses the “game with nature” model, with the Wald criterion, the maximax criterion and the Hurwicz criterion. The article presents a new algorithm for constructing the proposed econometric method to optimize investment project management. This algorithm combines the methods of matrix games, and its implementation is shown in a block diagram. The algorithm includes the formation of initial data and the elements of the payoff matrix, as well as the determination of maximin, maximax, compromise and optimal management strategies. The methodology is tested on the example of the passenger transportation enterprise of the Sverdlovsk Railway in Ekaterinburg. The application of the proposed methodology and the corresponding algorithm yielded an optimal price strategy for transporting passengers in one direction of traffic; this price strategy contributes to an increase in the company’s income with minimal risk from the launch of this direction. The obtained results and conclusions show the effectiveness of the developed methodology for optimizing the management of investment processes in the enterprise. The results of the research can serve as a basis for the development of an appropriate tool and be applied by any economic entity in its investment activities.
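The decision criteria named in the abstract are straightforward to compute from a payoff matrix. The following sketch is illustrative only (the function `decide` and the example matrix are ours, not the article's): rows are the enterprise's strategies, columns are states of nature, and each criterion picks the row index it recommends.

```python
def decide(payoffs, alpha=0.6):
    """Pick a row strategy from a 'game with nature' payoff matrix using
    the Wald (maximin), maximax, and Hurwicz criteria.

    alpha is the Hurwicz optimism coefficient in [0, 1].
    """
    rows = range(len(payoffs))
    wald = max(rows, key=lambda i: min(payoffs[i]))       # best worst case
    maximax = max(rows, key=lambda i: max(payoffs[i]))    # best best case
    hurwicz = max(rows, key=lambda i: alpha * max(payoffs[i])
                  + (1 - alpha) * min(payoffs[i]))        # weighted blend
    return {"wald": wald, "maximax": maximax, "hurwicz": hurwicz}
```

For the hypothetical matrix `[[3, 3, 3], [0, 6, 6], [1, 2, 8]]` with `alpha=0.6`, Wald selects the safe first row (guaranteed 3), while maximax and Hurwicz both select the third row (upside 8, Hurwicz value 5.2), showing how the criteria encode different risk attitudes.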

  7. Parametric Optimization Design of Brake Block Based on Reverse Engineering

    Jin Hua-wei

    2017-01-01

    As one of the key parts of an automotive brake, the brake block has a direct impact on the safety and comfort of a car. The brake block of a disc brake is modelled with reverse parameterization in reverse engineering software, and the reconstructed model is analyzed and optimized with CAE software. The scanned point cloud is processed in Geomagic Studio, the CAD model of the brake block is reconstructed with the parametric surface function of the software, and the model is then analyzed and optimized in Workbench. The example shows that the reverse parameterization method can quickly reconstruct the CAD model of parts and significantly reduce the part re-design development cycle.

  8. Stochastic Finite Elements in Reliability-Based Structural Optimization

    Sørensen, John Dalsgaard; Engelund, S.

    Application of stochastic finite elements in structural optimization is considered. It is shown how stochastic fields modelling e.g. the modulus of elasticity can be discretized in stochastic variables and how a sensitivity analysis of the reliability of a structural system with respect to optimization ...

  9. Overlay improvement by exposure map based mask registration optimization

    Shi, Irene; Guo, Eric; Chen, Ming; Lu, Max; Li, Gordon; Li, Rivan; Tian, Eric

    2015-03-01

    Along with the increased miniaturization of semiconductor electronic devices, the design rules of advanced semiconductor devices shrink dramatically. [1] One of the main challenges of the lithography step is layer-to-layer overlay control. Furthermore, DPT (Double Patterning Technology) has been adopted for advanced technology nodes like 28nm and 14nm, and the corresponding overlay budget becomes even tighter. [2][3] After the in-die mask registration (pattern placement) measurement is introduced, with the model analysis of a KLA SOV (sources of variation) tool, it is observed that the registration difference between masks is a significant error source of wafer layer-to-layer overlay at the 28nm process. [4][5] Mask registration optimization would accordingly improve wafer overlay performance. It was reported that a laser based registration control (RegC) process could be applied after pattern generation or after pellicle mounting and allowed fine tuning of the mask registration. [6] In this paper we propose a novel method of mask registration correction, which can be applied before mask writing based on the mask exposure map, considering the factors of mask chip layout, writing sequence, and pattern density distribution. Our experimental data show that if pattern density on the mask is kept at a low level, the in-die mask registration residual error in 3 sigma stays under 5nm regardless of the blank type and the related writer POSCOR (position correction) file applied; this proves that random error induced by material or equipment occupies a relatively fixed error budget as an error source of mask registration. In real production, comparing the mask registration difference across critical production layers, it can be seen that the registration residual error of line/space layers with higher pattern density is always much larger than that of contact hole layers with lower pattern density. Additionally, the mask registration difference between layers with similar pattern density ...

  10. Quantum Behaved Particle Swarm Optimization Algorithm Based on Artificial Fish Swarm

    Yumin, Dong; Li, Zhao

    2014-01-01

    The quantum behaved particle swarm algorithm is a new intelligent optimization algorithm; it has few parameters and is easily implemented. In view of the premature convergence problem of the existing quantum behaved particle swarm optimization algorithm, we put forward a quantum particle swarm optimization algorithm based on artificial fish swarm. The new algorithm builds on the quantum behaved particle swarm algorithm, introducing the swarming and following activities, meanwhile using the ...

  11. Chaos optimization algorithms based on chaotic maps with different probability distribution and search speed for global optimization

    Yang, Dixiong; Liu, Zhenjun; Zhou, Jilei

    2014-04-01

    Chaos optimization algorithms (COAs) usually utilize a chaotic map, such as the Logistic map, to generate the pseudo-random numbers mapped to the design variables for global optimization. Many existing studies have indicated that COAs can escape from local minima more easily than classical stochastic optimization algorithms. This paper reveals the inherent mechanism behind the high efficiency and superior performance of COAs from a new perspective: both the probability distribution property and the search speed of the chaotic sequences generated by different chaotic maps. The statistical property and search speed of chaotic sequences are represented by the probability density function (PDF) and the Lyapunov exponent, respectively. Meanwhile, the computational performance of hybrid chaos-BFGS algorithms based on eight one-dimensional chaotic maps with different PDFs and Lyapunov exponents is compared, in which BFGS is a quasi-Newton method for local optimization. Moreover, several multimodal benchmark examples illustrate that the probability distribution property and search speed of chaotic sequences from different chaotic maps significantly affect the global searching capability and optimization efficiency of COA. To achieve high efficiency, it is recommended to adopt a chaotic map generating sequences with uniform or nearly uniform probability distribution and a large Lyapunov exponent.

  12. Optimization of Indoor Thermal Comfort Parameters with the Adaptive Network-Based Fuzzy Inference System and Particle Swarm Optimization Algorithm

    Jing Li

    2017-01-01

    The goal of this study is to improve thermal comfort and indoor air quality with an adaptive network-based fuzzy inference system (ANFIS) model and an improved particle swarm optimization (PSO) algorithm. A method to optimize air conditioning parameters and installation distance is proposed. The methodology is demonstrated on a prototype case corresponding to a typical laboratory in colleges and universities. A laboratory model is established, and simulated flow field information is obtained with CFD software. Subsequently, the ANFIS model is employed instead of the CFD model to predict indoor flow parameters, and the CFD database is utilized to train ANN input-output “metamodels” for the subsequent optimization. With the improved PSO algorithm and the stratified sequence method, the objective functions, comprising PMV, PPD, and mean age of air, are optimized. The optimal installation distance is determined with the hemisphere model. Results show that most of the staff obtain a satisfactory degree of thermal comfort and that the proposed method can significantly reduce the cost of building an experimental device. The proposed methodology can be used to determine appropriate air supply parameters and air conditioner installation position for a pleasant and healthy indoor environment.
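Stripped of the ANFIS metamodel and the authors' improvements, the canonical PSO update underlying such studies can be sketched as follows (a generic illustration with our own parameter values, not the paper's algorithm): each particle's velocity blends inertia, a pull toward its personal best, and a pull toward the swarm best.

```python
import random

def pso_minimize(f, dim, bounds, n=15, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=3):
    """Canonical PSO with inertia weight w and acceleration terms c1, c2."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    P = [x[:] for x in X]                 # personal bests
    Pv = [f(x) for x in X]
    g = min(range(n), key=lambda i: Pv[i])
    G, Gv = P[g][:], Pv[g]                # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            v = f(X[i])
            if v < Pv[i]:
                P[i], Pv[i] = X[i][:], v
                if v < Gv:
                    G, Gv = X[i][:], v
    return G, Gv
```

In the paper's setting, `f` would be a composite of PMV, PPD, and mean age of air evaluated through the trained ANFIS metamodel rather than an analytic function.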

  13. Optimal layout of radiological environment monitoring based on TOPSIS method

    Li Sufen; Zhou Chunlin

    2006-01-01

    TOPSIS is a method for multi-objective decision-making which can be applied to the comprehensive assessment of environmental quality. This paper adopts it to obtain the optimal layout of radiological environment monitoring. The method proves correct, simple, convenient and practical, and helps supervision departments lay out radiological environment monitoring sites scientifically and reasonably. (authors)
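The TOPSIS procedure itself is compact: normalize the decision matrix, weight it, and rank alternatives by their relative closeness to the ideal solution. A minimal sketch (illustrative, not the paper's code; criteria weights and the benefit/cost flags are inputs the analyst supplies):

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.

    matrix[i][j]: score of alternative i on criterion j.
    weights[j]:   importance of criterion j.
    benefit[j]:   True if larger is better on criterion j.
    Returns one closeness score in [0, 1] per alternative (higher is better).
    """
    m, n = len(matrix), len(matrix[0])
    # vector normalization, then weighting
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m)))
             for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # ideal (best) and nadir (worst) virtual alternatives
    ideal = [(max if benefit[j] else min)(V[i][j] for i in range(m))
             for j in range(n)]
    nadir = [(min if benefit[j] else max)(V[i][j] for i in range(m))
             for j in range(n)]

    def dist(row, ref):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))

    return [dist(V[i], nadir) / (dist(V[i], ideal) + dist(V[i], nadir))
            for i in range(m)]
```

For monitoring-site layout, each row would be a candidate site scored on criteria such as population density, dose rate, and accessibility; the sites with the highest closeness scores form the recommended layout.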

  14. Ensemble based multi-objective production optimization of smart wells

    Fonseca, R.M.; Leeuwenburgh, O.; Jansen, J.D.

    2012-01-01

    In a recent study two hierarchical multi-objective methods were suggested to include short-term targets in life-cycle production optimization. However, this previous study has two limitations: 1) the adjoint formulation is used to obtain gradient information, requiring simulator source code access ...

  15. Global Launcher Trajectory Optimization for Lunar Base Settlement

    Pagano, A.; Mooij, E.

    2010-01-01

    The problem of a mission to the Moon to set a permanent outpost can be tackled by dividing the journey into three phases: the Earth ascent, the Earth-Moon transfer and the lunar landing. In this paper we present an optimization analysis of Earth ascent trajectories of existing launch vehicles ...

  16. Simulation-based optimization for product and process design

    Driessen, L.

    2006-01-01

    The design of products and processes has gradually shifted from a purely physical process towards a process that heavily relies on computer simulations (virtual prototyping). To optimize this virtual design process in terms of speed and final product quality, statistical methods and mathematical ...

  17. Space-Mapping-Based Interpolation for Engineering Optimization

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    ... of the fine model at off-grid points and, as a result, increases the effective resolution of the design variable domain search and improves the quality of the fine model solution found by the SM optimization algorithm. The proposed method requires little computational effort; in particular, no additional ...

  18. Stochastic optimization-based study of dimerization kinetics

    To this end, we study the dimerization kinetics of a protein as a model system. We follow the dimerization kinetics using a stochastic simulation algorithm and ... Keywords: optimization; dimerization kinetics; sensitivity analysis; stochastic simulation.

  19. Metamodel-based robust simulation-optimization : An overview

    Dellino, G.; Meloni, C.; Kleijnen, J.P.C.; Dellino, Gabriella; Meloni, Carlo

    2015-01-01

    Optimization of simulated systems is the goal of many methods, but most methods assume known environments. We, however, develop a "robust" methodology that accounts for uncertain environments. Our methodology uses Taguchi's view of the uncertain world but replaces his statistical techniques by ...

  20. Global stability-based design optimization of truss structures using ...

    The quality of the current Pareto front obtained at the end of a whole genetic search is assessed according to its closeness to the ... better optimal design with a lower displacement value of 0.3075 in. satisfying the service ...