WorldWideScience

Sample records for optimise rough set

  1. Generalized rough sets

    International Nuclear Information System (INIS)

    Rady, E.A.; Kozae, A.M.; Abd El-Monsef, M.M.E.

    2004-01-01

    The process of analyzing data under uncertainty is a main goal for many real life problems. Statistical analysis for such data is an interesting area for research. The aim of this paper is to introduce a new method concerning the generalization and modification of the rough set theory introduced earlier by Pawlak [Int. J. Comput. Inform. Sci. 11 (1982) 314...

  2. Bankruptcy Prediction with Rough Sets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); V. Popova (Viara)

    2001-01-01

    The bankruptcy prediction problem can be considered an ordinal classification problem. The classical theory of Rough Sets describes objects by discrete attributes, and does not take into account the ordering of the attribute values. This paper proposes a modification of the Rough Set...

  3. Fuzzy sets, rough sets, multisets and clustering

    CERN Document Server

    Dahlbom, Anders; Narukawa, Yasuo

    2017-01-01

    This book is dedicated to Prof. Sadaaki Miyamoto and presents cutting-edge papers in some of the areas in which he contributed. Bringing together contributions by leading researchers in the field, it concretely addresses clustering, multisets, rough sets and fuzzy sets, as well as their applications in areas such as decision-making. The book is divided into four parts, the first of which focuses on clustering and classification. The second part puts the spotlight on multisets, bags, fuzzy bags and other fuzzy extensions, while the third deals with rough sets. Rounding out the coverage, the last part explores fuzzy sets and decision-making.

  4. Rough set classification based on quantum logic

    Science.gov (United States)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory in terms of quantum logic. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new quantum rough set model has a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in terms of logic or sets. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.

  5. Reducing surface roughness by optimising the turning parameters

    Directory of Open Access Journals (Sweden)

    Senthil Kumar, K.

    2013-08-01

    Modern manufacturers worldwide look for the cheapest quality-manufactured machined components to compete in the market. Good surface quality is desired for the proper functioning of the parts produced. The surface quality is influenced by the cutting speed, feed rate, depth of cut, and many other parameters. In this paper, the Taguchi method, a powerful design-optimisation tool for quality, is used to find the optimal machining parameters for the turning operation. An orthogonal array, the signal-to-noise (S/N) ratio, and the analysis of variance (ANOVA) are employed to investigate the machining characteristics of super duplex stainless steel bars using uncoated carbide cutting tools. The effect of the machining parameters on surface roughness was determined. Confirmation tests were conducted at the optimal conditions to compare the experimental results with the predicted values.
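
    In Taguchi analysis, surface roughness is a smaller-the-better response, so each orthogonal-array trial is scored with the smaller-the-better S/N ratio and the parameter levels with the highest S/N are selected. A minimal sketch in Python, with hypothetical roughness readings:

        import math

        def sn_smaller_is_better(values):
            # Taguchi smaller-the-better S/N ratio in dB:
            # -10 * log10(mean of squared responses).
            mean_square = sum(v * v for v in values) / len(values)
            return -10.0 * math.log10(mean_square)

        # Hypothetical Ra readings (um) from three repeats of one trial;
        # the trial with the highest S/N ratio is preferred.
        trial_ra = [1.82, 1.75, 1.90]
        print(round(sn_smaller_is_better(trial_ra), 2))  # -5.22 dB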

  6. Generalized rough sets hybrid structure and applications

    CERN Document Server

    Mukherjee, Anjan

    2015-01-01

    The book introduces the concept of “generalized interval valued intuitionistic fuzzy soft sets”. It presents the basic properties of these sets and also investigates an application of generalized interval valued intuitionistic fuzzy soft sets in decision making with respect to interval of degree of preference. The concept of “interval valued intuitionistic fuzzy soft rough sets” is discussed, and an interval valued intuitionistic fuzzy soft rough set based multi-criteria group decision making scheme is presented, which refines the primary evaluation of the whole expert group and enables us to select the optimal object in a most reliable manner. The book also details the concept of interval valued intuitionistic fuzzy sets of type 2 and presents the basic properties of these sets. It also introduces the concept of “interval valued intuitionistic fuzzy soft topological space (IVIFS topological space)” together with intuitionistic fuzzy soft open sets (IVIFS open sets) and intuitionistic fuzzy soft cl...

  7. Information Measures of Roughness of Knowledge and Rough Sets for Incomplete Information Systems

    Institute of Scientific and Technical Information of China (English)

    LIANG Ji-ye; QU Kai-she

    2001-01-01

    In this paper we address information measures of the roughness of knowledge and of rough sets for incomplete information systems. The definition of the rough entropy of knowledge and its important properties are given. In particular, the relationship between the rough entropy of knowledge and the Hartley measure of uncertainty is established. We show that the rough entropy of knowledge decreases monotonously as the granularity of information becomes smaller, which gives an information interpretation for the roughness of knowledge. Based on the rough entropy of knowledge and the roughness of a rough set, a definition of the rough entropy of a rough set is proposed, and we show that it also decreases monotonously as the granularity of information becomes smaller. This gives a more accurate measure for the roughness of a rough set.
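
    The paper's exact definitions are not reproduced in this record, but the monotonicity it describes can be illustrated with one common form of rough entropy, E(R) = sum over granules X_i of |X_i|/|U| * log2 |X_i|, which is largest for the coarsest partition and zero for singleton granules. A sketch under that assumption:

        import math
        from collections import defaultdict

        def partition(universe, key):
            # Group objects into equivalence classes by an attribute function.
            blocks = defaultdict(list)
            for x in universe:
                blocks[key(x)].append(x)
            return list(blocks.values())

        def rough_entropy(universe, blocks):
            # E(R) = sum(|X_i|/|U| * log2 |X_i|); assumed Liang-style form.
            n = len(universe)
            return sum(len(b) / n * math.log2(len(b)) for b in blocks)

        U = list(range(8))
        coarse = partition(U, key=lambda x: x // 4)  # two granules of size 4
        fine = partition(U, key=lambda x: x // 2)    # four granules of size 2
        print(rough_entropy(U, coarse))  # 2.0
        print(rough_entropy(U, fine))    # 1.0: finer granules, lower entropy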

  8. A Rough Set Approach for Customer Segmentation

    Directory of Open Access Journals (Sweden)

    Prabha Dhandayudam

    2014-04-01

    Customer segmentation is a process that divides a business's total customers into groups according to their diversity of purchasing behavior and characteristics. The data mining clustering technique can be used to accomplish this customer segmentation. This technique clusters the customers in such a way that the customers in one group behave similarly when compared to the customers in other groups. The customer related data are categorical in nature. However, the clustering algorithms for categorical data are few and are unable to handle uncertainty. Rough set theory (RST) is a mathematical approach that handles uncertainty and is capable of discovering knowledge from a database. This paper proposes a new clustering technique called MADO (Minimum Average Dissimilarity between Objects) for categorical data based on elements of RST. The proposed algorithm is compared with other RST based clustering algorithms, such as MMR (Min-Min Roughness), MMeR (Min Mean Roughness), SDR (Standard Deviation Roughness), SSDR (Standard deviation of Standard Deviation Roughness), and MADE (Maximal Attributes DEpendency). The results show that for the real customer data considered, the MADO algorithm achieves clusters with higher cohesion, lower coupling, and less computational complexity when compared to the above mentioned algorithms. The proposed algorithm has also been tested on a synthetic data set to prove that it is also suitable for high dimensional data.
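
    The MADO criterion itself is not given in this record; as a stand-in, the kind of per-attribute dissimilarity such categorical clustering builds on can be sketched with simple matching (the fraction of attributes on which two customers differ):

        def dissimilarity(a, b):
            # Simple-matching dissimilarity between two categorical tuples.
            return sum(x != y for x, y in zip(a, b)) / len(a)

        # Hypothetical customer records: (gender, region, spending band).
        c1 = ("female", "urban", "high_spend")
        c2 = ("female", "rural", "high_spend")
        print(round(dissimilarity(c1, c2), 3))  # 0.333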

  9. More on neutrosophic soft rough sets and its modification

    Directory of Open Access Journals (Sweden)

    Emad Marei

    2015-12-01

    This paper aims to introduce and discuss a new mathematical tool for dealing with uncertainties, which is a combination of neutrosophic sets, soft sets and rough sets, namely the neutrosophic soft rough set model. Its modification is also introduced. Some of their properties are studied and supported with proved propositions and many counter examples. Some rough relations are redefined as neutrosophic soft rough relations. Comparisons among the traditional rough model, the suggested neutrosophic soft rough model and its modification, using their properties and accuracy measures, are introduced. Finally, we illustrate that the classical rough set model can be viewed as a special case of the models suggested in this paper.

  10. Rough sets selected methods and applications in management and engineering

    CERN Document Server

    Peters, Georg; Ślęzak, Dominik; Yao, Yiyu

    2012-01-01

    Introduced in the early 1980s, Rough Set Theory has become an important part of soft computing in the last 25 years. This book provides a practical, context-based analysis of rough set theory, with each chapter exploring a real-world application of Rough Sets.

  11. Soft sets combined with interval valued intuitionistic fuzzy sets of type-2 and rough sets

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2015-03-01

    Fuzzy set theory, rough set theory and soft set theory are all mathematical tools for dealing with uncertainties. The concept of type-2 fuzzy sets was introduced by Zadeh in 1975 and was extended to interval valued intuitionistic fuzzy sets of type-2 by the authors. This paper is devoted to the discussion of combinations of interval valued intuitionistic fuzzy sets of type-2, soft sets and rough sets. Three different types of new hybrid models, namely interval valued intuitionistic fuzzy soft sets of type-2, soft rough interval valued intuitionistic fuzzy sets of type-2 and soft interval valued intuitionistic fuzzy rough sets of type-2, are proposed and their properties are derived.

  12. PhysarumSoft: An update based on rough set theory

    Science.gov (United States)

    Schumann, Andrew; Pancerz, Krzysztof

    2017-07-01

    PhysarumSoft is a software tool consisting of two modules developed for programming Physarum machines and simulating Physarum games, respectively. The paper briefly discusses what has been added since the last version released in 2015. New elements in both modules are based on rough set theory. Rough sets are used to model behaviour of Physarum machines and to describe strategy games.

  13. Data for TROTS – The Radiotherapy Optimisation Test Set

    Directory of Open Access Journals (Sweden)

    Sebastiaan Breedveld

    2017-06-01

    The Radiotherapy Optimisation Test Set (TROTS) is an extensive set of problems originating from radiotherapy (radiation therapy) treatment planning. The dataset was created for two purposes: (1) to supply a large-scale dense dataset for measuring the performance and quality of mathematical solvers, and (2) to supply a dataset for investigating the multi-criteria optimisation and decision-making nature of the radiotherapy problem. The dataset contains 120 problems (patients), divided over 6 different treatment protocols/tumour types. Each problem contains numerical data, a configuration for the optimisation problem, and the data required to visualise and interpret the results. The data is stored as HDF5-compatible Matlab files, and includes scripts for working with the dataset.
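
    Since the problems are stored as HDF5-compatible (v7.3) .mat files, they can be inspected outside Matlab, for example with h5py. The file name and any dataset paths below are assumptions; list the keys first:

        import h5py

        # Hypothetical TROTS problem file; inspect its layout before use.
        with h5py.File("Protons_01.mat", "r") as f:
            f.visit(print)  # print every group/dataset name in the file
            # data = f["problem/dataID"][()]  # hypothetical dataset path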

  14. Matroidal Structure of Generalized Rough Sets Based on Tolerance Relations

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    ...of the generalized rough set based on the tolerance relation. The matroid can also induce a new relation. We investigate the connection between the original tolerance relation and the induced relation.

  15. Flu Diagnosis System Using Jaccard Index and Rough Set Approaches

    Science.gov (United States)

    Efendi, Riswan; Azah Samsudin, Noor; Mat Deris, Mustafa; Guan Ting, Yip

    2018-04-01

    The Jaccard index and rough set approaches have been frequently implemented in decision support systems with various domain applications. Both approaches are appropriate for categorical data analysis. This paper presents applications of set operations for flu diagnosis systems based on two different approaches, the Jaccard index and rough sets. These two approaches are established using set operation concepts, namely intersection and subset. A step-by-step procedure is demonstrated for each approach in diagnosing flu. The similarity and dissimilarity indexes between conditional symptoms and the decision are measured using the Jaccard approach. Additionally, the rough set is used to build decision support rules, which are established using redundant data analysis and the elimination of unclassified elements. A number of data sets are considered to demonstrate the step-by-step procedure of each approach. The results show that rough sets can be used to support the Jaccard approach in establishing decision support rules. Additionally, the Jaccard index is the better approach for investigating the worst condition of patients, while the patients who definitely or possibly have flu can be determined using the rough set approach. The rules may improve the performance of medical diagnosis systems, making preliminary flu diagnosis easier for inexperienced doctors and for patients.
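
    The Jaccard side of the comparison reduces to a set-intersection-over-union score between a patient's symptoms and a disease profile. A minimal sketch with hypothetical symptom sets:

        def jaccard(a, b):
            # Jaccard similarity |A & B| / |A | B| of two symptom sets.
            a, b = set(a), set(b)
            return len(a & b) / len(a | b) if a | b else 1.0

        flu_profile = {"fever", "cough", "headache", "muscle_ache"}
        patient = {"fever", "cough", "sore_throat"}
        print(round(jaccard(patient, flu_profile), 2))  # 0.4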

  16. Helly-type theorems for roughly convexlike sets

    International Nuclear Information System (INIS)

    Phan Thanh An

    2005-04-01

    For a given positive real number γ, a subset M of an n-dimensional Euclidean space is said to be roughly convexlike (with roughness degree γ) if x₀, x₁ ∈ M and ‖x₁ − x₀‖ > γ imply ]x₀, x₁[ ∩ M ≠ ∅. In this paper, we present Helly-type theorems for such sets and then solve an open question about sets of constant width raised by Buchman and Valentine and by Sallee. (author)
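
    For intuition, the defining condition is easy to test on a finite sample restricted to the real line (the paper works in n dimensions): whenever two points of M are more than γ apart, the open segment between them must contain a point of M. A small check under that one-dimensional simplification:

        def roughly_convexlike(M, gamma):
            # True if every pair of points farther apart than gamma has
            # another point of M strictly between them.
            pts = sorted(M)
            for i, x0 in enumerate(pts):
                for x1 in pts[i + 1:]:
                    if x1 - x0 > gamma and not any(x0 < m < x1 for m in pts):
                        return False
            return True

        print(roughly_convexlike({0.0, 1.0, 2.0}, gamma=1.5))  # True
        print(roughly_convexlike({0.0, 2.0}, gamma=1.5))       # False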

  17. Optimisation-Based Solution Methods for Set Partitioning Models

    DEFF Research Database (Denmark)

    Rasmussen, Matias Sevel

    The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...

  18. Variable precision rough set for multiple decision attribute analysis

    Institute of Scientific and Technical Information of China (English)

    Lai, Kin Keung

    2008-01-01

    A variable precision rough set (VPRS) model is used to solve the multi-attribute decision analysis (MADA) problem with multiple conflicting decision attributes and multiple condition attributes. By introducing confidence measures and a β-reduct, the VPRS model can rationally solve the conflicting decision analysis problem with multiple decision attributes and multiple condition attributes. For illustration, a medical diagnosis example is utilized to show the feasibility of the VPRS model in solving the MADA...

  19. UNCERTAINTY HANDLING IN DISASTER MANAGEMENT USING HIERARCHICAL ROUGH SET GRANULATION

    Directory of Open Access Journals (Sweden)

    H. Sheikhian

    2015-08-01

    Uncertainty is one of the main concerns in geospatial data analysis. It affects different parts of decision making based on such data. In this paper, a new methodology to handle uncertainty for multi-criteria decision making problems is proposed. It integrates hierarchical rough granulation and rule extraction to build an accurate classifier. Rough granulation provides information granules with a detailed quality assessment. The granules are the basis for the rule extraction in granular computing, which applies quality measures on the rules to obtain the best set of classification rules. The proposed methodology is applied to assess seismic physical vulnerability in Tehran. Six effective criteria reflecting building age, height and material, topographic slope, and earthquake intensity of the North Tehran fault have been tested. The criteria were discretized and the data set was granulated using a hierarchical rough method, where the granules that describe the data best are determined according to the quality measures. The granules are fed into the granular computing algorithm, resulting in classification rules that provide the highest prediction quality. This detailed uncertainty management resulted in 84% prediction accuracy on a training data set. The method was then applied to the whole study area to obtain the seismic vulnerability map of Tehran. A sensitivity analysis proved that earthquake intensity is the most effective criterion in the seismic vulnerability assessment of Tehran.

  20. Thriving rough sets 10th anniversary: honoring Professor Zdzisław Pawlak's life and legacy & 35 years of rough sets

    CERN Document Server

    Skowron, Andrzej; Yao, Yiyu; Ślęzak, Dominik; Polkowski, Lech

    2017-01-01

    This special book is dedicated to the memory of Professor Zdzisław Pawlak, the father of rough set theory, in order to commemorate both the 10th anniversary of his passing and 35 years of rough set theory. The book consists of 20 chapters distributed into four sections, which focus in turn on a historical review of Professor Zdzisław Pawlak and rough set theory; a review of the theory of rough sets; the state of the art of rough set theory; and major developments in rough set based data mining approaches. Apart from Professor Pawlak’s contributions to rough set theory, other areas he was interested in are also included. Moreover, recent theoretical studies and advances in applications are also presented. The book will offer a useful guide for researchers in Knowledge Engineering and Data Mining by suggesting new approaches to solving the problems they encounter.

  1. Rough Set Approach to Incomplete Multiscale Information System

    Science.gov (United States)

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu

    2014-01-01

    A multiscale information system is a new knowledge representation system for expressing knowledge at different levels of granulation. In this paper, by considering unknown values, which can be seen everywhere in real world applications, the incomplete multiscale information system is investigated for the first time. The descriptor technique is employed to construct rough sets at different scales for analyzing hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, reduct descriptors are formulated to simplify the decision rules that can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852

  2. Rough set and rule-based multicriteria decision aiding

    Directory of Open Access Journals (Sweden)

    Roman Slowinski

    2012-08-01

    The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to the induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data which handles ordinal evaluations of objects on the considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.

  3. An IDS Alerts Aggregation Algorithm Based on Rough Set Theory

    Science.gov (United States)

    Zhang, Ru; Guo, Tao; Liu, Jianyi

    2018-03-01

    Within a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. In combination with basic concepts of rough set theory, the importance of the attributes in alerts is calculated first. With the attribute importance values, we compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated or not. The time interval is also taken into consideration: the allowed time interval is computed individually for each type of alert, since different types of alerts may have different time gaps between two alerts. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset, and the results of the experiment show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of a security system can avoid wasting time on useless alerts.
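
    A minimal sketch of the aggregation idea: rough-set attribute importance weights a per-attribute match score, and two alerts are merged when the weighted similarity passes the threshold and the alerts fall within the allowed time gap. The attribute names, weights, threshold, and gap below are illustrative assumptions, not values from the paper:

        WEIGHTS = {"src_ip": 0.35, "dst_ip": 0.30, "sig_id": 0.25, "port": 0.10}
        THRESHOLD = 0.75
        MAX_GAP_S = 60  # assumed allowed time gap for this alert type

        def similarity(a, b):
            # Weighted share of attributes on which the two alerts agree.
            return sum(w for attr, w in WEIGHTS.items() if a[attr] == b[attr])

        def can_aggregate(a, b):
            return (abs(a["time"] - b["time"]) <= MAX_GAP_S
                    and similarity(a, b) >= THRESHOLD)

        a1 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
              "sig_id": 1042, "port": 80, "time": 100}
        a2 = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9",
              "sig_id": 1042, "port": 443, "time": 130}
        print(can_aggregate(a1, a2))  # True: similarity 0.90, gap 30 s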

  4. OPTIMISATION OF A DRIVE SYSTEM AND ITS EPICYCLIC GEAR SET

    OpenAIRE

    Bellegarde, Nicolas; Dessante, Philippe; Vidal, Pierre; Vannier, Jean-Claude

    2007-01-01

    This paper describes the design of a drive consisting of a DC motor, a speed reducer, a lead screw transformation system, a power converter and its associated DC source. The objective is to reduce the mass of the system. Indeed, the volume and weight optimisation of an electrical drive is an important issue for embedded applications. Here, we present an analytical model of the system in a specific application and afterwards an optimisation of the motor and speed reduce...

  5. Rough sets applied in sublattices and ideals of lattices

    Directory of Open Access Journals (Sweden)

    R. Ameri

    2015-12-01

    The purpose of this paper is the study of rough hyperlattices. In this regard, we introduce rough sublattices and rough ideals of lattices. We proceed by obtaining lower and upper approximations in these lattices.

  6. Preference Mining Using Neighborhood Rough Set Model on Two Universes.

    Science.gov (United States)

    Zeng, Kai

    2016-01-01

    Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not applicable to the cold-start problem, when the user or the item is new. In this paper, we propose a new model, called the parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used for defining the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for the cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.

  7. Analysis and optimisation of vertical surface roughness in micro selective laser melting

    International Nuclear Information System (INIS)

    Abele, Eberhard; Kniepkamp, Michael

    2015-01-01

    Surface roughness is a major disadvantage of many additive manufacturing technologies like selective laser melting (SLM) compared to established processes like milling or drilling. With recent advancements, the resolution of the SLM process could be increased to layer heights of less than 10 μm, leading to a new process called micro selective laser melting (μSLM). The purpose of this paper is to analyze the influence of the μSLM process parameters and exposure strategies on the morphology of vertical surfaces. Contour scanning with varying process parameters was used to increase the surface quality. It is shown that it is possible to achieve an average surface roughness of less than 1.7 μm using low scan speeds, compared to 8–10 μm without contour scanning. Furthermore, it is shown that a contour exposure prior to the core exposure leads to surface defects and thus increased roughness. (paper)

  8. Rough Standard Neutrosophic Sets: An Application on Standard Neutrosophic Information Systems

    Directory of Open Access Journals (Sweden)

    Nguyen Xuan Thao

    2016-12-01

    A rough fuzzy set is the result of the approximation of a fuzzy set with respect to a crisp approximation space. It is a mathematical tool for knowledge discovery in fuzzy information systems. In this paper, we introduce the concepts of rough standard neutrosophic sets and the standard neutrosophic information system, and give some results on knowledge discovery in standard neutrosophic information systems based on rough standard neutrosophic sets.

  9. Factors Analysis And Profit Achievement For Trading Company By Using Rough Set Method

    Directory of Open Access Journals (Sweden)

    Muhammad Ardiansyah Sembiring

    2017-06-01

    This research analyses the financial reports of a trading company, which are intimately related to the factors that determine the company's profit. The result of this research is new knowledge in the form of rules. Following the data mining process, the rough set method is used to analyse the performance of the results, and will assist the company's manager in drawing intact and objective conclusions. The rough set method defines the rule discovery process, starting from the formation of the decision system, equivalence classes, the discernibility matrix, the discernibility matrix modulo D, reducts, and general rules. The rough set method is an effective model for performing this analysis in the company. Keywords: Data Mining, General Rules, Profit, Rough Set.

  10. Optimising Mycobacterium tuberculosis detection in resource limited settings.

    Science.gov (United States)

    Alfred, Nwofor; Lovette, Lawson; Aliyu, Gambo; Olusegun, Obasanya; Meshak, Panwal; Jilang, Tunkat; Iwakun, Mosunmola; Nnamdi, Emenyonu; Olubunmi, Onuoha; Dakum, Patrick; Abimiku, Alash'le

    2014-03-03

    Light-emitting diode (LED) fluorescence microscopy has made acid-fast bacilli (AFB) detection faster and more efficient, although its optimal performance in resource-limited settings is still being studied. We assessed the optimal performance of light and fluorescence microscopy in the routine conditions of a resource-limited setting and evaluated the digestion time for sputum samples for the maximum yield of positive cultures. Cross-sectional study. Facility-based, involving samples of routine patients receiving tuberculosis treatment and care from the main tuberculosis case referral centre in northern Nigeria. The study included 450 sputum samples from 150 new patients with a clinical diagnosis of pulmonary tuberculosis. The 450 samples were pooled into 150 specimens, examined independently with mercury vapour lamp (FM), LED CysCope (CY) and Primo Star iLED (PiLED) fluorescence microscopy, and with Ziehl-Neelsen (ZN) microscopy to assess the performance of each technique compared with liquid culture. The cultured specimens were decontaminated with BD Mycoprep (4% NaOH-1% NLAC and 2.9% sodium citrate) for 10, 15 and 20 min before incubation in the Mycobacterium growth indicator tube (MGIT) system, and growth was examined for acid-fast bacilli (AFB). Of the 150 specimens examined by direct microscopy, 44 (29%), 60 (40%), 49 (33%) and 64 (43%) were AFB positive by ZN, FM, CY and iLED microscopy, respectively. Digestion of sputum samples for 10, 15 and 20 min yielded mycobacterial growth in 72 (48%), 81 (54%) and 68 (45%) of the digested samples, respectively, after incubation in the MGIT system. In the routine laboratory conditions of a resource-limited setting, our study has demonstrated the superiority of fluorescence microscopy over the conventional ZN technique. Digestion of sputum samples for 15 min yielded more positive cultures.

  11. Rough set semantics for identity on the Web

    NARCIS (Netherlands)

    Beek, Wouter; Schlobach, Stefan; van Harmelen, Frank

    2014-01-01

    Identity relations are at the foundation of many logic-based knowledge representations. We argue that the traditional notion of equality is unsuited for many realistic knowledge representation settings. The classical interpretation of equality is too strong when the equality statements are re-used...

  12. A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization

    OpenAIRE

    Suguna, N.; Thanushkodi, K.

    2010-01-01

    Feature selection refers to the problem of selecting relevant features which produce the most predictive outcome. In particular, the feature selection task is involved in datasets containing a huge number of features. Rough set theory has been one of the most successful methods used for feature selection. However, this method is still not able to find optimal subsets. This paper proposes a new feature selection method based on rough set theory hybridised with Bee Colony Optimization (BCO) in an attempt...

  13. Application of preprocessing filtering on Decision Tree C4.5 and rough set theory

    Science.gov (United States)

    Chan, Joseph C. C.; Lin, Tsau Y.

    2001-03-01

    This paper compares two artificial intelligence methods, the Decision Tree C4.5 and Rough Set Theory, on stock market data. The Decision Tree C4.5 is reviewed alongside Rough Set Theory. An enhanced window application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, which allow users to input formulas and create new attributes. The application also produces three varieties of data set with delaying, averaging, and summation. The results prove the improvement of pre-processing by applying feature (attribute) transformations on Decision Tree C4.5. Moreover, the comparison between Decision Tree C4.5 and Rough Set Theory is based on clarity, automation, accuracy, dimensionality, raw data, and speed, and is supported by the rule sets generated by both algorithms on three different sets of data.

  14. A rough set approach for determining weights of decision makers in group decision making.

    Science.gov (United States)

    Yang, Qiang; Du, Ping-An; Wang, Yong; Liang, Bin

    2017-01-01

    This study aims to present a novel approach for determining the weights of decision makers (DMs) based on rough group decisions in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrices on the basis of rough set theory. After that, we derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrices of the rough group decision. Then, we obtain the weight of each group member and the priority order of alternatives by using the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and the NISs. Comparisons with existing methods and an on-line business manager selection example show that the proposed method can provide more insight into the subjectivity and vagueness of DMs' evaluations and selections.
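
    The relative-closeness step can be sketched as follows: each decision maker's matrix is compared with the group average (the PIS) and with the lower/upper limit matrices (the NISs), and a matrix that is near the average and far from the limits earns a larger weight. The matrices below are invented for illustration:

        import numpy as np

        dms = [np.array([[7, 8], [6, 5]]),
               np.array([[6, 9], [7, 4]]),
               np.array([[8, 7], [5, 6]])]

        pis = np.mean(dms, axis=0)    # average matrix (positive ideal)
        nis_lo = np.min(dms, axis=0)  # lower-limit matrix
        nis_hi = np.max(dms, axis=0)  # upper-limit matrix

        def closeness(m):
            # Larger when m is close to the PIS and far from the NISs.
            d_pos = np.linalg.norm(m - pis)
            d_neg = np.linalg.norm(m - nis_lo) + np.linalg.norm(m - nis_hi)
            return d_neg / (d_pos + d_neg)

        raw = np.array([closeness(m) for m in dms])
        print((raw / raw.sum()).round(3))  # normalised DM weights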

  15. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Congested roads, high traffic, and parking problems are major concerns for any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking areas in office and shopping mall complexes during the peak time of official transactions. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough sets. Rough sets are used for the extraction of the uncertain rules that exist in databases of parking situations. The inclusion of rough set theory provides the accuracy and roughness measures, which are used to characterize the uncertainty of the parking lot. Approximation accuracy is employed to depict the accuracy of a rough classification [1] according to different dynamic parking scenarios. As such, the proposed hybrid metaphor, comprising Tabu Search and rough sets, could provide substantial research directions for other similar hard optimization problems.

  16. δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions

    Directory of Open Access Journals (Sweden)

    Hengrong Ju

    2014-01-01

    The decision-theoretic rough set is a quite useful rough set model that introduces decision costs into the probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation, and such a relation may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, which is based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision-monotonicity and cost decrease, two different algorithms are designed to compute reducts. The comparisons between these two algorithms show the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion can generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, and it follows that the uncertainty which comes from the boundary region can be decreased; (2) with respect to the reducts based on the decision-monotonicity criterion, the reducts based on the cost minimum criterion can obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
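
    The probabilistic three-region split that decision-theoretic rough sets build on can be sketched directly: an equivalence class goes to the positive region when the conditional probability of the target meets α, to the negative region when it falls to β or below, and to the boundary otherwise. The granules and thresholds here are illustrative (the paper's δ-cut relation changes how the granules are formed, not this split):

        ALPHA, BETA = 0.8, 0.3

        def three_regions(granules, target):
            # Split granules by P(target | granule) against (ALPHA, BETA).
            pos, bnd, neg = [], [], []
            for g in granules:
                p = len(g & target) / len(g)
                (pos if p >= ALPHA else neg if p <= BETA else bnd).append(g)
            return pos, bnd, neg

        granules = [{1, 2}, {3, 4, 5}, {6, 7, 8, 9}]
        target = {1, 2, 3, 7}
        pos, bnd, neg = three_regions(granules, target)
        print(pos)  # [{1, 2}]        P = 1.00 >= 0.8
        print(bnd)  # [{3, 4, 5}]     0.3 < P = 0.33 < 0.8
        print(neg)  # [{6, 7, 8, 9}]  P = 0.25 <= 0.3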

  17. A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses

    Science.gov (United States)

    Zhang, Chao; Li, Deyu; Yan, Yan

    2015-01-01

    In medical science, disease diagnosis is one of the difficult tasks for medical experts, who are confronted with challenges in dealing with a lot of uncertain medical information. Different medical experts might also express their own thoughts about the medical knowledge base, which slightly differ from those of other experts. Thus, to solve the problems of uncertain data analysis and group decision making in disease diagnosis, we propose a new rough set model called the dual hesitant fuzzy multigranulation rough set over two universes, by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach which is applied to a decision making problem in disease diagnosis, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772

  18. Uncertainty Modeling for Database Design using Intuitionistic and Rough Set Theory

    Science.gov (United States)

    2009-01-01

    Definition. An intuitionistic rough relation R is a subset of the set cross product P(D1) × P(D2) × ⋯ × P(Dm) × Dμ × Dν. For a specific relation, R … that aj ∈ dij for all j. The interpretation space is the cross product D1 × D2 × ⋯ × Dm × Dμ × Dν, but is limited for a given relation R to the set … systems, Journal of Information Science 11 (1985), 77–87. [7] T. Beaubouef and F. Petry, Rough Querying of Crisp Data in Relational Databases, Third …

  19. Recent Fuzzy Generalisations of Rough Sets Theory: A Systematic Review and Methodological Critique of the Literature

    Directory of Open Access Journals (Sweden)

    Abbas Mardani

    2017-01-01

    Rough set theory has been used extensively in the fields of complexity, cognitive science, and artificial intelligence, especially in numerous areas such as expert systems, knowledge discovery, information systems, inductive reasoning, intelligent systems, data mining, pattern recognition, decision-making, and machine learning. Recently proposed rough set models have been developed by applying different fuzzy generalisations. Currently, there is no systematic literature review and classification of these new generalisations of rough set models. Therefore, in this review study, an attempt is made to provide a comprehensive systematic review of the methodologies and applications of the recent generalisations discussed in the area of fuzzy-rough set theory. For this purpose, the Web of Science database was chosen to select the relevant papers. Accordingly, the systematic and meta-analysis approach called “PRISMA” was applied, and the selected articles were classified based on the author and year of publication, author nationalities, application field, type of study, study category, study contribution, and journal in which the articles appeared. Based on the results of this review, we found that there are many challenging issues related to the different application areas of fuzzy-rough set theory which can motivate future research studies.

  20. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    Science.gov (United States)

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  1. Rough set theory and its application in fault diagnosis in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Chen Zhihui; Nuclear Power Inst. of China, Chengdu; Xia Hong; Huang Wei

    2006-01-01

    Rough set theory is a mathematical theory that can express and deal with vague and uncertain data. The fault features of a nuclear power plant contain complicated and uncertain data, so rough set theory can be introduced to analyze and process the historical data to find the rules for fault diagnosis of a nuclear power plant. This paper briefly introduces rough set theory and knowledge acquisition, and describes the reduction algorithm based on the discernibility matrix and its application in fault diagnosis to generate diagnosis rules. Using these rules, three kinds of model faults have been diagnosed correctly. The conclusion can be drawn that this method can reduce the redundancy of the fault features and simplify and optimize the diagnosis rules. (authors)
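
    The discernibility-matrix reduction mentioned above can be sketched on a toy decision table (the fault data and attribute values are invented): each matrix entry lists the condition attributes that distinguish two objects with different decisions, and singleton entries form the core that every reduct must contain:

        from itertools import combinations

        # Toy decision table: (condition attribute values, fault label).
        table = [((1, 0, 1), "f1"),
                 ((1, 1, 0), "f2"),
                 ((0, 1, 1), "f1"),
                 ((1, 0, 0), "f3")]
        attrs = range(3)

        def discernibility_matrix(table):
            # Entry (i, j): attributes separating objects i and j when
            # their decisions differ; same-decision pairs are skipped.
            m = {}
            pairs = combinations(enumerate(table), 2)
            for (i, (xi, di)), (j, (xj, dj)) in pairs:
                if di != dj:
                    m[(i, j)] = {a for a in attrs if xi[a] != xj[a]}
            return m

        m = discernibility_matrix(table)
        core = {next(iter(e)) for e in m.values() if len(e) == 1}
        print(m)     # e.g. (0, 3) -> {2}: only attribute 2 separates them
        print(core)  # {1, 2}: singleton entries form the core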

  2. Rough Sets and Intelligent Systems - Professor Zdzisław Pawlak in Memoriam Volume 2

    CERN Document Server

    Suraj, Zbigniew

    2013-01-01

    This book is dedicated to the memory of Professor Zdzisław Pawlak, who passed away almost six years ago. He is the founder of the Polish school of Artificial Intelligence and one of the pioneers in Computer Engineering and Computer Science with worldwide influence. He was a truly great scientist, researcher, teacher and human being. This book, prepared in two volumes, contains more than 50 chapters. This demonstrates that the scientific approaches discovered by Professor Zdzisław Pawlak, especially the rough set approach as a tool for dealing with imperfect knowledge, are vivid and intensively explored by many researchers in many places throughout the world. The submitted papers prove that interest in rough set research is growing and that it is possible to see many new excellent results, both on the theoretical foundations and on applications of rough sets, alone or in combination with other approaches. We are proud to offer the readers this book.

  3. Rough Sets and Intelligent Systems - Professor Zdzisław Pawlak in Memoriam Volume 1

    CERN Document Server

    Suraj, Zbigniew

    2013-01-01

    This book is dedicated to the memory of Professor Zdzisław Pawlak, who passed away almost six years ago. He is the founder of the Polish school of Artificial Intelligence and one of the pioneers in Computer Engineering and Computer Science with worldwide influence. He was a truly great scientist, researcher, teacher and human being. This book, prepared in two volumes, contains more than 50 chapters. This demonstrates that the scientific approaches discovered by Professor Zdzisław Pawlak, especially the rough set approach as a tool for dealing with imperfect knowledge, are vivid and intensively explored by many researchers in many places throughout the world. The submitted papers prove that interest in rough set research is growing and that it is possible to see many new excellent results, both on the theoretical foundations and on applications of rough sets, alone or in combination with other approaches. We are proud to offer the readers this book.

  4. The Dynamic Evaluation of Enterprise's Strategy Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    刘恒江; 陈继祥

    2003-01-01

    This paper presents a dynamic evaluation of an enterprise's strategy which is suitable for dealing with the complex and dynamic problems of strategic evaluation. Rough Set Theory is a powerful mathematical tool to handle the vagueness and uncertainty of dynamic evaluation. By applying Rough Set Theory, this paper computes the significance and weights of each evaluation criterion, helping to lay the evaluation emphasis on the main and effective criteria. From the reduced decision table, evaluators can get decision rules which direct them in giving a judgment or suggestion on the strategy. The whole evaluation process is driven by data, so the results are certain and reasonable.

  5. Modeling of Two-Phase Flow in Rough-Walled Fracture Using Level Set Method

    Directory of Open Access Journals (Sweden)

    Yunfeng Dai

    2017-01-01

    To accurately describe the flow characteristics of fracture-scale displacements of immiscible fluids, an incompressible two-phase (crude oil and water) flow model incorporating interfacial forces and nonzero contact angles is developed. The roughness of the two-dimensional synthetic rough-walled fractures is controlled with different fractal dimension parameters. Described by the Navier–Stokes equations, the moving interface between crude oil and water is tracked using the level set method. The method accounts for differences in the densities and viscosities of crude oil and water and includes the effect of interfacial force. The wettability of the rough fracture wall is taken into account by defining the contact angle and slip length. The curve of invasion pressure versus water volume fraction is generated by modeling two-phase flow during a sudden drainage. The volume fraction of water retained in the rough-walled fracture is calculated by integrating the water volume and dividing by the total cavity volume of the fracture while the two-phase flow is quasistatic. The effects of the invasion pressure of crude oil, the roughness of the fracture wall, and the wettability of the wall on two-phase flow in a rough-walled fracture are evaluated.

  6. Analysis of the experimental data of air pollution using atmospheric dispersion modeling and rough set

    International Nuclear Information System (INIS)

    Halfa, I.K.I

    2008-01-01

    This thesis contains four chapters and a list of references. In chapter 1, we give a brief survey of atmospheric concepts and topological methods for data analysis. In section 1.1, we give a general introduction. We recall some atmospheric fundamentals in Section 1.2. Section 1.3 presents the concepts of modern topological methods for data analysis. In chapter 2, we study the properties of the atmosphere and focus on the concept of the rough set and its properties, which are applied to analyze the atmospheric data. In section 2.1, we give a general introduction to the concept of the rough set and the properties of the atmosphere. Section 2.2 focuses on the concept of the rough set and its properties and on the generalization of the approximations of rough set theory by using topological spaces. In section 2.3 we study the stability of the atmosphere at the Inshas location for all seasons using different schemes, and compare these schemes using statistical and rough set methods. In section 2.4, we introduce the mixing height of the plume for all seasons. Section 2.5 introduces seasonal surface layer turbulence processes for the Inshas location. Section 2.6 gives a comparison between the seasonal surface layer turbulence processes for the Inshas location and for different locations using rough set theory. In chapter 3 we focus on the concept of the variable precision rough set (VPRS) and its properties, using it to compare the estimated and observed data of the concentration of air pollution for the Inshas location. In Section 3.1 we give a general introduction to VPRS and air pollution. In Section 3.2 we focus on the concept and properties of VPRS. In Section 3.3 we introduce a method to estimate the concentration of air pollution for the Inshas location using the Gaussian plume model. Section 3.4 presents the experimental data. The estimated data are compared with the observed data using statistical methods in Section 3.5. In Section 3...

  7. Risk Decision Making Based on Decision-theoretic Rough Set: A Three-way View Decision Model

    OpenAIRE

    Huaxiong Li; Xianzhong Zhou

    2011-01-01

    Rough set theory has witnessed great success in data mining and knowledge discovery, providing good support for decision making on given data. However, a practical decision problem always shows diversity under the same circumstances according to the different personalities of the decision makers. A single decision model cannot provide a full description of such diverse decisions. In this article, a review of Pawlak rough set models and probabilistic rough set models is presented, and a ...

  8. Fault Diagnosis Method of Polymerization Kettle Equipment Based on Rough Sets and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Shu-zhi Gao

    2013-01-01

    The polyvinyl chloride (PVC) polymerizing production process is a typical complex controlled object, with features such as nonlinearity, multiple variables, strong coupling, and large time delays. Aiming at the real-time fault diagnosis and optimized monitoring requirements of the large-scale key polymerization equipment of the PVC production process, a real-time fault diagnosis strategy is proposed based on rough set theory with an improved discernibility matrix and BP neural networks. The improved discernibility matrix is adopted to reduce the attributes of the rough sets in order to decrease the input dimensionality of the fault characteristics effectively. A Levenberg-Marquardt BP neural network is trained to diagnose the polymerization faults according to the reduced decision table, which realizes the nonlinear mapping from the fault symptom set to the polymerization fault set. Simulation experiments are carried out with industrial historical data to show the effectiveness of the proposed rough set neural network fault diagnosis method. The proposed strategy greatly increases the accuracy rate and efficiency of the polymerization fault diagnosis system.

  9. A Rough Set Approach of Mechanical Fault Diagnosis for Five-Plunger Pump

    Directory of Open Access Journals (Sweden)

    Jiangping Wang

    2013-01-01

    Five-plunger pumps are widely used in oil fields to recover petroleum due to their reliability and relatively low cost. Petroleum production is, to a great extent, dependent upon the running condition of the pumps. Closely monitoring the condition of the pumps and carrying out timely system diagnosis whenever a fault symptom is detected would help to reduce production downtime and improve overall productivity. In this paper, a rough set approach to mechanical fault diagnosis is proposed to identify five-plunger pump faults. The details of the approach, together with the basic concepts of rough set theory, are presented. The rough classifier is a set of decision rules derived from the lower and upper approximations of the decision classes. The definitions of these approximations are based on the indiscernibility relation in the set of objects. The spectrum features of vibration signals are abstracted as the attributes of the learning samples. The minimum decision rule set is used to classify the technical states of the considered object. The diagnostic investigation is done on data from a five-plunger pump in outdoor conditions on a real industrial object. Results show that the approach can effectively identify the different operating states of the pump.

  10. Merger and Acquisition Target Selection Based on Interval Neutrosophic Multigranulation Rough Sets over Two Universes

    Directory of Open Access Journals (Sweden)

    Chao Zhang

    2017-07-01

    As a significant business activity, merger and acquisition (M&A) generally means transactions in which the ownership of companies, other business organizations or their operating units is transferred or combined. In a typical M&A procedure, M&A target selection is an important issue that tends to exert an increasingly significant impact on different business areas. Although some research works based on fuzzy methods have explored this issue, they can only deal with incomplete and uncertain information, not with the inconsistent and indeterminate information that exists universally in the decision making process. Additionally, it is advantageous to solve M&A problems under a group decision making context. In order to handle these difficulties in the M&A target selection background, we introduce a novel rough set model by combining interval neutrosophic sets (INSs) with multigranulation rough sets over two universes, called an interval neutrosophic (IN) multigranulation rough set over two universes. Then, we discuss the definition and some fundamental properties of the proposed model. Finally, we establish decision making rules and computing approaches for the proposed model in the M&A target selection background, and the effectiveness of the decision making approach is demonstrated by an illustrative case analysis.

  11. Study of different effectives on wind energy by using mathematical methods and rough set theory

    International Nuclear Information System (INIS)

    Marrouf, A.A.

    2009-01-01

    Analysis of data plays an important role in all fields of life; a huge amount of data results from experiments in all the scientific and social sciences. The analysis of these data was traditionally carried out by statistical methods, and its representation depended on classical Euclidean geometric concepts. In the 21st century, new directions for data analysis have appeared in applications. These directions depend basically on modern mathematical theories. The quality of data and information can be characterized as interfering, and one is unable to distinguish between its vocabularies. Topological methods are the most compatible for this process of analysis for decision making. At the end of the 20th century, a new topological method appeared, known as the Rough Set Theory Approach. It does not depend on external suppositions and is known as "letting data speak", which is good for all types of data. The theory was originated by Pawlak in 1982 [48] as a result of a long-term program of fundamental research on the logical properties of information systems, carried out by him and a group of logicians from the Polish Academy of Sciences and the University of Warsaw, Poland. Various real-life applications of rough sets have shown their usefulness in many domains, such as civil engineering, medical data analysis, the generation of a cement kiln control algorithm from observation of a stoker's actions, vibration analysis, aircraft pilot performance evaluation, hydrology, pharmacology, image processing and ecology. The Variable Precision Rough Set (VPRS) model was proposed by W. Ziarko [80]. It is a new generalization of the rough set model, aimed at handling uncertain information, and is directly derived from the original model without any additional assumptions. Topology is a mathematical tool for studying information systems and variable precision rough sets. Ziarko presumed that the notion of variable precision rough sets depends on special types of topological spaces. In this space, the families of...

  12. Method research of fault diagnosis based on rough set for nuclear power plant

    International Nuclear Information System (INIS)

    Chen Zhihui; Xia Hong

    2005-01-01

    The fault features of nuclear power equipment are complicated and uncertain. Rough set theory can express and deal with vagueness and uncertainty, so it can be introduced into nuclear power fault diagnosis to analyze and process historical data to find the rules of the fault features. The rough set treatment steps are: data preprocessing, attribute reduction, attribute value reduction, and rule generation. According to the definition and nature of the discernibility matrix, we can utilize the discernibility matrix in a reduction algorithm that performs attribute and attribute value reduction, which diminishes algorithmic complexity and simplifies programming. This algorithm is applied to nuclear power fault diagnosis to generate diagnosis rules. Using these rules, we have diagnosed five kinds of model faults correctly. (authors)

  13. Intelligent fault diagnosis of rolling bearing based on kernel neighborhood rough sets and statistical features

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xiao Ran; Zhang, You Yun; Zhu, Yong Sheng [Xi'an Jiaotong Univ., Xi'an (China)]

    2012-09-15

    Intelligent fault diagnosis benefits from efficient feature selection. Neighborhood rough sets are effective in feature selection. However, determining the neighborhood value accurately remains a challenge. The wrapper feature selection algorithm is designed by combining the kernel method and neighborhood rough sets to self-adaptively select sensitive features. The combination effectively solves the shortcomings in selecting the neighborhood value in the previous application process. The statistical features of the time and frequency domains are used to describe the characteristics of the rolling bearing to make the intelligent fault diagnosis approach work. Three classification algorithms, namely, classification and regression tree (CART), commercial version 4.5 (C4.5), and radial basis function support vector machines (RBFSVM), are used to test UCI datasets and 10 fault datasets of rolling bearings. The results indicate that the presented diagnostic approach could effectively select the sensitive fault features and simultaneously identify the type and degree of the fault.

  14. Intelligent fault diagnosis of rolling bearing based on kernel neighborhood rough sets and statistical features

    International Nuclear Information System (INIS)

    Zhu, Xiao Ran; Zhang, You Yun; Zhu, Yong Sheng

    2012-01-01

    Intelligent fault diagnosis benefits from efficient feature selection. Neighborhood rough sets are effective in feature selection. However, determining the neighborhood value accurately remains a challenge. The wrapper feature selection algorithm is designed by combining the kernel method and neighborhood rough sets to self-adaptively select sensitive features. The combination effectively solves the shortcomings in selecting the neighborhood value in the previous application process. The statistical features of the time and frequency domains are used to describe the characteristics of the rolling bearing to make the intelligent fault diagnosis approach work. Three classification algorithms, namely, classification and regression tree (CART), commercial version 4.5 (C4.5), and radial basis function support vector machines (RBFSVM), are used to test UCI datasets and 10 fault datasets of rolling bearings. The results indicate that the presented diagnostic approach could effectively select the sensitive fault features and simultaneously identify the type and degree of the fault.
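
    The neighborhood rough set notion underlying this feature selection can be sketched as a dependency score: an object is consistent when every neighbour within radius δ shares its label, and a candidate feature subset is better when it makes more objects consistent (a greedy wrapper then adds the feature that raises this score most). The data and δ below are illustrative:

        import numpy as np

        def neighborhood_dependency(X, y, delta):
            # Fraction of objects whose delta-neighbourhood is label-pure.
            consistent = 0
            for i in range(len(X)):
                dist = np.linalg.norm(X - X[i], axis=1)
                consistent += int(np.all(y[dist <= delta] == y[i]))
            return consistent / len(X)

        X = np.array([[0.10, 0.20], [0.15, 0.22],
                      [0.90, 0.80], [0.88, 0.85]])
        y = np.array([0, 0, 1, 1])
        print(neighborhood_dependency(X, y, delta=0.2))  # 1.0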

  15. The prefabricated building risk decision research of DM technology on the basis of Rough Set

    Science.gov (United States)

    Guo, Z. L.; Zhang, W. B.; Ma, L. H.

    2017-08-01

    With resource crises and increasingly serious pollution, green building has been strongly advocated by most countries and has become a new building style in the construction field. Compared with traditional building, prefabricated building has irreplaceable advantages but is influenced by many uncertainties. So far, most scholars worldwide have approached it through qualitative research. This paper expounds the significance of prefabricated building and, building on existing research methods combined with rough set theory, redefines the factors that affect prefabricated building risk. It quantifies the risk factors and establishes an expert knowledge base through assessment, then reduces the redundant attributes and attribute values among the risk factors to form the simplest decision rules. These rules, derived with the DM (data mining) technology of rough set theory, provide prefabricated building with a controllable new decision-making method.

  16. Research of Strategic Alliance Stable Decision-making Model Based on Rough Set and DEA

    OpenAIRE

    Zhang Yi

    2013-01-01

    This article first applies rough set theory to the stability evaluation system of strategic alliances, using data analysis methods for reduction and eliminating redundant indexes. Six enterprises are selected as decision-making units, with four input and two output indexes, and a DEA model is used for the calculation. The reasons for the poor efficiency of some decision-making units are analysed, and the direction and magnitude of improvement are identified, providing a reference for alliance stability.

  17. Optimisation of window settings for traditional and noise-optimised virtual monoenergetic imaging in dual-energy computed tomography pulmonary angiography

    International Nuclear Information System (INIS)

    D'Angelo, Tommaso; "G. Martino" University Hospital, Messina; Bucher, Andreas M.; Lenga, Lukas; Arendt, Christophe T.; Peterke, Julia L.; Martin, Simon S.; Leithner, Doris; Vogl, Thomas J.; Wichmann, Julian L.; Caruso, Damiano; University Hospital, Latina; Mazziotti, Silvio; Blandino, Alfredo; Ascenti, Giorgio; University Hospital, Messina; Othman, Ahmed E.

    2018-01-01

    To define optimal window settings for displaying virtual monoenergetic images (VMI) of dual-energy CT pulmonary angiography (DE-CTPA), forty-five patients who underwent clinically-indicated third-generation dual-source DE-CTPA were retrospectively evaluated. Standard linearly-blended (M0.6), 70-keV traditional VMI (M70), and 40-keV noise-optimised VMI (M40+) reconstructions were analysed. For the M70 and M40+ datasets, the subjectively best window setting (width and level, B-W/L) was independently determined by two observers and subsequently related to pulmonary artery attenuation to calculate separately optimised values (O-W/L) using linear regression. Subjective image quality (IQ) under the different W/L settings was assessed by two additional readers. Repeated-measures analysis of variance was performed to compare W/L settings and IQ indices between M0.6, M70, and M40+. B-W/L and O-W/L were 460/140 and 450/140 for M70, and 1100/380 and 1070/380 for M40+, respectively, differing from standard DE-CTPA W/L settings (450/100). The highest subjective scores were observed for M40+ regarding vascular contrast, embolism demarcation, and overall IQ (all p<0.001). Application of O-W/L settings is beneficial for optimising the subjective IQ of VMI reconstructions of DE-CTPA. A width slightly less than twice the pulmonary trunk attenuation and a level approximately equal to the overall pulmonary vessel attenuation are recommended. (orig.)
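
    The regression step lends itself to a few lines of code. In the sketch below the attenuation and window values are invented; only the least-squares fit of observer-chosen width/level against pulmonary-artery attenuation mirrors the method described.

        import numpy as np

        attenuation = np.array([420., 480., 510., 560., 600.])   # HU, per patient
        best_width  = np.array([950., 1020., 1100., 1180., 1250.])
        best_level  = np.array([330., 360., 380., 410., 430.])

        # Fit W = a*HU + b (and likewise for the level) by least squares.
        w_fit = np.polyfit(attenuation, best_width, 1)
        l_fit = np.polyfit(attenuation, best_level, 1)

        mean_hu = attenuation.mean()
        print('optimised width at mean HU:', round(np.polyval(w_fit, mean_hu)))
        print('optimised level at mean HU:', round(np.polyval(l_fit, mean_hu)))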

  18. Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2012-01-01

    Full Text Available The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. Although some achievements based on the divide and conquer method have been made in rough set theory, systematic methods for knowledge reduction based on divide and conquer are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under the equivalence relation and under the tolerance relation, are presented respectively. After that, a systematic approach, called the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are presented. Experimental evaluations are performed on UCI data sets and KDDCUP99 data sets. The experimental results illustrate that the proposed approaches can process large data sets efficiently with a good recognition rate, compared with KNN, SVM, C4.5, Naive Bayes, and CART.

  19. Extraction of design rules from multi-objective design exploration (MODE) using rough set theory

    International Nuclear Information System (INIS)

    Obayashi, Shigeru

    2011-01-01

    Multi-objective design exploration (MODE) and its application for design rule extraction are presented. MODE reveals the structure of design space from the trade-off information. The self-organizing map (SOM) is incorporated into MODE as a visual data-mining tool for design space. SOM divides the design space into clusters with specific design features. The sufficient conditions for belonging to a cluster of interest are extracted using rough set theory. The resulting MODE was applied to the multidisciplinary wing design problem, which revealed a cluster of good designs, and we extracted the design rules of such designs successfully.

  20. Rough set soft computing cancer classification and network: one stone, two birds.

    Science.gov (United States)

    Zhang, Yue

    2010-07-15

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article.

  1. Crop Evaluation System Optimization: Attribute Weights Determination Based on Rough Sets Theory

    Directory of Open Access Journals (Sweden)

    Ruihong Wang

    2017-01-01

    Full Text Available The present study is mainly a continuation of our previous study on the development of a crop evaluation system based on grey relational analysis. In that system, attribute weight determination directly affects the evaluation result, and attribute weights are usually ascertained from the decision-maker's experience. In this paper, we utilize rough set theory to calculate attribute significance and then combine it with the weights given by the decision-maker. This method comprehensively considers both subjective experience and the objective situation, and thus yields more suitable results. Finally, based on this method, we improve the system using ASP.NET technology.
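
    A minimal sketch of the attribute-significance computation follows, on a made-up discretised table. The significance of an attribute is the drop in the rough-set dependency degree gamma when that attribute is removed; the 50/50 mix of objective significance and subjective weight at the end is an assumption for illustration, not the paper's exact formula.

        from collections import defaultdict

        rows = [  # hypothetical crop records: (a0, a1, a2, decision)
            (0, 1, 0, 'good'), (0, 1, 1, 'good'), (1, 0, 0, 'poor'),
            (1, 0, 1, 'poor'), (0, 0, 1, 'good'), (1, 1, 0, 'poor'),
        ]
        ATTRS = [0, 1, 2]

        def gamma(attrs):
            """Dependency of the decision on the given condition attributes:
            the share of objects in decision-pure indiscernibility blocks."""
            blocks = defaultdict(list)
            for r in rows:
                blocks[tuple(r[a] for a in attrs)].append(r[-1])
            pure = sum(len(b) for b in blocks.values() if len(set(b)) == 1)
            return pure / len(rows)

        full = gamma(ATTRS)
        sig = {a: full - gamma([b for b in ATTRS if b != a]) for a in ATTRS}

        subjective = {0: 0.5, 1: 0.3, 2: 0.2}      # decision-maker's weights
        total = sum(sig.values()) or 1.0
        combined = {a: 0.5 * subjective[a] + 0.5 * sig[a] / total for a in ATTRS}
        print('significance:', sig)
        print('combined weights:', combined)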

  2. Prediction of protein interaction hot spots using rough set-based multiple criteria linear programming.

    Science.gov (United States)

    Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong

    2011-01-21

    Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand their principles. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time-consuming and expensive. To complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough set theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked on a dataset of 904 alanine-mutated residues, and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change in accessible surface area, the percentage change in accessible surface area, residue size, and atomic contacts) are critical in predicting hot spots. Furthermore, by analysing the distribution of amino acids we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Incremental Knowledge Acquisition for WSD: A Rough Set and IL based Method

    Directory of Open Access Journals (Sweden)

    Xu Huang

    2015-07-01

    Full Text Available Word sense disambiguation (WSD) is one of the trickier tasks in natural language processing (NLP), as it must take into full account all the complexities of language. Because WSD involves discovering semantic structures in unstructured text, automatic knowledge acquisition for word senses is profoundly difficult. To acquire knowledge about Chinese multi-sense verbs, we introduce an incremental machine learning method which combines the rough set method and instance-based learning. First, the context of a multi-sense verb is extracted into a table, and its sense is annotated by a skilled human and stored in the same table. In this way, a decision table is formed, and rules can then be extracted within the framework of attribute-value reduction in rough sets. Instances not entailed by any rule are treated as outliers. When new instances are added to the decision table, only the newly added instances and the outliers need to be learned further; thus incremental learning is achieved. Experiments show that the scale of the decision table can be reduced dramatically by this method without performance decline.
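
    The incremental scheme can be caricatured in a few lines. The toy sketch below replaces the paper's rough-set attribute-value reduction with exact-match rules over context tuples, but it keeps the key idea: rules cover consistent contexts, conflicting instances are held back as outliers, and when new instances arrive only they and the outliers are re-learned.

        def learn(instances):
            """Rules map a context tuple to a sense only when the mapping is
            consistent; conflicting instances stay in the outlier pool."""
            senses = {}
            for ctx, sense in instances:
                senses.setdefault(ctx, set()).add(sense)
            rules = {c: next(iter(s)) for c, s in senses.items() if len(s) == 1}
            outliers = [(c, s) for c, s in instances if c not in rules]
            return rules, outliers

        def update(rules, outliers, new_instances):
            """Incremental step: re-learn only new instances and outliers."""
            new_rules, new_outliers = learn(outliers + new_instances)
            for ctx in list(new_rules):
                if ctx in rules and rules[ctx] != new_rules[ctx]:
                    # Conflict with an established rule: demote to outliers.
                    new_outliers.append((ctx, new_rules.pop(ctx)))
                    del rules[ctx]
            rules.update(new_rules)
            return rules, new_outliers

        data = [(('bank', 'money'), 'financial'), (('bank', 'river'), 'shore')]
        rules, outliers = learn(data)
        rules, outliers = update(rules, outliers, [(('bank', 'loan'), 'financial')])
        print(rules)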

  4. A rough set-based association rule approach implemented on a brand trust evaluation model

    Science.gov (United States)

    Liao, Shu-Hsien; Chen, Yin-Ju

    2017-09-01

    In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. The brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, rough set-based association rule induction, implemented on a brand trust evaluation model. In addition, it offers a way to deal with data uncertainty when analysing ratio-scale data, while creating predictive if-then rules that generalise data values to the retail region. As such, this study uses the analysis of algorithms to investigate brand trust recall for alcoholic beverages. Finally, a discussion and conclusions are presented, together with managerial implications.

  5. Prediction of financial crises by means of rough sets and decision trees

    Directory of Open Access Journals (Sweden)

    Zuleyka Díaz-Martínez

    2011-03-01

    Full Text Available This paper further investigates the factors behind financial crises. Using a large sample of countries for the period 1981 to 1999, it applies two methods from Artificial Intelligence (Rough Set theory and the C4.5 algorithm) to analyse the role of a set of macroeconomic and financial variables, both quantitative and qualitative, in explaining banking crises. These methods do not require the variables or data to satisfy any assumptions, whereas the statistical methods traditionally employed require the explanatory variables to satisfy statistical assumptions that are quite difficult to meet, which complicates the analysis. We obtained good results based on the classification accuracies (80% of correctly classified countries from an independent sample), which proves the suitability of both methods.

  6. Optimising molecular diagnostic capacity for effective control of tuberculosis in high-burden settings.

    Science.gov (United States)

    Sabiiti, W; Mtafya, B; Kuchaka, D; Azam, K; Viegas, S; Mdolo, A; Farmer, E C W; Khonga, M; Evangelopoulos, D; Honeyborne, I; Rachow, A; Heinrich, N; Ntinginya, N E; Bhatt, N; Davies, G R; Jani, I V; McHugh, T D; Kibiki, G; Hoelscher, M; Gillespie, S H

    2016-08-01

    The World Health Organization's 2035 vision is to reduce tuberculosis (TB) associated mortality by 95%. While low-burden, well-equipped industrialised economies can expect to see this goal achieved, it is challenging in the low- and middle-income countries that bear the highest burden of TB. Inadequate diagnosis leads to inappropriate treatment and poor clinical outcomes. The roll-out of the Xpert(®) MTB/RIF assay has demonstrated that molecular diagnostics can produce rapid diagnosis and treatment initiation. Strong molecular services are still limited to regional or national centres. The delay in implementation is due partly to resources, and partly to the suggestion that such techniques are too challenging for widespread implementation. We have successfully implemented a molecular tool for rapid monitoring of patient treatment response to anti-tuberculosis treatment in three high TB burden countries in Africa. We discuss here the challenges facing TB diagnosis and treatment monitoring, and draw from our experience in establishing molecular treatment monitoring platforms to provide practical insights into successful optimisation of molecular diagnostic capacity in resource-constrained, high TB burden settings. We recommend a holistic health system-wide approach for molecular diagnostic capacity development, addressing human resource training, institutional capacity development, streamlined procurement systems, and engagement with the public, policy makers and implementers of TB control programmes.

  7. Candidate Smoke Region Segmentation of Fire Video Based on Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

    Full Text Available Candidate smoke region segmentation is the key link in smoke video detection; an effective and prompt method of candidate smoke region segmentation plays a significant role in a smoke recognition system. However, interference from heavy fog and smoke-colored moving objects greatly degrades recognition accuracy. In this paper, a novel method of candidate smoke region segmentation based on rough set theory is presented. First, Kalman filtering is used to update the video background in order to exclude the interference of static smoke-colored objects, such as blue sky. Second, in RGB color space, smoke regions are segmented by defining the upper approximation, lower approximation, and roughness of the smoke-color distribution. Finally, in HSV color space, small smoke regions are merged using the definition of an equivalence relation, so as to distinguish smoke images from heavy fog images in terms of the variation of the V component from the center to the edge of the smoke region. The experimental results on smoke region segmentation demonstrate the effectiveness and usefulness of the proposed scheme.
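
    The approximation step can be sketched directly: pixels whose colour certainly matches a smoke model form the lower approximation, pixels that possibly match form the upper approximation, and their difference is the boundary region left for further evidence (motion, HSV merging). The grayness measure and both thresholds below are invented stand-ins for the paper's smoke-colour distribution model.

        import numpy as np

        rng = np.random.default_rng(1)
        frame = rng.integers(0, 256, size=(4, 4, 3))   # stand-in RGB frame

        r = frame[..., 0].astype(int)
        g = frame[..., 1].astype(int)
        b = frame[..., 2].astype(int)
        grayness = np.abs(r - g) + np.abs(g - b) + np.abs(r - b)  # smoke is low-saturation

        lower = grayness < 30       # certainly smoke-coloured
        upper = grayness < 90       # possibly smoke-coloured
        boundary = upper & ~lower   # undecided: needs motion/HSV evidence

        print('lower approximation pixels:', int(lower.sum()))
        print('boundary region pixels:', int(boundary.sum()))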

  8. Processing and filtrating of driver fatigue characteristic parameters based on rough set

    Science.gov (United States)

    Ye, Wenwu; Zhao, Xuyang

    2018-05-01

    With the rapid development of the economy, people have become increasingly affluent, and cars have become a common means of transportation in daily life. However, the problem of traffic safety is becoming more and more serious, and fatigued driving is one of the main causes of traffic accidents. Therefore, studying the detection of driver fatigue is of great importance for improving traffic safety. In determining whether a driver is fatigued, characteristic quantities related to the steering-wheel angle and to the driver's pulse are important indicators. Fuzzy c-means clustering is used to discretize these indexes. Because the characteristic parameters are numerous and heterogeneous, a rough set is used to filter them. Finally, this paper identifies the characteristics most highly correlated with driver fatigue. It is shown that the selected characteristics are of great significance for the evaluation of driver fatigue.
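
    As an illustration of the discretisation step, here is a small self-contained fuzzy c-means routine for one continuous indicator (say, a steering-angle statistic); the data, cluster count, and fuzzifier are illustrative rather than taken from the paper.

        import numpy as np

        def fcm_1d(x, c=3, m=2.0, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            u = rng.random((len(x), c))
            u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
            for _ in range(iters):
                um = u ** m
                centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
                d = np.abs(x[:, None] - centers[None, :]) + 1e-12
                u = 1.0 / d ** (2.0 / (m - 1.0))       # standard FCM update
                u /= u.sum(axis=1, keepdims=True)
            return centers, u

        rng = np.random.default_rng(2)
        x = np.concatenate([rng.normal(mu, 0.3, 40) for mu in (0.0, 2.0, 4.0)])
        centers, u = fcm_1d(x)
        labels = u.argmax(axis=1)      # discretised symbol for each sample
        print('cluster centres:', np.sort(centers))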

  9. Novel Approach to Tourism Analysis with Multiple Outcome Capability Using Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Chun-Che Huang

    2016-12-01

    Full Text Available Exploring the relationship between the characteristics and decision-making outcomes of tourists is critical to keeping a tourism business competitive. In the investigation of tourism development, most existing studies lack a systematic approach to analysing qualitative data. Although the traditional Rough Set (RS) based approach is an excellent classification method for qualitative modeling, it cannot deal with the case of multiple outcomes, which is a common situation in tourism. Consequently, Multiple Outcome Reduct Generation (MORG) and Multiple Outcome Rule Extraction (MORE) approaches based on RS are proposed to handle multiple outcomes. This study proposes a ranking-based approach to induce meaningful reducts and ensure the strength and robustness of decision rules, which helps decision makers understand tourists' characteristics in a tourism case.

  10. Rough Set Theory based prognostication of life expectancy for terminally ill patients.

    Science.gov (United States)

    Gil-Herrera, Eleazar; Yalcin, Ali; Tsalatsanis, Athanasios; Barnes, Laura E; Djulbegovic, Benjamin

    2011-01-01

    We present a novel knowledge discovery methodology that relies on Rough Set Theory to predict the life expectancy of terminally ill patients in an effort to improve the hospice referral process. Life expectancy prognostication is particularly valuable for terminally ill patients since it enables them and their families to initiate end-of-life discussions and choose the most desired management strategy for the remainder of their lives. We utilize retrospective data from 9105 patients to demonstrate the design and implementation details of a series of classifiers developed to identify potential hospice candidates. Preliminary results confirm the efficacy of the proposed methodology. We envision our work as a part of a comprehensive decision support system designed to assist terminally ill patients in making end-of-life care decisions.

  11. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Rahman Ali

    2015-07-01

    Full Text Available Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or the body’s resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve the diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) restricted to one type of diabetes; (2) lack of understandability and explanatory power of the techniques and decisions; (3) limited either to prediction or to management over structured contents; and (4) lack of competence for the dimensionality and vagueness of patient data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, are used. First, the data are transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians in managing diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  12. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.

    Science.gov (United States)

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-07-03

    Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve the diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) restricted to one type of diabetes; (2) lack of understandability and explanatory power of the techniques and decisions; (3) limited either to prediction or to management over structured contents; and (4) lack of competence for the dimensionality and vagueness of patient data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, are used. First, the data are transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians in managing diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  13. Knowledge Mining from Clinical Datasets Using Rough Sets and Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Kindie Biredagn Nahato

    2015-01-01

    Full Text Available The availability of clinical datasets and knowledge mining methodologies encourages researchers to pursue research in extracting knowledge from clinical datasets. Different data mining techniques have been used for mining rules, and mathematical models have been developed to assist the clinician in decision making. The objective of this research is to build a classifier that predicts the presence or absence of a disease by learning from a minimal set of attributes extracted from the clinical dataset. In this work, the rough set indiscernibility relation method combined with a backpropagation neural network (RS-BPNN) is used. The work has two stages. The first stage is the handling of missing values to obtain a smooth data set and the selection of appropriate attributes from the clinical dataset by the indiscernibility relation method. The second stage is classification using a backpropagation neural network on the selected reducts of the dataset. The classifier has been tested with the hepatitis, Wisconsin breast cancer, and Statlog heart disease datasets obtained from the University of California at Irvine (UCI) machine learning repository. The accuracy obtained from the proposed method is 97.3%, 98.6%, and 90.4% for hepatitis, breast cancer, and heart disease, respectively. The proposed system provides an effective classification model for clinical datasets.
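
    A condensed sketch of the two-stage pipeline on synthetic data: a minimal consistent attribute subset (reduct) is found via an indiscernibility check, and a backpropagation network is trained on the retained columns. scikit-learn's MLPClassifier stands in for the paper's network, and the data and labels are invented.

        from itertools import combinations
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(120, 4))      # discretised attributes
        y = (X[:, 0] + X[:, 2] > 2).astype(int)    # depends on attrs 0 and 2

        def consistent(attrs):
            """True if equal attribute tuples never carry different labels."""
            seen = {}
            for row, label in zip(X[:, attrs], y):
                if seen.setdefault(tuple(row), label) != label:
                    return False
            return True

        reduct = next(list(c)
                      for k in range(1, X.shape[1] + 1)
                      for c in combinations(range(X.shape[1]), k)
                      if consistent(list(c)))
        print('reduct:', reduct)                   # typically [0, 2]

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        clf.fit(X[:, reduct], y)
        print('train accuracy:', clf.score(X[:, reduct], y))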

  14. Study on intelligence fault diagnosis method for nuclear power plant equipment based on rough set and fuzzy neural network

    International Nuclear Information System (INIS)

    Liu Yongkuo; Xia Hong; Xie Chunli; Chen Zhihui; Chen Hongxia

    2007-01-01

    Rough set theory and fuzzy neural networks are combined to take full advantage of both. Based on the knowledge reduction technology of the rough set method, and by drawing simple rules from a large amount of initial data, a fuzzy neural network was set up with a better topological structure, improved learning speed, accurate judgement, strong fault tolerance, and greater practicality. To test the validity of the method, the inverted U-tube break accident of the steam generator and other cases are used as examples, and many simulation experiments are performed. The test results show that it is feasible to apply this fault intelligence diagnosis method based on rough sets and fuzzy neural networks to nuclear power plant equipment; the method is simple and convenient, with a small amount of calculation and reliable results. (authors)

  15. Noninvasive evaluation of mental stress using a refined rough set technique based on biomedical signals.

    Science.gov (United States)

    Liu, Tung-Kuan; Chen, Yeh-Peng; Hou, Zone-Yuan; Wang, Chao-Chih; Chou, Jyh-Horng

    2014-06-01

    Evaluating and treating stress can substantially benefit people with health problems. Currently, mental stress is evaluated using medical questionnaires, but the accuracy of this method is questionable because of variations caused by factors such as cultural differences and individual subjectivity. Measuring biomedical signals is an effective way to estimate mental stress that overcomes this problem; however, the relationship between levels of mental stress and biomedical signals remains poorly understood. A refined rough set algorithm, combining rough set theory with a hybrid Taguchi-genetic algorithm (RS-HTGA), is proposed to determine this relationship. Two parameters were used to evaluate the performance of the proposed RS-HTGA method, and a dataset obtained from a practice clinic comprising 362 cases (196 male, 166 female) was adopted for the evaluation. The empirical results indicate that the proposed method can achieve acceptable accuracy in medical practice, and it was successfully used to identify the relationship between mental stress levels and biomedical signals. In addition, a comparison between the RS-HTGA and a support vector machine (SVM) method indicated that both yield good results: total averages for sensitivity, specificity, and precision were greater than 96% for both algorithms. However, a substantial difference in discrimination existed for people with Phase 0 stress, where the SVM algorithm achieved 89% and the RS-HTGA 96%; the RS-HTGA is therefore superior to the SVM algorithm. The kappa test results for both algorithms were greater than 0.936, indicating high accuracy and consistency. The areas under the receiver operating characteristic curve for both the RS-HTGA and the SVM method were greater than 0.77.

  16. Engineering Application Way of Faults Knowledge Discovery Based on Rough Set Theory

    International Nuclear Information System (INIS)

    Zhao Rongzhen; Deng Linfeng; Li Chao

    2011-01-01

    To address the knowledge acquisition puzzle of intelligent decision-making technology in the mechanical industry, the use of Rough Set Theory (RST) as a tool to solve the puzzle was researched, and the way to realize knowledge discovery in engineering applications is explored. A case extracting knowledge rules from a concise data table reveals some important information: knowledge discovery for mechanical fault diagnosis is a complicated system engineering project. The first and most important task is to preserve the fault knowledge in a data table; the data must be derived from the plant site and should be as concise as possible. Only on the basis of fault knowledge data obtained in this way can the methods and algorithms of RST process the data and extract knowledge rules from them. The conclusion is that fault knowledge discovery by this route is a bottom-up process, but developing advanced fault diagnosis technology in this way is a large-scale, long-term knowledge engineering project, in which every step should be designed carefully according to the tool's demands. This is the basic guarantee that the knowledge rules obtained will have engineering application value and that the studies will have scientific significance. Accordingly, a general framework is designed for engineering applications along the route of developing fault knowledge discovery technology.

  17. Rough Set Theory Based Fuzzy TOPSIS on Serious Game Design Evaluation Framework

    Directory of Open Access Journals (Sweden)

    Chung-Ho Su

    2013-01-01

    Full Text Available This study presents a hybrid methodology for serious game design evaluation in which the evaluation criteria are based on meaningful learning, ARCS motivation, cognitive load, and flow theory (MACF), selected by rough set theory (RST) and experts. The purpose of this study is to develop an evaluation model with RST-based fuzzy Delphi-AHP-TOPSIS for MACF characteristics. The fuzzy Delphi method is utilized to select the evaluation criteria, fuzzy AHP is used to analyse the criteria structure and determine the evaluation weight of each criterion, and fuzzy TOPSIS is applied to determine the ranking of the evaluations. A real case evaluating the MACF criteria design of four serious games is also used, and both the practice and the evaluation of the case are explained. The results show that playfulness (C24), skills (C22), attention (C11), and personalization (C35) are the four most important criteria in the MACF selection process, and the evaluation results of the case study indicate that Game 1 has the best overall score (Game 1 > Game 3 > Game 2 > Game 4). Finally, the proposed framework is intended to evaluate the effectiveness and feasibility of the evaluation model and to provide design criteria for multimedia game design educators.
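
    The final ranking stage can be pictured with a plain (crisp) TOPSIS computation; the criteria, scores, and weights below are simplified placeholders for the study's fuzzy values, and all criteria are treated as benefit-type.

        import numpy as np

        scores = np.array([[0.8, 0.7, 0.9],    # Game 1 on three criteria
                           [0.6, 0.5, 0.7],    # Game 2
                           [0.7, 0.8, 0.6],    # Game 3
                           [0.4, 0.6, 0.5]])   # Game 4
        w = np.array([0.5, 0.3, 0.2])          # illustrative criterion weights

        V = scores / np.linalg.norm(scores, axis=0) * w   # weighted, normalised
        ideal, anti = V.max(axis=0), V.min(axis=0)
        d_best = np.linalg.norm(V - ideal, axis=1)
        d_worst = np.linalg.norm(V - anti, axis=1)
        closeness = d_worst / (d_best + d_worst)
        print('ranking (best first):', np.argsort(-closeness) + 1)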

  18. The use of principal component, discriminate and rough sets analysis methods of radiological data

    International Nuclear Information System (INIS)

    Seddeek, M.K.; Kozae, A.M.; Sharshar, T.; Badran, H.M.

    2006-01-01

    In this work, computational methods of finding clusters of multivariate data points were explored using principal component analysis (PCA), discriminant analysis (DA) and rough set analysis (RSA). The variables were the concentrations of four natural isotopes and the texture characteristics of 100 sand samples from the coast of North Sinai, Egypt; beach and dune sands are the two types of samples included. These methods were used to reduce the dimensionality of the multivariate data and as classification and clustering methods. The results showed that the classification of sands in the North Sinai environment depends upon the radioactivity content of the naturally occurring radioactive materials and not upon the characteristics of the sand. The application of DA enables the creation of a classification rule for sand type, and it revealed that samples with highly negative values of the first score have the highest contamination of black sand. PCA revealed that the radioactivity concentrations alone can be considered to predict the classification of other samples. The results of RSA showed that only one of the concentrations of 238U, 226Ra and 232Th, together with the 40K content, can characterize the clusters along with the characteristics of the sand. Both PCA and RSA lead to the following conclusion: 238U, 226Ra and 232Th behave similarly. RSA revealed that one or two of them may not be considered without affecting the body of knowledge.

  19. A Rough Set-Based Model of HIV-1 Reverse Transcriptase Resistome

    Directory of Open Access Journals (Sweden)

    Marcin Kierczak

    2009-10-01

    Full Text Available Reverse transcriptase (RT) is a viral enzyme crucial for HIV-1 replication. Currently, 12 drugs are targeted against the RT. The low fidelity of RT-mediated transcription leads to the quick accumulation of drug-resistance mutations, and the sequence-resistance relationship remains only partially understood. Using publicly available data collected from over 15 years of HIV proteome research, we have created a general and predictive rule-based model of HIV-1 resistance to eight RT inhibitors. Our rough set-based model considers changes in the physicochemical properties of a mutated sequence as compared to the wild-type strain. Thanks to the application of the Monte Carlo feature selection method, the model takes into account only the properties that significantly contribute to the resistance phenomenon. The obtained results show that drug resistance is determined in a more complex way than previously believed. We confirmed the importance of many resistance-associated sites, found some sites to be less relevant than formerly postulated and, more importantly, identified several previously neglected sites as potentially relevant. By mapping some of the newly discovered sites onto the 3D structure of the RT, we were able to suggest possible molecular mechanisms of drug resistance. Importantly, our model has the ability to generalize predictions to previously unseen cases. The study is an example of how computational biology methods can increase our understanding of the HIV-1 resistome.

  20. Hyperspectral band selection based on consistency-measure of neighborhood rough set theory

    International Nuclear Information System (INIS)

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen

    2016-01-01

    Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on consistency-measure of neighborhood rough set theory (CMNRS) was proposed to select informative bands from hyperspectral images. A decision-making information system was established by the reflection spectrum of soybeans’ hyperspectral data between 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region, but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy. (paper)
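
    The search loop can be sketched as follows: greedily add the band that most improves a neighbourhood consistency score, then prune any band whose removal does not hurt that score. The consistency function here is a simplified stand-in for the paper's measure, and the spectra are synthetic.

        import numpy as np

        def consistency(X, y, bands, delta=0.2):
            """Average label purity of each sample's delta-neighbourhood
            over the chosen bands (Chebyshev distance)."""
            Xs = X[:, bands]
            score = 0.0
            for i in range(len(Xs)):
                d = np.abs(Xs - Xs[i]).max(axis=1)
                score += np.mean(y[d <= delta] == y[i])
            return score / len(Xs)

        def select_bands(X, y):
            bands, rest, best = [], list(range(X.shape[1])), -1.0
            while rest:                      # forward greedy search
                s, b = max((consistency(X, y, bands + [b]), b) for b in rest)
                if s <= best:
                    break
                best, bands = s, bands + [b]
                rest.remove(b)
            for b in list(bands):            # post-pruning against over-fitting
                if len(bands) > 1 and consistency(X, y, [c for c in bands if c != b]) >= best:
                    bands.remove(b)
            return bands

        rng = np.random.default_rng(3)
        X = rng.random((80, 10))             # 10 mock spectral bands
        y = (X[:, 4] + X[:, 7] > 1.0).astype(int)
        print('selected bands:', select_bands(X, y))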

  1. Evaluating the Utility of Web-Based Consumer Support Tools Using Rough Sets

    Science.gov (United States)

    Maciag, Timothy; Hepting, Daryl H.; Slezak, Dominik; Hilderman, Robert J.

    On the Web, many popular e-commerce sites provide consumers with decision support tools to assist them in their commerce-related decision-making, and many consumers rank the utility of these tools quite highly. Data obtained from web usage mining analyses, which may provide knowledge about a user's online experiences, could help indicate the utility of these tools. This type of analysis could provide insight into whether the provided tools adequately assist consumers in conducting their online shopping activities or whether new or additional enhancements need consideration. Although some research in this regard has been described in previous literature, there is still much that can be done. The authors of this paper hypothesize that a measurement of consumer decision accuracy, i.e., a measurement of how well the tools reflect consumer preferences, could help indicate the utility of these tools. This paper describes a procedure developed towards this goal using elements of rough set theory. The authors evaluated the procedure using two support tools, one based on a tool developed by the US-EPA and the other, called cogito, developed by one of the authors. Results from the evaluation provided interesting insights into the utility of both support tools: although the cogito tool obtained slightly higher decision accuracy, both tools could benefit from additional enhancements. Details of the procedure and the results obtained from the evaluation are provided, and opportunities for future work are also discussed.

  2. A Novel Method for Predicting Anisakid Nematode Infection of Atlantic Cod Using Rough Set Theory.

    Science.gov (United States)

    Wąsikowska, Barbara; Sobecka, Ewa; Bielat, Iwona; Legierko, Monika; Więcaszek, Beata

    2018-03-01

    Atlantic cod (Gadus morhua L.) is one of the most important fish species in the fisheries industries of many countries; however, these fish are often infected with parasites. The detection of pathogenic larval nematodes is usually performed in fish processing facilities by visual examination using candling or by digesting muscles in artificial digestive juices, but these methods are both time- and labor-intensive. This article presents an innovative approach to the analysis of cod parasites from both the Atlantic and Baltic Sea areas through the application of rough set theory, one of the methods of artificial intelligence, for the prediction of food safety in a food production chain. The parasitological examinations focused on nematode larvae pathogenic to humans, e.g., Anisakis simplex, Contracaecum osculatum, and Pseudoterranova decipiens. The analysis allowed the identification of protocols with which preliminary estimates of the quantity and quality of parasites found in cod catches can be made before detailed analyses are performed. The results indicate that the method can be an effective analytical tool for these types of data. To achieve this goal, a database is needed that contains patterns of parasite infection intensity and the condition of commercial fish species in different localities across their distributions.

  3. Analysis of Roadway Traffic Accidents Based on Rough Sets and Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Xiaoxia Xiong

    2018-02-01

    Full Text Available The paper integrates Rough Sets (RS) and Bayesian Networks (BN) for roadway traffic accident analysis. RS reduction of attributes is first employed to generate the key set of attributes affecting accident outcomes, which are then fed into a BN structure as nodes for BN construction and accident outcome classification. Such an RS-based BN framework combines the advantages of RS in knowledge reduction capability with those of BN in describing interrelationships among different attributes. The framework is demonstrated using the 100-car naturalistic driving data from the Virginia Tech Transportation Institute to predict accident type. Comparative evaluation with the baseline BNs shows that the RS-based BNs generally have higher prediction accuracy and lower network complexity, with comparable prediction coverage and area under the receiver operating characteristic curve, proving that the proposed RS-based BN overall outperforms BNs with or without traditional feature selection approaches. The proposed RS-based BN indicates that the most significant attributes affecting accident type include pre-crash manoeuvre, driver's attention from the forward roadway to the centre mirror, number of secondary tasks undertaken, traffic density, and relation to junction; most of these describe pre-crash driver states and behaviours that have not been extensively researched in the literature and could give further insight into the nature of traffic accidents.

  4. In the Context of Multiple Intelligences Theory, Intelligent Data Analysis of Learning Styles Was Based on Rough Set Theory

    Science.gov (United States)

    Narli, Serkan; Ozgen, Kemal; Alkan, Huseyin

    2011-01-01

    The present study aims to identify the relationship between individuals' multiple intelligence areas and their learning styles with mathematical clarity, using the concept of rough sets, which is employed in areas such as artificial intelligence, data reduction, discovery of dependencies, prediction of data significance, and generating decision…

  5. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    Science.gov (United States)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method and make automated, robust delineation of image regions of interest (ROI) a reality on the production line, a multi-source CCD imaging based fuzzy-rough set method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system extends to surface quality inspection for strips, billets, slabs, and so on. In this work we exploit the complementary advantages of two common machine vision (MV) systems: line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging). By establishing a fuzzy-rough set model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively; the model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated through the RFC region competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough set strategy is attempted for CC-slab surface defect inspection, allowing automatic AI algorithms and powerful ROI delineation strategies to be applied to the MV inspection field.

  6. Rough Sets as a Knowledge Discovery and Classification Tool for the Diagnosis of Students with Learning Disabilities

    OpenAIRE

    Yu-Chi Lin; Tung-Kuang Wu; Shian-Chang Huang; Ying-Ru Meng; Wen-Yau Liang

    2011-01-01

    Due to the implicit characteristics of learning disabilities (LDs), the diagnosis of students with learning disabilities has long been a difficult issue. Artificial intelligence techniques like the artificial neural network (ANN) and support vector machine (SVM) have been applied to the LD diagnosis problem with satisfactory outcomes. However, special education teachers or professionals tend to be skeptical of these kinds of black-box predictors. In this study, we adopt the rough set theory (RST)...

  7. A new intelligent classifier for breast cancer diagnosis based on a rough set and extreme learning machine: RS + ELM

    OpenAIRE

    KAYA, Yılmaz

    2014-01-01

    Breast cancer is one of the leading causes of death among women all around the world; therefore, accurate and early diagnosis of breast cancer is an important problem. The rough set (RS) and extreme learning machine (ELM) methods were used together in this study for the diagnosis of breast cancer. Unnecessary attributes were discarded from the dataset by means of the RS approach, and classification by means of ELM was performed using the remaining attributes. The Wisconsin B...

  8. A ROUGH SET DECISION TREE BASED MLP-CNN FOR VERY HIGH RESOLUTION REMOTELY SENSED IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2017-09-01

    Full Text Available Recent advances in remote sensing have witnessed a great amount of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges in processing, analysing and classifying them effectively, due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are geared toward pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial detail within VHR images. This paper introduced a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correct and incorrect regions on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to fully automatic and effective VHR image classification.

  9. Using Variable Precision Rough Set for Selection and Classification of Biological Knowledge Integrated in DNA Gene Expression

    Directory of Open Access Journals (Sweden)

    Calvo-Dmgz D.

    2012-12-01

    Full Text Available DNA microarrays have contributed to the exponential growth of genomic and experimental data in the last decade. This large amount of gene expression data has been used by researchers seeking the diagnosis of diseases, such as cancer, using machine learning methods. In turn, explicit biological knowledge about gene functions has also grown tremendously over the last decade. This work integrates explicit biological knowledge, provided as gene sets, into the classification process by means of Variable Precision Rough Set theory (VPRS). The proposed model is able to highlight which part of the provided biological knowledge has been important for classification. The paper presents a novel model for microarray data classification which is able to incorporate prior biological knowledge in the form of gene sets. Based on this knowledge, the input microarray data are transformed into supergenes, and rough set theory is then applied to select the most promising supergenes and to derive a set of easily interpretable classification rules. The proposed model is evaluated on three breast cancer microarray datasets, obtaining successful results compared to classical classification techniques. The experimental results show that while there are no significant differences between our model and classical techniques, our model is able to provide a biologically interpretable explanation of how it classifies new samples.

  10. An optimised set-up for total reflection particle induced X-ray emission

    International Nuclear Information System (INIS)

    Kan, J.A. van; Vis, R.D.

    1997-01-01

    MeV proton beams at small angles of incidence (0-35 mrad) are used to analyse trace elements on flat surfaces such as Si wafers or quartz substrates. In these experiments, the particle induced X-ray emission (PIXE) signal is used in a new optimised set-up. This set-up is constructed in such a way that the X-ray detector can reach very large solid angles, larger than 1 sr. Use of these large detector solid angles, combined with the reduction of bremsstrahlung background, affords limits of detection (LOD) of the order of 10^10 at cm^-2 using total reflection particle induced X-ray emission (TPIXE). The LODs from earlier TPIXE measurements in a non-optimised set-up are used to estimate LODs in the new TPIXE set-up. Si wafers with low surface concentrations of V, Ni, Cu and Ag are used as standards to calibrate the LODs found with this set-up. The metal concentrations are determined by total reflection X-ray fluorescence (TXRF). The TPIXE measurements are compared with TXRF measurements on the same wafers. (Author)

  11. The Logical Properties of Lower and Upper Approximation Operations in Rough Sets

    Institute of Scientific and Technical Information of China (English)

    Zhu Feng; He Huacan

    2000-01-01

    In this paper, we discuss the logical properties of rough sets through topological Boolean algebras and closure topological Boolean algebras. We obtain representation theorems for finite topological Boolean algebras and closure topological Boolean algebras under the upper-lower relation condition. These theorems establish the relationship between (closure) topological Boolean algebras and rough sets on general sets, in a manner similar to Stone's representation theorem for Boolean algebras.

  12. Optimisation of the Lowest Robin Eigenvalue in the Exterior of a Compact Set

    Czech Academy of Sciences Publication Activity Database

    Krejčiřík, David; Lotoreichik, Vladimir

    2018-01-01

    Roč. 25, č. 1 (2018), s. 319-337 ISSN 0944-6532 R&D Projects: GA ČR GA17-01706S Institutional support: RVO:61389005 Keywords : Robin Laplacian * negative boundary parameter * exterior of a convex set * spectral isoperimetric inequality * spectral isochoric inequality * parallel coordinates Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 0.496, year: 2016

  13. Rough Sets as a Knowledge Discovery and Classification Tool for the Diagnosis of Students with Learning Disabilities

    Directory of Open Access Journals (Sweden)

    Yu-Chi Lin

    2011-02-01

    Full Text Available Due to the implicit characteristics of learning disabilities (LDs), the diagnosis of students with learning disabilities has long been a difficult issue. Artificial intelligence techniques like the artificial neural network (ANN) and support vector machine (SVM) have been applied to the LD diagnosis problem with satisfactory outcomes. However, special education teachers or professionals tend to be skeptical of these kinds of black-box predictors. In this study, we adopt the rough set theory (RST), which can not only perform as a classifier but may also produce meaningful explanations or rules, for the LD diagnosis application. Our experiments indicate that the RST approach is competitive as a tool for feature selection and performs better in terms of prediction accuracy than other rule-based algorithms such as the decision tree and RIPPER algorithms. We also propose to mix samples collected from sources with different LD diagnosis procedures and criteria. By pre-processing these mixed samples with simple and readily available clustering algorithms, we are able to improve the quality and support of the rules generated by the RST. Overall, our study shows that the rough set approach, as a classification and knowledge discovery tool, may have great potential in playing an essential role in LD diagnosis.

  14. Selection of an evaluation index for water ecological civilizations of water-shortage cities based on the grey rough set

    Science.gov (United States)

    Zhang, X. Y.; Zhu, J. W.; Xie, J. C.; Liu, J. L.; Jiang, R. G.

    2017-08-01

    According to the characteristics and existing problems of water ecological civilization in water-shortage cities, an evaluation index system for water ecological civilization was established using a grey rough set. Covering the six aspects of water resources, water security, water environment, water ecology, water culture and water management, this study established the primary framework of the evaluation system, comprising 28 items, and used rough set theory for the optimal selection of the index system. Grey correlation theory was then used for the weightings, so that an integrated evaluation index system for the water ecological civilization of water-shortage cities could be constituted. Taking Xi'an City as an example, the results showed that 20 evaluation indexes were obtained after optimal selection from the preliminary framework. The most influential indices were the water-resource category index and the water-environment category index; the leakage rate of the public water-supply pipe network, the disposal, treatment and reuse rate of polluted water, the urban water surface area ratio, and the water quality of the main rivers are also important. It was demonstrated that the evaluation index can objectively reflect regional features and key points for the development of water ecological civilization in cities with scarce water resources, and the application example shows that it has universal applicability.

  15. Internet TV set-top devices for web-based projects: smooth sailing or rough surfing?

    Science.gov (United States)

    Johnson, K B; Ravert, R D; Everton, A

    1999-01-01

    The explosion of projects utilizing the World Wide Web in the home environment offers a select group of patients a tremendous tool for information management and health-related support. However, many patients do not have ready access to the Internet in their homes; for these patients, Internet TV set-top devices may provide a low-cost alternative to PC-based web browsers. As part of a larger descriptive study providing adolescents with access to an on-line support group, we investigated the feasibility of using an Internet TV set-top device for those patients in need of Internet access. Although the devices required some configuration before being installed in the home environment, they required a minimum of support and were well accepted by these patients. However, these patients used the Internet less frequently than their peers with home personal computers, most likely due to a lack of ready availability of the telephone or television at all times. Internet TV set-top devices represent a feasible alternative means of access to the World Wide Web for some patients. Any attempt to use these devices should, however, be coupled with education for all family members and an attempt to provide a dedicated television and phone line.

  16. Designing an Expert System for Analyzing the Energy Consumption Behavior of Employees in Organizations Using Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Tooraj Karimi

    2015-06-01

    Full Text Available Understanding and changing energy consumption behavior requires extensive knowledge about the motives behind that behavior. In this research, Rough Set Theory is used to investigate the energy consumption behavior of employees in organizations. Thirteen condition attributes and a decision attribute are selected and the decision system is created. The condition attributes include demographic, value, attitude and organizational characteristics of employees, and the decision attribute relates to energy consumption behavior. A total of 482 employees were selected randomly from 37 office buildings of the Ministry of Petroleum, and rough set modeling was performed on the data. By combining different methods of discretization, reduction algorithms and rule generation, nine models were built using the ROSETTA software. The results show that four of the 13 condition attributes, namely “organizational citizenship”, “satisfaction”, “attitude toward behavior” and “lighting control”, are selected as the main features of the system. After cross-validation of the various models, the model using manual discretization, genetic algorithms and the ORR approach to extract reducts has the highest accuracy and is selected as the most reliable model.

  17. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    Science.gov (United States)

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by K-means clustering according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity and other hydrological factors. Based on the classification results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of the conceptual hydrological model are calibrated for each category using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares it with the traditional flood forecasting method, finding that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be applied in catchments with few historical floods.
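    The classify-then-calibrate idea can be sketched as follows. This is a minimal illustration with random stand-in flood descriptors and placeholder parameter vectors in place of the paper's hydrological features and GA-calibrated parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical descriptors of historical floods: columns might encode
# precipitation distribution, intensity variance, antecedent wetness, ...
rng = np.random.default_rng(0)
X_hist = rng.random((60, 4))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_hist)

# One hydrological-model parameter vector per flood category; in the paper
# these are calibrated by a genetic algorithm, here they are placeholders.
params_by_class = {k: rng.random(5) for k in range(3)}

def forecast_params(flood_features):
    """Pick the parameter set of the category nearest to the new flood."""
    k = int(kmeans.predict(np.asarray(flood_features).reshape(1, -1))[0])
    return params_by_class[k]

print(forecast_params(rng.random(4)))
```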

  18. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    Science.gov (United States)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication services in the smart distribution grid (SDG), an access selection algorithm based on dynamic network status and different service types for heterogeneous wireless networks is proposed. The network performance index values are obtained in real time by a multimode terminal, and the variation trend of the index values is analyzed by the growth matrix. The index weights are calculated by the entropy-weight method and then modified by a rough set to obtain the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can effectively implement dynamic access selection in the heterogeneous wireless networks of SDG and reduce the network blocking probability.
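    The entropy-weight and grey relational steps are standard and easy to make concrete. Below is a minimal sketch assuming benefit-type indices only; the rough-set modification of the weights described in the record is omitted, and the index values are invented.

```python
import numpy as np

# Hypothetical network-status matrix: rows = candidate networks,
# columns = benefit-type indices (e.g. bandwidth, signal quality, ...).
X = np.array([[30.0, 0.8, 0.6],
              [20.0, 0.9, 0.7],
              [45.0, 0.6, 0.9]])

# Entropy weights: a column whose values differ more carries more weight.
P = X / X.sum(axis=0)
m = X.shape[0]
e = -(P * np.log(P)).sum(axis=0) / np.log(m)
w = (1 - e) / (1 - e).sum()

# Grey relational grade against the ideal (column-wise best) network.
Xn = X / X.max(axis=0)          # benefit indices scaled to (0, 1]
delta = 1.0 - Xn                # gap to the ideal value per index
rho = 0.5                       # distinguishing coefficient
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
grade = (xi * w).sum(axis=1)    # entropy-weighted grey relational grade
print("selected network:", int(grade.argmax()))
```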

  19. Diamonds in the rough: key performance indicators for reticles and design sets

    Science.gov (United States)

    Ackmann, Paul

    2008-10-01

    The discussion on reticle cost continues to raise questions by many in the semiconductor industry. The diamond industry developed a method to judge and grade diamonds. [1, 11] The diamond-marketing tool of "The 4Cs of Diamonds" and other slogans help explain the multiple, complex variables that determine the value of a particular stone. Understanding the critical factors of Carat, Clarity, Color, and Cut allows all customers to choose a gem that matches their unique desires. I apply the same principles of "The 4Cs of Diamonds" to develop an analogous method for rating and tracking reticle performance. I introduced the first 3Cs of reticle manufacturing during my BACUS presentation panel at SPIE in February 2008. [2] To these first 3Cs (Capital, Complexity, and Content), I now add a fourth, Cycle time. I will look at how our use of reticles changes by node and use "The 4Cs of Reticles" to develop the key performance indicators (KPI) that will help our industry set standards for evaluating reticle technology. Capital includes both cost and utilization. This includes tools, people, facilities, and support systems required for building the most critical reticles. Tools have highest value in the first two years of use, and each new technology node will likely increase the Capital cost of reticles. New technologies, specifications, and materials drive Complexity for reticles, including smaller feature size, increased optical proximity correction (OPC), and more levels at sub-wavelength. The large data files needed to create finer features require the use of the newest tools for writing, inspection, and repair. Content encompasses the customer's specifications and requirements, which the mask shop must meet. The specifications are critical because they drive wafer yield. A clear increase of the number of masking levels has occurred since the 90 nm node. Cycle time starts when the design is finished and lasts until the mask house ships the reticle to the fab. Depending on

  20. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    Science.gov (United States)

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by an SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of a brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  1. Assessment of physiological performance and perception of pushing different wheelchairs on indoor modular units simulating a surface roughness often encountered in under-resourced settings.

    Science.gov (United States)

    Sasaki, Kotaro; Rispin, Karen

    2017-01-01

    In under-resourced settings where motorized wheelchairs are rarely available, manual wheelchair users with limited upper-body strength and functionality need to rely on assisting pushers for their mobility. Because traveling surfaces in under-resourced settings are often unpaved and rough, wheelchair pushers may experience high physiological loading. In order to evaluate pushers' physiological loading and to improve wheelchair designs, we built indoor modular units that simulate rough surface conditions, and tested the hypothesis that pushing different wheelchairs would result in different physiological performance and perception of difficulty on the simulated rough surface. Eighteen healthy subjects pushed two different types of pediatric wheelchairs (Moti-Go, manufactured by Motivation, and KidChair, by Hope Haven), fitted with a 50-kg dummy, on the rough and smooth surfaces at self-selected speeds. Oxygen uptake, traveling distance over 6 minutes, and ratings of difficulty were obtained. The results supported our hypothesis, showing that pushing Moti-Go on the rough surface was physiologically less demanding than pushing KidChair, whereas on the smooth surface the two wheelchairs did not differ significantly. These results indicate that wheelchair designs intended to improve pushers' performance in under-resourced settings should be evaluated on rough surfaces.

  2. Rough multiple objective decision making

    CERN Document Server

    Xu, Jiuping

    2011-01-01

    Contents: Rough Set Theory (basic concepts and properties of rough sets; rough membership; rough intervals; rough functions; applications of rough sets). Multiple Objective Rough Decision Making (reverse logistics problem with rough interval parameters; MODM-based rough approximation for the feasible region; EVRM; CCRM; DCRM; reverse logistics network design problem of the Suji renewable resource market). Bilevel Multiple Objective Rough Decision Making (hierarchical supply chain planning problem with rough interval parameters; bilevel decision making model; BL-EVRM; BL-CCRM; BL-DCRM; application to supply chain planning of Mianyang Co., Ltd). Stochastic Multiple Objective Rough Decision Making (multi-objective resource-constrained project scheduling under a rough random environment; random variables; stochastic EVRM; stochastic CCRM; stochastic DCRM; multi-objective rc-PSP/mM/Ro-Ra for the Longtan Hydropower Station). Fuzzy Multiple Objective Rough Decision Making (allocation problem under a fuzzy environment; fuzzy variables; Fu-EVRM; Fu-CCRM; Fu-DCRM; earth-rock work allocation problem).

  3. Shadow analysis of soil surface roughness compared to the chain set method and direct measurement of micro-relief

    Directory of Open Access Journals (Sweden)

    R. García Moreno

    2010-08-01

    Full Text Available Soil surface roughness (SSR) expresses soil susceptibility to wind and water erosion and plays an important role in the development and maintenance of soil biota. Several methods have been developed to characterise SSR based on different ways of acquiring data. Because the main problems related to these methods involve the use and handling of equipment in the field, the present study aims to fill the need for a method for measuring SSR that is more reliable, low-cost and convenient in the field than traditional field methods. Shadow analysis, which interprets micro-topographic shadows, is based on the principle that there is a direct relationship between the soil surface roughness and the shadows cast by soil structures under fixed sunlight conditions. SSR was calculated with shadow analysis in the laboratory using hemispheres of different diameters with a diverse distribution of known heights over a surface area of 1 m².

    Data obtained from the shadow analysis were compared to data obtained with the chain method and simulation of the micro-relief. The results show a relationship among the SSR values calculated using the different methods. To further improve the method, shadow analysis was used to measure the SSR in a sandy clay loam field using different tillage tools (chisel, tiller and roller) and in a control, on 4 m² surface plots divided into subplots of 1 m². The measurements were compared to the data obtained using the chain set and pin meter methods. The measured SSR was the highest when the chisel was used, followed by the tiller and the roller, and finally the control, for each of the three methods. Shadow analysis is shown to be a reliable method that does not disturb the measured surface, is easy to handle and analyse, and shortens the time involved in field operations by a factor ranging from 4 to 20 compared to well-known techniques such as the chain set and pin meter methods.

  4. Classification of sand samples according to radioactivity content by the use of euclidean and rough sets techniques

    International Nuclear Information System (INIS)

    Abd El-Monsef, M.M.; Kozae, A.M.; Seddeek, M.K.; Medhat, T.; Sharshar, T.; Badran, H.M.

    2004-01-01

    From the geological point of view, the origin and transport of black and normal sands is particularly important. Black and normal sands came to their places along the Mediterranean Sea coast after transport by some natural process, and the two types of sand have different radiological properties. This study therefore attempts to use mathematical methods to classify Egyptian sand samples collected from 42 locations in an area of 40 × 19 km² based on their radioactivity contents. Using all the information resulting from the experimental measurements of radioactivity contents, as well as some other parameters, can be a time- and effort-consuming task, so the process of eliminating unnecessary attributes is of prime importance. This elimination of superfluous attributes that cannot affect the decision was carried out. Topological techniques were then applied to classify the information systems resulting from the radioactivity measurements. These techniques were applied in the Euclidean and quasi-discrete topological cases. While the former case has some applications in environmental radioactivity, the use of the quasi-discrete case in so-called rough set information analysis is new in such a study. The mathematical methods are summarized, and the results and their radiological implications are discussed. Generally, the results indicate no radiological anomaly, and they support the hypothesis previously suggested about the presence of two types of sand in the studied area

  5. Design of cognitive engine for cognitive radio based on the rough sets and radial basis function neural network

    Science.gov (United States)

    Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli

    2013-03-01

    Cognitive radio (CR) is an intelligent wireless communication system which can dynamically adjust its parameters to improve system performance depending on environmental changes and quality of service. The core technology of CR is the design of the cognitive engine, which introduces reasoning and learning methods from the field of artificial intelligence to achieve perception, adaptation and learning capability. Considering the dynamic wireless environment and demands, this paper proposes a design of a cognitive engine based on rough sets (RS) and a radial basis function neural network (RBF_NN). The method uses experiential knowledge and environment information processed by the RS module to train the RBF_NN, and the learned model is then used to reconfigure communication parameters so as to allocate resources rationally and improve system performance. After training the learning model, the performance is evaluated on two benchmark functions. The simulation results demonstrate the effectiveness of the model, and the proposed cognitive engine can effectively achieve the goal of learning and reconfiguration in cognitive radio.

  6. Discrete rough set analysis of two different soil-behavior-induced landslides in National Shei-Pa Park, Taiwan

    Directory of Open Access Journals (Sweden)

    Shih-Hsun Chang

    2015-11-01

    Full Text Available The governing factors that influence landslide occurrences are complicated by the different soil conditions at various sites. To resolve the problem, this study focused on spatial information technology to collect data and information on geology. GIS, remote sensing and a digital elevation model (DEM) were used in combination to extract the attribute values of the surface material in the vast study area of Shei-Pa National Park, Taiwan. The factors influencing landslides were collected and their quantified values computed. The major soil components of loam and gravel in the Shei-Pa area resulted in different landslide problems, and the major factors were successfully extracted from the influencing factors. Finally, the discrete rough set (DRS) classifier was used as a tool to find the threshold of each attribute contributing to landslide occurrence, based upon the knowledge database. This rule-based knowledge database provides an effective and urgently needed system to manage landslides. NDVI (Normalized Difference Vegetation Index), VI (Vegetation Index), elevation, and distance from the road are the four major influencing factors for landslide occurrence. Landslide hazard potential diagrams (landslide susceptibility maps) were drawn and a reasonable landslide accuracy rate was calculated. This study thus offers a systematic solution to the investigation of landslide disasters.

  7. A combined data mining approach using rough set theory and case-based reasoning in medical datasets

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Rezvan

    2014-06-01

    Full Text Available Case-based reasoning (CBR) is the process of solving new cases by retrieving the most relevant ones from an existing knowledge base. Since irrelevant or redundant features not only remarkably increase memory requirements but also increase the time complexity of case retrieval, reducing the number of dimensions is an issue worth considering. This paper uses rough set theory (RST) to reduce the number of dimensions in a CBR classifier with the aim of increasing accuracy and efficiency. CBR exploits a distance based on the co-occurrence of categorical data to measure the similarity of cases. This distance is based on the proportional distribution of the different categorical values of the features, and the weight used for a feature is the average of the co-occurrence values of the features. The combination of RST and CBR has been applied to the real categorical datasets of Wisconsin Breast Cancer, Lymphography, and Primary cancer. The 5-fold cross-validation method is used to evaluate the performance of the proposed approach. The results show that this combined approach lowers computational costs and improves performance metrics, including accuracy and interpretability, compared to other approaches developed in the literature.

  8. Extraction of spatial-temporal rules from mesoscale eddies in the South China Sea Based on rough set theory

    Science.gov (United States)

    Du, Y.; Fan, X.; He, Z.; Su, F.; Zhou, C.; Mao, H.; Wang, D.

    2011-06-01

    In this paper, rough set theory is introduced to represent spatial-temporal relationships and extract the corresponding rules from typical mesoscale-eddy states in the South China Sea (SCS). Three decision attributes are adopted in this study, which make the approach flexible in retrieving spatial-temporal rules with different features. Spatial-temporal rules of typical states in the SCS are extracted under the three decision attributes and are confirmed by previous works. The results demonstrate that this approach is effective in extracting spatial-temporal rules from typical mesoscale-eddy states, and it therefore provides a powerful approach for future forecasting. Spatial-temporal rules in the SCS indicate that warm eddies following the rules are generally in the southeastern and central SCS around the 2000 m isobaths in winter. Their intensity and vorticity are weaker than those of cold eddies, and they usually move a shorter distance. By contrast, cold eddies are in regions deeper than 2000 m in the southwestern and northeastern SCS in spring and fall. Their intensity and vorticity are strong, and they usually move a long distance. In winter, a few rules are followed by cold eddies in the northern tip of the basin and southwest of Taiwan Island rather than by warm eddies, indicating that cold eddies may be well regulated in the region. Several warm-eddy rules are obtained west of Luzon Island, indicating that warm eddies may be well regulated in the region as well. Otherwise, warm and cold eddies are distributed not only in the jet flow off southern Vietnam induced by intraseasonal wind stress in summer and fall, but also in the northern shallow water, which should be a focus of future study.

  9. Can We Make Definite Categorization of Student Attitudes? A Rough Set Approach to Investigate Students' Implicit Attitudinal Typologies toward Living Things

    Science.gov (United States)

    Narli, Serkan; Yorek, Nurettin; Sahin, Mehmet; Usak, Muhammet

    2010-01-01

    This study investigates the possibility of analyzing educational data using the theory of rough sets which is mostly employed in the fields of data analysis and data mining. Data were collected using an open-ended conceptual understanding test of the living things administered to first-year high school students. The responses of randomly selected…

  10. Traceability of optical roughness measurements on polymers

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Gasparin, Stefania; Carli, Lorenzo

    2008-01-01

    -focus instrument, and a confocal microscope. Using stylus measurements as reference, parameter settings on the optical instruments were optimised and residual noise reduced by low pass filtering. Traceability of optical measurements could be established with expanded measuring uncertainties (k=2) of 4......An experimental investigation on surface roughness measurements on plastics was carried out with the objective of developing a methodology to achieve traceability of optical instruments. A ground steel surface and its replicas were measured using a stylus instrument, an optical auto......% for the auto-focus instrument and 10% for confocal microscope....

  11. Comparative analysis of targeted metabolomics: dominance-based rough set approach versus orthogonal partial least square-discriminant analysis.

    Science.gov (United States)

    Blasco, H; Błaszczyński, J; Billaut, J C; Nadal-Desbarats, L; Pradat, P F; Devos, D; Moreau, C; Andres, C R; Emond, P; Corcia, P; Słowiński, R

    2015-02-01

    Metabolomics is an emerging field that involves ascertaining a metabolic profile from a combination of small molecules, and it has health applications. Metabolomic methods are currently applied to discover diagnostic biomarkers and to identify pathophysiological pathways involved in pathology. However, metabolomic data are complex and are usually analyzed by statistical methods. Although the methods have been widely described, most have been neither standardized nor validated. Data analysis is the foundation of a robust methodology, so new mathematical methods need to be developed to assess and complement current methods. We therefore applied, for the first time, the dominance-based rough set approach (DRSA) to metabolomics data; we also assessed the complementarity of this method with standard statistical methods. Some attributes were transformed in a way allowing us to discover global and local monotonic relationships between condition and decision attributes. We used previously published metabolomics data (18 variables) for amyotrophic lateral sclerosis (ALS) and non-ALS patients. Principal Component Analysis (PCA) and Orthogonal Partial Least Square-Discriminant Analysis (OPLS-DA) allowed satisfactory discrimination (72.7%) between ALS and non-ALS patients. Some discriminant metabolites were identified: acetate, acetone, pyruvate and glutamine. The concentrations of acetate and pyruvate were also identified by univariate analysis as significantly different between ALS and non-ALS patients. DRSA correctly classified 68.7% of the cases and established rules involving some of the metabolites highlighted by OPLS-DA (acetate and acetone). Some rules identified potential biomarkers not revealed by OPLS-DA (beta-hydroxybutyrate). We also found a large number of common discriminating metabolites after Bayesian confirmation measures, particularly acetate, pyruvate, acetone and ascorbate, consistent with the pathophysiological pathways involved in ALS. DRSA provides

  12. Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms

    International Nuclear Information System (INIS)

    Ebert, M.

    1997-01-01

    This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy for searching for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered: those associated with mathematical programming, which employ specific search techniques, linear programming type searches or artificial intelligence, and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)

  13. Water Quality Assessment in the Harbin Reach of the Songhuajiang River (China Based on a Fuzzy Rough Set and an Attribute Recognition Theoretical Model

    Directory of Open Access Journals (Sweden)

    Yan An

    2014-03-01

    Full Text Available A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will definitely increase. In order to reduce the input space dimensions, a fuzzy rough set was introduced to perform attribute reduction. Then, an attribute recognition theoretical model and the entropy method were combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October in 2012. The fuzzy rough set was applied to reduce the ten parameters to four: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show good consistency with those of Reduct A, which means that DO is not always necessary to assess water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value decided by subjective experience. The assessment results gained by the fuzzy rough set obviously reduce computational complexity, and are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system.

  14. OzPythonPlex: An optimised forensic STR multiplex assay set for the Australasian carpet python (Morelia spilota).

    Science.gov (United States)

    Ciavaglia, Sherryn; Linacre, Adrian

    2018-05-01

    Reptile species, and in particular snakes, are protected by national and international agreements yet are commonly handled illegally. To aid in the enforcement of such legislation, we report on the development of three 11-plex assays from the genome of the carpet python to type 24 loci of tetra-nucleotide and penta-nucleotide repeat motifs (pure, compound and complex included). The loci range in size between 70 and 550 bp. Seventeen of the loci are newly characterised with the inclusion of seven previously developed loci to facilitate cross-comparison with previous carpet python genotyping studies. Assays were optimised in accordance with human forensic profiling kits using one nanogram template DNA. Three loci are included in all three of the multiplex reactions as quality assurance markers, to ensure sample identity and genotyping accuracy is maintained across the three profiling assays. Allelic ladders have been developed for the three assays to ensure consistent and precise allele designation. A DNA reference database of allele frequencies is presented based on 249 samples collected from throughout the species native range. A small number of validation tests are conducted to demonstrate the utility of these multiplex assays. We suggest further appropriate validation tests that should be conducted prior to the application of the multiplex assays in criminal investigations involving carpet pythons. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Cost evaluation to optimise radiation therapy implementation in different income settings: A time-driven activity-based analysis.

    Science.gov (United States)

    Van Dyk, Jacob; Zubizarreta, Eduardo; Lievens, Yolande

    2017-11-01

    With increasing recognition of growing cancer incidence globally, efficient means of expanding radiotherapy capacity is imperative, and understanding the factors impacting human and financial needs is valuable. A time-driven activity-based costing analysis was performed, using a base case of 2-machine departments, with defined cost inputs and operating parameters. Four income groups were analysed, ranging from low to high income. Scenario analyses included department size, operating hours, fractionation, treatment complexity, efficiency, and centralised versus decentralised care. The base case cost/course is US$5,368 in HICs, US$2,028 in LICs; the annual operating cost is US$4,595,000 and US$1,736,000, respectively. Economies of scale show cost/course decreasing with increasing department size, mainly related to the equipment cost and most prominent up to 3 linacs. The cost in HICs is two or three times as high as in U-MICs or LICs, respectively. Decreasing operating hours below 8h/day has a dramatic impact on the cost/course. IMRT increases the cost/course by 22%. Centralising preparatory activities has a moderate impact on the costs. The results indicate trends that are useful for optimising local and regional circumstances. This methodology can provide input into a uniform and accepted approach to evaluating the cost of radiotherapy. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  16. Simulation optimisation

    International Nuclear Information System (INIS)

    Anon

    2010-01-01

    Over the past decade there has been a significant advance in flotation circuit optimisation through performance benchmarking using metallurgical modelling and steady-state computer simulation. This benchmarking includes traditional measures, such as grade and recovery, as well as new flotation measures, such as ore floatability, bubble surface area flux and froth recovery. To further this optimisation, Outotec has released its HSC Chemistry software with simulation modules. The flotation model developed by the AMIRA P9 Project, of which Outotec is a sponsor, is regarded by industry as the most suitable flotation model to use for circuit optimisation. This model incorporates ore floatability with flotation cell pulp and froth parameters, residence time, entrainment and water recovery. Outotec's HSC Sim enables you to simulate mineral processes in different levels, from comminution circuits with sizes and no composition, through to flotation processes with minerals by size by floatability components, to full processes with true particles with MLA data.

  17. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting.

    Science.gov (United States)

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface, for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds, and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.

  19. Development of a Mobile-Optimised Website to Support Students with Special Needs Transitioning from Primary to Secondary Settings

    Science.gov (United States)

    Chambers, Dianne; Coffey, Anne

    2013-01-01

    With an increasing number of students with special needs being included in regular classroom environments, consideration of, and planning for, a smooth transition between different school settings is important for parents, classroom teachers and school administrators. The transition between primary and secondary school can be difficult for…

  20. How can general paediatric training be optimised in highly specialised tertiary settings? Twelve tips from an interview-based study of trainees.

    Science.gov (United States)

    Al-Yassin, Amina; Long, Andrew; Sharma, Sanjiv; May, Joanne

    2017-01-01

    Both general and subspecialty paediatric trainees undertake attachments in highly specialised tertiary hospitals. Trainee feedback suggests that mismatches in expectations between trainees and supervisors and a perceived lack of educational opportunities may lead to trainee dissatisfaction in such settings. With the 'Shape of Training' review (reshaping postgraduate training in the UK to focus on more general themes), this issue is likely to become more apparent. We wished to explore the factors that contribute to a positive educational environment and training experience and identify how this may be improved in highly specialised settings. General paediatric trainees working at all levels in subspecialty teams at a tertiary hospital were recruited (n=12). Semistructured interviews were undertaken to explore the strengths and weaknesses of training in such a setting and how this could be optimised. Appreciative inquiry methodology was used to identify areas of perceived best practice and consider how these could be promoted and disseminated. Twelve best practice themes were identified: (1) managing expectations by acknowledging the challenges; (2) educational contracting to identify learning needs and opportunities; (3) creative educational supervision; (4) centralised teaching events; (5) signposting learning opportunities; (6) curriculum-mapped pan-hospital teaching programmes; (7) local faculty groups with trainee representation; (8) interprofessional learning; (9) pastoral support systems; (10) crossover weeks to increase clinical exposure; (11) adequate clinical supervision; and (12) rota design to include teaching and clinic time. Tertiary settings have strengths, as well as challenges, for general paediatric training. Twelve trainee-generated tips have been identified to capitalise on the educational potential within these settings. Trainee feedback is essential to diagnose and improve educational environments and appreciative inquiry is a useful tool for

  1. Predicting High or Low Transfer Efficiency of Photovoltaic Systems Using a Novel Hybrid Methodology Combining Rough Set Theory, Data Envelopment Analysis and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lee-Ing Tong

    2012-02-01

    Full Text Available Solar energy has become an important energy source in recent years as it generates less pollution than other energies. A photovoltaic (PV) system, which typically has many components, converts solar energy into electrical energy. With the development of advanced engineering technologies, the transfer efficiency of a PV system has been increased from low to high. The combination of components in a PV system influences its transfer efficiency. Therefore, when predicting the transfer efficiency of a PV system, one must consider the relationships among the system components. This work accurately predicts whether the transfer efficiency of a PV system is high or low using a novel hybrid model that combines rough set theory (RST), data envelopment analysis (DEA), and genetic programming (GP). Finally, real data sets are utilized to demonstrate the accuracy of the proposed method.

  2. A Model to Identify the Most Effective Business Rule in Information Systems using Rough Set Theory: Study on Loan Business Process

    Directory of Open Access Journals (Sweden)

    Mohammad Aghdasi

    2011-09-01

    In this paper, a practical model is used to identify the most effective rules in information systems. In this model, critical business attributes which fit strategic expectations are first taken into account. These are the attributes whose changes are more important than those of others in achieving the strategic expectations; to identify these attributes we utilize rough set theory. Those business rules which use critical information attributes in their structures are identified as the most effective business rules. The proposed model helps information system developers to identify the scope of effective business rules, decreasing the time and cost of information system maintenance. It also helps business analysts to focus on managing critical business attributes in order to achieve a specific goal.

  3. Examination of Routine Practice Patterns in the Hospital Information Data Warehouse: Use of OLAP and Rough Set Analysis with Clinician Feedback

    Science.gov (United States)

    Grant, Andrew; Grant, Gwyneth; Gagné, Jean; Blanchette, Carl; Comeau, Émilie; Brodeur, Guillaume; Dionne, Jonathon; Ayite, Alphonse; Synak, Piotr; Wroblewski, Jakub; Apanowitz, Cas

    2001-01-01

    The patient-centred electronic patient record enables retrospective analysis of practice patterns as one means to assist clinicians in adjusting and improving their practice. An interrogation of the data warehouse linking test use to Diagnostic Related Group (DRG), over one year's data from the Sherbrooke University Hospital, showed that one-third of patients used two-thirds of these diagnostic tests. Using rough set analysis, zones of repeated tests were demonstrated where results remained within stable limits. It was concluded that 30% of fluid and electrolyte testing was probably unnecessary. These findings led to an endorsement of changing the test request formats in the hospital information system from profiles to individual tests requiring justification.

  4. The Rough Set methodology versus Discriminant Analysis in multiattribute classification problems

    Directory of Open Access Journals (Sweden)

    Vilar Zanón, J.L.

    2003-01-01

    Full Text Available Many financial decisions involve classifying an observation (companies, securities...) into a category or group, which has encouraged the application of operations research methods to financial problems. A particular case of classification problems arises when the number of groups is limited to two. Numerous financial studies are devoted to binary classification problems: classifying loans as defaulted or not, mergers and acquisitions, bond rating, or the prediction of business failure. Many statistical methods have been employed to address these problems. In most cases, the explanatory variables used do not satisfy the statistical assumptions these methods require, which has motivated the search for other tools, such as Rough Set Theory, that overcome these drawbacks. This paper describes an empirical investigation consisting of a comparative study of the use of Discriminant Analysis and Rough Set Theory on an information system composed of 72 Spanish non-life insurance companies described by 21 financial ratios. We compared their effectiveness by applying them to the detection of insolvency as a multiattribute classification problem between healthy and failed companies, using the financial ratios as attributes.

  5. Rough Finite State Automata and Rough Languages

    Science.gov (United States)

    Arulprakasam, R.; Perumal, R.; Radhakrishnan, M.; Dare, V. R.

    2018-04-01

    Sumita Basu [1, 2] recently introduced the concept of a rough finite state (semi)automaton, rough grammar and rough languages. Motivated by the work of [1, 2], in this paper, we investigate some closure properties of rough regular languages and establish the equivalence between the classes of rough languages generated by rough grammar and the classes of rough regular languages accepted by rough finite automaton.

  6. Optimisation by hierarchical search

    Science.gov (United States)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.
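    The group-wise optimisation the abstract describes can be illustrated with plain block-coordinate descent on a toy quadratic cost: each group of variables is optimised exactly while the rest stay frozen. This is a simplified stand-in for illustration, not the authors' hierarchical scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
Q = A @ A.T + 8 * np.eye(8)       # positive-definite toy cost matrix

def cost(x):
    return float(x @ Q @ x)

x = rng.standard_normal(8)
groups = [np.arange(0, 4), np.arange(4, 8)]   # two variable groups

# Optimise one group at a time while the other is frozen; a hierarchical
# scheme would nest such groups recursively.
for _ in range(20):
    for g in groups:
        rest = np.setdiff1d(np.arange(8), g)
        # Exact minimiser of the quadratic over the free block:
        # x_g = -Q_gg^{-1} Q_g,rest x_rest
        x[g] = -np.linalg.solve(Q[np.ix_(g, g)], Q[np.ix_(g, rest)] @ x[rest])

print("final cost:", cost(x))    # approaches the global optimum 0
```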

  7. Notions of Rough Neutrosophic Digraphs

    Directory of Open Access Journals (Sweden)

    Nabeela Ishfaq

    2018-01-01

    Full Text Available Graph theory has numerous applications in various disciplines, including computer networks, neural networks, expert systems, cluster analysis, and image capturing. Rough neutrosophic set (NS) theory is a hybrid tool for handling uncertain information that exists in real life. In this research paper, we apply the concept of rough NS theory to graphs and present a new kind of graph structure, rough neutrosophic digraphs. We present certain operations, including lexicographic products, strong products, rejection and tensor products on rough neutrosophic digraphs. We investigate some of their properties. We also present an application of a rough neutrosophic digraph in decision-making.

  8. Assessment of wind speed and wind power through three stations in Egypt, including air density variation and analysis results with rough set theory

    International Nuclear Information System (INIS)

    Essa, K.S.M.; Embaby, M.; Marrouf, A.A.; Koza, A.M.; Abd El-Monsef, M.E.

    2007-01-01

    It is well known that the wind energy potential is proportional to both the air density and the third power of the wind speed averaged over a suitable time period. The wind speed and air density are random variables depending on both time and location. The main objective of this work is to derive the most general formulation of the wind energy potential, taking into consideration the time variation of both wind speed and air density. The correction factor is derived explicitly in terms of the cross-correlation and the coefficients of variation. The application is performed on environmental and wind speed measurements at Cairo Airport, Kosseir and Hurguada, Egypt. Comparisons are made between the Weibull, Rayleigh, and actual data distributions of wind speed and wind power for the year 2005. A Weibull distribution is the best match to the actual probability distribution of wind speed data for most stations. The maximum wind energy potential was 373 W/m² in June at Hurguada (Red Sea coast), where the annual mean value was 207 W/m². Using rough set theory, we find that the wind power depends more strongly on the wind speed than on the air density
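    The record's starting point is the standard expression for mean wind power density per unit area, half the time average of air density times the cube of the wind speed; averaging speed and density first and then cubing understates it, which is what the correction factor repairs. A small sketch with synthetic data (the Weibull shape, scale and density series are invented, not the Egyptian measurements) makes the gap concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical hourly series: Weibull-like wind speed (m/s), varying air density.
v = rng.weibull(2.0, 8760) * 7.0
rho = 1.2 + 0.02 * rng.standard_normal(8760)

exact = 0.5 * np.mean(rho * v**3)            # time-resolved power density, W/m^2
naive = 0.5 * np.mean(rho) * np.mean(v)**3   # density and speed averaged first
print(f"exact {exact:.0f} W/m^2, naive {naive:.0f} W/m^2, "
      f"correction factor {exact / naive:.2f}")   # ~1.9 for Weibull shape 2
```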

  9. APPLICATION OF ROUGH SET THEORY TO MAINTENANCE LEVEL DECISION-MAKING FOR AERO-ENGINE MODULES BASED ON INCREMENTAL KNOWLEDGE LEARNING

    Institute of Scientific and Technical Information of China (English)

    陆晓华; 左洪福; 蔡景

    2013-01-01

    The maintenance of an aero-engine usually includes three levels, and the maintenance cost and period greatly differ depending on the maintenance level. To plan a reasonable maintenance budget program, airlines would like to predict the maintenance level of an aero-engine before repair in terms of performance parameters, which can provide more economic benefits. The maintenance level decision rules are mined from the historical maintenance data of a civil aero-engine based on rough set theory, and a variety of possible models for updating the rules, produced as newly added maintenance cases enter the historical maintenance case database, are investigated by means of incremental machine learning. The continuously updated rules can provide reasonable guidance for engineers and decision support for planning a maintenance budget program before repair. The results of an example show that the decision rules become more typical and robust, and more accurate in predicting the maintenance level of an aero-engine module, as the maintenance data increase, which illustrates the feasibility of the presented method.

  10. Study on Supplier Selection for Photovoltaic Enterprises Based on Rough Set

    Institute of Scientific and Technical Information of China (English)

    甘卫华; 张蕊

    2013-01-01

    In this paper, in light of the problems faced by photovoltaic enterprises after rapid industrial expansion, such as excess production capacity, technical complexity, a short development history and a volatile industrial environment, we analyzed the raw materials of a key solar energy capture component in the empirical case of a photovoltaic enterprise. Rough sets can treat uncertain things relatively objectively without prior knowledge, so we used the rough set method to select suppliers for these raw materials, providing the enterprise with scientifically grounded advice.

  11. Cooling-load prediction by the combination of rough set theory and an artificial neural-network based on data-fusion technique

    International Nuclear Information System (INIS)

    Hou Zhijian; Lian Zhiwei; Yao Ye; Yuan Xinjian

    2006-01-01

    A novel method integrating rough set (RS) theory and an artificial neural network (ANN) based on a data-fusion technique is presented to forecast an air-conditioning load. Data fusion is the process of combining data from multiple sensors or related information to estimate or predict entity states. In this paper, RS theory is applied to find the factors relevant to the load, which are used as inputs of an artificial neural network to predict the cooling load. To improve the accuracy and enhance the robustness of the load forecasting results, a general load-prediction model synthesizing multiple RSANs (MRAN) is presented, so as to make full use of redundant information. The optimum principle is employed to deduce the weights of each RSAN model. Actual prediction results from a real air-conditioning system show that the MRAN forecasting model is better than the individual RSAN and autoregressive integrated moving average (ARIMA) models, with a relative error within 4%. In addition, the individual RSAN forecasting results are better than those of ARIMA
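    The abstract does not spell out the "optimum principle" used to weight the individual RSAN models; one common concrete choice, assumed here purely for illustration, is inverse-squared-error weighting of the member forecasts.

```python
import numpy as np

# Hypothetical validation errors (e.g. RMSE) of three individual predictors.
errors = np.array([0.9, 1.4, 1.1])

# Inverse-squared-error weights: more accurate models count more.
w = 1.0 / errors**2
w /= w.sum()

# Fused cooling-load forecast from the individual model outputs (kW).
forecasts = np.array([412.0, 398.0, 405.0])
print("weights:", np.round(w, 3), "fused forecast:", float(w @ forecasts))
```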

  12. Classification of breast masses in ultrasound images using self-adaptive differential evolution extreme learning machine and rough set feature selection.

    Science.gov (United States)

    Prabusankarlal, Kadayanallur Mahadevan; Thirumoorthy, Palanisamy; Manavalan, Radhakrishnan

    2017-04-01

    A method using rough set feature selection and an extreme learning machine (ELM), whose learning strategy and hidden node parameters are optimized by the self-adaptive differential evolution (SaDE) algorithm, is investigated for the classification of breast masses. A pathologically proven database of 140 breast ultrasound images, including 80 benign and 60 malignant, is used for this study. A fast nonlocal means algorithm is applied for speckle noise removal, and multiresolution analysis of the undecimated discrete wavelet transform is used for accurate segmentation of breast lesions. A total of 34 features, including 29 textural and five morphological, are applied to a [Formula: see text]-fold cross-validation scheme, in which the more relevant features are selected by the quick-reduct algorithm, and the breast masses are discriminated as benign or malignant using the SaDE-ELM classifier. The diagnostic accuracy of the system is assessed using parameters such as accuracy (Ac), sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), Matthew's correlation coefficient (MCC), and the area ([Formula: see text]) under the receiver operating characteristic curve. The performance of the proposed system is also compared with other classifiers, such as the support vector machine and ELM. The results indicated that the proposed SaDE algorithm has superior performance with [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] compared to other classifiers.
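    At the core of the classifier is a basic extreme learning machine: a random hidden layer followed by a single least-squares solve for the output weights, with SaDE then tuning the hidden-node parameters. A minimal ELM sketch on invented stand-in data (not the ultrasound feature set, and without the SaDE step) might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=40):
    """Basic ELM: random hidden layer, least-squares output weights.
    (SaDE, as in the paper, would tune W and b instead of leaving them random.)"""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y    # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy benign(0)/malignant(1) discrimination on hypothetical feature vectors.
X = rng.standard_normal((140, 34))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
model = elm_train(X, y)
pred = (elm_predict(model, X) > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```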

  13. Fuzzy Rough Ring and Its Properties

    Institute of Scientific and Technical Information of China (English)

    REN Bi-jun; FU Yan-ling

    2013-01-01

    This paper is devoted to the theory of fuzzy rough rings and their properties. The fuzzy approximation space generated by fuzzy ideals and the fuzzy rough approximation operators are proposed in the frame of the fuzzy rough set model. The basic properties of the fuzzy rough approximation operators are analyzed, and the consistency between the approximation operators and the binary operations of the ring is discussed.
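    For orientation, a common textbook formulation of the fuzzy rough lower and upper approximations of a fuzzy set A under a fuzzy relation R on a universe U is given below; the paper's operators, generated by fuzzy ideals, may differ in detail.

```latex
\underline{R}A(x) = \inf_{y \in U} \max\{\, 1 - R(x,y),\; A(y) \,\}, \qquad
\overline{R}A(x)  = \sup_{y \in U} \min\{\, R(x,y),\; A(y) \,\}
```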

  14. Automated Detection of Cancer Associated Genes Using a Combined Fuzzy-Rough-Set-Based F-Information and Water Swirl Algorithm of Human Gene Expression Data.

    Directory of Open Access Journals (Sweden)

    Pugalendhi Ganesh Kumar

    Full Text Available This study describes a novel approach to reducing the challenges of highly nonlinear multiclass gene expression values for cancer diagnosis. To build a fruitful system for cancer diagnosis, in this study, we introduced two levels of gene selection, filtering and embedding, for selection of potential genes and the most relevant genes associated with cancer, respectively. The filter procedure was implemented by developing a fuzzy rough set (FR)-based method for redefining the criterion function of f-information (FI) to identify the potential genes without discretizing the continuous gene expression values. The embedded procedure is implemented by means of a water swirl algorithm (WSA), which attempts to optimize the rule set and membership function required to classify samples using a fuzzy-rule-based multiclassification system (FRBMS). Two novel update equations are proposed in WSA, which have better exploration and exploitation abilities while designing a self-learning FRBMS. The efficiency of our new approach was evaluated on 13 multicategory and 9 binary datasets of cancer gene expression. Additionally, the performance of the proposed FRFI-WSA method in designing an FRBMS was compared with existing methods for gene selection and optimization such as the genetic algorithm (GA), particle swarm optimization (PSO), and the artificial bee colony algorithm (ABC) on all the datasets. In the global cancer map with repeated measurements (GCM_RM) dataset, the FRFI-WSA showed the smallest number of 16 most relevant genes associated with cancer using a minimal number of 26 compact rules with the highest classification accuracy (96.45%). In addition, the statistical validation used in this study revealed that the biological relevance of the most relevant genes associated with cancer and their linguistics detected by the proposed FRFI-WSA approach are better than those in the other methods. The simple interpretable rules with most relevant genes and effectively

  15. Multi-Optimisation Consensus Clustering

    Science.gov (United States)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
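    Consensus clustering methods of this family typically aggregate the base partitions into a co-association matrix (how often two items share a cluster) and then re-cluster that matrix. The sketch below shows only this generic step on invented two-blob data; MOCC's agreement-separation criterion and multi-optimisation framework are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])

# Build a co-association matrix from several differently seeded base clusterings.
n = len(X)
co = np.zeros((n, n))
for seed in range(10):
    labels = KMeans(n_clusters=2, n_init=5, random_state=seed).fit_predict(X)
    co += labels[:, None] == labels[None, :]
co /= 10.0

# Consensus partition: hierarchical clustering on 1 - co-association.
dist = squareform(1.0 - co, checks=False)
consensus = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(consensus)
```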

  16. Digital radiography: are the manufacturers' settings too high? Optimisation of the Kodak digital radiography system with aid of the computed radiography dose index

    International Nuclear Information System (INIS)

    Peters, Sinead E.; Brennan, Patrick C.

    2002-01-01

    Manufacturers offer exposure indices as a safeguard against overexposure in computed radiography, but the basis for the recommended values is unclear. This study establishes an optimum exposure index to be used as a guideline for a specific CR system to minimise radiation exposures for computed mobile chest radiography, and compares this with manufacturer guidelines and current practice. An anthropomorphic phantom was employed to establish the minimum mAs consistent with acceptable image quality for mobile chest radiography images; this was found to be 2 mAs. Consecutively, 10 patients were exposed with this optimised mAs value and 10 patients were exposed with the 3.2 mAs routinely used in the department of the study. Image quality was objectively assessed using anatomical criteria. Retrospective analyses of 717 exposure indices recorded over 2 months from mobile chest examinations were performed. The optimised mAs value provided a significant reduction of the average exposure index from 1840 to 1570 (p<0.0001). This new "optimum" exposure index is substantially lower than the manufacturer guideline of 2000 and significantly lower than the exposure indices from the retrospective study (1890). Retrospective data showed a significant increase in exposure indices if the examination was performed out of hours. The data provided by this study emphasise the need for clinicians and personnel to consider establishing their own optimum exposure indices for digital investigations rather than simply accepting manufacturers' guidelines. Such an approach, along with regular monitoring of indices, may result in a substantial reduction in patient exposure. (orig.)

  17. Implementing large-scale programmes to optimise the health workforce in low- and middle-income settings: a multicountry case study synthesis.

    Science.gov (United States)

    Gopinathan, Unni; Lewin, Simon; Glenton, Claire

    2014-12-01

    To identify factors affecting the implementation of large-scale programmes to optimise the health workforce in low- and middle-income countries. We conducted a multicountry case study synthesis. Eligible programmes were identified through consultation with experts and using Internet searches. Programmes were selected purposively to match the inclusion criteria. Programme documents were gathered via Google Scholar and PubMed and from key informants. The SURE Framework - a comprehensive list of factors that may influence the implementation of health system interventions - was used to organise the data. Thematic analysis was used to identify the key issues that emerged from the case studies. Programmes from Brazil, Ethiopia, India, Iran, Malawi, Venezuela and Zimbabwe were selected. Key system-level factors affecting the implementation of the programmes were related to health worker training and continuing education, management and programme support structures, the organisation and delivery of services, community participation, and the sociopolitical environment. Existing weaknesses in health systems may undermine the implementation of large-scale programmes to optimise the health workforce. Changes in the roles and responsibilities of cadres may also, in turn, impact the health system throughout. © 2014 John Wiley & Sons Ltd.

  18. Computer Based Optimisation Routines

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemmming Ove

    1996-01-01

    In this paper the need for optimisation methods for the laser cutting process has been identified as three different situations. Demands on the optimisation methods for these situations are presented, and one method for each situation is suggested. The adaptation and implementation of the methods...

  19. Optimal Optimisation in Chemometrics

    NARCIS (Netherlands)

    Hageman, J.A.

    2004-01-01

    The use of global optimisation methods is not straightforward, especially for the more difficult optimisation problems. Solutions have to be found for items such as the evaluation function, representation, step function and meta-parameters, before any useful results can be obtained. This thesis aims

  20. Optimisation of milling parameters using neural network

    Directory of Open Access Journals (Sweden)

    Lipski Jerzy

    2017-01-01

    The purpose of this study was to design and test intelligent computer software developed to increase the average productivity of milling without compromising the design features of the final product. The developed system generates optimal milling parameters based on the extent of tool wear. The optimisation algorithm employs a multilayer model of the milling process built as an artificial neural network. The input parameters for model training are cutting speed vc, feed per tooth fz and the degree of tool wear measured by means of localised flank wear (VB3). The output parameter is the roughness Ra of the machined surface. Since the neural network model approximates the underlying functional relationships well, it was applied to determine optimal milling parameters under changing tool wear conditions (VB3) while stabilising the surface roughness parameter Ra. The solution enables constant control over surface roughness and milling productivity after each assessment of tool condition. The recommended parameters, i.e. those which ensure the desired surface roughness and maximal productivity, are selected from all the parameters generated by the model. The developed software may constitute an expert system supporting a milling machine operator. In addition, the application may be installed on a mobile device (smartphone), connected to a tool wear diagnostics instrument and the machine tool controller, in order to supply updated optimal milling parameters. The presented solution facilitates tool life optimisation and decreases tool change costs, particularly during prolonged operation.
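
    To make the optimisation loop described above concrete, here is a minimal Python sketch of the same idea: a neural network surrogate mapping (vc, fz, VB3) to Ra, plus a grid search for the most productive feasible parameters. It assumes scikit-learn's MLPRegressor as the network; the training data, network size, roughness limit and the productivity proxy vc*fz are all hypothetical stand-ins, not values from the paper:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        # Hypothetical stand-in for measured data: cutting speed vc [m/min],
        # feed per tooth fz [mm/tooth], flank wear VB3 [mm] -> roughness Ra [um].
        vc = rng.uniform(150.0, 300.0, 200)
        fz = rng.uniform(0.05, 0.20, 200)
        vb = rng.uniform(0.0, 0.30, 200)
        ra = 0.5 + 8.0 * fz**1.8 + 1.5 * vb + rng.normal(0.0, 0.05, 200)

        model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        model.fit(np.column_stack([vc, fz, vb]), ra)

        def recommend(vb_now, ra_max=1.6):
            """Return the most productive (vc, fz) whose predicted Ra stays acceptable."""
            V, F = np.meshgrid(np.linspace(150.0, 300.0, 31), np.linspace(0.05, 0.20, 31))
            X = np.column_stack([V.ravel(), F.ravel(), np.full(V.size, vb_now)])
            ra_hat = model.predict(X)
            proxy = np.where(ra_hat <= ra_max, V.ravel() * F.ravel(), -np.inf)
            i = int(np.argmax(proxy))
            return V.ravel()[i], F.ravel()[i], ra_hat[i]

        print(recommend(0.15))  # re-run after each tool wear assessment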

  1. Axis Problem of Rough 3-Valued Algebras

    Institute of Scientific and Technical Information of China (English)

    Jianhua Dai; Weidong Chen; Yunhe Pan

    2006-01-01

    The collection of all the rough sets of an approximation space has been given several algebraic interpretations, including Stone algebras, regular double Stone algebras, semi-simple Nelson algebras, pre-rough algebras and 3-valued Lukasiewicz algebras. A 3-valued Lukasiewicz algebra is a Stone algebra, a regular double Stone algebra, a semi-simple Nelson algebra, and a pre-rough algebra. Thus, we call the algebra constructed from the collection of rough sets of an approximation space a rough 3-valued Lukasiewicz algebra. In this paper, the rough 3-valued Lukasiewicz algebras, which are a special kind of 3-valued Lukasiewicz algebras, are studied. Whether the rough 3-valued Lukasiewicz algebra is an axled 3-valued Lukasiewicz algebra is examined.
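
    The algebraic constructions above all start from the standard Pawlak approximations and the three-valued membership they induce; as background (textbook definitions, not specific to this record), in LaTeX notation:

        \underline{R}(X) = \{\, x \in U : [x]_R \subseteq X \,\}, \qquad
        \overline{R}(X)  = \{\, x \in U : [x]_R \cap X \neq \emptyset \,\}

    An object is certainly in X if it lies in \underline{R}(X), possibly in X if it lies in \overline{R}(X) \setminus \underline{R}(X), and certainly outside X otherwise; this three-valued membership is what links rough sets to 3-valued Lukasiewicz algebras.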

  2. Optimised Renormalisation Group Flows

    CERN Document Server

    Litim, Daniel F

    2001-01-01

    Exact renormalisation group (ERG) flows interpolate between a microscopic or classical theory and the corresponding macroscopic or quantum effective theory. For most problems of physical interest, the efficiency of the ERG is constrained due to unavoidable approximations. Approximate solutions of ERG flows depend spuriously on the regularisation scheme which is determined by a regulator function. This is similar to the spurious dependence on the ultraviolet regularisation known from perturbative QCD. Providing a good control over approximated ERG flows is at the root for reliable physical predictions. We explain why the convergence of approximate solutions towards the physical theory is optimised by appropriate choices of the regulator. We study specific optimised regulators for bosonic and fermionic fields and compare the optimised ERG flows with generic ones. This is done up to second order in the derivative expansion at both vanishing and non-vanishing temperature. An optimised flow for a ``proper-time ren...

  3. Lambda based control O{sub 2} set point optimisation and evaluation; Lambdabaserad reglering. Boervaerdesoptimering av O{sub 2} och utvaerdering

    Energy Technology Data Exchange (ETDEWEB)

    Svensson, Mikael; Brodin, Peter [Vattenfall Utveckling, Aelvkarleby (Sweden)

    2004-10-01

    During winter and spring 2003, the project 'Lambda based control' was carried out at Vattenfall Utveckling AB in Aelvkarleby, Sweden. Its main purpose was to explore whether conventional lambda sensors could be used to control the fuel/air ratio in small boilers, and the conclusion was that this is possible. To make use of that result, the question of what the numerical set value for O{sub 2} should be has to be answered, since several parameters influence the oxygen level in the combustion gas. The present project therefore explores whether there is a cost-efficient way of controlling the fuel/air ratio using lambda sensors. Building on the experience from project P4-209, the scope is to: find out which parameters correlate most strongly with lambda; develop a method to decide which and how many parameters to use in order to optimise cost efficiency; calculate the optimal set value for O{sub 2} in one of the boilers used for experiments in the project; and evaluate the method and compare important operating parameters such as efficiency and emissions. The method developed in the project uses initial measurements to find the relation between O{sub 2} and emissions at different power levels. A set point curve is then calculated, where the set point for O{sub 2} is expressed as a function of power level in the current boiler. The method has been implemented and evaluated on a 400 kW boiler in Aelvkarleby, Sweden. The results are improvements in efficiency (6%) and emissions: CO decreased by 40% and NO by 20%. The conclusion is that lambda based control according to this method could be a profitable investment under the right circumstances, where stability of the boiler characteristics is the most important property. What makes the method uncertain is its inability to handle changes in the characteristics of a boiler.
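
    The set point curve idea lends itself to a very small computation; the sketch below (Python, with purely hypothetical commissioning numbers) fits a quadratic curve through the O2 levels that minimised emissions at each measured power level:

        import numpy as np

        # Hypothetical commissioning data: boiler power [kW] and the O2 level [%]
        # that minimised CO emissions at that power during the initial measurements.
        power = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 350.0, 400.0])
        best_o2 = np.array([7.5, 6.8, 6.2, 5.8, 5.5, 5.3, 5.2])

        # Quadratic set point curve O2_set = f(power), fed to the lambda controller.
        setpoint = np.poly1d(np.polyfit(power, best_o2, deg=2))
        print(setpoint(275.0))  # O2 set point at 275 kW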

  4. Surface correlations of hydrodynamic drag for transitionally rough engineering surfaces

    Science.gov (United States)

    Thakkar, Manan; Busse, Angela; Sandham, Neil

    2017-02-01

    Rough surfaces are usually characterised by a single equivalent sand-grain roughness height scale that typically needs to be determined from laboratory experiments. Recently, this method has been complemented by a direct numerical simulation approach, whereby representative surfaces can be scanned and the roughness effects computed over a range of Reynolds numbers. This development raises the prospect, over the coming years, of having enough data for different types of rough surfaces to be able to relate surface characteristics to roughness effects, such as the roughness function that quantifies the downward displacement of the logarithmic law of the wall. In the present contribution, we use simulation data for 17 irregular surfaces at the same friction Reynolds number, at which they are in the transitionally rough regime. All surfaces are scaled to the same physical roughness height. Mean streamwise velocity profiles show a wide range of roughness function values, while the velocity defect profiles show a good collapse. Profile peaks of the turbulent kinetic energy also vary depending on the surface. We then consider which surface properties are important and how new properties can be incorporated into an empirical model, the accuracy of which can then be tested. Optimised models with several roughness parameters are systematically developed for the roughness function and the profile peak turbulent kinetic energy. In determining the roughness function, besides the known parameters of solidity (or frontal area ratio) and skewness, it is shown that the streamwise correlation length and the root-mean-square roughness height are also significant. The peak turbulent kinetic energy is determined by the skewness and root-mean-square roughness height, along with the mean forward-facing surface angle and spanwise effective slope. The results suggest the feasibility of relating rough-wall flow properties (throughout the range from hydrodynamically smooth to fully rough) to surface
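
    Several of the surface statistics named above are one-liners on a scanned height map; a minimal Python sketch (synthetic surface, assumed uniform grid spacing, not the study's data) follows:

        import numpy as np

        def roughness_parameters(h, dx):
            """rms height, skewness and effective slopes of a 2-D height map h."""
            h = h - h.mean()
            k_rms = np.sqrt(np.mean(h**2))                   # root-mean-square height
            skew = np.mean(h**3) / k_rms**3                  # skewness
            es_x = np.mean(np.abs(np.diff(h, axis=1))) / dx  # streamwise effective slope
            es_z = np.mean(np.abs(np.diff(h, axis=0))) / dx  # spanwise effective slope
            return k_rms, skew, es_x, es_z

        rng = np.random.default_rng(0)
        surface = rng.normal(0.0, 1e-5, (128, 128))          # synthetic scanned patch [m]
        print(roughness_parameters(surface, dx=1e-4))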

  5. Methods for Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte

    This thesis deals with the adaptation and implementation of various optimisation methods, in the field of experimental design, for the laser cutting process. The problem in optimising the laser cutting process has been defined and a structure for a Decision Support System (DSS) for the optimisation of the laser cutting process has been suggested. The DSS consists of a database with the currently used and old parameter settings. One of the optimisation methods has also been implemented in the DSS in order to facilitate the optimisation procedure for the laser operator. The Simplex Method has been adapted in two versions: a qualitative one, which optimises the process by comparing the laser-cut items, and a quantitative one, which uses a weighted quality response in order to achieve a satisfactory quality and after that maximises the cutting speed, thus increasing the productivity of the process...
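
    The quantitative Simplex variant described above (achieve a satisfactory weighted quality response, then maximise cutting speed) can be sketched with an off-the-shelf Nelder-Mead search; the quality surface, weights and limits below are hypothetical stand-ins, not the thesis's measured responses:

        import numpy as np
        from scipy.optimize import minimize

        def weighted_quality(params):
            """Stand-in for the measured weighted quality response of a cut
            (roughness, dross and kerf width combined; lower is better)."""
            speed, power = params
            return (speed - 2.5) ** 2 + 0.5 * (power - 1.2) ** 2

        def objective(params):
            speed, _ = params
            penalty = max(0.0, weighted_quality(params) - 0.1)  # keep quality acceptable
            return -speed + 50.0 * penalty                      # then maximise speed

        result = minimize(objective, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
        print(result.x)  # cutting speed and laser power suggested by the simplex search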

  6. Vaccine strategies: Optimising outcomes.

    Science.gov (United States)

    Hardt, Karin; Bonanni, Paolo; King, Susan; Santos, Jose Ignacio; El-Hodhod, Mostafa; Zimet, Gregory D; Preiss, Scott

    2016-12-20

    Successful immunisation programmes generally result from high vaccine effectiveness and adequate uptake of vaccines. In the development of new vaccination strategies, the structure and strength of the local healthcare system is a key consideration. In high income countries, existing infrastructures are usually used, while in less developed countries, the capacity for introducing new vaccines may need to be strengthened, particularly for vaccines administered beyond early childhood, such as the measles or human papillomavirus (HPV) vaccine. Reliable immunisation service funding is another important factor and low income countries often need external supplementary sources of finance. Many regions also obtain support in generating an evidence base for vaccination via initiatives created by organisations including the World Health Organization (WHO), the Pan American Health Organization (PAHO), the Agence de Médecine Préventive and the Sabin Vaccine Institute. Strong monitoring and surveillance mechanisms are also required. An example is the set of efficient and low-cost approaches for measuring the impact of the hepatitis B control initiative and evaluating achievement of goals that have been established in the WHO Western Pacific region. A review of implementation strategies reveals differing degrees of success. For example, in the Americas, PAHO advanced a measles-mumps-rubella vaccine strategy, targeting different population groups in mass, catch-up and follow-up vaccination campaigns. This has had much success, but coverage data from some parts of the region suggest that children are still not receiving all appropriate vaccines, highlighting problems with local service infrastructures. Stark differences in coverage levels are also observed among high income countries, as is the case with HPV vaccine implementation in the USA versus the UK and Australia, reflecting differences in delivery settings. Experience and research have shown which vaccine strategies work well and the

  7. Optimising Magnetostatic Assemblies

    DEFF Research Database (Denmark)

    Insinga, Andrea Roberto; Smith, Anders

    ...theorem. This theorem formulates an energy equivalence principle with several implications concerning the optimisation of objective functionals that are linear with respect to the magnetic field. Linear functionals represent different optimisation goals, e.g. maximising a certain component of the field... approached employing a heuristic algorithm, which led to new design concepts. Some of the procedures developed for linear objective functionals have been extended to non-linear objectives by employing iterative techniques. Even though most of the optimality results discussed in this work have been derived...

  8. Risk based test interval and maintenance optimisation - Application and uses

    International Nuclear Information System (INIS)

    Sparre, E.

    1999-10-01

    The project is part of an IAEA co-ordinated Research Project (CRP) on 'Development of Methodologies for Optimisation of Surveillance Testing and Maintenance of Safety Related Equipment at NPPs'. The purpose of the project is to investigate the sensitivity of the results obtained when performing risk based optimisation of the technical specifications. Previous projects have shown that complete LPSA models can be created and that these models allow optimisation of technical specifications. However, these optimisations did not include any in-depth check of the result sensitivity with regard to methods, model completeness, etc. Four different test intervals have been investigated in this study. Aside from the original, nominal optimisation, a set of sensitivity analyses has been performed and the results from these analyses have been compared to the original optimisation. The analyses indicate that the result of an optimisation is rather stable. However, it is not possible to draw any firm conclusions without performing a number of sensitivity analyses. Significant differences in the optimisation result were discovered when analysing an alternative configuration. Deterministic uncertainties also seem to affect the result of an optimisation considerably. The sensitivity to failure data uncertainties is important to investigate in detail, since the methodology is based on the assumption that the unavailability of a component depends on the length of the test interval.
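
    The trade-off being optimised here is often introduced with a first-order unavailability model; the sketch below (a standard textbook model with hypothetical data, not the study's LPSA model) shows how a risk-optimal test interval arises:

        import numpy as np

        lam, rho, tau = 1e-5, 1e-4, 2.0  # standby failure rate [1/h], per-demand term, test downtime [h]

        def mean_unavailability(T):
            """Failures accumulate between tests (lam*T/2), plus a per-demand
            contribution and the outage caused by the test itself (tau/T)."""
            return lam * T / 2.0 + rho + tau / T

        T = np.linspace(100.0, 10000.0, 2000)
        U = mean_unavailability(T)
        print(T[np.argmin(U)], U.min())  # analytic optimum: T* = sqrt(2*tau/lam), about 632 h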

  9. Measurement of surface roughness

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with two 3 hours laboratory exercises that are part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The laboratories include a demonstration of the function of roughness measuring instruments plus a series of exercises illustrating roughness measurement...

  10. Optimisation of radiation protection

    International Nuclear Information System (INIS)

    1988-01-01

    Optimisation of radiation protection is one of the key elements in the current radiation protection philosophy. The present system of dose limitation was issued in 1977 by the International Commission on Radiological Protection (ICRP) and includes, in addition to the requirements of justification of practices and limitation of individual doses, the requirement that all exposures be kept as low as is reasonably achievable, taking social and economic factors into account. This last principle is usually referred to as optimisation of radiation protection, or the ALARA principle. The NEA Committee on Radiation Protection and Public Health (CRPPH) organised an ad hoc meeting, in liaison with the NEA committees on the safety of nuclear installations and radioactive waste management. Separate abstracts were prepared for individual papers presented at the meeting

  11. How to apply the Score-Function method to standard discrete event simulation tools in order to optimise a set of system parameters simultaneously: A Job-Shop example will be discussed

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2000-01-01

    During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ...
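
    For readers unfamiliar with the score-function method named in the title, a minimal self-contained Python sketch follows; the exponential system and the performance measure are hypothetical, chosen only so that the estimator E[f(X) d log p/d theta] and a simple stochastic-approximation loop can be shown end to end:

        import numpy as np

        rng = np.random.default_rng(1)

        def sf_gradient(theta, n=50_000):
            """Score-function estimate of d/dtheta E[f(X)], X ~ Exp(rate=theta);
            the score of the exponential density is 1/theta - x."""
            x = rng.exponential(1.0 / theta, n)
            f = -(x - 1.0) ** 2            # hypothetical performance to maximise
            return np.mean(f * (1.0 / theta - x))

        theta = 1.0                        # decision parameter of the simulated system
        for k in range(1, 101):            # Robbins-Monro style ascent on E[f]
            theta = max(0.1, theta + 0.5 / k * sf_gradient(theta))
        print(theta)                       # drifts towards the analytic optimum theta = 2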

  12. Advanced optimisation - coal fired power plant operations

    Energy Technology Data Exchange (ETDEWEB)

    Turney, D.M.; Mayes, I. [E.ON UK, Nottingham (United Kingdom)

    2005-03-01

    The purpose of this unit optimisation project is to develop an integrated approach to unit optimisation and an overall optimiser that is able to resolve any conflicts between the individual optimisers. The individual optimisers considered during this project are: the on-line thermal efficiency package, the GNOCIS boiler optimiser, the GNOCIS steam side optimiser, ESP optimisation, and an intelligent sootblowing system. 6 refs., 7 figs., 3 tabs.

  13. Optimisation of monochrome images

    International Nuclear Information System (INIS)

    Potter, R.

    1983-01-01

    Gamma cameras with modern imaging systems usually digitize the signals to allow storage and processing of the image in a computer. Although such computer systems are widely used for the extraction of quantitative uptake estimates and the analysis of time variant data, the vast majority of nuclear medicine images is still interpreted on the basis of an observer's visual assessment of a photographic hardcopy image. The optimisation of hardcopy devices is therefore vital, and factors such as resolution, uniformity, noise, grey scales and display matrices are discussed. Once optimum display parameters have been determined, routine procedures for quality control need to be established; suitable procedures are discussed. (U.K.)

  14. Fingerprinting the type of line edge roughness

    Science.gov (United States)

    Fernández Herrero, A.; Pflüger, M.; Scholze, F.; Soltwisch, V.

    2017-06-01

    Lamellar gratings are widely used diffractive optical elements and are prototypes of structural elements in integrated electronic circuits. EUV scatterometry is very sensitive to structure details and imperfections, which makes it suitable for the characterization of nanostructured surfaces. As compared to X-ray methods, EUV scattering allows for steeper angles of incidence, which is highly preferable for the investigation of small measurement fields on semiconductor wafers. For the control of the lithographic manufacturing process, a rapid in-line characterization of nanostructures is indispensable. Numerous studies on the determination of regular geometry parameters of lamellar gratings from optical and Extreme Ultraviolet (EUV) scattering have also investigated the impact of roughness on the respective results. The challenge is to appropriately model the influence of structure roughness on the diffraction intensities used for the reconstruction of the surface profile. The impact of roughness has already been studied analytically, but only for gratings with periodic pseudo-roughness, owing to practical restrictions on the size of the computational domain. Our investigation aims at a better understanding of the scattering caused by line roughness. We designed a set of nine lamellar Si-gratings to be studied by EUV scatterometry. It includes one reference grating with no artificial roughness added, four gratings with a periodic roughness distribution (two with prevailing line edge roughness (LER) and two with line width roughness (LWR)), and four gratings with a stochastic roughness distribution (two with LER and two with LWR). We show that the type of line roughness has a strong impact on the angular distribution of the diffuse scatter. Our experimental results are not described well by the present modelling approach based on small, periodically repeated domains.
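
    The LER/LWR distinction exploited by the grating design is easy to state constructively: if both edges of a line fluctuate in phase, the line width is preserved (pure edge roughness), while anti-phase fluctuations maximise width roughness. A small Python sketch with hypothetical amplitudes and correlation length:

        import numpy as np

        rng = np.random.default_rng(42)

        def edge_profile(n, corr_len, sigma):
            """Gaussian edge displacement with a simple moving-average correlation."""
            white = rng.normal(size=n + corr_len)
            prof = np.convolve(white, np.ones(corr_len) / corr_len, mode="valid")[:n]
            return sigma * prof / prof.std()

        n, width = 1024, 50.0                  # samples along the line; nominal width [nm]
        e = edge_profile(n, corr_len=32, sigma=1.5)

        left_ler, right_ler = e, e + width     # in phase: edge roughness, constant width
        left_lwr, right_lwr = e, width - e     # anti-phase: maximal width roughness

        print(np.std(right_ler - left_ler), np.std(right_lwr - left_lwr))  # 0 vs ~2*sigma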

  15. Beam position optimisation for IMRT

    International Nuclear Information System (INIS)

    Holloway, L.; Hoban, P.

    2001-01-01

    The introduction of IMRT has not generally resulted in the use of optimised beam positions, because finding the global solution requires a time-consuming stochastic optimisation method. Although a deterministic method may not reach the global minimum, it should still achieve a superior dose distribution compared to no optimisation. This study aimed to develop and test such a method. The beam optimisation method developed relies on an iterative process to reach the desired number of beams from a large initial number. Beams are removed in a 'weeding-out' process based on the total fluence each beam delivers. The process is gradual, with only three beams removed each time (following a small number of iterations), ensuring that the reduction in beams does not dramatically affect the fluence maps of those remaining. A comparison was made between the dose distributions achieved when the beam positions were optimised in this fashion and when the beam positions were evenly distributed. The method has been shown to work effectively and efficiently. The figure shows a comparison of dose distributions with optimised and non-optimised beam positions for 5 beams; an improvement in the dose distribution delivered to the tumour and a reduction in the dose to the critical structure can be clearly seen with beam position optimisation. A method for beam position optimisation for use in IMRT has been developed. Although it does not necessarily achieve the global minimum in beam position, it still achieves a dramatic improvement compared with no beam position optimisation, and does so very efficiently. Copyright (2001) Australasian College of Physical Scientists and Engineers in Medicine
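
    The 'weeding-out' loop can be summarised in a few lines; the Python sketch below is a schematic of the idea only (the re-optimisation of fluence maps between removals, the heart of the actual method, is left abstract):

        import numpy as np

        def weed_beams(fluence_maps, n_target, n_drop=3):
            """Iteratively discard the beams delivering the least total fluence."""
            beams = dict(fluence_maps)             # beam id -> 2-D fluence array
            while len(beams) > n_target:
                drop = min(n_drop, len(beams) - n_target)
                for b in sorted(beams, key=lambda b: beams[b].sum())[:drop]:
                    del beams[b]
                # ... re-run a few fluence-map optimisation iterations here ...
            return beams

        rng = np.random.default_rng(0)
        maps = {angle: rng.random((10, 10)) for angle in range(0, 360, 10)}  # 36 beams
        print(sorted(weed_beams(maps, n_target=5)))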

  16. Optimisation of occupational exposure

    International Nuclear Information System (INIS)

    Webb, G.A.M.; Fleishman, A.B.

    1982-01-01

    The general concept of the optimisation of protection of the public is briefly described. Some ideas being developed for extending the cost benefit framework to include radiation workers with full implementation of the ALARA criterion are described. The role of cost benefit analysis in radiological protection and the valuation of health detriment including the derivation of monetary values and practical implications are discussed. Cost benefit analysis can lay out for inspection the doses, the associated health detriment costs and the costs of protection for alternative courses of action. However it is emphasised that the cost benefit process is an input to decisions on what is 'as low as reasonably achievable' and not a prescription for making them. (U.K.)

  17. Standardised approach to optimisation

    International Nuclear Information System (INIS)

    Warren-Forward, Helen M.; Beckhaus, Ronald

    2004-01-01

    Optimisation of radiographic images is said to have been achieved if the patient has received an acceptable level of dose and the image is of diagnostic value. In the near future, it will probably be recommended that radiographers measure patient doses and compare them to reference levels. The aim of this paper is to describe a standardised approach to optimisation of radiographic examinations in a diagnostic imaging department. A three-step approach is outlined, with specific examples for some common examinations (chest, abdomen, pelvis and lumbar spine series). Step One: patient doses are calculated. Step Two: doses are compared to existing reference levels and the technique used is compared to image quality criteria. Step Three: appropriate action is taken if doses are above the reference level. Results: average entrance surface doses for two rooms were as follows: AP abdomen (6.3 mGy and 3.4 mGy); AP lumbar spine (6.4 mGy and 4.1 mGy); AP pelvis (4.8 mGy and 2.6 mGy); and PA chest (0.19 mGy and 0.20 mGy). Comparison with the Commission of the European Communities (CEC) recommended techniques identified large differences in the applied potential: the kVp values in this study were significantly lower (by up to 10 kVp) than the CEC recommendations. The results of this study indicate that there is a need to monitor radiation doses received by patients undergoing diagnostic radiography examinations. Not only has the assessment allowed valuable comparison with International Diagnostic Reference Levels and Radiography Good Practice, it has also demonstrated large variations in mean doses being delivered from different rooms of the same radiology department. Following the simple 3-step approach advocated in this paper should either provide evidence that departments are practising the ALARA principle or assist in making suitable changes to current practice. Copyright (2004) Australian Institute of Radiography

  18. Roughing up Beta

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Li, Sophia Zhengzi; Todorov, Viktor

    Motivated by the implications from a stylized equilibrium pricing framework, we investigate empirically how individual equity prices respond to continuous, or "smooth," and jumpy, or "rough," market price moves, and how these different market price risks, or betas, are priced in the cross-section of expected returns. Based on a novel high-frequency dataset of almost one-thousand individual stocks over two decades, we find that the two rough betas associated with intraday discontinuous and overnight returns entail significant risk premiums, while the intraday continuous beta is not priced in the cross-section. An investment strategy that goes long stocks with high jump betas and short stocks with low jump betas produces significant average excess returns. These higher risk premiums for the discontinuous and overnight market betas remain significant after controlling for a long list of other firm characteristics...

  19. Topology optimised wavelength dependent splitters

    DEFF Research Database (Denmark)

    Hede, K. K.; Burgos Leon, J.; Frandsen, Lars Hagedorn

    A photonic crystal wavelength dependent splitter has been constructed by utilising topology optimisation [1]. The splitter has been fabricated in a silicon-on-insulator material (Fig. 1). The topology optimised wavelength dependent splitter demonstrates promising 3D FDTD simulation results... This complex photonic crystal structure is very sensitive to small fabrication deviations from the expected topology optimised design. A wavelength dependent splitter is an important basic building block for high-performance nanophotonic circuits. [1] J. S. Jensen and O. Sigmund, Appl. Phys. Lett. 84, 2022...

  20. Measurement of Turbulent Skin Friction Drag Coefficients Produced by Distributed Surface Roughness of Pristine Marine Coatings

    DEFF Research Database (Denmark)

    Zafiryadis, Frederik; Meyer, Knud Erik; Gökhan Ergin, F.

    ...drag coefficients as well as roughness Reynolds numbers for the various marine coatings across the range of Rex by fitting of the van Driest profile. The results demonstrate sound agreement with the present ITTC method for determining skin friction coefficients for practically smooth surfaces at low... Reynolds numbers compared to normal operation mode for the antifouling coatings. Thus, better estimates for skin friction of rough hulls can be realised using the proposed method to optimise preliminary vessel design.

  1. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    ...and the goal of the algorithm is to track a set of trade-off solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs) the DMOA has to find a set...

  2. Optimisation of the Laser Cutting Process

    DEFF Research Database (Denmark)

    Dragsted, Birgitte; Olsen, Flemmming Ove

    1996-01-01

    The problem in optimising the laser cutting process is outlined. Basic optimisation criteria and principles for adapting an optimisation method, the simplex method, are presented. The results of implementing a response function in the optimisation are discussed with respect to the quality as well...

  3. Turbulence optimisation in stellarator experiments

    Energy Technology Data Exchange (ETDEWEB)

    Proll, Josefine H.E. [Max-Planck/Princeton Center for Plasma Physics (Germany); Max-Planck-Institut fuer Plasmaphysik, Wendelsteinstr. 1, 17491 Greifswald (Germany); Faber, Benjamin J. [HSX Plasma Laboratory, University of Wisconsin-Madison, Madison, WI 53706 (United States); Helander, Per; Xanthopoulos, Pavlos [Max-Planck/Princeton Center for Plasma Physics (Germany); Lazerson, Samuel A.; Mynick, Harry E. [Plasma Physics Laboratory, Princeton University, P.O. Box 451 Princeton, New Jersey 08543-0451 (United States)

    2015-05-01

    Stellarators, the twisted siblings of the axisymmetric fusion experiments called tokamaks, have historically confined the heat of the plasma less well than tokamaks and were therefore considered less promising candidates for a fusion reactor. This has changed, however, with the advent of stellarators in which the laminar transport is reduced to levels below that of tokamaks by shaping the magnetic field accordingly. As in tokamaks, the turbulent transport remains as the now dominant transport channel. Recent analytical theory suggests that the large configuration space of stellarators allows for an additional optimisation of the magnetic field to also reduce the turbulent transport. In this talk, the idea behind the turbulence optimisation is explained. We also present how an optimised equilibrium is obtained and how it might differ from the equilibrium field of an already existing device, and we compare experimental turbulence measurements in different configurations of the HSX stellarator in order to test the optimisation procedure.

  4. Optimisation of load control

    International Nuclear Information System (INIS)

    Koponen, P.

    1998-01-01

    Electricity cannot be stored in large quantities. That is why the electricity supply and consumption are always almost equal in large power supply systems. If this balance were disturbed beyond stability, the system or a part of it would collapse until a new stable equilibrium is reached. The balance between supply and consumption is mainly maintained by controlling the power production, but also the electricity consumption or, in other words, the load is controlled. Controlling the load of the power supply system is important, if easily controllable power production capacity is limited. Temporary shortage of capacity causes high peaks in the energy price in the electricity market. Load control either reduces the electricity consumption during peak consumption and peak price or moves electricity consumption to some other time. The project Optimisation of Load Control is a part of the EDISON research program for distribution automation. The following areas were studied: Optimization of space heating and ventilation, when electricity price is time variable, load control model in power purchase optimization, optimization of direct load control sequences, interaction between load control optimization and power purchase optimization, literature on load control, optimization methods and field tests and response models of direct load control and the effects of the electricity market deregulation on load control. An overview of the main results is given in this chapter

  6. SPS batch spacing optimisation

    CERN Document Server

    Velotti, F M; Carlier, E; Goddard, B; Kain, V; Kotzian, G

    2017-01-01

    Until 2015, the LHC filling schemes used the batch spacing as specified in the LHC design report. The maximum number of bunches injectable in the LHC directly depends on the batch spacing at injection in the SPS and hence on the MKP rise time. As part of the LHC Injectors Upgrade project for LHC heavy ions, a reduction of the batch spacing is needed. In this direction, studies to approach the MKP design rise time of 150 ns (2-98%) have been carried out. These measurements gave clear indications that such optimisation, and beyond, could be done also for higher injection momentum beams, where the additional slower MKP (MKP-L) is needed. After the successful results from the 2015 SPS batch spacing optimisation for the Pb-Pb run [1], the same concept was considered for proton beams as well. In fact, thanks to the SPS transverse feedback, it was already observed that a lower batch spacing than the design one (225 ns) could be achieved. For the 2016 p-Pb run, a batch spacing of 200 ns for the proton beam with 100 ns bunch spacing was reque...

  7. ATLAS software configuration and build tool optimisation

    Science.gov (United States)

    Rybkin, Grigory; Atlas Collaboration

    2014-06-01

    The ATLAS software code base is over 6 million lines organised in about 2000 packages. It makes use of some 100 external software packages, is developed by more than 400 developers and used by more than 2500 physicists from over 200 universities and laboratories on 6 continents. To meet the challenge of configuring and building this software, the Configuration Management Tool (CMT) is used. CMT expects each package to describe its build targets, build and environment setup parameters, and dependencies on other packages in a text file called requirements, and each project (group of packages) to describe its policies and dependencies on other projects in a text project file. Based on the effective set of configuration parameters read from the requirements files of dependent packages and project files, CMT commands build the packages, generate the environment for their use, or query the packages. The main focus was on build time performance, which was optimised with several approaches: reduction of the number of reads of requirements files, which are now read once per package by a CMT build command that generates cached requirements files for subsequent CMT build commands; introduction of more fine-grained build parallelism at package task level, i.e., dependent applications and libraries are compiled in parallel; code optimisation of the CMT commands used for the build; and introduction of package-level build parallelism, i.e., the build of independent packages is parallelised. By default, CMT launches NUMBER-OF-PROCESSORS build commands in parallel. The other focus was on optimisation of CMT commands in general, which made them approximately 2 times faster. CMT can generate a cached requirements file for the environment setup command, which is especially useful for deployment on distributed file systems like AFS or CERN VMFS. The use of parallelism, caching and code optimisation significantly (by several times) reduced software build time and environment setup time, and increased the efficiency of
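
    Of the optimisations listed, package-level build parallelism is the easiest to illustrate; the Python sketch below is a generic level-by-level parallel build over a dependency graph (toy graph and build step, not CMT's actual implementation):

        import os
        from concurrent.futures import ThreadPoolExecutor

        # Hypothetical package dependency graph: package -> packages it depends on.
        deps = {"Core": [], "Event": ["Core"], "Reco": ["Event"],
                "Analysis": ["Event"], "Display": ["Core"]}

        def build(pkg):
            print("building", pkg)             # stand-in for the actual compile step

        built, remaining = set(), dict(deps)
        with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
            while remaining:
                ready = [p for p, d in remaining.items() if set(d) <= built]
                if not ready:
                    raise RuntimeError("dependency cycle")
                list(pool.map(build, ready))   # independent packages build in parallel
                built.update(ready)
                for p in ready:
                    remaining.pop(p)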

  8. Modelling dynamic roughness during floods

    NARCIS (Netherlands)

    Paarlberg, Andries; Dohmen-Janssen, Catarine M.; Hulscher, Suzanne J.M.H.; Termes, A.P.P.

    2007-01-01

    In this paper, we present a dynamic roughness model to predict water levels during floods. Hysteresis effects of dune development are explicitly included. It is shown that differences between the new dynamic roughness model, and models where the roughness coefficient is calibrated, are most

  9. Rough flows and homogenization in stochastic turbulence

    OpenAIRE

    Bailleul, I.; Catellier, R.

    2016-01-01

    We provide in this work a toolkit for the study of the homogenisation of random ordinary differential equations, in the form of a user-friendly black box based on the technology of rough flows. We illustrate the use of this setting on the example of stochastic turbulence.

  10. Research on Risk Allocation of Public-private Partnership Projects based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    巴希; 乌云娜; 胡新亮; 李泽众

    2013-01-01

    Public-private partnership (PPP) introduces private capital, technology and management experience into infrastructure construction and operation projects, bringing large economic and social benefits. In the practice of PPP projects in China, unclear risk sharing has been a key factor hindering wide adoption of the model in infrastructure investment and financing, and in serious cases it can lead to project failure. Addressing the uncertainty about which party should bear a given risk, which affects the lasting stability of the public-private cooperation, this paper identifies, through case studies and literature research, the risk factors whose allocation between the parties is unclear. On this basis, the rough set method is used to reduce the attributes of the risk-sharing evaluation index system, removing the factors that affect the sharing result least. The TOPSIS (ideal point) method can then evaluate the risk-bearing choices made by evaluators with different risk-sharing preferences, in order to determine a reasonable risk-sharing scheme. The evaluation results provide a reference for both public and private parties when developing a reasonable risk-sharing scheme.
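
    Attribute reduction, the rough set step used above to discard factors with little influence on the sharing result, can be shown on a toy decision table; a minimal Python sketch (illustrative data, not the paper's index system):

        from itertools import combinations

        # Toy decision table: (condition attribute values, risk-sharing decision).
        rows = [((1, 0, 1), "public"), ((1, 0, 0), "public"),
                ((0, 1, 1), "private"), ((0, 1, 0), "private"),
                ((1, 1, 1), "shared")]

        def positive_region_size(attrs):
            """Objects whose indiscernibility class (w.r.t. attrs) is decision-consistent."""
            classes = {}
            for cond, dec in rows:
                classes.setdefault(tuple(cond[a] for a in attrs), set()).add(dec)
            return sum(1 for cond, _ in rows
                       if len(classes[tuple(cond[a] for a in attrs)]) == 1)

        full = positive_region_size((0, 1, 2))
        reduct = next(s for r in range(1, 4)
                      for s in combinations((0, 1, 2), r)
                      if positive_region_size(s) == full)
        print(reduct)   # smallest attribute subset preserving the positive region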

  11. Optimising end of generation of Magnox reactors

    International Nuclear Information System (INIS)

    Hall, D.; Hopper, E.D.A.

    2014-01-01

    Designing, justifying and gaining regulatory approval for optimised, terminal fuel cycles for the last 4 of the 13-strong Magnox fleet is described, covering: - constraints set by the plant owner's integrated closure plan and opportunities for innovative fuel cycles while preserving flexibility to respond to business changes; - methods of collectively determining the best options for each site; - selected strategies, including lower fuel element retention and inter-reactor transfer of fuel; - the required work scope and its technical, safety case and resource challenges, and how they were met; - achieving additional electricity generation worth in excess of £1 billion from 4 sites (a total of 8 reactors); - the keys to success. (authors)

  12. On Granularity Reduction of Multi-granularity Rough Sets in Interval Information Systems under a Total-Order Dominance Relation

    Institute of Scientific and Technical Information of China (English)

    于莹莹

    2017-01-01

    Multi-granularity rough sets have emerged in recent years as a research direction within rough set theory. For interval information systems based on a dominance relation, this paper puts forward the concept of relative granularity reduction for multi-granularity rough sets and gives a granularity reduction algorithm based on granularity importance; a worked example is used to analyse the effectiveness of the proposed method.
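
    For background, the (optimistic) multi-granulation approximations that granularity reduction is asked to preserve are usually written as follows in the literature (standard definitions quoted as an assumption, not taken from this record), for granulations A_1, ..., A_m on a universe U:

        \underline{\Sigma A_i}^{O}(X) = \{\, x \in U : \exists\, i,\ [x]_{A_i} \subseteq X \,\}, \qquad
        \overline{\Sigma A_i}^{O}(X) = \{\, x \in U : \forall\, i,\ [x]_{A_i} \cap X \neq \emptyset \,\}

    Granularity reduction then seeks a minimal subfamily of the A_i that leaves these approximations (or a quality measure derived from them) unchanged.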

  13. Rough Surface Contact

    Directory of Open Access Journals (Sweden)

    T Nguyen

    2017-06-01

    This paper studies the contact of general rough curved surfaces having nearly identical geometries, assuming that the contact at each differential area obeys the model proposed by Greenwood and Williamson. In order to account for the most general gross geometry, principles of the differential geometry of surfaces are applied. While this method requires more rigorous mathematical manipulation, it preserves the original surface geometries, which makes the modelling procedure much more intuitive. For subsequent use, the differential geometry of an axis-symmetric surface is considered in Chapter 3.1 instead of that of a general surface (although the "general case" could be treated as well). The final formulas for contact area, load, and frictional torque are derived in Chapter 3.2.
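
    The Greenwood-Williamson ingredient assumed at each differential contact area reduces to a single integral over the summit height distribution; a minimal Python sketch with hypothetical surface constants:

        import numpy as np
        from scipy.integrate import quad

        # All values hypothetical: summit density [1/m^2], summit tip radius [m],
        # std dev of summit heights [m], effective elastic modulus [Pa].
        eta, R, sigma, E_star = 1e10, 5e-6, 1e-7, 1e11

        def phi(z):
            return np.exp(-z**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

        def pressure(d):
            """Nominal GW contact pressure at separation d: each summit higher than
            d behaves as a Hertzian sphere indented by (z - d)."""
            integral, _ = quad(lambda z: (z - d) ** 1.5 * phi(z), d, 10 * sigma)
            return (4.0 / 3.0) * eta * E_star * np.sqrt(R) * integral

        print(pressure(2 * sigma))  # [Pa]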

  14. A conceptual optimisation strategy for radiography in a digital environment

    International Nuclear Information System (INIS)

    Baath, M.; Haakansson, M.; Hansson, J.; Maansson, L. G.

    2005-01-01

    Using a completely digital environment for the entire imaging process leads to new possibilities for optimisation of radiography since many restrictions of screen/film systems, such as the small dynamic range and the lack of possibilities for image processing, do not apply any longer. However, at the same time these new possibilities lead to a more complicated optimisation process, since more freedom is given to alter parameters. This paper focuses on describing an optimisation strategy that concentrates on taking advantage of the conceptual differences between digital systems and screen/film systems. The strategy can be summarised as: (a) always include the anatomical background during the optimisation, (b) perform all comparisons at a constant effective dose and (c) separate the image display stage from the image collection stage. A three-step process is proposed where the optimal setting of the technique parameters is determined at first, followed by an optimisation of the image processing. In the final step the optimal dose level - given the optimal settings of the image collection and image display stages - is determined. (authors)

  15. Automatic optimisation of gamma dose rate sensor networks: The DETECT Optimisation Tool

    DEFF Research Database (Denmark)

    Helle, K.B.; Müller, T.O.; Astrup, Poul

    2014-01-01

    Fast delivery of comprehensive information on the radiological situation is essential for decision-making in nuclear emergencies. Most national radiological agencies in Europe employ gamma dose rate sensor networks to monitor radioactive pollution of the atmosphere. Sensor locations were often... The DETECT Optimisation Tool (DOT) was developed as part of the EU FP 7 project DETECT. It evaluates the gamma dose rates that a proposed set of sensors might measure in an emergency and uses this information to optimise the sensor locations. The gamma dose rates are taken from a comprehensive library of simulations of atmospheric radioactive plumes from 64 source locations. These simulations cover the whole European Union, so the DOT allows evaluation and optimisation of sensor networks for all EU countries, as well as evaluation of fencing sensors around possible sources. Users can choose from seven cost functions to evaluate the capability of a given...

  16. Optimization of Surface Roughness and Wall Thickness in Dieless Incremental Forming Of Aluminum Sheet Using Taguchi

    Science.gov (United States)

    Hamedon, Zamzuri; Kuang, Shea Cheng; Jaafar, Hasnulhadi; Azhari, Azmir

    2018-03-01

    Incremental sheet forming is a versatile sheet metal forming process in which a sheet metal is formed into its final shape by a series of localised deformations without a specialised die. However, it still has many shortcomings that need to be overcome, such as geometric accuracy, surface roughness, formability and forming speed. This project focuses on minimising the surface roughness of aluminium sheet and improving its thickness uniformity in incremental sheet forming via optimisation of wall angle, feed rate and step size. The effects of wall angle, feed rate and step size on the surface roughness and thickness uniformity of aluminium sheet were also investigated. From the results, it was observed that surface roughness and thickness uniformity varied inversely, due to the formation of surface waviness: an increase in feed rate and a decrease in step size produce a lower surface roughness, while a uniform thickness reduction is obtained by reducing the wall angle and step size. Using Taguchi analysis, the optimum parameters for minimum surface roughness and uniform thickness reduction of aluminium sheet were determined. The findings of this project help to reduce the time needed to optimise surface roughness and thickness uniformity in incremental sheet forming.
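
    The Taguchi analysis used above ranks parameter settings by a signal-to-noise ratio; for 'smaller is better' responses such as Ra the standard formula is S/N = -10 log10(mean(y^2)), shown below with hypothetical replicate measurements:

        import numpy as np

        def sn_smaller_is_better(y):
            """Taguchi S/N ratio for responses to be minimised (e.g. Ra)."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(y**2))

        # Hypothetical replicated roughness measurements [um] at two settings;
        # the setting with the higher S/N ratio is preferred.
        print(sn_smaller_is_better([0.82, 0.79, 0.85]))
        print(sn_smaller_is_better([1.10, 1.25, 1.18]))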

  17. Fuzzy multi-project rough-cut capacity planning

    NARCIS (Netherlands)

    Masmoudi, Malek; Hans, Elias W.; Leus, Roel; Hait, Alain; Sotskov, Yuri N.; Werner, Frank

    2014-01-01

    This chapter studies the incorporation of uncertainty into multi-project rough-cut capacity planning. We use fuzzy sets to model uncertainties, adhering to the so-called possibilistic approach. We refer to the resulting proactive planning environment as Fuzzy Rough Cut Capacity Planning (FRCCP).

  18. Geometrical exploration of a flux-optimised sodium receiver through multi-objective optimisation

    Science.gov (United States)

    Asselineau, Charles-Alexis; Corsi, Clothilde; Coventry, Joe; Pye, John

    2017-06-01

    A stochastic multi-objective optimisation method is used to determine receiver geometries with maximum second law efficiency, minimal average temperature and minimal surface area. The method is able to identify a set of Pareto optimal candidates that show advantageous geometrical features, mainly in being able to maximise the intercepted flux within the geometrical boundaries set. Receivers with first law thermal efficiencies ranging from 87% to 91% are also evaluated using the second law of thermodynamics and found to have similar efficiencies of over 60%, highlighting the influence that the geometry can play in the maximisation of the work output of receivers by influencing the distribution of the flux from the concentrator.

  19. Roughness Effects on Fretting Fatigue

    Science.gov (United States)

    Yue, Tongyan; Abdel Wahab, Magd

    2017-05-01

    Fretting is a small oscillatory relative motion between two normally loaded contact surfaces. It may cause fretting fatigue, fretting wear and/or fretting corrosion damage, depending on the fretting couple and the working conditions. Fretting fatigue usually occurs under partial slip conditions and results in catastrophic failure at stress levels below the fatigue limit of the material. Many parameters may affect fretting behaviour, including the applied normal load and displacement, material properties, roughness of the contact surfaces, frequency, etc. Since fretting damage between contacting surfaces is undesirable, the effect of rough contact surfaces on fretting damage has been studied by many researchers. Experimental work on this topic usually focuses on the effects of surface finishing treatments and of random surface roughness, with the aim of increasing fretting fatigue life; however, most numerical models of roughness are based on random surfaces. This paper reviews both experimental and numerical methodology on the effects of rough surfaces on fretting fatigue.

  20. Isogeometric Analysis and Shape Optimisation

    DEFF Research Database (Denmark)

    Gravesen, Jens; Evgrafov, Anton; Gersborg, Allan Roulund

    ...of the whole domain. So in every optimisation cycle we need to extend a parametrisation of the boundary of a domain to the whole domain. It has to be fast in order not to slow the optimisation down, but it also has to be robust and give a parametrisation of high quality. These are conflicting requirements, so we... will explain how the validity of a parametrisation can be checked and we will describe various ways to parametrise a domain. We will in particular study the Winslow functional, which turns out to have some desirable properties. Another problem we touch upon is the clustering of boundary control points (design...

  1. Urban Aerodynamic Roughness Length Mapping Using Multitemporal SAR Data

    Directory of Open Access Journals (Sweden)

    Fengli Zhang

    2017-01-01

    Aerodynamic roughness is very important to urban meteorological and climate studies. Radar remote sensing is considered to be an effective means for aerodynamic roughness retrieval because radar backscattering is sensitive to the surface roughness and geometric structure of a given target. In this paper, a methodology for aerodynamic roughness length estimation using SAR data in urban areas is introduced. The scale and orientation characteristics of the backscattering of various targets in urban areas were first extracted and analysed, which showed the great potential of SAR data for characterising urban roughness elements. The ground truth aerodynamic roughness was then calculated from wind gradient data acquired by a meteorological tower, using a fitting and iteration method. Next, the optimal dimension of the upwind sector for the aerodynamic roughness calculation was determined through a correlation analysis between the backscattering extracted from SAR data over various upwind sector areas and the aerodynamic roughness calculated from the meteorological tower data. Finally, a quantitative relationship was set up to retrieve the aerodynamic roughness length from SAR data. Experiments based on ALOS PALSAR and COSMO-SkyMed data from 2006 to 2011 prove that the proposed methodology can provide accurate roughness length estimates for the spatial and temporal analysis of urban surfaces.
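
    The tower-side calculation rests on the neutral log wind profile u(z) = (u*/kappa) ln(z/z0), which is linear in ln(z), so z0 follows from a straight-line fit; a minimal Python sketch with hypothetical tower data:

        import numpy as np

        z = np.array([10.0, 20.0, 40.0, 80.0])   # anemometer heights [m] (hypothetical)
        u = np.array([3.1, 3.9, 4.7, 5.5])       # mean wind speeds [m/s] (hypothetical)

        # u = a*ln(z) + b, with a = u*/kappa and b = -a*ln(z0).
        a, b = np.polyfit(np.log(z), u, 1)
        kappa = 0.4
        print(kappa * a, np.exp(-b / a))         # friction velocity u* and roughness length z0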

  2. A new fiber optic sensor for inner surface roughness measurement

    Science.gov (United States)

    Xu, Xiaomei; Liu, Shoubin; Hu, Hong

    2009-11-01

    In order to measure the inner surface roughness of small holes nondestructively, a new fiber optic sensor has been researched and developed. Firstly, a new model for surface roughness measurement is proposed, based on intensity-modulated fiber optic sensing and scattering models of rough surfaces. Secondly, a fiber optic measurement system is designed and set up. With the help of new fabrication techniques, the fiber optic sensor can be miniaturised, and the use of a micro-prism turns the light through 90 degrees, so that the roughness of the inner side surfaces of small holes can be measured. Thirdly, the fiber optic sensor is calibrated using standard surface roughness specimens, and a series of measurement experiments has been performed. The measurement results are compared with those obtained by a TR220 Surface Roughness Instrument and a Form Talysurf Laser 635, and the validity of the developed fiber optic sensor is verified. Finally, the precision and influence factors of the fiber optic sensor are analysed.

  3. Numerical simulations of seepage flow in rough single rock fractures

    Directory of Open Access Journals (Sweden)

    Qingang Zhang

    2015-09-01

    To investigate the relationship between the structural characteristics and the seepage flow behaviour of rough single rock fractures, a set of single-fracture physical models was produced using Weierstrass–Mandelbrot functions to test seepage flow performance. Six single fractures, with surface roughness characterised by different fractal dimensions, were built using COMSOL Multiphysics software. The fluid flow behaviour through the rough fractures, and the influence of the rough surfaces on that behaviour, was then monitored. The numerical simulation indicates a linear relationship between the average flow velocity over the entire flow path and the fractal dimension of the rough surface. Good agreement is shown between the numerical results and the experimental data in terms of the properties of the fluid flowing through the rough single rock fractures.
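
    A Weierstrass–Mandelbrot-type profile of prescribed fractal dimension, the construction used for the fracture walls above, takes only a few lines; the parameter values below are illustrative, not those of the study:

        import numpy as np

        def wm_profile(x, D=1.5, G=1e-3, gamma=1.5, n_max=30):
            """Truncated Weierstrass-Mandelbrot-type rough profile, 1 < D < 2."""
            z = np.zeros_like(x)
            for n in range(n_max):
                z += gamma ** ((D - 2.0) * n) * np.cos(2.0 * np.pi * gamma**n * x)
            return G * z

        x = np.linspace(0.0, 1.0, 2048)
        for D in (1.2, 1.5, 1.8):
            print(D, wm_profile(x, D=D).std())  # higher D puts more energy in fine scales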

  4. Effect of truncated cone roughness element density on hydrodynamic drag

    Science.gov (United States)

    Womack, Kristofer; Schultz, Michael; Meneveau, Charles

    2017-11-01

    An experimental study was conducted on rough-wall turbulent boundary layer flow over roughness elements whose idealised shape models the barnacles that cause hydrodynamic drag in many applications. Varying planform densities of truncated cone roughness elements were investigated, with element densities ranging from 10% to 79%. Detailed turbulent boundary layer velocity statistics were recorded with a two-component LDV system on a three-axis traverse. The hydrodynamic roughness length (z0) and skin-friction coefficient (Cf) were determined and compared with estimates from existing roughness element drag prediction models, including Macdonald et al. (1998) and other recent models. Since the roughness elements used in this work model idealised barnacles, implications of this data set for ship powering are considered. This research was supported by the Office of Naval Research and by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.

  5. Day-ahead economic optimisation of energy storage

    NARCIS (Netherlands)

    Lampropoulos, I.; Garoufalis, P.; Bosch, van den P.P.J.; Groot, de R.J.W.; Kling, W.L.

    2014-01-01

    This article addresses the day-ahead economic optimisation of energy storage systems within the setting of electricity spot markets. The case study is about a lithium-ion battery system integrated in a low voltage distribution grid with residential customers and photovoltaic generation in the

  6. Optimising performance in steady state for a supermarket refrigeration system

    DEFF Research Database (Denmark)

    Green, Torben; Kinnaert, Michel; Razavi-Far, Roozbeh

    2012-01-01

    Using a supermarket refrigeration system as an illustrative example, the paper postulates that by appropriately utilising knowledge of plant operation, the plant wide performance can be optimised based on a small set of variables. Focusing on steady state operations, the total system performance...

  7. Global Topology Optimisation

    Science.gov (United States)

    2016-10-31

    boundary Γ, but the boundary points are not equally spaced along Γ (recall Fig. 2). The idea is that a given boundary Γ has many possible discretisations... [Figure: panels (a)-(c); panel (c) plots mean curvature against radius R, comparing geometric, finite-difference and perturbation estimates against the 1/R reference.] ...perimeter of a curve. The setting is illustrated in Fig. 9. Recall the sensitivity is defined by (3). For a curve that is represented by a set of

  8. Nonlinear equations and optimisation

    CERN Document Server

    Watson, LT; Bartholomew-Biggs, M

    2001-01-01

    In one of the papers in this collection, the remark that "nothing at all takes place in the universe in which some rule of maximum or minimum does not appear" is attributed to no less an authority than Euler. Simplifying the syntax a little, we might paraphrase this as: everything is an optimization problem. While this might be something of an overstatement, the element of exaggeration is certainly reduced if we consider the extended form: Every

  9. Multi-Attribute Decision-Making Method Based on Neutrosophic Soft Rough Information

    Directory of Open Access Journals (Sweden)

    Muhammad Akram

    2018-03-01

    Full Text Available Soft sets (SSs), neutrosophic sets (NSs), and rough sets (RSs) are different mathematical models for handling uncertainties, but they are mutually related. In this research paper, we introduce the notions of soft rough neutrosophic sets (SRNSs) and neutrosophic soft rough sets (NSRSs) as hybrid models for soft computing. We describe a mathematical approach to handle decision-making problems in view of NSRSs. We also present an efficient algorithm of our proposed hybrid model to solve decision-making problems.
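
    The record does not spell out its algorithm, but the classical Pawlak approximations that underlie all such rough-set hybrids are easy to state in code. The sketch below covers the plain rough-set model only (not the neutrosophic soft extension), and the toy decision table is invented for illustration.

    ```python
    from collections import defaultdict

    def approximations(universe, attrs, target):
        """Pawlak rough-set lower/upper approximations of a target set.

        universe: dict object -> dict of attribute values; the chosen attrs
        induce the indiscernibility relation. target: set of objects."""
        classes = defaultdict(set)
        for obj in universe:
            classes[tuple(universe[obj][a] for a in attrs)].add(obj)
        lower, upper = set(), set()
        for eq in classes.values():
            if eq <= target:          # class certainly inside the target
                lower |= eq
            if eq & target:           # class possibly inside the target
                upper |= eq
        return lower, upper

    # Toy decision table: two condition attributes, objects 1..5.
    table = {1: {"a": 0, "b": 1}, 2: {"a": 0, "b": 1}, 3: {"a": 1, "b": 0},
             4: {"a": 1, "b": 0}, 5: {"a": 1, "b": 1}}
    low, up = approximations(table, ["a", "b"], target={1, 3, 5})
    print("lower:", low, "upper:", up)   # boundary region = upper - lower
    ```

    A non-empty boundary region (here objects 1-4) is exactly the uncertainty that the hybrid models in the record try to handle more expressively.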

  10. Variations in roughness predictions (flume experiments)

    NARCIS (Netherlands)

    Noordam, Daniëlle; Blom, Astrid; van der Klis, H.; Hulscher, Suzanne J.M.H.; Makaske, A.; Wolfert, H.P.; van Os, A.G.

    2005-01-01

    Data of flume experiments with bed forms are used to analyze and compare different roughness predictors. In this study, the hydraulic roughness consists of grain roughness and form roughness. We predict the grain roughness by means of the size of the sediment. The form roughness is predicted by

  11. Cogeneration technologies, optimisation and implementation

    CERN Document Server

    Frangopoulos, Christos A

    2017-01-01

    Cogeneration refers to the use of a power station to deliver two or more useful forms of energy, for example, to generate electricity and heat at the same time. This book provides an integrated treatment of cogeneration, including a tour of the available technologies and their features, and how these systems can be analysed and optimised.

  12. For Time-Continuous Optimisation

    DEFF Research Database (Denmark)

    Heinrich, Mary Katherine; Ayres, Phil

    2016-01-01

    Strategies for optimisation in design normatively assume an artefact end-point, disallowing continuous architecture that engages living systems, dynamic behaviour, and complex systems. In our Flora Robotica investigations of symbiotic plant-robot bio-hybrids, we require computational tools...

  13. LBA-ECO LC-15 Aerodynamic Roughness Maps of Vegetation Canopies, Amazon Basin: 2000

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set, LBA-ECO LC-15 Aerodynamic Roughness Maps of Vegetation Canopies, Amazon Basin: 2000, provides physical roughness maps of vegetation canopies in the...

  14. Comparison of vegetation roughness descriptions

    NARCIS (Netherlands)

    Augustijn, Dionysius C.M.; Huthoff, Freek; van Velzen, E.H.; Altinakar, M.S.; Kokpinar, M.A.; Aydin, I.; Cokgor, S.; Kirkgoz, S.

    2008-01-01

    Vegetation roughness is an important parameter in describing flow through river systems. Vegetation impedes the flow, which affects the stage-discharge curve and may increase flood risks. Roughness is often used as a calibration parameter in river models; however, when vegetation is allowed to

  15. Generalizing roughness: experiments with flow-oriented roughness

    Science.gov (United States)

    Trevisani, Sebastiano

    2015-04-01

    Surface texture analysis applied to High Resolution Digital Terrain Models (HRDTMs) improves the capability to characterize fine-scale morphology and permits the derivation of useful morphometric indexes. An important indicator to be taken into account in surface texture analysis is surface roughness, which can play a discriminating role in the detection of different geomorphic processes and factors. The evaluation of surface roughness is generally performed by treating it as an isotropic surface parameter (e.g., Cavalli, 2008; Grohmann, 2011). However, surface texture often has an anisotropic character, which means that surface roughness can change according to the considered direction. In some applications, for example those involving surface flow processes, the anisotropy of roughness should be taken into account (e.g., Trevisani, 2012; Smith, 2014). Accordingly, we test the application of a flow-oriented directional measure of roughness, computed considering surface gravity-driven flow. For the calculation of flow-oriented roughness we use both classical variogram-based roughness (e.g., Herzfeld, 1996; Atkinson, 2000) as well as an ad-hoc developed robust modification of the variogram (i.e. MAD, Trevisani, 2014). The presented approach, based on a D8 algorithm, shows the potential impact of considering directionality in the calculation of roughness indexes. The use of flow-oriented roughness could improve the definition of effective proxies of impedance to flow. Preliminary results on the integration of directional roughness operators with morphometric-based models are promising and can be extended to more complex approaches. Atkinson, P.M., Lewis, P., 2000. Geostatistical classification for remote sensing: an introduction. Computers & Geosciences 26, 361-371. Cavalli, M., Marchi, L., 2008. Characterization of the surface morphology of an alpine alluvial fan using airborne LiDAR. Natural Hazards and Earth System Science 8 (2), 323-333. Grohmann, C
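
    As a hedged illustration of what a directional roughness measure looks like, the sketch below computes a median-absolute-differences (MAD-style) roughness along the two grid axes of a toy DEM. It is a simplified stand-in for the variogram/MAD operators cited in the record and omits the D8 flow routing the authors use; the synthetic surface is constructed only to be strongly anisotropic.

    ```python
    import numpy as np

    def directional_mad(dem, lag=1, axis=1):
        """Robust directional roughness: median absolute difference of
        elevations between cells separated by `lag` pixels along one grid
        direction (a simplified stand-in for the MAD variogram)."""
        if axis == 0:
            diffs = dem[lag:, :] - dem[:-lag, :]
        else:
            diffs = dem[:, lag:] - dem[:, :-lag]
        return np.median(np.abs(diffs))

    rng = np.random.default_rng(1)
    dem = np.cumsum(rng.normal(size=(200, 200)), axis=1)  # anisotropic toy surface
    print("MAD along rows:   ", directional_mad(dem, axis=1))
    print("MAD along columns:", directional_mad(dem, axis=0))
    ```

    The two values differ by roughly an order of magnitude on this toy surface, which is the kind of directional contrast an isotropic roughness index would average away.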

  16. HVAC system optimisation-in-building section

    Energy Technology Data Exchange (ETDEWEB)

    Lu, L.; Cai, W.; Xie, L.; Li, S.; Soh, Y.C. [School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore (Singapore)

    2004-07-01

    This paper presents a practical method to optimise the in-building section of centralised Heating, Ventilation and Air-Conditioning (HVAC) systems, which consists of indoor air loops and chilled water loops. First, through component characteristic analysis, mathematical models associated with cooling loads and energy consumption for heat exchangers and energy-consuming devices are established. By considering the variation of the cooling load of each end user, an adaptive neuro-fuzzy inference system (ANFIS) is employed to model duct and pipe networks and obtain optimal differential pressure (DP) set points based on limited sensor information. A mixed-integer nonlinear constrained optimisation of system energy is formulated and solved by a modified genetic algorithm. The main feature of the paper is a systematic approach to optimising the overall system energy consumption rather than that of individual components. A simulation study for a typical centralised HVAC system is provided to compare the proposed optimisation method with traditional ones. The results show that the proposed method indeed improves the system performance significantly. (author)

  17. Simulation and optimisation modelling approach for operation of the Hoa Binh Reservoir, Vietnam

    DEFF Research Database (Denmark)

    Ngo, Long le; Madsen, Henrik; Rosbjerg, Dan

    2007-01-01

    Hoa Binh, the largest reservoir in Vietnam, plays an important role in flood control for the Red River delta and hydropower generation. Due to its multi-purpose character, conflicts and disputes in operating the reservoir have been ongoing since its construction, particularly in the flood season....... This paper proposes to optimise the control strategies for the Hoa Binh reservoir operation by applying a combination of simulation and optimisation models. The control strategies are set up in the MIKE 11 simulation model to guide the releases of the reservoir system according to the current storage level......, the hydro-meteorological conditions, and the time of the year. A heuristic global optimisation tool, the shuffled complex evolution (SCE) algorithm, is adopted for optimising the reservoir operation. The optimisation puts focus on the trade-off between flood control and hydropower generation for the Hoa...

  18. Optimising Comprehensibility in Interlingual Translation

    DEFF Research Database (Denmark)

    Nisbeth Jensen, Matilde

    2015-01-01

    The increasing demand for citizen engagement in areas traditionally belonging exclusively to experts, such as health, law and technology has given rise to the necessity of making expert knowledge available to the general public through genres such as instruction manuals for consumer goods, patien...... the functional text type of Patient Information Leaflet. Finally, the usefulness of applying the principles of Plain Language and intralingual translation for optimising comprehensibility in interlingual translation is discussed....

  19. TEM turbulence optimisation in stellarators

    Science.gov (United States)

    Proll, J. H. E.; Mynick, H. E.; Xanthopoulos, P.; Lazerson, S. A.; Faber, B. J.

    2016-01-01

    With the advent of neoclassically optimised stellarators, optimising stellarators for turbulent transport is an important next step. The reduction of ion-temperature-gradient-driven turbulence has been achieved via shaping of the magnetic field, and the reduction of trapped-electron mode (TEM) turbulence is addressed in the present paper. Recent analytical and numerical findings suggest TEMs are stabilised when a large fraction of trapped particles experiences favourable bounce-averaged curvature. This is the case for example in Wendelstein 7-X (Beidler et al 1990 Fusion Technol. 17 148) and other Helias-type stellarators. Using this knowledge, a proxy function was designed to estimate the TEM dynamics, allowing optimal configurations for TEM stability to be determined with the STELLOPT (Spong et al 2001 Nucl. Fusion 41 711) code without extensive turbulence simulations. A first proof-of-principle optimised equilibrium stemming from the TEM-dominated stellarator experiment HSX (Anderson et al 1995 Fusion Technol. 27 273) is presented for which a reduction of the linear growth rates is achieved over a broad range of the operational parameter space. As an important consequence of this property, the turbulent heat flux levels are reduced compared with the initial configuration.

  20. Armor Plate Surface Roughness Measurements

    National Research Council Canada - National Science Library

    Stanton, Brian; Coburn, William; Pizzillo, Thomas J

    2005-01-01

    ...., surface texture and coatings) that could become important at high frequency. We measure the waviness and roughness of various plates to determine the parameter range for smooth aluminum and rolled homogeneous armor (RHA...

  1. Optimisation of wire-cut EDM process parameter by Grey-based response surface methodology

    Science.gov (United States)

    Kumar, Amit; Soota, Tarun; Kumar, Jitendra

    2018-03-01

    Wire electric discharge machining (WEDM) is one of the advanced machining processes. Response surface methodology coupled with the Grey relational analysis method has been proposed and used to optimise the machining parameters of WEDM. A face-centred cubic design is used for conducting experiments on high speed steel (HSS) M2 grade workpiece material. The regression model of significant factors such as pulse-on time, pulse-off time, peak current, and wire feed is considered for optimising the response variables: material removal rate (MRR), surface roughness and kerf width. The optimal combination of machining parameters was obtained using the Grey relational grade. ANOVA is applied to determine the significance of the input parameters for optimising the Grey relational grade.
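
    Grey relational analysis itself is compact enough to sketch. The fragment below normalises each response (larger-the-better for MRR, smaller-the-better for roughness and kerf width), converts deviations from the ideal sequence into grey relational coefficients with the usual distinguishing coefficient zeta = 0.5, and averages them into a grade. The run data are invented for illustration, not taken from the paper.

    ```python
    import numpy as np

    def grey_relational_grade(responses, larger_better, zeta=0.5, weights=None):
        """Grey relational analysis. responses: (n_runs, n_responses) array;
        larger_better: one bool per response column."""
        R = np.asarray(responses, float)
        norm = np.empty_like(R)
        for j, lb in enumerate(larger_better):
            lo, hi = R[:, j].min(), R[:, j].max()
            norm[:, j] = (R[:, j] - lo) / (hi - lo) if lb else (hi - R[:, j]) / (hi - lo)
        delta = 1.0 - norm                       # deviation from the ideal sequence
        xi = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        w = np.full(R.shape[1], 1.0 / R.shape[1]) if weights is None else np.asarray(weights)
        return xi @ w

    # Illustrative WEDM runs: columns = MRR, surface roughness, kerf width.
    runs = [[12.1, 2.9, 0.32], [14.8, 3.4, 0.35], [10.2, 2.4, 0.30], [13.5, 3.1, 0.33]]
    grades = grey_relational_grade(runs, larger_better=[True, False, False])
    print("grades:", np.round(grades, 3), "-> best run:", int(grades.argmax()) + 1)
    ```

    Ranking runs by this single grade is what lets a multi-response problem be treated with ordinary single-objective tools such as ANOVA.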

  2. Analysis of accuracy in photogrammetric roughness measurements

    Science.gov (United States)

    Olkowicz, Marcin; Dąbrowski, Marcin; Pluymakers, Anne

    2017-04-01

    Regarding permeability, one of the most important features of shale gas reservoirs is the effective aperture of cracks opened during hydraulic fracturing, both propped and unpropped. In a propped fracture, the aperture is controlled mostly by proppant size and its embedment, and fracture surface roughness has only a minor influence. In contrast, in an unpropped fracture the aperture is controlled by the fracture roughness and the wall displacement. To measure fracture surface roughness, we have used the photogrammetric method, since it is time- and cost-efficient. To estimate the accuracy of this method we compare the photogrammetric measurements with reference measurements taken with a White Light Interferometer (WLI). Our photogrammetric setup is based on a high-resolution 50 Mpx camera combined with a focus-stacking technique. The first step for photogrammetric measurements is to determine the optimal camera positions and lighting: we compare multiple scans of one sample, taken with different lighting settings and camera positions, with the reference WLI measurement. The second step is to measure all studied fractures with the parameters that produced the best results in the first step. To compare photogrammetric and WLI measurements we regrid both data sets onto a regular 10 μm grid, determine the best fit, and calculate the difference between the measurements. The first results of the comparison show that for 90% of measured points the absolute vertical distance between WLI and photogrammetry is less than 10 μm, while the mean absolute vertical distance is 5 μm. This shows that our setup can be used for fracture roughness measurements in shales.
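
    The regrid-and-compare step described above can be sketched as follows. The function name and the crude median-offset alignment are assumptions, not the authors' pipeline; SciPy's griddata is used for the interpolation onto the common 10 μm grid.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    def compare_surfaces(xyz_photo, xyz_wli, pitch=10e-6):
        """Regrid two surface point clouds (N x 3 arrays, metres) onto a
        common regular grid and report vertical disagreement statistics,
        after removing the median vertical offset (a crude stand-in for a
        full best-fit alignment)."""
        pts = np.vstack([xyz_photo[:, :2], xyz_wli[:, :2]])
        (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
        gx, gy = np.meshgrid(np.arange(x0, x1, pitch), np.arange(y0, y1, pitch))
        zp = griddata(xyz_photo[:, :2], xyz_photo[:, 2], (gx, gy), method="linear")
        zw = griddata(xyz_wli[:, :2], xyz_wli[:, 2], (gx, gy), method="linear")
        diff = zp - zw
        diff -= np.nanmedian(diff)               # vertical alignment
        within = np.nanmean(np.abs(diff) < 10e-6)  # fraction within 10 um
        return np.nanmean(np.abs(diff)), within
    ```

    Applied to two synthetic point clouds, this returns the mean |Δz| and the fraction of grid nodes within the 10 μm tolerance quoted in the record.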

  3. Particle swarm optimisation classical and quantum perspectives

    CERN Document Server

    Sun, Jun; Wu, Xiao-Jun

    2016-01-01

    Introduction: Optimisation Problems and Optimisation Methods; Random Search Techniques; Metaheuristic Methods; Swarm Intelligence. Particle Swarm Optimisation: Overview; Motivations; PSO Algorithm: Basic Concepts and the Procedure; Paradigm: How to Use PSO to Solve Optimisation Problems; Some Harder Examples. Some Variants of Particle Swarm Optimisation: Why Does the PSO Algorithm Need to Be Improved?; Inertia and Constriction-Acceleration Techniques for PSO; Local Best Model; Probabilistic Algorithms; Other Variants of PSO. Quantum-Behaved Particle Swarm Optimisation: Overview; Motivation: From Classical Dynamics to Quantum Mechanics; Quantum Model: Fundamentals of QPSO; QPSO Algorithm; Some Essential Applications; Some Variants of QPSO; Summary. Advanced Topics: Behaviour Analysis of Individual Particles; Convergence Analysis of the Algorithm; Time Complexity and Rate of Convergence; Parameter Selection and Performance; Summary. Industrial Applications: Inverse Problems for Partial Differential Equations; Inverse Problems for Non-Linear Dynamical Systems; Optimal De...
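
    For readers unfamiliar with the basic global-best PSO covered in the book's early chapters, a minimal sketch is given below; the inertia weight and acceleration coefficients are typical textbook values, and the sphere function is only a test objective.

    ```python
    import numpy as np

    def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal global-best particle swarm optimiser (minimisation)."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        x = rng.uniform(lo, hi, (n_particles, len(lo)))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
        g = pbest[pbest_f.argmin()].copy()          # global best position
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.apply_along_axis(f, 1, x)
            better = fx < pbest_f                   # update personal bests
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    sphere = lambda z: float(np.sum(z**2))
    print(pso(sphere, bounds=[(-5, 5)] * 3))
    ```

    The quantum-behaved variant (QPSO) treated later in the book replaces the velocity update with position sampling around an attractor, but the personal-best/global-best bookkeeping is the same.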

  4. An Optimisation Approach for Room Acoustics Design

    DEFF Research Database (Denmark)

    Holm-Jørgensen, Kristian; Kirkegaard, Poul Henning; Andersen, Lars

    2005-01-01

    This paper discusses, on a conceptual level, the value of optimisation techniques in architectural room acoustics design from a practical point of view. A single objective room acoustics design criterion, estimated from the sound field inside the room, is chosen for optimisation. The sound field is modelled ... using the boundary element method, where absorption is incorporated. An example is given in which the geometry of a room is defined by four design modes. The room geometry is optimised to obtain a uniform sound pressure....

  5. Thickness Optimisation of Textiles Subjected to Heat and Mass Transport during Ironing

    Directory of Open Access Journals (Sweden)

    Korycki Ryszard

    2016-09-01

    Full Text Available We analyse the coupled problem during ironing of textiles, in which heat is transported with mass, whereas mass transport driven by heat is negligible. It is necessary to define both the physical and the mathematical model. Introducing a two-phase system of mass sorption by fibres, the transport equations are formulated and accompanied by a set of boundary and initial conditions. Optimisation of material thickness during ironing is gradient-oriented. The first-order sensitivity of an arbitrary objective functional is analysed and included in the optimisation procedure. The numerical example is the thickness optimisation of different textile materials in an ironing device.

  6. Optimisation of technical specifications using probabilistic methods

    International Nuclear Information System (INIS)

    Ericsson, G.; Knochenhauer, M.; Hultqvist, G.

    1986-01-01

    During the last few years the development of methods for modifying and optimising nuclear power plant Technical Specifications (TS) for plant operations has received increased attention. Probabilistic methods in general, and the plant and system models of probabilistic safety assessment (PSA) in particular, seem to provide the most powerful tools for optimisation. This paper first gives some general comments on optimisation, identifying important parameters, and then describes recent Swedish experience from the use of nuclear power plant PSA models and results for TS optimisation.

  7. Layout Optimisation of Wave Energy Converter Arrays

    DEFF Research Database (Denmark)

    Ruiz, Pau Mercadé; Nava, Vincenzo; Topper, Mathew B. R.

    2017-01-01

    This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation......, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm...

  8. Optimisation of the LHCb detector

    CERN Document Server

    Hierck, R H

    2003-01-01

    This thesis describes a comparison of the LHCb classic and LHCb light concepts from a tracking perspective. The comparison includes the detector occupancies, the various pattern recognition algorithms and the reconstruction performance. The final optimised LHCb setup is used to study the physics performance of LHCb for the Bs->DsK and Bs->DsPi decay channels. This includes both the event selection and a study of the sensitivity for the Bs oscillation frequency, Δm_s, the Bs lifetime difference, ΔΓ_s, and the CP parameter γ - 2δγ.

  9. Optimisation combinatoire Theorie et algorithmes

    CERN Document Server

    Korte, Bernhard; Fonlupt, Jean

    2010-01-01

    This book is the French translation of the fourth and final edition of Combinatorial Optimization: Theory and Algorithms, written by two eminent specialists in the field, Bernhard Korte and Jens Vygen of the University of Bonn in Germany. It emphasises the theoretical aspects of combinatorial optimisation as well as efficient and exact algorithms for solving problems, which distinguishes it from the simpler heuristic approaches often described elsewhere. The book contains numerous concise and elegant proofs of difficult results. Intended for students...

  10. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    Full Text Available The medical domain has become one of the most important areas of research, owing to the richness of the huge amounts of medical information about the symptoms of diseases and how to distinguish between them in order to diagnose correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts make treatment decisions. This paper introduces four hybrid Rough–Granular Computing knowledge discovery models based on Rough Set Theory, Artificial Neural Networks, Genetic Algorithms and Rough Mereology Theory. A comparative analysis of various knowledge discovery models that use different knowledge discovery techniques for data pre-processing, reduction, and data mining supports medical experts in extracting the main medical indicators, reducing misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose was to enhance the frame of KDD processes for supervised learning using the Granular Computing methodology.

  11. Stochastic control with rough paths

    International Nuclear Information System (INIS)

    Diehl, Joscha; Friz, Peter K.; Gassiat, Paul

    2017-01-01

    We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of Lyons. It is shown that the value function satisfies a HJB type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically involve anticipating stochastic analysis. We make the link to old work of Davis and Burstein (Stoch Stoch Rep 40:203–256, 1992) and then prove a continuous-time generalization of Roger’s duality formula [SIAM J Control Optim 46:1116–1132, 2007]. The generic case of controlled volatility is seen to give trivial duality bounds, and explains the focus in Burstein–Davis’ (and this) work on controlled drift. Our study of controlled rough differential equations also relates to work of Mazliak and Nourdin (Stoch Dyn 08:23, 2008).

  12. Heat transfer from rough surfaces

    International Nuclear Information System (INIS)

    Dalle Donne, M.

    1977-01-01

    Artificial roughness is often used in nuclear reactors to improve the thermal performance of the fuel elements. Although these are made up of clusters of rods, the experiments to measure the heat transfer and friction coefficients of roughness are performed with single rods contained in smooth tubes. This work illustrates a new transformation method to obtain data applicable to reactor fuel elements from these annulus experiments. New experimental friction data are presented for ten rods, each with a different artificial roughness made up of two-dimensional rectangular ribs. For each rod four tests have been performed, each in a different outer smooth tube. For two of these rods, each with two different outer tubes, heat transfer data are also given. The friction and heat transfer data, transformed with the present method, are correlated by simple equations. In the paper, these equations are applied to a case typical of a Gas Cooled Fast Reactor fuel element. (orig.) [de]

  13. Stochastic control with rough paths

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Joscha [University of California San Diego (United States); Friz, Peter K., E-mail: friz@math.tu-berlin.de [TU & WIAS Berlin (Germany); Gassiat, Paul [CEREMADE, Université Paris-Dauphine, PSL Research University (France)

    2017-04-15

    We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of Lyons. It is shown that the value function satisfies a HJB type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically involve anticipating stochastic analysis. We make the link to old work of Davis and Burstein (Stoch Stoch Rep 40:203–256, 1992) and then prove a continuous-time generalization of Roger’s duality formula [SIAM J Control Optim 46:1116–1132, 2007]. The generic case of controlled volatility is seen to give trivial duality bounds, and explains the focus in Burstein–Davis’ (and this) work on controlled drift. Our study of controlled rough differential equations also relates to work of Mazliak and Nourdin (Stoch Dyn 08:23, 2008).

  14. The influence of roughness and obstacle on wind power map

    International Nuclear Information System (INIS)

    Abas Ab Wahab; Mohd Fadhil Abas; Mohd Hafiz Ismail

    2006-01-01

    In the development of wind energy in Malaysia, the need for a wind power map of Peninsular Malaysia has arisen. The map is needed to help determine the potential areas where low wind speed wind turbines could operate optimally. In establishing the wind power map, the effects of roughness and obstacles have been investigated. Wind data from 24 meteorological stations around the country have been utilized in conjunction with the respective local roughness and obstacles. Two sets of wind power maps have been developed, i.e. wind power maps with and without roughness and obstacles. These two sets of maps exhibit a significant difference in wind power values, especially in the inland areas, where the map without roughness and obstacles gives much lower values than the one with roughness and obstacles. This paper outlines the process of establishing the two sets of wind power maps, as well as discussing the influence of roughness and obstacles based on the results obtained.

  15. Optimising resource management in neurorehabilitation.

    Science.gov (United States)

    Wood, Richard M; Griffiths, Jeff D; Williams, Janet E; Brouwers, Jakko

    2014-01-01

    To date, little research has been published regarding the effective and efficient management of resources (beds and staff) in neurorehabilitation, despite being an expensive service in limited supply. To demonstrate how mathematical modelling can be used to optimise service delivery, by way of a case study at a major 21 bed neurorehabilitation unit in the UK. An automated computer program for assigning weekly treatment sessions is developed. Queue modelling is used to construct a mathematical model of the hospital in terms of referral submissions to a waiting list, admission and treatment, and ultimately discharge. This is used to analyse the impact of hypothetical strategic decisions on a variety of performance measures and costs. The project culminates in a hybridised model of these two approaches, since a relationship is found between the number of therapy hours received each week (scheduling output) and length of stay (queuing model input). The introduction of the treatment scheduling program has substantially improved timetable quality (meaning a better and fairer service to patients) and has reduced employee time expended in its creation by approximately six hours each week (freeing up time for clinical work). The queuing model has been used to assess the effect of potential strategies, such as increasing the number of beds or employing more therapists. The use of mathematical modelling has not only optimised resources in the short term, but has allowed the optimality of longer term strategic decisions to be assessed.
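
    The queue modelling referred to above can be illustrated with the standard Erlang-C formulas for an M/M/c system, treating beds as servers. The numbers below are invented, and the M/M/c assumptions (Poisson referrals, exponential stays, an unbounded waiting list) are a simplification of the paper's model.

    ```python
    from math import factorial

    def erlang_c(lam, mu, c):
        """M/M/c queue: probability an arrival must wait, and mean wait.
        lam: arrival rate, mu: service rate per server, c: servers (lam < c*mu)."""
        a = lam / mu                                   # offered load
        tail = (a**c / factorial(c)) * (c / (c - a))
        p_wait = tail / (sum(a**k / factorial(k) for k in range(c)) + tail)
        wq = p_wait / (c * mu - lam)                   # mean time in queue
        return p_wait, wq

    # Illustrative figures only: 2 referrals/week, 10-week mean stay, 21 beds.
    p, wq = erlang_c(lam=2.0, mu=1.0 / 10.0, c=21)
    print(f"P(wait) = {p:.2f}, mean wait = {wq:.1f} weeks")
    ```

    Even this toy calculation shows the strategic trade-off the paper analyses: with the offered load close to capacity, small changes in bed count or length of stay move the waiting time dramatically.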

  16. Wave scattering from statistically rough surfaces

    CERN Document Server

    Bass, F G; ter Haar, D

    2013-01-01

    Wave Scattering from Statistically Rough Surfaces discusses the complications in radio physics and hydro-acoustics in relation to wave transmission under settings seen in nature. Some of the topics that are covered include radar and sonar, the effect of variations in topographic relief or ocean waves on the transmission of radio and sound waves, the reproduction of radio waves from the lower layers of the ionosphere, and the oscillations of signals within the earth-ionosphere waveguide. The book begins with some fundamental idea of wave transmission theory and the theory of random processes a

  17. Study on the evolutionary optimisation of the topology of network control systems

    Science.gov (United States)

    Zhou, Zude; Chen, Benyuan; Wang, Hong; Fan, Zhun

    2010-08-01

    Computer networks have become very popular in enterprise applications. However, optimisation of network designs that allows networks to be used more efficiently in industrial environments and enterprise applications remains an interesting research topic. This article mainly discusses the topology optimisation theory and methods of network control systems based on switched Ethernet in an industrial context. Factors that affect the real-time performance of the industrial control network are presented in detail, and optimisation criteria with their internal relations are analysed. After the definition of performance parameters, normalised indices for the evaluation of the topology optimisation are proposed. The topology optimisation problem is formulated as a multi-objective optimisation problem and an evolutionary algorithm is applied to solve it. Special communication characteristics of the industrial control network are considered in the optimisation process. With respect to the design of the evolutionary algorithm, an improved arena algorithm is proposed for the construction of the non-dominated set of the population. In addition, for the evaluation of individuals, the integrated use of the dominance relation method and the objective function combination method is described, reducing the computational cost of the algorithm. Simulation tests show that the performance of the proposed algorithm is preferable and superior compared to other algorithms. The final solution greatly improves the following indices: traffic localisation, traffic balance and utilisation rate balance of switches. In addition, a new performance index, with its estimation process, is proposed.
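
    The construction of the non-dominated set, which the improved arena algorithm accelerates, reduces to repeated Pareto-dominance tests. A naive O(n²) reference version is sketched below with an invented two-objective population; the paper's arena algorithm produces the same set with fewer comparisons.

    ```python
    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (minimisation)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def non_dominated(points):
        """Naive O(n^2) construction of the non-dominated set."""
        return [p for p in points
                if not any(dominates(q, p) for q in points if q != p)]

    pop = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
    print(non_dominated(pop))   # (3.0, 4.0) is dominated by (2.0, 3.0)
    ```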

  18. Ants Colony Optimisation of a Measuring Path of Prismatic Parts on a CMM

    Directory of Open Access Journals (Sweden)

    Stojadinovic Slavenko M.

    2016-03-01

    Full Text Available This paper presents optimisation of a measuring probe path in inspecting prismatic parts on a CMM. The optimisation model is based on: (i) a mathematical model that establishes an initial collision-free path presented by a set of points, and (ii) the solution of the Travelling Salesman Problem (TSP) obtained with Ant Colony Optimisation (ACO). In order to solve the TSP, an ACO algorithm that aims to find the shortest path of ant colony movement (i.e. the optimised path) is applied. The optimised path is then compared with the measuring path obtained by online programming on the CMM ZEISS UMM500, and with the measuring path obtained in the CMM inspection module of Pro/ENGINEER® software. The results show that the optimised path is at least 20% shorter than the path obtained by on-line programming on the CMM ZEISS UMM500, and at least 10% shorter than the path obtained using the CMM module in Pro/ENGINEER®.
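
    A minimal ACO-for-TSP loop of the kind described is sketched below. The parameter values (alpha, beta, evaporation rate rho) and the random inspection points are illustrative assumptions, not those of the paper.

    ```python
    import numpy as np

    def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=3.0, rho=0.5, seed=0):
        """Minimal ant colony optimisation for a closed inspection tour (TSP)."""
        rng = np.random.default_rng(seed)
        n = len(dist)
        eta = 1.0 / (dist + np.eye(n))          # visibility; eye avoids /0
        tau = np.ones((n, n))                   # pheromone trails
        best_tour, best_len = None, np.inf
        for _ in range(n_iter):
            tours = []
            for _ in range(n_ants):
                tour = [int(rng.integers(n))]
                while len(tour) < n:            # probabilistic next-city choice
                    i = tour[-1]
                    cand = [j for j in range(n) if j not in tour]
                    w = np.array([tau[i, j]**alpha * eta[i, j]**beta for j in cand])
                    tour.append(int(rng.choice(cand, p=w / w.sum())))
                length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
                tours.append((tour, length))
                if length < best_len:
                    best_tour, best_len = tour, length
            tau *= (1.0 - rho)                  # evaporation, once per iteration
            for tour, length in tours:          # pheromone deposit
                for k in range(n):
                    tau[tour[k], tour[(k + 1) % n]] += 1.0 / length
        return best_tour, best_len

    pts = np.random.default_rng(1).random((12, 2))   # 12 fictitious probe points
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    print(aco_tsp(d))
    ```

    In the paper's setting, the distance matrix would come from the collision-free point set of step (i) rather than from random points.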

  19. Does Surface Roughness Amplify Wetting?

    Czech Academy of Sciences Publication Activity Database

    Malijevský, Alexandr

    2014-01-01

    Roč. 141, č. 18 (2014), s. 184703 ISSN 0021-9606 R&D Projects: GA ČR GA13-09914S Institutional support: RVO:67985858 Keywords : density functional theory * wetting * roughness Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.952, year: 2014

  20. Calibration of surface roughness standards

    DEFF Research Database (Denmark)

    Thalmann, R.; Nicolet, A.; Meli, F.

    2016-01-01

    organisations. Five surface texture standards of different types were circulated, and on each of the standards several roughness parameters according to ISO 4287 had to be determined. 32 out of 395 individual results were not consistent with the reference value. After some corrective actions...

  1. Human roughness perception and possible factors effecting roughness sensation.

    Science.gov (United States)

    Aktar, Tugba; Chen, Jianshe; Ettelaie, Rammile; Holmes, Melvin; Henson, Brian

    2017-06-01

    Surface texture sensation is significant for business success, particularly for solid surfaces, across most materials, including foods. The mechanisms of roughness perception are still unknown, especially under different conditions such as lubricants with varying viscosities, different temperatures, or different force loads during observation of the surface. This work aims to determine the effect of those factors through sensory tests on 62 healthy participants. The roughness sensation of the fingertip was tested under different lubricants, including water and diluted syrup solutions, at room temperature (25 °C) and body temperature (37 °C), using simple pair-wise comparison to observe the just-noticeable-difference threshold and perception levels. Additionally, the force load applied during roughness observation was tested with a pair-wise ranking method to illustrate its possible effect on human sensation. The results showed that the human capability of roughness discrimination decreases with increased viscosity of the lubricant, whereas the influence of temperature was not found to be significant. Moreover, an increase in the applied force load increased the sensitivity of roughness discrimination. The observed effects of the applied factors were also used to estimate the oral sensation of texture during eating. These findings are significant for our fundamental understanding of texture perception, and for the development of new food products with controlled textural features. Texture discrimination ability, more specifically roughness discrimination capability, is a significant factor in preference and appreciation for a wide range of materials, including food, furniture, or fabric. To explore the mechanism of sensation capability through tactile senses, it is necessary to identify the relevant factors and define the characteristics that dominate the process involved. The results that will be obtained under these principles

  2. Evolutionary programming for neutron instrument optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Bentley, Phillip M. [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany)]. E-mail: phillip.bentley@hmi.de; Pappas, Catherine [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Habicht, Klaus [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Lelievre-Berna, Eddy [Institut Laue-Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France)

    2006-11-15

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA, which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.

  3. Evolutionary programming for neutron instrument optimisation

    International Nuclear Information System (INIS)

    Bentley, Phillip M.; Pappas, Catherine; Habicht, Klaus; Lelievre-Berna, Eddy

    2006-01-01

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA, which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations
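
    Since both records above rest on the canonical genetic algorithm, a minimal real-coded variant is sketched here; the operators and parameter values are generic textbook choices, not the instrument-specific setup of the paper, and in the real application each fitness evaluation would be a Monte-Carlo instrument simulation rather than a cheap test function.

    ```python
    import numpy as np

    def genetic_algorithm(f, bounds, pop=40, gens=150, pc=0.9, pm=0.1, seed=0):
        """Minimal real-coded GA (minimisation): binary tournament selection,
        blend crossover, Gaussian mutation, and tracking of the best solution."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        X = rng.uniform(lo, hi, (pop, len(lo)))
        best_x, best_f = None, np.inf
        for _ in range(gens):
            fit = np.apply_along_axis(f, 1, X)
            if fit.min() < best_f:
                best_x, best_f = X[fit.argmin()].copy(), fit.min()
            i, j = rng.integers(pop, size=(2, pop))      # binary tournaments
            parents = X[np.where(fit[i] < fit[j], i, j)]
            mates = parents[rng.permutation(pop)]
            a = rng.random((pop, 1))                     # blend crossover
            kids = np.where(rng.random((pop, 1)) < pc,
                            a * parents + (1.0 - a) * mates, parents)
            mut = rng.random(kids.shape) < pm            # Gaussian mutation
            kids = kids + mut * rng.normal(0.0, 0.1, kids.shape) * (hi - lo)
            X = np.clip(kids, lo, hi)
        return best_x, best_f

    print(genetic_algorithm(lambda z: float(np.sum(z**2)), [(-5, 5)] * 4))
    ```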

  4. Optimising Boltzmann codes for the PLANCK era

    International Nuclear Information System (INIS)

    Hamann, Jan; Lesgourgues, Julien; Balbi, Amedeo; Quercellini, Claudia

    2009-01-01

    High precision measurements of the Cosmic Microwave Background (CMB) anisotropies, as can be expected from the PLANCK satellite, will require high-accuracy theoretical predictions as well. One possible source of theoretical uncertainty is the numerical error in the output of the Boltzmann codes used to calculate angular power spectra. In this work, we carry out an extensive study of the numerical accuracy of the public Boltzmann code CAMB, and identify a set of parameters which determine the error of its output. We show that at the current default settings, the cosmological parameters extracted from data of future experiments like Planck can be biased by several tenths of a standard deviation for the six parameters of the standard ΛCDM model, and potentially more seriously for extended models. We perform an optimisation procedure that leads the code to achieve sufficient precision while at the same time keeping the computation time within reasonable limits. Our conclusion is that the contribution of numerical errors to the theoretical uncertainty of model predictions is well under control—the main challenges for more accurate calculations of CMB spectra will be of an astrophysical nature instead

  5. Multiscale Analysis of the Roughness Effect on Lubricated Rough Contact

    OpenAIRE

    Demirci , Ibrahim; MEZGHANI , Sabeur; YOUSFI , Mohammed; El Mansori , Mohamed

    2014-01-01

    Determining friction is as essential as determining the film thickness in a lubricated contact, and it is an important research subject. Indeed, reducing friction in the automotive industry is important both for minimising fuel consumption and for decreasing greenhouse gas emissions. However, progress in friction reduction has been limited by the difficulty of understanding the mechanism of roughness effects on friction. It was observed that micro-surf...

  6. Dose optimisation in computed radiography

    International Nuclear Information System (INIS)

    Schreiner-Karoussou, A.

    2005-01-01

    After the installation of computed radiography (CR) systems in three hospitals in Luxembourg a patient dose survey was carried out for three radiographic examinations, thorax, pelvis and lumbar spine. It was found that the patient doses had changed in comparison with the patient doses measured for conventional radiography in the same three hospitals. A close collaboration between the manufacturers of the X-ray installations, the CR imaging systems and the medical physicists led to the discovery that the speed class with which each radiographic examination was to be performed, had been ignored, during installation of the digital imaging systems. A number of procedures were carried out in order to calibrate and program the X-ray installations in conjunction with the CR systems. Following this optimisation procedure, a new patient dose survey was carried out for the three radiographic examinations. It was found that patient doses for the three hospitals were reduced. (authors)

  7. Optimising costs in WLCG operations

    CERN Document Server

    Pradillo, Mar; Flix, Josep; Forti, Alessandra; Sciabà, Andrea

    2015-01-01

    The Worldwide LHC Computing Grid project (WLCG) provides the computing and storage resources required by the LHC collaborations to store, process and analyse the 50 Petabytes of data annually generated by the LHC. The WLCG operations are coordinated by a distributed team of managers and experts and performed by people at all participating sites and from all the experiments. Several improvements in the WLCG infrastructure have been implemented during the first long LHC shutdown to prepare for the increasing needs of the experiments during Run2 and beyond. However, constraints in funding will affect not only the computing resources but also the available effort for operations. This paper presents the results of a detailed investigation on the allocation of the effort in the different areas of WLCG operations, identifies the most important sources of inefficiency and proposes viable strategies for optimising the operational cost, taking into account the current trends in the evolution of the computing infrastruc...

  8. Pre-segmented 2-Step IMRT with subsequent direct machine parameter optimisation – a planning study

    International Nuclear Information System (INIS)

    Bratengeier, Klaus; Meyer, Jürgen; Flentje, Michael

    2008-01-01

    Modern intensity-modulated radiotherapy (IMRT) mostly uses iterative optimisation methods. The integration of machine parameters into the optimisation of step-and-shoot leaf positions has been shown to be successful. For IMRT segmentation algorithms based on the analysis of the geometrical structure of the planning target volumes (PTV) and the organs at risk (OAR), the potential of such procedures has not yet been fully explored. In this work, 2-Step IMRT was combined with subsequent direct machine parameter optimisation (DMPO; RaySearch Laboratories, Sweden) to investigate this potential. In a planning study, DMPO on a commercial planning system was compared with manual primary 2-Step IMRT segment generation followed by DMPO optimisation. Fifteen clinical cases and the ESTRO Quasimodo phantom were employed. Both the same number of optimisation steps and the same set of objective values were used. The plans were compared with a clinical DMPO reference plan and a traditional IMRT plan based on fluence optimisation and subsequent segmentation. The composite objective value (the weighted sum of quadratic deviations between the achieved values and the objective points in the dose-volume histogram) was used as a measure of plan quality. Additionally, a more extended set of parameters was used for the breast cases to compare the plans. The plans with segments pre-defined by 2-Step IMRT were slightly superior to DMPO alone in the majority of cases. The composite objective value tended to be even lower for a smaller number of segments. The total number of monitor units was slightly higher than for the DMPO plans. Traditional IMRT fluence optimisation with subsequent segmentation could not compete. 2-Step IMRT segmentation is suitable as a starting point for further DMPO optimisation and, in general, results in less complex plans which are equal or superior to plans generated by DMPO alone
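
    The composite objective value described above has a simple generic form; the sketch below is one plausible reading of it (whether only constraint violations are penalised, and the exact weighting, are planning-system details not given in the record), with invented numbers.

    ```python
    import numpy as np

    def composite_objective(achieved, objectives, weights):
        """Weighted sum of quadratic deviations between achieved dose-volume
        points and their objective values. Some systems penalise only
        violations; here all deviations are counted."""
        d = np.asarray(achieved, float) - np.asarray(objectives, float)
        return float(np.asarray(weights, float) @ d**2)

    # e.g. three DVH points (Gy): PTV min dose, OAR max dose, PTV max dose
    print(composite_objective([62.1, 48.5, 21.0], [60.0, 50.0, 20.0], [1.0, 0.5, 2.0]))
    ```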

  9. Ultrasonic backward radiation on painted rough interface

    International Nuclear Information System (INIS)

    Kwon, Yong Gyu; Yoon, Seok Soo; Kwon, Sung Duck

    2002-01-01

    The angular dependence (profile) of backscattered ultrasound was measured for steel and brass specimens with periodic surface roughness (1-71 μm). Backward radiation showed a more linear dependency than the normal profile. The direct amplitude increased and the averaged amplitude decreased with surface roughness. Painting improved the linearity of the direct backward radiation below a roughness of 0.03. Scholte and Rayleigh-like waves were observed in the spectrum of the averaged backward radiation on the periodically rough surface. Painting a periodically rough surface could be used to remove the interface mode effect caused by periodic roughness.

  10. An FMEA analysis using grey theory and grey rough sets

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-10-01

    Full Text Available This paper presents a hybrid method for detecting the most important failure items as well as the most effective alternative strategies to cope with possible events. The proposed model uses the grey technique to rank various alternatives and the FMEA technique to find important faults. The implementation of the proposed method is illustrated for an existing example from the literature. The results show that the proposed model is capable of detecting the most troublesome problems with fuzzy logic and of finding the most important solution strategy using the FMEA technique.

  11. Rough set based decision rule generation to find behavioural ...

    Indian Academy of Sciences (India)

    L Sumalatha

    conducted experiments over data of a Portuguese banking institution. From the proposed ... [application areas include economics, banking [7, 8], pharmacology [9], and text mining [10]]. In this paper we ... [attribute-table residue: Age (numeric); Job (categorical: admin, unemployed, management, ...)]

  12. MULTI-OBJECTIVE OPTIMISATION OF LASER CUTTING USING CUCKOO SEARCH ALGORITHM

    Directory of Open Access Journals (Sweden)

    M. MADIĆ

    2015-03-01

    Full Text Available Determining optimal laser cutting conditions for improving cut quality characteristics is of great importance in process planning. This paper presents multi-objective optimisation of the CO2 laser cutting process considering three cut quality characteristics: surface roughness, heat affected zone (HAZ) and kerf width. It combines an experimental design using Taguchi's method, modelling of the relationships between the laser cutting factors (laser power, cutting speed, assist gas pressure and focus position) and the cut quality characteristics by artificial neural networks (ANNs), formulation of the multi-objective optimisation problem using the weighted sum method, and its solution by the novel meta-heuristic cuckoo search algorithm (CSA). The objective is to obtain optimal cutting conditions depending on the importance order of the cut quality characteristics for each of the four case studies presented in this paper. The case studies considered in this study are: minimisation of the cut quality characteristics with equal priority, minimisation with priority given to surface roughness, minimisation with priority given to HAZ, and minimisation with priority given to kerf width. The results indicate that the applied CSA for solving the multi-objective optimisation problem is effective, and that the proposed approach can be used for selecting the optimal laser cutting factors for specific production requirements.
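
    A compact cuckoo search over a weighted-sum objective can be sketched as follows. The Lévy-flight step uses Mantegna's algorithm, the three toy quadratic objectives stand in for the paper's ANN models, and all parameter values (nest count, abandonment fraction, step scale) are illustrative assumptions.

    ```python
    import numpy as np
    from math import gamma, sin, pi

    def levy_step(size, lam=1.5, rng=None):
        """Levy-distributed step via Mantegna's algorithm."""
        sigma = (gamma(1 + lam) * sin(pi * lam / 2) /
                 (gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
        u = rng.normal(0.0, sigma, size)
        v = rng.normal(0.0, 1.0, size)
        return u / np.abs(v) ** (1 / lam)

    def cuckoo_search(f, bounds, n_nests=15, n_iter=200, pa=0.25, seed=0):
        """Minimal cuckoo search (minimisation) over box bounds."""
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, float).T
        nests = rng.uniform(lo, hi, (n_nests, len(lo)))
        fit = np.apply_along_axis(f, 1, nests)
        for _ in range(n_iter):
            best = nests[fit.argmin()]
            # Levy flights biased towards the current best nest
            new = np.clip(nests + 0.01 * levy_step(nests.shape, rng=rng) * (nests - best),
                          lo, hi)
            fnew = np.apply_along_axis(f, 1, new)
            imp = fnew < fit
            nests[imp], fit[imp] = new[imp], fnew[imp]
            # abandon a fraction pa of nests and rebuild them at random
            k = rng.random(nests.shape) < pa
            nests = np.where(k, rng.uniform(lo, hi, nests.shape), nests)
            fit = np.apply_along_axis(f, 1, nests)
        return nests[fit.argmin()], fit.min()

    # Weighted-sum scalarisation of three surrogate objectives (all minimised);
    # weights here give priority to the first response, e.g. surface roughness.
    w = np.array([0.6, 0.2, 0.2])
    f = lambda x: float(w @ np.array([x[0]**2, (x[1] - 1)**2, (x[0] + x[2])**2]))
    print(cuckoo_search(f, bounds=[(-2, 2)] * 3))
    ```

    Changing the weight vector reproduces the four priority scenarios of the paper without altering the search algorithm itself.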

  13. Combining simulation and multi-objective optimisation for equipment quantity optimisation in container terminals

    OpenAIRE

    Lin, Zhougeng

    2013-01-01

    This thesis proposes a combination framework to integrate simulation and multi-objective optimisation (MOO) for container terminal equipment optimisation. It addresses how the strengths of simulation and multi-objective optimisation can be integrated to find high quality solutions for multiple objectives with low computational cost. Three structures for the combination framework are proposed respectively: pre-MOO structure, integrated MOO structure and post-MOO structure. The applications of ...

  14. Robust surface roughness indices and morphological interpretation

    Science.gov (United States)

    Trevisani, Sebastiano; Rocca, Michele

    2016-04-01

    Geostatistical image/surface texture indices based on the variogram (Atkinson and Lewis, 2000; Herzfeld and Higginson, 1996; Trevisani et al., 2012) and on its robust variant MAD (median absolute differences; Trevisani and Rocca, 2015) offer powerful tools for the analysis and interpretation of surface morphology (potentially not limited to the solid earth). In particular, the proposed robust index (Trevisani and Rocca, 2015), with its implementation based on local kernels, permits the derivation of a wide set of robust and customizable geomorphometric indices capable of outlining specific aspects of surface texture. The stability of MAD in the presence of signal noise and abrupt changes in spatial variability is well suited to the analysis of high-resolution digital terrain models. Moreover, the implementation of MAD from a pixel-centred perspective based on local kernels, with some analogies to the local binary pattern approach (Lucieer and Stein, 2005; Ojala et al., 2002), permits the creation of custom roughness indices capable of outlining different aspects of surface roughness (Grohmann et al., 2011; Smith, 2015). In the proposed poster, some potentialities of the new indices in the context of geomorphometry and landscape analysis will be presented. At the same time, challenges and future developments related to the proposed indices will be outlined. Atkinson, P.M., Lewis, P., 2000. Geostatistical classification for remote sensing: an introduction. Computers & Geosciences 26, 361-371. Grohmann, C.H., Smith, M.J., Riccomini, C., 2011. Multiscale Analysis of Topographic Surface Roughness in the Midland Valley, Scotland. IEEE Transactions on Geoscience and Remote Sensing 49, 1200-1213. Herzfeld, U.C., Higginson, C.A., 1996. Automated geostatistical seafloor classification - Principles, parameters, feature vectors, and discrimination criteria. Computers and Geosciences 22 (1), 35-52. Lucieer, A., Stein, A., 2005. Texture-based landform segmentation of LiDAR imagery

  15. Layout Optimisation of Wave Energy Converter Arrays

    Directory of Open Access Journals (Sweden)

    Pau Mercadé Ruiz

    2017-08-01

    Full Text Available This paper proposes an optimisation strategy for the layout design of wave energy converter (WEC) arrays. Optimal layouts are sought so as to maximise the absorbed power given a minimum q-factor, the minimum distance between WECs, and an area of deployment. To guarantee an efficient optimisation, a four-parameter layout description is proposed. Three different optimisation algorithms are further compared in terms of performance and computational cost. These are the covariance matrix adaptation evolution strategy (CMA), a genetic algorithm (GA) and the glowworm swarm optimisation (GSO) algorithm. The results show slightly higher performances for the latter two algorithms; however, the first turns out to be significantly less computationally demanding.

  16. Topology optimisation of natural convection problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Aage, Niels; Andreasen, Casper Schousboe

    2014-01-01

    This paper demonstrates the application of the density-based topology optimisation approach for the design of heat sinks and micropumps based on natural convection effects. The problems are modelled under the assumptions of steady-state laminar flow using the incompressible Navier-Stokes equations...... coupled to the convection-diffusion equation through the Boussinesq approximation. In order to facilitate topology optimisation, the Brinkman approach is taken to penalise velocities inside the solid domain and the effective thermal conductivity is interpolated in order to accommodate differences...... in thermal conductivity of the solid and fluid phases. The governing equations are discretised using stabilised finite elements and topology optimisation is performed for two different problems using discrete adjoint sensitivity analysis. The study shows that topology optimisation is a viable approach...

  17. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    This thesis deals with topology optimisation for coupled convection problems. The aim is to extend and apply topology optimisation to steady-state conjugate heat transfer problems, where the heat conduction equation governs the heat transfer in a solid and is coupled to thermal transport... in a surrounding fluid, governed by a convection-diffusion equation, where the convective velocity field is found from solving the isothermal incompressible steady-state Navier-Stokes equations. Topology optimisation is also applied to steady-state natural convection problems. The modelling is done using stabilised... finite elements, the formulation and implementation of which was done partly during a special course as preparatory work for this thesis. The formulation is extended with a Brinkman friction term in order to facilitate the topology optimisation of fluid flow and convective cooling problems. The derived...

  18. Benchmarks for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available When algorithms solve dynamic multi-objective optimisation problems (DMOOPs), benchmark functions should be used to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for dynamic multi...

  19. Credit price optimisation within retail banking

    African Journals Online (AJOL)

    2014-02-14

    Feb 14, 2014 ... cost based pricing, where the price of a product or service is based on the .... function obtained from fitting a logistic regression model .... Note that the proposed optimisation approach below will allow us to also incorporate.

  20. Towards predictive models for transitionally rough surfaces

    Science.gov (United States)

    Abderrahaman-Elena, Nabil; Garcia-Mayoral, Ricardo

    2017-11-01

    We analyze and model the previously presented decomposition for flow variables in DNS of turbulence over transitionally rough surfaces. The flow is decomposed into two contributions: one produced by the overlying turbulence, which has no footprint of the surface texture, and one induced by the roughness, which is essentially the time-averaged flow around the surface obstacles, but modulated in amplitude by the first component. The roughness-induced component closely resembles the laminar steady flow around the roughness elements at the same non-dimensional roughness size. For small, yet transitionally rough, textures, the roughness-free component is essentially the same as over a smooth wall. Based on these findings, we propose predictive models for the onset of the transitionally rough regime. Project supported by the Engineering and Physical Sciences Research Council (EPSRC).

  1. User perspectives in public transport timetable optimisation

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2014-01-01

    The present paper deals with timetable optimisation from the perspective of minimising the waiting time experienced by passengers when transferring either to or from a bus. Due to its inherent complexity, this bi-level minimisation problem is extremely difficult to solve mathematically, since tim...... on the large-scale public transport network in Denmark. The timetable optimisation approach yielded a yearly reduction in weighted waiting time equivalent to approximately 45 million Danish kroner (9 million USD)....

  2. Methodological principles for optimising functional MRI experiments

    International Nuclear Information System (INIS)

    Wuestenberg, T.; Giesel, F.L.; Strasburger, H.

    2005-01-01

    Functional magnetic resonance imaging (fMRI) is one of the most common methods for localising neuronal activity in the brain. Even though the sensitivity of fMRI is comparatively low, the optimisation of certain experimental parameters allows obtaining reliable results. In this article, approaches for optimising the experimental design, imaging parameters and analytic strategies will be discussed. Clinical neuroscientists and interested physicians will receive practical rules of thumb for improving the efficiency of brain imaging experiments. (orig.) [de

  3. Optimisation: how to develop stakeholder involvement

    International Nuclear Information System (INIS)

    Weiss, W.

    2003-01-01

    The Precautionary Principle is an internationally recognised approach for dealing with risk situations characterised by uncertainties and potentially irreversible damage. Since the late fifties, ICRP has adopted this prudent attitude because of the lack of scientific evidence concerning the existence of a threshold at low doses for stochastic effects. The 'linear, no-threshold' model and the 'optimisation of protection' principle have been developed as a pragmatic response for the management of the risk. The progress in epidemiology and radiobiology over the last decades has affirmed the initial assumption, and optimisation remains the appropriate response for the application of the precautionary principle in the context of radiological protection. The basic objective of optimisation is, for any source within the system of radiological protection, to maintain the level of exposure as low as reasonably achievable, taking into account social and economic factors. Methods, tools and procedures have been developed over the last two decades to put the optimisation principle into practice, with a central role given to cost-benefit analysis as a means to determine the optimised level of protection. However, as implementation of the principle has advanced, more emphasis has progressively been given to good practice, as well as to the importance of controlling individual levels of exposure through the optimisation process. In the context of the revision of its present recommendations, the Commission is reinforcing the emphasis on protection of the individual with the adoption of an equity-based system that recognises individual rights and a basic level of health protection. Another advancement is the role now given to 'stakeholder involvement' in the optimisation process as a means to improve the quality of the decision-aiding process for identifying and selecting protection actions considered as being accepted by all those involved. The paper

  4. Dose optimisation in single plane interstitial brachytherapy

    DEFF Research Database (Denmark)

    Tanderup, Kari; Hellebust, Taran Paulsen; Honoré, Henriette Benedicte

    2006-01-01

    BACKGROUND AND PURPOSE: Brachytherapy dose distributions can be optimised by modulation of source dwell times. In this study dose optimisation in single planar interstitial implants was evaluated in order to quantify the potential benefit in patients. MATERIAL AND METHODS: In 14 patients, treated for recurrent rectal and cervical cancer, flexible catheters were sutured intra-operatively to the tumour bed in areas with compromised surgical margin. Both non-optimised, geometrically and graphically optimised CT-based dose plans were made. The overdose index...... on the regularity of the implant, such that the benefit of optimisation was larger for irregular implants. OI and HI correlated strongly with target volume, limiting the usability of these parameters for comparison of dose plans between patients. CONCLUSIONS: Dwell time optimisation significantly......

  5. Benchmarking performance measurement and lean manufacturing in the rough mill

    Science.gov (United States)

    Dan Cumbo; D. Earl Kline; Matthew S. Bumgardner

    2006-01-01

    Lean manufacturing represents a set of tools and a stepwise strategy for achieving smooth, predictable product flow, maximum product flexibility, and minimum system waste. While lean manufacturing principles have been successfully applied to some components of the secondary wood products value stream (e.g., moulding, turning, assembly, and finishing), the rough mill is...

  6. Reduction environmental effects of civil aircraft through multi-objective flight plan optimisation

    International Nuclear Information System (INIS)

    Lee, D S; Gonzalez, L F; Walker, R; Periaux, J; Onate, E

    2010-01-01

    With rising environmental concern, the reduction of critical aircraft emissions, including carbon dioxide (CO2) and nitrogen oxides (NOx), is one of the most important aeronautical problems. There are many possible ways to attack the problem, such as designing a new wing/aircraft shape or a new, more efficient engine. This paper instead provides a set of acceptable flight plans as a first step, without replacing current aircraft. The paper investigates green aircraft design optimisation in terms of aircraft range, mission fuel weight (CO2) and NOx using advanced Evolutionary Algorithms coupled to flight optimisation system software. Two multi-objective design optimisations are conducted to find the best set of flight plans for current aircraft, considering discretised altitudes and Mach numbers, without redesigning the aircraft shape or engine type. The objectives of the first optimisation are to maximise aircraft range while minimising NOx with constant mission fuel weight. The second optimisation considers minimisation of mission fuel weight and NOx with fixed aircraft range. Numerical results show that the method is able to capture a set of useful trade-offs that reduce both NOx and CO2 (minimum mission fuel weight).

  7. Optimising the refrigeration cycle with a two-stage centrifugal compressor and a flash intercooler

    Energy Technology Data Exchange (ETDEWEB)

    Roeyttae, Pekka; Turunen-Saaresti, Teemu; Honkatukia, Juha [Lappeenranta University of Technology, Laboratory of Energy and Environmental Technology, PO Box 20, 53851 Lappeenranta (Finland)

    2009-09-15

    The optimisation of a refrigeration process with a two-stage centrifugal compressor and a flash intercooler is presented in this paper. The two centrifugal compressor stages are on the same shaft, and the electric motor is cooled with the refrigerant. The performance of the centrifugal compressor is evaluated based on semi-empirical specific-speed curves, and the effects of the Reynolds number, surface roughness and tip clearance have also been taken into account. The thermodynamic and transport properties of the working fluids are modelled with a real-gas model. The condensing and evaporation temperatures, the temperature after the flash intercooler, and the cooling power have been chosen as fixed values in the process. The aim is to maximise the coefficient of performance (COP). The method of optimisation, the operation of the compressor and flash intercooler, and the method for estimating the electric motor cooling are also discussed in the article. (author)

  8. Thermal performance monitoring and optimisation

    International Nuclear Information System (INIS)

    Sunde, Svein; Berg, Oeyvind

    1998-01-01

    Monitoring of the thermal efficiency of nuclear power plants is expected to become increasingly important as energy-market liberalisation exposes plants to increasing availability requirements and fiercer competition. The general goal in thermal performance monitoring is straightforward: to maximise the ratio of profit to cost under the constraints of safe operation. One may perceive this goal to be pursued in two ways: one oriented towards fault detection and cost-optimal predictive maintenance, and another aimed at optimising target values of parameters in response to any detected component degradation, changes in ambient conditions, or the like. Annual savings associated with effective thermal-performance monitoring are expected to be on the order of USD 100 000 for power plants of representative size. A literature review shows that a number of computer systems for thermal-performance monitoring exist, either as prototypes or commercially available. The characteristics and needs of power plants may vary widely, however, and decisions concerning the exact scope, content and configuration of a thermal-performance monitor may well follow a heuristic approach. Furthermore, re-use of existing software modules may be desirable. We therefore suggest the design of a flexible workbench for easy assembly of an experimental thermal-performance monitor at the Halden Project. The suggested design draws heavily on our extended experience in implementing control-room systems characterised by high levels of customisation, flexibility in configuration and modularity in structure, and on a number of relevant adjoining activities. The design includes a multi-computer communication system and a graphical user interface, and aims at a system adaptable to any combination of in-house or end-user modules, as well as commercially available software. (author)

  9. Computer simulations of a rough sphere fluid

    International Nuclear Information System (INIS)

    Lyklema, J.W.

    1978-01-01

    A computer simulation of rough hard spheres with a continuously variable roughness parameter, including the limits of smooth and completely rough spheres, is described. A system of 500 particles with a homogeneous mass distribution is simulated at 8 different densities and 5 different values of the roughness parameter. For these 40 physically distinct situations the intermediate scattering function for 6 values of the wave number, the orientational correlation functions and the velocity autocorrelation functions have been calculated. A comparison has been made with a neutron scattering experiment on neopentane, and the agreement was good for an intermediate value of the roughness parameter. Some approximations often made in neutron scattering experiments are also checked. The influence of the variable roughness parameter on the correlation functions has been investigated, and three simple stochastic models have been studied to describe the orientational correlation function, which shows the most pronounced dependence on the roughness. (Auth.)

  10. Optimised performance of industrial high resolution computerised tomography

    International Nuclear Information System (INIS)

    Maangaard, M.

    2000-01-01

    The purpose of non-destructive evaluation (NDE) is to acquire knowledge of the investigated sample. Digital X-ray imaging techniques such as radiography or computerised tomography (CT) produce images of the interior of a sample. The obtained image quality determines the possibility of detecting sample-related features, e.g. details and flaws. This thesis presents a method of optimising the performance of industrial X-ray equipment for the imaging task at hand in order to obtain images of high quality. CT produces maps of the X-ray linear attenuation of the sample's interior. CT can produce two-dimensional cross-section images or three-dimensional images with volumetric information on the investigated sample. The image contrast and noise depend on both the investigated sample and the equipment and settings used (X-ray tube potential, X-ray filtration, exposure time, etc.). Hence, it is vital to find the optimal equipment settings in order to obtain images of high quality. To be able to mathematically optimise the image quality, a model of the X-ray imaging system is needed, together with an appropriate measure of image quality. The optimisation is performed with a model developed for an X-ray image-intensifier-based radiography system. The model predicts the mean value and variance of the measured signal level in the collected radiographic images. The traditionally used measure of physical image quality is the signal-to-noise ratio (SNR). To calculate the signal-to-noise ratio, a well-defined detail (flaw) is required. It was found that maximising the SNR leads to ambiguities: the optimised settings found by maximising the SNR were dependent on the material in the detail. When CT is performed on irregularly shaped samples containing density and compositional variations, it is difficult to define which SNR to use for optimisation. This difficulty is solved by the measures of physical image quality proposed here, the ratios geometry

  11. Generation of safe optimised execution strategies for uml models

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas; Herbert-Hansen, Zaza Nadja Lee

    When designing safety critical systems there is a need for verification of safety properties while ensuring system operations have a specific performance profile. We present a novel application of model checking to derive execution strategies, sequences of decisions at workflow branch points...... which optimise a set of reward variables, while simultaneously observing constraints which encode any required safety properties and accounting for the underlying stochastic nature of the system. By evaluating quantitative properties of the generated adversaries we are able to construct an execution...

  12. Determination of forest road surface roughness by Kinect depth imaging

    Directory of Open Access Journals (Sweden)

    Francesco Marinello

    2017-12-01

    Roughness is a dynamic property of the gravel road surface that affects safety, ride comfort as well as vehicle tyre life and maintenance costs. A rapid survey of gravel road condition is fundamental for effective maintenance planning and definition of intervention priorities. Different non-contact techniques such as laser scanning, ultrasonic sensors and photogrammetry have recently been proposed to reconstruct the three-dimensional topography of road surfaces and allow extraction of roughness metrics. The application of the Microsoft Kinect™ depth camera is proposed and discussed here for the collection of 3D data sets from gravel roads, to be implemented in order to allow quantification of surface roughness. The objectives are to: (i) verify the applicability of the Kinect sensor for the characterisation of different forest roads, (ii) identify the appropriateness and potential of different roughness parameters and (iii) analyse the correlation with vibrations recorded by 3-axis accelerometers installed on different vehicles. The test implemented the Kinect depth camera for surface roughness determination of 4 different forest gravel roads and one well-maintained asphalt road as reference. Different vehicles (mountain bike, off-road motorcycle, ATV vehicle, 4WD car and compact crossover) were included in the experiment in order to verify the vibration intensity when travelling on different road surface conditions. Correlations between the extracted roughness parameters and the vibration levels of the tested vehicles were then verified. Coefficients of determination between 0.76 and 0.97 were found between average surface roughness and the standard deviation of relative accelerations, with higher values in the case of lighter vehicles.
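
    The reported correlation can be reproduced in miniature with an ordinary least-squares fit. The sketch below uses invented Sa and accelerometer values purely to illustrate how a coefficient of determination in the 0.76-0.97 range quoted above would be computed:

```python
import numpy as np

# hypothetical survey: Kinect-derived mean surface roughness Sa (mm) per road segment
sa = np.array([2.1, 3.4, 5.0, 6.2, 8.9, 11.3])
# standard deviation of relative acceleration recorded on the same segments (m/s^2)
acc_sd = np.array([0.35, 0.48, 0.71, 0.80, 1.15, 1.42])

# least-squares fit and coefficient of determination relating the two quantities
slope, intercept = np.polyfit(sa, acc_sd, 1)
pred = slope * sa + intercept
r2 = 1 - np.sum((acc_sd - pred) ** 2) / np.sum((acc_sd - acc_sd.mean()) ** 2)
print(f"acc_sd ~ {slope:.3f}*Sa + {intercept:.3f},  R^2 = {r2:.2f}")
```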

  13. Sensing roughness and polish direction

    DEFF Research Database (Denmark)

    Jakobsen, Michael Linde; Olesen, Anders Sig; Larsen, Henning Engelbrecht

    2016-01-01

    As a part of the work carried out in a project supported by the Danish Council for Technology and Innovation, we have investigated the option of smoothing standard CNC-machined surfaces. In the process of constructing optical prototypes, involving custom-designed optics, the development cost...... and time consumption can become prohibitive in a research budget. Machining the optical surfaces directly is expensive and time consuming. Alternatively, a more standardized and cheaper machining method can be used, calling for the object to be manually polished. During the polishing process, the operator...... needs information about the RMS-value of the surface roughness and the current direction of the scratches introduced by the polishing process. The RMS-value indicates to the operator how far he is from the final finish, and the scratch orientation is often specified by the customer in order to avoid...

  14. Software testing in roughness calculation

    International Nuclear Information System (INIS)

    Chen, Y L; Hsieh, P F; Fu, W E

    2005-01-01

    A test method to determine the quality of the functions provided by software for roughness measurement is presented in this study. The function quality of the software requirements should be part of, and assessed through, the entire life cycle of the software package. The specific function, or output accuracy, is crucial for the analysis of the experimental data. For scientific applications, however, commercial software is usually embedded in a specific instrument used for measurement or analysis during the manufacturing process. In general, the error ratio caused by the software becomes more apparent when dealing with relatively small quantities, such as measurements in the nanometre-scale range. The model of 'using a data generator' proposed by NPL in the UK was applied in this study. An example of roughness software is tested and analysed by the above-mentioned process. After selecting the 'reference results', the 'reference data' were generated by a programmable 'data generator'. The filter function with a 0.8 mm cutoff value, defined in ISO 11562, was tested with 66 sinusoidal data sets at different wavelengths. Test results from the commercial software and a CMS-written program were compared to the theoretical data calculated from the ISO standards. For the filter function in this software, the results showed a significant disagreement between the reference and test results. The short-cutoff feature for filtering at high frequencies does not function properly, while the long-cutoff feature shows the maximum difference in the filtering ratio, more than 70% between wavelengths of 300 μm and 500 μm. In conclusion, commercial software needs to be tested more extensively for specific applications, using an appropriate design of reference data sets, to ensure its function quality
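
    The reference-data approach is easy to prototype, because ISO 11562 defines a Gaussian weighting function whose amplitude transmission at the cutoff wavelength is exactly 50%: a synthetic sinusoid therefore makes a self-checking test case. A minimal sketch, assuming a uniformly sampled profile and trimming the filter edge effects:

```python
import numpy as np

def gaussian_profile_filter(z, dx, cutoff):
    """Gaussian mean-line filter per ISO 11562.
    z: profile heights, dx: sampling step, cutoff: cutoff wavelength (same units)."""
    alpha = np.sqrt(np.log(2) / np.pi)  # standardised constant from ISO 11562
    x = np.arange(-cutoff, cutoff + dx, dx)
    s = (1.0 / (alpha * cutoff)) * np.exp(-np.pi * (x / (alpha * cutoff)) ** 2)
    s /= s.sum()                         # normalise the discrete weights
    mean_line = np.convolve(z, s, mode="same")
    return z - mean_line                 # roughness = profile minus waviness mean line

# reference sinusoid at the cutoff wavelength: theoretical transmission is 50 %
dx, cutoff, wavelength = 0.0005, 0.8, 0.8   # all in mm
x = np.arange(0, 20.0, dx)
z = np.sin(2 * np.pi * x / wavelength)
r = gaussian_profile_filter(z, dx, cutoff)
core = slice(len(x) // 4, 3 * len(x) // 4)   # ignore filter edge effects
print("transmission ratio:", r[core].std() / z[core].std())  # ~0.50 expected
```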

  15. Shape optimisation and performance analysis of flapping wings

    KAUST Repository

    Ghommem, Mehdi

    2012-09-04

    In this paper, shape optimisation of flapping wings in forward flight is considered. This analysis is performed by combining a local gradient-based optimizer with the unsteady vortex lattice method (UVLM). Although the UVLM applies only to incompressible, inviscid flows where the separation lines are known a priori, Persson et al. [1] showed through a detailed comparison between UVLM and higher-fidelity computational fluid dynamics methods for flapping flight that the UVLM schemes produce accurate results for attached flow cases and even remain trend-relevant in the presence of flow separation. As such, they recommended the use of an aerodynamic model based on UVLM to perform preliminary design studies of flapping wing vehicles. Unlike standard computational fluid dynamics schemes, this method requires meshing of the wing surface only and not of the whole flow domain [2]. From the design or optimisation perspective taken in our work, it is fairly common (and sometimes entirely necessary, as a result of the excessive computational cost of the highest fidelity tools such as Navier-Stokes solvers) to rely upon such a moderate level of modelling fidelity to traverse the design space in an economical manner. The objective of the work described in this paper is to identify a set of optimised shapes that maximise the propulsive efficiency, defined as the ratio of the propulsive power over the aerodynamic power, under lift, thrust, and area constraints. The shape of the wings is modelled using B-splines, a technology used in the computer-aided design (CAD) field for decades. This basis can be used to smoothly discretize wing shapes with few degrees of freedom, referred to as control points. The locations of the control points constitute the design variables. The results suggest that changing the shape yields significant improvement in the performance of the flapping wings. The optimisation pushes the design to "bird-like" shapes with substantial increase in the time
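
    The B-spline parameterisation is straightforward to reproduce with standard tools. A minimal sketch, assuming scipy's BSpline in place of the authors' CAD machinery, with invented control points standing in for the design variables:

```python
import numpy as np
from scipy.interpolate import BSpline

# hypothetical wing outline parameterised by a handful of control points
degree = 3
ctrl = np.array([[0.0, 0.0], [0.2, 0.08], [0.5, 0.12], [0.8, 0.07], [1.0, 0.0]])
n = len(ctrl)
# open uniform knot vector so the curve interpolates the end control points
knots = np.concatenate(([0] * degree, np.linspace(0, 1, n - degree + 1), [1] * degree))
curve = BSpline(knots, ctrl, degree)

u = np.linspace(0, 1, 200)
xy = curve(u)   # smooth shape; moving rows of ctrl is what the optimiser would do
print(xy[:3])
```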

  16. Roughness analysis of graphite surfaces of casting elements

    Directory of Open Access Journals (Sweden)

    M. Wieczorowski

    2010-01-01

    In the paper, profilometric measurements of graphite casting elements are described. Basic topics necessary to assess the roughness of their surfaces, and the influence of asperities on various properties related to manufacturing and use, are discussed. The stylus profilometer technique for measuring surface irregularities is analysed, including its limits resulting from the pickup geometry and its contact with the measured object. The working principle of a tactile profilometer and the phenomena taking place during the movement of a probe over a measured surface are shown. One important aspect is the flight phenomenon, i.e. movement of the pickup without contact with the surface during inspection, resulting from too high a scanning speed. Results of comparative research on graphite elements of a new and a used mould and pin composing a set are presented. Using surface roughness, waviness and primary profile parameters (arithmetical mean of roughness profile heights Ra, biggest roughness profile height Rz, maximum primary profile height Pt, as well as maximum waviness profile height Wt), the possibility of using surface asperity parameters as a measure of the wear of chill graphite elements is demonstrated. The most often applied parameter is Ra, but with the help of parameters from the W and P families it is shown that large changes occur not only in roughness but also in the other components of surface irregularities.
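
    The profile parameters listed above reduce to simple statistics of a levelled height trace. The sketch below computes illustrative Ra, Rz and Pt values on a synthetic profile; the crude mean-line levelling and the split into five sampling lengths are simplifying assumptions, not the full ISO evaluation procedure:

```python
import numpy as np

def roughness_params(z, samples_per_length=5):
    """Illustrative Ra/Rz/Pt estimates from a profile z (heights in µm).
    Rz here is the mean peak-to-valley over equal sampling lengths."""
    z = z - z.mean()                      # remove the mean line (crude levelling)
    Ra = np.mean(np.abs(z))               # arithmetical mean deviation
    Pt = z.max() - z.min()                # total height of the (primary) profile
    segments = np.array_split(z, samples_per_length)
    Rz = np.mean([s.max() - s.min() for s in segments])
    return Ra, Rz, Pt

rng = np.random.default_rng(1)
profile = 0.8 * np.sin(np.linspace(0, 12 * np.pi, 4000)) + 0.1 * rng.standard_normal(4000)
print("Ra=%.3f  Rz=%.3f  Pt=%.3f (µm)" % roughness_params(profile))
```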

  17. Optimisation of Investment Resources at Small Enterprises

    Directory of Open Access Journals (Sweden)

    Shvets Iryna B.

    2014-03-01

    The goal of the article is to study the process of optimising the structure of investment resources and to develop criteria and stages for optimising the volumes of investment resources for small enterprises by type of economic activity. The article characterises the process of transforming investment resources into assets and liabilities on the balance sheets of small enterprises, and calculates the structure of the sources of formation of investment resources at small enterprises in Ukraine by type of economic activity in 2011. On the basis of this analysis of the structure of investment resources of small enterprises, the article forms the main groups of optimisation criteria in the context of individual small enterprises by type of economic activity. The article offers an algorithm and a step-by-step scheme for optimising investment resources at small enterprises, in the form of a multi-stage process of managing investment resources so as to increase their mobility and the rate of transformation of existing resources into investments. A prospect for further study in this direction is the development of a structural and logical scheme for optimising the volumes of investment resources at small enterprises.

  18. Multicriteria Optimisation in Logistics Forwarder Activities

    Directory of Open Access Journals (Sweden)

    Tanja Poletan Jugović

    2007-05-01

    The logistics forwarder, as organiser and planner of the coordination and integration of all elements of transport and logistics chains, uses appropriate methods in the process of planning and decision-making. One such method, analysed in this paper, which can be used to optimise the transport and logistics processes and activities of a logistics forwarder, is multicriteria optimisation. Using this method, a model of multicriteria optimisation of logistics forwarder activities is proposed. The suggested optimisation model follows the principles of multicriteria optimisation, which belongs to the operations research methods and represents the process of multicriteria ranking of variants. Among the many multicriteria procedures available, the PROMETHEE method (Preference Ranking Organization Method for Enrichment Evaluations) and Promcalc & Gaia V. 3.2, a multicriteria programming package based on that method, were used.
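
    PROMETHEE ranks variants by aggregating pairwise preference comparisons into net outranking flows. The toy sketch below uses an invented decision matrix and the simplest ('usual') preference function, so it illustrates the mechanics rather than the Promcalc & Gaia implementation:

```python
import numpy as np

# hypothetical decision matrix: 4 route variants x 3 criteria
# criteria: cost (minimise), time (minimise), reliability (maximise)
X = np.array([[120, 36, 0.90],
              [100, 48, 0.85],
              [140, 30, 0.95],
              [110, 40, 0.80]])
weights = np.array([0.4, 0.3, 0.3])
maximise = np.array([False, False, True])

# 'usual' preference function: 1 if a beats b on the criterion, else 0
n = len(X)
phi = np.zeros(n)
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        diff = np.where(maximise, X[a] - X[b], X[b] - X[a])
        pi_ab = np.sum(weights * (diff > 0))   # aggregated preference of a over b
        pi_ba = np.sum(weights * (diff < 0))
        phi[a] += (pi_ab - pi_ba) / (n - 1)    # net outranking flow (PROMETHEE II)

print("net flows:", np.round(phi, 3), "best variant:", int(np.argmax(phi)))
```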

  19. Noise aspects at aerodynamic blade optimisation projects

    International Nuclear Information System (INIS)

    Schepers, J.G.

    1997-06-01

    The Netherlands Energy Research Foundation (ECN) has often been involved in industrial projects in which blade geometries are created automatically by means of numerical optimisation. Usually, these projects aim at determining the aerodynamically optimal wind turbine blade, i.e. the goal is to design a blade which is optimal with regard to energy yield. In other cases, blades have been designed which are optimal with regard to the cost of generated energy. However, it is obvious that the wind turbine blade designs which result from these optimisations are not necessarily optimal with regard to noise emission. In this paper an example is shown of an aerodynamic blade optimisation using the ECN program PVOPT. PVOPT calculates the optimal wind turbine blade geometry such that the maximum energy yield is obtained. Using the aerodynamically optimal blade design as a basis, the possibilities of noise reduction are investigated. 11 figs., 8 refs

  20. Rough-fuzzy pattern recognition applications in bioinformatics and medical imaging

    CERN Document Server

    Maji, Pradipta

    2012-01-01

    Learn how to apply rough-fuzzy computing techniques to solve problems in bioinformatics and medical image processing Emphasizing applications in bioinformatics and medical image processing, this text offers a clear framework that enables readers to take advantage of the latest rough-fuzzy computing techniques to build working pattern recognition models. The authors explain step by step how to integrate rough sets with fuzzy sets in order to best manage the uncertainties in mining large data sets. Chapters are logically organized according to the major phases of pattern recognition systems dev

  1. Topology Optimisation of Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Thike Aye Min

    2016-01-01

    Wireless sensor networks are widely used in a variety of fields, including industrial environments. In the case of a clustered network, the location of the cluster head affects the reliability of the network operation. Finding the optimum location of the cluster head is therefore critical for the design of a network. This paper discusses an optimisation approach, based on the brute force algorithm, in the context of topology optimisation of a cluster-structured, centralised wireless sensor network. Two examples demonstrating the implementation of the brute force algorithm to find an optimum location of the cluster head are given to verify the approach.
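
    'Brute force' here simply means scoring every candidate head position on a grid. A minimal sketch, in which the node coordinates and the energy-motivated sum-of-squared-distances objective are assumptions made for illustration:

```python
import numpy as np

# hypothetical sensor coordinates on a 100 m x 100 m field
rng = np.random.default_rng(7)
sensors = rng.uniform(0, 100, size=(30, 2))

# brute force: evaluate every candidate grid position for the cluster head;
# assumed objective: minimise total squared sensor-to-head distance (~ radio energy)
xs = np.arange(0, 101, 1.0)
best, best_cost = None, np.inf
for x in xs:
    for y in xs:
        cost = np.sum((sensors - (x, y)) ** 2)
        if cost < best_cost:
            best, best_cost = (x, y), cost

print("optimum cluster-head location:", best)  # ~ the centroid for this objective
```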

  2. Simplified Approach to Predicting Rough Surface Transition

    Science.gov (United States)

    Boyle, Robert J.; Stripf, Matthias

    2009-01-01

    Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the physical height of the surface roughness increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular as well as statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.

  3. Concepts of optimisation and justification consequences for radiological mass screening

    International Nuclear Information System (INIS)

    Carmichael, J.H.E.

    1987-01-01

    Mass radiological screening campaigns have been mounted in many countries for different conditions, and the needs of one country are not necessarily those of another. However, in the European Community there is reasonable uniformity in disease patterns, and therefore a mass screening situation applicable to one country is probably equally applicable throughout the Community. In radiation protection terms, all these potential surveys must be examined under the same factors. In radiation protection one thinks first of all of justification of the practice. This is followed by optimisation of the technique used, so as to obtain the best balance between benefit and detriment; at this point one must remember that the radiation protection concept of optimisation includes a financial element as well as a purely clinical element, and this must eventually lead us to touch on cost effectiveness. The last portion of the ICRP system is the actual setting of dose limits. These are really only applicable to workers, not to patients. One cannot set an upper limit on the dose one is prepared to use in a diagnostic radiological examination, but one can say that the dose per examination should be examined and that the range of doses for that examination across institutions should be ascertained. This should enable any one institution to see where its dose range lies within the larger dose range, and to ensure that its radiological practice gives as low a dose as is reasonably achievable

  4. Rock discontinuity surface roughness variation with scale

    Science.gov (United States)

    Bitenc, Maja; Kieffer, D. Scott; Khoshelham, Kourosh

    2017-04-01

    Rock discontinuity surface roughness refers to local departures of the discontinuity surface from planarity and is an important factor influencing shear resistance. In practice, the Joint Roughness Coefficient (JRC) roughness parameter is commonly relied upon and used as input to a shear strength criterion such as that developed by Barton and Choubey [1977]. The estimation of roughness by JRC is hindered, firstly, by the subjective nature of visually comparing the joint profile to the ten standard profiles. Secondly, when correlating the standard JRC values with other objective measures of roughness, the roughness idealisation is limited to a 2D profile of 10 cm length. With the advance of measuring technologies that provide accurate and high-resolution 3D data of surface topography on different scales, new 3D roughness parameters have been developed. A desirable parameter is one that describes rock surface geometry as well as the direction and scale dependency of roughness. In this research, a 3D roughness parameter developed by Grasselli [2001] and adapted by Tatone and Grasselli [2009] is adopted. It characterises surface topography as the cumulative distribution of the local apparent inclination of asperities with respect to the shear strength (analysis) direction. Thus, the 3D roughness parameter describes the roughness amplitude and anisotropy (direction dependency), but does not capture the scale properties. In different studies the roughness scale-dependency has been attributed to data resolution or to the size of the joint surface (see a summary of research in [Tatone and Grasselli, 2012]). Clearly, lower resolution results in lower roughness. Investigations of the surface size effect, on the other hand, have produced conflicting results. While some studies have shown a decrease in roughness with increasing discontinuity size (negative scale effect), others have shown the existence of positive scale effects, or both positive and negative scale effects. We

  5. Granular computing in decision approximation an application of rough mereology

    CERN Document Server

    Polkowski, Lech

    2015-01-01

    This book presents a study in knowledge discovery in data, with knowledge understood as a set of relations among objects and their properties. Relations in this case are implicative decision rules, and the paradigm in which they are induced is that of computing with granules defined by rough inclusions, the latter introduced and studied within rough mereology, the fuzzified version of mereology. In this book, basic classes of rough inclusions are defined and, based on them, methods for inducing granular structures from data are highlighted. The resulting granular structures are subjected to classifying algorithms, notably k-nearest neighbours and Bayesian classifiers. Experimental results are given in detail, both in tabular and visualised form, for fourteen data sets from the UCI data repository. A striking feature of granular classifiers obtained by this approach is that, while preserving the accuracy achieved on the original data, they substantially reduce the size of the granulated data set as well as the set of granular...
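
    The granulation idea fits in a few lines: the standard rough inclusion measures the fraction of attributes on which two objects agree, a granule collects everything sufficiently included, and classification is a majority vote inside the granule. A toy sketch with invented data and an assumed radius of 2/3:

```python
import numpy as np
from collections import Counter

def rough_inclusion(u, v):
    """Standard rough inclusion: fraction of attributes on which u and v agree."""
    return np.mean(u == v)

def granule(u, data, radius):
    """Granule about u: all objects whose inclusion degree reaches the radius."""
    return [i for i, v in enumerate(data) if rough_inclusion(u, v) >= radius]

# toy symbolic data: rows = objects, columns = attributes; labels for classification
data = np.array([[1, 0, 2], [1, 0, 1], [0, 1, 2], [1, 1, 2], [0, 0, 1]])
labels = np.array(["a", "a", "b", "b", "a"])

test = np.array([1, 0, 2])
idx = granule(test, data, radius=2 / 3)           # granular neighbourhood
print(Counter(labels[idx]).most_common(1)[0][0])  # majority-vote class
```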

  6. Application of Surpac and Whittle Software in Open Pit Optimisation ...

    African Journals Online (AJOL)

    Application of Surpac and Whittle Software in Open Pit Optimisation and Design. ... This paper studies the Surpac and Whittle software and their application in designing an optimised pit.

  7. (MBO) algorithm in multi-reservoir system optimisation

    African Journals Online (AJOL)

    A comparative study of marriage in honey bees optimisation (MBO) algorithm in ... A practical application of the marriage in honey bees optimisation (MBO) ... to those of other evolutionary algorithms, such as the genetic algorithm (GA), ant ...

  8. AMSRIce03 Surface Roughness Data

    Data.gov (United States)

    National Aeronautics and Space Administration — Notice to Data Users: The documentation for this data set was provided solely by the Principal Investigator(s) and was not further developed, thoroughly reviewed, or...

  9. Portfolio optimisation for hydropower producers that balances riverine ecosystem protection and producer needs

    Science.gov (United States)

    Yin, X. A.; Yang, Z. F.; Liu, C. L.

    2014-04-01

    In deregulated electricity markets, hydropower portfolio design has become an essential task for producers. The previous research on hydropower portfolio optimisation focused mainly on the maximisation of profits but did not take into account riverine ecosystem protection. Although profit maximisation is the major objective for producers in deregulated markets, protection of riverine ecosystems must be incorporated into the process of hydropower portfolio optimisation, especially against a background of increasing attention to environmental protection and stronger opposition to hydropower generation. This research seeks mainly to remind hydropower producers of the requirement of river protection when they design portfolios and help shift portfolio optimisation from economically oriented to ecologically friendly. We establish a framework to determine the optimal portfolio for a hydropower reservoir, accounting for both economic benefits and ecological needs. In this framework, the degree of natural flow regime alteration is adopted as a constraint on hydropower generation to protect riverine ecosystems, and the maximisation of mean annual revenue is set as the optimisation objective. The electricity volumes assigned in different electricity submarkets are optimised by the noisy genetic algorithm. The proposed framework is applied to China's Wangkuai Reservoir to test its effectiveness. The results show that the new framework could help to design eco-friendly portfolios that can ensure a planned profit and reduce alteration of the natural flow regime.
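
    The core loop of such a framework (sample a candidate allocation, score expected revenue under uncertain prices, penalise flow-regime alteration) can be sketched with a simple noisy genetic algorithm. All numbers below, including the penalty standing in for the flow-alteration constraint, are invented for illustration and do not reproduce the Wangkuai case study:

```python
import numpy as np

# sketch of the noisy-GA idea: fitness (mean annual revenue) is averaged over
# resampled price scenarios so that noise does not mislead selection
rng = np.random.default_rng(3)
n_markets, pop_size = 3, 24
energy = 100.0  # GWh to allocate across submarkets

def fitness(alloc, n_samples=8):
    prices = rng.normal([42.0, 45.0, 40.0], [4.0, 8.0, 2.0], (n_samples, n_markets))
    revenue = prices @ alloc
    penalty = 50.0 * max(0.0, alloc[1] - 0.5 * energy)  # proxy flow-regime constraint
    return revenue.mean() - penalty

pop = rng.dirichlet(np.ones(n_markets), pop_size) * energy
for gen in range(60):
    fit = np.array([fitness(a) for a in pop])
    parents = pop[np.argsort(fit)][-pop_size // 2:]          # truncation selection
    children = parents + rng.normal(0, 2.0, parents.shape)   # Gaussian mutation
    children = np.abs(children)
    children *= energy / children.sum(axis=1, keepdims=True) # keep total energy fixed
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(a) for a in pop])]
print("allocation (GWh):", np.round(best, 1))
```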

  10. Simultaneous Topology, Shape, and Sizing Optimisation of Plane Trusses with Adaptive Ground Finite Elements Using MOEAs

    Directory of Open Access Journals (Sweden)

    Norapat Noilublao

    2013-01-01

    This paper proposes a novel integrated design strategy to accomplish simultaneous topology, shape, and sizing optimisation of a two-dimensional (2D) truss. An optimisation problem is posed to find the structural topology, shape, and element sizes of the truss such that two objective functions, mass and compliance, are minimised. Design constraints include stress, buckling, and compliance. The procedure for an adaptive ground elements approach is proposed and its encoding/decoding process is detailed. Two sets of design variables, defining truss layout, shape, and element sizes at the same time, are applied. A number of multiobjective evolutionary algorithms (MOEAs) are implemented to solve the design problem. Comparative performance based on a hypervolume indicator shows that multiobjective population-based incremental learning (PBIL) is the best performer. Optimising the three design variable types simultaneously is more efficient and effective.

  11. Reflections on the juridical roots of the principle of optimisation

    International Nuclear Information System (INIS)

    Lochard, J.; Boehler, M.C.

    1992-01-01

    The disciplines of jurisprudence tend in general towards a rationalisation and stabilisation of social or economic practice and are oriented towards concepts or practices which belong to the field of the determinate. When it comes to the principle of optimising radiological protection, however, the classical juridical technique of administrative law does not exactly answer the problems of implementing this. From the obligations of performance traditionally imposed by the government, a transition to obligation of behaviour by those involved seems to be called for, and this is what makes the optimisation principle difficult to qualify juridically. Instead of a law of command, exemption and control the government must essentially put its trust in the operators of nuclear installations by issuing a standard which sets an objective rather than a standard with the force of a regulation as in the past. Does the future of the juridical sciences in fact lie in the development of an administrative law of the indeterminate which would oblige the government to recognise that, even in the field of the determinate, it is not always government which knows best? While our classical administrative law is a law of command and control, the administrative law of the indeterminate will be that of the law of common effort, framed in collective acts and based on trust, consultation, and obligations of behaviour, all under the control of a judge who intervenes when there is a manifest contradiction between the acts and the promised behaviour. In French law optimisation has remained a general principle unaccompanied by specific provisions for its implementation. The object of our paper is to examine on what juridical foundations it would be possible to apply this principle at practical level without betraying its spirit. (author)

  12. Extending Particle Swarm Optimisers with Self-Organized Criticality

    DEFF Research Database (Denmark)

    Løvbjerg, Morten; Krink, Thiemo

    2002-01-01

    Particle swarm optimisers (PSOs) show potential in function optimisation, but still have room for improvement. Self-organized criticality (SOC) can help control the PSO and add diversity. Extending the PSO with SOC seems promising, reaching faster convergence and better solutions.
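
    The abstract gives no algorithmic detail, so the sketch below pairs a textbook PSO update with one plausible SOC-style mechanism: a per-particle criticality counter whose overflow triggers relocation. The criticality rule is an assumption made for illustration, not Løvbjerg and Krink's actual scheme:

```python
import numpy as np

# minimal PSO on the sphere function, with an assumed SOC-style diversity step:
# particles crowding the global best accumulate "criticality" and are relocated
def sphere(x):
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(0)
n, d = 20, 5
x = rng.uniform(-5, 5, (n, d))
v = np.zeros((n, d))
pbest, pval = x.copy(), sphere(x)
crit = np.zeros(n)

for it in range(200):
    g = pbest[np.argmin(pval)]
    r1, r2 = rng.random((n, d)), rng.random((n, d))
    v = 0.72 * v + 1.49 * r1 * (pbest - x) + 1.49 * r2 * (g - x)
    x = x + v
    # SOC-inspired step: increment criticality of particles near the global best
    crit[np.linalg.norm(x - g, axis=1) < 0.1] += 1
    relocate = crit > 5                  # threshold reached -> disperse ("avalanche")
    x[relocate] = rng.uniform(-5, 5, (relocate.sum(), d))
    crit[relocate] = 0
    f = sphere(x)
    better = f < pval
    pbest[better], pval[better] = x[better], f[better]

print("best value:", pval.min())
```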

  13. Operational Radiological Protection and Aspects of Optimisation

    International Nuclear Information System (INIS)

    Lazo, E.; Lindvall, C.G.

    2005-01-01

    Since 1992, the Nuclear Energy Agency (NEA), along with the International Atomic Energy Agency (IAEA), has sponsored the Information System on Occupational Exposure (ISOE). ISOE collects and analyses occupational exposure data and experience from over 400 nuclear power plants around the world and is a forum for radiological protection experts from both nuclear power plants and regulatory authorities to share lessons learned and best practices in the management of worker radiation exposures. In connection to the ongoing work of the International Commission on Radiological Protection (ICRP) to develop new recommendations, the ISOE programme has been interested in how the new recommendations would affect operational radiological protection application at nuclear power plants. Bearing in mind that the ICRP is developing, in addition to new general recommendations, a new recommendation specifically on optimisation, the ISOE programme created a working group to study the operational aspects of optimisation, and to identify the key factors in optimisation that could usefully be reflected in ICRP recommendations. In addition, the Group identified areas where further ICRP clarification and guidance would be of assistance to practitioners, both at the plant and the regulatory authority. The specific objective of this ISOE work was to provide operational radiological protection input, based on practical experience, to the development of new ICRP recommendations, particularly in the area of optimisation. This will help assure that new recommendations will best serve the needs of those implementing radiation protection standards, for the public and for workers, at both national and international levels. (author)

  14. Optimisation of surgical care for rectal cancer

    NARCIS (Netherlands)

    Borstlap, W.A.A.

    2017-01-01

    Optimisation of surgical care means weighing the risk of treatment related morbidity against the patients’ potential benefits of a surgical intervention. The first part of this thesis focusses on the anaemic patient undergoing colorectal surgery. Hypothesizing that a more profound haemoglobin

  15. On optimal development and becoming an optimiser

    NARCIS (Netherlands)

    de Ruyter, D.J.

    2012-01-01

    The article aims to provide a justification for the claim that optimal development and becoming an optimiser are educational ideals that parents should pursue in raising their children. Optimal development is conceptualised as enabling children to grow into flourishing persons, that is persons who

  16. Particle Swarm Optimisation with Spatial Particle Extension

    DEFF Research Database (Denmark)

    Krink, Thiemo; Vesterstrøm, Jakob Svaneborg; Riget, Jacques

    2002-01-01

    In this paper, we introduce spatial extension to particles in the PSO model in order to overcome premature convergence in iterative optimisation. The standard PSO and the new model (SEPSO) are compared w.r.t. performance on well-studied benchmark problems. We show that the SEPSO indeed managed...

  17. OPTIMISATION OF COMPRESSIVE STRENGTH OF PERIWINKLE ...

    African Journals Online (AJOL)

    In this paper, a regression model is developed to predict and optimise the compressive strength of periwinkle shell aggregate concrete using Scheffe's regression theory. The results obtained from the derived regression model agreed favourably with the experimental data. The model was tested for adequacy using a student ...

  18. An efficient optimisation method in groundwater resource ...

    African Journals Online (AJOL)

    DRINIE

    2003-10-04

    Oct 4, 2003 ... theories developed in the field of stochastic subsurface hydrology. In reality, many ... Recently, some researchers have applied the multi-stage ... Then a robust solution of the optimisation problem given by Eqs. (1) to (3) is as ...

  19. Water distribution systems design optimisation using metaheuristics ...

    African Journals Online (AJOL)

    The topic of multi-objective water distribution systems (WDS) design optimisation using metaheuristics is investigated, comparing numerous modern metaheuristics, including several multi-objective evolutionary algorithms, an estimation of distribution algorithm and a recent hyperheuristic named AMALGAM (an evolutionary ...

  20. Optimisation of efficiency of axial fans

    NARCIS (Netherlands)

    Kruyt, Nicolaas P.; Pennings, P.C.; Faasen, R.

    2014-01-01

    A three-stage research project has been executed to develop ducted axial-fans with increased efficiency. In the first stage a design method has been developed in which various conflicting design criteria can be incorporated. Based on this design method, an optimised design has been determined

  1. Thermodynamic optimisation of a heat exchanger

    NARCIS (Netherlands)

    Cornelissen, Rene; Hirs, Gerard

    1999-01-01

    The objective of this paper is to show that for the optimal design of an energy system, where there is a trade-off between exergy saving during operation and exergy use during construction of the energy system, exergy analysis and life cycle analysis should be combined. An exergy optimisation of a

  2. Self-optimising control of sewer systems

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Montero-Castro, Ignacio; Mollerup, Ane Loft

    2013-01-01

    The definition of optimal performance was carried out through a two-stage optimisation (stochastic and deterministic) to take into account both the overflow during the current rain event and the expected overflow, given the probability of a future rain event. The methodology is successfully applied...

  3. Bed roughness experiments in supply limited conditions

    NARCIS (Netherlands)

    Spekkers, Matthieu; Tuijnder, Arjan; Ribberink, Jan S.; Hulscher, Suzanne J.M.H.; Parsons, D.R.; Garlan, T.; Best, J.L.

    2008-01-01

    Reliable roughness models are of great importance, for example when predicting water levels in rivers. The currently available roughness models are based on fully mobile bed conditions. However, in rivers where widely graded sediments are present, more or less permanent armour layers can develop

  4. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    This paper is devoted to the study of axiomatic characterisations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations producing the same operators, and IVF rough approximation operators are then characterised by axioms.
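
    For readers new to the notation, the crisp special case makes the approximation operators concrete: the lower approximation unions the indiscernibility classes contained in the target set, while the upper approximation unions those that merely intersect it. A toy sketch (partition and target set invented; the IVF generalisation replaces crisp membership with interval-valued degrees):

```python
# classical (crisp) rough approximations over an equivalence relation, shown as a
# baseline for the interval-valued fuzzy (IVF) operators the paper axiomatises
universe = set(range(10))
# equivalence classes induced by some indiscernibility relation (assumed partition)
classes = [{0, 1}, {2, 3, 4}, {5}, {6, 7, 8, 9}]
target = {1, 2, 3, 4, 5, 6}

lower = set().union(*[c for c in classes if c <= target])   # certainly in the set
upper = set().union(*[c for c in classes if c & target])    # possibly in the set
print("lower:", sorted(lower), "upper:", sorted(upper))
print("boundary:", sorted(upper - lower))
```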

  5. Wall roughness induces asymptotic ultimate turbulence

    NARCIS (Netherlands)

    Zhu, Xiaojue; Verschoof, Ruben Adriaan; Bakhuis, Dennis; Huisman, Sander Gerard; Verzicco, Roberto; Sun, Chao; Lohse, Detlef

    2018-01-01

    Turbulence governs the transport of heat, mass and momentum on multiple scales. In real-world applications, wall-bounded turbulence typically involves surfaces that are rough; however, characterizing and understanding the effects of wall roughness on turbulence remains a challenge. Here, by

  6. Optimising Shovel-Truck Fuel Consumption using Stochastic ...

    African Journals Online (AJOL)

    Optimising the fuel consumption and truck waiting time can result in significant fuel savings. The paper demonstrates that stochastic simulation is an effective tool for optimising the utilisation of fossil-based fuels in mining and related industries. Keywords: Stochastic, Simulation Modelling, Mining, Optimisation, Shovel-Truck ...

  7. Design of optimised backstepping controller for the synchronisation ...

    Indian Academy of Sciences (India)

    Ehsan Fouladi

    2017-12-18

    Dec 18, 2017 ... for the proposed optimised method compared to PSO optimised controller or any non-optimised backstepping controller. Keywords. Colpitts oscillator; backstepping controller; chaos synchronisation; shark smell algorithm; particle .... The velocity model is based on the gradient of the objective function, tilting ...

  8. Efficient topology optimisation of multiscale and multiphysics problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe

    The aim of this Thesis is to present efficient methods for optimising high-resolution problems of a multiscale and multiphysics nature. The Thesis consists of two parts: one treating topology optimisation of microstructural details and the other treating topology optimisation of conjugate heat...

  9. Pre-IceBridge ATM L2 Icessn Elevation, Slope, and Roughness

    Data.gov (United States)

    National Aeronautics and Space Administration — The NASA Pre-IceBridge ATM Level-2 Icessn Elevation, Slope, and Roughness (BLATM2) data set contains resampled and smoothed elevation measurements of Arctic and...

  10. Optimisation of NMR dynamic models II. A new methodology for the dual optimisation of the model-free parameters and the Brownian rotational diffusion tensor

    International Nuclear Information System (INIS)

    D'Auvergne, Edward J.; Gooley, Paul R.

    2008-01-01

    Finding the dynamics of an entire macromolecule is a complex problem, as the model-free parameter values are intricately linked to the Brownian rotational diffusion of the molecule, mathematically through the autocorrelation function of the motion and statistically through model selection. The solution to this problem was formulated using set theory as an element of the universal set U, the union of all model-free spaces (d'Auvergne EJ and Gooley PR (2007) Mol BioSyst 3(7), 483-494). The procedure commonly used to find the universal solution is to initially estimate the diffusion tensor parameters, to optimise the model-free parameters of numerous models, and then to choose the best model via model selection. The global model is then optimised and the procedure repeated until convergence. In this paper a new methodology is presented which takes a different approach to this diffusion-seeded model-free paradigm. Rather than starting with the diffusion tensor, this iterative protocol begins by optimising the model-free parameters in the absence of any global model parameters, selecting between all the model-free models, and finally optimising the diffusion tensor. The new model-free optimisation protocol will be validated using synthetic data from Schurr JM et al. (1994) J Magn Reson B 105(3), 211-224 and the relaxation data of the bacteriorhodopsin (1-36)BR fragment from Orekhov VY (1999) J Biomol NMR 14(4), 345-356. To demonstrate the importance of this new procedure, the NMR relaxation data of the Olfactory Marker Protein (OMP) of Gitti R et al. (2005) Biochem 44(28), 9673-9679 is reanalysed. The result is that the dynamics of certain secondary structural elements are very different from those originally reported

  11. Electrochemically grown rough-textured nanowires

    International Nuclear Information System (INIS)

    Tyagi, Pawan; Postetter, David; Saragnese, Daniel; Papadakis, Stergios J.; Gracias, David H.

    2010-01-01

    Nanowires with a rough surface texture show unusual electronic, optical, and chemical properties; however, there are only a few existing methods for producing these nanowires. Here, we describe two methods for growing both free standing and lithographically patterned gold (Au) nanowires with a rough surface texture. The first strategy is based on the deposition of nanowires from a silver (Ag)-Au plating solution mixture that precipitates an Ag-Au cyanide complex during electrodeposition at low current densities. This complex disperses in the plating solution, thereby altering the nanowire growth to yield a rough surface texture. These nanowires are mass produced in alumina membranes. The second strategy produces long and rough Au nanowires on lithographically patternable nickel edge templates with corrugations formed by partial etching. These rough nanowires can be easily arrayed and integrated with microscale devices.

  12. Modeling surface roughness scattering in metallic nanowires

    Energy Technology Data Exchange (ETDEWEB)

    Moors, Kristof, E-mail: kristof@itf.fys.kuleuven.be [KU Leuven, Institute for Theoretical Physics, Celestijnenlaan 200D, B-3001 Leuven (Belgium); IMEC, Kapeldreef 75, B-3001 Leuven (Belgium); Sorée, Bart [IMEC, Kapeldreef 75, B-3001 Leuven (Belgium); Physics Department, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerpen (Belgium); KU Leuven, Electrical Engineering (ESAT) Department, Kasteelpark Arenberg 10, B-3001 Leuven (Belgium); Magnus, Wim [IMEC, Kapeldreef 75, B-3001 Leuven (Belgium); Physics Department, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerpen (Belgium)

    2015-09-28

    Ando's model provides a rigorous quantum-mechanical framework for electron-surface roughness scattering, based on the detailed roughness structure. We apply this method to metallic nanowires and improve the model introducing surface roughness distribution functions on a finite domain with analytical expressions for the average surface roughness matrix elements. This approach is valid for any roughness size and extends beyond the commonly used Prange-Nee approximation. The resistivity scaling is obtained from the self-consistent relaxation time solution of the Boltzmann transport equation and is compared to Prange-Nee's approach and other known methods. The results show that a substantial drop in resistivity can be obtained for certain diameters by achieving a large momentum gap between Fermi level states with positive and negative momentum in the transport direction.

  13. Suppression of intrinsic roughness in encapsulated graphene

    DEFF Research Database (Denmark)

    Thomsen, Joachim Dahl; Gunst, Tue; Gregersen, Søren Schou

    2017-01-01

    Roughness in graphene is known to contribute to scattering effects which lower carrier mobility. Encapsulating graphene in hexagonal boron nitride (hBN) leads to a significant reduction in roughness and has become the de facto standard method for producing high-quality graphene devices. We have...... fabricated graphene samples encapsulated by hBN that are suspended over apertures in a substrate and used noncontact electron diffraction measurements in a transmission electron microscope to measure the roughness of encapsulated graphene inside such structures. We furthermore compare the roughness...... of these samples to suspended bare graphene and suspended graphene on hBN. The suspended heterostructures display a root mean square (rms) roughness down to 12 pm, considerably less than that previously reported for both suspended graphene and graphene on any substrate and identical within experimental error...

  14. Optimisation of logistics processes of energy grass collection

    Science.gov (United States)

    Bányai, Tamás.

    2010-05-01

    The objective function of the optimisation is the maximisation of profit, i.e. the maximisation of the difference between revenue and cost. The objective function trades off the income of the assigned transportation demands against the logistics costs. The constraints are the following: (1) the free capacity of the assigned transportation resource is no less than the requested capacity of the transportation demand; (2) the calculated arrival time of the transportation resource at the harvesting place is not later than its requested arrival time; (3) the calculated arrival time of the transportation demand at the processing and production facility is not later than the requested arrival time; (4) one transportation demand is assigned to one transportation resource and one resource is assigned to one transportation demand. The decision variables of the optimisation problem are the set of scheduling variables and the assignment of resources to transportation demands. The evaluation parameters of the optimised system are the following: total costs of the collection process; utilisation of transportation resources and warehouses; efficiency of production and/or processing facilities. The multidimensional heuristic optimisation method is based on a genetic algorithm, while the routing sequences are optimised by an ant colony algorithm. The optimal routes are calculated by the ant colony algorithm as a subroutine of the global optimisation method, and the optimal assignment is given by the genetic algorithm. One important part of the mathematical method is the sensitivity analysis of the objective function, which shows the influence of the different input parameters. Acknowledgements: This research was implemented within the frame of the project entitled "Development and operation of the Technology and Knowledge Transfer Centre of the University of Miskolc", with support by the European Union and co-funding of the European Social
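
    The assignment part of the problem can be sketched with a small genetic algorithm that enforces the capacity constraint (1) and maximises profit as revenue minus cost. Every number below is invented, and the time-window constraints (2)-(3) are omitted for brevity:

```python
import numpy as np

# toy GA for assigning transport demands to resources under constraint (1),
# with the profit-minus-cost objective described in the abstract
rng = np.random.default_rng(5)
n_dem, n_res = 8, 4
cap_req = rng.uniform(1, 4, n_dem)      # requested capacity per demand
cap_free = rng.uniform(5, 9, n_res)     # free capacity per resource
income = rng.uniform(20, 40, n_dem)     # revenue per assigned demand
cost = rng.uniform(1, 6, (n_dem, n_res))

def profit(assign):
    load = np.bincount(assign, weights=cap_req, minlength=n_res)
    if np.any(load > cap_free):          # constraint (1) violated -> infeasible
        return -np.inf
    return income.sum() - cost[np.arange(n_dem), assign].sum()

pop = rng.integers(0, n_res, (40, n_dem))
for gen in range(150):
    fit = np.array([profit(ind) for ind in pop])
    parents = pop[np.argsort(fit)][-20:]              # keep the fitter half
    children = parents.copy()
    mask = rng.random(children.shape) < 0.15          # mutation: reassign a demand
    children[mask] = rng.integers(0, n_res, mask.sum())
    pop = np.vstack([parents, children])

print("best profit:", max(profit(ind) for ind in pop))
```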

  15. Fractures in sport: Optimising their management and outcome

    Science.gov (United States)

    Robertson, Greg AJ; Wood, Alexander M

    2015-01-01

    Fractures in sport are a specialised cohort of fracture injuries, occurring in a high-functioning population, in which the goals are rapid restoration of function and return to play with the minimal symptom profile possible. While the general principles of fracture management, namely accurate fracture reduction, appropriate immobilisation and timely rehabilitation, guide the treatment of these injuries, management of fractures in athletic populations can differ significantly from that in the general population, due to the need to facilitate a rapid return to high-demand activities. However, despite fractures comprising up to 10% of all sporting injuries, dedicated research into the management and outcome of sport-related fractures is limited. In order to assess the optimal methods of treating such injuries, and so allow optimisation of their outcome, the evidence for the management of each specific sport-related fracture type requires assessment and analysis. We present and review the current evidence directing management of fractures in athletes with the aim of promoting valid innovative methods and optimising the outcome of such injuries. From this, key recommendations are provided for the management of the common fracture types seen in the athlete. Six case reports are also presented to illustrate the management planning and application of sport-focussed fracture management in the clinical setting. PMID:26716081

  16. Optimisation of Transmission Systems by use of Phase Shifting Transformers

    Energy Technology Data Exchange (ETDEWEB)

    Verboomen, J

    2008-10-13

    In this thesis, transmission grids with PSTs (Phase Shifting Transformers) are investigated. In particular, the following goals are put forward: (a) The analysis and quantification of the impact of a PST on a meshed grid. This includes the development of models for the device; (b) The development of methods to obtain optimal coordination of several PSTs in a meshed grid. An objective function should be formulated, and an optimisation method must be adopted to solve the problem; and (c) The investigation of different strategies to use a PST. Chapter 2 gives a short overview of active power flow controlling devices. In chapter 3, a first step towards optimal PST coordination is taken. In chapter 4, metaheuristic optimisation methods are discussed. Chapter 5 introduces DC load flow approximations, leading to analytically closed equations that describe the relation between PST settings and active power flows. In chapter 6, some applications of the methods that are developed in earlier chapters are presented. Chapter 7 contains the conclusions of this thesis, as well as recommendations for future work.
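
    The DC load-flow relation referred to in chapter 5 can be illustrated with a toy three-bus network. The sketch below is a generic textbook DC load flow in Python, not code from the thesis; the network data and PST angle are invented, and the PST is represented by the standard equivalent power-injection pair at its terminal buses.

        # Minimal DC load-flow sketch (generic textbook method, not the thesis code).
        # Under the DC approximation the flow on line i-j is
        # P_ij = (theta_i - theta_j + alpha_ij) / x_ij, where alpha_ij is the extra
        # phase shift injected by a PST on that line. Network data are invented.
        import numpy as np

        lines = [(0, 1, 0.10), (1, 2, 0.20), (0, 2, 0.25)]   # (from, to, reactance)
        P = np.array([1.0, -0.4, -0.6])                      # bus injections, bus 0 = slack
        alpha = {(0, 2): 0.05}                               # PST angle on line 0-2 (rad)

        n = 3
        B = np.zeros((n, n))                                 # nodal susceptance matrix
        for i, j, x in lines:
            B[i, i] += 1 / x; B[j, j] += 1 / x
            B[i, j] -= 1 / x; B[j, i] -= 1 / x

        # PST modelled as the usual equivalent injection pair -alpha/x and +alpha/x.
        Pinj = P.copy()
        for (i, j), a in alpha.items():
            x = next(xx for ii, jj, xx in lines if (ii, jj) == (i, j))
            Pinj[i] -= a / x
            Pinj[j] += a / x

        theta = np.zeros(n)                                  # reference angle theta[0] = 0
        theta[1:] = np.linalg.solve(B[1:, 1:], Pinj[1:])

        for i, j, x in lines:
            a = alpha.get((i, j), 0.0)
            print(f"flow {i}->{j}: {(theta[i] - theta[j] + a) / x:+.3f} p.u.")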

  17. A knowledge representation model for the optimisation of electricity generation mixes

    International Nuclear Information System (INIS)

    Chee Tahir, Aidid; Bañares-Alcántara, René

    2012-01-01

    Highlights: ► Prototype energy model which uses semantic representation (ontologies). ► Model accepts both quantitative and qualitative energy policy goals. ► Uses logical inference to formulate equations for linear optimisation. ► Proposes an electricity generation mix based on energy policy goals. -- Abstract: Energy models such as MARKAL, MESSAGE and DNE-21 are optimisation tools which aid in the formulation of energy policies. The strength of these models lies in their solid theoretical foundations, built on rigorous mathematical equations designed to process numerical (quantitative) data related to economics and the environment. Nevertheless, a complete consideration of energy policy issues also requires consideration of the political and social aspects of energy. These political and social issues are often associated with non-numerical (qualitative) information. To enable the evaluation of these aspects in a computer model, we hypothesise that a different approach to the design of energy optimisation models is required. A prototype energy model that is based on a semantic representation using ontologies and is integrated with engineering models implemented in Java has been developed. The model provides both quantitative and qualitative evaluation capabilities through the use of logical inference. The semantic representation of energy policy goals is used (i) to translate a set of energy policy goals into a set of logic queries which is then used to determine the preferred electricity generation mix and (ii) to assist in the formulation of a set of equations which is then solved in order to obtain a proposed electricity generation mix. Scenario case studies have been developed and tested on the prototype energy model to determine its capabilities. Knowledge queries were made on the semantic representation to determine an electricity generation mix which fulfilled a set of energy policy goals (e.g. CO2 emissions reduction, water conservation, energy supply
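
    The linear-optimisation step that the model's logic inference feeds into can be pictured with a minimal example. The sketch below is illustrative only: it chooses a generation mix that meets demand at minimum cost under a CO2 cap using scipy's linprog; the technology costs, emission factors, capacities and policy targets are all invented.

        # Illustrative only: a linear programme of the kind the record describes,
        # choosing a generation mix that meets demand at minimum cost under a CO2
        # cap. All technology data and policy targets are invented.
        from scipy.optimize import linprog

        techs = ["coal", "gas", "nuclear", "wind"]
        cost = [30.0, 50.0, 60.0, 70.0]          # $/MWh (hypothetical)
        co2 = [0.9, 0.4, 0.0, 0.0]               # tCO2/MWh (hypothetical)
        cap = [500, 400, 300, 200]               # per-technology limit, GWh
        demand, co2_cap = 1000, 350              # GWh demanded, ktCO2 allowed

        res = linprog(
            c=cost,                              # minimise total generation cost
            A_ub=[co2], b_ub=[co2_cap],          # emissions cap
            A_eq=[[1, 1, 1, 1]], b_eq=[demand],  # generation must meet demand
            bounds=list(zip([0, 0, 0, 0], cap)),
        )
        for t, x in zip(techs, res.x):
            print(f"{t:8s}{x:7.1f} GWh")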

  18. Surface roughness effects on turbulent Couette flow

    Science.gov (United States)

    Lee, Young Mo; Lee, Jae Hwa

    2017-11-01

    Direct numerical simulation of a turbulent Couette flow with two-dimensional (2-D) rod roughness is performed to examine the effects of the surface roughness. The Reynolds number based on the channel centerline laminar velocity (Uco) and channel half height (h) is Re = 7200. The 2-D rods are periodically arranged with a streamwise pitch of λ = 8k on the bottom wall, and the roughness height is k = 0.12h. It is shown that the wall-normal extent of the logarithmic layer is significantly shortened in the rough-wall turbulent Couette flow, compared to a turbulent Couette flow with a smooth wall. Although the Reynolds stresses in the outer layer are increased in a turbulent channel flow with surface roughness, due to large-scale ejection motions produced by the 2-D rods, those of the rough-wall Couette flow are decreased. Isosurfaces of the time-averaged u-structures suggest that the decrease of the turbulent activity near the centerline is associated with large-scale counter-rotating roll modes weakened by the surface roughness. This research was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1A09000537) and the Ministry of Science, ICT & Future Planning (NRF-2017R1A5A1015311).

  19. Skin friction measurements of systematically-varied roughness: Probing the role of roughness amplitude and skewness

    Science.gov (United States)

    Barros, Julio; Flack, Karen; Schultz, Michael

    2017-11-01

    Real-world engineering systems which feature either external or internal wall-bounded turbulent flow are routinely affected by surface roughness. This gives rise to performance degradation in the form of increased drag or head loss. However, at present there is no reliable means to predict these performance losses based upon the roughness topography alone. This work takes a systematic approach by generating random surface roughness in which the surface statistics are closely controlled. Skin friction and roughness function results will be presented for two groups of these rough surfaces. The first group is Gaussian (i.e. zero skewness) in which the root-mean-square roughness height (krms) is varied. The second group has a fixed krms, and the skewness is varied from approximately -1 to +1. The effect of the roughness amplitude and skewness on the skin friction will be discussed. Particular attention will be paid to the effect of these parameters on the roughness function in the transitionally-rough flow regime. For example, the role these parameters play in the monotonic or inflectional nature of the roughness function will be addressed. Future research into the details of the turbulence structure over these rough surfaces will also be outlined. Research funded by U.S. Office of Naval Research (ONR).
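
    The two statistics being varied, the root-mean-square height krms and the skewness, are simple moments of the surface height map, as the short sketch below shows. The surface here is synthetic filtered Gaussian noise, standing in for the study's manufactured roughness.

        # Surface-statistics sketch: the rms height k_rms and skewness S_k that the
        # study varies are simple moments of the height map. The surface below is
        # synthetic filtered Gaussian noise, not the experimental surfaces.
        import numpy as np
        from scipy.signal import convolve2d

        rng = np.random.default_rng(0)
        h = rng.normal(size=(256, 256))                  # raw random heights
        kernel = np.ones((8, 8)) / 64.0                  # crude correlation length
        h = convolve2d(h, kernel, mode="same", boundary="wrap")
        h -= h.mean()                                    # heights about the mean plane

        k_rms = np.sqrt(np.mean(h ** 2))                 # rms roughness height
        skew = np.mean(h ** 3) / k_rms ** 3              # roughness skewness
        print(f"k_rms = {k_rms:.4f}, skewness = {skew:+.3f}")   # ~0 for a Gaussian surface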

  20. Real-time optimisation of the Hoa Binh reservoir, Vietnam

    DEFF Research Database (Denmark)

    Richaud, Bertrand; Madsen, Henrik; Rosbjerg, Dan

    2011-01-01

    Multi-purpose reservoirs often have to be managed according to conflicting objectives, which requires efficient tools for trading-off the objectives. This paper proposes a multi-objective simulation-optimisation approach that couples off-line rule curve optimisation with on-line real-time optimisation. First, the simulation-optimisation framework is applied for optimising reservoir operating rules. Secondly, real-time and forecast information is used for on-line optimisation that focuses on short-term goals, such as flood control or hydropower generation, without compromising the deviation ... in the downstream part of the Red River, and at the same time to increase hydropower generation and to save water for the dry season. The real-time optimisation procedure further improves the efficiency of the reservoir operation and enhances the flexibility for the decision-making. Finally, the quality ...

  1. Numerical Schemes for Rough Parabolic Equations

    Energy Technology Data Exchange (ETDEWEB)

    Deya, Aurelien, E-mail: deya@iecn.u-nancy.fr [Universite de Nancy 1, Institut Elie Cartan Nancy (France)

    2012-04-15

    This paper is devoted to the study of numerical approximation schemes for a class of parabolic equations on (0,1) perturbed by a non-linear rough signal. It is the continuation of Deya (Electron. J. Probab. 16:1489-1518, 2011) and Deya et al. (Probab. Theory Relat. Fields, to appear), where the existence and uniqueness of a solution has been established. The approach combines rough paths methods with standard considerations on discretizing stochastic PDEs. The results apply to a geometric 2-rough path, which covers the case of the multidimensional fractional Brownian motion with Hurst index H>1/3.

  2. Research article – Optimisation of paediatric computed radiography for full spine curvature measurements using a phantom: a pilot study

    NARCIS (Netherlands)

    de Haan, Seraphine; Reis, Cláudia; Ndlovu, Junior; Serrenho, Catarina; Akhtar, Ifrah; Garcia, José Antonio; Linde, Daniël; Thorskog, Martine; Franco, Loris; Hogg, Peter

    2015-01-01

    Aim: Optimise a set of exposure factors, with the lowest effective dose, to delineate spinal curvature with the modified Cobb method in a full spine using computed radiography (CR) for a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired by varying a set of parameters:

  3. Acoustic Resonator Optimisation for Airborne Particle Manipulation

    Science.gov (United States)

    Devendran, Citsabehsan; Billson, Duncan R.; Hutchins, David A.; Alan, Tuncay; Neild, Adrian

    Advances in micro-electromechanical systems (MEMS) technology and biomedical research necessitate micro-machined manipulators to capture, handle and position delicate micron-sized particles. To this end, a parallel plate acoustic resonator system has been investigated for the purposes of manipulation and entrapment of micron-sized particles in air. Numerical and finite element modelling was performed to optimise the design of the layered acoustic resonator. To obtain an optimised resonator design, careful consideration of the effect of thickness and material properties is required. Furthermore, the effect of acoustic attenuation, which is frequency dependent, is also considered within this study, leading to an optimum operational frequency range. Finally, experimental results demonstrated good levitation and capture of particles of various properties and sizes, down to 14.8 μm.

  4. Techno-economic optimisation of energy systems

    International Nuclear Information System (INIS)

    Mansilla Pellen, Ch.

    2006-07-01

    The traditional approach currently used to assess the economic interest of energy systems is based on a defined flow-sheet. Some studies have shown that the flow-sheets corresponding to the best thermodynamic efficiencies do not necessarily lead to the best production costs. A method called techno-economic optimisation was proposed. This method aims at minimising the production cost of a given energy system, including both investment and operating costs. It was implemented using genetic algorithms. This approach was compared to the heat integration method on two different examples, thus validating its interest. Techno-economic optimisation was then applied to different energy systems dealing with hydrogen as well as electricity production. (author)

  5. Pre-operative optimisation of lung function

    Directory of Open Access Journals (Sweden)

    Naheed Azhar

    2015-01-01

    The anaesthetic management of patients with pre-existing pulmonary disease is a challenging task. It is associated with increased morbidity in the form of post-operative pulmonary complications. Pre-operative optimisation of lung function helps in reducing these complications. Patients are advised to stop smoking for a period of 4–6 weeks. This reduces airway reactivity, improves mucociliary function and decreases carboxy-haemoglobin. The widely used incentive spirometry may be useful only when combined with other respiratory muscle exercises. Volume-based inspiratory devices have the best results. Pharmacotherapy of asthma and chronic obstructive pulmonary disease must be optimised before considering the patient for elective surgery. Beta-2 agonists, inhaled corticosteroids and systemic corticosteroids are the main drugs used for this, and several drugs play an adjunctive role in medical therapy. A graded approach has been suggested to manage these patients for elective surgery with an aim to achieve optimal pulmonary function.

  6. Rough Neutrosophic Multi-Attribute Decision-Making Based on Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Kalyan Mondal

    2015-01-01

    This paper presents rough neutrosophic multi-attribute decision making based on grey relational analysis. While the concept of neutrosophic sets is a powerful logic to deal with indeterminate and inconsistent data, the theory of rough neutrosophic sets is also a powerful mathematical tool to deal with incompleteness. The rating of all alternatives is expressed with the upper and lower approximation operators and the pair of neutrosophic sets, which are characterized by truth-membership degree, indeterminacy-membership degree, and falsity-membership degree. The weight of each attribute is only partially known to the decision maker. We extend the neutrosophic grey relational analysis method to a rough neutrosophic grey relational analysis method and apply it to a multi-attribute decision making problem. The information entropy method is used to obtain the partially known attribute weights. An accumulated geometric operator is defined to transform a rough neutrosophic number (neutrosophic pair) to a single-valued neutrosophic number. The neutrosophic grey relational coefficient is determined by using the Hamming distance between each alternative and the ideal rough neutrosophic estimates reliability solution and the ideal rough neutrosophic estimates unreliability solution. The rough neutrosophic relational degree is then defined to determine the ranking order of all alternatives. Finally, a numerical example is provided to illustrate the applicability and efficiency of the proposed approach.
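
    The grey relational analysis core of the method can be sketched compactly. The example below implements classical entropy weighting and grey relational ranking against an ideal reference series; the decision matrix is invented and the rough neutrosophic operators (approximation pairs, accumulated geometric operator) are deliberately omitted.

        # Sketch of classical grey relational analysis (GRA) ranking, the backbone
        # of the method the record extends. The decision matrix and entropy-based
        # weights are illustrative; the neutrosophic/rough operators are omitted.
        import numpy as np

        X = np.array([[0.7, 0.6, 0.9],          # alternatives x criteria (benefit type)
                      [0.5, 0.8, 0.7],
                      [0.9, 0.5, 0.6]])

        # Entropy weights: criteria whose values differ more get larger weight.
        P = X / X.sum(axis=0)
        E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
        w = (1 - E) / (1 - E).sum()

        ideal = X.max(axis=0)                    # ideal (reliability) reference series
        d = np.abs(X - ideal)                    # Hamming-type distances to the ideal
        rho = 0.5                                # distinguishing coefficient
        xi = (d.min() + rho * d.max()) / (d + rho * d.max())   # grey relational coeff.
        degree = (xi * w).sum(axis=1)            # weighted grey relational degree

        print("weights:", np.round(w, 3))
        print("ranking (best first):", np.argsort(-degree) + 1)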

  7. Optimisation of rocker sole footwear for prevention of first plantar ulcer: comparison of group-optimised and individually-selected footwear designs.

    Science.gov (United States)

    Preece, Stephen J; Chapman, Jonathan D; Braunstein, Bjoern; Brüggemann, Gert-Peter; Nester, Christopher J

    2017-01-01

    Appropriate footwear for individuals with diabetes but no ulceration history could reduce the risk of first ulceration. However, individuals who deem themselves at low risk are unlikely to seek out bespoke, personalised footwear. Therefore, our primary aim was to investigate whether group-optimised footwear designs, which could be prefabricated and delivered in a retail setting, could achieve appropriate pressure reduction, or whether footwear selection must be made on a patient-by-patient basis. A second aim was to compare responses to footwear design between healthy participants and people with diabetes in order to understand the transferability of previous footwear research performed in healthy populations. Plantar pressures were recorded from 102 individuals with diabetes, considered at low risk of ulceration. This cohort included 17 individuals with peripheral neuropathy. We also collected data from 66 healthy controls. Each participant walked in 8 rocker shoe designs (4 apex positions × 2 rocker angles). ANOVA analysis was then used to understand the effect of the two design features, and descriptive statistics were used to identify the group-optimised design. Using 200 kPa as a target, this group-optimised design was then compared to the design identified as the best for each participant (using plantar pressure data). Peak plantar pressure increased significantly as apex position was moved distally and rocker angle reduced (p < …) … footwear which was individually selected. In terms of optimised footwear designs, healthy participants demonstrated the same response as participants with diabetes, despite having lower plantar pressures. This is the first study demonstrating that a group-optimised, generic rocker shoe might perform almost as well as footwear selected on a patient-by-patient basis in a low-risk patient group. This work provides a starting point for clinical evaluation of generic versus personalised pressure-reducing footwear.

  8. Optimised dipper fine tunes shovel performance

    Energy Technology Data Exchange (ETDEWEB)

    Fiscor, S.

    2005-06-01

    Joint efforts between mine operators, OEMs, and researchers have yielded unexpected benefits: dippers for coal, oil or hardrock mining shovels can now be tailored to meet site-specific conditions. The article outlines a process being developed by CRCMining and P&H Mining Equipment to optimise the dipper, involving rapid prototyping and scale modelling of the dipper and the mine conditions. Scale models have been successfully field tested. 2 photos.

  9. Public transport optimisation emphasising passengers’ travel behaviour.

    OpenAIRE

    Jensen, Jens Parbo; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    Passengers in public transport complaining about their travel experiences are not uncommon. This might seem counterintuitive since several operators worldwide are presenting better key performance indicators year by year. The present PhD study focuses on developing optimisation algorithms to enhance the operations of public transport while explicitly emphasising passengers’ travel behaviour and preferences. Similar to economic theory, interactions between supply and demand are omnipresent in ...

  10. Natural Erosion of Sandstone as Shape Optimisation.

    Science.gov (United States)

    Ostanin, Igor; Safonov, Alexander; Oseledets, Ivan

    2017-12-11

    Natural arches, pillars and other exotic sandstone formations have always attracted attention for their unusual shapes and the amazing mechanical balance that leaves a strong impression of intelligent design rather than of a stochastic process. It has recently been demonstrated that these shapes could be the result of negative feedback between stress and erosion that originates in fundamental laws of friction between the rock's constituent particles. Here we present a deeper analysis of this idea and bridge it with the approaches utilized in shape and topology optimisation. It appears that the processes of natural erosion, driven by stochastic surface forces and the Mohr-Coulomb law of dry friction, can be viewed within the framework of local optimisation for minimum elastic strain energy. Our hypothesis is confirmed by numerical simulations of the erosion using a topological-shape optimisation model. Our work contributes to a better understanding of stochastic erosion and of the feasible landscape formations that could be found on Earth and beyond.

  11. Exploration of automatic optimisation for CUDA programming

    KAUST Repository

    Al-Mouhamed, Mayez

    2014-09-16

    © 2014 Taylor & Francis. Writing an optimised compute unified device architecture (CUDA) program for graphic processing units (GPUs) is complex, even for experts. We present a design methodology for a restructuring tool that converts C-loops into optimised CUDA kernels based on a three-step algorithm: loop tiling, coalesced memory access and resource optimisation. A method for finding possible loop tiling solutions with coalesced memory access is developed, and a simplified algorithm for restructuring C-loops into an efficient CUDA kernel is presented. In the evaluation, we implement matrix multiply (MM), matrix transpose (M-transpose), matrix scaling (M-scaling) and matrix vector multiply (MV) using the proposed algorithm. We present the analysis of the execution time and GPU throughput for the above applications, which compare favourably to other proposals. Evaluation is carried out while scaling the problem size and running under a variety of kernel configurations. The obtained speedup is about 28-35% for M-transpose compared to the NVIDIA Software Development Kit, 33% for MV compared to a general-purpose computation on graphics processing units compiler, and more than 80% for MM and M-scaling compared to CUDA-lite.

  13. Optimisation and symmetry in experimental radiation physics

    International Nuclear Information System (INIS)

    Ghose, A.

    1988-01-01

    The present monograph is concerned with the optimisation of geometric factors in radiation physics experiments. The discussion is essentially confined to those systems in which optimisation is equivalent to symmetrical configurations of the measurement systems. These include measurements of interaction cross sections of diverse types, determination of polarisations, development of detectors with almost ideal characteristics, production of radiations with continuously variable energies, and development of high-efficiency spectrometers. The monograph is intended for use by experimental physicists investigating primary interactions of radiations with matter and associated technologies. We have illustrated the various optimisation procedures by considering the cases of the so-called '14 MeV' or d-t neutrons and gamma rays with energies less than 3 MeV. Developments in fusion technology are critically dependent on the availability of accurate cross sections of nuclei for fast neutrons of energies at least as high as those of d-t neutrons. In this monograph we have discussed various techniques which can be used to improve the accuracy of such measurements, and have also presented a method for generating almost monoenergetic neutrons in the 8 MeV to 13 MeV energy range which can be used to measure cross sections in this sparsely investigated region

  14. Spin Hall effect by surface roughness

    KAUST Repository

    Zhou, Lingjun; Grigoryan, Vahram L.; Maekawa, Sadamichi; Wang, Xuhui; Xiao, Jiang

    2015-01-01

    induced by surface roughness subscribes only to the side-jump contribution but not the skew scattering. The paradigm proposed in this paper provides the second, if not the only, alternative to generate a sizable spin Hall effect.

  15. Roughness coefficients for stream channels in Arizona

    Science.gov (United States)

    Aldridge, B.N.; Garrett, J.M.

    1973-01-01

    When water flows in an open channel, energy is lost through friction along the banks and bed of the channel and through turbulence within the channel. The amount of energy lost is governed by channel roughness, which is expressed in terms of a roughness coefficient. An evaluation of the roughness coefficient is necessary in many hydraulic computations that involve flow in an open channel. Owing to the lack of a satisfactory quantitative procedure, the ability to evaluate roughness coefficients can be developed only through experience; however, a basic knowledge of the methods used to assign the coefficients and the factors affecting them will be a great help. One of the most commonly used equations in open-channel hydraulics is that of Manning. The Manning equation is V = (1.486/n) R^(2/3) S^(1/2), where V is the mean velocity (ft/s), n is the roughness coefficient, R is the hydraulic radius (ft) and S is the energy gradient.
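
    Because the record quotes the US customary form of the equation (constant 1.486), a velocity calculation is a one-liner; the channel values in the example below are invented.

        # Manning's equation in US customary units, as quoted in the record:
        # V = (1.486 / n) * R**(2/3) * S**(1/2). The channel values are invented.
        def manning_velocity(n, hydraulic_radius_ft, slope):
            """Mean velocity (ft/s) for roughness coefficient n."""
            return (1.486 / n) * hydraulic_radius_ft ** (2.0 / 3.0) * slope ** 0.5

        # Example: natural stream with n = 0.035, R = 3 ft, S = 0.002.
        print(f"V = {manning_velocity(0.035, 3.0, 0.002):.2f} ft/s")
        # Roughly doubling n halves the computed velocity, which is why the
        # choice of roughness coefficient matters so much.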

  16. Investigation on Surface Roughness in Cylindrical Grinding

    Science.gov (United States)

    Rudrapati, Ramesh; Bandyopadhyay, Asish; Pal, Pradip Kumar

    2011-01-01

    Cylindrical grinding is a complex machining process, and surface roughness is often a key factor in any machining process when considering machine tool or machining performance. Further, surface roughness is one of the measures of the technological quality of a product and is a factor that greatly influences cost and quality. The present work is related to some aspects of surface finish in the context of traverse-cut cylindrical grinding. The parameters considered have been: infeed, longitudinal feed and work speed. Taguchi quality design is used to design the experiments and to identify the significant parameter(s) affecting the surface roughness. Using response surface methodology (RSM), a second-order model equation has been developed, and attempts have also been made to optimise the process in the context of surface roughness by using C programming.
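
    The second-order response-surface fit mentioned above is an ordinary least-squares problem once the quadratic design matrix is built. The sketch below fits such a model to synthetic roughness readings (the study's data are not reproduced here).

        # Response-surface sketch: fit a second-order (quadratic) model
        # Ra = b0 + sum(bi*xi) + sum(bii*xi^2) + sum(bij*xi*xj) by least squares.
        # The roughness readings below are synthetic, not the study's data.
        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(2)
        X = rng.uniform(-1, 1, size=(20, 3))       # coded infeed, long. feed, work speed
        Ra = (1.0 + 0.5 * X[:, 0] + 0.3 * X[:, 1] ** 2
              + 0.2 * X[:, 0] * X[:, 2] + rng.normal(0, 0.05, 20))

        def design(X):
            cols = [np.ones(len(X))]
            cols += [X[:, i] for i in range(3)]                    # linear terms
            cols += [X[:, i] ** 2 for i in range(3)]               # pure quadratic
            cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
            return np.column_stack(cols)

        b, *_ = np.linalg.lstsq(design(X), Ra, rcond=None)
        print("fitted coefficients:", np.round(b, 3))
        # The fitted surface can then be minimised over the coded region to pick
        # parameter settings that reduce Ra.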

  17. Rough horizontal plates: heat transfer and hysteresis

    Energy Technology Data Exchange (ETDEWEB)

    Tisserand, J-C; Gasteuil, Y; Pabiou, H; Castaing, B; Chilla, F [Universite de Lyon, ENS Lyon, CNRS, 46 allee d'Italie, 69364 Lyon Cedex 7 (France); Creyssels, M [LMFA, CNRS, Ecole Centrale Lyon, 69134 Ecully Cedex (France); Gibert, M, E-mail: mathieu.creyssels@ec-lyon.fr [Also at MPI-DS (LFPN) Gottingen (Germany)

    2011-12-22

    To investigate the influence of a rough-wall boundary layer on turbulent heat transport, an experiment on high-Rayleigh-number convection in water is carried out in a Rayleigh-Benard cell with a rough lower plate and a smooth upper plate. A transition in the heat transport is observed when the thermal boundary layer thickness becomes comparable to or smaller than the roughness height. Moreover, at Rayleigh numbers above the threshold value, heat transport is found to be increased by up to 60%. This enhancement cannot be explained simply by an increase in the contact area of the rough surface, since the contact area is increased by only 40%. Finally, a simple model is proposed to explain the enhanced heat transport.

  18. Surface excitation parameter for rough surfaces

    International Nuclear Information System (INIS)

    Da, Bo; Salma, Khanam; Ji, Hui; Mao, Shifeng; Zhang, Guanghui; Wang, Xiaoping; Ding, Zejun

    2015-01-01

    Graphical abstract: - Highlights: • Instead of providing a general mathematical model of roughness, we directly use a finite element triangle mesh method to build a fully 3D rough surface from the practical sample. • Surface plasmon excitation can be introduced to the realistic sample surface by dielectric response theory and the finite element method. • We found that SEPs calculated based on an ideal plane surface model are still reliable for real sample surfaces with common roughness. - Abstract: In order to assess quantitatively the importance of the surface excitation effect in surface electron spectroscopy measurements, the surface excitation parameter (SEP) has been introduced to describe the surface excitation probability as the average number of surface excitations that electrons undergo when they move through a solid surface in either the incoming or outgoing direction. Meanwhile, surface roughness is an inevitable issue in experiments, particularly when the sample surface is cleaned with ion beam bombardment. Surface roughness alters not only the electron elastic peak intensity but also the surface excitation intensity. However, almost all of the popular theoretical models for determining the SEP are based on an ideal plane surface approximation. In order to determine whether this approximation is adequate for SEP calculation, and over what range it holds, we propose a new way to determine the SEP for a rough surface by Monte Carlo simulation of the electron scattering process near a realistic rough surface, which is modeled by a finite element analysis method according to an AFM image. The elastic peak intensity is calculated for different electron incident and emission angles. Assuming surface excitations obey the Poisson distribution, the SEPs corrected for surface roughness are then obtained by analyzing the elastic peak intensity for several materials and for different incident and emission angles. It is found that the surface roughness only plays an

  19. Small-Scale Surf Zone Geometric Roughness

    Science.gov (United States)

    2017-12-01

    Small-scale surf zone surface roughness was estimated using stereo imagery (structure-from-motion) techniques. A waterproof two-camera system with self-logging and internal power was developed using commercial-off-the-shelf components and commercial software for operation 1 m above the sea surface within the surf zone. Subject terms: surface roughness, nearshore, aerodynamic roughness, surf zone, structure from motion, 3D imagery.

  20. How supercontinents and superoceans affect seafloor roughness.

    Science.gov (United States)

    Whittaker, Joanne M; Müller, R Dietmar; Roest, Walter R; Wessel, Paul; Smith, Walter H F

    2008-12-18

    Seafloor roughness varies considerably across the world's ocean basins and is fundamental to controlling the circulation and mixing of heat in the ocean and dissipating eddy kinetic energy. Models derived from analyses of active mid-ocean ridges suggest that ocean floor roughness depends on seafloor spreading rates, with rougher basement forming below a half-spreading rate threshold of 30–35 mm yr⁻¹ (refs 4, 5), as well as on the local interaction of mid-ocean ridges with mantle plumes or cold-spots. Here we present a global analysis of marine gravity-derived roughness, sediment thickness, seafloor isochrons and palaeo-spreading rates of Cretaceous to Cenozoic ridge flanks. Our analysis reveals that, after eliminating effects related to spreading rate and sediment thickness, residual roughness anomalies of 5-20 mGal remain over large swaths of ocean floor. We found that the roughness as a function of palaeo-spreading directions and isochron orientations indicates that most of the observed excess roughness is not related to spreading obliquity, as this effect is restricted to relatively rare occurrences of very high obliquity angles (>45°). Cretaceous Atlantic ocean floor, formed over mantle previously overlain by the Pangaea supercontinent, displays anomalously low roughness away from mantle plumes and is independent of spreading rates. We attribute this observation to a sub-Pangaean supercontinental mantle temperature anomaly leading to slightly thicker than normal Late Jurassic and Cretaceous Atlantic crust, reduced brittle fracturing and smoother basement relief. In contrast, ocean crust formed above Pacific superswells, probably reflecting metasomatized lithosphere underlain by mantle at only slightly elevated temperatures, is not associated with basement roughness anomalies. These results highlight a fundamental difference in the nature of large-scale mantle upwellings below supercontinents and superoceans, and their impact on oceanic crustal

  1. Role of surface roughness in superlubricity

    International Nuclear Information System (INIS)

    Tartaglino, U; Samoilov, V N; Persson, B N J

    2006-01-01

    We study the sliding of elastic solids in adhesive contact with flat and rough interfaces. We consider the dependence of the sliding friction on the elastic modulus of the solids. For elastically hard solids with planar surfaces with incommensurate surface structures we observe extremely low friction (superlubricity), which very abruptly increases as the elastic modulus decreases. We show that even a relatively small surface roughness may completely kill the superlubricity state

  2. Wall roughness induces asymptotic ultimate turbulence

    Science.gov (United States)

    Zhu, Xiaojue; Verschoof, Ruben A.; Bakhuis, Dennis; Huisman, Sander G.; Verzicco, Roberto; Sun, Chao; Lohse, Detlef

    2018-04-01

    Turbulence governs the transport of heat, mass and momentum on multiple scales. In real-world applications, wall-bounded turbulence typically involves surfaces that are rough; however, characterizing and understanding the effects of wall roughness on turbulence remains a challenge. Here, by combining extensive experiments and numerical simulations, we examine the paradigmatic Taylor-Couette system, which describes the closed flow between two independently rotating coaxial cylinders. We show how wall roughness greatly enhances the overall transport properties and the corresponding scaling exponents associated with wall-bounded turbulence. We reveal that if only one of the walls is rough, the bulk velocity is slaved to the rough side, due to the much stronger coupling to that wall by the detaching flow structures. If both walls are rough, the viscosity dependence is eliminated, giving rise to asymptotic ultimate turbulence—the upper limit of transport—the existence of which was predicted more than 50 years ago. In this limit, the scaling laws can be extrapolated to arbitrarily large Reynolds numbers.

  3. Optimisation of bitumen emulsion properties for ballast stabilisation

    International Nuclear Information System (INIS)

    D’Angelo, G.; Lo Presti, D.; Thom, N.

    2017-01-01

    Ballasted track, while providing economical and practical advantages, is associated with high costs and material consumption due to frequent maintenance. More sustainable alternatives to conventional ballasted trackbeds should therefore aim at extending its durability, particularly considering ongoing increases in traffic speed and loads. In this regard, the authors have investigated a solution consisting of bitumen stabilised ballast (BSB), designed to be used for new trackbeds as well as in reinforcing existing ones. This study presents the idea behind the technology and then focuses on a specific part of its development: the optimisation of bitumen emulsion properties and dosage in relation to ballast field conditions. Results showed that overall bitumen stabilisation improved ballast resistance to permanent deformation by enhancing stiffness and damping properties. Scenarios with a higher dosage of bitumen emulsion, higher viscosity, quicker setting behaviour, and harder base bitumen seem to represent the most desirable conditions to achieve enhanced in-field performance.

  4. Optimisation of electron beam characteristics by simulated annealing

    International Nuclear Information System (INIS)

    Ebert, M.A.; University of Adelaide, SA; Hoban, P.W.

    1996-01-01

    Full text: With the development of technology in the field of treatment beam delivery, the possibility of tailoring radiation beams (via manipulation of the beam's phase space) is foreseeable. This investigation involved evaluating a method for determining the characteristics of pure electron beams which provided dose distributions that best approximated desired distributions. The aim is to determine which degrees of freedom are advantageous and worth pursuing in a clinical setting. A simulated annealing routine was developed to determine optimum electron beam characteristics. A set of beam elements is defined at the surface of a homogeneous water-equivalent phantom, defining discrete positions and angles of incidence, and electron energies. The optimal weighting of these elements is determined by the (generally approximate) solution to the linear equation, Dw = d, where d represents the dose distribution calculated over the phantom, w the vector of (50–2×10⁴) beam element relative weights, and D a normalised matrix of dose deposition kernels. In the iterative annealing procedure, beam elements are randomly selected and beam weighting distributions are sampled and used to perturb the selected elements. Perturbations are accepted or rejected according to standard simulated annealing criteria. The result (after the algorithm has terminated due to meeting an iteration or optimisation specification) is an approximate solution for the beam weight vector (w) specified by the above equation. This technique has been applied for several sample dose distributions and phase space restrictions. An example is given of the phase space obtained when endeavouring to conform to a rectangular 100% dose region with polyenergetic though normally incident electrons. For regular distributions, intuitive conclusions regarding the benefits of energy/angular manipulation may be made, whereas for complex distributions, variations in intensity over beam elements of varying energy and
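
    The annealing loop described above is easy to sketch. The code below minimises the squared error ||Dw - d||² over non-negative beam weights with a Metropolis acceptance rule; the kernel matrix D, target dose d and cooling schedule are invented stand-ins rather than the authors' configuration.

        # Simulated-annealing sketch for the weight problem described above:
        # find w >= 0 approximately solving D w = d, where D holds per-element
        # dose kernels. D, d and the annealing schedule are all illustrative.
        import numpy as np

        rng = np.random.default_rng(3)
        n_vox, n_elem = 40, 12
        D = rng.random((n_vox, n_elem))            # hypothetical dose-deposition matrix
        w_true = rng.random(n_elem)
        d = D @ w_true                             # desired dose distribution

        def cost(w):
            return np.sum((D @ w - d) ** 2)        # squared dose error over the phantom

        w = rng.random(n_elem)                     # random initial beam weights
        T, c = 1.0, cost(w)
        for step in range(20000):
            T *= 0.9995                            # geometric cooling schedule
            w_new = w.copy()
            i = rng.integers(n_elem)               # perturb one randomly chosen element
            w_new[i] = max(0.0, w_new[i] + rng.normal(0, 0.1))
            c_new = cost(w_new)
            # Metropolis criterion: always accept improvements, sometimes accept worse.
            if c_new < c or rng.random() < np.exp((c - c_new) / T):
                w, c = w_new, c_new
        print(f"final dose error: {c:.2e}")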

  5. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    International Nuclear Information System (INIS)

    Taylor, A.; Blake, W.H.; Keith-Roach, M.J.

    2012-01-01

    Graphical abstract: showing the decrease in analytical uncertainty using the optimal (combined preconcentrated sample extract) method; nv (no value) where extract activities were <… ► 7Be geochemical behaviour is required to support tracer studies. ► Sequential extraction with natural 7Be returns high analytical uncertainties. ► Preconcentrating extracts from a large sample mass improved analytical uncertainty. ► This optimised method can be readily employed in studies using low activity samples. - Abstract: The application of cosmogenic 7Be as a sediment tracer at the catchment-scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of 7Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout 7Be in soils into geochemical fractions can introduce high gamma counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the on-going decay of 7Be (t1/2 = 53.3 days). Here, three different methods of preparing and quantifying 7Be extracted using the optimised BCR three-step scheme have been evaluated and compared, with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the 7Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural 7Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (∼10% (2σ) where extract activity >40% of total activity) and generated statistically useful sequential extraction profiles. Total recoveries of 7Be fell between 84 and 112%. The stable Be data demonstrated that the

  6. Dissolution of minerals with rough surfaces

    Science.gov (United States)

    de Assis, Thiago A.; Aarão Reis, Fábio D. A.

    2018-05-01

    We study the dissolution of minerals with initially rough surfaces using kinetic Monte Carlo simulations and a scaling approach. We consider a simple cubic lattice structure, a thermally activated rate of detachment of a molecule (site), and rough surface configurations produced by a fractional Brownian motion algorithm. First we revisit the problem of dissolution of initially flat surfaces, in which the dissolution rate rF reaches an approximately constant value at short times and is controlled by detachment of step-edge sites. For initially rough surfaces, the dissolution rate r at short times is much larger than rF; after dissolution of some hundreds of molecular layers, r decreases by some orders of magnitude across several time decades. Meanwhile, the surface evolves through configurations of decreasing energy, beginning with dissolution of isolated sites, then formation of terraces with disordered boundaries, their growth, and final smoothing. A crossover time to a smooth configuration is defined when r = 1.5rF; the surface retreat at the crossover is approximately 3 times the initial roughness and is temperature-independent, while the crossover time is proportional to the initial roughness and is controlled by step-edge site detachment. The initial dissolution process is described by the so-called rough rates, which are measured for fixed ratios between the surface retreat and the initial roughness. The temperature dependence of the rough rates indicates control by kink-site detachment; in general, it suggests that rough rates are controlled by the weakest microscopic bonds during the nucleation and formation of the lowest energy configurations of the crystalline surface. Our results are related to recent laboratory studies which show enhanced dissolution of polished calcite surfaces. In the application to calcite dissolution in an alkaline environment, the minimal values of recently measured dissolution rate spectra give rF ∼ 10⁻⁹ mol/(m² s), and the calculated rate

  7. Particle roughness in magnetorheology: effect on the strength of the field-induced structures

    International Nuclear Information System (INIS)

    Vereda, F; Segovia-Gutiérrez, J P; De Vicente, J; Hidalgo-Alvarez, R

    2015-01-01

    We report a study on the effect of particle roughness on the strength of the field-induced structures of magnetorheological (MR) fluids in the quasi-static regime. We prepared one set of MR fluids with carbonyl iron particles and another set with magnetite particles, and in both sets we had particles with different degrees of surface roughness. Small amplitude oscillatory shear (SAOS) magnetosweeps and steady shear (SS) tests were carried out on the suspensions to measure their elastic modulus (G′) and static yield stress (τ_static). Results for both the iron and the magnetite sets of suspensions were consistent: for the MR fluids prepared with rougher particles, G′ increased at smaller fields and τ_static was ca. 20% larger than for the suspensions prepared with relatively smooth particles. In addition to the experimental study, we carried out finite element method calculations to assess the effect of particle roughness on the magnetic interaction between particles. These calculations showed that roughness can facilitate the magnetization of the particles, thus increasing the magnetic energy of the system for a given field, but that this effect depends on the concrete morphology of the surface. For our real systems, no major differences were observed between the magnetization cycles of the MR fluids prepared with particles with different degrees of roughness, which implied that the effect of roughness on the measured G′ and τ_static was due mainly to friction between the solid surfaces of adjacent particles. (paper)

  8. Rough mill simulator version 3.0: an analysis tool for refining rough mill operations

    Science.gov (United States)

    Edward Thomas; Joel Weiss

    2006-01-01

    ROMI-3 is a rough mill computer simulation package designed to be used by both rip-first and chop-first rough mill operators and researchers. ROMI-3 allows users to model and examine the complex relationships among cutting bill, lumber grade mix, processing options, and their impact on rough mill yield and efficiency. Integrated into the ROMI-3 software is a new least-...

  9. Investigation of the effect of cutting speed on the Surface Roughness parameters in CNC End Milling using Artificial Neural Network

    International Nuclear Information System (INIS)

    Al Hazza, Muataz H F; Adesta, Erry Y T

    2013-01-01

    This research presents the effect of high cutting speed on the surface roughness in the end milling process, using an artificial neural network (ANN). An experimental investigation was conducted to measure the surface roughness for end milling. A set of sparse experiments for finish end milling of AISI H13 at a hardness of 48 HRC was conducted. The ANN was then applied to simulate and study the effect of high cutting speed on the surface roughness
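
    A minimal version of such an ANN can be set up with a small multilayer perceptron, as below. The training data are synthetic stand-ins following a plausible trend (roughness falling with cutting speed), not the AISI H13 measurements.

        # ANN sketch in the spirit of the record: learn surface roughness Ra from
        # cutting parameters with a small multilayer perceptron. The training data
        # are synthetic stand-ins, not the AISI H13 measurements.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(4)
        n = 200
        speed = rng.uniform(150, 500, n)           # cutting speed (m/min)
        feed = rng.uniform(0.05, 0.2, n)           # feed (mm/tooth)
        depth = rng.uniform(0.2, 1.0, n)           # axial depth of cut (mm)
        # Hypothetical trend: Ra falls with speed, rises with feed and depth.
        Ra = 2.0 - 0.002 * speed + 8.0 * feed + 0.4 * depth + rng.normal(0, 0.05, n)

        X = np.column_stack([speed, feed, depth])
        scaler = StandardScaler().fit(X)
        net = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=5000, random_state=0)
        net.fit(scaler.transform(X), Ra)

        test = scaler.transform([[450, 0.08, 0.5]])
        print(f"predicted Ra at high speed: {net.predict(test)[0]:.3f} um")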

  10. Effect of surface roughness on ultrasonic echo amplitude in aluminium-copper alloy castings

    International Nuclear Information System (INIS)

    Ambardar, R.; Pathak, S.D.; Prabhakar, O.; Jayakumar, T.

    1996-01-01

    In the present investigation, the influence of test surface roughness on the ultrasonic back-wall echo (BWE) amplitude in Al-4.5%Cu alloy cast specimens has been studied. The results indicate that as the surface roughness of the specimen increases, the relative BWE amplitude at a given probe frequency decreases. However, under the present set of experimental conditions, the decrease in BWE amplitude with increasing surface roughness of the test specimen is found to be appreciable at 10 MHz probe frequency. (author)

  11. Discussion on Implementation of ICRP Recommendations Concerning Reference Levels and Optimisation

    International Nuclear Information System (INIS)

    2013-02-01

    International Commission on Radiological Protection (ICRP) Publication 103, 'The 2007 Recommendations of the International Commission on Radiological Protection', issued in 2007, defines emergency exposure situations as unexpected situations that may require the implementation of urgent protective actions and perhaps longer term protective actions. The ICRP continues to recommend optimisation and the use of reference levels to ensure an adequate degree of protection in regard to exposure to ionising radiation in emergency exposure situations. Reference levels represent the level of dose or risk above which it is judged to be inappropriate to plan to allow exposures to occur and for which protective actions should therefore be planned and optimised. National authorities are responsible for establishing reference levels. The Expert Group on the Implementation of New International Recommendations for Emergency Exposure Situations (EGIRES) performed a survey to analyse the established processes for optimisation of the protection strategy for emergency exposure situations and for practical implementation of the reference level concept in several member states of the Nuclear Energy Agency (NEA). The EGIRES collected information on several national optimisation strategy definitions, on optimisation of protection for different protective actions, and also on optimisation of urgent protective actions. In addition, national criteria for setting reference levels, their use, and relevant processes, including specific triggers and dosimetric quantities used in setting reference levels, are focus points that the EGIRES also evaluated. The analysis of national responses to this 2011 survey shows many differences in the interpretation and application of the established processes and suggests that most countries are still in the early stages of implementing these processes. Since 2011, national authorities have continued their study of the ICRP recommendations to incorporate them into

  12. Application of the principles of justification and optimisation to products causing public exposure

    International Nuclear Information System (INIS)

    Fleishman, A.B.; Wrixon, A.D.

    1980-01-01

    The purpose of this paper is to explore the practical incorporation of the ICRP principles of justification and optimisation into a policy designed to control radioactive consumer products. ICRP recommends the use of cost benefit analysis and differential cost benefit analysis for justification and optimisation respectively, and expresses these procedures in simple mathematical forms. This might suggest that their application should involve quantification and therefore be objective. The problems which arise in such quantification are discussed. These include the derivation of a market demand curve for a given product and its adjustment to remove any distortion of perceived value or risk produced by advertising; the costing of detriment to those who receive no benefit (e.g., as a consequence of uncontrolled disposal); and the costs of the analyses themselves. Furthermore, both cost benefit and differential cost benefit analyses are dependent on availability of market and performance data. This is incompatible with prior approval schemes in which a decision must be made before the product is distributed. Ultimately criteria for product justification must therefore be based on judgements on the acceptability of risk that are political rather than scientific in nature. Optimisation must initially be carried out on an intuitive basis. However as experience is gained with similar products the opportunity for more formal optimisation and the setting of radiological protection standards arise. This requires further value judgements which are illustrated with reference to examples. (author)

  13. Advantages of Task-Specific Multi-Objective Optimisation in Evolutionary Robotics.

    Science.gov (United States)

    Trianni, Vito; López-Ibáñez, Manuel

    2015-01-01

    The application of multi-objective optimisation to evolutionary robotics is receiving increasing attention. A survey of the literature reveals the different possibilities it offers to improve the automatic design of efficient and adaptive robotic systems, and points to the successful demonstrations available for both task-specific and task-agnostic approaches (i.e., with or without reference to the specific design problem to be tackled). However, the advantages of multi-objective approaches over single-objective ones have not been clearly spelled out and experimentally demonstrated. This paper fills this gap for task-specific approaches: starting from well-known results in multi-objective optimisation, we discuss how to tackle commonly recognised problems in evolutionary robotics. In particular, we show that multi-objective optimisation (i) allows evolving a more varied set of behaviours by exploring multiple trade-offs of the objectives to optimise, (ii) supports the evolution of the desired behaviour through the introduction of objectives as proxies, (iii) avoids the premature convergence to local optima possibly introduced by multi-component fitness functions, and (iv) solves the bootstrap problem exploiting ancillary objectives to guide evolution in the early phases. We present an experimental demonstration of these benefits in three different case studies: maze navigation in a single robot domain, flocking in a swarm robotics context, and a strictly collaborative task in collective robotics.
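
    The core multi-objective machinery the paper relies on, keeping a set of non-dominated trade-offs rather than a single best individual, can be illustrated in a few lines. The sketch below extracts the Pareto front from invented two-objective scores; a full evolutionary loop (e.g. NSGA-II) would select parents from or near this front.

        # Minimal illustration of the multi-objective machinery the paper builds on:
        # extracting the Pareto-optimal set (non-dominated trade-offs) from a
        # population scored on two objectives. Scores are invented.
        import numpy as np

        rng = np.random.default_rng(5)
        # Each row: (task performance, behavioural diversity) for one controller.
        scores = rng.random((30, 2))

        def pareto_front(S):
            """Indices of non-dominated rows (both objectives maximised)."""
            keep = []
            for i, s in enumerate(S):
                dominated = any(np.all(t >= s) and np.any(t > s) for t in S)
                if not dominated:
                    keep.append(i)
            return keep

        front = pareto_front(scores)
        print(f"{len(front)} non-dominated controllers of {len(scores)}")
        # An evolutionary loop would select parents from (or near) this front,
        # preserving multiple trade-offs instead of collapsing to a single optimum.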

  15. Mechatronic System Design Based On An Optimisation Approach

    DEFF Research Database (Denmark)

    Andersen, Torben Ole; Pedersen, Henrik Clemmensen; Hansen, Michael Rygaard

    The objective of this project is to extend the current state of the art regarding the design of complex mechatronic systems by utilising an optimisation approach. We propose to investigate a novel framework for mechatronic system design. The novelty and originality being the use...... of optimisation techniques. The methods used to optimise/design within the classical disciplines will be identified and extended to mechatronic system design.

  16. Analysis and optimisation of heterogeneous real-time embedded systems

    DEFF Research Database (Denmark)

    Pop, Paul; Eles, Petru; Peng, Zebo

    2005-01-01

    . The success of such new design methods depends on the availability of analysis and optimisation techniques. Analysis and optimisation techniques for heterogeneous real-time embedded systems are presented in the paper. The authors address in more detail a particular class of such systems called multi...... of application messages to frames. Optimisation heuristics for frame packing aimed at producing a schedulable system are presented. Extensive experiments and a real-life example show the efficiency of the frame-packing approach....

  18. Mitigating mask roughness via pupil filtering

    Science.gov (United States)

    Baylav, B.; Maloney, C.; Levinson, Z.; Bekaert, J.; Vaglio Pret, A.; Smith, B.

    2014-03-01

    The roughness present on the sidewalls of lithographically defined patterns poses an important challenge for advanced technology nodes. It can originate from the aerial image or from photoresist chemistry/processing [1]. The latter remains the dominant contribution in ArF and KrF lithography; however, roughness originating from the mask and transferred to the aerial image is gaining more attention [2-9], especially for imaging conditions with large mask error enhancement factor (MEEF) values. The mask roughness contribution is usually in the low-frequency range, which is particularly detrimental to device performance, causing variations in electrical device parameters on the same chip [10-12]. This paper explains characteristic differences between pupil-plane filtering in amplitude and in phase for the purpose of mitigating mask roughness transfer under interference-like lithography imaging conditions, where one-directional periodic features are to be printed by partially coherent sources. White-noise edge roughness was used to perturb the mask features to validate the mitigation.

  19. Development of nano-roughness calibration standards

    International Nuclear Information System (INIS)

    Baršić, Gorana; Mahović, Sanjin; Zorc, Hrvoje

    2012-01-01

    At the Laboratory for Precise Measurements of Length, currently the Croatian National Laboratory for Length, unique nano-roughness calibration standards were developed, which have been physically implemented in cooperation with the company MikroMasch Trading OU and the Ruđer Bošković Institute. In this paper, a new design for a calibration standard with two measuring surfaces is presented. One of the surfaces is for the reproduction of roughness parameters, while the other is for the traceability of length units below 50 nm. The nominal values of the groove depths on these measuring surfaces are the same. Thus, a link between the measuring surfaces has been ensured, which makes these standards unique. Furthermore, the calibration standards available on the market are generally designed specifically for individual groups of measuring instrumentation, such as interferometric microscopes, stylus instruments, scanning electron microscopes (SEM) or scanning probe microscopes. In this paper, a new design for nano-roughness standards has been proposed for use in the calibration of optical instruments, as well as for stylus instruments, SEM, atomic force microscopes and scanning tunneling microscopes. Therefore, the development of these new nano-roughness calibration standards greatly contributes to the reproducibility of the results of groove depth measurement as well as the 2D and 3D roughness parameters obtained by various measuring methods. (paper)

  20. Flotation process control optimisation at Prominent Hill

    International Nuclear Information System (INIS)

    Lombardi, Josephine; Muhamad, Nur; Weidenbach, M.

    2012-01-01

    OZ Minerals' Prominent Hill copper-gold concentrator is located 130 km south east of the town of Coober Pedy in the Gawler Craton of South Australia. The concentrator was built in 2008 and commenced commercial production in early 2009. The Prominent Hill concentrator comprises a conventional grinding and flotation processing plant with a 9.6 Mtpa ore throughput capacity. The flotation circuit includes six rougher cells, an IsaMill for regrinding the rougher concentrate and a Jameson cell heading up the three-stage conventional cell cleaner circuit. In total there are four level controllers in the rougher train and ten level controllers in the cleaning circuit for 18 cells. Generic proportional-integral-derivative (PID) control used on the level controllers alone propagated any disturbances downstream in the circuit that were generated from the grinding circuit, hoppers, between cells and interconnected banks of cells, having a negative impact on plant performance. To better control such disturbances, the FloatStar level stabiliser was selected for installation on the flotation circuit to account for the interaction between the cells. Multivariable control was also installed on the five concentrate hoppers to maintain consistent feed to the cells and to the IsaMill. An additional area identified for optimisation in the flotation circuit was the mass pull rate from the rougher cells. The FloatStar flow optimiser was selected to be installed subsequent to the FloatStar level stabiliser. This allowed for a unified, consistent and optimal approach to running the rougher circuit. This paper describes the improvement in the stabilisation of the circuit achieved by the FloatStar level stabiliser, using the interaction matrix between cell level controllers, and the results and benefits of implementing the FloatStar flow optimiser on the rougher train.

  1. The current regulatory requirements on optimisation and BAT in Sweden in the context of geological disposal

    International Nuclear Information System (INIS)

    Dverstorp, B.

    2010-01-01

    the repository system and should be applied to the whole process of developing a disposal system, i.e. all steps from siting, design, construction and operation to closure of the repository. There are limits on what can be expected in terms of optimisation and BAT. The principle of voluntary participation of the municipalities in the Swedish Nuclear Fuel and Waste Management Co (SKB) site investigations is one example of a government-accepted societal limitation on site selection. Cost considerations also set boundaries to SKB's optimisation process. However, society may provide feedback to SKB on optimisation and BAT considerations during the development process, through the recurrent regulatory reviews and subsequent government decisions on SKB's programme for research, development and demonstration (RD and D programme). Finally, technical constraints could be the availability of technology and the effectiveness of various measures for enhancing the repository's protective capability. Regulatory review of optimisation and BAT will be based on demonstrating compliance with Swedish radiation safety regulations. It is the responsibility of SKB to motivate the balancing between radiological protection and societal and economic factors. Because we cannot foresee exactly which issues will appear in SKB's safety case, it is more or less impossible to define, a priori, a comprehensive set of acceptance criteria for BAT and optimisation. In this respect, SKB will not get the final answer on what is an appropriate level of optimisation and BAT until the licensing review. However, a stepwise process of developing a repository makes it possible to provide guidance along the way. Different ways are used to provide regulatory feedback to SKB prior to the license application. Nevertheless it is important that the safety case/license application contains a road map of the most important BAT considerations, i.e. the ones really affecting safety, throughout the development of the

  2. Improving Vector Evaluated Particle Swarm Optimisation by incorporating nondominated solutions.

    Science.gov (United States)

    Lim, Kian Sheng; Ibrahim, Zuwairie; Buyamin, Salinda; Ahmad, Anita; Naim, Faradila; Ghazali, Kamarul Hawari; Mokhtar, Norrima

    2013-01-01

    The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating nondominated solutions as the guidance for a swarm, rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.
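
    The abstract above describes the modification only in prose; the following minimal sketch (hypothetical function names, two-objective minimisation assumed) illustrates the core idea of guiding a swarm with a randomly chosen nondominated archive member rather than the companion swarm's single best solution:

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Keep the archive as a set of mutually nondominated objective vectors."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

def pick_guide(archive):
    """Improved-VEPSO-style guidance: a random nondominated solution,
    instead of the best solution of the other swarm (conventional VEPSO)."""
    return random.choice(archive)

# toy usage with four hypothetical 2-objective fitness vectors
archive = []
for f in [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5)]:
    archive = update_archive(archive, f)
print(archive)             # (2.5, 2.5) is dominated and excluded
print(pick_guide(archive)) # guide used in the velocity update
```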

  3. Improving Vector Evaluated Particle Swarm Optimisation by Incorporating Nondominated Solutions

    Directory of Open Access Journals (Sweden)

    Kian Sheng Lim

    2013-01-01

    Full Text Available The Vector Evaluated Particle Swarm Optimisation algorithm is widely used to solve multiobjective optimisation problems. This algorithm optimises one objective using a swarm of particles whose movements are guided by the best solution found by another swarm. However, the best solution of a swarm is only updated when a newly generated solution has better fitness than the best solution at the objective function optimised by that swarm, yielding poor solutions for multiobjective optimisation problems. Thus, an improved Vector Evaluated Particle Swarm Optimisation algorithm is introduced by incorporating nondominated solutions as the guidance for a swarm, rather than using the best solution from another swarm. In this paper, the performance of the improved Vector Evaluated Particle Swarm Optimisation algorithm is investigated using performance measures such as the number of nondominated solutions found, the generational distance, the spread, and the hypervolume. The results suggest that the improved Vector Evaluated Particle Swarm Optimisation algorithm has impressive performance compared with the conventional Vector Evaluated Particle Swarm Optimisation algorithm.

  4. Advanced manufacturing: optimising the factories of tomorrow

    International Nuclear Information System (INIS)

    Philippon, Patrick

    2013-01-01

    Patrick Philippon - Les Defis du CEA no. 179 - April 2013. Faced with competition from the emerging countries, the competitiveness of the industrialised nations depends on the ability of their industries to innovate. This strategy necessarily entails the reorganisation and optimisation of production systems. This is the whole challenge for 'advanced manufacturing', which relies on the new information and communication technologies. Interactive robotics, virtual reality and non-destructive testing are all technological building blocks developed by CEA, now approved within a cross-cutting programme, to meet the needs of industry and together build the factories of tomorrow. (author)

  5. Biorefinery plant design, engineering and process optimisation

    DEFF Research Database (Denmark)

    Holm-Nielsen, Jens Bo; Ehimen, Ehiazesebhor Augustine

    2014-01-01

    Before new biorefinery systems can be implemented, or the modification of existing single product biomass processing units into biorefineries can be carried out, proper planning of the intended biorefinery scheme must be performed initially. This chapter outlines design and synthesis approaches...... applicable for the planning and upgrading of intended biorefinery systems, and includes discussions on the operation of an existing lignocellulosic-based biorefinery platform. Furthermore, technical considerations and tools (i.e., process analytical tools) which could be applied to optimise the operations...... of existing and potential biorefinery plants are elucidated....

  6. Specification, Verification and Optimisation of Business Processes

    DEFF Research Database (Denmark)

    Herbert, Luke Thomas

    is extended with stochastic branching, message passing and reward annotations which allow for the modelling of resources consumed during the execution of a business process. Further, it is shown how this structure can be used to formalise the established business process modelling language Business Process...... fault tree analysis and the automated optimisation of business processes by means of an evolutionary algorithm. This work is motivated by problems that stem from the healthcare sector, and examples encountered in this field are used to illustrate these developments....

  7. Topology Optimisation for Coupled Convection Problems

    DEFF Research Database (Denmark)

    Alexandersen, Joe; Andreasen, Casper Schousboe; Aage, Niels

    stabilised finite elements implemented in a parallel multiphysics analysis and optimisation framework DFEM [1], developed and maintained in house. Focus is put on control of the temperature field within the solid structure and the problems can therefore be seen as conjugate heat transfer problems, where heat...... conduction governs in the solid parts of the design domain and couples to convection-dominated heat transfer to a surrounding fluid. Both loosely coupled and tightly coupled problems are considered. The loosely coupled problems are convection-diffusion problems, based on an advective velocity field from...

  8. Cost optimisation studies of high power accelerators

    Energy Technology Data Exchange (ETDEWEB)

    McAdams, R.; Nightingale, M.P.S.; Godden, D. [AEA Technology, Oxon (United Kingdom)] [and others]

    1995-10-01

    Cost optimisation studies are carried out for an accelerator-based neutron source consisting of a series of linear accelerators. The characteristics of the lowest-cost design for a machine of given beam current and energy, such as its power and length, are found to depend on the lifetime envisaged for it. For a fixed neutron yield it is preferable to have a low-current, high-energy machine. The benefits of superconducting technology are also investigated. A Separated Orbit Cyclotron (SOC) has the potential to reduce capital and operating costs, and initial estimates for the transverse and longitudinal current limits of such machines are made.

  9. Effective Boundary Slip Induced by Surface Roughness and Their Coupled Effect on Convective Heat Transfer of Liquid Flow

    Directory of Open Access Journals (Sweden)

    Yunlu Pan

    2018-05-01

    Full Text Available As a significant interfacial property for micro/nano fluidic systems, an effective boundary slip can be induced by surface roughness. However, the effect of surface roughness on the effective slip is still not clear: both increased and decreased effective boundary slip have been found with increased roughness. The present work develops a simplified model to study the effect of surface roughness on the effective boundary slip. In the created rough models, the reference position of the rough surfaces used to determine the effective boundary slip was set based on the ISO/ASME standard, and the surface roughness parameters, including Ra (arithmetical mean deviation of the assessed profile), Rsm (mean width of the assessed profile elements) and the shape of the texture, were varied to form different surface roughnesses. Then, the effective boundary slip of fluid flow through the rough surface was analyzed using COMSOL 5.3. The results show that the effective boundary slip induced by surface roughness of a fully wetted rough surface remains negative and further decreases with increasing Ra or decreasing Rsm. Different shapes of roughness texture also result in different effective slip. A simplified correction method for the measured effective boundary slip was developed and proved to be efficient when Rsm is no larger than 200 nm. Another important finding in the present work is that the convective heat transfer first increases and then changes little with increasing Ra, while the effective boundary slip keeps decreasing. It is believed that increasing Ra enlarges the area of solid-liquid interface available for convective heat transfer; however, when Ra is large enough, the decreasing roughness-induced effective boundary slip counteracts the enhancement effect of the roughness itself on the convective heat transfer.
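
    For context, the effective boundary slip reported above is conventionally quantified through the Navier slip condition; a minimal statement in standard notation (not taken verbatim from the paper) is:

```latex
% Navier slip: slip velocity proportional to the wall shear rate,
% with b the (effective) slip length at the reference plane y = 0.
u_s = b \left.\frac{\partial u}{\partial y}\right|_{y=0}
```

    On this convention, the negative effective slip found for fully wetted rough surfaces corresponds to a no-slip plane lying above the nominal reference position of the rough wall.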

  10. Turbulent flow velocity distribution at rough walls

    International Nuclear Information System (INIS)

    Baumann, W.

    1978-08-01

    Following extensive measurements of the velocity profile in a plate channel with artificial roughness geometries, specific investigations were carried out to verify the results obtained. The wall geometry used was formed by high transverse square ribs with a large pitch. The measuring position relative to the ribs was varied as a parameter, thus providing a statement on the local influence of roughness ribs on the measured values. As a fundamental result it was found that the gradient of the logarithmic rough-wall velocity profiles, which differs widely from the value 2.5, depends only slightly on the measuring position relative to the ribs. The gradients of the smooth-wall velocity profiles deviate from 2.5 only near the ribs. This fact can be explained by the smooth-wall shear stress varying with the pitch of the ribs. (orig.)

  11. Spin Hall effect by surface roughness

    KAUST Repository

    Zhou, Lingjun

    2015-01-08

    The spin Hall effect and its inverse, driven by the spin orbit interaction, provide an interconversion mechanism between spin and charge currents. Since the spin Hall effect generates and manipulates spin current electrically, achieving a large effect is becoming an important topic in both academia and industry. So far, materials with heavy elements carrying a strong spin orbit interaction provide the only option. We propose here a new mechanism, using the surface roughness in ultrathin films, to enhance the spin Hall effect without heavy elements. Our analysis based on Cu and Al thin films suggests that surface roughness is capable of driving a spin Hall angle comparable to that in bulk Au. We also demonstrate that the spin Hall effect induced by surface roughness subscribes only to the side-jump contribution, not the skew scattering. The paradigm proposed in this paper provides a second, if not the only, alternative to generate a sizable spin Hall effect.

  12. Why do rough surfaces appear glossy?

    Science.gov (United States)

    Qi, Lin; Chantler, Mike J; Siebert, J Paul; Dong, Junyu

    2014-05-01

    The majority of work on the perception of gloss has been performed using smooth surfaces (e.g., spheres). Previous studies that have employed more complex surfaces reported that increasing mesoscale roughness increases perceived gloss [Psychol. Sci. 19, 196 (2008); J. Vis. 10(9), 13 (2010); Curr. Biol. 22, 1909 (2012)]. We show that the use of realistic rendering conditions is important and that, in contrast to [Psychol. Sci. 19, 196 (2008); J. Vis. 10(9), 13 (2010)], after a certain point increasing roughness further actually reduces glossiness. We investigate five image statistics of estimated highlights and show that for our stimuli one in particular, which we term "percentage of highlight area," is highly correlated with perceived gloss. We investigate a simple model that explains the unimodal, nonmonotonic relationship between mesoscale roughness and percentage highlight area.

  13. Velocity distribution in a turbulent flow near a rough wall

    Science.gov (United States)

    Korsun, A. S.; Pisarevsky, M. I.; Fedoseev, V. N.; Kreps, M. V.

    2017-11-01

    Velocity distribution in the zone of developed wall turbulence, regardless of the conditions on the wall, is described by the well-known Prandtl logarithmic profile. In this distribution, the constant that determines the value of the velocity is set by the nature of the interaction of the flow with the wall and depends on the viscosity of the fluid, the dynamic velocity, and the parameters of the wall roughness. In the extreme cases, depending on the ratio between the thickness of the viscous sublayer and the size of the roughness, the constant takes on a value that does not depend on viscosity, or leads to the ratio for a smooth wall. It is essential that this logarithmic profile is not only the result of the Prandtl theory, but can be derived from general considerations of dimensional theory, and also follows from the condition of local equilibrium of generation and dissipation of turbulent energy in the wall area. This allows us to consider the profile as a universal law of velocity distribution in the wall area of a turbulent flow. Approximating the profile up to the line of maximum speed and then integrating makes it possible to obtain the resistance law for channels of simple shape. For channels of complex shape with rough walls, the universal profile can be used to formulate the boundary condition in the calculation of turbulence models. This paper presents an empirical model for determining the constant of the universal logarithmic profile. The zone of roughness is described by a set of parameters and is considered as a porous structure with variable porosity.
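
    In standard notation, the universal logarithmic profile discussed above can be written as follows (a conventional statement with kappa the von Karman constant, not the paper's empirical model; the roughness length y_0 carries the wall-interaction information described in the abstract):

```latex
% Universal logarithmic velocity profile in the wall region:
\frac{u(y)}{u_*} = \frac{1}{\kappa}\,\ln\frac{y}{y_0},
\qquad u_* = \sqrt{\tau_w / \rho}
% Classical limiting cases for the roughness length y_0:
%   hydraulically smooth wall:  y_0 \approx \nu / (9\,u_*)
%   fully rough wall:           y_0 \approx k_s / 30   (k_s = equivalent sand roughness)
```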

  14. Mars radar clutter and surface roughness characteristics from MARSIS data

    Science.gov (United States)

    Campbell, Bruce A.; Schroeder, Dustin M.; Whitten, Jennifer L.

    2018-01-01

    Radar sounder studies of icy, sedimentary, and volcanic settings can be affected by reflections from surface topography surrounding the sensor nadir location. These off-nadir 'clutter' returns appear at similar time delays to subsurface echoes and complicate geologic interpretation. Additionally, broadening of the radar echo in delay by surface returns sets a limit on the detectability of subsurface interfaces. We use MARSIS 4 MHz data to study variations in the nadir and off-nadir clutter echoes, from about 300 km to 1000 km altitude, R, for a wide range of surface roughness. This analysis uses a new method of characterizing ionospheric attenuation to merge observations over a range of solar zenith angle and date. Mirror-like reflections should scale as R^-2, but the observed 4 MHz nadir echoes often decline by a somewhat smaller power-law factor because MARSIS on-board processing increases the number of summed pulses with altitude. Prior predictions of the contributions from clutter suggest a steeper decline with R than the nadir echoes, but in very rough areas the ratio of off-nadir returns to nadir echoes shows instead an increase of about R^(1/2) with altitude. This is likely due in part to an increase in backscatter from the surface as the radar incidence angle at some round-trip time delay declines with increasing R. It is possible that nadir and clutter echo properties in other planetary sounding observations, including RIME and REASON flyby data for Europa, will vary in the same way with altitude, but there may be differences in the nature and scale of target roughness (e.g., icy versus rocky surfaces). We present global maps of the ionosphere- and altitude-corrected nadir echo strength, and of a 'clutter' parameter based on the ratio of off-nadir to nadir echoes. The clutter map offers a view of surface roughness at ∼75 m length scale, bridging the spatial-scale gap between SHARAD roughness estimates and MOLA-derived parameters.

  15. Starch/polyester films: simultaneous optimisation of the properties for the production of biodegradable plastic bags

    OpenAIRE

    Olivato, J. B.; Grossmann, M. V. E.; Bilck, A. P.; Yamashita, F.; Oliveira, L. M.

    2013-01-01

    Blends of starch/polyester have been of great interest in the development of biodegradable packaging. A method based on multiple responses optimisation (Desirability) was used to evaluate the properties of tensile strength, perforation force, elongation and seal strength of cassava starch/poly(butylene adipate-co-terephthalate) (PBAT) blown films produced via a one-step reactive extrusion using tartaric acid (TA) as a compatibiliser. Maximum results for all the properties were set as more des...

  16. Optimisation and process control of steam and cooling cycles by use of online TOC analysis

    Energy Technology Data Exchange (ETDEWEB)

    Schroeter, Jens-Uwe [LAR Process Analysers AG, Berlin (Germany). Domestic Sales

    2013-06-01

    Online monitoring of organic pollution is of great importance in processes with steam, condensate and boiler feed water due to the influence of impurities on corrosion as well as the formation of biofilms and deposits. Today, the recommended TOC limit value is set between 0.1 and 0.2 mg/l C. Plants can be optimised by monitoring the TOC values. Only some of the online TOC analysers available on the market meet the measurement requirements. (orig.)

  17. Selecting a climate model subset to optimise key ensemble properties

    Directory of Open Access Journals (Sweden)

    N. Herger

    2018-02-01

    Full Text Available End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.
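
    The paper's optimisation tool is not reproduced here; as a hedged illustration of one of the cost functions it mentions (minimising present-day bias of the equally weighted subset mean), a brute-force sketch might look like this:

```python
from itertools import combinations
import numpy as np

def subset_mean_rmse(models, obs, k):
    """Exhaustively find the k-member subset whose equally weighted mean
    has the lowest RMSE against the observations. This is only one possible
    cost function; the paper's tool also addresses spread and independence."""
    best, best_rmse = None, np.inf
    for idx in combinations(range(len(models)), k):
        mean = np.mean([models[i] for i in idx], axis=0)
        rmse = np.sqrt(np.mean((mean - obs) ** 2))
        if rmse < best_rmse:
            best, best_rmse = idx, rmse
    return best, best_rmse

# toy ensemble: 8 hypothetical 'models', each a field of 100 grid points
rng = np.random.default_rng(0)
obs = rng.normal(size=100)
models = [obs + rng.normal(scale=s, size=100) for s in np.linspace(0.3, 1.5, 8)]
print(subset_mean_rmse(models, obs, k=3))
```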

  18. Paramedic literature search filters: optimised for clinicians and academics.

    Science.gov (United States)

    Olaussen, Alexander; Semple, William; Oteir, Alaa; Todd, Paula; Williams, Brett

    2017-10-11

    Search filters aid clinicians and academics in accurately locating literature. Despite this, there is no search filter or Medical Subject Headings (MeSH) term pertaining to paramedics. Therefore, the aim of this study was to create two filters to meet the different needs of paramedic clinicians and academics. We created a gold standard from a reference set, which we measured against single terms and search filters. The words and phrases used stemmed from selective exclusion of terms from the previously published Prehospital Search Filter 2.0 as well as a Delphi session with an expert panel of paramedic researchers. Independent authors deemed articles paramedic-relevant or not following an agreed definition. We measured sensitivity, specificity, accuracy and number needed to read (NNR). We located 2102 articles, of which 431 (20.5%) related to paramedics. The performance of single terms was on average of high specificity (97.1%, standard deviation (SD) 7.4%) but of poor sensitivity (12.0%, SD 18.7%). The NNR ranged from 1 to 8.6. The sensitivity-maximising search filter yielded 98.4% sensitivity, with a specificity of 74.3% and a NNR of 2. The specificity-maximising filter achieved 88.3% specificity, which only lowered the sensitivity to 94.7%, and thus a NNR of 1.48. We have created the first two paramedic-specific search filters, one optimised for sensitivity and one optimised for specificity. A paramedic MeSH term is needed.
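
    For readers unfamiliar with the reported measures, the sketch below recomputes them from a confusion matrix; the counts are hypothetical back-calculations chosen to be consistent with the sensitivity-maximising filter's reported figures (98.4% sensitivity, 74.3% specificity, NNR of 2), not data from the study:

```python
def filter_metrics(tp, fp, fn, tn):
    """Standard search-filter performance measures; the number needed
    to read (NNR) is the reciprocal of precision, (tp + fp) / tp."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    nnr = (tp + fp) / tp
    return sensitivity, specificity, accuracy, nnr

# hypothetical counts for 2102 records, 431 of them paramedic-relevant
print(filter_metrics(tp=424, fp=430, fn=7, tn=1241))
# -> approx (0.984, 0.743, 0.792, 2.01), matching the reported 98.4%, 74.3%, NNR 2
```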

  19. Selecting a climate model subset to optimise key ensemble properties

    Science.gov (United States)

    Herger, Nadja; Abramowitz, Gab; Knutti, Reto; Angélil, Oliver; Lehmann, Karsten; Sanderson, Benjamin M.

    2018-02-01

    End users studying impacts and risks caused by human-induced climate change are often presented with large multi-model ensembles of climate projections whose composition and size are arbitrarily determined. An efficient and versatile method that finds a subset which maintains certain key properties from the full ensemble is needed, but very little work has been done in this area. Therefore, users typically make their own somewhat subjective subset choices and commonly use the equally weighted model mean as a best estimate. However, different climate model simulations cannot necessarily be regarded as independent estimates due to the presence of duplicated code and shared development history. Here, we present an efficient and flexible tool that makes better use of the ensemble as a whole by finding a subset with improved mean performance compared to the multi-model mean while at the same time maintaining the spread and addressing the problem of model interdependence. Out-of-sample skill and reliability are demonstrated using model-as-truth experiments. This approach is illustrated with one set of optimisation criteria but we also highlight the flexibility of cost functions, depending on the focus of different users. The technique is useful for a range of applications that, for example, minimise present-day bias to obtain an accurate ensemble mean, reduce dependence in ensemble spread, maximise future spread, ensure good performance of individual models in an ensemble, reduce the ensemble size while maintaining important ensemble characteristics, or optimise several of these at the same time. As in any calibration exercise, the final ensemble is sensitive to the metric, observational product, and pre-processing steps used.

  20. Diffuse neutron scattering signatures of rough films

    International Nuclear Information System (INIS)

    Pynn, R.; Lujan, M. Jr.

    1992-01-01

    Patterns of diffuse neutron scattering from thin films are calculated from a perturbation expansion based on the distorted-wave Born approximation. Diffuse fringes can be categorised into three types: those that occur at constant values of the incident or scattered neutron wavevectors, and those for which the neutron wavevector transfer perpendicular to the film is constant. The variation of intensity along these fringes can be used to deduce the spectrum of surface roughness for the film and the degree of correlation between the film's rough surfaces.

  1. Noise aspects at aerodynamic blade optimisation projects

    Energy Technology Data Exchange (ETDEWEB)

    Schepers, J.G. [Netherlands Energy Research Foundation, Petten (Netherlands)

    1997-12-31

    This paper shows an example of an aerodynamic blade optimisation using the program PVOPT. PVOPT calculates the optimal wind turbine blade geometry such that the maximum energy yield is obtained. Using the aerodynamically optimal blade design as a basis, the possibilities for noise reduction are investigated. The aerodynamically optimised geometry from PVOPT is the 'real' optimum (up to the latest decimal). The most important conclusion from this study is that it is worthwhile to investigate the behaviour of the objective function (in the present case the energy yield) around the optimum: if the optimum is flat, there is a possibility to apply modifications to the optimum configuration with only a limited loss in energy yield. It is obvious that the modified configurations emit a different (and possibly lower) noise level. In the BLADOPT program (the successor of PVOPT) it will be possible to quantify the noise level and hence to assess the reduced noise emission more thoroughly. At present the most promising approaches for noise reduction are believed to be a reduction of the rotor speed (if at all possible), and a reduction of the tip angle by means of low-lift profiles or decreased twist at the outboard stations. These modifications were possible without a significant loss in energy yield. (LN)

  2. Dose optimisation of double-contrast barium enema examinations.

    Science.gov (United States)

    Berner, K; Båth, M; Jonasson, P; Cappelen-Smith, J; Fogelstam, P; Söderberg, J

    2010-01-01

    The purpose of the present work was to optimise the filtration and dose setting for double-contrast barium enema examinations using a Philips MultiDiagnost Eleva FD system. A phantom study was performed prior to a patient study. A CDRAD phantom was used in a study where copper and aluminium filtration, different detector doses and tube potentials were examined. The image quality was evaluated using the software CDRAD Analyser and the phantom dose was determined using the Monte Carlo-based software PCXMC. The original setting [100% detector dose (660 nGy air kerma) and a total filtration of 3.5 mm Al, at 81 kVp] and two other settings identified by the phantom study (100% detector dose with additional filtration of 1 mm Al and 0.2 mm Cu, as well as 80% detector dose with added filtration of 1 mm Al and 0.2 mm Cu) were included in the patient study. The patient study included 60 patients and up to 8 images from each patient. Six radiologists performed a visual grading characteristics study to evaluate the image quality. A four-step scale was used to judge the fulfilment of three image quality criteria. No overall statistically significant difference in image quality was found between the three settings (P > 0.05). The decrease in the effective dose for the settings in the patient study was 15% when filtration was added and 34% when filtration was added and the detector dose was reduced. The study indicates that additional filtration of 1 mm Al and 0.2 mm Cu and a decrease in detector dose by 20% from the original setting can be used in colon examinations with the Philips MultiDiagnost Eleva FD to reduce the patient dose by 30% without significantly affecting the image quality. For 20 exposures, this corresponds to a decrease in the effective dose from 1.6 to 1.1 mSv.

  3. Revitalizing the setting approach

    DEFF Research Database (Denmark)

    Bloch, Paul; Toft, Ulla; Reinbach, Helene Christine

    2014-01-01

    BackgroundThe concept of health promotion rests on aspirations aiming at enabling people to increase control over and improve their health. Health promotion action is facilitated in settings such as schools, homes and work places. As a contribution to the promotion of healthy lifestyles, we have ...... approach is based on ecological and whole-systems thinking, and stipulates important principles and values of integration, participation, empowerment, context and knowledge-based development....... further developed the setting approach in an effort to harmonise it with contemporary realities (and complexities) of health promotion and public health action. The paper introduces a modified concept, the supersetting approach, which builds on the optimised use of diverse and valuable resources embedded...... in local community settings and on the strengths of social interaction and local ownership as drivers of change processes. Interventions based on a supersetting approach are first and foremost characterised by being integrated, but also participatory, empowering, context-sensitive and knowledge...

  4. Potential roughness near lithographically fabricated atom chips

    DEFF Research Database (Denmark)

    Krüger, Peter; Andersson, L. M.; Wildermuth, Stefan

    2007-01-01

    Potential roughness has been reported to severely impair experiments in magnetic microtraps. We show that these obstacles can be overcome as we measure disorder potentials that are reduced by two orders of magnitude near lithographically patterned high-quality gold layers on semiconductor atom chip...

  5. Reproducibility of surface roughness in reaming

    DEFF Research Database (Denmark)

    Müller, Pavel; De Chiffre, Leonardo

    An investigation on the reproducibility of surface roughness in reaming was performed to document the applicability of this approach for testing cutting fluids. Austenitic stainless steel was used as a workpiece material and HSS reamers as cutting tools. Reproducibility of the results was evaluat...

  6. Optical measurement of surface roughness in manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Brodmann, R.

    1984-11-01

    The measuring system described here is based on the light-scattering method, and was developed by Optische Werke G. Rodenstock, Munich. It is especially useful for rapid non-contact monitoring of surface roughness in production-related areas. This paper outlines the differences between this system and the common stylus instrument, including descriptions of some applications in industry.

  7. Microscopic Holography for flow over rough plate

    Science.gov (United States)

    Talapatra, Siddharth; Hong, Jiarong; Lu, Yuan; Katz, Joseph

    2008-11-01

    Our objective is to measure the near-wall flow structures in a turbulent channel flow over a rough wall. In-line microscopic holographic PIV can resolve the 3-D flow field in a small sample volume, but recording holograms through a rough surface is a challenge. To solve this problem, we match the refractive index of the fluid with that of the wall. Proof-of-concept tests involve an acrylic plate containing uniformly distributed, closely packed 0.45 mm high pyramids with a slope angle of 22°, located within a concentrated sodium iodide solution. Holograms recorded by a 4864 x 3248 pixel digital camera at 10X magnification provide a field of view of 3.47 mm x 2.32 mm and a pixel resolution of 0.714 μm. Due to index matching, reconstructed seed particles can be clearly seen over the entire volume, with only faint traces of the rough wall, which can be removed. Planned experiments will be performed in a 20 x 5 cm rectangular channel with the top and bottom plates having the same roughness as the sample plate.

  8. Factors influencing surface roughness of polyimide film

    International Nuclear Information System (INIS)

    Yao Hong; Zhang Zhanwen; Huang Yong; Li Bo; Li Sai

    2011-01-01

    Polyimide (PI) films of pyromellitic dianhydride-oxydianiline (PMDA-ODA) were fabricated using the vapor deposition polymerization (VDP) method under a high vacuum of the order of 10⁻⁴ Pa. The influence of the equipment, the substrate temperature, the heating process and the deposition ratio of the monomers on the surface roughness of the PI films was investigated. The surface topography of the films was measured by interferometric microscopy and scanning electron microscopy (SEM), and the surface roughness was probed with atomic force microscopy (AFM). The results show that continuous films can be formed when the distance from the steering flow pipe to the substrate is 74 cm. The surface roughnesses are 291.2 nm and 61.9 nm for a one-step heating process and a multi-step heating process, respectively, and using a fine mesh can effectively avoid the splashing of materials. The surface roughness can be 3.3 nm when the deposition rate ratio of PMDA to ODA is 0.9:1, and keeping the temperature of the substrate around 30 °C is advantageous for forming a film with a planar micro-surface topography. (authors)

  9. Roughly isometric minimal immersions into Riemannian manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    of the intrinsic combinatorial discrete Laplacian, and we will show that they share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in $N$. The intrinsic properties thus obtained may hence serve as roughly invariant descriptors for the original metric space $X$....

  10. Three-tier rough superhydrophobic surfaces

    International Nuclear Information System (INIS)

    Cao, Yuanzhi; Yuan, Longyan; Hu, Bin; Zhou, Jun

    2015-01-01

    A three-tier rough superhydrophobic surface was fabricated by growing hydrophobically modified (fluorinated silane) zinc oxide (ZnO)/copper oxide (CuO) hetero-hierarchical structures on silicon (Si) micro-pillar arrays. Compared with the other three control samples with a less rough tier, the three-tier surface exhibits the best water repellency, with the largest contact angle of 161° and the lowest sliding angle of 0.5°. It also shows a robust Cassie state which enables the water to flow at a speed of over 2 m s⁻¹. In addition, it could prevent itself from being wetted by a droplet with low surface tension (water and ethanol mixed 1:1 in volume), which revealed a flow speed of 0.6 m s⁻¹ (dropped from a height of 2 cm). All these features prove that adding another rough tier to a two-tier rough surface can further improve its water-repellent properties. (paper)

  11. Roughness-induced streaming in turbulent wave boundary layers

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Sumer, B. Mutlu; Fredsøe, Jørgen

    2011-01-01

    -averaged streaming characteristics induced by bottom roughness variations are systematically assessed. The effects of variable roughness ratio, gradual roughness transitions, as well as changing flow orientation in plan are all considered. As part of the latter, roughness-induced secondary flows are predicted...

  12. Self-affine roughness influence on redox reaction charge admittance

    NARCIS (Netherlands)

    Palasantzas, G

    2005-01-01

    In this work we investigate the influence of self-affine electrode roughness on the admittance of redox reactions during facile charge transfer kinetics. The self-affine roughness is characterized by the rms roughness amplitude w, the correlation length xi and the roughness exponent H (0 < H < 1).

  13. Optimising polarised neutron scattering measurements--XYZ and polarimetry analysis

    International Nuclear Information System (INIS)

    Cussen, L.D.; Goossens, D.J.

    2002-01-01

    The analytic optimisation of neutron scattering measurements made using XYZ polarisation analysis and neutron polarimetry techniques is discussed. Expressions for the 'quality factor' and the optimum division of counting time for the XYZ technique are presented. For neutron polarimetry the optimisation is identified as analogous to that for measuring the flipping ratio and reference is made to the results already in the literature

  14. Optimising polarised neutron scattering measurements--XYZ and polarimetry analysis

    CERN Document Server

    Cussen, L D

    2002-01-01

    The analytic optimisation of neutron scattering measurements made using XYZ polarisation analysis and neutron polarimetry techniques is discussed. Expressions for the 'quality factor' and the optimum division of counting time for the XYZ technique are presented. For neutron polarimetry the optimisation is identified as analogous to that for measuring the flipping ratio and reference is made to the results already in the literature.

  15. Application of ant colony optimisation in distribution transformer sizing

    African Journals Online (AJOL)

    This study proposes an optimisation method for transformer sizing in power systems using ant colony optimisation, with verification of the process by MATLAB software. The aim is to address the issue of transformer sizing, which is a major challenge affecting its effective performance, longevity, huge capital cost and power ...

  16. Multi-objective evolutionary optimisation for product design and manufacturing

    CERN Document Server

    2011-01-01

    Presents state-of-the-art research in the area of multi-objective evolutionary optimisation for integrated product design and manufacturing Provides a comprehensive review of the literature Gives in-depth descriptions of recently developed innovative and novel methodologies, algorithms and systems in the area of modelling, simulation and optimisation

  17. Design Optimisation and Control of a Pilot Operated Seat Valve

    DEFF Research Database (Denmark)

    Nielsen, Brian; Andersen, Torben Ole; Hansen, Michael Rygaard

    2004-01-01

    The paper gives an approach for optimisation of the bandwidth of a pilot operated seat valve for mobile applications. Physical dimensions as well as parameters of the implemented control loop are optimised simultaneously. The frequency response of the valve varies as a function of the pressure drop...

  18. DACIA LOGAN LIVE AXLE OPTIMISATION USING COMPUTER GRAPHICS

    Directory of Open Access Journals (Sweden)

    KIRALY Andrei

    2017-05-01

    Full Text Available The paper presents some contributions to the calculation and optimisation of a live axle used in the Dacia Logan, using computer graphics software to create the model and afterwards using FEA evaluation to determine the effectiveness of the optimisation. Thus, using specialized computer software, a simulation was made and the results were compared with measurements on a real prototype.

  19. Spatial-structural interaction and strain energy structural optimisation

    NARCIS (Netherlands)

    Hofmeyer, H.; Davila Delgado, J.M.; Borrmann, A.; Geyer, P.; Rafiq, Y.; Wilde, de P.

    2012-01-01

    A research engine iteratively transforms spatial designs into structural designs and vice versa. Furthermore, spatial and structural designs are optimised. It is suggested to optimise a structural design by evaluating the strain energy of its elements and by then removing, adding, or changing the

  20. Adjoint Optimisation of the Turbulent Flow in an Annular Diffuser

    DEFF Research Database (Denmark)

    Gotfredsen, Erik; Agular Knudsen, Christian; Kunoy, Jens Dahl

    2017-01-01

    In the present study, a numerical optimisation of guide vanes in an annular diffuser is performed. The optimisation is performed with the purpose of improving the following two parameters simultaneously: the first parameter is the uniformity perpendicular to the flow direction, a 1/3 diameter do...

  1. Optimising of Steel Fiber Reinforced Concrete Mix Design | Beddar ...

    African Journals Online (AJOL)

    Optimising of Steel Fiber Reinforced Concrete Mix Design. ... as a result of the loss of mixture workability that will be translated into a difficult concrete casting in site. ... An experimental study of an optimisation method of fibres in reinforced ...

  2. GAOS: Spatial optimisation of crop and nature within agricultural fields

    NARCIS (Netherlands)

    Bruin, de S.; Janssen, H.; Klompe, A.; Lerink, P.; Vanmeulebrouk, B.

    2010-01-01

    This paper proposes and demonstrates a spatial optimiser that allocates areas of inefficient machine manoeuvring to field margins thus improving the use of available space and supporting map-based Controlled Traffic Farming. A prototype web service (GAOS) allows farmers to optimise tracks within

  3. Turbulent boundary layer over roughness transition with variation in spanwise roughness length scale

    Science.gov (United States)

    Westerweel, Jerry; Tomas, Jasper; Eisma, Jerke; Pourquie, Mathieu; Elsinga, Gerrit; Jonker, Harm

    2016-11-01

    Both large-eddy simulations (LES) and water-tunnel experiments, using simultaneous stereoscopic PIV and LIF, were performed to investigate pollutant dispersion in a region where the surface changes from rural to urban roughness. The urban roughness consists of rectangular obstacles, where we vary the spanwise aspect ratio of the obstacles. A line source of passive tracer was placed upstream of the roughness transition. The objectives of the study are: (i) to determine the influence of the aspect ratio on the roughness-transition flow, and (ii) to determine the dominant mechanisms of pollutant removal from street canyons in the transition region. It is found that for a spanwise aspect ratio of 2 the drag induced by the roughness is the largest of all considered cases, which is caused by a large-scale secondary flow. In the roughness transition the vertical advective pollutant flux is the main ventilation mechanism in the first three streets. Furthermore, by means of linear stochastic estimation, the mean flow structure is identified that is responsible for the exchange of fluid between the roughness obstacles and the outer part of the boundary layer. It is also found that the vertical length scale of this structure increases with increasing aspect ratio of the obstacles in the roughness region.

  4. Numerical Investigation of Effect of Surface Roughness in a Microchannel

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Myung Seob; Byun, Sung Jun; Yoon, Joon Yong [Hanyang University, Seoul (Korea, Republic of)

    2010-05-15

    In this paper, lattice Boltzmann method (LBM) results for laminar flow in a microchannel with a rough surface are presented. The surface roughness is modeled as an array of rectangular modules placed on the top and bottom surfaces of a parallel-plate channel. The effects of relative surface roughness, roughness distribution, and roughness size are presented in terms of the Poiseuille number. The roughness distribution, characterized by the ratio of the roughness height to the spacing between the modules, has a negligible effect on the flow and friction factors. Finally, a significant increase in the Poiseuille number is observed when the surface roughness is considered, and the effects of roughness on the microflow field depend mainly on the surface roughness.
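
    The Poiseuille number used above as the reporting metric has the standard definition below (a conventional form, not quoted from the paper; f is the Darcy friction factor):

```latex
% Poiseuille number: friction factor times Reynolds number.
Po = f\,Re, \qquad
f = \frac{8\,\tau_w}{\rho\,u_m^{2}}, \qquad
Re = \frac{\rho\,u_m\,D_h}{\mu}
% Smooth parallel plates (laminar): Po = 96; roughness raises Po above this baseline.
```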

  5. Modified cuckoo search: A new gradient free optimisation algorithm

    International Nuclear Information System (INIS)

    Walton, S.; Hassan, O.; Morgan, K.; Brown, M.R.

    2011-01-01

    Highlights: → Modified cuckoo search (MCS) is a new gradient free optimisation algorithm. → MCS shows a high convergence rate, able to outperform other optimisers. → MCS is particularly strong at high dimension objective functions. → MCS performs well when applied to engineering problems. - Abstract: A new robust optimisation algorithm, which can be regarded as a modification of the recently developed cuckoo search, is presented. The modification involves the addition of information exchange between the top eggs, or the best solutions. Standard optimisation benchmarking functions are used to test the effects of these modifications and it is demonstrated that, in most cases, the modified cuckoo search performs as well as, or better than, the standard cuckoo search, a particle swarm optimiser, and a differential evolution strategy. In particular the modified cuckoo search shows a high convergence rate to the true global minimum even at high numbers of dimensions.

  6. Results of the 2010 IGSC Topical Session on Optimisation

    International Nuclear Information System (INIS)

    Bailey, Lucy

    2014-01-01

    Document available in abstract form only. Full text follows: The 2010 IGSC topical session on optimisation explored a wide range of issues concerning optimisation throughout the radioactive waste management process. Philosophical and ethical questions were discussed, such as: - To what extent is the process of optimisation more important than the end result? - How do we balance long-term environmental safety with near-term operational safety? - For how long should options be kept open? - In balancing safety and excessive cost, when is BAT achieved and who decides on this? - How should we balance the needs of current society with those of future generations? It was clear that optimisation is about getting the right balance between a range of issues that cover: radiation protection, environmental protection, operational safety, operational requirements, social expectations and cost. The optimisation process will also need to respect various constraints, which are likely to include: regulatory requirements, site restrictions, community-imposed requirements or restrictions and resource constraints. These issues were explored through a number of presentations that discussed practical cases of optimisation occurring at different stages of international radioactive waste management programmes. These covered: - Operations and decommissioning - management of large disused components, from the findings of an international study, presented by WPDD; - Concept option selection, prior to site selection - upstream and disposal system optioneering in the UK; - Siting decisions - examples from both Germany and France, explaining how optimisation is being used to support site comparisons and communicate siting decisions; - Repository design decisions - comparison of KBS-3 horizontal and vertical deposition options in Finland; and - On-going optimisation during repository operation - operational experience from WIPP in the US. The variety of the remarks and views expressed during the

  7. Work management to optimise occupational radiological protection

    International Nuclear Information System (INIS)

    Ahier, B.

    2009-01-01

    Although work management is no longer a new concept, continued efforts are still needed to ensure that good performance, outcomes and trends are maintained in the face of current and future challenges. The ISOE programme thus created an Expert Group on Work Management in 2007 to develop an updated report reflecting the current state of knowledge, technology and experience in the occupational radiological protection of workers at nuclear power plants. Published in 2009, the new ISOE report on Work Management to Optimise Occupational Radiological Protection in the Nuclear Power Industry provides up-to-date practical guidance on the application of work management principles. Work management measures aim at optimising occupational radiological protection in the context of the economic viability of the installation. Important factors in this respect are measures and techniques influencing i) dose and dose rate, including source-term reduction; ii) exposure, including the amount of time spent in controlled areas for operations; and iii) efficiency in short- and long-term planning, worker involvement, coordination and training. Equally important due to their broad, cross-cutting nature are the motivational and organisational arrangements adopted. The responsibility for these aspects may reside in various parts of an installation's organisational structure, and thus a multi-disciplinary approach must be recognised, accounted for and well-integrated in any work. Based on the operational experience within the ISOE programme, the following key areas of work management have been identified: - regulatory aspects; - ALARA management policy; - worker involvement and performance; - work planning and scheduling; - work preparation; - work implementation; - work assessment and feedback; - ensuring continuous improvement. The details of each of these areas are elaborated and illustrated in the report through examples and case studies arising from ISOE experience. They are intended to

  8. A comparison of forward planning and optimised inverse planning

    International Nuclear Information System (INIS)

    Oldham, Mark; Neal, Anthony; Webb, Steve

    1995-01-01

    A radiotherapy treatment plan optimisation algorithm has been applied to 48 prostate plans and the results compared with those of an experienced human planner. Twelve patients were used in the study, and a 3, 4, 6 and 8 field plan (with standard coplanar beam angles for each plan type) were optimised by both the human planner and the optimisation algorithm. The human planner 'optimised' the plan by conventional forward planning techniques. The optimisation algorithm was based on fast-simulated-annealing. 'Importance factors' assigned to different regions of the patient provide a method for controlling the algorithm, and it was found that the same values gave good results for almost all plans. The plans were compared on the basis of dose statistics and normal-tissue-complication-probability (NTCP) and tumour-control-probability (TCP). The results show that the optimisation algorithm yielded results that were at least as good as the human planner for all plan types, and on the whole slightly better. A study of the beam-weights chosen by the optimisation algorithm and the planner will be presented. The optimisation algorithm showed greater variation, in response to individual patient geometry. For simple (e.g. 3 field) plans it was found to consistently achieve slightly higher TCP and lower NTCP values. For more complicated (e.g. 8 fields) plans the optimisation also achieved slightly better results with generally less numbers of beams. The optimisation time was always ≤5 minutes; a factor of up to 20 times faster than the human planner

  9. Rough case-based reasoning system for continuous casting

    Science.gov (United States)

    Su, Wenbin; Lei, Zhufeng

    2018-04-01

    Continuous casting occupies a pivotal position in the iron and steel industry. Rough set theory and case-based reasoning (CBR) were combined in the research and implementation of quality assurance for continuous casting billet, to improve the efficiency and accuracy of determining the processing parameters. The object-oriented method was applied to express the continuous casting cases. The weights of the attributes were calculated by an algorithm based on rough set theory, and the retrieval mechanism for the continuous casting cases was designed. Some cases were adopted to test the retrieval mechanism; by analyzing the results, the law of the influence of the retrieval attributes on determining the processing parameters was revealed. A comprehensive evaluation model was established by using attribute recognition theory. According to the features of the defects, different methods were adopted to describe the quality condition of the continuous casting billet. By using the system, knowledge is not only inherited but also applied to adjust the processing parameters through case-based reasoning, so as to assure the quality of the continuous casting and improve the intelligence level of continuous casting.
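
    The abstract does not reproduce its weighting algorithm; the sketch below shows the standard rough-set route to attribute weights (dependency-based attribute significance) that the description points to, with a hypothetical toy decision table:

```python
from collections import defaultdict

def partition(rows, attrs):
    """Equivalence classes of the indiscernibility relation IND(attrs)."""
    blocks = defaultdict(list)
    for i, row in enumerate(rows):
        blocks[tuple(row[a] for a in attrs)].append(i)
    return list(blocks.values())

def dependency(rows, cond, dec):
    """Rough-set dependency gamma(cond, dec): fraction of objects in the
    positive region, i.e. in condition classes pure w.r.t. the decision."""
    pos = 0
    for block in partition(rows, cond):
        if len({rows[i][dec] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)

def attribute_weights(rows, cond, dec):
    """Significance-based weights: drop in dependency when an attribute
    is removed, normalised to sum to 1 (uniform if all drops are zero)."""
    gamma = dependency(rows, cond, dec)
    sig = {a: gamma - dependency(rows, [b for b in cond if b != a], dec)
           for a in cond}
    total = sum(sig.values())
    return {a: (s / total if total else 1 / len(cond)) for a, s in sig.items()}

# hypothetical casting-like decision table; 'q' is the quality decision
rows = [
    {'speed': 'low',  'temp': 'high', 'cool': 'weak',   'q': 'bad'},
    {'speed': 'low',  'temp': 'high', 'cool': 'strong', 'q': 'good'},
    {'speed': 'high', 'temp': 'low',  'cool': 'strong', 'q': 'good'},
    {'speed': 'high', 'temp': 'high', 'cool': 'weak',   'q': 'bad'},
]
print(attribute_weights(rows, ['speed', 'temp', 'cool'], 'q'))
```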

  10. VEHICLE DRIVING CYCLE OPTIMISATION ON THE HIGHWAY

    Directory of Open Access Journals (Sweden)

    Zinoviy STOTSKO

    2016-06-01

    Full Text Available This paper is devoted to the problem of reducing vehicle energy consumption. The authors consider the optimisation of the highway driving cycle as a way to use the kinetic energy of a car more effectively under various road conditions. A model of vehicle driving control on the highway was designed, consisting of elementary cycles such as acceleration, free rolling and deceleration under the forces of external resistance. Braking, as an energy dissipation regime, was not included. The influence of various longitudinal profiles of the road was taken into consideration and included in the model. Ways to use the results of monitoring road and traffic conditions are presented. The method of non-linear programming is used to design the optimal vehicle control function and phase trajectory. The results are presented as improved typical driving cycles that treat energy saving as a subject of choice at a specified schedule.

  11. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a dynamic programming algorithm of cubic time complexity. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
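
    The exact formulation is a shortest-path-like problem over candidate samples. The sketch below shows a dynamic programme of this family: keep m of n samples (first and last fixed) so as to minimise the total squared error of piecewise-linear interpolation; it is cubic overall when m grows with n. This illustrates the class of algorithm described and is not the authors' network formulation.

```python
import numpy as np

def segment_error(x, i, j):
    """Squared error when samples i..j are replaced by linear interpolation
    between x[i] and x[j]."""
    if j - i < 2:
        return 0.0
    t = np.arange(i, j + 1)
    interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[i:j + 1] - interp) ** 2))

def optimal_samples(x, m):
    """Choose m samples (first and last fixed) minimising total error."""
    n = len(x)
    err = [[segment_error(x, i, j) for j in range(n)] for i in range(n)]
    INF = float("inf")
    cost = [[INF] * n for _ in range(m)]   # cost[k][j]: k+1 kept samples, last at j
    prev = [[-1] * n for _ in range(m)]
    cost[0][0] = 0.0
    for k in range(1, m):
        for j in range(k, n):
            for i in range(k - 1, j):
                c = cost[k - 1][i] + err[i][j]
                if c < cost[k][j]:
                    cost[k][j], prev[k][j] = c, i
    # Backtrack the selected sample indices.
    idx, j = [], n - 1
    for k in range(m - 1, -1, -1):
        idx.append(j)
        j = prev[k][j]
    return idx[::-1], cost[m - 1][n - 1]

rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0, 6 * np.pi, 120)) + 0.05 * rng.normal(size=120)
keep, e = optimal_samples(sig, 20)
print("kept indices:", keep, " total squared error:", round(e, 4))
```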

  12. Optimisation and constraints - a view from ICRP

    International Nuclear Information System (INIS)

    Dunster, H.J.

    1994-01-01

    The optimisation of protection has been the major policy underlying the recommendations of the International Commission on Radiological Protection for more than 20 years. In earlier forms, the concept can be traced back to 1951. Constraints are more recent, appearing in their present form only in the 1990 recommendations of the Commission. The requirement to keep all exposures as low as reasonably achievable applies to both normal and potential exposures. The policy and the techniques are well established for normal exposures, i.e. exposures that are certain to occur. The application to potential exposures, i.e. exposures that have a probability of occurring that is less than unity, is more difficult and is still under international discussion. Constraints are needed to limit the inequity associated with the use of collective dose in cost-benefit analysis and to provide a margin to protect individuals who may be exposed to more than one source. (author)

  13. Optimising Impact in Astronomy for Development Projects

    Science.gov (United States)

    Grant, Eli

    2015-08-01

    Positive outcomes in the fields of science education and international development are notoriously difficult to achieve. Among the challenges facing projects that use astronomy to improve education and socio-economic development is how to optimise project design in order to achieve the greatest possible benefits. Over the past century, medical scientists along with statisticians and economists have developed an increasingly sophisticated and scientific approach to designing, testing and improving social intervention and public health education strategies. This talk offers a brief review of the history and current state of 'intervention science'. A similar framework is then proposed for astronomy outreach and education projects, with applied examples given of how existing evidence can be used to inform project design, predict and estimate cost-effectiveness, minimise the risk of unintended negative consequences and increase the likelihood of target outcomes being achieved.

  14. Optimisation of Multilayer Insulation an Engineering Approach

    CERN Document Server

    Chorowski, M; Parente, C; Riddone, G

    2001-01-01

    A mathematical model has been developed to describe the heat flux through multilayer insulation (MLI). The total heat flux between the layers is the result of three distinct heat transfer modes: radiation, residual gas conduction and solid spacer conduction. The model describes the MLI behaviour using a layer-to-layer approach and is based on an electrical analogy in which the three heat transfer modes are treated as parallel thermal impedances. The value of each transfer mode varies from layer to layer, although the total heat flux remains constant across the whole MLI blanket. The model enables the optimisation of the insulation with regard to different MLI parameters, such as residual gas pressure, number of layers and boundary temperatures. The model has been tested against experimental measurements carried out at CERN and the results were found to be in good agreement, especially for insulation vacuum between 10⁻⁵ Pa and 10⁻³ Pa.
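
    The electrical analogy lends itself to a compact numerical sketch: treat each layer-to-layer gap as three parallel conductances (radiation, free-molecular gas conduction, spacer conduction) and solve for the intermediate layer temperatures that make the flux identical across every gap. The emissivity, gas-conduction coefficient, spacer conductance and boundary temperatures below are illustrative assumptions, not CERN's measured values.

```python
import numpy as np
from scipy.optimize import fsolve

SIGMA = 5.67e-8       # Stefan-Boltzmann constant [W m^-2 K^-4]
eps = 0.05            # layer emissivity (assumed)
p_gas = 1e-4          # residual gas pressure [Pa] (assumed, free molecular)
k_gas = 1.2           # free-molecular conduction coefficient (illustrative)
g_spacer = 5e-3       # spacer conductance per gap [W m^-2 K^-1] (illustrative)
n_layers = 10
T_hot, T_cold = 77.0, 4.2   # boundary temperatures [K]

def gap_flux(T1, T2):
    """Three parallel heat transfer modes across one layer-to-layer gap."""
    q_rad = SIGMA * (T1**4 - T2**4) * eps / (2.0 - eps)  # radiation
    q_gc = k_gas * p_gas * (T1 - T2)                     # residual gas conduction
    q_sp = g_spacer * (T1 - T2)                          # spacer conduction
    return q_rad + q_gc + q_sp

def residuals(T_mid):
    """Steady state: the same total flux must cross every gap."""
    T = np.concatenate(([T_hot], T_mid, [T_cold]))
    q = [gap_flux(T[i], T[i + 1]) for i in range(len(T) - 1)]
    return np.diff(q)   # zero when flux is constant through the blanket

T0 = np.linspace(T_hot, T_cold, n_layers + 1)[1:-1]   # initial guess
T_mid = fsolve(residuals, T0)
print("layer temperatures [K]:", np.round(T_mid, 1))
print("heat flux through MLI [W/m^2]:", round(gap_flux(T_hot, T_mid[0]), 4))
```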

  15. Public transport optimisation emphasising passengers’ travel behaviour

    DEFF Research Database (Denmark)

    Jensen, Jens Parbo

    Passengers in public transport complaining about their travel experiences are not uncommon. This might seem counterintuitive since several operators worldwide are presenting better key performance indicators year by year. The present PhD study focuses on developing optimisation algorithms to enhance the operations of public transport while explicitly emphasising passengers' travel behaviour and preferences. Similar to economic theory, interactions between supply and demand are omnipresent in the context of public transport operations. In public transport, the demand is represented… …to the case where the two problems are solved sequentially without taking into account interdependencies. [Figure 1: Planning public transport] The PhD study develops a metaheuristic algorithm to adapt the line plan configuration in order better to match passengers' travel demand in terms of transfers as well…

  16. Value Chain Optimisation of Biogas Production

    DEFF Research Database (Denmark)

    Jensen, Ida Græsted

    …economically feasible. In this PhD thesis, the focus is to create models for investigating the profitability of biogas projects by: 1) including the whole value chain in a mathematical model and considering mass and energy changes on the upstream part of the chain; and 2) including profit allocation in a value… …the costs of the biogas plant have been included in the model using economies of scale. For the second point, a mathematical model considering profit allocation was developed applying three allocation mechanisms. This mathematical model can be applied as a second step after the value chain optimisation. After… …in the energy systems model to find the optimal end use of each type of gas and fuel. The main contributions of this thesis are the methods developed on plant level. Both the mathematical model for the value chain and the profit allocation model can be generalised and used in other industries where mass…

  17. Expert systems and optimisation in process control

    Energy Technology Data Exchange (ETDEWEB)

    Mamdani, A.; Efstathiou, J. (eds.)

    1986-01-01

    This report brings together recent developments both in expert systems and in optimisation, and deals with current applications in industry. Part One is concerned with Artificial Intelligence in planning and scheduling and with rule-based control implementation. The tasks of control maintenance, rescheduling and planning are each discussed in relation to new theoretical developments, techniques available, and sample applications. Part Two covers model based control techniques in which the control decisions are used in a computer model of the process. Fault diagnosis, maintenance and trouble-shooting are just some of the activities covered. Part Three contains case studies of projects currently in progress, giving details of the software available and the likely future trends. One of these, on qualitative plant modelling as a basis for knowledge-based operator aids in nuclear power stations, is indexed separately.

  18. Expert systems and optimisation in process control

    International Nuclear Information System (INIS)

    Mamdani, A.; Efstathiou, J.

    1986-01-01

    This report brings together recent developments both in expert systems and in optimisation, and deals with current applications in industry. Part One is concerned with Artificial Intelligence in planning and scheduling and with rule-based control implementation. The tasks of control maintenance, rescheduling and planning are each discussed in relation to new theoretical developments, techniques available, and sample applications. Part Two covers model based control techniques in which the control decisions are used in a computer model of the process. Fault diagnosis, maintenance and trouble-shooting are just some of the activities covered. Part Three contains case studies of projects currently in progress, giving details of the software available and the likely future trends. One of these, on qualitative plant modelling as a basis for knowledge-based operator aids in nuclear power stations, is indexed separately. (author)

  19. Improving and optimising road pricing in Copenhagen

    DEFF Research Database (Denmark)

    Nielsen, Otto Anker; Larsen, Marie Karen

    2008-01-01

    The question whether to introduce toll rings or road pricing in Copenhagen has been discussed intensively during the last 10 years. The main results of previous analyses are that none of the systems would make a positive contribution at present, when considered from a socio-economic view. Even though quite a number of proposed charging systems have been examined, only a few pricing strategies have been investigated. This paper deals with the optimisation of different designs for a road pricing system in the Greater Copenhagen area with respect to temporal and spatial differentiation of the pricing levels. A detailed transport model was used to describe the demand effects. The model was based on data from a real test of road pricing on 500 car drivers. The paper compares the price systems with regard to traffic effects and generalised costs for users and society. It is shown how important…

  20. A code for optimising triplet layout

    CERN Document Server

    AUTHOR|(CDS)2141109; Seryi, Andrei; Abelleira, Jose; Cruz Alaniz, Emilia

    2017-01-01

    One of the main challenges when designing final focus systems of particle accelerators is maximising the beam stay-clear in the strong quadrupole magnets of the inner triplet. Moreover, it is desirable to keep the quadrupoles in the inner triplet as short as possible, for space and cost reasons but also to reduce chromaticity and simplify correction schemes. An algorithm that explores the triplet parameter space to optimise both these aspects was written. It uses thin lenses as a first approximation for a broad parameter scan and MADX for more precise calculations. The thin-lens algorithm is significantly faster than a full scan using MADX and relatively precise at indicating the approximate area where the optimum solution lies.
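
    The broad thin-lens scan can be sketched compactly: model the quads as thin lenses, propagate a Twiss beta function through the triplet with 2×2 transfer matrices, and score the peak beta (a proxy for the beam stay-clear requirement) over a grid of focal lengths. The layout, distances and scan ranges below are invented, and only one transverse plane is tracked, so this is an illustration of the method rather than the code described.

```python
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def max_beta(elements, beta0=0.5):
    """Propagate beta (alpha0 = 0 at the focal point) through a list of
    element matrices via sigma' = M sigma M^T; return the largest beta seen
    at element boundaries (peaks inside drifts are ignored in this sketch)."""
    sigma = np.array([[beta0, 0.0], [0.0, 1.0 / beta0]])
    peak = beta0
    for M in elements:
        sigma = M @ sigma @ M.T
        peak = max(peak, sigma[0, 0])
    return peak

Lstar, Lq, d = 23.0, 3.0, 1.5   # IP distance, quad length, spacing (assumed, m)

best = None
for f1 in np.linspace(10.0, 60.0, 51):          # broad thin-lens parameter scan
    for f2 in np.linspace(10.0, 60.0, 51):
        # F-D-F triplet, quads represented as thin lenses at element centres.
        lattice = [drift(Lstar + Lq / 2), thin_quad(f1),
                   drift(Lq + d), thin_quad(-f2),
                   drift(Lq + d), thin_quad(f1),
                   drift(Lq / 2)]
        peak = max_beta(lattice)
        if best is None or peak < best[0]:
            best = (peak, f1, f2)

print("min of peak beta: %.1f m at f1=%.1f m, f2=%.1f m" % best)
```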

  1. Optimising Signalised Intersection Using Wireless Vehicle Detectors

    DEFF Research Database (Denmark)

    Adjin, Daniel Michael Okwabi; Torkudzor, Moses; Asare, Jack

    Traffic congestion on roads wastes travel time. In this paper, we developed a vehicular traffic model to optimise a signalised intersection in Accra, using wireless vehicle detectors. Traffic volume data gathered were extrapolated to cover 2011 and 2016 and analysed to obtain the peak-hour traffic volume causing congestion. The intersection was modelled and simulated in Synchro7 as an actuated signalised model using results from the analysed data. The model for the morning peak periods gave optimal cycle lengths of 100 s and 150 s with corresponding intersection delays of 48.9 s and 90.6 s in 2011 and 2016 respectively, while that for the evening was 55 s, giving delays of 14.2 s and 16.3 s respectively. It is shown that the model will improve traffic flow at the intersection.
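
    Synchro7's actuated optimisation cannot be reproduced in a few lines, but the classical fixed-time counterpart, Webster's formula C₀ = (1.5L + 5)/(1 − Y), gives a feel for how peak-hour volumes drive the optimal cycle length. The saturation flow, lost times and phase volumes below are illustrative assumptions, not the Accra data.

```python
def webster_cycle(crit_flows, sat_flow=1800.0, lost_time_per_phase=4.0):
    """Webster's optimal fixed-time cycle length [s].

    crit_flows: critical-lane volume [veh/h] for each signal phase.
    """
    L = lost_time_per_phase * len(crit_flows)      # total lost time [s]
    Y = sum(q / sat_flow for q in crit_flows)      # sum of flow ratios
    if Y >= 1.0:
        raise ValueError("intersection oversaturated (Y >= 1)")
    return (1.5 * L + 5.0) / (1.0 - Y)

# Illustrative two-phase critical volumes for a morning and evening peak.
print("AM cycle: %.0f s" % webster_cycle([650, 540]))
print("PM cycle: %.0f s" % webster_cycle([420, 380]))
```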

  2. Dynamic optimisation of an industrial web process

    Directory of Open Access Journals (Sweden)

    M Soufian

    2008-09-01

    Full Text Available An industrial web process has been studied and it is shown that the underlying physics of such processes is governed by the Navier-Stokes partial differential equations with moving boundary conditions, which in turn have to be determined by the solution of the thermodynamics equations. The development of a two-dimensional continuous-discrete model structure based on this study is presented. Other models are constructed based on this model for better identification and optimisation purposes. The parameters of the proposed models are then estimated using real data obtained from the identification experiments with the process plant. Various simulation tests for validation are accompanied with the design, development and real-time industrial implementation of an optimal controller for dynamic optimisation of this web process. It is shown that, in comparison with the traditional controller, the new controller resulted in a better performance, an improvement in film quality and savings in raw materials. This demonstrates the efficiency and validation of the developed models.

  3. Recent perspectives on optimisation of radiological protection

    International Nuclear Information System (INIS)

    Robb, J.D.; Croft, J.R.

    1992-01-01

    The ALARA principle as a requirement in radiological protection has evolved from its theoretical roots. Based on several years work, this paper provides a backdrop to practical approaches to ALARA for the 1990s. The key step, developing ALARA thinking so that it becomes an integral part of radiological protection programmes, is discussed using examples from the UK and France, as is the role of tools to help standardise judgements for decision-making. In its latest recommendations, ICRP have suggested that the optimisation of protection should be constrained by restrictions on the doses to individuals. This paper also considers the function of such restrictions for occupational, public and medical exposure, and in the design process. (author)

  4. Optimisation of parameters of DCD for PHWRs

    International Nuclear Information System (INIS)

    Velmurugan, S.; Sathyaseelan, V.S.; Narasimhan, S.V.; Mathur, P.K.

    1991-01-01

    A decontamination formulation based on EDTA, oxalic acid and citric acid was evaluated for its efficacy in removing oxide layers in PHWRs. An ion exchange system specifically suited to fission-product-dominated contamination in PHWRs was optimised for the reagent regeneration stage of the decontamination process. An analysis of the nature of the complexed metal species formed in the dissolution process and electrochemical measurements were employed as tools to follow the course of oxide removal during dissolution. An attempt was made to understand the redeposition behaviour of various isotopes during the decontamination process. SEM and ESCA studies of metal coupons before and after the dissolution process were used to analyse the deposits in the above context. The uptake of DCD reagents on the ion exchangers and material compatibility tests on carbon steel, Monel-400 and Zircaloy-2 with the decontaminant under the conditions of the decontamination experiment are reported. (author)

  5. Optimisation of Inulinase Production by Kluyveromyces bulgaricus

    Directory of Open Access Journals (Sweden)

    Darija Vranešić

    2002-01-01

    Full Text Available The present work is based on observation of the effects of pH and temperature of fermentation on the production of the microbial enzyme inulinase by Kluyveromyces marxianus var. bulgaricus. Inulinase hydrolyzes inulin, a polysaccharide which can be isolated from plants such as Jerusalem artichoke, chicory or dahlia, and transformed into pure fructose or fructooligosaccharides. Fructooligosaccharides have great potential in the food industry because they can be used as calorie-reduced compounds and noncariogenic sweeteners as well as soluble fibre and prebiotic compounds. Fructose formation from inulin is a single-step enzymatic reaction with yields of up to 95 % fructose. In contrast, conventional fructose production from starch needs at least three enzymatic steps and yields only 45 % fructose. The process of inulinase production was optimised using the experimental design method. The pH value of the cultivation medium proved to be the most significant variable and should be maintained at the optimum value of 3.6. The effect of temperature was slightly smaller, with optimal values between 30 and 33 °C. At a low pH value of the cultivation medium the microorganism was not able to produce enough enzyme, and enzyme activities were low. A similar effect was caused by high temperature. The highest enzyme activities were achieved at optimal fermentation conditions: 100.16–124.36 IU/mL (with sucrose as substrate for determination of enzyme activity) or 8.6–11.6 IU/mL (with inulin as substrate), respectively. The method of factorial design and response surface analysis makes it possible to study several factors simultaneously, to quantify the individual effect of each factor and to investigate their possible interactions. As a comparison to this method, optimisation of a physiological enzyme activity model depending on pH and temperature was also studied.

  6. Multi-objective optimisation of wastewater treatment plant control to reduce greenhouse gas emissions.

    Science.gov (United States)

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2014-05-15

    This study investigates the potential of control strategy optimisation for the reduction of operational greenhouse gas emissions from wastewater treatment in a cost-effective manner, and demonstrates that significant improvements can be realised. A multi-objective evolutionary algorithm, NSGA-II, is used to derive sets of Pareto optimal operational and control parameter values for an activated sludge wastewater treatment plant, with objectives including minimisation of greenhouse gas emissions, operational costs and effluent pollutant concentrations, subject to legislative compliance. Different problem formulations are explored, to identify the most effective approach to emissions reduction, and the sets of optimal solutions enable identification of trade-offs between conflicting objectives. It is found that multi-objective optimisation can facilitate a significant reduction in greenhouse gas emissions without the need for plant redesign or modification of the control strategy layout, but there are trade-offs to consider: most importantly, if operational costs are not to be increased, reduction of greenhouse gas emissions is likely to incur an increase in effluent ammonia and total nitrogen concentrations. Design of control strategies for a high effluent quality and low costs alone is likely to result in an inadvertent increase in greenhouse gas emissions, so it is of key importance that effects on emissions are considered in control strategy development and optimisation. Copyright © 2014 Elsevier Ltd. All rights reserved.
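
    NSGA-II itself is too involved to reproduce here, but its core building block, extracting the non-dominated (Pareto) set from candidate solutions evaluated on several objectives, is easy to sketch. The objective values below (emissions, cost, effluent ammonia, all minimised) are fabricated placeholders, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(42)

# Fabricated objective values for 200 candidate control-parameter sets:
# columns = (greenhouse gas emissions, operational cost, effluent ammonia).
F = rng.uniform(0.0, 1.0, size=(200, 3))

def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(F):
    front = []
    for i in range(len(F)):
        if not any(dominates(F[j], F[i]) for j in range(len(F)) if j != i):
            front.append(i)
    return front

front = pareto_front(F)
print(f"{len(front)} non-dominated solutions out of {len(F)}")
# A decision maker would now inspect trade-offs, e.g. sort the front by cost:
for i in sorted(front, key=lambda i: F[i][1])[:5]:
    print("emissions %.2f  cost %.2f  ammonia %.2f" % tuple(F[i]))
```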

  7. Optimisation on processing parameters for minimising warpage on side arm using response surface methodology (RSM) and particle swarm optimisation (PSO)

    Science.gov (United States)

    Rayhana, N.; Fathullah, M.; Shayfull, Z.; Nasir, S. M.; Hazwan, M. H. M.; Sazli, M.; Yahya, Z. R.

    2017-09-01

    This study presents the application of optimisation methods to reduce the warpage of a side arm part. Autodesk Moldflow Insight software was integrated into the study to analyse the warpage. A design of experiments (DOE) for response surface methodology (RSM) was constructed, and using the regression equation from RSM, particle swarm optimisation (PSO) was applied. The optimisation yields the combination of processing parameters giving minimum warpage. Mould temperature, melt temperature, packing pressure, packing time and cooling time were selected as the variable parameters. Parameter selection was based on the most significant factors affecting warpage as reported by previous researchers. The results show that warpage was improved by 28.16% for RSM and 28.17% for PSO; PSO improves on RSM by only 0.01 percentage points. Thus, optimisation using RSM is already sufficient to give the best combination of parameters and an optimum warpage value for the side arm part. The most significant parameter affecting warpage is packing pressure.
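
    The two-stage approach can be illustrated in miniature: assume a quadratic RSM surrogate for warpage in two coded parameters, then minimise it with a basic global-best PSO. The surrogate coefficients, swarm settings and coded design space below are invented for the example, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(7)

def warpage(x):
    """Assumed RSM-style quadratic surrogate in coded units (not the paper's
    fit): w = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = x[..., 0], x[..., 1]
    return (0.82 - 0.11 * x1 - 0.04 * x2
            + 0.09 * x1**2 + 0.05 * x2**2 + 0.02 * x1 * x2)

# Basic global-best PSO over the coded design space [-1, 1]^2.
n, iters = 30, 200
pos = rng.uniform(-1, 1, (n, 2))
vel = np.zeros((n, 2))
pbest, pbest_f = pos.copy(), warpage(pos)
g = pbest[np.argmin(pbest_f)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (g - pos)
    pos = np.clip(pos + vel, -1, 1)
    f = warpage(pos)
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()

print("optimal coded parameters:", np.round(g, 3),
      " warpage:", round(float(warpage(g)), 4))
```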

  8. Anomalous roughness of turbulent interfaces with system size dependent local roughness exponent

    International Nuclear Information System (INIS)

    Balankin, Alexander S.; Matamoros, Daniel Morales

    2005-01-01

    In a system far from equilibrium the system size can play the role of a control parameter that governs the spatiotemporal dynamics of the system. Accordingly, the kinetic roughness of interfaces in systems far from equilibrium may depend on the system size. To gain insight into this problem, we performed a detailed study of rough interfaces formed in paper combustion experiments. Using paper sheets of different width λ, we found that the turbulent flame fronts display anomalous multi-scaling characterized by a non-universal global roughness exponent α and by a system-size-dependent spectrum of local roughness exponents, $\zeta_q(\lambda) = \zeta_1(1)\,q^{-\omega}\lambda^{\varphi_q} = 0.93\,q^{-0.15}$. The structure factor of turbulent flame fronts also exhibits unconventional scaling dependence on λ. These results are expected to apply to a broad range of far-from-equilibrium systems when the kinetic energy fluctuations exceed a certain critical value.

  9. Friction stir welding: multi-response optimisation using Taguchi-based GRA

    Directory of Open Access Journals (Sweden)

    Jitender Kundu

    2016-01-01

    Full Text Available In the present experimental work, friction stir welding of aluminium alloy 5083-H321 is performed to optimise the process parameters for maximum tensile strength. Taguchi's L9 orthogonal array has been used for three parameters – tool rotational speed (TRS), traverse speed (TS) and tool tilt angle (TTA) – each at three levels. Multi-response optimisation has been carried out through Taguchi-based grey relational analysis. The grey relational grade has been calculated for all three responses – ultimate tensile strength, percentage elongation and micro-hardness. Analysis of variance applied to the grey relational grade identifies the significant process parameters. TRS and TS are the two most significant parameters influencing the quality characteristics of the friction stir welded joint. Validation of the predicted values through confirmation experiments at the optimum setting shows good agreement with the experimental values.
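
    The grey relational post-processing is mechanical enough to sketch: normalise each response (larger-the-better here), compute grey relational coefficients with a distinguishing coefficient ζ = 0.5, and average them into a grade per run. The L9 response values below are fabricated for illustration, not the paper's measurements.

```python
import numpy as np

# Fabricated L9 responses: columns = UTS [MPa], elongation [%], hardness [HV].
Y = np.array([
    [275, 11.2, 82], [290, 12.5, 85], [268, 10.1, 80],
    [301, 13.0, 88], [285, 11.8, 84], [279, 10.9, 83],
    [296, 12.2, 86], [288, 12.7, 85], [272, 10.6, 81],
], dtype=float)

# Step 1: larger-the-better normalisation to [0, 1] for every response.
Z = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0))

# Step 2: grey relational coefficient, distinguishing coefficient zeta = 0.5.
delta = 1.0 - Z                      # deviation from the ideal sequence
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# Step 3: grey relational grade = mean coefficient over the responses.
grade = grc.mean(axis=1)
print("grades:", np.round(grade, 3))
print("best run: #%d" % (np.argmax(grade) + 1))
```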

  10. Automation of route identification and optimisation based on data-mining and chemical intuition.

    Science.gov (United States)

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.

  11. Hybrid real-code ant colony optimisation for constrained mechanical design

    Science.gov (United States)

    Pholdee, Nantiwat; Bureerat, Sujin

    2016-01-01

    This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.

  12. Optimisation of beryllium-7 gamma analysis following BCR sequential extraction

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, A. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Blake, W.H., E-mail: wblake@plymouth.ac.uk [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Keith-Roach, M.J. [Plymouth University, School of Geography, Earth and Environmental Sciences, 8 Kirkby Place, Plymouth PL4 8AA (United Kingdom); Kemakta Konsult, Stockholm (Sweden)

    2012-03-30

    The application of cosmogenic ⁷Be as a sediment tracer at the catchment scale requires an understanding of its geochemical associations in soil to underpin the assumption of irreversible adsorption. Sequential extractions offer a readily accessible means of determining the associations of ⁷Be with operationally defined soil phases. However, the subdivision of the low activity concentrations of fallout ⁷Be in soils into geochemical fractions can introduce high gamma counting uncertainties. Extending analysis time significantly is not always an option for batches of samples, owing to the ongoing decay of ⁷Be (t₁/₂ = 53.3 days). Here, three different methods of preparing and quantifying ⁷Be extracted using the optimised BCR three-step scheme have been evaluated and compared, with a focus on reducing analytical uncertainties. The optimal method involved carrying out the BCR extraction in triplicate, sub-sampling each set of triplicates for stable Be analysis before combining each set and coprecipitating the ⁷Be with metal oxyhydroxides to produce a thin source for gamma analysis. This method was applied to BCR extractions of natural ⁷Be in four agricultural soils. The approach gave good counting statistics from a 24 h analysis period (≈10% (2…

  13. Single-layer model for surface roughness.

    Science.gov (United States)

    Carniglia, C K; Jensen, D G

    2002-06-01

    Random roughness of an optical surface reduces its specular reflectance and transmittance by the scattering of light. The reduction in reflectance can be modeled by a homogeneous layer on the surface if the refractive index of the layer is intermediate to the indices of the media on either side of the surface. Such a layer predicts an increase in the transmittance of the surface and therefore does not provide a valid model for the effects of scatter on the transmittance. Adding a small amount of absorption to the layer provides a model that predicts a reduction in both reflectance and transmittance. The absorbing layer model agrees with the predictions of a scalar scattering theory for a layer with a thickness that is twice the rms roughness of the surface. The extinction coefficient k for the layer is proportional to the thickness of the layer.

  14. Offshore Wind Power at Rough Sea

    DEFF Research Database (Denmark)

    Petersen, Kristian Rasmus; Madsen, Erik Skov; Bilberg, Arne

    2013-01-01

    This study compares the current operations and maintenance issues of one offshore wind park under very rough sea conditions with those of two onshore wind parks. Through a detailed data analysis and case studies, the study identifies how improvements have been made in the maintenance of large wind turbines. However, the study has also revealed the need for new maintenance models, including a shift from breakdown and preventive maintenance towards more predictive maintenance, to reduce the cost of energy for offshore wind energy installations in the future.

  15. The contact sport of rough surfaces

    Science.gov (United States)

    Carpick, Robert W.

    2018-01-01

    Describing the way two surfaces touch and make contact may seem simple, but it is not. Fully describing the elastic deformation of ideally smooth contacting bodies, under even low applied pressure, involves second-order partial differential equations and fourth-rank elastic constant tensors. For more realistic rough surfaces, the problem becomes a multiscale exercise in surface-height statistics, even before including complex phenomena such as adhesion, plasticity, and fracture. A recent research competition, the “Contact Mechanics Challenge” (1), was designed to test various approximate methods for solving this problem. A hypothetical rough surface was generated, and the community was invited to model contact with this surface with competing theories for the calculation of properties, including contact area and pressure. A supercomputer-generated numerical solution was kept secret until competition entries were received. The comparison of results (2) provides insights into the relative merits of competing models and even experimental approaches to the problem.

  16. Prediction of Ductile Fracture Surface Roughness Scaling

    DEFF Research Database (Denmark)

    Needleman, Alan; Tvergaard, Viggo; Bouchaud, Elisabeth

    2012-01-01

    Experimental observations have shown that the roughness of fracture surfaces exhibits certain characteristic scaling properties. Here, calculations are carried out to explore the extent to which a ductile damage/fracture constitutive relation can be used to model fracture surface roughness scaling. Ductile crack growth in a thin strip under mode I, overall plane strain, small scale yielding conditions is analyzed. Although overall plane strain loading conditions are prescribed, full 3D analyses are carried out to permit modeling of the three dimensional material microstructure and of the resulting three dimensional stress and deformation states that develop in the fracture process region. An elastic-viscoplastic constitutive relation for a progressively cavitating plastic solid is used to model the material. Two populations of second phase particles are represented: large inclusions with low…

  17. Estimation of gloss from rough surface parameters

    Science.gov (United States)

    Simonsen, Ingve; Larsen, Åge G.; Andreassen, Erik; Ommundsen, Espen; Nord-Varhaug, Katrin

    2005-12-01

    Gloss is a quantity used in the optical industry to quantify and categorize materials according to how well they scatter light specularly. With the aid of phase perturbation theory, we derive an approximate expression for this quantity for a one-dimensional randomly rough surface. It is demonstrated that gloss depends in an exponential way on two dimensionless quantities that are associated with the surface randomness: the root-mean-square roughness times the perpendicular momentum transfer for the specular direction, and a correlation function dependent factor times a lateral momentum variable associated with the collection angle. Rigorous Monte Carlo simulations are used to access the quality of this approximation, and good agreement is observed over large regions of parameter space.
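
    The exponential dependence described is of the same form as the familiar specular-attenuation factor exp[−(q⊥σ)²], with q⊥ = (4π/λ)cosθ the perpendicular momentum transfer. The sketch below evaluates only that rms-roughness factor, omitting the correlation-function-dependent collection-angle term of the paper's full expression; the wavelength and incidence angle are assumed values.

```python
import numpy as np

def specular_attenuation(sigma_nm, wavelength_nm=550.0, theta_deg=20.0):
    """Attenuation of the specular beam, exp(-(q_perp * sigma)^2), where
    q_perp = (4*pi/lambda)*cos(theta) is the perpendicular momentum transfer.
    Only the rms-roughness factor of the gloss expression is kept here."""
    q_perp = 4.0 * np.pi / wavelength_nm * np.cos(np.radians(theta_deg))
    return np.exp(-(q_perp * sigma_nm) ** 2)

for sigma in (5, 10, 20, 40):   # rms roughness in nm
    print(f"sigma = {sigma:2d} nm -> specular fraction = "
          f"{specular_attenuation(sigma):.3f}")
```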

  18. Sparseness and Roughness of Foreign Exchange Rates

    Science.gov (United States)

    Vandewalle, N.; Ausloos, M.

    An accurate multiaffine analysis of 23 foreign currency exchange rates has been performed. The roughness exponent H1 which characterizes the excursion of the exchange rate has been numerically measured. The degree of intermittency C1 has been also estimated. In the (H1,C1) phase diagram, the currency exchange rates are dispersed in a wide region around the Brownian motion value (H1=0.5,C1=0) and have a significantly intermittent component (C1≠0).
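
    As an illustration of the kind of measurement involved, the roughness exponent H₁ of a self-affine series can be estimated from the first-order structure function S₁(τ) = ⟨|x(t+τ) − x(t)|⟩ ∝ τ^H₁. The sketch below applies this to a synthetic Brownian walk, for which H₁ ≈ 0.5 is expected; it is not the authors' multiaffine analysis of the currency data.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=20000))   # Brownian walk: expect H1 ~ 0.5

def roughness_exponent(x, lags=(1, 2, 4, 8, 16, 32, 64)):
    """H1 from the first-order structure function S1(tau) = <|x(t+tau)-x(t)|>,
    which scales as tau**H1 for a self-affine signal."""
    s1 = [np.mean(np.abs(x[l:] - x[:-l])) for l in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(s1), 1)
    return slope

print("estimated H1 = %.3f" % roughness_exponent(x))
```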

  19. Rough surface scattering simulations using graphics cards

    International Nuclear Information System (INIS)

    Klapetek, Petr; Valtr, Miroslav; Poruba, Ales; Necas, David; Ohlidal, Miloslav

    2010-01-01

    In this article we present results of rough surface scattering calculations using a graphical processing unit implementation of the Finite Difference in Time Domain algorithm. Numerical results are compared to real measurements and computational performance is compared to computer processor implementation of the same algorithm. As a basis for computations, atomic force microscope measurements of surface morphology are used. It is shown that the graphical processing unit capabilities can be used to speedup presented computationally demanding algorithms without loss of precision.

  20. Roughness Length Variability over Heterogeneous Surfaces

    Science.gov (United States)

    2010-03-01

    …(2004), the influence of variable roughness reaches its maximum at the height of the local z₀ and vanishes at the so-called blending height (Wieringa…) …the distribution of visibility restrictors such as low clouds, fog, haze, dust and pollutants. An improved understanding of ABL structure…

  1. The characteristic function of rough Heston models

    OpenAIRE

    Euch, Omar El; Rosenbaum, Mathieu

    2016-01-01

    It has been recently shown that rough volatility models, where the volatility is driven by a fractional Brownian motion with small Hurst parameter, provide very relevant dynamics in order to reproduce the behavior of both historical and implied volatilities. However, due to the non-Markovian nature of the fractional Brownian motion, they raise new issues when it comes to derivatives pricing. Using an original link between nearly unstable Hawkes processes and fractional volatility models, we c...

  2. Soil surface roughness decay in contrasting climates, tillage types and management systems

    Science.gov (United States)

    Vidal Vázquez, Eva; Bertol, Ildegardis; Tondello Barbosa, Fabricio; Paz-Ferreiro, Jorge

    2014-05-01

    Soil surface roughness describes the variations in the elevation of the soil surface. Such variations define the soil surface microrelief, which is characterized by a high spatial variability. Soil surface roughness is a property affecting many processes such as depression storage, infiltration, sediment generation, storage and transport, and runoff routing. Therefore the soil surface microrelief is a key element in hydrology and soil erosion processes at different spatial scales, for example at the plot, field or catchment scale. In agricultural land soil surface roughness is mainly created by tillage operations, which promote to different extents the formation of microdepressions and microelevations and increase infiltration and temporal retention of water. The decay of soil surface roughness has been demonstrated to be mainly driven by rain height and rain intensity, and to depend also on runoff, aggregate stability, soil surface porosity and soil surface density. Soil roughness formation and decay may also be influenced by antecedent soil moisture (either before tillage or rain), the quantity and type of plant residues over the soil surface, and soil composition. Characterization of the rate and intensity of soil surface roughness decay provides valuable information about the degradation of the uppermost soil surface layer before soil erosion has been initiated or at the very beginning of soil runoff and erosion processes. We analyzed the rate of decay of soil surface roughness from several experiments conducted in two regions under temperate and subtropical climate and with contrasting land use systems. The data sets studied were obtained both under natural and simulated rainfall for various soil tillage and management types. Soil surface roughness decay was characterized by several parameters, including classic single parameters such as the random roughness or the tortuosity, and parameters based on advanced geostatistical methods or on fractal theory. Our

  3. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Science.gov (United States)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods assessed were the evolutionary algorithm: the genetic algorithm (GA), and the deterministic algorithm: the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine in comparison to the more computationally demanding GA routine to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest
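
    The incremental (greedy) routine is simple to sketch: grow the network one station at a time, always adding the candidate that maximises the objective for the network built so far. Below, a fabricated scoring function stands in for the Bayesian-inversion uncertainty reduction, and an exhaustive search (feasible only at this toy size) plays the role of a global benchmark against which the greedy result can be checked.

```python
import itertools
import numpy as np

rng = np.random.default_rng(11)
candidates = list(range(20))            # candidate station locations

# Fabricated stand-in for the inverse-modelling cost function: in the real
# problem this would be the posterior flux uncertainty for a network.
quality = rng.uniform(0.0, 1.0, size=len(candidates))
synergy = rng.uniform(-0.1, 0.1, size=(len(candidates), len(candidates)))

def uncertainty_reduction(network):
    s = sum(quality[i] for i in network)
    s += sum(synergy[i][j]
             for i, j in itertools.combinations(sorted(network), 2))
    return s

def incremental_design(k):
    """Greedy IO: grow the network one best station at a time."""
    network = []
    for _ in range(k):
        best = max((c for c in candidates if c not in network),
                   key=lambda c: uncertainty_reduction(network + [c]))
        network.append(best)
    return network

net = incremental_design(5)
print("IO network:", sorted(net),
      " score: %.3f" % uncertainty_reduction(net))
# Exhaustive check over all 5-member networks: did greedy miss the optimum?
best = max(itertools.combinations(candidates, 5), key=uncertainty_reduction)
print("global best:", list(best),
      " score: %.3f" % uncertainty_reduction(list(best)))
```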

  4. Radiative transfer model for contaminated rough slabs.

    Science.gov (United States)

    Andrieu, François; Douté, Sylvain; Schmidt, Frédéric; Schmitt, Bernard

    2015-11-01

    We present a semi-analytical model to simulate the bidirectional reflectance distribution function (BRDF) of a rough slab layer containing impurities. This model has been optimized for fast computation in order to analyze massive hyperspectral data by a Bayesian approach. We designed it for planetary surface ice studies but it could be used for other purposes. It estimates the bidirectional reflectance of a rough slab of material containing inclusions, overlying an optically thick medium (a semi-infinite medium or a stratified medium, for instance granular material). The inclusions are assumed to be close to spherical and constituted of any type of material other than the ice matrix: any other type of ice, mineral, or even bubbles, defined by their optical constants. We assume low roughness and consider geometrical optics conditions. This model is thus applicable for inclusions larger than the considered wavelength. The scattering on the inclusions is assumed to be isotropic. The model has a fast computation implementation and thus is suitable for high-resolution hyperspectral data analysis.

  5. Multi-decadal Arctic sea ice roughness.

    Science.gov (United States)

    Tsamados, M.; Stroeve, J.; Kharbouche, S.; Muller, J. P., , Prof; Nolin, A. W.; Petty, A.; Haas, C.; Girard-Ardhuin, F.; Landy, J.

    2017-12-01

    The transformation of Arctic sea ice from mainly perennial, multi-year ice to seasonal, first-year ice is believed to have been accompanied by a reduction in the roughness of the ice cover surface. This smoothening effect has been shown to (i) modify the momentum and heat transfer between the atmosphere and ocean, (ii) alter the ice thickness distribution, which in turn controls the snow and melt pond repartition over the ice cover, and (iii) bias airborne and satellite remote sensing measurements that depend on the scattering and reflective characteristics of the sea ice surface topography. We will review existing and novel remote sensing methodologies proposed to estimate sea ice roughness, ranging from airborne LIDAR measurements (i.e. Operation IceBridge), to backscatter coefficients from scatterometers (ASCAT, QuikSCAT), to the multi-angle imaging spectroradiometer (MISR), and to laser (ICESat) and radar altimeters (Envisat, CryoSat, AltiKa, Sentinel-3). We will show that by comparing and cross-calibrating these different products we can offer a consistent multi-mission, multi-decadal view of the declining sea ice roughness. Implications for sea ice physics, climate and remote sensing will also be discussed.

  6. ROUGHNESS ANALYSIS OF VARIOUSLY POLISHED NIOBIUM SURFACES

    Energy Technology Data Exchange (ETDEWEB)

    Ribeill, G.; Reece, C.

    2008-01-01

    Niobium superconducting radio frequency (SRF) cavities have gained widespread use in accelerator systems. It has been shown that surface roughness is a determining factor in the cavities' efficiency and the maximum accelerating potential achievable through this technology. Irregularities in the surface can lead to spot heating, undesirable local electrical field enhancement and electron multipacting. Surface quality is typically ensured through the use of acid etching in a buffered chemical polish (BCP) bath and electropolishing (EP). In this study, the effects of these techniques on surface morphology have been investigated in depth. The surface of niobium samples polished using different combinations of these techniques has been characterized through atomic force microscopy (AFM) and stylus profilometry across a range of length scales. The surface morphology was analyzed using spectral techniques to determine roughness and characteristic dimensions. Experimentation has shown that this method is a valuable tool that provides quantitative information about surface roughness at different length scales. It has demonstrated that light BCP pretreatment and lower electrolyte temperature favors a smoother electropolish. These results will allow for the design of a superior polishing process for niobium SRF cavities and therefore increased accelerator operating efficiency and power.

  7. Modeling superhydrophobic surfaces comprised of random roughness

    Science.gov (United States)

    Samaha, M. A.; Tafreshi, H. Vahedi; Gad-El-Hak, M.

    2011-11-01

    We model the performance of superhydrophobic surfaces comprised of randomly distributed roughness that resembles natural surfaces, or those produced via random deposition of hydrophobic particles. Such a fabrication method is far less expensive than ordered-microstructured fabrication. The present numerical simulations are aimed at improving our understanding of the drag reduction effect and the stability of the air-water interface in terms of the microstructure parameters. For comparison and validation, we have also simulated the flow over superhydrophobic surfaces made up of aligned or staggered microposts for channel flows as well as streamwise or spanwise ridge configurations for pipe flows. The present results are compared with other theoretical and experimental studies. The numerical simulations indicate that the random distribution of surface roughness has a favorable effect on drag reduction, as long as the gas fraction is kept the same. The stability of the meniscus, however, is strongly influenced by the average spacing between the roughness peaks, which needs to be carefully examined before a surface can be recommended for fabrication. Financial support from DARPA, contract number W91CRB-10-1-0003, is acknowledged.

  8. Design optimisation of a flywheel hybrid vehicle

    Energy Technology Data Exchange (ETDEWEB)

    Kok, D.B.

    1999-11-04

    This thesis describes the design optimisation of a flywheel hybrid vehicle with respect to fuel consumption and exhaust gas emissions. The driveline of this passenger car uses two power sources: a small spark ignition internal combustion engine with three-way catalyst, and a high-speed flywheel system for kinetic energy storage. A custom-made continuously variable transmission (CVT) with so-called i² control transports energy between these power sources and the vehicle wheels. The driveline includes auxiliary systems for hydraulic, vacuum and electric purposes. In this fully mechanical driveline, parasitic energy losses determine the vehicle's fuel saving potential to a large extent. Practicable energy loss models have been derived to quantify friction losses in bearings, gearwheels, the CVT, clutches and dynamic seals. In addition, the aerodynamic drag in the flywheel system and the power consumption of auxiliaries are charted. With the energy loss models available, a calculation procedure is introduced to optimise the flywheel as a subsystem in which the rotor geometry, the safety containment and the vacuum system are designed for minimum energy use within the context of automotive applications. A first prototype of the flywheel system was tested experimentally and subsequently redesigned to improve rotordynamics and safety aspects. Coast-down experiments with the improved version show that the energy losses have been lowered significantly. The use of a kinetic energy storage device enables the uncoupling of vehicle wheel power and engine power. Therefore, the engine can be smaller and it can be chosen to operate in its region of best efficiency in start-stop mode. On a test-rig, the measured engine fuel consumption was reduced by more than 30 percent when the engine was intermittently restarted with the aid of the flywheel system. Although the start-stop mode proves to be advantageous for fuel consumption, exhaust gas emissions increase temporarily.

  9. The surface roughness and planetary boundary layer

    Science.gov (United States)

    Telford, James W.

    1980-03-01

    Applications of the entrainment process to layers at the boundary, which meet the self-similarity requirements of the logarithmic profile, have been studied. By accepting that turbulence has dominating scales related in scale length to the height above the surface, a layer structure is postulated wherein exchange is rapid enough to keep the layers internally uniform. The diffusion rate is then controlled by entrainment between layers. It has been shown that theoretical relationships derived on the basis of using a single layer of this type give quantitatively correct factors relating the turbulence, wind and shear stress for very rough surface conditions. For less rough surfaces, the surface boundary layer can be divided into several layers interacting by entrainment across each interface. This analysis leads to the following formula, which is quantitatively correct when compared to published measurements:

$$\frac{\sigma_w}{u^*} = \left(\frac{2}{9Aa}\right)^{1/4}\left(1 - 3^{1/2}\,\frac{a}{k}\,\frac{d_n}{z}\,\frac{\sigma_w}{u^*}\,\frac{z}{L}\right)^{1/4} = 1.28\left(1 - 0.945\,\frac{\sigma_w}{u^*}\,\frac{z}{L}\right)^{1/4}$$

where $u^* = (\tau/\rho)^{1/2}$, σ_w is the standard deviation of the vertical velocity, z is the height and L is the Obukhov scale length. The constants a, A, k and d_n are the entrainment constant, the turbulence decay constant, von Karman's constant and the layer depth derived from the theory. Of these, a and A are universal constants and not empirically determined for the boundary layer. Thus the turbulence needed for the plume model of convection, which resides above these layers and reaches to the inversion, is determined by the shear stress and the heat flux in the surface layers. This model applies to convection in cool air over a warm sea. The whole field is now determined except for the temperature of the air relative to the water, and the wind, which need a further parameter describing sea surface roughness. As a first step towards describing a surface where roughness elements

  10. Profile control studies for JET optimised shear regime

    Energy Technology Data Exchange (ETDEWEB)

    Litaudon, X.; Becoulet, A.; Eriksson, L.G.; Fuchs, V.; Huysmans, G.; How, J.; Moreau, D.; Rochard, F.; Tresset, G.; Zwingmann, W. [Association Euratom-CEA, CEA/Cadarache, Dept. de Recherches sur la Fusion Controlee, DRFC, 13 - Saint-Paul-lez-Durance (France); Bayetti, P.; Joffrin, E.; Maget, P.; Mayorat, M.L.; Mazon, D.; Sarazin, Y. [JET Abingdon, Oxfordshire (United Kingdom); Voitsekhovitch, I. [Universite de Provence, LPIIM, Aix-Marseille 1, 13 (France)

    2000-03-01

    This report summarises the profile control studies, i.e. preparation and analysis of JET Optimised Shear plasmas, carried out during the year 1999 within the framework of the Task-Agreement (RF/CEA/02) between JET and the Association Euratom-CEA/Cadarache. We report on our participation in the preparation of the JET Optimised Shear experiments together with their comprehensive analyses and the modelling. Emphasis is put on the various aspects of pressure profile control (core and edge pressure) together with detailed studies of current profile control by non-inductive means, in the prospects of achieving steady, high performance, Optimised Shear plasmas. (authors)

  11. The Backscattering Phase Function for a Sphere with a Two-Scale Relief of Rough Surface

    Science.gov (United States)

    Klass, E. V.

    2017-12-01

    The backscattering of light from spherical surfaces characterized by one and two-scale roughness reliefs has been investigated. The analysis is performed using the three-dimensional Monte-Carlo program POKS-RG (geometrical-optics approximation), which makes it possible to take into account the roughness of objects under study by introducing local geometries of different levels. The geometric module of the program is aimed at describing objects by equations of second-order surfaces. One-scale roughness is set as an ensemble of geometric figures (convex or concave halves of ellipsoids or cones). The two-scale roughness is modeled by convex halves of ellipsoids, with surface containing ellipsoidal pores. It is shown that a spherical surface with one-scale convex inhomogeneities has a flatter backscattering phase function than a surface with concave inhomogeneities (pores). For a sphere with two-scale roughness, the dependence of the backscattering intensity is found to be determined mostly by the lower-level inhomogeneities. The influence of roughness on the dependence of the backscattering from different spatial regions of spherical surface is analyzed.

  12. An OCD perspective of line edge and line width roughness metrology

    Science.gov (United States)

    Bonam, Ravi; Muthinti, Raja; Breton, Mary; Liu, Chi-Chun; Sieg, Stuart; Seshadri, Indira; Saulnier, Nicole; Shearer, Jeffrey; Patlolla, Raghuveer; Huang, Huai

    2017-03-01

    Metrology of nanoscale patterns poses multiple challenges that range from measurement noise and metrology errors to probe size. Optical metrology has gained significance in the semiconductor industry due to its fast turnaround and reliable accuracy, particularly for monitoring in-line process variations. Apart from critical dimension and film thickness, multiple parameters can be extracted from optical metrology models [3]; sidewall angles, material compositions etc. can also be modeled to acceptable accuracy. Line edge and line width roughness are among the most sought-after metrology targets after critical dimension and its uniformity, although their measurement by optical metrology has seen little development; scanning electron microscopy is still the standard technique for assessing line edge and line width roughness. In this work we present an assessment of optical metrology and its ability to model roughness from a set of structures with intentional jogs that simulate both line edge and line width roughness at multiple amplitudes and frequencies. We also present multiple models to represent roughness and extract the relevant parameters from optical metrology. Another critical aspect of an optical metrology setup is correlation of the measurement to a complementary technique to calibrate the models; in this work we also present a comparison of the extracted and measured roughness parameters under varying image processing conditions on a commercially available CD-SEM tool.

  13. Surface roughness of composite resin veneer after application of herbal and non-herbal toothpaste

    Science.gov (United States)

    Nuraini, S.; Herda, E.; Irawan, B.

    2017-08-01

    The aim of this study was to find out the surface roughness of composite resin veneer after brushing. In this study, 24 specimens of composite resin veneer are divided into three subgroups: brushed without toothpaste, brushed with non-herbal toothpaste, and brushed with herbal toothpaste. Brushing was performed for one set of 5,000 strokes and continued for a second set of 5,000 strokes. Roughness of composite resin veneer was determined using a Surface Roughness Tester. The results were statistically analyzed using Kruskal-Wallis nonparametric test and Post Hoc Mann-Whitney. The results indicate that the highest difference among the Ra values occurred within the subgroup that was brushed with the herbal toothpaste. In conclusion, the herbal toothpaste produced a rougher surface on composite resin veneer compared to non-herbal toothpaste.

  14. Surface roughness evaluation on mandrels and mirror shells for future X-ray telescopes

    Science.gov (United States)

    Sironi, Giorgia; Spiga, D.

    2008-07-01

    Several X-ray missions that will operate in the near future, in particular SIMBOL-X, e-Rosita, Con-X/HXT, SVOM/XIAO and Polar-X, will be based on focusing optics manufactured by means of the Ni electroforming replication technique. This production method has already been successfully exploited for SAX, XMM and Swift-XRT. Optical surfaces for X-ray reflection have to be as smooth as possible, also at high spatial frequencies. Hence it will be crucial to keep microroughness under control in order to reduce scattering effects: a high rms microroughness would degrade the angular resolution and cause loss of effective area. Stringent requirements therefore have to be fixed for mirror shell surface roughness, depending on the specific energy range investigated, and roughness evolution has to be carefully monitored during the subsequent steps of mirror shell production. This means studying the roughness evolution along the chain from mandrel to mirror shell to multilayer deposition, as well as the degradation of mandrel roughness after repeated replicas. Such a study allows one to infer which phases of production are mainly responsible for the roughness growth, and could help to find solutions optimizing the processes involved. The study presented here is carried out in the context of the technological consolidation related to SIMBOL-X, along with a systematic metrological study of mandrels and mirror shells. To monitor the roughness increase following each replica, a multi-instrumental approach was adopted: microprofiles were analysed by means of their power spectral density (PSD) over spatial wavelengths from 1000 μm down to 0.01 μm. This enables the direct comparison of roughness data taken with instruments characterized by different operative frequency ranges, in particular optical interferometers and atomic force microscopes. The analysis performed allowed us to set realistic specifications on the mandrel roughness to be achieved, and to suggest a limit for the

  15. Normal tissue dose-effect models in biological dose optimisation

    International Nuclear Information System (INIS)

    Alber, M.

    2008-01-01

    Sophisticated radiotherapy techniques, like intensity-modulated radiotherapy with photons and protons, rely on numerical dose optimisation. Evaluating normal tissue dose distributions that deviate significantly from common clinical routine, and expressing the desirable properties of a dose distribution mathematically, are both difficult. In essence, a dose evaluation model for normal tissues has to express the tissue-specific volume effect. A formalism of local dose effect measures is presented, which can be applied to serial and parallel responding tissues as well as to target volumes and physical dose penalties. These models allow a transparent description of the volume effect and efficient control over the optimum dose distribution. They can be linked to normal tissue complication probability models and the equivalent uniform dose concept. In clinical applications, they provide a means to standardize normal tissue doses in the face of inevitable anatomical differences between patients and a vastly increased freedom to shape the dose, without being overly limiting like sets of dose-volume constraints. (orig.)

  16. Load optimised piezoelectric generator for powering battery-less TPMS

    Science.gov (United States)

    Blažević, D.; Kamenar, E.; Zelenika, S.

    2013-05-01

    The design of a piezoelectric device aimed at harvesting the kinetic energy of random vibrations on a vehicle's wheel is presented. The harvester is optimised for powering a Tire Pressure Monitoring System (TPMS). On-road experiments were performed in order to measure the frequencies and amplitudes of the wheels' vibrations; it was determined that the highest amplitudes occur in an aperiodic manner. Initial tests of the battery-less TPMS were performed under laboratory conditions, where tuning and system set-up optimisation were achieved. The energy obtained from the piezoelectric bimorph is managed by control electronics that convert the AC voltage to DC and condition the output voltage to make it compatible with the load (i.e. the sensor electronics and transmitter). The control electronics also manage the sleep/measure/transmit cycles so that the harvested energy is used efficiently. The system was finally tested in real on-road conditions, successfully powering the pressure sensor and transmitting the data to a receiver in the car cockpit.

  17. Structural and operational optimisation of distributed energy systems

    International Nuclear Information System (INIS)

    Soederman, Jarmo; Pettersson, Frank

    2006-01-01

    A distributed energy system (DES) is a system comprising a set of energy suppliers and consumers, district heating pipelines, heat storage facilities and power transmission lines in a region. Distributed energy production has gained an increasingly important role in the energy market. In this paper, a model for the structural and operational optimisation of DES is presented. In the model, production and consumption of electrical power and heat, power transmissions, transport of fuels to the production plants, transport of water in the district heating pipelines and storage of heat are taken into account. The problem is formulated as a mixed integer linear programming (MILP) problem where the objective is to minimise the overall cost of the DES, i.e., the sum of the running costs for the included operations and the annualised investment costs of the included equipment. An illustrative example is presented for a complex DES situation. The solution gives the DES structure, i.e., which production units, heat transport lines and storages should be built and where they should be located, together with design parameters for plants and pipelines. The model enables the involved parties (suppliers, consumers, designers and authorities) to form a joint view of different situations as a basis for decision making. A tool based on the model has been built, which can be used in design, in creating guidelines for regional energy policies and for versatile what-if analyses.
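
    The structure of such a MILP can be illustrated compactly. Below is a minimal sketch in Python with the PuLP library, assuming a toy problem with two hypothetical heat plants and a single demand period; the plant data and variable names are invented for illustration, not taken from the paper.

        # Minimal MILP sketch of a distributed-energy-system design problem
        # in the spirit described above (PuLP; plant data are invented).
        import pulp

        plants = {"chp": {"invest": 900, "run": 32, "cap": 60},
                  "boiler": {"invest": 250, "run": 45, "cap": 80}}
        heat_demand = 70  # MW, single representative period for brevity

        prob = pulp.LpProblem("des_design", pulp.LpMinimize)
        build = pulp.LpVariable.dicts("build", plants, cat="Binary")
        heat = pulp.LpVariable.dicts("heat", plants, lowBound=0)

        # Objective: annualised investment plus running cost.
        prob += pulp.lpSum(plants[p]["invest"] * build[p] +
                           plants[p]["run"] * heat[p] for p in plants)

        # A plant can only produce if it is built, up to its capacity.
        for p in plants:
            prob += heat[p] <= plants[p]["cap"] * build[p]

        # Cover the heat demand.
        prob += pulp.lpSum(heat[p] for p in plants) >= heat_demand

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for p in plants:
            print(p, int(build[p].value()), heat[p].value())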

  18. Contribution to the improvement of surface quality by optimising ...

    African Journals Online (AJOL)

    ... surface roughness of machined parts in carbon steel C45 is obtained when the cutting speed is 180 m ... study on stainless steel for optimal setting of machining ... milling.

  19. Inner-outer interactions in a rough wall turbulent boundary layer over hemispherical roughness using PIV

    Science.gov (United States)

    Pathikonda, Gokul; Clark, Caitlyn; Christensen, Kenneth T.

    2017-11-01

    Inner-outer interactions in a rough-wall boundary layer were investigated using high-frame-rate PIV measurements in a refractive-index-matched (RIM) facility. Flows over a canonical smooth wall and over hexagonally packed hemispherical roughness under transitionally rough flow conditions (Reτ ≈ 1500) were measured using a dual-camera PIV system with different fields of view (FOVs) operating simultaneously. The large FOV captures the large scales and boundary layer parameters, while the small FOV captures the small scales very close to the wall with high spatial (≈7y*) and temporal (≈2.5t*) resolutions. Conditional metrics were formulated to investigate these scale interactions in a spatio-temporal sense using the PIV data. The observations complement the picture of the interaction structure obtained via hot-wire experiments and DNS in previous studies of both smooth- and rough-wall flows, with a strong correlation between the large scales and the small-scale energies indicative of amplitude modulation interactions. Additionally, frequency and scale modulations were investigated, with limited success. These experiments highlight the similarities and differences in these interactions between smooth- and rough-wall flows.

  20. Dewetting of thin polymer film on rough substrate: II. Experiment

    International Nuclear Information System (INIS)

    Volodin, Pylyp; Kondyurin, Alexey

    2008-01-01

    The theory of the dewetting process developed for a model of substrate-film interaction forces was examined by an experimental investigation of the dewetting of thin polystyrene (PS) films on chemically etched silicon substrates. Depending on the PS film thickness and the silicon roughness, various dewetting regimes were observed: (i) if the wavelength of the substrate roughness is much larger than the critical spinodal wavelength of the film, spinodal dewetting of the film is observed; (ii) if the wavelength of the substrate roughness is smaller than the critical wavelength of the film and the substrate roughness is large in comparison with the film thickness, dewetting due to substrate roughness is observed and the dewetted film patterns repeat the rough substrate structure; (iii) if the wavelength of the substrate roughness is smaller than the critical wavelength of the film and the substrate roughness is small in comparison with the film thickness, spinodal dewetting proceeds.

  1. Sub-Patch Roughness in Earthquake Rupture Investigations

    KAUST Repository

    Zielke, Olaf; Mai, Paul Martin

    2016-01-01

    Fault geometric complexities exhibit fractal characteristics over a wide range of spatial scales (<µm to >km) and strongly affect the rupture process at corresponding scales. Numerical rupture simulations provide a framework to quantitatively investigate the relationship between a fault's roughness and its seismic characteristics. Fault discretization however introduces an artificial lower limit to roughness: individual fault patches are planar, and sub-patch roughness (roughness at spatial scales below the fault-patch size) is not incorporated. Does neglecting sub-patch roughness measurably affect the outcome of earthquake rupture simulations? We approach this question with a numerical parameter space investigation and demonstrate that sub-patch roughness significantly modifies the slip-strain relationship, a fundamental aspect of dislocation theory. Faults with sub-patch roughness induce less strain than their planar-fault equivalents at distances beyond the length of a slipping fault. We further provide regression functions that characterize the stochastic effect of sub-patch roughness.

  2. Validation of a large-scale audit technique for CT dose optimisation

    International Nuclear Information System (INIS)

    Wood, T. J.; Davis, A. W.; Moore, C. S.; Beavis, A. W.; Saunderson, J. R.

    2008-01-01

    The expansion and increasing availability of computed tomography (CT) imaging means that there is a greater need for the development of efficient optimisation strategies that are able to inform clinical practice, without placing a significant burden on limited departmental resources. One of the most fundamental aspects to any optimisation programme is the collection of patient dose information, which can be compared with appropriate diagnostic reference levels. This study has investigated the implementation of a large-scale audit technique, which utilises data that already exist in the radiology information system, to determine typical doses for a range of examinations on four CT scanners. This method has been validated against what is considered the 'gold standard' technique for patient dose audits, and it has been demonstrated that results equivalent to the 'standard-sized patient' can be inferred from this much larger data set. This is particularly valuable where CT optimisation is concerned as it is considered a 'high dose' technique, and hence close monitoring of patient dose is particularly important. (authors)

  3. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation

    Directory of Open Access Journals (Sweden)

    Andrew J. Capel

    2017-01-01

    Full Text Available Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis.

  4. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation

    Science.gov (United States)

    Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D

    2017-01-01

    Additive manufacturing or ‘3D printing’ is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis. PMID:28228852

  5. 3D printed fluidics with embedded analytic functionality for automated reaction optimisation.

    Science.gov (United States)

    Capel, Andrew J; Wright, Andrew; Harding, Matthew J; Weaver, George W; Li, Yuqi; Harris, Russell A; Edmondson, Steve; Goodridge, Ruth D; Christie, Steven D R

    2017-01-01

    Additive manufacturing or '3D printing' is being developed as a novel manufacturing process for the production of bespoke micro- and milliscale fluidic devices. When coupled with online monitoring and optimisation software, this offers an advanced, customised method for performing automated chemical synthesis. This paper reports the use of two additive manufacturing processes, stereolithography and selective laser melting, to create multifunctional fluidic devices with embedded reaction monitoring capability. The selectively laser melted parts are the first published examples of multifunctional 3D printed metal fluidic devices. These devices allow high temperature and pressure chemistry to be performed in solvent systems destructive to the majority of devices manufactured via stereolithography, polymer jetting and fused deposition modelling processes previously utilised for this application. These devices were integrated with commercially available flow chemistry, chromatographic and spectroscopic analysis equipment, allowing automated online and inline optimisation of the reaction medium. This set-up allowed the optimisation of two reactions, a ketone functional group interconversion and a fused polycyclic heterocycle formation, via spectroscopic and chromatographic analysis.

  6. Dose to population as a metric in the design of optimised exposure control in digital mammography

    International Nuclear Information System (INIS)

    Klausz, R.; Shramchenko, N.

    2005-01-01

    This paper describes a method for automatic optimisation of parameters (AOP) in digital mammography systems. Using a model of the image chain, the contrast-to-noise ratio (CNR) and the average glandular dose (AGD) are computed for the possible X-ray parameters and breast types. The optimisation process consists of determining the operating points providing the lowest possible AGD for each CNR level and breast type. The proposed metric for the dose used in the design of an AOP mode is the resulting dose to the population, computed by averaging the AGD values over the distribution of breast types in the population. This method has been applied to the automatic exposure control of new digital mammography equipment: breast thickness and composition are estimated from a low-dose pre-exposure and used to index tables containing sets of optimised operating points. The resulting average dose to the population ranges from a level comparable to state-of-the-art screen/film mammography down to a reduction by a factor of two. Using this method, both CNR and dose are kept under control for all breast types, taking into consideration both individual and collective risk. (authors)
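
    The selection logic described above, choosing for each breast type the operating point with the lowest dose that still meets a contrast target, can be sketched as a simple grid search. In the sketch below, cnr() and agd() are toy placeholder models standing in for the paper's image-chain model, and all parameter values are invented.

        # Sketch of the AOP idea: grid over X-ray techniques, then for each
        # breast type keep the lowest-dose technique meeting a CNR target.
        # cnr() and agd() are placeholder models, not the vendor's image-chain model.
        import itertools

        kvps = [26, 28, 30, 32]
        mas_values = [40, 63, 100, 160]

        def cnr(kvp, mas, thickness_cm):   # toy monotone model (assumption)
            return mas**0.5 * (40 - kvp) / thickness_cm

        def agd(kvp, mas, thickness_cm):   # toy dose model (assumption)
            return 0.01 * mas * kvp / thickness_cm

        def operating_point(thickness_cm, cnr_target):
            feasible = [(agd(k, m, thickness_cm), k, m)
                        for k, m in itertools.product(kvps, mas_values)
                        if cnr(k, m, thickness_cm) >= cnr_target]
            return min(feasible, default=None)  # lowest AGD meeting the target

        print(operating_point(thickness_cm=5.0, cnr_target=4.0))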

  7. Weight Optimisation of Steel Monopile Foundations for Offshore Windfarms

    DEFF Research Database (Denmark)

    Fog Gjersøe, Nils; Bouvin Pedersen, Erik; Kristensen, Brian

    2015-01-01

    The potential for mass reduction of monopiles in offshore windfarms using current design practice is investigated. Optimisation by sensitivity analysis is carried out for the following important parameters: wall thickness distribution between tower and monopile, soil stiffness, damping ratio...

  8. Protection against natural radiation: Optimisation and decision exercises

    International Nuclear Information System (INIS)

    O'Riordan, M.C.

    1984-02-01

    Six easy exercises are presented in which cost-benefit analysis is used to optimise protection against natural radiation or to decide whether protection is appropriate. The exercises are illustrative only and do not commit the Board. (author)

  9. Optimisation of wheat-sprouted soybean flour bread using response ...

    African Journals Online (AJOL)

    2009-11-16

    Full Length Research Paper, 16 November 2009. Victoria A. Jideani and Felix C. Onwubali, Department of Food Technology, Cape Peninsula University of Technology, P.O. Box 652, Cape Town 8000, South Africa.

  10. Optimised intake stroke analysis for flat and dome head pistons ...

    African Journals Online (AJOL)

    Optimised intake stroke analysis for flat and dome head pistons ... in understanding the performance characteristics obtained between flat head and dome head pistons in engine design.

  11. Distributed optimisation problem with communication delay and external disturbance

    Science.gov (United States)

    Tran, Ngoc-Tu; Xiao, Jiang-Wen; Wang, Yan-Wu; Yang, Wu

    2017-12-01

    This paper investigates the distributed optimisation problem for multi-agent systems (MASs) with the simultaneous presence of external disturbance and communication delay. To solve this problem, a two-step design scheme is introduced. In the first step, based on the internal model principle, an internal model term is constructed to compensate for the disturbance asymptotically. In the second step, a distributed optimisation algorithm is designed to solve the distributed optimisation problem for MASs in the simultaneous presence of disturbance and communication delay. Moreover, in the proposed algorithm, each agent interacts with its neighbours through the connected topology, and the delay occurs during the information exchange. By utilising a Lyapunov-Krasovskii functional, delay-dependent conditions are derived for both slowly and fast time-varying delays to ensure the convergence of the algorithm to the optimal solution of the optimisation problem. Several numerical simulation examples are provided to illustrate the effectiveness of the theoretical results.

  12. Shape optimisation and performance analysis of flapping wings

    KAUST Repository

    Ghommem, Mehdi; Collier, Nathan; Niemi, Antti; Calo, Victor M.

    2012-01-01

    To assess whether the optimised shapes produce efficient flapping flights, the wake pattern and its vorticity strength are examined. The work described in this paper should facilitate better guidance for the shape design of engineered flying systems.

  13. Share-of-Surplus Product Line Optimisation with Price Levels

    Directory of Open Access Journals (Sweden)

    X. G. Luo

    2014-01-01

    Full Text Available Kraus and Yano (2003) established the share-of-surplus product line optimisation model and developed a heuristic procedure for this nonlinear mixed-integer optimisation model. In their model, the price of a product is defined as a continuous decision variable. However, because product line optimisation is a planning process in the early stage of product development, pricing decisions are usually not very precise. In this research, a nonlinear integer programming share-of-surplus product line optimisation model that allows the selection of candidate price levels for products is established. The model is further transformed into an equivalent linear mixed-integer optimisation model by applying linearisation techniques. Experimental results in different market scenarios show that the computation time of the transformed model is much less than that of the original model.

  14. Optimising a fall out dust monitoring sampling programme at a ...

    African Journals Online (AJOL)

    Key words: Fall out dust monitoring, cement plant, optimising, air pollution sampling, fall out dust sampler locations.

  15. Issues with performance measures for dynamic multi-objective optimisation

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available. Symposium on Computational Intelligence in Dynamic and Uncertain Environments (CIDUE), Mexico, 20-23 June 2013. Mardé Helbig, CSIR Meraka Institute, Brummeria, South Africa.

  16. Optimisation Study on the Production of Anaerobic Digestate ...

    African Journals Online (AJOL)

    ... optimise the production of ADC from organic fractions of domestic wastes and the effects of ADC amendments on soil ... (22%), cooked meat (9%), lettuce (11%), carrots (3%), potato (44%) ... seed was obtained from a mesophilic anaerobic

  17. Roughness characterization of the galling of metals

    Science.gov (United States)

    Hubert, C.; Marteau, J.; Deltombe, R.; Chen, Y. M.; Bigerelle, M.

    2014-09-01

    Several kinds of tests exist to characterize the galling of metals, such as that specified in ASTM Standard G98. While the testing procedure is accurate and robust, the analysis of the specimen surfaces (area = 1.2 cm²) for the determination of the critical pressure of galling remains subject to operator judgement. Based on analyses of the surface topography, we propose a methodology to express the probability of galling as a function of the macroscopic pressure load. After performing galling tests on 304L stainless steel, a two-step segmentation of the Sq parameter (root mean square of surface amplitude) computed from local roughness maps (100 μm × 100 μm) enables us to distinguish two tribological processes: the first represents abrasive wear (erosion) and the second adhesive wear (galling). The total areas of both regions are highly relevant for quantifying the galling and erosion processes. A one-parameter phenomenological model is then proposed to objectively determine the evolution of the non-galled relative area A_e versus the pressure load P with high accuracy: A_e = 100/(1 + aP²), with a = (0.54 ± 0.07) × 10⁻³ MPa⁻² and R² = 0.98. From this model, the critical pressure of galling is found to be 43 MPa. The S5V roughness parameter (the five deepest valleys in the galled region's surface) is the most relevant roughness parameter for quantifying damage in the galling region. The depths of the significant valleys increase from 10 μm to 250 μm as the pressure increases from 11 MPa to 350 MPa, following the power law S5V = 4.2 P^0.75, with R² = 0.93.
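
    As a worked illustration of the phenomenological model, the sketch below fits A_e = 100/(1 + aP²) to a set of pressure/area pairs with SciPy and recovers the critical galling pressure as the load at which A_e drops to 50%, i.e. P_crit = sqrt(1/a). The data points are invented but chosen to be consistent with the reported value of a.

        # Fitting the one-parameter galling model A_e = 100 / (1 + a P^2)
        # with scipy; the pressure/area pairs below are illustrative, not the
        # paper's data.
        import numpy as np
        from scipy.optimize import curve_fit

        def non_galled_area(P, a):
            return 100.0 / (1.0 + a * P**2)

        P = np.array([11, 43, 90, 180, 350], dtype=float)   # MPa (invented)
        Ae = np.array([99.9, 50.0, 18.5, 5.4, 1.5])         # %   (invented)

        (a_fit,), _ = curve_fit(non_galled_area, P, Ae, p0=[1e-3])
        print(f"a = {a_fit:.2e} MPa^-2")
        # Critical galling pressure: load at which A_e drops to 50%.
        print("P_crit =", np.sqrt(1.0 / a_fit), "MPa")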

  18. Algorithm for optimisation of paediatric chest radiography

    International Nuclear Information System (INIS)

    Kostova-Lefterova, D.

    2016-01-01

    The purpose of this work was to assess the current practice and patient doses in paediatric chest radiography in a large university hospital, where the X-ray unit is used in the paediatric department for respiratory diseases, and to recommend and apply optimised protocols that reduce patient dose while maintaining the diagnostic quality of the X-ray images. The practice of two different radiographers was studied; the results were compared with the existing practice in paediatric chest radiography and opportunities for optimisation were identified in order to reduce patient doses. A methodology was developed for the optimisation of the X-ray examinations by grouping children into age groups, or according to other appropriate indications, and creating an algorithm for the proper selection of exposure parameters for each group. The algorithm for the optimisation of paediatric chest radiography reduced patient doses (PKA, organ dose, effective dose) between 1.5 and 6 times for the different age groups, the average glandular dose up to 10 times and the lung dose between 2 and 5 times. The resulting X-ray images were of good diagnostic quality. The subjectivity in the choice of exposure parameters was reduced and standardisation was achieved in the work of the radiographers. The roles of the radiologist, the medical physicist and the radiographer in the optimisation process were shown, and the value of teamwork in reducing patient doses while keeping adequate image quality was demonstrated. Key words: Chest Radiography. Paediatric Radiography. Optimization. Radiation Exposure. Radiation Protection.

  19. Optimising preterm nutrition: present and future

    LENUS (Irish Health Repository)

    Brennan, Ann-Marie

    2016-04-01

    The goal of preterm nutrition, achieving growth and body composition approximating those of the fetus of the same postmenstrual age, is difficult to meet. Current nutrition recommendations depend largely on expert opinion, due to a lack of evidence, and are primarily birth-weight based, with no consideration given to gestational age and/or the need for catch-up growth. Assessment of growth is based predominantly on anthropometry, which gives insufficient attention to the quality of growth. The present paper provides a review of the current literature on the nutritional management and assessment of growth in preterm infants. It explores several approaches that may be required to optimise nutrient intakes in preterm infants, such as personalising nutritional support, collecting nutrient intake data in real time and measuring body composition. In clinical practice, the response to inappropriate nutrient intakes is delayed, as the effects of under- or overnutrition are not immediate and there is limited nutritional feedback at the cot-side. The accurate and non-invasive measurement of infant body composition, assessed by means of air displacement plethysmography, has been shown to be useful in assessing the quality of growth. The development and implementation of personalised, responsive nutritional management of preterm infants, utilising real-time nutrient intake data collection together with ongoing nutritional assessments that include measurement of body composition, is required to help meet the individual needs of preterm infants.

  20. Sequential projection pursuit for optimised vibration-based damage detection in an experimental wind turbine blade

    Science.gov (United States)

    Hoell, Simon; Omenzetter, Piotr

    2018-02-01

    To advance the concept of smart structures in large systems, such as wind turbines (WTs), it is desirable to be able to detect structural damage early while using minimal instrumentation. Data-driven vibration-based damage detection methods can be competitive in that respect because global vibrational responses encompass the entire structure. Multivariate damage-sensitive features (DSFs) extracted from acceleration responses enable changes in a structure to be detected via statistical methods. However, even though such DSFs contain information about the structural state, they may not be optimised for the damage detection task. This paper addresses this shortcoming by exploring a DSF projection technique specialised for statistical structural damage detection. High-dimensional initial DSFs are projected onto a low-dimensional space for improved damage detection performance and a simultaneous reduction of the computational burden. The technique is based on sequential projection pursuit, where the projection vectors are optimised one by one using an advanced evolutionary strategy. The approach is applied to laboratory experiments with a small-scale WT blade under wind-like excitations. Autocorrelation function coefficients calculated from acceleration signals are employed as DSFs. The optimal numbers of projection vectors are identified with the help of a fast forward selection procedure. To benchmark the proposed method, selections of the original DSFs as well as principal component analysis scores derived from these features are additionally investigated. The optimised DSFs are tested for damage detection on previously unseen data from the healthy state and a wide range of damage scenarios. It is demonstrated that using selected subsets of the initial and transformed DSFs improves damage detectability compared to the full set of features. Furthermore, superior results can be achieved by projecting the autocorrelation coefficients onto just a single optimised projection vector.
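
    The core idea, optimising a projection vector so that projected healthy and damaged features separate as cleanly as possible, can be sketched with a plain (1+1) evolution strategy; the paper uses a more advanced ES and several sequential vectors. The toy DSFs and the separation index below are illustrative assumptions.

        # Simplified sketch of projection pursuit for damage detection: one
        # projection vector optimised by a plain (1+1) evolution strategy to
        # maximise the separation of projected healthy vs. damaged features.
        import numpy as np

        rng = np.random.default_rng(0)

        def separation(w, healthy, damaged):
            w = w / np.linalg.norm(w)
            h, d = healthy @ w, damaged @ w
            pooled = np.sqrt(0.5 * (h.var() + d.var())) + 1e-12
            return abs(h.mean() - d.mean()) / pooled   # detectability index

        def optimise_projection(healthy, damaged, iters=2000, sigma=0.1):
            w = rng.standard_normal(healthy.shape[1])
            best = separation(w, healthy, damaged)
            for _ in range(iters):
                cand = w + sigma * rng.standard_normal(w.size)
                score = separation(cand, healthy, damaged)
                if score > best:                        # greedy (1+1)-ES step
                    w, best = cand, score
            return w / np.linalg.norm(w), best

        # Toy DSFs: 20-dimensional autocorrelation-like features (invented).
        healthy = rng.standard_normal((200, 20))
        damaged = rng.standard_normal((200, 20)) + 0.3 * np.ones(20)
        w, score = optimise_projection(healthy, damaged)
        print("separation:", round(score, 2))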

  1. Smart border initiative: a Franco-German cross-border energy optimisation project

    International Nuclear Information System (INIS)

    2017-01-01

    Integrated and optimised local energy systems will play a key role in achieving the energy transition objectives set by France and Germany, in line with the Energy Union's goals, and contribute to ensuring a secure, affordable and climate-friendly energy supply in the EU. In order to capitalise on French and German expertise and experience in developing such systems, and to continue strengthening cross-border cooperation towards a fully integrated European energy market, both governments have decided to launch a common initiative to identify and structure a cross-border energy optimisation project. Tilia and Dena have undertaken this mission to jointly develop the Smart Border Initiative (SBI). The SBI will, on the one hand, connect policies designed by France and Germany to support their cities and territories in their energy transition strategies and European market integration. It is currently a paradox that, although more balanced and resilient energy systems are built bottom-up at the local level, borders remain an obstacle to this local integration, in spite of the numerous complementarities observed in cross-border regions and of their specific needs, for instance in terms of smart mobility. The SBI project aims at enabling neighbouring European regions separated by a border to jointly build optimised local energy systems and jointly develop their local economies following an integrated, sustainable and low-carbon model. On the other hand, this showcase project will initiate a new stage in EU electricity market integration by complementing high-voltage interconnections with local, low-voltage integration at DSO level, opening new optimisation possibilities in managing the electricity balance and enabling DSOs to jointly overcome some of the current challenges, notably the increased share of renewable energy (RE), while ensuring Europe's security of supply.

  2. Intelligent Support for a Computer Aided Design Optimisation Cycle

    OpenAIRE

    B. Dolšak; M. Novak; J. Kaljun

    2006-01-01

    It is becoming more and more evident that adding intelligence to existing computer aids, such as computer aided design systems, can lead to significant improvements in the effective and reliable performance of various engineering tasks, including design optimisation. This paper presents three different intelligent modules to be applied within a computer aided design optimisation cycle to enable more intelligent and less experience-dependent design performance.

  3. A supportive architecture for CFD-based design optimisation

    Science.gov (United States)

    Li, Ni; Su, Zeya; Bi, Zhuming; Tian, Chao; Ren, Zhiming; Gong, Guanghong

    2014-03-01

    Multi-disciplinary design optimisation (MDO) is one of the critical methodologies for the implementation of enterprise systems (ES). MDO requiring the analysis of fluid dynamics raises a special challenge due to its extremely intensive computation. The rapid development of computational fluid dynamics (CFD) techniques has led to a rise in their application in various fields; especially for the exterior design of vehicles, CFD has become one of the three main design tools alongside analytical approaches and wind tunnel experiments. CFD-based design optimisation is an effective way to achieve the desired performance under the given constraints. However, due to the complexity of CFD, integrating CFD analysis into an intelligent optimisation algorithm is not straightforward, and it is a challenge to solve a CFD-based design problem, which usually has high dimensionality and multiple objectives and constraints. It is desirable to have an integrated architecture for CFD-based design optimisation, yet our review of existing work found that very few researchers have studied assistive tools to facilitate CFD-based design optimisation. In this paper, a multi-layer architecture and a general procedure are proposed to integrate different CFD toolsets with intelligent optimisation algorithms, parallel computing techniques and other techniques for efficient computation. In the proposed architecture, the integration is performed either at the code level or the data level to fully utilise the capabilities of the different assistive tools. Two intelligent algorithms are developed and embedded with parallel computing. These algorithms, together with the supportive architecture, lay a solid foundation for various applications of CFD-based design optimisation. To illustrate the effectiveness of the proposed architecture and algorithms, case studies on the aerodynamic shape design of a hypersonic cruising vehicle are provided, and the result has shown that the proposed architecture

  4. Multiobjective optimisation of bogie suspension to boost speed on curves

    Science.gov (United States)

    Milad Mousavi-Bideleh, Seyed; Berbyuk, Viktor

    2016-01-01

    To improve safety and the maximum admissible speed in different operational scenarios, multiobjective optimisation of the bogie suspension components of a one-car railway vehicle model is considered. The vehicle model has 50 degrees of freedom and is developed in the multibody dynamics software SIMPACK. Track shift force, running stability and risk of derailment are selected as safety objective functions. The improved maximum admissible speeds of the vehicle on curves are determined based on track plane accelerations of up to 1.5 m/s2. To reduce the number of design parameters for optimisation and improve the computational efficiency, a global sensitivity analysis is carried out using the multiplicative dimensional reduction method (M-DRM). A multistep optimisation routine based on a genetic algorithm (GA) and MATLAB/SIMPACK co-simulation is executed at three levels. The conventional secondary and primary bogie suspension components are chosen as the design parameters in the first two steps, respectively; the last step focuses on semi-active suspension. The input electrical current to the magnetorheological yaw dampers is optimised to guarantee an appropriate safety level. Semi-active controllers are also applied and their effects on the bogie dynamics are explored. The safety Pareto optimised results are compared with those associated with in-service values. The global sensitivity analysis and the multistep approach significantly reduced the number of design parameters and improved the computational efficiency of the optimisation. Furthermore, using the optimised values of the design parameters makes it possible to run the vehicle up to 13% faster on curves while a satisfactory safety level is guaranteed. The results obtained can be used in Pareto optimisation and active bogie suspension design problems.

  5. A COMPARATIVE STUDY ON MULTI-SWARM OPTIMISATION AND BAT ALGORITHM FOR UNCONSTRAINED NON LINEAR OPTIMISATION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Evans BAIDOO

    2016-12-01

    Full Text Available Swarm intelligence is a branch of study that models a population of interacting swarms or agents with the ability to self-organise. In spite of the large amount of work that has been done in this area, both theoretically and empirically, and the considerable success attained in several aspects, the field is still evolving and at an early stage. An immune system, a cloud of bats or a flock of birds are distinctive examples of swarm systems. In this study, two types of population-based, swarm-intelligence meta-heuristic algorithms, Multi-Swarm Optimization (MSO) and the Bat Algorithm (BA), are set up to find optimal solutions of continuous nonlinear optimisation models. In order to analyse and compare the quality of the solutions and the performance of both algorithms, a series of computational experiments on six commonly used test functions for assessing the accuracy and performance of algorithms in swarm intelligence is used. The computational experiments show that the MSO algorithm appears much superior to BA.
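
    For readers unfamiliar with the second of the two algorithms, a bare-bones bat algorithm on the sphere test function is sketched below. The parameter values (frequency range, loudness, pulse rate) are typical textbook defaults, not the settings used in this study.

        # Bare-bones bat algorithm on a standard test function (sphere), in the
        # spirit of the comparison above; parameter values are typical defaults.
        import numpy as np

        rng = np.random.default_rng(1)

        def sphere(x):
            return np.sum(x**2, axis=-1)

        def bat_algorithm(f, dim=6, n_bats=30, iters=500,
                          fmin=0.0, fmax=2.0, loudness=0.5, pulse_rate=0.5):
            x = rng.uniform(-5, 5, (n_bats, dim))
            v = np.zeros_like(x)
            best = x[np.argmin(f(x))].copy()
            for _ in range(iters):
                freq = fmin + (fmax - fmin) * rng.random((n_bats, 1))
                v += (x - best) * freq                   # pull towards best bat
                cand = x + v
                walk = rng.random(n_bats) > pulse_rate   # occasional local walk
                cand[walk] = best + 0.01 * rng.standard_normal((walk.sum(), dim))
                improve = (f(cand) < f(x)) & (rng.random(n_bats) < loudness)
                x[improve] = cand[improve]
                best = x[np.argmin(f(x))].copy()
            return best, f(best)

        print(bat_algorithm(sphere))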

  6. Urban roughness mapping validation techniques and some first results

    NARCIS (Netherlands)

    Bottema, M; Mestayer, PG

    1998-01-01

    Because of measuring problems related to the evaluation of urban roughness parameters, a new approach using a roughness mapping tool has been tested: evaluation of the roughness length z0 and the zero-plane displacement zd from cadastral databases. Special attention needs to be given to the validation of the

  7. Procedure and applications of combined wheel/rail roughness measurement

    NARCIS (Netherlands)

    Dittrich, M.G.

    2009-01-01

    Wheel-rail roughness is known to be the main excitation source of railway rolling noise. Besides the already standardised method for direct roughness measurement, it is also possible to measure combined wheel-rail roughness from vertical railhead vibration during a train pass-by. This is a different

  8. Use of roughness maps in visualisation of surfaces

    DEFF Research Database (Denmark)

    Seitavuopio, Paulus; Rantanen, Jukka; Yliruusi, Jouko

    2005-01-01

    monohydrate, theophylline anhydrate, sodium chloride and potassium chloride. The roughness determinations were made by a laser profilometer. The new matrix method gives detailed roughness maps, which are able to show local variations in surface roughness values and provide an illustrative picture...

  9. ROMI 4.0: Updated Rough Mill Simulator

    Science.gov (United States)

    Timo Grueneberg; R. Edward Thomas; Urs Buehlmann

    2012-01-01

    In the secondary hardwood industry, rough mills convert hardwood lumber into dimension parts for furniture, cabinets, and other wood products. ROMI 4.0, the US Department of Agriculture Forest Service's ROugh-MIll simulator, is a software package designed to simulate the cut-up of hardwood lumber in rough mills in such a way that a maximum possible component yield...

  10. 7 CFR 868.201 - Definition of rough rice.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Definition of rough rice. 868.201 Section 868.201... FOR CERTAIN AGRICULTURAL COMMODITIES United States Standards for Rough Rice Terms Defined § 868.201 Definition of rough rice. Rice (Oryza sativa L.) which consists of 50 percent or more of paddy kernels (see...

  11. Multi criteria decision making using correlation coefficient under rough neutrosophic environment

    Directory of Open Access Journals (Sweden)

    Surapati Pramanik

    2017-09-01

    Full Text Available In this paper, we define a correlation coefficient measure between any two rough neutrosophic sets. We also prove some of its basic properties. We develop a new multiple attribute group decision making method based on the proposed correlation coefficient measure.

  12. Multi criteria decision making using correlation coefficient under rough neutrosophic environment

    OpenAIRE

    Pramanik, Surapati; Roy, Rumi; Roy, Tapan Kumar; Smarandache, Florentin

    2017-01-01

    In this paper, we define a correlation coefficient measure between any two rough neutrosophic sets. We also prove some of its basic properties. We develop a new multiple attribute group decision making method based on the proposed correlation coefficient measure. An illustrative example of medical diagnosis is solved to demonstrate the applicability and effectiveness of the proposed method.

  13. Rough viscoelastic sliding contact: Theory and experiments

    Science.gov (United States)

    Carbone, G.; Putignano, C.

    2014-03-01

    In this paper, we show how the numerical theory introduced by the authors [Carbone and Putignano, J. Mech. Phys. Solids 61, 1822 (2013), 10.1016/j.jmps.2013.03.005] can be effectively employed to study the contact between viscoelastic rough solids. The huge numerical complexity is successfully handled by employing the adaptive nonuniform mesh developed by the authors in Putignano et al. [J. Mech. Phys. Solids 60, 973 (2012), 10.1016/j.jmps.2012.01.006]. The results mark the importance of accounting for viscoelastic effects to correctly simulate sliding rough contact. In detail, attention is first paid to evaluating the viscoelastic dissipation, i.e., the viscoelastic friction: once the sliding speed and the normal load are fixed, the friction is completely determined. Furthermore, since the methodology employed in this work allows the study of contact between real materials, a comparison between experimental outcomes and numerical predictions of viscoelastic friction is shown; the good agreement seems to validate, at least partially, the presented methodology. Finally, it is shown that viscoelasticity entails not only the dissipative effects outlined above but is also strictly related to the anisotropy of the contact solution. Indeed, a marked anisotropy is present in the contact region, which is stretched in the direction perpendicular to the sliding speed. In the paper, the anisotropy of the deformed surface and of the contact area is investigated and quantified.

  14. Design of optimised backstepping controller for the synchronisation of chaotic Colpitts oscillator using shark smell algorithm

    Science.gov (United States)

    Fouladi, Ehsan; Mojallali, Hamed

    2018-01-01

    In this paper, an adaptive backstepping controller is tuned to synchronise two chaotic Colpitts oscillators in a master-slave configuration. The parameters of the controller are determined using the shark smell optimisation (SSO) algorithm. Numerical results are presented and compared with those of the particle swarm optimisation (PSO) algorithm. Simulation results show better performance in terms of accuracy and convergence for the proposed optimised method compared to a PSO-optimised controller or any non-optimised backstepping controller.

  15. Optimisation of the formulation of a bubble bath by a chemometric approach: market segmentation and optimisation.

    Science.gov (United States)

    Marengo, Emilio; Robotti, Elisa; Gennaro, Maria Carla; Bertetto, Mariella

    2003-03-01

    The optimisation of the formulation of a commercial bubble bath was performed by chemometric analysis of panel test results. A first panel test was performed to choose the best essence among four proposed to the consumers; the chosen essence was used in the revised commercial bubble bath. Afterwards, the effect of changing the amounts of four components (the primary surfactant, the essence, the hydratant and the colouring agent) of the bubble bath was studied by a fractional factorial design. The segmentation of the bubble bath market was performed by a second panel test, in which the consumers were asked to evaluate the samples coming from the experimental design; the results were then treated by principal component analysis. The market had two segments: people preferring a product with a rich formulation and people preferring a poor one. The final target, i.e. the optimisation of the formulation for each segment, was achieved by calculating regression models relating the subjective evaluations given by the panel to the compositions of the samples. The regression models allowed the best formulations for the two segments of the market to be identified.
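
    The regression step, relating the coded factor levels of the design to the panel scores, amounts to an ordinary least-squares fit. The sketch below uses an invented 2^(4-1) fractional factorial design and invented panel scores purely to show the mechanics.

        # Sketch of the chemometric step: fit a linear model relating the four
        # formulation factors to a panel score. Design and scores are invented.
        import numpy as np

        # 2^(4-1) fractional factorial design (coded -1/+1 levels, D = ABC).
        X = np.array([[-1,-1,-1,-1], [ 1,-1,-1, 1], [-1, 1,-1, 1], [ 1, 1,-1,-1],
                      [-1,-1, 1, 1], [ 1,-1, 1,-1], [-1, 1, 1,-1], [ 1, 1, 1, 1]])
        score = np.array([4.1, 5.6, 4.8, 5.0, 6.2, 5.9, 5.1, 7.3])  # panel (invented)

        A = np.hstack([np.ones((8, 1)), X])           # intercept + main effects
        coef, *_ = np.linalg.lstsq(A, score, rcond=None)
        print("intercept:", coef[0].round(2))
        print("effects (surfactant, essence, hydratant, colour):", coef[1:].round(2))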

  16. Optimisation of integrated energy and materials systems

    International Nuclear Information System (INIS)

    Gielen, D.J.; Okken, P.A.

    1994-06-01

    To define cost-effective long-term CO2 reduction strategies, an integrated energy and materials system model for the Netherlands for the period 2000-2040 was developed. The model is based upon the energy system model MARKAL, which configures an optimal mix of technologies to satisfy the specified energy and product/materials service demands. This study concentrates on CO2 emission reduction in the materials system. For this purpose, the energy system model is enlarged with a materials system model including all steps 'from cradle to grave'. The materials system model includes 29 materials, 20 product groups and 30 waste materials; the system is divided into seven types of technologies, and 250 technologies are modeled. The results show that the integrated optimisation of the energy system and the materials system can significantly reduce the emission reduction costs, especially at higher reduction percentages. The reduction is achieved through shifts in materials production and waste handling and through materials substitution in products. Shifts in materials production and waste management seem cost-effective, while the cost-effectiveness of shifts in product composition is sensitive to the cost structure of products. For the building sector, transportation applications and packaging, CO2 policies show a significant impact on prices, and shifts in product composition could occur; for other products, the reduction through materials substitution seems less promising. The impact on materials consumption seems most significant for cement (reduced) and for timber and aluminium (both increased). For steel and plastics, the net effect is balanced, but shifts between applications do occur. The MARKAL approach is feasible for studying integrated energy and materials systems; its advance over other environmental system analysis instruments is much greater insight into the interaction of technologies on a national scale and over time.

  17. An approach to next step device optimisation

    International Nuclear Information System (INIS)

    Salpietro, E.

    2000-01-01

    The requirements for the ITER EDA were to achieve ignition with a good safety margin and a controlled, long inductive burn. These requirements led to a big device, which represented too ambitious a step for the world fusion community to undertake. More realistic objectives for a next step device are to demonstrate the net production of energy with a high energy gain factor (Q) and the high bootstrap current fraction (>60%) required for a fusion power plant (FPP). The Next Step Device (NSD) shall also allow operational flexibility in order to explore a large range of plasma parameters and find the optimum concept for the fusion power plant prototype. These requirements could be too demanding for one single device and could probably be better explored in a strongly integrated world programme. The cost of one or more devices is the decisive factor in choosing the fusion power development programme strategy. The plasma elongation and triangularity have a strong impact on the cost of the device and are limited by the plasma vertical position control issue. The distance between the plasma separatrix and the toroidal field conductor does not vary much between devices: it is determined by the sum of the distance between the first wall and the plasma separatrix and the thickness of the nuclear shield required to protect the toroidal field coil insulation. The thickness of the TF coil is determined by the allowable stresses and the superconductor characteristics. The outer radius of the central solenoid results from an optimisation to provide the magnetic flux needed to inductively drive the plasma. Therefore, in order to achieve the objectives for Q and bootstrap current fraction at minimum cost, the plasma aspect ratio and magnetic field value have to be determined. The paper presents the critical issues for the next device and considers the optimal way to proceed towards the realisation of the fusion power plant.

  18. Optimisation of material discrimination using spectral CT

    International Nuclear Information System (INIS)

    Nik, S.J.; Meyer, J.; Watts, R.

    2010-01-01

    Full text: Spectral computed tomography (CT) using novel X-ray photon counting detectors (PCDs) with energy-resolving capabilities can provide energy-selective images. This extra energy information may allow materials such as iodine and calcium, or water and fat, to be distinguished. PCDs have energy thresholds, enabling the classification of photons into multiple energy bins. The information content of spectral CT images depends on how the photons are grouped together. In this work, a method is presented to optimise energy windows for maximum material discrimination. Given a combination of thicknesses, the reference number of expected photons in each energy bin is computed using the Beer-Lambert equation. A similar calculation is performed for an exhaustive range of thicknesses, and the number of photons in each case is compared to the reference, allowing a statistical map of the uncertainty in the thickness parameters to be constructed. The 63%-confidence region in the two-dimensional thickness space is a representation of how optimal the bins are for material separation. The model is demonstrated with 0.1 mm of iodine and 2.2 mm of calcium using two adjacent bins encompassing the entire energy range. Bins bordering at the iodine k-edge of 33.2 keV are found to be optimal. When compared to two abutted energy bins with equal incident counts as used in the literature (bordering at 54 keV), the thickness uncertainties are reduced from approximately 4% to less than 1%. This approach has been developed for two materials and is expandable to an arbitrary number of materials and bins.
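
    The bin-count calculation at the heart of the method can be sketched directly from the Beer-Lambert law. In the sketch below, the attenuation models and the incident spectrum are crude placeholders (not NIST data), and the bin edges follow the k-edge result quoted above.

        # Sketch of the bin-count model: expected photons per energy bin from the
        # Beer-Lambert law for an iodine/calcium stack.
        import numpy as np

        energies = np.arange(20, 81)                      # keV
        n0 = np.full(energies.size, 1e4)                  # incident photons/keV (toy)

        def mu_iodine(E):   # crude k-edge model at 33.2 keV (assumption)
            return np.where(E < 33.2, 40.0, 160.0) * (30.0 / E)**3

        def mu_calcium(E):  # crude power-law attenuation (assumption)
            return 8.0 * (30.0 / E)**3

        def bin_counts(t_iodine_cm, t_calcium_cm, edges):
            n = n0 * np.exp(-mu_iodine(energies) * t_iodine_cm
                            - mu_calcium(energies) * t_calcium_cm)
            lo, hi = edges
            sel = (energies >= lo) & (energies < hi)
            return n[sel].sum()

        # Two abutted bins bordering at the iodine k-edge, as found optimal above.
        for edges in [(20, 33.2), (33.2, 80)]:
            print(edges, bin_counts(0.01, 0.22, edges))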

  19. Roughness as classicality indicator of a quantum state

    Science.gov (United States)

    Lemos, Humberto C. F.; Almeida, Alexandre C. L.; Amaral, Barbara; Oliveira, Adélcio C.

    2018-03-01

    We define a new quantifier of classicality for a quantum state, the Roughness, which is given by the L2(R2) distance between the Wigner and Husimi functions. We show that the Roughness is bounded and is therefore a useful tool for comparing different quantum states of single bosonic systems. The state classification via the Roughness is not binary but continuous in the interval [0, 1], the state being more classical as the Roughness approaches zero and more quantum as it approaches unity. The Roughness is maximal for Fock states when the number of photons is arbitrarily large, and also for squeezed states at the maximum compression limit. On the other hand, the Roughness approaches its minimum value for thermal states at infinite temperature and, more generally, for infinite-entropy states. The Roughness of a coherent state is slightly below one half, so we may say that it is more a classical state than a quantum one. Another important result is that the Roughness performs well at discriminating both pure and mixed states. Since the Roughness measures the inherent quantumness of a state, we propose another function, the Dynamic Distance Measure (DDM), which is suitable for measuring how quantum a dynamics is. Using the DDM, we studied the quartic oscillator and observed a certain complementarity between dynamics and state: when the dynamics becomes more quantum, the Roughness of the state decreases, while the Roughness grows as the dynamics becomes less quantum.
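
    Numerically, the Roughness can be approximated on a finite phase-space grid from the Wigner and Husimi functions, both of which QuTiP provides. The sketch below computes the plain L2 distance; whether this grid-based estimate matches the paper's normalisation to [0, 1] is not verified here.

        # Numerical sketch of the Roughness idea: L2 distance between the Wigner
        # and Husimi functions of a state, on a finite phase-space grid (QuTiP).
        import numpy as np
        from qutip import coherent, fock, wigner, qfunc

        N = 40                                   # Fock-space truncation
        x = np.linspace(-6, 6, 201)
        dx = x[1] - x[0]

        def roughness(state):
            W = wigner(state, x, x)              # Wigner function on the grid
            Q = qfunc(state, x, x)               # Husimi Q function on the grid
            return np.sqrt(np.sum((W - Q)**2) * dx * dx)

        print("coherent:", roughness(coherent(N, 2.0)))
        print("fock n=5:", roughness(fock(N, 5)))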

  20. Effect of Blade Roughness on Transition and Wind Turbine Performance.

    Energy Technology Data Exchange (ETDEWEB)

    Ehrmann, Robert S. [Texas A & M Univ., College Station, TX (United States); White, E. B. [Texas A & M Univ., College Station, TX (United States)

    2015-09-01

    The real-world effect of accumulated surface roughness on wind-turbine power production is not well understood. To isolate specific blade roughness features and test their effect, field measurements of turbine-blade roughness were made and simulated on a NACA 63(3)-418 airfoil in a wind tunnel. Insect roughness, paint chips and erosion were characterized and then manufactured. In the tests, these roughness configurations were recreated as distributed roughness, a forward-facing step and an eroded leading edge. Distributed roughness was tested at three heights and five densities. The chord Reynolds number was varied between 0.8 × 10^6 and 4.8 × 10^6. Measurements included lift, drag, pitching moment and boundary-layer transition location. Results indicate minimal effect from paint-chip roughness. As the distributed roughness height and density increase, the lift-curve slope, maximum lift and lift-to-drag ratio decrease. As the Reynolds number increases, natural transition is replaced by bypass transition. The critical roughness Reynolds number varies between 178 and 318, within the historical range. At a chord Reynolds number of 3.2 × 10^6, the maximum lift-to-drag ratio decreases by 40% for 140 μm roughness, corresponding to a 2.3% loss in annual energy production. The simulated performance loss compares well to the measured performance loss of an in-service wind turbine.

  1. Skin friction measurements of mathematically generated roughness in the transitionally- to fully-rough regimes

    Science.gov (United States)

    Barros, Julio; Schultz, Michael; Flack, Karen

    2016-11-01

    Engineering systems are affected by surface roughness, which causes an increase in drag, leading to significant performance penalties. One important question is how to predict frictional drag purely from surface topography. Although significant progress has been made in recent years, this has proven challenging. The present work takes a systematic approach by generating surface roughness in which surface parameters, such as the rms height and skewness, can be controlled. Surfaces were produced using the random Fourier modes method with enforced power-law spectral slopes, and were manufactured using high-resolution 3D printing. In this study, three surfaces with constant amplitude and varying spectral slope P were investigated (P = -0.5, -1.0, -1.5). Skin-friction measurements were conducted in a high-Reynolds-number turbulent channel flow facility, covering a wide range of Reynolds numbers from the hydraulically smooth to the fully rough regime. Results show that some long-wavelength roughness scales do not contribute significantly to the frictional drag, highlighting the need for filtering in the calculation of surface statistics. Upon high-pass filtering, it was found that krms is highly correlated with the measured ks.
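
    The surface-generation step can be sketched with a few lines of NumPy: random Fourier phases with a power-law amplitude spectrum of slope P, inverse-transformed and rescaled to a target rms height. The exact amplitude/PSD convention used in the study is an assumption here.

        # Sketch of a random-Fourier-modes surface generator with an enforced
        # power-law spectral slope, rescaled to a target rms height.
        import numpy as np

        rng = np.random.default_rng(2)

        def rough_surface(n=512, slope=-1.0, rms=1.0):
            kx = np.fft.fftfreq(n)
            k = np.sqrt(kx[:, None]**2 + kx[None, :]**2)
            k[0, 0] = np.inf                      # kill the mean (k = 0) mode
            amplitude = k**slope                  # enforced power-law spectrum
            phase = np.exp(2j * np.pi * rng.random((n, n)))
            h = np.fft.ifft2(amplitude * phase).real
            return h * (rms / h.std())            # rescale to target rms

        h = rough_surface(slope=-1.5)
        print(h.std(), h.shape)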

  2. The collection of the main issues for wind farm optimisation in complex terrain

    DEFF Research Database (Denmark)

    Xu, Chang; Chen, Dandan; Han, Xingxing

    2016-01-01

    The paper aims at establishing the collection of the main issues for wind farm optimisation in complex terrain. To make wind farms cost effective, this paper briefly analyses the main factors influencing wind farm design in complex terrain and sets up a series of mathematical models covering micro-siting, collector circuits and access-road design as optimisation problems. The paper relies on one year of existing wind data in the wind farm area and uses a genetic algorithm to optimise the micro-siting problem. After optimisation of the turbine layout, a single-source shortest path algorithm
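
    The shortest-path step referred to above can be illustrated with plain Dijkstra over a weighted graph whose nodes are the substation and turbine positions; the tiny graph and its edge costs below are invented.

        # Sketch of the single-source shortest-path step for collector-circuit or
        # access-road routing: plain Dijkstra from the substation over a weighted
        # terrain graph (edge costs invented).
        import heapq

        def dijkstra(graph, source):
            dist = {source: 0.0}
            heap = [(0.0, source)]
            while heap:
                d, u = heapq.heappop(heap)
                if d > dist.get(u, float("inf")):
                    continue                        # stale queue entry
                for v, w in graph[u]:
                    if d + w < dist.get(v, float("inf")):
                        dist[v] = d + w
                        heapq.heappush(heap, (dist[v], v))
            return dist

        # Tiny terrain graph: substation "S", turbines "T1".."T3".
        graph = {"S": [("T1", 4.0), ("T2", 7.0)],
                 "T1": [("S", 4.0), ("T2", 2.0), ("T3", 6.0)],
                 "T2": [("S", 7.0), ("T1", 2.0), ("T3", 3.0)],
                 "T3": [("T1", 6.0), ("T2", 3.0)]}
        print(dijkstra(graph, "S"))   # cheapest connection cost to each turbine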

  3. Electricity storages - optimised operation based on spot market prices; Stromspeicher. Optimierte Fahrweise auf Basis der Spotmarktpreise

    Energy Technology Data Exchange (ETDEWEB)

    Bernhard, Dominik; Roon, Serafin von [FfE Forschungsstelle fuer Energiewirtschaft e.V., Muenchen (Germany)

    2010-06-15

    With its integrated energy and climate package the last federal government set itself ambitious goals for the improvement of energy efficiency and growth of renewable energy production. These goals were confirmed by the new government in its coalition agreement. However, they can only be realised if the supply of electricity from fluctuating renewable sources can be made to coincide with electricity demand. Electricity storages are therefore an indispensable component of the future energy supply system. This article studies the optimised operation of an electricity storage based on spot market prices and the influence of wind power production up to the year 2020.
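
    The kind of spot-price-optimised operation studied here can be posed as a small linear programme: choose hourly charge and discharge powers to maximise market revenue subject to energy-balance and capacity limits. The sketch below uses PuLP with invented prices and storage data, and applies the efficiency loss on charging only for simplicity.

        # Sketch of price-based storage dispatch: charge in cheap hours,
        # discharge in expensive ones, subject to energy and power limits.
        import pulp

        prices = [35, 28, 22, 30, 55, 80, 65, 40]   # EUR/MWh, hourly (invented)
        T = range(len(prices))
        cap, p_max, eta = 10.0, 4.0, 0.85           # MWh, MW, charging efficiency

        prob = pulp.LpProblem("storage_dispatch", pulp.LpMaximize)
        c = pulp.LpVariable.dicts("charge", T, lowBound=0, upBound=p_max)
        d = pulp.LpVariable.dicts("discharge", T, lowBound=0, upBound=p_max)
        s = pulp.LpVariable.dicts("soc", T, lowBound=0, upBound=cap)

        prob += pulp.lpSum(prices[t] * (d[t] - c[t]) for t in T)  # market revenue
        for t in T:
            prev = s[t - 1] if t > 0 else 0.0
            prob += s[t] == prev + eta * c[t] - d[t]              # energy balance

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print([(t, c[t].value(), d[t].value()) for t in T])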

  4. Evaluation and optimisation of preparative semi-automated electrophoresis systems for Illumina library preparation.

    Science.gov (United States)

    Quail, Michael A; Gu, Yong; Swerdlow, Harold; Mayho, Matthew

    2012-12-01

    Size selection can be a critical step in the preparation of next-generation sequencing libraries. Traditional methods employing gel electrophoresis lack reproducibility, are labour intensive, do not scale well and employ hazardous intercalating dyes. In a high-throughput setting, solid-phase reversible immobilisation beads are commonly used for size selection, but result in quite a broad fragment-size range. We have evaluated and optimised the use of two semi-automated preparative DNA electrophoresis systems, the Caliper LabChip XT and the Sage Science Pippin Prep, for size selection of Illumina sequencing libraries. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Influence of surface roughness of a desert

    Science.gov (United States)

    Sud, Y. C.; Smith, W. E.

    1984-01-01

    A numerical simulation study, using the current GLAS climate GCM, was carried out to examine the influence of a low bulk aerodynamic drag parameter in deserts. The results illustrate the importance of yet another feedback effect of a desert on itself, produced by the reduction in the surface roughness height of the land once vegetation dies and a desert forms. Apart from affecting the moisture convergence, the low bulk transport coefficients of a desert lead to enhanced longwave cooling and sinking, which together reduce precipitation by Charney's (1975) mechanism. Thus this effect, together with the albedo and soil-moisture influences, perpetuates a desert condition through its geophysical feedback. The study further suggests that the man-made desert is a viable hypothesis.

  6. Accelerated aging effects on surface hardness and roughness of lingual retainer adhesives.

    Science.gov (United States)

    Ramoglu, Sabri Ilhan; Usumez, Serdar; Buyukyilmaz, Tamer

    2008-01-01

    To test the null hypothesis that accelerated aging has no effect on the surface microhardness and roughness of two light-cured lingual retainer adhesives. Ten samples each of two light-cured materials, Transbond Lingual Retainer (3M Unitek) and Light Cure Retainer (Reliance), were cured with a halogen light for 40 seconds. Vickers hardness and surface roughness were measured before and after accelerated aging of 300 hours in a weathering tester. Differences between mean values were analyzed for statistical significance using a t-test, with the level of statistical significance set at P < .05. The increase in surface microhardness after aging was statistically significant for both adhesives (P < .05), and the increase in surface roughness was statistically significant for the Transbond Lingual Retainer (P < .05) but not for the Light Cure Retainer (P > .05). Accelerated aging significantly increased the surface microhardness of both light-cured retainer adhesives tested. It also significantly increased the surface roughness of the Transbond Lingual Retainer.
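
    The comparison described above is a paired before/after test at P < .05. A minimal sketch follows; the hardness values are placeholders, not data from the study.

        from scipy import stats

        # Placeholder Vickers hardness values for ten samples, before and after aging
        before = [58.1, 60.4, 57.9, 59.6, 58.8, 61.0, 59.2, 60.1, 58.5, 59.9]
        after  = [63.2, 65.0, 62.7, 64.1, 63.5, 66.2, 64.0, 65.1, 63.1, 64.6]

        t_stat, p_value = stats.ttest_rel(before, after)   # paired-samples t-test
        print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
        if p_value < 0.05:
            print("difference is statistically significant at P < .05")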

  7. Effect of laser parameters on surface roughness of laser modified tool steel after thermal cyclic loading

    Science.gov (United States)

    Lau Sheng, Annie; Ismail, Izwan; Nur Aqida, Syarifah

    2018-03-01

    This study presents the effects of laser parameters on the surface roughness of laser-modified tool steel after thermal cyclic loading. A pulse-mode Nd:YAG laser was used to perform the laser surface modification on AISI H13 tool steel samples. Samples were then subjected to thermal cyclic loading experiments, which involved alternate immersion in molten aluminium (800°C) and water (27°C) for 553 cycles. A full factorial design of experiments (DOE) was developed to perform the investigation. The factors of the DOE are the laser parameters, namely overlap rate (η), pulse repetition frequency (f_PRF) and peak power (P_peak), while the response is the surface roughness after thermal cyclic loading. Results indicate that the surface roughness of the laser-modified surface after thermal cyclic loading is significantly affected by the laser parameter settings.
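
    A full factorial DOE, as named above, enumerates every combination of the three laser factors. A minimal sketch follows; the factor levels are illustrative assumptions, since the record does not give them.

        from itertools import product

        overlap_rate = [0.25, 0.50, 0.75]      # eta (assumed levels)
        pulse_freq_hz = [5.0, 10.0]            # f_PRF (assumed levels)
        peak_power_kw = [1.0, 2.0]             # P_peak (assumed levels)

        runs = list(product(overlap_rate, pulse_freq_hz, peak_power_kw))
        for i, (eta, f_prf, p_peak) in enumerate(runs, start=1):
            print(f"run {i:2d}: eta={eta}, f_PRF={f_prf} Hz, P_peak={p_peak} kW")
        # 3 x 2 x 2 = 12 runs; surface roughness is the measured response per run.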

  8. Roughness-reflectance relationship of bare desert terrain: An empirical study

    International Nuclear Information System (INIS)

    Shoshany, M.

    1993-01-01

    A study of the bidirectional reflectance distribution function (BRDF) in relation to surface roughness properties was conducted in arid land near Fowlers Gap Research Station, New South Wales, Australia. Such an empirical study is necessary for investigating the possibility of determining terrain geomorphological parameters from bidirectional reflectance data. A new apparatus was developed to take accurate hemispherical directional radiance measurements (HDRM). A digitizer for three-dimensional in situ roughness measurements was also developed. More than 70 hemispherical data sets were collected for various illumination conditions and surface types of desert stony pavements and rocky terrain slopes. In general, most of the surfaces exhibited anisotropic reflection with a major backscattering component. The BRDF of the different surface types in relation to their roughness properties, as determined by the field digitizer, was then examined. Results showed that sites considered to differ significantly from a geomorphological point of view would not necessarily form a different BRDF.

  9. Algorithme intelligent d'optimisation d'un design structurel de grande envergure [Intelligent optimisation algorithm for a large-scale structural design]

    Science.gov (United States)

    Dominique, Stephane

    The implementation of an automated decision-support system in the field of design and structural optimisation can give a significant advantage to any industry working on mechanical designs. Indeed, by providing solution ideas to a designer, or by upgrading existing design solutions while the designer is not at work, the system may reduce the project cycle time or allow more time to produce a better design. This thesis presents a new approach to automating a design process based on Case-Based Reasoning (CBR), in combination with a new genetic algorithm named Genetic Algorithm with Territorial core Evolution (GATE). This approach was developed in order to reduce the operating cost of the process. However, as the system implementation cost is quite high, the approach is better suited to large-scale design problems, particularly design problems that the designer plans to solve for many different specification sets. First, the CBR process uses a databank filled with every known solution to similar design problems. Then, the solutions closest to the current problem in terms of specifications are selected. After this, during the adaptation phase, an artificial neural network (ANN) interpolates amongst the known solutions to produce an additional solution to the current problem, using the current specifications as inputs. Each solution produced and selected by the CBR is then used to initialize the population of an island of the genetic algorithm. The algorithm optimises the solution further during the refinement phase. Using progressive refinement, the algorithm starts with only the most important variables for the problem; then, as the optimisation progresses, the remaining variables are gradually introduced, layer by layer. The genetic algorithm used is a new algorithm specifically created during this thesis to solve optimisation problems in the field of mechanical structural design. The algorithm is named GATE, and is essentially a real number

  10. Dose optimisation for intraoperative cone-beam flat-detector CT in paediatric spinal surgery

    International Nuclear Information System (INIS)

    Petersen, Asger Greval; Eiskjaer, Soeren; Kaspersen, Jon

    2012-01-01

    During surgery for spinal deformities, accurate placement of pedicle screws may be guided by intraoperative cone-beam flat-detector CT. The purpose of this study was to identify appropriate paediatric imaging protocols aiming to reduce the radiation dose in line with the ALARA principle. Using the O-arm® system (Medtronic, Inc.), three paediatric phantoms were employed to measure CTDIw doses with default and lowered exposure settings. Images from 126 scans were evaluated by two spinal surgeons and scores were compared (kappa statistics). Effective doses were calculated. The recommended new low-dose 3-D spine protocols were then used in 15 children. The lowest acceptable exposure as judged by image quality for intraoperative use was 70 kVp/40 mAs, 70 kVp/80 mAs and 80 kVp/40 mAs for the 1-, 5- and 12-year-old-equivalent phantoms respectively (kappa = 0.70). Optimised dose settings reduced CTDIw doses by 89-93%. The effective dose was 0.5 mSv (a 91-94.5% reduction). The optimised protocols were used clinically without problems. Radiation doses for intraoperative 3-D CT using a cone-beam flat-detector scanner could be reduced by at least 89% compared to the manufacturer settings and still be used to safely navigate pedicle screws. (orig.)

  11. Investigation of roughing machining simulation by using visual basic programming in NX CAM system

    Science.gov (United States)

    Hafiz Mohamad, Mohamad; Nafis Osman Zahid, Muhammed

    2018-03-01

    This paper outlines a simulation study investigating the characteristics of roughing machining simulation in 4th-axis milling processes by utilizing Visual Basic programming in the NX CAM system. The selection and optimization of cutting orientation in a rough milling operation is critical in 4th-axis machining. The main purpose of a roughing operation is to approximately shape the machined part into its finished form by removing the bulk of material from the workpiece. In this paper, the simulations are executed by manipulating a set of different cutting orientations to generate the estimated volume removed from the machined parts. The cutting orientation with the highest volume removal is denoted as the optimum value and chosen for the roughing operation. In order to run the simulation, customized software is developed to assist the routines. Operation build-up instructions in the NX CAM interface are translated into programming code via advanced tools available in Visual Basic Studio. The code is customized and equipped with decision-making tools to run and control the simulations, and it permits integration with independent program files to execute specific operations. This paper discusses the simulation program and identifies optimum cutting orientations for roughing processes. The output of this study will broaden the simulation routines performed in NX CAM systems.
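
    As a sketch of the orientation search described above, the loop below scores each candidate 4th-axis orientation by its simulated removed volume and keeps the best one. simulate_removed_volume is a hypothetical stand-in for the NX CAM simulation call, and the angle set is an assumption.

        import math

        def simulate_removed_volume(angle_deg: float) -> float:
            # Hypothetical stand-in for one NX CAM roughing simulation; a real
            # implementation would drive NX through its automation interface.
            return 100.0 + 20.0 * math.cos(math.radians(angle_deg - 45.0))

        candidate_angles = range(0, 360, 15)       # candidate A-axis orientations
        best = max(candidate_angles, key=simulate_removed_volume)
        print(best, simulate_removed_volume(best))  # optimum orientation, volume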

  12. Non-Contact Surface Roughness Measurement by Implementation of a Spatial Light Modulator

    Directory of Open Access Journals (Sweden)

    Laura Aulbach

    2017-03-01

    Full Text Available The surface structure, especially the roughness, has a significant influence on numerous parameters, such as friction and wear, and therefore largely determines the quality of technical systems. In recent decades, a broad variety of surface-roughness measurement methods has been developed. A destructive measurement procedure or the lack of feasibility of online monitoring are the crucial drawbacks of most of these methods. This article proposes a new non-contact method for measuring surface roughness that is straightforward to implement and easy to extend to online monitoring processes. The key element is a liquid-crystal-based spatial light modulator, integrated in an interferometric setup. By varying the imprinted phase of the modulator, a correlation between the imprinted phase and the fringe visibility of the interferogram is measured, and the surface roughness can be derived. This paper presents the theoretical approach of the method and first simulation and experimental results for a set of surface roughnesses. The experimental results are compared with values obtained by an atomic force microscope and a stylus profiler.
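
    A minimal sketch of the visibility estimate underlying the method: fringe visibility V = (I_max - I_min)/(I_max + I_min) is computed from the interferogram as the imprinted SLM phase is varied. The intensity trace below is synthetic, not measured data.

        import numpy as np

        phase = np.linspace(0.0, 2.0 * np.pi, 256)     # imprinted SLM phase sweep
        intensity = 1.0 + 0.62 * np.cos(phase)         # toy interferogram, V = 0.62

        i_max, i_min = intensity.max(), intensity.min()
        visibility = (i_max - i_min) / (i_max + i_min)
        print(round(visibility, 3))    # roughness is then derived from V(phase)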

  13. Probabilistic flood inundation mapping at ungauged streams due to roughness coefficient uncertainty in hydraulic modelling

    Science.gov (United States)

    Papaioannou, George; Vasiliades, Lampros; Loukas, Athanasios; Aronica, Giuseppe T.

    2017-04-01

    Probabilistic flood inundation mapping is performed and analysed at the ungauged Xerias stream reach, Volos, Greece. The study evaluates the uncertainty that roughness coefficient values introduce into hydraulic models used for flood inundation modelling and mapping. The well-established one-dimensional (1-D) hydraulic model HEC-RAS is selected and linked to Monte-Carlo simulations of hydraulic roughness. Terrestrial laser scanner data have been used to produce a high-quality DEM to minimise input-data uncertainty and to improve the accuracy of the stream-channel topography required by the hydraulic model. Initial Manning's n roughness coefficient values are based on pebble-count field surveys and empirical formulas. Various theoretical probability distributions are fitted and evaluated on their accuracy in representing the estimated roughness values. Finally, Latin Hypercube Sampling has been used to generate different sets of Manning roughness values, and flood inundation probability maps have been created with the use of Monte Carlo simulations. Historical flood extent data from an extreme historical flash flood event are used for validation of the method. The calibration process is based on binary wet-dry reasoning with the Median Absolute Percentage Error as the evaluation metric. The results show that the proposed procedure supports probabilistic flood hazard mapping at ungauged rivers and provides water resources managers with valuable information for planning and implementing flood risk mitigation strategies.
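
    A hedged sketch of the sampling step described above: Latin Hypercube samples of Manning's n are drawn through a fitted lognormal distribution, each sample drives one model run, and the per-cell wet frequency gives the inundation probability map. The distribution parameters and the wet_cells function are illustrative stand-ins for the fitted distribution and the HEC-RAS run.

        import numpy as np
        from scipy.stats import qmc, lognorm

        n_runs = 200
        sampler = qmc.LatinHypercube(d=1, seed=1)
        u = sampler.random(n_runs).ravel()                 # stratified uniforms
        manning_n = lognorm(s=0.3, scale=0.035).ppf(u)     # illustrative fit

        def wet_cells(n_value):
            # Hypothetical stand-in for one HEC-RAS run: boolean inundation grid.
            rng = np.random.default_rng(int(n_value * 1e6))
            return rng.random((50, 50)) < n_value * 10.0

        inundation_prob = np.mean([wet_cells(n) for n in manning_n], axis=0)
        print(inundation_prob.shape, inundation_prob.max())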

  14. Comparative Evaluation of Conventional and Accelerated Castings on Marginal Fit and Surface Roughness

    Science.gov (United States)

    Jadhav, Vivek Dattatray; Motwani, Bhagwan K.; Shinde, Jitendra; Adhapure, Prasad

    2017-01-01

    Aims: The aim of this study was to evaluate the marginal fit and surface roughness of complete cast crowns made by a conventional and an accelerated casting technique. Settings and Design: The study was divided into three parts. In Part I, the vertical marginal fit of full metal crowns made by both casting techniques was checked; in Part II, the horizontal fit of sectional metal crowns made by both casting techniques was checked; and in Part III, the surface roughness of disc-shaped metal plate specimens made by both casting techniques was checked. Materials and Methods: A conventional technique was compared with an accelerated technique. In Parts I and II, the vertical marginal fit of the full metal crowns and the horizontal fit of the sectional metal crowns made by both casting techniques were determined, and in Part III, the surface roughness of castings made with the same techniques was compared. Statistical Analysis Used: The results of the t-test and independent-samples test did not indicate statistically significant differences in the marginal discrepancy between the two casting techniques. Results: For marginal discrepancy and surface roughness, crowns fabricated with the accelerated technique were significantly different from those fabricated with the conventional technique. Conclusions: The accelerated casting technique showed quite satisfactory results, but the conventional technique was superior in terms of marginal fit and surface roughness. PMID:29042726

  15. Roughness of equipotential lines due to a self-affine boundary

    International Nuclear Information System (INIS)

    Assis, Thiago A de; Mota, Fernando de B; Miranda, Jose G V; Andrade, Roberto F S; Filho, Hugo de O Dias; Castilho, Caio M C de

    2006-01-01

    In this work, the characterization of the roughness of a set of equipotential lines l, due to a rough surface held at a nonzero voltage bias, is investigated. The roughness of the equipotential lines reflects the roughness of the profile and causes a rapid variation in the electric field close to the surface. An ideal situation was considered, in which a well-known self-affine profile mimics the surface, while the equipotential lines are numerically evaluated using Liebmann's method. The use of an exactly scale-invariant profile helps in understanding the dependence of the line roughness exponent α(l) on both the value of the potential (or the average distance to the profile) and the profile's length. Results clearly support previous indications that: (a) for a system of fixed size, higher values of α characterize less corrugated lines far away from the profile; (b) for a fixed value of the potential, α decreases with the length of the profile towards the value of the boundary. This suggests that, for a system of infinite size, all equipotential lines share the same value of α.
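
    Liebmann's method, as used above, is Gauss-Seidel relaxation of the discrete Laplace equation. The sketch below solves for the potential above a toy rough electrode held at V = 1 (a sawtooth, not the self-affine profile of the paper); equipotential lines then follow as contours of the solution. Grid size and sweep count are illustrative.

        import numpy as np

        nx, ny = 40, 40
        v = np.zeros((ny, nx))
        height = 3 + (np.arange(nx) % 6)          # toy rough electrode profile

        for sweep in range(500):                  # Liebmann (Gauss-Seidel) sweeps
            for j in range(1, ny - 1):
                for i in range(1, nx - 1):
                    if j <= height[i]:
                        v[j, i] = 1.0             # on/inside the biased electrode
                    else:
                        v[j, i] = 0.25 * (v[j + 1, i] + v[j - 1, i]
                                          + v[j, i + 1] + v[j, i - 1])
            v[0, :] = 1.0                         # electrode side of the domain
            v[-1, :] = 0.0                        # grounded far boundary
            v[:, 0], v[:, -1] = v[:, 1], v[:, -2] # mirror (Neumann) side walls

        # An equipotential line at level c is the contour {v = c} of this field.
        print(v[ny // 2, ::8].round(3))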

  16. A methodological approach to the design of optimising control strategies for sewer systems

    DEFF Research Database (Denmark)

    Mollerup, Ane Loft; Mikkelsen, Peter Steen; Sin, Gürkan

    2016-01-01

    This study focuses on designing an optimisation-based control for a sewer system in a methodological way and linking it to a regulatory control. Optimisation-based design is found to depend on a proper choice of model, formulation of the objective function and tuning of the optimisation parameters. Accordingly, two novel optimisation configurations are developed, where the optimisation either acts on the actuators or acts on the regulatory control layer. These two optimisation designs are evaluated on a sub-catchment of the sewer system in Copenhagen, and found to perform better than the existing...

  17. Transmit Power Optimisation in Wireless Network

    Directory of Open Access Journals (Sweden)

    Besnik Terziu

    2011-09-01

    Full Text Available Transmit power optimisation in wireless networks based on beamforming has emerged as a promising technique to enhance the spectrum efficiency of present and future wireless communication systems. The aim of this study is to minimise the access-point power consumption in cellular networks while maintaining a targeted quality of service (QoS) for the mobile terminals. In this study, the targeted quality of service is delivered to a mobile station by providing a desired level of Signal to Interference and Noise Ratio (SINR). Base-stations are coordinated across multiple cells in a multi-antenna beamforming system. This study focuses on a multi-cell multi-antenna downlink scenario where each mobile user is equipped with a single antenna, but where multiple mobile users may be active simultaneously in each cell, separated via spatial multiplexing using beamforming. The design criterion is to minimise the total weighted transmitted power across the base-stations subject to SINR constraints at the mobile users. The main contribution of this study is to define an iterative algorithm that is capable of finding the joint optimal beamformers for all base-stations, based on a correlation-based channel model, the full-correlation model. Among all correlated channel models, the one used in this study is the most accurate, giving the best performance in terms of power consumption. The environment in this study is chosen to be a Non-Line-of-Sight (NLOS) condition, where a signal from a wireless transmitter passes several obstructions before arriving at a wireless receiver. Moreover, there are many scatterers local to the mobile, and multiple reflections can occur among them before the energy arrives at the mobile. The proposed algorithm is based on uplink-downlink duality using Lagrangian duality theory. Time-Division Duplex (TDD) is chosen as the platform for this study since it has been adopted in the latest technologies in Fourth
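
    The study's iteration jointly updates the beamformers via uplink-downlink duality; as a much-simplified, single-antenna stand-in for that idea, the classic fixed-point power iteration below drives each user to its SINR target with minimum total power. The link gains, noise level, and targets are invented for illustration.

        import numpy as np

        G = np.array([[1.0, 0.2, 0.1],
                      [0.3, 0.9, 0.2],
                      [0.1, 0.2, 1.1]])         # gain from base-station j to user k
        noise = 0.1
        targets = np.array([2.0, 2.0, 2.0])     # required SINRs (linear scale)

        p = np.ones(3)                           # initial transmit powers
        for _ in range(200):
            interference = G @ p - np.diag(G) * p + noise
            sinr = np.diag(G) * p / interference
            p = p * targets / sinr               # scale each power toward its target
        print(np.round(p, 3), np.round(sinr, 3))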

  18. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    Science.gov (United States)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed; to select the more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with that of similar hybrid methods based on the particle swarm optimisation (PSO) algorithm and its binary version, PSO and the discrete firefly algorithm, and a hybrid of error back-propagation and genetic algorithm. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.
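
    For the continuous weight search, the GSA update moves agents under fitness-dependent "gravitational" pulls with a decaying gravitational constant. A compact sketch follows, with the sphere function standing in for the classifier's training error (BGSA, the binary variant used for the structure search, is analogous). Population size and schedule are illustrative.

        import numpy as np

        rng = np.random.default_rng(4)

        def gsa(fitness, dim=8, agents=20, iters=200, g0=100.0, alpha=20.0):
            x = rng.uniform(-5.0, 5.0, (agents, dim))
            v = np.zeros((agents, dim))
            for t in range(iters):
                f = np.array([fitness(xi) for xi in x])
                best, worst = f.min(), f.max()
                m = (f - worst) / (best - worst + 1e-12)   # lower error, bigger mass
                m /= m.sum() + 1e-12
                g = g0 * np.exp(-alpha * t / iters)        # decaying constant
                acc = np.zeros_like(x)
                for i in range(agents):
                    diff = x - x[i]
                    dist = np.linalg.norm(diff, axis=1) + 1e-12
                    w = rng.random(agents) * m / dist      # random-weighted pulls
                    acc[i] = g * (w[:, None] * diff).sum(axis=0)
                v = rng.random((agents, dim)) * v + acc
                x = x + v
            f = np.array([fitness(xi) for xi in x])
            return x[f.argmin()], f.min()

        weights, err = gsa(lambda w: float(np.sum(w ** 2)))  # sphere test problem
        print(err)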

  19. Mutual information-based LPI optimisation for radar network

    Science.gov (United States)

    Shi, Chenguang; Zhou, Jianjiang; Wang, Fei; Chen, Jun

    2015-07-01

    A radar network can offer significant performance improvement for target detection and information extraction by employing spatial diversity. For a fixed number of radars, the achievable mutual information (MI) for estimating the target parameters may extend beyond a predefined threshold with full power transmission. In this paper, an effective low probability of intercept (LPI) optimisation algorithm is presented to improve the LPI performance of a radar network. Based on the radar network system model, we first adopt the Schleher intercept factor of the network as the optimisation metric for LPI performance. Then, a novel LPI optimisation algorithm is presented in which, for a predefined MI threshold, the Schleher intercept factor is minimised by optimising the transmission power allocation among the radars in the network, so that enhanced LPI performance can be achieved. A genetic algorithm based on nonlinear programming (GA-NP) is employed to solve the resulting nonconvex and nonlinear optimisation problem. Simulations demonstrate that the proposed algorithm is valuable and effective in improving the LPI performance of a radar network.
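
    A hedged sketch of the constrained allocation: the paper uses a GA combined with nonlinear programming (GA-NP), so a penalty-based differential evolution stands in here, and MI(p) = sum over i of log2(1 + a_i * p_i) is a simplified stand-in for the network's mutual information. The coefficients a_i, the MI threshold, and the use of total power as a proxy for the intercept factor are all assumptions.

        import numpy as np
        from scipy.optimize import differential_evolution

        a = np.array([0.8, 1.0, 1.4, 0.6])   # per-radar MI efficiency (assumed)
        mi_min = 6.0                          # required mutual information, bits

        def objective(p):
            mi = np.sum(np.log2(1.0 + a * p))
            penalty = 1e3 * max(0.0, mi_min - mi) ** 2   # enforce MI >= mi_min
            return np.sum(p) + penalty                   # power proxy for intercept

        result = differential_evolution(objective, bounds=[(0.0, 10.0)] * 4, seed=2)
        print(np.round(result.x, 3), round(objective(result.x), 3))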

  20. Robustness analysis of bogie suspension components Pareto optimised values

    Science.gov (United States)

    Mousavi Bideleh, Seyed Milad

    2017-08-01

    The bogie suspension system of a high-speed train can significantly affect vehicle performance. Multiobjective optimisation problems are often formulated and solved to find Pareto-optimised values of the suspension components and improve the cost efficiency of railway operations from different perspectives. Uncertainties in the design parameters of the suspension system can negatively influence the dynamic behaviour of railway vehicles. In this regard, robustness analysis of the bogie dynamic response with respect to uncertainties in the suspension design parameters is considered. A one-car railway vehicle model with 50 degrees of freedom and wear/comfort Pareto-optimised values of the bogie suspension components is chosen for the analysis. The longitudinal and lateral primary stiffnesses, the longitudinal and vertical secondary stiffnesses, and the yaw damping are considered as the five design parameters. The effects of parameter uncertainties on wear, ride comfort, track shift force, stability, and risk of derailment are studied by varying the design parameters around their respective Pareto-optimised values according to a lognormal distribution with different coefficients of variation (COVs). The robustness analysis is carried out based on the maximum entropy concept. The multiplicative dimensional reduction method is utilised to simplify the calculation of fractional moments and improve computational efficiency. The results showed that the dynamic response of the vehicle with wear/comfort Pareto-optimised bogie suspension values is robust against uncertainties in the design parameters, and the probability of failure is small for parameter uncertainties with COV up to 0.1.
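
    The perturbation scheme above scatters each design parameter about its Pareto-optimised value with a lognormal distribution at a chosen COV. A minimal sketch follows; the nominal stiffness value is invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        def lognormal_about(nominal, cov, size):
            """Lognormal samples with mean `nominal` and the requested COV."""
            sigma2 = np.log(1.0 + cov ** 2)
            mu = np.log(nominal) - 0.5 * sigma2
            return rng.lognormal(mean=mu, sigma=np.sqrt(sigma2), size=size)

        samples = lognormal_about(nominal=5.0e6, cov=0.1, size=10_000)  # e.g. N/m
        print(samples.mean(), samples.std() / samples.mean())  # ~5e6 and ~0.1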