WorldWideScience

Sample records for rough set approach

  1. A Rough Set Approach for Customer Segmentation

    Directory of Open Access Journals (Sweden)

    Prabha Dhandayudam

    2014-04-01

    Customer segmentation is a process that divides a business's total customers into groups according to the diversity of their purchasing behavior and characteristics. The data mining clustering technique can be used to accomplish this customer segmentation. This technique clusters the customers in such a way that the customers in one group behave similarly when compared to the customers in other groups. Customer-related data are categorical in nature. However, clustering algorithms for categorical data are few and are unable to handle uncertainty. Rough set theory (RST) is a mathematical approach that handles uncertainty and is capable of discovering knowledge from a database. This paper proposes a new clustering technique called MADO (Minimum Average Dissimilarity between Objects) for categorical data based on elements of RST. The proposed algorithm is compared with other RST-based clustering algorithms, such as MMR (Min-Min Roughness), MMeR (Min Mean Roughness), SDR (Standard Deviation Roughness), SSDR (Standard deviation of Standard Deviation Roughness), and MADE (Maximal Attributes DEpendency). The results show that for the real customer data considered, the MADO algorithm achieves clusters with higher cohesion, lower coupling, and less computational complexity when compared to the above-mentioned algorithms. The proposed algorithm has also been tested on a synthetic data set to show that it is also suitable for high-dimensional data.
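
    The MADO criterion itself is defined in the paper and is not reproduced here; as a rough illustration of the kind of computation such categorical clustering techniques rest on, the sketch below computes a simple-matching dissimilarity between categorical customer records and the average dissimilarity of an object to a cluster. All names and data are hypothetical.

```python
# Sketch only: simple-matching dissimilarity for categorical objects and
# the average dissimilarity of one object to a cluster. The paper's MADO
# criterion is built on computations of this kind but is not reproduced here.

def dissimilarity(a: dict, b: dict) -> float:
    """Fraction of attributes on which two categorical objects disagree."""
    return sum(a[k] != b[k] for k in a) / len(a)

def avg_dissimilarity(obj: dict, cluster: list[dict]) -> float:
    """Average dissimilarity between `obj` and every member of `cluster`."""
    return sum(dissimilarity(obj, member) for member in cluster) / len(cluster)

customers = [  # hypothetical categorical customer records
    {"payment": "card", "channel": "online", "frequency": "high"},
    {"payment": "cash", "channel": "store",  "frequency": "low"},
]
new_customer = {"payment": "card", "channel": "store", "frequency": "high"}
print(avg_dissimilarity(new_customer, customers))   # 0.5
```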

  2. Flu Diagnosis System Using Jaccard Index and Rough Set Approaches

    Science.gov (United States)

    Efendi, Riswan; Azah Samsudin, Noor; Mat Deris, Mustafa; Guan Ting, Yip

    2018-04-01

    The Jaccard index and rough set approaches have been frequently implemented in decision support systems across various application domains. Both approaches are appropriate for categorical data analysis. This paper presents applications of set operations in flu diagnosis systems based on two different approaches, the Jaccard index and rough sets. Both approaches are built on set-operation concepts, namely intersection and subset. A step-by-step procedure is demonstrated for each approach in diagnosing flu. The similarity and dissimilarity indexes between conditional symptoms and the decision are measured using the Jaccard approach. Additionally, the rough set is used to build decision support rules, which are established using redundant data analysis and elimination of unclassified elements. A number of data sets are considered to demonstrate the step-by-step procedure of each approach. The results show that rough sets can be used to support the Jaccard approach in establishing decision support rules. Additionally, the Jaccard index is the better approach for investigating the worst condition of patients, while patients definitely or possibly with (or without) flu can be determined using the rough set approach. The rules may improve the performance of medical diagnosis systems, making preliminary flu diagnosis easier for inexperienced doctors and patients.
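
    A minimal sketch of the Jaccard step the abstract describes: the similarity between a patient's observed symptoms and a diagnosis's symptom set, with dissimilarity as its complement. The symptom names here are invented for illustration.

```python
# Jaccard index between two symptom sets: |A ∩ B| / |A ∪ B|.
# 1.0 means identical sets, 0.0 means disjoint. Symptoms are illustrative.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

flu_symptoms = {"fever", "cough", "headache", "fatigue"}
patient      = {"fever", "cough", "sore throat"}

similarity    = jaccard(patient, flu_symptoms)   # similarity index
dissimilarity = 1.0 - similarity                 # dissimilarity index
print(f"similarity={similarity:.2f}, dissimilarity={dissimilarity:.2f}")
```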

  3. Rough Set Approach to Incomplete Multiscale Information System

    Science.gov (United States)

    Yang, Xibei; Qi, Yong; Yu, Dongjun; Yu, Hualong; Song, Xiaoning; Yang, Jingyu

    2014-01-01

    A multiscale information system is a new knowledge representation system for expressing knowledge at different levels of granulation. In this paper, by considering unknown values, which can be seen everywhere in real-world applications, the incomplete multiscale information system is investigated for the first time. The descriptor technique is employed to construct rough sets at different scales for analyzing hierarchically structured data. The problem of unravelling decision rules at different scales is also addressed. Finally, reduct descriptors are formulated to simplify the decision rules that can be derived from different scales. Some numerical examples are employed to substantiate the conceptual arguments. PMID:25276852

  4. A rough set approach for determining weights of decision makers in group decision making.

    Science.gov (United States)

    Yang, Qiang; Du, Ping-An; Wang, Yong; Liang, Bin

    2017-01-01

    This study presents a novel approach for determining the weights of decision makers (DMs) based on rough group decisions in multiple attribute group decision-making (MAGDM) problems. First, we construct a rough group decision matrix from all DMs' decision matrixes on the basis of rough set theory. After that, we derive a positive ideal solution (PIS) founded on the average matrix of the rough group decision, and negative ideal solutions (NISs) founded on the lower and upper limit matrixes of the rough group decision. Then, we obtain the weight of each group member and the priority order of alternatives by using the relative closeness method, which depends on the distances from each individual group member's decision to the PIS and NISs. Through comparisons with existing methods and an on-line business manager selection example, the proposed method is shown to provide more insight into the subjectivity and vagueness of DMs' evaluations and selections.

  5. A Rough Set Approach of Mechanical Fault Diagnosis for Five-Plunger Pump

    Directory of Open Access Journals (Sweden)

    Jiangping Wang

    2013-01-01

    Five-plunger pumps are widely used in oil fields to recover petroleum due to their reliability and relatively low cost. Petroleum production is, to a great extent, dependent upon the running condition of the pumps. Closely monitoring the condition of the pumps and carrying out timely system diagnosis whenever a fault symptom is detected helps reduce production downtime and improve overall productivity. In this paper, a rough set approach to mechanical fault diagnosis is proposed to identify five-plunger pump faults. The details of the approach, together with the basic concepts of rough set theory, are presented. The rough classifier is a set of decision rules derived from the lower and upper approximations of the decision classes. The definitions of these approximations are based on the indiscernibility relation in the set of objects. The spectrum features of vibration signals are abstracted as the attributes of the learning samples. The minimum decision rule set is used to classify the technical states of the considered object. The diagnostic investigation is carried out on data from a five-plunger pump operating in outdoor conditions on a real industrial object. Results show that the approach can effectively identify the different operating states of the pump.
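
    The lower and upper approximations mentioned above are standard rough set constructs; a minimal sketch with invented discretized vibration features follows. The accuracy of approximation is |lower| / |upper|.

```python
from collections import defaultdict

# Sketch of the constructs the abstract names: indiscernibility classes
# over condition attributes, then lower/upper approximations of a decision
# class. Feature values and labels are invented for illustration.

def partition(objects, attrs):
    """Group objects that are indiscernible on the chosen attributes."""
    blocks = defaultdict(set)
    for name, row in objects.items():
        blocks[tuple(row[a] for a in attrs)].add(name)
    return list(blocks.values())

def approximations(objects, attrs, target):
    lower, upper = set(), set()
    for block in partition(objects, attrs):
        if block <= target:
            lower |= block        # certainly in the class
        if block & target:
            upper |= block        # possibly in the class
    return lower, upper

samples = {                       # hypothetical discretized spectrum features
    "s1": {"f1": "high", "f2": "low"},
    "s2": {"f1": "high", "f2": "low"},
    "s3": {"f1": "low",  "f2": "high"},
}
faulty = {"s1", "s3"}             # decision class: samples labelled 'fault'
lo, up = approximations(samples, ["f1", "f2"], faulty)
print(lo, up, len(lo) / len(up))  # accuracy of approximation = |lower|/|upper|
```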

  6. Generalized rough sets

    International Nuclear Information System (INIS)

    Rady, E.A.; Kozae, A.M.; Abd El-Monsef, M.M.E.

    2004-01-01

    The process of analyzing data under uncertainty is a main goal for many real-life problems. Statistical analysis of such data is an area of active research. The aim of this paper is to introduce a new method concerning the generalization and modification of the rough set theory introduced earlier by Pawlak [Int. J. Comput. Inform. Sci. 11 (1982) 341].

  7. Novel Approach to Tourism Analysis with Multiple Outcome Capability Using Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Chun-Che Huang

    2016-12-01

    Exploring the relationship between tourists' characteristics and their decision-making outcomes is critical to keeping a tourism business competitive. In investigations of tourism development, most existing studies lack a systematic approach to analyzing qualitative data. Although the traditional Rough Set (RS) based approach is an excellent classification method for qualitative modeling, it cannot deal with the case of multiple outcomes, which is a common situation in tourism. Consequently, the Multiple Outcome Reduct Generation (MORG) and Multiple Outcome Rule Extraction (MORE) approaches based on RS are proposed to handle multiple outcomes. This study proposes a ranking-based approach to induct meaningful reducts and ensure the strength and robustness of decision rules, which helps decision makers understand tourists' characteristics in a tourism case.

  8. A rough set-based association rule approach implemented on a brand trust evaluation model

    Science.gov (United States)

    Liao, Shu-Hsien; Chen, Yin-Ju

    2017-09-01

    In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. The brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, rough set-based association rule induction, implemented on a brand trust evaluation model. It is presented as one way to deal with data uncertainty when analysing ratio-scale data, while creating predictive if-then rules that generalise data values to the retail region. The study applies the algorithms to analyse brand trust recall for alcoholic beverages. Finally, a discussion and conclusion are presented along with managerial implications.

  9. An intelligent hybrid scheme for optimizing parking space: A Tabu metaphor and rough set based approach

    Directory of Open Access Journals (Sweden)

    Soumya Banerjee

    2011-03-01

    Congested roads, high traffic, and parking problems are major concerns in any modern city planning. Congestion of on-street spaces in official neighborhoods may give rise to inappropriate parking in office and shopping mall complexes during peak business hours. This paper proposes an intelligent and optimized scheme to solve the parking space problem for a small city (e.g., Mauritius) using a reactive search technique (Tabu Search) assisted by rough sets. Rough sets are used to extract the uncertain rules that exist in databases of parking situations. Rough set theory contributes the notions of accuracy and roughness, which are used to characterize the uncertainty of the parking lot; approximation accuracy is employed to depict the accuracy of a rough classification [1] under different dynamic parking scenarios. The proposed hybrid metaphor, comprising Tabu Search and rough sets, could thus provide substantial research directions for other similar hard optimization problems.

  10. δ-Cut Decision-Theoretic Rough Set Approach: Model and Attribute Reductions

    Directory of Open Access Journals (Sweden)

    Hengrong Ju

    2014-01-01

    The decision-theoretic rough set is a quite useful rough set model that introduces decision costs into the probabilistic approximations of the target. However, Yao's decision-theoretic rough set is based on the classical indiscernibility relation, which may be too strict in many applications. To solve this problem, a δ-cut decision-theoretic rough set is proposed, based on the δ-cut quantitative indiscernibility relation. Furthermore, with respect to the criteria of decision-monotonicity and cost decrease, two different algorithms are designed to compute reducts. The comparisons between these two algorithms show the following: (1) with respect to the original data set, the reducts based on the decision-monotonicity criterion generate more rules supported by the lower approximation region and fewer rules supported by the boundary region, so the uncertainty that comes from the boundary region can be decreased; (2) compared with the reducts based on the decision-monotonicity criterion, the reducts based on the cost-minimum criterion obtain the lowest decision costs and the largest approximation qualities. This study suggests potential application areas and new research trends concerning rough set theory.
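
    The abstract does not spell out the δ-cut quantitative indiscernibility relation; one plausible reading, sketched below under that assumption, relates two objects when the fraction of attributes on which they agree is at least δ (δ = 1 recovers the classical relation).

```python
# Hedged sketch: one plausible reading of a delta-cut indiscernibility
# relation -- x and y are related when the fraction of attributes on which
# they agree is at least delta. delta = 1.0 recovers classical
# indiscernibility. Names and data are illustrative only.

def related(x: dict, y: dict, delta: float) -> bool:
    agree = sum(x[a] == y[a] for a in x)
    return agree / len(x) >= delta

def tolerance_class(obj, objects, delta):
    """All objects delta-indiscernible from `obj` (classes may overlap)."""
    return {name for name, row in objects.items() if related(obj, row, delta)}

data = {
    "u1": {"a": 1, "b": 0, "c": 1},
    "u2": {"a": 1, "b": 0, "c": 0},
    "u3": {"a": 0, "b": 1, "c": 0},
}
print(tolerance_class(data["u1"], data, delta=2/3))   # {'u1', 'u2'}
```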

  11. Bankruptcy Prediction with Rough Sets

    NARCIS (Netherlands)

    J.C. Bioch (Cor); V. Popova (Viara)

    2001-01-01

    The bankruptcy prediction problem can be considered an ordinal classification problem. The classical theory of Rough Sets describes objects by discrete attributes and does not take into account the ordering of the attribute values. This paper proposes a modification of the Rough Set

  12. Fuzzy sets, rough sets, multisets and clustering

    CERN Document Server

    Dahlbom, Anders; Narukawa, Yasuo

    2017-01-01

    This book is dedicated to Prof. Sadaaki Miyamoto and presents cutting-edge papers in some of the areas to which he contributed. Bringing together contributions by leading researchers in the field, it concretely addresses clustering, multisets, rough sets and fuzzy sets, as well as their applications in areas such as decision-making. The book is divided into four parts, the first of which focuses on clustering and classification. The second part puts the spotlight on multisets, bags, fuzzy bags and other fuzzy extensions, while the third deals with rough sets. Rounding out the coverage, the last part explores fuzzy sets and decision-making.

  13. Rough set classification based on quantum logic

    Science.gov (United States)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation, which is presented here. Theoretical analyses demonstrate that the new quantum rough set model yields a new type of decision rule with less redundancy, which can be used to give accurate classification using principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in terms of logic or sets. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in terms of finding optimal classifications.

  14. A combined data mining approach using rough set theory and case-based reasoning in medical datasets

    Directory of Open Access Journals (Sweden)

    Mohammad Taghi Rezvan

    2014-06-01

    Case-based reasoning (CBR) is the process of solving new cases by retrieving the most relevant ones from an existing knowledge base. Since irrelevant or redundant features not only remarkably increase memory requirements but also the time complexity of case retrieval, reducing the number of dimensions is an issue worth considering. This paper uses rough set theory (RST) to reduce the number of dimensions in a CBR classifier with the aim of increasing accuracy and efficiency. CBR exploits a distance-based co-occurrence measure on categorical data to assess the similarity of cases. This distance is based on the proportional distribution of the different categorical values of features. The weight used for a feature is the average of the co-occurrence values of the features. The combination of RST and CBR has been applied to the real categorical datasets of Wisconsin Breast Cancer, Lymphography, and Primary cancer. The 5-fold cross-validation method is used to evaluate the performance of the proposed approach. The results show that this combined approach lowers computational costs and improves performance metrics, including accuracy and interpretability, compared to other approaches developed in the literature.

  15. Comparative analysis of targeted metabolomics: dominance-based rough set approach versus orthogonal partial least square-discriminant analysis.

    Science.gov (United States)

    Blasco, H; Błaszczyński, J; Billaut, J C; Nadal-Desbarats, L; Pradat, P F; Devos, D; Moreau, C; Andres, C R; Emond, P; Corcia, P; Słowiński, R

    2015-02-01

    Metabolomics is an emerging field that ascertains a metabolic profile from a combination of small molecules, with applications in health. Metabolomic methods are currently applied to discover diagnostic biomarkers and to identify pathophysiological pathways involved in pathology. However, metabolomic data are complex and are usually analyzed by statistical methods. Although the methods have been widely described, most have not been either standardized or validated. Data analysis is the foundation of a robust methodology, so new mathematical methods need to be developed to assess and complement current methods. We therefore applied, for the first time, the dominance-based rough set approach (DRSA) to metabolomics data; we also assessed the complementarity of this method with standard statistical methods. Some attributes were transformed in a way allowing us to discover global and local monotonic relationships between condition and decision attributes. We used previously published metabolomics data (18 variables) for amyotrophic lateral sclerosis (ALS) and non-ALS patients. Principal Component Analysis (PCA) and Orthogonal Partial Least Square-Discriminant Analysis (OPLS-DA) allowed satisfactory discrimination (72.7%) between ALS and non-ALS patients. Some discriminant metabolites were identified: acetate, acetone, pyruvate and glutamine. The concentrations of acetate and pyruvate were also identified by univariate analysis as significantly different between ALS and non-ALS patients. DRSA correctly classified 68.7% of the cases and established rules involving some of the metabolites highlighted by OPLS-DA (acetate and acetone). Some rules identified potential biomarkers not revealed by OPLS-DA (beta-hydroxybutyrate). We also found a large number of common discriminating metabolites after Bayesian confirmation measures, particularly acetate, pyruvate, acetone and ascorbate, consistent with the pathophysiological pathways involved in ALS. DRSA provides

  16. Generalized rough sets hybrid structure and applications

    CERN Document Server

    Mukherjee, Anjan

    2015-01-01

    The book introduces the concept of “generalized interval valued intuitionistic fuzzy soft sets”. It presents the basic properties of these sets and investigates an application of generalized interval valued intuitionistic fuzzy soft sets in decision making with respect to intervals of degree of preference. The concept of “interval valued intuitionistic fuzzy soft rough sets” is discussed, and an interval valued intuitionistic fuzzy soft rough set based multi-criteria group decision making scheme is presented, which refines the primary evaluation of the whole expert group and enables us to select the optimal object in a most reliable manner. The book also details the concept of interval valued intuitionistic fuzzy sets of type 2 and presents the basic properties of these sets. It also introduces the concept of the “interval valued intuitionistic fuzzy soft topological space (IVIFS topological space)” together with interval valued intuitionistic fuzzy soft open sets (IVIFS open sets) and interval valued intuitionistic fuzzy soft closed sets (IVIFS closed sets).

  17. Information Measures of Roughness of Knowledge and Rough Sets for Incomplete Information Systems

    Institute of Scientific and Technical Information of China (English)

    LIANG Ji-ye; QU Kai-she

    2001-01-01

    In this paper we address information measures of the roughness of knowledge and of rough sets for incomplete information systems. The definition of the rough entropy of knowledge and its important properties are given. In particular, the relationship between the rough entropy of knowledge and the Hartley measure of uncertainty is established. We show that the rough entropy of knowledge decreases monotonously as the granularity of information becomes smaller. This gives an information interpretation for the roughness of knowledge. Based on the rough entropy of knowledge and the roughness of a rough set, a definition of the rough entropy of a rough set is proposed, and we show that the rough entropy of a rough set also decreases monotonously as the granularity of information becomes smaller. This gives a more accurate measure for the roughness of a rough set.
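
    Stated in formulas (a hedged reconstruction; the paper's exact definitions may differ in detail), with U the universe, {X_1, ..., X_m} the partition induced by knowledge R, and lower/upper approximations of a set X written with under- and over-lines:

```latex
% Hedged reconstruction of the measures the abstract refers to.
E(R) = \sum_{i=1}^{m} \frac{|X_i|}{|U|} \log_2 |X_i|
\qquad \text{(rough entropy of knowledge } R\text{)}
\\[4pt]
\rho_R(X) = 1 - \frac{|\underline{R}X|}{|\overline{R}X|},
\qquad
E_R(X) = \rho_R(X)\, E(R)
\qquad \text{(roughness and rough entropy of a rough set } X\text{)}
```

    As granulation becomes finer each |X_i| shrinks, so E(R), and with it E_R(X), decreases, which is the monotonicity the abstract asserts.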

  18. Thriving rough sets 10th anniversary : honoring professor Zdzisław Pawlak's life and legacy & 35 years of rough sets

    CERN Document Server

    Skowron, Andrzej; Yao, Yiyu; Ślęzak, Dominik; Polkowski, Lech

    2017-01-01

    This special book is dedicated to the memory of Professor Zdzisław Pawlak, the father of rough set theory, in order to commemorate both the 10th anniversary of his passing and 35 years of rough set theory. The book consists of 20 chapters distributed across four sections, which focus in turn on a historical review of Professor Zdzisław Pawlak and rough set theory; a review of the theory of rough sets; the state of the art of rough set theory; and major developments in rough set based data mining approaches. Apart from Professor Pawlak’s contributions to rough set theory, other areas he was interested in are also included. Moreover, recent theoretical studies and advances in applications are also presented. The book will offer a useful guide for researchers in Knowledge Engineering and Data Mining by suggesting new approaches to solving the problems they encounter.

  19. More on neutrosophic soft rough sets and its modification

    Directory of Open Access Journals (Sweden)

    Emad Marei

    2015-12-01

    This paper aims to introduce and discuss a new mathematical tool for dealing with uncertainties that combines neutrosophic sets, soft sets and rough sets: the neutrosophic soft rough set model. A modification of the model is also introduced. Some of their properties are studied and supported with proved propositions and many counterexamples. Some rough relations are redefined as neutrosophic soft rough relations. Comparisons among the traditional rough model, the suggested neutrosophic soft rough model and its modification, using their properties and accuracy measures, are introduced. Finally, we illustrate that the classical rough set model can be viewed as a special case of the models suggested in this paper.

  20. Rough sets selected methods and applications in management and engineering

    CERN Document Server

    Peters, Georg; Ślęzak, Dominik; Yao, Yiyu

    2012-01-01

    Introduced in the early 1980s, Rough Set Theory has become an important part of soft computing in the last 25 years. This book provides a practical, context-based analysis of rough set theory, with each chapter exploring a real-world application of Rough Sets.

  1. Can We Make Definite Categorization of Student Attitudes? A Rough Set Approach to Investigate Students' Implicit Attitudinal Typologies toward Living Things

    Science.gov (United States)

    Narli, Serkan; Yorek, Nurettin; Sahin, Mehmet; Usak, Muhammet

    2010-01-01

    This study investigates the possibility of analyzing educational data using the theory of rough sets, which is mostly employed in the fields of data analysis and data mining. Data were collected using an open-ended conceptual understanding test about living things administered to first-year high school students. The responses of randomly selected…

  2. Simplified Approach to Predicting Rough Surface Transition

    Science.gov (United States)

    Boyle, Robert J.; Stripf, Matthias

    2009-01-01

    Turbine vane heat transfer predictions are given for smooth and rough vanes where the experimental data show transition moving forward on the vane as the surface roughness physical height increases. Consistent with smooth vane heat transfer, the transition moves forward for a fixed roughness height as the Reynolds number increases. Comparisons are presented with published experimental data. Some of the data are for a regular roughness geometry with a range of roughness heights, Reynolds numbers, and inlet turbulence intensities. The approach taken in this analysis is to treat the roughness in a statistical sense, consistent with what would be obtained from blades measured after exposure to actual engine environments. An approach is given to determine the equivalent sand grain roughness from the statistics of the regular geometry. This approach is guided by the experimental data. A roughness transition criterion is developed, and comparisons are made with experimental data over the entire range of experimental test conditions. Additional comparisons are made with experimental heat transfer data, where the roughness geometries are both regular as well as statistical. Using the developed analysis, heat transfer calculations are presented for the second stage vane of a high pressure turbine at hypothetical engine conditions.

  3. Soft sets combined with interval valued intuitionistic fuzzy sets of type-2 and rough sets

    Directory of Open Access Journals (Sweden)

    Anjan Mukherjee

    2015-03-01

    Fuzzy set theory, rough set theory and soft set theory are all mathematical tools for dealing with uncertainties. The concept of type-2 fuzzy sets was introduced by Zadeh in 1975 and was extended to interval valued intuitionistic fuzzy sets of type-2 by the authors. This paper is devoted to discussions of the combinations of interval valued intuitionistic fuzzy sets of type-2, soft sets and rough sets. Three different types of new hybrid models, namely interval valued intuitionistic fuzzy soft sets of type-2, soft rough interval valued intuitionistic fuzzy sets of type-2 and soft interval valued intuitionistic fuzzy rough sets of type-2, are proposed and their properties are derived.

  4. PhysarumSoft: An update based on rough set theory

    Science.gov (United States)

    Schumann, Andrew; Pancerz, Krzysztof

    2017-07-01

    PhysarumSoft is a software tool consisting of two modules developed for programming Physarum machines and simulating Physarum games, respectively. The paper briefly discusses what has been added since the last version released in 2015. New elements in both modules are based on rough set theory. Rough sets are used to model behaviour of Physarum machines and to describe strategy games.

  5. Rough set and rule-based multicriteria decision aiding

    Directory of Open Access Journals (Sweden)

    Roman Slowinski

    2012-08-01

    The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view, called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to the induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data which handles ordinal evaluations of objects on the considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.

  6. Matroidal Structure of Generalized Rough Sets Based on Tolerance Relations

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    This paper studies the matroidal structure of the generalized rough set based on the tolerance relation. The matroid can also induce a new relation. We investigate the connection between the original tolerance relation and the induced relation.

  7. Helly-type theorems for roughly convexlike sets

    International Nuclear Information System (INIS)

    Phan Thanh An

    2005-04-01

    For a given positive real number γ, a subset M of an n-dimensional Euclidean space is said to be roughly convexlike (with roughness degree γ) if $x_0, x_1 \in M$ and $\|x_1 - x_0\| > \gamma$ imply $]x_0, x_1[ \; \cap \; M \neq \emptyset$. In this paper, we present Helly-type theorems for such sets, then solve an open question about sets of constant width raised by Buchman and Valentine and by Sallee. (author)

  8. Variable precision rough set for multiple decision attribute analysis

    Institute of Scientific and Technical Information of China (English)

    Lai, Kin Keung

    2008-01-01

    A variable precision rough set (VPRS) model is used to solve the multi-attribute decision analysis (MADA) problem with multiple conflicting decision attributes and multiple condition attributes. By introducing confidence measures and a β-reduct, the VPRS model can rationally solve the conflicting decision analysis problem with multiple decision attributes and multiple condition attributes. For illustration, a medical diagnosis example is utilized to show the feasibility of the VPRS model in solving the MADA...

  9. UNCERTAINTY HANDLING IN DISASTER MANAGEMENT USING HIERARCHICAL ROUGH SET GRANULATION

    Directory of Open Access Journals (Sweden)

    H. Sheikhian

    2015-08-01

    Uncertainty is one of the main concerns in geospatial data analysis, affecting different parts of decision making based on such data. In this paper, a new methodology to handle uncertainty in multi-criteria decision making problems is proposed. It integrates hierarchical rough granulation and rule extraction to build an accurate classifier. Rough granulation provides information granules with a detailed quality assessment. The granules are the basis for rule extraction in granular computing, which applies quality measures to the rules to obtain the best set of classification rules. The proposed methodology is applied to assess seismic physical vulnerability in Tehran. Six effective criteria reflecting building age, height and material, topographic slope and earthquake intensity of the North Tehran fault have been tested. The criteria were discretized and the data set was granulated using a hierarchical rough method, where the best describing granules are determined according to the quality measures. The granules are fed into the granular computing algorithm, resulting in the classification rules with the highest prediction quality. This detailed uncertainty management resulted in 84% prediction accuracy on a training data set. The approach was then applied to the whole study area to obtain the seismic vulnerability map of Tehran. A sensitivity analysis showed that earthquake intensity is the most effective criterion in the seismic vulnerability assessment of Tehran.

  10. Rough Sets and Intelligent Systems - Professor Zdzisław Pawlak in Memoriam Volume 2

    CERN Document Server

    Suraj, Zbigniew

    2013-01-01

    This book is dedicated to the memory of Professor Zdzisław Pawlak, who passed away almost six years ago. He was the founder of the Polish school of Artificial Intelligence and one of the pioneers in Computer Engineering and Computer Science with worldwide influence. He was a truly great scientist, researcher, teacher and human being. This book, prepared in two volumes, contains more than 50 chapters. This demonstrates that the scientific approaches discovered by Professor Zdzisław Pawlak, especially the rough set approach as a tool for dealing with imperfect knowledge, are vivid and intensively explored by many researchers in many places throughout the world. The submitted papers prove that interest in rough set research is growing, and that it is possible to see many new excellent results both on the theoretical foundations and on applications of rough sets, alone or in combination with other approaches. We are proud to offer the readers this book.

  11. Rough Sets and Intelligent Systems - Professor Zdzisław Pawlak in Memoriam Volume 1

    CERN Document Server

    Suraj, Zbigniew

    2013-01-01

    This book is dedicated to the memory of Professor Zdzisław Pawlak, who passed away almost six years ago. He was the founder of the Polish school of Artificial Intelligence and one of the pioneers in Computer Engineering and Computer Science with worldwide influence. He was a truly great scientist, researcher, teacher and human being. This book, prepared in two volumes, contains more than 50 chapters. This demonstrates that the scientific approaches discovered by Professor Zdzisław Pawlak, especially the rough set approach as a tool for dealing with imperfect knowledge, are vivid and intensively explored by many researchers in many places throughout the world. The submitted papers prove that interest in rough set research is growing, and that it is possible to see many new excellent results both on the theoretical foundations and on applications of rough sets, alone or in combination with other approaches. We are proud to offer the readers this book.

  12. A Dual Hesitant Fuzzy Multigranulation Rough Set over Two-Universe Model for Medical Diagnoses

    Science.gov (United States)

    Zhang, Chao; Li, Deyu; Yan, Yan

    2015-01-01

    In medical science, disease diagnosis is one of the difficult tasks for medical experts, who are confronted with challenges in dealing with a lot of uncertain medical information. Moreover, different medical experts might express their own thoughts about the medical knowledge base, which slightly differ from those of other experts. Thus, to solve the problems of uncertain data analysis and group decision making in disease diagnoses, we propose a new rough set model called the dual hesitant fuzzy multigranulation rough set over two universes by combining the dual hesitant fuzzy set and multigranulation rough set theories. In the framework of our study, both the definition and some basic properties of the proposed model are presented. Finally, we give a general approach which is applied to a decision making problem in disease diagnoses, and the effectiveness of the approach is demonstrated by a numerical example. PMID:26858772

  13. An IDS Alerts Aggregation Algorithm Based on Rough Set Theory

    Science.gov (United States)

    Zhang, Ru; Guo, Tao; Liu, Jianyi

    2018-03-01

    Within a system in which several IDSs have been deployed, a great number of alerts can be triggered by a single security event, making real alerts harder to find. To deal with redundant alerts, we propose a scheme based on rough set theory. Using basic concepts from rough set theory, the importance of the attributes in alerts is calculated first. With the attribute importances, we compute the similarity of two alerts, which is compared with a pre-defined threshold to determine whether the two alerts can be aggregated or not. The time interval is also taken into consideration: the allowed time interval is computed individually for each alert type, since different types of alerts may have different time gaps between two alerts. At the end of this paper, we apply the proposed scheme to the DARPA98 dataset; the experimental results show that our scheme can efficiently reduce the redundancy of alerts, so that administrators of the security system can avoid wasting time on useless alerts.
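
    A minimal sketch of the aggregation decision described above. The attribute weights would come from the rough-set importance computation in the paper; here the weights, threshold, and alert fields are all invented for illustration.

```python
# Sketch: weighted similarity between two alerts compared against a
# threshold, plus a type-dependent time-gap check. All constants and
# field names are hypothetical.

WEIGHTS = {"src_ip": 0.4, "dst_ip": 0.3, "sig_id": 0.2, "proto": 0.1}
THRESHOLD = 0.7
MAX_GAP_S = 60          # allowed time interval (type-dependent in the paper)

def similarity(a1: dict, a2: dict) -> float:
    """Sum the weights of the attributes on which the two alerts agree."""
    return sum(w for attr, w in WEIGHTS.items() if a1[attr] == a2[attr])

def can_aggregate(a1: dict, a2: dict) -> bool:
    return (similarity(a1, a2) >= THRESHOLD
            and abs(a1["ts"] - a2["ts"]) <= MAX_GAP_S)

a = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "sig_id": 77, "proto": "tcp", "ts": 100}
b = {"src_ip": "10.0.0.5", "dst_ip": "10.0.0.9", "sig_id": 77, "proto": "udp", "ts": 130}
print(can_aggregate(a, b))   # True: similarity 0.9 >= 0.7 and gap 30 s <= 60 s
```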

  14. Recent Fuzzy Generalisations of Rough Sets Theory: A Systematic Review and Methodological Critique of the Literature

    Directory of Open Access Journals (Sweden)

    Abbas Mardani

    2017-01-01

    Rough set theory has been used extensively in fields of complexity, cognitive science, and artificial intelligence, especially in numerous areas such as expert systems, knowledge discovery, information systems, inductive reasoning, intelligent systems, data mining, pattern recognition, decision-making, and machine learning. Recently proposed rough set models have been developed by applying different fuzzy generalisations. Currently, there is no systematic literature review and classification of these new generalisations of rough set models. Therefore, this review study attempts to provide a comprehensive systematic review of the methodologies and applications of recent generalisations discussed in the area of fuzzy-rough set theory. To this end, the Web of Science database was chosen to select the relevant papers. The systematic and meta-analysis approach known as “PRISMA” was applied, and the selected articles were classified based on the author and year of publication, author nationalities, application field, type of study, study category, study contribution, and journal in which the articles appeared. Based on the results of this review, we found that there are many challenging issues related to the different application areas of fuzzy-rough set theory which can motivate future research studies.

  15. Rough sets applied in sublattices and ideals of lattices

    Directory of Open Access Journals (Sweden)

    R. Ameri

    2015-12-01

    The purpose of this paper is the study of rough hyperlattices. In this regard, we introduce rough sublattices and rough ideals of lattices. We proceed by obtaining lower and upper approximations in these lattices.

  16. Preference Mining Using Neighborhood Rough Set Model on Two Universes.

    Science.gov (United States)

    Zeng, Kai

    2016-01-01

    Preference mining plays an important role in e-commerce and video websites for enhancing user satisfaction and loyalty. Some classical methods are not applicable to the cold-start problem, when the user or the item is new. In this paper, we propose a new model, called the parametric neighborhood rough set on two universes (NRSTU), to describe the user and item data structures. Furthermore, the neighborhood lower approximation operator is used to define the preference rules. Then, we provide the means for recommending items to users by using these rules. Finally, we give an experimental example to show the details of NRSTU-based preference mining for the cold-start problem. The parameters of the model are also discussed. The experimental results show that the proposed method presents an effective solution for preference mining. In particular, NRSTU improves the recommendation accuracy by about 19% compared to the traditional method.

  17. Intelligent fault diagnosis of rolling bearing based on kernel neighborhood rough sets and statistical features

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Xiao Ran; Zhang, You Yun; Zhu, Yong Sheng [Xi'an Jiaotong Univ., Xi'an (China)]

    2012-09-15

    Intelligent fault diagnosis benefits from efficient feature selection. Neighborhood rough sets are effective in feature selection; however, determining the neighborhood value accurately remains a challenge. A wrapper feature selection algorithm is designed by combining the kernel method and neighborhood rough sets to self-adaptively select sensitive features. The combination effectively overcomes the shortcomings of selecting the neighborhood value in previous applications. Statistical features of the time and frequency domains are used to describe the characteristics of the rolling bearing so that the intelligent fault diagnosis approach can work. Three classification algorithms, namely, classification and regression tree (CART), commercial version 4.5 (C4.5), and radial basis function support vector machines (RBFSVM), are used to test UCI datasets and 10 fault datasets of rolling bearings. The results indicate that the presented diagnostic approach can effectively select the sensitive fault features and simultaneously identify the type and degree of the fault.
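
    A minimal sketch of the neighborhood rough set idea behind this kind of feature selection (not the paper's kernel-adaptive version): a sample lies in the positive region when every sample within distance δ of it, in the candidate feature subspace, shares its label. The feature values and δ below are invented.

```python
import math

# Sketch: neighborhood dependency of the decision on a candidate feature
# subset. The paper additionally uses a kernel method to set delta
# adaptively; here delta and the data are illustrative only.

def neighbors(i, X, delta):
    return [j for j in range(len(X)) if math.dist(X[i], X[j]) <= delta]

def dependency(X, y, delta):
    """Fraction of samples whose whole neighborhood agrees with their label."""
    pos = sum(all(y[j] == y[i] for j in neighbors(i, X, delta))
              for i in range(len(X)))
    return pos / len(X)

X = [(0.1, 0.2), (0.15, 0.22), (0.9, 0.8), (0.88, 0.79)]   # two features
y = ["normal", "normal", "fault", "fault"]
print(dependency(X, y, delta=0.2))   # 1.0: these features separate the classes
```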

  18. Intelligent fault diagnosis of rolling bearing based on kernel neighborhood rough sets and statistical features

    International Nuclear Information System (INIS)

    Zhu, Xiao Ran; Zhang, You Yun; Zhu, Yong Sheng

    2012-01-01

    Intelligent fault diagnosis benefits from efficient feature selection. Neighborhood rough sets are effective in feature selection; however, determining the neighborhood value accurately remains a challenge. A wrapper feature selection algorithm is designed by combining the kernel method and neighborhood rough sets to self-adaptively select sensitive features. The combination effectively overcomes the shortcomings of selecting the neighborhood value in previous applications. Statistical features of the time and frequency domains are used to describe the characteristics of the rolling bearing so that the intelligent fault diagnosis approach can work. Three classification algorithms, namely, classification and regression tree (CART), commercial version 4.5 (C4.5), and radial basis function support vector machines (RBFSVM), are used to test UCI datasets and 10 fault datasets of rolling bearings. The results indicate that the presented diagnostic approach can effectively select the sensitive fault features and simultaneously identify the type and degree of the fault

  19. Merger and Acquisition Target Selection Based on Interval Neutrosophic Multigranulation Rough Sets over Two Universes

    Directory of Open Access Journals (Sweden)

    Chao Zhang

    2017-07-01

    As a significant business activity, merger and acquisition (M&A) generally means transactions in which the ownership of companies, other business organizations or their operating units is transferred or combined. In a typical M&A procedure, M&A target selection is an important issue that tends to exert an increasingly significant impact on different business areas. Although some research based on fuzzy methods has explored this issue, such methods can only deal with incomplete and uncertain information, not with the inconsistent and indeterminate information that exists universally in the decision making process. Additionally, it is advantageous to solve M&A problems in a group decision making context. In order to handle these difficulties in the M&A target selection background, we introduce a novel rough set model by combining interval neutrosophic sets (INSs) with multigranulation rough sets over two universes, called an interval neutrosophic (IN) multigranulation rough set over two universes. Then, we discuss the definition and some fundamental properties of the proposed model. Finally, we establish decision making rules and computing approaches for the proposed model in the M&A target selection background, and the effectiveness of the decision making approach is demonstrated by an illustrative case analysis.

  20. Rough Standard Neutrosophic Sets: An Application on Standard Neutrosophic Information Systems

    Directory of Open Access Journals (Sweden)

    Nguyen Xuan Thao

    2016-12-01

    A rough fuzzy set is the result of the approximation of a fuzzy set with respect to a crisp approximation space. It is a mathematical tool for knowledge discovery in fuzzy information systems. In this paper, we introduce the concepts of rough standard neutrosophic sets and the standard neutrosophic information system, and give some results on knowledge discovery in standard neutrosophic information systems based on rough standard neutrosophic sets.

  1. Knowledge Reduction Based on Divide and Conquer Method in Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Feng Hu

    2012-01-01

    The divide and conquer method is a typical granular computing method using multiple levels of abstraction and granulation. Although some achievements based on the divide and conquer method have been made in rough set theory, systematic methods for knowledge reduction based on divide and conquer are still absent. In this paper, knowledge reduction approaches based on the divide and conquer method, under the equivalence relation and under the tolerance relation, are presented, respectively. After that, a systematic approach, named the abstract process for knowledge reduction based on the divide and conquer method in rough set theory, is proposed. Based on the presented approach, two algorithms for knowledge reduction, including an algorithm for attribute reduction and an algorithm for attribute value reduction, are given. Experimental evaluations are carried out to test the methods on UCI data sets and the KDDCUP99 data sets. The experimental results illustrate that the proposed approaches are efficient at processing large data sets, with good recognition rates compared with KNN, SVM, C4.5, Naive Bayes, and CART.

  2. Revitalizing the setting approach

    DEFF Research Database (Denmark)

    Bloch, Paul; Toft, Ulla; Reinbach, Helene Christine

    2014-01-01

    Background: The concept of health promotion rests on aspirations aiming at enabling people to increase control over and improve their health. Health promotion action is facilitated in settings such as schools, homes and work places. As a contribution to the promotion of healthy lifestyles, we have further developed the setting approach in an effort to harmonise it with contemporary realities (and complexities) of health promotion and public health action. The paper introduces a modified concept, the supersetting approach, which builds on the optimised use of diverse and valuable resources embedded in local community settings and on the strengths of social interaction and local ownership as drivers of change processes. Interventions based on a supersetting approach are first and foremost characterised by being integrated, but also participatory, empowering, context-sensitive and knowledge-based. The supersetting approach is based on ecological and whole-systems thinking, and stipulates important principles and values of integration, participation, empowerment, context and knowledge-based development.

  3. Study of different effectives on wind energy by using mathematical methods and rough set theory

    International Nuclear Information System (INIS)

    Marrouf, A.A.

    2009-01-01

    Analysis of data plays an important role in all fields of life; a huge amount of data results from experiments in all scientific and social sciences. The analysis of these data was traditionally carried out by statistical methods, and its representation depended on classical Euclidean geometric concepts. In the 21st century, new directions for data analysis have appeared in applications. These directions depend basically on modern mathematical theories. The quality of data and information can be characterized as interfering, and one is unable to distinguish between its vocabularies. Topological methods are the most suitable for this process of analysis for decision making. At the end of the 20th century, a new topological method appeared, known as the Rough Set Theory approach, which does not depend on external suppositions (it "lets the data speak") and is suitable for all types of data. The theory was originated by Pawlak in 1982 [48] as the result of a long-term program of fundamental research on the logical properties of information systems, carried out by him and a group of logicians from the Polish Academy of Sciences and the University of Warsaw, Poland. Various real-life applications of rough sets have shown their usefulness in many domains, such as civil engineering, medical data analysis, generation of a cement kiln control algorithm from observation of the stoker's actions, vibration analysis, aircraft pilot performance evaluation, hydrology, pharmacology, image processing and ecology. The Variable Precision Rough Set (VPRS) model was proposed by W. Ziarko [80]. It is a new generalization of the rough set model, aimed at handling uncertain information, and is directly derived from the original model without any additional assumptions. Topology is a mathematical tool to study information systems and variable precision rough sets. Ziarko presumed that the notion of variable precision rough sets depends on special types of topological spaces. In this space, the families of

  4. Factors Analysis And Profit Achievement For Trading Company By Using Rough Set Method

    Directory of Open Access Journals (Sweden)

    Muhammad Ardiansyah Sembiring

    2017-06-01

    This research analyses the financial reports of a trading company, which are intimately related to the factors that determine the company's profit. The result of this research is new knowledge in the form of performing rules. The discussion follows the data mining process using the Rough Set method, and Rough Set analysis is used to evaluate the performance of the results. This research will assist the manager of the company in drawing intact and objective conclusions. The Rough Set method defines the rule discovery process, starting from the formation of the Decision System, Equivalence Classes, the Discernibility Matrix, the Discernibility Matrix Modulo D, Reduction, and General Rules. The Rough Set method is an effective model for performing this analysis in the company. Keywords: Data Mining, General Rules, Profit, Rough Set.
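
    Of the steps listed above, the discernibility matrix is the easiest to illustrate: it records, for each pair of objects with different decisions, the condition attributes that distinguish them (reducts are then obtained from this matrix). The sketch below uses a hypothetical decision table.

```python
from itertools import combinations

# Sketch of the discernibility-matrix step in the pipeline the abstract
# lists (decision system -> equivalence classes -> discernibility matrix
# -> reduction -> rules). Table values are invented for illustration.

table = {                       # object -> (condition attributes, decision)
    "o1": ({"price": "up",   "volume": "high"}, "profit"),
    "o2": ({"price": "up",   "volume": "low"},  "profit"),
    "o3": ({"price": "down", "volume": "high"}, "loss"),
}

def discernibility_matrix(table):
    matrix = {}
    for (x, (cx, dx)), (y, (cy, dy)) in combinations(table.items(), 2):
        if dx != dy:            # only pairs with different decisions matter
            matrix[(x, y)] = {a for a in cx if cx[a] != cy[a]}
    return matrix

print(discernibility_matrix(table))
# {('o1', 'o3'): {'price'}, ('o2', 'o3'): {'price', 'volume'}}
```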

  5. Prediction of protein interaction hot spots using rough set-based multiple criteria linear programming.

    Science.gov (United States)

    Chen, Ruoying; Zhang, Zhiwang; Wu, Di; Zhang, Peng; Zhang, Xinyang; Wang, Yong; Shi, Yong

    2011-01-21

    Protein-protein interactions are fundamentally important in many biological processes, and there is a pressing need to understand the principles of protein-protein interactions. Mutagenesis studies have found that only a small fraction of surface residues, known as hot spots, are responsible for the physical binding in protein complexes. However, revealing hot spots by mutagenesis experiments is usually time-consuming and expensive. In order to complement the experimental efforts, we propose a new computational approach in this paper to predict hot spots. Our method, Rough Set-based Multiple Criteria Linear Programming (RS-MCLP), integrates rough set theory and multiple criteria linear programming to choose dominant features and computationally predict hot spots. Our approach is benchmarked on a dataset of 904 alanine-mutated residues, and the results show that our RS-MCLP method performs better than other methods, e.g., MCLP, Decision Tree, Bayes Net, and the existing HotSprint database. In addition, we reveal several biological insights based on our analysis. We find that four features (the change of accessible surface area, the percentage change of accessible surface area, the size of a residue, and atomic contacts) are critical in predicting hot spots. Furthermore, by analyzing the distribution of amino acids we find that three residues (Tyr, Trp, and Phe) are abundant in hot spots. Copyright © 2010 Elsevier Ltd. All rights reserved.

  6. Rough set semantics for identity on the Web

    NARCIS (Netherlands)

    Beek, Wouter; Schlobach, Stefan; van Harmelen, Frank

    2014-01-01

    Identity relations are at the foundation of many logic-based knowledge representations. We argue that the traditional notion of equality is unsuited to many realistic knowledge representation settings. The classical interpretation of equality is too strong when the equality statements are re-used

  7. A Novel Rough Set Reduct Algorithm for Medical Domain Based on Bee Colony Optimization

    OpenAIRE

    Suguna, N.; Thanushkodi, K.

    2010-01-01

    Feature selection refers to the problem of selecting relevant features which produce the most predictive outcome. In particular, the feature selection task arises in datasets containing a huge number of features. Rough set theory has been one of the most successful methods used for feature selection. However, this method is still not able to find optimal subsets. This paper proposes a new feature selection method based on rough set theory hybridized with Bee Colony Optimization (BCO) in an attempt...

  8. Application of preprocessing filtering on Decision Tree C4.5 and rough set theory

    Science.gov (United States)

    Chan, Joseph C. C.; Lin, Tsau Y.

    2001-03-01

    This paper compares two artificial intelligence methods, the Decision Tree C4.5 and Rough Set Theory, on stock market data. The Decision Tree C4.5 is reviewed alongside Rough Set Theory. An enhanced window application is developed to facilitate pre-processing filtering by introducing feature (attribute) transformations, which allow users to input formulas and create new attributes. The application also produces three varieties of data set: with delaying, averaging, and summation. The results demonstrate the improvement from pre-processing by applying feature (attribute) transformations with Decision Tree C4.5. Moreover, the comparison between Decision Tree C4.5 and Rough Set Theory covers clarity, automation, accuracy, dimensionality, raw data, and speed, and is supported by the rule sets generated by both algorithms on three different sets of data.
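
    The three pre-processing variants mentioned (delaying, averaging, summation) can be sketched as simple series transformations; the window sizes and prices below are illustrative only, and the paper's exact formulas may differ.

```python
# Sketch of three feature (attribute) transformations on a univariate
# series, producing new attributes by delaying, averaging, and summation.
# Window sizes and values are hypothetical.

def delayed(series, lag):
    """Value observed `lag` steps earlier (None where undefined)."""
    return [None] * lag + series[:-lag]

def moving_average(series, window):
    return [sum(series[i - window + 1:i + 1]) / window if i >= window - 1 else None
            for i in range(len(series))]

def moving_sum(series, window):
    return [sum(series[i - window + 1:i + 1]) if i >= window - 1 else None
            for i in range(len(series))]

closes = [10.0, 10.5, 10.2, 10.8, 11.0]
print(delayed(closes, 1))          # yesterday's close as a new attribute
print(moving_average(closes, 3))   # smoothed trend attribute
print(moving_sum(closes, 3))
```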

  9. Uncertainty Modeling for Database Design using Intuitionistic and Rough Set Theory

    Science.gov (United States)

    2009-01-01

    Definition. An intuitionistic rough relation R is a subset of the set cross product P(D1) × P(D2) × ··· × P(Dm) × Dμ × Dν. For a specific relation R, ... such that aj ∈ dij for all j. The interpretation space is the cross product D1 × D2 × ··· × Dm × Dμ × Dν, but is limited for a given relation R to the set ...

  10. USE OF ROUGH SETS AND SPECTRAL DATA FOR BUILDING PREDICTIVE MODELS OF REACTION RATE CONSTANTS

    Science.gov (United States)

    A model for predicting the log of the rate constants for alkaline hydrolysis of organic esters has been developed with the use of gas-phase mid-infrared library spectra and a rule-building software system based on the mathematical theory of rough sets. A diverse set of 41 esters ...

  11. Rough set theory and its application in fault diagnosis in Nuclear Power Plant

    International Nuclear Information System (INIS)

    Chen Zhihui; Nuclear Power Inst. of China, Chengdu; Xia Hong; Huang Wei

    2006-01-01

    Rough Set theory is a mathematical theory that can express and deal with vague and uncertain data. The fault features of a Nuclear Power Plant contain complicated and uncertain data, so Rough Set theory can be introduced to analyze and process the historical data and find the rules for fault diagnosis of a Nuclear Power Plant. This paper briefly introduces Rough Set theory and Knowledge Acquisition, and describes the reduction algorithm based on the discernibility matrix and its application in fault diagnosis to generate diagnosis rules. Using these rules, three kinds of model faults have been diagnosed correctly. The conclusion can be drawn that this method can reduce the redundancy of the fault features and simplify and optimize the diagnosis rules. (authors)

  12. The Dynamic Evaluation of Enterprise's Strategy Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    刘恒江; 陈继祥

    2003-01-01

    This paper presents a dynamic evaluation of an enterprise's strategy, suitable for dealing with the complex and dynamic problems of strategic evaluation. Rough Set Theory is a powerful mathematical tool for handling the vagueness and uncertainty of dynamic evaluation. By applying Rough Set Theory, this paper computes the significance and weight of each evaluation criterion and helps to place the evaluation emphasis on the main, effective criteria. From the reduced decision table, evaluators can obtain decision rules which direct them to give a judgment or suggestion on the strategy. The whole evaluation process is driven by data, so the results are certain and reasonable.
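
    One standard rough set route to such criterion weights (the paper's exact formula may differ) is the dependency degree γ_C(D) = |POS_C(D)| / |U|, with the significance of a criterion taken as the drop in dependency when it is removed. A sketch with invented values:

```python
from collections import defaultdict

# Sketch: dependency degree of the decision on a criterion set, and
# criterion significance as the dependency drop on removal. Decision-table
# values are invented for illustration.

table = {   # object -> ({criterion: value}, decision)
    "e1": ({"cost": "low",  "risk": "low"},  "good"),
    "e2": ({"cost": "low",  "risk": "high"}, "bad"),
    "e3": ({"cost": "high", "risk": "low"},  "good"),
    "e4": ({"cost": "high", "risk": "high"}, "bad"),
}

def dependency(table, attrs):
    blocks = defaultdict(list)
    for name, (cond, dec) in table.items():
        blocks[tuple(cond[a] for a in attrs)].append(dec)
    # positive region: blocks whose members all share one decision
    pos = sum(len(ds) for ds in blocks.values() if len(set(ds)) == 1)
    return pos / len(table)

full = dependency(table, ["cost", "risk"])
for a in ["cost", "risk"]:
    rest = [b for b in ["cost", "risk"] if b != a]
    print(a, "significance:", full - dependency(table, rest))
# risk alone determines the decision here, so cost has significance 0.0
```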

  13. A new intelligent classifier for breast cancer diagnosis based on a rough set and extreme learning machine: RS + ELM

    OpenAIRE

    KAYA, Yılmaz

    2014-01-01

    Breast cancer is one of the leading causes of death among women all around the world. Therefore, true and early diagnosis of breast cancer is an important problem. The rough set (RS) and extreme learning machine (ELM) methods were used together in this study for the diagnosis of breast cancer. Unnecessary attributes were discarded from the dataset by means of the RS approach. The classification process by means of ELM was performed using the remaining attributes. The Wisconsin B

  14. Modeling of Two-Phase Flow in Rough-Walled Fracture Using Level Set Method

    Directory of Open Access Journals (Sweden)

    Yunfeng Dai

    2017-01-01

    Full Text Available To describe accurately the flow characteristics of fracture-scale displacements of immiscible fluids, an incompressible two-phase (crude oil and water) flow model incorporating interfacial forces and nonzero contact angles is developed. The roughness of the two-dimensional synthetic rough-walled fractures is controlled with different fractal dimension parameters. Described by the Navier–Stokes equations, the moving interface between crude oil and water is tracked using the level set method. The method accounts for differences in the densities and viscosities of crude oil and water and includes the effect of interfacial force. The wettability of the rough fracture wall is taken into account by defining the contact angle and slip length. The curve of invasion pressure versus water volume fraction is generated by modeling two-phase flow during a sudden drainage. The volume fraction of water retained in the rough-walled fracture is calculated by integrating the water volume and dividing by the total cavity volume of the fracture while the two-phase flow is quasistatic. The effects of the invasion pressure of crude oil, the roughness of the fracture wall, and the wettability of the wall on two-phase flow in a rough-walled fracture are evaluated.

  15. Analysis of the experimental data of air pollution using atmospheric dispersion modeling and rough set

    International Nuclear Information System (INIS)

    Halfa, I.K.I

    2008-01-01

    This thesis contains four chapters and a list of references. In chapter 1, we present a brief survey of atmospheric concepts and topological methods for data analysis. In section 1.1, we give a general introduction. We recall some atmospheric fundamentals in Section 1.2. Section 1.3 presents the concepts of modern topological methods for data analysis. In chapter 2, we study the properties of the atmosphere and focus on the concept of rough sets and their properties. This concept has been applied to analyze the atmospheric data. In section 2.1, we give a general introduction to the concept of rough sets and the properties of the atmosphere. Section 2.2 focuses on the concept of rough sets and their properties, and on the generalization of the approximations of rough set theory using topological spaces. In section 2.3, we study the stability of the atmosphere at the Inshas location for all seasons using different schemes, and compare these schemes using statistical and rough set methods. In section 2.4, we introduce the mixing height of the plume for all seasons. Section 2.5 introduces the seasonal surface-layer turbulence processes at the Inshas location. Section 2.6 gives a comparison between the seasonal surface-layer turbulence processes at the Inshas location and at different locations using rough set theory. In chapter 3, we focus on the concept of variable precision rough sets (VPRS) and their properties, and use them to compare the estimated and observed concentrations of air pollution at the Inshas location. In Section 3.1, we give a general introduction to VPRS and air pollution. In Section 3.2, we focus on the concept and properties of VPRS. In Section 3.3, we introduce a method to estimate the concentration of air pollution at the Inshas location using a Gaussian plume model. Section 3.4 presents the experimental data. The estimated data are compared with the observed data using statistical methods in Section 3.5. In Section 3

  16. Risk Decision Making Based on Decision-theoretic Rough Set: A Three-way View Decision Model

    OpenAIRE

    Huaxiong Li; Xianzhong Zhou

    2011-01-01

    Rough set theory has witnessed great success in data mining and knowledge discovery, and it provides good support for decision making on given data. However, a practical decision problem always shows diversity under the same circumstances according to the different personalities of the decision makers. A single decision model cannot provide a full description of such diverse decisions. In this article, a review of Pawlak rough set models and probabilistic rough set models is presented, and a ...

  17. Fault Diagnosis Method of Polymerization Kettle Equipment Based on Rough Sets and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Shu-zhi Gao

    2013-01-01

    Full Text Available Polyvinyl chloride (PVC) polymerization is a typical complex controlled process, with features such as nonlinearity, multiple variables, strong coupling, and large time delay. Aiming at the real-time fault diagnosis and optimized monitoring requirements of the large-scale key polymerization equipment in PVC production, a real-time fault diagnosis strategy is proposed based on rough set theory with an improved discernibility matrix and BP neural networks. The improved discernibility matrix is adopted to reduce the attributes of rough sets in order to effectively decrease the input dimensionality of the fault characteristics. A Levenberg-Marquardt BP neural network is trained to diagnose the polymerization faults according to the reduced decision table, which realizes the nonlinear mapping from the fault symptom set to the polymerization fault set. Simulation experiments are carried out with industrial historical data to show the effectiveness of the proposed rough set neural network fault diagnosis method. The proposed strategy greatly increases the accuracy and efficiency of the polymerization fault diagnosis system.

  18. Noninvasive evaluation of mental stress using by a refined rough set technique based on biomedical signals.

    Science.gov (United States)

    Liu, Tung-Kuan; Chen, Yeh-Peng; Hou, Zone-Yuan; Wang, Chao-Chih; Chou, Jyh-Horng

    2014-06-01

    Evaluating and treating stress can substantially benefit people with health problems. Currently, mental stress is evaluated using medical questionnaires. However, the accuracy of this evaluation method is questionable because of variations caused by factors such as cultural differences and individual subjectivity. Measuring biomedical signals is an effective method for estimating mental stress that enables this problem to be overcome. However, the relationship between levels of mental stress and biomedical signals remains poorly understood. A refined rough set algorithm is proposed to determine the relationship between mental stress and biomedical signals; this algorithm combines rough set theory with a hybrid Taguchi-genetic algorithm and is called RS-HTGA. Two parameters were used for evaluating the performance of the proposed RS-HTGA method. A dataset obtained from a practice clinic comprising 362 cases (196 male, 166 female) was adopted to evaluate the performance of the proposed approach. The empirical results indicate that the proposed method can achieve acceptable accuracy in medical practice. Furthermore, the proposed method was successfully used to identify the relationship between mental stress levels and biomedical signals. In addition, a comparison between the RS-HTGA and a support vector machine (SVM) method indicated that both methods yield good results. The total averages for sensitivity, specificity, and precision were greater than 96%; the results indicated that both algorithms produce highly accurate results, but a substantial difference in discrimination existed among people with Phase 0 stress: the SVM algorithm achieves 89% and the RS-HTGA achieves 96%. Therefore, the RS-HTGA is superior to the SVM algorithm. The kappa test results for both algorithms were greater than 0.936, indicating high accuracy and consistency. The areas under the receiver operating characteristic curve for both the RS-HTGA and the SVM method were greater than 0.77, indicating

  19. Hyperspectral band selection based on consistency-measure of neighborhood rough set theory

    International Nuclear Information System (INIS)

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen

    2016-01-01

    Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on consistency-measure of neighborhood rough set theory (CMNRS) was proposed to select informative bands from hyperspectral images. A decision-making information system was established by the reflection spectrum of soybeans’ hyperspectral data between 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region, but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy. (paper)
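
    The forward greedy search in this record can be sketched generically; the snippet below is an illustration rather than the authors' implementation, and a toy scoring function stands in for the neighborhood consistency-measure (CMNRS).

        def greedy_forward_select(bands, significance, max_bands):
            """Grow a band subset one band at a time, always adding the
            band that most improves the evaluation function."""
            selected = []
            while len(selected) < max_bands:
                best_band, best_score = None, significance(selected)
                for b in bands:
                    if b in selected:
                        continue
                    score = significance(selected + [b])
                    if score > best_score:
                        best_band, best_score = b, score
                if best_band is None:  # no band improves the measure: stop
                    break
                selected.append(best_band)
            return selected

        # Toy stand-in for the consistency measure: favours low band indices.
        def toy_measure(subset):
            return sum(1.0 / (1 + b) for b in subset)

        print(greedy_forward_select(range(10), toy_measure, 3))  # [0, 1, 2]

    The post-pruning step described above would then remove bands whose deletion does not lower the measure, yielding a minimal subset.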

  20. A Novel Method for Predicting Anisakid Nematode Infection of Atlantic Cod Using Rough Set Theory.

    Science.gov (United States)

    Wąsikowska, Barbara; Sobecka, Ewa; Bielat, Iwona; Legierko, Monika; Więcaszek, Beata

    2018-03-01

    Atlantic cod (Gadus morhua L.) is one of the most important fish species in the fisheries industries of many countries; however, these fish are often infected with parasites. The detection of pathogenic larval nematodes is usually performed in fish processing facilities by visual examination using candling or by digesting muscles in artificial digestive juices, but these methods are both time and labor intensive. This article presents an innovative approach to the analysis of cod parasites from both the Atlantic and Baltic Sea areas through the application of rough set theory, one of the methods of artificial intelligence, for the prediction of food safety in a food production chain. The parasitological examinations focused on nematode larvae pathogenic to humans, e.g., Anisakis simplex, Contracaecum osculatum, and Pseudoterranova decipiens. The analysis allowed the identification of protocols with which it is possible to make preliminary estimates of the quantity and quality of parasites found in cod catches before detailed analyses are performed. The results indicate that the method used can be an effective analytical tool for these types of data. To achieve this goal, a database is needed that contains the patterns of parasite infection intensity and the condition of commercial fish species in different localities within their distributions.

  1. A ROUGH SET DECISION TREE BASED MLP-CNN FOR VERY HIGH RESOLUTION REMOTELY SENSED IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    C. Zhang

    2017-09-01

    Full Text Available Recent advances in remote sensing have witnessed a great amount of very high resolution (VHR) images acquired at sub-metre spatial resolution. These VHR remotely sensed data pose enormous challenges for processing, analysing and classifying them effectively due to their high spatial complexity and heterogeneity. Although many computer-aided classification methods based on machine learning approaches have been developed over the past decades, most of them are developed towards pixel-level spectral differentiation, e.g. the Multi-Layer Perceptron (MLP), and are unable to exploit the abundant spatial details within VHR images. This paper introduced a rough set model as a general framework to objectively characterize the uncertainty in CNN classification results, and further partition them into correctness and incorrectness on the map. The correctly classified regions of the CNN were trusted and maintained, whereas the misclassified areas were reclassified using a decision tree with both CNN and MLP. The effectiveness of the proposed rough set decision tree based MLP-CNN was tested using an urban area at Bournemouth, United Kingdom. The MLP-CNN, well capturing the complementarity between CNN and MLP through the rough set based decision tree, achieved the best classification performance both visually and numerically. Therefore, this research paves the way to achieving fully automatic and effective VHR image classification.

  2. Method research of fault diagnosis based on rough set for nuclear power plant

    International Nuclear Information System (INIS)

    Chen Zhihui; Xia Hong

    2005-01-01

    Nuclear power equipment fault features are complicated and uncertain. Rough set theory can express and deal with vagueness and uncertainty, so it can be introduced into nuclear power fault diagnosis to analyze and process historical data and find the rules governing fault features. Rough set processing proceeds in steps: data preprocessing, attribute reduction, attribute value reduction, and rule generation. According to the definition and properties of the discernibility matrix, we can utilize the discernibility matrix in a reduction algorithm that performs attribute and attribute value reduction, which reduces algorithmic complexity and simplifies programming. This algorithm is applied to nuclear power fault diagnosis to generate diagnosis rules. Using these rules, we have diagnosed five kinds of model faults correctly. (authors)

  3. The prefabricated building risk decision research of DM technology on the basis of Rough Set

    Science.gov (United States)

    Guo, Z. L.; Zhang, W. B.; Ma, L. H.

    2017-08-01

    With resource crises and increasingly serious pollution, green building has been strongly advocated by most countries and has become a new building style in the construction field. Compared with traditional building, the prefabricated building has its own irreplaceable advantages but is influenced by many uncertainties. So far, the majority of scholars worldwide have approached it through qualitative research. This paper expounds the significance of the prefabricated building in depth. On the premise of the existing research methods, combined with rough set theory, this paper redefines the factors that affect prefabricated building risk. Moreover, it quantifies the risk factors and establishes an expert knowledge base through assessment, then reduces the redundant attributes and attribute values of the risk factors, finally forming the simplest decision rules. These simplest decision rules, based on the DM technology of rough set theory, provide prefabricated building with a controllable new decision-making method.

  4. Analysis of Roadway Traffic Accidents Based on Rough Sets and Bayesian Networks

    Directory of Open Access Journals (Sweden)

    Xiaoxia Xiong

    2018-02-01

    Full Text Available The paper integrates Rough Sets (RS) and Bayesian Networks (BN) for roadway traffic accident analysis. RS reduction of attributes is first employed to generate the key set of attributes affecting accident outcomes, which are then fed into a BN structure as nodes for BN construction and accident outcome classification. Such an RS-based BN framework combines the advantages of RS in knowledge reduction capability and of BN in describing interrelationships among different attributes. The framework is demonstrated using the 100-car naturalistic driving data from the Virginia Tech Transportation Institute to predict accident type. Comparative evaluation with the baseline BNs shows the RS-based BNs generally have higher prediction accuracy and lower network complexity with comparable prediction coverage and area under the receiver operating characteristic curve, proving that the proposed RS-based BN overall outperforms the BNs with/without traditional feature selection approaches. The proposed RS-based BN indicates that the most significant attributes affecting accident type include pre-crash manoeuvre, driver's attention from forward roadway to centre mirror, number of secondary tasks undertaken, traffic density, and relation to junction, most of which concern pre-crash driver states and driver behaviours that have not been extensively researched in the literature, and could give further insight into the nature of traffic accidents.

  5. Research of Strategic Alliance Stable Decision-making Model Based on Rough Set and DEA

    OpenAIRE

    Zhang Yi

    2013-01-01

    This article first uses rough set theory for the stability evaluation system of a strategic alliance, applying a data analysis method for reduction and eliminating redundant indexes. Six enterprises were selected as decision-making units, and data for 4 input and 2 output indexes were selected. A DEA model was used to compute the results, analyse the reasons for the poor benefit of decision-making units, and determine the direction and amount of improvement, providing a reference for alliance stability.

  6. Extraction of design rules from multi-objective design exploration (MODE) using rough set theory

    International Nuclear Information System (INIS)

    Obayashi, Shigeru

    2011-01-01

    Multi-objective design exploration (MODE) and its application for design rule extraction are presented. MODE reveals the structure of design space from the trade-off information. The self-organizing map (SOM) is incorporated into MODE as a visual data-mining tool for design space. SOM divides the design space into clusters with specific design features. The sufficient conditions for belonging to a cluster of interest are extracted using rough set theory. The resulting MODE was applied to the multidisciplinary wing design problem, which revealed a cluster of good designs, and we extracted the design rules of such designs successfully.

  7. Rough set soft computing cancer classification and network: one stone, two birds.

    Science.gov (United States)

    Zhang, Yue

    2010-07-15

    Gene expression profiling provides tremendous information to help unravel the complexity of cancer. The selection of the most informative genes from huge noise for cancer classification has taken centre stage, along with predicting the function of such identified genes and the construction of direct gene regulatory networks at different system levels with a tuneable parameter. A new study by Wang and Gotoh described a novel Variable Precision Rough Sets-rooted robust soft computing method to successfully address these problems and has yielded some new insights. The significance of this progress and its perspectives will be discussed in this article.

  8. Crop Evaluation System Optimization: Attribute Weights Determination Based on Rough Sets Theory

    Directory of Open Access Journals (Sweden)

    Ruihong Wang

    2017-01-01

    Full Text Available The present study is mainly a continuation of our previous study, which concerned the development of a crop evaluation system based on grey relational analysis. In that system, the determination of attribute weights directly affects the evaluation result. Attribute weights are usually ascertained from the decision-maker's experience. In this paper, we utilize rough set theory to calculate attribute significance and then combine it with the weights given by the decision-maker. This method comprehensively considers subjective experience and the objective situation; thus it can acquire more reasonable results. Finally, based on this method, we improve the system using ASP.NET technology.
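
    One standard way to compute the rough-set attribute significance this record mentions is the drop in the dependency degree gamma(C, D) when an attribute is removed from the condition set C. The sketch below illustrates this on made-up crop data; it is not the paper's dataset or code.

        def partitions(rows, attrs):
            """Indiscernibility classes: group objects agreeing on attrs."""
            blocks = {}
            for idx, (cond, _) in enumerate(rows):
                key = tuple(cond[a] for a in attrs)
                blocks.setdefault(key, set()).add(idx)
            return blocks.values()

        def gamma(rows, attrs):
            """Dependency degree: fraction of objects in the positive region,
            i.e. in condition classes that are pure in the decision."""
            pos = sum(len(block) for block in partitions(rows, attrs)
                      if len({rows[i][1] for i in block}) == 1)
            return pos / len(rows)

        rows = [  # (condition attributes, decision) -- hypothetical data
            ({"yield": "high", "cost": "low"},  "good"),
            ({"yield": "high", "cost": "low"},  "good"),
            ({"yield": "low",  "cost": "low"},  "poor"),
            ({"yield": "low",  "cost": "high"}, "poor"),
            ({"yield": "high", "cost": "high"}, "poor"),
        ]
        C = ["yield", "cost"]
        for a in C:
            sig = gamma(rows, C) - gamma(rows, [x for x in C if x != a])
            print(a, round(sig, 3))  # significance of each attribute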

  9. Rough Sets as a Knowledge Discovery and Classification Tool for the Diagnosis of Students with Learning Disabilities

    Directory of Open Access Journals (Sweden)

    Yu-Chi Lin

    2011-02-01

    Full Text Available Due to the implicit characteristics of learning disabilities (LDs), the diagnosis of students with learning disabilities has long been a difficult issue. Artificial intelligence techniques like artificial neural networks (ANN) and support vector machines (SVM) have been applied to the LD diagnosis problem with satisfactory outcomes. However, special education teachers or professionals tend to be skeptical of these kinds of black-box predictors. In this study, we adopt the rough set theory (RST), which can not only perform as a classifier, but may also produce meaningful explanations or rules, for the LD diagnosis application. Our experiments indicate that the RST approach is competitive as a tool for feature selection, and it performs better in terms of prediction accuracy than other rule-based algorithms such as decision tree and RIPPER algorithms. We also propose to mix samples collected from sources with different LD diagnosis procedures and criteria. By pre-processing these mixed samples with simple and readily available clustering algorithms, we are able to improve the quality and support of the rules generated by the RST. Overall, our study shows that the rough set approach, as a classification and knowledge discovery tool, may have great potential for playing an essential role in LD diagnosis.

  10. Incremental Knowledge Acquisition for WSD: A Rough Set and IL based Method

    Directory of Open Access Journals (Sweden)

    Xu Huang

    2015-07-01

    Full Text Available Word sense disambiguation (WSD) is one of the trickiest tasks in natural language processing (NLP), as it needs to take into full account all the complexities of language. Because WSD involves discovering semantic structures in unstructured text, automatic knowledge acquisition of word senses is profoundly difficult. To acquire knowledge about Chinese multi-sense verbs, we introduce an incremental machine learning method which combines the rough set method with instance-based learning. First, the context of a multi-sense verb is extracted into a table; its sense is annotated by a skilled human and stored in the same table. In this way, a decision table is formed, and rules can then be extracted within the framework of attribute-value reduction of rough sets. Instances not entailed by any rule are treated as outliers. When new instances are added to the decision table, only the newly added instances and the outliers need to be learned further; thus incremental learning is fulfilled. Experiments show that the scale of the decision table can be reduced dramatically by this method without performance decline.
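
    A toy rendering of this incremental scheme, under the simplifying assumption that a rule is an exact condition-tuple-to-sense mapping (the data and helper names below are invented, not the paper's): instances whose condition tuple carries conflicting senses become outliers, and an update relearns only the outliers plus the newly added instances.

        from collections import defaultdict

        def learn(instances):
            """Return (rules, outliers) from (condition_tuple, sense) pairs."""
            by_cond = defaultdict(set)
            for cond, sense in instances:
                by_cond[cond].add(sense)
            # A condition tuple with exactly one observed sense yields a rule;
            # tuples with conflicting senses are kept aside as outliers.
            rules = {c: next(iter(ss)) for c, ss in by_cond.items()
                     if len(ss) == 1}
            outliers = [(c, s) for c, ss in by_cond.items()
                        if len(ss) > 1 for s in ss]
            return rules, outliers

        old = [(("bank", "money"), "finance"), (("bank", "river"), "geography")]
        rules, outliers = learn(old)

        # Incremental step: only the outliers and the newly added instances
        # are learned again; existing rules stay untouched.
        new = [(("bank", "loan"), "finance")]
        new_rules, outliers = learn(outliers + new)
        rules.update(new_rules)
        print(rules)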

  11. Prediction of financial crises by means of rough sets and decision trees

    Directory of Open Access Journals (Sweden)

    Zuleyka Díaz-Martínez

    2011-03-01

    Full Text Available This paper investigates further the factors behind financial crises. Using a large sample of countries in the period 1981 to 1999, it applies two methods from Artificial Intelligence (Rough Sets theory and the C4.5 algorithm) to analyze the role of a set of macroeconomic and financial variables in explaining banking crises. These variables are both quantitative and qualitative. These methods do not require the variables or data to satisfy any assumptions, whereas traditionally employed statistical methods require the explanatory variables to satisfy statistical assumptions that rarely hold in practice, which complicates the analysis. We obtained good results based on classification accuracy (80% of correctly classified countries from an independent sample), which proves the suitability of both methods.

  12. Candidate Smoke Region Segmentation of Fire Video Based on Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Yaqin Zhao

    2015-01-01

    Full Text Available Candidate smoke region segmentation is the key link in smoke video detection; an effective and prompt method of candidate smoke region segmentation plays a significant role in a smoke recognition system. However, the interference of heavy fog and smoke-color moving objects greatly degrades recognition accuracy. In this paper, a novel method of candidate smoke region segmentation based on rough set theory is presented. First, Kalman filtering is used to update the video background in order to exclude the interference of static smoke-color objects, such as blue sky. Second, in RGB color space, smoke regions are segmented by defining the upper approximation, lower approximation, and roughness of the smoke-color distribution. Finally, in HSV color space, small smoke regions are merged using the definition of an equivalence relation so as to distinguish smoke images from heavy-fog images in terms of the variation of the V component value from the center to the edge of the smoke region. The experimental results on smoke region segmentation demonstrate the effectiveness and usefulness of the proposed scheme.
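
    The upper/lower approximation step in this record reduces to classical rough approximations over color-based equivalence classes. A minimal sketch on invented pixel data (not the paper's code):

        from collections import defaultdict

        def approximations(pixels, target):
            """pixels: {pixel_id: color_bin}; target: ids labelled smoke-like.
            Returns the (lower, upper) approximations of the target set."""
            classes = defaultdict(set)
            for pid, color in pixels.items():
                classes[color].add(pid)
            # Lower approximation: classes entirely inside the target;
            # upper approximation: classes that touch the target at all.
            lower = set().union(*[c for c in classes.values() if c <= target])
            upper = set().union(*[c for c in classes.values() if c & target])
            return lower, upper

        pixels = {1: "grey", 2: "grey", 3: "white", 4: "blue", 5: "white"}
        smoke_like = {1, 2, 3}   # pixel 3 shares its class with non-smoke 5
        lower, upper = approximations(pixels, smoke_like)
        print(lower, upper)      # {1, 2} and {1, 2, 3, 5}; boundary: {3, 5}

    The boundary region (upper minus lower) is exactly where the paper's roughness measure and HSV merging step would then decide.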

  13. Processing and filtrating of driver fatigue characteristic parameters based on rough set

    Science.gov (United States)

    Ye, Wenwu; Zhao, Xuyang

    2018-05-01

    With the rapid development of the economy, people have become increasingly affluent, and cars have become a common means of transportation in daily life. However, the problem of traffic safety is becoming more and more serious, and fatigued driving is one of the main causes of traffic accidents. Therefore, studying the detection of driver fatigue is of great importance for improving traffic safety. In determining whether a driver is fatigued, characteristic quantities related to the steering angle of the steering wheel and characteristic quantities of the driver's pulse are important indicators. Fuzzy c-means clustering is used to discretize these indexes. Because the characteristic parameters are too numerous, a rough set is used to filter these characteristics. Finally, this paper identifies the characteristics with the highest correlation with fatigued driving. It is shown that the selected characteristics are of great significance for the evaluation of fatigued driving.

  14. Rough Set Theory based prognostication of life expectancy for terminally ill patients.

    Science.gov (United States)

    Gil-Herrera, Eleazar; Yalcin, Ali; Tsalatsanis, Athanasios; Barnes, Laura E; Djulbegovic, Benjamin

    2011-01-01

    We present a novel knowledge discovery methodology that relies on Rough Set Theory to predict the life expectancy of terminally ill patients in an effort to improve the hospice referral process. Life expectancy prognostication is particularly valuable for terminally ill patients since it enables them and their families to initiate end-of-life discussions and choose the most desired management strategy for the remainder of their lives. We utilize retrospective data from 9105 patients to demonstrate the design and implementation details of a series of classifiers developed to identify potential hospice candidates. Preliminary results confirm the efficacy of the proposed methodology. We envision our work as a part of a comprehensive decision support system designed to assist terminally ill patients in making end-of-life care decisions.

  15. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus

    Directory of Open Access Journals (Sweden)

    Rahman Ali

    2015-07-01

    Full Text Available Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve the diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) restriction to one type of diabetes; (2) lack of understandability and explanatory power of the techniques and decisions; (3) limitation either to prediction purposes or to management over structured contents; and (4) lack of competence for the dimensionality and vagueness of patients' data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  16. H2RM: A Hybrid Rough Set Reasoning Model for Prediction and Management of Diabetes Mellitus.

    Science.gov (United States)

    Ali, Rahman; Hussain, Jamil; Siddiqi, Muhammad Hameed; Hussain, Maqbool; Lee, Sungyoung

    2015-07-03

    Diabetes is a chronic disease characterized by a high blood glucose level that results either from a deficiency of insulin produced by the body, or the body's resistance to the effects of insulin. Accurate and precise reasoning and prediction models greatly help physicians to improve the diagnosis, prognosis and treatment procedures of different diseases. Though numerous models have been proposed to solve issues of diagnosis and management of diabetes, they have the following drawbacks: (1) restriction to one type of diabetes; (2) lack of understandability and explanatory power of the techniques and decisions; (3) limitation either to prediction purposes or to management over structured contents; and (4) lack of competence for the dimensionality and vagueness of patients' data. To overcome these issues, this paper proposes a novel hybrid rough set reasoning model (H2RM) that resolves problems of inaccurate prediction and management of type-1 diabetes mellitus (T1DM) and type-2 diabetes mellitus (T2DM). For verification of the proposed model, experimental data from fifty patients, acquired from a local hospital in semi-structured format, is used. First, the data is transformed into structured format and then used for mining prediction rules. Rough set theory (RST) based techniques and algorithms are used to mine the prediction rules. During the online execution phase of the model, these rules are used to predict T1DM and T2DM for new patients. Furthermore, the proposed model assists physicians to manage diabetes using knowledge extracted from online diabetes guidelines. Correlation-based trend analysis techniques are used to manage diabetic observations. Experimental results demonstrate that the proposed model outperforms the existing methods with 95.9% average and balanced accuracies.

  17. Knowledge Mining from Clinical Datasets Using Rough Sets and Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Kindie Biredagn Nahato

    2015-01-01

    Full Text Available The availability of clinical datasets and knowledge mining methodologies encourages researchers to pursue research into extracting knowledge from clinical datasets. Different data mining techniques have been used for mining rules, and mathematical models have been developed to assist clinicians in decision making. The objective of this research is to build a classifier that predicts the presence or absence of a disease by learning from the minimal set of attributes extracted from the clinical dataset. In this work, the rough set indiscernibility relation method with a backpropagation neural network (RS-BPNN) is used. This work has two stages. The first stage is the handling of missing values to obtain a smooth data set and the selection of appropriate attributes from the clinical dataset by the indiscernibility relation method. The second stage is classification using a backpropagation neural network on the selected reducts of the dataset. The classifier has been tested with hepatitis, Wisconsin breast cancer, and Statlog heart disease datasets obtained from the University of California at Irvine (UCI) machine learning repository. The accuracy obtained from the proposed method is 97.3%, 98.6%, and 90.4% for hepatitis, breast cancer, and heart disease, respectively. The proposed system provides an effective classification model for clinical datasets.

  18. AN ARTIFICIAL INTELLIGENCE APPROACH FOR THE PREDICTION OF SURFACE ROUGHNESS IN CO2 LASER CUTTING

    Directory of Open Access Journals (Sweden)

    MILOŠ MADIĆ

    2012-12-01

    Full Text Available In laser cutting, cut quality is of great importance. The multiple non-linear effects of the process parameters and their interactions make it very difficult to predict cut quality. In this paper, an artificial intelligence (AI) approach was applied to predict the surface roughness in CO2 laser cutting. To this aim, an artificial neural network (ANN) model of surface roughness was developed in terms of cutting speed, laser power and assist gas pressure. The experimental results obtained from Taguchi's L25 orthogonal array were used to develop the ANN model. The ANN mathematical model of surface roughness was expressed as an explicit nonlinear function of the selected input parameters. Statistical results indicate that the ANN model can predict the surface roughness with good accuracy. It was shown that ANNs may be used as a good alternative for analyzing the effects of cutting parameters on surface roughness.
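
    For readers who want a feel for such an ANN roughness model, the scikit-learn sketch below maps the same three inputs to Ra. The training points, network size and predicted setting are invented for illustration; this is not the authors' Taguchi-designed experiment.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        X = np.array([  # [cutting speed m/min, laser power kW, gas pressure bar]
            [2.0, 1.0, 6.0], [3.0, 1.2, 6.0], [4.0, 1.5, 8.0],
            [2.5, 1.0, 8.0], [3.5, 1.4, 7.0], [4.5, 1.6, 9.0],
        ])
        y = np.array([1.9, 1.6, 1.3, 1.8, 1.5, 1.2])  # Ra in micrometres

        # Scale inputs, then fit a small one-hidden-layer network.
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(8,),
                                           max_iter=5000, random_state=0))
        model.fit(X, y)
        print(model.predict([[3.0, 1.3, 7.0]]))  # Ra for an unseen setting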

  19. Designing an Expert System for Analyzing the Energy Consumption Behavior of Employees in Organizations Using Rough Set Theory

    Directory of Open Access Journals (Sweden)

    Tooraj Karimi

    2015-06-01

    Full Text Available Understanding and changing energy consumption behavior requires extensive knowledge about the motives of the behavior. In this research, Rough Set Theory is used to investigate the energy consumption behavior of employees in organizations. Thirteen condition attributes and a decision attribute are selected and the decision system is created. The condition attributes include demographic, value, attitude and organizational characteristics of employees, and the decision attribute relates to energy consumption behavior. 482 employees were selected randomly from 37 office buildings of the Ministry of Petroleum and rough modeling was performed for them. By combining different discretization methods, reduction algorithms and rule generation methods, nine models are built using the ROSETTA software. The results show that four of the 13 condition attributes, namely "organizational citizenship", "satisfaction", "attitude toward behavior" and "lighting control", are selected as the main features of the system. After cross-validation of the various models, the model using manual discretization, genetic algorithms and the ORR approach to extract reducts has the highest accuracy and is selected as the most reliable model.

  20. Study on intelligence fault diagnosis method for nuclear power plant equipment based on rough set and fuzzy neural network

    International Nuclear Information System (INIS)

    Liu Yongkuo; Xia Hong; Xie Chunli; Chen Zhihui; Chen Hongxia

    2007-01-01

    Rough set theory and fuzzy neural networks are combined to take full advantage of both. Based on the knowledge reduction technology of the rough set method, and by drawing simple rules from a large amount of initial data, a fuzzy neural network was set up with a better topological structure, improved learning speed, accurate judgment, strong fault-tolerance, and greater practicality. In order to test the validity of the method, the inverted U-tube break accident of the steam generator and other scenarios are used as examples, and many simulation experiments are performed. The test results show that it is feasible to incorporate the fault intelligence diagnosis method based on rough sets and fuzzy neural networks in nuclear power plant equipment, and that the method is simple and convenient, with a small amount of calculation and reliable results. (authors)

  1. Rough Sets and Stomped Normal Distribution for Simultaneous Segmentation and Bias Field Correction in Brain MR Images.

    Science.gov (United States)

    Banerjee, Abhirup; Maji, Pradipta

    2015-12-01

    The segmentation of brain MR images into different tissue classes is an important task for automatic image analysis techniques, particularly due to the presence of the intensity inhomogeneity artifact in MR images. In this regard, this paper presents a novel approach for simultaneous segmentation and bias field correction in brain MR images. It judiciously integrates the concept of rough sets and the merit of a novel probability distribution, called the stomped normal (SN) distribution. The intensity distribution of a tissue class is represented by an SN distribution, where each tissue class consists of a crisp lower approximation and a probabilistic boundary region. The intensity distribution of a brain MR image is modeled as a mixture of a finite number of SN distributions and one uniform distribution. The proposed method incorporates both the expectation-maximization and hidden Markov random field frameworks to provide an accurate and robust segmentation. The performance of the proposed approach, along with a comparison with related methods, is demonstrated on a set of synthetic and real brain MR images for different bias fields and noise levels.

  2. Engineering Application Way of Faults Knowledge Discovery Based on Rough Set Theory

    International Nuclear Information System (INIS)

    Zhao Rongzhen; Deng Linfeng; Li Chao

    2011-01-01

    To address the knowledge acquisition puzzle of intelligent decision-making technology in the mechanical industry, the use of Rough Set Theory (RST) as a tool to solve the puzzle was researched, and the way to realize knowledge discovery in engineering applications is explored. A case extracting knowledge rules from a concise data table reveals some important information: knowledge discovery of the kind needed for mechanical fault diagnosis is a complicated systems engineering project. The first and most important task is to preserve the fault knowledge in a table in data form, and the data must be derived from the plant site and should be as concise as possible. Only on the basis of fault knowledge data obtained in this way can the methods and algorithms to process the data and extract knowledge rules from them by means of RST be applied. The conclusion is that fault knowledge discovery by this route is a bottom-up process, but developing advanced fault diagnosis technology this way is a large-scale, long-term knowledge engineering project, in which every step should be designed carefully according to the tool's demands. This is the basic guarantee that the knowledge rules obtained have engineering application value and that the studies have scientific significance. Accordingly, a general framework is designed for engineering applications that follows this route for developing fault knowledge discovery technology.

  3. Rough Set Theory Based Fuzzy TOPSIS on Serious Game Design Evaluation Framework

    Directory of Open Access Journals (Sweden)

    Chung-Ho Su

    2013-01-01

    Full Text Available This study presents a hybrid methodology for serious game design evaluation in which the evaluation criteria are based on meaningful learning, ARCS motivation, cognitive load, and flow theory (MACF), selected by rough set theory (RST) and experts. The purpose of this study is to develop an evaluation model with RST-based fuzzy Delphi-AHP-TOPSIS for MACF characteristics. The fuzzy Delphi method is utilized for selecting the evaluation criteria, fuzzy AHP is used for analyzing the criteria structure and determining the evaluation weights of the criteria, and fuzzy TOPSIS is applied to determine the ranking of the evaluated alternatives. A real case is also used for evaluating the MACF criteria design for four serious games, and both the practice and the evaluation of the case are explained. The results show that playfulness (C24), skills (C22), attention (C11), and personalization (C35) are the four most important criteria in the MACF selection process. The evaluation results of the case study point out that Game 1 has the best overall score (Game 1 > Game 3 > Game 2 > Game 4). Finally, the proposed evaluation framework evaluates the effectiveness and feasibility of the evaluation model and provides design criteria for relevant multimedia game design educators.
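
    The ranking step can be illustrated with the crisp TOPSIS core that the fuzzy variant extends (in fuzzy TOPSIS, crisp scores become triangular fuzzy numbers and distances are defuzzified). The decision matrix and weights below are invented, chosen only to reproduce the ordering reported above.

        import numpy as np

        scores = np.array([   # rows: Game 1..4; cols: criteria C24, C22, C11, C35
            [9.0, 8.0, 8.5, 7.0],
            [6.0, 7.0, 6.5, 6.0],
            [8.0, 7.5, 7.0, 6.5],
            [5.0, 6.0, 5.5, 6.0],
        ])
        weights = np.array([0.35, 0.30, 0.20, 0.15])  # e.g. from (fuzzy) AHP

        # Weighted, vector-normalised decision matrix.
        V = scores / np.linalg.norm(scores, axis=0) * weights
        ideal, anti = V.max(axis=0), V.min(axis=0)    # benefit criteria only
        d_plus = np.linalg.norm(V - ideal, axis=1)    # distance to ideal
        d_minus = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
        closeness = d_minus / (d_plus + d_minus)
        print(np.argsort(-closeness) + 1)             # game ranking, best first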

  4. The use of principal component, discriminate and rough sets analysis methods of radiological data

    International Nuclear Information System (INIS)

    Seddeek, M.K.; Kozae, A.M.; Sharshar, T.; Badran, H.M.

    2006-01-01

    In this work, computational methods of finding clusters of multivariate data points were explored using principal component analysis (PCA), discriminant analysis (DA) and rough set analysis (RSA). The variables were the concentrations of four natural isotopes and the texture characteristics of 100 sand samples from the coast of North Sinai, Egypt. Beach and dune sands are the two types of samples included. These methods were used to reduce the dimensionality of multivariate data and as classification and clustering methods. The results showed that the classification of sands in the environment of North Sinai depends upon the radioactivity content of the naturally occurring radioactive materials and not upon the characteristics of the sand. The application of DA enables the creation of a classification rule for sand type, and it revealed that samples with highly negative values of the first score have the highest contamination by black sand. PCA revealed that radioactivity concentrations alone can be used to predict the classification of other samples. The results of RSA showed that only one of the concentrations of 238U, 226Ra and 232Th, together with the 40K content, can characterize the clusters together with the characteristics of the sand. Both PCA and RSA lead to the following conclusion: 238U, 226Ra and 232Th behave similarly. RSA revealed that one or two of them may be omitted without affecting the body of knowledge

  5. A Rough Set-Based Model of HIV-1 Reverse Transcriptase Resistome

    Directory of Open Access Journals (Sweden)

    Marcin Kierczak

    2009-10-01

    Full Text Available Reverse transcriptase (RT) is a viral enzyme crucial for HIV-1 replication. Currently, 12 drugs are targeted against the RT. The low fidelity of RT-mediated transcription leads to the quick accumulation of drug-resistance mutations. The sequence-resistance relationship remains only partially understood. Using publicly available data collected from over 15 years of HIV proteome research, we have created a general and predictive rule-based model of HIV-1 resistance to eight RT inhibitors. Our rough set-based model considers changes in the physicochemical properties of a mutated sequence as compared to the wild-type strain. Thanks to the application of the Monte Carlo feature selection method, the model takes into account only the properties that significantly contribute to the resistance phenomenon. The obtained results show that drug resistance is determined in a more complex way than previously believed. We confirmed the importance of many resistance-associated sites, found some sites to be less relevant than formerly postulated and, more importantly, identified several previously neglected sites as potentially relevant. By mapping some of the newly discovered sites on the 3D structure of the RT, we were able to suggest possible molecular mechanisms of drug resistance. Importantly, our model has the ability to generalize predictions to previously unseen cases. The study is an example of how computational biology methods can increase our understanding of the HIV-1 resistome.

  6. Evaluating the Utility of Web-Based Consumer Support Tools Using Rough Sets

    Science.gov (United States)

    Maciag, Timothy; Hepting, Daryl H.; Slezak, Dominik; Hilderman, Robert J.

    On the Web, many popular e-commerce sites provide consumers with decision support tools to assist them in their commerce-related decision-making. Many consumers rank the utility of these tools quite highly. Data obtained from web usage mining analyses, which may provide knowledge about a user's online experiences, could help indicate the utility of these tools. This type of analysis could provide insight into whether the provided tools are adequately assisting consumers in conducting their online shopping activities or whether new or additional enhancements need consideration. Although some research in this regard has been described in previous literature, there is still much that can be done. The authors of this paper hypothesize that a measurement of consumer decision accuracy, i.e. a measurement of how well decisions match consumer preferences, could help indicate the utility of these tools. This paper describes a procedure developed towards this goal using elements of rough set theory. The authors evaluated the procedure using two support tools, one based on a tool developed by the US-EPA and the other, called cogito, developed by one of the authors. Results from the evaluation provided interesting insights into the utility of both support tools. Although it was shown that the cogito tool obtained slightly higher decision accuracy, both tools could be improved by additional enhancements. Details of the procedure developed and the results obtained from the evaluation are provided. Opportunities for future work are also discussed.

  7. In the Context of Multiple Intelligences Theory, Intelligent Data Analysis of Learning Styles Was Based on Rough Set Theory

    Science.gov (United States)

    Narli, Serkan; Ozgen, Kemal; Alkan, Huseyin

    2011-01-01

    The present study aims to identify the relationship between individuals' multiple intelligence areas and their learning styles with mathematical clarity using the concept of rough sets which is used in areas such as artificial intelligence, data reduction, discovery of dependencies, prediction of data significance, and generating decision…

  8. Capillary condensation in pores with rough walls: a density functional approach.

    Science.gov (United States)

    Bryk, P; Rzysko, W; Malijevsky, Al; Sokołowski, S

    2007-09-01

    The effect of the surface roughness of slit-like pore walls on the capillary condensation of spherical particles and short chains is studied. The gas molecules interact with the substrate by a Lennard-Jones (9,3) potential. The rough layer at each pore wall has a variable thickness and density and consists of a disordered quenched matrix of spherical particles. The system is described in the framework of a density functional approach and using computer simulations. The contribution due to attractive van der Waals interactions between adsorbate molecules is described by using the first-order mean spherical approximation and the mean-field approximation.

  9. Extraction of spatial-temporal rules from mesoscale eddies in the South China Sea Based on rough set theory

    Science.gov (United States)

    Du, Y.; Fan, X.; He, Z.; Su, F.; Zhou, C.; Mao, H.; Wang, D.

    2011-06-01

    In this paper, rough set theory is introduced to represent spatial-temporal relationships and to extract the corresponding rules from typical mesoscale-eddy states in the South China Sea (SCS). Three decision attributes are adopted in this study, which make the approach flexible in retrieving spatial-temporal rules with different features. Spatial-temporal rules of typical states in the SCS are extracted under the three decision attributes and are confirmed by previous work. The results demonstrate that this approach is effective in extracting spatial-temporal rules from typical mesoscale-eddy states and therefore provides a powerful approach for future forecasting. The spatial-temporal rules in the SCS indicate that warm eddies following the rules generally occur in the southeastern and central SCS around the 2000 m isobaths in winter. Their intensity and vorticity are weaker than those of cold eddies, and they usually move a shorter distance. By contrast, cold eddies occur in regions deeper than 2000 m in the southwestern and northeastern SCS in spring and fall. Their intensity and vorticity are strong, and they usually move a long distance. In winter, a few rules are followed by cold eddies in the northern tip of the basin and southwest of Taiwan Island rather than by warm eddies, indicating that cold eddies may be well-regulated in that region. Several warm-eddy rules are obtained west of Luzon Island, indicating that warm eddies may be well-regulated in that region as well. In addition, warm and cold eddies are distributed not only in the jet flow off southern Vietnam induced by intraseasonal wind stress in summer and fall, but also in the northern shallow water, which should be a focus of future study.

  10. Defect inspection in hot slab surface: multi-source CCD imaging based fuzzy-rough sets method

    Science.gov (United States)

    Zhao, Liming; Zhang, Yi; Xu, Xiaodong; Xiao, Hong; Huang, Chao

    2016-09-01

    To provide an accurate surface defect inspection method and make automatic, robust delineation of image regions of interest (ROI) a reality in the production line, a multi-source CCD imaging based fuzzy-rough sets method is proposed for hot slab surface quality assessment. The applicability of the presented method and the devised system is mainly tied to surface quality inspection for strip, billet, slab surfaces, and the like. In this work we take into account the complementary advantages of two common machine vision (MV) systems (line-array CCD traditional scanning imaging (LS-imaging) and area-array CCD laser three-dimensional (3D) scanning imaging (AL-imaging)), and by establishing a fuzzy-rough sets model in the detection system, the seeds for relative fuzzy connectedness (RFC) delineation of the ROI can be placed adaptively. The model introduces upper and lower approximation sets for ROI definition, by which the boundary region can be delineated by the RFC region competitive classification mechanism. For the first time, a multi-source CCD imaging based fuzzy-rough sets strategy is attempted for CC-slab surface defect inspection, which allows an automatic application of AI algorithms and powerful ROI delineation strategies in the MV inspection field.

  11. Rough Sets as a Knowledge Discovery and Classification Tool for the Diagnosis of Students with Learning Disabilities

    OpenAIRE

    Yu-Chi Lin; Tung-Kuang Wu; Shian-Chang Huang; Ying-Ru Meng; Wen-Yau Liang

    2011-01-01

    Due to the implicit characteristics of learning disabilities (LDs), the diagnosis of students with learning disabilities has long been a difficult issue. Artificial intelligence techniques like artificial neural network (ANN) and support vector machine (SVM) have been applied to the LD diagnosis problem with satisfactory outcomes. However, special education teachers or professionals tend to be skeptical to these kinds of black-box predictors. In this study, we adopt the rough set theory (RST)...

  12. Self-consistent approach to x-ray reflection from rough surfaces

    International Nuclear Information System (INIS)

    Feranchuk, I. D.; Feranchuk, S. I.; Ulyanenkov, A. P.

    2007-01-01

    A self-consistent analytical approach for specular x-ray reflection from interfaces with transition layers [I. D. Feranchuk et al., Phys. Rev. B 67, 235417 (2003)] based on the distorted-wave Born approximation (DWBA) is used for the description of coherent and incoherent x-ray scattering from rough surfaces and interfaces. This approach takes into account the transformation of the modeling transition layer profile at the interface, which is caused by roughness correlations. The reflection coefficients for each DWBA order are directly calculated without phenomenological assumptions on their exponential decay at large scattering angles. Various regions of scattering angles are discussed, which show qualitatively different dependence of the reflection coefficient on the scattering angle. The experimental data are analyzed using the method developed

  13. Using Variable Precision Rough Set for Selection and Classification of Biological Knowledge Integrated in DNA Gene Expression

    Directory of Open Access Journals (Sweden)

    Calvo-Dmgz D.

    2012-12-01

    Full Text Available DNA microarrays have contributed to the exponential growth of genomic and experimental data in the last decade. This large amount of gene expression data has been used by researchers seeking diagnosis of diseases like cancer using machine learning methods. In turn, explicit biological knowledge about gene functions has also grown tremendously over the last decade. This work integrates explicit biological knowledge, provided as gene sets, into the classification process by means of Variable Precision Rough Set Theory (VPRS). The proposed model is able to highlight which part of the provided biological knowledge has been important for classification. This paper presents a novel model for microarray data classification which is able to incorporate prior biological knowledge in the form of gene sets. Based on this knowledge, we transform the input microarray data into supergenes, and then we apply rough set theory to select the most promising supergenes and to derive a set of easily interpretable classification rules. The proposed model is evaluated on three breast cancer microarray datasets, obtaining successful results compared to classical classification techniques. The experimental results show that there are no significant differences between our model and classical techniques, but our model is able to provide a biologically interpretable explanation of how it classifies new samples.

  14. Surface Roughness of Al-5Cu Alloy using a Taguchi-Fuzzy Based Approach

    Directory of Open Access Journals (Sweden)

    Biswajit Das

    2014-07-01

    Full Text Available The present paper investigates the application of the traditional Taguchi method with fuzzy logic for multi-objective optimization of the turning process of Al-5Cu alloy on a CNC lathe machine. The cutting parameters are optimized with consideration of multiple surface roughness characteristics (centre line average roughness Ra, average maximum height of the profile Rz, maximum height of the profile Rt, and mean spacing of local peaks of the profile Sa). Experimental results are presented to demonstrate the effectiveness of this approach. The parameters used in the experiment were cutting speed, depth of cut, and feed rate. Other parameters such as tool nose radius, tool material, workpiece length, workpiece diameter, and workpiece material were kept constant.

  15. Restrictions on Measurement of Roughness of Textile Fabrics by Laser Triangulation: A Phenomenological Approach

    International Nuclear Information System (INIS)

    Berberi, Pellumb; Tabaku, Burhan

    2010-01-01

    Laser triangulation is one of the methods used for contactless measurement of the roughness of textile fabrics. The method is based on measuring the distance between the sensor and the object by imaging the light scattered from the surface. However, experimental results, especially for high values of roughness, show a strong dependence on the duration of exposure to the laser pulses. The use of very short and very long exposure times causes the appearance, on the surface of the scanned textile, of pixels with fictive peak heights. The number of fictive peaks increases as the exposure time decreases towards 0.1 ms, and increases as the exposure time increases towards 100 ms. The appearance of fictive peaks leads to a nonrealistic increase in the measured roughness of the surface both for short and long exposure times, reaching a minimum somewhere in the region of medium exposure times, 1 to 2 ms. The above effect calls for a careful analysis of the experimental data and also becomes an important restriction of the method. In this paper we attempt a phenomenological approach to the mechanisms leading to these effects. We suppose that the effect is related both to the scattering properties of the scanned surface and to the physical parameters of the CCD sensors. The first factor becomes more important in the region of long exposure times, while the second factor becomes more important in the region of short exposure times.

  16. The Logical Properties of Lower and Upper Approximation Operations in Rough Sets

    Institute of Scientific and Technical Information of China (English)

    Zhu, Feng; He, Huacan

    2000-01-01

    In this paper, we discuss the logical properties of rough sets through topological Boolean algebras and closure topological Boolean algebras. We obtain representation theorems for finite topological Boolean algebras and closure topological Boolean algebras under the upper-lower relation condition, which establish the relationship between (closure) topological Boolean algebras and rough sets over general sets, in a manner similar to Stone's representation theorem for Boolean algebras.
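
    For reference, the lower and upper approximation operations whose logical properties are studied here are standardly defined in Pawlak's sense (with [x]_R the equivalence class of x under the indiscernibility relation R on a universe U):

      \underline{R}(X) = \{\, x \in U \mid [x]_R \subseteq X \,\}, \qquad
      \overline{R}(X)  = \{\, x \in U \mid [x]_R \cap X \neq \emptyset \,\}.

    The lower approximation behaves like a topological interior operator and the upper approximation like a closure operator, which is the bridge to the (closure) topological Boolean algebras discussed in the paper.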

  17. Selection of an evaluation index for water ecological civilizations of water-shortage cities based on the grey rough set

    Science.gov (United States)

    Zhang, X. Y.; Zhu, J. W.; Xie, J. C.; Liu, J. L.; Jiang, R. G.

    2017-08-01

    According to the characteristics and existing problems of the water ecological civilization of water-shortage cities, an evaluation index system for water ecological civilization was established using a grey rough set. Covering six aspects, namely water resources, water security, water environment, water ecology, water culture and water management, this study established the initial frame of the evaluation system, comprising 28 items, and used rough set theory to undertake optimal selection of the index system. Grey correlation theory was then used to derive the weightings, so that an integrated evaluation index system for the water ecological civilization of water-shortage cities could be constituted. Xi'an City was taken as an example, for which the results showed that 20 evaluation indexes were obtained after optimal selection from the preliminary framework. The most influential indices were the water-resource category index and the water environment category index. The leakage rate of the public water supply pipe network, the disposal, treatment and usage rate of polluted water, the urban water surface area ratio, and the water quality of the main rivers, among others, are also important. It was demonstrated that the evaluation index provides an objective reflection of regional features and key points for the development of water ecological civilization in cities with scarce water resources, and that the application example has universal applicability.

  18. SYSTEMIC APPROACH AND ROUGHNESS APPLICATION TO CAUSE EMERGING PROPERTIES IN THE RESTORATION OF DEGRADED SOILS

    Directory of Open Access Journals (Sweden)

    Juarês José Aumond

    2014-09-01

    Full Text Available http://dx.doi.org/10.5902/1980509815737 Based on general systems theory, an ecological model has been developed for the restoration of ecosystems whose soils are highly degraded, treating the ecosystem as a complex dynamic system that is hyper-sensitive to the initial conditions of soil preparation. Assuming that degraded ecosystems are sensitive to the initial conditions of soil preparation, the technique of roughness (relief variations alternating between concave and convex surfaces) was evaluated for its ability to trigger, over time, emergent properties that accelerate the process of ecological restoration. Degraded ecosystems can be understood as organizationally open systems, as dissipative structures through which matter and energy flow irreversibly. The main task in the ecological restoration of areas with degraded soil is to achieve the internalization of matter and energy, inducing the system towards organizational closure. Roughness, represented by soil micro-topography, is an effective technique for the internalization of matter, retaining water, sediment, organic matter, nutrients and seeds. Variations in relief trigger environmental changes over time in a dynamic and heterogeneous way, which influence the interactions between solar radiation, moisture and nutrients, creating different opportunities for plant and animal species. There must be an oriented concentration of flows, structures and processes between the degraded ecosystem (the system) and the environment (its neighborhood); in this approach, particular attention is dedicated to the interrelationships between the system and the environment. For the ecological restoration of areas with degraded soil, such as mining and ranching sites, a new integrative systemic approach is proposed, in which the roughness of the soil may trigger spatial and temporal patterns and emergent environmental properties owing to the hyper-sensitivity to the initial conditions of land preparation.

  19. Internet TV set-top devices for web-based projects: smooth sailing or rough surfing?

    Science.gov (United States)

    Johnson, K B; Ravert, R D; Everton, A

    1999-01-01

    The explosion of projects utilizing the World Wide Web in the home environment offers a select group of patients a tremendous tool for information management and health-related support. However, many patients do not have ready access to the Internet in their homes. For these patients, Internet TV set-top devices may provide a low-cost alternative to PC-based web browsers. As part of a larger descriptive study providing adolescents with access to an online support group, we investigated the feasibility of using an Internet TV set-top device for those patients in need of Internet access. Although the devices required some configuration before being installed in the home environment, they required a minimum of support and were well accepted by these patients. However, these patients used the Internet less frequently than their peers with home personal computers, most likely because the telephone or television was not readily available at all times. Internet TV set-top devices represent a feasible alternative means of access to the World Wide Web for some patients. Any attempt to use these devices should, however, be coupled with education for all family members and an attempt to provide a dedicated television and phone line.

  20. The Selection of Wagons for the Internal Transport of a Logistics Company: A Novel Approach Based on Rough BWM and Rough SAW Methods

    Directory of Open Access Journals (Sweden)

    Željko Stević

    2017-11-01

    Full Text Available The rationalization of logistics activities and processes is very important for the business and efficiency of every company. In this respect, transportation as a subsystem of logistics, whether internal or external, is potentially a huge area for achieving significant savings. In this paper, the emphasis is placed upon the internal transport logistics of a paper manufacturing company. It is necessary to rationalize the movement of vehicles in the company's internal transport, that is, for the majority of the transport to be transferred to rail, because the company already has an industrial track installed on its premises. To do this, it is necessary to purchase at least two used wagons. The problem is formulated as a multi-criteria decision model with eight criteria and eight alternatives. The paper presents a new approach based on a combination of the Simple Additive Weighting (SAW) method and rough numbers, which is used for ranking the potential solutions and selecting the most suitable one. The rough Best-Worst Method (BWM) was used to determine the weight values of the criteria. The results obtained using the combination of these two methods in their rough form were verified by means of a sensitivity analysis consisting of a change in the criteria weights and a comparison with the following methods in their conventional and rough forms: the Analytic Hierarchy Process (AHP), the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) and Multi-Attributive Border Approximation area Comparison (MABAC). The results show very high stability of the model, with ranks that are the same or similar across different scenarios.
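
    To make the ranking step concrete, the sketch below applies a simplified SAW procedure in which each rating is an interval, a crude stand-in for the rough numbers used in the paper: the criterion columns are normalized and the weighted sum of interval midpoints gives the score. The wagons, criteria, and weights are invented for illustration.

      import numpy as np

      def saw_interval_scores(matrix_lo, matrix_hi, weights, benefit):
          # Simplified SAW over interval-valued ratings: normalize each criterion,
          # then take the weighted sum of the interval midpoints.
          # benefit[j] is True for max-type criteria, False for cost-type ones.
          lo, hi = np.asarray(matrix_lo, float), np.asarray(matrix_hi, float)
          mid = (lo + hi) / 2.0
          norm = np.empty_like(mid)
          for j in range(mid.shape[1]):
              col = mid[:, j]
              norm[:, j] = col / col.max() if benefit[j] else col.min() / col
          return norm @ np.asarray(weights, float)

      # Toy example: 3 wagons, 2 criteria (capacity = benefit, price = cost).
      scores = saw_interval_scores([[50, 8], [55, 9], [48, 7]],
                                   [[54, 9], [60, 10], [50, 8]],
                                   weights=[0.6, 0.4], benefit=[True, False])
      print(scores.argsort()[::-1])  # ranking of the wagons, best first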

  1. Set-Theoretic Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan

    Despite being widely accepted and applied, maturity models in Information Systems (IS) have been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. This PhD thesis focuses on addressing...... these criticisms by incorporating recent developments in configuration theory, in particular the application of set-theoretic approaches. The aim is to show the potential of employing a set-theoretic approach for maturity model research and to empirically demonstrate equifinal paths to maturity. Specifically...... methodological guidelines consisting of detailed procedures to systematically apply set-theoretic approaches for maturity model research, and provides demonstrations of its application on three datasets. The thesis is a collection of six research papers that are written in a sequential manner. The first paper...

  2. An effective medium approach to predict the apparent contact angle of drops on super-hydrophobic randomly rough surfaces.

    Science.gov (United States)

    Bottiglione, F; Carbone, G

    2015-01-14

    The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of a large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, which makes it possible to treat the problem through a mean-field-like approach. The apparent contact angle at equilibrium is calculated in all wetting regimes, from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the Wenzel roughness parameter (r_W > -1/cos θ_Y, where θ_Y is the Young contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on a single scale (a sinusoidal surface), and it is found that, for the same value of the Wenzel roughness parameter r_W, the apparent contact angle is much larger for a randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor in enhancing superhydrophobicity. Moreover, it is shown that for millimetre-sized drops the actual drop pressure at static equilibrium only weakly affects the wetting regime, which instead seems to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed which relies only upon the micro-scale properties of the rough surface.
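
    The two classical limits mentioned in this abstract can be reproduced with the standard Wenzel and Cassie-Baxter relations (textbook formulas, not the authors' mean-field calculation); note how the Wenzel prediction saturates at 180° once r_W exceeds -1/cos θ_Y, as the abstract states:

      import numpy as np

      def wenzel_angle(theta_y_deg, r_w):
          # Apparent contact angle in the fully wetted (Wenzel) state:
          # cos(theta*) = r_w * cos(theta_Y); the clip models saturation at 180 deg.
          c = r_w * np.cos(np.radians(theta_y_deg))
          return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

      def cassie_baxter_angle(theta_y_deg, phi_solid):
          # Apparent angle in the composite (Cassie) state with solid fraction phi:
          # cos(theta*) = phi * (cos(theta_Y) + 1) - 1.
          c = phi_solid * (np.cos(np.radians(theta_y_deg)) + 1.0) - 1.0
          return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

      # theta_Y = 110 deg: roughness amplifies hydrophobicity in both states.
      print(wenzel_angle(110.0, r_w=2.0))        # ~133 deg
      print(cassie_baxter_angle(110.0, 0.2))     # ~150 deg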

  3. Research on classified real-time flood forecasting framework based on K-means cluster and rough set.

    Science.gov (United States)

    Xu, Wei; Peng, Yong

    2015-01-01

    This research presents a new classified real-time flood forecasting framework. In this framework, historical floods are classified by K-means clustering according to the spatial and temporal distribution of precipitation, the time variance of precipitation intensity, and other hydrological factors. Based on the classification results, a rough set is used to extract the identification rules for real-time flood forecasting. Then, the parameters of the conceptual hydrological model are calibrated for each category using a genetic algorithm. In real-time forecasting, the corresponding category of parameters is selected for flood forecasting according to the obtained flood information. This research tests the new classified framework on Guanyinge Reservoir and compares it with the traditional flood forecasting method, finding that the performance of the new classified framework is significantly better in terms of accuracy. Furthermore, the framework can be considered for catchments with few historical floods.
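
    A minimal sketch of the classification step, assuming scikit-learn is available: historical floods described by invented precipitation descriptors are standardized and grouped by K-means, and a new flood is assigned to the nearest cluster so that the category's calibrated model parameters can be selected.

      import numpy as np
      from sklearn.cluster import KMeans

      # Hypothetical flood descriptors: [total precipitation (mm),
      # peak intensity (mm/h), duration (h)] for historical events.
      floods = np.array([[120, 15, 24], [95, 12, 30], [210, 40, 12],
                         [230, 45, 10], [60, 5, 48], [70, 6, 40]], float)

      # Standardize so no single factor dominates the Euclidean distance.
      mean, std = floods.mean(axis=0), floods.std(axis=0)
      z = (floods - mean) / std

      km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(z)
      print(km.labels_)  # category of each historical flood

      # A new flood is assigned to the nearest cluster; that category's
      # calibrated parameters would then be used for forecasting.
      new = (np.array([[200, 38, 14]]) - mean) / std
      print(km.predict(new))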

  4. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    Science.gov (United States)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication services in the smart distribution grid (SDG), an access selection algorithm for heterogeneous wireless networks based on dynamic network status and different service types was proposed. The network performance index values are obtained in real time by a multimode terminal, and the variation trend of the index values is analyzed via a growth matrix. The index weights are calculated by the entropy-weight method and then modified by a rough set to obtain the final weights. Grey relational analysis is then used to rank the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can effectively implement dynamic access selection in the heterogeneous wireless networks of an SDG and reduce the network blocking probability.
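
    The entropy-weight step can be illustrated with a short sketch (the network index values are invented; cost/benefit direction handling and the rough-set modification of the weights are omitted): criteria whose values vary more across candidate networks receive larger weights.

      import numpy as np

      def entropy_weights(matrix):
          # Entropy-weight method: criteria whose values differ more across
          # candidates carry more information and receive larger weights.
          x = np.asarray(matrix, dtype=float)
          p = x / x.sum(axis=0)                  # column-wise proportions
          p = np.where(p == 0, 1e-12, p)         # avoid log(0)
          k = 1.0 / np.log(x.shape[0])
          entropy = -k * (p * np.log(p)).sum(axis=0)
          d = 1.0 - entropy                      # degree of diversification
          return d / d.sum()

      # Hypothetical network indices: [bandwidth, delay, blocking probability].
      print(entropy_weights([[54, 20, 0.02],
                             [11, 35, 0.05],
                             [72, 25, 0.01]]))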

  5. Diamonds in the rough: key performance indicators for reticles and design sets

    Science.gov (United States)

    Ackmann, Paul

    2008-10-01

    The discussion on reticle cost continues to raise questions by many in the semiconductor industry. The diamond industry developed a method to judge and grade diamonds. [1, 11] The diamond-marketing tool of "The 4Cs of Diamonds" and other slogans help explain the multiple, complex variables that determine the value of a particular stone. Understanding the critical factors of Carat, Clarity, Color, and Cut allows all customers to choose a gem that matches their unique desires. I apply the same principles of "The 4Cs of Diamonds" to develop an analogous method for rating and tracking reticle performance. I introduced the first 3Cs of reticle manufacturing during my BACUS presentation panel at SPIE in February 2008. [2] To these first 3Cs (Capital, Complexity, and Content), I now add a fourth, Cycle time. I will look at how our use of reticles changes by node and use "The 4Cs of Reticles" to develop the key performance indicators (KPI) that will help our industry set standards for evaluating reticle technology. Capital includes both cost and utilization. This includes tools, people, facilities, and support systems required for building the most critical reticles. Tools have highest value in the first two years of use, and each new technology node will likely increase the Capital cost of reticles. New technologies, specifications, and materials drive Complexity for reticles, including smaller feature size, increased optical proximity correction (OPC), and more levels at sub-wavelength. The large data files needed to create finer features require the use of the newest tools for writing, inspection, and repair. Content encompasses the customer's specifications and requirements, which the mask shop must meet. The specifications are critical because they drive wafer yield. A clear increase of the number of masking levels has occurred since the 90 nm node. Cycle time starts when the design is finished and lasts until the mask house ships the reticle to the fab. Depending on

  6. A Modified Approach in Modeling and Calculation of Contact Characteristics of Rough Surfaces

    Directory of Open Access Journals (Sweden)

    J.A. Abdo

    2005-12-01

    Full Text Available A mathematical formulation for the contact of rough surfaces is presented. The derivation of the contact model is facilitated through the definition of plastic asperities that are assumed to be embedded at a critical depth within the actual surface asperities. The surface asperities are assumed to deform elastically, whereas the plastic asperities experience only plastic deformation. The deformation of the plastic asperities is made to obey the law of conservation of volume. The proposed model is believed to be advantageous since (a) it provides a more accurate account of the elastic-plastic behavior of surfaces in contact and (b) it is applicable to model formulations that involve asperity shoulder-to-shoulder contact. Comparison of numerical results for estimating the true contact area and contact force using the proposed model and earlier methods suggests that the proposed approach provides a more realistic prediction of elastic-plastic contact behavior.

  7. A Set Theoretical Approach to Maturity Models

    DEFF Research Database (Denmark)

    Lasrado, Lester; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    characterized by equifinality, multiple conjunctural causation, and case diversity. We prescribe methodological guidelines consisting of a six-step procedure to systematically apply set theoretic methods to conceptualize, develop, and empirically derive maturity models and provide a demonstration......Maturity Model research in IS has been criticized for the lack of theoretical grounding, methodological rigor, empirical validations, and ignorance of multiple and non-linear paths to maturity. To address these criticisms, this paper proposes a novel set-theoretical approach to maturity models...

  8. Assessment of physiological performance and perception of pushing different wheelchairs on indoor modular units simulating a surface roughness often encountered in under-resourced settings.

    Science.gov (United States)

    Sasaki, Kotaro; Rispin, Karen

    2017-01-01

    In under-resourced settings where motorized wheelchairs are rarely available, manual wheelchair users with limited upper-body strength and functionality need to rely on assisting pushers for their mobility. Because traveling surfaces in under-resourced settings are often unpaved and rough, wheelchair pushers can experience high physiological loading. In order to evaluate pushers' physiological loading and to improve wheelchair designs, we built indoor modular units that simulate rough surface conditions, and tested the hypothesis that pushing different wheelchairs would result in different physiological performance and pusher perceptions of difficulty on the simulated rough surface. Eighteen healthy subjects pushed two different types of pediatric wheelchairs (Moti-Go, manufactured by Motivation, and KidChair, by Hope Haven) fitted with a 50-kg dummy on the rough and smooth surfaces at self-selected speeds. Oxygen uptake, traveling distance over 6 minutes, and ratings of difficulty were obtained. The results supported our hypothesis, showing that pushing Moti-Go on the rough surface imposed a lower physiological load than pushing KidChair, whereas on the smooth surface the two wheelchairs did not differ significantly. These results indicate that wheelchair designs intended to improve pushers' performance in under-resourced settings should be evaluated on rough surfaces.

  9. Rough multiple objective decision making

    CERN Document Server

    Xu, Jiuping

    2011-01-01

    Contents: Rough Set Theory (basic concepts and properties of rough sets; rough membership; rough intervals; rough functions; applications of rough sets). Multiple Objective Rough Decision Making (reverse logistics problem with rough interval parameters; MODM-based rough approximation for the feasible region; EVRM; CCRM; DCRM; reverse logistics network design problem of the Suji renewable resource market). Bilevel Multiple Objective Rough Decision Making (hierarchical supply chain planning problem with rough interval parameters; bilevel decision making model; BL-EVRM; BL-CCRM; BL-DCRM; application to supply chain planning of Mianyang Co., Ltd). Stochastic Multiple Objective Rough Decision Making (multi-objective resource-constrained project scheduling under a rough random environment; random variables; stochastic EVRM; stochastic CCRM; stochastic DCRM; multi-objective rc-PSP/mM/Ro-Ra for the Longtan hydropower station). Fuzzy Multiple Objective Rough Decision Making (allocation problem under a fuzzy environment; fuzzy variables; Fu-EVRM; Fu-CCRM; Fu-DCRM; earth-rock work allocation problem).

  10. Shadow analysis of soil surface roughness compared to the chain set method and direct measurement of micro-relief

    Directory of Open Access Journals (Sweden)

    R. García Moreno

    2010-08-01

    Full Text Available Soil surface roughness (SSR) expresses soil susceptibility to wind and water erosion and plays an important role in the development and maintenance of soil biota. Several methods have been developed to characterize SSR based on different ways of acquiring data. Because the main problems related to these methods involve the use and handling of equipment in the field, the present study aims to fill the need for a method of measuring SSR that is more reliable, lower-cost and more convenient in the field than traditional methods. Shadow analysis, which interprets micro-topographic shadows, is based on the principle that there is a direct relationship between soil surface roughness and the shadows cast by soil structures under fixed sunlight conditions. SSR was calculated with shadow analysis in the laboratory using hemispheres of different diameters with a diverse distribution of known heights over a surface area of 1 m².

    Data obtained from the shadow analysis were compared to data obtained with the chain method and a simulation of the micro-relief. The results show a relationship among the SSR values calculated using the different methods. To further test the method, shadow analysis was used to measure SSR in a sandy clay loam field tilled with different tools (chisel, tiller and roller) and in a control, on 4 m² surface plots divided into subplots of 1 m². The measurements were compared to the data obtained using the chain set and pin meter methods. For each of the three methods, the measured SSR was highest when the chisel was used, followed by the tiller, the roller, and finally the control. Shadow analysis is shown to be a reliable method that does not disturb the measured surface, is easy to handle and analyse, and shortens the time involved in field operations by a factor ranging from 4 to 20 compared to well-known techniques such as the chain set and pin meter methods.
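
    For comparison, the chain method against which shadow analysis is benchmarked reduces to a one-line index, commonly attributed to Saleh (1993): the roughness is the relative shortening of the horizontal span covered by a chain of known length draped over the surface. The measurements below are invented.

      def chain_roughness(chain_length, horizontal_span):
          # Chain-method roughness index: Cr = (1 - L2/L1) * 100, where L1 is the
          # chain length and L2 the horizontal distance it covers when draped
          # over the soil surface.
          return (1.0 - horizontal_span / chain_length) * 100.0

      # A 100 cm chain spanning 88 cm over a chiseled plot vs 97 cm over a rolled one.
      print(chain_roughness(100.0, 88.0))  # rougher surface -> larger index (12.0)
      print(chain_roughness(100.0, 97.0))  # smoother surface (3.0)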

  11. Risk approaches in setting radiation standards

    International Nuclear Information System (INIS)

    Whipple, C.

    1984-01-01

    This paper discusses efforts to increase the similarity of risk regulation approaches for radiation and chemical carcinogens. The risk assessment process in both cases involves the same controversy over extrapolation from high to low doses and dose rates, and in both cases the boundaries between science and policy in risk assessment are indistinct. Three basic considerations are presented for approaching policy questions: the economic efficiency of the regulatory approach, the degree of residual risk, and the technical opportunities for risk control. In the author's opinion, the more an agency can show that its standard-setting policies are consistent with those that have achieved political and judicial acceptance in other contexts, the greater the predictability of the regulatory process and the stability of that process.

  12. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    Science.gov (United States)

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-04-01

    Scale-up approaches for film coating process have been established for each type of film coating equipment from thermodynamic and mechanical analyses for several decades. The objective of the present study was to establish a versatile scale-up approach for film coating process applicable to commercial production that is based on critical quality attribute (CQA) using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments on a pilot scale using the Design of Experiment (DoE) approach were performed to find a suitable CQA from surface roughness, contact angle, color difference, and coating film properties by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments on a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was also retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for film coating process.

  13. Surface roughness retrieval by inversion of the Hapke model: A multiscale approach

    Science.gov (United States)

    Labarre, S.; Ferrari, C.; Jacquemoud, S.

    2017-07-01

    Surface roughness is a key property of soils that controls many surface processes and influences the scattering of incident electromagnetic waves over a wide range of scales. Hapke (2012b) designed a photometric model providing an approximate analytical solution of the bidirectional reflectance distribution function (BRDF) of a particulate medium, introducing the effect of surface roughness as a correction factor on the BRDF of a smooth surface. This photometric roughness is defined as the mean slope angle of the facets composing the surface, integrated over all scales from the grain size to the local topography. Yet its physical meaning is still at issue, as the scale at which it operates is not clearly defined. This work aims at better understanding the relative influence of roughness scales on soil BRDF and at testing the ability of the Hapke model to retrieve a roughness that effectively depicts the ground truth. We apply a wavelet transform to millimeter-resolution digital terrain models (DTM) acquired over volcanic terrains. This method splits the frequency band of a signal into several sub-bands, each corresponding to a spatial scale. We demonstrate that sub-centimeter surface features dominate both the integrated roughness and the BRDF shape. We investigate the suitability of the Hapke model for surface roughness retrieval by inversion on optical data. A global sensitivity analysis of the model shows that soil BRDF is very sensitive to surface roughness, nearly as much as to the single-scattering albedo depending on the phase angle, but also that these two parameters are strongly correlated. Based on these results, a simplified two-parameter model depending on surface albedo and roughness is proposed. Inversion of this model on BRDF data simulated by a ray-tracing code over natural targets shows a good estimation of surface roughness when the assumptions of the model are verified, given a priori knowledge of the surface albedo.
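
    A minimal sketch of the multiscale decomposition described here, assuming the PyWavelets package and a synthetic one-dimensional elevation profile in place of the authors' millimeter DTMs: the detail coefficients at each wavelet level give an RMS roughness contribution per spatial scale.

      import numpy as np
      import pywt

      # Hypothetical 1-D elevation profile (mm) sampled at millimeter spacing.
      rng = np.random.default_rng(0)
      profile = np.cumsum(rng.normal(0, 0.05, 1024))  # self-affine-like surface

      # Multi-level wavelet decomposition: detail coefficients at level j
      # capture topography at horizontal scales around 2**j samples.
      coeffs = pywt.wavedec(profile, "db2", level=6)
      for j, detail in enumerate(reversed(coeffs[1:]), start=1):
          rms = np.sqrt(np.mean(detail ** 2))
          print(f"scale ~{2**j} mm: RMS detail = {rms:.4f}")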

  14. Classification of sand samples according to radioactivity content by the use of euclidean and rough sets techniques

    International Nuclear Information System (INIS)

    Abd El-Monsef, M.M.; Kozae, A.M.; Seddeek, M.K.; Medhat, T.; Sharshar, T.; Badran, H.M.

    2004-01-01

    From the geological point of view, the origin and transport of black and normal sands is particularly important. Black and normal sands came to their places along the Mediterranean Sea coast after transport by natural processes. Both types of sand have different radiological properties. This study therefore attempts to use mathematical methods to classify Egyptian sand samples, collected from 42 locations in an area of 40 x 19 km², based on their radioactivity contents. Using all of the information resulting from the experimental measurements of radioactivity contents, as well as some other parameters, can be a time- and effort-consuming task, so the process of eliminating unnecessary attributes is of prime importance. This elimination of superfluous attributes that cannot affect the decision was carried out first. Topological techniques were then applied to classify the information systems resulting from the radioactivity measurements, in both the Euclidean and the quasi-discrete topological cases. While there are some applications of the former case in environmental radioactivity, the use of the quasi-discrete case in so-called rough set information analysis is new in such a study. The mathematical methods are summarized, and the results and their radiological implications are discussed. Generally, the results indicate no radiological anomaly, and they support the previously suggested hypothesis about the presence of two types of sand in the studied area.

  15. Design of cognitive engine for cognitive radio based on the rough sets and radial basis function neural network

    Science.gov (United States)

    Yang, Yanchao; Jiang, Hong; Liu, Congbin; Lan, Zhongli

    2013-03-01

    Cognitive radio (CR) is an intelligent wireless communication system which can dynamically adjust its parameters to improve system performance depending on environmental changes and quality of service. The core technology of CR is the design of the cognitive engine, which introduces reasoning and learning methods from the field of artificial intelligence to achieve perception, adaptation and learning capabilities. Considering the dynamic wireless environment and its demands, this paper proposes a design of a cognitive engine based on rough sets (RS) and a radial basis function neural network (RBF_NN). The method uses experiential knowledge and environment information processed by the RS module to train the RBF_NN, and the learned model is then used to reconfigure communication parameters so as to allocate resources rationally and improve system performance. After training the learning model, the performance is evaluated according to two benchmark functions. The simulation results demonstrate the effectiveness of the model, and the proposed cognitive engine can effectively achieve the goals of learning and reconfiguration in cognitive radio.
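
    As an illustration of the RBF_NN component (a generic construction, not the authors' engine), the sketch below builds a minimal radial basis function network: hidden Gaussian units centered by k-means and a linear read-out fitted by least squares. The sensed inputs and the reconfiguration target are invented.

      import numpy as np
      from sklearn.cluster import KMeans

      class SimpleRBFNet:
          # Minimal RBF network: hidden Gaussian units centered by k-means,
          # linear output weights fitted by least squares.
          def __init__(self, n_centers=8, gamma=1.0):
              self.n_centers, self.gamma = n_centers, gamma

          def _phi(self, X):
              # Gaussian activations of each sample w.r.t. each center.
              d2 = ((X[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
              return np.exp(-self.gamma * d2)

          def fit(self, X, y):
              self.centers = KMeans(self.n_centers, n_init=10,
                                    random_state=0).fit(X).cluster_centers_
              self.w, *_ = np.linalg.lstsq(self._phi(X), y, rcond=None)
              return self

          def predict(self, X):
              return self._phi(X) @ self.w

      # Toy reconfiguration target: map sensed [SNR, channel occupancy] to a power level.
      X = np.random.default_rng(1).uniform(0, 1, (200, 2))
      y = 0.5 * X[:, 0] - 0.3 * X[:, 1]
      model = SimpleRBFNet(n_centers=10, gamma=5.0).fit(X, y)
      print(model.predict(X[:3]))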

  16. Discrete rough set analysis of two different soil-behavior-induced landslides in National Shei-Pa Park, Taiwan

    Directory of Open Access Journals (Sweden)

    Shih-Hsun Chang

    2015-11-01

    Full Text Available The governing factors that influence landslide occurrences are complicated by the different soil conditions at various sites. To address this problem, this study used spatial information technology to collect data and information on geology. GIS, remote sensing and a digital elevation model (DEM) were used in combination to extract the attribute values of the surface material in the vast study area of Shei-Pa National Park, Taiwan. The factors influencing landslides were collected and their quantified values computed. The major soil components of loam and gravel in the Shei-Pa area result in different landslide problems. The major factors were successfully extracted from the set of influencing factors. Finally, a discrete rough set (DRS) classifier was used as a tool to find the threshold of each attribute contributing to landslide occurrence, based upon the knowledge database. This rule-based knowledge database provides an effective and timely system for managing landslides. NDVI (Normalized Difference Vegetation Index), VI (Vegetation Index), elevation, and distance from the road are the four major influencing factors for landslide occurrence. Landslide hazard potential diagrams (landslide susceptibility maps) were drawn, and a reasonable accuracy rate for landslide prediction was obtained. This study thus offers a systematic solution for the investigation of landslide disasters.

  17. Approach to the determination of the contact angle in hydrophobic samples with simultaneous correction of the effect of the roughness

    Science.gov (United States)

    Domínguez, Noemí; Castilla, Pau; Linzoain, María Eugenia; Durand, Géraldine; García, Cristina; Arasa, Josep

    2018-04-01

    This work presents a validation study of a method developed to measure contact angles with a confocal device on a set of hydrophobic samples. The use of this device allows the evaluation of the surface roughness and the determination of the contact angle in the same area of the sample. Furthermore, a theoretical evaluation is presented, based on Wenzel's model, of the impact that the roughness of a non-smooth surface has on the calculated contact angle when it is not taken into account.

  18. Plume Dispersion over Idealized Urban-liked Roughness with Height Variation: an LES Approach

    Science.gov (United States)

    Wong, Colman Ching Chi; Liu, Chun-Ho

    2013-04-01

    Human activities (e.g. vehicular emissions) are the primary pollutant sources affecting the health and living quality of stakeholders in modern compact cities. The Gaussian plume dispersion model is commonly used for estimating pollutant distributions and works well over rural areas with flat terrain. However, its major parameters, the dispersion coefficients, exclude the effect of surface roughness, which unavoidably introduces error in handling pollutant transport in the urban boundary layer (UBL) over building roughness. Our recent large-eddy simulation (LES) work has shown that urban surfaces significantly affect pollutant dispersion over idealized, identical two-dimensional (2D) street canyons of uniform height. As an extension of our ongoing effort, this study investigates how rough urban surfaces constructed from 2D street canyons of non-uniform height modify UBL pollutant dispersion. A series of LESs with idealized roughness elements of non-uniform heights was performed under neutral stratification. Building models of two different heights were placed alternately in the computational domain to construct 2D street canyons in cross flows. The plume dispersion from a ground-level passive pollutant source over these more realistic urban areas was then examined. Along with the existing building-height-to-street-width (aspect) ratio (AR), a new parameter, the building-height variability (BHV), is used to measure the unevenness of building heights. Four ARs (1, 0.5, 0.25 and 0.125) and three BHVs (20%, 40% and 60%) were considered in this study. Preliminary results show that BHV greatly increases the aerodynamic roughness of the hypothetical urban surfaces for narrow street canyons. Analogous to our previous findings, the air exchange rate (ACH) of street canyons increases with increasing friction factor, implying that street-level ventilation could be improved by increasing building roughness via BHV. In addition, the parameters used in dispersion coefficient

  19. Natural background approach to setting radiation standards

    International Nuclear Information System (INIS)

    Adler, H.I.; Federow, H.; Weinberg, A.M.

    1979-01-01

    The suggestion has often been made that an additional radiation exposure imposed on humanity as a result of some important activity such as electricity generation would be acceptable if the exposure were small compared to the natural background. In order to make this concept quantitative and objective, we propose that "small compared with the natural background" be interpreted as the standard deviation (weighted by the exposed population) of the natural background. This use of the variation in natural background radiation is less arbitrary and requires fewer unfounded assumptions than some current approaches to standard-setting. The standard deviation is an easily calculated statistic that is small compared with the mean value of natural exposures of populations. It is an objectively determined quantity and its significance is generally understood. Its determination does not omit any of the pertinent data. When this method is applied to the population of the United States, it suggests that a dose of 20 mrem/year would be an acceptable standard. This is comparable to the 25 mrem/year suggested as the maximum allowable exposure of an individual from the complete uranium fuel cycle.

  20. A novel approach for quantifying the zero-plane displacement of rough-wall boundary layers

    Science.gov (United States)

    Ferreira, Manuel; Rodriguez-Lopez, Eduardo; Ganapathisubramani, Bharath; Aerodynamics; Flight Mechanics Team

    2017-11-01

    Indirect methods of wall shear stress (WSS) estimation are frequently used to characterise rough-wall boundary-layer flows. The zero-plane displacement, hypothesised to be the vertical location at which the WSS acts, is often treated as a fitting parameter. However, it would be preferable to measure both these quantities directly, especially for surfaces with large roughness elements where established scaling and similarity laws may not hold. In this talk we present a novel floating-element balance that is able to measure not only the WSS but also the wall-normal location at which it acts, while allowing compensation for mild static pressure gradients by means of a first-order analytical model. Its architecture is based on a parallel-shift linkage, and it is fitted with custom-built force transducers and a data acquisition system especially designed to achieve high signal-to-noise ratios (SNR). The smooth-wall boundary-layer flow is used as a benchmark to assess the accuracy of this balance: the values of the skin friction coefficient agree with hot-wire anemometry to within 2% for local Reynolds numbers Re_θ from 4 × 10³ up to 10⁴. A rough surface of regularly distributed large elements is used to investigate the ability to infer the zero-plane displacement.

  1. Automated Detection of Cancer Associated Genes Using a Combined Fuzzy-Rough-Set-Based F-Information and Water Swirl Algorithm of Human Gene Expression Data.

    Directory of Open Access Journals (Sweden)

    Pugalendhi Ganesh Kumar

    Full Text Available This study describes a novel approach to reducing the challenges of highly nonlinear multiclass gene expression values for cancer diagnosis. To build a fruitful system for cancer diagnosis, in this study we introduced two levels of gene selection, filtering and embedding, for selection of the potential genes and of the most relevant genes associated with cancer, respectively. The filter procedure was implemented by developing a fuzzy rough set (FR)-based method for redefining the criterion function of f-information (FI) to identify the potential genes without discretizing the continuous gene expression values. The embedded procedure was implemented by means of a water swirl algorithm (WSA), which attempts to optimize the rule set and membership functions required to classify samples using a fuzzy-rule-based multiclassification system (FRBMS). Two novel update equations are proposed in WSA, which provide better exploration and exploitation abilities while designing a self-learning FRBMS. The efficiency of our new approach was evaluated on 13 multicategory and 9 binary cancer gene expression datasets. Additionally, the performance of the proposed FRFI-WSA method in designing an FRBMS was compared with existing methods for gene selection and optimization, such as the genetic algorithm (GA), particle swarm optimization (PSO), and the artificial bee colony algorithm (ABC), on all the datasets. In the global cancer map with repeated measurements (GCM_RM) dataset, the FRFI-WSA found the smallest number, 16, of most relevant genes associated with cancer, using a minimal number of 26 compact rules with the highest classification accuracy (96.45%). In addition, the statistical validation used in this study revealed that the biological relevance of the most relevant genes associated with cancer and their linguistics detected by the proposed FRFI-WSA approach are better than those of the other methods. The simple interpretable rules with most relevant genes and effectively

  2. Water Quality Assessment in the Harbin Reach of the Songhuajiang River (China Based on a Fuzzy Rough Set and an Attribute Recognition Theoretical Model

    Directory of Open Access Journals (Sweden)

    Yan An

    2014-03-01

    Full Text Available A large number of parameters are acquired during practical water quality monitoring. If all the parameters are used in water quality assessment, the computational complexity will certainly increase. In order to reduce the dimensionality of the input space, a fuzzy rough set was introduced to perform attribute reduction. An attribute recognition theoretical model and the entropy method were then combined to assess water quality in the Harbin reach of the Songhuajiang River in China. A dataset consisting of ten parameters was collected from January to October 2012. The fuzzy rough set was applied to reduce the ten parameters to four: BOD5, NH3-N, TP, and F. coli (Reduct A). Considering that DO is a usual parameter in water quality assessment, another reduct, including DO, BOD5, NH3-N, TP, TN, F, and F. coli (Reduct B), was obtained. The assessment results of Reduct B show good consistency with those of Reduct A, which means that DO is not always necessary for assessing water quality. The results with attribute reduction are not exactly the same as those without attribute reduction, which can be attributed to the α value being decided by subjective experience. The assessment results gained by the fuzzy rough set clearly reduce the computational complexity, and are acceptable and reliable. The model proposed in this paper enhances the water quality assessment system.
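
    The attribute-reduction logic can be illustrated with the classical (crisp) rough-set dependency degree, a simpler relative of the fuzzy rough reduction used in the paper: an attribute subset is sufficient when every equivalence class it induces is consistent on the decision. The monitoring records below are invented.

      from collections import defaultdict

      def dependency(data, attrs, decision):
          # Rough-set dependency gamma(attrs -> decision): the fraction of
          # records whose attrs-equivalence class is consistent on the decision.
          blocks = defaultdict(list)
          for row in data:
              blocks[tuple(row[a] for a in attrs)].append(row[decision])
          consistent = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
          return consistent / len(data)

      # Hypothetical discretized monitoring records.
      records = [
          {"BOD5": "high", "NH3N": "high", "DO": "low",  "grade": 4},
          {"BOD5": "high", "NH3N": "high", "DO": "low",  "grade": 4},
          {"BOD5": "low",  "NH3N": "low",  "DO": "high", "grade": 2},
          {"BOD5": "low",  "NH3N": "high", "DO": "high", "grade": 3},
      ]
      print(dependency(records, ["BOD5", "NH3N"], "grade"))  # 1.0: DO is dispensable here
      print(dependency(records, ["BOD5"], "grade"))          # 0.5: BOD5 alone is not enough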

  3. Hybrid intelligence systems and artificial neural network (ANN approach for modeling of surface roughness in drilling

    Directory of Open Access Journals (Sweden)

    Ch. Sanjay

    2014-12-01

    Full Text Available In machining, drilling is a material removal process that has been widely used in manufacturing since the industrial revolution. The useful life of the cutting tool and its operating conditions largely control the economics of machining operations. Drilling is the most frequently performed material removal process and is used as a preliminary step for many operations, such as reaming, tapping, and boring. Drill wear has a detrimental effect on the surface finish and dimensional accuracy of the workpiece, and the surface finish of a machined part is one of the most important quality characteristics in manufacturing industries. The primary objective of this research is the prediction of suitable parameters for surface roughness in drilling. Cutting speed, cutting force, and machining time were given as inputs to an adaptive fuzzy neural network and a neuro-fuzzy analysis for estimating the values of surface roughness, using 2, 3, 4, and 5 membership functions. The best structures were selected based on the minimum sum of squared differences between the actual values and the values estimated by the adaptive neuro-fuzzy inference system (ANFIS) and the neuro-fuzzy systems. For the artificial neural network (ANN) analysis, the number of neurons was selected from 1, 2, 3, …, 20. The learning rate was set to 0.5 and a smoothing factor of 0.5 was used. The inputs were cutting speed, feed, machining time, and thrust force. The best neural network structures were selected based on the minimum sum of squared differences with the actual values of surface roughness. Drilling experiments with a 10 mm drill were performed at two cutting speeds and feeds. A comparative analysis has been carried out between the actual values and the values estimated by the ANFIS, neuro-fuzzy, and ANN analyses.
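
    A minimal sketch of such an ANN surface-roughness predictor, assuming scikit-learn; the drilling records are invented, and the learning rate of 0.5 mirrors the abstract (smaller values are often more stable in practice):

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Hypothetical drilling records: [cutting speed (m/min), feed (mm/rev),
      # machining time (s), thrust force (N)] -> measured Ra (um).
      X = np.array([[20, 0.10, 35, 410], [20, 0.20, 30, 520],
                    [32, 0.10, 28, 380], [32, 0.20, 25, 490],
                    [26, 0.15, 30, 450], [26, 0.15, 31, 455]], float)
      y = np.array([2.9, 4.1, 2.4, 3.6, 3.1, 3.2])

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(8,), learning_rate_init=0.5,
                       max_iter=5000, random_state=0),
      )
      model.fit(X, y)
      print(model.predict([[30, 0.12, 27, 400]]))  # predicted Ra (um)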

  4. Multi-Attribute Decision-Making Method Based on Neutrosophic Soft Rough Information

    Directory of Open Access Journals (Sweden)

    Muhammad Akram

    2018-03-01

    Full Text Available Soft sets (SSs), neutrosophic sets (NSs), and rough sets (RSs) are different mathematical models for handling uncertainties, but they are mutually related. In this research paper, we introduce the notions of soft rough neutrosophic sets (SRNSs) and neutrosophic soft rough sets (NSRSs) as hybrid models for soft computing. We describe a mathematical approach to handling decision-making problems in view of NSRSs. We also present an efficient algorithm based on our proposed hybrid model to solve decision-making problems.

  5. Cross-Modal Perception of Noise-in-Music: Audiences Generate Spiky Shapes in Response to Auditory Roughness in a Novel Electroacoustic Concert Setting.

    Science.gov (United States)

    Liew, Kongmeng; Lindborg, PerMagnus; Rodrigues, Ruth; Styles, Suzy J

    2018-01-01

    Noise has become integral to electroacoustic music aesthetics. In this paper, we define noise as sound that is high in auditory roughness, and examine its effect on cross-modal mapping between sound and visual shape in participants. In order to preserve the ecological validity of contemporary music aesthetics, we developed Rama, a novel interface for presenting experimentally controlled blocks of electronically generated sounds that varied systematically in roughness, and actively collected data from audience interaction. These sounds were then embedded as musical drones within the overall sound design of a multimedia performance with live musicians. Audience members listened to these sounds and collectively voted to create the shape of a visual graphic, presented as part of the audio-visual performance. The results of the concert setting were replicated in a controlled laboratory environment to corroborate the findings. Results show a consistent effect of auditory roughness on shape design, with rougher sounds corresponding to spikier shapes. We discuss the implications, as well as evaluate the audience interface.


  7. Predicting High or Low Transfer Efficiency of Photovoltaic Systems Using a Novel Hybrid Methodology Combining Rough Set Theory, Data Envelopment Analysis and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lee-Ing Tong

    2012-02-01

    Full Text Available Solar energy has become an important energy source in recent years, as it generates less pollution than other energies. A photovoltaic (PV) system, which typically has many components, converts solar energy into electrical energy. With the development of advanced engineering technologies, the transfer efficiency of PV systems has been increased from low to high. The combination of components in a PV system influences its transfer efficiency. Therefore, when predicting the transfer efficiency of a PV system, one must consider the relationships among system components. This work accurately predicts whether the transfer efficiency of a PV system is high or low using a novel hybrid model that combines rough set theory (RST), data envelopment analysis (DEA), and genetic programming (GP). Finally, real data sets are utilized to demonstrate the accuracy of the proposed method.

  8. Fuzzy multi-project rough-cut capacity planning

    NARCIS (Netherlands)

    Masmoudi, Malek; Hans, Elias W.; Leus, Roel; Hait, Alain; Sotskov, Yuri N.; Werner, Frank

    2014-01-01

    This chapter studies the incorporation of uncertainty into multi-project rough-cut capacity planning. We use fuzzy sets to model uncertainties, adhering to the so-called possibilistic approach. We refer to the resulting proactive planning environment as Fuzzy Rough Cut Capacity Planning (FRCCP).

  9. A hybrid Taguchi-artificial neural network approach to predict surface roughness during electric discharge machining of titanium alloys

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Sanjeev; Batish, Ajay [Thapar University, Patiala (India); Singh, Rupinder [GNDEC, Ludhiana (India); Singh, T. P. [Symbiosis Institute of Technology, Pune (India)

    2014-07-15

    In the present study, the electric discharge machining process was used for machining titanium alloys. Eight process parameters were varied during the process. Experimental results showed that current and pulse-on time significantly affected the performance characteristics. An artificial neural network coupled with the Taguchi approach was applied for optimization and prediction of surface roughness, and the experimental results and the predicted results showed good agreement. SEM was used to investigate the surface integrity. Analyses of the migration of different chemical elements and the formation of compounds on the surface were performed using EDS and XRD patterns. The results showed that high discharge energy caused surface defects such as cracks, craters, a thick recast layer, micropores, pin holes, residual stresses and debris. Migration of chemical elements from both the electrode and the dielectric medium was observed during the EDS analysis, and the presence of carbon was seen on the machined surface. XRD results showed the formation of a titanium carbide compound that precipitated on the machined surface.

  10. A Model to Identify the Most Effective Business Rule in Information Systems using Rough Set Theory: Study on Loan Business Process

    Directory of Open Access Journals (Sweden)

    Mohammad Aghdasi

    2011-09-01

    In this paper, a practical model is used to identify the most effective rules in information systems. In this model, critical business attributes that fit strategic expectations are first taken into account; these are the attributes whose changes are more important than others in achieving the strategic expectations. To identify these attributes we utilize rough set theory. The business rules that use critical information attributes in their structures are identified as the most effective business rules. The proposed model helps information system developers to identify the scope of effective business rules, decreasing the time and cost of information system maintenance. It also helps business analysts to focus on managing critical business attributes in order to achieve a specific goal.

  11. Examination of Routine Practice Patterns in the Hospital Information Data Warehouse: Use of OLAP and Rough Set Analysis with Clinician Feedback

    Science.gov (United States)

    Grant, Andrew; Grant, Gwyneth; Gagné, Jean; Blanchette, Carl; Comeau, Émilie; Brodeur, Guillaume; Dionne, Jonathon; Ayite, Alphonse; Synak, Piotr; Wroblewski, Jakub; Apanowitz, Cas

    2001-01-01

    The patient-centred electronic patient record enables retrospective analysis of practice patterns as one means of assisting clinicians to adjust and improve their practice. An interrogation of the data warehouse linking test use to Diagnostic Related Group (DRG), covering one year's data of the Sherbrooke University Hospital, showed that one-third of patients used two-thirds of the diagnostic tests. Using rough set analysis, zones of repeated tests were demonstrated where results remained within stable limits. It was concluded that 30% of fluid and electrolyte testing was probably unnecessary. These findings led to an endorsement of changing the test request formats in the hospital information system from profiles to individual tests requiring justification.

  12. The Rough Set methodology versus Discriminant Analysis in multi-attribute classification problems

    Directory of Open Access Journals (Sweden)

    Vilar Zanón, J.L.

    2003-01-01

    Full Text Available Many financial decisions involve the classification of an observation (companies, securities, etc.) into a category or group, which has encouraged the application of operations research methods to financial problems. A particular case of classification problem arises when the number of groups is limited to two. There are numerous financial studies devoted to binary classification problems: classifying loans as defaulted or not, mergers and acquisitions, bond rating, or the prediction of business failure. Numerous statistical methods have been employed to address these problems. In most cases, the explanatory variables used do not satisfy the statistical assumptions these methods require, which has motivated the search for other tools that overcome these drawbacks, such as Rough Set Theory. This paper describes an empirical investigation consisting of a comparative study of the use of Discriminant Analysis and Rough Set Theory on an information system composed of 72 Spanish non-life insurance companies described by 21 financial ratios. We compared their effectiveness by applying them to the detection of insolvency, treated as a multi-attribute classification problem between healthy and failed companies and using the financial ratios as attributes.

  13. Fingerprinting the type of line edge roughness

    Science.gov (United States)

    Fernández Herrero, A.; Pflüger, M.; Scholze, F.; Soltwisch, V.

    2017-06-01

    Lamellar gratings are widely used diffractive optical elements and are prototypes of structural elements in integrated electronic circuits. Extreme ultraviolet (EUV) scatterometry is very sensitive to structure details and imperfections, which makes it suitable for the characterization of nanostructured surfaces. Compared to X-ray methods, EUV scattering allows for steeper angles of incidence, which is highly preferable for the investigation of small measurement fields on semiconductor wafers. For the control of the lithographic manufacturing process, rapid in-line characterization of nanostructures is indispensable. Numerous studies on the determination of the regular geometry parameters of lamellar gratings from optical and EUV scattering have also investigated the impact of roughness on the respective results. The challenge is to appropriately model the influence of structure roughness on the diffraction intensities used for the reconstruction of the surface profile. The impact of roughness has previously been studied analytically, but only for gratings with periodic pseudo-roughness, because of practical restrictions on the computational domain. Our investigation aims at a better understanding of the scattering caused by line roughness. We designed a set of nine lamellar Si gratings to be studied by EUV scatterometry. It includes one reference grating with no artificial roughness added, four gratings with a periodic roughness distribution (two with prevailing line edge roughness (LER) and two with line width roughness (LWR)), and four gratings with a stochastic roughness distribution (two with LER and two with LWR). We show that the type of line roughness has a strong impact on the angular distribution of the diffuse scatter. Our experimental results are not described well by the present modelling approach based on small, periodically repeated domains.

  14. Rough Finite State Automata and Rough Languages

    Science.gov (United States)

    Arulprakasam, R.; Perumal, R.; Radhakrishnan, M.; Dare, V. R.

    2018-04-01

    Sumita Basu [1, 2] recently introduced the concepts of rough finite state (semi)automata, rough grammars and rough languages. Motivated by the work of [1, 2], in this paper we investigate some closure properties of rough regular languages and establish the equivalence between the classes of rough languages generated by rough grammars and the classes of rough regular languages accepted by rough finite automata.

  15. Setting research priorities by applying the combined approach matrix.

    Science.gov (United States)

    Ghaffar, Abdul

    2009-04-01

    Priority setting in health research is a dynamic process. Different organizations and institutes have been working in the field of research priority setting for many years. In 1999 the Global Forum for Health Research presented a research priority setting tool called the Combined Approach Matrix, or CAM. Since its development, the CAM has been successfully applied to set research priorities for diseases, conditions and programmes at global, regional and national levels. This paper briefly explains the CAM methodology, giving examples, describing challenges encountered in the process of setting research priorities, and providing recommendations for further work in this field. The construct and design of the CAM are explained, along with the steps needed to plan and organize a priority-setting exercise and to apply it in different settings. The application of the CAM is described using three examples: the first concerns setting research priorities for a global programme, the second describes application at the country level, and the third concerns setting research priorities for diseases. Effective application of the CAM in diverse environments proves its utility as a tool for setting research priorities. Potential challenges encountered in the process of research priority setting are discussed and some recommendations for further work in this field are provided.

  16. Notions of Rough Neutrosophic Digraphs

    Directory of Open Access Journals (Sweden)

    Nabeela Ishfaq

    2018-01-01

    Full Text Available Graph theory has numerous applications in various disciplines, including computer networks, neural networks, expert systems, cluster analysis, and image capturing. Rough neutrosophic set (NS) theory is a hybrid tool for handling uncertain information that exists in real life. In this research paper, we apply the concept of rough NS theory to graphs and present a new kind of graph structure, rough neutrosophic digraphs. We present certain operations, including lexicographic products, strong products, rejection and tensor products on rough neutrosophic digraphs. We investigate some of their properties. We also present an application of a rough neutrosophic digraph in decision-making.

  17. Setting conservation management thresholds using a novel participatory modeling approach.

    Science.gov (United States)

    Addison, P F E; de Bie, K; Rumpff, L

    2015-10-01

    We devised a participatory modeling approach for setting management thresholds that show when management intervention is required to address undesirable ecosystem changes. This approach was designed to be used when management thresholds: must be set for environmental indicators in the face of multiple competing objectives; need to incorporate scientific understanding and value judgments; and will be set by participants with limited modeling experience. We applied our approach to a case study where management thresholds were set for a mat-forming brown alga, Hormosira banksii, in a protected area management context. Participants, including management staff and scientists, were involved in a workshop to test the approach, and set management thresholds to address the threat of trampling by visitors to an intertidal rocky reef. The approach involved trading off the environmental objective, to maintain the condition of intertidal reef communities, with social and economic objectives to ensure management intervention was cost-effective. Ecological scenarios, developed using scenario planning, were a key feature that provided the foundation for where to set management thresholds. The scenarios developed represented declines in percent cover of H. banksii that may occur under increased threatening processes. Participants defined 4 discrete management alternatives to address the threat of trampling and estimated the effect of these alternatives on the objectives under each ecological scenario. A weighted additive model was used to aggregate participants' consequence estimates. Model outputs (decision scores) clearly expressed uncertainty, which can be considered by decision makers and used to inform where to set management thresholds. This approach encourages a proactive form of conservation, where management thresholds and associated actions are defined a priori for ecological indicators, rather than reacting to unexpected ecosystem changes in the future.
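
    To make the aggregation step concrete, the following is a minimal Python sketch of a weighted additive model of the kind described; the objective weights, management alternatives, and consequence scores are hypothetical placeholders, not values from the study.

        # Weighted additive aggregation of consequence estimates (hypothetical values).
        # Each management alternative gets a decision score: sum of weight * score.

        objectives = {"reef_condition": 0.5, "visitor_access": 0.3, "management_cost": 0.2}

        # Consequence estimates under one ecological scenario, normalized to [0, 1]
        # where 1 is best (illustrative numbers only).
        consequences = {
            "no_action":       {"reef_condition": 0.2, "visitor_access": 1.0, "management_cost": 1.0},
            "signage":         {"reef_condition": 0.5, "visitor_access": 0.9, "management_cost": 0.8},
            "partial_closure": {"reef_condition": 0.8, "visitor_access": 0.5, "management_cost": 0.5},
            "full_closure":    {"reef_condition": 1.0, "visitor_access": 0.0, "management_cost": 0.3},
        }

        def decision_score(alternative):
            return sum(w * consequences[alternative][obj] for obj, w in objectives.items())

        for alt in consequences:
            print(f"{alt}: {decision_score(alt):.2f}")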

  18. A Novel Approach for Evaluating the Contraction of Hypo-Peritectic Steels during Initial Solidification by Surface Roughness

    Directory of Open Access Journals (Sweden)

    Junli Guo

    2018-04-01

    Full Text Available The contraction of peritectic steels during initial solidification has an important influence on the formation of surface defects in continuously cast slabs. In order to understand the contraction behavior of the initial solidification of steels in the mold, the solidification process and surface roughness of a commercial hypo-peritectic steel and several non-peritectic steels were investigated using a Confocal Scanning Laser Microscope (CSLM). The massive transformation of delta-Fe (δ) to austenite (γ) was documented in the hypo-peritectic steel, which caused surface wrinkles and greatly increased the surface roughness of the samples in the experiments. The surface roughness Ra(δ→γ) was calculated to evaluate the contraction level of the hypo-peritectic steel due to the δ–γ transformation. The results show that the surface roughness method can facilitate the estimation of the contraction level of the peritectic transformation over a wide range of cooling rates.
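
    The arithmetic average roughness Ra underlying this evaluation has a standard definition, the mean absolute deviation of the profile from its mean line; a minimal sketch with illustrative profile heights (not data from the study):

        # Arithmetic average roughness Ra: mean absolute deviation of the profile
        # heights from their mean line. Heights are illustrative, in micrometers.

        heights = [0.8, 1.2, 0.5, 1.9, 1.1, 0.4, 1.6, 1.0]

        mean_line = sum(heights) / len(heights)
        ra = sum(abs(z - mean_line) for z in heights) / len(heights)
        print(f"Ra = {ra:.3f} um")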

  19. Bistatic scattering from a three-dimensional object above a two-dimensional randomly rough surface modeled with the parallel FDTD approach.

    Science.gov (United States)

    Guo, L-X; Li, J; Zeng, H

    2009-11-01

    We present an investigation of the electromagnetic scattering from a three-dimensional (3-D) object above a two-dimensional (2-D) randomly rough surface. A Message Passing Interface-based parallel finite-difference time-domain (FDTD) approach is used, and the uniaxial perfectly matched layer (UPML) medium is adopted for truncation of the FDTD lattices, in which the finite-difference equations can be used for the total computation domain by properly choosing the uniaxial parameters. This makes the parallel FDTD algorithm easier to implement. The parallel performance with different numbers of processors is illustrated for one rough surface realization and shows that the computation time of our parallel FDTD algorithm is dramatically reduced relative to a single-processor implementation. Finally, the composite scattering coefficients versus scattering angle and azimuthal angle are presented and analyzed for different conditions, including the surface roughness, the dielectric constants, the polarization, and the size of the 3-D object.

  20. Assessment of wind speed and wind power through three stations in Egypt, including air density variation and analysis results with rough set theory

    International Nuclear Information System (INIS)

    Essa, K.S.M.; Embaby, M.; Marrouf, A.A.; Koza, A.M.; Abd El-Monsef, M.E.

    2007-01-01

    It is well known that the wind energy potential is proportional to both the air density and the third power of the wind speed averaged over a suitable time period. The wind speed and air density are random variables depending on both time and location. The main objective of this work is to derive the most general formulation of the wind energy potential, taking the time variation of both wind speed and air density into consideration. The correction factor is derived explicitly in terms of the cross-correlation and the coefficients of variation. The application is performed for environmental and wind speed measurements at Cairo Airport, Kosseir and Hurguada, Egypt. Comparisons are made between Weibull, Rayleigh, and actual data distributions of wind speed and wind power for the year 2005. A Weibull distribution is the best match to the actual probability distribution of wind speed data for most stations. The maximum wind energy potential was 373 W/m² in June at Hurguada (Red Sea coast), where the annual mean value was 207 W/m². Using rough set theory, we find that the wind power depends more strongly on the wind speed than on the air density.
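
    For readers wanting to reproduce the Weibull-based power estimate, the mean power density follows from the third moment of the Weibull distribution, E[v³] = c³Γ(1 + 3/k). A minimal sketch with illustrative shape and scale parameters (not the station values):

        # Mean wind power density from a Weibull wind-speed distribution:
        # E[v^3] = c^3 * Gamma(1 + 3/k), so P = 0.5 * rho * c^3 * Gamma(1 + 3/k).
        # The shape/scale values below are illustrative, not the station data.

        from math import gamma

        def wind_power_density(k, c, rho=1.225):
            """Mean power density in W/m^2 for Weibull shape k and scale c (m/s)."""
            return 0.5 * rho * c**3 * gamma(1 + 3 / k)

        print(wind_power_density(k=2.0, c=7.5))  # roughly 340 W/m^2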

  1. APPLICATION OF ROUGH SET THEORY TO MAINTENANCE LEVEL DECISION-MAKING FOR AERO-ENGINE MODULES BASED ON INCREMENTAL KNOWLEDGE LEARNING

    Institute of Scientific and Technical Information of China (English)

    陆晓华; 左洪福; 蔡景

    2013-01-01

    The maintenance of an aero-engine usually includes three levels, and the maintenance cost and period differ greatly depending on the maintenance level. To plan a reasonable maintenance budget program, airlines would like to predict the maintenance level of an aero-engine in terms of performance parameters before repairing, which can provide more economic benefits. The maintenance level decision rules are mined from the historical maintenance data of a civil aero-engine based on rough set theory, and a variety of possible models for updating the rules as newly acquired maintenance cases are added to the historical maintenance case database are investigated by means of incremental machine learning. The continuously updated rules can provide reasonable guidance for engineers and decision support for planning a maintenance budget program before repairing. The results of an example show that the decision rules become more typical and robust, and that they predict the maintenance level of an aero-engine module more accurately as the maintenance data increase, which illustrates the feasibility of the presented method.
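
    The core rough-set step behind such rule mining is to partition the cases by their condition attributes (indiscernibility) and keep the consistent classes as certain rules. A minimal sketch with hypothetical engine parameters and maintenance levels, not the airline data:

        # Partition cases by condition attributes; classes with a unique decision
        # yield certain rules. Records are hypothetical, not airline data.

        from collections import defaultdict

        # (exhaust gas temperature, vibration) -> maintenance level
        cases = [
            ({"egt": "high", "vib": "high"}, "overhaul"),
            ({"egt": "high", "vib": "low"},  "module_repair"),
            ({"egt": "low",  "vib": "low"},  "minimal"),
            ({"egt": "high", "vib": "high"}, "overhaul"),
            ({"egt": "low",  "vib": "high"}, "module_repair"),
        ]

        classes = defaultdict(set)
        for cond, decision in cases:
            classes[tuple(sorted(cond.items()))].add(decision)

        for cond, decisions in classes.items():
            if len(decisions) == 1:  # consistent class -> certain rule
                print(f"IF {dict(cond)} THEN {decisions.pop()}")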

  2. Study on Supplier Selection for Photovoltaic Enterprises Based on Rough Set

    Institute of Scientific and Technical Information of China (English)

    甘卫华; 张蕊

    2013-01-01

    In this paper, in light of the problems faced by photovoltaic enterprises after rapid industrial expansion, such as excess production capacity, technical complexity, a short development history and a volatile industrial environment, we analyzed the raw materials of a key solar-energy module in the empirical case of a photovoltaic enterprise. Rough sets can handle uncertain information relatively objectively without requiring prior knowledge, so we used the rough set method to select the suppliers of these raw materials and to provide scientific recommendations for the enterprise.

  3. Cooling-load prediction by the combination of rough set theory and an artificial neural-network based on data-fusion technique

    International Nuclear Information System (INIS)

    Hou Zhijian; Lian Zhiwei; Yao Ye; Yuan Xinjian

    2006-01-01

    A novel method integrating rough set (RS) theory and an artificial neural network (ANN) based on a data-fusion technique is presented to forecast an air-conditioning load. The data-fusion technique is the process of combining data from multiple sensors or related information to estimate or predict entity states. In this paper, RS theory is applied to find the factors relevant to the load, which are used as inputs of an artificial neural network to predict the cooling load. To improve the accuracy and enhance the robustness of the load forecasting results, a general load-prediction model, synthesizing multiple RS-ANN (RSAN) models (MRAN), is presented so as to make full use of redundant information. The optimum principle is employed to deduce the weights of each RSAN model. Actual prediction results from a real air-conditioning system show that the MRAN forecasting model is better than the individual RSAN and autoregressive integrated moving average (ARIMA) ones, with a relative error within 4%. In addition, the individual RSAN forecasting results are better than those of ARIMA.
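
    The abstract does not spell out the "optimum principle" used to weight the individual RSAN models; one common realization, shown here purely as an illustrative assumption, is inverse error-variance weighting, which minimizes the combined forecast variance when errors are uncorrelated:

        # Inverse error-variance weighting of an ensemble of forecasters.
        # This is an assumed stand-in for the paper's unspecified optimum principle.

        errors = {  # historical forecast error variances (hypothetical)
            "rsan_1": 4.0,
            "rsan_2": 2.0,
            "rsan_3": 8.0,
        }

        inv = {m: 1.0 / v for m, v in errors.items()}
        total = sum(inv.values())
        weights = {m: w / total for m, w in inv.items()}

        forecasts = {"rsan_1": 103.0, "rsan_2": 98.0, "rsan_3": 110.0}  # kW, hypothetical
        combined = sum(weights[m] * forecasts[m] for m in forecasts)
        print(weights, combined)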

  4. Classification of breast masses in ultrasound images using self-adaptive differential evolution extreme learning machine and rough set feature selection.

    Science.gov (United States)

    Prabusankarlal, Kadayanallur Mahadevan; Thirumoorthy, Palanisamy; Manavalan, Radhakrishnan

    2017-04-01

    A method using rough set feature selection and an extreme learning machine (ELM), whose learning strategy and hidden node parameters are optimized by the self-adaptive differential evolution (SaDE) algorithm, is investigated for the classification of breast masses. A pathologically proven database of 140 breast ultrasound images, including 80 benign and 60 malignant, is used for this study. A fast nonlocal means algorithm is applied for speckle noise removal, and multiresolution analysis of the undecimated discrete wavelet transform is used for accurate segmentation of breast lesions. A total of 34 features, including 29 textural and five morphological, are applied to a [Formula: see text]-fold cross-validation scheme, in which the more relevant features are selected by the quick-reduct algorithm, and the breast masses are discriminated into benign or malignant using the SaDE-ELM classifier. The diagnostic accuracy of the system is assessed using parameters such as accuracy (Ac), sensitivity (Se), specificity (Sp), positive predictive value (PPV), negative predictive value (NPV), Matthews correlation coefficient (MCC), and area ([Formula: see text]) under the receiver operating characteristic curve. The performance of the proposed system is also compared with other classifiers, such as the support vector machine and ELM. The results indicated that the proposed SaDE algorithm has superior performance, with [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text], compared to the other classifiers.
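
    The quick-reduct algorithm named here greedily grows an attribute subset until its rough-set dependency degree matches that of the full attribute set. A minimal sketch on a toy decision table (illustrative features, not the study's 34):

        # Quick reduct: greedily add the attribute that most raises the dependency
        # degree gamma = |positive region| / |universe|. Toy data, not the study's.

        def partition(rows, attrs):
            blocks = {}
            for i, r in enumerate(rows):
                blocks.setdefault(tuple(r[a] for a in attrs), set()).add(i)
            return blocks.values()

        def gamma(rows, attrs, decision):
            pos = 0
            for block in partition(rows, attrs):
                if len({rows[i][decision] for i in block}) == 1:
                    pos += len(block)
            return pos / len(rows)

        def quick_reduct(rows, conds, decision):
            reduct, best = [], gamma(rows, conds, decision)
            while gamma(rows, reduct, decision) < best:
                cand = max((a for a in conds if a not in reduct),
                           key=lambda a: gamma(rows, reduct + [a], decision))
                reduct.append(cand)
            return reduct

        rows = [
            {"texture": 1, "shape": 0, "margin": 1, "class": "malignant"},
            {"texture": 1, "shape": 1, "margin": 0, "class": "malignant"},
            {"texture": 0, "shape": 1, "margin": 0, "class": "benign"},
            {"texture": 0, "shape": 0, "margin": 1, "class": "benign"},
        ]
        print(quick_reduct(rows, ["texture", "shape", "margin"], "class"))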

  5. Evaluation of the methodologies used to generate random pavement profiles based on the power spectral density: An approach based on the International Roughness Index

    Directory of Open Access Journals (Sweden)

    Boris Jesús Goenaga

    2017-01-01

    Full Text Available Pavement roughness is the main variable that produces vertical excitation in vehicles. Pavement profiles are the main determinant of (i) the discomfort perceived by users and (ii) the dynamic loads generated at the tire-pavement interface; hence their evaluation constitutes an essential step in a Pavement Management System. The present document evaluates two specific techniques used to simulate pavement profiles, the shaping filter and the sinusoidal approach, both based on the Power Spectral Density. Pavement roughness was evaluated using the International Roughness Index (IRI), which is the most widely used index for characterizing longitudinal road profiles. Appropriate parameters were defined in the simulation process to obtain pavement profiles with specific ranges of IRI values using both simulation techniques. The results suggest that using the sinusoidal approach one can generate random profiles with IRI values that are representative of different road types; therefore, one could generate a profile for a paved or an unpaved road, representing all the categories defined by the ISO 8608 standard. On the other hand, to obtain similar results using the shaping filter approximation, a modification of the simulation parameters is necessary. The newly proposed values allow one to generate pavement profiles with high levels of roughness, covering a wider range of surface types. Finally, the results of the current investigation could be used to further improve our understanding of the effect of pavement roughness on tire-pavement interaction. The evaluated methodologies could be used to generate random profiles with specific levels of roughness to assess their effect on the dynamic loads generated at the tire-pavement interface and users' perception of road condition.
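
    A minimal sketch of the sinusoidal approach, assuming an ISO 8608-style displacement PSD Φ(n) = Φ₀(n/n₀)⁻² with n₀ = 0.1 cycles/m; the roughness coefficient Φ₀ below is an illustrative value, not one calibrated to a target IRI:

        # Sinusoidal road-profile synthesis from an ISO 8608-style PSD:
        # z(x) = sum_i sqrt(2 * Phi(n_i) * dn) * cos(2*pi*n_i*x + phi_i).
        # phi0 is illustrative, not calibrated to a target IRI.

        import math, random

        def road_profile(length=250.0, dx=0.25, phi0=16e-6, n0=0.1,
                         n_min=0.01, n_max=4.0, bands=200, seed=1):
            random.seed(seed)
            dn = (n_max - n_min) / bands
            terms = []
            for i in range(bands):
                n = n_min + (i + 0.5) * dn              # spatial frequency, cycles/m
                amp = math.sqrt(2 * phi0 * (n / n0) ** -2 * dn)
                terms.append((amp, n, random.uniform(0, 2 * math.pi)))
            xs = [j * dx for j in range(int(length / dx))]
            zs = [sum(a * math.cos(2 * math.pi * n * x + p) for a, n, p in terms)
                  for x in xs]
            return xs, zs

        xs, zs = road_profile()
        print(max(zs), min(zs))  # elevation extremes of the synthetic profile, m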

  6. Fuzzy Rough Ring and Its Properties

    Institute of Scientific and Technical Information of China (English)

    REN Bi-jun; FU Yan-ling

    2013-01-01

    This paper is devoted to the theories of fuzzy rough ring and its properties. The fuzzy approximation space generated by fuzzy ideals and the fuzzy rough approximation operators were proposed in the frame of fuzzy rough set model. The basic properties of fuzzy rough approximation operators were analyzed and the consistency between approximation operators and the binary operation of ring was discussed.

  7. Optimization of Surface Roughness Parameters of Al-6351 Alloy in EDC Process: A Taguchi Coupled Fuzzy Logic Approach

    Science.gov (United States)

    Kar, Siddhartha; Chakraborty, Sujoy; Dey, Vidyut; Ghosh, Subrata Kumar

    2017-10-01

    This paper investigates the application of the Taguchi method with fuzzy logic for multi-objective optimization of roughness parameters in the electro discharge coating process of Al-6351 alloy with a powder metallurgical compacted SiC/Cu tool. A Taguchi L16 orthogonal array was employed to investigate the roughness parameters by varying tool parameters like composition and compaction load and electro discharge machining parameters like pulse-on time and peak current. Crucial roughness parameters, such as centre line average roughness, average maximum height of the profile and mean spacing of local peaks of the profile, were measured on the coated specimen. The signal-to-noise ratios were fuzzified to optimize the roughness parameters through a single comprehensive output measure (COM). The best COM was obtained with lower values of compaction load, pulse-on time and current and a 30:70 (SiC:Cu) tool composition. Analysis of variance was carried out, and a significant COM model was observed, with peak current yielding the highest contribution followed by pulse-on time, compaction load and composition. The deposited layer was characterised by X-ray diffraction analysis, which confirmed the presence of tool materials on the work piece surface.
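
    Since roughness is a smaller-the-better response, the Taguchi signal-to-noise ratio for each run is typically S/N = -10·log10(mean(y²)); a minimal sketch with hypothetical replicate measurements:

        # Taguchi smaller-the-better signal-to-noise ratio, the usual choice for
        # roughness responses. Replicate values are illustrative.

        import math

        def sn_smaller_better(ys):
            return -10 * math.log10(sum(y * y for y in ys) / len(ys))

        ra_replicates = [2.1, 2.4, 1.9]  # centre line average roughness, um (hypothetical)
        print(f"S/N = {sn_smaller_better(ra_replicates):.2f} dB")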

  8. Fit for purpose? Introducing a rational priority setting approach into a community care setting.

    Science.gov (United States)

    Cornelissen, Evelyn; Mitton, Craig; Davidson, Alan; Reid, Colin; Hole, Rachelle; Visockas, Anne-Marie; Smith, Neale

    2016-06-20

    Purpose - Program budgeting and marginal analysis (PBMA) is a priority setting approach that assists decision makers with allocating resources. Previous PBMA work establishes its efficacy and indicates that contextual factors complicate priority setting, which can hamper PBMA effectiveness. The purpose of this paper is to gain qualitative insight into PBMA effectiveness. Design/methodology/approach - A Canadian case study of PBMA implementation. Data consist of decision-maker interviews pre (n=20), post year-1 (n=12) and post year-2 (n=9) of PBMA to examine perceptions of baseline priority setting practice vis-à-vis desired practice, and perceptions of PBMA usability and acceptability. Findings - Fit emerged as a key theme in determining PBMA effectiveness. Fit herein refers to being of suitable quality and form to meet the intended purposes and needs of the end-users, and includes desirability, acceptability, and usability dimensions. Results confirm decision-maker desire for rational approaches like PBMA. However, most participants indicated that the timing of the exercise and the form in which PBMA was applied were not well-suited for this case study. Participant acceptance of and buy-in to PBMA changed during the study: a leadership change, limited organizational commitment, and concerns with organizational capacity were key barriers to PBMA adoption and thereby effectiveness. Practical implications - These findings suggest that a potential way-forward includes adding a contextual readiness/capacity assessment stage to PBMA, recognizing organizational complexity, and considering incremental adoption of PBMA's approach. Originality/value - These insights help us to better understand and work with priority setting conditions to advance evidence-informed decision making.

  9. A Positive Behavioral Approach for Aggression in Forensic Psychiatric Settings.

    Science.gov (United States)

    Tolisano, Peter; Sondik, Tracey M; Dike, Charles C

    2017-03-01

    Aggression toward self and others by complex patients admitted to forensic psychiatric settings is a relatively common yet extremely difficult behavior to treat. Traditional interventions in forensic inpatient settings have historically emphasized control and management over treatment. Research over the past several years has demonstrated the value of behavioral and psychosocial treatment interventions to reduce aggression and to increase prosocial skill development in inpatient forensic populations. Positive behavioral support (PBS) offers a comprehensive approach that incorporates the science of applied behavioral analysis (ABA) in support of patients with challenging behaviors, including aggression and violence. In this article, we describe a PBS model to treat aggression in forensic settings. PBS includes a comprehensive functional assessment, along with four basic elements: ecological strategies, positive programming, focused support strategies, and reactive strategies. Other key components are described, including data collection, staff training, fidelity checks to ensure correct implementation of the plan, and ongoing monitoring and revision of PBS strategies, according to treatment outcomes. Finally, a behavioral consultation team approach within the inpatient forensic setting is recommended, led by an assigned doctoral-level psychologist with specialized knowledge and training in behavioral methods. The behavioral consultation team works directly with the unit treatment team and the identified patient to develop, implement, and track a plan that may extend over several weeks to several months, including transition into the community. PBS can offer a positive systemic impact in forensic inpatient settings, such as providing a nonpharmacologic means to address aggression, reducing the incidences of restraint and seclusion, enhancing staff proficiency in managing challenging patient presentations, and reducing recidivism when used as part of the bridge to the community.

  10. Fuzzy set approach to quality function deployment: An investigation

    Science.gov (United States)

    Masud, Abu S. M.

    1992-01-01

    The final report of the 1992 NASA/ASEE Summer Faculty Fellowship at the Space Exploration Initiative Office (SEIO) in Langley Research Center is presented. Quality Function Deployment (QFD) is a process focused on facilitating the integration of the customer's voice in the design and development of a product or service. Various inputs, in the form of judgements and evaluations, are required during the QFD analyses, and all the input variables in these analyses are traditionally treated as numeric variables. The purpose of the research was to investigate how QFD analyses can be performed when some or all of the input variables are treated as linguistic variables with values expressed as fuzzy numbers. The reason for this consideration is that human judgement, perception, and cognition are often ambiguous and are better represented as fuzzy numbers. Two approaches for using fuzzy sets in QFD have been proposed. In both cases, all the input variables are considered as linguistic variables with values indicated as linguistic expressions. These expressions are then converted to fuzzy numbers. The difference between the two approaches lies in how the QFD computations are performed with these fuzzy numbers. In Approach 1, the fuzzy numbers are first converted to their equivalent crisp scores and the QFD computations are then performed using these crisp scores. As a result, the outputs of this approach are crisp numbers, similar to those in traditional QFD. In Approach 2, all the QFD computations are performed with the fuzzy numbers, and the outputs are fuzzy numbers as well. Both approaches have been explained with the help of illustrative examples of QFD application. Approach 2 has also been applied in a QFD application exercise in SEIO, involving a 'mini moon rover' design. The mini moon rover is a proposed tele-operated vehicle that will traverse and perform various tasks, including autonomous operations, on the moon surface. The output of the moon rover application exercise is a
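
    The report does not specify which crisp-score conversion Approach 1 used; one standard choice, shown here as an illustrative stand-in, is the centroid of a triangular fuzzy number (a, b, c), namely (a + b + c)/3:

        # Centroid defuzzification of triangular fuzzy numbers representing a
        # hypothetical linguistic scale; an assumed stand-in for Approach 1's
        # unspecified crisp-score conversion.

        linguistic = {
            "low":    (0.0, 0.1, 0.3),
            "medium": (0.3, 0.5, 0.7),
            "high":   (0.7, 0.9, 1.0),
        }

        def centroid(tfn):
            a, b, c = tfn
            return (a + b + c) / 3

        for term, tfn in linguistic.items():
            print(term, round(centroid(tfn), 3))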

  11. Moment problems and the causal set approach to quantum gravity

    International Nuclear Information System (INIS)

    Ash, Avner; McDonald, Patrick

    2003-01-01

    We study a collection of discrete Markov chains related to the causal set approach to modeling discrete theories of quantum gravity. The transition probabilities of these chains satisfy a general covariance principle, a causality principle, and a renormalizability condition. The corresponding dynamics are completely determined by a sequence of non-negative real coupling constants. Using techniques related to the classical moment problem, we give a complete description of any such sequence of coupling constants. We prove a representation theorem: every discrete theory of quantum gravity arising from causal set dynamics satisfying covariance, causality, and renormalizability corresponds to a unique probability distribution function on the non-negative real numbers, with the coupling constants defining the theory given by the moments of the distribution

  12. Towards a Set Theoretical Approach to Big Data Analytics

    DEFF Research Database (Denmark)

    Mukkamala, Raghava Rao; Hussain, Abid; Vatrapu, Ravi

    2014-01-01

    Formal methods, models and tools for social big data analytics are largely limited to graph theoretical approaches such as social network analysis (SNA) informed by relational sociology. There are no other unified modeling approaches to social big data that integrate the conceptual, formal and software realms. In this paper, we first present and discuss a theory and conceptual model of social data. Second, we outline a formal model based on set theory and discuss the semantics of the formal model with a real-world social data example from Facebook. Third, we briefly present and discuss the application of this technique to the data analysis of big social data collected from the Facebook page of the fast fashion company, H&M.

  13. Utilizing 4-H in Afterschool Settings: Two Approaches for Integration

    Directory of Open Access Journals (Sweden)

    Rachel Rudd

    2013-03-01

    Full Text Available As our communities grow and change, afterschool programs represent an avenue to bring resources to populations that would otherwise not have access to them. Combining 4-H with the afterschool environment can be beneficial in supporting and raising the quality of the afterschool programs being offered. This article explores the benefits and challenges of two approaches to implementing 4-H programming in afterschool settings: the 4-H managed program, which is created and run solely by 4-H faculty and staff, and the 4-H afterschool partnership, which is facilitated jointly with existing afterschool programs. Regardless of the approach, combining 4-H with afterschool programs can strengthen well-established programs and can enhance the quality of all afterschool programs.

  14. Basis set approach in the constrained interpolation profile method

    International Nuclear Information System (INIS)

    Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

    2003-07-01

    We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)

  15. Surface roughness and morphology of dental nanocomposites polished by four different procedures evaluated by a multifractal approach

    Energy Technology Data Exchange (ETDEWEB)

    Ţălu, Ştefan, E-mail: stefan_ta@yahoo.com [Technical University of Cluj-Napoca, Faculty of Mechanical Engineering, Department of AET, Discipline of Descriptive Geometry and Engineering Graphics, 103-105 B-dul Muncii St., Cluj-Napoca 400641, Cluj (Romania); Stach, Sebastian, E-mail: sebastian.stach@us.edu.pl [University of Silesia, Faculty of Computer Science and Materials Science, Institute of Informatics, Department of Biomedical Computer Systems, Będzińska 39, 41-205 Sosnowiec (Poland); Lainović, Tijana, E-mail: tijana.lainovic@gmail.com [University of Novi Sad, Faculty of Medicine, School of Dentistry, Hajduk Veljkova 3, 21000 Novi Sad (Serbia); Vilotić, Marko, E-mail: markovil@uns.ac.rs [University of Novi Sad, Faculty of Technical Sciences, Department for Production Engineering, Trg Dositeja Obradovića 6, 21000 Novi Sad (Serbia); Blažić, Larisa, E-mail: larisa.blazic@gmail.com [University of Novi Sad, Faculty of Medicine, School of Dentistry, Clinic of Dentistry of Vojvodina, Department of Restorative Dentistry and Endodontics, Hajduk Veljkova 3, 21000 Novi Sad (Serbia); Alb, Sandu Florin, E-mail: albflorin@yahoo.com [“Iuliu Haţieganu” University of Medicine and Pharmacy, Faculty of Dentistry, Department of Periodontology, 8 Victor Babeş St., 400012 Cluj-Napoca (Romania); Kakaš, Damir, E-mail: kakasdam@uns.ac.rs [University of Novi Sad, Faculty of Technical Sciences, Department for Production Engineering, Trg Dositeja Obradovića 6, 21000 Novi Sad (Serbia)

    2015-03-01

    Graphical abstract: - Highlights: • Multifractals are good indicators of polished dental composites 3-D surface structure. • The nanofilled composite had superior 3-D surface properties compared to the nanohybrid one. • Composite polishing with diamond paste created improved 3-D multifractal structure. • Recommendation: polish the composite with diamond paste if using the one-step tool. • Multifractal analysis could become essential in designing new dental surfaces. - Abstract: The objective of this study was to determine the effect of different dental polishing methods on surface texture parameters of dental nanocomposites. The 3-D surface morphology was investigated by atomic force microscopy (AFM) and multifractal analysis. Two representative dental resin-based nanocomposites were investigated: a nanofilled and a nanohybrid composite. The samples were polished by two dental polishing protocols using multi-step and one-step system. Both protocols were then followed by diamond paste polishing. The 3-D surface roughness of samples was studied by AFM on square areas of topography on the 80 × 80 μm² scanning area. The multifractal spectrum theory based on computational algorithms was applied for AFM data and multifractal spectra were calculated. The generalized dimension D_q and the singularity spectrum f(α) provided quantitative values that characterize the local scale properties of dental nanocomposites polished by four different dental polishing protocols at nanometer scale. The results showed that the larger the spectrum width Δα (Δα = α_max − α_min) of the multifractal spectra f(α), the more non-uniform was the surface morphology. Also, the 3-D surface topography was described by statistical parameters, according to ISO 25178-2:2012. The 3-D surface of samples had a multifractal nature. Nanofilled composite had lower values of height parameters than nanohybrid composites, due to its composition. Multi-step polishing protocol

  16. Surface roughness and morphology of dental nanocomposites polished by four different procedures evaluated by a multifractal approach

    International Nuclear Information System (INIS)

    Ţălu, Ştefan; Stach, Sebastian; Lainović, Tijana; Vilotić, Marko; Blažić, Larisa; Alb, Sandu Florin; Kakaš, Damir

    2015-01-01

    Graphical abstract: - Highlights: • Multifractals are good indicators of polished dental composites 3-D surface structure. • The nanofilled composite had superior 3-D surface properties compared to the nanohybrid one. • Composite polishing with diamond paste created improved 3-D multifractal structure. • Recommendation: polish the composite with diamond paste if using the one-step tool. • Multifractal analysis could become essential in designing new dental surfaces. - Abstract: The objective of this study was to determine the effect of different dental polishing methods on surface texture parameters of dental nanocomposites. The 3-D surface morphology was investigated by atomic force microscopy (AFM) and multifractal analysis. Two representative dental resin-based nanocomposites were investigated: a nanofilled and a nanohybrid composite. The samples were polished by two dental polishing protocols using multi-step and one-step system. Both protocols were then followed by diamond paste polishing. The 3-D surface roughness of samples was studied by AFM on square areas of topography on the 80 × 80 μm² scanning area. The multifractal spectrum theory based on computational algorithms was applied for AFM data and multifractal spectra were calculated. The generalized dimension D_q and the singularity spectrum f(α) provided quantitative values that characterize the local scale properties of dental nanocomposites polished by four different dental polishing protocols at nanometer scale. The results showed that the larger the spectrum width Δα (Δα = α_max − α_min) of the multifractal spectra f(α), the more non-uniform was the surface morphology. Also, the 3-D surface topography was described by statistical parameters, according to ISO 25178-2:2012. The 3-D surface of samples had a multifractal nature. Nanofilled composite had lower values of height parameters than nanohybrid composites, due to its composition. Multi-step polishing protocol created a better

  17. Generalizing roughness: experiments with flow-oriented roughness

    Science.gov (United States)

    Trevisani, Sebastiano

    2015-04-01

    Surface texture analysis applied to High Resolution Digital Terrain Models (HRDTMs) improves the capability to characterize fine-scale morphology and permits the derivation of useful morphometric indexes. An important indicator to be taken into account in surface texture analysis is surface roughness, which can play a discriminating role in the detection of different geomorphic processes and factors. The evaluation of surface roughness is generally performed considering it as an isotropic surface parameter (e.g., Cavalli, 2008; Grohmann, 2011). However, surface texture often has an anisotropic character, which means that surface roughness can change according to the considered direction. In some applications, for example those involving surface flow processes, the anisotropy of roughness should be taken into account (e.g., Trevisani, 2012; Smith, 2014). Accordingly, we test the application of a flow-oriented directional measure of roughness, computed considering surface gravity-driven flow. For the calculation of flow-oriented roughness we use both classical variogram-based roughness (e.g., Herzfeld, 1996; Atkinson, 2000) as well as an ad-hoc developed robust modification of the variogram (MAD; Trevisani, 2014). The presented approach, based on a D8 algorithm, shows the potential impact of considering directionality in the calculation of roughness indexes. The use of flow-oriented roughness could improve the definition of effective proxies of impedance to flow. Preliminary results on the integration of directional roughness operators with morphometric-based models are promising and can be extended to more complex approaches. Atkinson, P.M., Lewis, P., 2000. Geostatistical classification for remote sensing: an introduction. Computers & Geosciences 26, 361-371. Cavalli, M., Marchi, L., 2008. Characterization of the surface morphology of an alpine alluvial fan using airborne LiDAR. Natural Hazards and Earth System Science 8 (2), 323-333. Grohmann, C

  18. The natural background approach to setting radiation standards

    International Nuclear Information System (INIS)

    Adler, H.I.; Federow, H.; Weinberg, A.M.

    1979-01-01

    The suggestion has often been made that an additional radiation exposure imposed on humanity as a result of some important activity such as electricity generation would be acceptable if the exposure was 'small' compared to the natural background. In order to make this concept quantitative and objective, we propose that 'small compared with the natural background' be interpreted as the standard deviation (weighted with the exposed population) of the natural background. We believe that this use of the variation in natural background radiation is less arbitrary and requires fewer unfounded assumptions than some current approaches to standard-setting. The standard deviation is an easily calculated statistic that is small compared with the mean value for natural exposures of populations. It is an objectively determined quantity and its significance is generally understood. Its determination does not omit any of the pertinent data. When this method is applied to the population of the USA, it implies that a dose of 20 mrem/year would be an acceptable standard. This is closely comparable to the 25 mrem/year suggested by the Environmental Protection Agency as the maximum allowable exposure to an individual in the general population as a result of the operation of the complete uranium fuel cycle. Other agents for which a natural background exists can be treated in the same way as radiation. In addition, a second method for determining permissible exposure levels for agents other than radiation is presented. This method makes use of the natural background radiation data as a primary standard. Some observations on benzo(a)pyrene, using this latter method, are presented. (author)
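
    The proposed statistic is simple to compute; a minimal sketch of a population-weighted standard deviation with illustrative regional doses and populations (not the data behind the 20 mrem/year figure):

        # Population-weighted mean and standard deviation of natural background
        # dose, the statistic proposed as the allowable increment. All numbers
        # are illustrative.

        import math

        doses = [80, 100, 130, 180]           # mrem/yr by region (illustrative)
        pops  = [50e6, 120e6, 60e6, 5e6]      # exposed populations (illustrative)

        total = sum(pops)
        mean = sum(d * p for d, p in zip(doses, pops)) / total
        var = sum(p * (d - mean) ** 2 for d, p in zip(doses, pops)) / total
        print(f"weighted mean = {mean:.1f} mrem/yr, weighted sd = {math.sqrt(var):.1f} mrem/yr")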

  19. Axis Problem of Rough 3-Valued Algebras

    Institute of Scientific and Technical Information of China (English)

    Jianhua Dai; Weidong Chen; Yunhe Pan

    2006-01-01

    The collection of all the rough sets of an approximation space has been given several algebraic interpretations, including Stone algebras, regular double Stone algebras, semi-simple Nelson algebras, pre-rough algebras and 3-valued Lukasiewicz algebras. A 3-valued Lukasiewicz algebra is a Stone algebra, a regular double Stone algebra, a semi-simple Nelson algebra, and a pre-rough algebra. Thus, we call the algebra constructed from the collection of rough sets of an approximation space a rough 3-valued Lukasiewicz algebra. In this paper, the rough 3-valued Lukasiewicz algebras, which are a special kind of 3-valued Lukasiewicz algebras, are studied. Whether the rough 3-valued Lukasiewicz algebra is an axled 3-valued Lukasiewicz algebra is examined.

  20. Nursing Minimum Data Set Based on EHR Archetypes Approach.

    Science.gov (United States)

    Spigolon, Dandara N; Moro, Cláudia M C

    2012-01-01

    The establishment of a Nursing Minimum Data Set (NMDS) can facilitate the use of health information systems. The adoption of such sets and their representation based on archetypes are a way of developing and supporting health systems. The objective of this paper is to describe the definition of a minimum data set for nursing in endometriosis, represented with archetypes. The study was divided into two steps: defining the Nursing Minimum Data Set for endometriosis, and developing the archetypes related to the NMDS. The nursing data set for endometriosis was represented in the form of archetypes, using the whole perception of the evaluation item, organs and senses. This form of representation is an important tool for semantic interoperability and knowledge representation in health information systems.

  1. Routing Trains Through Railway Junctions: A New Set Packing Approach

    DEFF Research Database (Denmark)

    Lusby, Richard; Larsen, Jesper; Ryan, David

    We show how the problem can be formulated as a set packing model. To exploit the structure of the problem we present a solution procedure which entails solving the dual of this formulation through the dynamic addition of violated cuts (primal variables). A discussion of the variable (train path) generation
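
    In miniature, the model selects a maximum collection of pairwise disjoint train paths, where each path is the set of track sections it occupies. The brute-force sketch below is for illustration only; the paper instead works with the dual of the formulation and dynamically generated cuts. Paths and sections are hypothetical:

        # Set packing in miniature: pick the largest collection of pairwise
        # disjoint train paths. Brute force, for illustration only.

        from itertools import combinations

        paths = {  # hypothetical train paths -> occupied track sections
            "t1": {"A", "B"},
            "t2": {"B", "C"},
            "t3": {"C", "D"},
            "t4": {"A", "D"},
        }

        best, best_size = (), 0
        names = list(paths)
        for r in range(len(names), 0, -1):
            for combo in combinations(names, r):
                sets = [paths[n] for n in combo]
                if sum(len(s) for s in sets) == len(set().union(*sets)):  # disjoint
                    if r > best_size:
                        best, best_size = combo, r
            if best_size:
                break
        print(best)  # e.g. ('t1', 't3')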

  2. A potential approach to solutions for set games

    NARCIS (Netherlands)

    Driessen, T.S.H.; Sun, H.

    2001-01-01

    Concerning the solution theory for set games, the paper introduces a new solution by allocating, to any player, the items (taken from a universe) that are attainable for the player but cannot be blocked (by any coalition not containing the player). The resulting value turns out to be an utmost

  3. Standard setting and quality of assessment: A conceptual approach ...

    African Journals Online (AJOL)

    Quality performance standards and the effect of assessment outcomes are important in the educational milieu, as assessment remains the representative ... not be seen as a methodological process of setting pass/fail cut-off points only, but as a powerful catalyst for quality improvements in HPE by promoting excellence in ...

  4. Behavioral Analytic Approach to Placement of Patients in Community Settings.

    Science.gov (United States)

    Glickman, Henry S.; And Others

    Twenty adult psychiatric outpatients were assessed by their primary therapists on the Current Behavior Inventory prior to placing them in community settings. The diagnoses included schizophrenia, major affective disorder, dysthymic disorder, and atypical paranoid disorder. The inventory assessed behaviors in four areas: independent community…

  5. An ecosystem approach to malaria control in an urban setting

    Directory of Open Access Journals (Sweden)

    Carrasquilla Gabriel

    2001-01-01

    Full Text Available We conducted a research project aimed at strengthening local government and the community for a sustainable malaria control strategy. The project began with a baseline diagnosis of malaria prevalence, a KAP survey, entomology, and health services delivery, after which an epidemiological study was performed to identify risk factors associated with malaria, thereafter used to plan intervention measures. A program evaluation was conducted five years later. By using an ecosystem approach to reanalyze data, this paper discusses how malaria arises from a complex interaction of cultural, economic, ecological, social, and individual factors. Intervention measures require an intersectoral and transdisciplinary approach that does not exist at the moment. Health sector leadership is limited, and there is no true community participation. Implications for research, including the use of qualitative and quantitative methods, study design, and complexity of data analysis are discussed. Finally, implications for malaria control are discussed, stressing the differences between the ecosystem and integrated disease control approaches.

  6. Institutional Complexity and Social Entrepreneurship: A Fuzzy-Set Approach

    OpenAIRE

    Munoz, PA; Kibler, E

    2016-01-01

    This study examines the local institutional complexity of social entrepreneurship. Building on a novel fuzzy-set analysis of 407 social entrepreneurs in the UK, the study identifies five configurations of local institutional forces that collectively explain the confidence of social entrepreneurs in successfully managing their business. The findings demonstrate that local authorities are a dominant condition; yet combinations of other complementary—more and less formalized—local institutions n...

  7. Fuzzy set theoretic approach to fault tree analysis | Tyagi ...

    African Journals Online (AJOL)

    This approach can be widely used to improve the reliability and to reduce the operating cost of a system. The proposed techniques are discussed and illustrated by taking an example of a nuclear power plant. Keywords: Fault tree, Triangular and Trapezoidal fuzzy number, Fuzzy importance, Ranking of fuzzy numbers ...

  8. Subject de-biasing of data sets: A Bayesian approach

    International Nuclear Information System (INIS)

    Pate-Cornell, M.E.

    1994-01-01

    In this paper, the authors examine the relevance of data sets (for instance, of past incidents) for risk management decisions when there are reasons to believe that all types of incidents have not been reported at the same rate. Their objective is to infer from the data reports what actually happened in order to assess the potential benefits of different safety measures. The authors use a simple Bayesian model to correct (de-bias) the data sets given the nonreport rates, which are assessed (subjectively) by experts and encoded as the probabilities of reports given different characteristics of the events of interest. They compute a probability distribution for the past number of events given the past number of reports. They illustrate the method with two data sets: incidents in anesthesia in Australia, and oil spills in the Gulf of Mexico. In the first case, the de-biasing allows correcting for the fact that some types of incidents, such as technical malfunctions, are more likely to be reported when they occur than anesthetist mistakes. In the second case, the authors have to account for the fact that the rates of oil spill reports in different incident categories have increased over the years, perhaps at the same time as the rates of incidents themselves
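
    A minimal sketch of this kind of de-biasing, assuming a Poisson prior on true counts and independent reporting with a known probability p (the expert-assessed report rate); the numbers are illustrative, not the anesthesia or oil-spill data:

        # Posterior over the true number of incidents N given r reports, with
        # N ~ Poisson(lam) a priori and reports ~ Binomial(N, p). Computed in
        # log space for numerical safety. Illustrative numbers only.

        import math

        def posterior(r, p, lam, n_max=200):
            logs = {}
            for n in range(r, n_max):
                log_prior = -lam + n * math.log(lam) - math.lgamma(n + 1)
                log_like = (math.lgamma(n + 1) - math.lgamma(r + 1)
                            - math.lgamma(n - r + 1)
                            + r * math.log(p) + (n - r) * math.log(1 - p))
                logs[n] = log_prior + log_like
            m = max(logs.values())
            weights = {n: math.exp(v - m) for n, v in logs.items()}
            z = sum(weights.values())
            return {n: w / z for n, w in weights.items()}

        post = posterior(r=12, p=0.4, lam=25)
        mean = sum(n * q for n, q in post.items())
        print(f"posterior mean of true incidents ~ {mean:.1f}")  # analytically r + lam*(1-p) = 27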

  9. Level Set Approach to Anisotropic Wet Etching of Silicon

    Directory of Open Access Journals (Sweden)

    Branislav Radjenović

    2010-05-01

    Full Text Available In this paper a methodology for the three-dimensional (3D) modeling and simulation of the profile evolution during anisotropic wet etching of silicon, based on the level set method, is presented. Etching rate anisotropy in silicon is modeled taking into account full silicon symmetry properties, by means of an interpolation technique using experimentally obtained values for the etching rates along thirteen principal and high-index directions in KOH solutions. The resulting level set equations are solved using an open source implementation of the sparse field method (ITK library), developed in the medical image processing community and extended for the case of non-convex Hamiltonians. Simulation results for some interesting initial 3D shapes, as well as some more practical examples illustrating anisotropic etching simulation in the presence of masks (simple square aperture mask, convex corner undercutting and convex corner compensation, formation of suspended structures), are also shown. The obtained results show that the level set method can be used as an effective tool for wet etching process modeling, and that it is a viable alternative to the Cellular Automata method which now prevails in simulations of the wet etching process.

  10. Strategic approach to film marketing in international setting

    Directory of Open Access Journals (Sweden)

    Štavljanin Velimir

    2011-01-01

    Full Text Available This paper presents the strategic aspects of film marketing through an analysis of contemporary international theory and practice. The analysis is based on the basic principles of film marketing and film product development. Application of marketing principles in the film industry under the new business conditions is only a prerequisite, and no longer a guarantee, of success. From the point of view of marketing managers, success must be ensured by the strategic approach, which is addressed in the paper. Given that the most successful marketing activities depend on the marketing mix strategies, a novel approach to the film marketing mix was one of the main focuses of the paper. The attention of a separate chapter is focused on the film marketing mix, taking into account the impact of technology on film marketing.

  11. Approaches to handling uncertainty when setting environmental exposure standards

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2009-01-01

    The book attempts for the first time to cover the full range of issues related to model uncertainties, from the subjectivity of setting up a conceptual model of a given system, all the way to communicating the nature of model uncertainties to non-scientists and accounting for model uncertainties in policy decisions. Theoretical chapters, providing background information on specific steps in the modelling process and in the adoption of models by end-users, are complemented by illustrative case studies dealing with soils and global climate change. All the chapters are authored by recognized experts in their respective

  12. Concurrent approach for evolving compact decision rule sets

    Science.gov (United States)

    Marmelstein, Robert E.; Hammack, Lonnie P.; Lamont, Gary B.

    1999-02-01

    The induction of decision rules from data is important to many disciplines, including artificial intelligence and pattern recognition. To improve the state of the art in this area, we introduced the genetic rule and classifier construction environment (GRaCCE). It was previously shown that GRaCCE consistently evolved decision rule sets from data which were significantly more compact than those produced by other methods (such as decision tree algorithms). The primary disadvantage of GRaCCE, however, is its relatively poor run-time execution performance. In this paper, a concurrent version of the GRaCCE architecture is introduced, which improves the efficiency of the original algorithm. A prototype of the algorithm is tested on an in-house parallel processor configuration and the results are discussed.

  13. Problems with pragmatic approaches to setting a moral scope

    Directory of Open Access Journals (Sweden)

    James Henderson

    2010-12-01

    Full Text Available http://dx.doi.org/10.5007/1677-2954.2008v7n2p309 In Towards Justice and Virtue and "Distant Strangers, Moral Standing, and State Boundaries," Onora O'Neill argues that questions of the form "To whom is one obliged to accord ethical treatment?" may be decided based purely on the actions of the agent in question. In particular, she claims that metaphysical accounts of personhood are not necessary to set a moral scope and that such accounts have failed in any case. While there can be no doubt that no account of personhood has achieved unanimous acceptance, her account is found wanting based on the observation that actions are not sufficient to separate all of those within our moral scope from all of those outside it. Indeed, clear examples of entities not deserving ethical treatment fall under her umbrella of protection. Solving this problem requires just what she seeks to exclude from her theory - an account of personhood. By the paper's end, it should be clear that any theory based purely on the actions of agents will be insufficient to separate all the ethical wheat from the chaff.

  14. A facile and cost-effective approach to engineer surface roughness for preparation of large-scale superhydrophobic substrate with high adhesive force

    Science.gov (United States)

    Zhou, Bingpu; Tian, Jingxuan; Wang, Cong; Gao, Yibo; Wen, Weijia

    2016-12-01

    This study presents a convenient avenue to fabricate polydimethylsiloxane (PDMS) with controllable surface morphologies and wetting characteristics via a standard molding technique. The templates with engineered surface roughness were simply prepared by a combination of microfluidics and photo-polymerization of N-Isopropylacrylamide (NIPAM). The surface morphology of the mold could be adjusted via the ultraviolet-curing duration or the grafting density, which means that the surface of a PDMS sample replicated from the mold can also be easily controlled with the proposed method. Furthermore, via multiple grafting and replication processes, we have demonstrated that the hydrophobicity of the prepared PDMS samples can be swiftly enhanced to a contact angle of ∼154°, with a highly adhesive force toward resident water droplets. The obtained PDMS samples exhibited good resistance to external mechanical deformation for up to 100 cycles. The proposed scheme is time-saving, cost-effective and suitable for large-scale production of superhydrophobic PDMS substrates. We believe that the presented approach can provide a promising method for preparing superhydrophobic surfaces with highly adhesive force for on-chip liquid transport, localized reaction, etc.

  15. Characterizing aerodynamic roughness length (z0) for a debris-covered glacier: aerodynamic inversion and SfM-derived microtopographic approaches

    Science.gov (United States)

    Miles, Evan; Steiner, Jakob; Brun, Fanny; Detert, Martin; Buri, Pascal; Pellicciotti, Francesca

    2016-04-01

    Aerodynamic surface roughness is an essential parameter in surface energy balance studies. While actual measurements on bare ice glaciers are rare, a wide range of literature values exist for ice and snow surfaces. There are very few values suggested for debris covered glaciers and actual measurements are even scarcer - studies instead optimize z0 or use a reference value. The increased use of photogrammetry on glaciers provides an opportunity to characterize the range of z0 values meaningful for debris-covered glaciers. We apply Agisoft's Structure-from-Motion process chain to produce high resolution DEMs for five 1m x 1m plots (1mm resolution) with differing grain-size distributions, as well as a large ~180m x ~180m depression (5cm) on Lirung Glacier in the Nepalese Himalayas. For each plot, we calculate z0 according to transect-based microtopographic parameterisations. We compare individual-transect z0 estimates based on profile position and direction, and develop a grid version of the algorithms aggregating height data from all bidirectional transects. This grid approach is applied to our larger DEM to characterize the variability of z0 across the study site for each algorithm. For the plot DEMs, z0 estimated by any algorithm varies by an order of magnitude based on transect position. Although the algorithms reproduce the same variability among transects and plots, z0 estimates vary by an order of magnitude between algorithms. For any algorithm, however, we find minimal difference between cross- and down-glacier profile directions. At the basin scale, results from different algorithms are strongly correlated and results are more closely clustered with the exception of the Rounce (2015) algorithm, while any algorithm's values range by two orders of magnitude across the study depression. The Rounce algorithm consistently produced the highest z0 values, while the Lettau (1969) and Munro (1989) methods produced the lowest values, and use of the Nield (2013
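
    Of the microtopographic parameterisations compared, Lettau's (1969) is commonly written as z0 = 0.5·h·s/S, with h the effective obstacle height, s the silhouette (frontal) area facing the flow, and S the ground area per obstacle. A sketch with illustrative inputs, not values from Lirung Glacier:

        # Lettau's (1969) microtopographic estimate of aerodynamic roughness
        # length: z0 = 0.5 * h_eff * s / S. Inputs are illustrative.

        def z0_lettau(h_eff, silhouette_area, ground_area):
            return 0.5 * h_eff * silhouette_area / ground_area

        # e.g. 0.3 m debris mounds, 0.05 m^2 frontal area, 1 m^2 of surface each
        print(f"z0 ~ {z0_lettau(0.3, 0.05, 1.0):.4f} m")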

  16. Fractal approach to surface roughness of TiO₂/WO₃ coatings formed by plasma electrolytic oxidation process

    Energy Technology Data Exchange (ETDEWEB)

    Rožić, L.J., E-mail: ljrozic@nanosys.ihtmbg.ac.rs [University of Belgrade, IChTM-Department of Catalysis and Chemical Engineering, Njegoševa 12, Belgrade (Serbia); Petrović, S.; Radić, N. [University of Belgrade, IChTM-Department of Catalysis and Chemical Engineering, Njegoševa 12, Belgrade (Serbia); Stojadinović, S. [University of Belgrade, Faculty of Physics, Studentski trg 12-16, Belgrade (Serbia); Vasilić, R. [Faculty of Environmental Governance and Corporate Responsibility, Educons University, Vojvode Putnika 87, Sremska Kamenica (Serbia); Stefanov, P. [Institute of General and Inorganic Chemistry, Bulgarian Academy of Sciences, Sofia 1113 (Bulgaria); Grbić, B. [University of Belgrade, IChTM-Department of Catalysis and Chemical Engineering, Njegoševa 12, Belgrade (Serbia)

    2013-07-31

    In this study, we have shown that atomic force microscopy is a powerful technique to study the fractal parameters of TiO₂/WO₃ coatings prepared by plasma electrolytic oxidation (PEO) process. Since the surface roughness of obtained oxide coatings affects their physical properties, an accurate description of roughness parameters is highly desirable. The surface roughness, described by root mean squared and arithmetic average values, is analyzed considering the scans of a series of atomic force micrographs. The results show that the oxide coatings exhibit lower surface roughness in initial stage of PEO process. Also, the surfaces of TiO₂/WO₃ coatings exhibit fractal behavior. Positive correlation between the fractal dimension and surface roughness of the surfaces of TiO₂/WO₃ coatings in initial stage of PEO process was found. - Highlights: • TiO₂/WO₃ coatings were obtained by plasma electrolytic oxidation. • Oxide coatings exhibit lower surface roughness in initial stage of process. • The surfaces of TiO₂/WO₃ coatings exhibit fractal behavior.

  17. Measurement of surface roughness

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo

    This document is used in connection with two 3-hour laboratory exercises that are part of the course GEOMETRICAL METROLOGY AND MACHINE TESTING. The laboratories include a demonstration of the function of roughness measuring instruments plus a series of exercises illustrating roughness measurement.

  18. On the sighting of unicorns: A variational approach to computing invariant sets in dynamical systems

    Science.gov (United States)

    Junge, Oliver; Kevrekidis, Ioannis G.

    2017-06-01

    We propose to compute approximations to invariant sets in dynamical systems by minimizing an appropriate distance between a suitably selected finite set of points and its image under the dynamics. We demonstrate, through computational experiments, that this approach can successfully converge to approximations of (maximal) invariant sets of arbitrary topology, dimension, and stability, such as, e.g., saddle type invariant sets with complicated dynamics. We further propose to extend this approach by adding a Lennard-Jones type potential term to the objective function, which yields more evenly distributed approximating finite point sets, and illustrate the procedure through corresponding numerical experiments.
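
    A minimal sketch of the variational idea, assuming the Hénon map as the dynamics and a plain Nelder-Mead minimization of the distance between the point set and its image; the paper's objective and optimizer are constructed more carefully (optionally with the Lennard-Jones spreading term), so treat this only as an illustration:

        # Find a finite point set X that approximately satisfies f(X) ~ X by
        # minimizing how far each image point lands from the set itself.
        # Map, set size, and optimizer settings are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        N = 20  # number of points in the approximating set

        def henon(pts, a=1.4, b=0.3):
            x, y = pts[:, 0], pts[:, 1]
            return np.column_stack([1 - a * x**2 + y, b * x])

        def objective(flat):
            pts = flat.reshape(N, 2)
            img = henon(pts)
            d = np.linalg.norm(img[:, None, :] - pts[None, :, :], axis=2)
            return d.min(axis=1).sum()  # each image point should be near the set

        rng = np.random.default_rng(0)
        res = minimize(objective, rng.uniform(-1, 1, 2 * N), method="Nelder-Mead",
                       options={"maxiter": 40000, "maxfev": 40000})
        print(res.fun)  # small residual -> the set nearly maps into itself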

  19. Rough Neutrosophic Multi-Attribute Decision-Making Based on Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Kalyan Mondal

    2015-01-01

    Full Text Available This paper presents rough neutrosophic multi-attribute decision making based on grey relational analysis. While the concept of neutrosophic sets is a powerful logic to deal with indeterminate and inconsistent data, the theory of rough neutrosophic sets is also a powerful mathematical tool to deal with incompleteness. The rating of all alternatives is expressed with the upper and lower approximation operators and the pair of neutrosophic sets which are characterized by truth-membership degree, indeterminacy-membership degree, and falsity-membership degree. The weight of each attribute is only partially known to the decision maker. We extend the neutrosophic grey relational analysis method to the rough neutrosophic grey relational analysis method and apply it to multi-attribute decision making problems. The information entropy method is used to obtain the partially known attribute weights. An accumulated geometric operator is defined to transform a rough neutrosophic number (neutrosophic pair) to a single valued neutrosophic number. The neutrosophic grey relational coefficient is determined by using the Hamming distance between each alternative and the ideal rough neutrosophic estimates reliability solution and the ideal rough neutrosophic estimates unreliability solution. The rough neutrosophic relational degree is then defined to determine the ranking order of all alternatives. Finally, a numerical example is provided to illustrate the applicability and efficiency of the proposed approach.
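    As a rough sketch of the grey relational machinery described above (omitting the entropy-based weighting and the accumulated geometric operator), the following assumes each rating has already been reduced to a single-valued neutrosophic number (T, I, F); the numbers and weights are illustrative, not taken from the paper.

      import numpy as np

      # rows = alternatives, columns = attributes; each cell is a single-valued
      # neutrosophic number (T, I, F) -- illustrative values, not the paper's
      M = np.array([[(0.8, 0.2, 0.1), (0.6, 0.3, 0.3)],
                    [(0.7, 0.1, 0.2), (0.9, 0.2, 0.1)]])
      w = np.array([0.5, 0.5])   # attribute weights
      rho = 0.5                  # distinguishing coefficient

      # ideal reliability solution: max T, min I, min F per attribute
      ideal = np.stack([M[..., 0].max(0), M[..., 1].min(0), M[..., 2].min(0)], -1)
      delta = np.abs(M - ideal).sum(-1) / 3.0   # Hamming distance, per cell
      coef = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
      degree = (coef * w).sum(1)                # grey relational degree
      print("ranking (best first):", np.argsort(-degree))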

  20. Modeling surface roughness scattering in metallic nanowires

    Energy Technology Data Exchange (ETDEWEB)

    Moors, Kristof, E-mail: kristof@itf.fys.kuleuven.be [KU Leuven, Institute for Theoretical Physics, Celestijnenlaan 200D, B-3001 Leuven (Belgium); IMEC, Kapeldreef 75, B-3001 Leuven (Belgium); Sorée, Bart [IMEC, Kapeldreef 75, B-3001 Leuven (Belgium); Physics Department, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerpen (Belgium); KU Leuven, Electrical Engineering (ESAT) Department, Kasteelpark Arenberg 10, B-3001 Leuven (Belgium); Magnus, Wim [IMEC, Kapeldreef 75, B-3001 Leuven (Belgium); Physics Department, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerpen (Belgium)

    2015-09-28

    Ando's model provides a rigorous quantum-mechanical framework for electron-surface roughness scattering, based on the detailed roughness structure. We apply this method to metallic nanowires and improve the model by introducing surface roughness distribution functions on a finite domain with analytical expressions for the average surface roughness matrix elements. This approach is valid for any roughness size and extends beyond the commonly used Prange-Nee approximation. The resistivity scaling is obtained from the self-consistent relaxation time solution of the Boltzmann transport equation and is compared to Prange-Nee's approach and other known methods. The results show that a substantial drop in resistivity can be obtained for certain diameters by achieving a large momentum gap between Fermi level states with positive and negative momentum in the transport direction.

  1. Skin friction measurements of systematically-varied roughness: Probing the role of roughness amplitude and skewness

    Science.gov (United States)

    Barros, Julio; Flack, Karen; Schultz, Michael

    2017-11-01

    Real-world engineering systems which feature either external or internal wall-bounded turbulent flow are routinely affected by surface roughness. This gives rise to performance degradation in the form of increased drag or head loss. However, at present there is no reliable means to predict these performance losses based upon the roughness topography alone. This work takes a systematic approach by generating random surface roughness in which the surface statistics are closely controlled. Skin friction and roughness function results will be presented for two groups of these rough surfaces. The first group is Gaussian (i.e. zero skewness) in which the root-mean-square roughness height (krms) is varied. The second group has a fixed krms, and the skewness is varied from approximately -1 to +1. The effect of the roughness amplitude and skewness on the skin friction will be discussed. Particular attention will be paid to the effect of these parameters on the roughness function in the transitionally-rough flow regime. For example, the role these parameters play in the monotonic or inflectional nature of the roughness function will be addressed. Future research into the details of the turbulence structure over these rough surfaces will also be outlined. Research funded by U.S. Office of Naval Research (ONR).

  2. Improved installation approach for variable spring setting on a pipe yet to be insulated

    International Nuclear Information System (INIS)

    Shah, H.H.; Chitnis, S.S.; Rencher, D.

    1993-01-01

    This paper provides an approach to the setting of variable spring supports for noninsulated or partially insulated piping systems so that resetting these supports is not required when the insulation is fully installed. This approach shows a method of deriving the spring cold-load setting tolerance values that can be readily utilized by craft personnel. The method is based on the percentage of the weight of the insulation compared to the total weight of the pipe and the applicable tolerance. Use of these setting tolerances eliminates reverification of the original cold-load settings for the majority of variable springs when the insulation is fully installed.

  3. Granular computing in decision approximation an application of rough mereology

    CERN Document Server

    Polkowski, Lech

    2015-01-01

    This book presents a study in knowledge discovery in data, with knowledge understood as a set of relations among objects and their properties. Relations in this case are implicative decision rules and the paradigm in which they are induced is that of computing with granules defined by rough inclusions, the latter introduced and studied within rough mereology, the fuzzified version of mereology. In this book basic classes of rough inclusions are defined and, based on them, methods for inducing granular structures from data are highlighted. The resulting granular structures are subjected to classifying algorithms, notably k-nearest neighbors and Bayesian classifiers. Experimental results are given in detail, both in tabular and visualized form, for fourteen data sets from the UCI data repository. A striking feature of granular classifiers obtained by this approach is that, while preserving the accuracy they achieve on the original data, they substantially reduce the size of the granulated data set as well as the set of granular...

  4. REACH, non-testing approaches and the urgent need for a change in mind set

    NARCIS (Netherlands)

    Schaafsma, G.; Kroese, E.D.; Tielemans, E.L.J.P.; Sandt, J.J.M. van de; Leeuwen, C.J. van

    2009-01-01

    The objectives of REACH cannot be achieved under the current risk assessment approach. A change in mind set among all the relevant stakeholders is needed: risk assessment should move away from a labor-intensive and animal-consuming approach to intelligent and pragmatic testing, by combining exposure...

  5. A "Mindful Rational Living" Approach for Addressing HIV in the School Setting

    Science.gov (United States)

    Chenneville, Tiffany; St. John Walsh, Audra

    2016-01-01

    This paper describes a "mindful rational living" approach, which incorporates mindfulness techniques with rational emotive behavioral therapy strategies for addressing HIV in the school setting. The utility of this approach for attending to the physical, mental, and psychosocial aspects of school-based HIV prevention and treatment will…

  6. A level set approach for shock-induced α-γ phase transition of RDX

    Science.gov (United States)

    Josyula, Kartik; Rahul; De, Suvranu

    2018-02-01

    We present a thermodynamically consistent level sets approach based on a regularization energy functional which can be directly incorporated into a Galerkin finite element framework to model interface motion. The regularization energy leads to a diffusive form of flux that is embedded within the level sets evolution equation and maintains the signed distance property of the level set function. The scheme is shown to compare well with the velocity extension method in capturing the interface position. The proposed level sets approach is employed to study the α-γ phase transformation in an RDX single crystal shocked along the (100) plane. Example problems in one and three dimensions are presented. We observe smooth evolution of the phase interface along the shock direction in both models. There is no diffusion of the interface during the zero level set evolution in the three-dimensional model. The level sets approach is shown to capture the characteristics of the shock-induced α-γ phase transformation, such as stress relaxation behind the phase interface and the finite time required for the phase transformation to complete. The regularization energy based level sets approach is efficient, robust, and easy to implement.

  7. Multi-criteria decision making--an approach to setting priorities in health care.

    Science.gov (United States)

    Nobre, F F; Trotta, L T; Gomes, L F

    1999-12-15

    The objective of this paper is to present a multi-criteria decision making (MCDM) approach to support public health decision making that takes into consideration the fuzziness of the decision goals and the behavioural aspect of the decision maker. The approach is used to analyse the process of health technology procurement in a University Hospital in Rio de Janeiro, Brazil. The method, known as TODIM, relies on evaluating alternatives with a set of decision criteria assessed using an ordinal scale. Fuzziness in generating criteria scores and weights or conflicts caused by dealing with different viewpoints of a group of decision makers (DMs) are solved using fuzzy set aggregation rules. The results suggested that MCDM models, incorporating fuzzy set approaches, should form a set of tools for public health decision making analysis, particularly when there are polarized opinions and conflicting objectives from the DM group. Copyright 1999 John Wiley & Sons, Ltd.

  8. A Hartree–Fock study of the confined helium atom: Local and global basis set approaches

    Energy Technology Data Exchange (ETDEWEB)

    Young, Toby D., E-mail: tyoung@ippt.pan.pl [Zakład Metod Komputerowych, Instytut Podstawowych Prolemów Techniki Polskiej Akademia Nauk, ul. Pawińskiego 5b, 02-106 Warszawa (Poland); Vargas, Rubicelia [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico); Garza, Jorge, E-mail: jgo@xanum.uam.mx [Universidad Autónoma Metropolitana Iztapalapa, División de Ciencias Básicas e Ingenierías, Departamento de Química, San Rafael Atlixco 186, Col. Vicentina, Iztapalapa, D.F. C.P. 09340, México (Mexico)

    2016-02-15

    Two different basis set methods are used to calculate atomic energy within Hartree–Fock theory. The first is a local basis set approach using high-order real-space finite elements and the second is a global basis set approach using modified Slater-type orbitals. These two approaches are applied to the confined helium atom and are compared by calculating one- and two-electron contributions to the total energy. As a measure of the quality of the electron density, the cusp condition is analyzed. - Highlights: • Two different basis set methods for atomic Hartree–Fock theory. • Galerkin finite element method and modified Slater-type orbitals. • Confined atom model (helium) under small-to-extreme confinement radii. • Detailed analysis of the electron wave-function and the cusp condition.

  9. Urban roughness mapping validation techniques and some first results

    NARCIS (Netherlands)

    Bottema, M; Mestayer, PG

    1998-01-01

    Because of measuring problems related to the evaluation of urban roughness parameters, a new approach using a roughness mapping tool has been tested: evaluation of roughness length z(o) and zero displacement z(d) from cadastral databases. Special attention needs to be given to the validation of the...

  10. Evaluation of the surface hardness, roughness, gloss and color of composites after different finishing/polishing treatments and thermocycling using a multitechnique approach.

    Science.gov (United States)

    Pala, Kanşad; Tekçe, Neslihan; Tuncer, Safa; Serim, Merve Efe; Demirci, Mustafa

    2016-01-01

    The objectives of this study were to evaluate the mechanical and physical properties of resin composites. The materials evaluated were the Clearfil Majesty Posterior, Filtek Z550 and G-aenial Posterior composites. A total of 189 specimens were fabricated for microhardness, roughness, gloss and color tests. The specimens were divided among three finishing and polishing systems: Enhance, OneGloss and Sof-Lex Spiral. Microhardness, roughness, gloss and color were measured after 24 h and after 10,000 thermocycles. Two samples from each group were evaluated using SEM and AFM. G-aenial Posterior exhibited the lowest microhardness values. The mean roughness ranged from 0.37 to 0.61 µm. The smoothest surfaces were obtained with Sof-Lex Spiral for each material. G-aenial Posterior polished with Enhance produced the glossiest surfaces. All of the materials exhibited similar ΔE values, ranging between 1.69 and 2.75. Sof-Lex Spiral discs could be used successfully to polish composites.

  11. Numerical Schemes for Rough Parabolic Equations

    Energy Technology Data Exchange (ETDEWEB)

    Deya, Aurelien, E-mail: deya@iecn.u-nancy.fr [Universite de Nancy 1, Institut Elie Cartan Nancy (France)

    2012-04-15

    This paper is devoted to the study of numerical approximation schemes for a class of parabolic equations on (0,1) perturbed by a non-linear rough signal. It is the continuation of Deya (Electron. J. Probab. 16:1489-1518, 2011) and Deya et al. (Probab. Theory Relat. Fields, to appear), where the existence and uniqueness of a solution has been established. The approach combines rough paths methods with standard considerations on discretizing stochastic PDEs. The results apply to a geometric 2-rough path, which covers the case of the multidimensional fractional Brownian motion with Hurst index H>1/3.

  12. Robust surface roughness indices and morphological interpretation

    Science.gov (United States)

    Trevisani, Sebastiano; Rocca, Michele

    2016-04-01

    Geostatistical-based image/surface texture indices based on the variogram (Atkinson and Lewis, 2000; Herzfeld and Higginson, 1996; Trevisani et al., 2012) and on its robust variant MAD (median absolute differences, Trevisani and Rocca, 2015) offer powerful tools for the analysis and interpretation of surface morphology (potentially not limited to the solid earth). In particular, the proposed robust index (Trevisani and Rocca, 2015), with its implementation based on local kernels, permits the derivation of a wide set of robust and customizable geomorphometric indices capable of outlining specific aspects of surface texture. The stability of MAD in the presence of signal noise and abrupt changes in spatial variability is well suited for the analysis of high-resolution digital terrain models. Moreover, the implementation of MAD by means of a pixel-centered perspective based on local kernels, with some analogies to the local binary pattern approach (Lucieer and Stein, 2005; Ojala et al., 2002), permits the creation of custom roughness indices capable of outlining different aspects of surface roughness (Grohmann et al., 2011; Smith, 2015). In the proposed poster, some potentialities of the new indices in the context of geomorphometry and landscape analysis will be presented. At the same time, challenges and future developments related to the proposed indices will be outlined. Atkinson, P.M., Lewis, P., 2000. Geostatistical classification for remote sensing: an introduction. Computers & Geosciences 26, 361-371. Grohmann, C.H., Smith, M.J., Riccomini, C., 2011. Multiscale Analysis of Topographic Surface Roughness in the Midland Valley, Scotland. IEEE Transactions on Geoscience and Remote Sensing 49, 1220-1213. Herzfeld, U.C., Higginson, C.A., 1996. Automated geostatistical seafloor classification - Principles, parameters, feature vectors, and discrimination criteria. Computers and Geosciences, 22 (1), pp. 35-52. Lucieer, A., Stein, A., 2005. Texture-based landform segmentation of LiDAR imagery
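    A minimal sketch of a MAD-type roughness index, under one plausible reading of the operator (median absolute elevation difference at a given lag, evaluated in a moving kernel); the published index may differ in detail, e.g. in how directions and lags are combined.

      import numpy as np

      def mad_roughness(dem, lag=1, kernel=5):
          # median absolute elevation difference at the given lag (E-W pairs),
          # evaluated over a (kernel x kernel) moving window
          dz = np.abs(dem[:, lag:] - dem[:, :-lag])
          out = np.full(dem.shape, np.nan)
          k = kernel // 2
          for i in range(k, dz.shape[0] - k):
              for j in range(k, dz.shape[1] - k):
                  out[i, j] = np.median(dz[i-k:i+k+1, j-k:j+k+1])
          return out

      dem = np.cumsum(np.random.randn(64, 64), axis=1) * 0.1   # synthetic DTM
      print(np.nanmean(mad_roughness(dem, lag=2)))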

  13. Understanding employee motivation and organizational performance: Arguments for a set-theoretic approach

    Directory of Open Access Journals (Sweden)

    Michael T. Lee

    2016-09-01

    Full Text Available Empirical evidence demonstrates that motivated employees mean better organizational performance. The objective of this conceptual paper is to articulate the progress that has been made in understanding employee motivation and organizational performance, and to suggest how the theory concerning employee motivation and organizational performance may be advanced. We acknowledge the existing limitations of theory development and suggest an alternative research approach. Current motivation theory development is based on conventional quantitative analysis (e.g., multiple regression analysis, structural equation modeling). Since researchers are interested in context and in understanding this social phenomenon holistically, they think in terms of combinations and configurations of a set of pertinent variables. We suggest that researchers take a set-theoretic approach to complement existing conventional quantitative analysis. To advance current thinking, we propose a set-theoretic approach to leverage employee motivation for organizational performance.

  14. Sub-Patch Roughness in Earthquake Rupture Investigations

    KAUST Repository

    Zielke, Olaf; Mai, Paul Martin

    2016-01-01

    Fault geometric complexities exhibit fractal characteristics over a wide range of spatial scales (<µm to >km) and strongly affect the rupture process at corresponding scales. Numerical rupture simulations provide a framework to quantitatively investigate the relationship between a fault's roughness and its seismic characteristics. Fault discretization, however, introduces an artificial lower limit to roughness. Individual fault patches are planar, and sub-patch roughness –roughness at spatial scales below the fault-patch size– is not incorporated. Does neglecting sub-patch roughness measurably affect the outcome of earthquake rupture simulations? We approach this question with a numerical parameter space investigation and demonstrate that sub-patch roughness significantly modifies the slip-strain relationship –a fundamental aspect of dislocation theory. Faults with sub-patch roughness induce less strain than their planar-fault equivalents at distances beyond the length of a slipping fault. We further provide regression functions that characterize the stochastic effect of sub-patch roughness.

  15. Hybrid approach for detection of dental caries based on the methods FCM and level sets

    Science.gov (United States)

    Chaabene, Marwa; Ben Ali, Ramzi; Ejbali, Ridha; Zaied, Mourad

    2017-03-01

    This paper presents a new technique for the detection of dental caries, a bacterial disease that destroys the tooth structure. In our approach, we have devised a new segmentation method that combines the advantages of the fuzzy c-means (FCM) algorithm and the level set method. The results obtained by the FCM algorithm are used by the level set algorithm to reduce the influence of noise on each of these algorithms, to facilitate level set manipulation, and to lead to more robust segmentation. The sensitivity and specificity confirm the effectiveness of the proposed method for caries detection.
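    A hedged sketch of the hybrid idea: cluster pixel intensities with fuzzy c-means, then convert the membership of the class of interest into an initial signed-distance function for the level set stage (the subsequent curve evolution is omitted here); the data and threshold are illustrative, not the paper's pipeline.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def fcm(x, c=2, m=2.0, iters=100, seed=0):
          # fuzzy c-means on a 1-D feature vector; returns centers, memberships
          rng = np.random.default_rng(seed)
          u = rng.random((c, x.size)); u /= u.sum(0)
          for _ in range(iters):
              um = u ** m
              v = um @ x / um.sum(1)                      # cluster centers
              d = np.abs(x[None, :] - v[:, None]) + 1e-9  # distances to centers
              u = 1.0 / (d ** (2/(m-1)) * (1.0 / d ** (2/(m-1))).sum(0))
          return v, u

      img = np.random.rand(64, 64)          # stand-in for a dental radiograph
      v, u = fcm(img.ravel())
      mask = (u[np.argmax(v)] > 0.5).reshape(img.shape)  # brightest class

      # initial level set: signed distance, negative inside the FCM mask
      phi0 = distance_transform_edt(~mask) - distance_transform_edt(mask)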

  16. New Approaches for Solving Fokker Planck Equation on Cantor Sets within Local Fractional Operators

    Directory of Open Access Journals (Sweden)

    Hassan Kamil Jassim

    2015-01-01

    Full Text Available We discuss new approaches to handling the Fokker–Planck equation on Cantor sets within local fractional operators by using the local fractional Laplace decomposition and Laplace variational iteration methods based on the local fractional calculus. The new approaches maintain the efficiency and accuracy of the analytical methods for solving local fractional differential equations. Illustrative examples are given to show the accuracy and reliability of the results.

  17. Roughing up Beta

    DEFF Research Database (Denmark)

    Bollerslev, Tim; Li, Sophia Zhengzi; Todorov, Viktor

    Motivated by the implications from a stylized equilibrium pricing framework, we investigate empirically how individual equity prices respond to continuous, or "smooth," and jumpy, or "rough," market price moves, and how these different market price risks, or betas, are priced in the cross-section of expected returns. Based on a novel high-frequency dataset of almost one-thousand individual stocks over two decades, we find that the two rough betas associated with intraday discontinuous and overnight returns entail significant risk premiums, while the intraday continuous beta is not priced in the cross-section. An investment strategy that goes long stocks with high jump betas and short stocks with low jump betas produces significant average excess returns. These higher risk premiums for the discontinuous and overnight market betas remain significant after controlling for a long list of other firm characteristics...

  18. A Systems Approach to Understanding Occupational Therapy Service Negotiations in a Preschool Setting

    Science.gov (United States)

    Silverman, Fern; Kramer, Paula; Ravitch, Sharon

    2011-01-01

    The purpose of this study was to use a systems approach to examine informal communications, meaning those occurring outside of scheduled meetings, among stakeholders in a preschool early intervention program. This investigation expands the discussion of how occupational therapy treatment decisions are made in educational settings by using a…

  19. Adapting Evidence-Based Mental Health Treatments in Community Settings: Preliminary Results from a Partnership Approach

    Science.gov (United States)

    Southam-Gerow, Michael A.; Hourigan, Shannon E.; Allin, Robert B., Jr.

    2009-01-01

    This article describes the application of a university-community partnership model to the problem of adapting evidence-based treatment approaches in a community mental health setting. Background on partnership research is presented, with consideration of methodological and practical issues related to this kind of research. Then, a rationale for…

  20. Exploring Organized and Visionary Approaches to Designing an Advent Fun Day in an Educational Setting

    Science.gov (United States)

    Francis, Leslie J.; Smith, Greg

    2015-01-01

    This study draws on one of the four components of psychological type theory (the distinction between judging and perceiving attitudes toward the outside world) to examine the implications of these two contrasting psychological perspectives for shaping approaches to Christian ministry within an educational setting. Qualitative data were generated…

  1. A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set

    Science.gov (United States)

    Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong

    2012-01-01

    Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined in an experimental study using three MCDM methods, the well-known clustering algorithm k-means, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
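    The idea is straightforward to emulate: treat each candidate k as an alternative and each validity index as a criterion. The sketch below uses a simple normalized weighted sum in place of the paper's MCDM methods and three sklearn validity indices in place of its ten relative measures.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                                   davies_bouldin_score)

      X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
      ks = range(2, 9)                       # alternatives: candidate k values
      rows = []
      for k in ks:
          labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
          rows.append([silhouette_score(X, labels),
                       calinski_harabasz_score(X, labels),
                       -davies_bouldin_score(X, labels)])  # flip: higher = better

      C = np.array(rows)
      C = (C - C.min(0)) / (C.max(0) - C.min(0))  # normalise criteria to [0, 1]
      print("estimated k:", ks[np.argmax(C.mean(1))])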

  2. A parametric level-set approach for topology optimization of flow domains

    DEFF Research Database (Denmark)

    Pingen, Georg; Waidmann, Matthias; Evgrafov, Anton

    2010-01-01

    ...of the design variables in the traditional approaches is seen as a possible cause for the slow convergence. Non-smooth material distributions are suspected to trigger premature onset of instationary flows which cannot be treated by steady-state flow models. In the present work, we study whether the convergence and the versatility of topology optimization methods for fluidic systems can be improved by employing a parametric level-set description. In general, level-set methods allow controlling the smoothness of boundaries, yield a non-local influence of design variables, and decouple the material description from the flow field discretization. The parametric level-set method used in this study utilizes a material distribution approach to represent flow boundaries, resulting in a non-trivial mapping between design variables and local material properties. Using a hydrodynamic lattice Boltzmann method, we study...

  3. Optimized Basis Sets for the Environment in the Domain-Specific Basis Set Approach of the Incremental Scheme.

    Science.gov (United States)

    Anacker, Tony; Hill, J Grant; Friedrich, Joachim

    2016-04-21

    Minimal basis sets, denoted DSBSenv, based on the segmented basis sets of Ahlrichs and co-workers have been developed for use as environmental basis sets for the domain-specific basis set (DSBS) incremental scheme with the aim of decreasing the CPU requirements of the incremental scheme. The use of these minimal basis sets within explicitly correlated (F12) methods has been enabled by the optimization of matching auxiliary basis sets for use in density fitting of two-electron integrals and resolution of the identity. The accuracy of these auxiliary sets has been validated by calculations on a test set containing small- to medium-sized molecules. The errors due to density fitting are about 2-4 orders of magnitude smaller than the basis set incompleteness error of the DSBSenv orbital basis sets. Additional reductions in computational cost have been tested with the reduced DSBSenv basis sets, in which the highest angular momentum functions of the DSBSenv auxiliary basis sets have been removed. The optimized and reduced basis sets are used in the framework of the domain-specific basis set of the incremental scheme to decrease the computation time without significant loss of accuracy. The computation times and accuracy of the previously used environmental basis and that optimized in this work have been validated with a test set of medium- to large-sized systems. The optimized and reduced DSBSenv basis sets decrease the CPU time by about 15.4% and 19.4% compared with the old environmental basis and retain the accuracy in the absolute energy with standard deviations of 0.99 and 1.06 kJ/mol, respectively.

  4. Surface correlations of hydrodynamic drag for transitionally rough engineering surfaces

    Science.gov (United States)

    Thakkar, Manan; Busse, Angela; Sandham, Neil

    2017-02-01

    Rough surfaces are usually characterised by a single equivalent sand-grain roughness height scale that typically needs to be determined from laboratory experiments. Recently, this method has been complemented by a direct numerical simulation approach, whereby representative surfaces can be scanned and the roughness effects computed over a range of Reynolds numbers. This development raises the prospect, over the coming years, of having enough data for different types of rough surfaces to be able to relate surface characteristics to roughness effects, such as the roughness function that quantifies the downward displacement of the logarithmic law of the wall. In the present contribution, we use simulation data for 17 irregular surfaces at the same friction Reynolds number, for which they are in the transitionally rough regime. All surfaces are scaled to the same physical roughness height. Mean streamwise velocity profiles show a wide range of roughness function values, while the velocity defect profiles show a good collapse. Profile peaks of the turbulent kinetic energy also vary depending on the surface. We then consider which surface properties are important and how new properties can be incorporated into an empirical model, the accuracy of which can then be tested. Optimised models with several roughness parameters are systematically developed for the roughness function and the profile peak turbulent kinetic energy. In determining the roughness function, besides the known parameters of solidity (or frontal area ratio) and skewness, it is shown that the streamwise correlation length and the root-mean-square roughness height are also significant. The peak turbulent kinetic energy is determined by the skewness and root-mean-square roughness height, along with the mean forward-facing surface angle and spanwise effective slope. The results suggest the feasibility of relating rough-wall flow properties (throughout the range from hydrodynamically smooth to fully rough) to surface...
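    The surface parameters named above are all computable directly from a scanned height map. The following is a minimal sketch based on our own reading of the standard definitions; the paper's exact estimators, e.g. for the correlation length, may differ.

      import numpy as np

      def surface_stats(h, dx):
          h = h - h.mean()
          krms = np.sqrt((h ** 2).mean())               # rms roughness height
          skew = (h ** 3).mean() / krms ** 3            # skewness
          es_x = np.abs(np.diff(h, axis=1) / dx).mean() # streamwise eff. slope
          es_z = np.abs(np.diff(h, axis=0) / dx).mean() # spanwise eff. slope
          # streamwise correlation length: first lag with autocorrelation < 1/e
          ac = np.mean([np.correlate(r, r, "full")[r.size - 1:] for r in h], 0)
          lcorr = np.argmax(ac / ac[0] < 1 / np.e) * dx
          return krms, skew, es_x, es_z, lcorr

      h = np.random.randn(128, 128)   # stand-in for a scanned height map
      print(surface_stats(h, dx=1e-4))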

  5. Roughness as classicality indicator of a quantum state

    Science.gov (United States)

    Lemos, Humberto C. F.; Almeida, Alexandre C. L.; Amaral, Barbara; Oliveira, Adélcio C.

    2018-03-01

    We define a new quantifier of classicality for a quantum state, the Roughness, which is given by the L2 (R2) distance between the Wigner and Husimi functions. We show that the Roughness is bounded and is therefore a useful tool for comparison between different quantum states of single bosonic systems. The state classification via the Roughness is not binary, but rather continuous in the interval [0, 1], the state being more classical as the Roughness approaches zero, and more quantum when it is closer to unity. The Roughness is maximum for Fock states when their number of photons is arbitrarily large, and also for squeezed states at the maximum compression limit. On the other hand, the Roughness approaches its minimum value for thermal states at infinite temperature and, more generally, for infinite entropy states. The Roughness of a coherent state is slightly below one half, so we may say that it is more a classical state than a quantum one. Another important result is that the Roughness performs well in discriminating both pure and mixed states. Since the Roughness measures the inherent quantumness of a state, we propose another function, the Dynamic Distance Measure (DDM), which is suitable for measuring how quantum a dynamics is. Using DDM, we studied the quartic oscillator, and we observed that there is a certain complementarity between dynamics and state, i.e. when the dynamics becomes more quantum, the Roughness of the state decreases, while the Roughness grows as the dynamics becomes less quantum.
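    Using QuTiP, the Roughness is easy to approximate numerically: sample the Wigner and Husimi functions on a grid and take the L2 distance. The normalisation that maps the result into [0, 1] is the paper's and is not reproduced here, so the sketch below returns the unnormalised distance.

      import numpy as np
      from qutip import coherent, fock, wigner, qfunc

      def roughness(state, xvec):
          # unnormalised L2 distance between Wigner and Husimi functions
          W = wigner(state, xvec, xvec)   # Wigner function on the grid
          Q = qfunc(state, xvec, xvec)    # Husimi function on the same grid
          dx = xvec[1] - xvec[0]
          return np.sqrt(np.sum((W - Q) ** 2) * dx * dx)

      xvec = np.linspace(-7, 7, 200)
      for label, psi in [("coherent", coherent(40, 2.0)), ("Fock n=5", fock(40, 5))]:
          print(label, roughness(psi, xvec))   # the Fock state comes out "rougher"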

  6. [Improvement approaches in the hospital setting: From total quality management to Lean].

    Science.gov (United States)

    Curatolo, N; Lamouri, S; Huet, J-C; Rieutord, A

    2015-07-01

    Hospitals have to deal with strong economic constraints and increasing requirements in terms of quality and safety of care. To address these constraints, one solution could be the adoption of approaches from the industrial sector. Following the decree of April 6, 2011 on the quality management of the medication use process, some of these approaches, such as risk management, are now part of the everyday work of healthcare professionals. However, other approaches, such as business process improvement, are still poorly developed in the hospital setting. In this general review, we discuss the main business process improvement approaches that have been used in hospitals, focusing specifically on one of the newest and most widely used: Lean. Copyright © 2014. Published by Elsevier Masson SAS.

  7. A Regionalization Approach to select the final watershed parameter set among the Pareto solutions

    Science.gov (United States)

    Park, G. H.; Micheletty, P. D.; Carney, S.; Quebbeman, J.; Day, G. N.

    2017-12-01

    The calibration of hydrological models often results in model parameters that are inconsistent with those from neighboring basins. Considering that physical similarity exists within neighboring basins, some of the physically related parameters should be consistent among them. Traditional manual calibration techniques require an iterative process to make the parameters consistent, which takes additional effort in model calibration. We developed a multi-objective optimization procedure to calibrate the National Weather Service (NWS) Research Distributed Hydrological Model (RDHM), using the Nondominated Sorting Genetic Algorithm (NSGA-II) with expert knowledge of the model parameter interrelationships serving as one objective function. The multi-objective algorithm enables us to obtain diverse parameter sets that are equally acceptable with respect to the objective functions and to choose one from the pool of parameter sets during a subsequent regionalization step. Although all Pareto solutions are non-inferior, we exclude parameter sets that show extreme values for any of the objective functions to expedite the selection process. We use an a priori model parameter set derived from the physical properties of the watershed (Koren et al., 2000) to assess the similarity of a given parameter across basins. Each parameter is assigned a weight based on its assumed similarity, such that parameters that are similar across basins are given higher weights. The parameter weights are used to compute a closeness measure between Pareto sets of nearby basins. The regionalization approach chooses the Pareto parameter sets that minimize the closeness measure of the basin being regionalized. The presentation will describe the results of applying the regionalization approach to a set of pilot basins in the Upper Colorado basin as part of a NASA-funded project.

  8. Modelling dynamic roughness during floods

    NARCIS (Netherlands)

    Paarlberg, Andries; Dohmen-Janssen, Catarine M.; Hulscher, Suzanne J.M.H.; Termes, A.P.P.

    2007-01-01

    In this paper, we present a dynamic roughness model to predict water levels during floods. Hysteresis effects of dune development are explicitly included. It is shown that differences between the new dynamic roughness model and models where the roughness coefficient is calibrated are most...

  9. Rough flows and homogenization in stochastic turbulence

    OpenAIRE

    Bailleul, I.; Catellier, R.

    2016-01-01

    We provide in this work a tool-kit for the study of homogenisation of random ordinary differential equations, in the form of a user-friendly black box based on the technology of rough flows. We illustrate the use of this setting on the example of stochastic turbulence.

  10. The Apparent Contact Angle and Wetted Area of Active Alloys on Silicon Carbide as a Function of the Temperature and the Surface Roughness: A Multivariate Approach

    Science.gov (United States)

    Tillmann, Wolfgang; Pfeiffer, Jan; Wojarski, Lukas

    2015-08-01

    Despite the broad field of applications for active filler alloys for brazing ceramics, as well as intense research work on the wetting and spreading behavior of these alloys on ceramic surfaces within the last decades, the manufactured joints still exhibit significant variations in their properties due to the high sensitivity of the alloys to changing brazing conditions. This increases the need for investigations of the wetting and spreading behavior of filler alloys that address the dominating influences together with their interdependencies, instead of solely focusing on single-parameter investigations. In this regard, measurements of the wetting angle and wetted area were conducted on solidified AgCuTi and CuSnTi alloys on SiC substrates. Based on these measurements, a regression model was generated, illustrating the influence of the brazing temperature, the roughness of the faying surfaces, the furnace atmosphere, and their interdependencies on the wetting and spreading behavior of the filler alloys. It was revealed that the behavior of the melts was significantly influenced by the varied brazing parameters, as well as by their interdependencies. This result was also predicted by the developed model, which showed a high accuracy.

  11. Death of distance and agglomeration forces of firms in the urban e-economy : an artificial intelligence approach using rough set analysis

    NARCIS (Netherlands)

    Nijkamp, Peter; Geenhuizen, van Marina

    2005-01-01

    The present study addresses the relevance of geographic proximity for companies in our age of advanced ICT. Many visions of, and speculations on, an increased footlooseness of companies and a concomitant dispersal of urban economic activity have been published in recent years. To identify whether...

  12. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio links. ... The estimation performance is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error
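    Under a deliberately simplified stand-in for the paper's model, in which each tag is detected independently with a known probability in every session, the ML estimate can be computed by a direct scan over the candidate cardinality:

      import numpy as np
      from scipy.special import gammaln

      def ml_cardinality(counts, p, n_max=5000):
          # scan candidate N; log-likelihood = sum_k log Binomial(n_k; N, p)
          counts = np.asarray(counts)
          N = np.arange(counts.max(), n_max)
          ll = sum(gammaln(N + 1) - gammaln(n + 1) - gammaln(N - n + 1)
                   + n * np.log(p) + (N - n) * np.log1p(-p) for n in counts)
          return N[np.argmax(ll)]

      rng = np.random.default_rng(1)
      true_N, p = 500, 0.8
      counts = rng.binomial(true_N, p, size=5)  # 5 independent reader sessions
      print(ml_cardinality(counts, p))          # close to 500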

  13. Optimal PID settings for first and second-order processes - Comparison with different controller tuning approaches

    OpenAIRE

    Pappas, Iosif

    2016-01-01

    PID controllers are extensively used in industry. Although many tuning methodologies exist, finding good controller settings is not an easy task and frequently optimization-based design is preferred to satisfy more complex criteria. In this thesis, the focus was to find which tuning approaches, if any, present close to optimal behavior. Pareto-optimal controllers were found for different first and second-order processes with time delay. Performance was quantified in terms of the integrat...

  14. Gestalt therapy approaches with aggressive children in a day care setting

    OpenAIRE

    Maxey, Win

    1987-01-01

    This research study was designed to evaluate whether or not Gestalt therapy approaches could be used effectively when intervening with aggressive acts in a day care setting. Five focus children were observed at timed intervals as to whether or not they were aggressive, how the caretaker intervened, and how the children responded to the caretaker intervention. After a baseline of aggressive acts was established, caretakers were trained to use Gestalt therapy interventio...

  15. Research on Risk Allocation of Public-Private Partnership Projects Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    巴希; 乌云娜; 胡新亮; 李泽众

    2013-01-01

    Public-private partnership (PPP) introduces private capital, technology and management experience into infrastructure construction and operation projects, bringing huge economic and social benefits. In the practice of PPP projects in China, unclear risk sharing has always been the key factor hindering the wide adoption of this model in infrastructure investment and financing, and in severe cases it can even lead to project failure. Addressing the uncertainty over which party should bear a given risk, a problem that undermines lasting and stable public-private cooperation, this paper identifies, through case studies and literature research, the risk factors whose sharing body is unclear. On this basis, the rough set method is used to reduce the attributes of the risk-sharing evaluation index system, eliminating the factors that have little influence on the sharing result. The TOPSIS (ideal point) method can then evaluate the risk-bearing choices made by evaluators with different risk-sharing preferences, so as to determine a reasonable risk-sharing scheme. The evaluation results provide a reference for both public and private parties when developing a reasonable risk-sharing scheme.

  16. A simplified approach to the PROMETHEE method for priority setting in management of mine action projects

    Directory of Open Access Journals (Sweden)

    Marko Mladineo

    2016-12-01

    Full Text Available In the last 20 years, priority setting in mine actions, i.e. in humanitarian demining, has become an increasingly important topic. Given that mine action projects require management and decision-making based on a multi-criteria approach, multi-criteria decision-making methods like PROMETHEE and AHP have been used worldwide for priority setting. However, from the aspect of mine action, where the stakeholders in the decision-making process for priority setting are project managers, local politicians, leaders of different humanitarian organizations, or similar, applying these methods can be difficult. Therefore, a specialized web-based decision support system (Web DSS) for priority setting, developed as part of the FP7 project TIRAMISU, has been extended with a module for developing custom priority setting scenarios in line with an exceptionally easy, user-friendly approach. The idea behind this research is to simplify multi-criteria analysis based on the PROMETHEE method. Therefore, a simplified PROMETHEE method based on statistical analysis for automated suggestion of parameters such as preference function thresholds, interactive selection of criteria weights, and easy input of criteria evaluations is presented in this paper. The result is a web-based DSS that can be applied worldwide for priority setting in mine action. Additionally, the management of mine action projects is supported using modules providing spatial data based on a geographic information system (GIS). In this paper, the benefits and limitations of the simplified PROMETHEE method are presented using a case study involving mine action projects, and certain proposals are given for further research.
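    A compact sketch of PROMETHEE II with the kind of automation the paper describes: if the indifference and preference thresholds are not supplied, they are suggested from the statistics of the pairwise differences. The percentile rule below is one plausible automation, not necessarily the paper's, and the evaluation matrix is illustrative.

      import numpy as np

      def promethee2(F, w, q=None, p=None):
          # F: alternatives x criteria (all maximised), w: criteria weights
          n = F.shape[0]
          d = F[:, None, :] - F[None, :, :]       # pairwise differences
          if q is None:                           # auto-suggested thresholds
              q = np.percentile(np.abs(d), 25, axis=(0, 1))
          if p is None:
              p = np.percentile(np.abs(d), 75, axis=(0, 1))
          pref = np.clip((d - q) / np.maximum(p - q, 1e-12), 0, 1)
          pi = (pref * w).sum(-1)                 # aggregated preference indices
          return (pi.sum(1) - pi.sum(0)) / (n - 1)  # net outranking flows

      F = np.array([[0.7, 120, 3.0], [0.9, 80, 4.0], [0.5, 150, 2.0]])
      w = np.array([0.5, 0.3, 0.2])
      print("priority order (best first):", np.argsort(-promethee2(F, w)))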

  17. Fault Detection of Wind Turbines with Uncertain Parameters: A Set-Membership Approach

    Directory of Open Access Journals (Sweden)

    Thomas Bak

    2012-07-01

    Full Text Available In this paper a set-membership approach for fault detection of a benchmark wind turbine is proposed. The benchmark represents relevant fault scenarios in the control system, including sensor, actuator and system faults. In addition we also consider parameter uncertainties and uncertainties on the torque coefficient. High noise on the wind speed measurement, nonlinearities in the aerodynamic torque and uncertainties on the parameters make fault detection a challenging problem. We use an effective wind speed estimator to reduce the noise on the wind speed measurements. A set-membership approach is used to generate a set that contains all states consistent with the past measurements and the given model of the wind turbine, including uncertainties and noise. This set represents all possible states the system can be in if not faulty. If the current measurement is not consistent with this set, a fault is detected. For the representation of these sets we use zonotopes, and for the modeling of uncertainties we use matrix zonotopes, which yields a computationally efficient algorithm. The method is applied to the wind turbine benchmark problem without and with uncertainties. The result demonstrates the effectiveness of the proposed method compared to other methods applied to the same problem. An advantage of the proposed method is that there is no need for threshold design, and it does not produce false positives. In the case where uncertainty on the torque lookup table is introduced, some faults are not detectable. Previous research has not addressed this uncertainty. The method proposed here requires equal or less detection time than previous results.
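    A minimal sketch of the zonotope machinery: propagate the state set through the linear(ised) dynamics by a Minkowski sum with the noise zonotope, then flag a fault when the measurement falls outside the interval hull of the predicted output set. The dynamics and bounds below are toy values, not the benchmark's.

      import numpy as np

      class Zonotope:
          # Z = {c + G @ b : |b| <= 1}, a center plus a set of generators
          def __init__(self, c, G):
              self.c, self.G = np.asarray(c, float), np.asarray(G, float)

      def predict(Z, A, W):
          # image of Z under x+ = A x + w, w in zonotope W (Minkowski sum)
          return Zonotope(A @ Z.c + W.c, np.hstack([A @ Z.G, W.G]))

      def consistent(Z, C, y, v_bound):
          # is y = C x + v (|v| <= v_bound) consistent with Z? Conservative
          # test against the interval hull of the projected set C Z.
          radius = np.abs(C @ Z.G).sum(axis=1) + v_bound
          return np.all(np.abs(y - C @ Z.c) <= radius)

      A = np.array([[1.0, 0.1], [0.0, 0.95]])         # toy linearised dynamics
      W = Zonotope([0, 0], 0.01 * np.eye(2))          # process-noise zonotope
      Z = Zonotope([1.0, 0.0], 0.05 * np.eye(2))      # current state set
      C = np.array([[1.0, 0.0]])

      Zp = predict(Z, A, W)
      print(consistent(Zp, C, np.array([1.02]), 0.02))  # True: no fault flagged
      print(consistent(Zp, C, np.array([2.50]), 0.02))  # False: fault detected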

  18. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  19. An alternative approach for teacher education framed by a collaborative partnership setting

    DEFF Research Database (Denmark)

    Pontoppidan, Birgitte Schou

    The study presents an alternative didactical approach to teacher education linking practice and theory through a collaborative partnership setting. Using a "small scale" teaching design in which students alternate between schools and college, it was possible to show some evidence that, by following this approach, first year student teachers in a science & technology class developed teacher knowledge (as aspects of PCK). The study identifies an example using Co-Re and PaPeR as a Resource Folio to show where evidence of developing teacher knowledge is seen. This didactical approach turns traditional teacher education on its head and begins with a focus on practice, so students alternate between school-based and college-based teaching in a cyclical fashion and are encouraged to link theory with practice. This kind of college teaching demands a new teacher education paradigm for which collaboration...

  20. Setting-level influences on implementation of the responsive classroom approach.

    Science.gov (United States)

    Wanless, Shannon B; Patton, Christine L; Rimm-Kaufman, Sara E; Deutsch, Nancy L

    2013-02-01

    We used mixed methods to examine the association between setting-level factors and observed implementation of a social and emotional learning intervention (Responsive Classroom® approach; RC). In study 1 (N = 33 3rd grade teachers after the first year of RC implementation), we identified relevant setting-level factors and uncovered the mechanisms through which they related to implementation. In study 2 (N = 50 4th grade teachers after the second year of RC implementation), we validated our most salient Study 1 finding across multiple informants. Findings suggested that teachers perceived setting-level factors, particularly principal buy-in to the intervention and individualized coaching, as influential to their degree of implementation. Further, we found that intervention coaches' perspectives of principal buy-in were more related to implementation than principals' or teachers' perspectives. Findings extend the application of setting theory to the field of implementation science and suggest that interventionists may want to consider particular accounts of school setting factors before determining the likelihood of schools achieving high levels of implementation.

  1. Priority Setting in Indigenous Health: Why We Need an Explicit Decision Making Approach

    Directory of Open Access Journals (Sweden)

    Michael E. Otim

    2015-06-01

    Full Text Available Indigenous Australians have significantly poorer health outcomes than the non-Indigenous population worldwide. The Australian government has increased its investment in Indigenous health through the "Closing the Health Gap" initiative. Deciding where to invest scarce resources so as to maximize health outcomes for Indigenous peoples may require improved priority setting processes. Current government practice involves a mix of implicit and explicit processes to varying degrees at the macro and meso decision making levels. In this article, we argue that explicit priority setting should be emphasized in Indigenous health, as it can ensure that the decision making process is accountable, systematic, and transparent. Following a review of the literature, we outline four key issues that need to be considered for explicit priority setting: developing an Indigenous health "constitution," strengthening the evidence base, selecting mechanisms for priority setting, and establishing appropriate incentives and institutional structures. We then summarize our findings into a checklist that can help decision makers ensure that explicit priority setting is undertaken in Indigenous health. By addressing these key issues, the benefits of an explicit approach, which include increased efficiency, equity, and use of evidence, can be realized, thereby maximizing Indigenous health outcomes.

  2. Reproducibility of surface roughness in reaming

    DEFF Research Database (Denmark)

    Müller, Pavel; De Chiffre, Leonardo

    An investigation on the reproducibility of surface roughness in reaming was performed to document the applicability of this approach for testing cutting fluids. Austenitic stainless steel was used as a workpiece material and HSS reamers as cutting tools. Reproducibility of the results was evaluat...

  3. On Granularity Reduction of Multi-granularity Rough Sets in Interval Information Systems under a Total-Order Dominance Relation

    Institute of Scientific and Technical Information of China (English)

    于莹莹

    2017-01-01

    Multi-granularity rough sets have emerged as a research direction within rough set theory in recent years. For multi-granularity rough sets over interval information systems based on a dominance relation, this paper puts forward the concept of relative granularity reduction, gives a granularity reduction algorithm based on granularity importance, and uses an example to analyze the effectiveness of the proposed method.

  4. Rough Surface Contact

    Directory of Open Access Journals (Sweden)

    T Nguyen

    2017-06-01

    Full Text Available This paper studies the contact of general rough curved surfaces having nearly identical geometries, assuming that the contact at each differential area obeys the model proposed by Greenwood and Williamson. In order to account for the most general gross geometry, principles of the differential geometry of surfaces are applied. While this method requires more rigorous mathematical manipulation, it preserves the original surface geometries and thus makes the modeling procedure much more intuitive. For subsequent use, the differential geometry of an axis-symmetric surface is considered instead of a general surface (although this "general case" can be treated as well) in Chapter 3.1. The final formulas for contact area, load, and frictional torque are derived in Chapter 3.2.

  5. Assessing Viability and Sustainability: a Systems-based Approach for Deriving Comprehensive Indicator Sets

    Directory of Open Access Journals (Sweden)

    Hartmut Bossel

    2002-01-01

    Full Text Available Performance assessment in holistic approaches such as integrated natural resource management has to deal with a complex set of interacting and self-organizing natural and human systems and agents, all pursuing their own "interests" while also contributing to the development of the total system. Performance indicators must therefore reflect the viability of essential component systems as well as their contributions to the viability and performance of other component systems and the total system under study. A systems-based derivation of a comprehensive set of performance indicators first requires the identification of essential component systems, their mutual (often hierarchical or reciprocal) relationships, and their contributions to the performance of other component systems and the total system. The second step consists of identifying the indicators that represent the viability states of the component systems and the contributions of these component systems to the performance of the total system. The search for performance indicators is guided by the realization that essential interests (orientations or orientors) of systems and actors are shaped by both their characteristic functions and the fundamental and general properties of their system environments (e.g., normal environmental state, scarcity of resources, variety, variability, change, other coexisting systems). To be viable, a system must devote an essential minimum amount of attention to satisfying the "basic orientors" that respond to the properties of its environment. This fact can be used to define comprehensive and system-specific sets of performance indicators that reflect all important concerns. Often, qualitative indicators and the study of qualitative systems are sufficient for reliable performance assessments. However, this approach can also be formalized for quantitative computer-assisted assessment. Examples are presented of indicator sets for the sustainable development of...

  6. Chemical Topic Modeling: Exploring Molecular Data Sets Using a Common Text-Mining Approach.

    Science.gov (United States)

    Schneider, Nadine; Fechner, Nikolas; Landrum, Gregory A; Stiefl, Nikolaus

    2017-08-28

    Big data is one of the key transformative factors which increasingly influences all aspects of modern life. Although this transformation brings vast opportunities, it also generates novel challenges, not the least of which is organizing and searching this data deluge. The field of medicinal chemistry is no different: more and more data are being generated, for instance, by technologies such as DNA encoded libraries, peptide libraries, text mining of large literature corpora, and new in silico enumeration methods. Handling those huge sets of molecules effectively is quite challenging and requires compromises that often come at the expense of the interpretability of the results. In order to find an intuitive and meaningful approach to organizing large molecular data sets, we adopted a probabilistic framework called "topic modeling" from the text-mining field. Here we present the first chemistry-related implementation of this method, which allows large molecule sets to be assigned to "chemical topics" and the relationships between those to be investigated. In this first study, we thoroughly evaluate this novel method in different experiments and discuss both its disadvantages and advantages. We show very promising results in reproducing human-assigned concepts using the approach to identify and retrieve chemical series from sets of molecules. We have also created an intuitive visualization of the chemical topics output by the algorithm. This is a huge benefit compared to other unsupervised machine-learning methods, like clustering, which are commonly used to group sets of molecules. Finally, we applied the new method to the 1.6 million molecules of the ChEMBL22 data set to test its robustness and efficiency. In about 1 h we built a 100-topic model of this large data set in which we could identify interesting topics like "proteins", "DNA", or "steroids". Along with this publication we provide our data sets and an open-source implementation of the new method (CheTo), which...
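    The analogy is direct: molecules play the role of documents and hashed circular fragments the role of words. A hedged sketch with RDKit and scikit-learn follows; the released CheTo implementation may tokenize and fit differently, and the molecules below are toy examples.

      import numpy as np
      from rdkit import Chem
      from rdkit.Chem import AllChem
      from sklearn.decomposition import LatentDirichletAllocation
      from sklearn.feature_extraction import DictVectorizer

      smiles = ["CCO", "CCN", "c1ccccc1O", "c1ccccc1N", "CC(=O)O", "CC(=O)N"]
      mols = [Chem.MolFromSmiles(s) for s in smiles]

      # "words" = hashed Morgan (circular) fragments, "documents" = molecules
      docs = [AllChem.GetMorganFingerprint(m, 2).GetNonzeroElements() for m in mols]
      X = DictVectorizer().fit_transform(
              [{str(k): v for k, v in d.items()} for d in docs])

      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
      print(lda.transform(X).round(2))   # topic mixture per molecule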

  7. Dissolution of minerals with rough surfaces

    Science.gov (United States)

    de Assis, Thiago A.; Aarão Reis, Fábio D. A.

    2018-05-01

    We study dissolution of minerals with initial rough surfaces using kinetic Monte Carlo simulations and a scaling approach. We consider a simple cubic lattice structure, a thermally activated rate of detachment of a molecule (site), and rough surface configurations produced by a fractional Brownian motion algorithm. First we revisit the problem of dissolution of initially flat surfaces, in which the dissolution rate rF reaches an approximately constant value at short times and is controlled by detachment of step edge sites. For initially rough surfaces, the dissolution rate r at short times is much larger than rF; after dissolution of some hundreds of molecular layers, r decreases by some orders of magnitude across several time decades. Meanwhile, the surface evolves through configurations of decreasing energy, beginning with dissolution of isolated sites, then formation of terraces with disordered boundaries, their growth, and final smoothing. A crossover time to a smooth configuration is defined when r = 1.5rF; the surface retreat at the crossover is approximately 3 times the initial roughness and is temperature-independent, while the crossover time is proportional to the initial roughness and is controlled by step-edge site detachment. The initial dissolution process is described by the so-called rough rates, which are measured for fixed ratios between the surface retreat and the initial roughness. The temperature dependence of the rough rates indicates control by kink site detachment; in general, it suggests that rough rates are controlled by the weakest microscopic bonds during the nucleation and formation of the lowest energy configurations of the crystalline surface. Our results are related to recent laboratory studies which show enhanced dissolution of polished calcite surfaces. In the application to calcite dissolution in an alkaline environment, the minimal values of recently measured dissolution rate spectra give rF ∼ 10⁻⁹ mol/(m² s), and the calculated rate
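
    A minimal numeric sketch of the thermally activated detachment rate such KMC models typically use, assuming a bond-counting Arrhenius form r(n) = ν·exp(−n·ε/kBT); the attempt frequency, bond energy, and bond counts below are illustrative assumptions, not the paper's parameters:

```python
# Bond-counting, thermally activated detachment rate, as commonly used in
# KMC dissolution models on a simple cubic lattice: kink sites (3 bonds)
# detach much faster than step-edge (4) or terrace (5) sites.
import math

nu = 1.0e13                       # attempt frequency (1/s), assumed
eps = 0.10 * 1.602176634e-19      # bond energy (J), assumed ~0.10 eV
kB = 1.380649e-23                 # Boltzmann constant (J/K)
T = 300.0                         # temperature (K)

for name, n_bonds in [("kink site", 3), ("step-edge site", 4), ("terrace site", 5)]:
    rate = nu * math.exp(-n_bonds * eps / (kB * T))
    print(f"{name}: {rate:.3e} detachments/s")
```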

  8. Surface roughness evaluation on mandrels and mirror shells for future X-ray telescopes

    Science.gov (United States)

    Sironi, Giorgia; Spiga, D.

    2008-07-01

    Several X-ray missions that will operate in the near future, in particular SIMBOL-X, e-Rosita, Con-X/HXT, SVOM/XIAO and Polar-X, will be based on focusing optics manufactured by means of the Ni electroforming replication technique. This production method has already been successfully exploited for SAX, XMM and Swift-XRT. Optical surfaces for X-ray reflection have to be as smooth as possible, also at high spatial frequencies. Hence it will be crucial to keep microroughness under control in order to reduce scattering effects: a high rms microroughness would degrade the angular resolution and cause a loss of effective area. Stringent requirements therefore have to be fixed for the mirror-shell surface roughness, depending on the specific energy range investigated, and the roughness evolution has to be carefully monitored during the subsequent steps of mirror-shell production. This means studying the roughness evolution along the chain from mandrel to mirror shell to multilayer deposition, as well as the degradation of mandrel roughness following iterated replicas. Such a study allows one to infer which phases of production are mainly responsible for the roughness growth and could help in finding solutions that optimize the processes involved. The present study is carried out in the context of the technological consolidation related to SIMBOL-X, along with a systematic metrological study of mandrels and mirror shells. To monitor the roughness increase following each replica, a multi-instrumental approach was adopted: microprofiles were analysed by means of their Power Spectral Density (PSD) in the spatial frequency range 1000-0.01 μm. This enables the direct comparison of roughness data taken with instruments characterized by different operative frequency ranges, in particular optical interferometers and Atomic Force Microscopes. The performed analysis allowed us to set realistic specifications on the mandrel roughness to be achieved, and to suggest a limit for the
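
    The PSD comparison across instruments reduces, for each measured microprofile, to a one-dimensional power spectral density computed by FFT. A minimal sketch under assumed sampling parameters and synthetic profile data (normalization conventions vary between instruments and papers):

```python
# 1-D power spectral density of a height profile via FFT, plus the rms
# roughness of the profile. Profile data and sampling step are synthetic.
import numpy as np

dx = 0.1e-6                                           # sampling step (m), assumed
z = np.random.default_rng(0).normal(0.0, 1e-9, 4096)  # height profile (m), synthetic

Z = np.fft.rfft(z - z.mean())
freqs = np.fft.rfftfreq(z.size, d=dx)                 # spatial frequencies (1/m)
psd = (np.abs(Z) ** 2) * dx / z.size                  # one-sided PSD, up to normalization convention

rms = np.sqrt(np.mean((z - z.mean()) ** 2))           # rms roughness from the profile
print(f"rms roughness: {rms * 1e9:.2f} nm")
```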

  9. Using a Robust Design Approach to Optimize Chair Set-up in Wheelchair Sport

    Directory of Open Access Journals (Sweden)

    David S. Haydon

    2018-02-01

    Full Text Available Optimisation of wheelchairs for court sports is currently a difficult and time-consuming process due to the broad range of impairments across athletes, difficulties in monitoring on-court performance, and the trade-off effects that set-up parameters have on key performance variables. A robust design approach to this problem can potentially reduce the amount of testing required and therefore allow for individual on-court assessments. This study used an orthogonal design with four set-up factors (seat height, depth, and angle, as well as tyre pressure) at three levels (current, decreased, and increased) for three elite wheelchair rugby players. Each player performed two maximal-effort sprints from a stationary position in nine different set-ups, allowing detailed analysis of each factor and level. Whilst statistical significance is difficult to obtain with such a small sample size, meaningful differences aligning with previous research findings were identified, providing support for the use of this approach.
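
    The orthogonal-design bookkeeping described here can be illustrated with the standard Taguchi L9(3^4) array, which accommodates exactly four factors at three levels in nine runs. A minimal sketch with invented response values (the real study measured on-court sprint performance):

```python
# Standard Taguchi L9 array (levels coded 0, 1, 2) for four factors at
# three levels, plus per-factor level means (main effects) from a response
# vector. Response values are invented placeholders.
import numpy as np

# Columns: seat height, seat depth, seat angle, tyre pressure.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])
sprint_time = np.array([3.1, 3.0, 3.2, 2.9, 3.3, 3.0, 3.1, 2.8, 3.2])  # placeholder

factors = ["seat height", "seat depth", "seat angle", "tyre pressure"]
for j, name in enumerate(factors):
    means = [sprint_time[L9[:, j] == lvl].mean() for lvl in range(3)]
    print(name, [f"{m:.2f}" for m in means])
```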

  10. An approach to develop chemical intuition for atomistic electron transport calculations using basis set rotations

    Energy Technology Data Exchange (ETDEWEB)

    Borges, A.; Solomon, G. C. [Department of Chemistry and Nano-Science Center, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen Ø (Denmark)

    2016-05-21

    Single molecule conductance measurements are often interpreted through computational modeling, but the complexity of these calculations makes it difficult to directly link them to simpler concepts and models. Previous work has attempted to make this connection using maximally localized Wannier functions and symmetry-adapted basis sets, but their use can be ambiguous and non-trivial. Starting from a Hamiltonian and overlap matrix written in a hydrogen-like basis set, we demonstrate a simple approach to obtain a new basis set that is chemically more intuitive and allows interpretation in terms of simple concepts and models. By diagonalizing the Hamiltonians corresponding to each atom in the molecule, we obtain a basis set that can be partitioned into pseudo-σ and pseudo-π components, allows partitioning of the Landauer-Büttiker transmission, and supports the construction of simple Hückel models that reproduce the key features of the full calculation. This method provides a link between complex calculations and simple concepts and models, providing intuition or extracting parameters for more complex model systems.
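
    The central operation, diagonalizing the per-atom sub-blocks of the Hamiltonian and rotating the full matrix into the resulting basis, can be sketched in a few lines of linear algebra. This toy version ignores the overlap matrix treated in the paper; the 2x2 atomic blocks and matrix values are placeholders:

```python
# Diagonalize the on-site (per-atom) blocks of a Hamiltonian and assemble
# the block-diagonal rotation that re-expresses H in the new basis.
import numpy as np
from scipy.linalg import block_diag

H = np.array([[1.0, 0.2, 0.1, 0.0],
              [0.2, 1.5, 0.0, 0.1],
              [0.1, 0.0, 2.0, 0.3],
              [0.0, 0.1, 0.3, 2.5]])
atom_blocks = [(0, 2), (2, 4)]                 # index ranges of the two "atoms", assumed

rotations = []
for lo, hi in atom_blocks:
    _, vecs = np.linalg.eigh(H[lo:hi, lo:hi])  # diagonalize each atomic sub-block
    rotations.append(vecs)

U = block_diag(*rotations)
H_rot = U.T @ H @ U   # on-site blocks now diagonal; couplings become interpretable
print(np.round(H_rot, 3))
```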

  11. Multi-Attribute Decision Making Based on Several Trigonometric Hamming Similarity Measures under Interval Rough Neutrosophic Environment

    Directory of Open Access Journals (Sweden)

    Surapati Pramanik

    2018-03-01

    Full Text Available In this paper, sine, cosine and cotangent similarity measures of interval rough neutrosophic sets are proposed. Some properties of the proposed measures are discussed. We propose multi-attribute decision-making approaches based on the proposed similarity measures. To demonstrate their applicability, a numerical example is solved.
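
    As a rough illustration of the kind of measure involved, the following sketch implements a cosine similarity for single-valued neutrosophic triples (truth, indeterminacy, falsity); the paper's interval rough version additionally carries lower and upper approximations of each component, which this simplified form omits:

```python
# Cosine similarity between two single-valued neutrosophic values, i.e.
# the cosine of the angle between their (t, i, f) vectors.
import math

def cosine_similarity(a, b):
    """a, b: (t, i, f) triples with components in [0, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

print(cosine_similarity((0.7, 0.2, 0.1), (0.6, 0.3, 0.2)))
```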

  12. A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks

    Science.gov (United States)

    De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio

    2016-05-01

    This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with an improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performance of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the results obtained in terms of pressure reduction are comparable with those of competing metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
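
    A minimal harmony search sketch on a stand-in objective is shown below; in the paper, the decision variables would be the valve settings and the objective would invoke the EPANET hydraulic solver and add penalties for constraint violations, none of which is reproduced here:

```python
# Minimal harmony search: memory consideration (hmcr), pitch adjustment
# (par, bandwidth bw), and random improvisation, replacing the worst
# harmony when the new one is better.
import random

def objective(x):                       # placeholder for leakage + penalties
    return sum((xi - 0.5) ** 2 for xi in x)

dim, hm_size, hmcr, par, bw, iters = 3, 10, 0.9, 0.3, 0.05, 2000
memory = [[random.random() for _ in range(dim)] for _ in range(hm_size)]

for _ in range(iters):
    new = []
    for j in range(dim):
        if random.random() < hmcr:                 # memory consideration
            value = random.choice(memory)[j]
            if random.random() < par:              # pitch adjustment
                value = min(1.0, max(0.0, value + random.uniform(-bw, bw)))
        else:                                      # random improvisation
            value = random.random()
        new.append(value)
    worst = max(range(hm_size), key=lambda i: objective(memory[i]))
    if objective(new) < objective(memory[worst]):
        memory[worst] = new

best = min(memory, key=objective)
print(best, objective(best))
```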

  13. Roughness Effects on Fretting Fatigue

    Science.gov (United States)

    Yue, Tongyan; Abdel Wahab, Magd

    2017-05-01

    Fretting is a small oscillatory relative motion between two normally loaded contact surfaces. It may cause fretting fatigue, fretting wear and/or fretting corrosion damage, depending on the fretting couple and the working conditions. Fretting fatigue usually occurs under partial slip conditions and results in catastrophic failure at stress levels below the fatigue limit of the material. Many parameters may affect fretting behaviour, including the applied normal load and displacement, material properties, roughness of the contact surfaces, frequency, etc. Since fretting damage between contacting surfaces is undesirable, the effect of rough contact surfaces on fretting damage has been studied by many researchers. Experimental work on this topic usually focuses on the effects of surface finishing treatments and of random surface roughness, with the aim of increasing fretting fatigue life. However, most numerical models of roughness are based on random surfaces. This paper reviews both experimental and numerical methodologies concerning rough surface effects on fretting fatigue.

  14. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shenggao, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Mathematical Center for Interdiscipline Research, Soochow University, 1 Shizi Street, Jiangsu, Suzhou 215006 (China); Sun, Hui; Cheng, Li-Tien [Department of Mathematics, University of California, San Diego, La Jolla, California 92093-0112 (United States); Dzubiella, Joachim [Soft Matter and Functional Materials, Helmholtz-Zentrum Berlin, 14109 Berlin, Germany and Institut für Physik, Humboldt-Universität zu Berlin, 12489 Berlin (Germany); Li, Bo, E-mail: sgzhou@suda.edu.cn, E-mail: bli@math.ucsd.edu [Department of Mathematics and Quantitative Biology Graduate Program, University of California, San Diego, La Jolla, California 92093-0112 (United States); McCammon, J. Andrew [Department of Chemistry and Biochemistry, Department of Pharmacology, Howard Hughes Medical Institute, University of California, San Diego, La Jolla, California 92093-0365 (United States)

    2016-08-07

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity,” that is, the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations. Our work is a first step toward the

  15. Urban Aerodynamic Roughness Length Mapping Using Multitemporal SAR Data

    Directory of Open Access Journals (Sweden)

    Fengli Zhang

    2017-01-01

    Full Text Available Aerodynamic roughness is very important to urban meteorological and climate studies. Radar remote sensing is considered to be an effective means for aerodynamic roughness retrieval because radar backscattering is sensitive to the surface roughness and geometric structure of a given target. In this paper, a methodology for aerodynamic roughness length estimation using SAR data in urban areas is introduced. The scale and orientation characteristics of the backscattering of various targets in urban areas were first extracted and analyzed, which showed the great potential of SAR data for characterizing urban roughness elements. The ground-truth aerodynamic roughness was then calculated from wind gradient data acquired by a meteorological tower, using a fitting and iteration method. The optimal dimension of the upwind sector for the aerodynamic roughness calculation was determined through a correlation analysis between the backscattering extracted from SAR data over various upwind sector areas and the aerodynamic roughness calculated from the meteorological tower data. Finally, a quantitative relationship was established to retrieve the aerodynamic roughness length from SAR data. Experiments based on ALOS PALSAR and COSMO-SkyMed data from 2006 to 2011 show that the proposed methodology can provide accurate roughness length estimations for the spatial and temporal analysis of urban surfaces.
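
    The ground-truth step, recovering a roughness length from tower wind-gradient data, is commonly done by fitting the neutral logarithmic wind profile u(z) = (u*/κ)·ln(z/z0). A minimal sketch with assumed heights and speeds (the paper's fitting and iteration procedure may differ in detail):

```python
# Fit the neutral log wind law u(z) = (u*/kappa) * ln(z / z0) by linear
# regression of u against ln(z); z0 is the height at which u extrapolates
# to zero. Heights and speeds are illustrative, not tower data.
import numpy as np

kappa = 0.4
z = np.array([10.0, 20.0, 40.0, 80.0])   # measurement heights (m), assumed
u = np.array([3.1, 3.7, 4.3, 4.9])       # mean wind speeds (m/s), assumed

slope, intercept = np.polyfit(np.log(z), u, 1)   # u = slope*ln(z) + intercept
u_star = kappa * slope                           # friction velocity
z0 = np.exp(-intercept / slope)                  # roughness length where u(z0) = 0
print(f"u* = {u_star:.2f} m/s, z0 = {z0:.3f} m")
```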

  16. A new fiber optic sensor for inner surface roughness measurement

    Science.gov (United States)

    Xu, Xiaomei; Liu, Shoubin; Hu, Hong

    2009-11-01

    In order to measure the inner surface roughness of small holes nondestructively, a new fiber optic sensor was researched and developed. Firstly, a new model for surface roughness measurement is proposed, based on intensity-modulated fiber optic sensors and scattering models of rough surfaces. Secondly, a fiber optic measurement system is designed and set up. With the help of new techniques, the fiber optic sensor can be miniaturized. Furthermore, a micro prism turns the light through 90 degrees, so the roughness of the inner side surfaces of small holes can be measured. Thirdly, the fiber optic sensor is calibrated against standard surface roughness specimens, and a series of measurement experiments have been carried out. The measurement results are compared with those obtained by a TR220 Surface Roughness Instrument and a Form Talysurf Laser 635, verifying the validity of the developed fiber optic sensor. Finally, the precision and influence factors of the fiber optic sensor are analyzed.

  17. Numerical simulations of seepage flow in rough single rock fractures

    Directory of Open Access Journals (Sweden)

    Qingang Zhang

    2015-09-01

    Full Text Available To investigate the relationship between the structural characteristics and seepage flow behavior of rough single rock fractures, a set of single-fracture physical models was produced using the Weierstrass-Mandelbrot function to test seepage flow performance. Six single fractures, with various surface roughnesses characterized by fractal dimensions, were built using COMSOL Multiphysics software. The fluid flow through the rough fractures and the influence of the rough surfaces on its behavior were then monitored. The numerical simulation indicates that there is a linear relationship between the average flow velocity over the entire flow path and the fractal dimension of the rough surface. Good agreement is shown between the numerical results and the experimental data in terms of the properties of the fluid flowing through the rough single rock fractures.

  18. Effect of truncated cone roughness element density on hydrodynamic drag

    Science.gov (United States)

    Womack, Kristofer; Schultz, Michael; Meneveau, Charles

    2017-11-01

    An experimental study was conducted on rough-wall turbulent boundary layer flow with roughness elements whose idealized shape models the barnacles that cause hydrodynamic drag in many applications. Varying planform densities of truncated cone roughness elements were investigated, with element densities ranging from 10% to 79%. Detailed turbulent boundary layer velocity statistics were recorded with a two-component LDV system on a three-axis traverse. The hydrodynamic roughness length (z0) and skin-friction coefficient (Cf) were determined and compared with estimates from existing roughness element drag prediction models, including Macdonald et al. (1998) and other recent models. Since the roughness elements used in this work model idealized barnacles, the implications of this data set for ship powering are considered. This research was supported by the Office of Naval Research and by the Department of Defense (DoD) through the National Defense Science & Engineering Graduate Fellowship (NDSEG) Program.

  19. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    Science.gov (United States)

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function contain a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
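
    The local-entropy weight at the heart of the energy function can be sketched with an off-the-shelf sliding-window entropy filter; the synthetic image and window radius below are placeholders, and the paper's exact weighting scheme may differ:

```python
# Local entropy of the grey-level distribution in a sliding neighbourhood,
# normalized for use as a weight in an energy integral. The synthetic image
# stands in for an MR slice with intensity inhomogeneity.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk

rng = np.random.default_rng(0)
img = (rng.random((128, 128)) * 255).astype(np.uint8)  # synthetic 8-bit image

weight = entropy(img, disk(5))   # local entropy map over a radius-5 window
weight /= weight.max()           # normalized weight for the weighted energy
print(weight.shape, weight.min(), weight.max())
```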

  20. A Novel MADM Approach Based on Fuzzy Cross Entropy with Interval-Valued Intuitionistic Fuzzy Sets

    Directory of Open Access Journals (Sweden)

    Xin Tong

    2015-01-01

    Full Text Available The paper presents a novel multiple attribute decision-making (MADM) approach for problems with completely unknown attribute weights in the framework of interval-valued intuitionistic fuzzy sets (IVIFS). First, the fuzzy cross entropy and discrimination degree of IVIFS are defined. Subsequently, based on the discrimination degree of IVIFS, a nonlinear programming model is constructed to minimize the total deviation of discrimination degrees between alternatives and the positive ideal solution (PIS) as well as the negative ideal solution (NIS), yielding the attribute weights and, in turn, the weighted discrimination degrees. Then, all the alternatives are ranked according to the relative closeness coefficients using the extended TOPSIS method, and the most desirable alternative is chosen. The proposed approach extends the research methods of MADM based on IVIF cross entropy. Finally, we illustrate the feasibility and validity of the proposed method with two examples.
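
    The final ranking step follows the usual TOPSIS pattern: a relative closeness coefficient computed from each alternative's separation from the positive and negative ideal solutions. A minimal sketch with invented separation values (in the paper these come from weighted cross-entropy discrimination degrees):

```python
# TOPSIS relative closeness: C_i = D_i^- / (D_i^+ + D_i^-), where D_i^+ and
# D_i^- are separations from the positive and negative ideal solutions.
import numpy as np

d_pos = np.array([0.21, 0.35, 0.18, 0.42])   # separation from PIS, placeholder
d_neg = np.array([0.44, 0.27, 0.51, 0.19])   # separation from NIS, placeholder

closeness = d_neg / (d_pos + d_neg)
ranking = np.argsort(-closeness)             # best alternative first
print(closeness.round(3), "best alternative:", ranking[0] + 1)
```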

  1. Thermogram breast cancer prediction approach based on Neutrosophic sets and fuzzy c-means algorithm.

    Science.gov (United States)

    Gaber, Tarek; Ismail, Gehad; Anter, Ahmed; Soliman, Mona; Ali, Mona; Semary, Noura; Hassanien, Aboul Ella; Snasel, Vaclav

    2015-08-01

    The early detection of breast cancer helps many women survive. In this paper, a CAD system classifying breast cancer thermograms into normal and abnormal is proposed. This approach consists of two main phases: automatic segmentation and classification. For the former phase, an improved segmentation approach based on both Neutrosophic sets (NS) and an optimized fast fuzzy c-means (F-FCM) algorithm is proposed. A post-segmentation process is also suggested to segment the breast parenchyma (i.e., the ROI) from thermogram images. For the classification, different kernel functions of the Support Vector Machine (SVM) were used to classify the breast parenchyma into normal or abnormal cases. Using a benchmark database, the proposed CAD system was evaluated in terms of precision, recall, and accuracy, and compared with related work. The experimental results showed that our system is a very promising step toward automatic diagnosis of breast cancer using thermograms, as the accuracy reached 100%.

  2. Guidelines for a palliative approach for aged care in the community setting: a suite of resources

    Directory of Open Access Journals (Sweden)

    David C. Currow

    2012-11-01

    Full Text Available Abstract In Australia, many people ageing in their own homes are becoming increasingly frail and unwell, approaching the end of life. A palliative approach, which adheres to palliative care principles, is often appropriate. These principles provide a framework for proactive and holistic care in which quality of life and of dying is prioritised, as is support for families. A palliative approach can be delivered by the general practitioner working with the community aged care team, in collaboration with family carers. Support from specialist palliative care services is available if necessary. The Guidelines for a Palliative Approach for Aged Care in the Community Setting were published by the Australian Government Department of Health and Ageing to inform practice in this area. There are three resource documents. The main document provides practical evidence based guidelines, good practice points, tools, and links to resources. This document is written for general practitioners, nurses, social workers, therapists, pastoral care workers, and other health professionals and responded to needs identified during national consultation. Evidence based guidelines were underpinned by systematic reviews of the research literature. Good practice points were developed from literature reviews and expert opinion. Two ‘plain English’ booklets were developed in a process involving consumer consultation; one is for older people and their families, the other for care workers. The resources are intended to facilitate home care that acknowledges and plans for the client’s deteriorating functional trajectory and inevitable death. At a time when hospitals and residential aged care facilities are under enormous pressure as the population ages, such a planned approach makes sense for the health system as a whole. The approach also makes sense for older people who wish to die in their own homes. Family needs are recognised and addressed. Unnecessary hospitalisations

  3. Guidelines for a palliative approach for aged care in the community setting: A suite of resources.

    Science.gov (United States)

    Toye, Christine; Blackwell, Scott; Maher, Sean; Currow, David C; Holloway, Kristi; Tieman, Jennifer; Hegarty, Meg

    2012-01-01

    In Australia, many people ageing in their own homes are becoming increasingly frail and unwell, approaching the end of life. A palliative approach, which adheres to palliative care principles, is often appropriate. These principles provide a framework for proactive and holistic care in which quality of life and of dying is prioritised, as is support for families. A palliative approach can be delivered by the general practitioner working with the community aged care team, in collaboration with family carers. Support from specialist palliative care services is available if necessary. The Guidelines for a Palliative Approach for Aged Care in the Community Setting were published by the Australian Government Department of Health and Ageing to inform practice in this area. There are three resource documents. The main document provides practical evidence based guidelines, good practice points, tools, and links to resources. This document is written for general practitioners, nurses, social workers, therapists, pastoral care workers, and other health professionals and responded to needs identified during national consultation. Evidence based guidelines were underpinned by systematic reviews of the research literature. Good practice points were developed from literature reviews and expert opinion. Two 'plain English' booklets were developed in a process involving consumer consultation; one is for older people and their families, the other for care workers. The resources are intended to facilitate home care that acknowledges and plans for the client's deteriorating functional trajectory and inevitable death. At a time when hospitals and residential aged care facilities are under enormous pressure as the population ages, such a planned approach makes sense for the health system as a whole. The approach also makes sense for older people who wish to die in their own homes. Family needs are recognised and addressed. Unnecessary hospitalisations or residential placements and

  4. A Science, Engineering and Technology (SET) Approach Improves Science Process Skills in 4-H Animal Science Participants

    Science.gov (United States)

    Clarke, Katie C.

    2010-01-01

    A new Science, Engineering and Technology (SET) approach was designed for youth who participated in the Minnesota State Fair Livestock interview process. The project and evaluation were designed to determine if the new SET approach increased content knowledge and science process skills in participants. Results revealed that youth participants not…

  5. Natural science modules with SETS approach to improve students’ critical thinking ability

    Science.gov (United States)

    Budi, A. P. S.; Sunarno, W.; Sugiyarto

    2018-05-01

    The SETS (Science, Environment, Technology and Society) approach to learning is important to develop for middle school, since it can improve students’ critical thinking ability. This research aimed to determine the feasibility and effectiveness of a natural science module with the SETS approach for increasing critical thinking ability. The module development proceeded through invitation, exploration, explanation, concept fortifying, and assessment. A questionnaire and tests, including a pretest and posttest with a control group design, were used as the data collection techniques in this research. Two classes were selected randomly as samples, each consisting of 32 students. Descriptive data analysis was used to analyze the module feasibility, and a t-test was used to analyze critical thinking ability. The results showed that the developed module achieved very good feasibility ratings in the assessments by experts, practitioners and peers. Based on the t-test results, there was a significant difference between the control class and the experiment class (0.004), with n-gain scores for the control and experiment classes of 0.270 (low) and 0.470 (medium), respectively. This shows that the module was more effective than the textbook: it was able to improve students’ critical thinking ability and is appropriate for use in the learning process.
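
    The reported n-gain scores follow the standard normalized-gain formula g = (post − pre)/(max − pre), averaged per class. A minimal sketch with invented score pairs, not the study's data:

```python
# Hake normalized gain per student, averaged per class. Score pairs below
# are invented placeholders on a 0-100 scale.
def n_gain(pre, post, max_score=100.0):
    return (post - pre) / (max_score - pre)

control = [(55, 67), (60, 71)]        # (pretest, posttest) pairs, placeholder
experiment = [(54, 76), (58, 78)]

for name, scores in [("control", control), ("experiment", experiment)]:
    gains = [n_gain(pre, post) for pre, post in scores]
    print(name, round(sum(gains) / len(gains), 3))
```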

  6. [Priority setting of health interventions. Review of criteria, approaches and role of assessment agencies].

    Science.gov (United States)

    Varela-Lema, Leonor; Atienza-Merino, Gerardo; López-García, Marisa

    This study was carried out to develop an explicit health priority setting methodology to support decision-making regarding the technologies to be assessed for inclusion in the National Health Service service portfolio. The primary objective is to identify and analyse the criteria, approaches and conceptual frameworks used for national/international priority setting. An exhaustive review of the literature was carried out. For this purpose, a search of the main biomedical databases was performed and assessment agency websites were reviewed, among other sources. In general terms, it was found that there are no standardised criteria for priority setting, although some consensus and common trends have been identified regarding key elements (criteria, models and strategies, key actors, etc.). Globally, 8 key domains were identified: 1) need for intervention; 2) health outcomes; 3) type of benefit of the intervention; 4) economic consequences; 5) existing knowledge on the intervention/quality of and uncertainties regarding the evidence; 6) implementation and complexity of the intervention/feasibility; 7) priority, justice and ethics; and 8) overall context. The review provides a thorough analysis of the relevant issues and offers key recommendations regarding considerations for developing a national prioritisation framework. Findings are envisioned to be useful for different public organisations that are aiming to establish healthcare priorities. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.

  7. Design Approach and Implementation of Application Specific Instruction Set Processor for SHA-3 BLAKE Algorithm

    Science.gov (United States)

    Zhang, Yuli; Han, Jun; Weng, Xinqian; He, Zhongzhu; Zeng, Xiaoyang

    This paper presents an Application Specific Instruction-set Processor (ASIP) for the SHA-3 BLAKE algorithm family, built through instruction set extensions (ISE) of a RISC (reduced instruction set computer) processor. Through a design space exploration of this ASIP to increase performance and reduce area cost, we accomplish an efficient hardware and software implementation of the BLAKE algorithm. The special instructions and their well-matched hardware function unit speed up the calculation of the key section of the algorithm, namely the G-functions. Relaxing the time constraint on the special function unit also decreases its hardware cost, while keeping the high data throughput of the processor. Evaluation results reveal that the ASIP achieves 335 Mbps and 176 Mbps for BLAKE-256 and BLAKE-512, respectively. The extra area cost is only 8.06k equivalent gates. The proposed ASIP outperforms several software approaches on various platforms in cycles per byte. In fact, both the high throughput and the low hardware cost achieved by this programmable processor are comparable to those of ASIC implementations.
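
    For reference, the BLAKE-256 G-function that the special instructions target can be sketched directly from the algorithm's specification; here the message-word and constant injection through the sigma permutation is folded into precomputed inputs x and y for brevity:

```python
# BLAKE-256 G-function core: additions, XORs, and 32-bit rotations by
# 16, 12, 8, and 7, applied to four state words a, b, c, d.
MASK = 0xFFFFFFFF

def rotr(w, n):
    return ((w >> n) | (w << (32 - n))) & MASK

def G(a, b, c, d, x, y):
    a = (a + b + x) & MASK
    d = rotr(d ^ a, 16)
    c = (c + d) & MASK
    b = rotr(b ^ c, 12)
    a = (a + b + y) & MASK
    d = rotr(d ^ a, 8)
    c = (c + d) & MASK
    b = rotr(b ^ c, 7)
    return a, b, c, d

print([hex(w) for w in G(0x6A09E667, 0xBB67AE85, 0x3C6EF372, 0xA54FF53A, 0, 0)])
```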

  8. Variations in roughness predictions (flume experiments)

    NARCIS (Netherlands)

    Noordam, Daniëlle; Blom, Astrid; van der Klis, H.; Hulscher, Suzanne J.M.H.; Makaske, A.; Wolfert, H.P.; van Os, A.G.

    2005-01-01

    Data of flume experiments with bed forms are used to analyze and compare different roughness predictors. In this study, the hydraulic roughness consists of grain roughness and form roughness. We predict the grain roughness by means of the size of the sediment. The form roughness is predicted by

  9. LBA-ECO LC-15 Aerodynamic Roughness Maps of Vegetation Canopies, Amazon Basin: 2000

    Data.gov (United States)

    National Aeronautics and Space Administration — This data set, LBA-ECO LC-15 Aerodynamic Roughness Maps of Vegetation Canopies, Amazon Basin: 2000, provides physical roughness maps of vegetation canopies in the...

  10. Comparison of vegetation roughness descriptions

    NARCIS (Netherlands)

    Augustijn, Dionysius C.M.; Huthoff, Freek; van Velzen, E.H.; Altinakar, M.S.; Kokpinar, M.A.; Aydin, I.; Cokgor, S.; Kirkgoz, S.

    2008-01-01

    Vegetation roughness is an important parameter in describing flow through river systems. Vegetation impedes the flow, which affects the stage-discharge curve and may increase flood risks. Roughness is often used as a calibration parameter in river models, however when vegetation is allowed to

  11. Integrating Genomic Data Sets for Knowledge Discovery: An Informed Approach to Management of Captive Endangered Species

    Directory of Open Access Journals (Sweden)

    Kristopher J. L. Irizarry

    2016-01-01

    Full Text Available Many endangered captive populations exhibit reduced genetic diversity resulting in health issues that impact reproductive fitness and quality of life. Numerous cost effective genomic sequencing and genotyping technologies provide unparalleled opportunity for incorporating genomics knowledge in management of endangered species. Genomic data, such as sequence data, transcriptome data, and genotyping data, provide critical information about a captive population that, when leveraged correctly, can be utilized to maximize population genetic variation while simultaneously reducing unintended introduction or propagation of undesirable phenotypes. Current approaches aimed at managing endangered captive populations utilize species survival plans (SSPs that rely upon mean kinship estimates to maximize genetic diversity while simultaneously avoiding artificial selection in the breeding program. However, as genomic resources increase for each endangered species, the potential knowledge available for management also increases. Unlike model organisms in which considerable scientific resources are used to experimentally validate genotype-phenotype relationships, endangered species typically lack the necessary sample sizes and economic resources required for such studies. Even so, in the absence of experimentally verified genetic discoveries, genomics data still provides value. In fact, bioinformatics and comparative genomics approaches offer mechanisms for translating these raw genomics data sets into integrated knowledge that enable an informed approach to endangered species management.
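
    The mean-kinship quantity that SSP breeding recommendations rest on is simple to state: each individual's average kinship with the living population, with lower values marking genetically more valuable animals. A minimal sketch with a toy kinship matrix (conventions differ on whether self-kinship is included in the average):

```python
# Mean kinship per individual from a pairwise kinship matrix; individuals
# with the lowest mean kinship are prioritized for breeding.
import numpy as np

K = np.array([[0.50, 0.25, 0.05],
              [0.25, 0.50, 0.10],
              [0.05, 0.10, 0.50]])       # pairwise kinship coefficients, toy values

mean_kinship = K.mean(axis=1)            # this convention includes self-kinship
priority = np.argsort(mean_kinship)      # lowest mean kinship first
print(mean_kinship.round(3), "breeding priority order:", priority)
```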

  12. Integrating Genomic Data Sets for Knowledge Discovery: An Informed Approach to Management of Captive Endangered Species.

    Science.gov (United States)

    Irizarry, Kristopher J L; Bryant, Doug; Kalish, Jordan; Eng, Curtis; Schmidt, Peggy L; Barrett, Gini; Barr, Margaret C

    2016-01-01

    Many endangered captive populations exhibit reduced genetic diversity resulting in health issues that impact reproductive fitness and quality of life. Numerous cost effective genomic sequencing and genotyping technologies provide unparalleled opportunity for incorporating genomics knowledge in management of endangered species. Genomic data, such as sequence data, transcriptome data, and genotyping data, provide critical information about a captive population that, when leveraged correctly, can be utilized to maximize population genetic variation while simultaneously reducing unintended introduction or propagation of undesirable phenotypes. Current approaches aimed at managing endangered captive populations utilize species survival plans (SSPs) that rely upon mean kinship estimates to maximize genetic diversity while simultaneously avoiding artificial selection in the breeding program. However, as genomic resources increase for each endangered species, the potential knowledge available for management also increases. Unlike model organisms in which considerable scientific resources are used to experimentally validate genotype-phenotype relationships, endangered species typically lack the necessary sample sizes and economic resources required for such studies. Even so, in the absence of experimentally verified genetic discoveries, genomics data still provides value. In fact, bioinformatics and comparative genomics approaches offer mechanisms for translating these raw genomics data sets into integrated knowledge that enable an informed approach to endangered species management.

  13. Road roughness evaluation using in-pavement strain sensors

    Science.gov (United States)

    Zhang, Zhiming; Deng, Fodan; Huang, Ying; Bridgelall, Raj

    2015-11-01

    The international roughness index (IRI) is a characterization of road roughness or ride quality that transportation agencies most often report. The prevalent method of acquiring IRI data requires instrumented vehicles and technicians with specialized training to interpret the results. The extensive labor and high cost requirements associated with the existing approaches limit data collection to at most once per year for portions of the national highway system. Agencies characterize roughness for only some secondary roads, and much less frequently, such as once every five years, resulting in outdated roughness information. This research developed a real-time roughness evaluation approach that links the output of durable in-pavement strain sensors to prevailing indices that summarize road roughness. Field experiments validated the high consistency of the approach by showing that it is within 3.3% of relative IRI estimates. After their installation and calibration during road construction, the ruggedized strain sensors will report road roughness continuously. Thus, the approach will provide agencies with real-time roughness monitoring over the remaining service life of road assets.

  14. Non-Contact Surface Roughness Measurement by Implementation of a Spatial Light Modulator

    Directory of Open Access Journals (Sweden)

    Laura Aulbach

    2017-03-01

    Full Text Available The surface structure, especially the roughness, has a significant influence on numerous parameters, such as friction and wear, and therefore determines the quality of technical systems. In recent decades, a broad variety of surface roughness measurement methods have been developed. The crucial drawbacks of most of these methods are a destructive measurement procedure or the lack of feasibility for online monitoring. This article proposes a new non-contact method for measuring surface roughness that is straightforward to implement and easy to extend to online monitoring processes. The key element is a liquid-crystal-based spatial light modulator integrated into an interferometric setup. By varying the imprinted phase of the modulator, a correlation between the imprinted phase and the fringe visibility of the interferogram is measured, from which the surface roughness can be derived. This paper presents the theoretical approach of the method and first simulation and experimental results for a set of surface roughnesses. The experimental results are compared with values obtained by an atomic force microscope and a stylus profiler.
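
    The quantity at the center of the method, fringe visibility, is V = (Imax − Imin)/(Imax + Imin) of the interferogram; in the measurement it is recorded as a function of the phase imprinted by the modulator. A minimal sketch on a synthetic intensity trace:

```python
# Fringe visibility from an interferogram intensity trace. The trace is
# synthetic, constructed with a known visibility of 0.6.
import numpy as np

x = np.linspace(0, 4 * np.pi, 1000)
trace = 1.0 + 0.6 * np.cos(x)        # synthetic interferogram

visibility = (trace.max() - trace.min()) / (trace.max() + trace.min())
print(f"fringe visibility: {visibility:.2f}")
```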

  15. Generating evidence on a risk-based monitoring approach in the academic setting – lessons learned

    Directory of Open Access Journals (Sweden)

    Belinda von Niederhäusern

    2017-02-01

    Full Text Available Abstract Background In spite of efforts to employ risk-based strategies to increase monitoring efficiency in the academic setting, empirical evidence on their effectiveness remains sparse. This mixed-methods study aimed to evaluate the risk-based on-site monitoring approach currently followed at our academic institution. Methods We selected all studies monitored by the Clinical Trial Unit (CTU) according to Risk ADApted MONitoring (ADAMON) at the University Hospital Basel, Switzerland, between 01.01.2012 and 31.12.2014. We extracted study characteristics and monitoring information from the CTU Enterprise Resource Management system and from the monitoring reports of all selected studies. We summarized the data descriptively. Additionally, we conducted semi-structured interviews with the three current CTU monitors. Results During the observation period, a total of 214 monitoring visits were conducted in 43 studies, resulting in 2961 documented monitoring findings. Our risk-based approach predominantly identified administrative (46.2%) and patient rights findings (49.1%). We identified observational study design, high ADAMON risk category, industry sponsorship, the presence of an electronic database, experienced site staff, and inclusion of a vulnerable study population as factors associated with lower numbers of findings. The monitors understand the positive aspects of a risk-based approach but fear missing systematic errors due to the low frequency of visits. Conclusions We show that the factors that most increase the risk of on-site monitoring findings are underrepresented in the current risk analysis scheme. Our risk-based on-site approach should be further complemented by centralized data checks, allowing monitors to transform their role towards being partners for overall trial quality and success.

  16. Approaches, tools and methods used for setting priorities in health research in the 21(st) century.

    Science.gov (United States)

    Yoshida, Sachiyo

    2016-06-01

    Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (priorities were set. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be replaced by these emerging tools, which offer more

  17. Approaches, tools and methods used for setting priorities in health research in the 21st century

    Science.gov (United States)

    Yoshida, Sachiyo

    2016-01-01

    Background Health research is difficult to prioritize, because the number of possible competing ideas for research is large, the outcome of research is inherently uncertain, and the impact of research is difficult to predict and measure. A systematic and transparent process to assist policy makers and research funding agencies in making investment decisions is a permanent need. Methods To obtain a better understanding of the landscape of approaches, tools and methods used to prioritize health research, I conducted a methodical review using the PubMed database for the period 2001-2014. Results A total of 165 relevant studies were identified, in which health research prioritization was conducted. They most frequently used the CHNRI method (26%), followed by the Delphi method (24%), James Lind Alliance method (8%), the Combined Approach Matrix (CAM) method (2%) and the Essential National Health Research method (priorities were set. A further 19% used a combination of expert panel interview and focus group discussion ("consultation process") but provided few details, while a further 2% used approaches that were clearly described, but not established as a replicable method. Online surveys that were not accompanied by face-to-face meetings were used in 8% of studies, while 9% used a combination of literature review and questionnaire to scrutinise the research options for prioritization among the participating experts. Conclusion The number of priority setting exercises in health research published in PubMed-indexed journals is increasing, especially since 2010. These exercises are being conducted at a variety of levels, ranging from the global level to the level of an individual hospital. With the development of new tools and methods which have a well-defined structure - such as the CHNRI method, James Lind Alliance Method and Combined Approach Matrix - it is likely that the Delphi method and non-replicable consultation processes will gradually be

  18. A novel approach to deprescribing in long-term care settings: The SMART campaign.

    Science.gov (United States)

    Abrahamson, Kathleen; Nazir, Arif; Pressler, Karis

    2017-11-01

    There have been numerous calls within the medical community urging providers to consider the complex problem of inappropriate polypharmacy and inappropriate medication use among nursing home residents. It is clear that innovative, longitudinal policy-supported interventions are needed to better understand prescribing practices in long-term care settings and to curtail the negative, cascading outcomes associated with inappropriate polypharmacy among elderly patients. The Indiana Safer Medication Administration Regimens and Treatment (SMART) campaign is funded by the Indiana State Department of Health for a pilot period of 2 years (2016-18) with the objectives of: 1. Reducing the average number of medications per resident, 2. Reducing use of antipsychotic, anxiolytic, and hypnotic medications, and 3. Reducing overall medication costs within participating facilities. In this report we comment upon what is new about the Indiana approach, and what we believe is worthy of consideration by other states. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Tele-cytology: An innovative approach for cervical cancer screening in resource-poor settings.

    Science.gov (United States)

    Singh, Sandeep; Badaya, Sorabh

    2016-01-01

    Carcinoma of the cervix remains a leading cause of cancer mortality among women in countries lacking any screening program. The existing screening policy and approach via conventional cytology, centered mainly in tertiary care centers, is largely unaffordable to Indian women, especially in remote areas. This suggests the need to decentralize resources by developing near-real-time modalities that can be used at the doorstep of those in need. For any screening modality to be effective, it should be adequately sensitive, specific, reproducible, cheap, simple, affordable and, most importantly, real time, to ensure wide coverage and curtail loss to follow-up. Incorporating telecytology as a screening tool could achieve this. Telecytology is the interpretation of cytology material at a distance using digital images. The central idea is a mobile telecytology unit housed in a van carrying satellite equipment and automated image-capturing systems. The imaging equipment would image Papanicolaou smears prepared at the screening site and send the images to central laboratories situated at the tertiary care level. This concept could overcome the lack of trained cytology infrastructure in resource-poor settings and could provide an efficient and economical way of screening patients. The approach may not detect all women positive for the disease, but if the objective is to diagnose as many cases as possible in a resource-poor setting, it offers an advantage over no screening at all.

  20. Experiencing a probabilistic approach to clarify and disclose uncertainties when setting occupational exposure limits.

    Science.gov (United States)

    Vernez, David; Fraize-Frontier, Sandrine; Vincent, Raymond; Binet, Stéphane; Rousselle, Christophe

    2018-03-15

    Assessment factors (AFs) are commonly used for deriving reference concentrations for chemicals. These factors take into account variabilities as well as uncertainties in the dataset, such as inter-species and intra-species variabilities, exposure duration extrapolation, or extrapolation from the lowest-observed-adverse-effect level (LOAEL) to the no-observed-adverse-effect level (NOAEL). In a deterministic approach, the value of an AF is the result of a debate among experts, and often a conservative value is used as a default choice. A probabilistic framework to better take into account uncertainties and/or variability when setting occupational exposure limits (OELs) is presented and discussed in this paper. Each AF is considered a random variable with a probabilistic distribution. A short literature review was conducted before setting default distribution ranges and shapes for each commonly used AF. Random sampling, using Monte Carlo techniques, is then used to propagate the identified uncertainties and compute the final OEL distribution. Starting from the broad default distributions obtained, experts narrow them to their most likely range, according to the scientific knowledge available for a specific chemical. Introducing distributions rather than single deterministic values allows disclosing and clarifying the variability and/or uncertainties inherent in the OEL construction process. This probabilistic approach yields quantitative insight into both the possible range and the relative likelihood of values for model outputs. It thereby provides better support for decision-making and improves transparency. This work is available under an Open Access model and licensed under a CC BY-NC 3.0 PL license.
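
    The propagation step can be sketched directly: sample each assessment factor from its distribution and divide the point of departure by their product. The lognormal shapes and parameters below are illustrative assumptions, not the defaults chosen in the paper:

```python
# Monte Carlo propagation of assessment-factor distributions into an OEL
# distribution: OEL = POD / (AF_inter * AF_intra * AF_duration).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
pod = 50.0   # point of departure, e.g. a NOAEL in mg/m3 (assumed)

af_inter = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)      # inter-species
af_intra = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=n)      # intra-species
af_duration = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n)   # duration extrapolation

oel = pod / (af_inter * af_intra * af_duration)
print("median OEL:", np.median(oel).round(2),
      "5th percentile:", np.percentile(oel, 5).round(2))
```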

  1. Shrinkage covariance matrix approach based on robust trimmed mean in gene sets detection

    Science.gov (United States)

    Karjanto, Suryaefiza; Ramli, Norazan Mohamed; Ghani, Nor Azura Md; Aripin, Rasimah; Yusop, Noorezatty Mohd

    2015-02-01

    Microarray technology involves placing an orderly arrangement of thousands of gene sequences in a grid on a suitable surface. The technology has enabled novel discoveries since its development and has attracted increasing attention among researchers. The widespread adoption of microarray technology is largely due to its ability to perform simultaneous analysis of thousands of genes in a massively parallel manner in one experiment, providing valuable knowledge on gene interaction and function. A microarray data set typically consists of tens of thousands of genes (variables) from just dozens of samples, due to various constraints. Therefore, the sample covariance matrix in Hotelling's T2 statistic is not positive definite and becomes singular, so it cannot be inverted. In this research, Hotelling's T2 statistic is combined with a shrinkage approach as an alternative estimator of the covariance matrix to detect significant gene sets. The use of a shrinkage covariance matrix overcomes the singularity problem by converting an unbiased estimator into an improved biased estimator of the covariance matrix. A robust trimmed mean is integrated into the shrinkage matrix to reduce the influence of outliers and consequently increase its efficiency. The performance of the proposed method is measured using several simulation designs. The results are expected to outperform existing techniques under many tested conditions.
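
    The two ingredients described, shrinkage of the singular sample covariance toward an invertible target and a trimmed mean in place of the sample mean, can be sketched as follows; the shrinkage intensity, target, and data are placeholders rather than the paper's estimator:

```python
# Shrinkage covariance S* = (1 - lam) * S + lam * target makes Hotelling's
# T2 computable when p >> n, with a trimmed mean adding robustness.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 50))            # 10 samples, 50 "genes": p >> n

center = trim_mean(X, 0.1, axis=0)       # 10% trimmed mean per gene
S = np.cov(X, rowvar=False)              # singular sample covariance
lam = 0.5                                # shrinkage intensity, assumed
target = np.eye(S.shape[0]) * np.trace(S) / S.shape[0]
S_shrunk = (1 - lam) * S + lam * target  # positive definite, invertible

t2 = X.shape[0] * center @ np.linalg.solve(S_shrunk, center)  # T2 against zero mean
print(f"Hotelling's T2 = {t2:.2f}")
```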

  2. Interdisciplinary approach to the management of medical supplies in the nursing home setting

    Directory of Open Access Journals (Sweden)

    Juan Francisco Peris Martí

    2017-07-01

    Full Text Available Introduction: Given the impact of pressure ulcers in institutionalized elderly people, an interdisciplinary approach to the care of ulcers and the management of medical supplies is essential. The aim of this study is to describe and evaluate the management of medical supplies by an interdisciplinary team in order to promote their rational use in the nursing home setting. Methods: An interdisciplinary team was set up, coordinated by a Pharmacy Unit and including representatives of 18 elderly nursing homes (1,599 beds). Team interventions were assessed in terms of improvements in the management of wound care supplies. In addition, a retrospective descriptive study was carried out on patients with pressure ulcers, in order to consider future interventions. Results: The team interventions led to the selection of 15% of the 180 wound care supplies from the public tender process. The monthly savings in wound dressing material purchases were at least 17%. Furthermore, a reduction in consumption greater than 50% was found in 7 centres. The prevalence of ulcers was 5.59%; a quarter of these ulcers originated outside the nursing homes. Conclusions: The creation of an interdisciplinary team, in which the pharmacist gets closer to patient needs and nurses share responsibility for the selection and management of medical supplies, leads to positive results and represents an opportunity for improvement in elderly care.

  3. Translation of New Molecular Imaging Approaches to the Clinical Setting: Bridging the Gap to Implementation.

    Science.gov (United States)

    van Es, Suzanne C; Venema, Clasina M; Glaudemans, Andor W J M; Lub-de Hooge, Marjolijn N; Elias, Sjoerd G; Boellaard, Ronald; Hospers, Geke A P; Schröder, Carolina P; de Vries, Elisabeth G E

    2016-02-01

    Molecular imaging with PET is a rapidly emerging technique. In breast cancer patients, more than 45 different PET tracers have been or are presently being tested. With a good rationale, after development of the tracer and proven feasibility, it is of interest to evaluate whether there is a potentially meaningful role for the tracer in the clinical setting, such as in staging, in the (early) prediction of a treatment response, or in supporting drug choices. So far, only (18)F-FDG PET has been incorporated into breast cancer guidelines. For proof of the clinical relevance of tracers, especially for analysis in a multicenter setting, standardization of the technology and access to the novel PET tracer are required. However, resources for PET implementation research are limited. Therefore, next to randomized studies, novel approaches are required for proving the clinical value of PET tracers with the smallest possible number of patients. The aim of this review is to describe the process of the development of PET tracers and the level of evidence needed for the use of these tracers in breast cancer. Several breast cancer trials have been performed with the PET tracers (18)F-FDG, 3'-deoxy-3'-(18)F-fluorothymidine ((18)F-FLT), and (18)F-fluoroestradiol ((18)F-FES). We studied them to learn lessons for the implementation of novel tracers. After defining the gap between a good rationale for a tracer and implementation in the clinical setting, we propose solutions to fill it and bring more PET tracers into daily clinical practice. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  4. The preparatory set: A novel approach to understanding "stress", trauma, and the bodymind therapies

    Directory of Open Access Journals (Sweden)

    Peter ePayne

    2015-04-01

    Full Text Available Basic to all motile life is a differential approach/avoid response to perceived features of the environment. The stages of response are initial reflexive noticing and orienting to the stimulus, preparation, and execution of response. Preparation involves a coordination of many aspects of the organism: muscle tone, posture, breathing, autonomic functions, motivational/emotional state, attentional orientation and expectations. The organism organizes itself in relation to the challenge. We propose to call this the preparatory set (PS). We suggest that the concept of the PS can offer a more nuanced and flexible perspective on the stress response than do current theories. We also hypothesize that the mechanisms of bodymind therapeutic and educational systems (BTES) can be understood through the PS framework. We suggest that the BTES, including meditative movement, meditation, somatic education, and the body-oriented psychotherapies, are approaches that use interventions on the PS to remedy stress and trauma. We discuss how the PS can be adaptive or maladaptive, how BTES interventions may restore adaptive PS, and how these concepts offer a broader and more flexible view of the phenomena of stress and trauma. We offer supportive evidence for our hypotheses, and suggest directions for future research. We believe that the PS framework will point to ways of improving the management of stress and trauma, and that it will suggest directions of research into the mechanisms of action of BTES.

  5. The risk implications of approaches to setting soil remediation goals at hazardous waste contaminated sites

    Energy Technology Data Exchange (ETDEWEB)

    Labieniec, Paula Ann [Carnegie Mellon Univ., Pittsburgh, PA (United States)

    1994-08-01

    An integrated exposure and carcinogenic risk assessment model for organic contamination in soil, SoilRisk, was developed and used for evaluating the risk implications of both site-specific and uniform-concentration approaches to setting soil remediation goals at hazardous-waste-contaminated sites. SoilRisk was applied to evaluate the uncertainty in the risk estimate due to uncertainty in site conditions at a representative site. It was also used to evaluate the variability in risk across a region of sites that can occur due to differences in site characteristics that affect contaminant transport and fate when a uniform concentration approach is used. In evaluating regional variability, Ross County, Ohio and the State of Ohio were used as examples. All analyses performed considered four contaminants (benzene, trichloroethylene (TCE), chlordane, and benzo[a]pyrene (BAP)) and four exposure scenarios (commercial, recreational and on- and offsite residential). Regardless of whether uncertainty in risk at a single site or variability in risk across sites was evaluated, the exposure scenario specified and the properties of the target contaminant had more influence than variance in site parameters on the resulting variance and magnitude of the risk estimate. In general, variance in risk was found to be greater for the relatively less degradable and more mobile of the chemicals studied (TCE and chlordane) than for benzene which is highly degradable and BAP which is very immobile in the subsurface.

  6. The risk implications of approaches to setting soil remediation goals at hazardous waste contaminated sites

    International Nuclear Information System (INIS)

    Labieniec, P.A.

    1994-08-01

    An integrated exposure and carcinogenic risk assessment model for organic contamination in soil, SoilRisk, was developed and used for evaluating the risk implications of both site-specific and uniform-concentration approaches to setting soil remediation goals at hazardous-waste-contaminated sites. SoilRisk was applied to evaluate the uncertainty in the risk estimate due to uncertainty in site conditions at a representative site. It was also used to evaluate the variability in risk across a region of sites that can occur due to differences in site characteristics that affect contaminant transport and fate when a uniform concentration approach is used. In evaluating regional variability, Ross County, Ohio and the State of Ohio were used as examples. All analyses performed considered four contaminants (benzene, trichloroethylene (TCE), chlordane, and benzo[a]pyrene (BAP)) and four exposure scenarios (commercial, recreational and on- and offsite residential). Regardless of whether uncertainty in risk at a single site or variability in risk across sites was evaluated, the exposure scenario specified and the properties of the target contaminant had more influence than variance in site parameters on the resulting variance and magnitude of the risk estimate. In general, variance in risk was found to be greater for the relatively less degradable and more mobile of the chemicals studied (TCE and chlordane) than for benzene which is highly degradable and BAP which is very immobile in the subsurface.

  7. A Core Set Based Large Vector-Angular Region and Margin Approach for Novelty Detection

    Directory of Open Access Journals (Sweden)

    Jiusheng Chen

    2016-01-01

    Full Text Available A large vector-angular region and margin (LARM) approach is presented for novelty detection based on imbalanced data. The key idea is to construct the largest vector-angular region in the feature space to separate normal training patterns; meanwhile, maximize the vector-angular margin between the surface of this optimal vector-angular region and abnormal training patterns. In order to improve the generalization performance of LARM, the vector-angular distribution is optimized by maximizing the vector-angular mean and minimizing the vector-angular variance, which separates the normal and abnormal examples well. However, the inherent computation of the quadratic programming (QP) solver takes O(n³) training time and at least O(n²) space, which might be computationally prohibitive for large scale problems. By a (1+ε)- and (1-ε)-approximation algorithm, the core set based LARM algorithm is proposed for fast training of the LARM problem. Experimental results based on imbalanced datasets have validated the favorable efficiency of the proposed approach in novelty detection.

  8. Evaluation of digital soil mapping approaches with large sets of environmental covariates

    Science.gov (United States)

    Nussbaum, Madlene; Spiess, Kay; Baltensweiler, Andri; Grob, Urs; Keller, Armin; Greiner, Lucie; Schaepman, Michael E.; Papritz, Andreas

    2018-01-01

    The spatial assessment of soil functions requires maps of basic soil properties. Unfortunately, these are either missing for many regions or are not available at the desired spatial resolution or down to the required soil depth. The field-based generation of large soil datasets and conventional soil maps remains costly. Meanwhile, legacy soil data and comprehensive sets of spatial environmental data are available for many regions. Digital soil mapping (DSM) approaches relating soil data (responses) to environmental data (covariates) face the challenge of building statistical models from large sets of covariates originating, for example, from airborne imaging spectroscopy or multi-scale terrain analysis. We evaluated six approaches for DSM in three study regions in Switzerland (Berne, Greifensee, ZH forest) by mapping the effective soil depth available to plants (SD), pH, soil organic matter (SOM), effective cation exchange capacity (ECEC), clay, silt, gravel content and fine fraction bulk density for four soil depths (totalling 48 responses). Models were built from 300-500 environmental covariates by selecting linear models through (1) grouped lasso and (2) an ad hoc stepwise procedure for robust external-drift kriging (georob). For (3) geoadditive models we selected penalized smoothing spline terms by component-wise gradient boosting (geoGAM). We further used two tree-based methods: (4) boosted regression trees (BRTs) and (5) random forest (RF). Lastly, we computed (6) weighted model averages (MAs) from the predictions obtained from methods 1-5. Lasso, georob and geoGAM successfully selected strongly reduced sets of covariates (subsets of 3-6 % of all covariates). Differences in predictive performance, tested on independent validation data, were mostly small and did not reveal a single best method for 48 responses. Nevertheless, RF was often the best among methods 1-5 (28 of 48 responses), but was outcompeted by MA for 14 of these 28 responses. RF tended to over
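    As a flavor of covariate selection step (1) above, the following Python sketch (scikit-learn assumed; data are synthetic; a plain, ungrouped lasso stands in for the grouped lasso actually used) screens a few hundred covariates and keeps only those with non-zero coefficients, mimicking the strongly reduced covariate subsets reported above.

        # Hypothetical sketch of lasso-based covariate screening; not the authors' pipeline.
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n_sites, n_covariates = 200, 400              # soil observations x environmental covariates
        X = rng.normal(size=(n_sites, n_covariates))
        true_idx = [3, 17, 42]                        # only a few covariates truly matter here
        y = X[:, true_idx] @ np.array([2.0, -1.5, 1.0]) + rng.normal(scale=0.5, size=n_sites)

        # Cross-validated lasso shrinks most coefficients exactly to zero,
        # leaving a strongly reduced covariate subset (cf. the 3-6 % reported).
        model = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(model.coef_)
        print(f"selected {selected.size} of {n_covariates} covariates:", selected)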

  9. NEW APPROACHES TO THE IMPLEMENTATION OF THE MINING TECHNOLOGY OF DIMENSION STONE USING A CLOSE-SET DRILLING

    Directory of Open Access Journals (Sweden)

    S. V. Kalchuk

    2017-04-01

    Full Text Available An analysis of the current state of non-blasting monolith extraction technology was conducted, and the direction of further research was substantiated. Rational parameters of the close-set drilling technology for dimension stone were considered and justified. A solution is offered that consists of combined drilling (close-set drilling together with a line of holes), which increases the efficiency of stone splitting under its own weight. The calculation of the parameters of the scheme of partial underdrilling of a stone monolith, aimed at reducing the volume of drilling work, is given. Diagrams of tensile stress changes as a function of the specific splitting area were built. A rational correlation between the drilling parameters of the holes was established by solving the problems of loading a cantilever beam and of stress concentration via the Kirsch solution. The most important parameter for the implementation of this technology is the ratio of monolith height to its length. Engineering formulas are proposed for calculating the technological parameters of "gravitational-hole" stone splitting, and the configuration of a rough stone block for which this technology can be realized is determined. Creating close-set holes increases the maximal tensile stress at equal values of the specific splitting area ratio. It is established that the effective drilling depth of close-set holes is 43.2 % of the monolith height, and it is estimated that the combined drilling method will yield savings of 11.36 % on drilling operations.

  10. Armor Plate Surface Roughness Measurements

    National Research Council Canada - National Science Library

    Stanton, Brian; Coburn, William; Pizzillo, Thomas J

    2005-01-01

    ...., surface texture and coatings) that could become important at high frequency. We measure waviness and roughness of various plates to know the parameter range for smooth aluminum and rolled homogeneous armor (RHA...

  11. Developing a mental health care plan in a low resource setting: the theory of change approach.

    Science.gov (United States)

    Hailemariam, Maji; Fekadu, Abebaw; Selamu, Medhin; Alem, Atalay; Medhin, Girmay; Giorgis, Tedla Wolde; DeSilva, Mary; Breuer, Erica

    2015-09-28

    . The ToC approach was found to be an important component in the development of the MHCP and to encourage broad political support for the integration of mental health services into primary care. The method may have broader applicability in planning complex health interventions in low resource settings.

  12. Groundwater flow system stability in shield settings a multi-disciplinary approach

    International Nuclear Information System (INIS)

    Jensen, M.R.; Goodwin, B.W.

    2004-01-01

    Within the Deep Geologic Repository Technology Program (DGRTP), several Geoscience activities are focused on advancing the understanding of groundwater flow system evolution and geochemical stability in a Shield setting as affected by long-term climate change. A key aspect is developing confidence in predictions of groundwater flow patterns and residence times as they relate to the safety of a Deep Geologic Repository for used nuclear fuel waste. A specific focus in this regard has been placed on constraining redox stability and groundwater flow system dynamics during the Pleistocene. Attempts are being made to achieve this through a coordinated multi-disciplinary approach intent on: i) demonstrating coincidence between independent geo-scientific data; ii) improving the traceability of geo-scientific data and its interpretation within a conceptual descriptive model(s); iii) improving upon methods to assess and demonstrate robustness in flow domain prediction(s) given inherent flow domain uncertainties (i.e., spatial chemical/physical property distributions; boundary conditions) in time and space; and iv) improving awareness amongst geo-scientists as to the utility of various geo-scientific data in supporting a repository safety case. Coordinated by the DGRTP, elements of this program include the development of a climate-driven Laurentide ice-sheet model to constrain the understanding of the time rate of change in the boundary conditions most affecting the groundwater flow domain and its evolution. Further work has involved supporting WRA Paleo-hydrogeologic studies, in which constrained thermodynamic analyses, coupled with field studies to characterize the paragenesis of fracture infill mineralogy, are providing evidence to constrain understanding of the possible depth of penetration of oxygenated glacial recharge. In parallel, numerical simulations have been undertaken to illustrate aspects of groundwater flow system stability and evolution in a Shield setting. Such simulations

  13. Olkiluoto hydrogeochemistry. A 3-D modelling approach for sparse data set

    International Nuclear Information System (INIS)

    Luukkonen, A.; Partamies, S.; Pitkaenen, P.

    2003-07-01

    Olkiluoto at Eurajoki has been selected as a candidate site for the final disposal repository for the used nuclear waste produced in Finland. In the long-term safety assessment, one of the principal evaluation tools of safe disposal is hydrogeochemistry. For assessment purposes, Posiva Oy excavates an underground research laboratory (ONKALO) in the Olkiluoto bedrock. The complexity of the groundwater chemistry is characteristic of the Olkiluoto site and creates a demand to examine and visualise these hydrogeochemical features in 3-D together with the structural model. The hydrogeochemical features need to be studied not only under stable, undisturbed (pre-excavation) conditions but also in the disturbed system caused by the construction activities and open-tunnel conditions of the ONKALO. The present 3-D approach is based on integrating the independently and separately developed structural model with the results from the geochemical mixing calculations of the groundwater samples. For spatial geochemical regression purposes, the study area is divided into four primary sectors on the basis of the occurrence of the samples. The geochemical information within the four primary sectors is summed up in four sector centroids that capture the depth distributions of the different water types within each primary sector area. The geographic locations of the centroids are used for a secondary division of the study area into secondary sectors. With the aid of the secondary sectors, spatial regressions between the centroids can be calculated, and interpolation of water type fractions within the centroid volume becomes possible. Similarly, extrapolations outside the centroid volume are possible as well. The mixing proportions of the five detected water types at an arbitrary point in the modelling volume can be estimated by applying the four centroids and by using lateral linear regression. This study utilises two separate data sets: the older data set and the newer data set. The

  14. A pharmacology guided approach for setting limits on product-related impurities for bispecific antibody manufacturing.

    Science.gov (United States)

    Rajan, Sharmila; Sonoda, Junichiro; Tully, Timothy; Williams, Ambrose J; Yang, Feng; Macchi, Frank; Hudson, Terry; Chen, Mark Z; Liu, Shannon; Valle, Nicole; Cowan, Kyra; Gelzleichter, Thomas

    2018-04-13

    bFKB1 is a humanized bispecific IgG1 antibody, created by conjoining an anti-Fibroblast Growth Factor Receptor 1 (FGFR1) half-antibody to an anti-Klothoβ (KLB) half-antibody, using the knobs-into-holes strategy. bFKB1 acts as a highly selective agonist for the FGFR1/KLB receptor complex and is intended to ameliorate obesity-associated metabolic defects by mimicking the activity of the hormone FGF21. An important aspect of the biologics product manufacturing process is to establish meaningful product specifications regarding the tolerable levels of impurities that copurify with the drug product. The aim of the current study was to determine acceptable levels of product-related impurities for bFKB1. To determine the tolerable levels of these impurities, we dosed obese mice with bFKB1 enriched with various levels of either high molecular weight (HMW) impurities or anti-FGFR1-related impurities, and measured biomarkers for KLB-independent FGFR1 signaling. Here, we show that product-related impurities of bFKB1, in particular HMW impurities and anti-FGFR1-related impurities, when purposefully enriched, stimulate FGFR1 in a KLB-independent manner. By taking this approach, the tolerable levels of product-related impurities were successfully determined. Our study demonstrates a general pharmacology-guided approach to setting a product specification for a bispecific antibody whose homomultimer-related impurities could lead to undesired biological effects. Copyright © 2018. Published by Elsevier Inc.

  15. Analysis of accuracy in photogrammetric roughness measurements

    Science.gov (United States)

    Olkowicz, Marcin; Dąbrowski, Marcin; Pluymakers, Anne

    2017-04-01

    Regarding permeability, one of the most important features of shale gas reservoirs is the effective aperture of cracks opened during hydraulic fracturing, both propped and unpropped. In a propped fracture, the aperture is controlled mostly by proppant size and its embedment, and fracture surface roughness only has a minor influence. In contrast, in an unpropped fracture the aperture is controlled by the fracture roughness and the wall displacement. To measure fracture surface roughness, we have used the photogrammetric method since it is time- and cost-efficient. To estimate the accuracy of this method we compare the photogrammetric measurements with reference measurements taken with a White Light Interferometer (WLI). Our photogrammetric setup is based on a high-resolution 50 Mpx camera combined with a focus stacking technique. The first step for photogrammetric measurements is to determine the optimal camera positions and lighting. We compare multiple scans of one sample, taken with different settings of lighting and camera positions, with the reference WLI measurement. The second step is to perform measurements of all studied fractures with the parameters that produced the best results in the first step. To compare photogrammetric and WLI measurements we regridded both data sets onto a regular 10 μm grid and determined the best fit, followed by a calculation of the difference between the measurements. The first results of the comparison show that for 90 % of measured points the absolute vertical distance between WLI and photogrammetry is less than 10 μm, while the mean absolute vertical distance is 5 μm. This shows that our setup can be used for fracture roughness measurements in shales.
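    The regrid-and-difference step described above is easy to prototype. The following Python sketch (synthetic point clouds; numpy and scipy assumed; the rigid best-fit alignment step is omitted) interpolates two scattered height data sets onto a common regular 10 μm grid and reports the same two statistics quoted in the abstract.

        # Sketch of the regrid-and-difference comparison; surfaces are synthetic stand-ins.
        import numpy as np
        from scipy.interpolate import griddata

        rng = np.random.default_rng(1)

        def scattered_surface(n=5000):
            # stand-in for a photogrammetric or WLI point cloud (x, y in mm; z in µm)
            xy = rng.uniform(0.0, 1.0, size=(n, 2))
            z = 20 * np.sin(6 * xy[:, 0]) * np.cos(4 * xy[:, 1]) + rng.normal(0, 2, n)
            return xy, z

        xy_photo, z_photo = scattered_surface()
        xy_wli, z_wli = scattered_surface()

        # common regular grid with 10 µm (0.01 mm) spacing
        gx, gy = np.meshgrid(np.arange(0, 1.0, 0.01), np.arange(0, 1.0, 0.01))
        z1 = griddata(xy_photo, z_photo, (gx, gy), method="linear")
        z2 = griddata(xy_wli, z_wli, (gx, gy), method="linear")

        diff = np.abs(z1 - z2)
        valid = ~np.isnan(diff)                        # points outside the convex hull are NaN
        print("mean |dz| [µm]:", diff[valid].mean())
        print("share of points with |dz| < 10 µm:", (diff[valid] < 10).mean())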

  16. An intermittency model for predicting roughness induced transition

    Science.gov (United States)

    Ge, Xuan; Durbin, Paul

    2014-11-01

    An extended model for roughness-induced transition is proposed based on an intermittency transport equation for RANS modeling formulated in local variables. To predict roughness effects in the fully turbulent boundary layer, published boundary conditions for k and ω are used, which depend on the equivalent sand grain roughness height, and account for the effective displacement of wall distance origin. Similarly in our approach, wall distance in the transition model for smooth surfaces is modified by an effective origin, which depends on roughness. Flat plate test cases are computed to show that the proposed model is able to predict the transition onset in agreement with a data correlation of transition location versus roughness height, Reynolds number, and inlet turbulence intensity. Experimental data for a turbine cascade are compared with the predicted results to validate the applicability of the proposed model. Supported by NSF Award Number 1228195.

  17. Reflections on the conceptualization and operationalization of a set-theoretic approach to employee motivation and performance research

    Directory of Open Access Journals (Sweden)

    James Christopher Ryan

    2017-01-01

    Full Text Available The current commentary offers a reflection on the conceptualizations of Lee and Raschke's (2016) proposal for a set-theoretic approach to employee motivation and organizational performance. The commentary is informed by the current author's operationalization of set-theoretic research on employee motivation, which occurred contemporaneously to the work of Lee and Raschke. Observations on the state of current research on employee motivation, the development of motivation theory, and future directions of set-theoretic approaches to employee motivation and performance are offered.

  18. Cooperative Fuzzy Games Approach to Setting Target Levels of ECs in Quality Function Deployment

    Directory of Open Access Journals (Sweden)

    Zhihui Yang

    2014-01-01

    Full Text Available Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrative example is given to demonstrate the application and performance of the proposed approach.

  19. Setting Priorities for Monitoring and Managing Non-native Plants: Toward a Practical Approach.

    Science.gov (United States)

    Koch, Christiane; Jeschke, Jonathan M; Overbeck, Gerhard E; Kollmann, Johannes

    2016-09-01

    Land managers face the challenge of setting priorities in monitoring and managing non-native plant species, as resources are limited and not all non-natives become invasive. Existing frameworks that have been proposed to rank non-native species require extensive information on their distribution, abundance, and impact. This information is difficult to obtain and often not available for many species and regions. National watch or priority lists are helpful, but it is questionable whether they provide sufficient information for environmental management on a regional scale. We therefore propose a decision tree that ranks species based on simpler albeit robust information, but still provides reliable management recommendations. To test the decision tree, we collected and evaluated distribution data on non-native plants in highland grasslands of Southern Brazil. We compared the results with a national list from the Brazilian Invasive Species Database for the state to discuss advantages and disadvantages of the different approaches on a regional scale. Out of 38 non-native species found, only four were also present on the national list. If management relied solely on this list, many species that were identified as spreading based on the decision tree would go unnoticed. With the suggested scheme, it is possible to assign species to active management, to monitoring, or to further evaluation. While national lists are certainly important, management on a regional scale should employ additional tools that adequately consider the actual risk of non-natives becoming invasive.

  20. A risk-based approach to setting priorities in protecting bridges against terrorist attacks.

    Science.gov (United States)

    Leung, Maria; Lambert, James H; Mosenthal, Alexander

    2004-08-01

    This article presents an approach to the problem of terrorism risk assessment and management by adapting the framework of the risk filtering, ranking, and management method. The assessment is conducted at two levels: (1) the system level, and (2) the asset-specific level. The system-level risk assessment attempts to identify and prioritize critical infrastructures from an inventory of system assets. The definition of critical infrastructures offered by Presidential Decision Directive 63 was used to determine the set of attributes to identify critical assets, categorized according to national, regional, and local impact. An example application is demonstrated using information from the Federal Highway Administration National Bridge Inventory for the State of Virginia. Conversely, the asset-specific risk assessment performs an in-depth analysis of the threats and vulnerabilities of a specific critical infrastructure. An illustration is presented to offer some insights into risk scenario identification and prioritization, multiobjective evaluation of management options, and extreme-event analysis for critical infrastructure protection.

  1. Modeling radiative transfer with the doubling and adding approach in a climate GCM setting

    Science.gov (United States)

    Lacis, A. A.

    2017-12-01

    The nonlinear dependence of multiply scattered radiation on particle size, optical depth, and solar zenith angle makes accurate treatment of multiple scattering in the climate GCM setting problematic, due primarily to computational cost. The accurate methods of calculating multiple scattering that are available are computationally far too prohibitive for climate GCM applications. Utilization of two-stream-type radiative transfer approximations may be computationally fast enough, but at the cost of reduced accuracy. We describe here a parameterization of the doubling/adding method that is being used in the GISS climate GCM, which is an adaptation of the doubling/adding formalism configured to operate with a look-up table utilizing a single Gauss quadrature point with an extra-angle formulation. It is designed to closely reproduce the accuracy of full-angle doubling and adding for the multiple scattering effects of clouds and aerosols in a realistic atmosphere as a function of particle size, optical depth, and solar zenith angle. With an additional inverse look-up table, this single-Gauss-point doubling/adding approach can be adapted to model fractional cloud cover for any GCM grid-box in the independent pixel approximation as a function of the fractional cloud particle sizes, optical depths, and solar zenith angle dependence.
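    The core of such a parameterization is tabulation plus fast interpolation. The Python sketch below (toy reflectance function and invented grid ranges; numpy and scipy assumed; not the GISS code) shows the pattern: precompute a reflectance table over optical depth, particle size, and solar zenith angle once, then evaluate it cheaply inside the GCM loop.

        # Look-up-table pattern: tabulate once offline, interpolate cheaply at run time.
        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        def toy_reflectance(tau, reff, mu0):
            # placeholder for an accurate multiple-scattering solver (NOT doubling/adding)
            return (tau / (tau + 2.0 * mu0)) * np.exp(-0.05 * reff)

        taus = np.linspace(0.1, 100.0, 50)     # optical depth
        reffs = np.linspace(2.0, 30.0, 15)     # effective particle radius [µm]
        mu0s = np.linspace(0.05, 1.0, 20)      # cosine of solar zenith angle

        T, R, M = np.meshgrid(taus, reffs, mu0s, indexing="ij")
        table = toy_reflectance(T, R, M)       # built once, offline

        lut = RegularGridInterpolator((taus, reffs, mu0s), table)
        print(lut([[10.0, 12.0, 0.5]]))        # fast evaluation inside the GCM loop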

  2. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction

    Directory of Open Access Journals (Sweden)

    Michael M. Abdel-Sayed

    2016-11-01

    Full Text Available Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.

  3. RMP: Reduced-set matching pursuit approach for efficient compressed sensing signal reconstruction.

    Science.gov (United States)

    Abdel-Sayed, Michael M; Khattab, Ahmed; Abu-Elyazeed, Mohamed F

    2016-11-01

    Compressed sensing enables the acquisition of sparse signals at a rate that is much lower than the Nyquist rate. Compressed sensing initially adopted ℓ1 minimization for signal reconstruction, which is computationally expensive. Several greedy recovery algorithms have been recently proposed for signal reconstruction at a lower computational complexity compared to the optimal ℓ1 minimization, while maintaining a good reconstruction accuracy. In this paper, the Reduced-set Matching Pursuit (RMP) greedy recovery algorithm is proposed for compressed sensing. Unlike existing approaches which either select too many or too few values per iteration, RMP aims at selecting a sufficient number of correlation values per iteration, which improves both the reconstruction time and error. Furthermore, RMP prunes the estimated signal and hence excludes incorrectly selected values. The RMP algorithm achieves a higher reconstruction accuracy at a significantly lower computational complexity than existing greedy recovery algorithms. It is even superior to ℓ1 minimization in terms of the normalized time-error product, a new metric introduced to measure the trade-off between the reconstruction time and error. RMP's superior performance is illustrated with both noiseless and noisy samples.
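    The abstract does not spell out RMP's selection rule, so the Python sketch below shows the generic greedy-recovery skeleton this family of algorithms shares (essentially orthogonal matching pursuit, with a least-squares refit playing the role of the pruning step): correlate the residual with the dictionary, grow the support, refit, repeat. numpy only; all sizes are illustrative.

        # Generic greedy recovery skeleton (OMP-style); the published RMP differs in its selection rule.
        import numpy as np

        def greedy_recovery(A, y, sparsity):
            n = A.shape[1]
            support, residual = [], y.copy()
            for _ in range(sparsity):
                # select the dictionary column most correlated with the residual
                idx = int(np.argmax(np.abs(A.T @ residual)))
                if idx not in support:
                    support.append(idx)
                # least-squares refit on the support; acts as a pruning/correction step
                x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ x_s
            x = np.zeros(n)
            x[support] = x_s
            return x

        rng = np.random.default_rng(2)
        n, m, k = 256, 64, 5                   # ambient dimension, measurements, sparsity
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x_true = np.zeros(n)
        x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
        y = A @ x_true
        print("reconstruction error:", np.linalg.norm(greedy_recovery(A, y, k) - x_true))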

  4. Cooperative fuzzy games approach to setting target levels of ECs in quality function deployment.

    Science.gov (United States)

    Yang, Zhihui; Chen, Yizeng; Yin, Yunqiang

    2014-01-01

    Quality function deployment (QFD) can provide a means of translating customer requirements (CRs) into engineering characteristics (ECs) for each stage of product development and production. The main objective of QFD-based product planning is to determine the target levels of ECs for a new product or service. QFD is a breakthrough tool which can effectively reduce the gap between CRs and a new product/service. Even though there are conflicts among some ECs, the objective of developing a new product is to maximize the overall customer satisfaction. Therefore, there may be room for cooperation among ECs. A cooperative game framework combined with fuzzy set theory is developed to determine the target levels of the ECs in QFD. The key to developing the model is the formulation of the bargaining function. In the proposed methodology, the players are viewed as the membership functions of ECs to formulate the bargaining function. The solution for the proposed model is Pareto-optimal. An illustrative example is given to demonstrate the application and performance of the proposed approach.
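    To make the bargaining idea concrete, here is a minimal Python sketch (invented membership functions over one shared, normalized design variable; scipy assumed; the paper's actual bargaining formulation may differ): two conflicting ECs act as players, and the product of their memberships, a Nash-type bargaining objective, is maximized to pick a compromise target level.

        # Nash-type bargaining between two EC "players"; membership shapes are invented.
        import numpy as np
        from scipy.optimize import minimize_scalar

        def mu_cost(x):                        # satisfaction of a "low cost" EC, decreasing in x
            return float(np.clip(1.0 - x, 0.0, 1.0))

        def mu_performance(x):                 # satisfaction of a "high performance" EC, increasing in x
            return float(np.clip(x, 0.0, 1.0))

        # maximize the product-form bargaining function over the normalized target level x
        res = minimize_scalar(lambda x: -(mu_cost(x) * mu_performance(x)),
                              bounds=(0.0, 1.0), method="bounded")
        print("compromise target level:", res.x)   # 0.5 here: the balanced trade-off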

  5. The contact sport of rough surfaces

    Science.gov (United States)

    Carpick, Robert W.

    2018-01-01

    Describing the way two surfaces touch and make contact may seem simple, but it is not. Fully describing the elastic deformation of ideally smooth contacting bodies, under even low applied pressure, involves second-order partial differential equations and fourth-rank elastic constant tensors. For more realistic rough surfaces, the problem becomes a multiscale exercise in surface-height statistics, even before including complex phenomena such as adhesion, plasticity, and fracture. A recent research competition, the “Contact Mechanics Challenge” (1), was designed to test various approximate methods for solving this problem. A hypothetical rough surface was generated, and the community was invited to model contact with this surface with competing theories for the calculation of properties, including contact area and pressure. A supercomputer-generated numerical solution was kept secret until competition entries were received. The comparison of results (2) provides insights into the relative merits of competing models and even experimental approaches to the problem.

  6. Sub-Patch Roughness in Earthquake Rupture Investigations

    KAUST Repository

    Zielke, Olaf

    2016-02-13

    Fault geometric complexities exhibit fractal characteristics over a wide range of spatial scales (<µm to >km) and strongly affect the rupture process at corresponding scales. Numerical rupture simulations provide a framework to quantitatively investigate the relationship between a fault's roughness and its seismic characteristics. Fault discretization however introduces an artificial lower limit to roughness. Individual fault patches are planar, and sub-patch roughness (roughness at spatial scales below fault-patch size) is not incorporated. Does neglect of sub-patch roughness measurably affect the outcome of earthquake rupture simulations? We approach this question with a numerical parameter space investigation and demonstrate that sub-patch roughness significantly modifies the slip-strain relationship, a fundamental aspect of dislocation theory. Faults with sub-patch roughness induce less strain than their planar-fault equivalents at distances beyond the length of a slipping fault. We further provide regression functions that characterize the stochastic effect of sub-patch roughness.

  7. An approach and a tool for setting sustainable energy retrofitting strategies referring to the 2010 EP

    Directory of Open Access Journals (Sweden)

    Charlot-Valdieu, C.

    2011-10-01

    Full Text Available The 2010 EPBD asks for an economic and social analysis in order to preserve social equity and to promote innovation and building productivity. This is possible with a life cycle energy cost (LCEC) analysis, such as with the SEC (Sustainable Energy Cost) model, whose bottom-up approach begins with a building typology including inhabitants. The analysis of some representative buildings then includes the identification of a technico-economical optimum and energy retrofitting scenarios for each retrofitting programme, followed by an extrapolation to the whole building stock. Extrapolation to the whole building stock allows the strategy to be set up and the means needed to reach the objectives to be identified. SEC is a decision aid tool for optimising sustainable energy retrofitting strategies for buildings at territorial and patrimonial scales inside a sustainable development approach towards the factor 4. Various versions of the SEC model are now available for housing and for tertiary buildings.

    The 2010 European directive on the energy performance of buildings requires an economic and social analysis with the objective of preserving social equity, promoting innovation and strengthening productivity in construction. This is possible through the analysis of the extended global cost, and especially with the SEC model. The bottom-up analysis performed with SEC is based on a building/user typology and on the analysis of representative buildings: identification of the technico-economic optimum and elaboration of scenarios before extrapolating to the whole building stock. SEC is a decision aid tool for developing territorial or patrimonial energy retrofitting strategies. Several versions of the model exist: for residential buildings (single-family and multi-family, public and private) and for tertiary buildings.

  8. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting.

    Science.gov (United States)

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-02-28

    opportunities based on problem-solving activities and offer timely feedback/guidance to students. Yet in our study, this teaching style had its fair share of challenges, which were largely dependent on the use and management of technology. Despite these challenges, the Flipped Classroom proved to be a novel and effective teaching approach in the graduate-level setting.

  9. ASSET Queries: A Set-Oriented and Column-Wise Approach to Modern OLAP

    Science.gov (United States)

    Chatziantoniou, Damianos; Sotiropoulos, Yannis

    Modern data analysis has given birth to numerous grouping constructs and programming paradigms, way beyond the traditional group by. Applications such as data warehousing, web log analysis, streams monitoring and social networks understanding necessitated the use of data cubes, grouping variables, windows and MapReduce. In this paper we review the associated set (ASSET) concept and discuss its applicability in both continuous and traditional data settings. Given a set of values B, an associated set over B is just a collection of annotated data multisets, one for each b ∈ B. The goal is to efficiently compute aggregates over these data sets. An ASSET query consists of repeated definitions of associated sets and aggregates of these, possibly correlated, resembling a spreadsheet document. We review systems implementing ASSET queries both in continuous and persistent contexts and argue for associated sets' analytical abilities and optimization opportunities.
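    As a toy rendering of the definition above, the following Python fragment (invented schema and aggregate; standard library only) builds one annotated multiset per value b in a base set B and then computes spreadsheet-style aggregates over each.

        # One annotated multiset per b in B, then spreadsheet-style aggregates over each.
        from collections import defaultdict

        B = ["alice", "bob"]                                   # base set of values
        log = [("alice", 3), ("bob", 7), ("alice", 5), ("bob", 1), ("alice", 4)]

        assoc = defaultdict(list)                              # the associated sets
        for user, duration in log:
            if user in B:
                assoc[user].append(duration)

        for b in B:
            values = assoc[b]
            print(b, "count:", len(values), "avg:", sum(values) / len(values))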

  10. Rough – Granular Computing knowledge discovery models

    Directory of Open Access Journals (Sweden)

    Mohammed M. Eissa

    2016-11-01

    Full Text Available The medical domain has become one of the most important areas of research, owing to the huge amounts of medical information available about the symptoms of diseases and how to distinguish between them to diagnose correctly. Knowledge discovery models play a vital role in the refinement and mining of medical indicators to help medical experts settle treatment decisions. This paper introduces four hybrid Rough-Granular Computing knowledge discovery models based on Rough Sets Theory, Artificial Neural Networks, Genetic Algorithm and Rough Mereology Theory. A comparative analysis of various knowledge discovery models that use different knowledge discovery techniques for data pre-processing, reduction, and data mining supports medical experts in extracting the main medical indicators, reducing the misdiagnosis rates and improving decision-making for medical diagnosis and treatment. The proposed models utilized two medical datasets: a Coronary Heart Disease dataset and a Hepatitis C Virus dataset. The main purpose of this paper was to explore and evaluate the proposed models, based on the Granular Computing methodology, for knowledge extraction according to different evaluation criteria for the classification of medical datasets. Another purpose was to enhance the frame of KDD processes for supervised learning using the Granular Computing methodology.
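    The Rough Sets building block underlying such models is easy to state in code. The Python sketch below (toy symptom table with invented values) computes the lower and upper approximations of a decision class from an information table; the gap between them is the boundary region that expresses diagnostic uncertainty.

        # Lower/upper approximations of a decision class from a toy information table.
        def approximations(records, condition_attrs, target_set):
            # group objects into indiscernibility classes on the condition attributes
            classes = {}
            for obj_id, row in records.items():
                key = tuple(row[a] for a in condition_attrs)
                classes.setdefault(key, set()).add(obj_id)
            lower, upper = set(), set()
            for cls in classes.values():
                if cls <= target_set:          # class entirely inside the target: certain
                    lower |= cls
                if cls & target_set:           # class overlapping the target: possible
                    upper |= cls
            return lower, upper

        records = {
            1: {"fever": "high", "cough": "yes"},
            2: {"fever": "high", "cough": "yes"},
            3: {"fever": "low",  "cough": "no"},
            4: {"fever": "low",  "cough": "yes"},
        }
        diagnosed = {1, 4}                     # objects labeled with the disease
        low, up = approximations(records, ["fever", "cough"], diagnosed)
        print("lower:", low, "upper:", up)     # boundary = upper - lower is the uncertain region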

  11. How Iranian Medical Trainees Approach their Responsibilities in Clinical Settings; A Grounded Theory Research

    Directory of Open Access Journals (Sweden)

    Omid Asemani

    2015-09-01

    Full Text Available Background: It seems we are now experiencing "responsibility problems" among medical trainees (MTs) and some of those recently graduated from medical schools in Iran. Training responsible professionals has always been one of the main concerns of medical educators. Nevertheless, there is a dearth of research in the literature on "responsibility", especially from the medical education point of view. Therefore, the present study was carried out with the aim of presenting a theory-based framework for understanding how MTs approach their responsibilities in educational settings. Method: This qualitative study was conducted at Shiraz University of Medical Sciences (SUMS) using the grounded theory methodology. 15 MTs and 10 clinical experts and professional nurses were purposefully chosen as participants. Data were analyzed using the methodology suggested by Corbin and Strauss (1998). Results: "Try to find acceptance toward expectations", "try to be committed to meeting the expectations" and "try to cope with unacceptable expectations" were the three main categories extracted from the research data. Abstractly, the main objective of these processes was "to preserve the integrity of student identity", which was also the core category of this research. Moreover, it was found that, in practice, "responsibility" is considerably influenced by many positive and negative contextual and intervening conditions. Conclusion: "Acceptance" was the most decisive variable affecting MTs' responsibility. Therefore, investigating the "process of acceptance" with regard to the involved contextual and intervening conditions might help medical educators correctly identify and effectively control the negative factors, and reinforce the constructive ones, that affect the concept of responsibility in MTs.

  12. A parameter tree approach to estimating system sensitivities to parameter sets

    International Nuclear Information System (INIS)

    Jarzemba, M.S.; Sagar, B.

    2000-01-01

    A post-processing technique for determining relative system sensitivity to groups of parameters and system components is presented. It is assumed that an appropriate parametric model is used to simulate system behavior using Monte Carlo techniques and that a set of realizations of system output(s) is available. The objective of our technique is to analyze the input vectors and the corresponding output vectors (that is, post-process the results) to estimate the relative sensitivity of the output to input parameters (taken singly and as a group) and thereby rank them. This technique differs from design-of-experiments techniques in that a partitioning of the parameter space is not required before the simulation. A tree structure (which looks similar to an event tree) is developed to better explain the technique. Each limb of the tree represents a particular combination of parameters or a combination of system components. For convenience, and to distinguish it from the event tree, we call it the parameter tree. To construct the parameter tree, the samples of input parameter values are treated as either a '+' or a '-' based on whether the sampled parameter value is greater or less than a specified branching criterion (e.g., mean, median, percentile of the population). The corresponding system outputs are also segregated into similar bins. Partitioning the first parameter into a '+' or a '-' bin creates the first level of the tree containing two branches. At the next level, realizations associated with each first-level branch are further partitioned into two bins using the branching criterion on the second parameter, and so on until the tree is fully populated. Relative sensitivities are then inferred from the number of samples associated with each branch of the tree. The parameter tree approach is illustrated by applying it to a number of preliminary simulations of the proposed high-level radioactive waste repository at Yucca Mountain, NV. Using a
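    The binning scheme is simple to prototype. The Python sketch below (synthetic two-parameter model; numpy only; not the authors' repository simulations) treats each Monte Carlo sample as '+' or '-' according to the median of each input, then compares mean output across the four branches of the resulting two-level parameter tree.

        # Median-based '+'/'-' binning of Monte Carlo samples and branch-wise output means.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 10_000
        X = rng.uniform(size=(n, 2))                                 # two sampled input parameters
        y = 3.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.1, n)    # system output realizations

        signs = X > np.median(X, axis=0)       # branching criterion: the median
        for s0 in (True, False):               # level 1: parameter 0
            for s1 in (True, False):           # level 2: parameter 1
                branch = (signs[:, 0] == s0) & (signs[:, 1] == s1)
                label = ("+" if s0 else "-") + ("+" if s1 else "-")
                print(label, "n =", int(branch.sum()),
                      "mean output =", round(float(y[branch].mean()), 3))
        # The spread of branch means across the parameter-0 split dominates,
        # ranking it as the more influential input.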

  13. Stochastic control with rough paths

    International Nuclear Information System (INIS)

    Diehl, Joscha; Friz, Peter K.; Gassiat, Paul

    2017-01-01

    We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of Lyons. It is shown that the value function satisfies a HJB type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically involve anticipating stochastic analysis. We make the link to old work of Davis and Burstein (Stoch Stoch Rep 40:203–256, 1992) and then prove a continuous-time generalization of Rogers' duality formula [SIAM J Control Optim 46:1116–1132, 2007]. The generic case of controlled volatility is seen to give trivial duality bounds, and explains the focus in Burstein–Davis' (and this) work on controlled drift. Our study of controlled rough differential equations also relates to work of Mazliak and Nourdin (Stoch Dyn 08:23, 2008).

  14. Heat transfer from rough surfaces

    International Nuclear Information System (INIS)

    Dalle Donne, M.

    1977-01-01

    Artificial roughness is often used in nuclear reactors to improve the thermal performance of the fuel elements. Although these are made up of clusters of rods, the experiments to measure the heat transfer and friction coefficients of roughness are performed with single rods contained in smooth tubes. This work illustrates a new transformation method to obtain data applicable to reactor fuel elements from these annulus experiments. New experimental friction data are presented for ten rods, each with a different artificial roughness made up of two-dimensional rectangular ribs. For each rod four tests have been performed, each in a different outer smooth tube. For two of these rods, each for two different outer tubes, heat transfer data are also given. The friction and heat transfer data, transformed with the present method, are correlated by simple equations. In the paper, these equations are applied to a case typical of a Gas Cooled Fast Reactor fuel element. (orig.) [de

  15. Stochastic control with rough paths

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Joscha [University of California San Diego (United States); Friz, Peter K., E-mail: friz@math.tu-berlin.de [TU & WIAS Berlin (Germany); Gassiat, Paul [CEREMADE, Université Paris-Dauphine, PSL Research University (France)

    2017-04-15

    We study a class of controlled differential equations driven by rough paths (or rough path realizations of Brownian motion) in the sense of Lyons. It is shown that the value function satisfies a HJB type equation; we also establish a form of the Pontryagin maximum principle. Deterministic problems of this type arise in the duality theory for controlled diffusion processes and typically involve anticipating stochastic analysis. We make the link to old work of Davis and Burstein (Stoch Stoch Rep 40:203–256, 1992) and then prove a continuous-time generalization of Rogers' duality formula [SIAM J Control Optim 46:1116–1132, 2007]. The generic case of controlled volatility is seen to give trivial duality bounds, and explains the focus in Burstein–Davis' (and this) work on controlled drift. Our study of controlled rough differential equations also relates to work of Mazliak and Nourdin (Stoch Dyn 08:23, 2008).

  16. The influence of roughness and obstacle on wind power map

    International Nuclear Information System (INIS)

    Abas Ab Wahab; Mohd Fadhil Abas; Mohd Hafiz Ismail

    2006-01-01

    In the development of wind energy in Malaysia, the need for a wind power map of Peninsular Malaysia has arisen. The map is needed to help in determining the potential areas where low-wind-speed wind turbines could operate optimally. In establishing the wind power map, the effects of roughness and obstacles have been investigated. Wind data from 24 meteorological stations around the country have been utilized in conjunction with the respective local roughness and obstacles. Two sets of wind power maps have been developed, i.e., wind power maps with and without roughness and obstacles. These two sets of wind power maps exhibit significant differences in wind power values, especially in inland areas, where the wind power map without roughness and obstacles gives much lower values than the one with roughness and obstacles. This paper outlines the process of establishing the two sets of wind power maps and discusses the influence of roughness and obstacles based on the results obtained.
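    The abstract does not state the formulas used; a standard way to fold local roughness into wind power mapping is the logarithmic wind profile with a roughness length z0, sketched below in Python (illustrative values; sea-level air density of 1.225 kg/m³ assumed). Rougher terrain gives stronger shear, so the speed extrapolated to hub height from the same 10 m observation changes markedly with z0.

        # Log-profile extrapolation from a 10 m measurement to 50 m for several roughness lengths.
        import numpy as np

        def wind_at_height(v_ref, z_ref, z, z0):
            """Logarithmic wind profile with roughness length z0 (neutral conditions)."""
            return v_ref * np.log(z / z0) / np.log(z_ref / z0)

        v10 = 3.0                              # measured mean wind speed at 10 m [m/s]
        for z0, terrain in [(0.0002, "open water"), (0.03, "open land"), (0.5, "suburban")]:
            v50 = wind_at_height(v10, 10.0, 50.0, z0)
            power_density = 0.5 * 1.225 * v50**3          # rho/2 * v^3 [W/m^2]
            print(f"{terrain:10s} z0={z0:<7} v50={v50:.2f} m/s  P={power_density:.0f} W/m2")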

  17. Effects of capillary condensation in adhesion between rough surfaces.

    Science.gov (United States)

    Wang, Jizeng; Qian, Jin; Gao, Huajian

    2009-10-06

    Experiments on the effects of humidity in adhesion between rough surfaces have shown that the adhesion energy remains constant below a critical relative humidity (RHcr) and then abruptly jumps to a higher value at RHcr before approaching its upper limit at 100% relative humidity. A model based on a hierarchical rough surface topography is proposed, which quantitatively explains the experimental observations and predicts two threshold RH values, RHcr and RHdry, which define three adhesion regimes: (1) RH < RHdry, (2) RHdry < RH < RHcr, and (3) RH > RHcr, where water menisci freely form and spread along the interface between the rough surfaces.

  18. The accountability for reasonableness approach to guide priority setting in health systems within limited resources

    DEFF Research Database (Denmark)

    Byskov, Jens; Marchal, Bruno; Maluka, Stephen

    2014-01-01

    : relevance, publicity, appeals, and enforcement, which facilitate agreement on priority-setting decisions and gain support for their implementation. This paper focuses on the assessment of AFR within the project REsponse to ACcountable priority setting for Trust in health systems (REACT). METHODS...... of the potential of AFR in supporting priority-setting and other decision-making processes in health systems to achieve better agreed and more sustainable health improvements linked to a mutual democratic learning with potential wider implications....

  19. Radiative transfer model for contaminated rough slabs.

    Science.gov (United States)

    Andrieu, François; Douté, Sylvain; Schmidt, Frédéric; Schmitt, Bernard

    2015-11-01

    We present a semi-analytical model to simulate the bidirectional reflectance distribution function (BRDF) of a rough slab layer containing impurities. This model has been optimized for fast computation in order to analyze massive hyperspectral data by a Bayesian approach. We designed it for planetary surface ice studies but it could be used for other purposes. It estimates the bidirectional reflectance of a rough slab of material containing inclusions, overlying an optically thick medium (a semi-infinite medium or a stratified medium, for instance granular material). The inclusions are assumed to be close to spherical and constituted of any type of material other than the ice matrix: any other type of ice, mineral, or even bubbles, defined by their optical constants. We assume a low roughness and we consider geometrical optics conditions. This model is thus applicable for inclusions larger than the considered wavelength. The scattering on the inclusions is assumed to be isotropic. This model has a fast computational implementation and thus is suitable for high-resolution hyperspectral data analysis.

  20. Traceability of optical roughness measurements on polymers

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Gasparin, Stefania; Carli, Lorenzo

    2008-01-01

    An experimental investigation on surface roughness measurements on plastics was carried out with the objective of developing a methodology to achieve traceability of optical instruments. A ground steel surface and its replicas were measured using a stylus instrument, an optical auto-focus instrument, and a confocal microscope. Using stylus measurements as reference, parameter settings on the optical instruments were optimised and residual noise reduced by low pass filtering. Traceability of optical measurements could be established with expanded measuring uncertainties (k=2) of 4% for the auto-focus instrument and 10% for the confocal microscope.
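    The noise-reduction step mentioned above is straightforward to prototype. The Python sketch below (synthetic profile; scipy assumed; a plain Gaussian kernel stands in for the standardized ISO profile filter, and the cutoff value is hypothetical) low-pass filters a measured profile before evaluating Ra, so residual instrument noise does not inflate the roughness value.

        # Low-pass filter a noisy profile, then evaluate Ra on raw vs. filtered data.
        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        dx = 0.5                               # sampling step [µm]
        x = np.arange(0, 4000, dx)             # 4 mm evaluation length
        rng = np.random.default_rng(4)
        profile = 2.0 * np.sin(2 * np.pi * x / 100) + rng.normal(0, 0.5, x.size)

        cutoff_um = 2.5                        # hypothetical short-wavelength cutoff
        filtered = gaussian_filter1d(profile, sigma=cutoff_um / dx)

        for name, z in [("raw", profile), ("filtered", filtered)]:
            z0 = z - z.mean()                  # mean-line removal
            print(name, "Ra =", round(float(np.abs(z0).mean()), 3), "µm")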

  1. Wave scattering from statistically rough surfaces

    CERN Document Server

    Bass, F G; ter Haar, D

    2013-01-01

    Wave Scattering from Statistically Rough Surfaces discusses the complications in radio physics and hydro-acoustics in relation to wave transmission under settings seen in nature. Some of the topics that are covered include radar and sonar, the effect of variations in topographic relief or ocean waves on the transmission of radio and sound waves, the reproduction of radio waves from the lower layers of the ionosphere, and the oscillations of signals within the earth-ionosphere waveguide. The book begins with some fundamental idea of wave transmission theory and the theory of random processes a

  2. Maxwell’s Equations on Cantor Sets: A Local Fractional Approach

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    2013-01-01

    Full Text Available Maxwell’s equations on Cantor sets are derived from the local fractional vector calculus. It is shown that Maxwell’s equations on Cantor sets in a fractal bounded domain give efficiency and accuracy for describing the fractal electric and magnetic fields. Local fractional differential forms of Maxwell’s equations on Cantor sets in the Cantorian and Cantor-type cylindrical coordinates are obtained. Maxwell’s equations on Cantor sets with local fractional operators are the first step towards a unified theory of Maxwell’s equations for the dynamics of cold dark matter.

  3. Surface roughness influences on the behaviour of flow inside microchannels

    Science.gov (United States)

    Farias, M. H.; Castro, C. S.; Garcia, D. A.; Henrique, J. S.

    2018-03-01

    This work discusses the influence of surface roughness on the behavior of liquids flowing inside microchannels. By measuring the flow profile using the micro-PIV technique, the flow of water inside two rectangular microchannels of different wall roughness and in a circular smooth microchannel was studied. Comparisons among the experimental results show that a metrological approach concerning the surface characteristics of microdevices is required to ensure the reliability of measurements in flow analyses for microfluidic processes.

  4. Role of roughness parameters on the tribology of randomly nano-textured silicon surface.

    Science.gov (United States)

    Gualtieri, E; Pugno, N; Rota, A; Spagni, A; Lepore, E; Valeri, S

    2011-10-01

    This experimental work is oriented to contribute to the knowledge of the relationship between surface roughness parameters and the tribological properties of lubricated surfaces; it is well known that these surface properties are strictly related, but a complete comprehension of such correlations is still far from being reached. For this purpose, a mechanical polishing procedure was optimized in order to induce different, but well controlled, morphologies on Si(100) surfaces. The use of different abrasive papers and slurries enabled the formation of a wide spectrum of topographical irregularities (from the submicro- to the nano-scale) and a broad range of surface profiles. An AFM-based morphological and topographical campaign was carried out to characterize each rough silicon surface through a set of parameters. Samples were subsequently water-lubricated and tribologically characterized through ball-on-disk tribometer measurements. The wettability of each surface was also investigated by measuring the water droplet contact angle, which revealed a hydrophilic character for all the surfaces, even if no clear correlation with roughness emerged. Nevertheless, this observation brings input to the purpose, as it allows us to exclude that differences in surface profile affect lubrication. It is thus possible to link the dynamic friction coefficient of rough Si samples exclusively to an appropriate set of surface roughness parameters that exhaustively describe both height amplitude variations (Ra, Rdq) and profile periodicity (Rsk, Rku, Ic), which influence asperity-asperity interactions and hydrodynamic lift in different ways. For this reason they cannot be treated separately, but with a dependent approach, through which it was possible to explain even counterintuitive results: the unexpected decrease of the friction coefficient with increasing Ra is justified by a more substantial increase of the kurtosis Rku.
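    For reference, the parameters named above are simple to compute from a sampled profile. The Python sketch below (synthetic profile; numpy only; definitions follow the usual ISO 4287 forms) evaluates Ra, the RMS roughness Rq used to normalize the shape parameters, the skewness Rsk, the kurtosis Rku, and the RMS slope Rdq.

        # Amplitude and shape parameters of a sampled profile (ISO 4287-style definitions).
        import numpy as np

        rng = np.random.default_rng(5)
        dx = 0.1                                        # sampling step [µm]
        z = rng.normal(0, 0.2, 20_000)                  # height deviations from the mean line [µm]

        Ra = np.abs(z).mean()                           # arithmetic mean deviation
        Rq = np.sqrt((z**2).mean())                     # RMS deviation
        Rsk = (z**3).mean() / Rq**3                     # skewness: asymmetry of the height distribution
        Rku = (z**4).mean() / Rq**4                     # kurtosis: peakedness of the height distribution
        Rdq = np.sqrt((np.gradient(z, dx)**2).mean())   # RMS slope of the profile

        print(f"Ra={Ra:.3f}  Rq={Rq:.3f}  Rsk={Rsk:.3f}  Rku={Rku:.3f}  Rdq={Rdq:.3f}")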

  5. A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings

    Science.gov (United States)

    Lee, Guemin; Lewis, Daniel M.

    2008-01-01

    The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error…

  6. Goodness of Fit of Skills Assessment Approaches: Insights from Patterns of Real vs. Synthetic Data Sets

    Science.gov (United States)

    Beheshti, Behzad; Desmarais, Michel C.

    2015-01-01

    This study investigates the issue of the goodness of fit of different skills assessment models using both synthetic and real data. Synthetic data is generated from the different skills assessment models. The results show wide differences of performances between the skills assessment models over synthetic data sets. The set of relative performances…

  7. Subcutaneous Fentanyl Administration: A Novel Approach for Pain Management in a Rural and Suburban Prehospital Setting.

    Science.gov (United States)

    Lebon, Johann; Fournier, Francis; Bégin, François; Hebert, Denise; Fleet, Richard; Foldes-Busque, Guilaume; Tanguay, Alain

    2016-01-01

    To determine the feasibility, safety, and effectiveness of the subcutaneous route of fentanyl administration by Basic Life Support-Emergency Medical Technicians (BLS-EMT) in a rural and suburban region, with the support of an online pain management medical control center. Retrospective study of patients who received subcutaneous fentanyl and were transported by BLS-EMT to the emergency department (ED) of an academic hospital between July 1, 2013 and January 1, 2014, inclusively. Fentanyl orders were obtained from emergency physicians via an online medical control (OLMC) center. Effectiveness was defined by changes in pain scores 15 minutes, 30 minutes, and 45+ minutes after initial fentanyl administration. Safety was evaluated by measuring vital signs, Ramsay sedation scores, and adverse events subsequent to fentanyl administration. Feasibility was defined as successful fentanyl administration by BLS-EMT. SPSS-20 was used for descriptive statistics, and independent t-tests and Mann-Whitney U tests were used to determine inter- and intra-group differences based on transport time. Two hundred and eighty-eight patients (288; 14 to 93 years old) with pain scores ≥7 were eligible for the study. Of the 284 (98.6%) who successfully received subcutaneous fentanyl, 35 had missing records or data, and 249 (86.5%) were included in analyses. Average pain score pre-fentanyl was 8.9 ± 1.1. Patients under 70 years old received more fentanyl than those ≥70 years old (1.4 ± 0.3 vs. 0.8 ± 0.2 mcg/kg). Pain scores decreased significantly following fentanyl administration, the proportion of patients achieving pain relief increased significantly over time, and only one patient (0.4%) had a Ramsay sedation score above 3. Prehospital subcutaneous fentanyl administration by BLS-EMT with the support of an OLMC center is a safe and feasible approach to pain relief in prehospital settings, and is not associated with major adverse events. Effectiveness subsequent to subcutaneous fentanyl administration is characterized by a decrease in pain over the course of transport to the ED. Further studies are needed to

  8. Investigation of the potential of fuzzy sets and related approaches for treating uncertainties in radionuclide transfer predictions

    International Nuclear Information System (INIS)

    Shaw, W.; Grindrod, P.

    1989-01-01

    This document encompasses two main items. The first consists of a review of four aspects of fuzzy sets, namely, the general framework, the role of expert judgment, mathematical and computational aspects, and present applications. The second consists of the application of fuzzy-set theory to simplified problems in radionuclide migration, with comparisons between fuzzy and probabilistic approaches, treated both analytically and computationally. A new approach to fuzzy differential equations is presented, and applied to simple ordinary and partial differential equations. It is argued that such fuzzy techniques represent a viable alternative to probabilistic risk assessment, for handling systems subject to uncertainties

  9. Does Surface Roughness Amplify Wetting?

    Czech Academy of Sciences Publication Activity Database

    Malijevský, Alexandr

    2014-01-01

    Roč. 141, č. 18 (2014), s. 184703 ISSN 0021-9606 R&D Projects: GA ČR GA13-09914S Institutional support: RVO:67985858 Keywords : density functional theory * wetting * roughness Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 2.952, year: 2014

  10. Calibration of surface roughness standards

    DEFF Research Database (Denmark)

    Thalmann, R.; Nicolet, A.; Meli, F.

    2016-01-01

    Five surface texture standards of different types were circulated, and on each of the standards several roughness parameters according to the standard ISO 4287 had to be determined. 32 out of 395 individual results were not consistent with the reference value. After some corrective actions...

  11. Cluster Analysis-Based Approaches for Geospatiotemporal Data Mining of Massive Data Sets for Identification of Forest Threats

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Richard T [ORNL; Hoffman, Forrest M [ORNL; Kumar, Jitendra [ORNL; HargroveJr., William Walter [USDA Forest Service

    2011-01-01

    We investigate methods for geospatiotemporal data mining of multi-year land surface phenology data (250 m Normalized Difference Vegetation Index (NDVI) values derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) in this study) for the conterminous United States (CONUS) as part of an early warning system for detecting threats to forest ecosystems. The approaches explored here are based on k-means cluster analysis of this massive data set, which provides a basis for defining the bounds of the expected or normal phenological patterns that indicate healthy vegetation at a given geographic location. We briefly describe the computational approaches we have used to make cluster analysis of such massive data sets feasible, describe approaches we have explored for distinguishing between normal and abnormal phenology, and present some examples in which we have applied these approaches to identify various forest disturbances in the CONUS.
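
    A minimal sketch of the clustering idea described above, using scikit-learn's k-means on synthetic NDVI-like trajectories. The two invented phenology classes, the distance-to-centroid "abnormality" score, and all sizes are illustrative assumptions, not the study's pipeline.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        n_pixels, n_epochs = 10_000, 46           # e.g. 46 NDVI composites per year
        t = np.linspace(0, 2 * np.pi, n_epochs)
        # two synthetic phenology classes: strong vs. weak seasonal amplitude
        ndvi = np.vstack([
            0.5 + 0.3 * np.sin(t) + 0.02 * rng.standard_normal((n_pixels // 2, n_epochs)),
            0.4 + 0.1 * np.sin(t) + 0.02 * rng.standard_normal((n_pixels // 2, n_epochs)),
        ])

        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(ndvi)
        # distance of each pixel trajectory to its cluster centre: large values
        # would flag "abnormal" phenology relative to the cluster prototype
        dist = np.linalg.norm(ndvi - km.cluster_centers_[km.labels_], axis=1)
        print(dist.mean(), dist.max())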

  12. Human roughness perception and possible factors effecting roughness sensation.

    Science.gov (United States)

    Aktar, Tugba; Chen, Jianshe; Ettelaie, Rammile; Holmes, Melvin; Henson, Brian

    2017-06-01

    Surface texture sensation is significant for business success, in particular for the solid surfaces of most materials, including foods. The mechanisms of roughness perception are still unknown, especially under different conditions such as lubricants with varying viscosities, different temperatures, or different force loads during observation of the surface. This work aims to determine the effect of those unknown factors through sensory tests on 62 healthy participants. Roughness sensation of the fingertip was tested under different lubricants, including water and diluted syrup solutions, at room temperature (25 °C) and body temperature (37 °C) using simple pair-wise comparison to observe the just noticeable difference threshold and perception levels. Additionally, the force load applied during roughness observation was tested with a pair-wise ranking method to illustrate its possible effect on human sensation. The results showed that humans' capability of roughness discrimination is reduced with increased viscosity of the lubricant, whereas the influence of temperature was not found to be significant. Moreover, an increase in the applied force load increased the sensitivity of roughness discrimination. The observed effects of the applied factors were also used for estimating the oral sensation of texture during eating. These findings are significant for our fundamental understanding of texture perception, and for the development of new food products with controlled textural features. Texture discrimination ability, more specifically roughness discrimination capability, is a significant factor for preference and appreciation of a wide range of materials, including food, furniture, or fabric. To explore the mechanism of sensation capability through tactile senses, it is necessary to identify the relevant factors and define characteristics that dominate the process involved. The results that will be obtained under these principles

  13. A robust algorithm to solve the signal setting problem considering different traffic assignment approaches

    Directory of Open Access Journals (Sweden)

    Adacher Ludovica

    2017-12-01

    Full Text Available In this paper we extend a stochastic discrete optimization algorithm so as to tackle the signal setting problem. Signalized junctions represent critical points of an urban transportation network, and the efficiency of their traffic signal settings influences the overall network performance. Since road congestion usually takes place at or close to junction areas, an improvement in signal settings contributes to improving travel times, drivers’ comfort, fuel consumption efficiency, pollution and safety. In a traffic network, the signal control strategy affects the travel time on the roads and influences drivers’ route choice behavior. The paper presents an algorithm for signal setting optimization of signalized junctions in a congested road network. The objective function used in this work is a weighted sum of the delays caused by the signalized intersections. We propose an iterative procedure to solve the problem by alternately updating signal settings based on fixed flows and traffic assignment based on fixed signal settings. To show the robustness of our method, we consider two different assignment methods: one based on user equilibrium assignment, well established in the literature as well as in practice, and the other based on a platoon simulation model with vehicular flow propagation and spill-back. Our optimization algorithm is also compared with others well known in the literature for this problem. The surrogate method (SM), particle swarm optimization (PSO) and the genetic algorithm (GA) are compared for a combined problem of global optimization of signal settings and traffic assignment (GOSSTA). Numerical experiments on a real test network are reported.
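
    The alternating scheme described above can be sketched on a toy problem. The snippet below is my own minimal construction, not the paper's algorithm: one signalized junction with two routes, a crude reciprocal-capacity delay term, a logit-style assignment for fixed settings, and a grid search over the green split for fixed flows.

        import numpy as np

        demand, cap = 1000.0, 1800.0                    # total demand and saturation flow (veh/h)

        def delay(flow, green):                         # crude reciprocal-capacity delay term
            return 1.0 / max(1e-6, green * cap - flow)

        def assign(green):                              # logit-style split of demand over two routes
            d = np.array([delay(demand / 2, green), delay(demand / 2, 1 - green)])
            w = np.exp(-d / d.mean())
            return demand * w / w.sum()

        green, flows = 0.5, assign(0.5)
        for _ in range(30):
            flows = assign(green)                       # assignment for fixed settings
            grid = np.linspace(0.1, 0.9, 81)            # settings update for fixed flows
            green = min(grid, key=lambda g: flows[0] * delay(flows[0], g)
                                            + flows[1] * delay(flows[1], 1 - g))
        print(round(float(green), 3), np.round(flows, 1))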

  14. A hybrid approach of gene sets and single genes for the prediction of survival risks with gene expression data.

    Science.gov (United States)

    Seok, Junhee; Davis, Ronald W; Xiao, Wenzhong

    2015-01-01

    Accumulated biological knowledge is often encoded as gene sets, collections of genes associated with similar biological functions or pathways. The use of gene sets in the analysis of high-throughput gene expression data has been intensively studied and applied in clinical research. However, the main interest remains in finding modules of biological knowledge, or corresponding gene sets, significantly associated with disease conditions. Risk prediction from censored survival times using gene sets has not been well studied. In this work, we propose a hybrid method that uses both single-gene and gene-set information together to predict patient survival risks from gene expression profiles. In the proposed method, gene sets provide context-level information that is poorly reflected by single genes. Complementarily, single genes help to supplement incomplete information of gene sets due to our imperfect biomedical knowledge. In tests over multiple data sets of cancer and trauma injury, the proposed method showed robust and improved performance compared with conventional approaches using either single genes or gene sets alone. Additionally, we examined the prediction result in the trauma injury data, and showed that the modules of biological knowledge used in the prediction by the proposed method were highly interpretable in biology. A wide range of survival prediction problems in clinical genomics is expected to benefit from the use of biological knowledge.
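
    A hedged sketch of the general idea, not the authors' code: gene-set scores (here simply the mean expression over each set) are combined with selected single genes as covariates of a Cox proportional hazards model. It assumes the third-party lifelines package; the gene sets, column names, and survival data are invented toys.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(1)
        n, g = 200, 50
        expr = pd.DataFrame(rng.standard_normal((n, g)),
                            columns=[f"gene{i}" for i in range(g)])
        gene_sets = {"setA": ["gene0", "gene1", "gene2"],    # hypothetical gene sets
                     "setB": ["gene10", "gene11"]}

        X = pd.DataFrame({name: expr[genes].mean(axis=1)     # gene-set level scores
                          for name, genes in gene_sets.items()})
        X[["gene20", "gene21"]] = expr[["gene20", "gene21"]] # selected single genes
        X["time"] = rng.exponential(10, n)                   # toy survival times
        X["event"] = rng.integers(0, 2, n)                   # toy event indicator

        cph = CoxPHFitter().fit(X, duration_col="time", event_col="event")
        print(cph.summary[["coef", "p"]])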

  15. Skin friction measurements of mathematically generated roughness in the transitionally- to fully-rough regimes

    Science.gov (United States)

    Barros, Julio; Schultz, Michael; Flack, Karen

    2016-11-01

    Engineering systems are affected by surface roughness, which causes an increase in drag leading to significant performance penalties. One important question is how to predict frictional drag purely from surface topography. Although significant progress has been made in recent years, this has proven to be challenging. The present work takes a systematic approach by generating surface roughness in which surface parameters, such as rms height and skewness, can be controlled. Surfaces were produced using the random Fourier modes method with enforced power-law spectral slopes, and were manufactured using high-resolution 3D printing. In this study three surfaces with constant amplitude and varying spectral slope P were investigated (P = -0.5, -1.0, -1.5). Skin-friction measurements were conducted in a high-Reynolds-number turbulent channel flow facility, covering a wide range of Reynolds numbers from the hydraulically smooth to the fully rough regime. Results show that some long-wavelength roughness scales do not contribute significantly to the frictional drag, thus highlighting the need for filtering in the calculation of surface statistics. Upon high-pass filtering, it was found that k_rms is highly correlated with the measured k_s.
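
    The surface generation step lends itself to a short sketch. The snippet below draws a 1-D random-phase Fourier profile with an enforced power-law amplitude spectrum and rescales it to a target rms height; the normalization and the 1-D simplification are my assumptions, whereas the study used manufactured 3-D surfaces.

        import numpy as np

        def rough_profile(n, P, rms_target, rng):
            k = np.fft.rfftfreq(n)[1:]                 # positive wavenumbers
            amp = k ** P                               # enforced power-law spectral slope
            phase = rng.uniform(0, 2 * np.pi, k.size)  # random phases
            spec = np.concatenate([[0], amp * np.exp(1j * phase)])
            z = np.fft.irfft(spec, n)
            return z * rms_target / np.sqrt(np.mean(z ** 2))

        rng = np.random.default_rng(0)
        for P in (-0.5, -1.0, -1.5):                   # slopes studied above
            z = rough_profile(4096, P, rms_target=1.0, rng=rng)
            print(P, np.sqrt(np.mean(z ** 2)))         # verify rms height is 1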

  16. The geometric approach to sets of ordinary differential equations and Hamiltonian dynamics

    Science.gov (United States)

    Estabrook, F. B.; Wahlquist, H. D.

    1975-01-01

    The calculus of differential forms is used to discuss the local integration theory of a general set of autonomous first order ordinary differential equations. Geometrically, such a set is a vector field V in the space of dependent variables. Integration consists of seeking associated geometric structures invariant along V: scalar fields, forms, vectors, and integrals over subspaces. It is shown that to any field V can be associated a Hamiltonian structure of forms if, when dealing with an odd number of dependent variables, an arbitrary equation of constraint is also added. Families of integral invariants are an immediate consequence. Poisson brackets are isomorphic to Lie products of associated CT-generating vector fields. Hamilton's variational principle follows from the fact that the maximal regular integral manifolds of a closed set of forms must include the characteristics of the set.

  17. Advancing the skill set of SCM graduates – An active learning approach

    NARCIS (Netherlands)

    Scholten, Kirstin; Dubois, Anna

    2017-01-01

    Purpose Drawing on a novel approach to active learning in supply chain management, the purpose of this paper is to describe and analyze how the students’ learning process as well as their learning outcomes are influenced by the learning and teaching contexts. Design/methodology/approach A case study

  18. Counsellor Use of the Adlerian-Dreikurs Approach with Parents in the School Setting.

    Science.gov (United States)

    Van Hesteren, Frank

    1979-01-01

    Involvement with parents constitutes an important dimension of the elementary school counselor's role. The Adlerian-Dreikurs approach is described in terms of its underlying theory and the means by which it can be implemented by school counselors. Certain advantages of using the approach in the schools are also discussed. (Author)

  19. Setting Objectives of Value Education in Constructivist Approach in the Light of Revised Blooms Taxonomy (RBT)

    Science.gov (United States)

    Paleeri, Sankaranarayanan

    2015-01-01

    Transaction methods and approaches to value education have to change from lecturing to process-based methods, in line with the development of the constructivist approach. Process-based methods provide creative interpretation and active participation on the students' side. Teachers have to organize suitable activities to transact values through process…

  20. Stochastic Discount Factor Approach to International Risk-Sharing:A Robustness Check of the Bilateral Setting

    NARCIS (Netherlands)

    Hadzi-Vaskov, M.; Kool, C.J.M.

    2007-01-01

    This paper presents a robustness check of the stochastic discount factor approach to international (bilateral) risk-sharing given in Brandt, Cochrane, and Santa-Clara (2006). We demonstrate two main inherent limitations of the bilateral SDF approach to international risk-sharing. First, the discount

  2. Multiscale Analysis of the Roughness Effect on Lubricated Rough Contact

    OpenAIRE

    Demirci , Ibrahim; MEZGHANI , Sabeur; YOUSFI , Mohammed; El Mansori , Mohamed

    2014-01-01

    Determining friction is as equally essential as determining the film thickness in the lubricated contact, and is an important research subject. Indeed, reduction of friction in the automotive industry is important for both the minimization of fuel consumption as well as the decrease in the emissions of greenhouse gases. However, the progress in friction reduction has been limited by the difficulty in understanding the mechanism of roughness effects on friction. It was observed that micro-surf...

  3. Simplified Analytic Approach of Pole-to-Pole Faults in MMC-HVDC for AC System Backup Protection Setting Calculation

    Directory of Open Access Journals (Sweden)

    Tongkun Lan

    2018-01-01

    Full Text Available AC (alternating current) system backup protection setting calculation is an important basis for ensuring the safe operation of power grids. With the increasing integration of modular multilevel converter based high voltage direct current (MMC-HVDC) into power grids, the AC system backup protection setting calculation has become a big challenge, as the MMC-HVDC lacks fault self-clearance capability under pole-to-pole faults. This paper focuses on pole-to-pole fault analysis for the AC system backup protection setting calculation. The principles of pole-to-pole fault analysis are discussed first, according to the standard for AC system protection setting calculation. Then, the influence of fault resistance on the fault process is investigated. A simplified analytic approach to pole-to-pole faults in MMC-HVDC for the AC system backup protection setting calculation is proposed. In the proposed approach, the derived expressions of the fundamental frequency current are applicable under arbitrary fault resistance. The accuracy of the proposed approach is demonstrated by PSCAD/EMTDC (Power Systems Computer-Aided Design/Electromagnetic Transients including DC) simulations.

  4. ReMashed – Recommendation Approaches for Mash-Up Personal Learning Environments in Formal and Informal Learning Settings

    NARCIS (Netherlands)

    Drachsler, Hendrik; Pecceu, Dries; Arts, Tanja; Hutten, Edwin; Rutledge, Lloyd; Van Rosmalen, Peter; Hummel, Hans; Koper, Rob

    2009-01-01

    Drachsler, H., Peccau, D., Arts, T., Hutten, E., Rutledge, L., Van Rosmalen, P., Hummel, H. G. K., & Koper, R. (2009). ReMashed – Recommendation Approaches for Mash-Up Personal Learning Environments in Formal and Informal Learning Settings. Presentation at the 2nd Workshop Mash-Up Personal Learning

  6. Molecular basis sets - a general similarity-based approach for representing chemical spaces.

    Science.gov (United States)

    Raghavendra, Akshay S; Maggiora, Gerald M

    2007-01-01

    A new method, based on generalized Fourier analysis, is described that utilizes the concept of "molecular basis sets" to represent chemical space within an abstract vector space. The basis vectors in this space are abstract molecular vectors. Inner products among the basis vectors are determined using an ansatz that associates molecular similarities between pairs of molecules with their corresponding inner products. Moreover, the fact that similarities between pairs of molecules are, in essentially all cases, nonzero implies that the abstract molecular basis vectors are nonorthogonal; but since the similarity of a molecule with itself is unity, the molecular vectors are normalized to unity. A symmetric orthogonalization procedure, which optimally preserves the character of the original set of molecular basis vectors, is used to construct appropriate orthonormal basis sets. Molecules can then be represented, in general, by sets of orthonormal "molecule-like" basis vectors within a proper Euclidean vector space. However, the dimension of the space can become quite large. Thus, the work presented here assesses the effect of basis set size on a number of properties, including the average squared error and average norm of molecular vectors represented in the space: the results clearly show the expected reduction in average squared error and increase in average norm as the basis set size is increased. Several distance-based statistics are also considered. These include the distribution of distances and their differences with respect to basis sets of differing size, and several comparative distance measures such as Spearman rank correlation and Kruskal stress. All of the measures show that, even though the dimension can be high, the chemical spaces they represent nonetheless behave in a well-controlled and reasonable manner. Other abstract vector spaces analogous to that described here can also be constructed, providing that the appropriate inner products can be directly
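
    The symmetric orthogonalization step mentioned above is standard (Löwdin) linear algebra and can be sketched in a few lines: given a similarity matrix S playing the role of the Gram matrix of the molecular basis vectors, S^(-1/2) yields coefficients for an orthonormal set that stays as close as possible to the original vectors. The 3x3 similarity matrix below is a made-up example, not data from the paper.

        import numpy as np

        S = np.array([[1.0, 0.6, 0.3],
                      [0.6, 1.0, 0.5],
                      [0.3, 0.5, 1.0]])         # pairwise molecular similarities (Gram matrix)

        w, V = np.linalg.eigh(S)                # S is symmetric positive definite
        C = V @ np.diag(w ** -0.5) @ V.T        # S^(-1/2): Löwdin orthogonalization coefficients
        # the new vectors' Gram matrix C.T @ S @ C is the identity, i.e. orthonormal
        print(np.round(C.T @ S @ C, 10))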

  7. Hybrid Wavelet De-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series

    Science.gov (United States)

    WANG, D.; Wang, Y.; Zeng, X.

    2017-12-01

    Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, Wavelet De-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influences at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP-error Back Propagation, MLP-Multilayer Perceptron and RBF-Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate the model performance. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when the extreme events are included within a time series.
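
    The WD half of the hybrid can be illustrated compactly (RSPA itself is not implemented here). The sketch assumes the PyWavelets package; the db4 wavelet, decomposition level, and universal soft threshold are common illustrative choices, not necessarily those of the paper.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        t = np.arange(512)
        series = np.sin(2 * np.pi * t / 64) + 0.3 * rng.standard_normal(t.size)

        coeffs = pywt.wavedec(series, "db4", level=4)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from finest scale
        thr = sigma * np.sqrt(2 * np.log(series.size))       # universal threshold
        denoised_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                         for c in coeffs[1:]]
        denoised = pywt.waverec(denoised_coeffs, "db4")      # reconstructed, denoised series
        print(np.std(series - denoised))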

  8. A rough multi-factor model of electricity spot prices

    International Nuclear Information System (INIS)

    Bennedsen, Mikkel

    2017-01-01

    We introduce a new continuous-time mathematical model of electricity spot prices which accounts for the most important stylized facts of these time series: seasonality, spikes, stochastic volatility, and mean reversion. Empirical studies have found a possible fifth stylized fact, roughness, and our approach explicitly incorporates this into the model of the prices. Our setup generalizes the popular Ornstein–Uhlenbeck-based multi-factor framework and allows us to perform statistical tests to distinguish between an Ornstein–Uhlenbeck-based model and a rough model. Further, through the multi-factor approach we account for seasonality and spikes before estimating, and making inference on, the degree of roughness. This is novel in the literature, and we present simulation evidence showing that these precautions are crucial for accurate estimation. Lastly, we estimate our model on recent data from six European energy exchanges and find statistical evidence of roughness in five out of six markets. As an application of our model, we show how, in these five markets, a rough component improves short-term forecasting of the prices. - Highlights: • Statistical modeling of electricity spot prices • Multi-factor decomposition • Roughness • Electricity price forecasting
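
    For illustration only, the snippet below Euler-simulates a stripped-down two-factor version of such a model: a slow and a fast mean-reverting Ornstein–Uhlenbeck component plus a deterministic seasonal term. The rough (fractional) component and the estimation procedure of the paper are omitted, and all parameter values are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        n, dt = 365, 1 / 365
        kappa = np.array([2.0, 50.0])           # slow and fast mean-reversion speeds
        sigma = np.array([0.3, 1.0])            # factor volatilities
        X = np.zeros((2, n))
        for i in range(1, n):
            dW = rng.standard_normal(2) * np.sqrt(dt)
            X[:, i] = X[:, i - 1] - kappa * X[:, i - 1] * dt + sigma * dW

        t = np.arange(n) * dt
        season = 0.2 * np.sin(2 * np.pi * t)    # deterministic annual seasonality
        log_price = np.log(40.0) + season + X.sum(axis=0)
        print(np.exp(log_price[:5]))            # first few simulated spot prices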

  9. Data Set for the manuscript entitled, "Sample Processing Approach for Detection of Ricin in Surface Samples."

    Data.gov (United States)

    U.S. Environmental Protection Agency — Figure. This dataset is associated with the following publication: Shah, S., S. Kane, A.M. Erler, and T. Alfaro. Sample Processing Approach for Detection of Ricin in...

  10. A Cross-Cultural Approach to Speech-Act-Sets: The Case of Apologies

    Directory of Open Access Journals (Sweden)

    Válková Silvie

    2014-07-01

    Full Text Available The aim of this paper is to contribute to the validity of recent research into speech act theory by advocating the idea that with some of the traditional speech acts, their overt language manifestations that emerge from corpus data remind us of ritualised scenarios of speech-act-sets rather than single acts, with configurations of core and peripheral units reflecting the socio-cultural norms of the expectations and culture-bound values of a given language community. One of the prototypical manifestations of speech-act-sets, apologies, will be discussed to demonstrate a procedure which can be used to identify, analyse, describe and cross-culturally compare the validity of speech-act-set theory and provide evidence of its relevance for studying the English-Czech interface in this particular domain of human interaction.

  11. Matched pairs approach to set theoretic solutions of the Yang-Baxter equation

    International Nuclear Information System (INIS)

    Gateva-Ivanova, T.; Majid, S.

    2005-08-01

    We study set-theoretic solutions (X,r) of the Yang-Baxter equations on a set X in terms of the induced left and right actions of X on itself. We give a characterization of involutive square-free solutions in terms of cyclicity conditions. We characterise general solutions in terms of an induced matched pair of unital semigroups S(X,r) and construct (S, r_S) from the matched pair. Finally, we study extensions of solutions in terms of matched pairs of their associated semigroups. We also prove several general results about matched pairs of unital semigroups of the required type, including iterated products S ⋈ S ⋈ S underlying the proof that r_S is a solution, and extensions (S ⋈ T, r_{S ⋈ T}). Examples include a general 'double' construction (S ⋈ S, r_{S ⋈ S}) and some concrete extensions, their actions and graphs based on small sets. (author)
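
    A set-theoretic solution can be checked mechanically. The self-contained sketch below (my own illustration, not from the paper) verifies the braid form of the Yang-Baxter equation by brute force for a simple permutation solution r(x, y) = (sigma(y), tau(x)) with tau the inverse of sigma, a case where the equation is known to hold because sigma and tau commute.

        from itertools import product

        X = range(3)
        sigma = {0: 1, 1: 0, 2: 2}               # a permutation of X
        tau = {v: k for k, v in sigma.items()}   # its inverse (commutes with sigma)

        def r(x, y):                             # candidate solution on X x X
            return sigma[y], tau[x]

        def r12(t):                              # apply r to slots 1, 2 of a triple
            a, b = r(t[0], t[1])
            return (a, b, t[2])

        def r23(t):                              # apply r to slots 2, 3 of a triple
            a, b = r(t[1], t[2])
            return (t[0], a, b)

        # braid form of the set-theoretic Yang-Baxter equation
        ok = all(r12(r23(r12(t))) == r23(r12(r23(t))) for t in product(X, repeat=3))
        print("YBE holds:", ok)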

  12. A set-valued approach to FDI and FTC: Theory and implementation issues

    DEFF Research Database (Denmark)

    Rosa, Paulo Andre Nobre; Casau, Pedro; Silvestre, Carlos

    2012-01-01

    A complete methodology to design robust Fault Detection and Isolation (FDI) filters and Fault Tolerant Control (FTC) schemes for Linear Time-Varying (LTV) systems is proposed. The paper takes advantage of the recent advances in model invalidation using Set-Valued Observers (SVOs) that led...

  13. Selective Mutism: A Team Approach to Assessment and Treatment in the School Setting

    Science.gov (United States)

    Ponzurick, Joan M.

    2012-01-01

    The school nurse plays a pivotal role in the assessment and treatment of selective mutism (SM), a rare disorder found in elementary school children. Due to anxiety, children with SM do not speak in uncomfortable situations, primarily the school setting. Diagnosis of SM is often missed in the formative years because the child does speak at home.…

  14. An individual-based modeling approach to simulating recreation use in wilderness settings

    Science.gov (United States)

    Randy Gimblett; Terry Daniel; Michael J. Meitner

    2000-01-01

    Landscapes protect biological diversity and provide unique opportunities for human-nature interactions. Too often, these desirable settings suffer from extremely high visitation. Given the complexity of social, environmental and economic interactions, resource managers need tools that provide insights into the cause and effect relationships between management actions...

  15. Chaotic logic gate: A new approach in set and design by genetic algorithm

    International Nuclear Information System (INIS)

    Beyki, Mahmood; Yaghoobi, Mahdi

    2015-01-01

    How to reconfigure a logic gate is an attractive subject for different applications. Chaotic systems can yield a wide variety of patterns, and here we use this feature to produce a logic gate. This feature forms the basis for designing a dynamical computing device that can be rapidly reconfigured to become any desired logical operator. Such a logic gate, which can be reconfigured into any logical operator when placed in its chaotic state, is called a chaotic logic gate. The reconfiguration is realized by setting the parameter values of the chaotic logic gate. In this paper we present mechanisms for producing a logic gate based on the logistic map in its chaotic state, and a genetic algorithm is used to set the parameter values. We use three well-known selection methods from genetic algorithms: tournament selection, roulette wheel selection, and random selection. The results show that tournament selection is the best method for setting the parameter values. Furthermore, the genetic algorithm proves a powerful tool for setting the parameter values of the chaotic logic gate
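
    A conceptual sketch of such a gate follows; the parameters here were found by hand rather than by a genetic algorithm, so they are illustrative stand-ins for the GA-optimized values the paper describes. Inputs perturb the initial state, the logistic map is iterated in its chaotic regime, and the output is read by thresholding; changing only the threshold switches the realized operator.

        def chaotic_gate(i1, i2, x0, delta, threshold, r=4.0, n_iter=1):
            x = x0 + delta * (i1 + i2)     # encode the two binary inputs in the state
            for _ in range(n_iter):
                x = r * x * (1 - x)        # logistic map in its chaotic regime
            return int(x > threshold)      # threshold read-out

        # hand-picked (not GA-optimized) parameters: x0 = 0.1, delta = 0.2
        for name, thr in [("AND", 0.9), ("OR", 0.5)]:
            table = [chaotic_gate(a, b, 0.1, 0.2, thr)
                     for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))]
            print(name, table)             # AND -> [0, 0, 0, 1], OR -> [0, 1, 1, 1]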

  16. A fuzzy set approach to assess the predictive accuracy of land use simulations

    NARCIS (Netherlands)

    van Vliet, J.; Hagen-Zanker, A.; Hurkens, J.; van van Delden, H.

    2013-01-01

    The predictive accuracy of land use models is frequently assessed by comparing two data sets: the simulated land use map and the observed land use map at the end of the simulation period. A common statistic for this is Kappa, which expresses the agreement between two categorical maps, corrected for
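
    For reference, the Kappa statistic mentioned above can be computed directly from the cell-by-cell cross-tabulation of the two maps, as in the toy sketch below (random maps with 80% forced agreement; this is the plain statistic, not the authors' fuzzy extension).

        import numpy as np

        rng = np.random.default_rng(0)
        observed = rng.integers(0, 4, size=(100, 100))             # 4 land use classes
        simulated = np.where(rng.random((100, 100)) < 0.8, observed,
                             rng.integers(0, 4, size=(100, 100)))  # 80% forced agreement

        k = 4
        conf = np.zeros((k, k))
        for o, s in zip(observed.ravel(), simulated.ravel()):
            conf[o, s] += 1                     # cross-tabulation of categories
        conf /= conf.sum()
        po = np.trace(conf)                     # observed agreement
        pe = conf.sum(axis=1) @ conf.sum(axis=0)  # agreement expected by chance
        kappa = (po - pe) / (1 - pe)
        print(round(kappa, 3))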

  17. The surface roughness and planetary boundary layer

    Science.gov (United States)

    Telford, James W.

    1980-03-01

    This paper shows how the surface roughness parameter, z_0, can be calculated for an ideal case of a random distribution of vertical cylinders of the same height when roughness elements of widely varying sizes are combined. To treat a water surface with various sized waves, such an approach, modified to treat the surface by the superposition of various sized roughness elements, is likely to be helpful. Such a theory is particularly desirable when such a surface is changing, as the ocean does when the wind varies. The result derived here is 0.118/(a_s C_D) < z_0 < 0.463/(a_s C_D(u*)). It applies to cylinders of radius r and number m per unit boundary area, where a_s = 2rm is the area of the roughness elements, per unit area perpendicular to the wind, per unit distance downwind. The drag coefficient of the cylinders is C_D. The smaller value of z_0 is for large Reynolds numbers, where the larger scale turbulence at the surface dominates and the drag coefficient is about constant; here the flow between the cylinders is intermittent. When the Reynolds number is small enough, the intermittent nature of the turbulence is reduced, and this results in the average velocity at each level determining the drag. In this second case the larger limit for z_0 is more appropriate.
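
    A worked example of these bounds, with arbitrary input values; C_D(u*) in the upper bound depends on the friction velocity, but is taken as a constant here for illustration.

        r, m, C_D = 0.05, 2.0, 1.0       # 5 cm cylinder radius, 2 cylinders per m^2
        a_s = 2 * r * m                  # element area per unit area, per unit distance downwind
        z0_low = 0.118 / (a_s * C_D)     # large-Reynolds-number limit
        z0_high = 0.463 / (a_s * C_D)    # low-Reynolds-number limit, C_D(u*) held constant
        print(f"{z0_low:.2f} m < z0 < {z0_high:.2f} m")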

  18. Goal setting in psychotherapy: the relevance of approach and avoidance goals for treatment outcome.

    Science.gov (United States)

    Wollburg, Eileen; Braukhaus, Christoph

    2010-07-01

    The present study is the first aimed at investigating the influence of goal definition on treatment outcome in a sample of depressed patients. Data from 657 inpatients admitted to a psychosomatic clinic in Germany being treated in a cognitive-behavioral therapy program were analyzed. Treatment goals were identified as either approach or avoidance, and the sample was classified accordingly. Patients who identified approach goals only were placed in the approach group, and those who identified at least one avoidance goal were placed in the avoidance group. Results showed that framing goals using avoidance terms was associated with less symptomatic improvement but did not affect goal attainment. Findings from this research should be utilized in practice not only for process management such as individual treatment planning but also to control outcome quality. Furthermore, goal definition should be considered as a control variable in research on depression.

  19. Nuclear-electronic orbital reduced explicitly correlated Hartree-Fock approach: Restricted basis sets and open-shell systems

    International Nuclear Information System (INIS)

    Brorsen, Kurt R.; Sirjoosingh, Andrew; Pak, Michael V.; Hammes-Schiffer, Sharon

    2015-01-01

    The nuclear electronic orbital (NEO) reduced explicitly correlated Hartree-Fock (RXCHF) approach couples select electronic orbitals to the nuclear orbital via Gaussian-type geminal functions. This approach is extended to enable the use of a restricted basis set for the explicitly correlated electronic orbitals and an open-shell treatment for the other electronic orbitals. The working equations are derived and the implementation is discussed for both extensions. The RXCHF method with a restricted basis set is applied to HCN and FHF− and is shown to agree quantitatively with results from RXCHF calculations with a full basis set. The number of many-particle integrals that must be calculated for these two molecules is reduced by over an order of magnitude with essentially no loss in accuracy, and the reduction factor will increase substantially for larger systems. Typically, the computational cost of RXCHF calculations with restricted basis sets will scale in terms of the number of basis functions centered on the quantum nucleus and the covalently bonded neighbor(s). In addition, the RXCHF method with an odd number of electrons that are not explicitly correlated to the nuclear orbital is implemented using a restricted open-shell formalism for these electrons. This method is applied to HCN+, and the nuclear densities are in qualitative agreement with grid-based calculations. Future work will focus on the significance of nonadiabatic effects in molecular systems and the further enhancement of the NEO-RXCHF approach to accurately describe such effects

  1. Setting water quality criteria in China: approaches for developing species sensitivity distributions for metals and metalloids.

    Science.gov (United States)

    Liu, Yuedan; Wu, Fengchang; Mu, Yunsong; Feng, Chenglian; Fang, Yixiang; Chen, Lulu; Giesy, John P

    2014-01-01

    Both nonparametric and parametric approaches were used to construct species sensitivity distributions (SSDs) for use in ecological risk assessments. Based on toxicity to representative aquatic species and typical water contaminants of metals and metalloids in China, nonparametric methods based on the bootstrap were statistically superior to the parametric curve-fitting approaches. Knowing the SSDs for each targeted species might help in selecting efficient indicator species for water quality monitoring. The species evaluated herein showed varying sensitivity to the different chemical treatments used in constructing the SSDs. For example, D. magna was more sensitive than most species to most chemical treatments, whereas D. rerio was sensitive to Hg and Pb but tolerant to Zn. HC5 values derived for the pollutants in this study for protecting Chinese species differed from those published by the USEPA. Such differences may result from differences in geographical conditions and biota between China and the United States. Thus, the degree of protection desired for aquatic organisms should be formulated to fit local conditions. For approach selection, we recommend that all approaches be considered and the most suitable ones chosen, based on the practical information needs of the researcher (viz., species composition, species sensitivity, and geological characteristics of aquatic habitats), since risk assessments usually focus on certain substances, species, or monitoring sites. We used Tai Lake as a typical freshwater lake in China to assess the risk of metals and metalloids to the aquatic species, and calculated hazard quotients for the metals and metalloids found in the water of this lake. Results indicated that Hg posed the greatest ecological risk of the studied contaminants to aquatic species. Based on the MEC and HC5 derived from SSDs by nonparametric and parametric approaches together, the risk levels of metals
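
    Both SSD routes can be sketched with invented toxicity data: a parametric log-normal fit whose 5th percentile gives HC5 (the concentration hazardous to 5% of species), and a nonparametric bootstrap of the empirical 5th percentile. Neither the data nor the exact fitting choices below come from the study.

        import numpy as np
        from scipy import stats

        tox = np.array([12., 25., 40., 55., 80., 120., 150., 300., 420., 900.])  # toy toxicity values, ug/L

        # parametric: fit a log-normal SSD and take its 5th percentile
        mu, sd = np.log(tox).mean(), np.log(tox).std(ddof=1)
        hc5_param = np.exp(mu + sd * stats.norm.ppf(0.05))

        # nonparametric: bootstrap the empirical 5th percentile
        rng = np.random.default_rng(0)
        boot = [np.percentile(rng.choice(tox, tox.size, replace=True), 5)
                for _ in range(2000)]
        hc5_boot = np.median(boot)

        print(round(hc5_param, 1), round(float(hc5_boot), 1))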

  2. Ultrasonic backward radiation on painted rough interface

    International Nuclear Information System (INIS)

    Kwon, Yong Gyu; Yoon, Seok Soo; Kwon, Sung Duck

    2002-01-01

    The angular dependence (profile) of backscattered ultrasound was measured for steel and brass specimens with periodic surface roughness (1-71 μm). Backward radiation showed a more linear dependence than the normal profile. Direct amplitude increased, and averaging amplitude decreased, with surface roughness. Painting treatment improved the linearity of direct backward radiation below a roughness of 0.03. Scholte and Rayleigh-like waves were observed in the spectrum of averaging backward radiation on the periodically rough surface. Painting on a periodically rough surface could be used to remove the interface mode effect caused by periodic roughness.

  3. An FMEA analysis using grey theory and grey rough sets

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-10-01

    Full Text Available This paper presents a hybrid method for detecting the most important failure items as well as the most effective alternative strategy to cope with possible events. The proposed model uses the grey technique to rank the alternatives and the FMEA technique to find important faults. The implementation of the proposed method is illustrated for an existing example from the literature. The results show that the proposed model is capable of detecting the most troublesome problems with fuzzy logic and of finding the most important solution strategy using the FMEA technique.

  4. Rough set based decision rule generation to find behavioural ...

    Indian Academy of Sciences (India)

    L Sumalatha

    Experiments were conducted on data from a Portuguese banking institution. From the proposed … economic, banking [7, 8], pharmacology [9], and text mining [10]. In this paper we … The data attributes include Age (numeric) and Job (categorical: admin, unemployed, management, …).

  5. Signal Processing for Nondifferentiable Data Defined on Cantor Sets: A Local Fractional Fourier Series Approach

    Directory of Open Access Journals (Sweden)

    Zhi-Yong Chen

    2014-01-01

    Full Text Available From the signal processing point of view, nondifferentiable data defined on Cantor sets are investigated in this paper. The local fractional Fourier series is used to process the signals, which are local fractional continuous functions. Our results can be viewed as significant extensions of previously known results for the Fourier series in the framework of local fractional calculus. Some examples are given to illustrate the efficiency and implementation of the present method.

  6. Set-Based Approach to Design under Uncertainty and Applications to Shaping a Hydrofoil

    Science.gov (United States)

    2016-01-01

    given requirements. This notion of set-based design was pioneered by Toyota and adopted by the U.S. Navy [1]. It responds to most real-world design... in such a way that all desired shape variations are allowed on both the suction and pressure sides. Figure 2 gives a schematic representation of the... of the hydrofoil. The control points of the pressure side have been changed in different ways to ensure the overall hydrodynamic performance

  7. Tools and approaches for simplifying serious games development in educational settings

    OpenAIRE

    Calvo, Antonio; Rotaru, Dan C.; Freire, Manuel; Fernandez-Manjon, Baltasar

    2016-01-01

    Serious Games can benefit from the commercial video games industry by taking advantage of current development tools. However, the economics and requirements of serious games and commercial games are very different. In this paper, we describe the factors that impact the total cost of ownership of serious games used in educational settings, review the specific requirements of games used as learning material, and analyze the different development tools available in the industry highlighting thei...

  8. New approaches to wipe sampling methods for antineoplastic and other hazardous drugs in healthcare settings.

    Science.gov (United States)

    Connor, Thomas H; Smith, Jerome P

    2016-09-01

    At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug safe-handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and provide a shorter response time than the classical analytical techniques now in use.

  9. Set Based PLM Implementation, a Modular Approach to PLM Process Knowledge, Management and Automation

    NARCIS (Netherlands)

    Koomen, Sebastiaan Pieter; Ríos, J.; Bernard, A.; Bouras, A.; Foufou, S.

    2017-01-01

    In many cases PLM implementations are halted in the first phases of larger projects. On average, implementation projects take longer and cost more than planned, and not all goals are achieved despite modern software implementation methods like Agile or Scrum. This paper proposes another approach, in

  10. A narrative approach to studying the diversification of inquiry learning across instructional settings

    NARCIS (Netherlands)

    Rutten, N.P.G.; van Joolingen, W.R.; Haverkamp-Hermans, Gerdi G.N.; Bogner, Franz X.; Kretschmer, Thomas; Stracke, Christian M.; Lameras, Petros; Chioccariello, Augusto; Doran, Rosa; Tiemann, Rüdiger; Kastrinogiannis, Timotheos; Maravic, Jasminka; Crotty, Yvonne; Kelly, Claire; Markaki, Vassiliki; Lazoudis, Angelos; Koivula, Jani; Polymatidis, Dimitris

    2015-01-01

    In this study we used a narrative approach to investigate the function that digital, interactive tools can fulfill in inquiry teaching and learning. Such a narrative can be conceived of as 'talking through' a lesson in which a teacher supports inquiry with technology. By subsequently coding these

  11. Using an Ecocultural Approach to Explore Young Children's Experiences of Prior-to-School Care Settings

    Science.gov (United States)

    Grace, Rebekah; Bowes, Jennifer

    2011-01-01

    This paper contributes to the discussion around methodologies effective in gathering the perspectives of young children for the purposes of research. It describes ecocultural theory, a theoretical model that has grown out of anthropology and cross-cultural psychology, and argues for the benefits of applying an ecocultural approach to interviews…

  12. Gear technical contributions to an ecosystem approach in the Danish bottom set nets fisheries

    DEFF Research Database (Denmark)

    Savina, Esther

    on passive gears is partly due to historical focus on active gears, but also because data collection and analysis calls for the development of appropriate innovative assessment methodologies to properly assess the new type of information which has to be gathered as part of an Ecosystem Approach to Fisheries...

  13. A Constructivist Approach to Teaching Web Development in Post-Secondary Vocational Settings

    Science.gov (United States)

    Bunch, John M.

    2009-01-01

    Vocational education by its nature has a need for delivery methods that place a strong focus on the relationship between school and work and seeks to deliver instruction in a manner that bridges the two as seamlessly as possible. This paper presents a curriculum and constructivist-based instructional delivery approach, designed to emphasize a…

  14. A Methodological Demonstration of Set-theoretical Approach to Social Media Maturity Models Using Necessary Condition Analysis

    DEFF Research Database (Denmark)

    Lasrado, Lester Allan; Vatrapu, Ravi; Andersen, Kim Normann

    2016-01-01

    Despite being widely accepted and applied across research domains, maturity models have been criticized for lacking academic rigor; in particular, methodologically rigorous and empirically grounded or tested maturity models are quite rare. Attempting to close this gap, we adopt a set-theoretic approach by applying the Necessary Condition Analysis (NCA) technique to derive maturity stages and stage boundary conditions. The ontology is to view stages (boundaries) in maturity models as a collection of necessary conditions. Using social media maturity data, we demonstrate the strength of our approach and evaluate some of the arguments presented by previous conceptually focused social media maturity models.

  15. Evidence Aid approach to gap analysis and priority setting of questions for systematic reviews in disasters.

    Science.gov (United States)

    Kayabu, Bonnix

    2015-02-01

    This article is based on a presentation at the Evidence Aid Symposium on 20 September 2014 in Hyderabad, India. Ten years after the Indian Ocean Tsunami, Evidence Aid, its partners, and other humanitarian stakeholders met to give an update on Evidence Aid's work and to discuss its future. The Evidence Aid approach to filling the gap in the production and use of evidence in the disaster sector and other humanitarian health emergencies was widely discussed. The iterative approach to prioritising evidence reinforced Evidence Aid's principles of independence and coordinated international organisation. The generation of 30 research questions during the prioritisation process constitutes the first big step for Evidence Aid towards becoming a one-stop shop for the search for evidence on the effectiveness of interventions in disasters. © 2015 Chinese Cochrane Center, West China Hospital of Sichuan University and Wiley Publishing Asia Pty Ltd.

  16. Flipping for success: evaluating the effectiveness of a novel teaching approach in a graduate level setting

    OpenAIRE

    Moraros, John; Islam, Adiba; Yu, Stan; Banow, Ryan; Schindelka, Barbara

    2015-01-01

    Background Flipped Classroom is a model that's quickly gaining recognition as a novel teaching approach among health science curricula. The purpose of this study was four-fold and aimed to compare Flipped Classroom effectiveness ratings with: 1) student socio-demographic characteristics, 2) student final grades, 3) student overall course satisfaction, and 4) course pre-Flipped Classroom effectiveness ratings. Methods The participants in the study consisted of 67 Masters-level graduate student...

  17. A set-valued approach to FDI an FTC of wind turbines

    DEFF Research Database (Denmark)

    Casau, Pedro; Rosa, Paulo; Tabatabaeipour, Mojtaba

    2015-01-01

    A complete methodology to design robust Fault Detection and Isolation (FDI) filters and Fault Tolerant Control (FTC) schemes for Linear Parameter Varying (LPV) systems is proposed, with particular focus on its applicability to wind turbines. The paper takes advantage of the recent advances in model...... falsification using Set-Valued Observers (SVOs) that led to the development of FDI methods for uncertain linear time-varying systems, with promising results in terms of the time required to diagnose faults. An integration of such SVO-based FDI methods with robust control synthesis is described, in order...

  18. Modeling Restrained Shrinkage Induced Cracking in Concrete Rings Using the Thick Level Set Approach

    Directory of Open Access Journals (Sweden)

    Rebecca Nakhoul

    2018-03-01

    Full Text Available Modeling restrained shrinkage-induced damage and cracking in concrete is addressed herein. The novel Thick Level Set (TLS) damage growth and crack propagation model is used and adapted by introducing a shrinkage contribution into the formulation. The capacity of the TLS to predict damage evolution, crack initiation, and growth triggered by restrained shrinkage in the absence of external loads is evaluated. A study dealing with shrinkage-induced cracking in elliptical concrete rings is presented herein. Key results, such as the effect of ring oblateness on the stress distribution and on the critical shrinkage strain needed to initiate damage, are highlighted. In addition, crack positions are compared to those observed in experiments and are found to be satisfactory.

  19. Approaches and methods for eutrophication target setting in the Baltic Sea region

    Energy Technology Data Exchange (ETDEWEB)

    Carstensen, J.; Andersen, J.; Dromph, K. [and others

    2013-06-01

    This report describes the outcome of the project 'Review of the ecological targets for eutrophication of the HELCOM BSAP', also known as HELCOM TARGREV. The objectives of HELCOM TARGREV have been to revise the scientific basis underlying the ecological targets for eutrophication, placing much emphasis on providing a strengthened data and information basis for the setting of quantitative targets. The results are likely to form the information basis for decisions on reviewing and, if necessary, revising the maximum allowable inputs (MAI) of nutrients under the Baltic Sea Action Plan, including the provisional country-wise allocated reduction targets (CART).

  20. OPTIMIZATION-BASED APPROACH TO TILING OF FINITE AREAS WITH ARBITRARY SETS OF WANG TILES

    Directory of Open Access Journals (Sweden)

    Marek Tyburec

    2017-11-01

    Full Text Available Wang tiles have proved to be a convenient tool for the design of aperiodic tilings in computer graphics and in materials engineering. While there are several algorithms for the generation of finite-sized tilings, they exploit the specific structure of individual tile sets, which prevents their general usage. In this contribution, we reformulate the NP-complete tiling generation problem as a binary linear program, together with its linear and semidefinite relaxations suitable for the branch and bound method. Finally, we assess the performance of the established formulations on the generation of several aperiodic tilings reported in the literature, and conclude that the linear relaxation is better suited for the problem.
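
    The binary-program formulation can be made concrete on a tiny instance. The sketch below (my own small example, solved with the third-party PuLP package) assigns exactly one tile per cell and forbids mismatched horizontal and vertical edges; the paper's exact formulation and its relaxations may differ.

        import pulp
        from itertools import product

        tiles = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]  # (N, E, S, W) edge colors
        rows, cols = 3, 3

        prob = pulp.LpProblem("wang_tiling", pulp.LpMinimize)
        x = {(i, j, t): pulp.LpVariable(f"x_{i}_{j}_{t}", cat="Binary")
             for i, j, t in product(range(rows), range(cols), range(len(tiles)))}
        prob += 0                                          # pure feasibility problem

        for i, j in product(range(rows), range(cols)):     # exactly one tile per cell
            prob += pulp.lpSum(x[i, j, t] for t in range(len(tiles))) == 1

        for i, j in product(range(rows), range(cols)):     # forbid mismatched edges
            for t1, t2 in product(range(len(tiles)), repeat=2):
                if j + 1 < cols and tiles[t1][1] != tiles[t2][3]:   # East vs West
                    prob += x[i, j, t1] + x[i, j + 1, t2] <= 1
                if i + 1 < rows and tiles[t1][2] != tiles[t2][0]:   # South vs North
                    prob += x[i, j, t1] + x[i + 1, j, t2] <= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        grid = [[next(t for t in range(len(tiles)) if x[i, j, t].value() == 1)
                 for j in range(cols)] for i in range(rows)]
        print(grid)                                        # a valid tile index per cell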

  1. Towards predictive models for transitionally rough surfaces

    Science.gov (United States)

    Abderrahaman-Elena, Nabil; Garcia-Mayoral, Ricardo

    2017-11-01

    We analyze and model the previously presented decomposition for flow variables in DNS of turbulence over transitionally rough surfaces. The flow is decomposed into two contributions: one produced by the overlying turbulence, which has no footprint of the surface texture, and one induced by the roughness, which is essentially the time-averaged flow around the surface obstacles, but modulated in amplitude by the first component. The roughness-induced component closely resembles the laminar steady flow around the roughness elements at the same non-dimensional roughness size. For small - yet transitionally rough - textures, the roughness-free component is essentially the same as over a smooth wall. Based on these findings, we propose predictive models for the onset of the transitionally rough regime. Project supported by the Engineering and Physical Sciences Research Council (EPSRC).

  2. The surface roughness effect on the performance of supersonic ejectors

    Science.gov (United States)

    Brezgin, D. V.; Aronson, K. E.; Mazzelli, F.; Milazzo, A.

    2017-07-01

    The paper presents numerical simulation results of the influence of surface roughness on gas-dynamic processes inside the flow parts of a supersonic ejector. The simulations are performed using two commercial CFD solvers (STAR-CCM+ and Fluent). The results are compared to each other and verified by a full-scale experiment in terms of global flow parameters (the entrainment ratio, ER: the ratio of secondary to primary mass flow rate) and local flow parameter distributions (the static pressure distribution along the mixing chamber and diffuser walls). A detailed comparative study of the methods and approaches employed in both CFD packages is carried out in order to estimate the roughness effect on the logarithmic-law velocity distribution inside the boundary layer. The influence of surface roughness is compared with the influence of backpressure (static pressure at the ejector outlet). It is found that, on increasing either the ejector backpressure or the surface roughness height, the shock position displaces upstream. Moreover, the numerical results for an ejector with rough walls from the two CFD solvers agree well quantitatively in terms of the mean ER and qualitatively in terms of the local flow parameter distributions. It is also found that if the "critical roughness height" is exceeded for the given boundary conditions and ejector geometry, the ejector switches to the "off-design" mode and its performance decreases considerably.

  3. AN AUTOMATED ROAD ROUGHNESS DETECTION FROM MOBILE LASER SCANNING DATA

    Directory of Open Access Journals (Sweden)

    P. Kumar

    2017-05-01

    Full Text Available Rough roads influence the safety of road users, as the accident rate increases with increasing unevenness of the road surface. Road roughness regions need to be efficiently detected and located in order to ensure their maintenance. Mobile Laser Scanning (MLS) systems provide a rapid and cost-effective alternative by providing accurate and dense point cloud data along the route corridor. In this paper, an automated algorithm is presented for detecting road roughness from MLS data. The presented algorithm is based on interpolating a smooth intensity raster surface from the LiDAR point cloud data using a point thinning process. The interpolated surface is further processed using morphological and multi-level Otsu thresholding operations to identify candidate road roughness regions. The candidate regions are finally filtered based on spatial density and standard deviation of elevation criteria to detect the roughness along the road surface. Test results of the road roughness detection algorithm on two road sections are presented. The developed approach can be used to provide comprehensive information to road authorities in order to schedule maintenance and ensure maximum safety conditions for road users.
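
    The thresholding stage can be sketched with scikit-image (interpolation from the point cloud and the density/elevation filtering are omitted): multi-level Otsu splits a synthetic intensity raster into three classes, and a binary morphological opening cleans the candidate regions. All data and parameter choices below are illustrative, not the authors'.

        import numpy as np
        from skimage.filters import threshold_multiotsu
        from skimage.morphology import binary_opening

        rng = np.random.default_rng(0)
        raster = rng.normal(0.5, 0.05, (200, 200))      # stand-in for an intensity raster
        raster[80:120, 60:140] -= 0.2                   # darker patch mimicking a rough stretch

        thresholds = threshold_multiotsu(raster, classes=3)
        classes = np.digitize(raster, bins=thresholds)  # class 0 = lowest intensity
        candidate = binary_opening(classes == 0, np.ones((3, 3), dtype=bool))
        print(thresholds, int(candidate.sum()))         # candidate roughness pixels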

  5. Rheological State Diagrams for Rough Colloids in Shear Flow

    Science.gov (United States)

    Hsiao, Lilian C.; Jamali, Safa; Glynos, Emmanouil; Green, Peter F.; Larson, Ronald G.; Solomon, Michael J.

    2017-10-01

    To assess the role of particle roughness in the rheological phenomena of concentrated colloidal suspensions, we develop model colloids with varying surface roughness length scales up to 10% of the particle radius. Increasing surface roughness shifts the onset of both shear thickening and dilatancy towards lower volume fractions and critical stresses. Experimental data are supported by computer simulations of spherical colloids with adjustable friction coefficients, demonstrating that a reduction in the onset stress of thickening and a sign change in the first normal stresses occur when friction competes with lubrication. In the quasi-Newtonian flow regime, roughness increases the effective packing fraction of colloids. As the shear stress increases and suspensions of rough colloids approach jamming, the first normal stresses switch signs and the critical force required to generate contacts is drastically reduced. This is likely a signature of the lubrication films giving way to roughness-induced tangential interactions that bring about load-bearing contacts in the compression axis of flow.

  6. Rheological State Diagrams for Rough Colloids in Shear Flow.

    Science.gov (United States)

    Hsiao, Lilian C; Jamali, Safa; Glynos, Emmanouil; Green, Peter F; Larson, Ronald G; Solomon, Michael J

    2017-10-13

    To assess the role of particle roughness in the rheological phenomena of concentrated colloidal suspensions, we develop model colloids with varying surface roughness length scales up to 10% of the particle radius. Increasing surface roughness shifts the onset of both shear thickening and dilatancy towards lower volume fractions and critical stresses. Experimental data are supported by computer simulations of spherical colloids with adjustable friction coefficients, demonstrating that a reduction in the onset stress of thickening and a sign change in the first normal stresses occur when friction competes with lubrication. In the quasi-Newtonian flow regime, roughness increases the effective packing fraction of colloids. As the shear stress increases and suspensions of rough colloids approach jamming, the first normal stresses switch signs and the critical force required to generate contacts is drastically reduced. This is likely a signature of the lubrication films giving way to roughness-induced tangential interactions that bring about load-bearing contacts in the compression axis of flow.

  7. A socio-cultural approach to learning in the practice setting.

    LENUS (Irish Health Repository)

    White, Ciara

    2010-11-01

Practice learning is an essential part of the curriculum and accounts for approximately 60% of the current pre-registration nursing programmes in the Republic of Ireland. The nature and quality of the clinical learning environment and the student nurses' experience of their practice placements is recognised as being influential in promoting the integration of theory and practice. However, the problem experienced by many learners is how to relate their theoretical knowledge to the situation-at-hand within the practice setting. Socio-cultural or activity theories of learning seek to explain the social nature of learning and propose that knowledge and learning are considered to be contextually situated. Lave and Wenger (1991) argue that learning is integrated with practice and through engagement with a community of practice, by means of sponsorship; students become increasingly competent in their identity as practitioners. This paper examines the changes which have occurred within the pre-registration nursing curriculum in the Republic of Ireland with the transition from the apprenticeship system to the graduate programme, and the resulting reduction in clinical learning hours. It also examines the potential impact on the development of student learning with the implementation of the concepts proposed by Lave and Wenger to learning in the practice setting.

  8. A new approach to the identification of Landscape Quality Objectives (LQOs) as a set of indicators.

    Science.gov (United States)

    Sowińska-Świerkosz, Barbara Natalia; Chmielewski, Tadeusz J

    2016-12-15

    The objective of the paper is threefold: (1) to introduce Landscape Quality Objectives (LQOs) as a set of indicators; (2) to present a method of linking social and expert opinion in the process of the formulation of landscape indicators; and (3) to present a methodological framework for the identification of LQOs. The implementation of these goals adopted a six-stage procedure based on the use of landscape units: (1) GIS analysis; (2) classification; (3) social survey; (4) expert value judgement; (5) quality assessment; and (6) guidelines formulation. The essence of the research was the presentation of features that determine landscape quality according to public opinion as a set of indicators. The results showed that 80 such indicators were identified, of both a qualitative (49) and a quantitative character (31). Among the analysed units, 60% (18 objects) featured socially expected (and confirmed by experts) levels of landscape quality, and 20% (6 objects) required overall quality improvement in terms of both public and expert opinion. The adopted procedure provides a new tool for integrating social responsibility into environmental management. The advantage of the presented method is the possibility of its application in the territories of various European countries. It is flexible enough to be based on cartographic studies, landscape research methods, and environmental quality standards existing in a given country. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Self-regulation of health behavior: social psychological approaches to goal setting and goal striving.

    Science.gov (United States)

    Mann, Traci; de Ridder, Denise; Fujita, Kentaro

    2013-05-01

    The goal of this article is to review and highlight the relevance of social psychological research on self-regulation for health-related theory and practice. We first review research on goal setting, or determining which goals to pursue and the criteria to determine whether one has succeeded. We discuss when and why people adopt goals, what properties of goals increase the likelihood of their attainment, and why people abandon goals. We then review research on goal striving, which includes the planning and execution of actions that lead to goal attainment, and the processes that people use to shield their goals from being disrupted by other competing goals, temptations, or distractions. We describe four types of strategies that people use when pursuing goals. We find that self-regulation entails the operation of a number of psychological mechanisms, and that there is no single solution that will help all people in all situations. We recommend a number of strategies that can help people to more effectively set and attain health-related goals. We conclude that enhancing health behavior requires a nuanced understanding and sensitivity to the varied, dynamic psychological processes involved in self-regulation, and that health is a prototypical and central domain in which to examine the relevance of these theoretical models for real behavior. We discuss the implications of this research for theory and practice in health-related domains. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  10. To do good might hurt bad : Exploring nurses' understanding and approach to suffering in forensic psychiatric settings

    OpenAIRE

    Vincze, M.; Fredriksson, L.; Wiklund Gustin, Lena

    2015-01-01

    Patients in forensic psychiatric settings not only have to deal with their mental illness, but also memories of criminal activities and being involuntarily hospitalized. The aim of the present study was to explore how nurses working in forensic psychiatric services understand and approach patients' experiences of suffering. Data were generated by semistructured interviews with psychiatric nurses from two different forensic psychiatric units in Sweden. Data were analysed by means of a hermeneu...

  11. Active fault detection and isolation of discrete-time linear time-varying systems: a set-membership approach

    DEFF Research Database (Denmark)

    Tabatabaeipour, Mojtaba

    2013-01-01

    Active fault detection and isolation (AFDI) is used for detection and isolation of faults that are hidden in the normal operation because of a low excitation signal or due to the regulatory actions of the controller. In this paper, a new AFDI method based on set-membership approaches is proposed...... un-falsified, the AFDI method is used to generate an auxiliary signal that is injected into the system for detection and isolation of faults that remain otherwise hidden or non-isolated using passive FDI (PFDI) methods. Having the set-valued estimation of the states for each model, the proposed AFDI...... method finds an optimal input signal that guarantees FDI in a finite time horizon. The input signal is updated at each iteration in a decreasing receding horizon manner based on the set-valued estimation of the current states and un-falsified models at the current sample time. The problem is solved...

  12. A Normative Data Set for the Clinical Assessment of Achromatic and Chromatic Contrast Sensitivity Using a qCSF Approach.

    Science.gov (United States)

    Kim, Yeon Jin; Reynaud, Alexandre; Hess, Robert F; Mullen, Kathy T

    2017-07-01

    The measurement of achromatic sensitivity has been an important tool for monitoring subtle changes in vision as the result of disease or response to therapy. In this study, we aimed to provide a normative data set for achromatic and chromatic contrast sensitivity functions within a common cone contrast space using an abbreviated measurement approach suitable for clinical practice. In addition, we aimed to provide comparisons of achromatic and chromatic binocular summation across spatial frequency. We estimated monocular cone contrast sensitivity functions (CCSFs) using a quick Contrast Sensitivity Function (qCSF) approach for achromatic as well as isoluminant, L/M cone opponent, and S cone opponent stimuli in a healthy population of 51 subjects. We determined the binocular CCSFs for achromatic and chromatic vision to evaluate the degree of binocular summation across spatial frequency for these three different mechanisms in a subset of 20 subjects. Each data set shows consistent contrast sensitivity across the population. They highlight the extremely high cone contrast sensitivity of L/M cone opponency compared with the S-cone and achromatic responses. We also find that the two chromatic sensitivities are correlated across the healthy population. In addition, binocular summation for all mechanisms depends strongly on stimulus spatial frequency. This study, using an approach well suited to the clinic, is the first to provide a comparative normative data set for the chromatic and achromatic contrast sensitivity functions, yielding quantitative comparisons of achromatic, L/M cone opponent, and S cone opponent chromatic sensitivities as a function of spatial frequency.

  13. The cybernetic-statistical approach to the search for substances with pre-set properties

    International Nuclear Information System (INIS)

    Savitskij, E.M.; Kiseleva, N.N.; Shkatova, T.M.

    1982-01-01

Using cybernetic methods, a forecast is given for a new phase Ag_xMo_6S_8, data on which had not been used in the computer learning. Its T_c is evaluated, and statistical methods are used to find the optimum silver content and the conditions for obtaining the phase with a T_c somewhat higher than reported in the literature for the same ratio of molybdenum to sulphur in the compound. The results obtained in solving the model task of searching for new ternary superconducting phases A_xB_6S_8 permit the conclusion that the suggested cybernetic-statistical approach is correct.

  14. A hybrid wavelet de-noising and Rank-Set Pair Analysis approach for forecasting hydro-meteorological time series.

    Science.gov (United States)

    Wang, Dong; Borthwick, Alistair G; He, Handan; Wang, Yuankun; Zhu, Jieyu; Lu, Yuan; Xu, Pengcheng; Zeng, Xiankui; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin

    2018-01-01

Accurate, fast forecasting of hydro-meteorological time series is presently a major challenge in drought and flood mitigation. This paper proposes a hybrid approach, wavelet de-noising (WD) and Rank-Set Pair Analysis (RSPA), that takes full advantage of a combination of the two approaches to improve forecasts of hydro-meteorological time series. WD allows decomposition and reconstruction of a time series by the wavelet transform, and hence separation of the noise from the original series. RSPA, a more reliable and efficient version of Set Pair Analysis, is integrated with WD to form the hybrid WD-RSPA approach. Two types of hydro-meteorological data sets with different characteristics and different levels of human influence at some representative stations are used to illustrate the WD-RSPA approach. The approach is also compared to three other generic methods: the conventional Auto Regressive Integrated Moving Average (ARIMA) method, Artificial Neural Networks (ANNs) (BP - error Back Propagation, MLP - Multilayer Perceptron and RBF - Radial Basis Function), and RSPA alone. Nine error metrics are used to evaluate the model performance. Compared to the three other generic methods, the results generated by the WD-RSPA model invariably presented smaller error measures, which means the forecasting capability of the WD-RSPA model is better than that of the other models. The results show that WD-RSPA is accurate, feasible, and effective. In particular, WD-RSPA is found to be the best among the various generic methods compared in this paper, even when extreme events are included within a time series. Copyright © 2017 Elsevier Inc. All rights reserved.
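
    The WD half of the hybrid can be sketched in a few lines with PyWavelets; the db4 wavelet, three decomposition levels and the universal soft threshold are common defaults assumed for illustration, and the RSPA forecasting step is omitted.

```python
import numpy as np
import pywt

# Minimal wavelet de-noising (WD) sketch: decompose, soft-threshold the detail
# coefficients with the universal threshold, and reconstruct.
def wavelet_denoise(series, wavelet="db4", level=3):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    thresh = sigma * np.sqrt(2.0 * np.log(len(series)))   # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

noisy = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.3 * np.random.randn(512)
clean = wavelet_denoise(noisy)
```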

  15. Benchmarking performance measurement and lean manufacturing in the rough mill

    Science.gov (United States)

    Dan Cumbo; D. Earl Kline; Matthew S. Bumgardner

    2006-01-01

    Lean manufacturing represents a set of tools and a stepwise strategy for achieving smooth, predictable product flow, maximum product flexibility, and minimum system waste. While lean manufacturing principles have been successfully applied to some components of the secondary wood products value stream (e.g., moulding, turning, assembly, and finishing), the rough mill is...

  16. A Game Theoretic Approach for Modeling Privacy Settings of an Online Social Network

    Directory of Open Access Journals (Sweden)

    Jundong Chen

    2014-05-01

Full Text Available Users of online social networks often adjust their privacy settings to control how much information on their profiles is accessible to other users of the networks. While a variety of factors have been shown to affect the privacy strategies of these users, very little work has been done in analyzing how these factors influence each other and collectively contribute towards the users’ privacy strategies. In this paper, we analyze the influence of attribute importance, benefit, risk and network topology on the users’ attribute disclosure behavior by introducing a weighted evolutionary game model. Results show that: irrespective of risk, users are more likely to reveal their most important attributes than their least important attributes; when the users’ range of influence is increased, the risk factor plays a smaller role in attribute disclosure; the network topology exhibits a considerable effect on the privacy in an environment with risk.

  17. Level set method for optimal shape design of MRAM core. Micromagnetic approach

    International Nuclear Information System (INIS)

    Melicher, Valdemar; Cimrak, Ivan; Keer, Roger van

    2008-01-01

We aim at optimizing the shape of the magnetic core in MRAM memories. The evolution of the magnetization during the writing process is described by the Landau-Lifshitz equation (LLE). The actual shape of the core in one cell is characterized by the coefficient γ. The cost functional f=f(γ) expresses the quality of the writing process, having in mind the competition between the full-select and the half-select element. We derive an explicit form of the derivative F=∂f/∂γ, which allows for the use of gradient-type methods for the actual computation of the optimized shape (e.g., the steepest descent method). The level set method (LSM) is employed for the representation of the piecewise constant coefficient γ.

  18. A Multilevel Approach to the Path to Expertise in Three Different Competitive Settings

    Directory of Open Access Journals (Sweden)

    Carlos Eduardo Gonçalves

    2014-03-01

Full Text Available The objectives of the study were to analyze the deliberate practice variables in three different youth competitive sport settings, and to analyze the effects of a season-long exposure on deliberate practice variables. The study explores three contexts in two different sports, soccer and volleyball, and at two competitive levels. The athletes completed the questionnaire at the beginning and at the end of the season. A multilevel analysis was performed. Forty-eight boys aged 15-17 years (14 from a volleyball club; 14 from an elite volleyball centre; 20 from a professional soccer club) participated in the study. The measure was an adapted version for soccer and volleyball of the Deliberate Practice Motivation Questionnaire, which assesses two dimensions: the will to compete and the will to excel. Fewer people in the volleyball group showed a will to excel, while the soccer group showed an increase in the scores. In will to compete, the three teams showed a decrease in their means. The decrease is more pronounced in the will to excel, but the context effect is not significant. The biggest decrease is shown by the elite volleyball team, followed by the club teams. The findings raise questions for managers and coaches who look for physically and technically gifted young athletes and aim to develop their qualities through a carefully planned training programme. Insertion into programmes that are believed to foster expertise seems to have unexpected consequences. Sport participation cannot rely exclusively on an orientation toward expertise, forgetting the autonomy of young people to set their own goals.

  19. Process-outcome interrelationship and standard setting in medical education: the need for a comprehensive approach.

    Science.gov (United States)

    Christensen, Leif; Karle, Hans; Nystrup, Jørgen

    2007-09-01

An outcome-based approach to medical education, as compared to a process/content orientation, is currently being discussed intensively. In this article, the process and outcome interrelationship in medical education is discussed, with specific emphasis on its relation to the definition of standards in basic medical education. Perceptions of outcome have always been an integrated element of curricular planning. The present debate underlines the need for a stronger focus on learning objectives and outcome assessment in many medical schools around the world. The need to maintain an integrated approach to process/content and outcome is underlined in this paper. A worry is expressed about the taxonomy of learning in purely outcome-based medical education, in which student assessment can be a major determinant of the learning process, leaving control of the medical curriculum to medical examiners. Moreover, curricula which favour reductionism by stating everything in terms of instrumental outcomes or competences face a risk of lowering quality and becoming prey to political interference. Standards based on outcome alone raise unresolved problems in relation to the licensure requirements of medical doctors. It is argued that the alleged dichotomy between process/content and outcome seems artificial, and that the formulation of standards in medical education must follow a comprehensive line in curricular planning.

  20. A variational approach to multi-phase motion of gas, liquid and solid based on the level set method

    Science.gov (United States)

    Yokoi, Kensuke

    2009-07-01

We propose a simple and robust numerical algorithm to deal with multi-phase motion of gas, liquid and solid based on the level set method [S. Osher, J.A. Sethian, Fronts propagating with curvature-dependent speed: Algorithms based on Hamilton-Jacobi formulations, J. Comput. Phys. 79 (1988) 12; M. Sussman, P. Smereka, S. Osher, A level set approach for computing solutions to incompressible two-phase flow, J. Comput. Phys. 114 (1994) 146; J.A. Sethian, Level Set Methods and Fast Marching Methods, Cambridge University Press, 1999; S. Osher, R. Fedkiw, Level Set Methods and Dynamic Implicit Surfaces, Applied Mathematical Sciences, vol. 153, Springer, 2003]. In the Eulerian framework, to simulate interaction between a moving solid object and an interfacial flow, we need to define at least two functions (level set functions) to distinguish three materials. In such simulations, in general the two functions overlap and/or disagree due to numerical errors such as numerical diffusion. In this paper, we resolve the problem using the idea of the active contour model [M. Kass, A. Witkin, D. Terzopoulos, Snakes: active contour models, International Journal of Computer Vision 1 (1988) 321; V. Caselles, R. Kimmel, G. Sapiro, Geodesic active contours, International Journal of Computer Vision 22 (1997) 61; G. Sapiro, Geometric Partial Differential Equations and Image Analysis, Cambridge University Press, 2001; R. Kimmel, Numerical Geometry of Images: Theory, Algorithms, and Applications, Springer-Verlag, 2003] introduced in the field of image processing.

  1. New Integrated Quality Function Deployment Approach Based on Interval Neutrosophic Set for Green Supplier Evaluation and Selection

    Directory of Open Access Journals (Sweden)

    Luu Huu Van

    2018-03-01

Full Text Available Green supplier evaluation and selection plays a crucial role in the green supply chain management of any organization to reduce the purchasing cost of materials and increase the flexibility and quality of products. An interval neutrosophic set (INS), which is a generalization of fuzzy sets, intuitionistic fuzzy sets (IFS) and neutrosophic sets (NS), can better handle incomplete, indeterminate and inconsistent information than the other sets. This paper proposes a new integrated Quality Function Deployment (QFD) approach in support of the green supplier evaluation and selection process. In the proposed approach, INS is used to assess the relative importance of the characteristics that the purchased product should have (internal variables “WHATs”) in order to satisfy the company’s needs, the relevant supplier assessment criteria (external variables “HOWs”), the “HOWs”-“WHATs” correlation scores, the resulting weights of the “HOWs” and the impact of each potential supplier. The normalized weighted rating is then defined, and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method is developed to obtain a final ranking of green suppliers. A case study is applied to demonstrate the efficiency and computational procedure of the proposed method.
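
    For orientation, the final TOPSIS ranking step might look as follows on a crisp decision matrix; the interval-neutrosophic aggregation that precedes it in the paper is assumed to have already produced these crisp scores, and the sample numbers are purely illustrative.

```python
import numpy as np

# Crisp TOPSIS sketch: normalize, weight, compute distances to the ideal and
# anti-ideal solutions, and rank by relative closeness (higher = better).
def topsis(matrix, weights, benefit):
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
    v = m * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)

scores = topsis(np.array([[0.7, 0.4, 0.9], [0.6, 0.8, 0.5], [0.9, 0.5, 0.6]]),
                weights=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, False]))  # last column is a cost
print(np.argsort(scores)[::-1])  # supplier ranking, best first
```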

  2. Assessment of iron status in settings of inflammation: challenges and potential approaches.

    Science.gov (United States)

    Suchdev, Parminder S; Williams, Anne M; Mei, Zuguo; Flores-Ayala, Rafael; Pasricha, Sant-Rayn; Rogers, Lisa M; Namaste, Sorrel Ml

    2017-12-01

The determination of iron status is challenging when concomitant infection and inflammation are present because of confounding effects of the acute-phase response on the interpretation of most iron indicators. This review summarizes the effects of inflammation on indicators of iron status and assesses the impact of a regression analysis to adjust for inflammation on estimates of iron deficiency (ID) in low- and high-infection-burden settings. We examined cross-sectional data from 16 surveys for preschool children (PSC) (n = 29,765) and from 10 surveys for nonpregnant women of reproductive age (WRA) (n = 25,731) from the Biomarkers Reflecting the Inflammation and Nutritional Determinants of Anemia (BRINDA) project. Effects of C-reactive protein (CRP) and α1-acid glycoprotein (AGP) concentrations on estimates of ID according to serum ferritin (SF) (used generically to include plasma ferritin), soluble transferrin receptor (sTfR), and total body iron (TBI) were summarized in relation to infection burden (in the United States compared with other countries) and population group (PSC compared with WRA). Effects of the concentrations of CRP and AGP on SF, sTfR, and TBI were generally linear, especially in PSC. Overall, regression correction changed the estimated prevalence of ID in PSC by a median of +25 percentage points (pps) when SF concentrations were used, by -15 pps when sTfR concentrations were used, and by +14 pps when TBI was used; the estimated prevalence of ID in WRA changed by a median of +8 pps when SF concentrations were used, by -10 pps when sTfR concentrations were used, and by +3 pps when TBI was used. In the United States, inflammation correction was done only for CRP concentrations because AGP concentrations were not measured; regression correction for CRP concentrations increased the estimated prevalence of ID when SF concentrations were used by 3 pps in PSC and by 7 pps in WRA. The correction of iron-status indicators for inflammation with the
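
    A heavily simplified sketch of the regression-correction idea follows, adjusting log-ferritin for log-CRP relative to a low-inflammation reference; the single-covariate form (CRP only), the 10th-percentile reference and the sample values are illustrative assumptions, not the BRINDA estimates.

```python
import numpy as np

# Sketch: fit ln(SF) on ln(CRP), then subtract the inflammation-attributable
# component for observations above a low-inflammation reference level.
def adjust_ferritin(sf, crp, ln_crp_ref=None):
    ln_sf, ln_crp = np.log(sf), np.log(crp)
    slope = np.polyfit(ln_crp, ln_sf, 1)[0]        # regression coefficient
    if ln_crp_ref is None:
        ln_crp_ref = np.quantile(ln_crp, 0.1)      # assumed reference decile
    adj = ln_sf - slope * np.clip(ln_crp - ln_crp_ref, 0.0, None)
    return np.exp(adj)                             # adjusted ferritin

sf = np.array([30.0, 55.0, 12.0, 80.0])            # hypothetical SF, ug/L
crp = np.array([0.4, 6.0, 0.2, 12.0])              # hypothetical CRP, mg/L
print(adjust_ferritin(sf, crp))
```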

  3. Performing Causal Configurations in e-Tourism: a Fuzzy-Set Approach

    Directory of Open Access Journals (Sweden)

    Hugues Seraphin

    2016-07-01

Full Text Available Search engines are constantly endeavouring to integrate social media mentions in the website ranking process. Search Engine Optimization (SEO) principles can be used to impact website ranking, considering various social media channels' capability to drive traffic. Both practitioners and researchers have focused on the impact of social media on SEO, but have paid little attention to the influence of social media interactions on organic search results. This study explores the causal configurations between social mention variables (strength, sentiment, passion, reach) and the rankings of nine websites dedicated to hotel booking (according to organic search results). The social mention variables embedded into the conceptual model were provided by a real-time social media search and analysis tool (www.socialmention.com), while the rankings of the hotel booking websites were determined after a targeted search on Google. The study employs fuzzy-set qualitative comparative analysis (fsQCA), and the results reveal that social mention variables have complex links with the rankings of the hotel booking websites included in the sample, according to the Quine-McCluskey algorithm solution. The findings extend the body of knowledge related to the impact of social media mentions on

  4. Task-Sharing Approaches to Improve Mental Health Care in Rural and Other Low-Resource Settings: A Systematic Review.

    Science.gov (United States)

    Hoeft, Theresa J; Fortney, John C; Patel, Vikram; Unützer, Jürgen

    2018-12-01

    Rural areas persistently face a shortage of mental health specialists. Task shifting, or task sharing, is an approach in global mental health that may help address unmet mental health needs in rural and other low-resource areas. This review focuses on task-shifting approaches and highlights future directions for research in this area. Systematic review on task sharing of mental health care in rural areas of high-income countries included: (1) PubMed, (2) gray literature for innovations not yet published in peer-reviewed journals, and (3) outreach to experts for additional articles. We included English language articles published before August 31, 2013, on interventions sharing mental health care tasks across a team in rural settings. We excluded literature: (1) from low- and middle-income countries, (2) involving direct transfer of care to another provider, and (3) describing clinical guidelines and shared decision-making tools. The review identified approaches to task sharing focused mainly on community health workers and primary care providers. Technology was identified as a way to leverage mental health specialists to support care across settings both within primary care and out in the community. The review also highlighted how provider education, supervision, and partnerships with local communities can support task sharing. Challenges, such as confidentiality, are often not addressed in the literature. Approaches to task sharing may improve reach and effectiveness of mental health care in rural and other low-resource settings, though important questions remain. We recommend promising research directions to address these questions. © 2017 National Rural Health Association.

  5. A summarization approach for Affymetrix GeneChip data using a reference training set from a large, biologically diverse database

    Directory of Open Access Journals (Sweden)

    Tripputi Mark

    2006-10-01

Full Text Available Abstract Background Many of the most popular pre-processing methods for Affymetrix expression arrays, such as RMA, gcRMA, and PLIER, simultaneously analyze data across a set of predetermined arrays to improve the precision of the final measures of expression. One problem associated with these algorithms is that expression measurements for a particular sample are highly dependent on the set of samples used for normalization, and results obtained by normalization with a different set may not be comparable. A related problem is that an organization producing and/or storing large amounts of data in a sequential fashion will need to either re-run the pre-processing algorithm every time an array is added or store the arrays in batches that are pre-processed together. Furthermore, pre-processing of large numbers of arrays requires loading all the feature-level data into memory, which is a difficult task even with modern computers. We utilize a scheme that produces all the information necessary for pre-processing using a very large training set, which can then be used for summarization of samples outside of the training set. All subsequent pre-processing tasks can be done on an individual array basis. We demonstrate the utility of this approach by defining a new version of the Robust Multi-chip Averaging (RMA) algorithm, which we refer to as refRMA. Results We assess performance based on multiple sets of samples processed over HG U133A Affymetrix GeneChip® arrays. We show that the refRMA workflow, when used in conjunction with a large, biologically diverse training set, results in the same general characteristics as those of RMA in its classic form when comparing overall data structure, sample-to-sample correlation, and variation. Further, we demonstrate that the refRMA workflow and reference set can be robustly applied to naïve organ types and to benchmark data, where its performance indicates respectable results. Conclusion Our results indicate that a biologically diverse
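
    The central refRMA idea, freezing pre-processing information from a large training set so that later arrays can be handled one at a time, can be illustrated with quantile normalization against a stored reference distribution. This is a conceptual sketch only, not the published workflow; probe-set summarization is omitted and the data are simulated.

```python
import numpy as np

# Normalize a single new array against frozen reference quantiles, so no
# other arrays are needed at normalization time.
def ref_quantile_normalize(new_array, reference_quantiles):
    ranks = np.argsort(np.argsort(new_array))          # rank of each probe
    grid = np.linspace(0.0, 1.0, len(reference_quantiles))
    probs = ranks / (len(new_array) - 1.0)
    return np.interp(probs, grid, np.sort(reference_quantiles))

# "Training set": frozen reference quantiles = mean of sorted training arrays.
training = np.random.lognormal(mean=6.0, sigma=1.0, size=(50, 1000))
ref_q = np.sort(training, axis=1).mean(axis=0)
new = np.random.lognormal(mean=6.0, sigma=1.2, size=1000)  # a later array
normalized = ref_quantile_normalize(new, ref_q)
```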

  6. A storied-identity analysis approach to teacher candidates learning to teach in an urban setting

    Science.gov (United States)

    Ibourk, Amal

    While many studies have investigated the relationship between teachers' identity work and their developing practices, few of these identity focused studies have honed in on teacher candidates' learning to teach in an urban setting. Drawing upon narrative inquiry methodology and a "storied identity" analytic framework, I examined how the storied identities of science learning and becoming a science teacher shape teacher candidates' developing practice. In particular, I examined the stories of three interns, Becky, David, and Ashley, and I tell about their own experiences as science learners, their transitions to science teachers, and the implications this has for the identity work they did as they navigated the challenges of learning to teach in high-needs schools. Initially, each of the interns highlighted a feeling of being an outsider, and having a difficult time becoming a fully valued member of their classroom community in their storied identities of becoming a science teacher in the beginning of their internship year. While the interns named specific challenges, such as limited lab materials and different math abilities, I present how they adapted their lesson plans to address these challenges while drawing from their storied identities of science learning. My study reveals that the storied identities of becoming a science teacher informed how they framed their initial experiences teaching in an urban context. In addition, my findings reveal that the more their storied identities of science learning and becoming a science teacher overlapped, the more they leveraged their storied identity of science learning in order to implement teaching strategies that helped them make sense of the challenges that surfaced in their classroom contexts. Both Becky and Ashley leveraged their storied identities of science learning more than David did in their lesson planning and learning to teach. David's initial storied identity of becoming a science teacher revealed how he

  7. A semi-linguistic approach based on fuzzy set theory: application to expert judgments aggregation

    International Nuclear Information System (INIS)

    Ghyym, Seong Ho

    1998-01-01

In the present work, a semi-linguistic fuzzy algorithm is proposed to obtain the fuzzy weighting values for a multi-criterion, multi-alternative performance evaluation problem, with application to the aggregated estimate in the aggregation process of multi-expert judgments. The proposed algorithm framework is composed of the hierarchical structure, the semi-linguistic approach, the fuzzy R-L type integral value, and the total risk attitude index. In this work, extending the Chang/Chen method for triangular fuzzy numbers, the total risk attitude index is devised for a trapezoidal fuzzy number system. To illustrate the application of the proposed algorithm, a case problem available in the literature is studied in connection with the weighting-value evaluation of three alternatives (i.e., the aggregation of three expert judgments) under seven criteria. The evaluation results, such as the overall utility value, aggregation weighting value, and aggregated estimate, obtained using the present fuzzy model are compared with those for other fuzzy models based on the Kim/Park method, the Liou/Wang method, and the Chang/Chen method

  8. A semi-linguistic approach based on fuzzy set theory: application to expert judgments aggregation

    Energy Technology Data Exchange (ETDEWEB)

    Ghyym, Seong Ho [KEPRI, Taejon (Korea, Republic of)

    1998-10-01

In the present work, a semi-linguistic fuzzy algorithm is proposed to obtain the fuzzy weighting values for a multi-criterion, multi-alternative performance evaluation problem, with application to the aggregated estimate in the aggregation process of multi-expert judgments. The proposed algorithm framework is composed of the hierarchical structure, the semi-linguistic approach, the fuzzy R-L type integral value, and the total risk attitude index. In this work, extending the Chang/Chen method for triangular fuzzy numbers, the total risk attitude index is devised for a trapezoidal fuzzy number system. To illustrate the application of the proposed algorithm, a case problem available in the literature is studied in connection with the weighting-value evaluation of three alternatives (i.e., the aggregation of three expert judgments) under seven criteria. The evaluation results, such as the overall utility value, aggregation weighting value, and aggregated estimate, obtained using the present fuzzy model are compared with those for other fuzzy models based on the Kim/Park method, the Liou/Wang method, and the Chang/Chen method.
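
    As a hedged illustration of a total integral value with a risk-attitude index for trapezoidal fuzzy numbers, the sketch below extends the Liou/Wang-style left/right integral formula; the exact index devised in the paper may differ in detail.

```python
# Total integral value for a trapezoidal fuzzy number (a, b, c, d), where
# alpha in [0, 1] plays the role of a risk-attitude index
# (0 = pessimistic, 1 = optimistic). Illustrative formula, not the paper's.
def total_integral_value(a, b, c, d, alpha=0.5):
    left = 0.5 * (a + b)      # left integral value
    right = 0.5 * (c + d)     # right integral value
    return alpha * right + (1.0 - alpha) * left

# Ranking two expert-judgment aggregates under a neutral risk attitude:
print(total_integral_value(2, 3, 4, 6), total_integral_value(1, 4, 5, 5))
```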

  9. Coupled transient thermo-fluid/thermal-stress analysis approach in a VTBM setting

    International Nuclear Information System (INIS)

    Ying, A.; Narula, M.; Zhang, H.; Abdou, M.

    2008-01-01

    A virtual test blanket module (VTBM) has been envisioned as a utility to aid in streamlining and optimizing the US ITER TBM design effort by providing an integrated multi-code, multi-physics modeling environment. Within this effort, an integrated simulation approach is being developed for TBM design calculations and performance evaluation. Particularly, integrated thermo-fluid/thermal-stress analysis is important for enabling TBM design and performance calculations. In this paper, procedures involved in transient coupled thermo-fluid/thermal-stress analysis are investigated. The established procedure is applied to study the impact of pulsed operational phenomenon on the thermal-stress response of the TBM first wall. A two-way coupling between the thermal strain and temperature field is also studied, in the context of a change in thermal conductivity of the beryllium pebble bed in a solid breeder blanket TBM due to thermal strain. The temperature field determines the thermal strain in beryllium, which in turn changes the temperature field. Iterative thermo-fluid/thermal strain calculations have been applied to both steady-state and pulsed operation conditions. All calculations have been carried out in three dimensions with representative MCAD models, including all the TBM components in their entirety

  10. Holistic approach to prevention and management of type 2 diabetes mellitus in a family setting.

    Science.gov (United States)

    Ofori, Sandra N; Unachukwu, Chioma N

    2014-01-01

    Diabetes mellitus (DM) is a chronic, progressive metabolic disorder with several complications that affect virtually all the systems in the human body. Type 2 DM (T2DM) is a major risk factor for cardiovascular disease (CVD). The management of T2DM is multifactorial, taking into account other major modifiable risk factors, like obesity, physical inactivity, smoking, blood pressure, and dyslipidemia. A multidisciplinary team is essential to maximize the care of individuals with DM. DM self-management education and patient-centered care are the cornerstones of management in addition to effective lifestyle strategies and pharmacotherapy with individualization of glycemic goals. Robust evidence supports the effectiveness of this approach when implemented. Individuals with DM and their family members usually share a common lifestyle that, not only predisposes the non-DM members to developing DM but also, increases their collective risk for CVD. In treating DM, involvement of the entire family, not only improves the care of the DM individual but also, helps to prevent the risk of developing DM in the family members.

  11. Holistic approach to prevention and management of type 2 diabetes mellitus in a family setting

    Directory of Open Access Journals (Sweden)

    Ofori SN

    2014-05-01

Full Text Available Sandra N Ofori, Chioma N Unachukwu Department of Internal Medicine, University of Port Harcourt Teaching Hospital, Port Harcourt, Rivers State, Nigeria Abstract: Diabetes mellitus (DM) is a chronic, progressive metabolic disorder with several complications that affect virtually all the systems in the human body. Type 2 DM (T2DM) is a major risk factor for cardiovascular disease (CVD). The management of T2DM is multifactorial, taking into account other major modifiable risk factors, like obesity, physical inactivity, smoking, blood pressure, and dyslipidemia. A multidisciplinary team is essential to maximize the care of individuals with DM. DM self-management education and patient-centered care are the cornerstones of management, in addition to effective lifestyle strategies and pharmacotherapy with individualization of glycemic goals. Robust evidence supports the effectiveness of this approach when implemented. Individuals with DM and their family members usually share a common lifestyle that not only predisposes the non-DM members to developing DM but also increases their collective risk for CVD. In treating DM, involvement of the entire family not only improves the care of the DM individual but also helps to prevent the risk of developing DM in the family members. Keywords: cardiovascular disease, multifactorial management

  12. Computer simulations of a rough sphere fluid

    International Nuclear Information System (INIS)

    Lyklema, J.W.

    1978-01-01

A computer simulation of rough hard spheres with a continuously variable roughness parameter, including the limits of smooth and completely rough spheres, is described. A system of 500 particles with a homogeneous mass distribution is simulated at 8 different densities and for 5 different values of the roughness parameter. For these 40 physically different situations, the intermediate scattering function for 6 values of the wave number, the orientational correlation functions and the velocity autocorrelation functions have been calculated. A comparison has been made with a neutron scattering experiment on neopentane, and agreement was good for an intermediate value of the roughness parameter. Some often-made approximations in neutron scattering experiments are also checked. The influence of the variable roughness parameter on the correlation functions has been investigated, and three simple stochastic models have been studied to describe the orientational correlation function, which shows the most pronounced dependence on the roughness. (Auth.)

  13. Determination of forest road surface roughness by Kinect depth imaging

    Directory of Open Access Journals (Sweden)

    Francesco Marinello

    2017-12-01

Full Text Available Roughness is a dynamic property of the gravel road surface that affects safety, ride comfort as well as vehicle tyre life and maintenance costs. A rapid survey of gravel road condition is fundamental for effective maintenance planning and definition of intervention priorities. Different non-contact techniques such as laser scanning, ultrasonic sensors and photogrammetry have recently been proposed to reconstruct the three-dimensional topography of the road surface and allow extraction of roughness metrics. The application of the Microsoft Kinect™ depth camera is proposed and discussed here for collection of 3D data sets from gravel roads, to be implemented in order to allow quantification of surface roughness. The objectives are to: (i) verify the applicability of the Kinect sensor for characterization of different forest roads; (ii) identify the appropriateness and potential of different roughness parameters; and (iii) analyse the correlation with vibrations recorded by 3-axis accelerometers installed on different vehicles. The test took advantage of the implementation of the Kinect depth camera for surface roughness determination on 4 different forest gravel roads and one well-maintained asphalt road as reference. Different vehicles (mountain bike, off-road motorcycle, ATV vehicle, 4WD car and compact crossover) were included in the experiment in order to verify the vibration intensity when travelling on different road surface conditions. Correlations between the extracted roughness parameters and vibration levels of the tested vehicles were then verified. Coefficients of determination between 0.76 and 0.97 were detected between average surface roughness and the standard deviation of relative accelerations, with higher values in the case of lighter vehicles.
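
    A minimal sketch of the two quantities the study correlates: an areal average roughness Sa computed from a (levelled) depth patch, and the coefficient of determination against the standard deviation of vehicle accelerations. The numeric arrays are hypothetical stand-ins for the measured data.

```python
import numpy as np

# Areal average roughness Sa from a depth patch (deviations from the mean).
def sa_roughness(depth_patch):
    z = depth_patch - depth_patch.mean()     # remove mean-plane offset
    return np.mean(np.abs(z))                # Sa

# Hypothetical per-road-section values standing in for the measurements:
sa = np.array([0.8, 1.6, 2.9, 4.1, 5.2])              # mm
acc_std = np.array([0.21, 0.38, 0.65, 0.90, 1.15])    # g
r = np.corrcoef(sa, acc_std)[0, 1]
print(f"R^2 = {r**2:.2f}")                  # cf. 0.76-0.97 reported above
```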

  14. Calibrating the Truax Rough Rider seed drill for restoration plantings

    Science.gov (United States)

    Loren St. John; Brent Cornforth; Boyd Simonson; Dan Ogle; Derek Tilley

    2008-01-01

    The purpose of this technical note is to provide a step-by-step approach to calibrating the Truax Rough Rider range drill, a relatively new, state-of-the-art rangeland drill. To achieve the desired outcome of a seeding project, an important step following proper weed control and seedbed preparation is the calibration of the seeding equipment to ensure the recommended...

  15. The influence of spatial grain size on the suitability of the higher-taxon approach in continental priority-setting

    DEFF Research Database (Denmark)

    Larsen, Frank Wugt; Rahbek, Carsten

    2005-01-01

The higher-taxon approach may provide a pragmatic surrogate for the rapid identification of priority areas for conservation. To date, no continent-wide study has examined the use of higher-taxon data to identify complementarity-based networks of priority areas, nor has the influence of spatial grain size been assessed. We used data obtained from 939 sub-Saharan mammals to analyse the performance of higher-taxon data for continental priority-setting and to assess the influence of spatial grain sizes in terms of the size of selection units (1°× 1°, 2°× 2° and 4°× 4° latitudinal ... as effectively as species-based priority areas, genus-based areas perform considerably less effectively than species-based areas for the 1° and 2° grain sizes. Thus, our results favour the higher-taxon approach for continental priority-setting only when large grain sizes (≥4°) are used.

  16. Sensing roughness and polish direction

    DEFF Research Database (Denmark)

    Jakobsen, Michael Linde; Olesen, Anders Sig; Larsen, Henning Engelbrecht

    2016-01-01

    As a part of the work carried out in a project supported by the Danish Council for Technology and Innovation, we have investigated the option of smoothing standard CNC-machined surfaces. In the process of constructing optical prototypes, involving custom-designed optics, the development cost...... and time consumption can become prohibitive in a research budget. Machining the optical surfaces directly is expensive and time consuming. Alternatively, a more standardized and cheaper machining method can be used, calling for the object to be manually polished. During the polishing process, the operator...... needs information about the RMS-value of the surface roughness and the current direction of the scratches introduced by the polishing process. The RMS-value indicates to the operator how far he is from the final finish, and the scratch orientation is often specified by the customer in order to avoid...

  17. Software testing in roughness calculation

    International Nuclear Information System (INIS)

    Chen, Y L; Hsieh, P F; Fu, W E

    2005-01-01

A test method to determine the function quality provided by software for roughness measurement is presented in this study. The function quality of the software requirements should be assessed through the entire life cycle of the software package. The specific function, or output accuracy, is crucial for the analysis of the experimental data. For scientific applications, however, commercial software is usually embedded in a specific instrument used for measurement or analysis during the manufacturing process. In general, the error ratio caused by the software becomes more apparent when dealing with relatively small quantities, such as measurements in the nanometer-scale range. The model of 'using a data generator' proposed by NPL in the UK was applied in this study. An example roughness software package is tested and analyzed by the above-mentioned process. After selecting the 'reference results', the 'reference data' were generated by a programmable 'data generator'. The filter function with a 0.8 mm cutoff, defined in ISO 11562, was tested with 66 sinusoids at different wavelengths. Test results from the commercial software and a program written by CMS were compared to the theoretical values calculated from the ISO standards. For the filter function in this software, the results showed a significant disagreement between the reference and test results: the short-cutoff feature for filtering at high frequencies does not function properly, while the long-cutoff feature shows the maximum difference in the filtering ratio, more than 70% between wavelengths of 300 μm and 500 μm. In conclusion, the commercial software needs to be tested more extensively for specific applications, through appropriate design of reference datasets, to ensure its function quality
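
    The reference-data test described above can be reproduced in outline: synthesize a sinusoid, apply the Gaussian profile filter of ISO 11562 (weighting function with α = √(ln 2/π)) at the 0.8 mm cutoff, and compare the measured transmission with the analytical value exp(−π(αλc/λ)²). Grid spacing and evaluation window are illustrative choices, not the CMS test conditions.

```python
import numpy as np

ALPHA = np.sqrt(np.log(2.0) / np.pi)   # ISO 11562 Gaussian filter constant

def gaussian_mean_line(profile, dx, cutoff=0.8):
    """Mean line (waviness) via the ISO 11562 Gaussian weighting function."""
    x = np.arange(-cutoff, cutoff + dx, dx)
    s = np.exp(-np.pi * (x / (ALPHA * cutoff)) ** 2) / (ALPHA * cutoff)
    s /= s.sum()                                   # normalize discrete weights
    return np.convolve(profile, s, mode="same")

dx, lam = 0.0005, 0.3                              # mm; 300 um wavelength
x = np.arange(0, 8, dx)
profile = np.sin(2 * np.pi * x / lam)
waviness = gaussian_mean_line(profile, dx)
measured = np.ptp(waviness[2000:-2000]) / np.ptp(profile)   # skip edges
analytic = np.exp(-np.pi * (ALPHA * 0.8 / lam) ** 2)        # transmission
print(measured, analytic)
```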

  18. Roughness analysis of graphite surfaces of casting elements

    Directory of Open Access Journals (Sweden)

    M. Wieczorowski

    2010-01-01

Full Text Available In the paper, profilometric measurements of graphite casting elements are described. Basic topics necessary to assess the roughness of their surfaces, and the influence of asperities on various properties related to manufacturing and use, are discussed. The stylus profilometer technique for measuring surface irregularities is analyzed, including its limits resulting from pickup geometry and its contact with the measured object. The working principle of a tactile profilometer and the phenomena taking place during movement of a probe on a measured surface are shown. One important aspect is the flight phenomenon, i.e. movement of a pickup without contact with the surface during inspection, resulting from too high a scanning speed. Results of comparative research for graphite elements of a new and a used mould-and-pin set are presented. Using surface roughness, waviness and primary profile parameters (arithmetical mean of roughness profile heights Ra, maximum roughness profile height Rz, maximum primary profile height Pt, as well as maximum waviness profile height Wt), the possibility of using surface asperity parameters as a measure of wear of chill graphite elements was demonstrated. The most often applied parameter is Ra, but with the help of parameters from the W and P families it was shown that large changes occur not only in roughness but also in other components of surface irregularities.
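
    The profile parameters named in the abstract can be computed from sampled profile data roughly as follows; the five-segment form of Rz is one common convention, and exact definitions vary between standards.

```python
import numpy as np

# Profile roughness parameters from deviations z about the mean line (um).
def roughness_params(z, segments=5):
    z = z - z.mean()
    ra = np.mean(np.abs(z))                           # arithmetical mean deviation
    rq = np.sqrt(np.mean(z ** 2))                     # RMS roughness
    parts = np.array_split(z, segments)
    rz = np.mean([p.max() - p.min() for p in parts])  # mean peak-to-valley height
    return ra, rq, rz

z = np.random.randn(5000) * 0.4                       # hypothetical profile data
print(roughness_params(z))
```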

  19. Patient-as-observer approach: an alternative method for hand hygiene auditing in an ambulatory care setting.

    Science.gov (United States)

    Le-Abuyen, Sheila; Ng, Jessica; Kim, Susie; De La Franier, Anne; Khan, Bibi; Mosley, Jane; Gardam, Michael

    2014-04-01

    A survey pilot asked patients to observe the hand hygiene compliance of their health care providers. Patients returned 75.1% of the survey cards distributed, and the overall hand hygiene compliance was 96.8%. Survey results and patient commentary were used to motivate hand hygiene compliance. The patient-as-observer approach appeared to be a viable alternative for hand hygiene auditing in an ambulatory care setting because it educated, engaged, and empowered patients to play a more active role in their own health care. Copyright © 2014 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.

  20. A comparative study of reinitialization approaches of the level set method for simulating free-surface flows

    Energy Technology Data Exchange (ETDEWEB)

    Sufyan, Muhammad; Ngo, Long Cu; Choi, Hyoung Gwon [Seoul National University, Seoul (Korea, Republic of)

    2016-04-15

Unstructured grids were used to compare the performance of a direct reinitialization scheme with those of two reinitialization approaches based on the solution of a hyperbolic partial differential equation (PDE). The problems of a moving interface were solved in the context of a finite element method. A least-squares weighted residual method was used to discretize the advection equation of the level set method. The benchmark problems of rotating Zalesak's disk, a time-reversed single vortex, and two-dimensional sloshing were examined. Numerical results showed that the direct reinitialization scheme performed better than the PDE-based reinitialization approaches in terms of mass conservation, dissipation and dispersion error, and computational time. In the case of sloshing, numerical results were found to be in good agreement with existing experimental data. The direct reinitialization approach consumed considerably less CPU time than the PDE-based simulations for 20 time periods of sloshing. This approach was stable, accurate, and efficient for all the problems considered in this study.
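
    For contrast with the direct scheme, a minimal structured-grid sketch of PDE-based reinitialization, iterating ∂φ/∂τ = sign(φ₀)(1 − |∇φ|) until φ approximates a signed distance function, is given below; central differences are used for brevity, whereas the paper's finite element setting and production codes use upwind discretizations.

```python
import numpy as np

def reinitialize(phi, dx, iters=50):
    """Drive phi toward a signed distance function (|grad phi| = 1)."""
    sign0 = phi / np.sqrt(phi ** 2 + dx ** 2)    # smoothed sign(phi0)
    dtau = 0.5 * dx                              # pseudo-time step
    for _ in range(iters):
        gx, gy = np.gradient(phi, dx)            # central differences (sketch)
        grad_norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12
        phi = phi + dtau * sign0 * (1.0 - grad_norm)
    return phi

x = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, x)
phi0 = (X ** 2 + Y ** 2 - 0.25) * 3.0            # distorted circle level set
phi_sdf = reinitialize(phi0, dx=x[1] - x[0])
```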

  1. Response Ant Colony Optimization of End Milling Surface Roughness

    Directory of Open Access Journals (Sweden)

    Ahmed N. Abd Alla

    2010-03-01

Full Text Available Metal cutting processes are important due to increased consumer demands for quality metal-cutting-related products (more precise tolerances and better product surface roughness), which have driven the metal cutting industry to continuously improve quality control of metal cutting processes. This paper presents surface roughness optimization for the milling of mould aluminium alloy (AA6061-T6) with Response Ant Colony Optimization (RACO). The approach is based on the Response Surface Method (RSM) and Ant Colony Optimization (ACO). The main objectives are to find the optimized parameters and the most dominant variables (cutting speed, feed rate, axial depth and radial depth). The first-order model indicates that the feed rate is the most significant factor affecting surface roughness.
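
    The first-order response-surface step can be sketched as an ordinary least-squares fit of ln(Ra) against log-transformed cutting parameters; the sample data are hypothetical, and the ACO search over the fitted surface is omitted.

```python
import numpy as np

# Hypothetical experiment: v (m/min), f (mm/tooth), measured Ra (um).
data = np.array([
    [100, 0.10, 0.62], [140, 0.10, 0.55], [100, 0.20, 1.10],
    [140, 0.20, 0.95], [120, 0.15, 0.80],
])
# First-order model: ln(Ra) = b0 + b1*ln(v) + b2*ln(f), fit by least squares.
X = np.column_stack([np.ones(len(data)), np.log(data[:, 0]), np.log(data[:, 1])])
beta, *_ = np.linalg.lstsq(X, np.log(data[:, 2]), rcond=None)
print(beta)  # comparing |b1| and |b2| indicates which factor dominates Ra
```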

  2. Investigation and modelling of rubber stationary friction on rough surfaces

    International Nuclear Information System (INIS)

    Le Gal, A; Klueppel, M

    2008-01-01

This paper presents novel aspects regarding the physically motivated modelling of rubber stationary sliding friction on rough surfaces. The description of dynamic contact is treated within the framework of a generalized Greenwood-Williamson theory for rigid/soft frictional pairings. Due to the self-affinity of rough surfaces, both hysteresis and adhesion friction components arise from a multi-scale excitation of surface roughness. Besides a complete analytical formulation of contact parameters, the morphology of macrotexture is considered via the introduction of a second scaling range at large length scales which mostly contribute to hysteresis friction. Moreover, adhesion friction is related to the real area of contact combined with the kinetics of interfacial peeling effects. Friction experiments carried out with different rubbers on rough granite and asphalt point out the relevance of hysteresis and adhesion friction concepts on rough surfaces. The two scaling ranges approach significantly improves the description of wet and dry friction behaviour within the range of low sliding velocity. In addition, material and surface effects are predicted and understood on a physical basis. The applicability of such modelling is of high interest for materials developers and road constructors regarding the prediction of wet grip performance of tyres on road tracks

  3. Fuzzy Linguistic Optimization on Surface Roughness for CNC Turning

    Directory of Open Access Journals (Sweden)

    Tian-Syung Lan

    2010-01-01

Full Text Available Surface roughness is often considered the main objective in the contemporary computer numerical controlled (CNC) machining industry. Most existing optimization studies for CNC finish turning were either accomplished within certain manufacturing circumstances or achieved through numerous equipment operations. Therefore, a general deduction optimization scheme is deemed necessary for the industry. In this paper, the cutting depth, feed rate, speed, and tool nose runoff at low, medium, and high levels are considered to optimize the surface roughness for finish turning based on an L9(3^4) orthogonal array. Additionally, nine fuzzy control rules using triangular membership functions with respect to five linguistic grades of surface roughness are constructed. Considering four input and twenty output intervals, defuzzification using the center of gravity is then completed. Thus, the optimum general fuzzy linguistic parameters can be obtained. The confirmation experiment showed that the surface roughness obtained from the fuzzy linguistic optimization parameters is significantly improved compared to that from the benchmark. This paper proposes a general optimization scheme using an orthogonal-array fuzzy linguistic approach to the surface roughness of CNC turning with profound insight.
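
    A minimal sketch of the fuzzy machinery described above: triangular membership functions and centre-of-gravity defuzzification over a discretized roughness universe. The five linguistic grades, their supports and the fired rule strengths are illustrative assumptions, not the paper's rule base.

```python
import numpy as np

# Triangular membership function with support [a, c] and peak at b.
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0.0, 4.0, 401)        # surface roughness universe (um)
grades = {"VS": (0, 0.5, 1), "S": (0.5, 1, 1.5), "M": (1, 2, 3),
          "L": (2, 3, 3.5), "VL": (3, 3.5, 4)}

# Suppose the rule base fires "S" at 0.6 and "M" at 0.4 (max-min inference):
agg = np.maximum(np.minimum(tri(x, *grades["S"]), 0.6),
                 np.minimum(tri(x, *grades["M"]), 0.4))
crisp = np.trapz(agg * x, x) / np.trapz(agg, x)   # centre of gravity
print(crisp)
```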

  4. Investigation and modelling of rubber stationary friction on rough surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Le Gal, A; Klueppel, M [Deutsches Institut fuer Kautschuktechnologie, Eupener Strasse 33, D-30519 Hannover (Germany)

    2008-01-09

    This paper presents novel aspects regarding the physically motivated modelling of rubber stationary sliding friction on rough surfaces. The description of dynamic contact is treated within the framework of a generalized Greenwood-Williamson theory for rigid/soft frictional pairings. Due to the self-affinity of rough surfaces, both hysteresis and adhesion friction components arise from a multi-scale excitation of surface roughness. Besides a complete analytical formulation of contact parameters, the morphology of macrotexture is considered via the introduction of a second scaling range at large length scales, which mostly contributes to hysteresis friction. Moreover, adhesion friction is related to the real area of contact combined with the kinetics of interfacial peeling effects. Friction experiments carried out with different rubbers on rough granite and asphalt point out the relevance of hysteresis and adhesion friction concepts on rough surfaces. The two-scaling-range approach significantly improves the description of wet and dry friction behaviour within the range of low sliding velocity. In addition, material and surface effects are predicted and understood on a physical basis. The applicability of such modelling is of high interest for materials developers and road constructors regarding the prediction of wet grip performance of tyres on road tracks.

  5. Rough-fuzzy pattern recognition applications in bioinformatics and medical imaging

    CERN Document Server

    Maji, Pradipta

    2012-01-01

    Learn how to apply rough-fuzzy computing techniques to solve problems in bioinformatics and medical image processing. Emphasizing applications in bioinformatics and medical image processing, this text offers a clear framework that enables readers to take advantage of the latest rough-fuzzy computing techniques to build working pattern recognition models. The authors explain step by step how to integrate rough sets with fuzzy sets in order to best manage the uncertainties in mining large data sets. Chapters are logically organized according to the major phases of pattern recognition systems development.

  6. Use of ALS data for digital terrain extraction and roughness parametrization in floodplain areas

    Science.gov (United States)

    Idda, B.; Nardinocchi, C.; Marsella, M.

    2009-04-01

    In order to undertake structural and land-planning actions aimed at improving risk thresholds and reducing vulnerability associated with floodplain inundation, evaluating the area affected when a channel overflows its natural embankments is of essential importance. Floodplain models require the analysis of historical floodplain extents, the ground's morphological structure and hydraulic measurements. Within this set of information, a more detailed characterization of the hydraulic roughness, which controls the velocity of the hydraulic flow, is an interesting challenge for achieving a 2D spatial distribution in the model. Remote sensing optical and radar sensor techniques can be applied to generate 2D and 3D map products useful for delimiting the floodplain extent during the main event and extrapolating river cross-sections. Among these techniques, the enhancement that Airborne Laser Scanning (ALS) has brought through its capability to extract high-resolution, accurate Digital Terrain Models is unquestionable. In hydraulic applications, a number of studies have investigated the use of ALS for DTM generation and approached quantitative estimation of the hydraulic roughness. The aim of this work is the generation of a digital terrain model and the estimation of hydraulic parameters useful for floodplain models from Airborne Laser Scanner data collected in a test area, which encloses a portion of the drainage basin of the Mela river (Sicily, Italy). From the Airborne Laser Scanner dataset, a high-resolution Digital Elevation Model was first created; then, after applying filtering and classification processes, a dedicated procedure was implemented to automatically assign a value of the hydraulic roughness coefficient (in Manning's formulation) to each point within the floodplain. The obtained results allowed the generation of maps of equal roughness, depending on hydraulic level, based on the application of empirical formulas for specific-type vegetation at

  7. Can children identify and achieve goals for intervention? A randomized trial comparing two goal-setting approaches.

    Science.gov (United States)

    Vroland-Nordstrand, Kristina; Eliasson, Ann-Christin; Jacobsson, Helén; Johansson, Ulla; Krumlinde-Sundholm, Lena

    2016-06-01

    The efficacy of two different goal-setting approaches (children's self-identified goals and goals identified by parents) was compared in a goal-directed, task-oriented intervention. In this assessor-blinded parallel randomized trial, 34 children with disabilities (13 males, 21 females; mean age 9y, SD 1y 4mo) were randomized using concealed allocation to one of two 8-week, goal-directed, task-oriented intervention groups with different goal-setting approaches: (1) children's self-identified goals (n=18) using the Perceived Efficacy and Goal-Setting System, or (2) goals identified by parents (n=16) using the Canadian Occupational Performance Measure (COPM). Participants were recruited through eight paediatric rehabilitation centres and randomized between October 2011 and May 2013. The primary outcome measure was Goal Attainment Scaling and the secondary measure the COPM performance scale (COPM-P). Data were collected pre- and post-intervention and at the 5-month follow-up. There was no evidence of a difference in mean characteristics at baseline between groups. There was evidence of an increase in mean goal attainment (mean T score) in both groups after intervention (child-goal group: estimated mean difference [EMD] 27.84, 95% CI 22.93-32.76; parent-goal group: EMD 21.42, 95% CI 16.16-26.67). There was no evidence of a difference in the mean T scores post-intervention between the two groups (EMD 6.42, 95% CI -0.80 to 13.65). These results were sustained at the 5-month follow-up. Children's self-identified goals are achievable to the same extent as parent-identified goals and remain stable over time. Thus children can be trusted to identify their own goals for intervention, thereby influencing their involvement in their intervention programmes. © 2015 Mac Keith Press.

  8. Pain management for children with cerebral palsy in school settings in two cultures: action and reaction approaches.

    Science.gov (United States)

    Adolfsson, Margareta; Johnson, Ensa; Nilsson, Stefan

    2017-05-18

    Children with cerebral palsy (CP) face particular challenges, e.g. daily pain, that threaten their participation in school activities. This study focuses on how teachers, personal assistants, and clinicians in two countries with different cultural prerequisites, Sweden and South Africa, manage the pain of children in school settings. Participants' statements collected in focus groups were analysed using a directed qualitative content analysis framed by a Frequency of attendance-Intensity of involvement model, which was modified into a Knowing-Doing model. Findings indicated that pain management focused more on children's attendance in the classroom than on their involvement, and revealed a difference between countries in terms of action-versus-reaction approaches. Swedish participants reported action strategies to prevent pain, whereas South African participants primarily discussed interventions upon observing a child in pain. The differences might be due to the school and healthcare systems. To provide effective support when children with CP are in pain in school settings, an action-and-reaction approach would be optimal, and the use of alternative and augmentative communication strategies would help to communicate children's pain. As prevention of pain is desired, structured surveillance and treatment programs are recommended, along with trustful collaboration with parents and access to "hands-on" pain management when needed. Implications for rehabilitation • When providing support, hands-on interventions should be supplemented by structured preventive programs and routines for parent collaboration (action-and-reaction approach). • When regulating support, Sweden and South Africa can learn from each other; ○ In Sweden, the implementation of a prevention program has been successful. ○ In South Africa, the possibility of giving support directly when pain is observed in children has been beneficial.

  9. Robust nuclei segmentation in cyto-histopathological images using statistical level set approach with topology preserving constraint

    Science.gov (United States)

    Taheri, Shaghayegh; Fevens, Thomas; Bui, Tien D.

    2017-02-01

    Computerized assessments for diagnosis or malignancy grading of cyto-histopathological specimens have drawn increased attention in the field of digital pathology. Automatic segmentation of cell nuclei is a fundamental step in such automated systems. Despite considerable research, nuclei segmentation is still a challenging task due to noise, nonuniform illumination, and, most importantly in 2D projection images, overlapping and touching nuclei. In most published approaches, nuclei refinement is a post-processing step after segmentation, which usually refers to the task of detaching aggregated nuclei or merging over-segmented nuclei. In this work, we present a novel segmentation technique which effectively addresses the problem of individually segmenting touching or overlapping cell nuclei during the segmentation process. The proposed framework is a region-based segmentation method consisting of three major modules: i) the image is passed through a color deconvolution step to extract the desired stains; ii) the generalized fast radial symmetry (GFRS) transform is applied to the image, followed by non-maxima suppression, to specify the initial seed points for nuclei and their corresponding GFRS ellipses, which are interpreted as the initial nuclei borders for segmentation; iii) finally, these initial nuclei border curves are evolved through the use of a statistical level-set approach along with topology preserving criteria for simultaneous segmentation and separation of nuclei. The proposed method is evaluated using Hematoxylin and Eosin and fluorescently stained images, performing qualitative and quantitative analysis, showing that the method outperforms thresholding and watershed segmentation approaches.

  10. Tantalum films with well-controlled roughness grown by oblique incidence deposition

    Science.gov (United States)

    Rechendorff, K.; Hovgaard, M. B.; Chevallier, J.; Foss, M.; Besenbacher, F.

    2005-08-01

    We have investigated how tantalum films with well-controlled surface roughness can be grown by e-gun evaporation with an oblique angle of incidence between the evaporation flux and the surface normal. Due to a more pronounced shadowing effect, the root-mean-square roughness increases from about 2 to 33 nm as grazing incidence is approached. The exponent α, characterizing the scaling of the root-mean-square roughness with length scale, varies from 0.75 to 0.93, and a clear correlation is found between the angle of incidence and the root-mean-square roughness.
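
    For readers unfamiliar with the roughness scaling exponent, the sketch below shows one common way to estimate it: compute the root-mean-square roughness w(L) over windows of increasing length L and fit the slope of log w against log L. The synthetic random-walk profile is a stand-in for measured film topography, not data from this study.

    import numpy as np

    rng = np.random.default_rng(0)
    profile = np.cumsum(rng.normal(size=4096))  # synthetic self-affine profile

    def rms_roughness(z, window):
        # split into non-overlapping windows and average the rms height
        # fluctuation about each window's mean
        n = len(z) // window
        chunks = z[:n * window].reshape(n, window)
        return np.mean(np.std(chunks, axis=1))

    windows = np.array([8, 16, 32, 64, 128, 256])
    w = np.array([rms_roughness(profile, L) for L in windows])
    alpha, _ = np.polyfit(np.log(windows), np.log(w), 1)
    print(f"estimated scaling exponent alpha = {alpha:.2f}")  # ~0.5 for a random walk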

  11. Rock discontinuity surface roughness variation with scale

    Science.gov (United States)

    Bitenc, Maja; Kieffer, D. Scott; Khoshelham, Kourosh

    2017-04-01

    ABSTRACT: Rock discontinuity surface roughness refers to local departures of the discontinuity surface from planarity and is an important factor influencing shear resistance. In practice, the Joint Roughness Coefficient (JRC) roughness parameter is commonly relied upon and input to a shear strength criterion such as that developed by Barton and Choubey [1977]. The estimation of roughness by JRC is hindered, firstly, by the subjective nature of visually comparing the joint profile to the ten standard profiles. Secondly, when correlating the standard JRC values with other objective measures of roughness, the roughness idealization is limited to a 2D profile of 10 cm length. With the advance of measuring technologies that provide accurate, high-resolution 3D data of surface topography on different scales, new 3D roughness parameters have been developed. A desirable parameter is one that describes rock surface geometry as well as the direction and scale dependency of roughness. In this research a 3D roughness parameter developed by Grasselli [2001] and adapted by Tatone and Grasselli [2009] is adopted. It characterizes surface topography as the cumulative distribution of the local apparent inclination of asperities with respect to the shear strength (analysis) direction. Thus, the 3D roughness parameter describes the roughness amplitude and anisotropy (direction dependency), but does not capture the scale properties. In different studies the roughness scale-dependency has been attributed to data resolution or the size of the joint surface (see a summary of research in [Tatone and Grasselli, 2012]). Clearly, lower resolution results in lower roughness. On the other hand, investigations of the surface size effect have produced conflicting results. While some studies have shown a decrease in roughness with increasing discontinuity size (negative scale effect), others have shown the existence of positive scale effects, or both positive and negative scale effects. We

  12. AMSRIce03 Surface Roughness Data

    Data.gov (United States)

    National Aeronautics and Space Administration — Notice to Data Users: The documentation for this data set was provided solely by the Principal Investigator(s) and was not further developed, thoroughly reviewed, or...

  13. Constraining the roughness degree of slip heterogeneity

    KAUST Repository

    Causse, Mathieu

    2010-05-07

    This article investigates different approaches for assessing the degree of roughness of the slip distribution of future earthquakes. First, we analyze a database of slip images extracted from a suite of 152 finite-source rupture models from 80 events (Mw = 4.1–8.9). This results in an empirical model defining the distribution of the slip spectrum corner wave numbers (kc) as a function of moment magnitude. To reduce the “epistemic” uncertainty, we select a single slip model per event and screen out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of kc by kinematic inversion methods. We address this issue by performing tests on synthetic data with a frequency domain inversion method. These tests reveal that due to smoothing constraints used to stabilize the inversion process, kc tends to be underestimated. We then develop an alternative approach: (1) we establish a proportionality relationship between kc and the peak ground acceleration (PGA), using a k−2 kinematic source model, and (2) we analyze the PGA distribution, which is believed to be better constrained than slip images. These two methods reveal that kc follows a lognormal distribution, with similar standard deviations for both methods.
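
    The lognormal-distribution check described above can be sketched in a few lines: fit the mean and standard deviation of log(kc) and report the median corner wave number. The sample values below are fabricated for illustration and do not reproduce the paper's rupture-model database.

    import numpy as np

    kc_samples = np.array([0.8, 1.1, 0.6, 1.4, 0.9, 1.2, 0.7, 1.0])  # hypothetical kc values
    log_kc = np.log(kc_samples)
    mu, sigma = log_kc.mean(), log_kc.std(ddof=1)
    print(f"lognormal fit: median kc = {np.exp(mu):.2f}, log-std = {sigma:.2f}")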

  14. Bed roughness experiments in supply limited conditions

    NARCIS (Netherlands)

    Spekkers, Matthieu; Tuijnder, Arjan; Ribberink, Jan S.; Hulscher, Suzanne J.M.H.; Parsons, D.R.; Garlan, T.; Best, J.L.

    2008-01-01

    Reliable roughness models are of great importance, for example, when predicting water levels in rivers. The currently available roughness models are based on fully mobile bed conditions. However, in rivers where widely graded sediments are present more or less permanent armour layers can develop

  15. Axiomatic Characterizations of IVF Rough Approximation Operators

    Directory of Open Access Journals (Sweden)

    Guangji Yu

    2014-01-01

    Full Text Available This paper is devoted to the study of axiomatic characterizations of IVF rough approximation operators. IVF approximation spaces are investigated. It is proved that IVF operators satisfying certain axioms guarantee the existence of different types of IVF relations that produce the same operators; IVF rough approximation operators are thereby characterized by axioms.
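
    To make the notion of rough approximation operators concrete, the following minimal sketch implements the classical crisp special case: Pawlak's lower and upper approximations of a set X induced by a partition of the universe into equivalence classes. The IVF operators studied in the paper generalize these to interval-valued fuzzy relations, which this sketch does not attempt to cover.

    def lower_approximation(partition, X):
        # union of equivalence classes fully contained in X
        return {x for block in partition if block <= X for x in block}

    def upper_approximation(partition, X):
        # union of equivalence classes that intersect X
        return {x for block in partition if block & X for x in block}

    # toy universe {1..6} partitioned into three equivalence classes
    partition = [frozenset({1, 2}), frozenset({3, 4}), frozenset({5, 6})]
    X = {1, 2, 3}
    print(lower_approximation(partition, X))  # {1, 2}
    print(upper_approximation(partition, X))  # {1, 2, 3, 4}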

  16. Wall roughness induces asymptotic ultimate turbulence

    NARCIS (Netherlands)

    Zhu, Xiaojue; Verschoof, Ruben Adriaan; Bakhuis, Dennis; Huisman, Sander Gerard; Verzicco, Roberto; Sun, Chao; Lohse, Detlef

    2018-01-01

    Turbulence governs the transport of heat, mass and momentum on multiple scales. In real-world applications, wall-bounded turbulence typically involves surfaces that are rough; however, characterizing and understanding the effects of wall roughness on turbulence remains a challenge. Here, by

  17. Pre-IceBridge ATM L2 Icessn Elevation, Slope, and Roughness

    Data.gov (United States)

    National Aeronautics and Space Administration — The NASA Pre-IceBridge ATM Level-2 Icessn Elevation, Slope, and Roughness (BLATM2) data set contains resampled and smoothed elevation measurements of Arctic and...

  18. Electrochemically grown rough-textured nanowires

    International Nuclear Information System (INIS)

    Tyagi, Pawan; Postetter, David; Saragnese, Daniel; Papadakis, Stergios J.; Gracias, David H.

    2010-01-01

    Nanowires with a rough surface texture show unusual electronic, optical, and chemical properties; however, there are only a few existing methods for producing these nanowires. Here, we describe two methods for growing both free standing and lithographically patterned gold (Au) nanowires with a rough surface texture. The first strategy is based on the deposition of nanowires from a silver (Ag)-Au plating solution mixture that precipitates an Ag-Au cyanide complex during electrodeposition at low current densities. This complex disperses in the plating solution, thereby altering the nanowire growth to yield a rough surface texture. These nanowires are mass produced in alumina membranes. The second strategy produces long and rough Au nanowires on lithographically patternable nickel edge templates with corrugations formed by partial etching. These rough nanowires can be easily arrayed and integrated with microscale devices.

  19. Suppression of intrinsic roughness in encapsulated graphene

    DEFF Research Database (Denmark)

    Thomsen, Joachim Dahl; Gunst, Tue; Gregersen, Søren Schou

    2017-01-01

    Roughness in graphene is known to contribute to scattering effects which lower carrier mobility. Encapsulating graphene in hexagonal boron nitride (hBN) leads to a significant reduction in roughness and has become the de facto standard method for producing high-quality graphene devices. We have fabricated graphene samples encapsulated by hBN that are suspended over apertures in a substrate and used noncontact electron diffraction measurements in a transmission electron microscope to measure the roughness of encapsulated graphene inside such structures. We furthermore compare the roughness of these samples to suspended bare graphene and suspended graphene on hBN. The suspended heterostructures display a root mean square (rms) roughness down to 12 pm, considerably less than that previously reported for both suspended graphene and graphene on any substrate and identical within experimental error...

  20. A set cover approach to fast beam orientation optimization in intensity modulated radiation therapy for total marrow irradiation

    International Nuclear Information System (INIS)

    Lee, Chieh-Hsiu Jason; Aleman, Dionne M; Sharpe, Michael B

    2011-01-01

    The beam orientation optimization (BOO) problem in intensity modulated radiation therapy (IMRT) treatment planning is a nonlinear problem, and existing methods to obtain solutions to the BOO problem are time consuming due to the complex nature of the objective function and size of the solution space. These issues become even more difficult in total marrow irradiation (TMI), where many more beams must be used to cover a vastly larger treatment area than typical site-specific treatments (e.g., head-and-neck, prostate, etc). These complications result in excessively long computation times to develop IMRT treatment plans for TMI, so we attempt to develop methods that drastically reduce treatment planning time. We transform the BOO problem into the classical set cover problem (SCP) and use existing methods to solve SCP to obtain beam solutions. Although SCP is NP-Hard, our methods obtain beam solutions that result in quality treatments in minutes. We compare our approach to an integer programming solver for the SCP to illustrate the speed advantage of our approach.
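
    For background, the greedy heuristic below is the textbook approximation algorithm for the classical set cover problem; it is shown only to illustrate the SCP formulation that BOO is mapped onto, not as the authors' specific solution method. The beam names and the voxel sets they cover are hypothetical.

    def greedy_set_cover(universe, subsets):
        # repeatedly pick the subset covering the most uncovered elements
        uncovered = set(universe)
        chosen = []
        while uncovered:
            best = max(subsets, key=lambda name: len(subsets[name] & uncovered))
            if not subsets[best] & uncovered:
                raise ValueError("universe cannot be covered")
            chosen.append(best)
            uncovered -= subsets[best]
        return chosen

    # toy instance: candidate beams and the target voxels each one covers
    beams = {"b1": {1, 2, 3}, "b2": {3, 4}, "b3": {4, 5, 6}, "b4": {1, 6}}
    print(greedy_set_cover({1, 2, 3, 4, 5, 6}, beams))  # e.g. ['b1', 'b3']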

  1. A set cover approach to fast beam orientation optimization in intensity modulated radiation therapy for total marrow irradiation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chieh-Hsiu Jason; Aleman, Dionne M [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King' s College Road, Toronto, ON M5S 3G8 (Canada); Sharpe, Michael B, E-mail: chjlee@mie.utoronto.ca, E-mail: aleman@mie.utoronto.ca, E-mail: michael.sharpe@rmp.uhn.on.ca [Princess Margaret Hospital, Department of Radiation Oncology, University of Toronto, 610 University Avenue, Toronto, ON M5G 2M9 (Canada)

    2011-09-07

    The beam orientation optimization (BOO) problem in intensity modulated radiation therapy (IMRT) treatment planning is a nonlinear problem, and existing methods to obtain solutions to the BOO problem are time consuming due to the complex nature of the objective function and size of the solution space. These issues become even more difficult in total marrow irradiation (TMI), where many more beams must be used to cover a vastly larger treatment area than typical site-specific treatments (e.g., head-and-neck, prostate, etc). These complications result in excessively long computation times to develop IMRT treatment plans for TMI, so we attempt to develop methods that drastically reduce treatment planning time. We transform the BOO problem into the classical set cover problem (SCP) and use existing methods to solve SCP to obtain beam solutions. Although SCP is NP-Hard, our methods obtain beam solutions that result in quality treatments in minutes. We compare our approach to an integer programming solver for the SCP to illustrate the speed advantage of our approach.

  2. Establishing a regulatory value chain model: An innovative approach to strengthening medicines regulatory systems in resource-constrained settings.

    Science.gov (United States)

    Chahal, Harinder Singh; Kashfipour, Farrah; Susko, Matt; Feachem, Neelam Sekhri; Boyle, Colin

    2016-05-01

    Medicines Regulatory Authorities (MRAs) are an essential part of national health systems and are charged with protecting and promoting public health through regulation of medicines. However, MRAs in resource-constrained settings often struggle to provide effective oversight of market entry and use of health commodities. This paper proposes a regulatory value chain model (RVCM) that policymakers and regulators can use as a conceptual framework to guide investments aimed at strengthening regulatory systems. The RVCM incorporates nine core functions of MRAs into five modules: (i) clear guidelines and requirements; (ii) control of clinical trials; (iii) market authorization of medical products; (iv) pre-market quality control; and (v) post-market activities. Application of the RVCM allows national stakeholders to identify and prioritize investments according to where they can add the most value to the regulatory process. Depending on the economy, capacity, and needs of a country, some functions can be elevated to a regional or supranational level, while others can be maintained at the national level. In contrast to a "one size fits all" approach to regulation in which each country manages the full regulatory process at the national level, the RVCM encourages leveraging the expertise and capabilities of other MRAs where shared processes strengthen regulation. This value chain approach provides a framework for policymakers to maximize investment impact while striving to reach the goal of safe, affordable, and rapidly accessible medicines for all.

  3. To do good might hurt bad: exploring nurses' understanding and approach to suffering in forensic psychiatric settings.

    Science.gov (United States)

    Vincze, Mattias; Fredriksson, Lennart; Wiklund Gustin, Lena

    2015-04-01

    Patients in forensic psychiatric settings not only have to deal with their mental illness, but also memories of criminal activities and being involuntarily hospitalized. The aim of the present study was to explore how nurses working in forensic psychiatric services understand and approach patients' experiences of suffering. Data were generated by semistructured interviews with psychiatric nurses from two different forensic psychiatric units in Sweden. Data were analysed by means of a hermeneutic approach inspired by Ricoeur's hermeneutics. The findings are reflected in four main themes: (i) ignoring suffering; (ii) explaining suffering as a natural and inevitable part of daily life in the forensic context; (iii) ascribing meaning to suffering; and, (iv) being present in suffering. To engage in alleviating suffering is a struggle that demands courage and the strength to reflect on its character and consequences. To encounter suffering means that nurses are not only confronted with patients' suffering, but also their own reactions to those patients. If suffering is not recognized or encountered, there is a risk that actions may have a negative impact on patients. © 2015 Australian College of Mental Health Nurses Inc.

  4. An approach for setting evidence-based and stakeholder-informed research priorities in low- and middle-income countries.

    Science.gov (United States)

    Rehfuess, Eva A; Durão, Solange; Kyamanywa, Patrick; Meerpohl, Joerg J; Young, Taryn; Rohwer, Anke

    2016-04-01

    To derive evidence-based and stakeholder-informed research priorities for implementation in African settings, the international research consortium Collaboration for Evidence-Based Healthcare and Public Health in Africa (CEBHA+) developed and applied a pragmatic approach. First, an online survey and face-to-face consultation between CEBHA+ partners and policy-makers generated priority research areas. Second, evidence maps for these priority research areas identified gaps and related priority research questions. Finally, study protocols were developed for inclusion within a grant proposal. Policy and practice representatives were involved throughout the process. Tuberculosis, diabetes, hypertension and road traffic injuries were selected as priority research areas. Evidence maps covered screening and models of care for diabetes and hypertension, population-level prevention of diabetes and hypertension and their risk factors, and prevention and management of road traffic injuries. Analysis of these maps yielded three priority research questions on hypertension and diabetes and one on road traffic injuries. The four resulting study protocols employ a broad range of primary and secondary research methods; a fifth promotes an integrated methodological approach across all research activities. The CEBHA+ approach, in particular evidence mapping, helped to formulate research questions and study protocols that would be owned by African partners, fill gaps in the evidence base, address policy and practice needs and be feasible given the existing research infrastructure and expertise. The consortium believes that the continuous involvement of decision-makers throughout the research process is an important means of ensuring that studies are relevant to the African context and that findings are rapidly implemented.

  5. Screen-and-treat approaches for cervical cancer prevention in low-resource settings: a randomized controlled trial.

    Science.gov (United States)

    Denny, Lynette; Kuhn, Louise; De Souza, Michelle; Pollack, Amy E; Dupree, William; Wright, Thomas C

    2005-11-02

    Non-cytology-based screen-and-treat approaches for cervical cancer prevention have been developed for low-resource settings, but few have directly addressed efficacy. To determine the safety and efficacy of 2 screen-and-treat approaches for cervical cancer prevention that were designed to be more resource-appropriate than conventional cytology-based screening programs. Randomized clinical trial of 6555 nonpregnant women, aged 35 to 65 years, recruited through community outreach and conducted between June 2000 and December 2002 at ambulatory women's health clinics in Khayelitsha, South Africa. All patients were screened using human papillomavirus (HPV) DNA testing and visual inspection with acetic acid (VIA). Women were subsequently randomized to 1 of 3 groups: cryotherapy if she had a positive HPV DNA test result; cryotherapy if she had a positive VIA test result; or delayed evaluation. Biopsy-confirmed high-grade cervical cancer precursor lesions and cancer at 6 and 12 months in the HPV DNA and VIA groups compared with the delayed evaluation (control) group; complications after cryotherapy. The prevalence of high-grade cervical intraepithelial neoplasia and cancer (CIN 2+) was significantly lower in the 2 screen-and-treat groups at 6 months after randomization than in the delayed evaluation group. At 6 months, CIN 2+ was diagnosed in 0.80% (95% CI, 0.40%-1.20%) of the women in the HPV DNA group and 2.23% (95% CI, 1.57%-2.89%) in the VIA group, compared with 3.55% (95% CI, 2.71%-4.39%) in the delayed evaluation group. After cryotherapy, major complications were rare. Both screen-and-treat approaches are safe and result in a lower prevalence of high-grade cervical cancer precursor lesions compared with delayed evaluation at both 6 and 12 months. Trial Registration http://clinicaltrials.gov Identifier: NCT00233727.

  6. Simulations of roughness initiation and growth on railway rails

    Science.gov (United States)

    Sheng, X.; Thompson, D. J.; Jones, C. J. C.; Xie, G.; Iwnicki, S. D.; Allen, P.; Hsu, S. S.

    2006-06-01

    A model for the prediction of the initiation and growth of roughness on the rail is presented. The vertical interaction between a train and the track is calculated as a time history for single or multiple wheels moving on periodically supported rails, using a wavenumber-based approach. This vertical dynamic wheel/rail force arises from the varying stiffness due to discrete supports (i.e. parametric excitation) and the roughness excitation on the railhead. The tangential contact problem between the wheel and rail is modelled using an unsteady two-dimensional approach and also using the three-dimensional contact model, FASTSIM. This enables the slip and stick regions in the contact patch to be identified from the input geometry and creepage between the wheel and rail. The long-term wear growth is then predicted by applying repeated passages of the vehicle wheelsets, as part of an iterative solution.
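
    A heavily simplified sketch of the iterative wear-growth idea: the current roughness profile drives a dynamic wheel/rail force, a wear law converts that force into material loss, and the worn profile feeds the next passage. The force and wear functions here are placeholder assumptions, not the paper's wavenumber-based interaction model or the FASTSIM contact algorithm.

    import numpy as np

    def dynamic_force(roughness):
        # placeholder: unit static load plus a fluctuation following the
        # roughness gradient
        return 1.0 + np.gradient(roughness)

    def wear_increment(force, k_wear=1e-3):
        # placeholder wear law: material loss proportional to force excess
        return k_wear * np.maximum(force - 1.0, 0.0)

    roughness = 1e-3 * np.sin(np.linspace(0.0, 20.0 * np.pi, 2000))  # initial profile
    for passage in range(1000):  # repeated wheelset passages
        roughness -= wear_increment(dynamic_force(roughness))
    print(f"rms roughness after 1000 passages: {np.std(roughness):.2e}")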

  7. Surface roughness effects on turbulent Couette flow

    Science.gov (United States)

    Lee, Young Mo; Lee, Jae Hwa

    2017-11-01

    Direct numerical simulation of a turbulent Couette flow with two-dimensional (2-D) rod roughness is performed to examine the effects of the surface roughness. The Reynolds number based on the channel centerline laminar velocity (Uco) and channel half height (h) is Re = 7200. The 2-D rods are periodically arranged with a streamwise pitch of λ = 8k on the bottom wall, and the roughness height is k = 0.12h. It is shown that the wall-normal extent of the logarithmic layer is significantly shortened in the rough-wall turbulent Couette flow compared to a turbulent Couette flow with a smooth wall. Although the Reynolds stresses in the outer layer are increased in a turbulent channel flow with surface roughness, due to large-scale ejection motions produced by the 2-D rods, those of the rough-wall Couette flow are decreased. Isosurfaces of the time-averaged u-structures suggest that the decrease of turbulent activity near the centerline is associated with large-scale counter-rotating roll modes weakened by the surface roughness. This research was supported by the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1A09000537) and the Ministry of Science, ICT & Future Planning (NRF-2017R1A5A1015311).

  8. Financial Literacy; Strategies and Concepts in Understanding the Financial Planning With Self-Efficacy Theory and Goal Setting Theory of Motivation Approach

    OpenAIRE

    Mu’izzuddin, -; Taufik, -; Ghasarma, Reza; Putri, Leonita; Adam, Mohamad

    2017-01-01

    This article discusses strategies and concepts for understanding financial literacy using the self-efficacy theory and the goal setting theory of motivation. The discussion begins with the concept of behavioral finance, which addresses the links between financial concepts and behavior, and then proceeds to the concept and measurement of individuals' financial literacy, together with some approaches and factors that may affect it. Self-efficacy theory and goal setting theory of ...

  9. Optimization of surface roughness parameters in dry turning

    OpenAIRE

    R.A. Mahdavinejad; H. Sharifi Bidgoli

    2009-01-01

    Purpose: The precision of machine tools on the one hand, and the input setup parameters on the other, strongly influence the main output machining parameters such as stock removal, tool wear ratio and surface roughness. Design/methodology/approach: There are many input parameters that drive the variations of these output parameters. In CNC machines, optimization of the machining process in order to predict surface roughness is very important. Findings: From this point of view...

  10. Modeling Surface Roughness to Estimate Surface Moisture Using Radarsat-2 Quad Polarimetric SAR Data

    Science.gov (United States)

    Nurtyawan, R.; Saepuloh, A.; Budiharto, A.; Wikantika, K.

    2016-08-01

    Microwave backscattering from the earth's surface depends on several parameters, such as surface roughness and the dielectric constant of surface materials. These two parameters, related to water content and porosity, are crucial for estimating soil moisture. Soil moisture is an important parameter for ecological studies and a factor in maintaining the energy balance between land surface and atmosphere. Direct roughness measurements over a large area require extra time and cost, and the heterogeneity of roughness scales in applications such as hydrology, climate, and ecology could lead to modeling inaccuracies. In this study, we modeled surface roughness using Radarsat-2 quad Polarimetric Synthetic Aperture Radar (PolSAR) data. Statistical approaches to field roughness measurements were used to generate an appropriate roughness model. The modeling uses a physical SAR approach that predicts the radar backscattering coefficient in terms of the radar configuration (wavelength, polarization, and incidence angle) and soil parameters (surface roughness and dielectric constant). The surface roughness value is calculated using a modified version of the 1996 Campbell and Shepard model. The modification incorporates the backscattering coefficients (σ°) of the quad polarizations HH, HV and VV. To obtain an empirical surface roughness model from SAR backscattering intensity, we used forty-five sample points from field roughness measurements. We selected paddy fields in Indramayu district, West Java, Indonesia as the study area, chosen due to the intensive decrease of rice productivity in the Northern Coast region of West Java. A third-degree polynomial gives the most suitable data fit, with a coefficient of determination (R2) of about 0.82 and an RMSE of about 1.18 cm. This model is therefore used as the basis to generate the map of surface roughness.
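
    As an illustration of the final fitting step, the sketch below fits a third-degree polynomial between the backscattering coefficient and measured surface roughness and reports R2 and RMSE. The sample values are fabricated placeholders, not the forty-five field measurements used in the study.

    import numpy as np

    sigma0 = np.array([-14.0, -12.5, -11.0, -10.2, -9.0, -8.1, -7.3])  # dB, hypothetical
    roughness_cm = np.array([1.1, 1.6, 2.2, 2.6, 3.3, 3.9, 4.4])       # cm, hypothetical

    coeffs = np.polyfit(sigma0, roughness_cm, 3)   # third-degree polynomial
    predicted = np.polyval(coeffs, sigma0)

    ss_res = np.sum((roughness_cm - predicted) ** 2)
    ss_tot = np.sum((roughness_cm - roughness_cm.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = np.sqrt(np.mean((roughness_cm - predicted) ** 2))
    print(f"R2 = {r2:.2f}, RMSE = {rmse:.2f} cm")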

  11. A knowledge brokerage approach for assessing the impacts of the setting up young farmers policy measure in Greece

    International Nuclear Information System (INIS)

    Bournaris, Th.; Moulogianni, Ch.; Arampatzis, S.; Kiomourtzi, F.; Wascher, D.M.; Manos, B.

    2016-01-01

    This study explores Knowledge Brokerage (KB) aspects of an ex-post Impact Assessment (IA) for the Rural Development Programme (RDP) measure of setting up young farmers, under the Common Agricultural Policy (CAP), at the regional level in Northern Greece. The measure supports the entry of young farmers in agriculture by moving land from older to younger farmers. The aim of the study was to test a set of KB tools for improving the interaction between researchers and policy makers. Our analysis mainly focused on a suite of IA Support Modules to guide practitioners, and on a technical tool kit, a web-based contextualisation platform, to support the IA of the specific test case. Offering a structured approach towards IA, both the Support Modules and LIAISE-KIT allow framing the context, organisation, scheduling and method selection in the light of KB objectives. The evaluation of how IA Support Modules influence the Science Policy Interface (SPI), in the case of the ex-post assessment, demonstrated the high relevance of KB activities for facilitating the interaction between researchers and regional policy makers. The assessment bridges the gap between knowledge producers developing scientific output to be applied in a specific context, and knowledge users, who want clear messages regarding the policy challenges they face. Other conclusions include the need for specific guidelines and training for knowledge users, especially with regard to the use of tools. According to our findings, the consistent application of KB activities is a crucial pre-condition for successfully implementing IAs in future RDP measures.

  12. A knowledge brokerage approach for assessing the impacts of the setting up young farmers policy measure in Greece

    Energy Technology Data Exchange (ETDEWEB)

    Bournaris, Th.; Moulogianni, Ch.; Arampatzis, S.; Kiomourtzi, F. [Aristotle University of Thessaloniki (Greece); Wascher, D.M. [Alterra Wageningen UR (Netherlands); Manos, B., E-mail: manosb@agro.auth.gr [Aristotle University of Thessaloniki (Greece)

    2016-02-15

    This study explores Knowledge Brokerage (KB) aspects of an ex-post Impact Assessment (IA) for the Rural Development Programme (RDP) measure of setting up young farmers, under the Common Agricultural Policy (CAP), at the regional level in Northern Greece. The measure supports the entry of young farmers in agriculture by moving land from older to younger farmers. The aim of the study was to test a set of KB tools for improving the interaction between researchers and policy makers. Our analysis mainly focused on a suite of IA Support Modules to guide practitioners, and on a technical tool kit, a web-based contextualisation platform, to support the IA of the specific test case. Offering a structured approach towards IA, both the Support Modules and LIAISE-KIT allow framing the context, organisation, scheduling and method selection in the light of KB objectives. The evaluation of how IA Support Modules influence the Science Policy Interface (SPI), in the case of the ex-post assessment, demonstrated the high relevance of KB activities for facilitating the interaction between researchers and regional policy makers. The assessment bridges the gap between knowledge producers developing scientific output to be applied in a specific context, and knowledge users, who want clear messages regarding the policy challenges they face. Other conclusions include the need for specific guidelines and training for knowledge users, especially with regard to the use of tools. According to our findings, the consistent application of KB activities is a crucial pre-condition for successfully implementing IAs in future RDP measures.

  13. A Novel Approach to Study Medical Decision Making in the Clinical Setting: The "Own-point-of-view" Perspective.

    Science.gov (United States)

    Pelaccia, Thierry; Tardif, Jacques; Triby, Emmanuel; Charlin, Bernard

    2017-07-01

    Making diagnostic and therapeutic decisions is a critical activity among physicians. It relies on the ability of physicians to use cognitive processes and specific knowledge in the context of clinical reasoning. This ability is a core competency in physicians, especially in the field of emergency medicine, where the rate of diagnostic errors is high. Studies that explore medical decision making in authentic settings are increasing significantly. They are based on the use of qualitative methods applied at two separate times: 1) a video recording of the subject's actual activity in an authentic setting and 2) an interview with the subject, supported by the video recording. Traditionally, activity is recorded from an "external perspective"; i.e., a camera is positioned in the room in which the consultation takes place. This approach has many limitations, both technical and with respect to the validity of the data collected. The article aims at 1) describing how decision making is currently being studied, especially from a qualitative standpoint, and the reasons why new methods are needed, and 2) reporting how we used an original, innovative approach to study decision making in the field of emergency medicine, and findings from these studies to guide further use of this method. The method consists in recording the subject's activity from his own point of view, by fixing a microcamera on his temple or the branch of his glasses. An interview is then held on the basis of this recording, so that the subject being interviewed can relive the situation, facilitating the explanation of his reasoning with respect to his decisions and actions. We describe how this method has been used successfully in investigating medical decision making in emergency medicine. We provide details on how to use it optimally, taking into account the constraints associated with the practice of emergency medicine and the benefits in the study of clinical reasoning. The "own

  14. Increasing the effectiveness of instrumentation and control training programs using integrated training settings and a systematic approach to training

    International Nuclear Information System (INIS)

    McMahon, J.F.; Rakos, N.

    1992-01-01

    The performance of plant maintenance-related tasks assigned to instrumentation and control (I&C) technicians can be broken down into the physical skills required to do the task; resident knowledge of how to do the task; the effect of maintenance on plant operating conditions; interactions with other plant organizations such as operations, radiation protection, and quality control; and knowledge of the consequences of mis-action. A technician who has learned about the task in formal classroom presentations has not had the advantage of integrating that knowledge with the requisite physical and communication skills; hence, the first time these distinct and vital parts of the task equation are put together is on the job, during initial task performance. On-the-job training provides for the integration of skills and knowledge; however, this form of training is limited by plant conditions, the availability of supporting players, and the training experience levels of the personnel conducting the exercise. For licensed operations personnel, most nuclear utilities use formal classroom instruction and a full-scope control room simulator to achieve the integration of skills and knowledge in a controlled training environment. TU Electric has taken that same approach into maintenance areas by including identical plant equipment in a laboratory setting for a large portion of the training received by maintenance personnel at its Comanche Peak steam electric station. The policy of determining training needs and defining the scope of training by using the systematic approach to training has been highly effective and has provided training at a reasonable cost (approximately $18.00/student contact hour).

  15. A comparison of two standard-setting approaches in high-stakes clinical performance assessment using generalizability theory.

    Science.gov (United States)

    Richter Lagha, Regina A; Boscardin, Christy K; May, Win; Fung, Cha-Chi

    2012-08-01

    Scoring clinical assessments in a reliable and valid manner using criterion-referenced standards remains an important issue and directly affects decisions made regarding examinee proficiency. This generalizability study of students' clinical performance examination (CPX) scores examines the reliability of those scores and of their interpretation, particularly according to a newly introduced, "critical actions" criterion-referenced standard and scoring approach. The authors applied a generalizability framework to the performance scores of 477 third-year students attending three different medical schools in 2008. The norm-referenced standard included all station checklist items. The criterion-referenced standard included only those items deemed critical to patient care by a faculty panel. The authors calculated and compared variance components and generalizability coefficients for each standard across six common stations. Norm-referenced scores had moderate generalizability (ρ = 0.51), whereas criterion-referenced scores showed low dependability (φ = 0.20). The estimated 63% of measurement error associated with the person-by-station interaction suggests case specificity. Increasing the number of stations on the CPX from 6 to 24, an impractical solution both for cost and time, would still yield only moderate dependability (φ = 0.50). Though the performance assessment of complex skills, like clinical competence, seems intrinsically valid, careful consideration of the scoring standard and approach is needed to avoid misinterpretation of proficiency. Further study is needed to determine how best to improve the reliability of criterion-referenced scores, by implementing changes to the examination structure, the process of standard-setting, or both.

  16. Spin Hall effect by surface roughness

    KAUST Repository

    Zhou, Lingjun; Grigoryan, Vahram L.; Maekawa, Sadamichi; Wang, Xuhui; Xiao, Jiang

    2015-01-01

    induced by surface roughness subscribes only to the side-jump contribution but not the skew scattering. The paradigm proposed in this paper provides the second, if not the only, alternative to generate a sizable spin Hall effect.

  17. Roughness coefficients for stream channels in Arizona

    Science.gov (United States)

    Aldridge, B.N.; Garrett, J.M.

    1973-01-01

    When water flows in an open channel, energy is lost through friction along the banks and bed of the channel and through turbulence within the channel. The amount of energy lost is governed by channel roughness, which is expressed in terms of a roughness coefficient. An evaluation of the roughness coefficient is necessary in many hydraulic computations that involve flow in an open channel. Owing to the lack of a satisfactory quantitative procedure, the ability to evaluate roughness coefficients can be developed only through experience; however, a basic knowledge of the methods used to assign the coefficients and the factors affecting them will be a great help. One of the most commonly used equations in open-channel hydraulics is that of Manning. In US customary units the Manning equation is V = (1.486/n)R^(2/3)S^(1/2), where V is the mean velocity (ft/s), n is the roughness coefficient, R is the hydraulic radius (ft) and S is the energy slope.
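
    A short worked example of the Manning equation in US customary units; the channel values are hypothetical and serve only to show the computation.

    def manning_velocity(n, hydraulic_radius_ft, slope):
        # V = (1.486 / n) * R^(2/3) * S^(1/2)
        return (1.486 / n) * hydraulic_radius_ft ** (2.0 / 3.0) * slope ** 0.5

    # a gravel-bed channel with n = 0.035, R = 3 ft, S = 0.002
    print(f"V = {manning_velocity(0.035, 3.0, 0.002):.2f} ft/s")  # about 4 ft/s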

  18. Investigation on Surface Roughness in Cylindrical Grinding

    Science.gov (United States)

    Rudrapati, Ramesh; Bandyopadhyay, Asish; Pal, Pradip Kumar

    2011-01-01

    Cylindrical grinding is a complex machining process, and surface roughness is often a key factor in any machining process when considering machine tool or machining performance. Further, surface roughness is one of the measures of the technological quality of a product and a factor that greatly influences cost and quality. The present work is related to some aspects of surface finish in the context of traverse-cut cylindrical grinding. The parameters considered are infeed, longitudinal feed and work speed. Taguchi quality design is used to design the experiments and to identify the significantly important parameter(s) affecting the surface roughness. Using Response Surface Methodology (RSM), a second-order equation has been developed, and attempts have also been made to optimize the process in the context of surface roughness by using C programming.
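
    For context, the second-order response-surface model produced by RSM can be fitted by ordinary least squares, as in this minimal sketch; the coded factor levels and synthetic response below are illustrative assumptions, not the paper's experimental data.

    import numpy as np
    from itertools import product

    # coded levels (-1, 0, 1) for infeed (x1), longitudinal feed (x2), work speed (x3)
    runs = np.array(list(product([-1.0, 0.0, 1.0], repeat=3)))  # 27 runs
    rng = np.random.default_rng(1)
    # synthetic roughness response with a known quadratic structure plus noise
    ra = (2.0 + 0.5 * runs[:, 0] - 0.3 * runs[:, 1] + 0.2 * runs[:, 0] ** 2
          + rng.normal(0.0, 0.05, len(runs)))

    def design_matrix(x):
        # columns: intercept, linear, pure quadratic, and interaction terms
        x1, x2, x3 = x.T
        ones = np.ones(len(x))
        return np.column_stack([ones, x1, x2, x3,
                                x1 ** 2, x2 ** 2, x3 ** 2,
                                x1 * x2, x1 * x3, x2 * x3])

    beta, *_ = np.linalg.lstsq(design_matrix(runs), ra, rcond=None)
    print(np.round(beta, 2))  # recovered second-order coefficients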

  19. Rough horizontal plates: heat transfer and hysteresis

    Energy Technology Data Exchange (ETDEWEB)

    Tisserand, J-C; Gasteuil, Y; Pabiou, H; Castaing, B; Chilla, F [Universite de Lyon, ENS Lyon, CNRS, 46 Allee d'Italie, 69364 Lyon Cedex 7 (France); Creyssels, M [LMFA, CNRS, Ecole Centrale Lyon, 69134 Ecully Cedex (France); Gibert, M, E-mail: mathieu.creyssels@ec-lyon.fr [Also at MPI-DS (LFPN) Gottingen (Germany)

    2011-12-22

    To investigate the influence of a rough-wall boundary layer on turbulent heat transport, an experiment on high-Rayleigh-number convection in water is carried out in a Rayleigh-Benard cell with a rough lower plate and a smooth upper plate. A transition in the heat transport is observed when the thermal boundary layer thickness becomes comparable to or smaller than the roughness height. Moreover, at Rayleigh numbers larger than the threshold value, heat transport is found to be enhanced by up to 60%. This enhancement cannot be explained simply by an increase in the contact area of the rough surface, since the contact area is increased by only 40%. Finally, a simple model is proposed to explain the enhanced heat transport.

  20. Surface excitation parameter for rough surfaces

    International Nuclear Information System (INIS)

    Da, Bo; Salma, Khanam; Ji, Hui; Mao, Shifeng; Zhang, Guanghui; Wang, Xiaoping; Ding, Zejun

    2015-01-01

    Highlights: • Instead of providing a general mathematical model of roughness, we directly use a finite element triangle mesh method to build a fully 3D rough surface from the practical sample. • The surface plasmon excitation can be introduced to the realistic sample surface by dielectric response theory and the finite element method. • We found that SEPs calculated on the basis of an ideal plane surface model are still reliable for real sample surfaces with common roughness. Abstract: In order to assess quantitatively the importance of the surface excitation effect in surface electron spectroscopy measurements, the surface excitation parameter (SEP) has been introduced to describe the surface excitation probability as the average number of surface excitations that electrons undergo when they move through a solid surface in either the incoming or outgoing direction. Meanwhile, surface roughness is an inevitable issue in experiments, particularly when the sample surface is cleaned by ion beam bombardment. Surface roughness alters not only the electron elastic peak intensity but also the surface excitation intensity. However, almost all popular theoretical models for determining the SEP are based on an ideal plane surface approximation. In order to determine whether this approximation is adequate for SEP calculation, and the scope of this assumption, we propose a new way to determine the SEP for a rough surface by Monte Carlo simulation of the electron scattering process near a realistic rough surface, which is modeled by a finite element analysis method according to an AFM image. The elastic peak intensity is calculated for different electron incident and emission angles. Assuming surface excitations obey the Poisson distribution, the SEPs corrected for surface roughness are then obtained by analyzing the elastic peak intensity for several materials and for different incident and emission angles. It is found that the surface roughness only plays an

  1. A decision making method based on interval type-2 fuzzy sets: An approach for ambulance location preference

    Directory of Open Access Journals (Sweden)

    Lazim Abdullah

    2018-01-01

    Full Text Available Selecting the best location to deploy an ambulance is one of the important variables that need to be accounted for in improving emergency medical services. The selection requires both quantitative and qualitative evaluation. The fuzzy set based approach is one of the well-known theories that help decision makers handle fuzziness, uncertainty in decision making and vagueness of information. This paper proposes a new decision making method, Interval Type-2 Fuzzy Simple Additive Weighting (IT2 FSAW), to deal with uncertainty and vagueness. The new IT2 FSAW is applied to establish a preference in ambulance location. The decision making framework defines four criteria and five alternatives of ambulance location preference. Four experts attached to a Malaysian government hospital and a university medical center were interviewed to provide linguistic evaluations prior to analysis with the new IT2 FSAW. Implementation of the proposed method in the case of ambulance location preference suggests that the ‘road network’ is the best alternative for ambulance location. The results indicate that the proposed method offers a consensus solution for handling the vague and qualitative criteria of ambulance location preference.
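
    The additive-weighting core of IT2 FSAW can be sketched as follows, with hypothetical criteria weights and interval scores standing in for the linguistic evaluations; a full interval type-2 implementation would carry complete membership functions and a type-reduction step, both omitted here.

    import numpy as np

    # interval scores (lower, upper) on four criteria, per alternative (assumed values)
    alternatives = {
        "road network":  [(0.7, 0.9), (0.6, 0.8), (0.8, 1.0), (0.5, 0.7)],
        "near hospital": [(0.6, 0.8), (0.7, 0.9), (0.5, 0.7), (0.6, 0.8)],
    }
    weights = np.array([0.4, 0.3, 0.2, 0.1])  # assumed criteria weights

    def interval_saw(ratings):
        # weighted sum of the interval bounds, ranked by the interval midpoint
        lo = np.array([r[0] for r in ratings]) @ weights
        hi = np.array([r[1] for r in ratings]) @ weights
        return (lo + hi) / 2.0

    ranked = sorted(alternatives, key=lambda a: interval_saw(alternatives[a]), reverse=True)
    print(ranked)  # best alternative first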

  2. The ambient dose equivalent at flight altitudes: a fit to a large set of data using a Bayesian approach

    International Nuclear Information System (INIS)

    Wissmann, F; Reginatto, M; Moeller, T

    2010-01-01

    The problem of finding a simple, generally applicable description of worldwide measured ambient dose equivalent rates at aviation altitudes between 8 and 12 km is difficult to solve due to the large variety of functional forms and parametrisations that are possible. We present an approach that uses Bayesian statistics and Monte Carlo methods to fit mathematical models to a large set of data and to compare the different models. About 2500 data points measured in the periods 1997-1999 and 2003-2006 were used. Since the data cover wide ranges of barometric altitude, vertical cut-off rigidity and phases in the solar cycle 23, we developed functions which depend on these three variables. Whereas the dependence on the vertical cut-off rigidity is described by an exponential, the dependences on barometric altitude and solar activity may be approximated by linear functions in the ranges under consideration. Therefore, a simple Taylor expansion was used to define different models and to investigate the relevance of the different expansion coefficients. With the method presented here, it is possible to obtain probability distributions for each expansion coefficient and thus to extract reliable uncertainties even for the dose rate evaluated. The resulting function agrees well with new measurements made at fixed geographic positions and during long haul flights covering a wide range of latitudes.
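
    A minimal sketch of the functional form described above, linear in barometric altitude and solar activity and exponential in the vertical cut-off rigidity; the coefficient values are placeholders, not the fitted posterior means from the paper.

    import numpy as np

    def dose_rate(h_km, rc_GV, s, a0=5.0, a1=0.6, a2=-0.3, b=0.08):
        # ambient dose equivalent rate in illustrative units (uSv/h)
        return (a0 + a1 * (h_km - 10.0) + a2 * s) * np.exp(-b * rc_GV)

    print(f"{dose_rate(h_km=11.0, rc_GV=2.0, s=0.5):.2f} uSv/h")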

  3. Implementing Child-focused Activity Meter Utilization into the Elementary School Classroom Setting Using a Collaborative Community-based Approach.

    Science.gov (United States)

    Lynch, B A; Jones, A; Biggs, B K; Kaufman, T; Cristiani, V; Kumar, S; Quigg, S; Maxson, J; Swenson, L; Jacobson, N

    2015-12-01

    The prevalence of pediatric obesity has increased over the past 3 decades and is a pressing public health problem. New technology advancements that can encourage more physical activity in children are needed. The Zamzee program is an activity meter linked to a motivational website designed for children 8-14 years of age. The objective of the study was to use a collaborative approach between a medical center, the private sector and local school staff to assess the feasibility of using the Zamzee Program in the school-based setting to improve physical activity levels in children. This was a pilot 8-week observational study offered to all children in one fifth-grade classroom. Body mass index (BMI), the amount of physical activity by 3-day recall survey, and satisfaction with usability of the Zamzee Program were measured pre- and post-study. Out of 11 children who enrolled in the study, 7 completed all study activities. In those who completed the study, the median (interquartile range) total activity time by survey increased by 17 (1042) minutes and the BMI percentile change was 0 (8). Both children and their caregivers found the Zamzee Activity Meter (6/7) and website (6/7) "very easy" or "easy" to use. The Zamzee Program was found to be usable but did not significantly improve physical activity levels or BMI. Collaborative obesity intervention projects involving medical centers, the private sector and local schools are feasible, but their effectiveness needs to be evaluated in larger-scale studies.

  4. A risk modelling approach for setting process hygiene criteria for Salmonella in pork cutting plants, based on enterococci

    DEFF Research Database (Denmark)

    Bollerslev, Anne Mette; Hansen, Tina Beck; Nauta, Maarten

    2015-01-01

    Pork is known to be a key source of foodborne salmonellosis. Processing steps from slaughter to cutting and retail contribute to the Salmonella consumer exposure. In two extensive surveys comprising a total of 5,310 pork samples, cuttings and minced meat were analysed semiquantitatively for Salmonella and quantitatively for the hygiene indicator enterococci. The samples were collected in 2001/2002 and 2010/2011 in Danish cutting plants, retail supermarkets and butcher shops. A positive correlation between prevalence of Salmonella and number of enterococci was shown (Hansen et al., 2013). As enterococci and Salmonella share a lower growth limit around 5°C, the positive correlation could imply that the meat had been exposed to temperatures above 5°C. Based on these findings, the objective of this study was to develop an approach for setting process hygiene criteria for predicting Salmonella risk...
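
    One way to turn such a correlation into a process hygiene criterion is to model the probability of a Salmonella-positive sample as a function of the enterococci count and then cap the indicator at a level where the predicted risk stays acceptable. The sketch below only illustrates that idea on simulated data; the counts, risk threshold and model family are hypothetical, not the study's actual risk model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical survey data: log10 enterococci counts (CFU/g) and
# Salmonella detection (1 = positive), standing in for the Danish samples.
rng = np.random.default_rng(1)
log_entero = rng.normal(2.0, 1.0, 500)
p_true = 1 / (1 + np.exp(-(log_entero - 3.0)))   # assumed positive correlation
salmonella = rng.binomial(1, 0.2 * p_true)

model = LogisticRegression().fit(log_entero.reshape(-1, 1), salmonella)

# A process hygiene criterion could then be the largest indicator level at
# which predicted Salmonella risk stays below a chosen threshold (5% here).
grid = np.linspace(0, 5, 51).reshape(-1, 1)
risk = model.predict_proba(grid)[:, 1]
criterion = grid[risk < 0.05].max()
print(f"max acceptable log10 enterococci: {criterion:.1f}")
```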

  5. Small-Scale Surf Zone Geometric Roughness

    Science.gov (United States)

    2017-12-01

    Small-scale surf zone geometric roughness was measured using stereo imagery techniques. A waterproof two-camera system with self-logging and internal power was developed using commercial-off-the-shelf components and commercial software for operations 1 m above the sea surface within the surf zone, providing roughness estimates. (Subject terms: surface roughness, nearshore, aerodynamic roughness, surf zone, structure from motion, 3D imagery.)

  6. How supercontinents and superoceans affect seafloor roughness.

    Science.gov (United States)

    Whittaker, Joanne M; Müller, R Dietmar; Roest, Walter R; Wessel, Paul; Smith, Walter H F

    2008-12-18

    Seafloor roughness varies considerably across the world's ocean basins and is fundamental to controlling the circulation and mixing of heat in the ocean and dissipating eddy kinetic energy. Models derived from analyses of active mid-ocean ridges suggest that ocean floor roughness depends on seafloor spreading rates, with rougher basement forming below a half-spreading rate threshold of 30-35 mm yr^-1 (refs 4, 5), as well as on the local interaction of mid-ocean ridges with mantle plumes or cold-spots. Here we present a global analysis of marine gravity-derived roughness, sediment thickness, seafloor isochrons and palaeo-spreading rates of Cretaceous to Cenozoic ridge flanks. Our analysis reveals that, after eliminating effects related to spreading rate and sediment thickness, residual roughness anomalies of 5-20 mGal remain over large swaths of ocean floor. We found that the roughness as a function of palaeo-spreading directions and isochron orientations indicates that most of the observed excess roughness is not related to spreading obliquity, as this effect is restricted to relatively rare occurrences of very high obliquity angles (>45°). Cretaceous Atlantic ocean floor, formed over mantle previously overlain by the Pangaea supercontinent, displays anomalously low roughness away from mantle plumes and is independent of spreading rates. We attribute this observation to a sub-Pangaean supercontinental mantle temperature anomaly leading to slightly thicker than normal Late Jurassic and Cretaceous Atlantic crust, reduced brittle fracturing and smoother basement relief. In contrast, ocean crust formed above Pacific superswells, probably reflecting metasomatized lithosphere underlain by mantle at only slightly elevated temperatures, is not associated with basement roughness anomalies. These results highlight a fundamental difference in the nature of large-scale mantle upwellings below supercontinents and superoceans, and their impact on oceanic crust.

  7. Role of surface roughness in superlubricity

    International Nuclear Information System (INIS)

    Tartaglino, U; Samoilov, V N; Persson, B N J

    2006-01-01

    We study the sliding of elastic solids in adhesive contact with flat and rough interfaces. We consider the dependence of the sliding friction on the elastic modulus of the solids. For elastically hard solids with planar surfaces with incommensurate surface structures we observe extremely low friction (superlubricity), which very abruptly increases as the elastic modulus decreases. We show that even a relatively small surface roughness may completely kill the superlubricity state.

  8. Rough terrain motion planning for actively reconfigurable mobile robots

    Energy Technology Data Exchange (ETDEWEB)

    Brunner, Michael

    2015-02-05

    In the aftermath of the Tohoku earthquake and the nuclear meltdown at the power plant of Fukushima Daiichi in 2011, reconfigurable robots like the iRobot Packbot were deployed. Instead of humans, the robots were used to investigate contaminated areas. Other incidents are the two major earthquakes in Northern Italy in May 2012. Besides many casualties, a large number of historical buildings was severely damaged. Due to the imminent danger of collapse, it was too dangerous for rescue personnel to enter many of the buildings. Therefore, the sites were inspected by reconfigurable robots, which are able to traverse the rubble and debris of the partially destroyed buildings. This thesis develops a navigation system enabling wheeled and tracked robots to safely traverse rough terrain and challenging structures. It consists of a planning mechanism and a controller. The focus of this thesis, however, is on the contribution to motion planning. The planning scheme employs a hierarchical approach to motion planning for actively reconfigurable robots in rough environments. Using a map of the environment the algorithm estimates the traversability under the consideration of uncertainties. Based on this analysis, an initial path search determines an approximate solution with respect to the robot's operating limits. Subsequently, a detailed planning step refines the initial path where it is required. The refinement step considers the robot's actuators and stability in addition to the quantities of the first search. Determining the robot-terrain interaction is very important in rough terrain. This thesis presents two path refinement approaches: a deterministic and a randomized approach. The experimental evaluation investigates the separate components of the planning scheme, the robot-terrain interaction for instance. In simulation as well as in real world experiments the evaluation demonstrates the necessity of such a planning algorithm in rough terrain and it provides
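
    The two-stage idea (a coarse search on an uncertainty-aware traversability map, then local refinement) can be illustrated with a toy grid planner. The sketch below is a generic A* search over a hypothetical costmap with an uncertainty penalty; the thesis' actual traversability estimation, robot model and refinement stages are far richer.

```python
import heapq
import numpy as np

# Hypothetical traversability map: mean cost and uncertainty per cell,
# standing in for the terrain analysis step described above.
rng = np.random.default_rng(2)
mean_cost = rng.uniform(1.0, 3.0, (50, 50))
uncertainty = rng.uniform(0.0, 1.0, (50, 50))
cost = mean_cost + 2.0 * uncertainty        # penalize uncertain terrain
cost[cost > 4.0] = np.inf                   # treat as untraversable

def astar(cost, start, goal):
    """Coarse initial path search on the traversability grid."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), start)]
    g = {start: 0.0}
    parent = {start: None}
    closed = set()
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur in closed:
            continue
        closed.add(cur)
        if cur == goal:                      # walk back along parents
            path = [cur]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for d in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + d[0], cur[1] + d[1])
            if not (0 <= nxt[0] < cost.shape[0] and 0 <= nxt[1] < cost.shape[1]):
                continue
            ng = g[cur] + cost[nxt]
            if np.isfinite(ng) and ng < g.get(nxt, np.inf):
                g[nxt] = ng
                parent[nxt] = cur
                heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

path = astar(cost, (0, 0), (49, 49))
# A second, detailed stage would then refine this path where actuator
# limits and stability constraints require it, as the thesis describes.
```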

  9. Rough terrain motion planning for actively reconfigurable mobile robots

    International Nuclear Information System (INIS)

    Brunner, Michael

    2015-01-01

    In the aftermath of the Tohoku earthquake and the nuclear meltdown at the power plant of Fukushima Daiichi in 2011, reconfigurable robots like the iRobot Packbot were deployed. Instead of humans, the robots were used to investigate contaminated areas. Other incidents are the two major earthquakes in Northern Italy in May 2012. Besides many casualties, a large number of historical buildings was severely damaged. Due to the imminent danger of collapse, it was too dangerous for rescue personnel to enter many of the buildings. Therefore, the sites were inspected by reconfigurable robots, which are able to traverse the rubble and debris of the partially destroyed buildings. This thesis develops a navigation system enabling wheeled and tracked robots to safely traverse rough terrain and challenging structures. It consists of a planning mechanism and a controller. The focus of this thesis, however, is on the contribution to motion planning. The planning scheme employs a hierarchical approach to motion planning for actively reconfigurable robots in rough environments. Using a map of the environment the algorithm estimates the traversability under the consideration of uncertainties. Based on this analysis, an initial path search determines an approximate solution with respect to the robot's operating limits. Subsequently, a detailed planning step refines the initial path where it is required. The refinement step considers the robot's actuators and stability in addition to the quantities of the first search. Determining the robot-terrain interaction is very important in rough terrain. This thesis presents two path refinement approaches: a deterministic and a randomized approach. The experimental evaluation investigates the separate components of the planning scheme, the robot-terrain interaction for instance. In simulation as well as in real world experiments the evaluation demonstrates the necessity of such a planning algorithm in rough terrain and it provides

  10. Wall roughness induces asymptotic ultimate turbulence

    Science.gov (United States)

    Zhu, Xiaojue; Verschoof, Ruben A.; Bakhuis, Dennis; Huisman, Sander G.; Verzicco, Roberto; Sun, Chao; Lohse, Detlef

    2018-04-01

    Turbulence governs the transport of heat, mass and momentum on multiple scales. In real-world applications, wall-bounded turbulence typically involves surfaces that are rough; however, characterizing and understanding the effects of wall roughness on turbulence remains a challenge. Here, by combining extensive experiments and numerical simulations, we examine the paradigmatic Taylor-Couette system, which describes the closed flow between two independently rotating coaxial cylinders. We show how wall roughness greatly enhances the overall transport properties and the corresponding scaling exponents associated with wall-bounded turbulence. We reveal that if only one of the walls is rough, the bulk velocity is slaved to the rough side, due to the much stronger coupling to that wall by the detaching flow structures. If both walls are rough, the viscosity dependence is eliminated, giving rise to asymptotic ultimate turbulence—the upper limit of transport—the existence of which was predicted more than 50 years ago. In this limit, the scaling laws can be extrapolated to arbitrarily large Reynolds numbers.

  11. Roughness characterization of EUV multilayer coatings and ultra-smooth surfaces by light scattering

    Science.gov (United States)

    Trost, M.; Schröder, S.; Lin, C. C.; Duparré, A.; Tünnermann, A.

    2012-09-01

    Optical components for the extreme ultraviolet (EUV) face stringent requirements for surface finish, because even small amounts of surface and interface roughness can cause significant scattering losses and impair image quality. In this paper, we investigate the roughness evolution of Mo/Si multilayers by analyzing the scattering behavior at a wavelength of 13.5 nm as well as taking atomic force microscopy (AFM) measurements before and after coating. Furthermore, a new approach to measure substrate roughness is presented, which is based on light scattering measurements at 405 nm. The high robustness and sensitivity to roughness of this method are illustrated using an EUV mask blank with a high-spatial-frequency roughness as low as 0.04 nm.
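
    For reference, in the smooth-surface limit the rms roughness can be estimated from the total integrated scatter (TIS) via the standard relation TIS ≈ (4πσ cosθ / λ)². The snippet below simply inverts that textbook approximation; the example numbers are hypothetical, and the paper's actual analysis (angle-resolved scattering at 405 nm and 13.5 nm) is more elaborate.

```python
import numpy as np

def rms_roughness_from_tis(tis, wavelength_nm, theta_deg=0.0):
    """Invert the smooth-surface TIS relation TIS ~ (4*pi*sigma*cos(theta)/lambda)^2
    for the rms roughness sigma. Standard approximation only; the paper's
    analysis may use the full PSD-based scattering theory instead."""
    theta = np.radians(theta_deg)
    return wavelength_nm * np.sqrt(tis) / (4 * np.pi * np.cos(theta))

# Example (hypothetical): a total integrated scatter of 1.5e-6 at 405 nm
# corresponds to ~0.04 nm rms, the regime quoted for EUV mask blanks.
print(rms_roughness_from_tis(1.5e-6, 405.0))
```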

  12. Prediction of reduced thermal conductivity in nano-engineered rough semiconductor nanowires

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Pierre N; Aksamija, Zlatan; Ravaioli, Umberto [Department of Electrical and Computer Engineering, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States); Beckman Institute for Advanced Technology and Science, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States); Pop, Eric, E-mail: pmartin7@illinois.ed, E-mail: epop@illinois.ed [Department of Electrical and Computer Engineering, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States); Beckman Institute for Advanced Technology and Science, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States); Micro- and Nano-Technology Laboratory, University of Illinois, Urbana-Champaign, Urbana, IL 61801 (United States)

    2009-11-15

    We explore phonon decay processes necessary to the design of efficient rough semiconductor nanowire (NW) thermoelectric devices. A novel approach to surface roughness-limited thermal conductivity of Si, Ge, and GaAs NW with diameter D < 500 nm is presented. In particular, a frequency-dependent phonon scattering rate is computed from perturbation theory and related to a description of the surface through the root-mean-square roughness height Δ and autocovariance length L. Using a full phonon dispersion relation, the thermal conductivity varies quadratically with diameter and roughness as (D/Δ)^2. Computed results are in agreement with experimental data, and predict remarkably low thermal conductivity below 1 W/m/K in rough-etched 56 nm Ge and GaAs NW at room temperature.

  13. Particle roughness in magnetorheology: effect on the strength of the field-induced structures

    International Nuclear Information System (INIS)

    Vereda, F; Segovia-Gutiérrez, J P; De Vicente, J; Hidalgo-Alvarez, R

    2015-01-01

    We report a study on the effect of particle roughness on the strength of the field-induced structures of magnetorheological (MR) fluids in the quasi-static regime. We prepared one set of MR fluids with carbonyl iron particles and another set with magnetite particles, and in both sets we had particles with different degrees of surface roughness. Small amplitude oscillatory shear (SAOS) magnetosweeps and steady shear (SS) tests were carried out on the suspensions to measure their elastic modulus (G′) and static yield stress (τ_static). Results for both the iron and the magnetite sets of suspensions were consistent: for the MR fluids prepared with rougher particles, G′ increased at smaller fields and τ_static was ca. 20% larger than for the suspensions prepared with relatively smooth particles. In addition to the experimental study, we carried out finite element method calculations to assess the effect of particle roughness on the magnetic interaction between particles. These calculations showed that roughness can facilitate the magnetization of the particles, thus increasing the magnetic energy of the system for a given field, but that this effect depends on the concrete morphology of the surface. For our real systems, no major differences were observed between the magnetization cycles of the MR fluids prepared with particles with different degrees of roughness, which implied that the effect of roughness on the measured G′ and τ_static was due mainly to friction between the solid surfaces of adjacent particles. (paper)

  14. Pollutant Plume Dispersion in the Atmospheric Boundary Layer over Idealized Urban Roughness

    Science.gov (United States)

    Wong, Colman C. C.; Liu, Chun-Ho

    2013-05-01

    The Gaussian model of plume dispersion is commonly used for pollutant concentration estimates. However, its major parameters, dispersion coefficients, barely account for terrain configuration and surface roughness. Large-scale roughness elements (e.g. buildings in urban areas) can substantially modify the ground features together with the pollutant transport in the atmospheric boundary layer over urban roughness (also known as the urban boundary layer, UBL). This study is thus conceived to investigate how urban roughness affects the flow structure and vertical dispersion coefficient in the UBL. Large-eddy simulation (LES) is carried out to examine the plume dispersion from a ground-level pollutant (area) source over idealized street canyons for cross flows in neutral stratification. A range of building-height-to-street-width (aspect) ratios, covering the regimes of skimming flow, wake interference, and isolated roughness, is employed to control the surface roughness. Apart from the widely used aerodynamic resistance or roughness function, the friction factor is another suitable parameter that measures the drag imposed by urban roughness quantitatively. Previous results from laboratory experiments and mathematical modelling also support the aforementioned approach for both two- and three-dimensional roughness elements. Comparing the UBL plume behaviour, the LES results show that the pollutant dispersion strongly depends on the friction factor. Empirical studies reveal that the vertical dispersion coefficient increases with increasing friction factor in the skimming flow regime (lower resistance) but is more uniform in the regimes of wake interference and isolated roughness (higher resistance). Hence, it is proposed that the friction factor and flow regimes could be adopted concurrently for pollutant concentration estimate in the UBL over urban street canyons of different roughness.

  15. Rough mill simulator version 3.0: an analysis tool for refining rough mill operations

    Science.gov (United States)

    Edward Thomas; Joel Weiss

    2006-01-01

    ROMI-3 is a rough mill computer simulation package designed to be used by both rip-first and chop-first rough mill operators and researchers. ROMI-3 allows users to model and examine the complex relationships among cutting bill, lumber grade mix, processing options, and their impact on rough mill yield and efficiency. Integrated into the ROMI-3 software is a new least-...

  16. Investigation of the effect of cutting speed on the Surface Roughness parameters in CNC End Milling using Artificial Neural Network

    International Nuclear Information System (INIS)

    Al Hazza, Muataz H F; Adesta, Erry Y T

    2013-01-01

    This research presents the effect of high cutting speed on the surface roughness in the end milling process by using the Artificial Neural Network (ANN). An experimental investigation was conducted to measure the surface roughness for end milling. A set of sparse experimental data for finish end milling on AISI H13 at a hardness of 48 HRC was collected. The artificial neural network (ANN) was then applied to simulate and study the effect of high cutting speed on the surface roughness.
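
    A minimal version of such an ANN surrogate can be built with a small multilayer perceptron regressor. In the sketch below, the cutting-condition columns, Ra values and network size are invented for illustration; the paper's experimental design, inputs and network topology are not specified here and may differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical finish-milling data for AISI H13: cutting speed (m/min),
# feed (mm/tooth), depth of cut (mm) -> measured Ra (um).
X = np.array([
    [150, 0.05, 0.2], [200, 0.05, 0.2], [250, 0.05, 0.2],
    [150, 0.10, 0.2], [200, 0.10, 0.2], [250, 0.10, 0.2],
    [150, 0.05, 0.4], [200, 0.05, 0.4], [250, 0.05, 0.4],
])
ra = np.array([0.42, 0.35, 0.30, 0.55, 0.48, 0.41, 0.50, 0.43, 0.37])

# Scale inputs, then fit a small MLP to learn Ra = f(speed, feed, depth).
ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
ann.fit(X, ra)
print(ann.predict([[225, 0.05, 0.2]]))  # predicted Ra at an unseen speed
```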

  17. Thickness and roughness measurements of nano thin films by interference

    Directory of Open Access Journals (Sweden)

    A Sabzalipour

    2011-06-01

    Full Text Available In the standard optical interference fringes approach, the thickness of a thin film is measured from the shift of the interference fringes at a step edge of the film on the substrate. In order to improve the measurement precision of this popular method, the interference fringe intensity curve was extracted and analyzed before and after the step preparation. By this method, one can measure film thicknesses of a few nanometers. In addition, using the interference fringe intensity curve and its fluctuations, the surface roughness is measured with an accuracy of a few nanometers. Comparison of our results with some direct methods of thickness and roughness measurement, i.e. a surface profilometer and atomic force microscopy, confirms the accuracy of the suggested improvements.

  18. Compact terahertz spectrometer based on disordered rough surfaces

    Science.gov (United States)

    Yang, Tao; Jiang, Bing; Ge, Jia-cheng; Zhu, Yong-yuan; Li, Xing-ao; Huang, Wei

    2018-01-01

    In this paper, a compact spectrometer based on disordered rough surfaces for operation in the terahertz band is presented. The proposed spectrometer consists of three components, which are used for dispersion, modulation and detection respectively. The disordered rough surfaces, which act as the dispersion component, are modulated by the modulation component. Different scattering intensities are captured by the detection component at different extents of modulation. With a calibration measurement process, one can reconstruct the spectrum of the probe terahertz beam by solving a system of simultaneous linear equations. A Tikhonov regularization approach has been implemented to improve the accuracy of the spectral reconstruction. The reported broadband, compact, high-resolution terahertz spectrometer is well suited for portable terahertz spectroscopy applications.
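
    The reconstruction step described above amounts to solving a calibrated linear system b = Ax for the spectrum x, stabilized by Tikhonov regularization. Here is a minimal sketch on synthetic data; the calibration matrix, noise level and regularization weight are hypothetical stand-ins for the device's measured response.

```python
import numpy as np

# Hypothetical calibration: A[i, j] = detector response of modulation state i
# to a monochromatic input in wavelength bin j (measured once per device).
rng = np.random.default_rng(3)
n_meas, n_bins = 40, 60
A = rng.uniform(0.0, 1.0, (n_meas, n_bins))

true_spectrum = np.exp(-0.5 * ((np.arange(n_bins) - 25) / 4.0) ** 2)
b = A @ true_spectrum + rng.normal(0.0, 0.01, n_meas)   # noisy measurements

# Tikhonov-regularized least squares: min ||A x - b||^2 + lam * ||x||^2,
# solved in closed form via the regularized normal equations.
lam = 1e-2
x = np.linalg.solve(A.T @ A + lam * np.eye(n_bins), A.T @ b)
print(np.corrcoef(x, true_spectrum)[0, 1])   # recovery quality
```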

  19. Effect of surface roughness on ultrasonic echo amplitude in aluminium-copper alloy castings

    International Nuclear Information System (INIS)

    Ambardar, R.; Pathak, S.D.; Prabhakar, O.; Jayakumar, T.

    1996-01-01

    In the present investigation, the influence of test surface roughness on the ultrasonic back-wall echo (BWE) amplitude in Al-4.5%Cu alloy cast specimens has been studied. The results indicate that as the surface roughness of the specimen increases, the resulting BWE amplitude at a given probe frequency decreases. However, under the present set of experimental conditions, the decrease in BWE amplitude with increasing surface roughness of the test specimen is found to be appreciable at 10 MHz probe frequency. (author)

  20. Rough-fuzzy clustering and unsupervised feature selection for wavelet based MR image segmentation.

    Directory of Open Access Journals (Sweden)

    Pradipta Maji

    Full Text Available Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR images. For many human experts, manual segmentation is a difficult and time consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, integrating judiciously the merits of rough-fuzzy computing and multiresolution image analysis technique. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid from the MR images are considered to have different textural properties. The dyadic wavelet analysis is used to extract the scale-space feature vector for each pixel, while the rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on maximum relevance-maximum significance criterion, to select relevant and significant textural features for segmentation problem, while the mathematical morphology based skull stripping preprocessing step is proposed to remove the non-cerebral tissues like skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices.
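
    The clustering core of this pipeline can be illustrated with plain fuzzy c-means, which alternates between membership and centroid updates; the paper's rough-fuzzy variant additionally splits each cluster into a crisp lower approximation and a fuzzy boundary region, which this sketch does not reproduce. The feature vectors below are random placeholders for the per-pixel wavelet features.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means (the fuzzy half of rough-fuzzy clustering)."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # membership matrix
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # weighted centroids
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        # Standard membership update: u_ik proportional to d_ik^(-2/(m-1)).
        U = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
    return U, centers

# Example on random 5-D "wavelet feature" vectors for 300 pixels, 3 tissues.
X = np.random.default_rng(1).normal(size=(300, 5))
U, centers = fuzzy_cmeans(X, c=3)
labels = U.argmax(axis=1)   # hard labels; rough-fuzzy would threshold U instead
```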

  1. Approaching the theoretical limit in periodic local MP2 calculations with atomic-orbital basis sets: the case of LiH.

    Science.gov (United States)

    Usvyat, Denis; Civalleri, Bartolomeo; Maschio, Lorenzo; Dovesi, Roberto; Pisani, Cesare; Schütz, Martin

    2011-06-07

    The atomic orbital basis set limit is approached in periodic correlated calculations for solid LiH. The valence correlation energy is evaluated at the level of the local periodic second order Møller-Plesset perturbation theory (MP2), using basis sets of progressively increasing size, and also employing "bond"-centered basis functions in addition to the standard atom-centered ones. Extended basis sets, which contain linear dependencies, are processed only at the MP2 stage via a dual basis set scheme. The local approximation (domain) error has been consistently eliminated by expanding the orbital excitation domains. As a final result, it is demonstrated that the complete basis set limit can be reached for both HF and local MP2 periodic calculations, and a general scheme is outlined for the definition of high-quality atomic-orbital basis sets for solids. © 2011 American Institute of Physics

  2. Mitigating mask roughness via pupil filtering

    Science.gov (United States)

    Baylav, B.; Maloney, C.; Levinson, Z.; Bekaert, J.; Vaglio Pret, A.; Smith, B.

    2014-03-01

    The roughness present on the sidewalls of lithographically defined patterns imposes a very important challenge for advanced technology nodes. It can originate from the aerial image or from the photoresist chemistry/processing [1]. The latter remains the dominant source in ArF and KrF lithography; however, the roughness originating from the mask and transferred to the aerial image is gaining more attention [2-9], especially for imaging conditions with large mask error enhancement factor (MEEF) values. The mask roughness contribution is usually in the low frequency range, which is particularly detrimental to device performance because it causes variations in electrical device parameters on the same chip [10-12]. This paper explains characteristic differences between pupil plane filtering in amplitude and in phase for the purpose of mitigating mask roughness transfer under interference-like lithography imaging conditions, where one-directional periodic features are to be printed by partially coherent sources. A white-noise edge roughness was used to perturb the mask features to validate the mitigation.

  3. Development of nano-roughness calibration standards

    International Nuclear Information System (INIS)

    Baršić, Gorana; Mahović, Sanjin; Zorc, Hrvoje

    2012-01-01

    At the Laboratory for Precise Measurements of Length, currently the Croatian National Laboratory for Length, unique nano-roughness calibration standards were developed, which have been physically implemented in cooperation with the company MikroMasch Trading OU and the Ruđer Bošković Institute. In this paper, a new design for a calibration standard with two measuring surfaces is presented. One of the surfaces is for the reproduction of roughness parameters, while the other is for the traceability of length units below 50 nm. The nominal values of the groove depths on these measuring surfaces are the same. Thus, a link between the measuring surfaces has been ensured, which makes these standards unique. Furthermore, the calibration standards available on the market are generally designed specifically for individual groups of measuring instrumentation, such as interferometric microscopes, stylus instruments, scanning electron microscopes (SEM) or scanning probe microscopes. In this paper, a new design for nano-roughness standards has been proposed for use in the calibration of optical instruments, as well as for stylus instruments, SEM, atomic force microscopes and scanning tunneling microscopes. Therefore, the development of these new nano-roughness calibration standards greatly contributes to the reproducibility of the results of groove depth measurement as well as the 2D and 3D roughness parameters obtained by various measuring methods. (paper)

  4. THE USE OF NUMBERED HEADS TOGETHER (NHT) LEARNING MODEL WITH SCIENCE, ENVIRONMENT, TECHNOLOGY, SOCIETY (SETS) APPROACH TO IMPROVE STUDENT LEARNING MOTIVATION OF SENIOR HIGH SCHOOL

    Directory of Open Access Journals (Sweden)

    B. Sutipnyo

    2018-01-01

    Full Text Available This research aimed to determine the increase in students' motivation after applying the Numbered Heads Together (NHT) learning model with the Science, Environment, Technology, Society (SETS) approach. The design of this study was a quasi experiment with a One Group Pretest-Posttest Design. The data on students' learning motivation were obtained through a questionnaire administered before and after NHT learning with the SETS approach. In this research, the indicators of learning motivation were facing tasks diligently, showing interest in a variety of problems, preferring to work independently, keeping students' opinions, and feeling happy to find and solve problems. The increase in the students' learning motivation was analyzed using a gain test. The results showed that applying the NHT learning model with the SETS approach could increase the students' learning motivation to the medium category.

  5. Accountable priority setting for trust in health systems--the need for research into a new approach for strengthening sustainable health action in developing countries

    DEFF Research Database (Denmark)

    Byskov, Jens; Bloch, Paul; Blystad, Astrid

    2009-01-01

    Despite multiple efforts to strengthen health systems in low and middle income countries, intended sustainable improvements in health outcomes have not been shown. To date most priority setting initiatives in health systems have mainly focused on technical approaches involving information derived from burden of disease statistics, cost effectiveness analysis, and published clinical trials. However, priority setting involves value-laden choices and these technical approaches do not equip decision-makers to address a broader range of relevant values - such as trust, equity, accountability and fairness - that are of concern to other partners and, not least, the populations concerned. A new focus for priority setting is needed. Accountability for Reasonableness (AFR) is an explicit ethical framework for legitimate and fair priority setting that provides guidance for decision-makers who must...

  6. Effective Boundary Slip Induced by Surface Roughness and Their Coupled Effect on Convective Heat Transfer of Liquid Flow

    Directory of Open Access Journals (Sweden)

    Yunlu Pan

    2018-05-01

    Full Text Available As a significant interfacial property for micro/nano fluidic systems, the effective boundary slip can be induced by surface roughness. However, the effect of surface roughness on the effective slip is still not clear: both increased and decreased effective boundary slip have been found with increased roughness. The present work develops a simplified model to study the effect of surface roughness on the effective boundary slip. In the created rough models, the reference position of the rough surfaces used to determine the effective boundary slip was set based on the ISO/ASME standard, and the surface roughness parameters, including Ra (arithmetical mean deviation of the assessed profile), Rsm (mean width of the assessed profile elements) and the shape of the texture, were varied to form different surface roughness. The effective boundary slip of fluid flow over the rough surface was then analyzed using COMSOL 5.3. The results show that the effective boundary slip induced by the surface roughness of a fully wetted rough surface remains negative and decreases further with increasing Ra or decreasing Rsm. Different shapes of roughness texture also result in different effective slip. A simplified correction method for the measured effective boundary slip was developed and proved to be efficient when Rsm is no larger than 200 nm. Another important finding of the present work is that the convective heat transfer first increases and then changes little with increasing Ra, while the effective boundary slip keeps decreasing. It is believed that increasing Ra enlarges the area of the solid-liquid interface for convective heat transfer; however, when Ra is large enough, the decreasing roughness-induced effective boundary slip counteracts the enhancement effect of the roughness itself on the convective heat transfer.

  7. Turbulent flow velocity distribution at rough walls

    International Nuclear Information System (INIS)

    Baumann, W.

    1978-08-01

    Following extensive measurements of the velocity profile in a plate channel with artificial roughness geometries, specific investigations were carried out to verify the results obtained. The wall geometry used was formed by high transverse square ribs with a large pitch. The measuring position relative to the ribs was varied as a parameter, thus providing a statement on the local influence of the roughness ribs on the measured values. As a fundamental result it was found that the gradient of the logarithmic rough wall velocity profiles, which differs widely from the value 2.5, depends only slightly on the measuring position relative to the ribs. The gradients of the smooth wall velocity profiles deviate from 2.5 only near the ribs. This fact can be explained by the smooth wall shear stress varying with the pitch of the ribs. (orig.)

  8. Spin Hall effect by surface roughness

    KAUST Repository

    Zhou, Lingjun

    2015-01-08

    The spin Hall effect and its inverse, driven by the spin-orbit interaction, provide an interconversion mechanism between spin and charge currents. Since the spin Hall effect generates and manipulates spin current electrically, achieving a large effect has become an important topic in both academia and industry. So far, materials with heavy elements carrying a strong spin-orbit interaction provide the only option. We propose here a new mechanism, using the surface roughness in ultrathin films, to enhance the spin Hall effect without heavy elements. Our analysis based on Cu and Al thin films suggests that surface roughness is capable of driving a spin Hall angle comparable to that in bulk Au. We also demonstrate that the spin Hall effect induced by surface roughness subscribes only to the side-jump contribution, not to skew scattering. The paradigm proposed in this paper provides a second route, besides heavy elements, to generate a sizable spin Hall effect.

  9. Why do rough surfaces appear glossy?

    Science.gov (United States)

    Qi, Lin; Chantler, Mike J; Siebert, J Paul; Dong, Junyu

    2014-05-01

    The majority of work on the perception of gloss has been performed using smooth surfaces (e.g., spheres). Previous studies that have employed more complex surfaces reported that increasing mesoscale roughness increases perceived gloss [Psychol. Sci.19, 196 (2008), J. Vis.10(9), 13 (2010), Curr. Biol.22, 1909 (2012)]. We show that the use of realistic rendering conditions is important and that, in contrast to [Psychol. Sci.19, 196 (2008), J. Vis.10(9), 13 (2010)], after a certain point increasing roughness further actually reduces glossiness. We investigate five image statistics of estimated highlights and show that for our stimuli, one in particular, which we term "percentage of highlight area," is highly correlated with perceived gloss. We investigate a simple model that explains the unimodal, nonmonotonic relationship between mesoscale roughness and percentage highlight area.

  10. Approaching the basis set limit for DFT calculations using an environment-adapted minimal basis with perturbation theory: Formulation, proof of concept, and a pilot implementation

    International Nuclear Information System (INIS)

    Mao, Yuezhi; Horn, Paul R.; Mardirossian, Narbe; Head-Gordon, Teresa; Skylaris, Chris-Kriton; Head-Gordon, Martin

    2016-01-01

    Recently developed density functionals have good accuracy for both thermochemistry (TC) and non-covalent interactions (NC) if very large atomic orbital basis sets are used. To approach the basis set limit with potentially lower computational cost, a new self-consistent field (SCF) scheme is presented that employs minimal adaptive basis (MAB) functions. The MAB functions are optimized on each atomic site by minimizing a surrogate function. High accuracy is obtained by applying a perturbative correction (PC) to the MAB calculation, similar to dual basis approaches. Compared to exact SCF results, using this MAB-SCF (PC) approach with the same large target basis set produces <0.15 kcal/mol root-mean-square deviations for most of the tested TC datasets, and <0.1 kcal/mol for most of the NC datasets. The performance of density functionals near the basis set limit can be even better reproduced. With further improvement to its implementation, MAB-SCF (PC) is a promising lower-cost substitute for conventional large-basis calculations as a method to approach the basis set limit of modern density functionals.

  11. The "expert patient" approach for non-communicable disease management in low and middle income settings: When the reality confronts the rhetoric.

    Science.gov (United States)

    Xiao, Yue

    2015-09-01

    This paper seeks to explore the relevance between the Western "expert patient" rhetoric and the reality of non-communicable diseases (NCDs) control and management in low and middle income settings from the health sociological perspective. It firstly sets up a conceptual framework of the "expert patient" or the patient self-management approach, showing the rhetoric of the initiative in the developed countries. Then by examining the situation of NCDs control and management in low income settings, the paper tries to evaluate the possibilities of implementing the "expert patient" approach in these countries. Kober and Van Damme's study on the relevance of the "expert patient" for an HIV/AIDS program in low income settings is critically studied to show the relevance of the developed countries' rhetoric of the "expert patient" approach for the reality of developing countries. In addition, the MoPoTsyo diabetes peer educator program is analyzed to show the challenges faced by the low income countries in implementing patient self-management programs. Finally, applications of the expert patient approach in China are discussed as well, to remind us of the possible difficulties in introducing it into rural settings.

  12. Velocity distribution in a turbulent flow near a rough wall

    Science.gov (United States)

    Korsun, A. S.; Pisarevsky, M. I.; Fedoseev, V. N.; Kreps, M. V.

    2017-11-01

    Velocity distribution in the zone of developed wall turbulence, regardless of the conditions on the wall, is described by the well-known Prandtl logarithmic profile. In this distribution, the constant that determines the value of the velocity is set by the nature of the interaction of the flow with the wall and depends on the viscosity of the fluid, the dynamic velocity, and the parameters of the wall roughness. In the extreme cases, depending on the ratio between the thickness of the viscous sublayer and the size of the roughness, the constant either takes on a value that does not depend on viscosity or reduces to the smooth-wall relation. It is essential that this logarithmic profile is not only the result of the Prandtl theory: it can be derived from general considerations of the theory of dimensions, and it also follows from the condition of local equilibrium between generation and dissipation of turbulent energy in the wall area. This allows us to consider the profile as a universal law of velocity distribution in the wall area of a turbulent flow. Approximating the profile up to the line of maximum velocity and then integrating makes it possible to obtain the resistance law for channels of simple shape. For channels of complex shape with rough walls, the universal profile can be used to formulate the boundary condition when applied to the calculation of turbulence models. This paper presents an empirical model for determining the constant of the universal logarithmic profile. The zone of roughness is described by a set of parameters and is considered as a porous structure with variable porosity.
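
    In standard textbook notation, the universal profile discussed here is the logarithmic law of the wall; a common form, with von Kármán constant κ ≈ 0.41, friction velocity u_* and an additive constant set by the wall condition, reads:

```latex
% Log-law of the wall (kappa ~ 0.41):
u^{+} \;=\; \frac{u}{u_*} \;=\; \frac{1}{\kappa}\,\ln y^{+} + B,
\qquad y^{+} = \frac{y\,u_*}{\nu},
\qquad B \approx 5.0 \quad \text{(hydraulically smooth wall)}.

% Fully rough regime: the constant is set by the equivalent
% sand-grain roughness k_s instead of the viscosity:
u^{+} \;=\; \frac{1}{\kappa}\,\ln\frac{y}{k_s} + 8.5 .
```

    The two forms correspond to the extreme cases mentioned in the abstract: a viscosity-dependent additive constant for smooth walls, and a constant fixed purely by the roughness scale in the fully rough regime.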

  13. Mars radar clutter and surface roughness characteristics from MARSIS data

    Science.gov (United States)

    Campbell, Bruce A.; Schroeder, Dustin M.; Whitten, Jennifer L.

    2018-01-01

    Radar sounder studies of icy, sedimentary, and volcanic settings can be affected by reflections from surface topography surrounding the sensor nadir location. These off-nadir 'clutter' returns appear at similar time delays to subsurface echoes and complicate geologic interpretation. Additionally, broadening of the radar echo in delay by surface returns sets a limit on the detectability of subsurface interfaces. We use MARSIS 4 MHz data to study variations in the nadir and off-nadir clutter echoes, from about 300 km to 1000 km altitude, R, for a wide range of surface roughness. This analysis uses a new method of characterizing ionospheric attenuation to merge observations over a range of solar zenith angle and date. Mirror-like reflections should scale as R^-2, but the observed 4 MHz nadir echoes often decline by a somewhat smaller power-law factor because MARSIS on-board processing increases the number of summed pulses with altitude. Prior predictions of the contributions from clutter suggest a steeper decline with R than the nadir echoes, but in very rough areas the ratio of off-nadir returns to nadir echoes shows instead an increase of about R^(1/2) with altitude. This is likely due in part to an increase in backscatter from the surface as the radar incidence angle at some round-trip time delay declines with increasing R. It is possible that nadir and clutter echo properties in other planetary sounding observations, including RIME and REASON flyby data for Europa, will vary in the same way with altitude, but there may be differences in the nature and scale of target roughness (e.g., icy versus rocky surfaces). We present global maps of the ionosphere- and altitude-corrected nadir echo strength, and of a 'clutter' parameter based on the ratio of off-nadir to nadir echoes. The clutter map offers a view of surface roughness at ∼75 m length scale, bridging the spatial-scale gap between SHARAD roughness estimates and MOLA-derived parameters.

  14. Diffuse neutron scattering signatures of rough films

    International Nuclear Information System (INIS)

    Pynn, R.; Lujan, M. Jr.

    1992-01-01

    Patterns of diffuse neutron scattering from thin films are calculated from a perturbation expansion based on the distorted-wave Born approximation. Diffuse fringes can be categorised into three types: those that occur at constant values of the incident or scattered neutron wavevectors, and those for which the neutron wavevector transfer perpendicular to the film is constant. The variation of intensity along these fringes can be used to deduce the spectrum of surface roughness for the film and the degree of correlation between the film's rough surfaces.

  15. A combined approach of generalized additive model and bootstrap with small sample sets for fault diagnosis in fermentation process of glutamate.

    Science.gov (United States)

    Liu, Chunbo; Pan, Feng; Li, Yun

    2016-07-29

    Glutamate is of great importance in the food and pharmaceutical industries, yet there is still a lack of effective statistical approaches for fault diagnosis in the fermentation process of glutamate. To date, the statistical approach based on the generalized additive model (GAM) and bootstrap has not been used for fault diagnosis in fermentation processes, much less the fermentation process of glutamate with small sample sets. A combined approach of GAM and bootstrap was developed for online fault diagnosis in the fermentation process of glutamate with small sample sets. GAM was first used to model the relationship between glutamate production and different fermentation parameters using online data from four normal fermentation experiments of glutamate. The fitted GAM with fermentation time, dissolved oxygen, oxygen uptake rate and carbon dioxide evolution rate captured 99.6% of the variance of glutamate production during the fermentation process. Bootstrap was then used to quantify the uncertainty of the estimated production of glutamate from the fitted GAM using a 95% confidence interval. The proposed approach was then used for online fault diagnosis in abnormal fermentation processes of glutamate, where a fault was defined as the estimated production of glutamate falling outside the 95% confidence interval. The online fault diagnosis based on the proposed approach identified not only the start of a fault in the fermentation process, but also its end once the fermentation conditions were back to normal. The proposed approach used only small sample sets from normal fermentation experiments to establish the model, and then required only online recorded data on fermentation parameters for fault diagnosis in the fermentation process of glutamate. The proposed approach based on GAM and bootstrap thus provides a new and effective way for fault diagnosis in the fermentation process of glutamate with small sample sets.
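
    The GAM-plus-bootstrap recipe can be sketched in a few lines: fit smooth terms for each fermentation parameter, resample and refit to get a 95% band for the normal process, then flag points outside the band. The sketch below uses the pygam package; the file name, column layout and number of resamples are hypothetical placeholders for the four normal batches described above.

```python
import numpy as np
from pygam import LinearGAM, s

# Hypothetical online records: time (h), dissolved oxygen (%), oxygen
# uptake rate, CO2 evolution rate -> glutamate concentration (g/L).
data = np.loadtxt("normal_batches.csv", delimiter=",", skiprows=1)  # placeholder
X, y = data[:, :4], data[:, 4]

# One smooth term per fermentation parameter, as in the abstract.
gam = LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X, y)

# Bootstrap the records to get a 95% band for the normal process.
rng = np.random.default_rng(0)
preds = []
for _ in range(200):
    idx = rng.integers(0, len(y), len(y))
    preds.append(LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X[idx], y[idx]).predict(X))
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)

# Online fault diagnosis: flag a fault when observed glutamate falls
# outside the 95% interval at that time point.
fault = (y < lo) | (y > hi)
```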

  16. SuperTRI: A new approach based on branch support analyses of multiple independent data sets for assessing reliability of phylogenetic inferences.

    Science.gov (United States)

    Ropiquet, Anne; Li, Blaise; Hassanin, Alexandre

    2009-09-01

    Supermatrix and supertree are two methods for constructing a phylogenetic tree from multiple data sets. However, these methods are not a panacea, as conflicting signals between data sets can lead to misinterpretation of the evolutionary history of taxa. In particular, the supermatrix approach is expected to be misleading if the species-tree signal is not dominant after the combination of the data sets. Moreover, most current supertree methods suffer from two limitations: (i) they ignore or misinterpret secondary (non-dominant) phylogenetic signals of the different data sets; and (ii) the logical basis of node robustness measures is unclear. To overcome these limitations, we propose a new approach, called SuperTRI, which is based on the branch support analyses of the independent data sets, and where the reliability of the nodes is assessed using three measures: the supertree Bootstrap percentage and two other values calculated from the separate analyses: the mean branch support (mean Bootstrap percentage or mean posterior probability) and the reproducibility index. The SuperTRI approach is tested on a data matrix including seven genes for 82 taxa of the family Bovidae (Mammalia, Ruminantia), and the results are compared to those found with the supermatrix approach. The phylogenetic analyses of the supermatrix and independent data sets were done using four methods of tree reconstruction: Bayesian inference, maximum likelihood, and unweighted and weighted maximum parsimony. The results indicate, firstly, that the SuperTRI approach is less sensitive to the choice among the four phylogenetic methods, secondly, that it is more accurate in interpreting the relationships among taxa, and thirdly, that interesting conclusions on introgression and radiation can be drawn from the comparisons between the SuperTRI and supermatrix analyses.

  17. Improved algorithms for the classification of rough rice using a bionic electronic nose based on PCA and the Wilks distribution.

    Science.gov (United States)

    Xu, Sai; Zhou, Zhiyan; Lu, Huazhong; Luo, Xiwen; Lan, Yubin

    2014-03-19

    Principal Component Analysis (PCA) is one of the main methods used for electronic nose pattern recognition. However, poor classification performance is common when using regular PCA. This paper aims to improve the classification performance of regular PCA using the existing Wilks Λ-statistic (i.e., PCA combined with the Wilks distribution). The improved algorithms, which combine regular PCA with the Wilks Λ-statistic, were developed after analysing the functionality and defects of PCA. Verification tests were conducted using a PEN3 electronic nose. The collected samples consisted of the volatiles of six varieties of rough rice (Zhongxiang1, Xiangwan13, Yaopingxiang, WufengyouT025, Pin 36, and Youyou122), grown in the same area and season. With regular PCA, the first two principal components used as analysis vectors cannot perform the rough rice variety classification task. Using the improved algorithms, different principal components were selected as analysis vectors. The set of Mahalanobis distances between the varieties of rough rice was used to estimate the classification performance. The results illustrate that the rough rice variety classification task is achieved well using the improved algorithm. A Probabilistic Neural Network (PNN) was also established to test the effectiveness of the improved algorithms. The first two principal components (PC1 and PC2) and the first and fifth principal components (PC1 and PC5) were selected as the inputs of the PNN for the classification of the six rough rice varieties. The results indicate that the classification accuracy based on the improved algorithm was 6.67% higher than that of the regular method. These results prove the effectiveness of using the Wilks Λ-statistic to improve the classification accuracy of the regular PCA approach.
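
    The selection idea can be sketched in a few lines: project the data with PCA, score candidate component pairs by Wilks' Λ = det(W)/det(T) (within-class over total scatter), and keep the pair with the smallest Λ instead of defaulting to PC1/PC2. The data below are synthetic placeholders for the PEN3 sensor features; the paper's exact statistic and selection procedure may differ in detail.

```python
import numpy as np
from itertools import combinations
from sklearn.decomposition import PCA

# Hypothetical e-nose data: 120 samples of 10 sensor features, 6 varieties.
rng = np.random.default_rng(4)
X = rng.normal(size=(120, 10)) + np.repeat(rng.normal(size=(6, 10)), 20, axis=0)
labels = np.repeat(np.arange(6), 20)

scores = PCA(n_components=6).fit_transform(X)

def wilks_lambda(Z, labels):
    """Wilks' statistic det(W)/det(T): smaller values mean the classes
    are better separated in the chosen subspace."""
    T = np.cov(Z, rowvar=False) * (len(Z) - 1)                    # total scatter
    W = sum(np.cov(Z[labels == k], rowvar=False) * (np.sum(labels == k) - 1)
            for k in np.unique(labels))                           # within-class
    return np.linalg.det(W) / np.linalg.det(T)

# Pick the principal-component pair that minimizes Lambda.
best = min(combinations(range(6), 2),
           key=lambda ij: wilks_lambda(scores[:, list(ij)], labels))
print("most discriminative PC pair:", best)
```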

  18. TOPOGRAPHIC LOCAL ROUGHNESS EXTRACTION AND CALIBRATION OVER MARTIAN SURFACE BY VERY HIGH RESOLUTION STEREO ANALYSIS AND MULTI SENSOR DATA FUSION

    Directory of Open Access Journals (Sweden)

    J. R. Kim

    2012-08-01

    Full Text Available Planetary topography has been a main focus of orbital remote sensing. In spite of recent developments in active and passive sensing technologies for reconstructing three-dimensional planetary topography, the resolution limit of range measurement is theoretically and practically obvious. Therefore, the extraction of the inner topographical height variation within a measurement spot is a very challenging and beneficial topic for many application fields, such as the identification of landforms, aeolian process analysis and the risk assessment of planetary landers. In this study we tried to extract the topographic height variation over the Martian surface, the so-called local roughness, with different approaches. One method employs the laser beam broadening effect; the other uses multi-angle optical imaging. In both cases, precise pre-processing employing high-accuracy DTMs (Digital Terrain Models) was introduced to minimise possible errors. Since a processing routine to extract very high resolution DTMs of up to 0.5-4 m grid-spacing from HiRISE (High Resolution Imaging Science Experiment) and 20-10 m DTMs from CTX (Context Camera) stereo pairs has been developed, it is now possible to calibrate the local roughness against the height variation calculated from very high resolution topographic products. Three test areas were chosen and processed to extract local roughness with the co-registered multi-sensor data sets. Although the extracted local roughness products still show a strong correlation with topographic slope, we demonstrated the potential of the height variation extraction and calibration methods.

  19. Mining big data sets of plankton images: a zero-shot learning approach to retrieve labels without training data

    Science.gov (United States)

    Orenstein, E. C.; Morgado, P. M.; Peacock, E.; Sosik, H. M.; Jaffe, J. S.

    2016-02-01

    Technological advances in instrumentation and computing have allowed oceanographers to develop imaging systems capable of collecting extremely large data sets. With the advent of in situ plankton imaging systems, scientists must now commonly deal with "big data" sets containing tens of millions of samples spanning hundreds of classes, making manual classification untenable. Automated annotation methods are now considered to be the bottleneck between collection and interpretation. Typically, such classifiers learn to approximate a function that predicts a predefined set of classes for which a considerable amount of labeled training data is available. The requirement that the training data span all the classes of concern is problematic for plankton imaging systems since they sample such diverse, rapidly changing populations. These data sets may contain relatively rare, sparsely distributed, taxa that will not have associated training data; a classifier trained on a limited set of classes will miss these samples. The computer vision community, leveraging advances in Convolutional Neural Networks (CNNs), has recently attempted to tackle such problems using "zero-shot" object categorization methods. Under a zero-shot framework, a classifier is trained to map samples onto a set of attributes rather than a class label. These attributes can include visual and non-visual information such as what an organism is made out of, where it is distributed globally, or how it reproduces. A second stage classifier is then used to extrapolate a class. In this work, we demonstrate a zero-shot classifier, implemented with a CNN, to retrieve out-of-training-set labels from images. This method is applied to data from two continuously imaging, moored instruments: the Scripps Plankton Camera System (SPCS) and the Imaging FlowCytobot (IFCB). Results from simulated deployment scenarios indicate zero-shot classifiers could be successful at recovering samples of rare taxa in image sets. This
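
    The zero-shot step itself is simple once attribute predictions exist: describe every class, including classes absent from training, by an attribute vector, and label each sample with the class whose signature is nearest to the predicted attributes. The sketch below shows only that final matching stage; the attribute names, class signatures and scores are invented, and the CNN that would produce the attribute predictions is not shown.

```python
import numpy as np

# Hypothetical attribute space; a CNN (not shown) would map an image to
# predicted scores over these attributes.
attributes = ["has_spines", "is_chain_forming", "is_gelatinous", "has_flagella"]
class_signatures = {
    "diatom_chain":   np.array([0.0, 1.0, 0.0, 0.0]),
    "copepod":        np.array([1.0, 0.0, 0.0, 0.0]),
    "salp":           np.array([0.0, 0.0, 1.0, 0.0]),   # rare: no training images
    "dinoflagellate": np.array([0.0, 0.0, 0.0, 1.0]),
}

def zero_shot_label(predicted_attributes):
    """Second-stage classifier: nearest class signature by cosine similarity."""
    names = list(class_signatures)
    sims = [predicted_attributes @ class_signatures[n] /
            (np.linalg.norm(predicted_attributes)
             * np.linalg.norm(class_signatures[n]) + 1e-9)
            for n in names]
    return names[int(np.argmax(sims))]

# A CNN output like (0.1, 0.05, 0.9, 0.1) is labeled "salp" even though
# no salp images were in the training set.
print(zero_shot_label(np.array([0.1, 0.05, 0.9, 0.1])))
```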

  20. Roughness Mapping on Various Vertical Scales Based on Full-Waveform Airborne Laser Scanning Data

    Directory of Open Access Journals (Sweden)

    Wolfgang Wagner

    2011-03-01

    Full Text Available Roughness is an important input parameter for modeling of natural hazards such as floods, rock falls and avalanches, where it is basically assumed that flow velocities decrease with increasing roughness. Seeing roughness as a multi-scale concept (i.e., ranging from fine-scale soil characteristics to the description of understory and the lower tree layer), various roughness raster products were derived from the original full-waveform airborne laser scanning (FWF-ALS) point cloud using two different types of roughness parameters: the surface roughness (SR) and the terrain roughness (TR). For the calculation of the SR, ALS terrain points within a defined height range from the terrain surface are considered. For the parameterization of the SR, two approaches are investigated. In the first approach, a geometric description, calculating the standard deviation of plane-fitting residuals of terrain points, is used. In the second one, the potential of the derived echo widths is analyzed for the parameterization of the SR. The echo width is an indicator of roughness and of the slope of the target. To achieve a comparable spatial resolution of both SR layers, the calculation of the standard deviation of detrended terrain points requires a higher terrain point density than the SR parameterization using the echo widths. The TR describes objects (i.e., point clusters) close to but explicitly above the terrain surface, with 20 cm defined as the threshold height value for delineation of the surface layer (i.e., forest floor layer). Two different empirically defined vegetation layers below the canopy layer were analyzed (TR I: 0.2 m to 1.0 m; TR II: 0.2 m to 3.0 m). A 1 m output grid cell size was chosen for all roughness parameters in order to provide consistency for further integration of high-resolution optical imagery. The derived roughness parameters were then jointly classified, together with a normalized Digital Surface Model (nDSM) showing the height of objects...

  1. Potential roughness near lithographically fabricated atom chips

    DEFF Research Database (Denmark)

    Krüger, Peter; Andersson, L. M.; Wildermuth, Stefan

    2007-01-01

    Potential roughness has been reported to severely impair experiments in magnetic microtraps. We show that these obstacles can be overcome as we measure disorder potentials that are reduced by two orders of magnitude near lithographically patterned high-quality gold layers on semiconductor atom chips.

  2. Optical measurement of surface roughness in manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Brodmann, R.

    1984-11-01

    The measuring system described here is based on the light-scattering method, and was developed by Optische Werke G. Rodenstock, Munich. It is especially useful for rapid non-contact monitoring of surface roughness in production-related areas. This paper outlines the differences between this system and the common stylus instrument, including descriptions of some applications in industry.

  3. Microscopic Holography for flow over rough plate

    Science.gov (United States)

    Talapatra, Siddharth; Hong, Jiarong; Lu, Yuan; Katz, Joseph

    2008-11-01

    Our objective is to measure the near-wall flow structures in a turbulent channel flow over a rough wall. In-line microscopic holographic PIV can resolve the 3-D flow field in a small sample volume, but recording holograms through a rough surface is a challenge. To solve this problem, we match the refractive index of the fluid to that of the wall. Proof-of-concept tests involve an acrylic plate containing uniformly distributed, closely packed 0.45 mm high pyramids with a slope angle of 22°, located within a concentrated sodium iodide solution. Holograms recorded by a 4864 x 3248 pixel digital camera at 10X magnification provide a field of view of 3.47 mm x 2.32 mm and a pixel resolution of 0.714 μm. Due to index matching, reconstructed seed particles can be clearly seen over the entire volume, with only faint traces of the rough wall, which can be removed. Planned experiments will be performed in a 20 x 5 cm rectangular channel with the top and bottom plates having the same roughness as the sample plate.

  4. Factors influencing surface roughness of polyimide film

    International Nuclear Information System (INIS)

    Yao Hong; Zhang Zhanwen; Huang Yong; Li Bo; Li Sai

    2011-01-01

    The polyimide (PI) films of pyromellitic dianhydride-oxydianiline (PMDA-ODA) were fabricated using the vapor deposition polymerization (VDP) method under a high vacuum of the order of 10⁻⁴ Pa. The influence of the equipment, the substrate temperature, the heating process, and the deposition ratio of the monomers on the surface roughness of the PI films was investigated. The surface topography of the films was measured by interferometric microscopy and scanning electron microscopy (SEM), and the surface roughness was probed with atomic force microscopy (AFM). The results show that continuous films can be formed when the distance from the steering flow pipe to the substrate is 74 cm. The surface roughnesses are 291.2 nm and 61.9 nm for the one-step and multi-step heating processes, respectively, and using a fine mesh can effectively avoid the splashing of materials. The surface roughness can be as low as 3.3 nm when the deposition rate ratio of PMDA to ODA is 0.9:1, and keeping the substrate temperature around 30 °C is advantageous for forming a film with a planar micro-surface topography. (authors)

  5. Roughly isometric minimal immersions into Riemannian manifolds

    DEFF Research Database (Denmark)

    Markvorsen, Steen

    of the intrinsic combinatorial discrete Laplacian, and we will show that they share several analytic and geometric properties with their smooth (minimal submanifold) counterparts in $N$. The intrinsic properties thus obtained may hence serve as roughly invariant descriptors for the original metric space $X$....

  6. Three-tier rough superhydrophobic surfaces

    International Nuclear Information System (INIS)

    Cao, Yuanzhi; Yuan, Longyan; Hu, Bin; Zhou, Jun

    2015-01-01

    A three-tier rough superhydrophobic surface was fabricated by growing hydrophobically modified (fluorinated silane) zinc oxide (ZnO)/copper oxide (CuO) hetero-hierarchical structures on silicon (Si) micro-pillar arrays. Compared with the other three control samples having fewer rough tiers, the three-tier surface exhibits the best water repellency, with the largest contact angle (161°) and the lowest sliding angle (0.5°). It also shows a robust Cassie state, which enables water to flow over it at a speed above 2 m s⁻¹. In addition, it can prevent itself from being wetted by a droplet of low surface tension (water and ethanol mixed 1:1 by volume) impacting at a speed of 0.6 m s⁻¹ (dropped from a height of 2 cm). All these features prove that adding another rough tier to a two-tier rough surface can further improve its water-repellent properties. (paper)

  7. Roughness-induced streaming in turbulent wave boundary layers

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Sumer, B. Mutlu; Fredsøe, Jørgen

    2011-01-01

    -averaged streaming characteristics induced by bottom roughness variations are systematically assessed. The effects of variable roughness ratio, gradual roughness transitions, as well as changing flow orientation in plan are all considered. As part of the latter, roughness-induced secondary flows are predicted...

  8. Self-affine roughness influence on redox reaction charge admittance

    NARCIS (Netherlands)

    Palasantzas, G

    2005-01-01

    In this work we investigate the influence of self-affine electrode roughness on the admittance of redox reactions during facile charge-transfer kinetics. The self-affine roughness is characterized by the rms roughness amplitude w, the correlation length ξ and the roughness exponent H (0 < H < 1).
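
    For context, the three quantities named here enter the standard self-affine height-height correlation model. Below is a minimal sketch assuming the common textbook parameterization C(r) = w² exp[−(r/ξ)^(2H)]; it is not code from the cited work.

```python
# A minimal sketch of the standard self-affine height-height correlation
# model, C(r) = w**2 * exp(-(r/xi)**(2*H)), in which the rms amplitude w,
# correlation length xi and roughness exponent H (0 < H < 1) appear.
import numpy as np

def self_affine_correlation(r, w=1.0, xi=1.0, H=0.7):
    """Height-height correlation of a self-affine surface."""
    return w**2 * np.exp(-(r / xi) ** (2 * H))

r = np.linspace(0.01, 5.0, 6)   # lateral separations, same units as xi
print(np.round(self_affine_correlation(r), 4))
```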

  9. "Looking and Listening-In": A Methodological Approach to Generating Insights into Infants' Experiences of Early Childhood Education and Care Settings

    Science.gov (United States)

    Sumsion, Jennifer; Goodfellow, Joy

    2012-01-01

    In this article, we describe an observational approach, "looking and listening-in," that we have used to try to understand the experience of an infant in an Australian family day-care home. The article is drawn from a larger study of infants' experiences of early childhood education and care settings. In keeping with the mosaic…

  10. Teaching a Child with ASD to Approach Communication Partners and Use a Speech-Generating Device across Settings: Clinic, School, and Home

    Science.gov (United States)

    Waddington, Hannah; van der Meer, Larah; Carnett, Amarie; Sigafoos, Jeff

    2017-01-01

    Individuals with autism spectrum disorder (ASD) often have difficulty generalizing newly acquired communication skills to different contexts. In this study, a multiple baseline across settings (clinic, school, and home) design was used to determine whether an 8-year-old boy with ASD could learn to approach communication partners to request…

  11. Turbulent boundary layer over roughness transition with variation in spanwise roughness length scale

    Science.gov (United States)

    Westerweel, Jerry; Tomas, Jasper; Eisma, Jerke; Pourquie, Mathieu; Elsinga, Gerrit; Jonker, Harm

    2016-11-01

    Both large-eddy simulations (LES) and water-tunnel experiments, using simultaneous stereoscopic PIV and LIF, were performed to investigate pollutant dispersion in a region where the surface changes from rural to urban roughness. The urban roughness consists of rectangular obstacles whose spanwise aspect ratio is varied. A line source of passive tracer was placed upstream of the roughness transition. The objectives of the study are: (i) to determine the influence of the aspect ratio on the roughness-transition flow, and (ii) to determine the dominant mechanisms of pollutant removal from street canyons in the transition region. It is found that for a spanwise aspect ratio of 2 the drag induced by the roughness is the largest of all considered cases, which is caused by a large-scale secondary flow. In the roughness transition the vertical advective pollutant flux is the main ventilation mechanism in the first three streets. Furthermore, by means of linear stochastic estimation the mean flow structure is identified that is responsible for the exchange of fluid between the roughness obstacles and the outer part of the boundary layer. Finally, it is found that the vertical length scale of this structure increases with increasing aspect ratio of the obstacles in the roughness region.
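
    The linear stochastic estimation (LSE) step mentioned above can be illustrated compactly: the mean structure tied to an "event" signal E is estimated as u_est(x) = L(x)·E, with L(x) = ⟨u′E⟩/⟨E²⟩ computed from unconditional correlations. A sketch on synthetic data follows; all names are illustrative rather than taken from the study.

```python
# Single-event linear stochastic estimation (LSE) on synthetic data:
# estimate the flow structure associated with an event signal E.
import numpy as np

rng = np.random.default_rng(1)
n_t, n_x = 5000, 64                 # snapshots, velocity probe locations
E = rng.normal(size=n_t)            # event signal (e.g., a vertical-flux probe)
u = np.outer(E, np.sin(np.linspace(0, np.pi, n_x)))  # correlated structure
u += 0.5 * rng.normal(size=(n_t, n_x))               # turbulence "noise"

L = (u * E[:, None]).mean(axis=0) / (E**2).mean()    # L(x) = <u'E> / <E^2>
u_est = L * 2.0                     # estimated structure for a strong event E = 2
print(np.round(L[:5], 2))           # recovers the imposed sine shape
```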

  12. Numerical Investigation of Effect of Surface Roughness in a Microchannel

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Myung Seob; Byun, Sung Jun; Yoon, Joon Yong [Hanyang University, Seoul (Korea, Republic of)

    2010-05-15

    In this paper, lattice Boltzmann method (LBM) results for laminar flow in a microchannel with a rough surface are presented. The surface roughness is modeled as an array of rectangular modules placed on the top and bottom surfaces of a parallel-plate channel. The effects of relative surface roughness, roughness distribution, and roughness size are presented in terms of the Poiseuille number. The roughness distribution, characterized by the ratio of the roughness height to the spacing between the modules, has a negligible effect on the flow and friction factors. Finally, a significant increase in the Poiseuille number is observed when the surface roughness is considered, and the effects of roughness on the microflow field depend mainly on the relative surface roughness.
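
    Since the results are reported in terms of the Poiseuille number Po = f·Re, a short sketch of that bookkeeping may help; the smooth parallel-plate value Po = 96 (hydraulic-diameter based) is the baseline against which the roughness-induced increase is measured. The channel dimensions and fluid properties below are illustrative, not those of the study.

```python
# Poiseuille number Po = f * Re for a parallel-plate channel, verified
# against the smooth laminar solution u_mean = -dp/dx * h^2 / (12 mu).
def poiseuille_number(dp_dx, u_mean, height, rho, nu):
    d_h = 2.0 * height                              # hydraulic diameter, parallel plates
    f = (-dp_dx) * d_h / (0.5 * rho * u_mean**2)    # Darcy friction factor
    re = u_mean * d_h / nu
    return f * re

rho, nu, h = 1000.0, 1e-6, 100e-6                   # water, 100-micron gap (illustrative)
dp_dx = -1e5                                        # pressure gradient, Pa/m
u_mean = -dp_dx * h**2 / (12 * rho * nu)            # smooth-channel laminar solution
print(poiseuille_number(dp_dx, u_mean, h, rho, nu)) # ~96 for the smooth channel
```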

  13. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
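
    The two decompositions being compared can be sketched side by side: eigenanalysis of the sample covariance (PCA-style) versus eigenanalysis of the double-centered pairwise-dissimilarity matrix (classical multidimensional scaling). The sketch below uses random stand-in data in place of the TSFS spectra, and assumes squared Euclidean distance as the dissimilarity; the study's exact measure may differ.

```python
# Eigenvalue-eigenvector decomposition (EED) of (a) the covariance and
# (b) a double-centered pairwise-dissimilarity matrix (classical MDS).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))              # 40 samples x 200 spectral channels

# (a) covariance-based EED (sample-space form)
Xc = X - X.mean(axis=0)
cov_vals, cov_vecs = np.linalg.eigh(Xc @ Xc.T / (len(X) - 1))
scores_cov = cov_vecs[:, ::-1][:, :2]       # leading factors

# (b) dissimilarity-based EED: double-center squared Euclidean distances
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
J = np.eye(len(X)) - 1.0 / len(X)           # centering matrix
B = -0.5 * J @ D2 @ J                       # double-centered Gram matrix
mds_vals, mds_vecs = np.linalg.eigh(B)
scores_mds = mds_vecs[:, ::-1][:, :2] * np.sqrt(mds_vals[::-1][:2])
print(scores_cov.shape, scores_mds.shape)   # two 2-D score sets to compare
```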

  14. Influence of pitch, twist, and taper on a blade's performance loss due to roughness

    Energy Technology Data Exchange (ETDEWEB)

    Tangler, J.L. [National Renewable Energy Laboratory, Golden, Colorado (United States)

    1997-08-01

    The purpose of this study was to determine the influence of blade geometric parameters such as pitch, twist, and taper on a blade's sensitivity to leading edge roughness. The approach began with an evaluation of available test data of performance degradation due to roughness effects for several rotors. In addition to airfoil geometry, this evaluation suggested that a rotor's sensitivity to roughness was also influenced by the blade geometric parameters. Parametric studies were conducted using the PROP computer code with wind-tunnel airfoil characteristics for smooth and rough surface conditions to quantify the performance loss due to roughness for tapered and twisted blades relative to a constant-chord, non-twisted blade at several blade pitch angles. The results indicate that a constant-chord, non-twisted blade pitched toward stall will have the greatest losses due to roughness. The use of twist, taper, and positive blade pitch angles all help reduce the angle-of-attack distribution along the blade for a given wind speed and the associated performance degradation due to roughness. (au)
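
    The mechanism claimed here, that twist, taper and positive pitch lower the angle-of-attack distribution along the blade, can be checked with a one-line inflow model, α(r) = arctan(V/(Ωr)) − twist(r) − pitch. The blade numbers below are invented for illustration and are not taken from the PROP study.

```python
# Local angle of attack along a blade for two pitch/twist settings,
# showing how washout and positive pitch lower alpha at a given wind speed.
import numpy as np

V, omega = 10.0, 6.0                        # wind speed (m/s), rotor speed (rad/s)
r = np.linspace(3.0, 15.0, 5)               # spanwise stations (m)
inflow = np.degrees(np.arctan2(V, omega * r))

for pitch, twist_root in [(0.0, 0.0), (2.0, 12.0)]:
    twist = twist_root * (1 - (r - r[0]) / (r[-1] - r[0]))  # linear washout
    alpha = inflow - twist - pitch
    print(f"pitch={pitch:4.1f} deg, twist_root={twist_root:4.1f} deg ->",
          np.round(alpha, 1))
```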

  15. Influence of pitch, twist, and taper on a blade's performance loss due to roughness

    Energy Technology Data Exchange (ETDEWEB)

    Tangler, J.L. [National Renewable Energy Lab., Golden, CO (United States)

    1996-12-31

    The purpose of this study was to determine the influence of blade geometric parameters such as pitch, twist, and taper on a blade's sensitivity to leading edge roughness. The approach began with an evaluation of available test data of performance degradation due to roughness effects for several rotors. In addition to airfoil geometry, this evaluation suggested that a rotor's sensitivity to roughness was also influenced by the blade geometric parameters. Parametric studies were conducted using the PROP computer code with wind-tunnel airfoil characteristics for smooth and rough surface conditions to quantify the performance loss due to roughness for tapered and twisted blades relative to a constant-chord, non-twisted blade at several blade pitch angles. The results indicate that a constant-chord, non-twisted blade pitched toward stall will have the greatest losses due to roughness. The use of twist, taper, and positive blade pitch angles all help reduce the angle-of-attack distribution along the blade for a given wind speed and the associated performance degradation due to roughness. 8 refs., 6 figs.

  16. Teaming in Two-Year Postsecondary Settings: An Approach to Providing Effective and Efficient Services for Students with Disabilities.

    Science.gov (United States)

    Bigaj, Stephen J.; Bazinet, Gregory P.

    1993-01-01

    Suggests a team approach for effectively and efficiently providing services for postsecondary students with disabilities. Reviews various teaming concepts and presents a framework for a postsecondary disability problem-solving team. (Author/JOW)

  17. Hybrid Radar Emitter Recognition Based on Rough k-Means Classifier and Relevance Vector Machine

    Science.gov (United States)

    Yang, Zhutian; Wu, Zhilu; Yin, Zhendong; Quan, Taifan; Sun, Hongjian

    2013-01-01

    Due to the increasing complexity of electromagnetic signals, there exists a significant challenge for recognizing radar emitter signals. In this paper, a hybrid recognition approach is presented that classifies radar emitter signals by exploiting the different separability of samples. The proposed approach comprises two steps, namely the primary signal recognition and the advanced signal recognition. In the former step, a novel rough k-means classifier, which comprises three regions, i.e., certain area, rough area and uncertain area, is proposed to cluster the samples of radar emitter signals. In the latter step, the samples within the rough boundary are used to train the relevance vector machine (RVM). Then RVM is used to recognize the samples in the uncertain area; therefore, the classification accuracy is improved. Simulation results show that, for recognizing radar emitter signals, the proposed hybrid recognition approach is more accurate, and presents lower computational complexity than traditional approaches. PMID:23344380
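
    The three-region assignment of the rough k-means step can be sketched with a Lingras-style distance-ratio rule: a sample whose nearest and second-nearest centroids are nearly equidistant falls into the rough (boundary) region instead of being assigned with certainty. The threshold and data below are illustrative; the paper's exact region definitions may differ.

```python
# Rough assignment step of a rough k-means classifier (Lingras-style):
# certain assignment when one centroid is clearly closest, otherwise the
# sample enters the rough (boundary) region shared by two clusters.
import numpy as np

def rough_assign(x, centroids, eps=1.25):
    d = np.linalg.norm(centroids - x, axis=1)
    order = np.argsort(d)
    nearest, second = int(order[0]), int(order[1])
    if d[second] / max(d[nearest], 1e-12) >= eps:
        return {"region": "certain", "clusters": [nearest]}
    return {"region": "rough", "clusters": [nearest, second]}

centroids = np.array([[0.0, 0.0], [4.0, 0.0]])
print(rough_assign(np.array([0.5, 0.2]), centroids))  # certain, cluster 0
print(rough_assign(np.array([2.1, 0.0]), centroids))  # rough boundary region
```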

  18. Proactive Approach for Safe Use of Antimicrobial Coatings in Healthcare Settings: Opinion of the COST Action Network AMiCI

    Directory of Open Access Journals (Sweden)

    Merja Ahonen

    2017-03-01

    Full Text Available Infections and infectious diseases are considered a major challenge to human health in healthcare units worldwide. This opinion paper was initiated by the EU COST Action network AMiCI (AntiMicrobial Coating Innovations) and focuses on scientific information essential for weighing the risks and benefits of antimicrobial surfaces in healthcare settings. Particular attention is drawn to nanomaterial-based antimicrobial surfaces on frequently touched areas in healthcare settings and to the potential of these nano-enabled coatings to induce (eco)toxicological hazards and antimicrobial resistance. Possibilities to minimize those risks, e.g., at the level of safe-by-design, are demonstrated.

  19. Rough case-based reasoning system for continuous casting

    Science.gov (United States)

    Su, Wenbin; Lei, Zhufeng

    2018-04-01

    Continuous casting occupies a pivotal position in the iron and steel industry. Rough set theory and case-based reasoning (CBR) were combined in the research and implementation of quality assurance for continuous casting billets, to improve the efficiency and accuracy of determining the processing parameters. The object-oriented method was applied to represent the continuous casting cases. The weights of the attributes were calculated by an algorithm based on rough set theory, and a retrieval mechanism for the continuous casting cases was designed. Some cases were adopted to test the retrieval mechanism; by analyzing the results, the influence of the retrieval attributes on determining the processing parameters was revealed. A comprehensive evaluation model was established using attribute recognition theory. According to the features of the defects, different methods were adopted to describe the quality condition of the continuous casting billet. By using the system, knowledge is not only inherited but also applied to adjust the processing parameters through case-based reasoning, so as to assure the quality of the continuous casting and improve the intelligence level of the continuous casting process.
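
    A compact sketch of the two ingredients combined here follows: rough-set attribute significance (the drop in the dependency degree γ when an attribute is removed) used as weights for nearest-neighbour case retrieval. The toy casting cases and attribute names are invented; the paper's exact weighting scheme may differ.

```python
# Rough-set attribute weights feeding weighted CBR retrieval (toy example).
from collections import defaultdict

def dependency(cases, attrs, decision):
    """gamma(attrs -> decision): fraction of cases in the positive region."""
    blocks = defaultdict(list)
    for c in cases:
        blocks[tuple(c[a] for a in attrs)].append(c[decision])
    consistent = sum(len(v) for v in blocks.values() if len(set(v)) == 1)
    return consistent / len(cases)

cases = [  # invented casting cases: speed/temp -> billet quality
    {"speed": "high", "temp": "high", "quality": "crack"},
    {"speed": "high", "temp": "low",  "quality": "ok"},
    {"speed": "low",  "temp": "high", "quality": "ok"},
    {"speed": "low",  "temp": "low",  "quality": "ok"},
]
attrs = ["speed", "temp"]
full = dependency(cases, attrs, "quality")
weights = {a: full - dependency(cases, [b for b in attrs if b != a], "quality")
           for a in attrs}                  # significance of each attribute

def retrieve(query, cases, weights):
    def sim(c):  # weighted matching coefficient over categorical attributes
        return sum(w * (query[a] == c[a]) for a, w in weights.items())
    return max(cases, key=sim)

print(weights)
print(retrieve({"speed": "high", "temp": "high"}, cases, weights))
```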