WorldWideScience

Sample records for rule-based down-scaling methodology

  1. Bioclim deliverable D8a: development of the rule-based down-scaling methodology for BIOCLIM Work-package 3

    International Nuclear Information System (INIS)

    2003-01-01

    The BIOCLIM project on modelling sequential Biosphere systems under Climate change for radioactive waste disposal is part of the EURATOM fifth European framework programme. The project was launched in October 2000 for a three-year period. The project aims at providing a scientific basis and practical methodology for assessing the possible long-term impacts of climate and environmental change on the safety of radioactive waste repositories in deep formations. Five work packages (WP) have been identified to fulfill the project objectives. One of the tasks of BIOCLIM WP3 was to develop a rule-based approach for down-scaling from the MoBidiC model of intermediate complexity in order to provide consistent estimates of monthly temperature and precipitation for the specific regions of interest to BIOCLIM (Central Spain, Central England and Northeast France, together with Germany and the Czech Republic). A statistical down-scaling methodology has been developed by Philippe Marbaix of CEA/LSCE for use with the second climate model of intermediate complexity used in BIOCLIM, CLIMBER-GREMLINS. The rule-based methodology assigns climate states or classes to a point on the time continuum of a region according to a combination of simple threshold values which can be determined from the coarse-scale climate model. Once climate states or classes have been defined, monthly temperature and precipitation climatologies are constructed using analogue stations identified from a database of present-day climate observations. The most appropriate climate classification for BIOCLIM purposes is the Koeppen/Trewartha scheme, which has the advantage of being empirical and requires only monthly averages of temperature and precipitation as input variables. Section 2 of this deliverable (D8a) outlines how each of the eight methodological steps has been undertaken for each of the three main BIOCLIM study regions (Central England, Northeast France and Central Spain) using Mo
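    The threshold-based assignment of climate classes from monthly temperature and precipitation can be illustrated with a minimal sketch. The thresholds below follow the general spirit of a Koeppen/Trewartha-style scheme but are simplified illustrative assumptions, not the exact BIOCLIM rules.

```python
# Minimal, illustrative rule-based climate classifier in the spirit of the
# Koeppen/Trewartha scheme. Thresholds are simplified assumptions, not the
# exact BIOCLIM rules.

def classify_climate(monthly_temp_c, monthly_precip_mm):
    """Assign a coarse climate class from 12 monthly means."""
    assert len(monthly_temp_c) == 12 and len(monthly_precip_mm) == 12
    t_max = max(monthly_temp_c)          # warmest month
    t_min = min(monthly_temp_c)          # coldest month
    p_annual = sum(monthly_precip_mm)    # annual precipitation

    if t_max < 10:
        return "polar"
    if p_annual < 400:                   # crude aridity threshold (assumption)
        return "dry"
    if t_min > 18:
        return "tropical"
    if t_min > 0:
        return "subtropical/temperate (mild winter)"
    return "continental (cold winter)"

# Example: a mid-latitude station with cool winters and moderate rainfall.
temps = [3, 4, 7, 10, 14, 17, 19, 19, 16, 11, 6, 4]
precip = [55, 45, 50, 45, 50, 55, 50, 55, 60, 65, 60, 60]
print(classify_climate(temps, precip))   # -> "subtropical/temperate (mild winter)"
```

    Once a class is assigned to a point in time, the analogue-station step would then supply the monthly climatology for that class from present-day observations.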

  2. New scale-down methodology from commercial to lab scale to optimize plant-derived soft gel capsule formulations on a commercial scale.

    Science.gov (United States)

    Oishi, Sana; Kimura, Shin-Ichiro; Noguchi, Shuji; Kondo, Mio; Kondo, Yosuke; Shimokawa, Yoshiyuki; Iwao, Yasunori; Itai, Shigeru

    2018-01-15

    A new scale-down methodology from commercial rotary die scale to laboratory scale was developed to optimize a plant-derived soft gel capsule formulation and eventually manufacture superior soft gel capsules on a commercial scale, in order to reduce the time and cost for formulation development. Animal-derived and plant-derived soft gel film sheets were prepared using an applicator on a laboratory scale and their physicochemical properties, such as tensile strength, Young's modulus, and adhesive strength, were evaluated. The tensile strength of the animal-derived and plant-derived soft gel film sheets was 11.7 MPa and 4.41 MPa, respectively. The Young's modulus of the animal-derived and plant-derived soft gel film sheets was 169 MPa and 17.8 MPa, respectively, and both sheets showed a similar adhesion strength of approximately 4.5-10 MPa. Using a D-optimal mixture design, plant-derived soft gel film sheets were prepared and optimized by varying their composition, including variations in the mass of κ-carrageenan, ι-carrageenan, oxidized starch and heat-treated starch. The physicochemical properties of the sheets were evaluated to determine the optimal formulation. Finally, plant-derived soft gel capsules were manufactured using the rotary die method and the prepared soft gel capsules showed equivalent or superior physical properties compared with pre-existing soft gel capsules. Therefore, we successfully developed a new scale-down methodology to optimize the formulation of plant-derived soft gel capsules on a commercial scale. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Sensor-based activity recognition using extended belief rule-based inference methodology.

    Science.gov (United States)

    Calzada, A; Liu, J; Nugent, C D; Wang, H; Martinez, L

    2014-01-01

    The recently developed extended belief rule-based inference methodology (RIMER+) recognizes the need for modeling the different types of information and uncertainty that usually coexist in real environments. A home setting with sensors located in different rooms and on different appliances can be considered a particularly relevant example of such an environment, and it brings a range of challenges for sensor-based activity recognition. Although RIMER+ has been designed as a generic decision model that could be applied in a wide range of situations, this paper discusses how the methodology can be adapted to recognize human activities using binary sensors within smart environments. An evaluation of RIMER+ against other state-of-the-art classifiers showed significant advantages in accuracy, efficiency and applicability, especially in situations of incomplete input data; it demonstrates the potential of this methodology and underpins further research on the topic.
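    A highly simplified sketch of the flavour of belief-rule activation over binary sensor input is given below. The sensors, activities and belief degrees are invented placeholders, and the plain normalised weighted sum stands in for the evidential reasoning combination used by the actual RIMER/RIMER+ methodology.

```python
# Highly simplified sketch of belief-rule activation for binary sensor input.
# Real RIMER/RIMER+ aggregates activated rules with the evidential reasoning
# algorithm; here a plain normalised weighted sum stands in for it.

RULES = [
    # (antecedent sensor pattern, belief distribution over activities)
    ({"kettle": 1, "fridge": 1}, {"prepare_drink": 0.9, "prepare_meal": 0.1}),
    ({"oven": 1, "fridge": 1},   {"prepare_meal": 0.8, "prepare_drink": 0.2}),
    ({"tap": 1},                 {"wash_up": 0.7, "prepare_drink": 0.3}),
]

def infer(observation):
    """Activate rules by how well the observed sensors match their antecedents."""
    belief = {}
    total = 0.0
    for antecedent, consequent in RULES:
        matched = sum(observation.get(s, 0) == v for s, v in antecedent.items())
        weight = matched / len(antecedent)          # activation weight in [0, 1]
        total += weight
        for activity, degree in consequent.items():
            belief[activity] = belief.get(activity, 0.0) + weight * degree
    return {a: b / total for a, b in belief.items()} if total else belief

print(infer({"kettle": 1, "fridge": 1, "tap": 0}))
```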

  4. Methodological approaches based on business rules

    Directory of Open Access Journals (Sweden)

    Anca Ioana ANDREESCU

    2008-01-01

    Full Text Available Business rules and business processes are essential artifacts in defining the requirements of a software system. Business processes capture business behavior, while rules connect processes and thus control processes and business behavior. Traditionally, rules are scattered inside application code. This approach makes it very difficult to change the rules and shortens the life cycle of the software system. Because rules change more quickly than the application itself, it is desirable to externalize the rules and move them outside the application. This paper analyzes and evaluates three well-known business rules approaches. It also outlines some critical factors that have to be taken into account when deciding to introduce business rules facilities in a software system. Based on the concept of explicit manipulation of business rules in a software system, the need for a general approach based on business rules is discussed.
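    The argument for externalizing rules can be made concrete with a minimal sketch: the rule definitions live outside the application logic (here as plain data that could equally come from JSON, a database or a rule engine), so the policy can change without touching program code. The discount rules below are hypothetical examples, not taken from the paper.

```python
# Minimal sketch of externalized business rules: rules are data, not code.
RULES = [
    {"name": "vip_discount",  "if": lambda o: o["customer_tier"] == "vip", "then": {"discount": 0.15}},
    {"name": "bulk_discount", "if": lambda o: o["quantity"] >= 100,        "then": {"discount": 0.10}},
]

def apply_rules(order):
    """Apply the first matching rule; the application never hard-codes the policy."""
    for rule in RULES:
        if rule["if"](order):
            return {**order, **rule["then"], "applied_rule": rule["name"]}
    return {**order, "discount": 0.0, "applied_rule": None}

print(apply_rules({"customer_tier": "vip", "quantity": 10}))
```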

  5. A top-down design methodology and its implementation for VCSEL-based optical links design

    Science.gov (United States)

    Li, Jiguang; Cao, Mingcui; Cai, Zilong

    2005-01-01

    In order to find the optimal design for a given specification of an optical communication link, an integrated simulation of the electronic, optoelectronic, and optical components of a complete system is required. It is very important to be able to simulate at both the system level and the detailed model level. This kind of model is feasible thanks to the high potential of the Verilog-AMS language. In this paper, we propose an effective top-down design methodology and employ it in the development of a complete VCSEL-based optical link simulation. The principle of the top-down methodology is that development proceeds from the system level to the device level. To design a hierarchical model for VCSEL-based optical links, the design framework is organized in three levels of hierarchy. The models are developed and implemented in Verilog-AMS, and the model parameters are fitted to measured data. A sample transient simulation demonstrates the functioning of our implementation. Suggestions for future directions in top-down methodology for optoelectronic systems technology are also presented.

  6. Bioclim Deliverable D8b: development of the physical/statistical down-scaling methodology and application to climate model Climber for BIOCLIM Work-package 3

    International Nuclear Information System (INIS)

    2003-01-01

    too coarse and simplified. This is why we first need to find these 'physically based' relations between large-scale model outputs and regional-scale predictors. This is a solution to the specific problem of down-scaling from an intermediate complexity model such as CLIMBER. There are several other types of down-scaling methodologies, such as the dynamical and rule-based methods presented in other BIOCLIM deliverables. A distinctive feature of the present method is that it attempts to use physical considerations in the down-scaling, while a detailed 'dynamical' approach is out of reach because CLIMBER mainly provides the average climate; an input of time-variability at various scales would be necessary for a more dynamical approach. This report is organised as follows: Section 2 relates to the design and validation of the method, while Section 3 reports the application to BIOCLIM simulations. We first present the data sources employed, which are the model results and the observed climatology. We then present the principles of the down-scaling method, the formulation of the predictors and the calibration of the statistical model, including results for the last glacial maximum. In Section 3, the results are first presented as time series for each site, then as maps at specific times, or snapshots
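    The calibration step of a statistical down-scaling transfer function can be sketched as follows. The predictors, coefficients and data values are hypothetical; they only illustrate the principle of fitting a relation on present-day data and then applying it to coarse-model output.

```python
# Illustrative sketch of a statistical down-scaling step: a linear transfer
# function calibrated on present-day data maps coarse-model predictors to a
# local monthly temperature. Predictors and data values are hypothetical.
import numpy as np

# Calibration data: rows = months/years, columns = coarse-scale predictors
# (e.g. grid-cell temperature, large-scale precipitation).
X_cal = np.array([[5.0, 60.0], [8.0, 50.0], [12.0, 45.0], [16.0, 40.0],
                  [19.0, 35.0], [14.0, 55.0]])
y_cal = np.array([6.2, 9.1, 13.4, 17.0, 20.3, 15.1])   # observed local temperature

# Least-squares fit of y = a0 + a1*T_coarse + a2*P_coarse
A = np.column_stack([np.ones(len(X_cal)), X_cal])
coeffs, *_ = np.linalg.lstsq(A, y_cal, rcond=None)

def downscale(t_coarse, p_coarse):
    """Apply the calibrated transfer function to new coarse-model output."""
    return coeffs[0] + coeffs[1] * t_coarse + coeffs[2] * p_coarse

print(downscale(10.0, 48.0))   # local estimate for a new coarse-model state
```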

  7. Bioclim Deliverable D6b: application of statistical down-scaling within the BIOCLIM hierarchical strategy: methods, data requirements and underlying assumptions

    International Nuclear Information System (INIS)

    2004-01-01

    -study regions were identified, together with the additional issues which arise in applying these techniques to output from the BIOCLIM simulations. This preliminary work is described in this BIOCLIM technical note. It provides an overview of statistical down-scaling methods, together with their underlying assumptions and advantages/disadvantages. Specific issues relating to their application within the BIOCLIM context (i.e., application to the IPSL C M4 D snapshot simulations) are identified, for example, the stationarity issue. The predictor and predictand data sets that would be required to implement these methods within the BIOCLIM hierarchical strategy are also outlined, together with the methodological steps involved. Implementation of these techniques was delayed in order to give priority to the application of the rule-based down-scaling method developed in WP3 to WP2 EMIC output (see Deliverable D8a). This task was not originally planned, but has allowed more comprehensive comparison and evaluation of the BIOCLIM scenarios and down-scaling methods to be undertaken

  8. Scalability of a Methodology for Generating Technical Trading Rules with GAPs Based on Risk-Return Adjustment and Incremental Training

    Science.gov (United States)

    de La Cal, E. A.; Fernández, E. M.; Quiroga, R.; Villar, J. R.; Sedano, J.

    In previous works a methodology was defined, based on the design of a genetic algorithm GAP and an incremental training technique adapted to the learning of series of stock market values. The GAP technique consists of a fusion of GP and GA. The GAP algorithm implements the automatic search for crisp trading rules, taking as training objectives both the optimization of the return obtained and the minimization of the assumed risk. Applying the proposed methodology, rules have been obtained for a period of eight years of the S&P500 index. The achieved adjustment of the return-risk relation has generated rules whose returns in the testing period are far superior to those obtained with the usual methodologies, and even clearly superior to Buy&Hold. This work proves that the proposed methodology is valid for different assets in a different market than in previous work.
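    A fitness function trading return against risk, of the general kind used to train such rules, can be sketched in a few lines. The weighting and the choice of the standard deviation of returns as the risk measure are assumptions for the sketch, not the exact GAP objective.

```python
# Illustrative return/risk fitness for evaluating a trading rule on a test
# period. The risk measure and weighting are assumptions, not the GAP objective.
import statistics

def fitness(daily_returns, risk_aversion=0.5):
    """Reward cumulative return, penalise volatility of the rule's returns."""
    total_return = 1.0
    for r in daily_returns:
        total_return *= (1.0 + r)
    risk = statistics.stdev(daily_returns)
    return (total_return - 1.0) - risk_aversion * risk

# Two hypothetical rules over the same test period: similar return, different risk.
print(fitness([0.01, -0.002, 0.008, 0.004, -0.001]))
print(fitness([0.05, -0.045, 0.06, -0.05, 0.004]))
```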

  9. Fuzzy-Rule-Based Object Identification Methodology for NAVI System

    Directory of Open Access Journals (Sweden)

    Yaacob Sazali

    2005-01-01

    Full Text Available We present an object identification methodology applied in a navigation assistance for visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a headgear-mounted digital video camera, and a pair of stereo earphones. The image captured by the camera is processed by the SBPS to generate a specially structured stereo sound suitable for helping visually impaired people understand the presence of objects/obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the processing of the image. A concept of object preference is included in the image processing scheme, and this concept is realized using a fuzzy-rule base. The blind users are trained with the stereo sound produced by NAVI to achieve collision-free autonomous navigation.

  10. Fuzzy-Rule-Based Object Identification Methodology for NAVI System

    Science.gov (United States)

    Nagarajan, R.; Sainarayanan, G.; Yaacob, Sazali; Porle, Rosalyn R.

    2005-12-01

    We present an object identification methodology applied in a navigation assistance for visually impaired (NAVI) system. The NAVI has a single board processing system (SBPS), a headgear-mounted digital video camera, and a pair of stereo earphones. The image captured by the camera is processed by the SBPS to generate a specially structured stereo sound suitable for helping visually impaired people understand the presence of objects/obstacles in front of them. The image processing stage is designed to identify the objects in the captured image. Edge detection and edge-linking procedures are applied in the processing of the image. A concept of object preference is included in the image processing scheme, and this concept is realized using a fuzzy-rule base. The blind users are trained with the stereo sound produced by NAVI to achieve collision-free autonomous navigation.
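    A fuzzy-rule-based "object preference" score of the kind described above can be sketched as follows. The membership functions, the two rules and the preference levels are illustrative assumptions, not the rule base used in NAVI.

```python
# Minimal sketch of a fuzzy-rule-based object preference score. Membership
# functions and rules are illustrative assumptions.

def mu_large(area_fraction):      # membership of "object is large"
    return max(0.0, min(1.0, (area_fraction - 0.05) / 0.25))

def mu_central(offset_fraction):  # membership of "object is near image centre"
    return max(0.0, min(1.0, 1.0 - offset_fraction / 0.5))

def preference(area_fraction, offset_fraction):
    """Combine two fuzzy rules:
       R1: IF large AND central THEN preference high
       R2: IF NOT large AND NOT central THEN preference low."""
    high = min(mu_large(area_fraction), mu_central(offset_fraction))
    low = min(1.0 - mu_large(area_fraction), 1.0 - mu_central(offset_fraction))
    # Weighted-average defuzzification with high=1.0, low=0.2 (assumed levels).
    return (high * 1.0 + low * 0.2) / (high + low) if (high + low) else 0.5

print(preference(area_fraction=0.20, offset_fraction=0.10))  # prominent object
print(preference(area_fraction=0.03, offset_fraction=0.45))  # peripheral clutter
```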

  11. Methodological approaches based on business rules

    OpenAIRE

    Anca Ioana ANDREESCU; Adina UTA

    2008-01-01

    Business rules and business processes are essential artifacts in defining the requirements of a software system. Business processes capture business behavior, while rules connect processes and thus control processes and business behavior. Traditionally, rules are scattered inside application code. This approach makes it very difficult to change rules and shorten the life cycle of the software system. Because rules change more quickly than the application itself, it is desirable to externalize...

  12. Rule-based decision making model

    International Nuclear Information System (INIS)

    Sirola, Miki

    1998-01-01

    A rule-based decision making model is designed in the G2 environment. A theoretical and methodological frame for the model is composed and motivated. The rule-based decision making model is based on object-oriented modelling, knowledge engineering and decision theory. The idea of a safety objective tree is utilized. Advanced rule-based methodologies are applied. A general decision making model, the 'decision element', is constructed. The strategy planning of the decision element is based on, e.g., value theory and utility theory. A hypothetical process model is built to provide input data for the decision element. The basic principle of the object model in decision making is division into tasks. Probability models are used in characterizing component availabilities. Bayes' theorem is used to recalculate the probability figures when new information is obtained. The model includes simple learning features to save the solution path. A decision analytic interpretation is given to the decision making process. (author)
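    The Bayes update of a component availability figure when new information arrives can be made concrete with a small worked example. The prior and the test likelihoods below are illustrative numbers, not values from the model.

```python
# Worked Bayes' theorem update of a component availability estimate when a new
# (imperfect) test result arrives. Numbers are illustrative assumptions.

def bayes_update(p_available, p_pass_given_available, p_pass_given_failed, test_passed):
    """Return P(available | test result) from the prior and test likelihoods."""
    if test_passed:
        num = p_pass_given_available * p_available
        den = num + p_pass_given_failed * (1.0 - p_available)
    else:
        num = (1.0 - p_pass_given_available) * p_available
        den = num + (1.0 - p_pass_given_failed) * (1.0 - p_available)
    return num / den

prior = 0.95                    # prior availability of the component
posterior = bayes_update(prior, p_pass_given_available=0.99,
                         p_pass_given_failed=0.10, test_passed=False)
print(posterior)                # availability drops sharply after a failed test
```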

  13. An automated laboratory-scale methodology for the generation of sheared mammalian cell culture samples.

    Science.gov (United States)

    Joseph, Adrian; Goldrick, Stephen; Mollet, Michael; Turner, Richard; Bender, Jean; Gruber, David; Farid, Suzanne S; Titchener-Hooker, Nigel

    2017-05-01

    Continuous disk-stack centrifugation is typically used for the removal of cells and cellular debris from mammalian cell culture broths at manufacturing-scale. The use of scale-down methods to characterise disk-stack centrifugation performance enables substantial reductions in material requirements and allows a much wider design space to be tested than is currently possible at pilot-scale. The process of scaling down centrifugation has historically been challenging due to the difficulties in mimicking the Energy Dissipation Rates (EDRs) in typical machines. This paper describes an alternative and easy-to-assemble automated capillary-based methodology to generate levels of EDRs consistent with those found in a continuous disk-stack centrifuge. Variations in EDR were achieved through changes in capillary internal diameter and the flow rate of operation through the capillary. The EDRs found to match the levels of shear in the feed zone of a pilot-scale centrifuge using the experimental method developed in this paper (2.4×10⁵ W/kg) are consistent with those obtained through previously published computational fluid dynamic (CFD) studies (2.0×10⁵ W/kg). Furthermore, this methodology can be incorporated into existing scale-down methods to model the process performance of continuous disk-stack centrifuges. This was demonstrated through the characterisation of culture hold time, culture temperature and EDRs on centrate quality. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
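    A back-of-envelope estimate of the mean EDR in a capillary can be obtained as the hydraulic power dissipated per unit mass of fluid held in the capillary, assuming fully developed flow in which the pressure work dissipates within the capillary. The operating point below (bore, length, flow rate, pressure drop) is hypothetical, not the calibration used in the paper.

```python
# Back-of-envelope mean energy dissipation rate (EDR) in a capillary:
# EDR = dP * Q / (rho * V), with dP an assumed measured pressure drop.
import math

def mean_edr(pressure_drop_pa, flow_rate_m3s, diameter_m, length_m, density=1000.0):
    """Mean EDR in W/kg for fully developed flow through a capillary."""
    volume = math.pi * (diameter_m / 2.0) ** 2 * length_m   # capillary volume, m^3
    mass = density * volume                                  # fluid mass held, kg
    return pressure_drop_pa * flow_rate_m3s / mass

# Hypothetical operating point: 0.5 mm bore, 50 mm long, 8 mL/s, 3 bar drop.
print(f"{mean_edr(3e5, 8e-6, 0.5e-3, 50e-3):.2e} W/kg")
```

    With these assumed numbers the estimate lands near the 10⁵ W/kg magnitude quoted above, which is the regime the capillary device is designed to reproduce.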

  14. IDENTIFICATION OF DISEASES IN CORAL REEFS USING RIPPLE DOWN RULES

    Directory of Open Access Journals (Sweden)

    Agus Cahyo Nugroho

    2018-01-01

    Full Text Available Along with the development of technology, systems have been developed that are capable of adopting human thinking and processes in the form of expert systems containing specific knowledge, so that anyone can use them to solve a specific problem, namely the diagnosis of coral reef disease. The purpose of this study is to develop a web-based expert system for diagnosing coral reef disease using PHP with a MySQL database. The expert system for diagnosing coral reef disease uses the Ripple Down Rules (RDR) method and aims to discover symptoms, presented in the form of questions, that can diagnose coral reef disease through the website. The web-based expert system is able to recognize types of coral reef disease after a consultation in which the user answers a few questions displayed by the expert system application, and it can infer several types of coral reef disease. Known coral reef disease data are adapted into rules made for matching the symptoms of coral reef disease.
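    The Ripple Down Rules structure itself is compact enough to sketch: each rule carries an "except" branch followed when it fires and an "else" branch followed when it does not, and the conclusion returned is that of the last rule satisfied. The symptom names and diseases below are illustrative placeholders, not the knowledge base of this system.

```python
# Minimal sketch of Ripple Down Rules (RDR). Symptoms/diseases are placeholders.

class Rule:
    def __init__(self, condition, conclusion, except_rule=None, else_rule=None):
        self.condition = condition        # function: case -> bool
        self.conclusion = conclusion      # diagnosis suggested by this rule
        self.except_rule = except_rule    # followed when condition is True
        self.else_rule = else_rule        # followed when condition is False

def evaluate(rule, case, best=None):
    """Walk the RDR tree and return the conclusion of the last satisfied rule."""
    if rule is None:
        return best
    if rule.condition(case):
        return evaluate(rule.except_rule, case, rule.conclusion)
    return evaluate(rule.else_rule, case, best)

# Tiny knowledge base: a default rule refined by two exception rules.
root = Rule(lambda c: True, "healthy",
            except_rule=Rule(lambda c: c.get("white_band"), "white band disease",
                             else_rule=Rule(lambda c: c.get("dark_spots"),
                                            "dark spot disease")))

print(evaluate(root, {"white_band": False, "dark_spots": True}))
```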

  15. Decision Tree Repository and Rule Set Based Mingjiang River Estuarine Wetlands Classification

    Science.gov (United States)

    Zhang, W.; Li, X.; Xiao, W.

    2018-05-01

    The increasing urbanization and industrialization have led to wetland losses in the estuarine area of the Mingjiang River over the past three decades, and increasing attention has been given to producing wetland inventories using remote sensing and GIS technology. Due to inconsistencies in training sites and training samples, traditional pixel-based image classification methods cannot achieve comparable results across different organizations. Object-oriented image classification techniques show great potential to solve this problem, and Landsat moderate-resolution remote sensing images are widely used to fulfill this requirement. Firstly, standardized atmospheric correction and spectrally high-fidelity texture feature enhancement were conducted before implementing the object-oriented wetland classification method in eCognition. Secondly, we performed the multi-scale segmentation procedure, taking the scale, hue, shape, compactness and smoothness of the image into account to obtain the appropriate parameters; using a top-down region merge algorithm starting from the single-pixel level, the optimal texture segmentation scale for the different types of features was confirmed. The segmented objects are then used as the classification units to calculate spectral information such as the mean, maximum, minimum, brightness and normalized values; the area, length, tightness and shape rule of each image object (spatial features) and texture features such as the mean, variance and entropy of the image objects are used as classification features of the training samples. Based on the reference images and the sampling points of an on-the-spot investigation, typical training samples were selected uniformly and randomly for each type of ground object. The spectral, texture and spatial characteristics of each type of feature in each feature layer, together with their corresponding ranges of values, are used to create the decision tree repository. Finally, with the help of high resolution reference images, the
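    A fragment of the kind of rule set such a decision tree repository stores might look as follows. The feature names and thresholds are hypothetical, not the values derived in the study.

```python
# Illustrative fragment of a rule set for object-based wetland classification.
# Feature names and thresholds are hypothetical.

def classify_object(obj):
    """obj: dict of per-object features computed after segmentation."""
    if obj["ndwi_mean"] > 0.3:                       # strongly wet objects
        return "open water"
    if obj["ndvi_mean"] > 0.5 and obj["brightness"] < 0.35:
        return "marsh vegetation"
    if obj["ndvi_mean"] > 0.3 and obj["texture_entropy"] > 2.0:
        return "mudflat with sparse vegetation"
    return "non-wetland"

segment = {"ndwi_mean": 0.1, "ndvi_mean": 0.55, "brightness": 0.3,
           "texture_entropy": 1.4}
print(classify_object(segment))   # -> "marsh vegetation"
```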

  16. Depression-Biased Reverse Plasticity Rule Is Required for Stable Learning at Top-Down Connections

    Science.gov (United States)

    Burbank, Kendra S.; Kreiman, Gabriel

    2012-01-01

    Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body. PMID:22396630
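    The reversed, depression-biased update described above can be sketched in a few lines. The parameter values and the exponential time window are illustrative assumptions, not the exact model of the paper.

```python
# Sketch of a depression-biased, temporally reversed STDP (rSTDP) update for a
# top-down synapse: potentiation when the POST-synaptic spike precedes the
# pre-synaptic spike, depression otherwise, with depression slightly stronger.
import math

A_PLUS, A_MINUS = 0.010, 0.012   # depression bias: A_MINUS > A_PLUS
TAU = 20.0                        # STDP time constant, ms (assumed)

def rstdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair at a top-down synapse."""
    dt = t_pre - t_post           # reversed rule: post-before-pre potentiates
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU)
    return -A_MINUS * math.exp(dt / TAU)

print(rstdp_dw(t_pre=15.0, t_post=10.0))   # post led pre -> potentiation
print(rstdp_dw(t_pre=10.0, t_post=15.0))   # pre led post -> depression
```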

  17. Depression-biased reverse plasticity rule is required for stable learning at top-down connections.

    Directory of Open Access Journals (Sweden)

    Kendra S Burbank

    Full Text Available Top-down synapses are ubiquitous throughout neocortex and play a central role in cognition, yet little is known about their development and specificity. During sensory experience, lower neocortical areas are activated before higher ones, causing top-down synapses to experience a preponderance of post-synaptic activity preceding pre-synaptic activity. This timing pattern is the opposite of that experienced by bottom-up synapses, which suggests that different versions of spike-timing dependent synaptic plasticity (STDP) rules may be required at top-down synapses. We consider a two-layer neural network model and investigate which STDP rules can lead to a distribution of top-down synaptic weights that is stable, diverse and avoids strong loops. We introduce a temporally reversed rule (rSTDP) where top-down synapses are potentiated if post-synaptic activity precedes pre-synaptic activity. Combining analytical work and integrate-and-fire simulations, we show that only depression-biased rSTDP (and not classical STDP) produces stable and diverse top-down weights. The conclusions did not change upon addition of homeostatic mechanisms, multiplicative STDP rules or weak external input to the top neurons. Our prediction for rSTDP at top-down synapses, which are distally located, is supported by recent neurophysiological evidence showing the existence of temporally reversed STDP in synapses that are distal to the post-synaptic cell body.

  18. Design of scaled down structural models

    Science.gov (United States)

    Simitses, George J.

    1994-07-01

    In the aircraft industry, full scale and large component testing is a very necessary, time consuming, and expensive process. It is essential to find ways by which this process can be minimized without loss of reliability. One possible alternative is the use of scaled down models in testing and use of the model test results in order to predict the behavior of the larger system, referred to herein as prototype. This viewgraph presentation provides justifications and motivation for the research study, and it describes the necessary conditions (similarity conditions) for two structural systems to be structurally similar with similar behavioral response. Similarity conditions provide the relationship between a scaled down model and its prototype. Thus, scaled down models can be used to predict the behavior of the prototype by extrapolating their experimental data. Since satisfying all similarity conditions simultaneously is in most cases impractical, distorted models with partial similarity can be employed. Establishment of similarity conditions, based on the direct use of the governing equations, is discussed and their use in the design of models is presented. Examples include the use of models for the analysis of cylindrical bending of orthotropic laminated beam plates, of buckling of symmetric laminated rectangular plates subjected to uniform uniaxial compression and shear, applied individually, and of vibrational response of the same rectangular plates. Extensions and future tasks are also described.

  19. Scaling down

    Directory of Open Access Journals (Sweden)

    Ronald L Breiger

    2015-11-01

    Full Text Available While “scaling up” is a lively topic in network science and Big Data analysis today, my purpose in this essay is to articulate an alternative problem, that of “scaling down,” which I believe will also require increased attention in coming years. “Scaling down” is the problem of how macro-level features of Big Data affect, shape, and evoke lower-level features and processes. I identify four aspects of this problem: the extent to which findings from studies of Facebook and other Big-Data platforms apply to human behavior at the scale of church suppers and department politics where we spend much of our lives; the extent to which the mathematics of scaling might be consistent with behavioral principles, moving beyond a “universal” theory of networks to the study of variation within and between networks; and how a large social field, including its history and culture, shapes the typical representations, interactions, and strategies at local levels in a text or social network.

  20. Quantifying reactor safety margins: Part 1: An overview of the code scaling, applicability, and uncertainty evaluation methodology

    International Nuclear Information System (INIS)

    Boyack, B.E.; Duffey, R.B.; Griffith, P.

    1988-01-01

    In August 1988, the Nuclear Regulatory Commission (NRC) approved the final version of a revised rule on the acceptance of emergency core cooling systems (ECCS) entitled "Emergency Core Cooling System; Revisions to Acceptance Criteria." The revised rule states that an alternate ECCS performance analysis, based on best-estimate methods, may be used to provide more realistic estimates of plant safety margins, provided the licensee quantifies the uncertainty of the estimates and includes that uncertainty when comparing the calculated results with prescribed acceptance limits. To support the revised ECCS rule, the NRC and its contractors and consultants have developed and demonstrated a method called the Code Scaling, Applicability, and Uncertainty (CSAU) evaluation methodology. It is an auditable, traceable, and practical method for combining quantitative analyses and expert opinions to arrive at computed values of uncertainty. This paper provides an overview of the CSAU evaluation methodology and its application to a postulated cold-leg, large-break loss-of-coolant accident in a Westinghouse four-loop pressurized water reactor with 17 × 17 fuel. The code selected for this demonstration of the CSAU methodology was TRAC-PF1/MOD1, Version 14.3. 23 refs., 5 figs., 1 tab

  1. RANS based CFD methodology for a real scale 217-pin wire-wrapped fuel assembly of KAERI PGSFR

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Jae-Ho, E-mail: jhjeong@kaeri.re.kr [Korea Atomic Energy Research Institute, 989-111 Daedeok-daero, Yuseoung-gu, Daejeon (Korea, Republic of); Song, Min-Seop [Department of Nuclear Engineering, Seoul National University, 559 Gwanak-ro, Gwanak-gu, Seoul (Korea, Republic of); Lee, Kwi-Lim [Korea Atomic Energy Research Institute, 989-111 Daedeok-daero, Yuseoung-gu, Daejeon (Korea, Republic of)

    2017-03-15

    Highlights: • This paper presents a practical RANS-based CFD methodology applicable to the real-scale 217-pin wire-wrapped fuel assembly of the KAERI PGSFR. • A key point of differentiation of the RANS-based CFD methodology in this study is the adoption of an innovative grid generation method using a Fortran-based in-house code with a GGI function in a general-purpose commercial CFD code, CFX. • The RANS-based CFD methodology is implemented with a high-resolution scheme and the SST turbulence model for the 7-pin, 37-pin, and 127-pin wire-wrapped fuel assemblies of PNC and JNC. Furthermore, the RANS-based CFD methodology can be successfully extended to the real-scale 217-pin wire-wrapped fuel bundles of the KAERI PGSFR. • Three-dimensional thermal-hydraulic characteristics have also been investigated briefly. - Abstract: This paper presents a suitable way towards a practical RANS (Reynolds Averaged Navier-Stokes simulation) based CFD (Computational Fluid Dynamics) methodology applicable to the real-scale 217-pin wire-wrapped fuel assembly of the KAERI (Korea Atomic Energy Research Institute) PGSFR (Prototype Gen-IV Sodium-cooled Fast Reactor). The main purpose of the current study is to support the licensing of the KAERI PGSFR core safety and to elucidate the thermal-hydraulic characteristics in a 217-pin wire-wrapped fuel assembly of the KAERI PGSFR. A key point of differentiation of the RANS-based CFD methodology in this study is the adoption of an innovative grid generation method using a Fortran-based in-house code with a GGI (General Grid Interface) function in a general-purpose commercial CFD code, CFX. The innovative grid generation method with the GGI function makes it possible to simulate the real wire shape while minimizing cell skewness. The RANS-based CFD methodology is implemented with a high-resolution scheme in the convection term and the SST (Shear Stress Transport) turbulence model for the 7-pin, 37-pin, and 127-pin wire-wrapped fuel assemblies of PNC (Power reactor and Nuclear fuel

  2. Scaling Rule for Very Shallow Trench IGBT toward CMOS Process Compatibility

    OpenAIRE

    Tanaka, Masahiro; Omura, Ichiro

    2012-01-01

    Deep trench gates are used in the latest IGBTs to improve device performance. Because the structure differs greatly from that of deep-submicron CMOS, there is no process compatibility between CMOS devices and trench-gate IGBTs. We propose an IGBT scaling rule for shrinking the IGBT cell structure both horizontally and vertically. The scaling rule is theoretically derived from structure-based equations. Device performance improvement was also predicted by TCAD simulations, even with a very shallow trench gate. The rule enables t...

  3. The fractional scaling methodology (FSM) Part 1. methodology development

    International Nuclear Information System (INIS)

    Novak Zuber; Ivan Catton; Upendra S Rohatgi; Wolfgang Wulff

    2005-01-01

    Full text of publication follows: a quantitative methodology is developed, based on the concepts of hierarchy and synthesis, to integrate and organize information and data. The methodology uses scaling to synthesize experimental data and analytical results, and to provide quantitative criteria for evaluating the effects of the various design and operating parameters that influence processes in a complex system such as a nuclear power plant or a related test facility. Synthesis and scaling are performed on three hierarchical levels: the process, component and system levels. Scaling on the process level determines the effect of a selected process on a particular state variable during a selected scenario. At the component level this scaling determines the effects various processes have on a state variable, and it ranks the processes according to their importance by the magnitude of the fractional change they cause on that state variable. At the system level the scaling determines the governing processes and corresponding components, ranking these in the order of importance according to their effect on the fractional change of system-wide state variables. The scaling methodology reveals on all levels the fractional change of state variables and is therefore called the Fractional Scaling Methodology (FSM). FSM synthesizes process parameters and assigns to each thermohydraulic process a dimensionless effect metric Ω = ωt, which is the product of the specific rate of fractional change ω and the characteristic time t. The rate of fractional change ω is the ratio of the process transport rate to the content of a preserved quantity in a component. The effect metric Ω quantifies the contribution of the process to the fractional change of a state variable in a given component. Ordering of the component effect metrics provides the hierarchy of processes in a component, then in all components and the system. FSM separates quantitatively dominant from minor processes and components and
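    The effect metric defined above is simple enough to compute directly: ω is the process transport rate divided by the content of the conserved quantity in the component, and Ω = ωt ranks the processes. The two processes and the numbers below are illustrative, not taken from a real plant analysis.

```python
# Sketch of the Fractional Scaling Methodology effect metric Omega = omega * t,
# with omega = transport rate / content of the conserved quantity. The process
# names and numbers are illustrative assumptions.

def effect_metric(transport_rate, content, characteristic_time):
    omega = transport_rate / content          # fractional change per second
    return omega * characteristic_time        # dimensionless effect metric Omega

processes = {
    # name: (transport rate [kJ/s], energy content [kJ])
    "break flow energy loss": (500.0, 2.0e5),
    "wall heat release":      (50.0,  2.0e5),
}

t_char = 100.0   # characteristic time of the scenario phase, s (assumed)
ranking = sorted(((effect_metric(rate, content, t_char), name)
                  for name, (rate, content) in processes.items()), reverse=True)
for omega_t, name in ranking:
    print(f"{name}: Omega = {omega_t:.3f}")
```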

  4. Application of code scaling applicability and uncertainty methodology to the large break loss of coolant

    International Nuclear Information System (INIS)

    Young, M.Y.; Bajorek, S.M.; Nissley, M.E.

    1998-01-01

    In the late 1980s, after completion of an extensive research program, the United States Nuclear Regulatory Commission (USNRC) amended its regulations (10CFR50.46) to allow the use of realistic physical models to analyze the loss of coolant accident (LOCA) in light water reactors. Prior to this time, the evaluation of this accident was subject to a prescriptive set of rules (Appendix K of the regulations) requiring conservative models and assumptions to be applied simultaneously, leading to very pessimistic estimates of the impact of this accident on the reactor core. The rule change therefore promised to provide significant benefits to owners of power reactors, allowing them to increase output. In response to the rule change, a method called code scaling, applicability and uncertainty (CSAU) was developed to apply realistic methods while properly taking into account data uncertainty, uncertainty in physical modeling and plant variability. The method was claimed to be structured, traceable, and practical, but was met with some criticism when first demonstrated. In 1996, the USNRC approved a methodology, based on CSAU, developed by a group led by Westinghouse. The lessons learned in this application of CSAU will be summarized. Some of the issues raised concerning the validity and completeness of the CSAU methodology will also be discussed. (orig.)

  5. A Methodology for Multiple Rule System Integration and Resolution Within a Singular Knowledge Base

    Science.gov (United States)

    Kautzmann, Frank N., III

    1988-01-01

    Expert systems which support knowledge representation by qualitative modeling techniques experience problems when called upon to support integrated views embodying description and explanation, especially when other factors such as multiple causality, competing rule model resolution, and multiple uses of knowledge representation are included. A series of prototypes is being developed to demonstrate the feasibility of automating the processes of systems engineering, design and configuration, and diagnosis and fault management. The study involves not only a generic knowledge representation; it must also support multiple views at varying levels of description and interaction between physical elements, systems, and subsystems. Moreover, it will involve models of description and explanation for each level. This multiple-model feature requires the development of control methods between rule systems and heuristics on a meta-level for each expert system involved in an integrated and larger class of expert system. The broadest possible category of interacting expert systems is described, along with a general methodology for the knowledge representation and control of mutually exclusive rule systems.

  6. Large scale comparative codon-pair context analysis unveils general rules that fine-tune evolution of mRNA primary structure.

    Directory of Open Access Journals (Sweden)

    Gabriela Moura

    Full Text Available BACKGROUND: Codon usage and codon-pair context are important gene primary structure features that influence mRNA decoding fidelity. In order to identify general rules that shape codon-pair context and minimize mRNA decoding error, we have carried out a large scale comparative codon-pair context analysis of 119 fully sequenced genomes. METHODOLOGIES/PRINCIPAL FINDINGS: We have developed mathematical and software tools for large scale comparative codon-pair context analysis. These methodologies unveiled general and species-specific codon-pair context rules that govern evolution of mRNAs in the three domains of life. We show that evolution of bacterial and archaeal mRNA primary structure is mainly dependent on constraints imposed by the translational machinery, while in eukaryotes DNA methylation and tri-nucleotide repeats impose strong biases on codon-pair context. CONCLUSIONS: The data highlight fundamental differences between prokaryotic and eukaryotic mRNA decoding rules, which are partially independent of codon usage.

  7. Rule-bases construction through self-learning for a table-based Sugeno-Takagi fuzzy logic control system

    Directory of Open Access Journals (Sweden)

    C. Boldisor

    2009-12-01

    Full Text Available A self-learning based methodology for building the rule-base of a fuzzy logic controller (FLC) is presented and verified, aiming to add intelligent characteristics to fuzzy logic control systems. The methodology is a simplified version of those presented in today's literature. Some aspects are intentionally ignored, since they rarely appear in control system engineering, and a SISO process is considered here. The fuzzy inference system obtained is of the table-based Sugeno-Takagi type. The system's desired performance is defined by a reference model, and rules are extracted from recorded data after the correct control actions have been learned. The presented algorithm is tested by constructing the rule-base of a fuzzy controller for a DC drive application. The system's performance and the method's viability are analyzed.
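    The table-filling step of such a self-learning scheme can be sketched as follows: recorded (error, change-of-error, control) samples are binned onto a coarse input grid and the average control action in each cell becomes the rule consequent. The partitioning and the sample data are illustrative assumptions, not the algorithm of the paper.

```python
# Sketch of rule-table construction from recorded data for a table-based
# Sugeno-Takagi fuzzy controller. Partitions and data are illustrative.
import numpy as np

# Recorded data after the correct control actions have been learned:
# columns = error, change of error, applied control action.
samples = np.array([[-0.8, -0.1, -0.9], [-0.4, 0.2, -0.3], [0.0, 0.0, 0.0],
                    [0.3, -0.2, 0.2], [0.7, 0.1, 0.8], [0.9, 0.3, 1.0]])

edges = np.array([-1.0, -0.33, 0.33, 1.0])     # 3 partitions per input (N, Z, P)
table = np.full((3, 3), np.nan)                # rule consequent per cell
counts = np.zeros((3, 3))

for e, de, u in samples:
    i = min(np.searchsorted(edges, e, side="right") - 1, 2)
    j = min(np.searchsorted(edges, de, side="right") - 1, 2)
    table[i, j] = (0.0 if np.isnan(table[i, j]) else table[i, j]) + u
    counts[i, j] += 1

table = np.where(counts > 0, table / np.maximum(counts, 1), np.nan)
print(table)    # NaN cells have no data yet and would need further learning
```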

  8. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  9. Propulsion integration of hypersonic air-breathing vehicles utilizing a top-down design methodology

    Science.gov (United States)

    Kirkpatrick, Brad Kenneth

    In recent years, a focus of aerospace engineering design has been the development of advanced design methodologies and frameworks to account for increasingly complex and integrated vehicles. Techniques such as parametric modeling, global vehicle analyses, and interdisciplinary data sharing have been employed in an attempt to improve the design process. The purpose of this study is to introduce a new approach to integrated vehicle design known as the top-down design methodology. In the top-down design methodology, the main idea is to relate design changes on the vehicle system and sub-system level to a set of over-arching performance and customer requirements. Rather than focusing on the performance of an individual system, the system is analyzed in terms of the net effect it has on the overall vehicle and other vehicle systems. This detailed level of analysis can only be accomplished through the use of high fidelity computational tools such as Computational Fluid Dynamics (CFD) or Finite Element Analysis (FEA). The utility of the top-down design methodology is investigated through its application to the conceptual and preliminary design of a long-range hypersonic air-breathing vehicle for a hypothetical next generation hypersonic vehicle (NHRV) program. System-level design is demonstrated through the development of the nozzle section of the propulsion system. From this demonstration of the methodology, conclusions are made about the benefits, drawbacks, and cost of using the methodology.

  10. A Belief Rule-Based Expert System to Diagnose Influenza

    DEFF Research Database (Denmark)

    Hossain, Mohammad Shahadat; Khalid, Md. Saifuddin; Akter, Shamima

    2014-01-01

    , development and application of an expert system to diagnose influenza under uncertainty. The recently developed generic belief rule-based inference methodology using the evidential reasoning (RIMER) approach is employed to develop this expert system, termed the Belief Rule Based Expert System (BRBES...). The RIMER approach can handle different types of uncertainties, both in knowledge representation and in inference procedures. The knowledge base of this system was constructed by using records of real patient data along with consultation with the influenza specialists of Bangladesh. Practical case...

  11. Evolving rule-based systems in two medical domains using genetic programming.

    Science.gov (United States)

    Tsakonas, Athanasios; Dounias, Georgios; Jantzen, Jan; Axer, Hubertus; Bjerregaard, Beth; von Keyserlingk, Diedrich Graf

    2004-11-01

    To demonstrate and compare the application of different genetic programming (GP) based intelligent methodologies for the construction of rule-based systems in two medical domains: the diagnosis of aphasia's subtypes and the classification of pap-smear examinations. Past data represented (a) successful diagnoses of aphasia subtypes obtained from collaborating medical experts through a free interview per patient, and (b) smears (images of cells) previously stained using the Papanicolaou method and correctly classified by cyto-technologists. Initially a hybrid approach is proposed, which combines standard genetic programming and heuristic hierarchical crisp rule-base construction. Then, genetic programming for the production of crisp rule-based systems is attempted. Finally, another hybrid intelligent model is composed of a grammar-driven genetic programming system for the generation of fuzzy rule-based systems. The results denote the effectiveness of the proposed systems, which are also compared, in terms of efficiency, accuracy and comprehensibility, to an inductive machine learning approach and to a standard genetic programming symbolic expression approach. The proposed GP-based intelligent methodologies are able to produce accurate and comprehensible results for medical experts, performing competitively with other intelligent approaches. The aim of the authors was the production of accurate but also sensible decision rules that could potentially help medical doctors to extract conclusions, even at the expense of a higher classification score.

  12. Concurrence of rule- and similarity-based mechanisms in artificial grammar learning.

    Science.gov (United States)

    Opitz, Bertram; Hofmann, Juliane

    2015-03-01

    A current theoretical debate regards whether rule-based or similarity-based learning prevails during artificial grammar learning (AGL). Although the majority of findings are consistent with a similarity-based account of AGL, it has been argued that these results were obtained only after limited exposure to study exemplars, and performance on subsequent grammaticality judgment tests has often been barely above chance level. In three experiments the conditions were investigated under which rule- and similarity-based learning could be applied. Participants were exposed to exemplars of an artificial grammar under different (implicit and explicit) learning instructions. The analysis of receiver operating characteristics (ROC) during a final grammaticality judgment test revealed that explicit but not implicit learning led to rule knowledge. It also demonstrated that this knowledge base is built up gradually, while similarity knowledge governed the initial state of learning. Together these results indicate that rule- and similarity-based mechanisms concur during AGL. Moreover, it could be speculated that two different rule processes might operate in parallel: bottom-up learning via gradual rule extraction and top-down learning via rule testing. Crucially, the latter is facilitated by performance feedback that encourages explicit hypothesis testing. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Probabilistic Rule Generator: A new methodology of variable-valued logic synthesis

    International Nuclear Information System (INIS)

    Lee, W.D.; Ray, S.R.

    1986-01-01

    A new methodology to synthesize variable-valued logic formulas from training data events is presented. The Probabilistic Rule Generator (PRG) employs not only information-theoretic entropy as a heuristic to capture a path expression but also multiple-valued logic to expand a captured complex. PRG is efficient for capturing major clusters in the event space, and is more general than previous methodologies in providing probabilistic features

  14. COLLABORATIVE NETWORK SECURITY MANAGEMENT SYSTEM BASED ON ASSOCIATION MINING RULE

    Directory of Open Access Journals (Sweden)

    Nisha Mariam Varughese

    2014-07-01

    Full Text Available Security is one of the major challenges in open networks. There are many types of attacks, which either follow fixed patterns or frequently change their patterns, and it is difficult to detect malicious attacks that do not have any fixed pattern. Distributed Denial of Service (DDoS) attacks, such as those launched by botnets, are used to slow down system performance. To address such problems, a Collaborative Network Security Management System (CNSMS) is proposed along with association rule mining. The CNSMS consists of collaborative Unified Threat Management (UTM) units, a cloud-based security centre and a traffic prober. The traffic prober captures the internet traffic and passes it to the collaborative UTM. The traffic is analysed by the collaborative UTM to determine whether it contains any malicious attack. If a security event occurs, it is reported to the cloud-based security centre. The security centre generates security rules based on association rule mining and distributes them to the network. The cloud-based security centre is used to store the huge amount of traffic, the corresponding logs and the generated security rules. The feedback is evaluated and invalid rules are eliminated to improve the system efficiency.
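    The core of association rule mining over security event logs is the support/confidence computation, sketched below. The alert names, example windows and thresholds are illustrative assumptions, not data from the paper.

```python
# Minimal support/confidence computation of the kind used in association rule
# mining over security event logs. Events and thresholds are illustrative.
from itertools import combinations

transactions = [                     # alerts seen together in one time window
    {"port_scan", "syn_flood"},
    {"port_scan", "syn_flood", "dns_amplification"},
    {"port_scan", "brute_force"},
    {"syn_flood", "dns_amplification"},
    {"port_scan", "syn_flood"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.7
items = sorted({i for t in transactions for i in t})
for a, b in combinations(items, 2):
    s = support({a, b})
    if s >= MIN_SUPPORT:
        conf = s / support({a})
        if conf >= MIN_CONFIDENCE:
            print(f"rule: {{{a}}} -> {{{b}}}  support={s:.2f} confidence={conf:.2f}")
```

    Rules that clear both thresholds would then be distributed to the UTM units as candidate security rules.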

  15. Contracting Selection for the Development of the Range Rule Risk Methodology

    National Research Council Canada - National Science Library

    1997-01-01

    ...-Effectiveness Risk Tool and contractor selection for the development of the Range Rule Risk Methodology. The audit objective was to determine whether the Government appropriately used the Ordnance and Explosives Cost-Effectiveness Risk Tool...

  16. Achieving 95% probability level using best estimate codes and the code scaling, applicability and uncertainty (CSAU) [Code Scaling, Applicability and Uncertainty] methodology

    International Nuclear Information System (INIS)

    Wilson, G.E.; Boyack, B.E.; Duffey, R.B.; Griffith, P.; Katsma, K.R.; Lellouche, G.S.; Rohatgi, U.S.; Wulff, W.; Zuber, N.

    1988-01-01

    Issuance of a revised rule for loss of coolant accident/emergency core cooling system (LOCA/ECCS) analysis of light water reactors will allow the use of best estimate (BE) computer codes in safety analysis, with uncertainty analysis. This paper describes a systematic methodology, CSAU (Code Scaling, Applicability and Uncertainty), which will provide uncertainty bounds in a cost-effective, auditable, rational and practical manner. 8 figs., 2 tabs

  17. The Balance-Scale Task Revisited: A Comparison of Statistical Models for Rule-Based and Information-Integration Theories of Proportional Reasoning.

    Directory of Open Access Journals (Sweden)

    Abe D Hofman

    Full Text Available We propose and test three statistical models for the analysis of children's responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development.

  18. Fixed mass and scaling sum rules

    International Nuclear Information System (INIS)

    Ward, B.F.L.

    1975-01-01

    Using the correspondence principle (continuity in dynamics), the approach of Keppell-Jones-Ward-Taha to fixed mass and scaling current algebraic sum rules is extended so as to consider explicitly the contributions of all classes of intermediate states. A natural, generalized formulation of the truncation ideas of Cornwall, Corrigan, and Norton is introduced as a by-product of this extension. The formalism is illustrated in the familiar case of the spin independent Schwinger term sum rule. New sum rules are derived which relate the Regge residue functions of the respective structure functions to their fixed hadronic mass limits for q² → ∞. (Auth.)

  19. Quantifying reactor safety margins: Application of code scaling, applicability, and uncertainty evaluation methodology to a large-break, loss-of-coolant accident

    International Nuclear Information System (INIS)

    Boyack, B.; Duffey, R.; Wilson, G.; Griffith, P.; Lellouche, G.; Levy, S.; Rohatgi, U.; Wulff, W.; Zuber, N.

    1989-12-01

    The US Nuclear Regulatory Commission (NRC) has issued a revised rule for loss-of-coolant accident/emergency core cooling system (ECCS) analysis of light water reactors to allow the use of best-estimate computer codes in safety analysis as an option. A key feature of this option requires the licensee to quantify the uncertainty of the calculations and include that uncertainty when comparing the calculated results with acceptance limits provided in 10 CFR Part 50. To support the revised ECCS rule and illustrate its application, the NRC and its contractors and consultants have developed and demonstrated an uncertainty evaluation methodology called code scaling, applicability, and uncertainty (CSAU). The CSAU methodology and an example application described in this report demonstrate that uncertainties in complex phenomena can be quantified. The methodology is structured, traceable, and practical, as is needed in the regulatory arena. The methodology is systematic and comprehensive as it addresses and integrates the scenario, experiments, code, and plant to resolve questions concerned with: (a) code capability to scale-up processes from test facility to full-scale nuclear power plants; (b) code applicability to safety studies of a postulated accident scenario in a specified nuclear power plant; and (c) quantifying uncertainties of calculated results. 127 refs., 55 figs., 40 tabs

  20. Stored energy analysis in scale-down test facility

    International Nuclear Information System (INIS)

    Deng Chengcheng; Qin Benke; Fang Fangfang; Chang Huajian; Ye Zishen

    2013-01-01

    In the integral test facilities that simulate the accident transient process of a prototype nuclear power plant, the stored energy in the metal components has a direct influence on the simulation range and the test results of the facilities. Based on heat transfer theory, three methods for analyzing the stored energy were developed, and a thorough study of the stored energy problem in scale-down test facilities was carried out. The lumped parameter method and the power integration method were applied to analyze the transient process of energy release and to evaluate the average total energy stored in the reactor pressure vessel of the ACME (advanced core-cooling mechanism experiment) facility, which is now being built in China. The results show that the similarity requirements of the three methods for analyzing the stored energy in test facilities relax progressively. Under the condition of satisfying the integral similarity of natural circulation, the stored energy release process in scale-down test facilities cannot maintain exact similarity. The stored energy in the reactor pressure vessel wall of ACME, which is released quickly during the early stage of rapid system depressurization, will not have a major impact on the long-term behavior of the system, and the scaling distortion of the integral average total energy of the stored heat is acceptable. (authors)
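    A lumped-parameter estimate of the energy stored in a vessel wall, in the spirit of the first method mentioned above, reduces to mass times heat capacity times the temperature excess over a reference state. The geometry and material values below are illustrative assumptions, not ACME data.

```python
# Lumped-parameter estimate of the stored energy in a vessel wall. Geometry and
# material values are illustrative assumptions, not facility data.
import math

def wall_stored_energy(mass_kg, cp_j_per_kgk, t_wall_c, t_ref_c):
    """Energy stored in the metal above a reference temperature, in joules."""
    return mass_kg * cp_j_per_kgk * (t_wall_c - t_ref_c)

# Hypothetical scaled-down reactor pressure vessel shell.
density_steel = 7850.0                     # kg/m^3
cp_steel = 500.0                           # J/(kg K)
volume_shell = math.pi * ((1.10 / 2) ** 2 - (1.00 / 2) ** 2) * 3.0   # m^3
mass_shell = density_steel * volume_shell

e_stored = wall_stored_energy(mass_shell, cp_steel, t_wall_c=280.0, t_ref_c=100.0)
print(f"shell mass: {mass_shell:.0f} kg, stored energy: {e_stored/1e6:.1f} MJ")
```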

  1. HSTDEK: Developing a methodology for construction of large-scale, multi-use knowledge bases

    Science.gov (United States)

    Freeman, Michael S.

    1987-01-01

    The primary research objectives of the Hubble Space Telescope Design/Engineering Knowledgebase (HSTDEK) are to develop a methodology for constructing and maintaining large scale knowledge bases which can be used to support multiple applications. To ensure the validity of its results, this research is being pursued in the context of a real world system, the Hubble Space Telescope. The HSTDEK objectives are described in detail. The history and motivation of the project are briefly described. The technical challenges faced by the project are outlined.

  2. A Belief Rule-Based Expert System to Assess Bronchiolitis Suspicion from Signs and Symptoms Under Uncertainty

    DEFF Research Database (Denmark)

    Karim, Rezuan; Hossain, Mohammad Shahadat; Khalid, Md. Saifuddin

    2017-01-01

    developed generic belief rule-based inference methodology by using evidential reasoning (RIMER) acts as the inference engine of this BRBES while belief rule base as the knowledge representation schema. The knowledge base of the system is constructed by using real patient data and expert opinion from...

  3. Top-down design and verification methodology for analog mixed-signal integrated circuits

    NARCIS (Netherlands)

    Beviz, P.

    2016-01-01

    The current report contains the introduction of a novel Top-Down Design and Verification methodology for AMS integrated circuits. With the introduction of new design and verification flow, more reliable and efficient development of AMS ICs is possible. The assignment incorporated the research on the

  4. Performance prediction of industrial centrifuges using scale-down models.

    Science.gov (United States)

    Boychyn, M; Yim, S S S; Bulmer, M; More, J; Bracewell, D G; Hoare, M

    2004-12-01

    Computational fluid dynamics was used to model the high flow forces found in the feed zone of a multichamber-bowl centrifuge and reproduce these in a small, high-speed rotating disc device. Linking the device to scale-down centrifugation, permitted good estimation of the performance of various continuous-flow centrifuges (disc stack, multichamber bowl, CARR Powerfuge) for shear-sensitive protein precipitates. Critically, the ultra scale-down centrifugation process proved to be a much more accurate predictor of production multichamber-bowl performance than was the pilot centrifuge.

  5. Enzyme-Gelatin Electrochemical Biosensors: Scaling Down

    Directory of Open Access Journals (Sweden)

    Hendrik A. Heering

    2012-03-01

    Full Text Available In this article we investigate the possibility of scaling down enzyme-gelatin modified electrodes by spin coating the enzyme-gelatin layer. Special attention is given to the electrochemical behavior of the selected enzymes inside the gelatin matrix. A glassy carbon electrode was used as a substrate to immobilize, in the first instance, horse heart cytochrome c (HHC in a gelatin matrix. Both a drop dried and a spin coated layer was prepared. On scaling down, a transition from diffusion controlled reactions towards adsorption controlled reactions is observed. Compared to a drop dried electrode, a spin coated electrode showed a more stable electrochemical behavior. Next to HHC, we also incorporated catalase in a spin coated gelatin matrix immobilized on a glassy carbon electrode. By spincoating, highly uniform sub micrometer layers of biocompatible matrices can be constructed. A full electrochemical study and characterization of the modified surfaces has been carried out. It was clear that in the case of catalase, gluteraldehyde addition was needed to prevent leaking of the catalase from the gelatin matrix.

  6. A model-based framework for incremental scale-up of wastewater treatment processes

    DEFF Research Database (Denmark)

    Mauricio Iglesias, Miguel; Sin, Gürkan

    Scale-up is traditionally done following specific ratios or rules of thumb which do not lead to optimal results. We present a generic framework to assist in scale-up of wastewater treatment processes based on multiscale modelling, multiobjective optimisation and a validation of the model at the new...... large scale. The framework is illustrated by the scale-up of a complete autotropic nitrogen removal process. The model based multiobjective scaleup offers a promising improvement compared to the rule of thumbs based emprical scale up rules...

  7. Simulation of operating rules and discretional decisions using a fuzzy rule-based system integrated into a water resources management model

    Science.gov (United States)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2013-04-01

    Water resources systems are operated, mostly, using a set of pre-defined rules not regarding, usually, to an optimal allocation in terms of water use or economic benefits, but to historical and institutional reasons. These operating policies are reproduced, commonly, as hedging rules, pack rules or zone-based operations, and simulation models can be used to test their performance under a wide range of hydrological and/or socio-economic hypothesis. Despite the high degree of acceptation and testing that these models have achieved, the actual operation of water resources systems hardly follows all the time the pre-defined rules with the consequent uncertainty on the system performance. Real-world reservoir operation is very complex, affected by input uncertainty (imprecision in forecast inflow, seepage and evaporation losses, etc.), filtered by the reservoir operator's experience and natural risk-aversion, while considering the different physical and legal/institutional constraints in order to meet the different demands and system requirements. The aim of this work is to expose a fuzzy logic approach to derive and assess the historical operation of a system. This framework uses a fuzzy rule-based system to reproduce pre-defined rules and also to match as close as possible the actual decisions made by managers. After built up, the fuzzy rule-based system can be integrated in a water resources management model, making possible to assess the system performance at the basin scale. The case study of the Mijares basin (eastern Spain) is used to illustrate the method. A reservoir operating curve regulates the two main reservoir releases (operated in a conjunctive way) with the purpose of guaranteeing a high realiability of supply to the traditional irrigation districts with higher priority (more senior demands that funded the reservoir construction). A fuzzy rule-based system has been created to reproduce the operating curve's performance, defining the system state (total

  8. Bioclim Deliverable D6a: regional climatic characteristics for the European sites at specific times: the dynamical down-scaling

    International Nuclear Information System (INIS)

    2003-01-01

    The overall aim of BIOCLIM is to assess the possible long-term impacts due to climate change on the safety of radioactive waste repositories in deep formations. This aim is addressed through the following specific objectives: - Development of practical and innovative strategies for representing sequential climatic changes to the geosphere-biosphere system for existing sites over central Europe, addressing the timescale of one million years, which is relevant to the geological disposal of radioactive waste. - Exploration and evaluation of the potential effects of climate change on the nature of the biosphere systems used to assess the environmental impact. - Dissemination of information on the new methodologies and the results obtained from the project among the international waste management community for use in performance assessments of potential or planned radioactive waste repositories. The BIOCLIM project is designed to advance the state-of-the-art of biosphere modelling for use in Performance Assessments. Therefore, two strategies are developed for representing sequential climatic changes to geosphere-biosphere systems. The hierarchical strategy successively uses a hierarchy of climate models. These models vary from simple 2-D models, which simulate interactions between a few aspects of the Earth system at a rough surface resolution, through General Circulation Model (GCM) and vegetation model, which simulate in great detail the dynamics and physics of the atmosphere, ocean and biosphere, to regional models, which focus on the European regions and sites of interest. Moreover, rule-based and statistical down-scaling procedures are also considered. Comparisons are provided in terms of climate and vegetation cover at the selected times and for the study regions. The integrated strategy consists of using integrated climate models, representing all the physical mechanisms important for long-term continuous climate variations, to simulate the climate evolution over

  9. Evolving Rule-Based Systems in two Medical Domains using Genetic Programming

    DEFF Research Database (Denmark)

    Tsakonas, A.; Dounias, G.; Jantzen, Jan

    2004-01-01

    We demonstrate, compare and discuss the application of two genetic programming methodologies for the construction of rule-based systems in two medical domains: the diagnosis of Aphasia's subtypes and the classification of Pap-Smear Test examinations. The first approach consists of a scheme...

  10. Size structure, not metabolic scaling rules, determines fisheries reference points

    DEFF Research Database (Denmark)

    Andersen, Ken Haste; Beyer, Jan

    2015-01-01

    Impact assessments of fishing on a stock require parameterization of vital rates: growth, mortality and recruitment. For 'data-poor' stocks, vital rates may be estimated from empirical size-based relationships or from life-history invariants. However, a theoretical framework to synthesize...... these empirical relations is lacking. Here, we combine life-history invariants, metabolic scaling and size-spectrum theory to develop a general size- and trait-based theory for demography and recruitment of exploited fish stocks. Important concepts are physiological or metabolic scaled mortalities and flux...... is that larger species have a higher egg production per recruit than small species. This means that density dependence is stronger for large than for small species and has the consequence that fisheries reference points that incorporate recruitment do not obey metabolic scaling rules. This result implies...

  11. Predictions of the spontaneous symmetry-breaking theory for visual code completeness and spatial scaling in single-cell learning rules.

    Science.gov (United States)

    Webber, C J

    2001-05-01

    This article shows analytically that single-cell learning rules that give rise to oriented and localized receptive fields, when their synaptic weights are randomly and independently initialized according to a plausible assumption of zero prior information, will generate visual codes that are invariant under two-dimensional translations, rotations, and scale magnifications, provided that the statistics of their training images are sufficiently invariant under these transformations. Such codes span different image locations, orientations, and size scales with equal economy. Thus, single-cell rules could account for the spatial scaling property of the cortical simple-cell code. This prediction is tested computationally by training with natural scenes; it is demonstrated that a single-cell learning rule can give rise to simple-cell receptive fields spanning the full range of orientations, image locations, and spatial frequencies (except at the extreme high and low frequencies at which the scale invariance of the statistics of digitally sampled images must ultimately break down, because of the image boundary and the finite pixel resolution). Thus, no constraint on completeness, or any other coupling between cells, is necessary to induce the visual code to span wide ranges of locations, orientations, and size scales. This prediction is made using the theory of spontaneous symmetry breaking, which we have previously shown can also explain the data-driven self-organization of a wide variety of transformation invariances in neurons' responses, such as the translation invariance of complex cell response.

  12. Prelude to rational scale-up of penicillin production: a scale-down study.

    Science.gov (United States)

    Wang, Guan; Chu, Ju; Noorman, Henk; Xia, Jianye; Tang, Wenjun; Zhuang, Yingping; Zhang, Siliang

    2014-03-01

    Penicillin is one of the best known pharmaceuticals and is also an important member of the β-lactam antibiotics. Over the years, ambitious yields, titers, productivities, and low costs in the production of the β-lactam antibiotics have been stepwise realized through successive rounds of strain improvement and process optimization. Penicillium chrysogenum was proven to be an ideal cell factory for the production of penicillin, and successful approaches were exploited to elevate the production titer. However, the industrial production of penicillin faces the serious challenge that environmental gradients, which are caused by insufficient mixing and mass transfer limitations, exert a considerably negative impact on the ultimate productivity and yield. Scale-down studies regarding diverse environmental gradients have been carried out on bacteria, yeasts, and filamentous fungi as well as animal cells. In accordance, a variety of scale-down devices combined with fast sampling and quenching protocols have been established to acquire the true snapshots of the perturbed cellular conditions. The perturbed metabolome information stemming from scale-down studies contributed to the comprehension of the production process and the identification of improvement approaches. However, little is known about the influence of the flow field and the mechanisms of intracellular metabolism. Consequently, it is still rather difficult to realize a fully rational scale-up. In the future, developing a computer framework to simulate the flow field of the large-scale fermenters is highly recommended. Furthermore, a metabolically structured kinetic model directly related to the production of penicillin will be further coupled to the fluid flow dynamics. A mathematical model including the information from both computational fluid dynamics and chemical reaction dynamics will then be established for the prediction of detailed information over the entire period of the fermentation process and

  13. Genetic Programming for the Generation of Crisp and Fuzzy Rule Bases in Classification and Diagnosis of Medical Data

    DEFF Research Database (Denmark)

    Dounias, George; Tsakonas, Athanasios; Jantzen, Jan

    2002-01-01

    This paper demonstrates two methodologies for the construction of rule-based systems in medical decision making. The first approach consists of a method combining genetic programming and heuristic hierarchical rule-base construction. The second model is composed by a strongly-typed genetic...

  14. Post-decomposition optimizations using pattern matching and rule-based clustering for multi-patterning technology

    Science.gov (United States)

    Wang, Lynn T.-N.; Madhavan, Sriram

    2018-03-01

    A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern were detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. Then, DFM scores are computed for the possible layout fixes: the fix with the best score is applied. The proposed methodology was applied to two 20nm products with a chip area of 11 mm2 on the metal 2 layer. All the hotspots were resolved. The number of DFM spacing violations decreased by 7-15%.

  15. Simple spatial scaling rules behind complex cities.

    Science.gov (United States)

    Li, Ruiqi; Dong, Lei; Zhang, Jiang; Wang, Xinran; Wang, Wen-Xu; Di, Zengru; Stanley, H Eugene

    2017-11-28

    Although most of wealth and innovation have been the result of human interaction and cooperation, we are not yet able to quantitatively predict the spatial distributions of three main elements of cities: population, roads, and socioeconomic interactions. By a simple model mainly based on spatial attraction and matching growth mechanisms, we reveal that the spatial scaling rules of these three elements are in a consistent framework, which allows us to use any single observation to infer the others. All numerical and theoretical results are consistent with empirical data from ten representative cities. In addition, our model can also provide a general explanation of the origins of the universal super- and sub-linear aggregate scaling laws and accurately predict kilometre-level socioeconomic activity. Our work opens a new avenue for uncovering the evolution of cities in terms of the interplay among urban elements, and it has a broad range of applications.

  16. A Belief Rule-Based (BRB) Decision Support System for Assessing Clinical Asthma Suspicion

    DEFF Research Database (Denmark)

    Hossain, Mohammad Shahadat; Hossain, Emran; Khalid, Md. Saifuddin

    2014-01-01

    conditions of uncertainty. The Belief Rule-Based Inference Methodology Using the Evidential Reasoning (RIMER) approach was adopted to develop this expert system; which is named the Belief Rule-Based Expert System (BRBES). The system can handle various types of uncertainty in knowledge representation...... and inference procedures. The knowledge base of this system was constructed by using real patient data and expert opinion. Practical case studies were used to validate the system. The system-generated results are more effective and reliable in terms of accuracy than the results generated by a manual system....

  17. 77 FR 52977 - Regulatory Capital Rules: Advanced Approaches Risk-Based Capital Rule; Market Risk Capital Rule

    Science.gov (United States)

    2012-08-30

    ...-weighted assets for residential mortgages, securitization exposures, and counterparty credit risk. The.... Risk-Weighted Assets--Proposed Modifications to the Advanced Approaches Rules A. Counterparty Credit... Margin Period of Risk 3. Changes to the Internal Models Methodology (IMM) 4. Credit Valuation Adjustments...

  18. Design methodology for the physical protection upgrade rule requirements for fixed sites. Technical report

    International Nuclear Information System (INIS)

    Evans, L.J. Jr.; Allen, T.

    1980-06-01

    This Design Methodology document aids the licensee in understanding how the fixed site requirements of the Physical Protection Upgrade Rule affect the design of physical protection systems for fuel processing plants, fuel manufacturing plants, or other fixed site special nuclear material operations involving possession or use of formula quantities of strategic special nuclear material. The document consists of three major elements: Logic Trees, Safeguards Jobs and Component Matrices, and Effectiveness Test Questionnaires. The work is based upon a previous study conducted by Sandia Laboratories for the Nuclear Regulatory Commission

  19. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A Simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend ; depressurization phase and pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase the high important phenomena having the influence on the critical parameters are identified and the scaling parameters governing the high important phenomena are generated by the present method. To validate the model used, Marviken CFT and 336 rod bundle experiment are simulated. The models overpredict both the pressure and two phase mixture level, but it shows agreement at least qualitatively with experimental results. In order to validate whether the scaled-down model well represents the important phenomena, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling the test facility simulating SB-LOCAs in a passive PWR

  20. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis. We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture the observed locality of interactions. Traditional self-propelled particle models fail to capture the fine scale dynamics of the system. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics, while maintaining a biologically plausible perceptual range. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.

  1. Cerebral methodology based computing to estimate real phenomena from large-scale nuclear simulation

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2011-01-01

    Our final goal is to estimate real phenomena from large-scale nuclear simulations by using computing processes. Large-scale simulations mean that they include scale variety and physical complexity so that corresponding experiments and/or theories do not exist. In nuclear field, it is indispensable to estimate real phenomena from simulations in order to improve the safety and security of nuclear power plants. Here, the analysis of uncertainty included in simulations is needed to reveal sensitivity of uncertainty due to randomness, to reduce the uncertainty due to lack of knowledge and to lead a degree of certainty by verification and validation (V and V) and uncertainty quantification (UQ) processes. To realize this, we propose 'Cerebral Methodology based Computing (CMC)' as computing processes with deductive and inductive approaches by referring human reasoning processes. Our idea is to execute deductive and inductive simulations contrasted with deductive and inductive approaches. We have established its prototype system and applied it to a thermal displacement analysis of a nuclear power plant. The result shows that our idea is effective to reduce the uncertainty and to get the degree of certainty. (author)

  2. Fusion reactor design studies: standard unit costs and cost scaling rules

    International Nuclear Information System (INIS)

    Schulte, S.C.; Bickford, W.E.; Willingham, C.E.; Ghose, S.K.; Walker, M.G.

    1979-09-01

    This report establishes standard unit costs and scaling rules for estimating costs of material, equipment, land, and labor components used in magnetic confinement fusion reactor plant construction and operation. Use of the standard unit costs and scaling rules will add uniformity to cost estimates, and thus allow valid comparison of the economic characteristics of various reactor concepts

  3. Rule-Based Event Processing and Reaction Rules

    Science.gov (United States)

    Paschke, Adrian; Kozlenkov, Alexander

    Reaction rules and event processing technologies play a key role in making business and IT / Internet infrastructures more agile and active. While event processing is concerned with detecting events from large event clouds or streams in almost real-time, reaction rules are concerned with the invocation of actions in response to events and actionable situations. They state the conditions under which actions must be taken. In the last decades various reaction rule and event processing approaches have been developed, which for the most part have been advanced separately. In this paper we survey reaction rule approaches and rule-based event processing systems and languages.

  4. Analysis and design of type b package tie-down systems

    International Nuclear Information System (INIS)

    Phalippou, C.; Tombini, C.; Tanguy, L.

    1993-01-01

    In order to analyse the incidence of tie-down conditions as a cause of road accidents and to advise carriers on methods of calculating the risk, the French Atomic Energy Commission (CEA), within the framework of a research contract financed by the European Community, conducted a survey into road accidents in which B type packages were involved. After analysis of the survey results, the CEA then conducted reduced scale tests on representative models to establish design rules for tie-down systems. These rules have been the subject of various publications and have at last resulted in the production of a software aid to the design and monitoring of tie-down systems. This document states the various stages involved in this work and the way in which the ARRIMAGE software is arranged. (J.P.N.)

  5. A pattern-based methodology for optimizing stitches in double-patterning technology

    Science.gov (United States)

    Wang, Lynn T.; Madhavan, Sriram; Dai, Vito; Capodieci, Luigi

    2015-03-01

    A pattern-based methodology for optimizing stitches is developed based on identifying stitch topologies and replacing them with pre-characterized fixing solutions in decomposed layouts. A topology-based library of stitches with predetermined fixing solutions is built. A pattern-based engine searches for matching topologies in the decomposed layouts. When a match is found, the engine opportunistically replaces the predetermined fixing solution: only a design rule check error-free replacement is preserved. The methodology is demonstrated on a 20nm layout design that contains over 67 million, first metal layer stitches. Results show that a small library containing 3 stitch topologies improves the stitch area regularity by 4x.

  6. Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules

    Science.gov (United States)

    Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.

    Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.

  7. The Association between Audit Business Scale Advantage and Audit Quality of Asset Write-downs

    Directory of Open Access Journals (Sweden)

    Ziye Zhao

    2008-06-01

    We contribute to the literature with the following findings. First, auditors’ business scale is positively related to return relevance of write-downs. Second, auditors with ABSA not only enhance the relevance between impairments and economic variables but also weaken the relation between impairments and managerial variables; however, the results appear in only a few of the firm-specific variables. Third, results are mixed when we test the ABSA effect on price-relevance and persistence dimensions. Fourth, the ABSA effect is stronger when the complexity of asset write-downs requires some inside information to comprehend the nature of action. Adding to the main finding, we also found the ABSA effect became weaker when we proxy ABSA with raw data of companies’ business scale instead of the top five auditors in business scale. Taken together, our results show that the ABSA effect does exist in auditing of assets write-downs, although with weak evidence. Our results also indicated rational auditor choice based on quality of service in China's audit market. We identified some unique factors from stakeholders’ cooperative structuring actions in China audit market as potential explanations to the market rationality.

  8. Climatic and physiographic controls on catchment-scale nitrate loss at different spatial scales: insights from a top-down model development approach

    Science.gov (United States)

    Shafii, Mahyar; Basu, Nandita; Schiff, Sherry; Van Cappellen, Philippe

    2017-04-01

    Dramatic increase in nitrogen circulating in the biosphere due to anthropogenic activities has resulted in impairment of water quality in groundwater and surface water causing eutrophication in coastal regions. Understanding the fate and transport of nitrogen from landscape to coastal areas requires exploring the drivers of nitrogen processes in both time and space, as well as the identification of appropriate flow pathways. Conceptual models can be used as diagnostic tools to provide insights into such controls. However, diagnostic evaluation of coupled hydrological-biogeochemical models is challenging. This research proposes a top-down methodology utilizing hydrochemical signatures to develop conceptual models for simulating the integrated streamflow and nitrate responses while taking into account dominant controls on nitrate variability (e.g., climate, soil water content, etc.). Our main objective is to seek appropriate model complexity that sufficiently reproduces multiple hydrological and nitrate signatures. Having developed a suitable conceptual model for a given watershed, we employ it in sensitivity studies to demonstrate the dominant process controls that contribute to the nitrate response at scales of interest. We apply the proposed approach to nitrate simulation in a range of small to large sub-watersheds in the Grand River Watershed (GRW) located in Ontario. Such multi-basin modeling experiment will enable us to address process scaling and investigate the consequences of lumping processes in terms of models' predictive capability. The proposed methodology can be applied to the development of large-scale models that can help decision-making associated with nutrients management at regional scale.

  9. Qualification of a full plant nodalization for the prediction of the core exit temperature through a scaling methodology

    Energy Technology Data Exchange (ETDEWEB)

    Freixa, J., E-mail: jordi.freixa-terradas@upc.edu; Martínez-Quiroga, V., E-mail: victor.martinez.quiroga@upc.edu; Reventós, F., E-mail: francesc.reventos@upc.edu

    2016-11-15

    Highlights: • Core exit temperature is used in PWRs as an indication of core heat up. • Qualification of full scale nuclear reactors by means of a scaling methodology. • Scaling of RELAP5 calculations to full scale power plants. - Abstract: System codes and their necessary power plant nodalizations are an essential step in thermal hydraulic safety analysis. In order to assess the safety of a particular power plant, in addition to the validation and verification of the code, the nodalization of the system needs to be qualified. Since most existing experimental data come from scaled-down facilities, any qualification process must therefore address scale considerations. The Group of Thermal Hydraulic Studies at Technical University of Catalonia has developed a scaling-up methodology (SCUP) for the qualification of full-scale nodalizations through a systematic procedure based on the extrapolation of post-test simulations of Integral Test Facility experiments. In the present work, the SCUP methodology will be employed to qualify the nodalization of the AscóNPP, a Pressurized Water Reactor (PWR), for the reproduction of an important safety phenomenon which is the effectiveness of the Core Exit Temperature (CET) as an Accident Management (AM) indicator. Given the difficulties in placing measurements in the core region, CET measurements are used as a criterion for the initiation of safety operational procedures during accidental conditions in PWR. However, the CET response has some limitation in detecting inadequate core cooling simply because the measurement is not taken in the position where the cladding exposure occurs. In order to apply the SCUP methodology, the OECD/NEA ROSA-2 Test 3, an SBLOCA in the hot leg, has been selected as a starting point. This experiment was conducted at the Large Scale Test Facility (LSTF), a facility operated by the Japanese Atomic Energy Agency (JAEA) and was focused on the assessment of the effectiveness of AM actions triggered by

  10. Rule-based land cover classification from very high-resolution satellite image with multiresolution segmentation

    Science.gov (United States)

    Haque, Md. Enamul; Al-Ramadan, Baqer; Johnson, Brian A.

    2016-07-01

    Multiresolution segmentation and rule-based classification techniques are used to classify objects from very high-resolution satellite images of urban areas. Custom rules are developed using different spectral, geometric, and textural features with five scale parameters, which exploit varying classification accuracy. Principal component analysis is used to select the most important features out of a total of 207 different features. In particular, seven different object types are considered for classification. The overall classification accuracy achieved for the rule-based method is 95.55% and 98.95% for seven and five classes, respectively. Other classifiers that are not using rules perform at 84.17% and 97.3% accuracy for seven and five classes, respectively. The results exploit coarse segmentation for higher scale parameter and fine segmentation for lower scale parameter. The major contribution of this research is the development of rule sets and the identification of major features for satellite image classification where the rule sets are transferable and the parameters are tunable for different types of imagery. Additionally, the individual objectwise classification and principal component analysis help to identify the required object from an arbitrary number of objects within images given ground truth data for the training.

  11. A methodology for scenario development based on understanding of long-term evolution of geological disposal systems

    International Nuclear Information System (INIS)

    Wakasugi, Keiichiro; Ishiguro, Katsuhiko; Ebashi, Takeshi; Ueda, Hiroyoshi; Koyama, Toshihiro; Shiratsuchi, Hiroshi; Yashio, Shoko; Kawamura, Hideki

    2012-01-01

    We have developed a 'hybrid' scenario development method by combining bottom-up and top-down approaches and applied for the case of geological disposal of high-level waste. This approach provides a top-down perspective, by introducing a concept of safety functions for different periods and 'storyboards', which depict repository evolution with time on a range of spatial scales, and a bottom-up perspective, by identifying relationship between processes related to radionuclide migration and safety functions based on feature, event and process (FEP) management. Based on a trial study, we have specified work descriptions for each step of the hybrid scenario development methodology and confirmed that the storyboard provides a baseline and holistic overview for the FEP management and a common platform to involve close interaction with experts in various disciplines to understand the crossover phenomenological processes. We also confirmed that there is no conflict between the top-down approach and the bottom-up approach and the hybrid scenario development work frame fulfils the specified requirements for traceability, comprehensiveness, ease of understanding, integration of multidisciplinary knowledge and applicability to a staged approach to siting. (author)

  12. Stored energy analysis in the scaled-down test facilities

    International Nuclear Information System (INIS)

    Deng, Chengcheng; Chang, Huajian; Qin, Benke; Wu, Qiao

    2016-01-01

    Highlights: • Three methods are developed to evaluate stored energy in the scaled-down test facilities. • The mechanism behind stored energy distortion in the test facilities is revealed. • The application of stored energy analysis is demonstrated for the ACME facility of China. - Abstract: In the scaled-down test facilities that simulate the accident transient process of the prototype nuclear power plant, the stored energy release in the metal structures has an important influence on the accuracy and effectiveness of the experimental data. Three methods of stored energy analysis are developed, and the mechanism behind stored energy distortion in the test facilities is revealed. Moreover, the application of stored energy analysis is demonstrated for the ACME test facility newly built in China. The results show that the similarity requirements of three methods analyzing the stored energy release decrease gradually. The physical mechanism of stored energy release process can be characterized by the dimensionless numbers including Stanton number, Fourier number and Biot number. Under the premise of satisfying the overall similarity of natural circulation, the stored energy release process in the scale-down test facilities cannot maintain exact similarity. The results of the application of stored energy analysis illustrate that both the transient release process and integral total stored energy of the reactor pressure vessel wall of CAP1400 power plant can be well reproduced in the ACME test facility.

  13. A low-jitter RF PLL frequency synthesizer with high-speed mixed-signal down-scaling circuits

    International Nuclear Information System (INIS)

    Tang Lu; Wang Zhigong; Xue Hong; He Xiaohu; Xu Yong; Sun Ling

    2010-01-01

    A low-jitter RF phase locked loop (PLL) frequency synthesizer with high-speed mixed-signal down-scaling circuits is proposed. Several techniques are proposed to reduce the design complexity and improve the performance of the mixed-signal down-scaling circuit in the PLL. An improved D-latch is proposed to increase the speed and the driving capability of the DMP in the down-scaling circuit. Through integrating the D-latch with 'OR' logic for dual-modulus operation, the delays associated with both the 'OR' and D-flip-flop (DFF) operations are reduced, and the complexity of the circuit is also decreased. The programmable frequency divider of the down-scaling circuit is realized in a new method based on deep submicron CMOS technology standard cells and a more accurate wire-load model. The charge pump in the PLL is also realized with a novel architecture to improve the current matching characteristic so as to reduce the jitter of the system. The proposed RF PLL frequency synthesizer is realized with a TSMC 0.18-μm CMOS process. The measured phase noise of the PLL frequency synthesizer output at 100 kHz offset from the center frequency is only -101.52 dBc/Hz. The circuit exhibits a low RMS jitter of 3.3 ps. The power consumption of the PLL frequency synthesizer is also as low as 36 mW at a 1.8 V power supply. (semiconductor integrated circuits)

  14. Methodology for Air Quality Forecast Downscaling from Regional- to Street-Scale

    Science.gov (United States)

    Baklanov, Alexander; Nuterman, Roman; Mahura, Alexander; Amstrup, Bjarne; Hansen Saas, Bent; Havskov Sørensen, Jens; Lorenzen, Thomas; Weismann, Jakob

    2010-05-01

    The most serious air pollution events occur in cities where there is a combination of high population density and air pollution, e.g. from vehicles. The pollutants can lead to serious human health problems, including asthma, irritation of the lungs, bronchitis, pneumonia, decreased resistance to respiratory infections, and premature death. In particular air pollution is associated with increase in cardiovascular disease and lung cancer. In 2000 WHO estimated that between 2.5 % and 11 % of total annual deaths are caused by exposure to air pollution. However, European-scale air quality models are not suited for local forecasts, as their grid-cell is typically of the order of 5 to 10km and they generally lack detailed representation of urban effects. Two suites are used in the framework of the EC FP7 project MACC (Monitoring of Atmosphere Composition and Climate) to demonstrate how downscaling from the European MACC ensemble to local-scale air quality forecast will be carried out: one will illustrate capabilities for the city of Copenhagen (Denmark); the second will focus on the city of Bucharest (Romania). This work is devoted to the first suite, where methodological aspects of downscaling from regional (European/ Denmark) to urban scale (Copenhagen), and from the urban down to street scale. The first results of downscaling according to the proposed methodology are presented. The potential for downscaling of European air quality forecasts by operating urban and street-level forecast models is evaluated. This will bring a strong support for continuous improvement of the regional forecast modelling systems for air quality in Europe, and underline clear perspectives for the future regional air quality core and downstream services for end-users. At the end of the MACC project, requirements on "how-to-do" downscaling of European air-quality forecasts to the city and street levels with different approaches will be formulated.

  15. Phosphotyrosine-based-phosphoproteomics scaled-down to biopsy level for analysis of individual tumor biology and treatment selection.

    Science.gov (United States)

    Labots, Mariette; van der Mijn, Johannes C; Beekhof, Robin; Piersma, Sander R; de Goeij-de Haas, Richard R; Pham, Thang V; Knol, Jaco C; Dekker, Henk; van Grieken, Nicole C T; Verheul, Henk M W; Jiménez, Connie R

    2017-06-06

    Mass spectrometry-based phosphoproteomics of cancer cell and tissue lysates provides insight in aberrantly activated signaling pathways and potential drug targets. For improved understanding of individual patient's tumor biology and to allow selection of tyrosine kinase inhibitors in individual patients, phosphoproteomics of small clinical samples should be feasible and reproducible. We aimed to scale down a pTyr-phosphopeptide enrichment protocol to biopsy-level protein input and assess reproducibility and applicability to tumor needle biopsies. To this end, phosphopeptide immunoprecipitation using anti-phosphotyrosine beads was performed using 10, 5 and 1mg protein input from lysates of colorectal cancer (CRC) cell line HCT116. Multiple needle biopsies from 7 human CRC resection specimens were analyzed at the 1mg-level. The total number of phosphopeptides captured and detected by LC-MS/MS ranged from 681 at 10mg input to 471 at 1mg HCT116 protein. ID-reproducibility ranged from 60.5% at 10mg to 43.9% at 1mg. Per 1mg-level biopsy sample, >200 phosphopeptides were identified with 57% ID-reproducibility between paired tumor biopsies. Unsupervised analysis clustered biopsies from individual patients together and revealed known and potential therapeutic targets. This study demonstrates the feasibility of label-free pTyr-phosphoproteomics at the tumor biopsy level based on reproducible analyses using 1mg of protein input. The considerable number of identified phosphopeptides at this level is attributed to an effective down-scaled immuno-affinity protocol as well as to the application of ID propagation in the data processing and analysis steps. Unsupervised cluster analysis reveals patient-specific profiles. Together, these findings pave the way for clinical trials in which pTyr-phosphoproteomics will be performed on pre- and on-treatment biopsies. Such studies will improve our understanding of individual tumor biology and may enable future pTyr-phosphoproteomics-based

  16. Vibration amplitude rule study for rotor under large time scale

    International Nuclear Information System (INIS)

    Yang Xuan; Zuo Jianli; Duan Changcheng

    2014-01-01

    The rotor is an important part of the rotating machinery; its vibration performance is one of the important factors affecting the service life. This paper presents both theoretical analyses and experimental demonstrations of the vibration rule of the rotor under large time scales. The rule can be used for the service life estimation of the rotor. (authors)

  17. A bench-scale biotreatability methodology to evaluate field bioremediation

    International Nuclear Information System (INIS)

    Saberiyan, A.G.; MacPherson, J.R. Jr.; Moore, R.; Pruess, A.J.; Andrilenas, J.S.

    1995-01-01

    A bench-scale biotreatability methodology was designed to assess field bioremediation of petroleum contaminated soil samples. This methodology was performed successfully on soil samples from more than 40 sites. The methodology is composed of two phases, characterization and experimentation. The first phase is physical, chemical, and biological characterization of the contaminated soil sample. This phase determines soil parameters, contaminant type, presence of indigenous contaminant-degrading bacteria, and bacterial population size. The second phase, experimentation, consists of a respirometry test to measure the growth of microbes indirectly (via generation of CO 2 ) and the consumption of their food source directly (via contaminant loss). Based on a Monod kinetic analysis, the half-life of a contaminant can be calculated. Abiotic losses are accounted for based on a control test. The contaminant molecular structure is used to generate a stoichiometric equation. The stoichiometric equation yields a theoretical ratio for mg of contaminant degraded per mg of CO 2 produced. Data collected from the respirometry test are compared to theoretical values to evaluate bioremediation feasibility

  18. Development of in situ product removal strategies in biocatalysis applying scaled-down unit operations.

    Science.gov (United States)

    Heintz, Søren; Börner, Tim; Ringborg, Rolf H; Rehn, Gustav; Grey, Carl; Nordblad, Mathias; Krühne, Ulrich; Gernaey, Krist V; Adlercreutz, Patrick; Woodley, John M

    2017-03-01

    An experimental platform based on scaled-down unit operations combined in a plug-and-play manner enables easy and highly flexible testing of advanced biocatalytic process options such as in situ product removal (ISPR) process strategies. In such a platform, it is possible to compartmentalize different process steps while operating it as a combined system, giving the possibility to test and characterize the performance of novel process concepts and biocatalysts with minimal influence of inhibitory products. Here the capabilities of performing process development by applying scaled-down unit operations are highlighted through a case study investigating the asymmetric synthesis of 1-methyl-3-phenylpropylamine (MPPA) using ω-transaminase, an enzyme in the sub-family of amino transferases (ATAs). An on-line HPLC system was applied to avoid manual sample handling and to semi-automatically characterize ω-transaminases in a scaled-down packed-bed reactor (PBR) module, showing MPPA as a strong inhibitor. To overcome the inhibition, a two-step liquid-liquid extraction (LLE) ISPR concept was tested using scaled-down unit operations combined in a plug-and-play manner. Through the tested ISPR concept, it was possible to continuously feed the main substrate benzylacetone (BA) and extract the main product MPPA throughout the reaction, thereby overcoming the challenges of low substrate solubility and product inhibition. The tested ISPR concept achieved a product concentration of 26.5 g MPPA  · L -1 , a purity up to 70% g MPPA  · g tot -1 and a recovery in the range of 80% mol · mol -1 of MPPA in 20 h, with the possibility to increase the concentration, purity, and recovery further. Biotechnol. Bioeng. 2017;114: 600-609. © 2016 Wiley Periodicals, Inc. © 2016 Wiley Periodicals, Inc.

  19. Interrelationships Between Receiver/Relative Operating Characteristics Display, Binomial, Logit, and Bayes' Rule Probability of Detection Methodologies

    Science.gov (United States)

    Generazio, Edward R.

    2014-01-01

    Unknown risks are introduced into failure critical systems when probability of detection (POD) capabilities are accepted without a complete understanding of the statistical method applied and the interpretation of the statistical results. The presence of this risk in the nondestructive evaluation (NDE) community is revealed in common statements about POD. These statements are often interpreted in a variety of ways and therefore, the very existence of the statements identifies the need for a more comprehensive understanding of POD methodologies. Statistical methodologies have data requirements to be met, procedures to be followed, and requirements for validation or demonstration of adequacy of the POD estimates. Risks are further enhanced due to the wide range of statistical methodologies used for determining the POD capability. Receiver/Relative Operating Characteristics (ROC) Display, simple binomial, logistic regression, and Bayes' rule POD methodologies are widely used in determining POD capability. This work focuses on Hit-Miss data to reveal the framework of the interrelationships between Receiver/Relative Operating Characteristics Display, simple binomial, logistic regression, and Bayes' Rule methodologies as they are applied to POD. Knowledge of these interrelationships leads to an intuitive and global understanding of the statistical data, procedural and validation requirements for establishing credible POD estimates.

  20. Revised Rules for Concrete Bridges

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Jensen, F. M.; Middleton, C.

    This paper is based on research performed for the Highway Agency, London, UK under the project DPU/9/44 "Revision of Bridge Assessment Rules Based on Whole Life Performance: Concrete Bridges" It contains details of a methodology which can be used to generate Whole Life (WL) reliability profiles....... These WL reliability profiles may be used to establish revised rules for Concrete Bridges....

  1. Knowledge rule base for the beam optics program TRACE 3-D

    International Nuclear Information System (INIS)

    Gillespie, G.H.; Van Staagen, P.K.; Hill, B.W.

    1993-01-01

    An expert system type of knowledge rule base has been developed for the input parameters used by the particle beam transport program TRACE 3-D. The goal has been to provide the program's user with adequate on-screen information to allow him to initially set up a problem with minimal open-quotes off-lineclose quotes calculations. The focus of this work has been in developing rules for the parameters which define the beam line transport elements. Ten global parameters, the particle mass and charge, beam energy, etc., are used to provide open-quotes expertclose quotes estimates of lower and upper limits for each of the transport element parameters. For example, the limits for the field strength of the quadrupole element are based on a water-cooled, iron-core electromagnet with dimensions derived from practical engineering constraints, and the upper limit for the effective length is scaled with the particle momenta so that initially parallel trajectories do not cross the axis inside the magnet. Limits for the quadrupole doublet and triplet parameters incorporate these rules and additional rules based on stable FODO lattices and bidirectional focusing requirements. The structure of the rule base is outlined and examples for the quadrupole singlet, doublet and triplet are described. The rule base has been implemented within the Shell for Particle Accelerator Related Codes (SPARC) graphical user interface (GUI)

  2. Levels of Organisation in agent-based modelling for renewable resources management. Agricultural water management collective rules enforcement in the French Drome River Valley Case Study

    International Nuclear Information System (INIS)

    Abrami, G.

    2004-11-01

    Levels of Organisation in agent-based modelling for renewable resources management. Agricultural water management collective rules enforcement in the French Dr me River Valley Case Study. In the context of Agent-Based Modelling for participative renewable resources management, this thesis is concerned with representing multiple tangled levels of organisation of a system. The Agent-Group-Role (AGR) formalism is borrowed from computer science research. It has been conceptually specified to handle levels of organisation, and behaviours within levels of organisation. A design methodology dedicated to AGR modelling has been developed, together with an implementation of the formalism over a multi-agent platform. AGR models of agricultural water management in the French Dr me River Valley have been built and tested. This experiment demonstrates the AGR formalism ability to (1) clarify usually implicit hypothesis on action modes, scales or viewpoints (2) facilitate the definition of scenarios with various collective rules, and various rules in enforcement behaviours (3) generate bricks for generic irrigated catchment models. (author)

  3. RuleML-Based Learning Object Interoperability on the Semantic Web

    Science.gov (United States)

    Biletskiy, Yevgen; Boley, Harold; Ranganathan, Girish R.

    2008-01-01

    Purpose: The present paper aims to describe an approach for building the Semantic Web rules for interoperation between heterogeneous learning objects, namely course outlines from different universities, and one of the rule uses: identifying (in)compatibilities between course descriptions. Design/methodology/approach: As proof of concept, a rule…

  4. The research of selection model based on LOD in multi-scale display of electronic map

    Science.gov (United States)

    Zhang, Jinming; You, Xiong; Liu, Yingzhen

    2008-10-01

    This paper proposes a selection model based on LOD to aid the display of electronic map. The ratio of display scale to map scale is regarded as a LOD operator. The categorization rule, classification rule, elementary rule and spatial geometry character rule of LOD operator setting are also concluded.

  5. La percepción social hacia las personas con síndrome de Down: la escala EPSD-1 The social perception of people with Down syndrome: the EPSD-1 scale

    Directory of Open Access Journals (Sweden)

    Jesús Molina Saorín

    2012-12-01

    design important psychosocial variables are recognized, whilst trying to discuss its properties in order to apply it to other contexts. Based on an analysis of factors, we offer ten main factors that deal with issues such as the social exclusion of people with Down syndrome, their autonomy and independence, their affective-sexual relations, their social and educational acceptance, their integration, their family attitudes, teacher education, and the social protectionism of these people. The initial sample is composed of 1.796 participants, and the results indicate that the scale devised here is a reliable and valid instrument in its application. This scale can be of great value for the Social Sciences, offering a latent relation with respect to the initial education which, since the university, is offered to students of higher education. Using a quantitative methodology, the results show that the psychometric properties of the scale are highly satisfactory, prompting us to suggest the conduction of new longitudinal and transversal studies using it to different populations with the purpose of furthering our knowledge of the social perception of people with Down syndrome, and of bringing out new facts about the tendency and evolution of these theme during the last years.

  6. Rough set and rule-based multicriteria decision aiding

    Directory of Open Access Journals (Sweden)

    Roman Slowinski

    2012-08-01

    Full Text Available The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view called criteria. Since a rational decision maker acts with respect to his/her value system, in order to recommend the most-preferred decision, one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider the preference model in the form of a set of "if..., then..." decision rules discovered from the data by inductive learning. To structure the data prior to induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data, which handles ordinal evaluations of objects on considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.

  7. Cellular scaling rules for the brain of Artiodactyla include a highly folded cortex with few neurons

    Directory of Open Access Journals (Sweden)

    Rodrigo eSiqueira Kazu

    2014-11-01

    Full Text Available Quantitative analysis of the cellular composition of rodent, primate, insectivore and afrotherian brains has shown that nonneuronal scaling rules are similar across these mammalian orders that diverged about 95 million years ago, and therefore appear to be conserved in evolution, while neuronal scaling rules appear to be free to vary in a clade-specific manner. Here we analyze the cellular scaling rules that apply to the brain of artiodactyls, a group within the order Cetartiodactyla, believed to be a relatively recent radiation from the common Eutherian ancestor. We find that artiodactyls share nonneuronal scaling rules with all groups analyzed previously. Artiodactyls share with afrotherians and rodents, but not with primates, the neuronal scaling rules that apply to the cerebral cortex and cerebellum. The neuronal scaling rules that apply to the remaining brain areas are however distinct in artiodactyls. Importantly, we show that the folding index of the cerebral cortex scales with the number of neurons in the cerebral cortex in distinct fashions across artiodactyls, afrotherians, rodents, and primates, such that the artiodactyl cerebral cortex is more convoluted than primate cortices of similar numbers of neurons. Our findings suggest that the scaling rules found to be shared across modern afrotherians, glires and artiodactyls applied to the common Eutherian ancestor, such as the relationship between the mass of the cerebral cortex as a whole and its number of neurons. In turn, the distribution of neurons along the surface of the cerebral cortex, which is related to its degree of gyrification, appears to be a clade-specific characteristic. If the neuronal scaling rules for artiodactyls extend to all cetartiodactyls, we predict that the large cerebral cortex of cetaceans will still have fewer neurons than the human cerebral cortex.

  8. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).

  9. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on a dynamic programming approach, we design algorithms for sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers, and study two questions: (i) which rules are better from the point of view of classification: exact or approximate; and (ii) which order of optimization gives better results of classifier work: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than the ordinary optimization (length or coverage).
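
    The two records above describe classifiers built from decision rules optimized for length and coverage. The sketch below illustrates only the classification step, with rules voting weighted by their coverage over a made-up rule set; the dynamic-programming optimization itself is not reproduced.

    ```python
    # Minimal sketch of classifying with a fixed set of decision rules.
    # Rules are (conditions, decision, coverage) triples; conditions map
    # attribute names to required values. The rule set is illustrative only.
    from collections import defaultdict

    RULES = [
        ({"outlook": "sunny", "humidity": "high"}, "no", 3),
        ({"outlook": "overcast"}, "yes", 4),
        ({"outlook": "rain", "wind": "weak"}, "yes", 3),
        ({"outlook": "rain", "wind": "strong"}, "no", 2),
    ]

    def classify(instance: dict, rules=RULES, default="yes") -> str:
        """Vote among all rules whose conditions match, weighting by coverage."""
        votes = defaultdict(int)
        for conditions, decision, coverage in rules:
            if all(instance.get(a) == v for a, v in conditions.items()):
                votes[decision] += coverage
        return max(votes, key=votes.get) if votes else default

    if __name__ == "__main__":
        print(classify({"outlook": "sunny", "humidity": "high", "wind": "weak"}))   # no
        print(classify({"outlook": "overcast", "humidity": "low", "wind": "weak"})) # yes
    ```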

  10. The Menopause Rating Scale (MRS) scale: A methodological review

    Directory of Open Access Journals (Sweden)

    Strelow Frank

    2004-09-01

    Full Text Available Abstract Background This paper compiles data from different sources to get a first comprehensive picture of psychometric and other methodological characteristics of the Menopause Rating Scale (MRS). The scale was designed and standardized as a self-administered scale to (a) assess symptoms/complaints of aging women under different conditions, (b) evaluate the severity of symptoms over time, and (c) measure changes pre- and post-menopause replacement therapy. The scale has become widely used (it is available in 10 languages). Method A large multinational survey (9 countries on 4 continents) from 2001/2002 is the basis for in-depth analyses of the reliability and validity of the MRS. Additional small convenience samples were used to get first impressions about test-retest reliability. The data were centrally analyzed. Data from a postmarketing HRT study were used to estimate discriminative validity. Results Reliability measures (consistency and test-retest stability) were found to be good across countries, although the sample size for test-retest reliability was small. Validity: The internal structure of the MRS was astonishingly similar across countries, leading to the conclusion that the scale really measures the same phenomenon in symptomatic women. Correlations between the sub-scores and the total score were high (0.7–0.9) but lower among the sub-scales (0.5–0.7). This suggests that the subscales are not fully independent. Norm values from different populations were presented, showing that a direct comparison between Europe and North America is possible, but caution is recommended with comparisons of data from Latin America and Indonesia. This will not, however, affect intra-individual comparisons within clinical trials. The comparison with the Kupperman Index showed sufficiently good correlations, illustrating adequate criterion-oriented validity. The same is true for the comparison with the generic quality-of-life scale SF-36, where also a sufficiently close association

  11. A rule-based software test data generator

    Science.gov (United States)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests show that even this primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.
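
    The prototype above targets Ada; purely as an illustration of the general idea, here is a toy Python sketch in which simple boundary-value rules emit candidate inputs for an integer parameter, contrasted with random generation. The rules are generic textbook heuristics, not the prototype's actual rule base.

    ```python
    # Toy contrast between rule-based and random test data generation for an
    # integer parameter in [lo, hi]. The rules (boundary values, sign changes)
    # are generic illustrations only.
    import random

    def rule_based_cases(lo: int, hi: int):
        """Boundary-value rules for an integer parameter in [lo, hi]."""
        candidates = [lo, lo + 1, hi - 1, hi]
        if lo <= 0 <= hi:
            candidates += [-1, 0, 1]        # sign-change rule
        seen = []                           # keep in-range, de-duplicated values
        for c in candidates:
            if lo <= c <= hi and c not in seen:
                seen.append(c)
        return seen

    def random_cases(lo: int, hi: int, n: int = 10):
        return [random.randint(lo, hi) for _ in range(n)]

    if __name__ == "__main__":
        print(rule_based_cases(-100, 100))  # [-100, -99, 99, 100, -1, 0, 1]
        print(random_cases(-100, 100, 5))
    ```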

  12. Using Rule-Based Computer Programming to Unify Communication Rules Research.

    Science.gov (United States)

    Sanford, David L.; Roach, J. W.

    This paper proposes the use of a rule-based computer programming language as a standard for the expression of rules, arguing that the adoption of a standard would enable researchers to communicate about rules in a consistent and significant way. Focusing on the formal equivalence of artificial intelligence (AI) programming to different types of…

  13. Moral empiricism and the bias for act-based rules.

    Science.gov (United States)

    Ayars, Alisabeth; Nichols, Shaun

    2017-10-01

    Previous studies on rule learning show a bias in favor of act-based rules, which prohibit intentionally producing an outcome but not merely allowing the outcome. Nichols, Kumar, Lopez, Ayars, and Chan (2016) found that exposure to a single sample violation in which an agent intentionally causes the outcome was sufficient for participants to infer that the rule was act-based. One explanation is that people have an innate bias to think rules are act-based. We suggest an alternative empiricist account: since most rules that people learn are act-based, people form an overhypothesis (Goodman, 1955) that rules are typically act-based. We report three studies that indicate that people can use information about violations to form overhypotheses about rules. In study 1, participants learned either three "consequence-based" rules that prohibited allowing an outcome or three "act-based" rules that prohibited producing the outcome; in a subsequent learning task, we found that participants who had learned three consequence-based rules were more likely to think that the new rule prohibited allowing an outcome. In study 2, we presented participants with either 1 consequence-based rule or 3 consequence-based rules, and we found that those exposed to 3 such rules were more likely to think that a new rule was also consequence-based. Thus, in both studies, it seems that learning 3 consequence-based rules generates an overhypothesis to expect new rules to be consequence-based. In a final study, we used a more subtle manipulation. We exposed participants to examples of act-based or accident-based (strict liability) laws and then had them learn a novel rule. We found that participants who were exposed to the accident-based laws were more likely to think a new rule was accident-based. The fact that participants' bias for act-based rules can be shaped by evidence from other rules supports the idea that the bias for act-based rules might be acquired as an overhypothesis from the

  14. On the effects of adaptive reservoir operating rules in hydrological physically-based models

    Science.gov (United States)

    Giudici, Federico; Anghileri, Daniela; Castelletti, Andrea; Burlando, Paolo

    2017-04-01

    Recent years have seen a significant increase of the human influence on natural systems at both the global and local scale. Accurately modeling the human component and its interaction with the natural environment is key to characterize the real system dynamics and anticipate future potential changes to the hydrological regimes. Modern distributed, physically-based hydrological models are able to describe hydrological processes with a high level of detail and high spatiotemporal resolution. Yet they lack sophistication in the behavioral component, and human decisions are usually described by very simplistic rules, which might underperform in reproducing the catchment dynamics. In the case of water reservoir operators, these simplistic rules usually consist of target-level rule curves, which represent the average historical level trajectory. Whilst these rules can reasonably reproduce the average seasonal water volume shifts due to the reservoirs' operation, they cannot properly represent peculiar conditions which influence the actual reservoirs' operation, e.g., variations in energy price or water demand, or dry or wet meteorological conditions. Moreover, target-level rule curves are not suitable to explore the water system response to climate and socio-economic changing contexts, because they assume business-as-usual operation. In this work, we quantitatively assess how the inclusion of adaptive reservoir operating rules in physically-based hydrological models contributes to the proper representation of the hydrological regime at the catchment scale. In particular, we contrast target-level rule curves and detailed optimization-based behavioral models. We first perform the comparison on past observational records, showing that target-level rule curves underperform in representing the hydrological regime over multiple time scales (e.g., weekly, seasonal, inter-annual). Then, we compare how future hydrological changes are affected by the two modeling
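
    For readers unfamiliar with the target-level rule curve idea contrasted above, the following is a minimal sketch: the operator simply releases whatever volume moves storage toward the month's target level. All target levels, capacity and inflows are hypothetical, and the optimization-based behavioral alternative is not shown.

    ```python
    # Minimal sketch of a target-level rule curve: release the volume that moves
    # storage toward the month's historical target level. All numbers are
    # hypothetical.
    TARGET_STORAGE = {  # hm^3, one value per month (illustrative)
        1: 60, 2: 55, 3: 50, 4: 55, 5: 65, 6: 75,
        7: 80, 8: 78, 9: 70, 10: 65, 11: 62, 12: 60,
    }
    CAPACITY = 90.0  # hm^3

    def rule_curve_release(month: int, storage: float, inflow: float) -> float:
        """Release so that end-of-month storage approaches the target level."""
        desired = storage + inflow - TARGET_STORAGE[month]
        release = max(desired, 0.0)                 # no negative releases
        release = min(release, storage + inflow)    # cannot release more than available
        return release

    if __name__ == "__main__":
        s = 70.0
        for m, q_in in [(1, 10.0), (2, 8.0), (3, 20.0)]:
            r = rule_curve_release(m, s, q_in)
            s = min(s + q_in - r, CAPACITY)
            print(f"month {m}: release {r:.1f}, storage {s:.1f}")
    ```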

  15. Scaling architecture-on-demand based optical networks

    NARCIS (Netherlands)

    Meyer, Hugo; Sancho, Jose Carlos; Mrdakovic, Milica; Peng, Shuping; Simeonidou, Dimitra; Miao, Wang; Calabretta, Nicola

    2016-01-01

    This paper analyzes methodologies that allow properly scaling Architecture-on-Demand (AoD) based optical networks. As Data Centers and HPC systems grow in size and complexity, optical networks seem to be the way to scale the bandwidth of current network infrastructures. To scale the number of

  16. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Science.gov (United States)

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.
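
    The record above builds on particle swarm optimization; as background only, here is a standard PSO sketch (inertia plus cognitive and social terms) on a sphere test function. The consensus mechanism and the Trust-Tech stages of the proposed methodology are not reproduced, and all parameter values are conventional defaults rather than the paper's.

    ```python
    # Standard particle swarm optimization (PSO) update rule on a sphere test
    # function. This is background for the record above, not its consensus-based
    # or Trust-Tech-assisted variant.
    import numpy as np

    def pso(objective, dim=10, n_particles=30, iters=200, seed=0,
            w=0.72, c1=1.49, c2=1.49, bounds=(-5.0, 5.0)):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.apply_along_axis(objective, 1, x)
        g = pbest[pbest_val.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
            x = np.clip(x + v, lo, hi)
            vals = np.apply_along_axis(objective, 1, x)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            g = pbest[pbest_val.argmin()].copy()                    # global best
        return g, pbest_val.min()

    if __name__ == "__main__":
        sphere = lambda z: float(np.sum(z ** 2))
        best, val = pso(sphere)
        print(f"best objective after 200 iterations: {val:.4f}")
    ```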

  17. Investment Strategies Optimization based on a SAX-GA Methodology

    CERN Document Server

    Canelas, António M L; Horta, Nuno C G

    2013-01-01

    This book presents a new computational finance approach combining a Symbolic Aggregate approXimation (SAX) technique with an optimization kernel based on genetic algorithms (GA). While the SAX representation is used to describe the financial time series, the evolutionary optimization kernel is used to identify the most relevant patterns and generate investment rules. The proposed approach considers several different chromosome structures in order to achieve better results on the trading platform. The methodology presented in this book has great potential in investment markets.
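
    As an illustration of the SAX step only (the GA layer and the book's chromosome structures are omitted), the sketch below z-normalizes a series, reduces it with piecewise aggregate approximation, and maps segment means to a four-letter alphabet using the standard Gaussian breakpoints.

    ```python
    # Basic SAX (Symbolic Aggregate approXimation) sketch: z-normalize, reduce
    # with piecewise aggregate approximation (PAA), then map segment means to
    # symbols using Gaussian breakpoints.
    import numpy as np

    BREAKPOINTS_4 = [-0.6745, 0.0, 0.6745]   # alphabet size 4 (quartiles of N(0,1))
    ALPHABET = "abcd"

    def sax(series, n_segments=8, breakpoints=BREAKPOINTS_4, alphabet=ALPHABET):
        x = np.asarray(series, dtype=float)
        x = (x - x.mean()) / (x.std() + 1e-12)        # z-normalize
        segments = np.array_split(x, n_segments)      # PAA segments
        means = [seg.mean() for seg in segments]
        return "".join(alphabet[int(np.searchsorted(breakpoints, m))] for m in means)

    if __name__ == "__main__":
        prices = np.cumsum(np.random.default_rng(0).normal(size=64)) + 100
        print(sax(prices))    # an 8-letter word summarizing the series shape
    ```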

  18. A C++ Class for Rule-Base Objects

    Directory of Open Access Journals (Sweden)

    William J. Grenney

    1992-01-01

    Full Text Available A C++ class, called Tripod, was created as a tool to assist with the development of rule-base decision support systems. The Tripod class contains data structures for the rule-base and member functions for operating on the data. The rule-base is defined by three ASCII files. These files are translated by a preprocessor into a single file that is loaded when a rule-base object is instantiated. The Tripod class was tested as part of a prototype decision support system (DSS) for winter highway maintenance in the Intermountain West. The DSS is composed of two principal modules: the main program, called the wrapper, and a Tripod rule-base object. The wrapper is a procedural module that interfaces with remote sensors and an external meteorological database. The rule-base contains the logic for advising an inexperienced user and for assisting with the decision making process.

  19. Automated implementation of rule-based expert systems with neural networks for time-critical applications

    Science.gov (United States)

    Ramamoorthy, P. A.; Huang, Song; Govind, Girish

    1991-01-01

    In fault diagnosis, control and real-time monitoring, both timing and accuracy are critical for operators or machines to reach proper solutions or appropriate actions. Expert systems are becoming more popular in the manufacturing community for dealing with such problems. In recent years, neural networks have seen a revival and their applications have spread to many areas of science and engineering. A method of using neural networks to implement rule-based expert systems for time-critical applications is discussed here. This method can convert a given rule-based system into a neural network with fixed weights and thresholds. The rules governing the translation are presented along with some examples. We also present the results of automated machine implementation of such networks from the given rule base. This significantly simplifies the translation process from conventional rule-based systems to neural network expert systems. Results comparing the performance of the proposed approach based on neural networks with the classical approach are given. The possibility of very large scale integration (VLSI) realization of such neural network expert systems is also discussed.
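
    The record does not spell out its translation rules, so the sketch below shows only the textbook fixed-weight construction: each conjunctive rule becomes a threshold unit with unit weights and a threshold equal to the number of antecedents, and the rule consequents are combined by an OR unit. The weight convention is illustrative, not necessarily the paper's exact scheme.

    ```python
    # Classic fixed-weight translation of IF-THEN rules into threshold units:
    # a conjunction of k binary antecedents becomes a unit with unit weights and
    # threshold k; a disjunction of rule outputs becomes a unit with threshold 1.

    def threshold_unit(inputs, weights, threshold) -> int:
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    def rule_network(facts: dict) -> int:
        """Rules: (A AND B) OR (C AND D)  =>  conclusion."""
        x = [facts["A"], facts["B"], facts["C"], facts["D"]]
        and1 = threshold_unit(x[:2], [1, 1], 2)         # A AND B
        and2 = threshold_unit(x[2:], [1, 1], 2)         # C AND D
        return threshold_unit([and1, and2], [1, 1], 1)  # OR layer

    if __name__ == "__main__":
        print(rule_network({"A": 1, "B": 1, "C": 0, "D": 0}))  # 1 (first rule fires)
        print(rule_network({"A": 1, "B": 0, "C": 0, "D": 1}))  # 0 (no rule fires)
    ```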

  20. A Constructivist Approach to Rule Bases

    NARCIS (Netherlands)

    Sileno, G.; Boer, A.; van Engers, T.; Loiseau, S.; Filipe, J.; Duval, B.; van den Herik, J.

    2015-01-01

    The paper presents a set of algorithms for the conversion of rule bases between priority-based and constraint-based representations. Inspired by research in precedential reasoning in law, such algorithms can be used for the analysis of a rule base, and for the study of the impact of the introduction

  1. A demand-centered, hybrid life-cycle methodology for city-scale greenhouse gas inventories.

    Science.gov (United States)

    Ramaswami, Anu; Hillman, Tim; Janson, Bruce; Reiner, Mark; Thomas, Gregg

    2008-09-01

    Greenhouse gas (GHG) accounting for individual cities is confounded by spatial scale and boundary effects that impact the allocation of regional material and energy flows. This paper develops a demand-centered, hybrid life-cycle-based methodology for conducting city-scale GHG inventories that incorporates (1) spatial allocation of surface and airline travel across colocated cities in larger metropolitan regions, and (2) life-cycle assessment (LCA) to quantify the embodied energy of key urban materials--food, water, fuel, and concrete. The hybrid methodology enables cities to separately report the GHG impact associated with direct end-use of energy by cities (consistent with EPA and IPCC methods), as well as the impact of extra-boundary activities such as air travel and production of key urban materials (consistent with Scope 3 protocols recommended by the World Resources Institute). Application of this hybrid methodology to Denver, Colorado, yielded a more holistic GHG inventory that approaches a GHG footprint computation, with consistency of inclusions across spatial scales as well as convergence of city-scale per capita GHG emissions (approximately 25 mt CO2e/person/year) with state and national data. The method is shown to have significant policy impacts, and also demonstrates the utility of benchmarks in understanding energy use in various city sectors.
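
    To make the accounting idea concrete, here is a minimal sketch that adds direct end-use emissions to extra-boundary items (air travel and the embodied emissions of food, water, fuel and concrete) and reports the per capita result. All activity data and totals are placeholders, not the Denver inventory values.

    ```python
    # Minimal sketch of the hybrid accounting idea: direct end-use emissions plus
    # extra-boundary items. All figures are placeholders for illustration only.
    DIRECT = {                      # metric tons CO2e per year (placeholders)
        "electricity": 4.5e6,
        "natural_gas": 2.5e6,
        "surface_transport": 3.5e6,
    }
    EXTRA_BOUNDARY = {
        "air_travel": 2.0e6,
        "embodied_food": 1.5e6,
        "embodied_water": 0.2e6,
        "embodied_fuel": 1.0e6,
        "embodied_concrete": 0.8e6,
    }
    POPULATION = 600_000            # placeholder city population

    def per_capita(inventory: dict) -> float:
        return sum(inventory.values()) / POPULATION

    if __name__ == "__main__":
        print(f"direct end-use only: {per_capita(DIRECT):.1f} t CO2e/person/yr")
        print(f"hybrid (with extra-boundary items): "
              f"{per_capita({**DIRECT, **EXTRA_BOUNDARY}):.1f} t CO2e/person/yr")
    ```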

  2. Integrated Case Based and Rule Based Reasoning for Decision Support

    OpenAIRE

    Eshete, Azeb Bekele

    2009-01-01

    This project is a continuation of my specialization project, which was focused on studying theoretical concepts related to the case-based reasoning method, the rule-based reasoning method, and their integration. The integration of rule-based and case-based reasoning methods has shown a substantial improvement in performance over the individual methods. Verdande Technology AS wants to try integrating the rule-based reasoning method with an existing case-based system. This project focu...

  3. Evaluating perceptual integration: uniting response-time- and accuracy-based methodologies.

    Science.gov (United States)

    Eidels, Ami; Townsend, James T; Hughes, Howard C; Perry, Lacey A

    2015-02-01

    This investigation brings together a response-time system identification methodology (e.g., Townsend & Wenger Psychonomic Bulletin & Review 11, 391-418, 2004a) and an accuracy methodology, intended to assess models of integration across stimulus dimensions (features, modalities, etc.) that were proposed by Shaw and colleagues (e.g., Mulligan & Shaw Perception & Psychophysics 28, 471-478, 1980). The goal was to theoretically examine these separate strategies and to apply them conjointly to the same set of participants. The empirical phases were carried out within an extension of an established experimental design called the double factorial paradigm (e.g., Townsend & Nozawa Journal of Mathematical Psychology 39, 321-359, 1995). That paradigm, based on response times, permits assessments of architecture (parallel vs. serial processing), stopping rule (exhaustive vs. minimum time), and workload capacity, all within the same blocks of trials. The paradigm introduced by Shaw and colleagues uses a statistic formally analogous to that of the double factorial paradigm, but based on accuracy rather than response times. We demonstrate that the accuracy measure cannot discriminate between parallel and serial processing. Nonetheless, the class of models supported by the accuracy data possesses a suitable interpretation within the same set of models supported by the response-time data. The supported model, consistent across individuals, is parallel and has limited capacity, with the participants employing the appropriate stopping rule for the experimental setting.

  4. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre resolution patient anatomy, it is possible to obtain a millimetre resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, the E-vector-field distribution obtained from the geometry down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlation 97% and 96%, and absolute averaged difference 6% and 14%, respectively). (author)
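
    For illustration of the first two down-scaling techniques named above, the toy sketch below coarsens a synthetic 2-D grid of tissue labels by 'winner-takes-all' (most frequent label per coarse cell) and a grid of permittivities by 'volumetric averaging' (block mean). The anisotropic variant and the actual patient data are not reproduced.

    ```python
    # Toy illustration of 'winner-takes-all' and 'volumetric averaging'
    # down-scaling on a synthetic fine-resolution grid.
    import numpy as np
    from collections import Counter

    def winner_takes_all(labels: np.ndarray, factor: int) -> np.ndarray:
        ny, nx = labels.shape
        out = np.empty((ny // factor, nx // factor), dtype=labels.dtype)
        for j in range(out.shape[0]):
            for i in range(out.shape[1]):
                block = labels[j*factor:(j+1)*factor, i*factor:(i+1)*factor]
                out[j, i] = Counter(block.ravel().tolist()).most_common(1)[0][0]
        return out

    def volumetric_average(eps: np.ndarray, factor: int) -> np.ndarray:
        ny, nx = eps.shape
        return eps.reshape(ny // factor, factor, nx // factor, factor).mean(axis=(1, 3))

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        labels = rng.integers(1, 4, size=(10, 10))    # synthetic tissue IDs, fine grid
        eps = np.where(labels == 1, 5.0, np.where(labels == 2, 40.0, 80.0))
        print(winner_takes_all(labels, 5))            # 2 x 2 coarse label grid
        print(volumetric_average(eps, 5))             # 2 x 2 coarse permittivity grid
    ```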

  5. Bioprocess scale-up/down as integrative enabling technology: from fluid mechanics to systems biology and beyond.

    Science.gov (United States)

    Delvigne, Frank; Takors, Ralf; Mudde, Rob; van Gulik, Walter; Noorman, Henk

    2017-09-01

    Efficient optimization of microbial processes is a critical issue for achieving a number of sustainable development goals, considering the impact of microbial biotechnology in the agrofood, environment, biopharmaceutical and chemical industries. Many of these applications require scale-up after proof of concept. However, the behaviour of microbial systems remains (at least partially) unpredictable when shifting from laboratory-scale to industrial conditions. Robust microbial systems are thus highly needed in this context, as well as a better understanding of the interactions between fluid mechanics and cell physiology. For that purpose, a full scale-up/down computational framework is already available. This framework links computational fluid dynamics (CFD), metabolic flux analysis and agent-based modelling (ABM) for a better understanding of cell lifelines in a heterogeneous environment. Ultimately, this framework can be used for the design of scale-down simulators and/or metabolically engineered cells able to cope with the environmental fluctuations typically found in large-scale bioreactors. However, this framework still needs some refinements, such as a better integration of gas-liquid flows in CFD, and taking into account intrinsic biological noise in ABM. © 2017 The Authors. Microbial Biotechnology published by John Wiley & Sons Ltd and Society for Applied Microbiology.

  6. Rules of Thumb for Up-and-Down Economics.

    Science.gov (United States)

    Kennedy, Peter

    1994-01-01

    Reports that economist Paul Krugman identifies three types of economic writing. Asserts that the major principles of macroeconomics can be synthesized into "Five Rules of Thumb" which can help to interpret economic news. Presents and discusses these five rules and includes a survey summary of journalists and economists on this issue.…

  7. Performance Assessment of Turbulence Models for the Prediction of the Reactor Internal Flow in the Scale-down APR+

    International Nuclear Information System (INIS)

    Lee, Gonghee; Bang, Youngseok; Woo, Swengwoong; Kim, Dohyeong; Kang, Minku

    2013-01-01

    The types of errors in CFD simulation can be divided into two main categories: numerical errors and model errors. The turbulence model is one of the important sources of model errors. In this study, in order to assess the prediction performance of Reynolds-averaged Navier-Stokes (RANS)-based two-equation turbulence models for the analysis of flow distribution inside a 1/5 scale-down APR+, the simulation was conducted with the commercial CFD software ANSYS CFX V. 14. Both the standard k-ε model and the SST model predicted similar flow patterns inside the reactor. It was therefore concluded that the prediction performance of both turbulence models was nearly the same. Complex thermal-hydraulic characteristics exist inside the reactor because the reactor internals consist of the fuel assemblies, control rod assemblies, and other internal structures. Either flow distribution tests on a scale-down reactor model or computational fluid dynamics (CFD) simulations have been conducted to understand these complex thermal-hydraulic features inside the reactor

  8. ConGEMs: Condensed Gene Co-Expression Module Discovery Through Rule-Based Clustering and Its Application to Carcinogenesis

    Directory of Open Access Journals (Sweden)

    Saurav Mallik

    2017-12-01

    Full Text Available For transcriptomic analysis, there are numerous microarray-based genomic data, especially those generated for cancer research. The typical analysis measures the difference between a cancer sample-group and a matched control group for each transcript or gene. Association rule mining is used to discover interesting item sets through rule-based methodology. Thus, it has advantages to find causal effect relationships between the transcripts. In this work, we introduce two new rule-based similarity measures—weighted rank-based Jaccard and Cosine measures—and then propose a novel computational framework to detect condensed gene co-expression modules (ConGEMs) through the association rule-based learning system and the weighted similarity scores. In practice, the list of evolved condensed markers that consists of both singular and complex markers in nature depends on the corresponding condensed gene sets in either antecedent or consequent of the rules of the resultant modules. In our evaluation, these markers could be supported by literature evidence, KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway and Gene Ontology annotations. Specifically, we preliminarily identified differentially expressed genes using an empirical Bayes test. A recently developed algorithm—RANWAR—was then utilized to determine the association rules from these genes. Based on that, we computed the integrated similarity scores of these rule-based similarity measures between each rule-pair, and the resultant scores were used for clustering to identify the co-expressed rule-modules. We applied our method to a gene expression dataset for lung squamous cell carcinoma and a genome methylation dataset for uterine cervical carcinogenesis. Our proposed module discovery method produced better results than the traditional gene-module discovery measures. In summary, our proposed rule-based method is useful for exploring biomarker modules from transcriptomic data.

  9. ConGEMs: Condensed Gene Co-Expression Module Discovery Through Rule-Based Clustering and Its Application to Carcinogenesis.

    Science.gov (United States)

    Mallik, Saurav; Zhao, Zhongming

    2017-12-28

    For transcriptomic analysis, there are numerous microarray-based genomic data, especially those generated for cancer research. The typical analysis measures the difference between a cancer sample-group and a matched control group for each transcript or gene. Association rule mining is used to discover interesting item sets through rule-based methodology. Thus, it has advantages to find causal effect relationships between the transcripts. In this work, we introduce two new rule-based similarity measures-weighted rank-based Jaccard and Cosine measures-and then propose a novel computational framework to detect condensed gene co-expression modules (ConGEMs) through the association rule-based learning system and the weighted similarity scores. In practice, the list of evolved condensed markers that consists of both singular and complex markers in nature depends on the corresponding condensed gene sets in either antecedent or consequent of the rules of the resultant modules. In our evaluation, these markers could be supported by literature evidence, KEGG (Kyoto Encyclopedia of Genes and Genomes) pathway and Gene Ontology annotations. Specifically, we preliminarily identified differentially expressed genes using an empirical Bayes test. A recently developed algorithm-RANWAR-was then utilized to determine the association rules from these genes. Based on that, we computed the integrated similarity scores of these rule-based similarity measures between each rule-pair, and the resultant scores were used for clustering to identify the co-expressed rule-modules. We applied our method to a gene expression dataset for lung squamous cell carcinoma and a genome methylation dataset for uterine cervical carcinogenesis. Our proposed module discovery method produced better results than the traditional gene-module discovery measures. In summary, our proposed rule-based method is useful for exploring biomarker modules from transcriptomic data.
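
    The exact definition of the weighted rank-based Jaccard measure is given in the papers above; the sketch below only illustrates the general idea of a weighted Jaccard similarity between the gene sets of two rules, with weights derived from gene ranks. The weighting scheme and the example genes are assumptions for illustration.

    ```python
    # Generic weighted Jaccard similarity between the gene sets of two association
    # rules, with rank-derived weights. Illustration only, not the papers' exact
    # "weighted rank-based Jaccard" definition.

    def weighted_jaccard(rule_a: set, rule_b: set, rank_weight: dict) -> float:
        """rank_weight maps each gene to a rank-derived weight (e.g., 1/rank)."""
        union = rule_a | rule_b
        if not union:
            return 0.0
        inter_w = sum(rank_weight.get(g, 0.0) for g in rule_a & rule_b)
        union_w = sum(rank_weight.get(g, 0.0) for g in union)
        return inter_w / union_w if union_w else 0.0

    if __name__ == "__main__":
        weights = {"TP53": 1.0, "EGFR": 0.5, "KRAS": 0.33, "MYC": 0.25}
        r1 = {"TP53", "EGFR", "KRAS"}    # genes in rule 1 (antecedent + consequent)
        r2 = {"TP53", "MYC"}
        print(round(weighted_jaccard(r1, r2, weights), 3))   # 0.481
    ```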

  10. Betweenness-based algorithm for a partition scale-free graph

    International Nuclear Information System (INIS)

    Zhang Bai-Da; Wu Jun-Jie; Zhou Jing; Tang Yu-Hua

    2011-01-01

    Many real-world networks are found to be scale-free. However, graph partitioning technology, a technology that enables parallel computing, performs poorly when scale-free graphs are provided. The reason for this is that traditional partitioning algorithms are designed for random networks and regular networks, rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered to be the state of the art and are used extensively. In this paper, we analyse the reasons why traditional multilevel graph-partitioning algorithms perform poorly and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from the comparison with traditional bottom-up partitioning. A new multilevel partitioning algorithm, named the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down partitioning paradigm. An experimental evaluation of seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches. (interdisciplinary physics and related areas of science and technology)
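
    The paper's own multilevel algorithm is not reproduced here; as a point of reference for betweenness-driven, top-down splitting, the sketch below uses the Girvan-Newman scheme available in networkx (repeatedly removing the highest-betweenness edge until the graph splits) on a small synthetic scale-free graph.

    ```python
    # Top-down, betweenness-driven bisection in the spirit of the record, using
    # the Girvan-Newman scheme from networkx. This is not the paper's own
    # multilevel betweenness-based partitioning algorithm.
    import networkx as nx
    from networkx.algorithms.community import girvan_newman

    def two_way_partition(graph: nx.Graph):
        communities = girvan_newman(graph)    # generator of successive splits
        first_split = next(communities)       # first split found (>= 2 communities)
        return [sorted(c) for c in first_split]

    if __name__ == "__main__":
        g = nx.barabasi_albert_graph(60, 2, seed=0)   # small scale-free test graph
        parts = two_way_partition(g)
        print([len(p) for p in parts])
    ```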

  11. Development of scaling rules for Rutherford type superconducting cables

    International Nuclear Information System (INIS)

    Royet, J.M.; Scanlan, R.M.

    1991-01-01

    During the R and D phase of the Superconducting Supercollider (SSC) program, LBL was responsible for establishing the parameters for cables used in SSC dipole and quadrupole magnets. In addition, LBL has collaborated with Fermi National Accelerator Laboratory on the design and fabrication of a new cable for use in the Low Beta Quadrupoles. As a result of the development work on these and other cables, we have arrived at a set of scaling rules which provide guidelines for choosing the parameters for a wide range of superconducting cables. These parameters include strand size, strand number, keystone angle, percent compaction, cable pitch and compacted cable dimensions. In addition, we have defined the tolerance ranges for the key cable manufacturing parameters such as mandrel size and shape, strand tension, and Turkshead temperature control. In this paper, the authors present the results on cables ranging from 8 strands to 36 strands of 0.65mm wire and from 8 strands to 30 strands of 0.8mm wire. The authors use these results to demonstrate the application of the scaling rules for Rutherford-type cable

  12. Development of scaling rules for Rutherford type superconducting cables

    International Nuclear Information System (INIS)

    Royet, J.M.; Scanlan, R.M.

    1990-09-01

    During the R and D phase of the Superconducting Supercollider (SSC) program, LBL was responsible for establishing the parameters for cables used in SSC dipole and quadrupole magnets. In addition, LBL has collaborated with Fermi National Accelerator Laboratory on the design and fabrication of a new cable for use in the Low Beta Quadrupoles. As a result of the development work on these and other cables, we have arrived at a set of scaling rules which provide guidelines for choosing the parameters for a wide range of superconducting cables. These parameters include strand size, strand number, keystone angle, percent compaction, cable pitch and compacted cable dimensions. In addition, we have defined the tolerance ranges for the key cable manufacturing parameters such as mandrel size and shape, strand tension, and Turkshead temperature control. In this paper, we present the results on cables ranging from 8 strands to 36 strands of 0.65mm wire and from 8 strands to 30 strands of 0.8mm wire. We use these results to demonstrate the application of the scaling rules for Rutherford-type cable

  13. The Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) Scale: Comprehensive Assessment of Psychopathology in Down Syndrome

    Science.gov (United States)

    Dekker, Alain D.; Sacco, Silvia; Carfi, Angelo; Benejam, Bessy; Vermeiren, Yannick; Beugelsdijk, Gonny; Schippers, Mieke; Hassefras, Lyanne; Eleveld, José; Grefelman, Sharina; Fopma, Roelie; Bomer-Veenboer, Monique; Boti, Mariángeles; Oosterling, G. Danielle E.; Scholten, Esther; Tollenaere, Marleen; Checkley, Laura; Strydom, André; Van Goethem, Gert; Onder, Graziano; Blesa, Rafael; zu Eulenburg, Christine; Coppus, Antonia M.W.; Rebillat, Anne-Sophie; Fortea, Juan; De Deyn, Peter P.

    2018-01-01

    People with Down syndrome (DS) are prone to develop Alzheimer’s disease (AD). Behavioral and psychological symptoms of dementia (BPSD) are core features, but have not been comprehensively evaluated in DS. In a European multidisciplinary study, the novel Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) scale was developed to identify frequency and severity of behavioral changes taking account of life-long characteristic behavior. 83 behavioral items in 12 clinically defined sections were evaluated. The central aim was to identify items that change in relation to the dementia status, and thus may differentiate between diagnostic groups. Structured interviews were conducted with informants of persons with DS without dementia (DS, n = 149), with questionable dementia (DS+Q, n = 65), and with diagnosed dementia (DS+AD, n = 67). First exploratory data suggest promising interrater, test-retest, and internal consistency reliability measures. Concerning item relevance, group comparisons revealed pronounced increases in frequency and severity in items of anxiety, sleep disturbances, agitation & stereotypical behavior, aggression, apathy, depressive symptoms, and eating/drinking behavior. The proportion of individuals presenting an increase was highest in DS+AD, intermediate in DS+Q, and lowest in DS. Interestingly, among DS+Q individuals, a substantial proportion already presented increased anxiety, sleep disturbances, apathy, and depressive symptoms, suggesting that these changes occur early in the course of AD. Future efforts should optimize the scale based on current results and clinical experiences, and further study applicability, reliability, and validity. Future application of the scale in daily care may aid caregivers to understand changes, and contribute to timely interventions and adaptation of caregiving. PMID:29689719

  14. The Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) Scale: Comprehensive Assessment of Psychopathology in Down Syndrome.

    Science.gov (United States)

    Dekker, Alain D; Sacco, Silvia; Carfi, Angelo; Benejam, Bessy; Vermeiren, Yannick; Beugelsdijk, Gonny; Schippers, Mieke; Hassefras, Lyanne; Eleveld, José; Grefelman, Sharina; Fopma, Roelie; Bomer-Veenboer, Monique; Boti, Mariángeles; Oosterling, G Danielle E; Scholten, Esther; Tollenaere, Marleen; Checkley, Laura; Strydom, André; Van Goethem, Gert; Onder, Graziano; Blesa, Rafael; Zu Eulenburg, Christine; Coppus, Antonia M W; Rebillat, Anne-Sophie; Fortea, Juan; De Deyn, Peter P

    2018-01-01

    People with Down syndrome (DS) are prone to develop Alzheimer's disease (AD). Behavioral and psychological symptoms of dementia (BPSD) are core features, but have not been comprehensively evaluated in DS. In a European multidisciplinary study, the novel Behavioral and Psychological Symptoms of Dementia in Down Syndrome (BPSD-DS) scale was developed to identify frequency and severity of behavioral changes taking account of life-long characteristic behavior. 83 behavioral items in 12 clinically defined sections were evaluated. The central aim was to identify items that change in relation to the dementia status, and thus may differentiate between diagnostic groups. Structured interviews were conducted with informants of persons with DS without dementia (DS, n = 149), with questionable dementia (DS+Q, n = 65), and with diagnosed dementia (DS+AD, n = 67). First exploratory data suggest promising interrater, test-retest, and internal consistency reliability measures. Concerning item relevance, group comparisons revealed pronounced increases in frequency and severity in items of anxiety, sleep disturbances, agitation & stereotypical behavior, aggression, apathy, depressive symptoms, and eating/drinking behavior. The proportion of individuals presenting an increase was highest in DS+AD, intermediate in DS+Q, and lowest in DS. Interestingly, among DS+Q individuals, a substantial proportion already presented increased anxiety, sleep disturbances, apathy, and depressive symptoms, suggesting that these changes occur early in the course of AD. Future efforts should optimize the scale based on current results and clinical experiences, and further study applicability, reliability, and validity. Future application of the scale in daily care may aid caregivers to understand changes, and contribute to timely interventions and adaptation of caregiving.

  15. Assessing a Top-Down Modeling Approach for Seasonal Scale Snow Sensitivity

    Science.gov (United States)

    Luce, C. H.; Lute, A.

    2017-12-01

    Mechanistic snow models are commonly applied to assess changes to snowpacks in a warming climate. Such assessments involve a number of assumptions about details of weather at daily to sub-seasonal time scales. Models of season-scale behavior can provide contrast for evaluating behavior at time scales more in concordance with climate warming projections. Such top-down models, however, involve a degree of empiricism, with attendant caveats about the potential of a changing climate to affect calibrated relationships. We estimated the sensitivity of snowpacks from 497 Snowpack Telemetry (SNOTEL) stations in the western U.S. based on differences in climate between stations (spatial analog). We examined the sensitivity of April 1 snow water equivalent (SWE) and mean snow residence time (SRT) to variations in Nov-Mar precipitation and average Nov-Mar temperature using multivariate local-fit regressions. We tested the modeling approach using a leave-one-out cross-validation as well as targeted two-fold non-random cross-validations contrasting, for example, warm vs. cold years, dry vs. wet years, and north vs. south stations. Nash-Sutcliffe Efficiency (NSE) values for the validations were strong for April 1 SWE, ranging from 0.71 to 0.90, and still reasonable, but weaker, for SRT, in the range of 0.64 to 0.81. From these ranges, we exclude validations where the training data do not represent the range of target data. A likely reason for differences in validation between the two metrics is that the SWE model reflects the influence of conservation of mass while using temperature as an indicator of the season-scale energy balance; in contrast, SRT depends more strongly on the energy balance aspects of the problem. Model forms with lower numbers of parameters generally validated better than more complex model forms, with the caveat that pseudoreplication could encourage selection of more complex models when validation contrasts were weak. Overall, the split sample validations
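
    The record reports Nash-Sutcliffe Efficiency (NSE) values from cross-validations of the regression models; the sketch below shows the NSE formula and a leave-one-out evaluation of a plain least-squares regression of April 1 SWE on winter precipitation and temperature, as a simplified stand-in for the paper's multivariate local-fit models. All data are synthetic.

    ```python
    # Nash-Sutcliffe Efficiency and a leave-one-out evaluation of an ordinary
    # least-squares stand-in for the record's multivariate local-fit regression.
    import numpy as np

    def nse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def loo_predictions(X, y):
        """Leave-one-out predictions from an ordinary least-squares fit."""
        n = len(y)
        preds = np.empty(n)
        Xd = np.column_stack([np.ones(n), X])
        for i in range(n):
            mask = np.arange(n) != i
            beta, *_ = np.linalg.lstsq(Xd[mask], y[mask], rcond=None)
            preds[i] = Xd[i] @ beta
        return preds

    if __name__ == "__main__":
        rng = np.random.default_rng(3)
        precip = rng.uniform(200, 900, 120)       # Nov-Mar precipitation (mm)
        temp = rng.uniform(-8, 4, 120)            # Nov-Mar mean temperature (deg C)
        swe = 0.6 * precip - 40.0 * temp + rng.normal(0, 60, 120)  # synthetic SWE
        sim = loo_predictions(np.column_stack([precip, temp]), swe)
        print(f"LOO NSE = {nse(swe, sim):.2f}")
    ```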

  16. Integrated layout based Monte-Carlo simulation for design arc optimization

    Science.gov (United States)

    Shao, Dongbing; Clevenger, Larry; Zhuang, Lei; Liebmann, Lars; Wong, Robert; Culp, James

    2016-03-01

    Design rules are created considering a wafer fail mechanism with the relevant design levels under various design cases, and the values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling. As an example, SRAM designs always need extensive ground rule waivers. Furthermore, dense design also often involves a "design arc", a collection of design rules the sum of which equals the critical pitch defined by the technology. In a design arc, a single rule change can lead to a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology to an SRAM word line contact, and the result is a layout that has balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div., Semiconductor Research and Development Center, Hopewell Junction, NY 12533
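
    The actual LBMCS flow and the SRAM word-line rules are not detailed above, so the following is only a toy Monte Carlo over a hypothetical design arc: two dimensions sharing a fixed pitch are sampled under assumed process variation, and the estimated violation rate of each ground rule is reported. All dimensions, distributions and rules are made up.

    ```python
    # Toy Monte Carlo over a hypothetical "design arc": sample line width and
    # contact size under assumed process variation and estimate how often each
    # ground rule is violated. All numbers and rules are illustrative.
    import numpy as np

    def run_mc(n=100_000, pitch=90.0, seed=0):
        rng = np.random.default_rng(seed)
        line = rng.normal(40.0, 2.0, n)        # drawn line width (nm) + variation
        contact = rng.normal(32.0, 2.5, n)     # contact size (nm) + variation
        space = pitch - line - contact         # what remains of the fixed pitch
        fails = {
            "space >= 15 nm": space < 15.0,
            "contact >= 28 nm": contact < 28.0,
            "line width >= 36 nm": line < 36.0,
        }
        return {rule: mask.mean() for rule, mask in fails.items()}

    if __name__ == "__main__":
        for rule, p_fail in run_mc().items():
            print(f"{rule:>20}: estimated fail rate {p_fail:.4f}")
    ```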

  17. A methodological approach for designing a usable ontology-based GUI in healthcare.

    Science.gov (United States)

    Lasierra, N; Kushniruk, A; Alesanco, A; Borycki, E; García, J

    2013-01-01

    This paper presents a methodological approach to the design and evaluation of an interface for an ontology-based system used for designing care plans for monitoring patients at home. In order to define the care plans, physicians need a tool for creating instances of the ontology and configuring some rules. Our purpose is to develop an interface that allows clinicians to interact with the ontology. Although ontology-driven applications do not necessarily present the ontology in the user interface, our hypothesis is that showing selected parts of the ontology in a "usable" way could enhance clinicians' understanding and make the definition of the care plans easier. Based on prototyping and iterative testing, this methodology combines visualization techniques and usability methods. Preliminary results obtained after a formative evaluation indicate the effectiveness of the suggested combination.

  18. Non-plant referenced simulator methodology to meet new 10 CFR 55.45 rule

    International Nuclear Information System (INIS)

    Ibarra, J.G.

    1988-01-01

    The new 10 CFR 55.45 rule on Operating Tests necessitates that simulators be upgraded to meet the new requirements. This paper presents the human factors work done on an NRC-approved guidance document, sponsored by four utilities, to develop a non-plant-referenced simulation facility. Human factors developed the simulator process flow and criteria, and integrated all the development work into the simulation facility plan. The human factors work provided the mechanism to solidify ideas and provided the foundation for the simulator development methodology.

  19. Intelligent wear mode identification system for marine diesel engines based on multi-level belief rule base methodology

    Science.gov (United States)

    Yan, Xinping; Xu, Xiaojian; Sheng, Chenxing; Yuan, Chengqing; Li, Zhixiong

    2018-01-01

    Wear faults are among the chief causes of main-engine damage, significantly influencing the secure and economical operation of ships. It is difficult for engineers to utilize multi-source information to identify wear modes, so an intelligent wear mode identification model needs to be developed to assist engineers in diagnosing wear faults in diesel engines. For this purpose, a multi-level belief rule base (BBRB) system is proposed in this paper. The BBRB system consists of two-level belief rule bases, and the 2D and 3D characteristics of wear particles are used as antecedent attributes on each level. Quantitative and qualitative wear information with uncertainties can be processed simultaneously by the BBRB system. In order to enhance the efficiency of the BBRB, the silhouette value is adopted to determine referential points and the fuzzy c-means clustering algorithm is used to transform input wear information into belief degrees. In addition, the initial parameters of the BBRB system are constructed on the basis of expert-domain knowledge and then optimized by the genetic algorithm to ensure the robustness of the system. To verify the validity of the BBRB system, experimental data acquired from real-world diesel engines are analyzed. Five-fold cross-validation is conducted on the experimental data and the BBRB is compared with the other four models in the cross-validation. In addition, a verification dataset containing different wear particles is used to highlight the effectiveness of the BBRB system in wear mode identification. The verification results demonstrate that the proposed BBRB is effective and efficient for wear mode identification with better performance and stability than competing systems.
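
    The BBRB's referential points, rule weights and aggregation scheme are specific to the paper; the sketch below only illustrates the generic activation-weight step of a belief rule base: input wear-particle features are matched against two referential values each, and each rule's activation is the product of its matching degrees scaled by the rule weight and normalized. All referential points, rules and weights here are illustrative.

    ```python
    # Generic activation-weight step of a belief-rule-base system. Referential
    # values, rules and weights are illustrative only.

    def matching_degree(x: float, low: float, high: float) -> dict:
        """Linear matching of x against two referential points 'low' and 'high'."""
        if x <= low:
            return {"low": 1.0, "high": 0.0}
        if x >= high:
            return {"low": 0.0, "high": 1.0}
        a = (high - x) / (high - low)
        return {"low": a, "high": 1.0 - a}

    RULES = [
        {"size": "low",  "shape": "low",  "weight": 1.0, "wear": "normal sliding"},
        {"size": "high", "shape": "low",  "weight": 1.0, "wear": "severe sliding"},
        {"size": "high", "shape": "high", "weight": 0.8, "wear": "cutting"},
    ]

    def activation_weights(size_um: float, aspect_ratio: float, rules=RULES):
        size_m = matching_degree(size_um, 10.0, 60.0)      # particle size (um)
        shape_m = matching_degree(aspect_ratio, 1.2, 3.0)  # particle aspect ratio
        raw = [r["weight"] * size_m[r["size"]] * shape_m[r["shape"]] for r in rules]
        total = sum(raw)
        return [w / total if total else 0.0 for w in raw]

    if __name__ == "__main__":
        for rule, w in zip(RULES, activation_weights(45.0, 1.5)):
            print(f"{rule['wear']:>15}: activation {w:.2f}")
    ```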

  20. UPC Scaling-up methodology for Deterministic Safety Assessment and Support to Plant Operation

    Energy Technology Data Exchange (ETDEWEB)

    Martínez-Quiroga, V.; Reventós, F.; Batet, Il.

    2015-07-01

    Best Estimate codes, along with the necessary nodalizations, are widely used tools in nuclear engineering for both Deterministic Safety Assessment (DSA) and Support to Plant Operation and Control. In this framework, the application of quality assurance procedures to both codes and nodalizations becomes an essential step prior to any significant study. Along these lines, the present paper introduces the UPC SCUP, a systematic methodology based on the extrapolation of Integral Test Facility (ITF) post-test simulations by means of scaling analyses. In that sense, SCUP fills a gap in current nodalization qualification procedures, namely the validation of NPP nodalizations for Design Basis Accident conditions. Three pillars support SCUP: judicious selection of the experimental transients, full confidence in the quality of the ITF simulations, and simplicity in justifying discrepancies that appear between ITF and NPP counterpart transients. The techniques that are presented include the so-called Kv scaled calculations as well as the use of two new approaches, "Hybrid nodalizations" and "Scaled-up nodalizations". These last two methods have revealed themselves to be very helpful in producing the required qualification and in promoting further improvements in nodalization. The study of both LSTF and PKL counterpart tests has allowed the methodology to be qualified by comparison with experimental data. Post-test simulations at different sizes allowed defining which phenomena could be well reproduced by system codes and which could not, thereby also establishing the basis for the extrapolation to an NPP scaled calculation. Furthermore, the application of the UPC SCUP methodology demonstrated that selected phenomena can be scaled-up and explained between counterpart simulations by carefully considering the differences in scale and design. (Author)

  1. A Cross-Wavelet Transform Aided Rule Based Approach for Early Prediction of Lean Blow-out in Swirl-Stabilized Dump Combustor

    Directory of Open Access Journals (Sweden)

    Debangshu Dey

    2015-03-01

    Full Text Available Lean or ultralean combustion is one of the popular strategies to achieve very low emission levels. However, it is extremely susceptible to lean blow-out (LBO). The present work explores a cross-wavelet transform (XWT) aided rule-based scheme for early prediction of lean blow-out. XWT can be considered an extension of wavelet analysis that gives the correlation between two waveforms in time-frequency space. In the present scheme a swirl-stabilized dump combustor is used as a laboratory-scale model of a generic gas turbine combustor, with LPG as fuel. Various time series of the CH chemiluminescence signal are recorded for different flame conditions by varying the equivalence ratio, flow rate and level of air-fuel premixing. Features are extracted from the cross-wavelet spectrum of the recorded waveforms and a reference wave. The extracted features are observed to classify the flame condition into three major classes: near LBO, moderate and healthy. Moreover, a rough-set-based technique is also applied to the extracted features to generate a rule base so that it can be fed to a real-time controller or expert system to take the necessary control action to prevent LBO. Results show that the proposed methodology performs with an acceptable degree of accuracy.

  2. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall records at fine time scales, varying from daily down to a 1-min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures that modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in a better performance in terms of skewness, rainfall extremes and dry proportions.
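
    HyetosMinute itself is an R package and its adjusting procedures are more refined than what follows; the Python sketch below only illustrates the simplest proportional adjustment, rescaling synthetic fine-scale depths so they sum exactly to the observed coarse-scale total. The Bartlett-Lewis generator is not reproduced.

    ```python
    # Simplest "adjusting procedure" used in rainfall disaggregation:
    # proportionally rescale synthetic fine-scale depths so they sum to the
    # observed coarse-scale total.
    import numpy as np

    def proportional_adjust(fine_synthetic, coarse_total):
        fine = np.asarray(fine_synthetic, dtype=float)
        s = fine.sum()
        if s == 0:
            # degenerate case: spread the observed total uniformly
            return np.full_like(fine, coarse_total / len(fine))
        return fine * (coarse_total / s)

    if __name__ == "__main__":
        observed_hourly_total = 12.4                 # mm observed in one hour
        synthetic_5min = np.array([0.0, 0.3, 1.1, 2.4, 3.0, 2.2,
                                   1.0, 0.4, 0.1, 0.0, 0.0, 0.0])
        adjusted = proportional_adjust(synthetic_5min, observed_hourly_total)
        print(adjusted.round(2), adjusted.sum())     # twelve 5-min depths summing to 12.4
    ```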

  3. Methodology for the free allocation of emission allowances in the EU ETS post 2012. Sector report for the chemical industry

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-11-15

    In 2013, the third trading period of the EU emission trading scheme (EU ETS) will start. With a few exceptions, no free allocation of emission allowances is foreseen in this third trading period for the emissions related to the production of electricity. These emission allowances will be auctioned. For other emissions, transitional free allocation of emission allowances is envisioned. This free allocation will be based on Community wide allocation rules that will, to the extent feasible, be based on ex-ante benchmarks. In 2013, the free allocation is 80% of the quantity determined via these rules, going down to 30% in 2020. An exception is made for activities that are deemed to be exposed to a significant risk of carbon leakage. These activities will receive an allocation of 100% of the quantity determined via the rules. The benchmarks should in principle be calculated for products, i.e. a specific performance per unit productive output, to ensure that they maximize greenhouse gas reductions throughout each production process of the sectors concerned. In this study for the European Commission, a blueprint for a methodology based on benchmarking is developed to determine the allocation rules in the EU ETS from 2013 onwards. In cases where benchmarking is not regarded as feasible, alternative approaches are suggested. The methodology allows determining the allocation for each EU ETS installation eligible for free allocation of emission allowances. The focus of this study is on preparing a first blueprint of an allocation methodology for free allocation of emission allowances under the EU Emission Trading Scheme for the period 2013-2020 for installations in the chemical industry. The report should be read in conjunction with the report on the project approach and general issues.

  4. Methodology for the free allocation of emission allowances in the EU ETS post 2012. Sector report for the refinery industry

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2009-11-15

    In 2013, the third trading period of the EU emission trading scheme (EU ETS) will start. With a few exceptions, no free allocation of emission allowances is foreseen in this third trading period for the emissions related to the production of electricity. These emission allowances will be auctioned. For other emissions, transitional free allocation of emission allowances is envisioned. This free allocation will be based on Community-wide allocation rules that will, to the extent feasible, be based on ex-ante benchmarks. In 2013, the free allocation is 80% of the quantity determined via these rules, going down to 30% in 2020. An exception is made for activities that are deemed to be exposed to a significant risk of carbon leakage. These activities will receive an allocation of 100% of the quantity determined via the rules. The benchmarks should in principle be calculated for products, i.e. a specific performance per unit of productive output, to ensure that they maximize greenhouse gas reductions throughout each production process of the sectors concerned. In this study for the European Commission, a blueprint for a methodology based on benchmarking is developed to determine the allocation rules in the EU ETS from 2013 onwards. In cases where benchmarking is not regarded as feasible, alternative approaches are suggested. The methodology allows the allocation to be determined for each EU ETS installation eligible for free allocation of emission allowances. The focus of this study is on preparing a first blueprint of an allocation methodology for free allocation of emission allowances under the EU Emission Trading Scheme for the period 2013-2020 for installations in the refinery industry. The report should be read in conjunction with the report on the project approach and general issues.
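    As a small numerical illustration of the allocation arithmetic described above, the Python sketch below combines a product benchmark, an installation's activity level and the free-allocation factor (100% for carbon-leakage-exposed activities, otherwise falling from 80% in 2013 to 30% in 2020; a linear phase-down between those two anchor years is assumed here). The benchmark and output figures are hypothetical.

        def free_allocation(benchmark, activity_level, year, carbon_leakage=False):
            """Allowances allocated for free: benchmark (allowances per unit of
            product) x activity level (units of product) x allocation factor."""
            if carbon_leakage:
                factor = 1.0                      # 100% for exposed activities
            else:
                # 80% in 2013 declining to 30% in 2020 (linear interpolation assumed)
                factor = 0.80 + (0.30 - 0.80) * (year - 2013) / (2020 - 2013)
            return benchmark * activity_level * factor

        # hypothetical installation: benchmark 0.5 allowances/t, 100,000 t output
        print(free_allocation(0.5, 100_000, 2016))          # about 29,286 allowances
        print(free_allocation(0.5, 100_000, 2016, True))    # 50,000 allowances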

  5. Spatio-Temporal Rule Mining

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach

    2005-01-01

    Recent advances in communication and information technology, such as the increasing accuracy of GPS technology and the miniaturization of wireless communication devices, pave the way for Location-Based Services (LBS). To achieve high quality for such services, spatio-temporal data mining techniques are needed. In this paper, we describe experiences with spatio-temporal rule mining in a Danish data mining company. First, a number of real-world spatio-temporal data sets are described, leading to a taxonomy of spatio-temporal data. Second, the paper describes a general methodology that transforms the spatio-temporal rule mining task into the traditional market basket analysis task and applies it to the described data sets, enabling traditional association rule mining methods to discover spatio-temporal rules for LBS. Finally, unique issues in spatio-temporal rule mining are identified and discussed.
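    To make the market-basket transformation concrete, the Python sketch below pivots a toy spatio-temporal event log into transactions keyed by (area, hour) and counts frequent item pairs. The log, the (area, hour) pivoting choice and the naive pair counting are illustrative assumptions, not the company's actual data or the taxonomy described in the paper; in practice a proper association-rule miner would be run on the resulting baskets.

        from collections import defaultdict
        from itertools import combinations

        # hypothetical LBS request log: (user_id, area_id, hour_of_day, service)
        events = [
            (1, "A3", 8,  "traffic"), (1, "A3", 8,  "weather"),
            (2, "A3", 8,  "traffic"), (2, "A7", 18, "restaurants"),
            (3, "A7", 18, "restaurants"), (3, "A7", 18, "events"),
        ]

        # pivot into "baskets": one transaction per (area, hour) cell,
        # items = services requested in that spatio-temporal cell
        baskets = defaultdict(set)
        for user, area, hour, service in events:
            baskets[(area, hour)].add(service)

        # naive frequent-pair counting as a stand-in for a full Apriori/FP-growth run
        pair_counts = defaultdict(int)
        for items in baskets.values():
            for a, b in combinations(sorted(items), 2):
                pair_counts[(a, b)] += 1

        for pair, count in sorted(pair_counts.items()):
            print(pair, "support =", round(count / len(baskets), 2))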

  6. Ruling the Commons. Introducing a new methodology for the analysis of historical commons

    Directory of Open Access Journals (Sweden)

    Tine de Moor

    2016-10-01

    Full Text Available Despite significant progress in recent years, the evolution of commons over the long run remains an under-explored area within commons studies. Over the last few years, an international team of historians has worked under the umbrella of the Common Rules Project in order to design and test a new methodology aimed at advancing our knowledge of the dynamics of institutions for collective action – in particular commons. This project aims to contribute to the current debate on commons on three different fronts. Theoretically, it explicitly draws our attention to issues of change and adaptation in the commons – contrasting with more static analyses. Empirically, it highlights the value of historical records as a rich source of information for longitudinal analysis of the functioning of commons. Methodologically, it develops a systematic way of analyzing and comparing commons’ regulations across regions and time, setting a number of variables that have been defined on the basis of the “most common denominators” in commons regulation across countries and time periods. In this paper we introduce the project, describe our sources and methodology, and present the preliminary results of our analysis.

  7. A new methodology for the study of FAC phenomenon based on a fuzzy rule system

    International Nuclear Information System (INIS)

    Ferreira Guimaraes, Antonio Cesar

    2003-01-01

    This work represents the corrosion problem of FAC ('Flow-Accelerated Corrosion') in aging components, structures and passive systems of a nuclear power plant through a fuzzy rule system, in substitution for conventional modeling and experimental analyses. Using data characteristic of the nature of the problem to be analyzed, a reduced number of rules can be established to represent the actual problem. The results can be visualized in a very satisfactory way, providing the engineer with the knowledge needed to work in the solution space of the rules and to draw the necessary inferences.

  8. Horizontal and Vertical Rule Bases Method in Fuzzy Controllers

    OpenAIRE

    Aminifar, Sadegh; bin Marzuki, Arjuna

    2013-01-01

    The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to look for the main behaviors of a system and describe them with greater approximation. The rules which describe the system in the first stage are called the horizontal rule base. In the second stage, the designer modulates the obtained surface by describing the needed changes to the first surface for handling the real behaviors of the system. The rules used in the second stage are called the vertical rule base. Horizontal...

  9. Rule based systems for big data a machine learning approach

    CERN Document Server

    Liu, Han; Cocea, Mihaela

    2016-01-01

    The ideas introduced in this book explore the relationships among rule based systems, machine learning and big data. Rule based systems are seen as a special type of expert systems, which can be built by using expert knowledge or learning from real data. The book focuses on the development and evaluation of rule based systems in terms of accuracy, efficiency and interpretability. In particular, a unified framework for building rule based systems, which consists of the operations of rule generation, rule simplification and rule representation, is presented. Each of these operations is detailed using specific methods or techniques. In addition, this book also presents some ensemble learning frameworks for building ensemble rule based systems.

  10. An industrial perspective on bioreactor scale-down: what we can learn from combined large-scale bioprocess and model fluid studies.

    Science.gov (United States)

    Noorman, Henk

    2011-08-01

    For industrial bioreactor design, operation, control and optimization, the scale-down approach is often advocated to efficiently generate data on a small scale, and effectively apply suggested improvements to the industrial scale. In all cases it is important to ensure that the scale-down conditions are representative of the real large-scale bioprocess. Progress is hampered by limited detailed and local information from large-scale bioprocesses. Complementary to real fermentation studies, physical aspects of model fluids such as air-water in large bioreactors provide useful information with limited effort and cost. Still, in industrial practice, investments of time, capital and resources often prohibit systematic work, although, in the end, savings obtained in this way are trivial compared to the expenses that result from real process disturbances, batch failures, and non-flyers with loss of business opportunity. Here we try to highlight what can be learned from real large-scale bioprocesses in combination with model fluid studies, and to provide suitable computation tools to overcome data restrictions. Focus is on a specific well-documented case for a 30-m³ bioreactor. Areas for further research from an industrial perspective are also indicated. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. TOWARDS A MULTI-SCALE AGENT-BASED PROGRAMMING LANGUAGE METHODOLOGY.

    Science.gov (United States)

    Somogyi, Endre; Hagar, Amit; Glazier, James A

    2016-12-01

    Living tissues are dynamic, heterogeneous compositions of objects, including molecules, cells and extra-cellular materials, which interact via chemical, mechanical and electrical processes and reorganize via transformation, birth, death and migration processes. Current programming languages have difficulty describing the dynamics of tissues because: 1: Dynamic sets of objects participate simultaneously in multiple processes, 2: Processes may be either continuous or discrete, and their activity may be conditional, 3: Objects and processes form complex, heterogeneous relationships and structures, 4: Objects and processes may be hierarchically composed, 5: Processes may create, destroy and transform objects and processes. Some modeling languages support these concepts, but most cannot translate models into executable simulations. We present a new hybrid executable modeling language paradigm, the Continuous Concurrent Object Process Methodology (CCOPM), which naturally expresses tissue models, enabling users to visually create agent-based models of tissues, and also allows computer simulation of these models.

  12. Rule-Based and Case-Based Reasoning in Housing Prices

    OpenAIRE

    Gabrielle Gayer; Itzhak Gilboa; Offer Lieberman

    2004-01-01

    People reason about real-estate prices both in terms of general rules and in terms of analogies to similar cases. We propose to empirically test which mode of reasoning fits the data better. To this end, we develop the statistical techniques required for the estimation of the case-based model. It is hypothesized that case-based reasoning will have relatively more explanatory power in databases of rental apartments, whereas rule-based reasoning will have a relative advantage in sales data. We ...

  13. Setting up fuel supply strategies for large-scale bio-energy projects using agricultural and forest residues. A methodology for developing countries

    International Nuclear Information System (INIS)

    Junginger, M.

    2000-08-01

    The objective of this paper is to develop a coherent methodology for setting up fuel supply strategies for large-scale biomass-conversion units. This method explicitly takes into account risks and uncertainties regarding availability and costs in relation to time. This paper aims at providing general guidelines, which are not country-specific. These guidelines cannot provide 'perfect fit' solutions, but aim to give general help to overcome barriers and to set up supply strategies. It will mainly focus on residues from the agricultural and forestry sectors. This study focuses on electricity or combined electricity and heat production (CHP) with plant scales between 10 and 40 MWe. This range is chosen due to the rules of economies of scale. In large-scale plants the benefits of increased efficiency outweigh increased transportation costs, allowing a lower price per kWh, which in turn may allow higher biomass costs. However, fuel-supply risks tend to get higher with increasing plant size, which makes it more important to assess them for large(r) conversion plants. Although the methodology does not focus on a specific conversion technology, it should be stressed that the technology must be able to handle a wide variety of biomass fuels with different characteristics, because many biomass residues are not available all year round and various fuels are needed for a constant supply. The methodology allows for comparing different technologies (with known investment and operational and maintenance costs from literature) and evaluating them under different fuel supply scenarios. In order to demonstrate the methodology, a case study was carried out for the north-eastern part of Thailand (Isaan), an agricultural region. The research was conducted in collaboration with the Regional Wood Energy Development Programme in Asia (RWEDP), a project of the UN Food and Agricultural Organization (FAO) in Bangkok, Thailand. In Section 2 of this paper the methodology will be presented. In Section 3 the economic

  14. Thermophysical properties of lignocellulose: a cell-scale study down to 41 K.

    Science.gov (United States)

    Cheng, Zhe; Xu, Zaoli; Zhang, Lei; Wang, Xinwei

    2014-01-01

    Thermal energy transport is of great importance in lignocellulose pyrolysis for biofuels. The thermophysical properties of lignocellulose significantly affect the overall properties of bio-composites and the related thermal transport. In this work, cell-scale lignocellulose (mono-layer plant cells) is prepared to characterize its thermal properties from room temperature down to ∼ 40 K. The thermal conductivities of cell-scale lignocellulose along different directions show slight anisotropy due to the anisotropy of the cell structure. It is found that, as temperature goes down, the volumetric specific heat of the lignocellulose shows a slower decreasing trend against temperature than microcrystalline cellulose, and its value is always higher than that of microcrystalline cellulose. The thermal conductivity of lignocellulose decreases with temperature from 243 K to 317 K due to increasing phonon-phonon scattering. From 41 K to 243 K, the thermal conductivity rises with temperature and its change mainly follows that of the heat capacity.

  15. Thermophysical properties of lignocellulose: a cell-scale study down to 41 K.

    Directory of Open Access Journals (Sweden)

    Zhe Cheng

    Full Text Available Thermal energy transport is of great importance in lignocellulose pyrolysis for biofuels. The thermophysical properties of lignocellulose significantly affect the overall properties of bio-composites and the related thermal transport. In this work, cell-scale lignocellulose (mono-layer plant cells) is prepared to characterize its thermal properties from room temperature down to ∼ 40 K. The thermal conductivities of cell-scale lignocellulose along different directions show slight anisotropy due to the anisotropy of the cell structure. It is found that, as temperature goes down, the volumetric specific heat of the lignocellulose shows a slower decreasing trend against temperature than microcrystalline cellulose, and its value is always higher than that of microcrystalline cellulose. The thermal conductivity of lignocellulose decreases with temperature from 243 K to 317 K due to increasing phonon-phonon scattering. From 41 K to 243 K, the thermal conductivity rises with temperature and its change mainly follows that of the heat capacity.

  16. Sterol synthesis and cell size distribution under oscillatory growth conditions in Saccharomyces cerevisiae scale-down cultivations.

    Science.gov (United States)

    Marbà-Ardébol, Anna-Maria; Bockisch, Anika; Neubauer, Peter; Junne, Stefan

    2018-02-01

    Physiological responses of yeast to oscillatory environments as they appear in the liquid phase in large-scale bioreactors have been the subject of past studies. So far, however, the impact on the sterol content and intracellular regulation remains to be investigated. Since oxygen is a cofactor in several reaction steps within sterol metabolism, changes in oxygen availability, as occur in production-scale aerated bioreactors, might have an influence on the regulation and incorporation of free sterols into the cell lipid layer. Therefore, sterol and fatty acid synthesis in two- and three-compartment scale-down Saccharomyces cerevisiae cultivations were studied and compared with typical values obtained in homogeneous lab-scale cultivations. While cells were exposed to oscillating substrate and oxygen availability in the scale-down cultivations, growth was reduced and accumulation of carboxylic acids was increased. Sterol synthesis to ergosterol was elevated at the same time. The higher fluxes led to increased concentrations of esterified sterols. The cells thus seem to utilize the increased availability of precursors to fill their sterol reservoirs; however, this seems to be limited in the three-compartment reactor cultivation due to a prolonged exposure to oxygen limitation. In addition, a larger heterogeneity within the single-cell size distribution was observed under oscillatory growth conditions with three-dimensional holographic microscopy. Hence the impact of gradients is also observable at the morphological level. The consideration of such a single-cell-based analysis provides useful information about the homogeneity of responses among the population. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Knowledge-based and model-based hybrid methodology for comprehensive waste minimization in electroplating plants

    Science.gov (United States)

    Luo, Keqin

    1999-11-01

    The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which cost the industry tremendously for waste treatment and disposal and hinder the further development of the industry. It has therefore become an urgent need for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize the waste, while production competitiveness can still be maintained. This dissertation aims at developing a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: the heuristic knowledge-based qualitative WM decision analysis and support methodology, and the fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows us to perform a thorough process analysis on bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) systematically use available WM strategies, (ii) know quantitatively and

  18. An overview of bipolar qualitative decision rules

    Science.gov (United States)

    Bonnefon, Jean-Francois; Dubois, Didier; Fargier, Hélène

    Making a good decision is often a matter of listing and comparing positive and negative arguments, as studies in cognitive psychology have shown. In such cases, the evaluation scale should be considered bipolar, that is, negative and positive values are explicitly distinguished. Generally, positive and negative features are evaluated separately, as done in Cumulative Prospect Theory. However, contrary to the latter framework that presupposes genuine numerical assessments, decisions are often made on the basis of an ordinal ranking of the pros and the cons, and focusing on the most salient features, i.e., the decision process is qualitative. In this paper, we report on a project aiming at characterizing several decision rules, based on possibilistic order of magnitude reasoning, and tailored for the joint handling of positive and negative affects, and at testing their empirical validity. The simplest rules can be viewed as extensions of the maximin and maximax criteria to the bipolar case and, like them, suffer from a lack of discrimination power. More decisive rules that refine them are also proposed. They account for both the principle of Pareto-efficiency and the notion of order of magnitude reasoning. The most decisive one uses a lexicographic ranking of the pros and cons. It comes down to a special case of Cumulative Prospect Theory, and subsumes the “Take the best” heuristic.
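    As a rough illustration of a lexicographic bipolar rule of the kind characterized in the project, the Python sketch below compares two options whose pros and cons are ranked by order of magnitude (level 0 = most salient) and decides at the most salient level where their balances differ. The option names, argument levels and the specific tie-handling are illustrative assumptions, not the exact rules axiomatized in the paper.

        def lexi_bipolar_choice(option_a, option_b):
            """Compare two options described by lists of (polarity, level) arguments,
            polarity +1 for a pro and -1 for a con, level 0 = most salient.
            Decide at the most salient level where the pro/con balances differ."""
            def balance_by_level(args, max_level):
                balance = [0] * (max_level + 1)
                for polarity, level in args:
                    balance[level] += polarity
                return balance

            max_level = max(level for _, level in option_a[1] + option_b[1])
            bal_a = balance_by_level(option_a[1], max_level)
            bal_b = balance_by_level(option_b[1], max_level)
            for lvl in range(max_level + 1):
                if bal_a[lvl] != bal_b[lvl]:
                    return option_a[0] if bal_a[lvl] > bal_b[lvl] else option_b[0]
            return "indifferent"

        car_a = ("car_A", [(+1, 0), (-1, 1), (+1, 2)])   # strong pro, medium con, weak pro
        car_b = ("car_B", [(-1, 0), (+1, 1), (+1, 1)])   # strong con, two medium pros
        print(lexi_bipolar_choice(car_a, car_b))          # "car_A": decided at level 0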

  19. A Rule-Based Model for Bankruptcy Prediction Based on an Improved Genetic Ant Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2013-01-01

    Full Text Available In this paper, we propose a hybrid system to predict corporate bankruptcy. The whole procedure consists of the following four stages: first, sequential forward selection was used to extract the most important features; second, a rule-based model was chosen to fit the given dataset since it can present physical meaning; third, a genetic ant colony algorithm (GACA) was introduced; the fitness-scaling strategy and the chaotic operator were incorporated with GACA, forming a new algorithm—fitness-scaling chaotic GACA (FSCGACA), which was used to seek the optimal parameters of the rule-based model; and finally, the stratified K-fold cross-validation technique was used to enhance the generalization of the model. Simulation experiments on 1000 corporations’ data collected from 2006 to 2009 demonstrated that the proposed model was effective. It selected the 5 most important factors as “net income to stockholder’s equity,” “quick ratio,” “retained earnings to total assets,” “stockholders’ equity to total assets,” and “financial expenses to sales.” The total misclassification error of the proposed FSCGACA was only 7.9%, lower than the errors of the genetic algorithm (GA), ant colony algorithm (ACA), and GACA. The average computation time of the model is 2.02 s.
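    The fitness-scaling step mentioned above can be pictured with the short sketch below, which applies a generic rank-based (linear ranking) scaling to raw fitness values before selection. This illustrates fitness scaling in general, under assumed parameter names; the paper's specific scaling strategy and chaotic operator are not reproduced here.

        import numpy as np

        def rank_based_fitness_scaling(raw_fitness, pressure=2.0):
            """Map raw fitness values to selection probabilities via linear ranking:
            the best individual gets weight `pressure`, the worst gets 1."""
            ranks = np.argsort(np.argsort(raw_fitness))        # 0 = worst, n-1 = best
            n = len(raw_fitness)
            weights = 1.0 + (pressure - 1.0) * ranks / (n - 1)
            return weights / weights.sum()

        print(rank_based_fitness_scaling(np.array([0.2, 0.9, 0.5, 0.7])))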

  20. RULE-BASE METHOD FOR ANALYSIS OF QUALITY E-LEARNING IN HIGHER EDUCATION

    Directory of Open Access Journals (Sweden)

    darsih darsih darsih

    2016-04-01

    Full Text Available ABSTRACT Assessing the quality of e-learning courses in order to measure the success of e-learning systems in online learning is essential, and the results can be used to improve education. This study analyzes the quality of e-learning courses on the website www.kulon.undip.ac.id using a questionnaire whose questions are based on the variables of ISO 9126. The assessment used a Likert scale within a web application. A rule-based reasoning method is used to assess the quality of the e-learning courses. A case study was conducted on four e-learning courses with 133 samples/respondents as users of the e-learning courses. The results obtained show good quality values for each e-learning course tested. In addition, each e-learning course has different advantages depending on certain variables. Keywords: E-Learning, Rule-Base, Questionnaire, Likert, Measuring.

  1. Risk-based rules for crane safety systems

    Energy Technology Data Exchange (ETDEWEB)

    Ruud, Stian [Section for Control Systems, DNV Maritime, 1322 Hovik (Norway)], E-mail: Stian.Ruud@dnv.com; Mikkelsen, Age [Section for Lifting Appliances, DNV Maritime, 1322 Hovik (Norway)], E-mail: Age.Mikkelsen@dnv.com

    2008-09-15

    The International Maritime Organisation (IMO) has recommended a method called formal safety assessment (FSA) for future development of rules and regulations. The FSA method has been applied in a pilot research project for development of risk-based rules and functional requirements for systems and components for offshore crane systems. This paper reports some developments in the project. A method for estimating target reliability for the risk-control options (safety functions) by means of the cost/benefit decision criterion has been developed in the project and is presented in this paper. Finally, a structure for risk-based rules is proposed and presented.

  2. Risk-based rules for crane safety systems

    International Nuclear Information System (INIS)

    Ruud, Stian; Mikkelsen, Age

    2008-01-01

    The International Maritime Organisation (IMO) has recommended a method called formal safety assessment (FSA) for future development of rules and regulations. The FSA method has been applied in a pilot research project for development of risk-based rules and functional requirements for systems and components for offshore crane systems. This paper reports some developments in the project. A method for estimating target reliability for the risk-control options (safety functions) by means of the cost/benefit decision criterion has been developed in the project and is presented in this paper. Finally, a structure for risk-based rules is proposed and presented

  3. Derivative-Based Trapezoid Rule for the Riemann-Stieltjes Integral

    Directory of Open Access Journals (Sweden)

    Weijing Zhao

    2014-01-01

    Full Text Available The derivative-based trapezoid rule for the Riemann-Stieltjes integral is presented, which uses two derivative values at the endpoints. This kind of quadrature rule obtains an increase of two orders of precision over the trapezoid rule for the Riemann-Stieltjes integral, and the error term is investigated. Finally, the rationality of the generalization of the derivative-based trapezoid rule to the Riemann-Stieltjes integral is demonstrated.
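    For orientation, the familiar special case of such a rule for the ordinary Riemann integral (integrator g(x) = x) is the corrected trapezoid rule shown below; the paper's rule generalizes this idea to integrals of the form ∫ f dg, and its exact coefficients are not reproduced here.

        \int_a^b f(x)\,dx \;\approx\; \frac{b-a}{2}\bigl[f(a)+f(b)\bigr]
          \;-\; \frac{(b-a)^2}{12}\bigl[f'(b)-f'(a)\bigr],
        \qquad E \;=\; \frac{(b-a)^5}{720}\,f^{(4)}(\xi),\quad \xi\in(a,b),

    which indeed gains two orders of accuracy over the plain trapezoid rule, whose error term is -\frac{(b-a)^3}{12}\,f''(\xi).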

  4. Rule-based energy management strategies for hybrid vehicles

    NARCIS (Netherlands)

    Hofman, T.; Druten, van R.M.; Serrarens, A.F.A.; Steinbuch, M.

    2007-01-01

    The highest control layer of a (hybrid) vehicular drive train is termed the Energy Management Strategy (EMS). In this paper an overview of different control methods is given and a new rule-based EMS is introduced based on the combination of Rule-Based

  5. Direct Down-scale Experiments of Concentration Column Designs for SHINE Process

    Energy Technology Data Exchange (ETDEWEB)

    Youker, Amanda J. [Argonne National Lab. (ANL), Argonne, IL (United States); Stepinski, Dominique C. [Argonne National Lab. (ANL), Argonne, IL (United States); Vandegrift, George F. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-05-01

    Argonne is assisting SHINE Medical Technologies in its efforts to become a domestic Mo-99 producer. The SHINE accelerator-driven process uses a uranyl-sulfate target solution for the production of fission-product Mo-99. Argonne has developed a molybdenum recovery and purification process for this target solution. The process includes an initial Mo recovery column followed by a concentration column to reduce the product volume from 15-25 L to < 1 L prior to entry into the LEU Modified Cintichem (LMC) process for purification [1]. This report discusses direct down-scale experiments of the plant-scale concentration column design, in which the effects of loading velocity and temperature were investigated.

  6. Fuzzy Clustering based Methodology for Multidimensional Data Analysis in Computational Forensic Domain

    OpenAIRE

    Kilian Stoffel; Paul Cotofrei; Dong Han

    2012-01-01

    As an interdisciplinary domain requiring advanced and innovative methodologies, computational forensics is characterized by data that are simultaneously large-scale and uncertain, multidimensional and approximate. Forensic domain experts, trained to discover hidden patterns in crime data, are limited in their analysis without the assistance of a computational intelligence approach. In this paper, a methodology and an automatic procedure based on fuzzy set theory and designed to infer precis...

  7. Appraisal of thermal premium in green building practice at urban scale. Methodological preview

    Directory of Open Access Journals (Sweden)

    Antonino Barbalace

    2012-03-01

    Full Text Available The objective of this research is to conceive, design and experiment with a strategy at the wide city scale that contributes to facing the global ecological and energy crises of the planet. This should be done through a structural cut-down of fossil fuel consumption in settlements, i.e. by adopting the lowest-cost energy: the energy that is not consumed. According to green building practices, these goals can be achieved mainly through building passivation connected to: application of state-of-the-art sustainable technologies; alternative energy sources; diffusion of zero-mile decentralized solar energy production; and fostering of solar feeding for summer air-conditioning, i.e. one of the most demanding energy consumers in the world. Owners, investors, contractors, occupants and tenants ask for more information and better quantification of the magnitude of energy savings in green buildings, as well as further appraisal of the pay-back period, i.e. the length of time required for the return or re-payment of the additional, larger initial investment. The research seeks to develop a methodology to assess the energy savings in physical and monetary terms and to define in financial terms the pay-back period of the initial investment at a reasonable rate. The proposed methodology has been tested in a first case study set up at the urban level and performed on urban blocks of a Mediterranean medium-size town, including a real-world prototype yard (chantier) that integrates passivation with the adoption of key frontier technologies such as sun chillers and solar cooling.
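    The pay-back appraisal mentioned above boils down to comparing the extra upfront investment of passivation with the yearly energy-cost savings it produces. The sketch below shows the simple (undiscounted) payback and a discounted variant at an assumed rate; all figures are hypothetical and not taken from the case study.

        def simple_payback_years(extra_investment, annual_saving):
            """Undiscounted payback: extra upfront cost / yearly energy-cost saving."""
            return extra_investment / annual_saving

        def discounted_payback_years(extra_investment, annual_saving, rate, horizon=50):
            """First year in which cumulative discounted savings cover the extra
            investment (None if not reached within the horizon)."""
            cumulative = 0.0
            for year in range(1, horizon + 1):
                cumulative += annual_saving / (1 + rate) ** year
                if cumulative >= extra_investment:
                    return year
            return None

        print(simple_payback_years(25_000, 1_800))           # about 13.9 years
        print(discounted_payback_years(25_000, 1_800, 0.03))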

  8. Selecting Tanker Steaming Speeds under Uncertainty: A Rule-Based Bayesian Reasoning Approach

    Directory of Open Access Journals (Sweden)

    N.S.F. Abdul Rahman

    2015-06-01

    Full Text Available In the tanker industry, there are many uncertain conditions that tanker companies have to deal with, for example the global financial crisis and economic recession, increases in bunker fuel prices and global climate change. Such conditions have forced tanker companies to change tanker speeds from full speed to slow speed, extra slow speed and super slow speed. In view of such conditions, the objective of this paper is to present a methodology for determining the vessel speeds of tankers that minimize the cost of the vessels under these conditions. The four levels of vessel speed in the tanker industry will be investigated, incorporating a number of uncertain conditions. This is done by developing a scientific model using a rule-based Bayesian reasoning method. The proposed model has produced 96 rules that can be used as guidance in the decision-making process. Such results help tanker companies to determine the appropriate vessel speed to be used in a dynamic operational environment.

  9. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    Full Text Available The relationship between fault phenomena and fault causes is always nonlinear, which influences the accuracy of fault location, and neural networks are effective in dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of the BP neural network is built and the learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and membership function are also given. Simulation results confirm the effectiveness of this method.

  10. Horizontal and Vertical Rule Bases Method in Fuzzy Controllers

    Directory of Open Access Journals (Sweden)

    Sadegh Aminifar

    2013-01-01

    Full Text Available The concept of horizontal and vertical rule bases is introduced. Using this method enables designers to look for the main behaviors of a system and describe them with greater approximation. The rules which describe the system in the first stage are called the horizontal rule base. In the second stage, the designer modulates the obtained surface by describing the needed changes to the first surface for handling the real behaviors of the system. The rules used in the second stage are called the vertical rule base. The horizontal and vertical rule bases method plays a major role in easing the extraction of the optimum control surface by using far fewer rules than traditional fuzzy systems. This research involves the control of a highly nonlinear system that is difficult to model with classical methods. As a case study for testing the proposed method under real conditions, the designed controller is applied to a steaming room with uncertain data and variable parameters. A comparison between a PID controller, a traditional fuzzy counterpart and our proposed system shows that the proposed system outperforms PID and traditional fuzzy systems in terms of the number of valve switchings and better surface following. The evaluations have been done both with model simulation and DSP implementation.

  11. An XML-Based Manipulation and Query Language for Rule-Based Information

    Science.gov (United States)

    Mansour, Essam; Höpfner, Hagen

    Rules are utilized to assist in the monitoring process that is required in activities such as disease management and customer relationship management. These rules are specified according to application best practices. Most research efforts emphasize the specification and execution of these rules. Few research efforts focus on managing these rules as one object that has a management life-cycle. This paper presents our manipulation and query language, which is developed to facilitate the maintenance of this object during its life-cycle and to query the information contained in it. This language is based on an XML-based model. Furthermore, we evaluate the model and language using a prototype system applied to a clinical case study.

  12. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    The scale effect is one of the most important scientific problems in remote sensing. The scale effect of quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals over an entire series of scales; meanwhile, they face serious parameter-correction issues because of the variation of imaging parameters between different sensors, such as geometric correction, spectral correction, etc. Utilizing a single-sensor image, a fractal methodology was employed to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model the continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for the validation of NDVI. All of this proved that fractal analysis is an effective methodology for studying the scaling of quantitative remote sensing.
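    For reference, NDVI itself is a simple band ratio, and the continuous-scaling question concerns how such a retrieval behaves as pixels are aggregated. The Python sketch below computes NDVI per pixel and compares aggregating the fine-scale NDVI with computing NDVI from aggregated bands; the toy reflectance values are invented and the block averaging is only the simplest possible up-scaling operator, not the fractal model of the paper.

        import numpy as np

        def ndvi(nir, red):
            """NDVI = (NIR - Red) / (NIR + Red), computed per pixel."""
            nir, red = nir.astype(float), red.astype(float)
            return (nir - red) / (nir + red + 1e-10)

        def upscale(image, factor):
            """Aggregate to a coarser resolution by block averaging."""
            h, w = image.shape
            return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

        # toy 4x4 'image': near-infrared and red reflectances
        nir = np.array([[0.45, 0.50, 0.42, 0.48],
                        [0.30, 0.40, 0.35, 0.44],
                        [0.25, 0.33, 0.52, 0.55],
                        [0.28, 0.31, 0.49, 0.51]])
        red = np.full_like(nir, 0.12)

        print(upscale(ndvi(nir, red), 2))              # aggregate the fine-scale NDVI
        print(ndvi(upscale(nir, 2), upscale(red, 2)))  # NDVI at the coarse scale
        # the two generally differ: this discrepancy is the scale effect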

  13. Fuzzy rule-based model for hydropower reservoirs operation

    Energy Technology Data Exchange (ETDEWEB)

    Moeini, R.; Afshar, A.; Afshar, M.H. [School of Civil Engineering, Iran University of Science and Technology, Tehran (Iran, Islamic Republic of)

    2011-02-15

    Real-time hydropower reservoir operation is a continuous decision-making process of determining the water level of a reservoir or the volume of water released from it. Hydropower operation is usually based on operating policies and rules defined and decided upon in strategic planning. This paper presents a fuzzy rule-based model for the operation of hydropower reservoirs. The proposed fuzzy rule-based model presents a set of suitable operating rules for release from the reservoir based on ideal or target storage levels. The model operates on an 'if-then' principle, in which the 'if' is a vector of fuzzy premises and the 'then' is a vector of fuzzy consequences. In this paper, reservoir storage, inflow, and period are used as premises and the release as the consequence. The steps involved in the development of the model include construction of membership functions for the inflow, storage and release, formulation of fuzzy rules, implication, aggregation and defuzzification. The knowledge base required for the formulation of the fuzzy rules is obtained from a stochastic dynamic programming (SDP) model with a steady-state policy. The proposed model is applied to the hydropower operation of the Dez reservoir in Iran and the results are presented and compared with those of the SDP model. The results indicate the ability of the method to solve hydropower reservoir operation problems. (author)
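    To make the 'if-then' structure tangible, the Python sketch below implements a toy Mamdani-style rule base with storage and inflow as premises and release as the consequence, using triangular/shoulder membership functions and weighted-average defuzzification. All membership parameters, rule consequents and units are illustrative assumptions; they are not the SDP-derived knowledge base calibrated for the Dez reservoir.

        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership with feet at a and c and peak at b (a < b < c)."""
            return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

        def left_shoulder(x, b, c):
            """Membership 1 below b, linearly falling to 0 at c."""
            return float(np.clip((c - x) / (c - b), 0.0, 1.0))

        def right_shoulder(x, a, b):
            """Membership 0 below a, linearly rising to 1 at b."""
            return float(np.clip((x - a) / (b - a), 0.0, 1.0))

        def fuzzy_release(storage, inflow):
            """Toy rule base: premises are storage and inflow, consequence is release."""
            s_low = left_shoulder(storage, 20, 50)
            s_med = tri(storage, 20, 50, 80)
            s_high = right_shoulder(storage, 50, 80)
            i_low = left_shoulder(inflow, 10, 40)
            i_high = right_shoulder(inflow, 10, 40)

            rules = [  # (firing strength with min as AND, singleton release consequent)
                (min(s_low,  i_low),   5.0),
                (min(s_low,  i_high), 15.0),
                (min(s_med,  i_low),  20.0),
                (min(s_med,  i_high), 35.0),
                (min(s_high, i_low),  40.0),
                (min(s_high, i_high), 60.0),
            ]
            total = sum(w for w, _ in rules)
            return sum(w * r for w, r in rules) / (total + 1e-12)

        print(fuzzy_release(storage=70.0, inflow=30.0))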

  14. How can activity-based costing methodology be performed as a powerful tool to calculate costs and secure appropriate patient care?

    Science.gov (United States)

    Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong

    2007-04-01

    Previous studies have shown the advantages of using the activity-based costing (ABC) methodology in the health care industry. The potential value of the ABC methodology in health care derives from the more accurate cost calculation it provides compared with traditional step-down costing, and from the potential to evaluate the quality or effectiveness of health care on the basis of health care activities. This project used the ABC methodology to profile the cost structure of inpatients undergoing surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that the ABC methodology was able to calculate costs accurately and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.
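    The contrast between activity-based and step-down costing can be illustrated with the short sketch below, which prices one surgical inpatient episode from its activities and cost-driver rates and compares it with a single-volume-measure overhead allocation. All activity names, rates and quantities are hypothetical, not figures from the hospital in the study.

        # illustrative activity-based costing for one surgical inpatient episode
        activities = {
            #  activity                   (cost-driver rate, driver quantity)
            "pre-op nursing education": (35.0, 1),     # cost per session, sessions
            "operating-theatre time":   (12.0, 150),   # cost per minute, minutes
            "post-op ward care":        (80.0, 4),     # cost per nursing day, days
            "lab tests":                (15.0, 6),     # cost per test, tests
        }
        episode_cost = sum(rate * qty for rate, qty in activities.values())
        print(f"ABC episode cost: {episode_cost:.2f}")

        # contrast with a step-down style estimate: department overhead spread over
        # a single volume measure (bed-days), ignoring which activities occurred
        department_overhead, total_bed_days, patient_bed_days = 600_000.0, 9_000, 4
        step_down_cost = department_overhead / total_bed_days * patient_bed_days
        print(f"Step-down estimate: {step_down_cost:.2f}")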

  15. Derivation of Optimal Operating Rules for Large-scale Reservoir Systems Considering Multiple Trade-off

    Science.gov (United States)

    Zhang, J.; Lei, X.; Liu, P.; Wang, H.; Li, Z.

    2017-12-01

    Flood control operation of multi-reservoir systems, such as parallel and hybrid reservoirs, often suffers from complex interactions and trade-offs among tributaries and the mainstream. The optimization of such systems is computationally intensive due to nonlinear storage curves, numerous constraints and complex hydraulic connections. This paper aims to derive optimal flood control operating rules based on the trade-off among tributaries and the mainstream using a new algorithm known as the weighted non-dominated sorting genetic algorithm II (WNSGA II). WNSGA II can locate the Pareto frontier in the non-dominated region efficiently owing to directed searching by the weighted crowding distance, and the results are compared with those of conventional operating rules (COR) and a single-objective genetic algorithm (GA). The Xijiang river basin in China is selected as a case study, with eight reservoirs and five flood control sections within four tributaries and the mainstream. Furthermore, the effects of inflow uncertainty have been assessed. Results indicate that: (1) WNSGA II locates the non-dominated solutions faster and provides a better Pareto frontier than the traditional non-dominated sorting genetic algorithm II (NSGA II) due to the weighted crowding distance; (2) WNSGA II outperforms COR and GA for flood control in the whole basin; (3) the multi-objective operating rules from WNSGA II deal with inflow uncertainties better than COR. Therefore, WNSGA II can be used to derive stable operating rules for large-scale reservoir systems effectively and efficiently.
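    The crowding distance that the weighted variant builds on is the standard NSGA-II diversity measure sketched below; the optional per-objective weights are included only as a guess at where a weighting could enter, not as the paper's actual WNSGA II formula.

        import numpy as np

        def crowding_distance(objectives, weights=None):
            """Standard NSGA-II crowding distance for a non-dominated set
            (rows = solutions, columns = objectives). `weights` is an assumed,
            optional per-objective multiplier, not the paper's weighting scheme."""
            objectives = np.asarray(objectives, dtype=float)
            n, m = objectives.shape
            weights = np.ones(m) if weights is None else np.asarray(weights, dtype=float)
            dist = np.zeros(n)
            for j in range(m):
                order = np.argsort(objectives[:, j])
                f = objectives[order, j]
                span = f[-1] - f[0]
                dist[order[0]] = dist[order[-1]] = np.inf   # keep boundary solutions
                if span > 0 and n > 2:
                    dist[order[1:-1]] += weights[j] * (f[2:] - f[:-2]) / span
            return dist

        front = [[1.0, 9.0], [2.0, 7.0], [4.0, 4.0], [7.0, 2.0], [9.0, 1.0]]
        print(crowding_distance(front))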

  16. A methodology for the assessment of flood hazards at the regional scale

    Science.gov (United States)

    Gallina, Valentina; Torresan, Silvia; Critto, Andrea; Zabeo, Alex; Semenzin, Elena; Marcomini, Antonio

    2013-04-01

    In recent years, the frequency of water-related disasters has increased, and recent flood events in Europe (e.g. 2002 in Central Europe, 2007 in the UK, 2010 in Italy) caused physical-environmental and socio-economic damage. Specifically, floods are the most threatening water-related disaster affecting humans, their lives and their properties. Within the KULTURisk project (FP7), a Regional Risk Assessment (RRA) methodology is proposed to evaluate the benefits of risk prevention in terms of reduced environmental risks due to floods. The method is based on the KULTURisk framework and allows the identification and prioritization of targets (i.e. people, buildings, infrastructures, agriculture, natural and semi-natural systems, cultural heritage) and areas at risk from floods in the considered region by comparing the baseline scenario (i.e. current state) with alternative scenarios (i.e. where different structural and/or non-structural measures are planned). The RRA methodology is flexible and can be adapted to different case studies (e.g. large rivers, alpine/mountain catchments, urban areas and coastal areas) and spatial scales (e.g. from the large river to the urban scale). The final aim of RRA is to help decision-makers examine the possible environmental risks associated with uncertain future flood hazards and identify which prevention scenario could be the most suitable one. The RRA methodology employs Multi-Criteria Decision Analysis (MCDA) functions in order to integrate stakeholder preferences and expert judgments into the analysis. Moreover, Geographic Information Systems (GIS) are used to manage, process, analyze, and map data to facilitate the analysis and the sharing of information with different experts and stakeholders. In order to characterize flood risks, the proposed methodology integrates the output of hydrodynamic models with the analysis of site-specific bio-geophysical and socio-economic indicators (e.g. slope of the territory, land cover

  17. Benchmarking energy scenarios for China: perspectives from top-down, economic and bottom-up, technical modelling

    DEFF Research Database (Denmark)

    This study uses a soft-linking methodology to harmonise two complex global top-down and bottom-up models with a regional China focus. The baseline follows the GDP and demographic trends of the Shared Socio-economic Pathways (SSP2) scenario, down-scaled for China, while the carbon tax scenario fol......-specific modelling results further. These new sub-regional China features can now be used for a more detailed analysis of China's regional developments in a global context....

  18. Orthogonal search-based rule extraction for modelling the decision to transfuse.

    Science.gov (United States)

    Etchells, T A; Harrison, M J

    2006-04-01

    Data from an audit relating to transfusion decisions during intermediate or major surgery were analysed to determine the strengths of certain factors in the decision-making process. The analysis, using orthogonal search-based rule extraction (OSRE) from a trained neural network, demonstrated that the risk of tissue hypoxia (ROTH) assessed using a 100-mm visual analogue scale, the haemoglobin value (Hb) and the presence or absence of on-going haemorrhage (OGH) were able to reproduce the transfusion decisions with a joint specificity of 0.96, a sensitivity of 0.93 and a positive predictive value of 0.9. The rules indicating transfusion were: 1. ROTH > 32 mm and Hb < … g·l⁻¹; 2. ROTH > 13 mm and Hb < … g·l⁻¹; 3. ROTH > 38 mm, Hb < 102 g·l⁻¹ and OGH; 4. Hb < 78 g·l⁻¹.

  19. Architectural Synthesis of Flow-Based Microfluidic Large-Scale Integration Biochips

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan; Pop, Paul; Madsen, Jan

    2012-01-01

    We propose a top-down architectural synthesis methodology for flow-based biochips. Starting from a given biochemical application and a microfluidic component library, we are interested in synthesizing a biochip architecture, i.e., performing component allocation from the library based on the biochemical application… By combining several microvalves, more complex units, such as micropumps, switches, mixers, and multiplexers, can be built. The manufacturing technology, soft lithography, used for flow-based biochips is advancing faster than Moore's law, resulting in increased architectural complexity. However, … We evaluate the proposed approach by synthesizing architectures for real-life applications as well as synthetic benchmarks.

  20. Analysis of Rules for Islamic Inheritance Law in Indonesia Using Hybrid Rule Based Learning

    Science.gov (United States)

    Khosyi'ah, S.; Irfan, M.; Maylawati, D. S.; Mukhlas, O. S.

    2018-01-01

    Along with the development of human civilization in Indonesia, change and reform of Islamic inheritance law so that it conforms to local conditions and culture cannot be denied. The distribution of inheritance in Indonesia can be done automatically by storing the rules of Islamic inheritance law in an expert system. In this study, we analyze the knowledge of experts in Islamic inheritance in Indonesia and represent it in the form of rules using rule-based Forward Chaining (FC) and Davis-Putnam-Logemann-Loveland (DPLL) algorithms. By hybridizing the FC and DPLL algorithms, the rules of Islamic inheritance law in Indonesia are clearly defined and measured. The rules were conceptually validated by experts in Islamic law and informatics. The results revealed that in general all rules were ready for use in an expert system.
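    Forward chaining, the first of the two techniques named above, can be summarized by the minimal sketch below: rules fire whenever all of their premises are present, and derived facts may in turn trigger further rules. The two sample rules and fact names are a toy fragment invented for illustration, not part of the validated Indonesian inheritance rule base.

        def forward_chaining(rules, facts):
            """Repeatedly fire any rule whose premises are all satisfied
            until no new fact can be derived."""
            facts = set(facts)
            changed = True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if set(premises) <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        rules = [
            (["deceased_left_children"], "children_inherit"),
            (["children_inherit", "wife_survives"], "wife_share_one_eighth"),
        ]
        print(forward_chaining(rules, {"deceased_left_children", "wife_survives"}))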

  1. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    Science.gov (United States)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  2. A Modified Rule of Thumb for Evaluating Scale Reproducibilities Determined by Electronic Computers

    Science.gov (United States)

    Hofmann, Richard J.

    1978-01-01

    The Goodenough technique for determining scale error is compared to the Guttman technique and demonstrated to be more conservative than the Guttman technique. Implications with regard to Guttman's evaluative rule of thumb for evaluating a reproducibility are noted. (Author)

  3. Using Top-down and Bottom-up Costing Approaches in LMICs: The Case for Using Both to Assess the Incremental Costs of New Technologies at Scale.

    Science.gov (United States)

    Cunnama, Lucy; Sinanovic, Edina; Ramma, Lebogang; Foster, Nicola; Berrie, Leigh; Stevens, Wendy; Molapo, Sebaka; Marokane, Puleng; McCarthy, Kerrigan; Churchyard, Gavin; Vassall, Anna

    2016-02-01

    Estimating the incremental costs of scaling up novel technologies in low-income and middle-income countries is a methodologically challenging and substantial empirical undertaking in the absence of routine cost data collection. We demonstrate a best-practice pragmatic approach to estimating the incremental costs of new technologies in low-income and middle-income countries, using the example of costing the scale-up of Xpert Mycobacterium tuberculosis (MTB)/resistance to rifampicin (RIF) testing in South Africa. We estimate costs by applying the two distinct approaches of bottom-up and top-down costing, together with an assessment of processes and capacity. The unit costs measured using the different methods of bottom-up and top-down costing, respectively, are $US16.9 and $US33.5 for Xpert MTB/RIF, and $US6.3 and $US8.5 for microscopy. The incremental cost of Xpert MTB/RIF is estimated to be between $US14.7 and $US17.7. While the average cost of Xpert MTB/RIF was higher than in previous studies using standard methods, the incremental cost of Xpert MTB/RIF was found to be lower. Cost estimates are highly dependent on the method used, so an approach which clearly identifies resource-use data collected from a bottom-up or top-down perspective, together with capacity measurement, is recommended as a pragmatic approach to capture true incremental cost where routine cost data are scarce. © 2016 The Authors. Health Economics published by John Wiley & Sons Ltd.

  4. MODIFIED PATH METHODOLOGY FOR OBTAINING INTERVAL-SCALED POSTURAL ASSESSMENTS OF FARMWORKERS.

    Science.gov (United States)

    Garrison, Emma B; Dropkin, Jonathan; Russell, Rebecca; Jenkins, Paul

    2018-01-29

    Agricultural workers perform tasks that frequently require awkward and extreme postures that are associated with musculoskeletal disorders (MSDs). The PATH (Posture, Activity, Tools, Handling) system currently provides a sound methodology for quantifying workers' exposure to these awkward postures on an ordinal scale of measurement, which places restrictions on the choice of analytic methods. This study reports a modification of the PATH methodology that instead captures these postures as degrees of flexion, an interval-scaled measurement. Rather than making live observations in the field, as in PATH, the postural assessments were performed on photographs using ImageJ photo analysis software. Capturing the postures in photographs permitted more careful measurement of the degrees of flexion. The current PATH methodology requires that the observer in the field be trained in the use of PATH, whereas the single photographer used in this modification requires only sufficient training to maintain the proper camera angle. Ultimately, these interval-scale measurements could be combined with other quantitative measures, such as those produced by electromyograms (EMGs), to provide more sophisticated estimates of future risk for MSDs. Further, these data can provide a baseline from which the effects of interventions designed to reduce hazardous postures can be calculated with greater precision. Copyright© by the American Society of Agricultural Engineers.

  5. Seventh meeting of the Global Alliance to Eliminate Lymphatic Filariasis: reaching the vision by scaling up, scaling down, and reaching out

    Science.gov (United States)

    2014-01-01

    This report summarizes the 7th meeting of the Global Alliance to Eliminate Lymphatic Filariasis (GAELF), Washington DC, November 18–19, 2012. The theme, “A Future Free of Lymphatic Filariasis: Reaching the Vision by Scaling Up, Scaling Down and Reaching Out”, emphasized new strategies and partnerships necessary to reach the 2020 goal of elimination of lymphatic filariasis (LF) as a public-health problem. PMID:24450283

  6. Resonance Self-Shielding Methodologies in SCALE 6

    International Nuclear Information System (INIS)

    Williams, Mark L.

    2011-01-01

    SCALE 6 includes several problem-independent multigroup (MG) libraries that were processed from the evaluated nuclear data file ENDF/B using a generic flux spectrum. The library data must be self-shielded and corrected for problem-specific spectral effects for use in MG neutron transport calculations. SCALE 6 computes problem-dependent MG cross sections through a combination of the conventional Bondarenko shielding-factor method and a deterministic continuous-energy (CE) calculation of the fine-structure spectra in the resolved resonance and thermal energy ranges. The CE calculation can be performed using an infinite-medium approximation, a simplified two-region method for lattices, or a one-dimensional discrete ordinates transport calculation with pointwise (PW) cross-section data. This paper describes the SCALE resonance self-shielding methodologies, including the deterministic calculation of the CE flux spectra using PW nuclear data and the method for using CE spectra to produce problem-specific MG cross sections for various configurations (including doubly heterogeneous lattices). It also presents results of verification and validation studies.
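    In compact and generic notation, the Bondarenko shielding-factor idea referenced above weights the pointwise cross section with a narrow-resonance-type flux parametrized by a background cross section σ0, and expresses the self-shielded group data as a factor applied to the infinitely dilute value; the exact implementation in SCALE 6 differs in detail and is not reproduced here.

        \sigma_{x,g}(\sigma_0,T) \;=\;
          \frac{\int_{\Delta E_g} \sigma_x(E,T)\,\varphi(E)\,dE}
               {\int_{\Delta E_g} \varphi(E)\,dE},
        \qquad
        \varphi(E) \;\propto\; \frac{C(E)}{\sigma_t(E,T)+\sigma_0},
        \qquad
        f_{x,g}(\sigma_0,T) \;=\; \frac{\sigma_{x,g}(\sigma_0,T)}{\sigma_{x,g}(\infty,T)} .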

  7. A scaling study of the natural circulation flow of the ex-vessel core catcher cooling system of a 1400MW PWR for designing a scale-down test facility

    International Nuclear Information System (INIS)

    Rhee, Bo. W.; Ha, K. S.; Park, R. J.; Song, J. H.

    2012-01-01

    A scaling study of the steady-state natural circulation flow along the flow path of the ex-vessel core catcher cooling system of a 1400 MWe PWR is described. The scaling criteria for reproducing in the scale-down test facility the same thermal-hydraulic characteristics of the natural circulation flow as in the prototype core catcher cooling system are derived, and the resulting natural circulation flow characteristics of the prototype and the scale-down facility are analyzed and compared. The purpose of this study is to apply the similarity law to the prototype EU-APR1400 core catcher cooling system and the model test facility of this prototype system, and to derive a relationship between the heating channel characteristics and the down-comer piping characteristics so as to determine the down-comer pipe size and the orifice size of the model test facility. As the geometry and the heating-wall heat flux of the heating channel of the model test facility will be the same as those of the prototype core catcher cooling system, except that the width of the heating channel is reduced, the axial distribution of the coolant quality (or void fraction) is expected to be similar in the prototype and the model facility. Using this fact, the down-comer piping design characteristics of the model facility can be determined from the relationship derived from the similarity law

  8. A methodology of SiP testing based on boundary scan

    Science.gov (United States)

    Qin, He; Quan, Haiyang; Han, Yifei; Zhu, Tianrui; Zheng, Tuo

    2017-10-01

    System in Package (SiP) plays an important role in portable, aerospace and military electronics owing to its microminiaturization, light weight, high density, and high reliability. At present, SiP system testing faces problems of system complexity and malfunction location as the system scale increases exponentially. For SiP systems, this paper proposes a testing methodology and testing process based on boundary scan technology. Combining the characteristics of SiP systems with the boundary scan theory of PCB circuits and embedded core test, a specific testing methodology and process is proposed. The hardware requirements for the SiP system under test are provided, and the hardware platform for the testing has been constructed. The testing methodology offers high test efficiency and accurate malfunction location.

  9. Connecting clinical and actuarial prediction with rule-based methods.

    Science.gov (United States)

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main-effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main-effect models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
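    RuleFit itself is not sketched here, but the flavour of extracting a handful of sequential decision rules from a few cues can be shown with a shallow decision tree, as in the stand-in sketch below; the data are synthetic and the feature names are invented.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier, export_text

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 4))                        # four hypothetical cues
        y = ((X[:, 0] > 0.5) & (X[:, 2] < 0.0)).astype(int)  # outcome driven by two cues

        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        # print the tree as simple, sequentially applicable decision rules
        print(export_text(tree, feature_names=["cue1", "cue2", "cue3", "cue4"]))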

  10. A top-down system engineering approach as an alternative to the traditional over-the-bench methodology for the design of a building

    NARCIS (Netherlands)

    Bluyssen, P.M.; Oostra, M.A.R.; Böhms, H.M.

    2010-01-01

    The traditional over-the-bench methodology for the design of a building is resulting in failure costs, failures at delivery and severe problems. An interactive top-down approach is required, both for the communication process necessary to facilitate the design, construction, maintenance and

  11. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). In contrast, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  12. Down image recognition based on deep convolutional neural network

    Directory of Open Access Journals (Sweden)

    Wenzhu Yang

    2018-06-01

    Full Text Available Due to the scale and the various shapes of down in the image, it is difficult for traditional image recognition methods to correctly recognize the type of down image and reach the required recognition accuracy, even for a Traditional Convolutional Neural Network (TCNN). To deal with these problems, a Deep Convolutional Neural Network (DCNN) for down image classification is constructed, and a new weight initialization method is proposed. Firstly, the salient regions of a down image are cut from the image using a visual saliency model. Then, these salient regions are used to train a sparse autoencoder and obtain a collection of convolutional filters that accord with the statistical characteristics of the dataset. Finally, a DCNN with Inception modules and their variants is constructed. To improve the recognition accuracy, the depth of the network is increased. The experimental results indicate that the constructed DCNN increases the recognition accuracy by 2.7% compared to the TCNN when recognizing down in images. The convergence rate of the proposed DCNN with the new weight initialization method is improved by 25.5% compared to the TCNN. Keywords: Deep convolutional neural network, Weight initialization, Sparse autoencoder, Visual saliency model, Image recognition

  13. Hybrid tree-rule firewall for high speed data transmission

    NARCIS (Netherlands)

    Chomsiri, Thawatchai; He, Xiangjian; Nanda, Priyadarsi; Tan, Zhiyuan

    2017-01-01

    Traditional firewalls employ listed rules in both configuration and process phases to regulate network traffic. However, configuring a firewall with listed rules may create rule conflicts, and slows down the firewall. To overcome this problem, we have proposed a Tree-rule firewall in our previous

  14. Usage-based vs. rule-based learning: the acquisition of word order in wh-questions in English and Norwegian.

    Science.gov (United States)

    Westergaard, Marit

    2009-11-01

    This paper discusses different approaches to language acquisition in relation to children's acquisition of word order in wh-questions in English and Norwegian. While generative models assert that children set major word order parameters and thus acquire a rule of subject-auxiliary inversion or generalized verb second (V2) at an early stage, some constructivist work argues that English-speaking children are simply reproducing frequent wh-word+auxiliary combinations in the input. The paper questions both approaches, re-evaluates some previous work, and provides some further data, concluding that the acquisition of wh-questions must be the result of a rule-based process. Based on variation in adult grammars, a cue-based model to language acquisition is presented, according to which children are sensitive to minor cues in the input, called micro-cues. V2 is not considered to be one major parameter, but several smaller-scale cues, which are responsible for children's lack of syntactic (over-)generalization in the acquisition process.

  15. Prevalence of autism and attention-deficit-hyperactivity disorder in Down syndrome: a population-based study.

    Science.gov (United States)

    Oxelgren, Ulrika Wester; Myrelid, Åsa; Annerén, Göran; Ekstam, Bodil; Göransson, Cathrine; Holmbom, Agneta; Isaksson, Anne; Åberg, Marie; Gustafsson, Jan; Fernell, Elisabeth

    2017-03-01

    To investigate the prevalence of autism spectrum disorder (ASD) and attention-deficit-hyperactivity disorder (ADHD) in a population-based group of children and adolescents with Down syndrome, and to relate the findings to level of intellectual disability and to medical conditions. From a population-based cohort of 60 children and adolescents with Down syndrome, 41 individuals (29 males, 12 females; mean age 11y, age range 5-17y) for whom parents gave consent for participation were clinically assessed with regard to ASD and ADHD. The main instruments used were the Autism Diagnostic Interview-Revised, Autism Diagnostic Observation Schedule, Swanson, Nolan, and Pelham-IV Rating Scale, and the Adaptive Behavior Assessment System-II. High rates of ASD and ADHD were found: 17 (42%) and 14 (34%) of the 41 children met DSM criteria for ASD and ADHD respectively. Children with Down syndrome and coexisting neurodevelopmental/neuropsychiatric disorders in addition to intellectual disability and medical disorders constitute a severely disabled group. Based on the results, we suggest that screening is implemented for both ASD and ADHD, at the age of 3 to 5 years and early school years respectively, to make adequate interventions possible. © 2016 Mac Keith Press.

  16. Online Dispatching Rules For Vehicle-Based Internal Transport Systems

    NARCIS (Netherlands)

    T. Le-Anh (Tuan); M.B.M. de Koster (René)

    2004-01-01

    On-line vehicle dispatching rules are widely used in many facilities such as warehouses to control vehicles' movements. Single-attribute dispatching rules, which dispatch vehicles based on only one parameter, are commonly used. However, multi-attribute dispatching rules prove to be

  17. WellnessRules: A Web 3.0 Case Study in RuleML-Based Prolog-N3 Profile Interoperation

    Science.gov (United States)

    Boley, Harold; Osmun, Taylor Michael; Craig, Benjamin Larry

    An interoperation study, WellnessRules, is described, where rules about wellness opportunities are created by participants in rule languages such as Prolog and N3, and translated within a wellness community using RuleML/XML. The wellness rules are centered around participants, as profiles, encoding knowledge about their activities conditional on the season, the time-of-day, the weather, etc. This distributed knowledge base extends FOAF profiles with a vocabulary and rules about wellness group networking. The communication between participants is organized through Rule Responder, permitting wellness-profile translation and distributed querying across engines. WellnessRules interoperates between rules and queries in the relational (Datalog) paradigm of the pure-Prolog subset of POSL and in the frame (F-logic) paradigm of N3. An evaluation of Rule Responder instantiated for WellnessRules revealed acceptable Web response times.

  18. Personalization of Rule-based Web Services.

    Science.gov (United States)

    Choi, Okkyung; Han, Sang Yong

    2008-04-04

    Nowadays Web users have clearly expressed their wish to receive personalized services directly. Personalization is the way to tailor services directly to the immediate requirements of the user. However, the current Web Services System does not provide features supporting this, such as personalization of services and intelligent matchmaking. This research proposes a flexible, personalized Rule-based Web Services System that addresses these problems and enables efficient search, discovery and construction across general Web documents and Semantic Web documents. The system performs matchmaking among service requesters', service providers' and users' preferences using a Rule-based Search Method, and subsequently ranks the search results. A prototype of efficient Web Services search and construction for the suggested system is developed based on the current work.

  19. Prediction and verification of centrifugal dewatering of P. pastoris fermentation cultures using an ultra scale-down approach.

    Science.gov (United States)

    Lopes, A G; Keshavarz-Moore, E

    2012-08-01

    Recent years have seen a dramatic rise in fermentation broth cell densities and a shift to extracellular product expression in microbial cells. As a result, dewatering characteristics during cell separation are of importance, as any liquor trapped in the sediment results in loss of product and thus a decrease in product recovery. In this study, an ultra scale-down (USD) approach was developed to enable the rapid assessment of the dewatering performance of pilot-scale centrifuges with intermittent solids discharge. The results were then verified at scale for two types of pilot-scale centrifuges: a tubular bowl equipment and a disk-stack centrifuge. Initial experiments showed that employing a laboratory-scale centrifugal mimic based on using a feed concentration comparable to that of the pilot-scale centrifuge does not successfully predict the dewatering performance at scale (P-value centrifuge. Initial experiments used Baker's yeast feed suspensions followed by fresh Pichia pastoris fermentation cultures. This work presents a simple and novel USD approach to predict dewatering levels in two types of pilot-scale centrifuges using small quantities of feedstock (centrifuge needs to be operated, reducing the need for repeated pilot-scale runs during early stages of process development. Copyright © 2012 Wiley Periodicals, Inc.

  20. A modified GO-FLOW methodology with common cause failure based on Discrete Time Bayesian Network

    International Nuclear Information System (INIS)

    Fan, Dongming; Wang, Zili; Liu, Linlin; Ren, Yi

    2016-01-01

    Highlights: • Identification of particular causes of failure for common cause failure analysis. • Comparison of two formalisms (GO-FLOW and Discrete Time Bayesian Network) and establishment of the correlation between them. • Mapping of the GO-FLOW model into a Bayesian network model. • Calculation of GO-FLOW models with common cause failures based on DTBN. - Abstract: The GO-FLOW methodology is a success-oriented system reliability modelling technique for multi-phase missions involving complex time-dependent, multi-state and common cause failure (CCF) features. However, the analysis algorithm cannot easily handle multiple shared signals and CCFs. In addition, the simulative algorithm is time consuming when vast numbers of multi-state components exist in the model, and the multiple time points of phased-mission problems increase the difficulty of the analysis method. In this paper, the Discrete Time Bayesian Network (DTBN) and the GO-FLOW methodology are integrated by unified mapping rules. Based on these rules, the multi operators can be mapped into the DTBN, and then a complete GO-FLOW model with complex characteristics (e.g. phased mission, multi-state, and CCF) can be converted into the isomorphic DTBN and easily analyzed by utilizing the DTBN. With mature algorithms and tools, the multi-phase mission reliability parameters can be efficiently obtained via the proposed approach without considering the shared signals and the various complex logic operations. Meanwhile, CCF can also arise in the computing process.

  1. Adopted Methodology for Cool-Down of SST-1 Superconducting Magnet System: Operational Experience with the Helium Refrigerator

    Science.gov (United States)

    Sahu, A. K.; Sarkar, B.; Panchal, P.; Tank, J.; Bhattacharya, R.; Panchal, R.; Tanna, V. L.; Patel, R.; Shukla, P.; Patel, J. C.; Singh, M.; Sonara, D.; Sharma, R.; Duggar, R.; Saxena, Y. C.

    2008-03-01

    The 1.3 kW at 4.5 K helium refrigerator/liquefier (HRL) was commissioned during the year 2003. The HRL was operated in its different modes as per the functional requirements of the experiments. The superconducting magnet system (SCMS) of SST-1 was successfully cooled down to 4.5 K. The actual loads were different from the originally predicted boundary conditions, and an adjustment of the thermodynamic balance of the refrigerator was necessary. This led to an enhanced capacity, which was achieved without any additional hardware. The required control system for the HRL was tuned to achieve a stable thermodynamic balance while keeping the turbines' operating parameters at optimized conditions. An extra mass flow rate requirement was met by exploiting the margin available with the compressor station. The methodology adopted to modify the capacity of the HRL, the safety precautions, and the experience of the SCMS cool-down to 4.5 K are discussed.

  2. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    Energy Technology Data Exchange (ETDEWEB)

    Vismari, Lucio Flavio, E-mail: lucio.vismari@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil); Batista Camargo Junior, Joao, E-mail: joaocamargo@usp.b [Safety Analysis Group (GAS), School of Engineering at University of Sao Paulo (Poli-USP), Av. Prof. Luciano Gualberto, Trav.3, n.158, Predio da Engenharia de Eletricidade, Sala C2-32, CEP 05508-900, Sao Paulo (Brazil)

    2011-07-15

    In the last decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, uses Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of the proposed (in analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the 'Automatic Dependent Surveillance-Broadcasting' (ADS-B) based air traffic control system. In conclusion, the proposed methodology proved able to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete event simulation allowing the estimation of the desired safety metric.

  3. A safety assessment methodology applied to CNS/ATM-based air traffic control system

    International Nuclear Information System (INIS)

    Vismari, Lucio Flavio; Batista Camargo Junior, Joao

    2011-01-01

    In the last decades, the air traffic system has been changing to adapt itself to new social demands, mainly the safe growth of worldwide traffic capacity. Those changes are ruled by the Communication, Navigation, Surveillance/Air Traffic Management (CNS/ATM) paradigm, based on digital communication technologies (mainly satellites) as a way of improving communication, surveillance, navigation and air traffic management services. However, CNS/ATM poses new challenges and needs, mainly related to the safety assessment process. In face of these new challenges, and considering the main characteristics of CNS/ATM, a methodology is proposed in this work that combines the 'absolute' and 'relative' safety assessment methods adopted by the International Civil Aviation Organization (ICAO) in ICAO Doc. 9689, uses Fluid Stochastic Petri Nets (FSPN) as the modeling formalism, and compares the safety metrics estimated from the simulation of the proposed (in analysis) and the legacy system models. To demonstrate its usefulness, the proposed methodology was applied to the 'Automatic Dependent Surveillance-Broadcasting' (ADS-B) based air traffic control system. In conclusion, the proposed methodology proved able to assess CNS/ATM system safety properties, with the FSPN formalism providing important modeling capabilities and discrete event simulation allowing the estimation of the desired safety metric.

  4. Knowledge discovery about quality of life changes of spinal cord injury patients: clustering based on rules by states.

    Science.gov (United States)

    Gibert, Karina; García-Rudolph, Alejandro; Curcoll, Lluïsa; Soler, Dolors; Pla, Laura; Tormos, José María

    2009-01-01

    In this paper, an integral Knowledge Discovery Methodology, named Clustering based on rules by States, which incorporates artificial intelligence (AI) and statistical methods as well as interpretation-oriented tools, is used for extracting knowledge patterns about the evolution over time of the Quality of Life (QoL) of patients with Spinal Cord Injury. The methodology incorporates interaction with experts as a crucial element of the clustering process to guarantee the usefulness of the results. Four typical patterns are discovered by taking into account prior expert knowledge. Several hypotheses are elaborated about the reasons for psychological distress or decreases in the QoL of patients over time. The knowledge discovery from data (KDD) approach turns out, once again, to be a suitable formal framework for handling the multidimensional complexity of health domains.

  5. A national-scale remote sensing-based methodology for quantifying tidal marsh biomass to support "Blue Carbon" accounting

    Science.gov (United States)

    Byrd, K. B.; Ballanti, L.; Nguyen, D.; Simard, M.; Thomas, N.; Windham-Myers, L.; Castaneda, E.; Kroeger, K. D.; Gonneea, M. E.; O'Keefe Suttles, J.; Megonigal, P.; Troxler, T.; Schile, L. M.; Davis, M.; Woo, I.

    2016-12-01

    According to 2013 IPCC Wetlands Supplement guidelines, tidal marsh Tier 2 or Tier 3 accounting must include aboveground biomass carbon stock changes. To support this need, we are using free satellite and aerial imagery to develop a national scale, consistent remote sensing-based methodology for quantifying tidal marsh aboveground biomass. We are determining the extent to which additional satellite data will increase the accuracy of this "blue carbon" accounting. Working in 6 U.S. estuaries (Cape Cod, MA, Chesapeake Bay, MD, Everglades, FL, Mississippi Delta, LA, San Francisco Bay, CA, and Puget Sound, WA), we built a tidal marsh biomass dataset (n=2404). Landsat reflectance data were matched spatially and temporally with field plots using Google Earth Engine. We quantified percent cover of green vegetation, non-vegetation, and open water in Landsat pixels using segmentation of 1m National Agriculture Imagery Program aerial imagery. Sentinel-1A C-band backscatter data were used in Chesapeake, Mississippi Delta and Puget Sound. We tested multiple Landsat vegetation indices and Sentinel backscatter metrics in 30m scale biomass linear regression models by region. Scaling biomass by fraction green vegetation significantly improved biomass estimation (e.g. Cape Cod: R2 = 0.06 vs. R2 = 0.60, n=28). The best vegetation indices differed by region, though indices based on the shortwave infrared-1 and red bands were most predictive in the Everglades and the Mississippi Delta, while the soil adjusted vegetation index was most predictive in Puget Sound and Chesapeake. Backscatter metrics significantly improved model predictions over vegetation indices alone; consistently across regions, the most significant metric was the range in backscatter values within the green vegetation segment of the Landsat pixel (e.g. Mississippi Delta: R2 = 0.47 vs. R2 = 0.59, n=15). Results support using remote sensing of biomass stock change to estimate greenhouse gas emission factors in tidal
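    A minimal sketch of the kind of pixel-scale regression described, under one plausible reading of "scaling biomass by fraction green vegetation": field-plot biomass is scaled to the whole-pixel footprint by the green-vegetation fraction and fitted against a green-fraction-weighted vegetation index. The synthetic data, the NDVI choice and the exact form of the scaling are assumptions, not the study's models.

```python
# Sketch of a 30 m pixel-scale biomass regression in which plot biomass is
# scaled to the whole-pixel footprint by the fraction of green vegetation
# before fitting. All data below are synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 40
ndvi = rng.uniform(0.2, 0.8, n)             # vegetation index of each pixel
frac_green = rng.uniform(0.3, 1.0, n)       # fraction of green vegetation per pixel
biomass_plot = 900.0 * ndvi + rng.normal(0.0, 60.0, n)   # g/m^2 in the vegetated part

# Scale field biomass to the whole pixel, then fit a simple linear model
# against the green-fraction-weighted vegetation index.
biomass_pixel = biomass_plot * frac_green
X = np.column_stack([np.ones(n), ndvi * frac_green])
coef, *_ = np.linalg.lstsq(X, biomass_pixel, rcond=None)

pred = X @ coef
ss_res = np.sum((biomass_pixel - pred) ** 2)
ss_tot = np.sum((biomass_pixel - biomass_pixel.mean()) ** 2)
print(f"intercept={coef[0]:.1f}, slope={coef[1]:.1f}, R^2={1 - ss_res / ss_tot:.2f}")
```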

  6. Hypothesis testing on the fractal structure of behavioral sequences: the Bayesian assessment of scaling methodology.

    Science.gov (United States)

    Moscoso del Prado Martín, Fermín

    2013-12-01

    I introduce the Bayesian assessment of scaling (BAS), a simple but powerful Bayesian hypothesis contrast methodology that can be used to test hypotheses on the scaling regime exhibited by a sequence of behavioral data. Rather than comparing parametric models, as typically done in previous approaches, the BAS offers a direct, nonparametric way to test whether a time series exhibits fractal scaling. The BAS provides a simpler and faster test than do previous methods, and the code for making the required computations is provided. The method also enables testing of finely specified hypotheses on the scaling indices, something that was not possible with the previously available methods. I then present 4 simulation studies showing that the BAS methodology outperforms the other methods used in the psychological literature. I conclude with a discussion of methodological issues on fractal analyses in experimental psychology. PsycINFO Database Record (c) 2014 APA, all rights reserved.
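    The BAS itself is a Bayesian, nonparametric hypothesis contrast and is not reproduced here. For context, the sketch below estimates the spectral scaling exponent of a behavioral time series via a log-log periodogram fit, i.e., the kind of scaling index that such a test reasons about; the synthetic series and the fitting choices are assumptions.

```python
# Sketch: estimating a spectral scaling exponent beta (S(f) ~ f**-beta) from a
# behavioral time series via a log-log periodogram fit. This is only the kind
# of scaling index a test like the BAS addresses; it is not the BAS itself.
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=4096))   # toy series: a random walk, beta close to 2

spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2
freqs = np.fft.rfftfreq(x.size)
keep = freqs > 0                        # drop the zero-frequency bin
slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(spectrum[keep]), 1)
print(f"estimated scaling exponent beta = {-slope:.2f}")
```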

  7. SCALE6 Hybrid Deterministic-Stochastic Shielding Methodology for PWR Containment Calculations

    International Nuclear Information System (INIS)

    Matijevic, Mario; Pevec, Dubravko; Trontl, Kresimir

    2014-01-01

    The capabilities and limitations of the SCALE6/MAVRIC hybrid deterministic-stochastic shielding methodology (CADIS and FW-CADIS) are demonstrated when applied to a realistic deep-penetration Monte Carlo (MC) shielding problem of a full-scale PWR containment model. The ultimate goal of such automatic variance reduction (VR) techniques is to achieve acceptable precision for the MC simulation in reasonable time by preparing phase-space VR parameters via deterministic transport theory methods (discrete ordinates SN), generating a space-energy mesh-based adjoint function distribution. The hybrid methodology generates VR parameters that work in tandem (biased source distribution and importance map) in an automated fashion, which is a paramount step for MC simulation of complex models with fairly uniform mesh tally uncertainties. The aim in this paper was the determination of the neutron-gamma dose rate distribution (radiation field) over large portions of the PWR containment phase-space with uniform MC uncertainties. The sources of ionizing radiation included fission neutrons and gammas (reactor core) and gammas from the activated two-loop coolant. Special attention was given to a focused adjoint source definition, which gave improved MC statistics in selected materials and/or regions of the complex model. We investigated the benefits and differences of FW-CADIS over CADIS and manual (i.e. analog) MC simulation of particle transport. Computer memory consumption by the deterministic part of the hybrid methodology represents the main obstacle when using meshes with millions of cells together with high SN/PN parameters, so optimization of the control and numerical parameters of the deterministic module plays an important role in computer memory management. We investigated the possibility of using the deterministic module (memory intensive) with the broad-group library v7-27n19g as opposed to the fine-group library v7-200n47g used with the MC module, to fully account for low-energy particle transport and secondary gamma emission. Compared with

  8. Constructing rule-based models using the belief functions framework

    NARCIS (Netherlands)

    Almeida, R.J.; Denoeux, T.; Kaymak, U.; Greco, S.; Bouchon-Meunier, B.; Coletti, G.; Fedrizzi, M.; Matarazzo, B.; Yager, R.R.

    2012-01-01

    Abstract. We study a new approach to regression analysis. We propose a new rule-based regression model using the theoretical framework of belief functions. For this purpose we use the recently proposed Evidential c-means (ECM) to derive rule-based models solely from data. ECM allocates, for each

  9. A Fuzzy Rule-based Controller For Automotive Vehicle Guidance

    OpenAIRE

    Hessburg, Thomas; Tomizuka, Masayoshi

    1991-01-01

    A fuzzy rule-based controller is applied to lateral guidance of a vehicle for an automated highway system. The fuzzy rules, based on human drivers' experiences, are developed to track the center of a lane in the presence of external disturbances and over a range of vehicle operating conditions.

  10. Scaling to Nanotechnology Limits with the PIMS Computer Architecture and a new Scaling Rule

    Energy Technology Data Exchange (ETDEWEB)

    Debenedictis, Erik P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    We describe a new approach to computing that moves towards the limits of nanotechnology using a newly formulated scaling rule. This is in contrast to the current computer industry scaling away from von Neumann's original computer at the rate of Moore's Law. We extend Moore's Law to 3D, which leads generally to architectures that integrate logic and memory. To keep power dissipation constant through a 2D surface of the 3D structure requires using adiabatic principles. We call our newly proposed architecture Processor In Memory and Storage (PIMS). We propose a new computational model that integrates processing and memory into "tiles" that comprise logic, memory/storage, and communications functions. Since the programming model will be relatively stable as a system scales, programs represented by tiles could be executed in a PIMS system built with today's technology or could become the "schematic diagram" for implementation in an ultimate 3D nanotechnology of the future. We build a systems software approach that offers advantages over and above the technological and architectural advantages. First, the algorithms may be more efficient in the conventional sense of having fewer steps. Second, the algorithms may run with higher power efficiency per operation by being a better match for the adiabatic scaling rule. The performance analysis based on demonstrated ideas in physical science suggests 80,000 x improvement in cost per operation for the (arguably) general purpose function of emulating neurons in Deep Learning.

  11. Reconciling Basin-Scale Top-Down and Bottom-Up Methane Emission Measurements for Onshore Oil and Gas Development: Cooperative Research and Development Final Report, CRADA Number CRD-14-572

    Energy Technology Data Exchange (ETDEWEB)

    Heath, Garvin A. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-12-04

    The overall objective of the Research Partnership to Secure Energy for America (RPSEA)-funded research project is to develop independent estimates of methane emissions using top-down and bottom-up measurement approaches and then to compare the estimates, including consideration of uncertainty. Such approaches will be applied at two scales: basin and facility. At facility scale, multiple methods will be used to measure methane emissions of the whole facility (controlled dual tracer and single tracer releases, aircraft-based mass balance and Gaussian back-trajectory), which are considered top-down approaches. The bottom-up approach will sum emissions from identified point sources measured using appropriate source-level measurement techniques (e.g., high-flow meters). At basin scale, the top-down estimate will come from boundary layer airborne measurements upwind and downwind of the basin, using a regional mass balance model plus approaches to separate atmospheric methane emissions attributed to the oil and gas sector. The bottom-up estimate will result from statistical modeling (also known as scaling up) of measurements made at selected facilities, with gaps filled through measurements and other estimates based on other studies. The relative comparison of the bottom-up and top-down estimates made at both scales will help improve understanding of the accuracy of the tested measurement and modeling approaches. The subject of this CRADA is NREL's contribution to the overall project. This project resulted from winning a competitive solicitation no. RPSEA RFP2012UN001, proposal no. 12122-95, which is the basis for the overall project. This Joint Work Statement (JWS) details the contributions of NREL and Colorado School of Mines (CSM) in performance of the CRADA effort.

  12. Multimodal hybrid reasoning methodology for personalized wellbeing services.

    Science.gov (United States)

    Ali, Rahman; Afzal, Muhammad; Hussain, Maqbool; Ali, Maqbool; Siddiqi, Muhammad Hameed; Lee, Sungyoung; Ho Kang, Byeong

    2016-02-01

    A wellness system provides wellbeing recommendations to support experts in promoting a healthier lifestyle and inducing individuals to adopt healthy habits. Adopting physical activity effectively promotes a healthier lifestyle. A physical activity recommendation system assists users to adopt daily routines to form a best practice of life by involving themselves in healthy physical activities. Traditional physical activity recommendation systems focus on general recommendations applicable to a community of users rather than specific individuals. These recommendations are general in nature and are fit for the community at a certain level, but they are not relevant to every individual based on specific requirements and personal interests. To cover this aspect, we propose a multimodal hybrid reasoning methodology (HRM) that generates personalized physical activity recommendations according to the user's specific needs and personal interests. The methodology integrates the rule-based reasoning (RBR), case-based reasoning (CBR), and preference-based reasoning (PBR) approaches in a linear combination that enables personalization of recommendations. RBR uses explicit knowledge rules from physical activity guidelines, CBR uses implicit knowledge from experts' past experiences, and PBR uses users' personal interests and preferences. To validate the methodology, a weight management scenario is considered and experimented with. The RBR part of the methodology generates goal, weight status, and plan recommendations, the CBR part suggests the top three relevant physical activities for executing the recommended plan, and the PBR part filters out irrelevant recommendations from the suggested ones using the user's personal preferences and interests. To evaluate the methodology, a baseline-RBR system is developed, which is improved first using ranged rules and ultimately using a hybrid-CBR. A comparison of the results of these systems shows that hybrid-CBR outperforms the
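    A minimal sketch of combining rule-based, case-based and preference-based scores linearly to rank candidate physical activities. The weights, candidate activities and scores are illustrative assumptions; in the cited methodology the three reasoners are chained (RBR produces the plan, CBR suggests activities, PBR filters them) rather than simply summed.

```python
# Sketch of linearly combining rule-based (RBR), case-based (CBR) and
# preference-based (PBR) scores for candidate activities. All weights,
# activities and scores are illustrative assumptions.
WEIGHTS = {"rbr": 0.4, "cbr": 0.4, "pbr": 0.2}

candidates = {
    # activity: (guideline fit, similarity to past successful cases, user preference)
    "brisk_walking":  (0.9, 0.8, 0.7),
    "swimming":       (0.8, 0.6, 0.9),
    "weight_lifting": (0.5, 0.7, 0.3),
}

def combined_score(scores):
    rbr, cbr, pbr = scores
    return WEIGHTS["rbr"] * rbr + WEIGHTS["cbr"] * cbr + WEIGHTS["pbr"] * pbr

ranked = sorted(candidates, key=lambda a: combined_score(candidates[a]), reverse=True)
print(ranked)  # activities ordered from most to least recommended
```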

  13. Combination Rules for Morse-Based van der Waals Force Fields.

    Science.gov (United States)

    Yang, Li; Sun, Lei; Deng, Wei-Qiao

    2018-02-15

    In traditional force fields (FFs), van der Waals (VDW) interactions have usually been described by Lennard-Jones potentials, and conventional combination rules for the parameters of VDW cross-term interactions were developed for Lennard-Jones based FFs. Here, we report that the Morse potential is a better function for describing VDW interactions calculated by highly precise quantum mechanics methods. A new set of combination rules was developed for Morse-based FFs, in which VDW interactions are described by Morse potentials. The new set of combination rules has been verified by comparing the second virial coefficients of 11 noble gas mixtures. For all of the mixed binaries considered in this work, the combination rules work very well and are superior to the three other existing sets of combination rules reported in the literature. We further used the Morse-based FF with the new combination rules to simulate the adsorption isotherms of CH4 at 298 K in four covalent-organic frameworks (COFs). The overall agreement is very good, which supports further applications of this new set of combination rules in more realistic simulation systems.
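    For reference, the Morse pair potential has the form V(r) = D_e[(1 - e^{-a(r - r_e)})^2 - 1], with well depth D_e, width parameter a and equilibrium separation r_e. The sketch below evaluates it and applies one simple, commonly assumed set of combination rules for unlike pairs; the paper's newly proposed rules are not reproduced here, and the parameter values are approximate and for illustration only.

```python
# Sketch of a Morse pair potential and one simple, commonly assumed set of
# combination rules for unlike pairs (geometric mean for the well depth and
# the width parameter, arithmetic mean for the equilibrium separation).
# The new rules proposed in the cited paper are NOT reproduced here.
import math

def morse(r, d_e, a, r_e):
    """Morse potential: V(r) = D_e * ((1 - exp(-a*(r - r_e)))**2 - 1)."""
    return d_e * ((1.0 - math.exp(-a * (r - r_e))) ** 2 - 1.0)

def combine(p1, p2):
    """Illustrative combination rules for an unlike pair of (D_e, a, r_e)."""
    d1, a1, re1 = p1
    d2, a2, re2 = p2
    return math.sqrt(d1 * d2), math.sqrt(a1 * a2), 0.5 * (re1 + re2)

# Approximate, illustrative noble-gas parameters: (D_e / kJ/mol, a / 1/A, r_e / A)
argon = (1.0, 1.7, 3.76)
krypton = (1.4, 1.6, 4.01)

d_e, a, r_e = combine(argon, krypton)
print(f"Ar-Kr: D_e={d_e:.3f} kJ/mol, a={a:.3f} 1/A, r_e={r_e:.3f} A")
print(f"V(r_e) = {morse(r_e, d_e, a, r_e):.3f} kJ/mol (equals -D_e at the minimum)")
```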

  14. Transferring Codified Knowledge: Socio-Technical versus Top-Down Approaches

    Science.gov (United States)

    Guzman, Gustavo; Trivelato, Luiz F.

    2008-01-01

    Purpose: This paper aims to analyse and evaluate the transfer process of codified knowledge (CK) performed under two different approaches: the "socio-technical" and the "top-down". It is argued that the socio-technical approach supports the transfer of CK better than the top-down approach. Design/methodology/approach: Case study methodology was…

  15. Scaling up from the grassroots and the top down: The impacts of multi-level governance on community forestry in Durango, Mexico

    Directory of Open Access Journals (Sweden)

    Gustavo A García-López

    2013-08-01

    Full Text Available This paper analyzes the local-level impacts of cross-scale linkages in Mexican community forestry by evaluating the operation of four inter-community forest associations (FAs). Based on one year of fieldwork in Durango, Mexico, the paper focuses on two inter-related issues: (1) the services that each association provides to its member communities and how they impact forest management and the development of communities' forestry enterprises, and (2) the differences in services and impacts between top-down and bottom-up FAs. The findings show that FAs, as a form of cross-scale linkage, can be crucial for the provision of services, goods and infrastructure related to the protection and enhancement of community forests, the economic development of community enterprises, and the political representation of these communities. At the same time, the study finds important differences between top-down and bottom-up FAs, while pointing to some of the disadvantages of each type of linkage.

  16. Organizational Knowledge Transfer Using Ontologies and a Rule-Based System

    Science.gov (United States)

    Okabe, Masao; Yoshioka, Akiko; Kobayashi, Keido; Yamaguchi, Takahira

    In recent automated and integrated manufacturing, so-called intelligence skill is becoming more and more important and its efficient transfer to next-generation engineers is one of the urgent issues. In this paper, we propose a new approach without costly OJT (on-the-job training), that is, combinational usage of a domain ontology, a rule ontology and a rule-based system. Intelligence skill can be decomposed into pieces of simple engineering rules. A rule ontology consists of these engineering rules as primitives and the semantic relations among them. A domain ontology consists of technical terms in the engineering rules and the semantic relations among them. A rule ontology helps novices get the total picture of the intelligence skill and a domain ontology helps them understand the exact meanings of the engineering rules. A rule-based system helps domain experts externalize their tacit intelligence skill to ontologies and also helps novices internalize them. As a case study, we applied our proposal to some actual job at a remote control and maintenance office of hydroelectric power stations in Tokyo Electric Power Co., Inc. We also did an evaluation experiment for this case study and the result supports our proposal.

  17. An investigation of care-based vs. rule-based morality in frontotemporal dementia, Alzheimer's disease, and healthy controls.

    Science.gov (United States)

    Carr, Andrew R; Paholpak, Pongsatorn; Daianu, Madelaine; Fong, Sylvia S; Mather, Michelle; Jimenez, Elvira E; Thompson, Paul; Mendez, Mario F

    2015-11-01

    Behavioral changes in dementia, especially behavioral variant frontotemporal dementia (bvFTD), may result in alterations in moral reasoning. Investigators have not clarified whether these alterations reflect differential impairment of care-based vs. rule-based moral behavior. This study investigated 18 bvFTD patients, 22 early onset Alzheimer's disease (eAD) patients, and 20 healthy age-matched controls on care-based and rule-based items from the Moral Behavioral Inventory and the Social Norms Questionnaire, neuropsychological measures, and magnetic resonance imaging (MRI) regions of interest. There were significant group differences with the bvFTD patients rating care-based morality transgressions less severely than the eAD group and rule-based moral behavioral transgressions more severely than controls. Across groups, higher care-based morality ratings correlated with phonemic fluency on neuropsychological tests, whereas higher rule-based morality ratings correlated with increased difficulty set-shifting and learning new rules to tasks. On neuroimaging, severe care-based reasoning correlated with cortical volume in right anterior temporal lobe, and rule-based reasoning correlated with decreased cortical volume in the right orbitofrontal cortex. Together, these findings suggest that frontotemporal disease decreases care-based morality and facilitates rule-based morality possibly from disturbed contextual abstraction and set-shifting. Future research can examine whether frontal lobe disorders and bvFTD result in a shift from empathic morality to the strong adherence to conventional rules. Published by Elsevier Ltd.

  18. Bridging the Gap between the Nanometer-Scale Bottom-Up and Micrometer-Scale Top-Down Approaches for Site-Defined InP/InAs Nanowires.

    Science.gov (United States)

    Zhang, Guoqiang; Rainville, Christophe; Salmon, Adrian; Takiguchi, Masato; Tateno, Kouta; Gotoh, Hideki

    2015-11-24

    This work presents a method that bridges the gap between the nanometer-scale bottom-up and micrometer-scale top-down approaches for site-defined nanostructures, which has long been a significant challenge for applications that require low-cost and high-throughput manufacturing processes. We realized the bridging by controlling the seed indium nanoparticle position through a self-assembly process. Site-defined InP nanowires were then grown from the indium-nanoparticle array in the vapor-liquid-solid mode through a "seed and grow" process. The nanometer-scale indium particles do not always occupy the same locations within the micrometer-scale open window of an InP exposed substrate due to the scale difference. We developed a technique for aligning the nanometer-scale indium particles on the same side of the micrometer-scale window by structuring the surface of a misoriented InP (111)B substrate. Finally, we demonstrated that the developed method can be used to grow a uniform InP/InAs axial-heterostructure nanowire array. The ability to form a heterostructure nanowire array with this method makes it possible to tune the emission wavelength over a wide range by employing the quantum confinement effect and thus expand the application of this technology to optoelectronic devices. Successfully pairing a controllable bottom-up growth technique with a top-down substrate preparation technique greatly improves the potential for the mass-production and widespread adoption of this technology.

  19. An Embedded Rule-Based Diagnostic Expert System in Ada

    Science.gov (United States)

    Jones, Robert E.; Liberman, Eugene M.

    1992-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language is discussed, especially a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell. NASA Lewis was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The rules were written in the ART-Ada development environment and converted to Ada source code. The graphics interface was developed with the Transportable Application Environment (TAE) Plus, which generates Ada source code to control graphics images. SMART-Ada communicates with a remote host to obtain either simulated or real data. The Ada source code generated with ART-Ada, TAE Plus, and the communications code was incorporated into an Ada expert system that reads the data from a power distribution test bed, applies the rules to determine whether a fault exists, and graphically displays it on the screen. The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  20. Designing Fuzzy Rule Based Expert System for Cyber Security

    OpenAIRE

    Goztepe, Kerim

    2016-01-01

    The state of cyber security has begun to attract more attention and interest outside the community of computer security experts. Cyber security is not a single problem, but rather a group of highly different problems involving different sets of threats. A fuzzy rule-based system for cyber security consists of a rule depository and a mechanism for accessing and running the rules. The depository is usually constructed from a collection of related rule sets. The aim of this study is to...

  1. Extracting Cross-Ontology Weighted Association Rules from Gene Ontology Annotations.

    Science.gov (United States)

    Agapito, Giuseppe; Milano, Marianna; Guzzi, Pietro Hiram; Cannataro, Mario

    2016-01-01

    Gene Ontology (GO) is a structured repository of concepts (GO Terms) that are associated with one or more gene products through a process referred to as annotation. The analysis of annotated data is an important opportunity for bioinformatics. There are different approaches to such analysis; among these is the use of association rules (AR), which provide useful knowledge by discovering biologically relevant associations between terms of GO that were not previously known. In a previous work, we introduced GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules from ontology-based annotated datasets. We here adapt the GO-WAR algorithm to mine cross-ontology association rules, i.e., rules that involve GO terms present in the three sub-ontologies of GO. We conduct a deep performance evaluation of GO-WAR by mining publicly available GO annotated datasets, showing how GO-WAR outperforms current state-of-the-art approaches.

  2. Approximating optimal behavioural strategies down to rules-of-thumb: energy reserve changes in pairs of social foragers.

    Directory of Open Access Journals (Sweden)

    Sean A Rands

    Full Text Available Functional explanations of behaviour often propose optimal strategies for organisms to follow. These 'best' strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or 'rules-of-thumb' that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose - particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour.

  3. Approximating optimal behavioural strategies down to rules-of-thumb: energy reserve changes in pairs of social foragers.

    Science.gov (United States)

    Rands, Sean A

    2011-01-01

    Functional explanations of behaviour often propose optimal strategies for organisms to follow. These 'best' strategies could be difficult to perform given biological constraints such as neural architecture and physiological constraints. Instead, simple heuristics or 'rules-of-thumb' that approximate these optimal strategies may instead be performed. From a modelling perspective, rules-of-thumb are also useful tools for considering how group behaviour is shaped by the behaviours of individuals. Using simple rules-of-thumb reduces the complexity of these models, but care needs to be taken to use rules that are biologically relevant. Here, we investigate the similarity between the outputs of a two-player dynamic foraging game (which generated optimal but complex solutions) and a computational simulation of the behaviours of the two members of a foraging pair, who instead followed a rule-of-thumb approximation of the game's output. The original game generated complex results, and we demonstrate here that the simulations following the much-simplified rules-of-thumb also generate complex results, suggesting that the rule-of-thumb was sufficient to make some of the model outcomes unpredictable. There was some agreement between both modelling techniques, but some differences arose - particularly when pair members were not identical in how they gained and lost energy. We argue that exploring how rules-of-thumb perform in comparison to their optimal counterparts is an important exercise for biologically validating the output of agent-based models of group behaviour.

  4. Experimental design of high energy electron gun by means of scaling rules

    International Nuclear Information System (INIS)

    Wojcicki, S.

    1996-01-01

    The possibility of designing a new family of electron guns by means of the scaling theory of electron-optical devices (EOD) is presented. According to the theory, EOD with a relatively large space charge, such as the high-energy Pierce-type electron guns used in technological equipment, generally cannot be scaled because of their nonlinear space-charge nature. Therefore, the scaling rules are applied here only to the anode zone of the gun, where the electron beam perveance is small, while the cathode lens of the gun, with its considerable space charge, remains unchanged. The procedure for scaling a 25 kV and 150 mA gun with a cylindrical electron beam into a high-voltage 75 kV and 150 mA electron system is given. An experimental investigation proved the high technological quality of a high-voltage gun constructed according to the above concept. (author)
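    A short worked example of the quantity that keeps the anode zone scalable: the beam perveance P = I / V^{3/2}. At a fixed beam current of 150 mA, raising the accelerating voltage from 25 kV to 75 kV lowers the perveance by a factor of 3^{3/2}, roughly 5.2, keeping space-charge effects in the anode zone small. The code is only an illustrative calculation, not part of the cited design procedure.

```python
# Worked example: beam perveance P = I / V**1.5, the space-charge measure that
# stays small in the anode zone and makes scaling of that zone possible.
def perveance(current_a, voltage_v):
    return current_a / voltage_v ** 1.5

for voltage in (25e3, 75e3):                 # the 25 kV and 75 kV guns, 150 mA beam
    p = perveance(0.150, voltage)
    print(f"{voltage / 1e3:.0f} kV: P = {p * 1e6:.3f} microperv")
```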

  5. Knowledge base rule partitioning design for CLIPS

    Science.gov (United States)

    Mainardi, Joseph D.; Szatkowski, G. P.

    1990-01-01

    This describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance when using the CLIPS AI shell with large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The Expert System advanced development project (ADP-2302) main objective is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance, but without undue impacts during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification & validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the V&V testing necessary by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply to that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests of real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
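    A schematic sketch of the 'block' control described: rules and facts are grouped into named blocks, and switching phase loads only the blocks that apply while stripping out the rest. The real system manages CLIPS constructs inside the Rete network; the Python class and block names below are illustrative assumptions.

```python
# Schematic of the described partitioning: rules are grouped into named blocks,
# and only the blocks relevant to the current phase are kept loaded. The real
# system manipulates CLIPS rules and facts; this is only an outline.
class PartitionedKB:
    def __init__(self, blocks):
        self.blocks = blocks          # {block_name: [rule_name, ...]}
        self.loaded = set()

    def switch_phase(self, required_blocks):
        """Load the blocks needed for the new phase and unload everything else."""
        required = set(required_blocks)
        for name in self.loaded - required:
            self.loaded.discard(name)     # strip rules not used during this cycle
        for name in required - self.loaded:
            self.loaded.add(name)         # add rules that directly apply
        return [rule for block in self.loaded for rule in self.blocks[block]]

kb = PartitionedKB({
    "checkout":   ["verify-valves", "verify-power"],
    "countdown":  ["monitor-pressure", "abort-on-leak"],
    "diagnostic": ["isolate-fault"],
})
print(kb.switch_phase(["checkout"]))
print(kb.switch_phase(["countdown", "diagnostic"]))
```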

  6. Frontal impact response of a virtual low percentile six years old human thorax developed by automatic down-scaling

    Directory of Open Access Journals (Sweden)

    Špička J.

    2015-06-01

    Full Text Available Traffic accidents cause one of the highest numbers of severe injuries across the whole population spectrum. The numbers of deaths and seriously injured citizens prove that traffic accidents and their consequences are still a serious problem to be solved. The paper contributes to the field of vehicle safety technology with a virtual approach. Exploitation of the previously developed scaling algorithm enables the creation of a specific anthropometric model based on a validated reference model. The aim of the paper is to prove the biofidelity of the small-percentile six-year-old virtual human model developed by automatic down-scaling in a frontal impact. For the automatically developed six-year-old virtual specific anthropometric model, the Kroell impact test is simulated and the results are compared to the experimental data. The chosen approach shows good correspondence of the scaled model's performance to the experimental corridors.

  7. Methodological foundations of evaluation of effectiveness indicators of small-scale business activities

    Directory of Open Access Journals (Sweden)

    Ivanova T.

    2013-01-01

    Full Text Available A methodological approach to the measurement of financial indicators of small-scale enterprises has been developed. It makes it possible to secure the comparability of financial condition indicators and the results of small-scale enterprise activities, and also to develop methods of calculating vertical integral estimates for separate aspects of the financial condition and results of small-scale enterprise activities.

  8. The COMFORT-behavior scale is useful to assess pain and distress in 0- to 3-year-old children with Down syndrome.

    Science.gov (United States)

    Valkenburg, Abraham J; Boerlage, Anneke A; Ista, Erwin; Duivenvoorden, Hugo J; Tibboel, Dick; van Dijk, Monique

    2011-09-01

    Many pediatric intensive care units use the COMFORT-Behavior scale (COMFORT-B) to assess pain in 0- to 3-year-old children. The objective of this study was to determine whether this scale is also valid for the assessment of pain in 0- to 3-year-old children with Down syndrome. These children often undergo cardiac or intestinal surgery early in life and therefore admission to a pediatric intensive care unit. Seventy-six patients with Down syndrome were included and 466 without Down syndrome. Pain was regularly assessed with the COMFORT-B scale and the pain Numeric Rating Scale (NRS). For either group, confirmatory factor analyses revealed a 1-factor model. Internal consistency between COMFORT-B items was good (Cronbach's α=0.84-0.87). Cutoff values for the COMFORT-B set at 17 or higher discriminated between pain (NRS pain of 4 or higher) and no pain (NRS pain below 4) in both groups. We concluded that the COMFORT-B scale is also valid for 0- to 3-year-old children with Down syndrome. This makes it even more useful in the pediatric intensive care unit setting, doing away with the need to apply another instrument for those children younger than 3. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  9. Multi-scale X-ray computed tomography to detect and localize metal-based nanomaterials in lung tissues of in vivo exposed mice.

    Science.gov (United States)

    Chaurand, Perrine; Liu, Wei; Borschneck, Daniel; Levard, Clément; Auffan, Mélanie; Paul, Emmanuel; Collin, Blanche; Kieffer, Isabelle; Lanone, Sophie; Rose, Jérôme; Perrin, Jeanne

    2018-03-13

    In this methodological study, we demonstrated the relevance of 3D imaging performed at various scales for the ex vivo detection and location of cerium oxide nanomaterials (CeO2-NMs) in mouse lung. X-ray micro-computed tomography (micro-CT) with a voxel size from 14 µm to 1 µm was combined with X-ray nano-computed tomography (nano-CT) with a voxel size of 63 nm. An optimized protocol was proposed to facilitate the sample preparation, to minimize the experimental artifacts and to optimize the contrast of soft tissues exposed to metal-based nanomaterials (NMs). 3D imaging of the NM biodistribution in lung tissues was consolidated by combining a wide variety of techniques in a correlative approach: histological observations, 2D chemical mapping and speciation analysis were performed for an unambiguous detection of NMs. This original methodological approach was developed following a worst-case scenario of exposure, i.e. a high dose of exposure administered via intra-tracheal instillation. Results highlighted both (i) the non-uniform distribution of CeO2-NMs within the entire lung lobe (using large field-of-view micro-CT) and (ii) the detection of CeO2-NMs down to the individual cell scale, e.g. the macrophage scale (using nano-CT with a voxel size of 63 nm).

  10. Scale-free music of the brain.

    Directory of Open Access Journals (Sweden)

    Dan Wu

    Full Text Available BACKGROUND: There is growing interest in the relation between the brain and music. The appealing similarity between brainwaves and the rhythms of music has motivated many scientists to seek a connection between them. A variety of transfer rules has been utilized to convert brainwaves into music, most of them based mainly on spectral features of the EEG. METHODOLOGY/PRINCIPAL FINDINGS: In this study, audibly recognizable scale-free music was deduced from individual Electroencephalogram (EEG) waveforms. The translation rules include a direct mapping from the period of an EEG waveform to the duration of a note, a logarithmic mapping of the change of average power of the EEG to music intensity according to Fechner's law, and a scale-free based mapping from the amplitude of the EEG to music pitch according to the power law. To show the actual effect, we applied the deduced sonification rules to EEG segments recorded during rapid-eye-movement sleep (REM) and slow-wave sleep (SWS). The resulting music is vivid and differs between the two mental states; the melody during REM sleep sounds fast and lively, whereas that in SWS sleep is slow and tranquil. 60 volunteers evaluated 25 music pieces, 10 from REM, 10 from SWS and 5 from white noise (WN); 74.3% experienced a happy emotion from REM and felt bored and drowsy when listening to SWS, and the average accuracy of identification for all the music pieces is 86.8% (kappa = 0.800, P<0.001). We also applied the method to EEG data from eyes closed, eyes open and epileptic EEG, and the results showed that these mental states can be identified by listeners. CONCLUSIONS/SIGNIFICANCE: The sonification rules may identify the mental states of the brain, which provides a real-time strategy for monitoring brain activities and is potentially useful for neurofeedback therapy.
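    A minimal sketch of the three stated translation rules applied to one EEG waveform: the period maps directly to note duration, the change of average power maps logarithmically to intensity (Fechner's law), and the amplitude maps to pitch through a power-law-style relation. The scaling constants and the exact functional form of the pitch mapping are assumptions, not the parameters used in the cited study.

```python
# Sketch of the three stated mappings from one EEG waveform to a note:
# period -> duration, log of average-power change -> intensity (Fechner-style),
# amplitude -> pitch through a power-law-style relation. Constants are assumed.
import math

def eeg_wave_to_note(period_s, power_change, amplitude_uv):
    duration = period_s                                              # direct mapping
    intensity = 60.0 + 10.0 * math.log10(max(power_change, 1e-6))    # dB-like loudness
    pitch_midi = 100.0 - 20.0 * math.log10(max(amplitude_uv, 1e-6))  # larger amplitude -> lower pitch
    return duration, intensity, int(round(pitch_midi))

# One toy EEG waveform: 0.25 s period, power ratio 2.0, amplitude 40 microvolts.
print(eeg_wave_to_note(0.25, 2.0, 40.0))
```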

  11. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone

  12. A rough set-based association rule approach implemented on a brand trust evaluation model

    Science.gov (United States)

    Liao, Shu-Hsien; Chen, Yin-Ju

    2017-09-01

    In commerce, businesses use branding to differentiate their product and service offerings from those of their competitors. A brand incorporates a set of product or service features that are associated with that particular brand name and identifies the product/service segmentation in the market. This study proposes a new data mining approach, rough set-based association rule induction, implemented on a brand trust evaluation model. In addition, it presents one way to deal with data uncertainty when analysing ratio-scale data, while creating predictive if-then rules that generalise data values to the retail region. The algorithms are then applied to analyse brand trust recall for alcoholic beverages. Finally, a discussion and conclusions are presented, together with further managerial implications.

  13. Estimation of Handgrip Force from SEMG Based on Wavelet Scale Selection.

    Science.gov (United States)

    Wang, Kai; Zhang, Xianmin; Ota, Jun; Huang, Yanjiang

    2018-02-24

    This paper proposes a nonlinear correlation-based wavelet scale selection technique to select the effective wavelet scales for the estimation of handgrip force from surface electromyograms (SEMG). The SEMG signal corresponding to gripping force was collected from extensor and flexor forearm muscles during a force-varying analysis task. We performed a computational sensitivity analysis on the initial nonlinear SEMG-handgrip force model. To explore the nonlinear correlation between ten wavelet scales and handgrip force, a large-scale iteration based on Monte Carlo simulation was conducted. To choose a suitable combination of scales, we proposed a rule to combine wavelet scales based on the sensitivity of each scale and selected the appropriate combination of wavelet scales based on sequence combination analysis (SCA). The results of SCA indicated that scale combination VI is suitable for estimating force from the extensors and combination V is suitable for the flexors. The proposed method was compared to two former methods through prolonged static and force-varying contraction tasks. The experimental results showed that the root mean square errors derived by the proposed method for both static and force-varying contraction tasks were less than 20%. The accuracy and robustness of the handgrip force derived by the proposed method are better than those obtained by the former methods.
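    As a rough illustration of ranking wavelet scales by their nonlinear correlation with the measured force, the sketch below decomposes a synthetic SEMG trace with a Mexican-hat wavelet at several scales and ranks the scales by Spearman (rank) correlation with force. The wavelet, the scale grid and the correlation measure are assumptions for illustration, not the exact procedure of the paper.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def mexican_hat(t, s):
        """Mexican-hat (Ricker) wavelet evaluated at times t for scale s (seconds)."""
        x = t / s
        return (1 - x**2) * np.exp(-x**2 / 2)

    def scale_correlations(semg, force, scales, fs):
        """Correlate the rectified wavelet coefficients at each scale with the force signal."""
        n = len(semg)
        t = (np.arange(n) - n // 2) / fs
        corrs = {}
        for s in scales:
            kernel = mexican_hat(t, s)
            coeff = np.convolve(semg, kernel, mode="same")
            rho, _ = spearmanr(np.abs(coeff), force)   # nonlinear (rank) correlation
            corrs[s] = rho
        return corrs

    # Synthetic demo: a force ramp plus SEMG-like noise whose amplitude follows the force.
    fs = 1000
    force = np.linspace(0.0, 1.0, 5 * fs)
    rng = np.random.default_rng(1)
    semg = force * rng.standard_normal(5 * fs)
    scales = [0.002, 0.004, 0.008, 0.016, 0.032]       # seconds, illustrative grid
    ranked = sorted(scale_correlations(semg, force, scales, fs).items(),
                    key=lambda kv: -kv[1])
    print("scales ranked by correlation with force:", ranked)
    ```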

  14. Nasonia Parasitic Wasps Escape from Haller's Rule by Diphasic, Partially Isometric Brain-Body Size Scaling and Selective Neuropil Adaptations

    NARCIS (Netherlands)

    Groothuis, Jitte; Smid, Hans M.

    2017-01-01

    Haller's rule states that brains scale allometrically with body size in all animals, meaning that relative brain size increases with decreasing body size. This rule applies to both inter- and intraspecific comparisons. Only one species, the extremely small parasitic wasp Trichogramma evanescens, is known to be an exception to this rule.

  15. Optimizing Electric Vehicle Coordination Over a Heterogeneous Mesh Network in a Scaled-Down Smart Grid Testbed

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Lévesque, Martin; Maier, Martin

    2015-01-01

    High penetration of renewable energy sources and electric vehicles (EVs) creates power imbalance and congestion in the existing power network, and hence causes significant problems in control and operation. Despite huge efforts by electric utilities, governments, and researchers, the smart grid (SG) is still at the developmental stage in addressing those issues. In this regard, a smart grid testbed (SGT) is desirable to develop, analyze, and demonstrate various novel SG solutions, namely demand response, real-time pricing, and congestion management. In this paper, a novel SGT is developed in a laboratory by scaling a 250 kVA, 0.4 kV real low-voltage distribution feeder down to 1 kVA, 0.22 kV. Information and communication technology is integrated in the scaled-down network to establish real-time monitoring and control. The novelty of the developed testbed is demonstrated...

  16. Fuzzy rule-based forecast of meteorological drought in western Niger

    Science.gov (United States)

    Abdourahamane, Zakari Seybou; Acar, Reşat

    2018-01-01

    Understanding the causes of rainfall anomalies in the West African Sahel in order to effectively predict drought events remains a challenge. The physical mechanisms that influence precipitation in this region are complex, uncertain, and imprecise in nature. Fuzzy logic techniques are known to be highly efficient in modeling such dynamics. This paper attempts to forecast meteorological drought in western Niger using fuzzy rule-based modeling techniques. The 3-month standardized precipitation index (SPI-3) of four rainfall stations was used as the predictand. Monthly data for the southern oscillation index (SOI), South Atlantic sea surface temperature (SST), relative humidity (RH), and Atlantic sea level pressure (SLP), sourced from the National Oceanic and Atmospheric Administration (NOAA), were used as predictors. Fuzzy rules and membership functions were generated using a fuzzy c-means clustering approach, expert decision, and literature review. For a minimum lead time of 1 month, the model has a coefficient of determination R² between 0.80 and 0.88, a mean square error (MSE) below 0.17, and a Nash-Sutcliffe efficiency (NSE) ranging between 0.79 and 0.87. The empirical frequency distributions of the predicted and observed drought classes are equal at the 99% confidence level based on a two-sample t test. Results also revealed a discrepancy in the influence of SOI and SLP on drought occurrence at the four stations, while the effects of SST and RH are space independent, both being significantly correlated with drought occurrence; the fuzzy rule-based forecast model shows better forecast skill.
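    A fuzzy rule base of the kind described can be evaluated with a few lines of code. The sketch below uses Gaussian membership functions and a zero-order Takagi-Sugeno (weighted-average) combination of rule consequents; the rule centres, widths and consequent SPI-3 values are placeholders for illustration and are not the clusters or rules derived in the paper.

    ```python
    import numpy as np

    def gauss(x, c, sigma):
        """Gaussian membership degree of crisp input x in a fuzzy set centred at c."""
        return np.exp(-0.5 * ((x - c) / sigma) ** 2)

    def fuzzy_spi_forecast(predictors, rules):
        """Zero-order Takagi-Sugeno evaluation of a small fuzzy rule base.

        predictors : dict of crisp inputs, e.g. {"SOI": ..., "SST": ...}
        rules      : list of (antecedents, consequent); antecedents maps each predictor
                     name to a (centre, sigma) pair, consequent is the rule's SPI-3 output.
        """
        weights, outputs = [], []
        for antecedents, spi_out in rules:
            # Rule firing strength = product of membership degrees (AND via product).
            w = np.prod([gauss(predictors[name], c, s)
                         for name, (c, s) in antecedents.items()])
            weights.append(w)
            outputs.append(spi_out)
        weights = np.asarray(weights)
        return float(np.dot(weights, outputs) / (weights.sum() + 1e-12))

    # Two illustrative rules; in practice the centres would come from fuzzy c-means
    # clustering and expert knowledge, as the abstract describes.
    rules = [
        ({"SOI": (-1.0, 0.8), "SST": (0.5, 0.4)}, -1.2),   # placeholder "dry" rule
        ({"SOI": (1.0, 0.8), "SST": (-0.5, 0.4)}, 0.8),    # placeholder "wet" rule
    ]
    print(fuzzy_spi_forecast({"SOI": -0.7, "SST": 0.3}, rules))
    ```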

  17. Rule-Based Storytelling Text-to-Speech (TTS) Synthesis

    Directory of Open Access Journals (Sweden)

    Ramli Izzad

    2016-01-01

    In recent years, various real-life applications such as talking books, gadgets and humanoid robots have drawn attention to research in the area of expressive speech synthesis. Speech synthesis is widely used in various applications. However, there is a growing need for expressive speech synthesis, especially for communication and robotics. In this paper, global and local rules are developed to convert neutral speech to storytelling-style speech for the Malay language. In order to generate the rules, modifications of prosodic parameters such as pitch, intensity, duration, tempo and pauses are considered. The modification of prosodic parameters is examined by performing a prosodic analysis on stories collected from an experienced female and an experienced male storyteller. The global and local rules are applied at the sentence level and synthesized using HNM. Subjective tests are conducted to evaluate the quality of the synthesized storytelling speech for both rule sets, based on naturalness, intelligibility, and similarity to the original storytelling speech. The results showed that the global rules give better results than the local rules.

  18. Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search

    Science.gov (United States)

    Nakamura, Katsuhiko; Hoshina, Akemi

    This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm of the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for the positive string. The improved version also employs a novel search strategy, called serial search, in addition to minimum rule set search. The synthesis of grammars by the serial search is faster than by the minimum set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that obtained by the minimum set search, and the system can find no appropriate grammar for some CFLs by the serial search. The paper shows experimental results on incremental learning of several fundamental CFGs and compares the methods of rule generation and the search strategies.

  19. Single-molecule pull-down (SiMPull) for new-age biochemistry: methodology and biochemical applications of single-molecule pull-down (SiMPull) for probing biomolecular interactions in crude cell extracts.

    Science.gov (United States)

    Aggarwal, Vasudha; Ha, Taekjip

    2014-11-01

    Macromolecular interactions play a central role in many biological processes. Protein-protein interactions have mostly been studied by co-immunoprecipitation, which cannot provide quantitative information on all possible molecular connections present in a complex. We review a new approach that allows cellular proteins and biomolecular complexes to be studied in real time at the single-molecule level. This technique is called single-molecule pull-down (SiMPull), because it integrates principles of conventional immunoprecipitation with powerful single-molecule fluorescence microscopy. SiMPull is used to count how many copies of each protein are present in the physiological complexes found in the cytosol and in membranes. Concurrently, it serves as a single-molecule biochemical tool to perform functional studies on the pulled-down proteins. In this review, we focus on the detailed methodology of SiMPull, its salient features and a wide range of biological applications in comparison with other biosensing tools. © 2014 WILEY Periodicals, Inc.

  20. Guidelines for visualizing and annotating rule-based models†

    Science.gov (United States)

    Chylek, Lily A.; Hu, Bin; Blinov, Michael L.; Emonet, Thierry; Faeder, James R.; Goldstein, Byron; Gutenkunst, Ryan N.; Haugh, Jason M.; Lipniacki, Tomasz; Posner, Richard G.; Yang, Jin; Hlavacek, William S.

    2011-01-01

    Rule-based modeling provides a means to represent cell signaling systems in a way that captures site-specific details of molecular interactions. For rule-based models to be more widely understood and (re)used, conventions for model visualization and annotation are needed. We have developed the concepts of an extended contact map and a model guide for illustrating and annotating rule-based models. An extended contact map represents the scope of a model by providing an illustration of each molecule, molecular component, direct physical interaction, post-translational modification, and enzyme-substrate relationship considered in a model. A map can also illustrate allosteric effects, structural relationships among molecular components, and compartmental locations of molecules. A model guide associates elements of a contact map with annotation and elements of an underlying model, which may be fully or partially specified. A guide can also serve to document the biological knowledge upon which a model is based. We provide examples of a map and guide for a published rule-based model that characterizes early events in IgE receptor (FcεRI) signaling. We also provide examples of how to visualize a variety of processes that are common in cell signaling systems but not considered in the example model, such as ubiquitination. An extended contact map and an associated guide can document knowledge of a cell signaling system in a form that is visual as well as executable. As a tool for model annotation, a map and guide can communicate the content of a model clearly and with precision, even for large models. PMID:21647530

  1. Guidelines for visualizing and annotating rule-based models.

    Science.gov (United States)

    Chylek, Lily A; Hu, Bin; Blinov, Michael L; Emonet, Thierry; Faeder, James R; Goldstein, Byron; Gutenkunst, Ryan N; Haugh, Jason M; Lipniacki, Tomasz; Posner, Richard G; Yang, Jin; Hlavacek, William S

    2011-10-01

    Rule-based modeling provides a means to represent cell signaling systems in a way that captures site-specific details of molecular interactions. For rule-based models to be more widely understood and (re)used, conventions for model visualization and annotation are needed. We have developed the concepts of an extended contact map and a model guide for illustrating and annotating rule-based models. An extended contact map represents the scope of a model by providing an illustration of each molecule, molecular component, direct physical interaction, post-translational modification, and enzyme-substrate relationship considered in a model. A map can also illustrate allosteric effects, structural relationships among molecular components, and compartmental locations of molecules. A model guide associates elements of a contact map with annotation and elements of an underlying model, which may be fully or partially specified. A guide can also serve to document the biological knowledge upon which a model is based. We provide examples of a map and guide for a published rule-based model that characterizes early events in IgE receptor (FcεRI) signaling. We also provide examples of how to visualize a variety of processes that are common in cell signaling systems but not considered in the example model, such as ubiquitination. An extended contact map and an associated guide can document knowledge of a cell signaling system in a form that is visual as well as executable. As a tool for model annotation, a map and guide can communicate the content of a model clearly and with precision, even for large models.

  2. An Investigation of Care-Based vs. Rule-Based Morality in Frontotemporal Dementia, Alzheimer’s Disease, and Healthy Controls

    Science.gov (United States)

    Carr, Andrew R.; Paholpak, Pongsatorn; Daianu, Madelaine; Fong, Sylvia S.; Mather, Michelle; Jimenez, Elvira E.; Thompson, Paul; Mendez, Mario F.

    2015-01-01

    Behavioral changes in dementia, especially behavioral variant frontotemporal dementia (bvFTD), may result in alterations in moral reasoning. Investigators have not clarified whether these alterations reflect differential impairment of care-based vs. rule-based moral behavior. This study investigated 18 bvFTD patients, 22 early onset Alzheimer’s disease (eAD) patients, and 20 healthy age-matched controls on care-based and rule-based items from the Moral Behavioral Inventory and the Social Norms Questionnaire, neuropsychological measures, and magnetic resonance imaging (MRI) regions of interest. There were significant group differences with the bvFTD patients rating care-based morality transgressions less severely than the eAD group and rule-based moral behavioral transgressions more severely than controls. Across groups, higher care-based morality ratings correlated with phonemic fluency on neuropsychological tests, whereas higher rule-based morality ratings correlated with increased difficulty set-shifting and learning new rules to tasks. On neuroimaging, severe care-based reasoning correlated with cortical volume in right anterior temporal lobe, and rule-based reasoning correlated with decreased cortical volume in the right orbitofrontal cortex. Together, these findings suggest that frontotemporal disease decreases care-based morality and facilitates rule-based morality possibly from disturbed contextual abstraction and set-shifting. Future research can examine whether frontal lobe disorders and bvFTD result in a shift from empathic morality to the strong adherence to conventional rules. PMID:26432341

  3. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan; Lau, Cheryl; Müller, Pascal; Wonka, Peter; Pauly, Mark

    2017-01-01

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  4. Design Transformations for Rule-based Procedural Modeling

    KAUST Repository

    Lienhard, Stefan

    2017-05-24

    We introduce design transformations for rule-based procedural models, e.g., for buildings and plants. Given two or more procedural designs, each specified by a grammar, a design transformation combines elements of the existing designs to generate new designs. We introduce two technical components to enable design transformations. First, we extend the concept of discrete rule switching to rule merging, leading to a very large shape space for combining procedural models. Second, we propose an algorithm to jointly derive two or more grammars, called grammar co-derivation. We demonstrate two applications of our work: we show that our framework leads to a larger variety of models than previous work, and we show fine-grained transformation sequences between two procedural models.

  5. Evaluation of analytical performance based on partial order methodology.

    Science.gov (United States)

    Carlsen, Lars; Bruggemann, Rainer; Kenessova, Olga; Erzhigitov, Erkin

    2015-01-01

    Classical measurements of performance are typically based on linear scales. However, in analytical chemistry a simple scale may not be sufficient to analyze analytical performance appropriately. Here, partial order methodology can be helpful. In the context described here, partial order analysis can be seen as an ordinal analysis of data matrices, especially to simplify the relative comparison of objects based on their data profiles (the ordered set of values an object has). Hence, partial order methodology offers a unique possibility to evaluate analytical performance. In the present paper, data as provided by laboratories through interlaboratory comparisons or proficiency testing are used as an illustrative example. However, the presented scheme is likewise applicable to the comparison of analytical methods or simply as a tool for optimization of an analytical method. The methodology can be applied without presumptions or pretreatment of the analytical data in order to evaluate the analytical performance taking into account all indicators simultaneously, thus elucidating a "distance" from the true value. In the present illustrative example it is assumed that the laboratories analyze a given sample several times and subsequently report the mean value, the standard deviation and the skewness, which simultaneously are used for the evaluation of the analytical performance. The analyses lead to information concerning (1) a partial ordering of the laboratories, (2) a "distance" to the reference laboratory and (3) a classification based on the concept of "peculiar points". Copyright © 2014 Elsevier B.V. All rights reserved.
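    The core of a partial-order comparison is a simple dominance test over indicator profiles. The sketch below compares laboratories on three "smaller is better" indicators (absolute bias, standard deviation, absolute skewness); the indicator choice mirrors the abstract, while the data values and the lack of Hasse-diagram reduction are illustrative simplifications.

    ```python
    from itertools import combinations

    def dominates(a, b):
        """a and b are indicator tuples where smaller is better for every entry,
        e.g. (|mean - reference|, standard deviation, |skewness|)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def partial_order(labs):
        """Return all comparable (better, worse) pairs; incomparable pairs are omitted."""
        relations = []
        for (name_a, a), (name_b, b) in combinations(labs.items(), 2):
            if dominates(a, b):
                relations.append((name_a, name_b))
            elif dominates(b, a):
                relations.append((name_b, name_a))
            # otherwise the two laboratories are incomparable
        return relations

    # Illustrative indicator profiles (|bias|, std, |skewness|) -- not real data.
    labs = {
        "Lab1": (0.10, 0.05, 0.2),
        "Lab2": (0.20, 0.04, 0.1),
        "Lab3": (0.25, 0.08, 0.4),
    }
    print(partial_order(labs))   # Lab1 and Lab2 are incomparable; both dominate Lab3
    ```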

  6. Rule based deterioration identification and management system

    International Nuclear Information System (INIS)

    Kataoka, S.; Pavinich, W.; Lapides, M.

    1993-01-01

    Under the sponsorship of IHI and EPRI, a rule-based screening system has been developed that can be used by utility engineers to determine which deterioration mechanisms are acting on specific LWR components, and to evaluate the efficacy of an age-related deterioration management program. The screening system was developed using the rule-based shell NEXPERT, which provides traceability to the data sources used in the logic development. The system addresses all the deterioration mechanisms of specific metals encountered in either BWRs or PWRs. Deterioration mechanisms are listed with the reasons why they may occur during the design life of LWRs, considering the plant environment, manufacturing process, service history, material chemical composition, etc. of components in a specific location of an LWR. To quickly eliminate inactive deterioration mechanisms from the evaluation, a tier structure is applied to the rules. The reasons why deterioration will occur are extracted automatically by backward chaining. To reduce the amount of user input, plant environmental data are stored in files as default environmental data. (author)
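    Backward chaining of the kind mentioned here can be sketched in a few lines: a goal is proved either because it is a known fact or because some rule concludes it and all of that rule's premises can be proved in turn. The two rules and the facts below are invented placeholders for illustration; they are not the NEXPERT knowledge base described in the abstract.

    ```python
    # Invented example rules: (set of premises, conclusion).
    RULES = [
        ({"material is sensitized stainless steel", "environment is oxygenated water"},
         "IGSCC is plausible"),
        ({"coolant temperature is high", "dissolved oxygen is high"},
         "environment is oxygenated water"),
    ]

    def prove(goal, facts, rules):
        """Backward chaining: True if `goal` is a known fact or the conclusion of a rule
        whose premises can all be proved in turn (rules are assumed acyclic here)."""
        if goal in facts:
            return True
        return any(conclusion == goal and all(prove(p, facts, rules) for p in premises)
                   for premises, conclusion in rules)

    facts = {"material is sensitized stainless steel",
             "coolant temperature is high", "dissolved oxygen is high"}
    print(prove("IGSCC is plausible", facts, RULES))   # True
    ```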

  7. Efficiency in Rule- vs. Plan-Based Movements Is Modulated by Action-Mode.

    Science.gov (United States)

    Scheib, Jean P P; Stoll, Sarah; Thürmer, J Lukas; Randerath, Jennifer

    2018-01-01

    The rule/plan motor cognition (RPMC) paradigm elicits visually indistinguishable motor outputs, resulting from either plan- or rule-based action-selection, using a combination of essentially interchangeable stimuli. Previous implementations of the RPMC paradigm have used pantomimed movements to compare plan- vs. rule-based action-selection. In the present work we attempt to determine the generalizability of previous RPMC findings to real object interaction by use of a grasp-to-rotate task. In the plan task, participants had to use prospective planning to achieve a comfortable hand posture after rotating the handle. The rule task used implementation intentions (if-then rules) leading to the same comfortable end-state. In Experiment A, we compare the RPMC performance of 16 healthy participants in pantomime and real-object conditions, within subjects. Higher processing efficiency of rule- vs. plan-based action-selection was supported by diffusion model analysis. Results show a significant response-time increase in the pantomime condition compared to the real-object condition, and a greater response-time advantage of rule-based vs. plan-based actions in the pantomime condition compared to the real-object condition. In Experiment B, 24 healthy participants performed the real-object RPMC task in a task-switching vs. a blocked condition. Results indicate that plan-based action-selection leads to longer response times and less efficient information processing than rule-based action-selection, in line with previous RPMC findings derived from the pantomime action-mode. Particularly in the task-switching mode, responses were faster in the rule task than in the plan task, suggesting a modulating influence of cognitive load. Overall, results suggest an advantage of rule-based over plan-based action-selection, whereby differential mechanisms appear to be involved depending on the action-mode. We propose that cognitive load is a factor that modulates this advantage.

  8. Comparison of two down-scaling methods for climate study and climate change on the mountain areas in France

    International Nuclear Information System (INIS)

    Piazza, Marie; Page, Christian; Sanchez-Gomez, Emilia; Terray, Laurent; Deque, Michel

    2013-01-01

    Mountain regions are highly vulnerable to climate change and are likely to be among the areas most impacted by global warming. But climate projections for the end of the 21st century are developed with general circulation models of climate, which do not have sufficient horizontal resolution to accurately evaluate the impacts of warming on these regions. Several techniques are therefore used to perform a spatial down-scaling (to the order of 10 km). There are two categories of down-scaling methods: dynamical methods, which require significant computational resources to produce high-resolution regional climate simulations, and statistical methods, which require few resources but need a long, good-quality observational dataset. In this study, climate projections of the global atmospheric model ARPEGE over France are down-scaled with a dynamical method, performed with the ALADIN-Climate regional model, and with a statistical method, performed with the DSClim software developed at CERFACS. The two down-scaling methods are presented and their results for the climate of the French mountains are evaluated for the current climate. Both methods give similar results for average snowfall. However, extreme events of total precipitation (droughts, intense precipitation events) are largely underestimated by the statistical method. Then, the results of both methods are compared for two future climate projections under the IPCC A1B greenhouse gas emissions scenario. The two methods agree on fewer frost days, a significant decrease in the amounts of solid precipitation and an average increase in the percentage of dry days of more than 10%. The results obtained for Corsica are more heterogeneous, but they are questionable because the reduced spatial domain is probably not very relevant in terms of statistical sampling. (authors)

  9. Mapping practices of project management – merging top-down and bottom-up perspectives

    DEFF Research Database (Denmark)

    Thuesen, Christian

    2015-01-01

    This paper presents a new methodology for studying different accounts of project management practices based on network mapping and analysis. Drawing upon network mapping and visualization as an analytical strategy, top-down and bottom-up accounts of project management practice are analysed and compared. The analysis initially reveals a substantial difference between the top-down and bottom-up accounts of practice. Furthermore, it identifies a soft side of project management that is central in the bottom-up account but absent from the top-down. Finally, the study shows that network mapping...

  10. Evolving temporal association rules with genetic algorithms

    OpenAIRE

    Matthews, Stephen G.; Gongora, Mario A.; Hopgood, Adrian A.

    2010-01-01

    A novel framework for mining temporal association rules by discovering itemsets with a genetic algorithm is introduced. Metaheuristics have been applied to association rule mining; we show the efficacy of extending this to another variant, temporal association rule mining. Our framework is an enhancement to existing temporal association rule mining methods as it employs a genetic algorithm to simultaneously search the rule space and the temporal space. A methodology for validating the ability of...

  11. Achieving superior band gap, refractive index and morphology in composite oxide thin film systems violating the Moss rule

    International Nuclear Information System (INIS)

    Sahoo, N K; Thakur, S; Tokas, R B

    2006-01-01

    The interrelation between the energy gap and the high-frequency refractive index in semiconductors and dielectrics is manifested by an inverse law popularly known as the Moss rule. This semi-empirical relationship is based on the fundamental principle that in a dielectric medium all energy levels are scaled down by a factor of the square of the dielectric constant. The rule is obeyed by most pure semiconductors and dielectrics, with a few rare violations in composite materials, which display several interesting parametric and microstructural evolutions. The present results are based on specific oxide composite thin films involving Gd2O3/SiO2 and ZrO2/SiO2 codeposited systems that have displayed a superior refractive index and energy gaps violating the semi-empirical Moss rule. Morphological superiority is also distinctly noticeable in these composites. The novel microstructural and polarizability properties of such composite systems were probed through multi-mode atomic force microscopy and phase-modulated spectroscopic ellipsometry using refractive index modelling, autocorrelation and height-height correlation functional analyses. These binary composite thin films have shown their potential, as well as the possibility of meeting expectations in satisfying the challenging optical coating requirements of the deep ultraviolet spectral region.
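    For reference, the Moss rule mentioned here is usually written as an inverse fourth-power relation between the refractive index and the band gap; the constant below is the commonly quoted empirical value, and the exact number varies somewhat between sources.

    ```latex
    % Moss rule: the refractive index n and the energy gap E_g of a semiconductor or
    % dielectric are linked by an inverse fourth-power relation; ~95 eV is the value
    % most often quoted for the empirical constant.
    \begin{equation}
      n^{4} E_{g} \approx 95~\mathrm{eV}
    \end{equation}
    ```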

  12. Spatial and temporal disaggregation of the on-road vehicle emission inventory in a medium-sized Andean city. Comparison of GIS-based top-down methodologies

    Science.gov (United States)

    Gómez, C. D.; González, C. M.; Osses, M.; Aristizábal, B. H.

    2018-04-01

    Emission data is an essential tool for understanding environmental problems associated with the sources and dynamics of air pollutants in urban environments, especially those emitted from vehicular sources. There is a lack of knowledge about the estimation of air pollutant emissions, and particularly about their spatial and temporal distribution, in South America, mainly in medium-sized cities with populations of less than one million inhabitants. This work performed the spatial and temporal disaggregation of the on-road vehicle emission inventory (EI) in the medium-sized Andean city of Manizales, Colombia, with a spatial resolution of 1 km × 1 km and a temporal resolution of 1 h. A reported top-down methodology, based on the analysis of traffic flow levels and road network distribution, was applied. The results allowed the identification of several emission hotspots in the downtown zone and in the residential-commercial area of Manizales. Downtown exhibited the highest percentage contribution of emissions normalized by its total area, with values equal to 6% and 5% of total CO and PM10 emissions per km², respectively. These indexes were higher than those obtained in the residential-commercial area, with values of 2%/km² for both pollutants. The temporal distribution showed a strong relationship with driving patterns at rush hours, as well as an important influence of passenger cars and motorcycles on CO emissions both downtown and in the residential-commercial area, and an impact of public transport on PM10 emissions in the residential-commercial zone. Considering that detailed information about traffic counts and road network distribution is not always available in medium-sized cities, this work also compares other simplified top-down methods for spatially assessing the on-road vehicle EI. Results suggested that simplified methods could underestimate the spatial allocation of downtown emissions, a zone dominated by high vehicle traffic. The comparison between simplified methods...
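    The essence of such a top-down spatial disaggregation is proportional allocation of a city-wide total to grid cells according to a surrogate variable. The sketch below uses traffic-flow-weighted road length per cell as the surrogate; the cell names, road segments and total are made-up placeholders, and the real methodology additionally involves GIS processing and hourly temporal profiles.

    ```python
    def allocate_top_down(total_emission, cells):
        """Distribute a city-total emission over grid cells in proportion to a surrogate
        (here, traffic-flow-weighted road length per cell).

        cells : dict cell_id -> list of (road_length_km, mean_traffic_flow) segments
        Returns dict cell_id -> allocated emission (same units as total_emission).
        """
        surrogate = {cid: sum(length * flow for length, flow in segments)
                     for cid, segments in cells.items()}
        total_surrogate = sum(surrogate.values())
        return {cid: total_emission * s / total_surrogate for cid, s in surrogate.items()}

    # Illustrative 1 km x 1 km cells with made-up road segments (length km, veh/h).
    cells = {
        "downtown": [(1.2, 1800), (0.8, 2200)],
        "residential_commercial": [(2.5, 600), (1.0, 400)],
        "periphery": [(3.0, 150)],
    }
    print(allocate_top_down(100.0, cells))   # e.g. 100 t/yr of CO split across the cells
    ```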

  13. Diversity of Rule-based Approaches: Classic Systems and Recent Applications

    Directory of Open Access Journals (Sweden)

    Grzegorz J. Nalepa

    2016-11-01

    Rules are a common symbolic model of knowledge. Rule-based systems share roots in cognitive science and artificial intelligence. In the former, they are mostly used in cognitive architectures; in the latter, they are developed in several domains including knowledge engineering and machine learning. This paper aims to give an overview of these issues with a focus on the current research perspective of artificial intelligence. Moreover, in this setting we discuss our results in the design of rule-based systems and their applications in context-aware and business intelligence systems.

  14. A rule-based automatic sleep staging method.

    Science.gov (United States)

    Liang, Sheng-Fu; Kuo, Chin-En; Hu, Yu-Han; Cheng, Yu-Shian

    2012-03-30

    In this paper, a rule-based automatic sleep staging method was proposed. Twelve features based on temporal and spectral analyses of the EEG, EOG, and EMG signals were utilized. Normalization was applied to each feature to eliminate individual differences. A hierarchical decision tree with fourteen rules was constructed for sleep stage classification. Finally, a smoothing process that considers temporal contextual information was applied to ensure continuity. The overall agreement and kappa coefficient of the proposed method, applied to the all-night polysomnography (PSG) recordings of seventeen healthy subjects and compared with manual scoring by the R&K rules, reached 86.68% and 0.79, respectively. This method could be integrated with a portable PSG system for at-home sleep evaluation in the near future. Copyright © 2012 Elsevier B.V. All rights reserved.
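    A hierarchical rule-based stager plus temporal smoothing can be illustrated compactly. The feature names, thresholds and rule order in the sketch below are invented for illustration; they are not the twelve features or fourteen rules of the paper.

    ```python
    def classify_epoch(features):
        """Toy excerpt of a hierarchical rule-based sleep stager.

        `features` holds normalized per-epoch measures (e.g. relative band powers,
        EMG/EOG activity). Thresholds and rule order are illustrative only.
        """
        if features["emg_activity"] > 0.7 and features["alpha_power"] > 0.5:
            return "Wake"
        if features["emg_activity"] < 0.2 and features["rapid_eye_movements"] > 0.5:
            return "REM"
        if features["delta_power"] > 0.6:
            return "SWS"            # slow-wave sleep
        if features["spindle_density"] > 0.3:
            return "N2"
        return "N1"

    def smooth(stages):
        """Temporal smoothing: an isolated epoch is replaced by its neighbours' stage."""
        out = list(stages)
        for i in range(1, len(stages) - 1):
            if stages[i - 1] == stages[i + 1] != stages[i]:
                out[i] = stages[i - 1]
        return out

    epochs = [{"emg_activity": 0.1, "alpha_power": 0.2, "rapid_eye_movements": 0.1,
               "delta_power": 0.7, "spindle_density": 0.1}] * 3
    print(smooth([classify_epoch(e) for e in epochs]))   # ['SWS', 'SWS', 'SWS']
    ```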

  15. A General Attribute and Rule Based Role-Based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Growing numbers of users and many access control policies involving different resource attributes in service-oriented environments bring various problems in protecting resources. This paper analyzes the relationships of resource attributes to user attributes in all policies, and proposes a general attribute- and rule-based role-based access control (GAR-RBAC) model to meet these security needs. The model can dynamically assign users to roles via rules to accommodate growing numbers of users. These rules use different attribute expressions and permissions as part of the authorization constraints, and are defined by analyzing the relations of resource attributes to user attributes in the many access policies defined by the enterprise. The model is a general access control model that can support many access control policies and can be applied widely to services. The paper also describes how to use the GAR-RBAC model in Web service environments.
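    The central idea, rules that map user attributes to roles and roles to permissions, can be sketched as follows. The attribute names, roles, permissions and predicates are illustrative placeholders, not the policy language of the GAR-RBAC model itself.

    ```python
    # Minimal sketch of dynamic, rule-based user-to-role assignment in the spirit of an
    # attribute- and rule-based RBAC model. All names below are invented for illustration.
    ASSIGNMENT_RULES = [
        (lambda u: u["department"] == "radiology" and u["licensed"], "RadiologyReader"),
        (lambda u: u["seniority_years"] >= 5, "Approver"),
        (lambda u: True, "BasicUser"),                      # default role for everyone
    ]

    ROLE_PERMISSIONS = {
        "RadiologyReader": {"read:imaging"},
        "Approver": {"approve:request"},
        "BasicUser": {"read:public"},
    }

    def roles_for(user):
        """Derive the user's role set dynamically from the assignment rules."""
        return {role for predicate, role in ASSIGNMENT_RULES if predicate(user)}

    def permissions_for(user):
        return set().union(*(ROLE_PERMISSIONS[r] for r in roles_for(user)))

    alice = {"department": "radiology", "licensed": True, "seniority_years": 7}
    print(roles_for(alice))        # {'RadiologyReader', 'Approver', 'BasicUser'}
    print(permissions_for(alice))  # union of the three roles' permissions
    ```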

  16. Implementing XML Schema Naming and Design Rules

    Energy Technology Data Exchange (ETDEWEB)

    Lubell, Joshua [National Institute of Standards and Technology (NIST)]; Kulvatunyou, Boonserm [ORNL]; Morris, Katherine [National Institute of Standards and Technology (NIST)]; Harvey, Betty [Electronic Commerce Connection, Inc.]

    2006-08-01

    We are building a methodology and tool kit for encoding XML schema Naming and Design Rules (NDRs) in a computer-interpretable fashion, enabling automated rule enforcement and improving schema quality. Through our experience implementing rules from various NDR specifications, we discuss some issues and offer practical guidance to organizations grappling with NDR development.

  17. A top-down approach for the prediction of hardness and toughness of hierarchical materials

    International Nuclear Information System (INIS)

    Carpinteri, Alberto; Paggi, Marco

    2009-01-01

    Many natural and man-made materials exhibit structure over more than one length scale. In this paper, we deal with hierarchical grained composite materials that have recently been designed to achieve superior hardness and toughness as compared to their traditional counterparts. Their nested structure, where meso-grains are recursively composed of smaller and smaller micro-grains at the different scales with a fractal-like topology, is herein studied from a hierarchical perspective. Considering a top-down approach, i.e. from the largest to the smallest scale, we propose a recursive micromechanical model coupled with a generalized fractal mixture rule for the prediction of hardness and toughness of a grained material with n hierarchical levels. A relationship between hardness and toughness is also derived and the analytical predictions are compared with experimental data.
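    To fix ideas, a plain (non-fractal) recursive rule of mixtures over the hierarchical levels can be written as below. This is only a structural sketch of the top-down recursion; the generalized fractal mixture rule proposed in the paper modifies this simple linear form, and the notation here is an assumption rather than the authors'.

    ```latex
    % Structural sketch of a recursive rule of mixtures for a hierarchical grained
    % material with levels k = 1..n (level 1 coarsest). H_i^{(k+1)} is the hardness of
    % the i-th constituent at the next-finer level and f_i^{(k)} its volume fraction.
    \begin{equation}
      H^{(k)} = \sum_{i} f_i^{(k)} \, H_i^{(k+1)}, \qquad k = n-1, \dots, 1
    \end{equation}
    ```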

  18. Value and Vision-based Methodology in Integrated Design

    DEFF Research Database (Denmark)

    Tollestrup, Christian

    This thesis is based on empirical data from workshops where the Value and Vision-based methodology has been taught. The research approach chosen for this investigation is Action Research, where the researcher plays an active role in generating the data and gains a deeper understanding of the investigated phenomena. The result of this thesis is the value transformation from an explicit set of values to a product concept, using a vision-based concept development methodology based on the Pyramid Model (Lerdahl, 2001) in a design team context. The aim of this thesis is to examine how the process of value transformation occurs within this setting. The thesis is divided in three parts: the systemic unfolding of the Value and Vision-based methodology, the structured presentation of the practical implementation of the methodology, and finally the analysis and conclusions regarding the value transformation, phenomena and learning aspects of the methodology.

  19. Graph-based linear scaling electronic structure theory

    Energy Technology Data Exchange (ETDEWEB)

    Niklasson, Anders M. N., E-mail: amn@lanl.gov; Negre, Christian F. A.; Cawkwell, Marc J.; Swart, Pieter J.; Germann, Timothy C.; Bock, Nicolas [Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Mniszewski, Susan M.; Mohd-Yusof, Jamal; Wall, Michael E.; Djidjev, Hristo [Computer, Computational, and Statistical Sciences Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Rubensson, Emanuel H. [Division of Scientific Computing, Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala (Sweden)

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  20. Sodium-cutting: a new top-down approach to cut open nanostructures on nonplanar surfaces on a large scale.

    Science.gov (United States)

    Chen, Wei; Deng, Da

    2014-11-11

    We report a new, low-cost and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ∼100% of carbon nanospheres into nanobowls on a large scale from Sn@C nanospheres for the first time.

  1. Critical infrastructure systems of systems assessment methodology.

    Energy Technology Data Exchange (ETDEWEB)

    Sholander, Peter E.; Darby, John L.; Phelan, James M.; Smith, Bryan; Wyss, Gregory Dane; Walter, Andrew; Varnado, G. Bruce; Depoy, Jennifer Mae

    2006-10-01

    Assessing the risk of malevolent attacks against large-scale critical infrastructures requires modifications to existing methodologies that separately consider physical security and cyber security. This research has developed a risk assessment methodology that explicitly accounts for both physical and cyber security, while preserving the traditional security paradigm of detect, delay, and respond. This methodology also accounts for the condition that a facility may be able to recover from or mitigate the impact of a successful attack before serious consequences occur. The methodology uses evidence-based techniques (which are a generalization of probability theory) to evaluate the security posture of the cyber protection systems. Cyber threats are compared against cyber security posture using a category-based approach nested within a path-based analysis to determine the most vulnerable cyber attack path. The methodology summarizes the impact of a blended cyber/physical adversary attack in a conditional risk estimate where the consequence term is scaled by a "willingness to pay" avoidance approach.

  2. Strategy-Driven Exploration for Rule-Based Models of Biochemical Systems with Porgy

    OpenAIRE

    Andrei , Oana; Fernández , Maribel; Kirchner , Hélène; Pinaud , Bruno

    2016-01-01

    This paper presents Porgy – an interactive visual environment for rule-based modelling of biochemical systems. We model molecules and molecule interactions as port graphs and port graph rewrite rules, respectively. We use rewriting strategies to control which rules to apply, and where and when to apply them. Our main contributions to rule-based modelling of biochemical systems lie in the strategy language and the associated visual and interactive features offered by Porgy. These features facilitate...

  3. Parametric analysis of a down-scaled turbo jet engine suitable for drone and UAV propulsion

    Science.gov (United States)

    Wessley, G. Jims John; Chauhan, Swati

    2018-04-01

    This paper presents a detailed study of the need for downscaling gas turbine engines for UAV and drone propulsion. The procedure for downscaling and the parametric analysis of a downscaled engine using the Gas Turbine Simulation Program software GSP 11 are also presented. The need for a micro gas turbine engine in the thrust range of 0.13 to 4.45 kN to power UAVs and drones weighing in the range of 4.5 to 25 kg is considered, and in order to meet this requirement a parametric analysis of the scaled-down Allison J33-A-35 turbojet engine is performed. It is evident from the analysis that the thrust developed by the scaled engine and the thrust-specific fuel consumption (TSFC) depend on the pressure ratio, the air mass flow rate and the Mach number. A scaling factor of 0.195, corresponding to an air mass flow rate of 7.69 kg/s, produces thrust in the range of 4.57 to 5.6 kN while operating at a Mach number of 0.3 at altitudes of 5000 to 9000 m. The thermal and overall efficiencies of the scaled engine are found to be 67% and 75%, respectively, for a pressure ratio of 2. The outcomes of this analysis form a strong base for further analysis, design and fabrication of micro gas turbine engines to propel future UAVs and drones.

  4. THE METHODOLOGY USED TO INTERPRET CUSTOMARY LAND TENURE

    Directory of Open Access Journals (Sweden)

    Gerrit Pienaar

    2012-09-01

    Customary land tenure is normally not based on codified or statutory sources, but stems from customary traditions and norms. When westernised courts have to interpret and adjudicate these customary traditions and norms, the normal rules of statutory interpretation cannot be followed. The court has to rely on evidence of the traditional values of land use to determine the rules connected to land tenure. Previously, courts in many mixed jurisdictions relied on common or civil law legal principles to determine the nature of customary land tenure and lay down the principles to adjudicate customary land disputes among traditional communities, or between traditional and westernised communities in the same jurisdiction. Many examples of such a westernised approach can be found in the case law of Canada and South Africa. The interpretation of the nature of customary land tenure according to common law or civil law principles has been increasingly rejected by higher courts in South Africa and Canada, e.g. in Alexkor Ltd v The Richtersveld Community 2004 5 SA 469 (CC) and Delgamuukw v British Columbia 1997 3 SCR 1010. This paper explores the methodology the courts should follow to determine the distinctive nature of customary land tenure. As customary land tenure is not codified or based on legislation, the court has to rely, in addition to the evidence of indigenous peoples, on the expert evidence of anthropologists and sociologists in determining the nature of aboriginal title (in Canada) and indigenous land tenure (in South Africa). The court must approach the rules of evidence and interpret the evidence with a consciousness of the special nature of aboriginal claims and the evidentiary difficulties in proving a right which originates in times when there were no written records of the practices, customs and traditions engaged in. The court must not undervalue the evidence presented simply because that evidence does not conform precisely with the...

  5. Idioms-based Business Rule Extraction

    NARCIS (Netherlands)

    R Smit (Rob)

    2011-01-01

    This thesis studies the extraction of embedded business rules, using the idioms of the framework in use to identify them. Embedded business rules exist as source code in the software system, and knowledge about them may get lost. Extraction of those business rules could make them accessible.

  6. Changing from a Rules-based to a Principles-based Accounting Logic: A Review

    Directory of Open Access Journals (Sweden)

    Marta Silva Guerreiro

    2014-06-01

    We explore influences on unlisted companies when Portugal moved from a code-law, rules-based accounting system to a principles-based accounting system of adapted International Financial Reporting Standards (IFRS). Institutionalisation of the new principles-based system was generally facilitated by a socio-economic and political context that increasingly supported IFRS logic. This helped central actors gain political opportunity, mobilise important allies, and accommodate major protagonists. The preparedness of unlisted companies to adopt the new IFRS-based accounting system voluntarily was explained by their desire to maintain social legitimacy. However, it was affected negatively by the embeddedness of rule-based practices in the 'old' prevailing institutional logic.

  7. Mathematics interventions for children and adolescents with Down syndrome: a research synthesis.

    Science.gov (United States)

    Lemons, C J; Powell, S R; King, S A; Davidson, K A

    2015-08-01

    Many children and adolescents with Down syndrome fail to achieve proficiency in mathematics. Researchers have suggested that tailoring interventions based on the behavioural phenotype may enhance efficacy. The research questions that guided this review were (1) what types of mathematics interventions have been empirically evaluated with children and adolescents with Down syndrome?; (2) do the studies demonstrate sufficient methodological rigor?; (3) is there evidence of efficacy for the evaluated mathematics interventions?; and (4) to what extent have researchers considered aspects of the behavioural phenotype in selecting, designing and/or implementing mathematics interventions for children and adolescents with Down syndrome? Nine studies published between 1989 and 2012 were identified for inclusion. Interventions predominantly focused on early mathematics skills and reported positive outcomes. However, no study met criteria for methodological rigor. Further, no authors explicitly considered the behavioural phenotype. Additional research using rigorous experimental designs is needed to evaluate the efficacy of mathematics interventions for children and adolescents with Down syndrome. Suggestions for considering the behavioural phenotype in future research are provided. © 2015 MENCAP and International Association of the Scientific Study of Intellectual and Developmental Disabilities and John Wiley & Sons Ltd.

  8. CONSTRAINING THE SPIN-DOWN TIMESCALE OF THE WHITE DWARF PROGENITORS OF TYPE Ia SUPERNOVAE

    International Nuclear Information System (INIS)

    Meng, Xiangcun; Podsiadlowski, Philipp

    2013-01-01

    Justham and Di Stefano et al. proposed that the white dwarf progenitor of a Type Ia supernova (SN Ia) may have to spin down before it can explode. As the white dwarf spin-down timescale is not well known theoretically, here we try to constrain it empirically (within the framework of this spin-down model) for progenitor systems that contain a giant donor and for which circumbinary material has been detected after the explosion: we obtain an upper limit of a few 10^7 yr. Based on the study of Di Stefano and Kilic, this means that it is too early to rule out the existence of a surviving companion in SNR 0509–67.5

  9. Optimal quadrature rules for odd-degree spline spaces and their application to tensor-product-based isogeometric analysis

    KAUST Repository

    Barton, Michael

    2016-03-14

    We introduce optimal quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. Using the homotopy continuation concept (Bartoň and Calo, 2016) that transforms optimal quadrature rules from source spaces to target spaces, we derive optimal rules for splines defined on finite domains. Starting with the classical Gaussian quadrature for polynomials, which is an optimal rule for a discontinuous odd-degree space, we derive rules for target spaces of higher continuity. We further show how the homotopy methodology handles cases where the source and target rules require different numbers of optimal quadrature points. We demonstrate it by deriving optimal rules for various odd-degree spline spaces, particularly with non-uniform knot sequences and non-uniform multiplicities. We also discuss convergence of our rules to their asymptotic counterparts, that is, the analogues of the midpoint rule of Hughes et al. (2010), that are exact and optimal for infinite domains. For spaces of low continuities, we numerically show that the derived rules quickly converge to their asymptotic counterparts as the weights and nodes of a few boundary elements differ from the asymptotic values.
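    For context, the classical Gauss-Legendre quadrature that serves as the starting point of the homotopy described above uses n nodes to integrate every polynomial of degree up to 2n-1 exactly. The snippet below is a quick numerical check of that property with NumPy; the spline-space rules derived in the paper are not reproduced here.

    ```python
    import numpy as np

    # Classical Gauss-Legendre quadrature on [-1, 1]: n nodes are exact for all
    # polynomials of degree <= 2n - 1. This is the "source rule" from which the
    # homotopy continuation derives rules for smoother (spline) target spaces.
    n = 3
    nodes, weights = np.polynomial.legendre.leggauss(n)

    def quad(f):
        return float(np.dot(weights, f(nodes)))

    print(quad(lambda x: x**4))              # exact value 2/5 = 0.4 (degree 4 <= 5)
    print(quad(lambda x: np.ones_like(x)))   # exact value 2.0 (length of the interval)
    ```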

  10. Optimal quadrature rules for odd-degree spline spaces and their application to tensor-product-based isogeometric analysis

    KAUST Repository

    Barton, Michael; Calo, Victor M.

    2016-01-01

    We introduce optimal quadrature rules for spline spaces that are frequently used in Galerkin discretizations to build mass and stiffness matrices. Using the homotopy continuation concept (Bartoň and Calo, 2016) that transforms optimal quadrature rules from source spaces to target spaces, we derive optimal rules for splines defined on finite domains. Starting with the classical Gaussian quadrature for polynomials, which is an optimal rule for a discontinuous odd-degree space, we derive rules for target spaces of higher continuity. We further show how the homotopy methodology handles cases where the source and target rules require different numbers of optimal quadrature points. We demonstrate it by deriving optimal rules for various odd-degree spline spaces, particularly with non-uniform knot sequences and non-uniform multiplicities. We also discuss convergence of our rules to their asymptotic counterparts, that is, the analogues of the midpoint rule of Hughes et al. (2010), that are exact and optimal for infinite domains. For spaces of low continuities, we numerically show that the derived rules quickly converge to their asymptotic counterparts as the weights and nodes of a few boundary elements differ from the asymptotic values.

  11. Opinion evolution based on cellular automata rules in small world networks

    Science.gov (United States)

    Shi, Xiao-Ming; Shi, Lun; Zhang, Jie-Fang

    2010-03-01

    In this paper, we apply cellular automata rules, which can be given by a truth table, to human memory. We model each agent's memory as a tracking-survey record that keeps the most recent three opinions. Each cellular automaton rule, acting as a personal mechanism, gives the final opinion in one time period based on the data stored in that agent's memory. The key focus of the paper is the evolution of people's attitudes to the same question. Based on a large number of empirical observations from computer simulations, all the rules can be classified into 20 groups. We highlight the fact that the phenomenon shown by some rules belonging to the same group will be altered within several steps by other rules in different groups. Strikingly, when compared with the history of presidential voting in America, the eras of important events in America's history coincide with the simulation results obtained by our model.
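    The memory-plus-truth-table mechanism can be sketched as follows: each agent keeps its three most recent binary opinions, and an 8-entry truth table (encoded Wolfram-style in a rule number) maps that memory to the next opinion. The rule number, the synchronous update and the omission of neighbour influence through the small-world network are all illustrative simplifications, not the paper's full model.

    ```python
    import numpy as np

    def opinion_step(memories, rule_number):
        """One synchronous update of all agents.

        Each agent's memory holds its three most recent binary opinions (oldest first).
        The 8 bits of `rule_number` form a truth table mapping the 3-bit memory to the
        agent's next opinion, after which the memory window shifts by one step.
        """
        table = [(rule_number >> i) & 1 for i in range(8)]
        new_memories = []
        for m in memories:
            index = m[0] * 4 + m[1] * 2 + m[2]               # 3-bit memory -> table index
            new_opinion = table[index]
            new_memories.append([m[1], m[2], new_opinion])   # keep only the last three
        return new_memories

    rng = np.random.default_rng(42)
    memories = rng.integers(0, 2, size=(10, 3)).tolist()     # 10 agents, random histories
    for _ in range(5):
        memories = opinion_step(memories, rule_number=110)   # rule 110 chosen arbitrarily
    print(memories)
    ```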

  12. Hierarchical graphs for rule-based modeling of biochemical systems

    Directory of Open Access Journals (Sweden)

    Hu Bin

    2011-02-01

    Background: In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results: For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions: Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for...

  13. Thermal stratification in a scaled-down suppression pool of the Fukushima Daiichi nuclear power plants

    Energy Technology Data Exchange (ETDEWEB)

    Jo, Byeongnam, E-mail: jo@vis.t.u-tokyo.ac.jp [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Erkan, Nejdet [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Takahashi, Shinji [Department of Nuclear Engineering and Management, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Song, Daehun [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan); Hyundai and Kia Corporate R& D Division, Hyundai Motors, 772-1, Jangduk-dong, Hwaseong-Si, Gyeonggi-Do 445-706 (Korea, Republic of); Sagawa, Wataru; Okamoto, Koji [Nuclear Professional School, The University of Tokyo, 2-22 Shirakata, Tokai-mura, Ibaraki 319-1188 (Japan)

    2016-08-15

    Highlights: • Thermal stratification was reproduced in a scaled-down suppression pool of the Fukushima Daiichi nuclear power plants. • Horizontal temperature profiles were uniform in the toroidal suppression pool. • Subcooling-steam flow rate map of thermal stratification was obtained. • Steam bubble-induced flow model in suppression pool was suggested. • Bubble frequency strongly depends on the steam flow rate. - Abstract: Thermal stratification in the suppression pool of the Fukushima Daiichi nuclear power plants was experimentally investigated in sub-atmospheric pressure conditions using a 1/20 scale torus shaped setup. The thermal stratification was reproduced in the scaled-down suppression pool and the effect of the steam flow rate on different thermal stratification behaviors was examined for a wide range of steam flow rates. A sparger-type steam injection pipe that emulated Fukushima Daiichi Unit 3 (F1U3) was used. The steam was injected horizontally through 132 holes. The development (formation and disappearance) of thermal stratification was significantly affected by the steam flow rate. Interestingly, the thermal stratification in the suppression pool vanished when subcooling became lower than approximately 5 °C. This occurred because steam bubbles are not well condensed at low subcooling temperatures; therefore, those bubbles generate significant upward momentum, leading to mixing of the water in the suppression pool.

  14. H2POWER: Development of a methodology to calculate life cycle cost of small and medium-scale hydrogen systems

    International Nuclear Information System (INIS)

    Verduzco, Laura E.; Duffey, Michael R.; Deason, Jonathan P.

    2007-01-01

    At this time, hydrogen-based power plants and large hydrogen production facilities are capital intensive and unable to compete financially against hydrocarbon-based energy production facilities. An option to overcome this problem and foster the introduction of hydrogen technology is to introduce small and medium-scale applications such as residential and community hydrogen refueling units. Such units could potentially be used to generate both electricity and heat for the home, as well as hydrogen fuel for the automobile. Cost modeling for the integration of these three forms of energy presents several methodological challenges. This is particularly true since the technology is still in the development phase and both the financial and the environmental cost must be calculated using mainly secondary sources. In order to address these issues and aid in the design of small and medium-scale hydrogen systems, this study presents a computer model to calculate financial and environmental costs of this technology using different hydrogen pathways. The model can design and compare hydrogen refueling units against hydrocarbon-based technologies, including the 'gap' between financial and economic costs. Using the methodology, various penalties and incentives that can foster the introduction of hydrogen-based technologies can be added to the analysis to study their impact on financial cost

  15. Scaling Analysis of the Single-Phase Natural Circulation: the Hydraulic Similarity

    International Nuclear Information System (INIS)

    Yu, Xin-Guo; Choi, Ki-Yong

    2015-01-01

    These passive safety systems all rely on natural circulation to cool down the reactor cores during an accident. Thus, a robust and accurate scaling methodology must be developed and employed both to assist in the design of a scaled-down test facility and to guide the tests in order to mimic the natural circulation flow of the prototype. The natural circulation system generally consists of a heat source, the connecting pipes and several heat sinks. Although many scaling methodologies have been proposed and applauded during the last several decades, few works have been dedicated to systematically analyzing and exactly preserving the hydraulic similarity. In the present study, hydraulic similarity analyses are performed at both the system and local levels. By this means, the scaling criteria for exact hydraulic similarity in a full-pressure model have been sought. In other words, not only the system-level but also the local-level hydraulic similarities are pursued. As the hydraulic characteristics of a fluid system are governed by the momentum equation, the scaling analysis starts with it. A dimensionless integral loop momentum equation is derived to obtain the dimensionless numbers. In this dimensionless momentum equation, two dimensionless numbers, the dimensionless flow resistance number and the dimensionless gravitational force number, are identified along with a unique hydraulic time scale characterizing the system hydraulic response. A full-height full-pressure model is also made to see which model, the full-height model or the reduced-height model, can preserve the hydraulic behavior of the prototype. From the dimensionless integral momentum equation, a unique hydraulic time scale, which characterizes the hydraulic response of a single-phase natural circulation system, is identified along with two dimensionless parameters: the dimensionless flow resistance number and the dimensionless gravitational force number. By satisfying the equality of both dimensionless numbers

  16. Scaling Analysis of the Single-Phase Natural Circulation: the Hydraulic Similarity

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Xin-Guo; Choi, Ki-Yong [KAERI, Daejeon (Korea, Republic of)

    2015-05-15

    These passive safety systems all rely on natural circulation to cool down the reactor cores during an accident. Thus, a robust and accurate scaling methodology must be developed and employed both to assist in the design of a scaled-down test facility and to guide the tests in order to mimic the natural circulation flow of the prototype. The natural circulation system generally consists of a heat source, the connecting pipes and several heat sinks. Although many scaling methodologies have been proposed and applauded during the last several decades, few works have been dedicated to systematically analyzing and exactly preserving the hydraulic similarity. In the present study, hydraulic similarity analyses are performed at both the system and local levels. By this means, the scaling criteria for exact hydraulic similarity in a full-pressure model have been sought. In other words, not only the system-level but also the local-level hydraulic similarities are pursued. As the hydraulic characteristics of a fluid system are governed by the momentum equation, the scaling analysis starts with it. A dimensionless integral loop momentum equation is derived to obtain the dimensionless numbers. In this dimensionless momentum equation, two dimensionless numbers, the dimensionless flow resistance number and the dimensionless gravitational force number, are identified along with a unique hydraulic time scale characterizing the system hydraulic response. A full-height full-pressure model is also made to see which model, the full-height model or the reduced-height model, can preserve the hydraulic behavior of the prototype. From the dimensionless integral momentum equation, a unique hydraulic time scale, which characterizes the hydraulic response of a single-phase natural circulation system, is identified along with two dimensionless parameters: the dimensionless flow resistance number and the dimensionless gravitational force number. By satisfying the equality of both dimensionless numbers
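
    A minimal sketch of the kind of nondimensionalization described in the two records above (the notation, the form of the resistance term and the choice of reference time are ours, not necessarily the authors'): writing the integral loop momentum balance and scaling it with a reference flow rate and a hydraulic time scale exposes exactly two groups, a gravitational (buoyancy) force number and a flow resistance number.

      % Illustrative integral loop momentum balance for single-phase natural
      % circulation (rho = density, beta = thermal expansion coefficient,
      % H = elevation difference between heat source and sink, Q = flow rate):
      \[
        \rho \sum_i \frac{L_i}{A_i}\,\frac{dQ}{dt}
          \;=\; \rho g \beta\,\Delta T\,H \;-\; R\,Q^{2},
        \qquad
        R \;=\; \sum_i \frac{\rho}{2A_i^{2}}\!\left(\frac{f_i L_i}{D_i}+K_i\right).
      \]
      % With Q* = Q/Q_0 and t* = t/tau_h (tau_h a reference hydraulic time,
      % e.g. the loop transit time), this collapses to
      \[
        \frac{dQ^{*}}{dt^{*}}
          \;=\; \Pi_{g}\,\frac{\Delta T}{\Delta T_{0}} \;-\; \Pi_{R}\,Q^{*2},
        \qquad
        \Pi_{g}=\frac{g\beta\,\Delta T_{0}\,H\,\tau_h}{Q_{0}\sum_i L_i/A_i},
        \qquad
        \Pi_{R}=\frac{R\,Q_{0}\,\tau_h}{\rho\sum_i L_i/A_i},
      \]
      % so hydraulic similarity between prototype and scaled-down model requires
      % matching both Pi_g (gravitational force number) and Pi_R (flow
      % resistance number).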

  17. Assessing Financial Education Methods: Principles vs. Rules-of-Thumb Approaches

    Science.gov (United States)

    Skimmyhorn, William L.; Davies, Evan R.; Mun, David; Mitchell, Brian

    2016-01-01

    Despite thousands of programs and tremendous public and private interest in improving financial decision-making, little is known about how best to teach financial education. Using an experimental approach, the authors estimated the effects of two different education methodologies (principles-based and rules-of-thumb) on the knowledge,…

  18. Scaling of two-phase flow transients using reduced pressure system and simulant fluid

    International Nuclear Information System (INIS)

    Kocamustafaogullari, G.; Ishii, M.

    1987-01-01

    Scaling criteria for a natural circulation loop under single-phase flow conditions are derived. Based on these criteria, practical applications for designing a scaled-down model are considered. Particular emphasis is placed on scaling a test model at reduced pressure levels compared to a prototype and on fluid-to-fluid scaling. The large number of similarity groups which are to be matched between model and prototype makes the design of a scale model a challenging task. The present study demonstrates a new approach to this classical problem using two-phase flow scaling parameters. It indicates that real-time scaling is not a practical solution and that a scaled-down model should have an accelerated (shortened) time scale. An important result is the proposed new scaling methodology for simulating pressure transients. It is obtained by considering the changes of the fluid property groups which appear within the two-phase similarity parameters and the single-phase to two-phase flow transition parameters. Sample calculations are performed for modeling two-phase flow transients of a high-pressure water system by a low-pressure water system or a Freon system. It is shown that modeling is possible in both cases for simulating pressure transients. However, simulation of phase change transitions is not possible by a reduced-pressure water system without distortion in either power or time. (orig.)

  19. A Rule Based Approach to ISS Interior Volume Control and Layout

    Science.gov (United States)

    Peacock, Brian; Maida, Jim; Fitts, David; Dory, Jonathan

    2001-01-01

    Traditional human factors design involves the development of human factors requirements based on a desire to accommodate a certain percentage of the intended user population. As the product is developed human factors evaluation involves comparison between the resulting design and the specifications. Sometimes performance metrics are involved that allow leniency in the design requirements given that the human performance result is satisfactory. Clearly such approaches may work but they give rise to uncertainty and negotiation. An alternative approach is to adopt human factors design rules that articulate a range of each design continuum over which there are varying outcome expectations and interactions with other variables, including time. These rules are based on a consensus of human factors specialists, designers, managers and customers. The International Space Station faces exactly this challenge in interior volume control, which is based on anthropometric, performance and subjective preference criteria. This paper describes the traditional approach and then proposes a rule-based alternative. The proposed rules involve spatial, temporal and importance dimensions. If successful this rule-based concept could be applied to many traditional human factors design variables and could lead to a more effective and efficient contribution of human factors input to the design process.

  20. Risk-based methodology for USNRC inspections

    International Nuclear Information System (INIS)

    Wong, S.M.; Holahan, G.M.; Chung, J.W.; Johnson, M.R.

    1995-01-01

    This paper describes the development and trial applications of a risk-based methodology to enhance the inspection processes for US nuclear power plants. Objectives of risk-based methods to complement prescriptive engineering approaches in US Nuclear Regulatory Commission (USNRC) inspection programs are presented. Insights from time-dependent risk profiles of plant configurations from Individual Plant Evaluation (IPE) studies were integrated to develop a framework for optimizing inspection efforts in NRC regulatory initiatives. Lessons learned from NRC pilot applications of the risk-based methodology for evaluating the effectiveness of operational risk management programs at US nuclear power plant sites are also discussed.

  1. A proposed heuristic methodology for searching reloading pattern

    International Nuclear Information System (INIS)

    Choi, K. Y.; Yoon, Y. K.

    1993-01-01

    A new heuristic method for loading pattern search has been developed to overcome shortcomings of the algorithmic approach. To reduce the size of the vast solution space, general shuffling rules, a regionwise shuffling method, and a pattern grouping method were introduced. Entropy theory was applied to classify possible loading patterns into groups according to the similarity between them. The pattern search program was implemented in the PROLOG language. A two-group nodal code, MEDIUM-2D, was used for analysis of the power distribution in the core. The above-mentioned methodology has been tested and shown to be effective in reducing the solution space down to a few hundred pattern groups. Burnable poison rods were then arranged in each pattern group in accordance with burnable poison distribution rules, which led to further reduction of the solution space to several scores of acceptable pattern groups. The methods of maximizing cycle length (MCL) and minimizing the power-peaking factor (MPF) were applied to search for specific useful loading patterns among the acceptable pattern groups. Thus, several specific loading patterns with a low power-peaking factor and a long cycle length were successfully identified from the selected pattern groups. (Author)
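
    A rough sketch of the entropy-based grouping idea (the paper's actual similarity measure and pattern encoding are not given here, so the signature used below — Shannon entropy of a binned burnup distribution — and the grouping tolerance are illustrative assumptions, not the authors' method):

      # Illustrative only: group candidate loading patterns whose entropy
      # signatures agree within a tolerance.
      import math
      from collections import Counter, defaultdict

      def pattern_entropy(pattern, bins=10):
          """Shannon entropy of the binned distribution of normalized burnups."""
          counts = Counter(min(int(b * bins), bins - 1) for b in pattern)
          total = sum(counts.values())
          return -sum((c / total) * math.log2(c / total) for c in counts.values())

      def group_patterns(patterns, tol=0.05):
          """Patterns with entropy signatures within `tol` of each other share a group."""
          groups = defaultdict(list)
          for p in patterns:
              groups[round(pattern_entropy(p) / tol)].append(p)
          return list(groups.values())

      # Toy example: normalized burnups (0..1) for three candidate patterns.
      candidates = [[0.1, 0.5, 0.9, 0.5], [0.2, 0.5, 0.8, 0.5], [0.5, 0.5, 0.5, 0.5]]
      print(len(group_patterns(candidates)), "pattern groups")   # -> 2 pattern groups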

  2. [Methodological aspects in the evaluation of turn-over and up/down sizing as indicators of work-related stress].

    Science.gov (United States)

    Veronesi, G; Bertù, L; Mombelli, S; Cimmino, L; Caravati, G; Conti, M; Abate, T; Ferrario, M M

    2011-01-01

    We discuss the methodological aspects related to the evaluation of turn-over and up/down sizing as indicators of work-related stress in complex organizations such as a university hospital. To estimate the active worker population, we developed an algorithm which integrated several administrative databases. The indicators were standardized to take into account some potential confounders (age, sex, work seniority) when considering different hospital structures and job tasks. The main advantages of our method include flexibility in the choice of the level of analysis (hospital units, job tasks, or a combination of both) and the possibility of describing trends over time to measure the success of preventive strategies.

  3. New methodologies for calculation of flight parameters on reduced-scale wing models in wind tunnel

    Science.gov (United States)

    Ben Mosbah, Abdallah

    In order to improve the quality of wind tunnel tests and of the tools used to perform aerodynamic tests on aircraft wings in the wind tunnel, new methodologies were developed and tested on rigid and flexible wing models. The flexible wing concept consists of replacing a portion (lower and/or upper) of the skin with a flexible portion whose shape can be changed using an actuation system installed inside the wing. The main purpose of this concept is to improve the aerodynamic performance of the aircraft, and especially to reduce its fuel consumption. Numerical and experimental analyses were conducted to develop and test the methodologies proposed in this thesis. To control the flow inside the test section of the Price-Paidoussis wind tunnel of LARCASE, numerical and experimental analyses were performed. Computational fluid dynamics calculations were made in order to obtain a database used to develop a new hybrid methodology for wind tunnel calibration. This approach allows the flow in the test section of the Price-Paidoussis wind tunnel to be controlled. For the fast determination of aerodynamic parameters, new hybrid methodologies were proposed. These methodologies were used to control flight parameters by calculating the drag, lift and pitching moment coefficients and the pressure distribution around an airfoil. These aerodynamic coefficients were calculated from known airflow conditions such as the angle of attack and the Mach and Reynolds numbers. In order to modify the shape of the wing skin, electric actuators were installed inside the wing to obtain the desired shape. These deformations provide optimal profiles for different flight conditions in order to reduce fuel consumption. A controller based on neural networks was implemented to obtain the desired actuator displacements. A metaheuristic algorithm was used in hybridization with neural network and support vector machine approaches and their

  4. Improving Intrusion Detection System Based on Snort Rules for Network Probe Attacks Detection with Association Rules Technique of Data Mining

    Directory of Open Access Journals (Sweden)

    Nattawat Khamphakdee

    2015-07-01

    Full Text Available The intrusion detection system (IDS) is an important network security tool for securing computer and network systems. It is able to detect and monitor network traffic data. Snort IDS is an open-source network security tool. It can search and match rules against network traffic data in order to detect attacks and generate an alert. However, the Snort IDS can detect only known attacks. Therefore, we have proposed a procedure for improving Snort IDS rules, based on the association rules data mining technique, for the detection of network probe attacks. We employed the MIT-DARPA 1999 data set for the experimental evaluation. Since behavior pattern traffic data are both normal and abnormal, the abnormal behavior data are detected by way of the Snort IDS. The experimental results showed that the proposed Snort IDS rules, based on data mining detection of network probe attacks, proved more efficient than the original Snort IDS rules, as well as the icmp.rules and icmp-info.rules of Snort IDS. The suitable parameters for the proposed Snort IDS rules are defined as follows: Min_sup set to 10% and Min_conf set to 100%, with eight variable attributes applied. As more suitable parameters are applied, higher accuracy is achieved.
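
    A minimal sketch of the underlying association-rule step, using the thresholds quoted above (Min_sup = 10%, Min_conf = 100%); the toy connection records and attribute names are invented, and this is not the authors' implementation:

      # Frequent attribute pairs and the single-antecedent rules they generate.
      from itertools import combinations

      records = [  # toy connection records as sets of attribute=value items
          {"proto=icmp", "flag=echo", "dst_port=0"},
          {"proto=icmp", "flag=echo", "dst_port=0"},
          {"proto=tcp", "flag=syn", "dst_port=80"},
      ]
      MIN_SUP, MIN_CONF = 0.10, 1.00

      def support(itemset):
          """Fraction of records containing every item of the itemset."""
          return sum(itemset <= r for r in records) / len(records)

      items = {i for r in records for i in r}
      for a, b in combinations(sorted(items), 2):
          sup = support({a, b})
          if sup >= MIN_SUP:
              conf = sup / support({a})
              if conf >= MIN_CONF:
                  print(f"{a} -> {b}  (sup={sup:.2f}, conf={conf:.2f})")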

  5. Merit-Based Incentive Payment System: Meaningful Changes in the Final Rule Brings Cautious Optimism.

    Science.gov (United States)

    Manchikanti, Laxmaiah; Helm Ii, Standiford; Calodney, Aaron K; Hirsch, Joshua A

    2017-01-01

    physicians to avoid penalties for non-reporting with reporting of just a single patient. In summary, CMS has provided substantial flexibility with mechanisms to avoid penalties, reporting for 90 continuous days, increasing the low volume threshold, changing the reporting burden and data thresholds and, finally, coordination between performance categories. The final rule has made MIPS more meaningful with bonuses for exceptional performance, the ability to report for 90 days, and to report on 50% of the patients in 2017 and 60% of the patients in 2018. The final rule also reduced the quality measures to 6, including only one outcome or high priority measure with elimination of cross cutting measure requirement. In addition, the final rule reduced the burden of ACI, improved the coordination of performance, reduced improvement activities burden from 60 points to 40 points, and finally improved coordination between performance categories. Multiple concerns remain regarding the reduction in scoring for quality improvement in future years, increase in proportion of MIPS scoring for resource use utilizing flawed, claims based methodology and the continuation of the disproportionate importance of ACI, an expensive program that can be onerous for providers which in many ways has not lived up to its promise. Key words: Medicare Access and CHIP Reauthorization Act of 2015, merit-based incentive payment system, quality performance measures, resource use, improvement activities, advancing care information performance category.

  6. A new scaling methodology for NO(x) emissions performance of gas burners and furnaces

    Science.gov (United States)

    Hsieh, Tse-Chih

    1997-11-01

    A general burner and furnace scaling methodology is presented, together with the resulting scaling model for NOx emissions performance of a broad class of swirl-stabilized industrial gas burners. The model is based on results from a set of novel burner scaling experiments on a generic gas burner and furnace design at five different scales having near-uniform geometric, aerodynamic, and thermal similarity and uniform measurement protocols. These provide the first NOx scaling data over the range of thermal scales from 30 kW to 12 MW, including input-output measurements as well as detailed in-flame measurements of NO, NOx, CO, O2, unburned hydrocarbons, temperature, and velocities at each scale. The in-flame measurements allow identification of key sources of NOx production. The underlying physics of these NOx sources lead to scaling laws for their respective contributions to the overall NOx emissions performance. It is found that the relative importance of each source depends on the burner scale and operating conditions. Simple furnace residence time scaling is shown to be largely irrelevant, with NOx emissions instead being largely controlled by scaling of the near-burner region. The scalings for these NOx sources are combined in a comprehensive scaling model for NOx emission performance. Results from the scaling model show good agreement with experimental data at all burner scales and over the entire range of turndown, staging, preheat, and excess air dilution, with correlations generally exceeding 90%. The scaling model permits design trade-off assessments for a broad class of burners and furnaces, and allows performance of full industrial scale burners and furnaces of this type to be inferred from results of small scale tests.

  7. The impact of pH inhomogeneities on CHO cell physiology and fed-batch process performance - two-compartment scale-down modelling and intracellular pH excursion.

    Science.gov (United States)

    Brunner, Matthias; Braun, Philipp; Doppler, Philipp; Posch, Christoph; Behrens, Dirk; Herwig, Christoph; Fricke, Jens

    2017-07-01

    Due to high mixing times and base addition from the top of the vessel, pH inhomogeneities are most likely to occur during large-scale mammalian processes. The goal of this study was to set up a scale-down model of a 10–12 m³ stirred tank bioreactor and to investigate the effect of pH perturbations on CHO cell physiology and process performance. Short-term changes in extracellular pH are hypothesized to affect intracellular pH and thus cell physiology. Therefore, batch fermentations, including pH shifts to 9.0 and 7.8, are conducted in regular one-compartment systems. The short-term adaptation of the cells' intracellular pH shows an immediate increase due to the elevated extracellular pH. With this basis of fundamental knowledge, a two-compartment system is established which is capable of simulating defined pH inhomogeneities. In contrast to state-of-the-art literature, the scale-down model includes parameters (e.g., the volume of the inhomogeneous zone) as they might occur during large-scale processes. pH inhomogeneity studies in the two-compartment system are performed with simulation of temporary zones of pH 9.0. The specific growth rate, especially during the exponential growth phase, is strongly affected, resulting in a decreased maximum viable cell density and final product titer. The gathered results indicate that even short-term exposure of cells to elevated pH values during large-scale processes can affect cell physiology and overall process performance. In particular, it could be shown for the first time that pH perturbations, which might occur during the early process phase, have to be considered in scale-down models of mammalian processes. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Developing an Informant Questionnaire for Cognitive Abilities in Down Syndrome: The Cognitive Scale for Down Syndrome (CS-DS).

    Directory of Open Access Journals (Sweden)

    Carla M Startin

    Full Text Available Down syndrome (DS) is the most common genetic cause of intellectual disability (ID). Abilities relating to executive function, memory and language are particularly affected in DS, although there is a large variability across individuals. People with DS also show an increased risk of developing dementia. While assessment batteries have been developed for adults with DS to assess cognitive abilities, these batteries may not be suitable for those with more severe IDs, dementia, or visual/hearing difficulties. Here we report the development of an informant rated questionnaire, the Cognitive Scale for Down Syndrome (CS-DS), which focuses on everyday abilities relating to executive function, memory and language, and is suitable for assessing these abilities in all adults with DS regardless of cognitive ability. Complete questionnaires were collected about 128 individuals with DS. After final question selection we found high internal consistency scores across the total questionnaire and within the executive function, memory and language domains. CS-DS scores showed a wide range, with minimal floor and ceiling effects. We found high interrater (n = 55) and test retest (n = 36) intraclass correlations. CS-DS scores were significantly lower in those aged 41+ with significant cognitive decline compared to those without decline. Across all adults without cognitive decline, CS-DS scores correlated significantly to measures of general abilities. Exploratory factor analysis suggested five factors within the scale, relating to memory, self-regulation/inhibition, self-direction/initiation, communication, and focussing attention. The CS-DS therefore shows good interrater and test retest reliability, and appears to be a valid and suitable informant rating tool for assessing everyday cognitive abilities in a wide range of individuals with DS. Such a questionnaire may be a useful outcome measure for intervention studies to assess improvements to cognition, in

  9. Performance based regulation - The maintenance rule

    Energy Technology Data Exchange (ETDEWEB)

    Correia, Richard P. [NRR/DOTS/TQMP, U.S. Nuclear Regulatory Commission, Office of Nuclear Reactor Regulation, M/S OWFN 10A19, Washington, D.C. 20555 (United States)

    1997-07-01

    The U.S. Nuclear Regulatory Commission has begun a transition from 'process-oriented' to 'results-oriented' regulations. The maintenance rule is a results-oriented rule that mandates consideration of risk and plant performance. The Maintenance Rule allows licensees to devise the most effective and efficient means of achieving the results described in the rule including the use of Probabilistic Risk (or Safety) Assessments. The NRC staff conducted a series of site visits to evaluate implementation of the Rule. Conclusions from the site visits indicated that the results-oriented Maintenance Rule can be successfully implemented and enforced. (author)

  10. Performance based regulation - The maintenance rule

    International Nuclear Information System (INIS)

    Correia, Richard P.

    1997-01-01

    The U.S. Nuclear Regulatory Commission has begun a transition from 'process-oriented' to 'results-oriented' regulations. The maintenance rule is a results-oriented rule that mandates consideration of risk and plant performance. The Maintenance Rule allows licensees to devise the most effective and efficient means of achieving the results described in the rule including the use of Probabilistic Risk (or Safety) Assessments. The NRC staff conducted a series of site visits to evaluate implementation of the Rule. Conclusions from the site visits indicated that the results-oriented Maintenance Rule can be successfully implemented and enforced. (author)

  11. Methodologies, languages and tools

    International Nuclear Information System (INIS)

    Amako, Katsuya

    1994-01-01

    This is a summary of the "Methodologies, Languages and Tools" session in the CHEP'94 conference. All the contributions to methodologies and languages are relevant to the object-oriented approach. Other topics presented are related to various software tools in the down-sized computing environment

  12. Proofs, Upside Down

    DEFF Research Database (Denmark)

    Puech, Matthias

    2013-01-01

    It is well known in proof theory that sequent-calculus proofs differ from natural deduction proofs by "reversing" elimination rules upside down into left introduction rules. It is also well known that to each recursive, functional program corresponds an equivalent iterative, accumulator-passing program, where the accumulator stores the continuation of the iteration, in "reversed" order. Here, we compose these remarks and show that a restriction of the intuitionistic sequent calculus, LJT, is exactly an accumulator-passing version of intuitionistic natural deduction NJ. More precisely, we obtain this correspondence by applying a series of off-the-shelf program transformations à la Danvy et al. on a type checker for the bidirectional λ-calculus, and get a type checker for the λ¯-calculus, the proof term assignment of LJT. This functional correspondence revisits the relationship between natural deduction…

  13. Microfabricated modular scale-down device for regenerative medicine process development.

    Directory of Open Access Journals (Sweden)

    Marcel Reichen

    Full Text Available The capacity of milli and micro litre bioreactors to accelerate process development has been successfully demonstrated in traditional biotechnology. However, for regenerative medicine present smaller scale culture methods cannot cope with the wide range of processing variables that need to be evaluated. Existing microfabricated culture devices, which could test different culture variables with a minimum amount of resources (e.g. expensive culture medium), are typically not designed with process development in mind. We present a novel, autoclavable, and microfabricated scale-down device designed for regenerative medicine process development. The microfabricated device contains a re-sealable culture chamber that facilitates use of standard culture protocols, creating a link with traditional small-scale culture devices for validation and scale-up studies. Further, the modular design can easily accommodate investigation of different culture substrate/extra-cellular matrix combinations. Inactivated mouse embryonic fibroblast (iMEF) and human embryonic stem cell (hESC) colonies were successfully seeded on gelatine-coated tissue culture polystyrene (TC-PS) using standard static seeding protocols. The microfluidic chip included in the device offers precise and accurate control over the culture medium flow rate and resulting shear stresses in the device. Cells were cultured for two days with media perfused at 300 µl h⁻¹, resulting in a modelled shear stress of 1.1×10⁻⁴ Pa. Following perfusion, hESC colonies stained positively for different pluripotency markers and retained an undifferentiated morphology. An image processing algorithm was developed which permits quantification of co-cultured colony-forming cells from phase contrast microscope images. hESC colony sizes were quantified against the background of the feeder cells (iMEF) in less than 45 seconds for high-resolution images, which will permit real-time monitoring of culture progress in future
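
    For orientation, the modelled shear stress quoted above is of the magnitude that a standard parallel-plate (shallow rectangular channel) estimate gives; the channel width and height used below are assumed for illustration and are not taken from the paper.

      % Parallel-plate estimate of wall shear stress in a shallow culture chamber:
      \[
        \tau_w \;=\; \frac{6\,\mu\,Q}{w\,h^{2}}
      \]
      % With an aqueous medium (mu ~ 1e-3 Pa.s), Q = 300 uL/h ~ 8.3e-11 m^3/s,
      % and assumed dimensions w = 5 mm, h = 1 mm, this gives tau_w ~ 1e-4 Pa,
      % i.e. the order of the 1.1e-4 Pa modelled value reported above.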

  14. Medicare program; replacement of reasonable charge methodology by fee schedules for parenteral and enteral nutrients, equipment, and supplies. Final rule.

    Science.gov (United States)

    2001-08-28

    This final rule implements fee schedules for payment of parenteral and enteral nutrition (PEN) items and services furnished under the prosthetic device benefit, defined in section 1861(s)(8) of the Social Security Act. The authority for establishing these fee schedules is provided by the Balanced Budget Act of 1997, which amended the Social Security Act at section 1842(s). Section 1842(s) of the Social Security Act specifies that statewide or other area wide fee schedules may be implemented for the following items and services still subject to the reasonable charge payment methodology: medical supplies; home dialysis supplies and equipment; therapeutic shoes; parenteral and enteral nutrients, equipment, and supplies; electromyogram devices; salivation devices; blood products; and transfusion medicine. This final rule describes changes made to the proposed fee schedule payment methodology for these items and services and provides that the fee schedules for PEN items and services are effective for all covered items and services furnished on or after January 1, 2002. Fee schedules will not be implemented for electromyogram devices and salivation devices at this time since these items are not covered by Medicare. In addition, fee schedules will not be implemented for medical supplies, home dialysis supplies and equipment, therapeutic shoes, blood products, and transfusion medicine at this time since the data required to establish these fee schedules are inadequate.

  15. A self-learning rule base for command following in dynamical systems

    Science.gov (United States)

    Tsai, Wei K.; Lee, Hon-Mun; Parlos, Alexander

    1992-01-01

    In this paper, a self-learning Rule Base for command following in dynamical systems is presented. The learning is accomplished through reinforcement learning using an associative memory called SAM. The main advantage of SAM is that it is a function approximator with explicit storage of training samples. A learning algorithm patterned after dynamic programming is proposed. Two artificially created, unstable dynamical systems are used for testing, and the Rule Base was used to generate a feedback control to improve the command-following ability of the otherwise uncontrolled systems. The numerical results are very encouraging. The controlled systems exhibit a more stable behavior and a better capability to follow reference commands. The rules resulting from the reinforcement learning are explicitly stored and can be modified or augmented by human experts. Due to the overlapping storage scheme of SAM, the stored rules are similar to fuzzy rules.

  16. Conceptual design of a thermo-electrical energy storage system based on heat integration of thermodynamic cycles – Part A: Methodology and base case

    International Nuclear Information System (INIS)

    Morandin, Matteo; Maréchal, François; Mercangöz, Mehmet; Buchter, Florian

    2012-01-01

    The interest in large scale electricity storage (ES), with discharging time longer than 1 h and nominal power greater than 1 MW, is increasing worldwide as the increasing share of renewable energy, typically solar and wind energy, imposes severe load management issues. Thermo-electrical energy storage (TEES) based on thermodynamic cycles is currently under investigation at ABB corporate research as an alternative solution to pump hydro and compressed air energy storage. TEES is based on the conversion of electricity into thermal energy during charge by means of a heat pump and on the conversion of thermal energy into electricity during discharge by means of a thermal engine. The synthesis and the thermodynamic optimization of a TEES system based on hot water, ice storage and transcritical CO2 cycles are discussed in two papers. In this first paper a methodology for the conceptual design of a TEES system based on the analysis of the thermal integration between charging and discharging cycles through Pinch Analysis tools is introduced. According to such methodology, the heat exchanger network and temperatures and volumes of storage tanks are not defined a priori but are determined after the cycle parameters are optimized. For this purpose a heuristic procedure based on the interpretation of the composite curves obtained by optimizing the thermal integration between the cycles was developed. Such heuristic rules were implemented in a code that automatically finds the complete system design for given values of the intensive parameters of the charging and discharging cycles only. A base case system configuration is introduced and the results of its thermodynamic optimization are discussed here. A maximum roundtrip efficiency of 60% was obtained for the base case configuration assuming turbomachinery and heat exchanger performances in line with indications from manufacturers. -- Highlights: ► Energy storage based on water, ice, and transcritical CO2 cycles is

  17. Relational and Object-Oriented Methodology in Data Bases Systems

    Directory of Open Access Journals (Sweden)

    Marian Pompiliu CRISTESCU

    2006-01-01

    Full Text Available Database programming languages integrate concepts of databases and programming languages to provide both implementation tools for data-intensive applications and high-level user interfaces to databases. Frequently, database programs contain a large amount of application knowledge which is hidden in the procedural code and thus difficult to maintain with changing data and user views. This paper presents a first attempt to improve the situation by supporting the integrated definition and management of data and rules based on a set-oriented and predicative approach. The use of database technology for integrated fact and rule base management is shown to have some important advantages in terms of fact and rule integrity, question-answering, and explanation of results.

  18. A networked voting rule for democratic representation

    Science.gov (United States)

    Hernández, Alexis R.; Gracia-Lázaro, Carlos; Brigatti, Edgardo; Moreno, Yamir

    2018-03-01

    We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals' interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process.

  19. A networked voting rule for democratic representation

    Science.gov (United States)

    Brigatti, Edgardo; Moreno, Yamir

    2018-01-01

    We introduce a general framework for exploring the problem of selecting a committee of representatives with the aim of studying a networked voting rule based on a decentralized large-scale platform, which can assure a strong accountability of the elected. The results of our simulations suggest that this algorithm-based approach is able to obtain a high representativeness for relatively small committees, performing even better than a classical voting rule based on a closed list of candidates. We show that a general relation between committee size and representatives exists in the form of an inverse square root law and that the normalized committee size approximately scales with the inverse of the community size, allowing the scalability to very large populations. These findings are not strongly influenced by the different networks used to describe the individuals’ interactions, except for the presence of few individuals with very high connectivity which can have a marginal negative effect in the committee selection process. PMID:29657817

  20. Fuzzy Sets-based Control Rules for Terminating Algorithms

    Directory of Open Access Journals (Sweden)

    Jose L. VERDEGAY

    2002-01-01

    Full Text Available In this paper some problems arising in the interface between two different areas, Decision Support Systems and Fuzzy Sets and Systems, are considered. The Model-Base Management System of a Decision Support System which involves some fuzziness is considered, and in that context the questions of managing the fuzziness in some optimisation models, and then of using fuzzy rules for terminating conventional algorithms, are presented, discussed and analyzed. Finally, for the concrete case of the Travelling Salesman Problem, and as an illustration of the determination, management and use of the fuzzy rules, a new algorithm that is easy to implement in the Model-Base Management System of any oriented Decision Support System is shown.

  1. RISMA: A Rule-based Interval State Machine Algorithm for Alerts Generation, Performance Analysis and Monitoring Real-Time Data Processing

    Science.gov (United States)

    Laban, Shaban; El-Desouky, Aly

    2013-04-01

    The monitoring of real-time systems is a challenging and complicated process. There is therefore a continuous need to improve the monitoring process through the use of new intelligent techniques and algorithms for detecting exceptions and anomalous behaviours and for generating the necessary alerts during the workflow monitoring of such systems. Interval-based or period-based theorems have been discussed, analysed, and used by many researchers in Artificial Intelligence (AI), philosophy, and linguistics. As explained by Allen, there are 13 relations between any two intervals. There have also been many studies of interval-based temporal reasoning and logics over the past decades. Interval-based theorems can be used for monitoring real-time interval-based data processing. However, increasing the number of processed intervals makes the implementation of such theorems a complex and time-consuming process, as the relationships between such intervals increase exponentially. To overcome this problem, this paper presents a Rule-based Interval State Machine Algorithm (RISMA) for processing, monitoring, and analysing the behaviour of interval-based data received from real-time sensors. The proposed intelligent algorithm uses the Interval State Machine (ISM) approach to model any number of interval-based data into well-defined states, as well as to infer them. An interval-based state transition model and methodology are presented to identify the relationships between the different states of the proposed algorithm. By using such a model, the unlimited number of relationships between similarly large numbers of intervals can be reduced to only 18 direct relationships using the proposed well-defined states. For testing the proposed algorithm, the necessary inference rules and code have been designed and applied to the continuous data received in near real-time from the stations of the International Monitoring System (IMS) by the International Data Centre (IDC) of the Preparatory
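
    The 13 Allen relations mentioned above can be computed directly for a pair of closed intervals; the sketch below is illustrative only and is not the RISMA implementation or its 18-relationship state model.

      # Classify the Allen relation of interval a=(a1,a2) with respect to b=(b1,b2).
      def allen_relation(a, b):
          a1, a2 = a
          b1, b2 = b
          if a2 < b1:  return "before"
          if b2 < a1:  return "after"
          if a2 == b1: return "meets"
          if b2 == a1: return "met-by"
          if a1 == b1 and a2 == b2: return "equals"
          if a1 == b1: return "starts" if a2 < b2 else "started-by"
          if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
          if b1 < a1 and a2 < b2: return "during"
          if a1 < b1 and b2 < a2: return "contains"
          if a1 < b1 < a2 < b2:   return "overlaps"
          return "overlapped-by"

      print(allen_relation((0, 5), (3, 9)))   # overlaps
      print(allen_relation((3, 9), (0, 5)))   # overlapped-by
      print(allen_relation((2, 4), (0, 10)))  # during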

  2. Study of Movement Speeds Down Stairs

    CERN Document Server

    Hoskins, Bryan L

    2013-01-01

    The Study of Movement Speeds Down Stairs closely examines forty-three unique case studies on movement patterns down stairwells. These studies include observations made during evacuation drills, others made during normal usage, interviews with people after fire evacuations, recommendations made from compiled studies, and detailed results from laboratory studies. The methodology used in each study for calculating density and movement speed, when known, is also presented, and this book identifies an additional seventeen variables linked to altering movement speeds. The Study of Movement Speeds Down Stairs is intended for researchers as a reference guide for evaluating pedestrian evacuation dynamics down stairwells. Practitioners working in a related field may also find this book invaluable.

  3. Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems

    Science.gov (United States)

    Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith

    1988-01-01

    Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive, since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms. Further, these strategies are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
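
    A minimal sketch of the strengthening/weakening idea only (generalization, discrimination and discovery are not shown, and the data structures and step size are invented): rules whose conclusions appear in the correct diagnosis set C gain strength, while rules that contributed to advice in A but not in C lose strength.

      # Adjust rule strengths by comparing the system's advice (A) with the
      # correct diagnosis (C).
      def update_rule_strengths(rules, fired, advice, correct, step=0.1):
          """
          rules:   dict rule_id -> strength in [0, 1]
          fired:   dict rule_id -> diagnosis the rule contributed to
          advice:  set of diagnoses produced by the expert system (A)
          correct: set of correct diagnoses (C)
          """
          for rule_id, diagnosis in fired.items():
              if diagnosis in correct:
                  rules[rule_id] = min(1.0, rules[rule_id] + step)   # strengthen
              elif diagnosis in advice:
                  rules[rule_id] = max(0.0, rules[rule_id] - step)   # weaken
          return rules

      rules = {"r1": 0.5, "r2": 0.5}
      fired = {"r1": "valve_stuck", "r2": "sensor_drift"}
      print(update_rule_strengths(rules, fired,
                                  advice={"valve_stuck", "sensor_drift"},
                                  correct={"valve_stuck"}))
      # r1 is strengthened toward 1.0, r2 is weakened toward 0.0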

  4. Hydrodynamics of a natural circulation loop in a scaled-down steam drum-riser-downcomer assembly

    Energy Technology Data Exchange (ETDEWEB)

    Basu, Dipankar N., E-mail: dnbasu@iitg.ernet.in; Patil, N.D.; Bhattacharyya, Souvik; Das, P.K.

    2013-12-15

    Highlights: • Experimental investigation of loop hydrodynamics in a scaled-down simulated AHWR. • Identification of flow regimes and transitions by analyzing the conductance probe signal. • Downcomer flow is maximal for fully developed churn flow and lowest for bubbly flow. • Highest downcomer flow rate is achieved with identical air supply to both risers. • Interaction of varying flow patterns reduces downcomer flow for unequal operation. - Abstract: Complex interactions of different phases, widely varying frictional characteristics of different flow regimes and the involvement of multiple scales of transport make the modelling of a two-phase natural circulation loop (NCL) exceedingly difficult. The knowledge base about the dependency of downcomer flow rate on riser-side flow patterns, particularly for systems with multiple parallel channels, is barely developed, necessitating detailed experimentation. The present study focuses on developing a scaled-down test facility relevant to the Advanced Heavy Water Reactor conceived in the atomic energy programme of India to study the hydrodynamics of the NCL using air and water as test fluids. An experimental facility with two risers, one downcomer and a phase-separating drum was fabricated. Conductivity probes and photographic techniques are used to characterize the two-phase flow. Normalized voltage signals obtained from the amplified output of conductivity probes and their subsequent analysis through probability distribution functions reveal the presence of different two-phase flow patterns in the riser tubes. With the increase in air supply per riser, the void fraction in the two-phase mixture increases and the flow patterns gradually transform from bubbly to fully developed annular through slug, churn and dispersed annular flow regimes. Downcomer flow rate increases rapidly with air supply up to a maximum and then starts decreasing due to enhanced frictional forces. However, the maximum value of downcomer water

  5. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    Science.gov (United States)

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench scale bioreactors have been the system of choice. Due to the need for performing different process conditions for multiple process parameters, the process characterization studies typically span several months and are considered time and resource intensive. In this study, we have shown the application of a high-throughput mini-bioreactor system viz. the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and develop an input control strategy. As a pre-requisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques that showed comparability with both manufacturing scale (15,000 L) and bench scale (5 L). Volumetric sparge rates were matched between ambr and manufacturing scale, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study and product quality results were generated. Upon comparison with DoE data from the bench scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data for setting action limits for the critical controlled parameters (CCPs), which were comparable to those from bench scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.

  6. A Fuzzy Rule-Based Expert System for Evaluating Intellectual Capital

    Directory of Open Access Journals (Sweden)

    Mohammad Hossein Fazel Zarandi

    2012-01-01

    Full Text Available A fuzzy rule-based expert system is developed for evaluating intellectual capital. A fuzzy linguistic approach assists managers in understanding and evaluating the level of each intellectual capital item. The proposed fuzzy rule-based expert system applies fuzzy linguistic variables to express the level of qualitative evaluation and the criteria of experts. The feasibility of the proposed model is demonstrated by the results of an intellectual capital performance evaluation for a sample company.
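
    A toy sketch of the fuzzy linguistic idea (the membership functions, the 0-10 scoring scale and the single rule below are assumptions for illustration, not the paper's actual rule base):

      # Triangular membership functions and a tiny Mamdani-style rule evaluation.
      def tri(x, a, b, c):
          """Triangular membership with support [a, c] and peak at b."""
          if x < a or x > c:
              return 0.0
          if x == b:
              return 1.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def linguistic(score):
          """Membership of a 0-10 expert score in three linguistic levels."""
          return {"low": tri(score, 0, 0, 5),
                  "medium": tri(score, 2.5, 5, 7.5),
                  "high": tri(score, 5, 10, 10)}

      # Rule (illustrative): IF human capital is high AND structural capital is
      # at least medium THEN intellectual capital is high (min acts as AND).
      hc, sc = linguistic(8.0), linguistic(6.0)
      ic_high = min(hc["high"], max(sc["medium"], sc["high"]))
      print(f"degree to which IC is 'high': {ic_high:.2f}")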

  7. Rule-Based vs. Behavior-Based Self-Deployment for Mobile Wireless Sensor Networks.

    Science.gov (United States)

    Urdiales, Cristina; Aguilera, Francisco; González-Parada, Eva; Cano-García, Jose; Sandoval, Francisco

    2016-07-07

    In mobile wireless sensor networks (MWSN), nodes are allowed to move autonomously for deployment. This process is meant: (i) to achieve good coverage; and (ii) to distribute the communication load as homogeneously as possible. Rather than optimizing deployment, reactive algorithms are based on a set of rules or behaviors, so nodes can determine when to move. This paper presents an experimental evaluation of both reactive deployment approaches: rule-based and behavior-based ones. Specifically, we compare a backbone dispersion algorithm with a social potential fields algorithm. Most tests are done under simulation for a large number of nodes in environments with and without obstacles. Results are validated using a small robot network in the real world. Our results show that behavior-based deployment tends to provide better coverage and communication balance, especially for a large number of nodes in areas with obstacles.
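
    The social potential fields behavior referred to above is usually built from a pairwise force that is repulsive at short range and weakly attractive at longer range; the sketch below uses a generic two-term force law with made-up coefficients, not the parameters of the paper.

      # Generic social-potential-fields step: each node sums pairwise forces
      # from its neighbors and moves along the resultant.
      import math

      def spf_force(r, c_rep=1.0, c_att=0.05, s_rep=2.0, s_att=1.0):
          """Pairwise force magnitude: negative (repulsive) up close, positive
          (attractive) farther away."""
          return -c_rep / r**s_rep + c_att / r**s_att

      def step(positions, dt=0.1):
          new = []
          for i, (xi, yi) in enumerate(positions):
              fx = fy = 0.0
              for j, (xj, yj) in enumerate(positions):
                  if i == j:
                      continue
                  dx, dy = xj - xi, yj - yi
                  r = math.hypot(dx, dy) or 1e-9
                  f = spf_force(r)
                  fx += f * dx / r
                  fy += f * dy / r
              new.append((xi + dt * fx, yi + dt * fy))
          return new

      nodes = [(0.0, 0.0), (0.5, 0.0), (0.2, 0.4)]
      for _ in range(100):
          nodes = step(nodes)
      print(nodes)  # nodes spread out as short-range repulsion dominates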

  8. RANWAR: rank-based weighted association rule mining from gene expression and methylation data.

    Science.gov (United States)

    Mallik, Saurav; Mukhopadhyay, Anirban; Maulik, Ujjwal

    2015-01-01

    Ranking of association rules is currently an interesting topic in data mining and bioinformatics. The huge number of rules over items (or genes) produced by association rule mining (ARM) algorithms confuses the decision maker. In this article, we propose a weighted rule-mining technique (RANWAR, or rank-based weighted association rule mining) to rank the rules using two novel rule-interestingness measures, viz., the rank-based weighted condensed support (wcs) and weighted condensed confidence (wcc) measures, to bypass this problem. These measures depend on the rank of the items (genes). Using the rank, we assign a weight to each item. RANWAR generates far fewer frequent itemsets than state-of-the-art association rule mining algorithms and thus reduces execution time. We run RANWAR on gene expression and methylation datasets. The genes of the top rules are biologically validated by Gene Ontology (GO) and KEGG pathway analyses. Many top-ranked rules extracted by RANWAR that hold poor ranks in traditional Apriori are highly biologically significant to the related diseases. Finally, the top rules produced by RANWAR that are not found by Apriori are reported.
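
    One way to realize rank-derived weights and weighted support/confidence is sketched below; the exact wcs/wcc definitions of RANWAR are not reproduced here, and the gene names and transactions are toy data.

      # Rank-derived item weights feeding weighted support and confidence.
      def rank_weights(ranked_genes):
          """Higher-ranked genes (earlier in the list) get larger weights in (0, 1]."""
          n = len(ranked_genes)
          return {g: (n - i) / n for i, g in enumerate(ranked_genes)}

      def weighted_support(itemset, transactions, w):
          """Average itemset weight scaled by the fraction of covering transactions."""
          avg_w = sum(w[g] for g in itemset) / len(itemset)
          cover = sum(itemset <= t for t in transactions) / len(transactions)
          return avg_w * cover

      def weighted_confidence(antecedent, consequent, transactions, w):
          num = weighted_support(antecedent | consequent, transactions, w)
          den = weighted_support(antecedent, transactions, w)
          return num / den if den else 0.0

      genes = ["TP53", "BRCA1", "EGFR", "KRAS"]            # toy ranked gene list
      w = rank_weights(genes)
      tx = [{"TP53", "BRCA1"}, {"TP53", "BRCA1", "EGFR"}, {"KRAS"}]
      print(weighted_confidence({"TP53"}, {"BRCA1"}, tx, w))  # rank-weighted confidence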

  9. Rule-based topology system for spatial databases to validate complex geographic datasets

    Science.gov (United States)

    Martinez-Llario, J.; Coll, E.; Núñez-Andrés, M.; Femenia-Ribera, C.

    2017-06-01

    A rule-based topology software system providing a highly flexible and fast procedure to enforce integrity in spatial relationships among datasets is presented. This improved topology rule system is built on the spatial extension Jaspa. Both projects are open source, freely available software developed by the corresponding author of this paper. Currently, there is no spatial DBMS that implements a rule-based topology engine (considering that the topology rules are designed and performed in the spatial backend). If the topology rules are applied in the frontend (as in many GIS desktop programs), ArcGIS is the most advanced solution. The system presented in this paper has several major advantages over the ArcGIS approach: it can be extended with new topology rules, it has a much wider set of rules, and it can mix feature attributes with topology rules as filters. In addition, the topology rule system can work with various DBMSs, including PostgreSQL, H2 or Oracle, and the logic is performed in the spatial backend. The proposed topology system allows users to check the complex spatial relationships among features (from one or several spatial layers) that complex cartographic datasets require, such as the data specifications proposed by INSPIRE in Europe and the Land Administration Domain Model (LADM) for cadastral data.
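
    A minimal illustration of a single rule of this kind — "must not overlap", restricted by a feature attribute — checked in Python with Shapely rather than inside a spatial backend (the parcels and the zone attribute are invented):

      # A "must not overlap" topology rule with an attribute filter.
      from shapely.geometry import Polygon

      parcels = [
          {"id": 1, "zone": "urban", "geom": Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])},
          {"id": 2, "zone": "urban", "geom": Polygon([(1, 1), (3, 1), (3, 3), (1, 3)])},
          {"id": 3, "zone": "rural", "geom": Polygon([(5, 5), (6, 5), (6, 6), (5, 6)])},
      ]

      def must_not_overlap(features, attr=None, value=None):
          """Yield id pairs that violate the rule, honoring an attribute filter."""
          subset = [f for f in features if attr is None or f[attr] == value]
          for i, a in enumerate(subset):
              for b in subset[i + 1:]:
                  if a["geom"].overlaps(b["geom"]):
                      yield a["id"], b["id"]

      print(list(must_not_overlap(parcels, attr="zone", value="urban")))  # [(1, 2)]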

  10. Understanding Kaula's Rule for Small Bodies

    Science.gov (United States)

    McMahon, J.; Scheeres, D. J.; Farnocchia, D.; Chesley, S.

    2015-12-01

    Kaula's rule gives a bound on the RMS of the gravity coefficients of each degree as a power law K/n^2, where n is the degree. Kaula derived the value of K for Earth as 10^-5. This rule has been used as an a priori information bound on the gravity coefficients of other planetary bodies before their gravity fields are measured by spacecraft. To apply Kaula's rule to other bodies, a simple scaling based on the relative gravity of each body is used - (g_Earth/g_Planet)^2. This scaling was successfully used even for Vesta, where K = 0.011. However, if Kaula's rule is applied to very small bodies, such as the OSIRIS-REx target asteroid Bennu, the scaling results in unusable bounds. In this case, K ~ 10^5. This fact has motivated further investigation into the derivation and application of a Kaula-like power law to bound the gravity field of small bodies. Our initial investigation focuses on the specific application to Bennu. This study is enabled by the fact that a fairly accurate shape model of Bennu has been derived based on three Earth-based radar apparitions, along with a constrained bulk density based on astrometry and thermal measurements. Thus we investigated varying the Bennu topography within the expected accuracy of the shape model as well as the density distribution. Several interesting facts were discovered through this analysis. First, the top shape of Bennu, common to a number of near-Earth asteroids, results in the even zonal coefficients being larger than the odd zonals of one lower degree. Second, the zonals in general are significantly larger than the coefficients with order > 1, so the zonals will dominate any fitting of K to a power law. This encourages us to have one K for the absolute value of the zonals (K=0.087), and a separate value for the RMS of the other coefficients (K=0.026). Third, variation in the topography within this uncertainty dominates the variation in the gravity field coefficients over basic inhomogeneous density distribution
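
    Written out, the bound and the relative-gravity scaling described above take the following common form (standard notation, not copied from the abstract):

      % A Kaula-type bound on the RMS of the degree-n gravity coefficients, and
      % the relative-gravity scaling used to carry the Earth value to another body:
      \[
        \sigma_n \;\equiv\;
        \sqrt{\frac{1}{2n+1}\sum_{m=0}^{n}\left(\bar{C}_{nm}^{2}+\bar{S}_{nm}^{2}\right)}
        \;\lesssim\; \frac{K}{n^{2}},
        \qquad
        K_{\mathrm{body}} \;=\; K_{\mathrm{Earth}}
        \left(\frac{g_{\mathrm{Earth}}}{g_{\mathrm{body}}}\right)^{2},
        \qquad
        K_{\mathrm{Earth}} \approx 10^{-5}.
      \]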

  11. Optimal Sequential Rules for Computer-Based Instruction.

    Science.gov (United States)

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  12. A Platform-Based Methodology for System-Level Mixed-Signal Design

    Directory of Open Access Journals (Sweden)

    Alberto Sangiovanni-Vincentelli

    2010-01-01

    The complexity of today's embedded electronic systems as well as their demanding performance and reliability requirements are such that their design can no longer be tackled with ad hoc techniques while still meeting tight time-to-market constraints. In this paper, we present a system-level design approach for electronic circuits, utilizing the platform-based design (PBD) paradigm as the natural framework for mixed-domain design formalization. In PBD, a meet-in-the-middle approach allows systematic exploration of the design space through a series of top-down mappings of system constraints onto component feasibility models in a platform library, which is based on bottom-up characterizations. In this framework, new designs can be assembled from the precharacterized library components, giving the highest priority to design reuse, correct assembly, and efficient design flow from specifications to implementation. We apply concepts from design centering to enforce robustness to modeling errors as well as process, voltage, and temperature variations, which are currently plaguing embedded system design in deep-submicron technologies. The effectiveness of our methodology is finally shown on the design of a pipeline A/D converter and two receiver front-ends for UMTS and UWB communications.

  13. App Studies : Platform Rules and Methodological Challenges

    NARCIS (Netherlands)

    Gerlitz, C.; Helmond, A.; van der Vlist, F.; Weltevrede, E.; De Groote, S.; Majmundar, P.

    2016-01-01

    The panel engages with conceptual and methodological challenges within a specific area of ‘internet rules’, namely the space of mobile apps. Whereas the web was set out to function as a ‘generative’ and open technology facilitating the production of unanticipated services and applications, the

  14. A New Classification Approach Based on Multiple Classification Rules

    OpenAIRE

    Zhongmei Zhou

    2014-01-01

    A good classifier can correctly predict new data for which the class label is unknown, so it is important to construct a high accuracy classifier. Hence, classification techniques are much useful in ubiquitous computing. Associative classification achieves higher classification accuracy than some traditional rule-based classification approaches. However, the approach also has two major deficiencies. First, it generates a very large number of association classification rules, especially when t...

  15. Good and Bad Objects : Cardinality-Based Rules

    NARCIS (Netherlands)

    Dimitrov, D.A.; Borm, P.E.M.; Hendrickx, R.L.P.

    2003-01-01

    We consider the problem of ranking sets of objects, the members of which are mutually compatible. Assuming that each object is either good or bad, we axiomatically characterize three cardinality-based rules which arise naturally in this dichotomous setting. They are what we call the symmetric

  16. A false-alarm aware methodology to develop robust and efficient multi-scale infrared small target detection algorithm

    Science.gov (United States)

    Moradi, Saed; Moallem, Payman; Sabahi, Mohamad Farzan

    2018-03-01

    False alarm rate and detection rate are still two contradictory metrics for infrared small target detection in an infrared search and track system (IRST), despite the development of new detection algorithms. In certain circumstances, not detecting true targets is more tolerable than detecting false items as true targets. Hence, considering background clutter and detector noise as the sources of false alarms in an IRST system, in this paper, a false alarm aware methodology is presented to reduce the false alarm rate while the detection rate remains undegraded. To this end, the advantages and disadvantages of each detection algorithm are investigated and the sources of the false alarms are determined. Two target detection algorithms having independent false alarm sources are chosen in such a way that the disadvantages of one algorithm can be compensated by the advantages of the other. In this work, multi-scale average absolute gray difference (AAGD) and Laplacian of point spread function (LoPSF) are utilized as the cornerstones of the desired algorithm of the proposed methodology. After presenting a conceptual model for the desired algorithm, it is implemented through the most straightforward mechanism. The desired algorithm effectively suppresses background clutter and eliminates detector noise. Also, since the input images are processed through just four different scales, the desired algorithm has good capability for real-time implementation. Simulation results in terms of signal-to-clutter ratio and background suppression factor on real and simulated images prove the effectiveness and the performance of the proposed methodology. Since the desired algorithm was developed based on independent false alarm sources, our proposed methodology is expandable to any pair of detection algorithms which have different false alarm sources.
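
    The fusion idea described above, keeping only candidates that both detectors flag so that their independent false-alarm sources cancel, can be sketched as follows; the maps, thresholds and the injected target are synthetic stand-ins, not the actual AAGD/LoPSF implementations.

    ```python
    # Illustrative fusion step only (not the authors' exact AAGD/LoPSF pipeline):
    # a candidate is kept only where both detection maps respond, which suppresses
    # false alarms whose sources differ between the two algorithms.
    import numpy as np

    def fuse_detections(map_a, map_b, thr_a, thr_b):
        """Logical-AND fusion of two saliency maps with independent false-alarm sources."""
        return (map_a > thr_a) & (map_b > thr_b)

    rng = np.random.default_rng(0)
    clutter_like = rng.normal(0.0, 1.0, (64, 64))   # stands in for an AAGD-style map
    noise_like = rng.normal(0.0, 1.0, (64, 64))     # stands in for a LoPSF-style map
    clutter_like[32, 32] += 8.0                     # injected point target
    noise_like[32, 32] += 8.0
    mask = fuse_detections(clutter_like, noise_like, 4.0, 4.0)
    print(np.argwhere(mask))                        # ideally only [[32 32]]
    ```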

  17. Combination of statistical and physically based methods to assess shallow slide susceptibility at the basin scale

    Science.gov (United States)

    Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel

    2017-07-01

    Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generate a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km2) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. Thus, these areas were classified as uncertain and deserve additional investigation at a more detailed scale.
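
    The cross-tabulation and integration-rule step described above can be sketched in a few lines; the class maps and the specific integration rule below are invented for illustration and are not the published scheme.

    ```python
    # Minimal sketch of combining two susceptibility classifications (statistical IV
    # and physically based IS) by cross tabulation; labels and rule are illustrative.
    import numpy as np

    iv_classes = np.array([[0, 1, 2], [2, 2, 1], [0, 0, 1]])   # 0=low, 1=moderate, 2=high
    is_classes = np.array([[0, 2, 2], [1, 2, 0], [0, 1, 1]])

    # Contingency table: rows = IV class, columns = IS class.
    contingency = np.zeros((3, 3), dtype=int)
    np.add.at(contingency, (iv_classes.ravel(), is_classes.ravel()), 1)
    print(contingency)

    # Example integration rule: agreement keeps the class, strong disagreement
    # (low vs high) is flagged as "uncertain" (-1) for further investigation.
    combined = np.where(iv_classes == is_classes, iv_classes,
                        np.where(abs(iv_classes - is_classes) == 2, -1,
                                 np.maximum(iv_classes, is_classes)))
    print(combined)
    ```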

  18. GRAMMAR RULE BASED INFORMATION RETRIEVAL MODEL FOR BIG DATA

    Directory of Open Access Journals (Sweden)

    T. Nadana Ravishankar

    2015-07-01

    Though Information Retrieval (IR) in big data has been an active field of research for the past few years, the popularity of native languages presents a unique challenge in big data information retrieval systems. There is a need to retrieve information which is present in English and display it in the native language for users. This aim of cross-language information retrieval is complicated by unique features of the native languages, such as morphology, compound word formation, spelling variations, ambiguity, synonymy, and the influence of other languages. To overcome some of these issues, the native language is modeled using a grammar rule based approach in this work. The advantage of this approach is that the native language is modeled and its unique features are encoded using a set of inference rules. This rule base, coupled with the customized ontological system, shows considerable potential and is found to give better precision and recall.

  19. A Web-Based Rice Plant Expert System Using Rule-Based Reasoning

    Directory of Open Access Journals (Sweden)

    Anton Setiawan Honggowibowo

    2009-12-01

    Rice plants can be attacked by various kinds of diseases, which can often be identified from their symptoms. However, finding out the exact type of disease requires an agricultural expert's opinion, while the number of agricultural experts is limited and there are too many problems to be solved at the same time. This makes a system with the capability of an expert necessary. Such a system must contain the knowledge of rice plant diseases and symptoms that an agricultural expert has. This research designs a web-based expert system using rule-based reasoning. The rules are modified from the forward chaining and backward chaining inference methods in order to help farmers diagnose rice plant diseases. The web-based rice plant disease diagnosis expert system has the advantage of being easy to access and use. With its web-based features, farmers are expected to be able to access the expert system anywhere to diagnose rice diseases.

  20. Multi Scale Micro and Nano Metrology for Advanced Precision Moulding Technologies

    DEFF Research Database (Denmark)

    Quagliotti, Danilo

    dimensions of the novel micro and nano production. Nowadays, design methodologies and concurrent tolerance guidelines are not yet available for advanced micro manufacture. Moreover, there are no shared methodologies that deal with the uncertainty evaluation of features of size at the sub-millimetre scale... The technological revolution that has deeply influenced the manufacturing industry over the past two decades opened up new possibilities for the realisation of advanced micro and nano systems but, at the same time, traditional techniques for quality assurance were no longer adequate..., as the technology progressed. The gap between the needs of the manufacturing industry and the well-organized structure of dimensional and geometrical metrology appeared, above all, related to the methodologies and also to the instrumentation used to deal with the incessant scaling down of the critical...

  1. Methodology for the motor coordination through the adapted table tennis in boys and girls with Down's Syndrome

    Directory of Open Access Journals (Sweden)

    Carlos Alberto Govea Macía

    2014-12-01

    The present research describes the application of a pre-experiment with the objective of elaborating a methodology that stimulates the development of motor coordination by adapting table tennis to the needs and characteristics of boys and girls diagnosed with Down's Syndrome. The methodology was elaborated starting from the deficiencies and contradictions found in adapted sport for patients with Down's Syndrome, as well as from the potentialities and deficiencies in their motor coordination. Methods and techniques of a theoretical character are used (analysis, synthesis, induction, deduction, hypothetical-deductive, systemic-structural-functional and modeling); of an empirical-experimental character (observation, measurement, documentary analysis, experts' approach, pre-pedagogic experiment, case study), with the survey and its instruments as techniques; and of a mathematical-statistical character (descriptive and inferential statistics). The experts' criterion is used to verify the theoretical validity of the designed methodology. The analysis of the results allows the proposal to be accepted as valid after obtaining theoretical and practical validation of the results through a case study implementation.

  2. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    Science.gov (United States)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore the sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by the sample average approximation (SAA) algorithm. We conduct a numerical study to compare the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model, (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences for recovering the scrambled sequence, (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases, (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model, and (v) as expected, the rule-based model holds more inventory than the optimization model.

  3. Selection methodology for LWR safety programs and proposals. Volume 2. Methodology application

    International Nuclear Information System (INIS)

    Ritzman, R.L.; Husseiny, A.A.

    1980-08-01

    The results of work done to update and apply a methodology for selecting (prioritizing) LWR safety technology R and D programs are described. The methodology is based on multiattribute utility (MAU) theory. Application of the methodology to rank-order a group of specific R and D programs included development of a complete set of attribute utility functions, specification of individual attribute scaling constants, and refinement and use of an interactive computer program (MAUP) to process decision-maker inputs and generate overall (multiattribute) program utility values. The output results from several decision-makers are examined for consistency and conclusions and recommendations regarding general use of the methodology are presented. 3 figures, 18 tables
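
    The core of the MAU ranking step described above is an additive utility: each program's overall utility is the weighted sum of its single-attribute utilities, and programs are ordered by that value. The sketch below illustrates this with invented attribute names, weights and scores; it is not the MAUP program itself.

    ```python
    # Hedged sketch of additive multiattribute utility (MAU) ranking:
    # U = sum_k w_k * u_k, with the weights summing to 1.  All values are invented.
    def overall_utility(weights, utilities):
        """Overall (multiattribute) utility of one program."""
        assert abs(sum(weights.values()) - 1.0) < 1e-9
        return sum(weights[k] * utilities[k] for k in weights)

    weights = {"safety_benefit": 0.5, "cost": 0.2, "schedule": 0.3}
    programs = {
        "Program A": {"safety_benefit": 0.9, "cost": 0.4, "schedule": 0.6},
        "Program B": {"safety_benefit": 0.6, "cost": 0.8, "schedule": 0.7},
    }
    ranked = sorted(programs, key=lambda p: overall_utility(weights, programs[p]),
                    reverse=True)
    for name in ranked:
        print(name, round(overall_utility(weights, programs[name]), 3))
    ```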

  4. Classification Based on Pruning and Double Covered Rule Sets for the Internet of Things Applications

    Science.gov (United States)

    Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of things (IOT) is a hot issue in recent years. It accumulates large amounts of data from IOT users, which poses a great challenge for mining useful knowledge from IOT. Classification is an effective strategy which can predict the needs of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances can be covered by at least two classification rules. Thus, these algorithms cannot achieve high accuracy in some datasets. In this paper, we propose a new rule-based classification, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P can induce two different rule sets A and B. Every instance in the training set can be covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of the rules in rule set B. Our experimental results indicate that CDCR-P is not only feasible, but can also achieve high accuracy. PMID:24511304

  5. Classification based on pruning and double covered rule sets for the internet of things applications.

    Science.gov (United States)

    Li, Shasha; Zhou, Zhongmei; Wang, Weiping

    2014-01-01

    The Internet of things (IOT) is a hot issue in recent years. It accumulates large amounts of data from IOT users, which poses a great challenge for mining useful knowledge from IOT. Classification is an effective strategy which can predict the needs of users in IOT. However, many traditional rule-based classifiers cannot guarantee that all instances can be covered by at least two classification rules. Thus, these algorithms cannot achieve high accuracy in some datasets. In this paper, we propose a new rule-based classification, CDCR-P (Classification based on the Pruning and Double Covered Rule sets). CDCR-P can induce two different rule sets A and B. Every instance in the training set can be covered by at least one rule not only in rule set A, but also in rule set B. In order to improve the quality of rule set B, we take measures to prune the length of the rules in rule set B. Our experimental results indicate that CDCR-P is not only feasible, but can also achieve high accuracy.

  6. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job. The solution of the large-scale JSP can then be obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed. With this method, the unscheduled operations can be decomposed into bottleneck operations and non-bottleneck operations. According to the principle of “Bottleneck leads the performance of the whole manufacturing system” in TOC (Theory Of Constraints), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve the solving efficiency. Findings: In the process of the sub-problems' construction, partial operations in the previously scheduled sub-problem are divided into the successive sub-problem for re-optimization. This strategy can improve the solution quality of the algorithm. In the process of solving the sub-problems, the strategy of evaluating a chromosome's fitness by predicting the global scheduling objective value can improve the solution quality. Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem. They are as follows: the processing route of each job is predetermined, and the processing time of each operation is fixed. There is no machine breakdown, and no preemption of the operations is allowed. The assumptions should be considered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the

  7. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in industrial Internet of things (IIoT), which can accurately predict the machine failure in real manufacturing environment by investigating the relationship between the cause and type of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, 3) visualization. The binarization step translates item values in a dataset into one or zero, then the rule creation step creates association rules as IF-THEN structures using the Lattice model and Apriori algorithm. Finally, the created rules are visualized in various ways for users’ understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
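
    A minimal, self-contained sketch of the rule-creation step is shown below: frequent co-occurrences are kept by support and turned into IF-THEN rules by confidence, in the spirit of the Apriori step the paper describes. The binarized failure records, field names and thresholds are invented, and this is not the authors' Lattice/Apriori implementation.

    ```python
    # Toy support/confidence rule mining over binarized machine-failure records.
    from itertools import combinations
    from collections import Counter

    records = [  # each record: binarized symptoms/causes plus the failure type
        {"overheat", "vibration", "bearing_failure"},
        {"overheat", "bearing_failure"},
        {"vibration", "misalignment"},
        {"overheat", "vibration", "bearing_failure"},
    ]
    min_support, min_confidence = 0.5, 0.8

    item_counts = Counter(item for r in records for item in r)
    pair_counts = Counter(frozenset(p) for r in records for p in combinations(sorted(r), 2))
    n = len(records)

    for pair, cnt in pair_counts.items():
        if cnt / n < min_support:
            continue                      # prune infrequent pairs (support threshold)
        for antecedent in pair:
            consequent = next(iter(pair - {antecedent}))
            confidence = cnt / item_counts[antecedent]
            if confidence >= min_confidence:
                print(f"IF {antecedent} THEN {consequent} "
                      f"(support={cnt / n:.2f}, confidence={confidence:.2f})")
    ```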

  8. One-fifth-scale and full-scale fuel element rocking tests

    International Nuclear Information System (INIS)

    Nau, P.V.; Olsen, B.E.

    1978-06-01

    Using 1/5-scale and 1/1-scale (prototype H451) fuel elements, one, two, or three stacked elements on a clamped base element were rocked from an initial release position. Relative displacement, rock-down loads, and dowel pin shear forces were measured. A scaled comparison between 1/5-scale and 1/1-scale results was made to evaluate the model scaling laws, and an error analysis was performed to assess the accuracy and usefulness of the test data

  9. Rule-based conversion of closely-related languages: a Dutch-to-Afrikaans convertor

    CSIR Research Space (South Africa)

    Van Huyssteen, GB

    2009-11-01

    Full Text Available and performance of a rule-based Dutch-to-Afrikaans converter, with the aim to transform Dutch text so that it looks more like an Afrikaans text (even though it might not even be a good Dutch translation). The rules we used is based on systematic orthographic...

  10. Cumulative effects assessment: Does scale matter?

    International Nuclear Information System (INIS)

    Therivel, Riki; Ross, Bill

    2007-01-01

    Cumulative effects assessment (CEA) is (or should be) an integral part of environmental assessment at both the project and the more strategic level. CEA helps to link the different scales of environmental assessment in that it focuses on how a given receptor is affected by the totality of plans, projects and activities, rather than on the effects of a particular plan or project. This article reviews how CEAs consider, and could consider, scale issues: spatial extent, level of detail, and temporal issues. It is based on an analysis of Canadian project-level CEAs and UK strategic-level CEAs. Based on a review of literature and, especially, case studies with which the authors are familiar, it concludes that scale issues are poorly considered at both levels, with particular problems being unclear or non-existing cumulative effects scoping methodologies; poor consideration of past or likely future human activities beyond the plan or project in question; attempts to apportion 'blame' for cumulative effects; and, at the plan level, limited management of cumulative effects caused particularly by the absence of consent regimes. Scale issues are important in most of these problems. However both strategic-level and project-level CEA have much potential for managing cumulative effects through better siting and phasing of development, demand reduction and other behavioural changes, and particularly through setting development consent rules for projects. The lack of strategic resource-based thresholds constrains the robust management of strategic-level cumulative effects

  11. Oxytocin modulates trait-based rule following

    NARCIS (Netherlands)

    Gross, J.; de Dreu, C.K.W.

    Rules, whether in the form of norms, taboos or laws, regulate and coordinate human life. Some rules, however, are arbitrary and adhering to them can be personally costly. Rigidly sticking to such rules can be considered maladaptive. Here, we test whether, at the neurobiological level, (mal)adaptive

  12. An evaluation and implementation of rule-based Home Energy Management System using the Rete algorithm.

    Science.gov (United States)

    Kawakami, Tomoya; Fujita, Naotaka; Yoshihisa, Tomoki; Tsukamoto, Masahiko

    2014-01-01

    In recent years, sensors have become popular and the Home Energy Management System (HEMS) takes an important role in saving energy without a decrease in QoL (Quality of Life). Currently, many rule-based HEMSs have been proposed and almost all of them assume "IF-THEN" rules. The Rete algorithm is a typical pattern matching algorithm for IF-THEN rules. We have proposed a rule-based HEMS using the Rete algorithm. In the proposed system, rules for managing energy are processed by smart taps in the network, and the loads for processing rules and collecting data are distributed to the smart taps. In addition, the number of processes and the amount of collected data are reduced by processing rules based on the Rete algorithm. In this paper, we evaluated the proposed system by simulation. In the simulation environment, rules are processed by the smart tap that relates to the action part of each rule. In addition, we implemented the proposed system as a HEMS using smart taps.
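
    For illustration only, the sketch below shows the kind of IF-THEN rules a HEMS evaluates, matched by a naive forward-chaining loop; the facts and rules are invented, and the loop deliberately re-checks every rule against every fact, which is exactly the repeated matching work the Rete algorithm avoids.

    ```python
    # Naive forward-chaining over invented IF-THEN rules (not the Rete network).
    facts = {"room_occupied": False, "lights_on": True, "outdoor_temp_c": 31.0}

    rules = [
        {"name": "lights_off_when_empty",
         "if": lambda f: not f["room_occupied"] and f["lights_on"],
         "then": {"lights_on": False}},
        {"name": "precool_when_hot",
         "if": lambda f: f["outdoor_temp_c"] > 30.0 and f["room_occupied"],
         "then": {"ac_on": True}},
    ]

    fired = True
    while fired:                      # iterate until no rule changes a fact
        fired = False
        for rule in rules:
            if rule["if"](facts) and any(facts.get(k) != v for k, v in rule["then"].items()):
                facts.update(rule["then"])
                print("fired:", rule["name"])
                fired = True
    print(facts)
    ```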

  13. Implementation Of 5S Methodology In The Small Scale Industry A Case Study

    Directory of Open Access Journals (Sweden)

    R. S. Agrahari

    2015-04-01

    5S is a basic foundation of Lean Manufacturing systems. It is a tool for cleaning, sorting and organizing, and it provides the necessary groundwork for workplace improvement. This paper deals with the implementation of the 5S methodology in a small scale industry. Following the 5S methodology shows significant improvements in safety, productivity, efficiency and housekeeping. The improvements before and after 5S implementation are shown by pictures in the paper. It also intends to build a stronger work ethic within the management and workers, who would be expected to continue the good practices.

  14. Rating the methodological quality of single-subject designs and n-of-1 trials: introducing the Single-Case Experimental Design (SCED) Scale.

    Science.gov (United States)

    Tate, Robyn L; McDonald, Skye; Perdices, Michael; Togher, Leanne; Schultz, Regina; Savage, Sharon

    2008-08-01

    Rating scales that assess methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but there are none that assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating methodological quality of clinical trials developed the scale and participated in reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items to reduce the main sources of bias in single-case methodology as stipulated by authorities in the field, which were empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20/312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73-0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78-0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range k = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88, 95% confidence interval 0.73-0.95). The SCED Scale thus provides a brief and valid evaluation of methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability using both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting and critical

  15. Techniques and implementation of the embedded rule-based expert system using Ada

    Science.gov (United States)

    Liberman, Eugene M.; Jones, Robert E.

    1991-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language, specifically a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell, is discussed. The NASA Lewis Research Center was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  16. Large-scale agent-based social simulation : A study on epidemic prediction and control

    NARCIS (Netherlands)

    Zhang, M.

    2016-01-01

    Large-scale agent-based social simulation is gradually proving to be a versatile methodological approach for studying human societies, which could make contributions from policy making in social science, to distributed artificial intelligence and agent technology in computer science, and to theory

  17. Evaluation of Machine Learning and Rules-Based Approaches for Predicting Antimicrobial Resistance Profiles in Gram-negative Bacilli from Whole Genome Sequence Data.

    Science.gov (United States)

    Pesesky, Mitchell W; Hussain, Tahir; Wallace, Meghan; Patel, Sanket; Andleeb, Saadia; Burnham, Carey-Ann D; Dantas, Gautam

    2016-01-01

    incomplete genome assembly confounded the rules-based algorithm, resulting in predictions based on gene family, rather than on knowledge of the specific variant found. Low-frequency resistance caused errors in the machine-learning algorithm because those genes were not seen or seen infrequently in the test set. We also identified an example of variability in the phenotype-based results that led to disagreement with both genotype-based methods. Genotype-based antimicrobial susceptibility testing shows great promise as a diagnostic tool, and we outline specific research goals to further refine this methodology.

  18. Evaluation of Machine Learning and Rules-Based Approaches for Predicting Antimicrobial Resistance Profiles in Gram-negative Bacilli from Whole Genome Sequence Data

    Directory of Open Access Journals (Sweden)

    Mitchell Pesesky

    2016-11-01

    factors and incomplete genome assembly confounded the rules-based algorithm, resulting in predictions based on gene family, rather than on knowledge of the specific variant found. Low-frequency resistance caused errors in the machine-learning algorithm because those genes were not seen or seen infrequently in the test set. We also identified an example of variability in the phenotype-based results that led to disagreement with both genotype-based methods. Genotype-based antimicrobial susceptibility testing shows great promise as a diagnostic tool, and we outline specific research goals to further refine this methodology.

  19. SCALE6.1 Hybrid Shielding Methodology For The Spent Fuel Dry Storage

    International Nuclear Information System (INIS)

    Matijevic, M.; Pevec, D.; Trontl, K.

    2015-01-01

    The SCALE6.1/MAVRIC hybrid deterministic-stochastic shielding methodology was used for dose rate calculations of a generic spent fuel dry storage installation. The neutron-gamma dose rates around the cask array were calculated over a large problem domain in order to determine the boundary of the controlled area. The FW-CADIS methodology, based on the deterministic forward and adjoint solutions over the phase space, was used for optimized, global Monte Carlo results over the mesh tally. The cask inventory was modeled as a homogenized material corresponding to 20 fuel assemblies from a standard mid-sized PWR reactor. The global simulation model was an array of 32 casks in 2 rows with concrete foundations and external air, which makes a large spatial domain for shielding calculations. The dose rates around the casks were determined using the FW-CADIS method with a weighted adjoint source and a mesh tally covering a portion of the spatial domain of interest. The conservatively obtained dose rates give an upper bound, since the reduction of source activity that occurs once sequential filling of the dry storage starts was not taken into account. The effective area of the dry storage installation can be additionally reduced by lowering the concrete foundation below ground level, raising an embankment, and adding extra concrete walls, which would additionally lower the dominant gamma dose rates. (author).

  20. Electronuclear sum rules

    International Nuclear Information System (INIS)

    Arenhoevel, H.; Drechsel, D.; Weber, H.J.

    1978-01-01

    Generalized sum rules are derived by integrating the electromagnetic structure functions along lines of constant ratio of momentum and energy transfer. For non-relativistic systems these sum rules are related to the conventional photonuclear sum rules by a scaling transformation. The generalized sum rules are connected with the absorptive part of the forward scattering amplitude of virtual photons. The analytic structure of the scattering amplitudes and the possible existence of dispersion relations have been investigated in schematic relativistic and non-relativistic models. While for the non-relativistic case analyticity does not hold, the relativistic scattering amplitude is analytical for time-like (but not for space-like) photons and relations similar to the Gell-Mann-Goldberger-Thirring sum rule exist. (Auth.)

  1. Development of in-situ product removal strategies in biocatalysis applying scaled-down unit operations

    DEFF Research Database (Denmark)

    Heintz, Søren; Börner, Tim; Ringborg, Rolf Hoffmeyer

    2017-01-01

    different process steps while operating it as a combined system, giving the possibility to test and characterize the performance of novel process concepts and biocatalysts with minimal influence of inhibitory products. Here the capabilities of performing process development by applying scaled-down unit operations are highlighted through a case study investigating the asymmetric synthesis of 1-methyl-3-phenylpropylamine (MPPA) using ω-transaminase, an enzyme in the sub-family of amino transferases (ATAs). An on-line HPLC system was applied to avoid manual sample handling and to semi...

  2. A Rules-Based Simulation of Bacterial Turbulence

    Science.gov (United States)

    Mikel-Stites, Maxwell; Staples, Anne

    2015-11-01

    In sufficiently dense bacterial populations (>40% bacteria by volume), unusual collective swimming behaviors have been consistently observed, resembling von Karman vortex streets. The source of these collective swimming behaviors has yet to be fully determined, and as yet, no research has been conducted that would define whether or not this behavior is derived predominantly from the properties of the surrounding media, or if it is an emergent behavior resulting from the "rules" governing the behavior of individual bacteria. The goal of this research is to ascertain whether or not it is possible to design a simulation that can replicate the qualitative behavior of densely packed bacterial populations using only behavioral rules to govern the actions of each bacterium, with the physical properties of the media being neglected. The results of the simulation will address whether or not it is possible for the system's overall behavior to be driven exclusively by these rule-based dynamics. In order to examine this, the behavioral simulation was written in MATLAB on a fixed grid, and updated sequentially with the bacterial behavior, including randomized tumbling, gathering and perceptual sub-functions. If the simulation is successful, it will serve as confirmation that it is possible to generate these qualitatively vortex-like behaviors without specific physical media (that the phenomenon arises in an emergent fashion from behavioral rules), or as evidence that the observed behavior requires some specific set of physical parameters.
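
    A toy rules-only sketch in this spirit is shown below (the study itself used MATLAB; this version is Python and purely illustrative): bacteria on a fixed grid tumble at random and otherwise keep running along their heading, with no fluid properties entering the update.

    ```python
    # Rules-only toy: random tumbling plus straight runs on a periodic grid.
    import random

    GRID, N_BACTERIA, STEPS, TUMBLE_P = 20, 60, 100, 0.2
    HEADINGS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

    random.seed(1)
    bacteria = [{"pos": (random.randrange(GRID), random.randrange(GRID)),
                 "dir": random.choice(HEADINGS)} for _ in range(N_BACTERIA)]

    for _ in range(STEPS):
        for b in bacteria:
            if random.random() < TUMBLE_P:          # rule 1: random tumbling
                b["dir"] = random.choice(HEADINGS)
            x, y = b["pos"]
            dx, dy = b["dir"]
            b["pos"] = ((x + dx) % GRID, (y + dy) % GRID)   # rule 2: run forward

    occupancy = {}
    for b in bacteria:                               # crude density check per cell
        occupancy[b["pos"]] = occupancy.get(b["pos"], 0) + 1
    print(max(occupancy.values()), "bacteria in the most crowded cell")
    ```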

  3. Extraction and Preference Ordering of Multireservoir Water Supply Rules in Dry Years

    Directory of Open Access Journals (Sweden)

    Ling Kang

    2016-01-01

    This paper presents a new methodology of combined use of the nondominated sorting genetic algorithm II (NSGA-II) and the approach of successive elimination of alternatives based on order and degree of efficiency (SEABODE) in identifying the most preferred multireservoir water supply rules in dry years. First, the suggested operation rules consist of a two-point type time-varying hedging policy for a single reservoir and a simple proportional allocation policy of common water demand between two parallel reservoirs. Then, the NSGA-II is employed to derive enough noninferior operation rules (design alternatives) in terms of two conflicting objectives: (1) minimizing the total deficit ratio (TDR) of all demands of the entire system in the operation horizon, and (2) minimizing the maximum deficit ratio (MDR) of water supply in a single period. Next, the SEABODE, a multicriteria decision making (MCDM) procedure, is applied to further eliminate alternatives based on the concept of efficiency of order k with degree p. In SEABODE, the reservoir performance indices and water shortage indices are selected as evaluation criteria for preference ordering among the design alternatives obtained by NSGA-II. The proposed methodology was tested on a regional water supply system with three reservoirs located in the Jialing River, China, where the results demonstrate its applicability and merits.
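
    The two objectives named above reduce to simple ratios of supply deficits to demands; a hedged sketch, with made-up demand and supply series, is given below.

    ```python
    # Total deficit ratio (TDR) over the horizon and maximum single-period
    # deficit ratio (MDR), computed from invented demand/supply series.
    def deficit_ratios(demand, supply):
        deficits = [max(d - s, 0.0) for d, s in zip(demand, supply)]
        tdr = sum(deficits) / sum(demand)
        mdr = max(df / d for df, d in zip(deficits, demand))
        return tdr, mdr

    demand = [100, 120, 90, 110]
    supply = [100, 100, 60, 110]
    tdr, mdr = deficit_ratios(demand, supply)
    print(f"TDR = {tdr:.3f}, MDR = {mdr:.3f}")   # hedging policies trade these off
    ```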

  4. Towards an MDA-based development methodology

    NARCIS (Netherlands)

    Gavras, Anastasius; Belaunde, Mariano; Ferreira Pires, Luis; Andrade Almeida, João; Oquendo, Flavio; Warboys, Brian C.; Morrison, Ron

    2004-01-01

    This paper proposes a development methodology for distributed applications based on the principles and concepts of the Model-Driven Architecture (MDA). The paper identifies phases and activities of an MDA-based development trajectory, and defines the roles and products of each activity in accordance

  5. Design of a Fuzzy Rule Base Expert System to Predict and Classify ...

    African Journals Online (AJOL)

    The main objective of designing a rule-based expert system using a fuzzy logic approach is to predict and forecast the risk level of cardiac patients in order to avoid sudden death. In the proposed system, uncertainty is captured using a rule base, and classification using fuzzy c-means clustering is discussed to overcome the risk level, ...

  6. Cope's Rule and the Universal Scaling Law of Ornament Complexity.

    Science.gov (United States)

    Raia, Pasquale; Passaro, Federico; Carotenuto, Francesco; Maiorino, Leonardo; Piras, Paolo; Teresi, Luciano; Meiri, Shai; Itescu, Yuval; Novosolov, Maria; Baiano, Mattia Antonio; Martínez, Ricard; Fortelius, Mikael

    2015-08-01

    Luxuriant, bushy antlers, bizarre crests, and huge, twisting horns and tusks are conventionally understood as products of sexual selection. This view stems from both direct observation and from the empirical finding that the size of these structures grows faster than body size (i.e., ornament size shows positive allometry). We contend that the familiar evolutionary increase in the complexity of ornaments over time in many animal clades is decoupled from ornament size evolution. Increased body size comes with extended growth. Since growth scales to the quarter power of body size, we predicted that ornament complexity should scale according to the quarter power law as well, irrespective of the role of sexual selection in the evolution and function of the ornament. To test this hypothesis, we selected three clades (ammonites, deer, and ceratopsian dinosaurs) whose species bore ornaments that differ in terms of the importance of sexual selection to their evolution. We found that the exponent of the regression of ornament complexity to body size is the same for the three groups and is statistically indistinguishable from 0.25. We suggest that the evolution of ornament complexity is a by-product of Cope's rule. We argue that although sexual selection may control size in most ornaments, it does not influence their shape.

  7. An Association Rule Based Method to Integrate Metro-Public Bicycle Smart Card Data for Trip Chain Analysis

    Directory of Open Access Journals (Sweden)

    De Zhao

    2018-01-01

    Smart card data provide valuable insights and massive samples for enhancing the understanding of transfer behavior between metro and public bicycle. However, smart cards for metro and public bicycle are often issued and managed by independent companies and this results in the same commuter having different identity tags in the metro and public bicycle smart card systems. The primary objective of this study is to develop a data fusion methodology for matching metro and public bicycle smart cards for the same commuter using historical smart card data. A novel method with association rules to match the data derived from the two systems is proposed and validation was performed. The results showed that our proposed method successfully matched 573 pairs of smart cards with an accuracy of 100%. We also validated the association rules method through visualization of individual metro and public bicycle trips. Based on the matched cards, interesting findings of metro-bicycle transfer have been derived, including the spatial pattern of the public bicycle as first/last mile solution as well as the duration of a metro trip chain.

  8. Rule-based emergency action level monitor prototype

    International Nuclear Information System (INIS)

    Touchton, R.A.; Gunter, A.D.; Cain, D.

    1985-01-01

    In late 1983, the Electric Power Research Institute (EPRI) began a program to encourage and stimulate the development of artificial intelligence (AI) applications for the nuclear industry. Development of a rule-based emergency action level classification system prototype is discussed. The paper describes both the full prototype currently under development and the completed, simplified prototype

  9. Domain XML semantic integration based on extraction rules and ontology mapping

    Directory of Open Access Journals (Sweden)

    Huayu LI

    2016-08-01

    Plenty of XML documents exist in the petroleum engineering field, but traditional XML integration solutions cannot provide semantic queries, which leads to low data use efficiency. In light of the WeXML (oil & gas well XML) data semantic integration and query requirements, this paper proposes a semantic integration method based on extraction rules and ontology mapping. The method firstly defines a series of extraction rules with which elements and properties of the WeXML Schema are mapped to classes and properties in the WeOWL ontology, respectively; secondly, an algorithm is used to transform WeXML documents into WeOWL instances. Because WeOWL provides limited semantics, ontology mappings between the two ontologies are then built to explain the classes and properties of the global ontology with terms of WeOWL, and semantic query based on the global domain concept model is provided. By constructing a WeXML data semantic integration prototype system, the proposed transformation rules, the transfer algorithm and the mapping rules are tested.
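
    A hedged sketch of one extraction rule of the kind described (element tag to ontology class, attributes and child-element text to data properties) is given below; the tiny document and the rule itself are invented and are not part of the WeXML/WeOWL rule set.

    ```python
    # Invented extraction rule: XML element -> class instance with data properties.
    import xml.etree.ElementTree as ET

    doc = ET.fromstring('<well name="W-101"><depth unit="m">2500</depth></well>')

    def element_to_instance(elem):
        """Rule: element tag becomes the class name, attributes become data
        properties, and the text of child elements becomes properties named
        after the child tags."""
        instance = {"class": elem.tag.capitalize()}
        instance.update(elem.attrib)
        for child in elem:
            instance[child.tag] = child.text
        return instance

    print(element_to_instance(doc))
    # {'class': 'Well', 'name': 'W-101', 'depth': '2500'}
    ```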

  10. Application and comparison of the SCS-CN-based rainfall-runoff model in meso-scale watershed and field scale

    Science.gov (United States)

    Luo, L.; Wang, Z.

    2010-12-01

    The Soil Conservation Service Curve Number (SCS-CN) based hydrologic model has been widely used for agricultural watersheds in recent years. However, relative errors arise when applying it, due to differences in geographical and climatological conditions. This paper introduces a more adaptable and propagable model based on the modified SCS-CN method, specialized to two research regions of different scales. Taking into account the typical conditions of the Zhanghe irrigation district in the southern part of China, such as hydrometeorological and surface conditions, SCS-CN based models were established. The Xinbu-Qiao River basin (area = 1207 km2) and the Tuanlin runoff test area (area = 2.87 km2) were taken as the study areas at the basin scale and field scale in the Zhanghe irrigation district. Applications were thus extended from an ordinary meso-scale watershed to the field scale in the paddy-field-dominated Zhanghe irrigated area. Based on actual measurement data of land use, soil classification, hydrology and meteorology, quantitative evaluation and modifications of two coefficients, i.e., the preceding loss and the runoff curve number, were proposed with corresponding models; a table of CN values for different land uses and an AMC (antecedent moisture condition) grading standard fitting the research cases were also proposed. The simulation precision was increased by putting forward a 12 h unit hydrograph of the field area, and the 12 h unit hydrograph was simplified. Comparison between the different scales shows that it is more effective to use the SCS-CN model at the field scale after the parameters have been calibrated at the basin scale. These results can help to discover the rainfall-runoff relationship in the district. Differences in the established SCS-CN model parameters between the two study regions are also considered. Varied forms of land use and the impacts of human activities were the important factors affecting the rainfall-runoff relations in the Zhanghe irrigation district.
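
    For reference, the standard SCS-CN runoff relation that underlies these models can be written in a few lines; the sketch below uses the SI (millimetre) form with the usual Ia = 0.2S initial-abstraction assumption, and the CN and rainfall values are purely illustrative.

    ```python
    # Standard SCS-CN direct-runoff relation: S = 25400/CN - 254 (mm), Ia = 0.2*S,
    # Q = (P - Ia)^2 / (P - Ia + S) for P > Ia, else Q = 0.
    def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
        s = 25400.0 / cn - 254.0          # potential maximum retention (mm)
        ia = ia_ratio * s                 # initial abstraction (mm)
        if p_mm <= ia:
            return 0.0
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    for cn in (60, 75, 90):               # from more permeable to more impervious covers
        print(cn, round(scs_cn_runoff(80.0, cn), 1), "mm of runoff from 80 mm rain")
    ```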

  11. Sum rules for the real parts of nonforward current-particle scattering amplitudes

    International Nuclear Information System (INIS)

    Abdel-Rahman, A.M.M.

    1976-01-01

    Extending previous work, using Taha's refined infinite-momentum method, new sum rules for the real parts of nonforward current-particle scattering amplitudes are derived. The sum rules are based on covariance, causality, scaling, equal-time algebra and unsubtracted dispersion relations for the amplitudes. A comparison with the corresponding light-cone approach is made, and it is shown that the light-cone sum rules would also follow from the assumptions underlying the present work

  12. SNE's methodological basis - web-based software in entrepreneurial surveys

    DEFF Research Database (Denmark)

    Madsen, Henning

    This overhead based paper gives an introduction to the research methodology applied in the surveys carried out in the SNE-project.

  13. Preparing for budget-based payment methodologies: global payment and episode-based payment.

    Science.gov (United States)

    Hudson, Mark E

    2015-10-01

    Use of budget-based payment methodologies (capitation and episode-based bundled payment) has been demonstrated to drive value in healthcare delivery. With a focus on high-volume, high-cost surgical procedures, inclusion of anaesthesiology services in these methodologies is likely. This review provides a summary of budget-based payment methodologies and practical information necessary for anaesthesiologists to prepare for participation in these programmes. Although few examples of anaesthesiologists' participation in these models exist, an understanding of the structure of these programmes and opportunities for participation are available. Prospective preparation in developing anaesthesiology-specific bundled payment profiles and early participation in pathway development associated with selected episodes of care are essential for successful participation as a gainsharing partner. With significant opportunity to contribute to care coordination and cost management, anaesthesiology can play an important role in budget-based payment programmes and should expect to participate as full gainsharing partners. Precise costing methodologies and accurate economic modelling, along with identification of quality management and cost control opportunities, will help identify participation opportunities and appropriate payment and gainsharing agreements. Anaesthesiology-specific examples with budget-based payment models are needed to help guide increased participation in these programmes.

  14. Green fluorescent protein (GFP) leakage from microbial biosensors provides useful information for the evaluation of the scale-down effect

    DEFF Research Database (Denmark)

    Delvigne, Frank; Brognaux, Alison; Francis, Frédéric

    2011-01-01

    Mixing deficiencies can potentially be detected by the use of a dedicated whole cell microbial biosensor. In this work, a csiE promoter induced under carbon-limited conditions was involved in the elaboration of such a biosensor. The csiE biosensor exhibited an interesting response after up- and down-shifts of the dilution rate in chemostat mode. Glucose limitation was accompanied by green fluorescent protein (GFP) leakage to the extracellular medium. In order to test the responsiveness of microbial biosensors to substrate fluctuations at large scale, a scale-down reactor (SDR) experiment was performed. The glucose fluctuations were characterized at the single cell level and tend to decrease the induction of GFP. Simulations run on the basis of a stochastic hydrodynamic model have shown the variability and the frequencies at which biosensors are exposed to the glucose gradient in the SDR. GFP leakage was observed to a great...

  15. Selective Area Sublimation: A Simple Top-down Route for GaN-Based Nanowire Fabrication.

    Science.gov (United States)

    Damilano, B; Vézian, S; Brault, J; Alloing, B; Massies, J

    2016-03-09

    Post-growth in situ partial SiNx masking of GaN-based epitaxial layers grown in a molecular beam epitaxy reactor is used to obtain GaN selective area sublimation (SAS) by high temperature annealing. Using this top-down approach, nanowires (NWs) with nanometer-scale diameters are obtained from GaN and InxGa1-xN/GaN quantum well epitaxial structures. After GaN regrowth on the InxGa1-xN/GaN NWs resulting from SAS, InxGa1-xN quantum disks (QDisks) with nanometer sizes in all three dimensions are formed. Low temperature microphotoluminescence experiments demonstrate multiline QDisk photon emission around 3 eV with individual line widths of 1-2 meV.

  16. Scaling and design analyses of a scaled-down, high-temperature test facility for experimental investigation of the initial stages of a VHTR air-ingress accident

    International Nuclear Information System (INIS)

    Arcilesi, David J.; Ham, Tae Kyu; Kim, In Hun; Sun, Xiaodong; Christensen, Richard N.; Oh, Chang H.

    2015-01-01

    Highlights: • A 1/8th geometric-scale test facility that models the VHTR hot plenum is proposed. • Geometric scaling analysis is introduced for VHTR to analyze air-ingress accident. • Design calculations are performed to show that accident phenomenology is preserved. • Some analyses include time scale, hydraulic similarity and power scaling analysis. • Test facility has been constructed and shake-down tests are currently being carried out. - Abstract: A critical event in the safety analysis of the very high-temperature gas-cooled reactor (VHTR) is an air-ingress accident. This accident is initiated, in its worst case scenario, by a double-ended guillotine break of the coaxial cross vessel, which leads to a rapid reactor vessel depressurization. In a VHTR, the reactor vessel is located within a reactor cavity that is filled with air during normal operating conditions. Following the vessel depressurization, the dominant mode of ingress of an air–helium mixture into the reactor vessel will either be molecular diffusion or density-driven stratified flow. The mode of ingress is hypothesized to depend largely on the break conditions of the cross vessel. Since the time scales of these two ingress phenomena differ by orders of magnitude, it is imperative to understand under which conditions each of these mechanisms will dominate in the air ingress process. Computer models have been developed to analyze this type of accident scenario. There are, however, limited experimental data available to understand the phenomenology of the air-ingress accident and to validate these models. Therefore, there is a need to design and construct a scaled-down experimental test facility to simulate the air-ingress accident scenarios and to collect experimental data. The current paper focuses on the analyses performed for the design and operation of a 1/8th geometric scale (by height and diameter), high-temperature test facility. A geometric scaling analysis for the VHTR, a time

  17. Rule-based category learning in children: the role of age and executive functioning.

    Directory of Open Access Journals (Sweden)

    Rahel Rabi

    Rule-based category learning was examined in 4-11 year-olds and adults. Participants were asked to learn a set of novel perceptual categories in a classification learning task. Categorization performance improved with age, with younger children showing the strongest rule-based deficit relative to older children and adults. Model-based analyses provided insight regarding the type of strategy being used to solve the categorization task, demonstrating that the use of the task appropriate strategy increased with age. When children and adults who identified the correct categorization rule were compared, the performance deficit was no longer evident. Executive functions were also measured. While both working memory and inhibitory control were related to rule-based categorization and improved with age, working memory specifically was found to marginally mediate the age-related improvements in categorization. When analyses focused only on the sample of children, results showed that working memory ability and inhibitory control were associated with categorization performance and strategy use. The current findings track changes in categorization performance across childhood, demonstrating at which points performance begins to mature and resemble that of adults. Additionally, findings highlight the potential role that working memory and inhibitory control may play in rule-based category learning.

  18. Evidence Based Cataloguing: Moving Beyond the Rules

    Directory of Open Access Journals (Sweden)

    Kathy Carter

    2010-12-01

    Cataloguing is sometimes regarded as a rule-bound, production-based activity that offers little scope for professional judgement and decision-making. In reality, cataloguing involves challenging decisions that can have significant service and financial impacts. The current environment for cataloguing is a maelstrom of changing demands and competing visions for the future. With information-seekers turning en masse to Google and their behaviour receiving greater attention, library vendors are offering “discovery layer” products to replace traditional OPACs, and cataloguers are examining and debating a transformed version of their descriptive cataloguing rules (Resource Description and Access, or RDA). In his “Perceptions of the future of cataloging: Is the sky really falling?” (2009), Ivey provides a good summary of this environment. At the same time, myriad new metadata formats and schema are being developed and applied for digital collections in libraries and other institutions. In today’s libraries, cataloguing is no longer limited to management of traditional AACR and MARC-based metadata for traditional library collections. And like their parent institutions, libraries cannot ignore growing pressures to demonstrate accountability and tangible value provided by their services. More than ever, research and an evidence based approach can help guide cataloguing decision-making.

  19. Acceleration-based methodology to assess the blast mitigation performance of explosive ordnance disposal helmets

    Science.gov (United States)

    Dionne, J. P.; Levine, J.; Makris, A.

    2018-01-01

    To design the next generation of blast mitigation helmets that offer increasing levels of protection against explosive devices, manufacturers must be able to rely on appropriate test methodologies and human surrogates that will differentiate the performance level of various helmet solutions and ensure user safety. Ideally, such test methodologies and associated injury thresholds should be based on widely accepted injury criteria relevant within the context of blast. Unfortunately, even though significant research has taken place over the last decade in the area of blast neurotrauma, there currently exists no agreement in terms of injury mechanisms for blast-induced traumatic brain injury. In the absence of such widely accepted test methods and injury criteria, the current study presents a specific blast test methodology focusing on explosive ordnance disposal protective equipment, involving the readily available Hybrid III mannequin, initially developed for the automotive industry. The unlikely applicability of the associated brain injury criteria (based on both linear and rotational head acceleration) is discussed in the context of blast. Test results encompassing a large number of blast configurations and personal protective equipment are presented, emphasizing the possibility of developing useful correlations between blast parameters, such as the scaled distance, and mannequin engineering measurements (head acceleration). Suggestions are put forward for a practical standardized blast testing methodology taking into account limitations in the applicability of acceleration-based injury criteria as well as the inherent variability in blast testing results.
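
    The blast-parameter correlations mentioned above are typically expressed in terms of the Hopkinson–Cranz scaled distance. The minimal sketch below computes that quantity; the stand-off distances and charge masses are illustrative assumptions, not data taken from this study.

    # Hedged illustration: Hopkinson-Cranz scaled distance Z = R / W**(1/3).
    # The stand-off distances and charge masses below are hypothetical examples.
    def scaled_distance(stand_off_m: float, charge_kg: float) -> float:
        """Return the scaled distance Z in m/kg^(1/3)."""
        return stand_off_m / charge_kg ** (1.0 / 3.0)

    if __name__ == "__main__":
        for r, w in [(1.0, 0.5), (2.0, 1.0), (4.0, 5.0)]:
            print(f"R = {r} m, W = {w} kg TNT eq. -> Z = {scaled_distance(r, w):.2f} m/kg^(1/3)")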

  20. Top Level Space Cost Methodology (TLSCM)

    Science.gov (United States)

    1997-12-02

    [Only fragments of this report are indexed.] Contents include: Ground Rules and Assumptions; Typical Life Cycle Cost Distribution; Methodologies (cost/budget threshold, analogy), based on real-time Air Force and space programs (Ref. 25:2-8, 2-9); ACEIT: Automated Cost Estimating Integrated Tools (ACEIT), Tecolote Research, Inc. There is a way to use the ACEIT cost program to get a print-out of an expanded WBS. Therefore, find someone that has ACEIT experience and

  1. Evaluation and Selection of Best Priority Sequencing Rule in Job Shop Scheduling using Hybrid MCDM Technique

    Science.gov (United States)

    Kiran Kumar, Kalla; Nagaraju, Dega; Gayathri, S.; Narayanan, S.

    2017-05-01

    Priority Sequencing Rules provide the guidance for the order in which the jobs are to be processed at a workstation. The application of different priority rules in job shop scheduling gives different orders of scheduling. More experimentation needs to be conducted before a final choice of the best priority sequencing rule can be made. Hence, a comprehensive method of making the right selection is essential from a managerial decision-making perspective. This paper considers seven different priority sequencing rules in job shop scheduling. For evaluation and selection of the best priority sequencing rule, a set of eight criteria are considered. The aim of this work is to demonstrate the methodology of evaluating and selecting the best priority sequencing rule by using a hybrid multi-criteria decision making (MCDM) technique, i.e., the analytical hierarchy process (AHP) combined with the technique for order preference by similarity to ideal solution (TOPSIS). The criteria weights are calculated by using AHP whereas the relative closeness values of all priority sequencing rules are computed based on TOPSIS with the help of data acquired from the shop floor of a manufacturing firm. Finally, from the findings of this work, the priority sequencing rules are ranked from most important to least important. The comprehensive methodology presented in this paper is very much essential for the management of a workstation to choose the best priority sequencing rule among the available alternatives for processing the jobs with maximum benefit.
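
    As a hedged illustration of the TOPSIS step described in this record, the sketch below ranks a few sequencing rules by their relative closeness to the ideal solution. The decision matrix, the AHP-derived criteria weights and the benefit/cost designations are hypothetical placeholders, not the shop-floor data used by the authors.

    # Minimal TOPSIS sketch for ranking priority sequencing rules.
    # The matrix, weights (assumed to come from an AHP step) and benefit flags
    # below are hypothetical placeholders.
    import numpy as np

    def topsis(matrix, weights, benefit):
        X = np.asarray(matrix, dtype=float)
        w = np.asarray(weights, dtype=float)
        # Vector-normalize each criterion column, then apply the weights.
        V = X / np.sqrt((X ** 2).sum(axis=0)) * w
        # Ideal best/worst depend on whether a criterion is benefit or cost type.
        best = np.where(benefit, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
        d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
        return d_worst / (d_best + d_worst)  # relative closeness, higher is better

    rules = ["FCFS", "SPT", "EDD", "LPT"]           # hypothetical alternatives
    matrix = [[12, 0.80, 5], [9, 0.70, 8], [10, 0.90, 4], [15, 0.60, 9]]
    weights = [0.5, 0.3, 0.2]                        # e.g. AHP-derived weights
    benefit = [False, True, False]                   # flow time/tardiness: cost; utilization: benefit
    closeness = topsis(matrix, weights, benefit)
    for rule, c in sorted(zip(rules, closeness), key=lambda t: -t[1]):
        print(f"{rule}: closeness = {c:.3f}")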

  2. GraDit: graph-based data repair algorithm for multiple data edits rule violations

    Science.gov (United States)

    Ode Zuhayeni Madjida, Wa; Gusti Bagus Baskara Nugraha, I.

    2018-03-01

    Constraint-based data cleaning captures data violations with respect to a set of rules called data quality rules. The rules consist of integrity constraints and data edits. Structurally, they are similar: each rule contains a left-hand side and a right-hand side. Previous research proposed a data repair algorithm for integrity constraint violations. The algorithm uses an undirected hypergraph as the rule violation representation. Nevertheless, this algorithm cannot be applied to data edits because of their different rule characteristics. This study proposes GraDit, a repair algorithm for data edit rules. First, we use a bipartite directed hypergraph as the model representation of all defined rules. This representation is used to capture the interaction between violated rules and clean rules. In addition, we propose an undirected graph as the violation representation. Our experimental study showed that the algorithm with an undirected graph as the violation representation model gave better data quality than the algorithm with an undirected hypergraph as the representation model.
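
    The GraDit representations themselves are not reproduced here, but the simpler underlying idea of recording which attribute cells jointly violate a data-edit rule as an undirected graph can be sketched roughly as follows. The records and the single edit rule are toy examples, not the datasets or rule sets used in the study.

    # Toy sketch (not the GraDit algorithm itself): record which attribute cells
    # jointly violate a data-edit rule as edges of an undirected violation graph.
    from itertools import combinations

    records = [
        {"age": 5, "marital_status": "married"},   # hypothetical dirty record
        {"age": 30, "marital_status": "single"},
    ]

    # Each edit rule returns the attributes involved in a violation, or None.
    def edit_rule_child_married(rec):
        if rec["age"] < 16 and rec["marital_status"] == "married":
            return ("age", "marital_status")
        return None

    edits = [edit_rule_child_married]

    violation_graph = {}   # adjacency: (record index, attribute) -> set of neighbours
    for i, rec in enumerate(records):
        for rule in edits:
            involved = rule(rec)
            if involved:
                for a, b in combinations(involved, 2):
                    violation_graph.setdefault((i, a), set()).add((i, b))
                    violation_graph.setdefault((i, b), set()).add((i, a))

    print(violation_graph)   # cells connected here are candidates for repair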

  3. Scale characters analysis for gully structure in the watersheds of loess landforms based on digital elevation models

    Science.gov (United States)

    Zhu, Hongchun; Zhao, Yipeng; Liu, Haiying

    2018-04-01

    Scale is the basic attribute for expressing and describing spatial entity and phenomena. It offers theoretical significance in the study of gully structure information, variable characteristics of watershed morphology, and development evolution at different scales. This research selected five different areas in China's Loess Plateau as the experimental region and used DEM data at different scales as the experimental data. First, the change rule of the characteristic parameters of the data at different scales was analyzed. The watershed structure information did not change along with a change in the data scale. This condition was proven by selecting indices of gully bifurcation ratio and fractal dimension as characteristic parameters of watershed structure information. Then, the change rule of the characteristic parameters of gully structure with different analysis scales was analyzed by setting the scale sequence of analysis at the extraction gully. The gully structure of the watershed changed with variations in the analysis scale, and the change rule was obvious when the gully level changed. Finally, the change rule of the characteristic parameters of the gully structure at different areas was analyzed. The gully fractal dimension showed a significant numerical difference in different areas, whereas the variation of the gully branch ratio was small. The change rule indicated that the development degree of the gully obviously varied in different regions, but the morphological structure was basically similar.
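
    Two of the watershed-structure indices named in this record, the gully bifurcation ratio and the fractal dimension, can be sketched as below. The gully counts per order and the point set standing in for extracted gully pixels are hypothetical, and box counting is only one of several ways to estimate a fractal dimension.

    # Minimal sketch of two watershed-structure indices: the gully bifurcation
    # ratio and a box-counting fractal dimension. Inputs are hypothetical.
    import numpy as np

    def bifurcation_ratio(counts_per_order):
        """Mean ratio N_u / N_(u+1) over successive gully (stream) orders."""
        ratios = [counts_per_order[i] / counts_per_order[i + 1]
                  for i in range(len(counts_per_order) - 1)]
        return sum(ratios) / len(ratios)

    def box_counting_dimension(points, box_sizes):
        """Slope of log N(s) versus log(1/s) for a 2-D point set."""
        pts = np.asarray(points, dtype=float)
        counts = []
        for s in box_sizes:
            boxes = {(int(x // s), int(y // s)) for x, y in pts}
            counts.append(len(boxes))
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
        return slope

    print(bifurcation_ratio([48, 14, 4, 1]))        # e.g. gully counts for orders 1..4
    rng = np.random.default_rng(0)
    toy_gully = rng.random((500, 2)) * 100.0        # stand-in for extracted gully pixels
    print(box_counting_dimension(toy_gully, [1, 2, 4, 8, 16]))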

  4. Scale characters analysis for gully structure in the watersheds of loess landforms based on digital elevation models

    Science.gov (United States)

    Zhu, Hongchun; Zhao, Yipeng; Liu, Haiying

    2018-06-01

    Scale is the basic attribute for expressing and describing spatial entity and phenomena. It offers theoretical significance in the study of gully structure information, variable characteristics of watershed morphology, and development evolution at different scales. This research selected five different areas in China's Loess Plateau as the experimental region and used DEM data at different scales as the experimental data. First, the change rule of the characteristic parameters of the data at different scales was analyzed. The watershed structure information did not change along with a change in the data scale. This condition was proven by selecting indices of gully bifurcation ratio and fractal dimension as characteristic parameters of watershed structure information. Then, the change rule of the characteristic parameters of gully structure with different analysis scales was analyzed by setting the scale sequence of analysis at the extraction gully. The gully structure of the watershed changed with variations in the analysis scale, and the change rule was obvious when the gully level changed. Finally, the change rule of the characteristic parameters of the gully structure at different areas was analyzed. The gully fractal dimension showed a significant numerical difference in different areas, whereas the variation of the gully branch ratio was small. The change rule indicated that the development degree of the gully obviously varied in different regions, but the morphological structure was basically similar.

  5. Scaling structure loads for SMA

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Won; Song, Jeong Guk; Jeon, Sang Ho; Lim, Hak Kyu; Lee, Kwang Nam [KEPCO ENC, Yongin (Korea, Republic of)

    2012-10-15

    When a Seismic Margin Analysis (SMA) is conducted, generating new structural loads for the Seismic Margin Earthquake (SME) is time-consuming work. For convenience, EPRI NP 6041 suggests scaling the structural loads. The report recommends this approach for fixed-base (rock foundation) structures designed using either constant modal damping or modal damping ratios developed for a single material damping. For these cases, the SME loads can easily and accurately be calculated by scaling the spectral accelerations of the individual modes for the new SME response spectra. EPRI NP 6041 provides two simple methodologies for scaling structural seismic loads: the dominant frequency scaling methodology and the mode-by-mode scaling methodology. Scaling an existing analysis to develop SME loads is much easier and more efficient than performing a new analysis. This paper is intended to compare the results of the two methodologies.
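
    A hedged sketch of the mode-by-mode idea is given below: each modal load is scaled by the ratio of the SME spectral acceleration to the design spectral acceleration at that mode's frequency, and the scaled modal responses are recombined (here by SRSS). The frequencies, modal loads and the two stand-in spectra are hypothetical, and the actual EPRI NP 6041 procedure involves further considerations.

    # Hedged sketch of mode-by-mode scaling of structural seismic loads.
    # Frequencies, modal loads and spectra are hypothetical placeholders.
    import numpy as np

    mode_freq_hz = np.array([3.2, 7.5, 12.0])
    design_modal_load = np.array([150.0, 90.0, 40.0])   # e.g. shear per mode, design basis

    def sa_design(f):   # stand-in design response spectrum (g)
        return 0.6 if f < 9.0 else 0.4

    def sa_sme(f):      # stand-in SME response spectrum (g)
        return 0.9 if f < 9.0 else 0.55

    scaled_modal_load = design_modal_load * np.array(
        [sa_sme(f) / sa_design(f) for f in mode_freq_hz])

    # Recombine the scaled modal responses, here with a simple SRSS combination.
    sme_load = np.sqrt((scaled_modal_load ** 2).sum())
    print(f"SME structural load (SRSS of scaled modes): {sme_load:.1f}")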

  6. Scaling structure loads for SMA

    International Nuclear Information System (INIS)

    Lee, Dong Won; Song, Jeong Guk; Jeon, Sang Ho; Lim, Hak Kyu; Lee, Kwang Nam

    2012-01-01

    When a Seismic Margin Analysis (SMA) is conducted, generating new structural loads for the Seismic Margin Earthquake (SME) is time-consuming work. For convenience, EPRI NP 6041 suggests scaling the structural loads. The report recommends this approach for fixed-base (rock foundation) structures designed using either constant modal damping or modal damping ratios developed for a single material damping. For these cases, the SME loads can easily and accurately be calculated by scaling the spectral accelerations of the individual modes for the new SME response spectra. EPRI NP 6041 provides two simple methodologies for scaling structural seismic loads: the dominant frequency scaling methodology and the mode-by-mode scaling methodology. Scaling an existing analysis to develop SME loads is much easier and more efficient than performing a new analysis. This paper is intended to compare the results of the two methodologies.

  7. Rules and Discretion in Monetary Policy: Is the Response of the Stock Market Rational?

    Directory of Open Access Journals (Sweden)

    Ion-Iulian MARINESCU

    2015-04-01

    Full Text Available We investigate the effects of the monetary policy conduct on the domestic capital market for a sample of developed countries where the capital market plays a significant role in the economy. We break down the policy rate innovations into rules-based and discretionary components in order to determine the degree of prudentiality in the monetary policy conduct, and we study how they relate to capital market rationality. The rules-based component is determined using an interpolated vanilla Taylor-rule policy rate at the event date and the discretionary component is obtained by subtracting the rules-based rate from the target monetary policy rate innovation. Using an event study approach, we analyze the impact of monetary policy components on the returns of the stock market and we determine that the conduct of the monetary policy can cause irrational responses of the capital market. More than that, we show, for the analyzed countries, that if the general level of discretion in the monetary policy is high, the response of the stock market becomes increasingly erratic, indicating that forward guidance may help reduce uncertainty on capital markets.
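
    A hedged sketch of the decomposition described above: a vanilla Taylor rule gives the rules-based rate, and the discretionary component is the difference between the announced target rate and that rules-based rate. All numerical inputs below are hypothetical, and the interpolation step used in the study is omitted.

    # Hedged sketch of splitting a policy rate into rules-based and discretionary parts.
    def taylor_rule(inflation, inflation_target, output_gap, neutral_real_rate=2.0):
        """Vanilla Taylor (1993) rule with 0.5 weights on both gaps (percent)."""
        return (neutral_real_rate + inflation
                + 0.5 * (inflation - inflation_target)
                + 0.5 * output_gap)

    announced_target_rate = 4.50    # policy rate announced at the event date (%), hypothetical
    rules_based_rate = taylor_rule(inflation=3.0, inflation_target=2.0, output_gap=-0.5)
    discretionary_component = announced_target_rate - rules_based_rate

    print(f"rules-based rate        : {rules_based_rate:.2f} %")
    print(f"discretionary component : {discretionary_component:+.2f} pp")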

  8. Genetic learning in rule-based and neural systems

    Science.gov (United States)

    Smith, Robert E.

    1993-01-01

    The design of neural networks and fuzzy systems can involve complex, nonlinear, and ill-conditioned optimization problems. Often, traditional optimization schemes are inadequate or inapplicable for such tasks. Genetic Algorithms (GA's) are a class of optimization procedures whose mechanics are based on those of natural genetics. Mathematical arguments show how GAs bring substantial computational leverage to search problems, without requiring the mathematical characteristics often necessary for traditional optimization schemes (e.g., modality, continuity, availability of derivative information, etc.). GA's have proven effective in a variety of search tasks that arise in neural networks and fuzzy systems. This presentation begins by introducing the mechanism and theoretical underpinnings of GA's. GA's are then related to a class of rule-based machine learning systems called learning classifier systems (LCS's). An LCS implements a low-level production-system that uses a GA as its primary rule discovery mechanism. This presentation illustrates how, despite its rule-based framework, an LCS can be thought of as a competitive neural network. Neural network simulator code for an LCS is presented. In this context, the GA is doing more than optimizing an objective function. It is searching for an ecology of hidden nodes with limited connectivity. The GA attempts to evolve this ecology such that effective neural network performance results. The GA is particularly well adapted to this task, given its naturally-inspired basis. The LCS/neural network analogy extends itself to other, more traditional neural networks. Conclusions to the presentation discuss the implications of using GA's in ecological search problems that arise in neural and fuzzy systems.

  9. A rule-based smart automated fertilization and irrigation systems

    Science.gov (United States)

    Yousif, Musab El-Rashid; Ghafar, Khairuddin; Zahari, Rahimi; Lim, Tiong Hoo

    2018-04-01

    Smart automation in industries has become very important as it can improve the reliability and efficiency of systems. The use of smart technologies in agriculture has increased over the years to ensure and control crop production and address food security. However, it is important to use proper irrigation systems to avoid water wastage and overfeeding of the plants. In this paper, a Smart Rule-based Automated Fertilization and Irrigation System is proposed and evaluated. We propose a rule-based decision-making algorithm to monitor and control the food supply to the plant and the soil quality. A built-in alert system is also used to update the farmer using a text message. The system is developed and evaluated using real hardware.
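
    The kind of rule-based decision loop described here can be sketched as follows. The sensor values, thresholds and alert hook are hypothetical stand-ins, not the hardware or rule set used in the paper.

    # Minimal sketch of a rule-based irrigation/fertilization decision loop.
    MOISTURE_LOW, MOISTURE_HIGH = 30.0, 60.0     # % volumetric water content (assumed)
    NUTRIENT_LOW = 1.2                           # e.g. soil EC in mS/cm (assumed)

    def send_alert(message: str) -> None:
        print(f"[SMS to farmer] {message}")      # placeholder for the text-message alert

    def control_step(moisture: float, nutrient: float) -> dict:
        actions = {"pump": False, "fertilizer_valve": False}
        if moisture < MOISTURE_LOW:              # rule 1: soil too dry -> irrigate
            actions["pump"] = True
        elif moisture > MOISTURE_HIGH:           # rule 2: soil too wet -> alert, no watering
            send_alert("Soil over-saturated; irrigation suspended.")
        if nutrient < NUTRIENT_LOW:              # rule 3: nutrients low -> dose fertilizer
            actions["fertilizer_valve"] = True
            send_alert("Fertilizer dosing started.")
        return actions

    print(control_step(moisture=25.0, nutrient=1.0))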

  10. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks

    Directory of Open Access Journals (Sweden)

    Runchun Mark Wang

    2015-05-01

    Full Text Available We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2^26 (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted and/or delayed pre-synaptic spike to the target synapse in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2^36 (64G) synaptic adaptors on a current high-end FPGA platform.
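
    For readers unfamiliar with the learning rule being implemented in hardware, a pair-based STDP weight update can be sketched as below. This is only the textbook form of the rule with illustrative constants; it does not reproduce the adaptor-array circuits described in the record.

    # Hedged sketch of a pair-based STDP weight update: potentiate when the
    # pre-synaptic spike precedes the post-synaptic spike, depress otherwise.
    import math

    A_PLUS, A_MINUS = 0.01, 0.012     # learning-rate amplitudes (illustrative)
    TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms (illustrative)

    def stdp_dw(t_pre_ms: float, t_post_ms: float) -> float:
        dt = t_post_ms - t_pre_ms     # > 0: pre before post -> potentiation
        if dt >= 0:
            return A_PLUS * math.exp(-dt / TAU_PLUS)
        return -A_MINUS * math.exp(dt / TAU_MINUS)

    weight = 0.5
    for t_pre, t_post in [(10.0, 15.0), (40.0, 35.0)]:   # toy spike pairings
        weight = min(1.0, max(0.0, weight + stdp_dw(t_pre, t_post)))
        print(f"pre={t_pre} ms, post={t_post} ms -> weight={weight:.4f}")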

  11. Recommendation System Based On Association Rules For Distributed E-Learning Management Systems

    Science.gov (United States)

    Mihai, Gabroveanu

    2015-09-01

    Traditional Learning Management Systems are installed on a single server where learning materials and user data are kept. To increase its performance, the Learning Management System can be installed on multiple servers; learning materials and user data could be distributed across these servers, obtaining a Distributed Learning Management System. In this paper, a prototype of a recommendation system based on association rules for a Distributed Learning Management System is proposed. Information from LMS databases is analyzed using distributed data mining algorithms in order to extract the association rules. Then the extracted rules are used as inference rules to provide personalized recommendations. The quality of the provided recommendations is improved because the rules used to make the inferences are more accurate, since these rules aggregate knowledge from all e-Learning systems included in the Distributed Learning Management System.
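
    A toy sketch of the inference step described here: if a learner's accessed materials contain a rule's antecedent, the consequent is recommended, ranked by confidence. The rule set and item names are hypothetical, not rules mined by the prototype.

    # Toy sketch: using already-extracted association rules as inference rules.
    rules = [  # (antecedent, consequent, confidence), assumed aggregated across LMS nodes
        (frozenset({"intro_video", "quiz_1"}), "chapter_2_notes", 0.82),
        (frozenset({"quiz_1"}), "practice_set_A", 0.67),
        (frozenset({"chapter_2_notes"}), "quiz_2", 0.74),
    ]

    def recommend(accessed, rules, top_n=3):
        accessed = set(accessed)
        candidates = {}
        for antecedent, consequent, conf in rules:
            if antecedent <= accessed and consequent not in accessed:
                candidates[consequent] = max(conf, candidates.get(consequent, 0.0))
        return sorted(candidates.items(), key=lambda kv: -kv[1])[:top_n]

    print(recommend({"intro_video", "quiz_1"}, rules))
    # [('chapter_2_notes', 0.82), ('practice_set_A', 0.67)]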

  12. Individualized adjustments to reference phantom internal organ dosimetry—scaling factors given knowledge of patient internal anatomy

    Science.gov (United States)

    Wayson, Michael B.; Bolch, Wesley E.

    2018-04-01

    Various computational tools are currently available that facilitate patient organ dosimetry in diagnostic nuclear medicine, yet they are typically restricted to reporting organ doses to ICRP-defined reference phantoms. The present study, while remaining computational phantom based, provides straightforward tools to adjust reference phantom organ dose for both internal photon and electron sources. A wide variety of monoenergetic specific absorbed fractions were computed using radiation transport simulations for tissue spheres of varying size and separation distance. Scaling methods were then constructed for both photon and electron self-dose and cross-dose, with data validation provided from patient-specific voxel phantom simulations, as well as via comparison to the scaling methodology given in MIRD Pamphlet No. 11. Photon and electron self-dose was found to be dependent on both radiation energy and sphere size. Photon cross-dose was found to be mostly independent of sphere size. Electron cross-dose was found to be dependent on sphere size when the spheres were in close proximity, owing to differences in electron range. The validation studies showed that this dataset was more effective than the MIRD 11 method at predicting patient-specific photon doses at both high and low energies, but gave similar results at photon energies between 100 keV and 1 MeV. The MIRD 11 method for electron self-dose scaling was accurate for lower energies but began to break down at higher energies. The photon cross-dose scaling methodology developed in this study showed gains in accuracy of up to 9% for actual patient studies, and the electron cross-dose scaling methodology showed gains in accuracy up to 9% as well when only the bremsstrahlung component of the cross-dose was scaled. These dose scaling methods are readily available for incorporation into internal dosimetry software for diagnostic phantom-based organ dosimetry.

  13. Vedic division methodology for high-speed very large scale integration applications

    Directory of Open Access Journals (Sweden)

    Prabir Saha

    2014-02-01

    Full Text Available A transistor-level implementation of a division methodology using ancient Vedic mathematics is reported in this Letter. The potential of the ‘Dhvajanka (on top of the flag)’ formula from Vedic mathematics was adopted to implement this type of divider for practical very large scale integration applications. The division methodology was implemented through half of the divisor bits instead of the actual divisor, subtraction and a little multiplication. The propagation delay and dynamic power consumption of the divider circuitry were minimised significantly by stage reduction through the Vedic division methodology. The functionality of the division algorithm was checked and performance parameters like propagation delay and dynamic power consumption were calculated through Spice (Spectre) with 90 nm complementary metal oxide semiconductor technology. The propagation delay of the resulting (32 ÷ 16)-bit divider circuitry was only ∼300 ns, and it consumed ∼32.5 mW of power for a layout area of 17.39 mm^2. By combining Boolean arithmetic with ancient Vedic mathematics, a substantial number of iterations was eliminated, resulting in ∼47, ∼38 and ∼34% reductions in delay and ∼34, ∼21 and ∼18% reductions in power compared with the most widely used architectures (e.g. digit-recurrence, Newton–Raphson, Goldschmidt).

  14. Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis

    International Nuclear Information System (INIS)

    Wang, M; Hu, N Q; Qin, G J

    2011-01-01

    In order to extract decision rules for fault diagnosis from incomplete historical test records, for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract the optimal generalized decision rules from incomplete information based on GrC was proposed. Based on a semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal general decision rule was introduced and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example for a power train, the application approach of the method was presented, and the validity of this method in knowledge acquisition was proved.

  15. Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Wang, M; Hu, N Q; Qin, G J, E-mail: hnq@nudt.edu.cn, E-mail: wm198063@yahoo.com.cn [School of Mechatronic Engineering and Automation, National University of Defense Technology, ChangSha, Hunan, 410073 (China)

    2011-07-19

    In order to extract decision rules for fault diagnosis from incomplete historical test records, for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract the optimal generalized decision rules from incomplete information based on GrC was proposed. Based on a semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal general decision rule was introduced and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example for a power train, the application approach of the method was presented, and the validity of this method in knowledge acquisition was proved.

  16. Risk analysis methodology in microbiological assays of 99mTc lyophilized kits

    International Nuclear Information System (INIS)

    Castro Isaac, Marilín; Suárez Beyries, Karina; Alberti Ramírez, Alejandro

    2016-01-01

    Operations with medications require compliance with the requirements established in Good Practices, with the objective of guaranteeing their quality, safety and efficacy. Good Manufacturing Practices for Pharmaceutical Products (BPF) are an essential part of the Quality Management System, which incorporates Quality Risk Assessment (ARC) as an integral part. The ARC system is aimed mainly at protecting the patient's health, which can be affected both by the lack of availability of medications and by inadequate quality, safety or efficacy of these products. As part of the ARC process, tools are applied with the objective of establishing a decision-making process on a scientific and practical basis. Failure Mode and Effects Analysis (AMFE) is one of the tools used to evaluate the failures that can occur in a process and their probable consequences for the results and/or the behavior of the products. In this work, the methodology established to carry out the evaluation of the risk of microbiological quality failure of the lyophilized kits produced at CENTIS is presented, using the AMFE method. Value scales were established for severity (S), occurrence probability (O) and detection probability (D), and risk levels were obtained from the resulting risk matrix.
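
    A hedged sketch of the scoring step common to FMEA-type analyses: each failure mode receives severity (S), occurrence (O) and detection (D) scores, and a risk priority number RPN = S × O × D is compared against an acceptance threshold. The failure modes, scores and threshold below are illustrative only, not the values established at CENTIS.

    # Illustrative FMEA (AMFE) risk scoring; all inputs are hypothetical.
    RPN_THRESHOLD = 100   # hypothetical acceptance limit from the risk matrix

    failure_modes = [
        # (description, S, O, D) on 1-10 scales
        ("Loss of sterility during vial filling", 9, 3, 5),
        ("Inadequate environmental monitoring",    6, 4, 3),
        ("Lyophilizer leak during drying cycle",   8, 2, 4),
    ]

    for name, s, o, d in failure_modes:
        rpn = s * o * d
        status = "mitigation required" if rpn >= RPN_THRESHOLD else "acceptable"
        print(f"{name:45s} RPN = {rpn:3d} -> {status}")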

  17. Rule-based Test Generation with Mind Maps

    Directory of Open Access Journals (Sweden)

    Dimitry Polivaev

    2012-02-01

    Full Text Available This paper introduces basic concepts of rule based test generation with mind maps, and reports experiences learned from industrial application of this technique in the domain of smart card testing by Giesecke & Devrient GmbH over the last years. It describes the formalization of test selection criteria used by our test generator, our test generation architecture and test generation framework.

  18. Integrated model-based retargeting and optical proximity correction

    Science.gov (United States)

    Agarwal, Kanak B.; Banerjee, Shayak

    2011-04-01

    Conventional resolution enhancement techniques (RET) are becoming increasingly inadequate at addressing the challenges of subwavelength lithography. In particular, features show high sensitivity to process variation in low-k1 lithography. Process variation aware RETs such as process-window OPC are becoming increasingly important to guarantee high lithographic yield, but such techniques suffer from high runtime impact. An alternative to PWOPC is to perform retargeting, which is a rule-assisted modification of target layout shapes to improve their process window. However, rule-based retargeting is not a scalable technique since rules cannot cover the entire search space of two-dimensional shape configurations, especially with technology scaling. In this paper, we propose to integrate the processes of retargeting and optical proximity correction (OPC). We utilize the normalized image log slope (NILS) metric, which is available at no extra computational cost during OPC. We use NILS to guide dynamic target modification between iterations of OPC. We utilize the NILS tagging capabilities of Calibre TCL scripting to identify fragments with low NILS. We then perform NILS binning to assign different magnitudes of retargeting to different NILS bins. NILS is determined both for width, to identify regions of pinching, and space, to locate regions of potential bridging. We develop an integrated flow for 1x metal lines (M1) which exhibits fewer lithographic hotspots compared to a flow with just OPC and no retargeting. We also observe cases where hotspots that existed in the rule-based retargeting flow are fixed using our methodology. Finally, we also demonstrate that such a retargeting methodology does not significantly alter design properties by electrically simulating a latch layout before and after retargeting. We observe less than 1% impact on latch Clk-Q and D-Q delays post-retargeting, which makes this methodology an attractive one for use in improving shape process windows.

  19. Simulating large-scale pedestrian movement using CA and event driven model: Methodology and case study

    Science.gov (United States)

    Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi

    2015-11-01

    Large-scale regional evacuation is an important part of national security emergency response plans. The large commercial shopping area is a typical service system, and its emergency evacuation is one of the hot research topics. A systematic methodology based on Cellular Automata with the Dynamic Floor Field and an event-driven model has been proposed, and the methodology has been examined within the context of a case study involving evacuation within a commercial shopping mall. Pedestrian walking is based on Cellular Automata and the event-driven model. In this paper, the event-driven model is adopted to simulate the pedestrian movement patterns; the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: environment layer, customer layer, clerk layer and trajectory layer. For the simulation of pedestrians' movement routes, the model takes into account the purchase intention of customers and the density of pedestrians. Based on the evacuation model of Cellular Automata with the Dynamic Floor Field and the event-driven model, we can reflect the behavior characteristics of customers and clerks in normal and emergency evacuation situations. The distribution of individual evacuation time as a function of initial positions and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model using the combination of Cellular Automata with the Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of a shopping mall.
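
    A heavily simplified sketch of one update step of a static-floor-field cellular automaton is given below: each pedestrian moves to the free neighbouring cell with the smallest distance to the exit. The grid, exit location and occupants are toy values, and the paper's dynamic floor field, four-layer structure and event-driven scheduling are not reproduced here.

    # Simplified static-floor-field CA step (toy values, not the paper's model).
    import numpy as np

    ROWS, COLS = 6, 8
    exit_cell = (0, 4)
    pedestrians = {(5, 1), (4, 6), (3, 3)}          # occupied cells

    # Static floor field: Manhattan distance of every cell to the exit.
    field = np.fromfunction(
        lambda r, c: np.abs(r - exit_cell[0]) + np.abs(c - exit_cell[1]), (ROWS, COLS))

    def step(pedestrians):
        new_positions = set()
        for r, c in pedestrians:
            options = [(r, c)] + [(r + dr, c + dc)
                                  for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                                  if 0 <= r + dr < ROWS and 0 <= c + dc < COLS]
            # Drop cells already claimed this step; stay put if nothing is free
            # (simplified conflict handling), then pick the lowest field value.
            options = [p for p in options if p not in new_positions] or [(r, c)]
            new_positions.add(min(options, key=lambda p: field[p]))
        return new_positions

    for t in range(3):
        pedestrians = step(pedestrians)
        print(f"t={t + 1}: {sorted(pedestrians)}")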

  20. Annotation of rule-based models with formal semantics to enable creation, analysis, reuse and visualization

    Science.gov (United States)

    Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil

    2016-01-01

    Motivation: Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. Results: We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given, using specific implementations the proposed techniques can be applied to rule-based models in general. Availability and implementation: The annotation ontology for rule-based models can be found at http

  1. Moving from Rule-based to Principle-based in Public Sector: Preparers' Perspective

    OpenAIRE

    Roshayani Arshad; Normah Omar; Siti Fatimah Awang

    2013-01-01

    The move from cash accounting to accrual accounting, or rule-based to principle-based accounting, by many governments is part of ongoing efforts to promote a more business-like and performance-focused public sector. Using questionnaire responses from preparers of financial statements of public universities in Malaysia, this study examines the implementation challenges and benefits of principle-based accounting. Results from these responses suggest that most respondents perceived signific...

  2. Rules of Thumb from the Literature on Research and Evaluation.

    Science.gov (United States)

    Lai, Morris K.

    Practical advice on frequently asked questions dealing with research and evaluation methodology is presented as rules of thumb, with citations to the author's sources. A statement in the literature is considered a rule of thumb if it meets one of the following criteria: (1) it is specifically called a rule of thumb; (2) it contains numbers in…

  3. Reactor similarity for plasma–material interactions in scaled-down tokamaks as the basis for the Vulcan conceptual design

    International Nuclear Information System (INIS)

    Whyte, D.G.; Olynyk, G.M.; Barnard, H.S.; Bonoli, P.T.; Bromberg, L.; Garrett, M.L.; Haakonsen, C.B.; Hartwig, Z.S.; Mumgaard, R.T.; Podpaly, Y.A.

    2012-01-01

    Highlights: ► Discussion of similarity scalings for reduced-size tokamaks. ► Proposal of a new set of scaling laws for divertor similarity. ► Discussion of how the new scaling provides fidelity to a reactor. ► The new scaling is used as the basis for the Vulcan conceptual design. - Abstract: Dimensionless parameter scaling techniques are a powerful tool in the study of complex physical systems, especially in tokamak fusion experiments where the cost of full-size devices is high. It is proposed that dimensionless similarity be used to study in a small-scale device the coupled issues of the scrape-off layer (SOL) plasma, plasma–material interactions (PMI), and the plasma-facing material (PFM) response expected in a tokamak fusion reactor. Complete similarity is not possible in a reduced-size device. In addition, “hard” technological limits on the achievable magnetic field and peak heat flux, as well as the necessity to produce non-inductive scenarios, must be taken into account. A practical approach is advocated, in which the most important dimensionless parameters are matched to a reactor in the reduced-size device, while relaxing those parameters which are far from a threshold in behavior. “Hard” technological limits are avoided, so that the reduced-size device is technologically feasible. A criticism on these grounds is offered of the “P/R” model, in which the ratio of power crossing the last closed flux surface (LCFS), P, to the device major radius, R, is held constant. A new set of scaling rules, referred to as the “P/S” scaling (where S is the LCFS area) or the “PMI” scaling, is proposed: (i) non-inductive, steady-state operation; (ii) P is scaled with R^2 so that the LCFS areal power flux P/S is constant; (iii) magnetic field B constant; (iv) geometry (elongation, safety factor q*, etc.) constant; (v) volume-averaged core density scaled as n ≈ n̄_e ∼ R^(−2/7); and (vi) ambient wall material temperature T_W,0 constant. It is
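
    A hedged illustration of applying the quoted "P/S" scaling rules to a smaller device is sketched below: P scales with R^2 (constant power flux through the LCFS area), B and wall temperature are held fixed, and the core density scales as R^(-2/7). The reference values are placeholders, not Vulcan or reactor design numbers.

    # Illustrative application of the P/S scaling rules; reference values are hypothetical.
    R_ref, P_ref_MW, B_ref_T, n_ref = 6.2, 120.0, 5.3, 1.0e20   # stand-in reference device

    def ps_scaled_device(R):
        scale = R / R_ref
        return {
            "R [m]": R,
            "P [MW]": P_ref_MW * scale ** 2,      # P ~ R^2, so P/S is constant
            "B [T]": B_ref_T,                     # magnetic field held constant
            "n [m^-3]": n_ref * scale ** (-2.0 / 7.0),
        }

    for key, val in ps_scaled_device(R=1.2).items():
        print(f"{key:10s}: {val:.3g}")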

  4. Enhancing reliable online transaction with intelligent rule-based ...

    African Journals Online (AJOL)

    Enhancing reliable online transactions with an intelligent rule-based fraud detection technique. ... These measures are aimed at reducing, amongst other things, the cost of production and also at discouraging the poor handling of Nigerian currency. The CBN pronouncement has necessitated the upsurge in transactions completed with credit ...

  5. Reservoir adaptive operating rules based on both of historical streamflow and future projections

    Science.gov (United States)

    Zhang, Wei; Liu, Pan; Wang, Hao; Chen, Jie; Lei, Xiaohui; Feng, Maoyuan

    2017-10-01

    Climate change is affecting hydrological variables and consequently is impacting water resources management. Historical strategies are no longer applicable under climate change. Therefore, adaptive management, especially adaptive operating rules for reservoirs, has been developed to mitigate the possible adverse effects of climate change. However, to date, adaptive operating rules are generally based on future projections involving uncertainties under climate change, yet ignoring historical information. To address this, we propose an approach for deriving adaptive operating rules considering both historical information and future projections, namely historical and future operating rules (HAFOR). A robustness index was developed by comparing benefits from HAFOR with benefits from conventional operating rules (COR). For both historical and future streamflow series, maximizations of both average benefits and the robustness index were employed as objectives, and four trade-offs were implemented to solve the multi-objective problem. Based on the integrated objective, the simulation-based optimization method was used to optimize the parameters of HAFOR. Using the Dongwushi Reservoir in China as a case study, HAFOR was demonstrated to be an effective and robust method for developing adaptive operating rules under the uncertain changing environment. Compared with historical or projected future operating rules (HOR or FPOR), HAFOR can reduce the uncertainty and increase the robustness for future projections, especially regarding results of reservoir releases and volumes. HAFOR, therefore, facilitates adaptive management in the context that climate change is difficult to predict accurately.

  6. A rule-based phase control methodology for a slider-crank wave energy converter power take-off system

    Energy Technology Data Exchange (ETDEWEB)

    Sang, Yuanrui; Karayaka, H. Bora; Yan, Yanjun; Zhang, James Z.; Bogucki, Darek; Yu, Yi-Hsiang

    2017-09-01

    The slider crank is a proven mechanical linkage system with a long history of successful applications, and the slider-crank ocean wave energy converter (WEC) is a type of WEC that converts linear motion into rotation. This paper presents a control algorithm for a slider-crank WEC. In this study, a time-domain hydrodynamic analysis is adopted, and an AC synchronous machine is used in the power take-off system to achieve relatively high system performance. Also, a rule-based phase control strategy is applied to maximize energy extraction, making the system suitable for not only regular sinusoidal waves but also irregular waves. Simulations are carried out under regular sinusoidal wave and synthetically produced irregular wave conditions; performance validations are also presented with high-precision, real ocean wave surface elevation data. The influences of significant wave height, and peak period upon energy extraction of the system are studied. Energy extraction results using the proposed method are compared to those of the passive loading and complex conjugate control strategies; results show that the level of energy extraction is between those of the passive loading and complex conjugate control strategies, and the suboptimal nature of this control strategy is verified.

  7. A rule-based stemmer for Arabic Gulf dialect

    Directory of Open Access Journals (Sweden)

    Belal Abuata

    2015-04-01

    Full Text Available Arabic dialects have been widely used for many years instead of the Modern Standard Arabic language in many fields. The presence of dialects in any language is a big challenge. Dialects add a new set of variational dimensions in some fields like natural language processing, information retrieval and even Arabic chatting between different Arab nationals. Spoken dialects have no standard morphology, phonology and lexicon like Modern Standard Arabic. Hence, the objective of this paper is to describe a procedure or algorithm by which a stem for the Arabian Gulf dialect can be defined. The algorithm is rule based. Special rules are created to remove the suffixes and prefixes of the dialect words. Also, the algorithm applies rules related to the word size and the relation between adjacent letters. The algorithm was tested on a number of words and gave a good correct-stem ratio. The algorithm was also compared with two Modern Standard Arabic algorithms. The results showed that the Modern Standard Arabic stemmers performed poorly with the Arabic Gulf dialect and our algorithm performed poorly when applied to Modern Standard Arabic words.

  8. Analysis and minimization of overtraining effect in rule-based classifiers for computer-aided diagnosis

    International Nuclear Information System (INIS)

    Li Qiang; Doi Kunio

    2006-01-01

    Computer-aided diagnostic (CAD) schemes have been developed to assist radiologists detect various lesions in medical images. In CAD schemes, classifiers play a key role in achieving a high lesion detection rate and a low false-positive rate. Although many popular classifiers such as linear discriminant analysis and artificial neural networks have been employed in CAD schemes for reduction of false positives, a rule-based classifier has probably been the simplest and most frequently used one since the early days of development of various CAD schemes. However, with existing rule-based classifiers, there are major disadvantages that significantly reduce their practicality and credibility. The disadvantages include manual design, poor reproducibility, poor evaluation methods such as resubstitution, and a large overtraining effect. An automated rule-based classifier with a minimized overtraining effect can overcome or significantly reduce the extent of the above-mentioned disadvantages. In this study, we developed an 'optimal' method for the selection of cutoff thresholds and a fully automated rule-based classifier. Experimental results performed with Monte Carlo simulation and a real lung nodule CT data set demonstrated that the automated threshold selection method can completely eliminate overtraining effect in the procedure of cutoff threshold selection, and thus can minimize overall overtraining effect in the constructed rule-based classifier. We believe that this threshold selection method is very useful in the construction of automated rule-based classifiers with minimized overtraining effect
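
    A toy sketch of the kind of rule-based classifier discussed in this record is given below: a candidate lesion is kept only if every feature passes its cutoff thresholds. The feature names and thresholds are hypothetical, and the automated threshold-selection procedure developed by the authors is not shown.

    # Toy rule-based classifier with cutoff thresholds; all values are hypothetical.
    THRESHOLDS = {           # feature name -> (minimum, maximum) accepted values
        "diameter_mm": (3.0, 30.0),
        "mean_hu": (-100.0, 200.0),
        "circularity": (0.4, 1.0),
    }

    def is_true_positive(candidate: dict) -> bool:
        return all(lo <= candidate[f] <= hi for f, (lo, hi) in THRESHOLDS.items())

    candidates = [
        {"diameter_mm": 6.2, "mean_hu": 35.0, "circularity": 0.8},
        {"diameter_mm": 1.5, "mean_hu": 60.0, "circularity": 0.9},   # rejected: too small
    ]
    print([is_true_positive(c) for c in candidates])   # [True, False]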

  9. Optimization of Simple Monetary Policy Rules on the Base of Estimated DSGE-model

    OpenAIRE

    Shulgin, A.

    2015-01-01

    Optimization of the coefficients in monetary policy rules is performed on the basis of a DSGE model with two independent monetary policy instruments estimated on Russian data. It was found that welfare-maximizing policy rules lead to an inadequate result and a pro-cyclical monetary policy. Optimal coefficients in the Taylor rule and the exchange rate rule allow a decrease in volatility, estimated on Russian data for 2001-2012, of about 20%. The degree of exchange rate flexibility parameter was found to be low...

  10. CFD aided analysis of a scaled down model of the Brazilian Multipurpose Reactor (RMB) pool

    International Nuclear Information System (INIS)

    Schweizer, Fernando L.A.; Lima, Claubia P.B.; Costa, Antonella L.; Veloso, Maria A.F.

    2013-01-01

    Research reactors are commonly built inside deep pools that provide radiological and thermal protection and easy access to the core. Reactors with thermal power on the order of MW usually use an auxiliary thermal-hydraulic circuit at the top of the pool to create a purified hot water layer (HWL). Thermal-hydraulic analysis of the flow configuration in the pool and HWL is paramount to ensure radiological protection. A useful tool for these analyses is the application of CFD (Computational Fluid Dynamics). To obtain satisfactory results using CFD, it is necessary to verify and validate the CFD numerical model. Verification is divided into code and solution verification. The former establishes the correctness of the CFD code implementation and the latter estimates the numerical accuracy of a particular calculation. Validation is performed through comparison of numerical and experimental results. This paper presents a dimensional analysis of the RMB (Brazilian Multipurpose Reactor) pool to determine a scaled-down experimental installation able to aid in the HWL numerical investigation. Two CFD models were created, one with the same dimensions and boundary conditions as the reactor prototype and the other at 1/10 scale, with boundary conditions set to achieve the same ratio of inertial to buoyancy forces, represented by the Froude number, between the two models. Results comparing the HWL thickness show consistency between the prototype and the scaled-down model behavior. (author)
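
    A hedged sketch of Froude-number similarity between a full-size pool and a 1/10 scaled model: matching Fr = U / sqrt(g·L) yields the velocity, time and volumetric-flow scale factors shown below. The prototype inlet velocity is a placeholder, not an RMB design value.

    # Froude similarity scale factors for a 1/10 geometric scale (hypothetical inputs).
    import math

    length_scale = 1.0 / 10.0                 # model length / prototype length
    U_prototype = 0.40                        # hypothetical HWL inlet velocity, m/s

    U_model = U_prototype * math.sqrt(length_scale)   # Fr_model = Fr_prototype
    time_scale = math.sqrt(length_scale)              # t_model / t_prototype
    flow_scale = length_scale ** 2.5                  # Q ~ U * L^2

    print(f"model inlet velocity : {U_model:.3f} m/s")
    print(f"time scale factor    : {time_scale:.3f}")
    print(f"flow-rate scale      : {flow_scale:.2e}")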

  11. Analysis of correlation between pediatric asthma exacerbation and exposure to pollutant mixtures with association rule mining.

    Science.gov (United States)

    Toti, Giulia; Vilalta, Ricardo; Lindner, Peggy; Lefer, Barry; Macias, Charles; Price, Daniel

    2016-11-01

    Traditional studies on effects of outdoor pollution on asthma have been criticized for questionable statistical validity and inefficacy in exploring the effects of multiple air pollutants, alone and in combination. Association rule mining (ARM), a method easily interpretable and suitable for the analysis of the effects of multiple exposures, could be of use, but the traditional interest metrics of support and confidence need to be substituted with metrics that focus on risk variations caused by different exposures. We present an ARM-based methodology that produces rules associated with relevant odds ratios and limits the number of final rules even at very low support levels (0.5%), thanks to post-pruning criteria that limit rule redundancy and control for statistical significance. The methodology has been applied to a case-crossover study to explore the effects of multiple air pollutants on risk of asthma in pediatric subjects. We identified 27 rules with interesting odds ratios among the more than 10,000 rules having the required support. The only rule including a single chemical is exposure to ozone on the day before the reported asthma attack (OR=1.14). The 26 combinatory rules highlight the limitations of air quality policies based on single pollutant thresholds and suggest that exposure to mixtures of chemicals is more harmful, with odds ratios as high as 1.54 (associated with the combination day0 SO2, day0 NO, day0 NO2, day1 PM). The proposed method can be used to analyze risk variations caused by single and multiple exposures. The method is reliable and requires fewer assumptions on the data than parametric approaches. Rules including more than one pollutant highlight interactions that deserve further investigation, while helping to limit the search field. Copyright © 2016 Elsevier B.V. All rights reserved.
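
    A hedged sketch of how an odds ratio (with a 95% confidence interval) can be attached to a rule of the form "exposure combination -> exacerbation" from a 2×2 table of exposed/unexposed case and control periods. The counts are hypothetical, not the study's data, and this is only the standard Woolf-style calculation rather than the paper's pruning pipeline.

    # Odds ratio and 95% CI for a single association rule; counts are hypothetical.
    import math

    def odds_ratio(a, b, c, d):
        """a: exposed cases, b: unexposed cases, c: exposed controls, d: unexposed controls."""
        or_ = (a * d) / (b * c)
        se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - 1.96 * se)
        hi = math.exp(math.log(or_) + 1.96 * se)
        return or_, (lo, hi)

    # e.g. a rule such as {day0 SO2, day0 NO, day0 NO2, day1 PM} -> exacerbation
    or_, ci = odds_ratio(a=46, b=154, c=30, d=170)
    print(f"OR = {or_:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")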

  12. Association-rule-based tuberculosis disease diagnosis

    Science.gov (United States)

    Asha, T.; Natarajan, S.; Murthy, K. N. B.

    2010-02-01

    Tuberculosis (TB) is a disease caused by bacteria called Mycobacterium tuberculosis. It usually spreads through the air and attacks people with low immunity, such as patients with Human Immunodeficiency Virus (HIV). This work focuses on finding close association rules, a promising technique in Data Mining, within TB data. The proposed method first normalizes the raw data from medical records, which include categorical, nominal and continuous attributes, and then determines association rules from the normalized data with different support and confidence values. The association rules are applied to a real data set containing medical records of patients with TB obtained from a state hospital. The rules determined describe close associations between symptoms; for example, the likelihood that an occurrence of sputum is closely associated with blood cough and HIV.

  13. Top-down models in biology: explanation and control of complex living systems above the molecular level.

    Science.gov (United States)

    Pezzulo, Giovanni; Levin, Michael

    2016-11-01

    It is widely assumed in developmental biology and bioengineering that optimal understanding and control of complex living systems follows from models of molecular events. The success of reductionism has overshadowed attempts at top-down models and control policies in biological systems. However, other fields, including physics, engineering and neuroscience, have successfully used the explanations and models at higher levels of organization, including least-action principles in physics and control-theoretic models in computational neuroscience. Exploiting the dynamic regulation of pattern formation in embryogenesis and regeneration requires new approaches to understand how cells cooperate towards large-scale anatomical goal states. Here, we argue that top-down models of pattern homeostasis serve as proof of principle for extending the current paradigm beyond emergence and molecule-level rules. We define top-down control in a biological context, discuss the examples of how cognitive neuroscience and physics exploit these strategies, and illustrate areas in which they may offer significant advantages as complements to the mainstream paradigm. By targeting system controls at multiple levels of organization and demystifying goal-directed (cybernetic) processes, top-down strategies represent a roadmap for using the deep insights of other fields for transformative advances in regenerative medicine and systems bioengineering. © 2016 The Author(s).

  14. Rule-based detection of intrathoracic airway trees

    International Nuclear Information System (INIS)

    Sonka, M.; Park, W.; Hoffman, E.A.

    1996-01-01

    New sensitive and reliable methods for assessing alterations in regional lung structure and function are critically important for the investigation and treatment of pulmonary diseases. Accurate identification of the airway tree will provide an assessment of airway structure and will provide a means by which multiple volumetric images of the lung at the same lung volume over time can be used to assess regional parenchymal changes. The authors describe a novel rule-based method for the segmentation of airway trees from three-dimensional (3-D) sets of computed tomography (CT) images, and its validation. The presented method takes advantage of a priori anatomical knowledge about pulmonary airway and vascular trees and their interrelationships. The method is based on a combination of 3-D seeded region growing that is used to identify large airways, rule-based two-dimensional (2-D) segmentation of individual CT slices to identify probable locations of smaller diameter airways, and merging of airway regions across the 3-D set of slices resulting in a tree-like airway structure. The method was validated in 40 3-mm-thick CT sections from five data sets of canine lungs scanned via electron beam CT in vivo with lung volume held at a constant pressure. The method's performance was compared with that of the conventional 3-D region growing method. The method substantially outperformed an existing conventional approach to airway tree detection
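
    A simplified sketch of the 3-D seeded region growing used for the large airways is given below: starting from a seed voxel inside the trachea, neighbouring voxels whose CT value falls below an air-like threshold are flood-filled. The synthetic volume and threshold are illustrative, and the paper's rule-based 2-D stage and cross-slice merging are not reproduced.

    # Simplified 3-D seeded region growing on a synthetic CT volume.
    from collections import deque
    import numpy as np

    def region_grow_3d(volume, seed, threshold=-500):
        """Return a boolean mask of 6-connected voxels below `threshold` (HU)."""
        grown = np.zeros(volume.shape, dtype=bool)
        if volume[seed] >= threshold:
            return grown              # seed must lie inside the airway lumen
        grown[seed] = True
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                        and 0 <= nx < volume.shape[2] and not grown[nz, ny, nx]
                        and volume[nz, ny, nx] < threshold):
                    grown[nz, ny, nx] = True
                    queue.append((nz, ny, nx))
        return grown

    # Synthetic demo: a soft-tissue block (+40 HU) with an air-filled tube (-1000 HU).
    ct = np.full((20, 32, 32), 40, dtype=np.int16)
    ct[:, 14:18, 14:18] = -1000
    mask = region_grow_3d(ct, seed=(0, 15, 15))
    print("airway voxels found:", int(mask.sum()))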

  15. The 5S methodology as a tool for improving the organisation

    OpenAIRE

    J. Michalska; D. Szewieczek

    2007-01-01

    Purpose: The aim of this paper is to present the 5S methodology. The paper introduces the way of implementing the 5S methodology in a company. Design/methodology/approach: Within the frame of our own research, the 5S rules were analysed and implemented in the production process. Findings: On the basis of our own research it can be stated that introducing the 5S rules brings great changes to the company, for example: process improvement through cost reduction, increasing of effectivene...

  16. Rule Induction-Based Knowledge Discovery for Energy Efficiency

    OpenAIRE

    Chen, Qipeng; Fan, Zhong; Kaleshi, Dritan; Armour, Simon M D

    2015-01-01

    Rule induction is a practical approach to knowledge discovery. Provided that a problem is developed, rule induction is able to return the knowledge that addresses the goal of this problem as if-then rules. The primary goals of knowledge discovery are for prediction and description. The rule format knowledge representation is easily understandable so as to enable users to make decisions. This paper presents the potential of rule induction for energy efficiency. In particular, three rule induct...

  17. Neural Substrates of Similarity and Rule-based Strategies in Judgment

    Directory of Open Access Journals (Sweden)

    Bettina eVon Helversen

    2014-10-01

    Full Text Available Making accurate judgments is a core human competence and a prerequisite for success in many areas of life. Plenty of evidence exists that people can employ different judgment strategies to solve identical judgment problems. In categorization, it has been demonstrated that similarity-based and rule-based strategies are associated with activity in different brain regions. Building on this research, the present work tests whether solving two identical judgment problems recruits different neural substrates depending on people's judgment strategies. Combining cognitive modeling of judgment strategies at the behavioral level with functional magnetic resonance imaging (fMRI), we compare brain activity when using two archetypal judgment strategies: a similarity-based exemplar strategy and a rule-based heuristic strategy. Using an exemplar-based strategy should recruit areas involved in long-term memory processes to a larger extent than a heuristic strategy. In contrast, using a heuristic strategy should recruit areas involved in the application of rules to a larger extent than an exemplar-based strategy. Largely consistent with our hypotheses, we found that using an exemplar-based strategy led to relatively higher BOLD activity in the anterior prefrontal and inferior parietal cortex, presumably related to retrieval and selective attention processes. In contrast, using a heuristic strategy led to relatively higher activity in areas in the dorsolateral prefrontal and the temporal-parietal cortex associated with cognitive control and information integration. Thus, even when people solve identical judgment problems, different neural substrates can be recruited depending on the judgment strategy involved.

  18. Behavior-based rules for fitness-for-duty assessment of nuclear power plant personnel

    International Nuclear Information System (INIS)

    Kennedy, R.S.

    1989-01-01

    The safe and reliable operation of nuclear power plants requires that plant personnel not be under the influence of any substance, legal or illegal, or mentally or physically impaired from any cause that in any way adversely affects their ability to safely and competently perform their duties. This goal has been formalized by the US Nuclear Regulatory Commission in their proposed rule for a fitness-for-duty program. The purpose of this paper is to describe a performance-based tool based on surrogate tests and dose equivalency methodologies that is a viable candidate for fitness-for-duty assessment. The automated performance test system (APTS) is a microcomputer-based human performance test battery that has been developed over a decade of research supported variously by the National Science Foundation, National Aeronautics and Space Administration, US Department of Energy, and the US Navy and Army. Representing the most psychometrically sound test from evaluations of over 150 well-known tests of basic psychomotor and cognitive skills, the battery provides direct prediction of a worker's fitness for duty. Twenty-four tests are suitable for use, and a dozen have thus far been shown to be sensitive to the effects of legal and illegal drugs, alcohol, fatigue, stress, and other causes of impairment

  19. Radiation protection optimization using a knowledge based methodology

    International Nuclear Information System (INIS)

    Reyes-Jimenez, J.; Tsoukalas, L.H.

    1991-01-01

    This paper presents a knowledge based methodology for radiological planning and radiation protection optimization. The cost-benefit methodology described in International Commission on Radiological Protection Report No. 37 is employed within a knowledge based framework for the purpose of planning maintenance activities while optimizing radiation protection. 1, 2 The methodology is demonstrated through an application to a heating, ventilation and air conditioning (HVAC) system. HVAC is used to reduce radioactivity concentration levels in selected contaminated multi-compartment models at nuclear power plants when higher than normal radiation levels are detected. The overall objective is to reduce personnel exposure resulting from airborne radioactivity when routine or maintenance access is required in contaminated areas. 2 figs, 15 refs

  20. Rule-based control of off-grid desalination powered by renewable energies

    Directory of Open Access Journals (Sweden)

    Alvaro Serna

    2015-08-01

    Full Text Available A rule-based control is presented for desalination plants operating under variable, renewable power availability. This control algorithm is based on two sets of rules: first, a list that prioritizes the reverse osmosis (RO) units of the plant is created, based on the current state and the expected water demand; secondly, the available energy is then dispatched to these units following this prioritized list. The selected strategy is tested on a specific case study: a reverse osmosis plant designed for the production of desalinated water powered by wind and wave energy. Simulation results illustrate the correct performance of the plant under this control.
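    The two-rule structure described in this record can be sketched in a few lines; the snippet below is a hypothetical illustration (unit attributes, the priority score, and all numbers are assumptions, not the authors' controller): rule one ranks the RO units, rule two walks the ranked list and switches units on while renewable power remains.

        # Hypothetical sketch of a two-rule dispatch: prioritize RO units, then
        # allocate the currently available renewable power down the list.

        def prioritize(units, expected_demand_m3):
            # favour units already running (cheap to keep online) and, for high
            # expected demand, units with larger capacity -- an assumed scoring rule
            def score(u):
                return (1.0 if u["running"] else 0.0) + u["capacity_m3_h"] / max(expected_demand_m3, 1.0)
            return sorted(units, key=score, reverse=True)

        def dispatch(units, available_kw, expected_demand_m3):
            plan = []
            for u in prioritize(units, expected_demand_m3):
                if available_kw >= u["power_kw"]:
                    plan.append(u["name"])
                    available_kw -= u["power_kw"]
            return plan, available_kw  # units to run this period, leftover power

        units = [
            {"name": "RO-1", "power_kw": 40, "capacity_m3_h": 10, "running": True},
            {"name": "RO-2", "power_kw": 40, "capacity_m3_h": 10, "running": False},
            {"name": "RO-3", "power_kw": 25, "capacity_m3_h": 5, "running": False},
        ]
        print(dispatch(units, available_kw=70, expected_demand_m3=12))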

  1. Mining Context-Aware Association Rules Using Grammar-Based Genetic Programming.

    Science.gov (United States)

    Luna, Jose Maria; Pechenizkiy, Mykola; Del Jesus, Maria Jose; Ventura, Sebastian

    2017-09-25

    Real-world data usually comprise features whose interpretation depends on some contextual information. Such context-sensitive features and patterns are of high interest to be discovered and analyzed in order to obtain the right meaning. This paper formulates the problem of mining context-aware association rules, which refers to the search for associations between itemsets such that the strength of their implication depends on a contextual feature. For the discovery of this type of associations, a model that restricts the search space and includes syntax constraints by means of a grammar-based genetic programming methodology is proposed. Grammars can be considered as a useful way of introducing subjective knowledge to the pattern mining process as they are highly related to the background knowledge of the user. The performance and usefulness of the proposed approach are examined by considering synthetically generated datasets. A posteriori analysis on different domains is also carried out to demonstrate the utility of this kind of associations. For example, in educational domains, it is essential to identify and understand contextual and context-sensitive factors that affect overall and individual student behavior and performance. The results of the experiments suggest that the approach is feasible and it automatically identifies interesting context-aware associations from real-world datasets.

  2. Cellular automata rule characterization and classification using texture descriptors

    Science.gov (United States)

    Machicao, Jeaneth; Ribas, Lucas C.; Scabini, Leonardo F. S.; Bruno, Odermir M.

    2018-05-01

    The cellular automata (CA) spatio-temporal patterns have attracted the attention of many researchers, since they can provide emergent behavior resulting from the dynamics of each individual cell. In this manuscript, we propose an approach of texture image analysis to characterize and classify CA rules. The proposed method converts the CA spatio-temporal patterns into a gray-scale image. The gray-scale is obtained by creating a binary number based on the 8-connected neighborhood of each dot of the CA spatio-temporal pattern. We demonstrate that this technique enhances the CA rule characterization and allows the use of different texture image analysis algorithms. Thus, various texture descriptors were evaluated in a supervised training approach aiming to characterize the CA's global evolution. Our results show the efficiency of the proposed method for the classification of the elementary CA (ECAs), reaching a maximum of 99.57% accuracy according to the Li-Packard scheme (6 classes) and 94.36% for the classification of the 88 rules scheme. Moreover, within the image analysis context, we found a better performance of the method by means of a transformation of the binary states to a gray-scale.
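    The binary-to-gray-scale conversion described here is easy to prototype; the sketch below is an illustrative reading of it (the bit ordering of the neighbourhood and the choice of rule 110 are assumptions): it evolves an elementary CA and then encodes, for every cell of the spatio-temporal pattern, its eight neighbours as one bit each of an 8-bit gray value.

        # Illustrative sketch: elementary CA pattern -> gray-scale texture image.
        # The neighbour bit ordering and the example rule are assumptions.
        import numpy as np

        def evolve_eca(rule=110, width=101, steps=100):
            table = [(rule >> i) & 1 for i in range(8)]          # rule lookup table
            row = np.zeros(width, dtype=np.uint8)
            row[width // 2] = 1                                  # single-seed initial condition
            pattern = [row.copy()]
            for _ in range(steps - 1):
                left, right = np.roll(row, 1), np.roll(row, -1)
                idx = (left << 2) | (row << 1) | right           # 3-cell neighbourhood code
                row = np.array([table[i] for i in idx], dtype=np.uint8)
                pattern.append(row.copy())
            return np.array(pattern)                             # (steps, width) array of 0/1

        def to_grayscale(binary):
            # each of the 8-connected neighbours contributes one bit of the gray value
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
            padded = np.pad(binary, 1, mode="constant")
            gray = np.zeros(binary.shape, dtype=np.int32)
            for bit, (dy, dx) in enumerate(offsets):
                neigh = padded[1 + dy:1 + dy + binary.shape[0], 1 + dx:1 + dx + binary.shape[1]]
                gray += neigh.astype(np.int32) << bit
            return gray.astype(np.uint8)                         # values in [0, 255]

        gray = to_grayscale(evolve_eca())
        print(gray.shape, int(gray.min()), int(gray.max()))      # texture descriptors would be computed on this image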

  3. DEVELOP-FPS: a First Person Shooter Development Tool for Rule-based Scripts

    Directory of Open Access Journals (Sweden)

    Bruno Correia

    2012-09-01

    Full Text Available We present DEVELOP-FPS, a software tool specially designed for the development of First Person Shooter (FPS) players controlled by Rule Based Scripts. DEVELOP-FPS may be used by FPS developers to create, debug, maintain and compare rule-based player behaviours, providing a set of useful functionalities: (i) easy preparation of the right scenarios for game debugging and testing; (ii) control of the game execution: users can stop and resume the game execution at any instant, monitoring and controlling every player in the game, monitoring the state of each player and their rule base activation, and being able to issue commands to control their behaviour; and (iii) automatic execution of a certain number of game runs and collection of data in order to evaluate and compare the players' performance over a sufficient number of similar experiments.

  4. Investigation of the falling water flow with evaporation for the passive containment cooling system and its scaling-down criteria

    Science.gov (United States)

    Li, Cheng; Li, Junming; Li, Le

    2018-02-01

    Falling water evaporation cooling could efficiently suppress the containment operating pressure during a nuclear accident, by continually removing the core decay heat to the atmospheric environment. In order to identify the process of large-scale falling water evaporation cooling, the water flow characteristics of falling film, film rupture and falling rivulet were deduced on the basis of previous correlation studies. The influences of the contact angle, water temperature and water flow rate on water coverage along the flow direction were then numerically obtained, and the results were compared with the data for the AP1000 and CAP1400 nuclear power plants. By these comparisons, it is concluded that the coverage fraction of falling water can be enhanced by either reducing the surface contact angle or increasing the water temperature. The falling water flow with evaporation for the AP1000 containment was then calculated and the behaviour of its water coverage fraction was analyzed. Finally, based on the phenomena identification of falling water flow for AP1000 containment evaporation cooling, the scaling-down was performed and the dimensionless criteria were obtained.

  5. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    Science.gov (United States)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-Learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, Web-based Mobile-Learning Systems have created many new ways and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem which causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a Web-based Mobile-Learning System collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners and so on. Therefore, this paper focuses on a new data-mining algorithm that combines the advantages of the genetic algorithm and the simulated annealing algorithm, called ARGSA (Association rules based on an improved Genetic Simulated Annealing Algorithm), to mine association rules. This paper first takes advantage of a Parallel Genetic Algorithm and a Simulated Annealing Algorithm designed specifically for discovering association rules. Moreover, analysis and experiments are also presented to show that the proposed method is superior to the Apriori algorithm in this Mobile-Learning system.
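    As a minimal illustration of what such a search optimizes (not the ARGSA implementation itself), the sketch below scores a candidate rule X -> Y on a handful of hypothetical learner transactions by combining its support and confidence; a genetic or simulated-annealing search would evolve the itemsets while maximizing a fitness of this kind.

        # Hypothetical sketch: support/confidence fitness for a candidate rule X -> Y,
        # the kind of objective a genetic / simulated-annealing search would maximise.

        def support(itemset, transactions):
            return sum(itemset <= t for t in transactions) / len(transactions)

        def rule_fitness(antecedent, consequent, transactions, w_sup=0.5, w_conf=0.5):
            sup_x = support(antecedent, transactions)
            sup_xy = support(antecedent | consequent, transactions)
            confidence = sup_xy / sup_x if sup_x else 0.0
            return w_sup * sup_xy + w_conf * confidence   # weights are an illustrative choice

        transactions = [                                   # invented learner profiles
            {"video_lesson", "quiz", "pass"},
            {"video_lesson", "forum", "pass"},
            {"quiz", "fail"},
            {"video_lesson", "quiz", "pass"},
        ]
        print(rule_fitness({"video_lesson", "quiz"}, {"pass"}, transactions))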

  6. A methodology for physically based rockfall hazard assessment

    Directory of Open Access Journals (Sweden)

    G. B. Crosta

    2003-01-01

    Full Text Available Rockfall hazard assessment is not simple to achieve in practice, and sound, physically based assessment methodologies are still missing. The mobility of rockfalls implies a more difficult hazard definition with respect to other slope instabilities with minimal runout. Rockfall hazard assessment involves complex definitions for "occurrence probability" and "intensity". This paper is an attempt to evaluate rockfall hazard using the results of 3-D numerical modelling on a topography described by a DEM. Maps portraying the maximum frequency of passages, velocity and height of blocks at each model cell are easily combined in a GIS in order to produce physically based rockfall hazard maps. Different methods are suggested and discussed for rockfall hazard mapping at a regional and local scale, both along linear features and within exposed areas. An objective approach based on three-dimensional matrixes providing both a positional "Rockfall Hazard Index" and a "Rockfall Hazard Vector" is presented. The opportunity of combining different parameters in the 3-D matrixes has been evaluated to better express the relative increase in hazard. Furthermore, the sensitivity of the hazard index with respect to the included variables and their combinations is preliminarily discussed in order to constrain assessment criteria that are as objective as possible.

  7. Domain-based Teaching Strategy for Intelligent Tutoring System Based on Generic Rules

    Science.gov (United States)

    Kseibat, Dawod; Mansour, Ali; Adjei, Osei; Phillips, Paul

    In this paper we present a framework for selecting the proper instructional strategy for a given teaching material based on its attributes. The new approach is based on a flexible design by means of generic rules. The framework was adapted in an Intelligent Tutoring System to teach Modern Standard Arabic to adult English-speaking learners with no prior knowledge of Arabic.

  8. Cloud-based computation for accelerating vegetation mapping and change detection at regional to national scales

    Science.gov (United States)

    Matthew J. Gregory; Zhiqiang Yang; David M. Bell; Warren B. Cohen; Sean Healey; Janet L. Ohmann; Heather M. Roberts

    2015-01-01

    Mapping vegetation and landscape change at fine spatial scales is needed to inform natural resource and conservation planning, but such maps are expensive and time-consuming to produce. For Landsat-based methodologies, mapping efforts are hampered by the daunting task of manipulating multivariate data for millions to billions of pixels. The advent of cloud-based...

  9. A low-power photovoltaic system with energy storage for radio communications: Description and design methodology

    Science.gov (United States)

    Chapman, C. P.; Chapman, P. D.; Lewison, A. H.

    1982-01-01

    A low power photovoltaic system was constructed with approximately 500 amp hours of battery energy storage to provide power to an emergency amateur radio communications center. The system can power the communications center for about 72 hours of continuous nonsun operation. Complete construction details and a design methodology algorithm are given with abundant engineering data and adequate theory to allow similar systems to be constructed, scaled up or down, with minimum design effort.

  10. HOW ECOLOGICAL COMMUNITIES ARE STRUCTURED: A REVIEW ON ECOLOGICAL ASSEMBLY RULES

    Directory of Open Access Journals (Sweden)

    Gabriel Jaime Colorado Zuluaga

    Full Text Available Whether biological communities are deterministic or stochastic assemblages of species has long been a central topic of ecology. The widely demonstrated presence of structural patterns in nature may imply the existence of rules that regulate the organization of ecological communities. In this review, I present a compilation of major assembly rules that underpin, to a large extent, community assembly theory. Initially, I present a general overview of key concepts associated with the assembly of communities, in particular the origin of assembly rules, their definition, the problem of scale and the underlying mechanisms in the structure of ecological communities. Subsequently, two major approaches or paradigms (i.e. species-based and trait-based) for the assembly of communities are discussed. Finally, major tested assembly rules are explored and discussed in the light of the available published literature.

  11. Rule-based modularization in model transformation languages illustrated with ATL

    NARCIS (Netherlands)

    Ivanov, Ivan; van den Berg, Klaas; Jouault, Frédéric

    2007-01-01

    This paper studies ways for modularizing transformation definitions in current rule-based model transformation languages. Two scenarios are shown in which the modular units are identified on the basis of relations between source and target metamodels and on the base of generic transformation

  12. Lack of cross-scale linkages reduces robustness of community-based fisheries management.

    Directory of Open Access Journals (Sweden)

    Richard Cudney-Bueno

    Full Text Available Community-based management and the establishment of marine reserves have been advocated worldwide as means to overcome overexploitation of fisheries. Yet, researchers and managers are divided regarding the effectiveness of these measures. The "tragedy of the commons" model is often accepted as a universal paradigm, which assumes that unless managed by the State or privatized, common-pool resources are inevitably overexploited due to conflicts between the self-interest of individuals and the goals of a group as a whole. Under this paradigm, the emergence and maintenance of effective community-based efforts that include cooperative risky decisions, such as the establishment of marine reserves, could not occur. In this paper, we question these assumptions and show that outcomes of commons dilemmas can be complex and scale-dependent. We studied the evolution and effectiveness of a community-based management effort to establish, monitor, and enforce a marine reserve network in the Gulf of California, Mexico. Our findings build on social and ecological research before (1997-2001), during (2002), and after (2003-2004) the establishment of marine reserves, which included participant observation in >100 fishing trips and meetings, interviews, as well as fishery dependent and independent monitoring. We found that locally crafted and enforced harvesting rules led to a rapid increase in resource abundance. Nevertheless, news about this increase spread quickly at a regional scale, resulting in poaching from outsiders and a subsequent rapid cascading effect on fishing resources and locally-designed rule compliance. We show that cooperation for management of common-pool fisheries, in which marine reserves form a core component of the system, can emerge, evolve rapidly, and be effective at a local scale even in recently organized fisheries. Stakeholder participation in monitoring, where there is a rapid feedback of the systems response, can play a key role in reinforcing

  13. A hierarchical fuzzy rule-based approach to aphasia diagnosis.

    Science.gov (United States)

    Akbarzadeh-T, Mohammad-R; Moshtagh-Khorasani, Majid

    2007-10-01

    Aphasia diagnosis is a particularly challenging medical diagnostic task due to linguistic uncertainty and vagueness, inconsistencies in the definition of aphasic syndromes, the large number of imprecise measurements, and the natural diversity and subjectivity in test subjects as well as in the opinions of the experts who diagnose the disease. To efficiently address this diagnostic process, a hierarchical fuzzy rule-based structure is proposed here that considers the effect of different features of aphasia by statistical analysis in its construction. This approach can be efficient for diagnosis of aphasia and possibly other medical diagnostic applications due to its fuzzy and hierarchical reasoning construction. Initially, the symptoms of the disease, each of which consists of different features, are analyzed statistically. The measured statistical parameters from the training set are then used to define the membership functions and the fuzzy rules. The resulting two-layered fuzzy rule-based system is then compared with a back propagating feed-forward neural network for diagnosis of four Aphasia types: Anomic, Broca, Global and Wernicke. In order to reduce the number of required inputs, the technique is applied and compared on both comprehensive and spontaneous speech tests. Statistical t-test analysis confirms that the proposed approach uses fewer Aphasia features while also presenting a significant improvement in terms of accuracy.
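    The two-layer idea can be sketched generically as follows; everything below (feature names, statistics, and the rules themselves) is hypothetical and only illustrates the structure, not the cited system: layer one turns measured features into symptom memberships via Gaussian membership functions built from training-set statistics, and layer two combines those memberships with fuzzy rules into a diagnosis.

        # Hypothetical two-layer fuzzy sketch: Gaussian memberships from training
        # statistics (layer 1), then min/max fuzzy rules for the diagnosis (layer 2).
        import math

        def gaussian_mf(x, mean, std):
            return math.exp(-0.5 * ((x - mean) / std) ** 2)

        # layer 1: (mean, std) per feature would come from statistical analysis
        SYMPTOM_STATS = {
            "poor_fluency":       {"speech_rate": (40.0, 15.0)},
            "poor_comprehension": {"comprehension_score": (30.0, 10.0)},
        }

        def diagnose(features):
            mu = {s: min(gaussian_mf(features[f], m, sd) for f, (m, sd) in stats.items())
                  for s, stats in SYMPTOM_STATS.items()}
            # layer 2: fuzzy rules (min = AND, 1 - x = NOT); rules are illustrative
            rules = {
                "Broca":    min(mu["poor_fluency"], 1 - mu["poor_comprehension"]),
                "Wernicke": min(1 - mu["poor_fluency"], mu["poor_comprehension"]),
                "Global":   min(mu["poor_fluency"], mu["poor_comprehension"]),
            }
            return max(rules, key=rules.get), rules

        print(diagnose({"speech_rate": 45.0, "comprehension_score": 80.0}))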

  14. Phenomena based Methodology for Process Synthesis incorporating Process Intensification

    DEFF Research Database (Denmark)

    Lutze, Philip; Babi, Deenesh Kavi; Woodley, John

    2013-01-01

    Process intensification (PI) has the potential to improve existing as well as conceptual processes, in order to achieve a more sustainable production. PI can be achieved at different levels, that is, the unit operations, functional and/or phenomena level. The highest impact is expected by looking at processes at the lowest level of aggregation, which is the phenomena level. In this paper, a phenomena based synthesis/design methodology incorporating process intensification is presented. Using this methodology, a systematic identification of necessary and desirable (integrated) phenomena as well as generation and screening of phenomena based flowsheet options are presented using a decomposition based solution approach. The developed methodology as well as necessary tools and supporting methods are highlighted through a case study involving the production of isopropyl-acetate.

  15. Fusion of Thresholding Rules During Wavelet-Based Noisy Image Compression

    Directory of Open Access Journals (Sweden)

    Bekhtin Yury

    2016-01-01

    Full Text Available A new method for combining semisoft thresholding rules during wavelet-based data compression of images with multiplicative noise is suggested. The method chooses the best thresholding rule and the threshold value using the proposed criteria, which provide the best nonlinear approximations and take into consideration quantization errors. The results of computer modeling have shown that the suggested method provides relatively good image quality after restoration in the sense of criteria such as PSNR, SSIM, etc.
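    For readers unfamiliar with the rules being fused, the sketch below implements hard, soft, and semisoft ("firm") thresholding on an array of wavelet coefficients; the threshold values are arbitrary illustrative choices, and the fusion criterion of the record is not reproduced.

        # Illustrative thresholding rules on wavelet coefficients; thresholds are
        # arbitrary and the paper's fusion criterion is not reproduced here.
        import numpy as np

        def hard(c, t):
            return np.where(np.abs(c) > t, c, 0.0)

        def soft(c, t):
            return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

        def semisoft(c, t1, t2):
            # firm thresholding: zero below t1, identity above t2, linear ramp between
            a = np.abs(c)
            ramp = np.sign(c) * t2 * (a - t1) / (t2 - t1)
            return np.where(a <= t1, 0.0, np.where(a >= t2, c, ramp))

        coeffs = np.array([-3.2, -1.1, -0.4, 0.2, 0.9, 1.6, 4.0])
        print(hard(coeffs, 1.0))
        print(soft(coeffs, 1.0))
        print(semisoft(coeffs, 1.0, 2.5))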

  16. A methodology for ecosystem-scale modeling of selenium

    Science.gov (United States)

    Presser, T.S.; Luoma, S.N.

    2010-01-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure
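    The linked calculation this record describes can be sketched as a short chain of factors; the parameter values below are hypothetical placeholders (real applications use site-specific partitioning and trophic transfer factors), and the same chain is inverted to translate a fish-tissue guideline into an allowable dissolved concentration.

        # Hypothetical sketch of the linked water -> particulate -> invertebrate -> fish
        # chain; Kd and trophic transfer factors (TTF) below are illustrative only.

        def fish_se(dissolved_ug_L, kd_L_kg, ttf_invert, ttf_fish):
            particulate_ug_g = dissolved_ug_L * kd_L_kg / 1000.0   # ug/L * L/kg -> ug/g
            invertebrate_ug_g = particulate_ug_g * ttf_invert
            return invertebrate_ug_g * ttf_fish                    # predicted fish tissue, ug/g

        def allowed_dissolved(fish_guideline_ug_g, kd_L_kg, ttf_invert, ttf_fish):
            # invert the chain: dissolved concentration consistent with a tissue guideline
            return fish_guideline_ug_g * 1000.0 / (kd_L_kg * ttf_invert * ttf_fish)

        kd, ttf_i, ttf_f = 1000.0, 2.8, 1.1                        # invented site parameters
        print("fish tissue (ug/g):", fish_se(2.0, kd, ttf_i, ttf_f))
        print("dissolved Se (ug/L) for an 8 ug/g guideline:",
              allowed_dissolved(8.0, kd, ttf_i, ttf_f))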

  17. Modeling industrial centrifugation of mammalian cell culture using a capillary based scale-down system.

    Science.gov (United States)

    Westoby, Matthew; Rogers, Jameson K; Haverstock, Ryan; Romero, Jonathan; Pieracci, John

    2011-05-01

    Continuous-flow centrifugation is widely utilized as the primary clarification step in the recovery of biopharmaceuticals from cell culture. However, it is a challenging operation to develop and characterize due to the lack of easy-to-use, small-scale systems that can be used to model industrial processes. As a result, pilot-scale continuous centrifugation is typically employed to model large-scale systems, requiring a significant amount of resources. In an effort to reduce resource requirements and create a system which is easy to construct and utilize, a capillary shear device, capable of producing energy dissipation rates equivalent to those present in the feed zones of industrial disk stack centrifuges, was developed and evaluated. When coupled to a bench-top, batch centrifuge, the capillary device reduced centrate turbidity prediction error from 37% to 4% compared to using a bench-top centrifuge alone. Laboratory-scale parameters that are analogous to those routinely varied during industrial-scale continuous centrifugation were identified and evaluated for their utility in emulating disk stack centrifuge performance. The resulting relationships enable bench-scale process modeling of continuous disk stack centrifuges using an easily constructed, scalable, capillary shear device coupled to a typical bench-top centrifuge. Copyright © 2010 Wiley Periodicals, Inc.

  18. Using GO-WAR for mining cross-ontology weighted association rules.

    Science.gov (United States)

    Agapito, Giuseppe; Cannataro, Mario; Guzzi, Pietro Hiram; Milano, Marianna

    2015-07-01

    The Gene Ontology (GO) is a structured repository of concepts (GO terms) that are associated with one or more gene products. The process of association is referred to as annotation. The relevance and the specificity of both GO terms and annotations are evaluated by a measure defined as information content (IC). The analysis of annotated data is thus an important challenge for bioinformatics. There exist different approaches of analysis. Among those, the use of association rules (AR) may provide useful knowledge, and it has been used in some applications, e.g. improving the quality of annotations. Nevertheless, classical association rule algorithms take into account neither the source of an annotation nor its importance, yielding candidate rules with low IC. This paper presents GO-WAR (Gene Ontology-based Weighted Association Rules), a methodology for extracting weighted association rules. GO-WAR can extract association rules with a high level of IC without loss of support and confidence from a dataset of annotated data. A case study on the use of GO-WAR on publicly available GO annotation datasets is used to demonstrate that our method outperforms current state of the art approaches. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
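    The weighting idea can be illustrated in a few lines (this is not the GO-WAR algorithm, and the annotation counts are invented): the information content of a term is IC(t) = -log2 p(t), where p(t) is its annotation frequency, and a candidate rule can be weighted by the mean IC of the terms it involves so that overly generic terms are penalized.

        # Illustrative IC weighting of an association rule over GO terms;
        # annotation counts are invented and the GO hierarchy is ignored here.
        import math

        annotation_counts = {          # term -> number of annotated gene products
            "GO:0008150": 10000,       # very generic term
            "GO:0006915": 400,
            "GO:0042981": 150,
        }
        total = sum(annotation_counts.values())

        def ic(term):
            return -math.log2(annotation_counts[term] / total)

        def rule_weight(antecedent, consequent):
            terms = list(antecedent) + list(consequent)
            return sum(ic(t) for t in terms) / len(terms)   # mean IC of the rule's terms

        print(rule_weight({"GO:0006915"}, {"GO:0042981"}))
        print(rule_weight({"GO:0008150"}, {"GO:0006915"}))  # generic term drags the weight down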

  19. Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles.

    Science.gov (United States)

    Pasquier, M; Quek, C; Toh, M

    2001-10-01

    This paper presents part of our research work concerned with the realisation of an Intelligent Vehicle and the technologies required for its routing, navigation, and control. An automated driver prototype has been developed using a self-organising fuzzy rule-based system (POPFNN-CRI(S)) to model and subsequently emulate human driving expertise. The ability of fuzzy logic to represent vague information using linguistic variables makes it a powerful tool to develop rule-based control systems when an exact working model is not available, as is the case for any vehicle-driving task. Designing a fuzzy system, however, is a complex endeavour, due to the need to define the variables and their associated fuzzy sets, and to determine a suitable rule base. Many efforts have thus been devoted to automating this process, yielding the development of learning and optimisation techniques. One of them is the family of POP-FNNs, or Pseudo-Outer Product Fuzzy Neural Networks (TVR, AARS(S), AARS(NS), CRI, Yager). These generic self-organising neural networks developed at the Intelligent Systems Laboratory (ISL/NTU) are based on formal fuzzy mathematical theory and are able to objectively extract a fuzzy rule base from training data. In this application, a driving simulator has been developed that integrates a detailed model of the car dynamics, complete with engine characteristics and environmental parameters, and an OpenGL-based 3D-simulation interface coupled with a driving wheel and accelerator/brake pedals. The simulator has been used on various road scenarios to record, from a human pilot, driving data consisting of steering and speed control actions associated with road features. Specifically, the POPFNN-CRI(S) system is used to cluster the data and extract a fuzzy rule base modelling the human driving behaviour. Finally, the effectiveness of the generated rule base has been validated using the simulator in autopilot mode.

  20. A methodology to investigate size scale effects in crystalline plasticity using uniaxial compression testing

    International Nuclear Information System (INIS)

    Uchic, Michael D.; Dimiduk, Dennis M.

    2005-01-01

    A methodology for performing uniaxial compression tests on samples having micron-size dimensions is presented. Sample fabrication is accomplished using focused ion beam milling to create cylindrical samples of uniform cross-section that remain attached to the bulk substrate at one end. Once fabricated, samples are tested in uniaxial compression using a nanoindentation device outfitted with a flat tip, and a stress-strain curve is obtained. The methodology can be used to examine the plastic response of samples of different sizes that are from the same bulk material. In this manner, dimensional size effects at the micron scale can be explored for single crystals, using a readily interpretable test that minimizes imposed stretch and bending gradients. The methodology was applied to a single-crystal Ni superalloy and a transition from bulk-like to size-affected behavior was observed for samples 5 μm in diameter and smaller

  1. Successful adaptation of three-dimensional inversion methodologies for archaeological-scale, total-field magnetic data sets

    Science.gov (United States)

    Cheyney, S.; Fishwick, S.; Hill, I. A.; Linford, N. T.

    2015-08-01

    Despite the development of advanced processing and interpretation tools for magnetic data sets in the fields of mineral and hydrocarbon industries, these methods have not achieved similar levels of adoption for archaeological or very near surface surveys. Using a synthetic data set we demonstrate that certain methodologies and assumptions used to successfully invert more regional-scale data can lead to large discrepancies between the true and recovered depths when applied to archaeological-type anomalies. We propose variations to the current approach, analysing the choice of the depth-weighting function, mesh design and parameter constraints, to develop an appropriate technique for the 3-D inversion of archaeological-scale data sets. The results show a successful recovery of a synthetic scenario, as well as a case study of a Romano-Celtic temple in the UK. For the case study, the final susceptibility model is compared with two coincident ground penetrating radar surveys, showing a high correlation with the comparative depth slices. The new approach takes interpretation of archaeological data sets beyond a simple 2-D visual interpretation based on pattern recognition.

  2. Methodology, Measurement and Analysis of Flow Table Update Characteristics in Hardware OpenFlow Switches

    KAUST Repository

    Kuźniar, Maciej

    2018-02-15

    Software-Defined Networking (SDN) and OpenFlow are actively being standardized and deployed. These deployments rely on switches that come from various vendors and differ in terms of performance and available features. Understanding these differences and performance characteristics is essential for ensuring successful and safe deployments. We propose a systematic methodology for SDN switch performance analysis and devise a series of experiments based on this methodology. The methodology relies on sending a stream of rule updates, while observing both the control plane view as reported by the switch and the data plane state obtained by probing, and determines switch characteristics by comparing these views. We measure, report and explain the performance characteristics of flow table updates in six hardware OpenFlow switches. Our results describing rule update rates can help SDN designers make their controllers efficient. Further, we also highlight differences between the OpenFlow specification and its implementations, which, if ignored, pose a serious threat to network security and correctness.

  3. Research on key technology of the verification system of steel rule based on vision measurement

    Science.gov (United States)

    Jia, Siyuan; Wang, Zhong; Liu, Changjie; Fu, Luhua; Li, Yiming; Lu, Ruijun

    2018-01-01

    The steel rule plays an important role in quantity transmission. However, the traditional verification method of the steel rule, based on manual operation and reading, results in low precision and low efficiency. A machine vision based verification system of the steel rule is designed with reference to JJG1-1999 Verification Regulation of Steel Rule [1]. What differentiates this system is that it uses a new calibration method for the pixel equivalent and decontaminates the surface of the steel rule. Experiments show that these two methods fully meet the requirements of the verification system. Measuring results strongly prove that these methods not only meet the precision requirements of the verification regulation, but also improve the reliability and efficiency of the verification system.

  4. Top-down feedback in an HMAX-like cortical model of object perception based on hierarchical Bayesian networks and belief propagation.

    Directory of Open Access Journals (Sweden)

    Salvador Dura-Bernal

    Full Text Available Hierarchical generative models, such as Bayesian networks, and belief propagation have been shown to provide a theoretical framework that can account for perceptual processes, including feedforward recognition and feedback modulation. The framework explains both psychophysical and physiological experimental data and maps well onto the hierarchical distributed cortical anatomy. However, the complexity required to model cortical processes makes inference, even using approximate methods, very computationally expensive. Thus, existing object perception models based on this approach are typically limited to tree-structured networks with no loops, use small toy examples or fail to account for certain perceptual aspects such as invariance to transformations or feedback reconstruction. In this study we develop a Bayesian network with an architecture similar to that of HMAX, a biologically-inspired hierarchical model of object recognition, and use loopy belief propagation to approximate the model operations (selectivity and invariance. Crucially, the resulting Bayesian network extends the functionality of HMAX by including top-down recursive feedback. Thus, the proposed model not only achieves successful feedforward recognition invariant to noise, occlusions, and changes in position and size, but is also able to reproduce modulatory effects such as illusory contour completion and attention. Our novel and rigorous methodology covers key aspects such as learning using a layerwise greedy algorithm, combining feedback information from multiple parents and reducing the number of operations required. Overall, this work extends an established model of object recognition to include high-level feedback modulation, based on state-of-the-art probabilistic approaches. The methodology employed, consistent with evidence from the visual cortex, can be potentially generalized to build models of hierarchical perceptual organization that include top-down and bottom

  5. Quantitative Earthquake Prediction on Global and Regional Scales

    International Nuclear Information System (INIS)

    Kossobokov, Vladimir G.

    2006-01-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolation of a trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing aimed at prediction of the largest earthquakes worldwide has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  6. Quantitative Earthquake Prediction on Global and Regional Scales

    Science.gov (United States)

    Kossobokov, Vladimir G.

    2006-03-01

    The Earth is a hierarchy of volumes of different size. Driven by planetary convection, these volumes are involved in joint and relative movement. The movement is controlled by a wide variety of processes on and around the fractal mesh of boundary zones, and does produce earthquakes. This hierarchy of movable volumes composes a large non-linear dynamical system. Prediction of such a system in the sense of extrapolation of a trajectory into the future is futile. However, upon coarse-graining, integral empirical regularities emerge, opening possibilities of prediction in the sense of the commonly accepted consensus definition worked out in 1976 by the US National Research Council. Implications of understanding the hierarchical nature of the lithosphere and its dynamics, based on systematic monitoring and evidence of its unified space-energy similarity at different scales, help avoid basic errors in earthquake prediction claims. They suggest rules and recipes for adequate earthquake prediction classification, comparison and optimization. The approach has already led to the design of a reproducible intermediate-term middle-range earthquake prediction technique. Its real-time testing aimed at prediction of the largest earthquakes worldwide has proved beyond any reasonable doubt the effectiveness of practical earthquake forecasting. In the first approximation, the accuracy is about 1-5 years and 5-10 times the anticipated source dimension. Further analysis allows reducing spatial uncertainty down to 1-3 source dimensions, although at a cost of additional failures-to-predict. Despite limited accuracy, considerable damage could be prevented by timely knowledgeable use of the existing predictions and earthquake prediction strategies. The December 26, 2004 Indian Ocean Disaster seems to be the first indication that the methodology, designed for prediction of M8.0+ earthquakes can be rescaled for prediction of both smaller magnitude earthquakes (e.g., down to M5.5+ in Italy) and

  7. Integrating the bottom-up and top-down approach to energy economy modelling. The case of Denmark

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    1998-01-01

    This paper presents results from an integration project covering Danish models based on bottom-up and top-down approaches to energy-economy modelling. The purpose of the project was to identify theoretical and methodological problems for integrating existing models for Denmark and to implement an integration of the models. The integration was established through a number of links between energy bottom-up modules and a macroeconomic model. In this integrated model it is possible to analyse both top-down instruments, such as taxes, along with bottom-up instruments, such as regulation of technology...

  8. Hedging Rules for Water Supply Reservoir Based on the Model of Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Yi Ji

    2016-06-01

    Full Text Available This study proposes a hedging rule model which is composed of a two-period reservoir operation model considering the damage depth and a hedging rule parameter optimization model. The former solves hedging rules based on a given period's water supply weighting factor and carryover storage target, while the latter optimization model is used to optimize the weighting factor and carryover storage target based on the hedging rules. The coupled model gives the optimal period's water supply weighting factor and carryover storage target to guide release. The conclusions achieved from this study are as follows: (1) the water supply weighting factor and carryover storage target have a direct impact on the three elements of the hedging rule; (2) the parameters can guide reservoirs to supply water reasonably after optimization with the simulation and optimization model; and (3) in order to verify the utility of the hedging rule, the Heiquan reservoir is used as a case study and a particle swarm optimization algorithm with a simulation model is adopted for optimizing the parameters. The results show that the proposed hedging rule can improve the operation performance of the water supply reservoir.
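    A minimal hedging rule of the kind being parameterized here can be sketched as follows; this is not the authors' two-period model, and the trigger, carryover target, and rationing factor values are invented: when the available water falls below a trigger, the release is rationed and storage is steered toward a carryover target.

        # Hypothetical one-reservoir hedging rule; all parameter values are invented.

        def hedged_release(storage, inflow, demand, trigger, carryover_target, hedging_factor):
            available = storage + inflow
            if available >= trigger:
                release = min(demand, available)                  # normal operation
            else:
                release = min(hedging_factor * demand,            # rationed supply
                              max(available - carryover_target, 0.0))
            return release, available - release                   # release, end-of-period storage

        storage, trajectory = 60.0, []
        for inflow in [20.0, 5.0, 3.0, 15.0, 30.0]:               # invented inflow sequence
            release, storage = hedged_release(storage, inflow, demand=25.0,
                                              trigger=70.0, carryover_target=20.0,
                                              hedging_factor=0.6)
            trajectory.append((round(release, 1), round(storage, 1)))
        print(trajectory)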

  9. Quality Assurance Evaluation System Using a Rule Based System for Monitoring the Quality of Higher Education

    Directory of Open Access Journals (Sweden)

    Sri Hartono

    2017-05-01

    Full Text Available The need for continuous quality improvement results in increasingly complex quality monitoring. This research aims to develop a quality assurance evaluation system using a rule based system to monitor the quality of higher education. The research process begins by documenting the daily activities of the study program, which consist of lecturer data, research data, service data, staff data, student data, and infrastructure data, into a database. The data were evaluated using a rule based system, adopting the rules on quality standards for study programs of the National Accreditation Board for Higher Education as the knowledge base. The evaluation process was carried out using the forward chaining method, matching the existing data against the knowledge base to determine the quality status of each quality standard. The recommendation process was carried out using the backward chaining method, matching the resulting quality status against the desired projection of quality status to determine the nearest target that can be achieved. The result of the research is a quality assurance evaluation system based on a rule based system that is capable of producing output in the form of an internal evaluation report and a recommendation system that can be used to monitor the quality of higher education.
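    The forward-chaining step described here can be sketched generically; the rules, thresholds, and fact names below are invented stand-ins for the accreditation standards, not the actual knowledge base: rules fire repeatedly on the programme's data until no new quality-status facts can be derived, and any standard left without a "good" status would then become a target for the backward-chaining recommendation step.

        # Tiny forward-chaining sketch; rules and thresholds are hypothetical.

        facts = {"lecturer_phd_ratio": 0.45, "publications_per_lecturer": 1.2,
                 "student_lecturer_ratio": 28}

        rules = [
            ({"lecturer_phd_ratio": lambda v: v >= 0.4}, ("staff_quality", "good")),
            ({"publications_per_lecturer": lambda v: v >= 1.0}, ("research_quality", "good")),
            ({"student_lecturer_ratio": lambda v: v <= 25}, ("teaching_load", "good")),
            ({"staff_quality": lambda v: v == "good",
              "research_quality": lambda v: v == "good"}, ("academic_standard", "met")),
        ]

        changed = True
        while changed:                        # fire rules until no new facts are derived
            changed = False
            for conditions, (key, value) in rules:
                if key not in facts and all(k in facts and test(facts[k])
                                            for k, test in conditions.items()):
                    facts[key] = value
                    changed = True

        derived = {k: v for k, v in facts.items()
                   if k in ("staff_quality", "research_quality", "teaching_load", "academic_standard")}
        print(derived)                        # teaching_load is never derived -> a recommendation target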

  10. An Innovative Fuzzy-Logic-Based Methodology for Trend Identification

    International Nuclear Information System (INIS)

    Wang Xin; Tsoukalas, Lefteri H.; Wei, Thomas Y.C.; Reifman, Jaques

    2001-01-01

    A new fuzzy-logic-based methodology for on-line signal trend identification is introduced. The methodology may be used for detecting the onset of nuclear power plant (NPP) transients at the earliest possible time and could be of great benefit to diagnostic, maintenance, and performance-monitoring programs. Although signal trend identification is complicated by the presence of noise, fuzzy methods can help capture important features of on-line signals, integrate the information included in these features, and classify incoming NPP signals into increasing, decreasing, and steady-state trend categories. A computer program named PROTREN is developed and tested for the purpose of verifying this methodology using NPP and simulation data. The results indicate that the new fuzzy-logic-based methodology is capable of detecting transients accurately and identifying trends reliably, without misinterpreting a steady-state signal as a transient one.
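    The classification step can be sketched with a single fuzzy variable; the snippet below is an illustrative toy, not PROTREN, and the membership breakpoints are arbitrary: a least-squares slope damps the noise, and trapezoidal memberships map the slope to decreasing, steady-state, or increasing.

        # Toy fuzzy trend classifier (not PROTREN); membership breakpoints are arbitrary.
        import numpy as np

        def trapezoid(x, a, b, c, d):
            rising = (x - a) / (b - a + 1e-12)
            falling = (d - x) / (d - c + 1e-12)
            return float(np.clip(min(rising, 1.0, falling), 0.0, 1.0))

        def classify_trend(window, dt=1.0):
            t = np.arange(len(window)) * dt
            slope = np.polyfit(t, window, 1)[0]        # least-squares slope damps the noise
            memberships = {
                "decreasing":   trapezoid(slope, -1e9, -1e8, -0.05, -0.01),
                "steady-state": trapezoid(slope, -0.05, -0.01, 0.01, 0.05),
                "increasing":   trapezoid(slope, 0.01, 0.05, 1e8, 1e9),
            }
            return max(memberships, key=memberships.get), memberships

        rng = np.random.default_rng(0)
        signal = 300.0 + 0.08 * np.arange(50) + rng.normal(0.0, 0.2, 50)   # slow rise + noise
        print(classify_trend(signal))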

  11. Psychometric evaluation of the Moral Distress Risk Scale: A methodological study.

    Science.gov (United States)

    Schaefer, Rafaela; Zoboli, Elma Lcp; Vieira, Margarida M

    2017-01-01

    Moral distress is a kind of suffering that nurses may experience when they act in ways that are considered inconsistent with moral values, leading to a perceived compromise of moral integrity. Consequences are mostly negative and include physical and psychological symptoms, in addition to organizational implications. To psychometrically test the Moral Distress Risk Scale. A methodological study was conducted. Data were submitted to exploratory factor analysis through the SPSS statistical program. Participants and research context: In total, 268 nurses from hospitals and primary healthcare settings participated in this research during the period of March to June of 2016. Ethical considerations: This research has ethics committee approval. The Moral Distress Risk Scale is composed of 7 factors and 30 items; it shows evidence of acceptable reliability and validity, with a Cronbach's α = 0.913, a total variance explained of 59%, a Kaiser-Meyer-Olkin measure of 0.896, and a significant Bartlett's test (p < 0.001). Concerns about moral distress should extend beyond acute care settings, and a tool to help clarify critical points in other healthcare contexts may add value to the moral distress discourse. Psychometric results reveal that the Moral Distress Risk Scale can be applied in different healthcare contexts.

  12. Design of an Emulsion-based Personal Detergent through a Model-based Chemical Product Design Methodology

    DEFF Research Database (Denmark)

    Mattei, Michele; Hill, Michael; Kontogeorgis, Georgios

    2013-01-01

    An extended systematic methodology for the design of emulsion-based Chemical products is presented. The methodology consists of a model-based framework involving seven sequential hierarchical steps: starting with the identification of the needs to be satisfied by the product and then adding one-b...... to obtain one or more candidate formulations. A conceptual casestudy representing a personal detergent is presented to highlight the methodology....

  13. Design of an Emulsion-based Personal Detergent through a Model-based Chemical Product Design Methodology

    DEFF Research Database (Denmark)

    Mattei, Michele; Hill, Michael; Kontogeorgis, Georgios

    2013-01-01

    An extended systematic methodology for the design of emulsion-based Chemical products is presented. The methodology consists of a model-based framework involving seven sequential hierarchical steps: starting with the identification of the needs to be satisfied by the product and then adding one...... to obtain one or more candidate formulations. A conceptual casestudy representing a personal detergent is presented to highlight the methodology....

  14. Polymer density functional theory approach based on scaling second-order direct correlation function.

    Science.gov (United States)

    Zhou, Shiqi

    2006-06-01

    A second-order direct correlation function (DCF) from solving the polymer-RISM integral equation is scaled up or down by an equation of state for the bulk polymer; the resultant scaling second-order DCF is in better agreement with corresponding simulation results than the un-scaling second-order DCF. When the scaling second-order DCF is imported into a recently proposed LTDFA-based polymer DFT approach, an originally associated adjustable but mathematically meaningless parameter becomes mathematically meaningful, i.e., its numerical value now lies between 0 and 1. When the adjustable-parameter-free version of the LTDFA is used instead of the LTDFA, i.e., the adjustable parameter is fixed at 0.5, the resultant parameter-free version of the scaling LTDFA-based polymer DFT is also in good agreement with the corresponding simulation data for density profiles. The parameter-free version of the scaling LTDFA-based polymer DFT is employed to investigate the density profiles of a freely jointed tangent hard sphere chain near a variable sized central hard sphere; again the predictions accurately reproduce the simulation results. The importance of the present adjustable-parameter-free version lies in its combination with a recently proposed universal theoretical way; in the resultant formalism, the contact theorem is still met by the adjustable parameter associated with the theoretical way.

  15. Assessment of Prevalence of Persons with Down Syndrome: A Theory-Based Demographic Model

    Science.gov (United States)

    de Graaf, Gert; Vis, Jeroen C.; Haveman, Meindert; van Hove, Geert; de Graaf, Erik A. B.; Tijssen, Jan G. P.; Mulder, Barbara J. M.

    2011-01-01

    Background: The Netherlands are lacking reliable empirical data in relation to the development of birth and population prevalence of Down syndrome. For the UK and Ireland there are more historical empirical data available. A theory-based model is developed for predicting Down syndrome prevalence in the Netherlands from the 1950s onwards. It is…

  16. Validity of contents of a paediatric critical comfort scale using mixed methodology.

    Science.gov (United States)

    Bosch-Alcaraz, A; Jordan-Garcia, I; Alcolea-Monge, S; Fernández-Lorenzo, R; Carrasquer-Feixa, E; Ferrer-Orona, M; Falcó-Pegueroles, A

    Critical illness in paediatric patients includes acute conditions in a healthy child as well as exacerbations of chronic disease, and therefore these situations must be clinically managed in Critical Care Units. The role of the paediatric nurse is to ensure the comfort of these critically ill patients. To that end, instruments are required that correctly assess critical comfort. To describe the process for validating the content of a paediatric critical comfort scale using mixed-method research. Initially, a cross-cultural adaptation of the Comfort Behavior Scale from English to Spanish using the translation and back-translation method was made. After that, its content was evaluated using mixed method research. This second step was divided into a quantitative stage in which an ad hoc questionnaire was used in order to assess each scale's item relevance and wording and a qualitative stage with two meetings with health professionals, patients and a family member following the Delphi Method recommendations. All scale items obtained a content validity index >0.80, except physical movement in its relevance, which obtained 0.76. Global content scale validity was 0.87 (high). During the qualitative stage, items from each of the scale domains were reformulated or eliminated in order to make the scale more comprehensible and applicable. The use of a mixed-method research methodology during the scale content validity phase allows the design of a richer and more assessment-sensitive instrument. Copyright © 2017 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Publicado por Elsevier España, S.L.U. All rights reserved.

  17. Multiple Methodologies: Using Community-Based Participatory Research and Decolonizing Methodologies in Kenya

    Science.gov (United States)

    Elder, Brent C.; Odoyo, Kenneth O.

    2018-01-01

    In this project, we examined the development of a sustainable inclusive education system in western Kenya by combining community-based participatory research (CBPR) and decolonizing methodologies. Through three cycles of qualitative interviews with stakeholders in inclusive education, participants explained what they saw as foundational components…

  18. Associations between rule-based parenting practices and child screen viewing: A cross-sectional study

    Directory of Open Access Journals (Sweden)

    Joanna M. Kesten

    2015-01-01

    Conclusions: Limit setting is associated with greater SV. Collaborative rule setting may be effective for managing boys' game-console use. More research is needed to understand rule-based parenting practices.

  19. A noninvasive method for coronary artery diseases diagnosis using a clinically-interpretable fuzzy rule-based system

    Directory of Open Access Journals (Sweden)

    Hamid Reza Marateb

    2015-01-01

    Full Text Available Background: Coronary heart diseases/coronary artery diseases (CHDs/CAD), the most common form of cardiovascular disease (CVD), are a major cause of death and disability in developing/developed countries. CAD risk factors could be detected by physicians to prevent CAD occurrence in the near future. Invasive coronary angiography, a current diagnosis method, is costly and associated with morbidity and mortality in CAD patients. The aim of this study was to design a computer-based noninvasive CAD diagnosis system with clinically interpretable rules. Materials and Methods: In this study, the Cleveland CAD dataset from the University of California, Irvine (UCI) was used. The interval-scale variables were discretized, with cut points taken from the literature. A fuzzy rule-based system was then formulated based on a neuro-fuzzy classifier (NFC) whose learning procedure was speeded up by the scaled conjugate gradient algorithm. Two feature selection (FS) methods, multiple logistic regression (MLR) and sequential FS, were used to reduce the required attributes. The performance of the NFC (without/with FS) was then assessed in a hold-out validation framework. Further cross-validation was performed on the best classifier. Results: In this dataset, 16 complete attributes along with the binary CHD diagnosis (gold standard) for 272 subjects (68% male) were analyzed. MLR + NFC showed the best performance. Its overall sensitivity, specificity, accuracy, type I error (α) and statistical power were 79%, 89%, 84%, 0.1 and 79%, respectively. The selected features were "age and ST/heart rate slope categories," "exercise-induced angina status," fluoroscopy, and thallium-201 stress scintigraphy results. Conclusion: The proposed method showed "substantial agreement" with the gold standard. This algorithm is thus a promising tool for screening CAD patients.

  20. GaAs/Ge crystals grown on Si substrates patterned down to the micron scale

    International Nuclear Information System (INIS)

    Taboada, A. G.; Kreiliger, T.; Falub, C. V.; Känel, H. von; Meduňa, M.; Salvalaglio, M.; Miglio, L.; Isa, F.; Barthazy Meier, E.; Müller, E.; Isella, G.

    2016-01-01

    Monolithic integration of III-V compounds into high density Si integrated circuits is a key technological challenge for the next generation of optoelectronic devices. In this work, we report on the metal organic vapor phase epitaxy growth of strain-free GaAs crystals on Si substrates patterned down to the micron scale. The differences in thermal expansion coefficient and lattice parameter are adapted by a 2-μm-thick intermediate Ge layer grown by low-energy plasma enhanced chemical vapor deposition. The GaAs crystals evolve during growth towards a pyramidal shape, with lateral facets composed of (111) planes and an apex formed by (137) and (001) surfaces. The influence of the anisotropic GaAs growth kinetics on the final morphology is highlighted by means of scanning and transmission electron microscopy measurements. The effect of the Si pattern geometry, substrate orientation, and crystal aspect ratio on the GaAs structural properties was investigated by means of high resolution X-ray diffraction. The thermal strain relaxation process of GaAs crystals with different aspect ratio is discussed within the framework of linear elasticity theory by Finite Element Method simulations based on realistic geometries extracted from cross-sectional scanning electron microscopy images

  1. Proof of Kochen–Specker Theorem: Conversion of Product Rule to Sum Rule

    International Nuclear Information System (INIS)

    Toh, S.P.; Zainuddin, Hishamuddin

    2009-01-01

    Valuation functions of observables in quantum mechanics are often expected to obey two constraints called the sum rule and the product rule. However, the Kochen–Specker (KS) theorem shows that for a Hilbert space of quantum mechanics of dimension d ≥ 3, these constraints individually contradict the assumption of value definiteness. The two rules are not unrelated, and Peres [Found. Phys. 26 (1996) 807] has conceived a method of converting the product rule into a sum rule for the case of two qubits. Here we apply this method to a proof provided by Mermin based on the product rule for a three-qubit system involving nine operators. We provide the conversion of this proof to one based on the sum rule involving ten operators. (general)
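    For readers unfamiliar with product-rule arguments, the well-known four-operator GHZ set below illustrates the tension this record refers to; it is a standard textbook example, not the nine- and ten-operator constructions of the paper. Each observable has eigenvalues ±1, and the product rule sets its value equal to the product of the values of its commuting single-qubit factors; since every single-qubit factor appears exactly twice across the four operators, any value assignment forces the product of the four values to be +1, while the operator product itself equals minus the identity.

        % Standard GHZ illustration of the product-rule obstruction (not the
        % nine/ten-operator proofs discussed in the record above).
        \begin{gather*}
        A_1 = \sigma_x \otimes \sigma_x \otimes \sigma_x, \qquad
        A_2 = \sigma_x \otimes \sigma_y \otimes \sigma_y, \qquad
        A_3 = \sigma_y \otimes \sigma_x \otimes \sigma_y, \qquad
        A_4 = \sigma_y \otimes \sigma_y \otimes \sigma_x, \\
        A_1 A_2 A_3 A_4 = -\,\mathbb{1} \otimes \mathbb{1} \otimes \mathbb{1},
        \qquad\text{yet}\qquad
        \prod_{i=1}^{4} v(A_i) = \prod_{q=1}^{3} v\!\big(\sigma_x^{(q)}\big)^{2}\, v\!\big(\sigma_y^{(q)}\big)^{2} = +1 .
        \end{gather*}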

  2. Flavours of XChange, a Rule-Based Reactive Language for the (Semantic) Web

    OpenAIRE

    Bailey, James; Bry, François; Eckert, Michael; Patrânjan, Paula Lavinia

    2005-01-01

    This article introduces XChange, a rule-based reactive language for the Web. Stressing application scenarios, it first argues that high-level reactive languages are needed for both Web and Semantic Web applications. Then, it discusses technologies and paradigms relevant to high-level reactive languages for the (Semantic) Web. Finally, it presents the Event-Condition-Action rules of XChange.

  3. A Belief Rule Based Expert System to Assess Mental Disorder under Uncertainty

    DEFF Research Database (Denmark)

    Hossain, Mohammad Shahadat; Afif Monrat, Ahmed; Hasan, Mamun

    2016-01-01

    Mental disorder is a change of mental or behavioral pattern that causes suffering and impairs the ability to function in ordinary life. In psychopathology, the assessment methods of mental disorder contain various types of uncertainties associated with signs and symptoms. This study identifies uncertainties due to ignorance, incompleteness, and randomness. So, a belief rule-based expert system (BRBES) has been designed and developed with the capability of handling the uncertainties mentioned. Evidential reasoning works as the inference engine and the belief rule base as the knowledge representation schema.

  4. Scales and erosion

    Science.gov (United States)

    There is a need to develop scale explicit understanding of erosion to overcome existing conceptual and methodological flaws in our modelling methods currently applied to understand the process of erosion, transport and deposition at the catchment scale. These models need to be based on a sound under...

  5. A Methodology for Equitable Performance Assessment and Presentation of Wave Energy Converters Based on Sea Trials

    DEFF Research Database (Denmark)

    Kofoed, Jens Peter; Pecher, Arthur; Margheritini, Lucia

    2013-01-01

    This paper provides a methodology for the analysis and presentation of data obtained from sea trials of wave energy converters (WEC). The equitable aspect of this methodology lies in its wide application, as any WEC at any scale or stage of development can be considered as long as the tests are p... ...leads to testing campaigns that are not as extensive as desired. Therefore, the performance analysis should be robust enough to allow for not fully complete sea trials and sub optimal performance data. In other words, this methodology is focused at retrieving the maximum amount of useful information out... ...parameters influence the performance of the WEC can also be investigated using this methodology.

  6. Large-Scale Demand Driven Design of a Customized Bus Network: A Methodological Framework and Beijing Case Study

    Directory of Open Access Journals (Sweden)

    Jihui Ma

    2017-01-01

    In recent years, an innovative public transportation (PT) mode known as the customized bus (CB) has been proposed and implemented in many cities in China to efficiently and effectively shift private car users to PT to alleviate traffic congestion and traffic-related environmental pollution. The route network design activity plays an important role in the CB operation planning process because it serves as the basis for other operation planning activities, for example, timetable development, vehicle scheduling, and crew scheduling. In this paper, according to the demand characteristics and operational purpose, a methodological framework that includes the elements of large-scale travel demand data processing and analysis, hierarchical clustering-based route origin-destination (OD) region division, route OD region pairing, and a route selection model is proposed for CB network design. Considering the operating cost and social benefits, a route selection model is proposed and a branch-and-bound-based solution method is developed. In addition, a computer-aided program is developed to analyze a real-world Beijing CB route network design problem. The results of the case study demonstrate that the current CB network of Beijing can be significantly improved, thus demonstrating the effectiveness of the proposed methodology.

  7. Improved USGS methodology for assessing continuous petroleum resources

    Science.gov (United States)

    Charpentier, Ronald R.; Cook, Troy A.

    2010-01-01

    This report presents an improved methodology for estimating volumes of continuous (unconventional) oil and gas resources within the United States and around the world. The methodology is based on previously developed U.S. Geological Survey methodologies that rely on well-scale production data. Improvements were made primarily to how the uncertainty about estimated ultimate recoveries is incorporated in the estimates. This is particularly important when assessing areas with sparse or no production data, because the new methodology allows better use of analog data from areas with significant discovery histories.

  8. Location Prediction Based on Transition Probability Matrices Constructing from Sequential Rules for Spatial-Temporal K-Anonymity Dataset

    Science.gov (United States)

    Liu, Zhao; Zhu, Yunhong; Wu, Chenxue

    2016-01-01

    Spatial-temporal k-anonymity has become a mainstream approach among techniques for protection of users’ privacy in location-based services (LBS) applications, and has been applied to several variants such as LBS snapshot queries and continuous queries. Analyzing large-scale spatial-temporal anonymity sets may benefit several LBS applications. In this paper, we propose two location prediction methods based on transition probability matrices constructed from sequential rules for spatial-temporal k-anonymity datasets. First, we define single-step sequential rules mined from sequential spatial-temporal k-anonymity datasets generated from continuous LBS queries for multiple users. We then construct transition probability matrices from mined single-step sequential rules, and normalize the transition probabilities in the transition matrices. Next, we regard a mobility model for an LBS requester as a stationary stochastic process and compute the n-step transition probability matrices by raising the normalized transition probability matrices to the power n. Furthermore, we propose two location prediction methods: rough prediction and accurate prediction. The former achieves the probabilities of arriving at target locations along simple paths that include only current locations, target locations and transition steps. By iteratively combining the probabilities for simple paths with n steps and the probabilities for detailed paths with n-1 steps, the latter method calculates transition probabilities for detailed paths with n steps from current locations to target locations. Finally, we conduct extensive experiments, and the correctness and flexibility of our proposed algorithm have been verified. PMID:27508502
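
    The matrix step described above is standard Markov-chain bookkeeping. The sketch below (Python/NumPy) is a minimal illustration with made-up transition counts, assuming row-normalisation for the one-step matrix and matrix powers for the n-step probabilities; it does not reproduce the paper's mined rules or its accurate-prediction variant.

```python
import numpy as np

# Illustrative transition counts between anonymised regions (rows: from, cols: to),
# standing in for counts mined from single-step sequential rules.
counts = np.array([[2., 5., 3.],
                   [4., 1., 5.],
                   [6., 2., 2.]])

# Row-normalise to obtain a one-step transition probability matrix P.
P = counts / counts.sum(axis=1, keepdims=True)

# Treat the requester's mobility as a stationary Markov process:
# the n-step transition probabilities are P raised to the power n.
n = 3
P_n = np.linalg.matrix_power(P, n)

# "Rough prediction": probability of each target region after n steps,
# given the current region (here region 0).
print(np.round(P_n[0], 4))
```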

  9. A Model-Based Prognostics Methodology For Electrolytic Capacitors Based On Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — A remaining useful life prediction methodology for electrolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical...

  10. Disability: Down Syndrome and Sexuality

    Directory of Open Access Journals (Sweden)

    Diana ORTIZ QUIROGA

    2017-03-01

    Thirty-four-year-old Daniel is the first European with Down syndrome to have graduated from university. He starts a social services job in Seville, where he meets free-spirited co-worker Laura. They become fast friends, drawing the attention of both their coworkers and families. Their unique relationship becomes problematic when Daniel falls in love with her. But these rebellious souls refuse to bend to the rules and they find friendship and love as they have never known.

  11. A stale challenge to the philosophy of science: commentary on "Is psychology based on a methodological error?" by Michael Schwarz.

    Science.gov (United States)

    Ruck, Nora; Slunecko, Thomas

    2010-06-01

    In his article "Is psychology based on a methodological error?", drawing on a quite convincing empirical basis, Michael Schwarz offers a methodological critique of one of mainstream psychology's key test-theoretical axioms, i.e., that of the in-principle normal distribution of personality variables. It is characteristic of this paper--and at first seems to be a strength of it--that the author positions his critique within a frame of philosophy of science, particularly positioning himself in the tradition of Karl Popper's critical rationalism. When scrutinizing Schwarz's arguments, however, we find Schwarz's critique profound only as an immanent critique of test-theoretical axioms. We raise doubts, however, as to Schwarz's alleged 'challenge' to the philosophy of science, because the author does not seem at all to be in touch with the state of the art of contemporary philosophy of science. Above all, we question the universalist undercurrent that Schwarz's 'bio-psycho-social model' of human judgment boils down to. In contrast to such a position, we close our commentary with a plea for a context- and culture-sensitive philosophy of science.

  12. A Model-based Prognostics Methodology for Electrolytic Capacitors Based on Electrical Overstress Accelerated Aging

    Data.gov (United States)

    National Aeronautics and Space Administration — A remaining useful life prediction methodology for elec- trolytic capacitors is presented. This methodology is based on the Kalman filter framework and an empirical...

  13. A forecast-based STDP rule suitable for neuromorphic implementation.

    Science.gov (United States)

    Davies, S; Galluppi, F; Rast, A D; Furber, S B

    2012-08-01

    Artificial neural networks increasingly involve spiking dynamics to permit greater computational efficiency. This becomes especially attractive for on-chip implementation using dedicated neuromorphic hardware. However, both spiking neural networks and neuromorphic hardware have historically found difficulties in implementing efficient, effective learning rules. The best-known spiking neural network learning paradigm is Spike Timing Dependent Plasticity (STDP) which adjusts the strength of a connection in response to the time difference between the pre- and post-synaptic spikes. Approaches that relate learning features to the membrane potential of the post-synaptic neuron have emerged as possible alternatives to the more common STDP rule, with various implementations and approximations. Here we use a new type of neuromorphic hardware, SpiNNaker, which represents the flexible "neuromimetic" architecture, to demonstrate a new approach to this problem. Based on the standard STDP algorithm with modifications and approximations, a new rule, called STDP TTS (Time-To-Spike) relates the membrane potential with the Long Term Potentiation (LTP) part of the basic STDP rule. Meanwhile, we use the standard STDP rule for the Long Term Depression (LTD) part of the algorithm. We show that on the basis of the membrane potential it is possible to make a statistical prediction of the time needed by the neuron to reach the threshold, and therefore the LTP part of the STDP algorithm can be triggered when the neuron receives a spike. In our system these approximations allow efficient memory access, reducing the overall computational time and the memory bandwidth required. The improvements here presented are significant for real-time applications such as the ones for which the SpiNNaker system has been designed. We present simulation results that show the efficacy of this algorithm using one or more input patterns repeated over the whole time of the simulation. On-chip results show that
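
    For orientation, the sketch below shows a conventional pair-based STDP weight update in Python, with illustrative amplitudes and time constants. The paper's TTS modification (triggering LTP from a membrane-potential-based time-to-spike forecast) is only noted in a comment and is not reproduced here.

```python
import numpy as np

# Standard pair-based STDP kernel: the LTD branch is used as-is by STDP TTS,
# while the LTP branch is (in the paper) replaced by a membrane-potential-based
# time-to-spike forecast; that forecast is not reproduced in this sketch.
A_PLUS, A_MINUS = 0.01, 0.012      # illustrative amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # illustrative time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation (LTP)
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    else:         # post before pre -> depression (LTD)
        return -A_MINUS * np.exp(dt / TAU_MINUS)

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 32.0)]:
    w += stdp_dw(t_pre, t_post)
print(round(w, 4))
```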

  14. Performance Assessment of the Wave Dragon Wave Energy Converter Based on the EquiMar Methodology

    DEFF Research Database (Denmark)

    Parmeggiani, Stefano; Chozas, Julia Fernandez; Pecher, Arthur

    2011-01-01

    At the present pre-commercial phase of the wave energy sector, device developers are called to provide reliable estimates on power performance and production at possible deployment locations. The EU EquiMar project has proposed a novel approach, where the performance assessment is based mainly on experimental data deriving from sea trials rather than solely on numerical predictions. The study applies this methodology to evaluate the performance of Wave Dragon at two locations in the North Sea, based on the data acquired during the sea trials of a 1:4.5 scale prototype. Indications about power...

  15. Towards a framework for threaded inference in rule-based systems

    Directory of Open Access Journals (Sweden)

    Luis Casillas Santillan

    2013-11-01

    Information and communication technologies have shown a significant advance and fast pace in their performance and pervasiveness. Knowledge has become a significant asset for organizations, which need to deal with large amounts of data and information to produce valuable knowledge. Dealing with knowledge is turning the axis for organizations in the new economy. One of the choices for reaching the goal of knowledge management is the use of rule-based systems. This kind of approach is the new chance for expert-systems' technology. Modern languages and cheap computing allow the implementation of concurrent systems for dealing with huge volumes of information in organizations. The present work is aimed at proposing the use of contemporary programming elements, such as easy-to-exploit threading, when implementing rule-based treatment over huge data volumes.
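
    As a rough illustration of the idea, the hedged Python sketch below evaluates a small rule set over partitions of a record list using a thread pool; the rule format and the data are assumptions made for the example, not taken from the article.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative rules: (name, predicate) pairs applied to each record.
rules = [
    ("high_value", lambda r: r["amount"] > 1000),
    ("foreign",    lambda r: r["country"] != "MX"),
]

records = [{"amount": 1500, "country": "US"},
           {"amount": 200,  "country": "MX"},
           {"amount": 3000, "country": "MX"}]

def apply_rules(chunk):
    """Return the names of the rules fired by each record in the chunk."""
    return [[name for name, pred in rules if pred(rec)] for rec in chunk]

# Partition the data and evaluate the rule set concurrently.
chunks = [records[i::2] for i in range(2)]
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(apply_rules, chunks))
print(results)
```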

  16. Systematic construction of qualitative physics-based rules for process diagnostics

    International Nuclear Information System (INIS)

    Reifman, J.; Wei, T.Y.C.

    1995-01-01

    A novel first-principles-based expert system is proposed for on-line detection and identification of faulty component candidates during incipient off-normal process operations. The system performs function-oriented diagnostics and can be reused for diagnosing single-component failures in different processes and different plants through the provision of the appropriate process schematics information. The function-oriented and process-independent diagnostic features of the proposed expert system are achieved by constructing a knowledge base containing three distinct types of information, qualitative balance equation rules, functional classification of process components, and the process piping and instrumentation diagram. The various types of qualitative balance equation rules for processes utilizing single-phase liquids are derived and their usage is illustrated through simulation results of a realistic process in a nuclear power plant

  17. 76 FR 33387 - Self-Regulatory Organizations; Notice of Filing and Immediate Effectiveness of Proposed Rule...

    Science.gov (United States)

    2011-06-08

    ... calculates the CBOE Gold ETF Volatility Index (``GVZ''), which is based on the VIX methodology applied to options on the SPDR Gold Trust (``GLD''). The current filing would permit $0.50 strike price intervals for... exchange-traded fund (``ETF'') options. See Rule 1012, Commentary .05(a)(iv). To the extent that the CBOE...

  18. Scaling dimensions in spectroscopy of soil and vegetation

    Science.gov (United States)

    Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.

    2007-05-01

    The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over the soil/vegetation surfaces and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial, spectral up-scaling, and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce ( Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to prove relevance of the correct object optical properties scaled up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. A future development in measurement and simulation of the leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. Relevance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling
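
    Linear spectral unmixing, one of the down-scaling techniques reviewed above, can be illustrated with a least-squares solve. The Python/NumPy sketch below uses invented endmember spectra and ignores the non-negativity constraint; it is a minimal illustration, not the unmixing chain applied to the ROSIS image.

```python
import numpy as np

# Illustrative endmember spectra (columns: soil, vegetation, water) over 5 bands.
E = np.array([[0.30, 0.05, 0.02],
              [0.35, 0.08, 0.02],
              [0.40, 0.45, 0.01],
              [0.42, 0.50, 0.01],
              [0.45, 0.30, 0.01]])

# A mixed pixel: 60% soil, 30% vegetation, 10% water, plus a little noise.
true_f = np.array([0.6, 0.3, 0.1])
pixel = E @ true_f + 0.002 * np.random.default_rng(0).standard_normal(5)

# Linear unmixing: solve pixel ~= E @ f in the least-squares sense,
# then renormalise to enforce the sum-to-one constraint.
f, *_ = np.linalg.lstsq(E, pixel, rcond=None)
f = f / f.sum()
print(np.round(f, 3))
```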

  19. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    Science.gov (United States)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-02-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is firstly reduced by a down-sample strategy while preserving the fault features by selecting peaks to represent the data segments in time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly-used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can make a signal enhancement and reduce the sample sizes further. Moreover, it is capable of detecting fault features from a small number of samples based on orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sample algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults.
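
    A hedged sketch of the two ingredients described above is given below in Python: a peak-preserving down-sample (one maximum-magnitude sample per block) followed by sparse recovery with scikit-learn's OrthogonalMatchingPursuit in a cosine dictionary. The signal, block size and dictionary are illustrative assumptions, not the authors' experimental setup.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)

# Illustrative "vibration" signal: a sparse combination of cosines plus noise.
n = 1024
t = np.arange(n)
x = np.cos(2 * np.pi * 0.031 * t) + 0.5 * np.cos(2 * np.pi * 0.173 * t)
x += 0.1 * rng.standard_normal(n)

# Step 1 (down-sample strategy): keep one peak per block, i.e. the sample of
# largest magnitude in each segment, so that impulsive features are preserved.
block = 8
y = np.array([seg[np.argmax(np.abs(seg))] for seg in x.reshape(-1, block)])

# Step 2 (compressive sensing view): recover a sparse representation of the
# reduced signal in an (illustrative) cosine dictionary via orthogonal matching pursuit.
m = y.size
D = np.cos(2 * np.pi * np.outer(np.arange(m), np.arange(m)) / m)
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=4)
omp.fit(D, y)
print("selected atoms:", np.flatnonzero(omp.coef_))
```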

  20. Compressed sensing of roller bearing fault based on multiple down-sampling strategy

    International Nuclear Information System (INIS)

    Wang, Huaqing; Ke, Yanliang; Luo, Ganggang; Tang, Gang

    2016-01-01

    Roller bearings are essential components of rotating machinery and are often exposed to complex operating conditions, which can easily lead to their failures. Thus, to ensure normal production and the safety of machine operators, it is essential to detect the failures as soon as possible. However, it is a major challenge to maintain a balance between detection efficiency and big data acquisition given the limitations of sampling theory. To overcome these limitations, we try to preserve the information pertaining to roller bearing failures using a sampling rate far below the Nyquist sampling rate, which can ease the pressure generated by the large-scale data. The big data of a faulty roller bearing’s vibration signals is firstly reduced by a down-sample strategy while preserving the fault features by selecting peaks to represent the data segments in time domain. However, a problem arises in that the fault features may be weaker than before, since the noise may be mistaken for the peaks when the noise is stronger than the vibration signals, which makes the fault features unable to be extracted by commonly-used envelope analysis. Here we employ compressive sensing theory to overcome this problem, which can make a signal enhancement and reduce the sample sizes further. Moreover, it is capable of detecting fault features from a small number of samples based on orthogonal matching pursuit approach, which can overcome the shortcomings of the multiple down-sample algorithm. Experimental results validate the effectiveness of the proposed technique in detecting roller bearing faults. (paper)

  1. European methodology of analysis vertical restraints under rule of reason in context of cooperative relation specific investments

    Directory of Open Access Journals (Sweden)

    Agamirova Maria, Е.

    2015-06-01

    The problem of underinvestment in specific assets is a key issue in new institutional economics, especially in the case of cooperative relation-specific investments. It can be solved through vertical restraints, as an alternative to vertical integration, to transfer control to the partner who makes relation-specific investments. The type of relation-specific investments called «cooperative» investments (or cross investments) was nearly absent from economic analysis up to the very end of the twentieth century, despite the fact that such investments are widespread. This led to the absence of analysis of relation-specific investments in official regulation documents. At the same time, different types of relation-specific investments can be characterized by different degrees of riskiness and need special regulation of vertical agreements. In this paper the author analyzes the European methodology of assessing vertical restraints under the rule of reason, focusing on the type of relation-specific investments. This makes it possible to improve the analysis of vertical restraints in Russian antitrust.

  2. A Methodology for Mapping Meanings in Text-Based Sustainability Communication

    Directory of Open Access Journals (Sweden)

    Mark Brown

    2013-06-01

    In moving society towards more sustainable forms of consumption and production, social learning must play an important role. Making the assumption that it occurs as a consequence of changes in understanding, this article presents a methodology for mapping meanings in sustainability communication texts. The methodology uses techniques from corpus linguistics and framing theory. Two large databases of text were constructed by copying material down from the websites of two different groups of social actors: (i) environmental NGOs and (ii) British green business, and saving it as .txt files. The findings on individual words show that the NGOs and business use them very differently. Focusing on words expressing concern for the natural environment, it is proposed that the two actors also conceptualize their concern differently. Green business's cognitive system of concern has two well-developed frames: good intentions and risk management. However, three frames (concern for the natural environment, perception of the damage, and responsibility) are light on detail. In contrast, within the NGOs' system of concern, the frames of concern for the natural environment, perception of the damage and responsibility contain words making detailed representations.

  3. Universal kinematic scaling as a probe of factorized long-distance effects in high-energy quarkonium production

    Energy Technology Data Exchange (ETDEWEB)

    Faccioli, Pietro; Seixas, Joao [LIP and IST, Lisbon (Portugal); Lourenco, Carlos; Araujo, Mariana [CERN, Geneva (Switzerland)

    2018-02-15

    Dimensional analysis reveals general kinematic scaling rules for the momentum, mass, and energy dependence of Drell-Yan and quarkonium cross sections. Their application to mid-rapidity LHC data provides strong experimental evidence supporting the validity of the factorization ansatz, a cornerstone of non-relativistic QCD, still lacking theoretical demonstration. Moreover, data-driven patterns emerge for the factorizable long-distance bound-state formation effects, including a remarkable correlation between the S-wave quarkonium cross sections and their binding energies. Assuming that this scaling can be extended to the P-wave case, we obtain precise predictions for the not yet measured feed-down fractions, thereby providing a complete picture of the charmonium and bottomonium feed-down structure. This is crucial information for quantitative interpretations of quarkonium production data, including studies of the suppression patterns measured in nucleus-nucleus collisions. (orig.)

  4. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    Energy Technology Data Exchange (ETDEWEB)

    Scolozzi, Rocco, E-mail: rocco.scolozzi@fmach.it [Sustainable Agro-ecosystems and Bioresources Department, IASMA Research and Innovation Centre, Fondazione Edmund Mach, Via E. Mach 1, 38010 San Michele all'Adige (Italy)]; Geneletti, Davide, E-mail: geneletti@ing.unitn.it [Department of Civil and Environmental Engineering, University of Trento, Trento (Italy)]

    2012-09-15

    Habitat loss and fragmentation are often concurrent to land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may be not sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats on a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The perspective of the methodology is multiple scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► Species-perspective for defining habitat quality and connectivity is claimed. ► Species-based tools are difficult to be applied with limited availability of data. ► We propose a species-oriented and multiple scale-based qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  5. A multi-scale qualitative approach to assess the impact of urbanization on natural habitats and their connectivity

    International Nuclear Information System (INIS)

    Scolozzi, Rocco; Geneletti, Davide

    2012-01-01

    Habitat loss and fragmentation are often concurrent to land conversion and urbanization. Simple application of GIS-based landscape pattern indicators may be not sufficient to support meaningful biodiversity impact assessment. A review of the literature reveals that habitat definition and habitat fragmentation are frequently inadequately considered in environmental assessment, notwithstanding the increasing number of tools and approaches reported in the landscape ecology literature. This paper presents an approach for assessing impacts on habitats on a local scale, where availability of species data is often limited, developed for an alpine valley in northern Italy. The perspective of the methodology is multiple scale and species-oriented, and provides both qualitative and quantitative definitions of impact significance. A qualitative decision model is used to assess ecological values in order to support land-use decisions at the local level. Building on recent studies in the same region, the methodology integrates various approaches, such as landscape graphs, object-oriented rule-based habitat assessment and expert knowledge. The results provide insights into future habitat loss and fragmentation caused by land-use changes, and aim at supporting decision-making in planning and suggesting possible ecological compensation. - Highlights: ► Many environmental assessments inadequately consider habitat loss and fragmentation. ► Species-perspective for defining habitat quality and connectivity is claimed. ► Species-based tools are difficult to be applied with limited availability of data. ► We propose a species-oriented and multiple scale-based qualitative approach. ► Advantages include being species-oriented and providing value-based information.

  6. Rule-based Information Integration

    NARCIS (Netherlands)

    de Keijzer, Ander; van Keulen, Maurice

    2005-01-01

    In this report, we show the process of information integration. We specifically discuss the language used for integration. We show that integration consists of two phases, the schema mapping phase and the data integration phase. We formally define transformation rules, conversion, evolution and

  7. Large scale mapping: an empirical comparison of pixel-based and ...

    African Journals Online (AJOL)

    Innocent E. Bello

    model of any part of the earth surface on a projected plane or medium in 2D or ..... segmentation following rule set (brightness/colour and shape) and scale ... vegetation or Agric (light green) and (v) Water (blue) were visualized appropriately.

  8. An ontological case base engineering methodology for diabetes management.

    Science.gov (United States)

    El-Sappagh, Shaker H; El-Masri, Samir; Elmogy, Mohammed; Riad, A M; Saddik, Basema

    2014-08-01

    Ontology engineering covers issues related to ontology development and use. In a Case-Based Reasoning (CBR) system, ontology plays two main roles: the first as case base and the second as domain ontology. However, the ontology engineering literature does not provide adequate guidance on how to build, evaluate, and maintain ontologies. This paper proposes an ontology engineering methodology to generate case bases in the medical domain. It mainly focuses on case representation in the form of an ontology to support semantic case retrieval and enhance all knowledge-intensive CBR processes. A case study on a diabetes diagnosis case base is provided to evaluate the proposed methodology.

  9. Ibn Hibban and The Mudallisin’s Narrations in his Book al-Sahih: Rule, Motive and Methodology

    Directory of Open Access Journals (Sweden)

    Muhammad Rozaimi Ramle

    2016-06-01

    Tadlis, i.e. concealing the narrators, is an 'illah, i.e. a concealed flaw that is an obstruction for a Hadith to be classified as authentic in the science of Hadith. A Hadith which has Tadlis in its sanad (the chain of narration) will be classified as dhaif (weak). Ibn Hibban was one of the muhaddithin who compiled authentic hadiths in his book al-Taqasim wa al-Anwa’. In this book, he put conditions on accepting the authenticity of Hadith narrated by a mudallis. The study shows that Ibn Hibban not only stipulated rules for the narrations of mudallisin, but also had his own motive when putting them in his book and his own methodology when dealing with them. Hence, this research is intended to explain this matter. Analytical and critical methods are utilized for the purpose of this study. It also focuses on the definition of tadlis according to Ibn Hibban and its comparison to other scholars of hadith.

  10. High-frequency measurements of aeolian saltation flux: Field-based methodology and applications

    Science.gov (United States)

    Martin, Raleigh L.; Kok, Jasper F.; Hugenholtz, Chris H.; Barchyn, Thomas E.; Chamecki, Marcelo; Ellis, Jean T.

    2018-02-01

    Aeolian transport of sand and dust is driven by turbulent winds that fluctuate over a broad range of temporal and spatial scales. However, commonly used aeolian transport models do not explicitly account for such fluctuations, likely contributing to substantial discrepancies between models and measurements. Underlying this problem is the absence of accurate sand flux measurements at the short time scales at which wind speed fluctuates. Here, we draw on extensive field measurements of aeolian saltation to develop a methodology for generating high-frequency (up to 25 Hz) time series of total (vertically-integrated) saltation flux, namely by calibrating high-frequency (HF) particle counts to low-frequency (LF) flux measurements. The methodology follows four steps: (1) fit exponential curves to vertical profiles of saltation flux from LF saltation traps, (2) determine empirical calibration factors through comparison of LF exponential fits to HF number counts over concurrent time intervals, (3) apply these calibration factors to subsamples of the saltation count time series to obtain HF height-specific saltation fluxes, and (4) aggregate the calibrated HF height-specific saltation fluxes into estimates of total saltation fluxes. When coupled to high-frequency measurements of wind velocity, this methodology offers new opportunities for understanding how aeolian saltation dynamics respond to variability in driving winds over time scales from tens of milliseconds to days.
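
    A simplified version of the four calibration steps can be sketched as follows (Python with NumPy/SciPy); all heights, fluxes and counts are invented placeholders, and the subsampling and aggregation logic is reduced to its bare minimum.

```python
import numpy as np
from scipy.optimize import curve_fit

# --- Step 1: fit an exponential profile q(z) = q0 * exp(-z / zq) to LF trap data ---
z = np.array([0.05, 0.10, 0.20, 0.40])          # trap heights (m), illustrative
q_lf = np.array([8.0, 5.1, 2.1, 0.35])          # LF height-specific flux, illustrative units

def profile(z, q0, zq):
    return q0 * np.exp(-z / zq)

(q0, zq), _ = curve_fit(profile, z, q_lf, p0=(10.0, 0.1))
Q_lf = q0 * zq                                  # vertically-integrated LF flux

# --- Step 2: calibration factor from concurrent HF particle counts ---
hf_counts_concurrent = 1250.0                   # HF counts over the same interval, illustrative
cal = Q_lf / hf_counts_concurrent               # flux per count

# --- Steps 3-4: apply the factor to HF count subsamples and aggregate ---
hf_counts_25hz = np.array([3, 0, 7, 2, 5], float)   # illustrative 25 Hz count series
q_hf = cal * hf_counts_25hz                          # calibrated HF total-flux series
print(round(Q_lf, 3), np.round(q_hf, 4))
```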

  11. ABOUT CLINICAL EXPERT SYSTEM BASED ON RULES USING DATA MINING TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    V. P. Martsenyuk

    2015-05-01

    In this work, the software implementation of a rule induction method based on a sequential covering algorithm is considered. Such an approach allows us to develop a clinical decision support system. The project is implemented within the NetBeans IDE and is based on Java classes.
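
    For reference, a minimal sequential-covering loop is sketched below in Python: it repeatedly learns the best single attribute test for the positive class and removes the covered examples. The toy boolean data are illustrative and unrelated to the clinical system described above.

```python
# Minimal sequential-covering sketch over a toy boolean dataset (illustrative only).
data = [
    ({"fever": 1, "cough": 1, "rash": 0}, 1),
    ({"fever": 1, "cough": 0, "rash": 0}, 1),
    ({"fever": 0, "cough": 1, "rash": 1}, 0),
    ({"fever": 0, "cough": 0, "rash": 1}, 0),
]

def learn_one_rule(examples):
    """Pick the attribute=1 test with the highest precision on the positive class."""
    best = None
    for attr in examples[0][0]:
        covered = [y for x, y in examples if x[attr] == 1]
        if covered:
            precision = sum(covered) / len(covered)
            if best is None or precision > best[1]:
                best = (attr, precision)
    return None if best is None else best[0]

rules, remaining = [], list(data)
while any(y == 1 for _, y in remaining):
    attr = learn_one_rule(remaining)
    if attr is None:
        break
    rules.append(f"IF {attr}=1 THEN class=1")
    remaining = [(x, y) for x, y in remaining if x[attr] != 1]  # remove covered examples

print(rules)
```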

  12. Trade rules and exchange rate misalignments: in search for a WTO solution

    Directory of Open Access Journals (Sweden)

    Vera Thorstensen

    2014-09-01

    The debate on the link between trade rules and rules on exchange rates is attracting the attention of experts on international trade law and economics. The main purpose of this paper is to analyze the impacts of exchange rate misalignments on tariffs as applied by the WTO - World Trade Organization. It is divided into five sections: the first one explains the methodology used to determine exchange rate misalignments and also presents its results for Brazil, U.S. and China; the second summarizes the methodology applied to calculate the impacts of exchange rate misalignments on the level of tariff protection through an exercise of "misalignment tariffication"; the third examines the effects of exchange rate variations on tariffs and their consequences for the multilateral trading system; the fourth one creates a methodology to estimate exchange rates against a currency of the World and a proposal to deal with persistent and significant misalignments related to trade rules. The conclusions are presented in the last section.

  13. A methodology for strain-based fatigue reliability analysis

    International Nuclear Information System (INIS)

    Zhao, Y.X.

    2000-01-01

    A significant scatter of the cyclic stress-strain (CSS) responses should be noted for a nuclear reactor material, 1Cr18Ni9Ti pipe-weld metal. The existence of this scatter implies that a random applied cyclic strain history will be introduced under any of the loading modes, even for a deterministic loading history. A non-conservative evaluation might be given in practice without considering the scatter. A methodology for strain-based fatigue reliability analysis, which takes the scatter into account, is developed. The responses are approximately modeled by probability-based CSS curves of the Ramberg-Osgood relation. The strain-life data are modeled, similarly, by probability-based strain-life curves of the Coffin-Manson law. The reliability assessment is constructed by considering the interference of the random applied fatigue strain and capacity histories. Probability density functions of the applied and capacity histories are given analytically. The methodology can be conveniently extrapolated to the case of a deterministic CSS relation, as existing methods do. The non-conservative evaluation of the deterministic CSS relation and the availability of the present methodology are indicated by an analysis of the material test results.
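
    The deterministic backbones of the two relations mentioned above (Ramberg-Osgood cyclic stress-strain and Coffin-Manson strain-life) can be written down directly; the Python sketch below uses illustrative parameter values and does not attempt the probabilistic, scatter-aware curves that the methodology itself is about.

```python
import numpy as np

# Illustrative placeholder parameters (not material data from the paper).
E   = 193e3     # Young's modulus (MPa)
Kp  = 1200.0    # cyclic strength coefficient K' (MPa)
npr = 0.17      # cyclic strain-hardening exponent n'
sf, b = 900.0, -0.09    # fatigue strength coefficient (MPa) / exponent
ef, c = 0.35, -0.55     # fatigue ductility coefficient / exponent

def ramberg_osgood_strain_amplitude(stress_amplitude):
    """Cyclic stress-strain curve: elastic plus plastic strain amplitude."""
    return stress_amplitude / E + (stress_amplitude / Kp) ** (1.0 / npr)

def coffin_manson_strain_amplitude(n_f):
    """Strain-life curve evaluated at N_f cycles (2*N_f reversals)."""
    return (sf / E) * (2 * n_f) ** b + ef * (2 * n_f) ** c

print(ramberg_osgood_strain_amplitude(300.0))
print(coffin_manson_strain_amplitude(1e4))
```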

  14. Methodology for locale-scale monitoring for the PROTHEGO project: the Choirokoitia case study

    Science.gov (United States)

    Themistocleous, Kyriacos; Agapiou, Athos; Cuca, Branka; Danezis, Chris; Cigna, Francesca; Margottini, Claudio; Spizzichino, Daniele

    2016-10-01

    PROTHEGO (PROTection of European Cultural HEritage from GeO-hazards) is a collaborative research project funded in the framework of the Joint Programming Initiative on Cultural Heritage and Global Change (JPICH) - Heritage Plus in 2015-2018 (www.prothego.eu). PROTHEGO aims to make an innovative contribution towards the analysis of geohazards in areas of cultural heritage, and uses novel space technology based on radar interferometry (InSAR) to retrieve information on ground stability and motion in the 400+ monuments and sites of Europe included in UNESCO's World Heritage List. InSAR can be used to measure micro-movements to identify geo-hazards. In order to verify the InSAR image data, field and close-range measurements are necessary. This paper presents the methodology for local-scale monitoring of the Choirokoitia study site in Cyprus, inscribed in the UNESCO World Heritage List and part of the demonstration sites of PROTHEGO. Various field and remote sensing methods will be exploited for the local-scale monitoring (static GNSS, total station, leveling, laser scanning and UAV surveys) and compared with the Persistent Scatterer Interferometry results. The in-situ measurements will be taken systematically in order to document any changes and geo-hazards that affect standing archaeological remains. In addition, ground truth from in-situ visits will provide feedback related to the classification results of urban expansion and land use change maps. Available archival and current optical satellite images will be used to calibrate and identify the level of risk at the Cyprus case study site. The ground-based geotechnical monitoring will be compared and validated with InSAR data to evaluate the deformation trends of cultural heritage sites and to understand their behaviour over the last two decades.

  15. 1996-2016: Two decades of econophysics: Between methodological diversification and conceptual coherence

    Science.gov (United States)

    Schinckus, C.

    2016-12-01

    This article aims to present the scattered econophysics literature as a unified and coherent field through a specific lens imported from the philosophy of science. More precisely, I used the methodology developed by Imre Lakatos to cover the methodological evolution of econophysics over these last two decades. In this perspective, three co-existing approaches have been identified: statistical econophysics, bottom-up agent-based econophysics and top-down agent-based econophysics. Although the latter is presented here as the last step of the methodological evolution of econophysics, it is worth mentioning that this tradition is still very new. A quick look at the econophysics literature shows that the vast majority of works in this field deal with a strictly statistical approach or a classical bottom-up agent-based modelling. In this context of diversification, the objective (and contribution) of this article is to emphasize the conceptual coherence of econophysics as a unique field of research. With this purpose, I used a theoretical framework coming from the philosophy of science to characterize how econophysics evolved by combining a methodological enrichment with the preservation of its core conceptual statements.

  16. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    Science.gov (United States)

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887

  17. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems.

    Science.gov (United States)

    Chylek, Lily A; Harris, Leonard A; Tung, Chang-Shung; Faeder, James R; Lopez, Carlos F; Hlavacek, William S

    2014-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and posttranslational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). © 2013 Wiley Periodicals, Inc.

  18. A statistical methodology to derive the scaling law for the H-mode power threshold using a large multi-machine database

    International Nuclear Information System (INIS)

    Murari, A.; Lupelli, I.; Gaudio, P.; Gelfusa, M.; Vega, J.

    2012-01-01

    In this paper, a refined set of statistical techniques is developed and then applied to the problem of deriving the scaling law for the threshold power to access the H-mode of confinement in tokamaks. This statistical methodology is applied to the 2010 version of the ITPA International Global Threshold Data Base v6b(IGDBTHv6b). To increase the engineering and operative relevance of the results, only macroscopic physical quantities, measured in the vast majority of experiments, have been considered as candidate variables in the models. Different principled methods, such as agglomerative hierarchical variables clustering, without assumption about the functional form of the scaling, and nonlinear regression, are implemented to select the best subset of candidate independent variables and to improve the regression model accuracy. Two independent model selection criteria, based on the classical (Akaike information criterion) and Bayesian formalism (Bayesian information criterion), are then used to identify the most efficient scaling law from candidate models. The results derived from the full multi-machine database confirm the results of previous analysis but emphasize the importance of shaping quantities, elongation and triangularity. On the other hand, the scaling laws for the different machines and at different currents are different from each other at the level of confidence well above 95%, suggesting caution in the use of the global scaling laws for both interpretation and extrapolation purposes. (paper)
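
    The model-selection step can be illustrated with an ordinary log-space power-law regression scored by AIC and BIC. The Python/NumPy sketch below uses synthetic data and two invented candidate models; it is not the clustering-plus-nonlinear-regression pipeline applied to the ITPA database.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "threshold power" data: P_thr ~ C * n^0.7 * B^0.8 * S^0.9 (with noise).
n_e, B_t, S = rng.uniform(1, 10, 200), rng.uniform(1, 8, 200), rng.uniform(10, 200, 200)
P = 0.05 * n_e**0.7 * B_t**0.8 * S**0.9 * np.exp(0.1 * rng.standard_normal(200))

def fit_and_score(X, y):
    """OLS fit in log space; return (coefficients, AIC, BIC)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = len(y), X.shape[1]
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return beta, 2 * k - 2 * loglik, k * np.log(n) - 2 * loglik

logP = np.log(P)
ones = np.ones_like(logP)
# Candidate 1: P_thr = C * n^a * B^b;  Candidate 2: adds a surface-like term S^c.
X1 = np.column_stack([ones, np.log(n_e), np.log(B_t)])
X2 = np.column_stack([ones, np.log(n_e), np.log(B_t), np.log(S)])
for X in (X1, X2):
    beta, aic, bic = fit_and_score(X, logP)
    print(np.round(beta, 2), round(aic, 1), round(bic, 1))
```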

  19. Methodology for qualification of wood-based ash according to REACH - prestudy

    Energy Technology Data Exchange (ETDEWEB)

    Sjoeblom, Rolf [Tekedo AB, Nykoeping (Sweden); Tivegaard, Anna-Maria [SSAB Merox AB, Oxeloesund (Sweden)

    2010-02-15

    The new European Union framework directive on waste is to be implemented during the year 2010. According to this directive, much of what today is regarded as waste will instead be assessed as by-products and in many cases fall under the new European Union regulation REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals). REACH applies in conjunction with the new European Union regulation CLP (Classification, Labelling and Packaging of substances and mixtures). There are introductory periods for both of these regulations, and in the case of CLP this regards transition from the present and previous rules under the dangerous substances and dangerous preparations directives (DSD and DPD, respectively). Similarly, the new framework directive on waste supersedes the previous directive and some other statements. There is a connection between the directives on waste and the rules for classification and labelling in that the classification of waste (in the categories hazardous and non-hazardous) builds on (but is not identical to) the rules for labelling. Similarly, the national Swedish rules for acceptance of recycled material (waste) for use in geotechnical constructions relate to the provisions in REACH on assessment of chemical safety in that both request that the risk be assessed to be small, and that the same or similar methodologies can be applied to verify this. There is a 'reference alternative' in REACH that implies substantial testing prior to registration. Registration is the key to use of a substance even though a substance may be used as such, in a mixture, or to be released from an article. However, REACH as well as CLP contain a number of provisions for using literature data, data on similar chemicals etc. in order to avoid unnecessary testing. This especially applies to testing on humans and vertebrate animals. Vaermeforsk, through its Programme on Environmentally Friendly Use of Non-Coal Ashes has developed methodologies and

  20. Methodology for qualification of wood-based ash according to REACH - prestudy

    Energy Technology Data Exchange (ETDEWEB)

    Sjoeblom, Rolf (Tekedo AB, Nykoeping (Sweden)); Tivegaard, Anna-Maria (SSAB Merox AB, Oxeloesund (Sweden))

    2010-02-15

    The new European Union framework directive on waste is to be implemented during the year 2010. According to this directive, much of what today is regarded as waste will instead be assessed as by-products and in many cases fall under the new European Union regulation REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals). REACH applies in conjunction with the new European Union regulation CLP (Classification, Labelling and Packaging of substances and mixtures). There are introductory periods for both of these regulations, and in the case of CLP this regards transition from the present and previous rules under the dangerous substances and dangerous preparations directives (DSD and DPD, respectively). Similarly, the new framework directive on waste supersedes the previous directive and some other statements. There is a connection between the directives on waste and the rules for classification and labelling in that the classification of waste (in the categories hazardous and non-hazardous) builds on (but is not identical to) the rules for labelling. Similarly, the national Swedish rules for acceptance of recycled material (waste) for use in geotechnical constructions relate to the provisions in REACH on assessment of chemical safety in that both request that the risk be assessed to be small, and that the same or similar methodologies can be applied to verify this. There is a 'reference alternative' in REACH that implies substantial testing prior to registration. Registration is the key to use of a substance even though a substance may be used as such, in a mixture, or to be released from an article. However, REACH as well as CLP contain a number of provisions for using literature data, data on similar chemicals etc. in order to avoid unnecessary testing. This especially applies to testing on humans and vertebrate animals. Vaermeforsk, through its Programme on Environmentally Friendly Use of Non-Coal Ashes has developed methodologies and

  1. Adaptive Learning Rule for Hardware-based Deep Neural Networks Using Electronic Synapse Devices

    OpenAIRE

    Lim, Suhwan; Bae, Jong-Ho; Eum, Jai-Ho; Lee, Sungtae; Kim, Chul-Heung; Kwon, Dongseok; Park, Byung-Gook; Lee, Jong-Ho

    2017-01-01

    In this paper, we propose a learning rule based on a back-propagation (BP) algorithm that can be applied to a hardware-based deep neural network (HW-DNN) using electronic devices that exhibit discrete and limited conductance characteristics. This adaptive learning rule, which enables forward propagation, backward propagation, and weight updates in hardware, is helpful during the implementation of power-efficient and high-speed deep neural networks. In simulations using a three-layer perceptron net...
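
    One way to picture the hardware constraint is a weight stored as the difference of two conductances, each restricted to a small set of discrete levels, with the back-propagated gradient deciding only the direction of a one-level update. The Python sketch below is an illustrative assumption along those lines, not the authors' rule.

```python
import numpy as np

# Illustrative constraint of electronic synapse devices: the weight is realised as
# the difference of two conductances, each limited to 32 discrete levels; the
# gradient sign only selects which device receives a single-level pulse.
LEVELS = np.linspace(0.0, 1.0, 32)

def step_up(g):
    """Move a conductance one discrete level up, clipped to the allowed range."""
    idx = int(np.argmin(np.abs(LEVELS - g)))
    return LEVELS[min(idx + 1, len(LEVELS) - 1)]

g_plus, g_minus = 0.50, 0.50               # potentiating / depressing devices
for grad in [-0.3, 0.7, 0.2, -0.1]:        # illustrative back-propagated gradients
    if grad < 0:                           # gradient descent: w should increase
        g_plus = step_up(g_plus)
    else:                                  # w should decrease
        g_minus = step_up(g_minus)
    print(round(g_plus - g_minus, 3))      # effective weight w = g+ - g-
```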

  2. A high-level language for rule-based modelling.

    Science.gov (United States)

    Pedersen, Michael; Phillips, Andrew; Plotkin, Gordon D

    2015-01-01

    Rule-based languages such as Kappa excel in their support for handling the combinatorial complexities prevalent in many biological systems, including signalling pathways. But Kappa provides little structure for organising rules, and large models can therefore be hard to read and maintain. This paper introduces a high-level, modular extension of Kappa called LBS-κ. We demonstrate the constructs of the language through examples and three case studies: a chemotaxis switch ring, a MAPK cascade, and an insulin signalling pathway. We then provide a formal definition of LBS-κ through an abstract syntax and a translation to plain Kappa. The translation is implemented in a compiler tool which is available as a web application. We finally demonstrate how to increase the expressivity of LBS-κ through embedded scripts in a general-purpose programming language, a technique which we view as generally applicable to other domain specific languages.

  3. Motivating Students for Project-based Learning for Application of Research Methodology Skills.

    Science.gov (United States)

    Tiwari, Ranjana; Arya, Raj Kumar; Bansal, Manoj

    2017-12-01

    Project-based learning (PBL) is motivational for students to learn research methodology skills. It is a way to engage them and give them ownership over their own learning. The aim of this study is to use PBL for the application of research methodology skills for better learning by encouraging an all-inclusive approach to teaching and learning rather than an individualized tailored approach. The present study was carried out for MBBS 6th- and 7th-semester students of community medicine. Students and faculty were sensitized about PBL and the components of research methodology skills. They worked in small groups. The students were asked to fill in a student feedback questionnaire and the faculty were asked to fill in a faculty feedback questionnaire. Both questionnaires were assessed on a 5-point Likert scale. After the projects were submitted, document analysis was done. A total of 99 students of the 6th and 7th semester participated in PBL. About 90.91% of students agreed that there should be continuation of PBL in subsequent batches. 73.74% felt satisfied and motivated with PBL, whereas 76.77% felt that they would be able to use research methodology in the near future. PBL requires considerable knowledge, effort, persistence, and self-regulation on the part of the students. They need to devise plans, gather information, and evaluate both their findings and their approach. The facilitator plays a critical role in helping students in the process by shaping opportunities for learning, guiding students' thinking, and helping them construct new understanding.

  4. SPATKIN: a simulator for rule-based modeling of biomolecular site dynamics on surfaces.

    Science.gov (United States)

    Kochanczyk, Marek; Hlavacek, William S; Lipniacki, Tomasz

    2017-11-15

    Rule-based modeling is a powerful approach for studying biomolecular site dynamics. Here, we present SPATKIN, a general-purpose simulator for rule-based modeling in two spatial dimensions. The simulation algorithm is a lattice-based method that tracks Brownian motion of individual molecules and the stochastic firing of rule-defined reaction events. Because rules are used as event generators, the algorithm is network-free, meaning that it does not require generating the complete reaction network implied by the rules prior to simulation. In a simulation, each molecule (or complex of molecules) is taken to occupy a single lattice site that cannot be shared with another molecule (or complex). SPATKIN is capable of simulating a wide array of membrane-associated processes, including adsorption, desorption and crowding. Models are specified using an extension of the BioNetGen language, which allows one to account for spatial features of the simulated process. The C++ source code for SPATKIN is distributed freely under the terms of the GNU GPLv3 license. The source code can be compiled for execution on popular platforms (Windows, Mac and Linux). An installer for 64-bit Windows and a macOS app are available. The source code and precompiled binaries are available at the SPATKIN Web site (http://pmbm.ippt.pan.pl/software/spatkin). spatkin.simulator@gmail.com. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  5. Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1

    International Nuclear Information System (INIS)

    Oztunali, O.I.; Roles, G.W.

    1986-01-01

    Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology

  6. Update of Part 61 Impacts Analysis Methodology. Methodology report. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Oztunali, O.I.; Roles, G.W.

    1986-01-01

    Under contract to the US Nuclear Regulatory Commission, the Envirosphere Company has expanded and updated the impacts analysis methodology used during the development of the 10 CFR Part 61 rule to allow improved consideration of the costs and impacts of treatment and disposal of low-level waste that is close to or exceeds Class C concentrations. The modifications described in this report principally include: (1) an update of the low-level radioactive waste source term, (2) consideration of additional alternative disposal technologies, (3) expansion of the methodology used to calculate disposal costs, (4) consideration of an additional exposure pathway involving direct human contact with disposed waste due to a hypothetical drilling scenario, and (5) use of updated health physics analysis procedures (ICRP-30). Volume 1 of this report describes the calculational algorithms of the updated analysis methodology.

  7. A rule-based computer control system for PBX-M neutral beams

    International Nuclear Information System (INIS)

    Frank, K.T.; Kozub, T.A.; Kugel, H.W.

    1987-01-01

    The Princeton Beta Experiment (PBX) neutral beams have been routinely operated under automatic computer control. A major upgrade of the computer configuration was undertaken to coincide with the PBX machine modification. The primary tasks included in the computer control system are data acquisition, waveform reduction, automatic control and data storage. The portion of the system which will remain intact is the rule-based approach to automatic control. Increased computational and storage capability will allow the expansion of the knowledge base previously used. The hardware configuration supported by the PBX Neutral Beam (XNB) software includes a dedicated MicroVAX with five CAMAC crates and four process controllers. The control algorithms are rule-based and goal-driven. The automatic control system raises ion source electrical parameters to selected energy goals and maintains these levels until new goals are requested or faults are detected.
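
    The rule-based, goal-driven control style described above can be caricatured in a few lines: on each cycle the highest-priority applicable rule decides whether to abort on a fault, step the source parameter toward the requested goal, or hold it there. This is only a sketch; the thresholds, step size and rule set are invented and do not represent the XNB software.

```python
# Illustrative goal-driven, rule-based ramp controller (invented rules, not the XNB system).

def control_step(state):
    """Apply one pass of ordered rules and return the chosen action."""
    rules = [
        # (condition, action) pairs, checked in priority order
        (lambda s: s["fault"],                        "abort"),
        (lambda s: s["value"] < s["goal"] - s["tol"], "raise"),
        (lambda s: s["value"] > s["goal"] + s["tol"], "lower"),
        (lambda s: True,                              "hold"),
    ]
    for condition, action in rules:
        if condition(state):
            return action

state = {"value": 0.0, "goal": 80.0, "tol": 1.0, "fault": False, "step": 5.0}
for cycle in range(30):
    action = control_step(state)
    if action == "abort":
        break
    elif action == "raise":
        state["value"] += state["step"]
    elif action == "lower":
        state["value"] -= state["step"]
    # "hold": keep the value at the goal until a new goal is requested
print(action, round(state["value"], 1))    # -> hold 80.0
```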

  8. Method for automatic control rod operation using rule-based control

    International Nuclear Information System (INIS)

    Kinoshita, Mitsuo; Yamada, Naoyuki; Kiguchi, Takashi

    1988-01-01

    An automatic control rod operation method using rule-based control is proposed. Its features are as follows: (1) a production system that recognizes plant events, determines control actions and realizes fast inference (fast selection of a suitable production rule); (2) use of the fuzzy control technique to determine quantitative control variables. The method's performance was evaluated by simulation tests of automatic control rod operation during a BWR plant start-up. The results were as follows: (1) the performance related to the stabilization of controlled variables and the time required for reactor start-up was superior to that of other methods, such as PID control and program control; (2) the processing time needed to select and interpret a suitable production rule, which was the same as that required for event recognition or determination of a control action, was short enough (below 1 s) for real-time control. The results show that the method is effective for automatic control rod operation. (author)
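
    A toy combination of the two ingredients listed above, production rules that recognize a plant event and select a control action plus a fuzzy rule that turns the deviation into a quantitative control variable, might look like the sketch below. The membership functions, limits and speeds are invented and are not those of the cited method.

```python
# Toy production-rule + fuzzy-control sketch (invented numbers, BWR start-up flavour only).

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_rod_speed(power_error):
    """Fuzzy rules: small error -> slow withdrawal, large error -> fast withdrawal."""
    mu_small, mu_large = tri(power_error, 0, 0, 5), tri(power_error, 2, 10, 10)
    speeds = {"slow": 1.0, "fast": 4.0}          # cm/s, hypothetical
    total = mu_small + mu_large
    if total == 0:
        return 0.0
    # Weighted-average defuzzification
    return (mu_small * speeds["slow"] + mu_large * speeds["fast"]) / total

def select_action(plant):
    """Production rules: recognize the plant event, then determine the action."""
    if plant["period_s"] < 20:                    # event: reactor period too short
        return ("insert", 2.0)
    if plant["power_%"] < plant["target_%"]:      # event: power below start-up target
        return ("withdraw", fuzzy_rod_speed(plant["target_%"] - plant["power_%"]))
    return ("hold", 0.0)

print(select_action({"power_%": 35.0, "target_%": 42.0, "period_s": 60}))
```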

  9. Automatic detection of esophageal pressure events. Is there an alternative to rule-based criteria?

    DEFF Research Database (Denmark)

    Kruse-Andersen, S; Rütz, K; Kolberg, Jens Godsk

    1995-01-01

    … of relevant pressure peaks at the various recording levels. Until now, this selection has been performed entirely by rule-based systems, requiring each pressure deflection to fit within predefined rigid numerical limits in order to be detected. However, due to great variations in the shapes of the pressure curves generated by muscular contractions, rule-based criteria do not always select the pressure events most relevant for further analysis. We have therefore been searching for a new concept for automatic event recognition. The present study describes a new system, based on the method of neurocomputing. … 0.79-0.99 and accuracies of 0.89-0.98, depending on the recording level within the esophageal lumen. The neural networks often recognized peaks that clearly represented true contractions but that had been rejected by a rule-based system. We conclude that neural networks have potential for automatic detection…
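
    The "rigid numerical limits" style of rule-based event detection that the study compares against can be sketched as follows; the amplitude and width thresholds are invented for illustration, and the neural-network alternative is not reproduced here.

```python
# Sketch of rule-based pressure-peak detection with fixed numeric limits (illustrative thresholds).

def detect_peaks(pressure, min_amplitude=15.0, min_width=3):
    """Return (start, end) index pairs of deflections that satisfy the rigid rules."""
    events, start = [], None
    for i, p in enumerate(pressure):
        if p >= min_amplitude and start is None:
            start = i                                   # deflection rises above the limit
        elif p < min_amplitude and start is not None:
            if i - start >= min_width:                  # rule: deflection must be wide enough
                events.append((start, i))
            start = None
    return events

signal = [2, 3, 20, 28, 31, 22, 4, 2, 18, 3, 2]          # mmHg, toy trace
print(detect_peaks(signal))                              # -> [(2, 6)]; the short blip at index 8 is rejected
```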

  10. Integration of object-oriented knowledge representation with the CLIPS rule based system

    Science.gov (United States)

    Logie, David S.; Kamil, Hasan

    1990-01-01

    The paper describes a portion of the work aimed at developing an integrated, knowledge-based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ and is used to build and modify an object-oriented knowledge base. The ORL was designed so that it could be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule-based system CLIPS (C Language Integrated Production System), developed at the NASA Johnson Space Center, is discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule-based knowledge is used to simulate the reasoning process. Alone, the object-based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.
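
    The division of labour described above, an object base that holds problem data and relationships combined with rules that query and modify it, can be mimicked in a small sketch. This is plain Python for illustration only, not the ORL or CLIPS interfaces; the class names and properties are invented.

```python
# Minimal object-base + rule-engine hybrid (illustrative; not the ORL/CLIPS interface).

class Obj:
    def __init__(self, name, parent=None, **props):
        self.name, self.parent, self.props = name, parent, dict(props)

    def get(self, key):
        """Property lookup with inheritance through the relationship link."""
        if key in self.props:
            return self.props[key]
        return self.parent.get(key) if self.parent else None

# Object base: a small network of related engineering objects.
beam_class = Obj("Beam", material="steel", allowable_stress=250.0)   # MPa
beam_12 = Obj("beam-12", parent=beam_class, stress=310.0)

# Rule base: each rule queries the object base and may modify it.
def overstress_rule(obj):
    if obj.get("stress") and obj.get("stress") > obj.get("allowable_stress"):
        obj.props["status"] = "overstressed"          # the rule writes back into the object base

for rule in (overstress_rule,):
    rule(beam_12)

print(beam_12.name, beam_12.get("material"), beam_12.props.get("status"))
# -> beam-12 steel overstressed   (material inherited, status asserted by the rule)
```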

  11. A Java-Web-Based-Learning Methodology, Case Study ...

    African Journals Online (AJOL)

    A Java-Web-Based-Learning Methodology, Case Study: Waterborne diseases. The recent advances in web technologies have opened new opportunities for computer-based education. One can learn independently of time and place constraints, and have instantaneous access to relevant updated material at minimal cost.

  12. Uncertain rule-based fuzzy systems introduction and new directions

    CERN Document Server

    Mendel, Jerry M

    2017-01-01

    The second edition of this textbook provides a fully updated approach to fuzzy sets and systems that can model uncertainty — i.e., “type-2” fuzzy sets and systems. The author demonstrates how to overcome the limitations of classical fuzzy sets and systems, enabling a wide range of applications from time-series forecasting to knowledge mining to control. In this new edition, a bottom-up approach is presented that begins by introducing classical (type-1) fuzzy sets and systems, and then explains how they can be modified to handle uncertainty. The author covers fuzzy rule-based systems – from type-1 to interval type-2 to general type-2 – in one volume. For hands-on experience, the book provides information on accessing MATLAB and Java software to complement the content. The book features a full suite of classroom material. Presents fully updated material on new breakthroughs in human-inspired rule-based techniques for handling real-world uncertainties; Allows those already familiar with type-1 fuzzy se...

  13. A knowledge representation meta-model for rule-based modelling of signalling networks

    Directory of Open Access Journals (Sweden)

    Adrien Basso-Blandin

    2016-03-01

    Full Text Available The study of cellular signalling pathways and their deregulation in disease states, such as cancer, is a large and extremely complex task. Indeed, these systems involve many parts and processes but are studied piecewise and their literatures and data are consequently fragmented, distributed and sometimes—at least apparently—inconsistent. This makes it extremely difficult to build significant explanatory models with the result that effects in these systems that are brought about by many interacting factors are poorly understood. The rule-based approach to modelling has shown some promise for the representation of the highly combinatorial systems typically found in signalling where many of the proteins are composed of multiple binding domains, capable of simultaneous interactions, and/or peptide motifs controlled by post-translational modifications. However, the rule-based approach requires highly detailed information about the precise conditions for each and every interaction which is rarely available from any one single source. Rather, these conditions must be painstakingly inferred and curated, by hand, from information contained in many papers—each of which contains only part of the story. In this paper, we introduce a graph-based meta-model, attuned to the representation of cellular signalling networks, which aims to ease this massive cognitive burden on the rule-based curation process. This meta-model is a generalization of that used by Kappa and BNGL which allows for the flexible representation of knowledge at various levels of granularity. In particular, it allows us to deal with information which has either too little, or too much, detail with respect to the strict rule-based meta-model. Our approach provides a basis for the gradual aggregation of fragmented biological knowledge extracted from the literature into an instance of the meta-model from which we can define an automated translation into executable Kappa programs.

  14. Scaling properties of foreign exchange volatility

    NARCIS (Netherlands)

    Gençay, R.; Selçuk, F.; Whitcher, B.

    2001-01-01

    In this paper, we investigate the scaling properties of foreign exchange volatility. Our methodology is based on a wavelet multi-scaling approach which decomposes the variance of a time series and the covariance between two time series on a scale-by-scale basis through the application of a discrete …
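
    A scale-by-scale decomposition of variance of the kind referred to above can be sketched with a standard discrete wavelet transform. The sketch below assumes the PyWavelets package and uses a synthetic series rather than foreign exchange data; it is not the authors' exact estimator.

```python
# Sketch of a scale-by-scale variance decomposition with a discrete wavelet transform.
# Assumes the PyWavelets package (pywt); the series here is synthetic, not FX data.
import numpy as np
import pywt

rng = np.random.default_rng(0)
returns = rng.standard_normal(4096)                  # stand-in for a (zero-mean) return series

coeffs = pywt.wavedec(returns, "db4", level=6)       # [cA6, cD6, cD5, ..., cD1]
details = coeffs[:0:-1]                              # cD1 (finest) ... cD6 (coarsest)

# For an orthonormal DWT the energy of the series splits across levels,
# so sum(d**2)/N gives each scale's contribution to the sample variance.
n = len(returns)
contributions = [np.sum(d ** 2) / n for d in details]

for level, c in enumerate(contributions, start=1):
    print(f"scale 2^{level}: variance contribution {c:.4f}")
print("total (details only):", round(sum(contributions), 4),
      " sample variance:", round(returns.var(), 4))
```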

  15. Methodology based in the fuzzy logic for constructing the objective functions in optimization problems of nuclear fuel: application to the cells radial design

    International Nuclear Information System (INIS)

    Barragan M, A.M.; Martin del Campo M, C.; Palomera P, M.A.

    2005-01-01

    A methodology based on fuzzy logic for the construction of objective functions for nuclear fuel optimization problems is described. An inference system was created that responds, to some extent, like a human expert when given the task of qualifying different radial designs of fuel cells. Specifically, it is described how an inference system based on fuzzy logic, with five input variables and one output variable, was built; the output corresponds to the objective function for the radial design of a fuel cell for a BWR. The use of the fuzzy tools in Matlab offered the capacity to visualize the output variable as a function of one or two input variables at a time. This made it possible to build, in an appropriate way, the combination of inference rules and the membership functions of the fuzzy sets used for each of the input variables. The resulting objective function was used in an optimization process based on tabu search. The new methodology was tested on the design of a cell used in a fuel assembly of the Laguna Verde reactor, obtaining excellent results. (Author)
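
    A stripped-down version of the idea, five design metrics mapped through membership functions and combined by simple rules into a single objective value, might look like the sketch below. The metric names, membership limits and rules are invented and are not those of the cited inference system.

```python
# Toy fuzzy aggregation of five cell-design metrics into one objective value
# (metric names, limits and rules are hypothetical).

def good(x, lo, hi):
    """Degree to which metric x is 'good': 1 at/below lo, 0 at/above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def objective(metrics):
    mu = {
        "local_peaking": good(metrics["local_peaking"], 1.20, 1.40),
        "enrichment":    good(metrics["avg_enrichment"], 3.8, 4.4),
        "gad_rods":      good(metrics["n_gad_rods"], 12, 18),
        "k_inf_dev":     good(abs(metrics["k_inf"] - 1.10), 0.00, 0.05),
        "fuel_cost":     good(metrics["fuel_cost"], 1.0, 1.3),
    }
    # Rule 1: the design is acceptable only if every metric is at least partly good (min = AND).
    acceptable = min(mu.values())
    # Rule 2: among acceptable designs, prefer those that are good on average.
    return 0.5 * acceptable + 0.5 * sum(mu.values()) / len(mu)

design = {"local_peaking": 1.28, "avg_enrichment": 4.0, "n_gad_rods": 14,
          "k_inf": 1.12, "fuel_cost": 1.1}
print(round(objective(design), 3))     # higher is better for the optimizer (e.g. tabu search)
```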

  16. Evaluation of Rule-based Modularization in Model Transformation Languages illustrated with ATL

    NARCIS (Netherlands)

    Ivanov, Ivan; van den Berg, Klaas; Jouault, Frédéric

    This paper studies ways for modularizing transformation definitions in current rule-based model transformation languages. Two scenarios are shown in which the modular units are identified on the base of the relations between source and target metamodels and on the base of generic transformation

  17. Model identification methodology for fluid-based inerters

    Science.gov (United States)

    Liu, Xiaofu; Jiang, Jason Zheng; Titurus, Branislav; Harrison, Andrew

    2018-06-01

    The inerter is the mechanical dual of the capacitor via the force-current analogy. It has the property that the force across its terminals is proportional to their relative acceleration. Compared with flywheel-based inerters, fluid-based forms have the advantages of improved durability, inherent damping and simplicity of design. In order to improve the understanding of the physical behaviour of this fluid-based device, especially the behaviour caused by the hydraulic resistance and inertial effects in the external tube, this work proposes a comprehensive model identification methodology. Firstly, a modelling procedure is established which allows the topological arrangement of the mechanical networks to be obtained by mapping the damping, inertance and stiffness effects directly to their respective hydraulic counterparts. Secondly, an experimental sequence is followed which separates the identification of friction, stiffness and various damping effects. Furthermore, an experimental set-up is introduced in which two pressure gauges are used to accurately measure the pressure drop across the external tube. Theoretical models with improved confidence are obtained using the proposed methodology for a helical-tube fluid inerter prototype. The sources of the remaining discrepancies are further analysed.
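
    The terminal law quoted above, force proportional to relative acceleration, is conventionally written as below with b denoting the inertance; the notation is standard rather than taken from the paper.

```latex
% Ideal inerter terminal law: force proportional to the relative acceleration of the two terminals
F = b\,\frac{d}{dt}\bigl(v_2 - v_1\bigr) = b\,\bigl(\ddot{x}_2 - \ddot{x}_1\bigr),
\qquad b \; [\mathrm{kg}] \;\text{is the inertance.}
```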

  18. Optical Generation of Fuzzy-Based Rules

    Science.gov (United States)

    Gur, Eran; Mendlovic, David; Zalevsky, Zeev

    2002-08-01

    In the last third of the 20th century, fuzzy logic rose from a mathematical concept to an applicable approach in soft computing. Today, fuzzy logic is used in control systems for various applications, such as washing machines, train-brake systems, automatic automobile gearboxes, and so forth. The optical implementation of fuzzy inferencing was presented by the authors in previous papers, with extra emphasis on applications with two dominant inputs. In this paper the authors introduce a real-time optical rule generator for the dual-input fuzzy-inference engine. The paper briefly reviews the dual-input optical implementation of fuzzy-logic inferencing. Then, the concept of constructing a set of rules from given data is discussed. Next, the authors show ways to implement this procedure optically. The discussion is accompanied by an example that illustrates the transformation from raw data into fuzzy set rules.

  19. Fall Down Detection Under Smart Home System.

    Science.gov (United States)

    Juang, Li-Hong; Wu, Ming-Ni

    2015-10-01

    Medical technology for the elderly population is an inevitable trend, and intelligent home care is therefore an important direction for science and technology development; in particular, in-home safety management for the elderly is becoming more and more important. In this research, a low-computation algorithm using the triangular pattern rule is proposed that can quickly detect the fall-down movements of a human figure by installing a robot with camera vision at home, making it possible to judge the fall-down movements of in-home elderly people in real time. This paper presents a preliminary design and experimental results for detecting fall-down movements from body posture, utilizing image pre-processing and three triangular mass-central points to extract the characteristic features. The results show that, with the adopted characteristic values, the accuracy can reach up to 90% for a single posture. Furthermore, the accuracy can be up to 100% when a continuous-time sampling criterion and a support vector machine (SVM) classifier are used.
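
    For the final classification stage mentioned above, a minimal sketch with scikit-learn shows how posture features derived from the triangle (for example a height-to-width ratio and a centroid drop rate, both hypothetical here) could be fed to an SVM; the data are synthetic placeholders, not the paper's.

```python
# Sketch: SVM classification of posture features (synthetic data, not the paper's features).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical features per frame: triangle height/width ratio and centroid drop rate.
standing = np.column_stack([rng.normal(1.8, 0.1, 100), rng.normal(0.0, 0.05, 100)])
fallen   = np.column_stack([rng.normal(0.6, 0.1, 100), rng.normal(0.8, 0.10, 100)])

X = np.vstack([standing, fallen])
y = np.array([0] * 100 + [1] * 100)          # 0 = upright, 1 = fall-down

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[0.65, 0.75], [1.75, 0.02]]))   # expected: [1 0]
```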

  20. Compensatory Processing During Rule-Based Category Learning in Older Adults

    Science.gov (United States)

    Bharani, Krishna L.; Paller, Ken A.; Reber, Paul J.; Weintraub, Sandra; Yanar, Jorge; Morrison, Robert G.

    2016-01-01

    Healthy older adults typically perform worse than younger adults at rule-based category learning, but better than patients with Alzheimer's or Parkinson's disease. To further investigate aging's effect on rule-based category learning, we monitored event-related potentials (ERPs) while younger and neuropsychologically typical older adults performed a visual category-learning task with a rule-based category structure and trial-by-trial feedback. Using these procedures, we previously identified ERPs sensitive to categorization strategy and accuracy in young participants. In addition, previous studies have demonstrated the importance of neural processing in the prefrontal cortex and the medial temporal lobe for this task. In this study, older adults showed lower accuracy and longer response times than younger adults, but there were two distinct subgroups of older adults. One subgroup showed near-chance performance throughout the procedure, never categorizing accurately. The other subgroup reached asymptotic accuracy that was equivalent to that in younger adults, although they categorized more slowly. These two subgroups were further distinguished via ERPs. Consistent with the compensation theory of cognitive aging, older adults who successfully learned showed larger frontal ERPs when compared with younger adults. Recruitment of prefrontal resources may have improved performance while slowing response times. Additionally, correlations of feedback-locked P300 amplitudes with category-learning accuracy differentiated successful younger and older adults. Overall, the results suggest that the ability to adapt one's behavior in response to feedback during learning varies across older individuals, and that the failure of some to adapt their behavior may reflect inadequate engagement of prefrontal cortex. PMID:26422522

  1. SNMG: a social-level norm-based methodology for macro-governing service collaboration processes

    Science.gov (United States)

    Gao, Ji; Lv, Hexin; Jin, Zhiyong; Xu, Ping

    2017-08-01

    In order to adapt to the accelerating trend toward open collaboration between enterprises, this paper proposes a social-level, norm-based methodology for macro-governing service collaboration processes, called SNMG, to regulate and control the social-level visible macro-behaviors of the social individuals participating in collaborations. SNMG not only effectively removes the uncontrollability hindrance faced by open social activities, but also enables cross-management-domain collaborations to be implemented by uniting the centralized controls that social individuals exercise over their respective social activities. This paper thereby provides a new system construction mode to promote the development and large-scale deployment of service collaborations.

  2. Scaling of Thermal-Hydraulic Phenomena and System Code Assessment

    International Nuclear Information System (INIS)

    Wolfert, K.

    2008-01-01

    In the last five decades, large efforts have been undertaken to provide reliable thermal-hydraulic system codes for the analysis of transients and accidents in nuclear power plants. Many separate effects tests and integral system tests were carried out to establish a data base for code development and code validation. In this context, the question has to be answered to what extent the results of down-scaled test facilities represent the thermal-hydraulic behaviour expected in a full-scale nuclear reactor under accident conditions. Scaling principles, developed by many scientists and engineers, present a scientific and technical basis and give valuable orientation for the design of test facilities. However, it is impossible for a down-scaled facility to reproduce all physical phenomena in the correct temporal sequence and in the kind and strength of their occurrence. The designer needs to optimize a down-scaled facility for the processes of primary interest, which inevitably leads to scaling distortions of other, less important processes. Taking these weak points into account, a goal-oriented code validation strategy is required, based on the analysis of separate effects tests and integral system tests as well as of transients that have occurred in full-scale nuclear reactors. The CSNI validation matrices are an excellent basis for fulfilling this task. Full-scale separate effects tests play an important role here.

  3. Consistence of Network Filtering Rules

    Institute of Scientific and Technical Information of China (English)

    SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian

    2004-01-01

    The inconsistency of firewall/VPN (Virtual Private Network) rules incurs a huge maintenance cost. With the development of multinational companies, SOHO offices and e-government, the number of firewalls/VPNs will increase rapidly, and rule tables, whether on a stand-alone device or across a network, will grow geometrically. Checking the consistency of rule tables manually is inadequate. A formal approach can define semantic consistency and provide a theoretical foundation for the intelligent management of rule tables. In this paper, a formalization of host rules and network rules for automatic rule validation, based on set theory, is proposed and a rule-validation scheme is defined. The analysis results show the superior performance of the method and demonstrate its potential for intelligent management based on rule tables.
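
    One concrete instance of the kind of set-based consistency check discussed above is the detection of shadowed rules, i.e. a rule whose match set is entirely covered by an earlier rule with a different action. The rule format below is a simplification for illustration, not the paper's formalism.

```python
# Sketch: detect shadowed filtering rules with simple set reasoning (simplified rule format).
from ipaddress import ip_network

rules = [
    # (source network, destination port set, action), checked top-down
    ("10.0.0.0/8",     {80, 443}, "deny"),
    ("10.1.0.0/16",    {80},      "allow"),   # shadowed: covered by rule 0 with the opposite action
    ("192.168.0.0/16", {22},      "allow"),
]

def covers(earlier, later):
    """True if the earlier rule matches every packet the later rule matches."""
    (src_a, ports_a, _), (src_b, ports_b, _) = earlier, later
    return ip_network(src_b).subnet_of(ip_network(src_a)) and ports_b <= ports_a

for i, later in enumerate(rules):
    for earlier in rules[:i]:
        if covers(earlier, later) and earlier[2] != later[2]:
            print(f"rule {i} is shadowed by an earlier {earlier[2]} rule -> inconsistent")
```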

  4. Small-scale CDM projects in a competitive electricity industry: How good is a simplified baseline methodology?

    International Nuclear Information System (INIS)

    Shrestha, Ram M.; Abeygunawardana, A.M.A.K.

    2007-01-01

    Setting baseline emissions is one of the principal tasks involved in awarding credits for greenhouse gas emission (GHG) mitigation projects under the Clean Development Mechanism (CDM). An emission baseline has to be project-specific in order to be accurate. However, project-specific baseline calculations are subject to high transaction costs, which disadvantage small-scale projects. For this reason, the CDM-Executive Board (CDM-EB) has approved simplified baseline methodologies for selected small-scale CDM project categories. While the simplified methods help reduce the transaction cost, they may also result in inaccuracies in the estimation of emission reductions from CDM projects. The purpose of this paper is to present a rigorous economic scheduling method for calculating the GHG emission reduction in a hypothetical competitive electricity industry due to the operation of a renewable energy-based power plant under CDM and compare the GHG emission reduction derived from the rigorous method with that obtained from the use of a simplified (i.e., standardized) method approved by the CDM-EB. A key finding of the paper is that depending upon the level of power demand, prices of electricity and input fuels, the simplified method can lead to either significant overestimation or substantial underestimation of emission reduction due to the operation of renewable energy-based power projects in a competitive electricity industry

  5. Heuristic simulation of nuclear systems on a supercomputer using the HAL-1987 general-purpose production-rule analysis system

    International Nuclear Information System (INIS)

    Ragheb, M.; Gvillo, D.; Makowitz, H.

    1987-01-01

    HAL-1987 is a general-purpose tool for the construction of production-rule analysis systems. It uses the rule-based paradigm from the part of artificial intelligence concerned with knowledge engineering. It uses backward-chaining and forward-chaining in an antecedent-consequent logic, and is programmed in Portable Standard Lisp (PSL). The inference engine is flexible and accommodates general additions and modifications to the knowledge base. The system is used in coupled symbolic-procedural programming adaptive methodologies for stochastic simulations. In Monte Carlo simulations of particle transport, the system considers the pre-processing of the input data to the simulation and adaptively controls the variance reduction process as the simulation progresses. This is accomplished through the use of a knowledge base of rules which encompass the user's expertise in the variance reduction process. It is also applied to the construction of model-based systems for monitoring, fault-diagnosis and crisis-alert in engineering devices, particularly in the field of nuclear reactor safety analysis
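
    The antecedent-consequent, forward-chaining style mentioned above reduces to a few lines; the sketch below is generic Python rather than Portable Standard Lisp, and the facts and rules about a Monte Carlo run are made up for illustration.

```python
# Minimal forward-chaining sketch in the antecedent -> consequent style (illustrative facts/rules).

facts = {"deep-penetration-problem", "low-scoring-rate"}

rules = [
    # (antecedents, consequent)
    ({"deep-penetration-problem"},                     "use-importance-splitting"),
    ({"low-scoring-rate", "use-importance-splitting"}, "increase-splitting-ratio"),
]

changed = True
while changed:                       # keep firing rules until no new fact is asserted
    changed = False
    for antecedents, consequent in rules:
        if antecedents <= facts and consequent not in facts:
            facts.add(consequent)
            changed = True

print(sorted(facts))
```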

  6. Risk-based Regulatory Evaluation Program methodology

    International Nuclear Information System (INIS)

    DuCharme, A.R.; Sanders, G.A.; Carlson, D.D.; Asselin, S.V.

    1987-01-01

    The objectives of this DOE-supported Regulatory Evaluation Program are to analyze and evaluate the safety importance and economic significance of existing regulatory guidance in order to assist in the improvement of the regulatory process for current-generation and future-design reactors. A risk-based cost-benefit methodology was developed to evaluate the safety benefit and cost of specific regulations or Standard Review Plan sections. Risk-based methods can be used in lieu of or in combination with deterministic methods in developing regulatory requirements and reaching regulatory decisions.

  7. Development of a rule-based diagnostic platform on an object-oriented expert system shell

    International Nuclear Information System (INIS)

    Wang, Wenlin; Yang, Ming; Seong, Poong Hyun

    2016-01-01

    Highlights: • The Multilevel Flow Model represents system knowledge as a domain map in the expert system. • The rule-based fault diagnostic expert system can identify the root cause via a causal chain. • The rule-based fault diagnostic expert system can be used for fault simulation training. - Abstract: This paper presents the development and implementation of a real-time rule-based diagnostic platform. The knowledge was acquired from domain experts and textbooks, and the design of the fault diagnosis expert system was performed in the following ways: (i) establishing corresponding classes and instances to build the domain map, (ii) creating generic fault models based on events, and (iii) building diagnostic reasoning based on rules. Knowledge representation is a complicated issue for expert systems. One highlight of this paper is that the Multilevel Flow Model has been used to represent the knowledge; it composes the domain map within the expert system as well as providing a concise description of the system. The developed platform is illustrated using the pressure safety system of a pressurized water reactor as the simulation test bed; the platform is developed using the commercial and industrially validated software G2. An emulation test was conducted and it was shown that the fault diagnosis expert system can identify faults correctly and in a timely way; the system can be used as a simulation-based training tool to assist operators in making better decisions.
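
    The "root cause via a causal chain" idea highlighted above can be sketched as a walk backwards through a small cause-effect graph until a node with no further upstream cause is reached; the graph below is a made-up pressure-safety example, not the paper's Multilevel Flow Model.

```python
# Sketch: trace an observed symptom back along a causal chain to its root cause
# (toy cause-effect graph, not an actual Multilevel Flow Model).

causes = {
    # effect: upstream cause (only one active cause per effect in this toy example)
    "low-pressurizer-pressure": "excessive-coolant-letdown",
    "excessive-coolant-letdown": "letdown-valve-stuck-open",
}

def root_cause(symptom):
    chain = [symptom]
    while chain[-1] in causes:            # follow the causal chain upstream
        chain.append(causes[chain[-1]])
    return chain

print(" <- ".join(root_cause("low-pressurizer-pressure")))
# low-pressurizer-pressure <- excessive-coolant-letdown <- letdown-valve-stuck-open
```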

  8. Multilevel Association Rule Mining for Bridge Resource Management Based on Immune Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Yang Ou

    2014-01-01

    Full Text Available This paper is concerned with the problem of multilevel association rule mining for bridge resource management (BRM), which was announced by the IMO in 2010. The goal of this paper is to mine the association rules between the items of BRM and vessel accidents. However, because only indirect data can be collected, which by itself seems useless for analyzing the relationship between the items of BRM and the accidents, cross-level association rules, which relate the indirect data to the items of BRM, need to be studied. In this paper, firstly, a cross-level coding scheme for mining the multilevel association rules is proposed. Secondly, we execute the immune genetic algorithm with this coding scheme to analyze BRM. Thirdly, based on basic maritime investigation reports, some important association rules for the items of BRM are mined and studied. Finally, according to the results of the analysis, we provide suggestions for seafarer training, assessment, and management.
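
    The basic quantities mined in such a study, the support and confidence of an "items of BRM ⇒ accident" rule over a set of investigation reports, can be computed as in the sketch below; the report contents are fabricated placeholders.

```python
# Sketch: support and confidence of an association rule over toy incident reports.

reports = [
    {"poor-communication", "fatigue", "grounding"},
    {"poor-communication", "grounding"},
    {"fatigue"},
    {"poor-communication", "fatigue", "collision"},
]

def support(itemset):
    return sum(itemset <= r for r in reports) / len(reports)

antecedent, consequent = {"poor-communication"}, {"grounding"}
conf = support(antecedent | consequent) / support(antecedent)
print(f"support={support(antecedent | consequent):.2f}, confidence={conf:.2f}")
# support=0.50, confidence=0.67
```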

  9. Fuzzy rule-based modelling for human health risk from naturally occurring radioactive materials in produced water

    International Nuclear Information System (INIS)

    Shakhawat, Chowdhury; Tahir, Husain; Neil, Bose

    2006-01-01

    Produced water, discharged from offshore oil and gas operations, contains chemicals from formation water, condensed water, and any chemical added down hole or during the oil/water separation process. Although most of the contaminants fall below detection limits within a short distance of the discharge port, a few of the remaining contaminants, including naturally occurring radioactive materials (NORM), are of concern due to their bioavailability in the medium and their bioaccumulation in finfish and shellfish species used for human consumption. In the past, several initiatives have been taken to model human health risk from NORM in produced water. The parameters of the available risk assessment models are imprecise and sparse in nature. In this study, a fuzzy possibilistic evaluation using fuzzy rule-based modeling is presented. Being conservative in nature, the possibilistic approach considers all possible input parameter values and thus provides better environmental prediction than a Monte Carlo (MC) calculation. The uncertainties of the input parameters were captured with triangular fuzzy numbers (TFNs). Fuzzy if-then rules were applied to the input concentrations of two isotopes of radium, namely 226Ra and 228Ra, available in produced water, and to the bulk dilution, in order to evaluate the radium concentration in fish tissue used for human consumption. The bulk dilution was predicted using four input parameters: produced water discharge rate, ambient seawater velocity, depth of the discharge port and density gradient. The evaluated cancer risk complies with the regulatory guidelines; thus, minimal risk to human health is expected from NORM components in produced water.

  10. A risk-based sensor placement methodology

    International Nuclear Information System (INIS)

    Lee, Ronald W.; Kulesz, James J.

    2008-01-01

    A risk-based sensor placement methodology is proposed to solve the problem of optimal location of sensors to protect population against the exposure to, and effects of, known and/or postulated chemical, biological, and/or radiological threats. Risk is calculated as a quantitative value representing population at risk from exposure at standard exposure levels. Historical meteorological data are used to characterize weather conditions as the frequency of wind speed and direction pairs. The meteorological data drive atmospheric transport and dispersion modeling of the threats, the results of which are used to calculate risk values. Sensor locations are determined via an iterative dynamic programming algorithm whereby threats detected by sensors placed in prior iterations are removed from consideration in subsequent iterations. In addition to the risk-based placement algorithm, the proposed methodology provides a quantification of the marginal utility of each additional sensor. This is the fraction of the total risk accounted for by placement of the sensor. Thus, the criteria for halting the iterative process can be the number of sensors available, a threshold marginal utility value, and/or a minimum cumulative utility achieved with all sensors
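
    The iterative placement loop described above, pick the site covering the most remaining risk, remove the threats it detects and repeat, reduces to a simple greedy sketch; the locations, coverage sets and risk values below are invented.

```python
# Greedy sketch of risk-based sensor placement with marginal utility (invented numbers).

threat_risk = {"T1": 40.0, "T2": 25.0, "T3": 20.0, "T4": 15.0}   # population-at-risk values
coverage = {                                                      # threats each site would detect
    "siteA": {"T1", "T2"},
    "siteB": {"T2", "T3", "T4"},
    "siteC": {"T1", "T4"},
}

remaining = dict(threat_risk)
total = sum(threat_risk.values())
placed = []

while remaining and len(placed) < 2:                 # stop criterion: number of sensors available
    site = max(coverage, key=lambda s: sum(remaining.get(t, 0.0) for t in coverage[s]))
    covered = sum(remaining.pop(t) for t in coverage[site] if t in remaining)
    placed.append((site, covered / total))           # marginal utility = fraction of total risk
    del coverage[site]

print(placed)        # -> [('siteA', 0.65), ('siteB', 0.35)]
```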

  11. Molecular-Scale Electronics: From Concept to Function.

    Science.gov (United States)

    Xiang, Dong; Wang, Xiaolong; Jia, Chuancheng; Lee, Takhee; Guo, Xuefeng

    2016-04-13

    Creating functional electrical circuits using individual or ensemble molecules, often termed as "molecular-scale electronics", not only meets the increasing technical demands of the miniaturization of traditional Si-based electronic devices, but also provides an ideal window of exploring the intrinsic properties of materials at the molecular level. This Review covers the major advances with the most general applicability and emphasizes new insights into the development of efficient platform methodologies for building reliable molecular electronic devices with desired functionalities through the combination of programmed bottom-up self-assembly and sophisticated top-down device fabrication. First, we summarize a number of different approaches of forming molecular-scale junctions and discuss various experimental techniques for examining these nanoscale circuits in details. We then give a full introduction of characterization techniques and theoretical simulations for molecular electronics. Third, we highlight the major contributions and new concepts of integrating molecular functionalities into electrical circuits. Finally, we provide a critical discussion of limitations and main challenges that still exist for the development of molecular electronics. These analyses should be valuable for deeply understanding charge transport through molecular junctions, the device fabrication process, and the roadmap for future practical molecular electronics.

  12. Modelling Aṣṭādhyāyī: An Approach Based on the Methodology of Ancillary Disciplines (Vedāṅga)

    Science.gov (United States)

    Mishra, Anand

    This article proposes a general model based on the common methodological approach of the ancillary disciplines (Vedāṅga) associated with the Vedas, taking examples from Śikṣā, Chandas, Vyākaraṇa and Prātiśākhya texts. It develops and elaborates this model further to represent the contents and processes of Aṣṭādhyāyī. Certain key features are added to my earlier modelling of the Pāṇinian system of Sanskrit grammar. These include broader coverage of the Pāṇinian meta-language, a mechanism for the automatic application of rules, and the positioning of the grammatical system within the procedural complexes of the ancillary disciplines.

  13. Attribute Index and Uniform Design Based Multiobjective Association Rule Mining with Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Jie Zhang

    2013-01-01

    Full Text Available In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent of the rule, the consequent of the rule, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All metric values needed to evaluate an association rule can then be obtained from the attribute indices alone, without scanning the database any further. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It does not require a user-specified minimum support and minimum confidence anymore, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumption.
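
    The attribute-index strategy described above amounts to building one inverted index per attribute in a single database pass, so that rule metrics become set intersections instead of repeated scans; a minimal sketch with a toy transaction table (not the paper's datasets) is shown below.

```python
# Sketch: attribute (inverted) indices so rule metrics need set intersections, not DB rescans.

transactions = [
    {"milk", "bread"},
    {"milk", "butter"},
    {"bread", "butter"},
    {"milk", "bread", "butter"},
]

# One pass over the database builds an index: attribute -> ids of transactions containing it.
index = {}
for tid, items in enumerate(transactions):
    for item in items:
        index.setdefault(item, set()).add(tid)

def support(itemset):
    """Support computed from the indices alone: intersect the transaction-id sets."""
    ids = set.intersection(*(index[i] for i in itemset))
    return len(ids) / len(transactions)

antecedent, consequent = {"milk"}, {"bread"}
confidence = support(antecedent | consequent) / support(antecedent)
print(support(antecedent | consequent), round(confidence, 2))    # -> 0.5 0.67
```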

  14. Attribute index and uniform design based multiobjective association rule mining with evolutionary algorithm.

    Science.gov (United States)

    Zhang, Jie; Wang, Yuping; Feng, Junhong

    2013-01-01

    In association rule mining, evaluating an association rule requires repeatedly scanning the database to compare the whole database with the antecedent of the rule, the consequent of the rule, and the whole rule. In order to decrease the number of comparisons and the time consumed, we present an attribute index strategy. It needs to scan the database only once, to create the attribute index of each attribute. All metric values needed to evaluate an association rule can then be obtained from the attribute indices alone, without scanning the database any further. The paper treats association rule mining as a multiobjective problem rather than a single-objective one. In order to make the acquired solutions scatter uniformly toward the Pareto frontier in the objective space, an elitism policy and uniform design are introduced. The paper presents the algorithm of attribute index and uniform design based multiobjective association rule mining with an evolutionary algorithm, abbreviated as IUARMMEA. It does not require a user-specified minimum support and minimum confidence anymore, but uses a simple attribute index. It uses a well-designed real encoding so as to extend its application scope. Experiments performed on several databases demonstrate that the proposed algorithm has excellent performance and can significantly reduce the number of comparisons and the time consumption.

  15. Validation of a Methodology to Predict Micro-Vibrations Based on Finite Element Model Approach

    Science.gov (United States)

    Soula, Laurent; Rathband, Ian; Laduree, Gregory

    2014-06-01

    This paper presents the second part of the ESA R&D study called "METhodology for Analysis of structure-borne MICro-vibrations" (METAMIC). After defining an integrated analysis and test methodology to help predict micro-vibrations [1], a full-scale validation test campaign was carried out. It is based on a bread-board representative of a typical spacecraft (S/C) platform, consisting of a versatile structure made of aluminium sandwich panels equipped with different disturbance sources and a dummy payload made of a silicon carbide (SiC) bench. The bread-board was instrumented with a large set of sensitive accelerometers, and tests were performed including background-noise measurement, modal characterization and micro-vibration tests. The results provided the responses to perturbations coming from a reaction wheel or cryo-cooler compressors, operated independently and then simultaneously in different operating modes. Using consistent modelling and associated experimental characterization techniques, a correlation status has been assessed by comparing test results with predictions based on the FEM approach. Very good results have been achieved, particularly for the case of a wheel in sweeping-rate operation, where the test results were over-predicted within a reasonable margin of less than a factor of two. Some limitations of the methodology have also been identified for sources operating at a fixed rate or exhibiting a small number of dominant harmonics, and recommendations have been issued in order to deal with model uncertainties and stay conservative.

  16. Method to restore images from chaotic frequency-down-converted light using phase matching

    International Nuclear Information System (INIS)

    Andreoni, Alessandra; Puddu, Emiliano; Bondani, Maria

    2006-01-01

    We present an optical frequency-down-conversion process for the image of an object illuminated with chaotic light, in which the low-frequency field entering the second-order nonlinear crystal is also chaotic. We show that the fulfillment of the phase-matching conditions by the chaotic interacting fields provides the rules for retrieving the object image by calculating suitable correlations of the local intensity fluctuations, even if only a single record of the down-converted chaotic image is available.

  17. Implementation and adaptation of a macro-scale methodology to calculate direct economic losses

    Science.gov (United States)

    Natho, Stephanie; Thieken, Annegret

    2017-04-01

    As one of the 195 member countries of the United Nations, Germany signed the Sendai Framework for Disaster Risk Reduction 2015-2030 (SFDRR). With this, though it is voluntary and non-binding, Germany agreed to report on achievements in reducing disaster impacts. Among other targets, the SFDRR aims at reducing direct economic losses in relation to the global gross domestic product by 2030 - but how can this be measured without a standardized approach? The United Nations Office for Disaster Risk Reduction (UNISDR) has hence proposed a methodology to estimate direct economic losses per event and country on the basis of the number of damaged or destroyed items in different sectors. The method is based on experience from developing countries. However, its applicability in industrialized countries has not been investigated so far. Therefore, this study presents the first implementation of this approach in Germany, tests its applicability for the costliest natural hazards, and suggests adaptations. The approach proposed by UNISDR considers assets in the sectors agriculture, industry, commerce, housing, and infrastructure, the latter comprising roads and medical and educational facilities. The asset values are estimated on the basis of sector- and event-specific numbers of affected items, sector-specific mean sizes per item, their standardized construction costs per square meter and a loss ratio of 25%. The methodology was tested for the three costliest natural hazard types in Germany, i.e. floods, storms and hail storms, considering 13 case studies at the federal or state scale between 1984 and 2016. No complete calculation of all sectors necessary to describe the total direct economic loss was possible due to incomplete documentation; therefore, the method was tested sector by sector. Three new modules were developed to better adapt the methodology to German conditions, covering private transport (cars), forestry and paved roads. Unpaved roads, in contrast, were integrated into the agricultural and …
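
    Numerically, the UNISDR-style estimate described above multiplies, per sector, the number of affected items by a mean size, a standardized construction cost per square meter and the fixed 25% loss ratio; the figures below are placeholders, not values from the German case studies.

```python
# Sketch of the per-sector direct-loss estimate (placeholder numbers, not case-study data).

LOSS_RATIO = 0.25        # fixed ratio assumed by the methodology

sectors = {
    # sector: (number of damaged items, mean size per item in m2, construction cost in EUR/m2)
    "housing":   (1200,  110.0, 1500.0),
    "commerce":  (300,   400.0, 1100.0),
    "education": (15,   2500.0, 1300.0),
}

losses = {name: n * size * cost * LOSS_RATIO for name, (n, size, cost) in sectors.items()}
for name, loss in losses.items():
    print(f"{name:10s} {loss/1e6:8.1f} M EUR")
print(f"{'total':10s} {sum(losses.values())/1e6:8.1f} M EUR")
```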

  18. Methodology used to produce an encoded 1:100,000-scale digital hydrographic data layer for the Pacific Northwest

    Science.gov (United States)

    Fisher, B.J.

    1996-01-01

    The U.S. Geological Survey (USGS) has produced a River Reach File data layer for the Pacific Northwest for use in water-resource management applications. The Pacific Northwest (PNW) River Reach Files, a geo-referenced river reach data layer at 1:100,000-scale, are encoded with the U.S. Environmental Protection Agency's (EPA) reach numbers. The encoding was a primary task of the River Reach project, because EPA's reach identifiers are also an integral hydrologic component in a regional Northwest Environmental Data Base - an ongoing effort by Federal and State agencies to compile information on reach-specific resources on rivers in Oregon, Idaho, Washington, and western Montana. A unique conflation algorithm was developed by the USGS to transfer the EPA reach codes and other meaningful attributes from the 1:250,000-scale EPA TRACE graphic files to the PNW Reach Files. The PNW Reach Files also were designed so that reach-specific information upstream or downstream from a point in the stream network could be extracted from feature attribute tables or from a Geographic Information System. This report documents the methodology used to create this 1:100,000-scale hydrologic data layer.

  19. Validity and reliability of the Persian version of mobile phone addiction scale

    OpenAIRE

    Mazaheri, Maryam Amidi; Karbasi, Mojtaba

    2014-01-01

    Background: With regard to the large number of mobile phone users, especially among college students in Iran, addiction to mobile phones is attracting increasing concern. There is an urgent need for a reliable and valid instrument to measure this phenomenon. This study examines the validity and reliability of the Persian version of the mobile phone addiction scale (MPAIS) in college students. Materials and Methods: This methodological study was done at Isfahan University of Medical Sciences. One thousand one hundr...

  20. An expert system design to diagnose cancer by using a new method reduced rule base.

    Science.gov (United States)

    Başçiftçi, Fatih; Avuçlu, Emre

    2018-04-01

    A Medical Expert System (MES) was developed which uses a reduced rule base to diagnose cancer risk according to the symptoms present in an individual. A total of 13 symptoms were used. With the new MES, the reduced rules are checked instead of all possibilities (2^13 = 8192 different possibilities occur). By checking the reduced rules, results are found more quickly. The method of two-level simplification of Boolean functions was used to obtain the reduced rule base. Thanks to the developed application, with a dynamic number of inputs and outputs on different platforms, anyone can easily test their own cancer risk. More accurate results were obtained by considering all the possibilities related to cancer. Thirteen different risk factors were used to determine the type of cancer. The truth table produced in our study has 13 inputs and 4 outputs. The Boolean function minimization method is used to obtain fewer cases by simplifying the logical functions. Cancer can be diagnosed quickly thanks to the checking of the four simplified output functions. Diagnosis made with the 4 output values obtained using the reduced rule base was found to be quicker than diagnosis made by screening all 2^13 = 8192 possibilities. With the improved MES, more possibilities were added to the process and more accurate diagnostic results were obtained. As a result of the simplification process, a diagnosis speed gain of 100% was obtained for breast and renal cancer and a gain of 99% for cervical and lung cancer. With Boolean function minimization, a smaller number of rules is evaluated instead of a large number of rules. Reducing the number of rules allows the designed system to work more efficiently, saves time, and facilitates transferring the rules to the designed expert system. Interfaces were developed on different software platforms to enable users to test the accuracy of the application. Anyone is able to assess their own cancer risk using the determinative risk factors. Thereby…
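
    The speed gain claimed above comes from evaluating a handful of simplified product terms instead of scanning every row of the truth table; a toy four-symptom version (invented rules, not the authors' 13-input medical rule base) is sketched below.

```python
# Toy comparison: reduced rule base vs. full truth-table lookup (invented 4-symptom rules).
from itertools import product

N = 4                                            # symptoms s0..s3 (the paper uses 13)

def diagnosis(symptoms):
    """Reference definition: 'high risk' iff (s0 AND s1) OR (s2 AND NOT s3)."""
    s0, s1, s2, s3 = symptoms
    return (s0 and s1) or (s2 and not s3)

# Full rule base: one row per input combination -> 2**N entries to search.
full_table = {bits: diagnosis(bits) for bits in product((0, 1), repeat=N)}

# Reduced rule base after Boolean minimization: only 2 product terms to check.
reduced_rules = [
    {0: 1, 1: 1},        # s0=1 AND s1=1  -> high risk
    {2: 1, 3: 0},        # s2=1 AND s3=0  -> high risk
]

def diagnose_reduced(symptoms):
    return any(all(symptoms[i] == v for i, v in rule.items()) for rule in reduced_rules)

# The reduced rules reproduce the full table while checking 2 rules instead of 2**4 = 16 rows.
assert all(diagnose_reduced(bits) == bool(out) for bits, out in full_table.items())
print("reduced rule base matches the full", 2 ** N, "row truth table")
```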