WorldWideScience

Sample records for model near-term machines

  1. Evaluating Modeled Impact Metrics for Human Health, Agriculture Growth, and Near-Term Climate

    Science.gov (United States)

    Seltzer, K. M.; Shindell, D. T.; Faluvegi, G.; Murray, L. T.

    2017-12-01

    Simulated metrics that assess impacts on human health, agriculture growth, and near-term climate were evaluated using ground-based and satellite observations. The NASA GISS ModelE2 and GEOS-Chem models were used to simulate the near-present chemistry of the atmosphere. A suite of simulations that varied by model, meteorology, horizontal resolution, emissions inventory, and emissions year was performed, enabling an analysis of metric sensitivities to various model components. All simulations utilized consistent anthropogenic global emissions inventories (ECLIPSE V5a or CEDS), and an evaluation of simulated results was carried out for 2004-2006 and 2009-2011 over the United States and 2014-2015 over China. Results for O3- and PM2.5-based metrics differed only slightly between the model resolutions considered here (2.0° × 2.5° and 0.5° × 0.666°), while model, meteorology, and emissions inventory each played a larger role in the variance. Surface metrics related to O3 were consistently biased high, though to varying degrees, demonstrating the need to evaluate a particular modeling framework before O3 impacts are quantified. Surface metrics related to PM2.5 varied widely across configurations, indicating that a robust multimodel mean is a valuable tool for predicting PM2.5-related impacts. Often, the configuration that best captured the change of a metric over time differed from the configuration that best captured the magnitude of the same metric, demonstrating the challenge of skillfully simulating impacts. These results highlight the strengths and weaknesses of these models in simulating impact metrics related to air quality and near-term climate. With such information, the reliability of historical and future simulations can be better understood.
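    The abstract's language ("biased high", capturing change versus magnitude) refers to standard model-evaluation statistics. A minimal sketch of such statistics, with hypothetical paired model/observation values standing in for the study's data:

```python
# Common model-evaluation statistics implied by the abstract: mean bias,
# normalized mean bias, and RMSE between simulated and observed values.
# The arrays below are synthetic stand-ins, not the paper's data.
import numpy as np

def evaluation_stats(simulated: np.ndarray, observed: np.ndarray) -> dict:
    """Return bias/error statistics for paired model-observation samples."""
    diff = simulated - observed
    return {
        "mean_bias": diff.mean(),                             # average over/under-prediction
        "normalized_mean_bias": diff.sum() / observed.sum(),  # bias relative to obs total
        "rmse": np.sqrt((diff ** 2).mean()),                  # typical error magnitude
    }

# Example with synthetic surface-O3 values (ppb) from a high-biased model.
obs = np.array([42.0, 55.0, 61.0, 48.0, 39.0])
sim = np.array([51.0, 60.0, 70.0, 55.0, 46.0])
print(evaluation_stats(sim, obs))
```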

  2. Modeling the Near-Term Risk of Climate Uncertainty: Interdependencies among the U.S. States

    Science.gov (United States)

    Lowry, T. S.; Backus, G.; Warren, D.

    2010-12-01

    Decisions made to address climate change must start with an understanding of the risk of an uncertain future to human systems, which in turn means understanding both the consequence and the probability of a climate-induced impact occurring. In other words, addressing climate change is an exercise in risk-informed policy making, which implies that there is no single correct answer or even a way to be certain about a single answer; the uncertainty in future climate conditions will always be present and must be taken as a working condition for decision making. In order to better understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions, this study estimates the impacts from responses to climate change on U.S. state- and national-level economic activity by employing a risk-assessment methodology for evaluating uncertain future climatic conditions. Using the results from the Intergovernmental Panel on Climate Change’s (IPCC) Fourth Assessment Report (AR4) as a proxy for climate uncertainty, changes in hydrology over the next 40 years were mapped and then modeled to determine the physical consequences on economic activity and to perform a detailed 70-industry analysis of the economic impacts among the interacting lower-48 states. The analysis determines industry-level effects, employment impacts at the state level, interstate population migration, consequences to personal income, and ramifications for the U.S. trade balance. The conclusions show that the average risk of damage to the U.S. economy from climate change is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs. Further analysis shows that an increase in uncertainty raises this risk. This paper presents the methodology behind the approach, a summary of the underlying models, and the path forward for improving the approach.

  3. Noninfectious Fever in the Near-Term Pregnant Rat Induces Fetal Brain Inflammation: A Model for the Consequences of Epidural-Associated Maternal Fever.

    Science.gov (United States)

    Segal, Scott; Pancaro, Carlo; Bonney, Iwona; Marchand, James E

    2017-12-01

    Women laboring with epidural analgesia experience fever much more frequently than women who choose other forms of analgesia, and maternal intrapartum fever is associated with numerous adverse consequences, including brain injury in the fetus. We developed a model of noninfectious inflammatory fever in the near-term pregnant rat to simulate the pathophysiology of epidural-associated fever and hypothesized that it would produce fetal brain inflammation. Twenty-four pregnant Sprague-Dawley rats were studied at 20 days gestation (term: 22 days). Dams were treated by injection of rat recombinant interleukin (IL)-6 or vehicle at 90-minute intervals, and temperature was monitored every 30 minutes. Eight hours after the first treatment, dams were delivered of fetuses and then killed. Maternal IL-6 was measured at delivery. Fetal brains (n = 24) were processed and stained for ED-1/CD68, a marker for activated microglia, and cell counts in the lateral septal and hippocampal brain regions were measured. Fetal brains were also stained for cyclooxygenase-2 (COX-2), a downstream marker of neuroinflammation. Eight fetal brains were further analyzed for quantitative forebrain COX-2 by Western blotting compared to a β-actin standard. Maternal temperature and IL-6 levels were compared between treatments, as were cell counts, COX-2 staining, and COX-2 levels by Mann-Whitney U test, repeated-measures analysis of variance, or Fisher exact test, as appropriate. Injection of rat IL-6 at 90-minute intervals produced an elevation of maternal temperature compared to vehicle. Noninfectious fever is inducible in the near-term pregnant rat by injection of IL-6 at levels comparable to those observed during human epidural labor analgesia. Maternal IL-6 injection causes neuroinflammation in the fetus.
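    The abstract names the Mann-Whitney U test among its between-group comparisons. A minimal sketch with SciPy, using synthetic temperature readings (not the study's data) for the two treatment groups:

```python
# Nonparametric two-group comparison as named in the abstract: a two-sided
# Mann-Whitney U test. The temperature values below are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

il6_temps = np.array([38.1, 38.4, 38.6, 38.3, 38.8, 38.5])      # degrees C, IL-6 group
vehicle_temps = np.array([37.2, 37.4, 37.1, 37.5, 37.3, 37.2])  # degrees C, vehicle group

# Tests whether the two samples come from the same distribution.
stat, p_value = mannwhitneyu(il6_temps, vehicle_temps, alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```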

  4. Long-term functional outcomes and correlation with regional brain connectivity by MRI diffusion tractography metrics in a near-term rabbit model of intrauterine growth restriction.

    Directory of Open Access Journals (Sweden)

    Miriam Illa

    BACKGROUND: Intrauterine growth restriction (IUGR) affects 5-10% of all newborns and is associated with increased risk of memory, attention and anxiety problems in late childhood and adolescence. The neurostructural correlates of long-term abnormal neurodevelopment associated with IUGR are unknown. Thus, the aim of this study was to provide a comprehensive description of the long-term functional and neurostructural correlates of abnormal neurodevelopment associated with IUGR in a near-term rabbit model (delivered at 30 days of gestation) and to evaluate the development of quantitative imaging biomarkers of abnormal neurodevelopment based on diffusion magnetic resonance imaging (MRI) parameters and connectivity. METHODOLOGY: At +70 postnatal days, 10 cases and 11 controls were functionally evaluated with the Open Field Behavioral Test, which evaluates anxiety and attention, and the Object Recognition Task, which evaluates short-term memory and attention. Subsequently, brains were collected, fixed and a high-resolution MRI was performed. Differences in diffusion parameters were analyzed by means of voxel-based and connectivity analysis, measuring the number of fibers reconstructed within anxiety, attention and short-term memory networks over the total fibers. PRINCIPAL FINDINGS: The results of the neurobehavioral and cognitive assessment showed a significantly higher degree of anxiety, attention and memory problems in cases compared to controls in most of the variables explored. Voxel-based analysis (VBA) revealed significant differences between groups in multiple brain regions, mainly in grey matter structures, whereas connectivity analysis demonstrated lower ratios of fibers within the networks in cases, reaching statistical significance only in the left hemisphere for both networks. Finally, VBA and connectivity results were also correlated with functional outcome. CONCLUSIONS: The rabbit model used reproduced long-term functional impairments and their

  5. Formal modeling of virtual machines

    Science.gov (United States)

    Cremers, A. B.; Hibbard, T. N.

    1978-01-01

    Systematic software design can be based on the development of a 'hierarchy of virtual machines', each representing a 'level of abstraction' of the design process. The reported investigation presents the concept of 'data space' as a formal model for virtual machines. The presented model of a data space combines the notions of data type and mathematical machine to express the close interaction between data and control structures which takes place in a virtual machine. One of the main objectives of the investigation is to show that control-independent data type implementation is only of limited usefulness as an isolated tool of program development, and that the representation of data is generally dictated by the control context of a virtual machine. As a second objective, a better understanding is to be developed of virtual machine state structures than was heretofore provided by the view of the state space as a Cartesian product.

  6. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.
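    Infer.NET itself is a .NET library, so a faithful example would be C#; as a language-neutral sketch of the model-based idea (write down a probabilistic model, let generic inference machinery do the rest), here is a tiny Bayesian model solved by grid approximation in Python. The model and data are illustrative, not from the paper:

```python
# Model-based approach in miniature: specify the generative model, then run
# a generic inference routine. Model: theta ~ Uniform(0,1), obs ~ Bernoulli(theta).
import numpy as np

observations = [1, 0, 1, 1, 0, 1, 1, 1]           # hypothetical data

theta = np.linspace(0.001, 0.999, 999)            # grid over the parameter
prior = np.ones_like(theta)                       # flat prior
k, n = sum(observations), len(observations)
likelihood = theta**k * (1 - theta)**(n - k)      # Bernoulli likelihood
posterior = prior * likelihood
posterior /= posterior.sum()                      # discrete normalization over the grid

print("posterior mean of theta:", (theta * posterior).sum())
```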

  7. Iterative near-term ecological forecasting: Needs, opportunities, and challenges.

    Science.gov (United States)

    Dietze, Michael C; Fox, Andrew; Beck-Johnson, Lindsay M; Betancourt, Julio L; Hooten, Mevin B; Jarnevich, Catherine S; Keitt, Timothy H; Kenney, Melissa A; Laney, Christine M; Larsen, Laurel G; Loescher, Henry W; Lunch, Claire K; Pijanowski, Bryan C; Randerson, James T; Read, Emily K; Tredennick, Andrew T; Vargas, Rodrigo; Weathers, Kathleen C; White, Ethan P

    2018-02-13

    Two foundational questions about sustainability are "How are ecosystems and the services they provide going to change in the future?" and "How do human decisions affect these trajectories?" Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  8. Iterative near-term ecological forecasting: Needs, opportunities, and challenges

    Science.gov (United States)

    Dietze, Michael C.; Fox, Andrew; Beck-Johnson, Lindsay; Betancourt, Julio L.; Hooten, Mevin B.; Jarnevich, Catherine S.; Keitt, Timothy H.; Kenney, Melissa A.; Laney, Christine M.; Larsen, Laurel G.; Loescher, Henry W.; Lunch, Claire K.; Pijanowski, Bryan; Randerson, James T.; Read, Emily; Tredennick, Andrew T.; Vargas, Rodrigo; Weathers, Kathleen C.; White, Ethan P.

    2018-01-01

    Two foundational questions about sustainability are “How are ecosystems and the services they provide going to change in the future?” and “How do human decisions affect these trajectories?” Answering these questions requires an ability to forecast ecological processes. Unfortunately, most ecological forecasts focus on centennial-scale climate responses, therefore neither meeting the needs of near-term (daily to decadal) environmental decision-making nor allowing comparison of specific, quantitative predictions to new observational data, one of the strongest tests of scientific theory. Near-term forecasts provide the opportunity to iteratively cycle between performing analyses and updating predictions in light of new evidence. This iterative process of gaining feedback, building experience, and correcting models and methods is critical for improving forecasts. Iterative, near-term forecasting will accelerate ecological research, make it more relevant to society, and inform sustainable decision-making under high uncertainty and adaptive management. Here, we identify the immediate scientific and societal needs, opportunities, and challenges for iterative near-term ecological forecasting. Over the past decade, data volume, variety, and accessibility have greatly increased, but challenges remain in interoperability, latency, and uncertainty quantification. Similarly, ecologists have made considerable advances in applying computational, informatic, and statistical methods, but opportunities exist for improving forecast-specific theory, methods, and cyberinfrastructure. Effective forecasting will also require changes in scientific training, culture, and institutions. The need to start forecasting is now; the time for making ecology more predictive is here, and learning by doing is the fastest route to drive the science forward.

  9. Short-acting sulfonamides near term and neonatal jaundice

    DEFF Research Database (Denmark)

    Klarskov, Pia; Andersen, Jon Trærup; Jimenez-Solem, Espen

    2013-01-01

    To investigate the association between maternal use of sulfamethizole near term and the risk of neonatal jaundice.

  10. Mathematical modeling and analysis of WEDM machining ...

    Indian Academy of Sciences (India)

    Mathematical modeling and analysis ... The present work is mainly focused on the analysis and optimization of the WEDM process parameters of Inconel 625. The four machining ... Response surface methodology was used to develop the experimental models. The parametric ...

  11. Simulation tools for electrical machines modelling: teaching and ...

    African Journals Online (AJOL)

    used to model non-linearities in a synchronous machine. The machine is modeled in ... Those involved in engineering undergraduate education on electrical machines will find the script very useful in terms of ... Keywords: Asynchronous machine; MATLAB scripts; engineering education; skin-effect; saturation effect; dynamic ...

  12. Prototype-based models in machine learning

    NARCIS (Netherlands)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets.

  13. Understanding and modelling Man-Machine Interaction

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1991-01-01

    This paper gives an overview of the current state of the art in man-machine system interaction studies, focusing on the problems derived from highly automated working environments and the role of humans in the control loop. In particular, it is argued that there is a need for sound approaches to the design and analysis of Man-Machine Interaction (MMI), which stem from the contribution of three expertises in interfacing domains, namely engineering, computer science and psychology: engineering for understanding and modelling plants and their material and energy conservation principles; psychology for understanding and modelling humans and their cognitive behaviours; computer science for converting models into sound simulations running in appropriate computer architectures. (author)

  14. Understanding and modelling man-machine interaction

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    1996-01-01

    This paper gives an overview of the current state of the art in man-machine system interaction studies, focusing on the problems derived from highly automated working environments and the role of humans in the control loop. In particular, it is argued that there is a need for sound approaches to the design and analysis of man-machine interaction (MMI), which stem from the contribution of three expertises in interfacing domains, namely engineering, computer science and psychology: engineering for understanding and modelling plants and their material and energy conservation principles; psychology for understanding and modelling humans and their cognitive behaviours; computer science for converting models into sound simulations running in appropriate computer architectures. (orig.)

  15. Electromechanical model of machine for vibroabrasive treatment of machine parts

    OpenAIRE

    Gorbatiyk, Ruslan; Palamarchuk, Igor; Chubyk, Roman

    2015-01-01

    Many finishing and cleaning operations (above all the removal of burrs and the rounding and processing of edges) were until recently carried out by hand; they were hardly amenable to automation and became a serious obstacle to further growth in labor productivity. Machines with a free kinematic connection between the tool and the treated parts provide contact over the whole surface of the machine parts, which allows us to effectively treat bo...

  16. Modelling and Identification of Induction Machines

    Energy Technology Data Exchange (ETDEWEB)

    Nestli, T.F.

    1995-12-01

    To obtain high quality control of the induction machine, field orientation is probably the most frequently used control strategy. Using this strategy requires that one of the flux space vectors be known. Since this cannot be measured, many predictor models for calculation of the rotor flux space vector in real time have been developed. This doctoral thesis presents an analysis method for evaluating and comparing predictor models for flux calculation with respect to sensitivity to parameter deviations and measurement errors and with respect to dynamics. It is concluded that the best predictor models in the minimum sensitivity sense should have properties similar to the current and voltage models at lower and higher frequencies, respectively. To further reduce flux estimation errors, a new saturation model for the inverse Γ-formulation of the induction machine is developed. It is shown that the leakage reactance varies mainly with stator current, and the magnetizing reactance depends both on stator flux and rotor current magnitudes, i.e., both on magnetization and load. The reactance models are verified by experiments. An off-line identification algorithm is developed to identify the parameters of the reactance model and initial values for the stator and rotor resistances. The algorithm is verified in laboratory experiments, which also demonstrate the temperature dependence of the resistances. 36 refs., 49 figs., 6 tabs.

  17. Towards a generalized energy prediction model for machine tools.

    Science.gov (United States)

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H; Dornfeld, David A; Helu, Moneer; Rachuri, Sudarsan

    2017-04-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process.
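    A minimal sketch of the abstract's core technique, assuming scikit-learn: Gaussian Process regression mapping process parameters to energy use, with the predictive standard deviation providing the uncertainty intervals the abstract mentions. Feature names and values are hypothetical, not the paper's dataset:

```python
# GP regression for machine-tool energy prediction (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Columns: [feed rate (mm/min), spindle speed (rpm), depth of cut (mm)]
X = np.array([[200, 1000, 0.5],
              [300, 1500, 1.0],
              [400, 2000, 1.5],
              [500, 2500, 2.0]], dtype=float)
y = np.array([1.8, 2.6, 3.7, 5.1])  # energy per operation (kJ), synthetic

kernel = RBF(length_scale=[100.0, 500.0, 0.5]) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# The GP returns a predictive mean and standard deviation for a new operation.
mean, std = gp.predict(np.array([[350, 1800, 1.2]]), return_std=True)
print(f"predicted energy: {mean[0]:.2f} kJ +/- {2 * std[0]:.2f} (2 sigma)")
```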

  18. Modeling software with finite state machines a practical approach

    CERN Document Server

    Wagner, Ferdinand; Wagner, Thomas; Wolstenholme, Peter

    2006-01-01

    Modeling Software with Finite State Machines: A Practical Approach explains how to apply finite state machines to software development. It provides a critical analysis of using finite state machines as a foundation for executable specifications to reduce software development effort and improve quality. This book discusses the design of a state machine and of a system of state machines. It also presents a detailed analysis of development issues relating to behavior modeling with design examples and design rules for using finite state machines. This volume describes a coherent and well-tested fr
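    A small sketch of the book's subject: a finite state machine expressed as an explicit transition table, the form that makes specifications directly executable. The states and events below are illustrative, not taken from the book:

```python
# A finite state machine as a transition table: {(state, event): next_state}.
class StateMachine:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions

    def handle(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = self.transitions[key]
        return self.state

# A toy door controller.
door = StateMachine("closed", {
    ("closed", "open"): "opened",
    ("opened", "close"): "closed",
    ("closed", "lock"): "locked",
    ("locked", "unlock"): "closed",
})
print(door.handle("lock"))    # -> locked
print(door.handle("unlock"))  # -> closed
```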

  19. VIRTUAL MODELING OF A NUMERICAL CONTROL MACHINE TOOL USED FOR COMPLEX MACHINING OPERATIONS

    Directory of Open Access Journals (Sweden)

    POPESCU Adrian

    2015-11-01

    This paper presents the 3D virtual model of the numerically controlled machine tool Modustar 100 in terms of machine elements. This is a CNC machine of modular construction, with all components allowing assembly in various configurations. The paper focuses on the design of the subassemblies specific to the numerically controlled axes by means of CATIA v5, comprising the drive kinematic chains of the different translation modules that provide motion along the X, Y and Z axes. Machine tool development for high-speed and highly precise cutting demands the employment of advanced simulation techniques, which is reflected in the total development cost of the machine.

  20. Managing the Near Term Functions of Change in Medical Units.

    Science.gov (United States)

    1986-06-06

    Army Command and General Staff College, Fort Leavenworth, KS; R. G. Brueland.

  1. Screening for Prediabetes Using Machine Learning Models

    Directory of Open Access Journals (Sweden)

    Soo Beom Choi

    2014-01-01

    The global prevalence of diabetes is rapidly increasing. Studies support the necessity of screening and interventions for prediabetes, which could result in serious complications and diabetes. This study aimed at developing an intelligence-based screening model for prediabetes. Data from the Korean National Health and Nutrition Examination Survey (KNHANES) were used, excluding subjects with diabetes. The KNHANES 2010 data (n=4685) were used for training and internal validation, while data from KNHANES 2011 (n=4566) were used for external validation. We developed two models to screen for prediabetes using an artificial neural network (ANN) and support vector machine (SVM) and performed a systematic evaluation of the models using internal and external validation. We compared the performance of our models with that of a screening score model based on logistic regression analysis for prediabetes that had been developed previously. The SVM model showed an area under the curve of 0.731 on the external dataset, which is higher than those of the ANN model (0.729) and the screening score model (0.712). The prescreening methods developed in this study performed better than the previously developed screening score model and may be a more effective method for prediabetes screening.
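    A hedged sketch of the screening setup, assuming scikit-learn: train an SVM classifier on one survey wave and compute AUC on another (external validation). The data here are synthetic; the feature names are only stand-ins for survey variables:

```python
# SVM screening model with external-validation AUC (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical features: [age, BMI, waist circumference, systolic BP]
X_train = rng.normal(size=(500, 4))
y_train = (X_train[:, 1] + 0.5 * X_train[:, 3] + rng.normal(size=500) > 0).astype(int)
X_test = rng.normal(size=(300, 4))   # stands in for the later survey wave
y_test = (X_test[:, 1] + 0.5 * X_test[:, 3] + rng.normal(size=300) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"external-validation AUC: {auc:.3f}")
```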

  2. Thermal models of electric machines with dynamic workloads

    Directory of Open Access Journals (Sweden)

    Christian Pohlandt

    2015-07-01

    Electric powertrains are increasingly used in off-highway machines because of their easy controllability and excellent overall efficiency. The main goals are increasing the energy efficiency of the machine and optimizing the work process. The thermal behaviour of electric machines under dynamic workloads is a key design factor for electric powertrains in off-highway machines. This article introduces a methodology to model the thermal behaviour of electric machines. Using a noncausal modelling approach, an electric powertrain is analysed for dynamic workloads. Cause-effect relationships and reasons for increasing temperature are considered, as well as various cooling techniques. The validation of the overall powertrain simulation model against measured field-data workloads provides convincing results for evaluating numerous applications of electric machines in off-highway machines.

  3. Impurity control in near-term tokamak reactors

    International Nuclear Information System (INIS)

    Stacey, W.M. Jr.; Smith, D.L.; Brooks, J.N.

    1976-10-01

    Several methods for reducing impurity contamination in near-term tokamak reactors by modifying the first-wall surface with a low-Z or low-sputter material are examined. A review of the sputtering data and an assessment of the technological feasibility of various wall modification schemes are presented. The power performance of a near-term tokamak reactor is simulated for various first-wall surface materials, with and without a divertor, in order to evaluate the likely effect of plasma contamination associated with these surface materials.

  4. Prototype-based models in machine learning.

    Science.gov (United States)

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2016-01-01

    An overview is given of prototype-based models in machine learning. In this framework, observations, i.e., data, are stored in terms of typical representatives. Together with a suitable measure of similarity, the systems can be employed in the context of unsupervised and supervised analysis of potentially high-dimensional, complex datasets. We discuss basic schemes of competitive vector quantization as well as the so-called neural gas approach and Kohonen's topology-preserving self-organizing map. Supervised learning in prototype systems is exemplified in terms of learning vector quantization. Most frequently, the familiar Euclidean distance serves as a dissimilarity measure. We present extensions of the framework to nonstandard measures and give an introduction to the use of adaptive distances in relevance learning. © 2016 Wiley Periodicals, Inc.
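    A compact sketch of LVQ1, the basic supervised scheme covered by the review: prototypes move toward correctly classified samples and away from misclassified ones. The data, prototype initialization, and learning rate are illustrative:

```python
# LVQ1: nearest prototype is attracted to same-class samples, repelled otherwise.
import numpy as np

def lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=30):
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(((P - xi) ** 2).sum(axis=1))   # nearest prototype (Euclidean)
            sign = 1.0 if proto_labels[j] == yi else -1.0
            P[j] += sign * lr * (xi - P[j])              # attract or repel
    return P

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
protos = lvq1(X, y, prototypes=np.array([[0.5, 0.5], [2.5, 2.5]]),
              proto_labels=np.array([0, 1]))
print("trained prototypes:\n", protos)
```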

  5. Testing and Modeling of Machine Properties in Resistance Welding

    DEFF Research Database (Denmark)

    Wu, Pei

    The objective of this work has been to test and model the machine properties, including the mechanical properties and the electrical properties, in resistance welding. The results are used to simulate the welding process more accurately. The state of the art in testing and modeling machine properties in resistance welding has been described based on a comprehensive literature study. The present thesis has been subdivided into two parts: Part I: Mechanical properties of resistance welding machines. Part II: Electrical properties of resistance welding machines. In part I, the electrode force in the squeeze ... it is lower than the spring force. The work in part I is focused on the dynamic mechanical properties of resistance welding machines. A universal method has been developed to characterize the dynamic mechanical behaviour of C-frame machines. The method is based on a mathematical model, in which three

  6. Experimental force modeling for deformation machining stretching ...

    Indian Academy of Sciences (India)

    Deformation machining is a hybrid process that combines two manufacturing processes—thin structure machining and single-point incremental forming. This process enables the creation of complex structures and geometries, which would be rather difficult or sometimes impossible to manufacture. A comprehensive ...

  7. Mathematical modeling and analysis of WEDM machining ...

    Indian Academy of Sciences (India)

    M P GARG

    discharge machining (WEDM) is the process considered in the present text for machining of Inconel 625 as it can provide an effective solution ... alloy with excellent resistance to oxidation and corrosion over a broad range of conditions ... 100-mm-thick to 30° for a 400-mm-thick work piece can be obtained on the cut surface ...

  8. Experimental force modeling for deformation machining stretching ...

    Indian Academy of Sciences (India)

    ARSHPREET SINGH

    Deformation machining is a hybrid process that combines two manufacturing processes—thin structure machining and ... structures and geometries, which would be rather difficult or sometimes impossible to manufacture. A comprehensive ... sheet metal is deformed locally into plastic stage, enabling creation of complex ...

  9. Simulation Tools for Electrical Machines Modelling: Teaching and ...

    African Journals Online (AJOL)

    Simulation tools are used both for research and teaching to allow a good comprehension of the systems under study before practical implementations. This paper illustrates the way MATLAB is used to model non-linearities in a synchronous machine. The machine is modeled in the rotor reference frame with currents as state ...

  10. Testing and Modeling of Mechanical Characteristics of Resistance Welding Machines

    DEFF Research Database (Denmark)

    Wu, Pei; Zhang, Wenqi; Bay, Niels

    2003-01-01

    The dynamic mechanical response of a resistance welding machine is very important to the weld quality in resistance welding, especially in projection welding when collapse or deformation of the work piece occurs. It is mainly governed by the mechanical parameters of the machine. In this paper, a mathematical model for characterizing the dynamic mechanical responses of the machine and a special test set-up called a breaking test set-up are developed. Based on the model and the test results, the mechanical parameters of the machine are determined, including the equivalent mass, damping coefficient, and stiffness
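    The abstract reduces the machine's dynamic response to an equivalent mass, damping coefficient, and stiffness. The paper's actual model is not reproduced here; as a generic sketch under invented parameter values, a second-order system can be simulated after a sudden force release, as in a breaking test:

```python
# Generic second-order machine model m*x'' + c*x' + k*x = F, integrated with
# explicit Euler after the load is suddenly released. Parameters are invented.
m, c, k = 20.0, 400.0, 2.0e5     # kg, N*s/m, N/m (hypothetical)
F = 0.0                          # force after release
x, v = 0.005, 0.0                # initial deflection (m) and velocity (m/s)
dt, steps = 1e-4, 2000

trajectory = []
for _ in range(steps):
    a = (F - c * v - k * x) / m  # acceleration from the force balance
    v += a * dt
    x += v * dt
    trajectory.append(x)

print(f"displacement after {steps * dt:.2f} s: {trajectory[-1]:.6f} m")
```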

  11. Advanced wind turbine near-term product development. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    None

    1996-01-01

    In 1990 the US Department of Energy initiated the Advanced Wind Turbine (AWT) Program to assist the growth of a viable wind energy industry in the US. This program, which has been managed through the National Renewable Energy Laboratory (NREL) in Golden, Colorado, has been divided into three phases: (1) conceptual design studies, (2) near-term product development, and (3) next-generation product development. The goals of the second phase were to bring into production wind turbines which would meet the cost goal of $0.05/kWh at a site with a mean (Rayleigh) windspeed of 5.8 m/s (13 mph) and a vertical wind shear exponent of 0.14. These machines were to allow a US-based industry to compete domestically with other sources of energy and to provide internationally competitive products. Information is given in the report on design values of peak loads and of fatigue spectra and the results of the design process are summarized in a table. Measured response is compared with the results from mathematical modeling using the ADAMS code and is discussed. Detailed information is presented on the estimated costs of maintenance and on spare parts requirements. A failure modes and effects analysis was carried out and resulted in approximately 50 design changes including the identification of ten previously unidentified failure modes. The performance results of both prototypes are examined and adjusted for air density and for correlation between the anemometer site and the turbine location. The anticipated energy production at the reference site specified by NREL is used to calculate the final cost of energy using the formulas indicated in the Statement of Work. The value obtained is $0.0514/kWh in January 1994 dollars. 71 figs., 30 tabs.

  12. Virtual NC machine model with integrated knowledge data

    International Nuclear Information System (INIS)

    Sidorenko, Sofija; Dukovski, Vladimir

    2002-01-01

    The concept of virtual NC machining was established to provide a virtual product that can be compared with the corresponding designed product, in order to evaluate NC program correctness without real experiments. This concept is applied in the intelligent CAD/CAM system named VIRTUAL MANUFACTURE. This paper presents the first intelligent module, which enables creation of virtual models of existing NC machines and virtual creation of new ones by applying modular composition. Creation of a virtual NC machine is carried out via automatic saving of knowledge data (features of the created NC machine). (Author)

  13. Emission model for mobile machines based on machine sales combined with fuel sales

    Energy Technology Data Exchange (ETDEWEB)

    Hulskotte, J.; Verbeek, R.

    2009-11-15

    This report describes not only the EMMA model but also technical measures for reducing the emissions of mobile machines. Emissions from mobile machines into the ambient air account for a large share of total air pollution.

  14. Discrete Model Reference Adaptive Control System for Automatic Profiling Machine

    Directory of Open Access Journals (Sweden)

    Peng Song

    2012-01-01

    An automatic profiling machine is a motion system with a high degree of parameter variation and a high frequency of transient processes, and it requires accurate real-time control. In this paper, the discrete model reference adaptive control system of an automatic profiling machine is discussed. Firstly, the model of the automatic profiling machine is presented according to the parameters of the DC motor. Then the design of the discrete model reference adaptive control is proposed, and the control rules are proven. The results of simulation show that the adaptive control system has favorable dynamic performance.
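    A hedged, first-order sketch of discrete model reference adaptive control: the plant gain is unknown to the controller, a reference model defines the desired response, and an MIT-rule update adapts a feedforward gain. All numbers are illustrative, not the paper's DC-motor parameters or its proven control rules:

```python
# Discrete MRAC with an MIT-rule gradient update (toy first-order example).
a_p, b_p = 0.9, 0.5       # plant (unknown to controller): y+ = a_p*y + b_p*u
a_m, b_m = 0.7, 0.3       # reference model:               ym+ = a_m*ym + b_m*r
gamma = 0.1               # adaptation gain
theta, y, ym = 0.0, 0.0, 0.0

for k in range(200):
    r = 1.0                           # unit step reference
    u = theta * r                     # adjustable feedforward controller
    y = a_p * y + b_p * u             # plant update
    ym = a_m * ym + b_m * r           # reference model update
    e = y - ym                        # tracking error
    theta -= gamma * e * r            # MIT-rule gradient step

print(f"adapted gain theta = {theta:.3f}, final error = {e:.4f}")
```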

  15. Practical methods for near-term piloted Mars missions

    Science.gov (United States)

    Zubrin, Robert M.; Weaver, David B.

    1993-01-01

    An evaluation is made of ways of using near-term technologies for direct and semidirect manned Mars missions. A notable feature of the present schemes is the in situ propellant production of CH4/O2 and H2O on the Martian surface in order to reduce surface consumable and return propellant requirements. Medium-energy conjunction class trajectories are shown to be optimal for such missions. Attention is given to the backup plans and abort philosophy of these missions. Either the Russian Energia B or U.S. Saturn VII launch vehicles may be used.

  16. Trajectories for a Near Term Mission to the Interstellar Medium

    Science.gov (United States)

    Arora, Nitin; Strange, Nathan; Alkalai, Leon

    2015-01-01

    Trajectories for rapid access to the interstellar medium (ISM) with a Kuiper Belt Object (KBO) flyby, launching between 2022 and 2030, are described. An impulsive-patched-conic broad search algorithm combined with a local optimizer is used for the trajectory computations. Two classes of trajectories, (1) with a powered Jupiter flyby and (2) with a perihelion maneuver, are studied and compared. Planetary flybys combined with leveraging maneuvers reduce launch C3 requirements (by factor of 2 or more) and help satisfy mission-phasing constraints. Low launch C3 combined with leveraging and a perihelion maneuver is found to be enabling for a near-term potential mission to the ISM.

  17. SIMULATION TOOLS FOR ELECTRICAL MACHINES MODELLING ...

    African Journals Online (AJOL)


  18. On the Conditioning of Machine-Learning-Assisted Turbulence Modeling

    Science.gov (United States)

    Wu, Jinlong; Sun, Rui; Wang, Qiqi; Xiao, Heng

    2017-11-01

    Recently, several researchers have demonstrated that machine learning techniques can be used to improve the RANS-modeled Reynolds stress by training on available databases of high-fidelity simulations. However, obtaining an improved mean velocity field remains an unsolved challenge, restricting the predictive capability of current machine-learning-assisted turbulence modeling approaches. In this work we define a condition number to evaluate the model conditioning of data-driven turbulence modeling approaches, and propose a stability-oriented machine learning framework to model Reynolds stress. Two canonical flows, the flow in a square duct and the flow over periodic hills, are investigated to demonstrate the predictive capability of the proposed framework. The satisfactory prediction of the mean velocity field for both flows demonstrates the predictive capability of the proposed framework for machine-learning-assisted turbulence modeling. By demonstrating improved prediction of the mean flow field, the proposed stability-oriented machine learning framework bridges the gap between existing machine-learning-assisted turbulence modeling approaches and the demand for predictive turbulence models in real applications.
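    The paper defines its own condition number for the RANS problem; as a generic linear-algebra analogue of the same diagnostic, the sketch below shows how a poorly conditioned operator amplifies small perturbations of the input (here, the right-hand side of a linear solve):

```python
# Conditioning in miniature: for A u = f, the amplification of a small input
# perturbation is bounded by the condition number of A.
import numpy as np

def relative_amplification(A, f, df):
    """Ratio of relative output change to relative input change for A u = f."""
    u = np.linalg.solve(A, f)
    u_pert = np.linalg.solve(A, f + df)
    return (np.linalg.norm(u_pert - u) / np.linalg.norm(u)) / (
        np.linalg.norm(df) / np.linalg.norm(f))

A_good = np.array([[2.0, 0.0], [0.0, 1.0]])      # cond ~ 2
A_bad = np.array([[1.0, 1.0], [1.0, 1.0001]])    # nearly singular
f = np.array([1.0, 1.0])
df = np.array([1e-4, -1e-4])

for A in (A_good, A_bad):
    print(f"cond = {np.linalg.cond(A):.1e}, "
          f"error amplification = {relative_amplification(A, f, df):.1e}")
```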

  19. Developing Parametric Models for the Assembly of Machine Fixtures for Virtual Multiaxial CNC Machining Centers

    Science.gov (United States)

    Balaykin, A. V.; Bezsonov, K. A.; Nekhoroshev, M. V.; Shulepov, A. P.

    2018-01-01

    This paper dwells upon a variance parameterization method. Variance, or dimensional, parameterization is based on sketching, with various parametric links superimposed on the sketch objects and user-imposed constraints in the form of an equation system that determines the parametric dependencies. This method is fully integrated into a top-down design methodology to enable the creation of multi-variant and flexible fixture assembly models, as all the modeling operations are hierarchically linked in the build tree. In this research the authors consider a parameterization method for machine tooling used in manufacturing parts on multiaxial CNC machining centers in a real manufacturing process. The developed method significantly reduces tooling design time when a part's geometric parameters change. The method can also reduce the time for design and engineering preproduction, in particular for the development of control programs for CNC equipment and control and measuring machines, and can automate the release of design and engineering documentation. Variance parameterization helps to optimize the construction of parts as well as machine tooling using integrated CAE systems. In the framework of this study, the authors demonstrate a comprehensive approach to parametric modeling of machine tooling in the CAD package used in the real manufacturing process of aircraft engines.

  20. A Machine-Checked, Type-Safe Model of Java Concurrency : Language, Virtual Machine, Memory Model, and Verified Compiler

    OpenAIRE

    Lochbihler, Andreas

    2012-01-01

    The Java programming language provides safety and security guarantees such as type safety and its security architecture. They distinguish it from other mainstream programming languages like C and C++. In this work, we develop a machine-checked model of concurrent Java and the Java memory model and investigate the impact of concurrency on these guarantees. From the formal model, we automatically obtain an executable verified compiler to bytecode and a validated virtual machine.

  1. Trustless Machine Learning Contracts; Evaluating and Exchanging Machine Learning Models on the Ethereum Blockchain

    OpenAIRE

    Kurtulmus, A. Besir; Daniel, Kenny

    2018-01-01

    Using blockchain technology, it is possible to create contracts that offer a reward in exchange for a trained machine learning model for a particular data set. This would allow users to train machine learning models for a reward in a trustless manner. The smart contract will use the blockchain to automatically validate the solution, so there would be no debate about whether the solution was correct or not. Users who submit the solutions won't have counterparty risk that they won't get paid fo...

  2. Probabilistic models and machine learning in structural bioinformatics

    DEFF Research Database (Denmark)

    Hamelryck, Thomas

    2009-01-01

    Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis

  3. Probabilistic forecasts of near-term climate change based on a resampling ensemble technique

    OpenAIRE

    Räisänen, J.; Ruokolainen, L.

    2006-01-01

    Probabilistic forecasts of near-term climate change are derived by using a multimodel ensemble of climate change simulations and a simple resampling technique that increases the number of realizations for the possible combination of anthropogenic climate change and internal climate variability. The technique is based on the assumption that the probability distribution of local climate changes is only a function of the all-model mean global average warming. Although this is unlikely to be exac...
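    A schematic sketch of the resampling idea the abstract describes (not the authors' exact procedure): treat each model's local change as a scaled all-model-mean signal plus a model-specific anomaly, then recombine signals and anomalies across members to enlarge the ensemble. All numbers are synthetic placeholders for multimodel output:

```python
# Resampling ensemble: pair every model's scaled global signal with every
# model's local anomaly to obtain n_models**2 pseudo-realizations.
import numpy as np

rng = np.random.default_rng(42)
n_models = 10
global_warming = rng.normal(1.0, 0.2, n_models)   # per-model global-mean signal (K)
local_change = 1.5 * global_warming + rng.normal(0, 0.3, n_models)

# Anomaly = local change minus its expected scaling with global warming.
scaling = np.polyfit(global_warming, local_change, 1)[0]
anomalies = local_change - scaling * global_warming

# Cross every signal with every anomaly.
resampled = (scaling * global_warming[:, None] + anomalies[None, :]).ravel()
print(f"{resampled.size} pseudo-realizations;",
      f"5-95% range: {np.percentile(resampled, [5, 95])}")
```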

  4. Near-term climate mitigation by short-lived forcers

    Science.gov (United States)

    Smith, Steven J.; Mizrahi, Andrew

    2013-01-01

    Emissions reductions focused on anthropogenic climate-forcing agents with relatively short atmospheric lifetimes, such as methane (CH4) and black carbon, have been suggested as a strategy to reduce the rate of climate change over the next several decades. We find that reductions of methane and black carbon would likely have only a modest impact on near-term global climate warming. Even with maximally feasible reductions phased in from 2015 to 2035, global mean temperatures in 2050 would be reduced by 0.16 °C, with a range of 0.04–0.35 °C because of uncertainties in carbonaceous aerosol emissions and aerosol forcing per unit of emissions. The high end of this range is only possible if total historical aerosol forcing is relatively small. More realistic emission reductions would likely provide an even smaller climate benefit. We find that the climate benefit from reductions in short-lived forcing agents are smaller than previously estimated. These near-term climate benefits of targeted reductions in short-lived forcers are not substantially different in magnitude from the benefits from a comprehensive climate policy. PMID:23940357

  5. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus that the calculus of dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
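    A minimal sketch of the algebraic object the paper builds on, generic dual-number arithmetic (the paper's Weil-algebra generalization is beyond this sketch): d = a + b·eps with eps² = 0, so first-order error terms propagate automatically through products, much as small geometric errors compose in a kinematic chain:

```python
# Dual numbers: the eps part tracks first-order (small-error) terms.
class Dual:
    def __init__(self, real, eps):
        self.real, self.eps = real, eps

    def __add__(self, other):
        return Dual(self.real + other.real, self.eps + other.eps)

    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps**2 = 0
        return Dual(self.real * other.real,
                    self.real * other.eps + self.eps * other.real)

    def __repr__(self):
        return f"{self.real} + {self.eps}*eps"

ideal = Dual(2.0, 0.0)          # nominal transformation parameter
erroneous = Dual(3.0, 0.01)     # parameter carrying a small first-order error
print(ideal * erroneous)        # -> 6.0 + 0.02*eps
```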

  6. Static Stiffness Modeling of Parallel Kinematics Machine Tool Joints

    OpenAIRE

    O. K. Akmaev; B. A. Enikeev; A. I. Nigmatullin

    2015-01-01

    The possible variants of an original parallel kinematics machine-tool structure are explored in this article. A new Hooke's universal joint design based on needle roller bearings with the ability to set a preload is proposed. The bearing stiffness modeling is carried out using a variety of methods. Elastic deformation models of a Hooke's joint and a spherical rolling joint have been developed to assess the possibility of using these joints in machine tools with parallel k...

  7. Modeling Grinding Processes as Micro-Machining Operation ...

    African Journals Online (AJOL)

    A computational model for the surface grinding process as a micro-machining operation has been developed. In this model, grinding forces are made up of the chip formation force and the sliding force. Mathematical expressions for modeling the tangential grinding force and the normal grinding force were obtained. The model was ...

  8. Monitoring Vibration of A Model of Rotating Machine

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2012-03-01

    Mechanical movement or motion of a rotating machine normally causes additional vibration. A vibration sensing device must be added to constantly monitor the vibration level of a system having a rotating machine, since vibration frequency and amplitude cannot be measured quantitatively by sight or touch alone. If the vibration signals from the machine contain a lot of noise, there is a possibility that the rotating machine has defects that can lead to failure. In this experimental research project, a vibration structure is constructed as a scaled model to simulate vibration and to monitor system performance, in terms of vibration level, for rotation under balanced and unbalanced conditions. In this scaled model, the output signal of the vibration sensor is processed in a microcontroller and then transferred to a computer via a serial communication medium, and plotted on the screen with data plotter software developed in C. The signal waveform of the vibration is displayed to allow further analysis. A vibration level limit can be set in the microcontroller to allow shutdown of the rotating machine in case of excessive vibration, protecting it from further damage. Experimental results agree with theory: an unbalanced condition on a rotating machine leads to a larger vibration amplitude than a balanced condition. Mass can be added or removed for balancing to obtain a lower vibration level.
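    The abstract's monitor runs in C on a microcontroller; as a hedged sketch in Python (for consistency with the other examples here), the same idea is to inspect the FFT amplitude at the rotation frequency and trip a shutdown flag above a threshold. The signal, frequencies, and limit are invented:

```python
# Vibration monitoring sketch: unbalance shows up as a strong spectral
# component at the rotation frequency.
import numpy as np

rng = np.random.default_rng(5)
fs, duration = 1000.0, 2.0                     # sample rate (Hz), seconds
t = np.arange(0, duration, 1 / fs)
rotation_hz = 25.0
signal = 0.8 * np.sin(2 * np.pi * rotation_hz * t) + 0.05 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size   # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, 1 / fs)

amp_at_rotation = spectrum[np.argmin(np.abs(freqs - rotation_hz))]
LIMIT = 0.5                                    # arbitrary protection threshold
print(f"amplitude at {rotation_hz} Hz: {amp_at_rotation:.2f}",
      "-> SHUTDOWN" if amp_at_rotation > LIMIT else "-> OK")
```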

  9. Near-term electric vehicle program: Phase I, final report

    Energy Technology Data Exchange (ETDEWEB)

    Rowlett, B. H.; Murry, R.

    1977-08-01

    A final report is given for an Energy Research and Development Administration effort aimed at a preliminary design of an energy-efficient electric commuter car. An electric-powered passenger vehicle using a regenerative power system was designed to meet the near-term ERDA electric automobile goals. The program objectives were to (1) study the parameters that affect vehicle performance, range, and cost; (2) design an entirely new electric vehicle that meets performance and economic requirements; and (3) define a program to develop this vehicle design for production in the early 1980's. The design and performance features of the preliminary (baseline) electric-powered passenger vehicle design are described, including the baseline power system, system performance, economic analysis, reliability and safety, alternate designs and options, development plan, and conclusions and recommendations. All aspects of the baseline design were defined in sufficient detail to verify performance expectations and system feasibility.

  10. A friction model for free-machining steels and its applicability to machinability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, K. [Ibaraki Univ., Hitachi (Japan). Dept. of Mech. Eng.; Kubo, A. [Kitami Inst. of Tech. (Japan). Dept. of Mechanical Engineering; Childs, T.H.C. [Leeds Univ. (United Kingdom). Dept. of Mechanical Engineering

    2001-07-01

    The present paper proposes an empirical model describing friction behaviour at the tool-chip interface in machining free-machining steels. Split tool dynamometry is employed to measure stress distribution on the tool rake face during machining various resulphurised steels. The role of free-machining additives can be expressed as decreases in the shear flow stress of the chip material at the tool-chip interface. The decreasing ratio compared to a reference steel depends on the coverage and the shear stress of the additives in the real area of contact. The empirical equation derived on the basis of adhesion theory has a form similar to that for the reference steel, covering the features observed in the experiment. Using the friction characteristics thus obtained, a finite element-based analysis differentiates the role of free-machining additives in the cutting phenomena, which include chip formation, cutting force and cutting temperature. The simulation results are found to be in reasonable agreement with experiments. (orig.)

  11. From Points to Forecasts: Predicting Invasive Species Habitat Suitability in the Near Term

    Directory of Open Access Journals (Sweden)

    Tracy R. Holcombe

    2010-05-01

    We used near-term climate scenarios for the continental United States to model 12 invasive plant species. We created three potential habitat suitability models for each species using maximum entropy modeling: (1) current; (2) 2020; and (3) 2035. Area under the curve values for the models ranged from 0.92 to 0.70, with 10 of the 12 being above 0.83, suggesting strong and predictable species-environment matching. The change in area between the current potential habitat and 2035 ranged from a potential habitat loss of about 217,000 km2 to a potential habitat gain of about 133,000 km2.
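    MaxEnt itself is specialized presence-only software; as a loose, clearly labeled stand-in, the sketch below fits a logistic model separating presence points from random background points and reports AUC, the evaluation statistic the abstract quotes. All data are synthetic:

```python
# Presence/background habitat-suitability sketch (a common logistic proxy
# for MaxEnt-style modeling; not the authors' workflow).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
# Hypothetical climate covariates: [annual mean temp (C), annual precip (mm)]
presence = rng.normal([15.0, 800.0], [2.0, 100.0], size=(200, 2))
background = rng.uniform([0.0, 200.0], [30.0, 1600.0], size=(1000, 2))

X = np.vstack([presence, background])
y = np.array([1] * len(presence) + [0] * len(background))

model = LogisticRegression(max_iter=1000).fit(X, y)
suitability = model.predict_proba(X)[:, 1]     # habitat-suitability score
print(f"AUC on training data: {roc_auc_score(y, suitability):.2f}")
```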

  12. An abstract machine model of dynamic module replacement

    OpenAIRE

    Walton, Chris; Kırlı, Dilsun; Gilmore, Stephen

    2000-01-01

    In this paper we define an abstract machine model for the mλ typed intermediate language. This abstract machine is used to give a formal description of the operation of run-time module replacement for the programming language Dynamic ML. The essential technical device which we employ for module replacement is a modification of two-space copying garbage collection. We show how the operation of module replacement could be applied to other garbage-collected languages such as Java.

  13. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    Science.gov (United States)

    Saleem, A.; Salah, M.; Ahmed, N.; Silberschmidt, V. V.

    2013-07-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finishing in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are being used to vibrate the cutting tip while machining at predetermined amplitude and frequency. However, modelling and simulation of these transducers is a tedious and difficult task. This is due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is accomplished by aggregating the basic components' model. System parameters are identified using Finite Element technique which then has been used to simulate the system in Matlab/SIMULINK. Various operation conditions are tested and performed to demonstrate the system performance.

  14. Component based modelling of piezoelectric ultrasonic actuators for machining applications

    International Nuclear Information System (INIS)

    Saleem, A; Ahmed, N; Salah, M; Silberschmidt, V V

    2013-01-01

    Ultrasonically Assisted Machining (UAM) is an emerging technology that has been utilized to improve the surface finishing in machining processes such as turning, milling, and drilling. In this context, piezoelectric ultrasonic transducers are being used to vibrate the cutting tip while machining at predetermined amplitude and frequency. However, modelling and simulation of these transducers is a tedious and difficult task. This is due to the inherent nonlinearities associated with smart materials. Therefore, this paper presents a component-based model of ultrasonic transducers that mimics the nonlinear behaviour of such a system. The system is decomposed into components, a mathematical model of each component is created, and the whole system model is accomplished by aggregating the basic components' model. System parameters are identified using Finite Element technique which then has been used to simulate the system in Matlab/SIMULINK. Various operation conditions are tested and performed to demonstrate the system performance

  15. Learning About Climate and Atmospheric Models Through Machine Learning

    Science.gov (United States)

    Lucas, D. D.

    2017-12-01

    From the analysis of ensemble variability to improving simulation performance, machine learning algorithms can play a powerful role in understanding the behavior of atmospheric and climate models. To learn about model behavior, we create training and testing data sets through ensemble techniques that sample different model configurations and values of input parameters, and then use supervised machine learning to map the relationships between the inputs and outputs. Following this procedure, we have used support vector machines, random forests, gradient boosting and other methods to investigate a variety of atmospheric and climate model phenomena. We have used machine learning to predict simulation crashes, estimate the probability density function of climate sensitivity, optimize simulations of the Madden Julian oscillation, assess the impacts of weather and emissions uncertainty on atmospheric dispersion, and quantify the effects of model resolution changes on precipitation. This presentation highlights recent examples of our applications of machine learning to improve the understanding of climate and atmospheric models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
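    The procedure described above, sampling model configurations in an ensemble and then mapping inputs to outputs with supervised learning, can be sketched as follows. The "climate model" here is a stand-in function and every value is synthetic; in practice each row would be one ensemble member.

```python
# Perturbed-parameter ensemble -> supervised learner, per the record above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
params = rng.uniform(0.0, 1.0, size=(500, 4))     # 4 uncertain parameters
def climate_model(p):                             # surrogate for a real run
    return 2.0 * p[:, 0] + np.sin(3 * p[:, 1]) + p[:, 2] * p[:, 3]
output = climate_model(params)                    # e.g., mean precipitation

X_tr, X_te, y_tr, y_te = train_test_split(params, output, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out members:", round(rf.score(X_te, y_te), 3))
print("parameter importances:", rf.feature_importances_.round(2))
```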

  16. Vector machine techniques for modeling of seismic liquefaction data

    Directory of Open Access Journals (Sweden)

    Pijush Samui

    2014-06-01

    Full Text Available This article employs three soft computing techniques, the Support Vector Machine (SVM), the Least Squares Support Vector Machine (LSSVM), and the Relevance Vector Machine (RVM), for the prediction of liquefaction susceptibility of soil. SVM and LSSVM are based on the structural risk minimization (SRM) principle, which seeks to minimize an upper bound of the generalization error consisting of the sum of the training error and a confidence interval. RVM is a sparse Bayesian kernel machine. SVM, LSSVM and RVM have been used as classification tools. The developed SVM, LSSVM and RVM give equations for the prediction of liquefaction susceptibility of soil. A comparative study has been carried out between the developed SVM, LSSVM and RVM models. The results from this article indicate that the developed SVM gives the best performance for prediction of liquefaction susceptibility of soil.
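    A hedged sketch of an SVM classifier for liquefaction susceptibility, in the spirit of the record above, is shown below. The features (corrected SPT blow count, cyclic stress ratio) and the synthetic labeling rule are illustrative assumptions, not the article's data.

```python
# SVM classification of liquefaction susceptibility (illustrative data).
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 300
spt = rng.uniform(2, 40, n)        # corrected SPT blow count (assumed feature)
csr = rng.uniform(0.05, 0.5, n)    # cyclic stress ratio (assumed feature)
liquefied = (csr > 0.012 * spt + rng.normal(0, 0.03, n)).astype(int)

X = np.column_stack([spt, csr])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X, liquefied)
print("training accuracy:", round(clf.score(X, liquefied), 2))
# Susceptibility of a hypothetical new site: N = 15 blows, CSR = 0.25
print("liquefies?", bool(clf.predict([[15.0, 0.25]])[0]))
```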

  17. Twin support vector machines models, extensions and applications

    CERN Document Server

    Jayadeva; Chandra, Suresh

    2017-01-01

    This book provides a systematic and focused study of the various aspects of twin support vector machines (TWSVM) and related developments for classification and regression. In addition to presenting most of the basic models of TWSVM and twin support vector regression (TWSVR) available in the literature, it also discusses the important and challenging applications of this new machine learning methodology. A chapter on “Additional Topics” has been included to discuss kernel optimization and support tensor machine topics, which are comparatively new but have great potential in applications. It is primarily written for graduate students and researchers in the area of machine learning and related topics in computer science, mathematics, electrical engineering, management science and finance.

  18. Assessing Implicit Knowledge in BIM Models with Machine Learning

    DEFF Research Database (Denmark)

    Krijnen, Thomas; Tamke, Martin

    2015-01-01

    The promise which comes along with Building Information Models is that they are information rich, machine readable and represent the insights of multiple building disciplines within single or linked models. However, this knowledge has to be stated explicitly in order to be understood. Trained architects and engineers are able to deduce non-explicitly stated information, which is often the core of the transported architectural information. This paper investigates how machine learning approaches allow a computational system to deduce implicit knowledge from a set of BIM models.

  19. [Model transfer method based on support vector machine].

    Science.gov (United States)

    Xiong, Yu-hong; Wen, Zhi-yu; Liang, Yu-qian; Chen, Qin; Zhang, Bo; Liu, Yu; Xiang, Xian-yi

    2007-01-01

    Model transfer is a basic method for making spectrometer data universal and comparable by seeking a mathematical transformation relation among different spectrometers. Because nonlinear effects and small calibration sample sets are common in practice, it is important to solve the model transfer problem under these conditions. This paper summarizes support vector machine theory, puts forward a model transfer method based on support vector machines and piecewise direct standardization, and uses computer simulation to give an example that explains the method and compares it with an artificial neural network.

  20. Comparative study of Moore and Mealy machine models adaptation ...

    African Journals Online (AJOL)

    Information and Communications Technology has influenced the need for automated machines that can carry out important production procedures and, automata models are among the computational models used in design and construction of industrial processes. The production process of the popular African Black Soap ...
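    The distinction between the two automaton models named in this record can be made concrete: a Moore machine's output depends only on the current state, while a Mealy machine's output depends on the current state and the current input. A minimal parity-checker sketch (illustrative, not the soap-production model itself):

```python
# Moore vs. Mealy machines on the same two-state parity transition table.
def run_moore(inputs, transitions, outputs, state="even"):
    trace = [outputs[state]]                    # Moore: output tied to state
    for bit in inputs:
        state = transitions[(state, bit)]
        trace.append(outputs[state])
    return trace

def run_mealy(inputs, transitions, state="even"):
    trace = []
    for bit in inputs:
        state, out = transitions[(state, bit)]  # Mealy: output tied to edge
        trace.append(out)
    return trace

t = {("even", 0): "even", ("even", 1): "odd",
     ("odd", 0): "odd", ("odd", 1): "even"}
print(run_moore([1, 0, 1], t, {"even": "E", "odd": "O"}))   # ['E','O','O','E']
print(run_mealy([1, 0, 1], {k: (v, v[0].upper()) for k, v in t.items()}))
```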

  1. Comparative study of Moore and Mealy machine models adaptation

    African Journals Online (AJOL)

    Information and Communications Technology has influenced the need for automated machines that can carry out important production procedures and, automata models are among the computational models used in design and construction of industrial processes. The production process of the popular African Black Soap ...

  2. Rover/NERVA-derived near-term nuclear propulsion

    Science.gov (United States)

    FY-92 accomplishments centered on conceptual design and analyses for 25, 50, and 75 K engines, with emphasis on the 50 K engine. During the first period of performance, flow and energy balances were prepared for each of these configurations and thrust-to-weight values were estimated. A review of fuel technology and key data from the Rover/NERVA program established a baseline for proven reactor performance and areas of enhancement to meet near-term goals. Studies were performed of the criticality and temperature profiles for probable fuel and moderator loadings for the three engine sizes, with a more detailed analysis of the 50 K size. During the second period of performance, analyses of the 50 K engine continued. A chamber/nozzle contour was selected, and heat transfer and fatigue analyses were performed for likely construction materials. Reactor analyses were performed to determine component radiation heating rates, reactor radiation fields, water immersion poisoning requirements, temperature limits for restartability, and a tie-tube thermal analysis. Finally, a brief assessment of key enabling technologies was made, with a view toward identifying development issues and the critical path toward achieving engine qualification within 10 years.

  3. Status and near-term plans for DIII-D

    International Nuclear Information System (INIS)

    Davis, L.G.; Callis, R.W.; Luxon, J.L.; Stambaugh, R.D.

    1987-10-01

    The DIII-D tokamak at GA Technologies began plasma operation in February of 1986 and is dedicated to the study of highly non-circular plasmas. High beta operation with enhanced energy confinement is paramount among the goals of the DIII-D research program. Commissioning of the device and facility has verified the design capability, including coil and vessel loading, volt-second consumption, bakeout temperature, vessel armor, and neutral beamline thermal integrity and control systems performance. Initial experimental results demonstrate that DIII-D is capable of attaining high confinement (H-mode) discharges in a divertor configuration using modest neutral beam heating or ECH. Record values of I_p a B_T have been achieved with ohmic heating as a first step toward operation at high values of toroidal beta, and record values of beta have been achieved using neutral beam heating. This paper summarizes results to date and gives the near-term plans for the facility. 13 refs., 6 figs., 1 tab

  4. A Multiple Model Prediction Algorithm for CNC Machine Wear PHM

    Directory of Open Access Journals (Sweden)

    Huimin Chen

    2011-01-01

    Full Text Available The 2010 PHM data challenge focuses on the remaining useful life (RUL) estimation for cutters of a high speed CNC milling machine using measurements from dynamometer, accelerometer, and acoustic emission sensors. We present a multiple model approach for wear depth estimation of milling machine cutters using the provided data. The feature selection, initial wear estimation and multiple model fusion components of the proposed algorithm are explained in detail and compared with several alternative methods using the training data. The final submission ranked #2 among professional and student participants and the method is applicable to other data driven PHM problems.
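    A hedged sketch of a multiple-model fusion step for tool-wear estimation follows: several candidate wear models run in parallel and their predictions are combined with weights inversely proportional to recent residuals. This is a generic fusion scheme for illustration, not the specific algorithm of the paper above.

```python
# Residual-weighted fusion of several wear-depth estimators (generic sketch).
import numpy as np

def fuse(predictions, residuals, eps=1e-6):
    """predictions: per-model wear estimates; residuals: recent abs errors."""
    w = 1.0 / (np.asarray(residuals) + eps)   # trust models with small error
    w /= w.sum()
    return float(np.dot(w, predictions))

# Three hypothetical wear models (e.g., linear, power-law, data-driven):
preds = np.array([112.0, 118.5, 121.0])       # wear depth in microns
resid = np.array([4.0, 1.5, 6.0])             # recent tracking errors
print("fused wear estimate:", round(fuse(preds, resid), 1), "um")
```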

  5. Static Stiffness Modeling of Parallel Kinematics Machine Tool Joints

    Directory of Open Access Journals (Sweden)

    O. K. Akmaev

    2015-09-01

    Full Text Available The possible variants of an original parallel kinematics machine-tool structure are explored in this article. A new Hooke's universal joint design based on needle roller bearings with the ability to set a preload is proposed. The bearing stiffness modeling is carried out using a variety of methods. Elastic deformation models of a Hooke's joint and a spherical rolling joint have been developed to assess the possibility of using these joints in machine tools with parallel kinematics.

  6. Linguistically motivated statistical machine translation models and algorithms

    CERN Document Server

    Xiong, Deyi

    2015-01-01

    This book provides a wide variety of algorithms and models to integrate linguistic knowledge into Statistical Machine Translation (SMT). It helps advance conventional SMT to linguistically motivated SMT by enhancing the following three essential components: translation, reordering and bracketing models. It also serves the purpose of promoting the in-depth study of the impacts of linguistic knowledge on machine translation. Finally it provides a systematic introduction of Bracketing Transduction Grammar (BTG) based SMT, one of the state-of-the-art SMT formalisms, as well as a case study of linguistically motivated SMT on a BTG-based platform.

  7. Modeling Music Emotion Judgments Using Machine Learning Methods.

    Science.gov (United States)

    Vempala, Naresh N; Russo, Frank A

    2017-01-01

    Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.
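    The "committee machine" inspected in this record is an ensemble of neural networks whose outputs are averaged. A minimal sketch is given below; the features and targets are synthetic stand-ins for the audio and physiological features described above.

```python
# Committee machine: average the outputs of several small neural networks.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 10))               # 60 excerpts x 10 features
y = 0.8 * X[:, 0] + np.tanh(X[:, 1]) + rng.normal(0, 0.1, 60)  # e.g., arousal

committee = [
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=s).fit(X, y)
    for s in range(5)                       # 5 networks, different inits
]
prediction = np.mean([m.predict(X) for m in committee], axis=0)
print("committee correlation with judgments:",
      round(np.corrcoef(prediction, y)[0, 1], 2))
```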

  8. Likely near-term solar-thermal water splitting technologies

    Energy Technology Data Exchange (ETDEWEB)

    Perkins, C.; Weimer, A.W. [University of Colorado, Boulder, CO (United States). Engineering Center

    2004-12-01

    Thermodynamic and materials considerations were made for some two- and three-step thermochemical cycles to split water using solar-thermal processing. The direct thermolysis of water to produce H₂ using solar-thermal processing is unlikely in the near term due to ultra-high-temperature requirements exceeding 3000 K and the need to separate H₂ from O₂ at these temperatures. However, several lower temperature (<2500 K) thermochemical cycles including ZnO/Zn, Mn₂O₃/MnO, substituted iron oxide, and the sulfur-iodine route (S-I) provide an opportunity for high-temperature solar-thermal development. Although zirconia-based materials are well suited for metal oxide routes in terms of chemical compatibility at these temperatures, thermal shock issues are a major concern for solar-thermal applications. Hence, efforts need to be directed towards methods for designing reactors to eliminate thermal shock (ZrO₂ based) or that use graphite (very compatible in terms of temperature and thermal shock) with designs that prevent contact of chemical species with graphite materials at high temperatures. Fluid-wall reactor configurations where inert gases provide a blanket to protect the graphite wall appear promising in this regard, but their use will impact process efficiency. For the case of S-I up to 1800 K, silicon carbide appears to be a suitable material for the high-temperature H₂SO₄ dissociation. There is a need for a significant amount of work to be done in the area of high-temperature solar-thermal reactor engineering to develop thermochemical water splitting processes. (author)

  9. Antimatter Production for Near-Term Propulsion Applications

    Science.gov (United States)

    Gerrish, Harold P.; Schmidt, George R.

    1999-01-01

    This presentation discusses the use and potential of power generated from proton-antiproton annihilation. The problem is that antiproton production is insufficient and the production methods are inefficient. The cost for 1 gram of antiprotons is estimated at 62.5 trillion dollars. Applications that require large quantities (i.e., about 1 kg) will require dramatic improvements in the efficiency of antiproton production. However, applications that involve small quantities (i.e., 1 to 10 micrograms) may be practical with a relative expansion of capacity. There are four "conventional" antimatter propulsion concepts: (1) the solid core, (2) the gas core, (3) the plasma core, and (4) the beam core. These are compared in terms of specific impulse, propulsive energy utilization, and vehicle structure/propellant mass ratio. Antimatter-catalyzed fusion propulsion is also evaluated. The improvements outlined in the presentation to the Fermilab production capability, and other sites, would result in a worldwide capacity of several micrograms per year by the middle of the next decade. The conclusions drawn are: (1) conventional antimatter propulsion is not practical due to the large p-bar requirement; (2) antimatter-catalyzed systems can reasonably be considered, since employing substantially smaller quantities "solves" the energy cost problem; (3) with the current infrastructure, the cost for 1 microgram of p-bars is $62.5 million, but with near-term improvements the cost should drop; (4) a milligram-scale facility would require a $15 billion investment, but could produce 1 mg, at $0.1/kW-hr, for $6.25 million.

  10. NSTX: Facility/Research Highlights and Near Term Facility Plans

    International Nuclear Information System (INIS)

    Ono, M.

    2008-01-01

    The National Spherical Torus Experiment (NSTX) is a collaborative mega-ampere-class spherical torus research facility with high power heating and current drive systems and state-of-the-art comprehensive diagnostics. For the 2008 experimental campaign, the high harmonic fast wave (HHFW) heating efficiency in deuterium improved significantly with lithium evaporation and produced a record central Te of 5 keV. The HHFW heating of NBI-heated discharges was also demonstrated for the first time with lithium application. The EBW emission in H-mode was also improved dramatically with lithium, which was shown to be attributable to reduced edge collisional absorption. The newly installed FIDA energetic particle diagnostic measured significant transport of energetic ions associated with TAE avalanches as well as n=1 kink activities. A full 75-channel poloidal CHERS system is now operational, yielding tantalizing initial results. In the near term, major upgrade activities include a liquid-lithium divertor target to achieve a lower collisionality regime, HHFW antenna upgrades to double the power handling capability in H-mode, and a beam-emission spectroscopy diagnostic to extend the localized turbulence measurements toward the ion gyro-radius scale from the present concentration on the electron gyro-radius scale. For the longer term, a new center stack to significantly expand the plasma operating parameters is planned, along with a second NBI system to double the NBI heating and CD power and provide current profile control. These upgrades will enable NSTX to explore fully non-inductive operations over a much expanded plasma parameter space in terms of higher plasma temperature and lower collisionality, thereby significantly reducing the physics parameter gap between the present NSTX and the projected next-step ST experiments.

  11. "Near-term" Natural Catastrophe Risk Management and Risk Hedging in a Changing Climate

    Science.gov (United States)

    Michel, Gero; Tiampo, Kristy

    2014-05-01

    Competing with analytics - can the insurance market take advantage of seasonal or "near-term" forecasting and temporal changes in risk? Natural perils (re)insurance has been based on models following climatology, i.e., the long-term "historical" average, as opposed to considering the "near-term" and forecasting hazard and risk for the seasons or years to come. Variability and short-term changes in risk are deemed abundant for almost all perils. In addition to hydrometeorological perils, whose changes are vastly discussed, earthquake activity might also change over various time-scales, affected by earlier local (or even global) events, regional changes in the distribution of stresses and strains, and more. Only recently has insurance risk modeling of (stochastic) hurricane-years or extratropical-storm-years started considering our ability to forecast climate variability, herewith taking advantage of apparent correlations between climate indicators and the activity of storm events. Once some of these "near-term measures" were in the market, rating agencies and regulators swiftly adopted these concepts, demanding that companies deploy a selection of more conservative "time-dependent" models. This was despite the fact that the ultimate effect of some of these measures on insurance risk was not well understood. Apparent short-term success over the last years in near-term seasonal hurricane forecasting was brought to a halt in 2013, when these models failed to forecast the exceptional shortage of hurricanes, herewith contradicting an active-year forecast. The focus of earthquake forecasting has in addition been mostly on high rather than low temporal and regional activity, despite the fact that avoiding losses does not by itself create a product. This presentation sheds light on new risk management concepts for over-regional and global (re)insurance portfolios that take advantage of forecasting changes in risk. The presentation focuses on the "upside" and on new opportunities.

  12. Near Term Hybrid Passenger Vehicle Development Program. Phase I, Final report

    Energy Technology Data Exchange (ETDEWEB)

    Montalenti, P.; Piccolo, R.

    1979-09-21

    Activities performed in the Near Term Hybrid Vehicle (NTHV) program which studied the technical, economic, and fuel conservation aspects of replacing new 1985 full sized passenger cars in the US with automobiles having combination heat engines and electric motor power are summarized. These studies included NTHV design for the body power units, transmission system, and controls; evaluation of alternative strategies; the fuel conservation expected; goals for vehicle performance, safety and reliability; economic analysis, and mathematical models for use in the computer-aided design of the optimum performance NTHV. (LCL)

  13. Analytical model for Stirling cycle machine design

    Energy Technology Data Exchange (ETDEWEB)

    Formosa, F. [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France); Despesse, G. [Laboratoire Capteurs Actionneurs et Recuperation d' Energie, CEA-LETI-MINATEC, Grenoble (France)

    2010-10-15

    To study further the promising free-piston Stirling engine architecture, there is a need for an analytical thermodynamic model that can be used in a dynamic analysis for preliminary design. To obtain more realistic values, the model has to take into account the heat losses and irreversibilities of the engine. An analytical model that encompasses the critical flaws of the regenerator and, furthermore, the heat exchanger effectivenesses has been developed. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined. (author)

  14. Analytical model for Stirling cycle machine design

    International Nuclear Information System (INIS)

    Formosa, F.; Despesse, G.

    2010-01-01

    To study further the promising free-piston Stirling engine architecture, there is a need for an analytical thermodynamic model that can be used in a dynamic analysis for preliminary design. To obtain more realistic values, the model has to take into account the heat losses and irreversibilities of the engine. An analytical model that encompasses the critical flaws of the regenerator and, furthermore, the heat exchanger effectivenesses has been developed. This model has been validated using the whole range of experimental data available from the General Motors GPU-3 Stirling engine prototype. The effects of the technological and operating parameters on Stirling engine performance have been investigated. In addition to the regenerator influence, the effect of the cooler effectiveness is underlined.
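    The role the regenerator plays in such a model can be sketched with a first-cut analytical calculation: an ideal isothermal Stirling cycle whose heater must additionally supply the heat that an imperfect regenerator fails to recover. This is a simplified illustration under stated assumptions, not the authors' validated GPU-3 model; the temperatures and compression ratio are arbitrary example values.

```python
# Ideal isothermal Stirling cycle with an imperfect regenerator (per mole).
import math

R = 8.314            # J/(mol K)
cv = 1.5 * R         # monatomic working gas assumed (e.g., helium)

def stirling_efficiency(T_hot, T_cold, compression_ratio, e_regen):
    w_net = R * (T_hot - T_cold) * math.log(compression_ratio)
    q_heater = R * T_hot * math.log(compression_ratio) \
             + (1.0 - e_regen) * cv * (T_hot - T_cold)   # unrecovered heat
    return w_net / q_heater

for e in (1.0, 0.9, 0.7):    # e = 1 recovers the Carnot limit
    eta = stirling_efficiency(977.0, 300.0, compression_ratio=1.5, e_regen=e)
    print(f"regenerator effectiveness {e:.1f} -> efficiency {eta:.2f}")
```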

  15. Innovative model of business process reengineering at machine building enterprises

    Science.gov (United States)

    Nekrasov, R. Yu; Tempel, Yu A.; Tempel, O. A.

    2017-10-01

    The paper provides consideration of business process reengineering viewed as a managerial innovation adopted by present-day machine building enterprises, as well as ways to improve its procedure. A developed innovative model of reengineering measures is described; it is based on the process approach and other principles of company management.

  16. Modelling, Construction, and Testing of a Simple HTS Machine Demonstrator

    DEFF Research Database (Denmark)

    Jensen, Bogi Bech; Abrahamsen, Asger Bech

    2011-01-01

    This paper describes the construction, modeling and experimental testing of a high temperature superconducting (HTS) machine prototype employing second generation (2G) coated conductors in the field winding. The prototype is constructed in a simple way, with the purpose of having an inexpensive way...

  17. An incremental anomaly detection model for virtual machines

    Science.gov (United States)

    Zhang, Hancui; Chen, Shuyu; Liu, Jun; Zhou, Zhen; Wu, Tianshu

    2017-01-01

    The Self-Organizing Map (SOM) algorithm, an unsupervised learning method, has been applied in anomaly detection due to its capabilities of self-organizing and automatic anomaly prediction. However, because the algorithm is initialized randomly, it takes a long time to train a detection model. Besides, cloud platforms with large numbers of virtual machines are prone to performance anomalies due to their highly dynamic and resource-sharing character, which leaves the algorithm with low accuracy and low scalability. To address these problems, an Improved Incremental Self-Organizing Map (IISOM) model is proposed for anomaly detection of virtual machines. In this model, a heuristic-based initialization algorithm and a Weighted Euclidean Distance (WED) algorithm are introduced into SOM to speed up the training process and improve model quality. Meanwhile, a neighborhood-based searching algorithm is presented to accelerate the detection time by taking into account the large scale and highly dynamic features of virtual machines on a cloud platform. To demonstrate the effectiveness, experiments on the common benchmark KDD Cup dataset and a real dataset have been performed. Results suggest that IISOM has advantages in accuracy and convergence velocity of anomaly detection for virtual machines on a cloud platform. PMID:29117245
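    A minimal SOM anomaly detector is sketched below: a small grid of prototype vectors is trained on normal virtual-machine metrics, and samples with a large quantization error (distance to the best-matching unit) are flagged. This is a generic, from-scratch SOM for illustration, not the paper's IISOM with its heuristic initialization and weighted Euclidean distance; the metrics are synthetic.

```python
# From-scratch SOM with quantization-error thresholding for anomaly flags.
import numpy as np

rng = np.random.default_rng(3)
normal = rng.normal(0.5, 0.1, size=(500, 3))    # e.g., CPU, memory, I/O load

grid = rng.uniform(0.0, 1.0, size=(4, 4, 3))    # 4x4 map of prototypes
for t in range(2000):                           # online SOM training
    x = normal[rng.integers(len(normal))]
    d = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
    lr = 0.5 * (1 - t / 2000)                          # decaying learning rate
    radius = 1.5 * (1 - t / 2000) + 0.5                # decaying neighborhood
    for i in range(4):
        for j in range(4):
            h = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * radius ** 2))
            grid[i, j] += lr * h * (x - grid[i, j])

def quantization_error(x):
    return np.linalg.norm(grid - x, axis=2).min()

threshold = np.quantile([quantization_error(x) for x in normal], 0.99)
anomaly = np.array([0.95, 0.05, 0.99])          # VM suddenly saturated
print("anomalous:", quantization_error(anomaly) > threshold)
```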

  18. Control of discrete event systems modeled as hierarchical state machines

    Science.gov (United States)

    Brave, Y.; Heymann, M.

    1991-01-01

    The authors examine a class of discrete event systems (DESs) modeled as asynchronous hierarchical state machines (AHSMs). For this class of DESs, they provide an efficient method for testing reachability, which is an essential step in many control synthesis procedures. This method utilizes the asynchronous nature and hierarchical structure of AHSMs, thereby illustrating the advantage of the AHSM representation as compared with its equivalent (flat) state machine representation. An application of the method is presented where an online minimally restrictive solution is proposed for the problem of maintaining a controlled AHSM within prescribed legal bounds.
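    Reachability is the basic test the record above accelerates. The sketch below does it the naive way, by breadth-first search over a flattened transition relation; the cited method instead exploits the hierarchical AHSM structure to avoid enumerating the flat state space. The example machine is hypothetical.

```python
# Naive reachability test over a (flattened) state machine via BFS.
from collections import deque

def reachable(transitions, start):
    """transitions: dict state -> iterable of successor states."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for nxt in transitions.get(s, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

machine = {"idle": ["loading"], "loading": ["machining", "fault"],
           "machining": ["idle"], "fault": []}
print(reachable(machine, "idle"))  # {'idle', 'loading', 'machining', 'fault'}
```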

  19. Modeling RHIC using the standard machine formal accelerator description

    International Nuclear Information System (INIS)

    Pilat, F.; Trahern, C.G.; Wei, J.

    1997-01-01

    The Standard Machine Format (SMF) is a structured description of accelerator lattices which supports both the hierarchy of beam lines and generic lattice objects as well as those deviations (field errors, alignment errors, etc.) associated with each component of the as-installed machine. In this paper we discuss the use of SMF to describe the Relativistic Heavy Ion Collider (RHIC) as well as the ancillary data structures (such as field quality measurements) that are necessarily incorporated into the RHIC SMF model. Future applications of SMF are outlined, including its use in the RHIC operational environment.

  20. Machine learning models in breast cancer survival prediction.

    Science.gov (United States)

    Montazeri, Mitra; Montazeri, Mohadeseh; Montazeri, Mahdieh; Beigzadeh, Amin

    2016-01-01

    Breast cancer is one of the most common cancers, with a high mortality rate among women. With early diagnosis of breast cancer, survival will increase from 56% to more than 86%. Therefore, an accurate and reliable system is necessary for the early diagnosis of this cancer. The proposed model is the combination of rules and different machine learning techniques. Machine learning models can help physicians to reduce the number of false decisions. They try to exploit patterns and relationships among a large number of cases and predict the outcome of a disease using historical cases stored in datasets. The objective of this study is to propose a rule-based classification method with machine learning techniques for the prediction of different types of breast cancer survival. We use a dataset with eight attributes that includes the records of 900 patients, of whom 876 (97.3%) were female and 24 (2.7%) were male. Naive Bayes (NB), Trees Random Forest (TRF), 1-Nearest Neighbor (1NN), AdaBoost (AD), Support Vector Machine (SVM), RBF Network (RBFN), and Multilayer Perceptron (MLP) machine learning techniques with a 10-fold cross-validation technique were used with the proposed model for the prediction of breast cancer survival. The performance of the machine learning techniques was evaluated with accuracy, precision, sensitivity, specificity, and area under the ROC curve. Out of 900 patients, 803 patients were alive and 97 were dead. In this study, the Trees Random Forest (TRF) technique showed better results in comparison to the other techniques (NB, 1NN, AD, SVM, RBFN, MLP). The accuracy, sensitivity and the area under the ROC curve of TRF are 96%, 96%, and 93%, respectively. However, the 1NN machine learning technique provided poor performance (accuracy 91%, sensitivity 91% and area under ROC curve 78%). This study demonstrates that the Trees Random Forest model (TRF), which is a rule-based classification model, was the best model with the highest level of
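    The evaluation protocol in the record, a tree ensemble scored with 10-fold cross-validation on accuracy and area under the ROC curve, can be sketched as follows. The breast cancer dataset bundled with scikit-learn stands in for the authors' 900-patient registry.

```python
# Random forest with 10-fold CV on accuracy and ROC AUC (stand-in dataset).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(f"accuracy {acc.mean():.2f}, ROC AUC {auc.mean():.2f}")
```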

  1. Applying Machine Trust Models to Forensic Investigations

    Science.gov (United States)

    Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin

    Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.

  2. Modeling Geomagnetic Variations using a Machine Learning Framework

    Science.gov (United States)

    Cheung, C. M. M.; Handmer, C.; Kosar, B.; Gerules, G.; Poduval, B.; Mackintosh, G.; Munoz-Jaramillo, A.; Bobra, M.; Hernandez, T.; McGranaghan, R. M.

    2017-12-01

    We present a framework for data-driven modeling of Heliophysics time series data. The Solar Terrestrial Interaction Neural net Generator (STING) is an open source python module built on top of state-of-the-art statistical learning frameworks (traditional machine learning methods as well as deep learning). To showcase the capability of STING, we deploy it for the problem of predicting the temporal variation of geomagnetic fields. The data used includes solar wind measurements from the OMNI database and geomagnetic field data taken by magnetometers at US Geological Survey observatories. We examine the predictive capability of different machine learning techniques (recurrent neural networks, support vector machines) for a range of forecasting times (minutes to 12 hours). STING is designed to be extensible to other types of data. We show how STING can be used on large sets of data from different sensors/observatories and adapted to tackle other problems in Heliophysics.
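    A generic supervised framing of the forecasting task described above is sketched below: lagged solar-wind inputs predict a geomagnetic value some lead time ahead. STING's internals are not detailed in this record, so the lag window, lead time, and synthetic series are assumptions for illustration.

```python
# Lagged-feature time-series forecasting with an SVM regressor.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(11)
solar_wind = np.cumsum(rng.normal(size=1000))          # stand-in driver series
geomag = 0.7 * np.roll(solar_wind, 30) + rng.normal(0, 0.5, 1000)  # response

lags, lead = 24, 12          # use 24 past samples to predict 12 steps ahead
X = np.array([solar_wind[i - lags:i]
              for i in range(lags, len(solar_wind) - lead)])
y = geomag[lags + lead:]
split = int(0.8 * len(X))    # chronological split: train early, test late
model = SVR(C=10.0).fit(X[:split], y[:split])
print("held-out R^2:", round(model.score(X[split:], y[split:]), 2))
```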

  3. A comparative study of machine learning models for ethnicity classification

    Science.gov (United States)

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression has been documented. Experimental results indicate that the logistic regression classifier provides a more accurate classification than the support vector machine.

  4. Latent domain models for statistical machine translation

    NARCIS (Netherlands)

    Hoàng, C.

    2017-01-01

    A data-driven approach to model translation suffers from the data mismatch problem and demands domain adaptation techniques. Given parallel training data originating from a specific domain, training an MT system on the data would result in a rather suboptimal translation for other domains. But does

  5. Global ocean modeling on the Connection Machine

    International Nuclear Information System (INIS)

    Smith, R.D.; Dukowicz, J.K.; Malone, R.C.

    1993-01-01

    The authors have developed a version of the Bryan-Cox-Semtner ocean model (Bryan, 1969; Semtner, 1976; Cox, 1984) for massively parallel computers. Such models are three-dimensional, Eulerian models that use latitude and longitude as the horizontal spherical coordinates and fixed depth levels as the vertical coordinate. The incompressible Navier-Stokes equations, with a turbulent eddy viscosity, and the mass continuity equation are solved, subject to the hydrostatic and Boussinesq approximations. The traditional model formulation uses a rigid-lid approximation (vertical velocity = 0 at the ocean surface) to eliminate fast surface waves. These waves would otherwise require that a very short time step be used in numerical simulations, which would greatly increase the computational cost. To solve the equations with the rigid-lid assumption, the equations of motion are split into two parts: a set of two-dimensional "barotropic" equations describing the vertically-averaged flow, and a set of three-dimensional "baroclinic" equations describing temperature, salinity and deviations of the horizontal velocities from the vertically-averaged flow.

  6. Support vector machine based battery model for electric vehicles

    International Nuclear Information System (INIS)

    Wang Junping; Chen Quanshi; Cao Binggang

    2006-01-01

    The support vector machine (SVM) is a novel type of learning machine based on statistical learning theory that can map a nonlinear function successfully. As a battery is a nonlinear system, it is difficult to establish the relationship between the load voltage and the current under different temperatures and state of charge (SOC). The SVM is used to model the battery nonlinear dynamics in this paper. Tests are performed on an 80Ah Ni/MH battery pack with the Federal Urban Driving Schedule (FUDS) cycle to set up the SVM model. Compared with the Nernst and Shepherd combined model, the SVM model can simulate the battery dynamics better with small amounts of experimental data. The maximum relative error is 3.61%
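    The record's idea, support vector regression mapping operating conditions to battery load voltage, is sketched below. The synthetic voltage law merely stands in for FUDS-cycle test data from the 80 Ah Ni/MH pack; all coefficients are hypothetical.

```python
# SVR mapping (current, SOC, temperature) -> battery load voltage.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 400
current = rng.uniform(-40, 80, n)   # A (negative = regenerative charging)
soc = rng.uniform(0.2, 1.0, n)      # state of charge
temp = rng.uniform(5, 45, n)        # deg C
voltage = 12.0 + 1.5 * soc - 0.01 * current - 0.005 * (25 - temp) \
          + rng.normal(0, 0.02, n)  # hypothetical pack behavior

X = np.column_stack([current, soc, temp])
svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01)).fit(X, voltage)
print("predicted V at 50 A, 60% SOC, 25 C:",
      round(float(svr.predict([[50.0, 0.6, 25.0]])[0]), 2))
```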

  7. Bilingual Cluster Based Models for Statistical Machine Translation

    Science.gov (United States)

    Yamamoto, Hirofumi; Sumita, Eiichiro

    We propose a domain specific model for statistical machine translation. It is well-known that domain specific language models perform well in automatic speech recognition. We show that domain specific language and translation models also benefit statistical machine translation. However, there are two problems with using domain specific models. The first is the data sparseness problem. We employ an adaptation technique to overcome this problem. The second issue is domain prediction. In order to perform adaptation, the domain must be provided; however, in many cases, the domain is not known or changes dynamically. For these cases, not only the translation target sentence but also the domain must be predicted. This paper focuses on the domain prediction problem for statistical machine translation. In the proposed method, a bilingual training corpus is automatically clustered into sub-corpora. Each sub-corpus is deemed to be a domain. The domain of a source sentence is predicted by using its similarity to the sub-corpora. The predicted domain (sub-corpus) specific language and translation models are then used for the translation decoding. This approach gave an improvement of 2.7 in BLEU score on the IWSLT05 Japanese to English evaluation corpus (improving the score from 52.4 to 55.1). This is a substantial gain and indicates the validity of the proposed bilingual cluster based models.

  8. Transferable Atomic Multipole Machine Learning Models for Small Organic Molecules.

    Science.gov (United States)

    Bereau, Tristan; Andrienko, Denis; von Lilienfeld, O Anatole

    2015-07-14

    Accurate representation of the molecular electrostatic potential, which is often expanded in distributed multipole moments, is crucial for an efficient evaluation of intermolecular interactions. Here we introduce a machine learning model for multipole coefficients of atom types H, C, O, N, S, F, and Cl in any molecular conformation. The model is trained on quantum-chemical results for atoms in varying chemical environments drawn from thousands of organic molecules. Multipoles in systems with neutral, cationic, and anionic molecular charge states are treated with individual models. The models' predictive accuracy and applicability are illustrated by evaluating intermolecular interaction energies of nearly 1,000 dimers and the cohesive energy of the benzene crystal.

  9. Numerical modeling and optimization of machining duplex stainless steels

    Directory of Open Access Journals (Sweden)

    Rastee D. Koyee

    2015-01-01

    Full Text Available The shortcomings of analytical and empirical machining models have to be overcome if the demands of industry are to be fulfilled. Three-dimensional finite element modeling (FEM) introduces an attractive alternative to bridge the gap between purely empirical and fundamental scientific quantities and to fulfill industry needs. However, the challenging aspects which hinder the successful adoption of FEM in the machining sector of the manufacturing industry have to be solved first. One of the greatest challenges is the identification of the correct set of machining simulation input parameters. This study presents a new methodology to inversely calculate the input parameters when simulating the machining of standard duplex EN 1.4462 and super duplex EN 1.4410 stainless steels. JMatPro software is first used to model elastic-viscoplastic and physical work material behavior. In order to effectively obtain an optimum set of inversely identified friction coefficients, thermal contact conductance, Cockcroft-Latham critical damage value, percentage reduction in flow stress, and Taylor-Quinney coefficient, Taguchi-VIKOR coupled with a Firefly Algorithm Neural Network System is applied. The optimization procedure effectively minimizes the overall differences between the experimentally measured performances, such as cutting forces, tool nose temperature and chip thickness, and the numerically obtained ones at any specified cutting condition. The optimum set of input parameters is verified and used for the next step of 3D-FEM application. In the next stage of the study, design of experiments, numerical simulations, and fuzzy rule modeling approaches are employed to optimize types of chip breaker, insert shapes, process conditions, cutting parameters, and tool orientation angles based on many important performances. Through this study, not only a new methodology in defining the optimal set of controllable parameters for turning simulations is introduced, but also

  10. Acoustic signal characterization of a ball milling machine model

    International Nuclear Information System (INIS)

    Andrade-Romero, J Alexis; Romero, Jesus F A; Amestegui, Mauricio

    2011-01-01

    The Los Angeles machine is used both in mining processes and for standard testing covering the strength of materials. As the present work is focused on the latter application, an improvement in the estimation procedure for the resistance percentage of small-size coarse aggregate is presented. More precisely, a pattern identification strategy for the vibratory signal is proposed for estimating the resistance percentage, using a simplified chaotic model and the continuous wavelet transform.

  11. Near-term electric-vehicle program. Phase II. Mid-term review summary report

    Energy Technology Data Exchange (ETDEWEB)

    1978-07-27

    The general objective of the Near-Term Electric Vehicle Program is to confirm that, in fact, the complete spectrum of requirements placed on the automobile (e.g., safety, producibility, utility, etc.) can still be satisfied if electric power train concepts are incorporated in lieu of contemporary power train concepts, and that the resultant set of vehicle characteristics are mutually compatible, technologically achievable, and economically achievable. The focus of the approach to meeting this general objective involves the design, development, and fabrication of complete electric vehicles incorporating, where necessary, extensive technological advancements. A mid-term summary is presented of Phase II which is a continuation of the preliminary design study conducted in Phase I of the program. Information is included on vehicle performance and performance simulation models; battery subsystems; control equipment; power systems; vehicle design and components for suspension, steering, and braking; scale model testing; structural analysis; and vehicle dynamics analysis. (LCL)

  12. High-resolution ensemble projections of near-term regional climate over the continental United States

    Science.gov (United States)

    Ashfaq, Moetasim; Rastogi, Deeksha; Mei, Rui; Kao, Shih-Chieh; Gangrade, Sudershan; Naz, Bibi S.; Touma, Danielle

    2016-09-01

    We present high-resolution near-term ensemble projections of hydroclimatic changes over the contiguous U.S. using a regional climate model (RegCM4) that dynamically downscales 11 global climate models from the fifth phase of Coupled Model Intercomparison Project at 18 km horizontal grid spacing. All model integrations span 41 years in the historical period (1965-2005) and 41 years in the near-term future period (2010-2050) under Representative Concentration Pathway 8.5 and cover a domain that includes the contiguous U.S. and parts of Canada and Mexico. Should emissions continue to rise, surface temperatures in every region within the U.S. will reach a new climate norm well before mid 21st century regardless of the magnitudes of regional warming. Significant warming will likely intensify the regional hydrological cycle through the acceleration of the historical trends in cold, warm, and wet extremes. The future temperature response will be partly regulated by changes in snow hydrology over the regions that historically receive a major portion of cold season precipitation in the form of snow. Our results indicate the existence of the Clausius-Clapeyron scaling at regional scales where per degree centigrade rise in surface temperature will lead to a 7.4% increase in precipitation from extremes. More importantly, both winter (snow) and summer (liquid) extremes are projected to increase across the U.S. These changes in precipitation characteristics will be driven by a shift toward shorter and wetter seasons. Overall, projected changes in the regional hydroclimate can have substantial impacts on the natural and human systems across the U.S.
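    The 7.4% per degree figure quoted above is close to the Clausius-Clapeyron scaling of saturation vapor pressure with temperature. A worked form of that scaling, evaluated at a representative surface temperature of 288 K:

```latex
% Clausius-Clapeyron scaling behind the ~7 %/K figure quoted above:
\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T}
  = \frac{L_v}{R_v T^2}
  \approx \frac{2.5\times 10^{6}\ \mathrm{J\,kg^{-1}}}
               {461\ \mathrm{J\,kg^{-1}\,K^{-1}}\times(288\ \mathrm{K})^2}
  \approx 0.065\ \mathrm{K^{-1}} \quad (\approx 7\%\ \text{per K})
```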

  13. Modeling Music Emotion Judgments Using Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Naresh N. Vempala

    2018-01-01

    Full Text Available Emotion judgments and five channels of physiological data were obtained from 60 participants listening to 60 music excerpts. Various machine learning (ML) methods were used to model the emotion judgments inclusive of neural networks, linear regression, and random forests. Input for models of perceived emotion consisted of audio features extracted from the music recordings. Input for models of felt emotion consisted of physiological features extracted from the physiological recordings. Models were trained and interpreted with consideration of the classic debate in music emotion between cognitivists and emotivists. Our models supported a hybrid position wherein emotion judgments were influenced by a combination of perceived and felt emotions. In comparing the different ML approaches that were used for modeling, we conclude that neural networks were optimal, yielding models that were flexible as well as interpretable. Inspection of a committee machine, encompassing an ensemble of networks, revealed that arousal judgments were predominantly influenced by felt emotion, whereas valence judgments were predominantly influenced by perceived emotion.

  14. Inverse Analysis and Modeling for Tunneling Thrust on Shield Machine

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2013-01-01

    Full Text Available With the rapid development of sensor and detection technologies, measured data analysis plays an increasingly important role in the design and control of heavy engineering equipment. The paper proposes a method for inverse analysis and modeling based on large volumes of on-site measured data, in which dimensional analysis and data mining techniques are combined. The method was applied to the modeling of the tunneling thrust on shield machines, and an explicit expression for thrust prediction was established. Combined with on-site data from a tunneling project in China, the inverse identification of model coefficients was carried out using the multiple regression method. The model residual was analyzed by statistical methods. By comparing the on-site data and the model-predicted results in two other projects with different tunneling conditions, the feasibility of the model was discussed. The work may provide a scientific basis for the rational design and control of shield tunneling machines and also a new way for large-scale on-site data analysis of complex engineering systems with nonlinear, multivariable, time-varying characteristics.

  15. Near-Term Research and Testing of the CWE-300: Executive Summary of Project Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Cannon Wind Eagle Corporation

    2000-08-24

    This report is a summary of activity on this subcontract during the period from September 1, 1997 through August 30, 1999. The contract entailed the engineering, component tests, system field tests, certification, and preparation for manufacturing of the existing Cannon Wind Eagle 300-kW (CWE-300) wind turbine. The CWE-300 is a lightweight, flexible machine with a number of innovative design features that, relative to comparable rigid-hub machines, promise to contribute to reduced capital, installation, and maintenance costs. The architecture of the CWE-300 evolved from earlier wind turbine models developed over several decades. The current design retains many of the desirable features of earlier machines, addresses problems exhibited by those machines, and incorporates further innovative design features.

  16. Long-term perspective underscores need for stronger near-term policies on climate change

    Science.gov (United States)

    Marcott, S. A.; Shakun, J. D.; Clark, P. U.; Mix, A. C.; Pierrehumbert, R.; Goldner, A. P.

    2014-12-01

    Despite scientific consensus that substantial anthropogenic climate change will occur during the 21st century and beyond, the social, economic and political will to address this global challenge remains mired in uncertainty and indecisiveness. One contributor to this situation may be that scientific findings are often couched in technical detail focusing on near-term changes and uncertainties and often lack a relatable long-term context. We argue that viewing near-term changes from a long-term perspective provides a clear demonstration that policy decisions made in the next few decades will affect the Earth's climate, and with it our socio-economic well-being, for the next ten millennia or more. To provide a broader perspective, we present a graphical representation of Earth's long-term climate history that clearly identifies the connection between near-term policy options and the geological scale of future climate change. This long view is based on a combination of recently developed global proxy temperature reconstructions of the last 20,000 years and model projections of surface temperature for the next 10,000 years. Our synthesis places the 20th and 21st centuries, when most emissions are likely to occur, into the context of the last twenty millennia over which time the last Ice Age ended and human civilization developed, and the next ten millennia, over which time the projected impacts will occur. This long-term perspective raises important questions about the most effective adaptation and mitigation policies. For example, although some consider it economically viable to raise seawalls and dikes in response to 21st century sea level change, such a strategy does not account for the need for continuously building much higher defenses in the 22nd century and beyond. Likewise, avoiding tipping points in the climate system in the short term does not necessarily imply that such thresholds will not still be crossed in the more distant future as slower components

  17. ANN and RSM approach for modelling and multi objective optimization of abrasive water jet machining process

    Directory of Open Access Journals (Sweden)

    Srinath Reddy N.

    2018-09-01

    Full Text Available Abrasive Water Jet Machining (AWJM) is a novel nontraditional cutting process that has found diverse applications in machining different kinds of difficult-to-machine materials. Process parameters play an important role in determining the economics of the machining process at good quality. This research focuses on predictive models for explaining the functional relationship between the input and output parameters of the AWJ machining process. No single combination of machining variables can deliver the best responses concurrently, owing to their conflicting nature. Hence, a multi-objective approach has been attempted to find the best combination of process parameters by modelling the AWJM process using an ANN. It yielded a set of optimal process parameters for the AWJ machining process, which shows an improvement in productivity. A wide set of trial experiments covering a broad range of machining parameters has been considered for modelling and, then, for validation. The model is capable of predicting optimized responses.

  18. Near term climate projections for invasive species distributions

    Science.gov (United States)

    Jarnevich, C.S.; Stohlgren, T.J.

    2009-01-01

    Climate change and invasive species pose important conservation issues separately, and should be examined together. We used existing long-term climate datasets for the US to project potential climate change into the future at a finer spatial and temporal resolution than the climate change scenarios generally available. These fine scale projections, along with new species distribution modeling techniques to forecast the potential extent of invasive species, can provide useful information to aid conservation and invasive species management efforts. We created habitat suitability maps for Pueraria montana (kudzu) under current climatic conditions and potential average conditions up to 30 years in the future. We examined how the potential distribution of this species will be affected by changing climate, and the management implications associated with these changes. Our models indicated that P. montana may increase its distribution particularly in the Northeast with climate change and may decrease in other areas. © 2008 Springer Science+Business Media B.V.

  19. Support Vector Machines for Petrophysical Modelling and Lithoclassification

    Science.gov (United States)

    Al-Anazi, Ammal Fannoush Khalifah

    2011-12-01

    Given the increasing challenges of oil and gas production from partially depleted conventional or unconventional reservoirs, reservoir characterization is a key element of the reservoir development workflow. Reservoir characterization impacts well placement, injection and production strategies, and field management. Reservoir characterization projects point and line data to a large three-dimensional volume. The relationship between variables, e.g. porosity and permeability, is often established by regression, yet the complexities between measured variables often lead to poor correlation coefficients between the regressed variables. Recent advances in machine learning methods have provided attractive alternatives for constructing interpretation models of rock properties in heterogeneous reservoirs. Here, Support Vector Machines (SVMs), a class of learning machine formulated to output regression models and classifiers of competitive generalization capability, have been explored to determine their capabilities for modeling the relationship, both in regression and in classification, between reservoir rock properties. This thesis documents research on the capability of SVMs to model petrophysical and elastic properties in heterogeneous sandstone and carbonate reservoirs. Specifically, the capabilities of SVM regression and classification have been examined and compared to neural network-based methods, namely multilayered neural networks, radial basis function neural networks, general regression neural networks, probabilistic neural networks, and linear discriminant analysis. The petrophysical properties that have been evaluated include porosity, permeability, Poisson's ratio and Young's modulus. Statistical error analysis reveals that the SVM method yields comparable or superior predictions of petrophysical and elastic rock properties and classification of the lithology compared to neural networks. The SVM method also shows uniform prediction capability under the

  20. MODEL RESEARCH OF THE ACTIVE VIBROISOLATION OF MACHINE CABS

    Directory of Open Access Journals (Sweden)

    Jerzy MARGIELEWICZ

    2014-03-01

    Full Text Available The study carried out computer simulations of a mechatronic model of a bridge crane, intended for a theoretical evaluation of the possibility of eliminating the mechanical vibrations affecting the operator's cab of the driven machine. The model studies used fixed-value control, with the vertical displacement of the cab selected as the controlled variable. The research model also included a rheological model of the operator's body. Four overhead cranes with a lifting capacity of 50 t were examined, classified, in accordance with the European Union directive concerning the design of cranes, into the four HC stiffness classes. The use of an active vibration isolation system, in which two negative feedback loops are distinguished, eliminates the mechanical vibration transmitted to the operator very well.

  1. Electric machines modeling, condition monitoring, and fault diagnosis

    CERN Document Server

    Toliyat, Hamid A; Choi, Seungdeog; Meshgin-Kelk, Homayoun

    2012-01-01

    With countless electric motors being used in daily life, in everything from transportation and medical treatment to military operation and communication, unexpected failures can lead to the loss of valuable human life or a costly standstill in industry. To prevent this, it is important to precisely detect or continuously monitor the working condition of a motor. Electric Machines: Modeling, Condition Monitoring, and Fault Diagnosis reviews diagnosis technologies and provides an application guide for readers who want to research, develop, and implement a more effective fault diagnosis and condition monitoring.

  2. Near-term deployment of carbon capture and sequestration from biorefineries in the United States.

    Science.gov (United States)

    Sanchez, Daniel L; Johnson, Nils; McCoy, Sean T; Turner, Peter A; Mach, Katharine J

    2018-04-23

    Capture and permanent geologic sequestration of biogenic CO₂ emissions may provide critical flexibility in ambitious climate change mitigation. However, most bioenergy with carbon capture and sequestration (BECCS) technologies are technically immature or commercially unavailable. Here, we evaluate low-cost, commercially ready CO₂ capture opportunities for existing ethanol biorefineries in the United States. The analysis combines process engineering, spatial optimization, and lifecycle assessment to consider the technical, economic, and institutional feasibility of near-term carbon capture and sequestration (CCS). Our modeling framework evaluates least-cost source-sink relationships and aggregation opportunities for pipeline transport, which can cost-effectively transport small CO₂ volumes to suitable sequestration sites. 216 existing US biorefineries emit 45 Mt CO₂ annually from fermentation, of which 60% could be captured and compressed for pipeline transport for under $25/tCO₂. A sequestration credit, analogous to existing CCS tax credits, of $60/tCO₂ could incent 30 Mt of sequestration and 6,900 km of pipeline infrastructure across the United States. Similarly, a carbon abatement credit, analogous to existing tradeable CO₂ credits, of $90/tCO₂ can incent 38 Mt of abatement. Aggregation of CO₂ sources enables cost-effective long-distance pipeline transport to distant sequestration sites. Financial incentives under the low-carbon fuel standard in California and recent revisions to existing federal tax credits suggest a substantial near-term opportunity to permanently sequester biogenic CO₂. This financial opportunity could catalyze the growth of carbon capture, transport, and sequestration; improve the lifecycle impacts of conventional biofuels; support development of carbon-negative fuels; and help fulfill the mandates of low-carbon fuel policies across the United States. Copyright © 2018 the Author(s). Published by PNAS.

  3. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, solving the problem of snow cover depth modeling is based on both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the investigation results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankylä (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: Decision Trees, Adaptive Boosting, Gradient Boosting. It is demonstrated that combining modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting above decision trees: this model reproduces well both the periods of snow cover accumulation and melting. The purposeful character of the learning process for models of the gradient boosting type, their ensemble character, and the use of combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics in river basins where hydro-meteorological information is absent or insufficient.
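
    The gradient-boosting setup described above can be sketched in a few lines. This is a minimal illustration, not the authors' code: the predictor set, the synthetic data and all hyperparameters are assumptions standing in for the station records used in the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Assumed daily predictors: air temperature (degC), precipitation (mm),
# day of year. Real inputs would come from the station records.
X = np.column_stack([
    rng.normal(-2.0, 8.0, n),       # air temperature
    rng.gamma(1.5, 2.0, n),         # precipitation
    rng.integers(1, 366, n),        # day of year
])
# Toy target: depth grows with precipitation on sub-zero days.
y = np.clip(5.0 * X[:, 1] * (X[:, 0] < 0) + rng.normal(0.0, 2.0, n), 0.0, None)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0)
gbm.fit(X_tr, y_tr)
print("held-out R^2:", gbm.score(X_te, y_te))
```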

  4. Advanced Machine Learning Emulators of Radiative Transfer Models

    Science.gov (United States)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

    Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is, however, computationally expensive and needs expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has very recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, while providing an estimation of uncertainty and estimations of the gradient or finite integral forms. We review the field and recent advances in emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low-density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators on toy examples, leaf- and canopy-level PROSPECT and PROSAIL RTMs, and the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
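
    A minimal sketch of the emulation idea, assuming a cheap toy function in place of an expensive RTM such as PROSAIL or MODTRAN5; the kernel and the uncertainty-driven sampling rule are illustrative, not the published AGAPE implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def costly_rtm(x):
    # Placeholder for an expensive radiative transfer code.
    return np.sin(3.0 * x) * np.exp(-0.5 * x)

# Small initial design over the input parameter range.
X_train = np.linspace(0.0, 4.0, 12).reshape(-1, 1)
y_train = costly_rtm(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                              normalize_y=True)
gp.fit(X_train, y_train)

# The emulator returns a prediction and an uncertainty estimate; an
# AGAPE-style acquisition rule can then request the next RTM run where
# the emulator is least certain.
X_query = np.linspace(0.0, 4.0, 200).reshape(-1, 1)
mean, std = gp.predict(X_query, return_std=True)
next_run = X_query[np.argmax(std)]
print("query the RTM next at:", next_run)
```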

  5. Modeling the Swift Bat Trigger Algorithm with Machine Learning

    Science.gov (United States)

    Graff, Philip B.; Lien, Amy Y.; Baker, John G.; Sakamoto, Takanori

    2016-01-01

    To draw inferences about gamma-ray burst (GRB) source populations based on Swift observations, it is essential to understand the detection efficiency of the Swift Burst Alert Telescope (BAT). This study considers the problem of modeling the Swift/BAT triggering algorithm for long GRBs, a computationally expensive procedure, and models it using machine learning algorithms. A large sample of simulated GRBs from Lien et al. is used to train various models: random forests, boosted decision trees (with AdaBoost), support vector machines, and artificial neural networks. The best models have accuracies of ≥97% (≤3% error), which is a significant improvement on a cut in GRB flux, which has an accuracy of 89.6% (10.4% error). These models are then used to measure the detection efficiency of Swift as a function of redshift z, which is used to perform Bayesian parameter estimation on the GRB rate distribution. We find a local GRB rate density of n_0 ≈ 0.48^{+0.41}_{-0.23} Gpc^{-3} yr^{-1} with power-law indices of n_1 ≈ 1.7^{+0.6}_{-0.5} and n_2 ≈ -5.9^{+5.7}_{-0.1} for GRBs above and below a break point of z_1 ≈ 6.8^{+2.8}_{-3.2}. This methodology is able to improve upon earlier studies by more accurately modeling Swift detection and using this for fully Bayesian model fitting.

  6. Near-term Intensification of the Hydrological Cycle in the United States

    Science.gov (United States)

    Ashfaq, M.; Rastogi, D.; Mei, R.; Kao, S. C.; Naz, B. S.; Gangrade, S.

    2015-12-01

    We present state-of-the-art near-term projections of hydrological changes over the continental U.S. from a hierarchical high-resolution regional modeling framework. We dynamically downscale 11 Global Climate Models (CCSM4, ACCESS1-0, NorESM1-M, MRI-CGCM3, GFDL-ESM2M, FGOALS-g2, bcc-csm1-1, MIROC5, MPI-ESM-MR, IPSL-ESM-MR, CMCC-CM5) from the 5th phase of the Coupled Model Intercomparison Project at 4-km horizontal grid spacing using a modeling framework that consists of a regional climate model (RegCM4) and a hydrological model (VIC). All model integrations span 41 years in the historical period (1965-2005) and 41 years in the near-term future period (2010-2050) under RCP 8.5. The RegCM4 domain covers the continental U.S. and parts of Canada and Mexico at 18-km horizontal grid spacing, whereas the VIC domain covers only the continental U.S. at 4-km horizontal grid spacing. Should emissions continue to rise throughout the next four decades of the 21st century, our results suggest that every region within the continental U.S. will be at least 2°C warmer before the mid-21st century, leading to the likely intensification of the regional hydrological cycle and the acceleration of the observed trends in cold, warm and wet extremes. We also find an overall increase (decrease) in the inflows to the flood-controlling (hydroelectric) reservoirs across the United States, raising the likelihood of flooding events and significant impacts on federal hydroelectric power generation. However, certain water-stressed regions such as California will be further constrained by extreme dry and wet conditions; these regions are incapable of storing rising quantities of runoff, and wet years will not necessarily equate to an increase in water supply availability. Overall, these changes in the regional hydro-meteorology can have substantial impacts on the natural and human systems across the U.S.

  7. Machine learning based switching model for electricity load forecasting

    International Nuclear Information System (INIS)

    Fan Shu; Chen Luonan; Lee, Weijen

    2008-01-01

    In deregulated power markets, forecasting electricity loads is one of the most essential tasks for system planning, operation and decision making. Based on an integration of two machine learning techniques, Bayesian clustering by dynamics (BCD) and support vector regression (SVR), this paper proposes a novel forecasting model for day-ahead electricity load forecasting. The proposed model adopts an integrated architecture to handle the non-stationarity of time series. First, a BCD classifier is applied to cluster the input data set into several subsets by the dynamics of the time series in an unsupervised manner. Then, groups of SVRs are used to fit the training data of each subset in a supervised way. The effectiveness of the proposed model is demonstrated with actual data taken from the New York ISO and the Western Farmers Electric Cooperative in Oklahoma
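
    The two-stage architecture can be sketched as follows. Note that scikit-learn offers no Bayesian clustering by dynamics, so KMeans is used here as an explicit stand-in for the BCD stage; the data and all parameters are synthetic assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 24))                  # assumed: 24 lagged hourly loads
y = 0.8 * X[:, -1] + rng.normal(0.0, 0.1, 500)  # toy day-ahead load target

# Stage 1: unsupervised clustering of the input dynamics (KMeans stands in
# for Bayesian clustering by dynamics).
clusterer = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
labels = clusterer.labels_

# Stage 2: one SVR per identified regime, fitted in a supervised way.
experts = {k: SVR(kernel="rbf", C=10.0).fit(X[labels == k], y[labels == k])
           for k in np.unique(labels)}

# Prediction routes a new day through its regime's expert.
x_new = rng.normal(size=(1, 24))
k = clusterer.predict(x_new)[0]
print("day-ahead forecast:", experts[k].predict(x_new))
```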

  8. Coal demand prediction based on a support vector machine model

    Energy Technology Data Exchange (ETDEWEB)

    Jia, Cun-liang; Wu, Hai-shan; Gong, Dun-wei [China University of Mining & Technology, Xuzhou (China). School of Information and Electronic Engineering]

    2007-01-15

    A forecasting model for the coal demand of China using support vector regression was constructed. With the selected embedding dimension, the output vectors and input vectors were constructed based on the coal demand of China from 1980 to 2002. After comparison with linear and sigmoid kernels, a radial basis function (RBF) was adopted as the kernel function. By analyzing the relationship between the prediction error margin and the model parameters, proper parameters were chosen. A support vector machine (SVM) model with multiple inputs and a single output was proposed. Comparing the predictor with an RBF neural network on test datasets shows that the SVM predictor has higher precision and greater generalization ability. In the end, the coal demand from 2003 to 2006 is accurately forecasted. 10 refs., 2 figs., 4 tabs.
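
    A minimal sketch of the embedding-plus-SVR idea under stated assumptions: the demand series is synthetic, and the embedding dimension and kernel parameters are illustrative rather than the values selected in the paper.

```python
import numpy as np
from sklearn.svm import SVR

# Toy annual demand series; the paper used Chinese coal demand, 1980-2002.
demand = np.array([6.1, 6.4, 6.8, 7.1, 7.9, 8.5, 9.0, 9.4, 9.8,
                   10.1, 10.5, 11.2, 11.8, 12.3, 12.6, 13.0])

d = 3  # assumed embedding dimension
# Input vectors are d consecutive years; the output is the following year.
X = np.array([demand[i:i + d] for i in range(len(demand) - d)])
y = demand[d:]

svr = SVR(kernel="rbf", C=100.0, epsilon=0.01).fit(X, y)
# One-step-ahead forecast from the last d observed years.
print("next-year demand:", svr.predict(demand[-d:].reshape(1, -1)))
```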

  9. Modeling RHIC Using the Standard Machine Format Accelerator Description

    Science.gov (United States)

    Pilat, F.; Trahern, C. G.; Wei, J.; Satogata, T.; Tepikian, S.

    1997-05-01

    The Standard Machine Format (SMF) (N. Malitsky, R. Talman, et al., "A Proposed Flat Yet Hierarchical Accelerator Lattice Object Model", Particle Accel. 55, 313 (1996)) is a structured description of accelerator lattices which supports both the hierarchy of beam lines and generic lattice objects as well as the deviations (field errors, misalignments, etc.) associated with each distinct component that are necessary for accurate modeling of beam dynamics. In this paper we discuss the use of SMF to describe the Relativistic Heavy Ion Collider (RHIC) as well as the ancillary data structures (such as field quality measurements) that are necessarily incorporated into the RHIC SMF model. Future applications of SMF are outlined, including its use in the RHIC operational environment.

  10. Error modeling for surrogates of dynamical systems using machine learning

    Science.gov (United States)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-12-01

    A machine-learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests, LASSO) to map a large set of inexpensively computed `error indicators' (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering), and subsequently constructs a `local' regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We apply the proposed framework to model errors in reduced-order models of nonlinear oil--water subsurface flow simulations. The reduced-order models used in this work entail application of trajectory piecewise linearization with proper orthogonal decomposition. When the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-averaged errors.
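
    The first use of the error model (additive correction of the surrogate QoI) might be sketched as follows; the features, the data and the choice of random forests are illustrative assumptions, not the paper's oil-water flow setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 400
# Assumed error indicators: e.g. residual norms, reduced coordinates, time.
features = rng.normal(size=(n, 6))
true_qoi = rng.normal(size=n)
surrogate_qoi = true_qoi - 0.3 * features[:, 0] + 0.1 * rng.normal(size=n)
error = true_qoi - surrogate_qoi        # training target: surrogate error

error_model = RandomForestRegressor(n_estimators=200, random_state=2)
error_model.fit(features, error)

# Use 1 from the abstract: add the predicted error back as a correction.
corrected = surrogate_qoi + error_model.predict(features)
print("mean |error| before:", np.abs(error).mean())
print("mean |error| after: ", np.abs(true_qoi - corrected).mean())
```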

  11. The Abstract Machine Model for Transaction-based System Control

    Energy Technology Data Exchange (ETDEWEB)

    Chassin, David P.

    2003-01-31

    Recent work applying statistical mechanics to economic modeling has demonstrated the effectiveness of using thermodynamic theory to address the complexities of large scale economic systems. Transaction-based control systems depend on the conjecture that when control of thermodynamic systems is based on price-mediated strategies (e.g., auctions, markets), the optimal allocation of resources in a market-based control system results in an emergent optimal control of the thermodynamic system. This paper proposes an abstract machine model as the necessary precursor for demonstrating this conjecture and establishes the dynamic laws as the basis for a special theory of emergence applied to the global behavior and control of complex adaptive systems. The abstract machine in a large system amounts to the analog of a particle in thermodynamic theory. These laws permit the establishment of a theory of dynamic control of complex system behavior based on statistical mechanics. Thus we may be better able to engineer a few simple control laws for a very small number of device types, which when deployed in very large numbers and operated as a system of many interacting markets yield the stable and optimal control of the thermodynamic system.

  12. Subspace identification of Hammerstein models using support vector machines

    International Nuclear Information System (INIS)

    Al-Dhaifallah, Mujahed

    2011-01-01

    System identification is the art of finding mathematical tools and algorithms that build an appropriate mathematical model of a system from measured input and output data. The Hammerstein model, consisting of a memoryless nonlinearity followed by a dynamic linear element, is often a good trade-off, as it can represent some dynamic nonlinear systems very accurately while remaining quite simple. Moreover, the extensive knowledge about LTI system representations can be applied to the dynamic linear block. On the other hand, finding an effective representation for the nonlinearity is an active area of research. Recently, support vector machines (SVMs) and least squares support vector machines (LS-SVMs) have demonstrated powerful abilities in approximating linear and nonlinear functions. In contrast with other approximation methods, SVMs do not require a priori structural information. Furthermore, there are well established methods with guaranteed convergence (ordinary least squares, quadratic programming) for fitting LS-SVMs and SVMs. The general objective of this research is to develop new subspace algorithms for Hammerstein systems based on SVM regression.

  13. Hidden physics models: Machine learning of nonlinear partial differential equations

    Science.gov (United States)

    Raissi, Maziar; Karniadakis, George Em

    2018-03-01

    While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.

  14. Modeling and Simulation of Process-Machine Interaction in Grinding of Cemented Carbide Indexable Inserts

    Directory of Open Access Journals (Sweden)

    Wei Feng

    2015-01-01

    Full Text Available Interaction of process and machine in grinding of hard and brittle materials such as cemented carbide may cause dynamic instability of the machining process, resulting in machining errors and a decrease in productivity. Commonly, the process and the machine tool have been dealt with separately, which does not take into consideration the mutual interaction between the two subsystems and thus cannot represent real cutting operations. This paper proposes a method of modeling and simulation to better understand the process-machine interaction in the grinding of cemented carbide indexable inserts. First, a virtual grinding wheel model is built by considering the random nature of abrasive grains, and a kinematic-geometrical simulation is adopted to describe the grinding process. Then, a wheel-spindle model is simulated by means of the finite element method to represent the machine structure. The characteristic equation of the closed-loop dynamic grinding system is derived to provide a mathematical description of the process-machine interaction. Furthermore, a coupled simulation of grinding wheel-spindle deformations and grinding process force, combining both the process and machine models, is developed to investigate the interaction between process and machine. This paper provides an integrated grinding model combining the machine and process models, which can be used to predict process-machine interactions in the grinding process.

  15. Functional networks inference from rule-based machine learning models.

    Science.gov (United States)

    Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume

    2016-01-01

    Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables that are often different from, or complementary to, those found by similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify the samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in the context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process.

  16. Risk factors for near-term myocardial infarction in apparently healthy men and women

    DEFF Research Database (Denmark)

    Nordestgaard, Børge; Adourian, Aram S; Freiberg, Jacob Johannes von S

    2010-01-01

    Limited information is available regarding risk factors for the near-term (4 years) onset of myocardial infarction (MI). We evaluated established cardiovascular risk factors and putative circulating biomarkers as predictors of MI within 4 years of measurement.

  17. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Multimedia

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging and the fields of computer vision and image processing for the first time, improving performance in identifying highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  18. Rotary ATPases: models, machine elements and technical specifications.

    Science.gov (United States)

    Stewart, Alastair G; Sobti, Meghna; Harvey, Richard P; Stock, Daniela

    2013-01-01

    Rotary ATPases are molecular rotary motors involved in biological energy conversion. They either synthesize or hydrolyze the universal biological energy carrier adenosine triphosphate. Recent work has elucidated the general architecture and subunit compositions of all three sub-types of rotary ATPases. Composite models of the intact F-, V- and A-type ATPases have been constructed by fitting high-resolution X-ray structures of individual subunits or sub-complexes into low-resolution electron densities of the intact enzymes derived from electron cryo-microscopy. Electron cryo-tomography has provided new insights into the supra-molecular arrangement of eukaryotic ATP synthases within mitochondria, and mass-spectrometry has started to identify specifically bound lipids presumed to be essential for function. Taken together, these molecular snapshots show that nano-scale rotary engines have much in common with the basic design principles of man-made machines, from the function of individual "machine elements" to the requirement of the right "fuel" and "oil" for different types of motors.

  19. A Reference Model for Virtual Machine Launching Overhead

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young

    2016-07-01

    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.
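
    A hedged sketch of what such a reference model could look like: a simple regression from host utilization at launch time to expected overhead. The features, functional form and all coefficients are assumptions for illustration; the paper's model is derived from FermiCloud operational data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 300
cpu_util = rng.uniform(0.0, 1.0, n)
io_util = rng.uniform(0.0, 1.0, n)
# Toy ground truth: overhead (seconds) grows as the host saturates.
overhead = 30.0 + 40.0 * cpu_util**2 + 25.0 * io_util + rng.normal(0.0, 3.0, n)

X = np.column_stack([cpu_util, cpu_util**2, io_util])
ref_model = LinearRegression().fit(X, overhead)

# A cloud-bursting scheduler could compare predicted overheads across
# candidate hosts and launch where the expected overhead is smallest.
candidates = np.array([[0.2, 0.04, 0.1],     # lightly loaded host
                       [0.9, 0.81, 0.7]])    # heavily loaded host
print("predicted overheads (s):", ref_model.predict(candidates))
```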

  20. Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling.

    Science.gov (United States)

    Cuperlovic-Culf, Miroslava

    2018-01-11

    Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies.

  1. Machine Learning Methods for Analysis of Metabolic Data and Metabolic Pathway Modeling

    Science.gov (United States)

    Cuperlovic-Culf, Miroslava

    2018-01-01

    Machine learning uses experimental data to optimize clustering or classification of samples or features, or to develop, augment or verify models that can be used to predict behavior or properties of systems. It is expected that machine learning will help provide actionable knowledge from a variety of big data including metabolomics data, as well as results of metabolism models. A variety of machine learning methods has been applied in bioinformatics and metabolism analyses including self-organizing maps, support vector machines, the kernel machine, Bayesian networks or fuzzy logic. To a lesser extent, machine learning has also been utilized to take advantage of the increasing availability of genomics and metabolomics data for the optimization of metabolic network models and their analysis. In this context, machine learning has aided the development of metabolic networks, the calculation of parameters for stoichiometric and kinetic models, as well as the analysis of major features in the model for the optimal application of bioreactors. Examples of this very interesting, albeit highly complex, application of machine learning for metabolism modeling will be the primary focus of this review presenting several different types of applications for model optimization, parameter determination or system analysis using models, as well as the utilization of several different types of machine learning technologies. PMID:29324649

  2. Dynamic modeling of an asynchronous squirrel-cage machine; Modelisation dynamique d'une machine asynchrone a cage

    Energy Technology Data Exchange (ETDEWEB)

    Guerette, D.

    2009-07-01

    This document presents a detailed mathematical explanation and validation of the steps leading to the development of an asynchronous squirrel-cage machine model. The MatLab/Simulink software was used to model a wind turbine operating at variable speeds. The asynchronous squirrel-cage machine is an electromechanical system coupled to a magnetic circuit. The resulting electromagnetic circuit can be represented as a set of resistances, leakage inductances and mutual inductances. Different models were used for a comparison study, including the Munteanu, Boldea, Wind Turbine Blockset, and SimPowerSystem models. MatLab/Simulink modeling results were in good agreement with the results from other comparable models. Simulation results were in good agreement with analytical calculations. 6 refs, 2 tabs, 9 figs.

  3. Two-Stage Electricity Demand Modeling Using Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Krzysztof Gajowniczek

    2017-10-01

    Full Text Available Forecasting of electricity demand has become one of the most important areas of research in the electric power industry, as it is a critical component of cost-efficient power system management and planning. In this context, accurate and robust load forecasting plays a key role in reducing generation costs and ensuring the reliability of the power system. However, due to demand peaks in the power system, forecasts are inaccurate and prone to high numbers of errors. In this paper, our contributions comprise a proposed data-mining scheme for demand modeling through peak detection, as well as the use of this information to feed the forecasting system. For this purpose, we have taken a different approach from that of time series forecasting, representing it as a two-stage pattern recognition problem. We have developed a peak classification model followed by a forecasting model to estimate an aggregated demand volume. We have utilized a set of machine learning algorithms to benefit from both accurate detection of the peaks and precise forecasts, as applied to the Polish power system. The key finding is that the algorithms can detect 96.3% of electricity peaks (load value equal to or above the 99th percentile of the load distribution) and deliver accurate forecasts, with mean absolute percentage error (MAPE) of 3.10% and resistant mean absolute percentage error (r-MAPE) of 2.70% for the 24 h forecasting horizon.
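
    The two-stage pattern-recognition framing might be sketched as below, on synthetic data; the feature layout and the use of random forests for both stages are assumptions, not the paper's exact algorithm set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 24))                    # assumed hourly lag features
load = X @ rng.normal(size=24) + rng.normal(0.0, 0.5, n)
is_peak = (load >= np.quantile(load, 0.99)).astype(int)

# Stage 1: classify whether the next value is a peak (>= 99th percentile).
peak_clf = RandomForestClassifier(n_estimators=200, random_state=4)
peak_clf.fit(X, is_peak)

# Stage 2: feed the peak flag to the demand forecaster. In practice the
# flag should come from out-of-sample (cross-validated) predictions.
X_aug = np.column_stack([X, peak_clf.predict(X)])
forecaster = RandomForestRegressor(n_estimators=200, random_state=4)
forecaster.fit(X_aug, load)
```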

  4. Modeling the Virtual Machine Launching Overhead under Fermicloud

    Energy Technology Data Exchange (ETDEWEB)

    Garzoglio, Gabriele [Fermilab]; Wu, Hao [Fermilab]; Ren, Shangping [IIT, Chicago]; Timm, Steven [Fermilab]; Bernabeu, Gerard [Fermilab]; Noh, Seo-Young [KISTI, Daejeon]

    2014-11-12

    FermiCloud is a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows. The Cloud Bursting module of FermiCloud enables it, when more computational resources are needed, to automatically launch virtual machines to available resources such as public clouds. One of the main challenges in developing the cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on FermiCloud's system operational data, the VM launching overhead is not a constant. It varies with physical resource (CPU, memory, I/O device) utilization at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launch overhead reference model is needed. This paper develops a VM launch overhead reference model based on operational data obtained on FermiCloud and uses the reference model to guide the cloud bursting process.

  5. Developing a PLC-friendly state machine model: lessons learned

    Science.gov (United States)

    Pessemier, Wim; Deconinck, Geert; Raskin, Gert; Saey, Philippe; Van Winckel, Hans

    2014-07-01

    Modern Programmable Logic Controllers (PLCs) have become an attractive platform for controlling real-time aspects of astronomical telescopes and instruments due to their increased versatility, performance and standardization. Likewise, vendor-neutral middleware technologies such as OPC Unified Architecture (OPC UA) have recently demonstrated that they can greatly facilitate the integration of these industrial platforms into the overall control system. Many practical questions arise, however, when building multi-tiered control systems that consist of PLCs for low-level control, and conventional software and platforms for higher-level control. How should the PLC software be structured, so that it can rely on well-known programming paradigms on the one hand, and be mapped to a well-organized OPC UA interface on the other hand? Which programming languages of the IEC 61131-3 standard closely match the problem domains of the abstraction levels within this structure? How can the recent additions to the standard (such as the support for namespaces and object-oriented extensions) facilitate a model-based development approach? To what degree can our applications already take advantage of the more advanced parts of the OPC UA standard, such as the high expressiveness of the semantic modeling language that it defines, or the support for events, aggregation of data, automatic discovery, ...? What are the timing and concurrency problems to be expected for the higher-level tiers of the control system due to the cyclic execution of control and communication tasks by the PLCs? We try to answer these questions by demonstrating a semantic state machine model that can readily be implemented using IEC 61131 and OPC UA: one that does not aim to capture all possible states of a system, but rather attempts to organize the coarse-grained structure and behaviour of a system. In this paper we focus on the intricacies of this seemingly simple task, and on the lessons that we

  6. Modelling and Simulation of a Synchronous Machine with Power Electronic Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede

    2005-01-01

    This paper reports the modeling and simulation of a synchronous machine with a power electronic interface in a direct phase model. The implementation of a direct phase model of synchronous machines in MATLAB/SIMULINK is presented. The power electronic system associated with the synchronous machine is modelled in SIMULINK as well. The resulting model can more accurately represent non-ideal situations such as non-symmetrical parameters of the electrical machines and unbalance conditions. The model may be used for both steady state and large-signal dynamic analysis. This is particularly useful in systems where a detailed study is needed in order to assess the overall system stability. Simulation studies are performed under various operation conditions. It is shown that the developed model could be used for studies of various applications of synchronous machines such as in renewable and DG...

  7. Landmine policy in the near-term: a framework for technology analysis and action

    Energy Technology Data Exchange (ETDEWEB)

    Eimerl, D., LLNL

    1997-08-01

    Any effective solution to the problem of leftover landmines and other post-conflict unexploded ordnance (UXO) must take into account the real capabilities of demining technologies and the availability of sufficient resources to carry out demining operations. Economic and operational factors must be included in analyses of humanitarian demining. These factors will provide a framework for using currently available resources and technologies to complete this task in a time frame that is both practical and useful. Since it is likely that reliable advanced technologies for demining are still several years away, this construct applies to the intervening period. It may also provide a framework for utilizing advanced technologies as they become available. This study is an economic system model for demining operations carried out by the developed nations that clarifies the role and impact of technology on the economic performance and viability of these operations. It also provides a quantitative guide to assess the performance penalties arising from gaps in current technology, as well as the potential advantages and desirable features of new technologies that will significantly affect the international community`s ability to address this problem. Implications for current and near-term landmine and landmine technology policies are drawn.

  8. The near-term impacts of carbon mitigation policies on manufacturing industries

    International Nuclear Information System (INIS)

    Morgenstern, Richard D.; Ho Mun; Shih, J.-S.; Zhang Xuehua

    2004-01-01

    Who pays for new policies to reduce carbon dioxide and other greenhouse gas emissions in the United States? This paper considers a slice of the question by examining the near-term impact on domestic manufacturing industries of both upstream (economy-wide) and downstream (electric power industry only) carbon mitigation policies. Detailed Census data on the electricity use of four-digit manufacturing industries are combined with input-output information on inter-industry purchases to paint a detailed picture of carbon use, including effects on final demand. Regional information on electricity supply and use by region is also incorporated. A relatively simple model is developed which yields estimates of the relative burdens within the manufacturing sector of alternative carbon policies. Overall, the principal conclusion is that within the manufacturing sector (which by definition excludes coal production and electricity generation), only a small number of industries would bear a disproportionate short-term burden of a carbon tax or similar policy. Not surprisingly, an electricity-only policy affects very different manufacturing industries than an economy-wide carbon tax

  9. Assessment of two mammographic density related features in predicting near-term breast cancer risk

    Science.gov (United States)

    Zheng, Bin; Sumkin, Jules H.; Zuley, Margarita L.; Wang, Xingwei; Klym, Amy H.; Gur, David

    2012-02-01

    In order to establish a personalized breast cancer screening program, it is important to develop risk models that have high discriminatory power in predicting the likelihood of a woman developing an imaging-detectable breast cancer in the near term. Using woman's age, subjectively rated breast density (BIRADS), and computed mammographic density related features, we compared classification performance in estimating the likelihood of detecting cancer during the subsequent examination using areas under the ROC curves (AUC). The AUCs were 0.63+/-0.03, 0.54+/-0.04, 0.57+/-0.03, and 0.68+/-0.03 when using woman's age, BIRADS rating, computed mean density, and difference in computed bilateral mammographic density, respectively. Performance increased to 0.62+/-0.03 and 0.72+/-0.03 when we fused mean and difference in density with woman's age. The results suggest that, in this study, bilateral mammographic tissue density is a significantly stronger (p<0.01) risk indicator than both woman's age and mean breast density.
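
    Comparing single-feature risk indicators by AUC, as in the study's evaluation, can be sketched as follows; the outcome labels and feature distributions are synthetic placeholders, not the clinical data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 500
outcome = rng.integers(0, 2, n)       # 1 = cancer detected at next exam
age = rng.normal(55.0, 10.0, n) + 3.0 * outcome
density_diff = rng.normal(0.0, 1.0, n) + 0.8 * outcome  # bilateral difference

for name, feature in [("age", age),
                      ("bilateral density difference", density_diff)]:
    print(name, "AUC = %.2f" % roc_auc_score(outcome, feature))
```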

  10. A Near-Term, High-Confidence Heavy Lift Launch Vehicle

    Science.gov (United States)

    Rothschild, William J.; Talay, Theodore A.

    2009-01-01

    The use of well understood, legacy elements of the Space Shuttle system could yield a near-term, high-confidence Heavy Lift Launch Vehicle that offers significant performance, reliability, schedule, risk, cost, and work force transition benefits. A side-mount Shuttle-Derived Vehicle (SDV) concept has been defined that has major improvements over previous Shuttle-C concepts. This SDV is shown to carry crew plus large logistics payloads to the ISS, support an operationally efficient and cost effective program of lunar exploration, and offer the potential to support commercial launch operations. This paper provides the latest data and estimates on the configurations, performance, concept of operations, reliability and safety, development schedule, risks, costs, and work force transition opportunities for this optimized side-mount SDV concept. The results presented in this paper have been based on established models and fully validated analysis tools used by the Space Shuttle Program, and are consistent with similar analysis tools commonly used throughout the aerospace industry. While these results serve as a factual basis for comparisons with other launch system architectures, no such comparisons are presented in this paper. The authors welcome comparisons between this optimized SDV and other Heavy Lift Launch Vehicle concepts.

  11. Isolation systems influence in the seismic loading propagation analysis applied to an innovative near term reactor

    International Nuclear Information System (INIS)

    Lo Frano, R.; Forasassi, G.

    2010-01-01

    Integrity of a Nuclear Power Plant (NPP) must be ensured during the plant life in any design condition and, particularly, in the event of a severe earthquake. To investigate the seismic resistance capability of as-built structures, systems and components in the event of a Safe Shutdown Earthquake (SSE), and to analyse the related effects on a near term deployment reactor and its internals, a deterministic methodological approach, based on the evaluation of the propagation of seismic waves along the structure, was applied, considering also the use of innovative anti-seismic techniques. In this paper the attention is focused on the use and influence of seismic isolation technologies (e.g. isolators based on passive energy dissipation) that seem able to ensure the full integrity and operability of NPP structures, to enhance seismic safety (improving the design of new NPPs and, if possible, retrofitting existing facilities) and to attain a standardized plant design. For the purpose of this study a numerical assessment of the dynamic response/behaviour of the structures was accomplished by means of the finite element approach, setting up, as accurately as possible, a representative three-dimensional model of the mentioned NPP structures. The results obtained in terms of response spectra (carried out from both the isolated and non-isolated seismic analyses) are herein presented and compared in order to highlight the effectiveness of the isolation technique.

  12. Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, M Mohan; Gorin, Alexander [School of Engineering and Science, Curtin University of Technology, Sarawak (Malaysia); Abou-El-Hossein, K A, E-mail: mohan.m@curtin.edu.my [Mechanical and Aeronautical Department, Nelson Mandela Metropolitan University, Port Elizabeth, 6031 (South Africa)

    2011-02-15

    Machinable glass ceramic is an attractive advanced ceramic for producing high-accuracy miniaturized components for many applications in various industries such as aerospace, electronics, biomedical, automotive and environmental communications, due to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in the micro end-milling operation.

  13. Predictive Surface Roughness Model for End Milling of Machinable Glass Ceramic

    International Nuclear Information System (INIS)

    Reddy, M Mohan; Gorin, Alexander; Abou-El-Hossein, K A

    2011-01-01

    Machinable glass ceramic is an attractive advanced ceramic for producing high-accuracy miniaturized components for many applications in various industries such as aerospace, electronics, biomedical, automotive and environmental communications, due to its wear resistance, high hardness, high compressive strength, good corrosion resistance and excellent high temperature properties. Many research works have been conducted in the last few years to investigate the performance of different machining operations when processing various advanced ceramics. Micro end-milling is one of the machining methods used to meet the demand for micro parts. Selecting proper machining parameters is important to obtain a good surface finish during machining of machinable glass ceramic. Therefore, this paper describes the development of a predictive model for the surface roughness of machinable glass ceramic in terms of speed and feed rate in the micro end-milling operation.

  14. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    Science.gov (United States)

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  15. Access, Equity, and Opportunity. Women in Machining: A Model Program.

    Science.gov (United States)

    Warner, Heather

    The Women in Machining (WIM) program is a Machine Action Project (MAP) initiative that was developed in response to a local skilled metalworking labor shortage, despite a virtual absence of women and people of color from area shops. The project identified post-war stereotypes and other barriers that must be addressed if women are to have an equal…

  16. An improved modelling of asynchronous machine with skin-effect ...

    African Journals Online (AJOL)

    The conventional method of analysis of asynchronous machines fails to give accurate results, especially when the machine is operated at high rotor frequency. At high rotor frequency, skin-effect dominates, causing the rotor impedance to be frequency dependent. This paper therefore presents an improved method of ...

  17. Probabilistic models and machine learning in structural bioinformatics.

    Science.gov (United States)

    Hamelryck, Thomas

    2009-10-01

    Structural bioinformatics is concerned with the molecular structure of biomacromolecules on a genomic scale, using computational methods. Classic problems in structural bioinformatics include the prediction of protein and RNA structure from sequence, the design of artificial proteins or enzymes, and the automated analysis and comparison of biomacromolecules in atomic detail. The determination of macromolecular structure from experimental data (for example coming from nuclear magnetic resonance, X-ray crystallography or small angle X-ray scattering) has close ties with the field of structural bioinformatics. Recently, probabilistic models and machine learning methods based on Bayesian principles are providing efficient and rigorous solutions to challenging problems that were long regarded as intractable. In this review, I will highlight some important recent developments in the prediction, analysis and experimental determination of macromolecular structure that are based on such methods. These developments include generative models of protein structure, the estimation of the parameters of energy functions that are used in structure prediction, the superposition of macromolecules and structure determination methods that are based on inference. Although this review is not exhaustive, I believe the selected topics give a good impression of the exciting new, probabilistic road the field of structural bioinformatics is taking.

  18. Coarse-Grained Modeling of Molecular Machines in AAA+ Family

    Science.gov (United States)

    Yoshimoto, Kenji; Brooks, Charles L., III

    2007-03-01

    We present a new coarse-grained model of the large protein complexes which belong to AAA+ (ATPase associated with diverse cellular activities) family. The AAA+ proteins are highly efficient molecular machines driven by the ATP (adenosine triphosphate) binding and hydrolysis and are involved in various cellular events. While a number of groups are developing various coarse-grained models for different AAA+ proteins, the molecular details of ATP binding and hydrolysis are often neglected. In this study, we provide a robust approach to coarse-graining both the AAA+ protein and the ATP (or ADP) molecules. By imposing the distance restraints between the phosphates of the ATP and the neighboring Cα of the proteins, which are used to conserve a typical motif of ATP binding pocket, we are able to predict large conformational changes of the AAA+ proteins, such as replicative hexameric helicases. In the case of the hexameric LTag (large tumor antigen), the backbone RMSD between the predicted ATP-bound structure and the X-ray structure is 1.2 Å, and the RMSD between the predicted ADP-bound structure and the X-ray structure is 1.5 Å. Using the same approach, we also investigate conformational changes in the hexameric E1 protein, whose X-ray structure was recently solved with ssDNA, and give some insights into the molecular mechanisms of DNA translocation.

  19. A comparative study of slope failure prediction using logistic regression, support vector machine and least square support vector machine models

    Science.gov (United States)

    Zhou, Lim Yi; Shan, Fam Pei; Shimizu, Kunio; Imoto, Tomoaki; Lateh, Habibah; Peng, Koay Swee

    2017-08-01

    A comparative study of logistic regression, support vector machine (SVM) and least square support vector machine (LSSVM) models has been done to predict the slope failure (landslide) along East-West Highway (Gerik-Jeli). The effects of two monsoon seasons (southwest and northeast) that occur in Malaysia are considered in this study. Two related factors of occurrence of slope failure are included in this study: rainfall and underground water. For each method, two predictive models are constructed, namely SOUTHWEST and NORTHEAST models. Based on the results obtained from logistic regression models, two factors (rainfall and underground water level) contribute to the occurrence of slope failure. The accuracies of the three statistical models for two monsoon seasons are verified by using Relative Operating Characteristics curves. The validation results showed that all models produced prediction of high accuracy. For the results of SVM and LSSVM, the models using RBF kernel showed better prediction compared to the models using linear kernel. The comparative results showed that, for SOUTHWEST models, three statistical models have relatively similar performance. For NORTHEAST models, logistic regression has the best predictive efficiency whereas the SVM model has the second best predictive efficiency.
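
    A sketch of the three-model comparison on synthetic rainfall/groundwater features. scikit-learn has no LSSVM implementation, so kernel ridge regression on ±1 labels is used here as a least-squares surrogate; this substitution and the data are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 600
X = np.column_stack([rng.gamma(2.0, 30.0, n),   # rainfall (mm)
                     rng.normal(5.0, 2.0, n)])  # groundwater level (m)
y = ((0.01 * X[:, 0] - 0.4 * X[:, 1] + rng.normal(0.0, 1.0, n)) > -1.0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=6)

logit = LogisticRegression().fit(X_tr, y_tr)
svm = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
lssvm_like = KernelRidge(kernel="rbf").fit(X_tr, 2 * y_tr - 1)  # LS surrogate

# Compare the three models by ROC AUC, as in the study's ROC-based validation.
for name, scores in [
    ("logistic", logit.predict_proba(X_te)[:, 1]),
    ("SVM (RBF)", svm.predict_proba(X_te)[:, 1]),
    ("LSSVM-like", lssvm_like.predict(X_te)),
]:
    print(name, "ROC AUC = %.2f" % roc_auc_score(y_te, scores))
```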

  20. Impact of Model Detail of Synchronous Machines on Real-time Transient Stability Assessment

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Østergaard, Jacob

    2013-01-01

    In this paper, it is investigated how detailed the model of a synchronous machine needs to be in order to assess transient stability using a Single Machine Equivalent (SIME). The results will show how the stability mechanism and the stability assessment are affected by the model detail. In order ...

  1. Development of Mathematical Model for Lifecycle Management Process of New Type of Multirip Saw Machine

    Directory of Open Access Journals (Sweden)

    B. V. Phung

    2017-01-01

    Full Text Available The subject of research is a new type of multirip saw machine with circular reciprocating saw blades. This machine has a number of advantages in comparison with other machines of similar purpose. The paper presents an overview of different types of saw equipment and describes the basic characteristics of the machine under investigation. Using the concept of lifecycle management of the considered machine in a unified information space is necessary to improve quality and competitiveness in the current production environment. Throughout this lifecycle all the members, namely designers, technologists, customers, etc., tend to optimize the overall machine design as much as possible. However, this is not always achievable; at the boundaries between phases there are often mismatching, if not outright conflicting, requirements. For example, improvement of mass characteristics can lead to poor stability and rigidity of the saw blade. Improving machine output by increasing the rotation frequency of the machine motor, on the other side, reduces the stability of the saw blades, and so on. In order to provide a coherent framework for the collaborative environment between the members of the lifecycle, the article presents a technique to construct a mathematical model that allows combining all the different members' requirements in a unified information model. The article also gives an analysis of the kinematic and dynamic behavior and the technological characteristics of the machine, and describes in detail all the controlled parameters, functional constraints, and quality criteria of the machine under consideration. Depending on the controlled parameters, the analytical relationships formulate the functional constraints and quality criteria of the machine. The proposed algorithm allows fast and exact calculation of all the functional constraints and quality criteria of the machine for a given vector of the control

  2. Simulated Near-term Climate Change Impacts on Major Crops across Latin America and the Caribbean

    Science.gov (United States)

    Gourdji, S.; Mesa-Diez, J.; Obando-Bonilla, D.; Navarro-Racines, C.; Moreno, P.; Fisher, M.; Prager, S.; Ramirez-Villegas, J.

    2016-12-01

    Robust estimates of climate change impacts on agricultural production can help to direct investments in adaptation in the coming decades. In this study commissioned by the Inter-American Development Bank, near-term climate change impacts (2020-2049) are simulated relative to a historical baseline period (1971-2000) for five major crops (maize, rice, wheat, soybean and dry bean) across Latin America and the Caribbean (LAC) using the DSSAT crop model. No adaptation or technological change is assumed, thereby providing an analysis of existing climatic stresses on yields in the region and a worst-case scenario in the coming decades. DSSAT is run across irrigated and rain-fed growing areas in the region at a 0.5° spatial resolution for each crop. Crop model inputs for soils, planting dates, crop varieties and fertilizer applications are taken from previously-published datasets, and also optimized for this study. Results show that maize and dry bean are the crops most affected by climate change, followed by wheat, with only minimal changes for rice and soybean. Generally, rain-fed production sees more severe yield declines than irrigated production, although large increases in irrigation water are needed to maintain yields, reducing the yield-irrigation productivity in most areas and potentially exacerbating existing supply limitations in watersheds. This is especially true for rice and soybean, the two crops showing the most neutral yield changes. Rain-fed yields for maize and bean are projected to decline most severely in the sub-tropical Caribbean, Central America and northern South America, where climate models show a consistent drying trend. Crop failures are also projected to increase in these areas, necessitating switches to other crops or investment in adaptation measures. Generally, investment in agricultural adaptation to climate change (such as improved seed and irrigation infrastructure) will be needed throughout the LAC region in the 21st century.

  3. A self-calibrating robot based upon a virtual machine model of parallel kinematics

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Eiríksson, Eyþór Rúnar; Hansen, Hans Nørgaard

    2016-01-01

    A delta-type parallel kinematics system for Additive Manufacturing has been created, which through a probing system can recognise its geometrical deviations from nominal and compensate for these in the driving inverse kinematic model of the machine. Novelty is that this model is derived from a virtual machine of the kinematics system, built on principles from geometrical metrology. Relevant mathematically non-trivial deviations to the ideal machine are identified and decomposed into elemental deviations. From these deviations, a routine is added to a physical machine tool, which allows it to recognise its own geometry by probing the vertical offset from tool point to the machine table at positions in the horizontal plane. After automatic calibration, the positioning error of the machine tool was reduced from an initial error after assembly of ±170 µm to a calibrated error of ±3 µm.

  4. Multi-objective optimization model of CNC machining to minimize processing time and environmental impact

    Science.gov (United States)

    Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad

    2017-11-01

    Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. One application of modern technology is CNC machining, and one of the processes that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to choose machining parameters that minimize both processing time and environmental impact. This research develops a multi-objective optimization model for the CNC turning process that minimizes processing time and environmental impact and yields optimal values of the decision variables, cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
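    A minimal sketch of how such a bi-objective turning problem can be scalarized and solved with a weighted sum is given below. All numerical values (workpiece geometry, idle power, the cutting-power model and the eco-indicator conversion factor) are invented stand-ins, not the paper's data, and a general-purpose optimizer is used instead of OptQuest.

    ```python
    # Weighted-sum bi-objective optimization of turning parameters (illustrative only).
    import numpy as np
    from scipy.optimize import minimize

    D, L_PASS, AP = 50.0, 120.0, 1.5   # diameter (mm), cut length (mm), depth of cut (mm); assumed
    P_IDLE = 1.2                        # machine idle power (kW); assumed
    ECO_PER_KWH = 0.05                  # eco-indicator points per kWh; assumed

    def processing_time(x):
        """Turning time for one pass, in minutes."""
        v, f = x                                  # cutting speed (m/min), feed (mm/rev)
        n = 1000.0 * v / (np.pi * D)              # spindle speed (rev/min)
        return L_PASS / (n * f)

    def environmental_impact(x):
        """Eco-indicator points for the electricity consumed during the pass."""
        v, f = x
        p_cut = 2.5 * AP * f * v / 60.0           # crude cutting-power model (kW); assumed
        return ECO_PER_KWH * (P_IDLE + p_cut) * processing_time(x) / 60.0

    bounds = [(60.0, 180.0), (0.05, 0.40)]        # feasible v and f ranges; assumed
    t_ref = processing_time([60.0, 0.05])         # normalisation references
    e_ref = environmental_impact([60.0, 0.05])

    def weighted_sum(x, w=0.5):
        return w * processing_time(x) / t_ref + (1.0 - w) * environmental_impact(x) / e_ref

    res = minimize(weighted_sum, x0=[100.0, 0.2], bounds=bounds, method="L-BFGS-B")
    print("v* = %.1f m/min, f* = %.3f mm/rev" % tuple(res.x))
    ```

    Sweeping the weight w over (0, 1) traces an approximate Pareto front between the two objectives.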

  5. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    We used field data on tick abundance together with environmental variables derived from satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance. While the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict distribution and abundance for larger areas than can be sampled directly. Future work includes analysing the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region.
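    The two-stage approach (presence/absence, then abundance where present) maps naturally onto gradient-boosted trees. The sketch below uses synthetic stand-ins for the satellite-derived predictors; the variable choices and the generating rule are assumptions for illustration, not the study's data.

    ```python
    # Boosted-regression-tree sketch: stage 1 predicts presence, stage 2 abundance.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.uniform(0.0, 0.9, n),      # NDVI (assumed predictor)
        rng.uniform(-2.0, 18.0, n),    # mean temperature, deg C (assumed predictor)
        rng.uniform(0.0, 1200.0, n),   # elevation, m (assumed predictor)
    ])
    presence = (X[:, 0] > 0.3) & (X[:, 1] > 5.0)   # synthetic "ticks present" rule
    abundance = np.where(presence, 5 + 20 * X[:, 0] + rng.normal(0, 2, n), 0.0)

    clf = GradientBoostingClassifier().fit(X, presence)          # stage 1
    reg = GradientBoostingRegressor().fit(X[presence], abundance[presence])  # stage 2

    cell = np.array([[0.6, 9.0, 150.0]])    # one grid cell of predictor values
    print(clf.predict_proba(cell)[0, 1], reg.predict(cell)[0])
    ```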

  6. Using bias correction to achieve reliable near-term climate projections

    Science.gov (United States)

    Van Schaeybroeck, Bert; Vannitsem, Stéphane; Termonia, Piet

    2017-04-01

    Internationally coordinated climate initiatives (e.g. CORDEX or CMIP5) collect large multi-model ensembles of climate simulations to provide a sample of potential outcomes. Due to their large model biases, the climate models are used for sensitivity experiments in the context of climate change, rather than for reproducing exact climatological conditions. In other words, the main model outputs from simulations under a certain greenhouse gas scenario are the climate changes or trends rather than the absolute values. The ensemble simulations are perfectly reliable when the reality (observations) can be considered as a member of the ensemble. The reliability of climate predictions has already been investigated on different time scales, from monthly and seasonal up to decadal ones (Räisänen, 2007; Weisheimer, 2011; Corti, 2012). However, it has been shown that global models cannot reliably reproduce climate change trends of the past decades (Van Oldenborgh et al. 2013). More specifically, for both precipitation and temperature, the observations are more frequently an outlier to the CMIP5 ensemble than expected. Such underdispersive ensembles are common in medium-range ensemble weather forecasting; there, however, the lack of reliability is overcome by the use of advanced bias-correction methods (Van Schaeybroeck and Vannitsem, 2015). We present an application of such post-processing (also called Model Output Statistics, MOS) techniques to climate predictions, with the aim of increasing the reliability of climate trends from the CORDEX ensemble. After a validation of the method on a historical period, we apply the calibration to different future near-term scenarios. The applied technique allows each ensemble member to be corrected in such a way that spatio-temporal correlations are preserved. References S. Corti, A. Weisheimer, T. N. Palmer, F. J. Doblas-Reyes, and L. Magnusson, Reliability of decadal predictions, Geophys. Res. Lett., 39, L21712
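    For orientation, here is a minimal empirical quantile-mapping sketch, one common MOS-style bias correction. It is only a simplified illustration; the member-by-member method described here, which preserves spatio-temporal correlations, is more elaborate than this.

    ```python
    # Empirical quantile mapping: map future model values through the
    # historical model-to-observation quantile relation.
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_future):
        mh = np.sort(model_hist)
        oh = np.sort(obs_hist)
        # empirical CDF position of each future value within the historical model data
        p = np.searchsorted(mh, model_future, side="right") / mh.size
        p = np.clip(p, 1.0 / oh.size, 1.0)
        # read off the observed value at the same quantile
        return np.quantile(oh, p)

    rng = np.random.default_rng(1)
    model_hist = rng.normal(14.0, 2.0, 1000)    # biased, underdispersive model climate
    obs_hist = rng.normal(15.5, 3.0, 1000)      # observed climate
    model_future = rng.normal(15.0, 2.0, 100)   # raw projection (warmed by 1 K)
    print(quantile_map(model_hist, obs_hist, model_future).mean())
    ```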

  7. Towards an automatic model transformation mechanism from UML state machines to DEVS models

    Directory of Open Access Journals (Sweden)

    Ariel González

    2015-08-01

    Full Text Available The development of complex event-driven systems requires studies and analysis prior to deployment, with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems, and existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) either to build or to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates modeling from simulation, and there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed, and examples of its application to analyze the behavior of an automatic banking machine and an elevator control system are presented.
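    To make the target formalism concrete, here is a minimal rendering of a DEVS atomic model in plain Python. It is a sketch of the formalism only, not the PowerDEVS API or the QVT transformation, and it covers just the autonomous part (a full DEVS model also has an external transition function for inputs).

    ```python
    # Minimal DEVS atomic model: state set, time advance ta(s), internal
    # transition delta_int(s), and output function lambda(s).
    class Blinker:
        """Atomic DEVS model that alternates between 'on' and 'off'."""
        def __init__(self):
            self.state = "off"
        def time_advance(self):              # ta(s): time to remain in current state
            return 1.0 if self.state == "off" else 0.5
        def internal_transition(self):       # delta_int(s)
            self.state = "on" if self.state == "off" else "off"
        def output(self):                    # lambda(s), emitted just before delta_int
            return self.state

    def simulate(model, until=5.0):
        t = 0.0
        while t + model.time_advance() <= until:
            t += model.time_advance()
            print(f"t={t:.1f}: output={model.output()}")
            model.internal_transition()

    simulate(Blinker())
    ```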

  8. Experimental "evolutional machines": mathematical and experimental modeling of biological evolution

    Science.gov (United States)

    Brilkov, A. V.; Loginov, I. A.; Morozova, E. V.; Shuvaev, A. N.; Pechurkin, N. S.

    Experimentalists possess model systems of two major types for the study of evolution: continuous cultivation in the chemostat, and long-term development in closed laboratory microecosystems with several trophic structures. If evolutionary changes, or transfers from one steady state to another as a result of changing qualitative properties of the system, take place in such systems, the main characteristics of these evolutionary steps can be measured. Until now this has not been realized methodologically, though a lot of data on the work of both types of evolutionary machines has been collected. In our experiments with long-term continuous cultivation we used bacterial strains carrying plasmids with the cloned genes of bioluminescence and green fluorescent protein, whose expression level can be easily changed and controlled. In spite of the apparent kinetic diversity of evolutionary transfers in the two types of systems, studying them can reveal the general mechanisms characterizing the increase of the energy flow used by populations of the primary producer. According to the energy approach, at a spontaneous transfer from one steady state to another, e.g. in the process of microevolution, competition or selection, heat dissipation, which characterizes the rate of entropy growth, should increase rather than decrease or stay steady as usually believed. The results of our observations of experimental evolution require further development of the thermodynamic theory of open and closed biological systems and further study of the general mechanisms of biological evolution.

  9. Assessing the near-term risk of climate uncertainty : interdependencies among the U.S. states.

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.

    2010-04-01

    Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty for risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.

  10. Modelling Machine Tools using Structure Integrated Sensors for Fast Calibration

    Directory of Open Access Journals (Sweden)

    Benjamin Montavon

    2018-02-01

    Full Text Available Monitoring of the relative deviation between commanded and actual tool tip position, which limits the volumetric performance of the machine tool, enables the use of contemporary compensation methods to reduce tolerance mismatch and the uncertainties of on-machine measurements. The development of a primarily optical sensor setup capable of being integrated into the machine structure without limiting its operating range is presented. The use of a frequency-modulating interferometer and photosensitive arrays in combination with a Gaussian laser beam allows fast and automated online measurement of the axes' motion errors and thermal conditions, with accuracy comparable to state-of-the-art optical measuring instruments for offline machine tool calibration, at lower cost and with smaller dimensions. The development is tested through simulation of the sensor setup based on raytracing and Monte-Carlo techniques.

  11. Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.

    Science.gov (United States)

    Musso, Fabio; Feverati, Giovanni

    2012-01-01

    We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection, and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines, indicating that the result is much more general than the model considered here and could also play a role in biological evolution. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
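    The error-threshold effect can be illustrated without Turing machines at all. The following quasispecies-style toy, an assumption-laden simplification rather than the authors' model, treats a genome as a bit string whose count of 1-bits stands in for active code, and shows how the amount of active code maintained by selection drops as the point mutation probability grows.

    ```python
    # Toy mutation-selection dynamics: fitness-proportional selection of
    # bit-string genomes under per-bit point mutation.
    import numpy as np

    rng = np.random.default_rng(2)
    L, N, GENS = 64, 200, 300              # genome length, population size, generations

    def evolve(mu):
        pop = rng.integers(0, 2, (N, L))   # 1 = active code bit
        for _ in range(GENS):
            fitness = pop.sum(axis=1).astype(float) + 1e-9
            parents = rng.choice(N, N, p=fitness / fitness.sum())
            pop = pop[parents]
            flips = rng.random((N, L)) < mu    # point mutations
            pop = np.where(flips, 1 - pop, pop)
        return pop.sum(axis=1).mean()      # mean active code maintained

    for mu in (0.001, 0.01, 0.05, 0.1):
        print(f"mu={mu}: mean active bits = {evolve(mu):.1f}")
    ```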

  12. Machine learning in updating predictive models of planning and scheduling transportation projects

    Science.gov (United States)

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  13. Underlying finite state machine for the social engineering attack detection model

    CSIR Research Space (South Africa)

    Mouton, Francois

    2017-08-01

    Full Text Available The underlying finite state machine allows one to have a clearer overview of the mental processing performed within the model. While the current model provides a general procedural template for implementing detection mechanisms for social engineering attacks, the finite state machine provides a...

  14. Profound hypotension and associated electrocardiographic changes during prolonged cord occlusion in the near term fetal sheep

    NARCIS (Netherlands)

    Wibbens, B; Westgate, JA; Bennet, L; Roelfsema; De Haan, HH; Hunter, CJ; Gunn, AJ

    Objective: To determine whether the onset of fetal hypotension during profound asphyxia is reflected by alterations in the ratio between the T height, measured from the level of the PQ interval, and the QRS amplitude (T/QRS ratio) and ST waveform. Study design: Chronically instrumented near-term

  15. Acute maternal rehydration increases the urine production rate in the near-term human fetus

    NARCIS (Netherlands)

    Haak, MC; Aarnoudse, JG; Oosterhof, H.

    OBJECTIVE: We sought to investigate the effect of a decrease of maternal plasma osmolality produced by hypotonic rehydration on the fetal urine production rate in normal near-term human fetuses. STUDY DESIGN: Twenty-one healthy pregnant women attending the clinic for antenatal care were studied

  16. Photovoltaic (PV) Pricing Trends: Historical, Recent, and Near-Term Projections

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, D.; Barbose, G.; Margolis, R.; Wiser, R.; Darghouth, N.; Goodrich, A.

    2012-11-01

    This report helps to clarify the confusion surrounding different estimates of system pricing by distinguishing between past, current, and near-term projected estimates. It also discusses the different methodologies and factors that impact the estimated price of a PV system, such as system size, location, technology, and reporting methods. These factors, including timing, can have a significant impact on system pricing.

  17. Do differences in future sulfate emission pathways matter for near-term climate? A case study for the Asian monsoon

    Science.gov (United States)

    Bartlett, Rachel E.; Bollasina, Massimo A.; Booth, Ben B. B.; Dunstone, Nick J.; Marenco, Franco; Messori, Gabriele; Bernie, Dan J.

    2018-03-01

    Anthropogenic aerosols could dominate over greenhouse gases in driving near-term hydroclimate change, especially in regions with high present-day aerosol loading such as Asia. Uncertainties in near-future aerosol emissions represent a potentially large, yet unexplored, source of ambiguity in climate projections for the coming decades. We investigated the near-term sensitivity of the Asian summer monsoon to aerosols by means of transient modelling experiments using HadGEM2-ES under two existing climate change mitigation scenarios selected to have similar greenhouse gas forcing, but to span a wide range of plausible global sulfur dioxide emissions. Increased sulfate aerosols, predominantly from East Asian sources, lead to large regional dimming through aerosol-radiation and aerosol-cloud interactions. This results in surface cooling and anomalous anticyclonic flow over land, while abating the western Pacific subtropical high. The East Asian monsoon circulation weakens and precipitation stagnates over Indochina, resembling the observed southern-flood-northern-drought pattern over China. Large-scale circulation adjustments drive suppression of the South Asian monsoon and a westward extension of the Maritime Continent convective region. Remote impacts across the Northern Hemisphere are also generated, including a northwestward shift of West African monsoon rainfall induced by the westward displacement of the Indian Ocean Walker cell, and temperature anomalies in northern midlatitudes linked to propagation of Rossby waves from East Asia. These results indicate that aerosol emissions are a key source of uncertainty in near-term projection of regional and global climate; a careful examination of the uncertainties associated with aerosol pathways in future climate assessments must be highly prioritised.

  18. Global and Regional Temperature-change Potentials for Near-term Climate Forcers

    Science.gov (United States)

    Collins, W.J.; Fry, M. M.; Yu, H.; Fuglestvedt, J. S.; Shindell, D. T.; West, J. J.

    2013-01-01

    The emissions of reactive gases and aerosols can affect climate through the burdens of ozone, methane and aerosols, having both cooling and warming effects. These species are generally referred to as near-term climate forcers (NTCFs) or short-lived climate pollutants (SLCPs) because of their short atmospheric residence time. Their mitigation would be attractive for both air quality and climate on a 30-year timescale, provided it is not at the expense of CO2 mitigation. In this study we examine the climate effects of the emissions of NTCFs from 4 continental regions (East Asia, Europe, North America and South Asia) using results from the Task Force on Hemispheric Transport of Air Pollution Source-Receptor global chemical transport model simulations. We address 3 aerosol species (sulphate, particulate organic matter and black carbon - BC) and 4 ozone precursors (methane, reactive nitrogen oxides - NOx, volatile organic compounds - VOC, and carbon monoxide - CO). For the aerosols, the global warming potentials (GWPs) and global temperature change potentials (GTPs) are simply time-dependent scalings of the equilibrium radiative forcing, with the GTPs decreasing more rapidly with time than the GWPs. While the aerosol climate metrics have only a modest dependence on emission region, emissions of NOx and VOCs from South Asia have GWPs and GTPs of higher magnitude than those from the other northern hemisphere regions. On a regional basis, the northern mid-latitude temperature response to northern mid-latitude emissions is approximately twice as large as the global average response for aerosol emissions, and about 20-30% larger than the global average for methane, VOC and CO emissions. We also found that temperatures in the Arctic latitudes appear to be particularly sensitive to black carbon emissions from South Asia.
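    For reference, the two metrics named above have standard definitions in the metric literature, consistent with the description here (the regional ARTPs of the companion record below are a regionalised extension of the same idea). For an emitted species x and time horizon H:

    ```latex
    % GWP compares time-integrated radiative forcing; GTP compares the
    % end-point temperature response, both relative to CO2.
    \begin{align}
      \mathrm{GWP}_x(H) &= \frac{\int_0^H \mathrm{RF}_x(t)\,\mathrm{d}t}
                               {\int_0^H \mathrm{RF}_{\mathrm{CO_2}}(t)\,\mathrm{d}t},
      &
      \mathrm{GTP}_x(H) &= \frac{\Delta T_x(H)}{\Delta T_{\mathrm{CO_2}}(H)}.
    \end{align}
    ```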

  19. Ecological and biomedical effects of effluents from near-term electric vehicle storage battery cycles

    Energy Technology Data Exchange (ETDEWEB)

    1980-05-01

    An assessment of the ecological and biomedical effects due to commercialization of storage batteries for electric and hybrid vehicles is given. It deals only with the near-term batteries, namely Pb/acid, Ni/Zn, and Ni/Fe, but the complete battery cycle is considered, i.e., mining and milling of raw materials; manufacture of the batteries, cases and covers; use of the batteries in electric vehicles, including the charge-discharge cycles; recycling of spent batteries; and disposal of nonrecyclable components. The gaseous, liquid, and solid emissions from various phases of the battery cycle are identified. The effluent dispersal in the environment is modeled and ecological effects are assessed in terms of biogeochemical cycles. The metabolic and toxic responses of humans and laboratory animals to constituents of the effluents are discussed. Pertinent environmental and health regulations related to the battery industry are summarized and regulatory implications for large-scale storage battery commercialization are discussed. Each of the seven sections was abstracted and indexed individually for EDB/ERA. Additional information is presented in the seven appendixes entitled: growth rate scenario for lead/acid battery development; changes in battery composition during discharge; dispersion of stack and fugitive emissions from battery-related operations; methodology for estimating population exposure to total suspended particulates and SO/sub 2/ resulting from central power station emissions for the daily battery charging demand of 10,000 electric vehicles; determination of As air emissions from Zn smelting; health effects: research related to EV battery technologies. (JGB)

  20. Global and regional temperature-change potentials for near-term climate forcers

    Directory of Open Access Journals (Sweden)

    W. J. Collins

    2013-03-01

    Full Text Available We examine the climate effects of the emissions of near-term climate forcers (NTCFs) from 4 continental regions (East Asia, Europe, North America and South Asia) using results from the Task Force on Hemispheric Transport of Air Pollution Source-Receptor global chemical transport model simulations. We address 3 aerosol species (sulphate, particulate organic matter and black carbon) and 4 ozone precursors (methane, reactive nitrogen oxides (NOx), volatile organic compounds and carbon monoxide). We calculate the global climate metrics: global warming potentials (GWPs) and global temperature change potentials (GTPs). For the aerosols these metrics are simply time-dependent scalings of the equilibrium radiative forcings. The GTPs decrease more rapidly with time than the GWPs. The aerosol forcings, and hence climate metrics, have only a modest dependence on emission region. The metrics for ozone precursors include the effects on the methane lifetime. The impacts via methane are particularly important for the 20 yr GTPs. Emissions of NOx and VOCs from South Asia have GWPs and GTPs of higher magnitude than those from the other Northern Hemisphere regions. The analysis is further extended by examining the temperature-change impacts in 4 latitude bands and calculating absolute regional temperature-change potentials (ARTPs). The latitudinal pattern of the temperature response does not directly follow the pattern of the diagnosed radiative forcing. We find that temperatures in the Arctic latitudes appear to be particularly sensitive to BC emissions from South Asia. The northern mid-latitude temperature response to northern mid-latitude emissions is approximately twice as large as the global average response for aerosol emissions, and about 20-30% larger than the global average for methane, VOC and CO emissions.

  1. Using financial risk measures for analyzing generalization performance of machine learning models.

    Science.gov (United States)

    Takeda, Akiko; Kanamori, Takafumi

    2014-09-01

    We propose a unified machine learning model (UMLM) for two-class classification, regression and outlier (or novelty) detection via a robust optimization approach. The model embraces various machine learning models, such as support vector machine-based and minimax probability machine-based classification and regression models. The unified framework makes it possible to compare and contrast existing learning models and to explain their differences and similarities. In this paper, after relating existing learning models to UMLM, we show some theoretical properties of UMLM. Concretely, we show an interpretation of UMLM as minimizing a well-known financial risk measure (worst-case value-at-risk (VaR) or conditional VaR), derive generalization bounds for UMLM using such a risk measure, and prove that solving problems of UMLM leads to estimators with minimized generalization bounds. These theoretical properties are applicable to the related existing learning models. Copyright © 2014 Elsevier Ltd. All rights reserved.
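    To make the financial connection concrete, here is a minimal sketch of the risk measures the paper builds on, computed for an empirical sample of losses. The loss data are random stand-ins, and the robust-optimization training problem itself is not reproduced.

    ```python
    # Empirical VaR and CVaR of a loss sample: CVaR is the mean of the worst
    # (1 - alpha) fraction of losses.
    import numpy as np

    def var_cvar(losses, alpha=0.9):
        losses = np.sort(np.asarray(losses))
        k = int(np.ceil(alpha * losses.size))    # index of the alpha-quantile
        var = losses[k - 1]                      # value-at-risk
        cvar = losses[k - 1:].mean()             # mean of the tail beyond VaR
        return var, cvar

    rng = np.random.default_rng(3)
    losses = rng.standard_normal(10_000)         # e.g. per-sample surrogate losses
    print("VaR=%.3f  CVaR=%.3f" % var_cvar(losses, alpha=0.95))
    ```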

  2. Balancing of rotating machines using influence coefficients calculated by numerical models; Equilibrage des machines tournantes par coefficients d`influence a l`aide de modeles numeriques

    Energy Technology Data Exchange (ETDEWEB)

    Chevalier, R.; Bigret, R.; Karajani, R.; Vialard, S.

    1995-09-01

    The balancing of large rotating machines (turbine-generator sets and reactor coolant pumps) is generally carried out at Electricite de France using an influence coefficient method. For this, the influence of unbalances in the balancing planes has to be ascertained, which involves stopping and starting the machines several times. The purpose of the presented study is to analyse the possibility of reducing machine unavailability by using influence coefficients calculated with the help of an adjusted numerical (unbalance response) model for the balancing process. The principles of this method are presented and applied to a mock-up of a shaft line fitted with a full set of instruments (bearings and shaft) and having modal characteristics similar to those of common machines. The results are encouraging and show the feasibility of the method.
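    As a sketch of the underlying computation, assuming linearity of vibration in the correction unbalances (the premise of the influence coefficient method), the correction weights follow from a complex least-squares problem. The numbers below are invented for illustration; in the approach studied here, the influence matrix would come from the adjusted numerical model rather than from trial-weight runs.

    ```python
    # Influence-coefficient balancing: v = v0 + A w, choose w to cancel v0 in
    # the least-squares sense. Complex numbers encode amplitude and phase.
    import numpy as np

    v0 = np.array([2.0 + 1.0j, 1.5 - 0.5j, 0.8 + 0.3j])   # measured vibration (mm/s)
    A = np.array([[0.9 + 0.1j, 0.2 - 0.3j],               # influence coefficients
                  [0.4 + 0.2j, 0.7 + 0.1j],               # (vibration per unit unbalance)
                  [0.1 - 0.1j, 0.3 + 0.2j]])

    w, *_ = np.linalg.lstsq(A, -v0, rcond=None)           # correction unbalances
    residual = v0 + A @ w
    print("correction weights:", np.round(w, 3))
    print("residual vibration:", np.round(np.abs(residual), 3))
    ```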

  3. A Sustainable Model for Integrating Current Topics in Machine Learning Research into the Undergraduate Curriculum

    Science.gov (United States)

    Georgiopoulos, M.; DeMara, R. F.; Gonzalez, A. J.; Wu, A. S.; Mollaghasemi, M.; Gelenbe, E.; Kysilka, M.; Secretan, J.; Sharma, C. A.; Alnsour, A. J.

    2009-01-01

    This paper presents an integrated research and teaching model that has resulted from an NSF-funded effort to introduce results of current Machine Learning research into the engineering and computer science curriculum at the University of Central Florida (UCF). While in-depth exposure to current topics in Machine Learning has traditionally occurred…

  4. International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines

    CERN Document Server

    Belyaev, Alexander; Krommer, Michael

    2017-01-01

    The papers in this volume present and discuss the frontiers in the mechanics of controlled machines and structures. They are based on papers presented at the International Workshop on Advanced Dynamics and Model Based Control of Structures and Machines held in Vienna in September 2015. The workshop continues a series of international workshops held in Linz (2008) and St. Petersburg (2010).

  5. Static Stiffness Modeling of a Novel PKM-Machine Tool Structure

    Directory of Open Access Journals (Sweden)

    O. K. Akmaev

    2014-07-01

    Full Text Available This article presents a new configuration of a 3-dof machine tool with parallel kinematics. Elastic deformations of the machine tool have been modeled with finite elements, and stiffness coefficients at characteristic points of the working area have been calculated for different cutting forces.

  6. The Effect of Unreliable Machine for Two Echelons Deteriorating Inventory Model

    Directory of Open Access Journals (Sweden)

    I Nyoman Sutapa

    2014-01-01

    Full Text Available Many researchers have developed two-echelon supply chain models; however, only a few of them consider deteriorating items and unreliable machines. In this paper, we develop a deteriorating inventory model for a two-echelon supply chain with an unreliable machine. The machine downtime is assumed to be uniformly distributed. The model is solved using a simple heuristic, since a closed-form solution cannot be derived. A numerical example is used to show how the model works, and a sensitivity analysis is conducted to show the effect of different lost-sales costs. The results show that increasing the lost-sales cost increases both the manufacturer's and the buyer's costs; however, the buyer's total cost increases more than the manufacturer's as the manufacturer's machine becomes more unreliable.

  7. Rapid evaluation of machine tools with position-dependent milling stability based on response surface model

    Directory of Open Access Journals (Sweden)

    Li Zhang

    2016-03-01

    Full Text Available Milling stability is one of the important evaluation criteria for the dynamic characteristics of machine tools, and it is of great importance for machine tool design and manufacturing. The milling stability of machine tools generally varies with the position combinations of the moving parts. Traditional milling stability analysis is based on a few specific positions in the workspace of the machine tool, so the results are not comprehensive, and completing the analysis for multiple positions is very time-consuming. A new method to rapidly evaluate the position-dependent milling stability of machine tools is developed in this article. In this method, key position combinations of the moving parts are taken as calculation samples, and the dynamic characteristics of the machine tool at these samples are calculated with the SAMCEF finite element simulation software. The minimum critical axial cutting depth of each sample is then obtained, and a response surface model is established that relates any position in the whole workspace to the value of the minimum critical axial cutting depth. The precision of the response surface model is evaluated, and the model can then be used to rapidly evaluate the position-dependent milling stability of the machine tool. The method is demonstrated on a precision horizontal machining center with box-in-box structure, for which the minimum critical axial cutting depth at any position is shown. This method of rapid evaluation avoids complicated theoretical calculation, so it can be easily adopted by engineers and technicians during the design process of machine tools.
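    The response-surface step can be sketched as fitting a quadratic polynomial that maps moving-part positions to the minimum critical axial cutting depth. The sample data below are synthetic stand-ins; in the paper they would come from SAMCEF finite element runs.

    ```python
    # Quadratic response surface: position (x, y, z) -> critical axial depth.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(4)
    XYZ = rng.uniform(0.0, 1.0, (30, 3))            # normalized axis positions
    ap_lim = (2.0 - 0.8 * XYZ[:, 0] ** 2            # synthetic stand-in for the
              - 0.5 * XYZ[:, 1] + 0.3 * XYZ[:, 2]   # FE-computed critical depth (mm)
              + rng.normal(0.0, 0.02, 30))

    surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    surface.fit(XYZ, ap_lim)

    # evaluate the stability limit anywhere in the workspace, without a new FE run
    print(surface.predict([[0.5, 0.5, 0.5]])[0])
    ```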

  8. Comparative study for different statistical models to optimize cutting parameters of CNC end milling machines

    International Nuclear Information System (INIS)

    El-Berry, A.; El-Berry, A.; Al-Bossly, A.

    2010-01-01

    In machining operations, the quality of the surface finish is an important requirement for many workpieces; it is therefore very important to optimize the cutting parameters that control the required manufacturing quality. The surface roughness parameter (Ra) of mechanical parts depends on the turning parameters used during the turning process. In the development of predictive models, the cutting parameters feed, cutting speed and depth of cut are considered as model variables. For this purpose, this study compares various machining experiments performed on a CNC vertical machining center with aluminium 6061 workpieces. Multiple regression models are used to predict the surface roughness in the different experiments.
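    One common form of such a regression model is log-linear, Ra = C v^a f^b d^c, which becomes an ordinary multiple linear regression after taking logarithms. The coefficients below are fitted to made-up data, not to the paper's aluminium 6061 experiments.

    ```python
    # Fit Ra = C * v^a * f^b * d^c by linear least squares in log space.
    import numpy as np

    rng = np.random.default_rng(5)
    v = rng.uniform(100, 300, 40)       # cutting speed (m/min)
    f = rng.uniform(0.05, 0.30, 40)     # feed (mm/rev)
    d = rng.uniform(0.5, 2.0, 40)       # depth of cut (mm)
    Ra = 8.0 * v**-0.3 * f**0.8 * d**0.1 * rng.lognormal(0.0, 0.05, 40)

    X = np.column_stack([np.ones(40), np.log(v), np.log(f), np.log(d)])
    coef, *_ = np.linalg.lstsq(X, np.log(Ra), rcond=None)
    lnC, a, b, c = coef
    print(f"Ra = {np.exp(lnC):.2f} * v^{a:.2f} * f^{b:.2f} * d^{c:.2f}")
    ```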

  9. A Model of Parallel Kinematics for Machine Calibration

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Bæk Nielsen, Morten; Kløve Christensen, Simon

    2016-01-01

    This research identifies that the rapid lift and repositioning capabilities of delta robots can reduce defects on extruded 3D printed parts when compared to traditional Cartesian motion systems, largely because repositioning is so rapid that the extruded strand is instantly broken. A virtual machine model of the parallel kinematics was developed in order to decompose the different types of geometrical errors into 6 elementary cases. Deliberate introduction of errors to the virtual machine has subsequently allowed for the generation of deviation plots that can be used as a strong tool for the identification and correction of geometrical errors on a physical machine tool.

  10. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    The increasing importance of turning operations is gaining new dimensions in the present industrial age, in which growing competition calls for all efforts to be directed towards the economical manufacture of machined parts; at the same time, surface finish is one of the most critical quality measures in mechanical products.

  11. Mathematical model of five-phase induction machine

    Czech Academy of Sciences Publication Activity Database

    Schreier, Luděk; Bendl, Jiří; Chomát, Miroslav

    2011-01-01

    Roč. 56, č. 2 (2011), s. 141-157 ISSN 0001-7043 R&D Projects: GA ČR GA102/08/0424 Institutional research plan: CEZ:AV0Z20570509 Keywords : five-phase induction machines * symmetrical components * spatial wave harmonics Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering

  12. M2 priority screening system for near-term activities: Project documentation. Final report December 11, 1992--May 31, 1994

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-08-12

    From May through August, 1993, the M-2 Group within M Division at LANL conducted, with the support of the LANL Integration and Coordination Office (ICO) and Applied Decision Analysis, Inc. (ADA), a project whose purpose was to develop a system for setting priorities among activities. This phase of the project concentrated on prioritizing near-term activities (i.e., activities that must be conducted in the next six months) necessary for setting up this new group. Potential future project phases will concentrate on developing a tool for setting priorities and developing annual budgets for the group's operations. The priority screening system designed to address the near-term problem was developed, applied in a series of meetings with the group managers, and used as an aid in the assignment of tasks to group members. The model was intended and used as a practical tool for documenting and explaining decisions about near-term priorities, and not as a substitute for M-2 management judgment and decision-making processes.

  13. Modelling of the dynamic behaviour of hard-to-machine alloys

    Directory of Open Access Journals (Sweden)

    Bäker M.

    2012-08-01

    Full Text Available Machining of titanium alloys and nickel-based superalloys can be difficult due to their excellent mechanical properties, which combine high strength, ductility, and excellent overall high-temperature performance. Machining of these alloys can, however, be improved by simulating the processes and optimizing the machining parameters. The simulations need accurate material models that predict the material behaviour in the range of strains and strain rates occurring in machining processes. In this work, the behaviour of the titanium 15-3-3-3 alloy and the nickel-based superalloy Alloy 625 was characterized in compression, and Johnson-Cook material model parameters were obtained from the results. For the titanium alloy, the adiabatic Johnson-Cook model predicts softening of the material adequately, but the high strain hardening rate of Alloy 625 in the model prevents the localization of strain, and no shear bands were formed when using this model. For Alloy 625, the Johnson-Cook model was therefore modified to decrease the strain hardening rate at large strains. The models were used in simulations of orthogonal cutting of the materials. For both materials, the models are able to predict the serrated chip formation frequently observed in the machining of these alloys. The machining forces also match relatively well, but some differences can be seen in the details of the experimentally obtained and simulated chip shapes.
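    For reference, the Johnson-Cook flow stress referred to above has the standard form below, where A, B, n, C and m are the material parameters fitted from the compression tests, epsilon_p is the plastic strain, and the strain rate and temperature are normalised to reference values.

    ```latex
    % Standard Johnson-Cook flow stress: strain hardening, strain-rate
    % sensitivity, and thermal softening as multiplicative factors.
    \begin{equation}
      \sigma = \left(A + B\,\varepsilon_p^{\,n}\right)
               \left(1 + C \ln \frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)
               \left(1 - \left(\frac{T - T_{\mathrm{room}}}{T_{\mathrm{melt}} - T_{\mathrm{room}}}\right)^{m}\right)
    \end{equation}
    ```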

  14. Near-Term Electric Vehicle Program. Phase II: Mid-Term Summary Report.

    Energy Technology Data Exchange (ETDEWEB)

    None

    1978-08-01

    The Near Term Electric Vehicle (NTEV) Program is a constituent element of the overall national Electric and Hybrid Vehicle Program being implemented by the Department of Energy in accordance with the requirements of the Electric and Hybrid Vehicle Research, Development, and Demonstration Act of 1976. Phase II of the NTEV Program is focused on the detailed design and development of complete electric integrated test vehicles that incorporate current and near-term technology and meet specified DOE objectives. The activities described in this Mid-Term Summary Report are being carried out by two contractor teams, whose prime contractors are the General Electric Company and the Garrett Corporation. This report is divided into two discrete parts: Part 1 describes the progress of the General Electric team and Part 2 describes the progress of the Garrett team.

  15. Elective caesarean section and respiratory morbidity in the term and near-term neonate

    DEFF Research Database (Denmark)

    Hansen, Anne Kirkeby; Wisborg, Kirsten; Uldbjerg, Niels

    2007-01-01

    AIM: The aim of this review was to assess the relationship between delivery by elective caesarean section and respiratory morbidity in the term and near-term neonate. METHODS: Searches were made in the MEDLINE database, EMBASE, the Cochrane database and Web of Science to identify peer-reviewed studies in English on elective caesarean section and respiratory morbidity in the newborn. We included studies that compared elective caesarean section to vaginal or intended vaginal delivery, with clear definitions of outcome measures and information about gestational age. RESULTS: Nine eligible studies were identified. All studies found that delivery by elective caesarean section increased the risk of various respiratory morbidities in the newborn near term compared with vaginal delivery, although the findings were not statistically significant in all studies. It was inappropriate to carry out a meta-analysis.

  16. Materials problems and possible solutions for near term Tokamak fusion reactors

    International Nuclear Information System (INIS)

    Kulcinski, G.L.

    1978-01-01

    It is the purpose of this paper to clarify the magnitude of the problems that might arise from neutron damage and to put into perspective the methods and facilities that might be used to solve these problems. First, a brief review of some of the fundamental aspects of radiation damage from neutrons will be given for the non-materials scientist, followed by a current listing of the anticipated radiation environments of the various near-term (TFTR, JET, T-20), EPR, and DPR designs. The reader should note that such designs are highly fluid and may change considerably in the future (in fact, due to the very problems discussed here). Next, the present and future facilities that could be used to test CTR materials will be reviewed, and their utility in providing pertinent fundamental and engineering data will be discussed. Finally, some conclusions and recommendations on the near-term reactor materials problems will be presented.

  17. Near-Term Nuclear Power Revival? A U.S. and International Perspective

    International Nuclear Information System (INIS)

    Braun, C.

    2004-01-01

    In this paper I review the causes of the renewed interest in the near-term revival of nuclear power in the U.S. and internationally. I comment on the progress already made in the U.S. towards restarting a second era of commercial nuclear power plant construction, and on what is required going forward, from a utility's perspective, to commit to and implement new plant orders. I review the specific nuclear projects discussed and committed to in the U.S. and abroad in terms of utilities, sites, vendor and supplier teams, and project arrangements. I then offer some tentative conclusions regarding the prospects for a near-term U.S. and global nuclear power revival.

  18. Near-Term Opportunities for Carbon Dioxide Capture and Storage 2007

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2007-07-01

    This document contains the summary report of the workshop on global assessments for near-term opportunities for carbon dioxide capture and storage (CCS), which took place on 21-22 June 2007 in Oslo, Norway. It provided an opportunity for direct dialogue between concerned stakeholders in the global effort to accelerate the development and commercialisation of CCS technology. This is part of a series of three workshops on near-term opportunities for this important mitigation option that will feed into the G8 Plan of Action on Climate Change, Clean Energy and Sustainable Development. The ultimate goal of this effort is to present a report and policy recommendations to the G8 leaders at their 2008 summit meeting in Japan.

  19. Association Rule-based Predictive Model for Machine Failure in Industrial Internet of Things

    Science.gov (United States)

    Kwon, Jung-Hyok; Lee, Sol-Bee; Park, Jaehoon; Kim, Eui-Jik

    2017-09-01

    This paper proposes an association rule-based predictive model for machine failure in the industrial Internet of things (IIoT), which can accurately predict machine failure in a real manufacturing environment by investigating the relationship between the causes and types of machine failure. To develop the predictive model, we consider three major steps: 1) binarization, 2) rule creation, and 3) visualization. The binarization step translates item values in a dataset into ones and zeros; the rule creation step then creates association rules as IF-THEN structures using the Lattice model and the Apriori algorithm. Finally, the created rules are visualized in various ways for users' understanding. An experimental implementation was conducted using R Studio version 3.3.2. The results show that the proposed predictive model realistically predicts machine failure based on association rules.
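    A self-contained miniature of the support/confidence rule-mining step is sketched below. The records and thresholds are invented, and the paper's Lattice-model/Apriori implementation in R is not reproduced; the point is only the IF-THEN structure of the output.

    ```python
    # Tiny Apriori-style rule extraction over binarized machine-failure records.
    from itertools import combinations

    # binarized failure records: observed conditions per failure event (assumed data)
    records = [
        {"overheat", "vibration", "spindle_fault"},
        {"overheat", "spindle_fault"},
        {"vibration", "tool_wear"},
        {"overheat", "vibration", "spindle_fault"},
        {"tool_wear"},
    ]
    MIN_SUP, MIN_CONF = 0.4, 0.8

    def support(itemset):
        return sum(itemset <= r for r in records) / len(records)

    items = sorted({i for r in records for i in r})
    for size in (1, 2):
        for antecedent in combinations(items, size):
            for consequent in items:
                if consequent in antecedent:
                    continue
                a, ab = set(antecedent), set(antecedent) | {consequent}
                if support(ab) >= MIN_SUP and support(ab) / support(a) >= MIN_CONF:
                    print(f"IF {sorted(a)} THEN {consequent} "
                          f"(sup={support(ab):.2f}, conf={support(ab)/support(a):.2f})")
    ```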

  20. Non-linear hybrid control oriented modelling of a digital displacement machine

    DEFF Research Database (Denmark)

    Pedersen, Niels Henrik; Johansen, Per; Andersen, Torben O.

    2017-01-01

    Proper feedback control of digital fluid power machines (pressure, flow, torque or speed control) requires a control oriented model, from which the system dynamics can be analyzed, stability can be proven and design criteria can be specified. The development of control oriented models for hydraulic Digital Displacement Machines (DDM) is complicated by non-smooth machine behavior, where the dynamics comprises analog, digital and non-linear elements. For a full stroke operated DDM the power throughput is altered in discrete levels based on the ratio of activated pressure chambers. In this paper, a control oriented hybrid model is established, which combines the continuous non-linear pressure chamber dynamics and the discrete shaft position dependent activation of the pressure chambers. The hybrid machine model is further extended to describe the dynamics of a Digital Fluid Power...

  1. The Near-Term Impacts of Carbon Mitigation Policies on Manufacturing Industries

    OpenAIRE

    Morgenstern, Richard; Shih, Jhih-Shyang; Ho, Mun; Zhang, Xuehua

    2002-01-01

    Who will pay for new policies to reduce carbon dioxide and other greenhouse gas emissions in the United States? This paper considers a slice of the question by examining the near-term impact on domestic manufacturing industries of both upstream (economy-wide) and downstream (electric power industry only) carbon mitigation policies. Detailed Census data on the electricity use of four-digit manufacturing industries are combined with input-output information on interindustry purchases to paint a ...

  2. Mobile robotics for CANDU reactor maintenance: case studies and near-term improvements

    International Nuclear Information System (INIS)

    Lipsett, M. G.; Rody, K.H.

    1995-01-01

    Although robotics researchers have long been promising that robots would soon be performing tasks in hazardous environments, the reality has yet to live up to the hype. The presently available crop of robots suitable for deployment in industrial situations is remotely operated, requiring skilled users. This talk describes cases where mobile robots have been used successfully in CANDU stations, discusses the difficulties in using mobile robots for reactor maintenance, and provides near-term goals for achievable improvements in performance and usefulness. (author)

  3. Photovoltaic System Pricing Trends: Historical, Recent, and Near-Term Projections. 2014 Edition (Presentation)

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, D.; Barbose, G.; Margolis, R.; James, T.; Weaver, S.; Darghouth, N.; Fu, R.; Davidson, C.; Booth, S.; Wiser, R.

    2014-09-01

    This presentation, based on research at Lawrence Berkeley National Laboratory and the National Renewable Energy Laboratory, provides a high-level overview of historical, recent, and projected near-term PV pricing trends in the United States focusing on the installed price of PV systems. It also attempts to provide clarity surrounding the wide variety of potentially conflicting data available about PV system prices. This PowerPoint is the third edition from this series.

  4. Photovoltaic System Pricing Trends. Historical, Recent, and Near-Term Projections, 2015 Edition

    Energy Technology Data Exchange (ETDEWEB)

    Feldman, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bolinger, Mark [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Chung, Donald [National Renewable Energy Lab. (NREL), Golden, CO (United States); Fu, Ran [National Renewable Energy Lab. (NREL), Golden, CO (United States); Seel, Joachim [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Davidson, Carolyn [National Renewable Energy Lab. (NREL), Golden, CO (United States); Darghouth, Naïm [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wiser, Ryan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-08-25

    This presentation, based on research at Lawrence Berkeley National Laboratory and the National Renewable Energy Laboratory, provides a high-level overview of historical, recent, and projected near-term PV pricing trends in the United States focusing on the installed price of PV systems. It also attempts to provide clarity surrounding the wide variety of potentially conflicting data available about PV system prices. This PowerPoint is the fourth edition from this series.

  5. Geospatial analysis of near-term potential for carbon-negative bioenergy in the United States.

    Science.gov (United States)

    Baik, Ejeong; Sanchez, Daniel L; Turner, Peter A; Mach, Katharine J; Field, Christopher B; Benson, Sally M

    2018-03-27

    Bioenergy with carbon capture and storage (BECCS) is a negative-emissions technology that may play a crucial role in climate change mitigation. BECCS relies on the capture and sequestration of carbon dioxide (CO2) following bioenergy production to remove and reliably sequester atmospheric CO2. Previous BECCS deployment assessments have largely overlooked the potential lack of spatial colocation of suitable storage basins and biomass availability, in the absence of long-distance biomass and CO2 transport. These conditions could constrain the near-term technical deployment potential of BECCS due to social and economic barriers that exist for biomass and CO2 transport. This study leverages biomass production data and site-specific injection and storage capacity estimates at high spatial resolution to assess the near-term deployment opportunities for BECCS in the United States. If the total biomass resource available in the United States were mobilized for BECCS, an estimated 370 Mt CO2/yr of negative emissions could be supplied in 2020. However, the absence of long-distance biomass and CO2 transport, as well as limitations imposed by unsuitable regional storage and injection capacities, collectively decreases the technical potential of negative emissions to 100 Mt CO2/yr. Meeting this technical potential may require large-scale deployment of BECCS technology in more than 1,000 counties, as well as widespread deployment of dedicated energy crops. Specifically, the Illinois basin, Gulf region, and western North Dakota have the greatest potential for near-term BECCS deployment. High-resolution spatial assessment as conducted in this study can inform near-term opportunities that minimize social and economic barriers to BECCS deployment. Copyright © 2018 the Author(s). Published by PNAS.

  6. Conditions for Model Matching of Switched Asynchronous Sequential Machines with Output Feedback

    OpenAIRE

    Jung–Min Yang

    2016-01-01

    Solvability of the model matching problem for input/output switched asynchronous sequential machines is discussed in this paper. The control objective is to determine the existence condition and design algorithm for a corrective controller that can match the stable-state behavior of the closed-loop system to that of a reference model. Switching operations and correction procedures are incorporated using output feedback so that the controlled switched machine can show the ...

  7. Prediction Model of Machining Failure Trend Based on Large Data Analysis

    Science.gov (United States)

    Li, Jirong

    2017-12-01

    Mechanical machining processes have high complexity, strong coupling, and many control factors, which makes them prone to failure. In order to improve the accuracy of fault detection for large mechanical equipment, this research develops a fault trend prediction model for machining based on fault data. Genetic-algorithm K-means clustering is used to process the machining data and to extract features that reflect the correlation dimension of faults. The spectral characteristics of abnormal vibrations arising during the machining of complex mechanical parts are analysed, and features of these abnormal vibrations are extracted using multi-component spectral decomposition and Hilbert-based empirical mode decomposition. Based on the extracted features and decomposition results, an intelligent expert system database is established and combined with big data analysis methods to realize fault trend prediction for machining. The simulation results show that the method predicts machining fault trends accurately and judges faults in the machining process correctly, so it has good application value for analysis and fault diagnosis in the machining process.

  8. Behavioral Modeling for Mental Health using Machine Learning Algorithms.

    Science.gov (United States)

    Srividya, M; Mohanavalli, S; Bhalaji, N

    2018-04-03

    Mental health is an indicator of the emotional, psychological and social well-being of an individual. It determines how an individual thinks, feels and handles situations. Positive mental health helps one to work productively and realize one's full potential. Mental health is important at every stage of life, from childhood and adolescence through adulthood. Many factors contribute to mental health problems that lead to mental illness: stress, social anxiety, depression, obsessive-compulsive disorder, drug addiction, and personality disorders. It is becoming increasingly important to detect the onset of mental illness so as to maintain a proper life balance. The nature of machine learning algorithms and Artificial Intelligence (AI) can be fully harnessed for predicting the onset of mental illness. Such applications, when implemented in real time, will benefit society by serving as a monitoring tool for individuals with deviant behavior. This research work proposes to apply various machine learning algorithms, such as support vector machines, decision trees, the naïve Bayes classifier, the K-nearest neighbor classifier and logistic regression, to identify the state of mental health in a target group. The responses obtained from the target group for the designed questionnaire were first subjected to unsupervised learning techniques. The labels obtained as a result of clustering were validated by computing the Mean Opinion Score. These cluster labels were then used to build classifiers to predict the mental health of an individual. Populations from various groups, such as high school students, college students and working professionals, were considered as target groups. The research presents an analysis of applying the aforementioned machine learning algorithms to the target groups and also suggests directions for future work.
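    A minimal sketch of this cluster-then-classify pipeline is given below, using the classifier families named above from scikit-learn. The questionnaire data are random stand-ins, and the Mean Opinion Score validation step is omitted.

    ```python
    # Stage 1: cluster questionnaire responses to obtain labels.
    # Stage 2: compare several classifiers on those labels by cross-validation.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(6)
    X = rng.normal(0.0, 1.0, (300, 12))     # 12 questionnaire items, 300 respondents

    y = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

    models = {
        "SVM": SVC(),
        "Decision tree": DecisionTreeClassifier(),
        "Naive Bayes": GaussianNB(),
        "k-NN": KNeighborsClassifier(),
        "Logistic regression": LogisticRegression(max_iter=1000),
    }
    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: accuracy {scores.mean():.2f} +/- {scores.std():.2f}")
    ```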

  9. Man-Machine Interface Design for Modeling and Simulation Software

    Directory of Open Access Journals (Sweden)

    Arnstein J. Borstad

    1986-07-01

    Full Text Available Computer aided design (CAD systems, or more generally interactive software, are today being developed for various application areas like VLSI-design, mechanical structure design, avionics design, cartographic design, architectual design, office automation, publishing, etc. Such tools are becoming more and more important in order to be productive and to be able to design quality products. One important part of CAD-software development is the man-machine interface (MMI design.

  10. A comparison of machine learning and Bayesian modelling for molecular serotyping.

    Science.gov (United States)

    Newton, Richard; Wernisch, Lorenz

    2017-08-11

    Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, owing to the large number of possible combinations. Most of the available training data comprises samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single-serotype arrays. With the enhanced training set the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological
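    The augmentation idea reads naturally in code. The sketch below builds artificial two-serotype mixtures by combining single-serotype array signals and trains one presence classifier per serotype; the array size, the max-combination rule and all data are assumptions for illustration, not the paper's pipeline.

    ```python
    # Synthetic-mixture augmentation for per-serotype presence classifiers.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(7)
    N_SEROTYPES, N_PROBES, PER_TYPE = 4, 50, 30
    signatures = rng.random((N_SEROTYPES, N_PROBES)) > 0.7   # probe patterns (assumed)

    def single_sample(s):
        return signatures[s] * rng.uniform(0.6, 1.0, N_PROBES)   # noisy intensities

    X, Y = [], []
    for s in range(N_SEROTYPES):                 # single-serotype arrays
        for _ in range(PER_TYPE):
            X.append(single_sample(s)); Y.append({s})
    for s, t in [(0, 1), (0, 2), (1, 3)]:        # artificial two-serotype mixtures
        for _ in range(PER_TYPE):
            X.append(np.maximum(single_sample(s), single_sample(t))); Y.append({s, t})
    X = np.array(X)

    classifiers = [
        GradientBoostingClassifier().fit(X, [s in y for y in Y])
        for s in range(N_SEROTYPES)
    ]
    mixed = np.maximum(single_sample(2), single_sample(3))    # unseen mixture
    print([clf.predict([mixed])[0] for clf in classifiers])
    ```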

  11. A Study of Synchronous Machine Model Implementations in Matlab/Simulink Simulations for New and Renewable Energy Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede; Iov, Florin

    2005-01-01

    A direct phase model of synchronous machines implemented in MATLAB/SIMULINK is presented. The effects of machine saturation have been included. Simulation studies are performed under various conditions. It has been demonstrated that MATLAB/SIMULINK is an effective tool to study the complex synchronous machine, and the implemented model can be used for studies of various applications of synchronous machines, including renewable and DG generation systems.

  12. A modeling method for hybrid energy behaviors in flexible machining systems

    International Nuclear Information System (INIS)

    Li, Yufeng; He, Yan; Wang, Yan; Wang, Yulin; Yan, Ping; Lin, Shenlong

    2015-01-01

    Increasing environmental and economic pressures have led to great concerns regarding the energy consumption of machining systems. Understanding the energy behaviors of flexible machining systems is a prerequisite for improving the energy efficiency of these systems. This paper proposes a modeling method to predict energy behaviors in flexible machining systems. The hybrid energy behaviors not only depend on the technical specifications of machine tools and workpieces, but are also significantly affected by individual production scenarios. In the method, hybrid energy behaviors are decomposed into Structure-related energy behaviors, State-related energy behaviors, Process-related energy behaviors and Assignment-related energy behaviors. The modeling method for the hybrid energy behaviors is based on Colored Timed Object-oriented Petri Nets (CTOPN). The former two types of energy behaviors are modeled by constructing the structure of the CTOPN, whilst the latter two types are simulated by applying colored tokens and associated attributes. Machining experiments on two workpieces in the experimental workshop were undertaken to verify the proposed modeling method. The results showed that the method can provide multi-perspective transparency on energy consumption related to machine tools, workpieces and production management, and is particularly suitable for flexible manufacturing systems, in which frequent changes to the machining system are often encountered. - Highlights: • Energy behaviors in flexible machining systems are modeled in this paper. • Hybrid characteristics of energy behaviors are examined from multiple viewpoints. • Flexible modeling method CTOPN is used to predict the hybrid energy behaviors. • This work offers a multi-perspective transparency on energy consumption

  13. Guidelines for Developing and Reporting Machine Learning Predictive Models in Biomedical Research: A Multidisciplinary View.

    Science.gov (United States)

    Luo, Wei; Phung, Dinh; Tran, Truyen; Gupta, Sunil; Rana, Santu; Karmakar, Chandan; Shilton, Alistair; Yearwood, John; Dimitrova, Nevenka; Ho, Tu Bao; Venkatesh, Svetha; Berk, Michael

    2016-12-16

    As more and more researchers are turning to big data for new opportunities in biomedical discovery, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs. The objective was to attain a set of guidelines on the use of machine learning predictive models within clinical settings, to make sure the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence. A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method. The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models. A set of guidelines was generated to enable the correct application of machine learning models and the consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community. ©Wei Luo, Dinh Phung, Truyen Tran, Sunil Gupta, Santu Rana, Chandan Karmakar, Alistair Shilton, John Yearwood, Nevenka Dimitrova, Tu Bao Ho, Svetha Venkatesh, Michael Berk. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 16.12.2016.

  14. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    Science.gov (United States)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine (WSVM) model is proposed and applied to the prediction of the monthly time series of Singapore tourist arrivals to Malaysia. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first part we compare kernel functions, and in the second part we compare the developed model with the single SVM model. The results showed that the linear kernel function performs better than the RBF kernel, and that the WSVM outperforms the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.
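
    A sketch of the wavelet-SVM combination, assuming PyWavelets for the discrete wavelet decomposition: each reconstructed sub-series is forecast with an SVR and the component forecasts are summed; the series, lag order, wavelet and kernel are all illustrative:

      # Illustrative WSVM: wavelet decomposition plus one SVR per sub-series.
      import numpy as np
      import pywt
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)
      t = np.arange(120)
      series = 100 + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120)  # toy monthly arrivals

      def lagged(x, p=12):  # build lag matrix: predict x[n] from the previous p values
          X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
          return X, x[p:]

      # Split the series into wavelet components that sum back to the original signal.
      coeffs = pywt.wavedec(series, "db4", level=2)
      components = [pywt.waverec([c if j == i else np.zeros_like(c)
                                  for j, c in enumerate(coeffs)], "db4")[:len(series)]
                    for i in range(len(coeffs))]

      # Forecast each component with a linear-kernel SVR and sum the forecasts.
      forecast = 0.0
      for comp in components:
          X, y = lagged(comp)
          svr = SVR(kernel="linear").fit(X[:-1], y[:-1])
          forecast += svr.predict(X[-1:])[0]  # pseudo out-of-sample, one step ahead
      print("one-step-ahead forecast:", round(forecast, 1))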

  15. A new near-term breast cancer risk prediction scheme based on the quantitative analysis of ipsilateral view mammograms.

    Science.gov (United States)

    Sun, Wenqing; Tseng, Tzu-Liang Bill; Qian, Wei; Saltzstein, Edward C; Zheng, Bin; Yu, Hui; Zhou, Shi

    2018-03-01

    To help improve efficacy of screening mammography and eventually establish an optimal personalized screening paradigm, this study aimed to develop and test a new near-term breast cancer risk prediction scheme based on the quantitative analysis of ipsilateral views of negative screening mammograms. The dataset includes digital mammograms acquired from 392 women with two sequential full-field digital mammography examinations. All the first ("prior") sets of mammograms were interpreted as negative during the original reading. In the sequential ("current") screening, 202 were proved positive and 190 remained negative/benign. For each pair of "prior" ipsilateral mammograms, we adaptively fused the image features computed from the two views. Using four different types of image features, we built four elastic net support vector machine (EnSVM) based classifiers. The initial prediction scores from the four EnSVMs were then combined to build a final artificial neural network (ANN) classifier that produces the final risk prediction score. The performance of the new scheme was evaluated using a 10-fold cross-validation method, with the area under the receiver operating characteristic curve (AUC) as the assessment index. A total of 466 features were initially extracted from each pair of ipsilateral mammograms, of which 51 were selected to build the EnSVM based prediction scheme. The new scheme yielded AUC = 0.737 ± 0.052. Applying an optimal operating threshold, the prediction sensitivity was 60.4% (122 of 202) and the specificity was 79.0% (150 of 190). The study results showed a moderately high positive association between the computed risk scores from the "prior" negative mammograms and the actual outcome of the image-detectable breast cancers in the subsequent screening examinations. The study also demonstrated that quantitative analysis of the ipsilateral views of the mammograms can provide useful information in predicting near-term
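
    A sketch of the two-level scheme with synthetic stand-ins for the four feature groups: an elastic-net-penalised linear SVM (here via SGDClassifier, an approximation of the EnSVM) scores each group, and a small neural network fuses the four scores; data, sizes and settings are illustrative:

      # Illustrative EnSVM ensemble: one elastic-net linear SVM per feature group,
      # scores fused by a small neural network (as the ANN stage in the record).
      import numpy as np
      from sklearn.linear_model import SGDClassifier
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      y = rng.integers(0, 2, 392)  # 392 cases, as in the record
      # Four synthetic feature groups totalling 466 features, weakly tied to the label.
      groups = [rng.random((392, k)) + 0.2 * y[:, None] for k in (120, 120, 120, 106)]

      tr, te = train_test_split(np.arange(392), test_size=0.3, random_state=0)
      scores_tr, scores_te = [], []
      for G in groups:
          svm = SGDClassifier(loss="hinge", penalty="elasticnet", alpha=1e-3,
                              l1_ratio=0.5, random_state=0).fit(G[tr], y[tr])
          scores_tr.append(svm.decision_function(G[tr]))
          scores_te.append(svm.decision_function(G[te]))

      ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
      ann.fit(np.column_stack(scores_tr), y[tr])
      risk = ann.predict_proba(np.column_stack(scores_te))[:, 1]  # final risk scores
      print("AUC:", round(roc_auc_score(y[te], risk), 3))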

  16. Thermal Error Modeling of a Machine Tool Using Data Mining Scheme

    Science.gov (United States)

    Wang, Kun-Chieh; Tseng, Pai-Chang

    In this paper the knowledge discovery technique is used to build an effective and transparent mathematical thermal error model for machine tools. Our proposed thermal error modeling methodology (called KRL) integrates K-means clustering (KM), rough-set theory (RS), and a linear regression model (LR). First, to explore the machine tool's thermal behavior, an integrated system is designed to simultaneously measure the temperature rises at selected characteristic points and the thermal deformations at the spindle nose under suitable real machining conditions. Second, the obtained data are classified by the KM method and further reduced by the RS scheme, and a linear thermal error model is established by the LR technique. To evaluate the performance of our proposed model, an adaptive neuro-fuzzy inference system (ANFIS) thermal error model is introduced for comparison. Finally, a verification experiment is carried out, and the results reveal that the proposed KRL model is effective in predicting thermal behavior in machine tools. Our proposed KRL model is transparent, easily understood by users, and can be easily programmed or modified for different machining conditions.
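
    A compressed sketch of the KRL chain on synthetic data, with the rough-set reduction simplified to keeping one representative sensor per K-means cluster (a stand-in for the RS reduct, not the paper's exact procedure):

      # Illustrative KRL-style model: cluster sensors, keep representatives, fit a linear model.
      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(4)
      T = rng.random((200, 16)).cumsum(axis=0)  # 16 synthetic temperature-sensor histories
      error = 0.5 * T[:, 0] - 0.2 * T[:, 7] + rng.normal(0, 0.1, 200)  # synthetic thermal error

      # Group sensors with similar behaviour; keep the sensor nearest each centroid
      # (a simplification standing in for the rough-set reduction step).
      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(T.T)
      reps = [int(np.argmin(np.linalg.norm(T.T - c, axis=1) + 1e9 * (km.labels_ != k)))
              for k, c in enumerate(km.cluster_centers_)]

      lr = LinearRegression().fit(T[:, reps], error)
      print("selected sensors:", reps, "R^2:", round(lr.score(T[:, reps], error), 3))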

  17. Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Wenjie Tian

    2014-01-01

    Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for the precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for accuracy improvement by suitable measures, that is, component tolerancing in the design, manufacturing, and assembly processes, and error compensation. A sensitivity analysis method is proposed, and the sensitivities of the compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.

  18. Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools

    Science.gov (United States)

    Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu

    2018-03-01

    Thermal error is the main factor affecting the accuracy of precision machining. Reflecting the current research focus on machine tool thermal error, this paper experimentally studies thermal error testing and intelligent modeling for the spindle of vertical high-speed CNC machine tools. Several testing devices for thermal error are designed, in which 7 temperature sensors are used to measure the temperature of the machine tool spindle system and 2 displacement sensors are used to detect the thermal error displacement. A thermal error compensation model with good inverse prediction ability is established by applying principal component analysis, optimizing the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network technology.

  19. Building Customer Churn Prediction Models in Fitness Industry with Machine Learning Methods

    OpenAIRE

    Shan, Min

    2017-01-01

    With the rapid growth of digital systems, churn management has become a major focus within customer relationship management in many industries. Ample research has been conducted on churn prediction in different industries with various machine learning methods. This thesis aims to combine feature selection and supervised machine learning methods to define churn prediction models and apply them to the fitness industry. Forward selection is chosen as the feature selection method. Support Vector ...
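
    A sketch of forward selection wrapped around an SVM, as combined in the thesis; the greedy loop adds whichever remaining feature most improves cross-validated accuracy (synthetic data; stopping rule illustrative):

      # Illustrative forward selection wrapped around an SVM churn classifier.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(5)
      y = rng.integers(0, 2, 400)  # churned vs. retained (synthetic)
      X = rng.random((400, 10))
      X[:, 3] += y; X[:, 6] += 0.5 * y  # plant two informative features

      selected, remaining, best = [], list(range(10)), 0.0
      while remaining:
          scores = {f: cross_val_score(SVC(), X[:, selected + [f]], y, cv=5).mean()
                    for f in remaining}
          f, s = max(scores.items(), key=lambda kv: kv[1])
          if s <= best:  # stop when no candidate improves cross-validated accuracy
              break
          selected.append(f); remaining.remove(f); best = s
      print("selected features:", selected, "CV accuracy:", round(best, 3))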

  20. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    Science.gov (United States)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    Aiming at the problems of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are investigated. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm; the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.

  1. A Framework for Modeling Human-Machine Interactions

    Science.gov (United States)

    Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.

  2. An Introduction to Topic Modeling as an Unsupervised Machine Learning Way to Organize Text Information

    Science.gov (United States)

    Snyder, Robin M.

    2015-01-01

    The field of topic modeling has become increasingly important over the past few years. Topic modeling is an unsupervised machine learning way to organize text (or image or DNA, etc.) information such that related pieces of text can be identified. This paper/session will present/discuss the current state of topic modeling, why it is important, and…
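
    As a concrete instance of the unsupervised grouping the paper describes, a sketch using latent Dirichlet allocation (one standard topic model) on a toy corpus; corpus, topic count and settings are illustrative:

      # Minimal topic-modeling example with latent Dirichlet allocation (toy corpus).
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      docs = ["the spindle thermal error model", "thermal error of machine tools",
              "photon boson sampling experiment", "quantum supremacy with photons"]
      vec = CountVectorizer()
      counts = vec.fit_transform(docs)

      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
      terms = vec.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = [terms[i] for i in topic.argsort()[-3:][::-1]]  # three heaviest words
          print(f"topic {k}:", top)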

  3. Evaluation of selected near-term energy-conservation options for the Midwest

    Energy Technology Data Exchange (ETDEWEB)

    Evans, A.R.; Colsher, C.S.; Hamilton, R.W.; Buehring, W.A.

    1978-11-01

    This report evaluates the potential for implementation of near-term energy-conservation practices for the residential, commercial, agricultural, industrial, transportation, and utility sectors of the economy in twelve states: Illinois, Indiana, Iowa, Kansas, Michigan, Minnesota, Missouri, Nebraska, North Dakota, Ohio, South Dakota, and Wisconsin. The information used to evaluate the magnitude of achievable energy savings includes regional energy use, the regulatory/legislative climate relating to energy conservation, technical characteristics of the measures, and their feasibility of implementation. This work is intended to provide baseline information for an ongoing regional assessment of energy and environmental impacts in the Midwest. 80 references.

  4. Photovoltaic village power application: Assessment of the near-term market

    Science.gov (United States)

    Rosenblum, L.; Bifano, W. J.; Poley, W. A.; Scudder, L. R.

    1978-01-01

    The village power application represents a potential market for photovoltaics. The price of energy from photovoltaic systems was compared with that of utility line extensions and diesel generators. The potential domestic demand was defined in both the government and commercial sectors. The foreign demand and sources of funding for village power systems in developing countries were also discussed briefly. It was concluded that a near-term domestic market of at least 12 MW and a foreign market of about 10 GW exist.

  5. Multiphysics Modeling of a Permanent Magnet Synchronous Machine

    Directory of Open Access Journals (Sweden)

    MARTIS Claudia

    2012-10-01

    Full Text Available This paper analyzes noise and vibration in PMSMs. There are three types of vibration in electrical machines: electromagnetic, mechanical and aerodynamic. Electromagnetic forces are the main cause of noise and vibration in PMSMs. It is very important to calculate precisely the natural frequencies of the stator system. If one radial force (radial forces being the main cause of electromagnetic vibration) has a frequency close to the natural frequency of the stator system for the same vibrational mode order, then this force can produce dangerous vibration in the stator system. The natural frequencies of the stator system of a PMSM have been calculated. Finally, a structural analysis has been carried out, pointing out the radial displacement and stress for the chosen PMSM.

  6. Product Quality Modelling Based on Incremental Support Vector Machine

    International Nuclear Information System (INIS)

    Wang, J; Zhang, W; Qin, B; Shi, W

    2012-01-01

    Incremental support vector machine (ISVM) learning is a method developed in recent years on the foundations of statistical learning theory. It is suitable for problems with sequentially arriving field data and has been widely used for product quality prediction and production process optimization. However, traditional ISVM learning does not consider the quality of the incremental data, which may contain noise and redundant data; this affects the learning speed and accuracy to a great extent. In order to improve SVM training speed and accuracy, a modified incremental support vector machine (MISVM) is proposed in this paper. Firstly, the margin vectors are extracted according to the Karush-Kuhn-Tucker (KKT) condition; then the distance from the margin vectors to the final decision hyperplane is calculated to evaluate the importance of the margin vectors, and margin vectors are removed when their distance exceeds the specified value; finally, the original SVs and the remaining margin vectors are used to update the SVM. The proposed MISVM can not only eliminate unimportant samples such as noise, but also preserve the important samples. The MISVM has been tested on two public datasets and one field dataset of zinc coating weight in strip hot-dip galvanizing, and the results show that the proposed method can improve the prediction accuracy and the training speed effectively. Furthermore, it can provide the necessary decision support and analysis tools for automatic control of product quality, and can also be extended to other process industries, such as chemical and manufacturing processes.
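
    A sketch of the margin-vector filtering step on synthetic data: support vectors of the current linear SVM are kept or dropped by their distance to the decision hyperplane, then merged with the new increment and retrained; the threshold and data are illustrative:

      # Illustrative MISVM-style update: keep only margin vectors near the hyperplane.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(6)
      X0, y0 = rng.random((200, 5)), rng.integers(0, 2, 200)  # initial batch
      X1, y1 = rng.random((50, 5)), rng.integers(0, 2, 50)    # newly arriving increment

      svm = SVC(kernel="linear").fit(X0, y0)
      sv, sv_y = svm.support_vectors_, y0[svm.support_]
      # Distance of each margin (support) vector to the decision hyperplane.
      dist = np.abs(svm.decision_function(sv)) / np.linalg.norm(svm.coef_)
      keep = dist <= 1.5  # drop margin vectors far from the hyperplane (illustrative threshold)

      X_up = np.vstack([sv[keep], X1])
      y_up = np.concatenate([sv_y[keep], y1])
      svm = SVC(kernel="linear").fit(X_up, y_up)  # updated model
      print("kept margin vectors:", int(keep.sum()), "of", len(sv))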

  8. Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model

    Science.gov (United States)

    Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward

    2018-04-01

    A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of machine learning known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
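
    A compact sketch of the hybrid idea on a toy series: a deliberately imperfect mechanistic one-step predictor is concatenated with an echo-state reservoir state, and a ridge readout learns the combination. The system, sizes and the "imperfect model" are all illustrative, not the paper's setup:

      # Illustrative hybrid forecaster: imperfect knowledge-based model plus reservoir readout.
      import numpy as np

      rng = np.random.default_rng(7)
      t = np.arange(3001) * 0.05
      u = np.sin(t) + 0.3 * np.sin(2.2 * t)  # toy two-harmonic series (stand-in, not chaotic)

      # Knowledge-based one-step predictor that knows only the first harmonic, so it is imperfect.
      k = u[:-1] + 0.05 * np.cos(t[:-1])

      # Small echo-state reservoir driven by the series.
      N = 200
      W = rng.normal(0, 1, (N, N))
      W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # scale spectral radius below 1
      Win = rng.normal(0, 0.5, N)
      r, states = np.zeros(N), []
      for x in u[:-1]:
          r = np.tanh(W @ r + Win * x)
          states.append(r.copy())

      # Ridge-regression readout over [reservoir state, knowledge-based prediction].
      Z = np.column_stack([np.array(states), k])
      target = u[1:]
      wout = np.linalg.solve(Z.T @ Z + 1e-6 * np.eye(Z.shape[1]), Z.T @ target)
      rmse = np.sqrt(np.mean((Z @ wout - target) ** 2))
      print("hybrid one-step RMSE:", round(float(rmse), 4))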

  9. Input data for mathematical modeling and numerical simulation of switched reluctance machines.

    Science.gov (United States)

    Memon, Ali Asghar; Shaikh, Muhammad Mujtaba

    2017-10-01

    The modeling and simulation of Switched Reluctance (SR) machines and drives is challenging because of their doubly salient pole structure and magnetic saturation. This paper presents the input data in the form of experimentally obtained magnetization characteristics. These data were used for a computer-simulation-based model of the SR machine, "Selecting Best Interpolation Technique for Simulation Modeling of Switched Reluctance Machine" [1], "Modeling of Static Characteristics of Switched Reluctance Motor" [2]. These data are the primary source of other data tables, of co-energy and static torque, which are also among the data essential for the simulation and can be derived from them. The procedure and experimental setup for collecting the data are presented in detail.

  10. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
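
    A simplified sketch of the progressive-sampling loop alone (the Bayesian optimization component is omitted, so this illustrates only the sampling idea, not the paper's full method): candidate configurations are evaluated on growing data samples, and the weaker half is discarded before each doubling:

      # Simplified progressive sampling: evaluate candidates on growing data samples,
      # discarding the weaker half before each doubling (BO component omitted).
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
      candidates = ([SVC(C=c) for c in (0.1, 1, 10)]
                    + [RandomForestClassifier(n_estimators=n, random_state=0) for n in (50, 200)]
                    + [LogisticRegression(max_iter=1000)])

      n = 250
      while len(candidates) > 1 and n <= len(X):
          scores = [cross_val_score(m, X[:n], y[:n], cv=3).mean() for m in candidates]
          order = np.argsort(scores)[::-1]
          candidates = [candidates[i] for i in order[:max(1, len(candidates) // 2)]]
          n *= 2  # survivors are re-evaluated on twice as much data
      print("selected:", candidates[0])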

  11. A Virtual Machine Migration Strategy Based on Time Series Workload Prediction Using Cloud Model

    Directory of Open Access Journals (Sweden)

    Yanbing Liu

    2014-01-01

    Full Text Available Aimed at resolving the issues of the imbalance of resources and workloads at data centers and the overhead together with the high cost of virtual machine (VM) migrations, this paper proposes a new VM migration strategy based on a cloud model time series workload prediction algorithm. By setting upper and lower workload bounds for host machines, forecasting the tendency of their subsequent workloads by creating a workload time series using the cloud model, and stipulating a general VM migration criterion of workload-aware migration (WAM), the proposed strategy selects a source host machine, a destination host machine, and a VM on the source host machine to carry out the VM migration. Experimental results and analyses show, through comparison with other peer research works, that the proposed method can effectively avoid VM migrations caused by momentary peak workload values, significantly lower the number of VM migrations, and dynamically reach and maintain a resource and workload balance for virtual machines, promoting improved utilization of resources in the entire data center.

  12. Dynamic Modeling of a 2-RPU+2-UPS Hybrid Manipulator for Machining Application

    Directory of Open Access Journals (Sweden)

    Ruiqin Li

    2017-10-01

    Full Text Available This paper presents a novel 5-DOF gantry hybrid machine tool, designed with a 2-RPU+2-UPS parallel mechanism for 3T2R motion. The 2-RPU+2-UPS parallel mechanism is connected to a long linear guide to realize 5-axis machining. A dynamic model is developed for this parallel-serial hybrid system. Screw theory is adopted to establish the kinematic equations of the system, upon which the dynamics model is developed by utilizing the principle of virtual work. A numerical example for processing slender structural parts is included to show the validity of the analytical dynamic model developed.

  13. Phase I of the Near-Term Hybrid Vehicle Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-10

    Heat engine/electric hybrid vehicles offer the potential of greatly reduced petroleum consumption, compared to conventional vehicles, without the disadvantages of limited performance and operating range associated with pure electric vehicles. This report documents a hybrid vehicle design approach which is aimed at the development of the technology required to achieve this potential, in such a way that it is transferable to the auto industry in the near term. The development of this design approach constituted Phase I of the Near-Term Hybrid Vehicle Program. The major tasks in this program were: mission analysis and performance specification studies; design tradeoff studies; and preliminary design. Detailed reports covering each of these tasks are included as appendices to this report. A fourth task, sensitivity studies, is also included in the report on the design tradeoff studies. Because of the detail with which these appendices cover methodology and results, the body of this report has been prepared as a brief executive summary of the program activities and results, with appropriate references to the detailed material in the appendices.

  14. Classical boson sampling algorithms with superior performance to near-term experiments

    Science.gov (United States)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.

  15. Phase I of the Near-Term Hybrid Passenger-Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    Heat engine/electric hybrid vehicles offer the potential of greatly reduced petroleum consumption, compared to conventional vehicles, without the disadvantages of limited performance and operating range associated with purely electric vehicles. This report documents a hybrid-vehicle design approach which is aimed at the development of the technology required to achieve this potential - in such a way that it is transferable to the auto industry in the near term. The development of this design approach constituted Phase I of the Near-Term Hybrid-Vehicle Program. The major tasks in this program were: (1) Mission Analysis and Performance Specification Studies; (2) Design Tradeoff Studies; and (3) Preliminary Design. Detailed reports covering each of these tasks are included as appendices to this report and issued under separate cover; a fourth task, Sensitivity Studies, is also included in the report on the Design Tradeoff Studies. Because of the detail with which these appendices cover methodology and both interim and final results, the body of this report was prepared as a brief executive summary of the program activities and results, with appropriate references to the detailed material in the appendices.

  16. Antimatter Requirements and Energy Costs for Near-Term Propulsion Applications

    Science.gov (United States)

    Schmidt, G. R.; Gerrish, H. P.; Martin, J. J.; Smith, G. A.; Meyer, K. J.

    1999-01-01

    The superior energy density of antimatter annihilation has often been pointed to as the ultimate source of energy for propulsion. However, the limited capacity and very low efficiency of present-day antiproton production methods suggest that antimatter may be too costly to consider for near-term propulsion applications. We address this issue by assessing the antimatter requirements for six different types of propulsion concepts, including two in which antiprotons are used to drive energy release from combined fission/fusion. These requirements are compared against the capacity of both the current antimatter production infrastructure and the improved capabilities that could exist within the early part of next century. Results show that although it may be impractical to consider systems that rely on antimatter as the sole source of propulsive energy, the requirements for propulsion based on antimatter-assisted fission/fusion do fall within projected near-term production capabilities. In fact, a new facility designed solely for antiproton production but based on existing technology could feasibly support interstellar precursor missions and omniplanetary spaceflight with antimatter costs ranging up to $6.4 million per mission.

  17. Economic analysis of direct hydrogen PEM fuel cells in three near-term markets

    International Nuclear Information System (INIS)

    Mahadevan, K.; Stone, H.; Judd, K.; Paul, D.

    2007-01-01

    Direct hydrogen polymer electrolyte membrane fuel cells (H-PEMFCs) offer several near-term opportunities including backup power applications in state and local agencies of emergency response; forklifts in high throughput distribution centers; and, airport ground support equipment. This paper presented an analysis of the market requirements for introducing H-PEMFCs successfully, as well as an analysis of the lifecycle costs of H-PEMFCs and competing alternatives in three near-term markets. It also used three scenarios as examples of the potential for market penetration of H-PEMFCs. For each of the three potential opportunities, the paper presented the market requirements, a lifecycle cost analysis, and net present value of the lifecycle costs. A sensitivity analysis of the net present value of the lifecycle costs and of the average annual cost of owning and operating each of the H-PEMFC opportunities was also conducted. It was concluded that H-PEMFC-powered pallet trucks in high-productivity environments represented a promising early opportunity. However, the value of H-PEMFC-powered forklifts compared to existing alternatives was reduced for applications with lower hours of operation and declining labor rates. In addition, H-PEMFC-powered baggage tractors in airports were more expensive than battery-powered baggage tractors on a lifecycle cost basis. 9 tabs., 4 figs

  18. Near-term and next-generation nuclear power plant concepts

    International Nuclear Information System (INIS)

    Shiga, Shigenori; Handa, Norihiko; Heki, Hideaki

    2002-01-01

    Near-term and next-generation nuclear reactors will be required to have high economic competitiveness in the deregulated electricity market, flexibility with respect to electricity demand and investment, and good public acceptability. For near-term reactors in the 2010s, Toshiba is developing an improved advanced boiling water reactor (ABWR) based on the present ABWR with newly rationalized systems and components; a construction period of 36 months, one year shorter than the current period; and a power lineup ranging from 800 MWe to 1,600 MWe. For future reactors in the 2020s and beyond, Toshiba is developing the ABWR-II for large-scale, centralized power sources; a supercritical water-cooled power reactor with high thermal efficiency for medium-scale power sources; a modular reactor with siting flexibility for small-scale power sources; and a small, fast neutron reactor with inherent safety for independent power sources. From the viewpoint of efficient uranium resource utilization, a low-moderation BWR core with a high conversion factor is also being developed. (author)

  19. A hybrid analytical model for open-circuit field calculation of multilayer interior permanent magnet machines

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2017-08-01

    Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by the subdomain technique. • The magnetic scalar potential on the rotor surface is modeled as a trapezoidal distribution. - Abstract: Due to the complicated rotor structure and the nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field-calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computing time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.

  20. Novel Simplified Model for Asynchronous Machine with Consideration of Frequency Characteristic

    Directory of Open Access Journals (Sweden)

    Changchun Cai

    2014-01-01

    Full Text Available The frequency characteristic of electric equipment should be considered in the digital simulation of power systems. The traditional asynchronous machine third-order transient model excludes not only the stator transient but also the frequency characteristic, which narrows the application scope of the model and results in large errors under some special conditions. Based on the physical equivalent circuit and the Park model for asynchronous machines, this study proposes a novel asynchronous machine third-order transient model that takes the frequency characteristic into consideration. Under the new definitions of variables, the voltages behind the reactance are redefined as linear equations of the flux linkages. In this way, the rotor voltage equation is not associated with the derivative terms of frequency. However, the derivative terms of frequency should not always be ignored in applications of the traditional third-order transient model. Compared with the traditional third-order transient model, the novel simplified third-order transient model with consideration of the frequency characteristic is more accurate without increasing the order and complexity. Simulation results show that the novel third-order transient model for the asynchronous machine is suitable and effective, and is more accurate than the widely used traditional simplified third-order transient model under some special conditions with drastic frequency fluctuations.

  1. Lexical ambiguity resolution for Turkish in direct transfer machine translation models

    OpenAIRE

    Tantuğ, A. Cüneyd; Tantug, A. Cuneyd; Oflazer, Kemal; Adalı, Eşref; Adali, Esref

    2006-01-01

    This paper presents a statistical lexical ambiguity resolution method in direct transfer machine translation models in which the target language is Turkish. Since direct transfer MT models do not have full syntactic information, most of the lexical ambiguity resolution methods are not very helpful. Our disambiguation model is based on statistical language models. We have investigated the performances of some statistical language model types and parameters in lexical ambiguity resolution for o...

  2. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods.

    Science.gov (United States)

    Luo, Gang; Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-08-29

    To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient's weight kept rising in the past year). This process becomes infeasible with limited budgets. This study's goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new modeling problems crucial for care

  3. Automating Construction of Machine Learning Models With Clinical Big Data: Proposal Rationale and Methods

    Science.gov (United States)

    Stone, Bryan L; Johnson, Michael D; Tarczy-Hornoch, Peter; Wilcox, Adam B; Mooney, Sean D; Sheng, Xiaoming; Haug, Peter J; Nkoy, Flory L

    2017-01-01

    Background To improve health outcomes and cut health care costs, we often need to conduct prediction/classification using large clinical datasets (aka, clinical big data), for example, to identify high-risk patients for preventive interventions. Machine learning has been proposed as a key technology for doing this. Machine learning has won most data science competitions and could support many clinical activities, yet only 15% of hospitals use it for even limited purposes. Despite familiarity with data, health care researchers often lack machine learning expertise to directly use clinical big data, creating a hurdle in realizing value from their data. Health care researchers can work with data scientists with deep machine learning knowledge, but it takes time and effort for both parties to communicate effectively. Facing a shortage in the United States of data scientists and hiring competition from companies with deep pockets, health care systems have difficulty recruiting data scientists. Building and generalizing a machine learning model often requires hundreds to thousands of manual iterations by data scientists to select the following: (1) hyper-parameter values and complex algorithms that greatly affect model accuracy and (2) operators and periods for temporally aggregating clinical attributes (eg, whether a patient’s weight kept rising in the past year). This process becomes infeasible with limited budgets. Objective This study’s goal is to enable health care researchers to directly use clinical big data, make machine learning feasible with limited budgets and data scientist resources, and realize value from data. Methods This study will allow us to achieve the following: (1) finish developing the new software, Automated Machine Learning (Auto-ML), to automate model selection for machine learning with clinical big data and validate Auto-ML on seven benchmark modeling problems of clinical importance; (2) apply Auto-ML and novel methodology to two new

  4. A Temperature Sensor Clustering Method for Thermal Error Modeling of Heavy Milling Machine Tools

    Directory of Open Access Journals (Sweden)

    Fengchun Li

    2017-01-01

    Full Text Available A clustering method is an effective way to select the proper temperature sensor locations for thermal error modeling of machine tools. In this paper, a new temperature sensor clustering method is proposed. By analyzing the characteristics of the sensor temperatures in a heavy floor-type milling machine tool, an indicator involving both the Euclidean distance and the correlation coefficient is proposed to reflect the differences between temperature sensors, and the indicator is expressed as a distance matrix to be used for hierarchical clustering. Then, the weight coefficient in the distance matrix and the number of clusters (groups) are optimized by a genetic algorithm (GA), whose fitness function is rebuilt by establishing the thermal error model at one rotation speed and then deriving its accuracy at two different rotation speeds with a temperature disturbance. Thus, the parameters for clustering, as well as the final selection of the temperature sensors, are derived. Finally, the method proposed in this paper is verified on a machine tool. Based on the selected temperature sensors, a thermal error model of the machine tool is established and used to predict the thermal error. The results indicate that the selected temperature sensors can accurately predict the thermal error at different rotation speeds, and the proposed temperature sensor clustering method is expected to be used for thermal error modeling of other machine tools.
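
    A sketch of the combined dissimilarity, mixing normalised Euclidean distance and correlation with a weight w before hierarchical clustering; in the paper w and the group count are GA-optimized, here they are fixed for illustration and the data are synthetic:

      # Illustrative sensor clustering with a weighted Euclidean/correlation dissimilarity.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      rng = np.random.default_rng(8)
      T = rng.random((500, 12)).cumsum(axis=0)  # 12 synthetic temperature histories
      w, n_groups = 0.4, 4  # fixed here; GA-optimized in the paper

      eu = np.linalg.norm(T.T[:, None, :] - T.T[None, :, :], axis=2)
      eu /= eu.max()                      # normalised Euclidean distance between sensors
      corr = np.abs(np.corrcoef(T.T))     # |correlation| between sensors
      D = w * eu + (1 - w) * (1 - corr)   # combined dissimilarity matrix
      np.fill_diagonal(D, 0.0)

      labels = fcluster(linkage(squareform(D, checks=False), method="average"),
                        n_groups, criterion="maxclust")
      print("sensor groups:", labels)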

  5. Advanced Model of Squirrel Cage Induction Machine for Broken Rotor Bars Fault Using Multi Indicators

    Directory of Open Access Journals (Sweden)

    Ilias Ouachtouk

    2016-01-01

    Full Text Available Squirrel cage induction machines are the most commonly used electrical drives, but like any other machine, they are vulnerable to faults. Among the widespread failures of the induction machine are rotor faults. This paper focuses on the detection of broken rotor bar faults using multiple indicators. Diagnostics of asynchronous machine rotor faults can be accomplished by analysing anomalies in local machine variables such as torque, magnetic flux, stator current and the neutral voltage signature. The aim of this research is to summarize the existing models and to develop new models of squirrel cage induction motors that take the neutral voltage into consideration, and to study the effect of broken rotor bars on different electrical quantities such as the Park currents, torque, stator currents and neutral voltage. The performance of the model was assessed by comparing simulation and experimental results. The obtained results show the effectiveness of the model and allow the detection and diagnosis of these defects.

  6. Uncertainty analysis in rainfall-runoff modelling: Application of machine learning techniques

    NARCIS (Netherlands)

    Shrestha, D.l.

    2009-01-01

    This thesis presents powerful machine learning (ML) techniques to build predictive models of uncertainty with application to hydrological models. Two different methods are developed and tested. First one focuses on parameter uncertainty analysis by emulating the results of Monte Carlo simulations of

  8. PredicT-ML: a tool for automating machine learning model building with big clinical data.

    Science.gov (United States)

    Luo, Gang

    2016-01-01

    Predictive modeling is fundamental to transforming large clinical data sets, or "big clinical data," into actionable knowledge for various healthcare applications. Machine learning is a major predictive modeling approach, but two barriers make its use in healthcare challenging. First, a machine learning tool user must choose an algorithm and assign one or more model parameters called hyper-parameters before model training. The algorithm and hyper-parameter values used typically impact model accuracy by over 40%, but their selection requires many labor-intensive manual iterations that can be difficult even for computer scientists. Second, many clinical attributes are repeatedly recorded over time, requiring temporal aggregation before predictive modeling can be performed. Many labor-intensive manual iterations are required to identify a good pair of aggregation period and operator for each clinical attribute. Both barriers result in time and human resource bottlenecks, and preclude healthcare administrators and researchers from asking a series of what-if questions when probing opportunities to use predictive models to improve outcomes and reduce costs. This paper describes our design of and vision for PredicT-ML (prediction tool using machine learning), a software system that aims to overcome these barriers and automate machine learning model building with big clinical data. The paper presents the detailed design of PredicT-ML. PredicT-ML will open the use of big clinical data to thousands of healthcare administrators and researchers and increase the ability to advance clinical research and improve healthcare.

  9. The role of measurement and modelling of machine tools in improving product quality

    Directory of Open Access Journals (Sweden)

    Longstaff A.P.

    2013-01-01

    Full Text Available Manufacturing of high-quality components and assemblies is clearly recognised by industrialised nations as an important means of wealth generation. A "right first time" paradigm for producing finished components is the desirable goal to maximise economic benefits and reduce environmental impact. Such an ambition is only achievable through an accurate model of the machinery used to shape the finished article. In the first analysis, computer aided design (CAD) and computer aided manufacturing (CAM) can be used to produce an instruction list of three-dimensional coordinates and intervening tool paths to translate the intent of a design engineer into an unambiguous set of commands for a manufacturing machine. However, for the resultant manufacturing program to produce the desired output within the specified tolerance, the model of the machine has to be sufficiently accurate. In this paper, the spatial and temporal sources of error and various contemporary means of modelling are discussed. Limitations and assumptions in the models are highlighted and an estimate of their impact is made. Measurement of machine tools plays a vital role in establishing the accuracy of a particular machine and calibrating its unique model, but it is an often misunderstood and misapplied discipline. Typically, the individual errors of the machine will be quantified at a given moment in time, but without sufficient consideration either of the uncertainty of individual measurements or of the complex interaction between independently measured errors. This paper draws on the concept of a "conformance zone", as specified in ISO 230-1:2012, to emphasise the need for a fuller understanding of the complex uncertainty-of-measurement model for a machine tool. Work towards closing the gap in this understanding is described and limitations are noted.

  10. Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine

    Directory of Open Access Journals (Sweden)

    Hang-cheong Wong

    2012-01-01

    Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model that can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on the RVM. This study shows that the accuracy, training time, and updating time of the RVM model are superior to those of the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented and tested on a real car. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNN-based MPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controllers used in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace conventional PI controllers for engine air-ratio control.

  11. A proposed model for assessing service quality in small machining and industrial maintenance companies

    Directory of Open Access Journals (Sweden)

    Morvam dos Santos Netto

    2014-11-01

    Full Text Available Machining and industrial maintenance services include the repair (corrective maintenance) of equipment, activities involving the assembly-disassembly of equipment, fault diagnosis, machining operations, forming operations, welding processes, and the assembly and testing of equipment. This article proposes a model for assessing the quality of services provided by small machining and industrial maintenance companies, since there is a gap in the literature regarding this issue and because of the importance of small service companies in the socio-economic development of the country. The model is an adaptation of the SERVQUAL instrument, and the criteria determining the quality of services are designed according to the service cycle of a typical small machining and industrial maintenance company. In this sense, the Moments of Truth have been considered in the preparation of two separate questionnaires. The first questionnaire contains 24 statements that reflect the expectations of customers, and the second contains 24 statements that measure perceptions of service performance. An additional item was included in each questionnaire to assess, respectively, the overall expectation about the services and the overall company performance. It is therefore a model that considers the interfaces of the client/supplier relationship, the peculiarities of the machining and industrial maintenance service sector, and the company size.

  12. A Review of Current Machine Learning Methods Used for Cancer Recurrence Modeling and Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Hemphill, Geralyn M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type has become a necessity in cancer research. A major challenge in cancer management is the classification of patients into appropriate risk groups for better treatment and follow-up. Such risk assessment is critically important in order to optimize the patient’s health and the use of medical resources, as well as to avoid cancer recurrence. This paper focuses on the application of machine learning methods for predicting the likelihood of a recurrence of cancer. It is not meant to be an extensive review of the literature on the subject of machine learning techniques for cancer recurrence modeling. Other recent papers have performed such a review, and I will rely heavily on the results and outcomes from these papers. The electronic databases that were used for this review include PubMed, Google, and Google Scholar. Query terms used include “cancer recurrence modeling”, “cancer recurrence and machine learning”, “cancer recurrence modeling and machine learning”, and “machine learning for cancer recurrence and prediction”. The most recent and most applicable papers to the topic of this review have been included in the references. It also includes a list of modeling and classification methods to predict cancer recurrence.

  13. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modelling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table A′-axis on the workpiece side) was set up using rigid-body kinematics and homogeneous transformation matrices; it includes 43 error components, each of which can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of a workpiece is governed by the position of the tool centre point (TCP) relative to the workpiece: when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process consists of detecting the present tool path and analysing the geometric error of the RTTTR five-axis CNC machine tool, translating the current component positions to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.

  14. Geospatial Analysis of Near-Term Technical Potential of BECCS in the U.S.

    Science.gov (United States)

    Baik, E.; Sanchez, D.; Turner, P. A.; Mach, K. J.; Field, C. B.; Benson, S. M.

    2017-12-01

    Atmospheric carbon dioxide (CO2) removal using bioenergy with carbon capture and storage (BECCS) is crucial for achieving stringent climate change mitigation targets. To date, previous work discussing the feasibility of BECCS has largely focused on land availability and bioenergy potential, while CCS components - including capacity, injectivity, and location of potential storage sites - have not been thoroughly considered in the context of BECCS. A high-resolution geospatial analysis of both biomass production and potential geologic storage sites is conducted to consider the near-term deployment potential of BECCS in the U.S. The analysis quantifies the overlap between the biomass resource and CO2 storage locations within the context of storage capacity and injectivity. This analysis leverages county-level biomass production data from the U.S. Department of Energy's Billion Ton Report alongside potential CO2 geologic storage sites as provided by the USGS Assessment of Geologic Carbon Dioxide Storage Resources. Various types of lignocellulosic biomass (agricultural residues, dedicated energy crops, and woody biomass) result in a potential 370-400 Mt CO2/yr of negative emissions in 2020. Of that CO2, only 30-31% (110-120 Mt CO2/yr) comes from biomass co-located with a potential storage site. While large potential exists, there would need to be more than 250 50-MW biomass power plants fitted with CCS to capture all the co-located CO2 capacity in 2020. Neither absolute injectivity nor absolute storage capacity is likely to limit BECCS, but the results show regional capacity and injectivity constraints in the U.S. that had not been identified in previous BECCS analysis studies. The state of Illinois, the Gulf region, and western North Dakota emerge as the best locations for near-term deployment of BECCS with abundant biomass, sufficient storage capacity and injectivity, and the co-location of the two resources. Future studies assessing BECCS potential should

  15. Accelerated Monte Carlo system reliability analysis through machine-learning-based surrogate models of network connectivity

    International Nuclear Information System (INIS)

    Stern, R.E.; Song, J.; Work, D.B.

    2017-01-01

    The two-terminal reliability problem in system reliability analysis is known to be computationally intractable for large infrastructure graphs. Monte Carlo techniques can estimate the probability of a disconnection between two points in a network by selecting a representative sample of network component failure realizations and determining the source-terminal connectivity of each realization. To reduce the runtime required for the Monte Carlo approximation, this article proposes an approximate framework in which the connectivity check of each sample is estimated using a machine-learning-based classifier. The framework is implemented using both a support vector machine (SVM) and a logistic regression based surrogate model. Numerical experiments are performed on the California gas distribution network using the epicenter and magnitude of the 1989 Loma Prieta earthquake as well as randomly generated earthquakes. It is shown that the SVM and logistic regression surrogate models are able to predict network connectivity with accuracies of 99% for both methods, and are 1–2 orders of magnitude faster than a Monte Carlo method with an exact connectivity check. - Highlights: • Surrogate models of network connectivity are developed with machine-learning algorithms. • The developed surrogate models reduce the runtime required for Monte Carlo simulations. • Support vector machines and logistic regression are employed to develop the surrogate models. • A numerical example on the California gas distribution network demonstrates the proposed approach. • The developed models have accuracies of 99% and are 1–2 orders of magnitude faster than MCS.
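
    A minimal sketch of the surrogate idea on a toy random graph with invented failure probabilities: a classifier trained on a small labeled sample of failure realizations replaces the exact connectivity check on a much larger Monte Carlo sample. The graph, probabilities, and sample sizes are all assumptions for illustration.

```python
# Surrogate-accelerated Monte Carlo two-terminal reliability (illustrative sketch).
import networkx as nx
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(30, 0.15, seed=0)   # stand-in for an infrastructure network
edges = list(G.edges())
p_fail = rng.uniform(0.05, 0.3, len(edges))  # invented per-component failure probabilities
source, terminal = 0, 29

def sample_state():
    return (rng.random(len(edges)) > p_fail).astype(int)  # 1 = component survives

def connected(state):
    H = nx.Graph(e for e, s in zip(edges, state) if s)
    H.add_nodes_from(G)                       # keep isolated nodes so has_path is defined
    return int(nx.has_path(H, source, terminal))

# Train the surrogate on a small labeled sample of failure realizations.
X_train = np.array([sample_state() for _ in range(2000)])
y_train = np.array([connected(x) for x in X_train])
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Estimate the disconnection probability on a large sample with the cheap surrogate.
X_big = np.array([sample_state() for _ in range(100000)])
p_disconnect = 1.0 - clf.predict(X_big).mean()
print(f"surrogate estimate of disconnection probability: {p_disconnect:.4f}")
```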

  16. Predicting Mouse Liver Microsomal Stability with "Pruned" Machine Learning Models and Public Data.

    Science.gov (United States)

    Perryman, Alexander L; Stratton, Thomas P; Ekins, Sean; Freundlich, Joel S

    2016-02-01

    Mouse efficacy studies are a critical hurdle to advance translational research of potential therapeutic compounds for many diseases. Although mouse liver microsomal (MLM) stability studies are not a perfect surrogate for in vivo studies of metabolic clearance, they are the initial model system used to assess metabolic stability. Consequently, we explored the development of machine learning models that can enhance the probability of identifying compounds possessing MLM stability. Published assays on MLM half-life values were identified in PubChem, reformatted, and curated to create a training set with 894 unique small molecules. These data were used to construct machine learning models assessed with internal cross-validation, external tests with a published set of antitubercular compounds, and independent validation with an additional diverse set of 571 compounds (PubChem data on percent metabolism). "Pruning" out the moderately unstable / moderately stable compounds from the training set produced models with superior predictive power. Bayesian models displayed the best predictive power for identifying compounds with a half-life ≥1 h. Our results suggest the pruning strategy may be of general benefit to improve test set enrichment and provide machine learning models with enhanced predictive value for the MLM stability of small organic molecules. This study represents the most exhaustive study to date of using machine learning approaches with MLM data from public sources.
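
    A minimal sketch of the pruning idea under assumed data: compounds whose half-lives fall in a middle band around the 1 h cutoff are dropped before training a stable/unstable classifier. The file name, band edges, and the random forest stand-in (the study used Bayesian models) are all assumptions.

```python
# "Pruning" the training set before classification (illustrative sketch).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier  # stand-in for the study's Bayesian models

df = pd.read_csv("mlm_half_life.csv")        # hypothetical file: descriptor columns + half_life (h)
stable, unstable = 1.0, 0.5                  # assumed band edges around the 1 h cutoff

# Keep only clearly stable or clearly unstable compounds; drop the middle band.
pruned = df[(df.half_life >= stable) | (df.half_life <= unstable)].copy()
pruned["label"] = (pruned.half_life >= stable).astype(int)

X = pruned.drop(columns=["half_life", "label"])
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, pruned["label"])
print("training-set fraction labeled stable:", pruned["label"].mean())
```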

  17. MODELS OF LIVE MIGRATION WITH ITERATIVE APPROACH AND MOVE OF VIRTUAL MACHINES

    Directory of Open Access Journals (Sweden)

    S. M. Aleksankov

    2015-11-01

    Full Text Available Subject of Research. The processes of live migration without shared storage using the pre-copy approach, and of move migration, are researched. Migration of virtual machines is an important capability of virtualization technology: it enables applications to move transparently with their runtime environments between physical machines. Live migration has become a notable technology for efficient load balancing and for optimizing the deployment of virtual machines to physical hosts in data centres. Before the advent of live migration, only network migration (the so-called «Move») was used, which entails stopping the virtual machine execution while copying it to another physical server and, consequently, unavailability of the service. Method. Algorithms for live migration without shared storage with the pre-copy approach and for move migration of virtual machines are reviewed from the perspective of migration time and service unavailability during migration. Main Results. Analytical models are proposed that predict the migration time of virtual machines and the unavailability of services during migration with such technologies as live migration with the pre-copy approach without shared storage and move migration. The latest works on assessing service unavailability and migration time for live migration without shared storage describe experimental results that support general conclusions about how these times change, but do not allow their values to be predicted. Practical Significance. The proposed models can be used for predicting the migration time and the time of unavailability of services, for example, when implementing preventive and emergency works on the physical nodes in data centres.

  18. Analysis of near-term production and market opportunities for hydrogen and related activities

    Energy Technology Data Exchange (ETDEWEB)

    Mauro, R.; Leach, S. [National Hydrogen Association, Washington, DC (United States)

    1995-09-01

    This paper summarizes current and planned activities in the areas of hydrogen production and use, near-term venture opportunities, and codes and standards. The rationale for these efforts is to assess industry interest and engage in activities that move hydrogen technologies down the path to commercialization. Some of the work presented in this document is a condensed, preliminary version of reports being prepared under the DOE/NREL contract. In addition, the NHA work funded by Westinghouse Savannah River Corporation (WSRC) to explore the opportunities and industry interest in a Hydrogen Research Center is briefly described. Finally, the planned support of and industry input to the Hydrogen Technical Advisory Panel (HTAP) on hydrogen demonstration projects is discussed.

  19. Preliminary results of steady state characterization of near term electric vehicle breadboard propulsion system

    Science.gov (United States)

    Sargent, N. B.

    1980-01-01

    The steady state test results on a breadboard version of the General Electric Near Term Electric Vehicle (ETV-1) are discussed. The breadboard was built using exact duplicate vehicle propulsion system components with few exceptions. Full instrumentation was provided to measure individual component efficiencies. Tests were conducted on a 50 hp dynamometer in a road load simulator facility. Characterization of the propulsion system over the lower half of the speed-torque operating range has shown the system efficiency to be composed of a predominant motor loss plus a speed dependent transaxle loss. At the lower speeds with normal road loads the armature chopper loss is also a significant factor. At the conditions corresponding to a cycle for which the vehicle system was specifically designed, the efficiencies are near optimum.

  20. Icterus Neonatorum in Near-Term and Term Infants; An overview

    Directory of Open Access Journals (Sweden)

    Rehan Ali

    2012-05-01

    Full Text Available Neonatal jaundice is the yellowish discoloration of the skin and/or sclerae of newborn infants caused by tissue deposition of bilirubin. Physiological jaundice is mild, unconjugated (indirect-reacting) bilirubinaemia, and affects nearly all newborns. Physiological jaundice levels typically peak at 5 to 6 mg/dL (86 to 103 μmol/L) at 72 to 96 hours of age, and do not exceed 17 to 18 mg/dL (291–308 μmol/L). Levels may not peak until seven days of age in Asian infants, or in infants born at 35 to 37 weeks’ gestation. Higher levels of unconjugated hyperbilirubinaemia are considered pathological and occur in a variety of conditions. The clinical features and management of unconjugated hyperbilirubinaemia in healthy near-term and term infants, as well as bilirubin toxicity and the prevention of kernicterus, are reviewed here. The pathogenesis and aetiology of this disorder are discussed separately.

  1. Heliostat Manufacturing for near-term markets. Phase II final report

    International Nuclear Information System (INIS)

    1998-01-01

    This report describes a project by Science Applications International Corporation and its subcontractors Boeing/Rocketdyne and Bechtel Corp. to develop manufacturing technology for production of SAIC stretched membrane heliostats. The project consists of three phases, of which two are complete. The first phase had as its goal to identify and complete a detailed evaluation of manufacturing technology, process changes, and design enhancements to be pursued for near-term heliostat markets. In the second phase, the design of the SAIC stretched membrane heliostat was refined, manufacturing tooling for mirror facet and structural component fabrication was implemented, and four proof-of-concept/test heliostats were produced and installed in three locations. The proposed plan for Phase III calls for improvements in production tooling to enhance product quality and prepare for increased production capacity. This project is part of the U.S. Department of Energy's Solar Manufacturing Technology Program (SolMaT)

  2. Chemicals from Biomass: A Market Assessment of Bioproducts with Near-Term Potential

    Energy Technology Data Exchange (ETDEWEB)

    Biddy, Mary J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Scarlata, Christopher [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kinchin, Christopher [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-03-23

    Production of chemicals from biomass offers a promising opportunity to reduce U.S. dependence on imported oil, as well as to improve the overall economics and sustainability of an integrated biorefinery. Given the increasing momentum toward the deployment and scale-up of bioproducts, this report strives to: (1) summarize near-term potential opportunities for growth in biomass-derived products; (2) identify the production leaders who are actively scaling up these chemical production routes; (3) review the consumers and market champions who are supporting these efforts; (4) understand the key drivers and challenges to move biomass-derived chemicals to market; and (5) evaluate the impact that scale-up of chemical strategies will have on accelerating the production of biofuels.

  3. Photon beam modelling with Pinnacle3 Treatment Planning System for a Rokus M Co-60 Machine

    International Nuclear Information System (INIS)

    Dulcescu, Mihaela; Murgulet Cristian

    2008-01-01

    The basic relationships of the convolution/superposition dose calculation technique are reviewed, and a modelling technique that can be used for obtaining a satisfactory beam model for a commercially available convolution/superposition-based treatment planning system is described. A fluence energy spectrum for a Co-60 treatment machine obtained from a Monte Carlo simulation was used for modelling the fluence spectrum for a Rokus M machine. In order to achieve this model we measured the depth dose distribution and the dose profiles with a Wellhofer dosimetry system. The primary fluence was iteratively modelled by comparing the computed depth dose curves and beam profiles with the depth dose curves and crossbeam profiles measured in a water phantom. The objective of beam modelling is to build a model of the primary fluence that the patient is exposed to, which can then be used for the calculation of the dose deposited in the patient. (authors)

  4. Static Object Detection Based on a Dual Background Model and a Finite-State Machine

    Directory of Open Access Journals (Sweden)

    Heras Evangelio Rubén

    2011-01-01

    Full Text Available Detecting static objects in video sequences has a high relevance in many surveillance applications, such as the detection of abandoned objects in public areas. In this paper, we present a system for the detection of static objects in crowded scenes. Based on the detection results of two background models learning at different rates, pixels are classified with the help of a finite-state machine. The background is modelled by two mixtures of Gaussians with identical parameters except for the learning rate. The state machine provides the means for interpreting the results obtained from background subtraction; it can be implemented as a look-up table with negligible computational cost and it can be easily extended. Due to the definition of the states in the state machine, the system can be used either fully automatically or interactively, making it extremely suitable for real-life surveillance applications. The system was successfully validated with several public datasets.
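
    A hedged sketch of the pixel-level idea: with a fast- and a slow-learning background model, per-pixel states can be advanced with a single table lookup. The state set and transition policy below are invented for illustration, not the paper's exact machine.

```python
# Dual-background classification via a look-up table (illustrative sketch).
import numpy as np

BACKGROUND, CANDIDATE, STATIC, GHOST = range(4)  # invented state set

# Transition table indexed by [state, fast_fg, slow_fg]. A stand-in policy:
# foreground in both models -> moving candidate; foreground only in the slow
# model -> static object; foreground only in the fast model -> ghost region.
T = np.empty((4, 2, 2), dtype=np.uint8)
T[:, 0, 0] = BACKGROUND
T[:, 1, 1] = CANDIDATE
T[:, 0, 1] = STATIC
T[:, 1, 0] = GHOST

def step(state, fast_fg, slow_fg):
    """Advance per-pixel states with one vectorised table lookup."""
    return T[state, fast_fg.astype(int), slow_fg.astype(int)]

# Usage on a tiny 2x2 "image" of states and foreground masks:
state = np.full((2, 2), BACKGROUND, dtype=np.uint8)
fast = np.array([[0, 1], [0, 1]]); slow = np.array([[0, 1], [1, 0]])
print(step(state, fast, slow))
```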

  5. New models for energy beam machining enable accurate generation of free forms.

    Science.gov (United States)

    Axinte, Dragos; Billingham, John; Bilbao Guillerna, Aitor

    2017-09-01

    We demonstrate that, despite differences in their nature, many energy beam controlled-depth machining processes (for example, waterjet, pulsed laser, focused ion beam) can be modeled using the same mathematical framework: a partial differential evolution equation that requires only simple calibrations to capture the physics of each process. The inverse problem can be solved efficiently through the numerical solution of the adjoint problem and leads to beam paths that generate prescribed three-dimensional features with minimal error. The viability of this modeling approach has been demonstrated by generating accurate free-form surfaces using three processes that operate at very different length scales and with different physical principles for material removal: waterjet, pulsed laser, and focused ion beam machining. Our approach can be used to accurately machine materials that are hard to process by other means for scalable applications in a wide variety of industries.
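
    A minimal sketch of the forward problem in such an evolution-equation view, with everything invented (footprint shape, constants, dwell profile): the surface height is lowered by a moving beam footprint, one explicit Euler step at a time. Solving the inverse problem, as the paper does, would amount to choosing the dwell profile that produces a prescribed final surface.

```python
# Forward etching model dz/dt = -k * w(t) * f(x - c(t)) (illustrative sketch).
import numpy as np

x = np.linspace(-5, 5, 501)
z = np.zeros_like(x)                          # initial flat surface
footprint = lambda r: np.exp(-r**2 / 0.5)     # assumed Gaussian beam footprint f
k, dt = 0.1, 0.01                             # invented removal rate and time step

path = np.linspace(-3, 3, 2000)               # beam centre positions c(t)
dwell = 1.0 + 0.5 * np.cos(path)              # hypothetical dwell-time weighting w(t)

for c, w in zip(path, dwell):
    z -= k * dt * w * footprint(x - c)        # explicit Euler step of the etch equation

print("max depth:", -z.min())
```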

  6. LINEAR KERNEL SUPPORT VECTOR MACHINES FOR MODELING PORE-WATER PRESSURE RESPONSES

    Directory of Open Access Journals (Sweden)

    KHAMARUZAMAN W. YUSOF

    2017-08-01

    Full Text Available Pore-water pressure responses are vital in many aspects of slope management, design and monitoring. Their measurement, however, is difficult, expensive and time-consuming, and studies on their prediction are lacking. Support vector machines with a linear kernel were used here to predict the response of pore-water pressure to rainfall. Pore-water pressure response data were collected from a slope instrumentation programme. Support vector machine meta-parameter calibration and model development were carried out using grid search and k-fold cross-validation. The mean square error of the model on scaled test data is 0.0015 and the coefficient of determination is 0.9321. Although the pore-water pressure response to rainfall is a complex nonlinear process, linear kernel support vector machines can be employed where some accuracy can be sacrificed for computational ease and speed.
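
    A minimal sketch of the calibration workflow described above, assuming a hypothetical CSV of rainfall-derived features and measured pressures: a linear-kernel SVR is tuned by grid search under k-fold cross-validation. The file name, column names, and grid values are invented.

```python
# Linear-kernel SVR tuned by grid search + k-fold CV (illustrative sketch).
import pandas as pd
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

df = pd.read_csv("pore_pressure.csv")             # hypothetical: rainfall lags + pressure
X, y = df.drop(columns="pressure"), df["pressure"]

pipe = make_pipeline(StandardScaler(), SVR(kernel="linear"))
grid = {"svr__C": [0.1, 1, 10, 100], "svr__epsilon": [0.001, 0.01, 0.1]}
search = GridSearchCV(pipe, grid,
                      cv=KFold(n_splits=10, shuffle=True, random_state=0),
                      scoring="neg_mean_squared_error").fit(X, y)
print(search.best_params_, -search.best_score_)   # best meta-parameters and MSE
```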

  7. A Novel Machine Learning Strategy Based on Two-Dimensional Numerical Models in Financial Engineering

    Directory of Open Access Journals (Sweden)

    Qingzhen Xu

    2013-01-01

    Full Text Available Machine learning is the most commonly used technique to address larger and more complex tasks by analyzing the most relevant information already present in databases. In order to better predict the future trend of the index, this paper proposes a two-dimensional numerical model for machine learning to simulate a major U.S. stock market index, and uses a nonlinear implicit finite-difference method to find numerical solutions of the two-dimensional simulation model. The proposed machine learning method uses partial differential equations to predict the stock market and can be extensively used to accelerate large-scale processing of historical data. The experimental results show that the proposed algorithm reduces the prediction error and improves forecasting precision.

  8. Advanced Backstepping controller for induction generator using multi-scalar machine model for wind power purposes

    Energy Technology Data Exchange (ETDEWEB)

    Nemmour, A.L.; Khezzar, A.; Hacil, M.; Louze, L. [Laboratoire d' Electrotechnique LEC, Universite Mentouri, Constantine (Algeria); Mehazzem, F. [Groupe ESIEE, Paris Est University (France); Abdessemed, R. [Laboratoire d' Electrotechnique LEB, Universite El Hadj Lakhdar, Batna (Algeria)

    2010-10-15

    This paper presents a new non-linear control algorithm based on the backstepping approach for an isolated induction generator (IG) driven by a wind turbine. For this purpose, and in order to reduce the complexity of the real induction machine mathematical model, the multi-scalar machine model is exploited. The machine delivers active power to the load via a converter connected to a single capacitor on the DC side. During the voltage build-up process, the necessary stator current references to be injected by the converter are calculated from the desired active power to be sent to the load and the rotor flux magnitude. Simulation results show that the proposed control provides perfect tracking of the DC-bus voltage and the rotor flux magnitude to their reference trajectories. (author)

  9. Simulation Model of the Weaving Machine "Camel” and Selection of the Sufficient Driving Motor

    Directory of Open Access Journals (Sweden)

    Ondřej MAREK

    2013-12-01

    Full Text Available The paper deals with the mathematical model of the weaving machine CAMEL. This machine consists of many moving parts (rotational and translational movements, belts, flexible elements) and is therefore very complex. CAMEL uses servomotors working in an electronic cam regime. This means that the actual angular velocity of the rotor is not constant, so it is important to reduce the moment of inertia of the rotating elements; the inertia of the rotor of the drive is very important too. The existing simulation model can help to choose the optimal drive of the machine. It also allows selecting the best displacement laws for different speeds (rpm) in order to decrease the effective torque, which is proportional to the heating of the servomotor.

  10. WITHDRAWN: Prostaglandins versus oxytocin for prelabour rupture of membranes at or near term.

    Science.gov (United States)

    Tan, B P; Hannah, M E

    2007-07-18

    The conventional method of induction of labour is with intravenous oxytocin. More recently, induction with prostaglandins, followed by an infusion of oxytocin if necessary, has been used. The objective of this review was to assess the effects of induction of labour with prostaglandins compared with oxytocin, at or near term. We searched the Cochrane Pregnancy and Childbirth Group trials register. Randomised and quasi-randomised trials of early stimulation of uterine contractions with prostaglandins (with or without oxytocin) versus with oxytocin alone (not combined with prostaglandins) in women with spontaneous rupture of membranes before labour (34 weeks or more gestation). Two reviewers assessed trial quality and extracted data. Seventeen trials were included. Most of the trials were of moderate to good quality. Based on six trials, prostaglandins compared with oxytocin were associated with increased chorioamnionitis (odds ratio of 1.49, 95% confidence interval 1.07 to 2.09) and maternal nausea/vomiting. Based on eight trials, prostaglandins were associated with a decrease in epidural analgesia, odds ratio of 0.85, 95% confidence interval 0.73 to 0.98 and internal fetal heart rate monitoring (based on one trial). Caesarean section, endometritis and perinatal mortality were not significantly different between the groups. Women with prelabour rupture of membranes at or near term having their labour induced with prostaglandins appear to have a lower risk of epidural analgesia and fetal heart rate monitoring. However there appears to be an increased risk of chorioamnionitis and nausea/vomiting with prostaglandins compared to oxytocin. [This abstract has been prepared centrally.]

  11. Prostaglandins versus oxytocin for prelabour rupture of membranes at or near term.

    Science.gov (United States)

    Tan, B P; Hannah, M E

    2000-01-01

    The conventional method of induction of labour is with intravenous oxytocin. More recently, induction with prostaglandins, followed by an infusion of oxytocin if necessary, has been used. The objective of this review was to assess the effects of induction of labour with prostaglandins compared with oxytocin, at or near term. We searched the Cochrane Pregnancy and Childbirth Group trials register. Randomised and quasi-randomised trials of early stimulation of uterine contractions with prostaglandins (with or without oxytocin) versus with oxytocin alone (not combined with prostaglandins) in women with spontaneous rupture of membranes before labour (34 weeks or more gestation). Two reviewers assessed trial quality and extracted data. Seventeen trials were included. Most of the trials were of moderate to good quality. Based on six trials, prostaglandins compared with oxytocin were associated with increased chorioamnionitis (odds ratio of 1.49, 95% confidence interval 1.07 to 2.09) and maternal nausea/vomiting. Based on eight trials, prostaglandins were associated with a decrease in epidural analgesia, odds ratio of 0.85, 95% confidence interval 0.73 to 0.98 and internal fetal heart rate monitoring (based on one trial). Caesarean section, endometritis and perinatal mortality were not significantly different between the groups. Women with prelabour rupture of membranes at or near term having their labour induced with prostaglandins appear to have a lower risk of epidural analgesia and fetal heart rate monitoring. However there appears to be an increased risk of chorioamnionitis and nausea/vomiting with prostaglandins compared to oxytocin.

  12. River suspended sediment modelling using the CART model: A comparative study of machine learning techniques.

    Science.gov (United States)

    Choubin, Bahram; Darabi, Hamid; Rahmati, Omid; Sajedi-Hosseini, Farzaneh; Kløve, Bjørn

    2018-02-15

    Suspended sediment load (SSL) modelling is an important issue in integrated environmental and water resources management, as sediment affects water quality and aquatic habitats. Although classification and regression tree (CART) algorithms have been applied successfully to ecological and geomorphological modelling, their applicability to SSL estimation in rivers has not yet been investigated. In this study, we evaluated use of a CART model to estimate SSL based on hydro-meteorological data. We also compared the accuracy of the CART model with that of the four most commonly used models for time series modelling of SSL, i.e. adaptive neuro-fuzzy inference system (ANFIS), multi-layer perceptron (MLP) neural network and two kernels of support vector machines (RBF-SVM and P-SVM). The models were calibrated using river discharge, stage, rainfall and monthly SSL data for the Kareh-Sang River gauging station in the Haraz watershed in northern Iran, where sediment transport is a considerable issue. In addition, different combinations of input data with various time lags were explored to estimate SSL. The best input combination was identified through trial and error, percent bias (PBIAS), Taylor diagrams and violin plots for each model. For evaluating the capability of the models, different statistics such as Nash-Sutcliffe efficiency (NSE), Kling-Gupta efficiency (KGE) and percent bias (PBIAS) were used. The results showed that the CART model performed best in predicting SSL (NSE=0.77, KGE=0.8, PBIAS<±15), followed by RBF-SVM (NSE=0.68, KGE=0.72, PBIAS<±15). Thus the CART model can be a helpful tool in basins where hydro-meteorological data are readily available. Copyright © 2017 Elsevier B.V. All rights reserved.
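
    A minimal sketch of the CART step under assumed data: a regression tree predicts SSL from lagged hydro-meteorological inputs and is scored with Nash-Sutcliffe efficiency. The file name, column names, lags, and tree depth are invented for illustration.

```python
# Regression-tree SSL model with lagged inputs and NSE scoring (illustrative sketch).
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("kareh_sang.csv")        # hypothetical: discharge, stage, rainfall, ssl
for lag in (1, 2):                        # explore lagged input combinations
    for col in ("discharge", "stage", "rainfall"):
        df[f"{col}_t-{lag}"] = df[col].shift(lag)
df = df.dropna()

X, y = df.drop(columns="ssl"), df["ssl"]
split = int(0.7 * len(df))                # chronological train/test split
model = DecisionTreeRegressor(max_depth=6, random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])

def nse(obs, sim):                        # Nash-Sutcliffe efficiency
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print("NSE:", nse(y[split:].to_numpy(), pred))
```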

  13. Design ensemble machine learning model for breast cancer diagnosis.

    Science.gov (United States)

    Hsieh, Sheau-Ling; Hsieh, Sung-Huai; Cheng, Po-Hsun; Chen, Chi-Huang; Hsu, Kai-Ping; Lee, I-Shun; Wang, Zhenyu; Lai, Feipei

    2012-10-01

    In this paper, we classify breast cancer using medical diagnostic data. Information gain has been adopted for feature selection. Neural fuzzy (NF), k-nearest neighbor (KNN) and quadratic classifier (QC) schemes have been developed for classification, each as a single model as well as in their associated ensemble forms. In addition, a combined ensemble model of these three schemes has been constructed for further validation. The experimental results indicate that ensemble learning performs better than the individual single models. Moreover, the combined ensemble model achieves the highest classification accuracy for breast cancer among all models.
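
    A hedged sketch of a combined ensemble in the spirit described above, using scikit-learn stand-ins (QDA for the quadratic classifier, an MLP for the neuro-fuzzy scheme) and majority voting on the public Wisconsin breast cancer dataset; it is an illustration, not the authors' pipeline.

```python
# Majority-vote ensemble of three classifier families (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
ensemble = VotingClassifier([
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("qc", QuadraticDiscriminantAnalysis()),       # stand-in quadratic classifier
    ("nn", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
], voting="hard")
print("5-fold CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```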

  14. Improving wave forecasting by integrating ensemble modelling and machine learning

    Science.gov (United States)

    O'Donncha, F.; Zhang, Y.; James, S. C.

    2017-12-01

    Modern smart-grid networks use technologies to instantly relay information on supply and demand to support effective decision making. Integration of renewable-energy resources with these systems demands accurate forecasting of energy production (and demand) capacities. For wave-energy converters, this requires wave-condition forecasting to enable estimates of energy production. Current operational wave forecasting systems exhibit substantial errors with wave-height RMSEs of 40 to 60 cm being typical, which limits the reliability of energy-generation predictions thereby impeding integration with the distribution grid. In this study, we integrate physics-based models with statistical learning aggregation techniques that combine forecasts from multiple, independent models into a single "best-estimate" prediction of the true state. The Simulating Waves Nearshore physics-based model is used to compute wind- and currents-augmented waves in the Monterey Bay area. Ensembles are developed based on multiple simulations perturbing input data (wave characteristics supplied at the model boundaries and winds) to the model. A learning-aggregation technique uses past observations and past model forecasts to calculate a weight for each model. The aggregated forecasts are compared to observation data to quantify the performance of the model ensemble and aggregation techniques. The appropriately weighted ensemble model outperforms an individual ensemble member with regard to forecasting wave conditions.
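
    A minimal sketch, on invented synthetic data, of one standard learning-aggregation scheme consistent with the description above: member weights are updated from past squared errors via an exponentially weighted average forecaster. The learning rate and noise levels are assumptions.

```python
# Exponentially weighted aggregation of ensemble forecasts (illustrative sketch).
import numpy as np

rng = np.random.default_rng(1)
T, M = 200, 5                                   # time steps, ensemble members
truth = np.sin(np.linspace(0, 12, T))           # stand-in observed wave height
forecasts = truth[None, :] + rng.normal(0, [[.2], [.3], [.4], [.5], [.6]], (M, T))

eta = 2.0                                       # assumed learning rate
w = np.ones(M) / M                              # start with uniform weights
aggregated = np.empty(T)
for t in range(T):
    aggregated[t] = w @ forecasts[:, t]         # weighted "best-estimate" forecast
    losses = (forecasts[:, t] - truth[t]) ** 2  # realised member errors
    w *= np.exp(-eta * losses)                  # down-weight poor performers
    w /= w.sum()

print("RMSE aggregated:", np.sqrt(np.mean((aggregated - truth) ** 2)))
```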

  15. A Data Flow Model to Solve the Data Distribution Changing Problem in Machine Learning

    Directory of Open Access Journals (Sweden)

    Shang Bo-Wen

    2016-01-01

    Full Text Available Continuous prediction is widely used in communities ranging from social to business applications, and machine learning is an important method for this problem. When we use machine learning for prediction, we use the data in the training set to fit the model and estimate the distribution of data in the test set. But when we use machine learning for continuous prediction, we receive new data as time goes by and use them to predict future data, and a problem may arise: as the size of the data set increases over time, the distribution changes and much garbage data accumulates in the training set. The garbage data should be removed, as they reduce the accuracy of the prediction. The main contribution of this article is using new data to detect the timeliness of historical data and remove the garbage data. We build a data flow model that describes how data flow among the test set, training set, validation set and the garbage set, improving the accuracy of prediction. As the data set changes, the best machine learning model changes as well. We therefore design a hybrid voting algorithm to fit the data set better: it uses seven machine learning models to predict the same problem and uses the validation set to put different weights on the models, giving better models more weight. Experimental results show that, when the distribution of the data set changes over time, our data flow model can remove most of the garbage data and achieve a better result than the traditional method of adding all data to the data set, and our hybrid voting algorithm achieves a better prediction result than the average accuracy of the individual models.

  16. A Novel Application of Machine Learning Methods to Model Microcontroller Upset Due to Intentional Electromagnetic Interference

    Science.gov (United States)

    Bilalic, Rusmir

    A novel application of support vector machines (SVMs), artificial neural networks (ANNs), and Gaussian processes (GPs) for machine learning (GPML) to model microcontroller unit (MCU) upset due to intentional electromagnetic interference (IEMI) is presented. In this approach, an MCU performs a counting operation (0-7) while electromagnetic interference in the form of a radio frequency (RF) pulse is direct-injected into the MCU clock line. Injection times with respect to the clock signal are the clock-low, clock rising edge, clock-high, and clock falling edge periods in the clock window during which the MCU is performing initialization and executing the counting procedure. The intent is to cause disruption in the counting operation and model the probability of effect (PoE) using machine learning tools. Five experiments were executed as part of this research, each of which contained a set of 38,300 training points and 38,300 test points, for a total of 383,000 points, with the following experiment variables: injection time with respect to the clock signal, injected RF power, injected RF pulse width, and injected RF frequency. For the 191,500 training points, the average training error was 12.47%, while for the 191,500 test points the average test error was 14.85%, meaning that on average the machine was able to predict MCU upset with 85.15% accuracy. Leaving out the results for the worst-performing model (SVM with a linear kernel), the test prediction accuracy for the remaining machines is almost 89%. All three machine learning methods (ANNs, SVMs, and GPML) showed excellent and consistent results in their ability to model and predict the PoE on an MCU due to IEMI. The GP approach performed best during training with a 7.43% average training error, while the ANN technique was most accurate during testing with a 10.80% error.

  17. Evaluation of discrete modeling efficiency of asynchronous electric machines

    OpenAIRE

    Byczkowska-Lipińska, Liliana; Stakhiv, Petro; Hoholyuk, Oksana; Vasylchyshyn, Ivanna

    2011-01-01

    In the paper, the problem of constructing effective mathematical macromodels in state-variable form for asynchronous motor transient analysis is considered. These macromodels were compared with traditional mathematical models of asynchronous motors, including models built into the MATLAB/Simulink software, and their efficiency was analysed.

  18. A mechanistic ultrasonic vibration amplitude model during rotary ultrasonic machining of CFRP composites.

    Science.gov (United States)

    Ning, Fuda; Wang, Hui; Cong, Weilong; Fernando, P K S C

    2017-04-01

    Rotary ultrasonic machining (RUM) has been investigated for machining of brittle, ductile, as well as composite materials. Ultrasonic vibration amplitude, as one of the most important input variables, affects almost all the output variables in RUM. Numerous investigations on measuring ultrasonic vibration amplitude without RUM machining have been reported. In recent years, ultrasonic vibration amplitude measurement with RUM of ductile materials has been investigated. It is found that the ultrasonic vibration amplitude with RUM was different from that without RUM under the same input variables. RUM is primarily used in machining of brittle materials through brittle fracture removal. For this reason, the method for measuring ultrasonic vibration amplitude in RUM of ductile materials is not feasible for measuring it in RUM of brittle materials. However, there are no reported methods for measuring ultrasonic vibration amplitude in RUM of brittle materials. In this study, ultrasonic vibration amplitude in RUM of brittle materials is investigated by establishing a mechanistic amplitude model through cutting force. Pilot experiments are conducted to validate the calculation model. The results show that there are no significant differences between amplitude values calculated by the model and those obtained from experimental investigations. The model provides a relationship between ultrasonic vibration amplitude and input variables, which is a foundation for building models to predict other output variables in RUM. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Performance Analysis and Modeling of a Tubular Staggered-Tooth Transverse-Flux PM Linear Machine

    Directory of Open Access Journals (Sweden)

    Shaohong Zhu

    2016-03-01

    Full Text Available This paper investigates the performance analysis and mathematical modeling of a staggered-tooth transverse-flux permanent-magnet linear synchronous machine (STTF-PMLSM), which is characterized by a simple structure and low flux leakage. Firstly, the structural advantages and operating principle of the STTF-PMLSM are introduced, and a simplified one-phase model is established to investigate the performance of the machine and save computation time. Then, the electromagnetic characteristics, including no-load flux linkage, electromotive force (EMF), inductance, detent force and thrust force, are simulated and analyzed in detail. After that, theoretical analysis of the detent force, thrust force and power factor is carried out, and the theoretical results are validated with the 3-D finite-element method (FEM). Finally, an improved mathematical model of the machine based on the d-q rotating coordinate system is proposed, in which inductance harmonics and the coupling between d- and q-axis inductances are considered. The results from the proposed mathematical model are in accordance with the results from 3-D FEM, which proves the validity and effectiveness of the proposed model. This provides a powerful foundation for the control of the machine.

  20. Modelling habitat requirements of white-clawed crayfish (Austropotamobius pallipes using support vector machines

    Directory of Open Access Journals (Sweden)

    Favaro L.

    2011-07-01

    Full Text Available The white-clawed crayfish's habitat has been profoundly modified in Piedmont (NW Italy) due to environmental changes caused by human impact, and native populations have consequently decreased markedly. In this research project, support vector machines were tested as possible tools for evaluating the ecological factors that determine the presence of white-clawed crayfish. A set of 175 sites was investigated, 98 of which recorded the presence of Austropotamobius pallipes. At each site, 27 physical-chemical, environmental and climatic variables were measured according to their importance to A. pallipes. Various feature-selection methods were employed. These yielded three subsets of variables that helped build three different types of models: (1) models with no variable selection; (2) models built by applying Goldberg's genetic algorithm after variable selection; (3) models built by using a combination of four supervised-filter evaluators after variable selection. These different model types helped us realise how important it is to select the right features in order to build support vector machines that perform as well as possible. In addition, our findings show that support vector machines have a high potential for predicting indigenous crayfish occurrence. Therefore, they are valuable tools for freshwater management, tools that may prove to be much more promising than traditional and other machine-learning techniques.

  1. Comparing and Validating Machine Learning Models for Mycobacterium tuberculosis Drug Discovery.

    Science.gov (United States)

    Lane, Thomas; Russo, Daniel P; Zorn, Kimberley M; Clark, Alex M; Korotcov, Alexandru; Tkachenko, Valery; Reynolds, Robert C; Perryman, Alexander L; Freundlich, Joel S; Ekins, Sean

    2018-04-26

    Tuberculosis is a global health dilemma. In 2016, the WHO reported 10.4 million incident cases and 1.7 million deaths. The need to develop new treatments for those infected with Mycobacterium tuberculosis (Mtb) has led to many large-scale phenotypic screens and many thousands of new active compounds identified in vitro. However, with limited funding, efforts to discover new active molecules against Mtb need to be more efficient. Several computational machine learning approaches have been shown to have good enrichment and hit rates. We have curated small molecule Mtb data and developed new models with a total of 18,886 molecules with activity cutoffs of 10 μM, 1 μM, and 100 nM. These data sets were used to evaluate different machine learning methods (including deep learning) and metrics and to generate predictions for additional molecules published in 2017. One Mtb model, a combined in vitro and in vivo data Bayesian model at a 100 nM activity cutoff, yielded the following metrics for 5-fold cross validation: accuracy = 0.88, precision = 0.22, recall = 0.91, specificity = 0.88, kappa = 0.31, and MCC = 0.41. We have also curated an evaluation set (n = 153 compounds) published in 2017, and when used to test our model, it showed comparable statistics (accuracy = 0.83, precision = 0.27, recall = 1.00, specificity = 0.81, kappa = 0.36, and MCC = 0.47). We have also compared these models with additional machine learning algorithms, showing that Bayesian machine learning models constructed with literature Mtb data generated by different laboratories were generally equivalent to or outperformed deep neural networks with external test sets. Finally, we have also compared our training and test sets to show they were suitably diverse and different in order to represent useful evaluation sets. Such Mtb machine learning models could help prioritize compounds for testing in vitro and in vivo.
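
    For readers reproducing the quoted statistics, a minimal sketch of how the reported metrics can be computed with scikit-learn; the labels and predictions below are invented placeholders, not the study's data.

```python
# Classification metrics as reported above (illustrative sketch).
from sklearn.metrics import (accuracy_score, cohen_kappa_score, confusion_matrix,
                             matthews_corrcoef, precision_score, recall_score)

y_true = [1, 0, 0, 1, 0, 1, 0, 0]    # hypothetical activity labels (1 = active)
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]    # hypothetical model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   ", accuracy_score(y_true, y_pred))
print("precision  ", precision_score(y_true, y_pred))
print("recall     ", recall_score(y_true, y_pred))
print("specificity", tn / (tn + fp))  # no direct scikit-learn scorer; from the confusion matrix
print("kappa      ", cohen_kappa_score(y_true, y_pred))
print("MCC        ", matthews_corrcoef(y_true, y_pred))
```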

  2. Predictive modeling and multi-objective optimization of machining-induced residual stresses: Investigation of machining parameter effects

    Science.gov (United States)

    Ulutan, Durul

    2013-01-01

    In the aerospace industry, titanium and nickel-based alloys are frequently used for critical structural components, especially due to their higher strength at both low and high temperatures, and their higher resistance to wear and chemical degradation. However, because of their unfavorable thermal properties, deformation- and friction-induced microstructural changes prevent the end products from having good surface integrity properties. In addition to surface roughness, microhardness changes, and microstructural alterations, the machining-induced residual stress profiles of titanium and nickel-based alloys contribute to the surface integrity of these products. Therefore, it is essential to create a comprehensive method that predicts the residual stress outcomes of machining processes, and to understand how machining parameters (cutting speed, uncut chip thickness, depth of cut, etc.) or tool parameters (tool rake angle, cutting edge radius, tool material/coating, etc.) affect the machining-induced residual stresses. Since experiments involve a certain amount of measurement error, physics-based simulation experiments should also involve an uncertainty in the predicted values, and a rich set of simulation experiments is utilized to create expected values and variances for predictions. As the first part of this research, a method to determine the friction coefficients during machining from practical experiments was introduced. Using these friction coefficients, finite element-based simulation experiments were utilized to determine flow stress characteristics of materials and then to predict the machining-induced forces and residual stresses, and the results were validated using the experimental findings. A sensitivity analysis on the numerical parameters was conducted to understand the effect of changing physical and numerical parameters, increasing the confidence in the selected parameters, and the effect of machining parameters on machining-induced forces and residual

  3. Numerical modeling and experimental investigation of laser-assisted machining of silicon nitride ceramics

    Science.gov (United States)

    Shen, Xinwei

    Laser-assisted machining (LAM) is a promising non-conventional machining technique for advanced ceramics. However, the fundamental machining mechanism which governs the LAM process is not well understood so far. Hence, the main objective of this study is to explore the machining mechanism and provide guidance for future LAM operations. This study focuses on laser-assisted milling (LAMill) of silicon nitride ceramics. Experimental experience reveals that the workpiece temperature in LAM of silicon nitride ceramics determines the surface quality of the machined workpiece. Thus, in order to characterize the thermal behavior of the workpiece in LAM, the laser-silicon nitride interaction mechanism is investigated via heating experiments. The trends of temperature as affected by the key parameters (laser power, laser beam diameter, feed rate, and preheat time) are obtained through a parametric study. Experimental results show that a high operating temperature leads to low cutting force, good surface finish, small edge chipping, and low residual stress. The temperature range for brittle-to-ductile transition should be avoided due to the rapid increase of fracture toughness. In order to know the temperature distribution at the cutting zone in the workpiece, a transient three-dimensional thermal model is developed using finite element analysis (FEA) and validated through experiments. Heat generation associated with machining is considered and demonstrated to have little impact on LAM. The model indicates that laser power is one critical parameter for successful operation of LAM. Feed and cutting speed can indirectly affect the operating temperatures. Furthermore, a machining model is established with the distinct element method (or discrete element method, DEM) to simulate the dynamic process of LAM. In the microstructural modeling of a beta-type silicon nitride ceramic, clusters are used to simulate the rod-like grains of the silicon nitride ceramic and parallel bonds act as the

  4. Analysis of precision and accuracy in a simple model of machine learning

    Science.gov (United States)

    Lee, Julian

    2017-12-01

    Machine learning is a procedure where a model of the world is constructed from a training set of examples. It is important that the model captures the relevant features of the training set while at the same time making correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
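
    A minimal sketch of the trade-off the abstract describes, on synthetic data: polynomials of increasing degree fit a noisy sample, and training error keeps falling while test error eventually grows. Degrees, noise level, and sample size are invented.

```python
# Polynomial regression: training vs. generalization error (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(-1, 1, 30))
y = np.sin(3 * x) + rng.normal(0, 0.1, 30)     # noisy training sample
x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)                    # noise-free target for the test error

for degree in (1, 3, 9):
    coef = np.polyfit(x, y, degree)            # least-squares polynomial fit
    train_err = np.mean((np.polyval(coef, x) - y) ** 2)
    test_err = np.mean((np.polyval(coef, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_err:.4f}  test MSE {test_err:.4f}")
```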

  5. Mapping groundwater contamination risk of multiple aquifers using multi-model ensemble of machine learning algorithms.

    Science.gov (United States)

    Barzegar, Rahim; Moghaddam, Asghar Asghari; Deo, Ravinesh; Fijani, Elham; Tziritis, Evangelos

    2018-04-15

    Constructing accurate and reliable groundwater risk maps provides scientifically prudent and strategic measures for the protection and management of groundwater. The objectives of this paper are to design and validate machine-learning-based risk maps using ensemble-based modelling with an integrative approach. We employ extreme learning machines (ELM), multivariate adaptive regression splines (MARS), M5 Tree and support vector regression (SVR), applied to multiple aquifer systems (e.g. unconfined, semi-confined and confined) in the Marand plain, North West Iran, to encapsulate the merits of the individual learning algorithms in a final committee-based ANN model. The DRASTIC Vulnerability Index (VI) ranged from 56.7 to 128.1, categorized with no-risk, low and moderate vulnerability thresholds. The correlation coefficient (r) and Willmott's Index (d) between NO3 concentrations and VI were 0.64 and 0.314, respectively. To improve on the original DRASTIC method, the vulnerability indices were adjusted by NO3 concentrations, termed the groundwater contamination risk (GCR). Seven DRASTIC parameters served as the inputs and GCR values as the outputs of the individual machine learning models, which were then combined in the fully optimized committee-based ANN predictive model. The correlation indicators demonstrated that the ELM and SVR models outperformed the MARS and M5 Tree models, by virtue of larger d and r values. Subsequently, the r and d metrics for the committee-based ANN multi-model in the testing phase were 0.8889 and 0.7913, respectively, revealing the superiority of the integrated (or ensemble) machine learning models when compared with the original DRASTIC approach. The newly designed multi-model ensemble-based approach can be considered a pragmatic step for mapping groundwater contamination risks of multiple aquifer systems with multi-model techniques, yielding the high accuracy of the ANN committee-based model. Copyright © 2017 Elsevier B

  6. Which method predicts recidivism best?: A comparison of statistical, machine learning, and data mining predictive models

    OpenAIRE

    Tollenaar, N.; van der Heijden, P.G.M.

    2012-01-01

    Using criminal population conviction histories of recent offenders, prediction models are developed that predict three types of criminal recidivism: general recidivism, violent recidivism and sexual recidivism. The research question is whether prediction techniques from modern statistics, data mining and machine learning provide an improvement in predictive performance over classical statistical methods, namely logistic regression and linear discriminant analysis. These models are compared ...

  7. A comparative study of machine learning classifiers for modeling travel mode choice

    NARCIS (Netherlands)

    Hagenauer, J; Helbich, M

    2017-01-01

    The analysis of travel mode choice is an important task in transportation planning and policy making in order to understand and predict travel demands. While advances in machine learning have led to numerous powerful classifiers, their usefulness for modeling travel mode choice remains largely

  8. Modelling and optimization of a permanent-magnet machine in a flywheel

    NARCIS (Netherlands)

    Holm, S.R.

    2003-01-01

    This thesis describes the derivation of an analytical model for the design and optimization of a permanent-magnet machine for use in an energy storage flywheel. A prototype of this flywheel is to be used as the peak-power unit in a hybrid electric city bus. The thesis starts by showing the

  9. Comparison of Models Needed for Conceptual Design of Man-Machine Systems in Different Application Domains

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1986-01-01

    For systematic and computer-aided design of man-machine systems, a consistent framework is needed, i.e., a set of models which allows the selection of system characteristics which serve the individual user not only to satisfy his goal, but also to select mental processes that match his resources and subjective preferences. For design of man-machine systems in process control, a framework has been developed in terms of separate representation of the problem domain, the decision task, and the information processing strategies required. The author analyzes the application of this framework to a number ...

  10. Big data - modelling of midges in Europa using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Cuellar, Ana Carolina; Kjær, Lene Jung; Skovgaard, Henrik

    2017-01-01

    ... coordinates of each trap, and start and end dates of trapping. We used 120 environmental predictor variables together with Random Forest machine learning algorithms to predict the overall species distribution (probability of occurrence) and monthly abundance in Europe. We generated maps for every month ... and the Obsoletus group, although abundance was generally higher for a longer period of time for C. imicola than for the Obsoletus group. Using machine learning techniques, we were able to model the spatial distribution in Europe for C. imicola and the Obsoletus group in terms of abundance and suitability ...

  11. Modelling of Moving Coil Actuators in Fast Switching Valves Suitable for Digital Hydraulic Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Roemer, Daniel Beck; Bech, Michael Møller

    2015-01-01

    The efficiency of digital hydraulic machines is strongly dependent on the valve switching time. Recently, fast switching has been achieved by using a direct electromagnetic moving coil actuator as the force-producing element in fast switching hydraulic valves suitable for digital hydraulic machines. Mathematical models of the valve switching, targeted for design optimisation of the moving coil actuator, are developed. A detailed analytical model is derived and presented and its accuracy is evaluated against transient electromagnetic finite element simulations. The model includes an estimation of the eddy currents generated in the actuator yoke upon current rise, as they may have significant influence on the coil current response. The analytical model facilitates fast simulation of the transient actuator response as opposed to the transient electromagnetic finite element model which ...

  12. Identification and non-integer order modelling of synchronous machines operating as generator

    Directory of Open Access Journals (Sweden)

    Szymon Racewicz

    2012-09-01

    Full Text Available This paper presents an original mathematical model of a synchronous generator using derivatives of fractional order. In contrast to classical models composed of a large number of R-L ladders, it comprises half-order impedances, which enable the accurate description of the electromagnetic induction phenomena in a wide frequency range, while minimizing the order and number of model parameters. The proposed model takes into account the skin effect in damper cage bars, the effects of eddy currents in rotor solid parts, and the saturation of the machine magnetic circuit. The half-order transfer functions used for modelling these phenomena were verified by simulation of ferromagnetic sheet impedance using the finite elements method. The analysed machine's parameters were identified on the basis of SSFR (StandStill Frequency Response) characteristics measured on a gradually magnetised synchronous machine.

  13. Modeling and simulation of the fluid flow in wire electrochemical machining with rotating tool (wire ECM)

    Science.gov (United States)

    Klocke, F.; Herrig, T.; Zeis, M.; Klink, A.

    2017-10-01

    Combining the working principle of electrochemical machining (ECM) with a universal rotating tool, like a wire, could address many challenges of the classical ECM sinking process. Such a wire-ECM process would be able to machine flexible and efficient 2.5-dimensional geometries like fir-tree slots in turbine discs. Nowadays, the established manufacturing technologies for slotting turbine discs are broaching and wire electrical discharge machining (wire EDM). Nevertheless, the high requirements on the surface integrity of turbine parts demand cost-intensive process development and, in the case of wire EDM, trim cuts to reduce the heat-affected rim zone. Due to its process-specific advantages, ECM is an attractive alternative manufacturing technology and has become more and more relevant for sinking applications within the last few years. However, ECM is also confronted with high costs for process development and complex electrolyte flow devices. In the past, few studies dealt with the development of a wire-ECM process to meet these challenges, and previous concepts of wire ECM were only suitable for micro-machining applications: due to insufficient flushing concepts, the application of the process to machining macro geometries failed. Therefore, this paper presents the modeling and simulation of a new flushing approach for process assessment. The suitability of a rotating structured wire electrode in combination with axial flushing for electrodes with high aspect ratios is investigated and discussed.

  14. Static stiffness modeling of a novel hybrid redundant robot machine

    Energy Technology Data Exchange (ETDEWEB)

    Li Ming, E-mail: hackingming@gmail.com [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland); Wu Huapeng; Handroos, Heikki [Laboratory of Intelligent Machines, Lappeenranta University of Technology (Finland)

    2011-10-15

    This paper presents a modeling method to study the stiffness of the hybrid serial-parallel robot IWR (Intersector Welding Robot) for the assembly of the ITER vacuum vessel. The stiffness matrix of the basic element in the robot is evaluated using matrix structural analysis (MSA); the stiffness of the parallel mechanism is investigated by taking into account the deformations of both hydraulic limbs and joints; and the stiffness of the whole integrated robot is evaluated by employing the virtual joint method and the principle of virtual work. The obtained stiffness model of the hybrid robot is analytical, and deformation results over the robot workspace under a given external load are presented.

  15. Static stiffness modeling of a novel hybrid redundant robot machine

    International Nuclear Information System (INIS)

    Li Ming; Wu Huapeng; Handroos, Heikki

    2011-01-01

    This paper presents a modeling method to study the stiffness of the hybrid serial-parallel robot IWR (Intersector Welding Robot) for the assembly of the ITER vacuum vessel. The stiffness matrix of the basic element in the robot is evaluated using matrix structural analysis (MSA); the stiffness of the parallel mechanism is investigated by taking into account the deformations of both hydraulic limbs and joints; and the stiffness of the whole integrated robot is evaluated by employing the virtual joint method and the principle of virtual work. The obtained stiffness model of the hybrid robot is analytical, and deformation results over the robot workspace under a given external load are presented.

  16. An Ant Optimization Model for Unrelated Parallel Machine Scheduling with Energy Consumption and Total Tardiness

    Directory of Open Access Journals (Sweden)

    Peng Liang

    2015-01-01

    Full Text Available This research considers an unrelated parallel machine scheduling problem with energy consumption and total tardiness. The problem is compounded by two challenges: differences in energy consumption across the unrelated parallel machines, and the interaction between job assignments and machine state operations. We first establish a mathematical model for this problem, then present an ant colony optimization algorithm based on the ATC heuristic rule (ATC-ACO). Optimal parameters of the proposed algorithm are determined via the Taguchi method on generated test data. Finally, comparative experiments indicate that the proposed ATC-ACO algorithm performs better at minimizing both energy consumption and total tardiness, and that the modified ATC heuristic rule is more effective at reducing energy consumption.
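
    The ATC rule at the heart of ATC-ACO can be illustrated in a few lines. The sketch below, with hypothetical job data and look-ahead parameter k, ranks jobs by the standard Apparent Tardiness Cost index; the ACO layer would use such priorities when constructing assignments.

    ```python
    # Sketch of the ATC (Apparent Tardiness Cost) priority rule used to guide
    # job dispatching; job data and the look-ahead parameter k are hypothetical.
    import math

    jobs = [  # (weight, processing time, due date)
        (1.0, 4.0, 10.0),
        (2.0, 3.0,  8.0),
        (1.5, 5.0, 20.0),
    ]
    k = 2.0                                           # look-ahead scaling parameter
    p_bar = sum(p for _, p, _ in jobs) / len(jobs)    # average processing time

    def atc_index(w, p, d, t):
        """Weighted-shortest-processing-time factor times a tardiness urgency term."""
        slack = max(d - p - t, 0.0)
        return (w / p) * math.exp(-slack / (k * p_bar))

    t = 0.0  # current time
    for job in sorted(jobs, key=lambda j: -atc_index(*j, t)):
        print("dispatch", job, "priority", round(atc_index(*job, t), 4))
    ```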

  17. Implementation of the Lanczos algorithm for the Hubbard model on the Connection Machine system

    International Nuclear Information System (INIS)

    Leung, P.W.; Oppenheimer, P.E.

    1992-01-01

    An implementation of the Lanczos algorithm for the exact diagonalization of the two-dimensional Hubbard model on a 4x4 square lattice on the Connection Machine CM-2 system is described. The CM-2 is a massively parallel machine with distributed memory. The program is written in C/PARIS. The implementation minimizes memory usage by generating the matrix elements as needed instead of storing them. The Lanczos vectors are stored across the local memory of the processors. Using translational symmetry only, the dimension of the Hilbert space at half filling is more than 10 million. A time of about 2.4 min per iteration is achieved on a 64K CM-2. The implementation is scalable: running it on a bigger machine with more processors speeds up the process. A performance analysis of the implementation is presented, and its advantages and disadvantages are discussed.
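
    For readers unfamiliar with the method, a minimal dense-matrix Lanczos iteration is sketched below; the CM-2 implementation instead generates the sparse Hamiltonian matrix elements on the fly and distributes the Lanczos vectors across processors.

    ```python
    # Minimal Lanczos iteration for a real symmetric matrix: build a small
    # tridiagonal matrix T whose eigenvalues approximate extremal eigenvalues of A.
    import numpy as np

    def lanczos(A, m):
        n = A.shape[0]
        V = np.zeros((m, n))                 # Lanczos vectors
        alpha, beta = np.zeros(m), np.zeros(m - 1)
        v0 = np.random.default_rng(0).standard_normal(n)
        V[0] = v0 / np.linalg.norm(v0)
        for j in range(m):
            w = A @ V[j]
            alpha[j] = V[j] @ w
            w -= alpha[j] * V[j]
            if j > 0:
                w -= beta[j - 1] * V[j - 1]
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return np.linalg.eigvalsh(T)

    A = np.random.default_rng(1).standard_normal((200, 200))
    A = (A + A.T) / 2                        # symmetrize the test matrix
    print("lowest Ritz value :", lanczos(A, 40)[0])
    print("exact lowest eig  :", np.linalg.eigvalsh(A)[0])
    ```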

  18. Dynamic modeling of a high-speed over-constrained press machine

    International Nuclear Information System (INIS)

    Li, Yejian; Sun, Yu; Peng, Binbin; Hu, Fengfeng

    2016-01-01

    This paper presents a study on the dynamic modeling of a high-speed over-constrained press machine. The main contribution of the paper is the development of an efficient approach to the dynamic analysis of a planar over-constrained mechanism. The key idea is the establishment of a more general methodology, which derives a deformation compatibility equation for the over-constrained mechanism on the basis of the deformation compatibility analysis at each position of the mechanism. This equation is then used together with the force/moment equilibrium equations obtained by the D'Alembert principle to form a complete equation of dynamics for the over-constrained mechanism. The approach is applied to a particular press machine to validate its effectiveness and, at the same time, to provide useful information for improving the design of this press machine.

  19. Modelling rollover behaviour of excavator-based forest machines

    Science.gov (United States)

    M.W. Veal; S.E. Taylor; Robert B. Rummer

    2003-01-01

    This poster presentation provides results from analytical and computer simulation models of rollover behaviour of hydraulic excavators. These results are being used as input to the operator protective structure standards development process. Results from rigid body mechanics and computer simulation methods agree well with field rollover test data. These results show...

  20. Empirical model for estimating the surface roughness of machined ...

    African Journals Online (AJOL)

    Michael Horsfall

    Regression analysis is used to construct a prediction model for surface roughness such that, once the process parameters (cutting speed, feed, depth of cut, nose radius, and speed) are given, the surface roughness can be predicted. The workpiece material was EN8, which was processed by a carbide-inserted tool conducted on ...

  1. Syntactic discriminative language model rerankers for statistical machine translation

    NARCIS (Netherlands)

    Carter, S.; Monz, C.

    2011-01-01

    This article describes a method that successfully exploits syntactic features for n-best translation candidate reranking using perceptrons. We motivate the utility of syntax by demonstrating the superior performance of parsers over n-gram language models in differentiating between Statistical Machine Translation output and human translations.

  2. Hopfield Models as Nondeterministic Finite-state Machines

    NARCIS (Netherlands)

    Drossaers, M.F.J.

    1992-01-01

    The use of neural networks for integrated linguistic analysis may be profitable. This paper presents the first results of our research on that subject: a Hopfield model for syntactical analysis. We construct a neural network as an implementation of a bounded push-down automaton, which can accept

  3. Mathematical Model of Lifetime Duration at Insulation of Electrical Machines

    Directory of Open Access Journals (Sweden)

    Mihaela Răduca

    2009-10-01

    Full Text Available This paper presents a mathematical model of the lifetime duration of hydro generator stator winding insulation when damage regimes can appear in the hydro generator. The estimation is made by taking into account both programmed and non-programmed revisions, through the introduction and correlation of newly defined notions.

  4. The development of fully dynamic rotating machine models for nuclear training simulators

    International Nuclear Information System (INIS)

    Birsa, J.J.

    1990-01-01

    Prior to beginning the development of an enhanced set of electrical plant models for several nuclear training simulators, an extensive literature search was conducted to evaluate and select rotating machine models for use on these simulators. These models include the main generator, diesel generators, in-plant electric power distribution, and off-site power. From the results of this search, various models were investigated and several were selected for further evaluation. Several computer studies were performed on the selected models in order to determine their suitability for use in a training simulator environment. One surprising result of this study was that a number of established, classical models could not be made to reproduce actual plant steady-state data over the range necessary for a training simulator. This evaluation process and its results are presented in this paper. Various historical, as well as contemporary, electrical models of rotating machines are discussed, and specific criteria for the selection of rotating machine models for training simulator use are presented.

  5. Issues of Application of Machine Learning Models for Virtual and Real-Life Buildings

    Directory of Open Access Journals (Sweden)

    Young Min Kim

    2016-06-01

    Full Text Available The current Building Energy Performance Simulation (BEPS) tools are based on first principles. For the correct use of BEPS tools, simulationists should have an in-depth understanding of building physics, numerical methods, control logics of building systems, etc. However, it takes significant time and effort to develop a first principles-based simulation model for existing buildings—mainly due to the laborious process of data gathering, uncertain inputs, model calibration, etc. Rather than resorting to an expert’s effort, a data-driven approach (so-called “inverse” approach) has received growing attention for the simulation of existing buildings. This paper reports a cross-comparison of three popular machine learning models (Artificial Neural Network (ANN), Support Vector Machine (SVM), and Gaussian Process (GP)) for predicting a chiller’s energy consumption in a virtual and a real-life building. The predictions based on the three models are sufficiently accurate compared to the virtual and real measurements. This paper addresses the following issues for the successful development of machine learning models: reproducibility, selection of inputs, training period, outlying data obtained from the building energy management system (BEMS), and validation of the models. From the result of this comparative study, it was found that SVM has a disadvantage in computation time compared to ANN and GP. GP is the most sensitive to a training period among the three models.
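
    A hedged sketch of such a three-model comparison is shown below using scikit-learn on synthetic data standing in for chiller measurements; the features, targets, and hyperparameters are fabricated for illustration and are not those of the study.

    ```python
    # Cross-comparison of ANN, SVM, and GP regressors on synthetic data standing
    # in for chiller energy consumption (scikit-learn assumed available).
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(500, 3))              # e.g. scaled temperature, load, flow
    y = 2 * X[:, 0] + np.sin(4 * X[:, 1]) + 0.1 * rng.standard_normal(500)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    models = {
        "ANN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
        "SVM": SVR(C=10.0),
        "GP":  GaussianProcessRegressor(),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(name, "R^2 =", round(r2_score(y_te, model.predict(X_te)), 3))
    ```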

  6. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    Directory of Open Access Journals (Sweden)

    Wm M. Wood

    2018-02-01

    Full Text Available A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. “Goodness of fit” is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays (“MCNPX”) model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.
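
    The record's actual model is not reproduced here, but the overall idea, integrating a simple per-time-step bremsstrahlung estimate over sampled V(t) and I(t), can be sketched with a generic Kramers-type thick-target approximation and synthetic waveforms:

    ```python
    # Generic Kramers-type sketch of shot-integrated spectrum prediction from
    # sampled V(t) and I(t); this is NOT the record's model, and the waveforms
    # are synthetic. For electrons accelerated through V volts, the endpoint
    # photon energy in eV equals V numerically.
    import numpy as np

    t = np.linspace(0.0, 60e-9, 601)                   # 60 ns pulse
    V = 2.3e6 * np.sin(np.pi * t / t[-1]) ** 2         # diode voltage [V], synthetic
    I = 60e3 * np.sin(np.pi * t / t[-1]) ** 2          # diode current [A], synthetic

    E = np.linspace(1e4, 2.5e6, 500)                   # photon energy grid [eV]
    spectrum = np.zeros_like(E)
    for Vk, Ik in zip(V, I):
        mask = E < Vk                                  # photons below the endpoint
        spectrum[mask] += Ik * (Vk - E[mask])          # Kramers: ~ I * (E_max - E)

    spectrum /= spectrum.max()                         # normalized shot spectrum
    print("endpoint energy [MeV]:", V.max() / 1e6)
    ```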

  7. A tool for urban soundscape evaluation applying Support Vector Machines for developing a soundscape classification model.

    Science.gov (United States)

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F

    2014-06-01

    To ensure appropriate soundscape management in urban environments, urban-planning authorities need a range of tools that enable such a task to be performed. An essential step in the management of urban areas from a sound standpoint is the evaluation of the soundscape in the area, and it has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step in evaluating it, providing a basis for designing or adapting it to match people's expectations as well. This work therefore proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria, intended to be used as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification: with the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified). © 2013 Elsevier B.V. All rights reserved.

  8. Shot-by-shot spectrum model for rod-pinch, pulsed radiography machines

    Science.gov (United States)

    Wood, Wm M.

    2018-02-01

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine, on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model enables variation of parameters in real time, thus allowing for rapid optimization of the model across many shots. "Goodness of fit" is compared with output from the LSP Particle-In-Cell code, as well as the Monte Carlo Neutron Propagation with Xrays ("MCNPX") model codes, and is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. Improvements to the model, specifically for application to other geometries, are discussed.

  9. Component simulation in problems of calculated model formation of automatic machine mechanisms

    Directory of Open Access Journals (Sweden)

    Telegin Igor

    2017-01-01

    Full Text Available The paper deals with the application of the component simulation method to the automation of mechanical system model formation, with the further possibility of CAD realization. The purpose of the investigation is the automated formation of CAD models of high-speed mechanisms in automatic machines, and the analysis of the dynamic processes occurring in their units, taking into account their elasto-inertial properties, power dissipation, gaps in kinematic pairs, friction forces, and design and technological loads. As an example, the paper considers a formalization of the stages in forming the computer model of the cutting mechanism in the cold stamping automatic machine AV1818, and methods for the computation of its parameters on the basis of its solid-state model.

  10. Input data for mathematical modeling and numerical simulation of switched reluctance machines

    Directory of Open Access Journals (Sweden)

    Ali Asghar Memon

    2017-10-01

    Full Text Available The modeling and simulation of Switched Reluctance (SR) machines and drives is challenging due to their doubly salient pole structure and magnetic saturation. This paper presents the input data in the form of experimentally obtained magnetization characteristics. These data were used for a computer-simulation-based model of the SR machine; see “Selecting Best Interpolation Technique for Simulation Modeling of Switched Reluctance Machine” [1] and “Modeling of Static Characteristics of Switched Reluctance Motor” [2]. These data are also the primary source of the other data tables required for the simulation, namely co-energy and static torque, which can be derived from them. The procedure and experimental setup for the collection of the data are presented in detail.

  11. Control volume based modelling of compressible flow in reciprocating machines

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Thomsen, Per Grove; Carlsen, Henrik

    2004-01-01

    The model is based on the conservation laws for mass, energy, and momentum applied to a staggered mesh consisting of two overlapping strings of control volumes. Loss mechanisms can be included directly in the governing equations of models by including them as terms in the conservation laws. Heat transfer, flow friction, and multidimensional effects must be calculated using empirical correlations; correlations for steady-state flow can be used as an approximation. A transformation that assumes ideal gas is presented for transforming the equations for masses and energies in control volumes into the corresponding pressures and temperatures.
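
    The ideal-gas transformation mentioned in the abstract can be illustrated directly; the gas properties below are for air and purely illustrative.

    ```python
    # Recover temperature and pressure in a control volume from its conserved
    # mass and internal energy, assuming an ideal gas (air properties used).
    R_GAS, CV = 287.0, 718.0   # J/(kg K): specific gas constant and cv for air

    def state_from_conserved(mass, internal_energy, volume):
        """Map (m, U, V) to (T, p) via U = m*cv*T and p*V = m*R*T."""
        T = internal_energy / (mass * CV)
        p = mass * R_GAS * T / volume
        return T, p

    T, p = state_from_conserved(mass=1.2e-3, internal_energy=258.5, volume=1e-3)
    print(f"T = {T:.1f} K, p = {p / 1e3:.1f} kPa")   # ~300 K, ~103 kPa
    ```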

  12. Component simulation in problems of calculated model formation of automatic machine mechanisms

    OpenAIRE

    Telegin Igor; Kozlov Alexander; Zhirkov Alexander

    2017-01-01

    The paper deals with the problems of the component simulation method application in the problems of the automation of the mechanical system model formation with the further possibility of their CAD-realization. The purpose of the investigations mentioned consists in the automation of the CAD-model formation of high-speed mechanisms in automatic machines and in the analysis of dynamic processes occurred in their units taking into account their elasto-inertial properties, power dissipation, gap...

  13. Three near term commercial markets in space and their potential role in space exploration

    Science.gov (United States)

    Gavert, Raymond B.

    2001-02-01

    Independent market studies related to Low Earth Orbit (LEO) commercialization have identified three near term markets that have return-on-investment potential. These markets are: (1) Entertainment (2) Education (3) Advertising/sponsorship. Commercial activity is presently underway focusing on these areas. A private company is working with the Russians on a commercial module attached to the ISS that will involve entertainment and probably the other two activities as well. A separate corporation has been established to commercialize the Russian Mir Space Station with entertainment and promotional advertising as important revenue sources. A new startup company has signed an agreement with NASA for commercial media activity on the International Space Station (ISS). Profit making education programs are being developed by a private firm to allow students to play the role of an astronaut and work closely with space scientists and astronauts. It is expected that the success of these efforts on the ISS program will extend to exploration missions beyond LEO. The objective of this paper is to extrapolate some of the LEO commercialization experiences to see what might be expected in space exploration missions to Mars, the Moon and beyond.

  14. California Power-to-Gas and Power-to-Hydrogen Near-Term Business Case Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Eichman, Josh [National Renewable Energy Lab. (NREL), Golden, CO (United States); Flores-Espino, Francisco [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

    Flexible operation of electrolysis systems represents an opportunity to reduce the cost of hydrogen for a variety of end-uses while also supporting grid operations and thereby enabling greater renewable penetration. California is an ideal location to realize that value on account of its growing renewable capacity and its markets for hydrogen as a fuel cell electric vehicle (FCEV) fuel, for refineries, and for other end-uses. Shifting the production of hydrogen to avoid high-cost electricity, participating in utility and system operator markets, and installing renewable generation to avoid utility charges and increase revenue from the Low Carbon Fuel Standard (LCFS) program can result in around a $2.5/kg (21%) reduction in the production and delivery cost of hydrogen from electrolysis. This reduction can be achieved without impacting the consumers of hydrogen. Additionally, future strategies for reducing hydrogen cost were explored, including a lower cost of capital, participation in the Renewable Fuel Standard program, capital cost reduction, and increased LCFS value. Each must be achieved independently, and each could contribute further reductions; under the assumptions in this study, a 29% reduction in cost was found if all future strategies are realized. Flexible hydrogen production can simultaneously improve the performance of, and decarbonize, multiple energy sectors. The lessons learned from this study should be used to understand near-term cost drivers and to support longer-term research activities that further improve the cost effectiveness of grid-integrated electrolysis systems.

  15. Near term hybrid passenger vehicle development program. Phase I. Appendices C and D. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-01-01

    The derivation and actual preliminary design of the Near Term Hybrid Vehicle (NTHV) are presented. The NTHV uses a modified GM Citation body, a VW Rabbit turbocharged diesel engine, a 24 kW compound DC electric motor, a modified GM automatic transmission, and an on-board computer for transmission control. The following NTHV information is presented: a summary of the trade-off study results; the overall vehicle design; the selection of the design concept and the base vehicle (the Chevrolet Citation); the battery pack configuration, structural modifications, occupant protection, vehicle dynamics, and aerodynamics; the powertrain design, including the transmission, coupling devices, engine, motor, accessory drive, and powertrain integration; the motor controller; the battery type, duty cycle, charger, and thermal requirements; the control system (electronics); the identification of requirements, software algorithm requirements, processor selection and system design, sensor and actuator characteristics, displays, diagnostics, and other topics; the environmental system, including heating, air conditioning, and compressor drive; the specifications, weight breakdown, and energy-consumption measures; advanced technology components; and the data sources and assumptions used. (LCL)

  16. Phase I of the Near-Term Hybrid Passenger-Vehicle Development Program. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-10-01

    Under contract to the Jet Propulsion Laboratory of the California Institute of Technology, Minicars conducted Phase I of the Near-Term Hybrid Passenger Vehicle (NTHV) Development Program. This program led to the preliminary design of a hybrid (electric and internal combustion engine powered) vehicle and fulfilled the objectives set by JPL. JPL requested that the report address certain specific topics. A brief summary of all Phase I activities is given initially; the hybrid vehicle preliminary design is described in Sections 4, 5, and 6. Table 2 of the Summary lists performance projections for the overall vehicle and some of its subsystems. Section 4.5 gives references to the more-detailed design information found in the Preliminary Design Data Package (Appendix C). Alternative hybrid-vehicle design options are discussed in Sections 3 through 6. A listing of the tradeoff study alternatives is included in Section 3. Computer simulations are discussed in Section 9. Section 8 describes the supporting economic analyses. Reliability and safety considerations are discussed specifically in Section 7 and are mentioned in Sections 4, 5, and 6. Section 10 lists conclusions and recommendations arrived at during the performance of Phase I. A complete bibliography follows the list of references.

  17. Short-term outcome for term and near-term singleton infants with intrapartum polyhydramnios.

    Science.gov (United States)

    Leibovitch, Leah; Schushan-Eisen, Irit; Kuint, Jacob; Weissmann-Brenner, Alina; Maayan-Metzger, Ayala

    2012-01-01

    To evaluate rates of early short-term neonatal complications among term and near-term newborn infants with polyhydramnios. Retrospective data were collected on 788 term infants with a prenatal diagnosis of polyhydramnios and 1,576 matched controls, including information on maternal condition and on infant perinatal complications. The total rate of major congenital malformations among infants born to mothers with polyhydramnios was 2.3%, compared to 0.13% for those with a normal amniotic fluid index, a statistically significant difference. Infants with polyhydramnios but no major congenital malformations are at increased risk for minor congenital malformations (4.2%) as well as for postnatal complications such as respiratory distress (5.7%), cardiovascular manifestations (mainly delayed closure of the ductus arteriosus; 3.1%), and hypoglycemia (7%) compared to controls. Multivariate logistic regression revealed that polyhydramnios was associated only with postnatal respiratory distress and hypoglycemia. The severity of polyhydramnios was not associated with an increased rate of neonatal complications. Although infants with polyhydramnios but no major congenital malformations were found to have increased rates of respiratory distress and hypoglycemia, these clinical manifestations were mild and had little effect on the babies' well-being and length of hospital stay. Copyright © 2011 S. Karger AG, Basel.

  18. Round and round: Little consensus exists on the near-term future of natural gas

    International Nuclear Information System (INIS)

    Lunan, D.

    2004-01-01

    The various combinations of factors influencing natural gas supply and demand and the future price of natural gas are discussed. Expert opinion is that prices will continue to track higher, demand will grow with the surging American economy, and supplies will remain constrained, providing more fuel for another cycle of ever-higher prices. There is also considerable concern about the continuing rise in demand and the tight supply situation in the near term, and uncertainty about when, or even whether, major new sources will become available. The prediction is that the overriding impact of declining domestic supplies will put a premium on natural gas at any given time. Overall, it appears certain that higher prices are here to stay: as a result, industrial gas users will see their competitiveness eroded, and individual consumers will see their heating bills rise. Governments, too, will be affected, as the increasing cost of natural gas will slow down the pace of conversion of coal-fired power generating plants to natural gas, reducing anticipated emissions benefits and in the process compromising environmental goals. Current best estimates put prices for the 2004/2005 heating season at about US$5.40 per MMBtu, whereas the longer-term price is estimated to lie in the range of US$4.75 to US$5.25 per MMBtu. 2 figs

  19. Model for Investigation of Operational Wind Power Plant Regimes with Doubly–Fed Asynchronous Machine in Power System

    Directory of Open Access Journals (Sweden)

    R. I. Mustafayev

    2012-01-01

    Full Text Available The paper presents a methodology for the mathematical modeling of a power system (or part of one) jointly operated with wind power plants (stations) that contain doubly-fed asynchronous machines used as generators. The essence and advantage of the methodology is that it allows the equations of doubly-fed asynchronous machines, written in axes that rotate at the machine rotor speed, to be efficiently coupled with the equations of the external electric power system, written in synchronously rotating axes.

  20. A Model of Parallel Kinematics for Machine Calibration

    DEFF Research Database (Denmark)

    Pedersen, David Bue; Bæk Nielsen, Morten; Kløve Christensen, Simon

    2016-01-01

    Parallel kinematics have been adopted by more than 25 manufacturers of high-end desktop 3D printers [Wohlers Report (2015), p. 118] as well as by research projects such as the WASP project [WASP (2015)], a 12 metre tall linear delta robot for Additive Manufacture of large-scale components. This research identifies that the rapid lift and repositioning capabilities of delta robots can reduce defects on extruded 3D printed parts when compared to traditional Cartesian motion systems, largely because repositioning is so rapid that the extruded strand is instantly broken. The kinematics and calibration of delta robots, in particular, are less researched than those of traditional Cartesian robots, for which tried-and-true methods for calibrating are well known. A forwards and reverse virtual model of a delta robot has been developed to provide the operator with a strong tool for easing this task.

  1. Analytical Modeling of a Novel Transverse Flux Machine for Direct Drive Wind Turbine Applications: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Hasan, Iftekhar; Husain, Tausif; Uddin, Md Wasi; Sozer, Yilmaz; Husain, Iqbal; Muljadi, Eduard

    2015-08-24

    This paper presents a nonlinear analytical model of a novel double-sided flux concentrating Transverse Flux Machine (TFM) based on the Magnetic Equivalent Circuit (MEC) model. The analytical model uses a series-parallel combination of flux tubes to predict the flux paths through different parts of the machine including air gaps, permanent magnets, stator, and rotor. The two-dimensional MEC model approximates the complex three-dimensional flux paths of the TFM and includes the effects of magnetic saturation. The model is capable of adapting to any geometry that makes it a good alternative for evaluating prospective designs of TFM compared to finite element solvers that are numerically intensive and require more computation time. A single-phase, 1-kW, 400-rpm machine is analytically modeled, and its resulting flux distribution, no-load EMF, and torque are verified with finite element analysis. The results are found to be in agreement, with less than 5% error, while reducing the computation time by 25 times.
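
    The magnetic-equivalent-circuit idea, flux tubes combined like resistors in series and parallel, is sketched below for a single permanent-magnet loop; all geometry and material numbers are hypothetical, and the real TFM network is far larger and saturation-dependent.

    ```python
    # MEC sketch: flux tubes become reluctances that combine like resistors.
    # Geometry and material values are hypothetical placeholders.
    import math

    MU0 = 4e-7 * math.pi

    def reluctance(length, area, mu_r=1.0):
        """Reluctance of a uniform flux tube [A-turns/Wb]."""
        return length / (mu_r * MU0 * area)

    R_gap   = reluctance(1e-3, 4e-4)              # 1 mm airgap, 4 cm^2 face
    R_steel = reluctance(0.10, 4e-4, mu_r=2000)   # stator + rotor iron path
    R_leak  = reluctance(5e-3, 1e-4)              # leakage path parallel to the gap

    phi_r = 1.2 * 4e-4                            # magnet flux source: Br * area [Wb]
    R_pm  = reluctance(4e-3, 4e-4, mu_r=1.05)     # magnet internal reluctance

    R_par = 1.0 / (1.0 / R_gap + 1.0 / R_leak)    # gap and leakage in parallel
    phi_ext = phi_r * R_pm / (R_pm + R_steel + R_par)   # Norton flux divider
    phi_gap = phi_ext * R_leak / (R_leak + R_gap)       # share through the airgap
    print(f"airgap flux: {phi_gap * 1e6:.1f} uWb")
    ```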

  2. A new machine condition monitoring method based on likelihood change of a stochastic model

    Science.gov (United States)

    Hwang, Kyu Hwan; Lee, Jong Min; Hwang, Yoha

    2013-12-01

    In industry, machine condition monitoring systems have become more important with ever-increasing requirements on productivity and cost saving. Although research has been very active, many currently available intelligent monitoring methods share common drawbacks: they require a defect model for every defect type of interest, and their diagnostic performance is inaccurate. To overcome those drawbacks, the authors propose a new machine condition monitoring method based on the likelihood change of a stochastic model using only normal operation data. The hidden Markov model (HMM) has been selected as the stochastic model based on its accurate and robust diagnostic performance. By observing the likelihood change of a pre-trained normal HMM on incoming data of unknown condition, a defect can be precisely detected from a sudden drop in the likelihood value. Therefore, although the type of defect cannot be identified, defects can be precisely detected with only the normal model. Defect models can also be used when defect data are available; in this case, not only precise detection of the defect but also correct identification of the defect type is possible. In this paper, the proposed monitoring method based on the likelihood change of a normal continuous HMM has been successfully applied to monitoring of machine condition and weld condition, proving its great potential with accurate and robust diagnostic performance.
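
    A minimal sketch of the likelihood-drop idea, using the hmmlearn package and synthetic one-dimensional features in place of real vibration or weld signals:

    ```python
    # Train a Gaussian HMM on normal-condition data only, then flag windows whose
    # per-sample log-likelihood drops far below the normal baseline.
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(2000, 1))        # normal-condition feature
    model = GaussianHMM(n_components=3, n_iter=50, random_state=0).fit(normal)
    baseline = model.score(normal[:500]) / 500            # per-sample log-likelihood

    def is_defect(window, margin=2.0):
        """True when likelihood under the normal model drops sharply."""
        ll = model.score(window) / len(window)
        return ll < baseline - margin

    print(is_defect(rng.normal(0.0, 1.0, size=(200, 1))))  # expected: False
    print(is_defect(rng.normal(3.0, 2.0, size=(200, 1))))  # shifted data -> True
    ```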

  3. A Machine Learning Approach for Air Quality Prediction: Model Regularization and Optimization

    Directory of Open Access Journals (Sweden)

    Dixian Zhu

    2018-02-01

    Full Text Available In this paper, we tackle air quality forecasting by using machine learning approaches to predict the hourly concentration of air pollutants (e.g., ozone, particulate matter (PM2.5), and sulfur dioxide). Machine learning, as one of the most popular techniques, is able to efficiently train a model on big data by using large-scale optimization algorithms. Although there exist some works applying machine learning to air quality prediction, most of the prior studies are restricted to several years of data and simply train standard regression models (linear or nonlinear) to predict the hourly air pollution concentration. In this work, we propose refined models to predict the hourly air pollution concentration on the basis of meteorological data of previous days by formulating the prediction over 24 hours as a multi-task learning (MTL) problem. This enables us to select a good model with different regularization techniques. We propose a useful regularization that enforces the prediction models of consecutive hours to be close to each other, and compare it with several typical regularizations for MTL, including standard Frobenius norm regularization, nuclear norm regularization, and ℓ2,1-norm regularization. Our experiments show that the proposed parameter-reducing formulations and consecutive-hour-related regularizations achieve better performance than existing standard regression models and existing regularizations.
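
    The consecutive-hour regularization can be sketched as 24 linear models whose neighbouring weight vectors are pulled together by a squared-difference penalty; the data and penalty weights below are synthetic and illustrative only.

    ```python
    # 24 per-hour linear models trained jointly with a penalty
    # lam_smooth * sum_h ||w_{h+1} - w_h||^2 pulling adjacent hours together.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, H = 400, 8, 24
    X = rng.standard_normal((n, d))
    true_w = np.cumsum(0.1 * rng.standard_normal((H, d)), axis=0)  # smooth in h
    Y = X @ true_w.T + 0.1 * rng.standard_normal((n, H))

    W = np.zeros((H, d))
    lam_ridge, lam_smooth, lr = 0.1, 5.0, 1e-3
    for _ in range(2000):
        grad = (X.T @ (X @ W.T - Y)).T / n + lam_ridge * W   # ridge part
        diff = np.diff(W, axis=0)                            # w_{h+1} - w_h
        grad[:-1] -= 2 * lam_smooth * diff                   # smoothness part
        grad[1:]  += 2 * lam_smooth * diff
        W -= lr * grad

    print("max gap between adjacent hours:", np.abs(np.diff(W, axis=0)).max())
    ```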

  4. Modeling and optimizing electrodischarge machine process (EDM) with an approach based on genetic algorithm

    Science.gov (United States)

    Zabbah, Iman

    2012-01-01

    Electrical discharge machining (EDM) is the most common non-traditional production method for forming metals and non-oxide ceramics. The increase of smoothness, the increase of the removal rate of filings, and the decrease of proportional tool erosion play an important role in this machining, and are directly related to the choice of input parameters. The complicated and non-linear nature of EDM has made modeling the process impossible with usual, classical methods. So far, some intelligence-based methods have been used to optimize this process, most notably artificial neural networks, which model the process as a black box. The problem with this kind of machining is seen when a workpiece is composed of a collection of carbon-based materials such as silicon carbide. In this article, besides using a new mono-pulse EDM technique, we design a fuzzy neural network and model the process. A genetic algorithm is then used to find the optimal machine inputs. In our research, the workpiece is a non-oxide ceramic, silicon carbide, which makes process control more difficult. Finally, the results are compared with those of previous methods.

  5. Using cognitive modeling to improve the man-machine interface

    International Nuclear Information System (INIS)

    Newton, R.A.; Zyduck, R.C.; Johnson, D.R.

    1982-01-01

    A group of utilities from the Westinghouse Owners Group was formed in early 1980 to examine the interface requirements and to determine how they could be implemented. The products available from the major vendors were examined early in 1980 and judged not to be completely applicable. The utility group then decided to develop its own specifications for a Safety Assessment System (SAS) and, later in 1980, contracted with a company to develop the system, prepare the software and demonstrate the system on a simulator. The resulting SAS is a state-of-the-art system targeted for implementation on pressurized water reactor nuclear units. It has been designed to provide control room operators with centralized and easily understandable information from a computer-based data and display system. This paper gives an overview of the SAS plus a detailed description of one of its functional areas - called AIDS. The AIDS portion of SAS is an advanced concept which uses cognitive modeling of the operator as the basis for its design

  6. Parallel phase model : a programming model for high-end parallel machines with manycores.

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Junfeng (Syracuse University, Syracuse, NY); Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  7. Sensitivity analysis of machine-learning models of hydrologic time series

    Science.gov (United States)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
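
    The forcing-response sensitivity computation amounts to a finite difference through a trained black-box model; in the sketch below a toy function stands in for the MWA-ANN, and the forcing series are synthetic.

    ```python
    # Perturb one forcing series, rerun the (stand-in) model, and report the
    # response change per unit perturbation.
    import numpy as np

    def model(rainfall, pumping):
        """Toy stand-in for a trained MWA-ANN: smoothed rain minus pumping."""
        return 0.8 * np.convolve(rainfall, np.ones(30) / 30, mode="same") - 0.5 * pumping

    rng = np.random.default_rng(0)
    rain = rng.gamma(2.0, 1.0, size=365)      # daily rainfall, synthetic
    pump = np.full(365, 2.0)                  # constant groundwater use, synthetic

    delta = 0.1
    sensitivity = (model(rain + delta, pump) - model(rain, pump)) / delta
    print("mean sensitivity to rainfall:", round(float(sensitivity.mean()), 3))
    ```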

  8. Optimization of Roller Velocity for Quenching Machine Based on Heat Transfer Mathematical Model

    Directory of Open Access Journals (Sweden)

    Yunfeng He

    2017-01-01

    Full Text Available During the quenching process of a steel plate, control parameters are important to product quality. In this work, a heat transfer mathematical model is first developed for a roller-type quenching machine to predict the temperature field of the plate; an optimization schedule considering quenching technology and equipment limitations is then developed on the basis of the heat transfer model, with the shortest quenching time as the objective. A numerical simulation is performed during the optimization process to investigate the effects of roller velocity on the temperature of a representative plate. Based on the optimization method, the study also covers different plate thicknesses to obtain the corresponding roller velocities. The results show that an optimized roller velocity can be achieved for the roller-type continuous quenching machine based on the heat transfer mathematical model. With increasing plate thickness, the optimized roller velocity decreases exponentially.
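
    The link between roller velocity and the heat transfer model can be sketched with a one-dimensional explicit finite-difference cooling calculation: the quench time for the plate mid-plane to reach a target temperature bounds the admissible roller velocity. All material and machine numbers below are hypothetical.

    ```python
    # 1-D explicit cooling of a plate quenched on both faces; the time for the
    # mid-plane to reach a target temperature bounds the roller velocity.
    import numpy as np

    thickness, nz = 0.02, 41                 # 20 mm plate, grid points
    alpha, k = 1.2e-5, 30.0                  # diffusivity [m^2/s], conductivity [W/mK]
    h, T_water, T0 = 5000.0, 20.0, 900.0     # film coeff [W/m^2K], temperatures [C]
    dz = thickness / (nz - 1)
    dt = 0.2 * dz * dz / alpha               # stable explicit step

    T = np.full(nz, T0)
    t, target = 0.0, 300.0
    while T[nz // 2] > target:
        Tn = T.copy()
        Tn[1:-1] = T[1:-1] + alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        # convective boundary (ghost-node form), identical on both faces
        Tn[0] = T[0] + alpha * dt / dz**2 * (
            2 * T[1] - 2 * T[0] - 2 * dz * h / k * (T[0] - T_water))
        Tn[-1] = Tn[0]
        T, t = Tn, t + dt

    zone_length = 10.0                       # quench zone length [m], hypothetical
    print(f"quench time {t:.1f} s -> roller velocity <= {zone_length / t:.3f} m/s")
    ```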

  9. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning-based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed "error indicators" (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a "local" regression model to predict the time-instantaneous error within each identified region of feature space. We consider two uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance, and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well-dependent errors.
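
    A hedged sketch of the regression step, mapping cheap error indicators to the surrogate's QoI error with a random forest, is given below on synthetic stand-in data (scikit-learn assumed).

    ```python
    # Regress the surrogate-model QoI error onto inexpensive "error indicators";
    # indicators and errors here are synthetic stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    indicators = rng.standard_normal((1000, 5))    # e.g. residual norms, ROM coords
    qoi_error = 0.3 * np.abs(indicators[:, 0]) + 0.05 * rng.standard_normal(1000)

    X_tr, X_te, e_tr, e_te = train_test_split(indicators, qoi_error, random_state=0)
    error_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, e_tr)

    # Use (1) from the abstract: correct the surrogate QoI with the predicted error.
    print("held-out R^2 of the error model:", round(error_model.score(X_te, e_te), 3))
    print("predicted corrections:", error_model.predict(X_te[:3]).round(3))
    ```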

  10. Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.

    Science.gov (United States)

    Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose

    2018-02-22

    Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. We introduce in this work a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking with the traditional approach based on n-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to more strongly influence the translation quality. Computational issues were solved by using a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and n-gram-based systems, showing that the integrated approach seems more promising for n-gram-based systems, even with non-full-quality NNLMs.
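
    The memoization idea can be sketched with a toy output layer: cache the softmax normalization constant per context so that repeated word scorings during decoding reduce to a single dot product. The model internals below are toy stand-ins, not the authors' NNLM.

    ```python
    # Cache the softmax log-normalizer per decoding context so repeated queries
    # avoid the full-vocabulary sum; toy output layer, illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    V, H = 5000, 64
    W_out = 0.01 * rng.standard_normal((V, H))   # toy NNLM output layer

    _log_Z = {}                                  # context -> cached log-normalizer

    def log_prob(word_id, context_key, hidden):
        """log P(word | context); full softmax sum only on a cache miss."""
        if context_key not in _log_Z:
            logits = W_out @ hidden              # full pass, done once per context
            m = logits.max()
            _log_Z[context_key] = m + np.log(np.exp(logits - m).sum())
        return W_out[word_id] @ hidden - _log_Z[context_key]

    h = rng.standard_normal(H)
    print(log_prob(42, ("the", "cat"), h))       # computes and caches the constant
    print(log_prob(7, ("the", "cat"), h))        # reuses the cached constant
    ```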

  11. Law machines: scale models, forensic materiality and the making of modern patent law.

    Science.gov (United States)

    Pottage, Alain

    2011-10-01

    Early US patent law was machine made. Before the Patent Office took on the function of examining patent applications in 1836, questions of novelty and priority were determined in court, within the forum of the infringement action. And at all levels of litigation, from the circuit courts up to the Supreme Court, working models were the media through which doctrine, evidence and argument were made legible, communicated and interpreted. A model could be set on a table, pointed at, picked up, rotated or upended so as to display a point of interest to a particular audience within the courtroom, and, crucially, set in motion to reveal the 'mode of operation' of a machine. The immediate object of demonstration was to distinguish the intangible invention from its tangible embodiment, but models also 'machined' patent law itself. Demonstrations of patent claims with models articulated and resolved a set of conceptual tensions that still make the definition and apprehension of the invention difficult, even today, but they resolved these tensions in the register of materiality, performativity and visibility, rather than the register of conceptuality. The story of models tells us something about how inventions emerge and subsist within the context of patent litigation and patent doctrine, and it offers a starting point for renewed reflection on the question of how technology becomes property.

  12. Contribution to the modelling of induction machines by fractional order

    Energy Technology Data Exchange (ETDEWEB)

    Canat, S.

    2005-07-15

    The induction machine is the most widespread in industry. Its traditional modeling does not take into account the eddy currents in the rotor bars, which nevertheless induce strong variations in both the resistance and the inductance of the rotor. This diffusive phenomenon, called the 'skin effect', can be modeled by a compact transfer function using fractional (non-integer order) derivatives. This report theoretically analyzes the electromagnetic phenomenon in a single rotor bar before approaching the rotor as a whole. The analysis is confirmed by finite element calculations of the magnetic field, which are exploited to identify a fractional-order model of the induction machine (using the Levenberg-Marquardt identification method). The model is then confronted with an identification from experimental results. Finally, an automatic method is developed to approximate the dynamic model by an integer-order transfer function over a frequency band. (author)

  13. Multifrequency spiral vector model for the brushless doubly-fed induction machine

    DEFF Research Database (Denmark)

    Han, Peng; Cheng, Ming; Zhu, Xinkai

    2017-01-01

    This paper presents a multifrequency spiral vector model for both steady-state and dynamic performance analysis of the brushless doubly-fed induction machine (BDFIM) with a nested-loop rotor. Winding function theory is first employed to give a full picture of the inductance characteristics analytically, revealing the underlying relationship between the harmonic components of the stator-rotor mutual inductances and the airgap magnetic field distribution. Different from existing vector models, which only model the fundamental components of the mutual inductances, the proposed vector model takes the harmonic components into account.

  14. Four wind speed multi-step forecasting models using extreme learning machines and signal decomposing algorithms

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei

    2015-01-01

    Highlights: • A hybrid architecture is proposed for wind speed forecasting. • Four algorithms are used for the multi-scale decomposition of the wind speed. • Extreme Learning Machines are employed for the wind speed forecasting. • All the proposed hybrid models generate accurate results. - Abstract: Realization of accurate wind speed forecasting is important to guarantee the safety of wind power utilization. In this paper, a new hybrid forecasting architecture is proposed to realize accurate wind speed forecasting. In this architecture, four different hybrid models are presented by combining four signal decomposing algorithms (Wavelet Decomposition, Wavelet Packet Decomposition, Empirical Mode Decomposition, and Fast Ensemble Empirical Mode Decomposition) with Extreme Learning Machines. The originality of the study is to investigate the performance improvements that these mainstream signal decomposing algorithms bring to Extreme Learning Machines in multi-step wind speed forecasting. The results of two forecasting experiments indicate that: (1) the Extreme Learning Machine method is suitable for wind speed forecasting; (2) by utilizing the decomposing algorithms, all the proposed hybrid algorithms perform better than the single Extreme Learning Machine; (3) among the decomposing algorithms in the proposed hybrid architecture, Fast Ensemble Empirical Mode Decomposition performs best in the three-step forecasts, while Wavelet Packet Decomposition performs best in the one- and two-step forecasts; at the same time, Wavelet Packet Decomposition and Fast Ensemble Empirical Mode Decomposition are better than Wavelet Decomposition and Empirical Mode Decomposition, respectively, across all step predictions; and (4) the proposed algorithms are effective for accurate wind speed prediction.
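
    The Extreme Learning Machine core, a random hidden layer with output weights solved in one least-squares step, is sketched below for one-step wind speed forecasting; the wind series is synthetic, and plain lag features stand in for the decomposed sub-series.

    ```python
    # ELM for one-step wind speed forecasting: random hidden layer, closed-form
    # output weights; synthetic wind series, lag features as inputs.
    import numpy as np

    rng = np.random.default_rng(0)
    wind = 8 + 2 * np.sin(np.arange(1500) / 25) + 0.5 * rng.standard_normal(1500)

    lags = 8
    X = np.column_stack([wind[i:-(lags - i)] for i in range(lags)])  # lag matrix
    y = wind[lags:]                                                  # next value

    n_hidden = 100
    W = rng.standard_normal((lags, n_hidden))        # random, never trained
    b = rng.standard_normal(n_hidden)
    Hmat = np.tanh(X @ W + b)                        # random feature map
    beta, *_ = np.linalg.lstsq(Hmat, y, rcond=None)  # one-shot output training

    pred = np.tanh(X[-1:] @ W + b) @ beta
    print("next-step forecast:", round(float(pred[0]), 2))
    ```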

  15. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    Science.gov (United States)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

    The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations including the Korean Meteorological Agency, the Australian Bureau of Meteorology, and the US Air Force) for both weather and climate applications. Dynamical models such as the Unified Model are now a central part of weather forecasting: starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves the relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational high-performance computer in Europe - and the cost of a typical simulation is split roughly 50% in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and machine learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model that closely approximates a far more expensive simulation. In this paper we discuss the use of machine learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.

  16. AP1000 will meet the challenges of near-term deployment

    International Nuclear Information System (INIS)

    Matzie, Regis A.

    2008-01-01

    The world demand for energy is growing rapidly, particularly in developing countries that are trying to raise the standard of living for billions of people, many of whom do not have access to electricity or clean water. Climate change and the concern for increased emissions of green house gases have brought into question the future primary reliance of fossil fuels. With the projected worldwide increase in energy demand, concern for the environmental impact of carbon emissions, and the recent price volatility of fossil fuels, nuclear energy is undergoing a rapid resurgence. This 'nuclear renaissance' is broad based, reaching across Asia, North America, Europe, as well as selected countries in Africa and South America. Many countries have publicly expressed their intentions to pursue the construction of new nuclear energy plants. Some countries that have previously turned away from commercial nuclear energy are reconsidering the advisability of this decision. This renaissance is facilitated by the availability of more advanced reactor designs than are operating today, with improved safety, economy, and operations. One such design, the Westinghouse AP1000 advanced passive plant, has been a long time in the making! The development of this passive technology started over two decades ago from an embryonic belief that a new approach to design was needed to spawn a nuclear renaissance. The principal challenges were seen as ensuring reactor safety by requiring less reliance on operator actions and overcoming the high plant capital cost of nuclear energy. The AP1000 design is based on the use of innovative passive technology and modular construction, which require significantly less equipment and commodities that facilitate a more rapid construction schedule. Because Westinghouse had the vision and the perseverance to continue the development of this passive technology, the AP1000 design is ready to meet today's challenge of near-term deployment

  17. Epileptiform activity during rewarming from moderate cerebral hypothermia in the near-term fetal sheep.

    Science.gov (United States)

    Gerrits, Luella C; Battin, Malcolm R; Bennet, Laura; Gonzalez, Hernan; Gunn, Alistair J

    2005-03-01

    Moderate hypothermia is consistently neuroprotective after hypoxic-ischemic insults and is the subject of ongoing clinical trials. In pilot studies, we observed rebound seizure activity in one infant during rewarming from a 72-h period of hypothermia. We therefore quantified the development of EEG-defined seizures during rewarming in an experimental paradigm of delayed cooling for cerebral ischemia. Moderate cerebral hypothermia (n=9) or sham cooling (n=13) was initiated 5.5 h after reperfusion from a 30-min period of bilateral carotid occlusion in near-term fetal sheep and continued for 72 h after the insult. During spontaneous rewarming, fetal extradural temperature rose from 32.5 +/- 0.6 degrees C to control levels (39.4 +/- 0.1 degrees C) in 47 +/- 6 min. Carotid blood flow and mean arterial blood pressure increased transiently during rewarming. The cooling group showed a significant increase in electrical seizure events 2, 3, and 5 h after rewarming, maximal at 2 h (2.9 +/- 1.2 versus 0.5 +/- 0.5 events/h; p <0.05). From 6 h after rewarming, there was no significant difference between the groups. Individual seizures were typically short (28.8 +/- 5.8 s versus 29.0 +/- 6.8 s in sham cooled; NS), and of modest amplitude (35.9 +/- 2.8 versus 38.8 +/- 3.4 microV; NS). Neuronal loss in the parasagittal cortex was significantly reduced in the cooled group (51 +/- 9% versus 91 +/- 5%; p <0.002) and was not correlated with rebound epileptiform activity. In conclusion, rapid rewarming after a prolonged interval of therapeutic hypothermia can be associated with a transient increase in epileptiform events but does not seem to have significant adverse implications for neural outcome.

  18. Predicting Near-Term Water Quality from Satellite Observations of Watershed Conditions

    Science.gov (United States)

    Weiss, W. J.; Wang, L.; Hoffman, K.; West, D.; Mehta, A. V.; Lee, C.

    2017-12-01

    Despite the strong influence of watershed conditions on source water quality, most water utilities and water resource agencies do not currently have the capability to monitor watershed sources of contamination with great temporal or spatial detail. Typically, knowledge of source water quality is limited to periodic grab sampling; automated monitoring of a limited number of parameters at a few select locations; and/or monitoring relevant constituents at a treatment plant intake. While important, such observations are not sufficient to inform proactive watershed or source water management at a monthly or seasonal scale. Satellite remote sensing data, on the other hand, can provide a snapshot of an entire watershed at regular, sub-monthly intervals, helping analysts characterize watershed conditions and identify trends that could signal changes in source water quality. Accordingly, the authors are investigating correlations between satellite remote sensing observations of watersheds and source water quality, at a variety of spatial and temporal scales and lags. While correlations between remote sensing observations and direct in situ measurements of water quality have been well described in the literature, there are few studies that link remote sensing observations across a watershed with near-term predictions of water quality. In this presentation, the authors will describe results of statistical analyses and discuss how these results are being used to inform development of a desktop decision support tool for predictive application of remote sensing data. Predictor variables under evaluation include parameters that describe vegetative conditions; parameters that describe climate/weather conditions; and non-remote sensing, in situ measurements. Water quality parameters under investigation include nitrogen, phosphorus, organic carbon, chlorophyll-a, and turbidity.
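
    As a rough illustration of the lagged-correlation screening described above, the sketch below correlates a hypothetical watershed vegetation index with an in situ turbidity series at several lags. The column names and values are invented placeholders, not project data.

    ```python
    # Hypothetical sketch: screen lagged correlations between a remote-sensing
    # predictor (mean NDVI over the watershed) and a water-quality series
    # (turbidity). Data and column names are illustrative only.
    import pandas as pd

    df = pd.DataFrame({
        "ndvi": [0.61, 0.63, 0.58, 0.52, 0.47, 0.45, 0.50, 0.55],
        "turbidity": [3.1, 3.4, 4.0, 4.8, 5.5, 5.9, 5.2, 4.6],
    })  # one row per satellite revisit interval

    # Correlate turbidity with NDVI observed 0..3 intervals earlier.
    for lag in range(4):
        r = df["turbidity"].corr(df["ndvi"].shift(lag))
        print(f"lag={lag}: r={r:+.2f}")
    ```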

  19. Development of near-term batteries for electric vehicles. Summary report, October 1977-September 1979

    Energy Technology Data Exchange (ETDEWEB)

    Rajan, J.B. (comp.)

    1980-06-01

    The status and results through FY 1979 of the Near-Term Electric Vehicle Battery Project of the Argonne National Laboratory are summarized. This project conducts R and D on lead-acid, nickel/zinc and nickel/iron batteries with the objective of achieving commercialization in electric vehicles in the 1980s. Key results of the R and D indicate major technology advancements and achievement of most of the FY 1979 performance goals. In the lead-acid system, the specific energy was increased from less than 30 Wh/kg to over 40 Wh/kg at the C/3 rate; the peak power density improved from 70 W/kg to over 110 W/kg at the 50% state of charge; and a deep-discharge life of over 200 cycles was demonstrated. In the nickel/iron system, a specific energy of 48 Wh/kg was achieved, a peak power of about 100 W/kg was demonstrated, and a life of 36 cycles was obtained. In the nickel/zinc system, specific energies of up to 64 Wh/kg were shown, peak powers of 133 W/kg were obtained, and a life of up to 120 cycles was measured. Future R and D will emphasize increased cycle life for nickel/zinc batteries and increased cycle life and specific energy for lead-acid and nickel/iron batteries. Testing of 145 cells was completed by NBTL. Cell evaluation included a full set of performance tests plus the application of a simulated power profile equivalent to the power demands of an electric vehicle in stop-start urban driving. Simplified test profiles which approximate electric vehicle demands are also described.

  20. Partitioning and transmutation: Near-term solution or long-term option?

    International Nuclear Information System (INIS)

    Ramspott, L.D.; Isaacs, T.

    1993-01-01

    Starting in 1989, the concept that partitioning and transmuting actinides from spent nuclear fuel could be a "solution" to the apparent lack of progress in the high-level waste disposal program began to be heard from a variety of sources, both in the US and internationally. There have been numerous papers and sessions at scientific conferences and several conferences devoted to this subject in the last three years. At the request of the US Department of Energy, the National Research Council is evaluating the feasibility of this concept. Because either plutonium or highly enriched uranium is needed to start up breeder reactors, there is a sound rationale for using Pu from reprocessing spent light-water reactor fuel to start a conversion to Pu-breeding liquid metal reactors (LMRs), once society determines that adding a large component of LMRs to the electricity-generating grid is desirable. This is the long-term option referred to in the title. It is compatible with the current and likely future high-level waste program, as well as the current nuclear power industry in the US. However, the thesis of this paper is that partitioning and transmutation (P-T) does not offer a near-term solution to high-level waste disposal in the US for numerous reasons, the most important of which is that a repository will be needed even with P-T. Other important reasons include: (1) lack of evidence that the public will be more likely to accept a repository that has a reduced inventory, (2) the waste disposal program delays do not result from technical evidence of lack of safety, (3) the economics of reprocessing and/or P-T are unfavorable, and (4) obtaining the benefits from P-T requires a long-term commitment to nuclear power.

  1. Meeting the near-term demand for hydrogen using nuclear energy in competitive power markets

    International Nuclear Information System (INIS)

    Miller, Alistair I.; Duffey, Romney B.

    2004-01-01

    Hydrogen is becoming the reference fuel for future transportation and, in the USA in particular, a vision for its production from advanced nuclear reactors has been formulated. Fulfillment of this vision depends on its economics in 2020 or later. Prior to 2020, hydrogen needs to gain a substantial foothold without incurring excessive costs for the establishment of the distribution network for the new fuel. Water electrolysis and steam-methane reforming (SMR) are the existing hydrogen-production technologies, used for small-scale and large-scale production, respectively. Provided electricity is produced at costs expected for nuclear reactors of near-term design, electrolysis appears to offer superior economics when the SMR-related costs of distribution and sequestration (or an equivalent emission levy) are included. This is shown to hold at least until several percentage points of road transport have been converted to hydrogen. Electrolysis has large advantages over SMRs in being almost scale-independent and allowing local production. The key requirements for affordable electrolysis are low capital cost and relatively high utilization, although the paper shows that it should be advantageous to avoid the peaks of electricity demand and cost. The electricity source must enable high utilization as well as being itself low-cost and emissions-free. By using off-peak electricity, no extra costs for enhanced electricity distribution should be incurred. The longer-term supply of hydrogen may ultimately evolve away from low-temperature water electrolysis, but it appears to be an excellent technology for early deployment, capable of supplying hydrogen at prices not dissimilar to today's costs for gasoline and diesel, provided the vehicle's power unit is a fuel cell. (author)

  2. The Relevance Voxel Machine (RVoxM): A Self-Tuning Bayesian Model for Informative Image-Based Prediction

    DEFF Research Database (Denmark)

    Sabuncu, Mert R.; Van Leemput, Koen

    2012-01-01

    This paper presents the relevance voxel machine (RVoxM), a dedicated Bayesian model for making predictions based on medical imaging data. In contrast to the generic machine learning algorithms that have often been used for this purpose, the method is designed to utilize a small number of spatially...
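
    RVoxM builds on sparse Bayesian ("relevance") modeling; as a loose illustration of that underlying idea only, the sketch below fits scikit-learn's ARDRegression, a generic sparse Bayesian linear regressor, to synthetic "voxel" data. It omits RVoxM's spatial regularization and is not the authors' implementation.

    ```python
    # Minimal sketch of the sparse Bayesian ("relevance") regression idea behind
    # RVoxM, using scikit-learn's ARDRegression as a generic stand-in; RVoxM
    # itself adds a spatial smoothness prior over voxels, omitted here.
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 50))           # 100 scans x 50 "voxels"
    w_true = np.zeros(50); w_true[:3] = 2.0  # only 3 voxels are relevant
    y = X @ w_true + rng.normal(scale=0.5, size=100)

    model = ARDRegression().fit(X, y)
    print("non-negligible weights:", np.flatnonzero(np.abs(model.coef_) > 0.1))
    ```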

  3. A Study of Synchronous Machine Model Implementations in Matlab/Simulink Simulations for New and Renewable Energy Systems

    DEFF Research Database (Denmark)

    Chen, Zhe; Blaabjerg, Frede; Iov, Florin

    2005-01-01

    A direct phase model of synchronous machines implemented in MATLAB/SIMULINK is presented. The effects of machine saturation have been included. Simulation studies are performed under various conditions. It has been demonstrated that MATLAB/SIMULINK is an effective tool to study the complex...

  4. Response surface modelling of tool electrode wear rate and material removal rate in micro electrical discharge machining of Inconel 718

    DEFF Research Database (Denmark)

    Puthumana, Govindan

    2017-01-01

    ...conductivity and high strength, making it extremely difficult to machine. Micro-Electrical Discharge Machining (Micro-EDM) is a non-conventional method that has the potential to overcome these restrictions for machining of Inconel 718. Response Surface Method (RSM) was used for modelling the tool Electrode Wear...
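
    For readers unfamiliar with RSM, the sketch below fits the usual second-order response-surface polynomial by ordinary least squares to invented micro-EDM data, with voltage and capacitance as hypothetical factors; the numbers are illustrative and are not the paper's measurements.

    ```python
    # Illustrative second-order response-surface fit (the usual RSM form) for
    # tool electrode wear rate vs. two hypothetical micro-EDM inputs (voltage V,
    # capacitance C); coefficients are solved by ordinary least squares.
    import numpy as np

    V = np.array([80, 80, 100, 100, 90, 90, 90.0])
    C = np.array([1.0, 10.0, 1.0, 10.0, 5.5, 5.5, 5.5])
    wear = np.array([0.8, 1.9, 1.1, 2.7, 1.6, 1.5, 1.6])  # made-up responses

    # Design matrix: 1, V, C, V*C, V^2, C^2
    A = np.column_stack([np.ones_like(V), V, C, V * C, V**2, C**2])
    coef, *_ = np.linalg.lstsq(A, wear, rcond=None)
    print("RSM coefficients:", np.round(coef, 4))
    ```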

  5. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    Full Text Available The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  6. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052

  7. Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality

    Science.gov (United States)

    Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.

    2017-12-01

    Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of the main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems, since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.
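
    A minimal sketch of the surrogate-modeling workflow described above, under stated assumptions: synthetic input-output pairs stand in for the HYDRUS-generated simulation database, and a small neural network is trained on them and scored on held-out simulations.

    ```python
    # Sketch of the surrogate-modeling step: train a data-driven model on a
    # database of physically-based simulations. Inputs/outputs are hypothetical
    # stand-ins for the simulation database described above.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(2000, 5))   # e.g., rainfall, slope, roughness, Ks, conc.
    y = X[:, 0] * 2.0 - X[:, 3] + 0.3 * rng.normal(size=2000)  # synthetic runoff

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0).fit(X_tr, y_tr)
    print("R^2 on held-out simulations:", round(surrogate.score(X_te, y_te), 3))
    ```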

  8. Model-based testing of a vehicle instrument cluster for design validation using machine vision

    International Nuclear Information System (INIS)

    Huang, Yingping; McMurran, Ross; Dhadyalla, Gunwant; Jones, R Peter; Mouzakitis, Alexandros

    2009-01-01

    This paper presents an advanced testing system, combining model-based testing and machine vision technologies, for automated design validation of a vehicle instrument cluster. In the system, a hardware-in-the-loop (HIL) tester, supported by model-based approaches, simulates vehicle operations in real time and dynamically provides all essential signals to the instrument cluster under test. A machine vision system with advanced image processing algorithms is designed to inspect the visual displays. Experiments demonstrate that the system developed is accurate for measuring the pointer position, bar graph position, pointer angular velocity and indicator flash rate, and is highly robust for validating various functionalities including warning lights status, symbol and text displays. Moreover, the system developed greatly eases the task of tedious validation testing and makes onerous repeated tests possible

  9. A Collaboration Model for Community-Based Software Development with Social Machines

    Directory of Open Access Journals (Sweden)

    Dave Murray-Rust

    2016-02-01

    Full Text Available Crowdsourcing is generally used for tasks with minimal coordination, providing limited support for dynamic reconfiguration. Modern systems, exemplified by social machines, are subject to continual flux in both the client and development communities and their needs. To support crowdsourcing of open-ended development, systems must dynamically integrate human creativity with machine support. While workflows can be used to handle structured, predictable processes, they are less suitable for social machine development and its attendant uncertainty. We present models and techniques for coordination of human workers in crowdsourced software development environments. We combine the Social Compute Unit—a model of ad-hoc human worker teams—with versatile coordination protocols expressed in the Lightweight Social Calculus. This allows us to combine coordination and quality constraints with dynamic assessments of end-user desires, dynamically discovering and applying development protocols.

  10. Low Resourced Machine Translation via Morpho-syntactic Modeling: The Case of Dialectal Arabic

    OpenAIRE

    Erdmann, Alexander; Habash, Nizar; Taji, Dima; Bouamor, Houda

    2017-01-01

    We present the second ever evaluated Arabic dialect-to-dialect machine translation effort, and the first to leverage external resources beyond a small parallel corpus. The subject has not previously received serious attention due to lack of naturally occurring parallel data; yet its importance is evidenced by dialectal Arabic's wide usage and breadth of inter-dialect variation, comparable to that of Romance languages. Our results suggest that modeling morphology and syntax significantly impro...

  11. Inversion of a radiative transfer model for estimation of rice chlorophyll content using support vector machine

    Science.gov (United States)

    Lv, Jie; Yan, Zhenguo; Wei, Jingyi

    2014-11-01

    Accurate retrieval of crop chlorophyll content is of great importance for crop growth monitoring, crop stress detection, and crop yield estimation. This study focused on retrieval of rice chlorophyll content from field spectral data through radiative transfer model inversion. A field campaign was carried out in September 2009 in the farmland of Changchun, Jilin Province, China. A separate set of 10 sites of the same species was used in 2009 for validation of the methodologies. Reflectance of rice was collected using an ASD field spectrometer over the solar reflective wavelengths (350-2500 nm), and chlorophyll content of rice was measured with a SPAD-502 chlorophyll meter. Each sample site was recorded with a Global Positioning System (GPS). Firstly, the PROSPECT radiative transfer model was inverted using a support vector machine in order to link the rice spectrum to the corresponding chlorophyll content. Secondly, genetic algorithms were adopted to select the parameters of the support vector machine, which was then trained on the training data set to establish a leaf chlorophyll content estimation model. Thirdly, a validation data set was established based on the hyperspectral data, and the leaf chlorophyll content estimation model was applied to it to estimate the leaf chlorophyll content of rice in the research area. Finally, the outcome of the inversion was evaluated using R2 and RMSE values calculated against the field measurements. The results of the study highlight the significance of support vector machines in estimating leaf chlorophyll content of rice. Future research will concentrate on the spatial resolution of satellite images and the selection of the best measurement configuration for accurate estimation of rice characteristics.
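
    A rough sketch of the estimation step, with synthetic spectra standing in for the field data: support vector regression maps reflectance to chlorophyll content. The paper tunes the SVM parameters with a genetic algorithm; a plain grid search stands in for it here.

    ```python
    # Sketch of the inversion step: learn chlorophyll content from reflectance
    # with support vector regression. A grid search substitutes for the genetic
    # algorithm used in the paper; all data are synthetic placeholders.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVR

    rng = np.random.default_rng(2)
    reflectance = rng.uniform(size=(120, 20))            # simulated spectra
    chlorophyll = 30 + 15 * reflectance[:, 5] + rng.normal(scale=1.0, size=120)

    search = GridSearchCV(SVR(kernel="rbf"),
                          {"C": [1, 10, 100], "gamma": ["scale", 0.1, 1.0]},
                          cv=5)
    search.fit(reflectance, chlorophyll)
    print("best params:", search.best_params_)
    ```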

  12. A Multi-scale, Multi-Model, Machine-Learning Solar Forecasting Technology

    Energy Technology Data Exchange (ETDEWEB)

    Hamann, Hendrik F. [IBM, Yorktown Heights, NY (United States). Thomas J. Watson Research Center

    2017-05-31

    The goal of the project was the development and demonstration of a significantly improved solar forecasting technology (short: Watt-sun), which leverages new big data processing technologies and machine-learnt blending between different models and forecast systems. The technology aimed at demonstrating major advances in accuracy, as measured by existing and new metrics which themselves were developed as part of this project. Finally, the team worked with Independent System Operators (ISOs) and utilities to integrate the forecasts into their operations.

  13. Quantitative chemogenomics: machine-learning models of protein-ligand interaction.

    Science.gov (United States)

    Andersson, Claes R; Gustafsson, Mats G; Strömbergsson, Helena

    2011-01-01

    Chemogenomics is an emerging interdisciplinary field that lies at the interface of biology, chemistry, and informatics. Most of the currently used drugs are small molecules that interact with proteins. Understanding protein-ligand interaction is therefore central to drug discovery and design. In the subfield of chemogenomics known as proteochemometrics, protein-ligand-interaction models are induced from data matrices that consist of both protein and ligand information along with some experimentally measured variable. The two general aims of this quantitative multi-structure-property-relationship modeling (QMSPR) approach are to exploit sparse/incomplete information sources and to obtain more general models covering larger parts of the protein-ligand space than traditional approaches that focus mainly on specific targets or ligands. The data matrices, usually obtained from multiple sparse/incomplete sources, typically contain series of proteins and ligands together with quantitative information about their interactions. A useful model should ideally be easy to interpret and generalize well to new unseen protein-ligand combinations. Resolving this requires sophisticated machine-learning methods for model induction, combined with adequate validation. This review is intended to provide a guide to methods and data sources suitable for this kind of protein-ligand-interaction modeling. An overview of the modeling process is presented, including data collection, protein and ligand descriptor computation, data preprocessing, machine-learning-model induction and validation. Concerns and issues specific to each step in this kind of data-driven modeling are discussed. © 2011 Bentham Science Publishers.

  14. Executive summary for assessing the near-term risk of climate uncertainty : interdependencies among the U.S. states.

    Energy Technology Data Exchange (ETDEWEB)

    Loose, Verne W.; Lowry, Thomas Stephen; Malczynski, Leonard A.; Tidwell, Vincent Carroll; Stamber, Kevin Louis; Reinert, Rhonda K.; Backus, George A.; Warren, Drake E.; Zagonel, Aldo A.; Ehlen, Mark Andrew; Klise, Geoffrey T.; Vargas, Vanessa N.

    2010-04-01

    Policy makers will most likely need to make decisions about climate policy before climate scientists have resolved all relevant uncertainties about the impacts of climate change. This study demonstrates a risk-assessment methodology for evaluating uncertain future climatic conditions. We estimate the impacts of climate change on U.S. state- and national-level economic activity from 2010 to 2050. To understand the implications of uncertainty on risk and to provide a near-term rationale for policy interventions to mitigate the course of climate change, we focus on precipitation, one of the most uncertain aspects of future climate change. We use results of the climate-model ensemble from the Intergovernmental Panel on Climate Change's (IPCC) Fourth Assessment Report (AR4) as a proxy for representing climate uncertainty over the next 40 years, map the simulated weather from the climate models hydrologically to the county level to determine the physical consequences on economic activity at the state level, and perform a detailed 70-industry analysis of economic impacts among the interacting lower-48 states. We determine the industry-level contribution to the gross domestic product and employment impacts at the state level, as well as interstate population migration, effects on personal income, and consequences for the U.S. trade balance. We show that the mean or average risk of damage to the U.S. economy from climate change, at the national level, is on the order of $1 trillion over the next 40 years, with losses in employment equivalent to nearly 7 million full-time jobs.

  15. RMP model based optimization of power system stabilizers in multi-machine power system.

    Science.gov (United States)

    Baek, Seung-Mook; Park, Jung-Wook

    2009-01-01

    This paper describes the nonlinear parameter optimization of a power system stabilizer (PSS) by using the reduced multivariate polynomial (RMP) algorithm with the one-shot property. The RMP model estimates the second-order partial derivatives of the Hessian matrix after identifying the trajectory sensitivities, which can be computed from the hybrid system modeling with a set of differential-algebraic-impulsive-switched (DAIS) structure for a power system. Then, any nonlinear controller in the power system can be optimized by achieving a desired performance measure, mathematically represented by an objective function (OF). In this paper, the output saturation limiter of the PSS, which is used to improve low-frequency oscillation damping performance during a large disturbance, is optimally tuned by exploiting the Hessian estimated by the RMP model. Its performance is evaluated with several case studies on both single-machine infinite bus (SMIB) and multi-machine power system (MMPS) models by time-domain simulation. In particular, all nonlinear parameters of multiple PSSs on the IEEE benchmark two-area four-machine power system are optimized to be robust against various disturbances by using the weighted sum of the OFs.

  16. Improving virtual screening predictive accuracy of Human kallikrein 5 inhibitors using machine learning models.

    Science.gov (United States)

    Fang, Xingang; Bagui, Sikha; Bagui, Subhash

    2017-08-01

    The readily available high throughput screening (HTS) data from the PubChem database provides an opportunity for mining of small molecules in a variety of biological systems using machine learning techniques. From the thousands of available molecular descriptors developed to encode useful chemical information representing the characteristics of molecules, descriptor selection is an essential step in building an optimal quantitative structure-activity relationship (QSAR) model. For the development of a systematic descriptor selection strategy, we need an understanding of the relationship between: (i) the descriptor selection; (ii) the choice of the machine learning model; and (iii) the characteristics of the target bio-molecule. In this work, we employed the Signature descriptor to generate a dataset on the Human kallikrein 5 (hK5) inhibition confirmatory assay data and compared multiple classification models including logistic regression, support vector machine, random forest and k-nearest neighbor. Under optimal conditions, the logistic regression model provided extremely high overall accuracy (98%) and precision (90%), with good sensitivity (65%) in the cross validation test. In testing the primary HTS screening data with more than 200K molecular structures, the logistic regression model exhibited the capability of eliminating more than 99.9% of the inactive structures. As part of our exploration of the descriptor-model-target relationship, the excellent predictive performance of the combination of the Signature descriptor and the logistic regression model on the assay data of the Human kallikrein 5 (hK5) target suggested a feasible descriptor/model selection strategy for similar targets. Copyright © 2017 Elsevier Ltd. All rights reserved.
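
    A hedged sketch of the classification step: a logistic-regression classifier trained on a descriptor matrix and scored by precision and sensitivity. The random features below are placeholders for the Signature descriptors, so the printed numbers will not reproduce the paper's results.

    ```python
    # Sketch of the screening step: a logistic-regression classifier over
    # molecular descriptors flags likely actives; the descriptor matrix here
    # is a random stand-in for real Signature descriptors.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_score, recall_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    X = rng.normal(size=(5000, 64))                  # descriptor matrix
    y = (X[:, 0] + 0.5 * X[:, 1] > 1.5).astype(int)  # rare "active" class

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print("precision:", round(precision_score(y_te, pred), 2),
          "sensitivity:", round(recall_score(y_te, pred), 2))
    ```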

  17. Characterization and modeling of 2D-glass micro-machining by spark-assisted chemical engraving (SACE) with constant velocity

    International Nuclear Information System (INIS)

    Didar, Tohid Fatanat; Dolatabadi, Ali; Wüthrich, Rolf

    2008-01-01

    Spark-assisted chemical engraving (SACE) is an unconventional micro-machining technology, based on electrochemical discharge, used for micro-machining nonconductive materials. SACE 2D micro-machining with constant speed was used to machine micro-channels in glass. The parameters affecting the quality and geometry of the micro-channels machined by SACE technology with constant velocity were presented, and the effect of each parameter was assessed. The effect of chemical etching on the geometry of micro-channels under different machining conditions has been studied, and a model is proposed for characterization of the micro-channels as a function of machining voltage and applied speed.

  18. Use of Mini-Mag Orion and superconducting coils for near-term interstellar transportation

    Science.gov (United States)

    Lenard, Roger X.; Andrews, Dana G.

    2007-06-01

    Interstellar transportation to nearby star systems over periods shorter than the human lifetime requires speeds in the range of 0.1-0.15c and relatively high accelerations. These speeds are not attainable using rockets, even with advanced fusion engines, because at these velocities the energy density of the spacecraft approaches the energy density of the fuel. Anti-matter engines are theoretically possible, but current physical limitations would have to be suspended to get the mass densities required. Interstellar ramjets have not proven practicable, so this leaves beamed momentum propulsion or a continuously fueled Mag-Orion system as the remaining candidates. However, deceleration is also a major issue, and part of the Mini-Mag Orion approach assists in solving this problem. This paper reviews the state of the art from a Phase I and II SBIR between Sandia National Laboratories and Andrews Space, applying our results to near-term interstellar travel. A 1000 t crewed spacecraft and propulsion system dry mass at 0.1c contains ~9×10²¹ J. The author has generated technology requirements elsewhere for the use of fission power reactors and conventional Brayton cycle machinery to propel a spacecraft using electric propulsion. Here we replace the electric power conversion, radiators, power generators and electric thrusters with a Mini-Mag Orion fission-fusion hybrid. Only a small fraction of the fission fuel is actually carried with the spacecraft; the remainder of the propellant (macro-particles of fissionable material with a D-T core) is beamed to the spacecraft, and the total beam energy requirement for an interstellar probe mission is roughly 10²⁰ J, which would require the complete fissioning of 1000 tons of uranium, assuming 35% power plant efficiency. This is roughly equivalent to a recurring cost per flight of 3.0 billion dollars in reactor-grade enriched uranium at today's prices. Therefore, interstellar flight is an expensive proposition, but not unaffordable, if the

  19. Impact of Human like Cues on Human Trust in Machines: Brain Imaging and Modeling Studies for Human-Machine Interactions

    Science.gov (United States)

    2018-01-05

    ...theory-of-mind bilateral game with two types of computerized agents: with or without humanlike cues. At the second experiment, human subjects played... Electrophysiological activities in brain regions belonging to the theory-of-mind network correlated with perceived capability, especially when a machine... recorded fMRI or event-related potentials while subjects were playing two cognitive games. At the first experiment, human subjects played a theory-of-mind...

  20. Response of the Kuroshio Extension path state to near-term global warming in CMIP5 experiments with MIROC4h

    Science.gov (United States)

    Li, Rui; Jing, Zhao; Chen, Zhaohui; Wu, Lixin

    2017-04-01

    In this study, responses of the Kuroshio Extension (KE) path state to near-term (2006-2035) global warming are investigated using a Kuroshio-resolving atmosphere-ocean coupled model. Under the representative concentration pathway 4.5 (RCP4.5) forcing, the KE system is intensified and its path state tends to move northward and becomes more stable. It is suggested that the local anticyclonic wind stress anomalies in the KE region favor the spin-up of the southern recirculation gyre, and the remote effect induced by the anticyclonic wind stress anomalies over the central and eastern midlatitude North Pacific also contributes to the stabilization of the KE system substantially. The dominant role of wind stress forcing on KE variability under near-term global warming is further confirmed by adopting a linear 1.5 layer reduced-gravity model forced by wind stress curl field from the present climate model. It is also found that the main contributing longitudinal band for KE index (KEI) moves westward in response to the warmed climate. This results from the northwestward expansion of the large-scale sea level pressure (SLP) field.

  1. Filtering Reordering Table Using a Novel Recursive Autoencoder Model for Statistical Machine Translation

    Directory of Open Access Journals (Sweden)

    Jinying Kong

    2017-01-01

    Full Text Available In phrase-based machine translation (PBMT) systems, the reordering table and phrase table are very large and redundant. Unlike most previous works, which aim to filter the phrase table, this paper proposes a novel deep neural network model to prune the reordering table. We cast the task as a deep learning problem where we jointly train two models: a generative model to implement rule embedding and a discriminative model to classify rules. The main contribution of this paper is that we optimize the reordering model in PBMT by filtering the reordering table using a recursive autoencoder model. To evaluate the performance of the proposed model, we applied it to a public corpus to measure its reordering ability. The experimental results show that our approach obtains a high improvement in BLEU score with a smaller reordering table on two language pairs: English-Chinese (+0.28) and Uyghur-Chinese (+0.33) MT.

  2. Near Term Hybrid Passenger Vehicle Development Program. Phase I, Final report. Appendix B: trade-off studies. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    Traversi, M.; Piccolo, R.

    1979-06-11

    Trade-off studies of Near Term Hybrid Vehicle (NTHV) design elements were performed to identify the most promising design concept in terms of achievable petroleum savings. The activities in these studies are described. The results are presented as preliminary NTHV body design, expected fuel consumption as a function of vehicle speed, engine requirements, battery requirements, and vehicle reliability and cost. (LCL)

  3. Are there intelligent Turing machines?

    OpenAIRE

    Bátfai, Norbert

    2015-01-01

    This paper introduces a new computing model based on cooperation among Turing machines, called orchestrated machines. Like universal Turing machines, orchestrated machines are also designed to simulate Turing machines, but they can also modify the original operation of the included Turing machines to create a new layer of some kind of collective behavior. Using this new model we can define some interesting notions related to the cooperation ability of Turing machines, such as the intelligence quo...

  4. Physics-informed machine learning approach for reconstructing Reynolds stress modeling discrepancies based on DNS data

    Science.gov (United States)

    Wang, Jian-Xun; Wu, Jin-Long; Xiao, Heng

    2017-03-01

    Turbulence modeling is a critical component in numerical simulations of industrial flows based on Reynolds-averaged Navier-Stokes (RANS) equations. However, after decades of efforts in the turbulence modeling community, universally applicable RANS models with predictive capabilities are still lacking. Large discrepancies in the RANS-modeled Reynolds stresses are the main source that limits the predictive accuracy of RANS models. Identifying these discrepancies is of significance for possibly improving RANS modeling. In this work, we propose a data-driven, physics-informed machine learning approach for reconstructing discrepancies in RANS-modeled Reynolds stresses. The discrepancies are formulated as functions of the mean flow features. By using a modern machine learning technique based on random forests, the discrepancy functions are trained by existing direct numerical simulation (DNS) databases and then used to predict Reynolds stress discrepancies in different flows where data are not available. The proposed method is evaluated by two classes of flows: (1) fully developed turbulent flows in a square duct at various Reynolds numbers and (2) flows with massive separations. In separated flows, two training flow scenarios of increasing difficulty are considered: (1) the flow in the same periodic hills geometry yet at a lower Reynolds number and (2) the flow in a different hill geometry with a similar recirculation zone. Excellent predictive performances were observed in both scenarios, demonstrating the merits of the proposed method.
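
    A minimal sketch of the discrepancy-learning idea, assuming synthetic stand-ins for the mean-flow features and the DNS-derived targets: a random forest is trained to map features to a Reynolds-stress discrepancy and is then queried on cells of a new flow.

    ```python
    # Minimal sketch of the discrepancy-learning step: a random forest maps
    # mean-flow features to the RANS/DNS Reynolds-stress discrepancy. Features
    # and targets are synthetic placeholders for the DNS training database.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(4)
    q_train = rng.uniform(size=(3000, 6))     # mean-flow features per cell
    delta_tau = np.sin(q_train[:, 0]) - q_train[:, 2] ** 2   # discrepancy target

    forest = RandomForestRegressor(n_estimators=200, random_state=0)
    forest.fit(q_train, delta_tau)

    q_new = rng.uniform(size=(5, 6))          # cells of a "prediction" flow
    print("predicted discrepancies:", np.round(forest.predict(q_new), 3))
    ```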

  5. Application of Machine-Learning Models to Predict Tacrolimus Stable Dose in Renal Transplant Recipients

    Science.gov (United States)

    Tang, Jie; Liu, Rong; Zhang, Yue-Li; Liu, Mou-Ze; Hu, Yong-Fang; Shao, Ming-Jie; Zhu, Li-Jun; Xin, Hua-Wen; Feng, Gui-Wen; Shang, Wen-Jun; Meng, Xiang-Guang; Zhang, Li-Rong; Ming, Ying-Zi; Zhang, Wei

    2017-02-01

    Tacrolimus has a narrow therapeutic window and considerable variability in clinical use. Our goal was to compare the performance of multiple linear regression (MLR) and eight machine learning techniques in pharmacogenetic algorithm-based prediction of tacrolimus stable dose (TSD) in a large Chinese cohort. A total of 1,045 renal transplant patients were recruited, 80% of which were randomly selected as the “derivation cohort” to develop dose-prediction algorithm, while the remaining 20% constituted the “validation cohort” to test the final selected algorithm. MLR, artificial neural network (ANN), regression tree (RT), multivariate adaptive regression splines (MARS), boosted regression tree (BRT), support vector regression (SVR), random forest regression (RFR), lasso regression (LAR) and Bayesian additive regression trees (BART) were applied and their performances were compared in this work. Among all the machine learning models, RT performed best in both derivation [0.71 (0.67-0.76)] and validation cohorts [0.73 (0.63-0.82)]. In addition, the ideal rate of RT was 4% higher than that of MLR. To our knowledge, this is the first study to use machine learning models to predict TSD, which will further facilitate personalized medicine in tacrolimus administration in the future.
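
    As a loose illustration of the best-performing method, the sketch below fits a regression tree to invented covariates (age, weight, a genotype code, hematocrit) and a synthetic dose; nothing here reflects the study's actual cohort, covariates, or tuning.

    ```python
    # Sketch of the winning approach: a regression tree predicting a stable
    # dose from clinical/pharmacogenetic covariates. All variables and data
    # are hypothetical placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(5)
    # columns: age, weight, genotype code (0/1/2), hematocrit
    X = np.column_stack([rng.integers(18, 70, 800),
                         rng.uniform(45, 95, 800),
                         rng.integers(0, 3, 800),
                         rng.uniform(0.25, 0.50, 800)])
    dose = 2.0 + 1.5 * X[:, 2] + 0.02 * X[:, 1] + rng.normal(scale=0.4, size=800)

    X_tr, X_te, y_tr, y_te = train_test_split(X, dose, random_state=0)
    tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_tr, y_tr)
    print("validation R^2:", round(tree.score(X_te, y_te), 2))
    ```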

  6. A general electromagnetic excitation model for electrical machines considering the magnetic saturation and rub impact

    Science.gov (United States)

    Xu, Xueping; Han, Qinkai; Chu, Fulei

    2018-03-01

    The electromagnetic vibration of electrical machines with an eccentric rotor has been extensively investigated. However, magnetic saturation was often neglected. Moreover, the rub impact between the rotor and stator is inevitable when the amplitude of the rotor vibration exceeds the air-gap. This paper aims to propose a general electromagnetic excitation model for electrical machines. First, a general model which takes the magnetic saturation and rub impact into consideration is proposed and validated by the finite element method and reference. The dynamic equations of a Jeffcott rotor system with electromagnetic excitation and mass imbalance are presented. Then, the effects of pole-pair number and rubbing parameters on vibration amplitude are studied and approaches restraining the amplitude are put forward. Finally, the influences of mass eccentricity, resultant magnetomotive force (MMF), stiffness coefficient, damping coefficient, contact stiffness and friction coefficient on the stability of the rotor system are investigated through the Floquet theory, respectively. The amplitude jumping phenomenon is observed in a synchronous generator for different pole-pair numbers. The changes of design parameters can alter the stability states of the rotor system, and the range of parameter values forms the zone of stability, which offers helpful guidance for the design and application of electrical machines.

  7. Bayesian reliability modeling and assessment solution for NC machine tools under small-sample data

    Science.gov (United States)

    Yang, Zhaojun; Kan, Yingnan; Chen, Fei; Xu, Binbin; Chen, Chuanhai; Yang, Chuangui

    2015-11-01

    Although Markov chain Monte Carlo (MCMC) algorithms are accurate, many factors may cause instability when they are utilized in reliability analysis; such instability makes these algorithms unsuitable for widespread engineering applications. Thus, a reliability modeling and assessment solution aimed at small-sample data of numerical control (NC) machine tools is proposed on the basis of Bayes theories. An expert-judgment process of fusing multi-source prior information is developed to obtain the Weibull parameters' prior distributions and reduce the subjective bias of usual expert-judgment methods. The grid approximation method is applied to the two-parameter Weibull distribution to derive the formulas for the parameters' posterior distributions and solve the calculation difficulty of high-dimensional integration. The method is then applied to the real data of a type of NC machine tool to implement a reliability assessment and obtain the mean time between failures (MTBF). The relative error of the proposed method is 5.8020×10⁻⁴ compared with the MTBF obtained by the MCMC algorithm. This result indicates that the proposed method is as accurate as MCMC. The newly developed solution for reliability modeling and assessment of NC machine tools under small-sample data is easy, practical, and highly suitable for widespread application in the engineering field; in addition, the solution does not reduce accuracy.
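
    A small sketch of the grid-approximation step under a flat prior, with invented failure times rather than the NC-tool data: the two-parameter Weibull log-likelihood is evaluated on a (shape, scale) grid and normalized into a posterior, from which posterior means are read off.

    ```python
    # Sketch of grid approximation for a two-parameter Weibull posterior under
    # a flat prior. Failure times are illustrative, not NC machine tool data.
    import numpy as np

    t = np.array([120.0, 340.0, 410.0, 560.0, 800.0])   # times between failures, h
    k_grid = np.linspace(0.5, 3.0, 200)                 # shape candidates
    lam_grid = np.linspace(100.0, 1500.0, 200)          # scale candidates
    K, L = np.meshgrid(k_grid, lam_grid, indexing="ij")

    # Weibull log-likelihood summed over observations, evaluated on the grid.
    loglik = sum(np.log(K / L) + (K - 1) * np.log(ti / L) - (ti / L) ** K
                 for ti in t)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()

    print("posterior mean shape:", round(float((post * K).sum()), 3))
    print("posterior mean scale:", round(float((post * L).sum()), 1))
    ```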

  8. Machine learning methods enable predictive modeling of antibody feature:function relationships in RV144 vaccinees.

    Directory of Open Access Journals (Sweden)

    Ickwon Choi

    2015-04-01

    Full Text Available The adaptive immune response to vaccination or infection can lead to the production of specific antibodies to neutralize the pathogen or recruit innate immune effector cells for help. The non-neutralizing role of antibodies in stimulating effector cell responses may have been a key mechanism of the protection observed in the RV144 HIV vaccine trial. In an extensive investigation of a rich set of data collected from RV144 vaccine recipients, we here employ machine learning methods to identify and model associations between antibody features (IgG subclass and antigen specificity) and effector function activities (antibody-dependent cellular phagocytosis, cellular cytotoxicity, and cytokine release). We demonstrate via cross-validation that classification and regression approaches can effectively use the antibody features to robustly predict qualitative and quantitative functional outcomes. This integration of antibody feature and function data within a machine learning framework provides a new, objective approach to discovering and assessing multivariate immune correlates.

  9. Predicting Pre-planting Risk of Stagonospora nodorum blotch in Winter Wheat Using Machine Learning Models.

    Science.gov (United States)

    Mehra, Lucky K; Cowger, Christina; Gross, Kevin; Ojiambo, Peter S

    2016-01-01

    Pre-planting factors have been associated with the late-season severity of Stagonospora nodorum blotch (SNB), caused by the fungal pathogen Parastagonospora nodorum, in winter wheat (Triticum aestivum). The relative importance of these factors in the risk of SNB has not been determined and this knowledge can facilitate disease management decisions prior to planting of the wheat crop. In this study, we examined the performance of multiple regression (MR) and three machine learning algorithms namely artificial neural networks, categorical and regression trees, and random forests (RF), in predicting the pre-planting risk of SNB in wheat. Pre-planting factors tested as potential predictor variables were cultivar resistance, latitude, longitude, previous crop, seeding rate, seed treatment, tillage type, and wheat residue. Disease severity assessed at the end of the growing season was used as the response variable. The models were developed using 431 disease cases (unique combinations of predictors) collected from 2012 to 2014 and these cases were randomly divided into training, validation, and test datasets. Models were evaluated based on the regression of observed against predicted severity values of SNB, sensitivity-specificity ROC analysis, and the Kappa statistic. A strong relationship was observed between late-season severity of SNB and specific pre-planting factors in which latitude, longitude, wheat residue, and cultivar resistance were the most important predictors. The MR model explained 33% of variability in the data, while machine learning models explained 47 to 79% of the total variability. Similarly, the MR model correctly classified 74% of the disease cases, while machine learning models correctly classified 81 to 83% of these cases. Results show that the RF algorithm, which explained 79% of the variability within the data, was the most accurate in predicting the risk of SNB, with an accuracy rate of 93%. The RF algorithm could allow early assessment of

  10. Improving understanding of near-term barrier island evolution through multi-decadal assessment of morphologic change

    Science.gov (United States)

    Lentz, Erika E.; Hapke, Cheryl J.; Stockdon, Hilary F.; Hehre, Rachel E.

    2013-01-01

    Observed morphodynamic changes over multiple decades were coupled with storm-driven run-up characteristics at Fire Island, New York, to explore the influence of wave processes relative to the impacts of other coastal change drivers on the near-term evolution of the barrier island. Historical topography was generated from digital stereo-photogrammetry and compared with more recent lidar surveys to quantify near-term (decadal) morphodynamic changes to the beach and primary dune system between the years 1969, 1999, and 2009. Notably increased profile volumes were observed along the entirety of the island in 1999, and likely provide the eolian source for the steady dune crest progradation observed over the relatively quiescent decade that followed. Persistent patterns of erosion and accretion over 10-, 30-, and 40-year intervals are attributable to variations in island morphology, human activity, and variations in offshore bathymetry and island orientation that influence the wave energy reaching the coast. Areas of documented long-term historical inlet formation and extensive bayside marsh development show substantial landward translation of the dune–beach profile over the near-term period of this study. Correlations among areas predicted to overwash, observed elevation changes of the dune crestline, and observed instances of overwash in undeveloped segments of the barrier island verify that overwash locations can be accurately predicted in undeveloped segments of coast. In fact, an assessment of 2012 aerial imagery collected after Hurricane Sandy confirms that overwash occurred at the majority of near-term locations persistently predicted to overwash. In addition to the storm wave climate, factors related to variations within the geologic framework which in turn influence island orientation, offshore slope, and sediment supply impact island behavior on near-term timescales.

  11. Uncertainty "escalation" and use of machine learning to forecast residual and data model uncertainties

    Science.gov (United States)

    Solomatine, Dimitri

    2016-04-01

    When speaking about model uncertainty, many authors implicitly assume data uncertainty (mainly in parameters or inputs), which is probabilistically described by distributions. Often, however, it is worth looking into the residual uncertainty as well. It is hence reasonable to classify the main approaches to uncertainty analysis with respect to the two main types of model uncertainty that can be distinguished: A. The residual uncertainty of models. In this case the model parameters and/or model inputs are considered to be fixed (deterministic), i.e. the model is considered to be optimal (calibrated) and deterministic. Model error is considered as the manifestation of uncertainty. If there is enough past data about the model errors (i.e. its uncertainty), it is possible to build a statistical or machine learning model of uncertainty trained on this data. The following methods can be mentioned: (a) the quantile regression (QR) method by Koenker and Bassett, in which linear regression is used to build predictive models for distribution quantiles [1]; (b) a more recent approach that takes into account the input variables influencing such uncertainty and uses more advanced machine learning (non-linear) methods (neural networks, model trees etc.) - the UNEEC method [2,3,7]; (c) the even more recent DUBRAUE method (Dynamic Uncertainty Model By Regression on Absolute Error), an autoregressive model of model residuals (it corrects the model residual first and then carries out the uncertainty prediction by an autoregressive statistical model) [5]. B. The data uncertainty (parametric and/or input) - in this case we study the propagation of uncertainty (presented typically probabilistically) from parameters or inputs to the model outputs. In the case of simple functions representing models, analytical approaches can be used, or approximation methods (e.g., first-order second moment method). However, for real complex non-linear models implemented in software there is no other choice except using
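
    As an illustration of approach (a), the sketch below fits quantile regression (statsmodels' QuantReg) to synthetic heteroscedastic model errors to obtain 5% and 95% residual-uncertainty bounds; all variables are placeholders, not the cited authors' data.

    ```python
    # Sketch of approach (a): quantile regression on past model errors yields
    # residual-uncertainty bounds. Data are synthetic and heteroscedastic.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    x = rng.uniform(0, 10, 500)                  # e.g., forecast discharge
    resid = rng.normal(scale=0.2 + 0.1 * x)      # model error grows with x
    X = sm.add_constant(x)

    for q in (0.05, 0.95):
        fit = sm.QuantReg(resid, X).fit(q=q)
        print(f"q={q}: intercept={fit.params[0]:+.3f}, slope={fit.params[1]:+.3f}")
    ```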

  12. A hybrid prognostic model for multistep ahead prediction of machine condition

    Science.gov (United States)

    Roulias, D.; Loutas, T. H.; Kostopoulos, V.

    2012-05-01

    Prognostics are the future trend in condition-based maintenance. In the current framework a data-driven prognostic model is developed. The typical procedure for developing such a model comprises (a) the selection of features which correlate well with the gradual degradation of the machine and (b) the training of a mathematical tool. In this work the data are taken from a laboratory-scale single-stage gearbox under multi-sensor monitoring. Tests monitoring the condition of the gear pair from the healthy state until total breakdown, following several days of continuous operation, were conducted. After basic pre-processing of the derived data, an indicator that correlated well with the gearbox condition was obtained. Subsequently, the time series is split into a few distinguishable time regions via an intelligent data clustering scheme. Each operating region is modelled with a feed-forward artificial neural network (FFANN) scheme. The performance of the proposed model is tested by applying the system to predict the machine degradation level on unseen data. The results show the plausibility and effectiveness of the model in following the trend of the time series, even in the case that a sudden change occurs. Moreover, the model shows the ability to generalise for application to similar mechanical assets.

  13. Field tests and machine learning approaches for refining algorithms and correlations of driver's model parameters.

    Science.gov (United States)

    Tango, Fabio; Minin, Luca; Tesauri, Francesco; Montanari, Roberto

    2010-03-01

    This paper describes the field tests on a driving simulator carried out to validate the algorithms and the correlations of dynamic parameters, specifically driving task demand and drivers' distraction, able to predict drivers' intentions. These parameters belong to the driver's model developed by AIDE (Adaptive Integrated Driver-vehicle InterfacE) European Integrated Project. Drivers' behavioural data have been collected from the simulator tests to model and validate these parameters using machine learning techniques, specifically the adaptive neuro fuzzy inference systems (ANFIS) and the artificial neural network (ANN). Two models of task demand and distraction have been developed, one for each adopted technique. The paper provides an overview of the driver's model, the description of the task demand and distraction modelling and the tests conducted for the validation of these parameters. A test comparing predicted and expected outcomes of the modelled parameters for each machine learning technique has been carried out: for distraction, in particular, promising results (low prediction errors) have been obtained by adopting an artificial neural network.

  14. Estimating the complexity of 3D structural models using machine learning methods

    Science.gov (United States)

    Mejía-Herrera, Pablo; Kakurina, Maria; Royer, Jean-Jacques

    2016-04-01

    Quantifying the complexity of 3D geological structural models can play a major role in natural resources exploration surveys, in predicting environmental hazards, and in forecasting fossil resources. This paper proposes a structural complexity index which can be used to help define the degree of effort necessary to build a 3D model for a given degree of confidence, and also to identify locations where additional effort is required to meet a given acceptable risk of uncertainty. In this work, it is considered that the structural complexity index can be estimated using machine learning methods on raw geo-data. More precisely, the metric for measuring complexity can be approximated as the degree of difficulty associated with predicting the distribution of geological objects, calculated from partial information on the actual structural distribution of materials. The proposed methodology is tested on a set of 3D synthetic structural models for which the degree of effort during their building is assessed using various parameters (such as number of faults, number of parts in a surface object, number of borders, ...), the rank of geological elements contained in each model, and, finally, their level of deformation (folding and faulting). The results show how the estimated complexity of a 3D model can be approximated by the quantity of partial data necessary to reproduce the actual 3D model at a given precision, without error, using machine learning algorithms.

  15. Predicting Freeway Work Zone Delays and Costs with a Hybrid Machine-Learning Model

    Directory of Open Access Journals (Sweden)

    Bo Du

    2017-01-01

    Full Text Available A hybrid machine-learning model, integrating an artificial neural network (ANN) and a support vector machine (SVM) model, is developed to predict spatiotemporal delays, subject to road geometry, number of lane closures, and work zone duration in different periods of a day and on the days of a week. The model is very user friendly, requiring minimal input from users. With it, the delays caused by a work zone at any location on a New Jersey freeway can be predicted. To this end, tremendous amounts of data from different sources were collected to establish the relationship between the model inputs and outputs. A comparative analysis was conducted, and the results indicate that the proposed model outperforms others in achieving the lowest root mean square error (RMSE). The proposed hybrid model can be used to calculate contractor penalties in terms of cost overruns, as well as incentive reward schedules in case of early work completion. Additionally, it can assist work zone planners in determining the best start and end times of a work zone for developing and evaluating traffic mitigation and management plans.
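
    A toy sketch of the hybrid ANN+SVM idea, assuming invented work-zone features (lanes closed, duration, start hour, AADT): the two regressors' predictions are simply averaged here, which is only one of several ways such models can be combined and is not necessarily the integration used by the authors.

    ```python
    # Toy sketch of a hybrid ANN + SVM delay predictor: both regressors are
    # trained on the same (invented) work-zone features and their predictions
    # are averaged. Data and feature choices are hypothetical placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(7)
    X = np.column_stack([rng.integers(1, 4, 600),      # lanes closed
                         rng.uniform(2, 12, 600),      # duration, h
                         rng.integers(0, 24, 600),     # start hour
                         rng.uniform(20, 120, 600)])   # AADT, thousands
    delay = 5 * X[:, 0] + 0.8 * X[:, 3] + rng.normal(scale=3, size=600)

    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                       random_state=0).fit(X, delay)
    svm = SVR(C=100).fit(X, delay)

    x_new = np.array([[2, 8.0, 14, 90.0]])
    hybrid = 0.5 * (ann.predict(x_new)[0] + svm.predict(x_new)[0])
    print("hybrid delay estimate:", round(hybrid, 1))
    ```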

  16. Structure Based Thermostability Prediction Models for Protein Single Point Mutations with Machine Learning Tools.

    Directory of Open Access Journals (Sweden)

    Lei Jia

    Full Text Available The thermostability of protein point mutations is a common concern in protein engineering. An application which predicts the thermostability of mutants can be helpful for guiding the decision-making process in protein design via mutagenesis. An in silico point mutation scanning method is frequently used to find "hot spots" in proteins for focused mutagenesis. ProTherm (http://gibk26.bio.kyutech.ac.jp/jouhou/Protherm/protherm.html) is a public database that consists of thousands of experimentally measured thermostability values for protein mutants. Two data sets based on two differently measured thermostability properties of protein single point mutations, namely the unfolding free energy change (ddG) and the melting temperature change (dTm), were obtained from this database. Folding free energy changes calculated with Rosetta, structural information about the point mutations, and amino acid physical properties were obtained for building thermostability prediction models with informatics modeling tools. Five supervised machine learning methods (support vector machine, random forests, artificial neural network, naïve Bayes classifier, K nearest neighbor) and partial least squares regression were used for building the prediction models. Binary and ternary classification as well as regression models were built and evaluated. Data set redundancy and balancing, the reverse mutations technique, feature selection, and comparison to other published methods are discussed. The Rosetta-calculated folding free energy change ranked as the most influential feature in all prediction models. Other descriptors also made significant contributions to increasing the accuracy of the prediction models.

  17. Mathematical model of forming screw profiles of compressor machines and pumps

    Science.gov (United States)

    Panchuk, K. L.; Lyashkov, A. A.; Varepo, L. G.

    2017-10-01

    The article presents the results of mathematical modeling of screw surface shaping for compressor machines and pumps. The study is based on the method of the moving trihedron of a curve. A mathematical model of flat gearing - the basis for screw formation - is proposed. The model is based on a geometric interpretation of the motions of plane curve trihedra and on the Bobillier construction, known in the geometric theory of plane mechanisms. The geometric scheme of this construction was expanded by introducing evolutes that simulate the instantaneous motions of the curves' trihedra. As a result, a mathematical model was obtained that is more complete than the known models of flat gearing, making it possible to perform synthesis and analysis of profiled screw geometry. It solves both the direct and inverse problems of screw profiling, simultaneously providing the curvature of the desired profiles. The proposed model can be used as the basis of an automated system for shaping mutually enveloping screw surfaces for compressor machines and pumps.

  18. Mathematical concepts for modeling human behavior in complex man-machine systems

    Science.gov (United States)

    Johannsen, G.; Rouse, W. B.

    1979-01-01

    Many human behavior (e.g., manual control) models have been found to be inadequate for describing processes in certain real complex man-machine systems. An attempt is made to find a way to overcome this problem by examining the range of applicability of existing mathematical models with respect to the hierarchy of human activities in real complex tasks. Automobile driving is chosen as a baseline scenario, and a hierarchy of human activities is derived by analyzing this task in general terms. A structural description leads to a block diagram and a time-sharing computer analogy.

  19. A Multianalyzer Machine Learning Model for Marine Heterogeneous Data Schema Mapping

    Directory of Open Access Journals (Sweden)

    Wang Yan

    2014-01-01

    Full Text Available The main challenge that marine heterogeneous data integration faces is accurate schema mapping between heterogeneous data sources. In order to improve schema mapping efficiency and obtain more accurate learning results, this paper proposes a heterogeneous data schema mapping method based on a multianalyzer machine learning model. The multianalyzer analyzes the learning results comprehensively, and a fuzzy comprehensive evaluation system is introduced for evaluating the output results and for multi-factor quantitative judgment. Finally, a data mapping comparison experiment on East China Sea observation data confirms the effectiveness of the model and shows that the multianalyzer clearly reduces the mapping error rate.

  20. Improving Language Models in Speech-Based Human-Machine Interaction

    Directory of Open Access Journals (Sweden)

    Raquel Justo

    2013-02-01

    Full Text Available This work focuses on speech-based human-machine interaction. Specifically, a Spoken Dialogue System (SDS) that could be integrated into a robot is considered. Since Automatic Speech Recognition is one of the most sensitive tasks that must be confronted in such systems, the goal of this work is to improve the results obtained by this specific module. In order to do so, a hierarchical Language Model (LM) is considered. Different series of experiments were carried out using the proposed models over different corpora and tasks. The results obtained show that these models provide greater accuracy in the recognition task. Additionally, the influence of the Acoustic Model (AM) on the improvement percentage of the Language Models has also been explored. Finally, hierarchical Language Models were also successfully employed in a language understanding task, as shown in an additional series of experiments.

  1. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    Science.gov (United States)

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in medicine and the health care sector. In machine learning, classification and prediction constitute a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  2. Mechatronics in the mining industry. Modelling of underground machines; Mechatronik im Bergbau. Modellbildung von Untertage-Maschinen

    Energy Technology Data Exchange (ETDEWEB)

    Bruckmann, Tobias; Brandt, Thorsten [mercatronics GmbH, Duisburg (Germany)

    2009-12-17

    The development of new functions for machines operating underground often requires a prolonged and cost-intensive test phase. It is precisely the development of complex functions, such as those found in operator assistance systems, that is highly iterative. If a corresponding prototype is required for each iteration step of the development, the development costs will, of course, increase rapidly. Virtual prototypes and simulators based on mathematical models of the machine offer an alternative in this case. The article describes the basic principles for modelling the kinematics of underground machines. (orig.)

  3. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  4. Dynamic Modeling and Damping Function of GUPFC in Multi-Machine Power System

    Directory of Open Access Journals (Sweden)

    Sasongko Pramono Hadi

    2011-11-01

    Full Text Available This paper presents a new dynamic model of a multi-machine power system equipped with a GUPFC for power system studies, and proposes effective control schemes, using a PSS and a GUPFC POD controller, to improve power system stability. Based on the UPFC configuration, an additional series boosting transformer is introduced to define the GUPFC configuration and its mathematical model; the Phillips-Heffron scheme is used to formulate the machine model, and the network is modified to account for the GUPFC parameters, yielding a MIMO as well as comprehensive model of the power system with GUPFC. A genetic algorithm is proposed for the lead-lag compensation design; this technique provides the controller parameters. The controllers produce supplementary signals: the PSS for the machine and the POD for the GUPFC. The dynamic stability of the power system was investigated by applying a small disturbance. Simulation results show that the proposed power system model with GUPFC is valid and suitable for stability analysis. Installing the GUPFC without POD decreased the damping of oscillations, but the results show that the presence of a GUPFC provided with PSS and POD controllers has great potential to improve system stability. A 66% reduction in overshoot was reached, and the settling time was shortened to 12 s, although the rise time became 700 ms longer. Simulation results revealed that the role of the POD controller is more dominant than that of the PSS; however, the PSS and GUPFC POD controller together exhibit a positive interaction. The phase angle of converter C, δC, is the most significant POD control signal for oscillation damping.

  5. Superconducting rotating electronic machine

    International Nuclear Information System (INIS)

    Cheon, Hui Yeong

    1989-04-01

    This book is divided into ten chapters, covering: a summary of superconducting electric machines; aspects of the use of superconductors; superconducting direct current machines (homopolar D.C. machines, drum machines, the segmented slip-ring principle, and carbon fibre brushes); the superconducting alternating current turbine generator; the design of superconducting alternating current machines; the performance of superconducting alternating current machines; the superconducting turbo generator with a new rotor design; the basic design of the superconducting current generator; the generator and power model; and rotor design and material property information.

  6. Retention of knowledge and experience from experts in near-term operating plants

    International Nuclear Information System (INIS)

    Jiang, H.

    2007-01-01

    Full text: Tianwan Nuclear Power Station (TNPS) will be put into commercial operation in May 2007. Right-sizing is under way to adapt the organization to this new stage of TNPS, and TNPS faces the challenge of a dilution of expertise caused by the right-sizing. This situation is aggravated by an incipient training system and a very competitive fight to attract technical experts in the nuclear area, owing to the very ambitious nuclear plant projects thriving in China. This can compromise the capability to operate TNPS safely and economically. Undoubtedly, personnel training plays a crucial role in knowledge management, especially in countries such as China whose professional education systems are weak. Key knowledge and skills for safely and reliably operating nuclear power plants can be effectively identified by a personnel training system developed in a systematic way and properly implemented, and only sound and sufficient training can produce an adequate number of replacements. A well-developed IT platform can help information management in this era of information and the internet. Information should be collected in a systematic way instead of stacked on an ad hoc basis. The project database must be established in a well-organized way, and the information must be awakened from its sleep, so that usable data are not lost and are readily accessible on the intranet and available to users; otherwise engineers take great pains to search for data like looking for a needle in a haystack, while useful data gather dust somewhere deep in the databank. Compared to well-developed industrial countries, there is considerable room for improvement in fundamental aspects which are cardinal requisites for effective knowledge management. These factors contributing to knowledge management in near-term operating plants include not simply training and information management but also almost all other technical and management aspects related to the

  7. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    Science.gov (United States)

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problems of human-machine interface layout design for cabins, an operating comfort prediction model based on GEP (Gene Expression Programming) is proposed, using operating comfort to evaluate layout schemes. Joint angles are used to describe the operating posture of the upper limb and are taken as the independent variables of a comfort model of operating posture. Factor analysis is adopted to reduce the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is the operating comfort score. A Chinese virtual human body model is built in CATIA, which is used to simulate and evaluate operators' operating comfort. With 22 groups of evaluation data as training and validation samples, the GEP algorithm is used to obtain the best-fitting function between the joint angles and operating comfort; operating comfort can then be predicted quantitatively. The operating comfort predictions for the human-machine interface layout of a driller control room show that the GEP-based prediction model is fast and efficient, has good predictive power, and can improve design efficiency.

  8. SVMQA: support-vector-machine-based protein single-model quality assessment.

    Science.gov (United States)

    Manavalan, Balachandran; Lee, Jooyoung

    2017-08-15

    The accurate ranking of predicted structural models and selecting the best model from a given candidate pool remain as open problems in the field of structural bioinformatics. The quality assessment (QA) methods used to address these problems can be grouped into two categories: consensus methods and single-model methods. Consensus methods in general perform better and attain higher correlation between predicted and true quality measures. However, these methods frequently fail to generate proper quality scores for native-like structures which are distinct from the rest of the pool. Conversely, single-model methods do not suffer from this drawback and are better suited for real-life applications where many models from various sources may not be readily available. In this study, we developed a support-vector-machine-based single-model global quality assessment (SVMQA) method. For a given protein model, the SVMQA method predicts TM-score and GDT_TS score based on a feature vector containing statistical potential energy terms and consistency-based terms between the actual structural features (extracted from the three-dimensional coordinates) and predicted values (from primary sequence). We trained SVMQA using CASP8, CASP9 and CASP10 targets and determined the machine parameters by 10-fold cross-validation. We evaluated the performance of our SVMQA method on various benchmarking datasets. Results show that SVMQA outperformed the existing best single-model QA methods both in ranking provided protein models and in selecting the best model from the pool. According to the CASP12 assessment, SVMQA was the best method in selecting good-quality models from decoys in terms of GDTloss. SVMQA method can be freely downloaded from http://lee.kias.re.kr/SVMQA/SVMQA_eval.tar.gz. jlee@kias.re.kr. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  9. Hemodynamic modelling of BOLD fMRI - A machine learning approach

    DEFF Research Database (Denmark)

    Jacobsen, Danjal Jakup

    2007-01-01

    This Ph.D. thesis concerns the application of machine learning methods to hemodynamic models for BOLD fMRI data. Several such models have been proposed by different researchers, and they have in common a basis in physiological knowledge of the hemodynamic processes involved in the generation of the BOLD signal. The BOLD signal is modelled as a non-linear function of underlying, hidden (non-measurable) hemodynamic state variables. The focus of this thesis work has been to develop methods for learning the parameters of such models, both in their traditional formulation and in a state space formulation. In the latter, noise enters at the level of the hidden states, as well as in the BOLD measurements themselves. A framework has been developed to allow approximate posterior distributions of model parameters to be learned from real fMRI data. This is accomplished with Markov chain Monte Carlo methods.
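
    As a rough illustration of this approach, the sketch below runs a random-walk Metropolis sampler to obtain an approximate posterior over the parameters of a toy nonlinear response model; the model form, priors, and noise level are illustrative stand-ins for a real hemodynamic model.

```python
# Minimal Metropolis sampler sketch: posterior inference over the parameters of a
# toy nonlinear signal model (a stand-in for a hemodynamic BOLD model).
import numpy as np

rng = np.random.default_rng(0)

def response(t, theta):
    a, tau = theta
    return a * t * np.exp(-t / tau)        # toy impulse response, not the balloon model

t = np.linspace(0, 20, 100)
theta_true = (2.0, 3.0)
y = response(t, theta_true) + rng.normal(scale=0.2, size=t.size)

def log_post(theta):
    a, tau = theta
    if a <= 0 or tau <= 0:
        return -np.inf                     # flat priors on the positive quadrant
    resid = y - response(t, theta)
    return -0.5 * np.sum(resid**2) / 0.2**2

samples, theta = [], np.array([1.0, 1.0])
lp = log_post(theta)
for _ in range(20000):
    prop = theta + rng.normal(scale=0.05, size=2)   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    samples.append(theta.copy())

post = np.array(samples[5000:])            # discard burn-in
print("posterior mean:", post.mean(axis=0), " true:", theta_true)
```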

  10. Bearing Degradation Process Prediction Based on the Support Vector Machine and Markov Model

    Directory of Open Access Journals (Sweden)

    Shaojiang Dong

    2014-01-01

    Full Text Available Predicting the degradation process of bearings before they reach the failure threshold is extremely important in industry. This paper proposes a novel method based on the support vector machine (SVM) and the Markov model to achieve this goal. Firstly, features are extracted by time-domain and time-frequency-domain methods. Because the extracted original features are high-dimensional and include superfluous information, the nonlinear multi-feature fusion technique LTSA is used to merge the features and reduce the dimension. Then, based on the extracted features, the SVM model is used to predict the bearing degradation process, with the Cao method used to determine the embedding dimension of the SVM model. After the bearing degradation process is predicted by the SVM model, the Markov model is used to improve the prediction accuracy. The proposed method was validated by two bearing run-to-failure experiments, and the results proved the effectiveness of the methodology.
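
    A minimal sketch of the SVM-plus-Markov idea: an SVR makes one-step-ahead predictions of a degradation indicator, and a Markov chain over discretized residual states supplies a correction term. The toy series, embedding dimension, and discretization below are illustrative assumptions.

```python
# Sketch: SVR one-step-ahead forecast of a degradation index, corrected by a Markov
# chain over discretized residual states (toy data; feature construction is assumed).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.05, 0.2, size=400))   # toy degradation indicator

d = 5  # embedding dimension (the paper selects this with the Cao method)
X = np.array([series[i:i + d] for i in range(len(series) - d)])
y = series[d:]

split = 300
svr = SVR(C=10.0).fit(X[:split], y[:split])
pred = svr.predict(X)
resid = y[:split] - pred[:split]

# Markov correction: discretize residuals into states, estimate transitions,
# then correct the prediction by the expected next-state residual.
edges = np.quantile(resid, [0.25, 0.5, 0.75])
states = np.digitize(resid, edges)
K = 4
T = np.ones((K, K))                                   # Laplace-smoothed counts
for s0, s1 in zip(states[:-1], states[1:]):
    T[s0, s1] += 1
T /= T.sum(axis=1, keepdims=True)
centers = np.array([resid[states == k].mean() for k in range(K)])

last_state = states[-1]
# One-step correction from the last observed state (a full implementation would
# update the state after each predicted step).
corrected = pred[split:] + T[last_state] @ centers
rmse = np.sqrt(np.mean((y[split:] - corrected) ** 2))
print(f"corrected one-step RMSE: {rmse:.3f}")
```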

  11. Monkey models for brain-machine interfaces: the need for maintaining diversity.

    Science.gov (United States)

    Nuyujukian, Paul; Fan, Joline M; Gilja, Vikash; Kalanithi, Paul S; Chestek, Cindy A; Shenoy, Krishna V

    2011-01-01

    Brain-machine interfaces (BMIs) aim to help disabled patients by translating neural signals from the brain into control signals for guiding prosthetic arms, computer cursors, and other assistive devices. Animal models are central to the development of these systems and have helped enable the successful translation of the first generation of BMIs. As we move toward next-generation systems, we face the question of which animal models will aid broader patient populations and achieve even higher performance, robustness, and functionality. We review here four general types of rhesus monkey models employed in BMI research, and describe two additional, complementary models. Given the physiological diversity of neurological injury and disease, we suggest a need to maintain the current diversity of animal models and to explore additional alternatives, as each mimic different aspects of injury or disease.

  12. Advanced induction machine model in phase coordinates for wind turbine applications

    DEFF Research Database (Denmark)

    Fajardo, L.A.; Iov, F.; Hansen, Anca Daniela

    2007-01-01

    In this paper an advanced phase-coordinates squirrel cage induction machine model with time-varying electrical parameters affected by magnetic saturation and rotor deep-bar effects is presented. The model uses standard data sheets for characterization of the electrical parameters; it is developed in C-code and interfaced with Matlab/Simulink through an S-Function. The investigation is conducted to study the ride-through capability of Squirrel Cage Induction Generators, and compares the behavior of the classical DQ0 model, the ABC/abc model in phase coordinates with constant parameters, and the proposed ABC/abc phase-coordinate model with varying parameters, in the presence of external faults. The results are promising for protection and control applications of fixed speed active stall controlled wind turbines. This new approach is useful to support control and planning of wind turbines.

  13. A Hybrid dasymetric and machine learning approach to high-resolution residential electricity consumption modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M [ORNL; Nagle, Nicholas N [ORNL; Piburn, Jesse O [ORNL; Stewart, Robert N [ORNL; McManamay, Ryan A [ORNL

    2017-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for detailed information regarding residential energy consumption patterns has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy consumption, the majority of techniques are highly dependent on region-specific data sources and often require building- or dwelling-level details that are not publicly available for many regions in the United States. Furthermore, many existing methods do not account for errors in input data sources and may not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more general hybrid approach to high-resolution residential electricity consumption modeling by merging a dasymetric model with a complementary machine learning algorithm. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and considers errors in input data sources.

  14. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    International Nuclear Information System (INIS)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.; Sugiura, K.

    2017-01-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.
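
    The comparison the authors describe can be sketched compactly: standardize the features, shuffle and split, train the three classifiers, and score each with the true skill statistic (TSS = true positive rate minus false positive rate). The synthetic feature matrix below stands in for the real AR feature database.

```python
# Sketch of the three-algorithm comparison scored by the true skill statistic;
# the synthetic, imbalanced data set is a placeholder for the AR feature database.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import confusion_matrix

X, y = make_classification(n_samples=3000, n_features=60, weights=[0.9, 0.1],
                           random_state=0)  # imbalanced: flares are rare events
X = StandardScaler().fit_transform(X)       # "standardizing the feature database"
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=True, random_state=0)

def tss(y_true, y_pred):
    """True skill statistic = TPR - FPR."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) - fp / (fp + tn)

for name, clf in [("SVM", SVC()),
                  ("k-NN", KNeighborsClassifier()),
                  ("ExtraTrees", ExtraTreesClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(f"{name:>10}: TSS = {tss(y_te, clf.predict(X_te)):.3f}")
```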

  15. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Science.gov (United States)

    Nishizuka, N.; Sugiura, K.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M.

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010-2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  16. Solar Flare Prediction Model with Three Machine-learning Algorithms using Ultraviolet Brightening and Vector Magnetograms

    Energy Technology Data Exchange (ETDEWEB)

    Nishizuka, N.; Kubo, Y.; Den, M.; Watari, S.; Ishii, M. [Applied Electromagnetic Research Institute, National Institute of Information and Communications Technology, 4-2-1, Nukui-Kitamachi, Koganei, Tokyo 184-8795 (Japan); Sugiura, K., E-mail: nishizuka.naoto@nict.go.jp [Advanced Speech Translation Research and Development Promotion Center, National Institute of Information and Communications Technology (Japan)

    2017-02-01

    We developed a flare prediction model using machine learning, which is optimized to predict the maximum class of flares occurring in the following 24 hr. Machine learning is used to devise algorithms that can learn from and make decisions on a huge amount of data. We used solar observation data during the period 2010–2015, such as vector magnetograms, ultraviolet (UV) emission, and soft X-ray emission taken by the Solar Dynamics Observatory and the Geostationary Operational Environmental Satellite. We detected active regions (ARs) from the full-disk magnetogram, from which ∼60 features were extracted with their time differentials, including magnetic neutral lines, the current helicity, the UV brightening, and the flare history. After standardizing the feature database, we fully shuffled and randomly separated it into two for training and testing. To investigate which algorithm is best for flare prediction, we compared three machine-learning algorithms: the support vector machine, k-nearest neighbors (k-NN), and extremely randomized trees. The prediction score, the true skill statistic, was higher than 0.9 with a fully shuffled data set, which is higher than that for human forecasts. It was found that k-NN has the highest performance among the three algorithms. The ranking of the feature importance showed that previous flare activity is most effective, followed by the length of magnetic neutral lines, the unsigned magnetic flux, the area of UV brightening, and the time differentials of features over 24 hr, all of which are strongly correlated with the flux emergence dynamics in an AR.

  17. Machine learning of atmospheric chemistry. Applications to a global chemistry transport model.

    Science.gov (United States)

    Evans, M. J.; Keller, C. A.

    2017-12-01

    Atmospheric chemistry is central to many environmental issues such as air pollution, climate change, and stratospheric ozone loss. Chemistry Transport Models (CTMs) are a central tool for understanding these issues, whether for research or for forecasting. These models split the atmosphere into a large number of grid-boxes and consider the emission of compounds into these boxes and their subsequent transport, deposition, and chemical processing. The chemistry is represented through a series of simultaneous ordinary differential equations, one for each compound. Given the difference in lifetimes between the chemical compounds (milliseconds for O(1D) to years for CH4), these equations are numerically stiff, and solving them accounts for a significant fraction of the computational burden of a CTM. We have investigated a machine learning approach to solving the differential equations instead of solving them numerically. From an annual simulation of the GEOS-Chem model we produced a training dataset consisting of the concentrations of compounds before and after the differential equations are solved, together with some key physical parameters for every grid-box and time-step. From this dataset we trained a machine learning algorithm (a random regression forest) to predict the concentrations of the compounds after the integration step, based on the concentrations and physical state at the beginning of the time step. We then included this algorithm back into the GEOS-Chem model, bypassing the need to integrate the chemistry. This machine learning approach shows many of the characteristics of the full simulation and has the potential to be substantially faster. There is a wide range of applications for such an approach - generating boundary conditions, use in air quality forecasts, chemical data assimilation systems, centennial-scale climate simulations, etc. We discuss our approach's speed and accuracy, and highlight some potential future directions for
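
    The emulation idea can be sketched as follows: treat the chemical integrator as a black-box map from pre-step state to post-step state and fit a regression forest to sampled input-output pairs. The two-species toy "mechanism" below is an illustrative stand-in for the stiff ODE system of a real CTM.

```python
# Sketch: emulate a chemistry integrator with a regression forest, learning the map
# from pre-step concentrations (plus a physical parameter) to post-step concentrations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def integrate_step(c, k):
    """Reference 'solver': simple linear production/loss over one time step."""
    c0, c1 = c
    return np.array([c0 + k * (c1 - c0), c1 - k * (c1 - c0)])

# Training set: (concentrations, rate parameter) before -> concentrations after
C0 = rng.uniform(0, 1, size=(5000, 2))
k = rng.uniform(0.05, 0.3, size=(5000, 1))
C1 = np.array([integrate_step(c, kk[0]) for c, kk in zip(C0, k)])

emulator = RandomForestRegressor(n_estimators=100, random_state=0)
emulator.fit(np.hstack([C0, k]), C1)

# Use the emulator in place of the solver for a test grid-box
c_test, k_test = np.array([0.8, 0.1]), 0.2
pred = emulator.predict([[*c_test, k_test]])[0]
print("emulated:", pred.round(4), " reference:", integrate_step(c_test, k_test).round(4))
```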

  18. Feature combination networks for the interpretation of statistical machine learning models: application to Ames mutagenicity.

    Science.gov (United States)

    Webb, Samuel J; Hanser, Thierry; Howlin, Brendan; Krause, Paul; Vessey, Jonathan D

    2014-03-25

    A new algorithm has been developed to enable the interpretation of black box models. The developed algorithm is agnostic to the learning algorithm and open to all structure-based descriptors such as fragments, keys, and hashed fingerprints. The algorithm has provided meaningful interpretation of Ames mutagenicity predictions from both random forest and support vector machine models built on a variety of structural fingerprints. A fragmentation algorithm is utilised to investigate the model's behaviour on specific substructures present in the query. An output is formulated summarising causes of activation and deactivation. The algorithm is able to identify multiple causes of activation or deactivation, in addition to identifying localised deactivations where the prediction for the query is active overall. No loss in performance is seen, as there is no change in the prediction; the interpretation is produced directly from the model's behaviour for the specific query. Models were built using multiple learning algorithms including support vector machines and random forests. The models were built on public Ames mutagenicity data, and a variety of fingerprint descriptors were used. These models produced good performance in both internal and external validation, with accuracies around 82%. The models were used to evaluate the interpretation algorithm. The interpretation revealed links that agree closely with understood mechanisms for Ames mutagenicity. This methodology allows for greater utilisation of the predictions made by black box models and can expedite further study based on the output of a (quantitative) structure-activity model. Additionally, the algorithm could be utilised for chemical dataset investigation and knowledge extraction/human SAR development.

  19. Prediction of effluent concentration in a wastewater treatment plant using machine learning models.

    Science.gov (United States)

    Guo, Hong; Jeong, Kwanho; Lim, Jiyeon; Jo, Jeongwon; Kim, Young Mo; Park, Jong-pyo; Kim, Joon Ha; Cho, Kyung Hwa

    2015-06-01

    With the growing amount of food waste, integrated food waste and wastewater treatment has been regarded as an efficient treatment approach. However, the load of food waste on a conventional waste treatment process may lead to high concentrations of total nitrogen (T-N) that affect effluent water quality. The objective of this study is to establish two machine learning models - artificial neural networks (ANNs) and support vector machines (SVMs) - to predict the 1-day-interval T-N concentration of effluent from a wastewater treatment plant in Ulsan, Korea. Daily water quality data and meteorological data were used, and the performance of both models was evaluated in terms of the coefficient of determination (R2), Nash-Sutcliffe efficiency (NSE), and relative efficiency criteria (drel). Additionally, Latin-Hypercube one-factor-at-a-time (LH-OAT) sampling and a pattern search algorithm were applied for sensitivity analysis and model parameter optimization, respectively. Results showed that both models could be effectively applied to the 1-day-interval prediction of effluent T-N concentration. The SVM model showed higher prediction accuracy in the training stage and similar results in the validation stage. However, the sensitivity analysis demonstrated that the ANN model was superior for 1-day-interval T-N concentration prediction in terms of the cause-and-effect relationship between T-N concentration and model input values for integrated food waste and wastewater treatment. This study suggests an efficient and robust nonlinear time-series modeling method for early prediction of the water quality of an integrated food waste and wastewater treatment process. Copyright © 2015. Published by Elsevier B.V.

  20. Integrating Machine Learning into a Crowdsourced Model for Earthquake-Induced Damage Assessment

    Science.gov (United States)

    Rebbapragada, Umaa; Oommen, Thomas

    2011-01-01

    On January 12th, 2010, a catastrophic magnitude 7.0 earthquake devastated the country of Haiti. In the aftermath of an earthquake, it is important to rapidly assess damaged areas in order to mobilize the appropriate resources. The Haiti damage assessment effort introduced a promising model that uses crowdsourcing to map damaged areas in freely available remotely-sensed data. This paper proposes the application of machine learning methods to improve this model. Specifically, we apply work on learning from multiple, imperfect experts to the assessment of volunteer reliability, and propose the use of image segmentation to automate the detection of damaged areas. We wrap both tasks in an active learning framework in order to shift volunteer effort from mapping a full catalog of images to the generation of high-quality training data. We hypothesize that the integration of machine learning into this model improves its reliability, maintains the speed of damage assessment, and allows the model to scale to higher data volumes.

  1. Machine learning of frustrated classical spin models. I. Principal component analysis

    Science.gov (United States)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model in frustrated triangular and union jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
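
    A compact sketch of the approach: generate spin configurations in an ordered and a disordered regime, embed each XY configuration as the (cos θ, sin θ) components of its spins, and inspect the leading principal components. The toy sampler below replaces a real Monte Carlo simulation.

```python
# Sketch of the PCA approach: feed raw spin configurations to principal component
# analysis and look for order in the leading components (toy XY-like data,
# ordered vs. disordered, replaces a real Monte Carlo simulation).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
N = 100  # spins per configuration

# Low-T: angles cluster near a random global direction; high-T: uniform angles
low_T = (rng.uniform(0, 2 * np.pi, size=(500, 1)) +
         rng.normal(scale=0.3, size=(500, N))) % (2 * np.pi)
high_T = rng.uniform(0, 2 * np.pi, size=(500, N))

def embed(angles):
    """Represent each XY configuration by the (cos, sin) components of every spin."""
    return np.hstack([np.cos(angles), np.sin(angles)])

X = np.vstack([embed(low_T), embed(high_T)])
proj = PCA(n_components=2).fit_transform(X)

# Ordered samples spread out along the leading components; disordered ones collapse
print("mean |PC1,PC2| low-T :", np.abs(proj[:500]).mean(axis=0).round(2))
print("mean |PC1,PC2| high-T:", np.abs(proj[500:]).mean(axis=0).round(2))
```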

  2. The applications of machine learning algorithms in the modeling of estrogen-like chemicals.

    Science.gov (United States)

    Liu, Huanxiang; Yao, Xiaojun; Gramatica, Paola

    2009-06-01

    Increasing concern is being shown by the scientific community, government regulators, and the public about endocrine-disrupting chemicals that, in the environment, are adversely affecting human and wildlife health through a variety of mechanisms, mainly estrogen receptor-mediated mechanisms of toxicity. Because of the large number of such chemicals in the environment, there is a great need for an effective means of rapidly assessing endocrine-disrupting activity in the toxicology assessment process. When faced with the challenging task of screening large libraries of molecules for biological activity, the benefits of computational predictive models based on quantitative structure-activity relationships to identify possible estrogens become immediately obvious. Recently, in order to improve the accuracy of prediction, some machine learning techniques were introduced to build more effective predictive models. In this review we will focus our attention on some recent advances in the use of these methods in modeling estrogen-like chemicals. The advantages and disadvantages of the machine learning algorithms used in solving this problem, the importance of the validation and performance assessment of the built models as well as their applicability domains will be discussed.

  3. Development of Predictive QSAR Models of 4-Thiazolidinones Antitrypanosomal Activity using Modern Machine Learning Algorithms.

    Science.gov (United States)

    Kryshchyshyn, Anna; Devinyak, Oleg; Kaminskyy, Danylo; Grellier, Philippe; Lesyk, Roman

    2017-11-14

    This paper presents novel QSAR models for the prediction of antitrypanosomal activity among thiazolidines and related heterocycles. The performance of four machine learning algorithms - Random Forest regression, stochastic gradient boosting, multivariate adaptive regression splines, and Gaussian process regression - has been studied in order to reach better levels of predictivity. The results for Random Forest and Gaussian process regression are comparable and outperform the other studied methods. Preliminary descriptor selection with the Boruta method improved the outcome of the machine learning methods. The two novel QSAR models developed with the Random Forest and Gaussian process regression algorithms have good predictive ability, which was proved by external evaluation on the test set, with corresponding Q²ext = 0.812 and Q²ext = 0.830. The obtained models can be used further for in silico screening of virtual libraries in the same chemical domain in order to find new antitrypanosomal agents. Thorough analysis of the descriptors' influence in the QSAR models and interpretation of their chemical meaning allows a number of structure-activity relationships to be highlighted. The presence of phenyl rings with electron-withdrawing atoms or groups in the para-position, an increased number of aromatic rings, high branching but short chains, high HOMO energy, and the introduction of a 1-substituted 2-indolyl fragment into the molecular structure have been recognized as prerequisites for trypanocidal activity. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
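
    A minimal sketch of the workflow described here - descriptor selection followed by Random Forest regression and external Q² validation - using an importance-based selector in place of Boruta and a synthetic descriptor matrix:

```python
# Sketch of the QSAR workflow: descriptor selection, random forest regression, and
# external validation via Q2_ext = 1 - SS_res / SS_tot on a held-out test set
# (descriptors are placeholders; the paper uses Boruta for the selection step).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=200, n_informative=15,
                       noise=5.0, random_state=0)   # stand-in descriptor matrix
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Importance-based descriptor selection (Boruta-like in spirit, not the real thing)
selector = SelectFromModel(RandomForestRegressor(n_estimators=200, random_state=0),
                           threshold="mean").fit(X_tr, y_tr)
X_tr_s, X_te_s = selector.transform(X_tr), selector.transform(X_te)

model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr_s, y_tr)
pred = model.predict(X_te_s)
q2_ext = 1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2)
print(f"kept {X_tr_s.shape[1]} of {X.shape[1]} descriptors, Q2_ext = {q2_ext:.3f}")
```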

  4. Machine Learning Approach for Predicting Wall Shear Distribution for Abdominal Aortic Aneurysm and Carotid Bifurcation Models.

    Science.gov (United States)

    Jordanski, Milos; Radovic, Milos; Milosevic, Zarko; Filipovic, Nenad; Obradovic, Zoran

    2018-03-01

    Computer simulations based on the finite element method represent powerful tools for modeling blood flow through arteries. However, due to its computational complexity, this approach may be inappropriate when results are needed quickly. In order to reduce computational time, in this paper, we proposed an alternative machine learning based approach for calculation of wall shear stress (WSS) distribution, which may play an important role in mechanisms related to initiation and development of atherosclerosis. In order to capture relationships between geometric parameters, blood density, dynamic viscosity and velocity, and WSS distribution of geometrically parameterized abdominal aortic aneurysm (AAA) and carotid bifurcation models, we proposed multivariate linear regression, multilayer perceptron neural network and Gaussian conditional random fields (GCRF). Results obtained in this paper show that machine learning approaches can successfully predict WSS distribution at different cardiac cycle time points. Even though all proposed methods showed high potential for WSS prediction, GCRF achieved the highest coefficient of determination (0.930-0.948 for AAA model and 0.946-0.954 for carotid bifurcation model) demonstrating benefits of accounting for spatial correlation. The proposed approach can be used as an alternative method for real time calculation of WSS distribution.

  5. Application of heuristic and machine-learning approach to engine model calibration

    Science.gov (United States)

    Cheng, Jie; Ryu, Kwang R.; Newman, C. E.; Davis, George C.

    1993-03-01

    Automation of engine model calibration procedures is a very challenging task because (1) the calibration process searches for a goal state in a huge, continuous state space, (2) calibration is often a lengthy and frustrating task because of complicated mutual interference among the target parameters, and (3) the calibration problem is heuristic by nature, and often heuristic knowledge for constraining a search cannot be easily acquired from domain experts. A combined heuristic and machine learning approach has, therefore, been adopted to improve the efficiency of model calibration. We developed an intelligent calibration program called ICALIB. It has been used on a daily basis for engine model applications, and has reduced the time required for model calibrations from many hours to a few minutes on average. In this paper, we describe the heuristic control strategies employed in ICALIB such as a hill-climbing search based on a state distance estimation function, incremental problem solution refinement by using a dynamic tolerance window, and calibration target parameter ordering for guiding the search. In addition, we present the application of a machine learning program called GID3* for automatic acquisition of heuristic rules for ordering target parameters.
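
    The heuristic strategy described above can be sketched in a few lines: hill-climb on a state distance function while shrinking a dynamic tolerance window, so that early steps explore and later steps refine. The objective and parameter vector below are illustrative, not the engine model itself.

```python
# Sketch of hill-climbing with a dynamic tolerance window, in the spirit of the
# heuristics described above (objective and targets are hypothetical placeholders).
import numpy as np

rng = np.random.default_rng(0)
target = np.array([0.7, -1.2, 0.4])          # calibration targets (hypothetical)

def distance(params):
    """State distance estimate between model response and calibration targets."""
    return np.linalg.norm(params - target)

params = np.zeros(3)
tol = 1.0                                    # dynamic tolerance window
for step in range(200):
    candidate = params + rng.normal(scale=tol, size=3)
    if distance(candidate) < distance(params):
        params = candidate                   # accept only improving moves
    tol = max(0.01, tol * 0.98)              # shrink the window for refinement

print("calibrated:", params.round(3), " residual:", round(distance(params), 4))
```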

  6. A Model-based Analysis of Impulsivity Using a Slot-Machine Gambling Paradigm

    Directory of Open Access Journals (Sweden)

    Saee Paliwal

    2014-07-01

    Full Text Available Impulsivity plays a key role in decision-making under uncertainty. It is a significant contributor to problem and pathological gambling. Standard assessments of impulsivity by questionnaires, however, have various limitations, partly because impulsivity is a broad, multi-faceted concept. What remains unclear is which of these facets contribute to shaping gambling behavior. In the present study, we investigated impulsivity as expressed in a gambling setting by applying computational modeling to data from 47 healthy male volunteers who played a realistic, virtual slot-machine gambling task. Behaviorally, we found that impulsivity, as measured independently by the 11th revision of the Barratt Impulsiveness Scale (BIS-11), correlated significantly with an aggregate read-out of the following gambling responses: bet increases, machine switches, casino switches, and double-ups. Using model comparison, we compared a set of hierarchical Bayesian belief-updating models, i.e. the Hierarchical Gaussian Filter (HGF) and Rescorla-Wagner reinforcement learning models, with regard to how well they explained different aspects of the behavioral data. We then examined the construct validity of our winning models with multiple regression, relating subject-specific model parameter estimates to the individual BIS-11 total scores. In the most predictive model (a three-level HGF), the two free parameters encoded uncertainty-dependent mechanisms of belief updates and significantly explained BIS-11 variance across subjects. Furthermore, in this model, decision noise was a function of trial-wise uncertainty about winning probability. Collectively, our results provide a proof of concept that hierarchical Bayesian models can characterize the decision-making mechanisms linked to impulsivity. These novel indices of gambling mechanisms unmasked during actual play may be useful for online prevention measures for at-risk players and future assessments of pathological gambling.

  7. Sugeno-Fuzzy Expert System Modeling for Quality Prediction of Non-Contact Machining Process

    Science.gov (United States)

    Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.

    2018-03-01

    Modeling can be categorised into four main domains: prediction, optimisation, estimation and calibration. In this paper, the Takagi-Sugeno-Kang (TSK) fuzzy logic method is examined as a prediction modelling method to investigate the taper quality of laser lathing, which seeks to replace traditional lathe machines with 3D laser lathing in order to achieve the desired cylindrical shape of stock materials. Three design parameters were selected: feed rate, cutting speed and depth of cut. A total of twenty-four experiments were conducted, with eight sequential runs replicated three times. The TSK fuzzy predictive model was found to achieve a 99% accuracy rate, which suggests that it is a suitable and practical method for the non-linear laser lathing process.

  8. A machine learning approach to the potential-field method for implicit modeling of geological structures

    Science.gov (United States)

    Gonçalves, Ítalo Gomes; Kumaira, Sissa; Guadagnin, Felipe

    2017-06-01

    Implicit modeling has experienced a rise in popularity over the last decade due to its advantages in terms of speed and reproducibility in comparison with manual digitization of geological structures. The potential-field method consists in interpolating a scalar function that indicates to which side of a geological boundary a given point belongs to, based on cokriging of point data and structural orientations. This work proposes a vector potential-field solution from a machine learning perspective, recasting the problem as multi-class classification, which alleviates some of the original method's assumptions. The potentials related to each geological class are interpreted in a compositional data framework. Variogram modeling is avoided through the use of maximum likelihood to train the model, and an uncertainty measure is introduced. The methodology was applied to the modeling of a sample dataset provided with the software Move™. The calculations were implemented in the R language and 3D visualizations were prepared with the rgl package.

  9. Research on Dynamic Modeling and Application of Kinetic Contact Interface in Machine Tool

    Directory of Open Access Journals (Sweden)

    Dan Xu

    2016-01-01

    Full Text Available A method combining theoretical analysis and experiment is presented for obtaining the equivalent dynamic parameters of a linear guideway in four detailed steps. The dynamic modeling of the linear guideway is studied synthetically through statics analysis, vibration model analysis, dynamic experiment, and parameter identification. Based on contact mechanics and elastic mechanics, the mathematical vibration model and the expressions for the basic mode frequency are deduced. Then, the equivalent stiffness and damping of the guideway are obtained by means of a single-degree-of-freedom mode fitting method. Moreover, the investigation is applied to a gantry-type machining center; comparison with the simulation model and experimental results validates both its availability and correctness.

  10. Modeling and prediction of human word search behavior in interactive machine translation

    Science.gov (United States)

    Ji, Duo; Yu, Bai; Ma, Bin; Ye, Na

    2017-12-01

    As a kind of computer-aided translation method, Interactive Machine Translation technology reduces the repetitive and mechanical operations of manual translation through a variety of methods, improving translation efficiency, and plays an important role in the practical application of translation work. In this paper, we regard users' frequent word-searching behavior during translation as the research object, and transform this behavior into a translation selection problem under the current translation. The paper presents a prediction model which makes comprehensive use of an alignment model, a translation model, and a language model of the word-searching behavior. It achieves highly accurate prediction of word-searching behavior and reduces the switching between mouse and keyboard operations in the users' translation process.

  11. Discriminative feature-rich models for syntax-based machine translation.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.

    2012-12-01

    This report describes the campus executive LDRD “Discriminative Feature-Rich Models for Syntax-Based Machine Translation,” which was an effort to foster a better relationship between Sandia and Carnegie Mellon University (CMU). The primary purpose of the LDRD was to fund the research of a promising graduate student at CMU; in this case, Kevin Gimpel was selected from the pool of candidates. This report gives a brief overview of Kevin Gimpel's research.

  12. The Model of Information Support for Management of Investment Attractiveness of Machine-Building Enterprises

    Directory of Open Access Journals (Sweden)

    Chernetska Olga V.

    2016-11-01

    Full Text Available The article discloses the content of the definition of “information support”, identifies basic approaches to the interpretation of this economic category. The main purpose of information support for management of enterprise investment attractiveness is determined. The key components of information support for management of enterprise investment attractiveness are studied. The main types of automated information systems for management of the investment attractiveness of enterprises are identified and characterized. The basic computer programs for assessing the level of investment attractiveness of enterprises are considered. A model of information support for management of investment attractiveness of machine-building enterprises is developed.

  13. Research on Error Modelling and Identification of 3 Axis NC Machine Tools Based on Cross Grid Encoder Measurement

    International Nuclear Information System (INIS)

    Du, Z C; Lv, C F; Hong, M S

    2006-01-01

    A new error modelling and identification method based on the cross grid encoder is proposed in this paper. In general, there are 21 error components in the geometric error of a 3 axis NC machine tool. However, according to our theoretical analysis, the squareness error among the guideways affects not only the translational error components but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the NC machine tool is deduced. This mapping relationship shows that the radial error of circular motion is a comprehensive function of all the error components of the link, worktable, sliding table, and main spindle block. To overcome the solution-singularity shortcoming of traditional error component identification methods, a new multi-step identification method using cross grid encoder measurement is proposed, based on the kinematic error model of the NC machine tool. Firstly, the 12 translational error components of the NC machine tool are measured and identified by the least squares method (LSM) while the machine tool performs linear motion in the three orthogonal planes: the XOY, XOZ, and YOZ planes. Secondly, the circular error tracks are measured while the machine tool performs circular motion in the same orthogonal planes using the cross grid encoder Heidenhain KGM 182, from which the 9 rotational errors are identified by LSM. Finally, experimental validation of the above modelling theory and identification method is carried out on the 3 axis CNC vertical machining centre Cincinnati 750 Arrow. All 21 error components have been successfully measured by this method. The research shows that the multi-step modelling and identification method is well suited for on-machine measurement.
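
    The least-squares identification step can be sketched generically: stack the encoder measurements into a linear system b = A x, where x holds the unknown error components, and solve by LSM; a full-rank sensitivity matrix is what avoids the solution-singularity problem mentioned above. The matrix below is a random placeholder for the real kinematic error model.

```python
# Sketch of LSM identification of error components: solve b = A x in the least-squares
# sense (A is a random placeholder for the machine's kinematic sensitivity matrix).
import numpy as np

rng = np.random.default_rng(0)
n_meas, n_comp = 120, 12        # many measurements, 12 translational components

A = rng.normal(size=(n_meas, n_comp))                  # sensitivity matrix (assumed)
x_true = rng.normal(size=n_comp)                       # unknown error components
b = A @ x_true + rng.normal(scale=0.01, size=n_meas)   # simulated encoder readings

x_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print("max identification error:", np.abs(x_hat - x_true).max().round(4))
print("matrix rank:", rank)     # full rank avoids the solution-singularity problem
```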

  14. Induction of labour at or near term for suspected fetal macrosomia.

    Science.gov (United States)

    Boulvain, Michel; Irion, Olivier; Dowswell, Therese; Thornton, Jim G

    2016-05-22

    popular with many women. In settings where obstetricians can be reasonably confident about their scan assessment of fetal weight, the advantages and disadvantages of induction at or near term for fetuses suspected of being macrosomic should be discussed with parents.Although some parents and doctors may feel the evidence already justifies induction, others may justifiably disagree. Further trials of induction shortly before term for suspected fetal macrosomia are needed. Such trials should concentrate on refining the optimum gestation of induction, and improving the accuracy of the diagnosis of macrosomia.

  15. Modelling Water Stress in a Shiraz Vineyard Using Hyperspectral Imaging and Machine Learning

    Directory of Open Access Journals (Sweden)

    Kyle Loggenberg

    2018-01-01

    Full Text Available The detection of water stress in vineyards plays an integral role in sustaining high-quality grapes and preventing devastating crop losses. Hyperspectral remote sensing technologies combined with machine learning provide a practical means for modelling vineyard water stress. In this study, we applied two ensemble learners, i.e., random forest (RF) and extreme gradient boosting (XGBoost), for discriminating stressed and non-stressed Shiraz vines using terrestrial hyperspectral imaging. Additionally, we evaluated the utility of a spectral subset of wavebands, derived using RF mean decrease in accuracy (MDA) and XGBoost gain. Our results show that both ensemble learners can effectively analyse the hyperspectral data. When using all wavebands (p = 176), RF produced a test accuracy of 83.3% (KHAT (kappa analysis) = 0.67), and XGBoost a test accuracy of 80.0% (KHAT = 0.6). Using the subset of wavebands (p = 18) produced slight increases in accuracy, ranging from 1.7% to 5.5%, for both RF and XGBoost. We further investigated the effect of smoothing the spectral data using the Savitzky-Golay filter. The results indicated that the Savitzky-Golay filter reduced model accuracies (by 0.7% to 3.3%). The results demonstrate the feasibility of terrestrial hyperspectral imagery and machine learning for creating a semi-automated framework for vineyard water stress modelling.

  16. Modeling PM2.5 Urban Pollution Using Machine Learning and Selected Meteorological Parameters

    Directory of Open Access Journals (Sweden)

    Jan Kleine Deters

    2017-01-01

    Full Text Available Outdoor air pollution causes millions of premature deaths annually, mostly due to anthropogenic fine particulate matter (PM2.5). Quito, the capital city of Ecuador, is no exception in exceeding healthy levels of pollution. In addition to the impact of urbanization, motorization, and rapid population growth, particulate pollution is modulated by meteorological factors and geophysical characteristics, which complicate the implementation of the most advanced weather forecast models. Thus, this paper proposes a machine learning approach, based on six years of meteorological and pollution data, to predict the concentrations of PM2.5 from wind (speed and direction) and precipitation levels. The results of the classification model show high reliability in classifying high (>25 µg/m3) and low (<10 µg/m3) versus moderate (10–25 µg/m3) concentrations of PM2.5. A regression analysis suggests better prediction of PM2.5 when the climatic conditions become more extreme (strong winds or high levels of precipitation). The high correlation between estimated and real data in a time series analysis during the wet season confirms this finding. The study demonstrates that the use of statistical models based on machine learning is relevant for predicting PM2.5 concentrations from meteorological data.
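
    The classification task described above - predicting a PM2.5 band from wind and precipitation - can be sketched with synthetic data standing in for Quito's six-year record (the generating rule below is purely illustrative):

```python
# Sketch of the classification task: predict the PM2.5 band (low / moderate / high)
# from wind speed, wind direction, and precipitation (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 5000
wind_speed = rng.gamma(2.0, 2.0, n)
wind_dir = rng.uniform(0, 360, n)
precip = rng.exponential(1.0, n)

# Toy generating rule: strong wind and rain disperse/wash out particulates
pm25 = 30 - 3 * wind_speed - 4 * precip + rng.normal(0, 4, n)
band = np.digitize(pm25, [10, 25])            # 0: <10, 1: 10-25, 2: >25 ug/m3

# Encode wind direction as sin/cos so 359 deg and 1 deg are close in feature space
X = np.column_stack([wind_speed, np.sin(np.radians(wind_dir)),
                     np.cos(np.radians(wind_dir)), precip])
X_tr, X_te, y_tr, y_te = train_test_split(X, band, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te),
                            target_names=["low", "moderate", "high"]))
```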

  17. Quantifying surgical complexity with machine learning: looking beyond patient factors to improve surgical models.

    Science.gov (United States)

    Van Esbroeck, Alexander; Rubinfeld, Ilan; Hall, Bruce; Syed, Zeeshan

    2014-11-01

    To investigate the use of machine learning to empirically determine the risk of individual surgical procedures and to improve surgical models with this information. American College of Surgeons National Surgical Quality Improvement Program (ACS NSQIP) data from 2005 to 2009 were used to train support vector machine (SVM) classifiers to learn the relationship between textual constructs in current procedural terminology (CPT) descriptions and mortality, morbidity, Clavien 4 complications, and surgical-site infections (SSI) within 30 days of surgery. The procedural risk scores produced by the SVM classifiers were validated on data from 2010 in univariate and multivariate analyses. The procedural risk scores produced by the SVM classifiers achieved moderate-to-high levels of discrimination in univariate analyses (area under receiver operating characteristic curve: 0.871 for mortality, 0.789 for morbidity, 0.791 for SSI, 0.845 for Clavien 4 complications). Addition of these scores also substantially improved multivariate models comprising patient factors and previously proposed correlates of procedural risk (net reclassification improvement and integrated discrimination improvement: 0.54 and 0.001 for mortality, 0.46 and 0.011 for morbidity, 0.68 and 0.022 for SSI, 0.44 and 0.001 for Clavien 4 complications; P < 0.05 for all). Machine learning of textual constructs in CPT descriptions can thus quantify the risk of individual procedures. This information can be measured in an entirely data-driven manner and substantially improves multifactorial models to predict postoperative complications. Copyright © 2014 Elsevier Inc. All rights reserved.
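
    As a rough illustration of this idea (not the NSQIP pipeline), a linear SVM can be trained on a bag-of-words representation of CPT-style description text, with its decision values serving as a continuous procedural risk score; the descriptions and labels below are invented placeholders.

        # Hedged sketch: learn a procedural risk score from CPT-style text with
        # a linear SVM and check univariate discrimination by AUC.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics import roc_auc_score
        from sklearn.svm import LinearSVC

        cpt_texts = [
            "excision of benign skin lesion",
            "open heart valve replacement",
            "laparoscopic appendectomy",
            "craniotomy for tumor resection",
        ]
        outcome = [0, 1, 0, 1]  # placeholder 30-day mortality labels

        X = TfidfVectorizer().fit_transform(cpt_texts)
        svm = LinearSVC().fit(X, outcome)
        risk_score = svm.decision_function(X)      # continuous risk per procedure
        print(roc_auc_score(outcome, risk_score))  # univariate discrimination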

  18. Machine Learning Models of Post-Intubation Hypoxia During General Anesthesia.

    Science.gov (United States)

    Sippl, Philipp; Ganslandt, Thomas; Prokosch, Hans-Ulrich; Muenster, Tino; Toddenroth, Dennis

    2017-01-01

    Fine-meshed perioperative measurements offer enormous potential for automatically investigating clinical complications during general anesthesia. In this study, we employed multiple machine learning methods to model perioperative hypoxia and compared their respective capabilities. After exporting and visualizing 620 series of perioperative vital signs, we had ten anesthesiologists annotate the subjective presence and severity of temporary post-intubation oxygen desaturation. We then applied specific clustering and prediction methods to the acquired annotations and evaluated their performance against the inter-rater agreement between experts. When reproducing the expert annotations, the sensitivity and specificity of multi-layer neural networks substantially outperformed clustering and simpler threshold-based methods. The achieved performance of our best automated hypoxia models thereby approximately equaled the observed agreement between different medical experts. Furthermore, we deployed our classification methods for processing unlabeled inputs to estimate the incidence of hypoxic episodes in another sizeable patient cohort, which attests to the feasibility of using the approach on a larger scale. We conclude that our machine learning models could be instrumental for computerized observational studies of the clinical determinants of post-intubation oxygen deficiency. Future research might also investigate potential benefits of more advanced preprocessing approaches such as automated feature learning.

  19. Predictive Models for Different Roughness Parameters During Machining Process of Peek Composites Using Response Surface Methodology

    Directory of Open Access Journals (Sweden)

    Mata-Cabrera Francisco

    2013-10-01

    Full Text Available Polyetheretherketone (PEEK) composite belongs to a group of high-performance thermoplastic polymers and is widely used in structural components. To improve the mechanical and tribological properties, short fibers are added to the material as reinforcement. Due to its functional properties and potential applications, it is important to investigate the machinability of non-reinforced PEEK (PEEK), PEEK reinforced with 30% carbon fibers (PEEK CF30), and PEEK reinforced with 30% glass fibers (PEEK GF30), in order to determine the optimal conditions for the manufacture of parts. The present study establishes the relationship between the cutting conditions (cutting speed and feed rate) and the roughness parameters (Ra, Rt, Rq, Rp) by developing second-order mathematical models. The experiments were planned as per a full factorial design of experiments, and an analysis of variance was performed to check the adequacy of the models. These results confirm the adequacy of the derived models for predicting the roughness parameters within the ranges of the parameters investigated during the experiments. The experimental results show that the feed rate is the most influential cutting parameter, and that glass fiber reinforcement produces worse machinability.
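
    Second-order response-surface models like the ones described above can be fitted by ordinary least squares. The sketch below (invented data, not the study's measurements) fits Ra = b0 + b1 v + b2 f + b11 v^2 + b22 f^2 + b12 v f for cutting speed v and feed rate f.

        # Minimal second-order response-surface fit for one roughness parameter.
        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures

        v = np.array([100, 100, 200, 200, 300, 300], dtype=float)  # cutting speed
        f = np.array([0.05, 0.15, 0.05, 0.15, 0.05, 0.15])         # feed rate
        Ra = np.array([0.8, 1.9, 0.7, 1.6, 0.9, 1.8])              # placeholder Ra

        X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(np.c_[v, f])
        model = LinearRegression().fit(X, Ra)
        print(model.intercept_, model.coef_)  # b0 and [b1, b2, b11, b12, b22]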

  20. Use of models and mockups in verifying man-machine interfaces

    International Nuclear Information System (INIS)

    Seminara, J.L.

    1985-01-01

    The objective of Human Factors Engineering is to tailor the design of facilities and equipment systems to match the capabilities and limitations of the personnel who will operate and maintain the system. This optimization of the man-machine interface is undertaken to enhance the prospects for safe, reliable, timely, and error-free human performance in meeting system objectives. To ensure the eventual success of a complex man-machine system it is important to systematically and progressively test and verify the adequacy of man-machine interfaces from initial design concepts to system operation. Human factors specialists employ a variety of methods to evaluate the quality of the human-system interface. These methods include: (1) reviews of two-dimensional drawings using appropriately scaled transparent overlays of personnel spanning the anthropometric range, considering clothing and protective gear encumbrances; (2) use of articulated, scaled, plastic templates or manikins that are overlaid on equipment or facility drawings; (3) development of computerized manikins in computer-aided design approaches; (4) use of three-dimensional scale models to better conceptualize work stations, control rooms or maintenance facilities; (5) full- or half-scale mockups of system components to evaluate operator/maintainer interfaces; (6) part- or full-task dynamic simulation of operator or maintainer tasks and interactive system responses; (7) laboratory and field research to establish human performance capabilities with alternative system design concepts or configurations. Of the design verification methods listed above, this paper will only consider the use of models and mockups in the design process

  1. Estimation of the applicability domain of kernel-based machine learning models for virtual screening

    Directory of Open Access Journals (Sweden)

    Fechner Nikolas

    2010-03-01

    Full Text Available Abstract Background The virtual screening of large compound databases is an important application of structure–activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. Results We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model to the respective compound is quantitatively described by a score obtained from an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemical space in which the model gives reliable predictions from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening

  2. Estimation of the applicability domain of kernel-based machine learning models for virtual screening.

    Science.gov (United States)

    Fechner, Nikolas; Jahn, Andreas; Hinselmann, Georg; Zell, Andreas

    2010-03-11

    The virtual screening of large compound databases is an important application of structure–activity relationship models. Due to the high structural diversity of these data sets, it is impossible for machine learning based QSAR models, which rely on a specific training set, to give reliable results for all compounds. Thus, it is important to consider the subset of the chemical space in which the model is applicable. The approaches to this problem that have been published so far mostly use vectorial descriptor representations to define this domain of applicability of the model. Unfortunately, these cannot be extended easily to structured kernel-based machine learning models. For this reason, we propose three approaches to estimate the domain of applicability of a kernel-based QSAR model. We evaluated three kernel-based applicability domain estimations using three different structured kernels on three virtual screening tasks. Each experiment consisted of the training of a kernel-based QSAR model using support vector regression and the ranking of a disjoint screening data set according to the predicted activity. For each prediction, the applicability of the model to the respective compound is quantitatively described by a score obtained from an applicability domain formulation. The suitability of the applicability domain estimation is evaluated by comparing the model performance on the subsets of the screening data sets obtained by different thresholds for the applicability scores. This comparison indicates that it is possible to separate the part of the chemical space in which the model gives reliable predictions from the part consisting of structures too dissimilar to the training set to apply the model successfully. A closer inspection reveals that the virtual screening performance of the model is considerably improved if half of the molecules, those with the lowest applicability scores, are omitted from the screening. The proposed applicability domain formulations
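
    The threshold-based evaluation described in both records reduces to a simple filtering loop. The sketch below (all arrays are invented placeholders) ranks a screening set by predicted activity and compares the hit rate with and without the least-applicable half of the compounds.

        # Sketch: evaluate ranking quality at different applicability thresholds.
        import numpy as np

        rng = np.random.default_rng(1)
        pred_activity = rng.random(1000)      # model predictions on screening set
        ad_score = rng.random(1000)           # applicability-domain score
        is_active = rng.random(1000) < 0.05   # placeholder ground truth

        for keep_fraction in (1.0, 0.5):      # 0.5 omits the least applicable half
            thresh = np.quantile(ad_score, 1.0 - keep_fraction)
            mask = ad_score >= thresh
            order = np.argsort(-pred_activity[mask])
            top100 = is_active[mask][order][:100]
            print(keep_fraction, top100.mean())  # hit rate among top-ranked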

  3. Glucose Oxidase Biosensor Modeling and Predictors Optimization by Machine Learning Methods

    Directory of Open Access Journals (Sweden)

    Felix F. Gonzalez-Navarro

    2016-10-01

    Full Text Available Biosensors are small analytical devices incorporating a biological recognition element and a physico-chemical transducer to convert a biological signal into an electrical reading. Nowadays, their technological appeal resides in their fast performance, high sensitivity and continuous measuring capabilities; however, a full understanding is still the subject of research. This paper aims to contribute to this growing field of biotechnology, with a focus on Glucose-Oxidase Biosensor (GOB) modeling through statistical learning methods from a regression perspective. We model the amperometric response of a GOB under different operating conditions, such as temperature, benzoquinone concentration, pH and glucose concentration, by means of several machine learning algorithms. Since the sensitivity of a GOB response is strongly related to these variables, their interactions should be optimized to maximize the output signal, for which a genetic algorithm and simulated annealing are used. We report a model that shows a good generalization error and is consistent with the optimization.
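
    A minimal version of the fit-then-optimize loop might look as follows (all data, bounds and the response function are invented; scipy's dual_annealing stands in for the simulated annealing step):

        # Hedged sketch: regress biosensor output on operating conditions, then
        # search for the response-maximizing condition by simulated annealing.
        import numpy as np
        from scipy.optimize import dual_annealing
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(2)
        lo, hi = [20, 0.1, 5.0, 1.0], [45, 2.0, 8.0, 30.0]
        X = rng.uniform(lo, hi, size=(300, 4))  # temp, benzoquinone, pH, glucose
        y = -(X[:, 0] - 37)**2 + 50*X[:, 3] + rng.normal(0, 5, 300)  # toy response

        model = RandomForestRegressor(random_state=0).fit(X, y)

        def objective(x):                        # negate: dual_annealing minimizes
            return -model.predict(x.reshape(1, -1))[0]

        res = dual_annealing(objective, bounds=list(zip(lo, hi)), seed=0)
        print(res.x)  # operating point predicted to maximize the signal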

  4. [Study on predicting model for acute hypotensive episodes in ICU based on support vector machine].

    Science.gov (United States)

    Lai, Lijuan; Wang, Zhigang; Wu, Xiaoming; Xiong, Dongsheng

    2011-06-01

    The occurrence of acute hypotensive episodes (AHE) in intensive care units (ICU) seriously endangers the lives of patients, and treatment mainly depends on the expert experience of doctors. In this paper, a model for predicting the occurrence of AHE in the ICU has been developed using the theory of medical informatics. We analyzed the trends and characteristics of the mean arterial blood pressure (MAP) of patients who suffered AHE and those who did not, and extracted the median, mean and other statistical parameters for learning and training based on support vector machines (SVM), then developed a predicting model. On this basis, we also compared models built with different kernel functions. Experiments demonstrated that this approach performed well on classification and prediction, which could help forecast the occurrence of AHE.
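
    The feature-extraction-plus-SVM recipe sketched above is short in code as well; the MAP windows and labels below are placeholders, and the kernel loop mirrors the study's comparison of kernel functions.

        # Illustrative sketch: summary statistics of MAP windows -> SVM classifier.
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        map_windows = rng.normal(85, 10, size=(120, 60))      # 120 patients x 60 samples
        labels = (map_windows.mean(axis=1) < 85).astype(int)  # placeholder AHE labels

        features = np.c_[np.median(map_windows, axis=1),
                         map_windows.mean(axis=1),
                         map_windows.std(axis=1)]
        for kernel in ("rbf", "linear", "poly"):  # compare kernel functions
            print(kernel, cross_val_score(SVC(kernel=kernel), features, labels).mean())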

  5. Modelling and simulation for table tennis referee regulation based on finite state machine.

    Science.gov (United States)

    Cui, Jianjiang; Liu, Zixuan; Xu, Long

    2017-10-01

    As referees' decisions are made manually in traditional table tennis matches, many factors in a match, such as fatigue and subjective tendency, may lead to unjust decisions. Based on finite state machines (FSM), this paper presents a model for table tennis referee regulation to substitute for manual decisions. In this model, the trajectory of the ball is recorded through a binocular visual system, while the complete rules extracted from the International Table Tennis Federation (ITTF) rules are described as an FSM. The final decision for the competition is made based on expert system theory. Simulation results show that the proposed model has high accuracy and can be generalised to other similar games such as badminton, volleyball, etc.
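
    The FSM formulation can be pictured as a transition table from (state, trajectory event) pairs to new states. The toy sketch below greatly simplifies the ITTF rules; the states and events are illustrative only.

        # Toy finite-state machine for rally adjudication (rules simplified).
        RALLY_FSM = {
            ("serve", "ball_hits_receiver_court"): "receiver_to_play",
            ("serve", "ball_misses_table"): "point_receiver",
            ("receiver_to_play", "return_hits_server_court"): "server_to_play",
            ("receiver_to_play", "ball_misses_table"): "point_server",
            ("server_to_play", "return_hits_receiver_court"): "receiver_to_play",
            ("server_to_play", "ball_misses_table"): "point_receiver",
        }

        def referee(events, state="serve"):
            """Consume trajectory-derived events; return the point decision."""
            for ev in events:
                state = RALLY_FSM[(state, ev)]
                if state.startswith("point_"):
                    return state
            return state

        print(referee(["ball_hits_receiver_court", "ball_misses_table"]))
        # -> point_server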

  6. Modeling, Control and Analyze of Multi-Machine Drive Systems using Bond Graph Technique

    Directory of Open Access Journals (Sweden)

    J. Belhadj

    2006-03-01

    Full Text Available In this paper, a system-viewpoint method is investigated to study and analyze complex systems using the Bond Graph technique. These systems are multi-machine, multi-inverter systems based on the Induction Machine (IM), widely used in industries such as rolling mills, textiles, and railway traction. These systems span multiple domains and time scales and present very strong internal and external couplings, with non-linearity characterized by a high model order. A classical study with analytic models is difficult to manipulate and is limited to certain performance aspects. In this study, a "systemic approach" is presented to design these kinds of systems, using an energetic representation based on the Bond Graph formalism. Three types of multi-machine systems are studied with their control strategies. The modeling is carried out with Bond Graphs, and the results are discussed to show the performance of this methodology

  7. Extended Park's transformation for 2×3-phase synchronous machine and converter phasor model with representation of AC harmonics

    DEFF Research Database (Denmark)

    Knudsen, Hans

    1995-01-01

    in the stator. A consistent method is developed to determine model parameters from standard machine data. A phasor model of the line commutated converter is presented. The converter model includes not only the fundamental frequency, but also any chosen number of harmonics without a representation of the single...

  8. Dynamic temperature modeling of an SOFC using least squares support vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ying-Wei; Li, Jun; Cao, Guang-Yi; Tu, Heng-Yong [Institute of Fuel Cell, Shanghai Jiao Tong University, Shanghai 200240 (China); Li, Jian; Yang, Jie [School of Materials Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China)

    2008-05-01

    Cell temperature control plays a crucial role in SOFC operation. In order to design effective temperature control strategies by model-based control methods, a dynamic temperature model of an SOFC is presented in this paper using least squares support vector machines (LS-SVMs). The nonlinear temperature dynamics of the SOFC are represented by a nonlinear autoregressive with exogenous inputs (NARX) model that is implemented using an LS-SVM regression model. Issues concerning the development of the LS-SVM temperature model are discussed in detail, including variable selection, training set construction and tuning of the LS-SVM parameters (usually referred to as hyperparameters). Comprehensive validation tests demonstrate that the developed LS-SVM model is sufficiently accurate to be used independently from the SOFC process, emulating its temperature response from process input information alone over a relatively wide operating range. The strong capability of the LS-SVM temperature model benefits not only from the modeling method itself but also from the approaches of constructing the training set and tuning the hyperparameters automatically with a genetic algorithm (GA). The proposed LS-SVM temperature model can be conveniently employed to design temperature control strategies for the SOFC. (author)
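
    A NARX model of the kind described predicts the next temperature from lagged temperatures and inputs. In the sketch below, scikit-learn's KernelRidge is used as a close stand-in for LS-SVM regression, and a toy first-order process stands in for the SOFC data.

        # Sketch: NARX one-step-ahead temperature prediction with a kernel model.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge  # stand-in for LS-SVM

        rng = np.random.default_rng(4)
        u = rng.random(500)          # process input (placeholder)
        T = np.zeros(500)
        for k in range(1, 500):      # toy first-order dynamics, not an SOFC model
            T[k] = 0.95*T[k-1] + 0.5*u[k-1] + rng.normal(0, 0.01)

        X = np.c_[T[1:-1], T[:-2], u[1:-1]]  # regressors [T(k-1), T(k-2), u(k-1)]
        y = T[2:]
        model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X[:400], y[:400])
        print(np.abs(model.predict(X[400:]) - y[400:]).max())  # held-out error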

  9. Comprehensive Modeling of U-Tube Steam Generators Using Extreme Learning Machines

    Science.gov (United States)

    Beyhan, Selami; Kavaklioglu, Kadir

    2015-10-01

    This paper proposes artificial neural network and fuzzy system-based extreme learning machines (ELM) for offline and online modeling of U-tube steam generators (UTSG). The water level of UTSG systems is predicted in a one-step-ahead fashion using a nonlinear autoregressive with exogenous input (NARX) topology. Modeling data are generated using a well-known and widely accepted dynamic model reported in the literature. Model performance is analyzed with different numbers of neurons for the neural network and different numbers of rules for the fuzzy system. UTSG models are built at different reactor power levels as well as over the full range that corresponds to all reactor operating powers. A quantitative comparison of the models is made using the root-mean-squared error (RMSE) and the minimum-descriptive-length (MDL) criteria. Furthermore, conventional back-propagation-based neural and fuzzy models are also designed in order to compare ELMs to classical artificial models. The advantages and disadvantages of the designed models are discussed.
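
    The defining ELM trick is that the hidden-layer weights are drawn at random and only the output weights are solved, in closed form, by least squares. A bare-bones regression version (synthetic data standing in for UTSG records):

        # Bare-bones extreme learning machine: random hidden layer + one LS solve.
        import numpy as np

        rng = np.random.default_rng(5)
        X = rng.random((600, 3))                     # NARX-style regressors
        y = np.sin(X @ np.array([3.0, 2.0, 1.0]))    # placeholder target

        n_hidden = 50
        W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (fixed)
        b = rng.normal(size=n_hidden)                # random biases (fixed)
        H = np.tanh(X @ W + b)                       # hidden-layer activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None) # output weights

        print(np.tanh(X[:5] @ W + b) @ beta, y[:5])  # predictions vs. targets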

  10. MIP Models and Hybrid Algorithms for Simultaneous Job Splitting and Scheduling on Unrelated Parallel Machines

    Science.gov (United States)

    Ozmutlu, H. Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is the introduction of novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulties when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms. PMID:24977204
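
    Random-key encodings of this kind decode a real-valued chromosome into a discrete schedule. One common decoding (illustrative, not necessarily the authors' exact scheme) lets the integer part of each key choose a machine and the fractional part order the jobs on it:

        # Illustrative random-key decoding for unrelated parallel machines.
        import numpy as np

        rng = np.random.default_rng(6)
        n_jobs, n_machines = 8, 3
        keys = rng.uniform(0, n_machines, n_jobs)  # one random key per job

        assignment = keys.astype(int)              # integer part -> machine
        for m in range(n_machines):
            jobs = np.where(assignment == m)[0]
            order = jobs[np.argsort(keys[jobs] - assignment[jobs])]  # frac part
            print(f"machine {m}: job sequence {order.tolist()}")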

  11. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    Full Text Available The representation of land use change (LUC) is often achieved by using data-driven methods that include machine learning (ML) techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT), Neural Networks (NN), and Support Vector Machines (SVM), for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied to the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
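
    The two attribute-selection steps mentioned above, information gain ranking and recursive attribute elimination, are both available off the shelf; the sketch below applies them to a placeholder attribute matrix.

        # Sketch: information-gain ranking and recursive feature elimination.
        import numpy as np
        from sklearn.feature_selection import RFE, mutual_info_classif
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(7)
        X = rng.random((300, 10))                  # placeholder geospatial attributes
        y = (X[:, 0] + X[:, 3] > 1.0).astype(int)  # placeholder land-use class

        # Mutual information serves as the information-gain measure here
        print(np.argsort(-mutual_info_classif(X, y, random_state=0)))
        rfe = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=4)
        print(np.where(rfe.fit(X, y).support_)[0])  # attributes retained by RFE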

  12. MIP models and hybrid algorithms for simultaneous job splitting and scheduling on unrelated parallel machines.

    Science.gov (United States)

    Eroglu, Duygu Yilmaz; Ozmutlu, H Cenk

    2014-01-01

    We developed mixed integer programming (MIP) models and hybrid genetic-local search algorithms for the scheduling problem of unrelated parallel machines with job-sequence- and machine-dependent setup times and with the job splitting property. The first contribution of this paper is the introduction of novel algorithms that perform splitting and scheduling simultaneously with a variable number of subjobs. We proposed a simple chromosome structure constituted by random key numbers in the hybrid genetic-local search algorithm (GAspLA). Random key numbers are used frequently in genetic algorithms, but they create additional difficulties when hybrid factors in local search are implemented. We developed algorithms that adapt the results of local search into the genetic algorithm with a minimum relocation operation of the genes' random key numbers. This is the second contribution of the paper. The third contribution of this paper is three new MIP models that perform splitting and scheduling simultaneously. The fourth contribution of this paper is the implementation of GAspLAMIP. This implementation lets us verify the optimality of GAspLA for the studied combinations. The proposed methods are tested on a set of problems taken from the literature, and the results validate the effectiveness of the proposed algorithms.

  13. Machine tool

    International Nuclear Information System (INIS)

    Kang, Myeong Sun

    1981-01-01

    This book covers machine tools, including the cutting process and machining by cutting, the theory of cutting such as tool angles and chip formation, cutting tools such as milling cutters and drills, and a summary and introduction of the following machines and elements: spindle drives and feed drives, pivots and pivot bearings, frames, guideways and tables, drilling machines, boring machines, shapers and planers, milling machines, and machine tools for precision finishing such as lapping machines, super-finishing machines and gear cutters.

  14. Fishery landing forecasting using EMD-based least square support vector machine models

    Science.gov (United States)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and the least squares support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. This hybrid is formulated specifically for modeling fishery landings, whose highly nonlinear, non-stationary and seasonal time series can hardly be properly modelled and accurately forecasted by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landings is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.
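
    The decompose-model-aggregate loop can be sketched compactly. The example below assumes the third-party PyEMD package (distributed as "EMD-signal") for the decomposition and uses scikit-learn's SVR as a stand-in for the LSSVM; the monthly series is synthetic.

        # Hedged sketch of EMD + per-component SVR forecasting with aggregation.
        import numpy as np
        from PyEMD import EMD             # assumed dependency ("EMD-signal")
        from sklearn.svm import SVR       # stand-in for LSSVM

        t = np.arange(240, dtype=float)
        series = 50 + 10*np.sin(2*np.pi*t/12) + 0.1*t \
                 + np.random.default_rng(8).normal(0, 1, 240)

        imfs = EMD().emd(series)          # sub-series; their sum is the signal
        lag, horizon = 12, 12
        forecast = np.zeros(horizon)
        for comp in imfs:
            X = np.array([comp[i:i+lag] for i in range(len(comp) - lag)])
            model = SVR().fit(X, comp[lag:])
            window = comp[-lag:].copy()
            for h in range(horizon):      # recursive one-step forecasting
                nxt = model.predict(window.reshape(1, -1))[0]
                forecast[h] += nxt        # aggregate across sub-series
                window = np.r_[window[1:], nxt]
        print(forecast)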

  15. New Approach of Frictional Behaviour for Modelling of High Speed Machining

    Science.gov (United States)

    Watremez, M.; Dubar, L.; Brocail, J.

    2011-05-01

    Numerical approaches to high-speed machining are necessary to increase productivity and to optimise tool wear and residual stresses. In order to apply such approaches, the rheological behaviour of the antagonists and the friction model of the interfaces have to be correctly determined. The existing numerical approaches used with current friction models do not generate good correlations of the process variables, such as the cutting forces or the tool-chip contact length. Recent studies [4-6] show the influence of the friction model on the numerical results. This paper proposes a new approach for characterizing friction behaviour at the tool-chip interface in the zone near the cutting edge. The study is conducted with AISI 1045 steel and an uncoated carbide tool. An experimental device is designed to simulate the friction behaviour at the tool-chip interface. During this upsetting-sliding test, an indenter rubs against a specimen at a constant speed, generating a residual friction track. The thermo-mechanical parameters of the machining process are first considered as the characteristic contact conditions to be reproduced on the testing stand. The contact pressure and the interfacial temperature are assessed with a finite element model of high-speed machining. The contactor penetration and the specimen temperature are the parameters to be determined in order to perform tests in concordance with the characteristic contact conditions. The ratio between the tangential and the normal force is defined as a friction index. Contact pressure and friction coefficient are determined from the test's numerical model, and an iterative method is used to determine a Coulomb coefficient by minimizing the differences between the experimental and numerical forces. Several tests are then performed to provide experimental data, and these data are used to define the friction coefficient as a function of the contact pressure, the sliding velocity and the interfacial temperature by a new

  16. Limits, modeling and design of high-speed permanent magnet machines

    NARCIS (Netherlands)

    Borisavljevic, A.

    2011-01-01

    There is a growing number of applications that require fast-rotating machines; motivation for this thesis comes from a project in which downsized spindles for micro-machining have been researched (TU Delft Microfactory project). The thesis focuses on analysis and design of high-speed PM machines and

  17. Thermal Error Modeling Method with the Jamming of Temperature-Sensitive Points' Volatility on CNC Machine Tools

    Science.gov (United States)

    MIAO, Enming; LIU, Yi; XU, Jianguo; LIU, Hui

    2017-05-01

    To address the lack of robustness of thermal error compensation models for CNC machine tools, the mechanism for improving the models' robustness is studied using the Leaderway-V450 machining center as the test object. Analysis of actual spindle air-cutting experimental data on the Leaderway-V450 machine shows that the temperature-sensitive points used for modeling are volatile, and that this volatility directly leads to large changes in the degree of collinearity among the modeling variables. Thus, the forecasting accuracy of a multivariate regression model is severely affected and its forecasting robustness becomes poor. To overcome this effect, a modeling method that establishes the thermal error model with a single temperature variable, robust to the volatility of the temperature-sensitive points, is put forward. Based on actual thermal error data measured in different seasons, it is shown that the single-temperature-variable model can reduce the loss of forecasting accuracy resulting from the volatility of the temperature-sensitive points; in particular, for the prediction of cross-quarter data, the forecasting accuracy improves by about 5 μm or more. The robustness of the thermal error models is thereby improved, which can provide a reference for selecting the modeling variable in thermal error compensation applications for CNC machine tools.
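
    Stripped to its core, the single-temperature-variable model is a univariate regression of thermal error on one robustly chosen sensor reading, which sidesteps the collinearity problem entirely. A minimal sketch with invented numbers:

        # Minimal single-temperature-variable thermal error model (placeholder data).
        import numpy as np

        temp = np.array([20.1, 22.4, 25.0, 27.8, 30.5, 33.2])  # key sensor (deg C)
        error = np.array([1.0, 3.1, 5.8, 8.6, 11.2, 14.0])     # thermal error (um)

        slope, intercept = np.polyfit(temp, error, 1)           # first-order fit
        residual = np.abs(slope*temp + intercept - error)
        print(slope, intercept, residual.max())                 # model + worst error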

  18. Mathematically modelling the power requirement for a vertical shaft mowing machine

    Directory of Open Access Journals (Sweden)

    Jorge Simón Pérez de Corcho Fuentes

    2008-09-01

    Full Text Available This work describes a mathematical model for determining the power demand of a vertical shaft mowing machine, particularly taking into account the influence of speed on cutting power, which differs from that of other mower models. The influence of the apparatus' rotation and translation speeds on power demand was simulated. The results showed that no changes in cutting power were produced by varying the knives' angular speed (if translation speed was constant), while cutting power increased when translation speed was increased. Variations in angular speed, however, influenced other parameters determining total power demand. Determining this vertical shaft mower's cutting pattern led to good crop stubble quality at the mower's lower rotation speed, hence reducing total energy requirements.

  19. Software model of a machine vision system based on the common house fly.

    Science.gov (United States)

    Madsen, Robert; Barrett, Steven; Wilcox, Michael

    2005-01-01

    The vision system of the common house fly has many properties, such as hyperacuity and parallel structure, which would be advantageous in a machine vision system. A software model has been developed which is ultimately intended to be a tool to guide the design of an analog real-time vision system. The model starts by laying out cartridges over an image. The cartridges are analogous to the ommatidia of the fly's eye and each contains seven photoreceptors with a Gaussian profile. The spacing between photoreceptors is variable, providing for more or less detail as needed. The cartridges provide information on what type of features they see, and neighboring cartridges share information to construct a feature map.

  20. A Tractable Model of the LTE Access Reservation Procedure for Machine-Type Communications

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Min Kim, Dong; Madueño, Germán Corrales

    2015-01-01

    A canonical scenario in Machine-Type Communications (MTC) is the one featuring a large number of devices, each of them with sporadic traffic. Hence, the number of served devices in a single LTE cell is not determined by the available aggregate rate, but rather by the limitations of the LTE access reservation protocol. Specifically, the limited number of contention preambles and the limited amount of uplink grants per random access response are crucial to consider when dimensioning LTE networks for MTC. We propose a low-complexity model that encompasses these two limitations and allows us to evaluate ... on the preamble collisions. A comparison with the simulated LTE access reservation procedure that follows the 3GPP specifications confirms that our model provides an accurate estimation of the system outage event and the number of supported MTC devices.
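
    A back-of-the-envelope version of the preamble-collision limitation (a simplification, not the paper's model): with m contention preambles and n simultaneously contending devices, a given device's attempt survives contention only if no other device picks the same preamble.

        # Probability that a device's preamble is chosen by no other device.
        def singleton_probability(n: int, m: int) -> float:
            return (1 - 1/m) ** (n - 1)

        m = 54  # typical number of contention preambles in an LTE cell
        for n in (10, 50, 100, 200):
            print(n, round(singleton_probability(n, m), 3))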

  1. Simulation modeling and tracing optimal trajectory of robotic mining machine effector

    Science.gov (United States)

    Fryanov, VN; Pavlova, LD

    2017-02-01

    Within the framework of a robotic coal mine design for deep-level coal beds with high gas content in the seismically active areas of the southern Kuzbass, the motion path parameters for the effector of a robotic mining machine are evaluated. The simulation model is intended for selecting the minimum-energy optimal trajectory of the robot effector, calculating stresses and strains in a coal bed in a variable-perimeter shortwall in the course of coal extraction, determining the coordinates of the coal bed edge area with the maximum disintegration of coal, and choosing the direction in which the robot effector contacts that area so as to break coal at the minimum energy input. It is suggested that the model be used in the engineering of the robot's intelligence.

  2. A mathematical model for surface roughness of fluidic channels produced by grinding aided electrochemical discharge machining (G-ECDM)

    Directory of Open Access Journals (Sweden)

    Ladeesh V. G.

    2017-01-01

    Full Text Available Grinding aided electrochemical discharge machining is a hybrid technique, which combines the grinding action of an abrasive tool and thermal effects of electrochemical discharges to remove material from the workpiece for producing complex contours. The present study focuses on developing fluidic channels on borosilicate glass using G-ECDM and attempts to develop a mathematical model for surface roughness of the machined channel. Preliminary experiments are conducted to study the effect of machining parameters on surface roughness. Voltage, duty factor, frequency and tool feed rate are identified as the significant factors for controlling surface roughness of the channels produced by G-ECDM. A mathematical model was developed for surface roughness by considering the grinding action and thermal effects of electrochemical discharges in material removal. Experiments are conducted to validate the model and the results obtained are in good agreement with that predicted by the model.

  3. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    Directory of Open Access Journals (Sweden)

    Yulin Jian

    2017-06-01

    Full Text Available A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.

  4. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.

  5. Kinetostatic modeling and analysis of an Exechon parallel kinematic machine (PKM) module

    Science.gov (United States)

    Zhao, Yanqin; Jin, Yan; Zhang, Jun

    2016-01-01

    As a newly invented parallel kinematic machine (PKM), the Exechon has found potential applications in the machining and assembly industries due to its high rigidity and high dynamics. To guarantee overall performance, the loading conditions and deflections of the key components must be revealed to provide basic mechanical data for component design. For this purpose, a kinetostatic model is proposed using the substructure synthesis technique. The Exechon is divided into a platform subsystem, a fixed base subsystem and three limb subsystems according to its structure. By modeling the limb assemblage as a spatial beam constrained by two sets of lumped virtual springs representing the compliances of the revolute joint, universal joint and spherical joint, the equilibrium equations of the limb subsystems are derived with the finite element method (FEM). The equilibrium equations of the platform are derived from Newton's second law. By introducing deformation compatibility conditions between the platform and the limbs, the governing equilibrium equations of the system are derived to formulate an analytical expression for the system's deflections. The platform's elastic displacements and joint reactions caused by gravity are investigated and show a strong position dependency and axial symmetry due to the machine's kinematic and structural features. The proposed kinetostatic model is a trade-off between the accuracy of the FEM and the concision of analytical methods, and can thus predict the kinetostatics throughout the workspace in a quick and succinct manner. The proposed modeling methodology and kinetostatic analysis can be extended to other PKMs with necessary modifications, providing useful information for kinematic calibration as well as component strength calculations.

  6. Neural Control and Adaptive Neural Forward Models for Insect-like, Energy-Efficient, and Adaptable Locomotion of Walking Machines

    DEFF Research Database (Denmark)

    Manoonpong, Poramate; Parlitz, Ulrich; Wörgötter, Florentin

    2013-01-01

    basic rhythmic motions which are shaped by sensory feedback while internal models are used for sensory prediction and state estimations. According to this concept, we present here adaptive neural locomotion control consisting of a CPG mechanism with neuromodulation and local leg control mechanisms based...... on sensory feedback and adaptive neural forward models with efference copies. This neural closed-loop controller enables a walking machine to perform a multitude of different walking patterns including insect-like leg movements and gaits as well as energy-efficient locomotion. In addition, the forward models...... that the employed embodied neural closed-loop system can be a powerful way for developing robust and adaptable machines....

  7. QSAR models for prediction study of HIV protease inhibitors using support vector machines, neural networks and multiple linear regression

    Directory of Open Access Journals (Sweden)

    Rachid Darnag

    2017-02-01

    Full Text Available Support vector machines (SVM) represent one of the most promising machine learning (ML) tools that can be applied to develop predictive quantitative structure–activity relationship (QSAR) models using molecular descriptors. Multiple linear regression (MLR) and artificial neural networks (ANNs) were also utilized to construct quantitative linear and nonlinear models to compare with the results obtained by SVM. The prediction results are in good agreement with the experimental values of HIV activity; moreover, the results reveal the superiority of SVM over the MLR and ANN models. The contribution of each descriptor to the structure–activity relationships was evaluated.

  8. Support vector machine-based open crop model (SBOCM): Case of rice production in China

    Directory of Open Access Journals (Sweden)

    Ying-xue Su

    2017-03-01

    Full Text Available Existing crop models produce unsatisfactory simulation results and are operationally complicated. The present study, however, demonstrated the unique advantages of statistical crop models for large-scale simulation. Using rice as the research crop, a support vector machine-based open crop model (SBOCM) was developed by integrating developmental stage and yield prediction models. Basic geographical information obtained from surface weather observation stations in China and the 1:1000000 soil database published by the Chinese Academy of Sciences were used. Based on the principle of scale compatibility of modeling data, an open reading frame was designed for the dynamic daily input of meteorological data and the output of rice development and yield records. This was used to generate rice developmental stage and yield prediction models, which were integrated into the SBOCM system. The parameters, methods, error sources, and other factors were analyzed. Although not a crop physiology simulation model, the proposed SBOCM can be used for perennial simulation and one-year rice predictions within certain scale ranges. It is convenient for data acquisition, regionally applicable, parametrically simple, and effective for multi-scale factor integration. It has the potential for future integration with extensive social and economic factors to improve prediction accuracy and practicability.

  9. Unsteady aerodynamic modeling at high angles of attack using support vector machines

    Directory of Open Access Journals (Sweden)

    Wang Qing

    2015-06-01

    Full Text Available Accurate aerodynamic models are the basis of flight simulation and control law design. Mathematically modeling unsteady aerodynamics at high angles of attack presents great difficulties in model structure determination and parameter estimation due to limited understanding of the flow mechanism. Support vector machines (SVMs), based on statistical learning theory, provide a novel tool for nonlinear system modeling. The work presented here examines the feasibility of applying SVMs to high angle-of-attack unsteady aerodynamic modeling. After a review of SVMs, several issues associated with unsteady aerodynamic modeling by use of SVMs are discussed in detail, such as the selection of input variables, the selection of output variables and the determination of SVM parameters. Least squares SVM (LS-SVM) models are set up from certain dynamic wind tunnel test data of a delta wing and an aircraft configuration, and then used to predict the aerodynamic responses in other tests. The predictions are in good agreement with the test data, which indicates the satisfying learning and generalization performance of LS-SVMs.

  10. Model design and simulation of automatic sorting machine using proximity sensor

    Directory of Open Access Journals (Sweden)

    Bankole I. Oladapo

    2016-09-01

    Full Text Available The automatic sorting system has been reported to be complex and a global problem because of the inability of sorting machines to incorporate flexibility in their design concept. This research therefore designed and developed an automated object-sorting system based on a conveyor belt. The developed automated sorting machine is able to incorporate flexibility and to separate objects of different materials, while moving objects automatically to the basket as defined by the regulation of a Programmable Logic Controller (PLC), with a capacitive proximity sensor used to detect a value range of objects. The results obtained show that plastic, wood, and steel were sorted into their respective and correct positions with average sorting times of 9.903 s, 14.072 s and 18.648 s respectively. The proposed model from this research could be adopted by any institution or industry whose practices are based on mechatronic engineering systems, to guide the industrial sector in object sorting and to serve as a teaching aid in institutions, producing lists of classified materials according to the enabled sorting program commands.

  11. Multi-model convolutional extreme learning machine with kernel for RGB-D object recognition

    Science.gov (United States)

    Yin, Yunhua; Li, Huifang; Wen, Xinling

    2017-11-01

    With new depth-sensing technology such as Kinect providing high-quality synchronized RGB and depth images (RGB-D data), learning rich representations efficiently plays an important role in multi-modal recognition tasks, which is crucial to achieving high generalization performance. To address this problem, in this paper we propose an effective multi-modal convolutional extreme learning machine with kernel (MMC-KELM) structure, which combines the representational power of CNNs with the fast training of ELMs. In this model, the CNN uses multiple alternating convolution layers and stochastic pooling layers to effectively abstract high-level features from each modality (RGB and depth) separately, without adjusting parameters. Then, a shared layer is developed by combining the features from each modality. Finally, the abstracted features are fed to the extreme learning machine with kernel (KELM), which leads to better generalization performance with faster learning speed. Experimental results on the Washington RGB-D Object Dataset show that the proposed multiple-modality fusion method achieves state-of-the-art performance with much less complexity.

  12. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    Science.gov (United States)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced-order modeling tools coupling support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within the prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale the heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are used to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation where the developed reduced-order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
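
    The NMF-plus-clustering step can be illustrated in a few lines. The sketch below uses synthetic mixtures, and Python rather than the Julia/MADS implementation cited above: it factorizes non-negative observations and clusters the resulting mixing patterns.

        # Sketch: blind source separation by NMF followed by k-means clustering.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(9)
        sources = rng.random((3, 12))   # 3 water types x 12 chemical species
        mixing = rng.random((40, 3))    # 40 wells x mixing fractions
        data = mixing @ sources + 0.01*rng.random((40, 12))

        W = NMF(n_components=3, random_state=0, max_iter=1000).fit_transform(data)
        labels = KMeans(n_clusters=3, random_state=0, n_init=10).fit_predict(W)
        print(labels)                    # wells grouped by dominant source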

  13. Validation of a Numerical Model for the Prediction of the Annoyance Condition at the Operator Station of Construction Machines

    Directory of Open Access Journals (Sweden)

    Eleonora Carletti

    2016-11-01

    Full Text Available It is well-known that the reduction of noise levels is not strictly linked to the reduction of noise annoyance. Even earthmoving machine manufacturers are facing the problem of customer complaints concerning the noise quality of their machines with increasing frequency. Unfortunately, all the studies geared to the understanding of the relationship between multidimensional characteristics of noise signals and the auditory perception of annoyance require repeated sessions of jury listening tests, which are time-consuming. In this respect, an annoyance prediction model was developed for compact loaders to assess the annoyance sensation perceived by operators at their workplaces without repeating the full sound quality assessment but using objective parameters only. This paper aims at verifying the feasibility of the developed annoyance prediction model when applied to other kinds of earthmoving machines. For this purpose, an experimental investigation was performed on five earthmoving machines, different in type, dimension, and engine mechanical power, and the annoyance predicted by the numerical model was compared to the annoyance given by subjective listening tests. The results were evaluated by means of the squared value of the correlation coefficient, R2, and they confirm the possible applicability of the model to other kinds of machines.

  14. Numerical Simulations of Two-Phase Flow in a Self-Aerated Flotation Machine and Kinetics Modeling

    KAUST Repository

    Fayed, Hassan E.

    2015-03-30

    A new boundary condition treatment has been devised for two-phase flow numerical simulations in a self-aerated minerals flotation machine and applied to a Wemco 0.8 m3 pilot cell. Airflow rate is not specified a priori but is predicted by the simulations as well as power consumption. Time-dependent simulations of two-phase flow in flotation machines are essential to understanding flow behavior and physics in self-aerated machines such as the Wemco machines. In this paper, simulations have been conducted for three different uniform bubble sizes (db = 0.5, 0.7 and 1.0 mm) to study the effects of bubble size on air holdup and hydrodynamics in Wemco pilot cells. Moreover, a computational fluid dynamics (CFD)-based flotation model has been developed to predict the pulp recovery rate of minerals from a flotation cell for different bubble sizes, different particle sizes and particle size distribution. The model uses a first-order rate equation, where models for probabilities of collision, adhesion and stabilization and collisions frequency estimated by Zaitchik-2010 model are used for the calculation of rate constant. Spatial distributions of dissipation rate and air volume fraction (also called void fraction) determined by the two-phase simulations are the input for the flotation kinetics model. The average pulp recovery rate has been calculated locally for different uniform bubble and particle diameters. The CFD-based flotation kinetics model is also used to predict pulp recovery rate in the presence of particle size distribution. Particle number density pdf and the data generated for single particle size are used to compute the recovery rate for a specific mean particle diameter. Our computational model gives a figure of merit for the recovery rate of a flotation machine, and as such can be used to assess incremental design improvements as well as design of new machines.

  15. Modeling of the integrity of machining surfaces: application to the case of 15-5 PH stainless steel finish turning

    International Nuclear Information System (INIS)

    Mondelin, A.

    2012-01-01

    During machining, extreme conditions of pressure, temperature and strain appear in the cutting zone. In this thermo-mechanical context, the link between the cutting conditions (cutting speed, lubrication, feed rate, wear, tool coating...) and the machined surface integrity represents a major scientific target. This PhD study is part of a global project called MIFSU (Modeling of the Integrity and Fatigue resistance of Machining Surfaces) and focuses on the finish turning of 15-5PH (a martensitic stainless steel used for parts of helicopter rotors). First, the material behavior was studied in order to provide data for machining simulations. Stress-free dilatometry tests were conducted to obtain the austenitization kinetics of 15-5PH steel at high heating rates (up to 11,000 °C/s). Then, the parameters of the Leblond metallurgical model were calibrated. In addition, dynamic compression tests (dε/dt ranging from 0.01 to 80 s⁻¹ and ε ≥ 1) were performed to calibrate a strain-rate-dependent elasto-plasticity model (for high strains). These tests also helped to highlight dynamic recrystallization phenomena and their influence on the flow stress of the material; a recrystallization model has therefore also been implemented. In parallel, a numerical model for the prediction of machined surface integrity was constructed. This model is based on a methodology called "hybrid" (developed during the PhD thesis of Frederic Valiorgue for AISI 304L steel). The method consists of replacing tool and chip modeling with equivalent loadings (obtained experimentally). A calibration step for these loadings was carried out using orthogonal cutting and friction tests (with sensitivity studies of machining forces, friction and heat partition coefficients with respect to cutting parameter variations). Finally, the numerical predictions of microstructural changes (austenitization and dynamic recrystallization) and residual stresses have been successfully compared with

  16. Machine Learning-based discovery of closures for reduced models of dynamical systems

    Science.gov (United States)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
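
    The trapezoidal approximation mentioned above replaces the memory integral of a convolution-type closure term, int_0^t K(t - s) x(s) ds, with a quadrature over the stored state history. A minimal sketch with an illustrative kernel and state:

        # Trapezoidal quadrature of a memory (convolution) closure term.
        import numpy as np

        dt, n = 0.01, 500
        t = np.arange(n)*dt
        x = np.sin(2*np.pi*t)     # resolved-state history (placeholder)
        K = np.exp(-5*t)          # memory kernel (placeholder)

        def closure(n_now):
            """Approximate int_0^{t_n} K(t_n - s) x(s) ds by the trapezoid rule."""
            g = K[n_now::-1] * x[:n_now + 1]   # integrand sampled on the grid
            return dt*(g[0]/2 + g[1:-1].sum() + g[-1]/2)

        print(closure(n - 1))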

  17. OxLM: A Neural Language Modelling Framework for Machine Translation

    Directory of Open Access Journals (Sweden)

    Paul Baltescu

    2014-09-01

    This paper presents an open source implementation of a neural language model for machine translation. Neural language models deal with the problem of data sparsity by learning distributed representations for words in a continuous vector space. The language modelling probabilities are estimated by projecting a word's context in the same space as the word representations and by assigning probabilities proportional to the distance between the words and the context's projection. Neural language models are notoriously slow to train and test. Our framework is designed with scalability in mind and provides two optional techniques for reducing the computational cost: the so-called class decomposition trick and a training algorithm based on noise contrastive estimation. Our models may be extended to incorporate direct n-gram features to learn weights for every n-gram in the training data. Our framework comes with wrappers for the cdec and Moses translation toolkits, allowing our language models to be incorporated as normalized features in their decoders (inside the beam search).
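
    To make the class decomposition trick concrete, the sketch below computes p(word | context) as p(class | context) * p(word | class, context), so each softmax is normalized over a small set rather than the full vocabulary. It is a minimal illustration with made-up sizes and random weights, not OxLM's actual code or API.

```python
import numpy as np

# Minimal sketch of the class-decomposition trick:
# p(word | context) = p(class | context) * p(word | class, context).
# All weights and sizes here are hypothetical stand-ins.

rng = np.random.default_rng(1)
d, n_classes, words_per_class = 32, 10, 50     # hidden size, |C|, words per class

h = rng.standard_normal(d)                     # the context's projection (from the LM)
W_class = rng.standard_normal((n_classes, d))  # class scoring weights
W_word = rng.standard_normal((n_classes, words_per_class, d))  # per-class word weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

c, w = 3, 17                                   # class of the target word, index within class
p_class = softmax(W_class @ h)[c]              # normalize over |C| classes only
p_word = softmax(W_word[c] @ h)[w]             # normalize over one class's words only
print("p(word | context) =", p_class * p_word)
```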

  18. Modeling and Dynamic Analysis of Cutterhead Driving System in Tunnel Boring Machine

    Directory of Open Access Journals (Sweden)

    Wei Sun

    2017-01-01

    Failure of the cutterhead driving system (CDS) of a tunnel boring machine (TBM) often occurs under shock and vibration conditions. To investigate the dynamic characteristics and further reduce system vibration, an electromechanical coupling model of the CDS is established which includes a model of the direct torque control (DTC) system for the three-phase asynchronous motor and a purely torsional dynamic model of the multistage gear transmission system. The proposed DTC model can provide driving torque just as the practical inverter motor operates, so that the influence of motor operating behavior will not be erroneously estimated. Moreover, nonlinear gear meshing factors, such as time-variant mesh stiffness and transmission error, are involved in the dynamic model. Based on the established nonlinear model of the CDS, vibration modes can be classified into three types, that is, rigid motion mode, rotational vibration mode, and planet vibration mode. Moreover, dynamic responses under the actual driving torque and an idealized equivalent torque are compared, which reveals that the ripple of the actual driving torque aggravates the vibration of the gear transmission system. An influence index of torque ripple is proposed to show that the vibration of the system increases with torque ripple. This study provides a useful guideline for anti-vibration design and motor control of the CDS in TBMs.

  19. Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.

    Science.gov (United States)

    Cong, W L; Pei, Z J; Sun, X; Zhang, C L

    2014-02-01

    Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Virtual-view PSNR prediction based on a depth distortion tolerance model and support vector machine.

    Science.gov (United States)

    Chen, Fen; Chen, Jiali; Peng, Zongju; Jiang, Gangyi; Yu, Mei; Chen, Hua; Jiao, Renzhi

    2017-10-20

    Quality prediction of virtual views is important for free viewpoint video systems, and can be used as feedback to improve the performance of depth video coding and virtual-view rendering. In this paper, an efficient virtual-view peak signal to noise ratio (PSNR) prediction method is proposed. First, the effect of depth distortion on virtual-view quality is analyzed in detail, and a depth distortion tolerance (DDT) model that determines the DDT range is presented. Next, the DDT model is used to predict the virtual-view quality. Finally, a support vector machine (SVM) is utilized to train and obtain the virtual-view quality prediction model. Experimental results show that the Spearman's rank correlation coefficient and root mean square error between the actual PSNR and the PSNR predicted by the DDT model are 0.8750 and 0.6137 on average, and those for the SVM prediction model are 0.9109 and 0.5831. The computational complexity of the SVM method is lower than that of the DDT model and the state-of-the-art methods.

  1. Hidden Markov models and other machine learning approaches in computational molecular biology

    Energy Technology Data Exchange (ETDEWEB)

    Baldi, P. [California Inst. of Tech., Pasadena, CA (United States)

    1995-12-31

    This tutorial was one of eight tutorials selected for presentation at the Third International Conference on Intelligent Systems for Molecular Biology, held in the United Kingdom from July 16 to 19, 1995. Computational tools are increasingly needed to process the massive amounts of data, to organize and classify sequences, to detect weak similarities, to separate coding from non-coding regions, and to reconstruct the underlying evolutionary history. The fundamental problem in machine learning is the same as in scientific reasoning in general, as well as in statistical modeling: to come up with a good model for the data. In this tutorial four classes of models are reviewed: hidden Markov models, artificial neural networks, belief networks, and stochastic grammars. When dealing with DNA and protein primary sequences, hidden Markov models provide one of the most flexible and powerful tools for alignments and database searches. In this tutorial, attention is focused on the theory of hidden Markov models and how to apply them to problems in molecular biology.
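
    As a concrete illustration of the kind of computation an HMM performs on sequences, the sketch below scores a DNA string with the scaled forward algorithm. The two-state model and its probabilities are toy assumptions made for this summary, not values from the tutorial.

```python
import numpy as np

# Minimal sketch of the HMM forward algorithm for sequence scoring,
# the core computation behind HMM-based alignment and database search.
# The toy two-state "coding/noncoding" model is an illustrative assumption.

A = np.array([[0.9, 0.1],            # transitions: A[i, j] = P(state j | state i)
              [0.2, 0.8]])
B = np.array([[0.3, 0.2, 0.2, 0.3],  # emissions over the alphabet A, C, G, T
              [0.25, 0.25, 0.25, 0.25]])
pi = np.array([0.5, 0.5])            # initial state distribution
symbol = {"A": 0, "C": 1, "G": 2, "T": 3}

def forward_log_likelihood(seq):
    """Return log P(seq | model) via the scaled forward recursion."""
    alpha = pi * B[:, symbol[seq[0]]]
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()      # rescale to avoid underflow on long sequences
    for ch in seq[1:]:
        alpha = (alpha @ A) * B[:, symbol[ch]]
        log_lik += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_lik

print(forward_log_likelihood("ACGTGGTA"))
```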

  2. The impact of near-term climate policy choices on technology and emission transition pathways

    NARCIS (Netherlands)

    Eom, Jiyong; Edmonds, Jae; Krey, Volker; Johnson, Nils; Longden, Thomas; Luderer, Gunnar; Riahi, Keywan; Van Vuuren, Detlef P.|info:eu-repo/dai/nl/11522016X

    2015-01-01

    This paper explores the implications of delays (to 2030) in implementing optimal policies for long-term transition pathways to limit climate forcing to 450 ppm CO2e, on the basis of the AMPERE Work Package 2 model comparison study. The paper highlights the critical importance of the period 2030-2050

  3. Neural Machine Translation

    OpenAIRE

    Koehn, Philipp

    2017-01-01

    A draft textbook chapter on neural machine translation: a comprehensive treatment of the topic, ranging from an introduction to neural networks and computation graphs, to a description of the currently dominant attentional sequence-to-sequence model, recent refinements, alternative architectures, and challenges. Written as a chapter for the textbook Statistical Machine Translation. Used in the JHU Fall 2017 class on machine translation.
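
    The attentional sequence-to-sequence model the chapter centers on reduces, at each decoding step, to a weighted average of encoder states. A minimal dot-product version, with random vectors standing in for trained states (an illustration, not code from the chapter):

```python
import numpy as np

# Minimal sketch of the attention computation in an attentional
# sequence-to-sequence model (dot-product scoring variant).

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
H_enc = rng.standard_normal((7, 16))   # encoder states, one per source word
s_dec = rng.standard_normal(16)        # current decoder state

scores = H_enc @ s_dec                 # alignment score for each source word
alpha = softmax(scores)                # attention weights, sum to 1
context = alpha @ H_enc                # weighted context vector fed to the decoder
print(alpha.round(3), context.shape)
```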

  4. A Universal Reactive Machine

    DEFF Research Database (Denmark)

    Andersen, Henrik Reif; Mørk, Simon; Sørensen, Morten U.

    1997-01-01

    Turing showed the existence of a model universal for the set of Turing machines in the sense that, given an encoding of any Turing machine as input, the universal Turing machine simulates it. We introduce the concept of universality for reactive systems and construct a CCS process universal...

  5. Evaluation of primary epidermal lamellar density in the forefeet of near-term fetal Australian feral and domesticated horses.

    Science.gov (United States)

    Hampson, Brian A; de Laat, Melody A; Mills, Paul C; Pollitt, Christopher C

    2011-07-01

    To investigate the density of the primary epidermal lamellae (PEL) around the solar circumference of the forefeet of near-term fetal feral and nonferal (i.e., domesticated) horses. Left forefeet from near-term Australian feral (n = 14) and domesticated (n = 4) horse fetuses. Near-term feral horse fetuses were obtained from culled mares within 10 minutes of death; fetuses that had died in utero 2 weeks prior to the anticipated birth date and were delivered from live Thoroughbred mares were also obtained. Following disarticulation at the carpus, the left forefoot of each fetus was frozen during dissection and data collection. In a standard section of each hoof, the stratum internum PEL density was calculated at the midline center (12 o'clock), the medial and lateral break-over points (11 and 1 o'clock), the toe quarters (10 and 2 o'clock), and the quarters (4 and 6 o'clock). Values for matching lateral and medial zones were averaged and expressed as 1 density. Density differences at the 4 locations between the feral and domesticated horse feet were assessed by use of imaging software analysis. In fetal domesticated horse feet, PEL density did not differ among the 4 locations. In fetal feral horse feet, PEL density differed significantly among locations, with a pattern of gradual reduction from the dorsal to the palmar aspect of the foot. The PEL density distribution differed significantly between fetal domesticated and feral horse feet. Results indicated that PEL density distribution differs between fetal feral and domesticated horse feet, suggestive of an adaptation of feral horses to environmental challenges.

  6. Near-term technology policies for long-term climate targets--economy wide versus technology specific approaches

    International Nuclear Information System (INIS)

    Sanden, B.A.; Azar, Christian

    2005-01-01

    The aim of this paper is to offer suggestions when it comes to near-term technology policies for long-term climate targets, based on some insights into the nature of technical change. We make a distinction between economy-wide and technology-specific policy instruments and put forward two key hypotheses: (i) near-term carbon targets such as the Kyoto protocol can be met by economy-wide price instruments (carbon taxes, or a cap-and-trade system) changing the technologies we pick from the shelf (higher energy efficiency in cars, buildings and industry, wind, biomass for heat and electricity, natural gas instead of coal, solar thermal, etc.); (ii) technology-specific policies are needed to bring new technologies to the shelf. Without these new technologies, stricter emission reduction targets may be considered impossible to meet by the government, industry, and the general public, and therefore not adopted. The policies required to bring these more advanced technologies to the shelf are more complex and include increased public research and development, demonstration, niche market creation, support for networks within the new industries, standard setting, and infrastructure policies (e.g., when it comes to hydrogen distribution). There is a risk that society, in its quest for cost-efficiency in meeting near-term emissions targets, becomes blindfolded when it comes to the more difficult, but equally important, issue of bringing more advanced technologies to the shelf. The paper presents mechanisms that cause technology lock-in, how these very mechanisms can be used to get out of the current 'carbon lock-in', and the risk of premature lock-ins into new technologies that do not deliver what they currently promise. We then review certain climate policy proposals with regard to their expected technology impact, and finally we present a let-a-hundred-flowers-bloom strategy for the next couple of decades.

  7. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    Directory of Open Access Journals (Sweden)

    Gabere MN

    2016-06-01

    Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy-maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both tenfold and leave-one-out cross-validation in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1
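
    The workflow described above (filter-based gene selection followed by an RBF-kernel SVM evaluated with ten-fold cross-validation) can be sketched as below. scikit-learn ships no mRMR implementation, so mutual-information ranking stands in for it, and synthetic data replaces the microarray matrix; this is an illustrative approximation, not the study's code.

```python
# Minimal sketch of the filtered-selection + SVM workflow; mutual-information
# ranking is a stand-in for mRMR, and the data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=90, n_features=2000, n_informative=30,
                           random_state=0)   # stand-in for 90 samples x many genes

for k in (20, 30, 50, 100):                  # candidate gene-set sizes
    model = make_pipeline(StandardScaler(),
                          SelectKBest(mutual_info_classif, k=k),
                          SVC(kernel="rbf", C=1.0, gamma="scale"))
    acc = cross_val_score(model, X, y, cv=10).mean()   # ten-fold CV as in the study
    print(f"k={k:3d}  CV accuracy={acc:.3f}")
```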

  8. The Risk for Impaired Learning-related Abilities in Childhood and Educational Attainment Among Adults Born Near-term

    OpenAIRE

    Nomura, Yoko; Halperin, Jeffrey M.; Newcorn, Jeffrey H.; Davey, Charles; Fifer, William P.; Savitz, David A.; Brooks-Gunn, Jeanne

    2008-01-01

    Objective To examine whether near-term births (NTB) and small-for-gestational-age (SGA) infants are at high risk for childhood learning-related problems and poor adult educational attainment, and whether poverty amplifies the adverse effects of NTB and SGA on those outcomes. Methods A randomly selected birth cohort (n = 1,619) was followed into adulthood. IQ and learning abilities were measured in childhood and educational attainment was measured in adulthood. Results NTB (n = 226) and SGA (n...

  9. LHC 2010: Summary of the Odyssey So Far and Near-Term Prospects (2/3)

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    In 2010, the LHC delivered proton-proton collisions at an energy of 7 TeV, significantly higher than what was previously attained. This has allowed the experiments to complete the commissioning of the detectors and to perform early measurements of key standard model processes. The inclusive production of particles, jets and photons, the observation of onia and heavy-flavored meson decays, the measurement of the W and Z cross sections, and the observation of top-quark production and decay constitute a full set of measurements which form the base from which searches for physics beyond the standard model can be launched. The results from a number of searches for supersymmetry and some exotic signatures are now appearing. The lectures will review this impressive list of physics achievements from 2010 and consider briefly what 2011 may bring.

  10. LHC 2010: Summary of the Odyssey So Far and Near-Term Prospects (3/3)

    CERN Multimedia

    CERN. Geneva

    2011-01-01

    In 2010, the LHC delivered proton-proton collisions at an energy of 7 TeV, significantly higher than what was previously attained. This has allowed the experiments to complete the commissioning of the detectors and to perform early measurements of key standard model processes. The inclusive production of particles, jets and photons, the observation of onia and heavy-flavored meson decays, the measurement of the W and Z cross sections, and the observation of top-quark production and decay constitute a full set of measurements which form the base from which searches for physics beyond the standard model can be launched. The results from a number of searches for supersymmetry and some exotic signatures are now appearing. The lectures will review this impressive list of physics achievements from 2010 and consider briefly what 2011 may bring.

  11. A geometric process model for M/PH(M/PH)/1/K queue with new service machine procurement lead time

    Science.gov (United States)

    Yu, Miaomiao; Tang, Yinghui; Fu, Yonghong

    2013-06-01

    In this article, we consider a geometric process model for an M/PH(M/PH)/1/K queue with new service machine procurement lead time. A maintenance policy (N - 1, N) based on the number of failures of the service machine is introduced into the system. We assume that a failed service machine after repair will not be 'as good as new' and that the spare service machine for replacement is available only by order. More specifically, we suppose that the procurement lead time for delivering the spare service machine follows a phase-type (PH) distribution. Under these assumptions, we apply the matrix-analytic method to develop the steady-state probabilities of the system, and then we obtain some system performance measures. Finally, employing an important lemma, the explicit expression of the long-run average cost rate for the service machine is derived, and the direct search method is implemented to determine the optimal value of N that minimises the average cost rate.

  12. Support vector machines to model presence/absence of Alburnus alburnus alborella (Teleostea, Cyprinidae) in North-Western Italy: comparison with other machine learning techniques.

    Science.gov (United States)

    Tirelli, Tina; Gamba, Marco; Pessani, Daniela

    2012-01-01

    Alburnus alburnus alborella is a fish species native to northern Italy. It has suffered a very sharp decrease in population over the last 20 years due to human impact. Therefore, it was selected for reintroduction projects. In this research project, support vector machines (SVMs) were tested as possible tools for building reliable models of presence/absence of the species. A system of 198 sites located along the rivers of Piedmont in North-Western Italy was investigated. At each site, 19 physical-chemical and environmental variables were measured. We verified that performance did not improve after feature selection but, instead, slightly decreased (from Correctly Classified Instances [CCI]=84.34 and Cohen's k [k]=0.69 to CCI=82.81 and k=0.66). However, feature selection is crucial in identifying the relevant features for the presence/absence of the species. We then compared SVM performance with decision trees (DTs) and artificial neural networks (ANNs) built using the same dataset. SVMs outperformed DTs (CCI=81.39 and k=0.63) but not ANNs (CCI=83.03 and k=0.66), showing that SVMs and ANNs are the best-performing models and that their application in freshwater management is more promising than traditional and other machine-learning techniques. Copyright © 2012 Académie des sciences. Published by Elsevier SAS. All rights reserved.
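
    A sketch of the kind of comparison reported above, computing CCI (accuracy) and Cohen's kappa for SVM, decision tree, and neural network classifiers under cross-validation. The data below are synthetic stand-ins for the 198-site, 19-variable survey:

```python
# Minimal sketch comparing classifiers by CCI and Cohen's kappa on
# synthetic presence/absence data (not the Piedmont dataset).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=198, n_features=19, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "DT": DecisionTreeClassifier(random_state=0),
    "ANN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}
for name, model in models.items():
    pred = cross_val_predict(model, X, y, cv=10)
    print(f"{name}: CCI={100 * accuracy_score(y, pred):.2f}"
          f"  k={cohen_kappa_score(y, pred):.2f}")
```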

  13. Mathematical modeling and multi-criteria optimization of rotary electrical discharge machining process

    Science.gov (United States)

    Shrinivas Balraj, U.

    2015-12-01

    In this paper, mathematical modeling of three performance characteristics, namely material removal rate, surface roughness, and electrode wear rate, in rotary electrical discharge machining of RENE80 nickel superalloy is carried out using a regression approach. The parameters considered are peak current, pulse on time, pulse off time, and electrode rotational speed. The regression approach is very effective in mathematical modeling when the performance characteristic is influenced by many variables. The modeling of these characteristics is helpful in predicting performance under a given combination of input process parameters. The adequacy of the developed models is tested by the correlation coefficient and Analysis of Variance. It is observed that the developed models are adequate in establishing the relationship between input parameters and performance characteristics. Further, multi-criteria optimization of process parameter levels is carried out using a grey-based Taguchi method. The experiments are planned based on Taguchi's L9 orthogonal array. The proposed method employs a single grey relational grade as a performance index to obtain optimum levels of parameters. It is found that peak current and electrode rotational speed are influential on these characteristics. Confirmation experiments are conducted to validate the optimal parameters, and they reveal improvements in material removal rate, surface roughness, and electrode wear rate of 13.84%, 12.91%, and 19.42%, respectively.
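
    The grey relational grade used in the multi-criteria step can be illustrated with a short worked example: normalize each response (larger-the-better or smaller-the-better), compute grey relational coefficients with a distinguishing coefficient of 0.5, and average them per trial. All response values below are hypothetical:

```python
import numpy as np

# Minimal worked example of the grey relational grade for multi-criteria
# ranking. The trials and response values (MRR, Ra, EWR) are hypothetical.

# Rows = trials; columns = MRR (larger-better), Ra and EWR (smaller-better)
Y = np.array([[12.1, 3.2, 0.41],
              [14.8, 2.9, 0.38],
              [11.4, 3.6, 0.45],
              [15.9, 2.5, 0.33]])

norm = np.empty_like(Y, dtype=float)
norm[:, 0] = (Y[:, 0] - Y[:, 0].min()) / (Y[:, 0].max() - Y[:, 0].min())  # larger-the-better
for j in (1, 2):                                                          # smaller-the-better
    norm[:, j] = (Y[:, j].max() - Y[:, j]) / (Y[:, j].max() - Y[:, j].min())

# Grey relational coefficient with distinguishing coefficient zeta = 0.5
delta = 1.0 - norm                     # deviation from the ideal sequence
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

grade = grc.mean(axis=1)               # grey relational grade per trial
print("best trial:", grade.argmax(), "grades:", np.round(grade, 3))
```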

  14. Establishment of tunnel-boring machine disk cutter rock-breaking model from energy perspective

    Directory of Open Access Journals (Sweden)

    Liwei Song

    2015-12-01

    As the most important cutting tools in the tunnel-boring machine tunneling construction process, V-type disk cutters and their rock-breaking mechanism have been researched by many scholars all over the world. Adopting the finite element method, this article focuses on the interaction between V-type disk cutters and intact rock and carries out a microscopic parameter analysis: first, the stress model of rock breaking was established through V-type disk cutter motion trajectory analysis; second, based on the incremental theorem of elastic-plastic theory, a strain model of the relative changes in rock displacement during the breaking process was created. According to the principle of admissible work in the energy method of elastic-plastic theory, used to analyze the energy transfer rules in the process of breaking rock, the rock-breaking force of the V-type disk cutter can be regarded as the external force acting on the rock system. Finally, by taking the rock system as the reference object, a total potential energy equivalent model of the rock system was derived to obtain the forces in the three directions acting on the V-type disk cutter during the rock-breaking process. This derived model, which has been proved effective and scientific through comparisons with earlier force models and through comparative analysis with experimental data, also initiates a new research strategy that takes the view of microscopic elastic-plastic theory to study the rock-breaking mechanism.

  15. Modeling the Financial Distress of Microenterprise StartUps Using Support Vector Machines: A Case Study

    Directory of Open Access Journals (Sweden)

    Antonio Blanco-Oliver

    2014-10-01

    Despite the leading role that micro-entrepreneurship plays in economic development, and the high failure rate of microenterprise start-ups in their early years, very few studies have designed financial distress models to detect the financial problems of micro-entrepreneurs. Moreover, due to a lack of research, nothing is known about whether non-financial information and nonparametric statistical techniques improve the predictive capacity of these models. Therefore, this paper provides an innovative financial distress model specifically designed for microenterprise start-ups via support vector machines (SVMs) that employs financial, non-financial, and macroeconomic variables. Based on a sample of almost 5,500 micro-entrepreneurs from a Peruvian Microfinance Institution (MFI), our findings show that the introduction of non-financial information related to the zone in which the entrepreneurs live and situate their business, the duration of the MFI-entrepreneur relationship, the number of loans granted by the MFI in the last year, the loan destination, and the opinion of experts on the probability that microenterprise start-ups may experience financial problems, significantly increases the accuracy performance of our financial distress model. Furthermore, the results reveal that the models that use SVMs outperform those which employ traditional logistic regression (LR) analysis.

  16. Unsupervised machine learning account of magnetic transitions in the Hubbard model

    Science.gov (United States)

    Ch'ng, Kelvin; Vazquez, Nick; Khatami, Ehsan

    2018-01-01

    We employ several unsupervised machine learning techniques, including autoencoders, random trees embedding, and t-distributed stochastic neighbor embedding (t-SNE), to reduce the dimensionality of, and therefore classify, raw (auxiliary) spin configurations generated, through Monte Carlo simulations of small clusters, for the Ising and Fermi-Hubbard models at finite temperatures. Results from a convolutional autoencoder for the three-dimensional Ising model can be shown to produce the magnetization and the susceptibility as a function of temperature with a high degree of accuracy. Quantum fluctuations distort this picture and prevent us from making such connections between the output of the autoencoder and physical observables for the Hubbard model. However, we are able to define an indicator based on the output of the t-SNE algorithm that shows a near-perfect agreement with the antiferromagnetic structure factor of the model in two and three spatial dimensions in the weak-coupling regime. t-SNE also predicts a transition to the canted antiferromagnetic phase for the three-dimensional model when a strong magnetic field is present. We show that these techniques cannot be expected to work away from half filling when the "sign problem" in quantum Monte Carlo simulations is present.
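
    A minimal sketch of the t-SNE step: embed raw spin configurations in two dimensions and check that ordered and disordered groups separate. The random configurations below are crude stand-ins for Monte Carlo samples, so this only illustrates the mechanics, not the physics:

```python
import numpy as np
from sklearn.manifold import TSNE

# Minimal sketch: dimensionality reduction of raw spin configurations with
# t-SNE. The configurations are random stand-ins, not Monte Carlo samples.

rng = np.random.default_rng(0)

def fake_configs(p_up, n, size=16 * 16):
    """Random +/-1 configurations with magnetization bias p_up (toy stand-in)."""
    return np.where(rng.random((n, size)) < p_up, 1, -1)

# Low "temperature" -> ordered (biased) configs; high -> disordered
X = np.vstack([fake_configs(0.95, 100), fake_configs(0.5, 100)])
labels = np.array([0] * 100 + [1] * 100)

emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X.astype(float))
# Ordered and disordered groups should fall into distinct clusters in `emb`
print(emb.shape, "mean |m| per group:",
      np.abs(X[labels == 0].mean(axis=1)).mean(),
      np.abs(X[labels == 1].mean(axis=1)).mean())
```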

  17. Reliability enumeration model for the gear in a multi-functional machine

    Science.gov (United States)

    Nasution, M. K. M.; Ambarita, H.

    2018-02-01

    The angle and direction of motion play an important role in the ability of a multi-functional machine to perform the task with which it is charged. The movement can be a rotational action performing a revolution, where the rotation is achieved by connecting the generator through the help of a hinge formed from two rounded surfaces. The rotation of the entire arm can be carried out by the interconnection between two surfaces having a toothed ring. This link changes according to the angle of motion, and each tooth of the serration has a share in the success of this process; therefore, a robust measurement model for the arm is established based on canonical provisions.

  18. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    International Nuclear Information System (INIS)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-01-01

    Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost, accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset, with semi-local and hybrid exchange-correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
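
    A two-fidelity scheme in the spirit of co-kriging can be sketched with two Gaussian process regressors: one fit to the cheap calculations, a second fit to the high-fidelity residuals. The functions and data below are synthetic assumptions, not the elpasolite dataset or the authors' exact formulation:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Minimal two-fidelity sketch (in the spirit of co-kriging): GP on the
# low-fidelity data plus a GP on the high-fidelity residuals. Synthetic data.

rng = np.random.default_rng(0)
X_lo = rng.uniform(0, 1, (60, 1))        # many cheap calculations
X_hi = X_lo[:15]                         # few expensive calculations

f_lo = lambda x: np.sin(6 * x).ravel()                           # e.g., semi-local
f_hi = lambda x: 1.2 * np.sin(6 * x).ravel() + 0.3 * x.ravel()   # e.g., hybrid

gp_lo = GaussianProcessRegressor(RBF(0.2)).fit(X_lo, f_lo(X_lo))
resid = f_hi(X_hi) - gp_lo.predict(X_hi)          # discrepancy between fidelities
gp_delta = GaussianProcessRegressor(RBF(0.2)).fit(X_hi, resid)

X_new = rng.uniform(0, 1, (5, 1))
delta, std = gp_delta.predict(X_new, return_std=True)
y_hat = gp_lo.predict(X_new) + delta              # high-fidelity prediction
print(y_hat, std)                                 # std as an uncertainty proxy
```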

  19. Improved Emotion Recognition Using Gaussian Mixture Model and Extreme Learning Machine in Speech and Glottal Signals

    Directory of Open Access Journals (Sweden)

    Hariharan Muthusamy

    2015-01-01

    Recently, researchers have paid escalating attention to studying the emotional state of an individual from his/her speech signals, as the speech signal is the fastest and most natural method of communication between individuals. In this work, new feature enhancement using a Gaussian mixture model (GMM) was proposed to enhance the discriminatory power of the features extracted from speech and glottal signals. Three different emotional speech databases were utilized to gauge the proposed methods. An extreme learning machine (ELM) and a k-nearest neighbor (kNN) classifier were employed to classify the different types of emotions. Several experiments were conducted, and the results show that the proposed methods significantly improved speech emotion recognition performance compared to research works published in the literature.
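
    For reference, the extreme learning machine itself is simple to sketch: a fixed random hidden layer followed by a closed-form least-squares readout. The features below are synthetic stand-ins for the GMM-enhanced speech/glottal features:

```python
import numpy as np
from sklearn.datasets import make_classification

# Minimal sketch of an extreme learning machine (ELM): random, untrained
# hidden weights plus a least-squares output layer. Synthetic features.

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
Y = np.eye(3)[y]                                   # one-hot targets

n_hidden = 200
W = rng.standard_normal((X.shape[1], n_hidden))    # random input weights (never trained)
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                             # hidden-layer activations

beta = np.linalg.pinv(H) @ Y                       # closed-form output weights
pred = (H @ beta).argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```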

  20. Model-based orientation-independent 3-D machine vision techniques

    Science.gov (United States)

    De Figueiredo, R. J. P.; Kehtarnavaz, N.

    1988-01-01

    Orientation-independent techniques for the identification of a three-dimensional object by a machine vision system are presented in two parts. In the first part, the data consist of intensity images of polyhedral objects obtained by a single camera, while in the second part, the data consist of range images of curved objects obtained by a laser scanner. In both cases, an attributed graph representation of the object surface is used to drive the respective algorithm. In this representation, a graph node represents a surface patch and a link represents the adjacency between two patches. For polyhedral objects, the attributes assigned to nodes are moment invariants of the corresponding face. For range images, the Gaussian curvature is used as a segmentation criterion for providing symbolic shape attributes. Identification is achieved by an efficient graph-matching algorithm used to match the graph obtained from the data to a subgraph of one of the model graphs stored in the computer memory.

  1. Modeling of Autovariator Operation as Power Components Adjuster in Adaptive Machine Drives

    Science.gov (United States)

    Balakin, P. D.; Belkov, V. N.; Shtripling, L. O.

    2018-01-01

    Making full use of the available power and preserving a stationary operating mode for the power plant (engine) of a transport machine under conditions of variable external loading are topical issues. These issues can be resolved by means of mechanical drives with an auto-varied transfer function and a nonholonomic constraint between the main driving mediums. In addition to the main motion, a controlled motion of the driving mediums is formed by a variable part of the transformed power flow and is implemented by an integrated control loop functioning only on the basis of the laws of motion. The mathematical model of mechanical autovariator operation is developed using the Gibbs function (acceleration energy); the study results are presented, and on their basis, design calculations of the autovariator driving mediums and constraints, including its automatic control loop, are possible.

  2. Data on Support Vector Machines (SVM) model to forecast photovoltaic power

    Directory of Open Access Journals (Sweden)

    M. Malvoni

    2016-12-01

    The data concern photovoltaic (PV) power, forecast by a hybrid model that considers weather variations and applies a technique to reduce the input data size, as presented in the paper entitled “Photovoltaic forecast based on hybrid pca-lssvm using dimensionality reducted data” (M. Malvoni, M.G. De Giorgi, P.M. Congedo, 2015 [1]). The quadratic Renyi entropy criteria, together with principal component analysis (PCA), are applied to Least Squares Support Vector Machines (LS-SVM) to predict the PV power in the day-ahead time frame. The data shared here represent the results of the proposed approach. Hourly PV power predictions for 1, 3, 6, 12, and 24 hours ahead, and for different data reduction sizes, are provided in the Supplementary material.

  3. Modeling and Validation of Moving Coil Actuated Valve for Digital Displacement Machines

    DEFF Research Database (Denmark)

    Nørgård, Christian; Christensen, Jeppe Haals; Bech, Michael Møller

    2018-01-01

    This paper concerns a novel moving coil actuator integrated with a high-performance seat valve for use in Digital Displacement Machines (DDM), an emerging fluid power technology that sets strict actuator requirements in order to achieve a high energy conversion efficiency. Hence, the mechanical switching time must be in the millisecond range and the actuator power consumption must be in the range of a few tens of watts. The objectives are two-fold: (i) to establish a proof-of-concept for the integrated actuator/valve that relies on several principles and mechanisms new or uncommon in fluid... differential equations describing the motion dynamics. In this way, the movement-induced hydro-mechanical fluid forces caused by rapid acceleration of the valve plunger are coupled with the electro-magnetic dynamics. The proposed model is compared rigorously against measurements obtained from a series...

  4. FACT. Streamed data analysis and online application of machine learning models

    Energy Technology Data Exchange (ETDEWEB)

    Bruegge, Kai Arno; Buss, Jens [Technische Universitaet Dortmund (Germany). Astroteilchenphysik; Collaboration: FACT-Collaboration

    2016-07-01

    Imaging Atmospheric Cherenkov Telescopes (IACTs) like FACT produce a continuous flow of data during measurements. Analyzing the data in near real time is essential for monitoring sources. One major task of a monitoring system is to detect changes in the gamma-ray flux of a source, and to alert other experiments if some predefined limit is reached. In order to calculate the flux of an observed source, it is necessary to run an entire data analysis process including calibration, image cleaning, parameterization, signal-background separation and flux estimation. Software built on top of a data streaming framework has been implemented for FACT and generalized to work with the data acquisition framework of the Cherenkov Telescope Array (CTA). We present how the streams-framework is used to apply supervised machine learning models to an online data stream from the telescope.

  5. Numerical modelling of micro-machining of f.c.c. single crystal: Influence of strain gradients

    KAUST Repository

    Demiral, Murat

    2014-11-01

    A micro-machining process becomes increasingly important with the continuous miniaturization of components used in various fields from military to civilian applications. To characterise underlying micromechanics, a 3D finite-element model of orthogonal micro-machining of f.c.c. single crystal copper was developed. The model was implemented in a commercial software ABAQUS/Explicit employing a user-defined subroutine VUMAT. Strain-gradient crystal-plasticity and conventional crystal-plasticity theories were used to demonstrate the influence of pre-existing and evolved strain gradients on the cutting process for different combinations of crystal orientations and cutting directions. Crown Copyright © 2014.

  6. Comparing artificial neural networks, general linear models and support vector machines in building predictive models for small interfering RNAs.

    Directory of Open Access Journals (Sweden)

    Kyle A McQuisten

    2009-10-01

    Exogenous short interfering RNAs (siRNAs) induce a gene knockdown effect in cells by interacting with naturally occurring RNA processing machinery. However, not all siRNAs induce this effect equally. Several heterogeneous kinds of machine learning techniques and feature sets have been applied to modeling siRNAs and their ability to induce knockdown. There is some growing agreement about which techniques produce maximally predictive models, and yet there is little consensus on methods for comparing predictive models. Also, there are few comparative studies that address the effect that the choice of learning technique, feature set, or cross-validation approach has on finding and discriminating among predictive models. Three learning techniques were used to develop predictive models for effective siRNA sequences: Artificial Neural Networks (ANNs), General Linear Models (GLMs), and Support Vector Machines (SVMs). Five feature mapping methods were also used to generate models of siRNA activities. The two factors of learning technique and feature mapping were evaluated by a complete 3×5 factorial ANOVA. Overall, both learning technique and feature mapping contributed significantly to the observed variance in predictive models, but to differing degrees for precision and accuracy, as well as across different kinds and levels of model cross-validation. The methods presented here provide a robust statistical framework for comparing models developed under distinct learning techniques and feature sets for siRNAs. Further comparisons among current or future modeling approaches should apply these or other suitable statistically equivalent methods to critically evaluate the performance of proposed models. ANN and GLM techniques tend to be more sensitive to the inclusion of noisy features, but the SVM technique is more robust under large numbers of features for measures of model precision and accuracy. Features found to result in maximally predictive models are

  7. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts of exposure to air pollutants in urban areas, monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as the predictor and Partial Least Squares (PLS) as a data selection tool, based on measured values of CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrate that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean errors, root mean squared errors, and the mean absolute relative error have been employed to compare the performances of the models. It is concluded that the errors decrease after size reduction and the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required less computational time than the SVM model, as expected, supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
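
    The hybrid's two stages (PLS for data-size reduction, SVM for regression) can be sketched as follows; the inputs are synthetic placeholders for the Tehran monitoring records, and the hyperparameters are illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Minimal sketch of the hybrid scheme: PLS reduces the input data,
# SVR predicts CO concentration. Data are synthetic placeholders.

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 15))                   # meteorological/pollutant inputs
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pls = PLSRegression(n_components=4).fit(X_tr, y_tr)  # data-size reduction step
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

svr = SVR(kernel="rbf", C=10.0).fit(Z_tr, y_tr)      # predictor step
print("R^2 on held-out data:", r2_score(y_te, svr.predict(Z_te)))
```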

  8. Machine learning modeling of plant phenology based on coupling satellite and gridded meteorological dataset

    Science.gov (United States)

    Czernecki, Bartosz; Nowosad, Jakub; Jabłońska, Katarzyna

    2018-04-01

    Changes in the timing of plant phenological phases are important proxies in contemporary climate research. However, most of the commonly used traditional phenological observations do not give any coherent spatial information. While consistent spatial data can be obtained from airborne sensors and preprocessed gridded meteorological data, not many studies robustly benefit from these data sources. Therefore, the main aim of this study is to create and evaluate different statistical models for reconstructing, predicting, and improving the quality of phenological phase monitoring with the use of satellite and meteorological products. A quality-controlled dataset of 13 BBCH plant phenophases in Poland was collected for the period 2007-2014. For each phenophase, statistical models were built using the most commonly applied regression-based machine learning techniques, such as multiple linear regression, lasso, principal component regression, generalized boosted models, and random forest. The quality of the models was estimated using k-fold cross-validation. The obtained results showed varying potential for coupling meteorologically derived indices with remote sensing products in terms of phenological modeling; however, application of both data sources improves the models' accuracy by 0.6 to 4.6 days in terms of RMSE. It is shown that a robust prediction of early phenological phases is mostly related to meteorological indices, whereas for autumn phenophases, there is a stronger information signal provided by satellite-derived vegetation metrics. Choosing a specific set of predictors and applying robust preprocessing procedures is more important for the final results than the selection of a particular statistical model. The average RMSE for the best models of all phenophases is 6.3 days, while the individual RMSEs vary seasonally from 3.5 to 10 days. The models give a reliable proxy for ground observations, with RMSE below 5 days for early spring and late spring phenophases. For

  9. Modeling the control of the central nervous system over the cardiovascular system using support vector machines.

    Science.gov (United States)

    Díaz, José; Acosta, Jesús; González, Rafael; Cota, Juan; Sifuentes, Ernesto; Nebot, Àngela

    2018-02-01

    The control of the central nervous system (CNS) over the cardiovascular system (CS) has been modeled using different techniques, such as fuzzy inductive reasoning, genetic fuzzy systems, neural networks, and nonlinear autoregressive techniques; the results obtained so far have been significant, but not solid enough to describe the control response of the CNS over the CS. In this research, support vector machines (SVMs) are used to predict the response of a branch of the CNS, specifically, the one that controls an important part of the cardiovascular system. To do this, five models are developed to emulate the output response of five controllers for the same input signal, the carotid sinus blood pressure (CSBP). These controllers regulate parameters such as heart rate, myocardial contractility, peripheral and coronary resistance, and venous tone. The models are trained using a known set of input-output responses for each controller; there is also a set of six input-output signals for testing each proposed model. The input signals are processed using an all-pass filter, and the accuracy of the control models is evaluated using the percentage value of the normalized mean square error (MSE). Experimental results reveal that SVM models achieve a better estimation of the dynamical behavior of the CNS control than other modeling systems. The main results show that the best case is the peripheral resistance controller, with an MSE of 1.20e-4%, while the worst case is the heart rate controller, with an MSE of 1.80e-3%. These novel models show great reliability in fitting the output response of the CNS and can be used as an input to hemodynamic system models in order to predict the behavior of the heart and blood vessels in response to blood pressure variations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Early Colorectal Cancer Detected by Machine Learning Model Using Gender, Age, and Complete Blood Count Data.

    Science.gov (United States)

    Hornbrook, Mark C; Goshen, Ran; Choman, Eran; O'Keeffe-Rosetti, Maureen; Kinar, Yaron; Liles, Elizabeth G; Rust, Kristal C

    2017-10-01

    Machine learning tools identify patients with blood counts indicating a greater likelihood of colorectal cancer and warranting colonoscopy referral. To validate a machine learning colorectal cancer detection model on a US community-based insured adult population. Eligible colorectal cancer cases (439 females, 461 males) with complete blood counts before diagnosis were identified from Kaiser Permanente Northwest Region's Tumor Registry. Control patients (n = 9108) were randomly selected from KPNW's population who had no cancers, received ≥1 blood count, had continuous enrollment from 180 days prior to the blood count through 24 months after the count, and were aged 40-89. For each control, one blood count was randomly selected as the pseudo-colorectal cancer diagnosis date for matching to cases, and assigned a "calendar year" based on the count date. For each calendar year, 18 controls were randomly selected to match the general enrollment's 10-year age groups and lengths of continuous enrollment. Prediction performance was evaluated by area under the curve, specificity, and odds ratios. The area under the receiver operating characteristic curve for detecting colorectal cancer was 0.80 ± 0.01. At 99% specificity, the odds ratio for the association of a high-risk detection score with colorectal cancer was 34.7 (95% CI 28.9-40.4). The detection model had the highest accuracy in identifying right-sided colorectal cancers. ColonFlag® identifies individuals with a tenfold higher risk of undiagnosed colorectal cancer at curable stages (0/I/II), flags colorectal tumors 180-360 days prior to usual clinical diagnosis, and is more accurate at identifying right-sided (compared to left-sided) colorectal cancers.
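
    The evaluation described (AUC plus an odds ratio at a 99%-specificity operating point) can be reproduced on synthetic scores as below; the score distributions are assumptions, not ColonFlag outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Minimal sketch of the reported evaluation: AUC, then the odds ratio for a
# high-risk flag thresholded at 99% specificity. Scores/labels are synthetic.

rng = np.random.default_rng(0)
y = np.r_[np.ones(900), np.zeros(9108)]               # cases vs. controls
scores = np.r_[rng.normal(1.2, 1, 900), rng.normal(0, 1, 9108)]

print("AUC:", round(roc_auc_score(y, scores), 3))

thr = np.quantile(scores[y == 0], 0.99)               # threshold at 99% specificity
flag = scores >= thr
a, b = (flag & (y == 1)).sum(), (flag & (y == 0)).sum()
c, d = (~flag & (y == 1)).sum(), (~flag & (y == 0)).sum()
print("odds ratio:", round((a * d) / (b * c), 1))     # 2x2 table odds ratio
```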

  11. The elements of the mathematical model of education of the spherical working space by manipulatory machines

    Directory of Open Access Journals (Sweden)

    Platonov A.A.

    2018-03-01

    At present, JSCo «Russian Railways» is forming a coordinated policy in the field of ensuring the safety and reliability of the transportation process, one of the topical issues being the removal of undesirable woody and shrubby vegetation in the railway right-of-way. To improve the efficiency of removal of undesired undergrowth, as well as branches and stumps, to reduce the proportion of manual labor, and to ease the working conditions of personnel, the author of the article studied resource-saving, small-scale compact mechanization tools that can be used in hard-to-reach places. These means of mechanization were considered in conjunction with modern vehicles, which can provide them with the necessary energy both on the railway track and away from it. To improve labor productivity and the quality of strip clearing, working bodies are used that are aggregated with vehicles equipped with manipulator units. The article deals with the modeling of the spherical working space of manipulator machines in the railway right-of-way, taking into account the division of the actual volume of the given space into a number of zones. Calculation schemes of the manipulators were compiled for the mathematical description of the motion of their links in the plan and profile of the railway, and a scheme for the dynamic interaction of the rotor working body with tree and shrub vegetation is given. The schemes of formation and limitation of the working space of a manipulator with a rotor working body are given. A conclusion is drawn on the prospects of obtaining, in view of the above elements of the mathematical model, a number of important practical recommendations for an entire system of machines possessing certain common properties.

  12. Prediction of Aerosol Optical Depth in West Asia: Machine Learning Methods versus Numerical Models

    Science.gov (United States)

    Omid Nabavi, Seyed; Haimberger, Leopold; Abbasi, Reyhaneh; Samimi, Cyrus

    2017-04-01

    Dust-prone areas of West Asia are releasing increasingly large amounts of dust particles during the warm months. Because of the lack of ground-based observations in the region, this phenomenon is mainly monitored through remotely sensed aerosol products. The recent development of mesoscale Numerical Models (NMs) has offered an unprecedented opportunity to predict dust emission, and subsequently Aerosol Optical Depth (AOD), at finer spatial and temporal resolutions. Nevertheless, significant uncertainties in input data and in simulations of dust activation and transport limit the performance of numerical models in dust prediction. The presented study aims to evaluate whether machine-learning algorithms (MLAs), which require much less computational expense, can yield the same or even better performance than NMs. Deep Blue (DB) AOD, which is observed by satellites but also predicted by MLAs and NMs, is used for validation. We concentrate our evaluations on the dry plains of Iraq, known as the main origin of the recently intensified dust storms in West Asia. Here we examine the performance of four MLAs: a Linear regression Model (LM), a Support Vector Machine (SVM), an Artificial Neural Network (ANN), and Multivariate Adaptive Regression Splines (MARS). The Weather Research and Forecasting model coupled to Chemistry (WRF-Chem) and the Dust REgional Atmosphere Model (DREAM) are included as NMs. The MACC aerosol re-analysis of the European Centre for Medium-Range Weather Forecasts (ECMWF) is also included, although it has assimilated satellite-based AOD data. Using the Recursive Feature Elimination (RFE) method, nine environmental features, including soil moisture and temperature, NDVI, dust source function, albedo, dust uplift potential, vertical velocity, precipitation, and the 9-month SPEI drought index, are selected for dust (AOD) modeling by the MLAs. During the feature selection process, we noticed that NDVI and SPEI are of the highest importance in the MLAs' predictions. The data set was divided
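
    Recursive Feature Elimination itself is a small loop: repeatedly fit a model, drop the weakest features, and refit. A sketch with an assumed predictor list (the names below paraphrase the features mentioned, padded with made-up extras) and synthetic data:

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# Minimal sketch of Recursive Feature Elimination (RFE) for ranking AOD
# predictors. Feature names and data are illustrative stand-ins.

names = ["soil_moisture", "soil_temp", "NDVI", "source_fn", "albedo",
         "dust_uplift", "w_vertical", "precip", "SPEI9", "rh", "u10", "v10"]
X, y = make_regression(n_samples=500, n_features=len(names),
                       n_informative=6, noise=5.0, random_state=0)

rfe = RFE(LinearRegression(), n_features_to_select=9).fit(X, y)
selected = [n for n, keep in zip(names, rfe.support_) if keep]
print("selected features:", selected)
```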

  13. Development of a model of machine hand eye coordination and program specifications for a topological machine vision system

    Science.gov (United States)

    1972-01-01

    A unified approach to computer vision and manipulation is developed which is called choreographic vision. In the model, objects to be viewed by a projected robot in the Viking missions to Mars are seen as objects to be manipulated within choreographic contexts controlled by a multimoded remote, supervisory control system on Earth. A new theory of context relations is introduced as a basis for choreographic programming languages. A topological vision model is developed for recognizing objects by shape and contour. This model is integrated with a projected vision system consisting of a multiaperture image dissector TV camera and a ranging laser system. System program specifications integrate eye-hand coordination and topological vision functions and an aerospace multiprocessor implementation is described.

  14. Model of Peatland Vegetation Species using HyMap Image and Machine Learning

    Science.gov (United States)

    Dayuf Jusuf, Muhammad; Danoedoro, Projo; Muljo Sukojo, Bangun; Hartono

    2017-12-01

    The species Tumih/Parepat (Combretocarpus rotundatus (Miq.) Danser, family Anisophylleaceae) and Meranti (Shorea belangerang, Shorea teysmanniana Dyer ex Brandis, family Dipterocarpaceae) form the group modeled for vegetation species distribution. These pioneer species are predicted to be indicators of the succession of ecosystem restoration in tropical peatland, whose characteristics are extremely fragile and unique within the endemic biodiversity hotspot of Sundaland. Climate change projections and conservation planning are hot topics of current discussion, as are the analysis of alternative approaches and the development of combinations of species projection modelling algorithms through geospatial information systems technology. The modeling approach addresses the research problem at the vegetation level with a hybrid machine-learning method combining wavelets and artificial neural networks. Field data are used as a reference collection of natural resource field sample objects and for biodiversity assessment. Training and testing of the ANN with 28 iterations achieved a performance value of 0.0867 MSE (smaller than that of the ANN training data, above 50%) and a spectral accuracy of 82.1%. Identification of the sample point positions of the Tumih/Parepat vegetation species using the HyMap image is good enough; at a minimum, the modelling and design of the species distribution can reach the target of this study. A computational validation rate above 90% shows the calculation can be considered reliable.

  15. Evaluation of different machine learning models for predicting and mapping the susceptibility of gully erosion

    Science.gov (United States)

    Rahmati, Omid; Tahmasebipour, Nasser; Haghizadeh, Ali; Pourghasemi, Hamid Reza; Feizizadeh, Bakhtiar

    2017-12-01

    Gully erosion constitutes a serious problem for land degradation in a wide range of environments. The main objective of this research was to compare the performance of seven state-of-the-art machine learning models (SVM with four kernel types, BP-ANN, RF, and BRT) to model the occurrence of gully erosion in the Kashkan-Poldokhtar Watershed, Iran. In the first step, a gully inventory map consisting of 65 gully polygons was prepared through field surveys. Three different sample data sets (S1, S2, and S3), including both positive and negative cells (70% for training and 30% for validation), were randomly prepared to evaluate the robustness of the models. To model the gully erosion susceptibility, 12 geo-environmental factors were selected as predictors. Finally, the goodness-of-fit and prediction skill of the models were evaluated by different criteria, including efficiency percent, kappa coefficient, and the area under the ROC curves (AUC). In terms of accuracy, the RF, RBF-SVM, BRT, and P-SVM models performed excellently both in the degree of fitting and in predictive performance (AUC values well above 0.9), which resulted in accurate predictions. Therefore, these models can be used in other gully erosion studies, as they are capable of rapidly producing accurate and robust gully erosion susceptibility maps (GESMs) for decision-making and soil and water management practices. Furthermore, it was found that performance of RF and RBF-SVM for modelling gully erosion occurrence is quite stable when the learning and validation samples are changed.

  16. On the Fielding of a High Gain, Shock-Ignited Target on the National Ignition Facility in the Near Term

    Energy Technology Data Exchange (ETDEWEB)

    Perkins, L J; Betti, R; Schurtz, G P; Craxton, R S; Dunne, A M; LaFortune, K N; Schmitt, A J; McKenty, P W; Bailey, D S; Lambert, M A; Ribeyre, X; Theobald, W R; Strozzi, D J; Harding, D R; Casner, A; Atzemi, S; Erbert, G V; Andersen, K S; Murakami, M; Comley, A J; Cook, R C; Stephens, R B

    2010-04-12

    Shock ignition, a new concept for igniting thermonuclear fuel, offers the possibility for a near-term (~3-4 years) test of high gain inertial confinement fusion on the National Ignition Facility at less than 1 MJ drive energy and without the need for new laser hardware. In shock ignition, compressed fusion fuel is separately ignited by a strong spherically converging shock and, because capsule implosion velocities are significantly lower than those required for conventional hotspot ignition, fusion energy gains of ~60 may be achievable on NIF at laser drive energies around ~0.5 MJ. Because of the simple all-DT target design, its in-flight robustness, the potential need for only 1D SSD beam smoothing, minimal early-time LPI preheat, and use of present (indirect drive) laser hardware, this target may be easier to field on NIF than a conventional (polar) direct drive hotspot ignition target. Like fast ignition, shock ignition has the potential for high fusion yields at low drive energy, but requires only a single laser with less demanding timing and spatial focusing requirements. Of course, conventional symmetry and stability constraints still apply. In this paper we present initial target performance simulations, delineate the critical issues, and describe the immediate-term R&D program that must be performed in order to test the potential of a high gain shock ignition target on NIF in the near term.

  17. Toward a Progress Indicator for Machine Learning Model Building and Data Mining Algorithm Execution: A Position Paper

    Science.gov (United States)

    Luo, Gang

    2017-01-01

    For user-friendliness, many software systems offer progress indicators for long-duration tasks. A typical progress indicator continuously estimates the remaining task execution time as well as the portion of the task that has been finished. Building a machine learning model often takes a long time, but no existing machine learning software supplies a non-trivial progress indicator. Similarly, running a data mining algorithm often takes a long time, but no existing data mining software provides a non-trivial progress indicator. In this article, we consider the problem of offering progress indicators for machine learning model building and data mining algorithm execution. We discuss the goals and challenges intrinsic to this problem. Then we describe an initial framework for implementing such progress indicators and two potential advanced uses of them, with the goal of inspiring future research on this topic. PMID:29177022
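
    The kind of indicator the article calls for can be approximated for learners with a known iteration count; the sketch below is an illustrative assumption (not the paper's framework), estimating the finished portion and remaining time of a fixed-epoch training loop from the running mean epoch duration.

        # Minimal progress-indicator sketch for a fixed-epoch training loop
        # (illustrative only; the paper's proposed framework is more general).
        import time

        def train_with_progress(train_one_epoch, n_epochs):
            durations = []
            for epoch in range(1, n_epochs + 1):
                start = time.perf_counter()
                train_one_epoch(epoch)
                durations.append(time.perf_counter() - start)
                done = epoch / n_epochs                                  # portion finished
                left = (sum(durations) / len(durations)) * (n_epochs - epoch)
                print(f"epoch {epoch}/{n_epochs}: {done:.0%} done, ~{left:.1f}s remaining")

        train_with_progress(lambda epoch: time.sleep(0.1), n_epochs=5)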

  18. Machine medical ethics

    CERN Document Server

    Pontier, Matthijs

    2015-01-01

    The essays in this book, written by researchers from both humanities and sciences, describe various theoretical and experimental approaches to adding medical ethics to a machine in medical settings. Medical machines are in close proximity with human beings, and getting closer: with patients who are in vulnerable states of health, who have disabilities of various kinds, with the very young or very old, and with medical professionals. In such contexts, machines are undertaking important medical tasks that require emotional sensitivity, knowledge of medical codes, human dignity, and privacy. As machine technology advances, ethical concerns become more urgent: should medical machines be programmed to follow a code of medical ethics? What theory or theories should constrain medical machine conduct? What design features are required? Should machines share responsibility with humans for the ethical consequences of medical actions? How ought clinical relationships involving machines be modeled? Is a capacity for e...

  19. Control System Architectures, Technologies and Concepts for Near Term and Future Human Exploration of Space

    Science.gov (United States)

    Boulanger, Richard; Overland, David

    2004-01-01

    Technologies that facilitate the design and control of complex, hybrid, and resource-constrained systems are examined. This paper focuses on design methodologies and system architectures, not on specific control methods that may be applied to life support subsystems. Honeywell and Boeing have estimated that 60-80% of the effort in developing complex control systems is software development, and only 20-40% is control system development. It has also been shown that large software projects have failure rates as high as 50-65%. Concepts discussed include the Unified Modeling Language (UML) and design patterns, with the goal of creating a self-improving, self-documenting system design process. Successful architectures for control must not only facilitate hardware-to-software integration, but must also reconcile continuously changing software with much less frequently changing hardware. These architectures rely on software modules or components to facilitate change. Architecting such systems for change leverages the interfaces between these modules or components.

  1. The Impact of Near-term Climate Policy Choices on Technology and Emissions Transition Pathways

    Energy Technology Data Exchange (ETDEWEB)

    Eom, Jiyong; Edmonds, James A.; Krey, Volker; Johnson, Nils; Longden, Thomas; Luderer, Gunnar; Riahi, Keywan; Van Vuuren, Detlef

    2015-01-01

    This paper explores the implications of delays associated with currently formulated climate policies (compared to optimal policies) for long-term transition pathways to limit climate forcing to 450 ppm CO2e, on the basis of the AMPERE Work Package 2 model comparison study. The paper highlights the critical importance of the 2030-2050 period for ambitious mitigation strategies, since the most rapid shift to non-greenhouse-gas-emitting technology occurs in this period. In the delayed-response emissions mitigation scenarios, an even faster transition rate is required in this period to compensate for the additional emissions before 2030. Our physical deployment measures indicate that, without CCS, technology deployment rates in the 2030-2050 period would have to become considerably higher. The presence of CCS, however, greatly alleviates the challenges of the transition, particularly under delayed climate policies. The results also highlight the critical role that bioenergy with CO2 capture and storage (BECCS) could play: if this technology is available, transition pathways exceed the emissions budget in the mid-term and remove the excess with BECCS in the long term. Excluding either bioenergy or CCS from the technology portfolio implies that emission reductions need to take place much earlier.

  2. Development of hardware system using temperature and vibration maintenance models integration concepts for conventional machines monitoring: a case study

    Science.gov (United States)

    Adeyeri, Michael Kanisuru; Mpofu, Khumbulani; Kareem, Buliaminu

    2016-12-01

    This article describes the integration of temperature and vibration models for maintenance monitoring of conventional machinery parts whose optimal functioning is affected by abnormal changes in temperature and vibration values, resulting in machine failures and breakdowns, poor product quality, inability to meet customers' demand, and poor inventory control, to mention just a few. The work entails the use of temperature and vibration sensors as monitoring probes programmed in a microcontroller using the C language. The developed hardware consists of an ADXL345 vibration sensor, an AD594/595 temperature sensor with a type-K thermocouple, a microcontroller, a graphic liquid crystal display, a real-time clock, etc. The hardware is divided into two units working cooperatively: one based at the workstation (mainly meant to monitor machine behaviour) and the other at the base station (meant to receive machine information transmitted from the workstation). The resulting hardware was calibrated, tested through model verification, and validated using a least-squares regression analysis of data read from the gearboxes of the extruding and cutting machines used for polyethylene bag production. The results confirmed the correlation existing between time, vibration, and temperature, reflecting the effective formulation of the developed concept.
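
    The least-squares validation step can be illustrated as below: fit straight lines relating gearbox temperature and vibration readings to running time and inspect the correlation. The readings are synthetic stand-ins for the gearbox data, not the authors' measurements.

        # Hedged sketch: ordinary least squares relating temperature and vibration
        # readings to running time; synthetic stand-ins for the gearbox data.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0, 8, 50)                        # running time, hours
        temp = 35 + 2.5 * t + rng.normal(0, 0.8, 50)     # gearbox temperature, deg C
        vib = 0.4 + 0.05 * t + rng.normal(0, 0.03, 50)   # vibration amplitude, g

        A = np.column_stack([t, np.ones_like(t)])        # design matrix [t, 1]
        (k_T, b_T), *_ = np.linalg.lstsq(A, temp, rcond=None)
        (k_V, b_V), *_ = np.linalg.lstsq(A, vib, rcond=None)
        r = np.corrcoef(temp, A @ np.array([k_T, b_T]))[0, 1]
        print(f"temp = {k_T:.2f}*t + {b_T:.1f} (r = {r:.3f}); vib slope = {k_V:.3f} g/h")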

  3. Improved Oil Recovery in Fluvial Dominated Deltaic Reservoirs of Kansas - Near-Term

    International Nuclear Information System (INIS)

    Green, Don W.; McCune, A.D.; Michnick, M.; Reynolds, R.; Walton, A.; Watney, L.; Willhite, G. Paul

    1999-01-01

    The objective of this project is to address waterflood problems of the type found in Morrow sandstone reservoirs in southwestern Kansas and in Cherokee Group reservoirs in southeastern Kansas. Two demonstration sites operated by different independent oil operators are involved in this project. The Stewart Field is located in Finney County, Kansas and is operated by PetroSantander, Inc. The Nelson Lease is located in Allen County, Kansas, in the N.E. Savonburg Field and is operated by James E. Russell Petroleum, Inc. General topics to be addressed are (1) reservoir management and performance evaluation, (2) waterflood optimization, and (3) the demonstration of recovery processes involving off-the-shelf technologies which can be used to enhance waterflood recovery, increase reserves, and reduce the abandonment rate of these reservoir types. In the Stewart Project, the reservoir management portion of the project conducted during Budget Period 1 involved performance evaluation. This included (1) reservoir characterization and the development of a reservoir database, (2) volumetric analysis to evaluate production performance, (3) reservoir modeling, (4) laboratory work, (5) identification of operational problems, (6) identification of unrecovered mobile oil and estimation of recovery factors, and (7) identification of the most efficient and economical recovery process. To accomplish these objectives the initial budget period was subdivided into three major tasks: (1) geological and engineering analysis, (2) laboratory testing, and (3) unitization. Due to the presence of different operators within the field, it was necessary to unitize the field in order to demonstrate a field-wide improved recovery process. This work was completed and the project moved into Budget Period 2.

  4. A Near-Term Concept for Trajectory Based Operations with Air/Ground Data Link Communication

    Science.gov (United States)

    McNally, David; Mueller, Eric; Thipphavong, David; Paielli, Russell; Cheng, Jinn-Hwei; Lee, Chuhan; Sahlman, Scott; Walton, Joe

    2010-01-01

    An operating concept and required system components for trajectory-based operations with air/ground data link for today's en route and transition airspace is proposed. Controllers are fully responsible for separation as they are today, and no new aircraft equipage is required. Trajectory automation computes integrated solutions to problems like metering, weather avoidance, traffic conflicts, and the desire to find and fly more time- and fuel-efficient flight trajectories. A common ground-based system supports all levels of aircraft equipage and performance, including aircraft equipped and not equipped for data link. User interface functions for the radar controller's display make trajectory-based clearance advisories easy to visualize, modify if necessary, and implement. Laboratory simulations (without human operators) were conducted to test integrated operation of selected system components with uncertainty modeling. Results are based on 102 hours of Fort Worth Center traffic recordings involving over 37,000 individual flights. The presence of uncertainty had a marginal effect (5%) on minimum-delay conflict resolution performance, and wind-favorable routes had no effect on detection and resolution metrics. Flight plan amendments and clearances were substantially reduced compared to today's operations. Top-of-descent prediction errors were the largest cause of failure, indicating that better descent predictions are needed to reliably achieve fuel-efficient descent profiles in medium to heavy traffic. Improved conflict detection for climbing flights could enable substantially more continuous climbs to cruise altitude. Unlike today's Conflict Alert, tactical automation must alert when an altitude amendment is entered, but before the aircraft starts the maneuver. In every other failure case tactical automation prevented losses of separation. A real-time prototype trajectory-automation system is running now and could be made ready for operational testing at an en route

  5. Analysis of near-term spent fuel transportation hardware requirements and transportation costs

    International Nuclear Information System (INIS)

    Daling, P.M.; Engel, R.L.

    1983-01-01

    A computer model was developed to quantify the transportation hardware requirements and transportation costs associated with shipping spent fuel in the commercial nuclear fuel cycle in the near future. Results from this study indicate that alternative spent fuel shipping systems (consolidated or disassembled fuel elements and new casks designed for older fuel) will significantly reduce the transportation hardware requirements and costs for shipping spent fuel in the commercial nuclear fuel cycle, if there is no significant change in their operating/handling characteristics. It was also found that a more modest cost reduction results from increasing the fraction of spent fuel shipped by truck from 25% to 50%. Larger transportation cost reductions could be realized with further increases in the truck shipping fraction. Using the given set of assumptions, it was found that the existing spent fuel cask fleet size is generally adequate to perform the needed transportation services until a fuel reprocessing plant (FRP) begins to receive fuel (assumed in 1987). Once the FRP opens, up to 7 additional truck systems and 16 additional rail systems are required at the reference truck shipping fraction of 25%. For the 50% truck shipping fraction, 17 additional truck systems and 9 additional rail systems are required. If consolidated fuel only is shipped (25% by truck), 5 additional rail casks are required and the current truck cask fleet is more than adequate until at least 1995. Changes in assumptions could affect the results. Transportation costs for a federal interim storage program could total about $25M if the FRP begins receiving fuel in 1987 or about $95M if the FRP is delayed until 1989. This is due to an increased utilization of the federal interim storage facility from 350 MTU for the reference scenario to about 750 MTU if reprocessing is delayed by two years.

  6. Present Status and Near Term Activities for the ExoMars Trace Gas Orbiter.

    Science.gov (United States)

    Svedhem, H.; Vago, J. L.

    2017-12-01

    The ExoMars 2016 mission was launched on a Proton rocket from Baikonur, Kazakhstan, on 14 March 2016 and arrived at Mars on 19 October 2016. The spacecraft is now performing aerobraking to reduce its orbital period from the initial post-insertion value of one sol to the final science orbit with a 2-hour period. The orbital inclination will be 74 degrees. During the aerobraking, a wealth of data has been acquired on the state of the atmosphere along the tracks between 140 km and the lowest altitude at about 105 km. These data are now being analysed and compared with existing models. On average, TGO measures a lower atmospheric density than predicted, but the numbers lie within the expected variability. ExoMars is a joint programme of the European Space Agency (ESA) and Roscosmos, Russia. It consists of the ExoMars 2016 mission, with the Trace Gas Orbiter (TGO) and the Entry, Descent and Landing Demonstrator (EDM), named Schiaparelli, and the ExoMars 2020 mission, which carries a lander and a rover. The TGO scientific payload consists of four instruments: ACS and NOMAD, both infrared spectrometers for atmospheric measurements in solar occultation mode and in nadir mode; CaSSIS, a multichannel camera with stereo imaging capability; and FREND, an epithermal neutron detector to search for subsurface hydrogen (as a proxy for water ice and hydrated minerals). The launch mass of the TGO was 3700 kg, including fuel. In addition to its scientific measurements, TGO will act as a relay orbiter for NASA's landers on Mars and, from 2021, for the ESA-Roscosmos rover and surface station.

  7. Advanced Amine Solvent Formulations and Process Integration for Near-Term CO2 Capture Success

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, Kevin S.; Searcy, Katherine; Rochelle, Gary T.; Ziaii, Sepideh; Schubert, Craig

    2007-06-28

    This Phase I SBIR project investigated the economic and technical feasibility of advanced amine scrubbing systems for post-combustion CO2 capture at coal-fired power plants. Numerous combinations of advanced solvent formulations and process configurations were screened for energy requirements, and three cases were selected for detailed analysis: a monoethanolamine (MEA) base case and two “advanced” cases: an MEA/Piperazine (PZ) case, and a methyldiethanolamine (MDEA) / PZ case. The MEA/PZ and MDEA/PZ cases employed an advanced “double matrix” stripper configuration. The basis for calculations was a model plant with a gross capacity of 500 MWe. Results indicated that CO2 capture increased the base cost of electricity from 5 cents/kWh to 10.7 c/kWh for the MEA base case, 10.1 c/kWh for the MEA / PZ double matrix, and 9.7 c/kWh for the MDEA / PZ double matrix. The corresponding cost per metric tonne CO2 avoided was 67.20 $/tonne CO2, 60.19 $/tonne CO2, and 55.05 $/tonne CO2, respectively. Derated capacities, including base plant auxiliary load of 29 MWe, were 339 MWe for the base case, 356 MWe for the MEA/PZ double matrix, and 378 MWe for the MDEA / PZ double matrix. When compared to the base case, systems employing advanced solvent formulations and process configurations were estimated to reduce reboiler steam requirements by 20 to 44%, to reduce derating due to CO2 capture by 13 to 30%, and to reduce the cost of CO2 avoided by 10 to 18%. These results demonstrate the potential for significant improvements in the overall economics of CO2 capture via advanced solvent formulations and process configurations.
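
    For reference, the cost per metric tonne of CO2 avoided quoted above follows the standard definition below; this is a reconstruction from the usual convention (the record itself does not state the formula), with COE the cost of electricity in $/MWh and E the CO2 emission intensity in tonne/MWh:

        \$/\text{tonne CO}_2\ \text{avoided} \;=\;
        \frac{\mathrm{COE}_{\text{capture}} - \mathrm{COE}_{\text{ref}}}
             {E_{\text{ref}} - E_{\text{capture}}}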

  8. Modeling and Forecast Biological Oxygen Demand (BOD) using Combination Support Vector Machine with Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Abazar Solgi

    2017-06-01

    Introduction: Chemical pollution of surface water is a serious threat to water quality, all the more so when the surface water is used for drinking supply. One of the key parameters used to measure water pollution is the biological oxygen demand (BOD). Because many variables affect water quality parameters and the relationships among them are complex and nonlinear, conventional methods cannot solve the problem of water resources quality management. Artificial intelligence methods have long been used to predict nonlinear time series, with good reported performance. More recently, the wavelet transform, a signal processing method, has shown good performance in hydrological modeling and is now widely used. Extensive research worldwide has applied Artificial Neural Network and Adaptive Neuro-Fuzzy Inference System models to forecast BOD, but the support vector machine (SVM) has not yet been studied as extensively. This study therefore evaluated the ability of an SVM to predict monthly BOD from the available data: temperature, river flow, dissolved oxygen (DO), and BOD. Materials and Methods: The SVM was introduced in 1992 by Vapnik, a Russian mathematician, and is built on statistical learning theory. In recent years the SVM has received considerable attention and has given good results in applications such as handwriting and face recognition. The linear SVM, the simplest type, consists of a hyperplane that separates the positive and negative examples with maximum distance; the suitable separator has the maximum distance from each of the two data sets. For a machine whose output labels the groups (here -1 and +1), the aim is to obtain the maximum distance between the categories, which is interpreted as a maximum margin. The wavelet transform is a mathematical method whose main idea was
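
    A hedged sketch of the wavelet-SVM combination described here: denoise the monthly BOD series with a discrete wavelet transform, then regress the next value on lagged smoothed values with an SVR. The series, the wavelet ('db4'), the threshold, and the lag count are assumptions for illustration, not the study's configuration.

        # Hedged WSVM sketch: wavelet-denoise the BOD series, then SVR on lags.
        import numpy as np
        import pywt
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        bod = np.sin(np.linspace(0, 12 * np.pi, 240)) + 0.3 * rng.normal(size=240)

        # Multilevel DWT: soft-threshold the detail coefficients, then invert.
        coeffs = pywt.wavedec(bod, "db4", level=3)
        coeffs[1:] = [pywt.threshold(c, value=0.2, mode="soft") for c in coeffs[1:]]
        smooth = pywt.waverec(coeffs, "db4")[: len(bod)]

        # Predict BOD(t) from the three previous smoothed values.
        X = np.column_stack([smooth[i : len(smooth) - 3 + i] for i in range(3)])
        y = bod[3:]
        svr = SVR(kernel="rbf", C=10.0).fit(X[:200], y[:200])
        rmse = np.sqrt(np.mean((svr.predict(X[200:]) - y[200:]) ** 2))
        print("test RMSE:", round(float(rmse), 3))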

  9. Including crystal structure attributes in machine learning models of formation energies via Voronoi tessellations

    Science.gov (United States)

    Ward, Logan; Liu, Ruoqian; Krishna, Amar; Hegde, Vinay I.; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris

    2017-07-01

    While high-throughput density functional theory (DFT) has become a prevalent tool for materials discovery, it is limited by its relatively large computational cost. In this paper, we explore using DFT data from high-throughput calculations to create faster surrogate models with machine learning (ML) that can be used to guide new searches. Our method works by using decision tree models to map DFT-calculated formation enthalpies to a set of attributes consisting of two distinct types: (i) composition-dependent attributes of elemental properties (as have been used in previous ML models of DFT formation energies), combined with (ii) attributes derived from the Voronoi tessellation of the compound's crystal structure. The ML models created using this method have half the cross-validation error of, and similar training and evaluation speeds to, models created with the Coulomb matrix and partial radial distribution function methods. For a dataset of 435,000 formation energies taken from the Open Quantum Materials Database (OQMD), our model achieves a mean absolute error of 80 meV/atom in cross-validation, which is lower than the approximate error between DFT-computed and experimentally measured formation enthalpies and below 15% of the mean absolute deviation of the training set. We also demonstrate that our method can accurately estimate the formation energy of materials outside of the training set and can be used to identify materials with especially large formation enthalpies. We propose that our models can be used to accelerate the discovery of new materials by identifying the most promising materials to study with DFT at little additional computational cost.
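
    In the spirit of the surrogate described above, the sketch below maps attribute vectors to formation energies with a decision-tree ensemble and reports a cross-validated MAE; the random features are synthetic stand-ins for the composition- and Voronoi-derived attributes, not the paper's feature set.

        # Hedged surrogate-model sketch: decision-tree ensemble mapping attribute
        # vectors to formation energies, scored by cross-validated MAE.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 40))                              # attribute vectors
        e_form = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=2000)  # synthetic eV/atom

        model = RandomForestRegressor(n_estimators=200, random_state=0)
        mae = -cross_val_score(model, X, e_form, cv=5,
                               scoring="neg_mean_absolute_error").mean()
        print(f"cross-validated MAE: {mae:.3f} eV/atom")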

  10. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust with regard to random noise. A solution method is then developed to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real-life cases.
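
    For reference, the standard LS-SVM training problem that the paper modifies can be written as the first line below; the second line is an illustrative reconstruction, from the abstract alone, of an objective penalizing both the mean and the variance of the modeling error without constraining the mean to zero (not necessarily the authors' exact formulation):

        % Standard LS-SVM (equality-constrained least squares):
        \min_{w,\,b,\,e}\ \tfrac{1}{2}\|w\|^2 + \tfrac{\gamma}{2}\sum_{i=1}^{N} e_i^2
        \quad \text{s.t.}\quad y_i = w^\top \varphi(x_i) + b + e_i,\quad i=1,\dots,N

        % Illustrative robust variant, with \bar{e} = \tfrac{1}{N}\sum_i e_i left free:
        \min_{w,\,b,\,e}\ \tfrac{1}{2}\|w\|^2 + \tfrac{\gamma_1}{2}\,\bar{e}^{\,2}
        + \tfrac{\gamma_2}{2}\sum_{i=1}^{N}\left(e_i - \bar{e}\right)^2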

  11. Machine learning and hurdle models for improving regional predictions of stream water acid neutralizing capacity

    Science.gov (United States)

    Povak, Nicholas A.; Hessburg, Paul F.; Reynolds, Keith M.; Sullivan, Timothy J.; McDonnell, Todd C.; Salter, R. Brion

    2013-06-01

    In many industrialized regions of the world, atmospherically deposited sulfur derived from industrial, nonpoint air pollution sources reduces stream water quality and results in acidic conditions that threaten aquatic resources. Accurate maps of predicted stream water acidity are an essential aid to managers who must identify acid-sensitive streams, potentially affected biota, and create resource protection strategies. In this study, we developed correlative models to predict the acid neutralizing capacity (ANC) of streams across the southern Appalachian Mountain region, USA. Models were developed using stream water chemistry data from 933 sampled locations and continuous maps of pertinent environmental and climatic predictors. Environmental predictors were averaged across the upslope contributing area for each sampled stream location and submitted to both statistical and machine-learning regression models. Predictor variables represented key aspects of the contributing geology, soils, climate, topography, and acidic deposition. To reduce model error rates, we employed hurdle modeling to screen out well-buffered sites and predict continuous ANC for the remainder of the stream network. Models predicted acid-sensitive streams in forested watersheds with small contributing areas, siliceous lithologies, cool and moist environments, low clay content soils, and moderate or higher dry sulfur deposition. Our results confirmed findings from other studies and further identified several influential climatic variables and variable interactions. Model predictions indicated that one quarter of the total stream network was sensitive to additional sulfur inputs (i.e., ANC < 100 µeq L⁻¹), while <10% displayed much lower ANC (<50 µeq L⁻¹). These methods may be readily adapted in other regions to assess stream water quality and potential biotic sensitivity to acidic inputs.
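
    A hedged sketch of the two-stage hurdle idea described above: a classifier first screens out well-buffered sites, and a regressor then predicts continuous ANC only below the hurdle. The threshold, features, and synthetic data are assumptions for illustration, not the study's models.

        # Hedged two-stage hurdle sketch: classify well-buffered sites first,
        # then regress continuous ANC for the rest. Synthetic data, assumed threshold.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(933, 10))                        # upslope-averaged predictors
        anc = 150 + 80 * X[:, 0] + 10 * rng.normal(size=933)  # synthetic ANC, ueq/L
        well_buffered = (anc > 300).astype(int)               # hurdle label

        clf = RandomForestClassifier(random_state=0).fit(X, well_buffered)
        low = well_buffered == 0
        reg = RandomForestRegressor(random_state=0).fit(X[low], anc[low])

        def predict_anc(x_row):
            x_row = x_row.reshape(1, -1)
            if clf.predict(x_row)[0] == 1:
                return "well-buffered (screened out by the hurdle)"
            return float(reg.predict(x_row)[0])

        print(predict_anc(X[0]))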

  12. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as the radial basis function (RBF), back propagation (BP) neural network, and support vector machine (SVM) can be used for gas dispersion prediction. However, the prediction results from these network models with too many inputs based on original monitoring parameters are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with an MLA, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best, and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identifying the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, or network models based on original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method for predicting contaminant gas dispersion, as well as a good forward model for the emission source parameter identification problem.
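
    The hybrid idea can be sketched as below: instead of feeding raw monitoring parameters to the network, the classic Gaussian plume prediction is used as the input feature of an ML regressor (here an SVR). The plume formula is the standard textbook form with ground reflection; the dispersion-coefficient fits and data are assumptions for illustration.

        # Illustrative Gaussian-MLA sketch; synthetic data, assumed coefficients.
        import numpy as np
        from sklearn.svm import SVR

        def gaussian_plume(q, u, y, z, h, sy, sz):
            """Ground-reflected Gaussian plume concentration for a continuous
            point source: q source rate, u wind speed, h effective stack height,
            sy/sz lateral and vertical dispersion coefficients."""
            return (q / (2 * np.pi * u * sy * sz) * np.exp(-y**2 / (2 * sy**2))
                    * (np.exp(-(z - h)**2 / (2 * sz**2))
                       + np.exp(-(z + h)**2 / (2 * sz**2))))

        rng = np.random.default_rng(0)
        n = 500
        x = rng.uniform(50, 500, n)                      # downwind distance, m
        y = rng.uniform(-50, 50, n)                      # crosswind offset, m
        sy, sz = 0.08 * x, 0.06 * x                      # assumed stability-class fits
        c_gauss = gaussian_plume(1.0, 3.0, y, 0.0, 10.0, sy, sz)
        c_obs = c_gauss * (1 + 0.2 * rng.normal(size=n))  # synthetic "measurements"

        # Gaussian-SVM: regress observed concentration on the Gaussian prediction.
        model = SVR(kernel="rbf").fit(c_gauss.reshape(-1, 1), c_obs)
        print(model.predict(c_gauss[:3].reshape(-1, 1)))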

  13. Modeling workflow to design machine translation applications for public health practice.

    Science.gov (United States)

    Turner, Anne M; Brownstein, Megumu K; Cole, Kate; Karasz, Hilary; Kirchhoff, Katrin

    2015-02-01

    The objective of this study was to provide a detailed understanding of the information workflow processes related to translating health promotion materials for limited-English-proficiency individuals, in order to inform the design of context-driven machine translation (MT) tools for public health (PH). We applied a cognitive work analysis framework to investigate the translation information workflow processes of two large health departments in Washington State. Researchers conducted interviews, performed a task analysis, and validated results with PH professionals to model translation workflow and identify functional requirements for a translation system for PH. The study resulted in a detailed description of work related to translation of PH materials, an information workflow diagram, and a description of attitudes towards MT technology. We identified a number of themes that hold design implications for incorporating MT in PH translation practice, and a PH translation tool prototype was designed based on these findings. This study underscores the importance of understanding the work context and information workflow for which systems will be designed. Based on the themes and translation information workflow processes, we identified key design guidelines for incorporating MT into PH translation work. Primary among these is that MT should be followed by human review for translations to be of high quality and for the technology to be adopted into practice. The time and costs of creating multilingual health promotion materials are barriers to translation. PH personnel were interested in MT's potential to improve access to low-cost translated PH materials, but expressed concerns about ensuring quality. We outline design considerations and a potential machine translation tool to best fit MT systems into PH practice. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. A survey of supervised machine learning models for mobile-phone based pathogen identification and classification

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Tseng, Derek; Benien, Parul; Ozcan, Aydogan

    2017-03-01

    Giardia lamblia causes a disease known as giardiasis, which results in diarrhea, abdominal cramps, and bloating. Although conventional pathogen detection methods used in water analysis laboratories offer high sensitivity and specificity, they are time consuming and need experts to operate bulky equipment and analyze the samples. Here we present a field-portable and cost-effective smartphone-based waterborne pathogen detection platform that can automatically classify Giardia cysts using machine learning. Our platform enables the detection and quantification of Giardia cysts in one hour, including sample collection, labeling, filtration, and automated counting steps. We evaluated the performance of three prototypes using Giardia-spiked water samples from different sources (e.g., reagent-grade, tap, non-potable, and pond water samples). We populated a training database with >30,000 cysts and estimated our detection sensitivity and specificity using 20 different classifier models, including decision trees, nearest neighbor classifiers, support vector machines (SVMs), and ensemble classifiers, and compared their speed of training and classification, as well as predicted accuracies. Among them, cubic SVM, medium Gaussian SVM, and bagged trees were the most promising classifier types, with accuracies of 94.1%, 94.2%, and 95%, respectively; we selected the latter as our preferred classifier for the detection and enumeration of Giardia cysts imaged using our mobile-phone fluorescence microscope. Without the need for any experts or microbiologists, this field-portable pathogen detection platform can present a useful tool for water quality monitoring in resource-limited settings.

  15. Improving Simulations of Extreme Flows by Coupling a Physically-based Hydrologic Model with a Machine Learning Model

    Science.gov (United States)

    Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.

    2017-12-01

    With the large number of hydrologic models presently available along with the global weather and geographic datasets, streamflows of almost any river in the world can be easily modeled. And if a reasonable amount of observed data from that river is available, then simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is obviously an important task due to floods and droughts being closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as an input to a machine learning model to try and better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967
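
    A hedged sketch of the proposed offline coupling: a small neural network learns a correction from the process model's simulated flows to the observed flows, improving the extremes. The "SWAT output" below is a synthetic stand-in, and the network size is an assumption, not the study's configuration.

        # Hedged offline-coupling sketch: an MLP learns a correction from the
        # process model's simulated flow to "observed" flow. Synthetic data only.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        sim = rng.gamma(2.0, 500.0, size=2000)                         # simulated flow, m3/s
        obs = sim * (1.2 - 0.0001 * sim) + 30 * rng.normal(size=2000)  # biased extremes

        feat = (sim / 1000.0).reshape(-1, 1)                     # scale inputs
        ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
        ann.fit(feat[:1500], obs[:1500] / 1000.0)                # scale targets too
        corrected = 1000.0 * ann.predict(feat[1500:])

        print("raw MAE:          ", np.mean(np.abs(sim[1500:] - obs[1500:])))
        print("ANN-corrected MAE:", np.mean(np.abs(corrected - obs[1500:])))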

  16. Prediction of recombinant protein overexpression in Escherichia coli using a machine learning based model (RPOLP).

    Science.gov (United States)

    Habibi, Narjeskhatoon; Norouzi, Alireza; Mohd Hashim, Siti Z; Shamsir, Mohd Shahir; Samian, Razip

    2015-11-01

    Recombinant protein overexpression, an important biotechnological process, is governed by complex and mostly unknown biological rules; an intelligent algorithm is therefore needed to avoid resource-intensive, lab-based trial-and-error experiments for determining the expression level of a recombinant protein. The purpose of this study is to propose, for the first time in the literature, a predictive model that estimates the level of recombinant protein overexpression using a machine learning approach based on the sequence, the expression vector, and the expression host. The expression host was confined to Escherichia coli, the most popular bacterial host for overexpressing recombinant proteins. To make the problem tractable, the overexpression level was categorized as low, medium, or high. A set of features likely to affect the overexpression level was generated based on known facts (e.g. gene length) and knowledge gathered from the related literature. A representative subset of these features was then determined using feature selection techniques. Finally, a predictive model was developed using a random forest classifier, which was able to adequately classify the multi-class, imbalanced, small dataset constructed. The results showed that the predictive model provided a promising accuracy of 80% on average in estimating the overexpression level of a recombinant protein. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Mathematical model of the crystallizing blank's thermal state at the horizontal continuous casting machine

    Directory of Open Access Journals (Sweden)

    Kryukov Igor Yu.

    2017-01-01

    This article is devoted to the development of a mathematical model that describes the thermal state and crystallization process of a blank of rectangular cross-section during its continuous extraction from a horizontal continuous casting machine (HCCM). The developed model accounts for the heat-transfer properties of the non-ferrous metal being poured, its temperature on entry to the casting mold, and the cooling conditions of the blank in the carbon mold in the presence of a copper water cooler. In addition, the asymmetry of heat exchange between the head and drag sides of the blank in the mold, arising from fluid contraction and from features of the horizontal casting mold, is considered. The developed mathematical model makes it possible to determine, as functions of time: the temperature pattern of the crystallizing blank under different operating regimes of the HCCM; the boundaries of the solid and liquid two-phase fields; and the variation of the blank's thickness due to shrinkage of the ingot material.

  18. An unsupervised machine learning model for discovering latent infectious diseases using social media data.

    Science.gov (United States)

    Lim, Sunghoon; Tucker, Conrad S; Kumara, Soundar

    2017-02-01

    The authors of this work propose an unsupervised machine learning model that has the ability to identify real-world latent infectious diseases by mining social media data. In this study, a latent infectious disease is defined as a communicable disease that has not yet been formalized by national public health institutes and explicitly communicated to the general public. Most existing approaches to modeling infectious-disease-related knowledge discovery through social media networks are top-down approaches that are based on already known information, such as the names of diseases and their symptoms. In existing top-down approaches, necessary but unknown information, such as disease names and symptoms, is mostly unidentified in social media data until national public health institutes have formalized that disease. Most of the formalizing processes for latent infectious diseases are time consuming. Therefore, this study presents a bottom-up approach for latent infectious disease discovery in a given location without prior information, such as disease names and related symptoms. Social media messages with user and temporal information are extracted during the data preprocessing stage. An unsupervised sentiment analysis model is then presented. Users' expressions about symptoms, body parts, and pain locations are also identified from social media data. Then, symptom weighting vectors for each individual and time period are created, based on their sentiment and social media expressions. Finally, latent-infectious-disease-related information is retrieved from individuals' symptom weighting vectors. Twitter data from August 2012 to May 2013 are used to validate this study. Real electronic medical records for 104 individuals, who were diagnosed with influenza in the same period, serve as ground-truth validation. The results are promising, with the highest precision, recall, and F1 score values of 0.773, 0.680, and 0.724, respectively. This work uses individuals

  19. Non-parametric temporal modeling of the hemodynamic response function via a liquid state machine.

    Science.gov (United States)

    Avesani, Paolo; Hazan, Hananel; Koilis, Ester; Manevitz, Larry M; Sona, Diego

    2015-10-01

    Standard methods for the analysis of functional MRI data strongly rely on prior implicit and explicit hypotheses made to simplify the analysis. In this work the attention is focused on two such commonly accepted hypotheses: (i) the hemodynamic response function (HRF) to be searched in the BOLD signal can be described by a specific parametric model, e.g., double-gamma; (ii) the effect of stimuli on the signal is taken to be linearly additive. While these assumptions have been empirically proven to generate high sensitivity for statistical methods, they also limit the identification of relevant voxels to what is already postulated in the signal, thus not allowing the discovery of unknown correlates in the data due to the presence of unexpected hemodynamics. This paper tries to overcome these limitations by proposing a method wherein the HRF is learned directly from data rather than induced from its basic form assumed in advance. This approach produces a set of voxel-wise models of the HRF and, as a result, relevant voxels can be filtered according to the accuracy of their prediction in a machine learning framework. The approach is instantiated using a temporal architecture based on the paradigm of Reservoir Computing, wherein a Liquid State Machine is combined with a decoding feed-forward neural network. This splits the modeling into two parts: first, a representation of the complex temporal reactivity of the hemodynamic response is determined by a universal global "reservoir" which is essentially temporal; second, an interpretation of the encoded representation is determined by a standard feed-forward neural network, which is trained on the data. Thus the reservoir models the temporal state of information during and following temporal stimuli in a feedback system, while the neural network "translates" this data to fit the specific HRF response as given, e.g., by BOLD signal measurements in fMRI. An empirical analysis on synthetic datasets shows that the learning process can
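
    The Reservoir Computing split described above can be sketched as follows. Note this is an echo-state-style illustration with rate neurons (a true Liquid State Machine uses spiking units), and the stimulus train, toy response kernel, and network sizes are assumptions for illustration, not the paper's architecture.

        # Echo-state-style reservoir sketch (rate neurons; a true LSM is spiking).
        # Only the linear readout is trained; the recurrent reservoir stays fixed.
        import numpy as np

        rng = np.random.default_rng(0)
        n_res, T = 100, 500
        W_in = rng.normal(scale=0.5, size=n_res)
        W = rng.normal(size=(n_res, n_res))
        W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1

        u = (rng.random(T) < 0.05).astype(float)           # sparse stimulus train
        hrf = np.exp(-np.arange(30) / 8.0)                 # toy response kernel
        target = np.convolve(u, hrf)[:T]                   # linearly additive response

        x = np.zeros(n_res)
        states = np.empty((T, n_res))
        for t in range(T):                                 # run the fixed reservoir
            x = np.tanh(W @ x + W_in * u[t])
            states[t] = x

        # Ridge-regression readout mapping reservoir states to the response.
        W_out = np.linalg.solve(states.T @ states + 1e-2 * np.eye(n_res),
                                states.T @ target)
        print("training MSE:", float(np.mean((states @ W_out - target) ** 2)))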